Full Name | HY |
Organization | UTAR |
Job Title | Lecturer |
Forum Replies Created
According to the list on the NVIDIA website, https://developer.nvidia.com/cuda-gpus, the GTX 460 has compute capability 2.1.
Referring to http://en.wikipedia.org/wiki/CUDA, it seems to suggest that compute capability 2.1 devices have only 48 cores per multiprocessor for integer and floating-point arithmetic operations. (Correct me if I am wrong.)
I wonder whether the disappointing result from the GTX 460 is due to its low compute capability, its limited graphics memory, or the nature of the simulation being performed.
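For reference, compute capability fixes the number of CUDA cores per streaming multiprocessor (SM), not the card's total core count. A small sketch of the per-SM figures NVIDIA publishes for these architectures (the GTX 460 line assumes its standard 7-SM configuration):

```python
# CUDA cores per streaming multiprocessor (SM), by compute capability,
# as published in NVIDIA's architecture documentation.
CORES_PER_SM = {
    (2, 0): 32,   # Fermi GF100/GF110
    (2, 1): 48,   # Fermi GF104/GF106/GF108 (e.g. GTX 460)
    (3, 0): 192,  # Kepler GK104/GK107
    (3, 5): 192,  # Kepler GK110/GK208
}

def total_cuda_cores(compute_capability, num_sms):
    """Total CUDA cores = cores per SM x number of SMs."""
    return CORES_PER_SM[compute_capability] * num_sms

# A standard GTX 460 (compute capability 2.1) ships with 7 SMs:
print(total_cuda_cores((2, 1), 7))  # 336 cores in total
```

So "48 cores" is a per-multiprocessor figure; the GTX 460 as a whole has 336 CUDA cores.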
Thanks to Damian and Ravil. I will try to get funding for a Tesla-based workstation for the office, but at home I can only settle for a mid-range GeForce.
It would be a great help if anyone could rank the importance of the following for Optisystem simulation:
– Compute capability
– Memory size
– Memory bus width
– Memory type (GDDR3 vs GDDR5)
Hi Ravil, I have yet to decide on which GPU to buy. I am currently considering the following:
GeForce 210 (compute capability 2.0)
GeForce GT 610 (compute capability 2.1)
GeForce GT 630 (compute capability 2.1)
GeForce GT 640 GDDR3 (compute capability 2.1)
GeForce GT 650 (compute capability 3.0)
GeForce GT 640 GDDR5 (compute capability 3.5)
As you can see, these GPUs have different compute capabilities. Moreover, they come with different memory bus widths (64 vs 128 bits) and memory sizes (1 GB, 2 GB, and 4 GB).
I would like to know which factor (compute capability, memory bus width, or memory size) is the most important for speeding up FFT/IFFT operations in Optisystem.
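One way to reason about the question above: an FFT performs very few arithmetic operations per byte of data it touches, so on most GPUs it is limited by memory bandwidth rather than by core count. A sketch using the conventional ~5·N·log2(N) flop estimate for a complex N-point FFT, and assuming (as a best case) that the transform reads and writes the whole array once:

```python
import math

def fft_arithmetic_intensity(n, bytes_per_sample=8):
    """Rough flops-per-byte for an N-point single-precision complex FFT.

    Uses the conventional 5*N*log2(N) flop count and assumes one full
    read and one full write of the array (8 bytes per complex sample).
    Real implementations make multiple passes, so this is an upper bound.
    """
    flops = 5 * n * math.log2(n)
    bytes_moved = 2 * n * bytes_per_sample  # one read pass + one write pass
    return flops / bytes_moved

# Even for a large 2^20-point transform the intensity stays low:
print(fft_arithmetic_intensity(2**20))  # 6.25 flops per byte
```

At only a few flops per byte, the FFT saturates memory bandwidth long before it saturates the arithmetic units, which suggests that bus width and GDDR5 memory would do more for Optisystem's FFT/IFFT blocks than a higher compute capability, all else being equal.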