In recent years, GPUs (Graphics Processing Units) have garnered wide attention in data science and AI as a cost-effective way to speed up the training of ML models with huge data sets and parameter counts, compared to CPUs (Central Processing Units). They are already used at large commercial scale. For example, the search engine Baidu uses GPUs to accelerate visual search, speech recognition, click-through-rate estimation and more. In this blog, we cover how to choose the right processing unit for your deep-learning models.
Understand the basics
In recent times, GPUs have made headlines for different reasons. For example, the Stanford AI Lab achieved the world's fastest Artificial Intelligence (AI) training performance, beating a Google data centre. While this comparison is fair on some levels, comparing CPUs and GPUs is a bit like comparing apples to oranges – they serve different functions. So let us start with the basic definitions and understand the hardware in question at the core level. The table below illustrates the major differences in function and technology.
| CPU | GPU |
| --- | --- |
| Central Processing Unit | Graphics Processing Unit |
| Few powerful cores. | Many weaker cores. |
| Emphasises low latency. | Emphasises high throughput. |
| Suitable for serial processing. | Suitable for parallel processing. |
| Can do a handful of operations at once. | Can do thousands of operations at once. |
| Needs more memory than a GPU. | Needs less memory than a CPU. |
| Faster at answering complex questions. | Faster at answering simple questions. |
| Designed to maximise the performance of the entire job, processing multiple tasks at once. | Designed to maximise the performance of one job, processing tasks one at a time. |
| Runs host code. | Runs CUDA code (NVIDIA GPUs). |
For deep-learning models in particular, we mainly want to compare parallel compute capability. The image below compares how the two processing units perform in FLOPS per clock cycle.
Undoubtedly, GPU performance has skyrocketed over the last few years compared to CPU performance. To understand why, consider this – a 20-core CPU can process only 20 parallel streams of data at a time. The best CPUs on the market today have 96 cores. Meanwhile, a single server can host 8 GPUs with ~5,000 cores per GPU, for a total of up to 40,000 GPU cores! That said, the architectures of the two kinds of hardware are now closer than ever, and the main remaining bottleneck for CPUs is memory-bus optimisation.
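The core-count gap above can be turned into a rough back-of-the-envelope throughput estimate: theoretical peak FLOPS is cores × clock speed × FLOPs issued per cycle. The clock speeds and FLOPs-per-cycle figures below are illustrative assumptions, not benchmarks of any specific chip.

```python
# Back-of-the-envelope peak-throughput estimate.
# All hardware figures here are hypothetical, for illustration only.

def peak_gflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak throughput = cores * clock * FLOPs per cycle."""
    return cores * clock_ghz * flops_per_cycle

# Hypothetical 20-core CPU at 3.0 GHz, 32 FLOPs/cycle (wide SIMD + FMA)
cpu = peak_gflops(cores=20, clock_ghz=3.0, flops_per_cycle=32)

# Hypothetical GPU with 5,000 cores at 1.5 GHz, 2 FLOPs/cycle (FMA)
gpu = peak_gflops(cores=5000, clock_ghz=1.5, flops_per_cycle=2)

print(f"CPU peak: {cpu:.0f} GFLOPS")          # 1920 GFLOPS
print(f"GPU peak: {gpu:.0f} GFLOPS")          # 15000 GFLOPS
print(f"GPU/CPU ratio: {gpu / cpu:.1f}x")     # 7.8x
```

Even with generous SIMD assumptions for the CPU, the GPU's sheer core count dominates – which is why the gap only widens for workloads that actually keep all those cores busy.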
What role do CPUs and GPUs play in deep learning?
As mentioned before, we cannot offload all of the CPU workloads to GPUs. The table below describes which deep-learning tasks are better performed on CPU and GPU.
| CPU | GPU |
| --- | --- |
| High-definition, 3D, and non-image-based deep learning on language, text, and time-series data. | Training with several neural-network layers or on massive sets of certain data, such as 2D images. |
| Suitable for sequential algorithms such as Markov models and support vector machines. | More suitable for matrix multiplication with many parameters. |
| Suitable for memory-intensive applications. | Limited by on-board memory – the NVIDIA Tesla V100, currently among the best GPUs on the market, has 32 GB. If a computation does not fit in GPU memory, operations slow down significantly. |
For a detailed analysis of CPU vs GPU performance in TensorFlow, you can refer to various research papers. One such paper compares the following neural-network workloads – AlexNet, text classification and MNIST digit classification.
Some questions to ask before making the choice
For commercial applications, some factors to consider are:
- As illustrated in the image below, there is no doubt that GPUs have much higher memory bandwidth than CPUs. But transferring data from CPU to GPU takes time too. One important question to ask is: is the added overhead of moving data to the GPU worth the effort?
- Number of computations: how large is my data set? In general, the larger your data set, the more you should lean towards using a GPU.
- Optimisation: workloads that are hard to parallelise – long chains of sequential operations, for example – gain little from a GPU, and optimisation is much faster on the CPU. Before embarking on the journey, ask – does the amount of coding required far exceed the pay-off?
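The data-transfer question in the first bullet can be sketched as a simple break-even check: the GPU only wins if its compute time plus the host-to-device transfer time beats the CPU's compute time. The PCIe bandwidth and speed-up figures below are assumptions for illustration.

```python
# Sketch of the "is the transfer overhead worth it?" question.
# Bandwidth (16 GB/s, roughly PCIe 3.0 x16) and the speed-up factor
# are illustrative assumptions, not measured values.

def gpu_worth_it(cpu_compute_s, gpu_speedup, data_bytes, pcie_gb_s=16.0):
    """Return True if GPU compute time plus host-to-device transfer
    still beats running the whole job on the CPU."""
    transfer_s = data_bytes / (pcie_gb_s * 1e9)
    gpu_total_s = cpu_compute_s / gpu_speedup + transfer_s
    return gpu_total_s < cpu_compute_s

# A 10-second CPU job, 10x GPU speed-up, 1 GB of input data:
print(gpu_worth_it(10.0, 10.0, 1e9))   # True  - transfer cost is negligible
# A 10-millisecond job with the same 1 GB transfer is not worth moving:
print(gpu_worth_it(0.01, 10.0, 1e9))   # False - transfer dominates
```

The pattern generalises: the bigger the compute-to-data ratio of the job, the easier it is to amortise the transfer overhead.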
Cost is, of course, one of the most important factors to consider. When the number of parameters is low, CPUs are still cost-effective, and you can also squeeze more out of them with libraries such as MKL-DNN or NNPACK. Another important point is that GPU clusters do not scale linearly, as illustrated in the following image.
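The non-linear scaling of GPU clusters can be sketched with Amdahl's law: speed-up is capped by the fraction of the job that cannot be parallelised. The 5% serial fraction used here is an illustrative assumption.

```python
# Amdahl's law: why adding GPUs gives diminishing returns.
# serial_fraction is the share of work that cannot be parallelised;
# the 5% figure below is an assumption for illustration.

def amdahl_speedup(n_gpus, serial_fraction):
    """Maximum speed-up on n_gpus given a non-parallelisable fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_gpus)

for n in (1, 2, 4, 8):
    print(f"{n} GPU(s): {amdahl_speedup(n, 0.05):.2f}x")
# 1 GPU(s): 1.00x
# 2 GPU(s): 1.90x
# 4 GPU(s): 3.48x
# 8 GPU(s): 5.93x
```

Even with only 5% serial work, 8 GPUs deliver well under 8x – real clusters also pay inter-GPU communication costs on top of this, so measured scaling is typically worse.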
Most importantly, GPU compute instances typically cost about 2-3x more than CPU instances. So if you do not see an equivalent performance gain, stick with CPUs. If all the above criteria are met, GPUs are the way to go!
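That rule of thumb reduces to a one-line check: the GPU is cost-effective only when its speed-up exceeds its price premium. The hourly prices and speed-up factors below are hypothetical.

```python
# Cost-per-throughput check for the 2-3x price gap mentioned above.
# Prices and speed-ups are hypothetical, for illustration only.

def cheaper_on_gpu(cpu_price_hr, gpu_price_hr, gpu_speedup):
    """GPU wins when its price premium is smaller than its speed-up,
    i.e. total cost for the same job comes out lower."""
    return gpu_price_hr / cpu_price_hr < gpu_speedup

# 3x the price, 10x the speed: the job finishes cheaper on the GPU
print(cheaper_on_gpu(cpu_price_hr=1.0, gpu_price_hr=3.0, gpu_speedup=10.0))  # True
# 3x the price but only 1.5x the speed: stick with the CPU
print(cheaper_on_gpu(cpu_price_hr=1.0, gpu_price_hr=3.0, gpu_speedup=1.5))   # False
```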
Now that you have a fair idea of the CPU vs GPU comparison, you can make better decisions to optimise the speed, performance and cost of your deep-learning work. Check out more of our blogs to explore deeper topics like benchmarking.
For a free trial, please click here: http://bit.ly/3hyNiWB