Nvidia stakes its claim in deep learning by making its GPUs easier to program

Gigaom

GPU maker Nvidia has been riding a wave of renewed relevancy lately as the popularity of deep learning continues to grow. Over the weekend, the company tried to capitalize even more on the craze by releasing a set of libraries called cuDNN that can be integrated directly into popular deep learning frameworks. Nvidia promises cuDNN will help users focus more on building deep neural networks and less on optimizing the performance of their hardware.
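The article does not show what "integrating cuDNN" looks like in practice, but the library is called through a C API of handles and tensor descriptors. Below is a minimal sketch that applies a ReLU activation to a small tensor on the GPU; it follows the descriptor-based API of later cuDNN releases, and exact signatures vary between versions, so treat it as illustrative rather than as the interface of the release described here.

```c
// Minimal cuDNN usage sketch: create a library handle, describe a tensor,
// and run a ReLU activation on the GPU. Signatures follow later cuDNN
// releases (v5+) and may differ from the version announced in this article.
#include <cuda_runtime.h>
#include <cudnn.h>
#include <stdio.h>

int main(void) {
    cudnnHandle_t handle;
    cudnnCreate(&handle);                          // one library context per thread/GPU

    // Describe a 1x1x2x2 float tensor in NCHW layout.
    cudnnTensorDescriptor_t desc;
    cudnnCreateTensorDescriptor(&desc);
    cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               1, 1, 2, 2);

    // Copy a small input to device memory.
    float h_in[4] = {-1.0f, 2.0f, -3.0f, 4.0f}, h_out[4];
    float *d_in, *d_out;
    cudaMalloc(&d_in, sizeof(h_in));
    cudaMalloc(&d_out, sizeof(h_out));
    cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);

    // Configure a ReLU activation.
    cudnnActivationDescriptor_t act;
    cudnnCreateActivationDescriptor(&act);
    cudnnSetActivationDescriptor(act, CUDNN_ACTIVATION_RELU,
                                 CUDNN_NOT_PROPAGATE_NAN, 0.0);

    // y = relu(x); alpha/beta scale the input and any existing output.
    float alpha = 1.0f, beta = 0.0f;
    cudnnActivationForward(handle, act, &alpha, desc, d_in, &beta, desc, d_out);

    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("%f %f %f %f\n", h_out[0], h_out[1], h_out[2], h_out[3]);

    // Release resources.
    cudnnDestroyActivationDescriptor(act);
    cudnnDestroyTensorDescriptor(desc);
    cudaFree(d_in);
    cudaFree(d_out);
    cudnnDestroy(handle);
    return 0;
}
```

Deep learning frameworks hide this boilerplate behind their own layer abstractions, which is the point Nvidia is making: users configure networks, and the library handles the low-level GPU tuning.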

Deep learning has become very popular among large web companies, researchers and even numerous startups as a way to improve current artificial intelligence capabilities, specifically in fields such as computer vision, text analysis and speech recognition. Many of the popular approaches — especially in computer vision — run on graphics processing units (GPUs), each of which can contain thousands of cores, in order to speed up the compute-intensive algorithms without requiring racks full of standard CPUs.
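To make the "thousands of cores" point concrete, here is a toy CUDA kernel, unrelated to any particular framework, that assigns one GPU thread to each element of a large array so the whole update runs in parallel rather than looping on a CPU. The kernel name and sizes are illustrative assumptions.

```c
// Toy illustration of GPU data parallelism: one thread per array element,
// launched across many blocks so thousands of cores work simultaneously.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // each thread handles one element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;                           // ~1 million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // 256 threads per block, enough blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                     // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```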

Nvidia…

View the original post for 437 more words.
