Stacey Higginbotham writing for Fortune:
Nvidia announced two new graphics accelerators Tuesday, which are aimed at helping large companies like Facebook, Baidu and Google develop new deep learning models and then deploy those models at a massive scale without requiring huge, expensive banks of servers all hooked up to their own power plant.
The article covers the new Tesla M40 and Tesla M4 chips. It also points to IBM’s synaptic chip, which I think is much more interesting than building AI on top of a CUDA architecture.
Monday, November 23, 2015
Copyright © 2015-2018 Selected Links