Artificial Intelligence (AI): Intelligence exhibited by machines
Intel is a company that powers the cloud and billions of smart, connected computing devices. Thanks to the pervasive reach of cloud computing, the ever-decreasing cost of compute enabled by Moore’s Law, and the increasing availability of connectivity, these connected devices are generating millions of terabytes of data every single day. The ability to analyze and derive value from that data is one of the most exciting opportunities for us all. Central to that opportunity is artificial intelligence.
While artificial intelligence is often equated with great science fiction, it isn’t relegated to novels and movies. AI is all around us, from the commonplace (talk-to-text, photo tagging, fraud detection) to the cutting edge (precision medicine, injury prediction, autonomous cars). Encompassing compute methods like advanced data analytics, computer vision, natural language processing and machine learning, artificial intelligence is transforming the way businesses operate and how people engage with the world.
Machine learning and its subset, deep learning, are key methods for the expanding field of AI. Intel processors power more than 97% of servers deployed to support machine learning workloads today. The Intel® Xeon® processor E5 family is the most widely deployed processor for deep learning inference, and the recently launched Intel® Xeon Phi™ processor delivers the scalable performance needed for deep learning training. While less than 10% of servers worldwide were deployed in support of machine learning last year, the capabilities and insights it enables make machine learning the fastest-growing form of AI.
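To make the training-versus-inference distinction above concrete, here is a minimal, purely illustrative sketch in plain Python and NumPy; it is hypothetical example code, not drawn from any Intel library. Training means repeatedly updating a model’s weights against known data, while inference is a single, comparatively cheap forward pass that applies the trained weights to new inputs.

```python
import numpy as np

# Illustrative only: a tiny logistic-regression model, to show the difference
# between training (many weight-update passes over known data) and
# inference (one forward pass applying the trained weights to new data).

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))            # synthetic training inputs
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)         # synthetic labels

w = np.zeros(20)                           # model weights to be learned
for _ in range(500):                       # training loop
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # forward pass: current predictions
    grad = X.T @ (p - y) / len(y)          # gradient of the logistic loss
    w -= 0.5 * grad                        # gradient-descent update

x_new = rng.normal(size=(3, 20))           # inference: one pass on unseen inputs
print(1.0 / (1.0 + np.exp(-(x_new @ w))))  # predicted probabilities
```

Deep learning scales this same pattern to many layers and far larger datasets, which is why training is the more compute-intensive phase and benefits most from scalable hardware.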
Adding Nervana Systems to the Intel AI Portfolio
Success in this space requires continued innovation to deliver an optimized, scalable platform providing the highest performance at the lowest total cost of ownership. Today, I’m excited to announce that Intel signed a definitive agreement to acquire Nervana Systems, a recognized leader in deep learning. Founded in 2014 and headquartered in San Diego, California, Nervana has a fully optimized software and hardware stack for deep learning. Nervana’s IP and expertise in accelerating deep learning algorithms will expand Intel’s capabilities in the field of AI. We will apply Nervana’s software expertise to further optimize the Intel Math Kernel Library and its integration into industry-standard frameworks. Nervana’s Engine and silicon expertise will advance Intel’s AI portfolio and enhance the deep learning performance and TCO of our Intel Xeon and Intel Xeon Phi processors.
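For context on the Intel Math Kernel Library mentioned above: the heavy lifting in deep learning workloads largely reduces to dense matrix multiplies, which frameworks hand off to an optimized math library when one is available. The sketch below is an illustration in NumPy, not Intel example code; whether the multiply actually runs on Intel MKL depends on which BLAS library a given NumPy build links against.

```python
import numpy as np

# Illustrative only: deep learning layers are dominated by dense matrix
# multiplies (GEMM). When NumPy or a deep learning framework is built against
# an optimized BLAS such as Intel MKL, a call like the one below is dispatched
# to that library's tuned, multithreaded kernels.

activations = np.random.rand(4096, 1024).astype(np.float32)  # a batch of layer inputs
weights = np.random.rand(1024, 512).astype(np.float32)       # one layer's weight matrix

outputs = activations @ weights    # a single GEMM: the hot path of training and inference
print(outputs.shape)               # (4096, 512)
np.show_config()                   # reports which BLAS/LAPACK library this NumPy build uses
```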
At Intel we believe in the power of collaboration: the goodness inherent in exchanging fresh ideas and diverse points of view. We believe that by bringing together the Intel engineers who create the Intel Xeon and Intel Xeon Phi processors with the talented Nervana Systems team, we will be able to advance the industry faster than would otherwise have been possible. We will continue to invest in leading-edge technologies that complement and enhance Intel’s AI portfolio.
We will share more about artificial intelligence and the amazing experiences it enables at our Intel Developer Forum next week. I hope to see you there!
Diane Bryant is executive vice president and general manager of the Data Center Group at Intel.