A Wave Of Artificial Intelligence Chips: Future Of Amazing Possibilities
A wave of artificial intelligence chips is set to arrive in 2018, and for the first time the industry is making a serious push toward software integration.
In these heady, fragmented early days, even the efforts at software convergence are themselves fragmented. One AI team's survey counted 11 separate attempts to bridge the gaps between the competing software frameworks used to build and manage neural networks.

The most promising is the Open Neural Network Exchange (ONNX), an open-source project started by Facebook and Microsoft and recently joined by Amazon. The group released the first version of the ONNX format in December. It aims to convert a neural-network model built in any one of a dozen competing software frameworks into a common graph representation.
Chipmakers can then target their hardware at the resulting graph. That is good news for start-ups that cannot afford to write separate software stacks for each competing framework, such as Amazon's MxNet, Google's TensorFlow, Facebook's Caffe2, and Microsoft's CNTK.
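
To make the idea concrete, here is a minimal sketch of the kind of export step ONNX enables, assuming PyTorch and its torch.onnx exporter (neither is named above; the model and file names are hypothetical):

```python
import torch
import torch.nn as nn

# Minimal sketch: export a toy model to the framework-neutral ONNX graph format.
# PyTorch and torch.onnx are assumptions here; the frameworks named in the article
# (MxNet, TensorFlow, Caffe2, CNTK) have their own ONNX export paths.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()

dummy_input = torch.randn(1, 64)  # example input that fixes the graph's shapes
torch.onnx.export(model, dummy_input, "toy_model.onnx",
                  input_names=["features"], output_names=["logits"])
```

A vendor toolchain that understands ONNX can then compile "toy_model.onnx" for its hardware without needing to know which framework produced it.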
A group of more than 30 chip vendors released their preferred alternative, the Neural Network Exchange Format (NNEF), on December 20. NNEF is meant to give chipmakers an option besides creating their own internal formats, as Intel did with Nervana Graph and Nvidia did with TensorRT.
Other entries in the alphabet soup include ISAAC, NNVM, Poplar, and XLA. Greg Diamos, a senior researcher at Baidu's Silicon Valley AI Lab, said: "It may be too early to know whether a single format will succeed, but we are on the right path, and one of these efforts may eventually win out."

Figure: Among AI frameworks, Amazon claims that its MxNet framework and the emerging Gluon API offer the best efficiency. (Source: Amazon)
In addition, Google has begun using software to automate the process of slimming down DNN models so that they can run on everything from smartphones to Internet of Things (IoT) nodes. If successful, the approach can shrink a 50-Mbyte model to about 500 Kbytes.
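
As a rough illustration of that kind of shrinking, the sketch below uses TensorFlow Lite's post-training quantization, one publicly documented approach; the saved-model path and output file name are placeholders, and the exact compression ratio depends on the model:

```python
import tensorflow as tf

# Minimal sketch: convert a trained model to TensorFlow Lite with post-training
# quantization, which stores weights in 8 bits instead of 32 and shrinks the file.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)  # the .tflite file can run on phones and IoT-class devices
```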
Google is also exploring ways to do limited training on the handset itself, for example by adjusting only the top layers of a model, or by running training overnight on the data collected during the day. Industry work such as SqueezeNet and MobileNet similarly points the way toward simpler vision models.
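
A hedged sketch of the "adjust only the top of the model" idea, using a Keras MobileNetV2 base as a stand-in; the class count, data names, and training schedule are hypothetical:

```python
import tensorflow as tf

# Minimal sketch: freeze a pretrained MobileNetV2 feature extractor and train only
# a small classification head, analogous to adjusting the top layers of a model
# overnight on data collected during the day.
base = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", input_shape=(224, 224, 3), weights="imagenet")
base.trainable = False  # the expensive convolutional layers stay fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),  # hypothetical 10-class head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(day_images, day_labels, epochs=1)  # placeholders for the day's data
```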
Pete Warden, an engineer on Google's TensorFlow Lite team, said: "We are seeing an explosion of people deploying machine learning in a wide variety of products. Getting the energy cost of each operation down is what keeps me working late at night."
When experts look at the future of AI, they see some interesting possibilities.
Today's systems rely on supervised learning with manually tuned models. Warden is among the researchers who expect semi-supervised learning to appear in the near future, with client devices such as cell phones doing some of the learning on their own. The ultimate goal is unsupervised learning, in which computers educate themselves rather than depending on the engineers who program them.
Along the way, researchers are trying to automatically label the data coming from devices such as cell phones and IoT nodes.
"We need a lot of calculations now, and in this transition phase, once things are automatically tagged, you just need to index new incremental content, which is more like how human beings process data," said Janet George, a Western Digital scientist .
Unsupervised learning would open the door to an accelerating era of machine intelligence, something some see as a digital nirvana. Others worry that, without human intervention, the technology could spin out of control in catastrophic ways. "It scares me," said Norm Jouppi, the lead of Google's TPU effort.
Meanwhile, academics working in semiconductors have their own long-term vision for future AI chips.
Intel, Graphcore, and Nvidia "are already building full-scale chips, and the next step is 3-D technology," said UC Berkeley professor David Patterson. "When Moore's Law was in full swing, people shied away from complex packaging because of concerns over reliability and cost. Now that Moore's Law is coming to an end, we are going to see a lot of experimentation with packaging."
The end game is to create new types of transistors that can be stacked in layers of logic and memory.
Suman Datta, a professor of electrical engineering at Notre Dame, is bullish on negative-capacitance ferroelectric transistors. He surveyed the field's prospects at a recent conference on so-called monolithic 3-D designs, which apply and extend the on-die chip-stacking techniques already used in today's 3-D NAND flash.
Teams from Berkeley, MIT, and Stanford will show similarly cutting-edge work at the International Solid-State Circuits Conference in February. Their chip (pictured below) stacks resistive RAM (ReRAM) structures on top of logic made from carbon nanotubes.

Figure: Berkeley, MIT, and Stanford researchers will report at ISSCC a new type of accelerator that uses carbon nanotubes, ReRAM, and patterns as its computational elements. (Source: University of California, Berkeley)
Taking a cue from DNNs, the device computes with approximate patterns rather than the deterministic numbers computers have used to date. The approach, called high-dimensional computing, uses vectors with tens of thousands of dimensions as its computational elements, said Jan Rabaey, a Berkeley professor who contributed to the paper and sits on Intel's AI advisory board.
Rabaey said such chips can learn from examples while requiring far less data than traditional systems. A test chip due soon pairs an array of oscillators acting as analog logic with an associative memory array built from ReRAM cells.
Rabaey told an IEEE artificial-intelligence symposium: "I dream of an engine I can carry with me to guide me in the field ... My goal is to get AI running at less than 100 millivolts. We have to rethink how we do computing; we are moving from algorithm-based systems to data-based systems."
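
To give a flavor of computing with vectors of tens of thousands of dimensions, here is a small NumPy sketch of high-dimensional (hyperdimensional) computing; the binding and bundling operators are standard in the literature, but the dimension, seed, and key-value example are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # tens of thousands of dimensions, as in high-dimensional computing

def random_hv():
    """Random bipolar hypervector with +1/-1 entries."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Bind two hypervectors (element-wise multiply); the result is dissimilar to both."""
    return a * b

def bundle(vectors):
    """Bundle (superpose) hypervectors by majority vote; the result stays similar to each input."""
    return np.sign(np.sum(vectors, axis=0))

def similarity(a, b):
    """Normalized dot product: near 0 for unrelated vectors, near 1 for matching ones."""
    return float(a @ b) / D

# Toy example: encode a key-value pair as one pattern and query it back.
key, value = random_hv(), random_hv()
record = bind(key, value)
recovered = bind(record, key)        # binding is its own inverse for bipolar vectors
print(similarity(recovered, value))  # ~1.0: the value is recovered from the pattern
```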

References used:
- https://www.morganstanley.com/ideas/ai-semiconductors-2018
- https://www.forbes.com/sites/valleyvoices/2017/08/03/back-to-the-future-chip-makers-are-putting-the-silicon-back-in-silicon-valley/#6f6501b14d9d
- https://www.wired.com/2016/10/ai-changing-market-computer-chips/
- https://medium.com/software-is-eating-the-world/what-s-next-in-computing-e54b870b80cc