Revolutionary New AI Can Be Run Anywhere





The Impossibility of Human AI

The biggest hurdle in developing an Artificial Intelligence that could match the human brain in both efficiency and capability is the enormous energy consumption of today's computer chips. But recent advances in neural-optimized chips, and the emergence of the first neuromorphic computing chips, are starting to paint a clearer picture of how we may soon develop an Artificial Intelligence that matches, and even beats, us in most areas.

Timing is crucial when it comes to brain computing. It's the way neurons connect to form circuits. It's how these circuits analyze extremely complicated data, resulting in life-or-death decisions. It's the ability of our brains to make split-second judgments, even when confronted with completely novel situations. And we accomplish all of this without frying the brain through excessive energy use.

In short, the brain is a wonderful example of a very powerful computer worth imitating, and computer scientists and engineers have already taken the first steps in that direction.

The goal of neuromorphic computing is to use new hardware chips and software algorithms to reproduce the brain's architecture and data processing skills.

It has the potential to open the door to real artificial intelligence.

However, one key component is missing.

Most algorithms that power neuromorphic devices are concerned only with each artificial neuron's contribution—that is, the strength with which neurons link to one another, referred to as synaptic weight.

What's missing—yet crucial to the inner workings of our brain—is time. 
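To see what that omission looks like in practice, here is a minimal sketch of a purely weight-based artificial neuron, the kind most neuromorphic algorithms assume today (the function and numbers below are illustrative, not taken from any particular framework):

```python
import numpy as np

def weight_only_neuron(inputs, weights, bias=0.0):
    """A conventional artificial neuron: its output depends only on
    synaptic weights and a bias; *when* the inputs arrive plays no role."""
    return np.tanh(np.dot(weights, inputs) + bias)

# Illustrative values: three presynaptic activations and their weights.
activations = np.array([0.2, 0.9, 0.1])
weights = np.array([0.5, -0.3, 0.8])
print(weight_only_neuron(activations, weights))
```

Time never appears anywhere in that computation, and that is exactly the gap the new work sets out to close.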

How Neuromorphic computing hardware is built

A new approach to AI

A team from the Human Brain Project, the European Union's flagship big data neuroscience project, added the element of time to a neuromorphic algorithm earlier this month.

The findings were then tested against state-of-the-art GPUs and traditional neuromorphic systems on physical hardware (the BrainScaleS-2 neuromorphic platform).

Due to their intrinsic complexity, "the more biological archetypes...still fall behind in terms of performance and scalability" when compared to the abstract neural networks employed in deep learning, according to the authors. 

According to Dr. Charlotte Frenkel of the University of Zurich and ETH Zurich in Switzerland, who was not involved in the work, the algorithm performed "favorably in terms of accuracy, latency, and energy efficiency" in multiple tests.

We might usher in a new era of extremely efficient AI by incorporating a temporal component into neuromorphic computing, moving away from static data tasks like image recognition and toward ones that better embody time.

Consider movies, biosignals, or computer-to-computer communication. 

According to lead author Dr. Mihai Petrovici, the potential is reciprocal.

"Not only is our work fascinating in terms of neuromorphic computing and biologically inspired hardware.

It also recognizes the need to "translate so-called deep learning methodologies to neuroscience in order to further unravel the secrets of the human brain," he added. 

Spikes, a key notion in brain computing, are at the heart of the new algorithm.

Let's take a look at a neuron that has been heavily abstracted.

It has a bulbous centerpiece bordered by two outward-reaching wrappers, rather like a Tootsie Roll.

One side is the input: a complex, tree-like structure that receives information from a preceding neuron.

The other is the output, which uses bubble-like vesicles packed with chemicals to send signals to other neurons, triggering an electrical response on the receiving end.

The essence of the matter is that the neuron must "spike" in order for the complete process to take place.

If, and only if, the neuron receives a high enough amount of input will its bulbous section emit a spike that travels down the output channels to notify the next neuron—a wonderful built-in noise reduction feature.

Neurons, on the other hand, don't merely employ one spike to transmit information.

Rather, they fire in a particular temporal sequence.

Consider it like Morse Code: the timing of an electrical burst contains a lot of information.

It's the foundation for neurons connecting to form circuits and hierarchies, allowing for extremely energy-efficient processing.
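To make that threshold-and-timing idea concrete, here is a minimal leaky integrate-and-fire sketch (a textbook toy model, not the algorithm the researchers run on BrainScaleS-2; every parameter below is illustrative):

```python
def leaky_integrate_and_fire(input_current, threshold=1.0, leak=0.9, dt=1.0):
    """Toy leaky integrate-and-fire neuron: the membrane potential integrates
    its input and leaks over time, and a spike is emitted only when the
    potential crosses the threshold (the built-in noise rejection described
    above). The returned spike times are the part that carries information."""
    potential = 0.0
    spike_times = []
    for t, current in enumerate(input_current):
        potential = leak * potential + current * dt
        if potential >= threshold:
            spike_times.append(t)
            potential = 0.0  # reset after firing
    return spike_times

# Weak, noisy input never crosses the threshold; a strong burst does, and early.
print(leaky_integrate_and_fire([0.1, 0.05, 0.1, 0.08, 0.1]))  # []
print(leaky_integrate_and_fire([0.6, 0.7, 0.2, 0.1, 0.9]))    # [1, 4]
```

The list of spike times, rather than a single weighted sum, is what a timing-aware neuromorphic algorithm has to work with.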

Why not use the same approach with neuromorphic computers? 

An alternative approach to neuromorphic computing AI

Other approaches to AI

For breakthrough AI development, research in other avenues beyond deep learning may be required.

Some AI experts, such as Gary Marcus, argue that deep learning has hit its limit and that fresh AI methodologies are needed to achieve new breakthroughs.

In that analysis, Marcus described his insights on AI's limits, addressed the most common objections to his arguments, and provided a timescale for his predictions.

He anticipated that VC enthusiasm for AI would wane in 2021, but that the next AI paradigm, one that would open new commercial prospects (e.g., a successor to today's deep learning), would be ready between 2023 and 2027.

To solve more complicated problems, deep learning relies on computational power.

With today's technology, learning may simply take too long to be useful.

As a result, advancements in computational power are required.

Companies may use modern computer technology to create AI models that can learn to address more difficult challenges. 

Even the most sophisticated CPU may not be able to increase an AI model's efficiency on its own.

Companies require high-performance processors to apply AI in applications such as computer vision, natural language processing, and speech recognition. AI-enabled chips offer a solution: they make CPUs "intelligent" about optimizing their workloads, so they can handle their tasks more independently and efficiently. New AI technologies will need these processors to accomplish complex jobs more quickly.

Self-supervised learning (also known as self-supervision) is a form of supervised learning in which the supervision comes from the data itself rather than from human annotators.

Unlike supervised learning, this approach does not require people to label the data; it carries out the labeling process entirely on its own. Self-supervised learning, according to Yann LeCun, Facebook's VP and chief AI scientist, will be important in reaching human-level intelligence. While the technique is currently used mostly in computer vision and NLP applications such as image colorization and language translation, it is expected to be employed more broadly in our daily lives in the future.
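As a rough sketch of the core idea, that the labels come from the data itself rather than from human annotators, consider a simple next-value prediction task (a generic illustration, not LeCun's or any specific Facebook method):

```python
import numpy as np

def make_self_supervised_pairs(sequence, window=4):
    """Build (input, label) pairs without human annotation: each label is
    simply the next value in the raw sequence itself."""
    inputs, labels = [], []
    for i in range(len(sequence) - window):
        inputs.append(sequence[i:i + window])  # context window
        labels.append(sequence[i + window])    # the "label" comes from the data
    return np.array(inputs), np.array(labels)

# A raw signal is all that's needed; no human labeling step is involved.
signal = np.sin(np.linspace(0, 6.28, 50))
X, y = make_self_supervised_pairs(signal)
print(X.shape, y.shape)  # (46, 4) (46,)
```

Any raw signal, text corpus, or image collection can supply such "free" labels, which is what makes the approach attractive at scale.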

The difference between the brain and neuromorphic chips

Is this the Future of Artificial Intelligence?

Instead of mapping out a single artificial neuron's spikes—a Herculean task—the team homed in on a single metric: how long it takes for a neuron to fire.

The idea behind “time-to-first-spike” code is simple: the longer it takes a neuron to spike, the lower its activity levels. Compared to counting spikes, it’s an extremely sparse way to encode a neuron’s activity, but comes with perks. Because only the latency to the first time a neuron perks up is used to encode activation, it captures the neuron’s responsiveness without overwhelming a computer with too many data points. In other words, it’s fast, energy-efficient, and easy.
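A minimal sketch of one possible latency code appears below (the linear mapping is an illustrative choice, not the exact encoding scheme used in the paper):

```python
import numpy as np

def time_to_first_spike(activations, t_max=20.0):
    """Encode each neuron's activity as a single latency: the more strongly
    a neuron is driven, the earlier it fires. One number per neuron, which is
    far sparser than recording every spike it emits."""
    a = np.clip(np.asarray(activations, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - a)  # high activity -> short latency

# Illustrative activations (0 = silent, 1 = maximally driven).
print(time_to_first_spike([0.95, 0.5, 0.1, 0.0]))  # [ 1. 10. 18. 20.]
```

Downstream layers then only need to compare a handful of latencies instead of whole spike trains.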

The researchers next put the system to the test with MNIST, a collection of handwritten digits that changed computer vision.

With nearly 97 percent accuracy, the system excelled once more.

Even more surprising, the BrainScaleS-2 system classified 10,000 test samples in less than one second while consuming comparatively little energy.

To put these findings in context, the researchers compared BrainScaleS-2's performance (as a result of the new algorithm) to that of commercial and other neuromorphic devices.

Take, for example, SpiNNaker, a massively parallel distributed architecture that also mimics neural computing with spikes.

The new method was almost 100 times faster at image recognition than SpiNNaker while using only a quarter of the power.

The comparison with TrueNorth, IBM's pioneering neuromorphic chip, yielded similar results.

Energy economy and parallel processing, two of the brain's most useful computing capabilities, are now heavily motivating the next generation of computer processors.

What is the goal?

Create machines that are as adaptable and versatile as our own brains while consuming a fraction of the energy used by today's silicon-based circuits.

Biologically realistic designs, on the other hand, have lagged behind deep learning and its abstract artificial neural networks.

According to Frenkel, some of this is due to the difficulty of "updating" these circuits through learning.

Thanks to BrainScaleS-2 and a little timing data, that is now achievable.

Having a "external" arbiter for modifying synaptic connections also offers the entire system some breathing room.

Mismatches and mistakes abound in neuromorphic hardware, just as they do in our brain's computing.

The whole system may learn to adapt to this unpredictability using the chip and an external arbiter, eventually compensating for—or even exploiting—its idiosyncrasies for quicker and more flexible learning. 

The algorithm's strength, according to Frenkel, rests in its sparseness.

Sparse coding in the brain, she noted, "may explain the quick reaction times...such as for visual processing."

Rather than activating whole brain areas, only a few neural networks are needed, much like speeding along empty highways instead of being stuck in rush-hour traffic.
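As a loose computational picture of that sparseness, here is a top-k sketch in which only the most active units are kept (purely illustrative, and unrelated to the paper's actual coding scheme):

```python
import numpy as np

def sparse_code(activations, k=3):
    """Keep only the k most active units and silence the rest: a crude picture
    of the 'few neurons do the work' idea behind sparse coding."""
    a = np.asarray(activations, dtype=float)
    out = np.zeros_like(a)
    top_k = np.argsort(a)[-k:]  # indices of the k largest activations
    out[top_k] = a[top_k]
    return out

rng = np.random.default_rng(0)
dense = rng.random(10)
print(sparse_code(dense))  # mostly zeros, with only a handful of active units
```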

Last Words

Despite its strength, the algorithm is not without flaws.

It struggles with static data but thrives on temporal sequences, such as speech or biosignals.

But for Frenkel it is the beginning of a new framework: significant data can be encoded with a flexible yet simple metric, then generalized to enhance brain- and AI-based data processing at a fraction of the usual energy cost.

"It might be a critical step toward spiking neuromorphic hardware demonstrating a competitive edge over traditional neural network techniques," she added. 
