Future Chip Innovation Will Be Driven By AI-Powered Co-Optimization Of Hardware And Software




To say we’re at an inflection point of the technological era may be an obvious declaration to some. How various technologies and markets will advance is nuanced, but a common theme is emerging: the pace of innovation is moving at a rate humankind has seen at only rare points in history. The invention of the printing press and the rise of the internet come to mind as similar inflection points, but today’s innovation trends are being driven aggressively by machine learning and artificial intelligence (AI). In fact, AI is powering rapid technology advances in virtually every area, from the edge and personal devices to the data center, and even chip design itself.

There is also a self-perpetuating effect at play, because demand for intelligent machines and automation everywhere is also ramping up, whether you consider driver-assist technologies in the automotive industry, recommendation engines and speech recognition in phones, or smart home technologies and the IoT. What’s spurring this voracious demand for tech is that leading-edge OEMs, from big names like Tesla and Apple to scrappy start-ups, are now beginning to realize great gains in silicon and system-level development beyond the confines of Moore’s Law alone.

Tesla Model X Infotainment System (Image: Tesla)

Intelligent Co-Optimization Of Software And Hardware Leads To Rapid Innovation

What both of these market leaders have recently demonstrated is that advanced system designs can only be fully optimized with a holistic approach: by coupling software development and use-case workloads tightly with chip-level hardware design, they achieve new levels of advancement that would not be possible by relying on semiconductor process and other hardware-focused improvements alone.

It used to be that hardware engineers would drive software engineering teams to complete full solutions. Now, however, best practice follows much more of a co-development model. Consider Apple’s internal silicon development effort with its M1 series of processors for its MacBook and Mac mini portfolio. By engineering its own tightly coupled, highly advanced solutions, with its software and application workloads considered during the design of the hardware, Apple has demonstrated time and again that it can do more with less, with arguably some of the best performance-per-watt metrics in the PC industry. Likewise, Tesla realized that achieving its lofty goals of full Level 4 and 5 self-driving autonomy would require engineering its own custom silicon engines and systems, and the company has blazed a trail with its FSD (Full Self-Driving) chip technology. Again, though, Tesla achieved this feat by marrying its own specialized application workloads and software to drive its hardware development, not the other way around.


Obviously, Apple and Tesla have huge resources and big budgets they can bring to bear to develop their own in-house chips and technology. However, new tools are emerging, once again bolstered by advancements in AI, that may allow even scrappy start-ups with much smaller design teams and budgets to roll their own silicon, or at least develop optimized solutions that are more powerful and efficient than general-purpose chips and off-the-shelf parts.

AI-Assisted Chip Design Is Only The Beginning

And it’s in this area of chip design tools that companies like Synopsys are making great strides to usher in a new era of holistic design approaches for chip technologies, fueled by AI-enhanced automation. Previously, my firm partner Marco reported on Synopsys’ evolution of its DSO.ai technology, which employs machine learning to drive dramatically faster place-and-route for design engineers. This is a critical step in the semiconductor design process, beginning with floorplanning, in which chip designs are mapped to silicon. The iterative nature of the process, which optimizes for silicon area, power and performance goals, is a natural fit for machine learning, and automating it can dramatically improve time to market and reduce engineering hours, freeing engineers to focus on new innovations.
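To make the idea concrete, the optimization loop at the heart of such tools can be sketched in miniature. The cost formulas, knob names (`density`, `effort`) and weights below are purely illustrative stand-ins, not Synopsys’ actual models; a real flow would invoke an EDA tool for each evaluation, and DSO.ai-style systems replace the random sampling shown here with learned models of the design space.

```python
import random

# Toy stand-in for a place-and-route run: given tool knob settings,
# return (area, power, timing_slack). The formulas are illustrative
# only; a real flow would run an EDA tool here.
def evaluate_placement(knobs):
    density, effort = knobs["density"], knobs["effort"]
    area = 100 / density               # denser placement -> smaller area
    power = 50 + 30 * density          # but more congestion -> more power
    timing = 10 * effort - 5 * density # more optimization effort -> better slack
    return area, power, timing

def cost(knobs):
    area, power, timing = evaluate_placement(knobs)
    # Weighted PPA (power, performance, area) objective; weights are arbitrary.
    return area + power - 2 * timing

# Search loop standing in for ML-driven design-space exploration:
# sample candidate knob settings and keep the best-scoring one.
random.seed(0)
best, best_cost = None, float("inf")
for _ in range(200):
    cand = {"density": random.uniform(0.4, 0.95),
            "effort": random.uniform(0.0, 1.0)}
    c = cost(cand)
    if c < best_cost:
        best, best_cost = cand, c

print(f"best knobs: {best}, cost: {best_cost:.1f}")
```

The point of the sketch is the shape of the problem: each iteration is expensive in a real flow, which is exactly why learning from prior runs (rather than sampling blindly) pays off so dramatically.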

I had a chance to speak with Synopsys President and COO Sassine Ghazi, who notes, “We’ve been spoiled by Moore’s Law for far too long.” What Ghazi was alluding to is that, historically, simply moving to a newer process node was all that was needed to achieve significant performance, power and efficiency gains in many semiconductor designs. While to an extent this is still technically the case today, it has become obvious that innovation in other areas is necessary to achieve the larger gains current market demands require. “Today’s technology inflection point is demanding us to rethink design approaches, and transition from the constraints of scale complexity to drive innovation at systemic complexity levels,” Ghazi continued. “This is, in-part, how we’ll realize the lofty goal of 1000X performance advancements set by many major market innovators like Intel, IBM and others.”

Ghazi also notes that the company is working to harness AI to accelerate and automate the design verification and validation of chips, where the goal is to wring out anomalies and application marginalities before chips are sent to mass production and deployment. “Validation and verification are great opportunities for machine learning, where the AI can help not only time to market, but also expand the test coverage area, which can be especially critical for general purpose silicon that needs broader confidence in a wider range of applications.”
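The coverage-expansion idea Ghazi describes can be illustrated with a minimal sketch, assuming a toy coverage model: bins over an opcode space, where a bin is “covered” once a test exercises it. The bin mapping and stimulus generation below are hypothetical; ML-based verification flows learn which stimuli to generate from past simulation results, whereas this sketch simply hard-codes the bias toward uncovered bins.

```python
import random

# Toy coverage model: 16 functional-coverage bins over an opcode space.
# A real verification environment tracks far richer coverage points.
NUM_BINS = 16

def run_test(opcode):
    # Which coverage bin this stimulus exercises (illustrative mapping).
    return opcode % NUM_BINS

# Coverage-directed test generation: bias new stimuli toward bins that
# are still uncovered, instead of sampling the stimulus space uniformly.
random.seed(1)
covered = set()
tests_run = 0
while len(covered) < NUM_BINS and tests_run < 1000:
    uncovered = [b for b in range(NUM_BINS) if b not in covered]
    target = random.choice(uncovered)
    # Construct any opcode that maps to the targeted uncovered bin.
    opcode = target + NUM_BINS * random.randint(0, 15)
    covered.add(run_test(opcode))
    tests_run += 1

print(f"full coverage after {tests_run} tests")
```

Because each stimulus is aimed at an uncovered bin, the directed loop closes coverage in far fewer tests than uniform random stimulus would; that gap is the time-to-market and coverage benefit Ghazi is pointing at.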

Moving forward, Ghazi notes the company is striving to develop new tools that allow OEMs to validate and achieve silicon design goals by feeding their specialized software and application workloads directly into the front-end design process, while utilizing machine learning to optimize chips based on this early, critical input. Ghazi reports the company is targeting 2022 for early customer engagements in these new optimization areas. In addition, as the chart above highlights, Synopsys is focused on automating and advancing all areas of modern, cutting-edge chip design, in an effort to address new market demands and dynamics and allow us to scale beyond Moore’s Law-driven chip fab process advancements alone.

Regardless, Synopsys is not alone in this realization, and as Ghazi notes, “it’s going to take an entire industry” to drive innovation to its fullest potential and meet current and future market demands for new, critical enabling technologies. We’re now in an age when nearly anything is possible, from the metaverse to autonomous vehicles and commercialized space travel, and machine learning is at the nexus of it all.
