Transform 2019: Hear from the movers and shakers in AI



Artificial intelligence is transforming business and offering significant strategic and practical opportunities, from natural language processing and smart speech to IoT and edge computing.
While the technology has become increasingly democratized, allowing businesses of any size to reap the benefits, some companies and innovators are leading the charge — and they’ll be at this year’s Transform event in San Francisco on July 10 and 11. Join us to get in the room with them and look over their shoulders at how it’s done. They’ll offer inspiring and practical takeaways that are crucial to your business.






Andrew Moore, Head of Google Cloud AI





In an increasingly competitive cloud market, Google is positioning itself as the go-to for everyone from startups to enterprises, with dozens of new AI-powered products and services that are easy to access even for non-data scientists. Consider Google Cloud Platform, which offers AI creators a shared end-to-end environment for teams to test, train, and deploy models, from the germ of an AI strategy all the way to launch. Google Cloud is also making a bid to differentiate itself from competitors by giving small businesses and startups that depend on a cloud provider’s technology the option to run their models on premises or on GCP.
Plus there are new classes of AutoML, a collection of premade retail and Contact Center AI services, and AI Platform, a collaborative model-building tool. Developers with little coding experience can use AutoML, while AI Platform is aimed at data scientists — part of Google’s attempt to deliver AI tools across the spectrum of experience and bring useful AI to all industry verticals. Conversations with independent experts and brand executives at Transform will help put all this in context.




Keynote speaker: Swami Sivasubramanian, Amazon AI vice president


If you’re looking to train machine learning models at massive scale while keeping costs down, Amazon’s AWS also offers all kinds of AI products for developers and business executives. Amazon hopes you’ll tap its SageMaker AI service, which uses streaming techniques to keep the required computing power in check while delivering comparable performance. The more data that gets fed through SageMaker’s streaming algorithms, the more training the system does, but the computational cost of processing each new batch remains constant, rather than growing with the volume of data already seen.
This means the company has created a system that can handle incredibly large data sets at global scale with accuracy comparable to more traditional methods of training AI systems. That’s important for Amazon’s work on its own AI projects, as well as for its customers’ needs.
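To make that idea concrete, here is a minimal sketch, in plain NumPy rather than Amazon’s actual SageMaker code, of an online learner whose memory footprint and per-batch compute stay constant no matter how much data streams through it. The class and variable names are illustrative only.

```python
# Conceptual sketch (not SageMaker's implementation): an online linear learner
# whose fixed-size state and per-batch compute stay constant as data streams in.
import numpy as np

class StreamingLinearLearner:
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)  # fixed-size state: one weight vector
        self.lr = lr

    def partial_fit(self, X, y):
        """One SGD step on a mini-batch; cost depends only on the batch size."""
        preds = X @ self.w
        grad = X.T @ (preds - y) / len(y)
        self.w -= self.lr * grad

# Simulate an unbounded stream: each batch is processed and then discarded,
# so total memory never grows with the amount of data seen.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
model = StreamingLinearLearner(n_features=3)
for _ in range(1000):
    X = rng.normal(size=(32, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=32)
    model.partial_fit(X, y)
print(model.w)  # approaches true_w without ever holding the full dataset
```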
Companies need to invest in NLP technologies to keep up with the revolution happening in search and engagement, and Amazon AI is keeping pace in the NLP space with leaders like Google. Scientists at Amazon’s Alexa division recently used cross-lingual transfer learning — a technique that entails training an AI system in one language before retraining it in another — to adapt an English language model to German, and in a new paper the researchers expanded the scope of their work to transfer an English-language model to Japanese.
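As a rough illustration of that recipe, the hypothetical PyTorch sketch below pretrains a small classifier on a source language, then reuses the encoder and classification head while re-initializing the embedding layer for a new target-language vocabulary before fine-tuning. It is not Amazon’s Alexa code; the model, vocabulary sizes, and synthetic data are placeholders.

```python
# Toy cross-lingual transfer learning sketch (illustrative only).
import torch
import torch.nn as nn

class TextClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden=128, n_classes=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)
        _, (h, _) = self.encoder(x)
        return self.head(h[-1])

def train(model, token_ids, labels, steps=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(token_ids), labels)
        loss.backward()
        opt.step()

# 1) "Pretrain" on the source language (synthetic stand-in data here).
src = TextClassifier(vocab_size=10_000)
src_tokens = torch.randint(0, 10_000, (256, 20))
src_labels = torch.randint(0, 5, (256,))
train(src, src_tokens, src_labels)

# 2) Transfer: copy the encoder and head, re-initialize embeddings
#    for the target-language vocabulary.
tgt = TextClassifier(vocab_size=8_000)
tgt.encoder.load_state_dict(src.encoder.state_dict())
tgt.head.load_state_dict(src.head.state_dict())

# 3) Fine-tune on a much smaller target-language set.
tgt_tokens = torch.randint(0, 8_000, (64, 20))
tgt_labels = torch.randint(0, 5, (64,))
train(tgt, tgt_tokens, tgt_labels, steps=20)
```

The design choice is the point: most of what the model learned about the task survives the language switch, so the target language needs far less labeled data.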



Hilary Mason, GM of machine learning at Cloudera and founder of Fast Forward Labs

Hilary Mason, one of the highest-profile women in data science and general manager of machine learning at Cloudera, stated earlier this year that the biggest trend in AI is the ethical implications of AI systems. Companies need to understand the importance of putting some kind of ethical framework in place, and both technical and business leaders need to accept accountability for creating products without bias.
Also, much as you’d expect business managers to be at least minimally competent at using spreadsheets to do simple modeling, they’ll now need to be minimally competent when it comes to recognizing AI opportunities in their own products.
Mason also thinks an increasing number of businesses will need to form structures to manage multiple AI systems. A single system can be managed with hand-deployed custom scripts, and cron jobs can manage a few dozen. But when you’re managing tens or hundreds of systems in an enterprise with security, governance, and risk requirements, you need professional, robust tooling. That means shifting from relying on pockets of competency, or even brilliance, to a systematic way of pursuing machine learning and AI opportunities. (Here too, we’ll have an array of companies weighing in: LinkedIn, Uber, Airbnb, and Lyft will be talking at Transform about how to do this.)

Greg Brockman and Ilya Sutskever, OpenAI cofounders



Gaming has long been a benchmark in AI research, and OpenAI has been leading the way in creating AI that can play many of the most complicated games better than humans. Built on deep reinforcement learning, the technology arguably represents early steps toward artificial general intelligence that can be applied outside of games.



Greg Brockman and Ilya Sutskever, cofounders of OpenAI, will be discussing the latest AI behind NLP and text generation, something many businesses are working on for their customer-engagement messaging apps. It all stems from the excitement OpenAI has generated with its work in gaming: Its bot was thrown into one of the biggest rings yet. Between April 18 and April 21, the company ran a massive-scale experiment to test how good the bot was against human Dota 2 players.
OpenAI Five had a victory rate of 99.4%, and no one was able to find the kinds of easy-to-execute exploits that human-programmed game bots suffer from.
A bot that can navigate complex strategy games is a milestone because it begins to capture aspects of the real world. It’s a step toward an AI that can handle complexity and uncertainty, offering a clearer path toward developing autonomous systems that outperform humans at the most economically valuable work.
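For readers new to the technique, here is a toy policy-gradient (REINFORCE) sketch in PyTorch that captures the core loop of learning from reward. OpenAI Five itself relies on large-scale reinforcement learning over far richer observations and actions, so treat this purely as a conceptual illustration; the environment and hyperparameters are made up.

```python
# Toy REINFORCE loop: the agent learns to walk right to reach a goal.
import torch
import torch.nn as nn

class CorridorEnv:
    """Tiny environment: start at position 0, reach position 4."""
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, self.pos + (1 if action == 1 else -1))
        done = self.pos == 4
        return self.pos, (1.0 if done else -0.01), done

policy = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=0.01)
env = CorridorEnv()

for episode in range(300):
    log_probs, rewards = [], []
    state, done = env.reset(), False
    while not done and len(rewards) < 50:
        logits = policy(torch.tensor([[float(state)]]))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, done = env.step(action.item())
        rewards.append(reward)
    # Monte Carlo return: the episode's total reward weights every action taken.
    ret = sum(rewards)
    loss = -torch.stack(log_probs).sum() * ret
    opt.zero_grad()
    loss.backward()
    opt.step()
```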

Kevin Scott, Microsoft CTO


The modern machine learning industry is built not just on advances in compute power but also on open source projects. It’s this foundation that will enable leaps forward in machine intelligence, and tech giant Microsoft is leading the charge with its new Azure Machine Learning and Azure Cognitive Services announcements.
Microsoft is working in a ton of areas relevant to the enterprise, including AI on the edge for robotics and manufacturing companies. It has also made FPGA chips for machine learning model training and inferencing generally available. Moreover, the Open Neural Network Exchange (ONNX) plays to the company’s strengths because it allows Microsoft customers to use other, non-Microsoft technologies, heralding a new era of openness. ONNX now supports Nvidia’s TensorRT and Intel’s nGraph for high-speed inference on Nvidia and Intel hardware. This comes after Microsoft joined the MLflow Project and open-sourced the high-performance inference engine ONNX Runtime.
The interoperability ONNX brings to the collection of different frameworks, runtimes, compilers, and other tools enables a larger machine learning ecosystem. FPGA chips have been used for years now to run 100% of data encryption and compression acceleration tasks for Azure. You can now build custom models using TensorFlow, PyTorch, Keras, or whichever framework you prefer, and then hardware-accelerate them with any GPU or FPGA.
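As a small example of that interoperability, the sketch below trains nothing fancy: it takes a PyTorch model, exports it to the framework-neutral ONNX format, and runs it with ONNX Runtime. The tiny MLP and file name are placeholders, and it assumes the torch, onnxruntime, and numpy packages are installed.

```python
# Export a PyTorch model to ONNX and run it with ONNX Runtime.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# Any framework-native model; a tiny MLP stands in here.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# Export to the framework-neutral ONNX format.
dummy_input = torch.randn(1, 10)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Run the exported graph with ONNX Runtime; execution providers let the
# same file target CPU, GPU, or other accelerators.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.randn(1, 10).astype(np.float32)})
print(outputs[0].shape)  # (1, 2)
```

The same exported file can then be served from other runtimes or handed to hardware-specific execution providers, which is what makes the format useful across frameworks.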
Microsoft is also now known as one of the largest employers of open source project contributors, according to the 2018 Octoverse Report released last fall by GitHub, which Microsoft acquired last year.
These are just a handful of the speakers coming to Transform, our flagship event for business executives looking to achieve results with AI. Register now to network with the AI leaders who are implementing practical, successful, real-world AI strategies.

Source
https://venturebeat.com/2019/05/14/transform-2019-hear-from-the-movers-and-shakers-in-ai/


