
“The world is moving from ‘Mobile First’ to ‘AI First’”

Google’s annual developer conference, I/O, was held in Mountain View, California, in the US.

In the keynote speech, Google CEO Sundar Pichai said, “We are currently in a new era of computing. We are transitioning from a mobile-first to an AI-first world. And this time, Google will do its best to provide equal access to information for everyone and to discover new opportunities that help millions of people.”

He concluded the speech by saying, “There is still a long way to go before we enter the AI world. But the more we can give people access to the tools and technologies, the sooner everyone will benefit from them.”

Sundar Pichai joined Google in 2004 and led the development of the Google Toolbar and Chrome. After 11 years of working closely with Google’s co-founders Larry Page and Sergey Brin, he became CEO of Google in August 2015.

 

(The following is the full text of the keynote speech by Google CEO Sundar Pichai. Source: Google Blog, https://blog.google/topics/machine-learning/making-ai-work-for-everyone/)

I’ve now been at Google for 13 years, and it’s remarkable how the company’s founding mission of making information universally accessible and useful is as relevant today as it was when I joined. From the start, we’ve looked to solve complex problems using deep computer science and insights, even as the technology around us forces dramatic change.

The most complex problems tend to be ones that affect people’s daily lives, and it’s exciting to see how many people have made Google a part of their day—we’ve just passed 2 billion monthly active Android devices; YouTube has not only 1 billion users but also 1 billion hours of watchtime every day; people find their way along 1 billion kilometers across the planet using Google Maps each day. This growth would have been unthinkable without computing’s shift to mobile, which made us rethink all of our products—reinventing them to reflect new models of interaction like multi-touch screens.

We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world. And as before, it is forcing us to reimagine our products for a world that allows a more natural, seamless way of interacting with technology. Think about Google Search: it was built on our ability to understand text in webpages. But now, thanks to advances in deep learning, we’re able to make images, photos and videos useful to people in a way they simply haven’t been before. Your camera can “see”; you can speak to your phone and get answers back—speech and vision are becoming as important to computing as the keyboard or multi-touch screens.

The Assistant is a powerful example of these advances at work. It’s already available across 100 million devices, and getting more useful every day. We can now distinguish between different voices in Google Home, making it possible for people to have a more personalized experience when they interact with the device. We are now also in a position to make the smartphone camera a tool to get things done. Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information. If you’ve ever crawled around on a friend’s apartment floor to read a long, complicated Wi-Fi password on the back of a router, your phone can now recognize the password, see that you’re trying to log into a Wi-Fi network, and log you in automatically. The key thing is, you don’t need to learn anything new to make this work—the interface and the experience can be much more intuitive than, for example, copying and pasting across apps on a smartphone. We’ll first bring Google Lens capabilities to the Assistant and Google Photos, and you can expect it to make its way to other products as well.
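As a rough illustration of the kind of pipeline behind that Wi-Fi example, the sketch below uses the open-source Tesseract OCR engine (via the pytesseract package) to read a photo of a router label and pull the credentials out with a regular expression. This is not the Google Lens API, which is not public; the label format and the file name router_label.jpg are assumptions made purely for illustration.

```python
# Toy sketch of the Lens Wi-Fi idea: OCR a router-label photo, then
# extract the SSID and password. Uses the open-source Tesseract engine
# via pytesseract (the Tesseract binary must be installed), not Lens.
import re

from PIL import Image
import pytesseract


def extract_wifi_credentials(image_path: str) -> dict:
    """Read SSID and password off a router-label photo, if present."""
    text = pytesseract.image_to_string(Image.open(image_path))
    # The label format below ("SSID: ...", "Password: ...") is an
    # assumption; real labels vary widely.
    ssid = re.search(r"SSID[:\s]+(\S+)", text, re.IGNORECASE)
    password = re.search(r"(?:Password|Key|PSK)[:\s]+(\S+)", text, re.IGNORECASE)
    return {
        "ssid": ssid.group(1) if ssid else None,
        "password": password.group(1) if password else None,
    }


if __name__ == "__main__":
    # "router_label.jpg" is a hypothetical input photo.
    print(extract_wifi_credentials("router_label.jpg"))
```

A real system would, as the speech notes, go one step further and feed the recognized credentials into the OS network-join flow so the user never copies anything by hand.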

[Warning, geeky stuff ahead!!!]

All of this requires the right computational architecture. Last year at I/O, we announced the first generation of our TPUs, which allow us to run our machine learning algorithms faster and more efficiently. Today we announced our next generation of TPUs—Cloud TPUs, which are optimized for both inference and training and can process a LOT of information. We’ll be bringing Cloud TPUs to the Google Compute Engine so that companies and developers can take advantage of it.
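For a sense of what “bringing Cloud TPUs to the Google Compute Engine” looks like from a developer’s side, here is a minimal sketch using TensorFlow’s TPU distribution APIs. Note that these tf.distribute calls come from later TensorFlow 2.x releases rather than the 2017-era TPUEstimator interface, and the TPU name "my-tpu" is a placeholder, so treat this as illustrative only.

```python
# Minimal sketch: connect to a Cloud TPU and place a Keras model on it.
# The TPU name "my-tpu" is a placeholder; these APIs are from TF 2.x,
# not the interface available when Cloud TPUs were first announced.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Variables and training steps for anything built inside the strategy
# scope are distributed across the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```

The point of the announcement is exactly this developer experience: the same model code, with the heavy lifting of both training and inference moved onto TPU hardware rented through Compute Engine.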

It’s important to us to make these advances work better for everyone—not just for the users of Google products. We believe huge breakthroughs in complex social problems will be possible if scientists and engineers can have better, more powerful computing tools and research at their fingertips. But today, there are too many barriers to making this happen.

That’s the motivation behind Google.ai, which pulls all our AI initiatives into one effort that can lower these barriers and accelerate how researchers, developers and companies work in this field.

One way we hope to make AI more accessible is by simplifying the creation of machine learning models called neural networks. Today, designing neural nets is extremely time intensive, and requires an expertise that limits its use to a smaller community of scientists and engineers. That’s why we’ve created an approach called AutoML, showing that it’s possible for neural nets to design neural nets. We hope AutoML will take an ability that a few PhDs have today and will make it possible in three to five years for hundreds of thousands of developers to design new neural nets for their particular needs.
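Google’s published AutoML work uses a recurrent controller network trained with reinforcement learning to propose child architectures; the toy sketch below substitutes plain random search over hidden-layer sizes to show the core loop (propose an architecture, train it, keep the best). It is purely illustrative and not Google’s implementation.

```python
# Toy neural-architecture search: randomly sample hidden-layer layouts,
# train a small net for each, and keep the best validation score.
# Stands in for AutoML's RL-based controller purely for illustration.
import random

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

best_score, best_arch = 0.0, None
for _ in range(10):  # sample 10 candidate architectures
    n_layers = random.randint(1, 3)
    arch = tuple(random.choice([16, 32, 64, 128]) for _ in range(n_layers))
    net = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    net.fit(X_train, y_train)
    score = net.score(X_val, y_val)
    if score > best_score:
        best_score, best_arch = score, arch

print(f"Best architecture {best_arch}, validation accuracy {best_score:.3f}")
```

The real systems differ mainly in how candidates are proposed (a learned controller rather than random sampling) and in scale, but the search-train-select loop is the same idea.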

In addition, Google.ai has been teaming Google researchers with scientists and developers to tackle problems across a range of disciplines, with promising results. We’ve used ML to improve the algorithm that detects the spread of breast cancer to adjacent lymph nodes. We’ve also seen AI make strides in the time and accuracy with which researchers can predict the properties of molecules and even sequence the human genome.

This shift isn’t just about building futuristic devices or conducting cutting-edge research. We also think it can help millions of people today by democratizing access to information and surfacing new opportunities. For example, almost half of U.S. employers say they still have issues filling open positions. Meanwhile, job seekers often don’t know there’s a job opening just around the corner from them, because the nature of job posts—high turnover, low traffic, inconsistency in job titles—has made them hard for search engines to classify. Through a new initiative, Google for Jobs, we hope to connect companies with potential employees, and help job seekers find new opportunities. As part of this effort, we will be launching a new feature in Search in the coming weeks that helps people look for jobs across experience and wage levels—including jobs that have traditionally been much harder to search for and classify, like service and retail jobs.

It’s inspiring to see how AI is starting to bear fruit that people can actually taste. There is still a long way to go before we are truly an AI-first world, but the more we can work to democratize access to the technology—both in terms of the tools people can use and the way we apply it—the sooner everyone will benefit.

Reporter / In my eyes, the people behind startups are celebrities. I document their present day. Occasionally I also go abroad for coverage and write service reviews.
