When I saw the message on Slack that I had a chance to attend Google I/O 2018, I almost jumped out of my chair. Though there were thousands of others who got the same message, it didn’t stop me from feeling excited. Before you set any kind of expectations, let me break the news - no, it wasn’t me who got that opportunity. And that is why I attended one of the many Google I/O Extended events that happened across the world. If you are not aware, Google I/O is Google’s annual tech festival. Developers and tech enthusiasts from around the world attend the event and take part in tech talks, codelabs, discussions and much more. Google also announces its exciting products and new features during the event. Google I/O Extended events are organized by Google Developer Groups or local tech groups across the world, where people gather, watch the live stream, have discussions and fun activities, and get some swag. A few Extended events were planned in Bangalore, and I attended one.
We were all waiting for the keynote to start, and we had to wait for more than an hour until it did. Meanwhile, the organizers conducted a quiz, with swag for those who gave the correct answers. When Sundar Pichai finally took the stage, there were cheers and claps both at the actual I/O venue and at the Extended event. “Good morning,” he greeted everyone in his calm and relaxed voice. Can you guess what he started his keynote with? A major bug! Now, if you haven’t watched the keynote and are wondering why the CEO of Google would do that on such a stage - reveal to the world a major bug in one of its products - what he was really talking about was the burger emoji controversy. Can’t ask for a better start to the event than this, can we?
Noticed the big bug? (Pic Credit)
There were many exciting things announced at Google I/O, and almost all of them had one thing in common - the use of Machine Learning/Deep Learning. With technology being part of our lives more than ever, it is necessary to give people the means to acquire the skills they need, and Google has made this possible through its digital skills training courses and programs. AI is finding its place in many fields, and healthcare is one where it has proven vital. With the help of Deep Learning, doctors can now predict early on whether a person is developing a disease, and it also helps with diagnosis. All of this gives doctors more time to act. Smart Compose is Gmail’s new feature where the app suggests phrases instead of words as we type - again powered by Machine Learning. For those who use Google Assistant, there are features to get you excited too. Google Assistant will be getting six new voices to choose from, one of which is John Legend’s! Expect John Legend to answer your questions and help you with your everyday tasks sometime soon. But isn’t it time for the Google Assistant to do more than just answer our queries? Maybe book appointments for us and hold conversations instead of simply answering our questions? Yes, and that’s what Google Duplex is for.
Google Duplex has been in the making for many years. Leveraging advancements in language understanding and conversational AI, Duplex is Google’s solution for making appointments through Google Assistant. For example, the Assistant can call a salon to book an appointment for your haircut. The call happens in the background and is handled entirely by the Assistant, which then gives you back a confirmation of the appointment. Though AI has already seen a lot of game-changing advancements in Computer Vision tasks, language understanding and Natural Language Processing are yet to see that level of success. Since language understanding models process speech sequentially, latency must be kept low for the model to be efficient and come anywhere close to the human level of language understanding.
Whenever I browse through the apps on my Android phone, there is one app that makes me feel it’s been left out. Yes, I am talking about Google News. Google has added some new features to it, hopefully to improve it - and these, again, use Machine Learning. Coming to Android, version P comes with a dashboard that acts as a one-stop place where users can see how they are using their phones - which apps they spend the most time on, how many notifications the phone received, the number of times the phone was unlocked, and so on - all in the name of users’ digital wellbeing. Also, YouTube is getting a new feature that reminds you to take a break whenever it feels you have crossed the limit. So the next time you are on YouTube binge-watching videos, it won’t be just your mom or dad reminding you to take a break.
There were so many useful sessions during the I/O event, and all of them are available to watch on YouTube.