May 29, 2023

Here are some other highlights from Mr. Pichai’s remarks:

On the initial lukewarm reaction to Google’s Bard chatbot:

We knew we wanted to be careful when we rolled Bard out… so I’m not surprised by that reaction. But in some ways, I feel like we took a souped-up Civic and raced it against more powerful cars. What amazes me is how well it performs on many, many classes of queries. But we’ll iterate quickly. We clearly have more capable models. Soon, maybe as it comes online, we will upgrade Bard to some of our more capable PaLM models, which will bring more capabilities, whether it’s in reasoning or coding, and it will answer math problems better. So you’ll see progress over the next week.

On whether the success of ChatGPT was unexpected:

With OpenAI, we have a lot of context. There are some really good people there, some of whom worked at Google before, so we know what the team is capable of. So I don’t think OpenAI’s progress surprised us. With ChatGPT… you know, credit to them for finding product-market fit. I think its acceptance by users has been a pleasant surprise, maybe even to them, as it has been to many of us.

On his concerns about tech companies’ race to advance AI:

Sometimes I worry when people use the words “race” and “being first.” I’ve thought about AI for a long time, and the technologies we’re working on are going to be incredibly beneficial, but they clearly also have the potential to cause harm in a deep way. So I think it’s really important that we all approach this responsibly.

On the return of Larry Page and Sergey Brin:

I have had several meetings with them. Sergey has been spending a lot of time with our engineers. He’s a deep mathematician and computer scientist. So to him, the underlying technology — I think, to use his words, he’d say it’s the most exciting thing he has seen in his lifetime. So there’s all that excitement. I’m very happy. They have always said, “Call us whenever you need to.” And I call them.

On the open letter signed by nearly 2,000 AI researchers and tech luminaries, including Elon Musk, urging companies to declare a moratorium of at least six months on developing powerful AI systems:

In this area, I think it’s important to hear the concerns. There are a lot of thoughtful people behind it, including people who have thought about AI for a long time. I remember talking to Elon eight years ago, and he was deeply concerned about AI safety then. I think he has consistently cared. And I think it’s worth paying attention. While I may not agree with everything in there, or with the details of how you would approach it, I think the spirit of [the letter] is worth considering.

On whether he worries about the dangers of creating artificial general intelligence, or AGI (an artificial intelligence that surpasses human intelligence):

Whither AGI? What is it? How do you define it? When do we get there? All of these are good questions. But to me, it almost doesn’t matter, because it is so clear to me that these systems are going to be very, very capable. So it almost doesn’t matter whether you’ve reached AGI or not; you’re going to have systems that deliver benefits, and potentially cause real harm, at a scale we’ve never seen before. Can we have an AI system that creates disinformation at scale? Yes. Is it artificial general intelligence? It really doesn’t matter.

Why climate change activism has him hopeful about artificial intelligence:

One of the things that gives me hope about AI, like climate change, is that it affects everyone. We all live on the same planet, so both problems share a similar characteristic: you can’t achieve AI safety unilaterally. By definition, it affects everyone. And that tells me that, over time, the collective will of people will address all of these issues responsibly.
