Google Week
Hear from insiders building Med-PaLM M and PaLM 2. Plus, how you can shape the future of the show!
We’re back with another edition, and this time it’s Google Week at The Cognitive Revolution!
Plus, we’d love to hear your thoughts about the show using this form.
Stay tuned till the end to see how you can help shape the future of the show. We’re in the arena trying stuff…
Let’s dive in!
💻 Google’s Paige Bailey on PaLM 2
Listen here: Spotify | Apple | YouTube
Can you imagine a more interesting job anywhere in the world today than "Lead Product Manager" on Google DeepMind’s PaLM 2?
Paige Bailey played this pivotal role, managing a mission-critical AI project at Google in the wake of ChatGPT. Google has more AI resources — data, compute, and algorithm expertise — than any other company in the world. But they also have more to gain or lose! How do you pull the team together? How do you reserve the compute?
And what is it like to manage the development of a frontier language model in general? These products have more surface area than any previously created, and with each new generation (or should I say each additional order of magnitude of pre-training compute), we are finding all sorts of new, sometimes quite surprising capabilities.
And then what's it like to work with the rest of Google as that same core technology is deployed across dozens of products and begins to be customized for all sorts of specific purposes as well?
These were just some of the questions I had for Paige, and her answers did not disappoint. Comparing the project to the Apollo program (not an unreasonable comparison, particularly considering the incredible data & compute platforms on which PaLM 2 was built), Paige offers a truly unique perspective on the process of training a frontier language model in 2023.
We talk about:
her early AI eureka moments
her responsibilities as product manager
how the team comes together to define how they want the model to behave
how many advances seem to be happening ahead of schedule at Google right now
how we should understand models' reasoning ability and the special opportunity that citizen scientists have to contribute to this research
the AI products she is most excited about, where the next breakthroughs might come from, and lots more.
🩺 Google’s Vivek Natarajan and Tao Tu on Med-PaLM M
Listen here: Spotify | Apple | YouTube
We chatted about Google's new multimodal Med-PaLM M model with returning guest Vivek Natarajan and lead co-author Tao Tu.
This paper, published just a few months after Vivek was on The Cognitive Revolution to discuss Med-PaLM 2, extends Google's insane run in generalist medical AI by training a single system that accepts not just clinical text but also a wide range of medical imaging and even genomics data, and performs 14 distinct medical tasks, of which text-only medical question answering is just one.
The headline from this work is that this single model set new state-of-the-art performance records on a number of tasks, and came close on several others, all with a single set of weights. For radiology report generation specifically, the AI output was preferred to that of a human radiologist more than 40% of the time.
The promise for society, over the next couple of years, is no less than an AI doctor on everyone's phone around the world – one that can not only understand patient language and images, but also incorporate and interpret things like genomic data in superhuman ways.
The insights from this conversation were many. We talked about:
how predictable such incredible progress has become
the many different tricks & best practices that go into training a large-scale model like this
how quickly & efficiently they can conduct this work as they "stand on the shoulders of giants" at Google
the extremely promising generalization that this system is already showing
how much low-hanging fruit remains available to improve future models' performance
how Google's strategy of building comparatively narrow specialist systems drives value while also promoting safety
the path to clinical testing and deployment of generalist medical AIs
If you've got any doubts about AI having a major impact on humanity over the next few years, I think that after listening to this conversation and considering not just where we are today, but how consistently we are moving forward and how much room we clearly have left to run, those doubts will pretty quickly fade away.
📚 Nathan’s Reads
Consciousness in AI paper and associated article
An interview with one of the contributors may be in the works… If you have any questions, send them our way! DM Nathan at @labenz on Twitter or email tcr@turpentine.co
AI that can detect what you're typing from keystroke sounds paper
Classic example of AI reaching superhuman capabilities. Not surprising, especially considering how easy it is to collect quality training data: you can record the audio and map each sound to the typed output explicitly (see the sketch after this list)
Always good to get out of our bubble. I think the AI story is currently notably non-partisan (hope it lasts!), and in my view very reasonable in its ambivalence toward increasingly powerful AI
A contest for people to imagine how AI might create radically different yet positive futures
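To make the "easy data collection" point concrete, here is a minimal, hypothetical sketch (my illustration, not anything from the paper) of how one might pair keystroke audio with labels: it assumes the third-party `sounddevice` and `pynput` packages, and the sample rate and clip lengths are illustrative guesses.

```python
# Hypothetical illustration only (not from the paper): collect (key, audio)
# training pairs by recording the microphone while logging keystrokes.
# Assumes the third-party packages `sounddevice` and `pynput`; constants
# like SAMPLE_RATE and CLIP_SECONDS are illustrative choices.
import time
from collections import deque

import numpy as np
import sounddevice as sd
from pynput import keyboard

SAMPLE_RATE = 16_000                     # audio samples per second
CLIP_SECONDS = 0.3                       # audio kept around each keystroke
RING_SECONDS = 2.0                       # rolling buffer length

ring = deque(maxlen=int(SAMPLE_RATE * RING_SECONDS))  # recent mono samples
labeled_clips = []                       # list of (label, audio_clip) pairs

def audio_callback(indata, frames, time_info, status):
    # Continuously append incoming mono samples to the rolling buffer.
    ring.extend(indata[:, 0].tolist())

def on_press(key):
    # Wait briefly so the tail of the keystroke sound lands in the buffer,
    # then snapshot the last CLIP_SECONDS of audio as a labeled example.
    time.sleep(CLIP_SECONDS)
    clip = np.array(ring)[-int(SAMPLE_RATE * CLIP_SECONDS):]
    label = getattr(key, "char", None) or str(key)
    labeled_clips.append((label, clip))

with sd.InputStream(samplerate=SAMPLE_RATE, channels=1, callback=audio_callback):
    with keyboard.Listener(on_press=on_press) as listener:
        listener.join()                  # type normally; every press yields a pair
```

From there, the labeled clips can feed directly into any standard audio classifier, which is why superhuman performance on this task isn't shocking.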
📣 Call for Feedback
To borrow from a meme… we’re in the podcast arena trying stuff. Some will work. Some won’t. But we’re always learning.
Fill out this feedback form to let us know how we can continue delivering great content to you, or send the feedback on your mind to tcr@turpentine.co.
Thanks as always for listening and supporting the show.