Artificial intelligence systems powered by large language models are transforming how people work and create today, from generating lines of code for software developers to producing sketches for graphic designers.
Kevin Scott, Microsoft’s chief technology officer, expects these AI systems to continue to grow in sophistication and scale—from helping address global challenges such as climate change and childhood education to revolutionizing fields from healthcare and law to materials science and science fiction.
Scott recently shared his thoughts with us on the impact of AI for knowledge workers and what’s next in AI. Here are the biggest takeaways from that conversation.
In your mind, what were some of the most important advancements in AI this year?
When we were heading into 2022, I think just about everybody in AI was anticipating really impressive things to take place over the next twelve or so months. But now that we’re pretty much through the year, even with those lofty expectations, it’s genuinely mind-blowing to look back at the magnitude of innovation that we saw across the board in AI. The things that researchers and other folks have done to advance the state of the art are just light years beyond what we thought possible even a few years ago. And almost all of this is a result of the incredibly rapid advancement that has happened with large AI models.
Three things have impressed me most this year. The first is the launch of GitHub Copilot, a large language model-based system that turns natural language prompts into code and has had a dramatic positive impact on developer productivity. It opens up coding to a much broader group of people than we’ve ever had before, which is awesome because so much of the future is dependent on our ability to write software.
The second is generative image models such as DALL∙E 2, which have become very popular and much more accessible. A fairly high degree of skill is required to sketch and draw and to master all of the tools of graphic design, illustration and art. An AI system such as DALL∙E 2 doesn’t turn ordinary people into professional artists, but it gives a ton of people a visual vocabulary that they didn’t have before—a new superpower they didn’t think they would ever have.
(Editor’s note: All images in this post except for Kevin Scott’s photograph were generated by a producer using DALL∙E 2.)
The third is that AI models are becoming more powerful and delivering even more substantial gains on the problems they’re being used to solve. I think the work on protein folding this year has been really good throughout the technology industry, including the work we’ve done with David Baker’s laboratory, the Institute for Protein Design at the University of Washington, on RoseTTAFold, bringing a bunch of advanced AI to bear on transformational things.
And so that’s just tremendously exciting. Anything that’s a force multiplier on science and medicine is just net beneficial to the world because those are where some of our biggest, nastiest problems live.
That’s a big, impressive year. And I think next year will be better.
Where do you see AI technology having the greatest impact next year and beyond?
I think with some confidence I can say that 2023 is going to be the most exciting year that the AI community has ever had. And I say that after really, genuinely believing that 2022 was the most exciting year that we’d ever had. The innovations just keep coming at a fast clip.
I talked about GitHub Copilot already, and it’s amazing. But it’s the tip of the iceberg for what large AI models are going to be able to do going forward—extrapolating the same idea to all kinds of different scenarios for how they can assist in other kinds of intellectual labor beyond coding. The entire knowledge economy is going to see a transformation in how AI helps out with repetitive aspects of your work and makes it generally more pleasant and fulfilling. This is going to apply to almost anything—designing new molecules to create medicine, making manufacturing “recipes” from 3D models, or simply writing and editing.
For example, I’ve been playing around with an experimental system I built for myself using GPT-3, designed to help me write a science fiction book, which is something that I’ve wanted to do since I was a teenager. I have notebooks full of synopses I’ve created for theoretical books, describing what the books are about and the universes where they take place. With this experimental tool, I have been able to break the logjam. When I wrote a book the old-fashioned way, if I got 2,000 words out of a day, I’d feel really good about myself. With this tool, I’ve had days where I can write 6,000 words, which for me feels like a lot. It feels like a qualitatively more energizing process than what I was doing before.
This is the “copilot for everything” dream—that you would have a copilot that could sit alongside you as you’re doing any kind of cognitive work, helping you not just get more done, but also enhancing your creativity in new and exciting ways.
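(Editor’s note: To make the “copilot for everything” idea concrete, here is a minimal, hypothetical sketch of the kind of completion call a GPT-3-based writing assistant like the one Scott describes might make. The model name, prompt and parameters are illustrative assumptions using the openai Python package’s pre-1.0 interface, not details of his actual tool.)

```python
# Hypothetical sketch of a GPT-3-based drafting helper; the model name and
# parameters are assumptions, not details of the tool described above.
import openai  # openai Python package, pre-1.0 interface

openai.api_key = "YOUR_API_KEY"

def continue_scene(synopsis: str, draft_so_far: str) -> str:
    """Ask GPT-3 to draft the next passage from a synopsis and the draft so far."""
    prompt = (
        f"Book synopsis:\n{synopsis}\n\n"
        f"Draft so far:\n{draft_so_far}\n\n"
        "Continue the scene in the same voice:\n"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3 completion model
        prompt=prompt,
        max_tokens=400,
        temperature=0.8,           # looser sampling for more varied prose
    )
    return response.choices[0].text.strip()
```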
This increase in productivity is clearly a boost to your satisfaction. Why do these tools bring more joy to work?
All of us use tools to do our work. Some of us really enjoy acquiring the tools and mastering them and figuring out how to deploy them in a super effective way to do the thing that we’re trying to do. I think that is part of what’s going on here. In many cases, people now have new and interesting and fundamentally more effective tools than they’ve had before. We did a study that found using no-code or low-code tools had a more than 80% positive impact on users’ work satisfaction, overall workload and morale. Especially for tools that are in their relatively early stages, that’s just a huge benefit to see.
For some workers, it’s literally enhancing that core flow that you get into when you’re doing the work; it speeds you up. It’s like having a better set of running shoes to go run a race or marathon. This is exactly what we’re seeing with the experiences developers are having with Copilot; they are reporting that Copilot helps them stay in the flow and keeps their minds sharper during what used to be boring and repetitive tasks. And when AI tools can help to eliminate drudgery from a job, something that is super repetitive or annoying or that was getting in their way of getting to the thing that they really enjoy, it unsurprisingly improves satisfaction.
Personally, these tools let me be in flow state longer than I was before. The enemy of creative flow is distraction and getting stuck. I get to a point where I don’t know quite how to solve the next thing, or the next thing is, like, “I’ve got to go look this thing up. I’ve got to context switch out of what I was doing to go solve the subproblem.” These tools increasingly solve the subproblem for me so that I stay in the flow.
In addition to GitHub Copilot and DALL∙E 2, AI is showing up in Microsoft products and services in other ways. How is next-generation AI improving current products such as Teams and Word?
This is the big untold story of AI. To date, most of AI’s benefits are spread across 1,000 different things where you may not even fully appreciate how much of the product experience you’re getting is coming from a machine-learned system.
For example, we’re sitting here in this Teams call on video and, in the system, there are all these parameters that were learned by a machine learning algorithm. There are jitter buffers in the audio system to smooth out the communication. The blur behind you on your screen is a machine learning algorithm at work. There are more than a dozen machine learning systems that make this experience more delightful for both of us. And that is certainly true across Microsoft.
We’ve gone from machine learning in a few places to literally 1,000 machine learning things spread across different products, everything from how your Outlook email client works, your predictive text in Word, your Bing search experience, to what your feed looks like in Xbox Cloud Gaming and LinkedIn. There’s AI all over the place making these products better.
One of the big things that has changed in the past two years is it used to be the case that you would have a model that was specialized to each one of these tasks that we have across all our products. Now you have a single model that gets used in lots of places because it’s broadly useful. Being able to invest in these models that become more powerful with scale—and then having all the things built on top of the model benefit simultaneously from improvements that you’re making—is tremendous.
Microsoft’s AI research and development continues through initiatives such as AI4Science and AI for Good. What excites you most about this area of AI?
The most challenging problems we face as a society right now are in the sciences. How do you cure these intractably complicated diseases? How do you prepare yourself for the next pandemic? How do you provide affordable, high-quality healthcare to an aging population? How do you help educate more kids at scale in the skills that they will need for the future? How do you develop technologies that will reverse some of the negative effects of carbon emissions into the atmosphere? We’re exploring how to take some of these exciting developments in AI to those problems.
The models in these basic science applications have the same scaling properties as large language models. You build a model, you get it into some self-supervised mode where it’s learning from a simulation or it’s learning from its own ability to observe a particular domain, and then the model that you get out of it lets you dramatically change the performance of an application—whether you’re doing a computational fluid dynamics simulation or you’re doing molecular dynamics for drug design.
There’s immense opportunity there. This means better medicines, it means maybe we can find the catalyst we don’t have yet to fix our carbon emission problem, it means across the board accelerating how scientists and other folks with big ideas can work to try to solve society’s biggest challenges.
How have breakthroughs in computing techniques and hardware contributed to the advances in AI?
The fundamental thing underlying almost all of the recent progress we’ve seen in AI is how critical scale has proven to be. It turns out that models trained on more data with more compute power just have a much richer and more generalized set of capabilities. If we want to keep driving this progress further—and to be clear, right now we don’t see any end to the benefits of increased scale—we need to optimize and scale up our compute power as much as we possibly can.
We announced our first Azure AI supercomputer two years ago, and at our Build developer conference this year I shared that we now have multiple supercomputing systems that we’re pretty sure are the largest and most powerful AI supercomputers in the world today. We and OpenAI use this infrastructure to train nearly all of our state-of-the-art large models, whether that’s our Turing, Z-code and Florence models at Microsoft or the GPT, DALL∙E and Codex models at OpenAI. And we just recently announced a collaboration with NVIDIA to build a supercomputer powered by Azure infrastructure combined with NVIDIA GPUs.
Some of this progress has just been via brute force compute scale with bigger and bigger clusters of GPUs. But maybe even a bigger breakthrough is the layer of software that optimizes how models and data are distributed across these giant systems, both to train the models and then to serve them to customers. If we’re going to put forth these large models as platforms that people can create with, they can’t only be accessible to the tiny number of tech companies in the world with enough resources to build giant supercomputers.
So, we’ve invested a ton in software like DeepSpeed to boost training efficiency, and the ONNX Runtime for inference. They optimize for cost and latency and generally help us make bigger AI models more accessible and valuable for people. I’m super proud of the teams we have working on these technologies because Microsoft is really leading the industry here, and we’re open sourcing all of it so others can keep improving.
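(Editor’s note: For readers curious what serving a model with the ONNX Runtime looks like in practice, here is a minimal sketch; the model file and input shape are placeholders rather than any specific Microsoft model.)

```python
# Minimal ONNX Runtime inference sketch; "model.onnx" and the input shape
# are placeholders for whatever exported model you are serving.
import numpy as np
import onnxruntime as ort

# Create an inference session; other execution providers (e.g. CUDA) can be
# requested when a GPU is available.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example input

# Passing None for the output names returns every model output.
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```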
These advances are all playing out amid an ongoing concern that AI is going to impact jobs. How do you think about the issue of AI and jobs?
We live in a time of extraordinary complexity and historic macroeconomic change, and as we look out 5, 10 years into the future, even to just achieve a net neutral balance for the whole world, we’re going to need new forms of productivity for all of us to be able to continue enjoying progress. We want to be building these AI tools as platforms that lots of people can use to build businesses and solve problems. We believe that these platforms democratize access to AI for far more people. With them, you’ll get a richer set of problems solved and you’ll have a more diverse group of people being able to participate in the creation of technology.
With the previous instantiation of AI, you needed a huge amount of expertise just to get started. Now you can call Azure Cognitive Services, you can call the Azure OpenAI Service and build complicated products on top of these things without necessarily having to be so expert at AI that you’ve got to be able to train your own large model from scratch.
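(Editor’s note: Here is a hypothetical sketch of what calling the Azure OpenAI Service looks like for a developer, using plain REST. The resource name, deployment name and api-version are placeholders; consult the service documentation for the values your subscription uses.)

```python
# Hypothetical sketch of calling the Azure OpenAI Service over REST.
# Resource name, deployment name and api-version are placeholders.
import os
import requests

resource = "my-resource"          # placeholder Azure OpenAI resource name
deployment = "my-gpt-deployment"  # placeholder model deployment name
api_version = "2023-05-15"        # assumed API version; check current docs

url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/completions?api-version={api_version}"
)
headers = {
    "api-key": os.environ["AZURE_OPENAI_KEY"],
    "Content-Type": "application/json",
}
payload = {
    "prompt": "Summarize the key points of this support ticket: ...",
    "max_tokens": 200,
}

response = requests.post(url, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```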
As all these huge AI systems continue to grow and evolve, I think we can expect that these advances are going to fundamentally change the nature of work, in some places more than others, and in some cases create a whole spate of new jobs that didn’t exist before. You can look back and see the same thing happen alongside all kinds of famous paradigm shifts in technology over history: the telephone, the automobile, the internet. And I think that just like those examples, we’re going to need new ways to think about work, new ways to think about skills, and to be super focused on making sure that we’ve got enough talented folks around and trained for the really critical jobs.
Another concern associated with AI technologies is the potential for misuse and abuse. What are the concrete steps that Microsoft is taking to ensure its AI tools and services are developed and used responsibly?
This is a thing that we take super seriously. We have a responsible AI process that our AI systems go through, and we continue to improve that process. We scrutinize what we’re doing with a multidisciplinary team of experts to try to make sure that we understand all the potentially harmful things that could happen, and we mitigate as many of them as possible. Examples include refining the datasets used to train models, deploying filters to limit the generation of harmful content, integrating techniques like query blocking on sensitive topics to help prevent misuse by bad actors, and applying technology that can return more helpful and diverse responses and results. And for each AI system we have a plan in place so that, post-launch, we can detect and mitigate any harms we didn’t anticipate as quickly as possible.
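(Editor’s note: The following is a deliberately simplified, hypothetical sketch of the kind of pre-generation query blocking and post-generation filtering described above. Production systems rely on trained classifiers and much richer policy, not keyword lists.)

```python
# Toy sketch only: real deployments use trained harm classifiers and much
# richer policy than a keyword list.
BLOCKED_TOPICS = {"example-sensitive-topic-1", "example-sensitive-topic-2"}

def is_query_allowed(query: str) -> bool:
    """Block prompts that touch configured sensitive topics before generation."""
    lowered = query.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def filter_response(text: str, looks_harmful) -> str:
    """Withhold generated text that a harm classifier flags after generation."""
    if looks_harmful(text):  # looks_harmful: callable returning True for harmful text
        return "[response withheld by content filter]"
    return text
```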
Another very important safeguard is intentional and iterative deployment. Most of the work that we do is on models that have broad capability. We host them in our cloud, and we make them accessible by API or through our products. For the API, any developer can get access to it, but they have to comply with the terms of service in order to use it, and if they violate the terms of service, their access can be taken away. And for other products, we may start with a limited preview with a select number of customers with well-defined use cases in mind. Collaborations with these early customers will help us make sure the responsible AI safeguards are working in practice so we can scale adoption more broadly.
We truly believe safety and responsibility are important, and hopefully we can offer some encouragement to the whole industry. To that end, the resources and expertise we’ve applied toward developing solutions are being shared with the broader community through our Responsible AI Standard and Principles.
Top image: Center photograph of Microsoft Chief Technology Officer Kevin Scott is courtesy of Microsoft. Left and right images were created by a producer using DALL∙E 2, OpenAI’s AI system that can create realistic images and artwork from text descriptions.