From ChatGPT to Google Bard: how AI is rewriting the internet – The Verge

By Umar Shakir, a news writer fond of the electric vehicle lifestyle and things that plug in via USB-C. He spent over 15 years in IT support before joining The Verge.
Big players, including Microsoft with Copilot, Google with Bard, and OpenAI with ChatGPT and GPT-4, are making AI chatbot technology that was previously restricted to test labs more accessible to the general public.
How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 
Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
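To make that autocomplete idea concrete, here is a toy illustration: a tiny bigram model that only counts which word tends to follow which in a made-up corpus. Real LLMs learn these statistics with neural networks over billions of tokens; the corpus and prediction below are purely illustrative.

```python
# Toy illustration of "autocomplete-style" next-word prediction.
# A real LLM learns probabilities over tokens with a neural network;
# this sketch just counts word pairs in a tiny invented corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word, with no notion of truth."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "cat", because "the cat" is the most frequent pair
```

The model has no idea whether "the cat ate the fish" is true; it only knows that "cat" plausibly follows "the," which is exactly the limitation Vincent describes.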
There are many more pieces of the AI landscape still coming into play, and there are going to be problems along the way, but you can be sure to see it all unfold here on The Verge.
Feb 12
Gregory Barber
I was recently sitting in a hot tub with a friend — a glaciologist who studies how quickly the southern ice is sliding into the sea — when she mentioned that she had recently planned her honeymoon using ChatGPT. Our fellow bathers burst into laughter. “You’d done it, too?” This, apparently, is the present state of get-togethers among friends in their early 30s: six people and three AI-assisted honeymoons between them.
My friend is a pro at arranging helicopters and snow cat brigades to remote wedges of ice. But she was overloaded with decisions about charger plates and floral arrangements, and had put the task of arranging a 12-day trip to Tasmania on her husband-to-be. A statistician, he was using ChatGPT to answer questions at work, following the advice of a mentor who told him he’d better make a habit of it. So he asked it for an itinerary that would emphasize the couple’s love of nature, adventure, and (it being a honeymoon) luxury. A specific request: time for at least one lengthy trail run.
Feb 12
Mia Sato
In late December 2023, several of Brian Vastag and Beth Mazur’s friends were devastated to learn that the couple had suddenly died. Vastag and Mazur had dedicated their lives to advocating for disabled people and writing about chronic illness. As the obituaries surfaced on Google, members of their community began to dial each other up to share the terrible news, even reaching people on vacations halfway around the world. 
Except Brian Vastag was very much alive, unaware of the fake obituaries that had leapt to the top of Google Search results. Beth Mazur had in fact passed away on December 21st, 2023. But the spammy articles that now filled the web claimed that Vastag himself had died that day, too.
Feb 11
Wes Davis
The AI-generated stand-in voice for imprisoned former Pakistani Prime Minister Imran Khan claimed victory on behalf of his party in Pakistan’s parliamentary elections on Saturday, according to The New York Times.
The party has used an AI version of his voice this way for months. As the Times writes, the use highlights both the usefulness and the danger of generative AI in elections.
Feb 10
Wes Davis
Now, the chatbot formerly known as Bard will respond to your queries when you stop talking, regardless of how you summoned it. Before, that only worked when you invoked Google’s chatbot with the phrase “Hey Google.”
Feb 8
Amrita Khalid
The rumors are true: even Notepad is getting a generative AI boost. A new feature called “Explain with Copilot” will help users decipher any text, code segments, or log files they select within the text editor as Microsoft’s AI add-on enters its second year.
Microsoft announced the feature is in beta testing, available to Windows Insiders in the Canary and Dev Channels.
Feb 8
David Pierce
Google is famous for having a million similar products with confusingly different names and seemingly nothing in common. (Can I interest you in a messaging app?) But when it comes to its AI work, going forward there is only one name that matters: Gemini.
The company announced on Thursday that it is renaming its Bard chatbot to Gemini, releasing a dedicated Gemini app for Android, and even folding all its Duet AI features in Google Workspace into the Gemini brand. It also announced that Gemini Ultra 1.0 — the largest and most capable version of Google’s large language model — is being released to the public. 
Feb 6
Emilia David
OpenAI’s image generator DALL-E 3 will add watermarks to image metadata as more companies roll out support for standards from the Coalition for Content Provenance and Authenticity (C2PA).
The company says watermarks from C2PA will appear in images generated on the ChatGPT website and the API for the DALL-E 3 model. Mobile users will get the watermarks by February 12th. They’ll include both an invisible metadata component and a visible CR symbol, which will appear in the top left corner of each image.
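Because the C2PA data lives in the image file itself, it can be inspected locally. Here is a minimal sketch, assuming the open-source c2patool CLI from the Content Authenticity Initiative is installed and on your PATH; the file name is illustrative.

```python
# Minimal sketch: inspect an image for embedded C2PA provenance metadata.
# Assumes the open-source `c2patool` CLI (Content Authenticity Initiative)
# is installed; the image path below is made up for illustration.
import json
import subprocess

def read_c2pa_manifest(image_path: str) -> dict | None:
    """Return the C2PA manifest report for an image, or None if none is found."""
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest found or tool error
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("dalle3_output.png")
if manifest:
    print(json.dumps(manifest, indent=2)[:500])
else:
    print("No C2PA metadata detected; it can be lost to screenshots or re-encoding.")
```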
Feb 3
Wes Davis
Hugging Face tech lead Philipp Schmid posted yesterday that users can now create custom chatbots in “two clicks” using Hugging Chat Assistant. Users’ creations are then publicly available.
Schmid directly compares the feature to OpenAI’s GPTs and adds that the assistants can use “any available open LLM, like Llama2 or Mixtral.”
Feb 1
Emilia David
When Microsoft started its big AI push last year, it launched Copilot tools for Sales and Service that summarize meetings, manage customer lists, and find information for customer service agents. Now those tools are more widely available.
Microsoft isn’t the only one applying AI to these tasks — AWS announced a slew of generative AI services for contact centers in December, including transcriptions of audio calls and Q for Amazon Connect, which lets users ask questions about their data.
[Microsoft Dynamics 365 Blog]
Feb 1
Emma Roth
Amazon has taken the wraps off of an AI shopping assistant, and it’s called Rufus — the same name as the company’s corgi mascot. The new chatbot is trained on Amazon’s product library and customer reviews, as well as information from the web, allowing it to answer questions about products, make comparisons, provide suggestions, and more.
Rufus is still in beta and will only appear for “select customers” before rolling out to more users in the coming weeks. If you have access to the beta, you can open up a chat with Rufus by launching Amazon’s mobile app and then typing or speaking questions into the search bar. A Rufus chat window will show up at the bottom of your screen, which you can expand to get an answer to your question, select suggested questions, or ask another question.
Feb 1
Andrew J. Hawkins
Google is bringing generative AI to — where else? — Google Maps, promising to help users find cool places through the use of large language models (LLMs).
The feature will answer queries for restaurant or shopping recommendations, for example, using its LLM to “analyze Maps’ detailed information about more than 250 million places and trusted insights from our community of over 300 million contributors to quickly make suggestions for where to go.”
Jan 31
Emilia David
During the January Microsoft Research Forum, Dipendra Misra, a senior researcher at Microsoft Research Lab NYC and AI Frontiers, explained how Layer-Selective Rank Reduction (or LASER) can make large language models more accurate. 
With LASER, researchers can “intervene” and replace one large weight matrix with an approximate, smaller one. Weights are the contextual connections models make; the heavier the weight, the more the model relies on it. So, does swapping in an approximation that throws away some of those correlations and contexts make the model less accurate? Based on their test results, the answer, surprisingly, is no.
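In practice, that intervention amounts to a low-rank approximation of a single weight matrix. Here is a minimal sketch of the idea using truncated SVD in PyTorch; the random matrix, its size, and the rank are illustrative, not Microsoft’s actual implementation.

```python
# Minimal sketch of the idea behind LASER: replace a weight matrix with a
# low-rank approximation built from its largest singular values. The matrix
# here is random and the rank is an arbitrary choice for illustration.
import torch

def low_rank_approximation(weight: torch.Tensor, rank: int) -> torch.Tensor:
    """Return a rank-`rank` approximation of `weight` via truncated SVD."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]

# Pretend this is one weight matrix inside a transformer block.
W = torch.randn(1024, 1024)
W_reduced = low_rank_approximation(W, rank=32)

# The replacement keeps the original shape but stores far less information;
# LASER's finding is that, for the right layers, accuracy can hold or improve.
print(W.shape, W_reduced.shape, torch.linalg.matrix_rank(W_reduced).item())
```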
Jan 31
Wes Davis
New York lawyer Jae Lee will face an attorney grievance panel after trusting known liar ChatGPT for case research.
The court’s order says Lee filed a “defective brief” citing a made-up case. Lee isn’t alone — others have fallen for the allure of chatbots, including former Trump lawyer Michael Cohen and counsel representing a member of The Fugees.
[Reuters]
Jan 29
Emilia David
Meta’s latest update to its code generation AI model, Code Llama 70B, is “the largest and best-performing model” yet. Code Llama tools launched in August and are free for both research and commercial use. According to a post on Meta’s AI blog, Code Llama 70B can handle more queries than previous versions, which means developers can feed it more prompts while programming, and it can be more accurate.
Code Llama 70B scored 53 percent in accuracy on the HumanEval benchmark, performing better than GPT-3.5’s 48.1 percent and closer to the 67 percent mark an OpenAI paper (PDF) reported for GPT-4.
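For context, HumanEval results like these are typically reported as pass@k: the estimated chance that at least one of k generated samples for a problem passes its unit tests. Below is a small sketch of the standard unbiased estimator from OpenAI’s Codex paper; the sample counts are made up.

```python
# pass@k: probability that at least one of k sampled solutions is correct,
# estimated from n samples of which c passed. Estimator from the Codex paper.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k given n samples, c of which are correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples per problem, 90 of them pass the tests.
print(round(pass_at_k(n=200, c=90, k=1), 3))   # 0.45
print(round(pass_at_k(n=200, c=90, k=10), 3))  # ~1.0: one of 10 tries almost surely passes
```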
Jan 29
David Pierce
A few minutes ago, I opened the new Arc Search app and typed, “What happened in the Chiefs game?” That game, the AFC Championship, had just wrapped up. Normally, I’d Google it, click on a few links, and read about the game that way. But in Arc Search, I typed the query and tapped the “Browse for me” button instead.
Arc Search, the new iOS app from The Browser Company, which has been working on a browser called Arc for the last few years, went to work. It scoured the web — reading six pages, it told me, from Twitter to The Guardian to USA Today — and returned a bunch of information a few seconds later. I got the headline: Chiefs win. I got the final score, the key play, a “notable event” that also just said the Chiefs won, a note about Travis Kelce and Taylor Swift, a bunch of related links, and some more bullet points about the game.
Jan 27
Emilia David
Google’s new video generation AI model Lumiere uses a diffusion architecture called Space-Time U-Net, or STUNet, that figures out where things are in a video (space) and how they simultaneously move and change (time). Ars Technica reports this method lets Lumiere create the video in one process instead of stitching smaller still frames together.
Lumiere starts with creating a base frame from the prompt. Then, it uses the STUNet framework to begin approximating where objects within that frame will move to create more frames that flow into each other, creating the appearance of seamless motion. Lumiere also generates 80 frames compared to 25 frames from Stable Video Diffusion.
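A rough sketch of what handling space and time together can look like in code: treat the video as a single 5D tensor and downsample frames and pixels in one 3D convolution, instead of generating keyframes and interpolating between them afterward. The layer sizes below are illustrative and not Lumiere’s actual architecture.

```python
# Sketch of a "space-time" downsampling block: one 3D convolution mixes
# information across frames (time) and pixels (height/width) at once.
# Channel counts and resolutions are arbitrary for illustration.
import torch
import torch.nn as nn

class SpaceTimeDownBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # stride=2 halves time, height, and width together in a single pass.
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)
        self.act = nn.SiLU()

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video shape: (batch, channels, time, height, width)
        return self.act(self.conv(video))

# 80 frames of a 64x64 latent video as a single tensor.
video = torch.randn(1, 4, 80, 64, 64)
down = SpaceTimeDownBlock(4, 8)(video)
print(down.shape)  # torch.Size([1, 8, 40, 32, 32]) -- time and space reduced together
```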
Jan 25
Emilia David
In a blog post, OpenAI said the updated GPT-4 Turbo “completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of ‘laziness’ where the model doesn’t complete a task.”
The company, however, did not explain what it updated.
Jan 25
Emilia David
Google Cloud’s new partnership with AI model repository Hugging Face is letting developers build, train, and deploy AI models without needing to pay for a Google Cloud subscription. Now, outside developers using Hugging Face’s platform will have “cost-effective” access to Google’s tensor processing units (TPU) and GPU supercomputers, which will include thousands of Nvidia’s in-demand and export-restricted H100s.
Hugging Face is one of the more popular AI model repositories, storing open-sourced foundation models like Meta’s Llama 2 and Stability AI’s Stable Diffusion. It also hosts many datasets for model training.
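For a sense of the developer workflow involved, here is a minimal sketch of pulling an open model from the Hub with the transformers library; the Llama 2 checkpoint is gated behind Meta’s license on the Hub, and the prompt and generation settings are just examples.

```python
# Minimal sketch: download an open model from the Hugging Face Hub and
# generate text with it. Llama 2 weights are gated; you must accept Meta's
# license on the Hub first, and the download is several gigabytes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("Explain what a tensor processing unit is:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```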
Jan 23
Emilia David
Google ended its contract with Appen, an Australian data company involved in training the large language models behind Bard, Search, and other Google products, even as the competition to develop generative AI tools increases. “Our decision to end the contract was made as part of our ongoing effort to evaluate and adjust many of our supplier partnerships across Alphabet to ensure our vendor operations are as efficient as possible,” Google spokesperson Courtenay Mencini said in a statement sent to The Verge.
Appen notified the Australian Securities Exchange in a filing, saying it “had no prior knowledge of Google’s decision to terminate the contract.”
Jan 23
Emilia David
The Information reports that Microsoft’s new GenAI team will focus on developing smaller language models (SLMs) that are similar to LLMs like OpenAI’s GPT-4 but use less computing power. Microsoft spent hundreds of millions of dollars on chips for one supercomputer to run AI models, so any savings help.
The GenAI team will be led by Microsoft corporate vice president Misha Bilenko and report to Microsoft CTO Kevin Scott.
Correction January 26th, 2024, 10:08AM ET: The Information has updated its earlier report, which said this will be part of the Azure cloud unit. It will in fact report to Microsoft’s CTO Kevin Scott.
[The Information]
Jan 21
Wes Davis
Yesterday, The Washington Post reported that AI start-up Delphi cannot use OpenAI’s platform after it created Dean.Bot, a chatbot mimicking Representative Dean Phillips (D-MN) for a super PAC supporting his presidential bid.
The bot ran afoul of OpenAI’s recently adopted misinformation policy that, among other things, disallows political campaigning using ChatGPT. The super PAC will reportedly try again with an open-source alternative.
[The Washington Post]
Jan 19
Emilia David and Richard Lawler
A new report from Bloomberg says that once-again OpenAI CEO Sam Altman’s efforts to raise billions for an AI chip venture are aimed at using that cash to develop a “network of factories” for chip fabrication that would stretch around the globe and involve working with unnamed “top chip manufacturers.”
A major cost and limitation of running AI models is having enough chips to handle the computations behind bots like ChatGPT or DALL-E as they answer prompts and generate images. Nvidia’s value rose above $1 trillion for the first time last year, partly due to the virtual monopoly it holds: GPT-4, Gemini, Llama 2, and other models depend heavily on its popular H100 GPUs.
Jan 18
Amrita Khalid
CES 2024 darling Rabbit has announced a partnership with Perplexity that will link the “conversational AI-powered answer engine” to the R1, a $199 Teenage Engineering-designed AI gadget that’s already sold through 50,000 preorders. Unlike LLMs that can only reference data up to a certain date in the past, what they’re pitching for the R1 is a built-in search engine with “live up to date answers without any knowledge cutoff.”
According to Perplexity co-founder Aravind Srinivas, who announced the deal in a live Spaces broadcast with Rabbit CEO Jesse Lyu, the first 100,000 Rabbit R1 purchases will also come with one year of its Perplexity Pro subscription. The plan includes access to newer LLMs like GPT-4 and normally costs $20 per month.
Jan 18
Emma Roth
9to5Google found code within the Google Messages app that suggests it’s going to add Bard to “help you write messages, translate languages, identify images, and explore interests.” The feature, codenamed “penpal,” will reportedly put Bard in a standalone chat where you can ask it to generate ideas, identify images, and make suggestions.
[9to5Google]
Jan 17
Sheena Vasani
Politico Pro paid subscribers will now see “thousands more Federal bill summaries” generated by AI. The new Legislative Compass feature delivers both brief and in-depth summaries of federal bills, which Politico says should help public policy professionals respond to legislative changes faster.
The news comes after Politico’s parent company, Axel Springer, and OpenAI announced a partnership in December. As a part of the deal, Axel Springer can build with OpenAI’s technology while ChatGPT can share content from its publications.

[POLITICO]