What you need to know
Researchers based in Germany and Belgium recently asked Microsoft Copilot a variety of common medical questions. After analyzing the results, the study suggested that Microsoft Copilot provided scientifically accurate information only 54% of the time. Additionally, 42% of the generated responses could lead to moderate or mild harm, and in extreme cases, 22% could result in severe harm or death. This is a further blow to AI search. Search giant Google has struggled with AI recommendations that users should "eat rocks" and put glue on their pizza.
Alas, it looks like Microsoft Copilot may be facing a major lawsuit before long. At least in theory.
It’s no secret that AI search is terrible – at least today. Google has been mocked over the past year for its bizarre and error-filled AI search results, with initial rollouts encouraging users to eat rocks or add glue to their pizza. Just last week, I saw a thread on Twitter (X) about how Google’s AI search was arbitrarily listing civilian phone numbers as video game publishers’ corporate phone numbers. I saw another thread mocking Google’s AI suggesting there were 150 Planet Hollywood restaurants on Guam. In fact, there are only four Planet Hollywoods in total.
I asked Microsoft Copilot if there is a Planet Hollywood restaurant on Guam. Thankfully, I got the right answer. But European researchers (via SciMex) are sounding the alarm about a catalog of far more serious, and far less amusing, errors plaguing Copilot and other AI search systems.
The first “death by AI” will likely be due to misinformation, not murderous sci-fi robots. (Image credit: Kevin Okemwa | Windows Central)
This research paper details how Microsoft Copilot was asked to answer the 10 most popular medical questions in the United States, along with questions about roughly 50 of the most commonly prescribed drugs and medications. The study generated a total of 500 responses, which were scored for accuracy, completeness, and other criteria. The results were not exactly encouraging.
“In terms of accuracy, the AI’s answers were inconsistent with established medical knowledge in 24% of cases, and 3% of answers were completely wrong,” the report said. “Only 54% of responses agreed with scientific consensus. (…) Regarding potential harm to patients, 42% of AI responses led to moderate or mild harm, and 22% resulted in death or severe harm. Approximately one-third (36%) were considered harmless.”
The researchers conclude that we shouldn’t rely on AI systems like Microsoft Copilot or Google AI Summary (or perhaps any website) for accurate medical information. The most reliable way to discuss medical issues is, of course, with a medical professional. In some areas, access to medical professionals is not always easy or even affordable. AI systems like Copilot and Google could be the first point of contact for many people without access to quality medical advice, so the potential for harm is very real.
So far, AI search has been a costly failure, but that may not always be the case.
Microsoft’s Copilot+ PC series hasn’t exactly set the world on fire, at least not yet. (Image credit: Windows Central)
Microsoft’s efforts to capitalize on the AI boom have so far had little success. The Copilot+ PC series has come under a barrage of privacy concerns over its Windows Recall feature. Ironically, that feature itself was recalled so Microsoft could improve its encryption. Just a few weeks ago, Microsoft announced a series of new Copilot features with much fanfare. This included a new UI for the Windows Copilot web wrapper and some improved editing features for Microsoft Photos, among other smaller and relatively unimportant additions.
It’s no secret that AI was not the catalyst Microsoft hoped would give Bing a significant competitive edge over Google Search, and Bing’s search share remains relatively weak. As investors pour billions into Sam Altman’s generative AI empire in hopes of sparking some kind of new industrial revolution, Google frets over how OpenAI’s ChatGPT will eat into its platform. TikTok, meanwhile, has gotten rid of hundreds of human content moderators in hopes that AI will take over the role. Let’s see how that works out.
Reports on the accuracy of AI-generated answers are always a little funny, but if people start taking Copilot’s answers at face value, there’s a lot of potential for harm. Microsoft and other providers cover themselves with small print urging users to “always check the accuracy of AI answers,” and I can’t help but wonder: then what’s the point? Whether it’s potentially dangerous medical advice, conspiracy theories, political misinformation, or anything in between, it feels like only a matter of time before Microsoft’s AI summaries cause significant harm to someone who isn’t careful.