Early Earth looked very different from how it does today. It was a high-temperature, high-pressure environment with no ozone layer and no oxygen in the atmosphere. Yet the organisms that existed then evolved into the life we know today.
According to an official statement from chemist Stephen Fried, researchers at Johns Hopkins University wanted to measure how certain proteins respond to high-pressure conditions, a problem that could have taken decades to work out with traditional methods.
Luckily, the team had artificial intelligence on their side.
Google’s AlphaFold tool mapped more than 2,500 proteins, identified which parts are sensitive to pressure, and provided important insights into how those proteins behaved millions of years ago.
“This study gives us a better idea of how to design new proteins to withstand stress, and of which kinds of proteins can exist in high-pressure environments such as the seabed. It also gives us new clues about the nature of the Earth,” Fried said in an official statement.
Siddhartha Rao, CEO and co-founder of Positron Networks, which develops technology for scientific research, pointed to this work as a prime example of why AI can be useful for scientists. AI can accelerate processes that cost time, labor, and materials, potentially speeding the rate of scientific discovery and minimizing the influence of funding agencies on the direction of university research.
“During drug discovery, pharmaceutical scientists often try to find a specific sequence of another protein or molecule that fits into the correct position of a protein, like putting a key in a hole,” Rao said in an interview with Government Technology. “Historically, the only way they’ve been able to do it is literally build the chemicals or the proteins involved and put them in a physical environment that mimics the environment they’re targeting, such as the human body. By doing that repeatedly, they ultimately try to find a molecule that works.”
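Rao’s “key in a hole” trial-and-error loop can be sketched in code as a toy virtual screen: score each candidate “key” against the target “lock” and keep the best match, the in-silico analogue of building and testing each molecule physically. This is an illustrative sketch only; the `fit_score` metric and the string-based candidates are made-up stand-ins, not real chemistry.

```python
def fit_score(pocket: str, molecule: str) -> int:
    """Hypothetical fit metric: count positions where the candidate
    matches the target pocket. Real docking software scores 3D
    geometry and chemistry, not strings."""
    return sum(1 for a, b in zip(pocket, molecule) if a == b)

# A made-up target "pocket" and three made-up candidate "molecules".
pocket = "ACDGH"
candidates = ["ACDGX", "XXDGH", "ACDGH"]

# Screen every candidate and keep the one that fits best,
# instead of physically synthesizing and testing each one.
best = max(candidates, key=lambda m: fit_score(pocket, m))
print(best)  # → ACDGH
```

The point of the sketch is the loop structure: evaluate many candidates cheaply in simulation, then physically test only the most promising ones.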
AI can help at every stage of the scientific process
Rao pointed out that AI tools can capture a large amount of information about a topic and identify aspects that would benefit from further research, potentially leading to new studies. They can use existing information to generate hypotheses; draw on similar experiments and lab protocols to create experimental instructions; build simulations that model experimental steps and estimate which parts of an experiment are most worth running in a physical environment; and analyze data, aggregate findings, and draft conclusions. For completed experiments, attempting to reproduce results in a simulated environment can also be useful, and can help evaluate papers in the peer-review process.
However, AI is only useful if it can reliably understand and generate complex ideas. According to Rao, the concept of nonlinearity is key to this.
In computer science, nonlinear refers to operations whose outputs are not simply proportional to their inputs. That’s why you can ask a chatbot a question and get an answer that goes beyond “yes” or “no,” which makes AI attractive for scientific research, Rao said. Nonlinear “thinking” is already built into AI tools, making them good at predicting real-world behavior such as weather, fluid dynamics, and, in the case of the Johns Hopkins study, protein folding.
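The distinction can be shown in a few lines: for a linear function, doubling the input exactly doubles the output; for a nonlinear one (here, `tanh`, the kind of saturating function used inside neural networks) it does not. The specific functions below are illustrative choices, not drawn from the research discussed above.

```python
import math

def linear(x: float) -> float:
    # Linear: output is directly proportional to input,
    # so linear(2x) is always exactly 2 * linear(x).
    return 3.0 * x

def nonlinear(x: float) -> float:
    # Nonlinear: tanh saturates toward 1, so doubling the
    # input does NOT double the output.
    return math.tanh(3.0 * x)

print(linear(2.0) / linear(1.0))        # exactly 2.0: proportional
print(nonlinear(2.0) / nonlinear(1.0))  # close to 1.0: not proportional
```

Nonlinear building blocks like this are what let models capture phenomena, from weather to protein folding, where small input changes can produce disproportionate output changes.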
Dedicated research tools
While AI is poised to aid scientific research, Rao emphasized that existing AI tools are often not built with scientists in mind.
As an example of what such purpose-built tools can look like, NASA scientists recently partnered with IBM Research to develop an AI model that supports climate and weather research. The model can be used to predict severe weather, create localized forecasts, and enhance regional climate simulations. It can also improve the accuracy of existing physics-based methods used for the same tasks.
Rao wants to expand access to AI tools so that scientists with fewer resources than NASA can benefit from the technology. His company, Positron Networks, has created a tool called Robbie that simplifies AI and machine learning tasks used in research. This allows researchers to conduct complex experiments without the need for IT or software development skills.
“When you have a good idea, do you really want to go and tell a bunch of people, or do you want to try it? You want to try it. You want to experiment. That’s the whole concept behind the scientific method,” he said. “If you put scientists in a position where they have to find support, collaborate, or in effect pay to run experiments on this infrastructure, you create something that inhibits science.”
In addition to private companies such as Positron and IBM investing in AI for scientific research, new publicly funded AI research centers are being established across the country. The National Science Foundation funds 27 AI research institutes across the United States through its National Artificial Intelligence Research Institutes program.
Matt Reese, a computer science professor at the University of Texas, is co-director of the NSF-Simons AI Institute for Cosmic Origins (CosmicAI). CosmicAI involves many astronomers and cosmologists, but Reese’s role is to adapt AI tools to the institute’s research needs. One of his projects is an AI co-pilot designed specifically for astronomy.
“What can this AI co-pilot do to help the astronomers?” he said. “Well, the starting point for thinking about this is, ‘What do people think this kind of AI co-pilot will do for our regular jobs?'”
Co-pilots trained on astronomical databases and scientific papers that can clearly state the sources of their claims could be useful for tasks such as brainstorming and writing research proposals, he said.
Mandatory human supervision
As AI becomes more prevalent in scientific research, Reese is thinking about ethics and practicality. He is a founding member of UT’s Good Systems initiative, which aims to create AI technologies that meet human needs and values.
He wants AI tools to be reliable, but he says they can’t be trusted blindly.
“One idea is that when people review the output of the AI, they will be able to tell if there are mistakes and correct them,” he said. “Sometimes that’s true, but sometimes it’s very hard to tell when there’s a mistake. So if there are random mistakes or consistent systematic biases in the AI’s output, and those who review the results cannot spot them, there is a risk that those errors and biases will persist.”
This is an argument for human oversight and routine auditing, practices often raised in conversations about AI. Carefully curating an AI’s training materials can help improve its reliability; it also helps ensure those materials do not infringe on intellectual property rights, jeopardize data privacy, or contribute unnecessarily to carbon emissions.
“People have often used the simplest hammer: ‘Oh, give me more data and I’ll make my AI model better.’ But not all data is equal. Often, some of the data has a lot of bad points in it,” Reese said. “So we want to think carefully about data curation. This is not just about trying to reduce the amount of compute and environmental impact: the more we can actually curate the data, the better the behavior and performance of the model. You want more of the good data and less of the bad.”