This is a complex question. I do not want to get too deep into technical detail, but some is inevitable. In the most obvious instances, Artificial Intelligence might be used to automate discovery. A great many research projects produce visual or other sensory records that need to be examined quantitatively, for example to discover new species or new behaviors. Data mining has already had a profound impact on research over the last decade and a half, but mostly through machine learning, which is not quite what most people outside science tend to think "AI" means.
Actual generative intelligence (I do not mean generating text or images here) that produces theorems, proofs, and hypotheses is still "around the corner." What is more likely in the near term is using AI to automate the generation of models. Models represent aspects of the natural world that are too small, too large, too complex, or too dangerous for us to observe directly through experiment. Examining huge datasets by eye and arriving at an appropriate model is rapidly becoming impossible for human beings; computers, however, can be particularly good at it. At Yale and other places, we are experimenting with ways to use AI agents to do this work and, more importantly, exploring the roots of why this approach is, or can be expected to be, successful.
Automating the production of models, and then of the code that simulates each model for direct comparison to measured data, is the current forefront. It is like adding scientists, except that it isn't: we need human scientists more than ever to judge whether the generated models make any sense or are promising for what might come next. Scientists are one of the few workforces not threatened by AI automation and, in fact, are among the best positioned to benefit through its rapid adoption.
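To make the loop concrete, here is a minimal sketch of what "propose a model, simulate it, compare to measured data" could look like. Everything in it is a toy assumption for illustration: the synthetic dataset, the hand-written candidate model families (which an AI agent would instead propose), and the simple least-squares scoring. It is not a description of any real pipeline.

```python
import numpy as np

# Toy "measured" dataset: a linear law plus noise (an assumption for this sketch).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
measured = 2.0 * x + 0.5 + rng.normal(0.0, 0.05, x.size)

# Candidate model families of the form y = a * f(x) + b. In the automated
# setting these would be generated by an agent; here they are hard-coded.
candidates = {
    "linear": lambda x: x,
    "quadratic": lambda x: x**2,
    "exponential": lambda x: np.exp(x),
}

def fit_and_score(basis, x, y):
    """Least-squares fit of (a, b) for y ~ a*basis(x) + b; returns mean squared residual."""
    A = np.column_stack([basis(x), np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(np.mean(resid**2))

# "Simulate" each candidate and score it against the data.
scores = {name: fit_and_score(f, x, measured) for name, f in candidates.items()}
best = min(scores, key=scores.get)

# The loop stops here: a human scientist must judge whether the best-scoring
# model is physically sensible and worth pursuing.
print(best, scores[best])
```

The point of the sketch is the division of labor: the mechanical steps (enumerate, fit, score) automate cleanly, while the final judgment about whether the winning model means anything remains with the scientist.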