
How Do We Know Anything For Certain?

Some practical advice for how to sit, happily, joyfully, with uncertainty—and in doing so, grow and learn from it.

[Illustration: a scientific flask with rainbows draining out of it. Anaissa Ruiz Tejada/Scientific American]

Uncertain

[CLIP: Music]

Brian Nosek: For me, science is a process of reducing uncertainty about the world.

Helga Nowotny: So many people have this craving for certainty. People want simple solutions for very complex problems. And the simple solutions do not exist.


C. Brandon Ogbunu: Uncertainty is not a sign of a problem.

Justin Landy: I think what we should be learning from this is that uncertainty, or variability, is just part of the scientific process.

Chanda Prescod-Weinstein: And so one way of thinking about that uncertainty is that that information is spread out in ways that it’s not in the classical picture.

Pascal Wallisch: We know from a long history of research on people and their cognition that if there is uncertainty, they don’t just say, “Oh, no, I have this uncertainty, and now I can’t act.” Your brain, your mind, however you want to call that, fills in this uncertainty, but in a smart way. It’s like autocorrect: smart guessing.

Jennifer Dunne: There’s still so much that’s not known and not known in a systematic way or in a quantitative way or in a comprehensive way. And so that’s a huge area of uncertainty. That’s very exciting.

Christie Aschwanden: I’m Christie Aschwanden, and this is the fifth and final episode of Uncertain, a Scientific American podcast.

On this show, we’ve been talking about uncertainty from a variety of different angles.

We’ve heard how uncertainty can be a spark for creativity and scientific discovery.

We’ve discussed how uncertainty can go unseen and make science really difficult.

And we’ve explored some of the research techniques and habits of mind that researchers use to deal with uncertainty.

Today we’re going to end with two final questions: If science is always uncertain, how can we ever know anything? How can we have confidence in science if there’s always underlying uncertainty?

For help with these questions, I turned to Kevin McCain, professor of philosophy at the University of Alabama at Birmingham and co-author of the book Uncertainty: How It Makes Science Advance.

Kevin McCain: So my main areas of research are epistemology and philosophy of science. In epistemology, I work a lot on the nature of evidence, kind of in general, and what makes beliefs rational. And one of the views that’s pretty common amongst epistemologists is that we’re never fully epistemically certain of anything.

Aschwanden: The only thing certain about epistemologists is that they’re uncertain about everything? Is that why you wrote a book about uncertainty?

McCain: Something that really prompted me was a sort of misunderstanding that I saw in a lot of students: one, that there was some sort of absolute certainty that we were getting in science, and, two, a worry about their grasp of science and what they would trust—if it’s not absolutely certain, then it’s not real science.

Aschwanden: Oh yeah. I’ve seen that kind of thinking a lot as a journalist. What else have you learned from teaching epistemology to college students?

McCain: One, they would assume that if we weren’t certain, we didn’t know—we had no knowledge.

Aschwanden: Uh-huh.

McCain: And then the second thing is they would erroneously assume that if it’s taught in their science class, that it must be certain. And I tried to explain to them that no, this is not how it is.

Aschwanden: But it’s a really common misperception, isn’t it, that science is certain and any time you encounter uncertainty it’s not science, or we just don’t know anything?

McCain: I co-authored a paper on common misunderstandings about evolutionary theory. And one of the big things that you sometimes see is the “it’s just a theory” objection, where people say, “Well, look, scientists aren’t certain about evolution, and they refer to it as a theory. So therefore it’s not real science, or it’s not well accepted or well known.”

Aschwanden: That’s when you tell them that gravity is also a theory, right? It’s interesting that people so rarely question that. I have seen it a lot with climate change deniers, though.

McCain: And you see it in popular media: well, there’s this one scientist who thinks humans aren’t at all responsible for climate change; therefore it’s uncertain, and so it’s not real science.

Aschwanden: So how can we know something if there’s uncertainty?

McCain: Yeah, so typically when we think about whether we know something or not, a key component of that is how reasonable the thing that we believe is.

So, for instance, very plausibly, I know that I’m talking to you right now. That means, at least, among a few things, that I’m in fact talking to you right now—but also that I’ve got good evidence for believing that.

Aschwanden: Right. But you can’t be absolutely certain. We’re talking over a video conference right now. I could be tricking you.

McCain: There’s some really small, remote chance that maybe you’ve constructed some sort of really advanced AI chatbot that we haven’t let everybody know about yet. It’s got an image and sound and seems to respond, and maybe that’s what I’m talking to. Or, possibly even more remote, maybe I’m having a really strange dream or something, you know, because I’m preparing—we’re gonna have this discussion on a podcast—and I’m like, ahhhh, so maybe I’m dreaming it.

Now, granted, the odds of those are just incredibly unlikely. But for full certainty, the odds would have to be zero.

Aschwanden: That’s an extremely high bar.

McCain: Right. And so one of the things I often try to explain to people is that if you put that as your level for knowledge, you just don’t know anything, which seems really implausible.

Aschwanden: But we deal with uncertainty all the time in our personal lives, right? And we don’t go around feeling like we don’t know anything.

McCain: Just simple things like the weather: if the weather forecast is that it’s 95 percent likely that it’s going to rain, well, then if I don’t want to get wet, I probably ought to bring an umbrella, even though it’s not certain that it’s going to rain.

Aschwanden: Right. Ninety-five percent chance of rain—most people would say we know it’s going to rain.

McCain: On the flip side, you get people downplaying it. People sometimes run into the opposite error of just ignoring the uncertainty and taking the information as if it were 100 percent without a doubt.

Aschwanden: An example of downplaying the uncertainty you cite in your book is genetic testing.

McCain: Right, people think, “Well, you know, I’ve had this sort of DNA test, and I’ve got a certain gene; therefore I must end up being this kind of person,” right, “I’m definitely going to have this sort of problem later,” which is not true.

Aschwanden: It can be true that a certain gene variant is associated with an increased risk of cancer and also true that not everyone who has that variant gets the cancer—just like we know that smoking causes cancer and yet it doesn’t strike everyone who smokes. In your book, you argue that this kind of uncertainty drives science forward.

McCain: If we were really certain of something, not only would we not have a motivation to investigate further, it would actually be irrational to, because our evidence is as good as it can get.

Aschwanden: You can’t learn what you already know.

McCain: The fact that all of science is uncertain to at least a degree lets us know that there’s more evidence to gather. And it also pushes us because, just psychologically, we don’t really like being uncertain, even though we can’t escape it.

Aschwanden: That human tendency to crave certainty is hard to let go of.

McCain: Just in general, as humans, we’re always trying to mitigate the uncertainty as much as we can. If I can be 95 percent sure of something, I’d still rather be 96 percent sure, maybe 98 percent sure. It kind of pushes us to keep looking for more.

Aschwanden: What do you want the public to take away from your book?

McCain: I guess a couple ideas. One is that all of science is uncertain, right? So—but so is all of the rest of life, right? Even the things, for the most part, that we take ourselves to be most confident of, most sure of, if we really bear down and think about it, there’s uncertainty there, too.

And so what I think is most important is less this sort of search for being absolutely certain, which, if we’re talking about our evidence, we can’t have, and more of aiming for what’s the rational thing to think. And so that is: look to the evidence.

Aschwanden: So you want us to look to the total body of evidence when determining what’s true. What else?

McCain: The general public could recognize that what matters is we look to the evidence and see how strongly the evidence supports a particular claim, not whether it makes it certain—but go where the evidence leads.

And if the evidence strongly supports something, like we mentioned—strongly supports that a vaccine is safe and is likely to help protect you from something that could be very dangerous—that’s good evidence in support of taking the vaccine.

Aschwanden: One of the things I hear you saying here is that we have to have some threshold for being comfortable with uncertainty. We can’t know most things with absolute certainty, and if we demand that, then we’re going to go around the world not knowing anything.

What does that comfort with uncertainty look like when you’re presented with a particular idea like the effectiveness of a vaccine, say?

McCain: So we have really good evidence to accept a particular theory at a time, so much so that it would be irrational not to accept it. We accept the theory, and later, we get new evidence that suggests that we were incorrect about that theory.

Well, then we revise: we give up the theory and hopefully replace it with a better theory. That sort of self-correcting is what helps science advance and is what’s allowed it to do so well.
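The kind of evidence-driven revision McCain describes can be sketched, very loosely, as Bayesian updating. This is a standard textbook model of belief revision, not something McCain specifies, and the probabilities below are invented for the illustration.

```python
# A loose sketch of evidence-based belief revision as Bayesian updating.
# The hypothesis H and the likelihoods are hypothetical; the point is
# only that repeated supportive evidence pushes a rational degree of
# belief upward without ever reaching absolute certainty.

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) via Bayes' rule."""
    numerator = prior * p_e_given_h
    denominator = numerator + (1 - prior) * p_e_given_not_h
    return numerator / denominator

belief = 0.5  # start undecided about hypothesis H
for _ in range(3):  # three independent observations that favor H
    belief = update(belief, p_e_given_h=0.9, p_e_given_not_h=0.3)

print(f"belief after three observations: {belief:.3f}")  # high, but not 1.0
```

Each observation here multiplies the odds on H by a factor of three, so confidence climbs quickly while still leaving room for future evidence to revise it back down.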

[CLIP: Music]

Aschwanden: What really came out of my conversation with Kevin was the idea that we need to calibrate uncertainty. There are degrees of knowing. Some scientific results contain a lot of uncertainty, and others are strong enough that we can put our faith in them, at least until better evidence comes along.

Steve Goodman: So there’s settled science that basically we’re so certain about that we’re willing to risk a lot on its dependability.

Aschwanden: That’s Steven Goodman, associate dean of clinical and translational research at Stanford University. Among other things, Steve is an expert on scientific methodology, statistics and the ways that science might be made more reliable.

Goodman: Settled science is a set of what we might call facts—that is, things that we can all agree on and that we find we can rely on in future experiments or in our daily lives. We trust that bridges don’t fall down. We trust that the planes we get on don’t crash. And this science has proven to be extremely reliable for the purposes that we put it to.

Aschwanden: Even settled science still retains some uncertainty—but that uncertainty isn’t relevant to the task you’re putting it to. You can build a bridge or fly a plane with Newtonian mechanics, for example, even though it doesn’t work near the speed of light.

Goodman: And what we try to do through that science is reduce the uncertainty, reduce it, reduce it till we finally get to that point where we can rely on it so much that we can give a drug to somebody or we can get on that airplane.

Aschwanden: And yet, even then, some uncertainty remains.

Goodman: So here’s a really simple example. This drug worked in a clinical trial, or it worked for people from age 20 to 70. But my grandfather is 78 years old. Will it work for him, too?

Aschwanden: Right…your drug trial could only answer how it worked for the very specific participants in the trial—you have to extrapolate for your grandfather. A study answers one question, but so many more remain.

Goodman: It could be—what are the causes of something? So is it the food that somebody ate? Or is it the environment in which they live? Or is it the racism that they’ve been exposed to all their lives? Sometimes these are very hard to disentangle. In fact, questions like those are almost always impossible to completely and cleanly separate.

Aschwanden: If there’s one thing I’ve learned over my 20-plus years as a science journalist, it’s that science can be messy. The answers aren’t always as generalizable as we want.

Goodman: Around every question we get sort of an answer, but it’s a fuzzy answer.

Aschwanden: A scientific paper might represent that fuzziness with error bars around the results, but there are more practical examples, too, right?

Goodman: People see these fuzzy answers every time they look, for example, at a hurricane map, and there’s bounds around where the center of the hurricane will be.

They sometimes focus on the middle of those bounds, but they know that it can hit anywhere in there. And that represents our uncertainty about where the hurricane will hit.

Aschwanden: I’ve heard you talk about “getting the uncertainty right.” What exactly does that mean?

Goodman: We want it to be the case—let’s take the hurricane example—that people can actually plan either to evacuate or not to evacuate based on where it is thought the hurricane might hit. So it’s really important that that uncertainty be right.

Aschwanden: In other words, if you say there’s a 95 percent chance that a hurricane will hit somewhere between town X and Y, you want to be sure that those bounds are correct.

Goodman: So you care about how wide the uncertainty is, and you care that the truth does lie within those bounds with that degree of confidence. That’s what I mean by getting the uncertainty right—something you can, in a sense, bet on. But there’s always a margin. There’s always a margin in everything we measure in science.
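What Goodman calls “getting the uncertainty right” is what statisticians call calibration, or coverage: a well-calibrated 95 percent interval should contain the truth about 95 percent of the time. Here is a minimal simulated check of that idea; the forecasting setup is invented for illustration, not drawn from Goodman’s work.

```python
import random

random.seed(0)

def coverage(trials: int = 10_000, half_width: float = 1.96) -> float:
    """Fraction of trials in which the true outcome lands inside a
    forecast interval of +/- half_width around the forecast center.

    Outcomes are drawn from a standard normal, so +/-1.96 is the
    textbook 95 percent interval; the result should be close to 0.95.
    """
    hits = 0
    for _ in range(trials):
        truth = random.gauss(0, 1)  # the outcome we later observe
        if -half_width <= truth <= half_width:
            hits += 1
    return hits / trials

print(f"empirical coverage of the 95% interval: {coverage():.3f}")
```

If the empirical coverage came out well below 0.95, the stated bounds would be too narrow—the forecaster would be overconfident, and the uncertainty would not be “right” in Goodman’s sense.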


Aschwanden: What I’ve been hearing today is that it’s important to accept that uncertainty will always be with us. Given that, the way forward is to calibrate the uncertainty and use that calibration as a means to understanding how to make better decisions.

For more on that, I decided to talk with someone whose work focuses squarely on decision-making in the midst of uncertainty.

Nidhi Kalra: My name is Nidhi Kalra, and I’m a senior information scientist at RAND [Corporation].

Aschwanden: How did you get interested in uncertainty?

Kalra: I was passionate about climate change and the way we use energy. And as I started working in this field, I became increasingly aware of the ways in which uncertainty is such a force in the conversation.

Aschwanden: A force? How so?

Kalra: Sometimes it’s the scientific uncertainty, which is, you know, totally valid, and we have to work with it. And sometimes it was the way in which uncertainty is weaponized and politicized, and the scientific reality becomes a tool for people to get their agendas across.

Aschwanden: How do we make decisions in the face of uncertainty?

Kalra: So, for example, I was working with the city of Lima in Peru. It’s a drought-prone city, a very large city—one of the driest cities in the world. And they were asking the question: in a future where we expect to have more population growth and where climate change may affect our water resources, how do we decide what to do to keep people hydrated?

Aschwanden: That’s a lot to think about. How do you even start?

Kalra: They had a lot of projects that they could have invested in at billions of dollars of expense. Which project they invested in, which project made sense, depended a lot on what you believed about the future: whether it was going to be wetter or drier, whether there would be more people or fewer, where those people would be located, how quickly they would need more or less water.

Aschwanden: That’s a lot of uncertainty, with big repercussions.

Kalra: What we did was we said, okay, we don’t know these things; we can’t predict this future. Let’s find out what the right thing to do is across hundreds of different combinations of possibilities: What would you choose to do under these assumptions, and those assumptions, and that assumption?

And what we found, for example, was that there were certain projects, water-investment projects, that cropped up and were necessary in every future. Doesn’t matter what happened—you always needed this one water treatment plant. In fact, there were quite a few projects where you always needed them.

Aschwanden: So you’re essentially saying, what are the things I can do that will be beneficial, no matter what direction things go?

Kalra: And then there were some projects where you’re like, “Well, we’re gonna need these if climate does this. And if climate does this other thing, we’re gonna want these other ones.” We were able to construct that into, essentially, an adaptive or dynamic plan—one that says, “Okay, we’re going to do what we know is necessary. Now suppose we can’t do everything at once. So let’s do what we need to do. And then we will, as time evolves, do those things that become apparently necessary.”

Aschwanden: How does that play out?

Kalra: It’s a series of “no regret” actions. You’re never going to regret building that absolutely necessary water treatment plant. And so essentially what we’re doing is asking this question over and over again: What if this happens? What if that happens?
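The scenario-scanning idea behind those “no regret” actions can be sketched in a few lines: evaluate each candidate project across many assumed futures, and flag the ones needed in every future. The project and scenario names below are invented for illustration; RAND’s actual analyses run over hundreds of computed futures, not a hand-written table.

```python
# Hypothetical sketch of robust, "no regret" project screening.
# needed[project] is the set of futures in which that project is required.

scenarios = ["wet/high-growth", "wet/low-growth",
             "dry/high-growth", "dry/low-growth"]

needed = {
    "treatment plant A": set(scenarios),                       # required everywhere
    "desalination B":    {"dry/high-growth", "dry/low-growth"},
    "new reservoir C":   {"wet/high-growth", "dry/high-growth"},
}

# "No regret" projects are needed no matter which future arrives.
no_regret = [p for p, futures in needed.items() if futures == set(scenarios)]
# The rest go into an adaptive plan, triggered as the future unfolds.
contingent = [p for p in needed if p not in no_regret]

print("Build now (no regret):", no_regret)
print("Revisit as conditions evolve:", contingent)
```

The design choice is the one Kalra describes: commit only to actions that are good under every assumption, and defer the rest into an adaptive plan with explicit triggers.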

Aschwanden: But what about when uncertainty is very high?

Kalra: A big part of it is reframing the question from “What’s the best thing I can do, given what I believe will happen?” to “What is the good enough thing to do, given that I don’t know what will happen?”

Aschwanden: What’s the mindset to use in this situation?

Kalra: So instead of optimizing to a particular set of predictions or a particular assumption, we are trying to do well enough. We’re trying to be satisficers instead of optimizers.

Aschwanden: So, a quick interlude for that word: satisficer. It’s a term psychologists use. You probably haven’t heard it before, but satisficers, in this case, are people who are trying to meet the minimum criteria that Nidhi has set.

Optimizers probably sounds more familiar to you. They’re just like what they sound like. They’re striving for the very best of all possible answers…

Back to Nidhi.

Kalra: Knowing that satisficing in a wide range of possibilities can often be better in the real world than optimizing to a future that never comes to pass.

Aschwanden: So you’re setting the bar at coming to a pretty good decision for the greatest possible number of scenarios versus trying to find the very best one for just a few. What are some of the other challenges?

Kalra: What compounds this problem is not just the uncertainty but people’s values. Sometimes people’s preferences for outcomes weigh far more heavily on a problem than the actual scientific uncertainty does.

Aschwanden: Tell me more about this human element.

Kalra: There was plenty we didn’t know about COVID—how quickly it would be transmitted [or] when the interventions would be around. But the debate was usually not driven by the scientific questions but by the trade-off between, you know, economic health and human health—a trade-off that we couldn’t adjudicate even if we had perfect scientific knowledge about COVID.

Aschwanden: In other words, science provides the evidence, but human values determine what we do with it.

Kalra: That’s actually where the real messy stuff is. It’s in human values, human understanding of what should happen in the world. And in that context, scientific uncertainty is the easy part.

Aschwanden: What would you like the public to know about uncertainty in the context of science?

Kalra: The acknowledgement of uncertainty is a strength of science. That’s what I would wish for the public to know. It is not a reason to dismiss the science or say, you know, I hear people say, “Well, it’s just a theory.”

Aschwanden: Right. In science, a theory is actually a pretty well-considered idea. What else?

Kalra: I would wish for the public to understand that uncertainty does not mean we don’t know things. We know that an apple dropped is going to fall to the ground. We know, you know, we know climate change is real. We know it’s caused by the burning of fossil fuels. These are facts.

Aschwanden: How should we think of facts like that in light of their uncertainty?

Kalra: I think about it this way: at some point, we have to round up. As consumers of science, we have to round the science up and say, “You know what? We’re, like, 95 percent sure.” There’s no alternative; we round that up so that we can make decisions in our world. So that idea—not all uncertainty is the same.

Aschwanden: What kind of uncertainty should people look out for?

Kalra: How uncertainty in science can be used to manipulate people—to be vigilant about that, to ask, “Is this a legitimate use of uncertainty? Is this a legitimate representation of uncertainty? Or is uncertainty being used to sow indecision—to sow doubt about an action that we actually really need to take?”

Aschwanden: How can you tell when uncertainty is being used as a tool of propaganda?

Kalra: When you hear blanket statements, like, “Oh, but science is always uncertain,” or “We don’t know enough about that yet.” Those generic statements are a little bit of a tell. If you have to speak in generalities and say, “Well, we don’t know,” or that we don’t know enough—is that really true? Often we know a lot more. We certainly know enough to make a decision.

Aschwanden: So you’re saying that even in the face of uncertainty, we can make decisions. How do you parse out the uncertainty that’s being inflated to manipulate public opinion?

Kalra: I would wish for people to be cautious. Note and notice: Who is using uncertainty as a way to prevent action? Who is preventing us from getting to the choices that are actually good for almost all of us because it serves their interests? And how is uncertainty being used to manipulate what I’m seeing right now? Because there is no world of certainty about any of these big issues. And yet there are some hard choices to be made that we can make, and we need to make, and we ought not let uncertainty stand in our way.

Aschwanden: We ought not let uncertainty stand in our way. Nidhi is saying that we shouldn’t let uncertainty prevent us from acting when there is reasonable evidence, and yet this hits on a paradox.

On the one hand, we don’t want to be overly sure of uncertain findings, and on the other, we don’t want to be paralyzed by uncertainty that can never be removed.

To further explore this paradox, I turned to someone whose work combines computer science, math and physics.

Cris Moore: I’m Cris Moore. I’m a professor here at the Santa Fe Institute.

Aschwanden: How do you think about the way science proceeds?

Moore: In school, we’re taught there’s this thing called the scientific method, and it means, like, first you form a hypothesis, and then you do an experiment. And a little bit of science happens that way. But most science is more about just, “How are we going to figure out which of these things are true?”

Aschwanden: So how would you describe the way science really works?

Moore: My favorite way to say what the scientific method is? Just considering the possibility that you might be wrong.

Aschwanden: You mentioned a saying about this.

Moore: We should hold our stereotypes lightly, and modify them gladly. And I think for scientists, our beliefs about the world—about physical things, medical things, whatever field you’re working in—it’s a little bit the same way. Like, if it turned out we were wrong, that would be great. It would mean that we had learned something more, and that would be celebrated.

Aschwanden: I was at a conference once where someone in the audience asked the panelists, “How many things that science knows today, or that we currently hold as facts, will hold up in 100 years, and in 1,000 years?” What would you have said?

Moore: I mean, this is fascinating, too, because I think it varies from science to science, right? So let’s start with what I know best, which is physics. That’s where my training is.

Aschwanden: Okay, so let’s talk physics. How would you answer the question for that field?

Moore: So in physics, there’s this very active story of how the ground has shifted enormously underneath our feet repeatedly. It’s not that the previous physical theories were wrong; they had a lot of truth to them. And there are interesting echoes of classical Newtonian mechanics in quantum mechanics…

There are certain things about the framework that are actually very similar—the same with classical to general relativity. And, as I think anybody who reads popular physics books knows, we’re kind of prepared for another huge shift underneath our feet because it seems as if reconciling general relativity with quantum mechanics might require a similarly seismic shift in how spacetime works. And we’re looking forward to it, right? That’ll be great.

Aschwanden: So you’re saying that we can embrace what we think is true to the best of our evidence at the moment, but we should be prepared for updates. Is that right?

Moore: So physics has this sort of dual message. It’s like, we can do these great calculations, we can do these amazing predictions, we can figure out stuff to, like, you know, three, four, or five digits after the decimal point, [but] every once in a while there’s some persistent error.

Aschwanden: And what do those errors tell us?

Moore: When there are these persistent errors, sometimes we know that it’s actually this, you know, horn call from across the hills letting us know that something big is on its way. So for physics, this is part of our founding myth over the past 400 or 500 years: yeah, we’re wrong, we’re probably really wrong about some things, and it’s going to be great.

Aschwanden: You keep saying that errors are exciting—that they herald something great. That’s so contrary to how the public so often thinks of science. Can you explain what you mean by this?

Moore: With climate modeling: it might be really hard to figure out which continents are going to get wetter and which ones are going to get drier 50 years from now. That’s, that could be really hard, right...?

On the other hand, at some level, do we need to know that? You know, do we need to know the details of the various catastrophes: floods here and droughts there?

Aschwanden: In other words, finding something that doesn’t fit the prevailing theory means that there’s a new discovery awaiting. But what about our provisional knowledge? How do we process the uncertainty that’s inherent in it?

Moore: In terms of taking action, we already have enough scientific knowledge. We already know we need to shut down the coal plants. We already know that we need carbon-free sources of energy and that we need to shift things over to electricity when we can and to conserve where we can. We already know all that.

Aschwanden: Right, so on a big-picture scale, we know it’s happening, and we know the repercussions will be enormous.

Moore: It’s sort of like if there’s a really expensive vase on a table, and I start kicking the table and making the vase wobble, and you say, “Please stop doing that. You’re going to knock over the vase, and it’s going to smash. It was my grandmother’s vase.” It would be silly for me to say, “Yeah, but come on, you vase scientists, you can’t tell me exactly which pieces of which shapes are going to be on the floor. So you’re uncertain.”

“Yeah, but I know the vase is gonna fall over and smash at some point. And please stop kicking the table.”

So, I don’t know, I think, I think the public is confused and frustrated and policymakers are confused and frustrated with what it is that science can do, especially when there are things like pandemics happening in real time—and climate change happening faster than we thought it would.

So please join with us in the difficult job of being uncertain and having to take action anyway, even in the presence of uncertainty, which is hard. Like, be in the driver’s seat with us. Don’t, you know, don’t wait outside for us to deliver some perfect solution to you, because it’s not like that; we can’t.

Aschwanden: There are people who will say, “but we’re not totally certain about this stuff, so we need to be cautious.” What would you say to that?

Moore: We’re uncertain; you’re uncertain; we’ve got to be uncertain together. And, you know, we all have to embrace that and—which doesn’t mean giving up. It means working hard, as fast as we can, to figure out what’s going on and what we need to do.

Aschwanden: It seems like what you’re talking about is calibrating the uncertainty while also acknowledging it.

Moore: Sometimes it means engaging in strategies that are kind of deliberately robust to uncertainty, right? Like, we’re not sure exactly how this disease works, but hey, could we all get vaccinated? We know that’s going to help. And yes, we can’t promise that the vaccine will give you 100 percent immunity, because it’s going to mutate. But it’s still going to help. So can we do that, please?

Aschwanden: What does that look like in real life?

Moore: We don’t know how fast Greenland is going to melt. But we can see it’s melting, and it’s faster than we thought it was going to be. So can we at least try to melt it less fast?

Aschwanden: So even when we don’t know the exact details of the problem, we can often know the magnitude and most of the extent of it. What’s your parting advice on how people should think about uncertainty in science?

Moore: Scientists are trained to be willing to say cheerfully, happily, openly, “We don’t know yet. I don’t know the answer to that question.” And for scientists, that’s not a source of despair. That doesn’t mean we’re giving up or throwing up our hands. It’s a much more dynamic thing. It’s almost a joyful thing. It’s like, “We don’t know. So let’s find out. And let’s, you and I, together, find out.”

Aschwanden: I think that’s a good note to end this series on. We don’t know everything, but putting our heads together with an open mind can help us figure it out.

Scientific knowledge is incremental, and it’s provisional. It’s always subject to new evidence, but we can do a lot with our intermediate understanding.

Science is a powerful tool for understanding the complexities underlying the world around us, and it can be incredibly useful for making decisions. The more that we can embrace the uncertainty underlying all science, the better we can become at sitting with that uncertainty and being open to updating our beliefs in light of new evidence.

What makes science so powerful is that instead of being dogmatic and rigid, it’s always open to new ideas and new data. As Cris says, let’s embrace the joyful act of finding out!

Our show is produced by me, Christie Aschwanden, and Jeff DelViscio. Our series art is by Anaissa Ruiz Tejada. Our music is from Epidemic Sound.

Funding for this series was provided by UC Berkeley’s Greater Good Science Center—it’s part of the Expanding Awareness of the Science of Intellectual Humility Initiative, which is supported by the John Templeton Foundation.

I’m Christie Aschwanden, and this is Uncertain, a podcast from Scientific American. Thanks for listening.

