Ralph Hertwig: Beyond Nudging—How Boosting Empowers Citizens to Make Good Decisions

Show Notes for Episode #5 of Unraveling Behavior

May 20, 2025

In this episode, Ralph Hertwig, Director at our institute, makes the case for a major shift in how policy makers approach behavior-related challenges—whether it’s preventing dooring accidents, reducing math anxiety, or countering misinformation. Rather than relying on the common paternalistic strategy of nudging people without their awareness, Ralph Hertwig advocates for a more empowering and transparent approach: boosting people’s competences so they can make informed decisions on their own terms. He argues that this shift, supported by regulations and incentives, is crucial for strengthening people’s agency in the face of global challenges like pandemics and climate change. Empowering citizens also helps protect our autonomy and supports informed choices in environments that are often addictive and manipulative, like the junk food industry or social media platforms.

After exploring the limits of nudging, Ralph Hertwig introduces the idea of “boosting”—a behavioral science approach that fosters people’s agency, self-control, and decision-making skills. Boosting, he argues, provides a first line of response, especially since laws are often slow to change and vulnerable to outside influences. He shares some examples of boosting interventions, like the Dutch Reach, bedtime math stories, and lateral reading. He also talks about “self-nudging,” a strategy that helps individuals shape their own environments in ways that align with their personal goals. Our conversation highlights how boosts are transparent, non-manipulative, and designed for lasting impact—and how they can be implemented fairly and accessibly for everyone. Ralph Hertwig encourages policy makers to move beyond the narrow view that people are just error-prone, and instead invest in strategies that tap into our ability to learn and adapt. Join us for a forward-thinking conversation on how we can collectively empower citizens and equip them with the tools to live better lives—on their own terms.


Watch the interview on YouTube (curated subtitles: English, Deutsch, Português). Listen to the episode on Spotify, Apple Podcasts, and other players, or subscribe via our RSS feed.

 



Timestamps

  • 00:00 Introduction
  • 01:17 The need for competent and empowered citizens
  • 05:57 Role of policy makers and scientists
  • 08:01 Citizen agency during COVID-19
  • 12:02 Introducing nudging
  • 16:36 Opt-in vs. opt-out defaults for organ donation
  • 19:18 Concept of libertarian paternalism
  • 21:40 Criticisms of nudging
  • 24:32 Research behind nudging
  • 27:40 Evidence suggesting citizen empowerment is feasible
  • 31:51 Introducing boosting
  • 33:48 Dutch Reach to prevent dooring accidents
  • 36:09 Bedtime math stories to reduce math anxiety
  • 39:15 Lateral reading to detect misinformation
  • 43:23 Transparent nature of boosts
  • 44:34 The potential of boosting for long-term change
  • 45:58 Boosting through self-nudging
  • 50:44 Core competences and interventions to boost them
  • 55:30 Communication of boosts to the public
  • 56:53 The trap of individualizing responsibility
  • 58:42 Cognitive and motivational requirements
  • 01:00:40 Message to policy makers and citizens
  • 01:03:56 Conclusion

Transcript (edited)

Sofia Morais: Hi, Ralph. It's great to have you here today. Thank you for joining us.

Ralph Hertwig: Hi Sofia, it's wonderful to be here, and thank you for having me.

Sofia Morais: In your work, you've argued that highly commercialized environments, from fast food to social media, really need competent and empowered citizens. Could you explain why you believe that's so important?

Ralph Hertwig: Yes, that was indeed one of the arguments, and for me, it's one of the important reasons why we need competent and empowered citizens. It's not the only reason, but it’s a very important one. What I had in mind was the dramatic change in our environment in the 20th and 21st centuries. For instance, think about the digital environment, which didn’t even exist 50 years ago, but today it has become so ubiquitous. These environments are highly sophisticated and are based on a lot of scientific knowledge—some of it psychological. I’ve characterized these environments as “ultra-processed environments.” The term “ultra-processed” originally comes from the food environment. In the food environment, we refer to ultra-processed foods—foods that are highly processed with many ingredients, often ones we can’t even pronounce. These foods typically combine high sugar and high fat. There’s both the idea and evidence suggesting that these environments can be quite addictive.

There’s even a recent study, from 2023, out of one of the Max Planck Institutes, showing that the ongoing consumption of high-calorie, high-fat, and high-sugar foods actually changes the reward circuitry in our brain. If you think about it for a moment, that’s pretty amazing. It means that our preferences, as represented in our brain, change systematically, so we begin to prefer low-fat food less than we did before we started consuming high-fat, high-sugar food. This is fascinating because through our food consumption, we develop preferences that are not good for our nutritional health.

Now, if we consider these environments more broadly, the food environment is just one example. Think about the digital environment and how sophisticated it is at grabbing our attention, steering our behavior, and keeping us hooked to social media platforms and various apps. What this means is that we need competences to preserve our autonomy in these environments—by autonomy, I mean freedom of choice—and also agency, the ability to act according to our own desires, preferences, and will, and not just be pushed around by commercial interests.

To avoid any misunderstandings, I’m not arguing that competent citizens alone will fix the problem. The problem with digital environments and the extent to which they are based on an attention economy won’t be solved by competent citizens alone.

Sofia Morais: These are very powerful forces.

Ralph Hertwig: These are extremely powerful forces or, as they are often called, “choice architectures.” If you think about the food environment, we need more than just competent citizens to deal with these forces. We need systematic approaches, regulation, and laws. We might even need taxes, such as a fat or sugar tax. Competences and empowerment are just one aspect of the whole story.

But as we know, these kinds of regulations take time and are subject to powerful industry interests. That’s why I’ve argued that, at least as a first line of response for citizens, we need empowered and competent citizens.

Sofia Morais: And when you say “we,” who is “we”? Who empowers the citizens?

Ralph Hertwig: I would argue that it's public policy makers who, with the health and well-being of citizens in mind, should think about how they can put regulations or interventions in place that make people more competent. These measures would help people deal with these, as I call them, “ultra-processed” environments, which are primarily designed with commercial interests in mind.

Sofia Morais: And what role do scientists play in this process?

Ralph Hertwig: One of the developments we've observed over the last 10 to 20 years is that policy makers and politicians have become more and more interested in evidence from psychology, economics, and behavioral economics. These fields have a lot to offer in terms of understanding how people behave and how we can guide their behavior. I think this shift is largely due to the work of Dick Thaler and Cass Sunstein, the inventors of the concept of “nudging.” Nudging is an approach to what is now known as “behavioral public policy making.” What it means is that we can bring our knowledge of human behavior to policy makers and politicians, helping them address problems like how to reduce waste or increase organ donation. There are all kinds of public policy issues that need to be solved. In the past, these were often addressed through laws and regulations. Over the last 20 years, politicians have increasingly realized that there’s valuable knowledge about human behavior in psychology and economics, and this knowledge can be used to design smart interventions.

Sofia Morais: What lessons from the COVID-19 pandemic do you think can be applied to global challenges like climate change or threats to democracy?

Ralph Hertwig: I think that’s a very good question. For me, the COVID crisis was also an eye-opener in the following sense: a lot of what was expected from citizens during the pandemic can be described by the concept of agency, which is often discussed in psychology and beyond. It refers to the importance of an active, responsible citizen who can face a challenge and deal with it responsibly. That sounds a bit abstract, but what do I mean?

Think back to the pandemic. There were many things we expected from citizens. For example, we expected them to understand some pretty complex concepts.

Sofia Morais: Exponential growth.

Ralph Hertwig: Exactly. I mean, before the pandemic, who would have known exactly what that meant? Also, remember when COVID tests first came on the market? We needed to understand concepts like hit rates and false alarm rates. We also had to understand why social distancing—or physical distancing, as it was later called—was so important. And why, for instance, it was essential to quarantine yourself if you suspected you might have been infected.

Sofia Morais: There was also a lot of misinformation.

Ralph Hertwig: Exactly. You're totally right. And remember, the WHO also spoke about the “tsunami of misinformation” in the context of the pandemic, which meant we also expected people to be able to search for good, accurate information. It was a time of great uncertainty for many people, and one way to deal with uncertainty is by obtaining reliable and accurate information.

What this meant, and these are all examples, is that we wanted an active citizen—someone with a sense of self-efficacy, someone who feels in control of their behavior. Just think about it. What was expected of us? Many of us were expected to work from home, which meant reconstructing our apartments, finding places to work, and balancing family life with work life. We expected people to be adaptive and adjust to the challenges of the situation. If you expect that kind of adaptability, you have to value agency because, without an active citizen, you won’t achieve the kind of adaptation that was absolutely necessary at that time.

Now, the argument I would make is that many of the crises we’re currently facing—whether it's climate change and climate adaptation, obesity, or our behavior on digital platforms where there's also the danger of addiction—require adaptability on our part. To do that, we need to invest in people. We cannot expect citizens to be competent and active players who take their fate into their own hands without improving their competences. Again, I'm not arguing for a vision where the citizen is entirely on their own. Not at all. We need systemic approaches as well, including good laws and regulations and resilient institutions. That’s all true. But, as we saw with the pandemic, we also need the active and competent citizen.

Sofia Morais: This idea of empowering citizens by building their competences is quite a different approach to behavior change than simply nudging them in one direction. Can you break down what a nudge is? Maybe give us a few examples so we can better understand how it differs from empowering people?

Ralph Hertwig: I think nudging is a really novel and interesting idea. The question is, and we’ll discuss this later, what are the potential downsides of nudging? But originally, the idea of nudging was that people are often challenged. They tend to make mistakes or be motivationally challenged. You could also say they suffer from inertia. What that means is that, if we want to change human behavior, maybe it’s not the best idea to appeal to people. That was the assumption. Instead, maybe we can let the environment do the work. In combination with the challenges people face, this smart environment can create a dynamic that pushes them in a certain direction. That was really the idea behind nudging.

What it meant was that we shouldn’t focus too much on the people themselves—because we know they’re full of biases, errors, and mistakes. Instead, we should focus on designing “choice architectures.”

Sofia Morais: Could we also say “choice environment” instead of “choice architecture”? That’s what “architecture” means, right?

Ralph Hertwig: Yeah, it sounds a bit more sophisticated, but you could definitely say “choice environments.” The emphasis is on aspects of the choice environment that we don’t necessarily pay attention to, but that still guide our behavior. What’s also part of the choice environment are things like economic incentives, taxes, and fines, but that’s not the focus here. The focus is on things we typically wouldn’t even think about, like the way things are ordered. For example, we know from supermarkets that there’s the famous slogan “eye level is buy level.” Producers try to get their products onto the eye-level shelf because research shows that items placed there grab our attention and are more likely to be purchased. But we may not even be aware of this effect—it’s an effect of the environment that may occur without us realizing it. The idea behind nudging was to identify these kinds of features in the environment that guide our behavior, often below our level of awareness, yet still have an impact on our behavior.

In combination with the mistakes and errors people make, these environmental features can push us in certain directions—directions that, from the perspective of a policy maker, are desirable for the individual. That’s the core idea of what has come to be called “architectural nudges.” It’s the most innovative and interesting class of nudges.

There’s also a second class of nudges called “educative nudges.” This typically includes things like warnings (think of cigarette package labels), disclosures of information, or reminders. But the adjective “educative” here refers to a very narrow interpretation of education. The nudging framework wanted to highlight that there are other important types of nudges, such as reminders or warnings, though in my view, this is too narrow an interpretation of what we can achieve with something like education.

Sofia Morais: There’s also one class of nudges that people may have heard about: defaults. So defaults would fall under the architectural category. Would you like to explain how the default works? It’s also a good example.

Ralph Hertwig: Defaults are actually, I’d say, one of the most powerful nudges there are. There are even some meta-analyses suggesting that defaults can have very powerful effects in certain environments. The idea behind defaults is that in certain choice situations, there’s one option that is predetermined by a choice architect. If you don’t do anything, if you don’t make a decision, that predetermined option is implemented for you. It might sound a bit abstract, but perhaps the most famous example is organ donation.

Some countries have what’s called an “opt-out default,” meaning the policy maker automatically declares everyone, starting at a certain age, to be an organ donor. If you don’t want to be an organ donor, you have to explicitly opt out. But by default, everyone is an organ donor. This is in contrast to an “opt-in system,” where you are not considered an organ donor unless you actively decide to be one. In this system, you, as a citizen, make the decision at some point in your life that you think organ donation is a good public service, and then you opt into the system.

The difference is significant: in the opt-out system, you are an organ donor by default, whereas in the opt-in system, you are not unless you choose to be. The research shows that in countries with the opt-out default, many more people—at least theoretically or hypothetically—could be organ donors. This doesn't mean that everyone is actually an organ donor, but the pool of people who could potentially be organ donors is much larger in opt-out countries than in opt-in countries.

This was one of the fascinating examples that drew attention to nudging, because it seemed so simple—just change the law...

Sofia Morais: And then people stick to the status quo.

Ralph Hertwig: Exactly. The assumption was that, due to inertia, people would stick to the default and, theoretically or hypothetically, become organ donors. There are other interpretations of why defaults work. Sometimes they are seen as implicit social recommendations about what is the desired course of action. So it’s not just inertia; there could be other reasons why defaults may be effective.

Sofia Morais: There is one term that we often hear in association with nudging, which is “libertarian paternalism.” What does that mean?

Ralph Hertwig: Yeah, this is an interesting concept because, of course, and that was the very point of it, these things seem to clash with each other. “Libertarian” emphasizes preserving freedom of choice and liberty, while “paternalism” suggests that there may be a public choice architect or policy maker who knows what’s best for you and ensures you will do what they consider to be good for you. These two concepts—emphasis on liberty and emphasis on paternalism—don’t seem to go together. But here they do, in the sense that to construct a choice architecture that benefits the person and nudges them in a particular direction, I need to be, or I end up being, a paternalist. I tend to know what’s good for you and design a choice architecture so that you exhibit behavior that, in my assumption, is good for you.

But here comes the libertarian part: in this process of building a choice architecture, you still have the option to act otherwise.

Take the example of organ donation. In the opt-out situation, where you are by definition an organ donor, you still have the choice to say, “I don’t want to be an organ donor,” and opt out of the system. So, in this case, there is a choice architecture that builds on people's inertia, on their status quo bias—they stick with the default—yet they are not excluded from choosing against it. And in this sense, you have a combination of paternalism and liberty. That’s the idea behind libertarian paternalism, which is the political philosophy underlying nudging.

Sofia Morais: So I think this provides a good transition for us to talk about the criticisms that have been raised against nudging. What are the main concerns with it?

Ralph Hertwig: From my point of view, what I would criticize in nudging is the neglect of agency. There’s also, of course, the criticism that nudging undermines autonomy, that is, people's freedom of choice, because they may not even be aware they are being nudged. That’s a criticism, particularly when nudging happens without people realizing it. However, I find the agency argument even more important. It goes back to what we discussed in the context of COVID-19. The situation clearly required an active citizen—not just an active citizen, but an active and competent citizen—someone who could take responsibility for their behavior but also think about others. We needed to exercise responsibility. So, we needed competence, agency, and responsibility for others.

I would argue that nudging doesn’t invest in these skills. Nudging typically doesn’t aim to make people competent. The idea behind nudging is that the choice architecture interacts with the problems people have and builds on them. For example, remember the inertia argument? In the organ donation case, I don’t want people to opt out and not become organ donors. I want them to stick to the default.

Sofia Morais: So you want to make use of the way the mind works in order to get people to do the right thing.

Ralph Hertwig: Exactly. But for that to happen, it requires a specific view of the mind—a view that nudging tends to have, which is somewhat negative. At the same time, I don’t want to change people’s competences because, to make the defaults work, I need an inert citizen. For other effects to work, I need a person who is loss averse. For other effects to work, I need someone who neglects base rates. These are all potential cognitive illusions that have been discussed in the literature. For many nudging interventions to work, I essentially need to exploit or take advantage of these cognitive illusions because they, combined with the choice architecture, lead to the behavior that I consider to be the right one for the person in question.

Sofia Morais: You've also mentioned before that the nudging approach is based on this deficit model of human nature, and this is a model that is not universally agreed upon. Would you like to tell us a bit more about this, and about the research that inspired the nudging approach?

Ralph Hertwig: The research behind it is rooted in a very successful tradition in psychology known as the “heuristics-and-biases program,” originally developed by Danny Kahneman and Amos Tversky. The central idea in this research is that, because of our cognitive limitations, people often rely on relatively simple heuristics—decision-making strategies that can be efficient but can also lead to systematic errors and biases. Over the years, many psychologists, as well as economists and behavioral economists, became very focused on the errors and biases people exhibit. You could interpret Kahneman and Tversky's work differently—by saying that there are heuristics that often work quite well. But, as I see it, in this research tradition, there was hardly any attempt to investigate when heuristics work well. Researchers were fixated on the biases and errors people make. The term “cognitive illusions” was coined to describe these biases.

The view that emerged from this is that human cognition is fundamentally error-prone, and there's even a famous statement by Richard Thaler, one of the authors of Nudge, that “mental illusions are the rule rather than the exception.” I disagree with that, because it’s an overly negative portrayal of human cognition. The second assumption of this view is that not only are these errors ubiquitous, but we also can’t correct them. They’re so stubborn that we can’t “debias” people. And since we can’t “debias” them, it seems like it’s not worth investing in human capital to help people make better decisions by overcoming these biases.

Sofia Morais: Instead, you make use of them.

Ralph Hertwig: Exactly. That’s the trick. We don’t try to overcome them; instead, we design choice situations where these biases work in our favor together with the choice architecture.

Sofia Morais: But not everyone agrees with the way that the heuristics-and-biases program studies human thinking, right? What other perspectives exist?

Ralph Hertwig: Interestingly, in the 1950s and ’60s—shortly before Kahneman and Tversky began their work in the early 1970s—there was a comprehensive and empirically driven research enterprise. These researchers had a completely different perspective. Importantly, they also examined statistical reasoning and statistical intuitions. So, while they were studying the same subject as Kahneman and Tversky, they arrived at a very different conclusion about how good or bad people are at reasoning statistically. This earlier research field, based on numerous experiments, suggested that people are actually very good intuitive statisticians. Then, just a few years later, Kahneman and Tversky essentially overturned that view and reached a very different conclusion. A very interesting question is: How can that be?

One key difference—and this is something my colleague Tomás Lejarraga and I explored in a paper a few years ago—is that the experimental methodology used to study statistical intuitions changed dramatically. In the tradition of Kahneman and Tversky, experiments often involved text-based vignettes where participants provided a single response and were done.

But in the tradition that viewed the mind as an intuitive statistician, the experiments were very different. Instead of just providing a single judgment, participants would often make tens or even hundreds of judgments. This allowed researchers to observe learning over time. These experiments, which are more rooted in our experience and provided learning opportunities through feedback, led to very different conclusions about people’s cognitive and statistical reasoning abilities.

What many researchers did—and to some extent, I would say Kahneman and Tversky did as well—was to infer from these errors and biases that human cognition, in general, is flawed. But these conclusions were drawn from a very specific type of experimental setup, one that largely excluded learning. From this limited scope, broad inferences were made about people’s overall reasoning and learning abilities. I believe this is inappropriate—an overgeneralization. What we actually have are two different perspectives. While we can draw insights about cognition from both, we should not overgeneralize, as each perspective only applies to a specific class of situations. If we integrate these two views, we can develop a more accurate and representative understanding of human learning and cognitive abilities.

What happened in the 1950s and ’60s resurfaced again in later research conducted at this institute, particularly by Gerd Gigerenzer and the ABC group, and subsequently continued by our group. This research focused on the positive aspects of heuristics. Heuristics, remember, are simple strategies people use to navigate complex situations, deal with limited information, and make decisions under time pressure. We refer to them as ecologically rational heuristics, sometimes also as fast-and-frugal heuristics. Many examples have been developed showing that when the right heuristic is applied in the right environment, it can lead to remarkably effective decision making.

This is yet another research tradition demonstrating that it is worthwhile to consider how we can help people make better decisions. People can learn, and we can improve our decision-making abilities.

Sofia Morais: You've contributed to a behavioral science approach designed to empower citizens, which you refer to as “boosting.” Could you define what boosting is and give us a few examples?

Ralph Hertwig: Yes, absolutely. The idea of boosting is really to come up with interventions designed to improve people's competences. And when I say competences, I have a broad understanding. I mean cognitive competences, but they could also include emotional, motivational, or behavioral competences. We can do this by either building on what is already there in terms of competences and trying to foster them further, or we can work to build new competences that a person may not have had up until this point. We can do this by focusing on the internal operation of the mind—our cognition—but we can also make use of the environment, treating it as an ally, and bringing it into the process. But in the end, the major idea is that we believe people can make better decisions, and we help them do so by boosting these competences.

Sofia Morais: And can you share a few examples?

Ralph Hertwig: The examples I have in mind, or that I will give you, are quite different in terms of setup costs—the time it takes to learn them. You might think, “Well, boosting sounds like it takes a lot of effort or time,” and people might be turned off by that. But I would argue that there are boosts that are incredibly easy to pick up and learn.

Sofia Morais: They can be learned in a podcast, right?

Ralph Hertwig: Yes, they can be learned in a podcast. Let me give you one example. It’s called the Dutch Reach. The problem here is what's called “dooring.” Dooring is an issue that occurs when, as a car driver, you park your car at the side of the street, and when you're ready to leave, you're distracted. Maybe you're looking for your kid, turning off the radio, or searching for your cellphone. You don't pay enough attention to what's happening behind you, and you open the door without checking. Suddenly, a bicyclist may crash into your door. That's what's called dooring, and it can lead to really bad accidents, with people getting severely injured. The question is, how can we overcome that?

Now, I could, of course, use nudging. Educative nudges might warn you, “Just don’t do that.” But that’s the very problem: in these situations, we forget. You could have warned me five minutes ago, but in the moment, I may forget. So, what if we told you not to open the door with the hand next to the door handle, but instead, with the other hand? If you reach over with your hand that’s further away from the door, you can’t help but turn slightly around, and that serves the purpose.

Sofia Morais: So you’ll see if a cyclist is coming.

Ralph Hertwig: Exactly. Now, you could say, well, even that needs to be remembered. That’s absolutely true. Though, at some point, it may become a routine and you won’t even need to think about it; you’ll just do it. But to foster that process of ritualizing it, what we do is give you a little memory cue. For instance, you put a small red ribbon on the door handle. So when you look down and want to open the door, you see the red ribbon and think, “Ah, right, I’m supposed to use the other hand.” This is a very simple boost, a very simple intervention that can be very effective in reducing the risk and incidents of dooring. That would be one example of a boost. And here, we are dealing with a public policy problem—dooring.

Okay, that’s one example. Here’s a different one. It’s the idea of how to deal with math anxiety. Math anxiety is something that you can observe across many societies. It’s basically the anxiety people experience when it comes to numbers, equations, or arithmetic. This math anxiety can compromise your success in school and, if you extend it into your professional career, it can also affect the career choices you make and whether you go to university or not. So, it can have an important impact on your economic well-being and life satisfaction.

Here’s a very simple intervention. It goes back to the beautiful work by a group of developmental psychologists from the University of Chicago and Columbia University, if I remember correctly. They had the idea: why wouldn’t parents, when they put their kids to bed in the evening, read very simple stories that have some mathematical content—some number content? Essentially, when you read these little stories, fairy tales, to your child, it’s a very playful way of engaging with numbers, which helps prevent math anxiety from building up.

This is something that even parents who are themselves math-anxious can do, because the content is so simple. Of course, you can then build up the complexity of the content as the child gets older. There are lots of different stories, and you can download them through an app. In one of the studies they conducted, they found that within a school year, children who were regularly read math stories had a three-month advantage in math performance compared to children who weren’t exposed to these stories. This is an enormous effect. And while this is a more engaging, more time-consuming boost, it’s still a very simple one. And when you think about its effectiveness, it’s really amazing. Again, we’re dealing with a public policy problem—math anxiety—that also compromises the economic success of both children and adults in terms of their professional choices. If we had a solution for that, wouldn’t that be fantastic? Now that’s a solution. It’s a solution that assumes we can help people and foster their competences. I’m fascinated by these findings, and I think it’s just a forward-looking intervention I love to talk about.

Now, here’s a third example, coming from a very different area: misinformation. We talked about this earlier, Sofia, in terms of Covid. The WHO spoke of a “tsunami of misinformation.” But we don’t just see it in the context of Covid; we also see it in the context of climate change and many other areas as well. So, how do you, as someone looking for information and wanting to read up on a certain topic, make sure the website you’re looking at is credible and provides reliable information?

There’s a really excellent educational scientist from Stanford University, Sam Wineburg, who came up with what he calls “lateral reading.” Lateral reading is the idea of providing people with the competence to do what professional fact-checkers do. What does that mean? Here’s how Sam Wineburg started: he gave his Stanford students and some of his colleagues the task of looking at particular websites, a number of them, and determining whether they were credible or not. It turned out that neither the students nor the professors were particularly good at this task. They were really bad—not much better than chance. And why is that?

Well, he found that the reason is that people love to engage with the content. They stay on the website, read what’s written there, and assess whether it looks professional, whether the graphics are nice, and whether the design is good. Are there references? And if all these boxes can be ticked off, then you may conclude that you can trust it. So, you put a lot of cognitive effort into figuring out whether you can trust the content. But it turns out that this is not a good strategy.

Sofia Morais: Because it’s easy to fake.

Ralph Hertwig: Exactly. That's the point. It’s easy to fake. So, what is a better strategy? Sam Wineburg, after observing what fact-checkers do, argued that they don't focus on the content itself. Instead, they leave the website open in its tab but immediately open other tabs and read up on what is being said about the people or the institution, the agency behind this particular website.

For example, you might find that a website discussing climate change—whether it’s human-made, caused by other factors, or even denying it entirely—might be connected to a group financed by the fossil fuel industry. This additional information puts the content you’re reading in a different light, because now you can see that some of it may be driven by interests, lobbying, or other factors. The idea is not to invest all your cognitive resources into the text in front of you, but to seek out more information, which the internet easily provides.

Sofia Morais: And the name “lateral” comes from the fact that you're opening another tab, right? You go sideways.

Ralph Hertwig: Yes, we go sideways. We're not doing vertical reading, which would involve staying on the site and trying to understand it. Here, we’re doing something different. We’re actually ignoring that content, what we call “critical ignoring,” and looking laterally for other information that helps us quickly determine whether it's credible or not. This is just one of the competences you need to navigate today’s digital, information-rich environment. But it can be quite an important one, and it's definitely a competence worth having.

Sofia Morais: So here, the citizen has the choice to adopt the boost or not, right? It’s entirely transparent. This is very different from nudging.

Ralph Hertwig: Yes, absolutely. I think this is one of the key differences between nudging and boosting. If you think about all these examples—the Dutch Reach, lateral reading, or bedtime stories—these only work with the agreement, full comprehension, and cooperation of the person who wants to engage in the boost. That means two things:

  1. Boosts need to be completely transparent. They need to be transparent in how they work and in their objectives—what they aim to achieve.
  2. They also require the person to cognitively and motivationally engage with them. So, in this sense, boosts require agency and some level of motivation to reach the objective that the boost is aiming for.

Sofia Morais: And what can you say about their longevity?

Ralph Hertwig: We know too little about the longevity of all the interventions we’re talking about, whether it's nudging or boosting, or even many other interventions. We don’t quite know at what point the effects start to decline. The reason for that is that we have too few studies. However, we can still step back and think conceptually about the longevity of boosts.

The hope, of course, is that with a particular boost, once it is in place and, for instance, routinized—think of the Dutch Reach—this becomes a behavior that lasts for a long time. That’s the idea. It’s independent of a particular choice architecture: you’ve learned to do it, and you can apply it in any car you get into. So, at least conceptually speaking, boosts are built to last.

Nudges, on the other hand, depend on the presence of a choice architecture and are therefore subject to changes in that architecture, as well as to changes in the intentions of the choice architect.

Sofia Morais: You have proposed that self-nudging can help people control their own behavior and resist temptations. How does self-nudging work, and what makes it a boost?

Ralph Hertwig: Self-nudging is something I really love. My colleague, Samuli Reijula, a philosopher from Helsinki, and I thought about how we usually think about nudging. It occurred to us: is it only the public choice architect who can make changes in a choice architecture? No, because it could be us. We could be the—actually, we are the masters of our own choice architecture. Think about that. Once you understand that, it’s really powerful.

For instance, think about your kitchen. I bet your refrigerator looks like mine. If you open it up, the salad and vegetables are typically in that opaque drawer at the bottom. And if you're like me, by the end of the week, you’ll remember: “Oh, yes.”

Sofia Morais: They’re still there.

Ralph Hertwig: Yes! You’ll think, “Didn’t I buy something like this?” Then you open it up and think, “Oh, God, it doesn’t look so good anymore. Maybe even rotten.” And that happens. It used to happen to me all the time. But it doesn’t happen anymore because... I mean, why does it happen? Because it’s not at eye level. It’s at the bottom, and you can’t even see it. You forget about it.

But we can undo that. We can change it immediately by taking the vegetables and fruit, putting them at eye level, and placing them in transparent glass containers. You can really see what’s in them, and you can even pre-slice them into bits and pieces to reduce all kinds of friction when it comes to actually consuming them. When you open the refrigerator a little bit hungry and want to eat something, you don’t want to think about cutting, peeling, and all that. It should be ready to eat. And we can do that.

What this means is that, as citizen choice architects—that’s the notion we coined—we are also choice architects. We are the architects of our proximate environment, of what’s around us.

Sofia Morais: Including the digital one.

Ralph Hertwig: That's a very powerful one, where we have lots of degrees of freedom in how we do that—whether we turn on notifications or not, whether we set certain pages as defaults. There are so many degrees of freedom where we can act as choice architects. The important thing is to act as choice architects in ways that are conducive to our well-being. Self-nudging also requires—and that’s why I view it as a boost, a fantastic boost—that we share with the individual some of the knowledge about human behavior that has been collected through nudging. We empower the person to be their own choice architect.

This has many positive downstream consequences because it means the person can really choose the architecture that fits their objectives. It also means I can stop self-nudging if I feel like I no longer need it, or if it no longer aligns with my interests. Self-nudging builds on the notion of competences—competences that I share with you.

There’s one final aspect that I find very important, and that’s agency. Remember, we talked about the importance of agency. The citizen choice architect is an active person because they think about, and may even experiment with, their environment. They see what aspects of the environment affect them that they hadn’t even considered. It took me so long to finally understand why I was forgetting the fruit and vegetables in the refrigerator—it was because they were sitting in a damn opaque drawer at the bottom! Of course, I’d forget them.

Just realizing that little aspects of the environment—they don’t need to be huge to make an impact—can have such enormous effects on our behavior. That’s empowerment. And I want to share this empowerment with people, to put them in a position where they can make strategic changes to their environment.

Sofia Morais: What sort of competences do you think make an empowered public today, and what sort of boosting interventions can help people develop these skills?

Ralph Hertwig: I think there are some very obvious domains we should think about. These days, and especially considering the pandemic, we are often dealing with uncertainty and probabilistic information—medical tests, health statistics—and many of the most important decisions in our lives concern our health. These decisions are profound and often difficult to make. Let’s make sure people understand the important information they receive, for instance, from healthcare professionals. We can do that. There are existing boosts, like “fact boxes” or “icon arrays.” We can also train people in better formats for probabilistic information, often called “natural frequencies.” We can empower people to convert information from probabilities into frequencies.
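The conversion from probabilities to natural frequencies mentioned here can be sketched in a few lines of code. This is a minimal illustration, not material from the episode: the prevalence, sensitivity, and false-positive rate below are hypothetical numbers chosen only to show how expressing a screening test as whole-number counts makes the answer easy to read off.

```python
def natural_frequencies(population, prevalence, sensitivity, false_positive_rate):
    """Re-express probabilistic test information as counts in a
    hypothetical population of a given size (the 'natural frequency'
    format). Returns the counts and the positive predictive value."""
    sick = round(population * prevalence)
    healthy = population - sick
    true_positives = round(sick * sensitivity)
    false_positives = round(healthy * false_positive_rate)
    # P(sick | positive test) becomes a simple ratio of two counts
    ppv = true_positives / (true_positives + false_positives)
    return sick, true_positives, false_positives, ppv

# Illustrative (assumed) numbers: 1% prevalence, 90% sensitivity,
# 9% false-positive rate, in a population of 1000 people.
sick, tp, fp, ppv = natural_frequencies(1000, 0.01, 0.9, 0.09)
print(f"Of 1000 people, {sick} are sick; {tp} sick and {fp} healthy people test positive.")
print(f"Chance of being sick given a positive test: {tp}/{tp + fp} ≈ {ppv:.0%}")
```

Stated as counts ("of 98 people who test positive, 9 are actually sick"), the answer is transparent, whereas the same question posed in conditional probabilities routinely trips people up.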

Another area where we can help people is in financial decisions. There are really interesting studies showing that we can help micro-entrepreneurs, who often struggle with accounting for their finances, by training them with simple accounting heuristics. By the end of the month, they have a much better understanding of the money coming in and going out.

We’ve already talked about misinformation and disinformation. These often toxic information environments in the digital world also require competences. There are various boosting-based interventions for this. Some of them are called “psychological inoculation.” Lateral reading is another example. Another skill that can be taught is something we call “critical ignoring.”

Now, think about this in the digital world. Critical ignoring is a real competence because it enables us to recognize that certain things, and certain players in particular, are actively trying to hijack our attention. This is the essence of the attention economy. Many digital information products are designed specifically to capture our attention by triggering emotions, presenting supposedly novel content, or using negative framing. These are all psychological hot buttons meant to draw us in.

Sometimes, the best defense is to practice critical ignoring. This doesn’t mean simply ignoring information—it means smartly ignoring it. It involves actively deciding what information we should consume and what we should avoid, much like choosing a healthy diet. Critical ignoring is about recognizing informational “junk food” and choosing not to consume it. That’s the core idea behind critical ignoring.

Here’s another example: we can also boost people’s ability to reach their goals by giving them motivational skills and competence. One simple example is “temptation bundling.” We all know the experience of thinking, “Ah, I should exercise. I should get on my stationary bike. But, oh, damn.” What’s been shown in the literature is that if you do something called “temptation bundling”—a competence—you commit to being on the bike and, while you're on it, you watch your favorite TV show. And you only watch the show while on the bike. That’s temptation bundling. It makes it easier to do something you don’t like to do while reinforcing you with something you love.

It’s smart. And it also ties back to the notion of the citizen choice architect. We are creating an environment for ourselves that makes it easier to reach the things we want to do. These are a few examples, but they already illustrate that the sky’s the limit. We can use boosting in all kinds of domains.

Sofia Morais: Where can citizens learn about boosts?

Ralph Hertwig: There are many ways to inform citizens about potential boosts. Of course, we can use traditional methods, like brochures and fact boxes, but there are other ways too. For instance, we can make use of apps. There’s a fantastic app developed at this institute called One Sec. I’m not invested in it, but I think it’s a great app because it helps us deal with the issue of being drawn to certain social media sites that we might be using too often. This app is a self-nudging tool that creates just a little bit of friction, making it easier for us to control our consumption.

Public policy makers can also think about institutions that could provide boosts. For example, the Dutch Reach is regularly taught in driving schools. If you think about the structure of society, there are many entry points where boosts could be introduced—boosts could be taught in kindergarten, in schools, and many other places as well.

Sofia Morais: What are the requirements for applying boosting interventions in a way that is fair and responsible?

Ralph Hertwig: One important point for me is that I don’t want boosts to be misunderstood as blaming the individual. Often, it’s the environment around us that triggers particular behaviors, leading us down certain pathways. So, it’s crucial that boosting doesn’t mean blaming the individual. Rather, we want to provide people with the competences to deal with what we’ve called ultra-processed environments.

Very often, the formation of competences needs to be complemented with systemic changes. Take the example of being boosted to act as a citizen choice architect in my own kitchen: while that’s helpful and can encourage healthier eating, when you consider the ubiquitous food environment with its ultra-processed options, we need more than just empowered citizens. We also need systemic regulation. However, systemic regulations tend to be slow, and sometimes they’re influenced by lobbying interests. So, you may not always get the best regulations or the ones that scientific consensus suggests would be most effective. That’s why, as a first line of defense, I believe we also need boosting.

One aspect I often hear when discussing boosting is that it requires certain cognitive and motivational abilities. Yes, people need to want to change their behavior, and they need to be able to change it. I agree with that. But it’s also important to emphasize that, in many cases, these are minimal requirements. For example, the Dutch Reach is something anyone can learn once they get their driver’s license. You don’t need to be a rocket scientist to learn that. We should develop boosts that are accessible to as many people as possible.

The bedtime story app is another good example. It asks kids questions based on the stories, and the answers are provided in a way that even an anxious parent can use the material, potentially overcoming their own math anxiety. Isn’t that fantastic? This is another example of how, when done smartly, boosts can reduce entry barriers and make them accessible.

Some people argue that boosts could increase inequality in society because only certain groups may be able to engage with them. I can’t completely exclude that possibility, but I believe that if we build boosts smartly, we can also use them to overcome some of the problems of inequality. Think again about the example of math anxiety.

Sofia Morais: Ralph, before we wrap up, I would like to give you the chance to speak directly to two groups. First, if policy makers were listening, what would you want them to take away from this conversation? And second, what's your message for everyday people—people who want to take control of their lives and make better decisions for themselves and for society?

Ralph Hertwig: If I were addressing policy makers—and sometimes I do—I would emphasize the importance of reflecting on their fascination with what we call the deficit model of cognition. This model fixates on people’s mistakes, blunders, and cognitive illusions. Yes, these errors exist—some are debated—but they do exist. However, people also have the ability to learn, and we must harness this potential to learn.

This leads me to my second point: the need for people to exhibit agency. Many of the challenges we face cannot be solved by the government or the state alone, nor simply by implementing choice architectures. Some choice architectures are within our control, and if we are benevolent architects, we can design them to benefit people. But there are many commercially constructed choice architectures beyond our control. We need to equip people with the skills to navigate these environments effectively. These two points are crucial: first, overcoming the obsession with human cognitive shortcomings, and second, the importance of fostering agency.

To individuals, I would say, “Yes, we can,” to borrow a well-known phrase from a former American president. By this, I mean instilling the spirit that we do not have to accept the world as it is—we can change, learn, and adapt.

As policy makers, we must engage in conversation with people. We should not leave them in the dark about the process of behavioral change; instead, they must take ownership of that process. This requires policy makers to trust people and show respect. If policy makers extend trust and respect to people, I believe we will be richly repaid.

Sofia Morais: Ralph, this has been a very insightful conversation. Thank you for joining us.

Ralph Hertwig: Thank you, Sofia, and thank you for these wonderful questions.

[End of transcript]
