Recent article: “The effectiveness of nudging: A meta-analysis of choice architecture interventions across behavioral domains”
In one of my prior Applied Behavioral Economics lectures, I mentioned the value of looking not only at individual studies but also at meta-analyses (essentially studies of studies) to help inform behavioral perspectives. This article covers a meta-analysis of choice architecture interventions: https://www.pnas.org/content/119/1/e2107346118
In this article, there were two observations that really stuck out to me:
1) Across the range of domains where choice architecture is applied (health, food, environment, finance, prosocial), effects are largest for food choices and smallest in the financial domain. These effects are potentially moderated by domain because food choices carry lower behavioral costs and lower perceived consequences, whereas financial choices carry higher behavioral costs and higher perceived consequences.
2) Decision structure changes (choice architecture) outperform decision information (information architecture) and decision assistance approaches, potentially because choice architecture places less demand on cognitive information processing and is less susceptible to individual differences and goals. (But remember that we will start to address personalization and individual differences in upcoming classes.)
This post is based on a question that I answered previously on Quora.
Although it’s not exclusively from the realm of behavioral economics, A/B testing is something I often encourage companies to adopt. On one hand, this means building the capability to integrate specific aspects of product management, software development, UX, data science, and marketing processes. But it also means developing a research mindset that comes from the experimental side of behavioral economics. For example, if one really wants to nail down which aspects of a UX or customer experience affect behavior and outcomes, the gold standard is randomized assignment, A/B testing, and the discipline of changing only one item between testing conditions. In setting up the A and B conditions for a behavioral-insights-based UX isolation test, one can add, subtract, or substitute a single element between the two conditions. If you change more than one element, your findings will be confounded across the multiple changes, and you won’t be able to tell which change worked or didn’t. UX teams should become used to working with testing harnesses like Visual Website Optimizer, Optimizely, and the like.
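As a minimal sketch of this discipline (the conversion counts are hypothetical and the helper names are my own, not from any specific testing tool), one can deterministically randomize each user into a condition and then compare conversion rates with a two-proportion z-test:

```python
import math
import random

def assign_variant(user_id: int) -> str:
    """Randomly but deterministically assign a user to one test condition."""
    # Seeding a per-user RNG keeps assignment stable across sessions.
    return random.Random(user_id).choice(["A", "B"])

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates between A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (expressed via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: B differs from A by exactly one UX element.
z, p = two_proportion_z(conv_a=120, n_a=1000, conv_b=160, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Because only one element differs between the conditions, a significant result can be attributed to that element rather than to a confounded mix of changes.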
For a little more on A/B testing, see the WSJ article by one of my colleagues, “It’s Time to A/B Test Your Financial Life.” It describes a simple but extremely powerful A/B test we worked on with a FinTech company’s UX.
The role of a Chief Behavioral Officer (CBO) varies, but a common theme I’ve seen is that they analyze, plan, innovate, and implement aspects of the business using insights and methods from the behavioral sciences (e.g., behavioral economics, psychology). Some companies with CBOs focus mostly on marketing communications or thought leadership (e.g., research), while others get involved with bringing insights and designs to product development (e.g., applied research). Some CBOs may directly manage people, such as a team of PhDs, analysts, etc., as well as partnerships (e.g., with academic researchers). The approach of CBOs may also vary in terms of the science. For example, some may leverage pre-existing research. Others may work with big data (e.g., proprietary) and correlational or instrumental-variable analyses. Yet others may take an experimental approach (e.g., A/B testing) and work with product and service teams to directly measure how designs affect behavior and outcomes.
A key aspect of determining the activities of the CBO really comes down to setting goals for the larger organization, assessing gaps and resources, and developing a tactical plan to meet the goals over time. As an example, for the past few CBOs I have helped, we often worked to develop 30–60–90 day plans to initially get the organization rolling, with longer-term planning and thinking happening in parallel.
In my free time I have been developing a course, tentatively called Applied Behavioral Science in the Digital Age to be taught to business school students at either the undergraduate or graduate level. In the course, students will study how the pervasive reach of digital technology into our lives affects our heuristics, biases and other behavioral patterns. In addition to learning about behavioral science theories in the digital age, students will then learn how to apply those key theoretical concepts through discussing actual, corporate case studies and participating in hands-on exercises related to nudging and experimental design. The class will discuss key elements to starting and implementing behavioral science initiatives within a company. The course will be especially geared toward those interested in professional careers within consulting, product development, marketing, services, and technology app (e.g., FinTech) settings.
As related to that course, I have started to develop a short book that will cover specimens and cases based on the real world, such as sample websites, app designs, email campaigns, and customer journeys, with ideas about how to evaluate such designs through the lens of behavioral science. If you have interesting examples and specimens for me to consider including (they can be disguised or made anonymous as needed), please feel free to correspond with me at email@example.com. If the specimen is from your company and you are interested, I can potentially perform a behavioral audit on the materials provided.
Thanks to Rick Unser for having me recently on his 401(k) Fridays podcast. This interview is geared toward defined contribution plan sponsors and those closely tied to this segment of the market (e.g., advisors, consultants, recordkeepers, investment only). I also draw on some insights and activity occurring in other areas of the financial services market (e.g., retirement income, wealth management). The podcast may be found at:
This post is based on an answer I wrote in response to a question posed to me on Quora, “What do choice architects do?” I wanted to repost my answer here because I still feel there is a lack of understanding about what it means to implement nudging and behavioral science within companies, and the role of choice architects is key.
Choice architects essentially use insights from behavioral science to design environments for people that encourage or support some sort of end goals.
For example, suppose there is a layered set of three main goals: to encourage people to 1) participate in a retirement savings plan, 2) save enough money, and 3) invest wisely. A choice architect may create solutions that address the behavioral obstacles hampering these goals. These solutions could include auto-enrolling people into a retirement plan (versus having them opt in) to address the status quo biases that hamper participation in a savings plan. To get people to healthy saving rates over time, the architect may create a way for people to commit today to savings increases in the future (a process that addresses present bias and hyperbolic discounting). Finally, an architect may default most people into an automatically managed, diversified portfolio that evolves as the person reaches and continues into retirement. This essentially makes a healthy investment choice easy as a default for most people and for most of their money.
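The plan defaults described above can be sketched as a small configuration object; all parameter values and names here are hypothetical illustrations of the design pattern, not any actual plan's settings:

```python
from dataclasses import dataclass

@dataclass
class PlanDesign:
    """Hypothetical defaults a choice architect might set for a savings plan."""
    auto_enroll: bool = True         # opt-out rather than opt-in (status quo bias)
    initial_rate: float = 0.06       # starting deferral rate
    annual_escalation: float = 0.01  # commit today to save more in the future
    rate_cap: float = 0.15           # stop escalating at a healthy ceiling

def deferral_rate(plan: PlanDesign, years_enrolled: int) -> float:
    """Deferral rate for a participant who never overrides the defaults."""
    if not plan.auto_enroll:
        return 0.0  # opt-in plans start everyone who takes no action at zero
    rate = plan.initial_rate + plan.annual_escalation * years_enrolled
    return min(rate, plan.rate_cap)

plan = PlanDesign()
print([round(deferral_rate(plan, y), 2) for y in range(0, 12, 3)])
# → [0.06, 0.09, 0.12, 0.15]
```

The point of the sketch is that the defaults, not the participant, do the work: someone who never acts still ends up enrolled, escalating, and capped at a healthy rate.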
So choice architects do the following things:
They identify goals of all constituents, any guardrails (e.g., ethical, philosophical, financial), and desired outcome measurements.
They look for behavioral obstacles that people face in whatever environment is being addressed or designed (e.g., financial spending, medication adherence, governmental compliance).
Architects try to leverage behavioral science research where they can (e.g., to inform the precise nature of obstacles, potential ways to address).
They innovate and try to create solutions and interventions to address behavioral obstacles (e.g., website design, text messages, email content, customer outreach, product design, decision tools).
Architects also look to measure and perform A/B testing where they can to see how solutions and interventions impact outcomes.
I only recently learned about the term “boosting”. Boosting takes a different worldview, addressing a person’s competencies, whereas nudging tends to address immediate behavior. There does appear to be some overlap between boosting and System 2 nudges (where the nudge tries to engage a person’s slow, reflective thinking), as well as between short-term boosting and educational nudges. However, long-term boosting is about building a person’s competencies (e.g., teaching them, giving them tools, getting competencies to persist even beyond the immediate decision point). A boost appears to necessarily require both transparency of the intervention and cooperation of the person who is its target. Those advancing the concept of boosting admit that boosts may be more costly to implement and less effective at affecting immediate behavior than nudges.
For more details on boosting, I recommend starting with the following paper.
In a recent study with Hal Hershfield and Shlomo Benartzi at UCLA, we worked with a FinTech company that had its roots in providing a mobile app to help Millennials save incremental money by rounding up purchases. For example, if you bought a cup of coffee for $4.55, you could round up to $5.00 and save the incremental $0.45.
We wanted to introduce a recurring savings feature, where people could save a specified amount of money at regular intervals. As part of that effort, we constructed an experimental design and A/B/C test: during the sign-up process, users were randomly assigned to one of three treatments offering an opportunity to save A) $150 per month, B) $35 per week, or C) $5 per day. At the heart of the design is the notion of presenting essentially equivalent information while using temporal reframing to present the choice option differently. Our hypothesis was that the $5 per day treatment would yield the most sign-ups for recurring savings. We used traditional statistics to show that the differences in sign-ups between the treatment conditions were statistically significant. In this case, we provided evidence that sign-ups were 4x higher under the daily frame, and that the daily frame closed a 3x gap between the highest- and lowest-income users (in terms of the percentage of people saving) that appeared under the monthly frame. More details on the study can be found here: Temporal Reframing and Savings: A Field Experiment
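A significance test like the one above can be sketched with a Pearson chi-square on sign-up counts across the three arms. The counts below are hypothetical stand-ins that merely echo the daily-frame advantage, not the study’s actual data, and the helper name is my own:

```python
import math

def chi_square_3arm(signups, totals):
    """Pearson chi-square test of equal sign-up rates across three arms."""
    overall_rate = sum(signups) / sum(totals)  # pooled rate under H0
    stat = 0.0
    for s, n in zip(signups, totals):
        # Sum (observed - expected)^2 / expected over the 3x2 table of
        # sign-ups and non-sign-ups per arm.
        for observed, expected in ((s, n * overall_rate),
                                   (n - s, n * (1 - overall_rate))):
            stat += (observed - expected) ** 2 / expected
    # With 3 arms there are 2 degrees of freedom, where the chi-square
    # survival function has the closed form exp(-x / 2).
    p_value = math.exp(-stat / 2)
    return stat, p_value

# Hypothetical counts: monthly, weekly, daily framing (1,000 users each),
# with the daily frame drawing roughly 4x the monthly sign-ups.
stat, p = chi_square_3arm(signups=[70, 120, 280], totals=[1000, 1000, 1000])
print(f"chi2 = {stat:.1f}, p = {p:.2e}")
```

In practice one would follow a significant omnibus test with pairwise comparisons to pin down which frames differ from which.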
In other studies I am involved with related to the framing of information and savings, I measure not only outcomes like savings rates (i.e., what people do and choose) but also people’s thoughts, perceptions, and mental associations regarding the financial decisions (i.e., the psychology and process). Using statistics to better understand the underlying psychology behind people’s decisions can help one provide better user experiences (e.g., to improve outcomes, reduce confusion, increase confidence).
Statistics can be a very powerful tool when analyzing messy things like social science processes and human decisions. Companies are starting to ramp up their data science capabilities a lot more, and while I think much more can be done in terms of incubating behavioral science initiatives, I believe the shift to data science is here to stay for quite a while.