Behavioral Science Casebook Project

In my free time I have been developing a course, tentatively called Applied Behavioral Science in the Digital Age, to be taught to business school students at either the undergraduate or graduate level. In the course, students will study how the pervasive reach of digital technology into our lives affects our heuristics, biases, and other behavioral patterns. Beyond learning about behavioral science theories in the digital age, students will learn how to apply those key theoretical concepts by discussing real corporate case studies and participating in hands-on exercises related to nudging and experimental design. The class will also discuss key elements of starting and implementing behavioral science initiatives within a company. The course will be especially geared toward those interested in professional careers in consulting, product development, marketing, services, and technology app (e.g., FinTech) settings.

As related to that course, I have started to develop a short book that will cover specimens and cases based on the real world, such as sample websites, app designs, email campaigns, and customer journeys, with ideas about how to evaluate such designs through the lens of behavioral science. If you have interesting examples and specimens for me to consider including (they can be disguised or anonymized as needed), please feel free to correspond with me at sds77@cornell.edu. If the specimen is from your company and you are interested, I can potentially perform a behavioral audit on the materials provided.

What Can User Experience (UX) Designers Learn from the Field of Behavioral Economics?

This post is based on a question that I answered previously on Quora.

Although it is not exclusive to behavioral economics, A/B testing is something I often work with companies to adopt. On one hand, this means building the company's capability to integrate specific aspects of its product management, software development, UX, data science, and marketing processes. But it also means developing a research mindset that comes from the experimental side of behavioral economics. For example, if one really wants to nail down which aspects of a UX or customer experience affect behavior and outcomes, the gold standard is randomized assignment, A/B testing, and the discipline to change only one item between testing conditions. In setting up the A and B conditions for a behaviorally informed UX isolation test, one can add, subtract, or substitute a single element between the two conditions. If you change more than one element, your findings will be confounded across the multiple elements changed, and you won't be able to tell which change worked or didn't. UX teams should become accustomed to working with testing harnesses like Visual Website Optimizer, Optimizely, and the like.
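
To make the one-change-at-a-time discipline concrete, here is a minimal sketch in Python (the variant contents and the hash-based assignment function are illustrative assumptions, not tied to any particular testing harness) of randomizing users into two conditions that differ by exactly one element:

```python
import hashlib

# Two hypothetical variants that differ by exactly ONE element: the
# call-to-action label. Everything else is held constant so that any
# difference in outcomes can be attributed to that single change.
VARIANTS = {
    "A": {"headline": "Save automatically", "button_color": "orange",
          "cta_label": "Sign up"},
    "B": {"headline": "Save automatically", "button_color": "orange",
          "cta_label": "Start saving"},
}


def assign_variant(user_id: str) -> str:
    """Deterministically assign a user to 'A' or 'B' (roughly 50/50).

    Hashing the user id keeps the assignment stable across visits,
    similar in spirit to what commercial testing harnesses do.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"


if __name__ == "__main__":
    for uid in ["user-001", "user-002", "user-003"]:
        arm = assign_variant(uid)
        print(uid, arm, VARIANTS[arm]["cta_label"])
```

The point is less the hashing trick than the structure of VARIANTS: if the two variants differed on more than one key, any difference in outcomes would be confounded.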

For a little more on A/B testing, see this WSJ article by one of my colleagues, which describes a simple but extremely powerful A/B test we worked on with a FinTech company's UX: It's Time to A/B Test Your Financial Life

If you are interested in other aspects related to the digital UX world and behavioral economics, you might also want to check out a book that was written by two of my colleagues: The Smarter Screen: Surprising Ways to Influence and Improve Online Behavior.

What Does a Chief Behavioral Officer Do?

This post is based on a previous question posed to me on Quora.

The role of a Chief Behavioral Officer (CBO) varies, but a common theme I've seen is that they analyze, plan, innovate, and implement aspects of the business using insights and methods from the behavioral sciences (e.g., behavioral economics, psychology). Some companies with CBOs focus mostly on marketing communications or thought leadership (e.g., research), while others get involved in bringing insights and designs to product development (e.g., applied research). Some CBOs may directly manage people, such as a team of PhDs, analysts, etc., as well as partnerships (e.g., with academic researchers). The approach of CBOs may also vary in terms of the science. For example, some may leverage pre-existing research. Others may work with big data (e.g., proprietary) and correlational or instrumental-variable-type analysis. Yet others may take an experimental approach (e.g., A/B testing) and work with product and service teams to directly measure how designs affect behavior and outcomes.

A key aspect of determining the activities of a CBO really comes down to setting goals for the larger organization, assessing gaps and resources, and developing a tactical plan to meet the goals over time. As an example, for the past few CBOs I have helped, we often developed 30–60–90 day plans to get the organization rolling initially, with longer-term planning and thinking happening in parallel.

Behavioral Science Casebook Example: Ring Website Screens

The example below is a working draft of material related to the Behavioral Science Casebook project. It is intended for educational purposes, helping both students and working professionals think about digital designs and customer experiences using behavioral science concepts. High-level behavioral audit comments and potential approaches to consider are provided in some cases (for instructor purposes).

Things to Think About

  • Goals
    • What is the apparent priority in terms of product versus service sales?
    • To what extent does this website target more knowledgeable versus less knowledgeable users?
  • Choice Architecture and Design
    • What are some key elements of the design?
    • How do the design elements behaviorally address user attention?
    • How do the design elements support user decision-making?
    • How do the design elements inhibit user decision-making?

Behavioral Audit of Website Screen 1 (Instructor Notes)

  • Goals
    • Products appear to be the priority, although some space is given to services.
    • For example, products appear as the first item in the menu bar (primacy effect).
    • The site appears to target somewhat more knowledgeable users as there is a fairly long list of choices to navigate without assisted help.
  • Behavioral Architecture and Design
    • The design includes a long menu list (possibly triggering choice overload).
    • Photo thumbnails garner user attention, although some photos are not intuitive (e.g., not aligned with System 1 fast thinking, such as “Offers” and “Professional Installation”).
    • More than half of the menu items carry a “New” label, which grabs attention, but to what extent is it clear what the label means?

Behavioral Audit of Website Screen 2 (Instructor Notes)

  • Goals
    • Best sellers are given priority, but it is unclear how products are prioritized within that category.
  • Behavioral Architecture and Design
    • “Best Seller” and “Save $X” graphic elements highlight attention points.
    • Unclear how the blue circle “Sale” icon differs from the “Save $X” label.
    • List prices are provided as reference points with sales prices also listed.
    • Orange “Buy” buttons make it clear where actions can be taken to buy, although it is not clear whether users have enough information at this point and are ready to buy.

Potential Behavioral Approaches to Consider (Instructor Notes)

  • Reduce choice overload by creating a better way to navigate based on users’ behavioral needs (e.g., those who know what they are looking for versus those who need help or want ideas).
  • Reduce choice overload by changing the presentation method (e.g., fewer items per page or per line).
  • Revalidate goals and measurement outcomes.
  • Create a research and testing strategy, e.g.,
    • a control versus challenger test of a simplified design (A/B test; see the sample-size sketch after this list)
    • research to assess how well the website works for different users
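
As a planning aid for the control-versus-challenger test above, here is a rough sketch using the standard two-proportion sample-size approximation (the baseline conversion rate and hoped-for lift are hypothetical placeholders):

```python
from statistics import NormalDist


def sample_size_per_arm(p_control: float, p_challenger: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_challenger) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_control * (1 - p_control)
                             + p_challenger * (1 - p_challenger)) ** 0.5) ** 2
    return round(numerator / (p_control - p_challenger) ** 2)


# Hypothetical planning numbers: 3% baseline conversion, hoping the
# simplified design lifts it to 4%.
print(sample_size_per_arm(0.03, 0.04))  # roughly 5,300 users per arm
```

A calculation like this helps set expectations for how long a test would need to run before declaring a winner.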

Podcast Interview on 401(k) Plans and Behavioral Finance Trends

Thanks to Rick Unser for having me recently on his 401(k) Fridays podcast. This interview is geared toward defined contribution plan sponsors and those closely involved with this segment of the market (e.g., advisors, consultants, recordkeepers, investment-only providers). I also draw on some insights and activity occurring in other areas of the financial services market (e.g., retirement income, wealth management). The podcast may be found at:

Apple Podcasts – https://apple.co/2EAsw5J
Google Play – http://bit.ly/2Hfsgfa
Stitcher – http://bit.ly/2Uj08eT

Additionally, some references that I allude to on the podcast, and which may be of interest, include the following:

When Implementing Behavioral Science, What Is the Role of a Choice Architect?

This post is based on an answer I wrote in response to a question posed to me on Quora, “What do choice architects do?” I wanted to repost my answer here because I still feel there is a lack of understanding about what it means to implement nudging and behavioral science within companies, and the role of choice architects is key.

Choice architects essentially use insights from behavioral science to design environments for people that encourage or support particular end goals.

For example, suppose there is a layered set of three main goals: encourage people to 1) participate in a retirement savings plan, 2) save enough money, and 3) invest wisely. A choice architect may create solutions that address the behavioral obstacles hampering these goals. These solutions could include auto-enrolling people into a retirement plan (versus having them opt in) to address the status quo bias that hampers participation in a savings plan. To help people reach healthy saving rates over time, the architect may create a way for people to commit today to savings increases in the future (a process that addresses present bias and hyperbolic discounting). Finally, an architect may default most people into an automatically managed, diversified portfolio that evolves as the person reaches and continues into retirement. This essentially makes a healthy investment choice the easy default for most people and for most of their money.
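
As a toy illustration of the commit-today-to-save-more-tomorrow idea, here is a small sketch of how a deferral rate might evolve under a simple auto-escalation commitment (the starting rate, step size, and cap are hypothetical, not recommendations):

```python
def escalation_schedule(start_rate: float, annual_step: float,
                        cap: float, years: int) -> list:
    """Yearly deferral rates under a simple auto-escalation commitment.

    The saver commits today to future increases; the rate steps up once
    per year until it reaches the cap.
    """
    rates, rate = [], start_rate
    for _ in range(years):
        rates.append(round(rate, 4))
        rate = min(rate + annual_step, cap)
    return rates


# Hypothetical plan: start at 3% of pay, step up 1 point per year, cap at 10%.
print(escalation_schedule(0.03, 0.01, 0.10, years=10))
# [0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.1, 0.1]
```

The behavioral point is that the increases are agreed to in advance, so present bias works in the saver's favor rather than against it.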

So choice architects do the following things:

  1. They identify goals of all constituents, any guardrails (e.g., ethical, philosophical, financial), and desired outcome measurements.
  2. They look for behavioral obstacles that people face in whatever environment is being addressed or designed (e.g., financial spending, medication adherence, governmental compliance).
  3. Architects try to leverage behavioral science research where they can (e.g., to inform the precise nature of obstacles and potential ways to address them).
  4. They innovate and try to create solutions and interventions to address behavioral obstacles (e.g., website design, text messages, email content, customer outreach, product design, decision tools).
  5. Architects also look to measure and perform A/B testing where they can to see how solutions and interventions affect outcomes (see the sketch after this list).
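
For the measurement step, a minimal sketch of a two-proportion z-test on made-up enrollment counts (not results from any real deployment) might look like this:

```python
from statistics import NormalDist


def two_proportion_z_test(successes_a: int, n_a: int,
                          successes_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for a difference in proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Made-up counts: 420 of 5,000 users enroll under the intervention versus
# 310 of 5,000 under the control experience.
z, p = two_proportion_z_test(420, 5000, 310, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```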

Quick Thoughts on Boosting Versus Nudging

I only recently learned about the term “boosting.” Boosting takes a different worldview: it addresses a person’s competencies, whereas nudging tends to address immediate behavior. There does appear to be some overlap between boosting and System 2 nudges (where the nudge tries to engage a person’s slow, reflective thinking). There is also overlap between short-term boosting and educational nudges. However, long-term boosting is about building a person’s competencies (e.g., teaching them, giving them tools, getting competencies to persist beyond the immediate decision point). A boost appears to necessarily require both transparency of the intervention and cooperation of the person targeted by the boost. Those advancing the concept of boosting acknowledge that boosting may be more costly to implement and less effective at affecting immediate behavior than nudges.

For more details on boosting, I recommend starting with the following paper.

An Anecdote on How Experimental Design and Statistics Are Used in Behavioral Economics and Business

In a recent study with Hal Hershfield and Shlomo Benartzi at UCLA, we worked with a FinTech company that had its roots in providing a mobile app to help Millennials save incremental money by rounding up purchases. For example, if you bought a cup of coffee for $4.55, you could round the purchase up to $5.00 and save the incremental $0.45.
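
As a trivial illustration of the round-up mechanic (the function name and whole-dollar rounding rule are just illustrative assumptions), the increment can be computed like this:

```python
from decimal import Decimal, ROUND_CEILING


def round_up_savings(purchase_amount: str) -> Decimal:
    """Amount saved by rounding a purchase up to the next whole dollar."""
    amount = Decimal(purchase_amount)
    return amount.to_integral_value(rounding=ROUND_CEILING) - amount


print(round_up_savings("4.55"))  # 0.45, as in the coffee example
print(round_up_savings("7.00"))  # 0.00 -- already a whole-dollar amount
```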

We wanted to introduce the concept of a recurring savings feature, where people could save a specified amount of money at regular intervals. As part of that effort, we constructed an experimental design and A/B/C test in which, during the sign-up process, users were randomly assigned to one of three treatments offering an opportunity to save: A) $150 per month, B) $35 per week, or C) $5 per day. At the heart of the design is the notion of presenting essentially equivalent information but using temporal reframing to present the choice option differently. Our hypothesis was that the $5 per day treatment would yield the most sign-ups for recurring savings. We then used traditional statistics to show that the differences in sign-ups between these treatment conditions were statistically significant. In this case, we provided evidence that sign-ups were 4x higher under the daily frame and that the daily frame closed a 3x gap between the highest- and lowest-income users in the percentage of people saving, a gap observed under the monthly frame. More details on the study can be found here: Temporal Reframing and Savings: A Field Experiment
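
To illustrate the kind of traditional statistics referred to above, one could compare sign-up rates across the three framing arms with a chi-square test. This is only a sketch; the counts below are hypothetical placeholders, not the study's data (which is reported in the linked paper):

```python
from scipy.stats import chi2_contingency

# Hypothetical sign-up counts per arm: [signed_up, did_not_sign_up]
observed = [
    [70, 930],    # A: $150 per month
    [95, 905],    # B: $35 per week
    [280, 720],   # C: $5 per day
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")

# If the omnibus test is significant, pairwise comparisons (e.g., daily
# versus monthly) can follow, ideally with a multiple-comparison correction.
```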

In other studies I am involved with related to the framing of information and savings, I measure not only outcomes like savings rates (i.e., what people do and choose) but also people’s thoughts, perceptions, and mental associations regarding the financial decisions (i.e., the psychology and process). Using statistics to better understand the psychology behind people’s decisions can help inform better user experiences (e.g., to improve outcomes, reduce confusion, increase confidence).

Statistics can be a very powerful tool when trying to analyze messy things like social science processes and human decisions. Companies are starting to ramp up their data science capabilities a lot more, and while I think much more can be done in terms of incubating behavioral science initiatives, I think the shift to data science is here to stay for quite a while.

Example Companies Involved with Behavioral Economics

Here are some consulting firms and practice groups in the behavioral science and economics space:

These days I see an increasing number of internal behavioral economics groups or initiatives starting within traditional companies or organizations. Here are some examples:

I also see more independent consultants in this space than I did 10 years ago.

The lists above will be updated from time to time.


Steve Shu specializes in incubating new initiatives with a primary focus on strategy, technology, and behavioral science. He is author of Inside Nudging: Implementing Behavioral Science Initiatives and The Consulting Apprenticeship: 40 Jump-Start Ideas for You and Your Business.

What Is It Like to Work in a Nudge Unit?

This post is based on a question that was posed to me on Quora, “What does it look like to work at the Behavioural Insights Team, or any other Nudge Unit?”

If you are interested in nudging within the government context, I would check out the book by David Halpern, Inside the Nudge Unit, which essentially addresses the genesis of the Behavioral Insights Team in the UK and its approach to execution, which includes addressing problems one nudge at a time.

For some additional flavor on the US side and how nudging has affected policy, with some of those ideas implemented by the White House Behavioral Sciences Team, check out Cass Sunstein’s book, Simpler. I am not sure how that team is currently doing given the Trump administration and policies to dismantle organizations from both the outside and within, but it is a good book. Although it has been some time since I read it, I recall a distinct vibe around calculating return on investment as a key process intertwined with implementing nudges within government.

If you are looking for information on implementing nudge units within companies, I published a book in 2015, Inside Nudging: Implementing Behavioral Science Initiatives. Consistent with the books previously mentioned, I have typically found nudge units to be very project-focused, and they often include elements of randomized controlled trials (RCTs) or experimental work. I have also found that nudge units tend to work best when they deliberately address elements and processes around Goals, Research, Innovation, and Testing. I call this Behavioral GRIT (essentially organizational fortitude to implement behavioral science). I propose that companies implement behavioral science initiatives based on a vision realized through a predominant organizational model (like an innovation center or consulting office) and then augmented with implementation elements (like an advisory board or a chief behavioral scientist). I’ve made an appendix to my book available here, which details predominant organizational models and implementation elements to jump-start one’s thinking: https://steveshuconsulting.com/wp-content/uploads/2015/08/Inside-Nudging-Appendix-A-August-27-2015-v7.pdf. Leafing through that appendix might also give you some indirect ideas about what it might be like to work in a commercial nudge unit (recognizing that these vary a lot).

Finally, I also put together a short video introducing behavioral science initiatives (again related to commercial settings), and the video might also provide some additional color: