Case Study: A design research process to define opportunity areas for new services development

Cristina Colosi
14 min read · Aug 15, 2023

aka, exploring a real-world encounter with digital shadows to create the research-led foundations for the design of services on users’ data.

The Context

In December 2021 I was writing my master’s thesis in Service Design at POLI.design, and I chose to focus my project on the relationship between digital users, data and profiling.
As every design student knows, the project needed to build its foundations on primary research, to inform the ideation and conceptualization of a service proposal based on actual user needs.
In order to gather reliable insights on the topic, I was looking for a real-world encounter between people and their digital shadows.

Using “Spotify Wrapped” to base research on a known user experience

Since 2016, in early December, the music streaming app Spotify has crafted for its users the “Spotify Wrapped”, a compilation of songs built on their activity on the platform over the past year. It is a hugely successful, award-winning viral marketing campaign that floods social feeds every year and lets us take a colourful walk through our own and other people’s music taste.
Spotify Wrapped seemed the perfect boundary object* to have a conversation about how people felt while interacting with their own digital footprint, and seeing the interpretations and profiling that service providers can embroider around them.

* a boundary object is a tangible artefact that facilitates communication on a topic that is otherwise too abstract, complex, or unintelligible.

What will you find and learn in this article?

By reading this case study, you’ll see how the exploration of a specific user experience (in this case, the one with Spotify Wrapped) can lead to learnings and insights that go beyond the experience itself, and can sustain a research-led design process on broader topics.
I’ll walk you through the research process behind my service design master thesis project, made of:

  • interviews with target users about their experience with the 2021 Spotify Wrapped;
  • synthesis of insights that speak more generally about users’ hopes and fears around data, algorithms and web personalisation;
  • definition of the behavioural archetypes of digital users related to the topic of data and digital shadows;
  • identification of opportunity areas for the design of new services.

At the end of the article, there is also an easter egg for you: a workaround for the challenges of making design decisions in a team of one (spoiler: it’s a data-led approach).

But let’s go step by step.

Step 1: User Research

Research Objectives

The design of user research started by defining the research objectives:

  • discover why and how data fascinates and inspires people;
  • discover the value and potential power that people see in data about themselves;
  • discover what people find disturbing or annoying about the use (and misuse) of data about themselves.

As you can see, the research objectives don’t focus at all on the Spotify Wrapped experience per se. Spotify Wrapped is a catalyst for igniting a conversation about how a user’s digital footprint is collected, interpreted, and used by service providers, and for exploring people’s feelings about it.

Research questions

The research questions that guided the conversations with interviewed subjects were the following:

  • In your opinion, how accurate was your Spotify Wrapped?
  • What did you find interesting about it?
  • What surprised you about your own data?
  • What did you find interesting to see in the Spotify Wrapped of other people?
  • What other services, beyond Spotify, come to your mind that do a similar job of giving you back data and insights from your use of their services?
  • What other services would you like to see doing a similar job, but actually don’t?
  • In conclusion, make a wish: if you could have different access to your own digital footprint, what would you use it for?

The interview started by asking users about their actual experience of the Spotify Wrapped, then moved to a more general exploration of the topic.
Starting the interview by asking users to share a recent experience, instead of asking more abstract questions about their feelings or opinions on the use of digital data, shadows, and profiling, is a good way to explore their thoughts on a complex, possibly ambivalent subject, and a better strategy for receiving sincere, authentic answers. After all, people are experts on their own experience, not on subject matters.

Collection and analysis of interview data

This research was conducted with 10 subjects, each interviewed for 30 minutes. Notes from the interviews were taken in the app Notion, with each interview recorded as a database entry. Relevant quotes from the interviews were reported on a Miro board to create clusters of emerging topics and inform the synthesis of insights. Each interview in the database was also tagged along three categories: pains, gains, and jobs to be done, which guided the later definition of behavioural archetypes.
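The tagging-and-clustering workflow described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the subjects, quotes, tags, and macro-topics below are placeholders invented for the example, not records from the actual study.

```python
from collections import defaultdict

# Hypothetical interview records, mirroring the Notion database:
# each entry holds a quote, the pains/gains/jobs-to-be-done tags,
# and the macro-topic it was clustered under on the Miro board.
interviews = [
    {
        "subject": "P01",
        "quote": "I loved seeing my top artists, but who else sees this?",
        "tags": {"pains": ["privacy concern"],
                 "gains": ["self-knowledge"],
                 "jobs_to_be_done": ["understand my own habits"]},
        "macro_topic": "informed consents on the use of personal data",
    },
    {
        "subject": "P02",
        "quote": "My Wrapped felt like a diary I never wrote.",
        "tags": {"pains": [],
                 "gains": ["memory keeping"],
                 "jobs_to_be_done": ["archive my year"]},
        "macro_topic": "personal media and data archives",
    },
]

def cluster_by_topic(records):
    """Group quotes under their macro-topic, like the Miro clusters."""
    clusters = defaultdict(list)
    for record in records:
        clusters[record["macro_topic"]].append(
            (record["subject"], record["quote"])
        )
    return dict(clusters)

clusters = cluster_by_topic(interviews)
for topic, quotes in clusters.items():
    print(f"{topic}: {len(quotes)} quote(s)")
```

In practice the clustering judgment stays human; the point of structuring the notes this way is that the same tagged records can later be re-queried by pains, gains, or jobs to be done when defining archetypes.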

Backstage operations of research analysis, with tags and quotes from user interviews.

Step 2: Synthesis of insights

Identifying macro-topics and relevant data types.

During the analysis of the interviews, I identified the macro-topics that emerged as users answered the questions in the interview guide.
The macro-topics mentioned were:

  • information ecology;
  • informed consents on the use of personal data;
  • personal media and data archives;
  • data economy & data brokers;
  • digital engagement & tech-addiction;
  • quality control of AI and interpretation models;
  • quantity vs. quality of data.

For each macro-topic, I collected the relevant quotes from the interviews, looking for patterns in thinking across subjects: the foundation of insights.

As a further analysis, I highlighted the different “types of data” mentioned by interviewed subjects during the conversations: Posts on social media — Shopping suggestions — Device usage stats — Media archives — Editorial content — Ratings, feedback and product reviews — Geo-localization — Money, expenses and credits — Medical data — Test and quiz results — Mood trackers. This helped in the later definition of Behavioural Archetypes.

From data analysis to Insights

The approach used to identify insights (and possibly the only approach) was simply to sit with the data. As you go back and forth between notes and quotes (I’m a huge fan of quotes, as they are far more evocative than any rephrased post-it you may ever write), you’ll start having that “a-ha” moment in which you see something appearing between the lines.
I clustered quotes from the interviews using the macro-topics as guidance. I initially formulated a dozen insights, but after some further fusing and clustering I reduced them to four: two referring to what people love, expect, and appreciate about the power of data (Pump my Brain; Unique like me), and two referring to what people fear and dislike about its use or misuse (Profile Prison; Free-range Algorithms).

Insight #1: Pump my Brain

People are generally annoyed by the ways data is used to manipulate their decisions and perceptions, in a way that seems to advantage primarily the service providers playing in the data economy.
On the other hand, they realise how a different use of data could be a powerful tool to serve their own goals. Even if these goals differed among people, there was one overarching function that data served for all of them: enhancing one or more of their cognitive functions, such as memory, correlating facts and information, interpreting and understanding, or planning, forecasting, and decision-making.

Insight #2: Unique — like me

People expressed a fascination for those interpretations or collections of their data in which they really felt represented.
If on one side they were a little concerned about the accuracy of their profiles, due to a general lack of trust in the ways data could be used against them, they were positively amazed to receive a very tailored service that made them feel unique and seen. Anything less than a high level of personalization may be considered inadequate and unjust in representing them within the rich spectrum of human variety.

Insight #3: Profile Prison

The traces of our digital profiling are recognizable in the sponsored content that targets us on the web, the search suggestions of our browser, or the hierarchies in our social media feeds. This profiling exposes us to a digital realm shaped by our (supposed) preferences and (supposed) desires — with an ambivalent utility. If it’s true that this helps us reach content and information we’re interested in, saving us time and energy, it may also damage our information ecology, reducing us to who we’ve been in the past, whether we like it or not. The algorithm may want us to be, or become, ever more like the persona we’ve been profiled into, rather than the person we could be or aspire to be.

Insight #4: Free-range algorithms

In modern society, there are many requirements on the quality and transparency of the food we consume, the ingredients it contains, and the production and transformation processes it goes through. This does not yet apply to the ingredients and processes behind the algorithms that digest online data.
People are more or less aware of this, and feel the uncertainty of not knowing the operations happening behind their screens, the accuracy of the models applied to interpret the data they produce, the type and quality of the data used to train these algorithms, and the trustworthiness of the results of their manipulation and decision-making processes.

Step 3: Defining Behavioural Archetypes

From further analysis of the primary research data, I could identify four different behavioural archetypes. They emerged cross-sectionally from different users by clustering their Jobs to be Done, Pains, and Gains in an affinity map, and by defining, in a matrix chart, the two tension axes that distinguish one archetype from the other.

How to identify the tension axes that distinguish behavioural archetypes?

My personal way of identifying a good tension axis is to sit with the data (again!). As you go back and forth between quotes, clusters, and insights, you’ll start seeing some “ways of being” that apparently distinguish one respondent from another. Initially, you may find more than two dimensions, or spectrums, along which a behaviour can be described. Once you have a few, you want to play with these tension axes, creating a plane with two of them and seeing which “archetypes” would emerge by fixing different values along each axis. Imagine creating a 2x2 matrix, in which the value for each tension can be high or low: you will obtain 4 combinations, aka 4 archetypes, like the ones you’ll see below from this project. I suppose you could create more archetypes using a third axis, or a more granular scale (i.e. high, medium, low), but I don’t have any advice to spare on when you may want more than 4 archetypes.
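The 2x2 trick above is just an enumeration of axis poles, so it can be sketched mechanically. This is an illustrative sketch: the axis names follow the article, but the generated cells are candidate archetypes to be named and validated against the data, not the project’s actual mapping.

```python
from itertools import product

# Two tension axes, each with its two poles. The horizontal axis
# is the "interest" spectrum and the vertical axis the "attitude"
# spectrum, as described in the article.
axes = {
    "interest": ("collecting data", "analyzing data"),
    "attitude": ("methodical", "chaotic"),
}

def enumerate_archetypes(axes):
    """Return every combination of axis poles.

    With 2 axes of 2 poles each, this yields the 4 cells of the
    2x2 matrix, i.e. 4 candidate archetypes.
    """
    names = list(axes)
    return [dict(zip(names, combo)) for combo in product(*axes.values())]

cells = enumerate_archetypes(axes)
for cell in cells:
    print(cell)
```

Adding a third axis, or a three-level scale per axis, would grow the cell count multiplicatively (3 binary axes give 8 cells), which is exactly why the article suggests stopping at 4 unless you have a good reason not to.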

I do have another type of advice, though: don’t try to trace back each of the interviewed subjects to one specific archetype. If you try, you’ll probably find that a person is (much) more complex than any schematic description of an archetype, and may share some aspects with one, and some other aspects with another. If that happens, cheer up! It’s a good sign.
Yet, what does need to happen is that for each of the interviewed subjects you can recognize at least one dominant archetype. If that’s not true, your archetypes may not be representative enough. In this case, take a step back, sit a bit longer with your data, and reconsider either the tension axes or the archetypes emerging through them. By trial and error, you will identify the two axes that are most relevant, and that truly distinguish user archetypes according to their behaviour.

In this project, the two tension axes were: (i) the type of data that interests users the most (from collecting raw data to their analysis and interpretation), and (ii) the attitude they have towards managing their data (from methodical to chaotic).

Horizontal axis: Interest

The horizontal axis refers to those aspects of data that people find more interesting or useful to them.
Collecting data refers to an interest in creating personal archives, collecting memories, and a tendency to consider data a descriptive narration of one’s personal story, with a focus on the past and history.
Analyzing data refers to an interest in the meaning behind data, a fascination for interpretation models, and a focus on their atemporal, universal, or forward-looking potential.

Vertical axis: Attitude

The vertical axis refers to the attitude that people have towards the management of their data, in terms of consumption, production, or storage.
A methodical attitude refers to the tendency to keep track of online activities, traces, and preferences. People with a methodical attitude are interested in maintaining high efficiency and control over what they give attention to.
A chaotic attitude lets the scattered content of the internet inspire and guide navigation. Users with a chaotic attitude are interested in many things that change over time; they don’t care about keeping strict track of such changes, but rather value the enjoyment of the ride.

The Four Behavioural Archetypes

To describe the different archetypes, I defined an archetype-specific value proposition, the major pains and gains, the data they’re most interested in, and a description of how the four overarching insights apply specifically to each of them.
PS: reporting a relevant quote is always a good idea.

  • Archetype: The Moment Glorifier
    Interest: Data itself
    Attitude: Methodical

  • Archetype: The Content Muncher
    Interest: Data selection
    Attitude: Pseudo-chaotic
  • Archetype: The Self-optimizer
    Interest: Data correlation
    Attitude: Pseudo-methodical
  • Archetype: The Meaning Wonderer
    Interest: Data interpretation
    Attitude: Chaotic

Step 4: Opportunity Areas

Opportunity Canvas

At this point, my brain was overloaded with information about data, footprints, algorithms, and whatnot — and it was frantically looking in the background for a valuable opportunity space to continue the project with.
The first places I looked in the quest for valuable opportunity areas were the insights and the pains that emerged during research. A good dose of desk research was also done at this point, to find inspiration in solutions, trends, or signals of change that existed or were emerging in the real world around users’ digital data. I started clustering these elements from primary and secondary research into a series of potential opportunity areas. At this point, I needed to trust my instinct and look for something that glowed as I was sitting (yet another time!) with the research data. But I didn’t want to trust my instinct too much, so I avoided selecting just one opportunity area quite yet.
After some back and forth (a classic, designerly dance motif) I was ready to define the opportunity areas by merging insights, quotes from interviews, relevant trends and inspiring existing services in an Opportunity Canvas.

The Opportunity canvas contained:

  • A description of the opportunity area: the most important part of the canvas, but also the last one you will fill in.
  • The insights that inform the user needs around the opportunity.
  • Contextual, socio-political trends or events that spoke to the relevance of the opportunity area in the real world.
  • Existing services that address the opportunity area, directly or indirectly.

In this project, I identified three opportunity areas:

Kickstarting Ideation

Now that I had a few promising opportunity areas to explore, it was time to move on to ideating solutions for them (note how the description of an Opportunity Area can easily become a How Might We question!).
For each opportunity area, I went through an ideation phase to collect service ideas. As every design student knows, an ideation phase usually starts by preferring quantity over quality — and, soon enough, the Miro board dedicated to idea generation was abundant with more-or-less interesting service concepts across the three opportunity areas.
To go from quantity towards quality, I performed two rounds of what I called a “data voting”.

Data Voting — an alternative to dot voting for design teams of one.

We are all familiar with dot-voting: a simple and somewhat reliable method to use group judgement to select “some from the many”, in this case to select the most promising service ideas.
However sad it may sound, a master thesis is more often than not a solo project. So, how do you overcome the challenge of selecting the best ideas in a team of one?
My solution was to transform dot-voting into data-voting: each idea obtained a data-vote for each behavioural archetype whose Jobs to be Done or Pain Points were addressed by the sketched service concept, plus an additional decider vote if the idea intersected one of the macro-topics that emerged during primary research. Ideas that collected fewer than three votes and no decider vote were discarded.
At the end of this process, I had 15 service concept ideas across the three opportunity areas.
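The data-voting rule above is simple enough to express as a scoring function. This is a hedged sketch: the macro-topics, archetype needs, and example idea below are illustrative placeholders, but the rule follows the text, one vote per archetype whose needs the idea addresses, a decider vote if the idea intersects a research macro-topic, and elimination when an idea has fewer than three votes and no decider.

```python
# Illustrative macro-topics and archetype needs; the real project's
# lists came from the primary research, not from this sketch.
MACRO_TOPICS = {"information ecology", "data economy", "personal archives"}

archetype_needs = {
    "Moment Glorifier": {"archive memories", "relive moments"},
    "Content Muncher": {"discover content", "save time"},
    "Self-optimizer": {"track habits", "improve decisions"},
    "Meaning Wonderer": {"understand myself", "interpret data"},
}

def data_vote(idea):
    """Return (votes, has_decider) for a sketched service idea."""
    # One vote per archetype whose jobs-to-be-done or pains the idea addresses.
    votes = sum(
        1 for needs in archetype_needs.values()
        if needs & set(idea["addresses"])
    )
    # Decider vote if the idea intersects a macro-topic from research.
    has_decider = bool(MACRO_TOPICS & set(idea["topics"]))
    return votes, has_decider

def keep(idea):
    """Discard ideas with fewer than three votes and no decider vote."""
    votes, has_decider = data_vote(idea)
    return votes >= 3 or has_decider

idea = {
    "name": "Yearly data diary",
    "addresses": ["archive memories", "track habits", "understand myself"],
    "topics": ["personal archives"],
}
print(keep(idea))  # this idea collects 3 votes plus a decider vote
```

The same function covers the second voting round by swapping the criteria, e.g. counting each addressed Job to be Done, Pain, or Gain individually and using Jobs to be Done as the tie-breaker.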

To further increase the quality of the surviving ideas, I listed the most interesting aspects of each service concept more explicitly, fused similar ideas, and crafted a set of more refined service concepts. At the end of the process, I had obtained three service proposals, each identified by a value proposition.

The final selection was made through a second round of data-voting, giving a vote for each Job to be Done, Pain, or Gain of the behavioural archetypes that these ideas addressed, and using Jobs to be Done as the discriminating factor in case of a tie. This way I selected one final service concept to develop for the thesis project.

⚠️ Important Disclaimer ️️
Please consider that a master thesis project enjoys a unique freedom in addressing potentially any opportunity area and designing for any behavioural archetype or user persona, which justifies the arbitrarily chosen criteria for the data-voting sessions of this project. In a real-world environment, an opportunity or a service idea could receive data votes based on other types of criteria: e.g., alignment with company strategy, internal capabilities, market size, entry barriers, and so on…

Conclusions

This master thesis project proceeded with the development of the service concept that won the two rounds of data votes. But the rest of the story of this service-design journey is outside the scope of this article.

I hope that retracing with me the steps of the research behind it inspired you on some level — sparked an idea on how to craft an insight, an opportunity area, an archetype, or a research guide — deepened your love for data-driven design choices — unlocked a thought, an intuition — or simply entertained you for a while.

And next December, as you listen to your Spotify Wrapped, think of all the learnings, topics, and interesting conversations you could have with your friends — or with a stranger design researcher — about the great systems that hide behind an everyday experience, like re-listening to your most recent music obsession, or discovering the true colour of your music aura.
