What we found in our 10,000-person learning experiments

At Kinnu we spent ten months researching learning interventions rooted in cognitive science. Here’s how we did it, and how it’s shaped the app.

We’ve spent the best part of two years building an app that harnesses cognitive science to superpower people’s learning abilities. And we were doing pretty well at it! But ten months ago, we decided we could do better. We felt that we could build something truly special.

At the end of 2023, we decided to halt all our other operations and focus on deep research to answer one question: what’s the very best way to learn? We think we’ve come close to answering that question, and you can read about it in our findings below.

But first, some context

Throughout our research work at Kinnu we use a learning metric called a K-Score to measure how much someone has learned from an intervention. Put simply, the higher the K-Score, the more effective the learning intervention. Based on a massive literature review and the expertise of several team members with PhDs in the science of learning, we designed 15 different experiments to run. Each of these was based on a different hypothesis about how we could improve learning.

The outcome of each experiment would be measured as an improvement in K-Score, which we believe is a reliable measure of actual learning.
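The exact K-Score formula is internal to Kinnu and isn't published here, but for readers who want a concrete picture, here's a purely illustrative sketch of the kind of metric this is: a gain between a pre-test and a delayed post-test, normalised by how much headroom the learner had left. The function and numbers below are hypothetical, not our actual formula.

```python
# Illustrative only: this is NOT Kinnu's actual K-Score formula, just a sketch of
# a common family of learning metrics (normalised gain from pre-test to post-test).

def normalised_gain(pre_correct: int, post_correct: int, total_questions: int) -> float:
    """Fraction of the learner's remaining headroom that the intervention closed."""
    pre = pre_correct / total_questions
    post = post_correct / total_questions
    if pre >= 1.0:  # already at ceiling before the intervention, nothing to measure
        return 0.0
    return (post - pre) / (1.0 - pre)

# Example: 4/20 correct before the intervention, 14/20 a week afterwards.
print(normalised_gain(4, 14, 20))  # 0.625 -> closed ~62.5% of the remaining gap
```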

The next step was to find learners who’d be our test subjects. We put out a notice to our existing users, with an ambitious target of recruiting 500 volunteers. Within a couple of weeks we actually had 10,000, a number that truly blew us away. If you were one of them, we thank you from the bottom of our hearts.

So, now we had a list of experiments, a method for measuring their outcomes, and an army of volunteers.

There was one more thing we needed – a platform for testing on. Our proposed experiments deviated pretty heavily from our existing app, and it became clear that we’d actually need to build a totally new app to run our experiments on. So that’s what we did – we built a minimal, rough-and-ready experimental learning app called Kinnu Labs. We gave access to our testers, and we were ready to start experimenting.

The Big Picture

Let’s cut to the chase – what did we learn from our experiments? The short answer is… enough to rebuild our whole app from the ground up. Based on our findings, we’ve rebuilt Kinnu into Kinnu 2.0, which is a massive leap forward from the original app.

If we could put all of our findings into a single, big-picture summary, it would be this: people learn by building schemas. In other words, learning is about arranging disparate pieces of information into your own model of a concept.

A lot of that comes down to pacing: delivering the right information at the right moment, so the learner can keep improving their understanding. It also means minimising cognitive load – many of our new features work by compressing content, such as questions or definitions, while retaining the same essential information. This lets learners retain more from a lighter load of content.

These were our very high-level findings, but the really interesting stuff (if you are as geeky as we are) comes in the details. Let’s take a look at some of the experiments we ran, and how they’ve shaped the app:

Finding Number 1: Content Is (Still) King

Across every single one of our experiments, we found the most significant variable in determining K-Score was the quality of the written content. As a result, we’ve seriously shaken up content design in Kinnu 2.0.

Orb Pathways +69% K-Score

The brain learns best when it can master clearly defined concepts. ‘Chunking’ is how your brain groups pieces of information together to form discrete concepts. It’s essential to be able to form those chunks in order to construct schemas.

As a result we’ve implemented a new unit of content called ‘orbs’. Each session you now do on Kinnu will be a single, coherent unit of content designed to teach a distinct concept. This has had a major impact on K-Score, while also making the pathways a lot more engaging and readable.
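To make the idea of an orb concrete, here's a minimal, hypothetical sketch of what such a chunk-sized content unit could look like as data. The field names are illustrative assumptions, not Kinnu's actual schema.

```python
# Hypothetical sketch of an "orb": one coherent chunk of content that teaches a
# single concept. Field names are illustrative, not Kinnu's real data model.
from dataclasses import dataclass, field

@dataclass
class Orb:
    title: str                                                 # the one concept this orb teaches
    body: str                                                  # the narrative text of the session
    intro_questions: list[str] = field(default_factory=list)   # priming preview questions
    summary_points: list[str] = field(default_factory=list)    # end-of-orb recap
    concept_ids: list[str] = field(default_factory=list)       # concepts introduced here

photosynthesis = Orb(
    title="Photosynthesis",
    body="Plants convert light, water and carbon dioxide into glucose and oxygen...",
    intro_questions=["What do plants actually 'eat'?"],
    summary_points=["Light energy ends up stored as chemical energy in glucose."],
    concept_ids=["photosynthesis", "chloroplast"],
)
```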

Pathways that tell a narrative +68% K-Score

Another quirk of the human brain is that we love stories. We’re way, way better at retaining information when it’s linked into a narrative.

We’ve now shifted towards writing content in a more narrative, storified way, as opposed to drier, more factual styles. We found our test subjects were much better able to retain information as a result.

Introductory questions and summary points +70% K-Score

The priming effect is what happens when you’re given a little preview of what you’ll learn, before you learn it. This gives us an outline that we can then fill in with the details.

In Kinnu 2.0, before you read an orb, you’re given a preview of what you’ll learn in the form of some questions (which you don’t need to answer). At the end, you’ll read some bullet points that summarise what you’ve just learned.

Finding Number 2: Interaction Matters

We played around with a number of novel ways of interacting with Kinnu. Some of these were surprisingly unsuccessful, like an AI microtutor, which seemed to have little to no effect on learning. Others were better, and have made it into Kinnu 2.0:

Concept Lookup +66% K-Score

For people to build schemas, they need to piece together previously learned concepts into a bigger picture, or higher level of abstraction. To do this, it’s essential that they grasp those building-block concepts first.

Concept Lookup allows you to tap on a highlighted concept and be taken to a definition of it. You can also jump back to the point in the pathway you learned it from. This helps with staying on top of complex definitions and building larger models of understanding.
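Under the hood, a feature like this is essentially a glossary with back-references. Here's a rough, hypothetical sketch of the shape of that data – the names and types are assumptions for illustration, not Kinnu's actual implementation.

```python
# Hypothetical sketch of the data behind a concept-lookup feature: a highlighted
# term maps to its definition plus a back-reference to where it was first taught.
from dataclasses import dataclass

@dataclass
class ConceptEntry:
    concept_id: str
    definition: str
    source_pathway: str   # pathway where the concept was introduced
    source_orb: str       # the session (orb) that taught it

glossary: dict[str, ConceptEntry] = {
    "chloroplast": ConceptEntry(
        concept_id="chloroplast",
        definition="The organelle where photosynthesis happens.",
        source_pathway="Biology Basics",
        source_orb="Photosynthesis",
    ),
}

def lookup(concept_id: str) -> ConceptEntry | None:
    """Tap a highlighted term -> show its definition and a 'jump back' target."""
    return glossary.get(concept_id)
```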

‘Why/How’ Write-In Answer Questions +66% K-Score

It’s much more effective, from a learning angle, to write your own answer than to select from multiple options. So we’re introducing ‘Why/How Questions’, a feature where learners write their own answers to open-ended questions in-app.

These answers are then graded by AI. Originally we had wanted this to be a peer-grading feature, with users rating each other’s answers. That isn’t simple to build, so to gauge interest, we built an AI marking system instead. Users thought they were marking each other’s answers, but they were actually grading and being graded by GPT-4o.

It turns out that AI is actually pretty decent at grading learners’ answers. While we love the idea of implementing peer grading one day, for the time being we’ll be sticking with AI. Side note: if you were one of those users who thought they were doing peer grading – surprise! Sorry we couldn’t tell you this at the time, but it was important for the experiment that this was kept secret.
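For the curious, grading a write-in answer with a model like GPT-4o can be done in a few lines. The sketch below is not our actual prompt or pipeline – the rubric, wording and example are assumptions for illustration – but it uses the standard OpenAI chat completions API and assumes an OPENAI_API_KEY is set in the environment.

```python
# Minimal sketch of AI-graded write-in answers (not Kinnu's actual prompt or
# pipeline). Uses the OpenAI chat completions API with GPT-4o, as mentioned above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def grade_answer(question: str, model_answer: str, learner_answer: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are grading a learner's short written answer. "
                    "Score it 0-5 against the model answer and give one sentence of feedback."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Question: {question}\n"
                    f"Model answer: {model_answer}\n"
                    f"Learner's answer: {learner_answer}"
                ),
            },
        ],
    )
    return response.choices[0].message.content

print(grade_answer(
    "Why does spaced repetition improve retention?",
    "Spacing reviews forces effortful recall as memories start to fade, which strengthens them.",
    "Because you have to work harder to remember it each time, which makes the memory stronger.",
))
```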

Finding Number 3: Questions Can Be So Much More

Many of our experiments focused on gauging which kinds of questions have the greatest impact on learning. We experimented with LOADS of different question types, to varying degrees of success. Here are our best-performing ones, which you’ll see in Kinnu 2.0.

Collapsing Questions +59% K-Score

Cognitive load is what happens when you try to think about too many things at once. It’s a huge problem when you’re trying to learn.

Having a massive pile of questions to complete can contribute to this. Collapsing questions is a neat design that combines several related questions into one – reducing the number of reviews you need to complete, while still teaching you the same information.
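As a rough illustration of the idea (not Kinnu's actual implementation – the function and data below are hypothetical), collapsing works by merging several related prompts into a single compound review item:

```python
# Hypothetical sketch of "collapsing" several related questions into one compound
# review item, so the learner covers the same material in fewer reviews.
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    answer: str

def collapse(questions: list[Question], topic: str) -> Question:
    """Merge related questions into a single multi-part prompt."""
    parts = [f"({i + 1}) {q.prompt}" for i, q in enumerate(questions)]
    answers = [f"({i + 1}) {q.answer}" for i, q in enumerate(questions)]
    return Question(
        prompt=f"About {topic}: " + " ".join(parts),
        answer=" ".join(answers),
    )

related = [
    Question("Which organelle hosts photosynthesis?", "The chloroplast."),
    Question("What gas does it consume?", "Carbon dioxide."),
]
print(collapse(related, "photosynthesis").prompt)
# About photosynthesis: (1) Which organelle hosts photosynthesis? (2) What gas does it consume?
```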

‘Graph’ Questions +33% K-Score

Another huge boost to learning is the ability to situate ideas in context. It’s not enough to just know one thing – you need to understand how it relates to the other stuff in the same topic. Graph Questions are a whole family of new question types that allow you to answer questions on a load of different data types (sketched in code after this list), including:

    • Timelines for dragging and dropping events

    • A world map you can drop pins on

    • Different forms of ordering (tallest to shortest, hottest to coldest, etc.)

    • Matching pairs
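Here's the promised sketch: a hypothetical way to model that family of question types as data, with each variant carrying the items to arrange plus the correct arrangement to check against. It's illustrative only, not Kinnu's real data model.

```python
# Illustrative sketch of "graph questions" as one family of types sharing a common
# shape -- not Kinnu's real data model. Each variant carries the items to arrange
# plus the correct arrangement to check the learner's answer against.
from dataclasses import dataclass

@dataclass
class TimelineQuestion:
    events: list[str]            # shown shuffled; learner drags into order
    correct_order: list[str]

@dataclass
class MapPinQuestion:
    place: str
    correct_lat: float
    correct_lon: float
    tolerance_km: float          # how close the dropped pin must land

@dataclass
class OrderingQuestion:
    criterion: str               # e.g. "tallest to shortest"
    items: list[str]
    correct_order: list[str]

@dataclass
class MatchingPairsQuestion:
    pairs: dict[str, str]        # left item -> its matching right item

GraphQuestion = TimelineQuestion | MapPinQuestion | OrderingQuestion | MatchingPairsQuestion
```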

Some honourable mentions (keep your eyes peeled in the future 👀)

Above, we listed the main takeaways from our experiments, and the features we’ve built to implement those findings in Kinnu. But there were plenty of other experiments too. Some of these were successful, others less so. Here are a few honourable mentions, some of which we’re more than a little likely to revisit in future.

Case studies for higher-order thinking +82% K-Score

We let learners explore case studies designed to teach higher-order thinking skills like abstract reasoning and critical thinking. The learning impact was amazing, so we plan to keep building in this space, but the content design is tricky. Watch this space!

Interactive Non-Fiction +63% K-Score

Interactive non-fiction turned pathways into a kind of text-based adventure, where you could interact with different elements in a highly storified way. It was a lot of fun! However, some users found it a little confusing, and it was perhaps more a fun novelty than something we’d want to do long term.

Stickers +58% K-Score

Probably our most divisive experiment! This feature let users place stickers on their favourite parts of the app, as a personal tag and reminder for stuff they want to remember. Some loved it, others couldn’t bear it. Overall it was too divisive to make it into Kinnu 2.0. But never say never…

So, there have been big things underway at Kinnu. As a result of these experiments we’ve built something that’s truly rooted in the research on how best to learn.

And you can experience all of this for yourself very soon. We will be rolling out Kinnu 2.0 in the coming weeks. If you’re reading this, there’s a good chance it’s already available. Have a look on the App Store and Play Store. Go on. You know you want to. We’ll see you on there 🐙
