While our European engineering and design team is hard at work finalizing our game Sema, our local Kenyan team is equally hard at work running test sessions in slums around Nairobi, most notably in Kibera. We’ve been present on the ground for over a year, drawing insight and inspiration from observing children who play with Sema to inform every piece of the game. We’re now starting to run more structured trials with a three-pronged goal: 1) gather early usage data and anecdotal evidence to explore the effectiveness of the game and inform its design, 2) build a continuous relationship with local communities and familiarize children and their parents with Sema, and 3) build our local capacity for running high-quality trials in the future. While our data sets are still limited, it’s thrilling to see data coming in from our early users – and we want to share the excitement of the moment.
We focused our first trials on children who are not in any formal schooling. These are generally kids aged 6 through 11 from street backgrounds; many are orphans, and all have had little or no teaching. They represent the typical learner whom we want to help become literate. As you can imagine, however, organizing those trials was nothing like we initially imagined, and we’ve learned a lot in the process! We’re working with a number of incredible local partners – daycare centers, schools, informal learning centers – such as AMREF’s Dagoretti Rehabilitation Centre, Rescue Dada Centre, Mother House Centre, Agape Hope Children’s Centre and Red Rose School. But operating in slums raised a host of issues – security, charging, participation, implementation and consent – all of which take a while to work through.
For example, measuring impact on one child means arranging for that child to use one device continuously. That’s actually quite difficult to achieve in the shared worlds and weak boundaries of, say, an orphanage. Although we wish every child could have unrestricted access to Sema, in practice most children have scheduled access. Capturing learning data is an equally difficult technical challenge, since the tablets and phones aren’t currently connected to the network. We know that won’t apply in the future, but for now we schedule downloads of the game data from each child’s device. Despite those challenges, we’ve learned a lot from our first cohort and are now moving forward with a second.
By way of context, we administer the EGRA tests to assess literacy levels at regular points pre- and post-gaming experience. In a full-scale trial with hundreds of participants, we would crunch that data to search for patterns and correlations that could tell us something about literacy acquisition: do the kids who use the game more make more progress? For now, however, it’s just cool to see information starting to come out. Below, you’ll see a couple of visualizations of some parts of our early datasets, which give some intriguing clues about how Sema is being used. These children’s use of the game seems to consist mostly of short, frequent sessions, punctuated by the occasional binge of several hours. Here are the durations of the first 30 game sessions of two users:
For these two kids, the typical session lasts around 10 minutes once they get established. If these patterns continue, they suggest that our game has to deliver learning for users who want bite-size experiences while containing enough depth to sustain the occasional binge.
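To give a flavor of how a session-duration summary like the one above can be pulled out of play logs, here is a minimal sketch. The log format, field layout, and the 60-minute “binge” threshold are all hypothetical illustrations, not our actual data schema:

```python
from datetime import datetime

# Hypothetical log records: (session_start, session_end) per play session.
# Invented for illustration; our real data schema may differ.
sessions = [
    ("2016-03-01 09:00", "2016-03-01 09:12"),
    ("2016-03-01 16:30", "2016-03-01 16:41"),
    ("2016-03-02 10:05", "2016-03-02 12:40"),  # an occasional "binge"
]

def duration_minutes(start, end):
    """Session length in minutes from two timestamp strings."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

durations = [duration_minutes(s, e) for s, e in sessions]

# Split sessions into "bite-size" and "binge" (threshold chosen arbitrarily).
BINGE_THRESHOLD = 60  # minutes
bite_size = [d for d in durations if d < BINGE_THRESHOLD]
binges = [d for d in durations if d >= BINGE_THRESHOLD]

print(f"short sessions (min): {bite_size}")
print(f"binge sessions (min): {binges}")
```

A real pipeline would of course aggregate per child across the scheduled data downloads mentioned above, rather than work from an in-memory list.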
We also analyze the data for any correlation between a child’s starting EGRA level and the change in their EGRA measures, answering the question: is the game more effective at raising the levels of those who already have some basic literacy, or those who have none? Here is a snapshot visualization of the scores of five users before using our game and four weeks later.
It’s worth noting that this is just one of EGRA’s measurement dimensions; we observed different trends in other dimensions for these users. We also know that any data at this point is only anecdotal, although it’s nice to see that all the pupils in this sample show progression. Nevertheless, it’s interesting to note that child #5 in the data achieved no score either before or after. This occurrence was quite frequent in our first cohort, and leads us to reflect on how suitable the EGRA instrument is for assessing children with very weak literacy. With this new cohort, we are going to test more frequently to give us greater confidence in the scores we obtain.
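At a larger scale, the starting-level question could be checked with a simple correlation between pre-scores and gains. The sketch below uses invented numbers (including a child scoring zero both times, like child #5 above) purely to illustrate the shape of the analysis; these are not our trial results:

```python
# Hypothetical pre/post scores on one EGRA dimension for five children.
# Numbers are invented for illustration, not actual trial data.
pre_scores  = [5, 12, 20, 8, 0]
post_scores = [9, 18, 24, 15, 0]

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Does a higher starting level correlate with a larger gain?
r = pearson(pre_scores, gains)
print(f"gains: {gains}, correlation(start, gain) = {r:.2f}")
```

With only a handful of children, a coefficient like this carries no statistical weight; it only becomes meaningful at the scale of the trials described below.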
Beyond collecting this initial data, we’re particularly excited to build up our capacity for running high-quality trials in the future. We know that field tests of learning technology are really demanding, so we are getting our local teams up to speed in preparation for the necessary scaled Randomized Controlled Trials (RCTs) that will prove the literacy benefits of our game. For now, we know that the size of the cohorts we are working with and the time the children spend playing Sema are not large enough to yield proper evidence about the impacts of Sema on literacy. However, even at our small scale, we’re making really useful progress in solving the challenges of running an assessment process under real African conditions and laying the foundations for the future.
As our experience with testing continues to grow, we will run a placebo trial as a statistical control. We will also try out a method for investigating community effects. There is anecdotal evidence that a successful literacy intervention for one child has knock-on impacts for their immediate friends and siblings, and we hope to devise a test to research this question.
Stay tuned and reach out at firstname.lastname@example.org if you have any questions.