Lovecraftesque update

So, as you might have noticed from my earlier post about playtesting, the first round of Lovecraftesque playtesting is over. We picked up a lot of issues – a few quite major, most not so. We’ve moved quickly to make changes and additions to tackle the major ones, so we can get rapid feedback from playtests we’ve already got lined up. So: here is a summary of the top-level issues we encountered and what we’re doing about them, with the caveat that this is only a first cut and we reserve the right to have a total rethink in the coming weeks.

1. Not Lovecraftian enough.

Man, this was a real disappointment to hear. The structure of the Lovecraftian tale is clearly in our game, but in terms of realising the alien, uncaring universe of Lovecraft – we didn’t do so well. Admittedly, this was most visible in groups who were not familiar with Lovecraft, but even some experienced players found that just following the rules wasn’t enough to make the game feel like Lovecraft. (Although one group said it was about as Lovecraftian as most Call of Cthulhu, which is either a backhanded compliment or damning with faint praise… but better than we’d feared.)

The main change we’ve made is to provide a style guide to Lovecraft, covering the themes, paraphernalia and language used by Lovecraft. This is supported by other changes which I’ll describe in a moment. It remains to be seen whether knowing the themes and having them at the forefront of one’s mind will be enough to make the game feel like Lovecraft – we should know pretty quickly after the next few playtest reports come in.

We’re also thinking about introducing a requirement to choose a theme for the story, from a list we’ll provide. But that’s something we need to think about over a slightly longer time period – it hasn’t gone into the game yet.

2. Hard to teach, hard to learn

This was also a bit of a disappointment, if I’m honest. I have to explain complicated concepts for a living, so I thought I’d do pretty well at this. I think most of my playtesters managed ok at learning the game. Those that didn’t were new to indie-style games, which may account for the problem. Teaching the game, however, seems to have been more laborious than it needs to be.

We’ve created a teaching guide to tackle the latter problem. It’s pretty clear that, since this is a game whose stages work quite differently from one another, the best way to teach it is as you play, rather than attempting to explain it all at once. That’s what we’ve done: created a guide which you read out at key junctures to explain the key concepts (at the start) and how the basic procedures change as the game evolves (when the changes happen). The guide also includes a potted summary of the Lovecraft style guide, so that it isn’t just the facilitator who benefits from it. The whole thing would take about 15-20 minutes to read out if you literally just monologued it, but it’s broken into chunks, so hopefully the job of teaching is a bit less strenuous.

We’re going to have to think about whether the rules are just too complicated, or the rules guide not structured in the right way. That’s something we’ll get to in a later iteration.

3. The Final Horror

Quite a few groups found that it was a real challenge to weave together all the clues they had seeded through the story into a single compelling Final Horror. They ended up either ignoring some clues, or laboriously explaining through exposition how they fit in, or having a lengthy discussion as a group which obviously breaks the tension.

We’ve introduced a new rule to address this. In the new version, after every scene there’s a pause in which everyone individually writes down what they think is going on. Obviously nobody really knows – but the rules say you have to leap to a conclusion. The idea is that you’ll then use that premature conclusion to guide what you narrate in the next scene. Since the other players will surprise you, your ideas will change every scene – but because nobody is just firing off ideas into the void, the story will be a bit more coherent. More importantly, when the Final Horror comes, nobody is starting from a blank slate.

Other stuff

These weren’t the only issues our playtest uncovered! But they’re the biggest – we think the rest will be relatively easy to crack. We’ll be going over these, and thinking more broadly (and maybe more deeply) about the game’s overall design, over the next few weeks, with a view to commencing a fresh playtest on a completely revised version of the game.

Playtesting: some reflections

Lovecraftesque playtests

I’ve collated the information from the first Lovecraftesque external playtest and I thought it might be useful to discuss it here. I’m not going to talk about our game; instead I’ll be talking about the playtest in more general terms, in the hope of deriving some more general lessons about playtesting.


We advertised the playtest through our website, Black Armada, and through Google Plus, Twitter and Facebook. We put the files in a public Dropbox folder but only provided the link on request to people who expressed an interest in playtesting.

We received 31 expressions of interest. 29 of these were from people who appeared to be men, 2 from women. 6 were from people who we know quite well in real life, and another 3 from people we’ve met a few times in the flesh. The rest were from comparative strangers.

We allowed six weeks for playtesting from the day we announced it. We sent a reminder out at the midway point to anyone who we hadn’t interacted with for at least a week, and another one a few days before the deadline.


We received 6 playtest reports within the playtest period – just under a 20% response rate. All of these were submitted by men. 2 came from friends, 4 from comparative strangers. Between these we got 22 session-hours of playtesting, or 72 person-hours.

It seems to me that we were fairly fortunate to get as many as we did. In previous playtests using a similar method I only had a 10% response rate, from a smaller number of expressions of interest. The improved success comes, I think, from a combination of us being better connected within the indie roleplaying community than I was back then, and having a game pitch that was always likely to be a bit more popular.


None of the playtesters received any guidance or clarification from us. They were given a set of detailed questions covering 10 aspects of the game, rather bossily labelled “READ THIS FIRST”, in addition to the rulebook and some supporting materials.

None of the playtests involved us, either as a participant or a witness.


All six playtest reports responded to the questions we asked fairly assiduously. I wouldn’t say they were all completely comprehensive, but none of them ignored the structured questions, and all responded to most of the points we wanted covered. One came with a blow-by-blow actual play report (which was quite valuable beyond what our questions elicited).

I shall now provide a breakdown of the issues identified by the playtest (either flagged by the playtesters themselves or apparent from their reports, whether or not they realised it). I have classified them as follows:

  • A critical issue is one which would make the game unplayable.
  • A serious issue is one which would make the game not fun or prevent the design goals of the game from being realised. If even one group identified a serious issue, I’d count it.
  • A major issue is one which makes the game very clunky or interferes with realising the design goals of the game.
  • A minor issue is one which doesn’t interfere with the design goals or make the game very clunky, but rather is a matter of polish. Minor rules clarifications also fall into this category.

I’ve obviously had to exercise judgement as to whether an issue identified by a group is attributable to the design, and whether there’s anything that can be done in the design to ameliorate the issue. In one or two cases, because different groups reported radically different observations, I haven’t recorded an issue, but will instead watch for these recurring in the next round of playtesting.

Here’s what our groups found:

  • Critical issues – 0 (phew!)
  • Serious issues – 1
  • Major issues – 2
  • Minor issues – 16

50% of our groups caught all three major or serious issues, but 33% only caught one and 17% didn't catch any.

A note here about consistency: not all our issues were detected by all of our groups. Two groups (one of which played twice) did not pick up the serious issue identified above, and the two major issues were each picked up by only three of the six groups (arguably one of them was detectable in a fourth group, but I think we might have dismissed it based on their evidence alone, as it didn’t look that serious). More importantly, these were clustered: 3 groups caught all the serious and major issues, 3 groups missed at least two of these issues.

I want to be clear, by the way, that I don’t consider the above to be a poor reflection on any of our groups. I suspect the ones that missed issues did so because they were more familiar with the style of game or the genre. Some of our clearest and most helpful feedback came from groups that didn’t catch a lot of the bigger issues, but did notice many smaller ones. All the feedback was immensely useful.

The above suggests to me that you want at least three groups to test a game to be reasonably confident of picking up on major and serious issues. With fewer, you might get them, or you might be unlucky. (Of course in our case, we would need four groups to guarantee catching them all.)
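The arithmetic behind that rule of thumb can be sketched out. If each group independently catches a given issue with probability p, the chance that all k of your groups miss it is (1-p)^k. As a rough, illustrative estimate from our own numbers – each major issue was caught by 3 of 6 groups, so p ≈ 0.5, and the independence assumption is a simplification – the sketch looks like this:

```python
# Illustrative sketch, not a measured result: probability that every one of
# k playtest groups misses an issue, assuming each group independently
# catches it with probability p. The value p = 0.5 is a rough estimate from
# our data (each major issue was caught by 3 of 6 groups).

def miss_probability(p: float, k: int) -> float:
    """Chance that all k groups miss an issue each catches with probability p."""
    return (1 - p) ** k

for k in (1, 2, 3, 6):
    print(f"{k} group(s): {miss_probability(0.5, k):.1%} chance of missing the issue")
```

On these assumptions a single group misses a major issue half the time, while three groups cut that to about 12.5% – which matches the intuition that one or two groups leave you heavily exposed to luck.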

By the way, I haven’t analysed the minor issues, but my impression is that they were sprinkled liberally through all six groups. I doubt if there’s a single group that didn’t pick up some minor issues missed by the rest.


The top line conclusion is that you need to playtest, and not just with one or two groups. The comparison with the playtesting on my previous game is instructive. I only had one response, which added a little to my own efforts at playtesting. But clearly, my analysis above means there is a high risk of failing to catch even quite serious issues with such a low level of response. Innumerable smaller issues would have slipped the net.

Getting playtesters isn’t at all easy. I think we were fortunate this time around. Our voices carry a bit further as a result of a few years circulating in the online indie gaming community. We got support from a couple of people with a very wide reach, and although it’s hard to say how much impact this had, I would guess a lot. And our game concept was more grabby – though whether we would have been taken as seriously if we’d proposed such a concept three years ago, I can’t say.

One thing I would observe is that it’s a lot easier to make playtests happen if you offer to organise them yourself. That’s pretty obvious, but it is worth saying anyway. You can tackle the tendency for the game to get cancelled by providing a venue, making sure you pick people you can rely on and above all not dropping out yourself. And you can make sure decent notes are taken and guarantee to take them away with you. It’s more effort, and if you want it to have the same value as an external test you’ll have to be disciplined about not facilitating the game yourself, but it dramatically increases your sample size, which reduces the chances of missing a given issue.