How long does it take to make an RPG?

Over the last couple of years I have been keeping a regular record of the time I spend on RPG design projects – especially the larger ones. During this time I’ve taken two big projects to conclusion: Flotsam – Adrift Amongst the Stars and Last Fleet.

I’ve combined the data from these two to make a rough chart of how I spend time on a big RPG project. Specifically, most of the post-Kickstarter campaign data (layout, art management, printing, shipping) comes from Flotsam while the rest comes from Last Fleet. This is because I only started keeping records partway through Flotsam’s lifecycle, while one impact of the current pandemic has been that my Last Fleet record keeping fell apart post-Kickstarter – though anecdotally it looks pretty similar to Flotsam so I feel comfortable combining the two.

The data isn’t perfect. Realistically I sometimes lost track of time and wrote down a best guess on how long I’d spent working on a given occasion. I also likely failed to catch some smaller bits of working time.

With the above caveats in mind, I estimate that one of these projects takes about 300 hours from start to finish. That’s just my time – not that of the wider project contributors (stretch goal writers, editors, artists, etc.). That’s how long it takes me; I would think every designer is different. Others may do more or less playtesting, take more or less time iterating their design ideas, or do more or less marketing work. So this is just one example, but hopefully it gives some sense of how long it might take you, dear reader.

Pie chart shows data as follows.
- Design 34.5%
- Playtest 17.2%
- Art 8.6%
- Layout 3.4%
- Editing 10.3%
- Kickstarter 5.2%
- Publicity 13.8%
- Printing 1.7%
- Shipping 1.7%
- Admin 3.4%
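As a rough illustration (my own back-of-the-envelope arithmetic, not from the original time records), you can convert those percentages into hours against the ~300 hour total:

```python
# Convert the pie chart percentages into approximate hours,
# assuming the ~300 hour total estimated above.
TOTAL_HOURS = 300

breakdown = {
    "Design": 34.5,
    "Playtest": 17.2,
    "Art": 8.6,
    "Layout": 3.4,
    "Editing": 10.3,
    "Kickstarter": 5.2,
    "Publicity": 13.8,
    "Printing": 1.7,
    "Shipping": 1.7,
    "Admin": 3.4,
}

for category, pct in breakdown.items():
    hours = TOTAL_HOURS * pct / 100
    print(f"{category:12s} {pct:5.1f}%  ~{hours:5.1f} hours")
```

On that basis design alone works out at roughly 100 hours, and design plus playtesting at around 155 – half the project, as noted below.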

As you can see from the chart (blue segments), I spend about half of my time on design (34.5%) and playtesting (17.2%) combined. That encompasses all the thinking and writing that goes into creating the draft game text, all the planning for the playtests, and the actual time spent in playtest sessions. It’s likely more of an underestimate than the other segments, because who can really quantify time spent thinking? I do a good chunk of that in between formal design sessions.

The next biggest chunk, in red, is publicity (13.8%). That includes time spent on interviews and the like, but excludes time spent refreshing Twitter during the Kickstarter campaign, because I figure the latter is something I would have done anyway. I find Kickstarter campaigns very stressful. Perhaps in theory I should attempt to account for that stress and the time it eats up – I don’t know. However, part of the reason for doing this accounting is to consider how much I might reasonably charge someone else to run their campaign, so it’s useful to know the actual hours spent working, as opposed to time wasted because of the psychological impact of crowdfunding.

I’ve separately accounted for time spent setting up the Kickstarter page (in purple), doing Kickstarter updates and suchlike – which you could consider publicity, but which is often actually taken up with more admin-type tasks. At any rate, it’s quite a small category (5.2%), probably because it’s mostly writing down stuff I’ve already worked out elsewhere and communicating it to backers.

After that, in orange, you have editing (10.3%) and layout (3.4%). The editing time is huge! To some extent the figure is arbitrary, because design work itself includes a great deal of editing. I have counted the time I spent re-reading and polishing the text after I had notionally settled on a final ruleset, plus the time I spent reading my copy editor’s comments and implementing them. (Aside: that’s how long it took me as someone very close to the text – imagine how long it takes a copy editor who has never even read your text before. Pay your copy editors well, folks!) Layout was mostly done by my layout artist, but there was a bit of review, comment and editing to make stuff fit within a particular page template.

Art (grey) also took up a surprisingly large amount of time (8.6%). This covers generating the ideas for the illustrations, liaising with the artist(s), and reviewing their work and providing comments. Given that Flotsam has about 25 pieces of art in it, that’s over an hour per piece, which seems like a lot – I guess quite a bit of it is just thinking.

In the “surprisingly low” category, in green, are printing (1.7%), shipping (1.7%) and admin (3.4%). This covers tasks like setting up all my products on Backerkit, liaising with the printer and warehouse, fixing errors, dealing with customs, etc. I think this excludes post-Backerkit admin, such as setting up the new product on itch, Drivethru and our website, and handling orders. So in that sense it’s probably an underestimate, over and above the caveats mentioned further up. And since neither project was my first rodeo, there’s an element of familiarity with the admin systems that might take a newbie publisher longer to get to grips with (not least because you can copy data over from previous projects in Backerkit).

One bit of “lessons learned” from this is that I need to create “how to” guides for some of the things that I do as part of a Kickstarter project. For example, I wasted a small but nonzero amount of time figuring out how to complete customs forms for Last Fleet that I had done for previous projects but forgotten. Now I have a customs template of my own to make the process easier. It’s well worth your time to systematise this stuff if you’re planning to do multiple projects, as there are all sorts of fiddly details that can be hard to remember (and indeed, if you forget them, can cause problems).

Anyway, I did this analysis for my own benefit but hopefully someone somewhere might find it helpful.

How I curate my ideas

It is fashionable in game design circles to say that an idea is worth zero dollars. This is meant as a rebuttal to people who try to sell you their brilliant idea for a game. Which, fine – those people can’t really sell you an idea anyway, so that is indeed worth zero dollars. But that doesn’t mean ideas are worthless. On the contrary, an otherwise well-implemented game that lacks interesting ideas probably won’t get very far.

The thing is, ideas are ephemeral. Until you write them down, they’re just this slippery thing in your head. You can come up with dozens of them in a day – on the toilet, in the shower, while you’re trying to get to sleep. But most of them are lost.

In fact, they’re worse than that in many ways, because while you’re busy losing them, they distract you. They stop you sleeping because your brain won’t stop thinking about them. They stop you implementing your current project because you get excited about a different one. This is not good.

And you really don’t want to be at the mercy of your ideas. That way lies a trail of unfinished projects, each abandoned in favour of the latest shiny th- SQUIRREL!

So it is important to curate your ideas. To find a way to capture them before you forget them, and get them out of your head so they don’t distract you. And this, it turns out, is fairly simple: you just write them down.

Here’s what I do:

  • I write a simple one or two sentence summary of any idea that captures my attention for more than a few minutes and add it to my ideas list. In my case that’s a sticky on my laptop, but a notebook would be just as good.
  • I subdivide my ideas list. At the top are the things I’m working on now. Then there are the things that are next in line to work on. I break them down into small games and long games, and non-game things like articles or events.
  • I keep it updated, moving stuff in and out of each category. If it becomes clear I’m not going to finish something (at least not now) then it goes into the back burner section. Abandoned but not forgotten.
  • Because I know what I’m meant to be working on now, and I know I’m not losing the other ideas, I can focus on my top priorities. I’ve always got an idea of what I want to work on next, so if I have to take a break from my current projects (e.g. because they’re out for playtesting) then I can pick up something new right away. My subdivisions enable me to easily choose something small that I can do in a spare day, or something longer, as appropriate.
  • I take breaks from working on active projects to review the list and see what looks good. What has sustained my interest and what now seems less brilliant than it initially did. Which ideas might need to be merged or dropped. So the list isn’t just a dumping ground, it’s a breeding ground for my next project.

Sometimes an idea is so compelling that, even with the above discipline, I can’t get it out of my head. When that happens, I write a concept document. This is a half-page document where I write down:

  • The elevator pitch
  • My design goals – the things I’d want to achieve through it
  • A short summary of how I think I might implement those goals right now.

That goes in a dedicated folder of ideas, where I can easily pull it out again if I need it. Again: I’m getting it out of my head, and written down, but I’m limiting its ability to dominate my creativity and draw me away from what I’m meant to be prioritising.

Of course, sometimes having written a concept document, it’s not enough. I want to flesh out the ideas. I’m struck by passion for this new idea! That’s ok. Sometimes I give myself permission to do this. I might even end up writing the game. But for the most part, the structured process above ensures I retain sustained attention to my current project. I get to keep all the ideas that constantly fly into and out of my brain without letting myself chase those ideas fruitlessly.

How do you manage your ideas? Let me know your top tips!

Flotsam: Adrift Amongst the Stars – playtesting

I’ve just completed a full version of Flotsam for external playtesting!

Flotsam is a roleplaying game about outcasts, misfits and renegades living in the belly of a space station, in the shadow of a more prosperous society. The focus of the game is on interpersonal relationships and the day-to-day lives and struggles of a community that lacks the basic structures of civilisation.

System-wise, it was originally a Dream Askew hack, but has wandered a great deal from those roots. It owes quite a bit to Hillfolk and Archipelago, too. Everyone gets to act like a GM some of the time, controlling one aspect of the game setting and the threats it contains, and everyone gets to play a Primary character some of the time, exploring their life and relationships.

If you like the sound of that, and you think you might like to give the game a try, please get in touch by commenting here or emailing me at flotsam (at) vapourspace (dot) net.

Game feedback: different kinds

I was listening to one of the Metatopia panelcasts from last year, and the panelists[*] mentioned that there are different types of feedback and wouldn’t it be nice to have a way to say what kind of feedback you wanted. Well, I agree, and it’s something I’ve been meaning to write about. So here goes.

Before I start, let me say that when I send my games out for feedback (playtesting, normally) I always provide a list of specific questions. This is partly to ensure that specific things I’m wondering about get covered; it’s partly to avoid feedback I’ll find unhelpful; and it’s partly to provide a structure to help people think about the play experience. But anyway. Let’s talk through different kinds of feedback.

  1. Drafting feedback. This includes identifying spelling and grammar errors, as well as areas where language might not be as clear as it could be. You might want this when your game is in its final draft form. You probably won’t find it that useful before that point, because you’ll be redrafting anyway.
  2. Comprehension feedback. This is a bit like drafting feedback, but a bit higher level. It’s asking whether there are aspects of the rules that are confusing. Can you understand the game? This might be particularly useful for an early draft read-through. I normally check on it with playtesting as well.
  3. Experiential feedback. What did the game feel like to play? Was it humorous or scary? Was a particular mechanic hard work? Did you get emotionally invested in your character? This is generally a key component of playtesting for me. I want to create a game that feels a particular way, and so I need you to tell me what it felt like to play it. That’s much less useful if you’re just testing out a mechanic in isolation, though. You also might not need it so much if, say, you’ve already playtested the game quite a bit and you’re just testing a modification to the original design.
  4. Mechanical feedback. What happened, mechanically? Did you seem to crit fail constantly? Was there an exploit where you could build up unlimited bennies? Did some mechanics just never get used? Did anything break down at the table? You’ll probably want this sort of feedback at some point in playtesting, unless your game is super freeform. Some people like to playtest mechanics individually, outside the context of a full session. It’s not something I do, but worth considering.
  5. Design advice. It is often said that it is very annoying when people try to design your game for you through their feedback. And generally, I do agree with that. But, sometimes that may be exactly what you want: you know something isn’t working in your game, and you want suggestions on what to do about it.

So, when you’re asking for feedback on your game, be clear which kind(s) of feedback you’re looking for and, where appropriate, which kinds you aren’t looking for. I would add that you can, and probably should, say which specific bits of your game you are asking for feedback on. If there’s a particular mechanic or aspect of play you want to hear about, say so! Even if there isn’t one particular aspect, you might want to break your game down into specific areas you want covered.

Of course, it bears noting that you might not always realise that you need feedback on something. Maybe you think your mechanics are working perfectly and you don’t need feedback on them. If a playtest reveals they broke down completely, I’d hope my playtesters would tell me that, even if I was only asking for experiential feedback.

I hope that’s useful. I’ve probably missed something. Comments welcome!

[*] I don’t know exactly who said it. Panelists included Emily Care Boss, Julia Ellingboe, Avonelle Wing, Shoshana Kessock and Amanda Valentine.

Playtesting: some reflections

Lovecraftesque playtests

I’ve collated the information from the first Lovecraftesque external playtest and I thought it might be useful to discuss it here. I’m not going to talk about our game, instead I’ll be talking about the playtest in more general terms, in the hopes of deriving some more general lessons about playtesting.

Recruitment

We advertised the playtest through our website, Black Armada, and through Google+, Twitter and Facebook. We put the files in a public Dropbox folder, but only provided the link on request to people who expressed an interest in playtesting.

We received 31 expressions of interest. 29 of these were from people who appeared to be men, 2 from women. 6 were from people who we know quite well in real life, and another 3 from people we’ve met a few times in the flesh. The rest were from comparative strangers.

We allowed six weeks for playtesting from the day we announced it. We sent a reminder out at the midway point to anyone who we hadn’t interacted with for at least a week, and another one a few days before the deadline.


We received 6 playtest reports within the playtest period – just under a 20% response rate. All of these were submitted by men. 2 came from friends, 4 from comparative strangers. Between these we got 22 session-hours of playtesting, or 72 person-hours.

It seems to me that we were fairly fortunate to get as many as we did. In previous playtests using a similar method I only had a 10% response rate, from a smaller number of expressions of interest. The improved success comes, I think, from a combination of us being better connected within the indie roleplaying community than I was back then, and having a game pitch that was always likely to be a bit more popular.

Method

None of the playtesters received any guidance or clarification from us. They were given a set of detailed questions covering 10 aspects of the game, which were rather bossily labelled “READ THIS FIRST”, in addition to the rulebook and some supporting materials.

None of the playtests involved us, either as a participant or a witness.

Results

All six playtest reports responded to the questions we asked fairly assiduously. I wouldn’t say they were all completely comprehensive, but none of them ignored the structured questions, and all responded to most of the points we wanted covered. One came with a blow-by-blow actual play report (which was quite valuable beyond what our questions elicited).

I shall now provide a breakdown of the issues identified by the playtest – either identified by the playtesters themselves, or apparent from their reports whether or not they realised it. I have classified them as follows:

  • A critical issue is one which would make the game unplayable.
  • A serious issue is one which would make the game not fun or prevent the design goals of the game from being realised. If even one group identified a serious issue, I’d count it.
  • A major issue is one which makes the game very clunky or interferes with realising the design goals of the game.
  • A minor issue is one which doesn’t interfere with the design goals or make the game very clunky, but rather is a matter of polish. Minor rules clarifications also fall into this category.

I’ve obviously had to exercise judgement as to whether an issue identified by a group is attributable to the design, and whether there’s anything that can be done in the design to ameliorate the issue. In one or two cases, because different groups reported radically different observations, I haven’t recorded an issue, but will instead watch for these recurring in the next round of playtesting.

Here’s what our groups found:

  • Critical issues – 0 (phew!)
  • Serious issues – 1
  • Major issues – 2
  • Minor issues – 16

50% of our groups caught all three major or serious issues, but 33% only caught one and 17% didn't catch any.

A note here about consistency: not all our issues were detected by all of our groups. Two groups (one of which played twice) did not pick up the serious issue identified above, and the two major issues were each picked up by only three of the six groups (arguably one of them was detectable in a fourth group, but I think we might have dismissed it based on their evidence alone, as it didn’t look that serious). More importantly, these were clustered: 3 groups caught all the serious and major issues, 3 groups missed at least two of these issues.

I want to be clear, by the way, that I don’t consider the above to be a poor reflection on any of our groups. I suspect the ones that missed issues did so because they were more familiar with the style of game or the genre. Some of our clearest and most helpful feedback came from groups that didn’t catch a lot of the bigger issues, but did notice many smaller ones. All the feedback was immensely useful.

The above suggests to me that you want at least three groups to test a game to be reasonably confident of picking up on major and serious issues. With fewer, you might get them, or you might be unlucky. (Of course in our case, we would need four groups to guarantee catching them all.)
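That intuition can be put roughly into numbers. In our data each major issue was caught by three of the six groups, i.e. a ~50% per-group detection rate. If you assume (and it is an assumption – groups aren’t truly independent) that each group catches a given issue independently with probability p, the chance of that issue slipping past all N groups is (1 − p)^N:

```python
# Chance that a given issue is missed by every one of n independent
# playtest groups, if each group catches it with probability p_catch.
# p_catch = 0.5 mirrors our data (each major issue was caught by
# three of six groups); independence between groups is an assumption.

def miss_probability(n_groups: int, p_catch: float = 0.5) -> float:
    return (1 - p_catch) ** n_groups

for n in range(1, 7):
    print(f"{n} group(s): {miss_probability(n):.1%} chance of missing a given issue")
```

With three groups that’s still a 12.5% chance of missing any particular major issue, which is consistent with the “at least three groups, and more is better” conclusion above.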

By the way, I haven’t analysed the minor issues, but my impression is that they were sprinkled liberally through all six groups. I doubt if there’s a single group that didn’t pick up some minor issues missed by the rest.

Conclusions

The top line conclusion is that you need to playtest, and not just with one or two groups. The comparison with the playtesting on my previous game is instructive. I only had one response, which added a little to my own efforts at playtesting. But clearly, my analysis above means there is a high risk of failing to catch even quite serious issues with such a low level of response – and innumerable smaller issues would have slipped the net.

Getting playtesters isn’t at all easy. I think we were fortunate this time around. Our voices carry a bit further as a result of a few years circulating in the online indie gaming community. We got support from a couple of people with a very wide reach, and although it’s hard to say how much impact this had, I would guess a lot. And our game concept was more grabby – though whether we would have been taken as seriously if we’d proposed such a concept three years ago, I can’t say.

One thing I would observe is that it’s a lot easier to make playtests happen if you offer to organise them yourself. That’s pretty obvious, but it is worth saying anyway. You can tackle the tendency for the game to get cancelled by providing a venue, making sure you pick people you can rely on and above all not dropping out yourself. And you can make sure decent notes are taken and guarantee to take them away with you. It’s more effort, and if you want it to have the same value as an external test you’ll have to be disciplined about not facilitating the game itself, but it dramatically increases your sample size, which reduces the chances of missing a given issue.