Campaign Mastery helps tabletop RPG GMs knock their players' socks off through tips, how-to articles, and GMing tricks that build memorable campaigns from start to finish.

The Making Of Complex Newness


A process for designing & constructing big or complex things, from spells to magic items, castles to space stations, industrial processes to political campaigns, new chemistries to better TVs and AC systems, using something every RPG already has.

It’s not often that I have a clear idea of what I want in a featured image before I go looking, but this time I did – and couldn’t find it, and didn’t have time to put it together for myself. So I threw this together instead, adding some color to the original image by Gordon Johnson from Pixabay.

The real world caught up with me somewhat in the course of writing this article – I had hoped to publish it Monday (everything but the examples were finished Sunday night, so that was a realistic ambition) but things just didn’t work out that way. I also had no idea that it was going to turn into this 17K+ behemoth; the rules are described in their source location in just 1200 words. This was supposed to be a quick article to let me turn my attention to other things; instead, it has consumed my week.

Background: The evolution of a plotline

The Zenith-3 campaign is transitioning, in the course of the current adventure, out of a phase in which a lot of roleplay was handwaved, as the campaign returns to full operation following its long shutdown. There have been whole weeks of game time in which lots of little things happened but no major decisions that impacted the team’s overall mission, so handwaving seemed a functional approach.

For the one or two major decisions that did have to be made, I stepped outside the deliver-events-as-narrative approach and let the players roleplay, having covered all the options and their consequences in the adventure text.

Over The Christmas Break

When you have multiple events per day over multiple weeks, you need ideas – a lot of them. During the Christmas break, I jotted down 14 of them to slot in as they matched up to the narrative. The intention was for full gameplay to restart on Wednesday, Day 53, when the action would segue into… more of the same, but building in intensity as the main mission of the adventure sequence proceeded.

One of those ideas was an all-PCs disaster story. It was summarized in eight lines (four times as many as most of the ideas), and scheduled for Tuesday, Day 52.

Of course, to describe the events within one of these seed ideas, they had to be expanded, in the manner I’ve described in several past posts. How did the story start? Where? How did the participating PCs (usually not all of the team) get involved? What setback(s) were encountered? How were they overcome? And how did the story resolve?

At one sentence or a short paragraph each, that takes a 1-2 line encounter idea and turns it into a 6-12 sentence / paragraph short-short-story. Where it was natural to do so, I indulged in a little more world-building, exactly as I would have done if these had been played out fully. Special attention was paid to the characterization given each PC by the player and each established NPC. All this added a few more – sometimes many more – paragraphs to the story, but it was all kept as concise as possible. I wanted this to serve as a reminder of the tone and style of the campaign, making it easier for everyone to step back into character when the time came.

Because of its length as an idea, the ‘wildfire’ plot idea – and I’m being circumspect because it hasn’t started in play yet – was always likely to be 3-4 times the length of these smaller encounters. Part of the story involved a new creation, a species the PCs hadn’t encountered before. I had vague ideas regarding the morphology, abilities, and persona of this encounter, nothing more.

January 6th

So that was the state of development when I got back to work on the campaign on January 6th. Step one, initial ideas, done. Step four, making the situation and its differences from normality seem both more visceral and more credible, was accomplished by having the team rescue some trapped firefighters.

Step three, better delineation of the threat, followed. That led straight into Step five, a complete description of the new life-form. As it developed, it became clear that if I delivered it all as a single text block, it would be overwhelming. Too big an infodump. So the overall shape of the story had to change; half of the PCs would not participate in the rescue, while the other half played detective and churned out parts of the infodump.

They needed somewhere to be while that was happening, so that led back into a complete revision of Step two, the ‘how the PCs get involved’ sequence. And then another one. And critical decisions started mounting up. Some could be resolved in narrative, because there was only one logical solution to the problems at hand. Verisimilitude demanded the introduction of additional NPCs, and interactions with those NPCs.

It became clear, after about 9 days of development, that this needed to be more of a full adventure with roleplay of at least the critical moments, and that led to further changes, both in the work done already, and in the work still being planned.

The important bit – the end result

As it currently stands, the eight-line idea has become 98,650 words plus 13,475 words in notes still to be integrated into the text. This is spread amongst 282 scenes, most of which will never see play – there are 28 different pathways through the adventure, based on PC decisions, skill rolls, etc. Some of those decisions are absolutely critical and could have impacts long outside this single adventure, and the adventure itself is going to cast a longer shadow, too.

The adventure will push the PCs into areas they haven’t had to explore for a while, if at all. And it’s had to push the rules system into areas it has never covered before, too. And one of those is the subject of today’s article.

New Rules for the construction of complexity

In one (or more) turns of events, a PC has to create a new chemical. I’m not going to get into purposes or reasons. So I wrote some rules for that, and carefully built in limitations and restrictions to keep this from overpowering the campaign while maintaining verisimilitude.

And, at another point in the narrative, another character has to create a complicated device with a simple function. And I found that the same rules worked for that, too.

And then I realized that there was a way of simplifying that task to make it more manageable, by having another PC use a power in way they had never done before. And the process of doing so could also be described and managed with the same rules.

These rules are nothing like anything I’ve seen in any other RPG. That always meant that they were slated to eventually become an article here at CM. But because I knew that at this point in time, I couldn’t quote the actual usage in the adventure to which those rules have been put, I either had to delay the article until that wasn’t a problem, or offer some examples from outside the adventure.

So I started thinking of some, and ended up with a list far too long to cover in just one article, plus some additional ideas on the side to throw out there for people to use as they see fit.

I’ll develop one or two of these examples in full, touch on some highlights exemplified by the rest, and call it a day.

The mechanics

The mechanics of this process are really simple, but at the same time, quite elegant and capable of deep levels of richness and complexity – if the game system permits it.

They are capable of simulating the design of a new product, a new technology, a new magic item, a new spell, a new chemical, a website, a computer program, you name it. Anything that requires some form of design.

Conceptual & Functional Elements

To make the system work, you need to have a clear and detailed design objective, because that’s what the process simulates making.

The functional requirements and possibly their underlying concepts need to be listed. That’s step one of the process. It doesn’t happen instantaneously; the GM should decide how long it takes, based on the character’s expertise in the most relevant skill.

Skill Foundation

But what is the most relevant skill? That’s for the GM to specify, based on the desired end product and the level of specificity of the game system.

It’s even possible that there may be a number of different skills required.

Interval

Incorporating each element of what the end product can / will achieve takes time. The time required is standardized and is referred to as the design interval. This is also specified by the GM based on the desired end result and the expertise of the character. It could be seconds, minutes, hours, days, weeks, or years.

    A preliminary discussion of interval selection

    Interval multiplied by the total number of specifications gives the minimum time to complete a project. The actual time spent incorporating any given specification is quite a bit more variable, but under most circumstances, it will be either the minimum or something more. When thinking about intervals, I suggest using the following scales:

    Tactical: seconds – jury-rigging a door, a simple emergency repair, etc.

    Positioning: minutes – solving an urgent problem of some sort when time is critical.

    Professional: hours/days – writing code, crafting a new Grimoire spell

    Strategic: days/months/years – developing a plan to achieve a shift in a strategic balance (or imbalance), a campaign to change public perceptions, a new National Constitution, a plan to end national or city-scale dependence on a particular industry.

    Industrial: minutes/hours/days/weeks – Developing a prototype or one-off gadget. The wide range is sub-selected by the purpose of the project.

    Industrial II: weeks/months – city planning, a new product or system design (e.g. a new engine), designing a building or a space station.

    Major: months/years/decades – We Choose To Go To The Moon (even if we don’t currently know how), terraforming, inventing something current game physics doesn’t permit, e.g. FTL or Time Travel. The latter devolve into Industrial II if there is a working model whose principles are understood, and may devolve further if the technology is commonplace.

Specifications

This is the GM’s opportunity to add to the list of functional elements. Some may be implied foundational steps ruled necessary in order to achieve a functional element already listed, some may be necessary to convert a prototype into a manufacturing process, and some may simply be parameters that the player hasn’t thought to list. There will almost always be something that the character hasn’t thought to include.

One question that the GM will eventually face is the difference between global effect and specificity. In most cases, specificity is easier to achieve than global effectiveness. It’s a lot easier to find a cure or treatment for one specific cancer than it is to find a universal treatment, simply because “cancer” is a generic label applied to many different diseases with some common elements.

But sometimes, where you don’t want to affect everything, specificity can be just as hard to achieve. If you are designing a new chemical, for example, there may be things that you specifically DON’T want it to react to, and engineering that can be a lot harder than letting things happen – because, if you don’t specify it as a requirement, the GM is free to have the combination have any effect that HE wants.

The scope of the purpose is all-important in this context. A one-off solution to a specific problem is always going to be easier than a global cure-all, or even a general solution to a specific type of problem, simply because those reduce the GM’s scope for storytelling. Yes, this is blatant metagaming – so what? It’s in the best interests of the long-term campaign, so that’s fine in my book. A blanket GM ruling of ‘you can’t do that’ would be worse metagaming.

There is also ‘the rule of cool’ to consider. This is a character doing something extraordinary, bringing abilities to bear on the problem that rarely get shown off. Such moments are character-building.

And, finally, there’s the spotlight issue – does this solve the problem all on its own, or does it require deployment in a specific way; can it in fact require the efforts of the whole team of PCs just to get the solution into the right place at the right time? The first is only satisfactory if this is a last-ditch Hail Mary after the other characters have had their shot at glory and failed; the second makes this a group victory, which inherently creates more opportunities for the action to be tense and dramatic. Both can be good, but the first can also be bad, making the other PCs fifth wheels. This consideration can go in all sorts of directions depending on the circumstances and the purpose of the creation effort. Imagine a situation in which the other PCs have to face certain defeat and possibly death, just to buy the one character with a shot at winning the time that they need.

If it makes for a good story, that’s a big tick. If it will make similar stories harder to tell in the future, that’s a down-check. Neither is the totality of consideration, but either can tip the balance one way or another.

If the GM thinks there are too many Specifications, there are ways that he can conflate two or more into a single entry on the list, making them two or more aspects of a single requirement – for example, on a space station, there might be ‘functions in an Earth orbital environment’, which covers everything from radiation-proofing to vacuum-seals. Applying this to every room / space so that they are independently safe requires a second specification, though. The GM should take the Specification in its final, conflated form as the basis for determining the difficulty.

Rolls & Difficulty

Each Specification needs a successful die roll to integrate it into the design. The GM sets the difficulty of each roll, ideally working from a ladder of standard difficulty values and taking the prevailing conditions into account. Each roll consumes an interval of time; that requirement has to be met, with the roll coming at the end.

Rolls that fail by a small amount – and the GM can determine how much this is, on a case-by-case, roll-by-roll basis – may, at his discretion, achieve a partial success. If there is no numeric variable involved, the roll is generally all-or-nothing.

If checkpoints are employed (see below), only the final roll ultimately matters; what the intervening checkpoint results reveal is the path taken to get there, one that can be full of ups and downs, but none of them critical to the final result (but also see the section dealing with critical successes and failures).

Barriers / Problems

Rolls can either succeed or fail. And there can be a gray middle ground offered by the GM in terms of partially meeting the requirements. This then bounces the question back to the player of the character – accept the partial solution, or encounter a Barrier / Setback.

A setback is a situation in which the GM feels that the desired functional Specification can be met, it’s just going to take a little more time. There’s a domino effect involved – the character has to go back one, two, or three steps and implement THAT specification in a different way. They can then work their way forwards, with a bonus to the success of the die rolls, until they achieve the required specification.

A barrier is more difficult to overcome; it adds an additional Specification to the list, inserting it just prior to the point of failure.

99%+ of medicines created in the lab never see human trials. Some of them simply don’t work; there’s an error in the theory on which they are based. Some of them have severe side effects, whether or not they work. And some of them are simply too toxic – they might cure whatever condition they are aimed at, but only at the expense of killing the patient. Those are all Barriers to success, and potentially insurmountable ones.

It’s the GM’s decision what sort of barrier the development process encounters, and it’s a decision based on whether they actually want the development process to succeed or fail – which hearkens back to the points made in the previous section.

If you hit a barrier, the time spent has been used discovering that there is a barrier. It does not magically vanish from the clock. The character has pursued a theory to the point of proving that this approach doesn’t work – an essential, if frustrating, part of the real world.
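The loop described over the last few sections – one roll per Specification, each attempt consuming an interval whether it succeeds or not, with Setbacks falling back a few steps and Barriers inserting a new prerequisite before the point of failure – can be sketched in code. This is a minimal illustration of the flow, not text from any published system; the `resolve` callback stands in for the GM’s adjudication of each roll.

```python
def run_project(specs, resolve, interval, backtrack=1):
    """Walk the Specification list. resolve(spec) returns 'success',
    'setback', or 'barrier' (the GM's call on the die roll). Every
    attempt consumes one interval, succeed or fail; if every roll
    succeeds, elapsed time equals interval x number of Specifications,
    the minimum project time."""
    queue = list(specs)
    elapsed = 0
    i = 0
    while i < len(queue):
        elapsed += interval
        outcome = resolve(queue[i])
        if outcome == 'success':
            i += 1
        elif outcome == 'setback':
            # fall back up to `backtrack` steps and re-implement them
            # (in play, with a bonus to the subsequent rolls)
            i = max(0, i - backtrack)
        elif outcome == 'barrier':
            # a new prerequisite Specification is inserted just before
            # the point of failure; the time spent is not refunded
            queue.insert(i, 'barrier:' + queue[i])
        else:
            raise ValueError(outcome)
    return elapsed, queue
```

For example, a two-Specification project where the second roll hits a Barrier takes four intervals: one spent discovering the barrier, one on the inserted prerequisite, and one on each original Specification.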

Extra Time

Another option open to the GM in such cases is to apply an ‘extra time’ modifier. This enables them to say, “You succeed but it takes N times as long as you thought it would / should.” You achieve this by looking at the margin of failure and calculating how much extra time is needed to compensate for it. This becomes a little trickier in that chances of success have to be rounded to whole values. The Hero system has the rule that any rounding happens in the character’s favor, and that seems fair enough to me.

It also opens the door for a character to state, “I’m getting close to completion, and have a little time up my sleeve, so I’m going to spend some extra time on each step from here onwards, dotting i’s and crossing t’s. That should improve my chances on each roll.” This is a perfectly legitimate application of the system, but it precludes the GM ‘helping’ the character with extra time – that ‘help’ has already been taken into account.

Extra Time can therefore be used in one of two ways, mutually exclusive. It can be used presumptively by the character to improve their chances of success on what they perceive to be a critical stage, i.e. one in which a partial success isn’t good enough, or it can be used by the GM at his discretion to turn a failure into a partial or complete success. The player’s choice to use extra time actively precludes the GM’s ability to help with extra time. I know I’ve pointed that out before, but reviewers of the draft rule still missed it.

If a player allocates extra time and still fails the roll, it must result in a Setback or a Barrier.

The reason for the exclusion is geometric expansion – two sources of extra time multiply, and the total can escalate out of control too quickly for effective game management. If the player specifies 4x normal time be used proactively, and the GM then finds that 8x more is needed, you end up with 4 x 8 = 32 times the normal interval. Neither 4x nor 8x is unreasonable on its own, but the compound of the two takes the system right to the edge. Push both sources to 32x and you get 32 x 32 = 1024 – so if the interval was originally one minute, the player will have spent more than 17 hours getting there.

With intervals of one minute, you can reasonably expect the task to be complete in minutes – anything more than, say, 90-120 minutes breaks the limit of what is ‘reasonable.’ If you had a task that you thought was going to take 1-4 minutes, and decided to take the full 4 minutes to do it well, would you be happy pursuing that path for more than 2 hours, or would you stop and look for a faster way? Even if it meant starting over from an earlier point in the process?

I know what my answer would be.

The player can also set a hard limit on the amount of extra time the GM can force them to use before they fall back to an earlier step and try a different approach. Neither the player nor the GM has to actually describe that ‘different approach’ – the rules assume that one exists. They still use up the time that they spent chasing down what they now perceive to have been a blind alley.

xN time = a bonus of 5% x log(N)/log(2) – i.e. 5% per doubling of the time spent – is the usual pattern, but the “5%” then has to be translated into the mechanics of the roll. For a 3d6-based system, 18 (maximum) – 3 (minimum) = 15 (range), and 5% x 15 (range) = 0.75 per doubling. For a d20, the base number would be 5% of 19, or 0.95. All rounding should be in the character’s favor, i.e. round up.

     Time x1 1/3 = +0.3 (3d6) = +0.4 (d20) = +2 (%)
     Time x 1.5 = +0.4 (3d6) = +0.6 (d20) = +2.9 (%)
     Time x 2 = +0.75 (3d6) = +1 (d20) = +5 (%)
     Time x 3 = +1.2 (3d6) = +1.5 (d20) = +8 (%)
     Time x 4 = +1.5 (3d6) = +1.9 (d20) = +10 (%)
     Time x 5 = +1.8 (3d6) = +2.2 (d20) = +11.6 (%)
     Time x 6 = +2 (3d6) = +2.5 (d20) = +13 (%)
     Time x 8 = +2.25 (3d6) = +2.85 (d20) = +15 (%)
     Time x 10 = +2.5 (3d6) = +3.2 (d20) = +16.6 (%)
     Time x 12 = +2.7 (3d6) = +3.4 (d20) = +18 (%)
     Time x 16 = +3 (3d6) = +3.8 (d20) = +20 (%)
     Time x 20 = +3.25 (3d6) = +4.1 (d20) = +21.6 (%)
     Time x 24 = +3.4 (3d6) = +4.4 (d20) = +23 (%)
     Time x 30 = +3.7 (3d6) = +4.66 (d20) = +24.5 (%)
     Time x 32 = +3.75 (3d6) = +4.75 (d20) = +25 (%)
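The formula behind the table can be sketched in a few lines. The per-doubling steps (0.75 for 3d6, 0.95 for d20, 5 for percentile) are the figures derived in the text; the function names are invented here for illustration, and any other dice system can be slotted in the same way.

```python
import math

# 5% of the roll's range, expressed in each system's units,
# applied once per doubling of the time spent
STEP = {'3d6': 0.75, 'd20': 0.95, '%': 5.0}

def extra_time_bonus(n, system='%'):
    """Raw bonus for spending n times the base interval:
    (5% of range) x log2(n)."""
    return STEP[system] * math.log2(n)

def applied_bonus(n, system='%'):
    """Bonus actually added to the roll, rounded in the
    character's favor (i.e. up)."""
    return math.ceil(extra_time_bonus(n, system))
```

So `extra_time_bonus(8, '%')` gives 15.0, matching the Time x 8 row, and `applied_bonus(3, '3d6')` rounds the tabled +1.2 up to a +2 on the dice.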

I would not extend the table further than that without explicit permission from the player – in fact, I would probably get such permission far sooner than the table implies. “You’ve spent 10x as long on this as you thought it would take, and you’re not sure you’re anywhere near a solution. You can either call the attempt a failure and deal with the consequences, or you can keep going in hopes of finding a solution.” And then revisit that question at 20x and 30x. Or do it by eights, or fours. (Once the player understands the base system, it can be worthwhile to get an indication from them of which time-checks to use – remembering that this does not intentionally spend extra time on this stage of the design process; it caps the amount of extra time the GM can use before consulting the player.)

Extra time applies only to the current Specification; the interval resets for the next one.

A clarifying note

3d6 have a nonlinear probability curve, but the system deliberately ignores this. That has consequences, which in turn have consequences.

The ’round in the character’s favor’ rule covers a lot of the resulting issues; it flattens the non-linearity of the probability curve, undervaluing the most probable results and overvaluing the extremes, but not by so much that it can’t be tolerated.

The net effect of this is to make the roll a little more ‘knife edge’, because success by any amount is a success (and so the flattening of the best results can be ignored), while failure is made a little more probable. But this is mitigated by the round-in-favor rule, and the availability of partial successes further softens the impact, producing a system that is intuitive at the game table rather than robustly perfect in its statistical modeling.

A second clarifying note

Rounding for a die roll always results in integer values – you can’t roll “3.6” on 3d6, it always collapses into a 3 or a 4 – and the round-in-favor rule makes this explicitly a 4.

Equally, a 3.2 is actually a 4.

Rounding errors are a fact of life. Compression of 5% of a 1-100 range to a 3-15 range (3d6) or 1-20 range (d20) is always going to introduce them anyway. In fact, they are so ubiquitous that their absence is the exception, not the expectation. Don’t stress about it; there are far greater sources of error that can and will drown this out, even in the course of a single project.

A third clarifying note

To be statistically robust, the table only needed entries for 2^N x Time – 2, 4, 8, 16, 32. I’ve included selected other values because they are likely to occur in the real world (x3, multiples of x5), and because they help players and GMs visualize the curve, i.e. the relationships between values.

There are enough results on a d% to make the curve appear smooth despite the rounding. That’s not the case with other dice structures.

There’s no real need for a Time x 14 entry, for example – so none was included.

    Metaspecifications

    I can only think of one of these, but I’m making it a general category in case GMs find others.

    “I want / need to complete this project in half the usual time”.

    Okay, so halve the interval, and then think of the downsides.

    If extra time gives a bonus to success, less time should give a penalty to all rolls.

    Multiply 32 x (1 minus the fraction of time) and look up / calculate the resulting ‘extra time’ modifier. Double it and make it bad instead of good.

    The following are intended to show how it’s done (and provide a bit of a cheat sheet), not to be a comprehensive table of results.

         9% Time Reduction = Time / 1.1 = 32 x (1 – 1 / 1.1) = 3 = -7.9%
         17% Time Reduction = Time / 1.2 = 32 x (1 – 1 / 1.2) = 5 = -11.6%
         23% Time Reduction = Time / 1.3 = 32 x (1 – 1 / 1.3) = 7 = -14%
         29% Time Reduction = Time / 1.4 = 32 x (1 – 1 / 1.4) = 9 = -15.85%
         33% Time Reduction = Time / 1.5 = 32 x (1 – 1 / 1.5) = 10 = -16.6%
         37.5% Time Reduction = Time / 1.6 = 32 x (1 – 1 / 1.6) = 12 = -17.9%
         42% Time Reduction = Time / 1.7 = 32 x (1 – 1 / 1.7) = 13 = -18.5%
         44% Time Reduction = Time / 1.8 = 32 x (1 – 1 / 1.8) = 14 = -19%
         48% Time Reduction = Time / 1.9 = 32 x (1 – 1 / 1.9) = 15 = -19.5%

         5% Time Reduction = 95% Time Taken = 32 x (1 – 0.95) = 1.6 = -3.4%
         25% Time Reduction = 75% Time Taken = 32 x (1 – 0.75) = 8 = -15%
         30% Time Reduction = 70% Time Taken = 32 x (1 – 0.7) = 9.6 = -16.3%

         Time / 2 = 32 x (1 – 1/2) = 16 = -20%
         Time / 3 = 32 x (1 – 1/3) = 21 1/3 = -22%
         Time / 4 = 32 x (1 – 1/4) = 24 = -23%
         Time / 5 = 32 x (1 – 1/5) = 25.6 = -23.4%
         Time / 6 = 32 x (1 – 1/6) = 26 2/3 = -23.6%
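The computation behind the worked rows above can be sketched as follows. This is an illustrative reading of those rows (which read the 5% x log2 figure off directly as the penalty); `time_reduction_penalty` is a name invented here, and some rows in the cheat sheet round the equivalent multiplier to a whole number first, where this sketch keeps the exact value.

```python
import math

def time_reduction_penalty(fraction_of_time_taken):
    """Percentile penalty for compressing the schedule: find the
    equivalent 'extra time' multiplier, 32 x (1 - fraction of the
    normal time actually taken), then apply 5% x log2(n) as a
    penalty. Requires 0 < fraction_of_time_taken < 1."""
    equivalent = 32 * (1 - fraction_of_time_taken)
    return -5 * math.log2(equivalent)
```

For example, taking half the normal time gives an equivalent multiplier of 16 and a -20% penalty, matching the Time / 2 row.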

    Add an extra Specification to the start – “Accelerated Development” – and another to the end, “Minimal Testing”.

    The GM gets to add a free “unwanted side effect”.

    If the purpose could be described as “Industrial II” or higher, add another “Early Release”, and the GM can add a second free “unwanted side effect” or an “application restriction” – which reduces the effectiveness of the solution but usually doesn’t make it completely useless for the intended purpose. “Takes twice as long to have an effect” is about the softest choice.

    If the character fails ANY of the extra specifications, there IS no way to complete the project in the time desired.

    The character has two or three options:

    1. The GM can rule that some specifications carried over from a base model are affected as though they were Specifications that had failed. This option is ONLY available in that specific circumstance.

    2. Revert immediately to the standard timeline, crossing out the accelerated development Specifications (but not the consequences of their having been there).

    3. Keep the accelerated development timeline but further compromise the effectiveness of the project with partial solutions that are twice as bad as those normally encountered.

    Regardless, the time spent trying to find a method of achieving the accelerated development is gone.

    Slices Of Time

    If you study the numbers in the table closely, you will see that the progression is non-linear (as you would expect with logarithms involved). Four attempts at Time x 8 are 4 attempts at +15% each. You can’t conflate those into +60%, but the combination is obviously going to be somewhere between that value and +15% – you can derive an algebraic expression for this specific series of numbers, but it’s not worth the effort. Spending all that time on one attempt gives a total bonus of +25%, and it’s intuitively likely that this is below the compounded value. That means it appears worthwhile for the researcher to divide their time into smaller slices and multiple attempts, trying multiple different paths to success. What’s more, it seems logical to make all those attempts concurrently, achieving a greater likelihood of success in a fraction of the time.

    I’ve described this logic in detail because it is in this detail that you discover the flaw in this arrangement – the system assumes that you are doing this already. This ‘suggested’ layering of processes is how you get to the +15% in the first place. So this ‘logic’ is counting the benefits twice. And that’s how you get to a modifier somewhere in the vicinity of +50% when the system dictates +25%. It’s a subtle but definite attempt to cheat the system. Those 4 attempts at +15% simply mark 1/4, 1/2, 3/4, and completion ‘checkpoints’ on the path to a net +25% in the final roll.

    Checkpoints can be useful with long intervals, describing the process and its progress toward ultimate success or failure in the integration of this specific requirement. The character can make 3 rolls at +15% or its system equivalent, which the GM interprets as measuring progress – a success doesn’t get the character all the way to the next step, and a failure doesn’t obstruct forward progress. They then make the fourth roll at the indicated +25% for ultimate success or failure in this step.

    Greater verisimilitude comes from the use of cumulative time for these rolls – in the case of this example, 8x, 16x, 24x, and finally 32x. This ‘weights’ the intermediate results to reflect the character discarding paths that seem to be going nowhere and homing in on their ultimate solution and its success or failure.

    Critical Successes do nothing but affect character confidence. If a Critical Failure occurs on a checkpoint roll, the GM should invent a number off the top of their head for progress, which the character will know not to be correct – but they won’t know how badly incorrect it is. The resulting confusion, uncertainty, and doubt is the consequence of the failure. These interpretations do NOT apply to the final roll needed to complete the Specification’s implementation; they are only about indicating the progress-to-date. But they do serve one additional function: they remind the players that the project is ongoing. The GM should look for opportunities to insert ‘progress text’ and event descriptions, even little roleplay moments, into the ongoing narrative on a regular basis.
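The cumulative-time weighting just described can be sketched numerically; `checkpoint_bonuses` is a hypothetical helper, invented here, showing how each checkpoint roll picks up the bonus for the total time spent so far, with only the final roll carrying the full multiplier’s bonus.

```python
import math

def checkpoint_bonuses(total_multiplier, checkpoints, step=5.0):
    """Percentile bonus for each checkpoint roll when the total
    extra time is split into equal cumulative slices; step is 5%
    per doubling, as in the table above."""
    return [step * math.log2(total_multiplier * k / checkpoints)
            for k in range(1, checkpoints + 1)]
```

For the worked example – a 32x total in four slices – this yields roughly +15%, +20%, +22.9%, and +25%, matching the 8x, 16x, 24x, 32x cumulative figures.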

Notes regarding Burnout and Fatigue

While ‘burnout’ and ‘compounded fatigue’ are real world phenomena, they are deliberately ignored by the system in favor of game-play.

The potential for large-scale intervals – for designing and constructing a space station, for example – implies that characters don’t have to focus continuously on the task, but can interrupt it as necessary; time not spent on the project doesn’t count toward it (though a generous GM might allow that the problems are still ticking over in the back of the character’s mind and permit 10%, 5%, or 1% of such time to contribute to the total).

I thought about excluding sleep time from that, but there are many documented cases of problems being solved ‘on awakening’ which suggests that the subconscious keeps working on problems even while sleeping. That in turn suggests that this would only complicate things as an exclusion or as a separate ‘passive time’ accumulation; it’s a detail that is either unnecessary (interval less than hours) or undesirably complicated (intervals more than minutes).

So they have been left out for cleaner game-play.

Endurance – if you have it

In any system where Endurance gets tracked, skill use – concentration on a task – should cost Endurance. The amount should be determined by the recovery frequency and amount.

In the Hero System, Endurance costs are generally determined by dividing the Active cost by 5, then applying any modifiers to that. I originally split modifiers into two groups – one that affected END cost and one that didn’t – because it didn’t make sense to me that “reduced END cost” should increase the END cost of using a power, or that “increased END cost” should decrease it by reducing the Active cost.

In the Hero system, characters will recover 2-4 END or more twice per 12-second turn. That’s 4-8 (plus) in 12 seconds, with the character’s Speed determining how many opportunities they get to act, i.e. to spend that Endurance. It’s geared to relatively low levels of powers – 4-dice attacks costing 4 END each. But it can be made to scale to more epic power levels, and that’s what my original home rules were intended to do. I wanted Superman types who were epic but ran out of steam and had to pause to rest for a while, creating windows for other characters to have the spotlight, and lower power-level characters with low END costs who could act more frequently and more continuously, and all points in between.

If you consider the purchase price plus improvement cost of skills in the Hero System to be the Active cost, then this same division by 5 works perfectly. Skills at a very high level (15+) cost 3 or more END, skills at competent levels (10-15) cost 2-3 END, skills at a relatively low level (5-10) cost 1-2 END, and skills at the amateur level (1-5) cost 0-1 points. The bigger the potential game impact, the bigger the END cost.
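The divide-by-5 rule can be sketched in a few lines. This is a minimal sketch; the function name is mine, and round-down is an assumption (the rule above doesn’t specify a rounding direction, but flooring reproduces the cost ranges listed).

```python
def skill_end_cost(purchase_cost: int) -> int:
    """END cost per skill roll: (purchase + improvement cost) / 5.
    Rounded down -- an assumption, chosen because it matches the
    ranges quoted (1-5 pts -> 0-1 END, 15+ pts -> 3+ END)."""
    return purchase_cost // 5

# A 12-point skill costs 2 END per roll; a 3-point skill costs 0.
```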

The current version of the homebrew game system rules is d% based. Many of the stats can range higher – a lot higher – END reserves being one of them. It’s not unusual to have 50-60 END, recover 5-10 per turn, and act just once in a turn. But END costs are also higher, and you can do more in a turn. Skills are purchased with Skill Points, which in turn are paid for with character points, so characters with high capacities for learning skills can get more skill points per character point. Skill points spent on a skill divided by 20 almost works, but penalizes skill-heavy characters; instead, there’s a flat 1, 2, 3, or 4 cost based on skill level.

In implementing the system described in this post, with multiple skill rolls required over a time frame specified by the GM, I would specify a 2-END cost, that cannot be recovered until the end of the process. If there are 8-10 Specifications (not uncommon), that’s 16-20 END, lowering the character’s capacity to act in the meantime without making them completely helpless. That’s within the range of normal people in the system. Furthermore, taking a substantial break (one interval) would deduct END Recovery from that accumulated total.

For the standard Hero System, I would do the same, but price the END cost per Skill Roll at 1.

D&D and Pathfinder don’t track END, but they do have the concept of “Shock Points” – the character gets as many of those as they do hit points, and they go down with Damage just like regular hit points. I would contend that these represent mental fatigue amongst other things, and some attacks do non-lethal Shock damage instead of regular damage. They are normally fully recovered at the end of a character’s turn, or once a minute, or something like that, and – like hit points – the pool can grow quite large at higher levels.

For low-level campaigns, I would use a similar approach to the base Hero system – non-recoverable shock points, 1 per skill roll. For mid-level campaigns, I would make it 2 points, and for high-level campaigns, 3. Note that this reflects not the current levels of the characters, but where those characters are expected to be for the majority of the campaign, or at its end. This limits the capacity of low-level characters in a higher-level campaign, so that characters can feel like they are growing in competence as they progress.

Of course, if a character runs out of END in the process, they suffer from Burnout and need to take a significant break of 1 interval. If the intervals are seconds or minutes, that’s not all that significant; if it’s hours or days, that’s inconvenient; if it’s weeks, months, or years, that’s a lot more painful. The first interval restores 2 END; if the break is extended another interval, this rises to 4, then 8, 16, 32, and so on, up to the original maximum.
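The doubling recovery can be sketched as follows. Note one interpretive assumption, flagged in the comments: I read “rises to 4, then 8” as the cumulative total restored after that many intervals, not a fresh amount per interval.

```python
def burnout_end_restored(intervals_rested: int, end_max: int) -> int:
    """END restored after Burnout: 2 after one interval, doubling
    with each extension (2, 4, 8, 16, 32...), capped at the
    character's original maximum. Treating the doubling as a
    cumulative total is an assumption."""
    if intervals_rested < 1:
        return 0
    return min(2 ** intervals_rested, end_max)

# A 50-END character is back to full after 6 intervals (2**6 = 64).
```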

If the system doesn’t have any mechanism for tracking Endurance, just ignore this whole section.

Critical Successes and Failures

These don’t apply to every game system but are generally profound when present.

A critical success halves the interval for that Specification and may optionally reduce the interval for subsequent Specifications, reflecting the ‘stroke of genius’ or ‘flash of inspiration’ inherent in the concept of a critical success.

Optional: A breakthrough can carry extra momentum into the next stage, worth a +5% or +10% bonus (+1 or +2 in other game systems).

Optional: The GM can rule that the breakthrough makes a future Specification redundant or simplified to the point of being incorporated into this Specification, removing that future item from the list completely.

Optional: sticking with the main proposal, the GM has to decide how much the interval for subsequent Specifications is reduced. I recommend considering values of 10%, 20%, or 25%; 15% is a good compromise.

A critical failure should be a failure like any other, but worse, or may be interpreted by the GM as a Barrier to a later step, only discovered when that stage of the process is achieved.

That turns the failure into a time bomb with a hidden clock – the player will know that it’s ticking but not when it will blow up under their feet. For the moment, the character thinks they have succeeded even if the player knows better, and this should be made clear to the player. However, the solution found to that Specification contains a hidden defect that only time will reveal.

The GM should devote some thought to what this hidden defect might be – it shouldn’t be anything that would be obvious in an earlier step; it should be something subtle but catastrophic in terms of the intended purpose. Designing an air-breathing jet engine only to discover that it needs to be kept underwater in operation, because parts would otherwise overheat, makes the design worthless. The solution is to replace the affected parts with something more heat-tolerant, or to recalculate and remodel the engine to divert the excess heat away from them. The choice of solution can have a profound impact on the look-and-feel – an experimental jet engine with huge radiator fins is SO steampunk or pulp!

Success at last!

Eventually, the last roll required will be achieved with the last Specification successfully incorporated. The results are now ready for use as specified in the purpose. But here’s where the fun starts – anything NOT specified in the design is free for interpretation by the GM. Side effects are always possible. The goal should never be to make the results useless, but to make the experience interesting. So save these ideas for occasions when the application of the results themselves are less interesting than they should be. Remember both the “Rule of Cool” and that no product is EVER perfect.

Optional Rule, All systems

Sooner or later, a setback will require the character to repeat part of a series of rolls, looking for an alternative route to success (the most obvious approach having failed). That potential is baked into the system, deliberately.

The GM can choose to provide a +5% or +1 modifier to subsequent iterations of a repeated step, reflecting the thought that the character has already put into the integration of that Specification.

This does two things, one more important than the other (okay, maybe three). First, it slightly changes the balance between setbacks and partial solutions in favor of the former, because it weakens the penalty involved; but to my mind it doesn’t do so by enough to warrant concern.

Second, it adds a hard limit to the number of times a single step can recur before it becomes an automatic success (critical successes and failures notwithstanding). Automatic successes still need to actually be rolled to test for critical success or failure, but such rolls take no time. This puts a cap on the process. That’s the most important consequence of this optional rule, I think.

And third, it changes, slightly, the way a player looks at the system in their favor. That can be an important element of the decision-making process when it comes to using this system at all. Given that the system itself enriches the tactical options open to the player and enriches the storytelling at the game table, its existence within the operating rules of the campaign is a benefit to the GM; but a benefit that remains only potential until and unless the rules are actually used. If they then become tedious, they are unlikely to be used again; if they add to the tension and drama, and hence the entertainment value of the game, players are more likely to call upon them again. This is an in-between consequence, starting small and growing with successive uses of the system.
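The hard cap mentioned in the second point is easy to quantify for the d% version of the rule. A minimal sketch (function name mine; assumes the +5% bonus stacks linearly until the chance reaches 100%):

```python
def retries_until_automatic(base_chance: int, bonus: int = 5) -> int:
    """How many repeats of a failed step it takes before the +5%
    (or equivalent) per-iteration bonus makes the d% roll an
    automatic success -- the cap this optional rule creates."""
    shortfall = max(0, 100 - base_chance)
    return -(-shortfall // bonus)   # ceiling division

# A 45% base chance becomes automatic after at most 11 repeats.
```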

It’s never possible to have an Automatic Success result if the system has critical successes and failures, because they override a success if rolled. However, you can approach that point – 17/- on 3d6 is just about there. But there’s always that chance to roll box cars. It doesn’t matter if your chance is 28/- (it would never actually get that high) – box cars or their equivalent is ALWAYS a failure.

If you don’t have criticals, then automatic success does become possible, but it never becomes possible to automatically get a critical success because this set of mechanics doesn’t have them.

Before moving on, I want to highlight that there’s at least one other Optional Rule described within the examples, so don’t skip over them too quickly, even if you think you understand the functioning of the system or they are referring to a game system other than the one you’re using.

Selected Full Examples

I kept thinking up new ways of using this system – too many to offer them all as fully-worked examples. So I have divided them into three categories: a couple of full examples to illustrate the application of the system; a few examples in which some key conceptual element can be brought to light, discussed briefly and perhaps partially worked up in furtherance of that; and a few more that get nothing more than a high-level summary, or perhaps even less.

Using the system to design a new magic spell for the Hero System

Interpreting this system for the base Hero system is conceptually simple – one Specification for the power or skill or ability that’s going to be used to simulate the results from a game mechanics standpoint, or a general conceptual description simplified as much as possible (stripped of anything detailed, in other words), and one Specification per modifier.

Plus one Specification that has to be #2 on the list – Ad-hoc vs Permanent.

    Ad-hoc vs Permanent magics

    Ad-hoc spells are here-today, gone-tomorrow deals: you get one shot at successfully using them. I emphasize the word ‘successfully’ because the GM can arrange circumstances in which the first shot fails – the goop doesn’t hit the target or whatever – and it’s not fair to make the character go through the whole process again. One interval is enough to whip up a whole new batch, if necessary. But once the effect that the character has invested their time in does deliver as specified, the mechanism for delivering that effect fizzles and is gone.

    Some circumstances – constructing a new chemical – may seem to preclude this from being reasonable. That’s tough luck for the creator, it still happens. But for a spell, this is entirely reasonable.

    Permanent magics are stored or recorded somewhere and can be used again. Depending on how magic is configured in your game mechanics – and there are multiple options – this can usually be regarded as creating a new ability for the character; the required character points are then spent, and it becomes a permanent addition to their character sheet.

    Once this Specification is successfully incorporated, the GM can assign a blanket modifier to all the subsequent skill rolls. He should be consistent but open to exceptions. If the GM doesn’t want ad-hoc spell use, apply a -50%. If the GM thinks that permanent magics, with side effects that are tolerable on a recurring basis, should require more effort dotting i’s and crossing t’s, then a -20% for permanent solutions is reasonable. If they want to emphasize the flexibility of magic, they can apply a +25% to ad-hoc spell creation.

    Increasing the likelihood of success on the skill rolls shifts the actual time closer to the minimum. Decreasing it adds to the likelihood of some sort of roadblock by making success less likely. This will generally slow the project down.

    All this is relative, of course. If you have 8 or less in a skill, a -4 modifier is huge, especially taking into account the non-linear nature of a 3d6 roll. If you have 13/-, that same modifier is significant but not catastrophic; if you have 17/- or 18/-, it’s niggling but little more.

    The other way to represent this distinction is with Interval. Almost by definition, the Interval for ad-hoc spell use is seconds; I would set the Interval for the crafting of permanent magics at hours if not days. Given how many seconds there are in a day (86400), that’s a massive ratio. Using the divide-by-5 and convert to log-2 scaling of the Hero System, that’s a little more than +14. But hours or minutes seem too short for realism and game balance to be maintained.

    What CAN be done is to impose another blanket modifier to reflect the breakneck pace of development. Not the whole -14, though – that makes the system just about unusable. Maybe half that – a -7 makes ad-hoc more difficult. -7 on a d20 would be equivalent to -35% chances, on a 3d6 it would be more – an eyeball, seat-of-the-pants estimate, the same as I would make when GMing, is between -40% and -45%, and probably at the lower end of that range.
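That seat-of-the-pants estimate can be checked by exact enumeration of the 216 equally likely 3d6 outcomes; how much raw probability a -7 costs depends heavily on the skill level it is applied against, which is the non-linearity being eyeballed above. Function names here are mine.

```python
from itertools import product

# Exact 3d6 distribution: 216 equally likely outcomes.
COUNTS: dict[int, int] = {}
for dice in product(range(1, 7), repeat=3):
    COUNTS[sum(dice)] = COUNTS.get(sum(dice), 0) + 1

def chance(target: int) -> float:
    """Probability of rolling target-or-less on 3d6."""
    return sum(n for s, n in COUNTS.items() if s <= target) / 216

def modifier_cost(skill: int, penalty: int) -> float:
    """Raw probability removed by a penalty at a given skill level."""
    return chance(skill) - chance(skill - penalty)

# modifier_cost(17, 7) is about 0.495; modifier_cost(18, 7) is 0.375.
```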

    Bearing in mind that there might already be a blanket modifier for ad-hoc use, this can either make the process almost untenable, or can moderate the net impact back toward neutral.

The Example: Eyes Of Hodur

I’m going to lift a spell from the Grimoire of one of the characters in my Zenith-3 campaign. Credit to Nick Deane for the source (and the character). The numbers for the modifiers might not be (almost certainly won’t be) the same as in the official material but it will be close enough.

Skill: Spell Use
Value: 13/-
Interval: Seconds (ad-hoc spell)
Blanket Modifier: -25% = -25/5 * 0.75 (from earlier in the system description) = -3.75, round to -4. Net roll: 9/-.
Conditions: Magic Workshop +2, significant past experience at crafting spells +2. Net roll: 13/-.

Spell Description: Eyes of Hodur
School: Mind Magic
Effect: 2d6 Flash vs entire sight group + Range Modifier x2 (10 pts)

Modifiers:
Skill Roll Required -1,
Incantation -1,
Gestures -1,
Linked -1 GM’s Note: Linked to what? Assumed valid
Can Use Normal Mana or Mana Battery + 1/4,
Extended Duration: 30 minutes per point of success +3,
Only Affects One Target /4

Modifiers’ Totals: -4, +3¼, /4

Base Cost: 30
Active Cost: 25
Net Cost: 5
Mana Cost: 1
END Cost: 5
Range: 30′ (Flash has a range of 5″ for every 10 active points, round down – power description) GM’s Note: Range miscalculated; from the rule cited it should be 25/10 x 5 = 2.5 x 5 = 12.5″, which rounds down to 12″ = 24m.

In the base Hero system, there is no division between types of modifiers, they are all + or -, with the + all counting toward active cost and the – not. So the modifier totals become +3 1/4, -8. This impacts the results:

Base Cost: 30
Active Cost: 30 x (1 + 3 1/4) = 127.5 = 127
Net Cost: 127 / (1 + 8) = 14
Mana Cost: n/a
END Cost: 127/5 = 25
Range: 127/10 = 12.7, round down to 12″ = 24m

Clearly, the spell would not be designed quite this way using the base Hero system, but this is good enough for example purposes.
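The recalculation above follows the standard base-Hero arithmetic: Active = Base x (1 + advantages), Real = Active / (1 + limitations), END = Active / 5. A minimal sketch (function name mine; round-down at each stage is assumed, matching the worked numbers):

```python
from fractions import Fraction
from math import floor

def hero_costs(base: int, advantages: Fraction, limitations: Fraction):
    """Base Hero System cost arithmetic:
    Active = Base x (1 + total advantages),
    Real   = Active / (1 + total limitations),
    END    = Active / 5 -- each rounded down (an assumption)."""
    active = floor(base * (1 + advantages))
    real = floor(Fraction(active) / (1 + limitations))
    end = active // 5
    return active, real, end

# Eyes of Hodur rebuilt for base Hero: 30 base, +3 1/4 advantages,
# -8 in limitations -> Active 127, Real 14, END 25.
```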

Specifications:
1. Flash
2. Ad-hoc Spell.
3. 2d6
4. Skill Roll Required -1,
5. Incantation Required -1,
6. Gestures Required -1,
7. Linked -1
8. Can Use Normal Mana or Mana Battery + 1/4,
9. Extended Duration: 30 minutes per point of success +3,
10. Only Affects One Target /4

1 sec, Roll #1, Specification 1: 13/- -> 7, success
2 sec, Roll #2: Specification 2: 13/- -> 11, success
3 sec, Roll #3: Specification 3: 13/- +2 modifier = 15/- -> 17, failure
     GM applies extra time, x4 intervals = +2 = success
6 sec, Roll #4: Specification 4: 13/- +2 modifier = 15/- -> 7, success
7 sec, Roll #5: Specification 5: 13/- +2 modifier = 15/- -> 12, success
8 sec, Roll #6: Specification 6: 13/- +2 modifier = 15/- -> 9, success
9 sec, Roll #7: Specification 7: 13/- -4 modifier = 9/- -> 15, failure

The GM, having used extra time once, could do so again, but the modifier required to get from 9/- to 15/- is huge and probably beyond the limits of that capability. But he imposes a partial extra time adjustment of x4 time for this step anyway, because reducing the gap from +6 required to +4 required lets him be a little more generous with his partial solution offer.

The player can choose the GM’s offer of a partial success, “Link takes 2 rounds to establish each time”, or can choose a block/setback.

This offer is very nuanced. If it were 1 round each time, the offer would almost certainly be accepted, because it doesn’t compromise the spell’s function very much. If it were 3 rounds each time, the offer would almost certainly be rejected. 2 rounds is the sweet spot at which the character might be tempted if he needed the spell in a hurry – which he does, it’s an ad-hoc spell.

The GM warns the player that the magnitude of the failure means that the setback will be substantial, and gives him one last chance to change his mind, but the player feels that a few extra seconds is tolerable.

The GM turns the clock back to the start of the “Links to” Requirements (Specification 4), explaining that the simple method of making the spell restricted is being overridden by the link. The details beyond that don’t matter.

So that last entry now reads,

9 sec, Roll #7: Specification 7: 13/- -4 modifier = 9/- -> 15, failure
     x4 Extra Time -> 11/- still failure

…and the process continues from there.

12 sec, Roll #8, Specification 4: 13/- +2 modifier = 15/- -> 12, success

The GM notes that a full turn has passed and lets everyone else act.

13 sec, Roll #9, Specification 5: 13/- +2 modifier = 15/- -> 11, success
14 sec, Roll #10, Specification 6: 13/- +2 modifier = 15/- -> 17, failure
     x 4 Extra Time – 17/-, success
15 sec, Roll #11, Specification 7: 13/- -4 modifier = 9/- -> 9, success

The revised approach solves the problem, but could easily have failed again.

16 sec, Roll #12, Specification 8: 13/- +2 modifier = 15/- -> 15, success
17 sec, Roll #13, Specification 9: 13/- -4 modifier = 9/- -> 9, success

This is the second potential ‘choke point’ identified by the GM when considering the list of specifications. He really wanted to be able to ‘force’ the spell effects down to 30 or even 15 seconds per point of success, because 30 minutes of blindness is massive, in tactical terms.

He then realizes that the design fails to specify ‘success on [what]’. He can define it as ‘success on the skill roll’ (a margin of up to 6 x 1/2 hour = approx 3 hrs of blindness, maximum) or ‘points of flash rolled in excess of defense’ (2d6 averages 7; less a flash defense of 0-to-5, that’s a net 1-3.5 hours of blindness). He doesn’t know what the character intended when designing the spell parameters, and not seeking clarification on this point leaves him the latitude to decide. He chooses the latter option, but deliberately doesn’t tell the player – the player will find out when the spell is cast. I’ve considered labeling this an “Ambiguity Tax” but “Ambiguity In, Ambiguity Out” probably comes closer.

Note, too, that the player hasn’t specified the skill, so that reverts to the system default of the most appropriate skill, “Magic Use, 13/-”. If the player wanted to employ something other than the system standard, he would have had to Specify that, and justify it to the GM.

18 sec, Roll #14, Specification 10: 13/- -2 modifier = 11/- -> 13, failure

The GM notes that the player hasn’t specified casting time, and therefore also expects that to default to the system standard of 1 round activation time. He could ‘solve’ this failure with additional time, but really wants to reduce the effectiveness of this spell a bit, so he takes advantage of the player’s assumption to add an 11th Specification, x2 casting time.

The player’s choice not to list this variable was not unreasonable; the GM’s use of the added Specification overriding the system standard is exactly what the player could have done originally, but he couldn’t have specified the base standard, it would have to incur a modifier. It’s even possible that the player deliberately left this open for the GM to exploit, confident that the GM wouldn’t be unfair.

This changes the failure into a success:

18 sec, Roll #14, Specification 10: 13/- -2 modifier +2 additional specification = 13/- -> 13, success

19 sec, Roll #15, Specification 11: 13/- -> 11, success.

The modified spell is now complete and ready to use. It has taken 19 seconds to craft, and the spell itself has changed slightly in the construction process thanks to the x2 Casting Time added at the end. So the ‘stat block’ describing the spell would have to be recalculated to accommodate the additional parameter.

The GM can now append special / side effects and a description of the spell when it is cast. He notes that there is no ‘no attack roll’ parameter specified, so the character still has to make one when using the spell (system default), but he adds an additional element: if the attack roll fails, the spell affects the nearest character, with a preference for anyone between the target and the caster for tie-breaking purposes. Assuming that the target is being attacked by the other characters, that almost certainly means a team-mate. He describes the spell as a ball of light the size of a fist that streaks from caster to target and wraps around the target’s eyes for the duration of the spell’s effects. This is not at all what the player was expecting; Hodur was a blind deity in the Norse Mythos, and he expected the effect to be one of simply denying the target the ability to see. But if he wanted ‘no visible effects’, he should have Specified that and incorporated the resulting modifier; he didn’t, and the system default is for a visible effect. The GM simply drew inspiration for the nature of that effect from the power, “Flash,” on which the spell is based.

Using the system to design a new magic spell for the D&D System

I know I said that there would be a second complete example, but I completely ran out of time after discussing the translation mechanics. Sorry!

This is equally straightforward, at least conceptually. Each spell has a stat block that describes most of its specifics; each one is the basis of a Specification. In each case, the GM has to determine whether the numbers are a ‘per level’ or absolute number. Some spells have additional numeric descriptors in the spell descriptions, so these should also be scoured.

On top of that, in some versions of the D&D system, Metamagics can be used to adjust these values, but no Metamagic can be incorporated directly into a base spell without GM approval. Such approval still doesn’t incorporate the metamagic, but it does simulate it and does require an additional Specification for each increase, for example “x2 Range”.

    One-off vs Spellbook storage

    The exception to that statement would be if the GM adopted some variant of the “ad-hoc” spellcasting concept. Ad-hoc spells always have to build their spell variations directly into the spell construction and that includes metamagics, because those metamagics can’t be tacked on after the fact – ad hoc spells are always cast ‘as is’.

    In addition, I would consider the following as a House Rule within that new subsystem: Multiply each spell level by the number of spells within that spell level that the caster can memorize / use, daily, and divide by 3; the results are a ‘Spell Creation END pool’. This limits the ad-hoc elements of the spell to manageable levels. It should not be used outside of this system, it’s not sufficiently robustly-developed for that.
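The proposed pool is easy to compute. A minimal sketch of that House Rule (function name mine; round-down on the division by 3 is an assumption, since the rule doesn’t specify):

```python
def spell_creation_end_pool(slots_per_day: dict[int, int]) -> int:
    """Proposed 'Spell Creation END pool': sum of (spell level x
    spells of that level castable per day), divided by 3.
    Round-down is an assumption."""
    return sum(level * count for level, count in slots_per_day.items()) // 3

# A hypothetical caster with 4 first-, 3 second-, and 2 third-level
# slots per day: (1*4 + 2*3 + 3*2) / 3 = 16 / 3 -> a pool of 5.
```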

    All that said, D&D does, in fact, offer a form of one-off ad-hoc spell variants, permitting these to be captured in either potion or scroll form. Of these two, potions are the better choice from the GM’s perspective, because they can’t then be transferred into a character’s spellbook for free (in terms of this system).

    To offset the benefits of attempting to rort the system in this way, all scrolls generated using this system should have the property “Fragile” appended to their descriptions by the GM; this means that there is a 75-80% chance that any attempted transfer into a spellbook fails because the scroll self-destructs prematurely, requiring the whole creation process to be repeated. This applies ONLY to spells created using this process; the GM is free to set whatever rules he wants regarding “normal” scrolls and their fragility. And it only applies in situations where a character is trying to cheat the system.

    For such repetition, I would normally grant a +1 modifier because it’s been done by the character before, but in this instance, forget it. The cosmic halo of Mana has shifted or changed character or something, the ‘weave’ has been stretched by the failure, or whatever. You don’t earn a GM’s goodwill by attempting to cheat the system.

    On the other hand, creating a ‘permanent’ magic item (see below), with its attendant difficulties, is going about this honestly, and no such penalty or ill-will should result. But note that the penalties for creating a spell this way are already greater than those involved in crafting a permanent spell.

      Magic Items

      You can use this system for the crafting of a magic item. It is the GM’s option whether or not to use it for ‘standard’ items, but my recommendation would be not to. However, using this approach to get an estimate for the crafting time of a “variant item with no significant variations” is perfectly valid. The GM should also estimate the crafting cost of magic items using standard items as a guide.

      Start with the name of the item. “Sword of,” “Armor of,” etc are your cues to the first type of specification, the Form.

      There is a hierarchy to these things, providing a scale for partial successes like any other. That hierarchy is:

      Consumables:
           Potions (n x s / m / h)
           Scrolls (h)
           Wands (h / d)
           Arrows / Other (d)

      Permanent:
           Miscellaneous Minor (d)
           Daggers & Arrows (d)
           Miscellaneous Medium (n x d)
           Shields (w)
           Rods & Staves (n x w)
           Miscellaneous Greater (n x w)
           Other Weapons (n x w)
           Swords (n x w)
           Armors (n x w / mo)
           Minor Artifacts (n x mo / y)
           Major Artifacts (n x y)

      The higher up the scale you go, the greater the global penalty to rolls. I would set the zero point to be Miscellaneous Minor; Forms lower in the ranking get a +2 bonus per step, forms higher in the ranking get -1 per step.

      Artifacts of any kind should get an additional global penalty.

      Note the codes next to the forms – these are the recommended intervals (s = seconds, m = minutes, h = hours, d = days, w = weeks, mo = months, y = years). If a recommended interval is preceded by an (n x ), it means that the intervals should be multiplied by n, which is an integer from 2 to 6. Where the magic item has a plus associated with it, n should normally be ‘plus’+1, but I would make exceptions for consumable items falling into this category.
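The Form hierarchy and its modifier rule can be sketched as a simple lookup. This is my own encoding of the table above (the interval codes are kept as the strings used there); the additional global penalty for Artifacts is a GM call and is not included.

```python
FORMS = [                                   # lowest to highest
    ("Potions", "n x s / m / h"),
    ("Scrolls", "h"),
    ("Wands", "h / d"),
    ("Arrows / Other", "d"),
    ("Miscellaneous Minor", "d"),           # the zero point
    ("Daggers & Arrows", "d"),
    ("Miscellaneous Medium", "n x d"),
    ("Shields", "w"),
    ("Rods & Staves", "n x w"),
    ("Miscellaneous Greater", "n x w"),
    ("Other Weapons", "n x w"),
    ("Swords", "n x w"),
    ("Armors", "n x w / mo"),
    ("Minor Artifacts", "n x mo / y"),      # extra penalty, GM's call
    ("Major Artifacts", "n x y"),           # extra penalty, GM's call
]

def form_modifier(form: str) -> int:
    """+2 per step below Miscellaneous Minor, -1 per step above it."""
    names = [n for n, _ in FORMS]
    steps = names.index(form) - names.index("Miscellaneous Minor")
    return -steps if steps > 0 else -2 * steps
```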

      But you don’t know what the ‘plus’ is yet?

      That’s the next Specification. Each magical ‘plus’ gives a -1 modifier to the roll for this Specification.

      Each ‘plus’ permits (but doesn’t require) the incorporation of one spell-like effect. If this effect is already extant within the rules, a single Specification is needed for each. The GM should use modifiers to adjust for more powerful effects at his discretion.

      The first Power incorporated gets a +5 modifier, the second a +4, and so on. These are not fully universal, but they apply to everything related to that Power.

      What’s more, if the designer voluntarily gives up some of these Power Slots, he gets a bonus to the rest – +2 for each slot ‘locked out’. This enables larger-plus equipment that is conceptually focused to have bigger abilities than one that is all over the shop.

      If the effect is not normal, but is described by a spell that the character knows, it requires two Specifications: Spell-like effect is one, and the spell name and Character Level it is set to is the other. The higher the ‘effective’ character level, the worse the modifier.

      If the effect is a customized version of the spell, it should be listed as both its original form (one Specification), by name, followed by each of the modifications.

      If the effect is a completely new spell, even one derived from a Reference Spell, the whole spell design process has to be incorporated. That means that you can save a lot of work if you create your new spells in permanent form first, and only then commence enchanting the item.

      Most spell-like effects are then followed by either ‘permanent’, ‘at will’, or ‘X times a day’, describing how often the effect can be used. This is a separate Specification, but there’s such a big gap between even “5 times a day” and the first two that some special handling is needed.

      X times per day is handled as a single Specification, with a penalty increasing as X increases, the amount of which is left to the GM to determine. But there are already a lot of penalties, so I recommend -X, which is therefore a peak modifier of -5.

      ‘Permanent’ and ‘At Will’ also have a -5 modifier, but they confer an additional -1 modifier on all other Specifications related to this specific ability. That gets fairly significant when there are a lot of rolls, as with incorporating a custom spell.

      Wands and their equivalents have a different Specification at this point; every spell in them is the same, but the number of copies of a given spell that can be included per ‘plus-equivalent’ is 2, 4, 6, 8, or 10. The lowest value gives a +6 modifier, then +4, +2, +0, and -2.
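The frequency-of-use modifiers just described can be captured in a couple of lines. A sketch under my own naming; the -1 cascade that ‘permanent’ and ‘at will’ confer on related Specifications is noted but not modeled.

```python
def frequency_modifier(uses: str) -> int:
    """Modifier for the 'how often' Specification: -X for 'X times
    a day' (recommended cap of -5), -5 for 'permanent' or 'at will'
    (which also cascade -1 onto all related Specifications --
    not modeled here)."""
    if uses in ("permanent", "at will"):
        return -5
    return -min(int(uses.split()[0]), 5)

# Wand copies per 'plus-equivalent' -> modifier, per the text above.
WAND_COPY_MODIFIER = {2: 6, 4: 4, 6: 2, 8: 0, 10: -2}
```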

      Potions can usually only contain 1 ‘charge’ by default; a second and then a third Specification have to be included for a second and then a third ‘charge’, but all of these have the same modifier. In general, this makes potions more compatible with experimental spell design in the form of one-off spells.

      After the first power, you have the second, and so on.

      There are a lot of negative modifiers, so expect low net rolls and lots of failures. Almost every non-standard magic item is compromised in some way. I strongly recommend the optional rule that gives bonuses for repeated efforts.

      Exotic Materials

      Some materials are more easily enchanted than others. These materials give a blanket modifier to certain types of uses. I want to discuss two of them, and then mention a couple of others in more general form.

      Adamantine is the ultimate recipient for Dwarven magics in the form of weapons and armor. It grants a blanket +8 to all rolls subsequent to this Specification. The Specification “Adamantium” (or “Adamantine”) does not receive this bonus, because it is such a difficult material to work with.

      Mithril is the ultimate receptacle for Elven magics in any non-martial form, giving +6 to all rolls subsequent to its Specification. It’s not quite as difficult as Adamantine to work with, so it also gives a +2 to its own Specification.

      Other materials should be assessed by their purity. Steel that is forged and folded multiple times becomes more pure, and so do most other materials – enough to make this a general rule. Some other materials can also be considered effectively ‘pure’ – gold, silver, platinum, ebony, ivory, gemstones, etc. The most pure give a +5 modifier to the first 3 Specifications in each power slot, then +4 to the first 3, +3 to the first 4, +2 to the first 4, and +1 to the first 5.

      Herbs and woods are the weakest of the lot. They give +1 to the first 2.

Back to crafting a unique spell!

Wow, that side-trip into magic item design was a lot more extensive in the end than I originally had planned!

Reference Spell & Modifier Levels

Each spell should also list a ‘reference spell’ that is used to guide the GM in assessing the Specifications. This doesn’t have to be a spell that the character has access to, and it’s more powerful within the ad-hoc system if the character doesn’t; it serves purely to set a baseline for the GM evaluation of differences relative to the chosen Specifications.

Each numeric Specification is a point on a continuum, and can be moved up or down by discrete steps (being careful to avoid using the term ‘intervals’ because that already has a specific meaning in this set of rules). It’s up to the GM how large these steps are, but they should make most desirable values a small integer number of steps away.

Range might be “20′ per level” based on the Reference Spell; steps down from that (making the spell easier to craft) might be “15′ per level”, “10′ per level”, “5′ per level”, “1′ per level”, and “touch” – with “touch” being a floor to that particular Specification. Similarly, you can’t have a fraction of “Instantaneous,” so that’s a floor on Casting Time. Each of these steps down would add +2 to the roll for that specific Specification (but note the optional rule below).

Increasing the Range Specification from a Reference level of “20′ per level” would yield values of “25′ per level,” “30′ per level,” and so on, and these give a -2 modifier to the skill roll for incorporating that Specification into the design.

This approach is especially advantageous because it builds a scale for the formulation of “partial solutions” directly into the system. However, if necessary, the GM can use different step values for partial solutions.

Let’s say the player was aiming for a range of 30′ per level against a reference of 20′ per level. He flubs the roll by 3. Using the standard intervals of ±5′ per level, there isn’t a whole lot of room to maneuver; there’s just one intermediate step. If the GM chooses each unit of failure to represent 2′ per level difference, that failure by 3 can be interpreted as being 6′ per level away from what the player was aiming for, or “24′ per level”. The GM can then offer this as an option to the character.
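The partial-solution arithmetic above can be expressed in a couple of lines. The function and parameter names are mine, and the sketch assumes the target lies above the Reference value, so each point of failure pulls the result back toward the Reference.

```python
def partial_solution(target: float, failure_margin: int,
                     unit_per_point: float) -> float:
    """Interpret each point of failure as unit_per_point of
    shortfall from the target value (the GM chooses the
    granularity of unit_per_point)."""
    return target - failure_margin * unit_per_point

# The example from the text: aiming for 30' per level, a roll
# flubbed by 3, with each point of failure worth 2' per level.
offered = partial_solution(30, 3, 2)   # 24' per level
```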

Optional Rule (all systems): Transfer Of Bonuses

I noted the GM’s determination of “Choke Points” in the first example. There’s nothing stopping the player from making the same assessment and ‘banking’ modifiers from what he regards as ‘easy steps’ later in the process to boost his chances of success on the earlier steps.

Sometimes, the player’s assessment will be correct, and this can cushion the process in their favor. And sometimes, the GM will have thought of a reason why a different step is a / the Choke Point in the process, and the player will have made a hard roll worse. The better the player understands the game world and its internal physics, and the GM and his way of thinking, the more accurately he will make this assessment; and when those understandings are more limited, the outcomes can provide a direct window into that knowledge.

Modifiers can only be transferred from Specifications yet to be rolled. Once the roll is made, only the specified modes of variation are permitted; player modifiers of this type become fixed, as does any matching penalty.

Players can decide that they are about to roll an easily-successful step, and take a penalty on it to make a later, more difficult, step, easier to complete. That’s fine. But they can’t look at a bonus after the roll and decide that the bonus was unused; they aren’t allowed to bank it for later.

This makes each Specification and its roll a more dynamic process, and boosts player interaction with the system and its mechanics – not a bad thing. But it does reduce the GM’s ability to modify the outcomes, either in the character’s favor or against it, and that can be a bad consequence, especially from the GM’s perspective.

My own thoughts are balanced 50-50 on the question of whether or not to implement this; I can see both benefits and liabilities. That’s why it’s an optional rule. I would probably give the system a couple of opportunities to establish itself without the optional rule, and then introduce it on a trial basis. Or run a couple of “solo playtests” to see how big a difference it made to the ‘look and feel’ of the system mechanics. Or both. But that’s me; every GM is different and has different underlying philosophies to their GMing style, so you do you.

If there’s no one right answer, there are no completely wrong answers, either.

    Spell Variants

    Oftentimes, the goal isn’t an entirely new spell, it’s a variation on an existing one, which therefore becomes the Reference Spell. If the character already knows or has access to the Reference Spell, this confers a +5 advantage when crafting an ad-hoc variant and a +2 advantage when crafting a permanent addition to a Spellbook or Grimoire.

    Intelligent Items

    To craft an intelligent, sentient item is HARD.

    They start with a seed of intelligence taken from the caster. This can be as large as the caster wants, so long as it is 1 point or better, but his own intelligence goes down by the amount of the seed, so casters tend to favor fairly small ones. These points cannot be recovered by any means so long as the crafting is underway, and the character suffers all the attendant consequences of his lower INT.

    Each Specification of that seed doubles the resulting INT of the item, or doubles its rate of maturity. The INT growth normally takes 32 years to mature, so cutting that down to 6-12 months is highly desirable; most mages would prefer to take it further, but each such doubling also adds a -1 modifier to the subsequent Specifications of this type.

    Int Seed 1: INT doubles to 2, doubles again to 4, to 8, and to 16; the 32-year maturation halves to 16, 8, 4, 2, and 1 year – that’s a total of -9: four doublings and five halvings.

    Int Seed 2: INT doubles to 4, and again to 8; maturation halves from 32 years to 16, 8, 4, 2, and 1 year, then to 6 months – that’s a total of -8: two doublings and six halvings.

    Int Seed 3: INT doubles to 6, to 12, and to 24; maturation halves from 32 years to 16, 8, 4, 2, and 1 year, then to 6 months, 3 months, 1 1/2 months, and 3/4 of a month (about 22 days) – that’s a total of -12: three doublings and nine halvings.
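If you want to check those totals mechanically, here’s a minimal sketch (names are mine): one -1 per INT doubling, plus one -1 per halving of the default 32-year maturation needed to reach the desired time.

```python
def int_item_penalty(doublings: int, target_years: float) -> int:
    """Cumulative modifier for crafting an intelligent item:
    -1 per INT doubling, plus -1 per halving of the default
    32-year maturation time needed to reach target_years."""
    halvings, years = 0, 32.0
    while years > target_years:
        years /= 2
        halvings += 1
    return -(doublings + halvings)
```

This reproduces the three worked examples: 4 doublings to 1 year gives -9, 2 doublings to 6 months gives -8, and 3 doublings to 3/4 of a month (0.0625 years) gives -12.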

    The creator gets to specify one personality trait; the GM can add 3 more, or add one more and attach some words before or after the player-specified trait to modify its meaning. These must be specified secretly and in writing; only the GM is permitted to know everything. The other players at the table then supply (secretly and in writing) a single word each, which the GM has to arrange into 1-3 additional personality traits. To do so, he can transform any noun into adjective or verb form, or substitute a quality especially associated with the word, or attach any emotional state to a noun. Any unused words get discarded. The personality emerges as the item matures; only at the end of that process does the GM reveal the substance of the personality summary.

    EG: The creator supplies the one word, “Loyal”. The GM adds “to himself” and, after contemplating “puppy-dog eager” as a second personality trait, adds the tried-and-true “Manipulative” instead. Player #2 offers “Lemon”, #3 provides “Eccentric”, #4 suggests “Affectionate”, and #5 gives “Gleams”. The GM transforms these into “Affectionate About Lemons” and “Gleams Eccentrically” (Eccentric to adverb).

    So this is a magic item that is self-centered to the point of potential disloyalty, who likes to manipulate others to protect itself, who loves everything about lemons from their color to their scent to being bathed in lemon juice, and which somehow twists the light striking its surface to reflect that light in unusual and unexpected directions, a peculiar expression of vanity.

    The GM could also have transformed “Lemon” into “Sour”, and used it as a standalone personality trait. But he decided not to be that mean.

    The item has all its powers while maturing, but is not able to apply as much intelligence to such use. If employed in this time, it might well make mistakes – serious ones.

    If the final integration roll fails, does the caster get their INT seed back? – no, because failure isn’t necessarily the end of the story. The GM can apply extra time modifiers that turn the failure into an eventual success. He can impose a Block or a Setback, forcing the process to retrace some of its steps, or navigate an additional Specification; either of these choices keeps the chance of success alive. Or the character can wait an interval and just try again. And again. And again.

    It’s perfectly legitimate for the GM to rule that the final integration can only take place under certain conditions – “inside a magic circle on the night of a full moon” for example. However, he should ensure that the character at least hears hints as to such requirements long before he actually reaches this point in the casting. If he doesn’t follow up on this information, that’s on him. Me, I would use this as the trigger to a whole adventure – the character has to steal into the tower of a bunch of evil wizards and ransack their library for the information he needs, that sort of thing. And, should the character be discovered by the Wizards (he will be), he’ll need the other PCs to help him escape!

Selected other examples

There are three other examples of using this system that I want to highlight because they show off some aspect of what can be done with these mechanics. Like the spells (and magic item) example above, these will be more ‘how-to’s’ than full examples.

Using this system to design a better television set for mass production

To design and create a better TV set (or any other industrial gadget) you need to first define its fundamental properties. I picked this as an example because I think every reader will know what a TV set is and what it does. And for that reason, I think we can define a TV set as, well, a TV set. So that’s the first Specification: “Prototype Television Set”.

What are the fundamental characteristics of a TV set? What do you look at when considering a purchase?

    Price

    The first item is retail price, but that gets a little complicated because the price is relative to the value of the currency. In general, it takes the form of a range from “0.5 x X” to “1.5 x X”, where X is the median price. But you can’t define a median price unless you’re comparing like with like. This is the Target Retail Price. It guides later design choices in ways that are too complicated to map out in a general form, and may or may not be achieved at the end of the process.

    Screen Size & Function

    Which brings us to the second item: Screen Size. This is defined as a basic shape (square, letterbox, cinema) and the size of the screen from corner to corner. In countries that have switched to the metric system, screens are traditionally still measured in inches long after every other measurement has been converted – but sooner or later, these measurements will also make the switch.

    10, 20, 30, 40, 50 – those are the sizes in cm of small, portable units; divide by 2.5 for a rough inch equivalent. Choosing one of these adds the third item to the list, “portable”, which implies a weight range.

    60, 80, 100, 120, 150 – these are the sizes of smaller domestic units in cm. Again, divide by 2.5 to get inches. Third Specification is ‘domestic, small’.

    180, 220, 260, 300, 400 – those are medium modern domestic units in cm. Divide by 2.5 for inches. Third Specification is “domestic, medium”.

    500, 800, 1000, 1200, 1500 – those are large domestic units in cm. Third specification: “Domestic, large”.

    1600, 2000, 2500, 3000, 4000 – these are the “Home Cinema” sizes in cm, giving the third specification accordingly. Only the first two are common, but the next two are around – my nephew has a set that’s somewhere in the 3000 range; it takes up an entire wall.

    5K, 10K, 20K, 40K, 60K, Special – those sizes are starting to reach the point where people have trouble grasping the size, so let’s switch the units up – meters and feet or yards: 50, 100, 200, 400, 600 meters, Special; multiply by 3.3 to get feet or 1.1 to get yards. The latter conversion is so simple it can be done in your head, so let’s use it: 55, 110, 220, 440, 660 yards, Special. 660 yards is about 3/8 of a mile. To the best of my knowledge, none of these sizes are in actual production – it would be more common to have a bank of smaller sets.

    Note that anything in this final size group adds a penalty to the Reliability testing later in the process. These should be -10%, -20%, -30%, -40%, -60%, and -80%, respectively, or their game system equivalents.

    These are the “fantasy” set-sizes, and I don’t think we need to go much further. Admittedly, my Dr Who campaign featured a spacecraft recently whose cylindrical body was a TV “set” more than a km in length, but its display was segmented into different levels within the spacecraft.

    I’ve included the size category “Special” for such purposes.

    Once you know the size category and associated X-value you’re talking about, you can go back to the actual size and fill that in, and that will inform the typical price point for that size of set.

    EG:
         X=2000
         Size = 1000-3000
         Price: $2500 AUD

    Resolution

    Each of these sizes carries an implied display resolution. In the old TV world of cathode-ray tube displays, these were measured vertically in “lines”; in the modern, wide-screen world, they are pixels across the top.

    Old-style sets: square display area (more or less; they were actually 4:3 ratio). Later versions offered a “letterbox” format for showing widescreen images. You can still find sets in the smallest screen sizes that preserve this arrangement, if you search hard enough, but they are increasingly rare. 60, 80, 100, or 120 lines were on offer in portable, domestic small, and the lower sizes of domestic medium – which were considered large sets back in the day – but the real standards were 480 or 576 lines.

    In more modern designs, the smallest portable sets will have 512 pixels, but older sets might have 128 or 256 pixel displays. The next size up is 512 (256 in early sets), and the rest of the category is 1024-pixel resolution (twice as sharp as the best of the vacuum-tube screens, basically).

    In the “Domestic, small” category, you have 1024 and 2048 pixel displays.

    “Domestic, medium” is 2048 and 4K in the two smallest sizes, 4K exclusively in the middle, and 4K and 8K at the top end.

    “Domestic, large” is pretty much all 4K and 8K, but I have seen some sets that go to 16K and “interpolate” every second pixel (or AI-upscale the image, such as the BOE 110-inch device). This blurs the image slightly if you get too close to it, but is good for more distant viewing. This is a way of getting around the problem that the source media / transmissions are rarely 8K, let alone better. 32K is barely theoretically possible, but a slow uptake of 8K means that there’s no commercial impetus to go there, and all sorts of technologies need breakthroughs to support this resolution. Your Sci-Fi screens (Bridge-of-the-Enterprise stuff) might go to this resolution, and might even go to 48K or 64K. One of the major hurdles is the size of the pixel, which becomes very hard to manufacture when they ALL have to work, reliably, for a long service life, or you get damage to the displayed image.

    HD is 1920 x 1080 pixels; Ultra-HD is 3840 x 2160 pixels, also known as 4K; UHD2 is 7680 x 4320 pixels, also known as 8K.

    The standard ratio these days is 16:9. If you do the math, that gives a corner-to-corner value of 18.35756 – so if you divide the diagonal screen size by this, you get the size of one unit of the 16:9 grid. The width is 16 such units and the height is 9, so dividing those by the pixel counts gives the size of each pixel, which can be useful in terms of appreciating that size.

    EG Cont:
    Size = 1000-3000 cm = 500-1500 in. Taking a screen height of 1000 mm as a round figure, HD = 1000/1080 = 0.926 mm per pixel, a little smaller than the lowercase ‘e’ and ‘s’ in ‘smaller’ – using Campaign Mastery’s default font size.
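As a cross-check on that arithmetic, here’s a sketch (the function name and units are my choices) that turns a diagonal size and a horizontal pixel count into a pixel pitch, using the 18.35756 corner-to-corner factor for 16:9 screens.

```python
import math

def pixel_pitch(diagonal_mm: float, px_across: int,
                aspect=(16, 9)) -> float:
    """Physical size of one pixel, in the same units as the
    diagonal. math.hypot(16, 9) is the 18.35756 figure quoted
    above; diagonal / that gives one aspect-grid unit."""
    w, h = aspect
    unit = diagonal_mm / math.hypot(w, h)
    # pixels are square, so width / horizontal count works
    return (unit * w) / px_across

# A 1000 mm (1 m) diagonal Full HD screen:
print(round(pixel_pitch(1000, 1920), 3))  # 0.454 (mm per pixel)
```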

    Resolution is the fourth specification. And, if you want to allow the use of media with other resolutions, it might also be the third, fourth, and fifth – one Specification for each resolution on offer in the set.

    Sources & Inputs

    You get one for free – that’s usually antenna and digital decoder, here in Australia. The first additional Specification gets you two more – commonly an HDMI and a USB. The second adds three more to that list – frequently a second USB, a second HDMI, and a composite input. A fourth Specification in this area adds up to 4 more source inputs.

    What about internet streaming? You may need that 4th Specification, or you may need to sacrifice one of the existing sources to make room.

    What about cable, or satellite? Same story. An inbuilt CD/DVD player? You got it. And then there’s the input that everyone forgets – the remote control.

    And then, there’s the kicker – price points. One of the ways to get sets down to a low price point is to sacrifice inputs, but what’s acceptable in this department has changed a lot. These days, no-one would dare to offer a TV without a remote control and at least one other source. My TV is a mid-priced small unit (slightly smaller than I wanted, in fact) and it has a remote, two HDMI inputs, two USB inputs, a composite input, internet streaming, and Bluetooth that I can use to connect it to my laptop.

    Outputs

    Outputs work the same way – one (the screen) for free, +2 for the first Specification, +3 for the second, +4 for the third, and so on.

    Outputs can include Headphones (virtually all sets have one jack, some have two), HDMI (sometimes x2), loudspeakers (usually x2 for stereo), and audio output (to a soundbar or hi-fi). A DVD/Blu-ray player may also be a DVD/Blu-ray Burner. Some TVs have a sound bar built in (which can be bypassed if you have a better one) and so will have connections for two external speakers (stereo), three (adds sub-woofer), four (better stereo), five (surround sound), or seven (better surround sound). I have also seen a 9-speaker rig (differentiated low to high based on the part of the screen with the greatest brightness), but that didn’t work too well.

    What else? Well, some TVs let you ‘cast’ (short for broadcast) the image to a second TV in a different room. Some have Bluetooth (for wireless headphones), some have Bluetooth for video, and some have cameras (an additional input) and function as a telephone for videocalls (there’s the matching output, but it’s also another input).

    That’s about it, really.

    Switching

    Most sets don’t even have this as a Specification, but a few do – video-within-video, casting one channel while watching another, recording one or two channels to an internal hard disk while watching another, and so on. These are 0 free, +1 for 1 Specification, +2 for a second, and so on – one per ‘display mode’. Okay, most TVs will have a control panel, I guess that counts as the free one.

    Controls

    You get volume, brightness, input selection, and channel selection, for free. Everything else costs you. +2 controls for the first Specification, +3 for the second, and so on.

    Controls can be grouped into Visual, Audio, and Other, and it can be helpful to think of them that way, but it’s the grand total that matters.
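The sources, outputs, and controls all follow the same escalating pattern – some number of items free, then the Nth Specification adding N+1 more. A sketch of that grand total (the function name is mine):

```python
def feature_total(free: int, num_specs: int) -> int:
    """Free items plus N+1 items for the Nth Specification
    (the first adds 2, the second adds 3, and so on)."""
    return free + sum(n + 1 for n in range(1, num_specs + 1))
```

So an input list with two Specifications comes to feature_total(1, 2) = 6 sources, and a control scheme with two comes to feature_total(4, 2) = 9 controls. (Switching is the exception: it starts at 0 and adds only 1 per Specification.)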

    Visual contains contrast, color saturation, tint or hue or both, sharpness, and probably a few that aren’t coming to mind. In the old cathode-ray tubes you may have had pinch and skew and x and y adjustments. Some had a degauss function that could be considered either a visual or an ‘other’ control.

    Audio includes tone, wide, bass, treble, sub-bass, ultra-treble, independent headphone volume control, Dolby, de-Dolby, boost, crossover, front-to-rear volume, and I’m sure there are a few that I haven’t thought of. You can also get preset EQ settings for movies, TV, rock music, classical music, and so on. And mute, possibly accompanied by separate Mute Headphones and Mute All. Oh, and balance.

    In the ‘Other’ category you will find channel tuning, channel offsets, auto-tuning on/off, the ability to turn off powering some functions if you aren’t using them, a system reset, a favorites option, TV guide, automated channel switching, search functions, internet browsing, and a text input mode. Some systems have certain streaming modes built in – mine has over 500, including most of the domestic free-to-air channels (but not the Channel 7 family, for some strange reason) – those don’t count separately; consider them one item. Headphones on/off are a common one that could go in either this category or in sound.

    Also in that category, I’ve seen at least one TV which had a USB printer-port, so that you could do more than just browse the internet – you could print out part or all of the page. Selectable printing is a second item above whole-page print. And I’ve seen one internet-enabled set that let you rest your cursor on a link, click a different button on the remote, and print the page at the other end of the link without even opening it – very handy for gathering research.

    As you can see, these can add up quickly. That’s because, in modern times, these are all done by software. Back in the analogue times, they had to be done with physical circuits and engineering, and that meant that there were a lot fewer of them. But there was nothing to stop manufacturers adding some of these extra functions, and some did. My first color TV (I started with a much older black-and-white set inherited from my step-great-grandmother) had both bass and treble controls.

    Special

    This is a catch-all for other parts of the package, the clever bits that can make you stand out from the crowd. OLED, LED, LCD, QLED, mini-LED / Neo-LED / QNED, MicroLED, RGB Mini-LED / Micro RGB, Laser Projection, and Lifestyle TVs – most of these are acronyms, but every modern TV uses one of these technologies, and each brings different pros and cons.

    Anything you can think of can go in this category. If you want a TV with an attached 3D printer that builds a diorama of the currently-displayed TV image, interpolating multiple frames through the scene to construct a 3D map of the scene – go for it. I’ve never heard of this being done, but with AI, it should be possible. Or maybe you want to license some popular games and build them into the system.

    As a general principle, forget about the acronyms and just Specify the technological benefit that you want – “better blacks”, “sharper colors”, “curved screen”, and so on. Each of these forms a separate Specification.

    If you want to know more, go to Google and search for “Types Of TV Screen” – that’s what I did to generate the list given above.

    This category can also contain things like “User-friendly Menu design” or “Quiet” – some TVs generate so much heat that they need a lot of cooling to maintain reliability, and those fans make noise.

    Prototype Testing

    So, with all this specified, you have defined this particular model of TV. Next, you have to build a prototype. Since that’s effectively the end result of integrating all these specifications, it gets built for free.

    But then you need to test that prototype for efficiency, reliability, heat generation, electromagnetic interference, and physical robustness. Each of these tests on the prototype (save physical robustness) is a separate Test, listed as a separate Specification, and has to be passed. There’s a +5 modifier to doing so, but the number of Specifications listed prior to this point each contribute -1; the expectation is that ‘extra time’ will be spent just to get to the point where this roll is achievable. The more complicated your TV set, the harder these tests are to pass and the longer they will take. Obscure design flaws often don’t show themselves until this point.
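The net modifier on each prototype test is simple enough to express directly; this sketch (names mine) assumes nothing beyond the +5 base and the -1 per prior Specification described above.

```python
def prototype_test_modifier(prior_specs: int) -> int:
    """+5 base on each prototype test roll, less 1 for every
    Specification listed before the testing stage."""
    return 5 - prior_specs
```

A design with a dozen Specifications before testing faces a net -7 on each test roll, which is why ‘extra time’ usually gets spent just to make the roll achievable.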

    Factory-ready design

    Next, you need to redesign the prototype in terms of a compact design that can be manufactured by assembly-line processes; that’s an additional Specification. Modern product development often does this through CAD as each component is selected and its function incorporated into the design, but I’ve gathered it all into this discrete step.

    The next two Specifications can be done in whatever order the player decides.

    The first one I’m going to discuss is Component Sourcing. Prototypes can be built with as many custom parts as desired, and are often over-engineered, because until he gets there, the designer doesn’t know what’s going to be needed. There are three levels to Component Sourcing: all off-the-shelf parts; bespoke components that a parts manufacturer builds specifically for this model and design (these should be limited in number, and are generally minor variations or adaptations of existing components); and custom parts that have to be commissioned from scratch just for this model (these should be avoided if at all possible, because they are VERY expensive). Either of the last two runs the risk of supply lines being compromised; while that can happen with off-the-shelf parts where there is only one source, that’s relatively rare.

    The designer makes his roll, and has to keep making this roll until he succeeds – or until extra time can be applied to make a roll successful. That might not be the first roll, or even the second; it will be a roll that is ALMOST a success, because that’s the more efficient pathway.

    The GM takes the margin of success and subtracts the number of intervals required for this Specification.

    If the results are greater than or equal to zero, this is an entirely “off the shelf” design, and costs just plummeted to the middle of the lower half of the range. If the results are -1 to -5, then there are some bespoke parts required, but they are needed in sufficient quantity, or have sufficient other applications, that a manufacturer of such parts will partner up for the manufacturing of these components. But if the process was a troubled one, resulting in an adjusted roll of -6 or worse, at least one custom part is needed, and someone will have to be contracted to make it. Or its manufacture can be brought in-house, but that may require training and special expertise that the factory doesn’t have. Costs per unit immediately spike, doubling or tripling ([d8+4]/4, keep fractions).
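Here’s a sketch of that outcome logic (the names and structure are mine; the thresholds and the [d8+4]/4 cost spike come from the text):

```python
import random

def component_sourcing(margin_of_success: int, intervals: int,
                       rng=None):
    """Classify the sourcing outcome and return (kind, unit-cost
    multiplier). A custom part spikes unit costs by (d8+4)/4."""
    adjusted = margin_of_success - intervals
    if adjusted >= 0:
        return "off-the-shelf", 1.0
    if adjusted >= -5:
        return "bespoke", 1.0
    rng = rng or random.Random()
    return "custom", (rng.randint(1, 8) + 4) / 4
```

The multiplier works out between 1.25 and 3.0, with fractions kept as the text specifies.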

    The player is permitted to reject this outcome and go back to the start of this Specification (time used is lost, of course), and try again.

    This Specification not only represents replacing the custom builds and expensive components in the prototype, it covers the negotiations with suppliers, legal, and the signing of contracts.

    The other Specification is carried out simultaneously and in conjunction with the first: Miniaturization. The smaller the parts, the smaller (and sometimes the cheaper) the product. Until computers came along and used plug-and-play in 1995, there wasn’t a whole lot going on in terms of off-the-shelf circuitry; almost everything was built by the factory. This was a paradigm shift in manufacturing that not a lot of people were aware of. Plug-and-play computer chips soon followed, and these found utility in markets the chip manufacturers never dreamed of. The extent of the revolution became clear when Y2K loomed; aside from the bigger, more obvious devices, there were millions upon millions of microprocessors embedded in other technologies, and all of them had to be considered suspect until proven otherwise. Between 25 and 100 million products had to be tested, and in some cases, hastily redesigned or reprogrammed.

    The more that can be done with off-the-shelf components, even if that’s not what they were intended for, the better. The more the top-grade components used in the prototype can be replaced with cheaper ones, the better. The smaller the level of over-engineering compatible with safety, the better.

    It can be cheaper to use an off-the-shelf chip in which 90% of the functionality is ignored than to have a custom chip with just the circuitry you want. And designers tend to then look for ways to use that additional functionality even if that wasn’t what it was intended for.

    Miniaturization can reduce the electrical demands, which reduces the cooling required, which reduces weight and noise and size.

    So Miniaturization is incredibly important. The designer rolls until successful, as with Component Sourcing, then subtracts the total number of Specifications from the margin of success until reaching a net minus 5, then adds the remaining Specifications to this total – because, past a certain point, the capabilities of the parts make it easier to add more functions. The results can then be read off the following table:

         +6 or better: very large scale miniaturization, product size halved
         +4 or +5: large-scale miniaturization, product size to 60%
         +2 or +3: large-scale miniaturization, product size 70%
         +0 or +1: considerable miniaturization, product size 80%
         -2 or -1: some miniaturization, product size 90%
         -4 or -3: minimal further miniaturization, product size 100%
         -5: No effective miniaturization, product size +(2d4-1) x 5%
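Read as code, the table looks like this (a sketch; the function name is mine, the values are as above):

```python
import random

def miniaturized_size(adjusted: int, rng=None) -> float:
    """Final product size as a fraction of the prototype,
    from the adjusted Miniaturization result."""
    if adjusted >= 6:
        return 0.50
    if adjusted >= 4:
        return 0.60
    if adjusted >= 2:
        return 0.70
    if adjusted >= 0:
        return 0.80
    if adjusted >= -2:
        return 0.90
    if adjusted >= -4:
        return 1.00
    # -5 or worse: size grows by (2d4 - 1) x 5%
    rng = rng or random.Random()
    return 1.0 + (rng.randint(1, 4) + rng.randint(1, 4) - 1) * 0.05
```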

    That’s followed by one last item – aesthetics. These are generally kept fairly simple these days, but wood-grain vs plastic vs pseudo-metal were all valid considerations in the older days.

    The end result is a second prototype, and guess what? It needs to be tested, too, just like the first – but this time WITH physical robustness and possibly portability added to the list.

    It is worth noting that the choice of sequence can matter if different foundation skills are used, as a critical success on the first then greatly benefits the second. That won’t matter most of the time, but never assume that it will never take place.

    Logically, these two processes should influence each other. Getting a “custom part” result should give a bonus to miniaturization, and part of the process of achieving a high level of miniaturization might involve using bespoke components, or otherwise ‘steering’ the outcome of the Parts Sourcing roll.

    So I was very keen to establish a firm sequence for these two Specifications – but every attempt (and there are only two orders they can be in) collapsed completely, because the bonuses run both ways.

    Either I implemented something complicated in which each roll could feed back into the other, or I could completely divorce the two in terms of sequence and just leave it to the GM to interpret the actual results on the day it comes up. How he chooses to interpret the results of the first roll, whichever one it happens to be, should be reflected in a tweak of the results in the second, and in the bonuses offered for successful completion of the second.

    I chose the second of those two choices. The truth is that there are probably dozens of ways a particular circuit could be implemented (in modern times – don’t try to apply this pre-WW2), and the process is an iterative one of finding out what’s available, how much of the circuitry it can provide to the design, and how miniature the results are.

    Patents and Trade Secrets

    Any Custom Part will be patented by the design owner. Any bespoke parts will be patented by the manufacturer, though sometimes this can be shared with the design owner. Neither of these prevents a rival from reverse-engineering the product to learn its secrets; instead, it places those secrets in plain sight, but protects the profits from the use of them.

    But the sweetest result of all is when the design yields a Trade Secret – something that isn’t protected by a patent because it’s a lot harder to reverse-engineer, and so is exclusive to the product manufacturer – at least for a while.

    Either a critical success on the Miniaturization roll, or a Critical Failure on the Component Source roll, can yield a Trade Secret.

    Having a trade secret means that no-one else can use the technology. There’s an unresolved variable in their analyses. But that secret has no legal protection; if anyone else does figure it out, it’s too late to patent it. So the designer gets the choice – publish or don’t publish?

    If he decides to go down the Trade Secret route, he can add one more Specification (which he has to roll for, of course) – a difficulty modifier to be applied to all attempts to reverse-engineer the tech. He specifies how hard he wants it to be, and that becomes a negative modifier on his roll, but his Margin of success increases the penalty for anyone else duplicating his work.

    Manufacture

    And, finally, there is Manufacture. This is a Specification that doesn’t require a roll by the player; instead, it’s a case of what seems reasonable to the GM. The design might seem perfectly reasonable to him, in which case, he has only one variable to assign – manufacturing time.

    But the GM may feel that there’s been too much built into the design for the price-point – the unit cost for manufacture is higher than desired. He gets to assign any variations on the initial parameters that he considers appropriate. Throughout the process, the designer should have had one priority in mind as the most important, and the GM should respect that. It might be the price (and note that the numbers assigned for this are Retail, not manufacturing; you need 30-50% allowance for store profits and 5% for transportation per division in the size category, except 2% for Portable units). It may have been portability, which is a compound of size and weight. It may have been quality.

    These three, and manufacturing cost, form the unholy quadrilateral – you can have any three, but the fourth bears the brunt; or you can have any two as specified, and the consequences can be split amongst the remaining pair.

    Quality, Features, Price, and Size/Portability/Weight. Every design exists at a specific point in the resulting 4-dimensional product space, and the GM decides what that point is, and what – if anything – has to be compromised, based on what the player’s choices have been during the design process, and what he said when making his decisions, and his rolls.

    This process can’t be carried out by the player on his own. It HAS to be done face-to-face with the GM.

    It’s not at all uncommon for potential functions to be disabled in this step. Sometimes, you find products which have all the software built in to perform a certain function, but the hardware has been removed from the design, which often shows the legacy of where those components used to be – mounting holes and what have you.

    Manufacturing Time

    From the manufacturing cost, the GM can set the number of manufacturing processes involved – the lower the cost, the fewer of these there will be. The exact numbers don’t matter; what’s important is that the more Specifications there are, the more has to be done at each step of the process. For game purposes, it’s best to reverse the relationship – setting an average time for the number of processes, then multiplying by the number of specifications. Anything from 1 second to 5 minutes is reasonable.

    The average time per workstation dictates how fast the units can be manufactured and packaged, ready for shipment. The product of that average time and the number of Specifications gives the total manufacturing time for a collection of parts to become a completed unit.
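The arithmetic above can be sketched as follows. The 90-second average and the 12-Specification, 200-unit batch are illustrative numbers of my own, not drawn from the article (though the article suggests anything from 1 second to 5 minutes per process is reasonable):

```python
# Sketch of the manufacturing-time arithmetic described above.
# The concrete numbers below are illustrative assumptions, not canon.

def unit_manufacture_time(avg_process_seconds, num_specifications):
    """Total time (seconds) for one collection of parts to become a unit."""
    return avg_process_seconds * num_specifications

def batch_time(avg_process_seconds, num_specifications, units):
    """On an assembly line, once the first unit clears every workstation,
    one finished unit emerges per average workstation time."""
    first_unit = unit_manufacture_time(avg_process_seconds, num_specifications)
    return first_unit + avg_process_seconds * (units - 1)

# A 12-Specification design averaging 90 seconds per process:
print(unit_manufacture_time(90, 12))   # → 1080 seconds per unit
print(batch_time(90, 12, 200) / 3600)  # → 5.275 hours for a 200-unit batch
```

The pipeline effect is why the average time per workstation, rather than the total per unit, dictates shipping rates.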

    There is then only one consideration left: profit per unit. But that’s way beyond the scope of this article.

Using this system to design and construct a space station

Each required system is a Specification – Life Support is one, Accommodations another, Meeting Rooms a third, a Control Center a fourth, docking facilities a fifth, making those universally compatible a sixth, and so on.

Anything that has to be there in order for the station to do whatever the designer wants it to do should get listed.

The environment for which it is being designed should also be a Specification.

The base Weapons Specification is the bare-minimum system currently available that is capable of functioning in the specified environment. Each upgrade in type or in effectiveness has to be contained in an “Improved Weapons” Specification. From bullets to lasers? One Specification. Bullets to better lasers? That’s two Specifications.

Using this system to design a more efficient air-con

This illustrates two important principles, but beyond that is very like the “Better TV” process discussed earlier.

Principle One: you can’t just say “Better Air Con”, or even “More Efficient Air-Con” – you have to give the GM some sort of conceptual point to hang the difference in efficiency on. “Two force-fields in a closed box separate, lowering the air pressure within. This cools the air. A valve releases the cool air into the room while drawing new air into the chamber.”

So long as the description meets the minimum standards for credibility within the campaign, it’s fine; it doesn’t even matter if it would really work (I suspect that it would be rather noisy). There’s enough there to work with – chamber size, force field creation, force field location, force-field speed of movement, movement mechanism, valve size, outflow fan. The design process might show that more effective cooling is achieved by compressing the air in the chamber and then letting it expand when released, with some sort of radiator mechanism or pool of water (water can absorb a lot of heat) in a tray underneath. As I said, there’s plenty of sci-fi / engineering crunch for the GM to use in the design process.

The second important principle is scale – there’s no mention anywhere in “More Efficient Air Con” of the system scale, but refrigerating a room is way different from refrigerating an entire floor-plan or skyscraper. Adjust designs and difficulty levels accordingly.

Still more examples

Some more applications of the system for you to consider:

  • Designing and constructing a castle or stronghold (or dungeon) – done in a similar way to a space station, each element that you want to add becomes another Specification.
  • Anti-magic grenades – which reduce a spellcaster’s capacity for spellcasting for the rest of the day, or until they Rest sufficiently.
  • Designing and constructing a robotic companion or servant – bread-and-butter for a system like this.
  • Designing and manufacturing a chemical solution to a problem – this is exactly what the system was designed to do; each chemical property you want the result to have is a Specification.
  • Building a complex shape using force-fields – the second purpose to which this system was actually put. This purpose ties in directly to Power Skills, which is why last week’s post had to precede this one – they make a 1-2 punch.
  • Designing and constructing a piece of computer software – each function is a Specification, the Operating System is a Specification, the minimum hardware is one or more Specifications.
  • Designing and running a PR campaign to resurrect the career of a politician after a scandal – you didn’t think they’d slip away quietly into the night, did you? Even caught red-handed, they would at least try.

The list goes on and on – at least one of the items above hadn’t even been considered before I started typing from my notes.

This is a methodology for creating complex structures, objects, patterns, and effects using the tools provided in every RPG for which those things are relevant, from a Royal Carriage to a plot to take over the world. The basic processes are simple, but have enough depth that projects of any complexity can be handled.

Some final notes on Interval selection

This is one of the most critical aspects of the system. As stated earlier, Specifications x Interval = minimum time, but the totality could be much higher, and is dependent upon choices made by both player and GM along the way, and so, unpredictable.

The likelihood of success of the character’s skill is the only guide that the GM has as to how much extra time the totality is likely to take – the higher the skill, the more closely the total will approach the minimum.

This can be used by the GM to get an indication of the Interval to use, based on available time and task complexity – not a perfect solution, but workable: set a total estimated time, deduct a margin for Setbacks and Barriers based on the character’s skill relative to the scope of the task, and divide the result by the number of Specifications to get a rough indication. Round that off to something useful, because the raw result is likely to be anything but.

What’s more, the time interval itself can be part of the Specifications, where the player wants to accomplish in minutes something that should take hours, etc.

By looking at the character’s success margin and the number of Specifications, the GM can “tune” the Interval so the project fits the campaign’s pacing. If the hero has a month of downtime and the project has 10 Specs, an Interval of 2 days feels high-stakes, whereas an Interval of 1 day feels comfortable.
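That estimation heuristic can be sketched like this. The setback-margin fraction is a hypothetical stand-in for the GM’s judgment of skill versus scope, and the rounding rule is my own choice:

```python
# Sketch of the Interval-estimation heuristic described above.
# The setback margin (fraction of time expected to be lost to Setbacks
# and Barriers) is an assumed parameter: lower for highly skilled
# characters, higher when the task outstrips the character's skill.

def estimate_interval(total_days, num_specs, setback_margin):
    """Rough Interval: deduct a margin for Setbacks and Barriers,
    divide by the number of Specifications, round to something usable."""
    usable = total_days * (1 - setback_margin)
    raw = usable / num_specs
    # Round off to something useful; the raw result rarely is.
    return max(0.5, round(raw * 2) / 2)  # nearest half-day, min half a day

# A month of downtime, 10 Specifications, a skilled character (~20% margin):
print(estimate_interval(30, 10, 0.2))  # → 2.5 (days)
```

With 10 Specifications at that Interval, the minimum time eats most of the month, which matches the “high-stakes” feel described above; dropping the Interval to 1 day gives the comfortable version.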

There you have it!

There will be no post next week as I have to deal with medical issues and a property inspection mid-week, and have to prep for the latter despite the former. But I’ve already started work on a post for the following week!


Power Skills in Zenith-3 (and elsewhere)


A ‘Power Skill’ measures how adept a character is at pushing an ability beyond its normal limits. These are rules for handling them, from my Zenith-3 System, and adapting them to other game systems, permitting their application to D&D class abilities and Feats and so on. Useful in any genre with unusual abilities.

What Is A Power Skill?

A Power Skill is the skill of using a power or ability in ways beyond the straightforward applications given in the applicable rules. The more of their capability a character has invested in the power or ability, the more adept they are at using it in different ways. A character can start with just the one ‘trick’ and expand their repertoire as they gain in power.

The Zenith-3 Rules

The rules system used by my Superhero campaign (1982-) has gone through multiple iterations over the years.

  • v1: 1982: 18 pages of handwritten amendments to the official Champions Rules. One of the initial changes was to go to a d20-based set of mechanics instead of the Hero System’s original 3d6.
  • v2: 1982-1984: Expanded to 20-odd pages typed on a manual typewriter by the sister of one of the players. The first draft consisted of post-it notes attached to both a copy of the official rules and a photocopy of the v1 rules. Also incorporated the official Champions 2 and Champions 3 rulebooks.
  • v3: 1985-1986: Supplementary notes that built on the v2 rules but didn’t change much; most of the changes were to power and skill descriptions from the Hero System. In this period, a comprehensive game physics was written and provided to players for the first time, and there were a few skills and powers revised to accommodate it.
  • v4: 1987-1989: Added some new powers and skills and removed some old ones, generally aimed at ‘tightening up’ overly broad skills by splitting them into two smaller ones. More notable now for the general principles, which attempted to create a coherent skills structure for the first time using the concept of ‘dependencies’. Incorporation of these principles was never fully completed.
  • v5: 1989-2000: I now had a Commodore-64 and some word-processing software to go with it. That soon became a C-128. The rules were now printed by an actual computer printer, for which I had to write my own device driver, triggered by codes that I could embed in the documents. This was the first attempt to transition to a fully self-contained game system, based on Champions 3rd Edition. It ran to over 800 single-sided pages in 5 volumes, two columns (mostly), and was never finished – but enough was done that it was playable. Introduced the concept of Hybrids – subsystems that were partially one thing and partially another – initially restricted to things like Running. Those pages were completed in about a year-and-a-half, and remained the basis of the campaign for 11 years.
  • v6: 2000-2001: An aborted effort to compress, compact, and complete v5 while incorporating the accumulated errata and revisions from 11 years of play-testing. By now, I had a laser printer and a windows-98 PC. Some concepts were carried forward into the next edition wholesale, some were further compacted, and some were flagged for deletion because testing showed that they didn’t work as they should in practice.
  • v7.0: 2001-2002: A co-writer came on board by uttering the immortal words, “It will only take three or four weeks”. A huge amount of progress was made in this year-and-a-half. For the first time, it was segregated into individual files, one for each chapter and appendix. In fact, we were working on the appendices and only had one chapter of skill descriptions outstanding. Notably, the system foundation switched from d20 to d%.
  • v7.1 2002-2003: Revision of some powers & disadvantages concepts that didn’t quite work as hoped and correction of errata. Started on the skill descriptions. Most of the contents were completely unchanged from v7.0.
  • v7.2 2003-2005: Addition of some new powers, new disadvantages, and more skill descriptions, plus more inclusion of errata and corrections. A complete overhaul of the three frameworks – Magic, Psionics, and Martial Arts. Most of the content was completely unchanged from v7.1.
  • v7.3 2005-2006: More errata, more revisions, more clean-up and replacement of things that weren’t satisfactory. Most of the content was unchanged from v7.2 but almost 1/3 of the documents had been revised from v7.0 through accumulated revisions. Stayed stable for about 3 years.
  • v7.4: 2009-2012: More errata, more revisions (including a significant revision of the core concepts of the Magic framework, and discarding of the ‘Contacts’ system). But it was a bit more comprehensive than the revisions of previous iterations. There was a lot of effort to find and eliminate ‘infinite points generators’ from the rules – something we thought had been achieved with version 7.2.
  • v7.5 October 2010-2026: In October of 2010 it became apparent that it was possible to push the mechanics too far, and that the points costs for some things needed to be changed. I set about analyzing the situation, discovering that the same power could be under-priced, correctly priced, and drastically overpriced, all at the same time, depending on the combination of modifiers and prices. Ultimately, it was shown that this was a rounding error from a process simplification. Dozens of analysis graphs were prepared like the one below, and a complete top-to-bottom revision of the fundamental concepts began. This mostly consisted of removing that simplification, but after the deep-dive into prices that its discovery engendered, I also ended up discarding the notion that all things should have the same price at all power levels. Instead, progressive costs were introduced for things other than skills, especially stats. I also took the opportunity to incorporate some changes that the players had been asking for, like shifting the minimum score in skills to 0. The adoption of this iteration of the rules remains incomplete; for the most part, we’re still operating on a hybrid of v7.4, with some revisions, and some incorporation of v7.5. While unstable, this hybrid remains playable. But the change is so substantial, I’m contemplating renaming this v8.0.

First Question, having noticed the problem: How significant was it? Answer: Very. I had hoped that it would resolve into a single curve, which would make correction simple. It didn’t.
 

Second Question, how pervasive was it? Answer: Very. 90%-plus of prices were incorrect through enlargement of rounding errors.
 

Third Question, how significant was it at a practical scale? Was it being exaggerated in high-price purchases? Could it be ignored at lower levels? Answer: Not really – not by enough, anyway. The ‘chord’ where prices are correctly calculated is really obvious in this graph – as is the fact that it’s a very small percentage of the whole.

Fourth Question: Was continuous plotting the right presentation? Answer: No. Connecting each data point to the next was great at showing the overall shape, but implied data that wasn’t really there. Since there was an obvious set of patterns that didn’t fit a simple curve, switching to individual data points would be more useful in trying to understand the pattern.

Fifth Question: Could the problem be isolated / simplified by reducing the values of a single variable? Could a formula be derived that way? Was there a pattern? Answer – there was a pattern – but its structure varied with all five variables. In other words, it was systemic; the whole technique was flawed. I would never be able to look at ‘rounding errors’ quite the same way again!

The Concept of Power Skills and first efforts

The concept of power skills was introduced in v5 of the rules, but virtually nothing was done about implementing that concept until v7.1. The roots of the concept can actually be traced further back, to v2, and the idea of ‘pushing,’ which enabled characters to spend Endurance to boost the effectiveness of a stat or power, either buying more of it temporarily or overcoming a limitation placed on it by sheer effort.

Between this initial idea (which worked) and the formal mention of Power Skills in v5, I began to feel that it was too easy – that there should be some sort of skill in using the power, against which the character should have to roll in order to push that power beyond its normal limits.

From that seed, the idea grew that this same skill check would permit the character to use their basic ability in all sorts of tricky ways – bounced shots, some combat tactics, and so on. V7.0 had included, for the first time, rules structured in such a way that skills could also be pushed.

First efforts at implementing the idea went no further than the drawing board, because some fundamental issues remained unsolved: What should the skill cost and how fast should it improve? Should everyone have it, or should it be an ‘optional extra’ that a character had to buy? Or should they get some of it for free but have to pay for the rest? And how much should they pay? Should the expenditure reduce the END cost?

Debate went back-and-forth, with positions adopted, revised, and reversed. When it came time to actually bite the bullet and draft some rules, I deliberately chose a relatively simple solution, leaving some of these proposals to be listed as optional rules or options for future consideration after seeing how well the simple proposal worked in play.

Power Skills In Other Systems

My rules weren’t the only ones to adopt similar concepts. I don’t think that any of these played a role in shaping what my rules became, but they seem worth mentioning. Feats from D&D 3.x are a more likely influence.

    Burning Wheel (2002)

    “Ugly Truth” allows a character to manipulate social situations to an extreme degree, or use “Intimidation” to completely break a target. I heard about Burning Wheel when it came out but have never read the rules.

    GURPS Powers (GURPS 4e Supplement) (2005)

    “Power Parry” and “Power Stunts” allow for exceptional combat feats that transcend standard melee attacks.

    D&D 4e (2007)

    Similar to Daggerheart, “Powers” permit significant alterations to combat conditions and the like. I never played or read the rules for 4e; I was still transitioning to 3.5 at the time, and the negative press and edition wars made me reluctant to spend the money for a copy.

    Powered By The Apocalypse (2010)

    This is a game design framework that has been the foundation for hundreds of Indie game systems. “Moves” or “Actions” function similarly to power skills by focusing on dramatic, genre-specific, high-impact actions rather than mundane tasks. I had never heard of it until I started doing background research for this article.

    Forged In The Dark (2017)

    An SRD used as a foundation for other RPG systems. To date, there are over 300 systems based on these standard mechanics, which derive from the Blades In The Dark rules set (2015-2017).

    Daggerheart (2025)

    “Power Cards” or utility powers are used to give characters, particularly martial ones, more engaging and specialized options beyond just moving and attacking.

The V7.4 System

Power Skills are defined in the same way as other skills in the campaign. Excerpting them for this presentation has bypassed all the explanations that go with that standard approach, so I’ll have to explain them as I go along.

Such explanations will be boxed off, like this. I’ll try to keep them to a minimum.

Let’s start with some fundamentals:

Skills are bought with Skill Points, enabling the subdivision of a single character point into smaller (and hence more flexible) pieces. All prices given within the skills system are in skill points (SP) unless otherwise stated. These Skill Points are purchased with character points, and are used to both buy and improve skills.

How many skill points you get for a ten-character-point investment (purchases are usually made in blocks of 10) is determined from General Aptitude. If characters purchase less than a full block of 10 character points’ worth, they take a -1 on the Skill Point Conversion and round the fraction down.
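One possible reading of that conversion rule in code form. The SP-per-10-CP rate comes from the General Aptitude table, which is not reproduced here, so `rate` is a hypothetical parameter, and the handling of partial blocks is my interpretation of the “-1 and round down” rule:

```python
import math

def skill_points(character_points, rate):
    """Convert character points to skill points.

    rate: SP gained per full block of 10 CP, read from the General
          Aptitude table (not reproduced here) -- a hypothetical value.
    Partial blocks take -1 on the conversion and round down
    (my reading of the rule).
    """
    blocks, remainder = divmod(character_points, 10)
    sp = blocks * rate
    if remainder:
        # Partial block: convert at (rate - 1) and round the fraction down.
        sp += math.floor(remainder * (rate - 1) / 10)
    return sp

# 25 CP at a hypothetical rate of 12 SP per 10 CP:
print(skill_points(25, 12))  # → 29 (2 blocks x 12, plus floor(5 x 11 / 10))
```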

4.9.1 Ex-cathedra Commentary

The concept of Power Skills coalesced from four directions simultaneously. I was looking for a way to reward characters who invested a lot of character points in a given power, and at the same time I was looking for a way to encourage characters to think about using what powers they had in different ways. I was also looking for a mechanism that would describe how effectively characters had mastered what abilities they had bought. And, finally, there was the existing concept of “pushing” a power, which should not be automatic, but should require a skill roll of some kind. All of these contribute towards character consistency and focus, encouraging characters to become singular masters of a single related ability instead of buying everything under the sun – thereby leaving more scope for unique individuals within the rules.

The concept of each power having an associated skill which would permit the character to express the power in different ways first arose during initial work on the d%-based skills system, when considering better ways of representing a flying character’s ability to do barrel rolls, immelmans, etc, initially inspired by the techniques and rules provided for ad-hoc spell use. This would define the various flight maneuvers as tasks with an associated base difficulty, which could then be used as the basis for an appropriate skill check. The approach appealed because it would be equally applicable to stukas, B52s, hot air balloons, and flying characters – only the difficulty would change. The idea was expanded on further consideration, prompted by recollections by Graham MacDonald of some of the ways in which Force fields could be used – frictionless surfaces, simple shapes for grabbing things (a-la Green Lantern), ball bearings, etc, and by similar expansions of capabilities by other characters in the past, such as Ian Mackinder’s use of the “Earthquake Special” attack with Titan’s STR.

It was originally thought that most of the power skills would be reflections of existing skills – for example, EDM (Extra-Dimensional Movement, including Teleport) would be analogous to Warp Physics – and that the character would get a number of skill points towards the purchase of the appropriate skill free with the purchase of the appropriate power. This plan did not survive, however; in some cases there were too many appropriate skills, in others there were no appropriate skills, and in still others there were skills that seemed appropriate but which just didn’t work on closer examination.

And so, the current, less-defined, more flexible system was created, in which each power has its own unique Power Skill. Ordinary skills may be complementary to the Power Skill (see Skill Use below), and (when appropriate) the Power Skill may be complementary to a more traditional skill – thereby reflecting the benefits a character gets to his understanding of Warp Physics from his many EDM journeys. But each power has a skill all its own.

4.9.2 Basis

All power skills are based on a specific single Characteristic or Skill Roll, and are treated as Fundamental or Expert skills based on the Base Cost of the power per level.

Skills are d% based. Each stat converts to a characteristic roll to permit saves vs STR, for example. Some of these are referenced frequently in play, some are quite rare. The suffix “#” is used to distinguish characteristic score from saving roll, so CON refers to the stat and CON# to the roll. The stat rolls generate Aptitudes, which are ‘the potential for skills’ – there are 15 of those.

The Aptitude scores are then used to generate actual Fundamental skill values. This approach means that you can improve a stat without having to recalculate dozens of skills; all that changes is the cost of improving aptitudes, saving a few points off the cost of the stat improvement.

Skills are broken down into Fundamental and Expert skills. The two major differences are (1) The Fundamental Skills are a fixed list; and (2) Expert skills are based on Fundamental Skill scores. There are generally 3 Fundamental skills per Aptitude. There are also user-definable Advanced Expert Skills, but we don’t need to worry about those.

Skills range in value from -80 to +150. “Average” works out to be -12. A skill of 0 or better is enough for the owner to qualify for a job using the skill; a skill of 20 or more is having a professional qualification in the skill, or the equivalent. There are ten ‘ranks’ of characters, from pathetic normal (not called that) to mega-deity (not called that, either), with various grades of Paranormal occupying three of the middle grades. Each caps skills to a different maximum and adds or reduces the cost of skills; the scores quoted in this paragraph relate to Veteran Paranormals, one step below Demigods.

The choice is a matter for assessment by the referee and of the creativity of the character. In some cases, the Basis will be obvious; in others, it may require some thought by the character’s creator. EG: HKA (Hand-to-hand Killing Attacks) would normally be based on STR#, but a character could be built where it was based on AGIL# (the character doesn’t use brute strength, he uses precision), or INT# (the character identifies and targets vulnerable points on the target), or even WILL# (the character uses determination and puts English on his blows to create the additional damage).

The examples illustrate how the appropriate choice of Basis adds to the definition of a power – the above are 4 different interpretations of HKA. What was previously justification and explanation of a power now has a real impact on the description of the power and what can be done with it.

It is expected that characters will tend to play to their strengths – a character with a high stat will normally have powers that are derived from that stat in some way – but specious logic will be frowned upon. Just because a character has a high stat is not a good enough reason for power skills to be based on it.

When no compelling case is made for any other choice, it is presumed that the basis will be INT#, reflecting that the character’s ability to use the power in different ways is dependent on his ability to work out how to use it in those different ways.

Skills are also classified into subcategories – A-E for Fundamental Skills, F-J for Expert skills. These designate sub-tables within the system – lower is cheaper and easier to learn and generally gives more skill ‘bang’ for your buck; higher is more expensive, harder to learn, and gives a lower score.

4.9.3 Classification Code for Base Value

This is determined by the base cost of the power per level. Note that codes A-E indicate that the Power Skill is treated as a Fundamental Skill when necessary and F-J indicate that it’s considered an Expert Skill.

    <5     A
    5     B
    6-10     C
    11-15     F*
    16-20     D
    21-25     G
    26-30     E
    31-35     H
    36-40     I
    41+     J

         * Use the F column under “Expert Skills”.

EG: Telekinesis has a base cost of 15 points. It would use the “F” column of the Expert Skills table to determine the Base Value of the Power Skill. A character with a Basis of 15 would therefore have a Base Power Skill of 5%.
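The classification table reduces to a simple lookup. A minimal sketch, with the band boundaries taken directly from the table above (the function name is mine):

```python
def classification_code(base_cost_per_level):
    """Map a power's base cost per level to its Classification Code.

    Codes A-E are treated as Fundamental Skills, F-J as Expert Skills
    (F uses the F column under "Expert Skills").
    """
    bands = [
        (4, "A"),   # <5
        (5, "B"),
        (10, "C"),  # 6-10
        (15, "F"),  # 11-15
        (20, "D"),  # 16-20
        (25, "G"),  # 21-25
        (30, "E"),  # 26-30
        (35, "H"),  # 31-35
        (40, "I"),  # 36-40
    ]
    for upper, code in bands:
        if base_cost_per_level <= upper:
            break
    else:
        code = "J"  # 41+
    kind = "Fundamental" if code in "ABCDE" else "Expert"
    return code, kind

# Telekinesis: base cost 15 per level
print(classification_code(15))  # → ('F', 'Expert')
```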

4.9.4 Base Cost

This is always 0 for a Power Skill.

4.9.5 Free Improvement

Each power has a “difficulty of learning” value given in the table below, based on how flexible the power is and how difficult it is to adapt the basic power to some exotic usage. The character gets 1 skill point worth of improvement to the base value for every 2 character points in the power’s Net cost.

The costing of powers applies modifiers in two stages, where the Hero System uses one. “Plus” and “minus” modifiers get applied to the total base cost first, yielding the Active Points Cost, which is used to determine the END cost of using the power. The formula is

     Active Cost = Total Base Cost x (1 + total “plus modifiers”) / (1 – total “minus modifiers”).

The Active Cost is then adjusted by “times modifiers” and “slash modifiers” to get the Net (or Actual) Cost:

     Net Cost = Active Cost x (1 + total “times modifiers”) / (1 + total “slash modifiers”).

These are normally summarized with math symbols – “+1, -2, x3, /4”. Most modifiers contain only a single type of modifier value, but there are a few rare ones with both an Active and a Net contribution.
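The two-stage calculation can be sketched as follows. I’m assuming modifier totals are supplied as non-negative magnitudes (as the “+1, -2, x3, /4” summary suggests) and that the times/slash modifiers apply to the Active Cost to yield the Net Cost; the function names are mine:

```python
# Sketch of the two-stage power-cost calculation described above.
# Modifier totals are assumed to be non-negative magnitudes; rounding
# rules are not specified here.

def active_cost(base, plus, minus):
    """Active Points Cost, used to determine the END cost of the power."""
    return base * (1 + plus) / (1 - minus)

def net_cost(active, times, slash):
    """Net (Actual) Cost -- what the character actually pays."""
    return active * (1 + times) / (1 + slash)

# A 20-point base power with +1 in plus modifiers and no minus modifiers,
# then x1 in times modifiers and /1 in slash modifiers (illustrative numbers):
ac = active_cost(20, plus=1, minus=0)
print(ac)                              # → 40.0
print(net_cost(ac, times=1, slash=1))  # → 40.0
```

Note how the two stages can cancel: a power can be expensive in END (high Active Cost) yet cheap in points (low Net Cost), or vice versa.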

I’m not going to quote the whole list, just the first dozen or so entries.

    Ablative Armour     b
    Aid     d
    Armour     b
    Change Environment     g
    Characteristics     –
    Clairsentience     d
    Cosmic Awareness     f
    Damage Reduction     d
    Damage Resistance     c
    Danger Sense     d
    Darkness     b
    Density Increase     b
    Desolidification     e

EG: A character has bought 80 points worth of Telekinesis. Assuming the character had a Basis of 15, giving a Base Power Skill of 5%, that would give them 40 skill points worth of improvement in the Telekinesis Power Skill. Consulting the table above, TK (Telekinesis) has a code of “d”, so finding 40 points worth of improvement in the “d” column in section 4.7.1 gives +40, for a total skill of 45%.

Click the link to download

I’d love to include the actual tables here that the above example is using, but Hero Games imposes a four-page limit to the presentation of House Rules. I’m skirting close to the limits already, even assuming these explanatory interjections don’t count.

What I can do, I think, is to provide those tables in a PDF.

4.9.6 Maximum Improvement

Power Skills cannot be improved by more than +75%, as shown on the improvement table.

4.9.7 Costed Improvement

Characters can improve Power Skills using skill points like any other skill. The “free improvement” amount does not count as improvement for the purposes of determining the cost of such purchased skill, but DOES count against the +75% improvement limit.

EG: Our character with the 80 points of TK and 45% TK Skill wants to buy an extra +35% for his power skill. Looking up the table in 4.7.1 shows that this costs 35 skill points. Since 40* + 35 = 75, this is the maximum that the character can buy in improved Power Skill.

     * From the earlier part of the example.

    Purchase Restriction

    You cannot spend more additional skill points on a power skill than you get free.
    EG: In the case of our Telekinetic, he can’t buy more than 40 skill points worth, or +40. Since the purchase he has made, +35, is less than this, the purchase is fine.
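The free-improvement rate, the +75% cap, and the purchase restriction above combine into a small validation sketch. The 1 SP = +1% equivalence is taken from the worked examples (the real conversion uses the 4.7.1 table, not reproduced here), and the function names are mine:

```python
def free_improvement(net_power_cost):
    """1 skill point of improvement per 2 character points of Net cost."""
    return net_power_cost // 2

def validate_purchase(net_power_cost, purchased):
    """Return the total Power Skill improvement, enforcing both limits.

    Assumes 1 SP of improvement = +1%, as in the worked examples;
    the real conversion uses the (elided) 4.7.1 table.
    """
    free = free_improvement(net_power_cost)
    if purchased > free:
        raise ValueError("Cannot buy more improvement than you get free")
    if free + purchased > 75:
        raise ValueError("Total improvement cannot exceed +75%")
    return free + purchased

# The 80-point Telekinetic buying +35 on top of his free +40:
print(validate_purchase(80, 35))  # → 75, the maximum improvement
```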

    Lost Points

    The downside of buying additional improvement to the Power Skill is that the points are “locked in” once play begins. That means that if the character buys additional power, raising his free improvement, any points expended in a purchased improvement are lost. Purchasing additional power skill should be perceived as a bootstrap to give the character a desired level of flexibility before the character has sufficient points to invest in the power level actually desired.

4.9.8 Reduction Of Power Skill Scores

It’s unusual to do so but characters can reduce their power skill scores in the same way that they buy improvements. However, any such reduction is considered a permanent reduction in the base power skill, so even a later improvement in the power skill, or the purchase of additional power, leaves the character with a lower net Power Skill, and the maximum improvement in the power skill becomes +75 from the reduced value. This is useful for simulating powers that the character wants to have under only marginal control.

Whenever the character chooses to reduce power skill scores, they should also suggest (in writing for future reference) a story arc that permits the character to “buy back” the reduction. The referee can schedule or rewrite this plot arc as he deems desirable, but cannot force the character to pay off the reduction.

In other words, the referee can’t run the scenario until the player wants him to, but he can wait as long as he wants to thereafter. These restrictions are designed to prevent characters taking advantage of the rules to buy extra ability and then paying off the limitation before it has a chance to bite the character.

Now we get to the meat of the rules, the parts that will matter the most to readers.

4.9.9 Default Use Of Powers

Each power description should include a default use of the power. This must be as basic as possible (while retaining the special effects that flavor the power and the appropriate consequences of any advantages and limitations) and is the effect that takes place (if any) when the character fails a power skill roll. Some powers have these largely predefined.

EG: Our continuing TK example: “Default: Push the target aside with full STR” (normal attack).

4.9.10 Routine Use Of Powers

Anything that is a default or obviously-straightforward use of the power will usually be declared a “Routine” use of the power. This is anything at the “aim and fire” or “just pull the trigger” level. The referee will usually not require a roll for such uses of the power unless the character’s power skill is extremely low (score less than 0) before the difficulty modifier.

4.9.11 Congruent Powers

It is possible for a character to have two or more variations on the same power. When this happens, they have the choice of declaring the variations as “Congruent Powers” or treating them as separate.

When powers are Congruent, the “free points” are determined by adding 1/2 the net value of the most expensive power, 1/3 of the net value of the next most expensive, then 1/4, 1/5, and so on. This is a compromise between assuming that the additional variations contribute fully, with NO expertise overlap between the two powers, and assuming 100% overlap (which could penalize characters for focused concepts). Both powers use the same Power Skill roll. The higher classification code (the one closest to “A”) is used for determining both Base Skill Values and improvement costs.

“Force Field” and “Force Wall” are considered eligible for treatment as Congruent Powers.
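The stacking rule can be expressed in a few lines of code. A minimal sketch (the function name is mine, and the round-in-the-character’s-favor behavior is taken from the worked framework example in 4.9.16); it only applies when two or more variations have been declared Congruent:

```python
from math import ceil

def congruent_free_points(power_values: list[int]) -> int:
    """Free points for a set of Congruent Powers: 1/2 the net value of the
    most expensive variation, 1/3 of the next, 1/4 of the next, and so on,
    rounding each share in the character's favor."""
    values = sorted(power_values, reverse=True)
    return sum(ceil(value / (rank + 2)) for rank, value in enumerate(values))
```

For example, a 100-point power congruent with a 46-point variation gives ceil(100/2) + ceil(46/3) = 50 + 16 = 66 points worth of free skill.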

4.9.12 Elemental Controls

Skipped as irrelevant to most readers.

4.9.13 Multipowers

Skipped as irrelevant to most readers.

4.9.14 Spellcasters, Psis, and Martial Artists

Skipped as irrelevant to most readers.

4.9.15 Gadgets

Gadgets, by definition, are ad-hoc constructs, ie the character has minimal skill in using them for any given purpose. These are always treated as per the basic system, but there’s no point in listing the relevant skill because the device is here today and gone tomorrow. That’s why it’s always better to give the gadget to someone else who has skills in the relevant area than to use the gadgets yourself. HOWEVER, in terms of CONTROLLING the gadget, the referee can choose to permit the use of a “Control – Gadgets” skill based on the cost of the gadget pool in character points.

This is in contrast to FOCI which are gadgets bought more-or-less permanently using character points, in which the character gets a skill based on the net cost of the Focus or 1/4 the ACTIVE points, whichever is higher.

An exception to these rules are vehicles, which are controlled using the appropriate driving / piloting skill, regardless of the vehicle’s cost to the character.

4.9.16 Congruences With Framework Elements

It is obviously possible for a character to have a “normal” ability and an element within a framework that are congruent, eg a character could have an EB (Energy Blast) and a separate EB that is in a multipower or elemental control or whatever. Where this is the case, use the appropriate “modified” value for the power in the framework as a congruent value to stack with the power outside the framework.

EG: a character has a 100 point EB and a spell that cost 6 character points, on which he has listed a 20-extra-Mana cost. The spell is therefore worth a net “value” of 46 points for the purposes of determining skill level in the power; but this is treated as congruent to the 100 point EB. The character’s “free” skill with EBs is 1/2 of 100, or 50 points worth, plus 1/3 of 46, which equals 16 points (rounding in the character’s favor), for a grand total of 66 points worth of “free” skill.

Author’s Note: This is a complicated situation that I hope never arises in real life but feel that sooner or later it’s sure to come up….

Planned Expansions in v7.5

4.9.17 Specialties

Characters will be able to buy a specialty in a specific use of a Power Skill, for example “Trick Shot”. This costs the standard amount in Skill Points for such a specialty as though the skill were the same as any other, and gives +30% to the use of the power in that way.

4.9.18 Expert Versions

For powers rated A-E, once the maximum improvement has been achieved, characters can choose to purchase an Expert Version of the power skill with GM approval. Such approval will only be given if the character has made extensive use of the Power Skill in play. The expert skill has a base level of the skill achieved in the Fundamental Skill version and permits improvement of the skill by another +75%.

The process is as follows:

1. The code transitions to the code 5 higher – A to F, B to G, C to H, D to I, E to J.

2. Consult the purchase price table looking in the appropriate column for a base skill of 1/2 the indicated level.

3. The indicated price is the cost of +0% in the “Expert Version.”

4. The same code is cross-referenced with the desired improvement in the Expert Version to get the price of the improvement.

5. Improvement purchased is added to the existing skill score.

Purchasing the Expert Version reduces the level of difficulty for maneuvers by one step – ‘Routine’ becomes ‘Easy’, ‘Difficult’ becomes ‘Routine’, etc., in addition to permitting further increases in skill level. This makes extremely difficult or complex maneuvers more achievable and less complicated ones more reliable.

Task Difficulty

There’s no section number on this section because I cut out a whole truckload of stuff not relevant to power skills – and what’s left starts off in mid-section.

Task Difficulty is the GM’s response to the question, “How hard should this proposed action be, under ideal conditions”. It sets a baseline modifier. That then gets adjusted progressively to take into account the differences between “ideal conditions” and the actual circumstances in the field.

    Task Difficulty Table

         Trivial task +100
         Routine task +50
         Easy task +25
         Moderately Difficult task +0
         Difficult task -10
         Very Difficult task -30
         Extremely Difficult task -50
         Almost Impossible task -75
         Absurd to even try -100
         “Permission Denied” -120

    Environmental Circumstances

    Environmental circumstances are usually rated on a +50 to -50 scale, but extreme cases may call for plus-or-minus more than that. A positive modifier indicates a more ideal environment, a negative modifier indicates a handicap. It is generally easier to rate the suitability of the environment from 0-10, multiply the result by 10, and subtract 50, but the technique employed is left to the referee’s best judgment.

    Action Modifiers

    Third, the referee should assess anything else the character is, or has been, doing that might improve or lessen the chances of success. This includes any modifiers from combat maneuvers. These are generally rated on a +25 to -25 scale each, and in general there will be no more than 1 or 2 of them. He should also assess and include anything else about the character making the check that is relevant, which includes Aiming (refer Chapter 12, Combat), Complementary Skills (see below), Specialties, etc.

    Target Modifiers

    Fourth, the referee should assess anything the target is, or has been, doing, that might improve or lessen the chances of success. This includes any modifiers from combat maneuvers being performed by the target, movement, etc. These are generally rated on a +25 to -25 scale each, and in general there will be no more than 1 or 2 of them. He should also assess and include anything else about the target that is relevant, for example the size of the target relative to the range +([size/range-1] x 25), in inches (one inch = 2m).

    Range Modifiers

    Fifth, the referee should apply the appropriate range modifier. This is normally the standard range modifier given in chapter 12, Combat, but this can be modified by advantages and limitations on powers, etc.

    Anything Else

    Finally, the referee should apply anything else that’s applicable. There generally won’t be anything, but it’s worth a moment’s thought to double-check.

    The Total

    The total should then be determined (assuming the referee hasn’t been working that out as he went along) and, if necessary, adjusted to fit within the absolute limits of ±150 modifier. The referee need not announce the exact modifier, simply the closest “category” to the total – taken from the same Task Difficulty scale given above.

    EG: If the modifiers total +35, the referee need only announce that it’s a “Fairly Easy” task (Easy = +25, Routine =+50).
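The totaling step is simple enough to sketch in code: sum the modifiers, clamp to the absolute ±150 limit, and announce only the nearest named category rather than the exact number. The short category names and function name are mine:

```python
# The Task Difficulty categories from the table above, as (name, value) pairs.
DIFFICULTY_SCALE = [
    ("Trivial", 100), ("Routine", 50), ("Easy", 25),
    ("Moderately Difficult", 0), ("Difficult", -10), ("Very Difficult", -30),
    ("Extremely Difficult", -50), ("Almost Impossible", -75),
    ("Absurd to even try", -100), ("Permission Denied", -120),
]

def announce(modifiers: list[int]) -> tuple[str, int]:
    """Sum the modifiers, clamp to +/-150, and return the nearest category."""
    total = max(-150, min(150, sum(modifiers)))
    name = min(DIFFICULTY_SCALE, key=lambda entry: abs(entry[1] - total))[0]
    return name, total
```

A total of +35 sits closer to Easy (+25) than to Routine (+50), so “Easy” is what gets announced.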

4.14.5 Power Skill considerations when designing Powers

In the old days, what mattered was getting your power for the fewest possible points. The less you spent on something, the more you could spend elsewhere.

Well, the old days are gone. The Power Skill system rewards focused characters with flexibility and ability.

In the past, it was enough to decide how much of something you wanted to have – an RKA that was so big, Flight that was this fast, and so on. Then you tried to afford all these abilities. With the advent of the power skill, it is now just as important, or even more so, to decide how much you want something to cost. There is a benefit to NOT reducing the END cost, and it has to be weighed and compared with the benefits of doing so.

When designing powers, the definitive question is now “What do you want to be able to do with it?”. Power Levels and Power Skill levels are both necessarily defined by the answer. Are there any defined standard maneuvers that you want to be able to achieve? The difficulty, and resulting chance of success, may well be the defining issue for the power.

In Practical Terms

1. Powers have a base rating according to how much a minimum level costs. You look up that rating on a table.

2. Next, you decide the skill basis of the power. This is usually a stat roll, but can be a skill roll if a sufficiently convincing case is presented.

For example, St Barbara’s Flight is based on her Acrobatics skill rather than her Agility, because she literally uses aerial acrobatics for sharper changes of direction. The downside is that she has to shut her power off for a round to do so. For most characters this would be a massive restriction but because she is literally an Olympic Gymnast with skills to match, St Barbara gains massively in the accuracy of her flight maneuvers and the likelihood of success in them compared to most characters. This also feeds verisimilitude – this “stop, reorient, re-start” approach is probably closer to how a character with that background would fly.

3. Cross-referencing the score in the Basis with the classification code gives a base score in the skill.

4. Next, you find the power itself on a list for its improvement code, and cross-reference the net cost of the power with that improvement code to get the amount of free improvement in that base score that has resulted from spending more than the minimum on that power.

5. The maximum additional improvement in the power skill is either determined by subtracting this ‘free improvement’ from 75%, or by doubling that free improvement, whichever is LOWER.

6. Another table using the same codes can either yield the cost of that much improvement or the amount of improvement for a given cost, whichever is more useful.
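The six steps above can be sketched as a skeleton in code. This is only a sketch: the real base-value and improvement tables live in the downloadable PDF, so they are passed in as placeholder callables here, and all the names are mine.

```python
def power_skill(basis_score, power_cost, code, base_table, improvement_table):
    """Skeleton of the six-step procedure, with the lookup tables injected."""
    base = base_table(basis_score, code)          # step 3: base skill score
    free = improvement_table(power_cost, code)    # step 4: free improvement
    # step 5, as worded; 4.9.7's purchase restriction may cap this further
    max_extra = min(75 - free, 2 * free)
    return base + free, max_extra

# Stub tables reproducing the running Telekinesis example: basis 15 gives a
# 5% base skill, and 80 points of code-"d" power gives +40 free improvement.
skill, max_extra = power_skill(
    basis_score=15, power_cost=80, code="d",
    base_table=lambda score, code: 5,
    improvement_table=lambda cost, code: 40,
)
```

With those stubs, the result is a 45% skill with at most +35 purchasable, matching the earlier worked example.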

Adaptation To Other Systems

Until the shift to the d% based system, skills were rated on a d20 scale. To facilitate characters being adapted from the old system, or from standard Champions / GURPS, this section listed approximate equivalents.

What’s interesting is that this is a two-way street, providing an opportunity to adapt the mechanics provided for D&D or whatever. Even if you only use this system when your existing mechanics don’t cover whatever a PC is trying to do, it can be useful to have this in your back pocket.

    As a guide, the following are a list of approximate conversions from the d20 scale to the new d% scale:

      1 = -80
      2 = -71
      3 = -64
      4 = -57
      5 = -48
      6 = -41
      7 = -34
      8 = -25
      9 = -18
      10 = -11
      11 = -2
      12 = 4
      13 = 12
      14 = 20
      15 = 27
      16 = 35
      17 = 43
      18 = 50
      19 = 58
      20 = 66
      21 = 73
      22 = 81
      23 = 89
      24 = 96
      25 = 104
      26 = 112
      27 = 119
      28 = 127
      29 = 135
      30 = 142
      31 = 150

    The same conversion scale can be used for the Official 3d6 Champions System. However, it should be noted that it is now much harder to get higher scores; it is also recommended that, before conversion, the old score be reduced by 2 to give a more realistic target.

    EG: A character used to have Acrobatics 18/-. In the new d% system, that’s equivalent to an Acrobatics skill of 50%, but a more realistic figure to aim for in character conversion comes from converting 18-2=16/-, ie 35%.
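The published table is easy to use as a straight lookup, with the suggested “subtract 2 first” adjustment wrapped around it for 3d6-based scores. A minimal sketch (names are mine):

```python
# The d20-to-d% conversion table above, as a dictionary.
D20_TO_PERCENT = {
    1: -80, 2: -71, 3: -64, 4: -57, 5: -48, 6: -41, 7: -34, 8: -25,
    9: -18, 10: -11, 11: -2, 12: 4, 13: 12, 14: 20, 15: 27, 16: 35,
    17: 43, 18: 50, 19: 58, 20: 66, 21: 73, 22: 81, 23: 89, 24: 96,
    25: 104, 26: 112, 27: 119, 28: 127, 29: 135, 30: 142, 31: 150,
}

def convert_3d6_score(old_score: int) -> int:
    """Convert an official 3d6 Champions score: reduce by 2, then look up."""
    return D20_TO_PERCENT[old_score - 2]
```

This reproduces the EG: Acrobatics 18/- converts directly to 50%, but the more realistic 3d6 conversion, 18 - 2 = 16, gives 35%.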

But, if you want to go further, or your game mechanics are neither d20 nor 3d6-based (Traveler, I’m looking at you), there are two essential translations that you will have to make; everything else will flow from those.

Skills Extremes

If you look at the tables provided in the attachment, you will find that the highest base skill value is 100, and there’s a maximum improvement of +75 to that. And a specialty can get you another +30, if it’s relevant. So the highest possible skill to have is 205.

What is the highest skill possible in the game mechanics you are using? If it’s open-ended, use 3x, 4x, or 5x the highest roll result. So, for Traveler, that’s 2d6 -> 12; x3 = 36, x4 = 48, x5 = 60. One of those three numbers will be the maximum.

The absolute minimum that you can usually have in a skill would appear, at first glance, to be zero – but in the realm of roll conversions, that’s misleading. Remember the -2 suggested for 3d6 conversions? That’s because 3d6 have a minimum roll of 3, and both d20 and d% have a minimum roll of 1. You have to ‘pin’ the adjustment to the same foundation.

Let’s pick a hypothetical 4d6 system and derive a conversion to a basic d%. This consists of a mathematical formula of the form, d% = md + (4d6r – m4d) x (Md – md) / (M4d – m4d).

Looks complicated. But it gets a lot simpler once you realize that in defining the system basis, you’ve defined everything in that formula as a constant, so that you end up with something that reads, d% = ## + (4d6result – ##) x ##.

md = minimum roll on d%, Md = maximum roll on d%, m4d = minimum roll on 4d6, M4d = maximum roll on 4d6, 4d6r = the actual 4d6 roll. What we’re trying to match is the range of variability, pinned to the same minimum.

So: d% = md + (4d6r – m4d) x (Md – md) / (M4d – m4d)
= 1 + (4d6r – 4) x (100 – 1) / (24 – 4)
= 1 + (4d6r – 4) x 99/20
= 1 + (4d6r – 4) x 4.95.

It’s almost certainly close enough that you could use 1 + (4d6r – 4) x 5.
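The constants work out cleanly enough that the whole conversion can be wrapped in a tiny helper. A minimal sketch (the name is mine, not from the rules); note that the minimum source roll is subtracted before scaling, so that the two minima and the two maxima line up:

```python
def map_roll(roll, src_min, src_max, dst_min=1, dst_max=100):
    """Map a roll from one dice system's range onto another's."""
    scale = (dst_max - dst_min) / (src_max - src_min)
    return dst_min + (roll - src_min) * scale
```

For the hypothetical 4d6-to-d% case, the scale factor works out to 99/20 = 4.95, and the extremes behave: a roll of 4 maps to 1, and a roll of 24 maps to 100.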

In our case, the skill scale has a range of -80 to +150, and the conversion target is whatever range your system of choice uses.

But if I were working up such a conversion for my own use, I would actually break the results into two bands, one below the human average and one above, simply because the Zenith 7.4 rules bias that value low, to leave a little more room for skilled individuals.

Basis Decision

Once you have the ranges, the next decision to be made is what you’re going to base the ‘free base competence’ on – in other words, how much skill are you going to give the characters for free, and how are you going to measure the answer?

Both the Zenith 7.4 rules and the Hero System from which they derive are point-buy systems. Everything is under the control of the player. That’s not the case with other systems, where stats may be rolled, not chosen.

I urge GMs to get creative. For example, let’s pick a class ability from D&D – just about any version will do. It first becomes available at, say, level 8, and is based on a character’s STR score.

As a basis for a skill in “using that ability,” I would look at 8 + STR score. I might add multipliers to change the relative importance of each contribution, and to bring the STR scores closer to the range of the aptitudes used in this system to set skill levels; that’s up to you.

Improvement Cost & Quantity

The third design parameter also deals in the points-buy question – how much does it cost to improve the skill, and how much improvement should you get for that expenditure?

Difficult Decisions

And finally, you have all those decisions that were so hotly debated about the philosophical underpinnings of the system. Really, these boil down to one simple-to-state question: Does everyone with the relevant ability get some or all of the Skill that goes with it, or is this something extra that they have to buy or obtain somehow?

Again talking about D&D for a moment, I can envisage a whole range of magic items – tokens or badges or rings – that do nothing but unlock “Prowess” in a particular class ability.

I had to focus in on this system because the adventure that I’m working on will give one or two of the PCs some difficult challenges in using specific powers – and neither of them have the Power Skill scores for those powers written down. So I will have to calculate them.

And there wasn’t enough time for me to do that AND write a post for Campaign Mastery. This, on the other hand, was 70% copy-and-paste. Nevertheless, the more I worked on it, the more I realized the value of the premise of the article – this IS something that should be more widely available for GMs to consider. It IS valuable as a concept and as a technique outside of the Zenith 7.4 rules. And therefore, this is NOT a filler post – which is what I thought it might be, when I started it.

What’s The Real Value? A ‘Trade In Fantasy’ Extra


A simplified mechanism for the simulation of trade in an RPG where it is not to be the focal point.

Image by Roy from Pixabay

Background

A confluence of thoughts from different sources came together the other day relating to how we assess profit from selling something. I’m not sure it was strong enough to count as a revelation, but it’s an insight at the very least, a way of looking at objects and trade goods that helps encapsulate an entire economy.

It’s completely irrelevant to the currently-in-progress chapter of the Trade In Fantasy series, but completely relevant to the broader subject, so it will eventually get given a place in the total text – I’m just not sure where it should go at this point. Because it’s a fairly fundamental conceptual tool, it will probably end up being tacked on to the end of one of the chapters already published, or inserted somewhere into the middle of it.

For today, though, it’s a standalone subject for later integration into the main text.

The fundamental concept of Trade

The whole basis of Trade as a concept is the notion that some commodity or item is worth more over there than it is here, and the difference is more than the cost of transporting the Goods over the intervening distance. A merchant therefore buys it here, moves it there, and sells it, becoming wealthier at the end of the process than they were at the start of it.

To avoid bogging down in nomenclature, let’s just call it a ‘thing’.

Processed ‘Things’

There is often an intermediate step in which a character with appropriate expertise takes the commodity, adds work to it, and transforms it from one ‘thing’ into another. It’s usually simpler to disconnect the supply chain into separate transactions, but that’s not always the case.

So,
1. Person #1 makes or extracts Thing A at Location 1.
2. Person #2 buys Thing A from Person #1.
3. Person #2 transports it to location 2.
4. Person #2 sells Thing A to Person #3.
5. Person #3 transforms Thing A into Thing B by adding Work to it.
6. Person #4 buys Thing B from Person #3.
7. Person #4 transports Thing B to location 3.
8. Person #4 sells Thing B to Person #5.
9. Person #5 either resells Thing B to Person #6, or adds more Work to it to create Thing C.
10. If Thing C was created, the process loops back to step 6 with new People added to the supply chain.

Each of these steps is as simple as it’s possible to make it, but to make it even clearer, let’s look at an example.

1. Person #1 digs up some iron ore.
2. Person #2 buys the iron ore from Person #1.
3. Person #2 transports it to a smelter.
4. Person #2 sells the ore to the owner of the smelter, or pays them to add work to it.
5. Person #3 transforms the ore into iron, probably in the form of rods or ingots.
6. Person #4 buys the ingots from the smelter (or Person #2 reclaims his property, becoming Person #4 in the process).
7. Person #4 transports the iron to location 3.
8. Person #4 sells the iron to Person #5.
9. Person #5 adds more Work to it to create a steel sword.
10. Person #5 sells the sword, either direct to the public, or by completing the commission to create a sword, or to a retailer (Person #6), or to another intermediary (Person #7).
11. Person #6 (if any) sells the sword to the public, or joins it with others to fulfill a supply contract. It will almost certainly have to be moved, a service Person #7 is hired to provide.
12. Person #7 moves the sword (and other trade goods) from the place it was made to a place where there is higher demand for such.
13. Person #7 sells the sword if they own it, or delivers it. The purchaser, Person #8, either sells it to the public, uses it to fill a commission or contract, or keeps it as a personal possession.

This example breaks down a little in steps 10 and 13 because swords are typically sold by the blacksmith and not to a retailer, but it’s good enough. Many steps may be added – decorations, and scabbards and hilts – before the final product is achieved. For a presentation sword, the sort of item one Noble might gift to another, I could easily double the length of the list.

Value isn’t what you think, perhaps

At each stage of the process, the Thing being traded has three ways of being valued, and they are all valid in some respect.

There’s how much it has cost so far.
There’s how much the current owner can sell it for.
And, there’s how much the ultimate end-product can be sold for.

At each stage of the process, the current owner sells the product after increasing its value, either by adding Value of Location or by adding work. They incur costs in the process, which diminish the profits, so they want those profits to not only cover those costs, but pay them enough to live on until the next sale.

The third value helps increase the second, helping achieve this goal.

So, at any given point in the process, how much is the Thing, in its current form, actually worth?

To someone who has already sold it, it’s worth exactly what was paid for it.

To someone who currently owns it, its COST is what they paid for it plus the Cost of whatever they are doing to it to increase its value. It’s either worth the total of those two costs, or it’s worth what they can sell it for at the end of that process.

If it were taken from them, the Cost Sum is how much they are actually out of pocket. But the effect on their prosperity is the higher, second, value.

Profits

A lot of people think that a business adds up its costs, including what it paid for the product that it is selling, and adds a % profit margin to the total to get the price that it charges.

That’s not how it works.

For any given product, there’s a price that customers are willing to pay, and that’s what drives the retail price.

Even that’s an oversimplification in two important ways. First, there is a correlation between sale price and sale volume. Drop the price, and you sell disproportionately more of a product. If you chart the product of those two numbers against price (assuming all costs are fixed), you find a bell-shaped profit curve, with a peak at the point of maximum overall profit.
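The price/volume trade-off is easy to see with a toy model. All the numbers below are invented for illustration, and the demand curve is assumed to be linear; the point is only that margin-times-volume peaks at an intermediate price:

```python
def volume(price, base=1000, slope=8):
    """Hypothetical linear demand curve: volume falls as price rises."""
    return max(0, base - slope * price)

def profit(price, unit_cost=20):
    """Profit = margin per unit x volume sold at that price."""
    return (price - unit_cost) * volume(price)

# Scan candidate prices for the most profitable one.
best_price = max(range(20, 126), key=profit)
```

With these invented numbers, profit peaks at a price in the low 70s; both a noticeably higher and a noticeably lower price earn less, despite the lower price selling more units.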

But all costs aren’t fixed, some of them are proportionate to shelf time, and there are other factors that impact sales volume – products stored at eye height outsell those stored somewhat higher, which in turn outsell those stored lower. The higher the sales volume, the shorter the shelf time – so lowering your price a little below that predicted peak volume can actually reduce costs and boost profits. And, if you’re already selling a large volume of a commodity, there’s a temptation to place it at eye height – but that can be a mistake; you’re already selling more relative to a market’s capacity to buy, so there might not be enough room for growth in sales for the better placement to bring maximum benefit; you may be better served putting the popular product just below eye level and using that optimum shelf space for a product with greater capacity for sales volume.

Second, because the correlation between sale price and sale volume doesn’t even mention cost directly, but cost is a critical constraint on profitability, it can be worthwhile selling one commodity at a lower price even than the ideal in terms of profitability and pricing a more premium product on the high side. That’s a modern perspective, driven by studies in the economics of supermarkets, but the principle can apply to farmers markets of a more medieval nature as well.

And you can confuse matters even further with sales and discounts. These are often kept simple for the understanding of the buying public, but “ten cents off a dozen plums if you also buy a melon” can often be a more lucrative approach.

And then you have to factor in quality, both real and perceived. Actual quality pushes up both costs and the price people are willing to pay, but not as a simple addition that would be easily mapped onto sales charts – there’s a complicated relationship between quality and desire to purchase at a given price (it’s not a simple proportionate impact, either).

Perceived quality – a component of reputation in an industrialized setting, but largely independent of it in a more medieval society, where brand identities were subordinate to personal identification – pushes both volume of sales and price tolerance upward, at minimal increase in cost.

I once read somewhere that for every dollar spent improving a product, you should spend $10 telling people about it, but I think that’s more an aphorism of perceived wisdom in the 1970s and 80s than it is a useful guideline – word of mouth is still a thing, and some companies are adept at various forms of free media. I do think the general principle would still hold true in pseudo-medieval times, but the ratio is likely to be 1:1 or less.

Even today, I think the principle is correct but the ratio is not to be relied on save at a global level of development budgets vs marketing budgets, and not at a product-by-product level – and even then, 10:1 seems extreme. Between 2:1 and 5:1 seems far more persuasive to me as a realistic set of numbers. But such marketing aphorisms often exaggerate to get the point across (like everything else in marketing).

Ultimately, there are so many interlocking variables that an informed best-guess is probably the best that you can do in terms of setting an initial price point, and actual measurements of revenue vs price carried out over a period of time used to tweak prices toward the optimum.

Modern production methods also make for more consistent price levels; there would have been a lot more variability in market prices in a pre-industrial era, and seasonal factors probably outweighed everything else, also affecting factors like perceived quality.

Don’t get that last point? When a product is in season, quality perception sits at a different bar to when things are late-season, early season, or off-season. Something that you wouldn’t give a second glance to at season peak might be seen as very high quality in the off-season – so quality expectations are a relative thing.

One final point before I move on: What about our Iron example? Quality there won’t change as a function of the season, there’s no such thing as an “iron mining season”. But I would contend that seasonality is just as important in this product space as in any other – the season might impact mining costs, it might impact how hard workers can or will labor and so impact yields, it will impact transport difficulties and costs, and so on. As a result, even if no-one thinks of it in those terms, there would in fact be an “Iron Ore Season” in every practical sense.

Costs

Before you can properly evaluate what price to sell at, you need to calculate your total costs, and that can be a lot trickier than many people imagine.

There are costs per commodity, like the purchase price. There are costs per load, such as drivers and guards. There are costs per trip, like wagon maintenance. There are costs spread over many loads, like the purchase price of a new wagon (or the repayment of a loan to permit the purchase of this one). There are all sorts of license and permit fees. There are tolls. If it’s available, and you’re sensible, and can afford it, there’s insurance. And there are taxes and import duties and the like.
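Those different cost classes get spread over a unit of cargo in different ways, and a small sketch makes the arithmetic concrete. All the numbers and names here are invented: per-unit costs apply directly, per-load costs are split across the units in the load, and big purchases (like the wagon itself) are amortized over the loads they are expected to carry.

```python
def cost_per_unit(purchase_price, per_load_costs, units_in_load,
                  wagon_price, loads_in_wagon_life):
    """True cost of one unit of cargo once shared costs are spread out."""
    amortized_wagon = wagon_price / loads_in_wagon_life
    return purchase_price + (per_load_costs + amortized_wagon) / units_in_load
```

For example, a unit bought for 10, in a 50-unit load with 200 in per-load costs, carried on a 1,000 wagon good for 20 loads, truly costs 10 + (200 + 50) / 50 = 15.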

If you’re a retailer, you have often-overlooked items like shelf space and product positioning (which has been mentioned already) on top of all of the above. You may need to woo vendors and suppliers, creating entertainment expenses. There may be bribes and protection money. There are staff wages – possibly including your own. There may be advertising and marketing. You may need to hire spies to watch an opposition. There will be guards, often hired from a specialist organization.

Each of these is more complicated than it appears. So you may also need bookkeepers and accountants and a paymaster.

To see just how complicated things can get, let’s simplify things down and consider a single wagon-load.

An example wagon-load

Alphonse has been hired to transport 12 cases of vintage wine to the city 100 miles away. It will take him one day to load the wagon and one day to unload it, and he can travel about 20 miles a day, so the total length of the trip is going to be 7 days. Alphonse owns his cart outright but has maintenance costs to pay, or (more specifically) a small amount of cash that he has on hand to pay for repairs when they become necessary. His wagon is drawn by two horses who are nearing retirement age, so he’s saving for replacements. He will have to pay three tolls along the way – one to enter the city, one to cross a bridge, and one to pay a ferryman. He needs three guards, and he has to pay them well and hire the best he can find. He drives the wagon himself, but he needs a relief driver in case he falls ill. He needs a cook, who will also serve as a medic. He hopes to be able to buy a second cart sometime soon, and so is training an apprentice, but he’s not experienced or skilled enough, yet, to act as the relief driver. He has to buy and carry food and water for his people and fodder and water for the horses. Every trip, he pays a blacksmith to check the horses and replace any horseshoes showing signs of significant wear. He has to allow for a sales tax and a luxuries tax and an income tax, and he also has to pay a fee for each of his workers to safeguard them from being pressed into state service. He has to pay vet bills for the care of the horses. Twice along the route, he stays at inns, which cost money for himself and his crew; the other three nights on the road, they have to rough it. He has to provide tents, and cooking equipment.

12 cases of wine fill only 1/3 of his wagon’s capacity. Tools, personal effects, and other items for use along the way fill another third of the available space. But that still leaves a significant amount that’s not earning any money – it’s just dead weight.
The bulkier a commodity is, the more space it takes up, and the heavier it is, the more carrying capacity it consumes. Spotting a wagon that is carrying high-density items, and is therefore not packed as high, increases the risk of interest on the part of bandits. Riding unusually high (lighter) or low (heavier) can indicate the presence of gems or gold, respectively, and disposable wealth always gets the attention of the more attentive low-lifes.

The only available commodities that could fill that space are low-profit cargoes like wheat or timber. To make a decent return, multiple cargoes will be needed to fill the space – each with a different weight, a different volume, a different cost, and a different profit level. That’s complicated enough on its own, but then you have to factor in the value of position – if there’s a commodity that can be sold at one of the intervening stops along the way (for a profit) and then replaced, that ‘empty space’ becomes even more profitable. There are umpteen jillion combinations of cargo and quantity, and the conveyor has to pick the one that is most likely to be the most profitable, without wasting a lot of time in the process.

The greater the diversity of products within that space, the less he’s carrying of whatever proves most profitable at the end of the day – but the more reliably he earns at least some profit.

So that adds questions of supply and demand, not just at the destination, but all along the travel route. Assuming that ‘home’ is the city where the wagon is bound, and that the carrier brought a load out with him that he sold before loading the wine, he may have had the opportunity on that trip out to get a sense of demand that he could fill upon his return; the risk is that someone else will have filled that demand in the time in between. The less time that has elapsed since he last saw a market, the less likely that is to have happened – so the oldest knowledge, concerning his final destination, will be the least reliable of all by the time he arrives.

There are endless possible outcomes. The trick is always to turn a profit, even if it’s less than hoped; anything more than that is a bonus. The wine itself is paying all the expenses of the trip save for the actual purchase of goods to fill the void, so that’s a lot easier to achieve than it might have been.

A barrel of apples, a cask of apple cider, a smaller cask of apple vinegar, a couple of bags of beans, a quarter-bin of pumpkins, six crates of shingles, a barrel of nails, 50 horseshoes, a small barrel of pig’s trotters in brine with a hidden compartment in its base to conceal half-a-dozen gemstones, and six live chickens in a cage, plus any eggs they lay en route. And six woven blankets of wool, four cow-hides, and a side of beef – that last won’t quite fit, and overloads the wagon slightly, but after a day or two, enough weight in water and food will have been consumed to solve that problem. Plus 500′ of rope to tie it all down beneath a weather-resistant tarpaulin.

If the wagon owner can just get the wine to its destination, he will turn a profit; the wine and gemstones alone will make it a very profitable trip, even if he sells the rest at a small loss or just breaks even.

There are always more variables to take into account. The roads will be at their worst when the wagon is most over-loaded, so there is an increased risk of a breakdown of some sort that diminishes as he travels, for example. Is that risk worthwhile, or should he forget the side of beef or the barrel of apples or maybe the bags of beans? Those all act as low-cost camouflage, hiding the real source of profits from spying bandits, so they have a value beyond the obvious.

Minutia and an alternative

As this example demonstrates, Trade as an activity is all about minutia, and – most of the time – minutia is boring. A Traveller GM I know was so put out by this that he ended the campaign when the players decided to go into being traders instead of engaging in the politics that he had set up as the centerpiece of his campaign; it was that incident that led to my writing the original Trade In Traveller article, “Buy Low, Sell High”.

A Mathematical Trick

I developed all sorts of tricks to speed up mental and paper arithmetic as a child because I had trouble learning, of all things, my times tables. Some of those tricks continue to serve me well, even today, and the principles that they exploit can be even more useful.

Let’s say that I have 20 numbers ranging from 1 to 10, as might result from a series of d10 rolls that have to be totaled to give 20d10: 2, 8, 8, 1, 6, 3, 9, 10, 5, 7, 6, 9, 5, 4, 10, 1, 5, 7, 1, 1.

If I add the highest possible result to the lowest, I get 11, which is not very useful. But if I exclude the highest possible result and use the one below it, I get 1+9=10. And that is VERY useful for quick counting. So I partner the results up as much as possible and see what’s left over.

    2+8=10
    8+1+1=10
    6+4=10
    3+7=10
    9+1=10
    10=10
    5+5=10
    9+1=10
    10=10

    That’s 9 tens for a total of 90, and I have 6, 5, 7 left over. But I’m not finished yet – I take the highest and lowest of these leftovers, and add them together: 5+7 = 12. Which makes the final addition, 12+6, even simpler – 12+6=18. Add the 90, and you get 108.

This is even easier to do if you actually have 20d10 to roll, because you can physically move the dice into their ‘partnerships’. But even without that, with a list of numbers generated 5 at a time (the number of d10s that I happen to have gotten out), it’s easy – just cross the numbers off the list as you partner them, or use backspace / delete if your list is in an electronic format.

It’s faster and more reliable than simply adding the results in sequence, where it’s easy to lose count of how many dice you’ve rolled and which ones you’ve already added.

If I’m talking about d6s, the goal is still to make tens. Here’s 50d6: 1, 6, 1, 4, 3, 3, 3, 5, 5, 4, 2, 2, 5, 5, 6, 3, 1, 4, 3, 5, 2, 5, 1, 2, 4, 3, 5, 5, 4, 3, 4, 4, 5, 4, 1, 2, 6, 6, 1, 6, 3, 6, 1, 2, 5, 6, 5, 2, 3, 4.

    5+5=10
    6+4=10
    3+3+3+1=10
    4+5+1=10
    2+2+6=10
    5+5=10
    3+4+3=10
    1+2+5+2=10
    1+5+4=10
    3+5+2=10
    4+6=10
    4+6=10
    4+6=10
    3+1+6=10
    5+5=10
    4+6=10
    3+2+5=10
    4+3+2+1=10

    That’s 18 tens and I have a 1 left over – a total of 181.
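For anyone who’d rather mechanize the trick than do it mentally, the partnering can be sketched as a short routine. This is a hypothetical Python rendering of the greedy version of the idea – seed each group with the largest remaining roll, top it up to ten with the smallest – not anything from a published ruleset:

```python
def sum_by_tens(rolls):
    """Greedy form of the 'make tens' partnering trick: seed each group
    with the largest remaining roll, then top it up to exactly 10 with
    smaller rolls; groups that can't reach 10 are counted directly."""
    remaining = sorted(rolls, reverse=True)
    tens, leftovers = 0, []
    while remaining:
        group = [remaining.pop(0)]          # largest remaining roll
        need = 10 - group[0]
        i = len(remaining) - 1              # scan from the smallest up
        while need > 0 and i >= 0:
            if remaining[i] <= need:
                need -= remaining[i]
                group.append(remaining.pop(i))
            i -= 1
        if need == 0:
            tens += 1                       # a complete 'ten'
        else:
            leftovers.extend(group)         # incomplete group
    return tens * 10 + sum(leftovers)

# the 20d10 example from the text
print(sum_by_tens([2, 8, 8, 1, 6, 3, 9, 10, 5, 7,
                   6, 9, 5, 4, 10, 1, 5, 7, 1, 1]))  # 108
```

The total always matches a straight addition, of course – the point of the trick is that the human version loses track far less often.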

A Cargo Standard

So, our commodities have 4 values that don’t change but that are different from one commodity to the next: Purchase Price, Quantity, Volume, and Weight. We also have Other Costs and Profit. Most of those numbers are relative to something – kg per bag or kg per 10 items or whatever. How can we repackage those numbers to eliminate some of these in favor of a more user-friendly description that has less minutia?

Volume

Let’s start with volume. Our cart has a fixed amount of it. If you divide that capacity by the commodity that takes up the greatest amount of volume per item, you get a relative minimum quantity that it can carry, and if you divide by the smallest volume per item, you get a maximum quantity. Neither of those is particularly helpful, but if you take the average, you can get a ‘typical quantity per load’. Is that of any more use? Not really.

What we can do is define a standard volume size, and package commodities by volume to fill that exact volume. To do that, we need to start with the volume occupied by the commodity with the highest volume per unit, and round that to a convenient number.

Or we can start by defining the volume capacity of a ‘typical cart’ and divide that by a convenient number to get a standard volume. Our actual cart will have a capacity of so many of those standard volumes. It’s a simple spreadsheet calculation to transform all of those volume-per-unit numbers into a value in standard volumes per unit – or take the reciprocal to get the number of units in a standard volume, with a certain amount of space left over within that standard unit. And then we can partner that with another commodity whose volume per unit exactly fills the available space.
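If you keep your commodity list in a spreadsheet, that calculation is trivial; here’s the same arithmetic as a Python sketch, with the cart capacity, divisor, and commodity volumes all invented purely for illustration:

```python
CART_CAPACITY = 240.0    # cu ft for a 'typical cart' (assumed figure)
DIVISOR = 12             # a convenient number of standard volumes
STD_VOLUME = CART_CAPACITY / DIVISOR   # 20 cu ft per standard volume

commodities = {          # volume per unit in cu ft (assumed figures)
    "wine (case)": 2.5,
    "wheat (bag)": 4.0,
    "nails (barrel)": 6.0,
}

for name, vol in commodities.items():
    units = int(STD_VOLUME // vol)     # whole units per standard volume
    spare = STD_VOLUME - units * vol   # left-over space for a partner
    print(f"{name}: {units} per standard volume, {spare:.1f} cu ft spare")
```

The ‘spare’ column is exactly the space you would fill by partnering in a second commodity.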

Weight

The typical wagon will also have a maximum load that it can carry before you start adding to the likelihood of a breakdown. If we divide that by the number of standard volumes that will fit, we get a maximum weight per standard volume. If we then use our spreadsheet, we can translate the weight of each of our partnerships into a certain number of those maximum weights per standard volume.

But here’s where my mathematical trick first shows up. We don’t care about the actual weight of any standard volume, so long as the average overall is within the capacity of our actual cart. So we can partner the weights per unit so that we achieve this overall average – a heavy standard unit partnered with 1, 2, or 3 lighter ones. There will be some that are a little over, and some that are a little under, but we’re packaging groups of standard units that will fit in the physical volume so that the weight overall is right.

This process can even be refined – if you’re 0.2 over on one pairing / partnership, you can take that off the next one that you’re putting together so that it ends up being 0.2 under. But that’s probably more detail than you need to go into.

A better approach is to ensure that each package of standard-units is as close to the desired value as possible without going over it.
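That partnering step is also easy to mechanize. Here’s a hypothetical Python sketch of the greedy version – heaviest standard volume first, topped up with the lightest ones until the package average is legal; the weight limit, package size, and weights are all invented for illustration:

```python
MAX_AVG = 10.0   # allowed average weight per standard volume (assumed)
MAX_PKG = 4      # a heavy volume partnered with up to 3 lighter ones

def partner_weights(weights, max_avg=MAX_AVG, max_pkg=MAX_PKG):
    """Partner the heaviest remaining standard volume with the lightest
    ones until the package's average weight drops to max_avg (or the
    package is full). Assumes the overall average is within limits."""
    remaining = sorted(weights, reverse=True)
    packages = []
    while remaining:
        pkg = [remaining.pop(0)]            # heaviest remaining
        while (remaining and len(pkg) < max_pkg
               and sum(pkg) / len(pkg) > max_avg):
            pkg.append(remaining.pop())     # lightest remaining
        packages.append(pkg)
    return packages

pkgs = partner_weights([16.0, 15.0, 12.0, 9.0, 8.0, 7.0, 5.0, 4.0, 3.0, 1.0])
# -> [[16.0, 1.0], [15.0, 3.0], [12.0, 4.0], [9.0], [8.0], [7.0], [5.0]]
```

Note that the heavy volumes each grab just enough light ones to bring their average under the limit, leaving the already-legal ones to travel alone.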

So far, then, we’ve created standard shipping units that contain combinations that ‘fit’ the available space and weight capacity with specific quantities of a group of commodities.

Price and Profit

Price per item is known, so each of these combinations will have a total price. Profit per item is trickier because of all the different costs that have to be taken into account, some of which only affect specific commodities. Perhaps, then, it’s a good thing that the actual selling price is what a customer is willing to pay, which has nothing to do with either the purchase price paid by the shipper / wholesaler or the costs incurred. Instead of profit, we should be looking at revenue – the income that can be generated from selling the commodity.

Choosing the commodity package that yields the highest total revenue creates the maximum scope for profit after those expenses are taken out. We could even label it “Idealized Profit”.

That’s what the successful trader wants to maximize. In an ideal world, he would pack his entire wagon’s capacity with whatever yielded the highest idealized profit and be on his way. Unfortunately, it’s not quite that easy.

Compromises With Reality

Said trader has to accommodate two more limitations: Finances and Availability. The second is the more readily dealt with, so let’s do it first.

Availability

There might only be two units of the most profitable combination while the merchant has room for 8. So he picks those two and then looks at the next most potentially profitable units. If there are six of them available, that fills his wagon and he’s on his way. If there aren’t, he adds what is available and then moves to the third most potentially profitable, and so on.

That’s just common sense, right?

I’ll get back to that in a moment. First, we have the other compromise with reality to deal with.

Finances

Merchants frequently don’t have as much money to spend buying commodities to trade as they might like. Perhaps the most potentially profitable single commodity is Emeralds plus something cheap to fill out the unit – potatoes, maybe. But these cost 10,000 GP a unit, and the trader might only have 14,000 to spend.

So, even though there might be two or three such units available, the trader can only afford one – and maybe not even that.

To find out, he has to play a little game with himself. Deduct the price of however many top-profit units he can afford from his ready cash and divide what’s left by the number of spaces still to fill. The result is the maximum amount he can spend per unit on the remainder.

If the list of available consignments has been sorted in sequence of idealized profit from high to low, the job is simple – work your way down the list until you find the highest potential profit package that he can afford. Fill the remaining space with them – if there are enough available. If not, buy them all and repeat the assessment.

If you reach the bottom of the list without being able to fill your wagon, you then have a choice: run light, or assume that you can’t buy as many of the most expensive units as you thought you could. Reduce that quantity of units by one, and recalculate.

Eventually, you will end up with a full wagon-load that carries as much potential profit as you can afford. The next time you come back, hopefully you will have a bit more cash to spend.
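The whole procedure – availability first, then affordability – boils down to a greedy loop. This is a simplified, hypothetical Python rendering (it moves straight on to cheaper consignments rather than re-dividing the remaining cash per unit, and doesn’t implement the ‘reduce and recalculate’ fallback), with all figures invented:

```python
def fill_wagon(consignments, space, cash):
    """consignments: (idealized_profit, price, available) tuples,
    sorted most-profitable-first. Buy greedily down the list, limited
    by space, ready cash, and what's actually on offer."""
    bought = []
    for profit, price, available in consignments:
        while available > 0 and space > 0 and cash >= price:
            bought.append((profit, price))
            available -= 1
            space -= 1
            cash -= price
    return bought, space, cash

# emeralds are the most profitable but nearly unaffordable (invented figures)
stock = [(5000, 10000, 3),   # emerald units
         (800, 1200, 6),     # wine units
         (300, 400, 20)]     # bulk produce units
load, spare, left = fill_wagon(stock, space=8, cash=14000)
print(len(load), spare, left)  # 5 3 0
```

With 14,000 to spend, the trader buys one emerald unit, three wine units, and one produce unit, then runs out of cash with three spaces still empty – exactly the ‘run light or recalculate’ dilemma described above.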

If that’s all there was to being a successful trader, there would be a lot more of them to go around.

Idealized Profits, revisited

What’s missing from the picture is allowance for the impact of supply-and-demand, and market knowledge. Both of these impact the idealized profits of the different types of unit on offer.

Demand for certain commodities rises and falls with the time of year and with the market environment. If there hasn’t been a supply of something that’s in demand – no matter what the level of demand is – that demand will rise, carrying the price people are willing to pay with it. If the market is oversupplied, demand will fall, and will again take sales price with it. You might know what the prices and supply were like a couple of days, or a couple of weeks, ago, but you have no idea what they are now, or what they will be when you finally bring your goods to market.

Some of the change depends on factors outside your control – what other traders have delivered a commodity in the meantime, for example, or a temporary hazard that means fewer such loads are reaching their destinations. Inclement weather and a hazardous river crossing can cause loads to build up, undelivered, while demand skyrockets, until conditions improve. And suddenly, there will be a glut on the market as everyone tries to capitalize on that pent-up demand. If you’re one of the early traders, you can do unusually well; if you’re late to the market, you can lose your shirt.

The one thing that’s for certain is that any ‘idealized profit’ list will bear only a passing resemblance to the actual prices at sale. Some commodities will be higher, and some lower.

In practical terms for the GM, they can either drive themselves nuts tracking every influence on actual sales prices – back to minutia again – or they can simply roll a bell-curved die roll and get a relative price adjustment which they then explain in narrative terms.

But just because a factor is outside your control, that doesn’t mean that it’s completely unpredictable. Market knowledge is a powerful tool that only the most intelligent can access. If there are a bunch of new homes being built, building supplies will increase in demand and therefore in price. If there’s a major sporting event coming up, the resulting groundswell of visitors will push up the demand for food of all sorts, and alcohol in particular. If the other team are from an area with a specialized cuisine, the ingredients for that cuisine will rise disproportionately, relative even to the inflation of food demand in general.

If a particular bridge is rickety and old and in urgent need of repairs, it might be worthwhile going around it, even if it slows you down; while that might produce a short-term reduction in profits, sooner or later that bridge will fail, and you will reach your destination to find demand skyrocketing.

The more the canny trader knows about the world around them, the more they can use that knowledge to anticipate movement in demand and in price, and can then buy accordingly.

The GM Shortcut

All that fussing around with so much of this and so much of that comprising a unit can still be a lot of work, time that could more profitably be spent on something else. It’s still minutia, just a more generalized form of it. Is there a way around that?

The answer is yes, and it gives rise to a fundamental principle of making trade work in an RPG – any RPG, regardless of genre.

The GM sets the prices and the quantities and every other significant value in the process.

Some GMs use random die rolls to do so, thinking that takes the effort out of the problem. And they’re right, it does – but it creates more work and more minutia than it saves, in the long run, and the removal of GM bias doesn’t make the system any more or less fair, just more chaotic.

Two die rolls are all that are needed. Two.

The first one sets the current market conditions if these are not already known / inferrable.

    <0, 0 = catastrophically unfavorable, +2 to the second roll
    1 = strongly unfavorable, +1 to second roll
    2 = somewhat unfavorable, +1 to second roll
    3 = slightly unfavorable / neutral
    4 = slightly favorable
    5 = somewhat favorable, -1 to second roll
    6 = strongly favorable, -1 to second roll
    7+ = incredibly favorable, -2 to second roll

The second one sets the trend, the direction things are going. The GM can add or subtract 0, 1, or 2 from this die roll to correspond to known external factors, plus there are the modifiers from the first roll.

    -2, -1 = becoming strongly more unfavorable, -2 to the next ‘first roll’
    0, 1 = becoming much less favorable, -1 to the next ‘first roll’
    2 = becoming somewhat less favorable, -1 to the next ‘first roll’, re-roll 5+ results on the next ‘first roll’
    3 = becoming slightly less favorable, -1 to the next ‘first roll’, re-roll 6+ results on the next ‘first roll’
    4 = market steady, no real change
    5 = becoming slightly more favorable, +1 to the next ‘first roll’, re-roll 0- results on the next ‘first roll’
    6 = becoming somewhat more favorable, +1 to the next ‘first roll’, re-roll 1- results on the next ‘first roll’
    7, 8 = becoming much more favorable, +2 to the next ‘first roll’

These then become the foundation for narrative (which has to explain both the movement from the last ‘first roll’ and the current market trend).
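Translated into code, the two rolls look something like this – a hypothetical Python sketch in which the die is assumed to be a d8-1 (the text doesn’t name one), the labels are paraphrased, and the re-roll clauses are omitted for brevity:

```python
import random

FIRST = [   # (highest roll in bracket, condition, modifier to 2nd roll)
    (0,  "catastrophically unfavorable", +2),
    (1,  "strongly unfavorable",         +1),
    (2,  "somewhat unfavorable",         +1),
    (3,  "slightly unfavorable / neutral", 0),
    (4,  "slightly favorable",            0),
    (5,  "somewhat favorable",           -1),
    (6,  "strongly favorable",           -1),
    (99, "incredibly favorable",         -2),
]

SECOND = [  # (highest roll in bracket, trend, carryover to next 1st roll)
    (-1, "becoming strongly more unfavorable", -2),
    (2,  "becoming somewhat less favorable",   -1),
    (3,  "becoming slightly less favorable",   -1),
    (4,  "market steady",                       0),
    (5,  "becoming slightly more favorable",   +1),
    (6,  "becoming somewhat more favorable",   +1),
    (99, "becoming much more favorable",       +2),
]

def lookup(table, roll):
    for cap, label, mod in table:
        if roll <= cap:
            return label, mod

def market_turn(external_mod=0, carryover=0, rng=random):
    """One market update: the first roll sets conditions, the second
    sets the trend; returns the carryover for the next update."""
    condition, mod = lookup(FIRST, rng.randint(0, 7) + carryover)
    trend, carry = lookup(SECOND, rng.randint(0, 7) + mod + external_mod)
    return condition, trend, carry
```

Each call hands back two phrases ready to be wrapped in narrative, plus the modifier to feed into the next session’s first roll.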

The GM then interprets that narrative to set a general trend in commodities and pick out two or three exceptions going up and two or three going down. He then deliberately constructs unit packages with an eye to the implications of supply and demand from the narrative, and from that, can set availability levels.

Working backwards from the general picture of trade to the specifics of what’s available and what it’s likely to sell for saves an awful lot of work.

But it’s possible to simplify the GM’s job a little bit more, by considering group effects.

Group Effects

The principle of group effects is a simple one: events affect related commodities in a common way. As a general rule, you can treat all grains as a single entity, all vegetables, common meats, alcohols (except beer / ale, which often stands apart, and which may go down in demand when other alcohols like wine go up), and so on. If there’s a military conflict on the horizon, weapons, horseshoes, saddles, and armor will all increase in demand, and so will (higher-quality) cloth that can be dyed into standards and flags. And basic produce, like beans, hay, and oats. If there’s an increase in building, stone, timber, nails, panes of glass, and tools are all affected. And so on.

If only part of a unit is affected in this way, only the proportion (by cost price) that is affected goes up; the rest doesn’t.
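That partial-unit rule is just a weighted adjustment. A minimal Python sketch, with the commodity split and percentages invented purely for illustration:

```python
def adjust_unit_price(components, group, factor):
    """components: (group_name, cost_price) pairs making up one unit.
    Scale only the components in the affected group; leave the rest."""
    return sum(cost * (factor if g == group else 1.0)
               for g, cost in components)

# a unit that is 60% weapons, 40% grain by cost price (invented figures)
unit = [("weapons", 600.0), ("grain", 400.0)]
war_scare = adjust_unit_price(unit, "weapons", 1.25)  # 750 + 400 = 1150.0
```

Only the weapons fraction of the unit responds to the war scare; the grain rides along unchanged.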

Fitting All This Into Trade In Fantasy

The main series deals with the minutia of effects. It’s designed to fully support immersion of the PCs in the world of commerce, where business activities are to be used as a springboard to adventures.

Sometimes, that’s overkill, because that’s not what the PCs have in mind at all – but the GM still needs to simulate the complex field of commerce within the game world. The PCs are hired on as guards for a wagon train of goods, for example – what are they actually protecting? And why does the wagon master need so many guards? Is the owner just paranoid, or is there something he’s not telling the PCs?

Or perhaps the PCs are employed to scout markets in remote places, searching for unusual commodities that might be valuable, and places where something or other seems to be in high demand. That could be a gateway to all sorts of adventures because it’s simply a justification for them going places they’ve never been before. You need some method of simulating trade to give the mission verisimilitude, but it should be as unobtrusive as possible.

That’s where the simplified system contained in this post comes in. You can make it as generalized as you want to – having standardized volume and weight, why not cost as well? It can be done in the same way, so long as you handle expenses separately.

There are some circumstances where this article is all you’ll need.


Delineating Overarching Character Traits


A technique for creating unique and interesting characters that makes their cultures richer and more detailed. Simple but comprehensive.

This image of a Mannequin in Ferengi Makeup and Uniform by Marcin Wichary from San Francisco, Calif. was first published on Flickr under the Creative Commons Attribution 2.0 Generic License, https://commons.wikimedia.org/w/index.php?curid=79570035.

I was reading something on Quora the other day about how Deep Space 9 used the overall concept of Ferengi Traits to make the personalities of Quark, Rom and Nog distinctive (and don’t worry if you don’t know who those characters are, it’s not important to the article).

The key point being proposed was that while all three fell into the general pattern of ‘Ferengi’, each had his own unique traits for which that general pattern provided context. Putting those together permitted an interpretation of those traits from the Ferengi perspective, which in turn broadened the perspective on that society from comic-book simplicity to something rich and culturally detailed.

To employ a metaphor, a spotlight on one of the characters reflected back on the overarching commonalities, exposing fresh facets of the collective generality.

My thoughts went immediately to the gaming applications. These are essentially the same thing, but four-fold: Racial, Archetypal, Cultural/Social, and Characteristic. Each of these represents a way of generalizing a character, and provides (through interpretation) specific traits that denote the individual personality.

Initially, I was focused on NPC delineation, because that’s always a topic of value to GMs, but then I realized that the same methods would work for PCs as well – and that a lot of advice offered both here and elsewhere over the years was already groping in this particular direction.

An introduction to the Architecture

I’ve tried very hard, in this article, to use different collective descriptions for each facet and sub-facet of a subject. This has two purposes – first, by using non-standard nomenclature, it invites readers to take a fresh look at a very familiar subject; and second, it helps keep clear just which facet or sub-facet I’m talking about. The goal is to avoid boxing ourselves in with stereotypes while creating a broad range of end personalities within a particular culture of which the individual (and all other individuals) are collectively representative.

This matters because it transforms the personalities from something being dictated by rules narrative and cultural write-ups to a foundation for individuality – it lets individuals be unique while maintaining that cumulative impression.

And it matters because that’s how characters in-game would formulate their impressions of both an individual and of a collective grouping. They wouldn’t be given an overarching definition; if they were told anything at all about the race / culture, they would be given stereotypes, into which they would have to ‘fit’ the individual. If told nothing about the race / culture, they would be presented with one or more individuals whom they would then have to generalize into an overall impression.

In other words, this approach is both more akin to, and more facilitative of, the situation as it would be encountered in the real world. That makes this less work for the GM, allows more creativity, and produces more unique individuals.

    Three Options and how to choose between them

    GMs can either start with a generalized pattern as a structure, or let one emerge naturally as a collective impression created by a group of individuals. Or they can occupy a half-way house somewhere in between these two extremes, offering a broad summary as a guideline and being content to extrapolate from that beginning for individuals, fleshing out the resulting general view one individual at a time.

    There are two factors that should be considered when choosing between these three options. (1) How much contact has the society in general had with the race / species? This pushes toward the generalized pattern as foundation. And (2), how diverse are the race / species in personality, and within that question, how representative of their race / species does the GM want this individual to be? The diversity question pushes toward the middle ground, while the representativeness question goes further and promotes the emergent collective impression as the path to follow.

    There’s even a variation on the half-way house in which the specific description is filled with half-truths and inaccuracies perpetuated through myth and legend and culture. The GM may not know what the truth behind this picture is, only that it’s partially accurate and partially invented or romanticized.

    There should never be a forced ‘one size fits all’ answer to this question; it should be different each and every time – but, once made, the choice should remain in effect for each representative of a race / species until you have good reason to change it.

The Four Stanzas Of A Character

The general picture of an individual character can be broken down into four stanzas – four paragraphs / lines that collectively delineate an individual persona. Some GMs may add a fifth, alignment, but that’s fallen out of favor in gaming circles these days.

That redefines the objective – we want to end up with a four-to-eight-sentence summary of the individual and how he represents the broader culture from which he derives.

Before we can achieve that, we need to know the subjects of these four stanzas.

    Racial Traits

    These are the racial stereotypes that collectively apply in some manner to the normal individual – even if the individual is wildly different from them, they are still defined, in relative terms, against those racial traits. “The typical Orc is boisterous and brash, ill-mannered, and prone to violence, with a huge chip on their shoulders from being suppressed as a species, and as an individual within the species.” Right away, there’s a lot in that description that will seem familiar, but there’s a nuance or two that are just a little different to the generic description of the race. It provides a subtle redefinition of the race, one that can manifest in different ways in every individual.

    Archetypes

    Similarly, in most RPGs there are archetypes – sometimes explicitly defined as character classes, sometimes not. Each archetype, in turn, carries baggage in the form of a description of the type of persona that it welcomes and develops, the personas that naturally ‘fit’ the archetype and how well-suited the individual is to their profession.

    Social Class, Associations, and Faiths

    These three are all ways that individuals associate with others, sometimes within their culture, and sometimes forming a point of connection with others beyond it. Each of them carries an expectation of behavior that forms part of the collective identity of the specific sub-group of which the individual is a member; if that behavior comes naturally, it speaks to the persona of the individual. On the other hand, if the individual rebels against one or more aspects of the group identification, that also says something about the personality of the individual.

    There can be several such groupings to which an individual belongs, but one of them will always be dominant, and their response to that dominant grouping will be definitive, providing a guideline to how they integrate (and how well they integrate) with the other groups to which they belong. These other groups provide nuance, not definition. They can warrant a mention in this stanza only when it is culturally expected that this association is definitive – and in this individual’s case, it is not.

    Characteristic Attributes

    There are three different aspects of characteristics that shape an individual – those that are relatively high, those that are unrelentingly average (relative to those around him or her), and those that are notably lacking or low (same caveat). Each of these can form an important element of the individual persona, or can be negligible. The negligible ones should be ignored for now; it’s the definitive ones that we are interested in.

    If the individual is notably stronger than those around them, this will have a profound influence on them, amplifying the consequences of some typical adolescent behaviors into life-altering events. Similarly, if they are faster, more nimble, more agile, more athletic, smarter, wiser, more attractive, or more resilient, there will be profound impacts that will push them either more firmly toward the stereotype, or more strongly away from it.

    If the individual is notably weaker than those around them, or more foolish, or more stupid / easily led, less genteel, or more clumsy, these impacts will also be profound. Always being the last person picked for games or teams will amplify other attributes of the persona, and may even put the individual into situations that threaten their lives. Some may devote their lives to overcoming this handicap, no matter the cost; others will accept it and embrace another path through life.

    It doesn’t matter how many characteristics the game mechanics define, there will always be more than can easily be accommodated in a short descriptive passage of the type being discussed here. Of necessity, you need to focus on the one, two, or (at most) three that are most definitive of the individual relative to the broader population around them.

    I want to highlight something before continuing. I’ve made a big point of using terminology relating to racial / social expectations, for example, “relative to the broader population around them”, for three reasons.

    First, it’s the relative value in comparison to those expectations that shapes a persona, not the absolute value;

    Second, this accommodates circumstances of adoption / resettlement, in which the racial norms themselves deviate from the expectations of the society around the individual; and

    Third, defining these attributes in relative terms means that the individual’s raw numbers can be filtered through the relative terminology to say something about the culture from which they derive.

The Process

With the subjects of each stanza now defined, we can move on to the process of generating an individual’s persona. For each of the Stanzas, this is a four-step process that is often conducted intuitively. As with most intuition-driven events, greater understanding and control can be achieved by understanding the process intellectually, and this can provide a road-map to follow when intuition fails us.

In fact, the four-step process is so quick (and usually easy) that we can contemplate far more than the four stanzas, and that creates a need for a fifth step, placed second-last, and labeled step 4:

  1. Generic Trait to Profile Spectra
  2. Individual Placement within Spectra
  3. Alternative Interpretations & Adaptations of Individual Placements
  4. Selection
  5. Facets of Individuality from Specific Interpretations

Let’s briefly look at each of these in greater detail.

    1. Generic Trait to Profile Spectra

    I recently wrote, though I’m not sure where, “Nature doesn’t deal in absolutes, it deals in spectra”, or words to that effect – I think it might be in the Zenith-3 adventure currently being played.

    Every element in the four stanzas can be viewed as a placement upon a general range of spectra that collectively define the application of the element to the collective identity of the race / species.

    You can see this readily in the case of the characteristic attributes – the character has a specific value for each characteristic, while the full range of possibilities defines the scope of the spectrum from low-to-high. One of my very early advocacies, long before I started writing articles for Campaign Mastery, dealt with the spectrum of full possibilities permitted by the game mechanics and the placement of the individual upon that spectrum as a guide to personality traits.

    In this case, the spectrum of possibilities is reduced to just those considered ‘valid’ for the race / species, permitting a socially-relevant measure of the impact of that placement, but the older interpretation still has some value in terms of defining the significance of those racial restrictions relative to the human population base.

    If the human range is 3-18, for example (the very traditional D&D scale), an individual value of 15 gives rise to certain character traits (depending on which characteristic is being discussed). If the race in question has a spectral range of 12-20, the 12 tells you something about the race relative to humans, as does the 20, and the individual’s value of 15 tells you something about where they fit within that 12-20 spectrum of possibilities.

    Set aside the individual value of 15 for the moment, though; this step is about defining the 12-20 and translating that into general descriptions of the characteristic with respect to this particular race.
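
    To make the arithmetic concrete, here’s a minimal Python sketch of reading the same score against two different spectra. The function names, the 0-1 placement fraction, and the five descriptive bands are my own illustrative choices, not anything from a published system:

```python
def placement(score, low, high):
    """Return where a score falls within a range, as a fraction from 0 to 1."""
    if not low <= score <= high:
        raise ValueError(f"{score} is outside the range {low}-{high}")
    return (score - low) / (high - low)

def describe(fraction):
    """Map a relative placement to a rough descriptive band (my own labels)."""
    bands = ["well below the racial norm", "below the racial norm",
             "typical of the race", "above the racial norm",
             "exceptional for the race"]
    return bands[min(int(fraction * len(bands)), len(bands) - 1)]

# The same raw 15 reads very differently against each spectrum:
human_view = describe(placement(15, 3, 18))    # against the 3-18 human scale
racial_view = describe(placement(15, 12, 20))  # against the 12-20 racial scale
```

    Run as written, a 15 comes out "exceptional" on the 3-18 human scale but "below the racial norm" on the 12-20 scale – exactly the double reading that this step (and the next) trades on.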

    Each of the stanzas can be treated in the same way, as a range of possibilities that define the race / species, and this step is one of defining those spectra.

    Obviously, if you don’t take racial notes, you have to repeat this process every time. When you don’t have a unified concept of the race / species in your head, that repetition can help create one through step-wise refinement and iteration; but when you do have a clear idea of the central concept, repeating the process is a waste of prep time. Either way, a little careful note-taking at this point speeds the process up in the future.

    2. Individual Placement within Spectra

    This is where that individual’s value of 15 reenters the picture. You aren’t so much looking at what this enables the character to do, or not do; you are looking for the consequences of that specific value toward the personality of the individual. What comes naturally to him or her, what do they struggle with, and how do those things fit them into the culture surrounding them?

    Again, this step is easier when thinking about characteristics, but it’s true of all the stanzas. Social Class, for example, will have a range from those at the bottom to those most valued by the society (usually rulers, but not necessarily so). Elves may revere those making cultural contributions far above their social standing as defined by their political influence.

    Applying a little creativity can nuance racial definitions in ways you would scarcely believe – for example, if the Brewers of Ale are the most influential in Dwarven societies, you get a very different picture of the society. If you then generalize from the specific Beer-maker to ‘Social Lubricants’ to ‘Social Interaction Enablers’, you find that anyone who makes social interactions easier or more significant grows in stature within the resulting society, and that social interactions of all sorts become more significant within the resulting culture. Feasts, parties, and casual get-togethers of all sorts become more significant, more frequent, and more embedded within the society.

    There would be excuses for such, both informal and formally-defined, that stretch even beyond the extremes found in human cultures – there would literally be an excuse for a ‘party / celebration’ every week of the year. Some of these might even be negatively contextualized in expression – commemorating a war in which such celebrations were not possible might be remembered by making ale forbidden during the first phase of the social event (to be followed by an even more extreme celebration of the victory, once social norms became possible again). So you have a week of fasting (in terms of alcohol) and then a blow-out.

    3. Alternative Interpretations & Adaptations of Individual Placements

    So we have a spectrum of results and a placement of the individual within that spectrum. The racial profile associated with that spectrum defines what is usually meant by that placement, but nothing exists in a vacuum; how an individual reacts to a specific spectral placement will not be an isolated phenomenon, it will be a part of the unified whole that is the individual’s personality. Rather than look to the generic cardboard cut-out interpretations, it’s worth spending a few moments contemplating alternatives that might better represent a coherent profile of the individual, relegating the generic contribution to (at best) a secondary status within this individual’s makeup.

    This stage of the process is an exploration of ideas – don’t be afraid to throw in something from left field to see what becomes of it.

    4. Selection

    By the time you’ve finished that, you will have a vast swathe of contributing elements, a soup of possibilities, all present in equal strength, and so yielding a fairly bland and unfocused characterization. Time to apply a little selectivity, picking out the elements within each stanza that best define the individual and their place within their natural society.

    Remember, the goal is to be able to sum up the individual and their place within their native culture in just 4-8 sentences of simple construction – none of those 15-line paragraphs that read like legal fine-print. Simple, direct statements. Anything that doesn’t belong in the description of the individual’s personality and placement should be part of the racial notes.

    5. Facets of Individuality from Specific Interpretations

    When you’ve boiled off the dross – and it’s likely that your pruning will need to be ruthless – what remains is canon for that character. Everything not explicitly stated is free for interpretation in response to triggering events, though logical implication may narrow the reactions to such events.

    Roleplaying is about taking those defining elements and merging them into a holistic view of the personality which can then be expressed in thought (decisions), word, and deed. The GM has to do it just as much as the players do.

    It can be the case that the holistic view needs 1-2 more sentences to unify the constituent elements. “[Name] is a Party Animal” can mean very different things in different cultures, and usually requires a clarifying clause within the sentence. “Elvor is a Party Animal, always up for a good poetry recital or inspection of the blooming of roses” – by redefining ‘Party Animal’ into a relevant social context, this describes a very specific individual in a single sentence; everything that follows merely enhances that overall summation.

    Simply by virtue of making this the dominant personality trait of ‘Elvor’, you automatically imply that everything else is secondary to this aspect of their personality, to be sacrificed if and when it becomes necessary. Right now, there’s an impression that the character is a gadfly, without serious heft and gravitas – but if this love of ‘intellectual events’ has driven the character to become engaged in internal politics, or to be a social firebrand / conscience, it’s possible that nothing could be farther from the truth. That’s what the other elements of the characterization are there for.

    It’s the overall summation that GMs and Players should keep in mind when roleplaying. Nuance is all well-and-good, but can often conflict with other characterization elements; the overall summation is the guide to navigating such complexities.

Spotlight Placement

Like most creative types, I love to show off my handiwork to the players. Perhaps eight times in ten, I’ll get a shrug and a ‘so what’, but the remainder generates varying degrees of appreciation and occasionally awe.

There’s a wrong way and a right way and a better way.

The wrong way is simply to dish up “here’s something I’ve been working on,” without in-game context. This risks giving away key details of plot not yet played, throwing away any surprise or wow factors at the game table for a moment of gratification that might not even be coming. It’s something that most of us have been guilty of at some point along the way, and we all have to learn (sometimes repeatedly) not to do it.

The right way is to make the revelation part of the plot by ensuring that the plotline focuses on at least one of the more unique aspects of the character, showcasing his or her individuality.

The better way is to fully integrate the character and one or more of their unique personality attributes into the plot, making them an essential building block of the campaign, while using them to shed light and add substance to the range of possibilities implicit in their race, profession, and social position. This might require the involvement of a second character whose job is merely to forewarn the PCs about the uniqueness or place it in a racial / professional context afterwards, specifically addressing the nuances that make the character function.

    Focal Point

    As you can see, there’s a great deal of similarity between the ‘right way’ and ‘the better way’ – the distinction is in how central the uniqueness of the character is to the plot.

    Both start with the selection of a focal point – the aspect of the personality that is going to be on most prominent display. This could be any one of the character’s stanzas of description, and there will always be a best choice in terms of the plot and intended usage. But if, by chance, the character you’ve created doesn’t match up with your plot needs, it’s at this point that you should set the character created aside for use some other time, and start over – letting the plot guide you to a unique character for that critical role in the story.

    Reflections Of Individuality

    Once the primary point of uniqueness is built into the plotline, the second step is to look for opportunities and character-roleplay moments that can briefly highlight one or more other unique aspects of the character. Failing that, a foil – someone present merely to expose the existence of those other unique attributes – is often the best answer.

    The Racial Rainbow

    I am always cognizant of what the uniqueness of the character adds to the rainbow of racial aspects and colors contained within the race. How does this character, and their role within the adventure, expand the fundamental definition of the race that lives in the players’ heads? How can we make that expansion unforgettable, so that the next example builds upon it, having a cumulative impact?

    Every non-cliché Elf, Dwarf, Orc (or whatever) adds to the substance of that race, so long as their uniqueness can somehow be put on show and made memorable. The more central they are to the plotline, the more easily the latter can be achieved, and the more interesting the character, the more easily you will be able to drop them into future occasions.

    If you make six unique NPCs and only one of them goes on to become a central figure in the campaign, that’s a win for the GM – because if they weren’t memorable, none of them would do so; they would simply be part of the campaign furniture. But at the time of creation, you never have any idea which of them will turn up again in the future – you’re simply placing as many top-quality building blocks to hand as you can come up with.

    The Archetypal Rainbow

    It’s the same thing with respect to the character’s archetype. Expanding the role that the individual can play expands the potential capabilities of their archetype, providing a second avenue into their becoming a recurring element.

    The Social Rainbow

    The sheer variety of groups around which the character can be oriented means that their contributions to the social rainbow will be more diffuse, unless this is the central facet of the character spotlighted.

    But this also brings me to a top tip – The Path Not Fated

      The Path Not Fated

      We’ve all met people who would excel in a different vocation or social position, but who were forced by circumstance, or family, or opportunity, or whatever, into a pathway through life for which they aren’t really a very good fit.

      They nevertheless do as much as they can to fit themselves into the square hole, no matter how much of a round peg they may be, and do enough to continue on in that square hole, though it doesn’t come naturally to them.

      Whenever fate (or a PC’s decision) throws up the need for a generic cardboard cut-out NPC, my favorite tactic these days is to make them something else, then reconcile that with their life and its demands.

      The noble who would be better-suited to being a bookkeeper. Or a Beekeeper. Or an architect. Anything but a typical ruler, in fact.

      The inn-keeper who was born to tread the Tennis Court. Or the Pool Hall. Or to be a famous singer.

      The Blacksmith who should have been a painter. Or a gardener. Or a butler.

      It’s a shortcut through the processes described here that doesn’t fully flesh out the character, but it still captures at least half of the uniqueness that would result from the full treatment, and it’s fast enough that it can be done on the fly – which is exactly what you need in this game situation.

      The biggest trap to watch out for is creating a new stereotype by reusing the same ‘alternative vision’ repeatedly. Avoid that, and you’re well on your way.

    The Characteristic Hues

    Characteristic-defined traits are a little different to the rest. They rarely stand alone, instead compounding with other personality traits to add additional nuance and depth. These are personality elements that would be largely similar no matter what archetype / profession the character adopted, what their social class was, and that are embedded within their racial profile, inseparable from it to at least some degree.

    Contemplate, for example, the differences in the following:

    • “He’s unusually strong for a Gnome.”
    • “He’s unusually strong for a Storm Giant.”

    Both will have generated similar formative influences within their respective cultures; it’s when you step outside those boundaries that the context becomes important. In the first case, the character is likely in for a rough time, adjusting to no longer being the biggest and toughest around, but they may end up a better person for the humbling. In the latter case, any personality traits engendered by their strength are likely to be amplified, if anything.

Totality: The sum of many reflections

The techniques described in this post shouldn’t be used every time you generate an NPC. Their power stems from the cumulative impact of many diverse representatives; if you can’t envisage a pathway through the campaign that yields many encounters with Ettins, it may not be worth going through the whole process.

That’s certainly one path to take. The on-the-other-hand counter-argument is that if there’s only going to be one Ettin, you should make it as memorable and distinctive as possible. While the pragmatist in me aligns with the former position (less time spent on this means more time that can be spent on something else), everything else in my nature (excluding laziness) demands the latter.

I can’t decide this question for you – I can only advise people to find the balance and pathway that works best for them. Every GM has some talent at which they are better than the rest, some have several. Prep time invested in something that comes naturally to the GM yields a better dividend, but leaves holes in their performance behind the screen; prep time invested in the areas they are weaker in elevates the performance bottom line and also frees up some of their time and attention for their strengths to be displayed. There’s no one right answer.

But I thought it worth the effort, before wrapping up this article, to think about some even bigger pictures and the impact the technique can have.

    Genre Variations

    By defining the racial and archetypal parameters differently, even within the same game system, you create genre variations, and these can be as nuanced as you want them to be. If you want to distinguish between high fantasy and low fantasy, you can – even in the middle of a campaign, if you perceive that the campaign has evolved through characters gaining wealth and experience. That’s a powerful benefit, but it misses one of the more useful functions of the process.

    It also enables the conceptual repackaging of one genre’s creatures into another. There are two examples that I could offer right now, but both are from adventures that haven’t yet been played. Instead, I’ll throw out a less-developed idea just to illustrate the power of the technique.

    Let’s take a Troll and translate it into Sci-Fi using nanotech repair mechanisms housed within the humanoid organism. There would be certain aspects of the ‘repaired’ creature that would be user-customizable, and some that aren’t. Increased strength, size, and resilience? No problem. Diminished intellect and agility? Suggestive of nerve damage as a consequence of the nanotechnology, and maybe neuron damage to boot. That suggests an inverse relationship between Strength / Resilience and Intellect / Nimbleness.

    It might be that every time the nanotech repairs the body, it gains a point of strength and/or resilience, but loses a point of intelligence and/or dexterity. Slowly, the character becomes more brutish – and more dangerous.

    This treatment doesn’t say anything about the ‘racial’ traits or the social groupings; the latter would probably be generic aspects of the sub-culture that embraced nanotech / cyberware, while the former would be about the places such ‘modified people’ hang out and the jobs they perform, and that would reflect their integration (or the lack thereof) within the broader society. That in turn suggests either a game setting that leans heavily into cyberpunk tropes, or one that is actively trying to avoid going down that path.
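
    If you wanted to hand that trade-off to the dice, the mechanic sketches easily. This is a speculative Python toy – the stat names and the one-for-one exchange rate are assumptions made for illustration, not a worked ruleset:

```python
import random

def nanotech_repair(stats):
    """One repair cycle for the sci-fi 'troll': the nanotech trades mind for
    muscle. `stats` maps hypothetical stat names to scores; the names are
    illustrative, not drawn from any particular game system."""
    gain = random.choice(["strength", "resilience"])
    loss = random.choice(["intellect", "agility"])
    stats[gain] += 1
    stats[loss] = max(1, stats[loss] - 1)  # floor at 1: brutish, not brain-dead
    return stats

subject = {"strength": 12, "resilience": 11, "intellect": 13, "agility": 12}
for _ in range(5):  # five near-death repairs later...
    nanotech_repair(subject)
# ...the subject is measurably more brutish, and measurably less sharp
```

    The dice decide which stat pays the price each time, so no two ‘trolls’ degrade the same way – which is exactly the sort of individual variation the rest of this process is after.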

    In my Zenith-3 (superhero) campaign, Earth-prime has started down the road to cyberpunk but there is considerable resistance, not least of which stems from a number of unique illnesses / diseases / conditions (some of them physical, some mental) that exist and act as a deterrent to many. There are a few fatalists who believe that cures will eventually be found, and that upgrading now gets them in on the ground floor of the next stages of human evolution; there are some who see the diseases as a natural price that has to be paid if ordinary people are going to compete with superheros and villains; and there are some who are simply overconfident (“it will never happen to me”). Philosophy colliding with Futurology in a Superhero context. These ‘trolls’ would fit right in.

    There can even be an argument made in reference to the purported ugliness of a Troll. Characters who opt for this type of augmentation will probably start out fairly average in appearance, maybe even a little sickly. At first, the gains would be positive – they would put on muscle mass and become more attractive as a result. That wouldn’t last; they would slowly become more grotesque in appearance, a trend enhanced by the natural occupations of this sort of augmented person – bouncers and enforcers and the like. All professions in which intimidation is an asset. And so most of them slide down a slippery slope into a more horrific appearance.

    We can make such a character unique by making them friendly, polite, soft-spoken, with exquisite manners. The dichotomy of such a social paragon being an ugly SOB who does an ugly job does the rest.

    Campaign Variations

    I’ve often discussed my desire to make no two campaigns that I run exactly alike. Sometimes, where they are both set in the same game world and operating concurrently in game time, the distinguishing features may have to be more nuanced and less casually-obvious, but they are still there.

    This is particularly the case when it comes to the different D&D campaigns that I’ve run over the years. I want Elves and Dwarves and Orcs and so on to be different in each, and to have some reason behind those differences. Collectively, those racial differences manifest from conceptual differences within the world and its history. Put both together, and each campaign takes on its own unique flavor.

    It should be obvious that this technique not only assists in creating such unique reinterpretations, it helps spotlight them in play. That’s both a win and a bonus, in my book.

    GM Individuality

    I’ve often made the point that each GM is a little bit different from the next. No two of us think exactly alike. Over time, the strengths, weaknesses, likes and dislikes, etc of the individual start to come together in a unique GMing style, one that often transcends campaigns and genres and game systems.

    There is a corollary to this perspective – not every game system will suit every GM equally. Some game systems will simply be a complete bust; others may flex ‘muscles’ that the GM didn’t know they had, enhancing and developing their capabilities; and some will fit them to a T, while the GM (metaphorically) next door can’t cope with that system and doesn’t see its attraction.

    Because this process enables individual GMs to craft individual interpretations of common elements like races or species, it facilitates the expression of a GM’s particular style – even before they know what that style is. Without that knowledge as a guide, there will probably be false starts and missteps along the way – but those would happen anyway. We make mistakes and we learn from them.

    The Developmental Sandbox

    The final big-picture point that I want to make is that you can start with a completely generic setting and evolve it, one step at a time, using this process. Eventually, you will find that you have developed your own singular ‘take’ on that setting – your “Eberron” might be completely different to another GM’s “Eberron”, your “Middle Earth” unique, while still deriving from and reflecting the source material.

    The process allows for the development of singular elements within a sandboxed game narrative, permitting the incorporation of creativity in greater or smaller doses – but one at a time, making assimilation of the distinguishing features easier for both GM and players.

    That’s not nothing, either.

A Powerful Tool

In conclusion, then, this is a powerful tool for character creation that expands the mythos surrounding the specific races, classes / archetypes, and social groupings to which the individual belongs. Rather than being confined by pre-packaged concepts of those character facets, it causes their expansion to accommodate greater diversity and richness of material within a campaign.

Throw in a few side-benefits along the way, and it should be easy to see why it’s worth your attention.

Leave a Comment

All About Ripple Plotlines


Ripple plotlines use domino chains that feed back to the main plotline while cascading out to trigger other plotlines in a chain reaction. They can start from the most apparently inconsequential act or decision and grow until whole Kingdoms hang from them like Christmas baubles.

Today (as I write this) is Australia Day, our equivalent of the 4th of July, and yesterday was unbearably hot and humid, so I got nothing done. Which meant, of course, that I would need something fairly quick and simple for this week’s topic.

I’ve given a pretty fair description of what a ripple plotline is in my introduction, so instead let’s look at the anatomy of one.

Anatomy Of A Ripple

Every ripple starts with an act or decision, which can be described in an abstract manner as the ‘seed’. This is similar, but not identical, to an adventure seed in that there are some very specific requirements that it has to possess. Specifically, it has to affect others in a number of different ways.

Each of those effects is a Primary Strand of the plotline. At least one primary strand has to affect a PC – usually directly, though an indirect effect can work, too.

Each group or individual affected is a secondary node, and each secondary node has to have the need to act or react to the Seed Event. That, too, is a requirement of the Seed that has to be met in order for this to qualify as a Ripple Plot.

Those secondary nodes give off consequences of the decisions. One of these “Secondary Strands” has to connect back to the Seed Originator in some way, and another has to impact one or more PCs in a specific fashion. I’ll come back to that detail in a little bit.

The rest of the Secondary Strands can either connect to the campaign background, creating a change in that background moving forward, or can connect with a Tertiary Node. That tertiary node will cast off Tertiary Strands, which – just like the Secondary Strands – have to affect the original Seed Originator, and either the background, or one or more PCs, or both.

A ripple plotline grows via a chain reaction of dominoes falling, spreading outward like ripples on a pond – hence the name.
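Those structural requirements can even be checked mechanically. The sketch below is a hypothetical Python representation of a Seed – the dict layout and the "BACKGROUND" / "PC" markers are my own shorthand for prep notes, not anything formal:

```python
def validate_ripple(seed):
    """Check the structural rules of a Ripple Seed: at least one Primary
    Strand affects a PC, every affected node reacts, and some Secondary
    Strand feeds back to the Seed Originator."""
    if not any(s["affects"] == "PC" for s in seed["strands"]):
        return False, "no primary strand touches a PC"
    for s in seed["strands"]:
        if not s["reactions"]:
            return False, f"{s['affects']} never reacts to the seed"
    if not any(r == seed["originator"]
               for s in seed["strands"] for r in s["reactions"]):
        return False, "no secondary strand feeds back to the originator"
    return True, "qualifies as a ripple seed"

# Illustrative example: each strand names who it affects and what
# Secondary Strands (reactions) it gives off.
tax_seed = {
    "originator": "The Crown",
    "strands": [
        {"affects": "PC", "reactions": ["The Crown"]},            # feeds back
        {"affects": "Merchant Guild", "reactions": ["BACKGROUND"]},
    ],
}
```

A seed that fails the check isn’t a Ripple Plot yet – it’s an ordinary adventure seed that needs another consequence or two bolted on before it will cascade.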

The Binding Agent

One of the characteristics of a Ripple Plot is that, initially, it’s about something other than the ripple plot itself. It starts in the background, just a backdrop to the “Through Plot” which serves as a Binding Agent. As ripples intercept the participants in this “Through Plot”, it gains momentum and significance, until the through plot is less important than the ripples that are rewriting the adventuring environment around the characters.

I’ve labeled this a ‘binding agent’ because it ties the narrative together, it ties the PCs to the ripples, and it gives the whole thing a momentum that it would otherwise be lacking. These are important functions, and it follows that the choice of through plot can be just as important as the Ripple Seed.

So what should you look for in a Through Plot?

In a word, discontinuity. It has to be something that starts and stops and then resumes, so that in the intervals in between, the ripples have time to manifest. A dungeon that has to be completed in sections, with rest and recovery away from the dungeon in between, for example. Or a courier job in which a message has to be delivered to several different noblemen, and the replies brought back to the employer. Or maybe, instead of noblemen, it’s a particular character class or occupation.

The nature of the Ripple Seed

Some types of plots lend themselves readily and obviously to Ripple Plots, in particular political events / decisions. But these are often too obvious and too significant, causing the PCs to focus on them before the full impact has time to manifest; there’s a fine line to be walked.

A lot of GMs come up with the basic idea, or some variation of it, on their own, usually based around a political seed, and this effect then causes them to lose control of the ripple plot. They then write the whole thing off as an uncontrollable force within a campaign, and never discover the power that it can have from a more subtle Seed.

What’s really desirable is something that’s going to be minor to start off with and grow.

Timing is everything

I can best explain this point by offering up an example. Suppose our Ripple Seed is the notion of disbanding the Inland Revenue Service and contracting the collection of taxes out to public groups / agencies. The theory is that in a year or two, this will save so much money that the tax rate itself can be lowered.

Right away, there’s a potential problem – what if the PCs decide to become one of these contracted groups? There are two ways of avoiding this, and I would use them both. First, make the remuneration less than the existing tax collectors were being paid – a disincentive; and second, make sure the PCs are busy with something that looks far more important / useful / profitable than this before it is even an option.

That ‘something’, obviously, is the Through Plot. I might foreshadow the Ripple Plot with news of a new Advisor to the Government (the Throne in a Kingdom) who has privately proposed radical reforms of the tax code. This, of course, is only half-right; he or she is not advising changes to the Tax Code, only suggesting that such might become possible if this change is put in place. But it sounds both important and boring at the same time, and so will incline the PCs towards the Through Plot when it manifests.

The thing that makes this a suitable Ripple Seed is that there will be lots of different groups who will have different reactions. Some will embrace it, in a restricted manner – Professional Guilds, for example, collecting the taxes from their members, and using the revenue paid to them for performing this service to lower their guild fees. Churches might embrace it, mandating that the congregations pay their taxes on the collection plate. Thieves’ Guilds might also embrace it, as a way of hiding their thugs in plain sight, giving them a veneer of respectability, and fattening their coffers by ‘increasing the tax rate’ (unofficially, of course) – not to mention the money-laundering possibilities. Various bandit groups might sign up as a way of gaining, or regaining, legitimacy.

Other groups will oppose it. Some might see the potential for corruption. Others, the prospect of confusion and/or tax avoidance. Winemakers and Vintners might claim that they’ve paid their taxes through their guild (when they haven’t) and so don’t need to pay agency X – whoever it is that comes around demanding tax payments. Still others may see it as a way for the neighbors to justify intruding into their privacy. And how do you prove that you’ve paid your taxes – by showing a token of some sort?

“Psst, hey, kid — wanna buy a token? I can give a discount for lots of six or more. Almost as good as the real thing, I promise.”

Instead of a central authority, there would be dozens of smaller authorities – and that makes any inequities in the system harder to remove by increasing the bureaucratic burden. Some groups might take matters into their own hands – if the merchants feel that sales taxes are high enough to stifle business opportunities, they might arbitrarily reduce the amounts they are collecting to what they consider ‘reasonable’.

Some groups may hear rumors of such goings on and decide to do likewise. Others will hear such rumors and decide that the guild in question is elevating themselves and their prosperity over that of others, and start acting against the guild who is the subject of the rumor.

Everyone will have an opinion of the idea, of the way it is implemented, of the groups backing it, of the groups opposing it, of the groups trying to make the system fairer and those who are trying to take advantage of it. Those opinions will shape or reshape the implementation of the idea, and some will shift from ardent supporters to vehement denialists. “I was all for this until the Seafarer’s Guild signed up to collect taxes from the docklands. You can’t trust them as far as you can throw a warehouse.”

Trust. In this Ripple Plot, trust becomes a taxable quantity that not everyone can afford.

And, at the end of the day, when society starts coming apart at the seams, it can all be undone by decree the same way as it was implemented. The old Tax Collectors can be rehired – at increased pay, no doubt – and taxes will go up to cover this increased cost. That won’t put the genie back in the bottle – the consequences and repercussions will take years to unravel and stabilize. And lots of different groups will have entirely changed attitudes toward the government who foisted this shambles off onto the public.

The Key To Success

Ripple plots succeed or fail, live or die, according to the extent to which the characters are directly affected. Those impacts should start small and innocuous, as already noted, but should compound one on top of another.

Ripple Plots. Everyone should know how to make them and how to use them.


A Fairy Colony In Zenith-3


What is a Fairy Colony, and why should you never annoy one? Or attack one? I didn’t want to go full “Fey” so I came up with something different…


In the Zenith-3 superhero campaign, there’s a Fairy Colony at the bottom of their back yard. It was placed there years ago (real time) but until late last year, no details had ever been worked out. Heck, there wasn’t even a functional definition of a fairy, let alone a Fairy Colony! But, with play set to resume next week, that had to change; so I wrote up some concepts, and then added to them, and added to those, and so on. None of my players have seen this yet (and the details in that specific campaign’s version are slightly different, anyway). That’s because I’ve adapted this to work with D&D/Pathfinder, even though it remains a concept for use with Hero Games, fundamentally.

Fairy Physical Structure

Fairies average 6-12 inches (15.24-30.48 cm) in height.

They trend towards being slightly built, though a few are stockier. Average weight ranges from 40-320 g; stockier examples weigh about x1.45 as much.

Their wingspan is typically 2.4 x their height (each wing = x1.2 height), and their wings resemble those of a dragonfly. They fly at peak speeds of up to 6″ (25 mph / 40 km/h) for the smallest through 12″ (65 mph / 105 km/h) for the largest. Divide these by 1.45^0.5 = 1.204 for stockier builds.

Cruising speed is 6″ (20-25 mph / 32.2-40.2 km/h) to 12″ (30-35 mph / 48.3-56.3 km/h).

They have three fingers and a thumb on each hand. As a result, they tend to number things in base-8.

    1=1
    10=8 (two hands)
    20=16 (four hands)
    100=64. (a great hand)

etc.
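Since the fairy scheme is just standard octal notation, any programming language can convert it directly. A minimal sketch in Python (the helper names are mine, not part of any game system):

```python
# Fairies count in base-8: three fingers plus a thumb per hand.
def fairy_to_decimal(numeral: str) -> int:
    """Interpret a fairy (base-8) numeral as a decimal integer."""
    return int(numeral, 8)

def decimal_to_fairy(value: int) -> str:
    """Render a decimal integer the way a fairy would write it."""
    return oct(value)[2:]  # strip Python's '0o' prefix

print(fairy_to_decimal("10"))   # two hands -> 8
print(fairy_to_decimal("100"))  # a great hand -> 64
print(decimal_to_fairy(16))     # four hands -> '20'
```

The same convention explains why a Fairy “15” of something is thirteen in human counting.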

At 12 inches tall, a fairy is effectively a small, sentient projectile. Flying at 65 mph, an impact would be significant – carrying about the same kinetic energy as a professional pitcher’s 100mph fastball.

Wearing a pointed helmet or using a pole arm, they become the equivalent of a living AP round (at low velocity compared to a gun, but still…)
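The fastball comparison holds up under basic kinetic-energy arithmetic. A quick check, assuming a heavyweight 320 g fairy at top speed against a 145 g baseball (my numbers, not canon):

```python
def kinetic_energy_joules(mass_kg: float, speed_mph: float) -> float:
    """KE = 1/2 * m * v^2, converting speed from mph to m/s first."""
    v_ms = speed_mph * 0.44704  # 1 mph = 0.44704 m/s
    return 0.5 * mass_kg * v_ms ** 2

fairy = kinetic_energy_joules(0.320, 65)      # heaviest fairy at peak speed
fastball = kinetic_energy_joules(0.145, 100)  # pro pitcher's fastball
print(round(fairy), round(fastball))  # roughly 135 J vs 145 J
```

Comparable energies, as claimed – though the fairy delivers them over a much smaller cross-section, which is the whole point of the pointed helmet.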

The wingspan of the larger fairies handicaps them in forest and indoor settings, but they dominate the open skies. The smaller fairies are far more maneuverable and dominate tighter spaces. As a species, they take advantage of these facts – short fairies are melee fighters while taller fairies use javelins and bows.

Because of the high speeds and small size, these fairies would likely have an incredibly high metabolism, requiring constant intake of high-energy foods (nectar, fats, or sugars) to fuel their flight muscles. They magically concentrate food daily. They will eat once when the moon rises, twice more at four-hour intervals, and have a half-meal when it sets (to give them an energy reserve to call upon if attacked in the night). Their preferred diet is tree sap (especially of the maple variety), leaves, and fruit. Most flowers do not produce enough nectar to do more than add flavoring, but they prize them for that function. Especially brave or hungry fairy colonies may raid a beehive.

Fairy Social Structure:

They consider themselves a single clan or “colony”. When their numbers grow too large, the colony will split and have a big fight to see who gets to stay and who has to look elsewhere. Normally, about 2/3 will refuse to fight, either choosing after the outcome is decided which group they will affiliate with, or volunteering to relocate, regardless.

How many is too many? The real number is somewhere between 500 and 1000 adults, but most Kings pick a number between 100 and 500 with which they are comfortable. Beyond a few hundred, you stop knowing everyone as individuals very well, and past about 500, you start losing track of individuals completely – and social cohesion and relationships are essential to a Fairy.

Fairies hold grudges for decades, if not longer, as hot and passionate at the end as when the incident is fresh. They are easily placated, however, if this is done sincerely. For the most part, they simply want to be left alone. And party. And celebrate nature. And socialize. And gossip about each other (usually in a friendly way).

Then, too, in every generation there are a few really mean and nasty individuals – bullies and the like. If the colony is small in size, there won’t be many of these, and they will be easily quelled and controlled by the society at large; once numbers become more significant, society begins to splinter into subcultures, and these louts can become a gang, sparking difficulty with those living around the colony as well as internal strife. They can become a significant problem for the colony.

Four times a year, on the second full moon of the season, the Fairies have a celebration with an outsider as guest of honor. This outsider is chosen by a process called the Fabrinelle, a kind of treasure hunt through the surrounding lands. To be chosen, the person must be a true lover of nature. At the end of the night of wild celebrations, the guest is given a gift of some sort and an honored role in Fairy Society; he or she may call upon the Colony to aid them in some struggle or task that is beyond them. This power, once used, is lost forever.

On rare occasions, a guest may wish to remain with the fairies permanently. It is up to the King to determine if this is possible, and to make any arrangements necessary, but his primary task is to ensure the security of the Colony; there are times when this makes the request impossible. Some Kings, especially those without the guidance of a Queen, have made poor choices in this regard, such as replacing a child guest with a simulacrum – a changeling – who will fall ill and seem to ‘die’ over the next month or so.

Of secondary importance is that the request must not create conflict between the family of the guest and the colony.

Those who are permitted to remain are transformed permanently into fairies and become members of the colony like any other.

Fairy Political Structure:

On paper, it’s a Monarchy – but Fairies don’t use paper. Kingship rotates through the male population on a weekly basis. The Kings from the previous two weeks and the one who will assume the throne next week form a council of advisors, providing some semblance of continuity. If a King is wed, his wife becomes Queen. The role of the Queen is to provide a conduit between the rest of the colony and the throne. She is also in charge of the recreational activities of the colony, usually some one-in-all-in social occasion.

It is when a King is unwed that things can get messier. The King has the authority to choose as his consort any unwed female who will have him, and she will then act as Queen for the remainder of the King’s Reign, but she has no training or authority to organize events, so the King does that himself – usually more masculine activities like hunts.

Fairy Activity Orientation:

As a general rule, fairies are neither nocturnal nor diurnal – they rise with the moon and set with it, though they can function outside these hours at need. To human observers, their daily cycle drifts about 50 minutes later every earth day; one week they are active at midnight, and two weeks later they are active at noon.

During the New Moon phase, the fairies rise and set almost exactly with the Sun. This is likely their most stressful time – they are active when the “Big People” (humans) and daylight predators are most active, and they lack the cover of night.

Clothing and Equipment:

Fairy clothing is generally made of leaves that have been treated with tree-saps to stiffen them and bind layers together, then magically hardened. Their very best armors are as protective as those used by human SWAT teams.

They carve many implements from wood and then preserve them with lacquers. Because of their small size, these can possess incredible delicacy and detail.

They forge metal through (magical) transmutation and melt/cast/smith it using magical fires. A single “blacksmith” might be one artisan and 15 or 31 others generating the heat. 256 fairies casting in unison can produce brief bursts of plasma-cutter temperatures.

Domiciles & Structures

Edible tree sap isn’t the only type that Fairies use. They dry sap out into flat sheets, usually sandwiched between two leaves, building up layers which they treat magically to make them more resistant and resilient – at least as hard as granite, depending on the number of layers. These are then assembled and joined to construct homes and other structures.

The most common practice is to suspend these from tree branches, but every Colony has a different approach. The most grandiose structures may be suspended from multiple sides enabling a much larger construction – these can be full-on medieval palaces in miniature. But most structures are smaller and more humble.

The simplest structures are round, like beehives.

By far the favorite place to reside if one isn’t entitled to a ‘palace’ or ‘castle’ is in the hollow of a tree. These can be extensively and elaborately carved internally while little or nothing is visible from the outside save some internal illumination through windows.

They can sharpen sticks by coating them in resin, wrapping a leaf around each, and transforming them in the same way. A ‘forest’ of 3-6 inch spikes surrounding a colony for a couple of feet – with gaps big enough for the feet of any human(oid) visitors – is enough to discourage most predators; these spikes are needle-sharp and capable of penetrating the hardest hooves. If the colony has been attacked in the past, other refinements may be added to inflict poisons or diseases on hostile entities. This is also how they make their javelins and arrows.

This often makes a colony in a relatively safe environment confident enough to build dwellings on the ground as well as aloft, though only the lowest social classes would live there.

Fairy Magic

This is generally more elementary than that of a human mage, and more elemental, but it is capable of great subtlety, and backed by enormous power, because the whole clan participates in the casting. They may only have 1 mana point each, but 500 or 600 fairies cast spells more powerful than most human mages can even contemplate.

They recover that 1 Mana point almost instantly – it actually takes 5 or 6 seconds.

Fairy Spells tend to blow some aspect of the spell out to extremes.

Base area is proportionate to their size, so about 6 non-game inches to a hex.

In practical terms:

    log [Area (square feet) x 12 / 6] / log(2) = area modifier.

So double the area (or less) for a +1 modifier, or halve the area for a -1 modifier.

    EG: 10 sqr feet: 10×12/6 = 20; log(20)/log(2) = 4.3 so this is a +5 modifier.

Note that you don’t need a calculator. 2; 4; 8; 16; 32. 32 is more than 20, so we stop doubling. Count the number of doublings: 5. So x20 = +5 – and so is any multiplier from x17 to x32.

  • 1 square foot = +1. This is the area to affect a human-sized individual.
  • 10 sqr ft area is x20, so +5.
  • 20 sqr ft area is x40 = +6.
  • 100 sqr ft is x200 = +8.
  • 1000 sqr ft is x2000 = +11.
  • 10,000 sqr ft is x20,000 = +15 (a large stadium).
  • 1 square km = 1.55e+9 sqr inches = x1.55e+9 / 6 = x258,333,333.3 = +28.
  • 1 sqr mile = 4.01451e+9 sqr inches = x 4.01451e+9 / 6 = x669,085,000 = +30.
  • 25 sqr km (5km x 5km) (a moderate city) = x6,458,333,333.3 = +33
  • 22.7 sqr miles (Manhattan island)= x15,188,229,500 = +34
  • 100 sqr miles (a larger city) = x66,908,500,000 = +36.
  • 12,367 sqr km (Greater Sydney) = x3,194,808,333,333.3 = +42
  • 30-40,000 sqr km (small Western European Country) = x7,750,000,000,000 – x10,333,333,333,333.3 = +43 to +44
  • 100,000 sqr km (average Western European Country) = 25,833,333,333,333.3 = +45
  • 540,000 sqr km (France) = x139,500,000,000,000 = 1.395e+14 = +47 (barely)
  • 7,660,000 sqr km (continental US) = x1.978833e+15 = +51
  • 255 million sqr km (Earth Hemisphere) = x6.5875e+16 = +56
  • 510 million sqr km (Earth) = x1.3175e+17 = +57
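The doubling shortcut amounts to taking a ceiling on a base-2 logarithm. A sketch of the square-feet version of the formula (the function name is mine):

```python
import math

def area_modifier(square_feet: float) -> int:
    """Spell area modifier: ceil( log2( area_sqft * 12 / 6 ) )."""
    return math.ceil(math.log2(square_feet * 12 / 6))

for sqft in (1, 10, 20, 100, 1000, 10000):
    print(sqft, "sq ft ->", f"+{area_modifier(sqft)}")
```

This reproduces the square-feet entries in the list above (+1, +5, +6, +8, +11, +15).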

Duration: the base is instant (+0), then 1 second (+1), as usual. The calculation is the same, as you will observe below.

  • 1 minute = 60 sec = x60 = 1+log(60)/log(2) = +7.
  • 5 mins = 300 sec = x300 = 1+8.2 = +10.
  • 30 mins = 1800 sec = x1800 = +12.
  • 1 hr = 3600 sec = x3600 = +13.
  • 1 great-hand of minutes = 64×60=3840 sec = x3840 = +13.
  • 1 hand of life = 4 great-hands of minutes = x3840x4 = x15360 = +15
  • 6 hrs = 21,600 sec = x 21,600 = +16.
  • 1 sky-cycle (lunar rise to lunar set) = approx. 12 hrs 43 min = 45780 sec = x45780 = +17
  • 1 long-day (max lunar rise to set, occurs every 18.6 years) = 18.5 hrs (max) = x66600 = +18. Most will be +17.
  • 1 day = x24x60x60 = x86400 = +18.
  • 1 Fairy-day = x(86400+50) = x86450 = +18
  • 1 Fairy-week = x7x86450 = x605,150 = +21
  • 2 hands of fairy days = 1 half-cycle = x8x86450 = x691600 = +21
  • 1 hand of hands of fairy days = 1 cycle = x1,383,200 = +22
  • “15” cycles = 13 cycles = 1 season = x13x1,383,200 = x17,981,600 = +26
  • 1 hand of seasons (1 year) = x4x17,981,600 = x71,926,400 = +28
  • 1 hand of years (4 years) = x4x71,926,400 = x287,705,600 = +30
  • 2 hands of years (8 years) = x2x287,705,600 = x575,411,200 = +31
  • 2 hands of hands of years = 32 years = 1 Fairy generation = 2x4x4x287,705,600 = x9,206,579,200 = +35
  • 1 great-hand of years = 2 Fairy Generations = 1/4 of an age = 1.841316e+10 = +36
  • 1 hand of great-hands of years = 8 Fairy Generations = 1/2 an age = x4x1.841316e+10 = x7.365264e+10 = +38
  • 2 hands of great-hands of years = 16 Fairy Generations = an age = x2x7.365264e+10 = x1.4730528e+11 = +39
  • 1 great-hand of great hands of years = 4096 years = an ‘eternity’ = x4096x71,926,400 = x2.946e+11 = +40
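Durations follow the same logarithmic ladder: +0 for instant, +1 for 1 second, then a ceiling of 1 + log2 of the duration in seconds. A companion sketch (again, my own helper):

```python
import math

def duration_modifier(seconds: float) -> int:
    """Duration modifier: instant = +0, 1 second = +1, then logarithmic."""
    if seconds <= 0:
        return 0  # instant
    return math.ceil(1 + math.log2(seconds))

print(duration_modifier(60))     # 1 minute  -> +7
print(duration_modifier(3600))   # 1 hour    -> +13
print(duration_modifier(86400))  # 1 day     -> +18
```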

Difficulty in breaking spells:

  • Caster level required +1 = +1
Adapting to D&D Spells:

    Colony Size / (Spell Level* +1) = maximum total pluses (round down).
    Area pluses + Duration pluses + Difficulty-in-breaking pluses = total pluses spent, which cannot exceed that maximum.

      * includes any additional caster levels to achieve desired effect level.

Kings can choose to cast with lower total pluses; the above sets maximum levels.

A Fairy Queen. Image by Jim Cooper from Pixabay, cropped by Mike

As a general rule, choose the spell effect that you want and then select the spell that best fits. “Bless” and “Curse” are frequent choices.

    EG “May it rain on you, wherever you roam, regardless of cover, for an entire season.”
    Curse, 1st level spell. Human sized individual. Colony of 85 faeries.
    85 / (1+1) = 42.5, rounds to 42. So an individual could be cursed for more than 4096 years. But let’s play it safe (for the colony) and limit the curse to a season (+26). And let’s spend +10 adding to the caster level requirement of any mage or cleric who attempts to lift the curse, for a total of 1 (area) + 26 (duration) + 10 = 37. This leaves 5 unallocated.

Casting Consequences

A colony casting a spell is literally doing so with their life-force. It’s not done trivially.

    (30 x actual total pluses / maximum total pluses) + spell level + 10 = % of colony half-killed = 2 x % of colony killed (round both down).

    % colony killed can be reduced by X% by increasing the % half-killed by 2 x X% and reducing the number of pluses AFTER the above calculation by 0.5 x X.

    EG Continued: 30 x 37/42 = 26%. 26+1+10=37% half-killed and 18% killed. We can use the 5 pluses remaining to reduce the death penalty by 10, to 8%. This adds +20% to the number half-killed, for totals of 8% killed and 57% half-killed.

Not a trivial exercise at all; this curse is right at the limits of what a colony this small can do.
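For GMs who want to automate the bookkeeping, the base casualty formula above reduces to a few lines. This sketch covers only the core calculation (the plus-spending trade that buys down fatalities is noted in the docstring), checked against the 85-fairy curse example:

```python
def casting_casualties(actual_pluses: int, max_pluses: int,
                       spell_level: int) -> tuple:
    """(30 x actual / max) + spell level + 10 = % of colony half-killed;
    % killed is half of that. Both round down.

    Remaining pluses can then reduce the killed %: each -X% killed
    costs 0.5 * X pluses and adds 2 * X% to the half-killed total.
    """
    half_killed = int(30 * actual_pluses / max_pluses) + spell_level + 10
    killed = half_killed // 2
    return half_killed, killed

# The season-long curse: colony of 85, 1st-level spell, 37 of 42 pluses.
print(casting_casualties(37, 42, 1))  # (37, 18)
```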

    Comparison example: Colony of 170 (twice the size): “May it rain on you, wherever you roam, regardless of cover, for an entire YEAR.”
    Curse, 1st level spell. Human sized individual.
    170 / (1+1) = 85. Duration: 1 year (+28). +20 caster level requirement of any mage or cleric who attempts to lift the curse, for a total of 1 (area) + 28 (duration) + 20 = 49. This leaves 36 unallocated.

    30 x 49/85 = 17% half killed, 8% killed. Reduce the 8% to 0: uses 4 additional pluses, plenty in reserve. Totals: 0% killed, 17+16=33% half hit points (recovered at 1 per day as usual).

Not only is this a nastier spell (it lasts a year and is harder to dispel), the colony is able to cast it with relative impunity.

Let’s nasty it up a little more, so that it not only affects the individual but anyone physically close to them.

    Comparison example: Colony of 170 (twice the size): “May it rain on you and any who approach you, wherever you roam, regardless of cover, for an entire DECADE.”
    Curse, 1st level spell. Human sized individual + surrounds = 5′ x 5′ area.
    170 / (1+1) = 85.
    Area: 5′ x 5′ = 25 sqr ft: 25×12/6 = x50; log(50)/log(2) = 5.64, so +6.
    Duration: A decade isn’t on the list, but 8 years (+31) is, and ten fairy-years falls within the same power of two, so a decade is also +31.
    +23 caster level requirement of any mage or cleric who attempts to lift the curse.
    Total of 6 + 31 + 23 = 60. This leaves 25 unallocated.

    30 x 60/85 = 21% half killed, 10% killed. Reduce the 10% to 0: uses 5 additional pluses, still plenty left over. Totals: 0% killed, 21+20=41% half hit points (recovered at 1 per day as usual).

Half-killing almost half the colony is about as far as it’s reasonable to go; anything more risks the colony’s survival, should a predator find them.

    One more example:
    “May every building you enter burn to the ground for the rest of your natural life” (man, the King must really be pissed off at the target!)
    Colony Size 400.
    Spell: Fireball (3d6), Level 3 spell, plus 2 caster levels to get 3d6 = level 5.
    Max Bonuses = 400 / (5+1) = 66.
    Area: 20′ x 20′ = +6.
    Duration: +37.
    Dispel Difficulty =+7
    Total = 6+37+7=50, leaves 16.

    30 x 50 / 66 = 22% half-killed, 11% killed. Protect the 11% = +6 levels, 10 in reserve.
    Net cost: 22+22=44% half hit points, no fatalities.

Note that this is right on the edge for a colony of this size, which is close to as big as they come. Maybe the colony could have afforded another +5 dispel difficulty. But most spell-casters would be disinclined to help if the practice of consulting them burned down their houses, so maybe that’s not necessary.

Personal Magic

In addition to the major castings above, which always involve a ritual and a whole colony, most fairies are capable of smaller, more temporary ‘personal magic’ – making vines and tree limbs light up with glowing ‘fairy light’, shrinking visitors so that they can enter fairy homes, etc. No such magic effect can last for more than a day, and most last far less. It is ten times more efficient to sustain an existing spell than it is to cast it anew.

Fairy Personalities

Fairies are generally lighthearted and friendly, though some have nasty senses of humor. A few – generally marked for greatness within their society as a result – are capable of being more serious, more judgmental, and exhibit gravitas that far outweighs their stature. Relatively few are the sly, cunning, scheming types; most are happy-go-lucky, taking life one day at a time as it comes to them.

These moods and attitudes vanish instantly when the colony feels under threat. Fairies are capable of an anger that has to be seen to be believed, and can sustain it for generations. Hillbilly feuders have nothing on these folks when someone earns their enmity. Entire colonies have uprooted and moved simply to be in a better position to harass someone the Fairies think worthy of that level of enmity – though it is more common for a colony to split over such an issue.

One of the fastest ways to earn such enmity is a failure to respect nature. Fairies have no theology as such, but they are fiercely protective of the environment around them. This is not all that surprising: as the land on which they abide sickens or is befouled, so the fairies succumb to ill-health. They are bound to the life of the nature which surrounds them, and they guard and protect it as fiercely as they guard and protect themselves.

Dishonesty and misrepresentation are the second-fastest ways to arouse a Fairy’s ire. A Fairy’s word is inviolable; one would die before breaking it, sacrificing their entire family if need be. And they don’t care about ‘the letter of the law’; they operate on the intention of the principle as spelled out in the original agreement. They never forget the exact wording of an agreement reached, and never forget, ignore, or obfuscate the intention behind it; if an agreement is no longer fit to serve that purpose because circumstances have changed, or if the intended purpose becomes out of date, the whole agreement needs to be renegotiated – it cannot be amended. At the same time, Fairies have no equivalent of the human sense of Honor, because that implies dishonor, which is unthinkable in a Fairy. They are natural seekers of Justice.

Educated Fairies

Fairies with natural Gravitas are natural leaders, and are groomed for that role. About 1% of the population are natural geniuses (by Fairy standards), with two or even sometimes three times the intelligence of the smartest ‘typical’ fairy. It is very common for these to get an initial education by listening outside the windows of human institutions, becoming fascinated by words, stories, and higher learning. When recognized, if it is socially acceptable to the culture outside the colony, these may even be sent to study at a more advanced institution or at the feet of a non-Fairy master of some sort. Eventually, these ‘expatriates’ return to the colony and learn to apply what they have learned – be it the cultivation of food stuffs, new construction techniques, new science, or whatever. They frequently become advisors to the crown – whoever happens to be wearing it this week.

Note that they adapt the knowledge they have gained to Fairy Society and its benefit, and not the other way around. Anything learned that requires a change in social structures or patterns has to be put to the colony as a whole, and may not be implemented until all not only understand it but approve of the change. Anything that can’t be used within this structure is discarded.


The Power Of 1 on Root R


Today, I offer a new technique for rolling multiple dice many times with great efficiency. Any RPG can benefit from that!

Sometimes, the shortness of the road can make up for rougher conditions. Image by Nataly from Pixabay

I hope everyone had a wonderful Christmas break. Mine was great, though not without its challenges – but I have evidently weathered them, because here we all are, in a bright and shiny New Year!

This isn’t going to be a long post – but it is going to be a profound one. In the adventure I’m currently working on for the Zenith-3 campaign, a situation arose in which a character was going to be exposed to multiple minutes of an environment doing damage to him every turn.

Not just a few dice, but a lot of dice. Fortunately, he also has a lot of protection. How many dice, and whether or not that protection was going to be enough, would depend on what the character chose to do.

(Note that I’m being circumspect because this adventure hasn’t been run yet).

He could choose to head into the danger and incur a higher rate of damage. He could try to get out of danger by the shortest possible route – which also incurs that higher rate of damage but only for a relatively short time. Unless he gets lost along the way – a potential real danger. He has other options, as well.

So I didn’t know how many dice a round he would be taking, but I knew this: there are 3 twenty-second rounds in a minute (or 6 10-second rounds – the latter is our default, the former something I’m experimenting with). That’s 15 rolls of 8-to-10d6 every five minutes. And the character could be waiting in this situation for 20, 30, 40 minutes or more.

120 or more rolls of 8-to-10 d6 each. And apply defenses to each. And calculate damage from each. And accumulate that damage from each. And recover some of that damage from each.

It might take as little as two minutes to do each, but it would probably be more. FOUR HOURS of making rolls while everyone twiddled their fingers.

There had to be a better way. And then I thought of one, and got Google Gemini to help flesh it out and make it real.

The Principle

As you make more and more rolls, they become more and more inclined to average out. That’s one of the abiding principles harnessed by The Sixes System, and it’s something I understood very clearly. So why not leverage that fact? Roll ONCE and apply a mathematical manipulation to that result to get the outcome of R rolls.

Sounds incredibly simple, doesn’t it? Well, it’s not quite that easy, but it’s pretty close to it.

The procedure

  1. Roll Once.
  2. Subtract the average roll to get Delta.
  3. Determine R, the number of Rolls that this calculation is going to represent.
  4. Multiply the Delta by 1/ (R^0.5).
  5. Add the average roll to the result.
  6. Apply any modifiers that are applicable to every roll. The result is the average result over the totality of R rolls.
  7. Multiply by R.
  8. Apply any other adjustments. This gives you the total effect at the end of those R rolls.

This sounds complicated, but in most RPGs it will be even simpler.
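The eight steps above can be sketched as a single function. This is my own rendering, not an official implementation; `avg_roll` is the statistical mean of the dice being rolled (3.5 per d6), and the numbers match the worked example below:

```python
import math

def one_roll_for_r(roll: float, avg_roll: float, r: float,
                   per_roll_modifier: float = 0.0) -> float:
    """Collapse R identical rolls into one: damp the single roll's
    deviation from the mean by 1/sqrt(R), then scale back up by R."""
    delta = roll - avg_roll                                  # step 2
    damped_delta = delta / math.sqrt(r)                      # step 4
    per_roll = avg_roll + damped_delta + per_roll_modifier   # steps 5-6
    return per_roll * r                                      # step 7

# 8d6 (average 28), rolled 33, R = 12, defenses subtract 20 per roll:
total = one_roll_for_r(33, 28, 12, per_roll_modifier=-20)
print(round(total, 2))  # 113.32
```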

An example

Let’s pick… 8d6 damage, 12 rolls over 12 rounds. Defenses subtract 20 from the result. Anything that gets through the defenses also does x3.5 Stun damage. At the end of each minute, the character gets 25 Body back and 50 Stun. He has a pool of 120 HP and 240 stun to draw upon.

  1. I roll 8d6 and get 33.
  2. The average of 8d6 is 8 x 7 / 2 = 28. Delta = +5.
  3. R = 12.
  4. Delta x 1 / (R^0.5) = 5 / 12^0.5 = 5 / 3.464 = 1.4434
  5. Add the average roll 28 + 1.4434 = 29.4434.
  6. Subtract Defenses of 20 = 9.4434.
  7. Multiply by R = 12 x 9.4434 = 113.3208. Round in the character’s favor to 113. Multiply this by 3.5 for the Stun = 395 stun damage.
  8. If 3 rolls is a minute, 12 rolls is 4 minutes, and the character gets 4 x 25 = 100 HP back and 4 x 50 = 200 Stun back. So his losses at the end of the 4 minutes are 113-100=13 HP and 395-200=195 stun.

That took about 5 minutes to do – but I was typing explanations. If I just did it? 2 minutes, tops – 60 to 90 seconds, more likely.

Another example

There are 25 men defending a castle wall. There are 200 archers attacking them, and each archer gets 2 shots per round. Each shot does 1d6 if it hits. The archers have a 3 in 20 chance of hitting, and half of those hits will strike the castle wall instead, so it’s effectively 1.5 on d20. Archers have to inflict 20 points of damage to kill a target.

There are a couple of preliminary calculations needed for this example.

  • 200 x 2 x 1.5 / 20 = 30 hits per round.
  • Distributed over 25 men, that’s effectively 1.2 hits per defender per round.
  • At an average of 3.5 points per hit, that’s an average of 4.2 damage per defender per round.
  • At 20 needed, that’s an average of 20 / 4.2 = 4.76 rounds of combat.

That’s all well and good, but we don’t want averages – we want specifics.

So let’s do 5 x 6d6 per round for 4 rounds and see where we’re at (5 x 6 = 30).

  1. Roll 6d6. I get 18.
  2. The average of 6d6 is 6 x 7 / 2 = 21. Delta is -3.
  3. R = 4.
  4. -3 x 1 / 4^0.5 = -3 / 2 = -1.5.
  5. -1.5 + 21 = 19.5.
  6. 19.5 x 4 = 78.
  • 78 points distributed amongst 25 men is 3.12 points per man.
  • For every man who’s taken twice that, there will be one who’s taken half that. So 1.56 and 6.24.*
  • Repeat: 0.78 and 12.48.
  • Repeat: 0.39 and 24.96.
  • Six numbers, so out of every 6 defenders, 1 is dead, 1 is half-dead but still fighting, and 1 is wounded slightly.
  • 25 defenders, so the total is 25/6=4 dead, four half-dead, four lightly wounded, 13 virtually whole.
  • * Assuming the roll is symmetrical.**

    ** Okay, this isn’t quite true – if there’s a minimum result, the true mirror of a result sits the same distance below the maximum as the result sits above the minimum, not at exactly double or half. But halving and doubling is a lot quicker and easier, and it works even when you don’t know what the maximum is, as in this case.

Specifics vs Averages – it makes a VERY big difference.
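The halve-and-double spread used above is easy to mechanize. A sketch (the function name is mine), reproducing the 3.12-points-per-man example:

```python
def damage_spread(per_man_average: float, steps: int = 3) -> list:
    """Build a rough damage distribution around the average by
    repeatedly halving and doubling, assuming a symmetrical roll."""
    spread = [per_man_average]
    for i in range(1, steps + 1):
        spread.insert(0, per_man_average / 2 ** i)  # lighter hits
        spread.append(per_man_average * 2 ** i)     # heavier hits
    return spread

print([round(x, 2) for x in damage_spread(3.12)])
# [0.39, 0.78, 1.56, 3.12, 6.24, 12.48, 24.96]
```

Anyone at or above the kill threshold (20 points here) is dead; divide the defenders evenly across the bands to get the butcher’s bill.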

I would then run the same calculation for the defenders taking down attackers. About 4 minutes to run 4 rounds worth of siege.

But the next time around, I’d be informed by the results of the first run and increase R to 6 or 8, and run the attack in bigger ‘chunks’ of time.

Useful R values

If you can arrange it, the following R values are especially convenient, for reasons that should be obvious: 4, 9, 16, 25, 36, 49, 64. The square roots of these numbers are 2, 3, 4, 5, 6, 7, and 8, respectively.

Perhaps less obvious are 2.25, 6.25, 12.25, 20.25, 30.25, 42.25, and 56.25. These become 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, and 7.5, respectively.

Wait, What? “2.25” rolls? “2.25” rounds? How does THAT work?

The “round” or “turn” is an artificial construct. It doesn’t actually exist, it’s just a convenient dividing line. Multiply by the number of minutes or seconds in one, and you get real-world units of, respectively, minutes or seconds.

And that works in the other direction, as well. Let’s say there are 12 seconds in a round – then 2.25 rounds is 2.25 x 12 = 27 seconds.

Or, let’s say there are 15 seconds in a round, and a character has to run through a danger zone, which will take him 72 seconds at his movement rate. 72 / 15 = 4.8 rounds. Not 4 rounds or 5 rounds – 4.8 rounds.

Or, to go back to the original trigger for all this – the character might spend 16 minutes in the 6d6 zone, then cross 100m of 8d6, 100m of 10d6, and 200m of 12d6. Most movement rates aren’t going to translate those distances into neat time intervals when they are measured in rounds. Seconds, maybe, maybe not, but rounds? Almost certainly not.

Three Final Tips

    Tip #1

    If you really want your results to FEEL like you’d rolled them all, aim for an R that is one less than required and add one totally legitimate random roll. In reality, this inflates the randomness more than is warranted, but it gives the right ‘feeling’ in play.

    So if your true R is 15, use R=14. One random roll feeds into the calculation, and one stands alone. I do NOT recommend this, though – it’s an extra set of die rolls for not enough reward.

    Tip #2

    The second one is this: if you have a long interval, break it into smaller chunks and a smaller R, and generate a new ‘seed value’ for each chunk. For 20, 30, or 40 minutes? 5 or 6 minutes at a time. For longer? 10, or 15. For even longer? 20.

    Divide the time by the total number of rolls that you want to make. That will tell you how long each chunk should be – just round to the nearest convenient number.

    Tip #3

    The more granular the die roll, the better this works. Let that sink in for a moment. It’s not just that the system processes 12d6 just as quickly as it does 6d6, saving more time; the results are qualitatively more nuanced.

    But that granularity is also enhanced with higher R values.

    That implies a sweet spot – and it’s going to be roughly found at (R x N) ^0.5. And the closer that R and N are, therefore, the closer you are to the sweet spot – without even calculating it.

    If you have a choice between 15 dice and R=8 or 10 dice and R=12, the second one will give the best results.

    If you have a choice between 60 dice and R=4 vs 15 dice and R=16, the second one wins every time. Not just in ease of rolling, but in quality of result.
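To make those comparisons concrete, here’s a small sketch. The distance measure is my own illustration of “closer to the sweet spot”, not a formula from the system:

```python
import math

# Sweet spot suggested in the text: (R x N) ^ 0.5.
def sweet_spot(n_dice, r):
    return math.sqrt(n_dice * r)

# My own (illustrative) measure of how far a (dice, R) pairing sits
# from that sweet spot: smaller is better.
def distance_from_sweet_spot(n_dice, r):
    s = sweet_spot(n_dice, r)
    return abs(n_dice - s) + abs(r - s)

print(distance_from_sweet_spot(15, 8))    # first option
print(distance_from_sweet_spot(10, 12))   # second option: much closer
```

Both pairings in each comparison have the same product (so the same sweet spot); the second is closer to it in each case, matching the advice above.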

Well, that’s the power of 1 on Root R. Hopefully it’s useful out there!


The Adverse Effects Engine


The AEE is a subsystem that slots into any RPG for simulating everything from Bad Weather to Plagues & Poisons.

Time Out Post Logo

I made the time-out logo from two images in combination: the relaxing man photo is by Frauke Riether, and the clock face image (which inspired the text rendering) was provided by OpenClipart-Vectors; both were sourced from Pixabay.

The Backstory

A while back, I was working on an adventure for one of my campaigns (being deliberately vague, here) and I needed to look up the effects of Cobra Venom in the Hero System.

I wasn’t impressed – this stuff is supposed to be dangerous, even deadly, and what was offered in the bestiary supplement would barely kill a child.

And this particular venom was supposed to derive from supernatural Cobras summoned by a pissed-off deity. So that wouldn’t cut it.

I developed the Venom described in the box below, but wasn’t very happy with it – too fiddly, and perhaps a touch TOO lethal.
 
 
 
 
 

PER HIT:

  • Immediate on exposure: -5 all primary stats -2 PD -2 ED -10 END -1 ALL SKILLS -2 OCV -2 DCV plus 10 STUN 1 BODY dmg
  • Round after exposure: -3 all primary stats -1 PD -1 ED -6 END -1 ALL SKILLS -1 OCV -1 DCV (all cumulative) plus 10 STUN 2 BODY dmg
  • 2nd round after exposure: -2 all primary stats -4 END -1 ALL SKILLS -1 OCV -1 DCV (all cumulative) plus 5 STUN 3 BODY dmg
  • 3rd, 4th, rounds after exposure: -1 all primary stats -2 END plus 3 STUN 2 BODY
  • 5th round after exposure: -1 all primary stats -1 PD -1 ED -2 END -1 ALL SKILLS -1 OCV -1 DCV plus 2 STUN 1 BODY
  • 6th, 7th round after exposure: as per 3rd & 4th rounds
  • 8th round after exposure: as 5th round
  • 9th, 10th round after exposure: -2 END plus 2 STUN 1 BODY

These are accompanied by appropriate physical & mental responses – shaking, stumbling, delirium, semi-consciousness, poor decision-making, extreme pain (burning sensations), etc. The wound site will blister as though exposed to Mustard Gas or a gas stove’s flame, and the effect will slowly spread through the 10 rounds, starting at 2-3 cm diameter and growing +1 cm in diameter each subsequent round.

TOTAL EFFECTS:

    -5-3-2-2-1-2-1= -16 all primary stats;
    -2-1-1-1 = -5 PD same ED;
    -10-6-4-2-2-2-2-2-2-2 = -34 END;
    -1-1-1-1-1=-5 ALL SKILLS;
    -2-1-1-1-1=-6 OCV & DCV;
    10+10+5+3+3+2+3+3+2+2 = 43 STUN
    1+2+3+2+2+1+2+2+1+1 = 17 BODY

Clothing: Adds 1 round delay to the above

A tourniquet: Halves the rate of effect shown

Antivenom: Stops effects instantly, restores 1/4 of the damage taken to stats & skills (round down)

If the character survives the course of the attack and does not get hit again, he can recover:

    1 Primary stat point (each stat) / 30 mins
    1 OCV & DCV / 30 mins
    1 Secondary stat point / hour
    END as Normal
    STUN as 1/2 Normal
    BODY as Normal

Those second thoughts didn’t happen right away – in fact, about a year passed between generating the above and reviewing it, and we’re still nowhere near it appearing in play (it may never do so) – so I marked it for reconsideration and moved on to higher-priority tasks.

Then, a few weeks ago, in Traits of Exotic d20 Substitutes pt 1, I casually tossed out a completely original system (inspired by the Sixes System, for which I still have to write the final part).

A number of people seemed to like its elegance and simplicity and flexibility. So, a couple of days later, when I came across my note to review the Cobra Venom, the two thoughts clicked together.

But, to actually be usable in play, I needed to dig deeper into what was a casual aside at the time. And so, here we are.

The Core System

The GM specifies N dice, and a target of T sixes. At intervals (generally fixed by the GM but may be variable), the character rolls Nd6. Any sixes are counted towards T, until the total is T or more.

    If one 1 is showing, something bad happens (specified by the GM but not necessarily announced).

    If two 1s are showing, something worse happens (specified as above). Or the same bad thing happens twice. Or the same bad thing happens, and some other bad thing happens. Whatever – it’s worse.

    If three 1s are showing, something really bad happens (specified as above). And T might increase by 1. Or one of the alternatives listed previously. It’s useful to be consistent.

    If four or more 1s are showing, something catastrophically bad happens and T increases by 1 or more. Or (you guessed it) as above.

    You also have the option of specifying a very small ‘something bad’ if no 1s are showing, just to remind the victim that they have this hanging over their head.

The GM controls the severity of each level of effect, the frequency of rolls, the size of the rolls (N), and the target (T). The combination of N and T also dictates what the frequency of occurrence of the different levels of penalty should be.

Nice, neat, and simple – in theory.
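In code, that core loop really is only a few lines. A minimal Python sketch (function and parameter names are mine):

```python
import random

def run_affliction(n_dice, target, rng):
    """Roll n_dice d6 per interval, accumulating sixes until `target` is reached.
    Returns the count of 1s showing on each roll - the 'something bad' trigger."""
    sixes = 0
    ones_per_roll = []
    while sixes < target:
        roll = [rng.randint(1, 6) for _ in range(n_dice)]
        sixes += roll.count(6)
        ones_per_roll.append(roll.count(1))
    return ones_per_roll

rng = random.Random(1)  # seeded so the run is repeatable
history = run_affliction(5, 4, rng)
print(f"took {len(history)} rolls; 1s per roll: {history}")
```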

To really use it in practice, the GM needs a way to estimate what the total effects are likely to be. Then he can adjust the penalty levels and N and T accordingly to get exactly what he wants the probable outcome to be.

Or he can start with predetermined outcomes in mind and divide them up into the different penalty levels according to a convenient pairing of N and T, based on E, the number of rolls it’s expected to take to reach T.

On Today’s Menu

I’m going to outline the process in full, with tables and convenient shortcuts built in for the GM, for the first approach. Then I’ll outline the second in a shorter format, because it will use the same tables as the first approach.

When I was planning and contemplating this expansion, I also thought up a number of variations, so I’ll describe them and their impacts as the cherry on top.

Set N and T

These should always be determined by E, the expected number of rolls to reach T rolling N dice at a time.

    T=1, for N=1 to 8: 6, 3, 2, 2, 2, 1, 1, 1
    T=2, for N=1 to 8: 12, 6, 4, 3, 3, 2, 2, 2
    T=3, for N=1 to 8: 18, 9, 6, 5, 4, 3, 3, 3
    T=4, for N=1 to 8: 24, 12, 8, 6, 5, 4, 4, 3
    T=5, for N=1 to 8: 30, 15, 10, 8, 6, 5, 5, 4
    T=6, for N=1 to 8: 36, 18, 12, 9, 8, 6, 6, 5
    T=7, for N=1 to 8: 42, 21, 14, 11, 9, 7, 6, 6
    T=8, for N=1 to 8: 48, 24, 16, 12, 10, 8, 7, 6

or, you might prefer to pick an N and then a T:

    N=1, T=1 to 8: 6, 12, 18, 24, 30, 36, 42, 48
    N=2, T=1 to 8: 3, 6, 9, 12, 15, 18, 21, 24
    N=3, T=1 to 8: 2, 4, 6, 8, 10, 12, 14, 16
    N=4, T=1 to 8: 2, 3, 5, 6, 8, 9, 11, 12
    N=5, T=1 to 8: 2, 3, 4, 5, 6, 8, 9, 10
    N=6, T=1 to 8: 1, 2, 3, 4, 5, 6, 7, 8
    N=7, T=1 to 8: 1, 2, 3, 4, 5, 6, 6, 7
    N=8, T=1 to 8: 1, 2, 3, 3, 4, 5, 6, 6

Don’t worry about these not lining up in neat columns; the same information is available in the tables below.
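If you’d rather check or extend these values yourself, E can be estimated by brute force. A Monte Carlo sketch (the listed values were derived analytically and then rounded, so expect small differences):

```python
import random

def estimate_E(n_dice, target, trials, rng):
    """Average number of rolls of n_dice d6 needed to accumulate `target` sixes."""
    total = 0
    for _ in range(trials):
        sixes = rolls = 0
        while sixes < target:
            rolls += 1
            sixes += sum(1 for _ in range(n_dice) if rng.randint(1, 6) == 6)
        total += rolls
    return total / trials

rng = random.Random(42)
# T=1, N=2: the exact mean is 36/11, about 3.3, which the list rounds to 3.
print(round(estimate_E(2, 1, 20_000, rng), 2))
```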

Advice:

I prefer this approach because of the clear patterns shown for N=1, 2, 3, and 6 – but those patterns can be misleading if used for extrapolation, as N=4 shows with its jump from 3 to 5, and N=5 with its jump from 6 to 8 (the stronger example of the two). Since extrapolation can’t be relied on, I will always recommend using the first arrangement, simply because it doesn’t suggest potentially misleading extrapolations.

High-T = long durations, especially with lower N values. That’s suitable for diseases that have a long interval between checks – every 12 or 24 hours, say. But for poisons, you don’t want an E that’s more than 6 or 8, even for the worst ones, and 5-6 is probably a better target even for those. E=3-4 is good for mid-strength poisons, and E=1-2 should really be reserved for only the fastest-acting.

For every really lethal poison or disease, there should be several of the mid-strength variety, and for every mid-strength, many weaker poisons – or so runs one line of thinking. But evolution favors those poisons that are strong enough to take down whatever the poisoner feeds on or is commonly attacked by; it doesn’t happen in isolation. That can cause potency to increase, moderating the earlier trend. So here are a trio of ratios to get you thinking:

    By Theoretical Threat Magnitude: 1: 3: 9-12
    By Evolutionary End-point: 1 : 2 : 3
    Compromise: 2 : 5 : 10

Playing into that decision should be the poison reservoir. In other words, how many bites of the poison cherry can one poisoner deliver?

Size of the creature impacts this – the larger the creature, the larger the venom sacs (or their equivalent).

Here are some real-world assessments:

Tiny/Small – insects, small spiders, scorpions, small centipedes – venom capacity is very low and either single-use or low-frequency bursts. The venom is metabolically costly relative to body size. Often have a single, full dose for immediate defense/predation. Recovery is long (hours/days).

Medium – mid-sized snakes, large spiders, cone snails, large scorpions, etc – Moderate venom capacity, low-moderate frequency of delivery – three uses in quick succession. Capable of venom metering – injecting less than maximum to conserve supply. May deliver a full dose for a large threat, or a “dry bite” (no venom). Can deliver a burst of 2-3 significant bites, then need short recovery (minutes).

Large – large snakes, octopuses, large fishes – high venom reservoirs; moderate-high frequency of use (multiple uses or sustained delivery). The high reservoir allows for multiple, significant envenomations. Gaboon Vipers, in particular, are known for a massive venom yield and the ability to deliver repeated, high-volume strikes. Delivery can be sustained over a short period. Recovery time for full capacity is still long, but practical use is frequent.

As a general rule of thumb, the less venom, the deadlier it has to be, because volume decreases as the cube of linear size. The venom therefore has to become more potent just to keep up. Larger creatures have much more venom, which they can utilize in a number of different ways, one species compared to another. On top of that, smaller creatures are less physically resilient, and need to end combat encounters more quickly in order to survive – so that’s an extra push toward higher toxicity.

The graphic below was provided by Gemini, Google’s AI, and edited by me:

I also asked Gemini to extrapolate its findings to cover giant and ‘dire’ creatures, and this is what it came back with (edited):

Gargantuan Creatures – 5m long spiders, Giant Snakes: Size factor 5-10 x earth “real”. Venom Capacity up to 50x that of normal equivalents. Potency may decrease slightly, but total damage output increases exponentially due to volume. Sustained High Frequency of venom delivery, can deliver (5-10x earth “real”) lethal doses with minimal pause. (May take weeks to recharge but still have sufficient venom for 2-3 encounters while recharging).

Colossal Creatures – 25m sea creatures, “Kaiju” spiders, etc. Size Factor 25+ times earth “real”. Venom Capacity – essentially unlimited. Potency is often low relative to size, but the volume is so immense that it acts as a biological weapon (or breath weapon, acid spray, etc., with toxic effects on top). The creature’s bite/sting is less about injecting a dose and more about dousing the target (and/or the environment around it).

A “Dire Version” is a creature that defies the standard biological trade-off, making it inherently more dangerous and a true “boss” encounter. The Dire modifier should break the Inverse Correlation by increasing both Reservoir Size and Venom Potency.

So, once you have T, N, and E, and have started thinking about bite frequency vs toxicity, the next step is to estimate how often the adverse effects will actually occur.

Probable Occurrence of Adverse Effects

By the way, before it begins – generating this table of results proved too complicated for both Gemini and ChatGPT! Both understood clearly what I wanted them to do, and (as much as an LLM can) why, and generated a solution to the problem of how – that didn’t work.

Repeated corrections were attempted in both cases, and failed. That’s not a measure of my intellect or anything like that – it’s an indication of just how much detailed work lies under the surface of this innocuous-looking table.

If I had a BASIC compiler, I could have written the code myself from one of their algorithms in less time, and in about 20 lines.
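For anyone who wants to reproduce the table, here is roughly that program, sketched in Python rather than BASIC. I’ve approximated E as ceil(6T/N); the exact expectation is slightly higher for some N and T combinations, so a few entries may differ by one step from the published tables. The quarter-wide +/++/+++ encoding bins follow the scheme described in the AI query quoted further down:

```python
from math import ceil, comb

def encode(value):
    """Render an expected count using the 0.25-wide +/++/+++ bins."""
    whole = int(value)
    plus = min(3, int((value - whole) * 4))
    return str(whole) + "+" * plus

def expected_k_ones(n_dice, k, rolls):
    """Expected number of rolls (out of `rolls`) showing exactly k ones."""
    return rolls * comb(n_dice, k) * (1 / 6) ** k * (5 / 6) ** (n_dice - k)

def table_row(target, n_dice):
    e = ceil(6 * target / n_dice)   # approximation of E (see note above)
    return e, [encode(expected_k_ones(n_dice, k, e)) for k in range(1, n_dice + 1)]

print(table_row(4, 5))   # compare with the T=4, N=5 line
```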

Key:

“No +” represents low chance of more. Use the indicated number of occurrences in estimating total impact from impact per occurrence.

“+” represents a moderate chance of more. Use the indicated number of occurrences in estimating total impact from impact per occurrence.

“++” represents a significant chance of more. Use the indicated number of occurrences + 0.5 to estimate the average total impact from impact per occurrence.

“+++” represents a high likelihood of more occurrences than the number shown, and a high confidence of at least this many occurrences. Use the indicated number +1 to estimate the average total impact from impact per occurrence.

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
1 1 6 1
1 2 4 1 0
1 3 3 1 0 0
1 4 2 0+++ 0 0 0
1 5 2 0+++ 0+ 0 0 0
1 6 1 0+++ 0+ 0 0 0 0
1 7 2 0+++ 0+ 0 0 0 0 0
1 8 1 0++ 0++ 0 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
2 1 12 2
2 2 7 1+++ 0
2 3 5 1++ 0+ 0
2 4 4 1++ 0+ 0 0
2 5 3 1 0+ 0 0 0
2 6 3 1 0+ 0 0 0 0
2 7 3 1 0++ 0 0 0 0 0
2 8 2 0++ 0++ 0 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
3 1 18 3
3 2 10 2+++ 0+
3 3 7 2+ 0+ 0
3 4 5 1+++ 0++ 0 0
3 5 4 1++ 0++ 0 0 0
3 6 4 1++ 0+++ 0 0 0 0
3 7 3 1 0++ 0 0 0 0 0
3 8 3 1 0+++ 0+ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
4 1 24 4
4 2 13 3++ 0+
4 3 9 3 0++ 0
4 4 7 2++ 0+++ 0 0
4 5 5 2+ 0+++ 0 0 0
4 6 5 2 1 0+ 0 0 0
4 7 4 1++ 0+++ 0+ 0 0 0 0
4 8 4 1++ 0+++ 0+ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
5 1 30 5
5 2 16 4+ 0+
5 3 11 3+++ 0+++ 0
5 4 8 3 0+++ 0 0
5 5 7 2+++ 1 0 0 0
5 6 6 2+ 1 0+ 0 0 0
5 7 5 1+++ 1 0+ 0 0 0 0
5 8 5 1+++ 1+ 0++ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
6 1 36 6
6 2 19 5+ 0++
6 3 13 4++ 0+++ 0
6 4 10 3+++ 1 0 0
6 5 8 3 1+ 0+ 0 0
6 6 7 2+++ 1+ 0+ 0 0 0
6 7 6 2+ 1+ 0+ 0 0 0 0
6 8 5 1+++ 1+ 0++ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
7 1 42 7
7 2 22 6 0++
7 3 15 5 1 0
7 4 11 4 1+ 0 0
7 5 9 3++ 1+ 0+ 0 0
7 6 8 3 1++ 0+ 0 0 0
7 7 7 2++ 1++ 0++ 0 0 0 0
7 8 6 2 1++ 0++ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
8 1 48 8
8 2 25 6+++ 0++
8 3 17 5+++ 1 0
8 4 13 5 1++ 0 0
8 5 10 4 1++ 0+ 0 0
8 6 9 3++ 1+++ 0+ 0 0 0
8 7 8 3 1+++ 0++ 0 0 0 0
8 8 7 2++ 1+++ 0++ 0 0 0 0 0

E is usually a decimalized number because the calculations determine the average outcome over many sets of rolls. “2.6” means that 40% of the time it will take 2 rolls and 60% of the time it will take 3 – but there is always an outside chance that it might take 1 or 4, so those percentages are approximate. Because in the real world you can’t have “0.6 of a roll”, these have been rounded up, and the resulting whole number of rolls used to calculate the rest of the table.

If you want to know the exact query that ‘broke’ the AIs, it was something like this:

For N 6-sided fair dice, with N varying from 1 to 8, calculate the number of rolls required to reach a total number of sixes shown across all rolls equal to or greater than T, which also varies from 1 to 8, and label it E1. Because in the real world you can’t have “0.6” of a roll, round E1 up and label it E. For E rolls of N fair six-sided dice, calculate the number of rolls on which exactly K 1s will be seen, with K varying from 1 to 8. If the result for a given K (designated RK) is an integer, show the integer; else if RK-INT(RK) is <0.25, show INT(RK); else if RK-INT(RK) is <0.5, show INT(RK) and one “+” sign; else if RK-INT(RK) is <0.75, show INT(RK) and two “+” signs; else show INT(RK) and three “+” signs, for example “2+++”. If an entry is impossible, eg K>N, show a blank space, not a 0. Format the results in a plaintext tab-delimited table with columns T, N, E, K=1, K=2, etc, sorted by T and sub-sorted by N.

Note that I had to run this query about 25 times, refining it each time, and eventually had to take out everything relating to the encoding and requesting the answer to 3 decimal places so that I could ‘manually’ do the coding.

Gemini calculated the results correctly, including the formatting, but couldn’t get the columns of data to line up correctly after 24 rows plus the heading – the K=1 column kept overwriting the E column, no matter what was done.

ChatGPT failed completely to apply the encoding correctly and had several calculation errors at first, but with a bit of patience and simplifying the question, did manage to produce a table that I could copy and paste into a spreadsheet. I then inserted additional columns to perform the calculation of RK-INT(RK) and interpret the results as per the “if” statement shown above. I then hid the working and manually transcribed the results into the tables above.

Oh, and for clarity, I decided at the last minute to break what was one big table into the more user-friendly 8 smaller tables.

I’m getting ahead of myself with this picture, but it had to go somewhere! You’ll see why it’s included in due course. Image by Daniel McWilliams from Pixabay

So let’s pick an entry, I’ll decode it, and show you how it works. How about… 5 dice, target of four 6’s.

  1. Look for the line that starts 4 – 5.
  2. E is 5, so you can expect the victim to roll 5 times on average before getting to the target of 4 sixes – of course, it could happen on the very first roll, but it probably won’t.
  3. So, what’s likely to happen, bad-things wise, over the course of those expected 5 rolls?
    • K=1 has a value of 2+, so there will probably be two rolls on which a single 1 is showing.
    • K=2 has a value of 0+++ – so the expectation is that this won’t happen on most rolls, but there’s a very high chance of it happening at least once. And that makes sense – there’s a 1 in 36 chance of 1s on two specific dice, and a (5/6)^3 = 125/216 chance of no 1s on the other three, for a 125/7776 chance of that particular arrangement, or about 1.6%. But there are 10 ways to choose which two of the five dice show the 1s, lifting the chance to roughly 16% per roll – high enough that it’s very likely to happen at least once over the expected rolls.
    • K=3 through K=5 are extremely unlikely to occur. Not impossible, but not likely. For all practical purposes, this is a two- or three-tiered penalty structure.
  4. The key takeaway, though, is: 2 x one 1, 1 x two 1’s, and 5-3=2 x no 1’s.
  5. So multiply that by the chosen harm levels that go with those one-counts, add it up, and you have your expected damage.
    • To demonstrate this, let’s say no 1’s = 1 HP, one 1 = 5 HP, and two 1’s = 10HP. Then we would have 2×5 + 1×10 + 2×1 = 22 HP damage.
  6. But the system can be as complicated as you want.
    • Try no 1’s = 2 HP, 1 one = +5 HP, and 2 ones = +10 HP and a point of STR, each accompanied by the lesser levels.
    • Then, we would expect 2x(5+2) + 1x(10+5+2, & 1 STR) + 2×2 = 14+17+4 HP & 1 STR = 35 HP & 1 STR.
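Step 5 reduces to a multiply-and-sum. A sketch, with the counts read off the T=4, N=5 line per the key (K=1 ‘2+’ counts as 2; K=2 ‘0+++’ counts as 1; the remaining expected rolls show no 1s) and the first set of demonstration harm levels:

```python
def expected_damage(counts_by_k, harm_by_k):
    """Multiply each expected occurrence count by its per-occurrence harm, and sum."""
    return sum(counts_by_k[k] * harm_by_k[k] for k in counts_by_k)

counts = {0: 2, 1: 2, 2: 1}    # expected rolls with no 1s, one 1, two 1s
harm = {0: 1, 1: 5, 2: 10}     # HP per occurrence (the demonstration values)
print(expected_damage(counts, harm))
```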

Choosing N and T

Unless you are modeling a specific set of conditions that dictate otherwise, or are working to deliver an ‘average fixed amount of damage’ (both covered in subsequent sections), the place to start is with the time intervals* between rolls and the number of rolls expected to be needed, E.

That will give you a short-list (perhaps VERY short) to choose between.

For example, if I want an effect to apply for an average of 6 time-intervals – it could be six rounds, six lots of 30 seconds, 6 minutes, 6 hours, 6 days, or whatever – I would look for E of 5, 6, or 7.

A whopping 17 entries in the table match, so I’m spoiled for choice. Since there are so many, I would lose the 5’s and 7’s and go with just the options that give exactly what I want.

That gets me down to 5 choices. I want the players to roll more than 1 die but no more than 4, because anything else takes longer to add up.

But that kills all my choices, so the decision is now which restriction do I desire more – the 6 rounds, or the 4 dice?

I decide that 7 rounds is acceptable, after all. That puts a lot of options back on my radar, including T=4 N=4 and T=4, N=5. The first has a higher chance of K=1 results, the latter introduces an outside chance of K=5 and an increased chance of K=3 and K=4. But it does fit my original 6-round desire. In the end, I choose to flip my compromise and choose the N=T=4 option.

Job Done.

Extending The Table

Let’s compare the 4-4 line with the 8-8 line.

4-4: 7, 2++, 0+++, 0, 0; vs
8-8: 7, 2++, 1+++, 0++, 0, 0, 0, 0, 0

So you can’t break an 8-8 into two sets of 4-4 rolls. But there is a simple way to scale the table up.

Let’s look at N=12 T=12.

    Step 1: Divide both N and T by 2 (they have to be even).

    Step 2: Look up the results on the tables above. In this case, we get N=6, T=6.

    Step 3: The total number of rolls expected is the same for both – in this case, 7.

    Step 4: Because the scaling also increases the deliberately-induced ’rounding error’, subtract 1/2 from the expected number of rolls in response to the doubling. So that’s 6½.

    Step 5: The total number of rolls is the same, but doubling the dice makes it easier to roll high numbers of ones. The counts for the worse penalties will increase, while the count for the standard penalty remains stable or slightly decreases. Balanced against that is the fact that the probability of those higher penalties is so low that you’re increasing nothing by more than a smidgen in most cases. Analysis has led to these rules for doubling:

    • # and #+ are always treated as #.
    • ++ should be read as #+1.
    • If the full E is <16, +++ should also be read as #+1.
    • If E >15, +++ should be read as #+2.

    So, in this case, we have 2+++, 1+, 0+, 0, 0, 0.
    E is <16, so 2+++ becomes 3.
    1+ stays 1.
    0+ stays 0.
    0 stays 0.

    So three single 1s, 1 pair of 1s, and 2.5 rolls without ones.

    Step 6: But then we have to factor in the drop from 7 to 6½ expected rolls:

    3 x 6.5 / 7 = 2.8 single 1s, 0.93 pairs of 1s, and 6.5 – 2.8 – 0.93 = 2.77 rolls with no 1s.

    Step 7: Multiply those by your chosen penalty values.

    Let’s use…

      No 1’s = 3 HP
      One 1 = 10 HP
      Two 1s = 25 HP

    3 x 2.77 + 10 x 2.8 + 25 x 0.93
    = 8.31 + 28 + 23.25 = 59.56 HP.

    Step 8: Round up and add the lower of the halved N or T to allow for the possibility of those results of 3 or more 1s.

    In this case, both are 6, giving a final estimate of 66 HP damage.
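The whole doubling procedure is mechanical enough to script. A Python sketch of the N=T=12 worked example, assuming my reading of the conversion rules:

```python
import math

def double_up_counts(encoded, e):
    """Apply the doubling rules to table entries like '2+++' or '1+'."""
    out = []
    for entry in encoded:
        base = int(entry.rstrip("+"))
        plus = entry.count("+")
        if plus == 2:
            base += 1                       # '++' reads as #+1
        elif plus == 3:
            base += 1 if e < 16 else 2      # '+++' reads as #+1 (or #+2 if E > 15)
        out.append(base)
    return out

# N=T=12 worked example: halve to the T=6, N=6 line (E=7), then adjust.
e = 7
adjusted_e = e - 0.5                        # step 4's half-roll correction
counts = double_up_counts(["2+++", "1+", "0+", "0", "0", "0"], e)
scale = adjusted_e / e                      # step 6's rescaling
k1 = round(counts[0] * scale, 2)            # single-1 rolls
k2 = round(counts[1] * scale, 2)            # double-1 rolls
k0 = round(adjusted_e - k1 - k2, 2)         # rolls with no 1s
damage = 3 * k0 + 10 * k1 + 25 * k2         # step 7's penalty values
estimate = math.ceil(damage) + min(6, 6)    # step 8: plus the lower halved N/T
print(k1, k2, k0, estimate)
```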

It is recommended that + and +++ rolls should have their expected penalties softened, especially if using compound effects, as the levels set for them are based on occurrence numbers that are only partially expected to occur. 10% weaker is about right. Similarly, ++ rolls should be subjected to a moderate reduction (~20%) for the same reason.

Setting penalty levels

Ensure the penalty definitions are geometrically worse as K increases (e.g., K=2 is far worse than K=1) to reflect the exponentially decreasing probability of high-K rolls.

Setting penalty levels from a designated target

If plugging values into the calculations above doesn’t suit, you can establish a fixed geometric ratio – 2.5, 3, or 4 all work well – and use it to reduce your high-K results to a specific number of K=1 or K=0 results. I recommend the first of these, but it’s up to you.

For example, let’s use 6 dice and a Target of 3 sixes. E=4.

    One 1 = 1++, treated as 1.5
    Two 1s = 0+++, treated as 1.
    Three to Six 1s = 0. Ignored.
    No 1’s = 4-1-1.5 = 1.5.

And let’s set a nice robust target like 100 HP. That’ll get a PC’s attention in a hurry!

Set the ratio as 4, and let’s extend the calculation down to K=0.

    Two ones = 4 (the ratio) single ones, for a total of 5.5 of them.
    One one = 4 (the ratio) ‘no ones’, so 5.5 x 4 = 22.

    100/22 = 4.54. Round down to 4. That’s 4 x 1.5 expected = 6 points, so our target is now 94 points from 5.5 K=1s.

    94 / 5.5 = 17.09. Round it down to 17. Multiply by the 1.5 times it’s expected to occur and we get 25.5. So our target goes down by 25 (round it down again) and our K=1 value is 17 HP.

    94-25 = 69. So our K=2 – expected once – is 69 HP.

Final results:

    K=0 does 4 HP.
    K=1 does 17 HP.
    K=2 does 69 HP.

Of course, if you set more modest targets, you’ll get more moderate results. This was deliberately extreme.
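The sequential back-solve can be sketched in code, following the worked N=6, T=3 example; variable names are mine:

```python
# Expected counts from the N=6, T=3 line (E=4): K=1 -> 1.5, K=2 -> 1, K=0 -> 1.5
counts = {0: 1.5, 1: 1.5, 2: 1.0}
ratio, target = 4, 100

k1_equiv = counts[2] * ratio + counts[1]    # K=2 counts as 4 K=1s -> 5.5
k0_equiv = k1_equiv * ratio                 # -> 22 K=0 equivalents

k0 = int(target / k0_equiv)                 # 100 / 22 -> 4 HP per K=0
remaining = target - int(k0 * counts[0])    # 100 - 6 = 94
k1 = int(remaining / k1_equiv)              # 94 / 5.5 -> 17 HP per K=1
remaining -= int(k1 * counts[1])            # 94 - 25 = 69
k2 = remaining                              # 69 HP per K=2
print(k0, k1, k2)
```

As a sanity check, the expected total (1.5 × 4 + 1.5 × 17 + 1 × 69 = 100.5) lands within a point of the 100 HP target.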

Variation One: Nested Damage Types

Try this on for size:

    K=0: minor HP damage.
    K=1: significant HP damage.
    K=2: significant HP damage & single-stat damage.
    K=3: significant HP damage & second-stat damage.
    K=4: Significant HP damage & both stats damaged.
    K=5: K=4 + Significant HP damage.
    K=6: K=4 + K=2.
    K=7: K=4 + K=3.
    K=8: 3 x K=4.

These results ‘nest’ three types of damage – two to stats and HP. You can use a similar system if the game system has multiple damage types, as in the Hero System:

    K=0: Some END loss
    K=1: K=0 + Some Stun loss
    K=2: 2 x K=1 + Some Body damage
    K=3: K=1 + K=2 + Some Body damage + Some temporary Stat loss
    K=4: 2 x K=2 + Some temporary Stat loss
    K=5: K=4 + K=2
    K=6: K=5 + K=3.
    K=7: K=6 + K=4.
    K=8: 3 x K=5.

Defining ‘some’ as 5 points, that becomes:

    K=0: -5 END
    K=1: -5 END -5 Stun
    K=2: -10 END -10 Stun -5 Body
    K=3: -15 END -15 Stun -10 Body -5 Stat
    K=4: -20 END -20 Stun -10 Body -5 Stat
    K=5: -30 END -30 Stun -15 Body -5 Stat
    K=6: -45 END -45 Stun -25 Body -10 Stat
    K=7: -65 END -65 Stun -35 Body -15 Stat
    K=8: -90 END -90 Stun -45 Body -15 Stat
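The recursive definitions can be expanded mechanically. A sketch that re-derives flat totals from the nested list, with ‘some’ = 5 points (I’ve included the ‘some Body’ term at K=3 that the flat totals imply; dict keys are mine):

```python
SOME = 5   # points per "some"

def add(a, b):
    """Combine two effect dicts, summing shared stats."""
    return {s: a.get(s, 0) + b.get(s, 0) for s in set(a) | set(b)}

def scale(a, n):
    """Multiply every stat in an effect dict by n."""
    return {s: v * n for s, v in a.items()}

k = {0: {"END": SOME}}
k[1] = add(k[0], {"STUN": SOME})
k[2] = add(scale(k[1], 2), {"BODY": SOME})
k[3] = add(add(k[1], k[2]), {"BODY": SOME, "STAT": SOME})
k[4] = add(scale(k[2], 2), {"STAT": SOME})
k[5] = add(k[4], k[2])
k[6] = add(k[5], k[3])
k[7] = add(k[6], k[4])
k[8] = scale(k[5], 3)

for level in range(9):
    print(level, k[level])
```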

Or you could simplify things:

    K=0: -5 END -1 Stun -0 Body
    K=1: -10 END -5 Stun -1 Body
    K=2: 2 x K1
    K=3: 4 x K1 plus -1 stat
    K=4: 8 x K1 plus -5 stat
    K=5: 15 x K1 plus -10 stat
    K=6: 30 x K1 plus -20 stat
    K=7: 50 x K1 plus -30 stat
    K=8: 100 x K1 plus -40 stat

The Healing Difference

It’s up to you to decide whether or not healing, or recoveries in the Hero System, can function until whatever-it-is has run its course.

Blocking healing makes these effects much nastier, and should cause you to halve whatever damage levels you had in mind – unless you want them to be potentially deadly.

Other Systemic Options

There are six other options that the GM can choose. Some of these can operate in combinations.

1. The Exhaustion Option

When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

That means that your biggest risk of a really bad result is at the start, and the possible effects moderate as the pool shrinks.

It makes it much harder to predict the net outcome though.

Statistical Impact: This dramatically reduces the dice pool (N) over the course of the effect. Successes are achieved quickly, but the chance of rolling K>0 adverse events on any remaining die remains constant (1 in 6). Since the pool shrinks, the absolute chance of rolling multiple 1s decreases rapidly.

Game Feel: Front-loaded risk and rapid resolution. The initial rolls are the most dangerous. If a character survives the first two or three checks, the chance of rolling 1s drops faster than the difficulty of reaching the target, T.

Best For: Fast-acting, non-renewable poisons (like a single large dose of nerve agent) or short, focused challenges where the effect is quickly flushed from the system.
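A sketch of this option, with the pool shrinking as sixes retire dice (names are mine):

```python
import random

def exhaustion_run(n_dice, target, rng):
    """Dice that roll a 6 are retired; returns (total sixes, 1s per roll)."""
    pool, sixes, ones_seen = n_dice, 0, []
    while sixes < target and pool > 0:
        roll = [rng.randint(1, 6) for _ in range(pool)]
        sixes += roll.count(6)
        ones_seen.append(roll.count(1))
        pool -= roll.count(6)   # retire each die that scored a six
    return sixes, ones_seen

rng = random.Random(7)  # seeded so the run is repeatable
print(exhaustion_run(8, 4, rng))
```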

2. The Continual Option

Once you roll a 1, it stays unrolled thereafter and counts toward future penalties. Rolling continues until every dice shows either a 1 or a 6. The Core exit condition of accumulating T sixes remains in effect but is overshadowed by the alternative.

This means that things get progressively worse until whatever-it-is has run its course and left your system. It’s nasty but good for supernaturally-sourced troubles.

The one saving grace is the additional way out – if every dice is either a 1 or 6, the nightmare ends. In some cases, the cause – disease or poison – will burn itself out fast, in others it will be the cause of extremely protracted suffering.

The higher the initial N, the worse this gets. If you start with 6 dice:

    1, x, x, x, x, x – T sixes (cumulative) or 5 sixes needed
    K=1 events every roll until you roll another 1 or exit
    1, 1, x, x, x, x – T sixes (cumulative) or 4 sixes needed
    K=2 events every roll until you roll another 1 or exit
    1, 1, 1, x, x, x – T sixes (cumulative) or 3 sixes needed
    K=3 events every roll until you roll another 1 or exit
    1, 1, 1, 1, x, x – T sixes (cumulative) or 2 sixes needed
    K=4 events every roll until you roll another 1 or exit
    1, 1, 1, 1, 1, x – 1 six needed
    K=5 events every roll until you roll a 1 or a 6. If you roll a 1, it becomes a K=6 event.

Each time a die is locked on ‘1’, your chances of getting the sixes you need go down and the number of rolls you’re expected to need will go up.

Damage accumulates very rapidly, and with accelerating pace.
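A sketch of this option, assuming my reading of the exit conditions: a run ends when T sixes accumulate, or when a roll leaves every die showing a 1 or a 6:

```python
import random

def continual_run(n_dice, target, rng):
    """1s lock in place; returns the K penalty level suffered on each roll."""
    locked, sixes = 0, 0
    penalties = []
    while True:
        roll = [rng.randint(1, 6) for _ in range(n_dice - locked)]
        locked += roll.count(1)      # 1s lock in and keep counting
        sixes += roll.count(6)
        penalties.append(locked)
        if sixes >= target:          # normal exit: T sixes accumulated
            break
        if all(d in (1, 6) for d in roll):   # every die now shows a 1 or a 6
            break
    return penalties

rng = random.Random(11)  # seeded so the run is repeatable
print(continual_run(6, 4, rng))
```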

3. The Progressively-worse Option

Each 1 that gets rolled increases the Target by 1.

This puts survival on a knife-edge and ensures that if you suffer badly, the effects will linger for longer – making it a good choice for plagues.

Statistical Impact: This maintains the dice pool (N) but increases the overall target (T) dynamically. Every adverse event makes the recovery condition harder to achieve. This means rolling a 1 directly increases the expected duration (E) of the effect. A single unfortunate roll early on can potentially double the total expected number of checks.

Game Feel: Cascading failure and desperation. Failure feeds failure. The character sees the light at the end of the tunnel (the target T) constantly moving further away. This is highly effective for plagues or diseases that exploit the body’s weakening condition.

Best For: Plagues, zombie infection progression, or effects that are harder to fight off the longer they persist (like a viral load).

4. The Blessed Balm Option

Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on.

This creates a situation in which the health of the sufferer is on a roller-coaster, up and down with each roll of the dice. Eventually, these changes will tend to dampen out. Works very well with the Progressively-Worse option.

This fundamentally re-balances the risk assessment, introducing greater variance into the process – rolls are either great (success towards T), terrible (a large number of 1s), or tension-building (anything else). It models a scenario where the character’s vitality is constantly tested.

Game Feel: Roller-coaster effect and high stakes per roll. The character may suffer a terrible wound but instantly cancel it in the same roll with a heroic recovery effort. This variation is highly dramatic.

Best For: Magical duels, effects that fluctuate with effort or willpower, and scenarios where the poison’s progression is inherently unstable.

5. The Devastating Option

The first 6 in a roll doesn’t count, only sixes above that one.

This strongly biases the results away from recovery, without ruling it out entirely. It makes any of the ‘nasty’ options far worse.

Statistical Impact: This increases the expected number of rolls (E) needed to reach T without changing the probability of adverse events (K). Since E is higher, the total number of adverse events over the life of the affliction is necessarily higher. If you use the same N and T, the effect will be substantially longer and more severe than calculated in the base tables.

Game Feel: Recovery – and the downhill slide before it – feels incredibly sluggish and unforgiving. Successes are hard-won. This makes the affliction feel resistant or deeply embedded in the character’s system, guaranteeing prolonged suffering.

Best For: Artifact-level curses, dire creature venom, effects designed to be a significant narrative roadblock, or spurs for quests for a cure. Don’t hit a PC with this variant except in unusual circumstances when they have no-one to blame but themselves; DO hit someone important who the PCs want to save.

6. The With-A-Bang Option:

A selected number of the dice in the pool (N) start already showing 1s and are not re-rolled. The number of these “fixed ones” decreases by one each round, each released die becoming a regular die that is rolled normally.

The “Fixed Ones” should be 1/2 of N or less. This ‘forces’ the occurrence of a high K result in the first round, tapering off in subsequent rounds. It also extends E by reducing the likelihood of sixes being rolled, generally by the number of fixed ones at the beginning, minus 1.

    6a. Bigger Bang Sub-variant

    The “fixed ones” are only removed when a 6 is rolled. A 6 used for this purpose does not count toward the target.

    This extends the durability of the high-N count AND effectively increases T by the number of initial ones showing.

    6b. It Will All Be Over Soon Sub-variant:

    As per the basic option 6, but fixed ones do not become regular dice, they become automatic sixes.

    This front-loads the results with high-K results but effectively reduces T by the number of initial ones showing.

Going Further

Any situation in which one character uses his skills to solve a multipart problem, or a group collaborates on a challenge or faces adversity together, or that can otherwise be broken down into units of roughly equal value, can be modeled using the Adverse Effects Engine.

Each part of the problem, or contributor to a solution, or participant, gets one dice, and they all roll collectively at the same time. This is especially powerful when coupled with the variants listed above.

Think of T as Progress, N as Resource/Skill, and K as Consequence (usually Immediate, but that depends on the definitions of harm that you set up).
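That Progress / Resource / Consequence framing is enough to sketch the base engine in a few lines of Python. This is only a sketch of the core loop as summarized in this article – roll N d6 per round, bank each 6 toward the Target T, and read each round’s count of 1s as that round’s K – and the function name and signature are mine, not anything official:

```python
import random

def roll_aee(n_dice, target, rng=random.Random()):
    """Base Adverse Effects Engine loop: roll N d6 each round;
    each 6 adds 1 toward the target T, and the count of 1s in a
    round is that round's K (consequence severity). Returns the
    per-round K values, so len(result) is the duration E."""
    sixes, k_per_round = 0, []
    while sixes < target:
        roll = [rng.randint(1, 6) for _ in range(n_dice)]
        sixes += roll.count(6)
        k_per_round.append(roll.count(1))
    return k_per_round

# A poison with Resource N=5 and Progress target T=4:
print(roll_aee(5, 4, random.Random(1)))
```

Every variant described earlier is a tweak to this loop – lock dice, adjust the target, cancel 1s – so keeping one small function per configuration makes play-testing the numbers quick.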

Here are just a few of the many situations that the engine, correctly configured, can simulate.

Extreme Weather

N = number of PCs / NPCs in the group

T = N unless there is a natural channel either guiding the weather toward them (+1 to +3 T) or away from them (−1 to −2 T).

K = scale of impact of the weather event on the group.

Best Option: The Blessed Balm PLUS Progressively Worse:
Each 1 that gets rolled increases the Target by 1.
Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on, mitigating an existing K result OR reducing the Target by 1 if there are no K results to mitigate.

Everybody rolls a dice and contributes the result to the pool. Sixes push the weather away from the party, Ones bring it down on top of them to a degree. Net effects change from round to round, with weather either just missing the characters (K=0), catching them at its fringes (K=1), or enveloping them (K>1).

For added flavor, throw in Nested Damage Types – First impact = Wind, Second impact = Rain / Hail / Snow, Third impact = Stronger Wind, and so on.
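If you want to sanity-check how punishing a given storm will be before springing it on the party, this configuration simulates easily. A sketch only – the rules above don’t specify whether a cancelling 6 still counts toward the Target, or whether cancellations mitigate the current round’s 1s before reducing T, so both of those are my assumptions, flagged in the comments:

```python
import random

def simulate_weather(n, base_target, rng=random.Random(), max_rounds=1000):
    """Extreme Weather: Blessed Balm + Progressively Worse.
    Returns the net K (weather severity) for each round."""
    target, sixes, k_history = base_target, 0, []
    for _ in range(max_rounds):
        if sixes >= target:
            break
        roll = [rng.randint(1, 6) for _ in range(n)]
        ones, s = roll.count(1), roll.count(6)
        sixes += s                  # assumption: cancelling 6s still count toward T
        cancel = max(0, s - 1)      # Blessed Balm: two 6s undo one 1, etc.
        undone = min(cancel, ones)  # assumption: current 1s are mitigated first...
        target -= cancel - undone   # ...and spare cancellations reduce the Target
        ones -= undone
        target += ones              # Progressively Worse: each surviving 1 raises T
        k_history.append(ones)
    return k_history

# Five travelers caught in a storm front:
print(simulate_weather(5, 5, random.Random(7)))
```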

Product Development

Your PC is part of a team developing a new product for sale. You will need a Market Specialist (salesman), a production / manufacturing engineer, a marketer, a technical expert, and a team manager.

The salesman will identify a gap in the market to be targeted; the technician will design the product to fill that gap; the engineer will determine the possible price-points and the achievable rate of production; the marketer will figure out how to sell the product; and the team manager will make decisions and weigh the costs of altering the production environment to change the production engineer’s forecasts.

Each team member gets at least 1 dice to contribute; if their specific skill is more than double the lowest specific skill in the team, they get a second one. If the company has a good history / reputation in the field, the GM can award 1-3 extra dice.

T starts at 1 per team member. If the company has a bad history or reputation to overcome, add 2. If the product is especially cutting-edge, increase this subtotal by +50% or even +100%. If the market is especially cut-throat, add another 25% on top of that. For each team member whose specific skill is less than half the highest specific skill amongst the team, add another 1.

Each 6 counts +1 towards the product being fit for purpose. Each roll marks a milestone in the development process – there can be blind alleys, competitor announcements changing the market / playing field, cost increases, new markets opening up, old markets closing down, scandals in the boardroom – anything and everything that affects the market for the product.

Penalties take the form of additional design time between rolls (K=0, K=1) and reductions in the fitness for purpose of the resulting product (K>0).

I don’t think any of the optional configurations are appropriate for this application.
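Deriving N and T from a team roster involves several steps, so here’s how I’d code the bookkeeping. The function and parameter names are mine; the modifiers are exactly the ones listed above:

```python
def product_team_setup(skills, rep_bonus_dice=0, bad_rep=False,
                       cutting_edge=0.0, cut_throat=False):
    """Dice pool N and target T for the Product Development example.
    skills: one specific-skill rating per team member.
    rep_bonus_dice: 0-3 GM award for a good company reputation.
    cutting_edge: 0.0, 0.5 (+50%) or 1.0 (+100%)."""
    lo, hi = min(skills), max(skills)
    # Everyone gets a die; more than double the lowest skill earns a second.
    n = sum(2 if s > 2 * lo else 1 for s in skills) + rep_bonus_dice
    t = len(skills) + (2 if bad_rep else 0)    # 1 per member, +2 for bad rep
    t *= 1 + cutting_edge                      # cutting-edge products
    if cut_throat:
        t *= 1.25                              # cut-throat market, on top
    t += sum(1 for s in skills if s < hi / 2)  # weak links lengthen development
    return n, round(t)

print(product_team_setup([4, 5, 6, 9, 10]))  # → (7, 6)
```

A five-person team with skills 4 through 10 gets seven dice (the two standouts earn a second die each) against a target of six (the weakest member adds one).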

Collaboration to overcome an environmental hazard (1)

Use the AEE for ongoing natural challenges where the group’s collective effort determines the duration, and individual poor luck determines the immediate suffering.

Crossing a Frozen Lake or Glacier, for example: N (Dice) = The number of characters in the group, or the lowest relevant skill rating in the group, or some reasonable fraction thereof. Only characters with a relevant skill or with a relevant stat value higher than a medium-high threshold get a die. Below those marks, the characters are liabilities toward the group’s success.

T = the GM-assigned difficulty, or some simple fraction thereof, +1 per character, whether they get a die or not.

Options Configuration: The Continual Option, PLUS The Blessed Balm Option.
Once you roll a 1, that die is locked (not re-rolled) thereafter and counts toward future penalties. Rolling continues until every dice shows either a 1 or a 6. The core exit condition of accumulating T sixes remains in effect but is overshadowed by the alternative.
Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on – removing the cancelled 1s from the locked pool and releasing them back into the live dice to be rolled.
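Here’s how that configuration might look in code. Note two assumptions the text leaves open: I treat dice that roll a 6 as locked too (per the “every dice shows either a 1 or a 6” exit condition), and I let a round’s spare 6s release previously locked 1s back into the live pool:

```python
import random

def frozen_crossing(n, target, rng=random.Random(), max_rounds=1000):
    """Continual + Blessed Balm. Returns (made_it, per-round penalties),
    where each round's penalty is the number of dice locked on 1."""
    live, locked_ones, sixes = n, 0, 0
    k_history = []
    for _ in range(max_rounds):
        if sixes >= target or live == 0:
            break
        roll = [rng.randint(1, 6) for _ in range(live)]
        ones, s = roll.count(1), roll.count(6)
        sixes += s                                  # 6s lock and count toward T
        locked_ones += ones                         # 1s lock and keep hurting
        released = min(max(0, s - 1), locked_ones)  # two 6s free one locked 1
        locked_ones -= released
        live = live - ones - s + released
        k_history.append(locked_ones)
    return sixes >= target, k_history
```

Run it a few hundred times per (N, T) pair and you’ll see very quickly whether a party of six can realistically make the crossing.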

Collaboration to overcome an environmental hazard (2)

The party are roped together and have to climb.

N = Characters with climbing skill of +2 or better, or STR+DEX of 16 or better.

T = Total number of characters + 1-4 for difficulty of climb. Add 2 if the characters are under attack or otherwise pressured to climb at speed.

K = falls / setbacks. K>2 = ropes break.

Options Configuration: The Exhaustion Option simulates the rope tying the bad climbers to the good ones: When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

For especially difficult climbs, add The Progressively-Worse Option: Each 1 that gets rolled increases the Target by 1.

For the most supremely challenging climbs, add the Devastating Option instead of Progressively Worse: The first 6 in a roll doesn’t count, only sixes above that one.

Ransacking A Library for specific (hidden / obscure) information

How long it takes to find a specific piece of hidden or obscure lore in a Library that might not even contain what you’re looking for depends on your reading speed (INT), presuming you have the ability to read, and your ability to recognize what you’re looking for, or that what you have just found is a clue to where to look next.

Well-structured libraries also make it a lot easier by excluding most of the books as irrelevant.

I would employ a simulation similar to the Design-A-Product example, but based purely on INT and not on specific skills. Note that if you have a character participating who is low INT, they can actually disrupt the efforts of higher-INT characters by continually interrupting them with “is this it?”.

Specifically, you want the total number of 6s to exceed the total number of 1s before the search comes to an end. If it doesn’t, either the answers aren’t there, or you’ve missed them. So long as there are dice to be rolled, there’s a chance, even if you’re at -2 or -3 to getting a result.

K=penalties to the success total, high-K = passing guards, accidental fires, magical books that scream when opened, ghostly librarians…

Focal Character overcoming an environmental hazard

All sorts of things fall into this category. Picking a combination lock, for example. Or disarming a bomb with N critical steps that have to be performed in the right order. Or using a code-breaker.

You’ve seen these devices in the movies. Attach one to a lock and let it work its way through the combinations. To make life more difficult, consider a rolling code – that’s where a complex algorithm sets a new code every time, but only the 1000 or so valid results from that algorithm will be accepted. Which means that if you lock in the wrong answer, you have to start over.

The relevant skill here isn’t necessarily one of yours – it’s the design and programming skill of whoever designed and built the code-breaker. All you have to do is place it on the lock in roughly the right position.

With each success (each 6 toward the T), the stakes get higher. One wrong move (K>1) and it’s back to square one.

This scenario seems tailor-made for the Exhaustion Option – a 6 is a locked-in digit: When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

Lesser K results are events that threaten failure / discovery, but which may not actually incur the problem.

T = the number of digits in the code.

N = T + a simple fraction of the programmer / designer’s skill.
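That’s the whole configuration, and it sketches neatly. The reset rule – K>1 wipes all locked-in digits – is my reading of “back to square one” above, and the function name is mine:

```python
import random

def codebreaker(digits, skill_dice, rng=random.Random(), max_rounds=10000):
    """Exhaustion option: each 6 locks in a digit and its die leaves the
    pool; two or more 1s in one round (K>1) resets all progress.
    T = digits, N = T + skill_dice (the designer-skill fraction)."""
    n = digits + skill_dice
    live, locked, rounds = n, 0, 0
    for _ in range(max_rounds):
        if locked >= digits:
            break
        rounds += 1
        roll = [rng.randint(1, 6) for _ in range(live)]
        if roll.count(1) > 1:   # K>1: the rolling code resets
            live, locked = n, 0
            continue
        s = roll.count(6)
        locked += s             # digits locked in so far
        live -= s               # Exhaustion: those dice stop rolling
    return rounds, locked >= digits
```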

Let the tension build…


Click the icon to download the PDF

Using The AEE

If you prep in advance, you have plenty of opportunity to consult the tables and simply put the specific simulation instructions into your notes.

If you want to be able to use the system off-the-cuff, though, you’re going to have to be able to take it with you. For that reason, I’ve put together a PDF with the essential mechanics, shorn of explanation and example – but WITH a hyperlink back to this article.


Trade In Fantasy Ch. 5: Land Transport, Pt 5b


This entry is part 20 of 20 in the series Trade In Fantasy

This post continues the text of Part 5 of Chapter 5. Its content has been added to the parent post here and the Table of contents updated.

I have a series of images of communities of different sizes which will be sprinkled throughout this article. This is the first of these – something so sparsely-settled that it barely even qualifies as a community. It’s more a collection of close rural neighbors! Image by Jörg Peter from Pixabay

5.8.1.5 Blended Models

In general, the rule is one zone, one model. In fact, as a general rule, your goal should be one Kingdom, one model – that way, if you choose “England” as your model, your capital city will resemble London in size and characteristics, and not, say, Imperial Rome.

But, if you can think of a compelling enough reason, there’s no reason not to blend models. There are lots of ways to do this.

The simplest is to designate one model for part of a zone, and another to apply to the rest.

Example: If your capital city were much older than the rest of the Kingdom, you might decide that for IT ALONE, the Imperial model might be more appropriate, while the rest of the Kingdom is England-like. Or you might decide that because of its size, it has sucked up resources that would otherwise grow surrounding communities more strongly, and declare a three-model structure: Imperial Capital, France for all zones except Zone 1, and England for the rest of Zone 1.

Example: A zone contains both swamp and typical agricultural land. You decide that those parts that are Swamp are German or Frontier in nature, while the rest are whatever else you are using.

An alternative approach to the problem that works in the case of the latter example is to actually average the two models’ characteristics and apply the result either to just the swamp areas, or to the zone overall.

When you get right down to it, the models are recommendations and guidelines, describing a particular demographic pattern seen in Earth’s history. There’s absolutely nothing to prevent you from inventing a unique one for a Kingdom in your world – except for it being a lot of work, that is.

5.8.1.6 Zomania – An Example

I don’t really think that a fully-worked example is actually necessary at this point, but I need to have one up-to-date and ready to go for later in the article. So it’s time for another deep-dive into the Kingdom of Zomania.

5.8.1.6.1 Zone Selection

I’ll start by picking a couple of Zones that look interesting, and distinctive compared to each other.

Zone 7 is bounded by a major road, but doesn’t actually contain that road; it DOES have capacity for a lot of fishing, though. And I note that there are cliffs in the zones to either side of it, so they WON’T support fishing – in fact, those cliffs appear to denote the limits of the zone. Zone 7 adds up to 167.8 units in area, and features 26 units of pristine beaches.

Zone 30 has an international border, a major road, and lots of forest and foothills becoming mountainous. It’s larger than Zone 7, at 251.45 units.

Because I haven’t detailed these areas at all, the place that I have to start is back in 5.7.1.13. But first…

5.8.1.6.1.1 Sidebar: Anatomy Of A Fishing Locus

I was going to bring this up a little later, but realized that readers need to know it, now.

Coastal Loci are a little different to the normal. To explain those differences, I threw together the diagram below.

1. A coast of some kind. It might not be an actual beach, but it’s flat and meets the water.

2. It’s normal, especially if there’s a beach, for the ends to be ‘capped’ with some sort of headland. This is often rocky in nature. This is the natural location for expensive seaside homes and lighthouses.

3. Fishing villages.

4. Water. It could be a lake, or the sea, or even a river if it’s wide enough.

5. Non-coastal land, usually suitable for agriculture.

6. A fishing village’s locus is compressed along the line of the coast and bulging out into the water. This territory produces a great deal more food than the equivalent land area – anywhere from 2-5 times as much. Some cultures can go beyond coastal fishing, doubling this area – though what’s further out than shown is generally considered open to anyone from this Kingdom. Beyond that, some cultures can Deep-Sea fish (if this is the sea), which quadruples the effective area again. If you’re keeping track, that’s 2-5 x 2 x 4 = 16-40 times the land area equivalent. The axis of the locus is always as perpendicular to the coast as possible.

7. The bottoms of the lobes are lopped off…

8. And the land equivalent is then found by ‘squaring up’ the loci…

9. …which means that these are the real boundaries of the locus. The area stays roughly the same, though.

The key point is this: you don’t have to choose “Coastal Mercantile” to simulate living on the coast and fishing for food. There are mechanisms already built into the system for handling that – it’s all done with Terrain and a more generous interpretation of “Arable Land”.

Save the “Coastal Mercantile” Model for islands and coastal cultures whose primary endeavor is water-based trade.

Zone 7, then, should have the same Model as all the other farmland within the Kingdom. I think France is the right model to choose.

Zone 30 is a slightly more complicated story. For a start, don’t worry about the road – like coastal villages, that gets taken care of later. For that matter, so are the heavy forestation and the local geography – hills and mountains. But this is an area under siege from the wilderness, as explained in an earlier post. Which changes the fundamental parameters of how people live, and that should be reflected in a change of model. In this case, I think the Germany / Holy Roman Empire model of lots of small, walled communities is the most appropriate.

But this does raise the question of where the change in profile takes place. I have three real options: the Zone in its entirety may be HRE-derived; the HRE model might apply only to the forests; or it might take hold in the hills and mountains only.

My real inclination would be to choose one of the first two options, but in this case I’m going to choose door number 3, simply because it will contrast the HRE model with the base French version of the hills and forests. In fact, for that specific purpose, I’m going to set the boundary midway through the range of hills:

5.8.1.6.1.2 Sidebar: Elevation Classification

Which means, I guess, that I should talk about how such things are classified in this system. There are eight elevation categories, but the categories themselves are based on the differences between peak elevation and base elevation.

I tried, but couldn’t quite get this to be fully legible at CM-scale. Click on the image above to open a larger copy in a new tab.

To get the typical feature size – the horizontal diameter of hills or mountains – divide 5 x the average of the Average Peak Elevation range by the average Relief range and multiply by the elevation category number, squared for mountains, or twice the previous category’s value, whichever is higher. Note that the latter is usually the dominant calculation! The results are also shown below. Actual cases can be 2-3 times this value – or 1/2 of it.

1. Undulating Hillocks – Average Peak Elevation 10-150m, Local Relief <50m; Features 16m (see below).
2. Gentle Hills – Average Peak Elevation 150-300m, Local Relief 50-150m; Features 32m.
3. Rolling Hills – Average Peak Elevation 300-600m, Local Relief 150-300m; Features 64m

     -> □ Zone 30 Treeline from the start of this category
     -> □ Normal Treeline is midway through the range

4. Big Hills – Average Peak Elevation 600-1000m, Local Relief 300-600m; Features 128m
5. Shallow Mountains – Average Peak Elevation 1000-2500m, Local Relief 600-1500m; Features 417m
6. Medium Mountains – Average Peak Elevation 2500-4500m, Local Relief 1000-3000m; Features 834 m
7. Steep Mountains – Average Peak Elevation 4500-7000m, Local Relief 3000-5000m; Features 1668m
8. Impassable Mountains, permanent snow-caps regardless of climate – Average Peak Elevation 7000m+, Local Relief 5000m+; Features 3336m.

Undulating Hillocks (also known as Rolling Hillocks or Rolling Foothills) are basically a blend of scraped-away geography and boulders deposited by glaciers. If the boulders have any sort of faults (and most do), they will quickly become more flat than round and start to tumble within the Glacier. When they come to rest, several will be stacked, one on top of another, generally in long waves. There will be gaps in between, which get filled with earth and mud and weathered rock over time, unless the rocks are less resistant to weathering than soil, in which case the rocks get slowly eaten away. In a few tens of thousands of years, you end up with undulating hillocks, or their big brothers. The flatter the terrain, the more opportunity there is for floodwaters to cover everything with topsoil, smoothing out the bumps. The diagram above shows how this ‘stacking and filling’ can produce structures many times the size of individual hillocks.

A very similar phenomenon – wind instead of glaciers, and sand instead of boulders – creates sandy dunes in deserts prone to that sort of thing. Over time, great corridors get carved out before and after each dune, generally at right angles to the prevailing winds. It can help you picture it if you think of the wind “rolling” across the dunes – when they come to a spot where the sand is a little less held together, it starts to carve out a trench, and before long, you have wave-shaped sand-dunes.

5.8.1.6.3 Area Adjustments – from 5.7.1.13

Zone 7 has a measured area of 167.8 units, but that needs to be adjusted for terrain. Instead of the slow way, estimating relative proportions, let’s use the faster homogenized approach:

Hostile Factors:
     Coast 1.1 + Farmland 0.9 + Scrub 1.1 = 3.1; average 1.03333.
     Coast +0.25 + Beaches -0.05 + Civilized -0.1 = +0.1
     Towns -0.1
     Net total: 1.03333
167.8 x 1.0333 = 173.4 units^2.

Benign Factors:
     Town 0.1 + Coast 0.15 + Beaches 0.15 + Civilized 0.2
     Subtotal +0.6
     Square Root = 0.7746
173.4 x 0.7746 = 134.3 units^2.

Zone 30 is… messier. Base Area 251.45 units^2.

Hostile Factors:
     Mining 1.5 +
     Average (Mountains 1.4 + Forest 1.25 + Hills 1.2 = 3.85) = 1.28
     Town -0.1 + Foreign Town 0.1 + River 0.2 + Caves 0.05 + Ruins 0.4 + “Wild” 0.1 = +0.75
     Net total = 1.5 + 1.28 + 0.75 = 3.53
251.45 x 3.53 = 887.6 units^2.

Benign Factors:
     Town 0.1 + Foreign Town -0.1 + River +0.1 + Caves 0.05 + Ruin 0.4 + Major Road 0.2
     Subtotal 0.75
     “Wild” = average subtotal with 1 = 0.875
     Sqr Root = 0.935
887.6 x 0.935 = 829.9 units^2.
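For readers checking the arithmetic, here are both zones’ adjustments in a few lines of Python. The small differences from the figures above (about a unit for Zone 30) come from the intermediate rounding in the worked example:

```python
from math import sqrt

# Zone 7: terrain multipliers averaged, flat modifiers added on.
hostile7 = (1.1 + 0.9 + 1.1) / 3 + (0.25 - 0.05 - 0.1) - 0.1
zone7 = 167.8 * hostile7 * sqrt(0.1 + 0.15 + 0.15 + 0.2)

# Zone 30: Mining stands alone; Mountains / Forest / Hills are averaged.
hostile30 = 1.5 + (1.4 + 1.25 + 1.2) / 3 \
            + (-0.1 + 0.1 + 0.2 + 0.05 + 0.4 + 0.1)
benign30 = (0.1 - 0.1 + 0.1 + 0.05 + 0.4 + 0.2 + 1) / 2  # 'Wild' averages with 1
zone30 = 251.45 * hostile30 * sqrt(benign30)

print(round(zone7, 1), round(zone30, 1))  # ≈ 134.3 and ≈ 831 units²
```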

To me, this looks very Greek – but it’s actually ‘Gordes’, in France, which the photographer describes as a village. One glance is enough to show that it’s bigger than the town depicted previously. Image by Neil Gibbons from Pixabay

5.8.1.6.4 Defensive Pattern – from 5.7.1.14

Zone 7 is pretty secure, the biggest threat being local insurrection or maybe pirate raids. A 4-lobe structure of 2½,5 looks about right.

When I measure out the area protected by a single fort and 4 satellites, I get 47.2 days^2. That takes into account overlapping areas where this one structure shares the burden 50% with a neighboring structure, and the additional areas that have to be protected by cavalry units.

That means that in Zone 7, there should be S x 134.3 / 47.2 = 2.845 x S of them, depending on how large a “unit” on the map is, measured in days’ march for infantry.

S is going to be the same for all zones. I’ve avoided making that decision for as long as I can – the question is, how large is Zomania?

5.8.1.6.5 Sidebar: The Size of Zomania, revisited

16,000 square miles – at least, that’s the total that I threw out in 5.7.1.3.

That’s about the same size as the Netherlands.

It’s a lot smaller than the Zomania that I’m picturing in my head when I look at the map. It IS the right size if the units shown are miles. But if they aren’t?

There are two reasons for regularly offering up Zomania as an example. The first is to provide a consistent foundation and demonstration of the principles discussed coming together into a cohesive whole. And the second is for me to check on the validity of the logic and techniques that I’ve described.

That ‘wrong’ feeling is keeping my subconscious radar from achieving purpose #2. And the Zomania being described being too small – the cause of that feeling – means that it isn’t going to adequately perform function #1, either.

There can be only one solution – Zomania has to grow, has to be scaled up. I want Zone 7 to be comparable to the size of the Netherlands, not the entire Kingdom, which should be comparable to France, or Germany, or England, or Spain.

A factor of 10? Where would 160,000 sqr miles place Zomania amongst the European Nations that I’ve named?

UK: 94,356. Germany: 138,063. Spain: 192,466. France: 233,032. So 160,000 would be smack-dab in the middle, and absolutely perfect for both purposes.

So Zomania is now 160,000 square miles, and the ‘units’ on all the maps are 10 miles each.

It wasn’t easy sorting this out – it’s been a road-block in my thinking for a couple of days now – triggered by results that seemed to show Zone 7 to be about 0.08 defensive structures in size.

And that is due to a second scaling problem that was getting in the way of my thinking:

How much is that in day’s marching?

In 5.7.1.14.3, I offered up:

    If d=10 miles (low), that’s 103,923 square miles.
    If d=20 miles (still low), that’s 415,692 square miles.
    If d=25 miles (reasonable), that’s 649,519 square miles.
    If d=30 miles (doable), 935,307 square miles.
    If d=40 miles (close to max), 1.66 million square miles.
    If d=50 miles (max), 2.6 million square miles.

But that was in reference to a theoretical 6 x 4, 12 + 12 pattern. Nevertheless, the scales are there. And they are way bigger than I thought they would be, and way too big to be useful as examples. Yet the logic that led to them seemed air-tight. Clearly, an assumption had been made somewhere that wasn’t correct, but this problem was getting in the way of solving the first one.

Once I had separated the two, answers started falling into place. The numbers shown above are how far infantry can march in 24 solid hours, such as they might do in a dire emergency. But defensive structures would not be built and arranged on that basis.

If infantry march for 8 hours, they have just about enough daylight left to break camp in the morning (after being fed) and set up camp in the evening (digging latrines and getting fed). That’s the scale that would be used in establishing fortifications, not the epic scale listed. In effect, then, those areas of protection are nine times the size they should be.

So, let’s redo them on that basis:

    If d=10 miles (low), that’s 11,547 square miles.
    If d=20 miles (still low), that’s 46,188 square miles.
    If d=25 miles (reasonable), that’s 72,169 square miles.
    If d=30 miles (doable), 103,923 square miles.
    If d=40 miles (close to max), 184,444 square miles.
    If d=50 miles (max), 288,889 square miles.

And those are still misleading, because mentally, I’m thinking of this as the area protected by the central stronghold, and ignoring the satellites. To get the area per fortification, we should divide by the total number of fortifications in the pattern – in the case of the numbers cited, that’s 6×4+12=36.

    If d=10 miles (low), that’s 320.75 square miles.
    If d=20 miles (still low), that’s 1283 square miles.
    If d=25 miles (reasonable), that’s 2,004.7 square miles.
    If d=30 miles (doable), 2,886.75 square miles.
    If d=40 miles (close to max), 5,123.4 square miles.
    If d=50 miles (max), 8024.7 square miles.
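The three tables above are one scaling chain – area goes as d², an 8-hour day divides by 9, and the 36 fortifications of the 6 x 4 + 12 pattern divide it again – so they can be regenerated in one loop. (My d=40 and d=50 rows differ very slightly from those above, which were derived from the rounded 1.66-million and 2.6-million figures.)

```python
BASE_24H = 103_923                      # square miles at d = 10 mi, 24-hour march

for d in (10, 20, 25, 30, 40, 50):
    full = BASE_24H * (d / 10) ** 2     # area scales with the square of d
    eight_hour = full / 9               # 8-hour working day: (1/3) squared
    per_fort = eight_hour / 36          # 6 x 4 + 12 pattern = 36 fortifications
    print(f"d={d}: {eight_hour:,.0f} sq mi, {per_fort:,.1f} per fortification")
```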

Reasonable = 2004.7 square miles, or roughly equal to a 44.8 x 44.8 mile area. For a really tightly packed defensive structure of the one being discussed, that’s entirely reasonable – and it fits the image in my head.

In my error-strewn calculation, my logic went as follows:

    ▪ In the inner Kingdom, I think that life is easy and lived fairly casually. That points to the lower end of the scale – 10 miles a day or 20 miles a day.

    ▪ 10^2 = 100, so at 10 mi/day, 16,000 = 160 days march.
    ▪ 20^2 = 400, so at 20 mi/day, 16,000 = 40 days march.

    ▪ That’s a BIG difference. 40 is too quick, but 160 sounds a little too slow. Tell you what, let’s pick an intermediate value of convenience and work backwards.

    ▪ 100 days march to cover anywhere in 16000 square miles gives 160, and the square root of 160 is 12.65 miles per day.

Now, that logic’s not bad. But it doesn’t factor in the ‘working day’ of the infantry march – it needs to be divided by 3. And it DOES factor in my psychological trend toward making the defensive areas smaller, because my instinct was telling me they were too large – but this is the wrong way to correct for that. So this number is getting consigned to the dustbin.

After all, the ‘hostile’ and ‘benign’ factors are supposed to already take into account the threat level that these fortifications are supposed to address, and hence their relative density.

    ▪ So, let’s start with the “reasonable” 25 miles.
    ▪ Apply the ‘working day’ to get 8.333 miles.
    ▪ The measured area of the defensive structure is 47.2 ‘days march’^2.
    ▪ Each of which is 8.333^2= 69.444 miles^2 in area.
    ▪ So the defensive unit – stronghold and four satellites – covers 47.2 x 69.444 = 3277.8 sqr miles.
    ▪ Or 655.56 sqr miles each.
    ▪ Equivalent to a square 25.6 miles x 25.6 miles.
    ▪ Or a circle roughly 14.4 miles in radius.
    ▪ Base Area 173.4 units^2 = 17340 square miles.
    ▪ Adjusted for threat level, 134.3 units^2 or 13430 square miles. In other words, defensive structures are further apart because there’s less threat than normal.
    ▪ 13430 / 3277.8 = 4.1 defensive structures, of 1 hub and 4 satellites each.
    ▪ So that’s 4 hubs and 16 satellites plus an extra half-satellite somewhere.
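The bullet chain above condenses to a few lines, which also makes it easy to re-run for other zones or other choices of marching distance:

```python
day_march = 25 / 3                 # 8-hour 'working day' ≈ 8.33 miles
structure = 47.2 * day_march ** 2  # hub + 4 satellites ≈ 3,277.8 sq mi
zone7_sq_mi = 134.3 * 100          # map units are 10 miles, so x100
structures = zone7_sq_mi / structure
print(round(structures, 1))        # → 4.1 defensive structures
```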

Those satellites could be anything from a watchtower to a small fort to a hut with a couple of men garrisoned inside, depending on the danger level and what the Kingdom is prepared to spend on securing the region. The stronghold in the heart of the configuration needs to be more substantial.

Okay, so that’s Zone 7. Zone 30 is a whole different kettle of fish.

I wanted to implement a 3-lobed configuration with more overlap than the four-lobed choice made for Zone 7. And it was turning out exactly the way I wanted it to; every hub was reinforced by three satellites, every satellite reinforced by three hubs. I had the diagrams 75% done and was gearing up to measure the protected area.

Which is when the plan ran aground in the most spectacular way. There were areas where responsibility was shared two ways, and three ways, and four ways, and – at some points – six ways. It was going to take a LONG time to measure and calculate.

If I were creating Zomania as an adventuring location for real, I would have carried on. If I lived in an ideal world, without deadlines (even the very soft ones now in place at Campaign Mastery) I would have continued. I still think that it would have provided a more enlightening example for readers, because I would be doing something a little bit different and having to explain the differences and their significance.

But since neither of those circumstances is the case, and this post is already several days late due to the complications explained earlier, I am going to have to compromise on principle and re-use the configuration established for Zone 7.

Well, at least that will show the impact that the greater threat level will impose on the structure, but it leaves the outer reaches of the Kingdom less well-protected than they should be. If and when I re-edit this series into an e-book, I might well spend the extra time and replace the balance of this section – or even work the problem both ways for readers’ edification.


But, in the meantime…

Zone 30.
    ▪ Actual area 251.45 square units = 25,145 square miles.
    ▪ Adjusted for threat level = effective area 829.9 square units = 82,990 sqr miles. (in other words, the defensive structures you would expect to protect 82,990 square miles are so closely packed that they actually protect only 25,145 square miles, a 3.3-to-1 ratio.)
    ▪ Defensive Structure = 3277.8 square miles (from Zone 7).
    ▪ 82,990 / 3277.8 = 25.32 defensive structures of 5 fortifications each, or 126.6 fortifications in total. Zone 7 is 69% of the area and had a total of 20.5 fortifications, in comparison.
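If you'd like to sanity-check that arithmetic, it reduces to a few lines of Python – the variable names are mine, and the 3277.8 sqr mile structure size is the figure carried over from Zone 7:

```python
# Zone 30 fortification count, following the arithmetic above.
ACTUAL_AREA = 25145          # sqr miles
EFFECTIVE_AREA = 82990       # sqr miles, adjusted for threat level
STRUCTURE_AREA = 3277.8      # sqr miles covered by one 5-fortification structure (Zone 7)

threat_ratio = EFFECTIVE_AREA / ACTUAL_AREA      # how tightly packed the defenses are
structures = EFFECTIVE_AREA / STRUCTURE_AREA     # defensive structures needed
fortifications = structures * 5                  # 5 fortifications per structure

print(round(threat_ratio, 1), round(structures, 2), round(fortifications, 1))
# 3.3 25.32 126.6
```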

What does 0.32 defensive structures represent? Well, if I take the basic structure and ‘lop off’ two of the satellites, then it’s 3/5 of a protected area minus the overlaps. By eye, those overlaps look to be a bit more than 2 x 1/4 of one of those 1/5ths, and since 1/4 of 1/5 is 1/20th, that’s roughly 0.6-0.1 = 0.5.

If I take away a third satellite, the structure is down to 2/5 protected area minus overlaps, and those overlaps are now 1 x 1/20th, so 0.4-0.05=0.35. So, somewhere on the border, there’s a spot with one hub and one satellite.

One more point: 3.3 to 1. What does THAT really mean? Well, the defensive structure used has satellites 2.5 days march from the hub. But everything is more compressed, by that 3.3:1 ratio, so the satellites in Zone 30 are actually 2.5 / 3.3 = 0.76 day’s march from the hub. The area each commands is still the same, but there’s a lot more overlap and capacity to reinforce one another.

Another way to look at it is that there are so many fortifications that each only has to protect a smaller area. 3277.8 sqr miles / 3.3 = 993 sqr miles.

5.8.1.6.6 Sidebar: Changes Of Defensive Structure

The point that I’m going to make in this sidebar won’t make a lot of sense unless you’re paying close attention, because the Zone 30 example has the same defensive structure as Zone 7 – it’s just a lot more compressed. But imagine for a moment that there was a completely different defensive structure in Zone 30.

What does that imply for Zone 11, which lies in between the two?

You might think that it should be some sort of half-way compromise or blend between the two, but you would be wrong to do so.

If you look back at the overall zone map for Zomania (reproduced below)

…and recall that the zones are numbered in the order they were established, a pattern emerges. Zone 1 first, then Zone 2, then Zones 3-4-5-6-7, then Zones 8-9-10-11-12, and so on. Until Zones 29-32 were established, Zone 11 was the frontier. It would likely have the same defensive structure as Zone 30. Rather than fewer fortifications, it would have them at the same density as Zone 30 – but the manpower in each would be reduced.

If you know how to interpret it, the entire history of the Kingdom should be laid bare by the changes in its fortifications and defenses.

But that’s not as important as the verisimilitude that you create by taking care of little details like this and keeping them consistent. The specifics might never be overtly referenced – but they still add a little to the credibility of the creation.

5.8.1.6.7 Inns in Zone 7 – from 5.7.3

Zone 7 is noteworthy for NOT having a major road – that’s on the Zone 11 / Zone 6 side of the border. Some of the inns along that road, however, may well be over that border – it’s a reasonable expectation that half of them would count. But only the half that is located where the border runs next to the road – there’s a section at the start and another at the end where the border shifts away.

But there’s a second factor – what is the sea, if not another road to travel down? And Zone 7 has quite a lot of beach. The reality, of course, is that these are holiday destinations, and places for health recovery – but it’s a convenient way of placing them.

So that’s two separate calculations. The ‘road that is a road’ first: There are actually two sections. The longer one runs through Zones 6 and 11, as already noted; it measures out at 15 units long, or 150 miles.

The second lies in Zone 15, and it’s got a noticeable bend in it. If I straighten that out and measure it, I get 5 units or 50 miles.

Conditions:
    Road condition, terrain, good weather = 3 x 2.
    Load = 1 x 1/2.
    Everything else is a zero.
    Total: 6.5.
6.5 / 16 x 3.1 = 1.26 miles per hour.
1.26 mph x 9 hrs = 11.34 miles.
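For anyone who wants to script this, the arithmetic above reduces to the following sketch – the divide-by-16 and the 3.1 mph base walking speed come from the travel rules earlier in this series, and the function name is mine:

```python
def daily_travel_miles(condition_total, hours=9, base_mph=3.1):
    """Daily distance for ordinary travelers under the series' travel rules.

    condition_total is the sum of the condition factors (6.5 in the example)."""
    mph = round(condition_total / 16 * base_mph, 2)   # 6.5 / 16 x 3.1 = 1.26 mph
    return mph, round(mph * hours, 2)

mph, miles = daily_travel_miles(6.5)
print(mph, miles)   # 1.26 11.34
```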

Here’s the rub: we don’t know exactly where the hubs and satellites are in Zone 7, only how many of them there are to emplace. But it seems a sure bet that those areas where the road and border part ways, do so because there’s a fortification there that answers to Zone 6 or Zone 11, respectively. And that means that we can treat the entire length of the road as being between two end points.

We know from the defensive structure diagram that the base distance from Satellite to Hub is 2 1/2 days march, and that there’s a scaling of x 1.0333 (hostile) x 0.7746 (benign) = x 0.8 – and that benign factors space fortifications further apart while hostile ones bunch them together, so this factor is a divisor when calculating distances. We know that 8.333 miles has been defined as a “day’s march”.

If we put all that together, we get 2.5 x 8.333 / 0.8 = 26 miles from satellite to hub.

Armies like their fortifications on roads; it makes it faster to get anywhere. Traders like their trade routes to flow from fortification to fortification; it protects them from bandits. The general public, ditto. If a road doesn’t go to the fortification, people will create a new road and leave the official one to rot. So it can be assumed that the line of fortifications will follow the road, and be spaced every 26 miles along it, alternating between hub and satellite.

    150 miles / 26 = 5.77 of them.
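The same spacing arithmetic, as a quick sketch (constants as quoted above; the names are mine):

```python
DAYS_MARCH = 8.333            # miles per "day's march", defined earlier in the series
SCALING = 1.0333 * 0.7746     # hostile x benign factors, ~0.8 (a divisor for distances)

spacing = 2.5 * DAYS_MARCH / SCALING     # satellite-to-hub distance, ~26 miles
count = 150 / round(spacing)             # fortifications along the 150-mile section

print(round(spacing), round(count, 2))   # 26 5.77
```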

It’s an imperfect world; that 0.77 means that you have one of three situations, as shown below:

The first figure shows a hub at the distant end of the road. The second shows a hub at the end of the road closest to the capital. And the third shows the hubs not quite lining up with either position.

But those aren’t the actual ends of the road – this is just the section that parallels the border of Zone 7, or vice-versa. So the last one is probably the most realistic.

Now, let’s place Inns – one every 11.34 miles. But we have to do them from both ends – one showing 1 day’s travel for ordinary people headed out, and one showing them heading in. Just because I’m Australian, and we drive on the left, I’ll put outbound on the south side and inbound on the north.

Isn’t that annoying? They don’t quite line up – to my complete lack of surprise. Look at the second in-bound inn – it’s about 20% of a day short of getting to the satellite, and that puts it so close that it’s not worth stopping there; you would keep going.

Well, you can’t make a day longer, but you can make it shorter. And that makes sense, because these are very much average distances.

I’ve shortened the days for the ordinary traveler – including merchants – just a little, so that every 5th inbound Inn is located at a Stronghold, and every 5th outbound inn is located at a satellite. Every half-day’s travel now brings you to somewhere to stop for a meal or for the night.

It must be added that it’s entirely possible that not all of these Inns will actually be in service. Maybe only half of them are actually operating. Maybe it’s only 1/3. But, given its position within the Kingdom, there’s probably enough demand to support most of these, so let’s do a simple little table:

    1 inn functional
    2 inn functional
    3 inn functional but 1/4 day closer
    4 inn functional but 3/4 day farther away
    5 inn not functional
    6 inn not functional, and neither is the next one.

Applying this table produces the following (for some reason, my die kept rolling 3s and 6s):

Even here, in this ‘safe’ part of the Kingdom, travelers will be forced to camp by the roadside.

And that’s where I’ll have to leave it, for this post. I had hoped to get all of the Zomania examples done, but the problems early on put paid to that, and didn’t even leave me enough time to get Zone 30 detailed through to the inn stage – let alone up to date! That’s obviously for the next post….


Trade In Fantasy Ch. 5: Land Transport, Pt 5a


This entry is part 19 of 20 in the series Trade In Fantasy

This post continues the text of Part 5 of Chapter 5. Its content has been added to the parent post here and the Table of contents updated. I have decided at the last minute to let the featured image (but not the head image) evolve with each post.

I have a series of images of communities of different sizes which will be sprinkled throughout this article. This is the first of these – something so sparsely-settled that it barely even qualifies as a community. It’s more a collection of close rural neighbors! Image by Jörg Peter from Pixabay

5.8.1 Villages

The village is the fundamental unit of the population distribution simulation – everything starts there and flows from it.

    5.8.1.1 Village Frequency

    I’ve given this section a title that I think everyone will understand, but it’s not actually what it’s all about. The real question to be answered here is, how big is the Locus surrounding a population?

    The answer differs from one Demographic Model to another, unsurprisingly.

    The area of a given Locus is:

        SL = MF x (Pop)^0.5 x k,
            where,
            SL = Locus Size
            MF = Model Factor
            Pop is the population of the village
            and k = a constant that defines the units of area.

    The base calculation, with a k of 1, is measured in days of travel. That works for a lot of things, but comparison to a base area of 10,000 km^2 isn’t one of them. For that, we need a different k – one based on the Travel Ranges defined in previous parts of this series.

    Section 5.7.1.14.5.1 gives answers based on travel speed, more as a side-issue than anything else, based on the number of miles that can be traversed in a day:

      (Very) Low d = 10 miles / day
      Low d = 20 miles / day
      Reasonable d = 25 miles / day
      Doable d = 30 miles / day
      Close To Max (High) d = 40 miles / day
      Max d = 50 miles / day
          ( x 1.61 = km).

    — but these are the values for Infantry Marching, and that’s a whole other thing.

    Infantry march faster than people walk or ride in wagons. The amount varies depending on terrain (that’s the main variable in the above values), but – depending on who you ask – it’s 1 2/3 or 2 or 2.5 times.

    But, because they travel in numbers, they can march for less time in a day. Some say 6 hours, some 7, some 8. Ordinary travelers may be slower, but they can operate for all but an hour or two of daylight. That might be 8-2=6 or 7 hours in winter, but it’s more like 12-2=10 or 11 hours in summer.

    And it has to be borne in mind that the basis for these values assumes travel in Summer – at least in medieval times. But we want to take the seasons out of the equation entirely and set a baseline from which to adjust the list given earlier.

    One could argue that summer is when the crops are growing, and therefore that should be the basis of measurement, given that we’re looking for the size of a community’s reach.

    So let’s take the summer values, and average them to 10.5 hours. When you take the various factors into account and generate a table (I used 6, 6.5, 7, 7.5, and 8 for army marching times per day, plus the various figures for speed cited, with 2.25 as an additional intermediate value), work out all the values that it might be, and average them, you get 1.04. That’s so small a change as to be negligible – 1.04 x 50 = 52. We will have far bigger approximations than that!

    So we can use the existing table as our baseline. Isn’t that convenient?

    But which value from amongst those listed to choose? Overall, unless there’s some reason not to, you have to assume that terrain is going to average out when you’re talking about a baseline unit of 10,000 sqr kilometers. So, let’s use the “Reasonable” value unless there’s reason to change it.

    And that gives a conversion rate of 1 day’s travel = roughly 25 miles, or 40 km. And those are nice round numbers.

    Now, a locus is roughly circular in shape, so is that going to be a radius or a diameter? Well, a “market day” is how far a peasant or farmer can travel with their goods and return in a day, so I think we’re dealing with a radius of 1/2 the measurement, so that measurement must be the diameter of the locus.

    Which means that the base radius of a locus is 12.5 miles or 20 km.

    In an area where the terrain is friendly in terms of travel, this could inflate to twice as much; in an area where terrain makes travel difficult, it could be 1/2 as much or less. But if we’re looking for a baseline, that works.

    12.5 miles radius = area roughly 500 sqr miles = area 1270 sqr km. So in 10,000 sqr km, we would expect to find, on average, 7.9 loci. But that’s without looking at the population levels and the required Model Factors.
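    Those ‘nice round numbers’ can be verified in a couple of lines – the 1.609344 km-per-mile conversion is the only thing I’ve added:

```python
import math

radius_miles = 12.5
radius_km = radius_miles * 1.609344           # ~20 km

area_sq_miles = math.pi * radius_miles ** 2   # "roughly 500 sqr miles"
area_sq_km = math.pi * radius_km ** 2         # "roughly 1270 sqr km"
loci = 10000 / area_sq_km                     # loci per 10,000 sqr km

print(round(area_sq_miles), round(area_sq_km), round(loci, 1))   # 491 1271 7.9
```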

    The minimum size for an English Village is 240 people. The Square Root of 240 is 15.5.

    So the formula is now 1270 = 15.5 x 20 x Model Factor, and the Model Factor for England conditions and demographics is 4.1. Under this demographic model, there will be 4.1 Village Loci – which is the same thing as 4.1 villages – in 10,000 sqr km.

    Having worked one example out to show you how it’s done, here are the Model Factors for all the Demographic Models:

    ▪ Imperial Core: 480^0.5 = 21.9, and 21.9 x 20 x Model Factor = 1270, so MF = 2.9
    ▪ Germany (HRE): 400^0.5=20, and 20 x 20 x MF = 1270, so MF = 3.175
    ▪ France: 320^0.5 = 17.9, and 17.9 x 20 x MF = 1270, so MF = 3.55
    ▪ Coastal Mercantile Model: 280^0.5 = 16.733, and 16.733 x 20 x MF = 1270, so MF = 3.8
    ▪ England: 4.1
    ▪ Frontier Nation: 200^0.5 = 14.14, and 14.14 x 20 x MF = 1270, so MF = 4.5
    ▪ Scotland: 160^0.5 = 12.65, and 12.65 x 20 x MF = 1270, so MF = 5.02
    ▪ Tribal / Clan Model: 80^0.5 = 8.95, and 8.95 x 20 x MF = 1270, so MF = 7.1
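    For those who’d rather script this than grind through each model by hand, here’s a minimal sketch of the same calculation – the function name is mine, and the 1270 and 20 constants are the figures from the worked England example:

```python
import math

BASE_LOCUS_AREA = 1270   # sqr km, from the 20 km base radius
K = 20                   # the constant from the worked England example

def model_factor(min_village_pop):
    """Model Factor: solves BASE_LOCUS_AREA = sqrt(pop) x K x MF for MF."""
    return BASE_LOCUS_AREA / (math.sqrt(min_village_pop) * K)

models = {
    "Imperial Core": 480, "Germany (HRE)": 400, "France": 320,
    "Coastal Mercantile": 280, "England": 240, "Frontier Nation": 200,
    "Scotland": 160, "Tribal / Clan": 80,
}
for name, pop in models.items():
    print(f"{name}: MF = {model_factor(pop):.2f}")   # England comes out at 4.10
```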

    So, why didn’t I simply state the number of loci (i.e. the number of villages) in an area?

    It’s because that’s a base number. When we get to working on actual loci or zones, these can shrink or grow according to other factors. This is a guideline – but to define an actual village and its surrounds, we will need to use the MF. Besides, you might want to generate a specific model for a specific Kingdom in your game.

    You may be wondering, then, why it should be brought up at all, especially at this stage? The answer to those questions is that the area calculated is a generic base number which may bear only a passing resemblance to the actual size of the locus.

    A locus will continue to expand until it hits a natural boundary, a border, or equidistance to another population center. Very few of them will actually be round in shape – some of them not even approximately.

    The ratio between ACTUAL area and BASE area is an important factor in calculating the size of a specific village.

    An example of the ‘real borders’ of a Locus

    To create the above map, I made a copy of the base map (shown to the left). At the middle top and bottom, I placed a dot representing the Locus ‘radius’.

    At the left top, another dot marked the half-way point to the next town (top left), where it intersected a change of terrain – in this case, a river.

    At the top right, doing the same thing would have made the town at top right a bit of a mixed bag – it already has forests and hills and probably mountains. I didn’t want it to have a lot of farmland as well. So I deliberately let the current locus stretch up that way. The point below it is also slightly closer to the top right town than it would normally be, but that’s where there is a change of terrain – the road. I tossed up whether the locus in question should include the intersection and road, but decided against it.

    And so on. Once I had the main intersection points plotted, I thought about intermediate points – I didn’t want terrain features to be split between two towns, they had to belong to one or the other. You can see the results in the “bites” that are taken out of the borders of the locus at the bottom.

    If you use your fingers, one pointing at the town in the center and the other at the top-middle intersection point, and then rotate them to get an idea of the ‘circular’ shape of the locus, you can see that it’s missing about 1/6 of its theoretical area to the east, another 1/6 to the south, and a third 1/6th to the west. It’s literally 1/2 of the standard size. That’s going to drive the population down – but it’s fertile farmland, which will push it up. But that’s getting ahead of ourselves.

    As an exercise, though, imagine that the town lower right wasn’t there. The one that’s on the edge of the swamp. Instead of ending at a point at the bottom, the border would probably have continued, including in the locus that small stand of trees and then following the rivers emerging from the swamp, and so including the really small stand of trees. The Locus wouldn’t stop until it got to the swamp itself. The locus would have extended east to the next river, in fact, encompassing forest and hills until reaching the East-road, which it would follow inwards until it joined the existing boundary. It would still have lost maybe 1/12th in the east, but it would have gained at least that much and probably more in the south, instead of losing 1/3. The locus would be 1 – 1/12 + 1/3 – 1/12 – 1/3 = 10/12 of normal instead of 1/2 of normal.

    5.8.1.2 Village Base Size

    If you look at the models, you will notice “Base Village” and a population count, and might be fooled into thinking that everything in that range is equally likely. It’s not.

    Take the French model – it lists the village size as 320-480.

    First, what’s the difference, high minus low? In this case, it’s 160. We need to divide that by 8 as a first step – which in this case is a nice, even, 20.

    Half of 20 is 10, and three times 10 is 30. Always round these UP.

    With that, we can construct a table:

        01-30 = 320
        31-40 = 321-350 (up by 30)
        41-50 = 351-380 (up by 30)
        51-60 = 381-400 (up by 20)
        61-70 = 401-420 (up by 20)
        71-75 = 421-430 (up by 10)
        76-80 = 431-440 (up by 10)
        81-85 = 441-450 (up by 10)
        86-90 = 451-460 (up by 10)
        91-95 = 461-470 (up by 10)
        96-00 = 471-480 (up by 10)

    I used Gemini to assist in validating various elements of this section, and it thought the “up by 30” terminology was confusing and should be replaced with something more formal.

    I disagree. I think the more colloquial vernacular will get the point across more clearly.

    It was also concerned – and this is a more important point – that GMs couldn’t implement this roll and the subsequent sub-table quickly. I disagree, once again – I’ve seen far more complicated constructions for getting precise population numbers than two d% rolls, especially since the same tables will apply to all areas within the Kingdom that are similar in constituents. Everywhere within a given zone, in fact, unless you deliberately choose to complicate that in search of precision.

    In general, you construct one set of tables for the entire zone – and can often copy those as-is for other similar zones as well. Maybe even for a whole Kingdom.

    The d% breakdown is always the same percentages, and there are always 2 “up by 3 x 1/2”s, 2 “up by 2 x 1/2”s, and 5 “up by 1/2”s – with the final one absorbing any rounding errors; in this example there aren’t any.

    We then construct a set of secondary tables by dividing our three (or four) increments by 10. In this case, 30 -> 3, 20 -> 2, 10 -> 1. And we apply the same d% breakdown in exactly the same way, but from a relative position:

    So:
        1/2 x 3 = 1.5, rounds to 2; 3 x 1.5 = 4.5, rounds to 5.
        1/2 x 2 = 1; 3 x 1 = 3.
        1/2 x 1 = 0.5, rounds to 1; 3 x 1 = 3.

    The “Up By 30” Sub-table reads:

        01-30 = +0
        31-40 = +5
        41-50 = +5+5 = +10
        51-60 = +10+3=+13
        61-70 = +13+3=+16
        71-75 = +16+2 = +18
        76-80 = +18+2 = +20
        81-85 = +20+2 = +22
        86-90 = +22+2 = +24
        91-95 = +24+2 = +26
        96-00 = +30 (up by whatever’s left).

    The “Up By 20” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+2 =+8
        61-70 = +8+2=+10
        71-75 = +10+1 = +11
        76-80 = +11+1 = +12
        81-85 = +12+1 = +13
        86-90 = +13+1 = +14
        91-95 = +14+1 = +15
        96-00 = +20 (up by whatever’s left).

    The “Up By 10” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+1 =+7
        61-70 = +7+1=+8
        71-75 = +8+1 = +9
        76-80 = +9+1 = +10
        81-85 = +0-1 = -1
        86-90 = -1-1 = -2
        91-95 = -2-1 = -3
        96-00 = -3-1 = -4

    Notice what happened when I ran out of room in the “+10”? The values stopped going up and, starting again from +0, started going DOWN.

    It takes just two rolls to determine the Base Population of a specific village with sufficient accuracy for our needs within a zone.

    EG: Roll of 43: Main Table = 380, in an up-by-30 result. So we use the “Up By 30” Sub-table and roll again: 72, which gives a +18 result. So the Base population is 380+18=398.
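    If you’d rather automate the two rolls, here’s a sketch that hard-codes the French-model tables above – the function and table names are mine, and a roll of ‘00’ is treated as 100:

```python
# Main d% table for the French model (320-480): (low, high, band top, sub-table).
MAIN = [
    (1, 30, 320, None),
    (31, 40, 350, 30), (41, 50, 380, 30),
    (51, 60, 400, 20), (61, 70, 420, 20),
    (71, 75, 430, 10), (76, 80, 440, 10), (81, 85, 450, 10),
    (86, 90, 460, 10), (91, 95, 470, 10), (96, 100, 480, 10),
]

# Cumulative offsets per d% band, transcribed from the "Up By" sub-tables above.
SUB_BANDS = [(1, 30), (31, 40), (41, 50), (51, 60), (61, 70),
             (71, 75), (76, 80), (81, 85), (86, 90), (91, 95), (96, 100)]
SUB = {
    30: [0, 5, 10, 13, 16, 18, 20, 22, 24, 26, 30],
    20: [0, 3, 6, 8, 10, 11, 12, 13, 14, 15, 20],
    10: [0, 3, 6, 7, 8, 9, 10, -1, -2, -3, -4],
}

def base_population(roll1, roll2):
    """Two d% rolls -> Base Population, following the worked example."""
    for lo, hi, top, key in MAIN:
        if lo <= roll1 <= hi:
            if key is None:
                return top                    # flat minimum band
            for (slo, shi), offset in zip(SUB_BANDS, SUB[key]):
                if slo <= roll2 <= shi:
                    return top + offset

print(base_population(43, 72))   # 380 + 18 = 398, matching the example
```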

    These results are intentionally non-linear.

    Optional:

    If you want more precise figures, apply -3+d3.

    Or -6+d6.

    Or anything similar – though I don’t really think you should go any larger than -10+d10 – and I’d consider -8+2d6 first.

    I have to make it clear: this relates to the population of a specific village in a specific zone, not a generic one. For anything of the latter kind, continue to use the minimum base population. I just thought that it bookended the ‘real locus’ discussion. We had to have the former because it affects what terrain influences the town size and how much of it there is; the latter is just a bonus that seemed to fit.

    5.8.1.3 Village Demographics

    Let’s start by talking Demographics, both real-world and Fantasy-world.

    The raw population numbers are not as useful as numbers of families would be. But that’s incredibly complicated to calculate and there’s no good data – the best that I could get was a broad statement that medieval times had a child mortality rate (deaths before age 15) of 40-50%, an infant mortality rate (deaths before age 1) of 25-35%, and an average family size of 5-7 children.

    If we look at modern data, we get this chart:

    Source: Our World In Data, cc-by, based on data from the United Nations. Click the image to open a larger version (3400 x 3003 px) in a new tab.

    I did a very rough-and-ready curve fitting in an attempt to exclude social and cultural factors and derive a basic relationship for what is clearly a straight band of results:

    Derivative work (see above), cc-by, extrapolating a relationship curve in the data

    …from which I extracted two data points: (0%,1.8) and (10%,5.6), which in turn gave me: Y = 0.38 X + 1.8, which can be restated, X = 2.63Y – 4.74. And that’s really more precision than this analysis can justify, but it gives a readout of child mortality for integer family sizes.

    Yes, I’m aware that the real relationship isn’t linear. But this simplified approximation is good enough for our purposes.

    That, in turn, gives me the following:

        Y = Typical Number Of Children,
        X = Overall Child Mortality Rate

        Y, X:
        1, -3%
        2, 0%
        3, 3%
        4, 5%
        5, 8%
        6, 11%
        7, 13%
        8, 16%
        9, 18%
        10, 21%
        11, 24%
        12, 26%

    …so far, so good.

    Next, I need to adjust everything for the rough data points that we have for medieval times, when bearing children was itself a mortality risk for the mothers.

    5-7 children, 40-50%

    so that gives me (5, 8, 40) and (7, 13, 50) – more useful in this case as (8, 40) and (13, 50) – which works out to Z = 2 X + 24.
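    Expressed as code – with function names of my own choosing – the two linear fits are:

```python
def modern_mortality(children):
    """X = 2.63 Y - 4.74: modern child mortality % for a family of Y children."""
    return 2.63 * children - 4.74

def medieval_mortality(modern_pct):
    """Z = 2 X + 24: the medieval adjustment, anchored to (8, 40) and (13, 50)."""
    return 2 * modern_pct + 24

print(medieval_mortality(8), medieval_mortality(13))   # 40 50
```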

        Z=Child Mortality, Medieval-adjusted

        Y, X, Z:
        1, -3%, 18%
        2, 0%, 24%
        3, 3%, 30%
        4, 5%, 34%
        5, 8%, 40%
        6, 11%, 46%
        7, 13%, 50%
        8, 16%, 56%
        9, 18%, 60%
        10, 21%, 66%
        11, 24%, 72%
        12, 26%, 76%

    But here’s the thing: realism and being all grim and gritty might work for some campaigns, but for most of us – no. What we need to do now is apply a “Fantasy Conversion” which contains just enough realism to be plausible and replaces the balance with optimism.

    I think division of Z (the medieval-adjusted child mortality rate) by 3 sounds about right – YMMV. That gives me the F values below – but I also checked on a ratio of 2.5, which gives me the F2 values.

    Gemini suggested using 3.5 or 4 for an even ‘softer’ mortality rate, and 2.25 or 2 for a grittier one.

    In principle, I don’t have a problem with that – and part of the reason why I’m not just throwing the mechanics at you, but explaining how they have been derived, is so that GMs can use alternate values if they think them appropriate to their specific campaigns.

    I don’t just want to feed the hungry, I want to teach them to fish, to paraphrase the old proverb.

        F= Fantasy Adjusted Child Mortality Rate
        F2 = more extreme Child Mortality Rate

        Y, X, Z, F, F2:
        1, -3%, 18%, 6%, 7%
        2, 0%, 24%, 8%, 10%
        3, 3%, 30%, 10%, 12%
        4, 5%, 34%, 11%, 14%
        5, 8%, 40%, 13%, 16%
        6, 11%, 46%, 15%, 18%
        7, 13%, 50%, 17%, 20%
        8, 16%, 56%, 19%, 22%
        9, 18%, 60%, 20%, 24%
        10, 21%, 66%, 22%, 26%
        11, 24%, 72%, 24%, 29%
        12, 26%, 76%, 25%, 30%
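    The Fantasy Conversion is just a division and a round; a minimal sketch (function name mine) that reproduces the F and F2 columns from the Z values:

```python
def fantasy_rates(z_pct):
    """F = Z / 3 and F2 = Z / 2.5, rounded to whole percentages."""
    return round(z_pct / 3), round(z_pct / 2.5)

for z in (18, 40, 76):
    print(z, fantasy_rates(z))   # (6, 7), (13, 16), (25, 30) – matching the table
```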

    I think the F values are probably more appropriate for High Fantasy, while the F2 are better for more typical fantasy – but you’re free to use this information any way you like, the better to suit your campaign world.

    You might decide, for example, that averaging the Medieval Adjusted Values with the F2 is ‘right’ – so that 5 children would indicate (40+16)/2 = 28% mortality.

    Social values can also adjust these values – traditionally, that means valuing male children more than females. But in Fantasy / Medieval game settings, I think that would be more than counterbalanced, IF it were a factor, by the implied increased risks from youthful adventuring. In a society that practices such gender-bias, it would not surprise me if the ultimate gender ratio was 60-40 or even 70-30 – in favor of Girls.

    5.8.1.3.1 Maternal Survival

    The next element to consider is the risk of maternal death in childbirth. That’s even harder to pin down data on, but 1-3% per child is probably close to historically accurate. Balanced around that is the greater risks from adventuring, and the availability of clerical healing. So I’m extending the table to cover 4, 5, and 6%, but you are most likely to want the values in the first columns. To help distinguish these extreme possibilities from the usual ones, they have been presented in Italics.

    We’re not interested so much in the number of cases where it happens as in the number of cases where it doesn’t – the % of families with living mothers, relative to the number of children.

        Y, @1, @2, @3, @4, @5, @6:
        1, 99%, 98%, 97%, 96%, 95%, 94%
        2, 98.0%, 96.0%, 94.1%, 92.2%, 90.3%, 88.4%
        3, 97.0%, 94.1%, 91.3%, 88.5%, 85.7%, 83.1%
        4, 96.1%, 92.2%, 88.5%, 84.9%, 81.5%, 78.1%
        5, 95.1%, 90.4%, 85.9%, 81.5%, 77.4%, 73.4%
        6, 94.1%, 88.6%, 83.3%, 78.3%, 73.5%, 69.0%
        7, 93.2%, 86.8%, 80.8%, 75.1%, 69.5%, 64.8%
        8, 92.3%, 85.1%, 78.4%, 72.1%, 66.3%, 61.0%
        9, 91.4%, 83.4%, 76.0%, 69.3%, 63.0%, 57.3%
        10, 90.4%, 81.7%, 73.7%, 66.5%, 59.9%, 53.9%
        11, 89.5%, 80.1%, 71.5%, 63.8%, 56.9%, 50.6%
        12, 88.6%, 78.5%, 69.4%, 61.3%, 54.0%, 47.6%

    The method of calculation is 100 x ( 1- [D/100] ) ^ Y. Just in case you want to use different rates than these.
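    In code, assuming my function name, the whole table reduces to that one expression:

```python
def mother_survival_pct(death_rate_pct, children):
    """% of families whose mother survives `children` births, at the given
    per-birth maternal death rate: 100 x (1 - D/100)^Y."""
    return round(100 * (1 - death_rate_pct / 100) ** children, 1)

print(mother_survival_pct(1, 2), mother_survival_pct(3, 7))   # 98.0 80.8
```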

    There does come a point at which the likelihood of maternal death begins to limit the size of the average family, though, and I think the 6% values are getting awfully close to that mark.

    Let’s say that a couple have 6 children, right in the middle of the historical average. If the mother falls pregnant a 7th time, then at 6% per birth the cumulative risk means she has roughly a 1 in 3 chance of having died in childbirth by that point (and a fair risk of the child perishing with her). Which means that she HAS no more children. But if she beats those odds to have 7 children, her chances are even worse when it comes to child #8, and so on.

    Of all the cases with a mother who survived childbirth, we then need to factor in death from all other causes – monsters and adventuring and mischance and so on. Fantasy worlds tend to be dangerous, so this could be quite high – maybe as much as 5% or 10% or 20%. So multiply the living mothers by 0.8, or 0.7, or 0.9 – whatever you consider appropriate – to allow for this.

    This rural community is obviously alongside a major river or coastline – the proximity of the mountains suggests the first, but isn’t definitive. The name offers a clue: ‘Hallstatt’, which to me sounds Germanic, and suggests that the waterway may be the Rhine. Or not, if I’ve misinterpreted. Image by Leonhard Niederwimmer from Pixabay

    5.8.1.3.2 Paternal Survival

    The result is the % of families with a surviving mother. So how many surviving fathers are there per surviving mother? Estimates here vary all over the shop, and more strongly reflect social values. But if I’m suggesting 5% – 20% mortality for mothers from other sources, the same would probably be reasonably true of fathers – if those social values don’t get in the way.

        0.95 x 0.95 = 90.25%.
        0.9 x 0.9 = 81%.
        0.85 x 0.85 = 72.25%
        0.8 x 0.8 = 64%.

    Those values give the percentages in which both parents have survived to the birth of the average number of children.

    If you’re using 10% mortality from other causes, then in 90% of cases in which the mother has died, the father has survived. But in 10% of the cases in which the mother has succumbed, the children are orphaned by the loss of the other parent.

    The higher this percentage, the higher the rate of survivors remarrying and potentially doubling the size of their households at a stroke. And that will distort the average family size far more quickly than the actual mortality percentages, unless there is some social factor involved – maybe it’s expected that parents with children will only marry single adults without children, for example.

    The problem with this approach is that if it’s the mother who is remarrying, this puts her right back on that path to mortality through childbirth; the child-count ‘clock’ does not get reset. If it’s a surviving father marrying a new and childless wife, it DOES reset, because the new mother has not had children previously.

    In a society that permits such actions, there is a profound dichotomy at its heart that favors larger families for husbands who survive while placing mothers who survive at far greater risk of the family becoming a burden to the community – which is likely to change that social acceptance. Paradoxically, a double standard is what’s needed to give both parents a more equal risk of death, and a more equal chance of surviving.

    5.8.1.3.3 Childless Couples

    Next, let’s think about the incidence of Childless Couples. We can state that there’s a given chance of pregnancy in any given year of marriage; but once it happens, there is just under a full year before that chance re-emerges.

        Year 1: A% -> 1 child born
        Year 2: (100-A) x A% -> 1 child born, A%^2 -> 2 children born
        Year 3: (100-A)%^2 x A% -> 1 child born, (100-A) x A% -> 2 children born, A%^3 -> 3 children born

    … and so on.

    This quickly becomes difficult to calculate, because each row adds 1 to the number of columns, and it’s easy to lose track.

    But here’s the interesting part: we don’t care. To answer this question, there’s a far simpler calculation.

    In any given year, B couples will marry, and (100-A)% of them will not have children in the course of that year. If we take B to be the average, rather than a value specific to a given year, then B couples also married the year before, and (100-A)% of them were childless at the end of that year – which means that in the course of their second year of marriage, A% will have children and stop being counted in this category, while (100-A)% will not, and will still count.

    Adding these up, we get (100-A)% + (100-A)%^2 + … and so on. And these additions will get progressively and very rapidly smaller.

    Let’s pick a number, by way of example – let’s try A=80%, just for the sake of argument.

    We then get 20% + 4% + 0.8% + 0.16% + 0.032% + 0.0064% … and I don’t think you’d really need to go much further, the increases become so small. I pushed on one more term (0.00128%) and got a total of 24.99968%. I pushed further with a spreadsheet, and not even 12 years was enough to cross the 25% mark – but it was getting ever closer to it. Close enough to say that for A=80, there would be 25 childless couples for every… how many?
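    This running total is just a geometric series, and it’s easy to verify with a few lines of code (a minimal sketch, using the same A=80 example):

```python
def childless_fraction(a_pct, years):
    """Sum the series (100-A)% + (100-A)%^2 + ... for `years` terms.

    a_pct is A, the percentage of couples who have a child in any given
    year; the result is the percentage of couples still childless."""
    q = (100 - a_pct) / 100                  # chance of no child in one year
    return sum(q ** y for y in range(1, years + 1)) * 100

# For A=80 the series closes in on (100-80)/80 x 100 = 25%, never crossing it.
for years in (1, 2, 3, 6, 12):
    print(years, childless_fraction(80, years))
```

    Even twelve terms land a hair under the 25% limit, matching the spreadsheet result described above.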

    The answer to that question comes back to the definition of A: it’s the number of couples out of 100 who have a child in any given year. So, over 12 years, that’s a total of 1200 couples. And 25 / 1200 = 2.08%.

    I did the math – cheating, I used a spreadsheet – and got the following, all out of 1200 couples:

        A%, C, [C rounded]
        80%, 25.00, 25
        75%, 33.33, 33
        70%, 42.86, 43
        65%, 53.85, 54
        60%, 66.67, 67
        55%, 81.81, 82
        50%, 99.98, 100
        45%, 122.13, 122
        40%, 149.67, 150
        35%, 184.66, 185
        30%, 230.10, 230
        25%, 290.50, 291
        20%, 372.51, 373

    But that has to mean that the rest of those 1200 couples do have children – and the number of children will approach the average number that you chose.

    So if you pick a value for A, you can calculate exactly how many childless couples there are relative to the number of families with children:

        A=45%, C=122:

        1200-122 = 1078
        1078 families with children, 122 childless couples
        1078 / 122 = 8.836
        8.836 + 1 = 9.836
        so 1 in 9.836 families will be childless couples.
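    The whole table, and the ratio calculation above, can be reproduced with a short script (a sketch, using the same 12-year window and 100 marriages per year as the text):

```python
def childless_count(a_pct, years=12, marriages_per_year=100):
    """Childless couples at steady state: each year's cohort of marriages
    contributes (100-A)%^n couples still childless after n years."""
    q = (100 - a_pct) / 100
    return marriages_per_year * sum(q ** y for y in range(1, years + 1))

for a in range(80, 15, -5):                  # the A values from the table
    print(f"A={a}%: C={childless_count(a):.2f}")

# The A=45 worked example: how many families per childless couple?
c = round(childless_count(45))               # 122
families = 12 * 100 - c                      # 1078 of the 1200 couples
print("1 in", round(families / c + 1, 3), "families is a childless couple")
```

    The computed values match the table to within rounding, which confirms that the spreadsheet was summing the same 12-term series.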

    5.8.1.3.4 Unwed Singles

    The social pressure to marry has varied considerably through the ages, but the greater the dangers faced by the community, the greater this pressure is going to be. And the fitter and healthier you are, the more that pressure is amplified.

    This is inescapable logic – the first duty of any given generation in a growing society is to replace the population who have passed away, and it takes a long time to turn children into adults.

    You could calculate the average lifespan, deduct the age of social maturity, and state that society frowns heavily on unwed singles above that age, with the social pressure growing each year as the individual approaches that age – and that would be a valid approach.

    The problem is that the average lifespan is complicated by those high rates of childhood death, and trying to extract that factor becomes really complicated and messy. And then you throw in curveballs like Elves and Dwarves, with their radically different lifespans, and the whole thing ends up in a tangled mess.

    So, I either have to pull a mathematical rabbit out of my hat, or I do the sensible thing and get the GM to pick a social practice and do my best to make it an informed choice.

    While a purely mathematical approach is possible, the more that I looked at the question, the more difficult it became to factor every variable into the equation.

    Want the bare bones? Okay, here goes.

    For a given population, P, there are B marriages a year, removing B x 2 unwed individuals from the population. We can already extract the count of those who are ineligible for marriage due to age, because they are all designated as children.

    We can subtract the quantity of childless couples who are already wed in a similar fashion to the calculations of the previous subsection.

    The end result is the number of unwed singles of marriageable age who have not married. Setting P at a fixed value – say 100 people – we can then quickly determine the number of unmarried singles.

    What ultimately killed this approach was that it was – in the final analysis – using a GM estimate of B as a surrogate for getting the GM to estimate the % of singles in their community – and doing so in a manner that was less conducive to an informed choice, and requiring a lot of calculations to end up with the number that they could have directly estimated in the first place.

    Nope. Not gonna work in any practical sense.

    So, instead, let’s talk about the life of the social scene – singles culture. There is still going to be all that social pressure to marry and contribute to the population, especially if you are an even half-successful adventurer, because that makes you one of the healthiest, wealthiest, and most prosperous members of the community.

    It can be argued that instead of using the average lifespan (with all its attendant problems) and deducting the age of maturity (i.e. the age at which a child becomes an adult) to determine at what age a couple have to have children in order to keep the population at least stable – you need two children for that, since there are two adults involved, and you need to take child mortality into consideration, dividing those 2 by the survival rate and rounding up – you should use the age at which the mother’s mortality in childbirth begins to climb, and work back from that age. In modern times, that’s generally somewhere in the thirties, maybe up to 40. That doesn’t mean that older women can’t have children, just that under these circumstances, the risks of dying before you have enough offspring are considered too high by the general culture.

    But what does that really get you? There’s always going to be some age at which the pressure to wed starts to grow. Shifting it this way or that by a couple of years won’t change much.

    Looking at it from the reverse angle – how much single life will society tolerate – can be far more useful.

    I would suggest a base value of a decade. Ten years to be an adventurer and live life on the edge.

    In high-danger societies, especially those with a high mortality rate, that might come back 2 or 3 years – at its most extreme, 5. That’s all the time you have to focus on becoming a professional who is able to support a family, or at least to set your feet firmly on that path.

    In low-danger societies, especially those with a lower mortality rate, it might get pushed out a few years, maybe even another 5. That’s enough time that you can sow some wild oats and still settle down into someone respectable within the community.

    How long is the typical apprenticeship? In medieval times? In your fantasy game-world? From the real world, I could bandy about numbers like 4 years, or 5 years, or 5 years and 5 more learning on the job, or repaying debts to the master that trained you. And you end up with the same basic range – 5-15 years.

    What is the age of maturity in your world? Again, I could throw numbers around – 18 or 21 seem to be the most common in modern society, but 16 (even 15) has its place in the discussion – that’s how old you had to be back when I was younger before you could leave school and pursue a trade, i.e. becoming an apprentice. But I have played in a number of games where apprenticeships started at eight, or twelve, and lasted a decade – and THEN you got to start repaying your mentor for the investment that he’s made in you. With interest.

    Does there come a point where people are deemed anti-social because they have not married, and find their prospects of attracting a husband or wife diminishing as a result? Don’t say it doesn’t happen – there is plenty of real-life evidence that it exists as a social undercurrent, one that shifts, sometimes intensifying and sometimes weakening, without any real understanding of the factors that drive the phenomenon. But forget the real world, and think about the game-world.

    How optimistic / positive is the society? How grim and gritty?

    Think about all these questions, because they all provide context to the basic question: What percentage of the population are unwed with no (official) children?

    Here’s how I would proceed: Pick a base percentage. For every factor you’ve identified that gives greater scope for personal liberty, add 2%. For every factor that demands the sacrifice of some of that liberty, from society’s point of view, subtract 2%. In any given society, there is likely to be a blend of factors, some pushing the percentage up, and some down – but in more extreme circumstances, they might all push the same way. If you identify a factor as especially weak, only adjust by 1%; if you judge a factor as especially strong, adjust by 3 or even 4%.

    In the end, you will have a number.

    Let me close out this section with some advice on setting that base percentage.

    There are two competing and mutually-exclusive trains of thought when it comes to these base values. Here’s one:

    ▪ In positive societies, low child mortality means fewer young widows/widowers. The society is more stable, allowing for strong family formation and early marriage. Base rate is low.

    ▪ In moderate societies, dangers still disrupt family units, leading to a moderate rate of single, adult households. Base rate is moderate.

    ▪ In dangerous societies, high death rates mean many broken families, orphans, and single parents. The number of adult individuals living outside a stable family unit is maximized. Base rate is high.

    Here’s the alternative perspective:

    ▪ Positive societies produce less social pressure and greater levels of personal freedom, reducing the rate of marriage and increasing the capacity for unwed singles. Base rate is high.

    ▪ Moderate societies have a positive social pressure toward marriage at a younger adult age, and less capacity for personal liberty. Base rate is moderate.

    ▪ Societies that swarm with danger have a higher death rate, and there would be more social pressure to marry very young to create population stability. The alternative leads to social collapse and dead civilizations. Base rate is low.

    What’s the attitude in your game world? They are all reasonable points of view.

    In a high-fantasy / positive social setting, I would start with a base percentage of 22%. Most factors will tend to be positive, so you might end up with a final value of 32% – but there can be strains beneath the surface, which could lead to a result of 12% in extreme cases.

    In a mid-range, fairly typical society, I would employ a base of 27%. If there are lots of factors contributing to a high singles rate, this might get as high as 37%, and if there are lots of negatives, it might come down to 17% – but for the most part, it will be somewhere close to the middle.

    In an especially grim and dark world, I would employ a base of 33%, in the expectation that most factors will be negative, and lead to totals more in the 23-28% range. But if social norms have begun to break down, social institutions like marriage can fall by the wayside, and you can end up with an unsustainable total of 40-something percent.

    Anything outside 20-35 should be considered unsustainable over the long run. Whatever negative impacts can apply will be rife.
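    The base-plus-factors procedure is simple enough to sketch in code (a toy illustration; the factor lists and weights below are hypothetical examples, not values from the text):

```python
def unwed_singles_pct(base, liberty_factors, sacrifice_factors):
    """Start from a base percentage; add ~2% per factor favoring personal
    liberty, subtract ~2% per factor demanding its sacrifice. Per-factor
    weights of 1 (especially weak) to 4 (especially strong) follow the
    adjustment guidance above."""
    return base + sum(liberty_factors) - sum(sacrifice_factors)

# Hypothetical mid-range society: base 27%, two liberty factors (one strong),
# three sacrifice factors (one weak).
result = unwed_singles_pct(27, [2, 3], [2, 2, 1])
print(result)   # 27 + 5 - 5 = 27
```

    The clamping to a plausible 20-35% range is left as a judgment call, per the advice above.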

    5.8.1.3.5 Population Breakdown

    That’s the final piece of the puzzle – with that information, you can assess the six types of ‘typical’ family unit and their relative frequency:

        # Children with no parents,
        # Children with mothers but no fathers,
        # Children with fathers but no mothers, and
        # Children with two parents.
        # Childless Couples
        # Unwed Singles

    Get the total size of each of these family units / households* in number of individuals, multiply that size by the frequency of occurrence, add up all the results, and convert them to a percentage and you have a total population breakdown. Average the first five and you have the average family size in this particular region and all similar ones.

    Multiply each frequency of occurrence by the village population total (rounding as you see fit), and you get the constituents of that village.
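    The aggregation steps above can be sketched directly (the frequencies and household sizes here are placeholder numbers for illustration, not values derived in the text):

```python
# Hypothetical (frequency, average individuals) for the six category types.
categories = {
    "children, no parents":  (3,  3.0),
    "children, mother only": (5,  4.0),
    "children, father only": (4,  4.0),
    "children, two parents": (40, 5.5),
    "childless couples":     (8,  2.0),
    "unwed singles":         (25, 1.0),
}

# Size x frequency, summed, then converted to percentages of the total.
population = sum(freq * size for freq, size in categories.values())
for name, (freq, size) in categories.items():
    print(f"{name}: {freq * size / population * 100:.1f}% of population")

# Average family size over the first five categories (singles excluded).
family_cats = list(categories.values())[:5]
units = sum(freq for freq, _ in family_cats)
people = sum(freq * size for freq, size in family_cats)
print("average family size:", round(people / units, 2))
```
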

    * I have never liked the use of the term ‘households’ in a demographic context, even though that seems to be the most commonly preferred term these days. I’ve lived in a number of shared accommodations as a single over the years, and that experience muddies what’s intended to be a clearer understanding of the results. If you have 50 or 100 singles living in a youth hostel, are they one household or 50-100? ‘Families’ – nuclear or non-nuclear – is, for me at least, the clearer, more meaningful term.

    5.8.1.3.6 The Economics Of The Demographics

    In modern times, it’s not unusual for two adults and even multiple children all to have different occupations with different businesses at the same time. Some kids start as paper boys and girls at a very young age. Even five-year-olds with lemonade stands count in this context.

    Go back about 100 years and that all changes. There is typically only one breadwinner – with exceptions that I’ll get to in a moment – and while some of them will have their own business (be it retail or in a service industry), most will be working for someone else.

    There will be a percentage who have no fixed employment and operate as day labor.

    Going into Victorian times, we have the workhouses and poorhouses, where brutal labor practices earn enough for survival but little more. While some were profitable for the owners, most earned less than they cost, and relied on charitable ‘sponsorship’ from other public institutions – sometimes governments, more often religious congregations. These are the exceptions that I mentioned. This is especially true where the father has deserted the family or died (often in war) leaving the mother to raise the children but unable to do so because of the gender biases built into the societies of the time.

    Go back still further, and it was a matter of public shame for a woman to work – with but a few exceptions such as midwifery. Nevertheless, they often earned supplemental income for the families with craft skills such as sewing, knitting, and needlework.

    The concept that the male was the breadwinner only gets stronger as you pass backwards through history.

    Fantasy games are usually not like that. They see the world from the modern perspective and force the historical reality to conform to it. In particular, gender bias is frequently and firmly excluded from fantasy societies.

    The core reasoning is that characters and players can be of either gender (or any of the supplementary gender identifications) and the makers of the games don’t wish to exclude potential markets with discomforting historical reality.

    There are a few GMs out there who intentionally try to find an ‘equal but distinct’ role for females and others within their fantasy societies; it’s difficult, but it can be done – and it usually happens by excluding common males from segments of the economy within the society. If there are occupations that are only open to women, and occupations of equal merit (NOT greater merit) that are only open to men, you construct a bilateral society in which two distinct halves come together to form a whole.

    But it would still be unusual for a single household to have multiple significant breadwinners; you had one principal earner and zero or more supplemental incomes ‘on the side’.

    Businesses were family operations in which the whole family were expected to contribute in some way, subject to needs and ability.

    And that’s the fundamental economic ‘brick’ of a community – one income per family, whether that income derives as profits from a business or from labor in someone else’s business.

    You can use this as a touchstone, a window into understanding the societies of history, all the way back into classical times – who earned the money and how? In early times, it might be that you need to equate coin-based wealth with an equivalent value in goods, but once you start thinking of farm produce or refined ore as money, not as goods, the economic similarities quickly reveal themselves.

    So that is also the foundation of economics in this system. One family, one income (plus possible supplements). In fact, there were periods in relatively recent history in which the supplementary income itself was justification for marriage and children.

    In modern times, we evaluate based on the reduction of expenses, because most of our utilities don’t rise in cost as fast as the number of people using them. (This goes back to the muddying concept of ‘households’: if two people are sharing the costs, both have more left over to spend, because the costs per person have gone down; if they are NOT sharing expenses, each providing fully for themselves, then they are two ‘households’, not one. It also helps to think of rent as a ‘utility’ in this context.)

    But that’s a very modern perspective, and one that only works with the modern concept of ‘utilities’ – electricity, gas, and so on. Go back before that, into the pre-industrial ages, and the perspective changes from one of diminishing liabilities into one of growth of potential advantages. And having daughters who could supplement the household income by working as maids or providing craft services gave a household an economic advantage.

    5.8.1.3.7 An Economic Village Model

        8 a^2 = b^2 – c^2.

    Looks simple, doesn’t it? In fact, it is oversimplified – the reality would be

        a^d = (b^e – c^f ) / g,

    but that’s beyond my ability to model, and too fiddly for game use.

    a = the village’s profitability. Some part of this may show up as public amenities; most of it will end up in the pockets of the broader social administration, in whatever form that takes.

    b = the village’s productivity, which can be simplified to the number of economic producers in the village. You could refine the model by contemplating unemployment rates, but the existence of day laborers whose average income automatically takes into account days when there’s no work to be found, means that we don’t have to.

    c = the village’s internal demand for services and products. While usually less than production, it doesn’t have to be so. But it’s usually close to b in value.

    To demonstrate the model, let’s throw out figures of 60 and 58 for b and c.

        8 a^2 = 60^2 – 58^2 = 3600 – 3364 = 236.
        a = (236 / 8)^0.5 = 29.5^0.5 = 5.43

    The village grows. b rises to 62. c rises to 59.

        8 a^2 = 62^2 – 59^2 = 3844 – 3481 = 363.
        a = (363 / 8)^0.5 = 45.375^0.5 = 6.736.

    It has risen – but not by very much.

    Things become clearer if you can define c as a percentage of b:

        8 a^2 = b^2 – (D x b^2) / 100
        800 a^2 = 100 b^2 – D x b^2 = b^2 x (100-D)

    If 98% of the village’s production goes to maintaining and supporting the village, then only 2% is left for economic growth. If the village adds more incomes, demand rises by the normal proportion as well – so economic growth rises, but quite slowly. In the above example calculations, 59/62 = 95.16% going to support the village – and 95% is about as low as it’s ever going to realistically go. In exceptionally productive years, it might be as low as 66.7%, but most years it’s going to be much higher than that.
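    The model is easy to put into code (a direct transcription of the formula, using the text’s example figures):

```python
import math

def village_profitability(b, c, k=8):
    """Solve k x a^2 = b^2 - c^2 for a, the village's profitability.

    b = economic producers, c = internal demand; returns 0 if demand
    meets or exceeds production."""
    surplus = b * b - c * c
    return math.sqrt(surplus / k) if surplus > 0 else 0.0

print(round(village_profitability(60, 58), 2))   # 5.43, as in the text
print(round(village_profitability(62, 59), 3))   # 6.736, as in the text
```
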

    Side-bar: 5.8.1.3.7.1 Good Times

    You can actually model how often an exceptional year comes along, by making a couple of assumptions. First, if 66.7 is as good as they get, and 95 is as bad as an exceptionally good year gets, then the average ‘exceptional year’ will be 80.85%.

    Second, if 95% is as good as a typical year gets, and 102% is as bad as a typical year gets, then the average ‘normal’ year will be 98.5%.

    Third, if the long term average is 95.16%, then what we need is the number of typical years needed to raise the overall average (including one exceptional year) to 95.16%.

        95.16 x (n+1) = 80.85 + (n x 98.5)
        95.16 x n + 95.16 = 80.85 + 98.5 x n
        (98.5 – 95.16) x n = 95.16 – 80.85
        3.34 n = 14.31
        n = 14.31 / 3.34 = 4.284.

        4-and-a-quarter normal years to every 1 good year.

    You can go further, with this as a basis, and make the good years better or worse so that you end up with a whole number of years.

        95.16 x (5 +1) = g + 5 x 98.5
        g = 95.16 x 6 – 98.5 x 5
        g = 570.96 – 492.5 = 78.46.

    That’s a six-year cycle with one good year averaging 78.46% of productivity sustaining the village and five typical years in which 98.5% of productivity is needed for the purpose.
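    The cycle arithmetic can be checked the same way (again, just transcribing the equations above):

```python
# Solve 95.16 x (n+1) = 80.85 + n x 98.5 for n, the number of typical years
# per exceptional year, then re-derive the whole-cycle 'good year' value g.
long_term = 95.16      # long-term average % of production consumed
good_avg = 80.85       # average 'exceptional' year
typical_avg = 98.5     # average 'normal' year

n = (long_term - good_avg) / (typical_avg - long_term)
print(round(n, 3))     # ~4.284 typical years per good year

g = long_term * 6 - typical_avg * 5   # force a whole 6-year cycle
print(round(g, 2))     # 78.46
```
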

    I grew up on the land, and I can tell you that an industry is thriving if one year out of 10 is really good; an industry is marking time if one year out of 20 is good, and in trouble if one year in 25 or less is really profitable. One year in six is a boom.

    So to close out this sidebar, let’s look at what those numbers equate to in overall economic productivity for the rural population that depend on them:

        Boom: (1 x 78.46 + 5 x 98.5) / 6
            = (78.46 + 492.5) / 6
            = 570.96 / 6
            = 95.16%
            (we already knew this but it’s included for comparison)

        Thriving: (1 x 78.46 + 9 x 98.5) / 10
            = (78.46 + 886.5) / 10
            = 964.96 / 10
            = 96.496

        Stable, Marking Time: (1 x 78.46 + 19 x 98.5) / 20
            = (78.46 + 1871.5) / 20
            = 1949.96 / 20
            = 97.498

        In trouble / in economic decline: (1 x 78.46 + 24 x 98.5) / 25
            = (78.46 + 2364) / 25
            = 2442.46 / 25
            = 97.6984
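    All four averages follow one formula – one good year plus (cycle – 1) typical years, divided by the cycle length:

```python
def cycle_average(cycle_years, good=78.46, typical=98.5):
    """Average % of production consumed over a cycle with one good year."""
    return (good + (cycle_years - 1) * typical) / cycle_years

for label, years in [("Boom", 6), ("Thriving", 10),
                     ("Stable", 20), ("In decline", 25)]:
    print(f"{label}: {cycle_average(years):.4f}%")
```
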

    Look at the differences, and how thin the lines are between growth and stagnation.

        Stable to In Decline: 0.2004% change.
        Stable to Thriving: 1.002% change.
        Thriving to Booming: 1.336% change.
        Booming to In Decline: 2.5384% change.

    The whole boom-bust cycle – and it can be a cyclic phenomenon – is contained within 2.54% difference in economic activity.

    An aside within an aside shows why:

        Boom: 95.16% = 0.9516;
        0.9516 ^ 6 = 0.74255;
        so 25.74% productivity goes into growth.

        Thriving: 96.496% = 0.96496;
        0.96496 ^ 6 = 0.8073;
        so 19.27% productivity goes into growth over the same six-year period.

        Stable: 97.498% = 0.97498;
        0.97498 ^ 6 = 0.859;
        14.1% of productivity goes into growth over the same six-year period.

        Declining: 97.6984% = 0.976984;
        0.976984 ^ 6 = 0.8696;
        13.04% of productivity goes into growth.

    Every homeowner sweats a 0.25% change in interest rates because they compound, snowballing into huge differences. This is exactly the same thing.

    5.8.1.4 The Generic Village

    The generic village is perpetually dancing on a knife-edge, but the margins are so small that it’s trivially easy to overcome a bad year with a better one. Even a boom year doesn’t incite a lot of growth, but a lot of factors, pulled together over a very long time, can.

    Some villages won’t manage to escape the slippery slope and will decline into Hamlets, but will find stability at this smaller size. Given time, disused buildings will be torn down and ‘robbed’ of any useful construction material – because that’s close to free – and that alone can make enough of a difference economically. With the land reclaimed, after a while you could never tell that it was once a village.

    Some won’t be able to arrest their decline – whatever led to their establishment in the first place either isn’t profitable enough, or too much of the profits are being taken in fees, tithes, greed, and taxes. They decline into Thorpes.

    In some cases, communities exist for a single purpose; they never grew large enough to even have permanent structures. They are strictly temporary in nature (though one may persist for dozens of years or more); they are forever categorized as Mining or Logging Camps.

    Other villages have more factors pushing them to growth, and once they reach a certain size, they can organize and be recognized as a town. And some towns become cities, and some cities become a great metropolis.

    With each change of scale, the services on offer to the townsfolk, and the services on offer to the traveler passing through, increase.

    The fewer such services there are, the more general and generic they have to become, just to earn enough to stay in operation.

    The general view of a generic village is that most services exist purely for the benefit of the locals, but a small number of operations will offer services aimed at a temporary target market, the traveler. These services are often more profitable but less reliable in terms of income, more vulnerable to changes in markets. They don’t tend to be set up by existing residents; instead, they are founded by a traveler who settles down and joins a community because they see an economic opportunity.

    That means that the number of such services on offer is very strongly tied to both the growth of the village, and to the overall economic situation of the Kingdom as a whole and to the local Region of which this village is a part.

    Here’s another way to look at it: The reason so much of the village’s economic potential goes into maintaining the village is because of all those tithes and taxes and so on. Some of those will be based on the land in and around the village; some on the productivity of that land; and some of it on the size and economic activity of the village. The rest provides what the village needs to sustain its population and keep everything going. There’s not a lot left – but any addition to the bottom line that isn’t eroded away by those demands makes the village and the region more profitable, creating more opportunities for sustained growth. Again, there is a snowball effect.

    Some villages – and this is a social thing – don’t want the headaches and complications of growth; they like things just the way they are. They will have local rules and regulations designed to limit growth by making growth-producing business opportunities less attractive or compelling. Others desperately want growth, and will try to make themselves more attractive to operations that encourage it.

    That divides villages into two main categories and a number of subcategories.

    Main Category: Villages that encourage growth
         Subcategory: Villages that are growing
         Subcategory: Villages that are not growing
         Subcategory: Villages that are being left behind, and declining.
    Ratios: 40:40:20, respectively.

    Main Category: Villages that are discouraging growth despite the risk of decline
         Subcategory: Villages that are growing and can only slow that growth
         Subcategory: Villages that have achieved stability
         Subcategory: Villages that have or are declining.
    Ratios: 20:40:40, respectively.
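    If you want to roll a village’s disposition randomly, the two category tables convert directly into weighted picks (a sketch; the 50/50 split between main categories is my assumption, not something stated in the text):

```python
import random

# Subcategory weights from the ratios above.
GROWTH_FRIENDLY = [("growing", 40), ("not growing", 40), ("declining", 20)]
GROWTH_AVERSE = [("growing anyway", 20), ("stable", 40), ("declining", 40)]

def roll_village(rng=random):
    """Pick a main category (assumed 50/50), then a weighted subcategory."""
    table = rng.choice([GROWTH_FRIENDLY, GROWTH_AVERSE])
    names, weights = zip(*table)
    return rng.choices(names, weights=weights, k=1)[0]

print(roll_village())
```
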

    And that will about do it for this post. It will continue in part 5b!


Trade In Fantasy Ch. 5: Land Transport, Pt 5 (incomplete)


This entry is part 18 of 20 in the series Trade In Fantasy

We’ve used the economy to distribute fortifications, and used those to locate inns. Now let’s wrap some communities around them.

I have a series of images of communities of different sizes which will be sprinkled throughout this article. This is the first of these – something so sparsely-settled that it barely even qualifies as a community. It’s more a collection of close rural neighbors! Image by Jörg Peter from Pixabay

Table Of Contents

In parts 1-3 of this chapter:

Chapter 5: Land Transport

    5.1 Distance, Time, & Detriments

      5.1.1 Time Vs Distance
      5.1.2 Defining a terrain / region / locality

           5.1.2.1 Road Quality: An introductory mention

    5.2 Terrain

      5.2.0 Terrain Factor
      5.2.1 % Distance
      5.2.2 Good Roads
      5.2.3 Bad Roads
      5.2.4 Even Ground
      5.2.5 Broken Ground
      5.2.6 Marshlands
      5.2.7 Swamplands
      5.2.8 Woodlands
      5.2.9 Forests
      5.2.10 Rolling Hills
      5.2.11 Mountain Slopes
      5.2.12 Mountain Passes
      5.2.13 Deserts
      5.2.14 Exotic Terrain
      5.2.15 Road Quality
           5.2.15.1 The four-tier system
           5.2.15.2 The five-tier system
           5.2.15.3 The eight-tier system
           5.2.15.4 The ten-tier system

      5.2.16 Rivers & Other Waterways
           5.2.16.1 Fords
           5.2.16.2 Bridges
           5.2.16.3 Tolls
           5.2.16.4 Ferries
           5.2.16.5 Portage & Other Solutions

    5.3 Weather

      5.3.1 Seasonal Trend
      5.3.2 Broad Variations
      5.3.3 Narrow Variations
           5.3.3.1 Every 2nd month?
           5.3.3.2 Transition Months
           5.3.3.3 Adding a little randomness: 1/2 length variations
           5.3.3.4 Adding a little randomness: 1 1/2-, 2-, and 2 1/2-length variations

      5.3.4 Maintaining The Average
           5.3.4.1 Correction Timing
                5.3.4.1.1 Off-cycle corrections
                5.3.4.1.2 Oppositional Corrections
                5.3.4.1.3 Adjacent corrections
                5.3.4.1.4 Hangover corrections

           5.3.4.2 Correction Duration
                5.3.4.2.1 Distributed corrections: 12 months
                     5.3.4.2.1.1 Even Distribution
                     5.3.4.2.1.2 Random Distribution
                     5.3.4.2.1.3 Weighted Random Distribution

                5.3.4.2.2 Distributed corrections: 6 months
                5.3.4.2.3 Distributed corrections: 3 months
                5.3.4.2.4 Slow Corrections (2 months)
                5.3.4.2.5 Normal corrections: 1 month
                5.3.4.2.6 Fast corrections: 1/2 month (2 weeks)
                5.3.4.2.7 Catastrophic corrections 1/4 month (1 week)

           5.3.4.3 Maintaining Synchronization
           5.3.4.4 Multiple Correction Layers

    5.4 Losses & Hazards
    5.5 Expenses – as Terrain Factors
    5.6 Expenses – as aspects of Politics
    5.7 Inns, Castles, & Strongholds

      5.7.1 Strongholds
           5.7.1.1 Overall Military Strength
                5.7.1.1.1 Naval Strength
                5.7.1.1.2 Exotic Strength
                5.7.1.1.3 Adjusted Military Strength

           5.7.1.2 Mobility
                5.7.1.2.1 Roads
                5.7.1.2.2 Cross-country

           5.7.1.3 Kingdom Size and Capital Location
           5.7.1.4 Borders
           5.7.1.5 Terrain
           5.7.1.6 Internal Threat
           5.7.1.7 Priority
           5.7.1.8 Threat Level
           5.7.1.9 Zones
                5.7.1.9.1 Abstract Zones
                5.7.1.9.2 Applied Considerations
                     5.7.1.9.2.1 Sidebar: Why do it this way?

                5.7.1.9.3 Preliminary Zones, Zomania

           5.7.1.10 Kingdom Wealth
                5.7.1.10.1 Legacy Defenses
                5.7.1.10.2 Military Training
                5.7.1.10.3 Disaster Relief
                5.7.1.10.4 Religion
                5.7.1.10.5 Magic
                5.7.1.10.6 Tools
                5.7.1.10.7 Entertainment
                5.7.1.10.8 Resource Development
                5.7.1.10.9 A Hypothetical Disaster
                5.7.1.10.10 Housing & Funding Boosts
                5.7.1.10.11 Food
                5.7.1.10.12 Diplomacy
                5.7.1.10.13 Trade
                5.7.1.10.14 Education
                5.7.1.10.15 Transport (Road Maintenance)
                5.7.1.10.16 The Impact On Population

           5.7.1.11 Military Need: Theoretical Scenario 2

In the last part of this series:

           5.7.1.12 Stronghold Density
           5.7.1.13 Zone Size
           5.7.1.14 Base Area Protected per Stronghold
                5.7.1.14.1 The Distance between defensive centers
                
                5.7.1.14.2 The relationship between defensive patterns
                5.7.1.14.3 The shape of the defensive pattern
                5.7.1.14.4 What is 100% coverage, anyway?
                5.7.1.14.5 Calculating Area Protected
                     5.7.1.14.5.1 Three-Satellite
                     5.7.1.14.5.2 Four-Satellite

                5.7.1.14.6 Configuration Choice(s)
                5.7.1.14.7 The Impact On Roads
                5.7.1.14.8 The Impact On Populations

           5.7.1.15 Economic Adjustments
           5.7.1.16 Border Adjustments
           5.7.1.17 Historical vs Contemporary Structures
           5.7.1.18 Zone and Kingdom Totals
           5.7.1.19 Reserves

      5.7.2 Castles, Fortresses, and the like
           5.7.2.1 Distance to a satellite fortification using 2d6
           5.7.2.2 Distance to a neighboring hub
           5.7.2.3 Combining the two: the nearest neighbor

      5.7.3 Inns

In this part:

    5.8 Villages, Towns, & Cities

      5.8.1 Villages
           5.8.1.1 Village Frequency
           5.8.1.2 Village Initial Size
                Optional
           5.8.1.3 Village Demographics

                5.8.1.3.1 Maternal Survival
                5.8.1.3.2 Paternal Survival
                5.8.1.3.3 Childless Couples
                5.8.1.3.4 Unwed Singles
                5.8.1.3.5 Population Breakdown
                5.8.1.3.6 The Economics Of The Demographics
                     5.8.1.3.6.1 Sidebar: Good Times

           5.8.1.4 The Generic Village
           5.8.1.5 Blended Models
           5.8.1.6 Zomania – An Example
                5.8.1.6.1 Zone Selection
                5.8.1.6.2 Sidebar: Elevation Classification
                5.8.1.6.3 Area Adjustments – from 5.7.1.13
                5.8.1.6.4 Defensive Pattern – from 5.7.1.14
                5.8.1.6.5 Sidebar: The Size Of Zomania, revisited
                5.8.1.6.6 Sidebar: Changes Of Defensive Structure
                5.8.1.6.7 Inns In Zone 7 – from 5.7.3

      5.8.2 Towns
           5.8.2.1 Towns Frequency
           5.8.2.2 Town Initial Size
           5.8.2.3 The Generic Town

      5.8.3 Cities
           5.8.3.1 Small City Frequency
           5.8.3.2 Small City Size
           5.8.3.3 Size Of The Capital
           5.8.3.4 Large City Frequency
           5.8.3.5 Large City Size

      5.8.4 Economic Factors, Simplified
           5.8.4.1 Trade Routes & Connections
           5.8.4.2 Local Industry
           5.8.4.3 Military Significance
           5.8.4.4 Scenery & History
           5.8.4.5 Other Economic Modifiers
           5.8.4.6 Up-scaled Villages
           5.8.4.7 Up-scaled Towns
           5.8.4.8 Up-scaled Small Cities
           5.8.4.9 Upscaling The Capital & Large Cities

      5.8.5 Overall Population
           5.8.5.1 Realm Size
           5.8.5.2 % Wilderness
           5.8.5.3 % Fertile
           5.8.5.4 % Good
           5.8.5.5 % Mediocre
           5.8.5.6 % Poor
           5.8.5.7 % Dire
           5.8.5.8 % Wasteland
           5.8.5.9 Net Agricultural Capacity

           5.8.5.10 Misadventures, Disasters, and Calamities
           5.8.5.11 Birth Rate per year
           5.8.5.12 Mortality
                5.8.5.12.1 Infant Mortality
                5.8.5.12.2 Child Mortality
                5.8.5.12.3 Teen Mortality
                5.8.5.12.4 Youth Mortality
                5.8.5.12.5 Adult Mortality
                5.8.5.12.6 Senior Mortality
                5.8.5.12.7 Elderly Mortality
                5.8.5.12.8 Venerable Mortality
                5.8.5.12.9 Net Mortality

           5.8.5.13 Net Population

And still to come in this chapter:

      5.8.6 Population Distribution
           5.8.6.1 The Roaming Population
           5.8.6.2 The Capital
           5.8.6.3 The Cities
           5.8.6.4 Number of Towns
           5.8.6.5 Number of Villages
           5.8.6.6 Hypothetical Population
           5.8.6.7 The Realm Factor
           5.8.6.8 True Village Size
           5.8.6.9 True Town Size
           5.8.6.10 Adjusted City Size
           5.8.6.11 Adjusted Capital Size

      5.8.7 Population Centers On The Fly
           5.8.7.1 Total Population Centers
           5.8.7.2 The Distribution Table
           5.8.7.3 The Cities
           5.8.7.4 Village or Town?
           5.8.7.5 Size Bias
                5.8.7.5.1 Economic Bias
                5.8.7.5.2 Fertility Bias
                5.8.7.5.3 Military Personnel
                5.8.7.5.4 The Net Bias

           5.8.7.6 The Die Roll
           5.8.7.7 Applying Net Bias
           5.8.7.8 Applying The Realm Factor
           5.8.7.9 The True Size
                5.8.7.9.1 Justifying The Size
                5.8.7.9.2 The Implications

    5.9 Compiled Trade Routes

      5.9.1 National Legs
      5.9.2 Sub-Legs
      5.9.3 Compounding Terrain Factors
      5.9.4 Compounding Weather Factors
      5.9.5 Compounding Expenses
      5.9.6 Compounding Losses
      5.9.7 Compounding Profits
      5.9.8 Other Expenses
      5.9.9 Net Profit

    5.10 Time
    5.11 Exotic Transport

In future chapters:
  1. Waterborne Transport
  2. Spoilage
  3. Key Personnel
  4. The Journey
  5. Arrival
  6. Journey’s End
  7. Adventures En Route
5.8 Villages, Towns, & Cities

Part 5 of Chapter 5 is all about Population and its distribution. Most systems that I’ve seen for this purpose start with an overall population and work backwards, and often end up with unreasonable results, like a village every mile-and-a-half.

My system works the other way: from a population density model, to a population density, to a local population. Many local populations give a Zone population, and the Zone populations together give the overall Kingdom population.

5.8.0 Concepts & Principles

Select a model based on the desired ‘look and feel’ of the society within the Kingdom / Zone. The model describes the general distribution of population within the Kingdom / Zone, assuming a fixed unit of area (10,000 km^2), though most zones will be smaller.

The model plus a random roll sets initial village size. Village Frequency is determined by the placement of Inns & Administrative / Military structures, already defined. Together these define the total population density of an entire Kingdom according to the model.

This can then be applied to the size of the actual Kingdom to determine the total population of the Kingdom.

All of the above is on today’s agenda. In addition, there will be contributing factors determined that will be applied going forward.

Each village occupies a footprint termed a Locus.

The location within a locus actually occupied by the village or town is generally defined by the content of that locus. The population center will always be in the location within the locus that is most advantageous to growth.

A series of factors adjusts the size of the village within the locus, sometimes positively and sometimes negatively. Each factor yields a fractional value called a Scale Value. Applicable Scale Values also determine the village location, because many of them are specific to this place or that, enabling the location to be quickly refined within the locus.

Where there are multiple possible locations of roughly equal value, a community will split into two half-sized populations which will begin growing toward each other.

These Scale Values are totaled. The total Scale Value is applied as exponential growth to the base village size to determine the nominal size of the community.

If this is sufficient to trigger growth into a new size category, it is further adjusted and the new base size is used with the adjusted value to redetermine the size. This process iterates (i.e. gets applied repeatedly) until the final size of the settlement is determined.
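For the code-minded, here's a minimal Python sketch of that iteration. The category base sizes, the power-of-two growth law, and the halve-the-remainder adjustment are all placeholder assumptions of mine, not the system's real numbers – the point is the shape of the loop:

```python
# Placeholder size ladder; only the iteration structure matters here.
SIZE_CATEGORIES = [
    ("Village",   300),    # (name, base population) -- assumed values
    ("Village-2", 600),
    ("Village-3", 1200),
]

def nominal_size(base: float, total_scale: float) -> float:
    """Apply the total Scale Value as exponential growth (assumed law)."""
    return base * 2 ** total_scale

def settle_size(scale_values: list[float]) -> tuple[str, float]:
    total = sum(scale_values)
    idx = 0
    while True:
        name, base = SIZE_CATEGORIES[idx]
        size = nominal_size(base, total)
        next_up = idx + 1
        # Growth into a new category: re-base, further adjust the
        # Scale Value, and redetermine the size.
        if next_up < len(SIZE_CATEGORIES) and size >= SIZE_CATEGORIES[next_up][1]:
            idx = next_up
            total *= 0.5               # placeholder adjustment
        else:
            return name, size

print(settle_size([0.4, 0.3, -0.1]))   # stays a Village at this scale
```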

Some conditions restrict community size by passing on excess growth to neighboring communities; these are passed from one to another until reaching a community that is no longer restricted. That community is sometimes referred to as the “Gateway” to the region. Becoming a ‘gateway’ is also a growth factor!

This is all achieved by taking the excess part of the Scale Value and applying it as a modifier to the nearest Locus outside the restricted area, reducing the total scaling factor that applies to settlements within the restricted area. Not all the excess can be redirected; growth in restricted areas is slowed, not stopped.
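A rough sketch of that excess-redirection, assuming a simple chain of loci ordered from the deepest restricted locus out to the Gateway. The cap and the retained fraction are made-up illustration values:

```python
# Excess Scale Value in a restricted region is passed outward until it
# reaches an unrestricted 'Gateway' locus. Cap and fractions are
# placeholder assumptions, not the article's actual numbers.

def redistribute(chain: list[dict], cap: float = 0.5) -> None:
    carry = 0.0
    for locus in chain:
        locus["scale"] += carry        # receive excess passed inward
        carry = 0.0
        if locus["restricted"] and locus["scale"] > cap:
            excess = locus["scale"] - cap
            locus["scale"] = cap + 0.25 * excess   # growth slowed, not stopped
            carry = 0.75 * excess                  # the part that moves on
    # whatever carry reached the final, unrestricted locus stays there --
    # that locus is the Gateway, which could also claim a growth bonus

chain = [
    {"name": "A", "restricted": True,  "scale": 1.2},
    {"name": "B", "restricted": True,  "scale": 0.3},
    {"name": "C", "restricted": False, "scale": 0.1},  # the Gateway
]
redistribute(chain)
print([round(locus["scale"], 4) for locus in chain])
```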

Along the way, various side-issues will be raised and assessed, building up a population profile for the Locus, the administrative division, the Zone overall, and for the Kingdom as a whole. In particular, the political infrastructure of the Kingdom gets determined.

Finally, these various considerations will come together to provide a system whereby a GM can generate a village ‘on the fly’ whenever a group of characters (PCs most of the time) enter a locus or cross a border.

At least, that’s how it’s all supposed to work in theory! As always, if the reality doesn’t yield useful results, I’ll feel free to diverge from this road-map!

    5.8.0.1 Frequency, Size, and Services

    The section above does a good job of outlining the process, but I thought it worth taking a moment to explain the philosophy behind it and the reason for this particular approach.

      5.8.0.1.1 The traditional approach

      The fundamental concepts by which population levels are usually defined come down to two main ones and a boat-load of implications.

      The first primary factor is settlement frequency – how many miles, kilometers, or days' march apart they are. The first two options are the ones with which most readers will be familiar, and they have the virtue – and penalty – of being absolute measurements. The third option is more abstract, but can also be more practical. It takes account of terrain, for example, and at first that might seem like a good thing – but then you realize that it takes it into account backwards: if the terrain is poor, travel over it will be slower, and a fixed ‘average time apart’ then means that the settlements will cluster more closely together, i.e. less physical distance will be covered in the same amount of time because of the terrain. What you really want is the opposite – good terrain clustering communities together, bad terrain setting them further apart.

      The second primary factor is settlement size – how many families or dwellings make up a ‘typical community’ in the specific zone.

      It’s the implications that start to get complicated. Between them, these specify the level of economic and industrial capacity of the typical community, and thus, what services are likely to be available. But that then gets muddied somewhat by demand. Certain services are always going to be in demand and providing those services is an economic opportunity for a practitioner.

      And that then gets complicated by the logistics of travel – the ‘footprint’ serviced by a given provider will vary from one occupation to another. A good blacksmith may service several small communities (if they are close together), or just one, while a mill may have a much bigger ‘footprint’.

      Add to that the secondary impact of travel capabilities – if travel is easy, and the community is on a trade route, there will be more services geared toward supplying the needs of travelers; if not, the primary driving force will be the needs of the inhabitants.

      The more you look into it, the bigger the mess the whole thing becomes. And that’s why I have rejected this traditional approach, at least for the most part.

      5.8.0.1.2 The alternative approach

      Instead, each settlement starts off at a base size and separation. The ‘tail’ – the implications – then wags the dog. Every location has benefits and drawbacks – the benefits help the settlement grow, the drawbacks cause it to shrink in size. If the demand for a blacksmith is high enough, there will be a blacksmith – who gets added to the base population and causes further population growth. If there’s no local blacksmith, but there is one in the next town over, that makes that town grow at the expense of this community. Taking stock of every relevant factor, the size of the actual settlement is then adjusted.

      But there’s one more way of looking at this approach, and for me, it makes this the most compelling possible option – it develops village size to accommodate the needs of the plot! If you need there to be a sage, or a blacksmith, or a tavern with rooms for travelers in the next community, they are there – and the community grows, within the context of the terrain and other factors, to whatever size is needed to justify the presence of these services.

      And if you don’t have any specific plot needs, the defaults of terrain and frequency and traffic and trade dictate the size and the services that are available should the PCs decide they need them.

    5.8.0.2 Community Sizes: Base, and smaller

    The fundamental unit of community size in this system is the Village. It has a certain base population, and that population size supports the provision of a certain number of general services to the community. These are ‘General Services’ and they exist to meet the needs of the inhabitants. A base-sized village also supports a single “Specialist Service” – i.e. a service with a ‘footprint’ larger than just this community. If the distance between communities is large enough, it may add a second “Specialist Service”, causing the community to grow – but it’s still within the range of ‘normal’ for the base size.

    Various factors shrink communities. If a community shrinks too much, it enters a community scale lower down the size chart. While the real-world terminology is vague in application, in this ‘unified’ view, these are designated Hamlets, and they have a base size 1/8 that of the base community. Hamlets no longer offer any Specialist services, and support fewer ‘General Service’ providers. The model supports Ha-1, 2, and 3 (those terms will make more sense shortly).

    Communities smaller than a Hamlet are Thorpes. Officially, this is a variant of a Middle English word meaning hamlet or small village – but I’ve expropriated the term for usage to represent the smallest of settlements. Once again, we can have Th-1, 2, and 3, and the base size of a Thorpe is 1/8 that of a Hamlet.

    Except that we can go smaller!

    Smaller than a Thorpe is a mining or logging Camp. Actually, the biggest of these overlap with a Thorpe in size, but the typical-and-smaller range of camps starts where a Thorpe leaves off. Such camps exist to enable the residents to perform one function and one function only; they provide only the essentials necessary to achieve that. These are often (usually?) a satellite of a larger community somewhere nearby. Any single-purpose camp comes under this designation.

    Camps can be rated Ca-1, -2, -3, -4, or -5. The base size of a camp is 1/4 that of a Thorpe (but they also have a minimum population of 1).

    If you’re keeping track, that’s 1/4 of 1/8 of 1/8 of a village, or 1/256th. If your village base size is 256 people or smaller, then the ‘minimum 1’ rule can be said to be in effect.
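    The ladder of fractions is easy to put into code. The fractions come straight from the text; the function itself is just my convenience wrapper:

```python
# A Hamlet is 1/8 of a Village, a Thorpe 1/8 of a Hamlet, and a Camp
# 1/4 of a Thorpe -- with a minimum population of 1.

def base_sizes(village_base: int) -> dict[str, int]:
    hamlet = village_base / 8
    thorpe = hamlet / 8
    camp = thorpe / 4              # 1/256 of the village base
    return {
        "Village": village_base,
        "Hamlet": max(1, round(hamlet)),
        "Thorpe": max(1, round(thorpe)),
        "Camp": max(1, round(camp)),   # the 'minimum 1' rule
    }

# With a 256-person village, a Camp bottoms out at exactly 1 person:
print(base_sizes(256))
```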

    Technically, you could also describe a Caravan as a Camp – it just happens to be mobile, or semi-mobile.

    5.8.0.3 Community Sizes: Larger than Base

    Going the other way, we find ourselves buried in adjectives, because there aren’t many terms on offer. Things get even more confusing when you discover that the definition of a city isn’t what we tend to associate with the term – and different countries have different definitions in terms of size.

    And, since most adjectives tend to be relative in meaning, and subject to interpretation, I’ve tried to eschew them in favor of suffixes.

    So, larger than a Village is a Village-2, larger than a Village-2 is a Village-3, and larger than a Village-3 is a Village-4.

    A Village-5 is the same size as a Town (leaving off the -1 suffix). The meaning of the term “Town” is also something that can vary widely from one culture to another. The term is used here to designate a community with a municipal authority beyond a singular mayor / burgomaster / whatever. In England, a Town is usually formally defined by a legal Charter issued by the Crown, giving it a specific identity outside of the control of the regional Nobility. In the US, it loosely refers to incorporated communities – i.e. a community that has issued its own Charter, which formally “Incorporates” the community.

    Australia and Canada distinguish communities based on population thresholds – but these can vary from state to state. Nevertheless, this is the mindset that this system adopts.

    The difference between a Town and a Village is that the town provides, by virtue of its Charter, services restricted to the Town Limits, collecting rates and revenues to fund these services; in a Village, there is no central authority to provide these services, and any that are provided are provided by the broader administrative unit – be it a state government or a Nobleman, paid for from the taxes and fees they are entitled to collect.

    ‘Town’ is followed by Town-2, Town-3, Town-4, and Town-5.

    Towns -6 to -10 follow, but a Town-6 is the same size as a City-1.

    A city is distinguished by having a metropolitan area beyond a simple town square, surrounded by residential districts or suburbs. Many of these will possess some singular identifying traits or characteristics (social or economic in nature), or will claim such an identity. Each suburb or district has its own independent retail or services providers. The number of suburbs or districts is roughly equivalent to the city-suffix squared, plus 1, not counting the metropolitan zone. So City-1 has 2 residential zones, City-2 has 5, City-3 has 10, and so on. These residential zones are all still administered by the central metropolitan zone.

    City-5 (with its 26 residential zones) is the same size as Metropolis-1. This is the point at which the central metropolitan area and surrounding suburbs are carved out of the larger community to form a smaller City (usually City-1 or -2), while the remaining suburbs or districts collectively organize into a separate but contiguous City (usually City-2 or City-3 in size) with an authority independent of that of the central hub. Collectively, these form “Greater [name]”. For example, Greater Sydney consists of the City Of Sydney and 32 surrounding Cities, each of which contains and administers a number of smaller Suburbs. My residence is in the suburb of Panania, which is one of 41 suburbs within the City of Canterbury-Bankstown.

    You can work backwards from such numbers.

    Canterbury-Bankstown, with 41 suburbs, would have a suffix = sqr root (41-2) = sqr root (39) = 6.245. But this is the result of forced amalgamation between two different cities by the state government, a quite unpopular move at the time. Canterbury used to have 17 suburbs and be a City-3.87, while Bankstown had 10, and was a City-2.8. When they were merged, additional suburbs were also added from surrounding areas. Greater Sydney itself would rank as a City-25.5 if taken collectively – but it instead rates as a Metropolis-5.5 (32 cities, -2, take the square root). But Greater Sydney is a BIG city – 5,356,944 people – or more than five times the population of Imperial Rome at its height (1 Million, according to best estimates).
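    Both directions of that math fit in a few lines. Note the asymmetry: going forward, the metropolitan zone is not counted among the residential zones, but a real-world suburb count includes the central zone – hence subtracting 2 rather than 1 when working backwards:

```python
import math

# Forward: residential zones = suffix^2 + 1 (metro zone not counted).
def residential_zones(suffix: float) -> float:
    return suffix ** 2 + 1

# Backward: real suburb counts include the central zone, so sqrt(n - 2).
def city_suffix(suburb_count: int) -> float:
    return math.sqrt(suburb_count - 2)

print(residential_zones(5))        # 26 residential zones for a City-5
print(round(city_suffix(41), 3))   # 6.245 for Canterbury-Bankstown
print(round(city_suffix(17), 2))   # 3.87 for pre-merger Canterbury
```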

    The justification given for the amalgamation was economy of scale, and for some councils who were struggling to provide services, that was fair enough – but some such mergers were refused by the State Government for political reasons, and others forced through against the wishes of residents even though the parent cities were financially sound. So the whole thing stank of corruption and political manipulation. The leader of the governing party saw his popularity plummet to trump-like figures as a result of this and a couple of other controversies, and was forced to resign so that his successor would stand a shadow of a chance at the next State Election and so that his unpopularity would not impact on the Federal Election due later that year. It was a successful move on the latter front (just barely) but the shadow wasn’t deep enough on the former, and there was a change of state government.

    Adding to the size of Sydney is the fact that it’s a State Capital – and our present National Capital only exists as a compromise between Sydney and Melbourne, neither of whom were willing to let the other be the political Big Dog.

    5.8.0.4 Demographic Research

    Although the models will abstract things greatly, and not adhere to historical reality if it’s inconvenient, reality has to be the underpinning of the Demographic Models that are available.

    You don’t have to dig very deep into the history of various townships in Arkansas to discover the effects, both economic and social, of gaining or losing County leadership; I can only project up to the effect of being named a State Capital, and then scale up again for a National Capital.

    But it is worth noting that in 33 out of 50 US States, the largest city in the state is Not the State Capital. I put this down to everyone else in the state not wanting to be dominated by that largest city, just as Melbourne would not accept Sydney as the capital of Australia as well as of the state of New South Wales.

    Before moving on from this discussion, some historical context is worth highlighting.

    According to this graph…

    Excerpted from “Mortality, migration and epidemiological change in English cities, 1600–1870” by Romola Davenport, University of Cambridge, CC BY 4.0, courtesy of Researchgate (image scaled by me)

    …in 1600, the population of England was 5 million, and about 10% – half a million – lived in an Urban setting. In about 1650, the general population peaked and only slow growth could be seen until about 1775. At that time, the urban population was about 25%, or 1.25 million – and half of them lived in London.

    This graph…

    Excerpted from “When Bioterrorism Was No Big Deal” by
    Patricia Beeson & Werner Troesken (both from the University of Pittsburgh), Copyright unstated, courtesy of Researchgate (left caption moved and image cropped and scaled by me).

    …is harder to read, but shows that the trend given in the first continues back another 50 years and then flattens – so in 1550 it would have been about 6% of 5 million (i.e. 300,000) and in 1500, it might only have been 5% (250,000). And almost all of them would have resided in London.

    (That paper, downloadable from the link “Researchgate”, has a bunch of others for comparison at the back – Western Europe, Scandinavia, Eastern Europe. Worth grabbing for reference if one of those resembles the Kingdom “tone” that you’re going for.)

    This graph…

    Historical_population_of_France.svg by Max Roser, CC BY-SA 3.0, via Wikimedia Commons

    …shows the historical population of France, which provides additional context.

    Below, I’ve isolated the part that matches the 1500-1950 range of the England Graphs:

    Extract From Historical_population_of_France.svg
    Creative Commons CC-BY-3.0 as above, Cropped and Enlarged by Mike

    In 1500, there were about 15 million in France, rising to 18 million by 1600. 1550 would therefore have been about 16.5 million.

    In 1500, it can be estimated that 5.6% of the French population lived in towns of 10,000 or more. In 1550, that was 6.3%; and in 1600, 8%, according to one source (and there aren’t many to pick from).

    In 1500, Paris had a population of about 150,000, or just 16.1% of the urban population.

    In 1550, that was somewhere between 300 and 350,000 people, and 25.2-29.4% of the urban population.

    In 1600, we’re talking between 300 and 400,000 people, and 18.8-25% of the urban population – so other cities grew faster than Paris in the 1550-1600 period.

    Which other cities? The only one with more than 60,000 on all three dates was Paris. In 1600, Lyon or Rouen may have hit that number. We need to go to one-sixth the size of Paris or less for the next biggest population center, Toulouse, but it might also be in the vicinity of Lyon and Rouen. Estimates of the population in those cities at the time vary from about 40-60,000 in 1500, and 70-80,000 in 1600. But when you compare that with England, you see a stark difference.

    Here are some estimated population densities and population levels from the year 1300:

    ▪  France – 36 to 40 people per sqr km – 18 to 20 million total population.

    ▪  England and Wales – 33 to 40 people per sqr km – 5-6 million total population.

    ▪  Germany (then core of the Holy Roman Empire) – 24 to 28 people per square km, 12 to 14 million total population.

    ▪  Scotland – 6-13 people per sqr km – 0.5 to 1 million total population.
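    Those density and population figures can be sanity-checked with a line of arithmetic: population divided by density gives the implied land area, and the results land reasonably close to the real geography (England and Wales, for example, comes out at about 151,000 km^2, which is almost exactly right):

```python
# Low end of the population range / low end of the density range
# gives the implied land area in km^2 for each region quoted above.
regions = {
    "France": (18e6, 36),
    "England and Wales": (5e6, 33),
    "Germany": (12e6, 24),
    "Scotland": (0.5e6, 6),
}
for name, (population, density) in regions.items():
    print(f"{name}: ~{population / density:,.0f} km^2")
```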

    Some other relevant Demographic research:

    France

    ▪  Largest Regional Cities (Excluding Capital): Milan, Venice, Florence (in broader Western Europe) were over 100,000. In France, cities like Rouen or Bordeaux may have reached 25,000–40,000.

    ▪  Major Towns (5,000–10,000+): Numerous. The median major town size in this range may have been around 12,000–15,000.

    ▪  Minor Towns/Large Boroughs (1,000–5,000): The backbone of the French urban network; perhaps a few hundred such towns scattered across the kingdom.

    ▪  Very Small Boroughs (Below 1,000): Most settlements below 1,000 people were agricultural villages.

    England (and Wales)

    ▪  Largest Regional Cities (Excluding Capital): York and Bristol were the undisputed next-largest, likely reaching 15,000–25,000 at their peak before the Black Death.

    ▪  Major Towns (5,000–10,000+): Only a handful of towns (e.g., Norwich, Coventry, King’s Lynn) were in this tier, perhaps 8-10 total.

    ▪  Minor Towns/Large Boroughs (1,000–5,000): This was the most numerous class of true urban centers in England. The average was likely around 2,000–3,500 people.

    ▪  Very Small Boroughs (Below 1,000): Many hundreds of market settlements were under 1,000 people, functioning as local market centers but not true urban areas.

    Germany (Holy Roman Empire Core)

    ▪  Largest Regional Cities (Excluding Capital): Cities like Cologne and Prague were major international centers, likely with 30,000–40,000 inhabitants.

    ▪  Major Towns (5,000–10,000+): Cities like Lübeck, Nuremberg, and Augsburg were regional powers, mostly in the 10,000–25,000 range.

    ▪  Minor Towns/Large Boroughs (1,000–5,000): There were hundreds of walled, independent towns across the Empire, with many falling into this category. The average would be difficult to pin down but was lower than in England.

    ▪  Very Small Boroughs (Below 1,000): A very large number of minor market towns and Minderstädte (small towns) were below 1,000.

    Scotland

    ▪  Largest Regional Cities (Excluding Capital): Edinburgh was the only city approaching major European size, perhaps 10,000–12,000 at its peak.

    ▪  Major Towns (5,000–10,000+): None. The scale of Scottish urbanization was significantly smaller than its neighbors.

    ▪  Minor Towns/Large Boroughs (1,000–5,000): The largest burghs, such as Aberdeen and Perth, were likely only around 3,000 people.

    ▪  Very Small Boroughs (Below 1,000): Most Scottish burghs (towns) throughout the Middle Ages are believed to have had populations below 1,000.

    Those four models emerge as the most robust to choose from. But I’m going to expand the list further with some bigger-population models and one or two even smaller ones, and abstract the ones that have already been identified, so that it doesn’t matter if the results of the generation model aren’t quite 100% in line with History.

    This is clearly a village in Switzerland. The buildings are bigger and much closer together, but there’s still a lot of empty landscape. Image by Christel from Pixabay

    5.8.0.5 The Reality-Based Demographic Models

    ▪  France: Demonstrated a more distributed urban network with many cities (especially in the Low Countries/Italy) capable of sustaining populations of 25,000+.
        Urban Population: 5.6% (1500) – 8% (1600)
        Hierarchy Slope: Flat but rising sharply
        Regional Cities: 0.2-0.3 / 10,000 sqr km
        Major Towns: 0.5-1 / 10,000 sqr km
        Minor Towns: 5-7 / 10,000 sqr km
        Base Village: 320-480

    ▪  Germany: Akin to France but with a significant amount of Forests and Mountains which were relatively lightly populated while occupying great swathes of land.
        Urban Population: 10%
        Hierarchy Slope: Flat
        Regional Cities: 0.4-0.5 / 10,000 sqr km
        Major Towns: 1-2 / 10,000 sqr km
        Minor Towns: 8-12 / 10,000 sqr km
        Base Village: 400-600

    ▪  England: Had a relatively high urban density for its size, but a steep hierarchy. The difference between London and the next tier (York/Bristol) was large, and the gap between those and the average town was also significant.
        Urban Population: 5-6%
        Hierarchy Slope: Steep
        Regional Cities: 0.15 / 10,000 sqr km
        Major Towns: 0.4-0.5 / 10,000 sqr km
        Minor Towns: 3-4 / 10,000 sqr km
        Base Village: 240-360

    ▪  Scotland: Was the least urbanized region. Even its major burghs would be considered only medium-sized towns in England or minor towns in France.
        Urban Population: 2-3%
        Hierarchy Slope: Very Flat
        Regional Cities: None
        Major Towns: 0.1 / 10,000 sqr km
        Minor Towns: 0.5-1 / 10,000 sqr km
        Base Village: 160-240

    5.8.0.6 The Artificial Demographic Models

    To those four, I am adding the following:

    Imperial Core: A region dominated by a single capital or a handful of enormous cities, like Ancient Rome, Ancient China, or Mamluk Egypt. It would also apply to any of the others if they have significant improvements over standard medieval technology (including magic) in the fields of agronomy and food transportation.
        Urban Population: 15-20%
        Hierarchy Slope: Very Steep
        Regional Cities: 0.5 – 1 / 10,000 sqr km
        Major Towns: 0.1 – 0.3 / 10,000 sqr km
        Minor Towns: 1-2 / 10,000 sqr km
        Base Village: 480-720

    Coastal Mercantile Model: Based on the late medieval and early modern Low Countries (Flanders / Holland) and the Italian City States. Power and wealth are distributed among many medium-large communities, trading ports, and other economic centers, but there is no one super-sized city.
        Urban Population: 20-30%
        Hierarchy Slope: Very flat at low levels, rising sharply from higher town sizes (30,000 people)
        Regional Cities: 1 – 2 / 10,000 sqr km
        Major Towns: 2 – 4 / 10,000 sqr km
        Minor Towns: 4 – 6 / 10,000 sqr km
        Base Village: 280-420

    Frontier Nation: Somewhere in between Scotland and England, consisting of one part moderately densely settled, one part very sparsely settled (4-4 times as large), and a third part in the middle (2-3 times as large), all relative to the densely settled region.
        Urban Population: 4-8%
        Hierarchy Slope: Moderate, flattens
        Regional Cities: 0.05 / 10,000 sqr km
        Major Towns: 0.2-0.25 / 10,000 sqr km
        Minor Towns: 1-2 / 10,000 sqr km
        Base Village: 200-300

    Tribal / Clan Model: based on Early Medieval Scandinavia and central Africa. Also useful for an extensive Nomadic Trading Network. Settlements are mainly defensive or seasonal gathering points.
        Urban Population: 2-5%
        Hierarchy Slope: Impossibly Steep but capped
        Regional Cities: None
        Major Towns: 0.001 / 10,000 sqr km
        Minor Towns: 0.05 / 10,000 sqr km
        Base Village: 80-120

5.8.1 Villages

The village is the fundamental unit of the population distribution simulation – everything starts there and flows from it.

    5.8.1.1 Village Frequency

    I’ve given this section a title that I think everyone will understand, but it’s not actually what it’s all about. The real question to be answered here is: how big is the Locus surrounding a population center?

    The answer differs from one Demographic Model to another, unsurprisingly.

    The area of a given Locus is:

        SL = MF x (Pop)^0.5 x k,
            where,
            SL = Locus Size
            MF = Model Factor
            Pop is the population of the village
            and k = a constant that defines the units of area.

    The base calculation, with a k of 1, is measured in days of travel. That works for a lot of things, but comparison to a base area of 10,000 km^2 isn’t one of them. For that, we need a different k – one based on the Travel Ranges defined in previous parts of this series.
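    For those who like things in working form, the formula can be sketched as follows (the function name is mine, and the k = 20 conversion anticipates the 20 km base radius worked out below):

```python
import math

def locus_size(pop, model_factor, k=1.0):
    """Locus area: SL = MF x (Pop)^0.5 x k.

    With k = 1 the result is in days-of-travel units; with k = 20
    (the 20 km base radius) it comes out in sqr km.
    """
    return model_factor * math.sqrt(pop) * k
```

    For example, locus_size(240, 4.1, 20) returns roughly 1270 – the base locus area, in sqr km, of a minimum-size English village.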

    Section 5.7.1.14.5.1 gives answers based on travel speed, more as a side-issue than anything else, based on the number of miles that can be traversed in a day:

      (Very) Low d = 10 miles / day
      Low d = 20 miles / day
      Reasonable d = 25 miles / day
      Doable d = 30 miles / day
      Close To Max (High) d = 40 miles / day
      Max d = 50 miles / day
          ( x 1.61 = km).

    — but these are the values for Infantry Marching, and that’s a whole other thing.

    Infantry march faster than people walk or ride in wagons. The amount varies depending on terrain (that’s the main variable in the above values), but – depending on who you ask – it’s 1 2/3 or 2 or 2.5 times.

    But, because they travel in numbers, they can march for less time in a day. Some say 6 hours, some 7, some 8. Ordinary travelers may be slower, but they can operate for all but an hour or two of daylight. That might be 8-2=6 or 7 hours in winter, but it’s more like 12-2=10 or 11 hours in summer.

    And it has to be borne in mind that the basis for these values assumes travel in Summer – at least in medieval times. But we want to take the seasons out of the equation entirely and set a baseline from which to adjust the list given earlier.

    One could argue that summer is when the crops are growing, and therefore that should be the basis of measurement, given that we’re looking for the size of a community’s reach.

    So let’s take the summer values and average them to 10.5 hours. When you take the various factors into account and generate a table (I used 6, 6.5, 7, 7.5, and 8 for army marching hours per day, and the various speeds cited plus 2.25 as an additional intermediate value), work out all the values the conversion might be, and average them, you get 1.04. That’s so small a change as to be negligible – 1.04 x 50 = 52. We will have far bigger approximations than that!

    So we can use the existing table as our baseline. Isn’t that convenient?

    But which value from amongst those listed to choose? Overall, unless there’s some reason not to, you have to assume that terrain is going to average out when you’re talking about a baseline unit of 10,000 sqr kilometers. So, let’s use the “Reasonable” value unless there’s reason to change it.

    And that gives a conversion rate of 1 day’s travel = roughly 25 miles, or 40 km. And those are nice round numbers.

    Now, a locus is roughly circular in shape, so is that going to be a radius or a diameter? Well, a “market day” is how far a peasant or farmer can travel with their goods and return in a day, so I think we’re dealing with a radius of 1/2 the measurement, which means that measurement must be the diameter of the locus.

    Which means that the base radius of a locus is 12.5 miles or 20 km.

    In an area where the terrain is friendly in terms of travel, this could inflate to twice as much; in an area where terrain makes travel difficult, it could be 1/2 as much or less. But if we’re looking for a baseline, that works.

    12.5 miles radius = area roughly 500 sqr miles = area 1270 sqr km. So in 10,000 sqr km, we would expect to find, on average, 7.9 loci. But that’s without looking at the population levels and the required Model Factors.

    The minimum size for an English Village is 240 people. The Square Root of 240 is 15.5.

    So the formula is now 1270 = 15.5 x 20 x Model Factor, and the Model Factor for England conditions and demographics is 4.1. Under this demographic model, there will be 4.1 Village Loci – which is the same thing as 4.1 villages – in 10,000 sqr km.

    Having worked one example out to show you how it’s done, here are the Model Factors for all the Demographic Models:

    ▪ Imperial Core: 480^0.5 = 21.9, and 21.9 x 20 x Model Factor = 1270, so MF = 2.9
    ▪ Germany (HRE): 400^0.5=20, and 20 x 20 x MF = 1270, so MF = 3.175
    ▪ France: 320^0.5 = 17.9, and 17.9 x 20 x MF = 1270, so MF = 3.55
    ▪ Coastal Mercantile Model: 280^0.5 = 16.733, and 16.733 x 20 x MF = 1270, so MF = 3.8
    ▪ England: 4.1
    ▪ Frontier Nation: 200^0.5 = 14.14, and 14.14 x 20 x MF = 1270, so MF = 4.5
    ▪ Scotland: 160^0.5 = 12.65, and 12.65 x 20 x MF = 1270, so MF = 5.02
    ▪ Tribal / Clan Model: 80^0.5 = 8.95, and 8.95 x 20 x MF = 1270, so MF = 7.1
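    If you’d rather let a calculator grind these out – say, for a custom Kingdom with its own Base Village minimum – the rearrangement is mechanical. A sketch (the names are mine):

```python
import math

def model_factor(min_village_pop, base_area=1270, k=20):
    """Rearranged from: base_area = (pop)^0.5 x k x MF."""
    return base_area / (math.sqrt(min_village_pop) * k)

# Minimum Base Village populations for the Demographic Models
minimums = {
    "Imperial Core": 480, "Germany (HRE)": 400, "France": 320,
    "Coastal Mercantile": 280, "England": 240, "Frontier Nation": 200,
    "Scotland": 160, "Tribal / Clan": 80,
}

for name, pop in minimums.items():
    print(f"{name}: MF = {model_factor(pop):.2f}")
```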

    So, why didn’t I simply state the number of loci (i.e. the number of villages) in an area?

    It’s because that’s a base number. When we get to working on actual loci or zones, these can shrink or grow according to other factors. This is a guideline – but to define an actual village and its surrounds, we will need to use the MF. Besides, you might want to generate a specific model for a specific Kingdom in your game.

    You may be wondering, then, why it should be brought up at all, and especially at this stage. The answer is that the area calculated is a generic base number which may bear only a passing resemblance to the actual size of the locus.

    A locus will continue to expand until it hits a natural boundary, a border, or equidistance to another population center. Very few of them will actually be round in shape – some of them not even approximately.

    The ratio between ACTUAL area and BASE area is an important factor in calculating the size of a specific village.

    An example of the ‘real borders’ of a Locus

    To create the above map, I made a copy of the base map (shown to the left). At the middle top and bottom, I placed a dot representing the Locus ‘radius’.

    At the left top, another dot marked the half-way point to the next town (top left), where it intersected a change of terrain – in this case, a river.

    At the top right, doing the same thing would have made the town at top right a bit of a mixed bag – it already has forests and hills and probably mountains. I didn’t want it to have a lot of farmland as well. So I deliberately let the current locus stretch up that way. The point below it is also slightly closer to the top right town than it would normally be, but that’s where there is a change of terrain – the road. I tossed up whether the locus in question should include the intersection and road, but decided against it.

    And so on. Once I had the main intersection points plotted, I thought about intermediate points – I didn’t want terrain features to be split between two towns, they had to belong to one or the other. You can see the results in the “bites” that are taken out of the borders of the locus at the bottom.

    If you use your fingers, one pointing at the town in the center and the other at the top-middle intersection point, and then rotate them to get an idea of the ‘circular’ shape of the locus, you can see that it’s missing about 1/6 of its theoretical area to the east, another 1/6 to the south, and a third 1/6th to the west. It’s literally 1/2 of the standard size. That’s going to drive the population down – but it’s fertile farmland, which will push it up. But that’s getting ahead of ourselves.

    As an exercise, though, imagine that the town lower right wasn’t there – the one that’s on the edge of the swamp. Instead of ending at a point at the bottom, the border would probably have continued, including in the locus that small stand of trees and then following the rivers emerging from the swamp, and so including the really small stand of trees. The Locus wouldn’t stop until it got to the swamp itself. The locus would have extended east to the next river, in fact, encompassing forest and hills until reaching the East-road, which it would follow inwards until it joined the existing boundary. It would still have lost maybe 1/12th in the east, but it would have gained at least that much and probably more in the south, instead of losing 1/3. The locus would be 1 – 1/12 + 1/3 – 1/12 – 1/3 = 10/12 of normal instead of 1/2 of normal.

    5.8.1.2 Village Base Size

    If you look at the models, you will notice “Base Village” and a population count, and might be fooled into thinking that everything in that range is equally likely. It’s not.

    Take the French model – it lists the village size as 320-480.

    First, what’s the difference, high minus low? In this case, it’s 160. We need to divide that by 8 as a first step – which in this case is a nice, even, 20.

    Half of 20 is 10, and three times 10 is 30. Always round these UP.

    With that, we can construct a table:

        01-30 = 320
        31-40 = 321-350 (up by 30)
        41-50 = 351-380 (up by 30)
        51-60 = 381-400 (up by 20)
        61-70 = 401-420 (up by 20)
        71-75 = 421-430 (up by 10)
        76-80 = 431-440 (up by 10)
        81-85 = 441-450 (up by 10)
        86-90 = 451-460 (up by 10)
        91-95 = 461-470 (up by 10)
        96-00 = 471-480 (up by 10)
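    Since the construction is mechanical, it automates nicely. Here’s a sketch that builds the main table for any Base Village range (function and variable names are mine):

```python
import math

def village_size_table(low, high):
    """Build the d% main table for a Base Village range, e.g. 320-480.

    (high - low) / 8 gives the step; bands rise by 3, 3, 2, and 2
    half-steps, then by single half-steps, with the last band
    absorbing any rounding so it ends exactly at `high`.
    """
    step = (high - low) // 8                  # 160 // 8 = 20
    half = math.ceil(step / 2)                # always round UP
    widths = [3 * half, 3 * half, 2 * half, 2 * half] + [half] * 6
    d_bands = [(31, 40), (41, 50), (51, 60), (61, 70), (71, 75),
               (76, 80), (81, 85), (86, 90), (91, 95), (96, 100)]
    table = [((1, 30), (low, low))]           # 01-30 = the minimum
    top = low
    for band, width in zip(d_bands, widths):
        table.append((band, (top + 1, top + width)))
        top += width
    band, (pop_lo, _) = table[-1]             # absorb rounding errors
    table[-1] = (band, (pop_lo, high))
    return table
```

    For the French 320-480 range this reproduces the table above, with the final band running 471-480.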

    I used Gemini to assist in validating various elements of this section, and it thought the “up by 30” terminology was confusing and should be replaced with something more formal.

    I disagree. I think the more colloquial vernacular will get the point across more clearly.

    It was also concerned – and this is a more important point – that GMs couldn’t implement this roll and the subsequent sub-table quickly. I disagree, once again – I’ve seen far more complicated constructions for getting precise population numbers than two d% rolls, especially since the same tables will apply to all areas within the Kingdom that are similar in constituents. Everywhere within a given zone, in fact, unless you deliberately choose to complicate that in search of precision.

    In general, you construct one set of tables for the entire zone – and can often copy those as-is for other similar zones as well. Maybe even for a whole Kingdom.

    The d% breakdown always uses the same percentages, and there are always 2 “up by 3 half-steps” bands, 2 “up by 2 half-steps” bands, and 6 “up by 1 half-step” bands – with the final one absorbing any rounding errors; in this example, there aren’t any.

    We then construct a set of secondary tables by dividing our three (or four) increments by 10. In this case, 30 -> 3, 20 -> 2, 10 -> 1. And we apply the same d% breakdown in exactly the same way, but from a relative position:

    So:
        1/2 x 3 = 1.5, rounds to 2; 3 x 1.5 = 4.5, rounds to 5.
        1/2 x 2 = 1; 3 x 1 = 3.
        1/2 x 1 = 0.5, rounds to 1; 3 x 1 = 3.

    The “Up By 30” Sub-table reads:

        01-30 = +0
        31-40 = +5
        41-50 = +5+5 = +10
        51-60 = +10+3=+13
        61-70 = +13+3=+16
        71-75 = +16+2 = +18
        76-80 = +18+2 = +20
        81-85 = +20+2 = +22
        86-90 = +22+2 = +24
        91-95 = +24+2 = +26
        96-00 = +30 (up by whatever’s left).

    The “Up By 20” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+2 =+8
        61-70 = +8+2=+10
        71-75 = +10+1 = +11
        76-80 = +11+1 = +12
        81-85 = +12+1 = +13
        86-90 = +13+1 = +14
        91-95 = +14+1 = +15
        96-00 = +20 (up by whatever’s left).

    The “Up By 10” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+1 =+7
        61-70 = +7+1=+8
        71-75 = +8+1 = +9
        76-80 = +9+1 = +10
        81-85 = +0-1 = -1
        86-90 = -1-1 = -2
        91-95 = -2-1 = -3
        96-00 = -3-1 = -4

    Notice what happened when I ran out of room in the “+10”? The values stopped going up, and starting from +0, started going DOWN.

    It takes just two rolls to determine the Base Population of a specific village with sufficient accuracy for our needs within a zone.

    EG: Roll of 43: Main Table = the 41-50 band, an up-by-30 result with a base of 350. So we use the “Up By 30” Sub-table and roll again: 72, which gives a +18 result. So the Base Population is 350+18 = 368.

    These results are intentionally non-linear.

    Optional:

    If you want more precise figures, apply -3+d3.

    Or -6+d6.

    Or anything similar – though I don’t really think you should go any larger than -10+d10 – and I’d consider -8+2d6 first.

    I have to make it clear that this relates to the population of a specific village in a specific zone, not a generic one. For anything of the latter kind, continue to use the minimum base population. We had to have the ‘real locus’ discussion because the actual locus affects what terrain influences the town size and how much of it there is; this population refinement is just a bonus that seemed to bookend it nicely.

    5.8.1.3 Village Demographics

    Let’s start by talking Demographics, both real-world and Fantasy-world.

    The raw population numbers are not as useful as numbers of families would be. But that’s incredibly complicated to calculate and there’s no good data – the best that I could get was a broad statement that medieval times had a child mortality rate (deaths before age 15) of 40-50%, an infant mortality rate (deaths before age 1) of 25-35%, and an average family size of 5-7 children.

    If we look at modern data, we get this chart:

    Source: Our World In Data, cc-by, based on data from the United Nations. Click the image to open a larger version (3400 x 3003 px) in a new tab.

    I did a very rough-and-ready curve fitting in an attempt to exclude social and cultural factors and derive a basic relationship for what is clearly a straight band of results:

    Derivative work (see above), cc-by, extrapolating a relationship curve in the data

    …from which I extracted two data points: (0%,1.8) and (10%,5.6), which in turn gave me: Y = 0.38 X + 1.8, which can be restated, X = 2.63Y – 4.74. And that’s really more precision than this analysis can justify, but it gives a readout of child mortality for integer family sizes.

    Yes, I’m aware that the real relationship isn’t linear. But this simplified approximation is good enough for our purposes.

    That, in turn, gives me the following:

        Y = Typical Number Of Children,
        X = Overall Child Mortality Rate

        Y, X:
        1, -3%
        2, 0%
        3, 3%
        4, 5%
        5, 8%
        6, 11%
        7, 13%
        8, 16%
        9, 18%
        10, 21%
        11, 24%
        12, 26%
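    For what it’s worth, flooring the fitted line to a whole percentage reproduces that table exactly – a sketch, with my own function name:

```python
import math

def modern_child_mortality_pct(children):
    """Modern-data fit: X = 2.63 * Y - 4.74, floored to a whole percent."""
    return math.floor(2.63 * children - 4.74)
```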

    …so far, so good.

    Next, I need to adjust everything for the rough data points that we have for medieval times, when bearing children was itself a mortality risk for the mothers.

    5-7 children, 40-50%

    so that gives me (5, 8, 40) and (7, 13, 50) – more useful in this case as (8, 40) and (13, 50) – which works out to Z = 2 X + 24.

        Z=Child Mortality, Medieval-adjusted

        Y, X, Z:
        1, -3%, 18%
        2, 0%, 24%
        3, 3%, 30%
        4, 5%, 34%
        5, 8%, 40%
        6, 11%, 46%
        7, 13%, 50%
        8, 16%, 56%
        9, 18%, 60%
        10, 21%, 66%
        11, 24%, 72%
        12, 26%, 76%
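    In code, the medieval adjustment is a one-liner on top of the modern-fit value (the name is mine):

```python
def medieval_child_mortality_pct(x):
    """Z = 2 * X + 24, where X is the modern-fit child mortality %."""
    return 2 * x + 24
```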

    But here’s the thing: realism and being all grim and gritty might work for some campaigns, but for most of us – no. What we need to do now is apply a “Fantasy Conversion” which contains just enough realism to be plausible and replaces the balance with optimism.

    I think dividing Z (the medieval-adjusted child mortality rate) by 3 sounds about right – YMMV. That gives me the F values below – but I also checked a ratio of 2.5, which gives the F2 values.

    Gemini suggested using 3.5 or 4 for an even ‘softer’ mortality rate, and 2.25 or 2 for a grittier one.

    In principle, I don’t have a problem with that – and part of the reason why I’m not just throwing the mechanics at you, but explaining how they have been derived, is so that GMs can use alternate values if they think them appropriate to their specific campaigns.

    I don’t just want to feed the hungry, I want to teach them to fish, to paraphrase the old proverb.

        F= Fantasy Adjusted Child Mortality Rate
        F2 = more extreme Child Mortality Rate

        Y, X, Z, F, F2:
        1, -3%, 18%, 6%, 7%
        2, 0%, 24%, 8%, 10%
        3, 3%, 30%, 10%, 12%
        4, 5%, 34%, 11%, 14%
        5, 8%, 40%, 13%, 16%
        6, 11%, 46%, 15%, 18%
        7, 13%, 50%, 17%, 20%
        8, 16%, 56%, 19%, 22%
        9, 18%, 60%, 20%, 24%
        10, 21%, 66%, 22%, 26%
        11, 24%, 72%, 24%, 29%
        12, 26%, 76%, 25%, 30%

    I think the F values are probably more appropriate for High Fantasy, while the F2 are better for more typical fantasy – but you’re free to use this information any way you like, the better to suit your campaign world.

    You might decide, for example, that averaging the Medieval Adjusted Values with the F2 is ‘right’ – so that 5 children would indicate (40+16)/2 = 28% mortality.
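    The whole Fantasy Conversion can hang off a single divisor, which makes swapping in the softer or grittier ratios trivial (a sketch; the ‘softness’ parameter is my label):

```python
def fantasy_child_mortality_pct(z, softness=3.0):
    """Divide the medieval-adjusted rate Z by a 'softness' divisor:
    3.0 yields the F column, 2.5 the F2 column; 3.5-4 is softer
    still, 2-2.25 grittier."""
    return round(z / softness)
```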

    Social values can also adjust these numbers – traditionally, that means valuing male children more than females. But in Fantasy / Medieval game settings, I think that would be more than counterbalanced, IF it were a factor, by the implied increased risks of youthful adventuring. In a society that practices such gender bias, it would not surprise me if the ultimate gender ratio were 60-40 or even 70-30 – in favor of girls.

      5.8.1.3.1 Maternal Survival

      The next element to consider is the risk of maternal death in childbirth. That’s even harder to pin down data on, but 1-3% per child is probably close to historically accurate. Balanced around that is the greater risks from adventuring, and the availability of clerical healing. So I’m extending the table to cover 4, 5, and 6%, but you are most likely to want the values in the first columns. To help distinguish these extreme possibilities from the usual ones, they have been presented in Italics.

      We’re not interested so much in the number of cases where it happens as in the number of cases where it doesn’t – the % of families with living mothers, relative to the number of children.

          Y, @1, @2, @3, @4, @5, @6:
          1, 99%, 98%, 97%, 96%, 95%, 94%
          2, 98.0%, 96.0%, 94.1%, 92.2%, 90.3%, 88.4%
          3, 97.0%, 94.1%, 91.3%, 88.5%, 85.7%, 83.1%
          4, 96.1%, 92.2%, 88.5%, 84.9%, 81.5%, 78.1%
          5, 95.1%, 90.4%, 85.9%, 81.5%, 77.4%, 73.4%
          6, 94.1%, 88.6%, 83.3%, 78.3%, 73.5%, 69.0%
          7, 93.2%, 86.8%, 80.8%, 75.1%, 69.5%, 64.8%
          8, 92.3%, 85.1%, 78.4%, 72.1%, 66.3%, 61.0%
          9, 91.4%, 83.4%, 76.0%, 69.3%, 63.0%, 57.3%
          10, 90.4%, 81.7%, 73.7%, 66.5%, 59.9%, 53.9%
          11, 89.5%, 80.1%, 71.5%, 63.8%, 56.9%, 50.6%
          12, 88.6%, 78.5%, 69.4%, 61.3%, 54.0%, 47.6%

      The method of calculation is 100 x ( 1- [D/100] ) ^ Y. Just in case you want to use different rates than these.
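      That formula drops straight into a function, should you want to tabulate different rates (a sketch, my names):

```python
def mother_survival_pct(children, risk_pct_per_birth):
    """% of mothers surviving Y births at D% risk per birth:
    100 x (1 - D/100)^Y."""
    return 100 * (1 - risk_pct_per_birth / 100) ** children
```

      For instance, mother_survival_pct(7, 6) gives roughly 64.8, matching the 6% column of the table.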

      There does come a point at which the likelihood of maternal death begins to limit the size of the average family, though, and I think the 6% values are getting awfully close to that mark.

      Let’s say that a couple have 6 children, right in the middle of the historical average. If the mother falls pregnant a 7th time, at 6%, she has roughly a 1 in 3 chance of dying (and a fair risk of the child perishing with her). Which means that she HAS no more children. But if she beats those odds to have 7 children, her chances are even worse when it comes to child #8, and so on.

      Of all the cases with a mother who survived childbirth, we then need to factor in death from all other causes – monsters and adventuring and mischance and so on. Fantasy worlds tend to be dangerous, so this could be quite high – maybe as much as 5% or 10% or 20%. So multiply the living mothers by 0.8. Or 0.7 Or 0.9 – whatever you consider appropriate – to allow for this.

      This rural community is obviously alongside a major river or coastline – the proximity of the mountains suggests the former, but isn’t definitive. The name offers a clue: ‘Hallstatt’, which to me sounds Germanic, and suggests that the waterway may be the Rhine. Or not, if I’ve misinterpreted. Image by Leonhard Niederwimmer from Pixabay

      5.8.1.3.2 Paternal Survival

      The result is the % of families with a surviving mother. So how many surviving fathers are there per surviving mother? Estimates here vary all over the shop, and more strongly reflect social values. But if I’m suggesting 5% – 20% mortality for mothers from other sources, the same would probably be reasonably true of fathers – if those social values don’t get in the way.

          0.95 x 0.95 = 90.25%.
          0.9 x 0.9 = 81%.
          0.85 x 0.85 = 72.25%
          0.8 x 0.8 = 64%.

      Those values give the percentages in which both parents have survived to the birth of the average number of children.
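      As a quick sketch, assuming the same rate applies independently to each parent:

```python
def both_parents_survive_pct(other_mortality_pct):
    """% of families in which both parents survive hazards other
    than childbirth, assuming independent and equal risks."""
    p = 1 - other_mortality_pct / 100
    return 100 * p * p
```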

      If you’re using 10% mortality from other causes, then in 90% of cases in which the mother has died, the father has survived. But in 10% of the cases in which the mother has succumbed, the children are orphaned by the loss of the other parent.

      The higher this percentage, the higher the rate of survivors remarrying and potentially doubling the size of their households at a stroke. And that will distort the average family size far more quickly than the actual mortality percentages, unless there is some social factor involved – maybe it’s expected that parents with children will only marry single adults without children, for example.

      The problem with this approach is that if it’s the mother who is remarrying, this puts her right back on that path to mortality through childbirth; the child-count ‘clock’ does not get reset. If it’s a surviving father marrying a new and childless wife, it DOES reset, because the new mother has not had children previously.

      In a society that permits such actions, there is a profound dichotomy at its heart that favors larger families for husbands who survive while placing mothers who survive at far greater risk of the family becoming a burden to the community – which is likely to change that social acceptance. Paradoxically, a double standard is what’s needed to give both parents a more equal risk of death, and a more equal chance of surviving.

      5.8.1.3.3 Childless Couples

      Next, let’s think about the incidence of Childless Couples. We can state that there’s a given chance of pregnancy in any given year of marriage; but once it happens, there is just under a full year before that chance re-emerges.

          Year 1: A% -> 1 child born
          Year 2: (100-A)% x A% -> 1st child born; A%^2 -> 2 children born
          Year 3: (100-A)%^2 x A% -> 1st child born; (100-A)% x A%^2 -> 2 children born; A%^3 -> 3 children born

      … and so on.

      This quickly becomes difficult to calculate, because each row adds 1 to the number of columns, and it’s easy to lose track.

      But here’s the interesting part: we don’t care. To answer this question, there’s a far simpler calculation.

      In any given year, there will be B couples married. (100-A)% of them will not have children in the course of that year. If we specify B as the average, rather than as a value specific to a given year, then in the year before, another B couples will also have married, with (100-A)% of them childless at the end of that year – which means that in the course of their second year of marriage, A% will have children and stop being counted in this category, while (100-A)% will not, and will still count.

      Adding these up, we get (100-A)% + (100-A)%^2 + …. and so on. And these additions will get progressively and very rapidly smaller.

      Let’s pick a number, by way of example – let’s try A=80%, just for the sake of argument.

      We then get 20% + 4% + 0.8 % + 0.16% + 0.032% + 0.0064% … and I don’t think you’d really need to go much further, the increases become so small. I pushed on one more term (0.000128%) and got a total of 24.998528%. I pushed further with a spreadsheet, and not even 12 years was enough to cross the 25% mark – but it was getting ever closer to it. Close enough to say that for A=80, there would be 25 childless couples for every… how many?

      The answer to that question comes back to the definition of A: it is the number of couples out of 100 who have a child in any given year. So, over 12 years, that’s a total of 1200 couples. And 25 / 1200 = 2.08%.

      I did the math – cheating, I used a spreadsheet – and got the following, all out of 1200 couples:

          A%, C, [C rounded]
          80%, 25.00, 25
          75%, 33.33, 33
          70%, 42.86, 43
          65%, 53.85, 54
          60%, 66.67, 67
          55%, 81.81, 82
          50%, 99.98, 100
          45%, 122.13, 122
          40%, 149.67, 150
          35%, 184.66, 185
          30%, 230.10, 230
          25%, 290.50, 291
          20%, 372.51, 373

      But that has to mean that the rest of those 1200 couples have to have children – and the number of children will approach the average number that you chose.

      So if you pick a value for A, you can calculate exactly how many childless couples there are relative to the number of families with children:

          A=45%, C=122:

          1200-122 = 1078
          1078 families with children, 122 childless couples
          1078 / 122 = 8.836
          8.836 + 1 = 9.836
          so 1 in 9.836 families will be childless couples.
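      The whole chain – truncated series, childless count, and final ratio – can be checked in a few lines (a sketch; the names and the 12-year horizon mirror the spreadsheet figures above):

```python
def childless_per_1200(a_pct, years=12):
    """Childless couples among 12 annual cohorts of 100 marriages,
    where a_pct% of still-childless couples have a child each year."""
    r = 1 - a_pct / 100
    return 100 * sum(r ** n for n in range(1, years + 1))

c = childless_per_1200(45)     # ~122 childless couples per 1200
families = 1200 - c            # ~1078 families with children
one_in = families / c + 1      # ~9.8, i.e. about 1 household in 10
```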

      5.8.1.3.4 Unwed Singles

      The social pressure to marry has varied considerably through the ages, but the greater the dangers faced by the community, the greater this pressure is going to be. And the fitter and healthier you are, the greater this pressure is going to be amplified.

      This is inescapable logic – the first duty of any given generation in a growing society is to replace the population who have passed away, and it takes a long time to turn children into adults.

      You could calculate the average lifespan, deduct the age of social maturity, and state that society frowns heavily on unwed singles above that age – with the social pressure growing each year as the individual approaches it – and that would be a valid approach.

      The problem is that the average lifespan is complicated by those high rates of childhood death, and trying to extract that factor becomes really complicated and messy. And then you throw in curveballs like Elves and Dwarves, with their radically different lifespans, and the whole thing ends up in a tangled mess.

      So, I either have to pull a mathematical rabbit out of my hat, or I do the sensible thing and get the GM to pick a social practice and do my best to make it an informed choice.

      While a purely mathematical approach is possible, the more that I looked at the question, the more difficult it became to factor every variable into the equation.

      Want the bare bones? Okay, here goes.

      For a given population, P, there are B marriages a year, removing B x 2 unwed individuals from the population. We can already extract the count of those who are ineligible for marriage due to age, because they are all designated as children.

      We can subtract the quantity of childless couples who are already wed in a similar fashion to the calculations of the previous subsection.

      The end result is the number of unwed singles of marriageable age who have not married. Setting P at a fixed value – say 100 people – we can then quickly determine the number of unmarried singles.

      What ultimately killed this approach was that it was – in the final analysis – using a GM estimate of B as a surrogate for getting the GM to estimate the % of singles in their community – and doing so in a manner that was less conducive to an informed choice, and requiring a lot of calculations to end up with the number that they could have directly estimated in the first place.

      Nope. Not gonna work in any practical sense.

      So, instead, let’s talk about the life of the social scene – singles culture. There is still going to be all that social pressure to marry and contribute to the population, especially if you are an even half-successful adventurer, because that makes you the healthiest, wealthiest, and most prosperous members of the community.

      It can be argued that instead of using the average lifespan (with all its attendant problems) and deducting the age of maturity (i.e. the age at which a child becomes an adult) to determine the age by which a couple must have children in order to keep the population at least stable – you need two surviving children, since there are two adults involved, so divide those 2 by the child survival rate and round up – you should instead use the age of the mother as a factor in the rise of maternal mortality during childbirth, and work back from that age. In modern times, that’s generally somewhere in the thirties, maybe up to 40. That doesn’t mean that older women can’t have children, just that under these circumstances, the risks of dying before you have enough offspring are considered too high by the general culture.

      But what does that really get you? There’s always going to be some age at which the pressure to wed starts to grow. Shifting it this way or that by a couple of years won’t change much.

      Looking at it from the reverse angle – how much single life will society tolerate – can be far more useful.

      I would suggest a base value of a decade. Ten years to be an adventurer and live life on the edge.

      In high-danger societies, especially with a high mortality rate, that might come back 2 or 3 years; at its most extreme, 5. That’s all the time you have to focus on becoming a professional who is able to support a family, or at least on setting your feet firmly on that path.

      In low-danger societies, especially those with a lower mortality rate, it might get pushed out a few years, maybe even another 5. That’s enough time that you can sow some wild oats and still settle down into someone respectable within the community.

      How long is the typical apprenticeship? In medieval times? In your fantasy game-world? From the real world, I could bandy about numbers like 4 years, or 5 years, or 5 years and 5 more learning on the job, or repaying debts to the master that trained you. And you end up with the same basic range – 5-15 years.

      What is the age of maturity in your world? Again, I could throw numbers around – 18 or 21 seem to be the most common in modern society, but 16 (even 15) has its place in the discussion – that’s how old you had to be back when I was younger before you could leave school and pursue a trade, i.e. becoming an apprentice. But I have played in a number of games where apprenticeships started at eight, or twelve, and lasted a decade – and THEN you got to start repaying your mentor for the investment that he’s made in you. With interest.

Does there come a point where people are deemed anti-social because they have not married, and find their prospects of attracting a husband or wife diminishing as a result? Don’t say it doesn’t happen; there is plenty of real-life evidence that it exists as a social undercurrent – one that shifts, and sometimes intensifies or weakens, without any real understanding of the factors that drive the phenomenon. But forget the real world and think about the game-world.

      How optimistic / positive is the society? How grim and gritty?

      Think about all these questions, because they all provide context to the basic question: What percentage of the population are unwed with no (official) children?

      Here’s how I would proceed: Pick a base percentage. For every factor you’ve identified that gives greater scope for personal liberty, add 2%. For every factor that demands the sacrifice of some of that liberty, from society’s point of view, subtract 2%. In any given society, there are likely to be a blend of factors, some pushing the percentage up, and some down – but in more extreme circumstances, they might all factor up or down. If you identify a factor as especially weak, only adjust by 1%; if you judge a factor as especially strong, adjust by 3 or even 4%.

      In the end, you will have a number.

      Let me close out this section with some advice on setting that base percentage.

      There are two competing and mutually-exclusive trains of thought when it comes to these base values. Here’s one:

      ▪ In positive societies, low child mortality means fewer young widows/widowers. The society is more stable, allowing for strong family formation and early marriage. Base rate is low.

      ▪ In moderate societies, dangers still disrupt family units, leading to a moderate rate of single, adult households. Base rate is moderate.

      ▪ In dangerous societies, high death rates mean many broken families, orphans, and single parents. The number of adult individuals living outside a stable family unit is maximized. Base rate is high.

      Here’s the alternative perspective:

      ▪ Positive societies produce less social pressure and greater levels of personal freedom, reducing the rate of marriage and increasing the capacity for unwed singles. Base rate is high.

      ▪ Moderate societies have a positive social pressure toward marriage at a younger adult age, and less capacity for personal liberty. Base rate is moderate.

▪ Societies that swarm with danger have a higher death rate, and there would be more social pressure to marry very young to create population stability. The alternative leads to social collapse and dead civilizations. Base rate is low.

      What’s the attitude in your game world? They are all reasonable points of view.

      In a high-fantasy / positive social setting, I would start with a base percentage of 22%. Most factors will tend to be positive, so you might end up with a final value of 32% – but there can be strains beneath the surface, which could lead to a result of 12% in extreme cases.

      In a mid-range, fairly typical society, I would employ a base of 27%. If there are lots of factors contributing to a high singles rate, this might get as high as 37%, and if there are lots of negatives, it might come down to 17% – but for the most part, it will be somewhere close to the middle.

      In an especially grim and dark world, I would employ a base of 33%, in the expectation that most factors will be negative, and lead to totals more in the 23-28% range. But if social norms have begun to break down, social institutions like marriage can fall by the wayside, and you can end up with an unsustainable total of 40-something percent.

      Anything outside 20-35 should be considered unsustainable over the long run. Whatever negative impacts can apply will be rife.
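If you’d rather have a script keep the tally, here’s a minimal sketch of the procedure just described. The base values (22 / 27 / 33) and the ±2% adjustment rule (1% for weak factors, 3-4% for strong ones) are from the text; the society labels and the example factor list are my own placeholders.

```python
# A minimal sketch of the singles-rate procedure. Base values and the
# +/-2% rule are from the article; labels and factors are placeholders.

BASE = {"high-fantasy": 22.0, "typical": 27.0, "grim": 33.0}

def singles_rate(society, factors):
    """factors: one entry per identified factor, in percent – positive
    for greater personal liberty, negative for demanded sacrifice."""
    rate = BASE[society] + sum(factors)
    return rate, 20.0 <= rate <= 35.0  # outside 20-35 is unsustainable

# Hypothetical mid-range society: three liberalizing factors (one
# strong, at +3) and two restrictive ones.
print(singles_rate("typical", [+2, +2, +3, -2, -2]))  # (30.0, True)
```

The second return value simply flags the “anything outside 20-35 is unsustainable” rule of thumb.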

      5.8.1.3.5 Population Breakdown

That’s the final piece of the puzzle – with that information, you can assess the four types of ‘typical family’ with children, plus two further household categories, and their relative frequencies:

          # Children with no parents,
          # Children with mothers but no fathers,
          # Children with fathers but no mothers, and
          # Children with two parents.
          # Childless Couples
          # Unwed Singles

Get the total size of each of these family units / households* in number of individuals, multiply that size by its frequency of occurrence, add up all the results, and convert each to a percentage of that total, and you have a complete population breakdown. Take the weighted average of the first five and you have the average family size in this particular region and all similar ones.

      Multiply each frequency of occurrence by the village population total (rounding as you see fit), and you get the constituents of that village.
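For those who prefer to automate it, here’s the same breakdown arithmetic as a short script. The six category names come from the list above; every size and frequency below is an invented placeholder for illustration, not a value from the system.

```python
# Sketch of the population-breakdown arithmetic. All sizes and
# frequencies below are placeholder numbers, not system values.

categories = {
    # name: (average individuals per unit, frequency of occurrence)
    "children, no parents":  (2.5, 0.02),
    "children, mother only": (3.5, 0.08),
    "children, father only": (3.5, 0.04),
    "children, two parents": (4.5, 0.45),
    "childless couples":     (2.0, 0.14),
    "unwed singles":         (1.0, 0.27),
}

# Size x frequency, summed, then converted to percentages:
weighted = {k: s * f for k, (s, f) in categories.items()}
total = sum(weighted.values())
percent = {k: 100 * w / total for k, w in weighted.items()}

# Weighted average of the first five = average family size:
family = [k for k in categories if k != "unwed singles"]
avg_family = (sum(weighted[k] for k in family)
              / sum(categories[k][1] for k in family))

# Frequency x village population = constituents of a 300-person village:
village = {k: round(300 * categories[k][1]) for k in categories}
print(round(avg_family, 2))  # 3.8
```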

I have never liked the use of the term ‘households’ in a demographic context, even though that seems to be the most commonly preferred term these days. I’ve lived in a number of shared accommodations as a single over the years, and that experience muddies what’s intended to be a clearer understanding of the results. If you have 50 or 100 singles living in a youth hostel, are they one household or 50-100? ‘Families’ – nuclear or non-nuclear – is, for me at least, the clearer, more meaningful term.

      5.8.1.3.6 The Economics Of The Demographics

In modern times, it’s not unusual for two adults and even multiple children all to have different occupations with different businesses, all at the same time. Some kids start as paper boys and girls at a very young age; even five-year-olds with lemonade stands count in this context.

      Go back about 100 years and that all changes. There is typically only one breadwinner – with exceptions that I’ll get to in a moment – and while some of them will have their own business (be it retail or in a service industry), most will be working for someone else.

      There will be a percentage who have no fixed employment and operate as day labor.

      Going into Victorian times, we have the workhouses and poorhouses, where brutal labor practices earn enough for survival but little more. While some were profitable for the owners, most earned less than they cost, and relied on charitable ‘sponsorship’ from other public institutions – sometimes governments, more often religious congregations. These are the exceptions that I mentioned. This is especially true where the father has deserted the family or died (often in war) leaving the mother to raise the children but unable to do so because of the gender biases built into the societies of the time.

      Go back still further, and it was a matter of public shame for a woman to work – with but a few exceptions such as midwifery. Nevertheless, they often earned supplemental income for the families with craft skills such as sewing, knitting, and needlework.

      The concept that the male was the breadwinner only gets stronger as you pass backwards through history.

Fantasy games are usually not like that. They see the world from the modern perspective and force the historical reality to conform to that perspective. In particular, gender bias is frequently and firmly excluded from fantasy societies.

      The core reasoning is that characters and players can be of either gender (or any of the supplementary gender identifications) and the makers of the games don’t wish to exclude potential markets with discomforting historical reality.

      There are a few GMs out there who intentionally try to find an ‘equal but distinct’ role for females and others within their fantasy societies; it’s difficult, but it can be done – and it usually happens by excluding common males from segments of the economy within the society. If there are occupations that are only open to women, and occupations of equal merit (NOT greater merit) that are only open to men, you construct a bilateral society in which two distinct halves come together to form a whole.

      But it would still be unusual for a single household to have multiple significant breadwinners; you had one principal earner and zero or more supplemental incomes ‘on the side’.

      Businesses were family operations in which the whole family were expected to contribute in some way, subject to needs and ability.

      And that’s the fundamental economic ‘brick’ of a community – one income per family, whether that income derives as profits from a business or from labor in someone else’s business.

      You can use this as a touchstone, a window into understanding the societies of history, all the way back into classical times – who earned the money and how? In early times, it might be that you need to equate coin-based wealth with an equivalent value in goods, but once you start thinking of farm produce or refined ore as money, not as goods, the economic similarities quickly reveal themselves.

      So that is also the foundation of economics in this system. One family, one income (plus possible supplements). In fact, there were periods in relatively recent history in which the supplementary income itself was justification for marriage and children.

In modern times, we evaluate based on the reduction of expenses; this is because most of our utility costs don’t rise as fast as the number of people sharing them (which goes back to the muddying concept of ‘households’: if two people are sharing the costs, both have more left over to spend, because the cost per person has gone down; if they are NOT sharing expenses, each providing fully for themselves, then they are two ‘households’, not one. It also helps to think of rent as a ‘utility’ in this context).

      But that’s a very modern perspective, and one that only works with the modern concept of ‘utilities’ – electricity, gas, and so on. Go back before that, into the pre-industrial ages, and the perspective changes from one of diminishing liabilities into one of growth of potential advantages. And having daughters who could supplement the household income by working as maids or providing craft services gave a household an economic advantage.

      5.8.1.3.7 An Economic Village Model

          8 a^2 = b^2 – c^2.

      Looks simple, doesn’t it? In fact, it is oversimplified – the reality would be

          a^d = (b^e – c^f ) / g,

      but that’s beyond my ability to model, and too fiddly for game use.

      a = the village’s profitability. Some part of this may show up as public amenities; most of it will end up in the pockets of the broader social administration, in whatever form that takes.

      b = the village’s productivity, which can be simplified to the number of economic producers in the village. You could refine the model by contemplating unemployment rates, but the existence of day laborers whose average income automatically takes into account days when there’s no work to be found, means that we don’t have to.

      c = the village’s internal demand for services and products. While usually less than production, it doesn’t have to be so. But it’s usually close to b in value.

      To demonstrate the model, let’s throw out figures of 60 and 58 for b and c.

          8 a^2 = 60^2 – 58^2 = 3600 – 3364 = 236.
          a = (236 / 8)^0.5 = 29.5^0.5 = 5.43

      The village grows. b rises to 62. c rises to 59.

          8 a^2 = 62^2 – 59^2 = 3844 – 3481 = 363.
          a = (363 / 8)^0.5 = 45.375^0.5 = 6.736.

      It has risen – but not by very much.
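The model is trivial to put into code, which makes experimenting with b and c painless. This sketch just restates 8 a^2 = b^2 – c^2 and reproduces the two worked examples above.

```python
# The village model as code – a direct restatement of 8 a^2 = b^2 - c^2.

def profitability(b, c):
    """a: village profitability; b: producers; c: internal demand."""
    return ((b**2 - c**2) / 8) ** 0.5

print(round(profitability(60, 58), 2))  # 5.43
print(round(profitability(62, 59), 3))  # 6.736
```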

      Things become clearer if you can define c as a percentage of b:

    c = b x D / 100, so
    8 a^2 = b^2 – (b x D / 100)^2 = b^2 x (1 – (D/100)^2)

      If 98% of the village’s production goes to maintaining and supporting the village, then only 2% is left for economic growth. If the village adds more incomes, demand rises by the normal proportion as well – so economic growth rises, but quite slowly. In the above example calculations, 59/62 = 95.16% going to support the village – and 95% is about as low as it’s ever going to realistically go. In exceptionally productive years, it might be as low as 66.7%, but most years it’s going to be much higher than that.

      Side-bar: 5.8.1.3.7.1 Good Times

      You can actually model how often an exceptional year comes along, by making a couple of assumptions. First, if 66.7 is as good as they get, and 95 is as bad as an exceptionally good year gets, then the average ‘exceptional year’ will be 80.85%.

      Second, if 95% is as good as a typical year gets, and 102% is as bad as a typical year gets, then the average ‘normal’ year will be 98.5%.

      Third, if the long term average is 95.16%, then what we need is the number of typical years needed to raise the overall average (including one exceptional year) to 95.16%.

          95.16 x (n+1) = 80.85 + (n x 98.5)
          95.16 x n + 95.16 = 80.85 + 98.5 x n
          (95.16 – 98.5) x n = 80.85 – 95.16
          3.34 n = 14.31
          n = 14.31 / 3.34 = 4.284.

          4-and-a-quarter normal years to every 1 good year.

      You can go further, with this as a basis, and make the good years better or worse so that you end up with a whole number of years.

          95.16 x (5 +1) = g + 5 x 98.5
          g = 95.16 x 6 – 98.5 x 5
          g = 570.96 – 492.5 = 78.46.

      That’s a six-year cycle with one good year averaging 78.46% of productivity sustaining the village and five typical years in which 98.5% of productivity is needed for the purpose.
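Both of the little algebra exercises above generalize neatly. This sketch solves the same two equations for any long-term average (L), exceptional-year value (E), and typical-year value (T) – the single-letter symbols are my shorthand, not the article’s.

```python
# Generalizing the two solutions above. L = long-term average,
# E = exceptional-year value, T = typical-year value.

def years_per_good_year(L, E, T):
    """Solve L * (n + 1) = E + n * T for n."""
    return (E - L) / (L - T)

def good_year_value(L, T, cycle):
    """Solve L * cycle = g + (cycle - 1) * T for g."""
    return L * cycle - T * (cycle - 1)

print(round(years_per_good_year(95.16, 80.85, 98.5), 3))  # 4.284
print(round(good_year_value(95.16, 98.5, 6), 2))          # 78.46
```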

      I grew up on the land, and I can tell you that an industry is thriving if one year out of 10 is really good; an industry is marking time if one year out of 20 is good, and in trouble if one year in 25 or less is really profitable. One year in six is a boom.

      So to close out this sidebar, let’s look at what those numbers equate to in overall economic productivity for the rural population that depend on them:

          Boom: (1 x 78.46 + 5 x 98.5) / 6
              = (78.46 + 492.5) / 6
              = 570.96 / 6
              = 95.16%
              (we already knew this but it’s included for comparison)

          Thriving: (1 x 78.46 + 9 x 98.5) / 10
              = (78.46 + 886.5) / 10
              = 964.96 / 10
              = 96.496

          Stable, Marking Time: (1 x 78.46 + 19 x 98.5) / 20
              = (78.46 + 1871.5) / 20
              = 1949.96 / 20
              = 97.498

          In trouble / in economic decline: (1 x 78.46 + 24 x 98.5) / 25
              = (78.46 + 2364) / 25
              = 2442.46 / 25
              = 97.6984

      Look at the differences, and how thin the lines are between growth and stagnation.

          Stable to In Decline: 0.2004% change.
          Stable to Thriving: 1.002% change.
          Thriving to Booming: 1.336% change.
          Booming to In Decline: 2.5384% change.

      The whole boom-bust cycle – and it can be a cyclic phenomenon – is contained within 2.54% difference in economic activity.

      An aside within an aside shows why:

          Boom: 95.16% = 0.9516;
          0.9516 ^ 6 = 0.74255;
          so 25.74% productivity goes into growth.

          Thriving: 96.496% = 0.96496;
          0.96496 ^ 6 = 0.8073;
          so 19.27% productivity goes into growth over the same six-year period.

          Stable: 97.498% = 0.97498;
          0.97498 ^ 6 = 0.859;
          14.1% of productivity goes into growth over the same six-year period.

          Declining: 97.6984% = 0.976984;
          0.976984 ^ 6 = 0.8696;
          13.04% of productivity goes into growth.

      Every homeowner sweats a 0.25% change in interest rates because they compound, snowballing into huge differences. This is exactly the same thing.
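If you want to verify the compounding yourself, this sketch reproduces both the cycle averages and the six-year growth shares from the figures above.

```python
# Reproducing the scenario averages and six-year growth shares.

def cycle_average(good, typical, cycle):
    """One good year plus (cycle - 1) typical years, averaged."""
    return (good + (cycle - 1) * typical) / cycle

def growth_share(avg_pct, years=6):
    """Percent of production left for growth after compounding."""
    return 100 * (1 - (avg_pct / 100) ** years)

for name, cycle in [("Boom", 6), ("Thriving", 10),
                    ("Stable", 20), ("Declining", 25)]:
    avg = cycle_average(78.46, 98.5, cycle)
    print(name, round(avg, 4), round(growth_share(avg), 2))
# Boom 95.16 25.74
# Thriving 96.496 19.27
# Stable 97.498 14.1
# Declining 97.6984 13.04
```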

    5.8.1.4 The Generic Village

The generic village is perpetually dancing on a knife-edge, but the margins are so small that it’s trivially easy to overcome a bad year with a better one. Even a boom year doesn’t incite a lot of growth – but a lot of factors pulling together over a very long time can.

Some villages won’t manage to stay off that slippery slope for long enough and will decline into Hamlets, but find stability at this smaller size. Given time, disused buildings will be torn down and ‘robbed’ of any useful construction material, because that’s close to free, and that alone can make enough of a difference economically. With the land reclaimed, after a while you could never tell that it was once a village.

    Some won’t be able to arrest their decline – whatever led to their establishment in the first place either isn’t profitable enough, or too much of the profits are being taken in fees, tithes, greed, and taxes. They decline into Thorpes.

    In some cases, communities exist for a single purpose; they never grew large enough to even have permanent structures. They are strictly temporary in nature (though one may persist for dozens of years or more); they are forever categorized as Mining or Logging Camps.

    Other villages have more factors pushing them to growth, and once they reach a certain size, they can organize and be recognized as a town. And some towns become cities, and some cities become a great metropolis.

    With each change of scale, the services on offer to the townsfolk, and the services on offer to the traveler passing through, increase.

The fewer such services there are, the more general and generic they have to become, just to earn enough to stay in operation.

    The general view of a generic village is that most services exist purely for the benefit of the locals, but a small number of operations will offer services aimed at a temporary target market, the traveler. These services are often more profitable but less reliable in terms of income, more vulnerable to changes in markets. They don’t tend to be set up by existing residents; instead, they are founded by a traveler who settles down and joins a community because they see an economic opportunity.

That means that the number of such services on offer is very strongly tied to the growth of the village, to the overall economic situation of the Kingdom as a whole, and to that of the local Region of which this village is a part.

    Here’s another way to look at it: The reason so much of the village’s economic potential goes into maintaining the village is because of all those tithes and taxes and so on. Some of those will be based on the land in and around the village; some on the productivity of that land; and some of it on the size and economic activity of the village. The rest provides what the village needs to sustain its population and keep everything going. There’s not a lot left – but any addition to the bottom line that isn’t eroded away by those demands makes the village and the region more profitable, creating more opportunities for sustained growth. Again, there is a snowball effect.

    Some villages – and this is a social thing – don’t want the headaches and complications of growth; they like things just the way they are. They will have local rules and regulations designed to limit growth by making growth-producing business opportunities less attractive or compelling. Others desperately want growth, and will try to make themselves more attractive to operations that encourage it.

    That divides villages into two main categories and a number of subcategories.

    Main Category: Villages that encourage growth
         Subcategory: Villages that are growing
         Subcategory: Villages that are not growing
         Subcategory: Villages that are being left behind, and declining.
    Ratios: 40:40:20, respectively.

    Main Category: Villages that are discouraging growth despite the risk of decline
         Subcategory: Villages that are growing and can only slow that growth
         Subcategory: Villages that have achieved stability
     Subcategory: Villages that have declined, or are declining.
    Ratios: 20:40:40, respectively.
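If you want to roll up a random village’s disposition, the ratios above drop straight into a weighted pick. The 50/50 split between the two main categories is my assumption – the article doesn’t state one – but the 40:40:20 and 20:40:40 subcategory ratios are as given.

```python
import random

# Random village-disposition picker. The even split between the two
# main categories is an assumption; the subcategory weights are from
# the ratios listed above.

def village_category(rng=random):
    if rng.random() < 0.5:  # assumed 50/50 between main categories
        return "encourages growth", rng.choices(
            ["growing", "not growing", "declining"], [40, 40, 20])[0]
    return "discourages growth", rng.choices(
        ["growing (slowed)", "stable", "declining"], [20, 40, 40])[0]

print(village_category())
```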

    5.8.1.5 Blended Models

    In general, the rule is one zone, one model. In fact, as a general rule, your goal should be one Kingdom, one model – that way, if you choose “England” as your model, your capital city will resemble London in size and characteristics, and not, say, Imperial Rome.

    But, if you can think of a compelling enough reason, there’s no reason not to blend models. There are lots of ways to do this.

    The simplest is to designate one model for part of a zone, and another to apply to the rest.

Example: If your capital city were much older than the rest of the Kingdom, you might decide that for IT ALONE, the Imperial model might be more appropriate, while the rest of the Kingdom is England-like. Or you might decide that because of its size, it has sucked up resources that would otherwise grow surrounding communities more strongly, and declare a three-model structure: Imperial Capital, France for all zones except Zone 1, and England for the rest of Zone 1.

    Example: A zone contains both swamp and typical agricultural land. You decide that those parts that are Swamp are German or Frontier in nature, while the rest are whatever else you are using.

    An alternative approach to the problem that works in the case of the latter example is to actually average the two models’ characteristics and apply the result either to just the swamp areas, or to the zone overall.

    When you get right down to it, the models are recommendations and guidelines, describing a particular demographic pattern seen in Earth’s history. There’s absolutely nothing to prevent you from inventing a unique one for a Kingdom in your world – except for it being a lot of work, that is.

    5.8.1.6 Zomania – An Example

    I don’t really think that a fully-worked example is actually necessary at this point, but I need to have one up-to-date and ready to go for later in the article. So it’s time for another deep-dive into the Kingdom of Zomania.

    5.8.1.6.1 Zone Selection

    I’ll start by picking a couple of Zones that look interesting, and distinctive compared to each other.

Zone 7 is bounded by a major road, but doesn’t actually contain that road; it DOES have capacity for a lot of fishing, though. And I note that there are cliffs in the zones to either side of it, so they WON’T support fishing – in fact, those cliffs appear to denote the limits of the zone. Zone 7 adds up to 167.8 units in area, and features 26 units of pristine beaches.

Zone 30 has an international border, a major road, lots of forest, and foothills becoming mountainous. It’s larger than Zone 7, at 251.45 units.

    Because I haven’t detailed these areas at all, the place that I have to start is back in 5.7.1.13. But first…

5.8.1.6.1.1 Sidebar: Anatomy Of A Fishing Locus

    I was going to bring this up a little later, but realized that readers need to know it, now.

    Coastal Loci are a little different to the normal. To explain those differences, I threw together the diagram below.

    1: is a coast of some kind. It might not be an actual beach, but it’s flat and meets the water.

    2: It’s normal, especially if there’s a beach, for the ends to be ‘capped’ with some sort of headland. This is often rocky in nature. This is the natural location for expensive seaside homes and lighthouses.

    3. Fishing villages.

    4. Water. It could be a lake, or the sea, or even a river if it’s wide enough.

    5. Non-coastal land, usually suitable for agriculture.

    6. A fishing village’s locus is compressed along the line of the coast and bulging out into the water. This territory produces a great deal more food than the equivalent land area – anywhere from 2-5 times as much. Some cultures can go beyond coastal fishing, doubling this area – though what’s further out than shown is generally considered open to anyone from this Kingdom. Beyond that, some cultures can Deep-Sea fish (if this is the sea), which quadruples the effective area again. If you’re keeping track, that’s 2-5 x 2 x 4 = 16-40 times the land area equivalent. The axis of the locus is always as perpendicular to the coast as possible.

    7. The bottoms of the lobes are lopped off…

8. And the land equivalent is then found by ‘squaring up’ the loci…

    9. …which means that these are the real boundaries of the locus. The area stays roughly the same, though.

    The key point is this: you don’t have to choose “Coastal Mercantile” to simulate living on the coast and fishing for food. There are mechanisms already built into the system for handling that – it’s all done with Terrain and a more generous interpretation of “Arable Land”.

    Save the “Coastal Mercantile” Model for islands and coastal cultures whose primary endeavor is water-based trade.
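The multiplier arithmetic from point 6 is easy to mis-remember at the table, so here it is as a small function. The 2-5x richness range, the x2 for extended coastal fishing, and the x4 for deep-sea fishing are all from the sidebar; the function itself is just a convenience of mine.

```python
# Land-equivalent food production for a coastal locus, per point 6:
# base richness 2-5x, x2 if extended coastal fishing, x4 more if
# deep-sea fishing.

def coastal_multiplier(richness, extended=False, deep_sea=False):
    m = richness
    if extended:
        m *= 2
    if deep_sea:
        m *= 4
    return m

print(coastal_multiplier(2, True, True))  # 16
print(coastal_multiplier(5, True, True))  # 40
```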

    Zone 7, then, should have the same Model as all the other farmland within the Kingdom. I think France is the right model to choose.

Zone 30 is a slightly more complicated story. For a start, don’t worry about the road – like coastal villages, that gets taken care of later. For that matter, so are the heavy forestation and the local geography – hills and mountains. But this is an area under siege from the wilderness, as explained in an earlier post, which changes the fundamental parameters of how people live, and that should be reflected in a change of model. In this case, I think the Germany / Holy Roman Empire model of lots of small, walled communities is the most appropriate.

But this does raise the question of where the change in profile takes place. I have three real options: the Zone in its entirety may be HRE-derived; or the HRE model might apply only to the forests; or it might take hold in the hills and mountains only.

My real inclination would be to choose one of the first two options, but in this case I’m going to choose door number 3, simply because it will contrast the HRE model with the base French version of the hills and forests. In fact, for that specific purpose, I’m going to set the boundary midway through the range of hills:

    5.8.1.6.1.2 Sidebar: Elevation Classification

    Which means, I guess, that I should talk about how such things are classified in this system. There are eight elevation categories, but the categories themselves are based on the differences between peak elevation and base elevation.

    I tried, but couldn’t quite get this to be fully legible at CM-scale. Click on the image above to open a larger copy in a new tab.

To get the typical feature size – the horizontal diameter of hills or mountains – divide 5 times the midpoint of the Average Peak Elevation range by the midpoint of the Local Relief range, and multiply by the elevation category number (squared, for mountains); or use twice the previous category’s value, whichever is higher. Note that the latter is usually the dominant calculation! The results are also shown below. Actual cases can be 2-3 times this value – or half of it.

    1. Undulating Hillocks – Average Peak Elevation 10-150m, Local Relief <50m; Features 16m (see below).
    2. Gentle Hills – Average Peak Elevation 150-300m, Local Relief 50-150m; Features 32m.
    3. Rolling Hills – Average Peak Elevation 300-600m, Local Relief 150-300m; Features 64m

         -> □ Zone 30 Treeline from the start of this category
         -> □ Normal Treeline is midway through the range

    4. Big Hills – Average Peak Elevation 600-1000m, Local Relief 300-600m; Features 128m
    5. Shallow Mountains – Average Peak Elevation 1000-2500m, Local Relief 600-1500m; Features 417m
    6. Medium Mountains – Average Peak Elevation 2500-4500m, Local Relief 1000-3000m; Features 834 m
    7. Steep Mountains – Average Peak Elevation 4500-7000m, Local Relief 3000-5000m; Features 1668m
    8. Impassable Mountains, permanent snow-caps regardless of climate – Average Peak Elevation 7000m+, Local Relief 5000m+; Features 3336m.
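For quick reference at the keyboard, the list above collapses into a simple lookup by Local Relief. Note that the listed relief bands for Shallow and Medium Mountains overlap (600-1500m vs 1000-3000m); this sketch assumes each band’s upper bound is the cutoff, which is my call, not the article’s.

```python
# The elevation categories as a lookup keyed on Local Relief. Band
# boundaries and feature sizes are from the list above; where bands
# overlap, the upper bound of each band is assumed to be the cutoff.

ELEVATION = [  # (relief upper bound in m, category, feature size in m)
    (50,   "Undulating Hillocks", 16),
    (150,  "Gentle Hills",        32),
    (300,  "Rolling Hills",       64),
    (600,  "Big Hills",           128),
    (1500, "Shallow Mountains",   417),
    (3000, "Medium Mountains",    834),
    (5000, "Steep Mountains",     1668),
]

def classify(local_relief_m):
    """Return (category, typical feature size) for a local relief."""
    for limit, name, feature in ELEVATION:
        if local_relief_m < limit:
            return name, feature
    return "Impassable Mountains", 3336

print(classify(400))  # ('Big Hills', 128)
```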

Undulating Hillocks (also known as Rolling Hillocks or Rolling Foothills) are basically a blend of scraped-away geography and boulders deposited by glaciers. If the boulders have any sort of faults (and most do), they will quickly become more flat than round and start to tumble within the glacier. When they come to rest, several will be stacked, one on top of another, generally in long waves. There will be gaps in between, which get filled with earth and mud and weathered rock over time – unless the rocks are less resistant to weathering than soil, in which case the rocks get slowly eaten away. In a few tens of thousands of years, you end up with undulating hillocks, or their big brothers. The flatter the terrain, the more opportunity there is for floodwaters to cover everything with topsoil, smoothing out the bumps. The diagram above shows how this ‘stacking and filling’ can produce structures many times the size of individual hillocks.

    A very similar phenomenon – wind instead of glaciers, and sand instead of boulders – creates sandy dunes in deserts prone to that sort of thing. Over time, great corridors get carved out before and after each dune, generally at right angles to the prevailing winds. It can help you picture it if you think of the wind “rolling” across the dunes – when they come to a spot where the sand is a little less held together, it starts to carve out a trench, and before long, you have wave-shaped sand-dunes.

    5.8.1.6.3 Area Adjustments – from 5.7.1.13

    Zone 7 has a measured area of 167.8 units, but that needs to be adjusted for terrain. Instead of the slow way, estimating relative proportions, let’s use the faster homogenized approach:

    Hostile Factors:
         Coast 1.1 + Farmland 0.9 + Scrub 1.1 = 3.1; average 1.03333.
         Coast +0.25 + Beaches -0.05 + Civilized -0.1 = +0.1
         Towns -0.1
         Net total: 1.03333
    167.8 x 1.0333 = 173.4 units^2.

    Benign Factors:
         Town 0.1 + Coast 0.15 + Beaches 0.15 + Civilized 0.2
         Subtotal +0.6
         Square Root = 0.7746
    173.4 x 0.7746 = 134.3 units^2.

    Zone 30 is… messier. Base Area 251.45 units^2.

    Hostile Factors:
         Mining 1.5 +
         Average (Mountains 1.4 + Forest 1.25 + Hills 1.2 = 3.85) = 1.28
         Town -0.1 + Foreign Town 0.1 + River 0.2 + Caves 0.05 + Ruins 0.4 + “Wild” 0.1 = +0.75
         Net total = 1.5 + 1.28 + 0.75 = 3.53
    251.45 x 3.53 = 887.6 units^2.

    Benign Factors:
         Town 0.1 + Foreign Town -0.1 + River +0.1 + Caves 0.05 + Ruin 0.4 + Major Road 0.2
         Subtotal 0.75
         “Wild” = average subtotal with 1 = 0.875
         Sqr Root = 0.935
    887.6 x 0.935 = 829.9 units^2.
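The Zone 7 calculation above can be wrapped in a small helper: average the terrain factors, add the net of the other hostile modifiers, then scale by the square root of the benign subtotal. The structure mirrors the worked example; the function shape is my own packaging of it.

```python
# The homogenized area-adjustment steps as a helper, mirroring the
# Zone 7 worked example above.

def adjusted_area(base, terrain_factors, hostile_adjust, benign_sum):
    hostile = sum(terrain_factors) / len(terrain_factors) + hostile_adjust
    return base * hostile * benign_sum ** 0.5

# Zone 7: terrain avg 1.0333, hostile adjustments cancel out
# (+0.1 and Towns -0.1), benign subtotal 0.6:
print(round(adjusted_area(167.8, [1.1, 0.9, 1.1], 0.0, 0.6), 1))  # 134.3
```

Zone 30 fits the same shape if Mining 1.5 plus the +0.75 of modifiers are passed as hostile_adjust = 2.25 against terrain [1.4, 1.25, 1.2] and a benign subtotal of 0.875; the answer differs from the text’s 829.9 by about a unit only because the article rounds its intermediate values.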

To me, this looks very Greek – but it’s actually Gordes, in France, which the photographer describes as a village. One glance is enough to show that it’s bigger than the town depicted previously. Image by Neil Gibbons from Pixabay

    5.8.1.6.4 Defensive Pattern – from 5.7.1.14

    Zone 7 is pretty secure, the biggest threat being local insurrection or maybe pirate raids. A 4-lobe structure of 2½,5 looks about right.

    When I measure out the area protected by a single fort and 4 satellites, I get 47.2 days^2. That takes into account overlapping areas where this one structure shares the burden 50% with a neighboring structure, and the additional areas that have to be protected by cavalry units.

That means that in Zone 7, there should be S x 134.3 / 47.2 = 2.845 x S of them, where S depends on how large a “unit” on the map is, measured in days’ march for infantry.

S is going to be the same for all zones. I’ve avoided making that decision for as long as I could – the question is, how large is Zomania?

    5.8.1.6.5 Sidebar: The Size of Zomania, revisited

    16,000 square miles – at least, that’s the total that I threw out in 5.7.1.3.

    That’s about the same size as the Netherlands.

    It’s a lot smaller than the Zomania that I’m picturing in my head when I look at the map. It IS the right size if the units shown are miles. But if they aren’t?

    There are two reasons for regularly offering up Zomania as an example. The first is to provide a consistent foundation and demonstration of the principles discussed coming together into a cohesive whole. And the second is for me to check on the validity of the logic and techniques that I’ve described.

    That feeling of ‘wrong’ is keeping my subconscious radar from achieving purpose #2. And the Zomania being described being too small – the cause of that ‘wrong’ feeling – means that it isn’t going to adequately perform function #1, either.

    There can be only one solution – Zomania has to grow, has to be scaled up. I want Zone 7 to be comparable to the size of the Netherlands, not the entire Kingdom, which should be comparable to France, or Germany, or England, or Spain.

    A factor of 10? Where would 160,000 sqr miles place Zomania amongst the European Nations that I’ve named?

    UK: 94,356. Germany: 138,063. Spain: 192,466. France: 233,032. So 160,000 would be smack-dab in the middle, and absolutely perfect for both purposes.

    So Zomania is now 160,000 square miles, and the ‘units’ on all the maps are 10 miles each.

    It wasn’t easy sorting this out – it’s been a road-block in my thinking for a couple of days now – triggered by results that seemed to show Zone 7 to be about 0.08 defensive structures in size.

    And that is due to a second scaling problem that was getting in the way of my thinking:

    How much is that in days’ marching?

    In 5.7.1.14.3, I offered up:

        If d=10 miles (low), that’s 103,923 square miles.
        If d=20 miles (still low), that’s 415,692 square miles.
        If d=25 miles (reasonable), that’s 649,519 square miles.
        If d=30 miles (doable), 935,307 square miles.
        If d=40 miles (close to max), 1.66 million square miles.
        If d=50 miles (max), 2.6 million square miles.

    But that was in reference to a theoretical 6 x 4, 12 + 12 pattern. Nevertheless, the scales are there. And they are way bigger than I thought they would be, and way too big to be useful as examples. Yet the logic that led to them seemed air-tight. Clearly, an assumption had been made that wasn’t correct, but this problem was getting in the way of solving the first one.

    Once I had separated the two, answers started falling into place. The numbers shown above are how far infantry can march in 24 solid hours, such as they might do in a dire emergency. But defensive structures would not be built and arranged on that basis.

    If infantry march for 8 hours, that leaves just about enough daylight to break camp in the morning (after being fed) and set up camp in the evening (digging latrines and getting fed). That’s the scale that would be used in establishing fortifications, not the epic scale listed. In effect, then, those areas of protection are nine times the size they should be.

    So, let’s redo them on that basis:

        If d=10 miles (low), that’s 11,547 square miles.
        If d=20 miles (still low), that’s 46,188 square miles.
        If d=25 miles (reasonable), that’s 72,169 square miles.
        If d=30 miles (doable), 103,923 square miles.
        If d=40 miles (close to max), 184,444 square miles.
        If d=50 miles (max), 288,889 square miles.

    And those are still misleading, because mentally, I’m thinking of this as the area protected by the central stronghold, and ignoring the satellites. To get the area per fortification, we should divide by the total number of fortifications in the pattern – in the case of the numbers cited, that’s 6×4+12=36.

        If d=10 miles (low), that’s 320.75 square miles.
        If d=20 miles (still low), that’s 1,283 square miles.
        If d=25 miles (reasonable), that’s 2,004.7 square miles.
        If d=30 miles (doable), 2,886.75 square miles.
        If d=40 miles (close to max), 5,123.4 square miles.
        If d=50 miles (max), 8,024.7 square miles.
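All three tables can be regenerated from one formula. The 600√3 constant below is my own inference from the d=10 row (103,923 = 600√3 × 10²), not something stated explicitly, so treat this as a sketch:

```python
from math import sqrt

def full_day_area(d):
    # Area of the theoretical 6 x 4, 12 + 12 pattern for a full
    # 24-hour march of d miles; 600*sqrt(3) is inferred from the
    # quoted figures, not stated in the source working.
    return 600 * sqrt(3) * d ** 2

def eight_hour_area(d):
    # An 8-hour working day covers 1/3 the distance, so 1/9 the area.
    return full_day_area(d) / 9

def per_fortification(d):
    # 36 fortifications share the pattern's protected area.
    return eight_hour_area(d) / 36

for d in (10, 20, 25, 30, 40, 50):
    print(d, round(eight_hour_area(d)), round(per_fortification(d), 1))
```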

    Reasonable = 2004.7 square miles, or roughly equal to a 44.8 x 44.8 mile area. For a really tightly packed defensive structure of the type being discussed, that’s entirely reasonable – and it fits the image in my head.

    In my error-strewn calculation, my logic went as follows:

        ▪ In the inner Kingdom, I think that life is easy and lived fairly casually. That points to the lower end of the scale – 10 miles a day or 20 miles a day.

        ▪ 10^2 = 100, so at 10 mi/day, 16,000 = 160 days march.
        ▪ 20^2 = 400, so at 20 mi/day, 16,000 = 40 days march.

        ▪ That’s a BIG difference. 40 is too quick, but 160 sounds a little too slow. Tell you what, let’s pick an intermediate value of convenience and work backwards.

        ▪ 100 days march to cover anywhere in 16000 square miles gives 160, and the square root of 160 is 12.65 miles per day.

    Now, that logic’s not bad. But it doesn’t factor in the ‘working day’ of the infantry march – it needs to be divided by 3. And it DOES factor in my psychological trend toward making the defensive areas smaller, because my instinct was telling me they were too large – but this is the wrong way to correct for that. So this number is getting consigned to the dustbin.

    After all, the ‘hostile’ and ‘benign’ factors are supposed to already take into account the threat level that these fortifications are supposed to address, and hence their relative density.

        ▪ So, let’s start with the “reasonable” 25 miles.
        ▪ Apply the ‘working day’ to get 8.333 miles.
        ▪ The measured area of the defensive structure is 47.2 ‘days march’^2.
        ▪ Each of which is 8.333^2= 69.444 miles^2 in area.
        ▪ So the defensive unit – stronghold and four satellites – covers 47.2 x 69.444 = 3277.8 sqr miles.
        ▪ Or 655.56 sqr miles each.
        ▪ Equivalent to a square 25.6 miles x 25.6 miles.
        ▪ Or a circle 14.4 miles in radius.
        ▪ Base Area 173.4 units^2 = 17340 square miles.
        ▪ Adjusted for threat level, 134.3 units^2 or 13430 square miles. In other words, defensive structures are further apart because there’s less threat than normal.
        ▪ 13430 / 3277.8 = 4.1 defensive structures, of 1 hub and 4 satellites each.
        ▪ So that’s 4 hubs and 16 satellites plus an extra half-satellite somewhere.
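The bullet chain above reduces to a few lines of arithmetic; constant names below are mine, and the 47.2 and 134.3 figures are the measured values already quoted.

```python
MILES_PER_DAY = 25 / 3     # the 'reasonable' 25 miles over an 8-hour working day
STRUCTURE_DAYS2 = 47.2     # measured area of hub + 4 satellites, days-march^2
UNIT_MILES = 10            # one map unit = 10 miles

structure_sq_miles = STRUCTURE_DAYS2 * MILES_PER_DAY ** 2
zone7_effective = 134.3 * UNIT_MILES ** 2   # threat-adjusted area, sq miles

structures = zone7_effective / structure_sq_miles
print(round(structure_sq_miles, 1))  # ≈ 3277.8 sq miles per structure
print(round(structures, 1))          # ≈ 4.1 structures of 5 fortifications
```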

    Those satellites could be anything from a watchtower to a small fort to a hut with a couple of men garrisoned inside, depending on the danger level and what the Kingdom is prepared to spend on securing the region. The stronghold in the heart of the configuration needs to be more substantial.

    Okay, so that’s Zone 7. Zone 30 is a whole different kettle of fish.

    I wanted to implement a 3-lobed configuration with more overlap than the four-lobed choice made for Zone 7. And it was turning out exactly the way I wanted it to; every hub was reinforced by three satellites, every satellite reinforced by three hubs. I had the diagrams 75% done and was gearing up to measure the protected area.

    Which is when the plan ran aground in the most spectacular way. There were areas where responsibility was shared two ways, and three ways, and four ways, and – at some points – six ways. It was going to take a LONG time to measure and calculate.

    If I were creating Zomania as an adventuring location for real, I would have carried on. If I lived in an ideal world, without deadlines (even the very soft ones now in place at Campaign Mastery) I would have continued. I still think that it would have provided a more enlightening example for readers, because I would be doing something a little bit different and having to explain the differences and their significance.

    But since neither of those circumstances is the case, and this post is already several days late due to the complications explained earlier, I am going to have to compromise on principle and re-use the configuration established for Zone 7.

    Well, at least that will show the impact that the greater threat level will impose on the structure, but it leaves the outer reaches of the Kingdom less well-protected than they should be. If and when I re-edit this series into an e-book, I might well spend the extra time and replace the balance of this section – or even work the problem both ways for readers’ edification.

    REMINDER TO SELF – 3 LOBES, 1 DAY EXAMPLE

    But, in the meantime…

    Zone 30.
        ▪ Actual area 251.45 square units = 25,145 square miles.
        ▪ Adjusted for threat level = effective area 829.9 square units = 82,990 sqr miles. (in other words, the defensive structures you would expect to protect 82,990 square miles are so closely packed that they actually protect only 25,145 square miles, a 3.3-to-1 ratio.)
        ▪ Defensive Structure = 3277.8 square miles (from Zone 7).
        ▪ 82,990 / 3277.8 = 25.32 defensive structures of 5 fortifications each, or 126.6 fortifications in total. Zone 7 is 69% of the area and had a total of 20.5 fortifications, in comparison.

    What does 0.32 defensive structures represent? Well, if I take the basic structure and ‘lop off’ two of the satellites, then it’s 3/5 of a protected area minus the overlaps. By eye, those overlaps look to be a bit more than 2 x 1/4 of one of those 1/5ths, and since 1/4 of 1/5 is 1/20th, that’s roughly 0.6-0.1 = 0.5.

    If I take away a third satellite, the structure is down to 2/5 protected area minus overlaps, and those overlaps are now 1 x 1/20th, so 0.4-0.05=0.35. So, somewhere on the border, there’s a spot with one hub and one satellite.

    One more point: 3.3 to 1. What does THAT really mean? Well, the defensive structure used has satellites 2.5 days march from the hub. But everything is more compressed, by that 3.3:1 ratio, so the satellites in Zone 30 are actually 2.5 / 3.3 = 0.76 day’s march from the hub. The area each commands is still the same, but there’s a lot more overlap and capacity to reinforce one another.

    Another way to look at it is that there are so many fortifications that each only has to protect a smaller area. 3277.8 sqr miles / 3.3 = 993 sqr miles.
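The same sketch, re-run for Zone 30 with the Zone 7 structure (variable names are, again, my own):

```python
STRUCTURE_SQ_MILES = 3277.8   # hub + 4 satellites, carried over from Zone 7

actual = 251.45 * 100         # Zone 30's actual area, sq miles
effective = 829.9 * 100       # threat-adjusted area, sq miles

structures = effective / STRUCTURE_SQ_MILES
compression = effective / actual        # how tightly packed, vs normal
satellite_days = 2.5 / compression      # hub-to-satellite, days' march

print(round(structures, 2))      # ≈ 25.32 structures
print(round(compression, 1))     # ≈ 3.3 : 1
print(round(satellite_days, 2))  # ≈ 0.76 days' march
```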

    5.8.1.6.6 Sidebar: Changes Of Defensive Structure

    The point that I’m going to make in this sidebar won’t make a lot of sense unless you’re paying close attention, because the Zone 30 example has the same defensive structure as Zone 7 – it’s just a lot more compressed. But imagine for a moment that there was a completely different defensive structure in Zone 30.

    What does that imply for Zone 11, which lies in between the two?

    You might think that it should be some sort of half-way compromise or blend between the two, but you would be wrong to do so.

    If you look back at the overall zone map for Zomania (reproduced below)

    …and recall that the zones are numbered in the order they were established, a pattern emerges. Zone 1 first, then Zone 2, then Zones 3-4-5-6-7, then zones 8-9-10-11-12, and so on. Until Zones 29-32 were established, Zone 11 was the frontier. It would likely have the same defensive structure as Zone 30. Rather than fewer fortifications, it would have them at the same density as Zone 30 – but the manpower in each would be reduced.

    If you know how to interpret it, the entire history of the Kingdom should be laid bare by the changes in its fortifications and defenses.

    But that’s not as important as the verisimilitude that you create by taking care of little details like this and keeping them consistent. The specifics might never be overtly referenced – but they still add a little to the credibility of the creation.

    5.8.1.6.7 Inns in Zone 7 – from 5.7.3

    Zone 7 is noteworthy for NOT having a major road – that’s on the Zone 11 / Zone 6 side of the border. Some of the inns along that road, however, may well be over that border – it’s a reasonable expectation that half of them would count. But only the half that is located where the border runs next to the road – there’s a section at the start and another at the end where the border shifts away.

    But there’s a second factor – what is the sea, if not another road to travel down? And Zone 7 has quite a lot of beach. The reality, of course, is that these are holiday destinations, and places for health recovery – but it’s a convenient way of placing them.

    So that’s two separate calculations. The ‘road that is a road’ first: There are actually two sections. The longer one runs through Zones 6 and 11, as already noted; it measures out at 15 units long, or 150 miles.

    The second lies in Zone 15, and it’s got a noticeable bend in it. If I straighten that out and measure it, I get 5 units or 50 miles.

    Conditions:
        Road condition, terrain, good weather = 3 x 2.
        Load = 1 x 1/2.
        Everything else is a zero.
        Total: 6.5.
    6.5 / 16 x 3.1 = 1.26 miles per hour.
    1.26 mph x 9 hrs = 11.34 miles.
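As a quick sketch of that travel arithmetic (the divide-by-16 and the 3.1 mph base are taken from the working shown; the function names are assumptions of mine):

```python
def travel_speed_mph(condition_total):
    # Condition total (out of 16) scaled against a 3.1 mph base pace.
    return condition_total / 16 * 3.1

def daily_miles(condition_total, hours=9):
    # Distance covered in a 9-hour travelling day.
    return travel_speed_mph(condition_total) * hours

print(round(travel_speed_mph(6.5), 2))  # ≈ 1.26 mph
print(round(daily_miles(6.5), 2))       # ≈ 11.33 miles (11.34 above, from the pre-rounded speed)
```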

    Here’s the rub: we don’t know exactly where the hubs and satellites are in Zone 7, only how many of them there are to emplace. But it seems a sure bet that those areas where the road and border part ways do so because there’s a fortification there that answers to Zone 6 or Zone 11, respectively. And that means that we can treat the entire length of the road as being between two end points.

    We know from the defensive structure diagram that the base distance from Satellite to Hub is 2 1/2 days march, and that there’s a scaling of x 1.0333 (hostile) x 0.7746 (benign) = x 0.8 – and that benign factors space fortifications further apart while hostile ones bunch them together, so this factor divides, rather than multiplies, when calculating distances. We know that 8.333 miles has been defined as a “day’s march”.

    If we put all that together, we get 2.5 x 8.333 / 0.8 = 26 miles from satellite to hub.

    Armies like their fortifications on roads; it makes it faster to get anywhere. Traders like their trade routes to flow from fortification to fortification; it protects them from bandits. The general public, ditto. If a road doesn’t go to the fortification, people will create a new road and leave the official one to rot. So it can be assumed that the line of fortifications will follow the road, and be spaced every 26 miles along it, alternating between hub and satellite.

        150 miles / 26 = 5.77 of them.
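Or, putting the whole spacing calculation in one place (a sketch using the figures already established; the names are mine):

```python
DAYS_MARCH_MILES = 25 / 3          # the 8.333-mile working day's march
BASE_SPACING_DAYS = 2.5            # satellite-to-hub, from the diagram
THREAT_SCALE = 1.0333 * 0.7746     # hostile x benign ≈ 0.8, divided out

spacing_miles = BASE_SPACING_DAYS * DAYS_MARCH_MILES / THREAT_SCALE
forts_on_road = 150 / round(spacing_miles)

print(round(spacing_miles))     # ≈ 26 miles, satellite to hub
print(round(forts_on_road, 2))  # ≈ 5.77 fortifications along the road
```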

    It’s an imperfect world; that 0.77 means that you have one of three situations, as shown below:

    The first figure shows a hub at the distant end of the road. The second shows a hub at the end of the road closest to the capital. And the third shows the hubs not quite lining up with either position.

    But those aren’t the actual ends of the road – this is just the section that parallels the border of Zone 7, or vice-versa. So the last one is probably the most realistic.

    Now, let’s place Inns – one every 11.34 miles. But we have to do them from both ends – one showing 1 day’s travel for ordinary people headed out, and one showing them heading in. Just because I’m Australian, and we drive on the left, I’ll put outbound on the south side and inbound on the north.

    Isn’t that annoying? They don’t quite line up – to my complete lack of surprise. Look at the second in-bound inn – it’s about 20% of a day short of getting to the satellite, and that puts it so close that it’s not worth stopping there; you would keep going.

    Well, you can’t make a day longer, but you can make it shorter. And that makes sense, because these are very much average distances.

    I’ve shortened the days for the ordinary traveler – including merchants – just a little, so that every 5th inbound Inn is located at a Stronghold, and every 5th outbound inn is located at a satellite. Every half-day’s travel now brings you to somewhere to stop for a meal or for the night.

    It’s entirely possible that not all of these Inns will actually be in service, it must be added. Maybe only half of them are actually operating. Maybe it’s only 1/3. But, given its position within the Kingdom, there’s probably enough demand to support most of these, so let’s do a simple little table:

        1 – inn functional
        2 – inn functional
        3 – inn functional, but 1/4 day closer
        4 – inn functional, but 3/4 day farther away
        5 – inn not functional
        6 – inn not functional, and neither is the next one.
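For anyone who wants to roll the same table in software, here's one possible encoding; the result strings and the carry-forward handling of a 6 are my own reading of the entries above.

```python
import random

TABLE = {
    1: "functional",
    2: "functional",
    3: "functional, 1/4 day closer",
    4: "functional, 3/4 day farther away",
    5: "not functional",
    6: "not functional (and neither is the next)",
}

def roll_inns(count, rng=random):
    """Roll the inn table once per inn, carrying a 6 over to the next inn."""
    results = []
    skip_next = False
    for _ in range(count):
        if skip_next:
            results.append("not functional (carried over from a 6)")
            skip_next = False
            continue
        roll = rng.randint(1, 6)
        if roll == 6:
            skip_next = True
        results.append(TABLE[roll])
    return results

for inn, status in enumerate(roll_inns(13), start=1):
    print(inn, status)
```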

    Applying this table produces the following (for some reason, my die kept rolling 3s and 6s):

    Even here, in this ‘safe’ part of the Kingdom, travelers will be forced to camp by the roadside.

As the Table Of Contents makes clear, there’s still a lot to come in this part. It will continue in part 5c!


All Spiders (And Snakes) Are Not Alike


Snakes & Spiders in RPGs tend to one-size-fits-all construction. Use reality to make them exceptional!

Image by Alan Couch, CC BY 2.0, via Wikimedia Commons

I got curious this morning.

Australia is well-known around the world for the number and variety of deadly fauna we live alongside.

The likelihood of your home being robbed drops by a ratio of between 100-1000 times if you live above the ground floor, to the point that if you are not away for an extended period (more than a day) and have no neighbors on the same level, it’s perfectly safe to leave your front door unlocked for a few hours – while you go shopping, for example (doing so freaks a lot of urban dwellers out, though – it’s far more comfortable for those coming from relative security like a small country town).

So I suddenly wondered, “How much do Sydney Funnelweb Spiders like to climb? What are the rates of reported bites taking place on any above-ground level higher than the ground floor?”

I wasn’t able to answer the second because it’s not a statistic that is routinely recorded, but was able to get an answer to the first, based on the behavioral traits of the spiders in question. And that answer got me to thinking about Spiders and Snakes in RPGs.

Funnelweb Spiders

These are, perhaps, the most deadly spider in Australia. Nevertheless, there have been few if any fatal attacks since the anti-venom was developed.

Sydney Funnelweb Spiders (Atrax robustus) are generally terrestrial (ground-dwelling), but they are capable of climbing under specific circumstances.

Sydney Funnelweb Spiders are primarily known for building their silk-lined tubular burrows in sheltered, moist, cool habitats, usually under logs, rocks, or in suburban gardens. The females are especially sedentary and rarely leave their burrows.

The most common encounters occur with wandering males during the warmer months (especially November to April), particularly after rain, as they search for mates. This wandering behavior often leads them into backyards, garages, and houses, or they fall into swimming pools.

The species is overwhelmingly terrestrial (ground-dwelling). Their burrows are in the soil, under rocks, or in logs. The only ones that typically leave the burrow are the wandering males looking for a mate.

When males wander, they move across the ground and seek shelter at dawn. They are most often found entering homes by crawling under doors or sometimes through other ground-level openings.

They generally CANNOT climb smooth surfaces like clean glass, plastic, or very smooth painted walls due to a lack of specialized adhesive pads (like those found on many other spiders). This is common lore among experts.

They CAN climb textured or rough surfaces like rough brick, steps, or rough-barked trees, as their claws can find purchase. In fact, some related species, like the Northern Tree-dwelling Funnelweb (Hadronyche formidabilis), are known to live meters above the ground in tree bark.

So, while they prefer to stay at ground level, a Sydney Funnelweb Spider could potentially climb a textured wall or staircase to reach an above-ground level, but this is not their typical, preferred mode of movement or habitat.

By far the most likely source of an above-ground attack is a Spider being carried up on furniture or boxes being moved (carried up by a human) or making an accidental journey in a lift – by definition, unnoticed by the user of that lift.

Bio-security Barrier

Living on an above-ground level in an apartment building significantly reduces your risk of encounter.

You can treat living above the ground floor as a form of “bio-security” against Funnelwebs (and many other ground-dwelling risks) that is analogous to the drop in crime mentioned earlier.

Comparison: Huntsman Spiders

Huntsmen are climbers; they like to live high up on walls and on ceilings. Most varieties (maybe all) don’t build webs at all. They are incredibly fast and often very large (bigger than an open hand with the fingers splayed out as far as they will go). They are also adept at squeezing themselves through gaps that are much smaller than their bodies.

While most Australians don’t welcome the intrusion of a Huntsman into the home, it’s rarely a cause for panic. They are actually fairly shy creatures – just getting close to one and staring at it for a few minutes can be enough to get them to leave on their own, once you leave the immediate vicinity and stop looking at them. They treat this as having come across a predator that isn’t hungry enough to have them for lunch – a lucky escape, ‘now let’s get the hell out of here before it comes back!’

Huntsmen live on cockroaches, flies, and other far more annoying insects, so there are exceptions to that general rule. For the most part, in Australia, if you leave them alone, they will earn their keep.

But for the especially arachnophobic, that’s not an option, and there’s always the risk of a visitor freaking out, so it’s common practice to remove them gently and release them outside. Again, this is viewed as a predator ‘toying’ with them cruelly before letting them go – the last place they are likely to go is where they were removed from.

They have been known to scuttle inside cars and can even work their way through the door-seals of a closed door or a window that’s only opened a crack – 1/4 of an inch is more than enough. That’s why you’ll often see videos on the internet of spiders inside cars or on windscreens, and sometimes the braver souls will catch them, open the door, and release them. No Aussie questions the validity of these videos, they are far too plausible for that.

Huntsmen CAN climb smooth surfaces like glass, and can cling to a windscreen at highway speeds. They may not like the experience, though – I can’t attest to that, either way.

The largest one I’ve ever seen was the size of a dinner-plate. I think they can grow a little larger than that, but not much. But size alone makes them terrifying to some.

Snakes

The same is true of the most venomous snake varieties here, provided there is no access for them to get into the ceiling of the ground floor space.

Australia’s most medically significant snakes (like Eastern Brown Snakes or Tiger Snakes) are also strongly terrestrial. While they can climb surprisingly well, they are not naturally adapted to navigate the smooth, high, sheer walls and stairwells of a multi-story building.

Awareness of the ground-floor ceiling / roof void is key. If a snake gets into the space above the ground floor (by climbing a vine, tree, or rough surface to the roof-line and entering through a small gap), it is primarily a risk to the ground-floor residents. If you live on the first floor or higher, this risk is eliminated unless there is some opening in that crawlspace upwards that the snake is small enough to take advantage of – heating ducts or something, perhaps.

There is an evolutionary rationale for this: Because they are principally terrestrial, they are more likely to encounter predators, and so are more likely to develop defenses against those predators. So the general rule is, the less a snake likes to climb, the more likely it is to be dangerous.

Carpet Pythons

Carpet Pythons, and constrictors in general, are far stronger and better able to climb. They can be viewed as the Snake-world’s equivalent of Huntsmen. Their preferred attack mode is to leap / fall on prey from above or from the side and wrap themselves around it, squeezing it until it dies, then swallowing it whole.

The Second Bio-security Barrier

Even the climbing species tend to stay close to where the food is, and that’s closer to the ground. While they can climb higher than the first floor above ground level, there is little advantage to them in doing so, so there is, effectively, an equivalent ‘bio-security barrier’ just one floor above the first. Encounter incidence drops dramatically at such heights. Part of it might be that while robust, strong, climbing snakes and spiders can survive a one-story drop completely unharmed, there is far greater risk in falling two or more stories. Just like people, they aren’t built for extreme heights, which are therefore scary – to some; thrilling to others. I wonder if that’s true in the Animal kingdom as well?

Spiders In RPGs

While there can be exceptions of small-but-deadly spiders taken from the real world – Black Widows, Tarantulas, and so on – for the most part, RPGs treat Spiders as “one stat block does all”. They are all venomous, all climbers, all web-spinners, all generic except for size. At most, there might be cosmetic variations.

Simply dividing the world of spiders into two – terrestrial types vs climbers – and applying the difference to determine capabilities is a direct infusion of verisimilitude into spider encounters. Go back and read the spider encounter in The Hobbit again, and this time don’t let yourself get distracted by the conversations and “Attercop”, and you will find that the encounter has a greater level of credibility because the behavior of the spiders feels realistic. There are species whose venom doesn’t kill right away, and who surround their prey in webbing and leave it hanging to die on its own, because it’s harder to tear flesh from bone when it hasn’t started to rot.

Snakes In RPGs

These fare somewhat better, but the same truth can ultimately be found here in an awful lot of cases. It might be, in part, due to varieties of deadly snake being recognized culturally with greater frequency – the cobras with their flaring necks, rattlesnakes with their rattles, and so on. When these get super-sized, some of their traits – those known to the referee – tend to go along for the ride. Many systems explicitly detail a “Giant Boa” or other constrictor.

But, past a certain point, the same truth is there – all snakes past a certain size are venomous, have similar behaviors and attitudes, and behave the same way – and can benefit in the same way by a little differentiation.

Example: Giant Swampy Tree-snakes

You don’t have to ground your ideas in reality, the mere fact that they are different from the ‘norm’ gives them instant credibility and interest. As an example, let me present to you the Giant Swampy Tree-snake, better known as the Green-backed Swamp Viper.

My chain of thinking:

  1. I don’t know what the defining characteristics of a Viper are, but the name sounds cool.
  2. These snakes cannot swim. In a swampy environment, that’s the key point of distinction, from which everything else will flow.
  3. To cross small rivers and streams, they learned to climb one tree, head out along its branches until it was above another tree’s branches, then drop down into it.
  4. Evolution favored smaller, lighter specimens, but required the retention of above-average strength relative to their size.
  5. After a while, they learned how to wrap their tails around the end of a tree-limb and swing, greatly increasing their chances of traversing terrain. This favors a longer, thinner body.
  6. Their eyesight grew more acute and their reactions faster in order to better target neighboring tree-limbs.
  7. Once you have a locomotive ability that doesn’t require descending to ground level, there is a survival benefit to not doing so most of the time. The only reason to drop to ground level is to attack prey, and once it’s in your mouth and on its way to being digested, you would head straight for the nearest tree and climb.
  8. Minimizing the time spent on the ground naturally demands a quicker-acting venom. Smaller body sizes give this snake a lower metabolic demand, so smaller prey, less frequently, becomes sufficient. The improved eyesight aids in the resulting development path. So the snake has fewer doses of its venom but it’s more potent.
  9. Take all of the above changes and repeat them because they are not just a change, they are a trend.
  10. Swinging from tree-limb to tree-limb imposes a natural length limit of average height above ground plus enough length to firmly grasp the tree-limb – two or three coils around; so if the tree-limb is 1/2 an inch in diameter, three coils around its circumference would be 3 x pi x 1/2 = 4.7 inches.

In reality, this looks a little cumbersome in terms of the snake releasing its grasp at the end of its swing – if it wants to leap from one tree to another, I’d probably take one coil out and make the added length 2 x pi x 1/2 = 3.1 inches.
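That length cap can be expressed as a quick formula. This is a sketch only: the 12-foot branch height is a hypothetical number of mine, the coil count and limb diameter are adjustable assumptions, and it uses circumference = π × diameter.

```python
from math import pi

def max_snake_length_ft(branch_height_ft, limb_diameter_in, coils=3):
    # Height above ground plus enough body to coil around the limb.
    grip_inches = coils * pi * limb_diameter_in
    return branch_height_ft + grip_inches / 12

# e.g. hypothetical branches 12 feet up, half-inch limbs, three coils
print(round(max_snake_length_ft(12, 0.5), 2))  # ≈ 12.39 feet
```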

Put all of these changes into an appropriate stat block, and you have something unique, interesting, unexpected, fantastic – and yet, it has a ring of authenticity.

Snakes that live in trees tend to evolve to have a diameter 1/2 the diameter of the branch, at most. If they stay in close to the trunk, they can be enormous in size; if they head for the outer branches, they shrink – fast. And maximum length, as said, tends to be the height above ground of the average tree-limb plus a few inches.

Final Tips

Hunting Vs Defense: A creature’s venom can have either purpose or both.

If it’s for hunting, the quantity will be enough to bring down its usual prey quickly. Every second that a snake or spider is waiting for its prey to conk out (dead or unconscious) is another second that the spider or snake itself can be attacked.

If it’s for defense, the quantity and deadliness will follow the same logic with reference to whatever it usually has to defend itself from.

If both, halfway adaptations become likely – smaller venom amounts but the speed for multiple attacks, for example – so that venom is not wasted on prey when it might be needed for defense.

The same logic still applies when you scale these creatures up.

Before you go, I have a couple of announcements.

Monday Deadlines Erased (well, lightly scuffed)

I (or Johnn) have been publishing Campaign Mastery every Monday at around Midnight my local time since 2008 with just one extended break (not of my choice). Back then, we followed the usual formula of 1,000-2,000 words to a post. For the first ten years, we published twice a week, Mondays and Thursdays.

As of this post, that changes. When I started, I could knock out a post in one day – I often didn’t start writing until the Monday Morning, though I liked to have time up my sleeve by writing the next post early.

I had a set routine – Monday, CM; Tuesday, Pulp; Wednesday, the real world; Thursday, CM; Friday, prep the next campaign to be played on the monthly rotation cycle; Saturday, play; Sunday, personal time.

Then the posts started getting longer and more complicated. First Sundays and then Saturday Nights and then Tuesday Nights all got added to the CM schedule, one at a time. Lately, it’s been Thursday, Saturday, Sunday, Monday – more than half the week – and that often hasn’t been enough.

A number of times, a post has been almost, but not quite, completable before deadline come Sunday / Monday, and I would have to set it aside and throw something together at the last minute, when another day or two would have seen it good to go.

So, as of this post, there’s a new publishing schedule here at CM:

Something New Every Week.

Where possible, I’ll stick to the old deadline, but when something’s not quite ready to go, I’ll give it the extra time that it needs and publish when it’s ready. If I get to Thursday and it’s still not ready, I’ll do the ‘something quick’ trick – and aim for the delayed post to appear the following week.

Partial Posts

When it’s a major series, like Trade In Fantasy, I’m going to pull a new trick out of my hat, the Partial Post. In a nutshell, come Monday or Thursday, I’ll publish whatever’s ready to go, no matter how minimal it might seem. The following week, I’ll publish everything done since the last post as “Part 5b” or whatever, but I will also update the incomplete post with the new content.

Like I said above, something new every week. I’ll even take my usual Time Out breaks in the middle of working on the larger post instead of waiting until it’s complete.

The “Part 5b”-style posts will be minimal – no updated TOC, a repetition of the same feature image, no commentary – just straight ahead from where I left off, with only a single text panel at the top with a link back to the main post.

When one of these drops, it will also signify that there may have been retroactive amendments to the content of preceding parts – these will be Works In Progress, not complete until the main post is complete.

And, on that main post, there will be a similar text panel which will keep track of the status of that post.

Right now, I’m working on Chapter 5, Part 5. So the first part of it will get uploaded and published as “Chapter 5 Part 5 (Incomplete)”.

It will be followed by “Chapter 5 Part 5a”, dated and with text saying “partial post, click here to read the more complete version” in a panel at the top. And, when it drops, the content will be integrated into the old “Chapter 5 Part 5”, the end-of-post blurb will be updated to indicate whether Part 5 is complete or will continue, and a text panel will appear at the start, showing the date and “Integrated Part 5a”.

How well this will work remains to be seen, but the theory is sound, and hopefully readers will stick around.

What’s that? Why post separately at all? There are a number of subscribers who get Campaign Mastery delivered by email and who won’t see the updated version of “Part X”. Posting the additional text separately means that they will still get the new content.

Taking Time

I have a number of major projects on the go right now.

  • I’m illustrating a complex machine for the Warcry campaign – so far, it consists of more than 1800 layers.
  • When it’s finished, I have to write description and narrative around it in the adventure for which it’s intended.
  • Then I have to finish the adventure – and I have a hard deadline of early January for this task. So far, it’s 41,200 words long and about 80% complete. It contains 97 original images and 7 sound effects (so far)!
  • Meanwhile, there’s a Pulp adventure that’s almost complete but needs some finishing touches. It has meant creating an 88-page offline website with 500 images, not counting ones that haven’t actually been used, and more than 129,000 words of text. I have one last page of the website to finish, and then it’s done. The entire (still incomplete) “Value Of Material Things” series is a spinoff of the work put into this website. The adventure itself is 16,100 words, about 95% complete, and also contains about 60 illustrations.
  • But before I can finish it, I need to complete work on another article for CM that currently stands at about 90% complete and is almost 9000 words long (there will be some compression in editing and many of those words are HTML, so it won’t be that long when it’s published).
  • After that, there’s another Pulp adventure that’s 80% complete, maybe 90%, but it needs a complicated illustration that I’ve barely been able to start. It needs to be complete by May 2026. So far, it has 184 illustrations (some originals, many hand-edited) and is 24,300 words long.
  • And then there’s my Superhero campaign. The next adventure is more or less complete at 7200 words and 28 illustrations, most of them original, but I have a growing itch to go back and add to it. But I also have to find time for the adventure that’s to follow it, and I haven’t even started on that beyond basic notes. It’s likely to run to 10-15,000 words.
  • And, meanwhile, the current Dr Who adventure currently stands at more than 56,000 words and is only 22% complete. 7200 words of that total have already been played (one full session), so this is turning out to be a monster. So far, it has 33 original illustrations and (in another first for me) 5 animations. Because play has already started, this has been a high priority for me. And the rest of that adventure needs to be illustrated – that’s probably another 67 or so images, maybe more, to be sourced. Most of those won’t be originals, though – I just have to find the ones I need on the internet.

Put all that together:

  1. 718 illustrations, most of them original, with 2 more major ones in progress and 78 more to be sourced.
  2. 7 sound effects. And 5 animated short movies.
  3. 10 documents & an 88-page website.
  4. 282,800 words. That’s approaching three full-length novels.
  5. With 67,200 still to write by February, and another 160,400 to follow later in the year.

That’s doable, but it means stealing back some of the time that Campaign Mastery posts have soaked up in recent times (hence the Partial Post concept). So, in addition to the measures stated above, more time is going to be diverted away from writing longer blog posts for the next few months. And, on top of that, I will be taking a two-week vacation covering Christmas and New Year’s Day.

There’s a lot to do, so I’d better get on with it!
