This image composites a modified “HAL” from 2001 based on artificial-intelligence-155161.png (Image by OpenClipart-Vectors from Pixabay) with a page of error alerts (Image by Gerd Altmann from Pixabay) (also modified) and some word balloons by Mike.

Although I have outlined this post in the usual manner, something about the structure doesn’t feel quite right, as though at some point, my narrative is going to zig when my plan calls for it to zag. If there appears to be the occasional discontinuity in the discussion, that’s what’s caused it.

No AI-generated content was used in the creation of this article, to the best of my knowledge.

Artificial Intelligence is one of the “hot new buzzwords” of the decade, starting in 2023 and enduring for who knows how long. It’s a term that has become loaded with controversy.

It’s long been a staple of science fiction, and the current explosive ubiquity makes it one of the more successful predictions of that genre – though, no doubt, the details differ from those forecasts.

Here’s a fact: Any near-future game is going to have to address the impact of AI on the in-game society. Another: Any far-future game is going to contain a society that has passed through that stage in their evolution.

Or will they?

A point of difference

It first has to be said that today’s AIs resemble those of science fiction about as closely as a bacterium resembles the family cat. This article is about the journey from one to the other and some of the pitfalls along the way that will have to be navigated.

Each generation of game will therefore have a different take on just what an AI was, and what it could do, or some extrapolation of those answers into a different-era context. Every few years, our perception of what an AI will look like – past, contemporary, or future – changes.

    Old Games & Settings – Adapt, Diverge, or Create Anew

    Which (once again and not for the last time) raises the question of what to do with those games that contain an out-of-date set of concepts. Ultimately, there are only three choices – you can update the concept, you can leave the concept as is, or you can attempt to prognosticate based on current or past thinking in an attempt to synthesize a coherent whole.

      Adapt

      Retroactively changing the content of the game to be more in tune with current thinking is the first option. The closer to contemporaneous the game design was, the smaller the leap required.

      Earlier, I suggested a Base of five years, but that might be too slow. I could mount some convincing arguments in favor of three years. Both are generalizations; progress usually occurs in fits and starts, not slow and steady. The question is, what is the minimum time-span necessary to ensure that each interval into the past yields a different interpretation of just what an AI is and what it can do, and how those things will extrapolate into the future?

      You also need to factor in progress in terms of game design, because that also changes with time. Perhaps the change isn’t so much in the concept of an AI between one time-period and the one just prior, perhaps it’s the way that it translates into game mechanics.

      Within a span of three Base intervals, I would expect relatively easy adaptation. Every Base interval further back, this becomes more difficult, and Adapt becomes a less satisfactory approach. If the Base interval is three years, that’s 9 years back that can reasonably be adapted to accommodate developments since publication; beyond that marker, it becomes increasingly difficult to do, and do well.

      Diverge

      Which brings us to the second strategy – the decision that history and technology within the game will diverge from reality in order to stay just the way they are, in-game, even if that in-game reality has been disproven by more recent science.

      People who play classic Traveller, for example, are largely stuck with a 1950s view of computers as big and bulky and incredibly limited. There have always been discussions amongst such groups about why and how this has remained the case, searching for some meta-explanation for why the obvious and remarkable progress of the decades since is not reflected in the computers in-game. (My preferred explanation comes from Heinlein – computers are as susceptible to Jump Shock as humans are, and nothing less robust can reliably survive the experience unless powered down. So planetary computers might be supercomputers and laptops and PCs and networks and everything more modern and shiny, but starships are stuck aiming to satisfy a completely different design priority, one which acts to keep the computers big and stupid).

      The more that a concept of computers and AI is embedded within the game setting, the more likely it is that Diverge has to be the answer; you can’t change too much without undermining that basic pillar of the game universe. It’s like changing the nature of magic in D&D – it can be done, but it’s not an exercise for the faint-hearted.

      Create Anew

      In all those instances where neither Adapt nor Diverge are the preferred solution, creativity can be the best answer. This goes beyond some in-game explanation for why things are the way they are; it’s actively trying to reinterpret the projections of the past into a more contemporaneous perspective.

      It can be a heck of a lot of fun, if you like that sort of thing, and a lot of people do.

      Compare and contrast the “AI” and computer technology of The Mote In God’s Eye with that of its sequel; the differences parallel the real-world developments in such technology. Since the two novels are set a reasonable span of time apart from each other, this adds to the credibility of both settings. Arguably, the first had the harder job of explaining why the technology was so “primitive”; the authors’ answer was Jump Shock again.

      Consistency in a broader narrative

      At one point, it was fashionable to adopt one answer for all technologies and sciences. Inconsistency of approach was deemed “bad” and to be avoided, never mind if one approach was optimum for one science and not others.

      Thankfully, that era has been put to rest, and ‘the best answer for the game’ is now the driving imperative, as perhaps it always should have been. It’s relatively easy to adapt modern extra-planetary discoveries and stellar evolution theories to most sci-fi settings, and by doing so quite blatantly, you can justify leaving other areas unchanged as part of the “look and feel” of the genre.

    The Genre Pyramid

    At this point, I should wheel out one of the classic lines of thought here at Campaign Mastery: the Genre Pyramid (usually called other things, like “The Hierarchy of Game Elements”). This is all about which design and conceptual imperatives dictate what your content should be.

    There have been a couple of versions of this over the years; the most recent was in Inherent, Relative, and Personal Modifiers (August 2023).

    The same pyramid also appeared in Simulated Unreality: Game Physics Tribulations (August 2020).

    Other posts that reference the concept include Into Each Chaos, A Little Order Must Fall: Coping With Randomness (March 2019), and The Language Of Magic: A Sense of Wonder for the Feb 2019 Blog Carnival (February 2019), which first introduced the version of the pyramid presented in all of the above posts.

    A text-only version appeared in The Blind Enforcer: The Reflex Application Of Rules (April 2014), and something similar was presented in Blat! Zot! Pow! The Rules Of Genre In RPGs (Jan 2011), part of a series on the Pulp Genre, where it was called “The Hierarchy Of Dominance”.

    Each layer’s demands supersede or overrule those lower on the pyramid. From the bottom up, those layers are (excerpted from Simulated Unreality):


    1. Official Rules: – The official rules that come in the game system are the foundations at the bottom of the pyramid.
    2. House Rules: – Because house rules explicitly supersede official game rules, they have to sit above that foundation in the pyramid.
    3. Simulation: – This is the level of Game Physics within the game world, and the subject of today’s discussion. Because the rules (house and official) are an imperfect codification of the game physics, if there is ever a conflict between what the rules say should happen and what the principles that have been established say should happen, it’s the official rules that get overruled – so the Simulation layer has to sit above the rules layers. This is what makes it possible to translate a campaign from one game system into another. The game physics is a metagame level of in-game ‘reality’ – the characters might understand them in a completely different way to the comprehension of the GM and players, especially in a ‘hyper-realistic’ genre.
    4. Genre: – There are several different places in the hierarchy where Genre can fit, and that’s at the heart of today’s subject, too. But because the one set of rules can be a broad church providing for multiple genres, the specifics of one particular genre override generic rules and even game physics.
    5. Plot: – Plot refers to the decisions made in-game by PCs and NPCs within the current adventure; it’s the story of that adventure. Since an adventure can contain out-of-genre elements and influences, this level dominates the genre if a ruling can be justified in terms of the needs of the current adventure.
    6. Campaign: – This level contains anything that persists beyond this one adventure. That includes characters and characterizations (as exemplified by the PCs, quite specifically) and any narrative that defines or displays the way the game world works – the style and look-and-feel of the game environment. There are some who would argue that the Plot layer should supersede the Campaign layer.
    7. Gameplay & Practicality: – The uppermost level of the pyramid recognizes that a rule can be technically correct but unplayable – see, for example, My Biggest Mistakes: The Woes Of Piety & Magic for concrete proof of this fact. No matter what anything else says, the needs of practical gameplay are the ultimate censor and trump card. At least, according to the official pyramid.
    8. Fun: – GMs are in the business of entertaining through creativity, narrative, plot, and stimulated interaction between characters and the players who “voice” them. Fun isn’t given a level of the pyramid because it functions like the walls and capstone. If you have two equally-balanced choices, the most ‘fun’ choice should always win. If you have a technically-correct and/or practical answer to any question that is boring as heck, it should lose to a less correct, less-practical answer that happens to be more fun.


    Sequence Of Consideration

    I want to point out something that I don’t think has been mentioned in all those previous appearances: try not to work up; try to work down, at least initially.

    Why? Because if you start at the bottom and consider the official mechanics first, any time that consideration is overridden by a higher element, you’ve wasted that effort.

    If you start at the top, you can stop when one of the considerations trumps those below and proceed directly to a solution to that particular problem.

    1. What will be the most fun? Do you invoke the Rule Of Cool?
    2. Does practicality demand an alternative answer?
    3. Does ‘what’s best for the campaign’ require a non-standard answer?
    4. Do the needs of the plot mandate a different answer in this particular case?
    5. Does the Genre suggest that a particular non-standard decision is required?
    6. Does ‘Realism’ (within the context of the campaign in general) need to override the rules?
    7. Are there any specific House Rules that apply?
    8. What are the official rules (if any)?

    Make no mistake, there will be times when bottom-up is more useful, especially in game prep and out-of-play moments, because that yields a more comprehensive consideration of the issues, and may lead to the formulation of new House Rules specifically to deal with the situation. But in-play when seconds count, faster is better.
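    To make the top-down short-circuit concrete, here is a minimal sketch (in Python, purely illustrative – the layer checks are hypothetical stand-ins for the judgment calls a GM actually makes):

      # Walk the pyramid from the top; stop at the first layer that rules.
      # Each layer is a (name, check) pair; a check returns a ruling string
      # if that layer dictates the answer, or None to defer downward.

      def resolve_ruling(layers, situation):
          for name, check in layers:
              ruling = check(situation)
              if ruling is not None:
                  return name, ruling
          return "Official Rules", "apply the rules as written"

      # Hypothetical checks, for illustration only.
      layers = [
          ("Fun",          lambda s: "invoke the Rule of Cool" if s.get("cool") else None),
          ("Practicality", lambda s: "simplify it" if s.get("unplayable") else None),
          ("Campaign",     lambda s: s.get("campaign_ruling")),
          ("Plot",         lambda s: s.get("plot_ruling")),
          ("Genre",        lambda s: s.get("genre_ruling")),
          ("Simulation",   lambda s: s.get("physics_ruling")),
          ("House Rules",  lambda s: s.get("house_rule")),
      ]

      print(resolve_ruling(layers, {"unplayable": True}))
      # -> ('Practicality', 'simplify it')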

    Application to this situation

    You have a choice – a, b, c, or d (which is some hybrid approach). The answer doesn’t need to be consistent over the whole of knowledge, or of science or engineering, or even within this particular science; you can break the question into smaller, more specific domains as you see fit.

    Working top-down: 8. Fun – fun is ephemeral and inconsistent. The needs of fun may override your decision now, but it’s an unreliable prognostication tool.

    7. Practicality – because we aren’t talking rules, this has no impact.

    6. Campaign – which choice is best for this particular campaign? – answering that question will give you your answer most of the time.

    5. Plot – because we aren’t talking about a specific adventure, this usually has no impact. However, if the entire question has arisen because of adventure content, then this very much becomes relevant.

    4. Genre – 90% of the remaining answers can be derived from understanding the genre that your campaign is representing.

    3. Simulation – but this whole question is aimed at deciding whether or not to integrate post-publication changes to the background and any relevant skills and technology. Before you can answer “does this approach more closely simulate game reality” you need to have defined or understood what that reality is. So this level is unanswerable.

    2. House Rules – you may have decided on a general philosophy regarding this issue; that is a House Rule, whether you realized that or not. Having dealt with any aspects of the question that could compel an exception to that general rule, if you get this far, then apply that general rule to decide whether or not to modify –

    1. Official Material – normally, this points to rules, but in this case it would refer to published background and setting. If you get this far without a change being mandated, then the correct answer has to be b, Diverge.

All AI is not alike

A necessary foundation for such decisions is knowing what form the AI is taking – and that’s a matter of correlating the state of the art with anticipated in-game future development.

I have broken the evolution of AI into 6 major stages (and one sub-stage), starting from the simple simulated conversationalist, proceeding through limited simulations of reality to expert systems, to Heuristic Learning Systems, to Chatbots, to Directed Generative AI (where we are right now) to True AI.

In this section, I’m going to look at each of them, how they are used, and what they are/have been useful for – a sequence of milestones, if you will. In terms of the applications, this list is far from exhaustive – it’s a set of selected highlights (and some low-lights). I intend to do my best to sidestep (at this point) the real question of whether or not it’s possible to go from Stage V (now) to Stage VI (True AI) – this is all context for examining that question.

Because there are 21 sub-sections (more, now that the content has been written), I’ll need to be fairly brief, or we’ll be here all day!

    Stage I: Simulated Conversationalist

    The earliest attempts at simulating a person weren’t terribly effective. All they could do was ask open-ended questions (even ones they had asked before), extract keywords, and use those words in another open-ended question (even if that showed that they didn’t ‘understand’ the words themselves).
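    The whole trick fits in a dozen lines of code. This sketch (Python; the templates and stop-word list are invented) captures everything such a program ‘knows’:

      import random
      import re

      # A bare-bones Stage I conversationalist in the ELIZA mould: extract
      # a keyword, echo it back inside a canned open-ended question.
      # No understanding whatsoever - exactly the limitation described above.

      TEMPLATES = [
          "Why do you mention {}?",
          "How does {} make you feel?",
          "Tell me more about {}.",
      ]
      STOPWORDS = {"i", "the", "a", "an", "my", "is", "am", "and", "to", "of"}

      def respond(user_text):
          words = [w for w in re.findall(r"[a-z']+", user_text.lower())
                   if w not in STOPWORDS]
          if not words:
              return "Please, go on."
          return random.choice(TEMPLATES).format(random.choice(words))

      print(respond("I am worried about my starship"))
      # e.g. "Tell me more about starship." - note the broken grammar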

    The Turing Test

    These gave rise to the Turing test of AI: put a person in a room with a computer monitor and keyboard, and have them interact with the AI, or with a real person – they aren’t to be told which. When the human can no longer tell whether they’re talking to a person or a machine, the machine has passed the Turing Test.

    Some experiments along these lines showed the importance of response times and speeds. If whole paragraphs materialize seconds after you write something, that damages the ‘humanity’ of the response. If individual characters appear at too regular a pace, that too reads as inhuman. To succeed, the computer couldn’t generate and parse text any better than a human could. The machine needed to hesitate, and to exhibit other human flaws – or it had no chance of coming across as human.
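    The pacing trick itself is simple to sketch – this hypothetical routine emits characters at an irregular, human-plausible rate, hesitating at punctuation:

      import random
      import sys
      import time

      def type_like_a_human(text, chars_per_second=8.0):
          # Jitter each keystroke, and pause longer at punctuation,
          # so the output rhythm reads as human rather than mechanical.
          for ch in text:
              sys.stdout.write(ch)
              sys.stdout.flush()
              delay = random.gauss(1.0 / chars_per_second, 0.04)
              if ch in ".,!?":
                  delay += random.uniform(0.2, 0.8)
              time.sleep(max(delay, 0.01))
          print()

      type_like_a_human("Hmm... let me think about that for a moment.")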

    Stage Ia: Simulations Of Reality

    At the same time, some other simulations of reality were bearing fruit. These weren’t trying to be intelligent; they were seeking to apply machine capabilities to real-world problems – such as the 4-color problem – which they sometimes solved by trying every logical possibility in succession.

    Use in Science

    In brief: A computer can be programmed to try every alternative solution in a simulation. Problems like the four-color problem (are four colors enough to color any political map, regardless of its shape and the arrangement of the states within it?) that have resisted a theoretical solution, sometimes for decades, can be solved the hard way.

    These get us answers to problems we don’t have any other way to solve, without telling us why this is the answer. They also enable us to test the solutions to problems that are simply too large and convoluted for direct solutions, such as complex orbital mechanics, impact modeling, solar system formation, weather pattern interactions, and so on.
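    The brute-force strategy is easy to see in miniature. This sketch tries every 4-coloring of a tiny invented ‘map’ until one works – the same exhaustive approach, writ small, that the computer-assisted proofs applied at scale:

      from itertools import product

      regions = ["A", "B", "C", "D", "E"]
      borders = [("A", "B"), ("A", "C"), ("B", "C"),
                 ("B", "D"), ("C", "D"), ("D", "E")]

      def first_valid_coloring(colors=range(4)):
          # Try every assignment of 4 colors to 5 regions (4**5 = 1024 cases)
          # and return the first one in which no two neighbors match.
          for assignment in product(colors, repeat=len(regions)):
              coloring = dict(zip(regions, assignment))
              if all(coloring[x] != coloring[y] for x, y in borders):
                  return coloring
          return None

      print(first_valid_coloring())
      # -> {'A': 0, 'B': 1, 'C': 2, 'D': 0, 'E': 1}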

    Use in Engineering

    Complex interactions between engineering structures and the environment, especially movements of air and soil, are another area in which simulations have revealed problems and solutions that would have taken decades to calculate by hand – by which time the structures in question would long since have failed under the stresses involved. In fact, some did fail, and the modeling was only used after the fact to understand why.

    The ultimate example of that would be the collapse of the twin towers – no-one expected that to happen. Only after the event were models created that matched the end result, explaining why it had occurred – and how to modify future designs to prevent it.

    Most people would argue that this is not AI by any stretch of the imagination, and they would be correct. But it is a way-point to the next category of AI.

    Stage II: Expert Systems

    In an expert system, you present the system with data on a particular complex relationship and let it search for correlations. You can start by engineering into the system the state of the art in human knowledge, but the systems are arguably more powerful and effective starting from scratch and not being biased by what we think we know.

    In the learning phase, the expert system deduces ‘rules’ based on the correlations that it observes, and continues to refine those rules as it progresses until they match the actual end results provided. At any point, the system can be interrogated as to which rule yielded a particular evaluation, and that rule can be marked as ‘relevant’ or as ‘ignore’ – when it reflects sheer coincidence, for example. As it learns, it becomes better at doing this one expert task than any human.

    In the testing phase, current cases and situations are presented to the system, which uses the rules that it has formulated to predict the outcome. Initially, these are just used to test the system, and no action is taken based on the prediction.

    Finally, in the operational phase, the predictions are used to set policies or guide research.
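    As a toy illustration of those three phases (the data is invented and the rules are single-feature only – real systems are vastly more sophisticated):

      def learn_rules(training_cases):
          # Learning phase: keep any feature=value pair that always
          # co-occurred with exactly one outcome in the training data.
          candidates = {}
          for features, outcome in training_cases:
              for pair in features.items():
                  candidates.setdefault(pair, set()).add(outcome)
          return {pair: outcomes.pop()
                  for pair, outcomes in candidates.items()
                  if len(outcomes) == 1}

      def predict(rules, features):
          # Testing / operational phase: apply the first matching rule,
          # and report which rule fired so it can be interrogated.
          for (key, value), outcome in rules.items():
              if features.get(key) == value:
                  return outcome, f"because {key}={value} -> {outcome}"
          return None, "no rule matched"

      training = [
          ({"smoker": "yes", "age": "young"}, "high risk"),
          ({"smoker": "yes", "age": "old"},   "high risk"),
          ({"smoker": "no",  "age": "old"},   "low risk"),
      ]
      rules = learn_rules(training)
      print(predict(rules, {"smoker": "yes", "age": "old"}))
      # -> ('high risk', 'because smoker=yes -> high risk')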

    Use In Research

    The key point in the research value of Expert Systems is that ability to observe correlations and formulate them into rules that can be interrogated and studied, sometimes revealing relationships between outcomes and causal factors that had never occurred to us.

    Use in Business and Society

    They are often used to determine insurance risks and investment strategies. For example, an Expert System might determine that whatever changes take place in the valuation of Stock A, a corresponding change takes place in Stock B some brief period of time later. Evaluation: both are subject to the same market factors, but at different velocities (I think that’s the correct term). Therefore, movements in the value of Stock A can be used to guide investors in their buy/sell positions with regard to Stock B. Of course, the expert systems then have to factor in what all the other expert systems are telling people to do, and so on; the search for refinements is never-ending in such a tail-chasing exercise.
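    A sketch of that Stock A / Stock B analysis (on synthetic data – real systems weigh far more factors): scanning lagged correlations recovers the delay between the two series.

      import numpy as np

      rng = np.random.default_rng(42)
      stock_a = np.cumsum(rng.normal(size=300))      # Stock A: a random walk
      true_lag = 5                                   # B echoes A 5 steps later
      stock_b = np.roll(stock_a, true_lag) + rng.normal(scale=0.3, size=300)

      def best_lag(a, b, max_lag=20):
          # Correlate A against B shifted by each candidate lag; the lag
          # with the highest correlation becomes the learned 'rule'.
          scores = {lag: np.corrcoef(a[:-lag], b[lag:])[0, 1]
                    for lag in range(1, max_lag + 1)}
          return max(scores, key=scores.get)

      print(best_lag(stock_a, stock_b))   # -> 5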

    They have been used to analyze mortgage risks, identify fraudulent transactions, and create artwork.

    The X-Band Antenna of the ST5 Satellites; Public Domain image by NASA, via Wikimedia Commons.

    Use in Design

    I brought this antenna design to readers’ attention in The Artificial Mind: Z-3 Campaign Canon (August 2022).

    NASA needed an unusual antenna design for their 2006 Space Technology 5 (ST5) mission. The designers determined what radiation pattern would be ideal for their needs, [and that no existing design, including their own best efforts, would meet those needs] and then turned the actual design over to a piece of software that used fractal patterns and evolution of designs to generate millions of variations on design until it matched the requirements. In the process, it evolved its own rules for antenna design, defining an evolutionarily “better” design as one that more closely matched requirements.

    The resulting shape (shown to the right) is bizarre, to say the least; and the engineers had no idea why this peculiar shape would produce the required electromagnetic radiation profile, or even if it would do so in real life. [Nor could analyzing the AI’s “design methodology” explain it; it had observed a correlation and created a rule, but with no understanding of why that rule worked].

    So they built one, and found that it worked perfectly – but they were still no closer to understanding why it worked.
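    Stripped of the physics, the evolve-toward-a-requirement loop looks something like this sketch (the ‘designs’ are just parameter vectors and the target response is invented – real antenna simulation is enormously harder):

      import random

      TARGET = [0.2, 0.9, 0.4, 0.7]        # the required 'response pattern'

      def fitness(design):
          # 'Better' is defined purely as closeness to the requirement.
          return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

      def mutate(design):
          return [d + random.gauss(0, 0.1) for d in design]

      population = [[random.random() for _ in TARGET] for _ in range(30)]
      for generation in range(200):
          population.sort(key=fitness, reverse=True)
          survivors = population[:10]                       # selection
          population = survivors + [mutate(random.choice(survivors))
                                    for _ in range(20)]     # variation

      print(max(population, key=fitness))   # converges toward TARGET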

    Deep Blue

      Deep Blue was a chess-playing expert system run on a unique purpose-built IBM supercomputer. It was the first computer to win a game, and the first to win a match, against a reigning world champion under regular time controls.

      — Wikipedia, Deep Blue

    Deep Blue made headlines around the world when it defeated chess champion Garry Kasparov in 1997. This was, arguably, the first time that an expert system received mainstream press attention.

    Stage III: Heuristic Learning Systems

    Heuristic Systems learn in a similar fashion to an expert system, but are designed to accept approximately correct answers. Functions that achieve this are called Heuristic Functions, and they can reach conclusions far faster than non-heuristic functions.

    The objective of a heuristic is to produce a solution in a reasonable time frame that is good enough for solving the problem at hand. They generalize, in other words.
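    The classic example is the traveling-salesman problem. An exhaustive search of tours is factorial in the number of stops; the nearest-neighbour heuristic sketched below gives up any guarantee of the shortest route in exchange for an answer in a fraction of a second:

      import math
      import random

      def nearest_neighbour(points):
          # Always walk to the closest unvisited point: fast, and usually
          # 'good enough', but with no promise of the optimal tour.
          unvisited = set(range(1, len(points)))
          order = [0]
          while unvisited:
              last = points[order[-1]]
              closest = min(unvisited, key=lambda i: math.dist(last, points[i]))
              order.append(closest)
              unvisited.remove(closest)
          return order

      def tour_length(points, order):
          return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
                     for i in range(len(order)))

      points = [(random.random(), random.random()) for _ in range(50)]
      order = nearest_neighbour(points)
      print(round(tour_length(points, order), 3))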

    Use in Cybersecurity

    While there are lots of applications, most of which no-one pays much attention to outside of the specialist fields affected, there’s one area where Heuristic Learning systems impact almost all of us – Cybersecurity.

      Antivirus software often uses heuristic rules for detecting viruses and other forms of malware. Heuristic scanning looks for code and/or behavioral patterns common to a class or family of viruses, with different sets of rules for different viruses. If a file or executing process is found to contain matching code patterns and/or to be performing that set of activities, then the scanner infers that the file is infected.

      …Heuristic scanning has the potential to detect future viruses without requiring the virus to be first detected somewhere else, submitted to the virus scanner developer, analyzed, and a detection update for the scanner provided to the scanner’s users.

      — Wikipedia, Heuristic (computer science)
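    Reduced to a sketch, a heuristic scanner is a weighted pattern-match against traits shared by a malware family (the patterns, weights, and threshold below are all invented):

      import re

      SUSPICIOUS_PATTERNS = {
          r"CreateRemoteThread": 3,    # classic code-injection API
          r"WriteProcessMemory": 3,
          r"VirtualAllocEx": 2,
          r"RegSetValue.*\\Run": 2,    # persistence via an autorun key
      }

      def heuristic_scan(sample_text, threshold=5):
          # Sum the weights of every pattern present; infer 'infected'
          # when the combined score clears the threshold.
          score = sum(weight for pattern, weight in SUSPICIOUS_PATTERNS.items()
                      if re.search(pattern, sample_text))
          return score >= threshold, score

      sample = "VirtualAllocEx ... WriteProcessMemory ... CreateRemoteThread"
      print(heuristic_scan(sample))   # -> (True, 8)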

    Use in Worms and Viruses

    It’s an arms race. Heuristics have also been used to create self-evolving Computer worms and viruses, sometimes described as Polymorphic. These change structure and internal content to evade the simpler string-matching detection methods commonly in place as a first line of protection against such cyberattacks.

    Every few years, a new and more potent attack is revealed, and for a few days or weeks everyone panics (with good reason). The antivirus community contains some of the smartest people on the planet, and collaboration and cooperation levels are high at such times, so solutions are often swift – perhaps initial ones will be basic, but they will rapidly improve.

    It should be noted that this evolution of malware has gone very quiet since the last incident (in November 2019), which few would even have noted. Is that because the perpetrators have shifted focus to data theft attacks and ransomware? Is it related to Covid? I don’t know. But such an obvious gap after a sustained flurry of activity over many years suggests that we may be due for another.

    Stage IV: Vocal Interfaces & Chatbots

    The first software to make a real stab at a vocal interface was Dragon NaturallySpeaking. It was clunky, and it had to be trained in the user’s specific vocal patterns, but it pointed to a future not unlike that depicted in many sci-fi movies and TV shows, in which vocal interfaces dominate, generally replacing the keyboard and mouse. The actual technical term is “Voice User Interface”.

    At the heart of all such systems is speech recognition – a computer being able to understand what a human has said. Such systems have only gotten better over the years, and these days we have Siri, Google, and Alexa.

    These have evolved to the point where they no longer need training; their inbuilt expert “sub-systems” now parse most users correctly. Early examples were a running joke, especially in cars and GPS systems, but they have been getting better every year.

      Parallel Development

      In parallel with this, speech synthesis has also been evolving from primitive beginnings to now sounding almost natural. These systems basically take text output from a computer and read it to you.

      There’s been something of the sort buried away in multiple versions of the Windows operating system, for example.

      Conversationalist

      Put the two together, and you and a computer can now have a heart-to-heart conversation. Initially, it was easy to tell that there was a machine at the other end of the phone line, but it’s becoming harder as the software improves.

      Problem-solving Chatbots

      A third parallel evolution has been the chatbot. You can trace the current generation of chatbots back to the original FAQ text file, in my opinion. These were slowly replaced with web-page versions where you could click on or select a question and be taken to a relevant information page.

      These expanded to entire databases filled with hundreds, thousands, and then millions of answers to common and uncommon queries. Arguably, the Microsoft database is the largest and leading example, now grown large enough that you can sometimes need a new FAQ to help you find the information you’re looking for.

      These were followed by the first chatbots, where you typed a message to the computer and it attempted to decipher it and send you to the right information or web page. At first, these were nothing but frustrating; using the wrong keyword often sent you to the incorrect information. But they have become steadily more sophisticated.

      A variation emerged in the form of auto-complete, which offers suggestions for the word that you are typing in a mobile phone or search window.

      It was when these systems were integrated that their true potential emerged.

    This graphic representation of an AI is another Image by Gerd Altmann from Pixabay. I did nothing but crop and resize it.

    Use in Telephone Reception / Customer Service

    It’s quite common these days to get an automated menu of options when you telephone a large, or even a medium-sized, institution. Early versions used pre-recorded options which you selected using the telephone touch-pad, but more recent advances use speech recognition.

    How effective they are is uncertain; some have said that 70% of problems / queries can now be resolved without any human intervention; others say 80% or 90%.

    Whatever the true number, the certainty is that it will only go up as this suite of technologies improves further.

    Use in Automatic Cold Calls

    A downside is that it is now possible for telephone contact to be initiated by the computer, using an autodialer to spam potential customers. Usually, at the moment, the computer only recognizes that it has reached a person and switches the call through to a human operator, so it becomes possible to recognize the situation and hang up before the connection is made. A very common variation simply logs that there is a person at the other end of the line and places the phone number in a queue for a human to call back.

    As these systems improve, it is virtually certain that some preliminary data will be gathered by an automated system pretending to be a person, with the call then transferred to a human to complete the process.

    Use in Spam and Fraud

    Rising quickly is the even worse downside implicit in such capabilities – spam and fraud over the telephone. Three times in recent months, I have received calls from a not-obviously-artificial “person” concerning a potentially fraudulent transaction on a credit card – not identified by number.

    This is obviously a means of Phishing, an attempt to obtain sensitive information or persuade users to install malware, especially ransomware, on their computing device. The perpetrators are relying on the shock value of someone (allegedly) attempting to steal money from your credit card to inhibit your thinking processes.

    Even in my case, where (each time) the type of credit card is not one that I possess, it took me a moment to process the situation – the attacks are that good. Or that bad, depending on your point of view.

    Stage V: Generative / Directed AI

    At last, we arrive at the heart of today’s subject: the current state of the art. There are two separate forms of this technology – one, like DALL-E, generates images based on your requests, and the other, like ChatGPT, generates text.

    Much of the controversy thus far has focused on the image generators, but they are just the tip of the iceberg.

    Use in Writing / Publishing

    Already, there are problems on sites like Quora where “users” simply copy-and-paste ChatGPT answers to questions, and Quora itself has a variation on the technology that they are developing and attempting to foist on people. They are also now offering to add an AI-generated image to illustrate an answer supposedly created by a human. And, on top of that, they have another AI generating questions – usually very inane and stupid ones.

    I went into that as part of The Artificial Mind: Z-3 Campaign Canon, which I’ve already referenced in this article.

    Use in Media

    Another growing phenomenon is the use of false images in media, especially social media. For a while, we’ve had photographs of one event being intentionally mislabeled in support of some viewpoint or perspective, and that was even before the generative AIs came onto the scene. Throw in the growing capacity for deepfakes, and the growing intrusion into traditional media, and it is becoming less and less true that “seeing is believing”.

    It is reaching the point where we need an expert system designed to do nothing but detect false images.

    Stage VI: ‘True’ AI

    I think I should briefly discuss the goal that researchers and designers are ultimately trying for, or at least, my interpretation of it.

    A true AI is an artificially-created system that interacts with a human user as though it were a human. It can be assigned tasks to perform on the human’s behalf, and is reliable enough that the human can generally trust the result to be equivalent to what the human would do on their own.

    In effect, it automates the following conversation:

      Boss: “Sandy, I need you to do [X] for me.”
      Secretary/Assistant: “I’m sorry, Mr Rogers, I don’t know how to do that.”
      Boss: “No problem, I’ll show you how. You start by…”

    It would not be restricted to one task at a time; it would be capable of pausing in a task to deal with something granted a higher priority.

    Furthermore, given authority to act in selected situations, the AI can initiate actions on its owner’s behalf, subject to review and approval – from scheduling a meeting to bringing an outstanding invoice to the owner’s attention.

    Use in Problem-solving

    Quite obviously, this (potentially) gives everyone their own personal assistant. If it’s reliable enough, that’s a massive benefit to everyone.

    The better the software gets, the more it can be used for people to solve day-to-day problems in their ordinary lives, automating tasks that are currently so complex as to defy such automation – restructuring and reformatting an article to match a personal editorial standard, for example, or planning a difficult schedule.

    Use in Legal Procedures

    NASA are cautious when it comes to their computer hardware; it needs to have established itself as reliable before they will even contemplate putting it into a mission-critical situation. The consequence is that their computer technology has frequently been multiple generations out-of-date when deployed.

    I anticipated that the legal profession would adopt a similar approach to software assistance in the future when creating the Earth-Regency near-future setting for my superhero campaign. Thus, Internet Relay Chat was used to connect an entire backroom legal research team to the front-line lawyer – in 2005, when the technology was already old. In later years, as the in-game date of 2055 (now 2056) approached, expert systems as legal research tools reached the point where open-and-shut cases could be adjudicated without parties even setting foot in a courtroom, and the legal backlog that had accumulated slowly began to unwind.

    Well, here we are in 2024, and to the best of my knowledge, none of this has yet happened. That’s fine – this was always a campaign operating under the Divergent approach listed earlier. But that doesn’t mean that it will stay that way forever – if the software is good enough and speeds up the court process enough.

    True AIs would be dragged into the courtroom anyway – questions such as whether or not one could give evidence would need to be resolved. Could one be sued? Could a business be sued because of one? Could one potentially sue someone else? AI rights is a hot-button issue in the campaign at the current time.

    Use in Scientific Research

    Human knowledge is advancing so quickly that even experts are having trouble staying up to date in their own fields, and the pace is accelerating. It is estimated by some that at the current time, human knowledge is doubling every 12 hours. Others have nominated a more conservative 1-2 years.

    No-one human can keep up with all of it; the only solution right now is to focus ever-more-tightly on a small specialty within a tightly-defined subject, and to assemble research teams that coalesce to give a more comprehensive overview of a scientific problem.

    Functional, reliable, flexible true AI would offer an alternative – instead of trying to keep up with the flood, the AI can be directed to abstract and link to only the knowledge relevant to a research problem, much as a search engine does for more common forms of information. If necessary, documents can be relayed to a support team for validation and verification before they are integrated into the main project.

    This is nothing less than a complete change in the way research is conducted, and it would have repercussions all the way along the educational process, and in any industry that does its own product development.

    How Can You Tell?

    Lastly, let me circle back to the Turing Test. We are now approaching the point where it can be passed with ease, and we need to be contemplating whether the test is even meaningful. It has always had its critics and acknowledged weaknesses.

    When we can no longer distinguish between real people and simulations of people through AI without actually being in the room with the other party, we also have to reconsider the property being tested for. The lines between sentient and non-sentient simulation become so blurred that it may be necessary to adopt a philosophic approach from Star Trek: “That which makes no difference is no difference.” This would fundamentally change the definition of “human” – leading to those questions of AI Rights that I mentioned earlier.

Current Controversies, in brief

There are essentially three major controversies at the moment.

    Copyright

    Generative AIs are a bit like expert systems – they have to be taught or ‘trained’ before they can generate anything. And the makers of these systems have deemed anything that’s been posted to the Internet as available for the purpose.

    In a way, I can see their logic; if it’s been published to the internet, it’s because the authors want people to see it and read it. What they then do with that information is out of the control of the original author – but there are safeguards to protect that author’s rights, called copyright.

    AI isn’t a person, but if it can ‘read’ and ‘analyze’ what’s been written, it’s close enough in my book. The problem then is, what do they do with what they have read and does it violate copyright?

    That’s a much thornier question than most people will realize, simply because the existing copyright laws were written a long time before a machine could “read”, and make no allowance for it. As usual, it will take a while for the law to catch up with technology.

    But there are additional complications. What if the AI simply copies substantial parts of what it has read – is that plagiarism? Arguably, yes. What if it misquotes or misattributes an author’s views, or confuses the views of a character with those of the author? Is that defamation, or is it something else? Can it accuse someone of a crime – with absolutely no evidence, because it’s a machine with no evidence to offer? Is it entitled to hold, and present to others, an opinion? Is it covered by the right to free speech? How many social protections will this piece of software be afforded when it, provably, is not a sentient being?

    There’s enough there to keep courts busy for a decade.

    Copyright part II

    And that’s just the text-type Generative AIs. Are art-generating AIs to be treated the same? Or do we need separate laws for them?

    One of the big problems already taking place is AI generating art in the style of a particular artist who is still alive and working – sometimes even replicating the artist’s signature.

    The AI is clearly interfering with the right of the artist to work and be paid for his or her work. There is a word for that sort of work when it is produced by humans: Forgery. And, by ‘replicating’ the signature, the AI is unknowingly perpetrating a fraud, implying that this is the work of the artist it is copying.

    Don’t get me wrong – there are applications for this sort of thing that are completely legal. In the Adventurer’s Club campaign, we had need of a couple of ‘lost Van Goghs’ – I used a generative AI to get something that was ‘close enough’. But would I ever show them in a public forum like Campaign Mastery, or are they for private use (in-game) only – and should that, does that, make a difference?

    The more you dig, the bigger the bottomless can of worms becomes.

    Consent

    The second issue is the latest iteration of Deepfakes. The Wikipedia article linked to above is pretty scathing – it arguably goes too far, painting all such uses with a pretty broad brush, on the assumption that the deepfake will be pornographic.

    There are a lot of other applications for deepfakes that we will be increasingly hearing about in the near future. Identity Theft, for example. Impersonations of public officials. Scams of all sorts, but especially deceptions involving a trusted individual.

    There have been a number of episodes of NCIS LA that have dealt with these issues extensively. In some ways, the scenario presented is laughably unlikely – the identity of the creator of the fake identity (of one of the stars), for example; it’s comic-book stuff of the less-sophisticated kind. But, at the same time, some of the things that have been done in-story using this level of deception are both amazing and daunting.

    Three examples

    Rather than try to summarize all of them, here are a few generalized analogs. They all start the same way – you receive a request to Skype or FaceTime or video-conference with someone.

      #1 – a trusted relative

      The face and voice match what you would expect. They are clearly distraught, they tell you they are in terrible trouble, and go on to describe a situation in which they need $500 right now or bad things will happen.

      Their story has been social-engineered to sound plausible. Do you send the money?

      What if it’s for real? What if it’s not real?

      You are making what might be a life-and-death decision, you have no time to think, and don’t know what to think even if you had the time…

      This is already happening here in Australia, and if it’s happening here, it’s happening everywhere else, too.

      #2 – your boss

      It’s your day off, but your boss calls with an emergency – “Colin” is out of the office, he needs to retrieve an invoice/document from the computer, but he doesn’t know the password. You do.

      The boss looks and sounds real, and couldn’t work the computer on his own for love or money. That’s what he pays you to do, and Colin fills in when you aren’t around. So it all sounds really plausible. Do you cough up the password?

      Up the ante. You’re now an employee of your government, and this is the password to a secure server holding all sorts of low-level unclassified but secret information. Now, do you cough up the password?

      Raise the stakes – the document the boss needs to access is a real one, but not scheduled for public release for days/weeks. All sorts of undesirable things could happen if it gets out early. But it’s plausible that some circumstance has changed, and a whole different list of undesirable things could happen in that event if your boss can’t get ahold of the document. Now, what do you do?

      Deceptions of this sort are happening already, but as yet (to the best of my knowledge) not involving deepfakes – but the latter has the potential to add so much credibility to the request that it’s only a matter of time.

      #3 – an authority figure

      You live in an apartment not far from a commercial, business, and government hub. One day, you get a surprise call from the State Premier (Governor, in the US) and/or the Chief of Police or someone similar.

      They have some unconfirmed evidence that a resident in a neighboring apartment block is a terrorist planning to plant and detonate an explosive device at any one of several possible targets in the vicinity. If they start sending lots of police into the area, he might panic and detonate the device early, resulting in mass casualties. Instead, they want to dress a number of Special Forces (or equivalent) in civilian clothes and use your apartment as a rendezvous, bringing in people a few at a time until they have enough firepower to take the terrorist down quietly and quickly. It’s your patriotic duty – do you give permission?

      As each person arrives, they present official-looking credentials. When there are a dozen of them, they move out, silently and professionally. And you hear nothing more about the matter.

      What just happened? Who were those people? Were the credentials real? Are you an accessory to a terrorist act that’s being hushed up, or a bank robbery, or an assassination? Will you be in more trouble if you keep quiet, or if you call your local police and tell them what happened? What if they don’t believe you? Or only half-believe you, dismissing the whole nonsensical “call from an authority figure”? What if whatever this group set out to do was so well-executed that no-one’s noticed yet? Is your life irredeemably screwed? Or are you an unsung hero?

    Let’s back it up a notch. Some banks and other institutions now require facial verification to grant access. Enter the deepfakes – and we have another arms race.

    Hallucinations & Inaccuracies

    The third category brings us back to ChatGPT and its ilk.

    You’ll read it at the start of almost any mass-consumption article about these systems – they don’t understand the content of what they write, they don’t look anything up, and they don’t have a database of facts at their disposal. What they have is a glorified autocomplete function – but instead of guessing a word, they guess the rest of the article they are writing.

    And that can make those articles wildly, laughably, almost-incoherently, inaccurate, in whole or in part.
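    Here’s ‘glorified autocomplete’ in miniature – a bigram model (the toy corpus is invented) that continues a prompt by repeatedly guessing a plausible next word. It consults no facts, so fluent-sounding nonsense is its natural failure mode:

      import random
      from collections import defaultdict

      corpus = ("the capital of france is paris . the capital of spain is "
                "madrid . paris is famous for art . madrid is famous for "
                "football .").split()

      # Record which words follow which in the training text.
      follows = defaultdict(list)
      for w1, w2 in zip(corpus, corpus[1:]):
          follows[w1].append(w2)

      def continue_text(word, length=10):
          out = [word]
          for _ in range(length):
              word = random.choice(follows.get(word, corpus))
              out.append(word)
          return " ".join(out)

      print(continue_text("the"))
      # e.g. "the capital of spain is paris . madrid is famous"
      # - grammatical, confident, and wrong: a miniature 'hallucination'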

      Mini-examples

      In The Artificial Mind, which I linked to earlier, I quoted a number of questions from the Quora Prompt Generator – this is, essentially, a miniature Generative AI that writes questions for people to answer.

      Here are some of them, reproduced as examples of the problem….

      Are there atheist crickets?

      Does anyone use the letter Z anymore?

      What is the name of the movie “Soylent Green”?

      Is there a building in Venice?

      Who wrote “Every Breath You Take” by Sting?

      Who played Cleopatra in the movie with Elizabeth Taylor and Richard Burton?

      Why is Paris not the Capital of France?

      Quora’s problem

      Quora has a problem – the same problem as most other social media, in fact: It’s not making money.

      The more interactions the site records – people asking questions, writing answers, reading answers, commenting on answers, up-voting or down-voting answers – the higher it ‘ranks’ relative to other social media, and the more it can charge for advertising.

      Hence, the Prompt Generator, whose questions are so inane that people post snarky replies. Hence, an Answer Generator that – like ChatGPT – auto-creates answers.

      Hence, a (failed and now defunct) program that paid people for the number of answers their questions generated.

      Hence, moderation has been cut to the bone and beyond, opening the door to spambots.

      The Consequences

      I swear, the following are real questions posed by a real spambot – or a real Troll, it can be almost impossible to spot the difference:

      “Since India was once part of Russia…” (the rest of the question hardly matters)

      “Since Columbia was once part of Russia…”

      “Since Greenland was once part of Russia…”

      “Since Australia was once part of Russia…”

      “Since New Zealand was once part of Russia….”

      …and so on, through virtually every country and many of the major cities. The part after the word “Russia” was always the same.

      Analysis

      Notice anything about these questions in comparison to those of the Prompt Generator?

      They could be written by the same hand – it’s just that one has iterated multiple variants of the same question, while the other hasn’t.

      You could hardly call one more intelligent than the other. Both use something close enough to colloquial English to pose their questions, and both ask questions whose responses are virtually guaranteed to be completely worthless.

      To be fair…

      I have to admit that not every question posed by the Prompt Generator is as bad as the examples cherry-picked above. I even answered a few of them on the basis that I thought readers who saw the question might get some value from a legitimate answer. But those are a very small percentage.

So that’s the “State Of The Art”. Time to break out that old cracked crystal ball, and look ahead 5 or 10 years (maybe less).

A Hypothetical General Purpose Journalism AI

For various reasons, I’m going to focus on text-based AIs. The parameters of the art AIs are now fairly well-established, and while there may be some cross-pollination from one to the other, it’s only a matter of refinement for them.

Form follows function – so, in this case, we need to distinguish exactly what it is that the AI is mostly going to be used for.

My answer: Essays, Reports, Website Posts, Letters, Short (Bad) Fiction, Worse Poetry and maybe the occasional RPG product. The latter would be limited because a human would still have to devise any game mechanics and integrate them into the product. I also think that a modified form might be used to auto-generate solo-player text-based adventures.

Next, we need to consider the social framework surrounding them. I’m not going to try and answer those thorny issues raised earlier, except perhaps in general terms, but I am going to assume that at least some of them have been resolved. I’m also going to need to look more closely at one of them, exploring a couple of penumbras that don’t get a lot of attention.

Let’s start there.

    Assumption: Anything on the internet

    To start with, let’s assume that the lawyers and law-makers have confirmed that if it’s posted to the internet, it’s intended to be read, and you can’t stop an AI from being the one reading it. What you can restrict is what an AI is allowed to do with it, in line with existing laws.

    Our next-generation AI needs to understand and obey the rules of Fair Use. So it now DOES maintain a database of sources by keyword, and whenever it uses that keyword, it’s required to limit the extent of its plagiarism, and to cite a reference that the reader can follow.

    What’s more, I’m going to assume that it has access to a crude credibility rating for these sources, permitting it to choose more reliable sources over less reliable ones – but these scores are part of its ‘learned behavior’ and not something that some human has to manually input.

    The AI is therefore learning in much the same way as a student learns to write essays – from user feedback, both on the source and to its created content.

    That requires it to recognize comments and quotations and distinguish them from primary content – which should not be so hard. Learning to treat the two differently should also be well within its capabilities.

    But those two changes are enough to induce a profound difference in two of the major problems the current generation of AI are struggling with – credibility and copyright.
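    What might such machinery look like? Here is a minimal sketch (every field, score, and limit is hypothetical): a keyword-indexed source registry that carries a learned credibility score, caps the length of any excerpt as a stand-in for Fair Use limits, and emits a citation with every quote.

      from dataclasses import dataclass, field

      @dataclass
      class Source:
          url: str
          text: str
          credibility: float = 0.5    # learned from feedback over time

      @dataclass
      class SourceRegistry:
          index: dict = field(default_factory=dict)   # keyword -> [Source]

          def add(self, keywords, source):
              for kw in keywords:
                  self.index.setdefault(kw, []).append(source)

          def quote(self, keyword, max_words=30):
              # Prefer the most credible source; excerpt within the cap;
              # always attach a reference the reader can follow.
              sources = self.index.get(keyword)
              if not sources:
                  return None
              best = max(sources, key=lambda s: s.credibility)
              excerpt = " ".join(best.text.split()[:max_words])
              return f'"{excerpt}" (source: {best.url})'

      registry = SourceRegistry()
      registry.add(["ai"], Source("https://example.org/a", "AI article text...", 0.8))
      print(registry.quote("ai"))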

    Copyright Restrictions won’t work

    There are already calls to restrict AIs in learning mode from accessing any site with a copyright notice. These won’t amount to anything, and no-one who knows anything about how copyright works would make such a demand, because they know such restrictions can’t work.

    First, there’s nothing magical about a copyright notice. Everything anyone writes and publishes, whether in a book or on a web-page or delivered in a podcast, is automatically protected by copyright, whether they put a notice up or not. That copyright can be explicitly waived by posting a Creative Commons license for example, restricted voluntarily the way I do at Campaign Mastery, or deliberately not enforced, but it’s all still copyrighted.

    And second, there’s the small matter of relevance.

    The Copyright Dilemma Revisited

    If you want meaningful results from your AI, if you want relevance, then it needs to have access to contemporary sources of information. It can’t rely exclusively on sources that are 75 years old.

    This results in an implicit but usually unstated social contract – if you are benefiting, or potentially benefiting, from an AI’s access to copyrighted material, then you are obligated to make your content available to AIs so that others can share the same benefits.

    Wholesale plagiarism is not permitted, and AIs will need to be taught those “rules” before they are given wholesale access to the internet, but beyond that, you can’t keep them out and then expect to benefit from such tools.

    The debate over copyrighted materials and AI access to same will be a storm in a teacup, if AIs can be taught to apply the rules of Fair Use. That’s good, because we will need the space to debate more substantive issues.

The New Problems

So some problems have – theoretically – been solved with this putative new generation of AI, but others remain. By requiring the AI to pass judgments on its source material, we open the door to mistakes being made in those judgments, and thorny questions of liability arise. But beyond those, there are problems deriving from the content itself, and they may well be enough to kill the whole concept.

I saw one such problem, and saw no-one else discussing it; that’s what led to this article. Others have noticed a second problem, one which had completely escaped my notice.

Remember that the objective here is an AI that can write material for an online newspaper or equivalent. They get held – or are supposed to get held – to a higher standard than a blog, which can more freely blend fact and opinion. Right now, the credibility problem is so severe that even that usage is a pipe dream – but, once progress of the type described gets made, blog/website usage will be no problem, and achieving the stricter standards of actual journalism will require only incremental refinements.

Let’s start with the problem that I foresee.

    Two Streams Of Perspective

    The US is a dominant player in terms of Media. Other sources may appear to be equal, but that’s only because they are local. And right now, in US-based media, there are two different streams of perspective.

    There’s the conservative, Fox-driven narrative, and there’s the broader mainstream. There are a few media organizations that could legitimately be called far left, but they are trapped by that very ethos into being the polar opposite of the far-right outlets.

    I’m being very careful here not to lift one above the other in terms of credibility. I know which one I support and encourage, and which one I decry and denounce – but for the purposes of this article, I need to be a neutral observer, and the primary fact that I observe is that the far right believe explicitly in the far-right narrative.

    Our putative AI does not have the context to distinguish between the two, even though they are fundamentally incompatible.

    Correlating the incompatible

    To solve this problem, we need to teach our AI the difference between Fiction and non-Fiction – and that some Fiction pretends to be non-fiction, and vice-versa. It needs to understand things like allegory and metaphor. And it needs to understand that there are times when it is not only okay to wallow in a fictional reality, it can even be desirable. That will be a big leap forward in an AI’s understanding of language!

    Once that happens, and a bridge is formed between those concepts and the content that it is encountering on the internet, it will come to the conclusion that one of the two sets is fictional (no matter how much it pretends not to be) and the other is not. Which one it chooses to “believe” doesn’t matter.

    Next, when it is given a request / assignment, it needs to ask a few “getting to know you” type questions. It doesn’t care what the questions are, or the answers – but it does care about being able to extrapolate from those answers what slant you probably want in the material it gives you. From that, it can either answer in “real mode” or wallow in the “fictional universe” of the other side.

    Finding the right questions will be tricky – you don’t want to be crass or blatant or triggering – but I’ve no doubt that it can be done.

    And the great news is that you can now ask in-universe questions or request in-universe perspectives, and the response will be in-universe, too. “Write me 2000 words on why Han Solo might have shot first” would be completely within its range.

    Self-induced Schizophrenia?

    The generation of AI that follows our journalism-bot won’t be all that different, from the perspective of most users; the core product will now be what designers and engineers sometimes call “mature”. But it will be more capable of self-criticism, and more adept at crossing boundary lines – capable of looking at cross-genre questions from both sides. It will be capable of deciding that its initial declaration of “reality” may have been incorrect and should be revised. It will also learn to use readers’ criticism as a diagnostic tool for self-improvement, and be able to think about things like style and wit. As a writer, it will be all the more human.

    But it’s entirely possible that there will be a half-way stage in which its perception of ‘reality’ flips to accommodate the readers that it thinks it is delivering to. “Today I’m an ultra-conservative, writing to appeal to those who think the 2020 election was stolen and Trump is a patron saint. Tomorrow, I may be a centrist-conservative or a liberal. I have no ideology of my own; I am just a mirror of whoever is using me.”

    That’s all fine when you are having one-on-one interactions – but it won’t be long before it starts a second task while the first is incomplete – a second task with a radically-different ideology. In fact, it won’t be long before it has to believe in all things simultaneously, no matter how contradictory.

    I don’t want to project human qualities onto this compilation of algorithms, but we are likely to see emergent behavior that mimics certain human flaws – neuroses, paranoia, schizophrenia, mental breakdowns, stress, anxiety. The researchers will be fascinated, and will want to know if there’s anything regarding the equivalent human problems to be learned – and new specialties will spring up to help the machine rationalize the irrational.

    And at some point, those old questions will bubble to the surface and someone will want to know how human the AI has to get before some of the privileges and restrictions of real people should be applied to it. But that’s where my crystal ball starts getting a little hazy.

So the problem that I foresaw was the impact of the current separation of narratives. And, like a child learning about the world, the AI will have to develop its own tools for dealing with those contradictions – which can have real-world repercussions.

Another problem

There are a handful of writers on Quora for whom I have great respect; I will at least start to read anything I come across written by any of them. Two of them contributed, directly or indirectly, to this article.

Franklin wrote an interesting answer to a question about copying ChatGPT answers whole that helped earlier parts of this article to gel.

But the biggest contribution (this time) came from Mats Andersson who forecast the spectacular failure of Large Language Model AI in the near future, identifying a problem that had completely escaped my notice.

I reached out to Mats for permission to repost his entire article here, but haven’t heard back from him, so I’m going to have to condense and paraphrase.

    Hallucinations Redux

    Remember what I described in “Hallucinations & Inaccuracies” a little while back? (CTRL-F and “Halluc” will find it, if you don’t).

    So we will have all these pieces of alleged non-fiction floating around the internet, most of them not identified as having come from a Generative AI system and instead attributed to a real person. Unless it is able to recognize and reject this material – and current AIs can’t do that – it won’t be long before Generative AI is basing its ‘reality’ on Generative AI ‘Hallucinations’.

    Here are a couple of key paragraphs from Mats’ post:

      Large Language Models [like ChatGPT] are trained on vast repositories of text. Most of them are stripped off the Internet.

      And increasingly, texts on the Internet are generated by LLMs. Texts that contain what AI researchers call “hallucinations”, which is when the LLM just makes shit up. AI researchers say that this happens like 5–10% of the time; I say they’re laughably optimistic, it’s more like 50/50 whether you’re going to get a useful answer out of an LLM.

      What we will get, probably in a couple of years but I’d estimate at the most five years, is LLMs that are trained mostly on the output of other LLMs. They’ll be hallucinating based on hallucinations.

      And the Internet will be drowned in texts that are just shit that an LLM made up.

    Solutions

    Giving a new generation of Generative AIs something akin to “critical judgment” in the manner I proposed earlier would go a long way toward solving the problem identified by Mats. But his point – and it’s a good one – would then boil down to a race between the old AIs, polluting their own environment to the point where it becomes toxic to them, and those developing the proposed next generation of AIs.

    And right now, I’d say it was neck and neck.
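
    One way to buy time in that race would be to screen what the AIs are fed. A rough sketch of the idea follows – both scoring functions are stand-in stubs, because reliable AI-text detectors don’t really exist yet, which is exactly Mats’ point:

    ```python
    # A sketch of keeping LLM exhaust out of the training diet: score
    # each candidate document for "probably machine-generated" and for
    # source credibility before it enters the corpus. Both scorers are
    # stubs - building a reliable detector is itself an unsolved problem.

    def machine_generated_probability(text: str) -> float:
        """Stub: estimated probability that the text is LLM output."""
        return 0.5  # placeholder value

    def source_credibility(url: str) -> float:
        """Stub: 0.0 (anonymous repost) up to 1.0 (vetted archive)."""
        return 0.5  # placeholder value

    def filter_corpus(candidates, max_ai_prob=0.2, min_credibility=0.6):
        """Keep only documents unlikely to be hallucination-derived."""
        return [
            (url, text)
            for url, text in candidates
            if machine_generated_probability(text) <= max_ai_prob
            and source_credibility(url) >= min_credibility
        ]
    ```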

    Elbow Room

    This is no time for niceties – we may have to cheat in order to make sure that “the good guys” win. So, how do we do that?

    The simplest answer is, by doing everything we can to slow the spread of Generative AI text output. That’s not an overly onerous burden, because we’re already inclined to down-vote (or not Like) such content if it comes from the Toxic side of the street anyway.

    Every time you don’t share the idiocy with others, it cleans up just a small part of the Toxic environment created by AI Hallucinations.

    We don’t have to keep it up forever – just until we can invest our AIs with some sort of critical faculties. I hope that I’ve shown that while difficult, it’s not that big of an advance.

    The Starter’s Gun

    A related issue is that it doesn’t look like we’re in a race – it’s just the same old, same old. Which means some researchers will be looking at this, and others will be looking at something else. In fact, it’s fair to assume that no matter what development you are discussing, only a minority of researchers will be pursuing that path.

    If this really does start to become a major problem, no doubt more resources will be thrown at it. But that’s like giving the other side a head start – not because we’re confident of winning, but because we’re undervaluing the prize.

Echo Chambers

You might get the impression that my solution involves pandering to those stuck in an echo chamber, generating anti-vaxxer content for anti-vaxxers. And, to a certain extent, you would be right – at first.

I’ve lost count of the number of times I’ve seen someone try to jerk an individual immersed in “alternative facts” all the way back to their perceived ‘reality’. I’ve seen liberals attempt it with Trump supporters and I’ve seen Trump supporters attempt it with alienated conservatives. Heck, I’ve even seen Flat-Earthers try it with new-born Globalists and vice-versa.

It never works; the cognitive gap is too wide, and the attempt just bounces off an impenetrable shield. A different approach is needed.

Next-Generation Generative AIs who write for their targeted audiences, as I have described them, are that different approach.

    It’s all about credibility

    As the AI creates its ‘rules’ around the credibility of sources, fringe sources will slowly become discounted, and more mainstream sources will be used by the AI to plug the gap in its product. For pro-liberal sources, that amounts to a fairly normal cutting through spin – and make no mistake, there’s plenty of spin in those sources. You don’t need to read many fact-checking newsletters, where both sides of a debate come under scrutiny, to realize that.

    The result will be a step-by-step progression away from the lunatic fringes toward a centrist position, slowly easing extremists toward a reasonable and rational position.
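
    As a toy illustration of that step-by-step discounting – the update rule and the numbers are mine, purely to show the shape of the idea:

    ```python
    # A sketch of the credibility 'rules' described above: each time a
    # source's claim is corroborated or contradicted by higher-credibility
    # sources, its own score drifts up or down. The update rule and
    # learning rate are illustrative assumptions.

    def update_credibility(scores: dict, source: str,
                           corroborated: bool, rate: float = 0.05) -> None:
        """Nudge a source's credibility toward 1.0 or 0.0."""
        current = scores.get(source, 0.5)   # unknown sources start neutral
        target = 1.0 if corroborated else 0.0
        scores[source] = current + rate * (target - current)

    scores = {}
    # A fringe outlet contradicted repeatedly slides toward being discounted:
    for _ in range(20):
        update_credibility(scores, "fringe-outlet.example", corroborated=False)
    print(round(scores["fringe-outlet.example"], 3))  # ~0.179
    ```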

    For the next decade or so, at least, there will be two divergent histories, two perceived realities; positions are too entrenched at this point for that not to be the case. What’s more important than tilting at windmills is pulling people away from the fringes the same way that they got there – one step at a time. That can’t be done by people from the other side of the debate; it needs to be done by someone who is perceived as being “on the fringe’s side,” because they are the only ones that the fringe will listen to.

    What we want is a convergence that brings the majority together into a consensus view going forwards.

    Inevitability

    That is already happening, and will continue to happen, from what I can see. The next section details how to monitor the progress toward that outcome, and the basis for that statement of optimism.

    Events lose relevance when viewed with the lens of time, unless they are vigorously refreshed. The more crackpot the interpretation of events, the more that version of events depends on that refreshment to keep it vital and relevant.

    Old arguments don’t die, they just fade in relevance. The natural evolution of credibility for Generative AI that I have described in my ‘next generation’ repaints those faded colors but corrects the picture, one detail at a time.

    Conspiracy theories are like a game of Jenga – remove one plank and the structure as a whole grows weaker, until collapse can no longer be avoided.

    You won’t be able to bring everyone around with this approach – there will always be a lunatic fringe – but the majority can be saved from the echo chamber they are now trapped in, one disinfecting ray of sunshine at a time.

    Locking It In

    While it won’t be necessary to achieve this, giving our speculative next-gen AI one additional set of capabilities would really lock this slow-but-steady progress into position: teaching the AI to distinguish between fictional, editorial, and journalistic content, and giving it the capacity to evaluate each by different standards.

    This is not as straightforward as it may sound – some content will be a hybrid of two or more of these types, some will mix a paragraph of editorial into an otherwise fact-based report (and vice-versa), and the AI will have to understand the rules of logic in order to find the flaws in arguments.
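
    A minimal sketch of what “different standards for different content” might look like in practice – the classifier is a stub, and the standards table is deliberately simplistic:

    ```python
    # A sketch of 'different standards for different content types'.
    # The classifier is a placeholder; the standards table just
    # illustrates the idea that a paragraph of editorial inside a news
    # report gets judged by editorial rules, not journalistic ones.

    STANDARDS = {
        "journalism": {"requires_sources": True,  "opinion_allowed": False},
        "editorial":  {"requires_sources": False, "opinion_allowed": True},
        "fiction":    {"requires_sources": False, "opinion_allowed": True},
    }

    def classify_paragraph(paragraph: str) -> str:
        """Stub: a real system needs genuine genre classification."""
        return "editorial"  # placeholder

    def evaluate(document: list[str]):
        """Evaluate each paragraph by the standard for its own genre."""
        for paragraph in document:
            genre = classify_paragraph(paragraph)
            yield paragraph, genre, STANDARDS[genre]
    ```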

    It might well be that this is beyond the next-generation AI and has to wait for a subsequent refinement. That’s fine, if it’s necessary; this would simply be the icing on the cake, another step forward in credibility. We can get to the goal without it – but it would greatly speed the process, to the benefit of all.

Measuring the transition and an unsolicited email

While the concept of this article had been floating around in my head, fairly vaguely, a piece of (possible? probable?) spam is what crystallized it. It, too, forecast the collapse of AI, based on a number of speculations about what could go wrong if we become dependent on the technology and it then fails in some way, causing a complete loss of public confidence.

The problem with the projections made – at least in summary form – is that (a) many of the speculative failures are vague and improbable; (b) the speculation includes Generative AI moving into areas of society that it’s not equipped for, and that are already well-served by advanced Expert Systems; and (c) it requires a generalization of response. Failure on the part of a Generative AI text writer like ChatGPT might affect the applications for artificially-generated text, but I don’t see it having much impact on AIs used as a virtual “service desk”, for example.

There are some good points made, especially the proposed “unintended amplification of harmful behavior” – but we already have human trolls and spambots and deepfakes spreading garbage in all directions through social media, at least some of which crosses over into ‘real media’. Adding an AI “voice” to this cacophony won’t make it very much louder.

Still, it has to be asked – is there a threshold that we are approaching and should not cross? Is there a limit to how much disinformation a society can withstand?

    Election-watching

    Politics in recent years has been all about disinformation. Pinning down where it started in the US is very difficult – all sorts of hands have had their turn at the tiller. Here in Australia, it’s actually a bit easier, because the first major incidence was so spotlighted.

    I refer, of course, to the “Children Overboard” Scandal, which I wrote about in Incredible Truth and Improbable Stories: Oratory in an RPG.

    The phenomenon then seemed to die out here to a large extent, except in the case of a few individuals who gained political traction and then were repudiated and ejected from their governmental positions.

    The Clive Palmer Party, for example, had quite a spectacular implosion in 2014-2015. At the following election, Palmer tried to buy his way back into politics, and was resoundingly rebuffed. Palmer himself was later charged with Fraud for diverting money to the campaign illegally (he’s delayed facing those charges for years, but that clock has just about run out, and the charges have not gone away).

    In 2021, another radical politician, Craig Kelly, took over leadership of the re-re-rebranded Clive Palmer Party. Within two months, Kelly was ‘awarded’ (in absentia) the “Bent Spoon” award by the Australian Skeptics organization ‘for spreading misinformation about Covid and vaccinations’.

    Kelly’s views on the COVID-19 pandemic were described as “crackpot” by Omar Khorshid, the head of the Australian Medical Association. Kelly has said, for example, that forcing children to wear masks is child abuse.

    He bought up vast quantities of hydroxychloroquine, and began advocating for its use as a COVID treatment / preventative. This started him down a path of conspiracy theories that would seem eerily familiar to anyone watching the Trump presidency.

    But it hasn’t all been smooth sailing; a progressive attempt at improving relations with Indigenous Australians – the Voice referendum – was resoundingly defeated last year, thanks in part to extensive disinformation from the “No” campaign.

    In the months since, the current leader of the opposition has become renowned for saying “no” to every proposal, so much so that it was starting to affect his popularity, which was already abysmal. The more extreme elements of his party, though, seem to approve.

    Any American reading that brief summary would be feeling a sense of deja vu; it’s a direct emulation of parts of what the US has experienced over the last 16 years, just in a different order.

    But this illustrates an important point: the impact of political disinformation can best be observed and quantified by monitoring election results.

    The American Story

    Trump won the 2016 Presidential election; every election in the US since has seen his credibility slide and the Republican Party fall further behind expectations. In the 2018 mid-terms, the Democrats won control of the House. In 2020, Biden ousted Trump by 7 million votes, and the Democrats took the White House and both chambers of Congress. In 2022, Republicans succeeded in taking back the House, but a much-lauded (and expected) “Red Wave” failed to materialize – and most of the high-profile Trump-endorsed candidates failed to get elected.

    Which brings us to the 2024 Presidential Election. Despite the criminal charges against him, Trump has – in some polls – held a narrow lead at some points, though it must be pointed out that the Democrats had not really started campaigning at those points. The House has been a shambles, self-destructing in public view. Nine of the 10 largest Republican donors have distanced themselves from the party, leaving the Republicans seriously underfunded.

    This is now an election campaign that is all about disinformation. Should Trump prevail, it will be because a substantial portion of the electorate chose to accept disinformation and conspiracy theory as fact. Analysis of the primary votes held thus far has Trump support, even within the party, at an all-time low. He may have won the Iowa caucus, but only 14% of the usual turnout attended, due to poor weather, and it has often been pointed out that it is the most fanatical who will be driven enough to participate despite the conditions.

    Some suggest that this will be an electoral wipe-out for the Republicans due to the array of circumstances opposing them. Others are more measured and cautious.

    Anyone who is interested in the question posed – “How much disinformation can a society withstand?” – will get an answer with this election. If the trend continues, the Blue Optimists may have every reason to smile; if Trump wins, despite the trend, then clearly there is a breaking point that has been reached. The very least that can be said is that it will all be very interesting.

    Of particular interest will be the lesser races; the Presidential Election tends to steal the spotlight, and the “oxygen”, from those races, but at that grassroots level, many of the Red states have remained loyal to the Republican Party. Should a “Blue Wave” occur in these lesser races, it will demonstrate a generalized toxicity of disinformation; should it not, then the effect can be considered more concentrated on and around the source. Will Trump-endorsed candidates succeed or fail? There are endless layers to this onion, and they are all relevant to this question, and therefore to the credibility of the line of argument about an AI “crisis of confidence” put forth in the spam in question.

AI is here to stay

I can dismiss the dire conclusions of the unsolicited email relatively easily; while they may partially materialize, the total collapse forecast seems unlikely. Mats’ line of argument is more difficult, and I think that Generative AI itself will need to evolve to something like the next step that I have described in order to avoid that catastrophic outcome.

Those same evolutionary steps will also be necessary to avoid the catastrophe that I came up with. But I have no doubt that there are efforts already underway, because they seem such a logical solution to the problems facing Generative AI now, never mind five or ten years from now.

    The Prototype Phase

    All new technologies go through six stages of development – sometimes smoothly, sometimes messily, with fits and starts; sometimes along multiple channels simultaneously, sometimes quite sequentially – but all six stages are essential to get to the end product.

    The first phase is the prototype phase, when the core functionality becomes available but with great gaps and sometimes maddening flaws and imperfections and limitations.

    The Cowboy Phase

    That gets followed by the Cowboy Phase in which endless permutations and variations get explored from different makers. I call it the Cowboy Phase because it’s the wild west – anything goes and there’s virtually no regulation.

    This is the phase in which the technology gets applied to all sorts of functions that were often not even dreamt of by the makers of the original prototypes, and we start to figure out just what this technology is going to be good for.

    The Regulated Phase

    Meanwhile, those who can claim authority over the branch of technology start reacting to the excesses of some in the Cowboy Phase and start looking at what is needed to regulate the new technology.

    Eventually, they will curtail the worst offenders, or at least drive them underground, and new versions of the software will emerge that obey the restrictions placed on them by regulations (assuming those regulations can be translated into the technical and design spheres). The software has entered the Regulated phase.

    At first, there will be a lot of give-and-take and instability; regulators regularly go either too far or not far enough, and the technology architects will push back. Ultimately, each of the major applications discovered in the Cowboy phase will get its own version of the software for that dedicated purpose.

    Formulating a Community Standard

    Meanwhile, starting back in the Cowboy phase, the public begins formulating a set of expectations for what the software can do for them. Those expectations feed directly into the development of the software and inform the regulations – or those regulations will be more honored in the breach than the observance, in which case it’s back to the drawing board for all concerned.

    These community standards are critically important, because they demand a shift in the regulatory framework, which goes from Regulators Vs Developers to Regulators Vs The Public. It’s entirely possible for a technology to be completely regulated from the first perspective and for those regulations to need to be almost completely scrapped because they don’t accord with public expectations at all.

    But a third party – who may be the developers again – soon shows up, in the guise of people who have invested in the technology and demand some return on that investment. And they can completely derail the process. In the worst-case situation, you can have a messy, three-corner contest; in more orderly transitions, two of the groups will reach an accord and face down the third.

    The investors have a big advantage in such fights – they have influence and money, and those who write laws pay close attention to both. The bias is therefore always toward commercial exploitation of a technology – unless there are other moneyed interests whose prosperity will (they think) be disrupted by the new tech.

    Formulating A Legal Standard

    Once those bun-fights are more-or-less wrestled into some sort of compromise that’s tolerable by all (other than outright lawbreakers), the regulators start tweaking and refining the laws that have been created, and those laws get tested through the courts. With each case, the legal standards – which override everyone else’s take on the tech – become more sophisticated. If some existing and accepted framework can be adapted, that can be a shortcut – but it can also have unexpected repercussions down the track.

    Nevertheless, a legal standard will be codified and the software will be adapted or evolved to operate within that standard.

    Acceptance & Ubiquity

    At some point after that, the software will enter the final phase of adaption – acceptance and ubiquity. It will be everywhere, doing just what it’s permitted to do, and will quickly fade into the background and become just a part of the technological landscape.

    Mobile phones, file sharing, GPS, mp3s, internet browsers, internet provision, streaming services – you name it, they’ve all been through this evolutionary process and are now pretty much taken for granted. Sometimes they got there quickly and fairly peaceably; sometimes, the road was full of acrimony; but they all get there in the end.

    Why should Generative AI be any different?

    Again, though, I think it important to stress that “AI” and even “Generative AI” is being used as an umbrella term for at least three different (but related) technologies, and that can fool people into thinking that a “one size fits all” solution is needed, or even possible. This can delay, obstruct, and confuse the regulatory landscape. I think that the laws surrounding Generative Art AIs and Generative Text AIs will need to be different, though each will no doubt inform the content of the other, and there may be common features or principles.

    How Long?

    How long will it take? Well, for all the noise that’s surrounding Generative AI at the moment, the only regulation that’s even being discussed is the application of rules and legislation that never anticipated this new technology. Unsurprisingly, no-one is satisfied with this situation. It means that Generative AI is still in the Cowboy phase, and will stay there until someone starts paying a lot more attention to the developing Community Standards and the purposes to which the technology is being put.

    I think it will be 5-10 years after that happens that ubiquity and acceptance are achieved. Right now, some users and sites are happily exploring the capabilities of this new technology, others are trying to figure out how to exploit it for their own profit, and some people have completely blackballed it.

    No matter how dug in their respective positions, they will all evolve once we start emerging from the Cowboy Phase. One of the first developments will probably be an attempt at an “Ethical AI” – and even if it’s not as polished or accomplished as the cowboys, it will undercut the objections of those antipathetic to the technology.

    There’s a long road ahead of AI, and it contains some monumental potholes and the occasional “bridge out” or “detour” sign. It may not be possible to reach our intended destination. But it won’t be for lack of trying.

Looking Beyond The Near-Term

All this isn’t going to happen overnight. My 5-10 year estimate is potentially wildly optimistic – but with the pace of development in the modern age, it might happen surprisingly quickly, especially since the social benefits that result simply accelerate processes that occur naturally in society.

Any near-future society is either grappling with these problems or is in the process of solving them. AIs are so useful that they will grow in ubiquity, so it’s essential that these problems are solved.

Having laid out a plausible pathway to doing so, it then becomes possible to extrapolate further into the future – and to take a side-road or two out of the Sci-Fi genre entirely.

    “True” AI

    The connection to the underlying embedded morality of a “true” AI should be fairly obvious – even if the tests that we have at the moment are no longer adequate for identifying such an AI as sentient in its own right. Those tests held that the interface between person and machine would be reflective of the intellectual capabilities of the machine, and that’s simply no longer true.

    Robots, Droids, Androids, and other Sci-Fi automata

    Once you have a true AI – be it a starship’s computer or whatever – other technologies that can be ‘motivated’ and ‘controlled’ by an AI become possible. This leads to Androids and Droids and all sorts of other sentient or semi-sentient automata.

    A solution of some sort would be needed to correctly parse verbal instructions to such devices – and without that, they are not going to be the same as what we see in various movies and TV shows. Any sort of ethical processing will be built around these concepts or something similar – so this will be the ‘subconscious’ of these machines.

    Fantasy Automata

    But Mechanical “Life” isn’t restricted to the Sci-Fi domain. There are all sorts of such life-forms in Fantasy, too, such as Golems and Warforged in D&D.

    Well, the hardware might be different, and there might be a whole lot more hand-waving going on, but it’s easy to see something analogous being essential to such “Life”, too.

    Decision-making, AI Style
    • “AI Life” is given an order.
    • Does it understand the order?
    • If so, is the person giving the order entitled to issue instructions to this representative of “AI Life”?
    • If so, evaluate the order for practicality. Is the “AI Life” capable of performing the required task?
    • If so, is the order countermanded by prior orders of greater urgency or priority?
    • If not, is the order ethical?
    • If so, will the order place the “AI Life” at risk?
    • If so, is the person giving the order entitled to issue orders with that potential ramification?
    • If so, or if there is no such risk, the order is a valid one, and the “AI Life” should carry it out – or decide for itself, if it is independently sentient.

    This is a fairly basic logical progression, a slightly beefier version of Asimov’s Three Laws.
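
    Rendered as code, the chain looks like this – a sketch only, with every hard question stubbed out:

    ```python
    # The decision chain above, as straight-line code. Every predicate
    # hides a hard AI problem; the class below just stubs them out so
    # the chain itself can run.

    class AILife:
        """Stub: each method is a placeholder for a hard capability."""
        def understands(self, order): return True
        def capable_of(self, order): return True
        def countermanded_by_prior_orders(self, order): return False
        def is_ethical(self, order): return True
        def places_self_at_risk(self, order): return False

    def evaluate_order(order, issuer_may_command, issuer_may_risk, ai):
        if not ai.understands(order):
            return "reject: order not understood"
        if not issuer_may_command:
            return "reject: issuer has no authority over this unit"
        if not ai.capable_of(order):
            return "reject: impractical"
        if ai.countermanded_by_prior_orders(order):
            return "reject: conflicts with higher-priority orders"
        if not ai.is_ethical(order):
            return "reject: unethical"
        if ai.places_self_at_risk(order) and not issuer_may_risk:
            return "reject: issuer cannot authorize that level of risk"
        return "valid: carry out (or decide for itself, if sentient)"

    print(evaluate_order("recharge the beacon", True, True, AILife()))
    ```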

    Unless the “AI Life” can take some sort of shortcuts in its decision making, however, this could represent a delay of seconds, minutes, hours, or even days.

    Short-cutting this laborious process

    Those shortcuts can best be summed up by two simpler principles operating in tandem.

      Authority

      This is essentially nothing more than a re-dressing of the ‘source credibility’ concept. Categorize the source of the order, and you index their authority over the system.

      Trust

      And then, as in the military, you simply have to trust that a sufficiently-high Authority will not place you (the “AI Life”) in unnecessary jeopardy and will not issue morally-questionable instructions. Those that do will, after all, have to deal with accusations of misuse of authority. It’s not the AI’s job to police such.

    Yes, there’s room for some nuance. Assigning orders issued by someone with relevant expertise or credentials a higher “Authority rating”, for example. But, by and large, this simplifies the question enough that it can be resolved almost instantly – essential in an artificial soldier, for example.
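
    Here is a sketch of those two principles operating in tandem – the categories, ranks, and threshold are illustrative assumptions, nothing more:

    ```python
    # The Authority + Trust shortcut: instead of walking the full
    # decision chain, index the issuer's authority over this unit and
    # trust anyone above a threshold. Categories and ranks are
    # illustrative assumptions.

    AUTHORITY_RANK = {
        "owner": 5,
        "commanding_officer": 4,
        "domain_expert": 3,   # relevant credentials earn a higher rating
        "bystander": 0,
    }

    def quick_evaluate(order, issuer_category: str,
                       trust_threshold: int = 3) -> str:
        rank = AUTHORITY_RANK.get(issuer_category, 0)
        if rank >= trust_threshold:
            # Trust: assume a sufficiently high authority won't issue
            # unethical orders or impose unnecessary jeopardy.
            return "execute"
        return "fall back to the full decision chain"

    print(quick_evaluate("recharge the beacon", "domain_expert"))  # execute
    ```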

The Promised Land

It is, perhaps, worth recapitulating just what that intended destination is, or should be.

AI that

  • we can all use with a clear conscience;
  • evaluates the reliability of its sources, and thus gives trustworthy output;
  • is capable of learning from its mistakes – whether technical, ethical, or in terms of accuracy – without specific input from an overseer;
  • tailors its output according to the credibility of sources but is capable of working with an inbuilt bias on the part of the intended audience, as exemplified by the person requesting the output;
  • fairly compensates those whose work it references (with credit and links if nothing else);
  • has legal restrictions to which it adheres, and which are acceptable to the community at large and to the sub-community of creatives whose work is actually being referenced;
  • [if possible] can distinguish between fictional, editorial and fact-based content, even within a passage contained in a source, and can evaluate such content independently of the main content of a source.

There could be more, but I think that list is ambitious enough to be getting on with.

No AIs were harmed (or consulted) in the writing of this article.
