Jonathan BaldieJun 21, 2021 — 13 mins read
I am a software developer by trade, and one of the more frustrating aspects of stories, whether on TV, in the cinema, or in books, is the portrayal of technology and programming. In this long post, I will make my best attempt to rectify this.
As technology becomes an increasingly large factor in our everyday lives, it is natural that technology also becomes a bigger feature in the stories we read in books or watch on TV. Characters are more frequently using computers, smartphones, and other tech devices than they might have done in stories from a few decades ago.
Everyone is coming into more frequent contact with technology in their daily lives, but each device we use requires more specialised knowledge than we might be aware of. Your own home is likely full of devices you often use, containing inner mechanisms that are unknown to you. Conversely, although I’m a software developer with extensive knowledge of my digital devices, I have very limited knowledge of how my house’s plumbing system works.
As technology has advanced, and wealth has spread around the world, it might seem as if our lives have become more complex than those of our ancestors. That’s actually far from true. Medical advances have reduced the complexity of managing our health and that of our families, for example. Advancing civilisation allows us to better economise on knowledge, and that is a very good thing for humanity as a whole, but sometimes makes our jobs as storytellers a little harder!
What I’m trying to get at is the idea that we are expected to tell stories that include technology, while few of us understand how that technology works. So we often have no choice but to guess, and our stories’ characters have fantastic technical abilities, “hacking” into servers seemingly at will, typing faster than humanly possible, and pushing technology past their realistic limits. While certainly impressive, such skills are unrealistic or downright impossible.
A certain amount of poetic license is acceptable, and few will begrudge the author who takes certain liberties when their characters interact with technology. As author Scott Adams has written in Win Bigly, his book on persuasion, sometimes it is better if as few details are provided in a story as possible, so that the reader can fill those details in by herself and feel more immersed in the story as a result.
That is different, however, from stories in which the main characters consistently solve major plot points with apparently magical feats of technological brilliance, in ways that fail to challenge them. Nothing is worse than a deus ex machina ending to a story, and godlike technology is one way of committing that storytelling sin.
The key, as will be repeated throughout this post, is to understand what sort of technology is possible, to add a touch of realism to your stories, and avoid eye-rolls amongst the more technically minded of your readers.
Servers are essentially big computers that run programs for a variety of different purposes. Naturally, if you run a server, you don’t want just anyone connecting to it, logging into it, and then doing whatever they want. The same goes for your personal computer, your smartphone, and anything else that could be used by someone else for nefarious purposes.
That’s why you normally have some form of authentication for servers and other such technology, whether that’s Face ID on an iPhone, a PIN code for a smartphone or credit card, or an SSH key for servers. If you don’t have the right credentials when you try to log in, then you can’t log in. Period.
Many stories, however, will have a tech-savvy character “hack” into a server in order to gain access to some data or some program they need to advance the plot of the story. In Bones, a crime serial based on Kathy Reichs’s Temperance Brennan novels, one of the characters regularly “hacks” into servers or “bypasses protected folders” in order to gain information.
Let me be clear that “hacking” is not some sophisticated method of programming where a server is bamboozled into opening access to the hacker. When you hear in the news that some company’s servers have been “hacked,” it is almost always because the server wasn’t protected by authentication at all, a password or SSH key was leaked or phished, an administrative password was weak enough to guess, or a known software vulnerability was left unpatched. You do not just “bypass” a password or type really fast to “hack” into a server.
In 2018 I wrote a blog post detailing how AutoCrit, a premium writing software product on the web, was storing passwords in plaintext. Websites following modern standards don’t do this. They “hash” the password you choose when you register, a one-way scramble, and store that hashed string instead. Anyone who gains access to the hash can’t do much with it: it can only be compared against the password a user supplies when they attempt to log in. But if a website stores passwords in plaintext, anyone who gets hold of them can impersonate the user, steal their data, and do many other bad things. Since a lot of people use the same password for all of their logins, this can be disastrous: imagine a hacker grabbing your password off AutoCrit and then logging into your social media!
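To make the hash-and-compare idea concrete, here is a minimal sketch using only Python’s standard library. Real sites would typically reach for a dedicated scheme such as bcrypt or Argon2; PBKDF2 is used here purely to illustrate the principle, and the function names are my own.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with a random salt. The plaintext is never stored."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt for each user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-hash the login attempt and compare it to the stored hash."""
    _, attempt = hash_password(password, salt)
    return hmac.compare_digest(attempt, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("letmein", salt, digest))                       # False
```

Note that the stored digest is useless on its own: there is no function that turns it back into the password, which is exactly why hashing is the standard.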
More often than not, when you hear of a company being hacked, the real cause is a weak or leaked password somewhere inside the organisation. It simply looks better for the management to seem like the victims of sophisticated hackers than to seem like tech-illiterate fools who left the front door open.
So where does that leave us as writers and storytellers? Please, on behalf of all developers and IT professionals out there, don’t write generic “hacking” into your story. Instead, have the “hacker” do things properly:
- guess or crack a weak administrative password
- use a password or SSH key that was leaked or phished
- find a server that was left exposed with no authentication set up at all
All of these are very realistic, and your readers may actually find it more interesting to see a real “hack” in action than a generic line like “she hacked into the server” that leaves little to the imagination. It’ll also help your readers to learn more about the process, and that might help them to improve their own security!
A show that accurately portrays hacking is Mr. Robot—the main character is a cybersecurity engineer! It also happens to have a great story, and my developer colleagues and friends all recommend it. There is a huge gap in the market for genuine, accurate technological stories right now, and Mr. Robot is alone in attempting to fill that gap.
In terms of future technology, Black Mirror is an anthology series that explores the scarier, creepier side of technology in a generally faithful manner. It doesn’t quite count here, though, since the whole point of the series is to stretch what technology is currently capable of, exploring a different aspect of that in each episode.
A trope that became common in crime serials was the “Enhance!” method, which has a tech-savvy character make a fuzzy image more detailed, magically revealing the face of the killer. This is simple bunk. You cannot add more information that doesn’t already exist. You certainly can extrapolate based on existing information in an image or video, but no accuracy can be guaranteed outside of a good, educated guess.
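A few lines of Python make the point concrete. Downscaling (or blurring) averages detail away, and once two different originals collapse to the same low-resolution result, no algorithm, however clever, can tell you which one you started from. The pixel values below are invented for illustration.

```python
def downscale(pixels):
    """Halve the resolution by averaging adjacent pixel pairs.

    This throws information away: many different originals map
    to the very same low-resolution result."""
    return [(pixels[i] + pixels[i + 1]) // 2 for i in range(0, len(pixels), 2)]

# Two different one-row "images" (say, two different faces)...
sharp_a = [10, 20, 200, 100]
sharp_b = [20, 10, 150, 150]

# ...become identical once the detail is averaged away.
print(downscale(sharp_a))  # [15, 150]
print(downscale(sharp_b))  # [15, 150]
```

Given only `[15, 150]`, there is no way to know which original produced it; an “Enhance!” button would have to invent the answer.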
“Enhance!” has become a trope in its own right, with its very own page on tvtropes.org, and is often the butt of jokes in other stories, such as in this exchange on Futurama:
Zapp: Why’s it still blurry?!
Kif: That’s all the resolution we have. Making it bigger doesn’t make it clearer.
Zapp: It does on CSI: Miami…
Another exchange I can’t help but mention is on Buffy the Vampire Slayer, when one character asks to “zoom in” on a VCR tape, and is told it’s not possible:
Cordelia: So? They do it on television all the time.
Xander: Not with a regular VCR they don’t.
Oz: What’s that? Pause it.
Xander: Guys! It’s just a normal VCR. It doesn’t… Oh wait, uh, it can do pause.
It’s probably more transparent in this example that it isn’t possible. That doesn’t mean, however, that there aren’t sophisticated approximations to it in real life. We’re getting better in the software development world at writing machine learning models, which learn from existing data to produce a likely estimate of further data.
Machine learning models are designed to take in sample data, and then provide some sort of prediction or classification when presented with new data.
For example, a classification algorithm is given thousands of images of fire hydrants, and then it can provide yes/no answers as to whether a new image it hasn’t seen before is indeed a fire hydrant. Alternatively it may be able to ingest many different types of images, and then tell you what classification a new image falls into.
Images are perfect data for machine learning models, because they are effectively just arrays—albeit high-dimensional arrays—and a machine learning model “sees” this array as a giant set of rows and columns filled with numbers, rather like a Microsoft Excel spreadsheet.
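The way a model “sees” an image as a grid of numbers, and labels new data by comparing it with training examples, can be sketched in a few lines. This is a toy 1-nearest-neighbour classifier, nothing like the deep neural networks used in practice, and the tiny 2x2 “images” are invented for illustration.

```python
def flatten(image):
    """A model sees a 2-D image as one long row of numbers."""
    return [pixel for row in image for pixel in row]

def distance(a, b):
    """Squared Euclidean distance between two flattened images."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(image, examples):
    """Label a new image with the label of its closest training example."""
    target = flatten(image)
    return min(examples, key=lambda ex: distance(flatten(ex[0]), target))[1]

# Tiny 2x2 grayscale "images": bright ones are hydrants, dark ones are not.
training = [
    ([[250, 240], [245, 250]], "hydrant"),
    ([[10, 20], [15, 5]], "not a hydrant"),
]

unseen = [[230, 235], [240, 228]]   # a new, mostly bright image
print(classify(unseen, training))   # hydrant
```

Real systems train on thousands of examples and learn far subtler features than raw brightness, but the yes/no-from-examples shape of the problem is the same.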
It is possible to do some very cool things with machine learning algorithms and images. One artist used existing mappings between statues and photographs of their real-life counterparts to create realistic images of Roman emperors. And services like letsenhance.io are turning the “Enhance!” button from a mockable joke to a freaky reality.
If you want to get even more freaky, someone has created software that generates authentic-looking profile images of people who don’t exist. Uncanny, aren’t they? Creepier still, “deepfake” videos are becoming more sophisticated, which raises the question of whether we’ll be able to trust audio and visual evidence in the future.
The point of all of this is that you can extrapolate new image data based on the data already visible in an image. But this is only ever an educated guess: it does not reveal hidden information obscured by a blurry, low-resolution picture. Once detail is lost, it can’t be magically recreated.
So in your stories, don’t use magical “Enhance!” buttons if you can avoid it. Instead, have your tech-savvy characters write machine learning models to recognise, classify, and extrapolate data. This is a lot more realistic, and actually represents an active area of software development and computer science, that people are working on right now!
Unlike the other sections, this isn’t a false storytelling cop-out, but is actually a realistic method used to attack servers.
Real-life hackers are able to knock down critical infrastructure, by discovering the location of the servers that the infrastructure relies upon, and then spamming the heck out of those servers with requests.
Servers running critical infrastructure aren’t usually exposed to the public, so it’s not as if a hacker can simply look up the IP address (roughly the postal address of a computer; you can find yours at icanhazip.com) of a government system and use it as the target of their attack.
Servers generally have firewalls which block certain IP addresses from accessing them. For example, there are blacklists of IP addresses known to be associated with bad actors, and firewalls often come preloaded with these blacklists. If someone from these blacklisted servers tries to access the server behind the firewall, then the firewall will recognise where the connection is coming from and block it.
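The core idea of an IP blocklist can be sketched in a few lines with Python’s standard library. Real firewalls such as iptables or nftables are vastly more capable, and the addresses below are reserved documentation ranges standing in for “known bad actors.”

```python
import ipaddress

# A firewall deny list is conceptually just a set of address ranges
# checked against every incoming connection.
BLOCKLIST = [
    ipaddress.ip_network("203.0.113.0/24"),   # a range of known bad actors
    ipaddress.ip_network("198.51.100.7/32"),  # a single blocked machine
]

def allow_connection(source_ip):
    """Return False if the source address falls in any blocked range."""
    addr = ipaddress.ip_address(source_ip)
    return not any(addr in network for network in BLOCKLIST)

print(allow_connection("203.0.113.55"))  # False: inside a blocked range
print(allow_connection("192.0.2.10"))    # True: not on the list
```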
Occasionally, though, companies or governments will have weak points in their technology stacks. In plain English, this means that while most of their servers are locked down away from public-facing addresses, there might be one or two servers that were forgotten about but nevertheless run some important part of their infrastructure.
If a company’s code uses that exposed server for some part, no matter how small, and if that exposed server has an IP address, then it can be blasted with billions of requests from anywhere. If the server isn’t set up to handle these requests, or recognise that an attack is happening and block those requests, then it’ll go down and take the rest of the infrastructure with it. Any code or other infrastructure that relies on that server will fail as long as that exposed server remains down.
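A toy simulation shows why this works. The capacity and traffic numbers are invented, but the principle is real: once incoming requests exceed what a server can process, service fails for everyone, legitimate users included.

```python
def simulate_server(capacity_per_second, request_rates):
    """Toy model of a server that can handle a fixed number of requests
    per second. When traffic exceeds that capacity, the server is
    effectively down, which is the whole point of a denial-of-service
    attack."""
    return [
        "up" if rate <= capacity_per_second else "overloaded"
        for rate in request_rates
    ]

# Normal traffic, then an attacker's flood of junk requests arrives.
traffic = [500, 800, 1_000_000, 2_000_000, 900]
print(simulate_server(capacity_per_second=10_000, request_rates=traffic))
# ['up', 'up', 'overloaded', 'overloaded', 'up']
```

The “distributed” part of DDoS is that the flood comes from thousands of machines at once, which is why simply blocking one attacking address doesn’t help.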
Simple, right? DDoS (distributed denial-of-service) attacks happen every day, and make the news when they bring down high-profile infrastructure. Attackers recently forced the shutdown of critical energy infrastructure in the eastern United States, for example, though that particular incident involved ransomware rather than a DDoS. Attacks grow bolder all the time, and hostile governments and loose groups of hackers are eager to bring down bigger and bigger targets.
On a more personal level, if your character worked for a tech company and got fired, wouldn’t she love to bring down their website and embarrass them publicly?
In your stories then, DDoS attacks can make a highly relevant addition. Have your tech-savvy villain DDoS a government server, or maybe have a disgruntled employee bring down his employer’s servers. If firewalls are the shields of real-life servers, then DDoS attacks are the swords of real-life hackers.
You’d be surprised at how many servers don’t have firewalls. Granted, there’s little to be gained from DDoS-ing the server behind a website that gets a few thousand hits a day. But this could be a way of allowing your characters to attack or hack into a server. If there’s no firewall or no authentication setup on a server within your story, then cyberattacks should be a piece of cake for your story’s characters.
This is a much less important technology trope than the others, but it is a heavily mocked one that makes tech-savvy people roll their eyes.
It’s when you have a character “hack” into some server by typing really darned fast into a keyboard. Often with a screen showing code… that isn’t actually anything like realistic code.
Character 1: Hack faster!
Character 2: I’m trying! *Types furiously*
Again, Mr. Robot wins out here. The main character writes Linux bash scripts and C code to do his cybersecurity work, and that is very realistic. I’ve seen some shows use CSS or HTML as the “code” for hacking (neither is a programming language), and it’s embarrassing. Granted, it’s not that important, but it’s a detail that some will notice.
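For flavour, here is the kind of unglamorous task real security work actually involves: combing server logs for failed login attempts. The log lines below are invented, but they mimic the shape of real sshd log entries.

```python
import re
from collections import Counter

# Hypothetical auth-log lines in the shape real sshd logs take.
LOG = """\
Jan 10 03:11:02 web1 sshd[812]: Failed password for root from 203.0.113.9 port 5121 ssh2
Jan 10 03:11:05 web1 sshd[812]: Failed password for root from 203.0.113.9 port 5123 ssh2
Jan 10 03:12:40 web1 sshd[815]: Accepted publickey for deploy from 192.0.2.44 port 6001 ssh2
Jan 10 03:13:01 web1 sshd[819]: Failed password for admin from 198.51.100.7 port 7210 ssh2
"""

def failed_login_sources(log_text):
    """Count failed SSH login attempts per source IP address."""
    pattern = re.compile(r"Failed password for \S+ from (\S+)")
    return Counter(pattern.findall(log_text))

for ip, attempts in failed_login_sources(LOG).most_common():
    print(f"{ip}: {attempts} failed attempts")
```

No furious typing required; the work is patient, methodical, and mostly reading.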
Software developers and IT professionals don’t get paid by how fast they type. They are rewarded for building and maintaining effective and robust software that gets the specified job done. Bragging about how fast you type is likely to make you look like an idiot, and if you break something because you were trying to type too fast, then you might not have a job for much longer.
This is less about the specific phenomenon of characters typing really fast in order to “hack” into some server or “bypass” a password, and more about the problem of having characters employ flashy, show-off behaviour. That sort of person certainly exists in the technology world, but they don’t tend to last very long. And they certainly don’t tend to get promotions and lead organisations, as movies, TV shows, and books often imply.
In your stories, give the quiet, unassuming people the leading positions in technology organisations. Software development and IT are about being careful, thoughtful, and having a love of the craft. Typing really fast, or bragging about whatever fancy technology you’re using to “hack” someone’s server, are to be avoided.
I’ve been very cautious in this post not to sound too much like an arrogant tech-geek. There’s a stereotype of the person nitpicking every tiny technical detail in the stories they read or watch. But that is very different from noting the glaring technological inaccuracies of lazy writing, which take away from a story and lessen its credibility.
A lot of thought, research, and detail goes into historical novels and stories. This makes sense, because a given anachronism, such as a character eating a roast dinner with mashed potatoes in medieval England (potatoes only reached Europe from the New World in the 16th century), can take focus away from the story and reduce the writer’s credibility.
We therefore have to ask ourselves, why don’t we do the same thing for technology? I would speculate that fewer novelists and storytellers find themselves deeply associated with technology than with history. We can’t expect all authors to obtain perfect knowledge of the areas of life that they cover. In a more complex world, knowledge of systems and technology is economised, which is very good for the quality of life of society in general, but makes such things more opaque for people outside of those fields.
So the task of being faithful to technology is difficult, and will likely only get more difficult as technology advances in its complexity. But at the very least, we can try to avoid the obvious blunders. Having characters “hack” or “bypass” in godlike fashion, whether to make plot obstacles disappear or to resolve an ending, is both unrealistic and lazy storytelling.
Remember that characters and their struggles are the real things that attract readers and viewers to our stories. Technology is a constant part of our lives now, so please be as faithful as you can when portraying it in your story—this post is a good start! But also keep in mind that like historical storytelling, there are diminishing returns in seeking absolute fidelity. Characters are still the most important elements in any story. I hope that this post will help you to keep their technical exploits accurate and realistic.
Please check out The 24 Laws of Storytelling, my book that explores the principles that make some books and movies great and explains why others fail. By reading my book, you’ll gain the same strategies used by master storytellers such as Stephen King, Christopher Nolan, Fyodor Dostoyevsky, and many more. Pick up your copy today.