
The Ins and Outs of Video Game Subtitling:

An Interview

Kari Hattner is a veteran game producer who has been in the business for 18 years. A member of the International Game Developers Association and a speaker at the Game Accessibility Conference, she’s worked on such big titles as Deus Ex: Invisible War, Hitman: Contracts, Rise of the Tomb Raider and Mafia III.

 

Alain Dellepiane is a well-known name in game localization circles. Chair of IGDA’s Game Localization Special Interest Group, he created the LocJAM translation contest, founded team GLOC and took part in localizing several hundred games over his lengthy career.

 

Ian Hamilton is an accessibility advocate and specialist, co-author of the Game Accessibility Guidelines and co-director of GAConf. He works with developers, publishers, academia, platforms, middleware and industry bodies to raise the profile and understanding of accessibility across the game industry.

 

These three distinguished professionals joined me today to give an insight into the history, process and issues of video game subtitling. Below you will find our discussion.

The History

Max Deryagin (MD): Ian, could you please guide me through the history of game subtitling? What were the main milestones?

Ian Hamilton (IH): Subtitles in games obviously didn’t exist until games had speech.

Starting in the early 80s, it wasn’t uncommon to find a bit of speech in both arcade machines and home console/computer games, but it was usually just a few words — exclamations, short phrases or the name of the game announced on the title screen. Speech was produced by voice synthesis or digitization, and sometimes it accompanied the onscreen text.

Intellivision World Series Baseball (1982)

Credit: YouTube channel ed1269

However, those early efforts were audio equivalents for existing text, or just text and audio together, instead of providing captions specifically for the benefit of people who have hearing loss or who speak a different language. Subtitling in the sense of what you would expect the term to mean in other industries wasn’t really a thing in games at that stage.

One game from those times that’s worth mentioning is Beyond Castle Wolfenstein. It was made in the U.S. for an English-speaking audience, but it had short phrases spoken in German, all of which were accompanied by a textual English translation at the bottom of the screen.

Beyond Castle Wolfenstein (1984)

Credit: Barrie Ellis & YouTube channel Frankomatic

Apart from some exceptions like LaserDisc interactive movies, it wasn’t really until the early 90s that games started to have proper conversational in-game speech, often in the form of new CD-ROM adaptations of previously released point-and-click adventure games — games that already had a mass-market text-based version and later received an enhanced CD version with speech. So, although that’s both speech and text at the same time, it still isn’t really subtitling. Martian Memorandum is a nice early example of a game from around then that had both speech and text presented together.

Martian Memorandum (1991)

Credit: YouTube channel Thriftweeds

I think proper subtitling came a year or two later, with games that had full-motion video cutscenes. Games like MegaRace, Star Wars: Rebel Assault and Wing Commander III had optional subtitles for their FMV.

MegaRace (1993)

Credit: YouTube channel NintendoComplete

Star Wars: Rebel Assault (1993)

Credit: YouTube channel Padawanmage71

So, subtitles in games have been around for a fair old while, but progress in that time has been slow. It has gradually edged closer to being a standard consideration, but there have been some big gaps between major steps forward.

Half-Life 2’s closed captions for background sounds, which came in 2004, were an important step. The first game that I’m aware of with options for text size was Dragon Age: Inquisition, which patched the feature in around 2015, and the first game that I’m aware of with an option to turn the background behind the text on and off was Life Is Strange, which patched it in, I think, in 2016.

Advances like that are now starting to accumulate, so although it has taken a while to get to this point, it is nice to see the pace of change accelerate.

The Process

MD: Now, Kari, could you please describe the game subtitling process from the developer’s perspective?


Kari Hattner (KH): Sure. It starts with the narrative team, which is a team of writers who work on the scripts for the game’s spoken content — cinematic cutscenes, mission/level dialogue and so-called “barks”. The latter, in gamedev lingo, are single lines spoken by characters as systemic reactions to game events or player actions — think of combat lines like “I’m hit!” when they get shot or pedestrian lines like “Watch out!” when they get shoved by the player’s character who is running down the street.

Pedestrian Bark in GTA San Andreas

Credit: YouTube channel Its KeV3nSGaMing

The cinematic scripts are most similar to those in film/TV. They not only contain the dialogue but also describe the setting, time of day, direction for the actors, etc. Not all dialogue is produced by the writers, though — some of it may be created by the designer for a specific area of the game and rewritten by the narrative team afterward. Or, in the case of barks, it can be created by someone in the combat team or the AI team — and then improved upon or given additional variants by the writers.

 

So, depending on the production needs, the narrative team may consist of internal staff or external writers contracted for specific tasks.

 

After the scripts have been written, they need to be transferred into the game. How they’re stored before that happens varies from studio to studio: each line might be an individual text file, or they may all be in one file. The key thing is that there’s usually a database tool or interface that allows you to create/edit/import/export those files as needed for different purposes such as creating scripts for voice actors and exporting them for translators. The tool also stores the metadata — which character says the line, which actor, the direction, tone, and the like.
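As a rough illustration of the kind of record such a database tool might store, here is a minimal Python sketch. The field names are assumptions made for the sake of the example, not any studio’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueLine:
    """One spoken line plus its metadata, as a hypothetical database record."""
    line_id: str      # unique string ID, e.g. "Mine_05_Death_98"
    character: str    # who says the line
    actor: str        # which voice actor records it
    direction: str    # performance notes for the recording session
    tone: str         # e.g. "angry", "whispered"
    text: dict = field(default_factory=dict)  # language code -> text

    def export_row(self, target_languages):
        """Flatten the record into a row for a translation spreadsheet."""
        row = {"id": self.line_id, "speaker": self.character,
               "en": self.text.get("en", "")}
        for lang in target_languages:
            row[lang] = self.text.get(lang, "")
        return row
```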

How the dialogue text is transferred from these files into the game is a bit of a black box to me as a producer — it’s more in the domain of the engine/tools programmers to determine how it gets converted into actual subtitle data and how that data gets onto the screen. I think the latter is often done automatically by the game engine.

 

At the same time as the writers, another group of developers is also concerned with subtitles — the user interface team. These people are tasked with getting the subtitles displayed in a legible manner alongside all the other information that needs to be shown to the player: menus, onscreen objectives, tutorials, maps, etc. They come up with the system for how many lines of subtitles can be displayed, the character limit or width of the subtitle, font, font size, font effects (like drop-shadow or background box) and so on. They also ensure that the localization system works to display each language correctly, and they sometimes have to devise a priority system for subtitle display — if a bunch of characters are talking in the foreground and the background, there needs to be a way to ensure that the player gets the subtitles with the most crucial information first.
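To make that concrete, here is a small sketch of the kind of display and priority rules a UI team might settle on. Every value below is an invented example rather than a recommendation.

```python
# Hypothetical display rules; actual values differ from game to game.
SUBTITLE_CONFIG = {
    "max_lines": 2,             # subtitle lines shown at once
    "max_chars_per_line": 40,   # wrap width
    "font": "ExampleSans",      # placeholder font name
    "font_size_pt": 28,
    "background_box": True,     # solid box behind text for contrast
    "drop_shadow": False,
}

# Priority tiers for deciding which lines survive when several
# characters speak at once (lower number = more important).
SUBTITLE_PRIORITY = {
    "cinematic": 0,  # main story dialogue
    "mission": 1,    # scripted level dialogue
    "bark": 2,       # systemic reaction lines
    "ambient": 3,    # background chatter
}
```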

 

Once that’s done and the voice-over for the primary language is recorded for the game, the audio and subtitle scripts can be localized. And again, the approach varies widely from studio to studio — some handle all of this internally, some send it to an external company or multiple regional companies for translation and recording.

 

And that’s about it.

MD: Very interesting! Now, do game developers ever use subtitling software when working on their games?

 

KH: My experience is pretty limited, but I am not aware of any subtitling-specific software used in the industry. As I mentioned before, how subtitles appear on the screen is determined by the automatic rules that the UI team puts in the game code — how long to display, how much to display, where to display — so there’s no use for subtitling software. Plus, subtitle scripts are usually lumped together with the other in-game text and not treated differently. That said, some studios have developed tools that automatically create subtitles from the dialogue by analyzing the audio files to see at which points the text should be broken up — I saw an example of this at the Game Developers Conference.
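The tool Kari mentions is proprietary and its inner workings are unknown, but the general idea — find pauses in the recorded dialogue and break the subtitle text there — can be sketched. This toy version looks for quiet stretches in a mono audio signal:

```python
import numpy as np

def find_pauses(samples, rate, silence_thresh=0.02, min_pause=0.3):
    """Return the times (in seconds) of pauses in a 1-D audio array.

    A toy illustration of the general idea only; a real tool would
    be far more robust than a simple loudness threshold.
    """
    frame = int(rate * 0.05)  # 50 ms analysis frames
    n = len(samples) // frame
    rms = np.sqrt(np.mean(samples[:n * frame].reshape(n, frame) ** 2, axis=1))
    quiet = rms < silence_thresh
    pauses, start = [], None
    for i, q in enumerate(quiet):
        if q and start is None:
            start = i
        elif not q and start is not None:
            if (i - start) * 0.05 >= min_pause:
                pauses.append((start + i) / 2 * 0.05)  # midpoint of the pause
            start = None
    return pauses
```

Subtitle breaks could then be placed at the words nearest to each returned timestamp.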

 

MD: Hm. Does this mean that games don’t use timecodes for subtitle display?

 

KH: It varies. For gameplay subtitles — barks, non-cinematic conversations, etc. — we use event triggers rather than timecodes. A designer specifies an event for a particular voiceover file to play on, like “the character opens this door” or “the character hides in cover”, and then, when the event occurs, the game plays the audio file and displays the corresponding subtitle. For cinematic cutscenes it can be different — it depends on how the audio is handled. I have worked on games where cinematics were squeezed into one long audio file with all the voice tracks mixed in — in that case there would be one huge subtitle file that included timecode metadata to play each line at the correct time.
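A minimal sketch of both approaches Kari describes, with invented event names, file names and timings:

```python
# Gameplay subtitles: an event triggers a voice file and its subtitle.
SUBTITLE_TRIGGERS = {
    "warehouse_door_opened": ("vo_guard_012.wav", "Hey! Who's there?"),
    "player_enters_cover":   ("vo_hero_044.wav",  "I'll wait them out."),
}

def on_game_event(event_name, audio, ui):
    """audio and ui stand in for the engine's audio and UI systems."""
    if event_name in SUBTITLE_TRIGGERS:
        voice_file, text = SUBTITLE_TRIGGERS[event_name]
        audio.play(voice_file)
        ui.show_subtitle(text)

# Cinematics mixed into one long audio file: the subtitle data instead
# carries timecodes relative to the start of that file.
CINEMATIC_SUBS = [
    (0.0, 3.2, "We shouldn't be here."),   # (start s, end s, text)
    (3.6, 6.1, "Too late to turn back now."),
]
```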

 

MD: Oh wow, this is very different from how subtitling works in other industries. But still, do studios ever hire professional subtitlers for assistance or consultation?

 

KH: I don’t know. I personally have never worked with a professional subtitler. It’s possible that game localization teams work with them, which wouldn’t be surprising, but it’s unlikely that any developers do.

 

MD: Well, perhaps a subtitler could help the UI team with setting the rules for durations, text size, line breaks and reading speed. Studios might be missing out. On a related note, do developers ever try to apply the TV/film subtitling standards to their games?

 

KH: There are solid recommendations and guidelines for games now, but when I first started looking into the topic a long time ago, the only information I could find was from the TV/film standards. This was useful as a foundation, but there are differences between these media, and you need to keep them in mind when creating subtitles for your game.

 

For example, in film and television, subtitles don’t often compete with any other on-screen text — unless it’s the news or a sports programme. In games, on the other hand, they compete all the time with lots of other interface text. So, in this respect, there’s more wiggle room in film/TV to customize their size and placement, and you can even experiment like they did in Slumdog Millionaire.

Subtitles in Slumdog Millionaire

Another difference is the interactive nature of games: you have to input commands and scan the interface elements during gameplay, so you’ve got less time to read the subtitles. Then there’s the prioritization system that I mentioned previously, and finally in some games subtitles don’t disappear until you press a specific button, so you get to control the presentation rate, which is never the case in film.


MD: I see. Thanks for your insight, Kari!

The Translation

MD: Alain, have you ever been asked to use subtitling software as part of your game localization work?

 

Alain Dellepiane (AD): Not really. As Kari said, video games structure their assets in very specific ways, and that makes them inaccessible to normal subtitling tools.

 

MD: When it comes to localizing game subtitles, how does the process usually work on the translator’s end?

 

AD: They send you a long Excel file with multiple columns and many, many lines. The first column contains a unique ID like Mine_05_Death_98; it allows the game to find that exact string when it’s time to display it and gives us some idea of where it will be used. Then, if we’re lucky, there’s a column with the name of who’s speaking and other context notes about what’s happening. Then there’s a column with the space limit, which we’re keen to match, since the text is likely to fall off the screen otherwise. We also tend to dislike the limit when they expect us to fit Romance languages — which are structurally longer by about 20% — into the exact same space as English. And finally there’s a column for each language the game will be translated into.
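Given such a sheet, one routine check on the translator’s or PM’s side is flagging lines that overflow their space limit. A minimal sketch, assuming a CSV export with columns named id, limit and one per language — the column names are assumptions, not a real pipeline’s format:

```python
import csv

def check_space_limits(path, lang_column):
    """Flag translations longer than the sheet's per-line space limit."""
    overruns = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            limit = int(row["limit"])  # assumed column name
            text = row[lang_column]
            if len(text) > limit:
                overruns.append((row["id"], len(text), limit))
    return overruns

# Example: check_space_limits("master.csv", "it")
# might return [("Mine_05_Death_98", 52, 45), ...]
```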

 

So, you do your job and send it to the project manager who takes all the translations, pastes them in their corresponding columns in the master Excel file and sends that file back to the developer, where they use some proprietary tool to convert the text into whatever specific format the game uses.

 

MD: Hm. In film subtitling, we believe that the audiovisual context is king. You have the image, the sound, the dialogue, and the onscreen text, and you need to pay close attention to all of them to produce a quality translation. But in game localization, from what I’ve read, translators rarely get to see the game, let alone play it, so there’s no context to rely on when you’re translating the subtitles. Do you think this has to change? And is such a change possible at all?

 

AD: My feeling is that many games put those concerns behind their technological and commercial needs. When art takes priority, things work the way you describe — when we translated Papers, Please, we could finish the game at our own pace, ping the author at any time and even tweak most graphics freely.

 

But the larger the game, the harder it gets. I was a localization tester at Rockstar Games, and I saw how even a simple request like “let the translator play” can become a minefield at that scale. First of all, the constant race to be bigger, prettier and edgier than anything that came before means that many AAA titles are a mess until the very release. Grand Theft Auto: San Andreas wouldn’t run at all when translation began and remained very unstable until the end. In order to let translators try the unfinished game directly, Rockstar Games would have needed to send them special development consoles and qualified staff for months, which is expensive but also a security nightmare: millions in investment and revenue projections hinged on that one unprotected disk. The suits would never consider sending it off to some 5,000 USD vendor, no matter how much we’d argue about translation quality.

 

MD: Makes sense, but the same can be said about the film industry, yet they let us see the proxy video of the entire film during the translation process. Leaks do happen, but they’re extremely rare.

 

AD: Alas, that’s not where most games are, both in technological and business terms. I mean, every issue we mentioned would vanish if they could just sit and wait a few months, but people vote with their wallets and that vote is for having games now, even if it affects quality.

This whole system, this whole just-in-time chain, is really just serving that craving: from the writer who can’t polish their script because they have to hand it over bit by bit, to the external freelance translator who must go by guesswork because they can’t see the game, to the in-house tester being told to ignore the tinier issues because there’s not enough time… The cost of immediacy is paid at each step.

 

That doesn’t mean that people aren’t doing their best: whenever you hold a game in your hand, you can bet that someone, somewhere, worked late into the night to make it the best that it could be. But there’s no denying that we work for day one, not for the ages.

 

MD: Do you ever use techniques from audiovisual translation when translating subtitle text in games?

 

AD: You might say they have become a constant part of my own writing. One of the most useful articles I came across in my career is Subtitling Decluttering Tips by Bianca Bold and Carolina Alfaro de Carvalho. It’s a great reminder of how you can take a sentence and make it much more readable while leaving its meaning intact. From transitioning your sentences to direct word order and coordination to rationalizing stuttering and repetitions, there are many good practices you can follow to make the text more accessible — and easier to fit within those pesky space limits.
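As a toy example of one such tactic, here is a sketch that rationalizes a written-out stutter; the regular expression is a deliberate simplification and would need refinement for real scripts:

```python
import re

def rationalize_stutter(line):
    """Collapse repeated stutter fragments like "I- I- I" into one word."""
    return re.sub(r"\b(\w+)(?:- \1)+\b", r"\1", line, flags=re.IGNORECASE)

print(rationalize_stutter("I- I- I can't do this."))  # "I can't do this."
```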

 

MD: Is machine translation ever used for translating character dialogue in games?

 

AD: Not that I know of. Each game tends to build a little jargon of its own, and that seems to throw most translation engines off course. One translator famously fed the whole script of Final Fantasy IV into Google Translate, and the results were so hilarious that he ended up writing a whole book about it.

 

MD: Ha! Okay, here’s my last question to you: When people talk about game subtitles, they usually mean PC and console, but what about mobile games? Are they subtitled at all? And if so, what’s the difference?

 

AD: Some mobile games have cutscenes and hence dialogues to be subtitled, but it’s fair to say that narration isn’t the main focus on mobile — people don’t want to watch Citizen Kane on the bus. So, the subtitles are usually functional, an extended tutorial if you will: “Hi, mighty warrior!” “Let’s build our first fortress.” “Now let’s attack the neighbor.” “You will need 500 wood for that building, have you checked the premium market?”


MD: Ah, I see. Thanks a lot for your great answers, Alain!

The Issues

MD: Ian, you’ve already guided us through the history of game subtitling; now let’s talk about modern games — or, rather, issues with their subtitles. As I wrote in my previous article, there’s still much to be desired in terms of font size, contrast, segmentation, reading speed and support for the deaf. Some games are not subtitled at all, which was the case for Crash Bandicoot N. Sane Trilogy, and that can cause outrage among players. What are some other, less-known issues that need more attention?

IH: I’d say that all of the issues you raised are in fact less-known issues that need more attention. Awareness of how to implement even the basics of subtitle presentation is still nowhere near where it needs to be; we are still at a stage where developers often make mistakes over simple things like text size and contrast.

Things are improving, though — we’re now starting to see the first examples of games that do individual elements well, like adjustable size of subtitle text, an option to turn the background box on and off, an option to turn speaker names and colors on and off, dynamic indication of sound direction, etc. So, now there is at least a growing bank of examples of good practice, which means we can’t be too far away from someone pulling all of those different features together in one game and starting to get to the same level of presentation that exists in other industries.

As far as games without any subtitles go, that was getting to a point where it was extremely rare — until the resurgence of virtual reality games. VR presents a design challenge with subtitling, since there is no screen bottom to attach the text to, or any “outside” of the game environment to place anything. So, as there isn’t an immediately apparent solution, quite a few studios have just decided not to bother with subtitles, which has obvious implications for both accessibility and localization. It isn’t an unsolvable problem by any means — I think it will be one of those things where as soon as one relatively big game launches with a decent solution in place, it’ll spread to other studios pretty quickly.

You mentioned player outrage over unsubtitled games. Those negative reactions from consumers are really important. Half-Life didn’t have any subtitles at all, so Valve received letters (this was before social media) from deaf gamers saying they had heard that the story was good and asking if there might be a transcript they could read. That led to Valve including people with hearing loss in their playtests from then on, meaning future games had full closed captions for all sounds, better contrast for the subs, etc. The first Assassin’s Creed game also had no subtitles, and there was an outcry from consumers, the result of which was Ubisoft implementing a certification requirement for all future games to have subtitles.

It is important for players to make their voices heard, with both positive and negative feedback. It doesn’t always work — sometimes studios are not interested, and sometimes, for entirely valid reasons, studios just aren’t able to implement or talk about the changes that people request. However, if players had not done that, Valve’s games would not have had subtitles — let alone closed captioning and background boxes — and Ubisoft would not have implemented the certification requirement. Both of those things were the direct result of dialogue between players and developers. And it is just as important for players to tell developers when they benefit from good design choices. When positive comments from players circulate within a studio, they can really open doors and allow the studio to do more in the future.

MD: Great points, and I think this also works for film consumers — they should voice their dissatisfaction with poorly subtitled films. Now, speaking of virtual reality games, most of them are not subtitled, as you said, but some are — and the latter use all kinds of different approaches, with varied levels of success. For example, in Skyrim VR the subtitles are glued to one place, so you can turn away from them; in Resident Evil 7 they follow your gaze and remain at a constant depth, getting obscured by objects that come between you and the subs; in The Lab subtitles follow the characters in speech bubbles; and so on and so forth. Does this massive variation exist because developers have yet to figure out the optimal way to subtitle VR games? Or is there no optimal way per se, and it’s up to the devs’ creative vision, based on their game’s aesthetics and genre?

IH: There will always need to be some degree of wiggle room depending on a particular game mechanic or interface, but in general it is the former, that there just hasn’t been very long yet to explore what kind of solutions work.

Each of those examples has their own pros and cons. If you have subtitles floating in front of you, you can end up with what’s called vergence-accommodation conflict, which is eye strain, discomfort and fatigue from continually shifting your eye focus from close to far away. If subtitles are attached to the audio sources, this issue disappears, but then another one pops up — you have to be facing in the right direction to read the subtitles or even to notice them, which means you can miss some of them entirely.

At the moment, the most promising solutions seem to be hybrid. For example, subtitles attached to the audio sources by default, and when the source is not onscreen, the subtitles detach from it and temporarily float in front of you, with an indicator of which way to turn to bring the source into view. But even that is still only early experimentation and exploration — there’s plenty of room for new innovative approaches.
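A small sketch of the hybrid behaviour Ian outlines, reduced to a single yaw angle for simplicity (real VR placement would work in full 3D):

```python
def place_subtitle(player_yaw, source_yaw, fov=90.0):
    """Anchor at the audio source if it's in view; otherwise float ahead
    of the player with an indicator of which way to turn."""
    delta = (source_yaw - player_yaw + 180.0) % 360.0 - 180.0  # signed angle
    if abs(delta) <= fov / 2:
        return {"anchor": "at_source", "indicator": None}
    return {"anchor": "float_in_front",
            "indicator": "turn_right" if delta > 0 else "turn_left"}

print(place_subtitle(player_yaw=0, source_yaw=30))   # anchors at the source
print(place_subtitle(player_yaw=0, source_yaw=120))  # floats, points right
```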

Early Prototype of VR Subtitling in Stolen Steel

Credit: Joe Wintergreen

MD: Very interesting! Going back to non-VR, in the Reddit comments to my article many people said that the developers should simply copy-and-paste the standards used in television subtitling and apply them to their games. Would that be a good idea?

IH: I don’t think it’s as simple as copy-paste, but it’s certainly a good starting point. Part of the issue is developers just not being aware of what’s possible or desirable, but if you just shift your head to the side slightly and look at other screen media, the answers are already there. Some landmark work in game subtitling has come about through looking at the standards from other industries.

MD: Do you think hiring professional subtitlers to complement the development and localization teams would help studios create better subtitles for their games?

IH: I’d love to see more professional subtitlers involved, but I think probably part of the issue is the same one that you run up against with subtitle presentation — perception of value. That is, to what extent studios realize how important subtitles are and how much of an impact they can have on a player’s experience. The subtitle usage data is pretty compelling in this respect, but it isn’t widely known about in the industry. I think if more developers were to gather and share data on just how many people play games with subtitles turned on, you would quickly see some pretty dramatic changes to the approaches taken with subtitling.

Even in TV the figures are high. According to Ofcom’s research in the UK, a significant majority of people who watch TV with subtitles turned on are not deaf or hard of hearing. And this is domestically in a relatively monolingual country, so interlingual isn’t much of a factor in their data. But in games there are more reasons to use subtitles than in TV or film, from unpredictable audio to greater distractibility.

I can’t share any specific usage data, but what I’ve seen so far has been pretty compelling. I’ve yet to see a single data set for any game that would show that the people playing it with subtitles enabled are in the minority. So, once you start to see things in that context, of it being an issue that affects the experience of your game in general rather than it being a niche consideration, that really should change the mindset towards improving game subtitles and investing more in accessibility features.

MD: Kari said that the way subtitles are added to the game vastly differs from studio to studio, and then, as you said, there’s also little awareness of just how important subtitles are. Wouldn’t it be better to standardize game subtitling, maybe even on the governmental level, so that studios all have to follow specific guidelines created by accessibility experts, or would this approach be too stifling for the developers’ creativity?

IH: Developers already have to follow rules in the form of certification requirements set by publishers and platforms — for instance, the requirement mentioned earlier for all Ubisoft games to have subtitles. So there is already a mechanism in place for some sort of standardization. But beyond the basic requirement to include subtitles, there are also more specific issues for which there’s no justifiable reason, like tiny subtitle text. So I do think there is some scope for more publisher-level requirements. And actually having a clearly defined goalpost for developers to aim for isn’t a bad thing at all.

That said, outside of platforms and publishers laying down requirements, there is to my mind a quite different but really powerful tool for advancing the quality of subtitle presentation in games, and that’s game engines. Some engines include a basic out-of-the-box subtitling system for developers to use, and some don’t. But if commonly used engines like Unity and Unreal offered great pre-built subtitling systems that had a combination of good functionality (such as the option to scale text) and sensible defaults (such as number of characters per line), you would then see not only developers keen to make use of the subtitling that such an engine offers but also developers who have no knowledge of and/or interest in subtitling still producing really nice subs — they’d be happy to just grab an off-the-shelf system solely to save themselves dev work at a highly pressured stage of development.
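A sketch of what “good functionality plus sensible defaults” could look like in such a pre-built system; the numbers are illustrative, not any real engine’s defaults:

```python
import textwrap

DEFAULT_CHARS_PER_LINE = 38  # illustrative default, not an engine's value
DEFAULT_TEXT_SCALE = 1.0     # exposed to the player in the options menu

def layout_subtitle(text, text_scale=DEFAULT_TEXT_SCALE):
    """Wrap subtitle text; larger text means fewer characters per line."""
    width = max(10, int(DEFAULT_CHARS_PER_LINE / text_scale))
    return textwrap.wrap(text, width=width, max_lines=3, placeholder="…")

print(layout_subtitle("You will need 500 wood for that building."))
print(layout_subtitle("You will need 500 wood.", text_scale=1.5))
```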

And standardization like that spreads. Even if you aren’t developing in a tool that provides that kind of functionality, if you see that it’s now very common for games to offer all of those things as standard, there’s an increasing chance that you’re going to copy them too. Especially because player expectations shift accordingly, and you’ll have people asking why your game doesn’t have the same subtitle presentation options that other games do.

MD: Next question: Right now there’s not much interest in video game subtitling in audiovisual translation academia — I could find only a few papers on it, like Carme Mangiron’s descriptive and empirical studies. Do you think that more academic research in this area would be welcome and beneficial for the industry, or do you already have it all figured out in the Game Accessibility Guidelines and elsewhere, and it’s just a matter of awareness more than anything else?

IH: There’s another nice academic paper on subtitle presentation in games due to be published this month by Tomas Costal.

At the moment, the urgent need is just to get the basics nailed, but after that there’ll always be the need for more investigation.

One thing I’m personally interested in is the possibility of customizable reading speed. Currently, as a developer, if you have a game with lots of dialogue, you need a subtitle prioritization system. When exactly subtitles appear depends on the player’s input, and sometimes it can so happen that several subs will appear at the same time, so there’s not much time to read each one of them. Since some subtitles are more important than others (think main dialogue versus background chatter), you need to add a system that prioritizes certain kinds of subtitles. All that could be offered up for player configuration: someone who needs more time to read the text could adjust the duration each subtitle is displayed for, meaning that less important subs that you’ve got no spare time for would drop off. And vice versa, a player would be able to ramp up the reading speed and free up more gaps for exposing the less critical, more atmospheric lower-priority subtitles.
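A rough sketch of how such a player-configurable scheduler might behave; the baseline reading speed, priority values and drop rule are all invented for illustration:

```python
BASE_CPS = 15.0  # assumed baseline reading speed, characters per second

def schedule(subs, speed_factor=1.0, max_concurrent=2):
    """subs: list of (start_time, priority, text), lower priority = more
    important. A slower reader (speed_factor < 1) gets longer durations,
    so more low-priority lines get dropped; a faster reader frees up gaps."""
    active, shown = [], []
    for start, priority, text in sorted(subs):
        duration = len(text) / (BASE_CPS * speed_factor)
        active = [(end, p) for end, p in active if end > start]
        if len(active) >= max_concurrent:
            worst = max(active, key=lambda e: e[1])  # least important shown
            if priority >= worst[1]:
                continue          # drop the incoming, less important line
            active.remove(worst)  # or bump a shown line for a key one
        active.append((start + duration, priority))
        shown.append((start, text))
    return shown
```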

It would be lovely to tackle some things like that, making use of dynamic environments’ potential to surpass what’s possible in linear media, rather than playing catch-up.

But in general, again, there’s always the need for investigation and pushing the boundaries. I heard a nice quote about that once — that accessibility’s work will be done soon after people stop innovating. There are always new ways of interacting and displaying, which means new potential for inclusion and exclusion, and new solutions needed.

MD: Well said. There’s also this deep neural network craze in the translation industry right now. Do you see these technologies becoming prominent in game subtitling?

IH: There is an area where that has already started to make its way into game development. Some online games now have real-time speech-to-text transcription of voice chat, which helps deaf and hard of hearing players understand what their teammates are saying into the microphone. This system basically converts their speech to subtitles. It first appeared as a test in Halo Wars 2 but is now available for any studio to use through the Xbox SDK. It is fairly rudimentary due to the limits of current technology, but, as time goes on, that will only improve.

MD: What new developments in the game subtitling technology are you most excited about?

IH: If you can count that live transcription thing, then that — it’s a move to address what is a really fundamental issue for many gamers.

Outside of that, the developments that I am excited about are in practices rather than technology — that is, seeing a small but steadily growing number of studios implementing things like a decent and configurable text size. It feels like a trajectory has now been set: even though there’s still a long way to go, at some stage it will certainly reach the tipping point.

MD: Is the situation with game subtitles getting better overall?

IH: It is both getting better and getting worse. We’ve already talked about the progress being made and also the slide back that has happened with VR, but one thing we haven’t covered is studios that are moving backwards and making subtitles ever more difficult to read. Games seem to shift to ever smaller text and lower contrast, often in the mistaken belief that it makes the subtitles less distracting, with less impact on your immersion — but having to peer, concentrate and even get up and walk over to the TV to be able to read the text has precisely the opposite effect. Bad subtitle presentation can really pull you out of the experience, and once you’ve hit a barrier like that, it is really difficult to get back to a good place of immersion and enjoyment again.

Frustration, discomfort and anger aren’t the kind of feelings any developer would want their players to experience, or at least wouldn’t want them to experience as a result of their subtitling, but that’s exactly what happens. Without naming any names, some of the worst subtitles I’ve ever seen were in games released in 2017, and gamers are really unhappy about it. Some of the replies in the Reddit thread that accompanied your last article are a perfect illustration of that. Even just personally, I have 20/20 vision but I’ve now had to dump one of my favorite franchises, as its ever-decreasing subtitle size (and text size in general) has made playing it a really miserable experience.

Although that may seem disheartening in the short term, change can come quickly, and it occurs along a curve rather than a line. We’ve already seen that very clearly with colorblindness. It was only a few short years ago that games considering colorblindness were rare, but as soon as a few big games started to take it into account — SimCity, Borderlands 2 and Black Ops 2 all adding support in 2013 — awareness snowballed quickly and it started heading up that curve.

Colorblind Mode Options in Borderlands 2


Credit: YouTube channel Linnet's How To

So, although today I’d probably say that there are as many steps back as forward, that balance can change quickly, and at this stage I feel confident in saying that it will.

MD: I look forward to that! Now, the last question. What would be your advice for studios that want to subtitle their game to a high standard but are not sure where to start and how to go about it?

IH: The most important piece of advice I would give is just to do something. That may sound like a trite answer, but it is an important one. I’ve often come across companies, both inside and outside game development, that have become aware of the breadth of what is possible, feel unable to do all of it at once, and so decide to do nothing. This is the worst thing you can do. Not being able to do everything in one go is fine; every little incremental change you make just means your game will be more enjoyable for more people.

So, just pick something simple and easy to achieve and build from there. For example, add a solid background behind your text in your next patch, configurable opacity for that background in the following patch, text size options and captions for background sounds in the sequel. So long as your current and future games always move forwards rather than backwards, you can’t fail to get to a good place.

MD: Thank you, Ian, for your comprehensive answers! And this concludes the interview — thanks to all of you for joining me today!

* Opinions shared by Kari, Alain and Ian in this interview are their own and don’t represent any company or organization.
