Vegeta, what does the scouter say about its graphics level?

It’s been a while indeed, as normally happens when life comes at you fast. Things have changed a lot here, and my time for games has been massively reduced – though this has had the side effect of making me slightly more interested in mobile games. When you have little time, you want to play in small sessions.

Candies ‘n’ Curses: this took a very long time. My proudest moment.

Among other things, I have very slow internet in this house, which kinda puts a damper on my addiction to benchmarks. It takes around two hours to download a 25GB game, and my browsing is very slow in the meantime, so of course I gravitate more towards mobile games while bigger ones are downloading.

Subway Surfers: this also took a very long time. Not necessarily my proudest moment.

Still, the downloads eventually finish. And one of them was the Immortals of Aveum demo. Aside from having the most facepalm-worthy name of the century, it’s notable for two things: it supports FSR3 (which by the way is somewhat underwhelming – it does increase the maximum framerate, but the minimum doesn’t seem to change much, and the extra lag is quite noticeable to me… which makes sense, since frame generation can only interpolate between frames that were already rendered, so the lower your base framerate, the more latency it adds. I guess it only works well with 60+ FPS games in the first place). And it has a peculiar method of quantifying your GPU’s power budget: with a single number.

Left number is the required GPU power, right one is the available power, specs be damned. That’s my RX 6600 XT by the way.

In fairness, maybe it’s not so peculiar. After all, our very basic method of quantifying a GPU’s power in teraflops is effectively the same thing. And I suppose it’s still a little more interesting than games where all you get is the required VRAM. Then again, this game won’t tell you that, so… win some, lose some.

And that’s my GTX 1650 laptop, which is apparently six times slower. I wonder how that number is calculated.

I don’t think it’s very useful regardless, because it doesn’t change depending on the chosen resolution. I guess the game thinks my power budget is the same whether I choose 4K or 320×240? And in the end, my GTX 1650 was still running the game decently… somehow… ok, that’s a very generous take on the results. But they weren’t nearly as bad as the above numbers would imply (ah, the power of 720p and FSR Performance).

I guess after all, the best way to make sure a game runs well is to simply run it yourself… and hope for the best.

720p with FSR Performance means a base resolution of 640×360 – in other words, old Windows 95 games ran at a higher resolution than this. It still doesn’t make Requiem: Avenging Angel any good… but at least it looks sharp.

Indiana Jones and the Attack of the Clones

Lightfall is upon us, inevitable server issues notwithstanding, and while waiting for Destiny 2 to occupy my days once more, I’ve used the last week to replay an unsung gem of the late 90s: Indiana Jones and the Infernal Machine. Unfairly maligned in its day for not being an adventure like Fate of Atlantis, it is nonetheless a great game that showed that developers at Lucasarts weren’t content with any old licensed drivel. And to be honest, I could spend days just gushing over the environments.

Sophia using her divination skills to read 75 years into the future.

The old Tomb Raider, celebrated as it may have been, I’ve always found drab and lifeless. Okay, maybe lifeless is right, you were raiding tombs after all. But surely Lara Croft herself would get tired of the same old brown and grey eventually. The sequels would introduce a bit more color and variation, but ultimately the palette always strived for a sort of realism, well, as realistic as you could get with people transformed into dragons by old daggers anyway. Venice (despite completing TR1, that was as far as I could get in TR2 before giving up) wasn’t a carnival of 256 colors, it was about what you’d expect from old stone and bricks.

Someone once said that, if you are going to stare at someone’s back for 60 hours, it might as well be a nice-looking girl (let’s not get too specific). The origins of this phrase are lost to time, though some say it dates back to a PVP Online strip from 2004. Personally, I’m more surprised that someone would need 60 hours to complete a Tomb Raider game.

However, Indiana Jones famously doesn’t believe in the supernatural, hence he’s not bound by such wild concepts as realism. And it shows in the colors of the places he visits throughout his adventure. Lava is a searing reddish hue, and its vapors sit at the balmy ambient temperature you’d expect from a Hollywood-inspired videogame. Sand is not vaguely beige, but a most brilliant mix of yellow and orange. Water is not just transparent like in real life, but a torrent of azure. With colors like these, who needs realism?

Irregular shapes come together to inform the level design. I love it.

The Infernal Machine was mockingly called a Tomb Raider clone back then. Perhaps it is so, though I’d argue that Lara Croft was directly inspired by Indy, so it merely comes full circle. You gotta wonder where the circle goes now, since the new Tomb Raider has been inspired by Uncharted instead… what will the upcoming Indiana Jones game do this time? Clone Uncharted as well? It might, and perhaps it won’t be so bad.

Cue scene of Indy driving in the desert for three days. Well, as long as he had water.

Clones have always had a bad reputation in games. Remember when every shooter was a “Doom clone”? Following a leader denotes a lack of originality on your own part, or so goes the idea. But some of the best games around have been directly inspired by other, perhaps more famous titles. I’ve already said that I think Infernal Machine is better than the Tomb Raider games it so obviously copied, but there are so many other cases where a clone can stand perfectly well on its own. Resident Evil is sci-fi Alone in the Dark with zombies. Serious Sam is Doom with more enemies and an adventure setting.

Rhem 1 is basically a harder Myst, and it succeeds because of that. Maybe the sequel copied Riven instead, because it sure was a whole lot harder.

The idea, I guess, is that a clone will need to have at least some element of differentiation, and so will usually introduce a gimmick. When we are lucky, this one different element is what wins the day. Unfortunately, the rise of mobile gaming seems to have given birth to an entirely different idea of clone, one where games are always exactly the same. There is only so much you can do with the match-3 formula, I guess.

After finishing the game, I’ve now started Jedi Fallen Order. Because cloning Uncharted is not enough, you gotta clone Dark Souls too.

Even if the new Indiana Jones game ends up being another riff on Uncharted, I have faith that it could stand well on its own. Sometimes, as Henry Jones said, you must believe.

And then, 333 days later…

Oh man, almost a year since my previous post. In my defense, things have been quite hectic in the past eleven months. Lots of stuff happened, and I’m now looking at a complete change of life within the next three months. That’s quite something, and more than I could have hoped for even just a year ago. I consider myself lucky overall. Of course, luck only comes to those who make the effort to call for it. Best not forget that. People too often tend to minimize their own achievements.

Maybe not quite as severe a change of life as being resurrected as a wraith to enact vengeance on your old master and executioner. But… almost there.

I guess I don’t have much to say this time, I just wanted to update the blog. But now that I’m here, I might as well find something to say. So, for example, due to having to move around, I’ve become the owner of an Acer Swift X with a Ryzen 5500U and GTX 1650. It… isn’t a great combo. The Ryzen is a little too slow to get to a stable 60fps in modern games, and the GTX is a mere 35W model, so yeah. The 8GB of soldered RAM don’t help any. Kids, remember to buy a laptop with upgradeable memory, or at least one that comes with 16GB in the first place. Still, I’m the one who wanted a thin-and-light that could play games fairly well for a low price. Shoot for the moon, as they say.

Nobody will complain even if there are some jaggies, and performance isn’t all that either. What matters is that I’m ready for Lightfall when it comes out.

Can’t complain too much though. As a whole, this notebook has served me well enough. Destiny 2 runs well as long as you can suffer some framerate uncertainty in heavy scenes, as well as truly unstable performance in PvP. Not that I care, the Crucible has been stuck in a rut for ages anyway. Other than that, the CPU and GPU are generally enough to run Xbox One games at 60fps… at the same settings and resolution, if the original was a stable 30fps and the port is good. So I’m afraid Elden Ring and some other games had to be kept to 30fps instead. Doing that, however, usually lets you set 1080p and high details. Having all that extra power is very nice indeed.

Ok game, look at me for a moment. I may be below minimum specs, but I can run the benchmark at 1080p medium, at 96fps with lows of 84fps. Got anything to add?

Aside from the RAM struggle, VRAM also tends to be somewhat of a problem nowadays. Frankly, 4GB isn’t quite enough in recent titles. And keep in mind Windows 11 really wants some memory for itself, so it’s more like 3.3GB available. Deathloop takes a minimum of 4.6GB and it shows with stutter at random times, even when playing at 720p and low details. Darktide is more or less out of my league. I was surprised to find that even the new Lego Star Wars can’t keep a stable 60fps no matter what. You’d think the CPU would be able to do better there.

Not terribly excited about that red text. It’s usually not a good color to see. Did you know that in DX12 you can only have 90% of 90% of your card’s total memory? Sucks, yeah.
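
Speaking of which, you can peek at that budget yourself: DXGI exposes what the OS is actually willing to hand out through IDXGIAdapter3::QueryVideoMemoryInfo, and the Budget it reports is typically well below the card’s physical total. A quick sketch (Windows-only, error handling kept to a minimum):

```cpp
#include <cstdio>
#include <dxgi1_4.h>
#pragma comment(lib, "dxgi.lib")

int main() {
    IDXGIFactory4* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    IDXGIAdapter1* adapter = nullptr;
    if (factory->EnumAdapters1(0, &adapter) != S_OK) return 1;

    IDXGIAdapter3* adapter3 = nullptr;
    if (FAILED(adapter->QueryInterface(IID_PPV_ARGS(&adapter3)))) return 1;

    // Budget = how much this app may use right now; CurrentUsage = how much
    // it is already using. The OS shrinks the budget as other apps compete.
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);
    std::printf("budget: %.2f GB, in use: %.2f GB\n",
                info.Budget / 1073741824.0, info.CurrentUsage / 1073741824.0);

    adapter3->Release();
    adapter->Release();
    factory->Release();
    return 0;
}
```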

It’s a 35W card, which would technically be a Max-Q, but since Nvidia doesn’t make those anymore, they just call it a GTX 1650. Apparently it just keeps its clocks as low as needed to stay within the power budget. The overlay shows a maximum of 1740MHz, which sure is too high to be real. The actual performance is lower than the average GTX 1650 by a good 15%. More realistically, GPU-Z says it runs at 990MHz and can boost up to 1155MHz. That sounds more like it, if you ask me. Now, 3DMark 06 actually does a decent job here, since it gives me fillrates of 38GT/s in single texturing and 58GT/s in multi texturing, quite close to what you’d expect from 32 ROPs running, I want to say, at boost clock… and 56 TMUs running at base clock maybe? Performance is respectable still. Much better than the integrated Vega 7 for sure.
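
If you want to check my napkin math: theoretical fillrate is just unit count times clock, so a few lines are enough. The 3DMark numbers land inside the base-to-boost range, which is why I find the GPU-Z readings believable:

```cpp
#include <cstdio>

int main() {
    // GTX 1650 unit counts; clocks in GHz as reported by GPU-Z on this laptop.
    const int rops = 32, tmus = 56;
    const double base = 0.990, boost = 1.155;

    // Single texturing fillrate is ROP-bound, multi texturing is TMU-bound.
    std::printf("pixel fill: %.1f to %.1f GT/s (3DMark 06 says ~38)\n",
                rops * base, rops * boost);  // 31.7 to 37.0
    std::printf("texel fill: %.1f to %.1f GT/s (3DMark 06 says ~58)\n",
                tmus * base, tmus * boost);  // 55.4 to 64.7
    return 0;
}
```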

The main attraction. The device ID corresponds to a Max-Q. Make of that what you will.
The sideshow. Pretty bad, yet we have come a long way since my old Radeon Xpress 200.
Borderlands 3 gives us a vague idea of how things stand between the GTX and the Vega. Obviously, the 1650 is bottlenecked hard at low resolution and details, but you get a feeling of their respective powers, and it ain’t a pretty comparison.

Of course, such tests are kinda pointless. We are talking about a sixteen-year-old benchmark, whereas modern 3DMark says my GPU clock is closer to 1450MHz. A bit harder to believe though. Anyway, I would like to try 3DMark 06 on the old netbook, to find out if it really does have 12 TMUs. And given that I’ll be free until my upcoming change of life, I should have time for that. For now at least, I can enjoy life, free of worries for a few weeks. After eleven difficult months, there’s so much I need to catch up on. Books, games, you name it. Just not Elden Ring though, I already took care of that.

In other words, I can’t afford to be too lazy. Even if being lazy sure feels nice sometimes.

Around the world in 80 polygons

At the time of writing, the world is going crazy for Elden Ring. I’m sure that, at some point, I’ll be picking it up too eventually (I have my reasons for not buying it yet, some of which may or may not be related to the fact that I’m a cheapskate). Until then, I decided to try my hand at From Software’s older and lesser known output, specifically Eternal Ring and Shadow Tower. I’ll spare you my opinion on those games at the moment, they may yet end up in another blog post. Instead, I wanted to rant a bit about what I see as a point of difference between Japan and the West in the 90’s: the technology, especially for adventure games.

Games from the nineties tend to be vastly different depending on which side of the ocean you are looking at. America and Europe were quite big on PC games, and both Sierra (before) and Lucasarts (after) influenced many a developer. But perhaps neither was as big as Myst, which went on to become a massive success story, and spawned an entire subtype of Myst clones – a genre that still goes on today, though usually not with static screens anymore. I’m guessing a full 3D game is actually easier to make nowadays than one with 360-degree panorama points. Not that those don’t exist anymore, look for example at Darkling Room’s Ghost Vigil.

The average Myst clone, much like the average Souls clone, always made a few mistakes that the original did not. Lighthouse really screws up in highlighting hotspots.

Japan was probably bigger on consoles, and not so big on PC games. As a result, when the PS1 and Saturn came out, it was a race to utilize their amazing new 3D capabilities. Since we are still on the topic of From Software, who can forget King’s Field? That was more of an RPG… eventually, however, adventures started appearing too. My usual favorites: Echo Night, Iblard, Enemy Zero. All of them presented in true 3D.

I really just wanted to reuse this gif.

The distinction between the two graphical approaches here is quite obvious. Games like The 7th Guest and Myst looked impressive, but they were clearly static. Click the mouse, move to the next hotspot. Echo Night, on the other hand, is in 3D and gives you total freedom, but it sure looks very rough in comparison. This distinction doesn’t just stop at adventure games, either.

Okay, so maybe The Dark Eye could have been done in 3D too. I doubt it would have looked as interesting though. At least, not with the available texture memory of the PS1.

I think the reason for this lies in both different influences and different hardware. As I said before, the Western market was fairly big on PCs. And until the arrival of graphics accelerators, PCs weren’t all that good at 3D stuff. Mind, they weren’t all that good at fast-paced 2D stuff either (rough deal, huh?). This might have been part of the reason for the massive number of Eye of the Beholder clones out there, even as Japanese developers were starting to deal with true 3D dungeons. Mind, you could argue that Ultima Underworld did it even earlier – but that game was a system hog, and later Wolfenstein 3D and especially Doom found far bigger success on computers. It’s possible, then, that developers started seeing faux-3D styles as more useful for fast-paced shooters and their ilk, though Might and Magic and Elder Scrolls at least eventually did the same for RPGs. Rare examples of 3D adventures from the era (Normality?) have remained mere curiosities. Plus, it wasn’t even real 3D. For that, the earliest game that comes to my mind is realMyst, released in 2000. Quite a long time to wait.

Betrayal at Krondor tried to be more 3D-ish. Looked a bit weird, actually.

I mentioned Myst. Of course, a bunch of static images must have been a lot easier to handle than anything in real 3D – and they looked a lot better too. If you consider the level of true 3D graphics throughout the years, it would take a very, very long time before anyone made anything as good-looking as Riven. And when you also consider the massive success of Myst, it’s no surprise that everyone and their grandmas adopted the same style.

Myst clones were common in edutainment too, such as Opera Fatal here.

Lucasarts and Sierra games were a big deal in the 80’s, and partly through the early 90’s too. Again, another reason to avoid 3D backgrounds. I suspect the success of Alone in the Dark may also be responsible – as developers noticed that fusing 2D backgrounds with 3D models allowed for more details in the former, and… well, more details in the latter too, since the CPU didn’t have to share its polygon budget with anything else. A character is more dynamic than a background, of course, so the method was viable. And Lucasarts eventually proved that it could be used in pure graphical adventures too. In Japan, Capcom and Square took the same approach for Resident Evil and Final Fantasy, and we all know how successful they were. However, even with the occasional Clock Tower, you were still more likely to find a 3D adventure there.

I’m just going to assume it’s not okay.

Unfortunately, I’m not as familiar with what was popular in Japan in those years. It would be interesting to find out more. Visual novels were probably fairly big, while I don’t think Myst clones ever made a huge splash there – though some games like that did exist, with System Sacom adopting the style widely across its portfolio. Mansion of Hidden Souls and Lunacy were effectively the same thing, though their interface was adapted for consoles. But that type of adventure works better when you aren’t talking to many NPCs… basically the opposite of visual novels.

I kinda wish Mansion of Hidden Souls had been made on PC, if only so the color palette wouldn’t be murder on the eyes.

I also don’t think interactive fiction was ever big in the Japanese market, possibly due to the complexity of the grammar, which would have made the parser a nightmare. Zork 1 did manage to make it work, just barely. It’s a bit of a slog to play though. This might have been a reason why adventures with menus and multiple choices were more popular.

Maybe they were lucky to be spared the terror that was Beyond Zork.

I can’t even tell for sure when the two sides started to converge. In a way, they were never that separated. But at some point, the differences started to disappear. Maybe around the time when full 3D became more convenient than 2D for just about everything, so… the early 360 era? Incidentally, the western adventure market “died” and remained dormant for a while. By the time it reappeared, graphics had already become a lot better.

And at this point, it should be entirely possible to make a realRiven that looks better than the original. Anyone thinking about it?

If I were a tycoon on my deathbed, my last word would not be “Bobcat”

Has it really been ten months? I guess the blog was more active when I was testing old graphics cards, because without those, life is generally boring. Now that the old PC has been retired, there is not much I can do about that (I did try setting it up again some months ago… turns out the Soundblaster 16 is fried, and that makes me sad because that thing was a beauty). But here’s perhaps something else to tide me over, another chance to dig into the past. And that something is an Acer Aspire One 725.

The stuff dreams are made of. Wait, is it dreams or nightmares?

Some context first. Do you remember the netbook craze of 2007? At the time, everyone wanted one of those things, no matter how slow they actually were. The craze continued for a while, until users started getting savvy and realizing that perhaps they were missing out on, you know, not waiting for your life to end while the netbook was loading Youtube. Still, it went on for a time, and in 2012 yours truly got one for himself. In truth netbooks were already on the way out by that point, and I only bought it due to a work trip, but in the end I didn’t really use it much for work. In fact, it was bad enough that I didn’t use it for much of anything. Nine years later, I just decided to exhume it, to test the limits of human patience. And maybe play some old games.

The idea of using a netbook to play games is perhaps not as exciting as using an old PC to play games. While they are both slow, the old PC allows you to play things that would perhaps be incompatible with modern Windows. On the other hand, a netbook only lets you play modern games, except they run like crap. Seems like a rough deal.

This E1-2100 is the slowest CPU in the comparison tab, it sucked even on release, and it still runs circles around the C-70.

The specs are bad enough on paper. The Brazos platform, based on Bobcat cores, was one of AMD’s first mobile APU efforts, and it hails from an era when they thought it would be a good idea to sacrifice some CPU power to increase GPU performance. That’s great until you realize that there’s simply not enough CPU grunt to carry any meaningful games anyway. You know how well Half-Life 2 Lost Coast runs at 1366×768 and medium details? Pretty bad, 17fps. Now what happens if you drop it down to 640×480 and minimum details? It increases to a whopping… 19fps. See, in the end the GPU is not so terrible, but it’s pointless because you are going to be bottlenecked by the CPU anyway. At least the slideshow will look a little prettier at high resolution.
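
Those two results fit a very simple mental model: a frame takes as long as the slower of the two processors, and only the GPU’s share shrinks with the resolution. A toy calculation, with costs made up by me purely to match what I measured:

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Toy model: frame time is whichever of CPU or GPU takes longer.
    // Both costs are invented, tuned to reproduce the 17 and 19fps readings;
    // the point is the shape of the result, not the exact numbers.
    const double cpuMs = 52.0;         // fixed cost per frame (~19fps cap)
    const double gpuMsPerMpix = 56.0;  // scales with rendered pixels

    auto fps = [&](double width, double height) {
        double gpuMs = gpuMsPerMpix * (width * height / 1e6);
        return 1000.0 / std::max(cpuMs, gpuMs);
    };
    std::printf("1366x768: %.0f fps (barely GPU-bound)\n", fps(1366, 768));
    std::printf(" 640x480: %.0f fps (pure CPU limit)\n", fps(640, 480));
    return 0;
}
```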

Already I’m puzzled: depending on the version of GPU-Z, the specs are different. I’d be tempted to trust the more recent release, but all materials I can find online suggest the HD7290 is a mere rebrand of the HD6290, so the core count should be 80. I’ll look for more information.

Like I said, the HD7290 is a bit more capable than average. Note that average by 2012 netbook terms means horrible even by 2007 computer standards. It’s hard to find something to compare it to, but the specs make it seem relatively close to the old Radeon HD 2400 Pro: twice the shader count at half the core clock, more bandwidth but shared with the processor… should be close enough. Of course, being close to a very low budget GPU of its time is not much to brag about. And at least you were going to pair that one with a decent CPU (hopefully).

Classic Rogue, the apex of graphics for a netbook? Ok, let’s not be hyperbolic here.

On the plus side, I replaced the original 2GB DDR3-667 stick with a faster 4GB DDR3-800. I’m sure that’s not going to cause any issues or blow up the netbook or anything. I really needed the extra resources to make it a bit more bearable. And with it actually working again, I had a chance to finally test some games.

Jill of the Jungle is nice for sure, but perhaps we can afford to aim just a little bit higher.

At first I was pessimistic. But a quick test of MDK2 at 1024x768x32 made me unexpectedly hopeful. An average of 60fps? Yeah, some strange CPU frametime spikes, but by and large very smooth. So perhaps Lost Coast was too much, but a game from 2000 was well within reach. Or so I thought: it turns out MDK2 may just be very well optimized, because other games struggled.

Labyrinth of Time is quite colorful, and of course, the framerate is as fast as you can click the mouse button.

Serious Sam (The First Encounter, of course, because this thing would never manage HD) runs okay, but the Karnak demo doesn’t reach 60fps even at 1024×768, low settings, and few enemies around. I wonder what will happen when I get to a war scene. Painkiller similarly has to be set to low details and 800×600 to be playable, and note that playable here doesn’t mean 60fps at all, more like 30-40fps with drops. Of course, I’m talking cutting edge games for their time here. If we move to the lower end of the scale, Ys Origin runs very smooth, Sid Meier’s Pirates! is quite playable, and Crazy Taxi 3 is not terrible either. Mind, House of the Dead 2 is simply too slow. I guess not all Dreamcast ports are created equal (ok, technically Xbox for CT3, but it looked like a DC game anyway).

We enter the realm of 3D, and already MDK is making me regret it. I’m glad the netbook is at least a little above the… uh… Pentium Pro 200.

This computer might be good to play some games from the 90’s instead, but since it’s running Windows 8.1 (I tried updating to 10, it was even slower, so I eventually reverted) compatibility would be an issue. Modern ports to the rescue! Who doesn’t enjoy some GZDoom? Apparently the netbook doesn’t, because it runs quite choppy. Yes, freaking Doom doesn’t run well. It can run on a toaster, but not here. These modern ports tend to have higher system requirements than the original games, and unfortunately it shows. eDuke32 is good enough in classic rendering mode (not Polymer or Polymost alas, so no true 3D for me). Yamagi Quake 2 isn’t bad, but 800×600 is required to stay above 60fps. Anything DOSBox seems to only work well at 320×200.

Prince of Persia 3D runs badly. Ok, so it runs badly even on my regular PC sometimes. But on here, it runs badlier. Is that even a word?

I’ll have some chances to play these games in the future, as I’m going to move this netbook to a different place. Technically there’s nothing stopping me from just playing these games on my faster computer right now. But that will give me something to do later. It will also allow me to finally get some mileage out of a device I bought nine years ago: no nostalgia here (Charles Foster Kane would not approve), I just like getting some use out of what I paid for. And besides, to quote Soul Reaver, “you should respect the power bestowed by a limitation overcome”. But whoops, Soul Reaver doesn’t work. I should choose my quotes better next time.

FEAR, the apex of graphics for a netbook. It runs okay at 683×384 and minimum settings.

Machi – The Thread of Fate

I’m not going to write about Machi today – I already wrote something about it four years ago. But that was a long time ago, and my Japanese skills weren’t as good as they are now (note: they are definitely still not good, but at least better than they used to be). So in April, spurred by that lockdown which left me with very little to do, I took it upon myself to play the game again and document my experience on Twitter.

This project took me longer than I expected: after all, my first time through the game, skipping quite a bit of text and just giving up on what I couldn’t quite understand, took me “just” two months. This time I made a proper effort to try and understand every word, every phrase. And it was a better experience for it – everything from the bomber’s hints, to Sunday’s machinations, to Ichikawa’s struggles is so much better when it makes sense (yes I realize this sounds obvious). Even if it took me eight months to see it through, playing mostly on weekends.

The result is a thread around 1500 tweets long, which isn’t anywhere near a proper translation, but could at least give people a vague idea of what the game is like. It starts here:

Five days, eight people, one city.

I doubt anyone will actually bother to read such a long thread or my inane rantings inside. But given that the game will probably never see a localization, I wanted to at least give something to non-Japanese speakers. And, for those who are skilled enough in Japanese, hopefully give them something of a nudge toward experiencing it for themselves.

Chinchikoru!

The highlight of the adventure

I’ve never made a game (ok, I did dabble in Adventure Game Studio for a bit, but I’ll spare you the horror). If I ever made one, I think one of my questions would be… how do I tell people what can be interacted with?

This seems like a silly question, but knowing how to communicate with the players can’t have always been easy. In the earliest days it was perhaps entirely unnecessary: here’s a line of aliens, shoot them all. But as new genres were introduced, especially adventure games, things got complicated. Sure, it would have been nice if you could have used everything to do anything. But I’m afraid it wasn’t a time for Scribblenauts yet. So you had to somehow tell people what they could grab and what was nailed in place!

Ah, my favorite characters from The Hobbit: Gandalf, Thorin, and The Wooden Chest.

The earliest text adventures might have found their solution, but it was kinda conspicuous to tell people: “you are in a room. There’s all sorts of stuff in here. By the way, look at that piece of garlic on the table”. It makes sense, but why would your character’s eyes fall on the piece of garlic in particular? Maybe he’s hungry. As descriptions in adventure games got a little more flowery, especially from Infocom, they also got a little better at hiding this practice. But in the end, you do what you gotta do. Things just had to wait until graphics arrived.

Excuse me if this super important key is the same color as the ground, we are working with 16 colors here. Besides, you might as well give up with all those enemies. Cauldron didn’t pull any punches.

Even so, for a while, I don’t think it was all that important to tell players what was interactive. After all, if you could see it, there were probably two results: either it was bad and you’d die upon touching it, or it was good and you needed to walk over it and get it. Whichever it was, people could (usually) tell by themselves. And if not, well, trial and error always worked.

Adventure games were once again out of luck, as the amount of detail in the picture increased. Surely that flower pot is interactive? Nope. And that statue? Neither. But at least in SCUMM games there was one concession: interactive items had a small description appear in the interface. At least you were able to finally work out somehow that the hamster could go in the microwave oven. That is probably one of the earliest forms of highlighting. It also resulted in a lot of pixel hunting. Wish they had thought about that.

A shovel from a sign. Right out of a Looney Tunes cartoon, that.


That approach wasn’t always possible though, especially after The 7th Guest and Myst showed the world what you could do without, erm, much of an inventory or even a HUD. Aside from underground mazes, I mean. Once again the problem was telling players what could be clicked safely. Why, that was easy: since it needs to be clicked, your cursor has to pass over it. So just change the shape of the cursor! Not you, Lighthouse. For some reason you didn’t wanna do this and I hate you.

When you are passing above an interactive object in Zork Nemesis, the cursor goes from yellow to… a different shade of yellow. Hope you are good with colors.


All of this is well and dandy with 2D. But what about 3D? No mouse means you have to find other ways. Well, the earliest 3D games perhaps didn’t need to: for example, in Alone in the Dark, you can be pretty sure that if something is in 3D, it can be interacted with. So that’s one way to do it. Just use common sense there. Although this wasn’t always so. I wonder how many people noticed the books hidden in the library background.

Yes, you can interact with the toy horse too. It’s spooky.
JRPGs have their own way of dealing with things. They still do today, for the most part. Vivi is puzzled, but at least we are not.


But one day, 3D just had to become ubiquitous. Now everything was 3D. How do you deal with that? If everything is a model, what should the players check? And, as the amount of detail increased, the issue just became bigger. This is the problem we face today: in a world full of details, what are the important details?

There have been a number of solutions so far. Here are some of the most common.

The techniques adopted by early 3D games are still valid today. Echo Night really wants you to know that you can look at highlighted stuff, and I mean literally highlighted. Too bad it did that with just about everything. Oh well.
When you are out of ideas, just have a button prompt appear on the screen. Can’t go wrong with that. Although in some cases it feels a bit patronizing. I could probably have told by myself that the big lone signpost stone was important.
Let’s see now… that revolving item with sparkles coming out of it looks useful. And it is indeed, although a measly 1 armor doesn’t do much. But everything helps in Serious Sam.
A red outline for a health item. Did you know that in many shooters, red is the color of both health items and explosive barrels? Life and death in a single color. How peculiar, that.


Nowadays, the most common methods are the highlight, the outline, and the button prompt (the revolving item style has mostly fallen into disuse as games strive for slightly more realism). I can’t quite tell which is the most elegant solution. Highlighted items can be kind of jarring in a dark, moody survival horror game, and might be unnoticeable in a well-lit one. Button prompts don’t help until you are very close to the item in question; besides, nobody likes seeing half the screen covered. Flashing outlines, maybe? They are probably my favorite, although this might be just my Serious Sam bias speaking. But even I find them distracting sometimes.
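
Incidentally, the button prompt is probably also the easiest of the three to wire up, which might explain why it’s everywhere. A minimal sketch of the usual logic (every name here is hypothetical, not taken from any real engine): each frame, pick the nearest interactable that is both within reach and roughly in front of the player, and show its prompt.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float Length(Vec3 v) { return std::sqrt(Dot(v, v)); }

struct Interactable {
    Vec3 position;
    const char* promptText; // e.g. "Press E to open"
};

// Returns the interactable to show a prompt for, or nullptr if none qualifies.
// Two filters: the object must be within reach, and roughly in front of the
// player (dot product between view direction and direction-to-object).
const Interactable* FindPromptTarget(const std::vector<Interactable>& objects,
                                     Vec3 playerPos, Vec3 viewDir,
                                     float maxDistance = 2.0f,
                                     float minFacingDot = 0.7f) {
    const Interactable* best = nullptr;
    float bestDist = maxDistance;
    for (const auto& obj : objects) {
        Vec3 toObj = Sub(obj.position, playerPos);
        float dist = Length(toObj);
        if (dist >= bestDist || dist <= 0.0f) continue;
        // Normalize and check that the player is actually looking this way.
        Vec3 dir = {toObj.x / dist, toObj.y / dist, toObj.z / dist};
        if (Dot(dir, viewDir) < minFacingDot) continue;
        best = &obj;
        bestDist = dist;
    }
    return best;
}

int main() {
    std::vector<Interactable> objects = {
        {{0.0f, 0.0f, 1.5f}, "Press E to open"},
        {{5.0f, 0.0f, 0.0f}, "Press E to read"},
    };
    // Player at the origin, looking down +Z: only the first object qualifies.
    if (const Interactable* t = FindPromptTarget(objects, {0, 0, 0}, {0, 0, 1}))
        std::printf("%s\n", t->promptText);
}
```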

Maybe there’s a better solution out there. After all, games have tried everything, not just the methods I’ve mentioned here. Manny in Grim Fandango tilted his head toward any object that could be used or observed (but nobody liked this system). In Lunacy, you just press forward everywhere, and if it does something, good for you. The detective in 2Dark automatically uses any object in range. Not sure that’s very convenient. Anyway, I know there must be other games out there that did their own thing. If you know of any other examples, drop me a line in the comments.

Who knows, maybe it won’t even be important anymore in the future. Maybe games will be streamlined to the point where the characters will just do whatever they need to do in cutscenes. And if that happens, maybe they won’t just put hamsters in microwave ovens anymore.

Boulders and pebbles and polygons

October has started, and the end of the year looms ever closer. Imagine, the end of 2020. As if an arbitrary change in number could make any difference! But well, people like to be hopeful anyway, and who am I to dash their dreams? So let’s focus on what we do best.

Enemy Zero used sound cues to great effect, to create a truly tense survival horror. Questionable design choices will always rear their ugly head though.

October is also a time for Halloween, which is always a good excuse to replay some horror-themed games. My choice for this year was… well, more than one. Call of Cthulhu ended up being better than I originally expected, but still somewhat average. Enemy Zero, which I’m playing again now, is always good for both tension and complete frustration at Laura’s slow pace. You’d think she would act with a bit more urgency, what with murderous aliens very keen on eating human heads roaming the spaceship.

That is so reassuring.

Another game I finished is Echo Night. One of those titles dating back to a time when From Software still wasn’t known solely as the Souls guys, it has both a certain charm and a certain jank to it. But there’s something else to it, an attempt to use polygons to do everything. And I do mean everything. The game looks quite ambitious, in spite of its obviously low budget, but sometimes ambitions can’t be matched by technology. In this case, we are talking about, what else, polygons. The PS1 was so good at those. But when you only have polygons, and only a few of them, issues arise.

It’s just a model.

What I’m thinking of here is scale. Maybe it’s not something we think about today, since technology has come along to the point that we can reproduce every scene just about perfectly. Looking weird is not really an issue anymore, although the uncanny valley might perhaps be. But in the past, how would one have dealt with the issue of conveying a sense of scale?

Look at this screenshot: at first, you might be tempted to think it’s just an alley and you are merely a human walking in it. Look closely and you’ll find it’s not so. But you spend the entire game in first person perspective, so it’s hard to tell when things change.

You’d think first person games would be exempt from this problem: after all, we automatically assume that our point of view is a human-sized being, and perceive everything else starting from that point. Unfortunately, that only creates more problems when we are trying to show something that is, in truth, far bigger. While the solution is to simply put a few “known” objects in plain view, the illusion is nonetheless hard to dispel.

Showing a miniature road is always a good way to pretend that you are actually big. But cars don’t help the feeling of giantness when they look like Micromachines.

Even more so because your smaller-looking objects have to be done with very few polygons. You might well show a Lego-sized car next to the player character to indicate that you are actually piloting a giant mech, but due to the number of polygons and the texture size you could use, the car is going to look like actual Lego, and that doesn’t help.

That might be a 20 meters, 500 tons mech. Or maybe it’s a toy. Kinda hard to say.

Another solution might be to rely on different known factors. For example, if you are on a giant mech, employing a mech-like HUD would help reinforce the illusion. Hell, just show the mech in its entirety! But we are just so used to games being human-sized that sometimes it doesn’t matter.

The more details you add, the easier it is to fool the player. Although, in the case of Slave Zero, being a human-shaped character will always leave a lingering taste of normality. Later in the levels, when cars disappear and you start going around giant sewers, it becomes harder to tell you are not just an armored person.

Especially in the past, environments were generally limited in what you could render. Oh, you’re in a city? Well, here’s a few buildings. In a canyon? Have some red rocks. We are so used to these things being at human scale that, when we see them at giant scale, we tend to perceive them as human-sized all the same.

Today, we are able to show so many details that things finally work.

Malcolm said in Jurassic Park (the book, not the movie: he didn’t say much of anything in the movie, aside from some memetic lines) that things tend to be the same regardless of the scale factor: a boulder won’t be too different from a mountain, and so on. I think he used it to explain some kind of fractal theory. Well, I don’t know much about quantum maths, but I’d argue he was right. Those big environments don’t look very different from small ones. How we perceive them, however, is a different matter.

An ode to the portable spinoff

The summer heat takes us all hostage, and even with the current health situation, there are probably going to be quite a few people taking some time off for a vacation somewhere. And what’s better than your trusty Switch console to spend time during those long sunbathing sessions? You could play Crysis or something, maybe even continue Divinity 2 from your Steam save. But as we know, it wasn’t always like this.

A handheld port of a modern remake of a 3D remake of a 2D adventure.

The Switch may have changed things now, but for many years, the question was always the same: what compromises have to be accepted once you go from a power cord to batteries? (let’s ignore super bulky gaming laptops, they don’t look like the kind of thing you could easily enjoy on the beach)

Wait, I don’t remember what happened on the sides.

Far lower power and fewer buttons at your disposal meant that developers were generally unable to simply port console games to a handheld. This was perhaps unfortunate at times. While your mates at home were enjoying Super Mario Bros 3, your Game Boy Color had to make do with Super Mario Bros Deluxe and its reduced screen view. And this problem continued long into the next century. Got a DS? Fine, have a Super Mario 64 upgraded port with worse controls.

Wait, I don’t remember this boss.

Even when we got slightly more powerful handhelds (the PSP made a valiant attempt for sure), devs knew that you could only do so much with six buttons and one stick. When you think about it, the PSP tried to sell itself not as offering console games on the go, but “console-like” games. There’s quite a bit of difference. Generally, “console-like” games were seen as simply not good enough. So we got handheld games instead, and those were the true bread and butter of portable consoles.

Sometimes, the handheld game was so successful, consoles started getting handheld-like games.

There was a third side to this story though. Sometimes, when they were going to launch their next big thing, publishers wanted to cover the games market as much as possible. This meant including handheld consoles. But how would they do that, when handhelds could never run the same game? Sometimes it was just a port of an older game (Ubisoft knew this all too well, see for example Rayman 3D or Splinter Cell Chaos Theory on the 3DS), but other times it was a spinoff game. And those were usually way better.

Wait, I don’t remember those mission instructions. Can you tell that Ubisoft wanted to market Conviction too?

Case in point: did I say that Ubisoft knew this all too well? They also knew about the other possibility, as shown by Prince of Persia 2008 and its DS spinoff, The Fallen King. While they are in the same continuity, more or less, the two games couldn’t be more different. One is on rails, the other is a proper platformer. One doesn’t let you die, the other is all too happy to kill you for a failed jump. One has a snarky girl as your sidekick, the other has a moody and serious dude. The DS spinoff even resembles the original series more, thanks to the prince’s white outfit and a few returning setpieces (timed door switches, potions, etc).

Stylus battles don’t quite have the same appeal as the old sword duels, but we’ll take what we can.

Since portable spinoffs were given a lower budget, they almost never looked as impressive as their console cousins. Oftentimes they weren’t as good either, since the publisher would get a less experienced team to work on this side offering. The Fallen King has many faults, though ultimately I enjoyed it more than the console version. In the end, outside of the AAA realm, devs were probably given more freedom to experiment.

Something weird about the Fallen King story: they say it takes place after the console game and its Epilogue DLC, but it was released together with the console version, while the DLC only came out a few months later. So people were effectively given the first and third chapters of a trilogy, with the second chapter arriving at the end?

The Switch may have changed things now. Sure, Doom Eternal will look obviously downgraded from the console versions, but it will be the same game. That is the reality of today, and it’s clearly better: no surprise the Switch is selling so much. But let’s not forget the time when handhelds couldn’t keep up with consoles, and instead of giving up, they sidestepped their limitations to give us something completely different.

And sometimes, it gave us games we’d rather forget about. I guess it’s fitting that Turok Evolution, being terrible on console, was terrible on the GBA too.

The Fresh Prince of Persia, or: animation over matter

Have you ever considered what makes a game look good? Maybe it’s the backgrounds, some will say it’s the lighting, others yet the texture work. Well, we could make a case that it’s about all of that together. But perhaps the most important thing is the quality of the animations.

Think back to the 80’s. The Atari 2600 was on its way out, and the IBM PC was about to conquer the collective minds of humanity with its… uhm… 16 colors and no hardware scrolling. Okay. So at the time, animations weren’t really a concern. If your game had graphics at all, you were probably in a good place. King’s Quest let you move your little dude Graham! Amazing stuff. Even a powerhouse such as the NES wasn’t really concerned with animations. Difficult to hold several animation frames when a few lines of code already filled up your cartridge. But 1989 eventually arrived and it was time for something more. It was time for… Prince of Persia.

That looks like a white leotard, actually?

A standard walking animation must be easy to make. Like, you make a standing sprite, then a sprite with the left leg forward, and then another one with the right leg forward. Mix them up a bit and you get a walking animation! (disclaimer: I have no idea how sprite creation works, I imagine it’s pretty hard actually). So when people saw the life-like running and jumping of the prince, they were amazed. He didn’t just look realistic – it was also impressive that he would flow from running to jumping in a natural manner. What was that? Did they just waste their entire development budget on animations? Surely this was the future.
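
To be fair, the playback half really is that simple – the hard part is drawing the frames. A minimal sketch of a walk cycle (frame names made up by me): step through the frames on a timer and wrap around at the end.

```cpp
#include <array>
#include <cstdio>

int main() {
    // A walk cycle is just a loop over a handful of frames. The timer decides
    // which frame is current; drawing the sprite is someone else's problem.
    const std::array<const char*, 4> walkCycle = {
        "stand", "left_leg_forward", "stand", "right_leg_forward"};
    const double secondsPerFrame = 0.15; // ~7 animation fps, old-school pacing

    // Simulate one second of game time in fixed 1/60s ticks.
    double elapsed = 0.0;
    for (int tick = 0; tick < 60; ++tick) {
        elapsed += 1.0 / 60.0;
        std::size_t frame =
            static_cast<std::size_t>(elapsed / secondsPerFrame) % walkCycle.size();
        if (tick % 10 == 0) // log a sample instead of spamming every tick
            std::printf("t=%.2fs -> %s\n", elapsed, walkCycle[frame]);
    }
}
```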

Sword duels in 2D!

You could argue that such a big focus on making sure the animations would flow smoothly meant that there was no way you could make a responsive game. If you need to make your characters jump in a realistic manner, then instantaneous jumps such as those seen in Mario and Sonic are impossible, right? Well, the future would eventually disagree, but we’ll go over that later. For now, let’s gape in amazement at those sword duels. In 1989, there was little better.

If you want to combine videogames with the golden era of cartoons, your animations better be up to par. Cuphead delivered.

Of course, animation being important is not news, and not just for games. People often think of Disney’s renaissance era movies and marvel at just how good the animation quality was back then. And if you remember Robot Carnival, one of the short stories had such detailed animations for its characters that it actually crossed the uncanny valley. A great-looking cartoon will be praised… but nobody will ever say that Arnie in The Running Man had such great running quality. That would be ridiculous. Because you see, a cartoon is 2D, and real life is 3D. Nobody expects anything other than life-like movements in real life. And said real life was coming fast to the videogames world.

Of course, there was the opposite approach too. Myst was barely animated. And it was glorious.

It wasn’t as sudden as I’m making it sound. Another World was a 2D game, but it used actual polygons to great effect. Just like Prince of Persia, that game prided itself on animations that looked way better than what you could achieve with sprite-based work (but notice how it was equally unwieldy). And of course games like Alone in the Dark on one hand, or System Shock on the other, did a great job of mixing 2D and 3D. But we were fast approaching a conundrum. The movements in AITD were great for their time, but barely acceptable after a while. And nobody cared that the aliens in Duke Nukem 3D had maybe ten frames of animation total. But true 3D was a different matter.

That looks like a cool backflip, although also totally useless during actual tomb raiding.

When your entire world is 3D, you have to keep up appearances. And Tomb Raider showed the world what smooth animations meant in 3D space. Lara Croft might have been defined by her pointy chest for many, but for many others she jumped and ran and grabbed ledges and somersaulted. Yes, the last bit was too much, but it all helped. It was also important to see that, just like in Another World, the animations weren’t just good by themselves: the actions all flowed into each other quite naturally, within the limits of the era of course. And now that was the new standard. You can get away with iffy animation in 2D games. Maybe some early 3D stuff such as Quake would also get away with relatively few frames for its models, and people were too busy shooting fiends to notice. But an adventure game in 3D? You’d better have good animations, or you were going to get laughed out of the classroom.

Sword duels, finally in 3D!

So what about the originator, Prince of Persia? When it made the jump to 3D, it knew it had to deliver in that regard. And Prince of Persia 3D is… interesting. I’m still going through it, and there’d be much to say about this ambitious yet broken mess of a game, but here and now I’ll just point out that it even one-upped Tomb Raider in how the titular prince moved. All actions follow each other in a natural manner. Sword fighting is quite a sight, a flurry of dramatic poses and blows exchanged, with light trails and tense music for good measure. Yet the game inherits the same problems as its predecessor and even amplifies them. There’s so much lag in your actions, a running jump requires a great deal of momentum, and to be fair even the sword duels don’t really work outside of the supposed spectacle. Prince of Persia 3D wanted to do too much, but Tomb Raider was already skirting the limits of how slow controls could acceptably be. And thus, our prince failed.

Sure, lifting yourself up like this might be realistic, but it is slow. Even slower than Lara, and she was already slower than my grandma.

Modern technology eventually gave us motion capture (although it was pioneered super early by Virtua Fighter 2, it didn’t become popular until the PS2 generation). Suddenly life-like animation became the norm, and devs also started seeing the value of snappy controls. Compare Shadowman to Tomb Raider: less precise maybe, but much more playable. In fact, there’s actually something to be said against the search for the most realistic animations (it is one of the reasons why Red Dead Redemption 2’s controls feel so heavy: you gotta wait for your character to actually complete his actions).

Sword duels, in 3D… but also 2D… it’s come full circle.

But what about 2D? It may have become old news after polygons rocked our collective socks, but many devs didn’t stop using it. And then, eventually, Abe’s Oddysee showed that it was possible to make a game similar to Prince of Persia actually responsive. Abe runs and jumps the way you wanted the prince to run and jump. And then, possibly inspired by that game, the Prince of Persia Classic remake came out in 2007. It’s almost a different game: even though the level layouts and basic gameplay loop are the same, the Oddworld feel permeates the whole thing. Now it’s the prince who runs and jumps like Abe did. Admittedly, it makes for a more playable experience (also because you get some quality of life enhancements for a much easier time), even if the graphics were old-looking and the animations at that point didn’t wow anyone anymore.

And he shall be the smoothest prince at that.

I’m in a Prince of Persia mood right now. I still have to get through POP3D. I want to try and finish it, despite its issues. And then I’ll have the 2008 game and the DS spinoff… quite the haul. Who knows whether those more modern games had the same effort put into animating the prince’s movements. Given their predecessors, however, I’m hopeful. If nothing else, that my player character will move in a manner that befits the Prince of Persia.

Writing about whatever comes to mind