Ah wow, seems as if the other 3 comments are spam? ;-/
This Part 2 largely seems to reiterate your talking points from Part 1.
Maybe refined? I mostly agree with the predictions, even if I think some of the technologies involved are absolutely abysmal and should be abandoned.
Take LLMs, for example: machine translation is intrinsically awful between some language pairs (e.g. Japanese and English; see also: https://www.youtube.com/watch?v=4J4id5jnEo8).
So-called "AI" has been used in the game industry for decades. I don't doubt that some game devs may use cloud-based LLMs for NPCs, but it will be awful. There are reasons I gave up on the video game industry, even though I also worked for The Museum of Art and Digital Entertainment, a 501(c)(3) nonprofit playable video game museum.
For example, Todd's Adventures in Slime World (1992, Atari Lynx, Epyx/etc.) had "music" that was "AI" generated, if you could call it that. It certainly was audio; it was also not good. Probably a great way to cut a corner and save some dev costs rather than pay a musician (because everyone knows real musicians are all greedy... wait, no, they aren't, though beware the "biz" side of the music industry and its publishing sorts).
Fatal Labyrinth for the SEGA Genesis (1991, aka 死の迷宮, lit. "Labyrinth of Death") used procedurally generated dungeons/levels, and the novelty wears off almost instantaneously. It's an awful game.
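To be clear about the flavor of "AI" I mean there: dungeon generation of that era was usually a simple randomized carving routine. Here is a minimal "drunkard's walk" sketch in Python, purely illustrative (I have no idea what Fatal Labyrinth's actual algorithm was, and the function name and parameters are my own invention):

```python
import random

def carve_dungeon(width, height, floor_target, seed=None):
    """Carve a dungeon by random walk ('drunkard's walk'):
    start in the middle and stumble around, turning wall tiles ('#')
    into floor tiles ('.') until floor_target tiles are carved."""
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]
    x, y = width // 2, height // 2
    grid[y][x] = "."
    carved = 1
    while carved < floor_target:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # clamp to the interior so the map keeps an outer wall
        x = min(max(x + dx, 1), width - 2)
        y = min(max(y + dy, 1), height - 2)
        if grid[y][x] == "#":
            grid[y][x] = "."
            carved += 1
    return ["".join(row) for row in grid]

for row in carve_dungeon(24, 10, 60, seed=1):
    print(row)
```

Every new level is "different," technically, which is exactly why the novelty wears off: it's the same blob with the walls shuffled.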
So my prediction would be more like: if LLMs are used for cloud-based NPCs? They'll just continue to be awful. There are already examples, I think? Twitch streamer QueenBee was playing some "AI generated" game on stream a year or two ago. It did not seem good. I do not remember its name.
So-called "AI" has been used in video games for decades, but more often than not as enemy logic. Take 侍魂 ("samurai tamashii", aka Samurai Spirits, aka Samurai Shodown): its enemy "AI" is so robust that, for me at least, I had to start the game playing "stupidly" to convince the "AI" to go easy on me, because each opponent gets progressively more difficult. If you show off at the start, the difficulty ratchets up so quickly that finishing the game, at least with my inept reflexes, was impossible.
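That ratcheting behavior can be sketched as a one-sided "rubber band": difficulty climbs faster when you win than it falls when you lose. This is my own toy model of how such adaptive difficulty tends to feel, not SNK's actual logic (the function name and step sizes are assumptions):

```python
def update_difficulty(level, player_won, step_up=2, step_down=1,
                      lo=1, hi=10):
    """One-sided rubber band: each player win raises difficulty by
    step_up, each loss lowers it by only step_down, clamped to
    [lo, hi]. Winning early therefore ratchets the game up fast."""
    if player_won:
        level += step_up
    else:
        level -= step_down
    return max(lo, min(hi, level))

# A player who dominates the first rounds hits the cap quickly:
level = 1
for won in [True, True, True, True, True]:
    level = update_difficulty(level, won)
print(level)
```

With an asymmetric step like this, "playing stupidly" at the start is genuinely the optimal strategy, which matches my experience with the game.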
A lot of early Neo Geo games had tutorial/training modes too.
Which reminded me of when Doug Engelbart described how J.C.R. Licklider cut off Engelbart's funding for NLS. In an interview Robert X. Cringely did with Doug (which I have archived offline and, last I checked, could no longer find online), Engelbart described how "Lick" (Licklider's nickname) seemed to have caught the "AI" bug. When Lick asked how NLS was going, Engelbart said something to the effect of, "Great! We just hired some women to teach customers how to use the system." Lick apparently responded: "The systems should be training the users, not people."
Paraphrasing Doug further, he said something to the effect of: "In the decades since then, I have yet to encounter a system capable of training a novice user. But Lick seemed convinced we were missing out on some fundamental design requirement."
I do agree, in that I doubt most code in memory-"unsafe" languages will benefit from being rewritten in memory-"safe" languages, but I do think it is plausible and achievable to augment existing code bases with tools that improve memory safety without rewriting them entirely. Admittedly, I don't think projects such as Safe-C and Safe C++ are targeting compilers at the right layer to achieve desirable results (where "desirable" would be something closer to: update my LLVM/clang or GCC, recompile, done).
The SBOM prediction stuff, which you also addressed in Part 1, I think I probably agree with too, though I am still fixated mostly on hardware-level supply chain attacks and consider software the realm of dependency hells.
Cool Valentine's Day video posted today on YouTube, BTW. I hope you have a lovely one; seeing your smiling face brightened my day a little bit, so thanks for that!
Lots of love!