
Prediction code for rollback is somewhat akin to branch prediction in that the dumbest solution works surprisingly well, but there are incremental efficiency gains to be had.
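For context, the "dumbest solution" in rollback netcode is usually just repeating the last confirmed input from the remote player. A minimal sketch of that idea (function and names are illustrative, not taken from any particular engine):

```python
# Naive rollback input predictor: assume the remote player keeps holding
# whatever they pressed on the most recent frame we actually received.

def predict_inputs(confirmed_inputs, target_frame):
    """Guess the remote input bitmask for target_frame.

    confirmed_inputs: dict mapping frame number -> input bitmask,
    containing only frames actually received from the network.
    """
    known_frames = [f for f in confirmed_inputs if f <= target_frame]
    if not known_frames:
        return 0  # no data yet: predict a neutral input
    # "Last input prediction": repeat the newest confirmed input.
    return confirmed_inputs[max(known_frames)]
```

Because players hold directions and buttons for many frames at a time, this trivial predictor is right far more often than chance, which is why rollback works as well as it does.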

I wonder if any fighting games have thought to train a neural network per player to try to predict the player's actions N frames ahead. The neural nets could be used for smoother netcode, but if the accuracy got high enough they could, e.g., allow play to continue after one player disconnects, be used to estimate Elo by having the neural nets play each other before the match, or serve as AIs you could play against in offline mode.
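As a toy stand-in for such a per-player model, here's a frequency-table predictor with roughly the interface a neural net would have (observe confirmed inputs, predict the next one). Everything here is hypothetical: the class name, the encoding of inputs as small ints, and the context length.

```python
from collections import Counter, defaultdict

# Toy per-player input model: predict the next input as the most common
# continuation seen after the current short input history. A real system
# would swap the frequency table for a trained network.

class InputPredictor:
    def __init__(self, context=3):
        self.context = context                 # how many past inputs to condition on
        self.counts = defaultdict(Counter)     # history tuple -> next-input counts
        self.history = []

    def observe(self, player_input):
        """Record a confirmed input and update the model."""
        key = tuple(self.history[-self.context:])
        self.counts[key][player_input] += 1
        self.history.append(player_input)

    def predict(self):
        """Predict the next input given the current history."""
        key = tuple(self.history[-self.context:])
        if not self.counts[key]:
            # Unseen context: fall back to repeating the last input.
            return self.history[-1] if self.history else 0
        return self.counts[key].most_common(1)[0][0]
```

A player alternating two inputs, for example, is predicted perfectly after a few observations, while the fallback degrades gracefully to last-input prediction.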



You probably don't want to do this. Players will get quite reasonably upset if the AI predicts that the opponent will use an attack, so on their screen they hit the opponent out of the attack, and then a rollback occurs because the opponent had actually blocked.

Some games like Killer Instinct have AIs that learn to play like a certain player. It's pretty cool!


That could be accounted for by having a different cost function for each type of misprediction and heavily penalizing the ones that decrease enjoyment of the game.
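A minimal sketch of what that might look like, with made-up action labels and penalty weights; a real system would fold these weights into the training loss rather than compute them after the fact:

```python
# Asymmetric misprediction penalties: prefer "safe" errors (e.g. guessing
# block) over errors that cause jarring rollbacks like the attack-vs-block
# case described above. Action names and weights are purely illustrative.

COST = {
    # (predicted, actual) -> penalty
    ("attack", "block"): 10.0,  # predicted a hit that was actually blocked: worst case
    ("block", "attack"): 2.0,   # conservative miss, far less jarring on rollback
    ("block", "idle"): 1.0,
    ("idle", "attack"): 3.0,
}

def weighted_loss(predictions, actuals):
    """Average asymmetric penalty over a batch of (predicted, actual) pairs."""
    total = 0.0
    for pred, act in zip(predictions, actuals):
        if pred != act:
            total += COST.get((pred, act), 1.0)  # default penalty for other misses
    return total / len(predictions)
```

Training against this instead of plain accuracy biases the predictor toward mispredictions players barely notice.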


That's an amazing idea. I wonder how long a player would have to play before you could train a neural net to play like them.

In a single-player game you could also create an enemy NPC that uses that same neural net, for sort of a "Dark Link" effect where you have to play against yourself. Would be awesome for chess also.

Lots of possibilities.


> I wonder if any fighting games have thought to train a neural network per player to try and predict the player's actions N frames ahead.

The entire point of playing a fighting game is to attempt to solve this problem. A good player, by necessity, can't be accurately predicted; if they could, they'd be a bad player.


A good player can't be accurately predicted by a human.


First, a good player can't be accurately predicted at all; the conclusion from game theory is direct and clear. This is a case where a strategy involving picking moves at random is superior to any deterministic strategy.

Second, your rebuttal is not especially good support for the idea that we should be trying to solve the problem with a technology specifically designed to imitate humans.
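The first point can be illustrated with matching pennies, the textbook mixed-strategy game: any deterministic strategy is fully exploitable by an opponent who has learned the pattern, while uniform random play guarantees the game's value (zero) no matter what the opponent does. A toy sketch (names illustrative):

```python
import random

# Matching pennies: you score -1 when the opponent guesses your move,
# +1 when they don't. A deterministic player loses every round to an
# opponent who has learned the pattern; a uniform-random player averages
# 0 against ANY guessing strategy.

def payoff(my_move, opponent_guess):
    return -1 if my_move == opponent_guess else 1

def deterministic_score(rounds=1000):
    # The player always picks 0; the opponent knows this and guesses 0.
    return sum(payoff(0, 0) for _ in range(rounds))

def random_score(rounds=100_000, seed=0):
    # The opponent's guess is irrelevant against a uniform-random player;
    # fix it at 0 for simplicity. Expected score per round is 0.
    rng = random.Random(seed)
    return sum(payoff(rng.randint(0, 1), 0) for _ in range(rounds)) / rounds
```

The same logic applies to any read/counter situation in a fighting game: a mixed strategy denies the opponent (human or model) anything to learn.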


How good are human players, by that metric?


That's a fair question. I know of related research showing that chimpanzees are much better at achieving the correct distribution of strategies in asymmetrical-payoff games than humans are. The obvious implication is that a typical human isn't that good at being unpredictable.

The distribution of people who enjoy playing fighting games will probably look somewhat different, though.


There are only a few key moments where players need to be unpredictable to win a game. Almost all of the rest of the time, they are executing the predictable consequences of those unpredictable choices.

i.e., imagine a player running toward a ledge spanning a gap. The naive interpolation would have them continue running, fall off the ledge, and die. A smarter system would realize that almost all of the times they've run to the edge of a ledge, they've jumped, so the AI could jump for you and then later confirm that the prediction was correct. They could even jump at the median of all of your previous jumping choices and then lerp your position over time so you land at the correct point based on your actual jump.
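A sketch of that correction step, assuming positions are 2D tuples and the blend window is a tunable frame count (all names hypothetical):

```python
# After the real jump input arrives, blend the predicted (median) trajectory
# toward the authoritative one over a few frames, so the remote player
# visibly converges onto where they actually landed instead of snapping.

def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def corrected_position(predicted_pos, true_pos, frames_since_confirm,
                       blend_frames=10):
    """Converge the displayed position onto the authoritative one."""
    t = min(frames_since_confirm / blend_frames, 1.0)
    return (lerp(predicted_pos[0], true_pos[0], t),
            lerp(predicted_pos[1], true_pos[1], t))
```

The blend window trades smoothness for accuracy: longer windows hide the misprediction but leave the displayed position wrong for more frames.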


> They could even jump at the median of all of your previous jumping choices and then lerp your position over time so you land at the correct point based on your actual jump.

I assume the interpolation relates to something displayed on the screen? The idea makes me kind of uncomfortable, because it seems like it would confuse players by causing identical jumps to display different results. If you only learn about jumping by watching the departure point and the landing point, fine, but if part of how you get used to jumping is by watching the animation, this sounds like it could make things a lot harder.

(If the player sees position data calculated locally, and the interpolation is just a process for bringing the remote idea of where the player is into line with the local idea of where he is, that sounds much better.)


This is intended for viewing some other (remote) player's jump (during a disconnect). It wouldn't touch your own (local) jump.

It's the equivalent of letting an AI take over the player when the player drops out, with the AI intended to replicate the dropped-player's playstyle until he rejoins. In short enough time-spans (disconnect-duration) you have some hope of being exactly correct.

And if you were 100% correct at predicting the remote player all of the time, you wouldn't even need the other player: you could just run the AI, stay offline, and "pretend" there's another player.


There are many situations where a good player can be predicted because there is a clear best option (or a good option that advance knowledge won't invalidate).


Definitely possible, but I doubt it could be trained and run performantly within a single frame, which is what would be required. The other option would be to save all your replays, offer an option to train the model on a server over several days, and then let you share your AI with others who could download it. That would probably only amount to a fancy training-mode AI, but it could still be useful.


Well over ten years ago I read a research report that claimed to support many dozens of players in Quake III using a variety of techniques, including replacing linear dead reckoning with a traditional AI model for each player.



