Forrest Smith has written an article about the problem of rampant cheating (http://altdevblogaday.com/2012/04/02/extravagant-cheating-via-direct-x/) in PC games. I've wanted to address this topic because online cheating has affected my gaming experience in the past (although not any longer, because programming has taken up most, if not all, of my gaming time). Cheating in PC games can be seen as a classic Black Hat vs. White Hat battle that never ends. Forrest mentions data analysis as a tool to measure the likelihood of a player using one or more cheats to gain an unfair advantage (which should be noticeable in the raw game data). I'd like to add a few things.
I agree that data analysis is probably the best way to detect cheaters. This is because it makes it possible to detect players who "toggle" subtle cheats that are difficult to notice (e.g., a realistic aimbot, or a wallhack): these players turn a cheat on for 2 minutes, then off, then on for another 2 minutes, then off again, and so on. While it would be difficult for humans to detect such players, computers, with enough data points, could easily see consistent, significant fluctuations in game data and raise a red flag for further analysis by a human.
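To make the idea concrete, here is a minimal sketch of that kind of fluctuation check. Everything here is illustrative: the stat (per-window shot accuracy), the window size, and the swing threshold are all made-up parameters, not anything from Forrest's article or a real anti-cheat system.

```python
# Hypothetical sketch: flag "toggled" cheats by looking for large swings
# in a player's per-window accuracy. All names and thresholds are
# illustrative, not taken from any real anti-cheat system.

def window_accuracy(shots, window=50):
    """shots: list of 1 (hit) / 0 (miss). Returns accuracy per full window."""
    return [
        sum(shots[i:i + window]) / window
        for i in range(0, len(shots) - window + 1, window)
    ]

def flag_toggler(shots, window=50, swing=0.4):
    """Raise a red flag if accuracy swings by more than `swing` between
    any two windows; a human reviewer would take it from there."""
    acc = window_accuracy(shots, window)
    return bool(acc) and (max(acc) - min(acc)) > swing
```

A legitimate player's accuracy is noisy but roughly stable, so the spread between windows stays small; a toggler who alternates aimbot-on bursts with normal play produces windows near 90% next to windows near 20%, which trips the flag.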
Though data analysis is excellent, I think that the amount of computation (and raw data) needed to automatically and intelligently analyze every player in real time is beyond the current state of technology, or at least beyond economic feasibility. Even if trusted supercomputers could crunch player data, no game company is going to throw thousands, if not millions, of dollars at checking for cheaters, especially if the game is over two or three years old.
The best way to prevent cheating is to use a network of trust. This is the magic phrase that comes up over and over again among those engaged in real security work; public-key cryptography is an excellent example. You could easily have a server that uses current state-of-the-art public-key cryptography to ensure that, for example, only players who share at least 2 other players in their trusted keyring can log in to play. The effect of this is that you are simulating real human behavior: people tend to avoid taking risks (in this case, the possibility of a ruined gaming experience because of cheating) with complete strangers. You could even enforce draconian measures, such as, "if a trusted person has been confirmed as a cheater, all other associated players will have their trust level reduced." Such a rule would severely dampen the spread of new cheats, as the banned player would be prevented from playing on trusted servers, and all of his real-life friends who added him to their keyring would have their "trustworthiness" level reduced. In short, real-life peer pressure would dramatically affect the game.
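The two rules above (shared-keyring login and trust reduction for a cheater's associates) can be sketched in a few lines. This is a toy model of the trust bookkeeping only, with no actual public-key cryptography; the class name, trust values, and penalty are all my own assumptions for illustration.

```python
# Hypothetical sketch of the network-of-trust rules: a player may join
# only if at least 2 members of their keyring are already trusted by the
# server, and a confirmed cheater drags down the trust level of everyone
# who vouched for him. No real crypto here -- just the trust bookkeeping.

class TrustServer:
    def __init__(self):
        self.trust = {}    # player -> trust level (1.0 = fully trusted)
        self.keyring = {}  # player -> set of players they vouch for

    def register(self, player, vouches=()):
        self.trust.setdefault(player, 1.0)
        self.keyring[player] = set(vouches)

    def can_join(self, player, min_shared=2):
        # Count keyring members the server still considers trustworthy.
        shared = [p for p in self.keyring.get(player, ())
                  if self.trust.get(p, 0.0) > 0.5]
        return len(shared) >= min_shared

    def ban_cheater(self, cheater, penalty=0.5):
        self.trust[cheater] = 0.0
        # The draconian rule: everyone who vouched for the cheater
        # has their own trust level reduced.
        for player, ring in self.keyring.items():
            if cheater in ring:
                self.trust[player] -= penalty
```

The interesting property is the second-order effect: banning one cheater can silently push his vouchers below the trust threshold, so the players he might have shared the cheat with lose access too, which is exactly the peer-pressure dynamic described above.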
Sadly, the one problem with the network of trust is that it prevents millions of players from playing against each other immediately, because, let's face it, the vast majority of online players one encounters are complete strangers. Still, I think this method is a viable option; I am 99.999999% sure that there are thousands, if not tens of thousands, of players out there who would be willing to sacrifice maxed-out (full-player) servers in order to enjoy hours of trusted gaming on a system that no one in the world, not even world governments, is able to hack.
UPDATE April 21, 2012: Fix grammar/typos.