So, Shamus Young made two posts about AlphaStar’s attempts to create an AI that can play Starcraft II: how it managed to beat human players, and then how a human player exploited one of its tendencies to beat it. There was a lot of discussion about that in the comments, and that made me want to do AI again after it being a … few years since my last attempt. And, of course, I clearly have lots of time to spare and no other projects that I want to look at that I could be doing instead of that. Thanks, Shamus!
Anyway, I went out and bought some books on the subject: two detailed books, one about how to do AI in general and one about how to do Deep Learning in Python, and a third, more technical book on Deep Learning that I would have already started reading except that it starts with Linear Algebra, which is not something I want to review while watching curling …. So I have those to get to, but in pondering it and reading the comments another idea percolated in me.
The AI there focuses a lot on neural nets, as far as I can tell. Now, neural nets have been around for ages, and have waxed and waned in their popularity for AI due to their rather well-known weaknesses (I’ll talk more about that in general in a later post). But one thing that kept coming up, especially when the exploit was revealed, was “Can’t you just explain that exploit to it, or make a rule in it to deal with it?” And the answer is that you can’t really do that with neural nets, because they don’t explicitly encode rules and don’t really have an “Explain this to me” interface. What you can do is train them on various training sets until they get the right answers, and what often makes them appealing is that they can come to right answers whose reasoning you can’t figure out, which makes them look smarter even though they can’t figure out the reasoning behind those answers either. So, perhaps, they can be very intuitive, but they cannot learn by someone carefully explaining the situation to them.
But inference engines, in theory, can.
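To make that a bit more concrete, here’s a minimal sketch of the kind of thing I mean: a toy forward-chaining inference engine where the knowledge is explicit rules. All of the rule and fact names here are made up for illustration; the point is just that “explaining” an exploit to such a system amounts to adding one more rule.

```python
# A minimal forward-chaining inference engine sketch. The fact and rule
# names are invented for illustration, not taken from any real system.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# "Explaining" the Starcraft exploit to the engine is just one more rule:
rules = [
    ({"enemy_units_split", "my_army_mobile"}, "regroup_before_engaging"),
    ({"enemy_harassing_workers"}, "defend_mineral_line"),
]
facts = forward_chain({"enemy_units_split", "my_army_mobile"}, rules)
# facts now contains "regroup_before_engaging"
```

Contrast that with a neural net, where the equivalent fix is retraining on new data and hoping the right behaviour falls out.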
There’s also a potential issue with using a game like Starcraft II for this, because, as people have pointed out, the intelligent parts of it (the strategy) can get swamped by simple speed of movement or, in the vernacular, “clicking”. As is the case in curling, the best strategy in the world doesn’t matter if you can’t make the shots, and in this case while you’re working out that grand strategy someone who builds units faster and maneuvers them better will wipe you out. A Zerg rush isn’t a particularly good strategy, but if you build them fast enough and can adjust their attack faster than your opponent can, you might win, even if your opponent is a better strategist than you are. In short, Starcraft II privileges tactical reasoning over broad strategic reasoning, and while tactical reasoning is important (and arguably even more so in an actual battlefield situation), broad strategic reasoning seems more intelligent … especially when some of those tactical considerations come down to how quickly you can get orders to your units.
So what we’d want, if we really wanted intelligence, is a game where you have lots of time to think about it and reason out situations. There’s a reason that chess is, or at least was, the paradigm for artificial intelligence (with Go recently making waves). But that game can be solved by look-ahead algorithms, and look-ahead algorithms are a form of reasoning that humans can’t really use, because we just can’t remember that much (although it has been said that chess grandmasters do, in fact, employ a deeper look-ahead strategy than most people are capable of. And now I want to start playing chess again and learn how to play it better, in my obviously copious spare time). There’s also an issue that it and Go are fairly static games (as far as I can tell, because I’m not a Go expert), and so things proceed pretty much in order from move to move, and aren’t very chaotic or diverse.
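For the curious, that kind of look-ahead is easy to sketch: a toy minimax over a hand-made game tree. The numbers here are invented static evaluations, not a real chess position; the point is just that the machine can brute-force lines that a human can’t hold in memory.

```python
# A toy look-ahead (minimax) sketch over an abstract game tree. The tree
# below is hand-made for illustration, not derived from any real game.

def minimax(node, maximizing):
    """Score a position by looking ahead to the leaves of the game tree."""
    if isinstance(node, (int, float)):  # leaf: a static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is one set of replies; the opponent picks the minimum.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # → 3: best play assuming a perfect opponent
```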
Which got me thinking about the board games I have that have chaotic or random elements to them, like Battlestar Galactica or Arkham Horror. These games let you develop grand strategies, but are generally random enough that those grand strategies won’t necessarily work and you have to adjust on the fly to new situations. They’re also games that have set rules and strategies that you can explain to someone … or to an AI. So my general musings led me to a desire to build an inference engine type system that could play one of those sorts of games, where I could explain to it what it did wrong and see how things go. Ideally, I could have multiple agents running, explain more or less to each of them, and see how they work out. But the main components are games where you have set overall strategies that the agents can start with, and yet the agent also has to react to situations that call for deviations, and most importantly will try to predict the actions of the other players so that it can hopefully learn to adjust when they don’t do what is expected.
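Roughly, the kind of agent I have in mind might be shaped like this. Everything here, from the strategy names to the way predictions are tracked, is hypothetical and not tied to either actual game; it just shows the three pieces together: a default strategy, explainable override rules, and a crude prediction of other players.

```python
# A rough sketch of the agent: a default strategy, override rules for
# exceptional situations, and a running prediction of another player's
# behaviour. All names here are hypothetical placeholders.

class Agent:
    def __init__(self, default_strategy):
        self.default_strategy = default_strategy
        self.overrides = []   # (condition, action) pairs: explainable rules
        self.observed = {}    # player -> {action: count}

    def explain(self, condition, action):
        """'Teach' the agent a new rule in response to a mistake."""
        self.overrides.append((condition, action))

    def observe(self, player, action):
        counts = self.observed.setdefault(player, {})
        counts[action] = counts.get(action, 0) + 1

    def predict(self, player):
        """Guess another player's next action from what they've done before."""
        history = self.observed.get(player, {})
        return max(history, key=history.get) if history else None

    def act(self, situation):
        for condition, action in self.overrides:
            if condition(situation):
                return action
        return self.default_strategy(situation)

agent = Agent(lambda s: "advance_plot")          # the set overall strategy
agent.explain(lambda s: s.get("monster_adjacent"), "evade")  # a taught deviation
agent.observe("player2", "fight")                # watching another player
# agent.act({"monster_adjacent": True}) -> "evade"
# agent.predict("player2") -> "fight"
```

The `explain` method is the whole point: unlike a neural net, a mistake gets fixed by stating a rule, not by retraining.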
Now, other than picking a game to try to implement this way (Battlestar Galactica’s traitor mechanic is a bit much to start with, while Arkham Horror being co-operative means that you don’t have to predict the other players much), the problem for me is that, well, I’m pretty sure that this sort of thing has been done before. I’m not doing anything that unique other than in the games I’m choosing. So, if I did some research, I’d find all of that existing work and at least get a leg up on doing it. But a quick search for books didn’t give me anything for this specifically, searching Google makes it difficult to sort the dreck from the good stuff, and the more up-front research I do the less actual work I’ll be doing, and I want to do some work. Simple research is just plain boring to me when I’m doing it as a hobby. So my choices are to reinvent the wheel or else spend lots of time looking for things that might not be there or might not be what I want.
So, I’ll have to see.
Anyway, thanks Shamus for adding more things to my already overflowing list of things I want to do!
Tags: AI