Jul 29

Dynamic pathfinding (cutting-edge research!)

Today, I went to an awesome presentation by Vadim Bulitko about his most recent research. Here’s the presentation; I’ll summarize what it covered for general use. (I just found that the slides didn’t come through the PDF export very well, apparently.) For reference, this was a presentation at UBC Okanagan in Kelowna, BC. Vadim is thinking about taking his sabbatical here at UBC, yay!

It’s a rather innovative technique, and it delivers great results for only a bit of computing power.

The Big Granddaddy Algorithms

First, a bit of background on what has been done before. The Big Granddaddy of pathfinding algorithms is A* (pronounced “A-star”). It makes the complete plan, every step needed to get from one point to another, before executing it. Unfortunately, the amount of time and memory this takes grows very quickly with the size of the map. It does, however, provide an optimal path.
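To make that concrete, here is a minimal A* sketch on a 4-connected grid. This is my own toy code, not anything from the presentation, and the grid convention (0 = open, 1 = wall) and Manhattan heuristic are assumptions. The point to notice is that the whole path is planned before the agent takes a single step.

    import heapq

    def manhattan(a, b):
        # Simple distance estimate used as the A* heuristic on a grid.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def a_star(grid, start, goal):
        """Plan a complete path from start to goal before moving at all."""
        rows, cols = len(grid), len(grid[0])
        frontier = [(manhattan(start, goal), 0, start, [start])]  # (f, g, cell, path)
        best_g = {start: 0}
        while frontier:
            f, g, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path  # optimal, but we paid the full planning cost up front
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cell[0] + dr, cell[1] + dc)
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] == 0          # 0 = open, 1 = wall (assumed)
                        and g + 1 < best_g.get(nxt, float("inf"))):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier,
                                   (g + 1 + manhattan(nxt, goal), g + 1, nxt, path + [nxt]))
        return None  # no path exists

On a big map, draining that open list before the first move is exactly the cost the real-time approach below avoids.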

The other algorithm is LRTA*, which looks ahead a few steps and moves based only on that information. For example, it looks ahead maybe 3 steps and sees which sequence of 3 steps brings it closest, by straight-line distance, to the goal. This has the benefit of always executing within a bounded amount of time. Unfortunately, walls between the agent and the goal can and will prove difficult to get around.

An addition to LRTA* raises the learned “distance” value (the heuristic) of a square every time the agent passes over it, eventually forcing the agent to go around the wall. But it still takes bumping into the wall three or four times before the agent finally finds the correct path.
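Putting the last two paragraphs together, a single LRTA* move looks roughly like the sketch below. This is my own simplification of the idea, not the authors’ code: search out to a fixed depth, head for the most promising frontier square, and raise the learned value of the square you just left so that repeated wall-bumping eventually prices it out.

    def lrta_star_step(grid, h, current, goal, lookahead=3):
        """One real-time step: bounded lookahead, then update h(current)."""
        def heuristic(cell):
            # Learned values live in h; unseen cells default to Manhattan distance.
            return h.setdefault(cell, abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]))

        def neighbors(cell):
            rows, cols = len(grid), len(grid[0])
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cell[0] + dr, cell[1] + dc)
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                    yield nxt

        # Expand outward up to `lookahead` moves and score every frontier square.
        frontier, best, best_f = {current: 0}, current, float("inf")
        for _ in range(lookahead):
            nxt_frontier = {}
            for cell, g in frontier.items():
                for nb in neighbors(cell):
                    if g + 1 < nxt_frontier.get(nb, float("inf")):
                        nxt_frontier[nb] = g + 1
            frontier = nxt_frontier
            for cell, g in frontier.items():
                if g + heuristic(cell) < best_f:
                    best_f, best = g + heuristic(cell), cell

        # The learning rule from the paragraph above: never let the current square
        # look cheaper than what the lookahead just found, so dead ends get more
        # expensive every time the agent revisits them.
        if best_f < float("inf"):
            h[current] = max(heuristic(current), best_f)
        return best  # square to head toward (the agent moves one step at a time)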

It’s all real (time, that is)

There are a few constraints that pathfinding needs to work under in games, and even in robotics (Phoenix Mars Lander, anyone?). You need something that can operate and find its way with only a little computation, and that works predictably well. It would be a disaster for a $70 million robot to keep bumping into a rock.

As well, with ever more powerful CPUs and GPUs, gamers demand more and more units, and bigger and better maps. The irony is that with increasing hardware power comes even less time per unit to spend on pathfinding and general AI. Computing power has only quadrupled a couple of times since StarCraft, yet we demand 100 times more units, with smarter and more intuitive AI, on maps four or five times bigger.

Dynamic Lookahead

So, the way D LRTA* works: first, they have dynamic lookaheads. Some squares on a map require more searching to find a near-optimal solution, particularly around corners. One way to determine the lookahead value for a square is to naively compute the required minimum lookahead depth to every other possible square. Then store each pair of coordinates along with its lookahead depth, and just look it up! Simple, eh?

Not really. On a 512×512 map, that results in ~68 billion pairs to store. On a normal desktop, that would take about two years to compute (probably a lot less on a compute farm; Google could likely do it in a minute, because it’s an extremely parallelizable problem). This is quite unacceptable, due to both time constraints and space constraints. This is where the research group’s second innovation comes into play.
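The arithmetic is easy to verify. The snippet below is just my own back-of-the-envelope check; the one-byte-per-entry figure is an assumption for illustration.

    cells = 512 * 512            # 262,144 squares on a 512x512 map
    pairs = cells * cells        # every (start, goal) combination
    print(f"{pairs:,} pairs")    # 68,719,476,736 -- the ~68 billion figure
    print(f"~{pairs / 2**30:.0f} GiB at even one byte per stored depth")  # ~64 GiB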

A house divided…

They found that splitting the map into regions, and then computing the optimal lookahead depth from one region to another, worked very well. In the worst case, it devolves into plain LRTA*. However, there must be a better way to divide things up…

Clique Abstraction

[Sturtevant, Buro 2005]

Clique abstraction was developed in the paper cited above. The idea is to take adjacent tiles and lump them into one larger tile. For example, four adjacent tiles that are all interconnected can be replaced by a single larger tile, and so on, with larger and larger abstractions at each level. (Take a look at page 38 of the PDF.)
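As a rough illustration, one level of this kind of abstraction on a grid map might look like the sketch below. This is my own toy stand-in for the real clique abstraction in [Sturtevant, Buro 2005], which works on the underlying graph rather than on fixed 2×2 blocks.

    def abstract_once(grid):
        """Map each open (row, col) tile to an abstract-node id, one level up."""
        rows, cols = len(grid), len(grid[0])
        node_of, next_id = {}, 0
        for r in range(0, rows, 2):
            for c in range(0, cols, 2):
                block = [(r + dr, c + dc) for dr in (0, 1) for dc in (0, 1)
                         if r + dr < rows and c + dc < cols]
                open_tiles = [t for t in block if grid[t[0]][t[1]] == 0]
                if len(open_tiles) == 4:
                    # A fully open, interconnected 2x2 block collapses into one larger tile.
                    for t in open_tiles:
                        node_of[t] = next_id
                    next_id += 1
                else:
                    # Near walls, each remaining open tile becomes its own abstract node.
                    for t in open_tiles:
                        node_of[t] = next_id
                        next_id += 1
        return node_of

Running the same grouping again on the abstract graph gives coarser and coarser levels of abstraction.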

It becomes a lot easier to pre-calculate the optimal lookahead depths when you have a sufficient level of abstraction. Granted, you lose a bit of that oh-so-sweet optimality, but it’s worth it to get things working within a bounded amount of time. Of course, walls and pockets on maps still give the budding algorithm a bit of trouble, which leads us to the next addition to D LRTA*:

Setting goals

The team found that setting intermediate goals takes advantage of a handy property of lookahead algorithms: they’re more accurate closer to the goal. By setting a goal in each of the subdivided regions, it’s plainly obvious to the algorithm that going that-away is better than going this-away.

As well, by not placing goals in regions that are obviously farther from the ultimate goal than the starting point, the algorithm simply ignores those regions.
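A sketch of how those two ideas might combine: given an abstract route through the subdivided regions, pick the next region’s representative cell as the current target, and skip any region whose representative is farther from the final goal than the agent already is. All of the names here are my own; the paper’s actual machinery is more involved.

    def next_intermediate_goal(current, final_goal, abstract_route, representative):
        """Return a nearby target cell for the real-time search to aim at."""
        def dist(a, b):
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        here = dist(current, final_goal)
        for region in abstract_route:                # regions listed start -> goal
            rep = representative[region]
            if dist(rep, final_goal) <= here:        # skip regions that lead away
                return rep                           # close target -> lookahead is accurate
        return final_goal                            # last leg: aim at the real goal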

Right now, there are two ways to do dynamic lookahead: pre-calculated with clique abstractions, or calculated on the fly with subdivided regions and intermediate goals. Each comes with its pros and cons, but if a game company can afford it, the precomputed approach appears to be much more reliable.

Optimal lookahead

Optimal lookahead depths for a Baldur’s Gate map

Stored lookahead

Stored lookahead depths for a Baldur’s Gate map

I do have a few questions about the research that I unfortunately didn’t ask at the time. For example, the outliers on the graph on page 70: one of the little dots was very efficient, but the other two are optimal yet expanded lots and lots of moves. I’d also like to know exactly which algorithms were used for each of the graphs. I will ask the presenter today.

What else?

What needs to be done to expand this research is to look at how to handle dynamically changing maps and moving targets. As well, they haven’t looked at memory-storage techniques or compression of the precomputed files. (Going by information theory, the data files could likely be compressed quite heavily, on the order of 75% or so.)
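That compression guess is easy to sanity-check once a table exists on disk; something like the snippet below would do. The file name is made up, and the real ratio depends entirely on how the table is laid out.

    import zlib

    with open("lookahead_table.bin", "rb") as f:   # hypothetical precomputed table
        raw = f.read()
    packed = zlib.compress(raw, 9)
    print(f"{len(raw):,} -> {len(packed):,} bytes "
          f"({100 * (1 - len(packed) / len(raw)):.0f}% smaller)")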


3 Comments so far

  1. zylum July 30th, 2008 7:48 am

    That’s some interesting stuff. Definitely something I’d love to research. It’s a shame the presentation slides are missing most of the text. I was thinking of other ways of pathfinding and was wondering why most maps use tiles. This system is sort of crude and creates a lot of problems when there are large spaces with no obstacles (i.e., it adds lots of unnecessary computation). I was wondering if there are any good algorithms based on computational geometry, such as defining obstacles as polygons and finding paths that are lines wrapping around the obstacles (sorta hard to explain what I’m thinking). Computational geometry is fairly mature, so I’m guessing there are many techniques that could be used to make this process fast. Plus there would be few objects to run the algorithm on rather than thousands of tiles. Of course this method would have some disadvantages; I don’t see it working very well in a maze-like environment.

  2. Zeroth July 30th, 2008 5:11 pm

    Actually, I should note that if you use the Foxit PDF reader, the PDF slides work. Adobe doesn’t work.

    Well, for larger, more open spaces, one could use, I believe, navigation meshes, or just simple lookahead. However, most of the pathfinding problems are in fact in small, maze-like maps. Take a look at the actual paper; it’s a lot more refined: http://ircl.cs.ualberta.ca/files/webfm/lrts/pubs/bulitko08a.pdf
