I was hired as a summer RA with Michael Gleicher to work on the real-time crowd simulation project with Mankyu Sung. What follows is an overview of my experience, what I learned, what I contributed, and what my future plans are.
What were my goals for the summer?
Work on environments for the Eurographics presentation in France
Assist Mankyu with the improvement of the simulation system when needed
Improve the workflow between the simulation and playback steps required to make a demo
Learn new things about motion, real-time simulation, and general computer graphics
The Artist
Shortly after my work began, we added an artist from Cleveland to the project. His job was to create geometry and textures for environments. I was assigned the job of giving him projects to do and making sure that what he made matched the specifications needed for the demos we wanted to create.
This turned out to be more trouble than it was worth; I spent a good portion of time that should have gone to other things fixing problems with his work, trying to contact him, and finishing at the last minute jobs that should have been simple. I learned a lot from this experience about the value of good communication versus the talent someone may have. In this case, the artist was a very talented individual, but his lack of communication far outweighed any benefit he brought to the project.
Despite all this, we managed to create two very nice demos for the Eurographics presentation. See below for more details.
Building Environments Case Study
Fairly early on in the summer, Mike suggested that I experiment with some alternative methods of creating environments. We decided to give a program called Image Modeler a try.
I decided to try making a model of the UW computer science building as
a test. The following summarizes my findings:
Image Modeler is very useful for creating models of buildings and other shapes, provided that the shape is fairly simple, other solutions aren't readily available, and detailed textures are also desired.
This software requires skilled photography to capture the important angles and details. That makes it difficult to build detailed models of an object, since several detailed photographs of each important corner are required to build an accurate model.
Creating an entire area proves very difficult with Image Modeler. For instance, creating the interior of the Computer Science building would require a chain of photographs linked together throughout the entire area, and each new photograph must be linked into the system before it can be used. This can be very time consuming, especially for the interior of a building, since it can be tricky to capture several views of a subject while still including the reference points needed to link the photograph into the system. Exterior shots are somewhat simpler.
Modeling in Image Modeler seems to have been an afterthought in the development process. Tools exist for creating primitives and faces, as well as extruding faces, but many of the "must have" tools available in packages like Maya or 3D Studio Max don't exist. This is OK if the environment is very simple and the user is a talented 3D artist, but one must be very comfortable with 3D modeling to use this tool effectively.
The 3D navigation in Image Modeler is incredibly unintuitive and frustrating. Even after using the tool for almost a week, I still couldn't easily rotate or move through the environment I was trying to create.
There are several problems involved with exporting the models for use in a 3D engine such as the Unreal Runtime engine. First, the textures created don't have power-of-two dimensions, so manual conversion or external scripting is required (a small conversion sketch appears below, after these findings). I didn't have access to export to other 3D formats in the demo version of Image Modeler, so I couldn't directly test the process of converting the geometry. Another inherent problem is that a separate texture is created for every part of the environment, whereas in a real-time 3D environment, tiling and texture repetition are often used to take advantage of similar walls and the like, saving video card memory for geometry.
It's very hard to "tweak" the scene with Image Modeler. For instance, crowd simulation requires that the floor be flat. The real environment might not have a truly flat floor, but Image Modeler doesn't let you just create geometry in the 3D viewport. You must create geometry using the control points that you place on the photographs, which makes it difficult to create something any other way than how it truly exists in the environment.
Most often in my experiments, it would have taken less time to simply create a roughly accurate model of the object using Maya. Modeling the object in Image Modeler required taking many photographs, matching up control points to get a 3D "lock" on the object, adding extra control points as guides for geometry creation, and then using the poorly designed interface and modeling options to actually build the object. Then the geometry and textures must be converted, and usually the textures need to be resized so that their dimensions are powers of two.
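Since the power-of-two texture issue came up repeatedly, the conversion is easy to script. Here is a minimal sketch in Python, assuming the Python Imaging Library is available; the function names and file names are placeholders of my own, not part of our actual pipeline.

# Minimal sketch: resize a texture so its dimensions are powers of two.
# File names below are placeholders, not real assets from the project.
from PIL import Image

def next_power_of_two(n):
    """Smallest power of two that is >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def make_power_of_two(in_path, out_path):
    img = Image.open(in_path)
    w, h = img.size
    new_size = (next_power_of_two(w), next_power_of_two(h))
    if new_size != (w, h):
        img = img.resize(new_size, Image.BICUBIC)
    img.save(out_path)

# Example usage (hypothetical file names):
# make_power_of_two("wall_photo.jpg", "wall_texture.png")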
There may be other image-based modeling software available, but Image Modeler is hailed as the best, and its shortcomings, at least with respect to what we needed it for, were enough to decide that it wasn't the right solution to our problem of creating environments.
In the end we decided that the best method for now was to model entire environments in Maya and convert them directly into the Unreal Engine, since that's what our system uses. I spent some time extending Mankyu's system to import images of floor plans and specify scale factors, allowing us to build an environment, import information about it into the simulation system, and finally build simulations that would then be played back in that environment in the Unreal Engine.
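To give a sense of what the floor plan import involves, here is a rough Python sketch of the pixel-to-world conversion that a scale factor enables. The function name and the meters-per-pixel convention are mine for illustration, not the actual code in Mankyu's system.

# Rough sketch of mapping floor plan image pixels to simulation world units.
# The scale factor convention and names are illustrative only.
def pixel_to_world(px, py, image_height, meters_per_pixel, origin=(0.0, 0.0)):
    """Convert a floor plan pixel coordinate to world coordinates.
    Flips the y axis, since image rows grow downward while world y grows upward."""
    wx = origin[0] + px * meters_per_pixel
    wy = origin[1] + (image_height - py) * meters_per_pixel
    return wx, wy

# Example with made-up values: a 1024-pixel-tall plan scanned at 0.05 meters per pixel
# print(pixel_to_world(200, 300, 1024, 0.05))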
Completed Environments
By the end of the summer we had finished two fairly nice environments. The first is a simple art gallery, which demonstrates several new situations, such as one where people stand and look at paintings on the wall.
Here are a couple screenshots:
I created this environment using the Unreal editor, and Photoshop for the textures.
The second demo was a model of a city block on State Street in downtown Madison. I spent some time taking digital photographs of the block, then sent them to the artist, who created most of the geometry. After he "finished," I spent another week fixing geometry problems, adding textures, and converting everything to the Unreal Engine.
Here are a couple shots of the State Street environment:
Other Accomplishments:
I made lots of small improvements to both Mankyu's system and the playback classes for the Unreal Engine. I added a playback time display, with various controls such as pausing and slow-motion playback.
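The pause and slow-motion controls come down to scaling the simulation clock by a playback rate. The sketch below shows the idea in Python; the real code lives in the Unreal playback classes, so the class and member names here are only illustrative.

# Illustrative sketch of the playback clock logic (not the actual Unreal code).
class PlaybackClock:
    def __init__(self):
        self.time = 0.0      # current playback time in seconds
        self.rate = 1.0      # 1.0 = normal speed, 0.5 = slow motion, etc.
        self.paused = False

    def tick(self, dt):
        """Advance the playback time by one frame's delta time, honoring pause and rate."""
        if not self.paused:
            self.time += dt * self.rate
        return self.time

# Example usage (hypothetical):
# clock = PlaybackClock()
# clock.rate = 0.25   # quarter-speed playback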
In Mankyu's system, I added various GUI improvements, like the ability to zoom and pan in the world editor window.
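Zoom and pan on a world editor window amount to a simple world-to-screen transform. This is a rough Python sketch of that idea, with names of my own choosing rather than the system's.

# Sketch of the world-to-screen transform behind zoom and pan (names are illustrative).
class View2D:
    def __init__(self):
        self.zoom = 1.0            # pixels per world unit
        self.pan = [0.0, 0.0]      # world coordinate at the window's top-left corner

    def world_to_screen(self, wx, wy):
        sx = (wx - self.pan[0]) * self.zoom
        sy = (wy - self.pan[1]) * self.zoom
        return sx, sy

    def zoom_about(self, sx, sy, factor):
        """Zoom in or out while keeping the world point under the cursor fixed on screen."""
        wx = self.pan[0] + sx / self.zoom
        wy = self.pan[1] + sy / self.zoom
        self.zoom *= factor
        self.pan[0] = wx - sx / self.zoom
        self.pan[1] = wy - sy / self.zoom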
In addition, I also assisted Mankyu with some of the algorithms for collision
detection, to try to fix problems like characters standing around in a situation
where they should have been running.
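As a rough illustration of the kind of test involved (not Mankyu's actual algorithm), a simple circle-based overlap check between two characters looks like this in Python:

import math

# Rough illustration only: characters modeled as circles on the floor plane.
def characters_collide(pos_a, pos_b, radius_a, radius_b):
    """Return True if two characters' circles overlap."""
    dx = pos_a[0] - pos_b[0]
    dy = pos_a[1] - pos_b[1]
    return math.hypot(dx, dy) < (radius_a + radius_b)

# A character that detects a collision ahead can wait or reroute; if the test is too
# conservative, characters end up standing around when they should keep moving.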
Learning Experiences:
Throughout the summer, I learned much more about motion and how it works, some of the difficulties with capturing and creating realistic motions, and some of the ways to get around these problems.
I learned (somewhat) how to use a software system created by Hyun Joon (a UW alumnus) for creating motion graphs.
I went to Siggraph in LA and learned much more about a variety of topics. I also had the opportunity to see many of the industry-standard crowd simulation systems, such as Massive, and to see how they compare to what we're doing and how our system is different.