The avatar-centric communication project went through three major phases that reveal the power of iterative design. We created an initial prototype for two avatars in which a conversation could occur any time the two approached one another face to face. Most of the social and body-language features were invented in this phase. Our intelligent cinematic camera worked very well, and this prototype sold us all on the success of our approach. Then we added more avatars, and the camera could not be made to work. Finding good camera angles to view everyone in a conversation is a very hard problem when the participants are allowed to stand in arbitrary positions relative to one another in the world, and without a good camera, most of the emotional power disappears. We needed avatars to be in specific positions to engage in conversation. So we invented the chatprop. Our first chatprop was the loveseat, a two-person bench where we imagined a couple sitting, talking, flirting, arguing . . . and we made the camera change its position as the avatars changed their poses. If you sat facing your partner, the camera shot would emphasize togetherness.
If you turned away, it would emphasize separation. This chatprop was great, but unfortunately it was a dead end: in almost every other chatprop, the emotional expression of pose and the camera view needed to be controlled separately. Another design iteration led to many chatprops in which the camera view is directly controlled, like a living room with a sofa and two chairs, or a stage with audience seats. Throughout this phase, conversational groups could form only when seated, never when standing around in the world, because we did not know how to solve the camera problem for free-form groups. We knew what we wanted, but it was very difficult to program: we wanted avatars to be nudged into fixed positions relative to one another when they started talking. And we finally figured out how to do it. So, today if you walk up and talk to another avatar, you are both nudged into a specific position relative to one another, and the camera works correctly to show the conversation and cut to your facial expressions. Others can come up and join the conversation, and everyone moves sideways slightly to let them in.
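One way to realize this kind of nudging is to lay out conversational slots evenly around a circle and re-space them whenever someone joins, so existing members each shift sideways slightly. The sketch below is a minimal illustration of that idea under those assumptions; the function name and circular layout are hypothetical, not the project's actual code.

```python
import math

def conversation_positions(center, radius, count):
    """Return (x, y) slots spaced evenly around a circle.

    When a new avatar joins, the group is laid out again with
    count + 1 slots, so everyone moves sideways a little to
    make room while keeping the group camera-friendly.
    """
    cx, cy = center
    return [
        (cx + radius * math.cos(2 * math.pi * i / count),
         cy + radius * math.sin(2 * math.pi * i / count))
        for i in range(count)
    ]
```

Because every avatar ends up at a known distance and angle from the group's center, the camera can be placed deterministically, which is exactly what free-form standing positions made impossible.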
We began with a simple prototype in which conversations in the world worked well; it didn't scale; we had to solve many problems along the way; and finally we came full circle to solving our original problem. Of course, there were many design areas like this. For example, chat balloons rise from each avatar, and their order tells you the conversational order. If they rise too fast, it is impossible to follow the conversation. So we designed a fairly complex scheme to make conversations as legible as possible by keeping the chat-balloon ascent as slow as possible during heavy chat.
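The pacing idea can be sketched as picking the slowest ascent rate that still frees enough room before the next message is expected to arrive. This is only an assumed model of the scheme, with hypothetical names and clamp values; the actual design was more complex.

```python
def balloon_ascent_rate(balloon_height, messages_per_second,
                        min_rate=5.0, max_rate=80.0):
    """Pick the slowest ascent rate (units per second) that still
    clears one balloon-height of space before the next message is
    expected. Heavy chat ramps the rate up; quiet chat keeps
    balloons moving at the gentle minimum so they stay readable.
    """
    needed = balloon_height * messages_per_second
    return max(min_rate, min(needed, max_rate))
```

In quiet conversation the rate stays pinned at the minimum, and under a flood of chat it saturates at the maximum rather than becoming unreadably fast.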