AR Thing
This AR app brings Thing from the Addams Family to life. Using the app, users can control Thing with a joystick and guide it through physical spaces, which the device scans in real time. The app allows full interaction with the physical environment, making it possible to explore and play with a digital character in a whole new way.

This project was featured by 80 Level.

Technology
  • Resight Engine
  • Unity
  • Blender
Goals
  • Deliver an engaging experience using AR technology and interactive gameplay mechanics.
  • Create a sense of presence and immersion by blending the virtual and physical worlds in a real-time experience.
Workflow
The process began with a proof of concept (POC) in which I used a digital joystick to move a cube on a surface in Unity. The joystick worked well, so the next step was to turn it into an augmented reality experience and confirm that controlling the cube would also work well in the physical world. To verify that, I used AR Foundation's plane detection feature and ran a small experiment on my desk. The cube was fully controllable and appeared to work well, so I moved on to the model.
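The core of that POC can be sketched as follows. This is a minimal Unity C# sketch rather than the project's actual code: it assumes an on-screen joystick that writes a normalized Vector2 into a field (here called JoystickInput, a placeholder name) and uses AR Foundation's ARRaycastManager to place the cube on a detected plane.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// POC sketch: tap to place a cube on a detected AR plane, then slide it around with a joystick.
// The joystick is assumed to be any on-screen control that fills JoystickInput with a normalized Vector2.
public class CubePoc : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager; // from AR Foundation
    [SerializeField] Transform cube;                  // the controlled cube
    [SerializeField] float speed = 0.5f;              // metres per second

    public Vector2 JoystickInput;                     // set by the UI joystick every frame

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        // Tap to (re)place the cube on the nearest detected plane.
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began &&
            raycastManager.Raycast(Input.GetTouch(0).position, hits, TrackableType.PlaneWithinPolygon))
        {
            cube.position = hits[0].pose.position;
        }

        // Slide the cube along the plane using the joystick input.
        Vector3 move = new Vector3(JoystickInput.x, 0f, JoystickInput.y) * speed * Time.deltaTime;
        cube.position += move;
    }
}
```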
This was a 24-hour hackathon project, so I used an existing model of a hand and worked on it in Blender. I focused mainly on the textures and the animation: I adjusted the colors so they would stand out against the real world and modified the movement to make it easy to control. Finally, I baked the textures and imported the model into Unity, replacing the original cube with the revised version of Thing.

Working with the 3D Thing model in Unity presented several challenges:
1. Keeping the animation synchronized with the movement.
2. Ensuring seamless transitions between animations for a natural feel.
3. Adjusting Thing's movement to match the camera's POV (see the sketch after this list).
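In essence, the camera-relative movement and the animation sync can be sketched like this, assuming Thing has an Animator with a float parameter (here called "Speed", a placeholder name) that blends between idle and walk:

```csharp
using UnityEngine;

// Sketch for challenges 1 and 3: move Thing relative to the camera's point of view
// and keep the walk animation in sync with the actual movement speed.
public class ThingMovement : MonoBehaviour
{
    [SerializeField] Animator animator;
    [SerializeField] float speed = 0.6f;      // metres per second
    [SerializeField] float turnSpeed = 10f;   // how quickly Thing faces its movement direction

    public Vector2 JoystickInput;             // normalized joystick value, set by the UI

    void Update()
    {
        // Project the camera's axes onto the ground so "up" on the joystick means "away from the camera".
        Transform cam = Camera.main.transform;
        Vector3 forward = Vector3.ProjectOnPlane(cam.forward, Vector3.up).normalized;
        Vector3 right   = Vector3.ProjectOnPlane(cam.right,   Vector3.up).normalized;

        Vector3 direction = forward * JoystickInput.y + right * JoystickInput.x;
        Vector3 velocity  = Vector3.ClampMagnitude(direction, 1f) * speed;

        transform.position += velocity * Time.deltaTime;

        // Face the movement direction and drive the walk blend from the real speed,
        // so the animation never slides while Thing is moving.
        if (velocity.sqrMagnitude > 0.0001f)
        {
            Quaternion target = Quaternion.LookRotation(velocity, Vector3.up);
            transform.rotation = Quaternion.Slerp(transform.rotation, target, turnSpeed * Time.deltaTime);
        }
        animator.SetFloat("Speed", velocity.magnitude);
    }
}
```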
After completing these tasks in Unity, I moved on to a pivotal part of the application: enabling Thing to navigate the real world autonomously, without pre-scanning each location. I wanted Thing's interaction with its surroundings to be immediate and seamless, which led me to the Resight Engine.

Using its drag-and-drop setup, I enabled fast mesh scanning that generates a dynamic, real-time model of the environment. The scan can either be rendered visually with shaders or used invisibly for physics and collision detection. The best part is how effortless it is: there is no need to scan the room meticulously; simply moving around with a mobile phone builds the scan automatically and quickly.
After integrating Resight's components (LibResight and MeshManager), I tested Thing's mobility in the real world, and it worked seamlessly. Next, I gave Thing the ability to jump. I tested this feature on various surfaces, including floors, couches, chairs, and even my dogs, and it worked smoothly. I also checked Thing's jumping across different floors inside and outside my house, and the performance was consistent.
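Because the scanned mesh can be used for physics, grounding and jumping can be sketched with plain Unity raycasts against it. This is a minimal sketch, assuming the MeshManager-generated scan carries colliders (the invisible physics mode described above); the class and parameter names are placeholders, not Resight's API.

```csharp
using UnityEngine;

// Sketch of how Thing can stick to and jump on the live-scanned environment.
// Relies only on standard Unity physics queries hitting the scanned mesh colliders.
public class ThingGrounding : MonoBehaviour
{
    [SerializeField] float jumpVelocity = 1.5f;   // initial upward speed in m/s
    [SerializeField] float gravity = -9.81f;
    [SerializeField] float groundOffset = 0.01f;  // keeps the palm slightly above the surface

    float verticalVelocity;
    bool grounded;

    public void Jump()
    {
        if (grounded)
        {
            verticalVelocity = jumpVelocity;
            grounded = false;
        }
    }

    void Update()
    {
        // Cast down from slightly above Thing onto the scanned mesh (floor, couch, chair, ...).
        Vector3 origin = transform.position + Vector3.up * 0.5f;
        bool hasGround = Physics.Raycast(origin, Vector3.down, out RaycastHit hit, 2f);

        verticalVelocity += gravity * Time.deltaTime;
        Vector3 next = transform.position + Vector3.up * verticalVelocity * Time.deltaTime;

        // Land when falling onto the scanned surface; otherwise keep applying gravity.
        if (hasGround && verticalVelocity <= 0f && next.y <= hit.point.y + groundOffset)
        {
            next.y = hit.point.y + groundOffset;
            verticalVelocity = 0f;
            grounded = true;
        }
        else
        {
            grounded = false;
        }

        transform.position = next;
    }
}
```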

To enhance the experience further, I used another Resight Engine feature that enables multiplayer. I added the SnappedObject component to Thing and tested it on a second mobile device to make sure that one Thing can see and interact with another, and that both are correctly positioned and synced in the world. Now multiple users can each control their own Thing, walk around the world with it, and meet others.
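A minimal sketch of that setup, assuming the Thing prefab already has the SnappedObject component attached in the editor (with LibResight and MeshManager in the scene); the spawner below and its names are placeholders rather than Resight's API.

```csharp
using UnityEngine;

// Sketch of the multiplayer setup: each device spawns its own Thing from a prefab that already
// carries Resight's SnappedObject component, so every instance stays anchored and synced
// in the shared world for all participants.
public class ThingSpawner : MonoBehaviour
{
    [SerializeField] GameObject thingPrefab;   // prefab with the Thing model and SnappedObject attached
    [SerializeField] Camera arCamera;          // the AR camera on this device

    void Start()
    {
        // Spawn the local player's Thing a short distance in front of the camera,
        // roughly at floor height until the scan refines the surface below it.
        Vector3 spawnPos = arCamera.transform.position + arCamera.transform.forward * 0.7f;
        spawnPos.y = arCamera.transform.position.y - 1.0f;
        Instantiate(thingPrefab, spawnPos, Quaternion.identity);
    }
}
```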

And voilà 💖