
New technology from startup DoubleMe can be used to make and share holographic 3D models of yourself in augmented reality or virtual reality.

With a background in computer engineering and software design, Albert Kim has been working at the cutting edge of the virtual reality industry since the 1990s. After founding Zenitum in 2004, specialising in computer vision systems, Kim identified an exciting opportunity to develop new technology for live, real-time 3D video capture. In 2010, he set up DoubleMe in order to take his vision forward. We talked with him about his ambition to enable users to generate their own real-time 3D content for augmented reality apps quickly and cheaply.

Q: Firstly, can you tell us a little about your background?

A: I started working in VR in 2004. I founded Zenitum in Korea after returning from the States, where I graduated in Computer Engineering at Ohio State University. When working on the factory floor with camera systems deployed for detecting defective products, I realised there was amazing potential for using this kind of technology in the entertainment industry. It was the beginning of the IT boom in Korea at the time, and the Government was putting a lot of money into creating tech-based startups, so I set up Zenitum to develop our own AR tracking engine. By 2007 we were selling licences to many companies to use our products. I noticed that AR developers were spending a lot of time and money on content creation for promotional purposes on mobiles. We discovered this for ourselves when we tried to commission a small animated dancing figure to demonstrate the power of our graphics engine. The fees we were quoted were prohibitively expensive relative to our limited budget, so I started to think that there must be a better way of producing this kind of thing. That’s when we decided to develop a new visual capturing technology in order to make promotional animations really quickly and cheaply. Initially, I thought we could deliver this technology in less than a year, but it turned out to be a five-year R&D project from start to finish. We attracted a lot of investment from Japanese companies but ran into problems following the 2011 tsunami, which forced us to close down the company. At that point we moved over to Voxelogram to take the project forward. When we set up in California, people struggled with the name, so that’s when we changed it to DoubleMe.

Q: Can you tell us something about DoubleMe?

A: With DoubleMe, as the name suggests, we create your (digital) double in virtual reality. Our slogan is: We put real people into virtual reality. Instead of hiring cartoon-like 3D characters, you can use your own body. There are many people creating virtual reality or augmented reality content, but it's not easy to create high-resolution, high-quality 3D models that look like actual human beings or animals. Our method is much cheaper than conventional approaches: we use multiple-angle video to capture a moving subject, and then process the data through our algorithm, which fuses it into a single 3D model for you to use. All of the original features, such as facial expressions, limb movements and clothing textures, are preserved using a volumetric 3D mesh of the dynamic subject. Once we have the model you don't need to touch up anything at all; you can drop it directly into your own game or VR content, animation, or movies, or even export it for 3D printing! We have minimised all the hassle and the cost involved in creating 3D animated characters by using real-time capture.
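The interview doesn't reveal DoubleMe's actual algorithm, but a classic way to fuse multiple camera views into a single volumetric model is "space carving": keep only the voxels whose projections fall inside the subject's silhouette in every view. The sketch below is a toy illustration of that general idea (an assumption, not DoubleMe's method), using two orthographic views of a synthetic subject:

```python
import numpy as np

GRID = 32  # voxels per axis in the toy volume

def silhouette_from_above(x, y, z):
    # Toy "camera" looking down the z-axis: the subject projects to a disc.
    return (x - 16) ** 2 + (y - 16) ** 2 <= 64

def silhouette_from_side(x, y, z):
    # Toy "camera" looking along the y-axis.
    return (x - 16) ** 2 + (z - 16) ** 2 <= 64

def carve(silhouettes):
    # Start with a fully occupied volume, then carve away every voxel
    # that falls outside the subject's silhouette in any view.
    xs, ys, zs = np.meshgrid(np.arange(GRID), np.arange(GRID),
                             np.arange(GRID), indexing="ij")
    occupied = np.ones((GRID, GRID, GRID), dtype=bool)
    for sil in silhouettes:
        occupied &= sil(xs, ys, zs)
    return occupied

model = carve([silhouette_from_above, silhouette_from_side])
print(model.sum(), "voxels survive both views")
```

With only two views the carved volume is still a coarse "visual hull"; real capture rigs add more cameras (and photometric cues) to recover fine detail such as faces and clothing folds.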


Q: That’s very exciting, what drew you to virtual and augmented reality?

A: My first encounters with virtual reality were when I was really young back in the 1980s. I felt it was really cool that you could create your virtual space and be whatever you wanted to be. You could forget all about the physical world, and fly around being a completely different person. Virtual reality is great, but in the late ‘90s I came across augmented reality, which involved bringing the main characters to life by using a virtual layer on top of the real world. Your whole body could become completely immersed in augmented reality. It was at this time that I decided to leave my job in Chicago and develop new technology based on my experience working with computer vision. I realised I could develop my own characters within augmented reality and play with them whenever and wherever I chose to, bringing my fantasies into the real world.

Q: Is this kind of product going to be affordable for average users?

A: Yes, it is. Our first commercial studio is due to open in London in July, and we will be opening more studios around the world as we get more investment and our business grows. We are also working on a cloud-based version, so you won't even have to come to the studio. Using the cloud means everyone can use our technology with their own cameras and a laptop. A minimum of two cameras is necessary to capture 3D. Once you upload your videos, our engine will quickly process everything for you. The rapid expansion of the online video industry happened because we already had the shift to content-producing devices in homes, such as the webcam. Now anyone can become a YouTube star overnight; capturing and sharing video is something everyone takes for granted. In our studio, which we call the ‘HoloPortal’, we can create real-time, high-quality videos for you, and the cloud-based equivalent, ‘HoloCloud’, will be open to everyone online. Right now manufacturers are creating dual cameras to make 3D capture simple and affordable for all.
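The two-camera minimum comes from basic stereo geometry: a single image cannot recover depth, but with two views, the horizontal shift (disparity) of a point between the images determines how far away it is. A minimal sketch of that relationship for a rectified pinhole stereo pair (the focal length and baseline below are made-up values, not DoubleMe parameters):

```python
# For a rectified stereo pair: depth = focal_length * baseline / disparity.
FOCAL_PX = 800.0     # focal length in pixels (illustrative)
BASELINE_M = 0.12    # distance between the two cameras in metres (illustrative)

def depth_from_disparity(disparity_px):
    # A point visible in only one view, or with zero shift, gives no depth.
    if disparity_px <= 0:
        raise ValueError("point must appear shifted between the two views")
    return FOCAL_PX * BASELINE_M / disparity_px

# A feature seen at x=420 in the left image and x=400 in the right image:
print(depth_from_disparity(420 - 400), "metres away")
```

Nearby points shift a lot between the views and far points shift very little, which is why depth accuracy degrades with distance and why multi-camera rigs space their cameras carefully.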

Q: What do you think the implications are for people who work in the film or game industry, spending months on creating a single animated character?

A: Our system will be much cheaper and provide better content generation. Up to now, whenever you think about game characters they usually have physically impossible bodies. A few companies such as Electronic Arts have started generating realistic characters by scanning real athletes to use in games, but this process is very expensive at the moment. Our technology makes this much cheaper to do and will be very attractive to the game industry for both large and small companies as well as individuals. The whole idea of DoubleMe is about encouraging people to project themselves into virtual reality. I think the game industry will soon start paying regular people to take part in their productions instead of using cartoon-like characters.

Q: What is the ultimate goal for DoubleMe?  

A: We would like to produce a new kind of marketplace for your virtual doubles. Right now it may sound like a crazy idea, but instead of selling your skills or products directly, we want to help you to sell your skills virtually. For instance, instead of giving someone a yoga lesson in person, you’ll be able to export your character and skills into a virtual environment and provide the lesson that way. Digital doubles will be presented life-size in viewers’ (or users’) living rooms through AR headsets such as Microsoft HoloLens. Your content becomes a marketable commodity that you can make money from. It will be like an App Store for your body.

Q: How will the experience be delivered?

A: Once we have generated the 3D model of your body, it's already vision-compatible, so you can mix this content into any augmented reality application. Until now, content has usually been attached to a target or marker, and you experience it through a headset. Using recent technology like HoloLens, which incorporates a visual sensor, you don't actually need any kind of marker. You can simply scan your entire room and regenerate its full 3D coordinates. Actually, we just received a HoloLens from Microsoft, and the tracking engine is amazing. It uses ‘natural feature tracking’, which enables you to scan any environment wherever you are. Your DoubleMe content will become part of this augmented reality layer.
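Once a headset has scanned the room into its own 3D coordinates, marker-less placement of content reduces to applying a rigid transform (a rotation plus a translation) that anchors the model at a chosen pose in the world frame. The sketch below is a generic illustration of that step with hand-picked pose values; it is not a HoloLens API:

```python
import numpy as np

def pose_matrix(yaw_rad, position):
    # Build a 4x4 homogeneous transform: rotation about the vertical
    # (y) axis followed by a translation to `position`.
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
    T[:3, 3] = position
    return T

def place(points, pose):
    # Map model-space points (N x 3) into world space via the anchor pose.
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (pose @ homo.T).T[:, :3]

# Anchor the double at a spot on the scanned floor, facing sideways:
anchor = pose_matrix(np.pi / 2, position=[1.0, 0.0, 2.0])
head = np.array([[0.0, 1.7, 0.0]])  # a point 1.7 m up in the model's own frame
print(place(head, anchor))
```

A real AR runtime supplies the anchor pose from its tracking system every frame; the content pipeline only has to express the model in its own local frame, exactly as the exported DoubleMe mesh would be.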
