The user selects a polygon in the video where the new object will be inserted. The software then analyzes the video to match differences in color, texture, and lighting. Once the inserted video or image has been modified to blend into its surroundings, the final result looks like a natural part of the original footage. The researchers even took into account shadows that might fall on the inserted image, making the results as authentic as possible.
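The color and lighting matching described here can be illustrated with a simple statistics-transfer sketch: shift the inserted patch's per-channel mean and spread toward those of the surrounding region. This is a hypothetical, minimal stand-in for the team's actual algorithm; `match_color_stats` and its array conventions (8-bit RGB images as numpy arrays) are assumptions for illustration only.

```python
import numpy as np

def match_color_stats(patch, region):
    """Adjust the inserted patch so each color channel's mean and
    standard deviation match those of the surrounding region.
    (Illustrative sketch, not ZumaVision's published method.)"""
    patch = patch.astype(np.float64)
    region = region.astype(np.float64)
    out = np.empty_like(patch)
    for c in range(patch.shape[-1]):
        p, r = patch[..., c], region[..., c]
        # Rescale the patch's variation to the region's, then recenter.
        scale = r.std() / (p.std() + 1e-8)
        out[..., c] = (p - p.mean()) * scale + r.mean()
    # Clamp back to valid 8-bit pixel values.
    return np.clip(out, 0, 255).astype(np.uint8)
```

In practice such a global transfer is only a first step; per-region blending and texture matching would be layered on top, as the article suggests.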
ZumaVision grew out of an earlier project named Make3D, a website that converted a single still photograph into a brief 3D video. This was done by locating planes in the image and computing the relative distances between the camera and those planes. The researchers used this procedure as the starting point for ZumaVision. Saxena says, “That means, given a single image, our algorithm can figure out which parts are in the front and which parts are in the background, and this capability has now been extended to videos”.
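The plane-and-distance step can be sketched in a few lines: fit a plane to a set of 3D points by least squares, then compute the perpendicular distance from the camera (at the origin) to that plane. This is a generic geometry sketch under assumed conventions (points as an (N, 3) array, plane modeled as z = a·x + b·y + c), not Make3D's actual implementation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of the plane z = a*x + b*y + c to (N, 3) points.
    Returns the coefficients (a, b, c)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

def camera_distance(coeffs):
    """Perpendicular distance from the camera at the origin to the
    plane a*x + b*y - z + c = 0."""
    a, b, c = coeffs
    return abs(c) / np.sqrt(a * a + b * b + 1.0)
```

Ordering image regions by distances computed this way is one simple way to decide which parts sit in front and which in the background, in the spirit of the capability Saxena describes.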