Here is an interesting demo of Video Trace:
I remember trying out technology along these lines sometime in 2000. In that program (the name escapes me now) you took pictures of a scene from different angles and used the photographs as a guide when building polygons. My company was making virtual versions of a lot of real-world locales at the time, and we were hoping this would be a time-saver. It turned out the thing was very annoying and difficult to use. The software was so unwieldy that I decided we'd be better off doing things the old-fashioned way. I loved the idea and wanted it to work, but it just wasn't ready for prime time yet. It looks like the approach and the tools have been refined a great deal since then.
What isn’t clear from the video is how the shadow underneath the truck is copied. The area of the shadow was not included among the modeled polygons. I also get the feeling that it’s a lot more work than it looks like in the video. I think a few steps were omitted.
Also, the building in the second half of the video is the Sydney Opera House. I had to make a 3d model of it sometime in 1999. It was a murderous project. (Unlike in the Video Trace demo, I was making something for realtime rendering and had to be very careful with polygon count and texture size. Those 1999 computers weren’t quite up to the job of handling curvy buildings like that one at a decent frame rate.) I didn’t have good photographs to work with, and my modeling skills were not up to the job. The end result wasn’t very good. It’s a very difficult building, and it’s great seeing the place done right in this demo.
Interesting. He makes it seem so easy.
If you look really carefully you can see the sharp edge of the model in the shadow of the passenger side front bumper. Looks like he added a light source where the sun would be, and it just casts a shadow appropriately. Not a great shadow, but passable for a quick glance.
I imagine if you’re using this tool you’re more interested in the mesh and texture than using it to produce accurate lighting effects. Once you’ve got your object you’ll probably import it into whatever 3d suite you’re working with.
Phlux: You’re right. The shadow on the duplicated truck doesn’t have a side mirror, and is at a slightly different angle.
hahah
Not readily obvious to you non-Aussies, but that last bit about the University of Adelaide was a major bit of pwnage. Adelaide and Sydney are old rivals, as are most of the Aussie cities.
Shamus: That program wouldn’t have been DSculpture or UZR 3D? (Digs through pile of 3d world cover disks.) I had a blast with DSculpture, but it was really limited to small objects, as you needed to mark out a perimeter for each angle.
In that first section I really like how the fake model is clipped behind the real jeep (I gather the software handles that). :-)
This is starting to get into the sci-fi conspiracy theory movie territory, eh?
Well, I can see the obvious next step: Having the software do the work automatically, like, you click on the car and then the button “gimme a 3D-model!” and stuff happens. Anyway, great demonstration of a really intriguing piece of software. :)
Being a 3D modeler myself, I find this very interesting. Mostly I am trying to figure out how it is working.
I am going to go ahead and wildly assume that it is finding contrast edges in the image(s), like Photoshop’s “magic wand” tool, for the vertices to “latch” onto when the video frames advance. Otherwise I don’t know how the 3d model figures out how to turn/move/scale with the video.
Of course, as Shamus mentioned there could be a hidden step where key vertices are defined and matched by the user to several different frames of video, so it gets an idea of how the camera moves.
Whatever, I don’t really know, but it is cool.
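For anyone curious what “finding contrast edges” means in practice, here is a minimal NumPy sketch of one common way to do it: Sobel gradient magnitude, where high values mark sharp intensity changes that a tracked vertex could latch onto between frames. This is just an illustration of the general idea, not VideoTrace’s actual algorithm, and the function name and toy image are made up for the example.

```python
import numpy as np

def edge_strength(img):
    """Hypothetical helper: mark contrast edges via Sobel gradients.

    img: 2D float array of grayscale intensities.
    Returns an array of the same shape; large values indicate
    sharp intensity changes (contrast edges).
    """
    # Sobel kernels for horizontal (kx) and vertical (ky) gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    # Replicate border pixels so the 3x3 window fits everywhere.
    padded = np.pad(img, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    # Gradient magnitude: strong wherever intensity changes sharply.
    return np.hypot(gx, gy)

# A toy "frame": dark left half, bright right half.
frame = np.zeros((8, 8))
frame[:, 4:] = 1.0
edges = edge_strength(frame)
# The strongest response hugs the dark/bright boundary (columns 3-4);
# the flat regions on either side produce no response at all.
print(int(np.argmax(edges.sum(axis=0))))
```

A tracker could compare such edge maps between consecutive frames to decide where a model vertex “moved,” which would explain the model turning and scaling with the video.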