We’re live at the San Jose performing arts center for the Nvision 08 opening keynote. Join us as we watch Nvidia CEO Jen-Hsun Huang take on the Cylons.
2:56PM: Looks like the Wi-Fi connection here is working well right now. We passed Tricia Helfer in the lobby on the way in, surrounded by glassy-eyed geeks. Here’s a quick picture.
That one’s for you, Geoff.
3:05PM: I’m now sitting in a puddle of drool created by Marco from HotHardware. Perhaps bringing in the Cylon chick wasn’t entirely a good idea. Still waiting for things to get started. The requisite loud pop music is playing now, so we’re getting close.
3:09PM: Lights down, we’re going. Please welcome Chuck Reed, Mayor of San Jose.
Hi, Chuck.
Chuck’s happy about having this event in this here city.
3:12PM: Lights down, animation on the big screen. Scenes from games, demos, CG animations. Even louder pop music.
Please welcome Jen-Hsun Huang.
Jen-Hsun: Welcome to the first Nvision.
He’s defining visual computing, talking about all of the markets and companies involved. Adobe, video games, movies, professional visualization, the iPhone…
We are on the cusp of a display revolution. Will show you some exciting things there.
Also exciting: computing with GPUs. “We are geeked up about parallel computing.”
The GPU has been the most dramatically innovative technology in computers in the last 15 years.
When we started, the GPU was a fixed-function pipeline ASIC. Over the years, it has become a general-purpose parallel computer. It started out only knowing the language of graphics, but today it understands C and C++, the language of computing. Computational ability has reached a teraflop, equivalent to a thousand Cray X-MPs.
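An aside while he talks: for anyone wondering what “the GPU understands C” looks like in practice, here’s a minimal CUDA sketch of our own, not anything shown onstage. Each of thousands of GPU threads runs the same plain-C function on one element of an array.

```
// Our own minimal CUDA sketch, illustrative only: a kernel is plain C,
// executed by thousands of threads at once.
#include <stdio.h>
#include <cuda_runtime.h>

// Each thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *x = (float *)malloc(bytes), *y = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

    // 4096 blocks of 256 threads cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]); // expect 4.0

    cudaFree(dx); cudaFree(dy); free(x); free(y);
    return 0;
}
```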
3:25PM: What can you do with this computational horsepower? Let me illustrate. [email protected] simulates life. Talking about how proteins fold and misfold. Can give us insight into diseases like Alzheimer’s.
Incredibly computationally intensive. Stanford researchers released software so people can donate computing power. [email protected] is equivalent to 288 teraflops. An enormous contribution from 2.6 million computers.
It would take only 24,000 GPUs to achieve 1.4 petaflops. 24,386 is the number of CUDA GPUs contributing to [email protected] in the world right now. (That works out to roughly 57 gigaflops sustained per GPU.)
IBM Blue Gene, fastest supercomputer, is 1 petaflops.
By making the GPU general purpose, overnight we have increased computational horsepower 100 times, advanced research by 10 years–an extraordinary discontinuity.
3:32PM: Now talking about automotive design and digital prototyping. Intros CEO of RTT, Peter Stevenson.
Peter: We’re taking CAD data and “building” the vehicle without actually building the vehicle.
Jen-Hsun: Why is it so hard to create a virtual car?
Peter: The huge amount of variance involved. Lots of different materials.
Jen-Hsun: They want to project an image of this car on a wall and make it seem real in every detail. Lots of geometric detail. Paint colors are very complex. And the headlights have to be designed like jewelry.
Peter: In automotive, the car company is responsible for every square inch of the vehicle. They design the headlights, give it to suppliers, say “build this.” That’s where ray-tracing comes in.
Scene on the big screen shows the prototyping in real time. An absolutely gorgeous Lamborghini, simulated down to the most exacting detail, running at maybe 15-20 FPS. Interactive, with mouse view controls.
Uses a hybrid renderer with some ray-tracing. Lamborghini will only be building 20 of these cars. Since they couldn’t have them in showrooms, they made a movie using this and sent it to top clients, who purchased the model run.
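For the uninitiated: the core primitive in ray tracing is intersecting a ray with geometry. Here’s a minimal ray-sphere test of our own, in CUDA-style C, just to illustrate the operation; RTT’s hybrid renderer is obviously vastly more sophisticated.

```
// Our own hedged sketch of ray tracing's core operation: a ray-sphere test.
#include <math.h>

struct Vec3 { float x, y, z; };

__host__ __device__ float dot(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns distance along the ray to the nearest hit, or -1 on a miss.
// Ray: origin o, unit direction d. Sphere: center c, radius r.
__host__ __device__ float raySphere(Vec3 o, Vec3 d, Vec3 c, float r)
{
    Vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
    // Solve |o + t*d - c|^2 = r^2, a quadratic in t (a == 1 for unit d).
    float b = 2.0f * dot(oc, d);
    float cc = dot(oc, oc) - r * r;
    float disc = b * b - 4.0f * cc;
    if (disc < 0.0f)
        return -1.0f;                    // ray misses the sphere
    float t = (-b - sqrtf(disc)) * 0.5f; // nearer of the two roots
    return (t >= 0.0f) ? t : -1.0f;
}
```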
Now looking at car interior. Suede looks… suedey. Leather looks real. Zoom in on stitching, is accurate. Very nice.
3:43PM: Peter’s out, moving on…
Talking about Google Earth. 400 million unique downloads. Talking mash-ups and overlays. Street views.
Why don’t we fly to Paris? Google Earth onscreen takes us there.
People are taking pictures of the screen. Has no one told them this is free?
3:47PM: Massive global phenomenon: MMOGs. Slide shows projected exponential growth of MMOGs. TR staff productivity reaches zero in 2013.
We have been working with a new company called Nurien on a new genre of MMO. He welcomes Nurien’s CEO onstage.
Sweet! This is the virtual fashion show thing. I’ve seen this. It’s like 95% fan service.
Business model is based on micro-transactions. Buy clothing, items, etc. to distinguish your character. Somewhat evil!
Demo time: Virtual Jen-Hsun in the game. He was a little generous with the height.
Slider changes the facial shape.
His virtual girlfriend shows up. Virtual Jen-Hsun is kissing her.
And it’s a fashion show! Haha. The clothes, dresses, and hair all move realistically via physics simulation.
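For the curious, cloth and hair like this are commonly simulated as particles connected by springs. A hedged sketch of a single Verlet integration step, our own illustration rather than Nurien’s actual code:

```
// Our own sketch: one Verlet step for particles in a mass-spring cloth mesh.
__global__ void verletStep(int n, float dt,
                           const float3 *force,  // accumulated spring + gravity forces
                           float3 *pos, float3 *prevPos)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 p = pos[i];
    // Verlet: next = 2*p - prev + a*dt^2 (mass folded into force).
    float3 next;
    next.x = 2.0f * p.x - prevPos[i].x + force[i].x * dt * dt;
    next.y = 2.0f * p.y - prevPos[i].y + force[i].y * dt * dt;
    next.z = 2.0f * p.z - prevPos[i].z + force[i].z * dt * dt;

    prevPos[i] = p;
    pos[i] = next;
}
```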
Virtual Jen-Hsun is breakdancing. Jen-Hsun: “I do that every day.”
Now we’re looking at a Sims-like virtual apartment with a female avatar. Showing how we can modify her easily. Best avatar system I’ve ever seen, blows away most MMOs and RPGs.
They are going to sell some of the animations the avatars can do during chats with micro-transactions.
People can design fashions and have a brand in the virtual world. Can edit and create new furniture, as well.
Have a dance game element, since dance games are so popular in Asia.
Jen-Hsun: This is going to be the next Facebook.
4:03PM: Talking about SportVision, which did work for the visual presentation of the Olympics. CTO of SportVision, Marv White, comes onstage.
These guys do virtual first-down line in football, strike zone in baseball.
The camera uses sensors to read positioning data in order to do perspective correction. The system avoids drawing over the players by keying on colors, although the Green Bay Packers on green astroturf are a problem.
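That color keying boils down to a per-pixel test. A rough CUDA-style sketch of the idea, with made-up names and thresholds:

```
// Our own sketch: paint the virtual line only where the underlying pixel is
// close to the field color, so players naturally occlude the line.
__global__ void drawLine(int n, const uchar3 *frame, const unsigned char *lineMask,
                         uchar3 *out, uchar3 lineColor, uchar3 fieldColor, int thresh)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    uchar3 p = frame[i];
    int dr = p.x - fieldColor.x, dg = p.y - fieldColor.y, db = p.z - fieldColor.z;
    int dist2 = dr * dr + dg * dg + db * db;

    // Key: inside the line's screen region AND pixel looks like field color.
    out[i] = (lineMask[i] && dist2 < thresh * thresh) ? lineColor : p;
}
```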
Showing how they augment replays, track player and camera motion. Which is cool. Also have a virtual 3D camera tech.
Using simulations of computational fluid dynamics and physics to do some visualizations, including one that shows the effect of drafting on race cars in NASCAR.
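The drafting effect itself reduces to the standard drag equation, F = 0.5 * rho * Cd * A * v^2. A hedged sketch; the drafting reduction factor here is purely our assumption:

```
// Our own sketch of the physics being visualized: aerodynamic drag, with
// drafting modeled as a cut in the effective drag coefficient.
__host__ __device__ float dragForce(float v, bool drafting)
{
    const float rho = 1.225f;  // air density, kg/m^3
    const float Cd  = 0.35f;   // rough stock-car drag coefficient (assumed)
    const float A   = 2.6f;    // frontal area, m^2 (assumed)
    float cd = drafting ? Cd * 0.7f : Cd;  // ~30% cut in the draft (assumed)
    return 0.5f * rho * cd * A * v * v;    // Newtons
}
```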
Video of ESPN dudes talking about the NASCAR drafting effect.
4:13PM: Visual computing is also transforming photography. He wants to show us an example of computational photography. Most of the time, photographs rely on the lighting of the room, and it can be hard to get the proper exposure. With a technique called HDR, using computational capabilities to merge and combine the images from multiple exposures, we can now substantially enhance images. HDR is the first example. In the future, we could re-focus the image after the picture has been taken.
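The merging he describes can be sketched per pixel. Here’s our own hedged take on an HDR exposure merge; real pipelines also recover the camera’s response curve, which this skips:

```
// Our own sketch: merge several exposures per pixel, weighting well-exposed
// samples most heavily (assumes a linear camera response).
__global__ void mergeHDR(int nPixels, int nExposures,
                         const float *images,       // nExposures * nPixels, normalized [0,1]
                         const float *exposureTime, // seconds per exposure
                         float *out)                // recovered radiance per pixel
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPixels) return;

    float sum = 0.0f, wsum = 0.0f;
    for (int e = 0; e < nExposures; e++) {
        float z = images[e * nPixels + i];
        // Hat weight: trust mid-tones, distrust clipped shadows/highlights.
        float w = 1.0f - fabsf(2.0f * z - 1.0f);
        sum  += w * (z / exposureTime[e]); // radiance estimate from this exposure
        wsum += w;
    }
    out[i] = (wsum > 0.0f) ? sum / wsum : 0.0f;
}
```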
To tell us more about Photosynth, Joshua from Microsoft Live Labs is here.
App launched about 90 hours ago. Available for free from the website.
Demo! Reconstruction of Stonehenge, multiple pictures stitched together in correct relation to one another. Generates a three-dimensional point cloud based on common elements from multiple images, which is fricking incredible. Image positions are “projected” based on this data.
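That projection step is standard pinhole-camera math once the solver has recovered each camera’s pose. A hedged sketch, with matrix layout and names our own:

```
// Our own sketch: map a 3D point-cloud point to a 2D pixel with a pinhole model.
// R: 3x3 row-major rotation, t: translation, f: focal length in pixels,
// (cx, cy): image center. Returns false if the point is behind the camera.
__host__ __device__ bool project(const float R[9], const float t[3], float f,
                                 float cx, float cy, const float p[3],
                                 float *u, float *v)
{
    // Transform the world point into camera coordinates: pc = R*p + t.
    float xc = R[0]*p[0] + R[1]*p[1] + R[2]*p[2] + t[0];
    float yc = R[3]*p[0] + R[4]*p[1] + R[5]*p[2] + t[1];
    float zc = R[6]*p[0] + R[7]*p[1] + R[8]*p[2] + t[2];
    if (zc <= 0.0f) return false;

    // Perspective divide, then scale by focal length and shift to center.
    *u = f * (xc / zc) + cx;
    *v = f * (yc / zc) + cy;
    return true;
}
```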
Another example. Some dude took a bunch of pictures of a house, and a full model (point cloud) was built from them. Impressive, but it’s based on 207 images of the house.
4:24PM: Now we’re in the National Archives. We can look at the Declaration of Independence, zooming in and out of a 50-megapixel scan, all streaming in over the Internet. Applause.
People will be able to collaborate on point cloud creation by geotagging their images, letting them be mashed together.
Here’s a point cloud based on screenshots from Halo 3.
And Joshua’s finished.
4:30PM: We’re really excited about “dimensionalization.” Time to put on your 3D glasses, folks. Nvidia demo onscreen.
Holy crap, it’s 3D!
Time for another 3D demo, this time from Age of Empires III.
Man, the RTS perspective looks awesome on a 3D display.
I have a headache from the glasses, but the pure fashion statement here is bold.
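For the curious: one common way to dimensionalize a game is to render two views whose per-pixel horizontal offset (disparity) depends on depth. A hedged sketch of such a disparity function; the constants are our assumptions, not Nvidia’s:

```
// Our own sketch of stereo disparity from depth. Objects at the convergence
// distance sit on the screen plane; nearer ones pop out, farther ones recede.
__host__ __device__ float disparityPixels(float depth)
{
    const float eyeSeparation = 30.0f; // max disparity in pixels (assumed)
    const float convergence   = 10.0f; // screen-plane distance, world units (assumed)
    // Zero at the convergence plane, approaches eyeSeparation at infinity,
    // negative (out-of-screen) for closer objects.
    return eyeSeparation * (1.0f - convergence / depth);
}
```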
4:38PM: Jeff ??? comes out, the guy who came up with the touch UI for the iPhone.
Jeff: I started out as a graphics guy. I really loved interactive graphics, watching the results change in front of you.
4:40PM: Battery alarm on the laptop. Doh!! Looks like the Wi-Fi here is sapping my juice too quickly. This may soon be over.
Demo of new multitouch interface. 100″ diagonal multitouch screen. Arbitrary number of contacts supported, and it senses pressure. Can draw on it, drag stuff around.