Archive for the ‘3D Scanning’ Category


Rock Solid Kinect Mount (for Computer Vision)

April 24, 2012

The Kinect sensor does not come equipped with a tripod mounting screw. By now there are 8 million different ways to mount your Kinect to a tripod, your wall or the back of your TV. Unfortunately, they all mount to the bottom of the Kinect (beneath the tilt motor), leaving the Kinect to jiggle and wiggle like it is listening to “Gettin’ Jiggy wit It.”

If you need the Kinect sensor to stay put, say to register it to another camera (e.g. DSLR) or a projector, then you need to get rid of the wiggle.
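
For reference, once the rig is rigid, the registration itself boils down to estimating the fixed rotation and translation between the two cameras, which you can get from a standard checkerboard calibration. Below is a minimal sketch using OpenCV; the board size, square size and function names are illustrative assumptions, not code from our setup.

```python
# Minimal sketch: estimate the rigid transform (R, T) between the Kinect's RGB
# camera and a second camera (e.g. a DSLR) from checkerboard images captured
# by both cameras at the same time. Names and parameters are illustrative.
import cv2
import numpy as np

def find_board_points(image_paths, board=(9, 6), square=0.025):
    """Detect checkerboard corners; returns 3D board points and 2D image points.
    For clarity this sketch assumes the board is found in every image."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    return obj_pts, img_pts

def register_cameras(kinect_paths, dslr_paths, K1, d1, K2, d2, image_size):
    """Estimate R, T mapping Kinect-camera coordinates into DSLR coordinates.
    Assumes the i-th image in each list shows the same board pose and that the
    per-camera intrinsics (K, d) were calibrated beforehand."""
    obj_pts, kinect_pts = find_board_points(kinect_paths)
    _, dslr_pts = find_board_points(dslr_paths)
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, kinect_pts, dslr_pts, K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R, T
```

The whole point of the rigid mount is that R and T stay valid between captures; if the Kinect wiggles, you have to recalibrate.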

Our research group has come up with two methods to mount directly to the Kinect.

Disclaimer: Both of these methods involve modifying the Kinect. You WILL do permanent and irrevocable damage to your Kinect if you continue down this path. We in no way claim that this is safe for you, others or your Kinect sensor.

Method 1: (developed by Kevin Karsch, implemented by Rajinder Sodhi)

This is the simplest, easiest and probably the recommended method.

Here we attach a tripod mount directly to the body of the Kinect (instead of the wiggly bottom) with adhesive. You might be able to use an off-the-shelf adhesive camera mount such as:

EPIC Contoured Adhesive Mount

Kodak 1960806 Action Mount

We were in a time crunch, so we headed out to Radio Shack and bought a cheap tripod:

50″ 4-Section Tripod, Model: TV-1743

We then sawed off the top of the tripod, retrieving only the mount.

This was then glued with epoxy directly to the body of the Kinect. We attached it to the top of the Kinect in order to rigidly mount another camera above it. The same process would also work to attach a tripod mount to the bottom of the Kinect.

Method 2: (developed by Brett Jones)

Here we use the existing screw holes of the Kinect to rigidly mount it to a piece of plexiglass. This is definitely the most robust method, but also the most time-consuming and costly (and dangerous?). You will need a sheet of plexiglass (acrylic), some screws, Torx security screwdrivers, a power drill and a power jigsaw. Do not proceed unless you are comfortable with power tools.

First we took off the tilt motor and the now-useless base of the Kinect. We followed the directions on iFixit, up to step 5. This requires purchasing a set of Torx (T6 & T10) security screwdrivers. Basically, you remove the protective rubber pad on the base, then open the base using the T6 screwdriver.

We then cut off the plastic from the base using some metal clippers. Use scissors, a hacksaw, whatever you can find.

Now we will use the original screw holes to attach the Kinect to an acrylic (plexiglass) base.

Follow steps 6 & 7 on iFixit to remove the bottom T10 security screws. We went to the local hardware store (Lowes) and bought some longer screws that fit the original holes. The thread pitch on the screws will not match perfectly, so this will most likely permanently damage the screw holes. (You won’t be able to put everything back together.) I don’t remember the exact screws we bought, so just bring in one of the Kinect screws and find one that is the same diameter but longer (around 1.5 inches). You will also need matching nuts.

We also need a screw to attach the base to the tripod, so pick up a 1/4 inch screw that is 1-1.5 inches long, with a matching nut.

We then need to cut out a piece of acrylic glass that will serve as a rigid body to attach to. We bought a 1/4 inch thick piece of acrylic glass at Lowes, roughly measured the size of the Kinect and cut out a rectangle. Place the acrylic glass on the bottom of the Kinect and mark the locations of the 6 screw holes. Note where the cord comes out of the bottom of the Kinect. Also note where you would like to mount it to the tripod (taking the center of gravity of your camera rig into account).

Now, with a power drill (and all proper safety equipment), make 6 holes for the screws and one for the 1/4″ tripod screw. Then cut out a hole for the Kinect cord with a drill/jigsaw (it will have to be big enough to thread the large USB end through).

Finally, thread the cord through, insert the six screws and screw them into the Kinect. Attach the 1/4″ screw for the tripod mount.

Voilà! Now you have a rock-solid Kinect mount.

Pro-tip: Check out 80/20, which sells a bunch of metal parts that connect with 1/4″ screws and are great for mounting cameras. They call themselves “An industrial erector set.”

Questions? Email me. I’ll update this post with more info.


Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces

October 22, 2010

Andy Wilson and Hrvoje Benko over at Microsoft Research are working on some cool interactive projection techniques that extend beyond a normal display surface. The project uses multiple depth-sensing cameras and projectors, which allow users to interact with virtual elements in mid-air. Users can pick up a virtual object and see it in their hand as they walk to another display surface to transfer the object.


Build Your World and Play In It

October 6, 2010

Update: “Build Your World and Play In It…” won Best Student Paper @ ISMAR 2010!

Next week we will be presenting our latest work at the IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2010) in Seoul, South Korea. ISMAR is the main augmented reality conference.

Novel display and interaction technology has enabled projecting digital content onto complex physical surfaces, freeing content from the confines of a limited, flat monitor display. These projection-based interfaces are becoming increasingly feasible for ubiquitous display and interaction as projectors decrease in size, cost and power consumption. Everyday passive objects can be transformed into interactive display surfaces through the use of novel projector-camera systems based on Spatial Augmented Reality. Walls, desks, foam core, clay models, wooden blocks and sand pits, among other materials, can be turned into interactive displays, allowing for a new range of interaction possibilities. By acquiring a full 3D model of the display surface, we can interact directly with the world around us.

Abstract

We present a novel way of interacting with everyday objects by representing content as interactive surface particles. Users can build their own physical world, map virtual content onto their physical construction and play directly with the surface using a stylus. A surface particle representation allows programmed content to be created independent of the display object and to be reused on many surfaces. We demonstrate this idea through a projector-camera system that acquires the object geometry and enables direct interaction through an IR tracked stylus. We present three motivating example applications, each displayed on three example surfaces. We discuss a set of interaction techniques that show possible avenues for structuring interaction on complicated everyday objects, such as Surface Adaptive GUIs for menu selection. Through an informal evaluation and interviews with end users, we demonstrate the potential of interacting with surface particles and identify improvements necessary to make this interaction practical on everyday surfaces.

For more details see the paper: pdf
Or ask!


Build Your Own 3D Scanner w/ Structured Light

November 23, 2009

I was about to embark on finally fixing my homemade structured light implementation (created in a Computer Vision class), when I stumbled across this amazing site.

Recently at SIGGRAPH 2009 there was a course on 3D scanning with structured light. I wasn’t able to make SIGGRAPH this year because I was working at Walt Disney Imagineering, so I didn’t find out about this course until this week. The course will also be at SIGGRAPH Asia on Dec. 16th, so make your way over to Japan and check it out.

The course features extensive notes explaining structured light and basic Matlab and C++ implementations.

Essentially, structured light uses a projector and a camera to create a 3D scan. Normally, humans perceive depth by triangulating the location of an object using our left and right eyes. In structured light you can think of the camera as your left eye and the projector as your right eye. By projecting a series of uniquely identifiable codes, the camera can determine which projector pixel falls on each surface point it sees, so the camera effectively sees on behalf of the projector. The hardest part of implementing this is the calibration of the projector and camera. This involves determining the parameters of the lenses and the locations/orientations of the projector and camera.
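
To make the “uniquely identifiable codes” idea concrete, here is a rough sketch of binary Gray-code stripes and their decoding. This is a simplified stand-in for the course code: the resolutions and the fixed threshold are my own assumptions, and a real implementation compares each pixel against white/black reference images instead.

```python
# Sketch: Gray-code stripe patterns identify which projector column each
# camera pixel sees. Resolutions and thresholding are simplified assumptions.
import numpy as np

PROJ_WIDTH = 1024                      # projector columns (2^10)
PROJ_HEIGHT = 768                      # projector rows
NUM_BITS = int(np.log2(PROJ_WIDTH))    # one stripe pattern per bit

def gray_code_patterns():
    """Generate NUM_BITS stripe images, most significant bit first;
    pattern b encodes bit b of the Gray code of each projector column."""
    cols = np.arange(PROJ_WIDTH)
    gray = cols ^ (cols >> 1)                        # binary -> Gray code
    patterns = []
    for b in range(NUM_BITS - 1, -1, -1):
        stripe = ((gray >> b) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(stripe, (PROJ_HEIGHT, 1)))
    return patterns

def decode(captured):
    """Recover the projector column at every camera pixel from the captured
    pattern images (same order as gray_code_patterns). 'captured' is a list
    of grayscale uint8 images; the fixed 127 threshold stands in for the
    per-pixel white/black reference a robust system would use."""
    h, w = captured[0].shape
    gray = np.zeros((h, w), np.int32)
    for img in captured:                             # accumulate bits, MSB first
        gray = (gray << 1) | (img > 127).astype(np.int32)
    # Gray code -> binary column index: XOR with all higher-order shifts
    col = gray.copy()
    shift = gray >> 1
    while shift.any():
        col ^= shift
        shift >>= 1
    return col   # col[y, x] = projector column seen at camera pixel (x, y)
```

Once every camera pixel knows its projector column, the calibrated camera and projector can triangulate a 3D point per pixel.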

I’ve spent some time modifying the above code (which was very similar to my own implementation a few months ago, only it works :-P). The SL code now works with the Canon EDSDK. I am using my Canon T1i DSLR as the camera and an old NEC VT540 projector (1000 lumens).

I have also modified the calibration process to use a red/blue checkerboard pattern. In my opinion this makes the projector calibration step easier and more robust (I have no facts to back this up). Essentially, you use a red/blue checkerboard pattern which appears black/white in intensity under red light. However, the pattern is roughly gray in intensity under white light. This technique comes from “A Novel Method for Structured Light System Calibration” by Zhang, S. & Huang, P.
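
Here is a rough sketch of that idea (not the actual course code or Zhang & Huang’s implementation; the board dimensions and OpenCV calls are my own illustration): render a red/blue board for printing, then detect the printed corners from the red channel of a photo taken while the projector floods the board with red light.

```python
# Sketch of the red/blue board idea: under projected red light the printed
# board reads as black/white, so the standard checkerboard detector works;
# under white light it looks near-uniform gray and does not fight the
# projected calibration pattern. Sizes and thresholds are assumptions.
import cv2
import numpy as np

def make_red_blue_board(inner=(9, 6), square_px=80):
    """Render a red/blue checkerboard image (BGR) for printing."""
    rows, cols = inner[1] + 1, inner[0] + 1
    board = np.zeros((rows * square_px, cols * square_px, 3), np.uint8)
    for r in range(rows):
        for c in range(cols):
            color = (0, 0, 255) if (r + c) % 2 == 0 else (255, 0, 0)  # red / blue in BGR
            board[r*square_px:(r+1)*square_px, c*square_px:(c+1)*square_px] = color
    return board

def find_board_under_red_light(photo_bgr, inner=(9, 6)):
    """Detect the printed corners in a photo taken under projected red light;
    the red channel carries the black/white contrast."""
    red = photo_bgr[:, :, 2]
    found, corners = cv2.findChessboardCorners(
        red, inner,
        flags=cv2.CALIB_CB_ADAPTIVE_THRESH | cv2.CALIB_CB_NORMALIZE_IMAGE)
    if found:
        corners = cv2.cornerSubPix(
            red, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
    return found, corners
```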

I will be posting my modified source code soon. I would be very interested in turning this into a full-fledged open source project. With a minimal amount of work we could probably support most of the common camera SDKs (e.g. Point Grey, Dalsa, etc.). It would also be nice to make a GUI and support a wider range of output options. Any takers?
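
As a sketch of what supporting multiple camera SDKs might look like (the class and method names here are hypothetical, not code from the project), each vendor SDK would sit behind a tiny capture interface that the rest of the scanner uses:

```python
# Hypothetical sketch of a camera-backend interface for the scanner; each
# vendor SDK (Canon EDSDK, Point Grey, Dalsa, ...) would wrap its own calls
# behind the same two methods. Names here are made up for illustration.
from abc import ABC, abstractmethod
import cv2
import numpy as np

class ScanCamera(ABC):
    """Minimal capture interface the scanning loop needs from any backend."""

    @abstractmethod
    def capture(self) -> np.ndarray:
        """Trigger one exposure and return the frame as a grayscale image."""

    @abstractmethod
    def close(self) -> None:
        """Release the device."""

class OpenCVCamera(ScanCamera):
    """Fallback backend for anything cv2.VideoCapture can open (e.g. webcams)."""

    def __init__(self, index=0):
        self._cap = cv2.VideoCapture(index)

    def capture(self) -> np.ndarray:
        ok, frame = self._cap.read()
        if not ok:
            raise RuntimeError("capture failed")
        return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    def close(self) -> None:
        self._cap.release()
```

A GUI and extra output formats could then be layered on top without touching the individual backends.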

Check back soon for code and goodies.