More videos

8 11 2012

 





Videos

23 10 2012

Here are some videos of the prototype application we have built with the mapinect framework.

 





Blogo

16 08 2012

We’re in the final stage, working on two applications that show off the framework’s capabilities and are real eye candy.

We’re proud to present our logo, which will soon be available at http://www.fing.edu.uy/grupos/medialab/





Buttons, buttons everywhere

4 07 2012

Today we recorded a short video showing the capability of generating buttons on almost any surface. The way it works is really simple, but first I should explain a bit about the architecture of the framework.

If you want to write an application using the Mapinect framework, your application has to implement an interface and gets access to an API. There are basically two ways of communicating with the framework: through the API and through callbacks, which is why the interface has to be implemented. The difference lies in the direction of the communication: seen from the framework’s side, the API listens to requests coming from the application, while the callbacks notify the application of events, and this second kind of communication is asynchronous.
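
To make the two paths more concrete, here is a rough sketch of what that contract could look like. None of the names below (IMapinectApplication, MapinectAPI, the callback methods) are taken from the actual framework; they are assumptions chosen only to illustrate the shape of the communication.

```cpp
// Hypothetical sketch of the two communication paths (not the real mapinect API):
// the application implements a callback interface (framework -> application,
// asynchronous) and talks to the framework through an API object
// (application -> framework, on demand).
class IMapinectApplication {
public:
    virtual ~IMapinectApplication() = default;
    // Callbacks: the framework notifies the application when events occur.
    virtual void objectDetected(int objectId) = 0;
    virtual void objectLost(int objectId) = 0;
};

class MapinectAPI {
public:
    // API calls: the framework listens to requests coming from the application.
    int detectedObjectCount() const { return 0; /* placeholder */ }
};
```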

So, having covered the basics of how the communication works, let’s explain how to use it. You first define a button. Then there are two callbacks, called ButtonPressed and ButtonReleased, through which the framework notifies the application depending on the action performed (the names of the functions are self-explanatory).

At the moment we’re experimenting with two types of buttons: the Simple Button and the Draggable Button. They’re both implementations of the IButton interface, which defines the most basic operations a button needs to implement. The Simple Button, as its name says, is the most basic one. Your application instantiates a Simple Button by defining a polygon (through its vertices), the idle color and the pressed color, and registers it with the Button Manager. By doing so, the Mapinect framework will render the button inside the defined polygon and change its color depending on the pressed state.

The second type of button we have implemented is the Draggable Button. It is an extension of the Simple Button (instantiated the same way) that allows the end user to drag the button by pressing it and moving it around the table. The movement of this button is also managed by the Mapinect framework; there’s no extra work in the application implementation.
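
Since the post doesn’t show the exact signatures, here is a minimal, self-contained sketch of the flow just described. IButton, SimpleButton and ButtonManager below are hypothetical stand-ins with assumed constructors and callback names, not the real mapinect classes.

```cpp
#include <memory>
#include <utility>
#include <vector>

struct Color   { float r, g, b; };
struct Point3D { float x, y, z; };

// Assumed interface: the framework dispatches touch events to the button.
class IButton {
public:
    virtual ~IButton() = default;
    virtual void buttonPressed()  = 0;   // a finger landed inside the polygon
    virtual void buttonReleased() = 0;   // the finger was lifted
};

// Assumed Simple Button: a polygon plus an idle and a pressed color.
// Rendering and the color change are handled by the framework.
class SimpleButton : public IButton {
public:
    SimpleButton(std::vector<Point3D> polygon, Color idle, Color pressed)
        : polygon_(std::move(polygon)), idle_(idle), pressed_(pressed) {}
    void buttonPressed() override  { /* application reaction to the press   */ }
    void buttonReleased() override { /* application reaction to the release */ }
private:
    std::vector<Point3D> polygon_;
    Color idle_, pressed_;
};

// Assumed Button Manager: the framework-side registry that renders the
// registered buttons and forwards touch events to them.
class ButtonManager {
public:
    void registerButton(std::shared_ptr<IButton> button) {
        buttons_.push_back(std::move(button));
    }
private:
    std::vector<std::shared_ptr<IButton>> buttons_;
};

int main() {
    ButtonManager manager;
    // The button polygon is defined by its vertices on the projection surface.
    std::vector<Point3D> polygon = {
        {0.0f, 0.0f, 0.0f}, {0.1f, 0.0f, 0.0f},
        {0.1f, 0.1f, 0.0f}, {0.0f, 0.1f, 0.0f}
    };
    manager.registerButton(std::make_shared<SimpleButton>(
        polygon, Color{0.2f, 0.2f, 0.8f}, Color{0.8f, 0.2f, 0.2f}));
    return 0;
}
```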





A brave touch world

18 06 2012

Hey everyone,

It’s been a while since our last post, but we have kept working on our prototype.

Here is an update on the touch capability, which makes use of several techniques discussed in previous posts and splits the cloud processing between touch events and object detection events.

Using this approach we get better cloud-processing performance and more responsive touch feedback.
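
The post doesn’t detail how the split is implemented, but the idea can be sketched roughly as follows: run the lightweight touch detection on every incoming cloud, and run the heavier object-detection pipeline less often on a downsampled copy. The helper names, the downsampling and the rates are all assumptions, not the project’s actual code.

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>

// Hypothetical stubs standing in for the real processing stages.
void detectTouches(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr&) { /* fast path */ }
void detectObjects(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr&) { /* slow path */ }

void processFrame(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud, int frameNumber)
{
    // Touch events are checked on every frame to keep the feedback responsive.
    detectTouches(cloud);

    // Object detection runs every Nth frame on a voxel-downsampled cloud.
    const int kObjectDetectionPeriod = 5;          // assumption
    if (frameNumber % kObjectDetectionPeriod == 0) {
        pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZ>);
        pcl::VoxelGrid<pcl::PointXYZ> grid;
        grid.setInputCloud(cloud);
        grid.setLeafSize(0.01f, 0.01f, 0.01f);     // 1 cm voxels -- assumption
        grid.filter(*downsampled);
        detectObjects(downsampled);
    }
}
```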





Building the robotic arm

5 11 2011

We have discussed many things about the design of the robot arm since we started the project back in April. Let me recap the constraints we had:

  • We had 4 AX-12+ motors to articulate the arm.
  • It had to support the Microsoft Kinect and the 3M projector at the top.
  • It had to be table mountable.
  • It had to be easy to transport.
  • It had to be as cheap as possible 🙂

At the beginning we made some prototypes using an old Meccano set, but they had poor stability because of the shape of the Meccano pieces, as well as the ad hoc solution we had to come up with to attach the Kinect, the projector and the motors (Meccano pieces fit each other perfectly, but it’s really hard to attach external parts).

After several failed attempts we decided we needed something more “professional” than a Meccano, so we went to a workshop (fortunately, the father of one of the team members owns one) and explained our needs to the people there. We came up with a design that was really promising.

At its core there are two concentric tubes: the outer one is fixed to the table by two grips and the inner one is attached to a motor by an axle.
At the top of the tubes there is another motor with a rod attached to it. Here we faced a major issue: the motor couldn’t handle the weight of the Kinect and the projector at a distance of about 30 centimetres. So we decided to attach the Kinect and the projector to one side of the rod and a counterweight to the other side. This way the motor doesn’t have to lift the weight of the Kinect and the projector, since it is balanced out.
There’s a structure that clamps the Kinect and the projector together. This structure is attached to two motors: one rotates it parallel to the floor and the other one tilts it up and down.

So we started the construction phase. It included several trips to the workshop and many tests, but we finally got the arm built.


The motors perform pretty well and move fluently, but the second motor I described is at the limit of its capacity.

Another problem we faced was how to power the motors. The motors run on 12 V in parallel, and each one draws 900 mA while moving and 500 mA while holding position. The power supply we had at the beginning delivered 12 V at 1 A, but four motors drawing up to 900 mA each can peak at around 3.6 A, so every time we tried it the motors “fainted”. The solution we found was to get an old computer power supply, which delivers 12 V at 10 A, much more than we need. After this change the motors perform much better and we haven’t had any faints.

What we have been working on in the last few days is the ability to control the arm with a joystick. We succeeded, and the arm will probably be operated this way.





Object tracking using PCL

29 10 2011

After several weeks of development and rework, we’re getting close to the prototype for our project.

Our prototype will consist of a simple application that can detect, track and project onto objects placed on top of a table, using a Kinect mounted on a robotic arm.

I’d like to talk about the implementation we’ve made for tracking objects with Kinect and PCL.

As we mentioned in an earlier post, PCL offers a substantial set of methods for manipulating clouds of 3D points. Among its many functionalities, we focused on cluster detection and plane recognition.

A cluster is a group of elements. For our purposes, the universe is the 3D space and the samples are the points obtained from the Kinect’s depth camera. If we create a 3D view of the cloud, we can see that certain points can be grouped by proximity.

The clustering technique consists of creating groups of items based on a function that, for a pair of elements, returns a numerical value representing how close one sample is to the other; items with close values for some of their properties end up in the same group. In our case the notion of closeness is the Euclidean 3D distance.

Clustering then allows us to extract from the original cloud the group closest to the depth camera, discarding the junk information coming from objects outside the table (our world). The cluster extraction is configurable and comes with the PCL library. This first cluster holds the table together with the objects lying on top of it.
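
As a reference for how this step looks with PCL, here is a sketch of Euclidean cluster extraction; the tolerance and size thresholds are assumptions, not the values used in the project.

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

// Group the points of a Kinect cloud by Euclidean proximity.
std::vector<pcl::PointIndices> extractClusters(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(cloud);

    std::vector<pcl::PointIndices> clusterIndices;
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.02);    // points closer than 2 cm join the same cluster (assumed value)
    ec.setMinClusterSize(100);       // drop tiny clusters produced by sensor noise
    ec.setMaxClusterSize(250000);
    ec.setSearchMethod(tree);
    ec.setInputCloud(cloud);
    ec.extract(clusterIndices);      // each entry holds the indices of one cluster
    return clusterIndices;
}
```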

In the next step, we remove the table plane using PCL’s RANSAC-based plane segmentation, and apply the clustering technique once again, treating the resulting clusters as recognized objects.
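
A sketch of that step with PCL’s RANSAC plane segmentation is shown below; the distance threshold is an assumption.

```cpp
#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>

// Detect the dominant plane (the table) with RANSAC and keep only the points
// that are NOT part of it, i.e. the objects resting on the table.
pcl::PointCloud<pcl::PointXYZ>::Ptr removeTablePlane(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
    pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);

    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.01);        // 1 cm tolerance to the plane (assumed value)
    seg.setInputCloud(cloud);
    seg.segment(*inliers, *coefficients);

    pcl::PointCloud<pcl::PointXYZ>::Ptr objects(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(cloud);
    extract.setIndices(inliers);
    extract.setNegative(true);             // remove the plane inliers, keep the rest
    extract.filter(*objects);
    return objects;
}
```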

With the objects recognized in previous readings and the newly detected ones, we look for the best affine transformation, matching each new object cloud to the existing object with the nearest centroid. If a match exists, we update that object’s cloud; if not, the new cloud is added to the model as a new object.
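
The matching step can be sketched roughly as follows; the structure, the threshold and the update policy are assumptions meant only to illustrate the nearest-centroid idea, and the affine refinement between the matched clouds is left out.

```cpp
#include <cmath>
#include <limits>
#include <vector>
#include <pcl/common/centroid.h>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// Hypothetical model entry: an object we are already tracking.
struct TrackedObject {
    float cx, cy, cz;                               // centroid of the object's cloud
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud;
};

// Match a newly detected cluster to the tracked object with the nearest
// centroid; update it if the match is close enough, otherwise add a new object.
void updateModel(std::vector<TrackedObject>& model,
                 const pcl::PointCloud<pcl::PointXYZ>::Ptr& newCluster,
                 float maxMatchDistance = 0.05f)    // 5 cm -- assumed threshold
{
    Eigen::Vector4f c;
    pcl::compute3DCentroid(*newCluster, c);

    TrackedObject* best = nullptr;
    float bestDist = std::numeric_limits<float>::max();
    for (TrackedObject& obj : model) {
        const float dx = obj.cx - c[0], dy = obj.cy - c[1], dz = obj.cz - c[2];
        const float d = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (d < bestDist) { bestDist = d; best = &obj; }
    }

    if (best != nullptr && bestDist < maxMatchDistance) {
        // Same physical object: refresh its cloud and centroid.
        best->cloud = newCluster;
        best->cx = c[0]; best->cy = c[1]; best->cz = c[2];
    } else {
        // No close match: register a brand new object in the model.
        model.push_back({c[0], c[1], c[2], newCluster});
    }
}
```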

Using this process we keep track of the objects that stay visible in front of the Kinect’s depth camera. With this information we can move the robotic arm wherever we want and follow the objects, projecting a texture, a video or any other useful information onto them, on our way towards tangible interaction.