3D Scan/Kinect Makerbot/Howard Community College/Spring2012/P1550jme


Problem Statement


Use the Kinect to capture a 3D image and print it using the MakerBot.

Team Members


Three-dimensional (3D) scanning has been growing in popularity as a worthwhile project for engineers because it is a valuable tool in a multitude of applications. Using the Kinect, the team is going to capture a real-time model of a 3D object. This type of model capture is new to the world of 3D scanning and modeling. Once we have a model stored, we will have many ways to push the project forward, including printing the model with the MakerBot or changing the file format for reverse engineering.

Originally, 3D models were created from polygons made up of vertices and stored in Standard Tessellation Language (STL) format. In the 1980s and 1990s, non-uniform rational basis spline (NURBS), an algorithmic mathematical model commonly used in computer graphics for generating and representing curves and surfaces, became popular; it remains widely used today in computer-aided design (CAD) programs such as AutoCAD.

3D scanning works like radar. Sets of vertices in a three-dimensional coordinate system, known as point clouds, are created using a camera, computer software, and electromagnetic radiation (visible light, lasers, laser lines, or, in the case of the Kinect, a projected pattern of infrared dots). Through geometric analysis, the software finds the distance from the camera to the object reflecting the radiation. RapidForm XOR is an engineering tool that lets engineers quickly convert point clouds into solid surfaces usable in CAD. "For years, if you needed to really reverse engineer something, you would scan the part, you'd go through all the work of making the clean mesh and/or NURBS model, and then you'd open it in CAD and manually rebuild a new part model over top of the scanned geometry."

Applications of 3D scanning range from testing aerospace structures through test flights, wind tunnels, and computational fluid dynamics (CFD) to creating compression masks for burn victims. "The old method involved taking an impression of the patient's face using the same material a dentist uses on teeth. This required anesthetizing the patient, and it took approximately 10 hours of OR staff and technician time." Starting this semester at Howard Community College, we want to expand our horizons by creating a fun way to learn through reverse engineering.




With the team decided on 3D scanning as a project, we presented possible solutions to our professor and peers on how to make 3D scanning applicable at HCC. We needed to convince them that we could design a system that would be beneficial to others interested in reverse engineering through 3D scanning. Requirements were made regarding which subsystems we were going to use. Our goal was to use the Microsoft Xbox 360 Kinect and the MakerBot to successfully scan an object and print it.

Researching these subsystems led us to create additional requirements for the software we would use. For example, we discovered that the technology is not readily available to go from 3D scanning directly to 3D printing with the click of a button, so we searched for a solution that would make the process as straightforward as possible. First, we found software to scan objects with the Kinect. Processing is open-source software that we were able to use with the Kinect for scanning; however, it was not coded to create a point cloud of an object in its full 360-degree entirety. One program, ReconstructMe, had several successful videos on YouTube, but we were not able to replicate that success: even after installing all the drivers, the executables still raised errors stating that a binary file was missing. RGB-Demo was the most successful and intuitive open-source software for full 360-degree 3D scanning that we tried. We were able to run two different executables that capture point clouds and save them as .ply meshes with a single click; however, both captured only single-perspective views of the object.

RGB-Demo also included executables for the programs we needed to capture an object in 360 degrees. One was for calibrating the Kinect; it ran briefly in a DOS window but never opened its program window. The other was for capturing the point cloud, and it would not display the Kinect's view in the program window as seen in YouTube videos. Scanning an object in 360 degrees can be done in multiple ways with the Kinect: by calibrating and using multiple Kinects (two or three), which RGB-Demo supports; by rotating the object in front of a single Kinect, which RGB-Demo also supports; or by moving the Kinect around the object, which RGB-Demo was not programmed to do. Since we only had one Kinect, we chose to rotate the object in front of it, which inspired us to build a turntable. Our first lazy Susan was a box taped to a piece of wood attached to a metal ball-bearing rotatable bracket that we turned by hand; later, one team member configured a DC motor with an Arduino to turn a piece of cardboard. Although scanning an object in its entirety proved to be a major obstacle that would affect the integrity of our project, we maintained a steady pace and pushed the project toward its next stages.
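The turntable approach works because each capture differs from the previous one by a known rotation, so the single-view point clouds can be rotated back into a common frame and merged. Here is a minimal sketch of that idea; the function names are ours, not RGB-Demo's, and real scans would also need noise filtering and alignment refinement.

```python
# Sketch: merging turntable scans taken every step_deg of rotation.
import numpy as np

def rot_y(deg):
    """Rotation matrix about the vertical (turntable) axis."""
    t = np.radians(deg)
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [ 0,         1, 0        ],
                     [-np.sin(t), 0, np.cos(t)]])

def merge_scans(scans, step_deg):
    """scans: list of (N_i, 3) clouds; undo each scan's turntable angle."""
    merged = [pts @ rot_y(-i * step_deg).T for i, pts in enumerate(scans)]
    return np.vstack(merged)

# Example: the same physical point "seen" at 0 and 90 degrees of rotation.
front = np.array([[0.0, 0.0, 1.0]])
side  = front @ rot_y(90).T          # simulate the object turned 90 degrees
cloud = merge_scans([front, side], 90)
# After merging, both observations land on the same point.
```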

3D modeling was the intermediate step of our project, so we evaluated the following 3D modeling programs: 123D, Maya, Blender, and MeshLab. For the modeling part of the project, we created tutorials on MeshLab and Blender. The tutorials explain how to use each program's tools to perform the complex algorithmic approximations that 3D modeling software relies on, such as those based on the normal (Gaussian) distribution, and they show the steps needed to take a point-cloud mesh and render it into a manageable, printable object file.
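One step our tutorials cover is turning a bare point cloud into a surface, which requires estimating a normal at each point. A common way to do this (and the idea behind MeshLab's normal-estimation filter, though its implementation is more elaborate) is to fit a plane to each point's neighbors; the direction of least spread is the normal. A minimal sketch:

```python
# Sketch: estimating a surface normal from a point's neighbourhood by
# plane fitting (smallest principal component of the neighbour set).
import numpy as np

def estimate_normal(neighbors):
    """neighbors: (N, 3) points near the query point; returns a unit normal."""
    centered = neighbors - neighbors.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction of least spread, i.e. the fitted plane's normal.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

# Points scattered on the z = 0 plane: the normal should be (0, 0, +/-1).
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                [1, 1, 0], [0.5, 0.3, 0]], dtype=float)
n = estimate_normal(pts)
```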

In the course of time, we got the hang of using the MakerBot and its ReplicatorG software. We accomplished the arduous task of changing the MakerBot's filament by heating the extruder to its target temperature and loosening the thumbscrew that presses the filament wire against the drive motor, which gave us the option of printing 3D objects in either black or white. We obtained printable models from the internet, which we printed for fun and as practice for when we finally created our own .stl file. As we suspected, a rendered .stl made from a single-perspective point cloud was not printable. We were able to create our own .stl models in Blender, but we have not yet discovered how to attach a base to an object file or mesh scanned with the Kinect so that it can be printed.
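The STL files mentioned throughout are simpler than they sound: an ASCII .stl is just a list of triangles, each with a normal and three vertices. Writing one by hand, as in this stdlib-only sketch, shows the format that Blender exports and ReplicatorG consumes (real exporters use many triangles and often the binary variant):

```python
# Sketch: writing a one-triangle ASCII STL file by hand.
def write_ascii_stl(path, triangles):
    """triangles: list of ((nx, ny, nz), [(x, y, z), (x, y, z), (x, y, z)])."""
    with open(path, "w") as f:
        f.write("solid scan\n")
        for normal, verts in triangles:
            f.write("  facet normal %g %g %g\n" % normal)
            f.write("    outer loop\n")
            for v in verts:
                f.write("      vertex %g %g %g\n" % v)
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write("endsolid scan\n")

# A single right triangle in the z = 0 plane, normal pointing up.
tri = ((0, 0, 1), [(0, 0, 0), (1, 0, 0), (0, 1, 0)])
write_ascii_stl("triangle.stl", [tri])
print(open("triangle.stl").read().splitlines()[0])   # -> solid scan
```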

3D scanning turned out to be a constructive project that can be built upon continually. It piqued an interest in each team member that fueled us to branch out into individual tasks, be creative, and push the project forward. Finally, we found a sense of achievement in documenting all of our work, whether that was testing possible solutions to problems with tools such as the MakerBot, which we did not know existed before this project, or creating detailed, concise tutorials that others can follow or use as a blueprint for their own.

Decision List

I. 3D Scanning
   A) How does it work?
      - Camera and laser line
   B) Software to scan 360 degrees
      - RGB-Demo v0.7.0
   C) What else might be needed?
      - Open-source software
II. 3D Modeling
   A) Meets requirements
      - Easy to use
   B) Fills multiple roles
      - Sampling and editing (Autodesk Maya)
      - Preset shapes and file types
III. 3D Printing
   A) Minimize required materials
   B) Keep the scope narrow
   C) Create tutorials

Material List


Nothing needed to be purchased for this project; all of the software and drivers can be downloaded free of cost.

1. For printing in 3D we needed the MakerBot Cupcake CNC
2. A Computer to open the software to print
3. Microsoft Kinect
4. RgbDemo software
5. Kinect drivers
6. Meshlab

Software List



1. AutoCAD, Blender, and Google SketchUp, to create STL files
2. ReplicatorG, to convert an STL file into G-code and print it
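The G-code that ReplicatorG generates is essentially a long list of movement commands. This sketch emits the moves for a single square perimeter so the format is recognizable; it is illustrative only, since real toolpaths also set temperatures, feed rates, and extruder state, and the function name is ours.

```python
# Sketch: emitting G1 moves for one square perimeter at a given layer height.
def square_perimeter(size, z):
    corners = [(0, 0), (size, 0), (size, size), (0, size), (0, 0)]
    lines = ["G1 Z%.2f" % z]                          # move to layer height
    lines += ["G1 X%.2f Y%.2f F1800" % c for c in corners]
    return "\n".join(lines)

print(square_perimeter(20, 0.35))
```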
Total time spent: 74 hours



Created (Tutorials For Repeating The Project):


Created (Tutorials For Troubleshooting The MakerBot):

Next Steps



We successfully scanned objects using the Kinect and converted them to file types used by the MakerBot 3D printer. We were unable to scan a full 360-degree view of an object, but we were able to print an object we created in Blender. We had a lot of fun using the MakerBot and finding interesting objects to print with it, and we encourage people to choose 3D scanning because it is fun and its applications for reverse engineering are enormous. All of the software and drivers can be found in the tutorials, along with the process of installing and running the demo software to get data from the Kinect. This project can always be improved upon, but we feel we have made significant progress that will let future students work through the scanning, modeling, and printing process quickly, so that the major troubleshooting left unsolved can be tackled in the first weeks of the project. We are confident that a scanned object will be printed before the next team's final presentation.