Capture of Figures - Photogrammetry for 3D modeling
Research @ MIT Media Lab | 2018
#imagecapture #3D #rhinoceros
During the summer of 2018, I was lucky to work as an Affiliated Researcher at the MIT Media Lab under Ph.D. candidate Artem Dementyev in the Responsive Environments Group. His thesis focuses on epidermal robots, and my role was to design 3D models and make wearables for the project, mainly through digital fabrication and prototyping with various 3D scanning methods.


Abstract
SkinBot is a 2 x 4 x 2 cm epidermal robot that moves over the skin surface with two-legged, suction-based locomotion and captures objective measurements of the body. What if it could track its own location on the body and scan or capture images of the skin? What if it helped correct body posture? And what could stiffness data of the skin tell us? All of these become possible with accurate 3D data of the body, which can be obtained by 3D scanning. My aim was to provide the basis for mapping locations on the body, specifically the human arm, by transferring the physical geometry into a computational system.
Agenda
Week 1-2: Researching & Exploring the Right 3D Scanner
Three-dimensional modeling with the parameters of a body part allows the object to be digitized. Once digitized, it can be modified and imported into other systems for different purposes. One of the most effective ways to generate an accurate 3D model is reverse engineering with 3D body scanners. There are various types of 3D body scanners, each with its own technology, resolution, and price point, from affordable consumer devices to those suited to small businesses. I created a flow chart summarizing the 3D body scanner options for different use cases. Based on my four years of experience with 3D scanners, a whole-body scanner is the most effective way to obtain accurate data, but, due to constraints, I decided to proceed with a handheld 3D scanner and a mobile application, which rely on similar techniques, for this project.
Qlone: 3D scanner mobile application
With the aim of exploring the mobile version of a 3D scanner, I downloaded an application called Qlone. Qlone is an all-in-one tool for 3D scanning. It scans objects quickly and easily with the phone's camera, lets you modify them in the app, and exports them to many platforms. However, the application has some disadvantages.
Disadvantages
- Low resolution: it struggled to capture accurate data and could not process complex objects; for example, it failed to reproduce the rounded contours of objects. (Check the pictures below.)
- Limited size: it can only scan objects that fit within the printable mat used as a guideline for the camera.



Takeaways
Unfortunately, due to its low resolution, it was not suitable for scanning a human arm. However, because it exports to a variety of formats (OBJ, STL, USDZ, GLB, PLY, X3D), I exported the model to AR/VR (augmented and virtual reality) to see what other innovative elements I could bring to the table. I saw its potential as a quick mock-up or demo of a 3D object for AR/VR (a minimal conversion sketch follows this item).
- Possible use in AR/VR
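
To illustrate the AR/VR takeaway, here is a minimal sketch of converting an exported scan into a single AR-friendly file. The filename is hypothetical and the open-source trimesh library is my own choice for the example; it is not part of Qlone.

```python
# Minimal sketch: convert an OBJ export into GLB for a quick AR/VR preview.
# "arm_scan.obj" is a hypothetical filename; trimesh is an open-source library
# chosen for illustration and is unrelated to Qlone.
import trimesh

# Load the exported mesh (force="mesh" collapses a multi-part scene into one mesh).
mesh = trimesh.load("arm_scan.obj", force="mesh")

# Sanity-check the export before sending it to an AR viewer.
print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")

# GLB packs geometry and appearance into one binary file that most
# mobile AR viewers can open directly.
mesh.export("arm_scan.glb")
```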
Another takeaway was the mechanism of the scanning process and its use of a matrix: it scans the object from two different angles, almost like building a dome, and automatically merges the points into a complete 3D result. The more layers of scans in the angular direction, the more accurate the data it collects. (See the pictures below, and the sketch after this list.)
- Mechanism of concentric layers of scanning
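
To make the dome idea concrete, the sketch below places camera viewpoints on concentric rings at increasing elevation around the object. The radius, ring count, and shots per ring are illustrative numbers I chose, not values used by Qlone.

```python
# Illustrative sketch of the "dome" capture pattern: camera positions sit on
# concentric rings at increasing elevation angles around the object.
# The radius and counts are arbitrary example values, not Qlone's internals.
import math

def dome_camera_positions(radius=0.3, rings=3, photos_per_ring=24):
    """Return (x, y, z) camera positions on concentric rings of a hemisphere."""
    positions = []
    for ring in range(1, rings + 1):
        # Elevation rises from near-horizontal toward the top of the dome.
        elevation = math.radians(90 * ring / (rings + 1))
        for shot in range(photos_per_ring):
            azimuth = 2 * math.pi * shot / photos_per_ring
            x = radius * math.cos(elevation) * math.cos(azimuth)
            y = radius * math.cos(elevation) * math.sin(azimuth)
            z = radius * math.sin(elevation)
            positions.append((x, y, z))
    return positions

# More rings (angular layers) means more viewpoints, more overlap between
# photos, and a denser merged point set.
print(len(dome_camera_positions()))  # 3 rings x 24 shots = 72 viewpoints
```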

Week 2-3: Collecting Images for Photogrammetry
Testing the handheld 3D scanner and Qlone pointed me toward a different approach to obtaining a 3D scan: photogrammetry. As opposed to 3D scanning, which uses dedicated devices to collect the data, photogrammetry builds the model from a collection of photographs taken from different angles. This method is particularly effective for capturing an arm or a leg, body parts connected to the rest of the body that require a full 360-degree rotation. After trial and error, the best approach when the subject rests on the ground was for the photographer to rotate around the body part instead. (See the drawing below; a rough planning sketch follows.)
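
As a rough planning aid for orbiting around a stationary limb, the sketch below estimates how many evenly spaced photos one full pass needs so that adjacent frames overlap enough for feature matching. The field-of-view and overlap values are assumptions for illustration, not measurements from the project.

```python
# Rough planning sketch: how many photos does one 360-degree orbit around the
# limb need for a given overlap between adjacent frames? The field of view and
# overlap target below are assumed example values.
import math

def photos_per_orbit(horizontal_fov_deg=65.0, target_overlap=0.7):
    """Number of evenly spaced photos per 360-degree orbit for a given overlap."""
    # Each new frame may advance by only the non-overlapping fraction of the FOV.
    step_deg = horizontal_fov_deg * (1.0 - target_overlap)
    return math.ceil(360.0 / step_deg)

print(photos_per_orbit())                    # ~19 photos at 65-degree FOV, 70% overlap
print(photos_per_orbit(target_overlap=0.8))  # a tighter overlap needs more photos
```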

Week 3-4: Testing Photogrammetry software, 3DF Zephyr
Aiming to construct high-resolution 3D models, I captured and compiled more than 500 pictures of multiple objects in a systematic order and processed them with the photogrammetry software 3DF Zephyr. (A small housekeeping sketch for keeping the photo sets organized follows.)
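
3DF Zephyr imports a folder of photographs, so keeping the 500+ images in a systematic order mostly comes down to file housekeeping. The sketch below copies one capture session into its own sequentially named folder, ordered by capture time; the folder and file names are hypothetical and this is not a 3DF Zephyr API.

```python
# Housekeeping sketch: copy one session's photos into a sequentially named
# folder, ordered by capture time, before importing the folder into 3DF Zephyr.
# Folder and file names are hypothetical examples.
import shutil
from pathlib import Path

def organize_session(source_dir, dest_dir, session_name="arm_session_01"):
    source = Path(source_dir)
    dest = Path(dest_dir) / session_name
    dest.mkdir(parents=True, exist_ok=True)

    # Sort by file modification time so the sequence matches the capture order.
    photos = sorted(source.glob("*.jpg"), key=lambda p: p.stat().st_mtime)
    for index, photo in enumerate(photos, start=1):
        shutil.copy2(photo, dest / f"{session_name}_{index:04d}.jpg")
    return len(photos)

# Example: organize_session("camera_dump", "zephyr_input")
```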
Achieved 3D modeling of the arm


Presentation
I presented my research on photogrammetry and 3D modeling to the group members, including Principal Investigator Joe Paradiso. Working as an Affiliated Researcher gave me this opportunity, and it became a stepping stone toward the next phase, digital fabrication: Epidermal Wearable. (Click to see the continued project.)

After creating the 3D model of an arm, I became interested in the versatile visual computational representations of a model: shaded, ghosted, x-ray, technical, and drawing. Each 3D model can be decomposed into points, lines, curves, surfaces, polygons, shades, and more. How could I modify and express the same subject in many different directions? The shaded model reveals more about the depth of the object's surface. The mesh built from polygons may correlate with how neural signals work in our skin and could lead to mapping a neurological network for our body, much as slime mold serves as biomimicry for establishing the most efficient routes between congested cities, typically by road or rail.
As a designer and engineer, seeing the relationships among all these visual elements lets me explore the relationship between computational data on skin stiffness and the density of patterns in the digitized 3D model. (A small decomposition sketch follows.)
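
As a first exploratory step in that direction, the sketch below decomposes a digitized arm mesh into its basic elements and looks at how finely the surface is sampled. It uses the open-source trimesh library and a hypothetical filename; it is not the project's actual analysis pipeline.

```python
# Exploratory sketch: decompose a digitized arm mesh into points, polygons, and
# surface area, and use per-face area as a simple proxy for local pattern density.
# "arm_scan.obj" is a hypothetical filename; trimesh is an open-source library.
import numpy as np
import trimesh

mesh = trimesh.load("arm_scan.obj", force="mesh")

# The raw building blocks: points (vertices), polygons (faces), and surface area.
print(f"vertices: {len(mesh.vertices)}")
print(f"faces:    {len(mesh.faces)}")
print(f"area:     {mesh.area:.4f} square units")

# Smaller faces mean the surface is sampled more finely in that region.
face_areas = mesh.area_faces
print(f"median face area: {np.median(face_areas):.6f}")
finest = face_areas[face_areas <= np.percentile(face_areas, 10)]
print(f"finest 10% of faces cover {finest.sum() / mesh.area:.1%} of the surface")
```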
Future Implications
