3D Reconstruction of Pottery Objects

Bachelor's Thesis Project

3D Reconstruction of Incomplete Archeological Objects.

Problem

In the cultural heritage field, pottery reconstruction in my country (Peru) incurs high personnel costs and requires months of physically building different preliminary models before the correct version can be chosen. Many solutions have been proposed to reduce the time and cost of this process by using computational resources.

Solution

With the rise of image-based Deep Learning models, my contribution was to implement and study the performance of a GAN-based Deep Learning model applied to the reconstruction of fractured pottery.

Technical Steps

  • Simulate fractures: I used a 3D benchmark dataset to obtain the original pottery models, but to train the Deep Learning model I had to generate several fractured variants of each pottery to serve as the training dataset. To achieve this, I used the PyMesh library, whose boolean operations let me remove parts of each object to simulate fractures (a minimal sketch follows the figure below).
Example of original and fractured model.
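The idea can be sketched as a boolean difference between the pottery mesh and an overlapping cutter volume. The file names and the box cutter geometry below are illustrative placeholders, not the exact cuts used in the thesis:

```python
# Sketch of fracture simulation with PyMesh boolean operations.
import numpy as np
import pymesh

# Load the original pottery mesh (path is hypothetical).
pottery = pymesh.load_mesh("pottery_original.obj")

# Build a box that overlaps the lower part of the pottery; subtracting
# it removes that region, simulating a fracture.
bbox_min, bbox_max = pottery.bbox
cutter = pymesh.generate_box_mesh(
    box_min=bbox_min,
    box_max=np.array([bbox_max[0], bbox_max[1],
                      bbox_min[2] + 0.3 * (bbox_max[2] - bbox_min[2])]),
)

# Boolean difference: pottery minus cutter = "fractured" pottery.
fractured = pymesh.boolean(pottery, cutter, operation="difference")
pymesh.save_mesh("pottery_fractured.obj", fractured)
```

Varying the cutter's size, position, and orientation yields many distinct fractured variants from a single original model.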
Multi-View Completion Network (MVCN) workflow for the archeological objects.

With this workflow, my next task was to generate multi-view depth maps for the original and fractured models. I used the Blender API to render the objects from different points of view and obtain their depth maps (a sketch of the rendering setup follows). After rendering the views, I used the Pillow library to combine the images into the specific dataset format.
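The following is a minimal sketch of rendering depth maps from 8 viewpoints with the Blender Python API (bpy). The orbit radius, output folder, and EXR format are illustrative assumptions, not the thesis's exact setup:

```python
# Sketch: render one depth map per camera viewpoint with bpy.
import math
import bpy
import mathutils

scene = bpy.context.scene
scene.use_nodes = True
bpy.context.view_layer.use_pass_z = True   # enable the depth (Z) pass

# Route the Z pass to a File Output node in the compositor.
tree = scene.node_tree
tree.nodes.clear()
render_layers = tree.nodes.new("CompositorNodeRLayers")
file_out = tree.nodes.new("CompositorNodeOutputFile")
file_out.base_path = "//depth_maps"        # hypothetical output folder
file_out.format.file_format = "OPEN_EXR"   # EXR preserves raw depth values
tree.links.new(render_layers.outputs["Depth"], file_out.inputs[0])

# Orbit the camera around the object and render one depth map per view.
cam = scene.camera
for i in range(8):
    angle = 2 * math.pi * i / 8
    cam.location = (3 * math.cos(angle), 3 * math.sin(angle), 1.0)
    # Aim the camera at the origin, where the pottery is assumed to sit.
    direction = mathutils.Vector((0.0, 0.0, 0.0)) - cam.location
    cam.rotation_euler = direction.to_track_quat('-Z', 'Y').to_euler()
    file_out.file_slots[0].path = f"view_{i:02d}_"
    bpy.ops.render.render(write_still=True)
```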

Depth maps from 8 views of the fractured and original models.
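Once the eight views are rendered, they can be tiled into a single image with Pillow. The 2x4 grid layout and PNG file names below are assumptions; the dataset's actual format may differ:

```python
# Sketch: tile 8 per-view depth maps into one image with Pillow.
from PIL import Image

views = [Image.open(f"depth_maps/view_{i:02d}.png") for i in range(8)]
w, h = views[0].size

sheet = Image.new("L", (4 * w, 2 * h))  # grayscale canvas, 4 cols x 2 rows
for i, view in enumerate(views):
    col, row = i % 4, i // 4
    sheet.paste(view, (col * w, row * h))

sheet.save("combined_views.png")
```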
  • Implement the Deep Learning model and train it: As the paper explains, there is one GAN model per point of view, and each model produces not only the reconstructed image but also a shape descriptor that the other GAN models use to reconstruct their own views. To achieve this, I used the PyTorch framework to implement the architecture, along with a training-monitoring system to analyze the training, which took around two weeks on a dedicated server at my university (a simplified sketch follows the figure below).
Training dashboard of the MVCN.
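To make the per-view GAN concrete, here is a heavily simplified PyTorch sketch of one training step: the generator maps a fractured depth map to a completed one, and the discriminator judges realism. The tiny networks, loss weights, and random stand-in data are placeholders, not the MVCN architecture from the paper:

```python
# Sketch of one per-view GAN training step in PyTorch.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Stand-in batch of (fractured, complete) depth-map pairs.
fractured = torch.rand(8, 1, 64, 64)
original = torch.rand(8, 1, 64, 64)

# Discriminator step: real completions vs. generated ones.
fake = G(fractured).detach()
loss_d = bce(D(original), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator and stay close to ground truth.
fake = G(fractured)
loss_g = bce(D(fake), torch.ones(8, 1)) + 100.0 * l1(fake, original)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The L1 term keeps the completed view close to the ground-truth depth map while the adversarial term encourages realistic fracture in-filling; the shape descriptor shared between the per-view GANs is omitted from this sketch.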
  • Back-project the results from depth maps to a 3D model: Once the Deep Learning model returned the reconstructed depth maps, I had to back-project them into a single 3D model. For this task, I converted the depth-map pixels into point clouds, using the positions of the camera viewpoints as their references, implemented in C/C++ with this project as the base code. Then I pruned biased pixels, and finally I used the MeshLab API to convert the point clouds into a 3D mesh object (the sketch below shows the projection math).
Left: Back-projection of the original point cloud. Right: Back-projection of the reconstructed point cloud.
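The thesis implemented this step in C/C++, but the underlying pinhole-camera math is the same in any language; here is a Python sketch, with the intrinsics and camera pose as assumptions:

```python
# Sketch: back-project a depth map to 3D points via the pinhole model.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, cam_to_world):
    """Convert an HxW depth map to an Nx3 point cloud in world coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    valid = pts_cam[:, 2] > 0          # prune pixels with no depth
    pts_world = (cam_to_world @ pts_cam[valid].T).T
    return pts_world[:, :3]

# Example: one synthetic 64x64 depth map with an identity camera pose.
cloud = depth_to_points(np.random.rand(64, 64), fx=64, fy=64,
                        cx=32, cy=32, cam_to_world=np.eye(4))
print(cloud.shape)
```

Merging the clouds from all eight viewpoints, each transformed by its own camera-to-world pose, yields the single point cloud that MeshLab then converts into a mesh.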
  • Compare the reconstruction with the original model and analyze the results: Finally, I used the Intersection over Union (IoU) metric to calculate the degree of similarity between the original and reconstructed models. Since I chose 5 different pottery types, I then analyzed which types were easier to reconstruct and which were more difficult (a sketch of the metric follows).
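For two 3D models, IoU can be computed on their voxelized occupancy grids. The voxelization step (mesh to grid) is omitted here, and the random grids are stand-ins:

```python
# Sketch: Intersection over Union on boolean occupancy grids.
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU of two boolean occupancy grids of equal shape."""
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union > 0 else 1.0

# Example with two random 32^3 grids.
original = np.random.rand(32, 32, 32) > 0.5
reconstructed = np.random.rand(32, 32, 32) > 0.5
print(f"IoU = {iou(original, reconstructed):.3f}")
```

An IoU of 1.0 means the reconstruction fills exactly the same volume as the original, so comparing per-type averages shows which pottery shapes the model handles best.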