Collecting the data
In July 2015, in collaboration with a team led by Dr Rab Scott (NAMRC), a Leica ScanStation P20 3D scanner was used to capture a 3D point cloud of the charnel chapel. Seventeen scans were taken at different locations (see Figures 1 and 2) and registered (using Leica Cyclone) to produce a model containing 60 million points.
3D laser scanners rapidly fire highly directional laser beams from a base station (which can rotate both horizontally and vertically) and detect the reflection from any surface that is hit. The distance of the scanned object from the base station can be calculated from the journey time, known as the ‘time-of-flight’. The colour of the scanned surfaces can also be recorded using an associated digital camera. The resulting output from using a laser scanner, with associated camera, is a ‘3D point cloud’.
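The time-of-flight principle can be sketched in a few lines: the laser pulse travels to the surface and back, so the one-way distance is half the round-trip time multiplied by the speed of light. This is an illustrative calculation only, not the scanner's internal algorithm:

```python
# Hypothetical illustration of time-of-flight ranging:
# distance = speed of light * round-trip time / 2.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in metres."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~66.7 nanoseconds corresponds to ~10 m.
print(tof_distance(66.7e-9))
```

At these speeds a pulse covers 10 m and back in well under a microsecond, which is why the scanner can take thousands of measurements per second as it rotates.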
Figures 3-10 show the views from different capture positions, looking in specific directions. Figure 3 is from scan position 1 (of positions 0-16), at the entrance to the room, and shows the other capture positions as 'mirror balls' floating in the space. Figure 4 shows the equivalent view without the mirror balls. The grey area at the bottom of the figure is the area unseen from the scan position, i.e. directly below the scan tripod.
The 3D point cloud
For each measurement taken by the laser scanner, and its associated digital camera, a three-dimensional (x,y,z) position and an associated colour (r,g,b) are stored. The collection of all measurements is called a 3D point cloud. Multiple scans, once registered, produce a very large 3D point cloud. A range of software can be used to visualise this data. Figure 11 uses Autodesk Recap 360 and Figure 12 uses free Web-based software called potree. To create a real-time visualisation of a large point cloud model, techniques can be used to reduce the size of the model by deleting some of the points, or viewing only a subset of the points. For example, the image in Figure 12 has only 4 million points; however, the brain fills in the details, so the shape of the room and the bones can still be identified.
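The simplest way to view only a subset of the points, as described above, is uniform random subsampling. The sketch below assumes the cloud is held as an (N, 6) NumPy array of x, y, z, r, g, b values; this layout is an illustrative assumption, not the format of the published data set:

```python
# Minimal sketch of random downsampling for visualisation.
# Assumed layout: N points x 6 columns (x, y, z, r, g, b).
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a large registered point cloud.
cloud = rng.random((60_000, 6))

def downsample(points: np.ndarray, target: int) -> np.ndarray:
    """Keep a uniform random subset of `target` points."""
    idx = rng.choice(len(points), size=target, replace=False)
    return points[idx]

small = downsample(cloud, 4_000)
print(small.shape)  # (4000, 6)
```

Random subsampling preserves the overall shape of the space but thins detail evenly, which is why heavily reduced views like Figure 12 still read correctly to the eye.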
The 3D data set has been published online, and is available to download from the University of Sheffield's online data repository, ORDA.
Our subsequent research work has investigated approaches to create a surface mesh from the point cloud. Early work (Crangle et al., 2016) produced a model that demonstrated the problems of capturing such a complex geometrical space (e.g. difficulty in positioning the scanner in the space and dealing with complex lighting conditions), producing a noisy model. A collaboration with Wuyang Shui, a visiting researcher from Beijing Normal University, China, investigated a semi-automatic approach for producing a simplified mesh by downsampling the data set and simplifying different areas of the model in different ways (Shui et al., 2016a; 2016b). Again, the results demonstrated the challenges in dealing with such a complex data set. In the summer of 2016, as part of a SURE project involving undergraduate student James Williams, an aggressive simplification approach was used to produce a model suitable for real-time interaction on a website. However, as can be seen in Figure 13, the more the data is simplified the more the model begins to lose its realistic appearance. Here, the surfaces of the crania on the nearest stack of bones have merged into a single surface and have lost all their identifying features.
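A common downsampling technique of the kind described above is voxel-grid filtering: space is divided into cubes of a fixed size and all points in a cube are replaced by their centroid. The sketch below, assuming an (N, 3) NumPy array of xyz coordinates, shows the idea; the published pipeline used dedicated tools, not this code, and it also illustrates why aggressive simplification merges nearby surfaces (a large voxel collapses distinct crania into one centroid):

```python
# Minimal voxel-grid downsampling sketch (illustrative, not the
# project's actual pipeline).
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Replace all points falling in each voxel with their centroid."""
    # Integer voxel key for each point.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse)
    return sums / counts[:, None]

pts = np.random.default_rng(1).random((10_000, 3))  # points in a unit cube
reduced = voxel_downsample(pts, 0.25)  # at most 4**3 = 64 occupied voxels
print(reduced.shape)
```

Larger voxel sizes give smaller, faster models but coarser geometry, which is the trade-off Figure 13 makes visible.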
Academic papers and publications
- Jenny Crangle, Elizabeth Craig-Atkins, Dawn Hadley, Peter Heywood, Tom Hodgson, Steve Maddock, Robin Scott, Adam Wiles. The Digital Ossuary: Rothwell (Northamptonshire, UK). Proc. CAA2016, the 44th Annual Conference on Computer Applications and Quantitative Methods in Archaeology, Oslo, 29 March - 2 April 2016, Session 06: Computer tools for depicting shape and detail in 3D archaeological models.
- Wuyang Shui, Steve Maddock, Peter Heywood, Elizabeth Craig-Atkins, Jennifer Crangle, Dawn Hadley and Rab Scott. Using semi-automatic 3D scene reconstruction to create a digital medieval charnel chapel. Proc. CGVC2016, 15-16 September, 2016, Bournemouth University, United Kingdom.
- Wuyang Shui, Jin Liu, Pu Ren, Steve Maddock and Mingquan Zhou. Automatic planar shape segmentation from indoor point clouds. Proc. VRCAI2016, 3-4 December 2016, Zhuhai, China.
The Digital Ossuary project was funded by a University of Sheffield Digital Humanities Development Grant.