Technical analysis of the 3D scanned Nefertiti

3D Nefertiti: the data is here and it looks beautiful. Let’s take a deep look together at the released scan and its technical characteristics. What is possible, and what are the facts?

Nora and Jan released the Nefertiti data at the 32C3 congress and gave a talk about the whole action. Many media outlets doubt the source of the data, calling it a hoax: the quality is extremely high, and many people wrote that it’s impossible to obtain such data quality with a simple scanning device (a Kinect sensor) and post-processing.

Philip Pikart CC BY-SA 3.0

WHY AM I WRITING THIS?

I’m not a legal expert and post this on my personal blog because it’s my personal opinion. In the following article I collected some facts and data, ran some tests, and performed some scans under the same conditions (as described by the artists).

The main motivation for this article came as I read articles from some media outlets and blogs doubting the origin of the scanned data. I think many of them got their facts wrong, so I wanted to set some things straight.

IS EVERYTHING A HOAX?

“With the data leak as a part of this counter narrative we want to activate the artefact, to inspire a critical re-assessment of today’s conditions and to overcome the colonial notion of possession in Germany”

Source: nefertitihack.alloversky.com

IMPORTANT (and I can’t emphasize this enough): Don’t forget that the main purpose is not to show that you’re able to perform undercover scans in a museum, but to raise awareness about the stolen artworks and the fact that these should be available in the public domain. More about this can be read on Wikipedia.

To those calling it a hoax: I think you’re wrong, and it’s still a great story.

But more about this in the conclusion, first let’s look at some facts.

THE FACTS

Let’s look at some rephrased statements.

TrigonArt is rightfully proud of their work, and their website includes a page showing a 360-degree orientable and zoomable preview of the scan they made of Nefertiti for Neues. I encourage you to take a look for yourself and compare it to the artists’ own scan. Even in this limited preview viewer, opening it up full screen and zooming in, you can see that every feature—including super-fine submillimeter details—appear to exactly match the model that the artists released.

Source: cosmowenman.wordpress.com

Well, if you scan the same object twice, you’d better get the same data. Since the data from the company is not available (obviously), it’s impossible to check if the two are really the same. The company’s data is only available as pictures, and you can’t draw conclusions from a 400×400 pixel image showing a scan of a 50cm bust with a resolution below 0.2mm.

At least someone out there seems to share my opinion on this.


 

The data was stolen from servers.

Possible, but I hope and think that’s not the case. This can’t be verified without the data from the company or the museum.


 

You could never achieve such resolution with a Kinect. And in a museum, where you need to hide the scanner from the guards, you can’t freely move around the bust.

The scan from the Kinect might have just been a base to get the dimensions right, or the data may have been completed with photogrammetry and manual editing. People have shown that it’s possible to build 3D models based only on pictures found on Google image search.
On the other hand, almost every picture I find on the internet shows visitors with cameras. Either the guards are not so strict or the policies have changed.


 

The scan is a lie. They could have bought a replica (rare, costing about 10,000€) or gotten access to one, and scanned it with a high-resolution device.

This is a plausible theory and would be the easiest way, with the least amount of work, to get the data. Since the replicas are true replicas, you wouldn’t be able to distinguish a replica scan from a scan of the original.

WHAT ABOUT COPYRIGHT? LEGAL ACTIONS?

There is no copyright protection on the bust of Nefertiti. Most copyrights expire between 50 and 100 years after the author’s death, and the bust is over 3,000 years old. This has nothing to do with its material value or the fact that it belongs to someone (Stiftung Preußischer Kulturbesitz). If they performed a scan of the original bust themselves, you can’t sue them for copyright infringement. The only consequence is that the museum could ban them for violating the house rules (no pictures!).

If the Neues Museum can prove that it suffered financial damage due to the scan (e.g. fewer visitors) or that the bust was damaged by the scan, it can take legal action. I think that’s nearly impossible to prove in court.

Shapeways wrote two good blog posts about this:

As of now (March 2016) there is no information about legal action from the museum or the owner of the bust. I don’t think this will change, since legal action could trigger some sort of Streisand effect because of the following points:

  • The Nefertiti is in the public domain. So why isn’t the data released? The accusation is that the owner refuses so they can keep selling expensive replicas.
  • Egypt wants the bust back. This has been refused and discussed many times over the last 100 years. More details on Wikipedia.
  • Many other museums have given access to people with 3D scanners or published scans themselves, e.g. the Metropolitan Museum of Art.

There are many other legal aspects, but since I’m not an expert in this field, I’ll keep those to myself.

THE PROVIDED DATA

You can download the files at nefertitihack.alloversky.com as zipped .obj or as .stl via torrent.

This is a reduced-quality preview mesh with only 50,000 faces. It needs WebGL to run and may take some time to load, since 2.5MB is a lot of data for a webpage. The original file is about 100MB.

The following analyses were performed on the STL files downloaded via torrent.

IS IT POSSIBLE WITH A KINECT?

Just with the Kinect? No. With heavy post-processing and photogrammetry? Possibly yes. Why? The physical resolution of the Kinect is simply too low:

  • Spatial X/Y resolution (2m distance from sensor): 3mm
  • Depth Z resolution (2m distance from sensor): 1cm

Data from the PrimeSense website (the manufacturer of the depth sensor), which went offline after Apple purchased the company in fall 2013.
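The 3mm figure is easy to sanity-check from the sensor geometry. A minimal sketch, using commonly cited Kinect v1 depth-camera figures (~57° horizontal field of view, 640 pixels wide; these are assumptions, not taken from the PrimeSense datasheet):

```python
import math

def per_pixel_resolution_mm(distance_mm, fov_deg, pixels):
    """Width covered by the sensor's field of view at a given distance,
    divided by the pixel count, i.e. millimetres per pixel."""
    width_mm = 2 * distance_mm * math.tan(math.radians(fov_deg / 2))
    return width_mm / pixels

# At 2 m from the sensor this comes out around 3.4 mm per pixel,
# matching the ~3 mm spatial resolution quoted above.
res = per_pixel_resolution_mm(2000, 57, 640)
print(f"X/Y resolution at 2 m: {res:.1f} mm per pixel")
```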

But the Kinect gives you the correct size and shape of the bust. I think that would be the hardest part if you drew it completely from scratch. So you combine all the data you get (depth, pictures) and perform heavy post-processing.

The Nefertiti is a rather simple shape, and many high-resolution pictures are available. I decided to perform a test with the Kinect to see what you get. I’ve used the Kinect many times on humans, but never on busts under glass protection.

Here is a list of the things I used:

[Images: Kinect scans of the bust, front and side views]

A total of 4 scans (from left to right) with the following setups:

  • 1 & 2: very poor light, under glass protection.
  • 3: good light, under glass.
  • 4: good light, without glass.

The scanner was held roughly vertical and only moved around the bust at the same height (as if it were attached to my body). I went around the bust about 3 times and covered the sensor many times, like they did in the video below (the software had no trouble catching up afterwards). The sensor was positioned perfectly in front of the bust because I’m quite tall.

 

What about the glass and the Kinect?

[…] but even if they carried some kind of battery, Nefertiti is under glass which also causes errors in this method of scanning.

Source: thegreatfredini.com

 

Again this is not possible with the scanning setup presented within the video.

Source: amarna3d.com

Well, as you can see from my scans, this is not true :-). Prove me wrong, but it looks like none of you took the time to test it before writing your articles. I performed a scan with and without glass and was unable to see a real difference.

Your eyes see through glass without any distortion; sometimes you don’t even realize there’s a glass pane. So why should a scanner, which operates with almost the same kind of light (a different wavelength, but not far off), see something different?

This has been tested with normal double-pane window glass and with acrylic glass.

Here is the data I got from Skanect after cropping, with no post-processing:

Point distances:
Min 0.888mm Max 3.276mm
Avg 1.890mm Med 1.956mm
StdDev 0.196mm

The data from the artists have the following statistics:

Point distances:
Min 0.121mm Max 2.527mm
Avg 0.614mm Med 0.573mm
StdDev 0.218mm

About this data:

  • Low average distances and high face counts can be misleading; they only give you an idea of the data quantity and resolution. You can take a mesh with 1,000 faces and divide each face into 10 sub-faces, resulting in a total of 10,000 faces. The mesh looks exactly the same but has ten times more data.
  • The standard deviation shows the spread in your data. E.g. for my own scan, about 68% of the point-to-point distances (mean ± one standard deviation, assuming a roughly normal distribution) lie between 1.694mm and 2.086mm; for the released scan, between 0.396mm and 0.832mm.
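The face-count inflation from the first bullet is easy to demonstrate. A small sketch that splits each triangle into 4 via edge midpoints (4 instead of 10 sub-faces, but the principle is the same): the surface is geometrically unchanged, yet the face count quadruples.

```python
import numpy as np

def subdivide(vertices, faces):
    """Split every triangle into 4 using edge midpoints.
    The surface stays identical; only the data volume grows."""
    verts = [np.asarray(v, dtype=float) for v in vertices]
    midpoints = {}
    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoints:
            verts.append((verts[a] + verts[b]) / 2)
            midpoints[key] = len(verts) - 1
        return midpoints[key]
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_faces += [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]]
    return np.array(verts), np.array(new_faces)

# One flat triangle: after subdivision it is still the same flat triangle,
# but there are now 4 faces instead of 1.
v = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
f = np.array([[0, 1, 2]])
v2, f2 = subdivide(v, f)
print(len(f), "->", len(f2))  # 1 -> 4
```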

As you can see, the points in my mesh have an average distance below the minimal resolution of the Kinect. I assume this comes from smoothing in the scanning software, especially when you pass over a point multiple times.
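The point-distance statistics above can be reproduced with a nearest-neighbour query over the mesh vertices. A minimal sketch using SciPy’s k-d tree; synthetic points stand in for the STL vertices here (loading the actual file with a mesh library is left out):

```python
import numpy as np
from scipy.spatial import cKDTree

def point_distance_stats(points):
    """Min/max/avg/median/stddev of each vertex's nearest-neighbour distance."""
    tree = cKDTree(points)
    # k=2: the closest hit for each point is the point itself, so take the second.
    dists, _ = tree.query(points, k=2)
    nn = dists[:, 1]
    return nn.min(), nn.max(), nn.mean(), np.median(nn), nn.std()

# Synthetic stand-in for the scanned vertices.
rng = np.random.default_rng(0)
points = rng.random((5000, 3))
print(point_distance_stats(points))
```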


Quality map of the released data. The scale on the left side shows the point distance to color mapping.


Quality map of my scan. The scale on the left side shows the point distance to color mapping.

I think this speaks for itself. I don’t see how you could get from my result to the released data just by post-processing.

THE VIDEO

There are several claims about this video worth examining.

What about the infrared and power LED?

Whilst this has some merit, most digital camera equipment has a built in infra-red filter to prevent just this kind of light spoiling a recording. So this cannot be used as a convincing argument towards this event being a hoax.

Source: amarna3d.com

Here’s a picture I took with my LG G3, a recent smartphone. You can barely see it with the naked eye.


Left dot: infrared LASER | Right dot: power LED

If you look closely in the video at 00:05 or on the screenshot below you can see a piece of tape over the power LED.

[Screenshot from the video at 00:05]

Here’s a screenshot from the video at about 01:03.


Then, the video shows Ms. Badri repeatedly covering and recovering the Kinect as she circles Nefertiti. A normal scan would require uninterrupted line of sight of the statue as the scan is happening.

Source: thegreatfredini.com

This depends on the software you use, but I would say it’s not true. As stated above, the software I used easily catches up.

What about the top of the bust?

In order to get coverage of the top of the headdress, the scanner would have to be held high above the statue, and not at waist level as the video indicates.

Source: thegreatfredini.com

This is a valid point, and in the video it does look like the scanner was held too low. But I’m 1.95m tall and would be able to hide the scanner while still getting the top of the bust.

Where did they hide the laptop?

Other questions include where was the connected laptop hidden?

Source: thegreatfredini.com

This is in fact a good question. The Kinect needs an external power supply. But you don’t need a good laptop with a GPU: you can also record the Kinect’s raw data and post-process it later on a workstation, or send the data over WiFi to a nearby computer. Since they didn’t seem to have any display to monitor the results, I assume they just recorded the data and analysed it later.
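The record-now, process-later idea can be sketched as follows. Here `get_depth_frame` is a hypothetical stand-in for a real driver call (libfreenect offers similar functionality), so synthetic depth frames are fabricated instead:

```python
import numpy as np

def get_depth_frame(rng):
    """Hypothetical stand-in for a Kinect driver call: a 640x480 frame
    of 11-bit depth values, as the Kinect v1 delivers."""
    return rng.integers(0, 2**11, size=(480, 640), dtype=np.uint16)

def record(n_frames, path="depth_frames.npz"):
    """Dump raw depth frames to disk; registration and meshing
    can then happen later on a workstation."""
    rng = np.random.default_rng(0)
    frames = [get_depth_frame(rng) for _ in range(n_frames)]
    np.savez_compressed(path, *frames)
    return path

path = record(10)
loaded = np.load(path)
print(len(loaded.files), "frames ready for offline processing")
```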

CONCLUSION

From a technical point of view, the released data is an extremely high quality scan, and there must have been other data sources (meshes, photogrammetry, sculpting by hand) besides the Kinect.

It’s a guerrilla art performance mixed with political activism, these things always come with some question marks.

Things we would need in order to confirm or deny their claim of scanning it in the museum:

  • Compare the provided mesh with data from the museum. Using the Hausdorff distance, we could compare the two scans and validate the quality by matching the released scan to the data held by the museum.
  • Compare the provided mesh with the real object or a replica.
  • More details about how they did it, raw data etc.
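The Hausdorff comparison from the first point is straightforward once both point clouds are available; SciPy ships a directed Hausdorff implementation. A sketch on two synthetic clouds standing in for the released scan and a hypothetical reference scan:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point clouds:
    the worst-case distance from any point to the other cloud."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Synthetic stand-ins: a cloud and a copy shifted slightly along each axis.
rng = np.random.default_rng(1)
scan_a = rng.random((1000, 3))
scan_b = scan_a + 0.001
print(f"Hausdorff distance: {hausdorff(scan_a, scan_b):.4f}")
```

For two real scans you would first align them (e.g. with ICP registration) before computing the distance, since the Hausdorff distance is not invariant to rotation or translation.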

I don’t think these things are likely to happen in the near future.

Even if the 3D scanning part with the Kinect is a lie, that doesn’t mean the whole story is a hoax. Just think about the goals the artists may have had in mind:

  • Their main point, the controversy of public-domain art being held by countries or museums, is now widely discussed across the globe.
  • Many people took a close look at the matter.
  • The data is available for everyone.

As you can see, each goal is achieved. So don’t call it a complete hoax.


Who cares how they got the data or how they scanned it in the museum? It’s where it should be: in the public domain.


 

LINKS

Here are some other good articles that have not been cited in the text:

MISC

Here’s a timelapse video (sorry for the bad light) of the print of the 1:2 scale version.