An amazing sequence of technological developments, such as digital
photography, laser scanning, video projection, QTVR and 3D, has come about in
the last ten years, and we believe that all of these innovations have immediate
applications to archaeology.
Kevin Cain, director of Insight
Introduction
Recent improvements in computer and laser technologies have allowed scientists to
preserve ancient treasures (e.g. archaeological sites, artifacts, fossils, clay
tablets) by creating virtual copies of them. These 3-D digital copies offer highly
accurate, and often enhanced, representations of the physical objects and
structures.

Replica of David in Australia
Articles available on this page (summaries)
-
The Digital Michelangelo Project
Recent improvements in laser rangefinder technology, together with algorithms
developed for combining multiple range and color images, allow us to reliably
and accurately digitize the external shape and surface characteristics of many
physical objects. Examples include cultural artifacts, machine parts, and design
models for the manufacturing, moviemaking, and video game industries. Modern
technologies in 3D scanning allow us also to reconstruct 3D digital
representations of real objects in a semi-automatic way, with high precision and
wealth of details. The motivations
behind this project are to advance the technology of 3D scanning, to
place this technology in the service of the humanities, and to create a
long-term digital archive of some important cultural artifacts.
-
Polynomial Texture Mapping (PTM) - Image-based Relighting Technology
The article describes how fossils can be photographed with a new digital
technique, enhancing contrast in order to bring out subtle details. Fifty
pictures are taken with light coming from different angles, and a computer
calculates how the intensity of each point in a combined image depends on
light angle. From this information, an image can be re-created with
different lighting and optical surface properties, such as increased
shininess. In addition, the technique allows electronic publication of
images where light and surface properties can be manipulated by the reader.
This can be very useful because different features are often enhanced under
different lighting conditions.
-
3-D Technology Preserves Ancient Treasures
Similar to how DNA banks are being created to store genetic data on endangered
animals, archaeologists now are preserving archaeological treasures in the
virtual world, for accuracy, ease of study, and in case real world problems,
like erosion, lead to damage or destruction.
-
Related Links:
Rebuilding Ancient Monuments in Mesoamerica
How They Rebuilt Stonehenge
The following article contains copyright material.
Text and images courtesy of the Digital Michelangelo Project, Stanford University.
Reprinted with permission of the project director, Marc Levoy.

Introduction
Modern technologies in 3D scanning allow us to reconstruct 3D
digital representations of real objects in a semi-automatic way,
with high precision and wealth of details.
Our goal was to produce a set of 3D computer models and to
make these models available to scholars worldwide.
An accurate digital model of Michelangelo's David was
created during the Digital Michelangelo Project (1999-2000),
coordinated by Professor Marc Levoy of Stanford University.
The model was made using a custom Cyberware laser scanner
and post-processing software developed by Stanford's Computer
Graphics Lab. Acquisition and reconstruction took a long time
because of the dimensions and complexity of the model and the
pioneering status of the technology.
The availability of an accurate digital representation opens up
several possibilities for experts (restorers, archivists),
students, and museum visitors. Virtual presentation and
interactive visualization are in general the first uses of these
data, but we think that the use of 3D models should go beyond the
simple creation of synthetic images.
An important application of 3D models should be in the restoration
of artworks.
The integration of 3D graphics and restoration is an open
research field, and the David restoration project has provided
several starting points and guidelines for the definition and
development of innovative solutions.
Our activity is based on problems and specific requests raised by
the restorers, briefly described as follows.
The 3D digital models are used in two distinct, equally important
modes:
- as an instrument for the execution of specific investigations
- as a supporting medium for the archival and integration of
multimedia data produced by the different scientific studies
planned during the David restoration.
A 3D computer model of the head of Michelangelo's David
Recent improvements in laser rangefinder technology, together with algorithms
developed at Stanford for combining multiple range and color images, allow us to
reliably and accurately digitize the external shape and surface characteristics
of many physical objects. Examples include machine parts, cultural artifacts,
and design models for the manufacturing, moviemaking, and video game industries.
As an application of this technology, a team of 30 faculty, staff,
and students from Stanford University and the University of Washington
spent the 1998-99 academic year in Italy scanning the sculptures and
architecture
of Michelangelo.
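Digitizing a statue produces many overlapping range images that must be aligned into one coordinate frame before merging. As a much-reduced illustration of the rigid-alignment step at the heart of such pipelines (not the project's actual software), the following sketch recovers the rotation and translation between two corresponding point sets using the SVD-based Kabsch method; all names and the toy data are illustrative.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Find the rotation R and translation t minimizing
    sum ||R @ src_i + t - dst_i||^2 over corresponding point pairs
    (the Kabsch/SVD solution used inside ICP-style alignment)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known rotation and shift of a small scan patch
rng = np.random.default_rng(0)
src = rng.random((100, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
dst = src @ R_true.T + t_true
R, t = best_rigid_transform(src, dst)
```

In a real scanning pipeline the correspondences are not known in advance; ICP alternates between guessing them (nearest neighbors) and solving this closed-form step.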

Picture(s): Courtesy of Digital Michelangelo Project, Stanford University
Famous marble sculpture of David by Michelangelo

© Photograph by Marc Levoy and Paul Debevec
The Cyberware gantry can be reconfigured to scan objects of
any height from 2 feet to 24 feet. In this photograph the gantry is
at maximum height. The 2' truss section roughly level with David's
foot was added at the last minute after we discovered, much to our
horror, that the statue was taller than we thought.
(We designed our gantry according to the height given in Charles De
Tolnay's five-volume study of Michelangelo. This height was echoed in
every other book we checked, so we assumed it was correct. It was
wrong. The David is not 434 cm without his pedestal; he is 517 cm, an
error of nearly 3 feet!)

© Photograph by Marc Levoy and Paul Debevec
The scanner head is also reconfigurable. It can be mounted atop
or below the horizontal arm, and it can be turned to face in any
direction. To facilitate scanning horizontal crevices like David's
lips, the scanner head can also be rolled 90 degrees, changing the
laser line from horizontal (shown here) to vertical. These
reconfigurations are performed while standing on a scaffolding.
Picture(s): Courtesy of Digital Michelangelo Project, Stanford University
On the left is a photograph of Michelangelo's David. On
the right is a computer rendering made from a geometric model. Constructed
in December, 1999 at a resolution of 1.0 mm, the model is watertight and
contains 4 million polygons.
Picture(s): Courtesy of Digital Michelangelo Project, Stanford University
On the right is an aligned and vripped (volumetrically merged)
model of David's left eye at the full resolution of our dataset,
0.29 mm. At this scale, we believe we are capturing everything
Michelangelo did with his chisel. At left is a color photograph
for comparison. The viewpoints are similar but not identical.
Links related to this article
Copyright Notice
These models, images, and photographs [of
Michelangelo's statues that appear on the Digital Michelangelo Project's web
pages] are the property of the Digital Michelangelo Project and the Soprintendenza ai beni artistici e storici per le province di Firenze, Pistoia,
e Prato. They may not be copied, downloaded and stored, forwarded, or reproduced
in any form, including electronic forms such as email or the web, by any
persons, regardless of purpose, without express written permission from the
project director Marc Levoy. Any commercial use also requires written permission
from the Soprintendenza.
Reprinted with permission of the project director
Marc Levoy.
Digital Forma Urbis Romae Project
The Forma Urbis Romae, also known as the Severan Marble Plan, is a
giant marble map of ancient Rome. Measuring 60 feet wide by 45 feet high and
dating to the reign of Septimius Severus (circa 200 A.D.), it is probably the
single most important document on ancient Roman topography. Unfortunately, the
map lies in fragments: 1,186 of them, and not all of the fragments still exist.
Piecing this jigsaw puzzle together has been one of the great unsolved problems
of classical archaeology.

The fragments of the Forma Urbis present many clues to the
would-be puzzle solver: the pattern of surface incisions, the 2D
(and 3D) shapes of the border surfaces, the thickness and physical
characteristics of the fragments, the direction of marble veining,
matches to excavations in the modern city, and so on. Unfortunately,
finding new fits among the fragments is difficult because they are
large, heavy, and numerous. We believe that the best hope for
piecing the map together lies in using computer shape matching
algorithms to search for matches among the fractured side surfaces
of the fragments. In order to test this idea, we need 3D geometric
models of every fragment of the map. To obtain this data, during
June of 1999 a team of faculty and students from Stanford University
spent a month in Rome digitizing the shape and surface appearance of
every known fragment of the map using laser scanners and digital
color cameras. Our raw data consists of 8 billion polygons and 6
thousand color images, occupying 40 gigabytes.
The goals of the Digital Forma Urbis Romae Project are threefold:
to assemble our raw range and color data into a set of 3D (polygon
mesh) models and high-resolution (mosaiced) photographs - one for
each of the 1,186 fragments of the map, to develop new shape
matching algorithms that are suitable for finding fits between 3D
models whose surfaces are defined by polygon meshes, and to use
these algorithms to try solving the puzzle of the Forma Urbis Romae.
Whether or not we succeed in solving the puzzle, one of the tangible
results of this project will be a web-accessible relational database
giving descriptions and bibliographic information about each
fragment and including links to our 3D models and photographs. A
sample database, containing 28 selected fragments, is currently
online; click on the link at the bottom of this page to view it. Our
long-term plan is to make the entire database (1,186 fragments)
freely available to the archeological (and computer graphics)
research communities, educators, museum curators, and the general
public.
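The shape matching the project proposes operates on full 3D polygon meshes of the fractured side surfaces. As a much-reduced illustration of the underlying idea (not the project's algorithm), the sketch below treats each fracture edge as a 1D height profile and scores how well one edge complements another, mirrored, at some sliding offset; lower scores mean better fits. All names and the profile representation are illustrative.

```python
import numpy as np

def edge_match_score(profile_a, profile_b):
    """Score how well two fracture-edge height profiles fit together.
    A true mate, mirrored and negated, should line up with profile_a
    at some sliding offset; the score is the best (lowest) mean
    squared mismatch found, allowing a constant scan offset."""
    b = -profile_b[::-1]                    # complementary, mirrored edge
    n = len(profile_a)
    best = np.inf
    for shift in range(-n // 2, n // 2 + 1):
        lo, hi = max(0, shift), min(n, n + shift)
        a_seg = profile_a[lo:hi]
        b_seg = b[lo - shift:hi - shift]
        if len(a_seg) < n // 2:             # require substantial overlap
            continue
        diff = a_seg - b_seg
        best = min(best, np.mean((diff - diff.mean()) ** 2))
    return best

# Toy example: a jagged edge, its true counterpart, and a random edge
rng = np.random.default_rng(1)
edge = rng.normal(size=200).cumsum()        # a wandering fracture line
mate = -edge[::-1]                          # the matching fragment's edge
stranger = rng.normal(size=200).cumsum()
```

Real fragment matching must additionally cope with 3D geometry, erosion of the broken surfaces, and the sheer number of candidate pairs, which is why efficient search over mesh-based signatures matters.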
This project is sponsored by the National Science Foundation
under the name Solving the Puzzle of the Forma Urbis Romae. Some of
the early work was funded under an NSF Digital Libraries Initiative
pilot grant called Creating Digital Archives of 3D Artworks. Other
early funding came from Stanford University, Interval Research
Corporation, the Paul G. Allen Foundation for the Arts, the Mellon
Foundation, the City of Rome, and Pierluigi Zappacosta.
Links related to this article
Copyright Notice
The text, models, images, and photographs in the above
article may not be copied, downloaded and stored, forwarded, or reproduced
in any form, including electronic forms such as email or the web, by any
persons, regardless of purpose, without express written permission from the
project director Marc Levoy.
Reprinted with permission of the project director
Marc Levoy.
Polynomial Texture Mapping (PTM) - a new method for increasing the
photorealism of texture maps.
HP Labs technology is helping scholars decipher ancient texts by
applying image-based relighting.

Introduction
Tom Malzbender, an HP Labs researcher, has invented a tool that
lets scholars see ancient texts in
ways never before possible. Malzbender's technology for capturing and
viewing images of three-dimensional objects can make
characters that were previously invisible or undecipherable clear enough
to read. As a result, scholars can derive more accurate meanings from the text
— and potentially obtain a better understanding of the past.
By changing the
angle or type of light shining on the tablets, scholars can sometimes see
the text more clearly. In the early 1980s, Zuckerman, a scholar and teacher of
the Bible and ancient Semitic languages at USC, pioneered the use of
high-resolution photographs in the study of ancient writing.
Malzbender's
invention, a type of image-based relighting, takes that technique several
steps further by automating the collection of images and allowing scholars to
manipulate the lighting and other aspects of the image on the screen. By
changing the appearance of the object, the technology brings out surface
details previously invisible to the naked eye.
Picturing the Past
HP Labs Technology Helping Scholars Decipher Ancient Texts
Yale scholar Walter Bodine had spent the better part of four long years
painstakingly transcribing the ancient Sumerian characters inscribed on a
crumbling, 4,000-year-old clay tablet.
Then he met Tom Malzbender, an HP Labs researcher who has invented a tool
that lets scholars see ancient texts in ways never before possible.
Malzbender's technology for capturing and viewing images of three-dimensional
objects can make characters that were previously invisible or undecipherable
clear enough to read. As a result, scholars can derive more accurate meanings
from the text - and potentially obtain a better understanding of the past.
"I've been working on these texts for years, trying to figure out what the
lines say," says Bodine, a research affiliate with the Babylonian Collection at
Yale, the nation's premier collection of ancient cuneiform tablets. "This
technology gives me access to more data than I get when using my own eyes."
In one case, researchers were even able to make out the fingerprint of the
author of a document, imprinted ever so faintly into the clay thousands of years
ago.
"You quite literally have the human touch of 3,000 or 4,000 years ago," says
Bruce Zuckerman, director of the West Semitic Research Project at the University
of Southern California. "Potentially, this (technology) could mean a profound
improvement in our knowledge of the ancient world."
Traditionally, scholars of ancient texts have scrutinized the physical
tablets, stones or other materials to decipher and transcribe inscriptions.
Trouble is, the tablets are thousands of years old - they're worn and often
crumbling, and the writing is faded.
By changing the angle or type of light shining on the tablets, scholars can
sometimes see the text more clearly. In the early 1980s, Zuckerman, a scholar
and teacher of the Bible and ancient Semitic languages at USC, pioneered the use
of high-resolution photographs in the study of ancient writing.
Malzbender's invention, a type of image-based relighting, takes that
technique several steps further by automating the collection of images and
allowing scholars to manipulate the lighting and other aspects of the image on
the screen. By changing the appearance of the object, the technology brings out
surface details previously invisible to the naked eye.
Malzbender and his colleagues in HP Labs' visual computing department, Dan
Gelb and Hans Wolters, didn't set out to change the study of ancient texts.
Malzbender was trying to solve problems of existing 3D graphics rendering
technologies, looking for a way to improve both photorealism and rendering
efficiency.
To collect data, Malzbender headed to his basement and built a geodesic dome
- constructed of glued-together wooden dowels - that would help him control the
angle of light. In a darkened room, he photographed a crumpled-up newspaper 40
times, each time changing the angle of the light, a table lamp.
It worked. On the screen, he could manipulate the angle of light to change
and enhance the surface appearance of the newspaper.
To speed the data-collection process, Malzbender teamed with fellow
researchers Gelb, Eric Montgomery and Bill Ambrisco to build an automated dome.
This one, a plastic dome with 50 flashbulbs mounted on its inside surface, takes
50 photos at the touch of a button.
Malzbender next wondered if there were other ways to accentuate surface
details. He wound up developing an entirely new class of contrast-enhancement
mechanisms that can bring out even subtle surface details.

By changing the optical properties of the imaged object, he can synthesize
specular highlights - that is, he can make a surface that isn't reflective, such
as a clay tablet, appear to be as glossy as obsidian. Another technique, called
diffuse gain, changes how the surface responds to light. A third method, model
extrapolation, makes the lighting appear more oblique than is physically
possible.
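The specular-enhancement idea can be sketched briefly: once a surface normal has been estimated for each pixel (in PTM this comes from the light direction that maximizes the fitted polynomial), the surface can be re-rendered with a narrow synthetic highlight so that relief, not color, dominates the image. This sketch assumes the per-pixel normals are already available and uses a Blinn-Phong highlight as a stand-in for the actual PTM formulation; all names are illustrative.

```python
import numpy as np

def specular_enhance(normals, light_dir, view_dir, shininess=50.0):
    """Render a synthetic specular image from per-pixel surface normals:
    color is discarded and each pixel gets a narrow Phong-style highlight,
    so only pixels whose normals sit near the mirror angle light up."""
    light_dir = np.asarray(light_dir, float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    view_dir = np.asarray(view_dir, float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    half = light_dir + view_dir
    half = half / np.linalg.norm(half)      # Blinn-Phong half vector
    n_dot_h = np.clip(normals @ half, 0.0, 1.0)
    return n_dot_h ** shininess             # bright only near mirror angle

# Toy example: a 2x2 patch; two pixels face the half-vector directly
normals = np.array([[[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]],
                    [[0.0, 0.6, 0.8], [0.0, 0.0, 1.0]]])
img = specular_enhance(normals, light_dir=[0, 0, 1], view_dir=[0, 0, 1])
```

Raising `shininess` narrows the highlight, which is what makes a matte clay tablet appear "as glossy as obsidian" and makes fine incisions stand out.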
Malzbender and other researchers are now working on a freestanding portable
unit that scholars could easily transport to remote sites to capture images of
large objects such as walls or statues.
The image-based relighting technology has the potential to reshape other
fields as well, including forensics and diagnostic medicine, where materials are
degraded or difficult to view.
"It's been really exciting," says Malzbender. "A lot of things have come out
of this we didn't expect."
Source:
http://www.hpl.hp.com/news/2000/oct-dec/3dimaging.html
The above segment courtesy of Jamie
Beckett
Reprinted with permission
Image-based Relighting Technology Demonstration
By changing the optical properties of the imaged object, you can synthesize
specular highlights
This QuickTime VR demo* is merely a coarse approximation of the actual
image-based relighting technology. Drag the mouse to change the lighting on the
object. Although this demo shows an abrupt transition between the various
lighting directions, image-based relighting technology is a smooth,
continuous-tone process.
*demo uses QuickTime VR
Wait for the status bar below to reach 100% before beginning.
Once the movie is completely loaded, click anywhere to move the light source.
You may also drag the mouse around and the light source will follow.
The Shift key allows you to zoom in, the Control key zooms out.
PTM and Fossil Studies
The article describes how fossils can be photographed with a new digital
technique, enhancing contrast in order to bring out subtle details.
Fifty
pictures are taken with light coming from different angles, and a computer
calculates how the intensity of each point in a combined image depends on light
angle.
From this information, an image can be re-created with different lighting
and optical surface properties, such as increased shininess. In addition, the
technique allows electronic publication of images where light and surface
properties can be manipulated by the reader. This can be very useful because
different features are often enhanced under different lighting conditions. The
technique has been tried on various kinds of fossils. It performs very well for
some types of fossil preservation although in some cases traditional photography
produces equally good results. The possibility for the reader to change the
lighting in electronically published pictures is always a benefit, however.
Many fossils cannot be fully illustrated using a setting of physical light
sources, and conventional photographic approaches are therefore insufficient.
Polynomial texture mapping (PTM), a new technique for imaging surface relief
using an ordinary digital camera and multiple light sources, offers an
opportunity to resolve these difficulties. In PTM, a series of 50 pictures is
taken with the specimen and camera in fixed positions, but with varying
directions of incoming light. These pictures are converted digitally into a PTM
file, incorporating information about the reflectance of each pixel under
different lighting conditions. The number, position, and intensity of virtual
light sources can then be manipulated, as can the reflectance properties of the
imaged surface. In addition to bringing out low surface relief that may
otherwise be difficult to illustrate, PTM also allows electronic transfer,
storage, and publication of images, allowing users to manipulate virtual light
sources interactively. Herein we test this technique using fossils with
different types of preservation, including Cambrian fossils from the Burgess
Shale and Chengjiang conservation lagerstätten, Cambrian fossils with 3D relief
from dark shales of Norway, Carboniferous plant fossil impressions from England,
Cambrian trace fossils in sandstone from Sweden, and Neoproterozoic impression
fossils from the Ediacara lagerstätten of South Australia. Although not optimal
for all types of fossil preservation, PTM enhancement techniques can provide
noticeable improvement in the imaging of specimens with low color contrast and
low but definite relief. The imaging of specimens with higher relief, although
they can be easily photographed with traditional techniques, will still benefit
from PTM because of the opportunity of interactive manipulation of light and
reflectance.
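The fitting step described above, in which the 50 photographs are reduced to per-pixel reflectance coefficients, can be sketched in a few lines. The six-term biquadratic basis below follows Malzbender's published PTM form, with luminance modeled as a polynomial in the projected light direction (lu, lv) and solved per pixel by linear least squares; the array shapes, function names, and toy data are illustrative only.

```python
import numpy as np

def fit_ptm(light_dirs, intensities):
    """Fit per-pixel PTM coefficients: luminance is a biquadratic
    polynomial of the projected light direction (lu, lv), solved for
    every pixel at once by linear least squares over all photographs."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # design matrix: one row per photograph, six biquadratic terms
    A = np.stack([lu * lu, lv * lv, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    n_imgs = intensities.shape[0]
    coeffs, *_ = np.linalg.lstsq(A, intensities.reshape(n_imgs, -1), rcond=None)
    return coeffs.reshape((6,) + intensities.shape[1:])

def relight(coeffs, lu, lv):
    """Evaluate the fitted polynomial for a new virtual light direction."""
    basis = np.array([lu * lu, lv * lv, lu * lv, lu, lv, 1.0])
    return np.tensordot(basis, coeffs, axes=1)

# Toy example: 50 light directions on a hemisphere, one 4x4 "photo" each
rng = np.random.default_rng(2)
dirs = rng.normal(size=(50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
dirs[:, 2] = np.abs(dirs[:, 2])           # lights above the surface
true = rng.random((6, 4, 4))              # ground-truth coefficients per pixel
imgs = np.stack([relight(true, u, v) for u, v, _ in dirs])
fitted = fit_ptm(dirs, imgs)
```

Once the coefficients are stored, `relight` can be called for any virtual light direction, which is exactly what lets a reader manipulate the lighting in an electronically published image.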

Fig. 2
PTM-reconstructed view of a Marrella specimen from the
Burgess Shale,
Middle Cambrian, British Columbia.
2.1. One virtual light source normal to shale surface.
2.2. Oblique virtual light source from SE, inverted image.
This image can be regarded as a "traditional" film-based photographic
image for comparison with the enhanced versions.
2.3. Specular enhancement. Note that color contrast is
almost removed, whereas relief is enhanced.
2.4. Addition (overlaying) of image 2.2 and 2.3.
The PTM images can be acquired with a single light source that is manually
positioned for each exposure, but it is more practical to use a set of
computer-controlled light sources in fixed positions.
The above segment contains copyright material
Reprinted with permission
Links related to this article
3-D Technology Preserves Ancient Treasures

Similar to how DNA banks are being created to store genetic data on
endangered animals, archaeologists now are preserving archaeological treasures
in the virtual world, for accuracy, ease of study, and in case real world
problems, like erosion, lead to damage or destruction.
The new 3-D process, developed by the nonprofit Institute for
Study and Integration of Graphical Heritage Techniques (called
Insight), is gradually replacing older data-gathering techniques,
which rely on time-consuming single-shot photography and hand-drawn images.
Insight's developers use a variety of technologies, combined with custom
designed computer software, to digitally recreate buildings and objects.
Many archaeological sites worldwide are in peril. Recently, for example, the
Taliban destroyed the Bamiyan Buddhas, an American film crew damaged Machu
Picchu, organized theft in Cambodia has harmed Khmer antiquities, and Egypt's
Aswan Dam has led to erosion of buildings and hieroglyphics. There is a race
to record and preserve such treasures, and sometimes the recorded data is the
only hope for posterity; political and financial obstacles often prevent
restoration.
The new technology will record monuments that are fast disappearing due to a
rising water table in the Nile Valley. Previous efforts at epigraphic (wall
relief) recording have taken up to 90 years for a single monument. No tomb has
ever been completely recorded digitally. The speed at which this laser
technique operates promises that less information will be lost to history and
researchers.
Picture(s): Courtesy of Insight
This beginning model of the Parthenon was created by Jeremy Sears, one
of the team members for INSIGHT's 1999 Pilot in Egypt.
For Bernard Maybeck's Palace of Fine Arts — the current home of San
Francisco's Exploratorium — a prototype laser scanner was used to build
this 3D model. 3-D data was combined with the Palace's original
blueprints to create a short animated sequence that reflects Maybeck's
unrealized design ideas for the building.
This image, showing the Incan site of Machu Picchu circa 1570, is based
on existing archaeological research. Colors and texturing were drawn
from research and photographs of the site.
Picture(s): Courtesy of Insight
A Sculpture in the British Museum
Through Insight, researchers can obtain the necessary equipment without
charge. Projects using the technology are underway at sites in Cairo,
Alexandria, London
and San Francisco.
In Thebes, archaeologists are reconstructing a colossus of Ramses II that
was destroyed by Christians hundreds of years ago. Like a puzzle, hundreds of
pieces lie strewn on the ground. Cain and his team photographed each piece in
3-D on a revolving metal caster plate. The images were transferred to a
computer program, where the jigsaw puzzle was assembled in virtual space. The
new technology can record images accurate to 40 microns. The virtual
reconstruction can serve as a roadmap and can help a committee reach a
consensus about whether the colossus should in fact be rebuilt.
The above segment contains copyright material. Text and images courtesy of
INSIGHT. Reprinted with permission.
Related Link/Source: INSIGHT
Related Links and Resources
