Projection-based exhibition methods have been used in museums to create digital content for cultural objects. They can enrich the exhibition of a cultural heritage object with physically co-located digital content, and multiple users can enjoy the projected content without additional devices. However, projection quality is often restricted by the surrounding environment, such as ambient light and occlusion by obstacles. The degree of freedom of the projected content is also limited by the use of a static projection surface. In this paper, we propose a novel projection-based exhibition system that resolves these shortcomings. We introduce a new design that combines multi-projection mapping with an optical see-through display. It provides high-quality projected content that is robust both to ambient light and to occlusion by obstacles. We also introduce a mechanically moving projection surface that provides dynamic content by changing its shape and appearance. Our prototype system demonstrates applications that show a realistic three-dimensional effect and the photo-realistic appearance of a cultural object.
We consider the problem of localizing visitors in a cultural site from egocentric (first-person) images. Localization information can be useful both to assist users during their visit (e.g., by suggesting where to go and what to see next) and to provide behavioral information to the manager of the cultural site (e.g., how much time have visitors spent at a given location? What has been liked most?). To tackle the problem, we collected a large dataset of egocentric videos using two cameras: a head-mounted HoloLens device and a chest-mounted GoPro. Each frame has been labeled according to the location of the visitor and to what they were looking at. The dataset is freely available in order to encourage research in this domain. The dataset is complemented with baseline experiments performed using a state-of-the-art method for location-based temporal segmentation of egocentric videos. Experiments show that compelling results can be achieved in extracting useful information for both the visitor and the site manager.
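Location-based temporal segmentation of egocentric video ultimately amounts to turning noisy per-frame location predictions into coherent segments. As an illustrative stand-in for the state-of-the-art method used in the baseline (not the authors' actual algorithm), a sliding-window majority vote over frame labels already captures the idea:

```python
def temporal_smooth(pred, win=3):
    # Majority vote over a sliding window of per-frame location
    # labels: isolated misclassified frames are absorbed into the
    # surrounding segment.  `win` frames on each side are considered.
    out = []
    for i in range(len(pred)):
        lo, hi = max(0, i - win), min(len(pred), i + win + 1)
        window = pred[lo:hi]
        out.append(max(set(window), key=window.count))
    return out
```

Applied to a label sequence such as `[1, 1, 2, 1, 1, 1, 3, 3, 3, 3]`, the spurious `2` is removed while the genuine transition from location 1 to location 3 is preserved.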
The manual classification of archaeological projectile points by shape is an extensive and complex process because it involves a large number of typological categories. The present work focuses on the development of an automatic classification algorithm for projectile points based on their digital images. The algorithm supports varying conditions such as scale and image quality, but requires a uniform background and an approximate north-south orientation of the projectile point. The principal computational methods composing the classifier are the CSS map (curvature scale space map), the application of the gradient contour to the projectile point, and the SVM (Support Vector Machine) algorithm. The classifier was trained and tested on a dataset of approximately 800 projectile point images. The results show better performance than other shape descriptors such as PHOG, HOOSC (both used in a "Bag of Words" context), and geometric moment invariants (Hu moments).
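The CSS map is built from the curvature of the shape contour, tracked across increasing smoothing scales. As an illustrative sketch (not the authors' implementation), the discrete curvature along a closed contour at a single scale can be computed with central finite differences:

```python
import math

def curvature(contour):
    # Discrete curvature k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
    # at each point of a closed contour, using central finite
    # differences -- a single-scale building block of a CSS map.
    n = len(contour)
    ks = []
    for i in range(n):
        x0, y0 = contour[(i - 1) % n]
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        dx, dy = (x2 - x0) / 2.0, (y2 - y0) / 2.0       # first derivatives
        ddx, ddy = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0   # second derivatives
        denom = (dx * dx + dy * dy) ** 1.5
        ks.append((dx * ddy - dy * ddx) / denom if denom else 0.0)
    return ks
```

For a circle of radius r, the returned values approximate the constant curvature 1/r, which is a convenient sanity check; the zero crossings of this function at successive smoothing scales are what populate the CSS map.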
High-fidelity reproductions of paintings provide new opportunities to museums in preserving and providing access to cultural heritage. This paper presents an integrated system that is able to capture and fabricate the color, topography, and gloss of a painting, of which gloss capture forms the most important contribution. A 3D imaging system, utilizing stereo imaging combined with fringe projection, is extended to capture spatially varying gloss by exploiting the polarization of specular reflectance. The gloss is measured by sampling the specular reflection around Brewster's angle, where these reflections are effectively polarized and can be separated from the unpolarized, diffuse reflectance. Off-center gloss measurements are calibrated relative to the center measurement. Off-specular gloss measurements, caused by local variation of the surface normal, are masked based on the height map and corrected. Shadowed regions, caused by the 3D relief, are treated similarly. The area of a single capture is approximately 180×90 mm at a resolution of 25×25 μm. Aligned color, height, and gloss tiles are stitched together off-line by registering overlapping color regions. The resulting color, height, and gloss maps are inputs for a poly-jet 3D printer. Two paintings were reproduced to verify the effectiveness and efficiency of the proposed system. One painting was scanned four times, rotated by 90 degrees between scans, to evaluate the influence of the scanning system's geometric configuration on the gloss measurement. Experimental results show that the method is sufficiently fast for practical application, i.e., scanning a whole painting within eight hours, during the closing hours of a museum. The results can be used for physical reproduction and for other applications needing first-order estimates of appearance.
Our method of extending appearance scanning with gloss measurements is a valuable addition in the quest for realistic reproductions, both in terms of its practical applicability (the number of images needed for reconstruction, and speed) and its perceptual added value when combined with color and topography reproduction.
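The polarization-based separation rests on a simple relation: near Brewster's angle the specular reflection is (nearly) fully polarized, while the diffuse reflection is unpolarized. Rotating a linear polarizer in front of the camera and recording the maximum and minimum observed intensity then suffices to split the two components. A hedged sketch of that arithmetic (per-pixel intensities are illustrative inputs):

```python
def separate_reflectance(i_max, i_min):
    # With an unpolarized diffuse component and a fully polarized
    # specular component, a rotating linear polarizer yields
    #   i_min = diffuse / 2
    #   i_max = diffuse / 2 + specular
    # so the two components follow directly:
    diffuse = 2.0 * i_min
    specular = i_max - i_min
    return diffuse, specular
```

In practice the specular polarization is only approximately complete away from Brewster's angle, which is why the paper calibrates off-center measurements against the center one.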
The most successful approach to hieroglyph representation for retrieval starts by thinning the hieroglyph's contour lines. Then a set of interest points on the thinned hieroglyph is randomly selected, and a local descriptor is computed at each selected interest point. These local descriptors are used under the Bag of Visual Words (BoVW) model to perform hieroglyph retrieval. This approach has the drawback that a random selection of a subset of interest points does not guarantee preserving the most useful information of a hieroglyph. Additionally, during the thinning process, contour shape distortions can lead to unwanted branches, which do not represent important information and can degrade the quality of the local descriptors. Therefore, in this paper, we propose improving the quality of the hieroglyph representation by pruning unwanted branches from the thinned contour of a hieroglyph and by introducing an improved interest point selection process. Our experiments show that our proposal significantly improves the image retrieval results previously reported in the literature.
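The BoVW model mentioned above reduces a variable-size set of local descriptors to a fixed-length histogram over a visual vocabulary, which can then be compared between hieroglyphs. A minimal, library-free sketch (the vocabulary and descriptors here are illustrative placeholders, not the paper's actual features):

```python
import math

def bovw_histogram(descriptors, vocabulary):
    # Assign each local descriptor to its nearest visual word
    # (Euclidean distance) and accumulate an L1-normalized histogram.
    hist = [0.0] * len(vocabulary)
    for d in descriptors:
        best = min(range(len(vocabulary)),
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(d, vocabulary[k])))
        hist[best] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def cosine_similarity(h1, h2):
    # Ranking score between two BoVW histograms.
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

The paper's contribution operates upstream of this step: better interest points and pruned skeleton branches yield cleaner descriptors, and hence more discriminative histograms.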
The technological advances brought about by the Internet of Things enable new opportunities for more direct interaction between users, objects, and places. This is an extremely valuable innovation for the Cultural Heritage sector, as it allows a more transparent use of technology in the digital augmentation of museums and cultural heritage sites. The possibility of augmenting physical objects with sensors that detect when they are moved and manipulated enables scenarios where descriptive information about objects is presented to users at the exact moment they are looking at them, stimulating engagement. This paper describes a collaborative research effort between cultural heritage professionals, human-computer interaction experts, and developers aimed at investigating the goals and constraints curators consider for a physical encounter between visitors and historic relics. In a case study, we co-designed an interactive plinth centred on tangible interaction and evaluated the impact on the user experience of combining digital information with a hands-on experience of relics of World War I. Our findings show that visitors value this type of tangible interaction with collection objects positively, as it allows the discovery of details and the learning of aspects that normally go unnoticed. The synergy between physical and digital aspects stimulates empathy with the original users of the object and fosters social interaction.
Acquiring images of archaeological artifacts is an essential step in the study and preservation of cultural heritage. In constrained environments, traditional acquisition techniques may fail or be too invasive. We present an optical device, comprising a camera and a wedge waveguide, that is optimized for imaging within the confined spaces encountered in archaeology. The main idea is to redirect light by total internal reflection to circumvent the lack of room, and to compute the final image from the raw data. We tested various applications onsite in autumn 2017 during an archaeological mission in Medamoud (Egypt). Our device successfully recorded images of the ground from narrow trenches about 15 cm wide, including underwater trenches, and between the rocks composing a temple wall. Experts agreed that the acquired images were good enough to extract useful information that cannot be obtained as easily with traditional techniques.
With this paper we present the ongoing research project Tango Danceability of Music in European Perspective and the transdisciplinary research design it is built upon. Three main aspects of tango argentino are in focus: the music, the dance, and the people, in order to understand what is considered danceable in tango music. The study of all three parts involves computer-aided analysis approaches, and the results are examined within ethnochoreological and ethnomusicological frameworks. Two approaches are illustrated in detail to show initial results of the research model. Network analysis based on a collection of online tango event data, together with quantitative evaluation of data gathered by an online survey, showed significant results corroborating the hypothesis of gatekeeping effects in the shaping of musical preferences. The experimental design also incorporates motion capture technology into dance research. We demonstrate certain advantages of transdisciplinary approaches in the study of Intangible Cultural Heritage, in contrast to conventional studies based on methods from a single academic discipline.
Fourth Industrial Revolution technologies, such as artificial intelligence, big data, the internet of things (IoT), and virtual reality, have disrupted legacy methods of operation and have led to progress in many industries worldwide. These technologies also affect cultural and national heritage. IoT generates large volumes of streaming data; therefore, advanced analytics using big data techniques and artificial neural networks is an important research topic. In this study, IoT sensor data were collected at the restored Woljeong Bridge, originally built in the eighth century (760 AD) during the Silla Dynasty (57 BC–935 AD) in South Korea. We empirically evaluate recurrent neural networks with two types of recurrent unit, the long short-term memory (LSTM) unit and the gated recurrent unit (GRU), and additionally evaluate hybrid deep-learning models (CNN-LSTM and CNN-GRU, where CNN denotes a convolutional neural network), to build a prediction model facilitating the preventive conservation of an invaluable cultural and national heritage site. The experimental results show that the LSTM unit is an effective and robust choice. When comparing against the hybrid models (i.e., the joint CNN-LSTM and CNN-GRU architectures), we found that the vanilla LSTM and GRU models had superior time-series prediction capabilities.
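For readers unfamiliar with the recurrent units compared in the study, one step of an LSTM cell can be written out directly. This scalar, single-unit sketch is illustrative only (the evaluated models are full deep-learning networks, not this toy), but the gate equations are the standard ones:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    # One step of a single-unit LSTM cell with scalar input/state.
    # `w` holds an input weight, recurrent weight, and bias for each
    # of the four gates: input (i), forget (f), candidate (g), output (o).
    i = sigmoid(w['wi'] * x + w['ui'] * h_prev + w['bi'])
    f = sigmoid(w['wf'] * x + w['uf'] * h_prev + w['bf'])
    g = math.tanh(w['wg'] * x + w['ug'] * h_prev + w['bg'])
    o = sigmoid(w['wo'] * x + w['uo'] * h_prev + w['bo'])
    c = f * c_prev + i * g   # new cell state: gated memory update
    h = o * math.tanh(c)     # new hidden state: gated output
    return h, c
```

The GRU differs mainly in merging the input and forget gates into a single update gate and dispensing with the separate cell state, which is why the two units are often compared head-to-head on sensor time series, as in this study.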
Digital heritage comprises a broad variety of approaches and topics and involves researchers from multiple disciplines. Against this background, this paper presents a four-stage investigation, carried out in 2016 and 2017, of standards, publications, disciplinary cultures, and scholars in the field of digital heritage, particularly tangible objects such as monuments and sites. It includes the results of (1) an inquiry into nearly 4000 publications from major conferences, (2) a workshop-based survey involving 44 researchers, (3) 15 qualitative interviews, and (4) two online surveys with 1000 and 700 participants respectively. As an overall finding, the community is driven by researchers from European countries, especially Italy, with a background in the humanities. Cross-national co-authorships are promoted by cultural and spatial closeness and, probably due to funding policy, EU membership. The discourse is primarily driven by technologies, and the most common keywords refer to the technologies used. The most prominent research areas are data acquisition and management, visualization, and analysis. Recent topics include unmanned aerial vehicle (UAV)-based 3D surveying technologies, augmented and virtual reality visualization, metadata and paradata standards for documentation, and virtual museums. While a lack of money is named as the biggest obstacle, competency and human resources are the most frequently named demands. The epistemic culture of the scholarly field of digital heritage is closer to engineering than to the humanities. Moreover, conference series are the most relevant venues for scientific discourse, and EU projects in particular set the pace as the most important research endeavors.
This article presents a new algorithm for the automated reconstruction and visualization of damaged ancient inscriptions. After reviewing current methods for enhancing incisions, a hybrid approach is adopted that combines advantages of 2D and 3D analytical techniques. A photogrammetric point cloud of an inscription is projected orthographically from an ideal vantage point, generating a 2.5D raster including channels describing depth and surface derivatives. Next, the obstacles to legibility posed by breaks in an ancient text are considered, leading to the creation of a new segmentation algorithm based on SLIC superpixels and region-merging that operates on the geometry channels of the inscribed surface, rather than color or intensity values. With high accuracy, the algorithm classifies surface points by their likelihood of belonging to the uninscribed original plane, deliberate strokes, or breaks. Conventions for static visualization are developed for epigraphical analysis and publication. Three case studies demonstrate the power and flexibility of this method, which has resulted in substantial changes to IG XIV 1, an early Greek text whose reading has been debated for more than 150 years.
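The first stage described above, projecting the photogrammetric point cloud orthographically into a 2.5D raster, can be sketched in a simplified form. This version keeps only the depth channel and assumes a plain XY projection with a fixed cell size (the real pipeline also stores surface-derivative channels and chooses an ideal vantage point):

```python
def depth_raster(points, width, height, cell):
    # Orthographic projection of a 3D point cloud onto the XY plane:
    # each raster cell keeps the maximum z (the surface closest to
    # the vantage point), producing a 2.5D depth map.  Cells with no
    # points remain None.
    grid = [[None] * width for _ in range(height)]
    for x, y, z in points:
        col, row = int(x / cell), int(y / cell)
        if 0 <= row < height and 0 <= col < width:
            if grid[row][col] is None or z > grid[row][col]:
                grid[row][col] = z
    return grid
```

The segmentation algorithm then operates on this geometry raster (depth and its derivatives) rather than on color or intensity, which is what lets it distinguish deliberate strokes from breaks.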
Terrestrial laser scanning campaigns provide an important means of documenting the 3D structure of historical sites. Unfortunately, the process of converting the 3D point clouds acquired by the laser scanner into a coherent and accurate 3D model has many stages and is not generally automated. In particular, the initial cleaning stage of the pipeline, in which undesired scene points are deleted, remains largely manual and is usually labour-intensive. In this paper we introduce a semi-automated cleaning approach which incrementally trains a random forest (RF) classifier on an initial keep/discard point labelling generated by the user when cleaning the first scan(s). The classifier is then used to predict the labelling of the next scan in the sequence. Before this classification is presented to the user, a denoising post-process, based on the 2D range map representation of the laser scan, is applied. This significantly reduces small isolated point clusters, which the user would otherwise have to fix. The user then selects the remaining incorrectly labelled points, and these are weighted, based on a confidence estimate, and fed back into the classifier to retrain it for the next scan. Our experiments, across 4 scanning campaigns, show that when the scan campaign is coherent, i.e., it does not contain widely disparate or contradictory data, the classifier yields a keep/discard labelling whose accuracy typically ranges between 95% and 99%. This is somewhat surprising, given that the data in each class can represent many object types, such as trees, people, and walls, and that no effort beyond the keep/discard point labelling is required of the user. An informal timing experiment over a 15-scan campaign, comparing the cleaning times produced by our software against those of an experienced user, showed that we were able to produce a result at 98% (average) accuracy 20 minutes sooner than the cumulative expert cleaning time, even with non-optimized code.
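The denoising post-process on the 2D range map can be illustrated with a simple majority filter over keep/discard labels. This is a simplified stand-in for the paper's actual filter, but it shows how isolated mislabelled clusters get absorbed into the surrounding labelling:

```python
def denoise_labels(labels):
    # Majority vote over the 8-neighbourhood (plus centre) of a 2D
    # keep(1)/discard(0) label map, i.e. the range-image view of a
    # scan: isolated mislabelled points flip to the dominant local
    # label, so the user has fewer corrections to make.
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for r in range(h):
        for c in range(w):
            votes = [labels[rr][cc]
                     for rr in range(max(0, r - 1), min(h, r + 2))
                     for cc in range(max(0, c - 1), min(w, c + 2))]
            out[r][c] = 1 if sum(votes) * 2 > len(votes) else 0
    return out
```

In the full pipeline the surviving errors, once corrected by the user, are fed back with confidence weights to retrain the RF classifier for the next scan.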
In this article, the design, development, and evaluation of an augmented reality (AR) based mobile application for tour guidance are discussed. The primary objective is to develop a complete working mobile tour application, comparable to a classical guided tour, that provides an enhanced tour experience to visitors. The developed application is demonstrated by applying it to an actual tour site, the Hwaseong Fortress in Suwon, South Korea, a UNESCO-designated World Heritage site. The usability of the developed mobile application is then evaluated by random tourists at the tour site. The application is designed with three main functions: navigation to points of interest, visualization of information with AR technology, and interactive learning activities with AR-based serious games. Important content about the heritage site is categorized and grouped into multiple themes, and multiple tour routes are designed and implemented accordingly, to maximize effective information delivery and to avoid monotony while using the application. Efforts are also made to provide a more immersive and interactive experience of the historical, cultural, and architectural details of the heritage site using novel AR visualization methods. A systematically developed survey instrument from the fields of information systems and human-computer interaction is tailored to this research and employed for the application evaluation. The survey returned positive results, with suggestions of possible refinements for future work. The proposed method of a device-aided tour is anticipated to enhance the tourist experience and thereby play an important role as an alternative to the classical guided tour.
Numerous image inpainting algorithms are guided by the basic assumption that the known region of the original image can itself provide sufficient prior information for recovering the unknown part, which is often not the case in actual art image inpainting. Sometimes the art image to be inpainted is so badly damaged that there are few priors from which to infer the unknown fragment. Focusing on the lookup strategy for optimal patches, a novel semi-automatic exemplar-based inpainting framework based on a sample dataset is proposed in this paper to solve this problem in three steps: 1) selection of reference images from the dataset using a deep convolutional network; 2) creation of a sample image from the reference images with a melding algorithm; 3) exemplar-based inpainting guided by the created sample image. Several comparative experiments on the Dazu Rock Carvings against state-of-the-art image completion approaches demonstrate the effectiveness of our contributions. Firstly, the search space for candidate patches is extended from the known region to a sample image, which performs effectively when little prior information exists in the original image itself. Furthermore, sample image creation is added to reduce the complexity of inpainting from multiple images and to avoid the taboo of complete duplication in art restoration. Moreover, Poisson blending is used as a post-process to improve the visual harmony between the reconstructed fragment and the known region in both color and illumination. Last but not least, our method has been successfully applied to the virtual inpainting of Dazu Buddhist face images. The inpainted proposals can serve as a reference for the final physical restoration as well as a basis for VR presentation.
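At the core of exemplar-based inpainting is the lookup for the candidate patch that best matches the known pixels around the hole. With the search space extended to a sample image, as proposed above, that lookup can be sketched as follows (grey-level images, an exhaustive sum-of-squared-differences search, and a binary known-pixel mask are simplifying assumptions):

```python
def best_patch(target, mask, sample, psize):
    # Exhaustive SSD search over all psize x psize windows of the
    # sample image.  Only pixels marked known in `mask` contribute
    # to the cost, so the unknown part of the target patch is
    # ignored.  Returns the (row, col) of the best-matching window.
    h, w = len(sample), len(sample[0])
    best, best_cost = None, float('inf')
    for r in range(h - psize + 1):
        for c in range(w - psize + 1):
            cost = 0.0
            for i in range(psize):
                for j in range(psize):
                    if mask[i][j]:  # known pixel of the target patch
                        d = target[i][j] - sample[r + i][c + j]
                        cost += d * d
            if cost < best_cost:
                best, best_cost = (r, c), cost
    return best
```

The unknown pixels of the target patch are then filled from the winning sample window; a blending step such as the Poisson post-process mentioned above smooths the seam between the copied fragment and the known region.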