Yao-Yi Chiang & Weiwei Duan 
Using Historical Maps in Scientific Studies [PDF ebook] 
Applications, Challenges, and Best Practices

This book establishes a first connection between the map user community and the developers of digital map processing technologies by presenting applications, challenges, and best practices in working with historical maps. After the introductory chapter, Chapter 2 presents a variety of existing applications of historical maps to demonstrate the varying needs for processing historical maps in scientific studies (e.g., thousands of historical maps from a single map series vs. a few historical maps from various publishers with different cartographic styles). Chapter 2 also describes case studies introducing typical types of semi-automatic and automatic digital map processing technologies.


The case studies showcase the strengths and weaknesses of semi-automatic and automatic approaches by testing them on a symbol recognition task using the same scanned map. Chapter 3 presents the technical challenges and trends in building a framework for map processing, modeling, linking, and publishing. The framework will enable querying historical map collections as a unified and structured spatiotemporal source in which individual geographic phenomena (extracted from maps) are modeled (described) with semantic descriptions and linked to other data sources (e.g., DBpedia, a structured version of Wikipedia). Chapter 4 dives into recent advances in deep learning technologies and their applications to digital map processing. The chapter reviews existing deep learning models and their capabilities for geographic feature extraction from historical maps and compares different types of training strategies. A comprehensive experiment is described that compares the models and their performance.


Historical maps are fascinating to look at and contain valuable retrospective place information that is difficult to find elsewhere. However, the full potential of historical maps has not been realized because the users of scanned historical maps and the developers of digital map processing technologies come from a wide range of disciplines and often work in silos. Each chapter can be read individually, but the order of the chapters helps the reader to first understand the “product requirements” of a successful digital map processing system, then review the existing challenges and technologies, and finally follow the more recent trend of deep learning applications for processing historical maps.


The primary audience for this book includes scientists and researchers whose work requires long-term historical geographic data, as well as librarians. The secondary audience includes anyone who loves maps!

€52.99

About the Authors


Yao-Yi Chiang is an Associate Professor (Research) in Spatial Sciences, the Director of the Spatial Computing Laboratory, and the Associate Director of the NSF’s Integrated Media Systems Center (IMSC) at the University of Southern California (USC). He is also a faculty member in Informatics (Data Science) at USC. Dr. Chiang received his Ph.D. in Computer Science from the University of Southern California and his bachelor’s degree in Information Management from National Taiwan University. He develops computer algorithms and artificial intelligence systems for discovering, collecting, fusing, and analyzing data from heterogeneous sources to solve real-world problems. His research interests in artificial intelligence include information integration, machine learning, data mining, computer vision, and knowledge graphs. Recently, Dr. Chiang and his lab developed a deep-learning system for predicting and forecasting air quality at a fine spatial scale. They are also developing a computer vision system that can use pre-existing knowledge of an area for object extraction and recognition from images, including satellite imagery and scanned maps. Dr. Chiang teaches a very popular graduate data mining course at USC and hosts annual data mining competitions on the topic of recommendation systems using real data. Before joining USC, Dr. Chiang worked as a research scientist for Geosemble Technologies and Fetch Technologies in California. Geosemble Technologies was founded based on a patent on geospatial data fusion techniques, of which he was a co-inventor.

Weiwei Duan is a Ph.D. student in Computer Science at the University of Southern California (USC). She is building a computer-vision-based system for extracting information from georeferenced images and storing it in a structured format for analysis. The system localizes geographic objects in images by integrating geospatial information and using limited, noisy labeling data. Her research interests are computer vision, knowledge graphs, and machine learning.

Stefan Leyk is an Associate Professor in the Department of Geography at the University of Colorado Boulder and a Research Fellow at the Institute of Behavioral Science. He is a Geographical Information Scientist with research interests in information extraction, spatiotemporal modeling, and socio-environmental systems. In his work, he uses various sources of historical spatial data to better understand the evolution of human systems and how the built environment interacts with environmental processes in the context of land use and natural hazards.

Johannes H. Uhl is a Ph.D. candidate in the Department of Geography at the University of Colorado Boulder, USA. He received his MS degree in Geomatics, Geodesy, and Cartography from Karlsruhe University of Applied Sciences, Germany, and from the Polytechnic University of Valencia, Spain, in 2011. His current research interests include spatiotemporal information extraction and data modeling, uncertainty analysis, geospatial data integration, and machine learning. He uses a variety of spatiotemporal datasets, such as remote sensing data and derived data products, historical topographic maps, and large real-estate-related databases.

Craig A. Knoblock is Executive Director of the Information Sciences Institute of the University of Southern California (USC), Research Professor of both Computer Science and Spatial Sciences at USC, Research Director of the Center on Knowledge Graphs, and Associate Director of the Informatics Program at USC. He received his Bachelor of Science degree from Syracuse University and his Master’s and Ph.D. in Computer Science from Carnegie Mellon University. His research focuses on techniques for describing, acquiring, and exploiting the semantics of data. He has worked extensively on source modeling, schema and ontology alignment, entity and record linkage, data cleaning and normalization, extracting data from the Web, and combining all of these techniques to build knowledge graphs. He has published more than 300 journal articles, book chapters, and conference papers on these topics and has received 7 best paper awards for this work. Dr. Knoblock is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Fellow of the Association for Computing Machinery (ACM), past President and Trustee of the International Joint Conference on Artificial Intelligence (IJCAI), and winner of the 2014 Robert S. Engelmore Award.

English ● PDF format ● 114 pages ● ISBN 9783319669083 ● File size 11.3 MB ● Publisher Springer International Publishing ● City Cham ● Country CH ● Published 2019 ● Downloadable for 24 months ● Currency EUR ● ID 7270290 ● Copy protection Social DRM
