Research

Notoriously curious me exploring the world

Selected papers and articles

[1]
J. Böhm, M. Bredif, T. Gierlinger, M. Krämer, R. Lindenbergh, K. Liu, F. Michel, and B. Sirmacek, “The IQmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds,” ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLI–B3, pp. 301–307, 2016.

Current 3D data capturing, as implemented for example on airborne or mobile laser scanning systems, can efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, the user defines a workflow consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in three directions are clustered into the tree class and then separated into individual trees. Five hours of processing on the 12-node computing cluster result in the automatic identification of more than 4,000 urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results and developers to identify remaining flaws in the processing workflow.
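The core of the classification step is easy to illustrate: for each point, the eigenvalues of the local neighbourhood's covariance matrix indicate whether the neighbourhood is linear (1D), planar (2D) or volumetric (3D). The following numpy sketch uses the standard linearity/planarity/scattering features; it is a toy illustration of the idea, not the paper's exact formulation or its Spark implementation.

import numpy as np

def dimensionality_class(points):
    # eigenvalues of the local covariance matrix, sorted l1 >= l2 >= l3
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    s1, s2, s3 = np.sqrt(max(l1, 0)), np.sqrt(max(l2, 0)), np.sqrt(max(l3, 0))
    features = {"1D": (s1 - s2) / s1, "2D": (s2 - s3) / s1, "3D": s3 / s1}
    return max(features, key=features.get)

rng = np.random.default_rng(0)
print(dimensionality_class(rng.normal(size=(500, 3))))                     # "3D", tree-like scatter
print(dimensionality_class(rng.normal(size=(500, 3)) * [5.0, 5.0, 0.05]))  # "2D", planar (e.g. a facade)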

@article{boehm-bredif-gierlinger-kraemer-lindenbergh-liu-michel-sirmacek-2016,
  title = {The IQmulus Urban Showcase: Automatic Tree Classification and
    Identification in Huge Mobile Mapping Point Clouds},
  author = {B{\"o}hm, J. and Bredif, M. and Gierlinger, T. and Kr{\"a}mer, M. and
    Lindenbergh, R. and Liu, K. and Michel, F. and Sirmacek, B.},
  journal = {ISPRS - International Archives of the Photogrammetry, Remote
    Sensing and Spatial Information Sciences},
  volume = {XLI-B3},
  year = {2016},
  pages = {301--307},
  keywords = {Mobile mapping, Big data, Classification, Trees, Cloud computing, Web-based visualization},
  doi = {10.5194/isprs-archives-XLI-B3-301-2016},
  abstract = {
    Current 3D data capturing as implemented on for example airborne or mobile
    laser scanning systems is able to efficiently sample the surface of a city
    by billions of unselective points during one working day. What is still
    difficult is to extract and visualize meaningful information hidden in these
    point clouds with the same efficiency. This is where the FP7 IQmulus project
    enters the scene. IQmulus is an interactive facility for processing and
    visualizing big spatial data. In this study the potential of IQmulus is
    demonstrated on a laser mobile mapping point cloud of 1 billion points
    sampling ~ 10 km of street environment in Toulouse, France. After the data
    is uploaded to the IQmulus Hadoop Distributed File System, a workflow is
    defined by the user consisting of retiling the data followed by a PCA driven
    local dimensionality analysis, which runs efficiently on the IQmulus cloud
    facility using a Spark implementation. Points scattering in 3 directions are
    clustered in the tree class, and are separated next into individual trees.
    Five hours of processing at the 12 node computing cluster results in the
    automatic identification of 4000+ urban trees. Visualization of the results
    in the IQmulus fat client helps users to appreciate the results, and
    developers to identify remaining flaws in the processing workflow.
  }
}
[2]
M. Krämer and I. Senner, “A modular software architecture for processing of big geospatial data in the cloud,” Computers & Graphics, vol. 49, pp. 69–81, 2015.
(Best paper award at CGA 2016)

In this paper we propose a software architecture that allows for processing of large geospatial data sets in the cloud. Our system is modular and flexible and supports multiple algorithm design paradigms such as MapReduce, in-memory computing or agent-based programming. It contains a web-based user interface where domain experts (e.g. GIS analysts or urban planners) can define high-level processing workflows using a domain-specific language (DSL). The workflows are passed through a number of components including a parser, interpreter, and a service called job manager. These components use declarative and procedural knowledge encoded in rules to generate a processing chain specifying the execution of the workflows on a given cloud infrastructure according to the constraints defined by the user. The job manager evaluates this chain, spawns processing services in the cloud and monitors them. The services communicate with each other through a distributed file system that is scalable and fault-tolerant. Compared to previous work describing cloud infrastructures and architectures, we focus on the processing of big heterogeneous geospatial data. In addition, we do not rely on a single programming model or cloud infrastructure but support several. Combined with the possibility to control the processing through DSL-based workflows, this makes our architecture very flexible and configurable. We see the cloud not only as a means to store and distribute large data sets but also as a way to harness the processing power of distributed computing environments for large-volume geospatial data sets. The proposed architecture design has been developed for the IQmulus research project funded by the European Commission. The paper concludes with the evaluation results from applying our solution to two example workflows from this project.
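The following Python sketch condenses the rule-driven idea from the abstract into a few lines: a parsed workflow is translated into a processing chain, which a job manager walks, spawning one service per step. The service names and rule format are invented for illustration; the actual components (parser, interpreter, job manager) are far more elaborate.

RULES = {
    "retile":   {"service": "retiling-service",       "paradigm": "MapReduce"},
    "classify": {"service": "classification-service", "paradigm": "in-memory"},
}

def to_chain(workflow):
    # declarative knowledge: map each workflow step to a concrete service
    return [RULES[step] for step in workflow]

def run(chain):
    for step in chain:
        # in the real system: spawn a service in the cloud and monitor it;
        # services exchange data through the distributed file system
        print(f"spawning {step['service']} ({step['paradigm']}) ... done")

run(to_chain(["retile", "classify"]))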

@article{kraemer-senner-2015,
  title    = {A modular software architecture for processing of big geospatial data in the cloud},
  journal  = {Computers \& Graphics},
  year     = {2015},
  volume   = {49},
  pages    = {69--81},
  issn     = {0097-8493},
  doi      = {10.1016/j.cag.2015.02.005},
  url      = {http://www.sciencedirect.com/science/article/pii/S0097849315000138},
  author   = {Michel Kr{\"a}mer and Ivo Senner},
  keywords = {Cloud computing, Big Data, Geoprocessing, Distributed systems, Software architectures, Domain-specific languages},
  abstract = {
    In this paper we propose a software architecture that allows for
    processing of large geospatial data sets in the cloud. Our system is
    modular and flexible and supports multiple algorithm design paradigms
    such as MapReduce, in-memory computing or agent-based programming. It
    contains a web-based user interface where domain experts (e.g. {GIS}
    analysts or urban planners) can define high-level processing workflows
    using a domain-specific language (DSL). The workflows are passed through
    a number of components including a parser, interpreter, and a service
    called job manager. These components use declarative and procedural
    knowledge encoded in rules to generate a processing chain specifying the
    execution of the workflows on a given cloud infrastructure according to
    the constraints defined by the user. The job manager evaluates this
    chain, spawns processing services in the cloud and monitors them. The
    services communicate with each other through a distributed file system
    that is scalable and fault-tolerant. Compared to previous work
    describing cloud infrastructures and architectures we focus on the
    processing of big heterogeneous geospatial data. In addition to that,
    we do not rely on only one specific programming model or a certain
    cloud infrastructure but support several ones. Combined with the
    possibility to control the processing through DSL-based workflows, this
    makes our architecture very flexible and configurable. We do not only
    see the cloud as a means to store and distribute large data sets but also
    as a way to harness the processing power of distributed computing
    environments for large-volume geospatial data sets. The proposed
    architecture design has been developed for the {IQmulus} research
    project funded by the European Commission. The paper concludes with the
    evaluation results from applying our solution to two example workflows
    from this project.
  }
}
[3]
T. Ruppert, J. Dambruch, M. Krämer, T. Balke, M. Gavanelli, S. Bragaglia, F. Chesani, M. Milano, and J. Kohlhammer, “Visual Decision Support for Policy Making – Advancing Policy Analysis with Visualization,” in Policy practice and digital science: Integrating complex systems, social simulation and public administration in policy research. Springer, 2015, pp. 321–353.

Today’s politicians are confronted with new information technologies to tackle complex decision-making problems. In order to make sustainable decisions, a profound analysis of societal problems and possible solutions (policy options) needs to be performed. In this policy-analysis process, different stakeholders are involved. Besides internal direct advisors of the policy makers (policy analysts), external experts from different scientific disciplines can support evidence-based decision making. Despite the alleged importance of scientific advice in the policy-making process, it is observed that scientific results are often not used. In this work, a concept is described that supports the collaboration between scientists and politicians. We propose a science-policy interface that is realized by including information visualization in the policy-analysis process. Therefore, we identify synergy effects between both fields and introduce a methodology for addressing the current challenges of science-policy interfaces with visualization. Finally, we describe three exemplary case studies carried out in European research projects that instantiate the concept of this approach.

@incollection{ruppert-dambruch-kraemer-2015,
  title     = {Visual Decision Support for Policy Making -- Advancing Policy Analysis with Visualization},
  author    = {T. Ruppert and J. Dambruch and M. Kr{\"a}mer and T. Balke
    and M. Gavanelli and S. Bragaglia and F. Chesani and M. Milano and J. Kohlhammer},
  booktitle = {Policy practice and digital science: Integrating complex systems,
    social simulation and public administration in policy research},
  editor    = {Christopher G. Reddick},
  publisher = {Springer},
  year      = {2015},
  pages     = {321--353},
  keywords  = {Visual decision support, Digital society, Human computer interaction (HCI), Information visualization, Visual analysis, Policy modeling, Decision support},
  doi       = {10.1007/978-3-319-12784-2_15},
  abstract  = {
    Today’s politicians are confronted with new information technologies to
    tackle complex decision-making problems. In order to make sustainable
    decisions, a profound analysis of societal problems and possible solutions
    (policy options) needs to be performed. In this policy-analysis process,
    different stakeholders are involved. Besides internal direct advisors of the
    policy makers (policy analysts), external experts from different scientific
    disciplines can support evidence-based decision making. Despite the alleged
    importance of scientific advice in the policy-making process, it is observed
    that scientific results are often not used. In this work, a concept is
    described that supports the collaboration between scientists and
    politicians. We propose a science-policy interface that is realized by
    including information visualization in the policy-analysis process.
    Therefore, we identify synergy effects between both fields and introduce a
    methodology for addressing the current challenges of science-policy
    interfaces with visualization. Finally, we describe three exemplary case
    studies carried out in European research projects that instantiate the
    concept of this approach.
  }
}
[4]
M. Krämer and R. Gutbell, “A case study on 3D geospatial applications in the Web using state-of-the-art WebGL frameworks,” in Proceedings of the Twentieth International ACM Conference on 3D Web Technologies (Web3D), 2015, pp. 189–197.

We evaluate three Open-Source WebGL frameworks (X3DOM, three.js and Cesium) and investigate their use for geospatial applications in the Web. Over the course of one year we carried out five case studies and developed different software prototypes using these frameworks. We focus on geospatial applications since they require unique features that bring the evaluated WebGL frameworks to their limits. Geospatial data is typically heterogeneous and very large. Geospatial applications require special data management and interaction techniques. We describe the criteria that we defined to test the WebGL solutions and present our experience from working with them. We give a qualitative comparison and finish the paper with conclusions and an outlook on future research perspectives.

@inproceedings{kraemer-gutbell-2015,
  author    = {Kr{\"a}mer, Michel and Gutbell, Ralf},
  title     = {A case study on 3D geospatial applications in the Web using state-of-the-art WebGL frameworks},
  booktitle = {Proceedings of the Twentieth International ACM Conference on 3D Web Technologies (Web3D)},
  series    = {Web3D '15},
  year      = {2015},
  pages     = {189--197},
  doi       = {10.1145/2775292.2775303},
  location  = {Heraklion, Crete, Greece},
  publisher = {ACM},
  abstract  = {
    We evaluate three Open-Source WebGL frameworks (X3DOM, three.js and Cesium)
    and investigate their use for geospatial applications in the Web. Over the
    course of one year we carried out five case studies and developed different
    software prototypes using these frameworks. We focus on geospatial
    applications since they require unique features that bring the evaluated
    WebGL frameworks to their limits. Geospatial data is typically
    heterogeneous and very large. Geospatial applications require special data
    management and interaction techniques. We describe the criteria that we
    defined to test the WebGL solutions and present our experience from working
    with them. We give a qualitative comparison and finish the paper with
    conclusions and an outlook on future research perspectives.
  }
}
[5]
C. Malewski, J. Dambruch, and M. Krämer, “Towards Interactive Geodata Analysis through a Combination of Domain-Specific Languages and 3D Geo Applications in a Web Portal Environment,” in Proceedings of the 20th International Conference on Urban Planning, Regional Development and Information Society REAL CORP, 2015, pp. 609–616.

Urban planning processes affect a wide range of stakeholders including decision makers, urban planners, business companies as well as citizens. ICT-enabled tools supporting urban planning are considered key to successful and sustainable urban management. Based on previous work in the areas of web-based participation tools for urban planning, rule-based geospatial processing as well as 3D virtual reality applications we present a tool that supports experts from municipalities in planning and decision making but also provides a way for the public to engage in urban planning processes. The main contribution of this work is in the combination of 3D visualization and interaction components with a new ontology-driven rule editor based on domain-specific languages. The 3D visualization, on the one hand, enables stakeholders to present and discuss urban plans. On the other hand, the rule editor particularly targets expert users who need to perform spatial analyses on urban data or want to configure the 3D scene according to custom rules. Compared to previous approaches we propose a portable and interactive solution. Our tool is web-based and uses HTML5 technology making it accessible by a broad audience.

@inproceedings{malewski-dambruch-kraemer-2015,
  author       = {Malewski, Christian and Dambruch, Jens and Kr{\"a}mer, Michel},
  title        = {Towards Interactive Geodata Analysis through a Combination of Domain-Specific Languages and 3D Geo Applications in a Web Portal Environment},
  booktitle    = {Proceedings of the 20th International Conference on Urban Planning, Regional Development and Information Society REAL CORP},
  year         = {2015},
  pages        = {609--616},
  organization = {CORP -- Competence Center of Urban and Regional Planning},
  publisher    = {CORP, Vienna},
  abstract     = {
    Urban planning processes affect a wide range of stakeholders including
    decision makers, urban planners, business companies as well as citizens.
    ICT-enabled tools supporting urban planning are considered key to
    successful and sustainable urban management. Based on previous work in the
    areas of web-based participation tools for urban planning, rule-based
    geospatial processing as well as 3D virtual reality applications we present
    a tool that supports experts from municipalities in planning and decision
    making but also provides a way for the public to engage in urban planning
    processes. The main contribution of this work is in the combination of 3D
    visualization and interaction components with a new ontology-driven rule
    editor based on domain-specific languages. The 3D visualization, on the one
    hand, enables stakeholders to present and discuss urban plans. On the other
    hand, the rule editor particularly targets expert users who need to perform
    spatial analyses on urban data or want to configure the 3D scene according
    to custom rules. Compared to previous approaches we propose a portable and
    interactive solution. Our tool is web-based and uses HTML5 technology
    making it accessible by a broad audience.
  }
}
[6]
M. Krämer, “Controlling the Processing of Smart City Data in the Cloud with Domain-Specific Languages,” in Proceedings of the 7th International Conference on Utility and Cloud Computing (UCC), 2014, pp. 824–829.

In this paper a new user interface for the processing of big geospatial data in the Cloud is presented. The paper focuses on the processing of Smart City data since this kind of data is inherently large and becomes more and more important for the maintenance and planning of sustainable future Smart Cities. The user interface is facilitated by Domain-Specific Languages (DSLs) that are tailored to the urban planning domain. In this paper a modelling method for DSLs is described and then applied to an example use case from the research project IQmulus funded by the European Commission. The resulting DSL is specified using a Parsing Expression Grammar (PEG) which is used to automatically generate a language parser. Furthermore, a technique to interpret scripts written in the Domain-Specific Language is presented. This technique makes use of mapping rules that specify how the interpreter has to translate terms in the DSL to processing services in the Cloud.
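As a rough illustration of the approach, the sketch below parses a toy "<verb> <dataset>" script and applies mapping rules that translate DSL terms to processing services. The grammar, verbs and service names are invented; the paper specifies the real DSL with a Parsing Expression Grammar from which a parser is generated automatically.

import re

def parse(script):
    # stand-in for the generated PEG parser: accept pairs of <verb> <dataset>
    words = re.findall(r"[A-Za-z_]+", script)
    if len(words) % 2 != 0:
        raise SyntaxError("expected pairs of <verb> <dataset>")
    return list(zip(words[0::2], words[1::2]))

MAPPING_RULES = {"retile": "retiling-service", "classify": "classification-service"}

def interpret(script):
    # mapping rules translate DSL terms to processing services in the cloud
    return [(MAPPING_RULES[verb], dataset) for verb, dataset in parse(script)]

print(interpret("retile pointcloud classify trees"))
# [('retiling-service', 'pointcloud'), ('classification-service', 'trees')]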

@inproceedings{kraemer-2014,
  author    = {Kr{\"a}mer, Michel},
  title     = {Controlling the Processing of Smart City Data in the Cloud with Domain-Specific Languages},
  booktitle = {Proceedings of the 7th International Conference on Utility and Cloud Computing (UCC)},
  series    = {UCC '14},
  year      = {2014},
  isbn      = {978-1-4799-7881-6},
  location  = {London, UK},
  pages     = {824--829},
  numpages  = {6},
  publisher = {IEEE},
  abstract  = {
    In this paper a new user interface for the processing of big geospatial data in the
    Cloud is presented. The paper focuses on the processing of Smart City data since this
    kind of data is inherently large and becomes more and more important for the
    maintenance and planning of sustainable future Smart Cities. The user interface is
    facilitated by Domain-Specific Languages (DSLs) that are tailored to the urban
    planning domain. In this paper a modelling method for DSLs is described and then
    applied to an example use case from the research project IQmulus funded by the
    European Commission. The resulting DSL is specified using a Parsing Expression
    Grammar (PEG) which is used to automatically generate a language parser. Furthermore,
    a technique to interpret scripts written in the Domain-Specific Language is
    presented. This technique makes use of mapping rules that specify how the interpreter
    has to translate terms in the DSL to processing services in the Cloud.
  }
}
[7]
J. Dambruch and M. Krämer, “Leveraging Public Participation in Urban Planning with 3D Web Technology,” in Proceedings of the Nineteenth International ACM Conference on 3D Web Technologies (Web3D), 2014, pp. 117–124.
(Best paper award at CGA 2015) (Best paper award at Web3D 2014)

In this paper we present a web-based platform that makes use of HTML5 technology and WebGL to facilitate public participation in urban planning. It consists of components that enable city administrations to present urban plans to the public and to engage with stakeholders. One of these components uses the open source library X3DOM to visualise 3D content—for example, a city model containing a 3D representation of a planned building. Since X3DOM does not need additional software to be installed on the user’s system our implementation is more portable than previous work. Our solution is based on the open source software Liferay which allows it to be configured for various urban planning projects. In order to enable communication between different web application components residing in inline frames (iframes) we implemented a special message bus based on HTML5 postMessage. In this paper we describe implementation details, but we also intensively discuss the possibilities of modern web technology for urban planning. We motivate the use of such technology through three examples that can be implemented using our web application. In this paper we also present results from evaluating our application in user workshops carried out within the project urbanAPI that is funded by the European Commission. Finally, we draw conclusions and discuss possibilities for future urban planning use cases.
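The message bus itself is browser-side JavaScript built on HTML5 postMessage, but the routing pattern is small enough to sketch, transposed here to Python: components subscribe to topics, and the bus forwards messages so the iframes never need direct references to each other. Topic names and payloads are invented for illustration.

class MessageBus:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def post(self, topic, payload):
        # in the browser this is a window.postMessage() call plus origin checks
        for handler in self.handlers.get(topic, []):
            handler(payload)

bus = MessageBus()
bus.subscribe("camera.moved", lambda p: print("3D view now looking at", p))
bus.post("camera.moved", {"lat": 49.234, "lon": 8.123})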

@inproceedings{dambruch-kraemer-2014,
  author    = {Dambruch, Jens and Kr{\"a}mer, Michel},
  title     = {Leveraging Public Participation in Urban Planning with 3D Web Technology},
  booktitle = {Proceedings of the Nineteenth International ACM Conference on 3D Web Technologies (Web3D)},
  series    = {Web3D '14},
  year      = {2014},
  isbn      = {978-1-4503-3015-2},
  location  = {Vancouver, British Columbia, Canada},
  pages     = {117--124},
  numpages  = {8},
  doi       = {10.1145/2628588.2628591},
  acmid     = {2628591},
  publisher = {ACM},
  address   = {New York, NY, USA},
  keywords  = {GIS, HTML5, WebGL, X3DOM, visualization},
  abstract  = {
    In this paper we present a web-based platform that makes use of HTML5 technology
    and WebGL to facilitate public participation in urban planning. It consists of components
    that enable city administrations to present urban plans to the public and to engage with
    stakeholders. One of these components uses the open source library X3DOM to visualise
    3D content—for example, a city model containing a 3D representation of a planned building.
    Since X3DOM does not need additional software to be installed on the user’s system our
    implementation is more portable than previous work. Our solution is based on the open
    source software Liferay which allows it to be configured for various urban planning projects.
    In order to enable communication between different web application components residing
    in inline frames (iframes) we implemented a special message bus based on HTML5 postMessage.
    In this paper we describe implementation details, but we also intensively discuss the
    possibilities of modern web technology for urban planning. We motivate the use of such
    technology through three examples that can be implemented using our web application. In
    this paper we also present results from evaluating our application in user workshops
    carried out within the project urbanAPI that is funded by the European Commission.
    Finally, we draw conclusions and discuss possibilities for future urban planning use cases.
  }
}
[8]
M. Krämer and A. Stein, “Automated Urban Management Processes: Integrating a Graphical Editor for Modular Domain-Specific Languages into a 3D GIS,” in Proceedings of the 19th International Conference on Urban Planning, Regional Development and Information Society REAL CORP, 2014, pp. 99–108.

In this paper we present the results of integrating a graphical editor for geospatial processing workflows into a 3D GIS. We use modular domain-specific languages (DSLs) that are tailored to specific application domains. The vocabulary consists of so-called recipes that are grouped into cookbooks representing the language for a certain application domain. Recipes can be reused in multiple cookbooks. This approach provides good usability, as users quickly become familiar with the domain-specific languages by recognizing common recipes. In this paper we also describe guidelines for choosing the right granularity for recipes, which allows for complex rules while using the simplest possible recipes. We also describe a workflow for domain-specific language design based on ontologies to identify the correct domain vocabulary. Our approach can be used to automate processing of geospatial datasets in the area of urban planning. To evaluate our approach we use the implemented graphical rule editor in a practical scenario and present it to a user group from the urbanAPI project.
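The recipe/cookbook structure can be captured in a few lines. The recipe and cookbook names below are invented, but they show the point of the design: recipes are the reusable vocabulary, and each cookbook groups them into the language for one application domain.

from dataclasses import dataclass

@dataclass(frozen=True)
class Recipe:
    name: str
    parameters: tuple = ()

select_features = Recipe("select features", ("layer",))
create_buffer = Recipe("create buffer", ("distance",))
create_report = Recipe("create report")

COOKBOOKS = {
    "flood analysis": (select_features, create_buffer, create_report),
    "noise mapping": (select_features, create_report),  # recipes are reused
}

for domain, recipes in COOKBOOKS.items():
    print(f"{domain}: " + ", ".join(r.name for r in recipes))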

@inproceedings{kraemer-stein-2014,
  author       = {Kr{\"a}mer, Michel and Stein, Andreas},
  editor       = {Manfred Schrenk and Vasily V. Popovich and Peter Zeile and Pietro Elisei},
  title        = {Automated Urban Management Processes: Integrating a Graphical Editor for Modular Domain-Specific Languages into a 3D GIS},
  booktitle    = {Proceedings of the 19th International Conference on Urban Planning, Regional Development and Information Society REAL CORP},
  year         = {2014},
  pages        = {99--108},
  organization = {CORP -- Competence Center of Urban and Regional Planning},
  address      = {Schwechat, Austria},
  abstract     = {
    In this paper we present the results of integrating a graphical editor for geospatial
    processing workflows into a 3D GIS. We use modular domain-specific languages (DSLs) that
    are tailored to specific application domains. The vocabulary consists of so-called
    recipes that are grouped into cookbooks representing the language for a certain application
    domain. Recipes can be reused in multiple cookbooks. This approach allows for a good
    usability as the user quickly becomes familiar with the domain-specific languages by
    recognizing common recipes. In this paper we also describe guidelines for choosing the
    right granularity for recipes which allows for complex rules while using simplest possible
    recipes. We also describe a workflow for domain-specific language design based on ontologies
    to identify the correct domain vocabulary. Our approach can be used to automate processing
    of geospatial datasets in the area of urban planning. To evaluate our approach we use the
    implemented graphical rule editor in a practical scenario and present it to a user group
    from the urbanAPI project.
  }
}
[9]
M. Krämer, D. Ludlow, and Z. Khan, “Domain-Specific Languages for Agile Urban Policy Modelling,” in Proceedings of the 27th European Conference on Modelling and Simulation (ECMS), 2013, pp. 673–680.

In this paper we present a new approach to urban policy modelling and policy making with the help of ICT-enabled tools. We present a complete policy cycle that includes creating policy plans, securing stakeholder and public engagement, implementation, monitoring, and evaluating a particular policy model. ICT-enabled tools can be deployed at various stages in this cycle, but they require an intuitive interface, which can be supported by domain-specific languages (DSLs) as the means to express policy modelling aspects such as computational processes and computer-readable policy rules in the words of the domain expert. In order to evaluate the use of such languages, we present a real-world scenario from the urbanAPI project. We describe what DSLs for this scenario would look like. Finally, we discuss strengths and limitations of our approach as well as lessons learnt.

@inproceedings{kraemer-ludlow-khan-2013,
  author       = {Kr{\"a}mer, Michel and Ludlow, David and Khan, Zaheer},
  editor       = {Webj{\o}rn Rekdalsbakken and R.T. Bye and H. Zhang},
  title        = {Domain-Specific Languages for Agile Urban Policy Modelling},
  booktitle    = {Proceedings of the 27th European Conference on Modelling and Simulation (ECMS)},
  year         = {2013},
  pages        = {673--680},
  organization = {European Council for Modelling and Simulation},
  address      = {{\AA}lesund, Norway},
  keywords     = {Urban planning, Domain specific languages (DSL), Human-computer interaction (HCI)},
  abstract     = {
    In this paper we present a new approach of performing urban policy modelling and making with
    the help of ICT enabled tools. We present a complete policy cycle that includes creating policy
    plans, securing stakeholders and public engagement, implementation, monitoring, and evaluating
    a particular policy model. ICT enabled tools can be deployed at various stages in this cycle, but
    they require an intuitive interface which can be supported by domain-specific languages (DSLs) as
    the means to express policy modelling aspects such as computational processes and computer-readable
    policy rules in the words of the domain expert. In order to evaluate the use of such languages, we
    present a real-world scenario from the urbanAPI project. We describe how DSLs for this scenario
    would look like. Finally, we discuss strengths and limitations of our approach as well as lessons
    learnt.
  }
}
[10]
M. Krämer and A. Kehlenbach, “Interactive, GPU-Based Urban Growth Simulation for Agile Urban Policy Modelling,” in Proceedings of the 27th European Conference on Modelling and Simulation (ECMS), 2013, pp. 75–81.

In this paper we present a novel approach to simulating urban growth by utilising the computational power of modern GPUs. The simulation results can be used in urban policy modelling to reduce turnaround times in the policy cycle. We use a state-of-the-art agent-based simulation model that consists of rules describing human behaviour. The simulation incorporates geospatial information such as land use, current population density and road network data. In order to simulate the phenomenon of urbanisation, citizens in our model are more likely to settle near roads or existing settlements/cities. In this paper we present our implementation, which is based on the FLAME GPU framework. Each agent on the GPU represents a group of citizens at a specific location. In order to evaluate our approach we present a practical use case. We measure the performance of our implementation and compare it with a Java-based solution. Finally, we discuss our approach and show opportunities for agile and interactive urban policy modelling.
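The settle-probability idea is simple to reproduce on a small grid. The following numpy sketch is a CPU-bound toy, not the FLAME GPU implementation: cells near roads or existing settlements are more attractive, and newcomers are distributed according to that attractiveness in each step.

import numpy as np

rng = np.random.default_rng(42)
SIZE, NEWCOMERS, STEPS = 100, 1000, 20
population = np.zeros((SIZE, SIZE))
near_road = np.zeros((SIZE, SIZE))
near_road[49:52, :] = 1.0  # a single road corridor crossing the grid

def neighbourhood_density(pop):
    # mean population in each cell's 3x3 neighbourhood
    p = np.pad(pop, 1)
    return sum(p[i:i + SIZE, j:j + SIZE] for i in range(3) for j in range(3)) / 9.0

for _ in range(STEPS):
    # settling is more likely near roads and existing settlements
    attractiveness = 0.5 * near_road + neighbourhood_density(population) + 1e-9
    prob = (attractiveness / attractiveness.sum()).ravel()
    cells = rng.choice(SIZE * SIZE, size=NEWCOMERS, p=prob)
    np.add.at(population, np.unravel_index(cells, population.shape), 1)

print("settled on or near the road:", int(population[45:56, :].sum()), "of", STEPS * NEWCOMERS)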

@inproceedings{kraemer-kehlenbach-2013,
  author       = {Kr{\"a}mer, Michel and Kehlenbach, Andreas},
  editor       = {Webj{\o}rn Rekdalsbakken and R.T. Bye and H. Zhang},
  title        = {Interactive, GPU-Based Urban Growth Simulation for Agile Urban Policy Modelling},
  booktitle    = {Proceedings of the 27th European Conference on Modelling and Simulation (ECMS)},
  year         = {2013},
  pages        = {75--81},
  organization = {European Council for Modelling and Simulation},
  address      = {{\AA}lesund, Norway},
  keywords     = {GPU computing, Urban planning, Geographic information systems (GIS)},
  abstract     = {
    In this paper we present a novel approach of simulating urban growth by utilising the computation
    power of modern GPUs. The simulation results can be used in urban policy modelling to reduce
    turnaround times in the policy cycle. We use a state-of-the-art agent-based simulation model that
    consists of rules to describe human behaviour. The simulation incorporates geospatial information
    such as land-use, current population density and road network data. In order to simulate the
    phenomena of urbanisation, in our model citizens more likely settle near roads or existing
    settlements/cities. In this paper we present our implementation that is based on the FLAME GPU
    framework. Each agent on the GPU represents a group of citizens at a specific location. In order
    to evaluate our approach we present a practical use case. We measure the performance of our
    implementation and compare it with a Java-based solution. Finally, we discuss our approach and
    show opportunities for agile and interactive urban policy modelling.
  }
}
[11]
V. Coors and M. Krämer, “Integrating Quality Management into a 3D Geospatial Server,” in 28th Urban Data Management Symposium UDMS, 2011, pp. 7–12.
(Best paper nomination at CGA 2012)

In recent years the technology and workflows for producing and managing large 3D urban models have been established and widely used. Standards such as CityGML enable the modelling and exchange of semantically enriched multi-purpose 3D urban models for applications like urban planning, public participation, environmental simulation and navigation. However, data quality management is essential to control and enhance the quality of these models in order to meet the needs of the aforementioned applications. Quality management should be performed throughout the whole lifecycle of geospatial datasets, from data acquisition to processing, analysis and visualisation. In this paper we therefore focus on the integration of a quality management software module into a 3D geospatial data server. First results of a prototype system developed at HFT Stuttgart together with Fraunhofer IGD are presented as a starting point for further research into the quality management of 3D city models.
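The abstract does not detail the module's interface, so the following is an invented minimal sketch of the underlying idea: a quality-management component as a list of checks that the server runs over incoming features, collecting human-readable issues.

def ring_closed(feature):
    ring = feature["ring"]
    return [] if ring[0] == ring[-1] else [f"{feature['id']}: ring not closed"]

def plausible_height(feature):
    return [] if feature["height"] > 0 else [f"{feature['id']}: height <= 0"]

CHECKS = [ring_closed, plausible_height]

def run_quality_checks(features):
    return [issue for f in features for check in CHECKS for issue in check(f)]

print(run_quality_checks([
    {"id": "b1", "ring": [(0, 0), (1, 0), (1, 1), (0, 0)], "height": 9.5},
    {"id": "b2", "ring": [(0, 0), (2, 0), (2, 2)], "height": -1.0},
]))
# ['b2: ring not closed', 'b2: height <= 0']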

@inproceedings{coors-kraemer-2011,
  author       = {Volker Coors and Michel Kr{\"a}mer},
  title        = {Integrating Quality Management into a 3D Geospatial Server},
  booktitle    = {28th Urban Data Management Symposium UDMS},
  year         = {2011},
  series       = {The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences; XXXVIII-4/C21},
  pages        = {7--12},
  organization = {International Society for Photogrammetry and Remote Sensing (ISPRS)},
  publisher    = {ISPRS},
  address      = {Delft, The Netherlands},
  abstract     = {
    In recent years the technology and workflow for producing and management of large 3D urban
    models has been established and widely been used. Standards such as CityGML enable the modelling
    and exchange of semantically enriched multi-purpose 3D urban models for applications like urban
    planning, public participation, environmental simulation and navigation. However, data quality
    management is essential to control and enhance the quality of these models in order to be able
    to meet the needs of the aforementioned applications. Quality management should be performed
    throughout the whole lifecycle of geospatial datasets—from data acquisition to processing,
    analysis and visualisation. In this paper, we therefore focus on the integration of a quality
    management software module into a 3D geospatial data server. First results of a prototype system
    developed at HFT Stuttgart together with Fraunhofer IGD will be presented in this paper as a
    starting point for further research into the field of quality management of 3D city models.
  }
}
[12]
S. Thum and M. Krämer, “Reducing Maintenance Complexity of User-centric Web Portrayal Services,” in Proceedings of the Sixteenth International ACM Conference on 3D Web Technologies (Web3D), 2011, pp. 165–172.

With the success of applications providing geospatial 3D data to end users via virtual globes and web-based services, the need to access more accurate and up-to-date information has emerged. Geodata typically comes from different and unrelated sources, whereas users expect a seamless view that respects their specific needs. A modern geo portrayal service therefore has to deliver information from heterogeneous data sources adapted to specific applications. Each data model introduces limitations that may make it harder to fulfil user requirements. Since, in practice, user requirements constantly change and data sets are updated, the complexity of configuring and administering such a process increases disproportionately with both the number of requirements and data model limitations. To overcome this situation, we outline an approach which seeks to avoid what we call the explosion of interdependencies between formalized requirements and geodata features. Our goal is to keep original data virtually untouched while delivering adapted data to multiple users with specific portrayal requirements. We therefore introduce an intermediate representation (IR) to alleviate the growing complexity of interdependencies from O(n × m) to O(n + m). We adapt this technique to the geospatial domain, deriving a prototypical implementation of an OGC Web 3D Service which delivers X3D documents. The service is capable of adapting to user requirements while serving originally heterogeneous geodata. We introduce the implementation and discuss results and application opportunities.
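The complexity argument is worth spelling out: adapting n requirement profiles directly to m source formats needs n × m converters, while going through an intermediate representation needs only m importers plus n exporters. A minimal sketch with invented formats and profiles:

n_profiles, m_formats = 5, 4
print("direct converters:", n_profiles * m_formats)                    # 20
print("with an intermediate representation:", n_profiles + m_formats)  # 9

# minimal shape of such a pipeline (formats and profiles invented):
importers = {"citygml": lambda doc: {"features": doc},
             "shapefile": lambda doc: {"features": doc}}
exporters = {"x3d-lowpoly": lambda ir: f"<X3D>{len(ir['features'])} features</X3D>"}

def portray(doc, source_format, profile):
    # every source format touches only the IR; every profile reads only the IR
    return exporters[profile](importers[source_format](doc))

print(portray(["building", "road"], "citygml", "x3d-lowpoly"))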

@inproceedings{thum-kraemer-2011,
  author       = {Simon Thum and Michel Kr{\"a}mer},
  title        = {Reducing Maintenance Complexity of User-centric Web Portrayal Services},
  booktitle    = {Proceedings of the Sixteenth International ACM Conference on 3D Web Technologies (Web3D)},
  year         = {2011},
  pages        = {165--172},
  organization = {ACM SIGGRAPH and European Association for Computer Graphics (Eurographics) and Web3D Consortium},
  publisher    = {ACM Press},
  address      = {New York},
  abstract     = {
    With the success of applications providing geospatial 3D data to end-users via virtual
    globes and web-based services, the need for accessing more accurate and up-to-date
    information has come up. Geodata typically comes from different and unrelated sources,
    whereas users expect a seamless view which respects their specific needs. A modern geo
    portrayal service therefore has to deliver information from heterogeneous data sources
    adapted to specific applications. Each data model introduces limitations that might make
    it harder to fulfil user requirements. Since in practise user requirements constantly
    change and data sets are updated, the complexity of configuring and administering such a
    process increases disproportionately to both the number of requirements and data model
    limitations.

    To overcome this situation, we outline an approach which seeks to avoid what we call the
    explosion of interdependencies between formalized requirements and geodata features. Our
    goal is to keep original data virtually untouched while delivering adapted data to multiple
    users with specific portrayal requirements. We therefore introduce an intermediate
    representation (IR) to alleviate the growing complexity of interdependencies from O(n x m)
    to O(n + m). We adapt this technique to the geospatial domain, deriving a prototypical
    implementation of an OGC Web 3D Service which delivers X3D documents. The service is
    capable of adapting to user requirements and at the same time serving originally
    heterogeneous geodata. We introduce the implementation, discuss results and
    application opportunities.
  }
}
[13]
M. Krämer, M. Dummer, T. Ruppert, and J. Kohlhammer, “Tackling Uncertainty in Combined Visualizations of Underground Information and 3D City Models,” in GeoViz Hamburg 2011 Workshop, 2011, p. 2.

Cities are under constant development. They are characterized not only by their surface constructions like buildings and traffic infrastructure, but also by their underground structures. Besides human-created lifelines, tunnels and quarries, there are also diverse geological formations. Underground information contains a lot of uncertainty by nature, because measurements provide information along drilling lines only. Additionally, man-made structures are often hardly documented. In this paper we will present ways to visualize such uncertainty in combination with exact surface structures from 3D city models in order to assist stakeholders in making decisions. We will evaluate existing techniques and describe the requirements imposed on uncertainty visualization.

@inproceedings{kraemer-dummer-ruppert-kohlhammer-2011,
  author       = {Michel Kr{\"a}mer and Martin Dummer and Tobias Ruppert and J{\"o}rn Kohlhammer},
  title        = {Tackling Uncertainty in Combined Visualizations of Underground Information and 3D City Models},
  booktitle    = {GeoViz Hamburg 2011 Workshop},
  year         = {2011},
  pages        = {2},
  organization = {HafenCity University, Hamburg and International Cartographic Association (ICA): Commission on GeoVisualization},
  address      = {Hamburg, Germany},
  abstract     = {
    Cities are under constant development. They are characterized not only by their surface
    constructions like buildings and traffic infrastructure, but also by their underground
    structures. Besides human-created lifelines, tunnels and quarries, there are also diverse
    geological formations. Underground information contains a lot of uncertainty by nature,
    because measurements provide information along drilling lines only. Additionally, man-made
    structures are often hardly documented. In this paper we will present ways to visualize
    such uncertainty in combination with exact surface structures from 3D city models in order
    to assist stakeholders in making decisions. We will evaluate existing techniques and
    describe the requirements imposed on uncertainty visualization.
  }
}
[14]
T. Reitz, M. Krämer, and S. Thum, “A Processing Pipeline for X3D Earth-based Spatial Data View Services,” in Proceedings of the 14th International ACM Conference on 3D Web Technologies (Web3D), 2009, pp. 137–145.

Over the last few years, a high demand for scenario-specific visualizations of 3D urban models has emerged. At the same time, established service specifications do not yet provide the means to define 3D map products and to deliver them in suitable formats, since they are focused on traditional 2D map products. In this paper, we present an approach for the definition of a 3D urban model view service. This approach consists of a three-step process in which original geodata is integrated, filtered and then transformed into various scene graph formats such as X3D. We were able to maintain a high degree of compatibility with existing services and specifications such as Styled Layer Descriptors and the Web Map Service interface. The paper concludes with the experiences gathered from implementing and using this approach and provides an outlook as to how the lessons learned can be used in application and standardization.
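The three-step process translates almost directly into code. The sketch below uses an invented feature model and emits a trivial X3D string; it only shows how the integrate, filter and transform stages chain together, not the actual implementation.

def integrate(sources):
    # merge heterogeneous geodata into one feature list
    return [feature for source in sources for feature in source]

def filter_features(features, predicate):
    # select what the requested scene product should contain
    return [f for f in features if predicate(f)]

def transform(features):
    # serialize into a scene-graph format such as X3D
    shapes = "".join(f"<Shape><!-- {f['name']} --></Shape>" for f in features)
    return f"<X3D><Scene>{shapes}</Scene></X3D>"

cadastre = [{"name": "town hall", "height": 21.0}]
survey = [{"name": "shed", "height": 2.5}]
print(transform(filter_features(integrate([cadastre, survey]),
                                lambda f: f["height"] > 5.0)))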

@inproceedings{reitz-kraemer-thum-2009,
  author       = {Thorsten Reitz and Michel Kr{\"a}mer and Simon Thum},
  title        = {A Processing Pipeline for X3D Earth-based Spatial Data View Services},
  booktitle    = {Proceedings of the 14th International ACM Conference on 3D Web Technologies (Web3D)},
  year         = {2009},
  pages        = {137--145},
  organization = {ACM SIGGRAPH and European Association for Computer Graphics (Eurographics) and Web3D Consortium and Fraunhofer-Institut f{\"u}r Graphische Datenverarbeitung (IGD) and Gesellschaft f{\"u}r Informatik (GI), Fachbereich Graphische Datenverarbeitung},
  publisher    = {ACM Press},
  address      = {New York},
  abstract     = {
    Over the last years, a high demand for scenario-specific visualizations of 3D urban models
    has evolved. At the same time, established service specifications do not yet provide the
    means to define 3D map products and to deliver them in suitable formats, since they are
    focused on traditional 2D map products. In this paper, we present an approach for the
    definition of a 3D urban model view service. This approach consists of a three-step
    process, in which original geodata is integrated, filtered and then transformed into
    various scene graph formats such as X3D. We were able to maintain a high degree of
    compatibility with existing services and specifications such as Styled Layer Descriptors
    and the Web Map Service interface. The paper concludes with the experiences gathered
    from implementing and using this approach and provides an outlook as to how the lessons
    learned can be used in application and standardization.
  }
}

Selected magazine articles

[15]
M. Krämer, “NoSQL süß-sauer: MongoMVCC stattet MongoDB mit Datenversionierung und ACID-Eigenschaften aus,” Java-Magazin, no. 1, pp. 91–95, 2013.

Compared to conventional databases, MongoDB has a reputation for being faster and easier to use. However, it lacks a feature that many applications need: MongoDB does not support transactions. The open-source library MongoMVCC closes this gap and, along the way, makes it possible to version all data using a Git-like approach.
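MongoMVCC itself is a Java library, but the Git-like core idea fits into a short conceptual sketch: every commit is an immutable snapshot that knows its predecessor, so readers can pin a version and never block writers. This is a toy model of the concept, not MongoMVCC's actual API.

class VersionedStore:
    def __init__(self):
        self._commits = {0: {}}  # commit id -> immutable snapshot
        self.head = 0

    def commit(self, changes):
        snapshot = {**self._commits[self.head], **changes}
        self.head = len(self._commits)
        self._commits[self.head] = snapshot
        return self.head

    def get(self, key, commit_id=None):
        return self._commits[self.head if commit_id is None else commit_id].get(key)

store = VersionedStore()
v1 = store.commit({"building:1": {"height": 10}})
store.commit({"building:1": {"height": 12}})
print(store.get("building:1", v1))  # {'height': 10} -- the old version survives
print(store.get("building:1"))      # {'height': 12} -- the current head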

@article{kraemer-nosql-2013,
  author       = {Michel Kr{\"a}mer},
  title        = {NoSQL s{\"u}{\ss}-sauer: MongoMVCC stattet MongoDB mit Datenversionierung und ACID-Eigenschaften aus},
  journal      = {Java-Magazin},
  year         = {2013},
  number       = {1},
  pages        = {91--95},
  abstract     = {
    Gegen{\"u}ber herk{\"o}mmlichen Datenbanken hat MongoDB den Ruf, performanter und
    einfacher in der Handhabung zu sein. F{\"u}r viele Anwendungen fehlt meist jedoch eine
    wichtige Eigenschaft: MongoDB unterst{\"u}tzt keine Transaktionen. Die Open-Source-Library
    MongoMVCC schlie{\ss}t diese L{\"u}cke und bietet ganz nebenbei noch die M{\"o}glichkeit,
    alle Daten mit einem Git-{\"a}hnlichen Verfahren zu versionieren.
   }
}
[16]
M. Krämer and C. Hopf, “Profitable Use of 3D City Models: The Mainz Experience,” GIS.Business - GIS.Science - GIS.Trends+Markets, no. 1, pp. 28–35, 2012.

3D models are becoming increasingly important in the sustainable development of modern towns and cities. Centralised data storage and future-oriented continuation concepts are vital to ensure that data provide a long-term benefit. To this end the city of Mainz, capital of the German state of Rhineland-Palatinate, has initiated a project to develop a 3D city model. Here, solutions developed by the Fraunhofer Institute for Computer Graphics Research IGD are employed in the management, continuation and visualisation of 3D data.

@article{kraemer-hopf-2012,
  author       = {Kr{\"a}mer, Michel and Hopf, Christiane},
  title        = {Profitable Use of 3D City Models: The Mainz Experience},
  journal      = {GIS.Business - GIS.Science - GIS.Trends+Markets},
  year         = {2012},
  number       = {1},
  pages        = {28--35},
  abstract     = {
    3D models are becoming increasingly important in the sustainable development of modern
    towns and cities. Centralised data storage and future-oriented continuation concepts are
    vital to ensure that data provide a long-term benefit. To this end the city of Mainz,
    capital of the German state of Rhineland-Palatinate, has initiated a project to develop
    a 3D city model. Here, solutions developed by the Fraunhofer Institute for Computer Graphics
    Research IGD are employed in the management, continuation and visualisation of 3D data.
  }
}

Selected presentations

I was once asked by a French colleague of mine whether Cloud Computing is ethically justifiable. At first I thought he was referring to Big Data analytics and possible misuse of personal data, but in fact he was asking whether the huge energy consumption of data centers and its impact on the environment outweigh the benefits of Cloud Computing. I was baffled at first because I couldn’t really answer this question. I did some research on the topic and found an answer in the combination of the two areas I have been working on over the last couple of years: Smart Cities use Cloud Computing to make the urban environment more livable (and this is a huge benefit). This presentation summarizes my findings.

In this presentation, which I gave at the Smart City Clouds Workshop co-located with the Utility and Cloud Computing Conference UCC 2014, I talked about a novel user interface that allows domain experts such as urban planners to harness the capabilities of Cloud Computing. The user interface is based on Domain-Specific Languages (DSLs) that are readable and understandable even for users with no background in computer science. The DSLs also hide the technical details of Cloud Computing and enable users to specify what should be done instead of how it should be done. In this talk I specifically focused on the modelling method that I use to specify new Domain-Specific Languages.

In this 1.5-hour tutorial, which I gave at the Web3D 2014 conference in Vancouver, BC, Canada, I presented different tools for preparing large geospatial data sets (in particular 3D city models) so they can be visualized in the Web. The presentation starts with a motivating example explaining how 3D city models in the Web can help urban planners and decision makers present their policy plans to the public and engage all stakeholders in the planning process. Most of the talk consisted of a live demo in which I converted a large CityGML model to X3D and displayed it in the web browser using WebGL/X3DOM.

These slides are based on Garr Reynolds’ great books ‘Presentation Zen’ and ‘Presentation Zen Design’ (http://www.garrreynolds.com). They describe a bunch of best practices that everyone who wants to create outstanding presentations should follow, including tips for the preparation, design, and performance. The presentation itself is based on these principles, hence the subtitle “… or how I designed this presentation”.

This is the English translation of a talk I gave at the JAX 2012 conference (http://www.jax.de/). At Fraunhofer IGD we use relational databases to store large 3D city models. In the past, we realized that the performance of relational databases is not good enough for this kind of data. We therefore looked for better solutions and eventually found MongoDB. However, like most NoSQL databases, MongoDB does not support transactions. So we implemented the Multiversion Concurrency Control (MVCC) paradigm on top of MongoDB to fill that gap. Our solution works completely lock-free and is almost as fast as plain MongoDB. This talk gives some details about our application as well as insights into MongoMVCC’s implementation. The library is freely available on GitHub: https://github.com/igd-geo/mongomvcc