Recent projects of the GeoComp group
Projects of the Machine Perception Lab with major contributions of the GeoComp group
Past important projects
GEOFUSION (Change detection and event recognition with fusion of images and Lidar measurement)
Key aspects of machine-based environment interpretation include the automatic detection and recognition of objects, obstacle avoidance in navigation, and object tracking in certain applications. Integrating visual sensors, such as video cameras, with sensors providing direct 3D spatial measurements, such as Lidars, may offer various benefits (high spatial and temporal resolution, direct distance measurement, invariance to color or illumination). However, fusing the different data modalities often brings sensor-specific challenges. Since the data characteristics of the recently introduced 3D laser scanners differ significantly from those of earlier sensors, the options for data fusion have not yet been widely exploited in the literature. On the other hand, the results are interesting not only from a scientific point of view: they also provide useful information regarding possible future utilization, which may benefit hardware manufacturers.
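To illustrate one basic step of the camera-Lidar fusion referred to above, the sketch below projects Lidar points into a calibrated camera image with a pinhole model. This is a minimal sketch only: the calibration matrices, file contents and numerical thresholds are illustrative assumptions, not parameters of the project's actual sensor setup.

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t, image_shape):
    """Project 3D Lidar points into a calibrated camera image.

    points_lidar : (N, 3) array of points in the Lidar frame
    K            : (3, 3) camera intrinsic matrix (assumed known from calibration)
    R, t         : rotation (3, 3) and translation (3,) from the Lidar to the camera frame
    image_shape  : (height, width) of the camera image
    Returns the pixel coordinates and depths of the points that fall inside the image.
    """
    # Transform points from the Lidar frame to the camera frame
    points_cam = points_lidar @ R.T + t

    # Keep only points in front of the camera (0.1 m is a placeholder margin)
    in_front = points_cam[:, 2] > 0.1
    points_cam = points_cam[in_front]

    # Pinhole projection onto the image plane
    pixels_h = points_cam @ K.T
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]

    # Discard projections outside the image boundaries
    h, w = image_shape
    inside = (
        (pixels[:, 0] >= 0) & (pixels[:, 0] < w) &
        (pixels[:, 1] >= 0) & (pixels[:, 1] < h)
    )
    return pixels[inside], points_cam[inside, 2]
```

Once projected, each Lidar point can be paired with the color of the pixel it falls on, which is one simple way of combining the two modalities.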
Another set of critical problems is dynamic event recognition and change detection. The research article serving as the basis of the proposal contains important preliminary results in this respect, presenting various Markovian change detection methods based on feature fusion. The article compared direct feature-differencing techniques with post-classification comparison approaches, and this experience can be exploited in the project by generalizing it to 3D data processing. The project will also offer gifted undergraduate and PhD students the possibility to join the addressed research work, in particular master's students of the Pázmány Péter Catholic University and the Budapest University of Technology and Economics.
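As a schematic illustration of the two change detection strategies compared in the cited article, the sketch below contrasts direct feature differencing with post-classification comparison on co-registered feature maps. The threshold and the classifier are placeholders; in the actual work these decisions are embedded in Markovian models rather than taken independently per pixel.

```python
import numpy as np

def change_by_feature_differencing(features_t1, features_t2, threshold=0.5):
    """Direct feature differencing: mark pixels whose feature vectors at the two
    dates are farther apart than a threshold (placeholder value)."""
    diff = np.linalg.norm(features_t1 - features_t2, axis=-1)
    return diff > threshold

def change_by_post_classification(features_t1, features_t2, classify):
    """Post-classification comparison: classify each date independently with the
    supplied classifier, then mark pixels whose class labels differ."""
    labels_t1 = classify(features_t1)
    labels_t2 = classify(features_t2)
    return labels_t1 != labels_t2
```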
INSTGEO (Instant environment perception from a mobile platform with a new generation geospatial database background)
State-of-the-art 3D sensors have revolutionized the acquisition of environmental information. The 3D vision systems of self-driving vehicles can be used, apart from safe navigation, for real-time mapping of the environment, detecting and analyzing static (traffic signs, power lines, vegetation, street furniture) and dynamic (traffic flow, crowd gathering, unusual events) scene elements. The onboard 3D sensors - Lidar laser scanners, calibrated camera systems and navigation sensors - record high frame-rate measurement sequences; however, due to their limited spatial resolution, various occlusion effects in the 3D scenes, and the short observation time caused by the vehicle's driving speed, environment analysis based purely on onboard sensor measurements exhibits significant limitations.
The new generation of geo-information systems (GIS) stores extremely detailed 3D maps of cities, consisting of dense 3D point clouds, registered camera images and semantic metadata. Great challenges remain here as well, due to the large expense of the environment scanning missions, the cost of evaluating the tremendous data quantity, the implementation of quick querying, and the efficient updating of the semantic databases.
The main goal of the project is to facilitate the joint exploitation of the measurements of the cars' instant sensing platforms and the offline spatial database content of the newest GIS solutions. We propose a new algorithmic toolkit which allows self-driving cars to obtain relevant GIS information in real time for decision support, and provides opportunities for extending and updating the GIS databases based on the sensor measurements of vehicles in everyday traffic.
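As a minimal illustration of the kind of real-time query such a toolkit could serve, the sketch below retrieves the stored GIS objects within a given radius of the vehicle's current map position from a pre-built spatial index. The object list, coordinates, labels and radius are invented placeholders, not part of the proposed system.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative offline GIS content: object positions (x, y in a metric map frame)
# with semantic labels; in a real system this would come from the 3D city database.
gis_positions = np.array([[10.0, 5.0], [12.5, -3.0], [40.0, 22.0]])
gis_labels = ["traffic_sign", "lamp_post", "tree"]

# Build the spatial index once, offline
index = cKDTree(gis_positions)

def query_nearby_objects(vehicle_xy, radius=20.0):
    """Return the labels and positions of GIS objects within `radius` metres
    of the vehicle's current map position."""
    idx = index.query_ball_point(vehicle_xy, r=radius)
    return [(gis_labels[i], gis_positions[i]) for i in idx]

# Example: objects near a vehicle at map position (11, 0)
print(query_nearby_objects(np.array([11.0, 0.0])))
```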
The integrated 4D (i4D) Preproduction System (i4D-PS) aims to support a complete storyboard designing workflow by enabling one to synthesize and visualize virtual 3D scenes from real and fictional elements. An efficient 3D preview can be obtained from large scale dynamic scenarios, which can be recorded from arbitrary viewpoints, while both the scene elements and the virtual camera configurations can be freely modified by the end-users. The finished i4D Storyboard can be viewed as a clip from which precise technical data can be derived to assist efficient filmmaking.
Subject: Application development for 3D point cloud processing
Background: given 3D point clouds of buildings (recorded with a FARO Focus x330 laser scanner), the goal of the project is to analyse the building facades in an automatic or semi-automatic manner. The analysis includes the recognition and separation of special facade elements (windows, walls, ledges) and the calculation of the surface area of the different facade segments, in order to support insulation planning with pre-defined panels.
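A minimal sketch of one possible first step of this facade analysis, assuming the exported scan is available as a standard point cloud file and that the open-source Open3D and SciPy libraries are used (the filename and thresholds are placeholders): the dominant wall plane is extracted with RANSAC, and the surface area of that segment is estimated from the convex hull of the inlier points projected onto the plane. Separating windows and ledges as point groups deviating from the fitted plane would follow, but is omitted here.

```python
import numpy as np
import open3d as o3d
from scipy.spatial import ConvexHull

# Load the facade scan (path is a placeholder for the exported point cloud)
pcd = o3d.io.read_point_cloud("facade_scan.ply")

# Fit the dominant plane (the main wall) with RANSAC
plane_model, inlier_idx = pcd.segment_plane(
    distance_threshold=0.02,  # 2 cm tolerance around the plane (placeholder)
    ransac_n=3,
    num_iterations=1000,
)
a, b, c, d = plane_model
normal = np.array([a, b, c])
normal /= np.linalg.norm(normal)

# Project the inlier points onto the plane and express them in 2D plane coordinates
points = np.asarray(pcd.points)[inlier_idx]
u = np.cross(normal, [0.0, 0.0, 1.0])
if np.linalg.norm(u) < 1e-6:          # plane is horizontal; pick another reference axis
    u = np.cross(normal, [0.0, 1.0, 0.0])
u /= np.linalg.norm(u)
v = np.cross(normal, u)
coords_2d = np.column_stack([points @ u, points @ v])

# Estimate the wall segment's surface area from the 2D convex hull
area = ConvexHull(coords_2d).volume   # in 2D, `volume` is the enclosed area
print(f"Estimated wall segment area: {area:.2f} m^2")
```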
The integrated4D (i4D) project of MTA SZTAKI was a joint effort of the Distributed Events Analysis Research Laboratory (DEVA) and the Geometric Modelling and Computer Vision Laboratory (GMCV). The main objective of the project was to design and implement a pilot system for the reconstruction and visualisation of complex spatio-temporal scenes by integrating two different types of data: outdoor 4D point cloud sequences recorded by a car-mounted Velodyne HDL-64E LIDAR sensor, and 4D models of moving actors obtained in an indoor 4D Reconstruction Studio. The main purpose of the integration was to measure and represent the visual world at different levels of detail.
OTKA #101598 "Comprehensive Remote Sensing Data Analysis" was a postdoctoral project funded by the Hungarian Scientific Research Fund between Jan. 2012 and Dec. 2014 (36 months).
Outline: Earth observation is a growing field of interest in various application areas, such as monitoring agricultural activity, detection of pollution and environmental crimes, management of urban area expansion, and crisis management, including civil protection and homeland security. However, the evaluation of the remotely sensed data has so far required exhausting human intervention, due to the rich and continuously growing content and the various aspects of assessment. For this reason, the need for automated recognition in remote sensing is raised by both national and international demands.
The work focuses on research towards a generalized framework and procedure library for representing different targets, hierarchic structures and various levels of changes using remotely sensed 2-D images and 3-D (LIDAR, ISAR or DEM) data. The developed methods attempt to collect similar tasks appearing in different application areas and handle them in a joint methodological approach. An important feature of the proposed models will be the separation of the data- and application-dependent elements from the abstract hierarchical structure, which has various levels, such as pixel, region, object, object group and land cover class. Task definitions will originate from specific applications, followed by problem grouping, abstraction and generalization.
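As a rough sketch of how the separation between the abstract hierarchy and the application-dependent components could be expressed, the snippet below lists the levels mentioned above and couples each level with replaceable feature extraction and classification functions. The class and function names are illustrative assumptions, not part of the project's implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any, Callable

class Level(Enum):
    """Abstract hierarchy levels, independent of the concrete application."""
    PIXEL = auto()
    REGION = auto()
    OBJECT = auto()
    OBJECT_GROUP = auto()
    LAND_COVER_CLASS = auto()

@dataclass
class LevelProcessor:
    """Couples one hierarchy level with application-dependent components.

    The extractor and classifier are supplied per application (e.g. optical
    imagery vs. LIDAR data), while the surrounding hierarchy stays unchanged."""
    level: Level
    extract_features: Callable[[Any], Any]
    classify: Callable[[Any], Any]

    def process(self, data):
        return self.classify(self.extract_features(data))
```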
Featured results: Multi-level Object Population Analysis with an Embedded Marked Point Process model, with applications in automatic traffic monitoring and built-up area analysis; a survey on change detection.
We developed automated algorithms which help in monitoring various public premises, including road quality assessment, surveys of road marks and traffic signs, urban green area estimation, and traffic analysis. Our partner with experience in GIS development based on mobile laser scanning was the Department of Road Management of Budapest (Budapest Közút Zrt.), which had recently launched RODIS (ROad Data Information System), providing an integrated solution for mobile and terrestrial LIDAR scanning of large city areas, an efficient 3D geo-database capture system even using CAD solutions for GIS data production, as well as complete GIS data publication and visualization for 3D data.

Geo-Information Computing @ Machine Perception Lab.
GeoComp Demos:
GeoComp Group leader: Dr. Csaba Benedek benedek.csaba@sztaki.hu
i4D project manager: Dr. Zsolt Jankó janko.zsolt@sztaki.hu
Head of MPLab: Prof. Tamás Szirányi
MPLab administration: Anikó Vágvölgyi
Address:
SZTAKI