3DforXR Project update


The 3DforXR project is an Open Call project funded by the SERMAS EU Research Project. It aims to develop a multimodal software module for the generation of textured 3D mesh models from 2D images or text. During the first three months of the project, the first two modalities were implemented, leading to the 1st release MVP: a REST API that generates a basic, functional 3D model from 2D images.

The first modality supports 3D reconstruction from multiple overlapping 2D images of an object. Two pipelines were adopted: the first implements Structure-from-Motion, Multi-View Stereo and multi-view texture mapping, which are best suited for objects with rich textures, while the second is based on Neural Radiance Fields (NeRF), which can effectively reconstruct challenging texture-less surfaces. Both pipelines are complemented by an automatic background removal process that identifies the main object in the scene for 3D reconstruction. The second modality supports 3D model prediction from a single image, based on a diffusion model that generates novel poses of a single-view object, coupled with a state-of-the-art pre-trained neural surface reconstruction approach.

During the second trimester (M4-M6) these two modalities were improved, leading to refined 3D models. A library of 3D processing tools was also implemented, allowing users to further post-process both the geometry and the appearance of the derived 3D assets.
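The post-processing library itself is not detailed in this update, but one typical geometry operation such a toolbox might offer is Laplacian smoothing, where each vertex is pulled toward the average of its neighbors. A minimal pure-Python sketch, with illustrative names rather than the project's actual API:

```python
from collections import defaultdict

# Laplacian smoothing sketch: every vertex moves toward the average
# of its neighboring vertices, damped by a factor lam in [0, 1].
def laplacian_smooth(vertices, edges, lam=0.5, iterations=1):
    """vertices: list of (x, y, z) tuples; edges: list of (i, j) index pairs."""
    neighbors = defaultdict(set)
    for i, j in edges:
        neighbors[i].add(j)
        neighbors[j].add(i)
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            nbrs = neighbors[i]
            if not nbrs:
                new.append(v[:])  # isolated vertex: leave unchanged
                continue
            avg = [sum(verts[j][k] for j in nbrs) / len(nbrs) for k in range(3)]
            new.append([v[k] + lam * (avg[k] - v[k]) for k in range(3)])
        verts = new
    return verts
```

With `lam=1.0` each vertex jumps all the way to its neighbor average; smaller values and more iterations give gentler smoothing.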


The 2nd release MVP is again a REST API that generates refined 3D assets from multiple or single images, plus a second API to further process the 3D mesh models and their textures. The MVP was tested on object categories suggested by the project User Partners, and the quality of the results is improved compared to those of the 1st release. The main improvement of the 3D reconstruction modality from multiple overlapping 2D images is the elimination of isolated components and sharp edges, together with better texture generation in the Neural Radiance Fields (NeRF) pipeline. Regarding the second modality, 3D model prediction from a single image, several improvements were implemented, the most effective of which is a neural texture refinement pipeline that takes advantage of the input image and seamlessly blends it into the frontal side of the mesh.
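The "elimination of isolated components" mentioned above can be illustrated with a small sketch that keeps only the largest connected group of faces. Here two faces count as connected when they share a vertex; the project's actual criterion and implementation may well differ:

```python
from collections import defaultdict

# Sketch: drop isolated mesh components by keeping only the largest
# connected set of faces. Faces are triples of vertex indices.
def largest_component(faces):
    vert_to_faces = defaultdict(list)
    for fi, face in enumerate(faces):
        for v in face:
            vert_to_faces[v].append(fi)
    seen = set()
    best = []
    for start in range(len(faces)):
        if start in seen:
            continue
        # flood-fill one component of faces via shared vertices
        stack, comp = [start], []
        seen.add(start)
        while stack:
            fi = stack.pop()
            comp.append(fi)
            for v in faces[fi]:
                for nf in vert_to_faces[v]:
                    if nf not in seen:
                        seen.add(nf)
                        stack.append(nf)
        if len(comp) > len(best):
            best = comp
    return [faces[fi] for fi in sorted(best)]
```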
The current solution is at TRL4. Three KPIs were achieved during the first release, regarding the success rate of 3D model generation, the accuracy of the 3D models, and qualitative feedback from users on more than 80 reconstructed 3D models. Two KPIs were achieved during the second release, regarding the geometric accuracy and texture quality of the 3D models. To showcase the improvement of the results and further assist the user evaluation procedure, an interactive webpage was developed that displays a side-by-side comparison of the 3D models generated by the two releases.
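As a rough illustration of how a client might talk to an image-to-3D REST API like the one described above, the following sketch builds a JSON POST request with Python's standard library. The endpoint URL and payload fields are purely hypothetical, not the project's actual interface:

```python
import json
import urllib.request

# Hypothetical client sketch for an image-to-3D REST API.
# Endpoint and payload field names are assumptions for illustration.
def build_request(image_urls, endpoint="https://example.org/api/reconstruct"):
    payload = json.dumps({"images": image_urls, "format": "obj"}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```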

XR and Lots of Data for Better Situation Awareness: H2020 Innovation Action XR4DRAMA Successfully Completes Its First Project Year



Kicked off in November 2020, the EU-funded XR4DRAMA project recently wrapped up its first year – with a number of impressive technical achievements successfully implemented by a Pan-European consortium.

XR4DRAMA – which stands for “Extended Reality For Disaster Management And Media Planning” – aims to build digital situation awareness (SA) tools for basically any organization that sends staff to unfamiliar, unsteady, or unsafe locations.

The idea is to improve SA by exploiting multi-modal data and XR technology. In concrete terms, this means:

  • collecting geospatial data, all kinds of web content, sensor readings, cultural context etc.
  • structuring, interpreting, and ingesting it into a single platform
  • creating a “virtual twin” of a preselected environment
  • making it accessible and maintainable via user-friendly interfaces that potentially support everything from light-weight mobile devices to complex VR setups.

XR4DRAMA partners

Seven partners have taken on the “better SA via XR” challenge. They all play different roles – and complement each other:

The Autorità di bacino Distrettuale delle Alpi Orientali (Alto Adriatico Water Authority, AAWA) is an Italian public body with deep expertise in flood defense, hydrogeological modelling, disaster risk mapping and related fields. In XR4DRAMA, AAWA is mainly responsible for the disaster management pilots. The Information Technologies Institute (ITI) at the Centre for Research and Technology Hellas (CERTH) is XR4DRAMA’s lead coordinator and also in charge of technical implementation. DW Innovation (a special unit of international broadcaster Deutsche Welle) takes care of scenario development and user requirements (with a focus on a media planning pilot) and also handles external communications/dissemination. Another partner is German software developer Nurogames – an expert in AR, VR, 3D animation and interfaces. Smartex, also hailing from Italy, creates smart garments, e.g. textiles that can sense the vital signs of a first responder. The Natural Language Processing Group at the Universitat Pompeu Fabra (UPF-TALN) offers its expertise in natural language processing (NLP): disaster managers and media workers can profit from quickly processed natural language info concerning their deployment area – and maybe their stress levels can even be detected via their speech? Last, but not least, Greek SME up2metric supports XR4DRAMA by providing the latest in computer vision, i.e. technology that enables cameras and other gadgets to see and understand what is going on in a designated location.

Technical achievements so far

Over the course of XR4DRAMA’s first 12 months, the consortium has delivered a number of major technical achievements. Our company was significantly involved in the following research and engineering work:

Backend tools/modules

  • recording of drone footage (thousands of images and videos for the media production use case captured on Corfu, Greece)

  • establishment of a satellite service able to exploit satellite data from Sentinel Hub (true color, all bands, DEM) for specific areas and time spans; the data is turned into a rough 3D terrain model which can be used in the VR or geo services applications

  • finalization of a (constantly running) 3D reconstruction service that is able to
    ○ receive multiple requests (images, videos)
    ○ generate a 3D model
    ○ simplify the produced mesh
    ○ provide the result to XR4DRAMA users

  • completion of the geo service modules (initial version, incl. geoserver and GIS); data from OSM, opengov.gr and AAWA are aggregated and organized via categories and subcategories that address the user requirements; existing data can be updated/extended (also via user frontend tools)
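The DEM-to-terrain step in the satellite service bullet above boils down to triangulating a regular elevation grid: each grid cell becomes two triangles. A minimal sketch, assuming heights arrive as a row-major list of rows (the actual service's data handling is of course more involved):

```python
# Sketch: turn a regular elevation grid into a triangle mesh.
# heights[y][x] is the elevation at grid cell (x, y); each cell of
# the grid is split into two triangles.
def dem_to_mesh(heights, cell_size=1.0):
    rows, cols = len(heights), len(heights[0])
    vertices = [(x * cell_size, y * cell_size, heights[y][x])
                for y in range(rows) for x in range(cols)]
    faces = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x                     # top-left corner of the cell
            faces.append((i, i + 1, i + cols))
            faces.append((i + 1, i + cols + 1, i + cols))
    return vertices, faces
```

An R-row, C-column grid yields R*C vertices and 2*(R-1)*(C-1) triangles.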
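The mesh simplification step of the reconstruction service could, for instance, use vertex clustering: snap vertices to a coarse grid, merge those that fall in the same cell, and drop faces that collapse. This is a sketch of one common decimation approach, not necessarily the method the service actually uses:

```python
# Sketch of mesh simplification via vertex clustering: vertices in
# the same grid cell are merged, and faces left with fewer than
# three distinct vertices are discarded as degenerate.
def simplify(vertices, faces, cell=1.0):
    cell_of = {}        # grid cell key -> new vertex index
    remap = []          # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (int(x // cell), int(y // cell), int(z // cell))
        if key not in cell_of:
            cell_of[key] = len(new_vertices)
            new_vertices.append((x, y, z))
        remap.append(cell_of[key])
    new_faces = []
    for a, b, c in faces:
        fa, fb, fc = remap[a], remap[b], remap[c]
        if len({fa, fb, fc}) == 3:      # keep only non-degenerate faces
            new_faces.append((fa, fb, fc))
    return new_vertices, new_faces
```

A larger `cell` merges more aggressively and produces a coarser mesh.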

User/frontend tools

  • development of an AR app (1st version)
    ○ improves situation awareness of first responders, location scouts, and other users in the field by providing relevant platform data (e.g. geospatial information)
    ○ allows for bilateral communication: both users in the field and control room staff can send updates
    ○ offers the following first set of user features: add/edit POIs, upload multimedia files, add comments, manage project, manage user profile
    ○ offers 2D maps and an initial AR view (e.g. display of POIs, POI info, and navigation on top of real world view)
    ○ exploits mobile phone sensors (GNSS, IMU, compass) to help users in the field navigate and make sure control room staff can locate them
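Locating a field user from a GNSS fix is, at its core, a great-circle computation: the standard haversine formula gives the distance between two latitude/longitude fixes. A sketch of that formula (not the app's actual code):

```python
import math

# Haversine formula: great-circle distance between two GNSS fixes
# given as (latitude, longitude) in decimal degrees.
def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))
```

One degree of longitude on the equator comes out at roughly 111.2 km, which matches the Earth's circumference divided by 360.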

Outlook on 2022

Upcoming project months will see a lot of system integration, fine-tuning, and – of course – user testing. Laptops, headsets, and smart devices have already been procured and set up to thoroughly test the prototype applications. XR4DRAMA is officially scheduled to run until the end of 2022, with a possible extension into 2023, as the COVID-19 pandemic has taken a toll on the proper execution of some of the project’s pilot use cases.

Further information on XR4DRAMA

XR4DRAMA official website: xr4drama.eu
XR4DRAMA on Twitter: twitter.com/xr4drama
XR4DRAMA on LinkedIn: linkedin.com/company/xr4drama
