Scientific Publications by DIGITbrain Project


The following scientific publications have been financed by the project and have been authored by project members:

Systematic manufacturability evaluation using dimensionless metrics and singular value decomposition: a case study for additive manufacturing

Authors: Eric Coatanéa, Hari P. N. Nagarajan, Suraj Panicker, Romaric Prod'hon, Hossein Mokhtarian, Ananda Chakraborti, Henri Paris, Iñigo Flores Ituarte & Karl R. Haapala

The paper discusses the challenges and opportunities of using 3D printing (additive manufacturing) for large-scale industrial production. It introduces a novel method to evaluate whether a design is suitable for 3D printing, considering various factors. The approach relies on mathematical techniques like dimensional analysis and singular value decomposition. This method can help companies make informed decisions about when and how to use 3D printing in their manufacturing processes. It also emphasizes the need for automated tools to support the design and production of parts that are best suited for 3D printing.
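The combination of dimensionless metrics and singular value decomposition can be sketched in a few lines; the metric names and values below are purely illustrative assumptions, not data or formulations from the paper.

```python
import numpy as np

# Hypothetical matrix: rows = candidate designs, columns = dimensionless
# manufacturability metrics (e.g. normalized overhang ratio, wall-thickness
# ratio, support-volume ratio). Values are for illustration only.
metrics = np.array([
    [0.8, 0.2, 0.5],
    [0.4, 0.9, 0.3],
    [0.6, 0.5, 0.7],
])

# SVD factors the metric matrix; each singular value indicates how much of
# the variation across designs its combined "metric direction" explains.
U, s, Vt = np.linalg.svd(metrics, full_matrices=False)

# Relative weight of each singular direction (fractions summing to 1).
weights = s / s.sum()
print(weights)
```

In a manufacturability evaluation of this kind, the dominant singular directions identify which combinations of metrics differentiate the candidate designs most strongly.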


Dynamic Composition and Automated Deployment of Digital Twins for Manufacturing

Authors: James DesLauriers, Tamas Kiss, Jozsef Kovacs

The paper introduces the concept of "Digital Twins" applied in manufacturing. These digital twins are simulations of physical products or systems used to enhance performance and optimize manufacturing processes. The paper discusses the idea of a "Digital Product Brain," which combines both physical and digital dimensions, storing information about real-world events. This allows for quick customization and adaptation of manufacturing processes, supporting the concept of "Manufacturing as a Service" (MaaS). The technical approach involves separating and reusing assets like data, models, and algorithms, creating interconnected microservices that can run on various computing resources, from cloud servers to edge devices. A key challenge is dynamically generating deployment descriptors to manage these assets efficiently.


MiCADO-Edge: Towards an Application-level Orchestrator for the Cloud-to-Edge Computing Continuum

Authors: Amjad Ullah, Huseyin Dagdeviren, Resmi C. Ariyattu, James DesLauriers, Tamas Kiss & James Bowden

The paper discusses the challenges of managing computer applications in cloud computing environments that extend to the edge (closer to data sources). While there are established solutions for cloud-based applications, handling applications across the cloud-to-edge continuum is more complex. The paper introduces "MiCADO-Edge," an extension of an existing cloud orchestration framework, designed to deploy interconnected microservices in such multi-layered environments automatically. It also demonstrates how this system collects monitoring data and enforces user-defined policies for applications, with real-world examples in video processing and secure healthcare data analysis.


Industrial digitalization in the industry 4.0 era: Classification, reuse and authoring of digital models on Digital Twin platforms

Authors: Valentina Zambrano, Johannes Mueller-Roemer, Michael Sandberg, Prasad Talasila, Davide Zanin, Peter Gorm Larsen, Elke Loeschner, Wolfgang Thronicke, Dario Pietraroia, Giuseppe Landolfi, Alessandro Fontana, Manuel Laspalas, Jibinraj Antony, Valerie Poser, Tamas Kiss, Simon Bergweiler, Sebastian Pena Serna, Salvador Izquierdo, Ismael Viejo, Asier Juan, Francisco Serrano, André Stork

The paper examines Digital Twins (DTs) and their role in transforming the manufacturing industry, focusing on the advantages of reusing DTs by distinctly separating their core components. By dissecting DTs into Model, Algorithm, and Data components, DIGITbrain, an integrated digital platform, enables easy reuse and sharing of these vital assets. This separation fosters flexibility and efficiency, allowing various manufacturers to apply DTs across different processes. DIGITbrain's approach promises streamlined Industry 4.0 integration, lowering barriers for small and medium-sized enterprises (SMEs) and promoting innovation in manufacturing.


Abstractions of Abstractions: Metadata to Infrastructure-as-Code

Authors: DesLauriers, J., Kovacs, J. and Kiss, T.

In the context of the DIGITbrain project, this paper presents a novel approach to streamline the deployment of digital twins (DTs) in manufacturing by leveraging metadata-driven Infrastructure as Code (IaC) generation. DTs, representing real-world systems, are composed of microservices called algorithms, which can be individually configured and assembled. MiCADO, an execution engine, orchestrates the deployment, auto-scaling, and runtime management of DTs using DevOps tools like Ansible, Kubernetes, and Terraform. What sets this approach apart is its emphasis on user-friendliness; metadata-driven descriptions of DT components abstract the underlying complexities of DevOps tools and cloud middleware, making it accessible even to non-experts in cloud technology.
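The metadata-to-IaC idea can be illustrated schematically; the metadata fields and the mapping below are simplified assumptions for illustration, not the DIGITbrain schema or MiCADO's actual translation logic.

```python
# Hypothetical metadata describing one Digital Twin "algorithm" microservice.
algorithm_meta = {
    "name": "vibration-analysis",
    "image": "registry.example.org/vibration:2.1",
    "cpu": "500m",
    "memory": "256Mi",
    "port": 8000,
}

def metadata_to_k8s_manifest(meta):
    """Expand high-level metadata into a Kubernetes Deployment manifest,
    hiding the IaC boilerplate from the user who supplies the metadata."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": meta["name"]},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": meta["name"]}},
            "template": {
                "metadata": {"labels": {"app": meta["name"]}},
                "spec": {
                    "containers": [{
                        "name": meta["name"],
                        "image": meta["image"],
                        "ports": [{"containerPort": meta["port"]}],
                        "resources": {"requests": {
                            "cpu": meta["cpu"], "memory": meta["memory"]}},
                    }],
                },
            },
        },
    }

manifest = metadata_to_k8s_manifest(algorithm_meta)
```

The point of the abstraction is that the author of the metadata never sees the generated descriptor: the same few fields could equally be expanded into Terraform or Ansible artefacts by a different back end.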


Toward a reference architecture based science gateway framework with embedded e‐learning support

Authors: Gabriele Pierantoni, Tamas Kiss, Alexander Bolotov, Dimitrios Kagialis, James DesLauriers, Amjad Ullah, Huankai Chen, David Chan You Fee, Hai-Van Dang, Jozsef Kovacs, Anna Belehaki, Themistocles Herekakis, Ioanna Tsagouri, Sandra Gesing

This work presents a novel science gateway framework built upon cloud-based reference architectures with integrated e-learning support. This framework facilitates the creation, publication, selection, and execution of reference architectures, each comprising various components, for diverse scientific applications. Leveraging open-source technologies and demonstrating automated deployment, multi-level autoscaling, and embedded e-learning support, the framework is poised to advance scientific research accessibility and usability. Future directions involve refining and extending the framework's capabilities, including automated composition of reference architectures and learning graph generation.


To Offload or Not? An Analysis of Big Data Offloading Strategies from Edge to Cloud

Authors: Kiss, T., Terstyanszky, G., Arjun, R., Sardesai, S., Goertz, M. D. and Wangenheim, M.

This study explores the dynamic data migration challenge from edge nodes to cloud resources, factoring in variables such as data transfer rates, processing capabilities, task intricacies, and server workloads. It introduces a decision-making model optimized for total task completion time. The analysis unveils the contextual reliance on task complexity, WLAN and WAN speeds, showing that high complexity and fast WAN speeds can lead to substantial efficiency gains through edge-cloud synergy. Additionally, server congestion tilts the preference towards cloud processing. It underscores the pivotal role of real-time parameter accessibility in effective edge-cloud orchestration. Future work involves operationalizing these insights in offloading strategies for edge-to-cloud orchestration solutions, notably within the framework of MiCADO-Edge orchestrator and Big Data analytics.
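A decision model of this kind can be sketched minimally as follows; the cost terms, parameter names, and numbers are illustrative assumptions, not the paper's exact formulation.

```python
def completion_time_edge(data_mb, complexity, edge_speed):
    """Time to process the task entirely on the edge node."""
    return (data_mb * complexity) / edge_speed

def completion_time_cloud(data_mb, complexity, cloud_speed,
                          wan_mbps, queue_delay):
    """WAN transfer time plus cloud processing and server queueing delay."""
    return (data_mb / wan_mbps
            + (data_mb * complexity) / cloud_speed
            + queue_delay)

def should_offload(data_mb, complexity, edge_speed, cloud_speed,
                   wan_mbps, queue_delay):
    """Offload only when the cloud path finishes sooner overall."""
    return (completion_time_cloud(data_mb, complexity, cloud_speed,
                                  wan_mbps, queue_delay)
            < completion_time_edge(data_mb, complexity, edge_speed))

# A high-complexity task over a fast WAN tends to favour the cloud;
# a congested cloud server (large queue_delay) tilts back to the edge.
print(should_offload(100, 50, edge_speed=10, cloud_speed=200,
                     wan_mbps=100, queue_delay=2.0))  # → True
```

This mirrors the contextual dependence the study highlights: the same task flips between edge and cloud as WAN speed, complexity, or server load changes.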


Interoperable Data Analytics Reference Architectures Empowering Digital-Twin-Aided Manufacturing

Authors: Attila Csaba Marosi, Márk Emődi, Ákos Hajnal, Robert Lovas, Tamás Kiss, Valerie Poser, Jibinraj Antony, Simon Bergweiler, Hamed Hamzeh, James Deslauriers, József Kovács

This paper presents a pioneering application of Digital Twins (DTs), Reference Architectures (RAs), and Machine Learning (ML) within the DIGITbrain project, aiming to revolutionize manufacturing in the context of Industry 4.0. By integrating these technologies, the paper demonstrates an innovative approach that simplifies the development and deployment of ML-driven solutions in manufacturing processes. The use of RAs provides a structured framework for designing and orchestrating systems, as exemplified through a real-world use case. This showcases how DIGITbrain leverages RAs to accelerate the adoption of DTs and ML techniques in complex industrial environments.


Fast harmonic tetrahedral mesh optimization

Authors: D. Ströter, J. S. Mueller-Roemer, D. Weber & D. W. Fellner

This article addresses the critical need for mesh optimization to ensure adequate element quality for numerical methods like the finite element method (FEM). It highlights the challenge of long run times associated with sequentially optimizing large meshes, particularly in Delaunay-based methods. The study introduces the concept of harmonic triangulations, demonstrating significantly faster execution times compared to traditional Delaunay methods for tetrahedral meshes. Boundary treatment plays a crucial role in achieving efficiency and high element quality, and the research explores the use of directional derivatives and massively parallel GPUs for mesh optimization. Through parallel flipping, the study achieves impressive speedups of up to 318× for harmonic mesh optimization, ensuring boundary preservation and superior mesh quality while reducing run times.


Toward Reference Architectures: A Cloud-Agnostic Data Analytics Platform Empowering Autonomous Systems

Authors: Ákos Hajnal, Ádám Kisari, Márk Emődi, Attila Farkas, Róbert Lovas

This conference paper introduces a scalable, cloud-agnostic, and fault-tolerant data analytics platform designed for advanced autonomous systems. It emphasizes the use of open-source, reusable building blocks to create a blueprint for processing, enriching, and analyzing structured and unstructured data from IoT-based scenarios. The platform leverages industry best practices, containerization, orchestration tools, and reusable components, making it adaptable for various use cases. It is currently deployed within the National Laboratory for Autonomous Systems in Hungary (ARNL) and will be demonstrated through a specific use case involving data collection from autonomous vehicles.


Investigation of scenarios format delivery in user studies for future technology

Authors: Xia Liu, Setia Hermawati


Edge-Cloud Synergy: Unleashing the Potential of Parallel Processing for Big Data Analytics

Authors: Tamas Kiss, Raghubir Singh

This paper explores the optimal partitioning of tasks within edge-cloud orchestration scenarios, considering a spectrum of task complexities and varying Wide Area Network (WAN) speeds. Notable findings include substantial time savings achieved as WAN speeds increase and the critical role of task complexity in dictating optimal processing strategies. Moreover, the study uncovers the potential of edge-to-multiple clouds for task partitioning, offering remarkable reductions in overall task completion times. Future research directions aim to address computational efficiency concerns, especially for low-complexity tasks, and delve into the integration of edge nodes with cloud resources to further elevate the performance of Big Data analytics.
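Partitioning a task between an edge node and the cloud for parallel processing can be sketched as below; the timing model and parameter values are hypothetical assumptions, not the paper's formulation.

```python
def split_completion_time(f, data_mb, complexity,
                          edge_speed, cloud_speed, wan_mbps):
    """Edge and cloud work in parallel on fractions f and (1 - f);
    the task finishes only when both parts are done."""
    edge_time = (f * data_mb * complexity) / edge_speed
    cloud_time = ((1 - f) * data_mb / wan_mbps
                  + (1 - f) * data_mb * complexity / cloud_speed)
    return max(edge_time, cloud_time)

def best_split(data_mb, complexity, edge_speed, cloud_speed, wan_mbps,
               steps=1000):
    """Grid-search the edge fraction that minimises completion time."""
    candidates = (i / steps for i in range(steps + 1))
    return min(candidates,
               key=lambda f: split_completion_time(
                   f, data_mb, complexity, edge_speed, cloud_speed, wan_mbps))

# With a fast WAN and a much faster cloud, only a small fraction
# of the work stays on the edge.
f = best_split(100, 50, edge_speed=10, cloud_speed=200, wan_mbps=100)
```

The optimum sits where the edge and cloud finish at roughly the same moment, which is why faster WAN links shift more of the task toward the cloud.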


Comparison Between the HUBCAP and DIGITBrain Platforms for Model-Based Design and Evaluation of Digital Twins

Authors: Prasad Talasila, Daniel-Cristian Crăciunean, Bogdan Constantin Pirvu, Peter Gorm Larsen, Constantin Zamfirescu, Alea Scovill

Digital twin technology is an essential approach to managing the lifecycle of industrial products. Among the many approaches used to manage digital twins, co-simulation has proven to be a reliable one. There have been multiple attempts to create collaborative and sustainable platforms for the management of digital twins. This paper compares two such platforms, namely HUBCAP and DIGITbrain. Both platforms have been, and continue to be, used by a stable group of researchers and industrial manufacturers of digital twin technologies. The comparison is illustrated with an example use case: an industrial factory for manufacturing agricultural robots.


Supply chain simulation as a service to increase adaptation capability in manufacturing

Authors: Kiss, T., Terstyanszky, G., Arjun, R., Sardesai, S., Goertz, M. D. and Wangenheim, M.

The paper discusses the response of the CO-VERSATILE project to the challenges faced by the manufacturing industry, particularly SMEs, during the COVID-19 crisis. It introduces the Digital Technopole, a central hub for decentralized services, including manufacturing and supply network simulation, offered through the Digital Agora (emGORA), an external cloud-based platform. The paper presents a case study focused on optimizing the supply network for manufacturing silicone face masks, involving three European SMEs. The solution emphasizes cloud-based services to enhance scalability, accessibility, and cost-efficiency in addressing manufacturing challenges during the pandemic.


Everyday orchestration with Docker on Kubernetes

Authors: DesLauriers, J., Arjun, R., Kovács, J. and Kiss, T.

Container orchestrators like Kubernetes offer powerful capabilities but are complex to deploy and manage. Science Gateways facilitating application deployment are ideal candidates for these technologies, but the technical barriers hinder their adoption. We present two tools, MiCADO and DocKubeADT, addressing these challenges. MiCADO provides vendor-agnostic Kubernetes clusters with auto-scaling and enhanced security. It utilizes Application Description Templates (ADT) based on TOSCA for cluster management. DocKubeADT translates Docker-Compose files into MiCADO-compatible ADTs, simplifying container deployment. It streamlines Kubernetes deployment via a familiar toolchain and is integrated into the DIGITbrain platform for manufacturing SMEs, demonstrating its practicality.
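The kind of translation DocKubeADT performs can be illustrated schematically; the Compose keys below are standard Docker-Compose fields, but the ADT node structure is a simplified, hypothetical rendering of a TOSCA-style template, not DocKubeADT's actual output.

```python
def compose_service_to_adt_node(name, service):
    """Map one Docker-Compose service onto a TOSCA-style node template
    (simplified illustration; not DocKubeADT's real mapping)."""
    node = {
        "type": "tosca.nodes.MiCADO.Container.Application.Docker",
        "properties": {
            "image": service["image"],
        },
    }
    if "ports" in service:
        # "host:container" strings become container-port entries.
        node["properties"]["ports"] = [
            {"port": p.split(":")[-1]} for p in service["ports"]
        ]
    if "environment" in service:
        node["properties"]["environment"] = dict(service["environment"])
    return {name: node}

# A one-service Compose file, as a dict.
compose = {
    "web": {"image": "nginx:1.25", "ports": ["8080:80"]},
}
adt_nodes = {}
for svc_name, svc in compose.items():
    adt_nodes.update(compose_service_to_adt_node(svc_name, svc))
```

The appeal of this approach is that developers keep writing the Compose files they already know, while the orchestrator receives the TOSCA-based description it requires.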


Reference Architecture for IoT Platforms towards Cloud Continuum based on Apache Kafka and Orchestration Methods

Authors: Farkas, Zoltán and Lovas, Róbert

This paper delves into the utilization of Apache Kafka, a widely adopted distributed open-source event streaming platform, within the context of IoT applications, particularly within the Autonomous Systems National Laboratory and various initiatives in Hungary, including the development of cyber-medical systems. The default reference architecture offers the groundwork for establishing a multi-node Kafka cluster on Hungary's research infrastructure, ELKH Cloud. Nevertheless, the limitations in capacity, accessibility, and availability of a single data center deployment can pose challenges. To address this issue, we propose an approach to extend Apache Kafka beyond its initial boundaries. Our solution enables the expansion of the Kafka cluster by incorporating additional nodes within the existing cloud infrastructure, as well as by integrating resources from external cloud providers. In this paper, we outline our innovative method for enhancing the foundational reference architecture, enabling IoT use cases to seamlessly augment the resources of a deployed Apache Kafka cluster, leveraging the capabilities of the Occopus cloud orchestrator, including third-party commercial cloud providers like Microsoft Azure and AWS.
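Extending a Kafka cluster across cloud boundaries hinges on each broker advertising an address that is reachable from outside its own network. A minimal, hypothetical broker configuration for a node added at an external provider might look like this (broker ID and hostnames are placeholders, and the actual DIGITbrain deployments are driven by Occopus rather than hand-written files):

```properties
# server.properties for a broker joining from an external cloud
broker.id=4
listeners=PLAINTEXT://0.0.0.0:9092
# Advertise the publicly reachable address so clients and the brokers
# running in the original data centre can reach this node.
advertised.listeners=PLAINTEXT://broker4.example-cloud.net:9092
# Point at the existing cluster's coordination service.
zookeeper.connect=zk1.internal:2181,zk2.internal:2181,zk3.internal:2181
```

With a template of this shape, an orchestrator only needs to substitute the broker ID and the advertised hostname to grow the cluster onto new resources.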




Interested in joining the project?

Contact our team directly.