The DIGITbrain Solution, into which the Digital Brain will be integrated, will provide a standards-based ecosystem on which digital twin models can be deployed, executed, and exploited. The integration of simulation tools, big data analytics, and artificial intelligence mechanisms will boost the operation and performance of the Digital Brain, bringing an innovative solution for modelling digital twins rapidly and efficiently. The following table lists the tools and apps that are being integrated into the DIGITbrain Solution by DIGITbrain partners. Most of the tools, though not all, are open source and their code is available in open source code repositories; find the links below. The tools and the required know-how will be available to both new and existing members of the DIGITbrain consortium.
Do you want to get familiar with the DIGITbrain platform? Whether you want to publish a Microservice, create an Algorithm, or describe some Data, on GitHub you can find a detailed listing of the latest specification for each of the DIGITbrain assets and the prerequisites needed to get started.
Learn more by exploring the DIGITbrain Solution on GitHub:
MinIO is a high-performance object store that provides an Amazon S3-compatible interface. MinIO runs on Kubernetes and also provides a graphical user interface. MinIO supports authentication (username-password) and network traffic encryption (TLS/SSL), so it can be considered a secure storage.
MinIO (Community edition) is licensed under GNU AGPL v3.
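As a sketch of how such a store might be deployed, the following Docker Compose fragment starts a single MinIO node; the credentials, ports and volume path are illustrative, and TLS would still need to be configured for encrypted traffic.

```yaml
version: "3"
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: example-user        # illustrative credentials
      MINIO_ROOT_PASSWORD: example-secret
    ports:
      - "9000:9000"   # S3-compatible API
      - "9001:9001"   # web console
    volumes:
      - ./minio-data:/data                 # local persistence
```

Any S3 client pointed at port 9000 with these credentials could then create buckets and upload objects.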
Hadoop Distributed File System (HDFS)
The Hadoop Distributed File System (HDFS) is a distributed, scalable, replicated, high-performance file store. HDFS is often used in big data processing applications, for example by Hadoop jobs and Spark applications. HDFS is designed to work in closed clusters; there are options to secure HDFS access when it is reached remotely, though these are not addressed by the descriptors.
Hadoop HDFS is licensed under Apache 2.0.
For further description and details also refer to GitHub.
Stream resources, message broker
Apache Kafka is an open source, high-performance message brokering and stream-processing framework. Kafka is highly scalable and replicated, with APIs for many programming languages. The Kafka Connect API allows importing data from and exporting data to other systems. Messages are organised into topics, to which producers write and from which consumers read in a publish-subscribe manner.
Kafka is composed of more than one container. A ZooKeeper node serves as persistency provider, and one or more Kafka brokers serve and accept messages. In the simplest configuration there is one ZooKeeper node and one Kafka broker. We also provide a one-ZooKeeper, five-Kafka-broker setup to illustrate how a larger cluster can be composed (to deliver better performance at a higher load thanks to parallel processing). Apache Kafka supports various authentication and network traffic encryption options, which are not yet addressed by the presented containers.
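The simplest configuration described above can be sketched as a Docker Compose file; the image names and environment variables below follow the Bitnami container images and are illustrative, with plaintext listeners (no TLS or authentication), matching the note above.

```yaml
version: "3"
services:
  zookeeper:
    image: bitnami/zookeeper
    environment:
      ALLOW_ANONYMOUS_LOGIN: "yes"         # no authentication in this sketch
  kafka:
    image: bitnami/kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
      ALLOW_PLAINTEXT_LISTENER: "yes"      # no TLS in this sketch
    ports:
      - "9092:9092"                        # broker port for clients
```

A larger cluster is obtained by replicating the `kafka` service with distinct broker IDs and ports.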
Stream resources, message broker
Eclipse Mosquitto is a popular, lightweight MQTT message broker implementation. It supports MQTT protocol versions 5.0, 3.1.1 and 3.1. Client APIs are available for various programming languages. Brokers and clients in MQTT also follow the publish-subscribe model. Mosquitto supports authentication (password) and network traffic encryption (TLS/SSL), so it can be considered a secure broker.
Stream resources, message broker
RabbitMQ is one of the most popular lightweight, open-source message brokers. RabbitMQ client libraries are available in many programming languages. RabbitMQ supports authentication (username-password) and network traffic encryption (TLS/SSL), so it can be considered a secure broker.
MySQL is one of the most popular relational database management systems (RDBMS). Data are organised into tables; tables have columns, and entries are rows. Relationships between data can be expressed using foreign keys. Client libraries are available for many programming languages.
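The table-and-foreign-key model described above can be illustrated with a minimal sketch; SQLite (from Python's standard library) is used here purely for illustration, since the relational concepts and the SQL are essentially the same in MySQL. The table and column names are hypothetical.

```python
import sqlite3

# In-memory database for illustration; MySQL syntax is very similar.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce foreign-key constraints

# One table per entity: each row is an entry, each column an attribute.
conn.execute("CREATE TABLE machines (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE sensors (
    id INTEGER PRIMARY KEY,
    machine_id INTEGER NOT NULL REFERENCES machines(id),  -- foreign key
    kind TEXT)""")

conn.execute("INSERT INTO machines VALUES (1, 'press')")
conn.execute("INSERT INTO sensors VALUES (10, 1, 'temperature')")

# Join the two tables through the foreign-key relationship.
row = conn.execute("""SELECT m.name, s.kind FROM sensors s
                      JOIN machines m ON s.machine_id = m.id""").fetchone()
print(row)  # ('press', 'temperature')
```

The foreign key guarantees that every sensor row refers to an existing machine row, which is what "relationships expressed using foreign keys" means in practice.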
MongoDB is a very popular document-oriented NoSQL database. It scales well, is replicated, and supports indexing and advanced queries. MongoDB supports authentication (password) and network traffic encryption (TLS/SSL), so it can be considered a secure storage. MongoDB is licensed under SSPLv1.
Redis is an in-memory, key-value NoSQL database that can also run distributed and, thanks to its outstanding speed, serve as a message broker; durability, persistence, replication and clustering can also be configured. It supports abstract data structures (strings, lists, maps, sets, …). APIs are available for most programming languages. Redis supports authentication (password) and network traffic encryption (TLS/SSL), so it can be considered a secure storage.
InfluxDB is a fast, highly available, scalable time series database (TSDB), providing an SQL-like language with several built-in time-centric functions. InfluxDB supports authentication (password) and network traffic encryption (TLS/SSL), so it can be considered a secure storage.
PostgreSQL (Postgres) is an (object-)relational database management system ((O)RDBMS), which extends the relational model and the SQL query language (PL/pgSQL). PostgreSQL supports replication, indexing, transactions, triggers and many extensions over standard SQL: object inheritance, a wide variety of data types, XML and XPath queries, and so on. PostgreSQL supports authentication (username-password) and network traffic encryption (TLS/SSL), so it can be considered a secure storage.
This adapter connects an MQTT server and a Kafka server using the tool Telegraf. Given an MQTT server (identified by its IP address, port number and topic name) and a Kafka server (given by another IP address, port and topic name), Telegraf automatically forwards all data arriving at the MQTT server to the specified topic on the Kafka server. The user only has to configure Telegraf with the parameters of the MQTT and Kafka servers and launch this component. There is no restriction on where the MQTT server or the Kafka server resides. Note the difference from the MQTT-Kafka bridge: in that case a Kafka server is included as well.
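The configuration the user has to supply can be sketched as a Telegraf TOML fragment using its MQTT consumer input and Kafka output plugins; the addresses and topic names below are illustrative placeholders.

```toml
# Telegraf pipeline: consume from MQTT, produce to Kafka.
[[inputs.mqtt_consumer]]
  servers = ["tcp://203.0.113.10:1883"]   # MQTT broker (illustrative address)
  topics = ["sensors/temperature"]        # MQTT topic to subscribe to
  data_format = "value"
  data_type = "float"

[[outputs.kafka]]
  brokers = ["203.0.113.20:9092"]         # Kafka broker (illustrative address)
  topic = "sensors"                       # Kafka topic to publish to
```

With this file in place, launching Telegraf is enough for every message arriving on the MQTT topic to flow into the Kafka topic.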
This adapter can also be called a reverse proxy and is based on the widely used Nginx implementation. The goal of this adapter is to provide a secure HTTPS interface on one side (shown to the clients), whereas on the other side (the backend) simple HTTP traffic can be used. In other words, HTTPS termination happens at the reverse proxy. With a reverse proxy we can easily open a port to the outside world that is secured with the SSL/TLS protocol, with a server certificate (and potentially client authentication) of our choice. In this way, other components behind the proxy do not have to care about data encryption or authentication, assuming that the backend nodes reside on a protected private network.
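A minimal Nginx server block for this pattern might look as follows; the hostname, certificate paths and backend address are illustrative assumptions.

```nginx
# HTTPS terminates here; traffic towards the backend is plain HTTP.
server {
    listen 443 ssl;
    server_name example.org;                     # illustrative hostname

    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    location / {
        proxy_pass http://10.0.0.5:8080;         # backend on the private network
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Clients only ever see the TLS-secured port 443; the backend on 10.0.0.5 serves unencrypted HTTP inside the private network.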
Rclone is a tool capable of connecting to a large number of file-based storage providers (more than 50 at the time of writing) and can even mount them, so that remote files/blobs become (virtually) available in the local file system. (Rclone automatically proxies reads and writes on such a virtual drive to the actual remote storage.)
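A remote is declared in Rclone's configuration file; the fragment below sketches an S3-compatible remote (here pointed at a MinIO endpoint), with the name, endpoint and credentials as illustrative placeholders.

```ini
# ~/.config/rclone/rclone.conf -- an illustrative S3-compatible remote
[myremote]
type = s3
provider = Minio
endpoint = http://203.0.113.30:9000
access_key_id = example-user
secret_access_key = example-secret
```

Running `rclone mount myremote:mybucket /mnt/data` would then expose the bucket's objects as files under `/mnt/data`.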
For description and further details refer to GitHub.
This setup is based on a Mosquitto server installation; in this case, however, Mosquitto is started in “bridge mode” with extra configuration pointing to another MQTT server from which data is automatically pulled into the current MQTT server. The reason for such a setup is to delegate the connection, authentication and encryption tasks to the link between the data source and the bridge, so that the data-processing Microservice of the Algorithm can read data from this bridge without authentication or encryption.
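A mosquitto.conf fragment for such a bridge might look like the following; the remote address, topic pattern, certificate path and credentials are illustrative assumptions.

```conf
# Local listener: the Algorithm's Microservice connects here without TLS/auth.
listener 1883
allow_anonymous true

# Bridge: pull data from the remote (possibly secured) MQTT broker.
connection source-bridge
address 203.0.113.40:8883                  # illustrative remote broker
topic sensors/# in 0                       # subscribe remotely, republish locally
bridge_cafile /mosquitto/config/ca.crt     # TLS towards the remote side
remote_username example-user
remote_password example-secret
```

Authentication and encryption are thus confined to the bridge's outbound link, while local consumers read from port 1883 in the clear.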
Keyrock is the FIWARE component responsible for Identity Management. FIWARE Keyrock enables OAuth2-based authentication and authorisation security. The associated dashboard provides an interface to create, organise and distribute applications, organisations, roles and permissions. In combination with FIWARE’s PEP proxy Wilma, a mechanism can be set up that ensures that only users with the correct access rights can use the data served by the Orion Context Broker.
For further description and details refer to the FIWARE Keyrock documentation.
Wilma PEP Proxy
Thanks to this component, used in combination with the identity management and authorisation Generic Enablers as a Policy Enforcement Point, authentication and authorisation security is added to backend applications, such as the context broker and the IoT agents.
Orion context broker
Orion Context Broker is the core component of the FIWARE architecture. It enables managing context information in a highly decentralised and large-scale manner. It provides the FIWARE NGSI-LD and FIWARE NGSIv2 APIs, which are simple yet powerful RESTful APIs for performing updates, running queries, or subscribing to changes on context information.
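Context information in NGSIv2 is exchanged as JSON entities; the following is an illustrative entity (its id, type and attributes are hypothetical) of the kind that would be POSTed to the broker's `/v2/entities` endpoint to create it.

```json
{
  "id": "urn:ngsi-ld:Machine:001",
  "type": "Machine",
  "temperature": { "value": 71.5, "type": "Number" },
  "status": { "value": "running", "type": "Text" }
}
```

A GET on `/v2/entities/urn:ngsi-ld:Machine:001` would then return the current context, and a subscription can notify consumers whenever an attribute such as `temperature` changes.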
JSON IoT Agent
The FIWARE Foundation provides different IoT Agents. IoT Agents act as translators between the protocols that devices use to send or receive information and a common language and data model across the entire platform, FIWARE NGSI. FIWARE NGSI is the API exported by a FIWARE Context Broker, used for the integration of platform components within a “Powered by FIWARE” platform and by applications to update or consume context information. For this project the JSON IoT Agent has been selected, since it handles a widely applied format. The IoT Agent can receive data using both the HTTP and MQTT protocols. The MQTT protocol provides a lightweight method of carrying out messaging using a publish/subscribe model. This makes it suitable for IoT messaging, such as with low-power sensors or mobile devices, e.g. phones, embedded computers or microcontrollers.
CYGNUS data persistence connector
Cygnus is a persistence connector that allows saving context data into third-party storage. Cygnus can subscribe to changes in the Orion Context Broker and send data to different components such as MongoDB, MySQL, DynamoDB, Kafka and many others.
Machine Learning Framework
TensorFlow is an open-source framework for numerical computing based on data flow graphs. The framework comes with a comprehensive ecosystem of libraries and great community support (about 3,000 contributors), allowing researchers to easily develop state-of-the-art ML solutions. TensorFlow was created and is maintained by Google Brain, and it is released under the Apache 2.0 open-source license. It is written in C++ and Python and has non-guaranteed API compatibility with other languages, such as Java, Go and R. TensorFlow can run on single Central Processing Unit (CPU) systems, GPUs, Tensor Processing Units (TPUs) and mobile devices. The simplified framework, called TensorFlow Lite, is specially designed to run on mobile devices, such as those on the Android platform.
Machine Learning Framework
PyTorch is a Python library specially designed for GPU-accelerated ML and DL applications. It has been developed by Facebook's research group and is written in Python, C++ and CUDA. PyTorch supports tensor computations that require high GPU acceleration. The library is freely available under the BSD license and supports the Open Neural Network Exchange (ONNX) format, which allows transferring models between different frameworks, such as CNTK, Caffe2, MXNet, etc.
For further description and details refer to GitHub.
Machine Learning Framework
Scikit-Learn is a popular ML Python library with the highest number of contributors in the open-source community (1,977 contributors on GitHub in April 2021). The Scikit-Learn package provides various functions to perform classification, regression, dimensionality reduction, clustering, and data pre-processing. The package is distributed under the open-source Berkeley Software Distribution (BSD) license and developed in Python; some of its functions also have C++ support.
Machine Learning Framework
Keras is not directly an ML framework but rather a Python wrapper library that binds to other ML frameworks, such as TensorFlow, CNTK, etc. Currently, Keras is developed alongside TensorFlow (TensorFlow 2 supports Keras integration), and it is available as open source under the MIT license.
Integrated Development Environment
RStudio is an IDE for R and Python, with a console, a web-based graphical interface, a syntax-highlighting editor that supports direct code execution, and tools for plotting, debugging and workspace management. The RStudio Reference Architecture provides a general R developer environment with integrated data management and tools that can help the development process. In this stack, the RStudio manifest is integrated with the data management part.
Integrated Development Environment
Project Jupyter aims to develop open-source software, open-standards, and services for interactive computing across dozens of programming languages. Project Jupyter develops and supports the Jupyter Notebook, JupyterHub and JupyterLab products.
MiCADO is an application-level cloud orchestration and autoscaling framework (from the COLA project) that enables the automated deployment and run-time orchestration of applications in heterogeneous cloud infrastructures. It makes Kubernetes accessible to anyone around the world by combining several DevOps tools to simplify manageability and enable one-click deployments.
Monitoring system and time series database
Prometheus is an open-source system monitoring and alerting toolkit. It collects and stores its metrics as time-series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.
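The labels mentioned above are attached in Prometheus's scrape configuration; the fragment below is an illustrative prometheus.yml with hypothetical target addresses.

```yaml
# Scrape configuration: poll two (illustrative) targets every 15 seconds.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["10.0.0.5:9100", "10.0.0.6:9100"]
        labels:
          env: "production"    # stored as a key-value label on every sample
```

Each scraped metric is then stored as a time series keyed by its name plus labels, e.g. `node_cpu_seconds_total{env="production"}`.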
Occopus is a software framework that supports the building and configuration, or orchestration, of distributed applications built for the cloud. Interoperating components brought to life on virtual machines form a virtual infrastructure that can run on one or more cloud systems. It is part of the MiCADO framework.
Terraform is an open-source and cloud-agnostic orchestration tool with salient features. It allows infrastructure to be expressed as code in a simple, human-readable language called HCL (HashiCorp Configuration Language). It reads configuration files and provides an execution plan of changes, which can be reviewed for safety and then applied and provisioned.
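As a sketch of HCL's declarative style, the fragment below describes a single virtual machine using the OpenStack provider; the resource attributes and names are illustrative assumptions.

```hcl
# Declarative description of one (illustrative) virtual machine.
resource "openstack_compute_instance_v2" "worker" {
  name        = "digitbrain-worker"
  image_name  = "ubuntu-20.04"
  flavor_name = "m1.medium"
}
```

`terraform plan` would show the execution plan implied by this file, and `terraform apply` would provision the instance after review.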
For further description and details refer to Terraform.
IT automation platform
Ansible Vault encrypts variables and files, so you can protect sensitive content such as passwords or keys rather than leaving it visible as plaintext in playbooks or roles.
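A typical usage pattern is to keep secrets in a vaulted variables file that a playbook references; the playbook, file and variable names below are illustrative.

```yaml
# site.yml -- the playbook references an encrypted variables file
- hosts: all
  vars_files:
    - secrets.yml        # encrypted beforehand with: ansible-vault encrypt secrets.yml
  tasks:
    - name: Use a secret without exposing it in the playbook
      ansible.builtin.debug:
        msg: "Connecting as {{ db_user }}"   # db_user is defined in secrets.yml
```

Running `ansible-playbook site.yml --ask-vault-pass` decrypts `secrets.yml` at run time, so no plaintext credential ever appears in the playbook or role files.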
For further description and details refer to Ansible.
CloudiFacturing Digital Agora
CloudiFacturing Digital Agora is an independent dynamic web application with its own business logic, data management, and API. It serves as the default entry point to the CloudiFacturing Solution (and its Platform), enables the execution of cloud-based software available within execution engines, and fosters the development of the community around Information and Communication Technologies (ICT) for the manufacturing industry. The DIGITbrain Digital Agora will extend the CloudiFacturing Digital Agora.
For further description and details refer to emGORA.eu.
It is a digital platform targeting manufacturing SMEs that includes (among others) data and user management, a repository, and a workflow and application mediator. It is a meta-platform enabling standard communication with execution engines. DIGITbrain will extend the CloudiFacturing Platform with co-simulation and FIWARE-based communication.
For further description and details refer to URL:
INtegrated TOol chain for model-based design of CPSs repositories
It is a web-based front end used for configuring and invoking co-simulation scenarios. An initial version of this is available in a cloud context.
Application Library CAELIA
CAELIA is a library for building intelligent applications, designed to shorten the time needed to build a productive Digital Twin of a system and to enable its efficient lab or shop-floor deployment as a cyber-physical system.
Maestro provides the ability to easily launch, orchestrate and manage multiple Docker containers as a single unit. It is the FMI v2.0-based co-simulation orchestration engine from the INTO-CPS project, which is able to combine independent simulation units located at different computing nodes.
It is a suite of applications (DDD Model Editor, DDDSimulator, DDDMachine and DDDSupervisor) addressing modular 3D kinematics and discrete events for Digital Twin development (design and execution) at machine and factory level, based on digital-real synchronisation and data-continuity exploitation.
RISTRA is a GPU-accelerated high-performance solver for structural analysis. In DIGITbrain, it will be extended towards massively parallel edge computing, inverse simulation, and optimisation.
ConSenses edge devices
IIoT solutions and services with high robustness and reliability in the edge, dealing with sensors, data acquisition, pre-processing, and connectivity.
AI Digital Twin Orchestrator
The artificial intelligence enhanced Digital Twin Orchestrator combines data streams, information and knowledge bases, and digital artefacts from all design phases to dynamically reconfigure the Digital Twin.
Framework of open source platform components
It is a curated framework of open source platform components, which can be assembled together with other third-party platform components to build Smart Solutions. An API (FIWARE NGSI) enables the integration of components and provides the basis for the interoperability and replication (portability) of smart solutions based on a universal set of standards for context data management. The FIWARE Context Broker component is the core component of any “Powered by FIWARE” platform; it enables the system to perform updates and access to the current state of context.
For further description and details refer to FIWARE.org.
Cloud Compute gives you the ability to deploy and scale virtual machines on-demand. It offers guaranteed computational resources in a secure and isolated environment with standard API access, without the overhead of managing physical servers.
Cloud Compute offers the possibility to select pre-configured virtual appliances (e.g. CPU, memory, disk, operating system or software) from a catalogue replicated across all EGI cloud providers.
Read more at: https://www.egi.eu/service/cloud-compute
EGI Data Hub
DataHub allows simple and scalable access to distributed data for computation, and lets you publish a dataset and make it available to a specific community, or worldwide, across federated sites.
Read more at: https://www.egi.eu/service/datahub