In this post, we share high-level insights and observations on how compute resources distributed across MS Azure, AWS, Google Cloud, and on-premise infrastructure perform in multi-domain data operations.
Logistics and defense organizations around the world are launching multi-domain data management efforts to expedite data flow across organizational units and assets. Cloud Ararat engineers have been building integrated, multi-domain connected intelligence solutions using a hybrid combination of public cloud providers, in-house resources, and clients' on-premise data centers. The recipe for an optimal cloud architecture depends on each client's ultimate goals and constraints. Over the years, we have built our signature solution, Connected Ops, a distributed cloud data management solution for multi-domain operations. Our mission is to simplify multi-domain operations, resource management, and data management with software and cloud computing technologies, so that everyday users can work with world-class technology effortlessly. This post contains high-level information and insights; please contact us to discuss your specific use case.
Multi-domain operations are highly dynamic and consist of diverse platforms: data centers, crewed and uncrewed ground vehicles, aircraft, satellites, IoT sensors, handheld electronics, vessels, and personnel. These assets carry broad and non-uniform data sources, payloads, compute resources, and capabilities: processors, memory, storage, connection protocols, operating systems, software applications, and workloads. One of the primary challenges of a multi-domain network is architecting a reliable data flow mesh across systems operating with such non-uniform resources.
The second primary challenge is triggered by moving assets. As assets move geographically in three dimensions (six degrees of freedom), their connectivity capabilities change drastically. In multi-domain operations with moving assets, the predefined mesh network topology and connection nodes do not perform as designed, due to ever-changing conditions and the volume of data generated by the assets and their sensors and payloads. Performance becomes sluggish, and sensitive, important data is delayed or sometimes not transmitted at all. Defense and global logistics organizations in particular face tremendous challenges in producing actionable intelligence when systems of systems try to work together across domains. With these challenges in mind, such systems are often designed as closed-loop groups of assets managed by ground stations or operators.
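To make the re-routing problem concrete, here is a minimal sketch (not Connected Ops' actual algorithm) of how a mesh route can be recomputed as link conditions change. It models the network as a graph of link costs that are updated when an asset moves, and uses Dijkstra's algorithm to find the currently cheapest path; node names such as "uav", "ground", "sat", and "dc" are illustrative assumptions:

```python
import heapq

def best_route(links, src, dst):
    """Cheapest path over current link costs (Dijkstra's algorithm).
    links: {node: {neighbor: cost}} where cost reflects current
    latency/bandwidth conditions and is updated as assets move."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for nbr, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    if dst != src and dst not in prev:
        return None  # destination currently unreachable
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

# Hypothetical link costs before and after a UAV drifts out of
# ground-station range: the best route shifts to a satellite relay.
links = {
    "uav": {"ground": 1.0, "sat": 4.0},
    "ground": {"dc": 1.0},
    "sat": {"dc": 2.0},
}
print(best_route(links, "uav", "dc"))  # ['uav', 'ground', 'dc']
links["uav"]["ground"] = 50.0          # link degrades as the UAV moves
print(best_route(links, "uav", "dc"))  # ['uav', 'sat', 'dc']
```

The key point is that the topology is an input that must be refreshed continuously, not a design-time constant.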
First of all, Connected Ops data centers are distributed across a multi-cloud architecture: private data centers, AWS, Azure, Google Cloud, on-premise infrastructure, and mobile data centers. Our clients pick and choose based on their preferences, budget, and goals. Every resource in the Connected Ops ecosystem is monitored in real time, whether it is a data center or a UGV onboard computer. The images below show examples of how various compute resources perform on the same data workloads over a one-minute period. These measurements are collected for each distinct compute thread, compute process, and data flow across AWS, Google Cloud, MS Azure, a UGV/robot mini computer, and an on-premise computer. In this post, we reference only some of the measurements to keep it brief.
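As a simplified illustration of the kind of per-process measurement such a monitor aggregates (a minimal sketch, not the Connected Ops instrumentation itself), the snippet below compares wall-clock time against CPU time for a workload, distinguishing a CPU-bound task from one that mostly waits; the `busy` and `idle` workloads are illustrative stand-ins:

```python
import time

def measure(workload):
    """Run a workload and report wall time, CPU time, and CPU
    utilization for this process over the measured interval."""
    t0_wall = time.perf_counter()
    t0_cpu = time.process_time()
    workload()
    wall = time.perf_counter() - t0_wall
    cpu = time.process_time() - t0_cpu
    return {"wall_s": wall, "cpu_s": cpu,
            "utilization": cpu / wall if wall else 0.0}

def busy():
    # CPU-bound stand-in for a data-processing workload
    sum(i * i for i in range(2_000_000))

def idle():
    # I/O-bound stand-in: mostly waiting, not computing
    time.sleep(0.2)

print(measure(busy))   # high utilization
print(measure(idle))   # near-zero utilization
```

Sampling this continuously per thread, per process, and per node is what lets apples-to-apples comparisons be made across clouds and onboard computers.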
We used the following Connected Ops modules:
1) Core Data Center (CO-Core) module: manages distributed data centers and API platforms.
2) Command & Control Distribution Center (CO-CCDC) module: manages data distribution and delivery from central distributed data centers to and between edge nodes, field assets, and personnel.
3) Connected Ops Asset (CO-Asset) module: embedded and integrated on all preferred assets, including but not limited to crewed/uncrewed aerial vehicles, ground vehicles, satellites, IoT sensors, cameras, and handheld devices.
4) Connected Ops AI (CO-AI) module: manages internal and external artificial intelligence / machine learning and data processing algorithms, workflows, and infrastructure.
5) Connected Ops Mobile (CO-M) module: manages mobile applications, tablets, and AR/VR modules used by personnel and field operators.
In this test scenario, our teams:
When identifying the optimal resources in a multi-cloud, multi-domain architecture, we need to decide on tradeoffs based on business goals and priorities. Every stakeholder prefers the "best" available resources on the market; in practice, however, the best rarely materializes outside a lab environment or a PowerPoint presentation. Resource utilization is the essential differentiator in a success story. Every cloud and compute provider will claim the best capabilities, but organizations must have the tools to analyze and manage their cloud and hybrid compute environments for continuous improvement and better decision-making.
Decisions should not be based on shiny marketing materials. All compute systems and data workflows must be architected and improved continuously. The data journey from source to destination should be designed around the organization's operating geographies, the onboard compute power of its assets, its data distribution capabilities, and its personnel's ability to understand the results. Avoid designing systems according to textbooks and executive presentations; design them instead as a "data supply chain" solution. Software systems are undervalued compared to hardware suppliers, yet poor software design will destroy infrastructure utilization.
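One way to make such tradeoffs explicit is to score candidate compute targets against weighted business priorities. The sketch below is a hypothetical illustration (the node names, metrics, and weights are invented for the example, and this is not Connected Ops' actual scoring model); metrics are assumed to be normalized to [0, 1], where lower is better:

```python
def score(node, weights):
    """Weighted tradeoff score: lower is better. Weights encode
    business priorities (e.g. latency-sensitive vs cost-sensitive)."""
    return sum(weights[k] * node[k] for k in weights)

def pick_node(nodes, weights):
    """Choose the candidate with the lowest weighted score."""
    return min(nodes, key=lambda n: score(n, weights))

# Hypothetical, normalized metrics for candidate compute targets
nodes = [
    {"name": "aws-eu",   "utilization": 0.80, "latency": 0.20, "cost": 0.60},
    {"name": "azure-us", "utilization": 0.40, "latency": 0.70, "cost": 0.50},
    {"name": "onprem",   "utilization": 0.30, "latency": 0.10, "cost": 0.90},
]
# A latency-sensitive operation weights latency heavily
print(pick_node(nodes, {"utilization": 0.3, "latency": 0.5, "cost": 0.2})["name"])
# -> onprem
```

Changing the weights (say, cost-dominated for batch analytics) changes the winner, which is exactly the point: the "best" resource is a function of the organization's priorities, not the provider's brochure.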
Using Connected Ops, you gain full, clear visibility into the data packages, transfers, and compute resource utilization of all your multi-domain assets and their sensors (UAVs, UGVs, satellites, vehicles, IoT sensor networks, cameras, data centers, edge networks, and more). You can make better-informed decisions, saving time, resources, and funds.
AWS, Azure, Google Cloud, and many others advertise their scalability. While the claim is true, these providers may fail to meet your expectations if your distributed computing and software architectures are not designed cloud-natively. Many organizations migrate to the cloud by copying local software, data, and content onto virtual machines and cloud storage accounts rather than designing a native cloud solution. Then the inevitable truth kicks in, and the systems do not perform as marketed in the management PowerPoint presentations.
As a result, we aim to deliver the optimal end-to-end data journey for your organization in real time, across assets operating in multi-domain networks and in various geographies.
Product Guy : Mert Altindag
E-mail : email@example.com
Call / WhatsApp
Germany : +49 (151) 5411 9300
USA : +1 (404) 996 9393