Palantir Announces Availability of Foundry on Microsoft Azure

Amid global economic uncertainty, access to integrated, protected, and trusted data and analytics is more vital than ever for creating business value. To further enable transformative outcomes, Palantir is pleased to partner with Microsoft in making Foundry available on Microsoft Azure, empowering existing and new customers to apply data and analytics more effectively in their operational decision-making.

Through this new collaboration, organizations will be able to quickly deploy Palantir Foundry and unlock further value in Azure Data Services with Microsoft's cloud-scale analytics and AI solutions.

As part of this relationship, our Foundry platform is available on Azure, enabling customers to deploy our software at speed, while benefiting from Azure's trusted and secure infrastructure, as well as its global commercial footprint.

This availability will enable seamless purchasing and invoicing, with customers able to apply their existing Microsoft Azure Consumption Commitment (MACC) toward a Foundry license and infrastructure costs.

Foundry's single-view ontology can layer on top of Azure Data Services, letting customers build on existing investments for faster time to value by unlocking insights and by predicting and simulating outcomes for more data-driven decision making.


The platform will also integrate with native Azure Data Services for enterprise data management on Microsoft Azure, such as Azure Data Lake, Azure Synapse Analytics, Microsoft Power BI, Microsoft Dynamics 365, Microsoft Teams, and Microsoft Industry Clouds. This means customers will be able to further build on their existing IT investments in Azure Data Services through Palantir's software-defined data integration (SDDI) to products like Azure Synapse Analytics, Azure Data Lake Storage, Azure AI and Azure Machine Learning, alongside others.

"We're pleased to partner with Palantir to bring Foundry to Microsoft Azure. Organizations around the world will be able to make their data more actionable by using Palantir's platform for data-driven operations and decision making, powered by Azure's cloud-scale analytics and comprehensive AI services." — Deb Cupp, President, Microsoft North America

Better Together with Palantir Foundry and Azure Data Services

Our new relationship with Microsoft will also see us go to market together in joint opportunities across industries like energy and renewables, retail and CPG, as well as other cross-industry sustainability and ESG efforts, where Microsoft customers can enhance their existing digital transformation efforts in Azure Data Services:

  • Energy and Renewables: Foundry enables customers to integrate data at speed and scale from remote sensors and Azure IoT Hub, and to apply that data to improve the efficiency of assets, from offshore oil platforms to onshore wind farms.
  • Retail and CPG: The platform gives organizations near-instant visibility into demand and the ability to adapt their promotions, inventory, and operations in real time.
  • Sustainability and ESG: We're helping organizations in their net-zero transition by creating a common carbon ontology that empowers front-line decision makers to adjust their work to meet emissions targets.
  • Healthcare and Life Sciences: Foundry is used across the healthcare and life sciences value chain, from drug discovery and development through to manufacturing, marketing, and sales. It integrates with Azure Health Data Services to manage protected health information.

We are also working together to accelerate time to value for customers in these industries and many more by consolidating SAP and other ERPs with Palantir HyperAuto, helping them create a more integrated data landscape. HyperAuto can help customers accelerate their journey to SAP on Azure and surface insights in just hours.

Partnership in Action

Additional Palantir Foundry capabilities that can be deployed at speed via Azure include those from customers like the connected-vehicle company Wejo. Wejo is a proud Palantir partner, optimizing Foundry's capabilities, and a global leader in Smart Mobility for Good™ cloud and software solutions for connected, electric, and autonomous vehicle data.

Wejo's data comes from over 92 billion vehicle journeys and consists of more than 19.5 trillion data points, giving businesses and organizations across a variety of industries the power to innovate, drive growth, transform communities, and save lives.

"We want to help reduce the 1.3 million deaths that happen each year on the road and the additional 8 million due to emissions with smart mobility for good products and services. As part of the Foundry platform, we are excited that Palantir customers with Azure will be able to more rapidly drive integrated, protected, and trusted data and analytics from Wejo for smart mobility initiatives and business value." — Sarah Larner, Executive Vice President of Strategy and Innovation at Wejo

We look forward to working with Microsoft to broaden Foundry's availability, enabling clients across industries to better leverage their existing investments for improved operational outcomes.

Those interested in learning more about Palantir and Microsoft's relationship can visit the  or get started today via the 

This post contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, expectations regarding the terms of the partnership and the expected benefits of the software platform and solutions. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond Palantir's control. These risks and uncertainties include Palantir's ability to meet the unique needs of its customers; the failure of its platforms and solutions to satisfy its customers or perform as desired; the frequency or severity of any software and implementation errors; its platforms' reliability; and the ability to modify or terminate the partnership. Additional information regarding these and other risks and uncertainties is included in the filings Palantir makes with the Securities and Exchange Commission from time to time. Except as required by law, Palantir does not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.

This post originally appeared on  and is re-published with permission.

Download our Resource,  to learn more about how Palantir Technologies can support your organization.

Enabling Responsible AI in Palantir Foundry

Editor's Notes: The following is a collaboration between authors from Palantir's Product Development and Privacy & Civil Liberties (PCL) teams. It outlines how our latest model management capabilities incorporate the principles of responsible artificial intelligence so that Palantir Foundry users can effectively solve their most challenging problems.

At Palantir, we're proud to build mission-critical software. Foundry — our operating system for the modern organization — provides the infrastructure for users to develop, evaluate, deploy, and maintain AI/ML models to achieve their desired organizational outcomes.

From stabilizing consumer goods supply chains, to optimizing airplane manufacturing processes, to monitoring public health outbreaks across the globe, Foundry has enabled data science teams worldwide to readily collaborate with their business and operational teams, enabling all stakeholders to create data-driven impact.


As we discussed in a previous post, using AI/ML for these important use cases demands software that spans the entire model lifecycle. Foundry's first-class security and data quality tools enable users to develop AI/ML models, and by establishing that foundation, our software offers the connectivity and dynamic feedback loops that these teams need in order to sustain the effective use of models in practice.

Further to this, developing capabilities that facilitate the responsible use of artificial intelligence is an indispensable part of building mission-critical AI/ML capabilities. Here, we'll share more about what responsible AI means at Palantir, and how Foundry's latest model management and evaluation capabilities enable organizations to address their most challenging problems.

Responsible AI at Palantir

At its core, our AI/ML product strategy centers on developing software that enables responsible AI use in both collaborative and operational settings. We believe that the term has many dimensions, including considerations around AI safety, reliability, explainability, and governance, and we've publicly advocated for these principles and their application to AI/ML in multiple forums.

We believe that the tenets of responsible AI are not just limited to model development and use but have considerations throughout the entire model lifecycle. For example, developing reliable AI/ML solutions requires tools for the management and curation of high-quality data. These considerations extend beyond model deployment alone and include how end-users interact with their AI outputs and how they can use feedback loops for iteration, monitoring, and long-term maintenance.

Incorporating responsible AI principles in our software is also a core part of our commitment. Building this kind of software means recognizing that AI is not the solution to every problem and that a model built for one problem will not always be a solution to others. A model's intended use should be clearly and transparently scoped to specific business or operational problems.

Moreover, the challenges of using AI for mission-critical problems span a variety of domains and require expertise from a diverse breadth of disciplines. Building AI solutions should therefore be an interdisciplinary process where engineers, domain-experts, data scientists, compliance teams, and other relevant stakeholders work together to ensure the solution represents the specialized demands and requirements of the intended field of application. The values of responsible AI shape how we build our software, and in turn, they enable our customers to use AI/ML solutions in Foundry for their most critical problems.

Model Management in Foundry

Building on the platform's robust security and data governance tools, Foundry's model management capabilities are designed to encourage users to incorporate responsible AI principles throughout a model's lifecycle. We have recently released product capabilities that improve the testing and evaluation ecosystem through no-code and low-code interfaces. We encourage you to read more about these capabilities.

Problem-first modeling

In Foundry, orienting around the "operational problem" that models are trying to solve is at the heart of this new model management infrastructure. Foundry offers many tools for a data-first and exploratory approach to model experimentation, but for mission-critical use cases, AI/ML applications need to be scoped to a specific problem. We have deliberately built the Modeling Objectives application to focus model development, evaluation, and deployment around well-defined problems.

The Modeling Objectives application enables users to define a problem, develop candidate models as solutions to these challenges, perform large-scale testing and evaluation, deploy models in many modalities to both staging and production applications, and then monitor them to enable faster iteration.

Specifying the modeling problem from the outset enables collaborators to better understand — and test for — the application and context for which the models are intended. This also provides greater insight into inadvertent reuse or repurposing of models. Modeling objectives provide a flexible yet structured framework that can streamline model development and deployment by collecting key datasets, identifying stakeholders, and creating a testing and evaluation plan before model development begins.

These objectives also transparently communicate the state of a particular AI/ML solution — from model development to testing, to deployment and further post-deployment actions like monitoring and upgrades. This enables users to be more intentional, responsible, and effective in how they use AI to address their organization's operational challenges.

Deep integrations for security and governance

Data protection, governance, and security are core components of Palantir Foundry and are especially important for AI/ML. AI solutions must be traceable, auditable, and governable in order to be used effectively and responsibly. To facilitate this, Foundry's model management infrastructure integrates deeply with the platform's robust capabilities for versioning, branching, lineage, and access control.

Users can submit a model version to an objective and propose that model as a candidate solution for the problem defined in that objective. When submitting a model, users are encouraged to fill out metadata about the submission, which becomes part of its permanent record. Project stakeholders and collaborators can use this to better understand the details of each submission and create a system of record that catalogs all future models for a particular modeling problem. They can also quickly see the provenance of every model that is submitted to an objective, revealing not only the models themselves, but also their training and testing data and the sources those datasets originally came from.
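
Foundry's submission workflow itself is point-and-click, but the underlying pattern, an append-only system of record in which each candidate model carries metadata and lineage, can be sketched in a few lines of Python (every name and field below is illustrative, not Foundry's actual schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelSubmission:
    """One system-of-record entry for a candidate model
    (hypothetical fields, not Foundry's schema)."""
    model_name: str
    version: str
    objective: str        # the modeling problem this candidate targets
    training_data: tuple  # upstream datasets, recorded for provenance
    metrics_note: str = ""

registry = []  # append-only: submissions become a permanent record

def submit(entry):
    registry.append(entry)

submit(ModelSubmission(
    model_name="demand-forecast",
    version="1.2.0",
    objective="weekly-demand",
    training_data=("sales_2021", "sales_2022"),
    metrics_note="MAPE 7.8% on holdout",
))
```

The frozen dataclass and append-only registry mirror the "permanent record" idea: entries are never mutated in place, only superseded by new submissions.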

Foundry鈥檚 model management infrastructure natively integrates with the platform鈥檚 security primitives for access controls. This enables multiple model developers, evaluators, and other stakeholders to work together on the same modeling problem, while maintaining strict security and governance controls.

Robust testing and evaluation capabilities

Testing and evaluation (T&E) is one of the most critical steps in any model's lifecycle. During T&E, subject matter experts, data scientists, and other business stakeholders determine whether a model is both effective and efficient for any given modeling problem. For example, models may need to be evaluated quantitatively and qualitatively, assessed for bias and fairness concerns, and checked against organizational requirements before they can be deployed to applications in production environments. That's why we have released a suite of capabilities to facilitate more effective and thorough T&E in Foundry.

Foundry now offers built-in evaluation libraries for common AI/ML problems as a part of the Modeling Objectives application. The availability and native integration of these libraries within Foundry's model management infrastructure enable users to quickly produce well-known, quantitative metrics in a point-and-click fashion for common modeling problems, all without having to dive into any technical implementation.

We've also included a framework for users to write their own evaluation libraries. Libraries authored in this framework benefit from the same UI-driven workflow and integration with modeling objectives. This extends the power of the integrated evaluation framework to more advanced modeling problems or context-specific use cases.

Building on the evaluation library integrations, we've also added the ability to easily evaluate models across subsets of their data. This lets users quickly and exhaustively compute metrics to identify areas of model weakness that might otherwise go undetected if only computing aggregate metrics. Evaluating models on subsets can more easily surface bias or fairness concerns that affect only a portion of the model's expected data distribution. Users can also configure their T&E workflows to run automatically on all candidate models proposed for a problem in order to build a T&E procedure that is both systematic and consistent.
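
Foundry's subset evaluation is a platform feature, but the core idea, slicing a test set by an attribute and computing metrics per group rather than only in aggregate, is easy to illustrate (the function and data below are a hypothetical sketch, not a Foundry API):

```python
from collections import defaultdict

def evaluate_by_subset(records, predict, subset_key):
    """Compute per-subset accuracy so weaknesses hidden by
    aggregate metrics become visible."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        group = rec[subset_key]
        total[group] += 1
        if predict(rec["features"]) == rec["label"]:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data: a degenerate classifier that always predicts 1.
records = [
    {"features": 0, "label": 1, "region": "north"},
    {"features": 1, "label": 1, "region": "north"},
    {"features": 2, "label": 1, "region": "north"},
    {"features": 3, "label": 0, "region": "south"},
]
scores = evaluate_by_subset(records, lambda f: 1, "region")
```

On this toy data the aggregate accuracy is 0.75, which looks acceptable, while the per-subset view shows the model failing entirely on the "south" group — exactly the kind of weakness that aggregate metrics hide.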

We also recognize that not all T&E procedures are quantitative. Modeling objectives therefore also help keep track of pre-release tasks that might need to get done as part of the T&E process before a model can be released.

Looking ahead

Modeling objectives and the T&E suite are just some of the latest capabilities to encourage responsible AI in Foundry, and we continue to invest in new capabilities for effective model management. From the tools that facilitate robust model evaluation across domains, to mechanisms for seamless model release and rollback in production settings, our model management offering will always focus on empowering our customers to use their AI/ML solutions effectively, easily, and responsibly for their organization's most challenging problems.

This post originally appeared on  and is re-published with permission.

Download our Resource, to learn more about how Palantir Technologies can support your organization.

Safeguarding Mission-Critical Data: Veeam's Unwavering Commitment to Data Protection and Secure Products for Government Customers

Protecting customer data

In today's digital landscape, data security is of utmost importance. At Veeam Software (Veeam), we recognize the significance of safeguarding our customers' sensitive information. As part of our ongoing commitment to security, we are actively pursuing Common Criteria and Department of Defense Information Network Approved Product List (DoDIN APL) certifications. In addition, we are fully compliant with Cybersecurity Maturity Model Certification (CMMC) v2 Level 1 (awaiting validation) and engage in Independent Verification & Validation (IV&V). We have also successfully completed FIPS 140-2, SOC 2 Type 1, and ISO 27001 certifications and are implementing the Secure Software Development Framework (SSDF) to fortify our security measures further. This update provides an in-depth understanding of these certifications and our dedication to maintaining the highest data protection standards.

Common Criteria certification and DoDIN APL

Common Criteria is an internationally recognized standard for evaluating the security of information technology products. It involves a comprehensive evaluation process, testing our software against rigorous security requirements. By pursuing Common Criteria certification, our goal is to provide our customers assurance that our products adhere to the highest security standards acknowledged by over 30 countries worldwide.

In parallel, we are also pursuing the DoDIN APL certification, which is specifically relevant for our customers operating within the Department of Defense (DoD) ecosystem. This certification ensures that our products meet the stringent security requirements set by the Defense Information Systems Agency (DISA), thereby enhancing the protection of data within the DoDIN framework.

CMMC v2 Compliance


The Cybersecurity Maturity Model Certification (CMMC) is an integral part of our commitment to ensuring the security of our customers’ data. CMMC v2 is the latest version of this unified standard designed to assess the cybersecurity posture of the defense industrial base (DIB). Compliance with CMMC v2 signifies that our security practices align with the stringent requirements defined by the Department of Defense (DoD). By adhering to these standards, we assure our customers within the defense sector that their data is safeguarded with the utmost care and resilience.

Independent Verification & Validation (IV&V)

To reinforce our security measures, we have engaged in Independent Verification & Validation (IV&V). This process involves a third-party organization conducting thorough testing and evaluation of our software. The independent nature of IV&V ensures an unbiased assessment of our security controls, offering an additional layer of confidence in our commitment to protecting valuable customer data.

FIPS 140-2, SOC 2 Type 1 (and soon Type 2) and ISO 27001 certifications

Veeam has successfully completed several vital certifications that further fortify our security posture. FIPS 140-2 is a U.S. government standard that verifies the security requirements of cryptographic modules. This certification ensures that our encryption methods meet the highest standards and provide robust data protection.

SOC 2 Type 1 certification demonstrates our dedication to maintaining the security, availability, processing integrity, confidentiality and privacy of data. We are actively working towards achieving SOC 2 Type 2 certification, enabling us to demonstrate even greater control efficacy and maturity across our systems and processes.

Additionally, Veeam's compliance with the ISO 27001 standard underscores our commitment to establishing and maintaining a comprehensive information security management system (ISMS). This certification validates that our security practices align with globally recognized best practices, ensuring customer data remains safe and secure.

Implementation of the Secure Software Development Framework (SSDF)

As part of our continuous improvement efforts, Veeam is in the process of implementing the Secure Software Development Framework (SSDF). This framework provides guidance on designing, developing and testing software to ensure adherence to specific security standards. The SSDF allows us to integrate robust security practices into our software development lifecycle, ensuring we proactively address security concerns at every stage of the development process and build products with security in mind from the ground up. By incorporating the SSDF into our development processes, we enhance the security of our software and reinforce our commitment to delivering robust and secure solutions.

At Veeam, our customers' data security is our top priority. We are committed to maintaining the highest levels of protection for mission-critical data. Pursuing Common Criteria and DoDIN APL certifications, complying with CMMC v2, engaging in Independent Verification & Validation, completing FIPS 140-2, SOC 2 Type 1 (and soon Type 2), and ISO 27001 certifications, and implementing the Secure Software Development Framework (SSDF) all demonstrate our unwavering dedication to data security.

By undergoing these certifications and implementing industry-leading security measures, we ensure that customer data remains secure, regardless of the sector. We will continue to evolve and improve our security practices to stay ahead of emerging threats and provide customers the peace of mind they deserve.

When customers choose Veeam and the Veeam Data Platform, they can rest assured they have selected a trusted partner committed to securing their data and the data of their customers, end-users and partners. We value the trust we have built with our government customers and will continue to deliver the highest level of data protection possible to ensure mission continuity.

Contact a member of our team today and learn more about how Veeam can support your mission-critical data initiatives.

7 Key Takeaways from HIMSS23

In April, over 40,000 global health professionals converged in Chicago for the highly anticipated HIMSS23 conference. Over the course of five days, healthcare, government and technology leaders discussed everything from wearable medical devices and artificial intelligence (AI) to cybersecurity and compliance. Here are some highlights and key themes from the conference.

  1. Change is happening quickly: The buzz around ChatGPT offers a perfect illustration of just how quickly AI has become part of our everyday lives. There are many applications for AI in the healthcare space as well. In procedure rooms, cameras with AI can verify that processes are being followed, thereby helping avoid malpractice. One key question circulating at the conference was: how can regulations be put in place to protect patients' and practitioners' privacy as this new technology starts to be implemented?

  2. The cloud is here to stay: Underpinning many new technologies is the cloud. As more healthcare organizations use hybrid and multi-cloud environments, compliance becomes increasingly complicated and important. This is particularly true considering regulations and data protection laws are constantly changing. One benefit is that there is a lot of overlap between compliance requirements. Looking for these common requirements (e.g., encrypting sensitive data) can help organizations navigate the seemingly complex world of compliance.

  3. Data presents a paradox: Data holds tremendous potential to transform healthcare operations, but the promise of data-informed decision-making must be balanced with both the data overload felt by those on the front lines and the preservation of patient privacy. Electronic health records (EHRs) have made the lives of doctors and nurses easier in many ways, but they have also required workers to document much more granular information to meet regulation and reimbursement requirements. As such, many workers are skeptical of health IT's ability to alleviate burnout. Integrating data into the culture of the organization is the best way to ensure everyone is capturing the proper data and maximizing new technology investments.

  4. Pursue interoperability: Not just having the data, but sharing that information is also crucial. By improving access to clinical data across institutions, we can discover new therapies, lower medical costs and improve patient care; however, interoperability also requires compliance and due diligence. At HIMSS23, panelists from the National Institute of Standards and Technology (NIST) described how next-generation database access control can facilitate data-sharing without moving large volumes of data. This promotes interoperability while preserving local protection policies. Additionally, panelists from the Centers for Medicare and Medicaid Services (CMS) emphasized the importance of Fast Healthcare Interoperability Resources (FHIR) standards.

  5. Care is expanding beyond hospital walls: Increasingly, wearable technology is becoming a staple of healthcare, as it can help with monitoring everything from glucose levels to physical activity, in addition to supporting weight control and disease prevention. More than anything, wearables offer the opportunity to continue patient care outside the walls of the hospital, which reduces the cost of care. The data collected by wearable technology holds tremendous potential for analysis at both the patient level and the population level.

  6. Cybersecurity must be top-of-mind: While wearables have many benefits, they must be used with cybersecurity in mind. A continuous glucose monitor that connects to the internet and patient portal, for example, could put all patient data at risk if the device is compromised. That's why an Institute of Electrical and Electronics Engineers (IEEE) working group has developed a framework of Trust, Identity, Privacy, Protection, Safety and Security (TIPPSS) principles for keeping devices with sensors safe. The goal is to make TIPPSS the standard for clinical Internet of Things (IoT) devices first, then for other solutions.

  7. Privacy: Patient privacy was also a leading theme at HIMSS23. When working with AI, algorithms must be trained on large volumes of data. At the conference, panelists discussed how healthcare providers and tech companies can balance using this protected health information (PHI) to improve AI while still adhering to privacy laws like HIPAA. Data de-identification is one approach to get the most out of large volumes of data while maintaining patient privacy.
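
De-identification is a concrete technique as well as a policy goal. A minimal sketch, loosely in the spirit of the HIPAA Safe Harbor method, might pair salted hashing of a direct identifier with generalization of quasi-identifiers (all field names and the salt below are illustrative, and real deployments require far more rigor):

```python
import hashlib

SALT = b"rotate-me-per-dataset"  # illustrative; manage real salts as secrets

def deidentify(record):
    """Replace a direct identifier with a salted hash and
    generalize quasi-identifiers (zip code, extreme age)."""
    out = dict(record)
    out["patient_id"] = hashlib.sha256(
        SALT + record["patient_id"].encode()
    ).hexdigest()[:16]
    del out["name"]                        # drop direct identifier outright
    out["zip"] = record["zip"][:3] + "**"  # keep only the 3-digit prefix
    out["age"] = "90+" if record["age"] >= 90 else record["age"]
    return out

rec = {"patient_id": "P123", "name": "Jane Doe", "zip": "60614", "age": 93}
clean = deidentify(rec)
```

Rule-based transforms like these address individual fields; real pipelines must also assess re-identification risk across combined fields, which is why expert-determination reviews often accompany them.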

Overall, a common thread throughout HIMSS23 was balance. Healthcare providers and tech companies must balance the promises of technology with due diligence, while working in partnership to develop innovative solutions. From data standards to data privacy, it is crucial to collaborate with the government to lay the right foundation for using these cutting-edge technologies.


Visit our Healthcare Solutions Portfolio to learn more about HIMSS 2023 and how 探花视频 can support your organization's healthcare technology goals and initiatives.

*The information contained in this blog is based on the thought-leadership discussions presented by speakers at HIMSS 2023.*

AvePoint Adds Governance, Management, Data Protection and Migration Support for Microsoft Power Platform

探花视频 partner AvePoint Public Sector recently announced its support for the governance, management, migration and data protection of Microsoft Power Platform. As more organizations adopt Power Platform to automate processes, build digital solutions, analyze data and create virtual agents, IT leaders need strategies that support their unique governance, security and compliance requirements.

AvePoint鈥檚 support for Power Platform helps organizations:

  • Provide scalable management and governance: Access management and risk assessments allow organizations to quickly drive impactful collaboration and sustainable Power Platform adoption. Best practices and productivity can be achieved through automated governance and policies, enforcing proper control of data access and functionality.
  • Protect critical workspaces, apps and flows: AvePoint's automated backup for Power BI workspaces, Power Apps and flows protects against accidental data deletion, user error and ransomware. This way, organizations can ensure they're protected, compliant and prepared for business continuity when using Power Platform.
  • Seamlessly migrate data: Building on AvePoint's award-winning migration capabilities, organizations can now migrate apps from an environment within the same tenant or between tenants — giving organizations more opportunities to successfully use Power Platform.

Some organizations are already taking advantage of AvePoint's Power Platform support. "AvePoint's support for Power Platform has helped us empower employees to safely build solutions that will enhance their work," Mike Fettner, Principal, Office 365 Engineering at Regeneron, said. "As an organization, this allows us to continue taking smart risks because we know robust governance solutions will put the right guardrails in place, and data protection will ensure none of our data or workflows are lost."

Register today to join AvePoint and Microsoft for  coming to a city near you later this Spring.

Connecting Customers with AvePoint and Industry Solutions

It has never been easier to count on 探花视频 and AvePoint. We can help your agency with:

  • Quick quote turnaround and smart spending
  • Industry-expert cloud computing product recommendations
  • 24/7 live assistance to get you up and running faster

Contact a member of the 探花视频 and AvePoint Public Sector team today and discover how we can support your organization.

The Open Source Revolution in Government

Open source technology accounts for a significant portion of most modern applications, with some estimates going as high as 90%, and it is the foundation of many mainstream technologies. Its strength lies in the fact that a vibrant ecosystem of developers contribute to and continually improve the underlying code, which keeps the software dynamic and responsive to changing needs. Enterprise open source software further augments these community-driven projects by providing enterprise-grade support and scalability, while retaining the innovation and flexibility driven by the open source development model. By providing the best of both worlds, such solutions represent a powerful arsenal of tools for addressing government's most pressing challenges.

In a recent pulse survey of FCW readers, 93% of respondents said they were using open source technology. And more than half of respondents to FCW's survey see open source as an integral resource for strengthening cybersecurity. That number reflects a positive trend toward a better understanding of open source software's intrinsic approach to security.

The power of enterprise open source technologies lies in a combination of collaboration, transparency and industry expertise. As agencies expand their use of such technologies, they maximize their ability to achieve mission success in the most secure, agile and innovative way possible. Learn how the combined power of community-driven innovation and industry-leading technical support is expanding the government's capacity for transformation in 探花视频's Innovation in Government® report.

 

Why Open Source is a Mission-Critical Foundation

“Open source transforms the way agencies manage hybrid and multi-cloud environments. The most critical technology in the cloud, across all providers, is Linux. Everything is built on top of that foundation, both the infrastructure of the cloud and cloud offerings. Given the right partner, the promise of Linux is that it provides a consistent technology layer for agencies across all footprints, including multiple cloud providers, on-premises data centers and edge environments. From that foundation, agencies and their partners can build portable architectures that leverage other open source technologies. Portability gives organizations the ability to use the same architectures, underlying technologies, monitoring and security solutions, and human skills to manage mission-critical capabilities across all footprints.”

Read more insights from Christopher Smith, Vice President and General Manager of the North America Public Sector at Red Hat.

 

How Open Source is Expanding its Mission Reach

“The real power of open source technologies was revealed when they cracked the code on being highly powered, mission-specific, distributed systems. That’s how we are able to get insights out of data by being able to hold it and query it. Today, open source innovation is being accelerated by the cloud, and the conversation is still changing, with people now demanding that their open source companies be cloud-first platforms. Along the way, the open source technologies that start in the community and then receive a boost of commercial innovation have matured. The most powerful ones are expanding their ability to address more of the government’s mission needs. They are staying interoperable and keeping the data interchange non-proprietary, which is important for government agencies.”

Read more insights from David Erickson, Senior Director of Solutions Architecture at Elastic.

 

The Open Source Community’s Commitment to Security

“A central tenet of software development is visibility and traceability from start to finish so that a developer can follow the code through development, testing, building and security compliance, and then into the final production environment. Along the way, there are some key activities that boost collaboration and positive outcomes, starting with early code previews, where developers can spin up an application for stakeholders to review. Other activities include documented code reviews by peers to ensure the code is well written and efficient. In addition, DevOps components such as open source, infrastructure as code, Kubernetes as a deployment mechanism, automated testing, and better platforms and capabilities have helped developers move away from building ecosystems and instead focus on innovation.”

Read more insights from Joel Krooswyk, Federal CTO at GitLab.
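The pipeline Krooswyk describes, where code moves visibly through testing, building, security compliance and a stakeholder preview, maps naturally onto a CI/CD configuration. Below is a minimal, illustrative `.gitlab-ci.yml` sketch; the job names and `make` commands are hypothetical placeholders, not a recommended setup.

```yaml
# Illustrative pipeline: each stage gives stakeholders visibility
# into the code as it moves from development toward production.
stages:
  - test
  - build
  - security
  - deploy

unit_tests:
  stage: test
  script:
    - make test          # documented, repeatable test run

build_image:
  stage: build
  script:
    - make build

dependency_scan:
  stage: security
  script:
    - make scan          # e.g. open source dependency scanning

review_app:
  stage: deploy
  environment: review    # early code preview for stakeholder review
  script:
    - make deploy-review
```

Each stage must pass before the next runs, which is what gives reviewers the start-to-finish traceability the quote emphasizes.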

 

The Limitless Potential of an Open Source Database

“One of the most important elements of any database migration is ensuring that proper planning and due diligence have been performed to ensure a smooth and successful deployment. In addition, there are some key considerations agencies should keep in mind when moving to open source databases. It is essential to start with a clear understanding of the business case and objectives for adopting an open source approach. Agencies also need to decide how the database should function and what it should do to support their digital transformation. Then they must choose the optimal method to deploy the database.”

Read more insights from Jeremy A. Wilson, CTO of the North America Public Sector at EDB.

 

Modernizing Digital Services with Open Source

“A composable, open source digital experience platform (DXP) enables agencies to overcome those challenges. Open source technology is continuously contributed to by a community of developers to reflect a wide array of needs across organizations in varying industries and of varying sizes. A composable approach allows agencies to assemble a number of solutions for a fast, efficient system that is tailored to their needs. When agencies combine a composable DXP with open source technology, they have access to best-of-breed software and the ability to customize the assembly to suit their requirements. An enterprise DXP will enable agencies to achieve a 360-degree view of how constituents are engaging with their digital services and gain valuable data to understand how to enhance their experience. Finally, a composable, open source DXP provides a proactive approach to protecting against security and compliance vulnerabilities.”

Read more insights from Tami Pearlstein, Senior Product Marketing Manager at Acquia.

 

Creating Secure Open Source Repositories

“Protecting the software supply chain requires looking at every single thing that might come into an agency’s environment. To understand that level of visibility, I like to use the analogy of a refrigerator. All the ingredients necessary to make a cake or pie are in the refrigerator. We know they are of good quality, and other teams can use them instead of having to find their own. At Sonatype, our software equivalent of a refrigerator is the Nexus Repository Manager. A second aspect of our offering, called Lifecycle, allows us to evaluate the open source components in repositories at every stage of the software development life cycle. One piece of software can download a thousand other components. How do we know if one of those components is malicious?”

Read more insights from Maury Cupitt, Regional Vice President of Sales Engineering at Sonatype.

 

Better Data Flows for a Better Customer Experience

“A more responsive and personalized customer experience isn’t much different from the initial problem set that gave birth to Apache Kafka. When people interact with agencies, they want those agencies to know who they are and how they’ve interacted in the past. They don’t want to be asked for their Social Security number three times on the same phone call. They also expect that the information or service they receive will be the same whether they are accessing it over the phone, via a mobile app or on a website. To elevate the quality of their service, agencies must be able to stream information in a low-friction way so different systems are consistent with one another and up to date at all times, regardless of the communication channel an individual uses. President Joe Biden’s executive order on transforming the federal customer experience is based on this capability. The most successful companies across industries have figured out how to do it, and for the most part, they’ve done it with open source software.”

Read more insights from Jason Schick, General Manager of Confluent US Public Sector.
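The pattern Schick describes, in which every channel stays consistent by reading from one shared stream of events, can be sketched without any Kafka infrastructure. The toy in-memory log below is illustrative only (the class and function names are invented, not a Kafka API), but it shows the core idea: all channels replay the same append-only log, so each sees the same up-to-date profile.

```python
from collections import defaultdict

class EventLog:
    """A minimal append-only log keyed by topic, standing in for Kafka."""
    def __init__(self):
        self.topics = defaultdict(list)

    def produce(self, topic, event):
        self.topics[topic].append(event)

    def consume(self, topic, offset=0):
        return self.topics[topic][offset:]

def current_profile(log, customer_id):
    """Replay every event for a customer to build one consistent view."""
    profile = {}
    for event in log.consume("customer-updates"):
        if event["customer_id"] == customer_id:
            profile.update(event["fields"])
    return profile

log = EventLog()
log.produce("customer-updates", {"customer_id": "c1", "fields": {"name": "Ada"}})
log.produce("customer-updates", {"customer_id": "c1", "fields": {"phone": "555-0100"}})

# Phone, app and web channels all replay the same log, so each sees
# the same profile: {'name': 'Ada', 'phone': '555-0100'}
print(current_profile(log, "c1"))
```

Because every system derives its view from the same ordered stream rather than its own copy of the data, no channel can drift out of sync, which is the property Kafka provides at production scale.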

 

An Open Source Approach to Data Analytics

“For the past 40 years, agencies have used data warehouses to collect and analyze their data. Although those warehouses worked well, they were limited in what they could do. For instance, they could only handle structured data, but by some estimates, 90% of agencies’ data is unstructured and in the form of text, images, audio, video and the like. Furthermore, proprietary data warehouses can show agencies what has happened in the past but can’t predict what might happen in the future. To achieve the government’s goal of evidence-based decision-making, agencies need to be able to tap into all their data and predict what might come next.”

Read more insights from Howard Levenson, Regional Vice President at Databricks.

 

Download the full Innovation in Government® report for more insights from these open source thought leaders and additional industry research from FCW.

Overcoming Data Challenges With Virtualization

Despite the variation in their individual mandates, all regulatory agencies share one main objective: to protect the public. However, there are hurdles to this goal. Data warehousing carries heavy costs, as large projects require extensive telecommunications and server capacity, making them both expensive and time-consuming. Fortunately, by implementing data virtualization tools, agencies can overcome these constraints and provide more effective services.

What is Data Virtualization?

Data virtualization is an approach to data management that helps organizations accelerate the turnaround time for converting data into digestible information. Data sources can span a variety of locations, from data stores and distributed systems to any documents, emails or spreadsheets an agency holds. With such a wide array of data, accessing and understanding all vital information can be both time-consuming and overwhelming. Data virtualization streamlines access to the answers and information that agencies and users require.

How It Works

Data virtualization software begins by creating a layer over or around all existing data sources in an organization. Through its complementary interface, the software outputs the needed information. This process saves an abundance of time that is otherwise spent reading labels and searching for a single piece of information.

Another major benefit is that data virtualization software creates a layer of abstraction between the data source and what the user ultimately sees. The software arranges heterogeneous data from all the different sources across an organization and quickly presents it to the user. Because it interacts directly with each source, the software ensures that every source is represented accurately, so users receive sufficient context for the information they are accessing.
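The abstraction layer described above can be sketched in a few lines. The example below is a hypothetical illustration, not Thentia's implementation: two heterogeneous "sources" (an in-memory record store and CSV text) are registered behind one uniform query interface, so the user never touches each source's native format.

```python
class VirtualLayer:
    """Presents many heterogeneous sources through one query interface."""
    def __init__(self):
        self.sources = {}

    def register(self, name, fetch_fn):
        # fetch_fn returns rows as dicts; the adapter hides the
        # source's native format from the user.
        self.sources[name] = fetch_fn

    def query(self, name, **filters):
        rows = self.sources[name]()
        return [r for r in rows
                if all(r.get(k) == v for k, v in filters.items())]

# Two heterogeneous sources: an in-memory store and raw CSV text.
licenses = [{"id": 1, "status": "active"}, {"id": 2, "status": "expired"}]
csv_text = "id,status\n3,active"

def csv_source():
    header, *lines = csv_text.splitlines()
    keys = header.split(",")
    return [dict(zip(keys, line.split(","))) for line in lines]

layer = VirtualLayer()
layer.register("licenses", lambda: licenses)
layer.register("filings", csv_source)

print(layer.query("licenses", status="active"))  # [{'id': 1, 'status': 'active'}]
print(layer.query("filings", status="active"))   # [{'id': '3', 'status': 'active'}]
```

The key design point is that no data is copied into a central warehouse: each query fetches from the original source through its adapter, which is what lets virtualization avoid the warehousing costs discussed earlier.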

The Benefits of Data Virtualization

Typically, data virtualization exists between the user and their vast array of data sources. Virtualizing tools have several benefits. They:

  • Reduce the processing time and cost
  • Provide a single, consistent view of data for accomplishing a variety of goals and objectives
  • Reduce expenses associated with data integration

In addition to these advantages, virtualization servers offer the same security benefits as any other IT system. Because data servers exist on a single network, they are isolated from potential threats, and network isolation and segmentation prevent unnecessary crossover of information. With granular access controls, users can implement micro-segmentation to strengthen this protection further. Lastly, by applying updates and new security patches as they are released, virtualization servers stay current with the latest cybersecurity practices. For a professional licensing agency, securing its software is always beneficial and necessary, and no additional steps need to be taken to protect virtualization servers.

Choosing the Right Data Virtualization Software

The process of implementing data virtualization can be daunting at first. As each organization differs in the types of information it collects and how that information is categorized, data virtualization will also differ. However, there are a few elements that regulatory agencies should consider. First, regulators should determine the setup/layout of their existing organization structure. Questions to consider include:

  • What existing technology is owned?
  • What systems are being worked with?
  • What are the agency鈥檚 needs?
  • What are the agency鈥檚 top priorities?

All these factors contribute to how data virtualization is implemented. Once a regulator reaches a sufficient level of technological maturity, it should begin fully implementing data virtualization. With the proper virtualization software, regulators can swiftly sift through information.

Data virtualization servers reduce time, resources and cost for regulators

For a variety of agencies, data virtualization can greatly streamline and improve their access to information. By transforming manual systems into a digital, accessible process, virtualization servers reduce time, resources and cost for regulators in their ongoing work to best utilize data to aid the public.

To learn more about Thentia’s data virtualization solutions,