Learning Ecosystem: An Accelerated Move Toward Digital

The educational ecosystem has evolved immensely over the years, driven by technology disruption and an accelerated move toward digital across the landscape, with the aim of lifelong learning for all.

The global digital education market is expected to reach USD 77.23 billion by 2028, enabling faster, broader reach to learners and hybrid learning models that bring efficiency, flexibility, and innovation.

Several compelling themes stand out:

Accelerated move to the cloud: Hybrid is key across industries, and the learning ecosystem is no exception.

The onset of the pandemic brought urgency to the transition to online, asynchronous/on-demand learning models, and recent global events have accelerated it further.

A digital-first approach to Learning Management Systems, Student Information Systems, and online/e-assessments has proved invaluable in this transition.

To further enable this, the move of the application estate to the cloud needs to accelerate, offering flexibility and collaboration to learners and educators alike.

Data and AI going strong: The learner continuum spans schools, higher education, vocational training, and the workforce, underpinned by assessment and credentialing touchpoints throughout the journey.

A learner-centric approach with personalized, adaptive learning is the key factor in serving the entire spectrum.

Collective performance and behavioral insights from learner data enable AI to analyze and predict gaps, and to recommend a progressive plan that the learner and educator can work on collaboratively.

Organizations and institutions should aggressively adopt a learner-centric and AI-embedded approach to offer the best experience and value from e-learning, further transforming the learner ecosystem.

Enhanced experiences for learners: From targeted courseware learning to microlearning and short-form to user-generated content, providing the best experience to navigate, understand, and upskill is critical.

Learning Experience Platforms can deliver a unified digital learning experience that enables accessibility and inclusivity and supports the different touchpoints in a learner’s journey.

It is imperative to build this experience into today’s content and learning-material development approach, making it part of a direct-to-learner strategy.

None of these themes are brand new, but with a continuously evolving digital learning ecosystem, they need to be carefully considered and positioned well to truly maximize learning outcomes.

Masking Sensitive Data: A Must for Test Environments in the Public Cloud

The Need for a Data Masking Solution

Why mask data?

Earlier this month, the security firm Imperva announced it had suffered a significant data breach. Imperva had uploaded an unmasked customer database to AWS for “test purposes.”

The database was in a test environment that was not monitored or controlled as rigorously as production. Cybercriminals stole an API key and used it to export the database contents.

Ironically, the victim is a security company that sells a data masking product, Imperva Data Masking.

Imperva could have avoided this painful episode had it used its own product and established a policy requiring every development and test environment to contain only masked data.

The lesson for the rest of us: if you’re moving workloads to AWS or another public cloud, you need data masking for test and development environments.

Here is how companies can implement such a policy.

The rationale for data masking

Customers concerned about the risk of data loss or theft seek to limit the attack surface presented by critical data. A common approach is to restrict sensitive data to “need-to-know” environments. So what is masked data? Masked data is sensitive data that has been deliberately disguised or hidden using data masking or obfuscation techniques such as encryption, redaction, or anonymization.

What is data masking in cybersecurity?

Data masking generally involves obfuscating data in non-production (development and test) environments.

Data masking is the process of irreversibly, but self-consistently, transforming data such that the original value cannot be recovered from the result. In this sense, it is distinct from reversible encryption and carries less inherent risk if compromised.
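
To make the distinction concrete, here is a minimal Python sketch, illustrative only and not a production masking routine, of a deterministic, one-way transformation: the same input always yields the same masked value, but the original cannot be recovered from the output. The salt value is an assumption for the example.

```python
import hashlib

def mask_value(value: str, salt: str = "project-specific-salt") -> str:
    """Deterministically mask a sensitive value.

    The same input always produces the same output (self-consistent),
    but the original value cannot be recovered from the result (irreversible).
    The salt is an illustrative placeholder; real deployments keep it secret.
    """
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return digest[:12]  # truncated for readability; still one-way

# The same SSN masks to the same token in every table,
# so joins still work, yet the SSN itself is gone.
print(mask_value("123-45-6789"))
print(mask_value("123-45-6789"))  # identical output
```

Because the mapping is consistent, referential integrity across tables is preserved, a property discussed in more detail below.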

As data-centric enterprises take advantage of the public cloud, a common strategy is to move all non-production environments there first. The perception is that these environments present less risk.

In addition, the nature of the development/test cycle means that these workstreams can enormously benefit from the flexibility in infrastructure provisioning and configurations offered by the public cloud.

To leverage this flexibility, development and test data sets need to be readily available and as close to production as possible in order to represent the wide range of production use cases.

Yet, some customers are reluctant to place sensitive data in public cloud environments.

The answer to this puzzle is to take production data, mask it, and move it to the public cloud. The perception of physical control over data continues to provide comfort (whether warranted or not).

Data masking also makes it easier for public cloud advocates to gain traction in risk-averse organizations by addressing concerns about data security in the cloud.

Regulations such as GDPR, GLBA, CAT, and HIPAA impose data protection standards that encourage some form of masking of personal data, PII (Personally Identifiable Information), and PHI (Protected Health Information) in non-production environments.

Requirements of data obfuscation tools

Data obfuscation tools should provide the following capabilities:

  • Data Profiling: The ability to identify sensitive data (e.g., PII or PHI) across data sources; a toy profiling sketch follows this list
  • Data Masking: The process of irreversibly transforming sensitive data into non-sensitive data
  • Audit/Governance Reporting: A dashboard for Information Security Officers responsible for meeting regulatory requirements and data protection
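
As a toy illustration of the profiling capability (the column names, patterns, and threshold below are assumptions for the example, not part of any particular product), a profiler can scan sample values against simple regular expressions to flag likely PII:

```python
import re

# Illustrative patterns only; real profilers use far richer rules and ML.
PII_PATTERNS = {
    "ssn":   re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "phone": re.compile(r"^\+?\d[\d\s\-().]{7,}$"),
}

def profile_column(name: str, sample_values: list[str], threshold: float = 0.8) -> list[str]:
    """Return the PII categories that most sampled values in a column match."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        matches = sum(1 for v in sample_values if pattern.match(v or ""))
        if sample_values and matches / len(sample_values) >= threshold:
            hits.append(label)
    return hits

# Hypothetical sample pulled from a customer table
print(profile_column("contact", ["alice@example.com", "bob@example.com", "n/a"], threshold=0.6))
```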

Building such a feature set from scratch is a big lift for most organizations, and that’s before we begin considering the various masking functions that a diverse ecosystem will need.

Masked data may also have to satisfy referential integrity, human readability, or other unique requirements to support distinct test scenarios.

Referential integrity is crucial to clients with several independent data stores performing a business function or transferring data between each other.

Hash functions are deterministic and meet the referential integrity requirement. However, they do not meet the uniqueness or readability requirements.

Several different data masking algorithms may be required depending on application requirements (a minimal sketch of each follows the list). These include:

  • Hash functions: E.g., use a SHA1 hash
  • Redaction: Truncate/substitute data in the field with random/arbitrary characters
  • Substitution: Replace values with alternate “realistic” values (a common implementation samples real-looking values to populate a lookup table)
  • Tokenization: Substitution with a token that can be reversed—generally implemented by storing the original value along with the token in a secure location
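
The sketch below illustrates, under simplifying assumptions, what each approach might look like in Python. The substitution pool and the in-memory token vault are placeholders purely for demonstration; a real implementation would use curated reference data and a secured token store.

```python
import hashlib
import random
import string

FIRST_NAMES = ["Alex", "Sam", "Jordan", "Taylor"]   # illustrative substitution pool
_token_vault: dict[str, str] = {}                   # stand-in for a secured token store

def mask_hash(value: str) -> str:
    """Hashing: deterministic and irreversible; preserves referential integrity."""
    return hashlib.sha256(value.encode()).hexdigest()

def mask_redact(value: str) -> str:
    """Redaction: replace the field with arbitrary characters of the same length."""
    return "".join(random.choice(string.ascii_uppercase) for _ in value)

def mask_substitute(value: str) -> str:
    """Substitution: deterministic replacement with a 'realistic' value."""
    index = int(hashlib.sha256(value.encode()).hexdigest(), 16) % len(FIRST_NAMES)
    return FIRST_NAMES[index]

def mask_tokenize(value: str) -> str:
    """Tokenization: reversible; the original is kept alongside the token."""
    token = "TOK-" + hashlib.sha256(value.encode()).hexdigest()[:10]
    _token_vault[token] = value  # reversal is possible only via the vault
    return token

original = "Margaret Hamilton"
print(mask_hash(original), mask_redact(original), mask_substitute(original), mask_tokenize(original))
```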

Database obfuscation tools

AWS has the following whitepapers and reference implementations:

  • An AI-powered masking solution for Protected Health Information (PHI) that uses API Gateway and Lambda to retrieve and mask PHI in images on S3 and return masked text for data posted to API Gateway
  • A design case study with Dataguise to identify and mask sensitive data in S3
  • A customer success story of a PII masking tool built using EMR and DynamoDB
  • An AWS whitepaper that describes using Glue to segregate PHI into a location with tighter security features

However, these solutions do not address relational database masking or integrate with AWS’s relational database migration product, DMS.

Types of data masking

Microsoft offers two data masking options for SQL Server on Azure:

  • Dynamic Masking for SQL Server, which overwrites query results with masked/redacted data at query time
  • Static Masking for SQL Server, which modifies the stored data itself to mask/redact it

For this discussion, we focus on what Microsoft calls “static masking” since “dynamic masking” leaves the unmasked data present on the DB, failing the requirement to shrink the attack surface as much as possible.

We will also cover AWS data masking solutions and technologies to compare cloud-native and vendor implementations.

Build Your Data Masking Solution with AWS DMS and Glue

AWS Data Masking: AWS Database Migration Service (DMS) currently provides a mechanism to migrate data from one data source to another, either as a one-time migration or through continuous replication, as described in the diagram below (from AWS documentation):

[Diagram: AWS data masking solution]

DMS currently supports user-defined tasks that modify the Data Definition Language (DDL) during migration (e.g., dropping tables or columns). DMS also supports character-level substitutions on columns with string-type data. For data masking, AWS’s ETL service, Glue, can be built into this framework to operate on field-level data rather than on DDL or individual characters.
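
As a rough sketch of what that Glue step could look like (the database, table, column, and bucket names are placeholders, and the job is heavily simplified for illustration), a PySpark-based Glue job can hash sensitive columns before the data lands in the test environment:

```python
# Illustrative AWS Glue job (PySpark): hash sensitive columns during migration.
# Database, table, column, and bucket names are assumptions for the example.
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext
from pyspark.sql.functions import col, sha2

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the staged table that DMS replicated into the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="staging_db", table_name="customers"
)

# Mask field-level data: replace each sensitive column with a SHA-256 digest.
df = source.toDF()
for sensitive_column in ["ssn", "email", "phone"]:
    df = df.withColumn(sensitive_column, sha2(col(sensitive_column).cast("string"), 256))

# Write the masked data set to the bucket used for test environments.
masked = DynamicFrame.fromDF(df, glue_context, "masked_customers")
glue_context.write_dynamic_frame.from_options(
    frame=masked,
    connection_type="s3",
    connection_options={"path": "s3://example-test-data/masked/customers/"},
    format="parquet",
)
```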

The diagram below shows an automated pipeline to provision and mask test datasets and environments using DMS, Glue, CodePipeline, and CloudFormation:

[Diagram: Data masking in AWS]

When using DMS and Glue, the replication/masking workload runs on AWS, not in the customer’s on-premises data center. As a result, unmasked or unredacted data briefly exists in AWS before the transformation.

This solution does not address the security concerns of clients who remain cautious about placing sensitive data (and the accompanying compute workloads) on AWS.

For firms looking for a cloud-native answer, the above can form the kernel of a workable solution when combined with additional work on identifying the data that needs masking and on reporting, dashboarding, and auditing.

Data Masking Technology Vendors offering Data Masking Tools

If organizations are less concerned about using cloud-native services, a number of commercial data obfuscation and data masking tools offer masking in various forms. These include IRI FieldShield, Oracle Data Masking, Okera Active Data Access Platform, IBM InfoSphere Optim Data Privacy, Protegrity, Informatica, SQL Server Data Masking, CA Test Data Manager, Compuware Test Data Privacy, Imperva Data Masking, Dataguise, and Delphix.

Several of these vendors have partnerships with cloud service providers, and there are also many open-source data masking tools on the market. For the use case under consideration, the best fit is the solution offered by Delphix.

Data Masking with Delphix

This option leverages a commercial data masking provider to build data masking capability on AWS.

Delphix offers a masking solution on the AWS Marketplace. One benefit of a vendor solution like Delphix is that it is easily deployable both on-premises and in a public cloud.

It allows customers to run masking workloads on-premises and ensure that no unmasked data is ever present in AWS.

By contrast, some AWS services, such as Storage Gateway, can run on-premises, but Glue and CodeCommit/CloudFormation cannot.

Database Virtualization

Delphix is also appealing because its masking solution integrates with its “database virtualization” products.

Delphix virtualization lets users provision “virtual databases” by exposing a file system/storage to a database engine (e.g., Oracle), which contains a “virtual” copy of the files/objects that constitute the database.

It tracks changes at a file-system block level, thus offering a way to reduce the duplication of data across multiple virtual databases (by sharing common blocks). Delphix has also built a rich set of APIs to support CI/CD and self-provisioning databases.

Delphix’s virtualized databases offer several functions more commonly associated with modern version control systems such as Git. These include versioning, rollback, tagging, low-cost branch creation, and the ability to revert to a point along the version tree.

These functions are unique in that they bring source control concepts to relational databases, vastly improving the ability of CI/CD pipelines to work with them.

This allows users to deliver masked data to their public cloud environments.

A reference architecture for a chained Delphix implementation, utilizing both virtualization and masking, would look like this:

[Diagram: Delphix data masking reference architecture]

Conclusion

It is imperative to mask data in lower environments (dev, test).

Masking such data also makes migrating dev and test workloads to public clouds far more manageable and less risky.

Therefore, organizations should build an automated data masking pipeline to provision and mask data efficiently.

This pipeline should support data in various forms, including files and relational databases.

If your build/buy decision is tilting towards a purchase, data obfuscation tools can provide core masking and profiling functions.

Our experience has led us to choose Delphix.

Author

Subir Grewal
Global Head, AWS Practice
Ness Digital Engineering

FAQs

1. What are data obfuscation techniques?

Data obfuscation is the process of making data difficult to interpret while keeping it usable, mainly for privacy and security purposes. Common techniques include encryption, masking, tokenization, and data scrambling.

2. What is an example of data obfuscation?

A classic example of data obfuscation is masking sensitive information in a company document, such as names, addresses, and social security numbers, to protect privacy while still enabling third-party vendors to complete their work.

3. What are the different types of data obfuscation?

Encryption, Masking, Tokenization, Data scrambling, Data perturbation, and Anonymization are the different types of data obfuscation.

4. What is data masking in ETL?

Data masking in ETL refers to obfuscating sensitive data during the ETL process as a part of the data warehousing or business intelligence solution.

5. What is the masking tool?

A masking tool is a software application that enables organizations to protect sensitive data. The tool works through a set of pre-defined masking rules.

Infrastructure as Code (IaC)

What is IaC?

Infrastructure as Code (IaC) transforms the manual processes of standing up data center environments, such as hardware instantiation, networking, run books, and appliance and software configuration, into automated deployment and configuration.

The IaC concept has been around for several years at startups and tech firms and is now gaining wider traction.

TechNavio cites the increased adoption of IaC as a major trend across all industries and geographies in their Global DevOps Platform Market 2018-2022 report.

Every industry is challenged by digital disrupters: firms competing on the enhanced capabilities and lower costs derived from digital innovation.

According to the 2018 IDC Whitepaper, Designing Tomorrow, “Over 67% of companies believe a digitally enabled competitor will gain a competitive advantage within the next five years.”

Traditional companies must be able to move faster at lower costs and yet continue to manage risk. Firms willing to undergo digital transformation can achieve this with IaC.

Benefits of Infrastructure as Code

The benefits of Infrastructure as Code include reduced infrastructure cost (CAPEX) and human cost (OPEX), achieved by leveraging the dynamic, self-service capabilities that IaC provides.

Increased velocity means recasting multi-step, multi-hour, manual processes—such as racking servers, loading software patches, installing services and applications, configuring networks, and enabling storage—into automated, repeatable, scalable processes performed in minutes.
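
As a small, hedged illustration of that recasting (the stack name, bucket name, and template below are invented for the example), a few lines of Python using the AWS SDK can turn “provision storage for a new environment” into a repeatable, version-controlled operation rather than a multi-step manual runbook:

```python
import json
import boto3

# A deliberately tiny CloudFormation template: one S3 bucket for a new environment.
# Resource names and the stack name are illustrative placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-dev-artifact-bucket"},
        }
    },
}

cloudformation = boto3.client("cloudformation")

# Creating the stack is now an automated, repeatable step
# that can live in source control and run in a pipeline.
cloudformation.create_stack(
    StackName="dev-environment-storage",
    TemplateBody=json.dumps(template),
)

# Wait until provisioning completes, as a release pipeline would.
cloudformation.get_waiter("stack_create_complete").wait(StackName="dev-environment-storage")
```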

When done correctly, IaC reduces risk by addressing traditional IT problems, including configuration drift, human error, inconsistencies, and loss of context.

These additional capabilities—faster dynamic infrastructure delivery and consistent configuration during the infrastructure as code software delivery cycle—allow organizations to make changes faster, more confidently, and at lower risk.

IaC also strengthens security: scanning infrastructure code for misconfigurations early in the software development cycle provides considerable cloud security cover and mitigates vulnerability risks.

DevOps Infra as Code and Digital Transformation

An excellent place to begin digital transformation is implementing IaC to facilitate the adoption of DevOps practices.

There are, however, some Infrastructure as Code challenges. Firms starting this journey must assess whether the organization has the skills and know-how to embark on it alone or needs to collaborate with skilled practitioners.

Most “not-born-in-the-cloud” firms realize they need to bring in outside resources (unfortunately, sometimes after first failing internally).

Ness has vast experience across the finance, healthcare, and telecom industries, with deep expertise in IaC technologies and DevOps practices.

We realize that even significant journeys start with a single step, so we have developed a unique Player-Coach engagement model that introduces DevOps principles and demonstrates IaC best practices through quick-win projects.

At Ness, we are agnostic (yet opinionated) about our tools. Our choices are informed by various factors and our clients’ needs.

However, we do have our favorites.

Tools for Infrastructure as Code

One such tool is Terraform, HashiCorp’s service provisioner and infrastructure orchestrator. Terraform is cloud-agnostic and supports all major clouds, both public and private.

In hybrid environments, where there are advantages to a single set of tooling, Terraform allows our practitioners to develop, validate, and roll out orchestration templates quickly. We have also compared IaC tools, and here are our views.

We implement configuration management (CM) with two tools, Salt and Ansible. Ansible focuses on simplicity, getting going quickly, making changes easy to understand, and fast organizational adoption.

We recommend Salt for organizations with greater infrastructure programming complexity.

Salt has an entirely declarative model that includes components to dynamically manage configuration and detect drift, along with the ability to layer buildouts, react to environmental signals, and make dynamic infrastructure changes in response to business demands.

These abilities come with additional complexity and a steeper learning curve, but clients with sufficient scale, compliance requirements, or complexity find great benefit in the extra features.

DevOps Infrastructure as Code – Ness Expertise

At Ness, we offer Infrastructure as Code solutions and IaC DevOps services, including deployment. Our cloud and IaC DevOps teams support transformation initiatives and demonstrate domain expertise in the following areas:

  • Infrastructure orchestration with IaC tools such as HashiCorp Terraform, and cloud-native orchestration with CloudFormation, ARM, and HEAT. The open-source Terraform Snowflake provider plugin is used to manage Snowflake accounts and resources, and Terraform workspaces are used to manage multiple isolated environments from a single Terraform configuration.
  • Configuration automation with technologies including Salt and Ansible
  • Migrating applications to the public Cloud, including re-architecting applications to become more cloud-compatible or cloud-native
  • Containerization, including extensive experience with Docker, Docker Swarm, OpenShift, Kubernetes, EKS, and GKE, plus cloud migration and hybrid cloud implementation using VMware, OpenStack, AWS, GCP, and Azure
  • Process and methodology improvements and CI/CD pipeline implementation leveraging tools such as Git, JIRA, and Jenkins, plus multi-cloud monitoring and log aggregation via Splunk, Elastic, and InfluxDB

Author

Cary Dym,
BU Head, Global Alliance Cloud and Data Sales, Ness Digital Engineering

FAQs

1. What is meant by Infrastructure as Code?

It is an approach to managing IT infrastructure in which resources are defined and provisioned using code.

2. What is an example of Infrastructure as Code?

A good example is using a tool such as Terraform to provision cloud resources in an automated, repeatable manner.

3. What are the benefits of Infrastructure as Code in DevOps?

The benefits include automation, collaboration, better agility, consistency, repeatability, and faster time to market.

4. How do you connect Terraform to Snowflake?

Use the Terraform Snowflake Provider plugin to connect Terraform to Snowflake.

5. What languages are used in IaC?

The languages used in IaC include Python, Ruby, JSON, YAML, and HCL.

A Leading GovTech Firm Future-Proofs Jury Product Customer Experience without Reinventing the Existing Legacy Service Architecture to Drive Sales

Case Study

The solution delivers a modern user experience without changing the underlying legacy architecture.

Overview

The client is a leading provider of software solutions to Federal and State governments. 90% of federal courts in the United States use their jury management solution.

Challenge

The client sought a tactical visual refresh, but looming mandates such as ADA compliance, form-factor compatibility through increased responsiveness, and end-of-service-life (EOSL) framework risks presented significant challenges. The legacy application was used by 90% of federal courts in America, yet its outdated visual appearance created marketing challenges for the sales team. Lack of ADA compliance and the absence of a responsive design jeopardized contract renewals and prevented further market penetration.

Solution

Ness enabled the creation of a user experience far better than the original while maintaining the legacy service architecture. The key to success was developing a modern, reusable visual identity and meeting compliance mandates with little to no change to the legacy service layer.

Result

A hugely successful product for Avenu Insights was modernized as a future-proof solution, empowering the sales and marketing teams to expand the Jury product footprint at the federal, state, and local levels.

A Leading Provider of Industrial Infrastructure Unifies Customer Experience Across Digital Commerce Ecosystems to Drive Up Sales

Case Study

The solution delivers a unified customer experience across ecommerce, account management, and telemetry dashboards.

Overview

The client is a leading industrial equipment manufacturer. Underpinning the client’s growth has been acquisitions in related and competing products, new product development (within the Salesforce ecosystem), and a new vision of the arc of the customer experience across all those parts.

Challenge

Despite market leadership in one slice of their vertical, the client wanted to expand market share and increase service orders by capturing the larger digital ecosystem in one unified experience.

Solution

Ness enabled an end-to-end experience that the client’s customers perceive as one cohesive journey aligned with the flagship brand while sharing strategically important inventory and telemetry data across the infrastructure management workflow.

Result

The data-sharing CX opportunities between the e-commerce experience and the maintenance, account management, and telemetry dashboards unified the customer experience, leading to a 20% increase in service part and warranty sales in the first quarter after deployment.

Navigating The Leadership Journey: A CTO’s Perspective

Description:
Listen to Peter Meulbroek, CTO of Digital Consulting at Ness Digital Engineering, as he recounts his leadership journey and current roles and responsibilities, shares insights from his life experiences, and offers practical advice on managing our lives and careers. He is in conversation with Shivadarshan Deshamudre, AVP & Global Head of L&D, Ness Digital Engineering.

Listen now

About Speaker:
Peter Meulbroek – Chief Technology Officer – Digital Consulting
Leads Digital Consulting for Cloud, Data, Design, and Salesforce. 30+ years of experience across Yahoo, Caltech, Lockerz LLC (CTO), DimensionU, WebKite, and Level.Works (Founder)

How to Combat the Top 3 AI Challenges

As many of us indulge in Netflix a little more than usual, you may have seen a documentary called Free Solo.

It’s about a rock climber’s journey to conquer the first free solo climb of Yosemite’s 3,200-foot vertical rock face without a rope. (Yes, that’s right, without a rope.)

Much like you’d come to expect from an award-winning film, it has the right balance of excitement, suspense, and fear.

As we talk to companies about leveraging Artificial Intelligence (AI), there’s often that same mix of excitement to move forward with this type of initiative, fear of failure, and suspense about whether it will deliver what is intended for their organization.

Much like the climbing expedition, when there’s careful planning around navigating those tricky spots, you too can be successful.

While the film focused on scaling the world’s most famous rock, we’re going to focus on a different kind of scaling – how to scale AI at an enterprise level successfully.

Challenges in Machine Learning & Artificial Intelligence

Scaling AI and machine learning comes with challenges. Let’s look at the most common ones, understand where organizations go wrong, and learn from those mistakes.

• Data Complexity
Enterprise data is commonly viewed as a cost rather than an opportunity. But, for many, the light bulb has turned on—there’s a drive to monetize it. However, we see enormous challenges around data quality, management, stewardship, lineage, and traceability. The multitude of data formats also adds further complexity. Often, AI and Machine Learning (ML) initiatives must ingest multiple combinations of data types with differing maturity according to how they are structured, making it complicated to get started.

• Collaboration
Effective collaboration is required for enterprises to scale their teams, and new partners may be needed as skill requirements shift. A data science team requires a mix of roles: senior and junior data scientists, data engineers, DevOps engineers, data architects, business analysts, and scrum masters/project managers. The iterative nature of data science development also differs from the traditional SDLC, leading to further gaps in process understanding.

• Existing Data Science Solutions Are Lacking
Many data science solutions in the market solve one or more aspects of data science work very well but fail to address the end-to-end problem at an enterprise level. These solutions fall into the “workbench” category: well suited for experimental work but quick to struggle when tasked with enterprise-scale use cases. With all these challenges in front of us, how do we navigate around these common pitfalls? What solution can overcome these AI problems, and what techniques can fix them?

The solution? A scalable, easy-to-implement, modular, enterprise-ready AI platform that leverages best-in-class open-source technologies. Meet NessifAI!

NessifAI – To Solve AI Challenges

This solution was designed around key AI challenges organizations typically face and how to solve them.

• Sustainability: AI systems need automated testing for data, infrastructure, model training, and monitoring to keep them in sync with real-world data. We call this closed-loop AI. The inability to automate may result in the initiative being unsustainable. A trustworthy AI platform relies on advanced MLOps and automation while incorporating techniques such as ML-assisted data curation, AutoML, and automatic model retraining to make this a sustainable initiative (a simplified sketch of the closed-loop idea follows this list).

• Agility: AI workloads demand iterative and collaborative work. With multiple teams contributing code to the same pipeline and the system changing with each check-in, traditional sprint-based release cadences are challenging to manage. What if you could remove these headaches by automating these processes, allowing AI assets such as data, features, models, code, and pipelines to be shared and reused by different personas simultaneously? NessifAI does exactly that, and it also brings standardization to the development cycle with built-in quality checks across the AI lifecycle.

• Explainability: With the myriad of regulations for companies to follow, the auditability and traceability of AI processes are imperative. Securing PII data without compromising the model’s accuracy is another essential. NessifAI provides end-to-end lineage and auditing across the lifecycle, right from data ingestion to model serving, documenting the decision logic at all stages of the AI lifecycle.

• Monetization: Monetizing data is a cornerstone of digital transformation, and AI calls for a new wave of monetizing insights. NessifAI creates a platform marketplace where all AI assets can be published and monetized for downstream consumption, allowing for an easy Google-like search for all assets and curation of assets.
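
To make the closed-loop idea mentioned under Sustainability concrete, here is a simplified sketch using generic scikit-learn, not NessifAI’s implementation; the drift signal, threshold, and data are invented for the example. A monitor compares live feature statistics with the training-time baseline and triggers retraining when they drift apart:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train(features: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    """Fit the model and remember the training-time feature means as a baseline."""
    model = LogisticRegression().fit(features, labels)
    model.baseline_means_ = features.mean(axis=0)
    return model

def drift_detected(model: LogisticRegression, live_features: np.ndarray, threshold: float = 0.5) -> bool:
    """Very simple drift signal: mean shift of live data versus the training baseline."""
    shift = np.abs(live_features.mean(axis=0) - model.baseline_means_)
    return bool((shift > threshold).any())

# Hypothetical data; a real pipeline would read from the feature store.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(0, 1, (500, 3)), rng.integers(0, 2, 500)
model = train(X_train, y_train)

X_live = rng.normal(0.8, 1, (200, 3))   # the world has shifted
y_live = rng.integers(0, 2, 200)        # fresh labels arriving from production

if drift_detected(model, X_live):
    model = train(np.vstack([X_train, X_live]), np.concatenate([y_train, y_live]))
    # In a closed-loop setup, the retrained model would be validated and redeployed automatically.
```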

Start with the Right Footing

Intended to give you the right footing and foundation, NessifAI comprises key components that leverage your data and allow you to deliver value rapidly.

• Foundry: A powerful and unified batch and streaming data platform engineered to meet the demanding enterprise workload.

• Ledger: A single pane of glass to all operations on the platform, making data easily searchable and experiments reproducible.

• Studio: A hyper-scale AI pipeline composer that allows cross-collaboration among different actors on the platform; a catalyst for rapid model building.

• Insight: A highly performant model serving platform that enables automated model rollouts and monitors model performance over time.

With any AI adventure, it is vital to set a clear objective and understand how to avoid the challenges along the way.

FAQs

1. What are 3 problems with AI?

Three common problems with AI are bias toward the data models are trained on, difficulty in interpreting deep learning models, and vulnerability to cyber attacks.

2. What are the 3 basic types of machine learning problems?

The three basic types of ML problems are supervised, unsupervised and reinforcement learning.

3. What are AI problems and techniques?

AI problems are the challenges that arise when developing or using AI systems; they can be technical, design, or implementation challenges. AI techniques are used to address these challenges and include regularization, data augmentation, ensemble methods, interpretable models, and fairness and bias mitigation.

4. How can we overcome the challenges of artificial intelligence?

Steps to overcome the challenges of AI include ensuring high-quality data that is representative of the problem to be solved, carefully selecting algorithms, providing sufficient computing power, and ensuring that AI models can continuously learn and adapt to changing environments and data.

Ness Helps O2 Czech Republic Move to the Cloud

Prague, February 10, 2023: Experts from Ness Czech have successfully migrated the online applications of the Czech telecommunications operator O2 Czech Republic from its own infrastructure to the Microsoft Azure cloud environment. The main reasons for the change were cost reduction, flexibility, and faster preparation of new projects.

“Among the largest Czech telecommunications operators, this is probably the first major customer-facing system running in the public cloud,” says Martin Silvička, CEO of Ness Czech. The goal was to move most of the critical portal solutions to the cloud. Online applications have highly variable load: at peak there are hundreds of thousands of users and hundreds of requests per second, while off-peak the demand is an order of magnitude lower. Such variability over time cannot be managed on on-premises infrastructure, where the necessary hardware and software licenses must be sized to handle peak load.

O2 Czech Republic began preparing its cloud strategy at the end of 2019. The goal was to become more agile, accelerate innovation, and bring new products to market faster. Until then, the operator had relied on running its systems on premises. Before moving to the cloud, it was necessary to demonstrate that the level of security in Azure would be at least as high and that cost savings could be achieved.

“What we appreciate about Ness is their adherence to the schedule, the smooth deployment, and the flexibility in managing the infrastructure. The entire migration took less than six months,” says Zdeněk Boháč, Director of BSS Development & Operations at O2 Czech Republic.

“The driver for moving to the cloud was to enable faster innovation. Introducing a new technology used to take us roughly six months; today, on the Azure cloud platform, we can do it in just a month. We can now respond more flexibly to customer needs and requirements,” says Martin Kožíšek, Transformation Manager at O2 CZ, adding that another important aspect of the move was the requirement that every migrated application have a positive business case. “Two years after moving the portal applications to Microsoft Azure, our assumption of a 30% saving in total cost of ownership is being confirmed.”

Ness and O2 have been cooperating on migrating applications to the cloud since 2020, when the customer application for online troubleshooting of issues and outages was moved. Under the guidance of Ness experts, O2 is now moving further parts of its IT infrastructure to the cloud.

About Ness Czech and Ness Digital Engineering
Ness Czech, a leading Czech system integrator and part of the international Ness Digital Engineering group, is among the largest IT service providers in the Czech Republic. Since 1993, it has positioned itself as a pioneer in adopting new technologies and software products. Its most significant customers include O2 Czech Republic, Komerční Banka, and ČÚZK. Ness Digital Engineering is a global provider of comprehensive information technology solutions and services.

Media contact
Kamil Pittner, Media Consultant, PRCOM, +420 604 241 482, kamil.pittner@prcom.cz

Risk On Cloud with Streaming (ROCS)

About the Solution Brief

Ness’ Risk On Cloud with Streaming is a data-first modernization approach that avoids a “Big Bang” and leverages streaming architectures on the cloud to produce near-real-time calculations and visualizations. Successful re-orchestration projects require a modern approach to building infrastructure, detailed business domain knowledge to extract business logic, and a deep understanding of legacy technologies.

ROCS on AWS simplifies the transformation of risk, pricing, and valuation architectures from end-of-day batch to streaming through re-orchestration. It is 100% cloud-based, analytics-agnostic, and streaming-enabled, connecting real-time data feeds (trade, market, reference) to the cloud for processing, distribution, and monitoring.

GET THE SOLUTION BRIEF
