The Latest in Agile Development: What’s Coming and How to Prepare for It

Agile development has become ubiquitous in the software development world, and for good reason. It helps teams work efficiently and iteratively, so they can react quickly to changing needs. But Agile methodology itself is constantly evolving. Our Chief Technology Officer Moshe Kranc and CTO Associates Phyllis Drucker and Alexandru Balmus share how developers can keep pace with the latest Agile trends and leverage them to advance in their careers.

Why is it important for developers to follow the latest Agile trends?

If an Agile technique results in proven efficiency, then it can only help a developer’s career. Being open to change, communicating a clear view of current progress and developing a closer relationship with the client are all results of following the latest Agile techniques, and they will benefit any developer’s career. Plus, if everyone is adopting a certain trend, having experience with it will definitely look good on a resume.

There’s an important article about career development by Jayne Groll, called ‘Skilling the Squad.’ It addresses the skills developers need to be successful and focuses on the fact that one area of knowledge is no longer enough. IT professionals need to become ‘T-shaped’: deep in one area, but with capabilities, or breadth, across many. Agile practices help grow that breadth.

How can developers stay abreast of the latest trends?

As with any other IT practice, there are two very easy ways to stay updated: read and get involved. Agile is a widespread practice with a large base of information available from books and online articles. At least two major professional associations (the Agile Alliance and the Scrum Alliance) make it possible to learn and engage with others at local and international conferences.

That said, once you learn, go for practices that have real-life-proven efficiency; choose a “new” approach only if it clearly (and practically) demonstrates improvement over existing ones. There’s no need to experiment all the time; if you do, be careful about generalizing. It’s always good to check the context and see what’s best suited for your situation.

What’s the best way to check the reliability of a trend before adopting it?

With so much information available about Agile and other trends, sometimes the best way to evaluate a trend’s reliability is to do your own research. Investigate the leaders behind the trend and how much, and how deeply, it is being written about. When DevOps and “The Phoenix Project” hit the industry, for example, one look at the authors and their backgrounds indicated that the trend was coming from well-known leaders in the service management profession.

But try to avoid being dogmatic about the requirements and ceremonies of a particular Agile practice; see what has worked well for others and consider your specific needs.

As Agile has taken flight, it is finding wider applications across organizations. What are some uses you’ve seen?

Project Management: Many project teams are starting to leverage Agile to break large initiatives into smaller units of work, similar to Agile’s concept of the “Minimum Viable Product.” This way, they can engage in smaller projects with smaller budgets and perform mid-project course corrections, resulting in outcomes that are more aligned with changing business needs.

IT Service Management (ITSM) Initiatives: In the past, these were often seen as very large initiatives that required significant investment and sometimes failed due to their lengthy implementation times. Organizations are now using Agile practices to implement ITSM iteratively. In fact, the recently released ITIL4 framework has two Agile principles at its core: start where you are and progress iteratively.

Is there anything else developers should keep in mind?

Remember, Agile is not the last trend we’ll see in development or other areas of IT. It’s critical for IT professionals to continually educate themselves, network with others through conferences, and keep both an innovative and open mind. People who don’t read and grow will stagnate. Continuous learning needs to be a part of your profession, especially if you want to keep up with the rate of change in the industry. Innovation comes from knowledge and curiosity, and a professional’s best asset is that curiosity.

Ness offers expertise in a number of innovative areas, and our ability to help our customers implement solutions that make their businesses more competitive relies on having a team of people who are keeping track of current developments and inventing innovative solutions to address the rapidly changing needs of the industry.

 

 

Approaches to Successful Digital Transformation

Digital Transformation is a continuous journey, and companies that expect to survive and thrive in the digital era are transforming themselves to offer customers a connected digital experience. Better data, good customer and user experience, more revenue, and improved efficiencies are core objectives for businesses. So, how are companies managing one or even multiple projects to achieve transformational improvements? Watch this 30-minute webinar to learn experience-based insights from enterprise companies driving digital transformation.

Based on survey results, Doug Barth, founder of Gatepoint Research, and Mark Lister, Chief Digital Officer of Ness Digital Engineering, share their observations on how companies are investing in digital transformation programs. The survey asked respondents to report on several factors.

The survey reports that in 2019, more than half of respondents had projects underway in both big data and ML/AI and in customer/user experience design. The right business drivers, however, are essential for continued business success and project growth. Some 68% of respondents said cost reduction or efficiency gains were making their business smarter, 60% cited revenue growth, and 50% aimed to do a better job of engaging customers.

While digital transformation is rewriting business models, two thirds of respondents are concerned about staying abreast of, or ahead of, the competition; only 14% are confident they are keeping up or will be unaffected. The digital transformation journey is not an easy one, however, as it involves a myriad of challenges. Asked about the transformational digital business/technology projects they had undertaken in the past, more than half of respondents reported challenges involving scope creep and cost overruns, and another 42% felt they lacked suitable expertise and the tools to encourage the adoption of new digital best practices.

The survey reveals that the four most important qualities of a strategic partner to assist in their digital endeavors are innovative thinking, a flexible approach, relevant experience, and an agile and collaborative model. The webinar also covers how high-value strategic partners, like Ness Digital Engineering, can help companies with innovative thinking outside of institutional boundaries to better compete in today’s environment.

 

How Machine Learning Can Increase the Autonomy of Electric Vehicles

The automotive domain is evolving fast. Five years ago, the software written for ECUs was simple, some might even say “dumb”; today, Machine Learning (ML) has arrived and will play an important role in how vehicles behave in the future. Autonomous driving is not a distant dream but tomorrow’s reality, as intelligent vision, radar and lidar systems become part of standard vehicle equipment. But applications for ML don’t stop there.

One of the biggest challenges for Electric Vehicles (EVs) is battery autonomy, measured both as the maximum number of charging cycles until full degradation and as the remaining kilometers until the next charge. How can this autonomy be extended by applying knowledge from the applied ML domain?

Data-driven prediction of the battery life cycle, such as the one presented in [R1], can help with the first challenge: determining the maximum number of charging cycles until full degradation. Data collected across numerous charging situations, including fast charging and varying cell temperatures, yields currents and voltages that are fed to mathematical models able to predict the battery degradation curve over time. Digital representations of the physical batteries can be used to speed up the prediction process. The digital copy and the original device start from the same set of parameters when they leave the production line; over time, the digital copy evolves based on data collected from the physical battery in the field. Running the ML models on this data many times (i.e., repeatedly simulating the same historical usage of the batteries under the same conditions) produces a prediction of the battery’s decay.
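
The degradation-prediction idea above can be sketched in a few lines: fit a simple model to early-cycle capacity measurements and extrapolate to an end-of-life threshold. This is a hedged illustration only; the linear fade model, the 80% end-of-life convention, and the synthetic data are assumptions made for the sketch, not the actual models used in [R1].

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def predict_cycle_life(cycles, capacities, eol_fraction=0.8):
    """Extrapolate measured capacity fade to the end-of-life threshold
    (here: 80% of initial capacity, a common convention)."""
    a, b = fit_line(cycles, capacities)
    eol_capacity = capacities[0] * eol_fraction
    return (eol_capacity - a) / b  # cycle at which the fitted line crosses EOL

# Synthetic early-cycle measurements: 0.01% capacity loss per cycle.
cycles = list(range(0, 200, 10))
capacities = [1.0 - 1e-4 * c for c in cycles]

print(round(predict_cycle_life(cycles, capacities)))  # cycles until 80% capacity
```

A real pipeline would replace the linear fit with the richer feature-based models described in [R1], but the structure (measure early cycles, fit, extrapolate) is the same.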

A second challenge involves determining the maximum distance until the next required charge based on driver habits. These include how a driver accelerates and brakes, but also driving routes and daily routines, e.g., the number of stoplights on the way to work, the number of stops, or weekend drives. This profile can be learned (in the ML sense) and, at automation Level IV [R2], lead to automatic adjustments in how the batteries are used. We can imagine taking advantage of crowd-sourced driving experiences, i.e., millions of driving patterns, and being notified, e.g., “you can improve your car’s remaining battery life if you do X instead of Y.” Combining individual driving profiles with the shared general driving experience can optimize battery usage.
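
As a rough illustration of profile learning, the sketch below derives a per-driver energy-consumption profile from trip logs, uses it for range estimation, and compares it against a crowd-sourced fleet average to generate an efficiency hint. The field names, numbers, and the simple kWh/km profile are illustrative assumptions, not a real OEM data model.

```python
def consumption_per_km(trips):
    """Learn a driver's average energy use (kWh/km) from (distance_km, energy_kwh) logs."""
    total_km = sum(d for d, _ in trips)
    total_kwh = sum(e for _, e in trips)
    return total_kwh / total_km

def remaining_range_km(battery_kwh, profile_kwh_per_km):
    """Estimate remaining kilometers from the charge left and the learned profile."""
    return battery_kwh / profile_kwh_per_km

def efficiency_hint(driver_profile, fleet_profile, tolerance=0.10):
    """Crowd-sourced comparison: nudge the driver if they burn noticeably more
    energy per km than the fleet-wide average."""
    if driver_profile > fleet_profile * (1 + tolerance):
        return ("You use more energy per km than similar drivers; "
                "smoother acceleration could extend your range.")
    return None

trips = [(12.0, 2.2), (30.0, 5.1), (8.0, 1.6)]   # (km, kWh) per trip
profile = consumption_per_km(trips)
print(round(remaining_range_km(40.0, profile)))  # remaining km on a 40 kWh charge
print(efficiency_hint(profile, fleet_profile=0.15))
```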

Going forward, the OEM can intervene and optimize battery usage by actively limiting the driver’s actions. Consider, for example, a carsharing operator that may want to limit the vehicle’s maximum acceleration, the recuperation percentage (regenerative braking is an energy recovery mechanism that slows a vehicle by converting its kinetic energy into a form that can be used immediately or stored until needed [R3]), or the number of fast charges, in order to protect the vehicle’s batteries and extend their useful life. All of this is possible if the operator has access to the driving profiles and behaviors produced by applied ML models and applies constraints in the form of parametrized data sent over the air to the vehicles. Drivers can see this as a money-saving mode activated by default, compared to, say, a sports mode for which an extra fee would unlock the desired driving experience.

References:

[R1] “Data-driven prediction of battery cycle life before capacity degradation”, Kristen A. Severson, Peter M. Attia, Norman Jin, Nicholas Perkins, Benben Jiang, Zi Yang, Michael H. Chen, Muratahan Aykol, Patrick K. Herring, Dimitrios Fraggedakis, Martin Z. Bazant, Stephen J. Harris, William C. Chueh & Richard D. Braatz, Nature Energy volume 4, pages 383–391 (2019)

[R2] “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles”, SAE International, J3016_201806

[R3] “Regenerative brake”, Wikipedia

 

S&P Global Taps Ness Digital Engineering to Build New Extended Talent Center in India

Collaboration expands long-standing partnership

Hyderabad, India – November 15, 2019 – Ness Digital Engineering is partnering with S&P Global, a leading provider of ratings, benchmarks, analytics and data to capital and commodity markets worldwide, to develop its new Extended Talent Center in Hyderabad, India. With this collaboration, Ness Digital Engineering, a provider of digital transformation and custom software engineering services, builds on its decade-long partnership with S&P Global. The new Extended Talent Center will expand S&P Global’s software delivery capabilities through a center of excellence that further accelerates software development and testing services and provides flexibility to support additional engineering demands.

To help create the new Center, Ness will build and manage project teams, bolstering S&P Global’s ability to address business needs and opportunities in a highly-agile manner. Based on demand, S&P Global will continue to collaborate with Ness to help support its growing technology needs across areas such as order to cash, legacy system modernization, cloud migration, and platform and data as a service from an engineering execution perspective.

“For more than 10 years, S&P Global has partnered closely with Ness Digital Engineering to establish industry-leading solutions,” said Marcus Daley, CTO Ratings Technology, S&P Global. “Over the course of our relationship, Ness has developed a deep understanding of our business, technology operations and software applications. As a result, the Ness team can immediately help us facilitate the productivity of additional talent at this new Center.”

“Ness and S&P Global have a strong, proven track record cultivating talented teams that provide strategic value,” said Paul Lombardo, Ness Digital Engineering CEO. “The new Extended Talent Center is a testimony to our partnership, which began more than 10 years ago. Together, Ness and S&P Global built the Orion Center, a 700+ member team, which S&P Global effectively leverages to deliver enhanced technology and innovation capabilities for its clients. We look forward to continuing to contribute to S&P Global’s success with the creation of this new Center.”

The new Extended Talent Center will operate out of Ness’s current location in Mindspace, Hyderabad. The company will grow its workforce, adding approximately 200 highly-talented engineers over the next six months to support S&P Global’s business needs.

 

About Ness Digital Engineering
Ness Digital Engineering designs, builds, and integrates digital platforms and enterprise software that help organizations engage customers, differentiate their brands, and drive profitable growth. Our customer experience designers, software engineers, data experts, and business consultants partner with clients to develop roadmaps that identify ongoing opportunities to increase the value of their digital solutions and enterprise systems. Through agile development of minimum viable products (MVPs), our clients can test new ideas in the market and continually adapt to changing business conditions—giving our clients the leverage to lead market disruption in their industries and compete more effectively to grow their business. For more information, visit ness.com.

  

About S&P Global
S&P Global is a leading provider of transparent and independent ratings, benchmarks, analytics and data to the capital and commodity markets worldwide. The Company’s divisions include S&P Global Ratings, S&P Global Market Intelligence, S&P Dow Jones Indices and S&P Global Platts. S&P Global has approximately 21,000 employees in 33 countries. For more information, visit www.spglobal.com.

  

Media Contacts
Vivek Kangath
Global Manager – Corporate Communications
Ness Digital Engineering
[email protected]
Mobile: +91 97425 65583

 

Perspectives from Ness’s CTO: How to Leverage Metadata for Data Lineage and Avoid Cloud Vendor Lock-in

In a previous note, also published on Ness’s Blog, I wrote about Platform Modernization and Artificial Intelligence – two topics we regularly speak with our customers about as they consider how to further extend the productivity of their digital platforms. “Data” and “Cloud” are also frequently part of our discussions. Below are some thoughts in particular about using metadata for data lineage and avoiding cloud vendor lock-in. 

Metadata for Data Lineage

Data quality is an issue for most organizations that are trying to monetize their data; e.g., by extracting insights that can move the needle for their business. If the data that is input to a predictive algorithm is “dirty” (e.g., missing or invalid values), then any insights produced by that algorithm cannot be trusted.

To achieve data quality, each data value must have a clearly defined lineage; i.e., an enterprise user should be able to determine where it came from, what transformations it underwent along the way, and where it is going…what other data items it affects. Data lineage provides an enterprise with many benefits; e.g., the ability to perform impact analysis and root-cause analysis by tracing lineage backwards (to find all data that influenced the current data) or forwards (to identify all other data that is impacted by the current data) from a given data item.
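
A minimal sketch of the backward/forward tracing described above, assuming lineage metadata is modeled as a directed graph (the table names and flows below are hypothetical, not a specific lineage tool’s schema):

```python
from collections import defaultdict

class LineageGraph:
    def __init__(self):
        self.downstream = defaultdict(set)  # source -> data items derived from it
        self.upstream = defaultdict(set)    # derived item -> its source data items

    def add_flow(self, source, target):
        """Record that `target` is derived from `source` (e.g., via an ETL step)."""
        self.downstream[source].add(target)
        self.upstream[target].add(source)

    def _walk(self, start, edges):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for nxt in edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    def trace_back(self, item):
        """All data that influenced `item` (root-cause analysis)."""
        return self._walk(item, self.upstream)

    def trace_forward(self, item):
        """All data impacted by `item` (impact analysis)."""
        return self._walk(item, self.downstream)

g = LineageGraph()
g.add_flow("orders_raw", "orders_clean")
g.add_flow("orders_clean", "revenue_report")
g.add_flow("fx_rates", "revenue_report")

print(sorted(g.trace_back("revenue_report")))  # ['fx_rates', 'orders_clean', 'orders_raw']
print(sorted(g.trace_forward("orders_raw")))   # ['orders_clean', 'revenue_report']
```

The interesting work in a real system is populating `add_flow` calls from the lineage-collection techniques discussed below; the traversal itself stays this simple.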

It sounds great, but where does data lineage information come from? There are several competing techniques to collect lineage metadata, each of which has its strengths and weaknesses:

  • Data Similarity Lineage: This approach builds lineage information by examining data and schemas without actually accessing your code. On the one hand, it works regardless of your coding technology, because it analyzes the resulting data no matter which technology generated it. But it has several glaring weaknesses; e.g., it cannot detect lineage for processing that runs only rarely, such as end-of-year jobs.
  • Decoded Lineage: This approach focuses exclusively on the code that manipulates the data, providing the most accurate, complete and detailed lineage metadata, as every single piece of logic is processed. But, it has some weaknesses; e.g.:
    • Code versions change over time, so your analysis of the current code’s data flow may miss an important flow that has since been superseded.
    • The code may be doing the wrong thing to the data. For example, suppose your code stores personal identification information in violation of GDPR and despite clear requirements to the contrary from the product manager. A Decoded Lineage tool will faithfully capture what the code does without raising a red flag.
  • Manual Lineage Mapping: This approach builds lineage metadata by mapping and documenting the business knowledge in people’s heads; i.e., talking to application owners, data stewards and data integration specialists. The advantage of this approach is that it provides prescriptive data lineage – how data should flow as opposed to how it actually flows after implementation bugs. But, because the metadata is based on human knowledge, it is likely to be contradictory (because two people disagree about the desired data flow) or partial (if you do not know about the existence of a data set, you will not ask anyone about it).

As you can see, there is no magic bullet – each approach has its strengths and weaknesses. In Ness’s experience, the best solution combines all three approaches:

  • Start with Decoded Lineage.
  • Augment with Data Similarity Lineage to discover patterns in the database.
  • Augment with Manual Lineage Mapping to capture how the data flows were supposed to be implemented. 

Avoiding Cloud Vendor Lock-In

Many of Ness’s customers are in the process of transitioning some parts of their business to the Cloud. Technology has overcome early concerns about the Cloud’s privacy and reliability, and the Cloud vendors provide a very tempting tool stack with capabilities that may be hard for you to implement on your own; e.g., serverless processing, elasticity.

At the same time, Ness reminds our customers that vendor lock-in to a specific Cloud provider is still vendor lock-in, no matter how financially or technically attractive it is. Down the road, you may find that you need to switch Cloud providers; e.g.:

  • Suppose you run a supermarket chain using Amazon Cloud, and Amazon one day decides to compete in your space. You may not be comfortable at that point with Amazon storing your sensitive customer data.
  • Some Cloud products have a “hockey stick” pricing model, where the price rises precipitously once you cross a certain size or performance threshold.
  • A Cloud provider could one day decide to discontinue a product you find essential, forcing you to look for an equivalent product on another vendor’s Cloud. Remember Google Search Appliance, a mainstay in many companies for enterprise search, which Google discontinued after 9 years.
  • Privacy regulations could force you to move personal data from a Cloud vendor to your own on-premise private cloud.

What can you do to avoid Cloud vendor lock-in? Here is some advice Ness gives its customers:

  • Wherever possible, use the Cloud as infrastructure rather than as a platform. At the infrastructure level, you are using the Cloud to provide you with hardware, so you can deploy and run your own containers. In that case, your dependence on a specific Cloud vendor is minimal. On the other hand, using the Cloud as a platform means you are using higher-level functionality like databases and elasticity frameworks that are far less standard across Cloud vendors.
  • Wherever possible, stick to standard SQL, which is fairly uniform across all Cloud vendors, and avoid non-standard SQL extensions or stored procedures, which can be hard to port to other databases.
  • Use features that are supported by all Cloud providers; e.g., serverless computing. The API may be different from vendor to vendor, but this will require a relatively painless conversion effort. Avoid features that exist only in one vendor’s platform; e.g., a proprietary machine learning development platform.
  • Choose commercial products that support all the major Cloud providers, so you can easily move from one vendor to another.
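
As a small, hedged illustration of the “stick to standard SQL” advice above, the query below uses only plain aggregates, GROUP BY and ORDER BY, which port across cloud databases; it is shown here against Python’s built-in sqlite3, with made-up table and column names.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "EU", 100.0), (2, "EU", 50.0), (3, "US", 75.0)],
)

# Plain aggregate + GROUP BY + ORDER BY: no vendor extensions, no stored
# procedures, so the same statement runs on any standard SQL engine.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders "
    "GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('EU', 150.0), ('US', 75.0)]
conn.close()
```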

Where feasible, consider using multi-Cloud APIs that hide the differences between Cloud platforms. If you are using the Cloud as infrastructure, there is OpenStack, a free and open source software platform for cloud computing, which consists of interrelated components that control hardware pools of processing, storage and networking resources throughout a data center. If you are using the Cloud as a platform, consider Cloud Foundry or OpenShift, each of which provides a uniform Cloud platform on any Cloud vendor’s infrastructure.

So, when should you use a Cloud provider’s vendor-specific technology? When it improves scalability, elasticity, availability, resilience or reliability. These are features of the underlying infrastructure rather than the application, and you should take advantage of your Cloud vendor’s capabilities and managed services rather than trying to achieve them yourself. While this may add some degree of vendor lock-in, the benefits far outweigh the risks, and porting to a different Cloud vendor’s infrastructure is a manageable risk.

Translating Big Data Trends into Useful Solutions

Data is everywhere, and there’s a lot of it. How can we best leverage it? In an article for Toolbox, Mircea Velicescu, VP of Delivery and global practice head of Big Data & Analytics at Ness Digital Engineering, shares his perspective on the next evolutions in Big Data & Analytics and the factors to consider when companies are choosing the best Big Data solutions to meet their needs.

Click here to read the full article.

On Data Analytics and Its Trends

Tomáš Mužík, our Head of Delivery, reveals in this interview why data analytics is worth pursuing, where its potential pitfalls lie, and where the field is heading.

Given the enormous amount of data that companies generate in today’s digital age, extracting useful insights from it is an obvious opportunity. Data Analytics is the discipline that uses various methods to derive valuable information from data and helps companies operate more efficiently.

What types of companies is data analytics suitable for?

Essentially, it can be used in any company, because today there is practically no business whose activities leave no digital footprint. If I had to name sectors where data analytics clearly matters, I would certainly mention banks and insurance companies, and financial-services firms in general.

It can also be interesting for manufacturing companies, which hold technical data in addition to transactional data. That data can be processed and mined; for example, to identify machines likely to fail in the near future so that they can be repaired, or their parts replaced, in advance, which naturally saves costs.

All network businesses, meaning telecommunications and energy companies, are suitable clients as well. I would also add public administration, which holds an enormous amount of data about citizens and their life situations.

How can data analytics be introduced effectively?

Most companies already have data analytics implemented in some form. In practice, I almost never encounter a true greenfield situation, i.e., a company doing nothing at all with its data.

There is a saying: think big, start small. You need a clear overall concept of where you want to end up. Data analytics is not primarily about the technologies you use, how you build the data model, or how often you load data; it is about the teams and processes built around the data, which are able to figure out what to look for in it.

Who should have the final say during implementation?

Most data-analytics activity focuses on two areas. The first is the regulatory agenda, primarily in the financial and utilities worlds. The second is the analysis of customer behavior, driven by the effort to offer customers the right products, extend the time a customer stays with the company (for example, keeps buying), acquire new customers, and maximize margin.

For the regulatory agenda, management naturally has the final say. For the second agenda, it is more likely Sales or Customer Relationship Management.

Where can companies run into trouble when adopting data analytics?

Unquestionably, data availability and quality. There are plenty of frameworks for Master Data Management and Data Governance, for cleaning data, keeping it at a reasonable quality, and so on. But few organizations have these procedures in place broadly enough to cover all the risks.

Any information I mine is only as good as the input data. Yet the cost of managing data is quite high, and few account for it when launching a data-analytics project.

What benefits come from using data analytics?

According to all the textbooks, better decision-making. That is a bit of a cliché, so let me offer something concrete.

In customer analytics, one benefit is a markedly higher conversion rate in targeted marketing campaigns. If you correctly select whom to address from your customer base, you get a better response, measured as the conversion of people actually buying your product, than if you leave the process unmanaged and shoot from the hip at random.

In the regulatory area, it is about reducing all kinds of risk. One result, for example, is better auditability.

I recently heard about an excellent example of optimizing energy consumption in data centers. Using unsupervised learning (a method where you do not tell the computer what to look for or what to learn; it discovers relationships on its own, here through a deep neural network), the system monitored data-center utilization and then switched resources (servers, disk arrays, etc.) on or off accordingly. In a short time it found a stable model that did not affect the quality of service for clients, yet by selectively shutting down unused resources it cut energy consumption by 30%, a significant saving for data centers, where consumption is truly enormous. That is an interesting benefit.

How can processes be optimized with data analytics?

As I said, every business activity today leaves a digital footprint. Take something as simple as invoice delivery. You know when the invoice arrived and that someone scanned it and passed it on; everything carries timestamps. From the transaction logs you can trace the path an invoice or any other document takes and determine whether that path is optimal or can be improved. This is an area where data analytics is not yet fully exploited; in my view, it is a direction worth considering for the future.
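
This kind of analysis can be sketched as a small piece of process mining: given timestamped events for one document, compute how long it spent between consecutive steps; the slowest transitions point to where the process can be improved. The event names and timestamps below are invented for illustration.

```python
from datetime import datetime

def step_durations(events):
    """Given (step_name, timestamp) events for one document, return how many
    hours the document spent between consecutive steps."""
    ordered = sorted(events, key=lambda e: e[1])
    return [
        (a[0] + " -> " + b[0], (b[1] - a[1]).total_seconds() / 3600)
        for a, b in zip(ordered, ordered[1:])
    ]

invoice_log = [
    ("received", datetime(2019, 11, 4, 9, 0)),
    ("scanned",  datetime(2019, 11, 4, 11, 30)),
    ("approved", datetime(2019, 11, 6, 10, 0)),
    ("paid",     datetime(2019, 11, 8, 16, 0)),
]

for transition, hours in step_durations(invoice_log):
    print(f"{transition}: {hours:.1f} h")
```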

What are the trends in data analytics?

I would mention attempts to use unstructured data, although real business cases are still scarce.

Then there is the sensible deployment of machine learning, as in the data-center consumption example. For many tasks, however, machine learning is unsuitable because it lacks a description of causality. To illustrate, consider an insurance company with some fraud hidden in its pile of claims.

You need to identify the suspect claims and hand them to people who investigate each case and decide: fraud, or not fraud. Those people need to know why you think it is fraud. A neural network will not tell you that; it simply produces the answer. It “thinks” so, but not because it knows facts that clearly point to fraud. So treat trends with a little caution and deploy them where they make sense.

Self-service data analytics is also worth mentioning. More and more, companies are trying to shift both the capability and the responsibility for data analysis to end users, rather than leaving it to their IT department or some external organization. It has its benefits, typically the speed of processing and of interpreting information. I would call it the democratization of Data & Analytics.

Or take the Internet of Things (IoT). I have a large number of sensors roaming around the country or the world; I gather their data in one place and mine it. Cloud-based processing is the typical pattern here, i.e., the trend of Data & Analytics in the cloud.

Crowdsourcing is interesting as well. I know a company in the lending business. It has credit-risk models that tell it whom it can lend to and whom it would rather not. Over the years it has operated, these models have become well tuned. The company wanted to move further, so it used exactly this crowdsourcing.

It announced a worldwide competition, paid of course, for whoever could come up with the model that best captures the profile of “non-payers.” It gave entrants access to anonymized data on customers who paid and on those who had trouble repaying. It worked, and the company really did improve its model.

Any closing thoughts?

Technological tools are fine, and much can be done in data analytics, but an enormous amount depends on the inventiveness of the people who work with those tools in a given company, and on their motivation.

I once saw a project where a wonderful data warehouse was built, covering essentially all of the company’s data sources. The first two agendas were implemented, controlling reports and data reconciliation between the accounting and operational systems, plus some marketing work, and that was the end of it.

The people who launched the project were full of enthusiasm; unfortunately, when they left the company, the continuity broke. In the end, everything comes down to people. Only through them can technologies and methods become truly useful.

Part 2- Building a Solution for Real-Time Payments

In the first part of this blog series, I outlined some of the payment industry trends. In this second part, I am excited to share a few insights about the challenges in supporting faster payments and a solution approach for real-time payments.

The Faster Payments Landscape

Faster Payments, aka Real-Time Payments, have been gaining steady adoption around the globe. In the U.S., TCH (The Clearing House) introduced Real-Time Payments (RTP) in 2017; since then, several financial institutions, FinTechs and other stakeholders have adopted the platform, accounting for over $250B worth of transactions in 2018.

Real-time payment apps such as Zelle and Venmo have been popular for some time now; however, these are consumer-facing apps for P2P payments, although the underlying platforms, such as Visa Direct and Mastercard VocaLink, can handle B2C payments. TCH and some 40 such instant payment rails worldwide are suited to B2B and B2C payments, which carry the complexities of cross-border and cross-currency payments, high-value transactions, multiple Methods of Payment (MOP), account switching, payment re-routing, and handling a multitude of clearing schemes, protocols, channels, etc.

The U.S. isn’t the only country – or even the first – to come up with a faster payment scheme. South Korea ushered in fast payments in the early 2000s, followed by Malaysia, Iceland, and South Africa. The United Kingdom became the first major economy to launch Faster Payments in 2008, followed by China, India, Brazil, Mexico and Singapore. The European Union launched SCT Inst for Credit Transfers, and then rolled out the TIPS (Target Instant Payment Settlement) platform in 2018. Australia, too, started the NPP Service in 2018 for real-time payments.

Faster Payments and an enhanced infrastructure are enabling financial institutions and FinTechs to bring new products and services to the marketplace and create new revenue streams, such as providing intra-day loans to corporations and offering liquidity management services. The drivers for RTP adoption include technology advancements with smartphone and payments linked with social apps, merchants’ desire to receive funds fast and reduce fraud, and customer expectations of an immediate settlement with no transaction fees.

Challenges in Supporting Faster Payments

Traditionally, Treasury Operations teams at banks have controlled payments through manual processes involving fixing errors, making liquidity arrangements, bypassing cutoff requirements, etc. The underlying systems are slow, batch-driven, and cannot provide instantaneous, 24/7/365 processing.

The challenges are enormous: being highly available and always-on, scaling up or down with the rate of incoming RFPs (request-for-pay), meeting the stringent SLAs mandated by the RTP payment rails, predicting liquidity needs within certain limits, integrating with core banking platforms for account hard-postings and balances, selecting the MOP based on a multitude of rules, enriching the payments data as required at various steps of the flow, providing an omnichannel experience across devices, and checking with back-end systems for fraud detection, sanctions, and AML.

Financial institutions and FinTechs started integrating and consolidating their back-end systems into "Payment Hubs." However, most of these were built 15-20 years ago and do not satisfy the requirements and SLAs demanded by the new wave of real-time payments platforms.

A Solutions Approach

The problem lends itself well to a solution based on Domain-Driven Design and microservices. Each piece of functionality mentioned in the previous section can have a clear bounded context. For example, the Method of Payment selection can be a domain that serves only one purpose: given the details of a payment request, determine the form of payment based on a set of rules embedded within the context. Similarly, Sanctions can be an autonomous domain that determines whether values in certain fields of the payment request, e.g., the destination country, appear on a sanctions list. Core banking or legacy accounting systems can be front-ended with a separate domain that checks for limits, balances, and liquidity, and makes hard-postings to accounts. A Routing domain may be designed to enrich the payments data to include any intermediaries for the payment, e.g., a cross-border payment may be re-routed through a correspondent bank in the destination country.
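As a rough illustration of such bounded contexts, the Python sketch below shows a MOP-selection domain and a Sanctions domain, each answering exactly one question. The field names, thresholds, and placeholder country codes are invented for the sketch, not taken from any real scheme:

```python
from dataclasses import dataclass

# Hypothetical, simplified payment request; the fields are illustrative.
@dataclass
class PaymentRequest:
    amount: float
    currency: str
    destination_country: str
    urgent: bool

# Placeholder country codes standing in for a real sanctions list.
SANCTIONED_COUNTRIES = {"XX", "YY"}

def select_method_of_payment(req: PaymentRequest) -> str:
    """MOP-selection domain: applies only its own embedded rules."""
    if req.urgent and req.amount <= 100_000:
        return "RTP"   # real-time rail for urgent, sub-limit payments
    if req.currency != "USD":
        return "WIRE"  # cross-currency goes over wire transfer
    return "ACH"       # default batch rail

def sanctions_check(req: PaymentRequest) -> bool:
    """Sanctions domain: answers a single yes/no question."""
    return req.destination_country not in SANCTIONED_COUNTRIES
```

Because each function depends only on the request it receives, each domain can be deployed, scaled, and evolved independently of the others.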

Thus, stateless microservices can be designed for about 15-20 autonomous domains, each defining its interface API and data model, usually a subset of an enriched payments message defined by ISO 20022, a standard for the payments industry. The solution needs to take into consideration traditional methods of payments, e.g. ACH and wire-transfers, and not just real-time payment rails such as TCH and TIPS.
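To make the shared-data-model idea concrete, here is a minimal sketch of a message type loosely inspired by ISO 20022 credit-transfer fields. The field names and the chosen subset are simplifying assumptions, not the standard's actual element names:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative subset of an enriched payments message, loosely inspired
# by ISO 20022 credit-transfer fields (names are simplified assumptions).
@dataclass
class PaymentMessage:
    msg_id: str
    amount: float
    currency: str
    debtor_account: str
    creditor_account: str
    # Fields filled in ("enriched") by later domains in the flow:
    method_of_payment: Optional[str] = None  # e.g. "RTP", "ACH", "WIRE"
    intermediary_bank: Optional[str] = None  # set by the Routing domain

def route(msg: PaymentMessage, correspondent: str) -> PaymentMessage:
    """Stateless Routing domain: returns an enriched copy of the message
    rather than mutating shared state."""
    return PaymentMessage(
        msg.msg_id, msg.amount, msg.currency,
        msg.debtor_account, msg.creditor_account,
        msg.method_of_payment, intermediary_bank=correspondent,
    )
```

Each domain's API accepts only the subset of this message it needs and hands back an enriched copy, which keeps the services stateless.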

Once the domains and microservices have been designed, the problem is one of coordination: how do these microservices interact with each other? One option is an orchestration layer at the top, the traditional way of handling interactions, with a central controller managing request-response with the various microservices. Given the asynchronous nature of the problem described in the previous section and the stringent SLAs for faster payments, I tend to think an event-driven, reactive architecture better suits the needs of the solution. Each microservice consumes events and produces events back into the event stream, without the need for central orchestration. Building orchestration logic in a middle layer such as an ESB is going out of style in modern, containerized applications.
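The choreography idea can be sketched in a few lines of Python. This is a toy in-memory event stream; a production system would use a durable log such as Kafka, and the event names and payload fields here are invented for illustration:

```python
from collections import defaultdict, deque

class EventStream:
    """Minimal in-memory event stream: no central orchestrator; each
    service reacts to events and publishes new ones."""
    def __init__(self):
        self.handlers = defaultdict(list)
        self.queue = deque()

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        self.queue.append((event_type, payload))

    def run(self):
        while self.queue:
            event_type, payload = self.queue.popleft()
            for handler in self.handlers[event_type]:
                handler(payload)

stream = EventStream()

# Each "microservice" consumes one event type and emits the next.
def sanctions_service(payment):
    payment["sanctions_ok"] = payment["country"] not in {"XX"}
    stream.publish("sanctions.checked", payment)

def mop_service(payment):
    if payment["sanctions_ok"]:
        payment["mop"] = "RTP" if payment["amount"] < 100_000 else "WIRE"
        stream.publish("mop.selected", payment)

stream.subscribe("payment.received", sanctions_service)
stream.subscribe("sanctions.checked", mop_service)
```

Notice that no component knows the overall flow: the sequence "sanctions check, then MOP selection" emerges from which events each service subscribes to, which is exactly what makes adding or replacing a step cheap.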

A hybrid architecture combining these two paradigms is also possible. In either case, a number of open-source technologies are available to implement these architectures with the desired scalability and fault-tolerance.

While infrastructure rationalization and modernization are underway, and new open banking standards and APIs are enabling collaboration between unusual partners to develop new value propositions, Ness brings its leading-edge microservices and event-driven architecture expertise and hard-core engineering capabilities to help clients migrate away from monolithic architectures toward flexible, highly scalable business solutions. We also help create an end-to-end data platform architecture, integrating multiple payment gateways, systems, and channels, so a company can build a 360-degree view of its customers.

Click to Read Part 1 – Payment Industry Trends

Payment Industry Trends – Part 1

Digital is changing the way consumers buy and transact, and thereby influencing how merchants sell their products. Modern consumers expect digital commerce experiences which are personalized, efficient, reliable, frictionless, convenient and secure. They want to engage anytime, anywhere in an omni-channel experience. This evolution of smart commerce has disrupted the payment space and has ushered in a new era of digital payments.

In Part 1 of this two-part blog post series, I’ll share observations about some of the payment industry trends:

Digital Payment Methods

Physical-world transactions are transitioning to online, providing convenience and tangible benefits to consumers. Online commerce and payments are now moving onto smartphones, enabling fast, simple and secure mobile payments. Moreover, payments via wearables, cars, and appliances are also on the horizon. As IoT technology evolves, the number of device options increases along with it, pushing businesses to adopt a comprehensive, multichannel payment solution.

New digital payment methods are emerging, from contactless cards and contactless mobile payments to P2P payments, in-app payments, stored-value cards, gift cards, and more. The payment ecosystem is growing more complex, and big changes are also coming in instant payments, open banking, and the second Payment Services Directive (PSD2).

Rise of Mobile Wallets:

With the growing Millennial and Gen X population, there's an upward trend in mobile payments through providers such as Google Pay, PayPal, Alipay, WeChat Pay, Apple Pay, Amazon Pay, Samsung Pay and Android Pay. These mobile wallets are now widely accepted by businesses of all sizes, since they let customers move away from carrying a physical card while still making payments, earning loyalty points and redeeming coupons. For example, mobile wallets from financial institutions and from retailers like Starbucks and Walmart (which offer their own digital wallets for both online and in-store shopping) come with perks like promotions, cash-back rewards, special offers, coupons and loyalty points.

Going forward, brands are working hard to orchestrate the right offer, loyalty and redemption experience for customers. There is also potential for offers and rewards to become more predictive; for example, based on the food a customer is buying at that moment, the rewards engine could tell the customer, "this is the best way to spend your rewards and offers on this item." Technologies like blockchain could enable more efficient use of broader, open platforms for rewards and loyalty, as well as more consortiums of retailers working together.

Unlocking the value of payment data:

Payment players are sitting on a goldmine of payment transaction data. The ability to harness that data and turn it into actionable insights about a customer's spending habits, payment preferences and future needs is key to better understanding customers, driving growth and assessing financial risk. Advanced analytics tools underpinned by AI/ML techniques can be leveraged to improve the overall customer experience.

IoT Payments:

The business world is changing with the introduction of IoT; even whole cities can interact through connected devices. In the future, IoT devices may become commerce portals forming an "Internet of Payments," enabling customers to use wearables such as watches and wristbands, or even their cars, to make secure, reliable, and seamless payments. In-car payment technology, incorporating onboard payment technology in vehicles, will be the next big trend: you will be able to pay right from your dashboard at the gas station, for parking, for tolls, etc., without having to leave the car. Very soon your fridge will be able to order and replenish groceries, and your washing machine will order supplies. Embedded payment technologies in your smart home assistant will let you order and pay right from your couch.

AI in the Payments Industry

AI systems can be powerful tools for spotting fraudulent transactions in real time, scoring customer risk, verifying identity, authorizing and monitoring payment transactions, and predicting customer churn.

With new payment methods like card-not-present (CNP) transactions comes the persistent and rising challenge of online payment fraud. Artificial Intelligence (AI) and Machine Learning (ML) technology not only detects anomalies and potential instances of fraud, but can prevent them before they happen. AI and ML models trained via supervised learning, in which robust models are fed large amounts of labeled data, can already uncover patterns and surface hidden insights. The technology is now moving past this more static approach toward unsupervised learning, in which the system can refine these insights further and perform more complex processing tasks than supervised learning systems.
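As a toy illustration of the unsupervised side, the sketch below flags transactions whose amount deviates strongly from a customer's history using a simple z-score rule. Real fraud systems combine many features and learned models; the threshold here is an arbitrary assumption for the sketch:

```python
import statistics

def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Flag each new amount whose z-score against the customer's
    historical amounts exceeds the threshold. No labels needed: the
    model is just the history's mean and standard deviation."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [abs(a - mean) / stdev > z_threshold for a in new_amounts]

# A customer who usually spends $20-30 suddenly spends $5,000:
flags = flag_anomalies([20, 25, 22, 30, 24, 27, 21, 26], [25, 5000])
# → [False, True]: only the $5,000 transaction is flagged.
```

The appeal of this family of techniques is precisely what the paragraph above describes: the system learns what "normal" looks like from unlabeled data, so it can catch fraud patterns no one has labeled yet.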

Today, conversational interfaces like chatbots are also on the rise to automate and improve customer service.

Open APIs

Regulations such as PSD2 and open banking are forcing traditional players to open up their products (such as banking, payments, etc.) to third-party developers. This type of data integration is possible through open APIs, which enable two-way data sharing between banks and third parties in a secure, scalable, and accelerated manner. This advancement has introduced massive disruption: new collaboration, and a race to develop an ecosystem around the brand.

Payments ‘Platform-as-a-Service’

Amazon, Facebook and other BigTech companies are developing payments as a platform, a new concept set to revolutionize payments processing. The innovative new ‘Platform-as-a-Service’ model covers all payments processing needs today and for the foreseeable future, bringing benefits of payment efficiency, faster time to compliance, scalability, and flexibility.

Anticipating and analyzing how these trends may influence an organization's choice of future business models, payment companies need to build, modernize, and expand their platforms, and develop a solutions approach to achieve their desired future state.

At this juncture, Ness brings next-generation payment and processing expertise for issuers, acquirers and merchants. Our experience ranges from tokenization to card digitization in mobile wallets to acceptance of a wide range of payment methods. We also combine leading-edge payment consulting expertise with hard-core engineering capabilities to help companies understand customer behavior, build strong engagement, and serve new customer segments. With IoT being a game-changer in today's digital era, Ness brings a unique combination of expertise in IoT and digital payments to help hi-tech, device-manufacturing, and industrial companies offer secure and seamless payment solutions.

In Part 2 of this blog post, I’ll share perspective on the challenges in supporting faster payments and how to build a solutions approach for real-time payments.

Click to Read Part 2 – Building a Solution for Real-Time Payments