Learning from the Greats – Henry Ford and the lessons for implementing visionary 21st century software platforms

I was recently reading an article about the history of Henry Ford’s famous assembly line, and I found it very inspirational. I have always enjoyed learning about historical figures and the titans of industry who helped shape the way we live today. There are great lessons to be learned from the wisdom they displayed, and the habits they pioneered are often just as applicable today as they were in their prime. The key highlights I took away from my reading about Ford were:

  1. It is important to have a goal/vision – Ford wanted to build “motor car[s] for the great multitude”
  2. Establish principles and follow them – Ford had four principles: interchangeable parts, continuous flow, division of labor, and reducing wasted effort
  3. Realize the benefits incrementally – He did not wait for the fully functional assembly line before using his advances to improve his processes; he rolled out changes in smaller increments
  4. Break the big challenge down to smaller manageable steps – Ford broke down automobile assembly into 84 discrete steps
  5. Use automation to optimize where possible and reduce complex human tasks as much as possible

All those points have as much relevance in the IT world of today as they did in the early 1900s for the fledgling automotive industry. The longevity of Ford’s approach to process optimization is admirable, and as I read about his accomplishments, I wanted to emulate some of his best practices in my professional career.

It was with that reverence as context that I began to review a project that I have been heavily involved in over the past year. Working at Ness, I have been driving an effort to create industry frameworks that can be used to aggregate best-of-breed technologies and create niche business solutions that can be easily integrated into these base frameworks. I looked at each of the five points I took away from studying Ford and tried to draw parallels in our approach:

  1. The key motivation of these frameworks was to create platforms that allow many business solutions to plug and play in an integrated and synergistic yet independent manner
  2. To accomplish the goal, principles were applied:
    • Interchangeable modules – each business solution follows a contract (set of APIs) that allows it to be swapped out for a newer version or one from a different vendor
    • Continuous flow using the platform for orchestration – Business solutions serve the customer’s needs at appropriate times in their experience with the organization. The use of a central platform to provide access to all solutions and centralize the data used throughout provides a continuous flow for the customer
    • Division of labor – BPM flows are used to break the work down into discrete tasks and assign them to the right individuals at the right time
    • Reducing wasted effort – integration to core systems and external vendors are built once and shared by all business solutions. Data is shared as much as possible to prevent rekeying and duplicate capture
  3. Implementing our solutions incrementally was necessary to gain market acceptance. Our solutions needed to be something organizations could choose to adopt piecemeal. Implementing a single business solution first is the best option: it allows the platform foundation (the integration to master data systems) to be set while quickly adding the value of that first business solution. Future business solutions are then able to piggyback on the platform foundation to accelerate their implementation.
  4. Business solutions are often complex processes. To penetrate that complexity, our solutions break down the functionality of all processes into simple, discrete steps and services. This allows for greater flexibility: the ability to swap out one service provider for another, as well as flexibility in the way that services are invoked and orchestrated to optimize the business solution and, ultimately, the customer journey.
  5. Looking to maximize the opportunities for automation, our solutions leverage BPM as a key component of the foundation. BPM technology is used to automate process tasks. Additionally, third-party resources are used to automate complex business tasks.

The Ness platform approach is to build solutions in a way that truly encourages reuse. As the initial framework was being built out, it became very apparent that additional advantages were being realized. By creating a framework-based hub, with a common data representation and an integration mechanism into core client systems, we could achieve some great synergies across business challenges. For example, in our financial services platform, the Know Your Customer (KYC) module was a window into a wealth of data that has relevance to Customer Servicing, Marketing, and Sales. A follow-up effort currently under development is a fraud investigation module, which leverages the comprehensive customer profile to enrich the data available to fraud investigators. Key benefits for fraud investigations include:

  • Accessing a data set that is much greater than that of a typical fraud investigation
  • Allowing pre-emptive detection through analytics to determine high risk individuals or scenarios based on enhanced due diligence data in the customer KYC profile
  • Shortening investigation times by utilizing pre-fetched and scored data with tasks such as de-duplicating and eliminating false data already performed

The technical synergies presented are based on integration capabilities made available because of the discrete services developed during the KYC solution implementation. The KYC solution required access to client Customer Information Systems, Product Systems, and external vendors such as credit bureaus. These integrations are all relevant to fraud investigations and since they are built on the same platform and share a common data dictionary they can plug and play as needed.

After reviewing all that we have accomplished and looking for lessons to glean from the incredible accomplishments of Ford, I was comforted in knowing that there was enough similarity in approach to feel like we are on the right track. Ford’s contributions to American industry changed the habits of a nation and helped shape its character. Our goals were much more modest. But I’d like to think that if Ford had built a software integration platform, it would have looked a lot like ours.

API Series Part 2: Key API Design Principles and Best Practices

There are five key API design principles. Here I will elaborate on each of these areas:

Documentation

Documentation is a key principle of API design and development. Keep in mind that developers are the users of your API; therefore, the documentation should be the paramount deliverable of the API. Whether you call it a contract definition or a developer guide, it needs to be detailed in every aspect, easy to understand, and simple to use. Keep it up to date and eliminate outdated items. Tools such as Swagger simplify the work of writing and maintaining API documentation.
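
As a minimal sketch of tooling-driven documentation, here is what a documented endpoint could look like with FastAPI, which generates an OpenAPI (Swagger) description directly from the code; the Users API and its endpoint are hypothetical:

```python
# Minimal sketch, assuming FastAPI: running this app serves interactive
# Swagger docs at /docs and the raw OpenAPI contract at /openapi.json,
# so the documentation stays in sync with the code it describes.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Users API", version="1.0.0")  # hypothetical API

class User(BaseModel):
    """Response schema; rendered verbatim in the generated docs."""
    id: int
    name: str

@app.get("/users/{user_id}", response_model=User, summary="Fetch a single user")
def get_user(user_id: int) -> User:
    """Return the user with the given id (illustrative data)."""
    return User(id=user_id, name="Jane Doe")
```

Generating the contract from the code itself is one way to keep documentation up to date and make outdated items easy to eliminate.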

Content Negotiation

There should be a provision for flexibility in terms of technology and usage preferences, which allows developers to choose the option that best suits them. Greater flexibility helps accelerate the adoption of your APIs (a client-side sketch follows the list below). It includes:

  • Supporting multiple formats (media types)
  • Understanding developers’ technological preferences, and supporting them where possible
  • Adopting the preferred standards and specifications
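
As a hedged sketch of the consumer side of content negotiation (the endpoint URL is hypothetical), a client states its format preference through the Accept header and the server answers with a representation it can honor:

```python
# Sketch of client-side content negotiation; the URL is hypothetical.
import requests

# q-values express preference order: JSON first, XML as a fallback.
resp = requests.get(
    "https://api.example.com/v1/users/42",
    headers={"Accept": "application/json;q=1.0, application/xml;q=0.5"},
)

# If the server implements negotiation, Content-Type reports its choice;
# a 406 status means none of the requested formats was available.
print(resp.status_code)
print(resp.headers.get("Content-Type"))
```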

Compatibility

It is important to remain compatible with each consumer as business needs change. It is also essential to version each API release and document a changelog for each, so that the behavior of the API remains consistent, keeping the various consumers unaffected and stable. Do not forget to deprecate older, unused versions.

Adaptability

The APIs that are published must be easy to understand and simple to use. There should be a provision for developers to try out your APIs before they see the benefits and start adopting them for production use. This helps ease the onboarding of third-party developers and partners. As the saying goes, “The first impression is the last impression,” and chances are high that developers will adopt your APIs if they find them easy to implement. A community for trial users and enterprise support for production users will greatly enhance the adoption of APIs.

Security

This must be the most basic principle of any application design and development. Moreover, when securing the public APIs, the complexity grows in terms of design factors. Some of the key factors to consider are:

  • Always use SSL for public APIs
  • Use tried and tested authentication schemes
  • Enable only authorized access to each resource
  • Always encrypt sensitive information that is shared over the wire

API Guidelines and Best Practices

Some of the guidelines and best practices described below are subjective, but they are essential in today’s API development. They provide fundamental benefits and help you stay on par with industry-wide adoption of best practices:

Vocabulary

This refers to the standard naming conventions one should follow when naming each API endpoint. Endpoints should be human readable, easy to understand, and compliant with HTTP standards.

Versioning

By versioning, you allow various consumers to access your published APIs in different variations. Though version management adds complexity to existing APIs, it helps in better management of API endpoints, thereby serving various consumers through different mediums. There are two different ways to implement this (a sketch of both follows the list below):

  • URL – e.g., api.myorg.com/v1/users
  • Accept Header – requesting a specific version via the Accept request header
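
Here is a brief sketch of both styles side by side, assuming a Flask service; the routes and the vendor media type are illustrative:

```python
# Sketch of URL-based and Accept-header-based versioning in one service.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Style 1: the version lives in the URL (api.myorg.com/v1/users).
@app.route("/v1/users")
def users_v1():
    return jsonify([{"name": "Jane Doe"}])

@app.route("/v2/users")
def users_v2():
    return jsonify([{"first_name": "Jane", "last_name": "Doe"}])

# Style 2: one URL; the version is negotiated via a vendor media type
# in the Accept header (the media type name is illustrative).
@app.route("/users")
def users_by_header():
    if "application/vnd.myorg.v2+json" in request.headers.get("Accept", ""):
        return users_v2()
    return users_v1()  # default to the earliest supported version
```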

Support Multiple Media Types

At any point in time, a given object or resource can have multiple representations. This is necessary so that various consumers can request the content or resource in the manner that they would like. Having said that, it is not necessary to support all media types, only the ones that are required based on specific use cases.

Here is a minimal sketch of support for two different data formats (JSON and XML); it assumes a Flask service, and the resource is hypothetical:
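
```python
# Sketch: one resource, two representations, selected via the Accept header.
from flask import Flask, Response, jsonify, request

app = Flask(__name__)
USER = {"id": 42, "name": "Jane Doe"}   # hypothetical resource

@app.route("/users/42")
def get_user():
    best = request.accept_mimetypes.best_match(
        ["application/json", "application/xml"]
    )
    if best == "application/json":
        return jsonify(USER)
    if best == "application/xml":
        xml = f"<user><id>{USER['id']}</id><name>{USER['name']}</name></user>"
        return Response(xml, mimetype="application/xml")
    return Response("Not Acceptable", status=406)
```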

Caching and Concurrency Control

Caching improves performance, providing faster access to frequently accessed resources and reducing the load on backend services. However, with caching comes the challenge of managing concurrent access. It is therefore essential to manage caching using HTTP standards such as the following (a sketch combining both follows the list):

  • ETag – Entity tagging. Equivalent to versioning each entity for updates
  • Last-Modified – Contains the last modified timestamp
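
A hedged sketch combining a conditional GET and a conditional PUT, assuming a Flask service; the resource and the hashing choice are illustrative:

```python
# Sketch of ETag-based caching (conditional GET) and concurrency control
# (conditional PUT), assuming Flask; the resource is a stand-in.
import hashlib
from flask import Flask, jsonify, request

app = Flask(__name__)
USER = {"id": 42, "name": "Jane Doe"}

def etag_of(resource: dict) -> str:
    # A content hash acts as a cheap per-entity version number.
    return hashlib.sha1(repr(sorted(resource.items())).encode()).hexdigest()

@app.route("/users/42")
def get_user():
    etag = etag_of(USER)
    if request.headers.get("If-None-Match") == etag:
        return "", 304                 # cached copy is still valid
    resp = jsonify(USER)
    resp.headers["ETag"] = etag
    return resp

@app.route("/users/42", methods=["PUT"])
def update_user():
    if request.headers.get("If-Match") != etag_of(USER):
        return "", 412                 # stale version: Precondition Failed
    USER.update(request.get_json())
    return "", 204
```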

Standard Response Codes

This responsibility lies with business owners, as it affects the business needs of the consumers of your APIs. The contract definition should contain all possible error codes that could occur for each API (a sketch of a structured error payload follows the list below).

  • Adhere to the standard HTTP response codes
  • Include both business and developer messages. Developer messages should be optional and contain technical detail that guides debugging and troubleshooting.
  • For security reasons, do not reveal too much about the request in error responses, as verbose errors can aid attacks such as Cross-Site Request Forgery.
  • Best practice is to limit the list of potential error codes, as too many error codes lead to chaos.
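
As a sketch of such a payload, assuming a Flask error handler (the field names and error code are illustrative, not an industry standard):

```python
# Sketch of an error response carrying both business and developer
# messages; field names and the error code are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

@app.errorhandler(404)
def not_found(error):
    return jsonify({
        "status": 404,
        "businessMessage": "The requested user does not exist.",
        "developerMessage": "No record matched the supplied user id; "
                            "verify the id against GET /users first.",
        "errorCode": "USR-404",  # drawn from a small, documented set
    }), 404
```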

Security Considerations

This does not require much explanation, as security requirements are the basic needs of any application or API. Keep in mind that your APIs are mostly public, so invest the effort required to secure them. API management platforms (explained in the next part of this series) provide security mechanisms; however, as an API developer, you should be aware of the current trends and industry best practices adopted in addressing security requirements (a sketch of per-request authentication follows the list below).

  • Always use SSL
  • APIs are stateless, so avoid session/cookie management – authenticate each request
  • Authorize based on resource, not on URL
  • HTTP status code 401 vs. 403: 401 indicates that authentication failed or is missing, while 403 indicates that the authenticated caller is not authorized for the resource; some APIs return 401 for both to avoid leaking information
  • Follow the guidelines defined by Open Web Application Security Project (OWASP) Threat Protection
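
A minimal sketch of per-request authentication and resource-based authorization, assuming Flask; the in-memory token table is a stand-in for whatever authentication backend you use:

```python
# Sketch: stateless per-request authentication plus resource-level
# authorization; the token store is a stand-in, not a real backend.
from flask import Flask, jsonify, request

app = Flask(__name__)
TOKENS = {"s3cr3t-token": {"user": "jane", "roles": ["reader"]}}

@app.route("/reports/<report_id>")
def get_report(report_id):
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    principal = TOKENS.get(token)
    if principal is None:
        return jsonify(error="authentication failed"), 401   # who are you?
    if "admin" not in principal["roles"]:
        # Authenticated, but not authorized for this resource.
        return jsonify(error="forbidden for this resource"), 403
    return jsonify(report_id=report_id, requested_by=principal["user"])
```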

Possible Authentication Schemes:

Basic Authentication
  • Must be used over SSL only
  • Credentials are encoded using Base64 and sent in the Authorization request header
  • Ideal to use within secured networks

SAML
  • Transport agnostic; can be used with HTTP, SOAP or JMS
  • Ideal for B2B enterprise applications

OAuth
  • Uses only HTTP
  • Ideal for consumer facing applications that authorize data for 3rd party access
  • OAuth 2.0 with bearer tokens is ideal for mobile applications

JSON Web Token (JWT)
  • Compact; transmission is fast
  • Can be sent through a URL query parameter, a POST parameter or inside an HTTP header
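
As a small sketch of the JWT flow using the PyJWT library (the secret and claims are placeholders):

```python
# Sketch of issuing and verifying a JWT with PyJWT (pip install PyJWT).
import datetime
import jwt

SECRET = "replace-me"   # placeholder; keep real secrets in a secret store

# Issue: signed claims become a compact, URL-safe string that can travel
# in an HTTP header, a POST parameter, or a URL query parameter.
token = jwt.encode(
    {"sub": "jane",
     "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=15)},
    SECRET,
    algorithm="HS256",
)

# Verify: the signature and expiry are checked; a tampered or expired
# token raises an exception instead of returning claims.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"])   # -> jane
```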


Great! We now have the strategy in place and understand the design principles and best practices necessary for the development of APIs. In my next blog, I will help you understand the capabilities of API management platforms that address non-functional challenges and changing business demands, along with ways to accelerate the development and adoption of APIs.

This is the second blog in a 3-part series on APIs. “Part 1: API Centric Design” can be viewed here. “Part 3: Understanding the Capabilities of API Management Platforms” can be viewed here.

Big Data Open Source Projects vs Amazon Web Services (AWS)

All data engineers are looking for the latest trends regarding the Vs (such as Volume, Variety, Velocity and others) of Big Data. Most approaches lead to higher ingested Volume of enterprise data, an increased Variety of enterprise or cloud source systems, with constant Velocity demand. Data engineers and data analysts are looking to migrate data warehousing to the Cloud to increase performance and lower costs.

Big Data architects are looking for new intelligent solutions to govern the data swamp and, in the end, to create robust security models to protect data and manage “data lakes” in full compliance. The rules of the game always push the return on investment to the limit, so most of the time organizations need to find a balanced technical solution between open source technologies and proprietary/commercial ones.

From an engineering point of view, a big data discussion always starts with the cluster type. In the beginning most clusters were built “on premise,” but evolution led to the “public cloud,” and nowadays we get the full benefit of “hybrid” ones.

Anything related to provisioning, dynamic commissioning and decommissioning is already offered as a service – Infrastructure (IaaS) or Platform (PaaS) as a Service. Amazon EC2 (Elastic Compute Cloud) provides a fully customizable, integrated, scalable environment but also leaves space for open source platforms like Cloudera, Hortonworks and MapR.

From a financial point of view, Spot Instances combined with Amazon EMR (Elastic MapReduce) services definitely raise the bar in terms of capacity planning. IaaS and PaaS are already mature enough to offer solid support for the transition from capital expense to operational costs. In the same context there are a few big questions: “Which solution is the most cost effective? In terms of licensing costs or support costs?” I would respond that it’s a combination of both.
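
As a rough sketch of what that capacity planning looks like in code, here is a transient EMR cluster provisioned on Spot capacity with the boto3 SDK; the names, sizes, and bid price are illustrative, not a recommendation:

```python
# Sketch: a transient EMR cluster whose core nodes run on Spot capacity.
# All names, sizes, and the bid price are illustrative.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="nightly-analytics",
    ReleaseLabel="emr-5.3.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "Market": "ON_DEMAND", "InstanceType": "m4.large",
             "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "Market": "SPOT", "BidPrice": "0.10",   # cap the hourly spend
             "InstanceType": "m4.large", "InstanceCount": 4},
        ],
        # Transient cluster: tear everything down once the steps finish,
        # turning capital expense into a purely operational cost.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```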

From a strategic point of view, any full integration with a big data platform strongly affects “independence of work.” Amazon’s Big Data Platform has heavily integrated open source technologies, an area where Amazon is a big contributor, but it also offers innovative services. Some examples relate to data persistence services like S3, Glacier and EBS, alongside Hadoop’s HDFS.

In addition to this brief blog introduction, I am also providing a video presentation from Ness TechDays virtual conference that consists of 2 distinct parts:

  • The 1st part reviews where enterprise systems end and big data solutions begin.
  • The 2nd part is a comprehensive comparison between Apache open source projects and Amazon AWS. This snapshot of the current valuable technologies in the Big Data ecosystem is meant to shorten the time needed for architectural decisions.

The comparative approach covers architectural aspects, such as cost model, performance, availability, scalability and elasticity for analytics and data warehousing, outlining available AWS services and open source alternatives.

The final goal of the presentation is to offer a reference for a typical transition of a software solution from “on premise infrastructure” to “hybrid cloud infrastructure.”  View the full “Big Data Open Source Projects vs Amazon Web Services” presentation here.

Interview with Ness CTO: Digital Engineering and Big Data Trends for 2017

In a new, wide-ranging Q&A with Virtual-Strategy Magazine, Ness’s Chief Technology Officer Moshe Kranc discusses several topics, including current trends in digital platform engineering, Ness’s specific approaches to digital engineering, the role of big data analytics in digital engineering, notable big data trends for 2017, and other emerging technologies destined to have an impact on digital technology in the new year. Moshe also adds insights about Ness’s abilities in mitigating risks to success in digital engineering projects.


The Skyrocketing Cost of Technical Debt: Yahoo Security Breach in Perspective

Hacking seems to be one of the lead stories as we begin the new year. The Yahoo security breach announced last month makes Yahoo one of the latest victims. Yahoo released information regarding a theft in late 2014 of hashed passwords and unencrypted personal information for over 1 billion Yahoo users.

The first question a techie like myself asks is “What security hole did Yahoo leave open?” The statement by Bob Lord, Yahoo chief information security officer, gives a hint: “The account information may have included names, email addresses, telephone numbers, dates of birth, hashed passwords (the vast majority with bcrypt) and, in some cases, encrypted or unencrypted security questions and answers.” (http://yahoo.tumblr.com/post/150781911849/an-important-message-about-yahoo-user-security) Bcrypt is a fairly secure hashing algorithm that takes months of brute force calculation to crack, so most people will have time to change their passwords before any damage is done.
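
For context, here is a minimal sketch of what bcrypt hashing involves, assuming the Python bcrypt library; the cost factor is what makes brute force take months:

```python
# Sketch of bcrypt password hashing; the passwords are placeholders.
import bcrypt

# The cost factor makes every guess expensive by design: each increment
# of `rounds` doubles the work an attacker must spend per attempt.
hashed = bcrypt.hashpw(b"correct horse battery staple",
                       bcrypt.gensalt(rounds=12))

# Verification re-derives the hash using the salt embedded in `hashed`.
assert bcrypt.checkpw(b"correct horse battery staple", hashed)
assert not bcrypt.checkpw(b"wrong guess", hashed)
```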

The real shocker in Lord’s statement is “… unencrypted security questions and answers.” These are the questions a site uses to verify your identity when you forget your password, e.g., “What was the name of your first pet?”, “Who is your favorite actress?” Unlike passwords, which most people change regularly, these security questions and answers are your identity – they define who you are. You can’t change your mother’s maiden name or the city where you were born. Someone who has this information can steal your online identity to any site by proving they are you, no matter how secure your password is.

Stunningly, Yahoo hashed the passwords, but apparently left the far more valuable personal information unencrypted. Hackers now have this information, in the clear, for over a billion online users who are uniquely identified by their name and email address. Your Yahoo account is the least of your problems, if you still use Yahoo. You need to go to every other site where you gave personal information and change the questions or fabricate new answers, then remember your new answers. That’s a high price to pay for the benefits of having a Yahoo account.

Encrypting all personal information is a fairly basic tenet of online security, even for a fledgling online startup, let alone an attractive hacking target like Yahoo. It’s not as if Yahoo was unaware of hacker activity on their site, after a breach of over 450,000 accounts in 2012 and a series of spam mail attacks in 2013. And yet, Yahoo was slow to take any corrective action, despite warnings from Yahoo’s own internal security team. “…When it came time to commit meaningful dollars to improve Yahoo’s security infrastructure, [Yahoo CEO] Ms. Mayer…denied Yahoo’s security team financial resources and put off proactive security defenses.”
(http://www.nytimes.com/2016/09/29/technology/yahoo-data-breach-hacking.html?_r=1)

A security hole is a form of technical debt – a hole in the company’s technology that sooner or later will cost the company money, either to fix or to suffer the consequences. Almost every company has these sorts of “skeletons in the closet”:

  • Perhaps you have incompatible business systems as a result of inorganic growth. The business manages today with a fragmented view of the company’s data, but the day will come when it needs a unified view in order to compete.
  • Perhaps a key algorithm for de-duplicating data assumes that all the data will fit into memory. It will work fine until the day your data no longer fits in memory, at which point it will become an all-hands-on-deck emergency.
  • Perhaps you have a business-critical system that is based on technologies like RPG or COBOL that no one in your organization understands. Even the smallest change or bug fix takes weeks, but you never have the time or budget to re-factor it using a more modern and maintainable technology.

The longer you wait to fix the problem, the more it costs the company, in risk or impaired performance. Yahoo chose to defer fixing their technical debt, but that debt has now come due with compound interest, and the timing could not be worse. Yahoo is in the midst of negotiating a sale to Verizon for $4.8 billion, and the mega breach threatens to slice the purchase price by billions or, if enough users flee Yahoo, could even cancel the deal or sink Yahoo entirely. That would be a new record for the skyrocketing cost of technical debt.

Yahoo’s plight provides some valuable take-away lessons for every company:

  • Technical debt does not go away.
  • Every year you defer tackling the problem only increases the ultimate cost.
  • Not fixing the problem is often the most expensive option.
  • Beware of the echo chamber. Yahoo’s internal discussions led to an incorrect choice to defer tackling their technical debt. Often it is beneficial to bring in an outside opinion to help evaluate the true cost of fixing or not fixing your technical debt.

Ness Digital Engineering to add 800 to workforce

Satyajit Bandyopadhyay, President and Chief Delivery Officer, Ness Digital Engineering, in an interview with “The Economic Times” emphasizes the company’s focus on digital platform engineering, and its plans to expand its workforce globally to meet the rapidly changing technology needs in the digital space. Bandyopadhyay says, “We had double-digit growth in the last two years and expect the growth to accelerate mainly because of our digital product engineering focus.”


Mining the Data Intersection of Artificial Intelligence and BPM for Financial Services

Banks have been at the center of a data explosion for decades and have traditionally aggregated this data in the immediate context, often to solve one business problem at one point in time, in order to stay compliant. Enter Artificial Intelligence (AI). While not a new term, its use in banking today speeds up what researchers have usually had to do manually, helping financial institutions keep up with growing regulatory scrutiny, flag suspicious activities, and catch wrongdoing (like money laundering) that could be slipping through the cracks of the organization.

From Months to Minutes – AI in KYC

When used during the Know Your Customer (KYC) process, AI can cut costs and complexity. It can generate profiles on individuals and companies in mere minutes and turn an otherwise tedious process into instant, auditable information so that key stakeholders can make informed decisions. The clear business value of AI in KYC is time savings, which ultimately leads to increased customer satisfaction and revenue.

Our recent partnership with OutsideIQ brings this opportunity to our customers via the company’s investigative cognitive computing technology, DDIQ®. With the growing number of data sources available, the product’s cognitive computing engine searches thousands of international structured and unstructured sources to cut through false positives and accurately flag potential risk issues on a subject. It thinks like an investigator and searches the open web and deep web for anything relevant published about the third party. The technology is designed to enhance (not replace) the reporting process so that human researchers can be armed with the information they need, when they need it, to make key decisions.

While KYC and improved due diligence is one great example of AI’s value, we are merely scratching the surface of its potential to improve overall business process management practices. There is a business case for looking at the data as a whole rather than at the immediate instance to uncover new customer experiences and cross-selling opportunities. The more companies can dive into the data and make sense of its patterns, the more they can understand its impact and create a competitive advantage.

API Series Part 1: API Centric Design

In this Digital Transformation era, it is of paramount importance for organizations to adopt APIs to execute ideas quickly and capture new business opportunities. APIs (Application Programming Interfaces) play an essential role as the building blocks of Digital Transformation, enabling organizations to deliver an exceptional customer experience, increase their revenue streams, and optimize internal IT operations.

Gartner predicts that 50% of business-to-business collaboration will take place through APIs by 2017, and that by 2018, 75% of Fortune 1000 firms will offer public Web APIs.[1] The benefits include:

  • Added Revenue Source – Extend the reach of your content by maximizing the distribution channels. For example, 90% of Expedia’s revenue is derived from APIs.[2]
  • New Customer Acquisition Channel – Organizations can open new channels that provide significant growth in CLTV (Customer Lifetime Value) when compared to other traditional channels. Gartner states that by 2018, more than 50 percent of users will use a tablet or smartphone first for all online activities.[3]
  • Fosters Open Innovation – With developer-friendly APIs, organizations can attract creative developers who bring in a variety of insights that can result in new uses for your service/product. For example, Indian Railways opened up their ticketing API, which enabled the creation of consumer-friendly apps. There are now 6 to 8 user-developed mobile apps in addition to the website from Indian Rail and other third party partners.
  • Increased Collaboration – APIs nurture integration and interoperability of Enterprise Applications, thereby removing barriers to software collaboration that existed for years. For example, Twitter redesigned their API strategy so that twitter.com would be a consumer of their own API. This shift helped optimize the collaboration of their API team with each of the consumers, including their own website team and mobile apps team.
  • Flexibility – APIs can help in addressing unanticipated future business needs. By making data available through APIs, one can provide greater flexibility in delivering services. For example, Myntra, a leading online fashion apparel store, went with a mobile-only strategy in 2015 that backfired, as they saw customer visits dip by 20%.[4] Nevertheless, thanks to an API strategy in place, they were able to bring back the desktop-based application in a short amount of time.

That is awesome! APIs are transformative, and the benefits are multi-fold. However, how do organizations build an “API Economy” so that they do not fall behind in the race? Are there any challenges?

Yes, there are:

  • Accelerating Go-to-market Strategy

How soon can one conceptualize, develop and publish the APIs?

  • Orchestration within the Enterprise Ecosystem

How do we orchestrate APIs within the suite of enterprise applications besides access to external third party developers and partners?

  • Securing Open/Public APIs

Consideration of best practices and industry standards to secure open/public APIs.

  • Onboarding Developers and Partners

Ability for third-party developers to discover and trial the APIs before adopting the published versions. The ideal solution would provide a self-service portal that enables some of these capabilities, including documentation, authorization and support.

  • Deployment Agility to Support Changing Business Needs

Simplifying the development, testing and deployment aspects, thereby mitigating the risks involved in reacting to changing business needs.

  • Understand API Usage and Optimization

Provision to log and understand API usage metrics, thereby providing an opportunity to optimize the published APIs. Continuous monitoring and improvement are the key to success.

  • Monetization

This ranges from providing various subscription models to defining usage tiers. There should be a trial option before charging third-party partners and developers.

API Centric Design is a strategy that provides an insightful approach to the design, development, and provisioning aspects of APIs that one can apply to overcome some of these challenges.

Here are the aspects that one should consider for the design and development of APIs:

  • Architectural Patterns

Emphasize the trade-offs of various architectural styles and patterns, such as SOAP, RESTful and event-driven APIs. It is essential to understand the business needs and characteristics of the APIs and to consider various factors before deciding upon a pattern.

  • Design Principles

Key design principles to consider at the beginning of an API design and development project.

  • Guidelines & Best Practices

Industry specific standards and guidelines for the design and development of APIs.

  • API Management

It is essential to manage APIs once they are published for external/internal users. Published APIs bring in revenue; hence adherence to the contract definition, continuous monitoring, understanding of usage metrics, and other non-functional needs such as security, scalability and availability are all important factors of API management. Platforms such as Apigee, WSO2 and Amazon API Gateway, among other prominent providers, offer capabilities around these, and one can weigh viable build vs. buy options.

Contact Ness if you are looking for assistance with your API strategy. This is the first blog in a 3-part series on APIs. “Part 2: Key API Design Principles and Best Practices” can be viewed here. “Part 3: Understanding the Capabilities of API Management Platforms” can be viewed here.

[1] http://searchsoa.techtarget.com/feature/What-CIOs-developers-should-know-about-the-API-economy

[2] http://hbr.org/2015/01/the-strategic-value-of-apis

[3] http://www.gartner.com/newsroom/id/2939217

[4] http://www.business-standard.com/article/companies/myntra-s-app-only-dream-is-dead-to-relaunch-desktop-website-on-june-1-116050301169_1.html

Taming the Big Beast: Cultural Transformation during DevOps Evolution

Amit Gupta, Associate Vice President, Ness Digital Engineering has authored an article for The Economic Times, highlighting why a cultural shift can be so critical to successful implementation of DevOps. He writes, “When merging software development and IT operations teams to improve software quality and accelerate release cycles, the way people within the organization think, work and collaborate is vital for successful adoption and execution of DevOps.”

