Taming the Cultural Transformation Beast during DevOps Evolution

In a new article for CIOReview, Amit Gupta, Ness’ associate vice president of delivery, discusses how embedding DevOps into a company’s culture is critical to successful adoption and execution, given that cultural transformation lies at the heart of an organization’s DevOps evolution. Gupta also elaborates on the key focus areas that drive a DevOps mindset into the cultural fabric of an organization, and on the best tools for embedding DevOps in a company’s culture.


Web data extraction: Custom, commercial offerings ease the task

In a new article for TechTarget’s SearchITChannel, Moshe Kranc, Ness’ chief technology officer, describes how, 25 years after the internet was compared to “a library where the books have been tossed on the floor,” custom and commercial offerings, bolstered by machine learning, can now facilitate web data extraction. Kranc goes on to outline guidelines for web data extraction tools, including web crawling, using proxies, and learning from user actions over time.


Interesting Highlights from the Applied Machine Learning Days

I attended the first edition of the Applied Machine Learning Days, which took place at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.

The event, organized by Prof. Marcel Salathé and his team at the Lab of Digital Epidemiology, involved two days of talks and tutorials on machine learning and artificial intelligence. It gathered 450 academics and practitioners around a variety of topics, including healthcare, satellite image processing, social sciences, telecommunications, and online media.

It was also an opportunity to announce winners on crowdAI, an exciting open platform for machine learning competitions.

I am sharing some interesting highlights of this dense and well-organized event.

Peter Warden of Google presented TensorFlow for Poets. His project, inspired by a tutorial on EC2, allows non-experts in deep learning to get up and running quickly with one of the leading packages for recurrent and convolutional neural networks. During his presentation, Warden highlighted the benefits of transfer learning: one popular application of TensorFlow is image recognition, and with transfer learning users can start from a pre-trained network that already knows about certain image features. The project leverages Docker, a popular container technology we frequently come across in our software labs.

Swisscom, the leading mobile operator in Switzerland, presented its incident detection toolbox, which caters to its various business units, including core network and IPTV. It provides an integrated set of tools around notification (what is the signal?), incident routing (classification), and incident prediction, helping, for example, with agent onboarding (finding similar tickets) or with IPTV services (e.g., sentiment analysis).

An interesting effort, developed by Prof. Salathé’s lab in cooperation with Wikimedia, focuses on recommending missing content on Wikipedia across all languages. The team is experimenting with GapFinder, a web application that lets you find topics typically not covered in non-English versions of the online encyclopedia. It is still a prototype, but it will help contributors add missing pages to this fabulous resource.

Juergen Schmidhuber, a pioneer in neural network research and applications, presented some milestones taken from a paper on the history of deep learning. He started with Long Short-Term Memory (LSTM), which is used when you summon your Android phone with “OK Google.” He also paid tribute to early pioneers of self-driving technology in the 1980s, such as Ernst Dickmanns.

I was also very excited to hear probabilistic graphical models addressed by Prof. Marloes Maathuis in her work on estimating causal effects from observed data. Probabilistic graphical models form a rich toolset combining data structures from computer science, such as directed acyclic graphs, with probability theory, in particular conditional probabilities, the chain rule, and Bayes’ rule. The work, started by computer scientist and philosopher Judea Pearl in the 1980s, finds applications in machine learning, including virtual assistants, guidance systems, and image processing.

Social network giant Facebook presented fastText, a key library for understanding text. Facebook data scientists are working on pre-trained word representations (word2vec), scalability to large amounts of text, and compression of fastText’s data structures to small footprints suitable for phones and microcontrollers.

Google talked about the research agenda of its largest R&D lab outside the US, based in Switzerland. Google is working hard on natural language understanding by constructing a model of the world, along with prior beliefs, which allows the computer to notice that a cow on top of an airplane is not realistic and is probably meant as a joke.

Additionally, Animashree Anandkumar of AWS, a long-time advocate of tensor mathematics in machine learning, presented MXNet and its use on P2-class GPU instances in Amazon’s cloud.

In general, there is a strong desire in the machine learning community to make these approaches more accessible to all. One of the talks also addressed the challenges of distributed computing and possible approaches to scaling problems typically solved via gradient descent, using an alternative better suited to large clusters of parallel workers. The CoCoA project (communication-efficient distributed coordinate ascent) has also caught the attention of the Apache Flink community, which has an SVM implementation based on it.

This was a terrific event! I am glad some of the talks touched on the data plumbing and infrastructure aspects of machine learning, which can be a huge impediment for scientists eager to experiment and iterate rapidly in a continuous delivery mode, something that remains top of mind for us at Ness. Next year’s event has already been announced, and I look forward to attending!

Ness Digital Engineering Achieves Leadership Zone Rating in Enterprise Software & Consumer Software Categories in Zinnov Zones 2016 Product Engineering Services Report

TEANECK, N.J. – February 15, 2017 – Ness Digital Engineering, a provider of digital transformation and software engineering services, was recognized by Zinnov in the leadership zone in both the enterprise software and consumer software categories. The Zinnov Zones for Product Engineering Services report provides a holistic rating of Global Engineering Service Providers (GESPs) across 13 vertical categories.

“We are proud to be recognized for our global capabilities in helping our customers envision and develop impactful digital products that drive innovation and expand their brands,” said Paul Lombardo, CEO at Ness Digital Engineering. “Ranking in Zinnov’s leadership zone highlights our work in a critical business category, as global 500 organizations’ research and development spending in 2016 rose to $621 billion, with a focus on building digital.”

Qualities of GESPs in Zinnov’s Leadership Zone include the following:

  • Capable of taking a product from concept to go-to-market
  • Formal innovation culture, resulting in IP and strategic innovations
  • Large internet and consumer ISVs
  • World’s largest ISVs

Zinnov Zones is an annual report that rates service providers based on their competencies and capabilities. It has become one of the most trusted reports globally, helping both enterprises and service providers better understand the vendor ecosystem in multiple domains. It ranks global players on the basis of their R&D Practice Maturity, Breadth, Innovation and Ecosystem Connect.

For more information, please click here.

About Zinnov

Founded in 2002, Zinnov is headquartered in Silicon Valley and Bangalore. In over a decade, it has built in-depth expertise in engineering and digital practice areas, assisting customers in effectively leveraging global innovation and technology ecosystems to accelerate innovation and digital transformation. With its team of experienced professionals, Zinnov serves clients in the Software, Automotive, Telecom & Networking, Semiconductor, Consumer Electronics, Storage, Healthcare, Banking, Financial Services & Retail verticals in the US, Europe, Japan & India.

For any further media queries, please contact Jaya Shukla at [email protected]

About Ness Digital Engineering

Ness Digital Engineering designs and builds digital platforms and software that help organizations engage customers, differentiate their brands, and drive revenue growth. Our customer experience designers, software engineers and data experts partner with clients to develop roadmaps that identify ongoing opportunities to increase the value of their digital products and services. Through agile development of minimum viable products (MVPs), our clients can test new ideas in the market and continually adapt to changing business conditions—giving our clients the leverage to lead market disruption in their industries and compete more effectively to drive revenue growth. For more information, visit ness.com.

 

Media Contacts

Vivek Kangath
Global Manager – Corporate Communications
Ness Digital Engineering
[email protected]
+91 9742565583

Amy Legere
Greenough
[email protected]
617.275.6517

Representing Large Corporate Data in HTML Tables with the AngularJS JavaScript Framework

In corporate and enterprise platforms, representing tabular data in single-page web apps is a common requirement. There are challenges with the number of data rows, the variety of data types, and the number of columns enterprise users wish to see simultaneously.

In addition, the perceived performance of tabular data in a client-side web application can be affected by many factors. Ideally, users would like to have all of the relevant data available locally (avoiding multiple round-trips to the web server) and still have lightning-fast performance.

This post describes how I was able to meet all of the user requirements for displaying tabular data using the popular client-side AngularJS JavaScript framework.

AngularJS framework

Our team develops single-page applications (SPAs) for our customer. An SPA is a web application that does not reload the page at any point. Instead, all necessary code is loaded at once, or dynamically when requested, usually by a user action.

AngularJS is a popular JavaScript framework that adopts the SPA principle. It is based on bidirectional user interface (UI) data binding: when the model changes, the view is updated, and when the view changes, the data model is updated accordingly. The model is maintained within the browser, so no interaction with the server is needed to update the view.

When developing a web page under AngularJS:

  • data comes from a web service, database, etc.,
  • an HTML table is rendered from an array of entities using the ng-repeat directive,
  • one-way binding is used to dynamically render the HTML table,
  • the rendering process takes time when you have hundreds of rows, tens of columns, or formatted table cells,
  • this will likely frustrate the user because the perceived performance is poor.
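As a minimal sketch of the naive flow above (the module, controller, and property names are my own illustrative assumptions, not from a real project), the table is rendered directly from the model array:

```html
<!-- Illustrative only: "demoApp", "TableCtrl" and "rows" are made-up names -->
<div ng-app="demoApp" ng-controller="TableCtrl as vm">
  <table>
    <!-- ng-repeat stamps out one <tr> per entity in the array -->
    <tr ng-repeat="row in vm.rows">
      <td>{{ row.name }}</td>
      <td>{{ row.value }}</td>
    </tr>
  </table>
</div>
<script>
  angular.module('demoApp', []).controller('TableCtrl', function () {
    // In a real application this array would come from a web service.
    this.rows = [{ name: 'first', value: 1 }, { name: 'second', value: 2 }];
  });
</script>
```

With hundreds of rows and formatted cells, this rendering pass is exactly what becomes slow.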

Solution Options & Overview

Typical approach

One approach is pagination, which divides the data into discrete pages. Performance is good; however, the user has to click to get to the next or previous page.

Better approach

  • render only the visible data rows that fill the available space of the browser window,
  • a custom scrollbar next to the table shows the current relative position of the visible data within the entire data set,
  • the scrollbar gradually fills up with its background color as the user scrolls down the table,
  • the user still has the entire dataset at hand,
  • the intention is NEITHER to render the whole table NOR to display the common vertical scrollbar; only the visible data rows are rendered,
  • the HTML table occupies a major portion of the browser window (an appropriate number of rows),
  • browser window size changes trigger re-computation of the table’s visible area (similar to WinForms behavior).

HTML table with virtual scrolling

My goal was to quickly display only a range of the data while still allowing the user to browse all of it with ease. The solution renders an HTML table displaying the appropriate number of rows to fit the browser window, as if it were a Windows application. By hooking into the window resize event, the table height is re-computed whenever the browser window size changes.

The table height is computed within the ng-style directive so that the table fits the available browser screen space.

The table uses the ng-repeat directive and a custom range filter to display the current range of data. To know how many rows to display, the rendering process is intercepted by a custom directive and the height of the first row is computed. The solution expects each row to have the same height. Once the table height and row height are known, it is easy to compute the number of rows to be displayed.
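The two computations above can be sketched in plain JavaScript (the function and variable names are my own illustrative choices):

```javascript
// Number of rows that fit in the available table height, assuming every
// row has the same height as the first rendered row.
function visibleRowCount(tableHeight, rowHeight) {
  return Math.floor(tableHeight / rowHeight);
}

// A custom "range" filter: the slice of the data array that ng-repeat
// should currently render.
function rangeFilter(data, start, count) {
  return data.slice(start, start + count);
}

// In AngularJS this would be registered as a filter, e.g.:
//   app.filter('range', function () { return rangeFilter; });

console.log(visibleRowCount(600, 30));           // → 20 rows fit in 600px
console.log(rangeFilter([1, 2, 3, 4, 5], 1, 3)); // → [2, 3, 4]
```

When the user scrolls, only the filter’s start index changes, so ng-repeat re-renders a constant, small number of rows.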

The table column widths depend on their content. When you scroll the table up or down, one row is removed and a new one is added to the table. If the content changes, the column widths may change, which is unexpected behavior. For this reason, every column has a pre-defined width set in the table header.

A custom scrollbar displayed to the right of the table shows the current position within the whole array of data. The user can use the scrollbar to scroll the table up and down to see rows that are not currently visible. In contrast to the common scrollbar, the custom scrollbar gradually fills up as the user scrolls down the table.

Consider the restrictions

The solution has some limitations though:

  • if the content above or below the HTML table changes, the table height needs to be re-computed,
  • the height of each row in the HTML table should be the same. When the first row is rendered, the solution finds out its height. Since we know the height of the first row and of the HTML table, we can easily compute the number of visible rows. The row height should also not change afterwards.

Conclusion

This solution renders the HTML table quickly regardless of the amount of data, because only the rows that can be visible need to be rendered. The next step is implementing the solution under Angular (technically Angular 2) to take advantage of its speed and performance.

The Key Essentials to Remember in Your Digital Transformation Journey

The accelerated pace at which digital transformation is moving across industries can be overwhelming for enterprises that are catching up, deciding where and how to start, or disappointed in some way with their efforts to go digital. It is important to pause and ask: what ensures success in a digital transformation? Though there is no guaranteed approach to success, some key essentials are highly beneficial in ensuring a steady start. One key component is integrating design thinking and digital engineering, something many organizations struggle with.

In this Q&A published by Virtual Strategy Magazine, Moshe Kranc, Chief Technology Officer, Ness Digital Engineering, highlights the importance of having an integrated approach of design thinking and engineering excellence and how Ness, with its Ness Connected framework, is using the approach to offer more value to its clients.

The Q&A also offers additional insights on:

  • The role of the three pillars – customer experience, platform engineering, and big data analytics – in building digital products
  • The key big data trends in 2017
  • Emerging technologies that will have an impact on digital technology in 2017
  • The various risks to success in digital engineering projects and how Ness can help

Read more: http://virtual-strategy.com/2017/01/12/interview-with-moshe-kranc-cto-at-ness-digital-engineering/

BPM is for All Company Sizes – Target the Right Opportunities and Implement the Right Way

Historically, Business Process Management (BPM) solutions have primarily been targeted at large enterprises. Smaller organizations have not invested as heavily due to perceived high costs and unknown value. However, BPM is a valuable tool for almost any business. To realize its potential benefits, it needs to be applied to the right opportunities and in the right way. Done correctly, the benefits will clearly outweigh the costs, and even small businesses can enjoy great ROI.

For large enterprises, the volume of repetition introduces a bigger set of processes eligible for optimization. Their role as industry leaders allows these organizations to dictate how processes should run and how parties that interact with those processes should behave. That authority often extends to both internal and external actors. This is the perfect environment for implementing transformational BPM solutions, as there is ample opportunity to orchestrate and control. In a small business environment, it is more important to evaluate which current processes provide the right opportunities to benefit from BPM solutions. Important factors to consider include:

  • Costs vs. benefits – costs can be both tangible and intangible; typical benefits include cost savings, improved services, and better customer experience
  • Agility vs. predictability – is the process suitably predictable and stable enough to benefit from automation?
  • Growth impact – will automation benefits increase with business growth?

Some business processes are proven candidates for most small to medium-sized organizations to optimize with BPM. Three examples – invoicing, client onboarding, and customer servicing with contract management – benefit from reduced paperwork, automated follow-up, integration with external systems, managed SLAs, and system-managed auditing. For all three processes, the ROI is good, the business activity flows are stable, and the BPM solution’s impact improves as the organization grows.

Approaching BPM in the right way is just as critical as identifying where to use it. There are logical steps that should be followed that will provide insight and tangible benefits throughout the journey:

  • Modeling repetitive business processes – journaling routine activities and examining them, looking for repetitive tasks, bottlenecks, redundancy, and high complexity
  • Selecting appropriate processes – identifying which of the processes documented in the previous step are the best candidates for a BPM solution. If a process is bad, correct it before starting to automate; a bad process will still be bad once it is implemented in BPM
  • Optimizing to address current challenges – what are the key business challenges in the way the process is managed today? Looking for and planning to address inefficiencies will improve the benefits of your BPM solution
  • Leveraging existing strengths – which technologies and processes that you use today are working well? You won’t need to change everything. Focus on identifying the technology shortcomings that need to be addressed, then procure new technologies to fill the gaps
  • Approaching the solution in the right way – business requirements drive functional requirements, which in turn drive technical requirements. Make sure the tail is not wagging the dog and that this order is always followed
  • Leveraging new technologies as appropriate – cloud solutions and omni-channel capabilities are great ways of addressing cost and resource challenges
  • Planning for optimization – implementing the process of process improvement should be universal. Scheduling evaluation and optimization with appropriate business and technical resources will help organizations maximize the benefits they derive from BPM over time

In any organization, as additional processes are implemented, the cost per process implementation goes down and the cross-functional benefits increase. There is more opportunity for transformational benefit in larger organizations, but small companies can benefit too. In a company with 25 employees, if a BPM solution can save each employee one hour of work per week (a conservative estimate), that equals 1,300 man-hours per year (ignoring time off). Cloud-based BPM solutions can cost as little as $200 per month, so there is clearly a good case for adopting BPM solutions at small businesses. The room for error is smaller in smaller organizations, though, so it is critical that the right opportunities are identified and implemented the right way to ensure the benefits of BPM are realized.
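The back-of-the-envelope arithmetic above can be checked directly (the $30/hour loaded labor cost is my own hypothetical figure, not from the article):

```javascript
// Inputs taken from the article's own conservative estimates.
const employees = 25;
const hoursSavedPerWeekPerEmployee = 1;
const weeksPerYear = 52; // ignoring time off

const hoursSavedPerYear = employees * hoursSavedPerWeekPerEmployee * weeksPerYear;
console.log(hoursSavedPerYear); // → 1300 man-hours per year

// Cloud-based BPM at roughly $200 per month.
const bpmCostPerYear = 200 * 12;
console.log(bpmCostPerYear); // → 2400 dollars per year

// Hypothetical loaded labor cost of $30/hour (an assumption for illustration):
console.log(hoursSavedPerYear * 30); // → 39000 dollars of time saved
```

Even under these deliberately modest assumptions, the annual savings exceed the subscription cost many times over.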

API Series Part 3: Understanding the Capabilities of API Management Platforms

So, why do we need an API Management Platform?

An API management platform provides capabilities for managing APIs, from documenting the contract definition to defining the revenue generation model (subscriptions). The platform addresses non-functional needs such as deployment agility, continuous monitoring, usage metrics, subscription models, scalability, and availability. By abstracting away these low-level non-functional concerns, platforms allow developers to concentrate on the functional aspects alone, leading to efficiency.

An API gateway/platform provides an abstraction layer between the consumers and the published actual API endpoints (which are bound to backend services).
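As a rough, vendor-neutral sketch of two of those abstracted concerns — API-key authentication and usage metering — the following is illustrative only (the key names and quota figures are invented, and this is not any vendor's actual API):

```javascript
// Hypothetical subscriber registry: API key -> plan with a call quota.
const subscribers = new Map([
  ["key-abc", { plan: "free", quota: 1000, used: 0 }],
]);

// What a gateway does before forwarding a request to the backend:
// reject unknown keys (401), enforce the quota (429), meter the call.
function gatewayCheck(apiKey) {
  const sub = subscribers.get(apiKey);
  if (!sub) return { allowed: false, status: 401 };
  if (sub.used >= sub.quota) return { allowed: false, status: 429 };
  sub.used += 1; // usage metering feeds analytics and billing
  return { allowed: true, status: 200 };
}

console.log(gatewayCheck("key-abc")); // → { allowed: true, status: 200 }
console.log(gatewayCheck("bogus"));   // → { allowed: false, status: 401 }
```

In a real platform, this logic runs in the gateway layer so that every backend service gets it for free.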

Some of the key vendors of API management/gateway solutions are TIBCO Mashery, Apigee, WSO2, Amazon API Gateway, Kong, and MuleSoft. Kong, an open-source solution, has an interesting architecture in which the core platform can be extended with plugins that enable customization to customer needs. Most of these vendors offer both on-premise and cloud-based solutions.

Key features of an API Management Platform include:

Documentation

  • Well-documented APIs with good examples
  • Tools such as Swagger (swagger.io) simplify documentation techniques

Onboarding

  • Portal for developer self-service
  • Community support
  • API development studio

Deployment Agility

  • Flexible support for cloud and on-premise deployments
  • Automated deployments

Sandbox Environment

  • Ability for developers to test APIs
  • Increases the value of an API and its adoption rate

Security

  • Identity and access management (IAM) for users and developers
  • Tried-and-tested authentication schemes
  • OWASP threat protection

Operations

  • Zero-downtime upgrades
  • Multi-tenancy and scaling
  • Cross-region automated routing & failover

Analytics & Statistics

  • Metrics on API usage
  • Monitoring of API availability, uptime, traffic, and response time

Monetization

  • Define subscription models
  • Define volume and usage tiers
  • Reporting and billing

By now, you have likely realized the crucial role the API Economy plays in this era of digital transformation. There should be no second thoughts about API adoption: it is necessary if an organization wants to provide an exceptional customer experience, optimize its existing application infrastructure, and help grow its business. There are many examples of real-world organizations that have derived great business benefit from adopting an API-centric approach.

Contact Ness if you would like help with your Digital Transformation or API strategy.

This is the final blog in a 3-part series on APIs. “Part 1: API Centric Design” can be viewed here. “Part 2: Key API Design Principles and Best Practices” can be viewed here.

Digital Retail Road Map: Enhancing Customer Experience and Increasing Margins

A new article for Total Retail describes future trends surrounding digital retail, both online and in-store. Elements of the digital roadmap for retail include interactive store experiences, impulse-buy enablement, convenient fulfillment, and insight-driven dynamic pricing. The digital transformation of retail continues to provide new opportunities for brands to differentiate themselves in terms of customer experience, online and offline, while boosting revenues and profit margins.

