Digital Transformation of the enterprise is not new; the trend to enhance a company’s digital experience has been evolving across industries for many years and is, inherently, an iterative process. The progress your company has made in its digital transformation may vary, but a commitment to keep listening, experimenting and improving is the key to future success.
The ability to enrich the digital experience and make it more meaningful and impactful is increasingly critical. “Extreme Personalization” means developing a deep understanding of the customer’s digital journey; delivering on that understanding results in a frictionless, exceptional personal experience that helps retain and attract customers.
Watch this SBI TV interview with Ness Chief Digital Officer, Mark Lister, to hear a new world view around the emerging digital experience trends on topics including:
- How to make a digital experience a competitive differentiator
- Importance of being a “data whisperer” – listening and understanding your data
- Artificial Intelligence – 3 things you need for success
- How to get a sales leader engaged in a digital initiative
Mark Lister brings a deep understanding drawn from his experience helping multiple Ness clients continually evolve their digital presence with digital services, user experience design and data analytics.
Most serious developer teams today have at least some form of code review discipline in place. This means that at least one person other than the author looks through a new code change before the change goes out to production. There is good reason for that: code reviews, even when practiced less formally, have been found to increase the quality of a software product by reducing the number of defects (see the references at the end of this post for research involving modern open-source software projects).
As principal engineer at Ness Digital Engineering, I see teams that do code review with varying degrees of rigor, from a light touch approach, all the way to mandatory, thorough reviews by at least two other engineers.
Irrespective of the actual process, the benefits of a code review discipline in your team go beyond keeping the number of coding defects below your threshold for quality. We will take a look at those benefits first.
What I’ve repeatedly noticed is that very few engineers actually look forward to having their code reviewed. It feels like a trip to the dentist: you know it’s good for you, you know that a small intervention now prevents major decay later, but still your mom or your partner has to push you to go, because the anticipation of the drill in your mouth doesn’t exactly thrill you. In the second part of this post we’ll review some tips for running effective code reviews.
Here are other benefits of code reviews you should keep in mind for your team.
1) Code reviews spread knowledge and create a sense of ownership of the entire code base. If Alice has become an expert in module A of your code base, and Bob is an expert in module B, Alice can learn new things by reviewing Bob’s code changes in module B, and can, in turn, guide Bob when he makes changes in module A. Your team is healthier when it’s free of “cliques,” and everyone feels that it’s OK to make changes to any part of the code. Also, most assume that it’s always the more senior engineer who reviews the junior engineer’s code, but a junior engineer should also be asked to do a review; he or she will learn a lot by reading through and trying to understand a code change from a more experienced colleague. Oh, and, inevitably, the time will come when a junior finds an issue with a senior’s code. To find out what to do to prevent bruised egos, skip towards the end of this post.
2) Code reviews add skin in the game. If I review and approve a code change by Charlie, my name is on the record. If that change introduces a defect, it will not be just Charlie who takes the heat.
2) Code reviews prevent deviation. It’s important for a team’s code base to have a consistent “style of doing things” everywhere. A consistent style reduces the cognitive burden on an engineer who must make changes to an unfamiliar section of the code. The agreed style doesn’t have to be the objectively best way of doing things (it usually isn’t, and deciding which way is best is likely to cause endless debate within the team). Here is an exaggerated example of why consistency is important: you can debate whether it’s better for motor vehicles to drive on the left or on the right side of the road, but an “interim” state, where trucks drive on the right and small cars drive on the left, would be a disaster.
Here are some tips on how to run code reviews effectively and with minimum pain.
- Make them quick and informal. There should be no extensive meetings, and the process should be as light as possible. This lessens the friction for the reviewer. The easier it is for a reviewer to jump in, the more time he or she will have for looking over the code changes, and that’s a good thing.
- Make them frequent. Discourage huge changes and instead, insist that changes be submitted, reviewed and merged after two days of work, maximum.
- Make them painless for the reviewee too. Encourage the reviewers to critique the code verbally in a one-on-one with the author. Discussions will be faster and less confrontational when done face to face. Beware of the cultural sensitivities in your team. Some authors may feel uncomfortable or ashamed if there is a permanent record of the shortcomings of their code, as is normally kept in a GitHub pull request. In that case, it’s better that all reviews happen verbally, and the only permanent record is the sign-off (thumbs-up) from the reviewer.
- Give the reviewer adequate time to actually read and understand the code change. Common sense, as well as research, indicates that the more lines of code per hour a reviewer is asked to review, the less effective the review becomes.
- Make the reviews fair. Ensure reviews are not dictated by the personal taste of the reviewer. Instead, insist that the team have a Coding Conventions document.
- Review with an optimistic mindset. If the reviewer can’t find anything wrong with the code, it’s probably because the code is fine, and it should be given a thumbs-up right away. Don’t guilt-trip reviewers into hunting for issues where there aren’t any; the review activity is sure to descend into nitpicking, and anyone who has been on the receiving end of it remembers the frustration. This is a toxic culture that should not be allowed to spread in your team. Keep the reviewers focused on major functional issues, or issues affecting readability and changeability, such as proper naming and adherence to SOLID principles. Don’t focus on line length or any sort of issue that a static analyzer can catch. In fact, you should have a static analyzer (such as ESLint or Sonar) run before starting a code review.
- Finally, just like with any process, don’t let the code review process go stale. Take the opportunity to improve things after every sprint retrospective. Measure the number of reviews per change, the total time a task spent in code review, and the number and magnitude of new commits added to the changes during review. This gives you an indication of the impact of the reviewers on the code being reviewed.
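To make that measurement concrete, here is a minimal sketch in Python of computing one such metric, the average time a change spends in review. The data here is entirely hypothetical; in practice you would pull the opened/merged timestamps from your code-hosting platform’s API.

```python
from datetime import datetime

def review_turnaround_hours(reviews):
    """Average hours a change spent in review, given (opened, merged) pairs."""
    spans = [(merged - opened).total_seconds() / 3600 for opened, merged in reviews]
    return sum(spans) / len(spans)

# Hypothetical data pulled from your code-hosting platform's API
reviews = [
    (datetime(2018, 3, 1, 9, 0), datetime(2018, 3, 1, 15, 0)),   # 6 h in review
    (datetime(2018, 3, 2, 10, 0), datetime(2018, 3, 3, 10, 0)),  # 24 h in review
]
print(review_turnaround_hours(reviews))  # 15.0
```

Tracking this number sprint over sprint tells you whether reviews are becoming a bottleneck.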
If your team is just getting started with code reviews, you will probably need to choose a tool. Pick one that the team already knows or already has installed. Your code hosting platform (e.g. GitHub, Bitbucket, GitLab) already offers code review features. There are also open-source tools built specifically for code review, e.g. Gerrit, Phabricator Differential, ReviewBoard, as well as many commercial offerings.
References:
- McIntosh, Kamei, Adams, and Hassan, The Impact of Code Review Coverage and Code Review Participation on Software Quality, http://sail.cs.queensu.ca/Downloads/MSR2014_TheImpactOfCodeReviewCoverageAndCodeReviewParticipationOnSoftwareQuality_ACaseStudyOfTheQt,VTK,AndITKProjects.pdf
- Bosu, Greiler, and Bird, Characteristics of Useful Code Reviews: An Empirical Study at Microsoft, https://www.microsoft.com/en-us/research/publication/characteristics-of-useful-code-reviews-an-empirical-study-at-microsoft/
- Kemerer and Paulk, The Impact of Design and Code Reviews on Software Quality: An Empirical Study Based on PSP Data, http://www.pitt.edu/~ckemerer/PSP_Data.pdf
My company is in the business of making better software and making software better. Every customer I talk to is in the midst of some initiative to have new/better/latest software solutions to offer their own customers or employees; they all expect an “Uber-like experience across the channels” if you can forgive my predictable reliance on the common parlance du jour. A regular frustration for those tasked with the ‘make our software better’ challenge is that everyone in the organization is a critic, but few ever get together to work on the answer – or even declare in a crisp, thoughtful, actionable phrase what exactly is the problem. What, specifically, needs to be better?
Organizations that are planning to jump onto the AI (Artificial Intelligence) bandwagon must make sure they have a sense of the business questions they are trying to answer with AI. How to generate new revenues, how to identify new customer segments and new product lines, or how to predict buying behavior: these are some of the important questions that must be asked before moving to full-blown AI implementations. This blog sheds some light on the key factors that are leading to AI going mainstream, the business possibilities and challenges involved, and how Ness can help organizations overcome these challenges to succeed with their AI initiatives.
The availability of data libraries is an important factor driving AI’s popularity and usage. Previously, not many AI libraries were available for use. Today, companies like Google publish libraries, written in Python or R, that can do a regression analysis in just one line of code; a complete analysis takes just 20-40 lines. This is possible because these libraries are open-source technology, available for everybody to use. Libraries that once were available only to researchers are now accessible to all, thanks to social collaboration. Today a high school student who understands Python programming can learn TensorFlow and build a neural network in less than three days. So, the major changes that have supported AI adoption are:
- Availability of flexible computing power on the cloud.
- Availability of data.
- Availability of ready-to-use software libraries.
With these changes, it is now possible for people to build their own algorithms without needing a Ph.D. or access to a research lab.
AI to Drive the Next Level of Business Opportunities
Image classification, natural language processing (NLP), and automated data cleansing are some of the use cases we are currently seeing. It is hard to fully envision the magnitude of impact that AI will have in the coming years, because this is just the tip of the iceberg, and there is certainly a lot more to come. But industries have started realizing the potential of AI in making revolutionary business transformations. Imagine all the connected devices like your watch, car, home appliances, personal health devices etc. throwing data into the Cloud somewhere. The combination of all that data gives organizations the opportunity to draw powerful insights to improve their processes, rethink business models and create personalized customer experiences. Wealth Management, Retail, and Manufacturing are just some of the industries that are poised for rapid AI adoption.
For example, in the Wealth Management space, AI robots are playing the role of wealth advisors who can create endless portfolio combinations based on client data (net worth, income, liabilities, buying patterns, etc.) and come up with highly customized and personalized platforms for them. Banks are also looking to invest in data science to find out what AI can achieve, within the regulatory framework.
Ness’s Strength in AI
Our Connected approach is a unique framework we use to engage with clients. When a Ness team goes into the Discovery phase, it can include a solutions architect who knows AI and machine learning, a delivery manager who works with the client, a subject matter expert such as a data architect, and a UX designer. This composite team of multiple skills engages with the client over a one- to three-day workshop to understand the real business challenges the client is trying to solve with big data or AI.
With a strong understanding of the business problem, we try to help the client through the Envision phase, where we discuss the best practices, do a proof of concept and try some prototypes to help the client envision a roadmap. We then create an implementation plan showing how Ness can help them implement this. The Ness Connected model is a powerful method of engagement – it reduces the risks and builds legitimacy.
Our product engineering focus is one of our unique advantages – we always talk in terms of features, road-map, agile development, faster time to market, automation and more. We have a critical mass of subject matter experts in the company who can coach in the areas of AI, machine learning and big data. With a global footprint, Ness is an ideal partner as you take your first steps into high-end technologies like Big Data and Machine Learning.
With the world and Ness forging ahead in the Machine and Deep Learning space, I wanted to throw open this new age Pandora’s box with some interesting thoughts.
Recently I watched an Indian Sci-fi movie, Robot, in great awe. “Chitti” the robot (Rajinikanth), learns to love Aishwarya Rai, the human. Once denied the affection, the same robot turns into a menacing villain tormenting one and all, till it gets annihilated at the end. How “Chitti” the robot extracted the nuances of human emotion in a partially supervised and partially unsupervised manner is a “deep learning” miracle. This is not just the stuff of Bollywood or Hollywood – these miracles have already begun to happen!
Statistically, most of us interact with AI (Artificial Intelligence) almost every day. We trust the translations emitted by Google, Bing, and Facebook. It just goes to show that wherever there is infrastructure, there is AI. These systems do things that the savviest programmers can’t explain and could be indicative of Elon Musk’s worst fears of AI being more unpredictable and devastating than a nuclear world war.
This makes us wonder whether, having placed our faith in these algorithm-driven machines that we are still learning to understand more deeply, we are losing autonomy, or risking the loss of qualities that are both emotional and cognitive. Would these new entities possess compassion, empathy and altruism? No one has an answer. Could we program morals into the equation when the machine is self-learning? Still unclear!
Some researchers believe that AI could not live in a human body, and hence it would lack a physical amygdala, our fear center. Will an AI that lacks fear also lack any desire for conquest, and therefore be free to pursue a path of pure truth? A true learning AI will gather intelligence from experiences that are vastly different from ours.
Another interesting angle that AI researchers highlight is our misconception that intelligence is all about the human brain – it starts and stops within the head. Recent research has suggested that the brain is more like a receiver, wired up by the experiences we have in the world around us. Our sensory systems play a vital role in shaping our intelligence. Taste, touch, sight, smell and sound create patterns, paths and robust highways within the brain. The intelligence may reside in the senses and the experiences may reside in the brain. Not just the sensory systems, our heart and gut also play a pivotal role in shaping our intelligence.
Coming back to our earlier question: why should we assume that a machine would learn in the same way as our brain, without the five senses, the heart and the gut? It seems highly unlikely. AI is more likely to inhabit our devices, networks and cables than to become a narcissistic demon that hates us.
Moving Ahead in the AI World
For now, AI is just a self-modifying algorithm, whereas we humans are complex, much more awakened than the machines, and our authentic self is a limited edition. With abundant free will and no fear, AI can teach us to break illusions that haunt the human mind and liberate us to face the road ahead.
As an AI savvy enterprise, Ness is currently building a critical mass of trained people who understand the nuances of Statistical Analysis/Machine Learning/Deep Learning to solve business problems and build Intelligent Platforms for our clients.
The AI Think Tank at Ness has already developed accelerators for the retail and financial services industries. One such accelerator is a recommendation engine for online retail. For a product inventory that relies on taste, touch and feel, we have built a customer experience that is immersive for exploration and targeted for relevance, using factors such as colors and art styles to help improve loyalty and increase conversion rates. Our labs have also built Robo Advisor, an ML framework which provides real-time portfolio advice for asset managers in the financial industry.
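The internals of that accelerator aren’t published here, so purely by way of illustration, this is a minimal content-based recommendation sketch in Python: items are described by hypothetical feature vectors (e.g. color and style attributes) and ranked by cosine similarity to an item the customer liked. The item names and features are invented for the example.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical item features: [warm_colors, cool_colors, abstract, figurative]
catalog = {
    "print_a": [0.9, 0.1, 0.8, 0.2],
    "print_b": [0.2, 0.8, 0.1, 0.9],
    "print_c": [0.8, 0.2, 0.7, 0.3],
}

def recommend(liked, catalog, k=1):
    """Return the k catalog items most similar to the one the customer liked."""
    scores = {name: cosine(catalog[liked], vec)
              for name, vec in catalog.items() if name != liked}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("print_a", catalog))  # ['print_c']
```

Production engines layer collaborative filtering and behavioral signals on top of this kind of content similarity, but the core ranking idea is the same.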
For now, the advances from these machine systems are immense and we stand to benefit. In the years to come, machines will have their own place and we will have our space. Co-existence will be the key to the future.
I attended the second Applied Machine Learning Days at the EPFL in Lausanne this year. The event is organized by Marcel Salathe and his team at the Digital Epidemiology lab, where they apply machine learning to uncover the dynamics of health and disease in human populations.
Just like last year’s event, the conference balanced sessions between the use of ML (Machine Learning) in various organizations, framework highlights, spotlights on crowdAI winners, and discussion panels. Unlike last time, though, a number of hands-on workshops also took place over the weekend preceding the two-day conference.
At Ness, we come across various organizations expressing interest in applied machine learning. It was therefore interesting to hear organizations such as Cisco, Google, Swisscom and Bühler Group share their experiences. Jeremiah Harmsen of Google leads a team advancing the use of ML across Google teams. Their activities include ML assignments, education activities and contributions to tooling. Google is unusual in the sense that its products enjoy exceptionally large cohorts of users, which often leads to unusual requirements. (Note: non-GAFA (Google, Apple, Facebook and Amazon) companies eyeing one of their tools or practices tend to overlook this.)

Smart text selection on Android uses ML to assist humans in dropping the infamous selection pins in the right place: an algorithm predicts a meaningful group of words on our behalf. Another application saves smartphone battery while the “Now Playing” feature runs in the background, allowing anytime song identification. It was interesting to hear how Jeremiah’s team disseminates ML knowledge through an “ML ninja rotation” programme across various products at Google. They also organize TensorFlow classes as well as basic machine learning courses, both inside and outside of Google.

On the tooling front, the team assists with practical considerations, e.g. how many layers should I use in my neural network, or what dropout rate should I pick? Jeremiah elaborated on a technique called “Wide & Deep”, which they use to help engineers come up with an effective structure. The technique involves finding a balance between a shallow perceptron-style model and a deeper model with fully connected layers. Check out the link for more details.
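As a rough illustration of the structural idea behind “Wide & Deep” (a sketch, not Google’s implementation), here is a forward-pass-only example in Python with NumPy: a linear “wide” path that memorizes direct feature interactions is summed with a small “deep” MLP path that learns generalizing representations. All shapes and weights below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def wide_and_deep(x, W_wide, W1, W2, w_out):
    """Combine a linear 'wide' path with a small 'deep' MLP path."""
    wide = x @ W_wide                  # memorization: direct linear features
    deep = relu(relu(x @ W1) @ W2)     # generalization: learned representations
    return wide + deep @ w_out         # joint output (before sigmoid/loss)

x = rng.normal(size=(4, 8))            # batch of 4 examples, 8 raw features
W_wide = rng.normal(size=(8, 1))
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 16))
w_out = rng.normal(size=(16, 1))

print(wide_and_deep(x, W_wide, W1, W2, w_out).shape)  # (4, 1)
```

In a real system both paths would be trained jointly; the sketch only shows how the two outputs are combined.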
GAFAs are clearly pushing innovation in ML research and tools. Facebook’s Soumith Chintala presented advances in PyTorch, one of the leading deep learning frameworks, in accommodating recent trends in dynamic deep learning. The event showed, however, how other organizations are also finding ways to make use of these techniques. Swisscom, the leading Swiss mobile operator, presented applications of NLP (Natural Language Processing) and natural language understanding for business. Where Google has data coming out of its ears, it is common for other organizations to struggle to gather enough data for ML success, as Swisscom showed. To compensate for this lack of data, Claudiu Musat established cooperation with academia and gets help from students to devise clever techniques for various portions of the pipeline. See for example EmbedRank or unsupervised aspect term extraction with B-LSTM and CRF.
Cisco’s approach to promoting ML across the enterprise resonates very well with Ness’s own view of imagining futures through user-centric thinking. The organization recognizes the need to identify benefits to digital transformation efforts and considers three key elements in delivering solutions which leverage ML:
- Ability to scale in order to have meaningful impact
- Have a way of building bridges across multiple stakeholders of such solutions
- Ask the hard questions early
If you are familiar with Ness’s Connected approach, this should sound familiar.

Alison Michan of Bühler Group talked about the uses of ML in optical sorting machines used in the food industry. Among the many solutions provided by the Swiss leader in equipment for food processing and advanced materials manufacturing, optical sorting ejects impurities in, e.g., rice production by using cameras and precise air compressors. The image processing algorithms behind the sorting support color-based as well as shape-based sorting. Such machines are complex and require delicate calibration, which is often manual. This can lead to overfitting, and ML techniques can help attenuate that effect while reducing setup time. As we see with other makers of very complex machines, Bühler Group also aims ML at predictive maintenance of its equipment, in cooperation with the Swiss Data Science Center. The use of ML in manufacturing is very exciting for Ness as well, because it helps more partners build bridges between domain experts, IT and data science teams.

On that note, I want to mention that Daniel Whitenack of Pachyderm.io ran a workshop on the weekend and also attended the event. As organizations continue to buy into the potential of ML, they will recognize the need to expand the availability of Python/Scala notebooks and training/dev sets to more and more teams. In a way, notebooks are the “new Excel”, but they require a more advanced infrastructure, which we address in our discovery and envision workshops.
Applied ML Days is also historically tied to crowdAI contests. The platform is an open equivalent of Kaggle, and last year hosted a reinforcement learning competition to train agents to walk and run. Agents are represented by a musculoskeletal model inside an imposed physics-based environment. The winner of Learn how to Walk presented his approach, as well as a project born from his effort in tuning model hyperparameters: eschernode, an online solution which helps explore the solution space in a more visual and convenient manner. The winners of Learn how to Run presented their approach based on proximal policy optimization for solving the perception problem (see their arxiv paper on various DRL techniques). Perception is a complex problem which ties in with Ashby’s Law of Requisite Variety: the model the agent has of its environment must be balanced with the control architecture it has to affect behaviour and change in that environment. The Learn how to Run effort involved six people and 240K CPU-hours over a period of 3 months. Not for the faint-hearted!
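The label-free, reward-driven learning at the heart of such competitions can be illustrated with a much smaller toy problem, a multi-armed bandit. The agent below discovers the best of three actions purely from noisy reward feedback, using an epsilon-greedy strategy; this is a teaching sketch, not the proximal policy optimization the winners used.

```python
import random

random.seed(42)

# Three "arms" with different average rewards; the agent must discover
# the best one purely from reward feedback -- there are no labels.
true_means = [0.2, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon = 0.1  # fraction of steps spent exploring at random

for step in range(2000):
    if random.random() < epsilon:
        arm = random.randrange(3)                        # explore
    else:
        arm = max(range(3), key=lambda a: estimates[a])  # exploit
    reward = true_means[arm] + random.gauss(0, 0.1)
    counts[arm] += 1
    # Incremental running average of observed rewards for this arm
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

best = max(range(3), key=lambda a: estimates[a])
print(best)  # 2, the arm with the highest true mean
```

Full-scale RL replaces the three arms with enormous action spaces and the running average with learned value functions or policies, which is what makes efforts like 240K CPU-hours necessary.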
While last year’s event put more emphasis on deep learning which is one of the ML families that has attracted the most talent in the past years, it was refreshing to hear Christopher Bishop of Microsoft remind us of the power of Probabilistic Graphical Models (PGM). PGMs are powerful mathematical tools to express knowledge about the world in the form of graphs of random variables, which are typically extended to include parameters that govern the distribution of the same variables. Chris reminded us how PGM folks are able to formulate problems from graphs expressing the relationships between variables and [hidden] parameters and arrive at common methods such as Principal Component Analysis (a common dimensionality reduction technique) or a Kalman Filter (used e.g. for active safety in automotive). It was also an opportunity for Chris to promote his most recent book, Model-Based ML Book with Thomas Diethe, which can be found online at http://mbmlbook.com/ (work-in-progress).
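To give a concrete taste of one of those methods, here is a minimal one-dimensional Kalman filter sketch in Python: starting from a very uncertain prior, each noisy measurement pulls the estimate toward itself in proportion to the Kalman gain, while the estimate’s variance shrinks. The measurement values and variances are invented for illustration.

```python
def kalman_1d(measurements, meas_var, init_est=0.0, init_var=1000.0):
    """Estimate a constant true value from noisy measurements,
    tracking both the estimate and its variance."""
    est, var = init_est, init_var
    for z in measurements:
        k = var / (var + meas_var)   # Kalman gain
        est = est + k * (z - est)    # pull estimate toward the measurement
        var = (1 - k) * var          # uncertainty shrinks with each update
    return est, var

# Noisy readings of a true value of 5.0
est, var = kalman_1d([4.8, 5.2, 5.1, 4.9, 5.0], meas_var=0.04)
print(round(est, 2), var < 0.04)  # 5.0 True
```

The full filter adds a state-transition model for values that change over time, which is how it ends up in applications like automotive active safety.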
I will finish with a few words on Raia Hadsell’s talk on deep reinforcement learning at DeepMind, and on the panel discussion. You have perhaps watched the AlphaGo documentary on Netflix. Reinforcement learning differs from the more common supervised learning techniques in that it tackles problems where there are no labelled training sets. In such problems, an action leads to a reward in a changing context, and there is no pre-known label against which to optimize actions, which leads to different cost functions. DeepMind has applied this to playing ATARI games and to teaching an agent to win at chess or Go without databases of openings or endgames. Raia came to talk about the use of end-to-end deep RL in robotics. Robots present a more challenging setting to the DeepMind team, with tight feedback loops between a machine with multiple degrees of freedom and its environment. They are specifically exploring how to deal with multiple tasks, how to learn efficiently without having to go through hundreds of millions of moves, how to learn from real data, and how to deal with continuous control.

From there, the panel discussion with Marcel Salathe, Chris Bishop, Raia Hadsell, Joanna Bryson and Martin Vetterli (president of the EPFL) touched on fascinating topics. I will simply highlight a few observations. Chris advocated for promoting the benefits of ML to balance the FUD that somewhat permeates the wider public discourse. Joanna Bryson (whom I had not mentioned; she works on extremely interesting ethical aspects of ML, e.g. The Legal Lacuna of Synthetic Persons) argued for the importance of regulation and the pitfalls of disintermediating humans in certain decision loops. Raia pointed to the mismatch between societal problems and those that ML talent gets to work on. These remarks are key as Ness expands its partnerships around the way people and machines interact, as well as how people do business together.
ML is bound to find its way into our solutions, and the onus is on us to find net-positive uses of these powerful techniques for the people involved.
Through our On the Job series, we introduce some of the men and women who play a pivotal role in the success story charted by Ness. In this edition, Prachi Dalvi describes her role as a software engineer at Ness, the learning experiences working at the company, and more.
Name: Prachi Dalvi, Software Engineer
Career Path: I began my professional journey as an intern with an airline company. In this role, I got to learn new-age technologies like sentiment analysis, opinion mining, computational linguistics and natural language processing. After my B.E. in Computers, I joined Ness to start my first real corporate stint. At Ness, I am part of a team that works for a leading organisation in the Learning and Education domain. Working with this team is full of exciting challenges and loads of new learning that provide me a great foundation to strengthen my technical expertise.
Roles and Responsibilities:
Currently, I am working as a Performance and Monitoring Engineer. My key responsibilities include building and maintaining the performance test strategy and framework, as well as scripting and performing test automation. I review requirements, functional specifications, and test environment configuration/management. Part of my responsibility also includes overseeing the execution of performance optimization experiments and recommending short- and long-term plans. Additionally, I am responsible for monitoring and alerts management.
What are your best learning experiences at Ness?
Ness’s work culture is friendly and inspiring. Our seniors have very good knowledge in their areas of work and are ready to help their teams deal with any obstacles. There is a huge focus on continuous learning and employee development, with opportunities that go beyond the regular line of work. Learning is offered through various training sessions, webinars, employee-related forums, technology campaigns etc., ensuring that employees get well-rounded learning opportunities.
Your favorite part of working at Ness?
Ness has a fun working environment. The festive events, special occasions and employee engagement campaigns help employees stay refreshed and come back to work with a lot of enthusiasm, which also reflects in our work. We have friendly, close-knit teams. Our team also holds Lunch and Learn sessions in which we discuss and share ideas on various emerging technologies, like Amazon Web Services, along with tips and tricks for daily tasks. Values such as team bonding, respect and kindness, which are normally only seen in books, are witnessed at Ness.
What do you spend time on when not working?
I like to listen to music as it gets me going and refreshes my mood. I also love travelling, exploring new places and trying out new cuisines.
I love singing but I am way out of tune.
I am a fast and curious person.
First conversations with me never seem like first ones; people often say, “Haven’t I met you before?”
Amazon Go, the cashier-less store that recently opened to the public, is being touted as the boldest experiment by the online retail giant and is catching the attention and envy of retailers worldwide.
Over a year ago, when Amazon announced the opening of an 1800 sq. foot digitized, cashier-less store, the announcement itself was enough to send the retail and business worlds into a frenzy, evoking multiple responses from different corners: speculation on how the retailer was going to make it work, praise for the bold experiment, and even criticism that Amazon had taken it too far this time. Now that the experiment has finally opened to the public, much of the speculation has come to rest, but questions still persist on how the retailer (which functions more like a technology player) is going to make this blend of cutting-edge technologies work to provide the highest level of customer experience the retail space has ever witnessed.
An Impressive Blend of Technologies
The Amazon Go store is a perfect example of how different sophisticated technologies can work together to make customers’ lives simpler. The basic premise of this experiment is that customers won’t have to check out, a way of shopping popularly called “Just Walk Out shopping.” Amid retailers facing big challenges with integrating online and offline, here Amazon comes up with a mode of physical shopping that appears to seamlessly blend both. All a customer needs to do is present the Amazon Go app on his/her smartphone at the entry gate and start shopping. Customers can see and pick what they want, and at the same time, drastically reduce the time and effort that they put into their shopping.

Delving deeper into the technologies that lie at the heart of this experiment, one can think of high-tech sensors and massive computing power enabling this seamless and convenient shopping experience. The moment you enter the store, hundreds of cameras and sensors on the ceiling and on the shelves start to track your moves, recognize the items you pick up, and add them to the virtual shopping cart. These cameras use computer vision technology along with sensor fusion (combining data from different sensors) to detect and identify the items that are being picked up. It also enables the system to identify and remove objects from the cart that are put back on the shelves. Large amounts of data get created at different stages of shopping that hold troves of insights. Deep Learning technology (the use of advanced pattern recognition) enables systems to generate deeper insights from these data points to achieve greater levels of personalization in the future.
When you walk out, you are charged on your credit card through the app, and a digital receipt is created. Industry experts and reporters are carefully analyzing and keeping close watch for any possible errors that might pop up, as this kind of high-tech system could also be prone to privacy issues, theft and other vulnerabilities. For instance, what happens if someone picks up an item and quickly replaces it with something else, or if people are shopping in groups? Reportedly, so far, the system is working fine, correctly identifying items and the total bill amount in different scenarios. The system is also accurately measuring the amount of time the user spends inside the store.
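Amazon has not disclosed how its sensor fusion actually works, but the general principle of combining readings of differing reliability can be sketched with inverse-variance weighting, a classic fusion rule. The sensors and numbers below are entirely hypothetical.

```python
def fuse(readings):
    """Inverse-variance weighted fusion of noisy sensor readings.
    Each reading is (value, variance); more reliable sensors get more weight."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(readings, weights)) / total
    variance = 1.0 / total  # fused estimate is more certain than any input
    return value, variance

# Hypothetical: a ceiling camera and a shelf weight sensor both estimate
# how many units of an item a shopper picked up.
camera = (2.2, 0.5)   # noisier estimate
shelf = (1.9, 0.1)    # more precise estimate
value, variance = fuse([camera, shelf])
print(round(value, 2))  # 1.95
```

The fused answer sits closer to the more reliable sensor, which is exactly why combining cameras with shelf sensors beats either one alone.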
A Powerful Experiment
This experiment uses technology to simplify and answer the most basic problem of the customer – in this case, grocery shoppers who go through a rather unpleasant experience of standing in long queues, often having to drag carts and cajole kids who don’t want to be there. Shoppers may have often dreamt of a scenario where they could just grab what they want and walk out of the store. Amazon Go makes this dream a reality, creating a more pleasurable shopping experience simply by getting ‘out of the shopper’s way.’