What is a decision tree and why should my chatbot use it?

The most effective way to discover the intent behind your customer’s questions and provide the right answer is by using a decision tree. What are they and how do they work?
When it comes to chatbots, businesses want to know one thing. The million-dollar question for a market that will be worth billions within a few years is – can my virtual agent answer my customers' questions?
Assuming your chatbot has robust natural language processing (NLP), the most effective way to do this is through decision trees.
What is a decision tree exactly?
In the context of chatbots, a decision tree essentially helps the bot find the exact answer to your question.
The root of the tree is your initial question. For example, you might ask to buy tickets for a concert. Of course, a chatbot will need more information than that to fulfill your request.
To do this, it will ask a series of questions – the branches of the decision tree, if you like. Each one narrows down the customer's goal through chatbot intents.
Therefore, when buying a ticket, the chatbot might ask who you want to see. Having selected "U2", it might then ask for the date and venue you want, then the price range and finally the specific seat.
Only once we have reached the end (or the leaves) and you have selected your seat has the decision tree ended.
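The ticket-buying flow above can be sketched as a simple tree of question nodes. This is a hypothetical illustration of the concept, not Inbenta's implementation; all names and data are made up, and a real bot would match replies with NLP rather than exact strings.

```python
# Minimal sketch of a chatbot decision tree (illustrative, not a vendor's API).
# The root is the user's initial intent; each branch is a clarifying question;
# a leaf holds the final answer or action.

class Node:
    def __init__(self, question=None, branches=None, answer=None):
        self.question = question        # asked at internal nodes
        self.branches = branches or {}  # user reply -> child node
        self.answer = answer            # set only at leaves

    def is_leaf(self):
        return self.answer is not None

# "Buy a concert ticket" subtree: artist -> date -> seat.
tree = Node("Which artist do you want to see?", {
    "U2": Node("Which date works for you?", {
        "June 12": Node("Pick a seat tier.", {
            "Front row": Node(answer="Booked: U2, June 12, front row."),
            "Balcony":   Node(answer="Booked: U2, June 12, balcony."),
        }),
    }),
})

def walk(node, replies):
    """Descend the tree, consuming one user reply per question."""
    for reply in replies:
        if node.is_leaf():
            break
        node = node.branches[reply]
    return node.answer

print(walk(tree, ["U2", "June 12", "Balcony"]))
```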
Making transactions using a decision tree in chatbots
Decision trees are flexible enough to carry out a number of functions for your virtual agent.
One major problem companies face is shopping cart abandonment – the rate currently stands at around 78%, costing an estimated $4 trillion a year.
Decision tree transactions can significantly reduce this by ensuring customers locate exactly what they want in a conversational format. Just like in the concert ticket example above, users can buy almost anything through this method.
In terms of cart abandonment, the chatbot’s decision tree is able to streamline the actual payment process at the checkout to ensure the user does not become frustrated.
Decision tree: solving the difficult questions
In addition, decision trees can improve the overall customer experience by tackling nonmonetary transactions such as password recovery. Bots can identify which account you want to change by asking for your details.
Decision trees can also replace general FAQs. A major problem with help sites is that their answers are far too general for customers who value personal interaction. The decision tree is able to initiate a conversation with the user to understand exactly which answer is the most relevant to them.
For example, a user might want to know when their package is arriving. Naturally, more information is needed. A decision tree can account for this by asking for an order number or anything else that identifies that exact purchase before informing the customer.
Meanwhile, a company with general FAQs will not be able to provide the same level of service and will have to rely on live agents – this can be costly and time consuming for both customer and company.
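The package-tracking exchange above boils down to collecting a required piece of information before answering. A rough sketch of that "ask before answering" pattern, using invented order data rather than any real system:

```python
# Sketch of slot filling: collect a required slot (order number) before
# resolving a FAQ-style question. Hypothetical data, not a real API.
import re

ORDERS = {"A123": "arriving Thursday", "B456": "arriving Monday"}

def track_package(message, context):
    """Return the bot's reply; `context` persists across turns."""
    match = re.search(r"\b([A-Z]\d{3})\b", message)
    if match:
        context["order"] = match.group(1)
    if "order" not in context:
        return "Sure - what is your order number?"
    status = ORDERS.get(context["order"], "not found")
    return f"Order {context['order']} is {status}."

ctx = {}
print(track_package("When is my package arriving?", ctx))
print(track_package("It's A123", ctx))
```

A general FAQ page can only answer "it depends on your order"; the conversational version resolves the specific purchase.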
Have you got a real-life example?
Inbenta's chatbot Veronica uses decision trees for a variety of scenarios. For example, consider the decision tree created when someone asks Veronica about integrating Inbenta technology on their platform.

Veronica can provide some background before asking what platform the person uses. This is an example of a decision tree based on natural language, as the customer is asked to type in their answer rather than select an option. Note how Veronica can recognize 'Facebook' even when it is misspelled.
This decision tree has been edited on Inbenta’s Backstage by arranging the existing contents into a coherent journey of discovery towards the user’s intent.

Whatever chatbot you choose to handle your decision trees must be able to cater for some of the more complex topics that might occur. For example, a decision tree about a return policy must offer different options depending on how long the user has had the product.
Other decision tree structures include a 'Buttons' format, which presents the user with all the available options to select from, and a 'Yes/No' format, which narrows down what the user wants through a series of questions.
Customers want a personal service from companies, but they also want their questions answered correctly. Using decision trees in your chatbot satisfies both of these basic desires.
Inbenta utilizes its patented natural language processing and more than 11 years of research and development to create interactive chatbots with an industry-leading self-service rate of over 90%.
Companies around the world including Ticketmaster UK utilize the Inbenta Chatbot to maintain a personal service for their customers while reducing support tickets.
Interested in finding out more? Our team of experts is at your service to design a custom proposal for you.

Let’s get in touch

AI in sport: it’s in the game

This article looks at the practical applications of artificial intelligence (AI) in sport. How long before every athlete is AI-powered?
“I would like to thank my teammates, our coaches and of course, my parents. But most of all I want to thank our AI-powered robotic coach for making us the champions we are today.”
OK, perhaps this acceptance speech might not take place at next year’s Super Bowl. But AI’s impact on sport is already considerable and will continue to grow as franchises seek game-changing advantages over their opponents.
The likes of Billy Beane with “Moneyball” and Sam Hinkie’s “The Process” have been celebrated in some quarters as revolutionary methods to transform the fortunes of a franchise. The next development might not have a face to it, but AI could send sport down a path it has never seen before. The AI arms race has begun.
Wearables in sport
Wearables are not just for fitness fanatics hoping to post a new low resting heart rate to social media; they are also being incorporated into the world’s biggest leagues. In the 2015/16 season, the Premier League allowed players to use them during matches for the first time.
The devices are able to analyze the workload placed on players to determine whether they have been under- or over-trained and when an injury is imminent. With championships sometimes determined by which team is the healthiest, knowing the best time to reintroduce your injured star to the first team is crucial.
It is also hoped that AI-powered wearables will be able to improve player techniques in the future. As the devices can already detect which foot players are pushing off from, it is not a great leap (no pun intended) to assume they will soon be able to analyze their positioning when striking the ball and will be able to recommend adjustments.
With wearables already available on the likes of cricket bats and hockey sticks to measure strength and technique, the days of athletes putting on seemingly normal shirts filled with sensors to measure valuable data are not far away.
Sport safety
It has been one of the most controversial topics in the NFL as more studies emerge highlighting the potential long-term brain injuries suffered by players. AI is going a long way to help identify and treat potential cases.
One example comes from researchers in Montreal who can now detect the long-term effects of concussions, which can cause lasting brain damage. Their AI has identified that players who suffered a concussion had abnormal white matter connections in parts of the brain, which could indicate degeneration. It claims an accuracy of 90% in detecting concussions.
Identifying the mentally tough
Hard work beats talent.
A phrase you might have heard from any sports star, coach or even a pundit paid to find a fault. Indeed, sports stars in general are blessed with such physical attributes that it takes something more to separate the best from the very best.
One AI startup is trying to make the task of discovering this easier through AI and machine learning to understand players’ mental strength. Receptiviti has created a language psychology AI platform which can analyze the mentality of players.
For example, it believes it can hand potential player recruits an iPad and ask a few open-ended questions to understand what type of player they will be. Not only will this help identify the very best players in the long run, but it will also save coaches’ time chasing after players who may not live up to their potential.
Sport is a game of inches according to the cliche. Artificial intelligence in its various guises will represent a giant step forward for any franchise willing to invest.

The history of the search engine: from index cards to the AI chatbot

One of the greatest improvements brought about by the internet is the ability to find answers to almost anything almost immediately thanks to the search engine. How did we evolve from the days of index cards to AI-powered chatbots?
How did people find answers before the internet?
Even those of us old enough to remember a life pre-web struggle to recall how we did our homework, checked for correct spellings or even resisted the urge to ask those questions we wouldn’t dare ask aloud without the relative safety of a search engine like Google.
And yet humans somehow managed to exist before the World Wide Web was created in 1989. Since then, search has improved exponentially to the point where a personal chatbot to help with our most routine tasks is becoming a reality. But how did we reach this point?
Before the internet:
Searching was far more laborious, and in many cases would not have taken place at all before the creation of the search engine.
Index cards were first popularized by Carl Linnaeus to classify more than 12,000 species of plants and animals. In the years following his idea, libraries began to rely on them to index their collections.
Eventually, libraries settled on the Dewey Decimal System which organized all books by subject, author and title – one which is still in place across libraries today.
The first search engine:
With the invention of the internet came the first example of what we are familiar with today. But it was not Google, Yahoo or even Ask Jeeves that first introduced this whole new concept to us.
Archie was written in 1990 by Alan Emtage and indexed all the file lists of as many public FTP servers as possible to allow users to find and download publicly available files.
While it was not on a par with the search technology available today it was indeed better than the alternative – word of mouth.
The web directory:
For a while, it was regarded as the internet’s most important search engine, but that label did not fit the early versions of Yahoo. In fact, it was considered a web directory that relied on humans to summarize websites with short descriptions and to organize them into categories.
Created in 1994, Yahoo became so popular that publishers would delay posting their websites to ensure they would be included. Despite the advancements in search, the Yahoo Directory did manage to survive until 2014 when it was closed for good.  
The first web crawler:
1994 also saw the first web crawler released – appropriately titled, WebCrawler. It was the first to index entire pages and became so popular that at one point it could not be used during the day.
Natural language search:
A search engine that Google arguably owes a lot to, Altavista was a pioneer in many of the online search techniques which we are still using today.
Notably, in 1995 Altavista became the first search engine to incorporate natural language technology. Among other achievements it also provided the first searchable full-text database of the web, allowed multi-language search and even translated pages.
Altavista’s move away from streamlined search towards a more complex web portal ultimately led to its demise as users flocked to the up-and-coming Google.
Google:
Which finally takes us to the granddaddy of them all. Google’s success can be attributed to many factors, but its most significant selling point was its famous algorithm, which could return more relevant search results than its competitors within fractions of a second.
In 1996 when Larry Page and Sergey Brin launched BackRub – Google’s precursor – they realized that their algorithm knew which webpage was the best for a topic based on accumulated links and, more importantly, citations from the most authoritative websites. It was this focus on the relevancy of a website that made Google so popular.
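The idea of ranking pages by the links and citations they accumulate can be illustrated with a toy power-iteration version of PageRank. The three-page graph, damping factor and iteration count below are illustrative assumptions; this is not Google's production system.

```python
# Toy PageRank by power iteration (illustrative only).
def pagerank(links, damping=0.85, iters=50):
    """links: page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Every page keeps a small "teleport" share, plus link-passed rank.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Page C is cited by both A and B, so it ends up with the highest rank.
graph = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # -> C
```

The page with the most incoming links from other ranked pages rises to the top, which is the intuition behind BackRub's citation-based scoring.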

Semantic search engines:
While Google was able to provide the world with answers to searches instantly, companies were still struggling to do the same on their own websites.
The Inbenta Semantic Search Engine was first created in 2010 and was able to understand searcher intent and the contextual meaning behind customers’ searches rather than rely on keywords. Much of this capability was due to Inbenta’s patented natural language processing, which significantly improved companies’ self-service rates.
Voice recognition:
The concept of computers that could understand our voice had been around for 50 years or so before Apple’s Siri and Google brought it into the mainstream.
Google added “personalized recognition” to Voice Search on its Android phones in 2010 as well as its Chrome browser in 2011. Its English Voice Search now incorporates 230 billion words from actual queries.
A Stanford Research Institute spin-off was sold to Apple in 2010 and led to Siri and its cloud-based processing. Ironically, its first offering was far more potent than the version embedded on our iPhones today – it was more intuitive, connected to the web and could detect meaning from sentences more effectively.
The AI chatbot:
Chatbots have existed since Eliza was billed in 1966 as the world’s first ‘chatterbot’ capable of communicating with humans as a psychotherapist would.
Only now have virtual agents started to make their mark in the search world by providing customers with information across all forms of social media as well as on company websites.
Many of them are powered by artificial intelligence and natural language processing, which has given users a more personal search experience – think of a shop assistant minus the need to step out of your house. Ticketmaster’s chatbot is one example of how this now works.
One chatbot to rule them all:
What is the next step in the search world? Chatbots are now starting to combine natural language processing with machine learning. This combination leads to agents that provide high self-service rates and improve as they gather more data.
Not only will bots become more accurate but we will soon be able to carry out all our searches as well as any transactions within a single conversation. Regardless of whether it is ordering a pizza, comparing the best energy prices or keeping up to date with the latest in the NBA it will all soon be handled within the same digital space.
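A crude way to see how a bot can "improve as it gathers more data" is an intent classifier whose accuracy grows with each training phrase. The word-overlap scoring below is a deliberately naive illustration; the intents and phrases are invented, and real NLP systems are far richer than this.

```python
# Minimal sketch of an intent classifier that improves with data (hypothetical).
from collections import Counter

class IntentClassifier:
    def __init__(self):
        self.examples = {}  # intent -> Counter of words from training phrases

    def train(self, phrase, intent):
        self.examples.setdefault(intent, Counter()).update(phrase.lower().split())

    def classify(self, phrase):
        words = phrase.lower().split()
        # Score each intent by word overlap with its training phrases.
        scores = {intent: sum(bag[w] for w in words)
                  for intent, bag in self.examples.items()}
        return max(scores, key=scores.get)

bot = IntentClassifier()
bot.train("order a large pizza", "order_food")
bot.train("I want to buy a pizza", "order_food")
bot.train("compare energy prices for my home", "compare_energy")
print(bot.classify("can I buy a pizza"))  # -> order_food
```

Every new labeled phrase fed to `train` widens the vocabulary the classifier can recognize, which is the data-driven improvement the text describes.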
The developers behind the search engine Ask Jeeves might have had a point when they decided to make a butler the face of their company. Search technology is doing all it can to adapt to us. It will continue to do so in ways that we cannot even comprehend.
Inbenta is a leader in natural language processing and artificial intelligence for customer support, e-commerce and conversational chatbots, providing an easy-to-deploy solution that improves customer satisfaction, reduces support costs, and increases revenue.
Interested in finding out more? Our team of experts is available to show you how Inbenta can benefit your company.

Let’s get in touch

AI doctors: how artificial intelligence can help medicine

With healthcare systems around the world under increasing pressure, artificial intelligence can provide a much-needed boost for doctors.
Around 12 million American adults are misdiagnosed every year by doctors. That is enough to fill the country of Ireland two and a half times over.
With an aging population, making the country’s healthcare system more accurate and efficient is becoming an increasing priority. Fortunately, there is a solution.
Artificial intelligence is already making great strides in treating patients and also reducing the strain on medical staff.
Crowdsourcing diagnoses:
Why ask for a second opinion on a condition when you could have a ten-thousandth?
Human DX uses machine learning to help doctors deal with difficult medical cases by soliciting advice from fellow experts across 70 countries. Physicians can ask questions on the app/website while uploading relevant images of the condition and any related tests. The AI program then calculates all the responses and provides a single report.
The potential of Human DX is clear when you explore the rising waiting times in some of the world’s wealthiest countries. Studies have shown that the number of online consultations will increase cumulatively by 25% a year over the next five years, sparing both patients and doctors significant costs and wait times.
While providing more accurate diagnoses for patients, Human DX could improve the skills of doctors by tracking their clinical performance and offering recommendations in specific areas. Indeed, many physicians might feel more comfortable taking advice from machines rather than being judged by more experienced medics.
AI doctors: detecting diseases
Not only can AI help with difficult diagnoses but it can also spot the early signs of many illnesses.
Google has already made progress on this front by using machine learning to detect signs of eye diseases related to diabetes using image recognition algorithms. Google’s software can spot tiny aneurysms which can cause blindness by examining photos of a patient’s retina.
It is incorporating its diagnosis system in India alongside the Aravind Eye Care System which provided some of the images to train its image parsing algorithms. Google uses the same deep learning technique in its image search to differentiate between cats and dogs for example.
In fact, Google claims its algorithm is so accurate that it is on par with that of ophthalmologists. The next step is implementing it to support the 70 million Indians with diabetes and 400 million sufferers around the world.  
The technology does not necessarily mean fewer jobs for doctors. Google says its algorithms will perform the screening work which staff struggle to fill, freeing them up for more important tasks.
What’s in your DNA?
Artificial intelligence will soon allow us to take full control of our health by unlocking the code to our DNA. Despite the significant progress made in healthcare we still do not know what our genome is telling us.
Companies such as Deep Genomics are attempting to lead the way by interpreting DNA through a system which predicts how genetic variation affects molecules. In fact, its database can already explain how hundreds of millions of genetic variations can impact our genetic code. A greater understanding of our human DNA means doctors can soon provide personalized information to suit each of us – giving us full control of our bodies.
From here, genetic companies such as Rthm can develop tools to understand our genetic makeup and advise on what changes we should make to our daily routines. A more intelligent approach to our long-term health will reduce the strain and resources of our medical staff.
Discovering new drugs faster
Bringing new drugs to market is both costly and time-consuming. A discovery can take 14 years and cost around $2.6 billion before it becomes available.
A major reason for this delay is the need to test the chemical compounds against every possible combination of cells, genetics and any other mutations to ensure it is safe.
Machine learning has the potential to reduce these costs by as much as 70% by analyzing the scientific literature at hand. In fact, Benevolent.AI has already identified two potential drugs for Alzheimer’s through this method.
Given an aging population and fewer resources for medical care, AI is a solution that ensures we do not simply stick a massive band-aid on the problem but rather properly treat it so it never becomes an issue again.

What is the difference between a chatbot and a virtual assistant?

They will soon become a cornerstone of our daily lives but what exactly is the difference between a chatbot and a virtual assistant?
The VHS and the Betamax, the Blu-ray and HD DVD, or more recently the virtual reality headset battle between HTC Vive and Oculus Rift.
The history of technological development is littered with examples of various formats fighting it out for market dominance. At times, these format wars dictate what we call the new invention. When purchasing a high-density optical disc, we tend to ask for a Blu-ray, for example.
As artificial intelligence moves out of its winter, we are encountering confusion over what to call the intelligent computer programs that communicate with us: chatbot or virtual assistant.
Are chatbots and virtual assistants the same?
It depends on who you speak to. A school of thought exists which believes there is no difference and that either one could be an umbrella term for the conversational agent.
If this is the case then it seems redundant to have two names for the same function. Chatbot is by far the more popular term according to Google Trends.
In general, if its primary mode of interaction is through messaging (Slack, Facebook Messenger, etc.), then you are communicating with a chatbot. There is an argument that the likes of Siri cannot be a chatbot because it exists outside of these channels. But this does not feel like enough of a differentiator.
In fact, of more importance is the function of the chatbot (or virtual assistant) that you employ. In this regard, there are some myths surrounding their capabilities which should be debunked.
Myth 1: A chatbot is not intelligent enough
Some of the most powerful chatbots are equipped with robust natural language processing in order to understand the meaning of an inquiry rather than simply the keywords.
Previous bots might have only been able to carry out a limited number of conversations through either hard-coding, wildcard matching of words and phrases or time-consuming keyword training. However, bots powered with NLP are now far more flexible. Unfortunately, many chatbots do not leverage true NLP and are giving chatbots a bad name.
Thanks to machine learning, chatbots will continue to improve and will produce higher self-service rates than ever before.
Myth 2: A virtual assistant can carry out a wider range of functions
While there might be some truth to this now, the gap between what the two hope to achieve is constantly narrowing.
In the past, a chatbot could only perform specific tasks such as a password change or providing information about the weather, whereas the virtual assistant was more wide-ranging in what it offered.
Thanks to advancements in NLP and machine learning, however, this is changing. Chatbots are now far more diverse and can carry out more functions through their ability to understand natural language. The use of decision trees, for example, makes it far easier to discover the exact intent behind user inquiries, broadening chatbot functionality even further.
Myth 3: A virtual assistant is better at remembering context
Even now, virtual assistants struggle to remember key information during conversations, but chatbots are already proving they can store what you tell them.
For example, Inbenta’s chatbot Veronica is able to remember your email address if you provide it to her.
If you tell her “My email address is….” then she will retain that information for future use. Therefore, if you were to ask for a demo she would not require you to resubmit it.
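The remembering-your-email behavior described above amounts to extracting a value from one turn and reusing it in a later one. A rough sketch of that context memory, with invented replies and a simple regex – this is a hypothetical illustration, not Veronica's actual code:

```python
# Sketch of conversational context memory: extract and store the user's
# email so later turns can reuse it (hypothetical illustration).
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class Session:
    def __init__(self):
        self.memory = {}  # facts retained across turns

    def handle(self, message):
        found = EMAIL_RE.search(message)
        if found:
            self.memory["email"] = found.group(0)
            return "Thanks, I'll remember that."
        if "demo" in message.lower():
            email = self.memory.get("email")
            if email:
                return f"Demo request sent to {email}."
            return "Sure - what's your email address?"
        return "How can I help?"

s = Session()
print(s.handle("My email address is jane@example.com"))
print(s.handle("Can I get a demo?"))
```

Because the address lives in the session's memory rather than in a single turn, the demo request in the second message goes through without the user resubmitting it.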
Rather than debate what we should name them, it is important to recognize how the chatbot (or virtual assistant) will provide the most human-like experience possible by understanding our natural language to the best of its capabilities.
