Interview
7 min read
For our latest specialist interview in our series speaking to technology leaders from around the world, we’ve welcomed James Kaplan, CEO and Co-Founder of MeetKai.
He founded the startup with his Co-Founder and Chairwoman, Weili Dai, after becoming frustrated with the limitations of current automated assistants.
Kaplan has had a true passion for AI and coding since he was six. He wrote his first bot at only nine years old and went on to write the first Pokémon Go bot.
Now his pioneering work in AI Personalized Conversational Search puts MeetKai in competition with the largest names in tech.
Tell us about the business you represent. What are its vision and goals?
I am the CEO and Co-Founder of MeetKai. We are a conversational intelligence startup that aims to leapfrog over what we consider “chatbot” based virtual assistants.
Our goal is to make a virtual assistant that acts more like a concierge than a simple robot that you command. True conversational intelligence is our vision.
Can you share a little bit about yourself and how you got into the field of artificial intelligence?
My background is a mix of computer science and finance. I founded a quantitative AI-based hedge fund, but I was first drawn to AI as a child. I was fascinated with how game AIs worked, in particular those around simulation and strategy.
My desire to understand how other AI systems worked carried over to finance. After a few years of working in the financial space, though successful, I was more drawn to thinking of ways I could use AI to build products that would help people, not just make money for my investors.
What is conversational AI?
Conversational AI is a broad field, and as a phrase, it seems to encompass more and more each year. It covers all the elements that allow an AI to have a productive conversation with you.
For some people, this means including speech-to-text and text-to-speech within the scope. For others, it can even include the fusion of elements like image understanding to let an AI hold a conversation about visual media.
The real crux of it is understanding. Understanding what a user is saying rather than just the text itself is what I consider to be the hallmark of conversational AI. For this reason, search is an element of conversational AI.
For example, if someone says, “find me a chicken recipe that is not Italian,” then a few different subsystems of conversational AI are engaged. Speech-to-text turns the user’s voice into text.
Natural language understanding determines the meaning of what the user is saying. Knowledge graph search then locates the actual content they may want from the AI’s database.
Finally, text-to-speech returns a voice response. All of the elements above are what I would consider being covered by conversational AI.
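To make that example concrete, here is a toy, text-only sketch of the same flow in Python. Everything in it, the recipe data and the understand and search functions, is a hypothetical stand-in rather than MeetKai's actual stack, and the speech-to-text and text-to-speech steps are left out.

```python
# Toy sketch of a conversational search turn: NLU + knowledge lookup.
# All data and functions here are illustrative placeholders.
RECIPES = [
    {"name": "chicken tikka masala", "ingredient": "chicken", "cuisine": "indian"},
    {"name": "chicken parmigiana", "ingredient": "chicken", "cuisine": "italian"},
]

def understand(utterance: str) -> dict:
    """Natural language understanding: turn the user's text into a structured query."""
    query = {"ingredient": None, "exclude_cuisine": None}
    if "chicken" in utterance.lower():
        query["ingredient"] = "chicken"
    if "not italian" in utterance.lower():
        query["exclude_cuisine"] = "italian"
    return query

def search(query: dict) -> list:
    """Knowledge-graph-style lookup over the assistant's content database."""
    return [
        r["name"]
        for r in RECIPES
        if r["ingredient"] == query["ingredient"]
        and r["cuisine"] != query["exclude_cuisine"]
    ]

def handle_turn(utterance: str) -> str:
    """Speech-to-text and text-to-speech are omitted here: text in, text out."""
    results = search(understand(utterance))
    return f"How about {results[0]}?" if results else "Sorry, I couldn't find anything."

print(handle_turn("find me a chicken recipe that is not Italian"))
# -> How about chicken tikka masala?
```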
How can someone start creating conversational AIs?
The key is to pick a subsegment of conversational AI rather than try to build out the entire pipeline from the start.
There are many open-source tools available for each component, so what you should do is pick a weak link where you think there might be room for disruption.
One particular example I have given to new players in the space is in speech recognition. While it may seem impossible to do speech recognition better than the tech giants, that is only true for general speech recognition.
If you instead say, “this will only be used for ordering fast food,” that opens up a lot of possibilities for building a much higher-accuracy system.
There is no need to recognize the proper nouns of every actor, so why train a model to do so? This specialization approach can be applied to every element in the field, and it is the trick I would suggest people focus on.
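As a toy illustration of that specialization idea (not anything MeetKai ships), the sketch below snaps a noisy general-purpose transcription onto a small, hypothetical fast-food grammar; the menu phrases and the mishearing are invented for the example.

```python
# Constrain the recognizer's output space to a tiny domain "grammar":
# instead of accepting arbitrary text, snap the hypothesis onto the
# closest in-domain phrase. The menu and the mishearing are made up.
import difflib

MENU_PHRASES = [
    "one cheeseburger and fries",
    "two chicken sandwiches",
    "a large cola",
]

def constrained_decode(raw_hypothesis: str) -> str:
    """Map a general-purpose ASR hypothesis to the closest in-domain phrase."""
    return difflib.get_close_matches(raw_hypothesis, MENU_PHRASES, n=1, cutoff=0.0)[0]

# A general recognizer might mishear "two chicken sandwiches" as below;
# the domain-constrained decoder still recovers the intended order.
print(constrained_decode("to chickens and witches"))  # -> two chicken sandwiches
```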
What industries is conversational AI disrupting?
I think conversational AI is going to be disrupting itself for the near future. Many systems built using the precursors to true conversational AI had a lukewarm reception due to their weaknesses.
Revamping them with modern techniques is going to move the needle far forward. My ambition as a practitioner is for conversational AI to disappear into the background, so that you treat it as the natural way to interact with a computer rather than as a novelty.
How do you train a conversational AI?
It depends! If you take a specialization approach, it means spending a large amount of time understanding the space. On the other hand, you can take a generalized approach, where you collect a large volume of sample conversations and train a model based upon it.
One of the most popular ways to do this is through the cutely named “Wizard of Oz” technique. Very simplified, one person pretends to be the AI (the wizard) while another person interacts with it as a user.
The user asks questions, and the person playing the wizard responds the way they think an AI should respond. These conversations become the target behavior the AI is trained to emulate.
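Here is a minimal sketch of how such sessions might become training data, assuming a simple turn-by-turn transcript; the field names and the example dialogue are illustrative, not MeetKai's actual schema.

```python
# Wizard-of-Oz sessions as training pairs: each user turn becomes an input,
# and the wizard reply that follows it becomes the target the model learns
# to emulate. The transcript format and dialogue below are invented.
wizard_of_oz_session = [
    {"speaker": "user",   "text": "find me a chicken recipe that is not Italian"},
    {"speaker": "wizard", "text": "How about chicken tikka masala?"},
    {"speaker": "user",   "text": "sounds great, how long does it take?"},
    {"speaker": "wizard", "text": "About 45 minutes, including the marinade."},
]

def to_training_pairs(session):
    """Pair each user turn with the wizard reply that immediately follows it."""
    pairs = []
    for current, following in zip(session, session[1:]):
        if current["speaker"] == "user" and following["speaker"] == "wizard":
            pairs.append({"input": current["text"], "target": following["text"]})
    return pairs

for pair in to_training_pairs(wizard_of_oz_session):
    print(pair)
```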
What do your day-to-day responsibilities look like at your organisation?
As the CEO, I try to divide my time between doing more “CEO” tasks and tasks that touch the core AI. While wearing my CEO hat, my main task is to meet with partners and customers on the business side to make sure we're moving forward on our product and offerings.
I also try to spend time talking to other companies in the space and comparing notes. It's important to stay on top of everything in the field.
I spend at least a third of my time every day still doing R&D and software engineering. It's a big risk for CEOs of R&D-heavy AI companies that they lose touch with the capabilities and roadmap. If that happens, it can quickly become a cycle of overpromising and under-delivering.
While doing R&D, I lead the search, personalization, and understanding teams at our company. With my software development hat, I manage the core backend team.
What are the key differences between computer science, machine learning and AI?
Computer science is the best way to learn how to think about computer systems and architectures. If you don’t understand everything from a low to a high level about how a machine works, it can become easy to work yourself into a corner.
I see machine learning as an extension of math and statistics into their application. Machine learning can be challenging for people in the industry to reason about because it can be tempting to treat it as a black box where you put data in and get predictions out.
This can even be encouraged at times with concepts like AutoML, where the best algorithms are found through searching all possible algorithms with intelligent brute force.
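As a toy version of that “intelligent brute force” idea, the sketch below tries a handful of scikit-learn models and keeps whichever cross-validates best; real AutoML systems search far larger spaces of models and hyperparameters, so this is only an illustration.

```python
# Minimal model search: evaluate several candidates by cross-validation
# and keep the best one. A crude stand-in for what AutoML automates.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} (mean accuracy {scores[best]:.3f})")
```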
AI is something that I would consider a different bucket from machine learning and computer science. AI is a product and business offering, an end goal. Machine learning and computer science are tools used to get you to that destination.
What are some misconceptions that you believe the average person has about AI?
The biggest misconception I've seen is people thinking that AI will replace humans. I see this a lot in concerns over what society might look like 10-20 years from now. AI is something that heavily augments people rather than outright replaces them.
What advice would you give to someone wishing to start their career in artificial intelligence?
Very simple - find something that interests you and specialize in it. AI is too broad of a field to do it all. I don’t have much interest in computer vision, so I don't specialize in it. Once you find what you specialize in, start building side projects and toy examples.
You can only get so far studying. When we interview candidates for jobs, this is a deciding difference for us. Have you taken on the application of AI, or are you just interested in how it might work? There is a big difference between knowing how something works and only knowing the theory of how it MIGHT work.
Would you like to share any artificial intelligence forecasts or predictions of your own with our readers?
My current “pet” forecast theory is that within three years, completely automated AI translation/interpretation will be more advanced than what a human translator would be able to do in real-time.
Within five years, this will extend to even lower resource languages that have much smaller amounts of data available.
What is your experience of using AI-backed data analysis or log management tools? What do you think is the benefit of using a log management tool that has machine learning capabilities for an organisation?
There are two ways in which we have seen the benefits, and I think it applies to all organizations.
The first in my mind is in making the engineers working with the logs more efficient and capable. If you treat it as something that will allow you to replace one engineer, you set yourself up for a bad experience.
However, if you approach it as “we want to make each engineer 10x more productive,” then I think you are of the right mindset for real transformation.
The second is in helping you respond to events faster. The biggest place we currently use AI for log analysis is security.
By the time you have an engineer write rules for a particular threat model, it might be too late. Using a log management tool that has machine learning capabilities built-in can help spot problems before they're noticed.
Again, it is not a silver bullet. Don't expect your log tool to tell you you've been hacked. See it as a tool to help your organization move faster and react.
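As a rough sketch of that idea, assuming scikit-learn and some hypothetical per-window log features, an anomaly detector fitted on normal activity can flag unusual windows for an engineer to review:

```python
# Anomaly detection over summarized log activity. The feature choices and
# numbers are invented; the point is flagging outliers without hand-written rules.
from sklearn.ensemble import IsolationForest

# One feature vector per time window: [requests, failed_logins, distinct_ips]
normal_activity = [
    [120, 2, 15], [135, 1, 17], [110, 3, 14], [128, 2, 16], [142, 1, 18],
]
new_windows = [
    [130, 2, 16],   # looks like ordinary traffic
    [125, 80, 3],   # burst of failed logins from few IPs: worth a look
]

detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)
for window, label in zip(new_windows, detector.predict(new_windows)):
    status = "anomalous" if label == -1 else "normal"
    print(window, "->", status)
```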
Are there any books, blogs, or any other resources that you highly recommend on the subject of AI?
I am a huge fan of paperswithcode. The latest state-of-the-art papers that include code are posted there. I take most cutting-edge papers with a grain of salt.
If there's code released, then even if I don’t read it, it means that they are willing to put their money where their mouth is. I like to use it as a way of keeping track of how academics and industry are moving.
If you enjoyed this article then why not check out our post on the best Grafana dashboards or our interview with AI ethicist Alice Thwaite next?