
Interview


For the newest instalment in our series of interviews with leading technology specialists, we've welcomed Matthew Renze. Matthew is a specialist in artificial intelligence who has given over 100 talks on the subject and is the author of numerous AI courses on Pluralsight.

Tell us about the business you represent. What are its vision and goals?

I am the owner of Renze Consulting, a data science consulting firm in Las Vegas. Our mission is to prepare individuals and organizations for the coming wave of AI-enabled automation through technology education. Our clients range from small start-ups to Fortune 500 companies.

Can you share a little bit about yourself and how you got into the field of artificial intelligence?

I originally went to college to study computer programming. After college, I became a software developer working on large-scale data-driven web applications. However, I hit a ceiling with my career, so I went back to college to study Data Science and AI.

I completed double degrees in Computer Science and Philosophy, with a minor in Economics. Then I also acquired a Data Science specialization through Johns Hopkins. With my new education in hand, I rebranded myself as a Data Science consultant and started my own consulting firm.

As a consultant, I worked on several cutting-edge projects involving data science, AI, and ML. Then I started teaching online courses. Now, I spend most of my time teaching my clients about AI, ML, and Data Science. In addition, I speak at conferences around the world on these topics. To date, I've given over 100 talks on every continent (including Antarctica).

What do your day-to-day responsibilities look like at your organisation?

The bulk of my work is divided into two main areas: running the business and creating/delivering content. Running the business is like any other business. You deal with marketing, clients, accounting, legal, etc. However, creating and delivering content is the part I find most rewarding.

I create and deliver online courses, on-site training, one-on-one consulting, and technical prototypes. I currently spend the majority of my time creating and maintaining my existing 15 online courses. They cover topics including AI, ML, Data Science, and Software Development.

What are the key differences between computer science, machine learning and AI?

Computer Science is much broader than the field of AI or ML. CS is focused on the study of computation in general. It covers a very wide variety of topics including the theory of computation, computational complexity, compilers, algorithms, set theory, etc. It's essentially a subfield of mathematics.

Artificial Intelligence is a multidisciplinary study of the theory of intelligence. It combines aspects of computer science, statistics, and neuroscience. It's mainly focused on understanding how we can make computers act in rational ways. Essentially, it tries to answer the question "how do you make a machine that can perceive its environment and choose actions that maximize the expected likelihood of achieving a goal of some kind?"

Machine Learning is a subfield of AI that's focused specifically on teaching machines how to learn from data using statistics. Essentially, we're teaching a machine how to make rational decisions by learning the patterns that exist within the data.

To summarize, ML is a subset of AI and AI is a subset of CS plus statistics and neuroscience. However, there is a lot of overlap between the three fields of study.
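The idea of "learning the patterns that exist within the data" can be illustrated with a toy sketch. This is not any specific algorithm Matthew describes; it is a minimal, hypothetical nearest-centroid classifier with made-up data, showing how a decision rule is learned from labelled examples rather than programmed by hand:

```python
# A toy illustration of learning patterns from data: a nearest-centroid
# classifier. All data here is invented purely for illustration.

def train(examples):
    """Learn one centroid (mean vector) per class from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Choose the class whose learned centroid is closest (Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical labelled training data: (height_cm, weight_kg) -> species
data = [
    ((30.0, 4.0), "cat"), ((35.0, 5.0), "cat"),
    ((60.0, 25.0), "dog"), ((70.0, 30.0), "dog"),
]
model = train(data)
print(predict(model, (32.0, 4.5)))   # near the "cat" centroid
print(predict(model, (65.0, 28.0)))  # near the "dog" centroid
```

The key point is that the decision rule (the centroids) comes entirely from the data; change the examples and the machine learns a different rule.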

What are some misconceptions that you believe the average person has about AI?

There are actually quite a few:

  • AI will take all of our jobs
  • There's no way AI will take my job
  • AI will inherently be good for society
  • AI will inherently be evil to society

I think the single biggest misconception is that most people don't actually understand what AI is. There is a vast difference between Artificial Narrow Intelligence (ANI), which we currently have, and Artificial General Intelligence (AGI), which doesn't yet exist and is likely a decade or more away from becoming a reality. Unfortunately, most people conflate these two concepts.

What advice would you give to someone wishing to start their career in artificial intelligence?

There are five key pieces of advice I give everyone to prepare their career for AI:

  1. Educate yourself about AI - learn about AI, what it is, why it's important, its pros/cons, and how to solve real-world problems with AI
  2. Upgrade your career for AI - understand where you provide unique value in a labour economy that will be largely automated
  3. Invest in an AI economy - don't depend solely on your labour for income in a world where automation will reduce the value of human labour
  4. Use AI responsibly and ethically - AI is neither inherently good nor evil -- it's up to us to use it in ways that make our world a better place
  5. Adapt or become obsolete - our world is changing fast, and you either need to adapt to these changes or you will become obsolete

Would you like to share any artificial intelligence forecasts or predictions of your own with our readers?

It's really difficult to make accurate predictions about the future of AI. However, there are a few trends that I'm pretty confident about.

First, we will likely see a large number of jobs disappear due to AI automation in the next few decades. However, we will also see many new jobs created to support this new AI labour force. This is going to cause a major disruption to our labour economy with unemployment, retraining, and early retirement. The big question right now is whether there will be more new jobs created than those destroyed by AI automation.

Second, data will likely become one of the most valuable assets in our society. Those with the most data and the ability to leverage it for data science and training AI will wield tremendous power in our information economy. Large companies with lots of data will have a significant advantage over small and medium businesses - well beyond simple economies of scale.

What do you think is the benefit of using a log management tool that has machine learning capabilities for an organisation?

Humans are exceptionally good at detecting patterns in sensory data. However, we fail miserably when the size of the data grows beyond our very limited capabilities. So, we need to automate the analysis of large data sets to identify patterns of interest more efficiently and effectively.

Computer security logs are a perfect example. Humans lack the capability to scan millions of rows of logs in a few seconds and see important patterns in them, like a security breach for example. However, if we provide an ML algorithm with a set of logs and a few examples of what a security intrusion looks like, the algorithm will learn to detect similar patterns in new data and alert a human in real time when a cyber attack is occurring.
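One simple way to make this concrete is a baseline-and-threshold sketch: learn what "normal" event volume looks like from historical logs, then flag windows that deviate sharply. This is a hypothetical illustration, not the approach of any particular log management tool, and the counts are invented:

```python
import statistics

# A minimal sketch of anomaly detection over log data: learn the normal
# range of events-per-minute, then flag windows far outside that range.
# The log counts below are hypothetical.

def train_baseline(event_counts):
    """Learn the mean and spread of events-per-minute from historical logs."""
    return statistics.mean(event_counts), statistics.pstdev(event_counts)

def is_anomalous(baseline, count, threshold=3.0):
    """Flag a window whose count lies more than `threshold` deviations out."""
    mean, stdev = baseline
    return abs(count - mean) > threshold * stdev

# Failed-login counts per minute observed during normal operation (made up)
normal = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
baseline = train_baseline(normal)

print(is_anomalous(baseline, 3))    # typical traffic, not flagged
print(is_anomalous(baseline, 250))  # a burst suggesting a brute-force attempt
```

Real systems learn far richer features than a single count, but the principle is the same: the machine scans volumes of data no human could, and only escalates the patterns worth a human's attention.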

As data becomes increasingly important to the basic functioning of our highly technological society, we will see more and more cases where we need to rely on machines to operate effectively.

Are there any books, blogs, or other resources that you highly recommend on the subject of AI?

Obviously, I recommend all of my online courses and especially my free online course on Preparing Your Career for AI. In addition, I consume a lot of content from Two Minute Papers, Andrew Ng, OpenAI, MIT, and Johns Hopkins.

If you enjoyed this article, why not check out our previous post explaining the meaning of CI/CD or our guide covering Apache Hadoop vs Spark?


© 2024 Logit.io Ltd, All rights reserved.