Artificial Intelligence and Staffing One Year In: Part 1

A Two-Part Interview with Odell Tuttle, CTO at Avionté

At CONNECT 2023, Avionté CTO Odell Tuttle addressed an audience of over 500 staffing industry professionals about the state of Artificial Intelligence (AI) and its potential impact on business and the staffing industry. At the time, there was a lot of buzz around AI, and while we were far from fully understanding its capabilities, there was already talk about its endless possibilities and potential impacts on our lives and work. This buzz came with both excitement over the benefits—such as increased efficiency and new work opportunities—and concern about negative impacts, like whether AI would replace workers altogether.

CTO Address at CONNECT 2023

A year later, Odell Tuttle will address our audience again at CONNECT 2024 to discuss how AI has evolved, its imminent impact on the staffing industry, and Avionté’s approach to AI – developing solutions with the greatest potential to improve our customers’ lives and ensure future business success. 

In this interview series, we speak with Odell before CONNECT 2024 to hear how his initial thoughts on AI have evolved over the past year—what has come true and what has been most surprising. 

Interviewer: At last year’s CONNECT, you discussed the significant hype surrounding AI, particularly following major announcements like the launch of ChatGPT. Now that a year has passed, how do you perceive the initial hype? Are we moving beyond it, or does overly fanciful thinking still dominate? And if this is an AI revolution, is it unfolding as you expected?

Odell Tuttle: I would say it’s mostly what we thought. AI companies are building their war chests. Companies that weren’t thinking about AI before, especially software companies, are now scrambling not to be left behind. And even if they’re not a software company and they read any news, they’re trying to figure out what this means for them and make sure they don’t have a competitive disadvantage because they’re missing something. You get FOMO (Fear of Missing Out) that something’s happening and you need to be part of it, and it’s fueling a kind of secondary wave of hype where you’ll see a lot of, “Hey, are you using it yet? I don’t even know if it’s going to bring value to me, but I should be using it.”

But I think we’re still early in this mass-adoption phase of Gen AI and the overall AI revolution. And we all tend to think of AI as a recent phenomenon, but it really isn’t. If you look at the adoption curve for AI, it actually started around 1950 with the idea of the Turing test. The whole principle was about figuring out how we’d know when we’ve achieved true AI – and that was back in 1950. Now, we’re finally at a point where computing power, language models, and neural network data structures have advanced enough to make AI a reality. We’re starting to see this in our own software, where our team looks at the output and says, “I think that passes the Turing test.” 

Some of the things we’re watching: new skill sets emerging in software development. Prompt engineering is a whole new discipline now, and people are shifting gears toward it. It has become a critical element of a modern software development team.

But one surprise I didn’t expect over the last year was how empathic AI was going to show itself as a valuable and sort of amazing tool. If you’ve used tools like Hume, Siena, and others, you’ll notice they really are able to understand and convey human emotion. When you combine those capabilities with advanced language models, the result is an incredibly human-like experience. I think this is going to be huge for AI chatbots. 

Interviewer: Can you quickly define empathic AI for people who may not know? Many people are still very new to AI.

Odell Tuttle: Well, it’s kind of what it sounds like. Traditional AI focuses on logic: you ask a question, and it gives you a conclusive answer. Empathic AI, on the other hand, is designed to bridge the gap with humans through empathy.

AI can now understand and convey emotion when talking to people, which was a surprising development over the past year. While this might be outside the staffing world, it’s going to be revolutionary and speed up AI adoption in robotics and chatbots. That’s going to be pretty cool in the next few years. I think this will propel chatbots to the mainstream mode of communication. 

But when it comes to coding, we might have gotten a bit too excited. AI can do some impressive tricks with code, but its accuracy isn’t quite there yet. Studies show that about 40 to 50 percent of AI-generated code is inaccurate, which means developers must spend time fixing or understanding it. That can slow things down compared to writing the code themselves. However, AI is still a helpful coding assistant—a good copilot for engineers. It’s not quite ready to write a lot of accurate code fast.
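
To make that “copilot, not autopilot” point concrete, here is a minimal, hypothetical Python sketch (the function and data are invented for illustration, not Avionté code): an AI-drafted helper can look perfectly reasonable while missing an edge case that a reviewing developer still has to catch.

```python
def average_fill_time_draft(fill_times_days):
    # AI-drafted version: reads fine, but divides by zero on an empty list,
    # exactly the kind of gap a human reviewer has to catch.
    return sum(fill_times_days) / len(fill_times_days)


def average_fill_time_reviewed(fill_times_days):
    # Reviewed version: the empty case is handled explicitly.
    if not fill_times_days:
        return 0.0
    return sum(fill_times_days) / len(fill_times_days)


if __name__ == "__main__":
    print(average_fill_time_reviewed([3, 5, 4]))  # 4.0
    print(average_fill_time_reviewed([]))         # 0.0 instead of a crash
```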

Interviewer: Let’s go back to something you mentioned earlier. You made an important point about FOMO and how that’s fueling a secondary wave of hype that’s causing people to start using AI without knowing the value that it’s going to bring to them. Do you have any advice for staffing leaders who are experimenting with AI? Is this a good strategy?   

Odell Tuttle: There’s a legitimate, compelling reason to make sure that a technology shift or big wave is not going to leave you behind. And the scientific method is all about exploration, forming hypotheses, and then experimenting. I think everyone needs to do some exploring to really understand what AI can do. It’s often different once you start using it compared to what the marketing says. You have to be hands-on to see its true capabilities and limitations.

If you haven’t already, I encourage you to start using Gen AI in some way. When I began incorporating it into my daily routine, I discovered many small areas where it can add value and improve efficiency. 

Interviewer: That’s interesting and good to know. So, we talked about AI in general, but what about large language models? How have they specifically evolved over the past year?  

Odell Tuttle: Well, LLMs are really the data structure beneath modern generative AI. It wasn’t until LLMs could be built with unsupervised learning that we could do the things we do with Gen AI.

There are a few more entrants in the field. You have the big players – OpenAI, Anthropic, Microsoft, Google, Meta – but there are also a few others in the space. Over the past year, many of these companies have iterated on their models.

For instance, OpenAI has added internet access to their models, making them more fluid in real time. Anthropic has released entirely new versions that are significantly more intelligent, capable, and human-like. All these advancements are happening incredibly fast, and it will be exciting to see where they go in the next couple of years. 

Interviewer: Let’s switch gears a bit. In a previous discussion, you mentioned that technological revolutions often don’t seem revolutionary while they’re happening. This ties into what you said about finding value as you use a program.  

You used the introduction of the spreadsheet as an example—when it first came out, people saw it as an interesting or helpful tool, but not necessarily revolutionary. In retrospect, though, it was quite transformative. Can you explain a bit more about that? 

Odell Tuttle: So, like you said, revolutions aren’t always obvious when you’re in them. It’s usually only in retrospect that you realize their true significance. I read an article a few years ago that looked back at the information age—the era before the current one—and discussed what fueled the rapid pace of M&A deals, complex financial transactions, investment banking, and global business shifts in the 1980s and 90s. 

And, at that time, there was a lot of speculation, with some attributing it to cultural influences like the movie Wall Street and its character Gordon Gekko. But the belief is that much of it stemmed from the advent of the spreadsheet. 

VisiCalc, the first major spreadsheet program, wasn’t a complex tool by today’s standards. What made it revolutionary was the compounding effect of putting it in the hands of every business organization. Suddenly, companies could quickly perform “what-if” analyses, run scenarios, and get instant results to guide their decisions. This ability to make rapid, informed decisions drove innovation and created an arms race for information processing. 

We might be in a similar situation with Gen AI now, as it allows us to perform complex tasks very rapidly and iterate on them—not just with numbers in spreadsheets, but with complex content as well. So, there’s a little bit of a parallel there.  

Interviewer: So, how do we apply this example to AI? What aspects of AI are truly transformational? It reminds me of what you mentioned earlier—when you start using AI, you might have a specific purpose in mind, but as you use it, you discover hidden values that are quite powerful and perhaps not initially apparent. 

Can you speak a bit about that? And how can we apply these lessons to our approach to AI? 

Odell Tuttle: There’s always the intended purpose of something, and then there are the emergent things that come out of it, and the unintended side effects, both positive and negative.

But with Generative AI, we often think that because it’s good with language, and code is essentially just semantics and language, it should make coding more efficient. You can ask it to write or interpret code, and it seems logical to assume that AI will speed up the coding process.

There may be some truth to that, but the biggest impact I’ve seen so far is in the quality of the code. When humans handle mundane tasks like writing out for loops, there’s always a certain percentage of errors. AI, on the other hand, is less likely to make those mistakes. By combining machine-generated code with human oversight, we end up with higher-quality code. So, that’s one of the areas where we’re already seeing improvements that weren’t necessarily expected.
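
As a small, hypothetical illustration of that “mundane loop” point (the names and data here are invented, not from Avionté’s codebase): a hand-rolled index loop leaves room for the classic off-by-one slip, while a generated-and-reviewed version tends toward the idiomatic form with no index arithmetic to get wrong.

```python
candidates = ["Ava", "Ben", "Cleo", "Dre"]

# Hand-rolled index loop: easy to get subtly wrong when typed in a hurry,
# e.g. range(len(candidates) - 1) would silently drop the last candidate.
shortlist_manual = []
for i in range(len(candidates)):
    shortlist_manual.append(candidates[i].upper())

# A generated-then-reviewed version tends toward the idiomatic form,
# which leaves no index arithmetic to get wrong.
shortlist = [name.upper() for name in candidates]

assert shortlist == shortlist_manual
print(shortlist)  # ['AVA', 'BEN', 'CLEO', 'DRE']
```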

Interviewer: So, I know that you’ve done a lot of research on AI tools, and there are a lot of companies out there touting AI and selling their AI functionality. But how many major players are truly worth paying attention to?

Odell Tuttle: I guess my thought on that would be this: there’s going to be a handful of big players. When there’s a major revolution in tech, you usually see some consolidation into an oligopoly, where a few major companies dominate. We’re already seeing this with the base layer for foundation models. 

OpenAI is the most well-known because they were behind ChatGPT and were the first movers. Then a group of former OpenAI leaders who didn’t agree with its direction spun off and started Anthropic, making it another big player. And of course, Microsoft, Google, and Meta are all in the mix as well.

These are the big players for foundation models, but underneath them, there are also smaller LLMs. On top of the foundation models, there will be layers of value-added capabilities, for example, empathic AI tools that enhance the human interface, along with various other tool layers.

It seems like the AI layer is going to supplant the traditional operating system, becoming the interface between humans and machines. That’s where we’re headed, and there will likely be a few major players maintaining competition in that space. 

Interviewer: As you mentioned, there are going to be several players in the industry. We’ll most likely have a foundational oligopoly, and on top of that, there will be layers of smaller companies offering specific tools that build on these foundation models. 

With so many companies reaching out with tools that can help with various tasks, do you have any advice for businesses on what to look for when selecting an AI partner, especially regarding security? What should be considered red flags, and what indicates a company is worth partnering with and can truly add value? 

Odell Tuttle: The reality is, while it’s novel, it’s still an information system, so many of the same principles apply when choosing vendors, partners, and tools as with any other software-as-a-service company.

In fact, many of the same cloud computing criteria still apply. What is their data privacy posture? What is their security infrastructure? What is their availability infrastructure? And then there are some new things we have to think about now because of the nature of what it does. I was just reading an article about how Anthropic’s Claude is now as good as or better than humans at influencing people.

This brings us into a whole new set of considerations around safety, reliability, trust, bias, ethics, and transparency. Organizations like Anthropic are addressing these issues with concepts like constitutional AI, which focuses on providing answers that are helpful, harmless, and honest.

But depending on your industry, it’s important to consider these factors. For example, removing bias from the system will be crucial, especially if you operate in a highly regulated space.

This is the first part of our interview series with Odell Tuttle. Stay tuned for future installments. If you’d like to join us at CONNECT 2024, where Odell will speak in greater depth about AI and its future implications for Avionté and the staffing industry, register here. The event will take place at the Hilton Downtown Minneapolis from July 29-31, 2024, and registration is free for Avionté customers. 

Odell Tuttle
Chief Technology Officer at Avionté

Odell Tuttle oversees the technology teams, tools, and processes that provide the foundation of the Avionté platform. This includes software engineering, cloud infrastructure, and information technology. Odell brings over 28 years of experience building and operating large-scale software platforms.
