As technologies evolve, businesses must rethink their structure, adopting new roles to manage them. Now, new tools let developers unfamiliar with machine learning models start creating them. How might that fit into your future?

Machine learning and artificial intelligence are so difficult to understand that only a few very smart computer scientists know how to build them. But the designers of a new tool have a big ambition: to create the JavaScript for AI.

The tool, called Cortex, uses a graphical user interface to make it so that building an AI model doesn’t require a PhD. The honeycomb-like interface, designed by Mark Rolston of Argodesign, enables developers–and even designers–to use premade AI “skills,” as Rolston describes them, that can do things like sentiment analysis or natural language processing. They can then drag and drop these skills into an interface that shows the progression of the model. The key? Using a visual layout to organize the system makes it more accessible to non-scientists.

“Stringing together things is a thing even a child learns,” explains Rolston. “By simplifying that orchestration aspect, the stuff that’s going to stay hard–like the data transforms–are easier to understand. How they relate to each other is visually explained to the user.”

Right now, AI algorithms are buried inside complex code, but creating a graphical user interface is a crucial step toward enabling a wider range of people to become the architects of machine learning models as the technology begins to infiltrate our lives. A GUI has the potential to give designers a seat at the AI table–something that could be necessary to ensure the technology is used ethically and responsibly.

Cortex launches today from the Austin-based enterprise company CognitiveScale, which has been building AI models for businesses in financial services, healthcare, and e-commerce since 2014. CognitiveScale has been using its own version of Cortex internally to build those models for clients, but launching it to the world means that other companies that employ developers without expertise in machine learning can begin to build AI on their own. While the tool is primarily aimed at companies, not individuals, it presents an opportunity for developers and designers who work at those institutions to get their first taste of creating AI.

Building this AI graphical interface was no easy task. During the initial conversations with CognitiveScale’s founder and CTO Matt Sanchez, who previously ran IBM Watson Labs, Rolston says he had to admit to Sanchez that he and his team were completely lost. It took many hours before the design team could begin to understand and conceptualize what Sanchez was trying to do. “I think good designers can ride shotgun with a surgeon or jet pilot or AI programmer, and listen to them, and extract out of them things that are true to design and are true to their profession,” Rolston says. “That [didn’t] happen without hours of conversations where I [had] barely a thread or grasp on what Matt was saying.”

Machine learning functions by extracting patterns from millions–or even billions–of data points, which enables it to make decisions about new data. It’s conceptually simple to understand, but Rolston and his team had to dive deeper into the technical elements of how AI really works, something that typically takes a PhD to fully comprehend.

Their conversations started with trying to create basic terminology for different elements that the Cortex composition tool would have. Rolston likened the process to programming during his teenage years, in the mid-1980s, before terms like “file” and “folder” were ubiquitous. These terms are tied to the development of the graphical user interface, which ended the era of only communicating with a computer through code and instead offered a radical alternative: a visual representation on the screen that gave you shorthand to accomplish different tasks. “All those things are the nerdy cruft of creating computer software that’s been worked out over a very long time,” he says. “Back in ’85 there was no one way to do it. Looking at this modern situation, there was no one way to do it.”

Rolston and his team found that the CognitiveScale developers were using different words to refer to different parts of the system, so they had to get on the same page. They ended up deciding on two primary terms: skill and agent. Skills are single-purpose bits of software that can be packaged up and used again and again–kind of like Amazon Alexa skills. Agents, which are composed of skills, are the larger, more complex models that you build inside of Cortex–they could accomplish tasks like processing insurance claims using text analysis, or tracking investor sentiment in a particular industry. This nesting concept forms the core of how Cortex functions.
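The skill/agent nesting idea can be sketched in code. This is a minimal illustration of the concept as described above, not the actual Cortex API–all class and function names here are hypothetical:

```python
# Hypothetical sketch of the skill/agent nesting concept.
# Names are illustrative; they are not CognitiveScale's actual API.
from typing import Any, Callable, List


class Skill:
    """A single-purpose, reusable bit of software (e.g. sentiment analysis)."""

    def __init__(self, name: str, fn: Callable[[Any], Any]):
        self.name = name
        self.fn = fn

    def run(self, data: Any) -> Any:
        return self.fn(data)


class Agent:
    """A larger model composed of skills, chained like the drag-and-drop flow."""

    def __init__(self, name: str, skills: List[Skill]):
        self.name = name
        self.skills = skills

    def run(self, data: Any) -> Any:
        # Each skill's output feeds the next skill in the chain.
        for skill in self.skills:
            data = skill.run(data)
        return data


# Toy skills standing in for the text-analysis steps mentioned in the article.
extract_text = Skill("extract-text", lambda claim: claim["description"].lower())
flag_keywords = Skill("flag-keywords", lambda text: "water damage" in text)

# An agent for a claims-processing task, built by composing reusable skills.
claims_agent = Agent("insurance-claims", [extract_text, flag_keywords])
print(claims_agent.run({"description": "Basement flooding caused water damage"}))
# True
```

The point of the design is the reuse: the same `extract_text` skill could be dropped into a different agent, say one tracking investor sentiment, without rewriting it.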
