Chris Neimeth Of NYC Data Science Academy: Blockchain, AI, Big Data and Art Convergence

Blockchain, Innovation, News | January 4, 2018

Chris Neimeth is the COO of the NYC Data Science Academy, a training school for data scientists and other technologists. In 1996, he embraced big data and developed the web’s first demographic real-time ad targeting system for The New York Times Digital. He has since served in various strategic roles: CEO of Salon Media Group Inc., President of IAC Partner Marketing, Executive Vice President of Ticketmaster, President/CEO of Real Media, Chief Commercial Officer of Daylife, Senior Vice President for The New York Times Company Digital, and founder of Grey Interactive. He is a two-time elected Director of the Interactive Advertising Bureau.

Neimeth talked with Block Tribune about the increasing convergence of blockchain, artificial intelligence, big data, art and creativity.

BLOCK TRIBUNE:  I read recently that there is a group that is using artificial intelligence to look at art for valuation purposes. I understand there are other experiments out there to analyze, for example, what makes a good novel. My question to you is, how far away are we from art actually being created by artificial intelligence?

CHRIS NEIMETH:  That depends upon what your definition of “is” is, as the famous president who was impeached said a while back. Being a little less coy, computers are generating images right now and are creating quite a few different things which are quite compelling. Oftentimes, they are copies. They’re really just based on composites of pieces of images and ideas collected from the real world.

It started a couple of years ago. Google’s Magenta group, which is now, I think, part of Alphabet’s DeepMind, started producing the psychedelic images that were based on the convolutional neural network that was part of their image recognition technology.

You ask the question how far are we away from computers creating compelling images? It’s already here. As a matter of fact, there are computer systems, call them deep learning or artificial intelligence if you will, that can create a compelling photograph-like image of a person who is completely made up from the computer’s programming and experience, and which is indistinguishable from an actual photograph of an individual. They can do the same thing with a piece of art. They could create composite art that is derivative of different artists.
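The technique behind such photograph-like synthetic faces is most likely the generative adversarial network, in which one network learns to synthesize images while a second learns to tell real from fake, and the two improve by competing. Below is a minimal sketch of that setup, scaled down to 28x28 handwritten digits rather than faces; the architecture and hyperparameters are illustrative assumptions, not any production face-generation system.

```python
# Minimal generative adversarial sketch on MNIST digits (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, Sequential

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train.astype("float32") / 127.5 - 1.0).reshape(-1, 784)
dataset = tf.data.Dataset.from_tensor_slices(x_train).shuffle(60000).batch(128)

generator = Sequential([
    layers.Dense(256, activation="relu", input_shape=(64,)),
    layers.Dense(784, activation="tanh"),   # a fake 28x28 image, flattened
])
discriminator = Sequential([
    layers.Dense(256, activation="relu", input_shape=(784,)),
    layers.Dense(1),                          # real-vs-fake logit
])
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt, d_opt = tf.keras.optimizers.Adam(1e-4), tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real):
    noise = tf.random.normal([tf.shape(real)[0], 64])
    with tf.GradientTape() as gt, tf.GradientTape() as dt:
        fake = generator(noise, training=True)
        real_logits = discriminator(real, training=True)
        fake_logits = discriminator(fake, training=True)
        # Discriminator learns to separate real from fake;
        # the generator learns to fool it.
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(dt.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(gt.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))

for epoch in range(5):
    for batch in dataset:
        train_step(batch)
```

After a few epochs the generator’s outputs begin to resemble plausible digits; the same adversarial recipe, scaled up enormously, is what produces photograph-like faces.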

One of the things that we do at my company is we train people. We work with people who are interested in learning about data science. There was one group of people who actually put together a neural network that allowed them to evaluate pieces of art from different famous Post-Modern and Impressionist artists and determine who the actual artist was. That was pretty impressive.

What we’re seeing is that the same technology that companies use to interact with the environment, or to allow people to interact with computers or technology, is being used to output things as well. We had, initially, Google working on technology that allowed for image recognition and classification, so that if there was a picture of a cat or a specific breed of dog, it would be able to recognize what kind of dog it was, given different photographs of those dogs or of different dogs.

The same thing applies within the language space. Long short-term memory networks are being used for natural language processing. What’s interesting is that those same techniques that were used for human/machine interfaces or machine/environment interfaces are also now used to create things that are similar to what the input was. If the input was natural language, the output can now be natural language. If the input was images or information from the environment around it, there’s the ability to create similar images as a composite.
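To make the “language in, language out” point concrete, here is a minimal character-level sketch of a long short-term memory model in Keras. The file name corpus.txt and all hyperparameters are illustrative assumptions, a teaching sketch rather than any particular production system.

```python
# Minimal character-level LSTM language model (illustrative only).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

text = open("corpus.txt").read()        # assumed plain-text training corpus
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

SEQ_LEN = 40
# Slice the corpus into fixed-length inputs, each paired with the
# single character that follows it.
X = np.array([[char_to_idx[c] for c in text[i:i + SEQ_LEN]]
              for i in range(len(text) - SEQ_LEN)])
y = np.array([char_to_idx[text[i + SEQ_LEN]]
              for i in range(len(text) - SEQ_LEN)])

model = keras.Sequential([
    layers.Embedding(len(chars), 64),
    layers.LSTM(128),                   # the long short-term memory layer
    layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, batch_size=128, epochs=5)
```

To generate text, one would feed the trained model a seed sequence and repeatedly sample the next character from its softmax output, appending each sample to the input.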

But, is it artwork? That’s the question. The output can be very visually compelling, or a readable set of words, or even a story. We had the UK Press Association, I think, last year turning over a chunk of its local reporting to automated or robo-reporters. We see the output happening already. The question is, is it exploring or producing the art of the possible, the theater of the mind that comes from more abstract or non-linear thinking? I haven’t been exposed to that, and I think that that’s further off in the future.

I do think that computer assisted creativity is here. It has been here for decades and is going to continue to allow the pushing of the envelope in the creation of more compelling, or even different, experiences for the consumer.

BLOCK TRIBUNE: If I understand what you said correctly, whatever it is right now, and potentially into the future, it is all a function of what is being surveyed? For instance, if you give it the Bible, it’s going to give you something back that’s biblical.

CHRIS NEIMETH: That’s right. These deep learning systems are trained. They work with training data. As an example, the Google image classification project that allows the computer to identify and classify specific objects and images was trained on millions of images that were already tagged or classified. Once it has that training, it’s able to apply it to the classification of new images.
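That trained-then-applied workflow is easy to demonstrate with a stock pretrained network. The sketch below loads a model that was trained on millions of tagged images and asks it to classify a new photograph; the file name dog.jpg is an illustrative assumption.

```python
# Applying an already-trained image classifier to a new photo.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")  # trained on millions of tagged images

img = image.load_img("dog.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Top predictions include fine-grained classes such as specific dog breeds.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.2%}")
```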

Similarly, if you were to feed a computer images of a certain sort and ask it to create a composite or create something that is similar, it’s going to have to use the information that it was presented with previously. This is the case for a lot of what is happening within deep learning, specifically with images and text or language.

However, there are some other things that have been happening recently that are stepping away from that. The more recent AlphaGo Zero win in the game Go, which happened in the not too distant past, was a pretty interesting development within the artificial intelligence space. I don’t know if you’re familiar with Go and AlphaGo. Go is an ancient Asian game that is harder than chess and has many more permutations. Google DeepMind created a deep learning solution called AlphaGo. It beat the top Go master Lee Sedol, and it was trained on hundreds of thousands of different games that were played in the past.

What happened recently with AlphaGo Zero is that they were able to start this machine with only the rules of Go, without giving it any training based upon previous games, and have it play against itself. Over a period of a couple of days, it played itself and learned so quickly that it was able to beat the previous AlphaGo, the one that beat Lee Sedol, a hundred games to zero. That shows how quickly things are changing.
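The essence of self-play is simple enough to show in miniature. The toy below uses tic-tac-toe and a tabular Monte Carlo learner instead of deep networks and tree search: it starts with nothing but the rules and improves purely by playing against itself. It is a teaching sketch, not DeepMind’s algorithm.

```python
# Toy self-play learner for tic-tac-toe (illustrative only).
import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)       # state-action values, learned from self-play
EPSILON, ALPHA = 0.1, 0.5    # exploration rate and learning rate

def choose(board, player):
    moves = [i for i, c in enumerate(board) if c == " "]
    if random.random() < EPSILON:
        return random.choice(moves)   # explore a random legal move
    return max(moves, key=lambda m: Q[("".join(board), m, player)])

for episode in range(50000):
    board, player, history = [" "] * 9, "X", []
    while True:
        move = choose(board, player)
        history.append(("".join(board), move, player))
        board[move] = player
        win = winner(board)
        if win or " " not in board:
            # Push each move's value toward the final outcome:
            # +1 for the winner's moves, -1 for the loser's, 0 for draws.
            for state, m, p in history:
                reward = 0 if win is None else (1 if p == win else -1)
                Q[(state, m, p)] += ALPHA * (reward - Q[(state, m, p)])
            break
        player = "O" if player == "X" else "X"
```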

BLOCK TRIBUNE: Theoretically, and we’re getting closer to the point, if I fed it every screenplay that won an Oscar for the last 50 years, potentially AI could create something that was unique and potentially a better version of all of that stuff?

CHRIS NEIMETH: Potentially … anything is possible. I don’t see it happening in the near future. I think a good reference point is a short film called Sunspring. Sunspring was produced by an AI, an artificial intelligence or deep learning agent, that was trained on a range of movie scripts and then asked to actually create a script for a movie. Thereafter, some producers and talent got together, and over a 48-hour period, they produced the movie based on the script.

It’s really worth watching if you have the nine minutes that the short takes. I can send you the URL if you like. Because, it is barely intelligible. I considered it a little arty. I thought, “Wow, this has the non-linear elements of something that could be viewed as a very noir film.” Then I actually read the screenplay. I realized that it was just more random rather than arty, per se. Nonetheless, the fact that you had a machine that was able to put together some words and scenes is an amazing development.

Given that we have that development that took place last year, and the rapid acceleration of productivity and efficiency, and the more rapid learning … I guess the lower learning costs that are being demonstrated by these new approaches, that suggests that things are going to get pretty interesting pretty quickly.

I do a little bit of gaming, and my son loves to play games. I look at what’s happened within the video game space. You can see that these games are getting more and more lifelike, like that movie 300 several years ago. It’s a cartoon movie but was somewhat lifelike. CGI is also becoming incredibly compelling.

Then last year, also, one of the Google products produced a song that was completely developed by a computer. It’s got a little bit of a melody, but it’s certainly very rudimentary. Then in the same year, they produced a Christmas song, which is pretty creepy, but nonetheless … It uses some Christmas lyrics.

If you see these things and you see how they’re evolving, even within a short period of time, it shows that within 18 months or two years, we’re going from some composited still images that are kind of psychedelic; to a song that has some melody; to a song that has some melody and some lyrics, albeit creepy; to a movie script. We’re seeing an evolution of output that is extremely rapid, exponential, even. Hockey stick shaped, certainly. That suggests that it’s going to improve. It also suggests that alongside humans, it’s going to allow for a lot more productivity within the creative space.

BLOCK TRIBUNE: What is the technological hurdle that has to be overcome before it starts to produce jaw-droppingly beautiful art?

CHRIS NEIMETH: I think it’s already here. The hurdles that need to be overcome are on a couple of different levels, or along a couple of different axes. One is ease of use by people who are going to be creators. Right now, this technology is relatively challenging to access, although it’s getting much, much easier.

I’m not a computer scientist by education, and yet I’m able to use the tools that Google has released: its TensorFlow deep learning solution and Keras, which sits on top of it. With a couple of dozen lines of code, I can produce the same level of complexity that was really at the leading edge of image recognition 18 months ago. It’s becoming more accessible.
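For a sense of what “a couple of dozen lines” looks like, here is a sketch of an image classifier built with Keras on top of TensorFlow, using the CIFAR-10 photo dataset that ships with the library. The architecture is a generic small convolutional network, offered as an illustration rather than the leading-edge model he refers to.

```python
# A small image classifier in roughly two dozen lines of Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

# CIFAR-10: 60,000 small photos labeled with ten classes (dog, cat, ...).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one output per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5,
          validation_data=(x_test, y_test))
```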

As an example, Adobe, which is heavily involved with tools that support people in creative roles or professions, obviously has Photoshop and Illustrator as part of its Creative Suite. But it came out with something called VoCo. I don’t know if you’ve heard of this. There was a lot of discussion about how it was going to contribute to an ease of production of fake news. With VoCo, you could train the solution on someone’s voice. You would have it listen to 20 minutes of that individual. Then you could type in words, or phrases, or a sentence, and it would produce that communication using that individual’s voice. Which is kind of scary, but also pretty interesting.

There’s an example of an interface that’s actually usable and has, at least in one method, a way to produce something that has some creativity. If we want to fill in extras in a movie, or a voice-over; if we want to create scenes or backdrops and don’t want to pay for the art or the rights to the images, we could use computers to create them. If we’re filming something and we want other elements of the background filled in, that’s the kind of thing that can be done using computers. You still want the creative input on overall direction and plot line to come from an individual. I don’t really have a timeline as to when that will change.

BLOCK TRIBUNE: You were about to say, a second ago, that you believe that this is going to be good for humans?

CHRIS NEIMETH: That it’s going to be good for them? Did I say that?

BLOCK TRIBUNE: I thought you were leading that way. I guess the question is, what happens to humans when it reaches a certain level where it is able to create this art?

CHRIS NEIMETH: Well, people have to change. I think it goes the way of the people who created the buggy whip, at a certain level. One school of thought, the glass half empty, is that, “Oh, boy, there’re going to be a bunch of typesetters that are out of business now that we’re going to digital typesetting.” I was actually working at The New York Times when we went through that transition. The glass half full is that now we have more resources for people to focus on the actual thinking and writing to improve the depth of communication, the depth of analysis, the thoughtfulness that goes into the output.

It’s an evolution. To the extent that the elements of creativity or productivity still remain at the human level, there’s a level of joy that is associated with that action and with sharing the outputs, or the fruits, of those actions. I hope that never goes away. There’s probably less joy in coloring or filling in certain things. To have some computer assistance with that may be helpful. I don’t know.

The one thing is the interfaces that give access to these tools. The other thing that has been advancing, with a rapid development in just the last three or four weeks, is the amount of data that is needed to train these models.

One of the more famous and more prolific scientists within this space is named Hinton. He came out with a new approach, called capsules, within the neural network or deep learning space. That approach allows training to occur with a much lower volume of inputs. Once we see these algorithms or approaches becoming more efficient in terms of their ability to gain quote/unquote “intelligence” with a lower volume of training data, that’s really important. Once we see tools that allow people to manipulate these approaches and apply them, that’s going to be really important.
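A full capsule network is too involved to reproduce here, but its signature ingredient is compact: capsules output vectors rather than scalars, and a “squash” nonlinearity keeps each vector’s length between 0 and 1 so the length can act like a probability. A minimal NumPy illustration of that one piece:

```python
# The capsule "squash" nonlinearity from Hinton's capsules work,
# shown in isolation (illustrative only).
import numpy as np

def squash(s, eps=1e-8):
    """Shrink vector s so its length lies in [0, 1), preserving direction."""
    norm_sq = np.sum(s ** 2)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

v = squash(np.array([3.0, 4.0]))  # input vector of length 5
print(np.linalg.norm(v))           # ~0.96: long inputs squash toward 1
```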

Then of course, there’s the increase in computing power afforded by new machines and new approaches, what some of these larger cloud organizations are calling hyper-scaling, with different architectures for allocating data and computing or processing. It’s all happening really quickly, and it’s pretty exciting to observe.

BLOCK TRIBUNE: Does blockchain have a role in this wave of creativity that’s being driven by AI?

CHRIS NEIMETH: Blockchain is an amazing accounting revolution. I don’t mean that in a pedestrian or a simple way. The ability to count and to verify units, and to create a low friction environment for transactions, which is enabled by blockchain, has the potential to revolutionize anything that is traded or exchanged. That can include intellectual property. That can mean maybe we will have micropayment methods for creative producers that haven’t been possible simply because of transaction fees that are exacted by people who manage the payment systems.
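The “accounting revolution” point can be made concrete with a toy. The sketch below chains blocks of hypothetical creator micropayments together with hashes so that anyone can verify no entry has been altered; it is a teaching illustration, not a real payment system.

```python
# Toy hash-chained ledger of creator micropayments (illustrative only).
import hashlib, json, time

def make_block(prev_hash, payments):
    block = {"time": time.time(), "prev": prev_hash, "payments": payments}
    block["hash"] = hashlib.sha256(
        json.dumps({k: block[k] for k in ("time", "prev", "payments")},
                   sort_keys=True).encode()).hexdigest()
    return block

# Each block records tiny payouts to creators, with no intermediary fee.
chain = [make_block("0" * 64, [{"to": "writer_a", "amount": 0.002}])]
chain.append(make_block(chain[-1]["hash"],
                        [{"to": "artist_b", "amount": 0.005}]))

def verify(chain):
    """Recompute every hash and check each block points at its parent."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({k: block[k] for k in ("time", "prev", "payments")},
                       sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(chain))  # True; altering any payment breaks the chain
```

Tamper with any payment amount and verify returns False, because the recomputed hash no longer matches; that verifiability, minus the middleman, is the property Neimeth is pointing at.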

I used to work at Salon. I don’t know if you remember that or know of that website.

BLOCK TRIBUNE: Sure. It’s still around, isn’t it?

CHRIS NEIMETH: Yeah, it is. It’s still around, still alive and kicking, and I read it every day. I was the CEO there for a little while. One of the things that we tried to implement there was a social blogging platform that allowed, this is a mouthful, friction-free peer-to-peer payments. That just meant that you could contribute, and you could get paid for it, and people wouldn’t have to pay the transaction managers. We had a deal with a company called Revolution Money that would help us do that.

My thought was, “Wow! We’re going to create a content marketplace where the creatives don’t have to pay a fee to the banking or payment infrastructure.” For a variety of reasons, it didn’t work. If we had chosen to go with PayPal or something more mainstream that still took a little bit of a transaction fee, it might have been more successful. I’m not here to debate what was successful or not about that approach, but I do think that blockchain, as a technology that enables friction-free or low-friction transactions and exchanges of value, is really important.

Because, particularly in the creative space, as we reduce barriers to allowing people to create … Look what Amazon has done with authors. Everyone said that Amazon was going to kill the book business. What we have now is that Amazon has created a platform that allows authors to actually publish. We’ve removed the gatekeepers that a lot of these presses represented.

To be able to envision a future where people can contribute [inaudible 00:24:36] YouTube, or blogs, or Twitter, or whatever, to express themselves. To be able to be compensated for that directly, or to have the value of the contribution involve some level of verified exchange or payment using blockchain, is really exciting and really promising. I’m hopeful.

BLOCK TRIBUNE: Does anything about this AI creative process worry you?

CHRIS NEIMETH: You touched on it a little bit earlier, which is when we move too quickly into things … Have you seen the movie Idiocracy, or do you know of it?

BLOCK TRIBUNE: Of course. Yes.

CHRIS NEIMETH: Great film. I think that the prospect of mass production and mass consumption of information, or media, or reporting is a little frightening. I was a part of the Aspen Institute Forum on Communications and Society. Within that, we worked as a group to try to protect the fourth estate, because there’s research that shows that people who read newspapers and ingest news and information from credible sources have a better understanding of their civic responsibilities as contributors and participants in society.

I wonder and worry that the machinization or the factoryization, forgive the poor choice of words, could create a less compelling environment. Or, an environment where mass consumption is chosen because of ease of consumption over things that might be more thought provoking and potentially painful.

As someone who spends a lot of time with digital media and has kids who do the same, I am more and more recognizing that human to human interaction is critical for self-actualization, for growth, for cementing and growing commitments and bonds that create the richness that is our lives. I don’t know what the ongoing value of having a growing technology layer between that interaction and expression represents and whether it’s positive.