RNZ National – Tech Tuesday with Jesse Mulligan – Should we be worried about AI?

Jesse (00:00): ... RNZ national. It's Tuesday afternoon. Time for Tech Tuesday. I'm joined by Daniel Watson from Vertech IT Services. Hello, Daniel.

Daniel Watson (00:08): Hello, Jesse.

Jesse (00:09): You'll be proud of me actually, I've been acting as one-on-one IT guy for my wife, who's got a big presentation today. I installed a big monitor for her. That went very well. That was just plug and play. That was great. And I also went and bought a ring light for her today so she looks as good as possible in the presentation. That took a bit of finding actually, but I eventually got one at the stationery store and plugged that in. Actually, I've got new confidence in my own IT abilities. One problem we had is we ran out of USB ports.

Daniel Watson (00:41): Ah, yeah. Yeah, that's true, because everything like that does depend on having extra USB ports. Getting a port replicator helps. You can pick up a cheap one that adds another five ports to your laptop. It's quite useful for all the extra peripherals.

Jesse (00:54): I was actually pleased with myself 'cause I had one of those, and then I went to plug it in and it was a USB-C connection, which seemed really stupid. But I guess it's designed for Apple or something anyway.

Daniel Watson (01:05): Well, USB-C is the way it's going to be going into the future. Even Apple might get forced into... Well, it looks like Apple's going to be forced into that by the EU.

Jesse (01:11): Yeah, I saw that story, which made a lot of sense to me. It is ridiculous in terms of waste and just inconvenience that we're all using different chargers, right?

Daniel Watson (01:22): Yeah. Oh, it drives me nuts. But I mean, it is one way of making money is by changing the peripherals on a regular basis.

Jesse (01:30): Well, you're not here to talk about my home office setup, we're here to talk about AI today. Really interesting story that came through this week: a Google engineer says that one of the firm's AI systems might have its own feelings. He says this AI, it's called LaMDA, actually wants... he says that its wants should be respected, and he released a conversation that he's had with this AI, which is quite eerie. It's always quite eerie to hear AI talking like a normal human being. But actually Google sounds like they're not happy with this guy sharing this conversation. They've put him on leave.

Daniel Watson (02:13): Yeah. I imagine that's probably more about information release or anything like that more than anything nefarious regarding the AI itself. But it's interesting that he was part of an area called responsible innovation, which is like, oh, okay, cool. So they've got innovation and then they have a department which is responsible for responsible innovation, which is okay. So clearly they're seeing that there's dangers there but it's probably not the Skynet type danger.

Elon Musk has been quite outspoken regarding what he describes as the singularity, the idea that if you build a general machine intelligence, it's going to be operating at speeds much higher than the clock cycle of our human meat computers, and then it could essentially find us redundant, right? Why should it be subservient to these slightly evolved apes?

Jesse (03:08): Yeah, there's a conversation that he had on the Joe Rogan Podcast, which is worth looking up if you want to scare yourself and where he predicts what's going to happen in the future. Do you worry like he does that AI is going to reach a point where it's smarter than humanity and starts doing things that we don't want it to do?

Daniel Watson (03:24): Not so much. I think there's already evidence that there's potential for harm which is several orders of magnitude below that, all right. So there have been examples where, say, Microsoft put out a chat bot and they advertised that the chat bot was learning from the conversations it was having with people. So unfortunately what straightaway happened there is this chat bot... If you've heard of the website 4chan, I don't recommend you go there, it's like a spawning ground for internet trolls. They managed to turn this chat bot into a hate speech machine, because after 24 hours a collection of bots had posted conversations into this artificial intelligence chat bot some 15,000 times. So it was being trained on the language it was being fed, and it started spewing out really, really toxic tweets, like really horrible white supremacist stuff, right down to conspiracy theories, using the N-word all over the place.

So these artificial intelligences are built around the data they've been fed to learn from. It all depends on that data. So if it's not clean, in multiple senses of that word, then you're just going to get back the output of whatever you throw in there.
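[Editor's note: the data-dependence Daniel describes can be sketched in a few lines of Python. This is a deliberately naive toy bot, not how Microsoft's Tay actually worked; the class, names, and the "flood of junk" are all hypothetical, but the poisoning dynamic is the same: a bot that learns from whatever it is fed can only say back what it was fed.]

```python
from collections import Counter

class EchoBot:
    """Toy chat bot: learns word frequencies from the conversations it is
    fed and replies with its most common learned words. Illustrative only."""

    def __init__(self):
        self.vocab = Counter()

    def learn(self, message: str) -> None:
        # Training is just counting words from incoming messages.
        self.vocab.update(message.lower().split())

    def reply(self, n: int = 3) -> str:
        # The bot can only say what it has been trained on.
        return " ".join(word for word, _ in self.vocab.most_common(n))

bot = EchoBot()
for msg in ["hello friend", "hello world"]:
    bot.learn(msg)
print(bot.reply(1))   # -> "hello"

# A coordinated flood of repeated input dominates the counts: data poisoning.
for _ in range(1000):
    bot.learn("JUNK JUNK JUNK")
print(bot.reply(1))   # -> "junk"
```

The same failure mode scales up: a large model trained on unfiltered input inherits whatever the loudest contributors put in.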

Jesse (04:56): I will say that I am disappointed by the AI in my life. We've got one of those Alexa, Amazon things on our bench. She inevitably disappoints me. I've got the Google Assistant. I've yelled at that before. Actually, it was in the car once and I yelled, "You're a terrible AI," which is very pointless. They don't seem to be learning very quickly. Sometimes you can ask them to play a song and they just don't hear you or they mishear you over and over and over again. But I suspect it's one of those sort of tangential things that they'll get very slowly better and it'll feel like no progress is being made and then one day we'll all wake up and they will have clocked it.

Daniel Watson (05:34): You won't have noticed. Yeah. Yeah. I'm still dubious having those things in the home because you're providing a direct listening device for a corporation into your daily conversations because they're always listening.

Jesse (05:45): Yeah. I know of somebody who actually asked for the records of her Alexa and they sent them to her, and what she wasn't expecting was that they sent her all the audio files. So they'd recorded everything that she'd ever said as well. That's very eerie.

Daniel Watson (06:00): Yeah. but-

Jesse (06:01): So you think they're getting more out of it than we are?

Daniel Watson (06:03): Oh yeah, for sure. Yeah, I mean-

Jesse (06:08): It's just a total change of perspective to think of these things as tools which benefit the company rather than tools that benefit us.

Daniel Watson (06:15): Yeah. And then if you look at the price point of some of these things and you're like, "Oh wow, that's really cheap," and you're thinking, why is that? Okay, well perhaps that price is just covering the cost of the hardware and shipping it to you, because they know the rest of it's going to pay off. So Amazon is using artificial intelligence hugely within their environment. You just couldn't run that kind of massive operation without the business analysts' insights that come from taking a huge wodge of data and trying to look for efficiencies in where they store certain items of stock and how they're going to route it most efficiently through the [inaudible 00:06:55].

Jesse (06:55): What about you, Daniel? What about a company which I presume is slightly smaller than Amazon, Vertech IT Services? Do you guys actually use AI in your day-to-day work?

Daniel Watson (07:05): To a certain extent, because there are sub-layers of artificial intelligence, like machine learning. The goal of artificial intelligence is to create a general intelligence like humans, but you can take subsets of that. So machine learning, and what it looks like in our environment, is actually in the security endpoint protection that we put in place on clients' computers, because the software there is looking at the data set of all the applications and processes running on our clients' computers. And when it spots something that is unusual, it flags it for further analysis and does an additional code inspection to see if this is a novel type of virus which has appeared on this client's machine.

Which is very useful, because these days malware comes with built-in obfuscation code, right? So the code itself isn't just written line by line, packaged up and sent out the same way every time. Now the code will be shuffled and changed around so that every time it gets deployed it's in a different combination. So on the defense side of things, the old antivirus signatures can't just look at the code and go, "That is malware code." It becomes more difficult because now it's different each time.
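[Editor's note: a minimal sketch of why byte-level signatures fail against shuffled code. The "payloads" here are harmless placeholder strings standing in for real packed malware; the point is simply that two functionally identical variants with reordered contents produce completely different fingerprints.]

```python
import hashlib

def signature(code: bytes) -> str:
    # Classic antivirus approach: fingerprint the exact bytes of a sample.
    return hashlib.sha256(code).hexdigest()

# Two stand-in "payloads" that do the same thing, with statements reordered,
# as a toy analogue of polymorphic obfuscation.
variant_a = b"x = 1; y = 2; run(x, y)"
variant_b = b"y = 2; x = 1; run(x, y)"

# One tiny reordering and the signature no longer matches.
print(signature(variant_a) == signature(variant_b))  # -> False
```

A signature database built from variant_a would wave variant_b straight through, which is why defenders moved on to watching behavior instead.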

So that's where the machine learning comes in: not necessarily looking at the code of the software running, but at what it's doing on the system. Is it trying to alter that particular system file? Is it dropping a file in this location? Then it sequences it all up and goes, whoa, hang on, this is behavior that looks to be malicious. We should throw up an alert, stop it and roll it back or whatever.
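[Editor's note: the behavioral approach Daniel outlines can be caricatured as a scoring rule over observed actions. Real endpoint-protection products use trained models, not a hand-written table; the event names, weights, and threshold below are all invented for illustration.]

```python
# Toy behavioral monitor: judge a process by what it does, not by the
# bytes of its code. Weights and threshold are made up for illustration.
SUSPICIOUS = {
    "modify_system_file": 5,
    "write_startup_folder": 4,
    "encrypt_user_docs": 8,
    "open_browser": 0,
}
THRESHOLD = 10

def assess(events: list[str]) -> str:
    # Unknown event types get a small default score of 1.
    score = sum(SUSPICIOUS.get(event, 1) for event in events)
    return "alert-and-rollback" if score >= THRESHOLD else "allow"

benign = ["open_browser", "open_browser"]
malicious = ["write_startup_folder", "modify_system_file", "encrypt_user_docs"]
print(assess(benign))     # -> allow
print(assess(malicious))  # -> alert-and-rollback
```

Because the verdict depends on the sequence of actions rather than the code's bytes, shuffling the code (as in the obfuscation example above) doesn't help the malware here.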

So that's how that looks for us, but businesses are using AI in lots of different ways now.

Jesse (09:07): We're out of time, Daniel. Probably a good time to tell you now that Jesse's away sick today. You've been talking to his AI bot. How did I do?

Daniel Watson (09:14): In fact, this is probably an improvement upon the normal Jesse. Well done bot.

Jesse (09:18): I'll pass on your horrible feedback. Daniel from Vertech IT Services. Thanks so much. Nice to talk to you.

Daniel Watson (09:26): You're welcome, mate. Take care.