[00:00:00]
Bram Lagrou: welcome back everybody to yet another energizing episode of the Energize With Bram Podcast. We are living today in a world of AI, machine learning, all this fancy automation stuff, artificial intelligence. And it's important that we also stay at the frontline here, with experts that know what they're talking about, because people like me, we're the user of it all.
I wanna warmly welcome Jamie Shirah to the podcast. Hello Jamie.
How are you?
Jamie Sherrah: Hey, good Bram. How are you? Thanks for having me here.
Bram Lagrou: Yeah, I'm very well, thank you. And I've been looking forward to our chat here on the podcast. Look very quick story. You obviously are a PhD in machine learning. Right? How long have you been operating in this space for?
Jamie Sherrah: Oh, since the nineties, so [00:01:00] about nearly 30 years.
Bram Lagrou: So people like us that basically have only seen the chat gpt and, the advent of Claude and all these other systems come up. You have been in the trenches for a very long time already. And so tell us briefly, like what's the sort of work that you've been doing since you graduated with your PhD?
Jamie Sherrah: Oh gee. How long have you got?
Bram Lagrou: just in a nutshell.
Jamie Sherrah: I did a PhD in Adelaide, in engineering department on neural networks and genetic algorithms. And then, did a postdoc in London on computer vision. And then we were doing a lot of, detecting and tracking people in video. And at the time it was a hot topic to have video surveillance to anti-terrorism and things like that.
and then I went on to work in a startup in the same area where we made commercial grade software to do all this in real time with c plus plus, and we had to write all the code ourselves. You couldn't just download it in the early two thousands.
Bram Lagrou: Yep.
Jamie Sherrah: I've had a number of technical jobs like that, been in [00:02:00] startups, worked for defense research, worked for a hedge fund those kinds of technical jobs.
In recent years, found myself at the Australian Institute for Machine Learning which is Adelaide. So Australia's biggest, research group for AI and machine learning and computer vision, and, was helping them with industry and defense projects. and then ended up starting my own, company, Inject AI, where we can make custom AI and machine learning software focusing more lately on creating machine learning models or improving them, creating data sets, training them, and working on all different topics. So it is been a really rewarding career working in machine learning.
'cause as you would know with ai, you can apply it to anything. So my whole career I've come into different, industries and had to learn, a fair bit about that industry to be able to apply it. so I've [00:03:00] done quite a lot of stuff, even, more. Recently in the medical, health space.
Bram Lagrou: That's an interesting one actually. That's a good way a segue into the next question. So, for let's say the layman, people that haven't got the PhD in your field, to really quickly get 'em an idea of what machine learning looks like. 'cause there's different grades of ai, right? So how does machine learning fit in?
What does exactly does it mean?
Jamie Sherrah: Yeah, it's basically getting a computer to perform some cognitive task where there's some kind of judgment involved. so, we all know computers are good at logic and hard rules,
the price is more than this, than do that or something. And that's been the basis for programming. It took us from those days where we had rooms full of people doing calculations on paper to, you just have a pocket calculator and things like that, and computers, move Data around and transform it. But then when you have something like, is this a picture of a cat or not?
You [00:04:00] can't make rules for that because, cats can look different and they could be drawn cats and there's too many rules to write down. and our brains just do this stuff naturally where We learn things. So it's those sort of cognitive tasks
Where it's more statistical and based on data and you're presenting examples to the computer and getting the computer to learn by example, how to perform a task and something, that really a machine learning is, where we're at these days or always have been, is that you you have a data set and you train a model.
You create A model. And then that model's kind of fixed. It's not, I think people have an idea that AI is like always learning or something, but we don't have reliable algorithms like that. They tend to forget when you keep learning online, they forget. So with chat chippy team and things like that, you get the illusion of learning because.
the tooling around the machine learning model is storing and remembering and [00:05:00] databasing bits of information that it feeds back into it as well.
Bram Lagrou: Yeah.
Jamie Sherrah: So it's like the AI is like an engine and then you have, the framework around it to make an application.
Bram Lagrou: I'm catching, while we're having this conversation that it can very easily get very techy.
for the non-tech people. And so I think it's sometimes good to go back to things that we've all seen. And if you go online or you go on television, think of Netflix, you, scroll through the, directory and suddenly you see like a series like, person of interest coming up, right?
I recently started watching it. It was pretty grabbing, and the whole idea was that there's this big machine, an AI that scans through. Tens of thousands of cameras, the whole city probably what is already happening anyway, but the idea is that the machine, as they call it, they dub it as the machine, is that the machine not only can detect threat attacks like you just mentioned earlier for terrorist attacks in the but it also has a specific list where [00:06:00] certain names and even social security numbers land on.
Of people that might be at risk of a criminal plot. And so those people could then be potentially rescued for one of the heroes in the series. Now, I think it's a good starting point because it gives us all a visual of a practical application of ai. We don't have the time or bandwidth for people for 24 7 to watch all those cameras, but an AI can do it, right?
if we use that as an example. Where do you see, and have you already worked on particular projects where the AI a model is basically introduced to then have a practical outer community or commercial implication?
Jamie Sherrah: Oh yeah. Pretty much my whole career, yeah.
Walk us through an example.
Like at the Institute of Machine Learning where I'm a, adjunct member, they're very heavily focused on research and the criteria and or the, your objective is to come up with some [00:07:00] novel idea and research it. but I've, my career has been more about. deploying it
Sometimes you don't want the latest and greatest thing that came out last year at CVPR. You want, something that's been around for a few years and is robust and well understood and has been matured by the research community over time.
maybe through some GitHub project or something, Oh, what are some examples? Let's see.
Bram Lagrou: You mentioned medical, for example. What sort of thing have you
Jamie Sherrah: Yeah, so a recent one, we all know it, our hospitals are under pressure in South Australia with, ramping We're always looking, it's a complex problem, but we're looking at how can we help doctors or help the hospitals be more efficient? And one problem is in the emergency department the doctors have to write these reports every time they see somebody.
So that takes them time where they're not looking at and helping patients. They're sitting there typing, People forget things and details, things like that. But in private [00:08:00] practice, people for. Couple of years now, I've been using these scribes like, Heidi Health Library Bird, where the doctor will whip out a mobile phone and there's an app running and it listens to the consult and then transcribes that automatically and turns it into notes or a letter to a referral letter or something like that.
SA Health and the hospital system, they don't want to use something like that because all your data's going out into, outside of their boundary. they have very strict data security requirements and they need to keep everything in their boundary. So there've been trialing that in emergency departments here, and we've been helping with, deploying that in their secure environment, which have just, had going just recently.
Over the next six or 12 months, we'll be looking at more the medical aspect. Is it really helping doctors, is it really saving them time? Is it accurate enough? How does it compare to the reports they would've written? If you are in private practice, you're some sort of specialist, [00:09:00] you just make those decisions subjectively, it's helping me or it's not, and I'll use it or I won't. But, in hospitals obviously there's more of a framework and a process. A new doctor might just get told, you have to use this. So this hospital has to be sure that it's right and that the data's secure.
Bram Lagrou: I'm glad that you touched on it.
'cause I know that you're big on data security. case in point is, that you are the founder also of, a solution called Hippo. even though it's, for the time being a bit on the back burner for you, but I know that the security's really important. I think the challenge from previous conversations that you and I have had was that.
Whatever data enters into a software that was created overseas. Case in point, US or anywhere else, the servers are overseas and therefore you don't really control the data for that reasons. Is that basically the full crux of it?
Jamie Sherrah: We have this privacy act in Australia, that, has things to say about how you can store and use data as a [00:10:00] company.
And it kicks in when you have a certain revenue level for a general company. But for medical or NDIS related businesses, it doesn't matter how big they are, they have to follow that. And obviously, the more critical and sensitive your data. the more important these things are and the more susceptible you are to litigation if something goes wrong.
if you're a lawyer or a financial advisor or in any medical practice, then the security of the data is really important. a lot of the AI tools now, they come from overseas, mostly the US People are really excited to use them, but not thinking about what's happening to the data.
It's the same as any software system, right? If you use Grammarly, where is that going? it's looking at all your documents and where is that data going? Does it get stored? Does it get used? But the privacy Act, says that you should, if you're sharing data with a third party overseas, you should make sure it's got the same kind of, security and legal comebacks as in Australia.[00:11:00]
And obviously we can't control that. an example is Sam Altman said, if you've got anything really sensitive, don't put it in chat GPT 'cause we can't stop the US authorities coming and accessing that. It might people weigh up the risk reward here, you're an SME, probably.
It won't be relevant, that kind of example. But, I think people aren't really thinking about this 'cause it's not, nobody's really hurting from it at the moment. There's no huge court cases over breaches. There's no big breaches like we had here with Optus and Medibank. and even those, they came and went and who's talking about them now?
So it, seems to be the nature of, data security, that it's a problem when something goes really wrong and then people forget about it a bit. but there are some extra risks with ai. One is that the data could be used to train a model and then indirectly be publicly disclosed in some form.
and then, the other is. [00:12:00] now more and more we're using agents who've got like Claude Cowork and Claude Code and that are really popular. And the AI is going off and grabbing data and recombining it and putting it here and there and interfacing different systems and there's just more and more risk that the data goes the wrong way.
a lot of the tools like copilot and ChatGPT and such have had integrations with email for a while, and people say, I'll get it to send emails for me automatically. and there's a thing called prompt injection attacks, which is unique to ai, which is to say you can bypass the safeguards in something like chate and get it.
The AI to act in a rogue way, so a hacker could just email you if you had that integration with the right email content and get all your private data, emailed back to them. And that's been demonstrated with, tools like Chat, GPT and copilot. And [00:13:00] they keep putting out, oh yeah, we fixed this and that problem, but it's a problem you'll never get rid of.
So the jail breaking, it's called jail breaking. So you're getting the LLM to bypass Safeguards. It's like you can keep making it less and less susceptible to it, but, I don't see how you can ever stop it.
Bram Lagrou: And I think that the challenge is that the more open doors you give it, the more access you granted to other systems, the more you're vulnerable to all these threats.
Right?
Jamie Sherrah: Yeah, I think we're seeing more and more that the power of AI comes from, or language models, large language models is when you give it access to more data. We started out with chat g PT in a window and you had to cut and paste everything and upload everything, and it's just slow and cumbersome
You're doing a lot of repeat work like that, and there's a shift more and more towards making all your data available. probably the winners in that space [00:14:00] would be Microsoft with the co-pilot where you're in their ecosystem, they can control where your stuff's stored, and then copilot has access to all that.
So I think that's all going pretty well, but I don't use it, so I'm not sure.
Bram Lagrou: Let me ask you like how can businesses, we're talking really like the for-profit sector mainly I would say but businesses, companies, how can they really use some best practices while they use the benefits that AI have on offer, but actually also make sure that they cover their own bot and they look after their customers better.
What would be some best practices, tips that you can give them?
Jamie Sherrah: yeah, on the data security side. I think one is just make sure whatever AI tools you're using, you've got the option switched off to train on your data. So I've noticed this has cropped up lately. There's a copilot in GitHub.
So GitHub is where a lot of people store their source code.
And it's popped [00:15:00] up recently saying, we are gonna start defaulting to training on your code, on your private repository code unless you switch it off. So if you don't read that message or you don't act on it, then suddenly all your proprietary code is getting used to train their models, which, is a big, huge problem because a company, a software company's IP is the code, right?
You could say, oh, I've got this invention or algorithm, but it's really, it's the code and, that you're making it, you're disclosing it, I dunno how that goes for patents.
Bram Lagrou: So that's a really practical one. What about like things like servers? Would you be a proponent to encourage companies to have their own servers stored, for example, on shore?
Jamie Sherrah: AI systems are IT systems so you'd have to look at your cybersecurity, practices and policies. look at it in the same context as that. So doing a risk assessment and everything like that, people seem pretty happy with cloud systems as opposed to [00:16:00] on-prem. If you're really zero trust, you can go on-prem. So that means that you are running the AI on a computer at your premises.
so for example, with the hospital scribe that we've made, that is an on-premises system. There's a GPU server running that's running the large language model, and other systems, machine learning systems.
That would be the extreme, level. But the problem is if you're using open source models, they're not as capable as the ones that you get in the cloud, like chat gpt and Claude. But it depends on your application. So you, with all this, you really need to look at the requirements of what you need. then you can have like private cloud, so like a Hippo, products like that. So you can have Azure or AWS. Or something like that.
Within that cloud system, you can have a model and you can even have models that are, guaranteed to be served in where the processing [00:17:00] happens within Australia.
And then you make sure the data gets stored in Australia. 'cause if you use chat GBT, you've got on the left, you've got all your con past conversations.
So all that data is stored. On their servers in the us right? so there's options to have that stored in the cloud, but in Australia. so there's lots of different options. You could look at it as, maybe we do use US systems. Machine for the machine learning, but it's stateless, it's not storing anything.
And then we make sure that the storage is here. So there's different options and you have to look at the requirements. Interesting. But if you go on premises, the costs of the hardware can be significant depending on the power of the model you are using.
With agents and stuff, is talking about helping businesses, best practices.
If you think of in software, you've got this concept of read write. Sometimes you're just reading, you're not changing anything and sometimes you're writing. So that would be, putting documents, on your Google Drive or sending an email or something, it seems, [00:18:00] straightforward that if you're reading that's safe, you're not disclosing anything. But if you are writing, then that's when data could leak out of your organization. So I would say, go ahead with integrations that read data. from outside, but just be careful and review where you are enabling an integration that can write to some system and data's going out the door that way. Another recommendation that's practical for businesses is splitting, the use cases into sensitive and nonsensitive. So you could imagine a company has, an internal system. That you just use for really secure data board meeting work or something like that. but if you are writing, an article for, a blog or LinkedIn or something that's gonna go public, then just use whatever tool you want.
Bram Lagrou: SAS [00:19:00] Pocalypse, what is that all about?
Jamie Sherrah: A couple of months ago, the stock market, got hit hard with, software companies. their share prices going down, which they called the SAS apocalypse. And this came about around that time, Claude Cowork was coming out and open claw and in part what they say in particular, this was triggered by.
Claude releasing these, skills or recipes. one of them was for reviewing contracts, legal contracts, and there's been machine learning or AI legal software around for. Decades, Before large language models and generative ai.
People just think we don't need those tools now.
'cause if we've got Claude and Claude can review contracts, so we don't need that specialized piece of software. Specialized, SaaS, software as a service application on the web. We don't need that anymore. We just get clawed to do it or [00:20:00] chat GPT.
And the stock market is, the prices have the future predicted earnings built into them. So people were saying, okay, these software packages might still be valuable, but they're not gonna get as much recurring revenue in the future as we expected. So then they downgrade the price estimate even though they might be good businesses.
Bram Lagrou: there's another point that, you mentioned that would be worth us talking about. what about the missing AI knowledge or data layers, specifically for organizations out there? What did it have to be thinking about? What is important? 'cause I think we all very easily dabble into all this stuff.
We fall into it, we start using chatt, pt, we start using Claude we're using, copilot, whatever. But we're not really mindful of how to build it properly, how to structure it well, and how to be smart with it so that we do things right.
Jamie Sherrah: Yeah, that's where the dust has not settled.
We're still in a, early days or pioneering days, [00:21:00] of, of AI When I was at uni, the internet was invented, and I was using browsers like Netscape and Mosaic that don't exist anymore, and Yahoo search engine that I think it's might still be there, but no one uses that.
And yeah, there were others and, they're not around anymore, i've always thought you look at open AI and everybody thinks they're cool and Chate is cool, but who knows in 10 years they might not exist. Google can play the long game 'cause they're massive and they've got this income stream from advertising and, they, they're doing quite well with Gemini anyway.
So who knows in 10 years how it'll look in terms of ai. But again, there's that issue of, it's powerful when you bring the data to it, and that's where it's not clear what will happen. Because the problem is if you're trying to integrate with everything, then it's difficult. But maybe AI can help with that as well.
as I said. I think what'll happen is for [00:22:00] business, eventually it'll settle on, a handful of solutions that people use. And copilot is looking like one. 'cause you've got that Microsoft ecosystem. And for knowledge workers, that's just like such a common solution. Google's got their suite as well and their ecosystem that they can, and they're deploying Gemini into and that it's all integrated.
Bram Lagrou: What about things like Salesforce? 'cause obviously they're big they're in the cloud. They're very big on, they're, what is it? They're Genie application or whatever it's called, or gen or, yeah.
Jamie Sherrah: This comes about to the SaaS pocalypse. There's different kinds of software. The operating system running our computers right now, that has to be very carefully made. You can't vibe code that, if you've got something that's doing payroll for your employees, you've gotta be really carefully checked.
You don't, as a company, you don't wanna go, oh, just vibe code zero, and have it do my accounts for me. If there's any mistakes or lack of robustness in that, or it deletes all your data. It's a huge problem. And so this. You wouldn't vibe code the [00:23:00] software that runs a airliner,
Bram Lagrou: definitely.
Jamie Sherrah: And aircraft. So there's different kinds of software and those tried and true apps, need careful development. A lot of people, when they talk about using AI to generate apps quickly, it's about kind of internal apps that maybe have a limited lifespan need to evolve quickly they can be rough around the edges. They don't need to look good and stuff like that. don't need to handle every case.
Bram Lagrou: You, you're looking like a application, like lovable or something.
Jamie Sherrah: Yeah. I've only used lovable for websites. It, does it do apps as well?
Bram Lagrou: My understanding is that it would if you want a CRM system, it could build you on, yeah.
Jamie Sherrah: Yeah, and there's like superb base and platforms like that do those kind of things. because yeah, deploying it, in the cloud is one of the, you can vibe code it on your laptop and then you get to deploy it. So those platforms allow it to be deployed easily, which is really important.
but yeah, getting back to the SaaS apocalypse and companies like Xero and [00:24:00] Salesforce, they're like, programs of record or it's where your data lives, and if then they then go, oh, let's add AI to our platform, it's probably gonna be, end up mediocre and it's not gonna keep up with the other.
Companies that's all they do, like open AI and Anthropic. So I think a better strategy for them is to say, yeah, we're like a data company and we've got some domain specific tooling. And maybe some domain specific ai, but we're just gonna make sure that the integrations and the APIs and interfaces are there, that it can interface with the latest and greatest AI tools.
So if your SAS company just does some data processing, then yeah, you're in trouble in terms of, being superseded by something like Claude. But if, you are storing data and there's a lot of domain specific [00:25:00] processing and knowledge that goes into how you handle that the calculations and processing that's going on, then that's like a moat, and, good value. So maybe the future's gonna be, you have all these. data centric services, you have some domain specific services and they're all openly integrating with ai. and then somehow we come up with a way of doing that securely.
The internet is just gonna get more and more complex.
It already is too complex and we jump on a browser and use it like a library, searching for things. And, we are not really benefiting from everything we can, 'cause we can only. Search for a few things that we thought of, and I think what we should have or what we will have is, like a broker to the internet that we've got an AI agent that's our broker to the internet, and it's finding stuff for us, proactively giving it to us and it's interacting with other AI agents or [00:26:00] brokers, to find what we want.
And maybe at that level. Those agents can disclose like the minimum amount of information that they need to maintain privacy
Bram Lagrou: Hey, I've got one more question for you, briefly. Obviously ai, a lot of people say Hey, look, this is really, making a lot of jobs redundant and replacing people and more robots, more machines,
Aside from that, I'm just wondering. In this, whole debacle of machines versus humans, how do you see the future of people skills and the need for them the more that we get ai, being part of our daily life?
Jamie Sherrah: Yeah. They say that, some small fraction of human communication is verbal, right?
It's, Body language and expression and all those nuances. And we're far from AI replacing all that stuff. I guess you can have avatars that [00:27:00] can do stuff. but still being in person with someone is important.
Bram Lagrou: I'm coming actually from the angle of employability for somebody to have a good opportunity in the job market.
Even though their technical skills might be replaced by agents here and there more and more, how would people skills potentially help them to maintain, if not secure, further employment
Jamie Sherrah: I guess the default answer is just make sure you're really good at using AI and keep up with it.
So is that argument that AI won't replace your job? Someone who uses AI will replace your job. so obviously some jobs are just gonna outright be replaced as with any disruptive, industry cycle. And, people are gonna just be more productive and get more done with ai. Maybe in some ways work will get more stressful because you won't have those downtimes of doing the boring, repetitive [00:28:00] work that might've just been a rest for your brain.
The AI's just handing you all the hard cases.
But yeah, I think if you are orchestrating, overseeing, let's look at software for example. The way I work has changed a lot. I use code and stuff that wasn't feasible before. I can do now, in much quicker time. I don't know how it's gonna go for junior developers because.
To me core code is like a junior or better developer. Where you get the answer in minutes, not days, so and it's not having to go and learn about stuff. It just knows it. So in terms of, efficiency and cost. It's really good, but you need, it's not good enough to be autonomous yet.
You need an experienced software engineer to guide it, to check it, to correct it. It'll just do silly design things. Make silly mistakes, make wrong assumptions, and you need someone overarching it. And I think it'll be the same in every industry with language models. Say [00:29:00] you're doing marketing. Because somebody needs to quality control or check it. Make sure that what it's producing is on message with the company and not breaching any guidelines and things like that. But it doesn't mean you have to actually be Right. Typing the words, doing the research.
Bram Lagrou: I like the analogy that I heard somebody say, and I thought that was the best way to look at it.
He says, you gotta see yourself as the expert being the architect. But then you actually have the builders, the doers, the implementers that do 80% of the grind work, which is the ai. You gotta be the architect, the designer, and overseeing it, coordinate it, but have the doing done by ai. So I thought that was really a good way, 80, 20% rule sort of thing.
Jamie Sherrah: Yeah. Will it change over time? On their own, the AIs don't have motives and goals, right? Ultimately it's us telling it what to do or making those decisions. The other thing is accountability. For example, You could say that all lawyers are just working with documents and information, so you could replace all that with ai.
But an AI is not a [00:30:00] legal entity, so someone's gotta be accountable legally for the decisions and information and all that stuff. At the very least, you need a lawyer to be overseeing what the AI is doing. But then there's also, all that high level judgment, decision making, combining information.
I think humans are still better at that subjective and, decision making and information fusion, than the computers are.
Bram Lagrou: Our time is up for today, Jamie, but it's such a vast topic to unpack. I have to say, you've, been very eloquent in explaining yourself and giving us a little bit of an understanding of what this very highly complex matter represents for us all, both in pros and cons. Really appreciate you coming on to the podcast today. And obviously should anyone ever want to talk to you, you're on LinkedIn, they can reach out to you. You've got your website, so we'll make sure that people have access to those. So otherwise, any last final thought or comment just briefly.
Jamie Sherrah: Just make sure that you are [00:31:00] using AI so that you can keep up 'cause it's changing very quickly and yeah, it's hard for anyone to keep up.
Bram Lagrou: This was the end of our Energize with Bram podcast.
We look forward to seeing you again next time. For now, goodbye.