Garbage In, Garbage Out is NOT Why Machine Learning Fails


– Give me the headline. What are the general areas of machine learning failure? – Overconfidence is one of them, which is obviously a danger
in any security situation. So if people believe that machine learning will change the game for them, then that’s an overconfidence problem. So in fact it may change
the game negatively rather than positively. We saw this with Cloud too, where people thought Cloud would be better. It can be, but it also can be worse. Performance and ease of use are some of the harbingers of success, and security often is opposed to those. Machine learning brings with it performance and ease of use, and of course security
may not be in alignment with those other two, so you can actually go
faster doing bad things and it can be easier
to do bad things, right, which is no different than what we've seen in tech before. But now in machine learning, people have an overconfidence
that the ease of use and the performance will
somehow make them safer. It doesn't always. – All right, so let's talk about the categories of failure. You mentioned earlier that there's been machine learning racism, and that it falls into multiple categories. Explain what the categories of machine learning failure are. – Well, to put it simply, there's sort of a hidden
layer that goes on here and there’s a lot of trust
that’s put into this hidden layer which is essentially
that the learning systems are gonna go and figure
things out themselves which doesn’t require human intervention. The problem is within the hidden layer, they may be reinforcing
bad traits of the past, and if you're not familiar enough with where you're coming from, it's hard to predict where you're going, which is true for a lot
of situations in history. So as a historian, you look at
the path that we've come from and you say, "Well, if you apply machine learning to this, it's gonna get really bad really fast." But people don't necessarily
know where we come from so they assume, why won’t it get better? Driverless cars, why wouldn’t
they stop killing pedestrians? Well, the real question is, why wouldn't they kill more pedestrians? And the data shows that we'd actually kill more people with driverless cars, not fewer. – Give me a very specific example of a machine learning failure. – There's so many. That's one of the problems. I'm trying to talk about the top 10. So one very, very good example is you had Google launch an image recognition system, and it essentially didn't account for race. So while they tried to correct for white people, and even had people at Google
who corrected themselves looking at their own images, they never addressed
it in terms of black people. And so it was implicitly racist when it launched, because it would classify black people as animals as opposed to humans. Now, it also classified white people as animals and not humans, but they corrected that because they had white people working at Google. They didn't have black people looking at it to correct it beforehand. So it was racist in its initial state, because the people who were tuning it, if you will, to correct it weren't accounting for race. And that's based in history; we had the same problem with film, so it's not like this came out of nowhere. We've seen it also in the way
that we address recidivism. People have used machine
learning to try to figure out who would be most likely to
commit a crime in the future. But we have a racist past in how we've dealt with justice in America, and so, naturally, it had
a horrible track record in predicting who would be
recidivist in the future. It was 40% incorrect, a massive failure of machine learning. – What are, I guess, your top two or three pieces of advice on how to avoid having a machine learning failure? What are things to look out for? – Well, this is where the secret of the presentation comes in: for seven years
I've been working on a book about how to fix this problem, and I started with the technical aspects, the controls, 'cause that's what we do
in information security. We figure out controls,
firewalls if you will and we build a big conference
around selling people on these controls. But ultimately, what we're missing in the machine learning space is an ethical model that really governs the space, which is: when are you doing something that is unsafe or that is going to harm people? And we saw this with
Facebook in particular. Their lack of ethics allowed them to actually create situations where people were being murdered, close to genocide. And I think that's the core issue here. So if we can't regulate the information flow and we're using machine learning to generate huge amounts of harm, then we have a much bigger
issue than just a control space. It's not about RBAC, it's not about IDS, it's really about what
is safe, what’s unsafe and who gets to decide
what information is harmful to prevent it from flowing. (guitar plucking)
