People hacking: The human element of cybercrime unmasked

Jenny Radcliffe, AKA ‘the People Hacker’, is an expert in social engineering, negotiation, deception detection and corporate-level infiltration

With cybercrime set to dominate the headlines again in 2022, keeping data safe has never been so critical. Businesses that rely on customer data to manage their services, such as insurtech and fintech companies, are often the target of cybercriminals, simply because the opportunities to extort money from them are so high.

Known as the People Hacker, Jenny Radcliffe is now one of the UK’s lead experts on company infiltration and the psychological vulnerabilities within companies that result in devastating data leaks and ransomware events.

The vast majority of them, she says, are the result of phishing and vishing attacks in which staff members inadvertently let hackers into their systems simply because they have been conned into doing so. The results are often devastating, and as cyber insurance companies weigh up the pros and cons of paying off ransomware demands, the stakes have never been so high. As the IoT continues to mushroom, education and staff training are the only way to cut down on the number of breaches, Radcliffe says.

Q: You describe yourself as a hacker - but your role has a number of elements. Tell us about that.

I'm a hacker, but I'm a hacker that deals in psychology and physical buildings. When most people think of a hacker, they think of a young male in a black hoodie typing lines of code, like the clichéd stock images we see in popular culture.

But actually, I specialise in the psychology of scams and cons, and how things like phishing emails are constructed to catch people out through persuasion, influence, and emotional or psychological triggers.

I'm also a specialist in physical infiltration, which means testing the security of physical buildings. So that's testing not just the alarms, the gates, the fences, and the doors, and whether we can get past those, but also whether people can be persuaded to let us through by deception or persuasion of some kind.

Q: It sounds like a fairly unusual sort of profession. Are there many of you about?

It's not unique, and this type of penetration testing is a big part of the security industry, working adjacent to the technical side of cybersecurity. I think because I'm a woman, I stand out a little bit, as does the fact that I've come out of the security closet and said, "This is what I do." So it's the first time a lot of people have heard about this type of role outside the industry. But there are many people who have roles similar to mine.

Q: What prompted you to do this job?

Essentially, nobody plans to do it, or certainly they didn't when I started. I worked with my family in Liverpool. They were loosely involved in security, particularly nightclub security, and things like that. And as a kid, it was something we'd do for entertainment. We got in and out of a lot of empty buildings in and around Liverpool, and beyond. It's known as "urban exploration" these days, but essentially, it was just nosing around, not really taking or breaking anything.

Q: So, infiltration was in your blood?

Well, gradually, word spread that we were good at what we did - and that we could check whether the security in your home was adequate by trying to beat it. In Liverpool at the time, the people with money and large properties were football players, who came to the clubs and then asked us to help them. After that, we looked at doing the same thing for some businesses and became busy. In parallel, the cyber industry and the cybersecurity industry had started to grow, and whilst cyber sounds very technical, the majority of cybercrime is down to human error or manipulation, so there was a place for my skills within that industry.

Q: How does physical security translate to online security?

So, when you've got someone who's used to talking their way into a building, that translates very well into talking someone into opening an email, clicking on a link, or clicking an attachment. That is the most common way a company or a person is compromised: by clicking on something malicious in an email. It's over 90% of the problem - depending on which statistics you look at.

There are other things that happen as well, but the soft skills I had developed - the people skills that enable me to get into buildings, or to get people to let me in and trust me - are the same skills that apply to bogus phishing emails and social media approaches intended to cause criminal harm.

So, it was a natural progression for me to branch out into the cyber world as well.

Q: It seems that as soon as companies get a handle on one part of cybercrime, another problem pops up and they're attacked somewhere else. Is there a solution to this problem?

The issue is that criminals, scammers, and malicious actors will always adapt and develop tactics to stay several steps ahead of their targets. To give you an example, when COVID-19 hit in 2020, by mid-April Google was reporting 18 million COVID-themed phishing emails a day. This shows that criminals will adapt to the meta environment, to the story that's going on in the wider world, as well as change the methods they use to adapt to the technology.

Now we're seeing things like deepfakes. It's very difficult to know whether a person is who they claim to be, or whether someone is using technology to sound or look like something they're not.

However, it’s not a completely pessimistic picture. Although criminals can always evolve and always use different pretexts, stories, and scripts, there are indicators and signs that generally show you that something is fake.

Q: Can you tell us about the elements that mark something out as fraudulent or a threat?

I generally cite four red flags that often give away that you're about to be breached or you're about to be attacked, regardless of how you are approached.

The first red flag is money. Even if you deal with money all the time, anything to do with money is a red flag. If they mention your money, their money, or present a situation where money may be needed or exchanged, that is cause for external and independent verification of the source of the request.

The second thing is emotion. Is the contact you have received, and the story behind it, designed to make you feel emotional in some way? Have they made you happy, sad, frightened, empathetic, or wanting to be philanthropic and donate to something? Anything where there is an emotional tone to the approach is a step-back point, because when we feel emotional about something, our rationality and decision-making capacity are diminished. Essentially, a scammer or criminal knows this, and so the play is, "We want you to be emotional because that's when you click on the links, open the attachments or give us what we need."

The third element is urgency. Criminals try not to give a “mark” time to think. They always want people acting quickly in the moment and not investigating. They don’t want you to google the name of the company they are pretending to be, or search for the text of the email online in case it has already been called out as a scam.

Cybercriminals want you to act quickly, and in the heat of the emotion. In the heat of the moment, we are more likely to make mistakes, so the less time we have to think about something, the better from their perspective.

Finally, there will always be a “call to action” from the criminal to the victim. Pay something, tell me something, click on something or open something. Without you taking action of some kind the attack fails, so it’s that call to action that is the fourth red flag in the series. Money, emotion, urgency, and call to action.
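To make the checklist concrete, the four red flags could be sketched as a simple keyword heuristic. This is a minimal toy in Python; the keyword lists are illustrative assumptions of our own, and real phishing detection relies on far more sophisticated signals than substring matching.

```python
# Toy illustration of the four red flags described above: money,
# emotion, urgency, and a call to action. The keyword lists are
# illustrative assumptions only, not a real detection model.

RED_FLAGS = {
    "money": ["payment", "invoice", "refund", "fine", "bank", "card"],
    "emotion": ["congratulations", "winner", "warning", "suspended", "failed"],
    "urgency": ["immediately", "final notice", "24 hours", "act now"],
    "call_to_action": ["click here", "open the attachment", "verify", "log in"],
}

def red_flag_score(text):
    """Return which of the four red flags appear in a message."""
    lowered = text.lower()
    return {
        flag: any(keyword in lowered for keyword in keywords)
        for flag, keywords in RED_FLAGS.items()
    }

message = ("Final notice: your payment failed. "
           "Click here immediately to verify your card details.")
flags = red_flag_score(message)
print(flags)
print(sum(flags.values()))  # prints 4: all four red flags are present
```

The point is not the keywords themselves but the pattern: when several of the four flags appear together, that is the cue to step back and verify the request independently.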

Q: Can you give me some examples that you regularly deal with?

Yes. So a good example of emotional manipulation is fear because it can be easy to induce remotely and causes a victim to act quickly.

So the potential victim might receive an email saying, "You've got a parking violation; click here and pay the fine or you'll be summoned to court." Another approach is to play on greed, or on the pleasure of receiving a reward or free gift. They might say you've won something, or you've got a discount on something, and tempt you to click on a link or go to a bogus website. These tactics are hard to detect because they are very similar to legitimate sales techniques, and they work well.

Lastly, they often try to make a person feel ashamed or exposed in some way. They may say, "We've got your computer; we've got ransomware and we've seen your personal photographs," or, "We're going to make up a story that you've been on an inappropriate site and tell your friends or colleagues." The story doesn't have to be true, and no evidence needs to be produced; the idea of dealing with that threat is often enough to make the potential victim panic.

What the brain does in that situation is say to you, "This is a bad situation; take action quickly." It doesn't have to be the right action; it is just action of any kind. We are prompted to do something, anything, to get ourselves out of this state of cognitive dissonance.

So the scammer will give you a problem, make you emotional and then give you a way out: click on the link, pay the fine, open the attachment, and our own psychology, more or less, does the rest.

When those things come into play, we're all vulnerable. But if you know what to look out for and if you practise some basic cyber hygiene, you're better protected than most other people. We'll always live with crime. Cyber and digitisation are a huge part of our lives now because so much of life is online; however, the same tools, tactics, and methods that have always been used are still being used. It's just the way they're delivered that's updated.

Q: What are companies doing wrong in terms of dealing with these problems and what advice would you give them?

You need to train your staff to be aware, and the training needs to be interesting enough to keep people engaged. People need to understand that if they keep themselves personally safe, from a cyber perspective, and are healthily paranoid about approaches online and elsewhere, then that'll translate into the workplace as well, especially now with hybrid working.

Fundamentally, staff need to know that this is a problem and be aware of how they could be approached and targeted. They also need to realise that keeping the company safe is a by-product of keeping themselves safe and being more secure. Realising what is in it for them is often more effective than an employer mandating best practices.

Once awareness is established, the cyber hygiene basics also need attention: things like weak or shared passwords and not patching your phone - all the boring, everyday stuff that needs to be repeated often and maintained all the time.

Q: How can companies make sure their staff are vigilant?

Companies need to make sure their staff understand the risks. They need to make sure the basics are covered, and they need to make it easy for people to report an incident when they think they've encountered something suspicious.

If you get an email or a text, you should know where to send that email - and whom to tell if you think it might be suspicious. Companies have to make it easy for people to report these things.

Additionally, if someone does get caught out by a phishing email or similar, it's important they are educated about how it happened rather than punished for being a victim of a crime. If there's a blame culture, people won't report when they get scammed or when they think they have had a near miss; the organisation then misses an education opportunity and increases the chances of being hit again in the future.

Q: How can the man on the street be sure that the company they are using for their car insurance, say, is being internally vigilant in terms of staff training and data?

That's the six-million-dollar question, isn't it? Because some of the biggest companies in the world have been breached, and they've thrown everything at security. They're resourced and seemingly prepared, and they've got all the technical solutions in place. But the size of a company isn't necessarily the key, and even well-prepared organisations can, and have, been hit.

However, on an individual basis, it is very difficult for someone to make a decision on how good a firm is at keeping their data safe because there's nothing for the layperson to really measure or compare. 

Q: What are the best preventative measures people can take?

You can only do your best and look after what you can control. Do all the basic things, like having a strong password and being careful what you post on social media. I would also say that you should be very careful about who has your data and not share information freely with every company that asks for it.

For example, we're in the festive season now, and people will be shopping online. If a website asks something like, "Shall we keep your card details for next time?", the answer should be "No!"

The fewer places that hold that information, the better. Limit who's got your data, limit what data can be seen, and limit what information any entity has about you.

These days, we live our lives online anyway, so you can't let these risks stop you from taking insurance, shopping online, or using social media, but you do have to be judicious about how much information you share about yourself or make visible online.

Q: If you could impart one piece of advice to help protect data against cyber-attacks, what would be the most critical thing?

Look out for those four red flags. The majority of attacks come through humans, and those are the main ways attacks are framed. Remember them and you stand a good chance of stopping a scammer or criminal, regardless of how they approach you.

