Matthew is a System Administrator at a growing startup. For the most part, Matthew’s job is easy: most of the processes have been automated, and aside from a few bugs here and there, he spends most of his time on optimization and providing technical support to the rest of the team.
That is, until one day Matthew receives a billing warning from their cloud provider. Much to his surprise, they’ve gone over budget. Once he checks the dashboard, it’s even worse than he imagined: in less than 24 hours their systems consumed more resources than what was allotted for the month.
Matthew discovers that someone has gained unauthorized access to their cloud and has been running scripts. In other words, Matthew’s startup, like some 30,000 websites every day, was the target of a cyberattack.
With the surge in remote work and cloud computing, cyberattacks have become a growing threat. The estimated loss due to hackers in 2015 was over $3 trillion, and experts believe that by 2025 we might be looking at losses of up to $10.5 trillion.
Companies are spending more than ever on cybersecurity, implementing strategies like DevSecOps to create safer processes less prone to exploits. Unfortunately, technology advances for both sides, and with better security also comes more refined methods to breach it.
Designing with security in mind is always difficult; a system is only as safe as its most vulnerable part. All it takes is a bug or an omission to open the floodgates to all manner of exploits.
Human beings are also part of the equation, and unlike software, we can’t be patched with security updates. In terms of cybersecurity, we are liabilities, vectors of attack waiting to be exploited by someone with an understanding of social engineering.
It should come as no surprise that cyberattacks are very different from their depiction in the media. Genius hackers intercepting a transmission and writing code in real time is about as realistic as an ’80s action hero entering a building and karate-chopping his way through mooks.
In truth, most cyberattacks aren’t nearly as flashy. In fact, the most common forms of cyberattack, phishing and man-in-the-middle, rely on fooling a recipient, something that requires little to no coding skill at all.
Why brute force your way into someone’s account when they can willingly give their personal information away? Why waste time trying to find an exploit to access a network when someone on the inside can give you access?
Inside jobs do happen, but more often than not, people are not aware that they are being played by con artists. After all, social engineering works by exploiting the vulnerabilities in human cognition, from our limited ability to process information to our innate belief that most people are good.
Case in point, back in the day, when USB sticks were extremely popular, cybercriminals would give them away on the streets as publicity material for fake companies. All it took was a regular office worker in need of a USB stick to put it in their machine, and hackers would have access to their computer or even the company’s network.
Who in their right mind would think that a nice person giving away publicity products would be part of a cybercriminal ring? Most of us would just take for granted that a company is trying to push its brand. It’s the more plausible explanation. That’s what cybercriminals count on.
But most people can recognize a suspicious email or a weird phone call, right? Yes, but in this kind of attack, the criminal relies on the possibility that one person out of a hundred will not. Just like with the USB drop, all it takes is one vulnerability.
And once again, with technology comes new challenges. One quick example: Discord is a social media and chat platform for gamers that exploded in popularity due to the pandemic. It’s fantastic to meet and play with other people with similar interests.
But it’s also well known for its wide range of exploits, some even allowing for RATs (remote access trojans). If an employee were to launch Discord in a browser on their work computer to chat with friends, they could inadvertently download malware from a “trusted source”.
“No, that won’t happen to me”
Perhaps the biggest risk is thinking that this kind of thing won’t happen to you. Allow me to share a quick story.
I often provide consulting services for a travel agency. Every three weeks they have to send a report about ticket sales to a specific airline. The airline built a web application so that agents could check their status as well as upload the required information with each report.
Due to a poor UI, one agent asked me if I could help them upload the files. To my surprise, the app had more than its fair share of bugs, one of which triggered a “page not found” error. Fine, except for the fact that the app was built with a very popular framework that ships with debugging mode enabled by default.

So the “page not found” error wasn’t your typical 404 page; it was a full debugging report, with code, routes, and everything you would need to get a very good picture of how the server was structured. Keep in mind that this is an app designed so people could upload critical information like credit card numbers and personal data.
It’s not the framework’s fault; one of the first things the documentation tells you is to turn off debugging mode before going to production. I wrote a lengthy email explaining the risks of keeping the mode on and sent it to their web administrator.
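The source doesn’t name the framework, but the safe pattern is the same everywhere: never hard-code debug mode on, and make production the default. A minimal sketch in Python, assuming a hypothetical `APP_DEBUG` environment variable and a Flask-style `app.run(debug=...)` call:

```python
import os

def debug_enabled() -> bool:
    """Return True only if debug mode was explicitly opted into.

    The variable name APP_DEBUG is an assumption for illustration;
    the point is that the default, when nothing is set, is False.
    """
    return os.environ.get("APP_DEBUG", "").strip().lower() in ("1", "true", "yes")

if __name__ == "__main__":
    # In a Flask-style app this would be: app.run(debug=debug_enabled())
    print(f"debug mode: {debug_enabled()}")
```

With this, a forgotten setting fails safe: the app ships with detailed error pages off, and a developer has to deliberately flip the switch in their local environment.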
A few days later I got a reply assuring me that the risk was minimal since only key people had access to the web application. Except that was not the case: I was living proof that a non-agent knew of the application.

And the error? It can be triggered by literally typing nonsense into the navigation bar of any web browser. That kind of hubris is the bane of security systems; thinking you can leave an exploit in place because it poses very little risk is asking for trouble.
Training Your Personnel
Every person in a company, regardless of their position, should go through a mandatory security workshop. Understanding the basics of security goes a long way toward preventing people from making the kind of mistakes that could end catastrophically.
In tandem, it’s very important to create a set of security policies and promote them by offering incentives. Security-conscious behavior is seldom reinforced. We chastise people who make mistakes, but rarely reward those who serve as an example for promoting a security-conscious culture.
All in all, the human element is as central to cybersecurity as it is for security in general. Technology can help us, but it can only take us as far as our practices let it.