James Bond and Jason Bourne movies are super fun to watch; there’s a great thrill in spy novels and whodunnit stories.
The article below, by Roger Grimes in CSO Online, shares some of those same thrills and spills but in the context of cyber security.
Check it out, it’s a good read.
Emphasis in red added by me.
Brian Wood, VP Marketing
——-
6 lessons learned about the scariest security threats
IDG News Service — Advanced persistent threats have garnered a lot of attention of late, deservedly so. APTs are arguably the most dangerous security concern for business organizations today, given their targeted nature.
An APT attack is typically launched by a professional organization based in a different country than the victim organization, thereby complicating law enforcement. These hacking organizations are often broken into specialized teams that work together to infiltrate corporate networks and systems and extract as much valuable information as possible. Illegally hacking other companies is their day job. And most are very good at it.
By most expert accounts, APTs have compromised the information infrastructure of nearly every significant company. The question isn’t whether you’ve been compromised by an APT, but whether you’ve noticed it.
I’ve been helping companies fight and prevent APTs for nearly a decade. In that time I’ve amassed my share of war stories from the IT security trenches. Here are some of the better real-life tales, not just for the chase, but for the lessons learned.
APT war story No. 1: APT eyes are watching you
I once spent more than a year responding to an APT attack at a multinational company that was involved in everything from high-tech satellites and guns to refrigerators and education. When I got the call, the client had already been hit by two other APT attacks, which isn’t unusual. Most companies that discover an APT eventually figure out it has been there for years. One client I worked with has been compromised by three different APT teams over the past eight years, which is not surprising in the least.
The multinational was finally serious about combating the attack. The entire IT team was called together to respond; a large, single-purpose task force was created; all the relevant experts were brought in. It was decided that all passwords would be reset, but not for many months.
You may wonder why the delay in resetting passwords. Password resets should always be pushed out far into the future in these situations because there’s no use changing all the passwords to kick out an APT if you can’t guarantee you can prevent the baddies from breaking right back in. Identify the most relevant weaknesses, fix them, then change the passwords. That’s the best defense.
As in most companies I work with, everyone involved was sworn to secrecy. Code words were established, so the team could discuss various aspects of the project in (possibly monitored) emails without alerting intruders or employees not yet engaged.
In this instance, the big password reset day was scheduled to coincide with the company’s annual baseball game, which had been instituted to increase employee morale. Because of this, the project was dubbed “company baseball game,” with the name of the company changed here to protect its identity. From that point forward, no one mentioned APT or password reset. Everything was about the baseball game.
The company’s systems were completely compromised, so new laptops and wireless routers were purchased. All project-related work was to be performed on these laptops over a secured wireless network to prevent any accidental leakage of information about the project, regardless of code-word use.
One facet of the project was to tackle the overabundance of domain administrators at the company. There were far too many — all told, more than 1,000. We set up camp in one of the many executive conference rooms we used over the course of the project and began discussing what to do.
We couldn’t decide which domain administrators were truly needed and which we could disable, so we decided to disable them all on “company baseball game” day, and force those who really needed domain admin access to reaffirm their need. We drafted a domain admin access request form on one of the project laptops and called it a day. We would send out the forms just before “company baseball game” day so that each person who needed a domain admin account could get one in time to be prepared.
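As an aside, the first step in that kind of cleanup is simply knowing who holds domain admin rights today. Here is a minimal sketch of my own (not something from the engagement) that pulls the current Domain Admins membership from Active Directory so each member can be asked to reaffirm the need; it assumes the third-party ldap3 Python package, and the server name, base DN, and credentials are hypothetical placeholders.

# Illustrative sketch (not from the article): list the current members of
# Domain Admins so each one can be asked to justify the access.
# Assumes the third-party ldap3 package; connection details are hypothetical.
from ldap3 import Server, Connection, NTLM, SUBTREE

SERVER = "dc01.example.com"          # hypothetical domain controller
BASE_DN = "DC=example,DC=com"        # hypothetical base DN

def list_domain_admins(user: str, password: str) -> list[str]:
    """Return the DN of every direct member of Domain Admins."""
    conn = Connection(Server(SERVER), user=user, password=password,
                      authentication=NTLM, auto_bind=True)
    group_dn = f"CN=Domain Admins,CN=Users,{BASE_DN}"
    # Direct membership only; nested groups would need the LDAP
    # matching-rule-in-chain filter (OID 1.2.840.113556.1.4.1941).
    conn.search(BASE_DN,
                f"(&(objectCategory=person)(memberOf={group_dn}))",
                search_scope=SUBTREE,
                attributes=["sAMAccountName"])
    return [entry.entry_dn for entry in conn.entries]

if __name__ == "__main__":
    for dn in list_domain_admins("EXAMPLE\\auditor", "s3cr3t"):
        print(dn)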
The next morning around 7:30 a.m., I entered that same executive conference room. The project manager was already there. He looked up at me, his eyes a bit wider than usual for the early hour, and said, “Here’s our first two domain admin requests,” as he flipped them to me.
What did he mean domain admin requests? The form wasn’t out of draft stage and wasn’t scheduled to go out for months. But there they were, two filled-out “domain admin access request” forms. They had some small but very noticeable mistakes, so it was obvious they were not from our original draft. Each had been filled in by a team member from a foreign subsidiary who currently had domain admin access. The reason they were requesting reinstated domain admin access? Because the current access was to be cut on baseball game day.
To this day, I still can’t believe it. I was holding two forms that shouldn’t have existed. The only draft was on a laptop on an air-gapped network. Our precious secret project code was blown. Astonishment passed from team member to team member along with the forms as we gave them the news.
After much investigation, we figured out that the APT, led by insiders, had infiltrated all the conference rooms using the data display projectors and executive video conference systems. They were watching and digesting all our supposedly secret meetings. Their only mistake was in not understanding that the form didn’t really exist yet and was not due to be sent out for months. Thank goodness for language barriers.
Lesson: If your conference equipment is networked and can record voice or video, make sure those capabilities are disabled before conducting sensitive meetings.
——-
APT war story No. 2: Not all APTs are as advanced as experts think
This is the story of an APT team that had taken total control of a company’s network. They were actively creating connections all around the network, day or night, by the time I got called in. They were beyond caring whether they had been discovered.
APTs are almost certain to dump all password hashes and use pass-the-hash (PtH) tools to take over the rest of an organization’s network. In this instance, the customer decided it was time to disable the weak LAN Manager (LM) password hashes that Microsoft had recommended disabling for at least 10 years and had been trying to disable by default since at least 2008. This particular APT was using the captured LM password hashes to do its dirty work.
I told the customer the proposal would not work because, by default, at least two types of Windows password hashes exist in Microsoft authentication databases: LM and NT hashes. The attackers had downloaded both types, and the PtH tool they were working with could use either. I even showed the client how the attacker’s tool had the syntax built in to switch between LM and NT hashes, a very common feature of PtH attack tools. Worse, even if you disable the storing of LM hashes, they are still created in memory when someone logs on. It sounds crazy, but that’s how Windows works.
The customer would not be dissuaded. Despite my protestations that it was wasted effort, the client disabled the LM hashes and reset the passwords. Now the local and Active Directory databases contained no usable LM password hashes. You know how well that worked?
Well, it worked: the APT team never used another password hash to perform its attack. Truth be told, they just moved on to other methods (see below), but the PtH attacks stopped. It turned out that the APT team didn’t even know its own tools. You can imagine the discussion they must have had internally when all the LM hashes disappeared, including shrugged shoulders and a brainstorm of new strategies.
Lesson: “Advanced” may be part of the APT name, but not all APT attackers are all that advanced. Plus, sometimes the expert is wrong. I wasn’t wrong technically, but the client still got the outcome it was looking for. It humbled me.
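For what it’s worth, here is a minimal sketch (my illustration, not something from the engagement) that checks whether a given Windows host is configured to stop storing LM hashes; it reads the standard NoLmHash policy value and is Windows-only. As noted above, even with storage disabled, LM hashes can still be created in memory at logon, so this is only one piece of the picture.

# Illustrative sketch (not from the article): check the NoLmHash policy value
# under HKLM\SYSTEM\CurrentControlSet\Control\Lsa. Windows-only.
# Caveat from the story above: disabling storage does not stop LM hashes
# from being created in memory when someone logs on.
import winreg

LSA_KEY = r"SYSTEM\CurrentControlSet\Control\Lsa"

def lm_hash_storage_disabled() -> bool:
    """Return True if the NoLmHash policy value is set to 1."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, LSA_KEY) as key:
        try:
            value, _value_type = winreg.QueryValueEx(key, "NoLmHash")
        except FileNotFoundError:
            return False  # value absent: the OS default behavior applies
        return value == 1

if __name__ == "__main__":
    print("LM hash storage disabled:", lm_hash_storage_disabled())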
—–
APT war story No. 3: The medicine may be the poison
As a full-time Microsoft security consultant, I’m frequently asked to work on APT engagements led by other companies; I’m a resource, not the project leader. There’s one security consulting company I’ve worked with enough to know many of its staff members and consultants informally, if not personally. We understand what our roles will be, depending on who gets there first, makes friends with the CIO, and assumes leadership. Our partnerships have always been friendly, though competitive. After all, it’s better to be a leader than a follower.
This security consulting firm is well known for fighting APTs and even sells detection software to help. Frequently, on engagements, it succeeds in selling its software and getting it installed on every computer in the environment. I was very used to seeing its service running in Windows Task Manager.
In this particular story, the security consulting firm arrived first, saved the day, and moved on. It also succeeded in installing its software throughout the organization and hadn’t been onsite in nearly a year. As far as anyone knew, the customer had been APT-free since the initial remedy. At least no one had detected any signs.
I’m a big fan of honeypots. A honeypot is software or a device that exists simply to be attacked. It can be an unused computer, router, or server, or a decommissioned machine that no person or service should ever connect to. Honeypots can mimic anything, and they are great for detecting otherwise undetectable adversaries, so I recommend them often. When a hacker or malware does connect, the honeypot sends an alert that can trigger an immediate incident response.
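To make the idea concrete, here is a minimal honeypot sketch of my own (not the author’s tooling): a listener on a port that nothing legitimate should ever touch, where the connection attempt itself is the alert. The port number and log file name are arbitrary placeholders; in practice you would feed these events into your alerting and incident response pipeline.

# Minimal honeypot sketch (illustrative, not the author's tooling):
# listen on a port nothing legitimate should touch and log every attempt.
import logging
import socket

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def run_honeypot(bind_addr: str = "0.0.0.0", port: int = 2222) -> None:
    """Accept connections, log the source address, and drop them."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((bind_addr, port))
        srv.listen()
        while True:
            client, (src_ip, src_port) = srv.accept()
            logging.info("connection attempt from %s:%d", src_ip, src_port)
            client.close()  # nothing to serve; the attempt itself is the alert

if __name__ == "__main__":
    run_honeypot()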
In this instance, I spent a few days helping the client deploy some honeypots. Most customers ask me how we are going to attract hackers to the honeypots. I always laugh and answer the same way: “Don’t worry, they will come.” Indeed, every honeypot I’ve ever set up has detected nefarious activity within a day or two. These new honeypots were no different.
We detected network logon attempts coming from multiple workstations, none of which had a legitimate reason to be logging on. We pulled a few of these workstations and forensically examined their hard drives. We found that the APT had placed a remote-access Trojan on each of them. The Trojan’s name? The same as the anti-APT detection software. The bad guys had somehow replaced the legitimate anti-APT software with a Trojan, and it turned out they had done it on nearly every computer.
This explained a few things, like why no APT had been detected. But the bigger question was how it got installed in the first place. It turned out the customer’s “gold build” had been compromised in its build environment, and this Trojan was part of the build.
Lessons: First, verify the integrity of your builds; prevent unauthorized modification or at least have some way to detect it. Second, honeypots are a great way to detect malicious activity. Third, always look for and investigate strange network connections from unexpected places.
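On the first lesson, one low-tech way to detect unauthorized modification is to record hashes of a known-good gold build and compare deployed copies against them. A minimal sketch under that assumption (the paths and manifest name are mine, not the article’s):

# Illustrative sketch (not from the article): record SHA-256 hashes of a
# known-good "gold build" and later verify a deployed copy against them.
import hashlib
import json
from pathlib import Path

def hash_tree(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def write_manifest(root: Path, manifest: Path) -> None:
    """Record the known-good hashes of the gold build."""
    manifest.write_text(json.dumps(hash_tree(root), indent=2))

def verify(root: Path, manifest: Path) -> list[str]:
    """Return paths that are missing, added, or modified versus the manifest."""
    expected = json.loads(manifest.read_text())
    actual = hash_tree(root)
    differing = set(expected.items()) ^ set(actual.items())
    return sorted({path for path, _digest in differing})

The idea would be to run write_manifest against the trusted build in the build environment, store the manifest somewhere the build system cannot write to, and run verify against anything that gets deployed.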
—–
APT war story No. 4: All your PKI base belong to us
APT attacks on Public Key Infrastructure (PKI) servers used to be somewhat rare. In fact, until two years ago, I never personally ran across an APT where PKI servers had been involved. Now, it’s fairly common. But the most relevant story is the one where a compromised PKI turned into physical access to a sensitive area.
This particular customer used its internal PKI servers to create employee smartcards. These smartcards were used not only to log on to computers but also to physically access company buildings and other infrastructure.
The customer’s root CA (certification authority) server was a virtual instance sitting, disabled, on a VMware host server. The bad guys had found it, copied it offsite, cracked the local (weak) administrator password, and generated their own trusted subordinate CA. They used this CA to issue themselves PKI access to everything they could.
What surprised me most was the video my client showed me of two unknown men posing as employees. Using the fake smartcards they created, they had parked their cars inside the secured company parking lot, walked into the building through the employee entrance, and onto a floor that stored highly sensitive data.
My customer couldn’t tell me what happened after that or what was taken, but I knew they were not happy. There was a very serious mood in the room. I was invited to help them create a new PKI and to migrate the company into the better-secured PKI environment.
Lesson: Protect your PKI CA servers. Offline CAs should be just that: offline! They should not be disabled or sitting on the network with their network cards disabled; they should be off the network, stored in a safe, and not so easy to compromise. CA private keys should be protected by a hardware security module (HSM), and all related passwords should be very long (15 characters or more) and complex. Plus, it can’t hurt to monitor for unauthorized CAs being added as trusted CAs.
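On that last point, one simple way to watch for unauthorized trusted CAs is to compare a machine’s root store against an approved fingerprint list you maintain yourself. A minimal, Windows-only sketch (my illustration; the approved-fingerprint file is a hypothetical artifact):

# Illustrative sketch (not from the article): flag trusted root certificates
# whose SHA-256 fingerprints are not on an approved list you maintain.
# Windows-only (ssl.enum_certificates); "approved_roots.txt" is hypothetical.
import hashlib
import ssl

def root_store_fingerprints() -> set[str]:
    """SHA-256 fingerprints of every certificate in the Windows ROOT store."""
    return {
        hashlib.sha256(cert_bytes).hexdigest()
        for cert_bytes, encoding, _trust in ssl.enum_certificates("ROOT")
        if encoding == "x509_asn"
    }

def unapproved_roots(approved_file: str = "approved_roots.txt") -> set[str]:
    """Fingerprints present in the root store but not on the approved list."""
    with open(approved_file) as f:
        approved = {line.strip().lower() for line in f if line.strip()}
    return root_store_fingerprints() - approved

if __name__ == "__main__":
    for fingerprint in sorted(unapproved_roots()):
        print("unexpected trusted root CA, SHA-256 fingerprint:", fingerprint)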
—–
APT war story No. 5: Don’t forget the accounts you’re not supposed to touch
As mentioned above, most APT recovery events involve resetting passwords. If you’re going to reset passwords, reset all accounts — though it’s easier said than done. All my customers start out doe-eyed, ready to reset all passwords, but when they discover how much it will disrupt the business, they quickly scale back their goals. It’s far easier to get fired for causing a significant business interruption than it is for not getting all the hackers out.
This particular customer was ready and incredibly thorough. The plan was not only to reset all user and service accounts, but computer accounts as well. Almost no companies do this, especially when it comes to resetting service and computer accounts. Heck, I’m giddy if they reset all elevated user accounts, because it’s hard to get that little bit done thoroughly. Laugh only if you haven’t been through this drill.
Password reset day came and went. There were significant service disruptions, some of which were painful enough that we had to tell the CEO. By the end of the week, however, we had reset all the passwords.
Within a few days, the APT owned everything again, picking up all email, controlling all the elevated accounts, including IT security accounts. It was like the password reset never happened. We were perplexed. As best we knew, we had removed the easy holes, educated employees, and couldn’t see any evidence of Trojan backdoors.
Alas, there’s a built-in Windows account called krbtgt that is used for Kerberos authentication. You shouldn’t touch it, remove it, or, as far as we previously knew, change its password. Arguably it shouldn’t even show up as a regular account in user account management tools, but it does, and this APT team knew it.
As I’ve learned on successive engagements, abusing krbtgt is a go-to technique. After an APT crew compromises an environment, they add the krbtgt account to other elevated groups. Because customers usually leave it alone, even during a password reset, it makes a reliable backdoor account. Great idea, if you’re a malicious hacker.
My customer reset the passwords of its krbtgt accounts and everything else (again). As far as I know, it has not had another detected problem. Be aware that resetting krbtgt accounts will absolutely cause authentication problems. It’s a pain. But if you have to do this, you too will get through it.
Lesson: If you’re going to reset all accounts, make sure you know what “all” means.
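If you want a quick check for the specific backdoor described above, a sketch like the following (mine, not the author’s; it assumes the third-party ldap3 Python package, and the server, base DN, and credentials are hypothetical) pulls krbtgt’s explicit group memberships so that anything unusual stands out:

# Illustrative sketch (not from the article): list the groups krbtgt has been
# explicitly added to. Assumes the third-party ldap3 package; connection
# details are hypothetical.
from ldap3 import Server, Connection, NTLM

SERVER = "dc01.example.com"      # hypothetical domain controller
BASE_DN = "DC=example,DC=com"    # hypothetical base DN

def krbtgt_group_memberships(user: str, password: str) -> list[str]:
    """Return the explicit group DNs the krbtgt account belongs to."""
    conn = Connection(Server(SERVER), user=user, password=password,
                      authentication=NTLM, auto_bind=True)
    conn.search(BASE_DN, "(sAMAccountName=krbtgt)", attributes=["memberOf"])
    if not conn.entries:
        return []
    # memberOf lists explicit group DNs only (the primary group is omitted).
    # In many domains the only expected entry is the Denied RODC Password
    # Replication Group, so anything else deserves a closer look.
    return list(conn.entries[0].entry_attributes_as_dict.get("memberOf", []))

if __name__ == "__main__":
    for group_dn in krbtgt_group_memberships("EXAMPLE\\auditor", "s3cr3t"):
        print("krbtgt is a member of:", group_dn)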
—–
APT war story No. 6: Information overload is spurring APT innovation, too
My last story isn’t about a single client, and it shows the evolution of APT over the years. Early APT practitioners would immediately collect everything they could as soon as they broke in. They would siphon out all old emails and install bots to get every new email sent. Many times they would install Trojans to monitor the network and databases, and if new content was created, they would copy it.
In other words, many companies have online backup services they aren’t paying for.
Those were the old days. In a world where terabyte databases are no longer even close to surprising, APTs have a problem. When they get complete access to a network and learn where all the information is stored, they have to be more selective. Whereas they used to grab everything, what we see now are very deliberate selections. The more advanced APTs these days build their own search engines, sometimes with their own APIs or by borrowing the APIs of well-known search engines, to hunt for specific data. They may still leave with only gigabytes of data a day, but what they take is highly selective.
Lesson: APTs have the same issues finding and managing data that you do. Don’t let them index your data better than you do.
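In that spirit, here is a minimal sketch of my own (not from the article) that builds a rough index of where sensitive data lives by flagging files that contain chosen keywords; the root path and keyword list are hypothetical examples.

# Illustrative sketch (not from the article): flag files containing chosen
# keywords so you know where your own sensitive data lives. The keyword list
# and root path are hypothetical placeholders.
from pathlib import Path

KEYWORDS = ("confidential", "secret", "ssn", "account number")

def find_sensitive_files(root: str) -> dict[str, list[str]]:
    """Map each matching file path to the keywords found in it."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore").lower()
        except OSError:
            continue  # unreadable file: skip rather than fail the whole scan
        found = [kw for kw in KEYWORDS if kw in text]
        if found:
            hits[str(path)] = found
    return hits

if __name__ == "__main__":
    for path, keywords in find_sensitive_files("/srv/fileshare").items():
        print(path, "->", ", ".join(keywords))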
http://www.csoonline.com/article/748680/6-lessons-learned-about-the-scariest-security-threats?page=1