The correct answer: assess the issue, determine the scope of impact, and remediate the initial problem.
Since I have software that scans file diffs, I can see the vulnerabilities were injected in late October/early November.
So I restored a backup from a few weeks prior to that date.
After restoring from the backup, I immediately updated all of the plugins and software, and removed the package that introduced the vulnerability.
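If you're wondering what "scans file diffs" means in practice, the core idea is just comparing current file hashes against a known-good manifest. A minimal sketch of the idea in Python (the baseline.json manifest and the site/ docroot are hypothetical names, not the actual tool I use):

```python
# Sketch: report added/modified/removed files relative to a known-good baseline.
# Assumes baseline.json was written by this same manifest() right after a clean deploy.
import hashlib
import json
from pathlib import Path

def manifest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

root = Path("site")  # hypothetical docroot
baseline = json.loads(Path("baseline.json").read_text())
current = manifest(root)

for path, digest in sorted(current.items()):
    if path not in baseline:
        print(f"added:    {path}")
    elif baseline[path] != digest:
        print(f"modified: {path}")
for path in sorted(set(baseline) - set(current)):
    print(f"removed:  {path}")
```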
Now, at this point, you might be concerned about the security of my homelab.
I am not.
Because I treat my external-facing services as honeypots that I expect to get PWNED. As such, even if the attacker managed to obtain shell access to the target Kubernetes container, the impact was limited, because the pod itself has ZERO network access to anything except the internet. It can't even talk to my internal DNS server. Nothing.
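For the curious, that kind of isolation can be enforced with a Kubernetes NetworkPolicy. A minimal sketch of the idea, expressed here with the official kubernetes Python client (the dmz namespace and app=wordpress label are hypothetical; the equivalent YAML manifest applied with kubectl works the same way):

```python
# Sketch: egress-only-to-internet policy. The RFC1918 ranges are carved out of
# the allowed CIDR, so the pod can reach the internet but nothing on the LAN --
# including internal DNS, which is deliberately cut off.
from kubernetes import client, config

config.load_kube_config()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="internet-only-egress", namespace="dmz"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "wordpress"}),
        policy_types=["Egress"],
        egress=[
            client.V1NetworkPolicyEgressRule(
                to=[
                    client.V1NetworkPolicyPeer(
                        ip_block=client.V1IPBlock(
                            cidr="0.0.0.0/0",
                            _except=[
                                "10.0.0.0/8",
                                "172.16.0.0/12",
                                "192.168.0.0/16",
                            ],
                        )
                    )
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="dmz", body=policy)
```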
As well, any authentication attempts on my local network would have been detected by my log monitoring platform, which would have delivered an email letting me know about them.
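At its simplest, that kind of alerting is just a log tail plus an email. A rough Python sketch (not my actual platform; the auth log path, the addresses, and the local SMTP relay are all assumptions):

```python
# Sketch: tail an auth log and email an alert whenever a login attempt shows up.
import re
import smtplib
import time
from email.message import EmailMessage

AUTH_LOG = "/var/log/auth.log"  # assumed syslog-style auth log
PATTERN = re.compile(r"Failed password|Accepted password|authentication failure")

def send_alert(line: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = "Authentication attempt on internal server"
    msg["From"] = "alerts@homelab.example"  # hypothetical addresses
    msg["To"] = "me@example.com"
    msg.set_content(line)
    with smtplib.SMTP("localhost", 25) as smtp:  # assumes a local SMTP relay
        smtp.send_message(msg)

def follow(path: str):
    """Yield lines appended to the file, tail -f style."""
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1.0)
                continue
            yield line

for line in follow(AUTH_LOG):
    if PATTERN.search(line):
        send_alert(line)
```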
Since this is a Docker/Kubernetes container, I can rest easy knowing there are no persistent filesystem modifications, as the container's filesystem is ephemeral. And since I restored from a backup taken before the file changes appeared, that is even more peace of mind.
So, what did I find?
A lot of PHP files containing very suspicious exec calls, which should not be present, plus lots of lovely obfuscated code that also, suspiciously, contained lovely eval calls.
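If you want to hunt for the same thing on your own site, even a crude scan for the usual malware tells (eval, the exec family, base64-packed payloads) will surface most of it. A quick Python sketch (the wordpress/ docroot path is an assumption):

```python
# Sketch: flag PHP files containing the usual webshell/backdoor primitives.
# Expect false positives -- plenty of legitimate plugins use these calls too.
import re
from pathlib import Path

SUSPICIOUS = re.compile(
    rb"eval\s*\(|base64_decode\s*\(|gzinflate\s*\(|str_rot13\s*\(|"
    rb"\bshell_exec\s*\(|\bexec\s*\(|\bsystem\s*\(|\bpassthru\s*\("
)

for php_file in Path("wordpress").rglob("*.php"):
    if SUSPICIOUS.search(php_file.read_bytes()):
        print(f"suspicious: {php_file}")
```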
Why did I make a post on this?
Because a few times a week, I see a post along the lines of…
“HELP MY LAB GOT PWNED AND MY STUFF IS NOW ENCRYPTED. WHAT SHOULD I DO?!?!?!”
I am making this post because, if you follow the recommended practice of keeping proper backups (the 3-2-1 rule), you can recover from these issues without breaking a sweat.
Backups, combined with log/authentication monitoring, give you peace of mind. Properly securing everything, and restricting network access where possible, keeps things from spreading around your network.
Without the proper ACLs/rules in place, the attacker could have gained access to my network, in which case containing the damage would have been extremely difficult. This is why a proper DMZ is still crucial for any publicly exposed services.
Log monitoring software was able to alert me to the presence of an issue. Without it, there would still be who-knows-what trying to run on my old WordPress site, and I would be none the wiser. Granted, it took a few weeks for an alarm to trip, which I have already remediated for the future.
Also, WordPress is a vulnerability magnet. This is the third time in the last 8 years.
Use Protect Remote to block “wp-admin” and “wp-json” from public access. It'll go a long way toward protecting your site. If you have a problem with DDoS attacks, you can also use Cloudflare.
Out of curiosity, what log monitoring platform are you using?
You don't have Wordfence and auto-updates turned on?
Do have auto-updates. Don't have Wordfence; just heard about it yesterday.
But I do have another service that provides vulnerability scanning… and after checking my emails, it has been trying to notify me of the issue for about a month now… I suppose I need to actually check those.
Sucuri?
No, CleanTalk.
“…if you follow the recommended practice of keeping proper backups (the 3-2-1 rule), you can recover from these issues without breaking a sweat.”
Amen
Or just don't have external-facing services. Easiest way to not get hacked.
But then, how are my side businesses supposed to make money?
Storefronts and other externally exposed services generally don't work too well… when they aren't exposed.
That's true, I suppose. Maybe I'm just not comfortable with it.
It's all about having proper redundancy and risk aversion.
And, of course, working backups and a contingency plan for when something bad happens.
Wouldn't it also be better to just host the site on something like AWS? The downside is you have to pay the hosting fee, but if I were running a site, I'd rather have it outside of my network. I'm still very inexperienced, but that's my thinking.
Depending on the use case, absolutely.
For a small site, absolutely.
I have a few dozen externally exposed projects that I self-host, though, and a few of them are rather resource-intensive, which would add up pretty quickly in AWS.
In my case, keeping everything in an isolated DMZ vastly reduces the risk, and completely isolates internet-exposed applications from everything else.
Spotted the exact same thing on my wife's WordPress site at the end of October. Don't forget to harden your reverse proxy to help render these kinds of attacks moot!