I was referring to work setups with the overengineering - if I had a cent for every time I had to argue with somebody at work to not make things more complex than we actually need I’d have retired a long time ago.
Unless you are gunning for a job in infrastructure you don't need to go into Kubernetes or Terraform or anything like that.
Even then, knowing when not to use k8s is often more valuable than having deep knowledge of it - a lot of the places where I see k8s or similar stuff used don't have the uptime requirements to warrant the complexity. If I have something that just needs to be up during working hours, with reliable monitoring plus the ability to re-deploy it via ansible within 10 minutes if it goes poof, putting a few additional layers that can blow up in between maybe isn't the best idea.
Everything is deployed via ansible - including name services. So I already have a description of my infra in ansible, and the rest is just a matter of writing scripts to pull it into a more readable form, and maybe adding a few comment labels that also get extracted for easily forgettable admin URLs.
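The extraction script can be pretty trivial - here's a minimal sketch of the idea, assuming a made-up `# admin-url: <label> <url>` comment convention in the playbooks (the actual label format in my setup looks different):

```python
#!/usr/bin/env python3
"""Scan Ansible playbooks for '# admin-url: <label> <url>' comments and
print a small overview. The comment convention is made up for illustration."""
import re
import sys
from pathlib import Path

PATTERN = re.compile(r"#\s*admin-url:\s*(\S+)\s+(\S+)")

def extract(root: str) -> list[tuple[str, str]]:
    """Collect (label, url) pairs from all *.yml files under root."""
    found = []
    for path in Path(root).rglob("*.yml"):
        for line in path.read_text().splitlines():
            m = PATTERN.search(line)
            if m:
                found.append((m.group(1), m.group(2)))
    return sorted(found)

if __name__ == "__main__" and len(sys.argv) > 1:
    # usage: extract-urls /path/to/ansible-repo
    for label, url in extract(sys.argv[1]):
        print(f"{label:20} {url}")
```

Since the labels live next to the tasks that deploy the service, the overview can never drift far from what is actually deployed.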
As a non-Windows-user I see that as a good thing. LLMs are not going away - but that kind of nonsense will at least make sure all PCs eventually have cheap and reasonably fast AI acceleration. Which is required for killing off centrally hosted LLMs (plus Nvidia's cash grabbing).
Currently my MK4 is printing pretty much 24/7 with IS profiles. I apply some lubricant roughly once per week - sometimes I notice because the printer starts making strange noises, but mostly I notice the rods have zero residue left between prints and just add a bit.
Intel is well known for requiring a new board for each new CPU generation, even if it is the same socket. AMD on the other hand is known to push stuff to its physical limits before they break compatibility.
A lot of the Zen based APUs don't support ECC. The next thing is whether it supports registered or unbuffered modules - everything up to Threadripper is unbuffered (though I think some of the Pro parts take registered), Epycs are registered.
That makes a huge difference in how much RAM you can add, and how much you pay for it.
Is it a ‘death by quantity’ thing?
Pretty much that - those companies rely on open projects to sort it for them, so they're pretty much scraping open databases and selling the good data they pull from there. That's why they were complaining about the kernel stuff - the info required was already there, you just needed to put the effort in, so they were asking for CVEs. Now they got their CVEs - but to profit from them they'd still need to put in the same effort as they would have without CVEs in place.
Short version: a bunch of shitty companies have as their business model selling open databases to companies that want to track security vulnerabilities - at pretty much zero effort to themselves. So they've been bugging the kernel folks to start issuing CVEs and doing impact analysis so they'd have more to sell - and the kernel folks just went "it is the kernel, everything is critical".
tl;dr: this is pretty much an elaborate “go fuck yourself” towards shady ‘security’ companies.
It starts with them only doing initial talks about buying their hardware for a project with you if a 7-figure payment is on the table, and it doesn't improve from there.
It has been a while since I touched ssmtp, so take what I’m saying with a grain of salt.
The problem with ssmtp and related tools when I was testing them was their behaviour in error conditions: due to the lack of any kind of spool they don't fail very gracefully, and if the sending software doesn't expect that and implement a spool itself (which it typically has no reason to, since pretty much the only situation where something like sendmail fails is one where it also couldn't write to a spool), this can very easily lead to lost mail.
I already had a working SMTP client capable of fishing mails out of a Maildir at that point, so I ended up just writing a simple sendmail program that throws whatever it receives into a Maildir, plus a cronjob to send it onward. This might be the most minimalistic setup for reliably sending out mail (and I'm using it on all my computers behind Emacs to do so) - but it is badly documented, so postfix might be a better choice, or if you don't care about reliability just go with ssmtp or similar. Or if you do want to dig into it, message me and I'll help make things more user friendly.
A problem with this bubble is that it is making AI synonymous with LLMs - and when it goes down it will burn other, more sensible forms of AI.
It surely is a bubble - though probably a bit different from many other bubbles.
I think OpenAI made the right call (for them) to commercialize when they did - as that was pretty much their only chance to do so. Things have moved fast over the last 1.5 years - what used to take a decade in tech has happened within months: OpenAI is the dinosaur company grandfathered in, while for about a year already it has been more sensible for anybody wanting to do something with LLMs to self-host one of the more open language models (or buy hosting capacity, but put up your own data), and possibly adjust or re-train it.
As a company owner I've been getting a ridiculous amount of spam for about a year now from all kinds of companies building products on top of the OpenAI stack, or trying to sell training or conferences. All those companies will be left with nothing once the slower users realize technology has moved on. It's like somebody trying to build all their product offerings on the VMware stack nowadays.
If you as a company want to offer something around AI right now, the safest option is probably hosting, or, if you want to be more hands-on, adjustment of open models. Both of those are still very risky, and many will go bust in the years to come - but they're not as suicidal as building on top of a closed dinosaur.
I see you’re not working in any industry having to deal with Qualcomm.
He probably needs a co-maintainer. We could select one of us and then try pressuring him into accepting that.
Because it does JBOD if the controller supports it. Pretty much none of the controllers you’ll find in consumer hardware support that.
JBOD relies on an optional SATA extension, which most of your controllers won’t have.
That leaves you with RAID in the controller - which is a bad idea, as you don't have much control over what is going on, and recovery if it fails will possibly be messy.
Mobile workstation. There are some Xeon notebooks which can also take more than 64GB - but they have bad availability, cost about the same as a high-end MacBook Pro, are significantly larger and heavier, run hot, and have shitty battery life for comparable performance.
The overall hardware situation has been ridiculous for many years now. I recently got a new Dell Latitude for a customer project - it runs hot, and performance and runtime suck. It runs out of battery even faster than my tiny GPD Pocket 3, while providing worse performance. I compared the specs - they indeed stuck a smaller battery into the business notebook than into the portable toy. We're now at a point where a Chinese niche hardware maker does better thermal management for x86 systems than any of the established manufacturers. Current AMD mobile CPUs are great - I'd love to have a good notebook with one, but nobody bothers building it.
AMD can compete on performance and performance per watt at mid to high load, but is shit at low-load efficiency. Intel has nothing at all. Apple scales nicely over the complete range.
If you want a relatively small notebook with lots of RAM you also don't have options (not really AMD's fault, but hardware manufacturers seem to produce mostly shit now). Framework is pretty much the only somewhat decent option, maxing out at 64GB; if you want more there's pretty much only Apple - which way overcharges for it.
You still had a 4GB memory limit per process, as well as a total memory limit of 64GB. Especially the former was a problem for Java apps before AMD introduced its 64-bit extensions, and a reason to use Sun servers for them.
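Both numbers fall straight out of the address widths: 32-bit virtual addresses per process versus PAE's 36-bit physical addressing. A quick sanity check:

```python
# Per-process limit: 32-bit virtual addresses.
per_process = 2**32

# PAE (Physical Address Extension) widens physical addresses to 36 bits.
pae_total = 2**36

print(per_process // 2**30)  # 4  -> GiB per process
print(pae_total // 2**30)    # 64 -> GiB physical total with PAE
```

So a PAE box could hold far more RAM than any single 32-bit process could ever map - which is exactly why big Java heaps needed a 64-bit platform.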