Will you need your own account for the proprietary ones? Mozilla paying for these feels like it couldn’t be sustainable long term, which is worrying.
Windows-only PWAs 😮💨
GitLab’s AGPL, so I don’t think there’s anything stopping you from moving to a self-managed instance.
The monetization plan might be to sell prints of platformed artists’ work, without any need for pesky royalties.
Because until you spend many hours getting used to it, it’s annoying as hell. I’m a longtime bash user, but if I have to do anything in PowerShell, it sucks. Bash is even less friendly to novice/casual users due to tools like awk and sed being totally obtuse. When you’re unfamiliar with the workflow, not being able to see everything you’re able to do at a glance is pretty frustrating.
NFS is generally the way network storage appliances are accessed on Linux. If you know you’ll be accessing files on a machine over the long term, it’s generally the way to go since it’s a simple, robust, high-performance protocol that’s used by pros and amateurs alike. SSHFS is an abuse of the SSH protocol that lets you mount a directory on any computer you can get an SSH connection to. You can think of it like VSCode remote editing, but it’ll work with any editor or other program.
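For reference, mounting each looks something like this (hostnames and paths below are placeholders):

```shell
# NFS: mount an export from a NAS (needs nfs-utils / nfs-common installed)
sudo mount -t nfs nas.local:/export/media /mnt/media

# SSHFS: mount a remote directory over any working SSH connection
sshfs user@server.example:/home/user/projects ~/remote-projects

# Unmount when done (sshfs mounts are FUSE, so use fusermount3)
sudo umount /mnt/media
fusermount3 -u ~/remote-projects
```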
You should be able to set up NFS with write caching, etc., that will make its performance more similar to a local filesystem. Note that you may not want write caching specifically if you’re going to suddenly disconnect your laptop from the network without unmounting the share first. Your actual performance might not be the same, especially for large transfers, due to the throughput of your network and connection quality. In my general experience sshfs is kind of slow, especially when accessing many small files, and NFS is usually much faster.
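A rough sketch of what that tuning might look like as an `/etc/fstab` entry (server name and export path are made up; check `man 5 nfs` for what your client supports):

```shell
# async = buffer writes (skip this if you yank the cable without unmounting)
# rsize/wsize = larger transfer sizes for throughput on a decent LAN
# soft = fail I/O instead of hanging forever if the server disappears
nas.local:/export/media  /mnt/media  nfs  rw,async,rsize=1048576,wsize=1048576,soft  0  0
```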
If you’re on Linux I’d recommend using btrfs or bcachefs with snapshots. It’s basically like Time Machine on macOS. That way if you accidentally delete something you can still recover it.
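On btrfs the manual version is just a couple of commands (subvolume layout and paths here are examples; tools like snapper or btrbk can automate the schedule):

```shell
# take a read-only snapshot of the home subvolume
sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-2024-01-01

# recover a deleted file by copying it back out of the snapshot
cp /home/.snapshots/home-2024-01-01/user/important.txt /home/user/

# drop the snapshot once it's no longer needed
sudo btrfs subvolume delete /home/.snapshots/home-2024-01-01
```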
Isn’t a huge part of the point of copy left licences that an author can’t change the license without rewriting the code entirely?
A dedicated server is needed because something needs to keep a catalog of the smart devices available on your network and ideally be accessible to many people in one household. You could make a system that went phone -> device, but you would need to set up each device on each phone you wanted to use, which isn’t a great user experience. You could also run into issues where devices would need to handle multiple conflicting commands from different users coming in at once. Since smart devices are usually trying to use as little power as possible, that extra complexity would hurt you in that department.

The third reason is that having a separate server enables automated workflows that depend on an always-online server orchestrating multiple devices. For example, let’s say you have some automatic insulating blinds and a smart thermostat, and you want to raise and lower the blinds to maximize your energy efficiency. Since you have the dedicated server, that server can check the temperature set point of your thermostat, the current weather, and sunrise/sunset times. If it’s sunny out and your set point is higher than the outdoor temperature, the server can raise the blinds to let warm sunlight in, and vice versa. If only your phone could control the devices, a workflow like this couldn’t work while you were out of the house.
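The blinds rule above is simple enough to sketch as a script the server could run on a timer. This assumes a Home Assistant-style REST API; the host, token, and entity names are all hypothetical:

```shell
#!/bin/sh
# Sketch of the blinds automation against a Home Assistant-style API.
# Host, token, and entity IDs below are placeholders, not real config.
HA="http://homeserver.local:8123/api"
AUTH="Authorization: Bearer $HA_TOKEN"

setpoint=$(curl -s -H "$AUTH" "$HA/states/climate.living_room" | jq -r '.attributes.temperature')
outdoor=$(curl -s -H "$AUTH" "$HA/states/sensor.outdoor_temp" | jq -r '.state')
sun=$(curl -s -H "$AUTH" "$HA/states/sun.sun" | jq -r '.state')  # "above_horizon" or "below_horizon"

# Sunny out and we want it warmer inside: open the blinds to let sunlight in.
if [ "$sun" = "above_horizon" ] && [ "$(echo "$outdoor < $setpoint" | bc)" -eq 1 ]; then
  curl -s -X POST -H "$AUTH" "$HA/services/cover/open_cover" \
       -d '{"entity_id": "cover.living_room_blinds"}'
else
  curl -s -X POST -H "$AUTH" "$HA/services/cover/close_cover" \
       -d '{"entity_id": "cover.living_room_blinds"}'
fi
```

The point is just that the comparison needs state from three different sources at once, which is exactly what the always-on server is for.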
Some perceptual hash of the actual ads could work too. You could run into legal trouble sending the ads themselves or the hosts speaking.
I wonder if the Gnome team’s cavalier attitude towards agreed-upon standards is related to Red Hat’s influence 🤔 It’s totally possible the devs are just high on their own fumes due to being the default for so long.
Back in the Gnome 2 days this wasn’t as much the case. Plus KDE was kind of a mess back then, so the main choices were Gnome or XFCE, which had fewer features. When Gnome 3 came around the devs switched hard to a much more opinionated approach, leading to Gnome 2 forks like Cinnamon since KDE was still very unpolished. It’s a bit regrettable that all that effort was poured into Gnome forks instead of improving KDE, especially considering how great it is now.
My theory is that Google wants to move towards vector symbolic representations for pages in search rather than page caching. It would make index storage and retrieval orders of magnitude cheaper for them if they can design a scheme that works well.
That’s because government products use many unsafe languages shittier than C(++), like Ada, Fortran, and Cobol. It wouldn’t surprise me if most of the code running on products for government use wasn’t written in C or C++.
Nothing really, the JVM has a pretty troubled history that would really make me hesitate to call it “safe”. It was originally built before anyone gave much thought to security, and that fact plagues it to the present day.
Pretty crazy to recommend Java as a secure alternative.
fzf ships key-binding scripts for most shells that’ll replace the ctrl-r reverse history search with this behavior.
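Enabling it is a one-liner in your shell rc file (the exact script path varies by distro and fzf version):

```shell
# Recent fzf versions can emit their own bash setup, including the
# ctrl-r history widget:
eval "$(fzf --bash)"

# Older packages ship a script to source instead, e.g. on Arch:
# source /usr/share/fzf/key-bindings.bash
```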
No voice chatrooms so it’s not similar. The most similar open source solution I’ve seen is Mumble
For neovim check out mini.align
Finding new ways webshits fuck up the most basic development principles boggles my mind. It’s like they intentionally stay ignorant.