Just your normal everyday casual software dev. Nothing to see here.

  • 0 Posts
  • 75 Comments
Joined 11 months ago
Cake day: August 15th, 2023


  • I’m currently running Proxmox on a 32 GB server with a Ryzen 5600G, and it’s going fine. The containers don’t actually use all that much RAM, and I’m personally seeing better benchmarks than I did when I just ran it as a bare-bones Ubuntu server. My biggest issue has actually been a larger IO strain than anything, because it’s a lot more IO heavy now that everything’s containerized. I think I could easily run it with less RAM; I would just have to turn off some of the more RAM-intensive items.

    As for whether I regret changing: no way, Jose. I absolutely love having everything containerized, because I can set things up how I want, when I want, and if I end up screwing something up configuration-wise or decide I no longer need a service, I can just nuke the container without having to remember what I installed for that program so I can remove it, or whether other programs need that dependency to work. Plus, while I haven’t tinkered as much in this area, you can hard-set what resources you want allotted to each instance, so if you have a program like, say, a Pi-hole that you know never needs more than a certain amount of resources to work properly, you can restrict what it can use, and if something does go wrong with it, it doesn’t eat all of your system resources (there’s a rough sketch of what that looks like at the end of this comment).

    The biggest con is probably having to figure out the networking side, because every container is going to have a different IP address. I found that a web dashboard is my friend: I can have Heimdall tell me where all my services are, and I just have to click the icon to get to the right IP address. It took a lot of work to figure out how it all operates and to get it working, but the benefits I’ve gotten from it are amazing. Just make sure you have a spare disk to temporarily clone partitions to, because it’s extremely difficult to use the existing disks in the machine. I’ve been slowly going one disk at a time: copying it over to an external drive, nuking it, reinitializing the disk as part of the Proxmox LVM, and then copying the data back over onto the appropriate image file.
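    For anyone curious about the resource-capping bit, here’s a rough sketch of what it can look like done through the Proxmox API with the proxmoxer Python library (you can do the same thing from the web UI or with pct on the host). The host, node name, VMID, and credentials below are just placeholders, not my real setup:

    ```python
    # Sketch only: cap an LXC container's CPU and RAM via the Proxmox API.
    # Host, node name, VMID, and credentials below are placeholders.
    from proxmoxer import ProxmoxAPI

    proxmox = ProxmoxAPI(
        "192.168.1.10",        # hypothetical Proxmox host
        user="root@pam",
        password="changeme",   # use an API token in anything real
        verify_ssl=False,
    )

    # Limit a hypothetical Pi-hole container (VMID 101) to 1 core and 512 MiB RAM
    proxmox.nodes("pve").lxc(101).config.put(cores=1, memory=512, swap=256)

    # Read the config back to confirm the new limits
    print(proxmox.nodes("pve").lxc(101).config.get())
    ```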


  • I personally will never use Nextcloud. It’s nice on the interface side, but while I was researching the product I came across concerns with its security. Those concerns have since been fixed, but the way they resolved the issue made me lose all respect for them as a secure cloud solution.

    Basically, when they first introduced folder encryption, there was a bug in the encryption code: the only thing that would ever be encrypted was the parent directory, and any subfolder in that directory would not be encrypted. The issue with that is that unless you had server-side access to view the files, you had no way of knowing that your files weren’t actually being encrypted.

    All this is fine; it’s a beta feature, right? Except when I read the GitHub issue on the report, they gaslit the person who reported it, saying that despite the feature being advertised on their stable branch, it was actually in beta status and therefore should not be used in a production environment. On top of that, the feature was never removed from their feature list, and it took another 3 months before anyone even started working on the issue report.

    This might not seem like a big deal to a lot of people, but as someone who is paranoid about security features, the project’s inaction over something as critical as that, while trying to advertise themselves as a business-grade solution, made me flee hardcore.

    That being said, I fully agree with you that out of the different cloud platforms I’ve used, Nextcloud does seem to be the most refined, and it even has the ability to run an office suite, which is really nice. I just can’t trust them, so I ended up using Syncthing and took the hit on the feature set.




  • I mean, it gets the point across. Regardless of whether it’s a service dog or a pet (which shouldn’t be in the TSA security line in the first place, since airports generally have a designated drop-off or require kennels), it doesn’t matter how dense you are, it’s clear: do not pet the dogs. If the reader wants to take it as meaning no petting dogs on the entire trip, the airport doesn’t care, as long as you’re not petting the dogs at the airport and therefore not getting in the way of procedure or causing a potential safety issue for the airport.



  • Seconding this. I took the plunge a month or two back myself, using Proxmox for my home lab. Fair warning: if you have never operated anything virtualized outside of VirtualBox or Docker, like I had, you are in for an ice plunge, so if you do go this route, prepare for a shock. It is so nice once everything is up and running properly, though, and it’s really nice being able to delegate which resources go where and how much, but getting used to the entire system is a very big jump. It’s definitely going to be a “back up the existing drive, migrate data over to a new drive” style of migration; it is not a fun project to attempt without a spare drive to use as a transfer drive.


  • Honestly, I don’t think there is a better way. Like others have said, you can use a trash program, or you can chmod the git directory before deleting, but I would recommend against the comments saying to alias the command; that can lead to even bigger problems if you typo the alias or mess up in the script. rm -rf can’t break anything unless you give it the wrong directory, which would be the same risk with aliases anyway.

    My recommendation out of them all would be a trash program that moves things to the trash, so if you do screw up the location, you have a way to restore it. Otherwise you could make a script that lists the affected files using ls and then asks a yes/no prompt using read before doing the rm (roughly the idea sketched below), but that’s something you definitely want to test in a sandbox or user-restricted environment if you’re not used to scripting, in case something breaks.
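    Something like this rough sketch is what I have in mind, written in Python instead of a shell ls/read/rm script just to keep it readable (the confirm_delete.py name is made up for the example):

    ```python
    #!/usr/bin/env python3
    # Sketch of the "list, confirm, then delete" idea: show everything that
    # would be removed, ask for a yes/no, and only then actually delete.
    import shutil
    import sys
    from pathlib import Path

    def confirm_delete(target: str) -> None:
        path = Path(target)
        if not path.exists():
            sys.exit(f"{target}: no such file or directory")

        # List everything that would be removed before touching anything
        items = sorted(path.rglob("*")) if path.is_dir() else [path]
        print(f"The following would be deleted under {path}:")
        for item in items:
            print(f"  {item}")

        answer = input("Delete all of the above? [y/N] ").strip().lower()
        if answer != "y":
            print("Aborted, nothing was deleted.")
            return

        if path.is_dir():
            shutil.rmtree(path)
        else:
            path.unlink()
        print(f"Deleted {path}")

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: confirm_delete.py <path>")
        confirm_delete(sys.argv[1])
    ```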




  • I consider bloat to be unneeded files or programs: duplicated libraries, unused apps, stagnant files that aren’t personal data, anything similar to that. It’s hard to put a metric on it; I just browse through my files every once in a while and delete the unused stuff, but with the push for container-based stuff, I foresee that method becoming increasingly harder as time goes on.


  • Judging by the lack of a description on this post, and the video’s description, it’s a rage-bait video based on the potential intentions behind a website that logs Discord activity and sells it for profit. The video description gave off a big “I’m trying to egg you into watching this” vibe, though, so I didn’t go further. The site in question has been shut down a few times now; it just renames itself every time and, boom, it’s operational again.

    My opinion is that’s a risk you’ve got to take posting stuff online, and it likely won’t be going anywhere; nothing’s secure unless you trust everyone involved. I wish for privacy, but I don’t expect it unless I can meet that criterion.


  • TPM is a good way. Mine is set up with / encrypted via LUKS and unlocked by the TPM, so it can boot with no issues; the actually sensitive data like /home/my user is encrypted using my password, and the backup system + file server use standard LUKS with a password.

    This setup allows for unassisted boot-up of the main services (such as SSH), which lets you sign in to manually unlock the more sensitive drives.





  • You are fine! I just wasn’t sure. Also, I never considered that it might be an employer restriction for some of those. I guess that would explain why some people post info dumps in that manner, even if it is hard to read.

    Additionally, that emoji makes a lot more sense now; I figured it was something like that. I just hadn’t seen it used that way before.



  • I’m glad I’m not the only one who thought this

    People need to realize that social media platforms are not always the best medium. Post a summary with a link to the rest of your information, and I promise you anyone who’s going to interact with or show interest in your article is going to click that link.

    This lets people read your article without having to duct-tape all of your posts back together to work out where one ends and the next begins, and it allows for easier sharing.