• 0 Posts
  • 17 Comments
Joined 6 months ago
Cake day: January 5th, 2024

  • The worst gotchas and limitations I have seen while building my own self-hosted stack with IPv6 in mind have been individual support in bespoke projects more so than in system infrastructure. As soon as you get into containerized environments, things can get difficult. Podman has been a pain point with networking and IPv6, though newer versions have become more manageable. Most of the problems I have seen come from various OCI containers and their subpar implementations of IPv6 support.
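
    For what it's worth, newer Podman (4.x with the netavark backend) has made this less painful. Below is a minimal sketch of creating an IPv6-enabled network; the subnet values are placeholders you would swap for your own prefix.

    ```
    # Create a dual-stack Podman network (subnet values are placeholders).
    podman network create --ipv6 \
      --subnet 10.89.0.0/24 \
      --subnet fd00:dead:beef::/64 \
      ipv6net

    # Attach a throwaway container and confirm it received an inet6 address.
    podman run --rm --network ipv6net docker.io/library/alpine ip addr
    ```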

    You’d think that with how long IPv6 has been around, we’d see better adoption from container maintainers, but I suppose the existence of IPv6 in a world originally built on IPv4 is a similar adoption problem to Linux versus Windows as a workstation. Ultimately, if you self-roll everything in your network stack down to the servers, IPv6 is easy to integrate. The more of the setup one offloads to preconfigured and/or specialized tools, the more I have seen IPv6 support fall by the wayside, at least in terms of software.

    That’s not to mention hardware support and the networking capabilities provided by an ISP. My current residential ISP only provides IPv4 behind CGNAT to the consumer. To even test my services over IPv6, I need to run a VPN connection tunneling IPv6 traffic to an endpoint beyond my ISP.
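
    As a rough sketch of what that workaround looks like (assuming a VPS or tunnel endpoint that actually has routable IPv6; the keys, addresses, and endpoint below are placeholders):

    ```
    # Contents of /etc/wireguard/wg0.conf (placeholders throughout):
    #   [Interface]
    #   PrivateKey = <client-private-key>
    #   Address = 10.0.0.2/32, 2001:db8:1::2/64
    #
    #   [Peer]
    #   PublicKey = <endpoint-public-key>
    #   Endpoint = vps.example.net:51820
    #   AllowedIPs = ::/0
    #   PersistentKeepalive = 25

    wg-quick up wg0
    ping -6 -c 3 example.com   # sanity check that v6 traffic leaves via the tunnel
    ```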


  • I have been using BunkerWeb for some of my self-hosted sites since it was bunkerized-nginx. It is indeed powerful and flexible, allowing multi-site proxying and hosting while permitting semi-flexible per-site security tweaks (though some security options are still forced to be global, which is a limitation).

    I use it on Podman myself, and while it is generally great for having the OWASP CRS, general traffic-filtering targets, and more built on top of nginx in a Docker container, the way BunkerWeb needs to be run hasn’t really remained stable between versions. Across several version upgrades there have been severe breaking changes that require re-reading the setup documentation to get the new version functional.
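
    For reference, a single-site reverse proxy under Podman looked roughly like the sketch below when I last set it up. Treat the image tag and the SERVER_NAME/USE_REVERSE_PROXY/REVERSE_PROXY_* variable names as assumptions from the versions I ran; given the breaking changes mentioned above, re-check the documentation for whatever release you deploy.

    ```
    # Rough sketch only -- variable names and ports reflect the versions I ran.
    podman run -d --name bunkerweb \
      -p 80:8080 -p 443:8443 \
      -e SERVER_NAME=www.example.com \
      -e USE_REVERSE_PROXY=yes \
      -e REVERSE_PROXY_HOST=http://10.0.2.2:3000 \
      -e REVERSE_PROXY_URL=/ \
      docker.io/bunkerity/bunkerweb:<version>
    ```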


  • I just did some testing in the past hour or so: a portable install from scratch using the Fedora Workstation 39 ISO. I’m not exactly sure what your hardware detection issue would have been, but I can say that Anaconda could detect both a USB flash drive and an external hard drive I had plugged in.

    Going with the USB flash drive, I skipped the automatic partitioning and used blivet to do my work. I did format the drive beforehand, as I have always had issues with that drive properly accepting various partitioning commands (the installer being no exception, as tested). I reserved a partition for a shared storage filesystem but didn’t actually give it a filesystem here.

    In blivet, here is a sample of the kind of partition schema I was talking about (something that might be helpful to OP or anyone else that wants to try this setup):

    Blivet GUI Partitioning screen showcasing empty, EFI, boot, and root partitions

    Blivet GUI Partitioning screen showcasing boot btrfs volume with singular subvolume

    Blivet GUI Partitioning screen showcasing primary btrfs volume with root, snapshots, home, var, and var/log subvolumes
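
    For anyone who can’t view the screenshots, a roughly equivalent layout from the command line would look like the sketch below (the device name, sizes, and type codes are examples; blivet handles all of this in the GUI):

    ```
    # Example partition layout (/dev/sdX and sizes are placeholders).
    sgdisk --zap-all /dev/sdX
    sgdisk -n1:0:+64G  -t1:0700 /dev/sdX   # reserved shared-storage partition (formatted later)
    sgdisk -n2:0:+600M -t2:ef00 /dev/sdX   # EFI system partition
    sgdisk -n3:0:+1G   -t3:8300 /dev/sdX   # /boot btrfs volume (single subvolume)
    sgdisk -n4:0:0     -t4:8300 /dev/sdX   # primary btrfs volume

    mkfs.fat -F32 /dev/sdX2
    mkfs.btrfs /dev/sdX3
    mkfs.btrfs /dev/sdX4

    # Subvolumes on the primary volume, mirroring the screenshot above.
    mount /dev/sdX4 /mnt
    btrfs subvolume create /mnt/root
    btrfs subvolume create /mnt/snapshots
    btrfs subvolume create /mnt/home
    btrfs subvolume create /mnt/var
    btrfs subvolume create /mnt/var/log
    umount /mnt
    ```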

    I was able to then complete the install as normal and boot into the finished USB drive. I noted a small non-fatal complaint from GRUB on boot, but I imagine this is fixed by updating the system. All systemd units start without failure and I am able to get the system working with minimal issue. The only real issue I could note is that the installation is very sluggish (due to it being on a flash drive rather than an external SSD or some other more suitable media). After booting, I opened Disks and added the missing exFAT (or NTFS) filesystem I had reserved a partition for. The reason I didn’t do this in blivet during install is that the option to create an exFAT filesystem doesn’t actually exist in the tool.

    Fedora 39 with GNOME Disks window and terminal reading out the contents of /etc/os-release
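
    From a terminal, that post-install step amounts to something like the following (the partition node is a placeholder for whichever one you reserved):

    ```
    # Format the reserved partition as exFAT after the install,
    # since blivet can't create exFAT during it.
    sudo dnf install exfatprogs
    sudo mkfs.exfat /dev/sdX1
    ```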

    Hopefully, this comment is helpful toward getting such a setup working.

    EDIT:

    Something I did notice with GRUB on Fedora Workstation 39 is that by default, the menu will not show unless you press Escape on boot. There is a useful AskUbuntu post that explains in detail how to access the GRUB menu and how to change it to behave better for a multiboot system.
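
    On Fedora specifically, the auto-hide behavior is controlled through a grubenv flag, so the short version (assuming a stock Fedora 39 install) looks like this:

    ```
    # Make the GRUB menu show on every boot instead of auto-hiding.
    sudo grub2-editenv - unset menu_auto_hide
    sudo grub2-editenv - list    # verify the flag is gone

    # Optionally raise GRUB_TIMEOUT in /etc/default/grub, then regenerate:
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    ```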


  • Ah, that would put a bit of complication into things. If you want to actually accomplish this, though, you should largely start with the same steps as a standard system install, using a second USB flash drive to write the distro onto the external SSD while leaving enough space to build the rest of the partitions you need. If you intend to make a Windows-shared partition (exFAT, FAT32, or NTFS), it is probably best to put that partition either first or just after the EFI partition so that Windows systems won’t have a hard time finding it. Of those, exFAT or NTFS would be the better choice for this type of partition.

    I would still generally recommend keeping the live distros on a separate bootable drive, but you can size and reserve dummy partitions, after the rest of your normal dual-boot installs and shared data partitions, for live installers to overwrite. There will likely be some experimentation in getting the OS bootloader (such as the GRUB provided by Fedora in this case) to pick them up and add them as boot entries. You should (depending on the live image) be able to dd-write the images to those trailing reserved partitions as long as each partition is equal to or larger than the ISO image (it’s best to give the partition at least a few blocks of extra space when writing ISOs directly).
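
    The dd step itself is simple enough; the device and partition below are placeholders, so triple-check the target before writing, since dd will not ask:

    ```
    # Confirm which reserved partition you are about to overwrite.
    lsblk -o NAME,SIZE,PARTLABEL /dev/sdX

    # Dump the live ISO onto the reserved trailing partition.
    sudo dd if=some-live-distro.iso of=/dev/sdX7 bs=4M status=progress conv=fsync
    ```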

    Edit: In a proper Fedora install, you should almost never need to disable SELinux or set it to permissive unless you know why you don’t want it.


  • For the longest time reading this post, I didn’t catch that you were setting up a simple dual boot on an internal SSD; with the use of tools like Ventoy, I thought you were making a multiboot portable install.

    You are obscenely overcomplicating this. Your current approach is almost completely wrong for getting a functional multiboot system that passes Secure Boot and everything else.

    Quite literally, bootstrapping from Windows, you can use Rufus or Ventoy to dump the ISOs onto a USB stick. Then boot the USB’s EFI entry from the BIOS. From there, simply install Fedora (no VM necessary). You’ll get Fedora installed in a GPT/EFI configuration (if you formatted your drive for the install). If doing manual partitioning to leave space or make other changes, format the drive to GPT. If multibooting, you may want to expand your EFI partition beyond 512 MiB to fit multiple distros.
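
    A quick way to confirm the disk actually carries a GPT label before and after the install (example device name):

    ```
    lsblk -o NAME,SIZE,PTTYPE /dev/nvme0n1
    sudo parted /dev/nvme0n1 print | grep 'Partition Table'
    ```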

    Any other Linux OSes you want alongside can then be installed into the free space. Another comment mentioned not installing a bootloader for the secondary OS(es), which is generally a good idea.

    As for Tails, it is not meant to be installed on an SSD; it is designed to be run from a flash drive.

    Overall, a majority of your issues stem from the following:

    • trying to use live distro images as an actual OS install
    • using Ventoy as your bootloader
    • using legacy MBR partition tables instead of GPT without an express need for them
    • using VirtualBox in general to provision bare-metal hardware (changes need to be made in your VM software of choice to get EFI booting to work)

    I’d argue that your conclusion, that people don’t switch to Linux because you found it too hard, stems almost entirely not from any issue with Linux, but from the corner you wedged yourself into on a modern x86 PC with the methods you used to accomplish your goal.




  • Before going any further to adjust your Z offset and other factors for better bed adhesion, you should probably adjust your bed to actually be level, as well as ensure (seeing that you only have one Z lead screw?) that the X-axis frame isn’t sagging on the side without the Z lead screw. Once you’ve got those factors sorted out, you should check your probe repeatability and then set your Z offset accordingly.
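
    Assuming Klipper here (since a Klipper config was posted), the relevant checks are run from the console in Mainsail/Fluidd/OctoPrint; SCREWS_TILT_CALCULATE additionally needs a [screws_tilt_adjust] section in your config:

    ```
    SCREWS_TILT_CALCULATE
    # level the bed screws per the reported adjustments, then:
    PROBE_ACCURACY
    # check the reported range/standard deviation for repeatability, then:
    PROBE_CALIBRATE
    # paper-test the nozzle height and finish with:
    SAVE_CONFIG
    ```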

    Bed adhesion even with proper, clean first layers can be a pain depending on the bed surface, material being printed, how clean the bed is of oils and other contaminants, how hot the material is being extruded, and how hot the bed is among other factors. While using a bed mesh will greatly help to account for off-zero unevenness in your bed surface, you really shouldn’t use it to compensate for uneven bed leveling (especially when it looks like you are nearing more than 0.5 mm in unevenness).

    To diagnose other print issues, it will be helpful to see a picture showcasing the problems described in a failed print.


    As a side note, it is somewhat difficult to read your Klipper config file without using view-source, as it is being markdown-formatted. You can avoid this by putting three backticks on the lines above and below it so that it is wrapped in a code block and isn’t formatted.

    ```
    Some code
      eeeee
      fffff
    ```

    Turns into

    Some code
      eeeee
      fffff
    

  • In general, Microsoft doesn’t support many filesystem formats at all. In the same way you shouldn’t attempt to cross-run a Steam library from Windows on Linux, you really shouldn’t do so from Linux to Windows. It’s partly due to how permissions, execution flags, filesystem case sensitivity, and file metadata are interpreted by Windows applications versus Unix-like applications. There will be issues going either way when using foreign filesystems for complex tasks.

    While the files should have the same contents where they are actually the same (i.e., a Proton game will be identical to a Windows game since it comes from the same Steam depot), there is a good chance the interpretation isn’t 1:1 on either side. Furthermore, Steam libraries include additional data beyond just the game files, which is likely why they are invalidated. A significant portion of visible cross-OS portability issues comes from applications like Steam using OS-specific file structures. More than likely, Steam intentionally makes the library metadata not fully compatible between Linux and Windows Steam and forces validation before launch, because there is a good chance the games aren’t even compatible builds or drag along additional compatibility content (such as Proton WINE prefixes that should be completely ignored when launching from actual Windows, extra libraries, or modified executables with platform-specific patches).

    If you seriously want to run a cross-shared Steam library between Linux and Windows, you should really rely on Steam Cloud saves. If you want to “deduplicate” your games, your best bet, provided you can open the foreign filesystem and have a compatible copy of the game, is to simply clone the game files onto the current filesystem and remove them from the other one rather than attempt to force a multi-OS, single-partition shared library. You are less likely to destroy your Steam library if you treat the actual libraries as separate and move the games as if they were downloaded externally. Barring being able to do that, just don’t cross-share games; simply reboot into the OS that has the game you want to play instead.
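
    If you do go the clone-and-rediscover route, it is roughly this (the paths and the game folder name are placeholders for the usual defaults):

    ```
    # Copy the game's files from the mounted Windows library into the native one.
    rsync -a --info=progress2 \
      "/mnt/windows/SteamLibrary/steamapps/common/Some Game/" \
      "$HOME/.local/share/Steam/steamapps/common/Some Game/"

    # Then install the game to that library in Steam; it should detect the existing
    # files and only download what differs instead of the whole depot.
    ```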



  • One can comfortably use NTFS to read and write files on modern Linux distributions, but NTFS in general is not very suitable for running applications from, or for long-term use shared across a dual-boot. Windows can and will often lock NTFS partitions whenever it decides to hibernate (or sometimes suspend) rather than shut down. NTFS, while not the greatest filesystem in general, also performs worse on Linux than on Windows. You can keep data stores on a Linux system, though you won’t be able to make use of many of the advanced features some Linux/BSD-oriented filesystems offer. You can keep your drives as they are for now, but if you intend to make a full switch, you should consider migrating the drives’ data over to more Linux-oriented filesystems (whether Btrfs, XFS, or ext4 is your choice, depending on the features you want). In short, NTFS works but lacks a lot of the features and performance that a more suitable filesystem would offer.
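
    If you do keep NTFS data drives around in the meantime, two quick checks help (the Windows command is run from an elevated command prompt there, not on Linux):

    ```
    # See which driver is handling the mounts (in-kernel ntfs3 vs. FUSE ntfs-3g,
    # which shows up as fuseblk):
    findmnt -t ntfs,ntfs3,fuseblk

    # On the Windows side, disable hibernation/fast startup so partitions aren't
    # left dirty and locked read-only for Linux:
    #   powercfg /h off
    ```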



  • Yes, just make sure that the boot setup of the distro install is compatible with what you intend to install it onto (e.g., if your server is going to boot the OS via EFI, install your Ubuntu instance onto the SSD as GPT/EFI). Depending on which wireless modules you are using, where you are sourcing them, and how you are installing them, you might need to ensure Secure Boot is disabled in the BIOS of your server. This will be the case if the kernel module package you are installing doesn’t sign the wireless adapter driver you intend to use. Otherwise, most drivers you could possibly need should be baked into the kernel and you should be good to go.
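
    A quick way to see whether Secure Boot is currently enforcing before fighting with unsigned out-of-tree modules:

    ```
    mokutil --sb-state
    ```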

    (One further side note coming from someone who has not used Ubuntu in a long time (since 16.04’s release): it would be good to check in the /etc/fstab file that the filesystem references use either UUID or PARTUUID. Depending on the drive layout of the server you are putting the drive into, traditional device references such as sda or nvme0n1 can change based on which slots the drives are seated in. Using UUID or PARTUUID in the fstab references avoids any potential complications from this scenario, where fstab might otherwise mount the wrong drive’s partitions. I believe Ubuntu does this by default nowadays, but it can’t hurt to check.)
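
    Checking that is quick (example device name in the last command):

    ```
    cat /etc/fstab
    lsblk -o NAME,FSTYPE,UUID,PARTUUID
    sudo blkid /dev/sda1   # prints the UUIDs you can paste into fstab
    ```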