• 6 Posts
  • 35 Comments
Joined 1 year ago
Cake day: February 1st, 2023


  • AI has a lot of great uses, and a lot of stupid smoke-and-mirrors uses. For example, text-to-speech and live captioning or transcription are useful.

    “Hypothetical AI desktop”, “Siri”, “Copilot+” and other assistants are smoke and mirrors, mainly because they don’t work. But even if they did, they would be unreliable (because AI is unreliable) and would have to be limited so they couldn’t cause issues. And so they would not be useful.

    Plus, on Linux they would be especially useless, because there are a million ways to do different things, and a million different setups. What if you asked the AI to “change the screen resolution” and it started editing some GNOME files while you are on KDE, or started mangling your xorg.conf because it’s heavily customized?

    Plus, all the OpenAI stuff you are seeing these days doesn’t really work because it’s clever; it works because it’s huge. ChatGPT needs to be trained for weeks on specialized hardware. Who’s gonna pay for all that in the open source community?


  • Distributing software is not instantaneous. Assuming that Mozilla has already sent the update to Flathub, it will take some time before it’s validated and available for download.

    If you had used native packages instead of Flatpak, you would be in the same situation, as Fedora’s update system keeps updates in testing until enough people confirm they’re fine.

    If you wanted to get the update as soon as possible, you would have to download the prebuilt binary from Mozilla, but then you would have to handle every update manually.

    Just be patient for a few days.


  • IMHO I would avoid the ublue distros and just go for the official Fedora spins. The maintainers have good intentions, but they don’t have the means to maintain that many distros “properly”. I often end up enabling COPR packages meant for Bazzite in my Fedora install, just to find out the program doesn’t work.

    That being said, as the other comments told you, you can still install native apps on immutable distros, it’s just a bit more work. I don’t expect distrobox or toolbox to be much faster than Flatpak, as they are all just containers with a nice CLI, except Flatpak is easier to update. But trying costs nothing.



  • Well, I’m biased, because KaTeX is load-bearing for my use case. But I would argue that it:

    • Is more powerful
    • Is an introduction to LaTeX (which is an industry standard)
    • Is ubiquitous

    You could consider using MathJax instead of KaTeX, which should render both LaTeX math and AsciiMath (and should be better in general). If you had unlimited resources (which I guess you don’t), it would be cool to make the math language a setting.
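
    To illustrate, here’s a minimal sketch of a MathJax 3 setup that accepts both input languages; the exact delimiters are assumptions, so check the MathJax docs for your version:

    ```typescript
    // Configure MathJax 3 to accept both TeX and AsciiMath input.
    // This object must exist before the MathJax script itself is loaded.
    (window as any).MathJax = {
      loader: { load: ['input/tex', 'input/asciimath', 'output/chtml'] },
      tex: { inlineMath: [['$', '$'], ['\\(', '\\)']] }, // TeX between $...$
      asciimath: { delimiters: [['`', '`']] },           // AsciiMath between `...`
    };
    ```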

    For git, other than the add and commit buttons, it would be useful to have a “git gutter” which shows changes since the last commit; that’s the only git integration feature you can’t get with external tools.
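
    As a sketch of how a gutter could be computed without linking a git library, you can parse zero-context hunks out of plain `git diff` output (the file name and helper are just examples):

    ```typescript
    // Compute per-line gutter marks from `git diff -U0 HEAD -- <file>`.
    // Assumes git is on PATH and the file is tracked.
    import { execFileSync } from 'node:child_process';

    type Mark = { line: number; kind: 'added' | 'removed' | 'changed' };

    function gitGutter(file: string): Mark[] {
      const out = execFileSync('git', ['diff', '-U0', 'HEAD', '--', file], { encoding: 'utf8' });
      const marks: Mark[] = [];
      // Hunk headers look like: @@ -oldStart,oldCount +newStart,newCount @@
      for (const m of out.matchAll(/^@@ -\d+(?:,(\d+))? \+(\d+)(?:,(\d+))? @@/gm)) {
        const oldCount = m[1] === undefined ? 1 : parseInt(m[1], 10);
        const newStart = parseInt(m[2]!, 10);
        const newCount = m[3] === undefined ? 1 : parseInt(m[3], 10);
        if (oldCount === 0) {
          for (let i = 0; i < newCount; i++) marks.push({ line: newStart + i, kind: 'added' });
        } else if (newCount === 0) {
          marks.push({ line: newStart, kind: 'removed' }); // deletion sits between lines
        } else {
          for (let i = 0; i < newCount; i++) marks.push({ line: newStart + i, kind: 'changed' });
        }
      }
      return marks;
    }

    console.log(gitGutter('README.md'));
    ```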

    For spell checking, even just pulling in some dictionary, like the ones in VS Code’s cspell extension, and doing a basic dictionary lookup would be much better than nothing.
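
    The naive version is tiny. A sketch, assuming a plain one-word-per-line list (cspell’s dictionary packages ship such lists you could bundle instead):

    ```typescript
    // Flag words that are not in a word list loaded from disk.
    import { readFileSync } from 'node:fs';

    const dictionary = new Set(
      readFileSync('words.txt', 'utf8')   // hypothetical bundled word list
        .split('\n')
        .map((w) => w.trim().toLowerCase())
        .filter(Boolean),
    );

    function misspelled(text: string): string[] {
      const words = text.match(/[A-Za-z']+/g) ?? [];
      return words.filter((w) => !dictionary.has(w.toLowerCase()));
    }

    console.log(misspelled('Teh quick brown fox')); // -> [ 'Teh' ]
    ```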




  • In addition to what the others have said, Windows has already had its big paradigm change (“similar” to the change from X11 to Wayland that is happening now): it came around 2007, with Windows Vista. They also didn’t get it quite right on the first try, but because Microsoft can do whatever they want, while on Linux you must convince the community that something is better, it was easier for them to just change everything under everyone’s nose.


  • Hmmm. That’s suspicious; there are a number of things in the way of video acceleration with that setup.

    First of all, on Fedora (ublue is a derivative of Fedora) you need to install openh264 from dnf, not from the Firefox extension manager, and then you still need to change some settings in about:config. Second, you are using a flatpak, and I’m not sure whether openh264 needs to be installed “inside the flatpak”. And last, it might just be the Nvidia driver.

    The first two would also affect AMD.






  • The USB protocol was simple by design, so it could be implemented in small dumb devices like pen drives. More specifically, it used two pairs of wires: one pair for power and the other for data (four wires in total). Having a single half-duplex data line means you need some way of arbitrating who can send data at any time. The easiest way to do that is to have a single machine decide who gets to send data (the master), and the easiest way to pick the master is not to pick at all and have the computer always be the master. This means you couldn’t connect two computers together, because they would both try to be the master.
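
    As a toy model of that arbitration (the names are illustrative, not the real protocol):

    ```typescript
    // Toy model of USB 2.0 bus arbitration: only the host ever initiates
    // a transaction; devices just answer when polled.
    interface Device {
      address: number;
      respond(token: 'IN'): string | null; // data to send when polled, if any
    }

    class Host {
      private devices: Device[] = [];
      attach(d: Device) { this.devices.push(d); }

      // The host owns the bus: it polls each device in turn, so two
      // transmitters never collide on the single shared data pair.
      pollOnce(): void {
        for (const d of this.devices) {
          const data = d.respond('IN');
          if (data !== null) console.log(`device ${d.address} -> host: ${data}`);
        }
      }
    }

    const host = new Host();
    host.attach({ address: 1, respond: () => 'key press' });
    host.attach({ address: 2, respond: () => null });
    host.pollOnce();
    // A second Host on the same wires would also try to issue tokens and
    // collide with the first: that is why computer-to-computer doesn't work.
    ```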

    I used the past tense because you may have noticed that micro USB connectors have 5 pins, not 4; that’s because phones are computers, and they use the 5th (ID) pin to decide how to behave. If the pin is left floating (an ordinary male micro to male A cable leaves it unconnected), the phone acts as a slave; if it’s grounded (the OTG adapter grounds it), it acts as master. And if the devices are connected with a wire on that pin (on some special micro-to-micro cables), they negotiate the connection.
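
    In code terms the decision is roughly this (a toy model, not the actual OTG spec logic):

    ```typescript
    // Toy model of the micro-USB ID-pin role decision.
    type IdPin = 'grounded' | 'floating';

    function role(id: IdPin): 'master' | 'slave' {
      return id === 'grounded' ? 'master' : 'slave';
    }

    console.log(role('grounded')); // master: an OTG adapter grounds the ID pin
    console.log(role('floating')); // slave: an ordinary charging cable leaves it floating
    ```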

    When they made USB 3.0, they realized that not having the 5th wire on the USB-A connector was stupid, so they added it (alongside some extra data lines); that’s why USB 3 connectors have an odd number of wires. So with USB 3 you can connect computers together, but you need a special cable that uses the negotiation wire. Also, I don’t know what software you need for it to work.

    USB-C is basically two USB 3.0 links in the same cable, so you can probably connect computers with that. But often the port on a device only wires up one of them, so it might not be faster. Originally they included the pins for two connections so you could flip the connector either way; later they realized they could use both sets to double the speed.



  • What is available is an X11 server, no more, no less; it cannot be used for anything other than X11. If they made X12, it would not work on Nvidia unless they wrote a new server, which they wouldn’t.

    You need to understand that the Xorg server everyone uses literally does not work on Nvidia, because it uses implicit sync, which the Linux graphics infrastructure requires. The only thing that works on Nvidia is, specifically, their own proprietary server.

    Nvidia does a lot of impressive stuff, but they have neglected the Linux scene for a long time, because it wasn’t convenient, and it shows.

    Edit: …what was available… because Nvidia is gradually implementing things the correct way, and Wayland is becoming more and more usable with every driver update. Because, surprise surprise, it does depend on the drivers. Also, both Intel and AMD work perfectly with Wayland.



  • False: Xorg isn’t written with support for Nvidia; when XWayland windows flicker on Nvidia, it’s an effect of Xorg not working on Nvidia.

    The Nvidia driver is a closed-source implementation of the Xorg server, written by Nvidia for Nvidia GPUs. Xorg was invented at a time when drivers were done like that.

    Now Xorg uses Glamor (except on Nvidia), which is a driver that implements the server on top of OpenGL, so you don’t need to implement the whole thing for every GPU. Except Glamor doesn’t work on Nvidia, because Nvidia doesn’t implement implicit sync, which Linux requires; and that is what you see in XWayland (which uses Glamor as well).

    Wayland doesn’t require writing a whole server, but it requires implementing GBM and implicit sync (as does everything on Linux, unless you are using Nvidia’s proprietary Xorg server). Nvidia refused GBM until a few years ago, and still refuses to implement implicit sync, which is why explicit sync will solve most issues.