• 2 Posts
  • 19 Comments
Joined 9 months ago
Cake day: January 3rd, 2024

  • Qt is a cross-platform UI development framework whose goal is to look native to the platform it runs on. This video by a Linux maintainer from 2014 explains its benefits over GTK; it's a fun video and I don't think the issues have really changed.

    Most GTK advocates will argue Qt is developed by Trolltech and isn't GPL licensed, so it could go closed source! This argument ignores that open source projects use the open source releases of Qt, and if Trolltech did close the source then the last open source release would be maintained (much like GTK).

    Personally I would avoid Flutter on the grounds it's a Google-owned library and Google have the attention span of a toddler.

    Not helping that assessment, Google let go of the Fuchsia team (which Flutter was being developed for) and seems to have let go of a lot of Flutter developers.

    Personally I hate web frontends as local applications. They integrate poorly on the desktop, and often the JS engine has weird memory leaks.





  • Immutable distributions won’t solve the problem.

    You have 3 types of testing: unit (a discrete part of code), integration (how a piece of software works with others) and system testing (e.g. the software running in its environment). Modern software development has build chains to simplify testing at all 3 levels.

    Debian's change freeze effectively puts a known state of software through system testing. The downside is that it's effectively 'free play' testing, so it requires a big pool of users and a lot of time to be effective. This means software in Debian can be on releases up to 3 years old.

    Something like Fedora relies on the test packs built into the open source software; the issue here is that testing in the open source world is really variable in quality. So something like Fedora can pull down broken code that passes its tests and compiles.

    The immutable concept is about testing a core set of utilities so you can run containers of software on top. You haven't stopped the code in the containers being released with bugs or breaking changes, you've just given yourself a means to back out of it. It's a band-aid over the actual problem.

    The solution is to look at core parts of the software stack and improve their test infrastructure. Phoronix manages to run the latest kernels on various types of hardware for benchmarking, so why hasn't the Linux Foundation set up a computing hall to compile and run system-level testing for staged changes?

    Similarly websites are largely developed with all 3 levels of testing, using things like Jest/Mocha/etc… for unit/integration testing and Robot/Cypress/Selenium/Storybook/etc… for system testing. While GTK and KDE apps all have unit/integration tests, where are the system-level test frameworks?
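
    As a rough sketch of such a build chain for a Node.js web project (using two of the tools named above; the exact commands depend on the project's setup):

    npx jest --ci        # unit and integration tests
    npx cypress run      # system tests driving a real browser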

    All this is kinda boring, while ‘containers!’ is exciting new technology.



  • stevecrox@kbin.run to Fediverse@lemmy.world: What's going on with kbin.social? (8 months ago)

    The developer behind KBin seems to have issues delegating to and accepting contributors.

    If you look at the pull requests, most have been unreviewed for months and he tends to regularly push his branches once complete and just merge them in.

    That behaviour drove the MBin fork, where 4-5 people were really keen to contribute but were frustrated.

    To some extent that would be OK, it's his project and if he doesn't want to encourage contributions that is his decision, but…

    KBin.social has gotten to the size where it really should have multiple admins (or a paid full-time person), which it doesn't have.

    The developer has also told us he has gone through a divorce, moved into his own place, gotten a full-time job and now had surgery.

    That's a lot for any normal person, and he is going through it while trying to wear 2 hats (dev & ops), each of which would consume most of your free time.

    Personally I moved to kbin.run, which is run by one of the MBin devs.


  • Docker Swarm was a worse idea than Kubernetes, came out after Kubernetes, and isn't really supported by anyone.

    Kubernetes has the concept of a storage layer: you create a volume and can then mount it into the container. The volume is then accessible to the container regardless of where it is running.

    There is also a difference between a volume for a Deployment and one for a StatefulSet, since one is supposed to hold the application state and the other is supposed to be transient.
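
    A minimal sketch of that idea (the claim name, pod name and nginx image below are illustrative, not from the original comment):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    EOF

    # mount the claim into a pod; with a networked storage class the volume
    # is reattached wherever the pod ends up being scheduled
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data
    EOF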


  • DevSecOps is all about process; I simplified my answer.

    At a high level in software there will always be a review phase, where code needs to be built and pass tests (as a minimum). With Git being used by every organisation I have been involved in, you will find the organisation/team will claim to follow a variation of the 'feature branch workflow', with review happening at a specific point (the pull request).
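
    A minimal sketch of that workflow (the branch name and commit message are made up):

    git checkout -b feature/add-login      # branch off for the change
    git commit -am "Add login form"        # commit work on the branch
    git push -u origin feature/add-login   # push the branch, then open a pull request for review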

    For the last ~10 years every organisation/team I’ve worked in/with has claimed to use a CI to verify the source code as part of that review phase.

    In most dysfunctional teams that review phase will be broken, and the fastest win is to bring in the CI. Static analysis tools are also impartial in how they review, and useful in teaching people how to review.

    I don't say your project must build with no warnings, I say your project must build and I'll have the CI point out where you have added warnings as part of review. When people complain I'll point to their team's Ways of Working or an organisation's Software Development Plan (or in one case a System Engineering Management Plan).

    The sort of team that then chooses to disable this will do so because the leadership are undermining it (normally a team lead turns it off or tells them to just ignore it). There seem to be a few common reasons as to why team leads will do this but it isn’t something you can rationally debate with. The only solution I’ve found is to sideline the problem, change team culture and identify a leader within the team and hand it over to them.

    You're talking about teams which are failing due to the environment; those normally understand what is wrong, and the best approach is to be a good scrum master (e.g. run retro sessions, identify issues and work to resolve the environmental problems with them).


  • You can’t fix a people problem with process.

    For example, I've worked in DevSecOps for 10+ years. Whenever consulting, my first step is to implement a CI that picks up pull requests, builds them, runs code analysis tools (e.g. pep8, SpotBugs, ESLint, etc…) and has the CI comment on the pull request. The idea is to get an understanding of the project's technical debt, stop things getting worse and ensure the solution 'just works'.
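
    As a rough sketch, the sort of commands such a CI job might run against each pull request (the paths and build tooling are assumptions about the project):

    pycodestyle .        # Python style checks (the pep8 tool is now published as pycodestyle)
    npx eslint .         # JavaScript linting
    mvn spotbugs:check   # Java static analysis, assuming the SpotBugs Maven plugin is configured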

    Teams with huge amounts of technical debt will find a way to disable it when you're not looking. They will come up with all kinds of reasons, but in reality the technical debt was created because of cultural issues in the team.

    So I've learnt that if you spot a team doing that, the solution isn't locking things down so they can't disable it, or adding more process. It's forcing out the technical leader, sitting with the team, working out why each person is fighting the tool rather than seeing it as an asset, and teaching them to be better.


  • There will always be someone who is beating you on some metric (buying houses, having kids, promotions, pay, relationships, etc…); fixating on it will drive you mad.

    Instead you should compare your current status against where you were, and appreciate how you are moving forward.

    As for age

    During university my best mate was 27; he had dropped out in his final year, grabbed a random job, then gone to college to get a BTEC so he could start the degree.

    It was similar in my graduate intake: we had a 26-year-old who had been a brickie for 5 years before getting a comp sci degree.

    The first person I line managed was a junior 15 years older than me, who had a completely different career stream. They had the house, kids, had managed big teams, etc… honestly I learnt tons from them.



  • It isn’t a good move.

    A domain name can cost as little as £10, and most email services cost ~£5-£15 per person per month. It's normally pretty easy to link a domain to an email provider, and it doesn't cost anything other than time.

    If a company can't be bothered to implement the most basic online branding, people will make their assumptions and some will filter your company out because of it. With the cost to implement so low (e.g. a £10 domain plus roughly £12.50 per month for one mailbox is about £160 per year), even the loss/gain of a single customer would justify it.





  • I wouldn’t use “certified” in this context.

    Limiting support of software to specific software configurations makes sense.

    It's stuff like: Debian might be shipping Python 3.8, Ubuntu Python 3.9, openSUSE Python 3.9, etc… Your application might use a library that requires Python 3.9 and so act oddly on 3.8, etc… so only supporting X distributions lets you keep the test/QA process sane.
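
    For example (a made-up illustration; str.removeprefix() only exists from Python 3.9):

    python3.9 -c 'print("v1.2".removeprefix("v"))'   # prints 1.2
    python3.8 -c 'print("v1.2".removeprefix("v"))'   # AttributeError: 'str' object has no attribute 'removeprefix'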

    This is also why Docker/Flatpak exist, since you can define all of this.

    However the normal mix is RHEL/SUSE/Ubuntu, because those target businesses and your target market will most likely be running one.


  • I suspect they mean around packaging.

    I honestly believe Red Hat has a policy that everything should pull in Gnome. I have had headless RHEL installs where half the CLI tools require Gnome Keyring (even if they don't deal with secrets or store any). Back in RHEL 7, Kate, the KDE-based text editor, pulled in a bunch of GTK dependencies somehow.

    Certification really means someone paid to go through a process, and so it's designed so they pass.

    Think about the people you know who are Agile/Cloud/whatever certified and how all it means is they have learnt the basic examples.

    It's no different when a business gets certified.

    The only reason people care is because they can point to the cert if it all goes wrong.


  • stevecrox@kbin.run to Linux@lemmy.ml: I'm so frustrated rn. (9 months ago)

    Debian isn't old == stable, it's tested == stable.

    Debian has an effective rolling distribution through testing that can get ahead of Arch.

    At some point they freeze the software versions in testing and look for release-critical and major bugs. Once they have shaken everything out and submitted fixes where possible, it then becomes stable.

    The idea is people have tested a set baseline of software and there are no known major bugs.

    For the last 4-5 releases Debian has released every 2 years (similar to Ubuntu LTS). Debian tends to align its release with LTS kernel and Mesa releases, so there have been times the latest stable is running newer versions than Ubuntu, and the 'newest software' crown switches between Ubuntu LTS and Debian each year.

    For some, the priority is to run software that won't have major bugs; that is what Debian, Ubuntu LTS and RHEL offer.


  • Can you elaborate…

    I have looked after a few instances of Active Directory, and basic user management involved multiple steps through GUIs clearly written at different times (you would go from a Windows 8 to a Windows 95 to a Windows XP styled window, etc…).

    I much prefer FreeIPA: if I wanted to modify a user account it was two button clicks. Adding a group and bulk-applying it was the work of moments. You can set up replicas, and for a couple of hundred users it uses hardly any resources.
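
    The same sort of thing is quick from the FreeIPA CLI too (the user and group names here are made up):

    ipa user-add jdoe --first=Jane --last=Doe       # create a user
    ipa group-add developers                        # create a group
    ipa group-add-member developers --users=jdoe    # add the user to the group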

    The only advantage I could see related to Exchange integration, as it makes it really easy to set up SharePoint, Skype & email.

    SharePoint never gets set up properly and you find people switching to alternatives like Confluence, GitHub/GitLab Pages or MediaWiki, so that isn't an advantage.

    Everybody loathes Skype, and you're asked to set up an alternative (Mattermost, Slack, Zoom, etc…). I am not sure how integrated Teams is.

    Which really only leaves email, and I can see the one-off pain of setting up Dovecot as worth it compared to the ongoing usability pain of AD's user control.



  • The person is correct that this isn't a Linux problem; it relates to your experience.

    Windows worked by giving everyone full permissions and opening every port. While Microsoft has tried to roll that back, the administration effort goes into restricting access.

    Linux works on the opposite principle, you have to learn how to grant access to users and expose ports.

    You would have to make this mental switch no matter what Linux task you're trying to learn.

    Docker's guide to setting up headless Docker is copy/paste. You can install Docker Desktop on Linux and the effort is identical to Windows. The only missing step is:

    sudo usermod -aG docker $USER

    This ensures your user can access the Docker daemon as a local user.
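
    The group change only applies to new login sessions; a quick sanity check (standard Docker commands, not part of the original guide):

    newgrp docker            # start a shell that picks up the new group membership
    docker run hello-world   # verify the daemon is reachable without sudo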