• 0 Posts
  • 51 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • I’m currently more of a generic sysadmin than a Linux admin, as I do both. But the ‘other stuff’ at work revolves around Teams, Office, Outlook and things like that, so I’m running Windows 11 with WSL and it’s good enough for what I need from a workstation. There’s technically a policy in place that only Windows workstations are supported, but I suppose I could run Linux (and I have a separate laptop for Linux-only stuff). In the current environment it’s just not worth the hassle, specifically since I need to maintain Windows servers too.

    So, I have my terminals, Firefox and whatever I need, and I also have the mandated office suite and malware protection/IDR/IDS by the book, and in my mindset I’m using company tools for company jobs. If they take longer, could be more efficient or whatever, it’s not my problem. I’ll just browse my (personal) cellphone while the throbber spins on the screen, and I get paid to do that.

    If I switched to Linux I’d need to personally take care of keeping my system up to spec, and I wouldn’t have any kind of helpdesk available should I ever need one. So it’s just simpler to stick with what the company provides, and if it’s slow then it’s not my headache. I’ve accepted that mindset.


  • The package file, no matter if it’s rpm, deb or something else, contains a few things: files for the software itself (executables, libraries, documentation, default configuration), dependencies on other packages (as in, to install software A you also need to install library B) and installation scripts for the package. There’s also some metadata, info for uninstallation and things like that, but that’s mostly irrelevant for the end user.

    And then you need a suitable package manager: dpkg for deb packages, rpm (the program) for rpm packages and so on. That’s why you mostly can’t run Debian packages on Fedora or the other way around. But with derivative distributions, like Kubuntu and Lubuntu, they use Ubuntu packages but have a different default package selection and default configuration. Technically it would be possible to build a Kubuntu package which depends on some library version that isn’t on Lubuntu, and thus the packages wouldn’t be compatible, but I’m almost certain that for those specific two it’s not the case. (There’s a small sketch of inspecting a package’s metadata at the end of this comment.)

    And then there are things like Linux Mint, which was originally based on Ubuntu, but at least at some point they had builds based on both Debian and Ubuntu and thus different package selections. So there’s a ton of nuance in this, but for the most part you can ignore it; just follow the documentation for your specific distribution and you’re good to go.
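    A minimal sketch of what that metadata looks like in practice, assuming a Debian-family system and placeholder file names (example.deb / example.rpm):

    ```
    # Inspect a .deb without installing it (Debian/Ubuntu and derivatives)
    dpkg-deb --info example.deb       # metadata: name, version, declared dependencies
    dpkg-deb --contents example.deb   # files the package would put on disk

    # Roughly the same for an rpm package (Fedora/RHEL family)
    rpm -qpi example.rpm              # general metadata
    rpm -qpR example.rpm              # what the package requires
    rpm -qpl example.rpm              # file list
    ```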


  • Phobia, by definition, is an uncontrollable, irrational and lasting fear of something. In the current geopolitical situation I’d say that it’s not uncontrollable and very much not irrational. Fear, as a fellow Finn, might be a bit strong a word, but it’s definitely a concern.

    When I first read that I thought the response was a bit harsh, as Russian (and Soviet Union) individuals have traditionally been a big part of the open source community and their achievements in computing are pretty significant, but when you dig a bit deeper into it, a majority of the Soviet-era things were actually built by Ukrainians in Kyiv (obviously Ukraine as a country wasn’t a thing back then).

    Also, based on my very limited view of the matter, Russians are not banned from contributing; this is more of a statement that anyone working for the government in Russia can’t be a part of the kernel development team. There are of course legal reasons for that, very much including the trade bans against Russia, but also the moral side of it, which Linus seems to take a stand on.

    Personally I’ve seen individuals in Russia do quite amazing feats with both hardware and software, but as none of us works in a void free of external influence, I think that, while harsh, the “sanctions” (for lack of a better word) aren’t overshooting anything; they’re instead leveling the playing field. Any Joe Anonymous could write code which compromises the kernel as a whole, but should that Joe live in Russia, it might bring in a government-backed team which can hide its tracks on quite a different level, with resources beyond what any individual could ever dream of.

    So, while that decision might slow down some implementations and it might affect some of the most capable developers, the fear that one of them could corrupt the whole project isn’t unreasonable, and with the ongoing sanctions in place (and the legal requirements that follow) the core dev team might not even have a choice on this.

    In the current global environment we’re living in, I’d rather have slightly too careful management than one which doesn’t take things seriously enough. We already have Canonical and others breaking stuff way too often; we don’t need a malicious government expanding on that with nefarious purposes which could compromise a shit ton of stuff on a very fundamental level if left unattended.


  • I personally don’t, but many do. But it doesn’t matter; my employer isn’t legally allowed to read my emails unless it’s some sort of emergency. My vacation, a weekend, a short sick leave and things like that do not qualify. And even then, if the criteria are met, it’s illegal to read anything other than strictly work-related things out of my mailbox.

    We even have a form where people leaving the company sign a permission so that their mailbox can be accessed by their team leader, and without that signature we’re not allowed to grant access to anyone, unless the legal department is on the case and the terms for a privacy breach are met.


  • > This is the same as complaining that my job puts a filter on my work computer that lets them know if I’m googling porn at work. You can cry big brother all you want, but I think most people are fine with the idea that the corporation I work for has a reasonable case for putting monitoring software on the computer they gave me.

    European point of view: my work computer and the network in general have filters so I can’t access porn, gambling, malware and other such stuff on them. There’s monitoring for viruses and malware; that’s a pretty normal and well understood need. BUT. It is straight up illegal for my employer to actively monitor my email content (they of course have filtering for incoming spam and such), my chats on Teams/whatever, or in general to intrude on my privacy even at work.

    There are of course mechanisms in place where they can access my email if anything work-related requires it. So in case I’m lying in a hospital or something, they are allowed to read work-related emails from my inbox, but anything personal is protected by the same laws which apply to traditional letters and other communication.

    Monitoring ‘every word’ is just not allowed, no matter how good your intentions are. And that’s a good thing.


  • My ecotank died just like all the other inkjets. It went a few weeks without printing, the blue nozzle dried up completely, and in the tubes I can see dried-up ink in the other colors as well. So I had to dig the old Brother HL3040 back up for duty; I had retired it after print quality started to drop (it needs a new fuser unit or something similar, so not that big of a deal) and thought that having the option to print nice color pictures would be nice.

    So, if you plan to run an ecotank (which does have pretty good print quality when it works), set up a scheduled task on your computer to print something, in color, quite frequently, even if it wastes some ink and paper (a rough cron sketch follows at the end of this comment). I think the main issue with mine was that even though I print stuff somewhat often, there was a period where I only needed b&w documents, so the color nozzles went unused for a while.

    I might get a new set of nozzles and ink tanks for my unit as it’s a ton cheaper than a whole new printer, but if you’re looking for a printer this is something to take into consideration, regardless of their marketing material.

    Edit: Mine is an Epson; I didn’t know the ecotank term is used by other manufacturers too.
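    A rough sketch of that scheduled color print, assuming CUPS is set up; the queue name, user and test file below are placeholders for whatever your system actually uses:

    ```
    # /etc/cron.d/color-nozzle-exercise  (hypothetical file)
    # Print a small color test page every Monday at 09:00 so the color nozzles don't sit unused
    0 9 * * 1  youruser  lp -d EcoTank_ET /home/youruser/color-testpage.pdf
    ```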


  • You can run Clonezilla from your shell session, just apt install clonezilla (or whatever your distribution’s equivalent is) and it can do the trick. dd will almost surely work too, but that leaves a ton of responsibility to you instead of doing any sanity checks along the way. That makes dd a very powerful tool and it has saved my ass multiple times, but if you already have a working partitioning scheme, Clonezilla has a ton of options to make your life a lot simpler, and likely a bit faster than dd. (A small sketch of both routes follows.)
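    A small sketch of both routes, assuming a Debian-family system; the device names are placeholders, so double-check them with lsblk before running anything:

    ```
    sudo apt install clonezilla
    sudo clonezilla                    # launches the menu-driven cloning wizard

    # The raw dd route: no sanity checks, just copies /dev/sdX over /dev/sdY verbatim
    sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync
    ```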


  • Back in the day with dial-up internet, man pages, readmes and other included documentation were pretty much the only way to learn anything, as the web was in its very early stages. And ‘man <whatever>’ is still way faster than trying to search for the same information on the web. Today at work I needed the man page for setfacl (since I still don’t remember every command’s parameters) and found out that the WSL2 Debian on my office workstation doesn’t have the ‘man’ command out of the box, and I was more than mildly annoyed that I had to search for it.

    Of course today it was just an alt+tab to the browser, a new tab and a few seconds for results, which most likely consumed enough bandwidth that on dial-up it would’ve taken several hours to download, but it was annoying enough that I’ll spend some time on Monday fixing this on my laptop.
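    The fix itself should be a quick one on a minimal Debian image, something along these lines (package names are the standard Debian ones; minimized images may also strip docs via dpkg path excludes, which is worth checking if pages still come up missing):

    ```
    sudo apt update
    sudo apt install man-db manpages
    man setfacl
    ```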


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · Man pages maintenance suspended · 2 months ago
    I mean that the product being made here is not the website, and I can well understand that the developer has no interest in spending time on it, as it’s not beneficial to the actual project he’s been working on. And I can also understand that he doesn’t want to receive donations from individuals, as that would bring in even more work to manage, which is time taken away from the project. A single sponsor with clearly agreed boundaries is far simpler to manage.




  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · The Insecurity of Debian · 2 months ago
    The threat model seems a bit like fearmongering. Sure, if your container gets breached and the attacker can (on some occasions) break out of it, it’s a big deal. But how likely is that really? And even if it happened, isn’t the data in the containers far more valuable than the base infrastructure under it in almost all cases?

    I’m not arguing against the SELinux/AppArmor comparison; SELinux can be more secure, assuming it’s configured properly, but there are quite a few steps in hardening a system before that. And as others have mentioned, neither of the two is really widely adopted, and I’d argue that when you design your setup properly from the ground up you don’t really need either, at least unless the breach happens through some obscure 0-day or other bug. (A quick way to check what’s actually enforcing on a given box is sketched after this comment.)

    For the majority of data leaks and other breaches that’s almost never the reason. If your CRM or e-commerce software has a bug (or a misconfiguration, or a ton of other options) which allows dumping everyone’s data out of the database, SELinux wouldn’t save you.

    Security is hard indeed, but that’s a bit of an odd corner to look at it from, and it doesn’t have anything to do with Debian or RHEL.
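    For reference, a quick way to see which of the two (if either) is actually enforcing anything on a host; tool availability depends on the distribution:

    ```
    # SELinux (Fedora/RHEL family)
    getenforce          # Enforcing, Permissive or Disabled
    sestatus            # loaded policy, current mode, more detail

    # AppArmor (Debian/Ubuntu family)
    sudo aa-status      # loaded profiles and whether they're in enforce or complain mode
    ```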


  • If I had to guess, I’d say that e1000 cards are pretty well supported on every public distribution/kernel out there without any extra modules, but I don’t have one around to verify it. At least on this Ubuntu I don’t find any e1000-related firmware package or anything else, so I’d guess it’s supported out of the box.

    For ifconfig, if you omit ‘-a’ it doesn’t show interfaces that are down, so maybe that’s the obvious thing you’re missing? The card should show up in NetworkManager (or any other graphical tool, as well as nmcli and other CLI alternatives), but as you’re going through the manual route I assume you’re not running any. mii-tool should pick it up on the command line too. (A few quick checks are sketched at the end of this comment.)

    And if it’s not that simple, there seems to be at least something around the internet if you search for ‘NVM checksum is not valid’ and ‘e1000e’, specifically related to Dell, but I didn’t go down that path too deep.
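    A handful of checks I’d start with for a NIC that doesn’t show up; the driver and interface names here are just the usual suspects, not something from your specific machine:

    ```
    ip link show                       # all interfaces, including ones that are down
    ifconfig -a                        # same idea with the older tool
    lspci -k | grep -iA3 ethernet      # which kernel driver (if any) is bound to the card
    sudo dmesg | grep -i e1000         # driver messages, e.g. the 'NVM Checksum Is Not Valid' error
    ```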




  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · 33 years ago... · 2 months ago
    I read Linus’s book several years ago, and based on that flimsy knowledge in the back of my head, I don’t think Linus was really competing with anyone at the time. Hurd was around, but it was (and still is) coming soon™ to widespread use, and things between AT&T and BSD were “a bit” complex at the time.

    BSD obviously brought a ton of stuff to the table which Linux greatly benefited from, and their stance on FOSS shouldn’t go without appreciation, but assuming my history knowledge isn’t too badly flawed, BSD and Linux weren’t direct competitors; they started to gain traction (despite BSD’s much longer history) around the same time, and they grew stronger together instead of competing with each other.

    A ton of us owe our current corporate lives to the people who built the stepping stones before us, and Linus is no different. Obviously I personally owe Linus a ton for enabling my current status at the office, but the whole thing wouldn’t have been possible without the people who came before him. RMS and the GNU movement play a big part in that, but an equally big part is played by a ton of other people.

    I’m not an expert by any stretch on the history of Linux/Unix, but I’m glad that the people preceding my career did what they did. Covering all the bases on the topic would require a ton more than I can spit out on a platform like this; I’m just happy that we have the FOSS movement at all instead of everything being a walled garden today.


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · 33 years ago... · 2 months ago
    That kind of depends on how you define FOSS. The way we think of it today was in its very early stages back in 1991, and the original source was distributed as free, both as in speech and as in beer, but commercial use was prohibited, so it doesn’t strictly speaking qualify as FOSS (as we understand it today). About a year later Linux was released under the GPL, and the rest is history.

    Public domain code, the academic world sharing source code and things like that predate both Linux and GNU by a few decades, and even the Free Software Foundation came 5-6 years before Linux, but Linux itself has been pretty much as free as it is today from the start. The GPL, GNU, the FSF and all the things Stallman created or was a part of (regardless of his conflicting personality) just laid down a set of rules on how to play this game, pretty much before the game or any rules for it existed.

    Minix was a commercial thing from the start, Linux wasn’t, and things just got refined along the way. You are of course correct that the first release of Linux wasn’t strictly speaking FOSS, but the whole ‘FOSS’ mentality and the rules for it weren’t really a thing back then either.

    There’s of course academic debate to be had for days on which came first, who obeyed which rules, and which release counts as FOSS or not, but for all intents and purposes, Linux was free software from the start and the competition was not.



  • > Linux, so even benchmarking software is near impossible unless you’re writing software which is able to leverage the specific unique features of Linux which make it more optimized.

    True. I have no doubt that you could set up a Linux system to calculate pi to 10 million digits (or something similar) more power efficiently than a Windows-based system, but that would involve compiling your own kernel with everything unnecessary for that particular system left out, shutting down a ton of daemons that commonly run on a typical desktop, and so on, and you’d waste far more power on the testing than you could ever save. And it might not even be faster, just less power hungry, but no matter what, it would be far, far away from any real-world scenario and instead be a competition to build hardware and software to do that very specific thing with as little power as possible.
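    To give a flavor of what that kind of measurement looks like in practice (the service name and workload below are just examples; the RAPL energy counter via perf needs a reasonably recent Intel/AMD CPU and root):

    ```
    # Stop a daemon you don't need for the duration of the run
    sudo systemctl stop cups.service

    # Measure package energy (joules) for a CPU-bound stand-in workload:
    # pi via bc to 5000 digits, nowhere near 10 million, purely for illustration
    sudo perf stat -a -e power/energy-pkg/ -- \
        sh -c 'echo "scale=5000; 4*a(1)" | bc -l > /dev/null'
    ```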