
  • Have you set up a rules file for USB? You need a udev rule that gives your user access to the hardware. It is trivial to create, but it is one of those little headaches you learn as you go. SparkFun and Adafruit both have good tutorials if you search either of them for udev rules.

    Mine for a ch340 is done like this:

    $ cd /etc/udev/rules.d
    $ sudo nano 69-my-usb-serial-devices.rules
    # ch340
    SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ATTR{idVendor}=="1a86", ATTR{idProduct}=="7523", MODE="0666"
    

    I just told you to open the terminal editor nano and enter a comment (# ch340) that will help you remember this rule is for the CH340, followed by a line that sets the permissions for the device; the rule controls which users have access to it. I’m matching the rule on the vendor and product ID numbers. You can find these numbers using the $ lsusb command. FYI, the $ is standard shorthand for a command line prompt as your standard user, as opposed to #, which is shorthand for a root prompt.
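    On my machine the CH340 shows up in that list something like this (the bus/device numbers and description text will vary):

    $ lsusb
    Bus 003 Device 005: ID 1a86:7523 QinHeng Electronics CH340 serial converter

    The ID field is idVendor:idProduct, which is what the rule matches on.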

    Once you enter this line in nano, save and exit with Ctrl+O then Ctrl+X (the :wq sequence is vim, not nano). The next time you plug in the device, the kernel should use this rule to set the permissions on the device node to 0666, which means everyone can read and write, but not execute, stuff from the port; with execute it would be 0777.
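    If you don’t want to unplug and replug the device, you can also tell udev to reload and re-trigger its rules directly; both of these are standard udevadm subcommands:

    $ sudo udevadm control --reload-rules
    $ sudo udevadm trigger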

    When you are trying to find info about a USB device the following may be helpful:

    $ sudo dmesg | grep -F "USB device number" 
    

    Note that the last line should be the most recently connected device. $ dmesg prints the kernel’s ring buffer (the kernel log; it is not a systemd service log). You’ll probably see timestamps on the left, in seconds since boot. The initial bootup devices will show up with tightly grouped timestamps, while later connections will show a much larger number.

    There have been some recent changes in Fedora that have broken a script I wrote to help me with all the various places where USB hardware info is located and finding the right details. I’m trying to parse that script for the key elements. The first step is to find the location of the hardware. You are looking for something like /dev/bus/usb/003/003 or wherever the new device node got created. This is only the start, because different parts of the device may show up under different nodes. I’m not just talking about the CH340; if you are doing microcontroller stuff it gets more complicated, like Forth, MicroPython, and CircuitPython, where there will be more going on than just the serial port, or you need to know low level stuff. Once you know the specific node, you can walk every attribute udev knows about the device:

    $ udevadm info --attribute-walk --path=$(udevadm info --query=path --name=/dev/bus/usb/003/003)
    # substitute the node for the device in question

    In the past, my script used $ dmesg to retrieve the device location, then used $ lsusb -D *device location* to get the basic descriptor info. Then I went a layer deeper with the udevadm command to see everything related to the device. The command $ fdisk -l might also help with some STM32 type stuff that has a DFU bootloader and identifies as a USB drive when plugged in… At least, I think that was the reason I kept that option in my script; it has been a while since I used one of those.
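    That two-step flow is easy to reproduce by hand (the device path here is illustrative, and lsusb -D generally needs root):

    $ sudo dmesg | grep -F "USB device number"   # find the bus/device numbers
    $ sudo lsusb -D /dev/bus/usb/003/003         # dump the full descriptor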

    Edit: I can get the actual port location of a device now using $ lsusb -t -vv.


  • Slowly over time you learn what you need when you need it. There is no hand holding. Under the surface, this thing is very complex. Every aspect of Linux is public. You do not need to understand most of it, but this is the realm of many brilliant developers and most computer science students, especially those studying operating systems. Everyone is welcome here, but be aware that all levels are present.

    The vast majority of Linux is not related to desktop users. Linux is more common on servers and embedded devices like routers, cars, and industrial/enterprise equipment. People are happy to help you learn when you hit a wall, but no one wants to be your tech support.

    Distros are not brands or marketing. They all have a specific reason to exist and specialties. Learning what these specialties are and how to leverage them for things like documentation for any specific task can make a big difference in your overall experience.

    It is quite common for people to call it Linux, but you are unlikely to interact with kernel space very much. Your actual experience is mostly limited to the desktop environment and applications.

    Since you are on a Debian > Ubuntu derivative, you are on a distro that may have outdated dependencies in some cases, especially with outlier software. Terms like outdated and stable/unstable are not at all what they seem at first intuitive thought. Windows is a stable OS, which really means it has outdated dependencies in most cases too. Distros like Fedora or Arch are kept up to date with the latest kernel and dependencies. If the software you want to run is actively developed and kept up to date, these are the best distros to run. If your software is static, these distros may break it and create headaches.

    By contrast, if your software is kept up to date but you are on a stable distro, either the distro packager keeps the needed libraries up to date, or you need to go to the extra effort of updating stuff yourself, such as by adding a PPA to your APT sources list. This is important to understand because, if you are following internet documentation for some package, that documentation may be for a much newer version than what is available natively in the distro. This mostly applies to edgy software when you’re doing something specific that is not super common.

    The practical way to think about this is that Debian Stable is primarily created as a way for developers to build some device that will be used online for a specific task using many high level software packages. Once the thing is working, the developer knows that the packages they used are not going to get updated arbitrarily and break what they created, while the device is still going to receive all the needed security updates to remain safely online for as long as the release is supported by the Debian team. This is beneficial for small one-off devices and subcontracted types of development without a full time dev. Understanding this paradigm will massively improve your overall experience. I had a lot of frustration before I understood that much of what I was using was outdated, and why, when I first started using Ubuntu over 10 years ago.
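    As a sketch, adding a PPA on an Ubuntu base looks something like this (the PPA name here is just a placeholder):

    $ sudo add-apt-repository ppa:some-team/some-app
    $ sudo apt update
    $ sudo apt install some-app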


  • Slowly trying to learn sh while using mostly bash. Convenience is nice and all, but when I encounter something like OpenWRT or Android, I don’t like the feeling of speaking a foreign language. Maybe once I’m super familiar with sh I might explore prettier or more convenient options, but I really want to know how to deal with the most universal shell.
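    As a tiny illustration of why that matters, bash-isms like [[ ]] may not exist in the busybox ash you get on OpenWRT or Android:

    # bash-only test syntax:
    [[ $name == j* ]] && echo match
    # portable POSIX sh equivalent:
    case $name in j*) echo match ;; esac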


  • I found a Python project that does enough for my needs. jq looks super powerful though. Thanks. I managed to get yq working for PNGs, but I had trouble with both jq and yq on safetensor files. I couldn’t figure out how to parse a string embedded after an inconsistent binary preamble, especially with massive files. I could get in and grab the first line with head. I tried some stuff with expansions, but that didn’t work and sent me looking for others who have solved the issue better than I have.
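    For what it’s worth, the safetensors format is documented as an 8 byte little-endian header length followed by that many bytes of JSON, so a sketch like this should hand just the header to jq without reading the whole file (file name illustrative; assumes a little-endian machine):

    $ N=$(head -c 8 model.safetensors | od -A n -t u8 | tr -d ' ')
    $ tail -c +9 model.safetensors | head -c "$N" | jq .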


  • The best deal is probably going to be looking for a used machine with a 3080Ti. There were several of these made with Intel 12th gen CPUs. That is probably the cheapest way to get a 16 GB GPU. They can be found for considerably less than $2k. Anything with a “3080Ti”, where the “Ti” part is super important, has a 16 GB GPU (the plain “3080” is 8 GB). That was the only 16 GB laptop GPU until the newer Nvidia 40 series stuff.

    That can play any game, and can run some large models for AI stuff if you become interested. On the AI front, you want maximum system memory too if possible. My machine can only address 64 GB of sysmem. Some go up to 96 GB. I wish I could get like 256 GB.

    Just because a machine comes with Linux does not mean the problems are solved. Many times people buy machines whose peripherals need orphaned kernel modules that are not part of the mainline kernel. Orphaned kernel modules are not real Linux; they are like phones. Indeed, this is the exact mechanism used to steal your phone from you and prevent you from using it for its true hardware lifetime.

    The real solution is https://linux-hardware.org/. Use that to see what works where. You also need to understand modern Secure Boot with the TPM chip and package signing keys. These exist outside of the Linux kernel. If delving into this system is too much for you to deal with, or of no interest, just stick to using either Ubuntu or Fedora. These both have a special system outside of Linux that will handle the keys for you. Presently, these are the only two distro choices that do this; not derivatives either, it must be vanilla Ubuntu or Fedora. You won’t be able to change anything in kernel space when going this route, but if the keys issue is unimportant to you, that probably won’t be a problem.
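    If you just want to see where a machine stands before deciding, mokutil (shipped on both Fedora and Ubuntu) can report the Secure Boot state:

    $ mokutil --sb-state
    SecureBoot enabled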


  • For me, it is not about “lost history.” It is about contextual history and knowing if some tool I built in a distrobox uses only DNF, pacman, APT, or Portage; or if it also uses venv or conda; or if there was some install script.

    It would be nice if I were on a stable kernel to avoid such a dependency salad, but that is not within the scope of playing with the latest AI toys, where new tools are constantly opening up new spaces to explore.

    It would be nice if I were some genius full stack dev who could easily normalize all the tools under a single dependency containerization scheme, but that is not within my mental scope or interests at present. For most AI tools, I follow the example given and only add a distrobox container as an extra layer of dependency buffering from the host. The ability to lazily see the terminal history for each of those containers is a handy way to see exactly what I did months ago.
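    The pattern is roughly one container per tool, so each keeps its own history; the name and image here are illustrative:

    $ distrobox create --name comfyui --image fedora:40
    $ distrobox enter comfyui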


  • Distrobox supports Waydroid for running Android apps on Wayland. There are many small purpose-built apps for Android that can be useful on the desktop.

    No one seems to be mentioning apps in this specific kind of context, and I don’t consider a locked down and stripped orphan kernel to be “Linux”, but a lot of this stuff is FOSS and can now run on both.


  • If it died as a result of spilling something on it, you most likely damaged something hardware-wise. If it is not already powered off, remove the battery ASAP. Then just take off the bottom cover, pat anything needed dry, and let it air out.

    The real concern is the chips that do not have any pins sticking out of them. Those are ball grid arrays (a whole bunch of connections made underneath the black epoxy packaging). Those can hold moisture under them for longer. Your best bet is to let it dry in a warm place for a few hours.

    Getting wet is not a problem. The problem is a powered connection having a conductive fluid bridging two or more connections that cannot tolerate the current the fluid creates.

    When the actual circuit board is made, it goes into ovens and is submerged in liquids. Some even go across molten pools of tin as part of the component assembly process. The board itself (not all the other plastics and stuff for the case, screen, etc.) is very resilient.

    In many industrial settings where the environment is very dirty, it is common to take a desktop PC apart and hose it off with water. The only issue is shorting connections under powered conditions.

    So yes, technically, any form of drying can help “recover” the device.


  • I back up and then upgrade through the mechanism provided. Why? Lazy. I should take the time to set up a NAS and run most of /home from that, but I have never been motivated enough to try it.

    I usually let myself lag behind on Fedora to wait until the kinks have been worked out. I just jumped from 38 to 40 in one upgrade and totally regret it. Python is screwed up in distrobox and causing problems, but I can roll back too.
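    For reference, that kind of jump through the provided mechanism is the dnf system-upgrade plugin, roughly:

    $ sudo dnf upgrade --refresh
    $ sudo dnf system-upgrade download --releasever=40
    $ sudo dnf system-upgrade reboot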


  • I’m presently having issues with 40 and old Stable Diffusion/ComfyUI stuff related to torch, and I’m stuck in a dependency loop. Almost definitely unrelated.

    When I was looking into AMD a year or so ago, the 7k thing came up in a conference talk somewhere on YT. It had to do with some kind of conflict in how the 7k series versus the older stuff was designed and how CUDA is set up. I really don’t recall the details well. I was about to pull the trigger on a 6k setup, and after seeing that info I went the other direction.

    I was researching the CPU scheduler at the time, and I may be blurring that and the GPU stuff together when I say: I think it was the open source team that was talking about this at a Linux Plumbers Conference; it might have been about the enterprise GPU stuff and about HIP or something like that. Sorry I’m fuzzy on it.

    Edit: I was always only looking for the AI side, so the back end/kernel/API was all I cared about.


  • HIP/ROCm targets the 7k series. At least that is what I recall from my research almost a year ago when I was shopping for a machine. The 7k stuff is from the enterprise design team side of AMD, while the 6k series and before were like a totally separate thing inside the company.

    I got the impression 6k and before were only targeted at gaming. IIRC there was some project talked about a few months ago doing more back-porting of the kernel API stuff, but I didn’t save the reference. I think Brodie Robertson posted something about it on YT/Odysee etc.


  • A simple intuitive whitelist/blacklist firewall with logging for both inputs and outputs. I shouldn’t have to navigate nftables’ complexity or write scripts simply to list all the websites I’m willing or unwilling to connect to and their port numbers. There are silly limitations on all the tools I’ve tried.
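    For context, even a minimal outbound whitelist in raw nftables looks something like this (the table name and address are illustrative, and you would still need loopback and DNS rules on top):

    $ sudo nft add table inet allowlist
    $ sudo nft add chain inet allowlist out '{ type filter hook output priority 0; policy drop; }'
    $ sudo nft add rule inet allowlist out ct state established,related accept
    $ sudo nft add rule inet allowlist out ip daddr 93.184.216.34 tcp dport 443 accept
    $ sudo nft add rule inet allowlist out log prefix '"blocked-out: "' drop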

    I use a whitelist because my code sucks, and PDF datasheets for hobbyist hardware projects can be super sketchy to download. I have somewhere around 600 entries on my list. From a user’s perspective, it feels like an intentionally obfuscated/overcomplicated issue in OpenWRT and elsewhere.

    I really don’t trust local LLMs overall now that they’ve been shown to have hidden vulnerabilities, and I would love an easier way to monitor an output log and sandbox them, really.


  • Fedora is not Red Hat per se; it is upstream. Red Hat is a few things in different spaces. For one, it is a great source of documentation. Secondly, a sizable chunk of kernel code is developed and maintained by Red Hat. They are known for their zero-downtime kernel updating system, among other things.

    Fedora is excellent. However, it is very different from Ubuntu by design. Fedora is primarily useful for entry level users that intend to run only software that is regularly kept up to date and maintained. You will start running into problems with software that is not kept up to date. There are relatively easy tools like distrobox, toolbox, and podman that can run most software regardless. The exception to this comes with the GPU. If you are running a GPU, you’re likely getting updates in Fedora that will break your older projects entirely. This is because Fedora is constantly updating the Linux kernel. Fedora pushes out these updates constantly and looks for problems that might pop up. These issues get fixed and flow downstream to Red Hat to make it rock solid.

    Ubuntu is based on much longer term stability, with even longer term LTS versions. This means the kernel and dependencies are frozen in time at a specific state. If you want to write some custom package that never gets broken when a dependency is updated, Ubuntu is the go-to distro. You must be aware that, on Ubuntu, the native packages are largely out of date. You can add a PPA to the APT sources list so that you get the latest packages, but these should be used only in special cases. If you want to be up to date, use the proper distro for the task.

    This context is more important for servers where you want to deploy a project using a bunch of apps and packages. Once it is working, it should stay working for however long the LTS kernel is supported.