  • “Tech workers” is pretty broad.

    Tech Support

    There are support chatbots that exist today that act as a support feature for people who want to ask English-language questions rather than search for answers. Those were around even before LLMs and could work on even simpler principles. Having tier-1 support workers work off a flowchart is a thing, and you can definitely make a computer do that even without any learning capability at all. So they can definitely fill some amount of that role. I don’t know how far that will go, though. I think that there are probably going to be fundamental problems with novel or customer-specific issues, because a model just won’t have been trained on them. I think that it’s going to have a hard time synthesizing an answer from answers to multiple unrelated problems that it might have in its training corpus. So I’d say, yeah, to some degree, and we’ve successfully used expert systems and other forms of machine learning in the past to automate some basic stuff here. But I don’t think that this is going to take over the field as a whole.
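
    As a toy illustration of the no-learning-needed end of that: a tier-1 flowchart can literally be a hand-written decision tree. This is just a sketch in Python; every question, answer, and node name here is made up.

    ```python
    # Minimal sketch of a tier-1 support "flowchart" bot: a hand-written decision
    # tree, no machine learning involved. All questions and answers are made up.

    FLOWCHART = {
        "start": ("Is the device powered on? (yes/no)",
                  {"yes": "network", "no": "power"}),
        "power": ("Plug it in and hold the power button for 5 seconds. Solved? (yes/no)",
                  {"yes": "done", "no": "escalate"}),
        "network": ("Can you reach other websites? (yes/no)",
                    {"yes": "escalate", "no": "router"}),
        "router": ("Reboot your router and wait two minutes. Solved? (yes/no)",
                   {"yes": "done", "no": "escalate"}),
        "done": ("Glad that fixed it!", None),
        "escalate": ("I'll open a ticket with tier-2 support.", None),
    }

    def run():
        node = "start"
        while True:
            prompt, branches = FLOWCHART[node]
            print(prompt)
            if branches is None:
                return
            answer = input("> ").strip().lower()
            node = branches.get(answer, "escalate")  # anything unexpected gets escalated

    if __name__ == "__main__":
        run()
    ```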

    Writing software

    Can existing LLM systems write software? No. I don’t think that they are an effective tool to pump out code. I also don’t think that the current, “shallow” understanding that they have is amenable to doing so.

    I think that the thing that LLMs work well at is producing stuff that is different but appears, to a human, to be similar to other content. That works, to varying degrees, for a variety of uses where the content is consumed by humans.

    But humans deal well with errors in what we see. The kinds of errors in AI-generated images aren’t a big issue for us – they just need to cue up our memories of things in our head. Programming languages are not very amenable to that. And I don’t think that there’s a very effective way to lower that error rate.

    I think that it might be possible to make use of an LLM-driven “warning” system when writing software; I’m not sure if someone has done something like that. Think of something that works the way a grammar checker does for natural language. Having a higher error rate is acceptable there. That might reduce the amount of labor required to write code, though I don’t think that it’ll replace it.

    Maybe it’s possible to look for common security errors to flag for a human by training a model to recognize those.
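
    For contrast, here’s how low-tech the existing, non-ML version of that kind of flagging can be – basically what linters already do today. This is only a sketch: the patterns below are illustrative, nothing like a complete list, and the whole point of training a model would be to catch what crude patterns like these miss.

    ```python
    # A crude, pattern-based "security warning" pass over Python source, in the
    # spirit of a grammar checker: flag things for a human, tolerate false positives.
    # The patterns are illustrative only; real tools (or a trained model) would go
    # far beyond this.
    import re
    import sys

    PATTERNS = [
        (re.compile(r"\beval\s*\("), "eval() on possibly untrusted input"),
        (re.compile(r"\bsubprocess\.\w+\(.*shell\s*=\s*True"), "shell=True with subprocess"),
        (re.compile(r"\bpickle\.loads?\s*\("), "unpickling untrusted data"),
        (re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"]"), "hard-coded credential"),
    ]

    def scan(path):
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, start=1):
                for pattern, message in PATTERNS:
                    if pattern.search(line):
                        print(f"{path}:{lineno}: warning: {message}")

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            scan(path)
    ```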

    I also think that software development is probably one of the more-heavily-automated fields out there because, well, people who write software make systems to do things over and over. High-level programming languages rather than writing assembly, software libraries, revision control…all that was written to automate away parts of tasks. I think that in general, a lot of the low-hanging fruit has been taken.

    Does that mean that I think that software cannot be written by AI? No. I am sure that AI can write software. But I don’t think that the AI systems that we have today, or systems that are slightly tweaked, or systems that just have a larger model, or something along those lines, are going to be what takes over software development. I also think that the kinds of hurdles that we’d need to clear to have software fully written by an AI require us to get near an AI that can do anything that a human can do. I think that we will eventually get there, and when we do, we’ll see human labor in general be automated. But I don’t think that OpenAI or Microsoft are a year away from that.

    System and network administration

    Again, I’m skeptical that interacting with computers is where LLMs are going to be the most-effective. Computers just aren’t that tolerant of errors. Most of the things that I can think of that you could use an AI to do, like automated configuration management or something, already have some form of automated tools in that role.

    Also, I think that obtaining training data for this corpus is going to be a pain. That is, I don’t think that sysadmins are generally going to be okay with you logging what they’re doing to try to build a training corpus, because in many cases, there’s potential for leaks of sensitive information.

    And a lot of data in that training corpus is not going to be very timeless. Like, watching someone troubleshoot a problem with a particular network card…I’m not sure how relevant that’s going to be for later hardware.

    Quality Assurance

    This involves too many different things for me to make a guess. I think that there are maybe some tasks that some QA people do today that an LLM could do. Instead of using a fuzzer to throw input in for testing, maybe have an AI predict what a human would do.

    Maybe it’s possible to build some kind of model mapping instructions to operations with a mouse pointer on a screen and then do something that could take English-language instructions to try to generate actions on that screen.
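
    The “generate actions on that screen” half is the easy part – ordinary UI-automation libraries already do it. Here’s a toy stand-in, assuming pyautogui is installed, where a keyword lookup against hard-coded coordinates sits in the spot where an actual instruction-to-element model would have to go; the target names and coordinates are invented.

    ```python
    # Toy stand-in for the "English instruction -> screen action" idea. The hard
    # part (a model mapping free-form instructions to UI elements) is replaced by
    # a keyword lookup against hard-coded, made-up coordinates, purely to show
    # where such a model would plug in. Requires pyautogui.
    import pyautogui

    # Hypothetical screen locations; a real system would have to find these itself.
    TARGETS = {
        "ok button": (640, 480),
        "search box": (400, 120),
    }

    def act(instruction: str) -> None:
        text = instruction.lower()
        if text.startswith("click"):
            for name, (x, y) in TARGETS.items():
                if name in text:
                    pyautogui.click(x, y)
                    return
            raise ValueError(f"don't know where to click: {instruction!r}")
        if text.startswith("type"):
            pyautogui.typewrite(text[len("type"):].strip(), interval=0.05)
            return
        raise ValueError(f"unsupported instruction: {instruction!r}")

    if __name__ == "__main__":
        act("click the search box")
        act("type hello world")
    ```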

    But I’ve also had QA people do one-off checks, or things that aren’t done at mass scale, and those probably just aren’t all that sensible to automate, AI or no. I’ve had them do tasks in the real world (“can you go open up the machine that’s seeing failures and check what the label on that chip actually reads, because software is reporting the same part number”). I’ve written test plans for QA to run on things I’ve built, and had them say “this is ambiguous”. My suspicion is that an LLM trained on the information that’s out there is going to have a hard time, without a deep understanding of a system, saying “this is ambiguous”.

    Overall

    There are other areas. But I think that any answer is probably “to some degree, depending upon what area of tech work, but mostly not, not with the kind of AI systems that exist today or with minor changes to existing systems”.

    I think that a better question than “can this be done with AI” is “how difficult is this job to do with AI”. I mean, I think that eventually, pretty much any job could probably be done by an AI. But I think that some are a lot harder than others. In general, the ones that are more-amenable are, I think, those where one can get a good training corpus – a lot of recorded data showing how to do the task correctly and incorrectly. I think that, at least using current approaches, tasks that are somewhat-tolerant of errors are better. For any form of automation, AI or no, tasks that need to be done repeatedly many times over are more-amenable to automation. Using current approaches, problems that can be solved by combining multiple things from a training corpus in simple ways, without a deep understanding, not needing context about the surrounding world or such, are more amenable to being done by AI.


  • Honestly, I’m a little surprised that a smartphone user wouldn’t have familiarity with the concept of files, setting aside the whole familiarity-with-a-PC thing. Like, I’ve always had a file manager on my Android smartphone. I mean, ok…most software packages don’t require having one browse the file structure on the thing. And many are isolated, don’t have permission to touch shared files. Probably a good thing to sandbox apps, helps reduce the impact of malware.

    But…I mean, even sandboxed apps can provide file access to the application-private directory on Android. I guess they just mostly don’t, if the idea is that they should only be looking at files in application-private storage on-device, or if they’re just the front end to a cloud service.

    Hmm. I mean, I have GNU/Linux software running in Termux, do stuff like scp from there. A file manager. Open local video files in mpv or in PDF viewers and such. I’ve a Markdown editor that permits browsing the filesystem. Ditto for an org-mode editor. I’ve a music player that can browse the filesystem. I’ve got a directory hierarchy that I’ve created, though simpler and I don’t touch it as much as on the PC.

    But, I suppose that maybe most apps just don’t expose it in their UI. I could see a typical Android user just never using any of the above software. Not having a local PDF viewer or video player seems odd, but I guess someone could just rely wholly on streaming services for video and always open PDFs off the network. I’m not sure that the official YouTube app lets one actually save video files for offline viewing, come to think of it.

    I remember being absolutely shocked, when trying to view a locally-stored HTML file once, that Android-based web browsers apparently didn’t permit opening local HTML files – that one had to set up a local webserver. (That may have something to do with the fact that, with Web browser security models, I believe a page loaded via the file:// URI scheme by default gets broader access to your local filesystem than one talking to a webserver on localhost does…maybe that was the rationale.)
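
    For what it’s worth, if you do have something like Termux on the device, the local-webserver workaround is tiny – Python’s standard library ships a static file server. A minimal sketch that serves the current directory on localhost (port 8080 is arbitrary), so a browser can load the HTML over http:// instead of file://:

    ```python
    # Serve the current directory over http://127.0.0.1:8080 so a mobile browser
    # can view local HTML without using the file:// scheme. Standard library only.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    if __name__ == "__main__":
        server = HTTPServer(("127.0.0.1", 8080), SimpleHTTPRequestHandler)
        print("Serving current directory at http://127.0.0.1:8080/")
        server.serve_forever()
    ```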


  • ProjectM is literally the modern incarnation of Milkdrop, and it’s packaged in Debian, so I assume in all child distros as well.

    EDIT: If you want something more expensive and elaborate than a picture on a computer display, look into DMX512. That’s the electrical standard for connecting the kind of controlled lighting systems that are used in clubs and such. You get a DMX512 transceiver, plug it into a computer, hook up software, and attach DMX512 hardware. Strobes, computer-directed spotlights doing patterns, color-changing lights, etc. You can find YouTube videos of people using systems like that to drive lighting displays on houses synchronized with audio, stuff like this:

    https://www.youtube.com/watch?v=DnrBd7bCLoU
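
    If you’d rather poke at this from code than from canned software, a lot of DMX nodes and controllers also take DMX over the network via Art-Net, which is just a hand-assembled UDP packet. A rough sketch from memory of the ArtDmx framing – check it against the actual spec before trusting it; the node IP, universe, and channel values are all made up, and a USB transceiver would instead speak its vendor’s own serial protocol:

    ```python
    # Rough sketch of sending one DMX frame via Art-Net (DMX-over-UDP, port 6454).
    # Framing is from memory of the ArtDmx packet layout; verify against the spec
    # before relying on it. The universe, channel values, and node IP are arbitrary.
    import socket

    def artdmx_packet(universe: int, channels: bytes) -> bytes:
        data = channels.ljust(512, b"\x00")[:512]
        return (
            b"Art-Net\x00"                      # packet ID
            + (0x5000).to_bytes(2, "little")    # OpCode: ArtDmx
            + (14).to_bytes(2, "big")           # protocol version
            + bytes([0, 0])                     # sequence (0 = off), physical port
            + universe.to_bytes(2, "little")    # SubUni + Net
            + len(data).to_bytes(2, "big")      # DMX data length
            + data
        )

    if __name__ == "__main__":
        frame = bytearray(512)
        frame[0] = 255                          # channel 1 full on (e.g. a dimmer)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # hypothetical node address on the local network
        sock.sendto(artdmx_packet(0, bytes(frame)), ("192.168.1.50", 6454))
    ```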





  • Depends massively on what subreddit on Reddit, and to a lesser degree, what community on the Threadiverse. /r/AskHistorians, /r/seventhworldproblems, /r/Europe, and /r/NFL don’t have a whole lot in common.

    I think that in terms of content, the Threadiverse today is much closer to very early Reddit than to Reddit over the past ten years or so. Reddit used to have a much heavier tech focus – a lot of Linux there too, though it tended more toward Lisp, academia, and startups. A lot of the people who came over early on the Threadiverse are far-left; the proportions definitely differ a lot there. I’m pretty sure that there’s a higher furry and trans content ratio, but that’s harder to judge; it may also just be people using avatars and home instances providing a hint.

    A significant chunk of people on here seem extremely depressed. That was definitely not my take on especially early Reddit, which was fairly upbeat (though I do remember one Italian guy on /r/Europe who kept talking about how terrible Italy is today and how much better the 1980s were).

    I think that there are more people who are kinda…I’m not sure how to put this politely. A little unglued from reality. I mean, I remember back during Bush’s time in office, there being a lot of 9/11 conspiracy stuff on Reddit, but I feel like the proportion of people whose general take on everything feels extremely paranoid is a lot higher.

    It definitely feels more international, less US-oriented, to me, and I frequented /r/Europe.

    I feel like there are more older people. I have seen some website analytics of Reddit, and as I recall, it averaged something like early twenties. That may have changed over time, but I’d still bet that the median age here is higher.

    Most of the subreddits that I used had far more users than even the most-active communities on the Threadiverse. This meant that there was a lot more content. On the other hand, it also meant that it was increasingly-common to spend a lot of time writing something, only for it to be buried under a flood of other content; if one didn’t get a comment in pretty early in a post, users just skimming top comments might never see it. That was even more-true for posts – one’s chance of a post attracting attention in a community where a new post arrives every few minutes and many people just view top posts was not good, whereas here, I’m pretty sure that almost everyone on a community sees it. I think that Reddit had a better variety and amount of content to consume, whereas I feel that it’s more-rewarding to contribute content here.

    For the same smaller-size reason, it’s a lot more common here for me to recognize usernames. Especially on late Reddit, the chance of recognizing anyone off a subreddit, other than a few extremely-prolific posters, was not high. I’m talking to pseudonyms, sure, but it’s “Kolanki, that furry dude that I remember”, or “Flying Squid, that guy who mods a bunch of communities”, not another username that I’ll probably never see or remember. I think that that affects the environment somewhat, in that people act differently in a crowd of people that they “know” than in a crowd of strangers.

    The Threadiverse in 2025 isn’t a full replacement for me in the sense that Reddit has a subreddit with some level of non-zero activity on virtually any topic remotely of interest that I can think of. There are a few subreddits that I used to read regularly, like /r/cataclysmdda, for the video game Cataclysm: Dark Days Ahead. [email protected] has very little activity, and for most video games, software packages, products, etc there isn’t a community. Some subreddits dealt with content creation or all sorts of things, and the userbase just isn’t here now to support that. So what I talk about differs somewhat.

    I feel like users on the Threadiverse are less aggressive. Maybe it’s moderation or the userbase or who-knows-what, but I remember a considerably higher proportion of flamewars on Reddit. I felt that there was a much-higher tendency for people to want to get the last word in on Reddit.

    I have seen far less trolling than I did on Reddit (or Slashdot).

    It’s hard for me to judge the impact of LLM-generated bot comments on Reddit. I didn’t personally notice many, at least on the (mostly-not-largest-in-size, so maybe not heavily-targeted) subreddits that I followed, but I’ve seen plenty of people on both Reddit and on the Threadiverse complaining about LLM-generated comments on Reddit, so unless they were outright wrong, either I couldn’t pick up on some or they were targeting larger subreddits. It wasn’t to the point that my conversations felt degraded, at least not at the time that I left.

    The Threadiverse is smaller, and I think that I’ve seen content on one community inspire related-topic conversations on another. I don’t think I recall that on Reddit.



  • PNG is really designed for images that are either flat color or use an ordered dither. I mean, we do use it for photographs because it’s everywhere and lossless, but it was never really intended to compress photographs well.

    There are formats that do aim for that, like lossless JPEG and one of the WebP variants.

    TIFF also has some utility in that it’s got some sort of hierarchical variant that’s useful for efficiently dealing with extremely-large images, where software that deals with most other formats really falls over.

    But none of those are as universally-available.

    Also, I suppose that if you have a PNG image, you know that – well, absent something like color reduction – it was losslessly-compressed, whereas all of the above have lossless and lossy variants.
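
    If you want to see the gap on your own photos, Pillow can write both and a crude size comparison makes the point. A minimal sketch, assuming Pillow is installed with WebP support and that “photo.jpg” stands in for whatever image you have handy:

    ```python
    # Quick, unscientific size comparison: the same image saved as PNG versus
    # lossless WebP. Assumes Pillow with WebP support; "photo.jpg" is a placeholder.
    import os
    from PIL import Image

    img = Image.open("photo.jpg").convert("RGB")
    img.save("out.png", optimize=True)
    img.save("out.webp", lossless=True)

    for path in ("out.png", "out.webp"):
        print(f"{path}: {os.path.getsize(path):,} bytes")
    ```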



  • I would guess that at least part of the issue there is also that the data isn’t all that useful unless it’s also exported to some format that other software can read. That format may not capture everything that the native format stores.

    In another comment in this thread, I was reading the WP article on Adobe Creative Cloud, which commented on the fact that the format is proprietary. I can set up some “data storage service”, and maybe Adobe lets users export their Creative Cloud data there. Maybe users even have local storage.

    But…then, what do you do with the data? Suppose I just get a copy of the native format. If nothing other than the software on Adobe’s servers can use it, that doesn’t help me at all. Maybe you can export the data to an open format like a PNG or something, but you probably don’t retain everything. Like, I can maybe get my final image out, but I don’t get all the project workflow stuff associated with the work I’ve done. Macros, brushes, stuff broken up into layers, undo history…

    I mean, you have to have the ability to use the software to maintain full use of the data, and Adobe’s not going to give you that.


  • I think the first filesystems had a flat layout (no directories), but also had different file types for a library, an executable, a plaintext file. Then there were filesystems where directories could only list files, not other directories.

    The original Macintosh filesystem was flat, and according to WP, used for about two years around the mid-1980s. I don’t think I’ve ever used it, personally.

    https://en.wikipedia.org/wiki/Macintosh_File_System

    MFS is called a flat file system because it does not support a hierarchy of directories.

    They switched to a new, hierarchical filesystem, HFS, pretty soon.

    I thought that Apple ProDOS’s file system – late 1970s to early 1980s – was also flat, from memory. It looks like it was at one point, though they added hierarchical support to it later:

    https://en.wikipedia.org/wiki/Apple_ProDOS

    ProDOS adds a standard method of accessing ROM-based drivers on expansion cards for disk devices, expands the maximum volume size from about 400 kilobytes to 32 megabytes, introduces support for hierarchical subdirectories (a vital feature for organizing a hard disk’s storage space), and supports RAM disks on machines with 128 KB or more of memory.

    Looks like FAT, used by MS-DOS, early 1980s, also started out flat, with hierarchical support added later:

    https://en.wikipedia.org/wiki/File_Allocation_Table

    The BIOS Parameter Block (BPB) was introduced with PC DOS 2.0 as well, and this version also added read-only, archive, volume label, and directory attribute bits for hierarchical sub-directories.[24]



  • One thing he has said was that they change how loud some things are compared to how they should be, by which I think he means that they will make certain pitches louder than other pitches – so something like setting a spoon on a glass plate will be loud, but the sound of a low-voiced man talking is quiet, when normally the low voices are the only ones he can hear.

    I don’t have any experience here, but you can probably find yourself a software package – probably webpages out there that can do it – called a “tone generator” that can generate a tone at different frequencies and volumes that would let you check that. Can find the threshold of hearing for different frequencies, given the ability to do that.

    I will say that normally, as one ages, one’s ability to hear high-pitched frequencies falls off. It might be that the hearing aid is aiming to compensate for that, if it’s amplifying higher pitches more than low. Or maybe he’s just used to having lost some high-frequency hearing and now it’s annoying to have that reverted.

    But I don’t know what the current state-of-the-art is. I’d think that one could do the equivalent of what an equalizer does, set response at different frequencies.

    Maybe have a sound that plays a “frequency sweep” that should sound like the same amplitude at all frequencies, to calibrate.
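
    Something like that is easy to generate yourself with nothing but the standard library. A minimal sketch that writes a constant-amplitude logarithmic sweep to a WAV file – the duration and frequency range are arbitrary, and note that constant amplitude is not constant perceived loudness, which is rather the point of the exercise:

    ```python
    # Write a constant-amplitude logarithmic frequency sweep (100 Hz -> 10 kHz)
    # to sweep.wav using only the standard library. Duration and range are arbitrary.
    import math
    import struct
    import wave

    RATE = 44100
    SECONDS = 20
    F_START, F_END = 100.0, 10000.0
    AMPLITUDE = 0.3                      # keep it well below clipping

    with wave.open("sweep.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                # 16-bit samples
        w.setframerate(RATE)
        phase = 0.0
        frames = bytearray()
        for n in range(RATE * SECONDS):
            t = n / RATE
            # exponential sweep of the instantaneous frequency
            freq = F_START * (F_END / F_START) ** (t / SECONDS)
            phase += 2 * math.pi * freq / RATE
            sample = int(AMPLITUDE * 32767 * math.sin(phase))
            frames += struct.pack("<h", sample)
        w.writeframes(bytes(frames))
    ```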

    You only mention Apple ones.

    kagis

    Assuming this is the right thing, it looks like this is the AirPods Pro, with a “hearing aid” mode. They do apparently support a calibration option, and having different response at different frequencies:

    https://www.apple.com/airpods-pro/hearing-health/

    Assuming that he did calibrate them, maybe the process didn’t go well, and it’s mis-calibrated?


  • The average person does not deal with files anymore. Many people use online applications for everything from multimedia to documents, which happily abstract away the experience of managing file formats.

    I remember someone saying that and having a hard time believing it, but I’ve since seen several people say the same.

    https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z

    Catherine Garland, an astrophysicist, started seeing the problem in 2017. She was teaching an engineering course, and her students were using simulation software to model turbines for jet engines. She’d laid out the assignment clearly, but student after student was calling her over for help. They were all getting the same error message: The program couldn’t find their files.

    Garland thought it would be an easy fix. She asked each student where they’d saved their project. Could they be on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. “What are you talking about?” multiple students inquired. Not only did they not know where their files were saved — they didn’t understand the question.

    Gradually, Garland came to the same realization that many of her fellow educators have reached in the past four years: the concept of file folders and directories, essential to previous generations’ understanding of computers, is gibberish to many modern students.

    https://old.reddit.com/r/AskAcademia/comments/1dkeiwz/is_genz_really_this_bad_with_computers/

    The OS interfaces have followed this trend, by developing OS that are more similar to a smartphone design (Windows 8 was the first great example of this). And everything became more user-friendly (my 65+ yo parents barely know how to turn on a computer, but now, use apps for the bank and send emails from their phone). The combined result is that the younger generations have never learned the basic of how a computer works (file structure, file installation…) and are not very comfortable with the PC setup (how they prefer to keep their notes on the phone makes me confused).

    So the “kids” do not need to know these things for their daily enjoyment life (play videogames, watch videos, messaging… all stuff that required some basic computer skills even just 10 years ago, but now can be done much more easily, I still remember having to install some bulky pc game with 3 discs) and we nobody is teaching them because the people in charge thought “well the kids know this computer stuff better than us” so no more courses in elementary school on how to install ms word.

    For a while I was convinced my students were screwing with me but no, many of them actually do not know the keyboard short cuts for copy and paste. If it’s not tablet/phone centric, they’re probably not familiar with it.

    Also, most have used GSuite through school and were restricted from adding anything to their Chrome Books. They’ve used integrated sites, not applications that need downloading. They’re also adept at Web 3.0, creation stuff, more than professional type programs.

    As much as boomers don’t know how to use PCs because they were too new for them, GenZs and later are not particularly computer savvy because computers are too old for them.

    I can understand some arguments that there’s always room to advance UI paradigms, but I have to say that I don’t think that cloud-based smartphone UIs are the endgame. If one is going to consume content, okay, fine. Like, as a TV replacement or something, sure. But there’s a huge range of software – including most of what I’d use for “serious” tasks – out there that doesn’t fall into that class, and really doesn’t follow that model. Statistics software? Software development? CAD? I guess Microsoft 365 – which I have not used – probably has some kind of cloud-based spreadsheet stuff. I haven’t used Adobe Creative Cloud, but I assume that it must have some kind of functionality analogous to Photoshop.

    kagis

    Looks like the old buy-once, offline Photoshop licensing is dead these days, with Adobe shifting to a subscription model under Creative Cloud:

    https://en.wikipedia.org/wiki/Adobe_Creative_Cloud#Criticism

    Shifting to a software as a service model, Adobe announced more frequent feature updates to its products and the eschewing of their traditional release cycles.[26] Customers must pay a monthly subscription fee. Consequently, if subscribers cancel or stop paying, they will lose access to the software as well as the ability to open work saved in proprietary file formats.[27]

    shakes head

    Man.

    And for that matter, I’d think that a lot of countries might have concerns about dependence on a cloud service. I mean, I would if we were talking about China. I’m not even talking about data security or anything – what happens if Country A sanctions Country B and all of Country B’s users have their data abruptly inaccessible?

    I get that Internet connectivity is more-widespread now. But, while I’m handicapped without an Internet connection, because I don’t have access to useful online resources, I can still basically do all of the tasks I want to do locally. Having my software unavailable because the backend is unreachable seems really problematic.




  • Even if there is an issue, it would hit every single website out there. Like, Option 1: the browser works the same way all of the others out there do. Option 2: All of the websites out there adapt to one browser. There is no way that Option 2 is a sane choice.

    How did this happen, anyway?

    kagis

    https://old.reddit.com/r/iphone/comments/syyab1/the_reason_theres_no_back_button/

    The reason there’s no back button:

    To sum up the video, Steve Jobs wanted a back button but human interface designer Imran Chaudhri convinced him that the back button would be unreliable and complicated and create a “trust issue” for users. The argument was that on Androids the back button might perform different/inconsistent actions: return you to a previous menu, a previous app or the home screen.

    Well, having websites individually implement a back button sure isn’t gonna be more-consistent from a UI standpoint.

    EDIT: And I kind of suspect that swiping is probably also less-consistently-used, just because there are going to be programs where one can’t reasonably dedicate swiping to “back”, so you get collisions between “back” and entirely unrelated features.


  • Re: “What happened in 1971?” (Ask Lemmy)

    I don’t think that anything significant happened in that particular year to produce a long-term difference, but computers were becoming a lot more common around that time.

    As countries develop economically, they start out with a large primary sector, sometimes called the “agricultural sector”. That deals with agriculture, mining, fishing. Extracting raw resources from the land.

    As they develop, the size of the primary sector declines relative to the secondary sector, sometimes called the “manufacturing sector”. It takes in raw resources, and converts them into processed goods.

    As they develop still further, the size of the secondary sector declines relative to the tertiary sector, sometimes called the “service sector”. These are services – people do work that doesn’t involve processing resources.

    https://en.wikipedia.org/wiki/Three-sector_model

    According to the three-sector model, the main focus of an economy’s activity shifts from the primary through the secondary and finally to the tertiary sector. Countries with a low per capita income are in an early state of development; the main part of their national income is achieved through production in the primary sector. Countries in a more advanced state of development, with a medium national income, generate their income mostly in the secondary sector. In highly developed countries with a high income, the tertiary sector dominates the total output of the economy.

    The US underwent a good bit of that secondary-tertiary transition in vaguely the timeline that you’re looking at.

    The way a market allocates labor in the economy is via wages. If there is a lot of demand for labor for a given job relative to supply, wages for that job rise. This causes more workers to shift into that job. If there is little demand for labor for a given job relative to supply, wages fall. This causes workers to shift out of that job.

    The US used to do a lot of low-skill assembly-line production, a lot like China does today. It used workers who were coming off farms – just like China has. Forged a lot of the practices involved, in fact:

    https://en.wikipedia.org/wiki/American_system_of_manufacturing

    The American system of manufacturing was a set of manufacturing methods that evolved in the 19th century.[1] The two notable features were the extensive use of interchangeable parts and mechanization for production, which resulted in more efficient use of labor compared to hand methods. The system was also known as armory practice because it was first fully developed in armories, namely, the United States Armories at Springfield in Massachusetts and Harpers Ferry in Virginia (later West Virginia),[2] inside contractors to supply the United States Armed Forces, and various private armories. The name “American system” came not from any aspect of the system that is unique to the American national character, but simply from the fact that for a time in the 19th century it was strongly associated with the American companies who first successfully implemented it, and how their methods contrasted (at that time) with those of British and continental European companies. In the 1850s, the “American system” was contrasted to the British factory system which had evolved over the previous century. Within a few decades, manufacturing technology had evolved further, and the ideas behind the “American” system were in use worldwide. Therefore, in manufacturing today, which is global in the scope of its methods, there is no longer any such distinction.

    The American system involved semi-skilled labor using machine tools and jigs to make standardized, identical, interchangeable parts, manufactured to a tolerance, which could be assembled with a minimum of time and skill, requiring little to no fitting.

    Since the parts are interchangeable, it was also possible to separate manufacture from assembly and repair—an example of the division of labor. This meant that all three functions could be carried out by semi-skilled labor: manufacture in smaller factories up the supply chain, assembly on an assembly line in a main factory, and repair in small specialized shops or in the field. The result is that more things could be made, more cheaply, and with higher quality, and those things also could be distributed further, and lasted longer, because repairs were also easier and cheaper. In the case of each function, the system of interchangeable parts typically involved substituting specialized machinery to replace hand tools.

    So when that transition happens, what you’re gonna see is manufacturing industry going into relative decline. That’s gonna mean reduced relative demand for labor, and it’ll put downwards pressure on wages.

    The US still manufactures a fair bit of stuff in dollar value – more than in the past in dollar terms, though the manufacturing sector is smaller as a relative percentage of the economy. However, the manufacturing processes it uses today rely heavily on computerized automation rather than a lot of low-skill labor on an assembly line, and they employ comparatively few people to manufacture a lot of stuff – high productivity, a lot produced in dollar terms per person employed.


  • Well, it depends on what you’re aiming for.

    My experience has generally been that if you try to have a conversation in an unmoderated environment, there is a very small percentage of people who enjoy derailing other people’s conversations. Could be just posting giant images or whatever. And it doesn’t take a high percentage to derail conversations.

    There are places that are more-hands-off that do have communities. I guess 4chan, say – not the same thing, but there are certainly people who like that.

    But, in any event, if you want to have a zero-admin, zero-moderator discussion, you can do it. Set up an mbin/lemmy/piefed instance. State that your instance rules are “anything goes”. Then start a community on it and say that you have no rules and give it a shot.

    I tend to favor a probably-more-hands-off policy than many, but even with that, I think that there are typically gonna be people who are just going to try to stop users from talking to each other.