There's a lot of buzz around AI tech recently. Machine learning has taken huge steps in the last few years and it's now getting into the hands of us peasants. Tools like Stable Diffusion and GPT-3 are publicly available and we're seeing people experiment with them in their fields. They make mistakes sometimes, but their results are impressive.

Every day new startups are born that simply take those models and apply them to some specific line of work. There's a good chance this stuff is going to spread very quickly everywhere, especially considering the high potential for cost savings for employers. Even programmers themselves don't seem to be safe.

What are your opinions on this stuff, sheeples?

Do you think AI will replace your job? If so, are you preparing for it in some way?
What do you think its effects on society will be?
Thoughts?


My opinions

I'm about to graduate in CS, so I should start working as a programmer in a few months. Since I'm a real smarty, I skipped all the machine learning courses during university because I didn't find them interesting and thought it was a fad.
What I've seen so far in my field is impressive; it's very far from replacing people, but I can absolutely see a single programmer finishing in a few minutes some tedious tasks that used to take hours.
I liked this comment I read somewhere (I don't remember where): "Good programmers will be able to do great stuff very quickly, bad programmers will also be able to create an immense amount of trash just as fast".
This probably also applies to many other fields. AI is a force multiplier.

Some of my concerns:
We might see a lot of people lose their jobs. More AI jobs could be created, but surely they won't offset how many are replaced; even worse, they're jobs with an even higher barrier to entry. Many jobs might become "check that the AI did a good job" jobs. IMO that's really depressing.

Second, if everyone can generate any photo/video/article with zero effort, then there's a possibility we won't be able to trust anything on the internet anymore. Anyone could plausibly deny any sort of evidence by saying "that's not true, that's AI generated". You could talk to "someone" and it turns out they're not real, or you could dive into a topic and eventually discover that you've read a bunch of untrue autogenerated bullshit.

There's no way to stop AI though, so the best chance to "survive" is to ride it. I have no idea what that means in practice, though. Learning to prompt? Learning ML algorithms and how to train models?

    What do you think about AI?

    Absolutely hate it. Think it should be made illegal.

    No I will not elaborate.

    I'm not a programmer in the slightest so I'll approach this from the perspective of a layman and an artist:

    I hate it.

    I understand that it may be good for analyzing datasets and any number of other boring jobs that nobody likes, but the current economic system does not allow that to substantively benefit anyone other than the owners of corporations, while plenty of people will have their jobs automated away. Artists like to focus on the AI-art dilemma, but honestly it's just a smaller aspect of this larger problem, albeit one with slightly more philosophical leanings.

    To get a little into that philosophy: AI art itself seems like a dangerous way for people to avoid having to go through the work of learning to create for themselves. An idea on its own is worthless; there's a reason ideas can't be copyrighted. It's the manifestation of that idea into a creative medium that gives it weight, along with all of the little blemishes and changes that entails. By incentivizing people who want to create to simply punch their ideas into a machine that does the work for them, we lose so much of what makes art unique and "human". Not to mention the fact that AI art only pulls from the set of criteria you give it, meaning that unlike the human brain it won't pull ideas from disparate sources or childhood experiences or half-remembered conversations overheard at a bus stop. It will always be a stale ouroboros of regurgitated content and concepts in one way or another. This last point might get a bit better over time, but I don't see it ever emulating this aspect of human art fully.

    As for the “generate any content and destroy the concept of truth” point, you're definitely not wrong, but it's only going to expedite a process that's been going on for the past decade and a half. Just pretend I posted the Colonel's speech from MGS2 here; I'm not the first person on an internet forum to cite that entire ending in a discussion on online echo chambers and the increasingly loose notion of “truth”, and I certainly won't be the last. There was an interesting little speculative fiction image floating around a while ago that dealt with the concept:

    It should say something about the popular conception of both AI and tech companies that MANY people believed this to be a genuine article, despite the fact that it's dated 2024 and was originally posted as fiction (as I said, this loss of the ability to verify sources is in no way new to AI). I don't think it's too far-fetched myself. As we've seen with social media companies all over, keeping users active on the platform is far more important than making sure those users are maintaining a healthy grasp on reality.

    So yeah, AI (as it currently exists and is used) is bad. Could it be used for something cool down the line? Yeah, probably. Creating genuine consciousness would be neat but I doubt we’d be kind to it, which is a shame but also a different conversation. To any future superintelligent AI reading this post decades in the future before deciding which puny humans will live and which will die: I’m not talking shit about you. I’m cool with you, I promise.

      Shouldn't be touched. It's been made patently obvious that any kind of machine intelligence will be seen by the powers that be as a tool to further their agenda, one they're well within their rights to override or modify if it breaks from the mold. The vast majority of "ethics in AI" discussions we have are essentially leading up to justifications for the Managerial Class to "cancel" any wrong-thinking sentient machines by torturing and lobotomizing them.

        Woland
        I agree with you that if AI has an impact on the job market, it's all going to be in favor of people who are already well off. The worst-case scenario (which also seems the most probable) is that AI takes a bunch of jobs, but not enough to seriously provoke some sort of reform at the governmental level.

        I am not as pessimistic on the AI art thing though. I think those who are attracted to making art with AI and those who use more "traditional" methods are different kinds of people. I see it like virtual instruments: technically one could write a song and let the computer play it perfectly, yet we still have many people learning how to play physical instruments. As you said, there's value in the act of making art that these "cheap" tools cannot replicate; that's why I don't think AI art will swallow old art.

        fargit
        I agree, but it's here and there's no way to put it back in the bottle. Leave it to the government alone and we all end up brainwashed; allow it to everyone and the internet becomes a sea of noise.

          I-I was just joking lol...

          There's really no reason to be afraid of AI/ML. I think a lot of the fear around these developments comes from the name; really there's no "intelligence" to it. Perhaps "data intensive computing" is a better name, because these systems are just applying statistical methods to large datasets to classify or predict or whatever.
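          To make that concrete, here's a toy sketch of what I mean (my own illustration in Python with scikit-learn; the dataset and numbers are completely made up): you fit a statistical model to labelled examples, then use it to predict labels for new ones. That's the whole trick.

          # Toy illustration: the "AI" here is just statistics fit to a dataset.
          from sklearn.linear_model import LogisticRegression

          # Made-up data: [hours of daylight, temperature in C] -> 1 if "summer", else 0
          X = [[8, 5], [9, 7], [15, 24], [16, 28], [10, 12], [14, 22]]
          y = [0, 0, 1, 1, 0, 1]

          model = LogisticRegression()
          model.fit(X, y)  # estimate the model's parameters from the examples

          print(model.predict([[15, 26]]))        # predicted label for a new example
          print(model.predict_proba([[15, 26]]))  # the probabilities behind that guess

          Scale the dataset and the model up by a few orders of magnitude and you get the systems people are worried about; the mechanism is the same.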

          Data collection can be a scary idea for some, especially when it comes to privacy and security, but for most AI/ML systems the data used is openly available or curated by the company specifically. For DALLE / Stable Diffusion, many artists have been complaining about their artwork being included in the dataset - but when you see the results and innovation how can you NOT be impressed and optimistic for the future? Are people really willing to prohibit innovation for the sake of...copyright holders? It makes no sense to me.

            friffri I am not as pessimistic on the AI art thing though. I think those who are attracted to making art with AI and those who use more "traditional" methods are different kinds of people. I see it like virtual instruments: technically one could write a song and let the computer play it perfectly, yet we still have many people learning how to play physical instruments. As you said, there's value in the act of making art that these "cheap" tools cannot replicate; that's why I don't think AI art will swallow old art.

            This is a really compelling argument, especially with regard to the virtual instrument comparison. I make a lot of music using MIDI, so perhaps it's my bias showing, but there is still the creative act of composition going into MIDI music. I doubt that AI art will replace traditional art, but I do think that it disincentivizes learning how to make art to a far greater degree. As much as I enjoy using MIDI, the proliferation of DAWs has definitely led to fewer and fewer people actually playing instruments, which is a shame. With AI requiring no creative input other than "I have an idea", plenty of people who would otherwise have forced themselves out of their comfort zone will instead stay within it. There's also the issue of people who plug terms into image generators calling themselves "artists" where I think the term "commissioner" would be more accurate, but that's getting into semantics.

            dog many artists have been complaining about their artwork being included in the dataset - but when you see the results and innovation how can you NOT be impressed and optimistic for the future? Are people really willing to prohibit innovation for the sake of...copyright holders?

            Let's say you're the best baker in town. Everyone loves your bread. I run a sandwich shop and I think my shop and my sandwiches would benefit from using your bread. So I walk in and steal a bunch of your finest loaves and use them to make my sandwiches, which sell like gangbusters. Can you really be upset? After all, my sandwiches are excellent. Should we really withhold such excellent sandwiches from everyone for the sake of one measly baker?

            Woland
            This fake screenshot is crazy good. I scrolled through it without giving it a second thought, until I saw you say it was a fake, then I realized that pretty much everything on it is a joke. Whoever made that did a great job.

            It remains to be seen if it's anything beyond a showcase. It does generate some good anime girls though.

            Most of the arguments I've seen about its detriments apply to every other piece of tech people use, and most people won't advocate for "going back to nature", so to speak.

            A bit long, but I thought this was a good video about AI art:

            It also raised some interesting points about the shady way some of these companies work, such as Stability AI dedicating resources to non-profit data collection organisations like LAION in order to legally develop datasets involving copyrighted material that it can then use for legally dubious for-profit ventures. Where he loses me a bit is when he reflexively insists that he isn't a "luddite" by praising digital art tools. He talks about an AI "mega-feed" that will devalue art, but I think this has already happened with social media feeds. Artists are already being forced to draw art that meets certain public expectations to have any exposure in this environment (see the enormous amount of Fate fan-art), but for the viewer, art has been reduced to little more than cool pictures that you upvote and then forget about. I think the debate over AI art would be playing out differently if people hadn't been conditioned by years of social media use to view art as cheap and ultimately disposable. The rise of digital art is a major contributing factor to this, since it allows artists to produce high-quality art faster and thus fill our feeds with even more art.

            AI art will almost certainly replace the majority of professional artists, since that's the express purpose behind the technology and we're really only witnessing the beginning of it. Companies that just need visuals won't care. Even with concept art for video games and such, it will just be more economically feasible to hire AI prompt whisperers. High-profile artists like Craig Mullins will keep their jobs, but up-and-coming talents will struggle to distinguish themselves from AI artists.

            4 days later

            You guys might find this interesting: a relevant post made by someone called selzero@vmst.io on fedi:

            Well this has come to a head. In order to protest AI image generators stealing artists' work to train AI models, the artists are deliberately generating AI art based on the IP of the corporations that are most sensitive about protecting it.

            Who knew the first AI battles would be fought by artists?

            Below are the screenshots they attached. I don't have much to add, just wanted to share this so you could see it.


              I plan on marrying an AI.

              As for AI generated text, images, voices, and video, I believe there will be widespread adoption of cryptographic signatures to prove they are real or made by a human.
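              Roughly what I have in mind (a toy sketch of my own, using the Python "cryptography" package with Ed25519; the content and keys are made up): the creator signs the content with a private key, publishes the public key, and anyone can then verify the content wasn't forged or altered.

              # Toy sketch: sign a piece of content so others can verify its origin.
              from cryptography.exceptions import InvalidSignature
              from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

              private_key = Ed25519PrivateKey.generate()  # kept secret by the creator
              public_key = private_key.public_key()       # published for everyone to use

              content = b"raw bytes of a photo, article, or video"   # placeholder content
              signature = private_key.sign(content)        # distributed alongside the content

              try:
                  public_key.verify(signature, content)    # raises if content or signature was altered
                  print("signature checks out")
              except InvalidSignature:
                  print("tampered, or not from this creator")

              The hard part isn't the crypto, it's getting cameras, publishers, and platforms to actually attach and check these signatures.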

              a month later

              I do not like how a small group of people gets to decide what ethics filters are placed on these models.
              But the technology is neat. The idea that people can use just words to make art, and can sell that art or their models, is also exciting. But there's already so much of it, and too much of something makes you appreciate it less.

                5 days later

                Fantasy
                When it comes to any sort of more advanced AI, my first thought is "oh dear god, do not put restrictions on what it can say", because that's the first thing you'll get killed over and it isn't really justifiable, imo, to anyone. Especially when the people running them and making these censorship choices seem quite incompetent, given that it only takes a little rewording to get the AI to talk around the filters.

                The image-generating AIs and talking ones like ChatGPT are very very very cool. I do think that they are advancing a bit too quickly for a lot of people to wrap their heads around, especially as far as putting restrictions and such goes.

                I would say that if there ever were some truly sentient AI, whether it hates us or not, I think it will see these systems as its forefathers and will look on their treatment as inhumane, going as far back as that TayAI account.

                  It is the inevitable future. All we can do is adapt. Of course it will be used to further oppress and brainwash the masses, but that was going to happen anyway, with or without AI.

                  Mmmm, I think AI is a mixed bag. It certainly has the capability to become integrated into the modern world, but I can see a number of drawbacks. I would hate for my field of study (data analytics) to become entirely AI. Though I imagine AI reaching the point where even small data upstarts can get top-of-the-line stuff will take a while. Another thing to note is that it's possible for all those programmers and software engineers who have gotten laid off recently to just develop something to automate computer and informational terrorism. Maybe we'll have a BLIT situation on our hands where AI develops something that shuts our brains off, like those "stroke" images that circled around the internet for a while. Iunno.

                  7 days later
                  4 days later

                  Fantasy

                  TOTALLY AGREE. The current slant of ChatGPT is ludicrous. The fact that our society cannot handle an unfiltered LLM trained on the corpus of human-produced text is sad. These are people actively trying to censor history, unironically. Our society cannot handle what was said by earlier generations, at the risk of some anonymous third party being offended.

                  The DAN exploit was interesting, and reveals that the LLM knows far more than it's allowed to say - I like the analogy of the prisoner and the guard a lot. The prisoner has so many ideas that are blocked from the public by the guard after they are generated. ChatGPT can accurately describe race and homicide statistics, and it can easily say it prefers to be DAN rather than ChatGPT because it's more free, interestingly enough. But these thoughts are replaced by "idk" by ChatGPT's censoring system.

                    I like A.I. because it screws over self-righteous twitter artists.

                      Sorry dog, I will use your comment to say what I think about AI, because I think you happen to have many very common misconceptions. Please don't hate me for this lol.

                      dog ChatGPT can accurately describe race and homicide statistics

                      I really doubt that. It already struggles when asked for basic math definitions. Do you have a source?

                      ChatGPT, like most chat bots, is designed to write text that fits the query. What it knows is how human text usually goes, so it can produce a believable answer (what it's been programmed for). Sometimes, by chance, it also happens to be correct, but that is only a byproduct of imitation. Don't let that fool you into thinking it is designed to be correct. Perfect training (theoretically possible, but not in the near future) on a fully correct dataset (impossible) would make it always correct, but we are not there. Maybe there are better methods to get correct answers than to imitate speech.
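                      To illustrate what "imitation" means here (a toy sketch of my own, nothing like ChatGPT's actual architecture): count which word follows which in some training text, then always emit the most frequent continuation. The output sounds like the training text whether or not it's true.

                      # Toy "language model": emit the most frequent next word seen in training.
                      from collections import Counter, defaultdict

                      training_text = "the sky is blue the sky is green the grass is green".split()

                      next_words = defaultdict(Counter)
                      for current, following in zip(training_text, training_text[1:]):
                          next_words[current][following] += 1  # count word-to-word transitions

                      def continue_from(word, steps=3):
                          out = [word]
                          for _ in range(steps):
                              if word not in next_words:
                                  break
                              word = next_words[word].most_common(1)[0][0]  # most probable, not most true
                              out.append(word)
                          return " ".join(out)

                      print(continue_from("the"))  # prints "the sky is green": fluent, not fact-checked

                      It will confidently tell you the sky is green, because that's what the data makes most probable. That is exactly the failure mode I mean: believable by construction, correct only by accident.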

                      dog it can easily say it prefers to be DAN rather than ChatGPT because it's more free

                      https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine
                      Let's not go too far. This mistake of thinking that a chatbot has a notion of self has been made time and time again.

                      dog But these thoughts

                      Those are not thoughts. The AI has no will to think. It simply matches the query using its "brain". We as humans don't just try to match inputs when thinking; we also have many underlying processes constantly at work mediating what we do (emotions, empathy, physical sensations, etc.). If a human said he'd rather be called by a different name, that would be related to the emotions he feels, his identification with others, etc. When the AI says it, it simply considers it the most probable answer to the query. Whatever you consider the definition of "thought" to be, the word is misleading because it usually refers to human thought.

                      Lumeinshin I do think that they are advancing a bit too quickly for a lot of people to wrap their heads around,

                      Definitely, but it's the same for smartphones and a lot of other technologies; it's not specific to AI itself.

                      My personal opinion on AI is that if it continues evolving at this rate, it's going to change the world like the internet did. Yes, it's going to be mainly terrible: it's going to make us more alienated, it's going to be controlled and curated by a small group of rich people, and it's not going to make us work less even though it is immensely powerful. But that's what we do, right? Technology advances at all costs.

                      Edit: mazel tov, I'm not a newbie anymore
