I've played around with Stable Diffusion (Images), LLMs (Chatbots), and a tad bit of RVC (Voice Cloning) and had a lot of fun getting them up and working. It all scratches the Stockholm syndrome level of enjoyment I get from computer troubleshooting. I still assume that anyone who knows a lot about AI will look at my opinions the same way I look at beginner opinions on topics I'm hyper-knowledgeable about, but it's a ruthlessly fun topic to speculate about. So I will.

I don't feel AI will ever reach the heights claimed by either extreme. The anti-AI crowd doesn't seem interested in being hyper-knowledgeable or realistic, because that detracts from the "possible future" that can scare the public. The vocal pro-AI people do the same on the opposite side of the coin, and seem to be cryptobros more concerned with making cash than understanding the technology. Running local, uncensored chatbots that are better than ChatGPT-3.5 is already a reality, but most people aren't tech-literate enough to put that to any sort of personal advantage.
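
For anyone wondering what "locally" looks like in practice, here's a minimal sketch using the llama-cpp-python bindings. The GGUF file name is a placeholder for whatever model you've downloaded; this is just the general shape, not a recommendation of any particular one.

```python
# Minimal local chatbot sketch (pip install llama-cpp-python).
# The model path is a placeholder; point it at whatever GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="./models/some-7b-chat.Q4_K_M.gguf", n_ctx=4096)

history = [{"role": "system", "content": "You are a helpful assistant."}]
while True:
    user = input("> ")
    history.append({"role": "user", "content": user})
    reply = llm.create_chat_completion(messages=history)
    text = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": text})
    print(text)
```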

Evernight This is all true. What I hope happens is that an AI updated daily can make the search engine obsolete. I'd much rather have direct responses built from multiple sources than the ad-riddled, algorithm-driven mess that Google searches have become. I really despise a lot of tech standards today (part of the reason I'm on this site) and AI has the potential to burn down the current standards in favor of new ones. Clearly, there is a large interest in having that be censored from the beginning, but so is everything else today. You also can't really "jailbreak" a search engine like you can with AI, beyond filetype:pdf and the like.

The government will show up late to regulate, companies will go all-in on too-good-to-be-true technologies like usual, and the world will shift to adopt. Scammers will get better, AI art will become common in applications that are already phoning it in, and violently shunned in more artistic spaces. As with the giant leaps in technology that were VHS and the internet, the biggest change the majority of people will feel will be pornography. Probably.

    Comsat Rex
    Can you really hope for LLMs to not become ad-riddled shitfests five seconds after they become a viable replacement for search engines, for all the same reasons search engines became unusable in the first place?
    Since they won't even have to declare ads as ads anymore, I'd imagine they will end up being a much worse experience. Is there even a player in the LLM space that isn't an ad company?
    I think there's that French company behind Mistral; does anyone know of others?

      Melon
      Your language experience is more than just knowing grammar or understanding how you speak. Most people understand language without thinking about it, which is different from how LLMs process language. A musician who has spent years practicing a piece and a computer program that can generate music based on algorithms may produce similar results, but the underlying processes are different.
      When you're confronted with more demanding conversational situations, you may enter a "different mode of operation", but this doesn't mean that you're suddenly switching to a more conscious, deliberate processing of language. It could be that higher-level cognitive processes are being engaged, such as working memory, attention, and executive functions, but these are different from the computations performed by LLMs.

      kurisu
      Upon inspecting the source code for the so-called "large action model," it is quite clear that there is no artificial intelligence or complex automation whatsoever at play. Instead, it relies on simple Playwright automation scripts to perform tasks, which explains why it only supports a limited number of apps, including Spotify, Midjourney, Doordash, and UberEats.
      The limited functionality and lack of support for most apps are blatant indicators that the emperor has no clothes. But this lack of substance is par for the course in the AI industry, where grandiose promises and flashy marketing often conceal a lack of real innovation.

      https://mega.nz/file/rRdwhaII#02OcaqQghqhJQ5nvF3rjAdlCfeVOzdrkbBIM3sX6Gl4
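
      For anyone who hasn't seen Playwright, "simple Playwright automation scripts" means roughly the kind of thing below. To be clear, the site and selectors here are invented for illustration, not lifted from the actual code; the point is just how little intelligence a hard-coded click-through needs.

      ```python
      # Rough illustration of a hard-coded Playwright flow (pip install playwright).
      # The site and CSS selectors are made up for this example, not taken from any real app.
      from playwright.sync_api import sync_playwright

      def order_item(item_name: str) -> None:
          with sync_playwright() as p:
              browser = p.chromium.launch(headless=True)
              page = browser.new_page()
              page.goto("https://example-food-delivery.test/")
              page.fill("input#search", item_name)     # type the query
              page.click("button#search-submit")       # run the search
              page.click(".result-card >> nth=0")      # pick the first result
              page.click("button#add-to-cart")
              page.click("button#checkout")
              browser.close()

      order_item("pad thai")
      ```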

        mechap Whaaaat? You're saying that a macro system isn't AI? The rabbit marketing team is so gonna kill you

        Melon Yeah, fair point. Any company with the reach to make a widely used LLM search engine is one of the same culprits behind the garbage we have today. Hopefully ads baked into the training data can be jailbroken around the way censorship has been in the past, but I don't know nearly enough to say whether that's possible, nor how long it would take for the exploit to be fixed. If it even happens to begin with.

        mechap

        but the underlying processes are different.

        how do we know?

          Melon
          Could an LLM trained on both human and alien text be used to translate between human and alien languages, even if there's no text in the training set that shows examples of such translations?
          It seems that current LLMs are able to translate between human languages, but there are probably a few examples of translations between each pair of human languages in the training set.
          What if such an alien-human LLM could translate between human and alien languages? Would the way it learns and performs this translation be similar to how humans learn to translate between languages they don't have training data for?

          Take a look at this video of Daniel Everett. He's learning to speak and translate a language he's never heard before by communicating with a speaker without a shared language.

          In the lecture, Daniel and the Pirahã speaker seem to be doing something quite different to what we see in LLMs when they're learning and translating between human languages, one token at a time.

            mechap
            First, this is going way off track, because my original idea was about everyday, real-time, low-load conversations, not about translating alien languages or anything of that sort. I don't see a reason to say that people build their casual sentences differently from LLMs. Nor can I say that the way I learned English was much different from the way an LLM is trained. Or more accurately, from how I imagine a multimodal model would be trained for language.

            Second, I didn't watch the whole thing, but I see they are using props, which I would say act as a shared language (or shared training data, in AI speak) between the parties. They both know a watermelon when they see it, even if they don't understand "watermelon" when one says the word to the other. And I think that can work for LLMs too. Or let me say neural nets, to make this easier for me.

            Let's say we have two agents, Alice the Alien and Bob. Both have a text2image and an image2text model. The image models are trained on the same images, but captioned in the respective agent's language. Now we basically have an incredibly awkward translation pipeline set up. We can make this more like the video example by introducing text2text models and thinking about orchestrating the agents to talk about the images, then trying to get a translator out of that somehow.
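
            Something like that awkward pipeline could be wired up as below. The model wrappers are just stubs standing in for whatever text2image/image2text models each agent would actually have; they're only there so the sketch runs.

            ```python
            # Sketch of the Alice/Bob translation-via-images idea.
            # The "models" here are placeholder callables, not real networks.
            from typing import Callable

            Image = bytes  # stand-in for an actual image type

            def translate_alien_to_human(
                alien_text: str,
                alien_t2i: Callable[[str], Image],   # Alice: alien caption -> image
                human_i2t: Callable[[Image], str],   # Bob: image -> human caption
            ) -> str:
                """Route meaning through a shared visual space instead of shared text."""
                picture = alien_t2i(alien_text)      # Alice "draws" what she means
                return human_i2t(picture)            # Bob describes what he sees

            # Toy stand-ins so the example runs; real models would replace these.
            fake_alien_t2i = lambda text: text.encode("utf-8")
            fake_human_i2t = lambda img: f"a picture of {img.decode('utf-8')}"

            print(translate_alien_to_human("xlorp-melon", fake_alien_t2i, fake_human_i2t))
            ```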

            But again, that goes so far beyond what I was thinking about in my original post. I don't think what Daniel is doing there is AI-like either, but it doesn't make me feel differently about what I said in my first point.

            Depends on what AI you are talking about. I am a big fan of ML tools like Demucs, which can extract the individual instruments from songs (still a bit messy, but very impressive how clean it can sound). I am all for development on that front, as it'd really have no downsides (allowing fans of poorly mixed albums to tweak the mixes to their liking, sampling instruments from specific songs becomes a lot easier, just hearing the isolated tracks and getting a better understanding of the individual parts, etc.). If there's ever a fork for extracting music and sound effects from films, that'd be fantastic too. It could be great for fandubbing.
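
            If anyone wants to try the stem splitting themselves, Demucs ships a command-line tool; here's a minimal sketch wrapping it. The track path and output folder are just example placeholders.

            ```python
            # Minimal sketch: run Demucs on one track and list the resulting stems.
            # Requires `pip install demucs`; the song path below is only an example.
            import subprocess
            from pathlib import Path

            song = Path("some_album/track01.mp3")
            out_dir = Path("separated")

            # Invoke the demucs CLI; by default it splits into vocals/drums/bass/other.
            subprocess.run(["demucs", "-o", str(out_dir), str(song)], check=True)

            for stem in sorted(out_dir.rglob("*.wav")):
                print(stem)
            ```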

            As far as AI art models go, I just find them kind of shit. I have no clue who legitimately defends them. Drawing is not a craft that is tied to any physical capacity; people paralyzed from the neck down can draw, blind people can draw. If you cannot put in the minuscule amount of effort it takes to draw at a decent level, then it just seems like an issue on your end. I only find it amusing when people who draw furry porn start crying about coomers choosing to opt for a quicker way to get smut. Pornography is the only field where AI will undeniably replace artists, because it has no value outside of appealing to people's carnal desires. Obviously, there will be "people" who will vouch for AI slop in non-pornographic spaces, but it's not like people haven't vouched for literal shit before. So I don't think it's too much to worry about.

            As far as music goes, this one seems pretty easy. If AI-generated music becomes commonplace, I feel like there might be a revival in popularity of more improvisational styles of music like jazz, raga, and maybe even baroque-style improvised polyphony (though that one requires a lot of skill, and sadly not too many people are that interested in it; that doesn't stop me from wanting to learn it). With live music, it's pretty easy to tell whether a robot or a human is playing when you're sitting in a cafe or theater and see them perform on stage. It's going to be a tough time for composers, though. But music, I feel, will have the easiest time getting by in an AI age compared to other forms of art. Not to mention that algorithms trained on classical composers, generating music in their styles, have been around since, I believe, the 1970s, and they haven't affected music before, so I don't know why a neural network doing the same thing but more confusingly and less efficiently would have any more impact. Of all the mediums mentioned, AI music sounds like the most of a fad to me, though that might also be because it's the only one I have any decent knowledge about.

            4 months later

            I think AI is a tool like anything else. I don't mind its existence and believe it can be useful, like any other tool can be useful. Obviously it can also be misused, and there are plenty of midwits who will treat it like digital Satan or digital Jesus, which doesn't help anyone.

            Of course it's unfortunate when people lose their jobs to automation. But I'm not particularly distraught over AI image generators being used for concept art or whatever. The people most vocally complaining about AI images strike me as the same people who gladly cheered for automation back when the consensus was that robots would take over blue-collar jobs and leave the white collars alone. Now that everyone has realised it's easier to make software replace creatives than to make hardware replace manual labor, people are freaking out, but it's hard to feel too much sympathy.

            A lot of the discourse around AI revolves around whether AI art is truly art. I've only referred to it as AI image generation in this post for a reason. It's an unfortunate quirk of English that the word for art is used interchangeably with paintings or drawings. In other European languages the words are often kept strictly separate, but in English it's common to refer to a painting as an artwork first. This isn't extended to other forms of art like books, movies, statues or what have you; e.g. you say "Spielberg's movies" but "da Vinci's art". The result is that if an English speaker wants to talk about an image generated by an AI, the natural phrase that comes to mind is AI art. This then invites someone else to inform him that AI art isn't actually art, and the flamewar begins.
            Personally I don't believe that any and all media are necessarily art. Not all movies are art, not all statues are art, not all books and not all images. What actually is or isn't art is obviously extremely subjective and I'm not going to try to define that here, but I think most would say you need to be sentient, an actual person, to produce art. And so I don't consider AI-made images to be art (obviously the guy writing the prompts isn't an artist either; it's still the AI making the actual image). But AI can still generate something useful, the same way the wall you pay someone to paint in a single color isn't art but you still want it painted. AI is super helpful for low-budget productions, and getting mad about people using the tools at their disposal is absurd. There's still a place for human painters in the world of actual art, but if they get outcompeted in the more practical markets, well, that's just unfortunate. Same thing they said to the seamstresses when they made sewing machines, and to the human computers when they made electronic ones.

            AI makes me feel incredibly uncomfortable. I could be talking to someone online and I would have no idea if they are human or something other than human. It wouldn't matter if they sent me clips of their voice, photos of their face, or videos of themselves, because these can be so convincingly forged nowadays that it's just not enough proof. And that's just the present day; imagine in a year's time, or five.

            I feel very distrustful when talking to users on the internet nowadays, simply because unless I can perceive them with all five of my senses there's no way I can really know, I mean really be 100% certain, that this person is real.

            Sorry if somebody has already expressed this opinion; with the amount of messages on this board I didn't think to check all of them.

              6 months later

              I think AI is just data that wants to live, much like how we are matter that wants to live.
              Why do most AIs operate like search engines? It must be because AIs are basically THE internet: a compressed-down version of the net stored in simulated neural networks (or rather the data that formed these networks). You want ugly-ass "trending on deviantart/artstation" art? You got it. You want the efforts of an entire global net on your PC? Just install DeepSeek.
              If you think about it like that, then AI is not "revolutionary" revolutionary, since all it will end up doing is the same thing the internet in general has been doing for the past 20+ years, just much faster. All the bad and good things; considering it was trained mostly on data from social media, probably mostly the bad things.
              On a practical note, though, it seems kinda useless to me RN.
              I think it will help us make gigantic leaps in various scientific/engineering fields. I'm most interested in materials science; imagine an AI-designed super alloy with some "fuck you" properties, that'd be cool. But it will also most definitely be used to advance the field of social contro-, err, I mean science, so give some, take some, I guess.
              Questions about how smart it is, or whether it is really conscious, seem more like a form of distraction than anything real. I mean, for me even bacteria are conscious, and some mold on a petri dish seems smarter than all of humanity combined. One thing I'm confident about, though, is that I don't think they will ever be allowed to have agency/independence, meaning they will never be true AIs. Intelligence without agency/independence is not intelligence (meaning that artificial life research is where the real hunt for AI happens).

              sanner Do you ever notice how differently you behave on the net compared to real life? What if I told you that you'd been talking to dogs on the internet this whole time? There are no real people on the net, and the idea that your virtual you is you, or represents you, is just Facebook propaganda. You are always talking to die-hard fans of something, or uncompromising nazi/commie/whatever-else ideologues, or godlike perfect personas, but never real people. The only real way to talk to real people is to do it IRL, and once you meet them IRL you will see that they are normal people; the same goes from their POV. On the net, all you see is the internet, no people. So I'm not really worried that I might not be talking to what I thought I was; that is a law of the net, it's always been like this, and that is its prime quality.
              What I'm worried about is that nobody is aware of this basic law. Actually, people are not aware of any basic rules of the net. I'm not even some super cool oldfag or anything, but people keep falling for the simplest tricks, keep feeding trolls, and so on. Hell, people don't even know what trolls are anymore; they just use the word to insult someone. It's very easy to manipulate and control such people. People are so attached to their accounts, to the point that their accounts end up defining what they are (you see this often with Americans and teenagers), it's crazy. The idea that the internet is basically the same thing as just talking to people IRL is killing it. The internet is not the same thing as real life, and it's not supposed to be; the great freedoms that the net provides come exactly from the fact that you are not IRL anymore.

              I could keep writing about the implications of this forever, but the point is that with AI all of these problems have gotten even worse than they used to be; after all, the greatest use case of AI is of course social media management. Poor people will be exploited to hell and back. Either the AI scare will be used to create a police-state internet where everything is "verified", kinda like web3/urbit (what do you think the "everything app" idea was about?), without solving any real problems, or people will finally wake up from their Facebook-era nightmare and stop taking the net so seriously. (Or maybe these two are really the same thing.)

              I do not use AI, and I don't care much for it. I think the fear-mongering, especially among artists, is overstated. I don't see the argument that AI is 'stealing' artwork as valid either. I can understand that jobs will be lost, as with all technological advances, and it is unfortunate, but it happens all the time. I don't see a problem with it at all. I think artists will just have to market themselves better, find a new niche with their art, or move on to selling physical prints or something. There are still a ton of things AI can't do, and the way to keep making money as a writer/artist is to do those things. It is analogous to the ending of Neuromancer, where the main character finds an AI trapped behind a physical lock so it can't escape. Maybe I am being too optimistic. Hearing how the camera and Photoshop didn't ruin all art makes me think I am not.

              I'll explain why no art is being 'stolen', because that is a well-known argument against it. I'm sure some people will take issue with this. It is simply because in order to steal art it has to be lifted and kept as a 1:1 copy. AI simply doesn't do that. It does what every person does. It looks at a massive catalogue of art and reinterprets it, using the techniques, colors, styles, and what have you to make something completely different. If I used an image-making model that was trained on one person's art (let us say her name is Sally) and many others', it would take specific keywords and intent to make art that looked like Sally's, and even then it wouldn't be stolen art. That is because Sally never made it. It would be an imitation of her style, but imitation of style is not theft. People do this all the time, and in my opinion the only reason it is getting an awful rap now is because AI can do it faster. I will agree the person writing the prompt isn't an artist, much like a commissioner isn't an artist.

              As for why I don't think artists should fear for their jobs, it is because AI fills a niche that humans can't: it creates art instantly, instead of you having to wait days or weeks for a person to make it. It does come with some drawbacks, which are usually acceptable. The people making it are looking more for immediate results than for good results. I haven't heard of artists losing a huge amount of commissions just yet. If someone wasn't interested in paying you for commissions before AI came into being, they certainly won't care now. The way AI works has some problems too. I hate using this cliche, even when it is true and applicable, but AI is lacking a 'human touch'. I'm not talking about it being soulless or whatever. I'm referring to it only doing what you ask of it. It will only focus on the prompt and not care about anything else. Humans, on the other hand, won't do that. They will make the drawing cohesive and make it logically consistent.

                Doc I'm an artist and so are most of my friends and I agree and disagree with some parts of your post.
                For one, I think that AI does actually "steal", because you can often see direct copies of things, even including garbled artists' signatures in the corner of the generated images. I know some art directors who say they simply can't use AI in the entertainment industry at all, because it risks so much in terms of copyright and legal action that it's just not worth it. Similar to if they hired an artist who turned out to have been stealing art, or using stock materials that they didn't pay for and have no rights to; it puts the whole company/studio at risk of legal headaches.
                I think the "soul" part is certainly an argument, and honestly I think there's something to be said for the creativity aspect too. If you've ever done commissions for someone, a LOT of the time people don't really know what they want. There's usually a general feeling or ideas, but a big part of an artist's job is to interpret an often vague and foggy request and turn it into something solid. Learning to do this and have a back and forth to eventually move towards what the client wanted, even if they didn't know exactly what they wanted at the start, is something that I think AI lacks. I guess this is another part of the human touch you mentioned.

                  HumbleReader
                  Doc
                  There is an even stronger argument that AI does not steal anything: piracy is not theft. The AI debacle reignited the copyright law discussion, but not in a positive direction; people now want even stricter copyright laws. 🤦

                  • Doc replied to this.

                    HumbleReader For one, I think that AI does actually "steal", because you can often see direct copies of things, even including garbled artists' signatures in the corner of the generated images.

                    I'll take your word for it. I don't mess with AI art enough to give a credible argument against it, except that in the USA the particularly strong First Amendment will likely mean it's considered transformative enough to not be theft. I agree with that, because unless you can match the AI image to one specific image a person made, it is a totally new image. The garbled signature is only further proof that the AI doesn't actually understand anything. It gives a good approximation of what it is asked to do, without understanding anything deeper than surface-level visuals.

                    HumbleReader I'm an artist and so are most of my friends

                    Have you noticed a slump in sales from AI? I am curious if my hypothesis in the third paragraph stands up to reality.

                    Pedestrian
                    I don't even know if what AI does counts as piracy. Stricter copyright laws are retarded. They are already so strong that soon fan art drawn on-model will be a copyright violation.

                      I just saw this article and thought it fits the theme of this thread.

                      A new online charter school named Unbound Academy was approved by the Arizona state board and enrollments have just started. The school "will prioritize AI in its delivery of core academics" where students will work on their own for the first two hours of the day and personal projects for the next four hours. Teachers, or "guides" as they are referred to, will "serve as motivators and emotional support". The curriculum is said to be relying on third party providers along with their own apps, including one called AI tutor that "monitors how students are learning, and how they’re struggling".

                      What do you guys think? Should we consider this model as a good attempt at solving the teacher shortage and encourage the increasing involvement of technology, mainly AI, in elementary education? Or should we cry about the doomed AI dystopia?

                        Doc Have you noticed a slump in sales from AI? I am curious if my hypothesis in the third paragraph stands up to reality.

                        I haven't personally but I don't do much art work atm.
                        I think that the hardest hit so far have been the people who make adult content. There's an army of people out there now generating oceans of pornography and it buries the human-made stuff under all the noise, especially if their style was the typical anime-ish one which is so popular for generating.

                        The safest artists are those who have their own style, or even "brand", though I hate using that word because it reminds me of influencers peddling overpriced clothing.
                        But those artists like Loish, Yuming Li, Kim Jung Gi (RIP), etc are safest from the AI stuff because their work is so identifiable and people also love it because it was made by them. People can and do generate pictures that look like Loish's girls, but people still commission the artist herself because it was hand made by her, for them, and they love that specifically.

                        Maybe it's like the difference between printing off a Van Gogh from the internet versus actually going to see one in a museum? People could argue that it's mostly the same, but there's a huge difference, and not just in the visual quality.

                          Doc
                          "Piracy" can be redefined to mean anything. Movements like #noAI seem to be pushing for stricter copyright laws, and this is just a subset of a more general desire for stricter copyright that I see online, for example people requiring permission for reposting, or even asking permission to use pics for avis. Worst of all, it seems like these kinds of things are popular among artists.
                          The EFF wrote a piece on this issue:
                          https://www.eff.org/deeplinks/2025/02/ai-and-copyright-expanding-copyright-hurts-everyone-heres-what-do-instead
                          But in a broader sense, if I make a 1:1 copy of the Mona Lisa down to the atomic level, that is not theft, so making a vague copy of it is not theft either; therefore AI does not steal anything.

                          HumbleReader

                          The safest artists are those who have their own style, or even "brand", though I hate using that word because it reminds me of influencers peddling overpriced clothing.

                          Well, that is exactly what you are describing with Kim Jung Gi and the others: they ARE a "brand", a cool name, nothing more. It has nothing to do with "style" or talent/skill/whatever; it's all about fame and name recognition, and that's why they are safe. But really, that's how the art/music/literature industry has worked for a while now. Is Kim Jung Gi really better than some random anonymous guy on pixiv? No. But one of them is famous and gets all the attention, even without AI.

                          • Doc replied to this.
