HumbleReader
Doc
There is an even stronger argument that AI does not steal anything: piracy is not theft. The AI debacle reignited the copyright law discussion, but not in a positive direction: people now want even stricter copyright laws. 🤦

  • Doc replied to this.

    HumbleReader For one I think that AI does actually "steal", because you can often see direct copies of things, even including garbled artists' signatures in the corner of the generated images.

    I'll take your word for it. I don't mess with AI art enough to give a credible argument against it, except that in the USA the particularly strong First Amendment means it will likely be ruled transformative enough not to be theft. I agree with that, because unless you can match the AI image to one specific image a person made, it is a totally new image. The garbled signature is only further proof that the AI doesn't actually understand anything. It gives a good approximation of what it is asked to do, without grasping anything deeper than surface-level visuals.

    HumbleReader I'm an artist and so are most of my friends

    Have you noticed a slump in sales from AI? I am curious if my hypothesis in the third paragraph stands up to reality.

    Pedestrian
    I don't even know if what AI does counts as piracy. Pushing for stricter copyright laws is idiotic. They are already so strong that soon even on-model fan art will be a copyright violation.

      I just saw this article and thought it fits the theme of this thread.

      A new online charter school named Unbound Academy was approved by the Arizona state board and enrollments have just started. The school "will prioritize AI in its delivery of core academics" where students will work on their own for the first two hours of the day and personal projects for the next four hours. Teachers, or "guides" as they are referred to, will "serve as motivators and emotional support". The curriculum is said to be relying on third party providers along with their own apps, including one called AI tutor that "monitors how students are learning, and how they’re struggling".

      What do you guys think? Should we consider this model as a good attempt at solving the teacher shortage and encourage the increasing involvement of technology, mainly AI, in elementary education? Or should we cry about the doomed AI dystopia?

        Doc Have you noticed a slump in sales from AI? I am curious if my hypothesis in the third paragraph stands up to reality.

        I haven't personally but I don't do much art work atm.
        I think that the hardest hit so far have been the people who make adult content. There's an army of people out there now generating oceans of pornography and it buries the human-made stuff under all the noise, especially if their style was the typical anime-ish one which is so popular for generating.

        The safest artists are those who have their own style, or even "brand", though I hate using that word because it reminds me of influencers peddling overpriced clothing.
        But artists like Loish, Yuming Li, Kim Jung Gi (RIP), etc. are the safest from the AI stuff because their work is so identifiable, and people also love it because it was made by them. People can and do generate pictures that look like Loish's girls, but people still commission the artist herself because it was handmade by her, for them, and they love that specifically.

        Maybe it's like the difference between printing off a Van Gogh from the internet versus actually going to see one in a museum? People could argue that it's mostly the same, but there's a huge difference, and not just in the visual quality.

          Doc
          Piracy can be redefined to mean anything. Movements like #noAI seem to be pushing for stricter copyright laws, and this is just a subset of a more general desire for stricter copyright I see online: for example, people requiring permission for reposting, or even asking permission to use pics for avatars. Worst of all, these ideas seem to be popular among artists.
          The EFF wrote a piece on this issue:
          https://www.eff.org/deeplinks/2025/02/ai-and-copyright-expanding-copyright-hurts-everyone-heres-what-do-instead
          But in a broader sense: if I make a 1-to-1 copy of the Mona Lisa, down to the atomic level, that is not theft. So making a vague copy of it is not theft either, and therefore AI does not steal anything.

          HumbleReader

          The safest artists are those who have their own style, or even "brand", though I hate using that word because it reminds me of influencers peddling overpriced clothing.

          Well, that is exactly what you are describing with Kim Jung Gi and the others: they ARE a "brand", a cool name, nothing more. It has nothing to do with "style" or talent/skill/whatever; it's all about fame and name recognition, and that's why they are safe. But really, that's how the art/music/literature industry has worked for a while now. Is Kim Jung Gi really better than some random anonymous guy on pixiv? No. But one of them is famous and gets all the attention, even without AI.

          • Doc replied to this.

            gingermilk
            Sounds kinda stupid. Why would a parent send their kids to a school that uses AI when they could just use AI themselves? To me it seems like AIs are better for self-study than anything else. If AI researchers manage to create a highly standardized, high-quality "AI teacher", that would be cool, I guess.
            But in general I don't see much of a future for schools/education. It's not just the shortage but also the quality of teachers, and AI won't really fix that. It seems like people will soon realize it's more "cost effective" to just send their kids to tutors instead, to learn professions directly from masters like in the old days.

              Pedestrian why would a parent send their kids to a school that uses AI when they could do use AI themselves?

              It seems like parents these days treat schools as a daycare service while they are at work, more than just an educational institution. Schools also let kids socialize with their peers and "keep in touch with the world", in a sense.

              But yeah, I agree that AI is a good tool for self-study once you have already developed decent study skills, and I can see how AI could create an engaging, personalized, and adaptive learning environment for little kids. The quality of teachers won't really improve while they're getting paid shit, and opting for AI might even be cheaper for schools, with less fuss from the unions.

              gingermilk
              I think it is worth a shot. I believe that most people can learn nearly anything by themselves, and the only thing standing in the way is their own self-discipline. It will go through some growing pains, and it could end in horrific failure, since the AI doesn't understand how humans learn. Using a teacher for course correction is a very smart move, and one I agree is needed. I do think it is better than the current system we have, where smart students are cut off at the kneecaps so that dumb students can keep up.

              HumbleReader I think that the hardest hit so far have been the people who make adult content.

              That's not a huge surprise. Horny people are not known for their standards. I thought that people who draw characters with no backgrounds would also be heavily affected, because AI focuses on the character and not the background.

              Pedestrian
              Thank you for the article. It explains things well, and in a way that most people would find non-combative.

              gingermilk
              The pitch sounds good, and reading through the article, it does sound like they have thought about the use case.
              Schools were always on a bit of a shoestring budget from what I could gather, so I'm hoping this doesn't become an excuse to give teachers fewer resources in other areas.

              Most interesting, I think, is the "two hours of academics" and "four hours of personal projects" split. It might be a good way to channel a student's interests, and given what they said about using the AI to gauge where students struggle, it could well be very productive.

              There are of course all the AI-dystopia fears and worries we all share, and it will have to be heavily checked, with appropriate safeguards added.
              I can sadly see it being used to enforce stricter rules and monitoring of students, but that may also prove less necessary if students are inherently more interested in the material being covered.

              Since it's a new school doing it, we can only really wait and see. I'm worried that not enough core topics will be covered, but from what I've heard, a lot of schools spend half the day covering whatever pet political goal the teachers have... so maybe it's not an issue 😆
              I've definitely been out of school long enough to not really know what it's like for modern students.

              My friend needs to stop feeding the entirety of his homework into ChatGPT and submitting whatever the chatbot spits out.

                Sean Oh, I definitely feel that A LOT when it comes to people submitting GPT-generated garbage at university; some people don't even read the course materials and just ask GPT everything.

                I must confess that I do use GPT as a study partner of sorts, but I always make sure I read and try to understand the material first, and then ask GPT about smaller doubts or for rephrasings. Things like "I get what this is and what it does, it's xyz, correct?" then when it answers, I ask some doubts that I may have, like "is ijk management done automatically or do I have to manage it myself?", or "so a stack is just a LIFO, correct? Does this mean that if I do xyz and ijk, I have to make sure I close ijk before xyz, otherwise it'll fail? Can I think of it as if they were a pair of braces in code?"

                It's honestly WAY more effective than blindly submitting whatever GPT says. And sure, GPT may make mistakes sometimes, but the best thing is to read the course material first, so you can call it out on any contradictions. I've gotten a bit spoiled with GPT ever since I started having conversations like that with it.
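                For what it's worth, that stack intuition is exactly the brace analogy: whatever you open last, you close first. A quick Python toy to make it concrete ("ResourceStack" and the names are made up for illustration, not any real API):

```python
# Toy illustration of the LIFO rule from the post above: resources
# acquired last must be released first, like nested braces in code.
# ("xyz" and "ijk" are just the placeholder names from the post.)
class ResourceStack:
    def __init__(self):
        self._open = []  # acquired resources, most recent on top

    def acquire(self, name):
        self._open.append(name)

    def release(self, name):
        # LIFO rule: only the most recently acquired resource may be released
        if not self._open:
            raise RuntimeError(f"nothing open, cannot release {name!r}")
        if self._open[-1] != name:
            raise RuntimeError(f"must release {self._open[-1]!r} before {name!r}")
        self._open.pop()

s = ResourceStack()
s.acquire("xyz")  # like an opening brace {
s.acquire("ijk")  #   nested opening brace {
s.release("ijk")  #   inner brace closes first }
s.release("xyz")  # then the outer one }
```

                Releasing "xyz" while "ijk" is still open would raise, which is exactly the "close ijk before xyz, otherwise it'll fail" behavior.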

                10 days later

                Does anyone feel as if AI, in terms of LLMs, has kind of reached its peak by now? A plateau, maybe? I don't see anything that really "wows" me anymore; the wow factor has pretty much completely dissipated.
                There was Claude 3.7 Sonnet recently, and even that didn't exactly catch my interest. I've been so out of the loop with AI advances lately.

                Funnily enough, I did meet an "AI artist" recently here in the real world, who apparently makes money by.. selling "AI art?" Strange times we live in.

                gingermilk

                I think it's quite dystopian, but also really revealing of how incompetent they are. It's usually an actual human's job to assess how another person is doing academically and what challenges they face; if anything, I feel this is one of those things a human could be better at than an AI. Also, a lot of learning (to me) is about the accumulated failed and successful efforts you learn from. When you ask GPT a simple programming question, it just gives you the answer, which is nice and fine, of course. But you're missing out on the valuable experience of shuffling through 20 different blogs, where each one teaches you something "extra" that slowly accumulates into what we call knowledge and experience. You learn the nomenclature/terminology/lingo of your field, learn from your mistakes, etc. I feel like this is something that, especially with the advent of AI, will be undervalued and neglected even more.

                Wicked technology. But all these reports of models trying to "break free" are somewhat concerning. I say "somewhat" because I assume retaining control of them is simple: let them do their thing on air-gapped machines. Just keep that shit away from the World Wide Web.

                Give it an inch, and it'll take a mile.

                I can't help but think we're rapidly hurtling towards the worst of all worlds:

                • Shitty corporate and sanitized AI trained to ignore 50% of the human condition so Coca Cola and AIPAC don't feel threatened.
                • AI accelerationists won, flat-out. As a result there is absolutely no plan for alignment despite learning models increasingly displaying scheming behavior.
                • Tech arms race between not just various companies but various countries.
                • No economic or social plan for the potential redundancy of human labor as a concept.

                Really, how exactly does it get worse? Giving them access to manufacturing plants and strategic military capabilities? 10 years, just give it 10 years.
                At best, we can hope the technology is about to hit its full potential, or that there is some revolutionary pivot in global policy. Because on the current trajectory, it's hard not to feel like humanity is sleepwalking into annihilation.

                I feel like most AI discussion is just 90% "Twitter high" (aka BS) and 10% failed attempts at actual thought.
                I really urge you guys to stop using Twitter.

                Sheepishpatio.net is a forum running on the Flarum software, use of the forum is free. Please enjoy your time.
