Not reading the whole thread, but basically some work can be automated and some can't. In my line of work, one of the things I do requires immediate adjustment and feedback from whoever I'm working with, so it's a two-person job and can't easily be replicated by machines, because frankly it would take them too long to do what I can do in seconds. That is what it means to have a "professional" skill. People who are worried about AI overtaking their worth should devote themselves to new skills, or find out how AI can support them if need be.

I don't like AI because the common man doesn't understand what it really is; they think it's autonomous rather than just an index of compiled information being spit back out. They give it sentience in their own minds, but it has none. The problem is that most people are simply too ignorant, or stupid, or whatever word you want to use, to see beyond the illusions of the world. They are akin to spectators at a magic show who believe the lady was truly sawed in half. They cannot help but stare and gawp in amazement because they don't understand how the trick works.

Like all tools, AI should be used with moderation and only as a supplement. You cannot rely on it for everything. That is just carelessness.

a month later

Sorry to bomb an old thread, but you guys might find this article on how ChatGPT can be better than humans at being empathetic and providing basic therapy interesting.

    gingermilk No worries about old threads, I think most threads on this site can't really be necro'd, aside from maybe ones with specific dates.
    Interesting article.

    I think part of it is that the way people interact with the GPT bots is different from how they interact with regular people; there's an element of problem solving expected if the bot doesn't quite get it, so you allow yourself to rephrase things more clearly. Whereas you might get frustrated with a therapist more quickly if they don't 'get' it.
    The other part, I think, is just where it's superior: being able to basically pick and choose from billions of posts, probably a lot of which are people consoling each other or offering ideas. It's pretty cool imo, if a bit spooky. It'll struggle with keeping track of many convoluted threads, just like real therapists do, though I suppose it can have an unlimited memory if kept to the same conversation history.

    For technical things I've found I've had to re-refer to our own conversations a few times to get the relevant reply, so I do wonder how that goes in a more emotional conversation.

      Lumeinshin Yeah, I definitely agree. In terms of more emotional conversations, it is usually hard to find someone who is patient, emotionally intelligent, and, let's say, affordable enough to pay attention to your vents and provide adequate feedback that wouldn't come across as offensive or careless. On the venter's side, the sense of guilt from treating the other person like an emotional trash can could be overwhelming. It might be frustrating to keep getting irrelevant replies, but just as you said, we tend to be more lenient with machines, which we assume to be incapable of being judgemental.

      gingermilk
      It's strange how people's reception of their treatment dropped once they learned it was 100% from AI. Having a human touch makes it okay, but without that slight touch it becomes a very negative thing. I wish they had given examples showing what a response touched up by a properly trained therapist looks like compared to a purely AI-generated one.

        Fantasy This is giving me some stupid and unethical ideas. We could set up an experiment to see what percentage of people can distinguish between responses given by AI and by certified therapists. But instead of giving them both, we would give them only AI-generated ones. If people couldn't spot the deception, we could take the responses that a majority of participants labeled as coming from therapists and use them to further train the AI, narrowing the range of possible responses and finding the optimal pattern.

          gingermilk
          That makes me think of the experiment that Facebook did where they decided to only show people posts considered negative to see if the users got depressed. They did.

          gingermilk Well, here's hoping they give better advice than the BetterHelp therapist

          Fantasy there's something pretty patronizing about a computer giving you mental health advice

          16 days later

          It's good that there's competition in AI now, especially with open-source models. I heard of a study (just check the graphs) showing that GPT-4's performance dropped in many areas over the course of a month. Whether it was due to OpenAI placing limits on what it can say, or to genuine mistakes made while trying to improve something, either way it shows that if we want quality AI we can't rely on one organization.

            5 days later

            I'm not a fan of it, but I don't think it should be made illegal or anything.
            Back when things like image models were just coming out, it was a bit annoying to have to sift through dozens of AI-generated girls, since I just don't see the worth of an AI painting compared to a real human's artwork. But I have no problems now that the same "prompters" isolate themselves in some random corner and just prompt away, creating erotic stories and art; really it's the same as always.

            What I do have a problem with, as many others do, is the people who have the most control over AI models.
            Companies inserting AI everywhere, like Microsoft and Google, can exert greater control over censorship and people's thoughts, with prompts coming pre-adjusted with invisible keywords that make whatever cloud model you're using biased, lying to you to benefit the company hosting it.

            AI training on artists' works without permission is also something I don't like; there's no "progress" or "innovation" in it, just corporations trying to make more profit at the expense of normal people, as they've done for years with the data collection powering their various algorithms.
            We can have image and language models, but the consequences of bringing even simple Artificial ""Intelligence"" like ChatGPT and other models into the economy outweigh the benefits significantly.

              magicrabbit
              I think it's very much a Pandora's box that's open, and any attempts at closing it are gonna be like Adobe trying to add copyright to styles via impersonation laws, which would be infinitely worse than any AI model IMO
              https://nichegamer.com/adobe-urges-lawmakers-to-penalize-individuals-who-use-ai-to-mimic-other-artist-styles/
              (as an aside, jesus christ they use a lot of ads; on the browser I searched for the article on, I had adblock off)
              I think the only people who would benefit from any sort of limitation on imitation would be the big corps and pre-established artists, especially since mimicry is so important in developing skill in art

                Lumeinshin
                Training AI models isn't something that can be stopped, so there's no point in trying; if people can make their own local models, only certain cases will end up being enforced, as with most cybercrimes.
                So personally I'd push for something that penalizes commercializing AI art trained without permission from those whose work is in its dataset, since the result benefits no one other than the individual or corporation that trained the AI in the first place, and may even hurt artists looking for employment by replacing them with more affordable machines.

                silicon_ivory
                I have read and cited this study multiple times already. Thank you for making me seem smart in front of friends and family.

                Work in artificial intelligence has produced computer programs that can beat the world chess champion, control autonomous vehicles, complete our email sentences, etc. More importantly, AI has also produced programs one can converse with in natural language, including customer service (the so-called "virtual agents") and chat assistants. Our experience tells us that playing chess and carrying on a conversation are activities that require understanding and intelligence, so we may ask at what point an entity really understands language. Put differently, does computer prowess at conversation and challenging games show that computers can understand language and be intelligent?
                Woland pointed out that creativity is inherently the essence of genuine consciousness and understanding, and AI appears not to be endowed with reason.
                Assuming that computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and that we cannot get from the syntactic to the semantic just by having the syntactic operations and nothing else, the implementation of a computer program is not by itself sufficient for consciousness or intentionality.

                It seems that AI and robotics will lead to significant gains in productivity and thus overall wealth. The attempt to increase productivity has often been a feature of the economy, though the emphasis on "growth" is more or less a modern phenomenon. Productivity gains through automation typically mean that fewer humans are required for the same output. This does not necessarily imply a loss of overall employment, however, because available wealth increases and that can increase demand sufficiently to counteract the productivity gain. In the long run, higher productivity in industrial societies has led to more wealth overall.
                Classic automation replaced human muscle, whereas digital automation replaces human thought or information processing (and unlike physical machines, digital automation is very cheap to duplicate). It may induce a more radical change in the labour market. So the main question is: will the effects be different this time? Will the creation of new jobs and wealth keep up with the destruction of jobs? Do we need to make societal adjustments for a fair distribution of costs and benefits?

                11 days later

                I agree with what was said above about now being the Wild West era of AI (though I abhor the name "AI" due to how misleading it is). Things are going to be clamped down and heavily centralised and sanitised, just like how the Internet has gone from the early 2000s to present-day. It'll probably be an even briefer period of "anarchy". Therefore you should download your own local clients and make as much smut as you want while you can

                Societally... I don't know. While many jobs will be impacted (perhaps artists most of all), there's a lot that generative AI can't do. It's not truly flexible. It can't really fact-check or discern that something is off. It's very useful, though: a force multiplier, as was said somewhere above. I believe there'll be a time when there's a broad move towards AI-ing a lot of jobs and functions, then people realising they've over-corrected and trying to hire people back once they see the deficiencies of the technology. The tech's definitely gonna improve, but I'm not sure it will be able to overcome the core issues.

                Anecdotally, I've used ChatGPT as a tool for my own personal writing. If I'm not satisfied with a part I've written, my typical approach is to look up examples of how other people might have described such a thing, or, if I recall having read something similar, I'll go dig up the original book and reference the phrasing/metaphors. As a shortcut to the former, I paste the paragraph into ChatGPT and tell it to rewrite the text. Then I mash the rewrite over and over again to get a sense of how else I can approach the description, and I'll crib off it if I see something satisfactory. Sometimes the output is really awful, but the main benefit is that it helps me "de-rigidify" my language by showing how it can be changed.
                Also, sometimes I want a synonym for a concept or phrase that's on the tip of my tongue, and Google is woefully incompetent at providing synonyms for multi-word phrases, so ChatGPT is good on this front too. I don't really use it to correct my spelling/grammar/syntax though; I take personal pride in those.

                I also work as an editor, and it's painfully clear when something is written by ChatGPT (though sometimes I wish the authors would use it; it's unbelievable how poorly some people can write). It seems to have a voice of its own that I can't quite pin down, but I usually recognise it when I see it. Its phrasings aren't quite... organic.

                One last point, on the censorship front: I have a profound distaste for censorship of any sort, so yeah, I hate that the software is being restricted, though I understand why a commercial enterprise would do that. I could at best accept censoring out swear words (though maybe just put asterisks through them), but the way it neuters everything outside the box of political correctness is dogshit. Even something as innocuous as asking it to rephrase a scene where someone gets stabbed results in a lecture about how violence is bad. Stupid moralising on top of censorship is just... infuriating.

                7 days later

                friffri I like AI and ML tbh; for some things, they can be useful tools. However, the fucking hype train is annoying; for some reason companies decided to add GPT and image generation to EVERYTHING. No, I don't need an AI assistant integrated into Windows, Microsoft, I already have ChatGPT. No, generic ERP company, your dashboard still looks like it was last updated in 2010, please hire a designer instead of making the most inconsistent program out there, and don't even think of adding a GPT sidebar to your unstable piece of shit program. No, cryptobro, you will not sell tokens exchangeable for AI credits which you are buying from OpenAI and reselling for more money.

                Sure, GPT is cool, it helps me code sometimes, I don't like it building everything for me, but for explaining legacy code or debugging it can be quite helpful.

                In short, it's overrated. I hope companies realize how absurd it is to say shit like how AI helps you calculate something that is done with a MATH FORMULA. Please SHUT UP.

                  kurisu
                  When I opened this thread I was excited. After all the bullshit hype around I'm just tired of it.

                  This morning at work I got annoyed with two co-workers who kept using ChatGPT for their tasks in a completely mindless way. Normally I wouldn't care what others do, but I had questions about some things, and they kept just putting my questions into ChatGPT and copy-pasting the response, thinking they had done me a big favor, like I'm too retarded to ask it myself.

                  Too bad the generated responses made no sense, talked about settings that don't exist in the software we're using, and weren't even accomplishing what I had asked my colleague about.

                  I don't know how I could have been so dumb as to expect people to use those tools responsibly...

                    friffri
                    I feel like it's just gotten way worse already too. I use the Bing one at work as a joke, because I know it will give the wrong answer.
                    The voice and image models are the things I'm still interested in; maybe I'll run a GPT myself at home in the future, but otherwise I don't care for any of the commercial options.

                    friffri Yup, I feel ya, I used to be excited back in December too, but to no one's surprise it ended up being a bubble.

                    Again, it can be useful for automating simple tasks you already know the process and answers for, but as you said, there are people who ask GPT absolutely everything and call it a day, even for custom requests like specific programs or pieces of data that aren't available to the public (or weren't in 2021). I kinda went from loving AI to realizing how overvalued it is at this point in time. Can it be useful? For sure! But people rely on it too much. It makes mistakes and it makes things up, it assumes things, it relies on intuition, just like a normal person!

                    And I know how controversial AI image generation is outside of these smaller forums and corners of the internet, but all I can say is that if I want a quick visualization of my thoughts, it's useful; if I want proper assets for an actual project, I'd rather pay a real artist who can maintain consistency and will understand my human requests. It's another thing people overvalue.

                    In short, AI is cool, but treat it just like another human. It makes mistakes, it makes things up, and it sends your data back, so it's not your secret keeper either... If you're not sure about something, get human feedback: ask someone who knows, search the internet, ask in forums or read past discussions, but don't use AI for absolutely everything. It's just like a stupid person with lots of knowledge, that's all.

                    Sheepishpatio.net is a forum running on the Flarum software, use of the forum is free. Please enjoy your time.
