Cure
Thanks for answering! And yeah, I'm in the process of learning C++ myself; I mostly write C++11 because I lack experience in the newer standards. Although, to be blunt, I'm lacking very much in the mathematics department, so I'm essentially self-studying math through the internet. I was never a very smart student academically to begin with.
Gratz on the contract and job

2 months later

A wave of 'AI assistant' products has started releasing, including the Humane AI Pin and the Rabbit R1, which have promised a lot.
They aim to answer queries, set tasks, and do anything else a virtual secretary might do.
I think we'll see these devices improve over the next few years, but until they are either fully self-sufficient or work via a connection to a phone, I (and everyone else, it seems) see little point in them.

Humane Pin in use with its "tap to use" nature

Marques Brownlee, a pretty popular tech YouTuber, has had his hands on the device and isn't a fan... not entirely shockingly.
Youtube Link to his video

I do think they're interesting in that they're an angle people haven't quite taken with AI yet. Do you see these products ever getting into "worth using" territory?
If nothing else, a good one would at least fit the Star Trek badge category and have a bit of neatness for that alone.

On the flip side, what are we naming the rooms in our houses that we're going to turn into Faraday cages 😆

    Lumeinshin
    Just from reading your post, isn't that what Siri was supposed to be 10 years ago?
    Anyway, it seems a bit overindulgent to me to have my voice command sent across the planet to be processed in a whateverflop-tier datacenter,
    just to get a calendar entry synced back to my phone.

    A voice-only assistant will never work in that form factor, because a lot of people don't want to be shouting at their shirt in public.

    I think it was a pretty dumb idea from the beginning. I remember everyone asking "why wouldn't I just use my smartphone?" and there really isn't any reason for this thing to be a dedicated device instead of an app.
    Get a shirt with a pocket and put your smartphone in it, if you really want it to be constantly monitoring your surroundings.

    I'll give it to them that the projector is a neat gimmick though.

    Also, I just noticed that I opened this thread in 2022! Time passes fast...

    Lumeinshin I've seen many people shitting on these, especially the humane AI pin. But honestly? I like the concept.

    The execution is absolutely terrible, of course. AI still hallucinates, and you can't rely on it for precise or day-to-day tasks that depend on an external data source, such as current events or news. It's practically an unusable product/service. You talk to it, you wait in an awkward silence, and you get an unreliable response that you might have to double-check. It's completely awful, and I don't see why anyone would use this today. Tomorrow, though? I think there's potential for it.

    I can see the market for non-intrusive day-to-day devices that help people do everyday tasks or live their lives while answering those small questions that pop into our heads, without having to go out of their way by stepping aside in the middle of a walk, unlocking their phone, and typing out the question. It's like having a normal person with you; it feels natural! I love it, to be honest.

    The projector looks like a nice feature too! Sure, it's pretty bad in its current state, but if I could see my next appointment, read a quick message/email, or check the weather in it, I'm sold! I don't want to keep talking to it out loud in a public place, or even worse, a quiet public place like a train. It'd be cool to just check basic info in it, again, without having to open your shiny addictive brick with 100 ways to keep you hypnotized.

    Would I wear one? ...not until it goes open source. I'm not letting them put a camera on me 24/7 (he said cluelessly, while he had a smartphone in his pocket). Jokes aside, it might not be my thing right now, but I absolutely dig it.

    ...except for the overhyped AI part.

    And about the Rabbit R1? No comment

    I don't think AI is inherently bad. In fact I've used AI-generated images to help with getting poses right while drawing hand-made art, and chatbots can be pretty fun.
    But I am somewhat afraid that places like the anime industry will use it improperly by cutting corners. There are certain things that need a human touch.

    AI (or rather, any given LLM) is only bad when people don't understand what it is and what it's good for.
    What it is: a revolutionary data mapping and search algorithm.
    What it's good for: searching data it's trained on.
    If you think about it, you realize it's good for a lot of things, but none of what is typically associated with "AI", i.e. genuinely dynamic activities where immediate decisions and unpredictability are involved. It can be a great phone-operator replacement or even a "factory worker" within bounds, because it can be trained on any data landscape as long as it has reasonably narrow confines. This is why real self-driving cars are unlikely to be a thing for now, in my opinion, unless they have specific routes (like taxis), and even then.
    The whole hype around it "learning" and "creating" is only that: hype. Anything you make it do can only rehash or extract things from the data it contains; it can't significantly exceed those confines, only catch things a human missed.

      4 days later

      Evernight
      I also used to strongly believe that LLMs are merely stochastic parrots, i.e., statistical models that solely follow a probability distribution to predict the next token of a given input. Yet I am still amazed by the capabilities and recent innovations introduced by transformer-based AI models. It is still quite obscure to me how LLMs are able to create content that looks eloquent, as if the hidden machinery were genuinely understanding a prompt. In spite of those recent innovations, the familiar "Chinese room" argument is still used. It goes as follows:

      Imagine a person who does not speak Chinese is locked in a room with a set of rules and a large batch of Chinese characters. The person is given a piece of paper with a Chinese sentence written on it, and they follow the rules to produce a response in Chinese. The person does not understand the meaning of the Chinese characters, but they are able to produce a response that is grammatically correct and even clever.
      The question is: Does the person in the room truly "understand" the meaning of the Chinese characters? The answer is of course no, because the person does not have any comprehension of the language or the meaning of the characters.

      Now LLMs process and generate text by manipulating symbols (words, characters, etc.) according to the rules of the model. They don't truly "understand" the meaning of the text; they're simply rearranging symbols to create a coherent-seeming output. They are simply trained on vast amounts of data, which can lead to overfitting and memorization. This often results in the model producing responses that are statistically likely to be correct but lack true understanding.


      It is, of course, clear that machine learning models possess little of whatever it is humans have. Scientific or statistical models, for example, might correctly predict outcomes in some specific domain, but they don't "understand" it. A human might know how orbital mechanics works in the same way an algebraic expression or a computer program can predict the positions of satellites, but the human's understanding is something different. Still, I will adopt a functionalist view of this question, since the internal experience of models, or lack thereof, isn't especially relevant to their performance.

      Still, can a stochastic parrot understand? Is having something that functions like understanding enough? Large language models (LLMs) do one thing: predict token likelihoods and output one token, probabilistically.

      But if an LLM can explain what happens correctly, then somewhere in the model is some set of weights that contains the information needed to probabilistically predict answers. An LLM doesn't "understand" that Darwin is the author of On the Origin of Species, but it does correctly answer the question with a probability of X% - meaning, again, that somewhere in the model, it has weights that say that a specific set of tokens should be generated with a high likelihood in response. (Does this, on its own, show that it knows who Darwin was or what evolution is? Of course not. It also doesn't mean that the model knows what the letter D looks like. But then, neither does asking the question of a child who memorized the fact to pass a test.)
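      (If it helps make "predict token likelihoods and output one" concrete, here is a minimal sketch. It assumes the Hugging Face transformers library and uses GPT-2 purely as a stand-in model; the prompt and model choice are just illustrative, nothing specific to this discussion.)

      ```python
      # Minimal sketch: ask a small causal LM for next-token probabilities, then sample one.
      # Assumes the Hugging Face `transformers` library; GPT-2 is only an example model.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      prompt = "The author of On the Origin of Species is"
      inputs = tokenizer(prompt, return_tensors="pt")

      with torch.no_grad():
          logits = model(**inputs).logits        # shape: (batch, seq_len, vocab_size)

      next_token_logits = logits[0, -1]          # scores for the very next token only
      probs = torch.softmax(next_token_logits, dim=-1)

      # The five most likely continuations: the "weights that say a specific set of
      # tokens should be generated with a high likelihood in response".
      top = torch.topk(probs, 5)
      for p, idx in zip(top.values, top.indices):
          print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")

      # "Output one token, probabilistically": sample from the distribution instead of argmax.
      sampled = torch.multinomial(probs, num_samples=1)
      print("sampled:", tokenizer.decode(sampled))
      ```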

      https://arxiv.org/abs/2310.02207

      For example, this paper argues that language models contain a geographical model of the world (in terms of longitude and latitude), in addition to temporal representations of when events have occurred.

      They use linear probes to find these representations and argue that the representations are linear because more complicated probes don't perform any better.

      They also look for neurons whose weights are similar to the probes', to show that the model actually uses these representations.
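
      For a rough idea of what a "linear probe" means in practice, here is a sketch with made-up placeholder data (not the paper's): collect hidden activations for prompts naming places, then fit a plain linear map from those activations to latitude/longitude.

      ```python
      # Sketch of a linear probe: a linear map from hidden activations to coordinates.
      # The activations and coordinates below are random placeholders, for illustration only.
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import train_test_split

      n_places, hidden_dim = 1000, 768
      hidden_states = np.random.randn(n_places, hidden_dim)         # model activations (placeholder)
      coordinates = np.random.uniform(-90, 90, size=(n_places, 2))  # [lat, lon] per place (placeholder)

      X_train, X_test, y_train, y_test = train_test_split(
          hidden_states, coordinates, test_size=0.2, random_state=0
      )

      probe = Ridge(alpha=1.0)   # no hidden layers: just a linear map
      probe.fit(X_train, y_train)

      # The paper's argument: if a linear probe recovers coordinates about as well as
      # more complicated probes do, the representation itself is roughly linear.
      print("held-out R^2:", probe.score(X_test, y_test))
      ```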

        mechap
        I'm not at all comfortable with the abundant usage of "true human understanding" in AI discussions. It's not like there is any real idea of what that actually is. Or have I missed something?
        One angle on the Chinese room is that we define the mechanism, the person, as not understanding Chinese a priori (did I use that right?). But without doing that, there is no way for us to know that the mechanism does not have an understanding.
        In regard to that paper: if you think of an LLM, or any neural net, as a lossy compression of the training data, that would be expected behaviour, right?

        Anyway, speaking of languages, how about this: I have pretty much no understanding of my native language, grammar-wise. But I still speak with native-like quality.
        I would dare to suggest that I'm not the only one. It doesn't seem absurd to imagine a stochastic model powering my expression, maybe even an LLM-like one.
        That's just on a syntactic level, but even on a semantic level, I feel that most of what I say (in a real-time verbal conversation) is LLM-like, because I don't think about it.
        Only when conversations become taxing enough that I have to think about what I say do I feel like I'm entering a distinct mode of operation, where I would say I'm doing more than picking tokens according to previous tokens.

        TL;DR I think everyday human operations employ LLM-like techniques.

          Oh boy... I hope I get to hear your thoughts on this one when enough reviews and details come out

            I've played around with Stable Diffusion (images), LLMs (chatbots), and a tad bit of RVC (voice cloning), and had a lot of fun getting them up and working. It all scratches the Stockholm-syndrome level of enjoyment I get from computer troubleshooting. I would still think that anyone who knows a lot about AI will look at my opinions the same way I look at beginner opinions on topics I'm hyper-knowledgeable about, but it's a ruthlessly fun topic to speculate about. So I will.

            I don't feel AI will ever reach the heights of what's being said by either of the extremes. The anti-AI crowd doesn't seem interested in being hyper-knowledgeable or realistic, because it detracts from the "possible future" that can scare the public. The vocal pro-AI people will do the same for the opposite side of the coin, and seem to be cryptobros more concerned with making cash than understanding technology. Having locally run uncensored chatbots that are better than ChatGPT-3.5 is already a reality, but most people aren't tech-literate enough to put that to any sort of personal-use advantage.

            Evernight This is all true. What I hope happens is that a daily-updated AI can make the search engine obsolete. I'd much rather have direct responses built from multiple sources than the ad-riddled, algorithm-driven mess that Google searches have become. I really despise a lot of tech standards today (part of the reason I'm on this site), and AI has the potential to burn down the current standards in favour of new ones. Clearly, there is a large interest in having it censored from the beginning, but the same goes for everything else today. You also can't really "jailbreak" a search engine like you can with AI, other than filetype:pdf and the like.

            The government will show up late to regulate, companies will go all-in on too-good-to-be-true technologies like usual, and the world will shift to adopt. Scammers will get better, and AI art will become common for applications that are already phoning it in, while being violently shunned in more artistic spaces. As with the giant leaps in technology that were VHS and the internet, the biggest change the majority of people will feel will be pornography. Probably.

              Comsat Rex
              Can you really hope for LLMs not to become ad-riddled shitfests five seconds after they become a viable replacement for search engines, for all the same reasons search engines became unusable in the first place?
              Since they won't even have to declare ads as ads anymore, I'd imagine they will end up being a much worse experience. Is there even a player in the LLM space that isn't an ad company?
              I think there's that French one behind Mistral; does anyone know of others?

                Melon
                Your language experience is more than just knowing grammar or understanding how you speak. Most people understand language without thinking about it, which is different from how LLMs process language. A musician who has spent years practicing a piece and a computer program that can generate music based on algorithms may produce similar results, but the underlying processes are different.
                When you're confronted with more demanding conversational situations, you may enter a "different mode of operation", but this doesn't mean you're suddenly switching to a more conscious, deliberate processing of language. It could be that higher-level cognitive processes are being engaged, such as working memory, attention, and executive functions, but these are still different from the computations performed by LLMs.

                kurisu
                Upon inspecting the source code for the so-called "large action model," it is quite clear that there is no artificial intelligence or complex automation whatsoever at play. Instead, it relies on simple Playwright automation scripts to perform tasks, which explains why it only supports a limited number of apps, including Spotify, Midjourney, Doordash, and UberEats.
                The limited functionality and lack of support for most apps are blatant indicators that the emperor has no clothes. But this lack of substance is par for the course in the AI industry, where grandiose promises and flashy marketing often conceal a lack of real innovation.

                https://mega.nz/file/rRdwhaII#02OcaqQghqhJQ5nvF3rjAdlCfeVOzdrkbBIM3sX6Gl4
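
                To give a feel for what that kind of hard-coded flow looks like, here is an invented illustration, not code from the linked archive: the URL, selectors, and flow are hypothetical, and it only shows that such an "action" is ordinary Playwright browser automation.

                ```python
                # Invented illustration of a scripted "action": plain Playwright browser automation.
                # The URL flow and CSS selectors are hypothetical, not taken from the leaked source.
                from playwright.sync_api import sync_playwright

                def play_track(query: str) -> None:
                    with sync_playwright() as p:
                        browser = p.chromium.launch(headless=True)
                        page = browser.new_page()
                        page.goto("https://open.spotify.com/search")

                        page.fill("input[data-testid='search-input']", query)    # hypothetical selector
                        page.keyboard.press("Enter")
                        page.wait_for_selector("[data-testid='tracklist-row']")  # hypothetical selector
                        page.click("[data-testid='tracklist-row']")

                        browser.close()

                # Every supported app needs its own hand-written script like this, which is
                # why only a handful of services work at all.
                play_track("some song title")
                ```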

                  mechap Whaaaat? You're saying that a macro system isn't AI? The Rabbit marketing team is so gonna kill you

                  Melon Yeah, fair point. Any company with the reach to make a widely used LLM search engine is one of the same culprits behind the garbage we have today. Hopefully the ads trained into the data can be jailbroken around like the censorship has been in the past, but I don't know nearly enough to say whether that's possible, nor how long it would take for the exploit to be fixed. If it even happens to begin with.

                  mechap

                  but the underlying processes are different.

                  how do we know?
