I have no idea. Last night I literally got AI to give me instructions on how to shave alligator hair and how to inflate a foldable phone.
AI is not actually intelligent, it’s a word prediction model. It’s royally ignorant actually.
Because of this, I find it basically boils down to a fancy search engine.
That’s the thing though, it’s not a search engine.
It’s a language prediction model. If you ask about something it has learned well and it predicts correctly, you’ll get a nice answer that makes you feel like it’s a search engine.
If you ask something more obscure, or confuse it with your wording, you’ll get back garbage that hopefully doesn’t look like a right answer, because a useless answer is much better than a deceiving one.
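A toy sketch of what “word prediction” means (my own illustration, not how real LLMs are built — they use neural networks rather than lookup tables — but the predict-the-next-word loop is the same idea):

```python
import random
from collections import defaultdict

# "Train" a toy bigram model: record which word follows which.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def predict(prompt_word, length=8, seed=0):
    """Repeatedly append a statistically likely next word."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:   # never seen this word: nothing to predict
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(predict("the"))        # fluent-looking but meaningless filler
print(predict("quantum"))    # unseen word: the model has nothing to say
```

The point being: it only ever emits what statistically tends to follow, with no notion of whether the result is true.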
It’s my dream that AI takes over middle management and bureaucracy as a whole, and we get rid of all the societal evils that come from corrupt or incompetent management in both governments and companies. Imagine if every single working person had zero ambiguity in their job and complete clarity on when they have to work, and on what. The world would be so much happier!
Ideally it would help put real, complicated but achievable solutions forward for some of the world’s toughest issues, like poverty, hunger, war and disaster. AI is but a tool, and on the current trajectory, much of its use is to advance the interests of capitalist moguls. For the answers of improved AI models to serve the ideal of a harmonious world, we need to start by changing our society to work towards it, and accept a shift away from purely monetary ends.
Any problem that can be expressed mathematically, has a huge search space, and doesn’t necessarily yield to human intuition.
For example, if a computer can solve chess, then that same line of programming should be able to solve quantum physics and gravity.
You should check out the short story Manna. It’s maybe a bit dated now but explores what could go wrong with that sort of thing.
I just read the first two chapters. Yes, it doesn’t paint a pretty picture but the dystopia portrayed in that story started with Manna being an unregulated monopoly that was given power over everything.
In real life we perhaps wouldn’t take it that far. All decisions would still be made and signed off by humans; AI would just be the planner/scheduler. And no tech services firm would want to get into employability tracking; they’d quickly get chewed out by regulators if their AI product started discriminating against candidates in hiring.
Yeah. I mean I started reading that story and was thinking how cool it would be… Until it started going bad. Something like a GPS for whatever task you were doing at work would be cool.
Tax it. If corporations use it to replace employees, they should at least also have to contribute to the improvement of society.
Automatically respond to scam calls and emails, keeping scammers overwhelmed with useless work.
The real robot wars will be the Scam Call AI vs. the Scam Call Answering AI.
It will be like a new version of chess: Bobby Phisher vs. Magnus Callusthen
This is the epitome of those useless machines that turn themselves off.
Currently the obvious use is to help people express their thoughts in words. It’s helped me a lot writing out resumes and cover letters. This can be extended to languages other than our main/first one.
It’s also great to narrow down research on a personal scale in areas where if you have no expertise it would be very hard for you to figure out what you are looking for. I’ve used it to ID plants, insects and diseases successfully. I didn’t get a precise result from ChatGPT, but that’s not what I asked. I just requested pointers in the right direction. It delivered.
The next obvious implementation is with software interfaces. I’ve already used it (unsuccessfully) to work with Unreal Engine and other 3D software. I got half-baked results because the models were not trained specifically for the software in question. But if they were, it would be very easy to just ask the software how to do something instead of searching everywhere for potential answers. That doesn’t sound too far-fetched, and I’ve heard it’s a feature that will become standard.
I use GitHub Copilot to write code for me, every day
I just looked that up. It looks amazing. Does it really work? I’ve tried using ChatGPT for coding and it sucks.
I’d say it works pretty well most of the time, probably depends on the coding language. I use it regularly for PHP/Laravel and JS, and still get surprised when it delivers full working functions from a comment.
There’s a free trial, give it a try
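To illustrate the workflow: you type a comment (and maybe a signature), and Copilot suggests the body. This is a hypothetical completion written by hand for illustration, not actual Copilot output, and in Python rather than PHP/JS for brevity:

```python
# Prompt you type:
# Return the n most common words in a text, ignoring case.

# The kind of completion Copilot typically suggests:
from collections import Counter

def most_common_words(text: str, n: int) -> list[tuple[str, int]]:
    """Return the n most frequent words in text, case-insensitively."""
    words = text.lower().split()
    return Counter(words).most_common(n)

print(most_common_words("the cat and the dog and the bird", 2))
# → [('the', 3), ('and', 2)]
```

For bread-and-butter functions like this it tends to work; the further you get from well-trodden patterns, the more you have to check its work.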
I’ve never had much success having Copilot write actual code. Where it’s been very helpful is in writing documentation, boilerplate, and just being a very smart autocomplete. That alone has saved me so much time and energy already.
I’m curious about this. What model were you using? A few people at my game dev company have said similar things about it not producing good code for Unity and Unreal. I haven’t seen that at all. I typically use GPT-4 and Copilot. Sometimes the code has a logic flaw or something, but most of the time it works on the first try. I do at this point have a ton of experience working with LLMs, so maybe it’s just a matter of prompting? When Copilot doesn’t read my mind (which tends to happen quite a bit), I just write a comment with what I want it to do, and sometimes I have to start writing the first line of code, but it usually catches on and does what I ask. I rarely run into a problem that is too hairy for GPT-4, but it does happen.
I’m not sure if my answer is correct: I tried ChatGPT to help me with Unreal in February/March this year. I can’t recall which model.
As for my query: I’m an artist, not a coder. I found ChatGPT would usually point me in the right direction if I had a simple interface question, but not when dealing with materials… Or the sequencer. I haven’t used Copilot though.
Ahh ok, that makes sense. I think even with GPT-4, it’s still going to be difficult for a non-programmer to use for anything that isn’t fairly trivial. I still have to use my own knowledge to know the right things to ask. In Feb or Mar, you were using GPT-3 (GPT-4 requires a monthly subscription). GPT-3 is much worse at everything than GPT-4.
Labour-free advancement. Humans pass down centuries of advancement through language.
- Version 1: Automation of individual labour steps, each specified in restrictive detail.
- Version 2: Multiple labour steps compiled from one input, with a restricted set of choices. (Less supervision needed per command.)
- Version 3: Whole-task automation. (Communicate a wish and the entire chain of labour is taken care of; no supervision of production, only demands on the output.)
Automation has gone from “multiple coffee gadgets” to “one standard coffee button” to “a request for a warm coffee with less sugar”. (Now, where is the human’s place in this picture? A balloon-like human, as in the movie WALL-E?)
I mostly see psychological benefits:
- Building confidence in writing and (when roleplaying) in interacting with other people. LLMs don’t shame you or get needlessly hostile. And since they follow your own style, it can feel like talking to a friend.
- Related to that, the ability to help in processing traumatic events by writing about them.
For me personally, interacting with AI has helped me conquer some fears and shame that I buried long ago.
I see endless possibilities, but it’s questionable if any of them are realistic before we overcome capitalism.
But one idea I really like is AI helping with the implementation of sortition for democratic decision-making in government.
Recently, the concept got some attention due to climate protesters demanding it, which I think is nice. So while I don’t want to discuss the concept and where it should be applied, here’s what (future) AIs could do:
- Enhanced Random Selection Process: AI can ensure a representative selection from the population for sortition by analyzing demographic data and employing stratified sampling algorithms.
- Personalized Education and Communication: Once participants are selected, AI could offer personalized learning paths to prepare them for their role, and adapt communication to suit each participant’s unique circumstances.
- Facilitating Communication and Mediation: AI can manage communication among the selected group by setting up secure environments for discussion, and serving as an impartial mediator to promote fairness and respectfulness during deliberations.
- Information Provision, Fact-Checking, and Bias Detection: AI can provide relevant, unbiased information on complex topics, perform real-time fact-checking, and monitor discussions for potential biases.
- Emotion and Sentiment Analysis: As discussions take place, AI could detect the emotional states and sentiments of participants, ensuring decisions are not overly influenced by emotional reactions.
- Advanced Simulation and Scenario Exploration: AI could create sophisticated simulations to help participants understand potential outcomes of the policies they are considering.
- Public Accountability and Feedback Collection: After decisions are made, AI can ensure transparency in decision-making by tracking and reporting the progress of the deliberations, and collecting public feedback on the decisions made.
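The stratified-sampling idea in the first point doesn’t even need AI to sketch; it’s a few lines of code. The strata, shares, and citizen pools below are invented for illustration — a real system would draw on actual census demographics:

```python
import random

# Hypothetical demographic strata and their population shares.
population_shares = {"urban": 0.55, "suburban": 0.30, "rural": 0.15}

# Toy pools of eligible citizen IDs, one pool per stratum.
pools = {s: [f"{s}-{i}" for i in range(1000)] for s in population_shares}

def stratified_panel(size: int, seed: int = 42) -> list[str]:
    """Draw a citizen panel whose composition mirrors the population shares."""
    rng = random.Random(seed)
    panel = []
    for stratum, share in population_shares.items():
        k = round(size * share)           # seats allotted to this stratum
        panel.extend(rng.sample(pools[stratum], k))
    return panel

panel = stratified_panel(100)
# A 100-person panel: 55 urban, 30 suburban, 15 rural members.
```

The harder (and more AI-relevant) parts are choosing the strata fairly and handling people who decline to serve, which plain random sampling doesn’t solve.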
I should probably add that this list was made with the help of GPT 😅 so a more direct answer to your question might be: AI can help humans lay out their ideas and foster discussions.
That’s the scariest thing I’ve read in a long time. I’ve gotten so many completely made-up “facts” from AI that I wouldn’t want to hand it the keys to my car, much less my freedom. It even cites its sources, which turn out not to exist if you actually check them. The fact that the creators can’t even explain why this happens makes it even scarier. I’m not scared of AI; I’m just scared of people trusting it. It’s about as trustworthy as a politician, but arguably a lot smarter.
What do you mean we can’t explain it? It’s designed specifically to make up text that is statistically very likely. If it doesn’t have anything similar in its training data, it will try to extrapolate, and that gives you hallucinations.
Not generally disagreeing with you, but I doubt the following:
> the scariest thing I’ve read
Capitalism isn’t the problem here; it’s unregulated capitalism that doesn’t work.
Also, you can dislike it, but capitalism tempered with socialism is the best system we have so far. The best countries in terms of human happiness and opportunity (think the Scandinavian states especially, and most of central Europe generally) are capitalist democracies. We, however, realised, unlike the US, that you can’t just let corporations do anything they want, and that the state has an obligation to provide services and help to its people.
This anti-capitalist sentiment is so common, and so weakly founded in reality, that it feels like a mere buzzword at this point.
We could outsource all the bureaucracy to machines. We could have entire data centres applying for things, sending that to another data centre, where it gets denied and redone, and so on. Doing contracts, billing people, paying bills by billing yet other people.
Humankind would just need to supply power, and meanwhile I could go hiking in the mountains and have every Thursday and Friday off, because there would be no paperwork around anymore.
I use it to summarize search results on a certain topic, like what packages hold this or that library, stuff like that… or as a more comprehensive man page generator.
LLMs are worthless, and I’m skeptical they’ll ever be otherwise. I think for a program that works roughly like ChatGPT from a user’s perspective to ever be useful, it would need a whole different algorithm.
Aight so I’ve been holding off on making conversation since I generally disagree with most of the negative sentiment towards them. But for real, you think they’re worthless? Legit at their present moment they’ve got so much immediate value; how much have you used them?
I’ve pulled tremendous value from them. In my personal life, GPT-4 walked me through developing a Kotlin Android app for my smartwatch so that I could have access to it more easily and conveniently. It’s provided me guidance and knowledge, even teaching me German and Spanish and holding practice conversations with me. At work, it’s helped me write programs to improve my productivity, taught me how to use software like Excel, and just overall helps me be more capable.
And all that is just one person’s value from it. Just imagine what value it’s creating right now for the millions who use it. Just imagine what it could do in the hands of innumerable virtuous and malicious individuals. It is so far from worthless
Do you trust that the German it’s teaching you is real German? All it’s trained to do is to generate something that could pass as German.
I don’t speak German, but so far my conversations in Spanish have been flawless. So I would trust ChatGPT with language in that regard.
FYI: it’s the same with German. I think you’re quite alright with the ‘big’ languages. I didn’t spend much time with ChatGPT, but even some smaller language models speak multiple languages well enough. I tend to use English; I think the sentences are a bit more expressive and nuanced. But with ChatGPT that’s probably barely noticeable.
I will admit I’ve never used them. I’m not keen on providing my email address to hucksters for the purposes of signing up, and they won’t accept a disposable email address. At least not one I’ve been able to find.
I’ll be honest, though. Running into someone extolling the benefits of LLMs, I wonder if they have ulterior motives. A lot of the crypto bros are now jumping ship from the blockchain bandwagon to the AI bandwagon. (Because the blockchain bubble has partially burst now and the AI bubble is still going strong.)
With cryptocurrencies or NFTs, anyone telling you it was the best thing ever was always misrepresenting their own gains and telling lies about the capabilities of blockchain. Maybe they were themselves deluded, but the ultimate motivation to extol the benefits of blockchain was not actual benefits, but rather that the extoller was invested. If they could be convincing enough and their audience believed them and invested, the value of the extoller’s investment would go up.
Now, LLMs are known to hallucinate, and very confidently and convincingly. None of what LLMs produce can be trusted for factual accuracy. LLMs as a technology are just not suitable for producing factual output and will always be inferior to platforms like StackOverflow or… what Reddit used to be.
So, what you’ve claimed ChatGPT has helped you with: software development, language acquisition, and learning how to use software (Excel specifically). I really hope you’re not just copying programs out of ChatGPT and using them at work without auditing them first. If you have the skills to vet code, then what do you need ChatGPT for? And wouldn’t plain old Google do a better job? And for learning Excel as well?
And as others have said, I wouldn’t trust any language learning I got from ChatGPT.
> Just imagine what it could do in the hands of innumerable virtuous and malicious individuals.
So, when Beanie Babies were at the height of their economic bubble, people were robbing stores and engaging in fist fights to get them. I very much believe that the hype around AI lately is causing a lot of terrible things. Big companies are publicly announcing they’re “replacing jobs” with AI. I think some of those cases are just big corporations finding dumb ways to put positive PR spins on “we’re laying off a lot of people” without actually intending to replace them with AI. I think some big businesses are actually swept up in the hype and think “replacing people with AI” is actually going to work out for them. Maybe some companies are somewhere in the middle: laying people off with the intention of getting them back on a part-time contracting basis for lower pay as “editors” of content output by ChatGPT. But really they’ll be doing the same job, just less efficiently and for lower pay.
Again, look at the effect Beanie Babies had on the world. And they proved to have been a worthless nothing burger all along. The effect the AI hype is having on the world is no proof that these are anything other than worthless lie-generating machines.
My ulterior motives are the same as yours: to convey a strong opinion. It’s not like making others as optimistic about this as I am will change anything. Even if we both agreed to forget the concept, the cat is out of the bag; open-source LLMs are getting better and access is getting cheaper. Everyone is powerless to stop what’s about to happen; it’s as futile as trying to stop the torrenting of copyrighted media. And the more advanced they become, the fewer people need to be involved to make a large impact with them.
Also to clarify, ChatGPT and GPT-4 are two different AIs with different capabilities. I used GPT-4, the better AI. There are many different LLM AIs out there now with varied strengths, weaknesses, and attitudes. ChatGPT is old news, so please don’t use it as your sole resource to judge LLMs (especially as you haven’t used it yet).
The language it’s taught me is valid, and the programs are successful. I have programming knowledge, but its expertise often surpasses my own, which makes it an invaluable resource. There is remarkably limited risk in using it, as the tools are limited in scope and I am not a programmer by profession (it just helps); in any case, it mostly writes secure code and is only getting better over time. I imagine its kind will soon take over this domain entirely as context limits and capabilities continue to grow.
You’re probably seeing so many crypto bros liking this AI because they’re much more risk tolerant than the average person. These AIs are as much a risk as they are an opportunity. While I am optimistic, I fully recognize that things could go horribly wrong.
With an opinion as strong as yours, I only ask that you look into it more before being so confident in your dismissal. At least try it out first before you denounce it as worthless and disregard the experiences of others
That’s a great question! It’s something I think about a lot. This is probably gonna sound sarcastic, but I mean it genuinely: Have you asked ChatGPT (or any other LLM) that question? I’d be curious to hear what it might have to say. Of course, its first few answers are probably gonna be just generic, useless stuff, so you’ll have to really drill down into details to find something useful. But you might be able to find some good ideas in there.
Here are two things that immediately came to mind:
- Democratization of knowledge and expertise. Think of the many people that now have access to (e.g.) a virtual doctor just because they have an internet connection. As with everything I’m going to say, this comes with the big caveat that nobody should trust LLMs unquestioningly and that they definitely hallucinate and confabulate frequently. Still, though, they can potentially provide quick diagnoses and relevant, immediate, life-saving information in situations where it’s difficult or impossible to get an appointment with a doctor.
- Handling information problems. I heard someone say recently that because LLMs are likely to be used for spam, ads, propaganda, and other kinds of information distortions and abuses, LLMs will also be the only systems capable of combating those things. For example, if people start using LLMs to write spam emails, then LLMs will almost certainly have to become part of the spam detection process. But even in cases where information isn’t being used maliciously, we still struggle with information overload. LLMs are already being used to sift through (e.g.) the daily news, pick out the top few most important articles, and summarize them for readers. Finding a signal among the noise is actually quite important for all parts of life, so augmenting our ability to do that could be very useful.
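The LLM-in-the-loop spam triage from the second point might look something like this. `classify_with_llm` is a stand-in I made up: a real implementation would send the email plus an instruction like “answer SPAM or HAM” to an actual model, but a crude keyword heuristic keeps the sketch self-contained and runnable:

```python
def classify_with_llm(email: str) -> str:
    """Stand-in for a real language-model call.

    A real implementation would prompt an LLM to label the email;
    here a keyword heuristic stands in so the sketch runs on its own.
    """
    spam_markers = ("wire transfer", "act now", "claim your prize")
    text = email.lower()
    return "SPAM" if any(m in text for m in spam_markers) else "HAM"

def triage(inbox: list[str]) -> dict[str, list[str]]:
    """Route each email to a folder based on the classifier's verdict."""
    folders = {"SPAM": [], "HAM": []}
    for email in inbox:
        folders[classify_with_llm(email)].append(email)
    return folders

inbox = ["Claim your prize today!", "Meeting moved to 3pm."]
folders = triage(inbox)
# folders["SPAM"] holds the first email, folders["HAM"] the second.
```

The arms-race worry mentioned downthread applies here: the generator and the detector are the same kind of machine, so neither side stays ahead for long.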
I suspect those answers might be broader and larger-scale than what you were asking for. If so, I apologize!
It’s curious to hear AI detection as being a feature, given that it’s just the same machine being used ‘in reverse’ - that arms race will just leave humans unable to know what is real.
Yeah, I agree. Less a “feature” and more a necessary evil.
- Eugenics
Side note: it’s pretty early days, but we’ve just set up an !aistuff@lemdro.id community on Lemdro.id (where /r/android calls its home in the Fediverse).
For now, I think LLMs in their current form are not relevant for much beyond helping with writing out ideas.
I think in the future there’ll be an LLM that uses Google searches to infer specific information: basically the assistant that every tech company on this planet pretended to have at some point, except it would actually exist.
Other than that, probably not much. Maybe translation, maybe predicting the truthfulness of information, maybe converting data or writing code, but all these things require a variety of specialized AIs designed specifically for those use cases. I don’t see that being a commonly used thing until 2030. Maybe we’ll have some of these things by 2025, but I have a feeling they’ll be merely OK, and not good enough to really substitute for human resources.