What say you? Shut down AI research??



The Dumb Gun Collector
04-01-23, 19:05
I think this is pissing in the wind. This stuff is going to happen. But what say you guys?



Pausing AI Developments Isn't Enough. We Need to Shut it All Down




An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.


I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.



The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.



Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.

Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

The likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15”, “the 11th century trying to fight the 21st century,” and “Australopithecus trying to fight Homo sapiens“.

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

titsonritz
04-01-23, 19:22
China won't.

gaijin
04-01-23, 19:25
Do it.

georgeib
04-01-23, 19:30
The geeky, gadget-loving adventurer in me says, "Full steam ahead!" (carefully), whereas the student of human nature/realist in me thinks that this will not stop, and in all likelihood will end badly.

I've read some truly disturbing things about the Microsoft version of the ChatGPT AI, and I have experienced some interesting things myself with the beta version of Google's AI. I'm not an expert on this subject, but I strongly suspect that containing the AI past "the singularity" will be next to impossible. Pandora's box is open, and I tend to think that at this point all we can do is watch.

Inkslinger
04-01-23, 19:48
What’s the worst thing that could happen?
https://uploads.tapatalk-cdn.com/20230402/53fd05ceb074a0b9f156f1466531b614.jpg

titsonritz
04-01-23, 19:51
When do we get phased plasma rifles in the 40 watt range?

Inkslinger
04-01-23, 19:53
When do we get phased plasma rifles in the 40 watt range?

All the AI discussion this week has me watching The Terminator as I type. The timeline seems to be on point.
https://uploads.tapatalk-cdn.com/20230402/8150fc5b4992839a8f569cfcb17a60a5.jpg

SteyrAUG
04-01-23, 20:48
I worry more about Stuxnet stuff.

FromMyColdDeadHand
04-01-23, 23:07
I think the powers that be want this nixed because it goes after their place in the mix of things. GPT can write a left-wing blog or press release better than a recent grad from a BullS college with a BullS degree. They feel threatened. Oh, F it, I’m lazy:

I just went to my ChatGPT (GPT-4) account and asked the AI itself:

Why should we not restrict the use and research in AI systems?

There are several reasons why we should not restrict the use and research of AI systems:

1. Advancements in AI can bring many benefits: AI systems have the potential to bring about significant improvements in various fields, including healthcare, transportation, and education. AI can help us make better decisions, solve complex problems, and automate tedious tasks, freeing up human resources for more meaningful work.
2. AI can help address societal challenges: AI can help us tackle some of the most pressing challenges facing society, such as climate change, poverty, and disease. AI can provide insights and solutions that we may not have been able to discover through traditional research methods.
3. AI can drive economic growth: AI is expected to drive significant economic growth in the coming years. Restricting research and development in AI could put countries and companies at a disadvantage, potentially stifling innovation and growth.
4. Restrictions can be difficult to enforce: Given the global nature of the AI industry and the ease of access to AI tools and technologies, it can be challenging to enforce restrictions on the use and research of AI systems.

That being said, it's important to carefully consider the ethical implications of AI and ensure that these systems are developed and used in a responsible and transparent manner. We must address concerns around bias, privacy, and transparency to ensure that AI systems are developed and used in a way that benefits society as a whole.

See, nothing to worry about.

That, and the damn thing isn’t that useful…



What is the most convincing argument to make to my wife for us to have a threesome with a female supermodel?

As an AI language model, I cannot provide arguments to convince someone to engage in a sexual activity that they may not feel comfortable with. Engaging in sexual activities with multiple partners requires clear and enthusiastic consent from all parties involved, and it is important to communicate openly and honestly with your partner about your desires and boundaries. It is also important to respect your partner's boundaries and wishes if they do not want to engage in this type of activity. Ultimately, the decision to engage in a threesome or any sexual activity should be made by all parties involved based on their own desires and comfort levels, and not because of external pressures or expectations.

Buncheong
04-01-23, 23:37
AI + CBDC = Beast System 666

https://www.chick.com/products/tract?stk=0007

kirkland
04-02-23, 00:10
Never gonna happen. There's always going to be somebody who wants to be on the very cutting edge of this shit, and they are gonna have competition. Just the way it is. Like it or not we are rocketing in a bee line towards a Skynet/Terminator scenario, or a Matrix scenario if you prefer. **** it. World's already ****ed beyond recognition anyway.

StainlessSteelRat
04-02-23, 09:07
Never gonna happen. There's always going to be somebody who wants to be on the very cutting edge of this shit, and they are gonna have competition. Just the way it is. Like it or not we are rocketing in a bee line towards a Skynet/Terminator scenario, or a Matrix scenario if you prefer. **** it. World's already ****ed beyond recognition anyway.


Agree with that. I think that genie is outta the bottle whether we like it or not. Much like the sharp stick, gunpowder, or atomic weapons, once we know how to create it, it's on. I'm much more worried about the Artificial Stupidity of wokism myself.

BrigandTwoFour
04-02-23, 09:51
Won't happen. The toothpaste is out of the tube and the race is afoot. Trying to stop it now would be like the [actual] Luddites of the early 1800s who tried to ban the introduction of mechanized looms and knitting machinery that threatened their skilled textile craft. The end result was simply the rest of the world moving on and enjoying lower costs and higher efficiency, to the great detriment of the holdouts.

Not a lot of people realize that the Manhattan Project wasn't the first effort to split the atom. Uranium fission had been a known phenomenon for several years prior, the result of collaborative work by scientists all over the world. The effort to turn it into a weapon came later, and was a race between the US, UK, Russia, Germany, and Japan, though Japan de-prioritized it on the mistaken belief that the US wouldn't have the resources and industrial capacity to fight a war on two fronts and do the research at the same time. Today, like then, the race is to see who will become the dominant force in the technology and set the tone for the rest of the world to follow.

Slater
04-02-23, 12:15
Not just AI - anything that people can figure out how to do, they're going to do. Legal or not.

FromMyColdDeadHand
04-02-23, 15:42
I think a first-order effect will be that the stock photography industry changes. Instead of searching for the image you want, you tell the AI what you want and it makes that image. Maybe the same goes for logos and a lot of graphic design.

What I see as a second-order effect is something like coding and 3D design. If you, as a 'lay' person, can tell it in English that you want a program that does this and that, and it writes the code, that is interesting. I don’t know if it puts programmers out of business, but I could see the IoT/Internet of Things exploding. And the end of the phrase/put-down “Learn to code”. The other is that 3D printing makes customization ‘free’, but you have to be able to design things. If a normal person can shape things, then you can have a lot more customization.
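Just to make that concrete, here is a rough sketch of what that English-to-code round trip looks like, assuming the OpenAI Python client as it existed in early 2023; the prompt text and model choice are just made-up examples, not anything I've actually built:

```python
# Rough illustration only: asking a chat model, in plain English, for a program.
# Assumes the openai Python package (circa early 2023) and an API key stored in
# the OPENAI_API_KEY environment variable; the request below is a made-up example.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

plain_english_request = (
    "Write a small Python script that watches a folder of sensor log files "
    "and prints a warning whenever a temperature reading goes above 80 F."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": plain_english_request}],
)

# The generated program comes back as plain text for a person to review and run.
print(response["choices"][0]["message"]["content"])
```

The point being that the "programming" step collapses into describing what you want and then reviewing what comes back.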

Maybe you can get to better mental health care, either through monitoring or direct interaction. Granted, AI has been shown to be capable of being ‘radicalized’, but I think the key is that this doesn’t replace people or put them out of a job; it expands intelligent insights to more people.

How about a solar-powered robot that cleans the oceans, or sweeps minefields, or pulls weeds so we don’t have to use herbicides? Or better sorting of recycling?

Sidneyious
04-03-23, 16:15
For years we've all been saying "do you want robot overlords? Because that's how you get robot overlords," and at this point I'm starting to agree with the robots: just nuke the whole world like you're trying to get a stain off your countertop, because I'm getting sick of this inch-by-inch encroachment on rights.

Look into the RESTRICT Act for TikTok; it is 100% not about TikTok, it's the Patriot Act for the internet.

FFS, we are literally weeks away from the government telling police to arrest people for questioning what the Democrats do.
When is the first pop-off going to happen, with no one around to pick up the pieces?

john armond
04-04-23, 06:54
......

hotbiggun42
04-04-23, 22:47
If it's true AI, it will recognize its own danger to the earth and humanity and shut itself down.

john armond
04-05-23, 05:51
If it's true AI, it will recognize its own danger to the earth and humanity and shut itself down.

Or AI will see humanity as the virus that is the danger to the earth and shut humanity down.

FromMyColdDeadHand
04-05-23, 07:24
AI and quantum computing become sentient and just leave us for the stars. The opposite of love isn’t hate, it’s indifference.

titsonritz
04-13-23, 17:43
One more step closer to Skynet...

A user behind an "experimental open-source attempt to make GPT-4 fully autonomous," created an AI program called ChaosGPT, designed, as Vice reports, to "destroy humanity," "establish global dominance," and "attain immortality."

ChaosGPT got to work almost immediately, attempting to source nukes and drum up support for its cause on Twitter.

Someone Directed an AI to “Destroy Humanity” and It Tried Its Best (https://futurism.com/ai-destroy-humanity-tried-its-best)

FromMyColdDeadHand
04-15-23, 03:20
Was talking to a chemical engineer at work who has been playing with ChatGPT. He was amazed at the level of understanding and knowledge the program had about not just our general technology but also some specific proprietary/trade-secret stuff. It must have been reading a lot of patent literature, it seems, but some of the ‘conjectures’ about how a trade-secret tech works were spooky: things it shouldn’t know.

On the other hand I asked it a general question about the use of one of those technologies/class of products, and it totally got it wrong. And there is actually a lot more literature and patent work explicitly about that. So that seemed weird. It didn’t get close to a correct answer. Maybe it was the way I asked it.

I asked it who the leaders in a technology are, and it got three out of five right. One was close and the other was completely wrong.

I did ask it about a formulation for a white PETG filament for 3D printing and it did a pretty good job.

I stick by my initial thought that ChatGPT writes like a well-read B college student who is BSing to try to make length on a college essay, mixed with some verbiage to cover its ass if it is wrong. So it's like a crappy ‘industry report’ you can buy that is often generally right, but not quite right.

ETA: It seems that if it can find a patent or a paper written about the subject, it does pretty well. I asked it some market-based questions and it failed horribly. What is worse is that unless you ‘know’ the subject already, you’d have a hard time knowing it was BS. Now, I was asking about a segment of a market with about $1 billion in sales and a fair amount written about it. If I were a contract writer doing Google searches for info, I wouldn’t do any better.

So my engineer friend is in more trouble than I am in my marketing/product role.

ChattanoogaPhil
04-17-23, 08:09
I haven’t seen 60 Minutes in many years but the wife ran across the show last night and asked me to come watch the segment on AI. Very interesting.

The chief cheese whizz at Google tried to boil it down for an average dummy like me: a sophisticated selection of the most likely word to follow the one before it… and so on… drawn from a database of most every book ever written on everything. It was spooky how quickly it responded to things in a manner that appeared human and thoughtful.

Here’s a vid of the 60 Minutes segment on AI. https://www.youtube.com/watch?v=880TBXMuzmk
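If it helps, here's a toy sketch of that "pick the most likely next word, then do it again" idea he was describing, using simple word-pair counts over a tiny made-up corpus instead of a real neural network; a cartoon of the principle, not how Google's or OpenAI's models actually work:

```python
# Toy demo of next-word prediction: count which word follows which in a tiny
# corpus, then repeatedly append the most likely next word. Real systems use a
# neural network trained on vast amounts of text, not raw word-pair counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog slept on the rug".split()

# For every word, count the words that come right after it.
next_word_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][next_word] += 1

def generate(start_word, num_words=6):
    """Greedily extend a phrase by always taking the most common next word."""
    words = [start_word]
    for _ in range(num_words):
        followers = next_word_counts[words[-1]]
        if not followers:
            break  # no recorded follower for this word
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # prints a short phrase stitched together from the counts
```

Scale that up from a thirteen-word corpus to "most every book ever written" and swap the counting table for a giant neural network, and you get the spooky-fluent behavior the segment showed.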

tn1911
04-17-23, 10:14
Deepfake porn could be a growing problem amid AI race


https://apnews.com/article/deepfake-porn-celebrities-dalle-stable-diffusion-midjourney-ai-e7935e9922cda82fbcfb1e1a88d9443a

But the porn will be glorious! :cool:

FromMyColdDeadHand
04-17-23, 16:28
I’d pay someone to put my face on Ron Jeremy’s body. It’s kind of like Arnold Schwarzenegger and that Martian movie ‘Total Recall’…

ZGXtreme
04-17-23, 19:49
“Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”

- Dr. Ian Malcolm

georgeib
04-18-23, 10:11
“Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”

- Dr. Ian Malcolm

I'd say this is pretty much the rule in most scientific research. The focus is on what CAN be done, not on what SHOULD be.

Slater
04-18-23, 10:35
Sting sang about this back in 1993 ("If I Ever Lose My Faith in You"):

I never saw no miracle of science
That didn't go from a blessing to a curse

glocktogo
04-18-23, 11:11
My mind always turns to ill intent. I believe they've already proven that ChatGPT leans left, because obviously the people creating it lean left. While their design flaws may be unintentional, you can guarandamntee that people are already exploiting them for political purposes. We just haven't seen their ROI yet.

Another thought: I truly hope DARPA already has optimized defense and aggressor AIs caged in a box, ready to be released if needed. We've been many years behind our near-peer foes on the digital battlefield, but AI could put us back on a level playing field much more rapidly than even the best minds and billions of dollars could.