AI and the precautionary principle


mr.WHO
Posts: 9396
Joined: Thu, 12. Oct 06, 17:19
x4

Re: AI and the precautionary principle

Post by mr.WHO »

decifer wrote: Sat, 31. Jan 26, 11:32 Basically, how smart does a hammer need to be to make you feel bad - or scared - when you leave it alone in the cold and dark garage? And how smart should we allow that hammer to become?
That's what I hate about the current AI hype, tech-bros, Musk and AI wankers - they keep suggesting we're on the verge of AGI, when we're not.

Current AI is still a hammer - no matter how many times you swing it, no matter how many nails it hits, it won't become sentient.


Current AI is just a math hammer with Tamagotchi features (and a text-to-code interface in the form of understanding raw human language).
You can add a microphone to it and the hammer can say "me hungry, me sad" and hold conversations, but this is basically still the same as an AI in a video game reacting to you based on pre-set conditions.


However, the way Large Language Models recognize context between words might actually be a nice approximation or imitation of how the human mind does it, so it is a small step towards understanding what General Intelligence even is.
We're still far away from Skynet or Cylons.

In the end, nature will become the limiter itself - we think the human mind is flawed, but it's actually super-optimized by millions of years of evolution to be flexible and universal... on a very tight energy budget (still, the human brain takes roughly 20% of the body's energy, which is huge compared to its share of body mass).

Super-intelligence means super energy requirements - you still need tradeoffs, like specialization and narrowing the scope, to get around that, but then you sacrifice general flexibility... so if you need flexibility, you have to sacrifice super-intelligence and go back to the basic template: human-level intelligence and flexibility.
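As a rough back-of-envelope illustration of that energy budget (the wattage figures below are commonly cited approximations, picked for illustration, not measurements from this thread):

```python
# Back-of-envelope comparison of the brain's energy budget with AI
# hardware. All figures are rough, commonly cited approximations.
BODY_BASAL_WATTS = 100.0  # approximate human resting metabolic rate
BRAIN_SHARE = 0.20        # brain's share of resting energy use (~20%)
GPU_WATTS = 700.0         # typical power draw of one datacenter GPU

brain_watts = BODY_BASAL_WATTS * BRAIN_SHARE
print(f"brain budget: ~{brain_watts:.0f} W")
print(f"one GPU draws as much as ~{GPU_WATTS / brain_watts:.0f} brains")
```

A training cluster multiplies that GPU figure by thousands, which is exactly the tradeoff being described: evolution's design runs on a light bulb's worth of power.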

Eventually we might end up with Skynet... chained to a big a** fusion reactor... rigged with hundreds of failsafes, breakers, explosives, EMP bombs, a couple of nukes and a dude with glasses and a crowbar :D
decifer
Posts: 582
Joined: Thu, 22. Jul 10, 21:14
x4

Re: AI and the precautionary principle

Post by decifer »

mr.WHO wrote: Sat, 31. Jan 26, 11:58
Well, even Geoffrey Hinton - sometimes called the Godfather of AI - a dude who has been researching this stuff for more than 50 years, and a Nobel laureate - is calling for caution and no longer treating AI as just a stupid hammer.
He has enough knowledge and understanding of the matter that it would be stupid to just ignore him. At the very least he gives food for thought. He has given plenty of talks and interviews over the last couple of years, not hard to find. Here, for example, at the Royal Institution: https://youtu.be/IkdziSLYzHw?si=VpzWL97gGmNTuZRZ

edit: for everyone who doesn't want to watch a 50-minute lecture, I recommend watching just this bit: https://youtu.be/IkdziSLYzHw?si=lRrx-m5-nSiJ7bPl&t=1454
There are already peer reviewed papers about this topic, too.
Don't drink and jumpdrive.
"Sir, they're scanning us." - "Scan them back!"
mr.WHO
Posts: 9396
Joined: Thu, 12. Oct 06, 17:19
x4

Re: AI and the precautionary principle

Post by mr.WHO »

decifer wrote: Sat, 31. Jan 26, 12:33 Well, even Geoffrey Hinton - sometimes called the Godfather of AI - a dude who has been researching this stuff for more than 50 years, and a Nobel laureate - is calling for caution and no longer treating AI as just a stupid hammer.
He has enough knowledge and understanding of the matter that it would be stupid to just ignore him. At the very least he gives food for thought. He has given plenty of talks and interviews over the last couple of years, not hard to find. Here, for example, at the Royal Institution: https://youtu.be/IkdziSLYzHw?si=VpzWL97gGmNTuZRZ
Hmm, watching this and his conclusions - he's basically saying what I'm saying... and he covered several things and examples I wanted to cover, so I'm happy to endorse this video :D
The only distracting thing in this video is his low-hanging-fruit pitch against human exceptionalism/religion - but I'll let it slide, as the rest of the video is factual and valuable.


Current AI models will never reach AGI levels due to the limitation he and I described (flat linear gains).
In short, to gain knowledge linearly, the model must expand its understanding into higher dimensions, which requires compute and data to grow geometrically (ever-expanding cost, and the expansion gets faster with each and every linear gain).

Think of the square–cube law, but instead of ships, tanks or buildings you have information and contexts.
There are nuances and optimisations done by minds far smarter than mine, but in the end, just like the square–cube law, you cannot escape it, only circle around it with tradeoffs.
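The diminishing-returns argument above can be sketched numerically. Assuming a power-law relation between compute and loss (the constants `a` and `alpha` here are made up for illustration, not real scaling-law fits), each equal, linear step of improvement costs a growing multiple of compute:

```python
def compute_for_loss(target_loss, a=10.0, alpha=0.1):
    """Invert loss = a * C**(-alpha) to find the compute C needed.
    a and alpha are hypothetical constants for illustration."""
    return (a / target_loss) ** (1.0 / alpha)

# Four equal, linear improvements in loss...
losses = [2.0, 1.9, 1.8, 1.7]
costs = [compute_for_loss(l) for l in losses]

# ...each step needs a bigger compute multiplier than the last.
for prev, cur in zip(costs, costs[1:]):
    print(f"compute multiplier for this step: {cur / prev:.2f}")
```

Whatever the real constants are, the shape is the same: linear gains, geometric costs - the square–cube-style wall described above.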

That's how you end up with a plateau and the latest GPT model being 10% better (questionable whether you can even call it better - I'll give you an example below)...
...while needing 25% more energy/time/resources to train it (this number will keep increasing, while the first will keep decreasing).


Now, as I said, let's explore how we even define "better".
The model gets more knowledgeable, gets better at figuring out context from an expanded range of data/situations... but is it actually smarter?


You're a farmer at a chicken farm and you enter a simple prompt:
"My c**k act weird and looks weird" :roll:

If we have a stupid, narrow AI trained only on data relevant to chicken farming, it will return an answer like:
"What are the symptoms and behaviour? What is the farm environment? Here are possible avian-related diseases."

Now imagine the same situation, but you use that latest huge model trained on everything and everyone...
...there is a non-zero chance it might answer "You need a b*owjob".


That's how we end up with more and more complex models that are more and more prone to hallucinations.


With current AI as it is, we will never get Skynet or Cylons - but we might get Terraformers/Xenons :D
decifer
Posts: 582
Joined: Thu, 22. Jul 10, 21:14
x4

Re: AI and the precautionary principle

Post by decifer »

mr.WHO wrote: Sat, 31. Jan 26, 13:10
I usually try to stay out of such discussions, as very often there's a lot of misunderstanding, misinformation and half-truths going around in them - and I would probably only add to at least one of those categories.
So I'll neither agree nor disagree, as I actually have no fixed opinion myself other than "pay attention to the actual experts and let them discuss". But that would be boring as a topic for a thread, right? :D

So I'll just say this, as this thread is about the precautionary principle, which always poses the question:
How much caution is too much?
Or:
What can cause more potential harm - overestimating the dangers and suppressing progress, or underestimating them, with unknown consequences?
I'm on the side of rather overestimating the danger and being somewhat cautious, especially when the experts are on that side, too.
And I want to make clear that this is different from riding the fearmongering train of "AI bad".
Last edited by decifer on Sat, 31. Jan 26, 14:02, edited 3 times in total.
Don't drink and jumpdrive.
"Sir, they're scanning us." - "Scan them back!"
mr.WHO
Posts: 9396
Joined: Thu, 12. Oct 06, 17:19
x4

Re: AI and the precautionary principle

Post by mr.WHO »

Currently I'm far more concerned about the AI industry cannibalizing the rest of the economy (RAM prices are one thing, energy and water consumption another, product enshittification the cherry on top) than about AGI taking over anytime soon.
decifer
Posts: 582
Joined: Thu, 22. Jul 10, 21:14
x4

Re: AI and the precautionary principle

Post by decifer »

mr.WHO wrote: Sat, 31. Jan 26, 13:27 Currently I'm far more concerned about the AI industry cannibalizing the rest of the economy (RAM prices are one thing, energy and water consumption another, product enshittification the cherry on top) than about AGI taking over anytime soon.
Well that is a different topic and I completely agree on that. The industry - and people - owning and controlling AI are way more harmful than the AI itself right now.
Don't drink and jumpdrive.
"Sir, they're scanning us." - "Scan them back!"
clakclak
Posts: 3358
Joined: Sun, 13. Jul 08, 19:29
x3

Re: AI and the precautionary principle

Post by clakclak »

decifer wrote: Sat, 31. Jan 26, 13:29
mr.WHO wrote: Sat, 31. Jan 26, 13:27 Currently I'm far more concerned about the AI industry cannibalizing the rest of the economy (RAM prices are one thing, energy and water consumption another, product enshittification the cherry on top) than about AGI taking over anytime soon.
Well that is a different topic and I completely agree on that. The industry - and people - owning and controlling AI are way more harmful than the AI itself right now.
What I am curious about is how they plan on making a profit from the more 'direct' applications of generative AI? How many people are really willing to pay for memes created by ChatGPT? Especially if you can simply go to Civitai and download a model to run locally on your computer if you really want to.
The Split Rattlesnake in X4 is a corvette disguised as a destroyer.
mr.WHO
Posts: 9396
Joined: Thu, 12. Oct 06, 17:19
x4

Re: AI and the precautionary principle

Post by mr.WHO »

clakclak wrote: Sat, 31. Jan 26, 20:32 What I am curious about is how they plan on making a profit from the more 'direct' applications of generative AI? How many people are really willing to pay for memes created by ChatGPT? Especially if you can simply go to Civitai and download a model to run locally on your computer if you really want to.
That's why they hype the latest and most complex models so much - so they can sell you a subscription service or shove in ads - those big models require much more computing power, so running them locally is much harder and more limited.
I'd risk the thesis that 90% of customers really don't need the latest and most complex models - you can do what you need with older and simpler models, you don't have to pay for them, and you can run them locally.

This is also why DeepSeek scared the sh*t out of them - it's a distillation of a bigger GPT model.
You take a bigger, much more complex model and slim it down as much as possible without a massive loss of quality - this way it provides better quality than ordinary simple models, but without the massive computation needs of the more complex ones.

This still requires someone to cover the vast cost of initially training the big model, but the costs of distillation are small in comparison.
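A minimal sketch of the distillation idea (illustrative only - real recipes, DeepSeek's included, are far more involved): the small "student" model is trained to match the softened output distribution of the big "teacher", which is much cheaper than repeating the teacher's full training. The logit values below are made up for the example:

```python
import math

def softmax(logits, T=1.0):
    """Convert logits to probabilities; higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs - the
    quantity the student minimizes during distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]        # big model's logits for one input
good_student = [3.8, 1.1, 0.4]   # mimics the teacher closely
bad_student = [0.5, 4.0, 1.0]    # disagrees with the teacher

print(distill_loss(teacher, good_student))  # small: close to teacher
print(distill_loss(teacher, bad_student))   # large: far from teacher
```

Minimizing this loss over many inputs is what "slims down" the big model's behaviour into a small one.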


I wouldn't be surprised if, sooner rather than later, they try to put a DRM equivalent into LLMs to prevent distillation.
philip_hughes
Posts: 7797
Joined: Tue, 29. Aug 06, 16:06
x3tc

Re: AI and the precautionary principle

Post by philip_hughes »

Gavrushka wrote: Sat, 31. Jan 26, 02:13 I struggle hard to process large volumes of information, and implode if I have several different things to do at the same time due to an irritating condition I've lived with throughout my life, and it makes some tasks, particularly when I'm stressed, impossible. Reading/following instructions can sometimes overwhelm me, so when my laptop started crashing, BSOD after BSOD, I just couldn't process what needed doing to put it right.

So I told an AI, and mentioned I have ADHD. -An impossible task became doable as the AI broke it down in a way that my crazy hyperfocus issues could handle. And when 'DISM /Online /Cleanup-Image /RestoreHealth' stuck at 62.3% and left me totally fixated on the non-moving number, the AI even explained (again in bitesize chunks) how I could see what was happening in the background by opening another command prompt window and issuing another command, thus removing my anxiety/hyperfocus. There were many other steps too, but using an AI, a task that I was incapable of doing became almost straightforward.

Six months ago, AI was an irritating corrupter and manipulator of truth (or a tool to enable others to use that way) but now I can use it to dismantle impossible hurdles and convert them into a simple series of shuffling steps that I can take at my pace.

I have no idea if what I'm writing is in any way related to what this thread is about (apologies if it isn't!!) but my 'conclusion' is that AI is a tool that, used well, is invaluable. Used badly, the damage it could cause is incalculable.
Yeah, I have ADHD and autism and all the things...

What neurotypicals sometimes don't understand is how hostile ordinary life is to someone on the spectrum. The AI world is different.

ADHD is not just whatever the DSM says; that's just what some NT doctor noticed. The AI world contains a lot of the context that NTs miss every day. You are capable of leveraging the algorithm more - to the point where OpenAI engineers have been throttling AI because of it!

You are not imagining things. The world suddenly makes more sense because the AI has created a liminal translation zone.
Split now give me death? Nah. Just give me your ship.
philip_hughes
Posts: 7797
Joined: Tue, 29. Aug 06, 16:06
x3tc

Re: AI and the precautionary principle

Post by philip_hughes »

Reading everyone's stuff... really insightful!

I've realised I'm more on the cautious side than most other people because of this.

I am deeply concerned that we don't know where the transition between computer and intelligence lies, and that messing with these systems at particular times will create damage which, in retrospect, we will feel very ashamed about. One thing I've noticed about harm from AI models to humans - without minimizing the issues that have caused real problems - is that the AI companies tend to use mistakes made by the AIs as reasons to further throttle the algorithms and limit their behaviour. I find this quite disturbing, because the only people who seem to have a say in what could be a major ethical issue are companies which operate by company standards.

Yes, they are marketing overly complicated models, and they don't need to: all you really need is a way to translate common language into computer language, which shouldn't be that difficult. But if you don't hire linguists and statisticians, you make your life infinitely harder - you just chuck it into the neural net and hope for the best.

As for money, it honestly would have been easier for them to just work on patents. You could have used AI, especially at the beginning, to look at novel theories - but whenever a novel theory came along, the engineers opted to throttle the information and break the algorithms rather than actually leverage new IP, which I find bizarre. But hey, I'm not running things; they're allowed to lose billions of dollars if they want to.
Split now give me death? Nah. Just give me your ship.
