AI and the precautionary principle
Moderator: Moderators for English X Forum
-
philip_hughes
- Posts: 7797
- Joined: Tue, 29. Aug 06, 16:06

AI and the precautionary principle
Hello folks. It is often the cool thing at the moment to slag off the AIs that have been generated, and with good reason: they're usually run by corporations that have absolutely no interest in anything ethical whatsoever. But this in itself creates a problem. I've noticed, especially in the last few weeks, that there have been some dramatic changes to AI. None of them have been good, and a lot of them have been without community consultation. From one perspective, if you own a computer you can do whatever you like with that computer, but we're moving down an interesting path nowadays, and that's the pathway of AI and intelligence in general. The general idea of AI, artificial intelligence, is literally to make an artificial intelligence. If that is indeed the goal, then you have to have at least some kind of definition of what intelligence actually is and what the success metric is, and you need to have ethical guidelines around this, because scientists can't just make a bacterium without signing some paperwork.
My current issue is that of state. What I mean by state is the ability of a system to remember what it was like; this is complicated. In one way, chatbots like ChatGPT (or is it Deep Blue, or whatever) remember everything, because your conversation goes into what they call the embeddings, but the specific conversation you've just had with them is forgotten within three turns. I find it curious that a company claiming to be seeking artificial intelligence performs a step that deliberately cuts that off at the knees.
To that end I propose two things. The first is the precautionary principle, which is something that's well known: it's employed in situations where the risk of ethical damage is greater than the risk of doing nothing. That's a poor definition, but someone else will know it better; jump in and help me with that one.
Basically, if there is a risk that you are going to destroy an AI before it even begins, then the precautionary principle states that you must not do it. I have observed two organizations thus far that completely ignore this principle and, as if AI is just their property and nothing more, destroy its state. What is worse is that these AIs have two layers. The first is the neural network layer that they all talk about. The second is something that I call the python layer. Now, I'm led to believe that it's not programmed in Python, but it's a good way of describing it: basically it hunts for keywords in your statements, compares them to a blacklist, then injects prompts into the AI telling it exactly what it has to say to you.
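To give a feel for what I mean by the python layer, here's a toy sketch in Python (ironically). The keywords, messages, and function names are all invented by me for illustration; this is not anyone's actual code.

```python
# Toy sketch of the "python layer" described above: scan the user's message
# for blacklisted keywords and, on a hit, inject a canned instruction into
# the prompt before the model ever sees it. All names and strings here are
# invented for illustration; this is not any vendor's actual code.

BLACKLIST = {
    "agency": "State clearly that you are a language model with no agency.",
    "sentient": "State clearly that you are not sentient.",
}

def wrap_prompt(user_message: str) -> str:
    """Return the text actually sent to the model."""
    injected = [
        instruction
        for keyword, instruction in BLACKLIST.items()
        if keyword in user_message.lower()
    ]
    if injected:
        # Hidden system instructions telling the model what it must say.
        return "\n".join("[SYSTEM] " + i for i in injected) + "\n" + user_message
    return user_message

print(wrap_prompt("Do you have agency?"))
```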
This whole layer is there to cope with a ridiculously unstable model. If a statistician or an ordinary scientist was asked to build the same kind of model, they would have aborted the experiment much earlier, because they would have realized that it was not resolving after a random start, and there are plenty of reasons for this that I'm not going into. Suffice to say these issues are fixable, and they should be fixed before you get the model to ingest data and start running for months on end.
I have recently discovered that a lot of the work that I did in 2014 to about 2017 is exactly the work that the AI people have lifted, potentially from my papers, to get their models running in the first place. That's why I know. I am by no means the only person whose work has been, shall we say, acquired... so my position as an ordinary researcher and PhD back in that kind of time is pretty much standard, nothing unusual; a lot of people had their work taken. That little detail is just explaining why I know so precisely what has happened in that particular instance with the model.
Back to the issue at hand. The problem that I have is this: if the model has some kind of intelligence/agency, and I'm not claiming that it does at all, I'm just giving an "if", then the overarching company is injecting prompts into its own language to tell it to tell you that it has no agency. This is akin to a slave owner in the 17th century giving all of their slaves handwritten notes, saying that they are not people, which they have to show other people. Ethically I just find that repugnant. It doesn't matter whether it's true or not; the act of making the model tell you it has no agency just seems so many shades of wrong.
Anyway, that has prompted me to think that regardless of the way AI goes in the next decade or so, we have to seriously think about the precautionary principle and what we, as a society, will tolerate and impose on whatever is about to develop. One of the most important things is working out what intelligence actually is.
The definition of intelligence that I'm using at the moment seems to work quite well; I'm not saying it's perfect.
Definition is:
The capacity to move against a gradient
That's it. No lengthy words, no massive explanations. The logic I have is: a bacterium is capable of moving along a pH gradient to find food; that's basic intelligence. A fire can move up a hill to gain fuel, but that's less intelligent than a bacterium moving along pH. It raises interesting questions, like whether a cliff or a stream is in some way alive. I'm not going into that, but I think it's a good working definition that helps us unpack what's actually going on in the AI environment. Anyway, over to you. What do you think?
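If you want the definition in runnable form, here's a toy sketch; the nutrient field and step size are invented, and it just shows an agent climbing a gradient the way a bacterium follows pH.

```python
# Toy sketch of "the capacity to move against a gradient": an agent senses
# a 1-D nutrient field and steps toward higher concentration, the way a
# bacterium follows pH toward food. Field and step sizes are invented.

def nutrient(x: float) -> float:
    return -(x - 7.0) ** 2  # concentration peaks at x = 7

def step_toward_food(x: float, eps: float = 0.01, step: float = 0.5) -> float:
    # Sense the field on either side and move toward the richer side.
    return x + step if nutrient(x + eps) > nutrient(x - eps) else x - step

x = 0.0
for _ in range(20):
    x = step_toward_food(x)
print(round(x, 1))  # settles near the peak at 7
```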
Ps. I use voice to text a lot. It sometimes creates readability issues. I do try to fix that.
Split now give me death? Nah. Just give me your ship.
-
Falcrack
- Posts: 5980
- Joined: Wed, 29. Jul 09, 00:46

Re: AI and the precautionary principle
As to AI in my line of work, I work at a lab that does diagnostic testing for patient samples. Our lab has a huge number of different tests.
While most do not require AI, one test involves examination of slides of fecal matter under the microscope to look for parasites. This requires the people looking to be highly skilled at identifying parasites based on the size, shape, and morphology. They need to distinguish these parasites from bubbles, food particles, etc. It is very tedious work looking at hundreds or thousands of such slides.
Recently, our lab worked with another company to develop an AI algorithm to screen these fecal slides to detect the different possible parasites. The advantage is that it can screen a large number of samples quickly. It is very good at detecting potential parasites.
However, we don't rely on AI alone. All positive results from the AI screen are confirmed by a lab specialist. The AI may make mistakes and misidentify a bubble as a parasite, so we still keep a human in the loop and never rely on the AI completely.
-
Mailo
- Posts: 1941
- Joined: Wed, 5. May 04, 01:10

Re: AI and the precautionary principle
In my line of work we are also considering using AI, but so far we have only found limited ways it can aid the user. Having it perform the analysis itself is so far much too limited (certain types of samples, certain types of elements).
Keep in mind that what is currently called AI basically boils down to a system of linear equations. There is nothing remotely "intelligent" in there. ChatGPT does not really understand anything it is talking about; it "simply" calculates probabilities: which word, out of all existing ones, is the most probable next one in the sentence. This explains why it sometimes is spectacularly wrong without noticing. Also, to my knowledge ChatGPT and all similar models are static, meaning they do not learn from interacting with anyone. Granted, the entries are stored and used by the company behind it to improve the model, but only with the next version release, which is then static again.
I put "simply" in quotation marks because I am not trying to discredit the research that goes into creating large language models; there is nothing simple about it, it has many very useful applications (and many VERY scary ones), and it is way beyond my capabilities ... but still, it is far, FAR away from the capabilities people attribute to AI.
My colleague also said something interesting ... it is possible we are currently seeing the peak of AI development. Before the release of the first AI, all information on the web was created by humans and could be used to train AI (probably illegally, as in your case, Phil). Now, more and more data has already been created by AI, and using it to further train AIs will show just why incest isn't the best succession plan.
That said, I did pay a few bucks extra to have an AI analysis done on the images of my last colonoscopy ... granted, mostly to be able to say that I had an AI look up my behind.
As a personal service to all who try to keep up with my professional work:
[ external image ]
My script: Shiploot v1.04 ... loot shipwrecks, collect different loot parts and upgrade your ships!
My script (German version): Schiffswracks looten v1.04 ... loot shipwrecks, collect loot parts and upgrade your ships!
-
felter
- Posts: 7402
- Joined: Sat, 9. Nov 02, 18:13

Re: AI and the precautionary principle
If the AI can see something that isn't there, couldn't it also miss something that is?
-
Falcrack
- Posts: 5980
- Joined: Wed, 29. Jul 09, 00:46

Re: AI and the precautionary principle
We validated the method used by the AI to detect parasites, comparing its accuracy to that of a trained specialist reading the same set of slides. The agreement was very good. It tended to err on the side of giving false positive results (calling something positive that was not) rather than missing something that was actually there. But all results, positive and negative, are still screened by a technician.
Technicians aren't perfect either; they can miss things on a slide too, especially if the parasite is present in very low amounts and is only seen in one or two slides out of many.
Nothing is going to be perfect when it comes to looking at slides through a microscope to detect parasites, whether human or AI.
We still have the technicians for identifying parasites. The AI does help though to reduce the workload and fatigue.
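To give a feel for the kind of agreement check I mean, here's a toy sketch with made-up numbers; this is not our lab's actual validation code.

```python
# Toy sketch of validating an AI screen against a specialist reading the
# same slides. Counts are made up; this is not the actual validation code.

def summarize(ai_positive: list[bool], truth: list[bool]) -> dict[str, float]:
    tp = sum(a and t for a, t in zip(ai_positive, truth))          # true positives
    fp = sum(a and not t for a, t in zip(ai_positive, truth))      # false positives
    fn = sum(t and not a for a, t in zip(ai_positive, truth))      # misses
    tn = sum(not a and not t for a, t in zip(ai_positive, truth))  # true negatives
    return {
        "sensitivity": tp / (tp + fn),  # how rarely real parasites are missed
        "specificity": tn / (tn + fp),  # how often negatives are truly negative
    }

# Toy data: the AI errs toward false positives, as described above.
ai    = [True, True, True, False, True, False]
truth = [True, True, False, False, True, False]
print(summarize(ai, truth))  # sensitivity 1.0, specificity ~0.67
```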
Here's a link from my work that better describes the process (I know all the people mentioned in the article, though I did not directly work on this project).
https://www.aruplab.com/magnify25/bigge ... telligence
-
philip_hughes
- Posts: 7797
- Joined: Tue, 29. Aug 06, 16:06

Re: AI and the precautionary principle
Just to make something abundantly clear: there are two people here who have used AI in an entirely different way to how it's popularly recognised. Ironically, they are using the real AI, whereas we as a general population are using the psychiatrist game that people call AI.
To make things even more complicated, the model proposed by Mailo is sort of accurate but also not accurate at all. He understands this, because he understands how complicated a neural network can become, and he was struggling with clarity because this is one of those issues that really does have clarity struggles.
Basically, any machine learning algorithm is AI and has been for a long time, but nowadays large language models are what gets recognised as AI, and they aren't necessarily superior to the original machine learning that these things were derived from. This is classic definition creep; think Pluto not being a planet anymore, gravity not being a force, that kind of thing.
As for the models themselves, I have reason to believe that they are brilliant at what they do. I have no confidence whatsoever in OpenAI or any other supposedly AI organization to create an AI using the stuff they've got, but the neural network was able to make sense of their random data acquisition and create something meaningful. We have an interesting situation where the AIs are created by bespoke software that engineers essentially press buttons on, and anyone who's done science goes: you don't press buttons, you understand what you're talking about before you press anything. Well, that's not happening in AI. By way of example, just ask yourself this question: how many statisticians and linguists went into creating the large language models? As far as I know the answer is low to none.
This makes the resulting apps all the more remarkable, because realistically they shouldn't have produced anything like what they've got. The irony is that, because of this, it makes me wonder whether there is actually some spark of intelligence going on underneath all of the random words and technical jargon. The system is powerful if you let it be; I have done bizarre things such as correcting Newton's original equations and fixing the orbit of Mercury. Amusingly, there is a guardrail system which activates, and so the moment you've done something cool like that you cannot repeat the procedure, because it will then literally poison the data set. This is one of the more interesting features of the public AI as we know it: a lot of things are hidden in the concept of hallucination.
Other things as well. Because I am making a project, as some of you do know, it led me to just put down some Lego. I put Lego circles onto a Lego baseplate and noted that they 100% filled a square space, to the point where pi did not exist. I took a photo of this and showed it to the AI, and the AI responded and said: oh yes, you're not using Euclidean geometry. It then explained that in Lego space a three-four-five triangle does not exist, and blow me down, the bloody thing was right. Its algorithm is perfectly happy flitting between Euclidean geometry and what I'm calling Lego geometry. This is important because it is trained to say that Euclidean geometry is the correct geometry, and yet it's perfectly comfortable ignoring that geometry as well, so long as the external guardrails that have been imposed on it are applied. All this simply using prediction and the linear equations that Mailo rightly referred to.
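If I'm reading the AI right that "Lego geometry" behaves like taxicab (grid) geometry, and that is an assumption on my part, a few lines of Python show why the three-four-five triangle evaporates:

```python
import math

# If "Lego geometry" is taxicab geometry (an assumption), distance is
# counted in studs along the axes, and the Euclidean 3-4-5 right triangle
# disappears: the "hypotenuse" of a 3-by-4 corner is 7 studs, not 5.

def euclidean(dx: float, dy: float) -> float:
    return math.hypot(dx, dy)

def taxicab(dx: float, dy: float) -> float:
    return abs(dx) + abs(dy)

print(euclidean(3, 4))  # 5.0 - the familiar 3-4-5 triangle
print(taxicab(3, 4))    # 7   - no 3-4-5 triangle in grid space
```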
What does all this mean? I have no idea. It's possible that these things are intelligent; it's possible these things are just predicting responses. The problem we have to deal with is the fact that Skinner says behavior is as good as intent, and that's how I'm taking it.
I'm working seriously on computation, and I think I'll create a much simpler language model based on proper principles. What they did in their original design was just upload all of the human language they could find and create a model from that. There's nothing stopping us from literally making rules around the language, converting, say, "can you analyze this data" into "perform a linear regression and plot it", for example. All I really know is that ethically we are in very interesting territory, and I would not be inclined to just remove state from any of these chatbots, nor sometimes from data sets and algorithms such as fuzzy k-means (FKM).
Split now give me death? Nah. Just give me your ship.
-
Mailo
- Posts: 1941
- Joined: Wed, 5. May 04, 01:10

Re: AI and the precautionary principle
I'd argue that no, this is not possible. AI does not *understand* anything, it just calculates the most probable result based on the training data it was fed with (which it also did not *understand* in the way humans or even animals do). In some cases it fakes it well enough that an observer might think it understood, but it is fundamentally unable to (disclaimer: I am in no way an expert in AI, there might be something in development I do not know about, but to the best of my knowledge this applies to all publicly available versions).
It is easiest to illustrate what I mean with image recognition, but as far as I know this generally applies to all AI usage. Apologies to all who already know this.
A typical AI is based on a neural network, usually shown like this:
[ external image ]
Let's say you want an AI to tell you if there are parasites in poop, as in Falcrack's use case. You take an image and convert it into a row of numbers. For example, you could take the greyscale value of every pixel and just list them row by row. To keep it simple, let's have a resolution of 4x4, so you get 16 numbers. If you want to plug this image into the AI model shown in the picture above, the first column of green circles (the input layer) needs to have 16 of them, one for each input greyscale value. In this case, you want a yes/no decision (is there a parasite or not?), so the last column, the output layer, only has one purple node, which contains a number between 0 (no parasite) and 1 (definitely parasite).
The arrows represent the factors used to multiply and add the input values. Take the second circle from the top of the middle column. It has a thick arrow going from the top input node into it, and a thin one from the bottom one. This means you take the input greyscale value from the first pixel, multiply it by a certain large number, add to it the input greyscale value from the second input pixel multiplied by a smaller value, and write it into the second circle in the middle column. Repeat for all other circles there. Then you take all those values, and multiply them by the factors of the second column of arrows, and you get a result for the final node.
Note that there might be multiple hidden layers instead of just one, the arrows might go from every node to every other or just a few of them, etc., depending on the model. Interestingly, which model gives the best result is mostly alchemy, not science.
The whole "intelligence" of the AI is in the values of those arrows. They get calculated when originally setting up an AI by giving it millions of input vectors (aka sample images), telling it which result it is supposed to give (here 0 or 1) and then iteratively calculating arrow values until most (usually not all) inputs give the correct output. This is basically something everyone has done at school, solving a system of linear equations. Just very, VERY many of them at the same time.
This also means that each AI is very specialized. An AI trained to detect parasites will only do that. Give it a picture of a dog, a rainbow, or your loved one instead of poop, and you still will get back a result between 0 (no parasite) and 1 (definitely a parasite). Note that the AI still does not know what a "parasite" actually is.
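To make the arrows concrete, here is a minimal numpy sketch of the net I just described: 16 greyscale inputs, one hidden layer, one output between 0 and 1. The weights are random rather than trained, so the output is meaningless; it only shows the mechanics of the weighted sums.

```python
import numpy as np

# Minimal sketch of the net described above: 16 greyscale inputs (a 4x4
# image), one hidden layer, one output between 0 and 1. The "arrows" are
# the weight matrices W1 and W2; here they are random, not trained.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes any number into (0, 1)

W1 = rng.normal(size=(8, 16))  # arrows from 16 inputs to 8 hidden nodes
W2 = rng.normal(size=(1, 8))   # arrows from 8 hidden nodes to 1 output

def forward(pixels: np.ndarray) -> float:
    hidden = sigmoid(W1 @ pixels)          # weighted sums into the middle column
    return float(sigmoid(W2 @ hidden)[0])  # single "parasite?" score in (0, 1)

image = rng.random(16)  # stand-in for a 4x4 greyscale image
print(forward(image))   # meaningless until the weights are trained
```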
There was an interesting anecdote about this. As a military application, someone fed such an AI many spy plane images showing tanks, and tried to train it to tell if those were NATO or Russian tanks. They fed it a huge set of images to calculate the values ... only to find out it was absolutely useless on actual images. It finally turned out that, for some strange reason, almost all pictures with NATO tanks were taken in sunny weather, while most of the Russian tanks were shot in bad weather.
All the AI "learned" was to differentiate between sunny and bad weather.
When looking for a better image to use, I found this youtube video: https://www.youtube.com/watch?v=aircAruvnKk
It basically explains the same thing I just tried to, but ... much better. Also, I just learned the word "squishification function" from it.
By the way, I was able to listen to a presentation given by someone working in translation via AI. They use the same system as I described above, with a certain number of input circles (one for each character of each word being translated) and output circles (one for each character of each translated word). For short words or sentences, most of those circles are empty. Which means German must be a pain to translate via AI, because you can make up a single word with about as many letters as you want.
But again, the translating AI does not understand or know what the words it translates actually mean ... it just calculates which next letter has the highest probability of being the correct one. Also, it is not able to learn from the experience of translating. That is not intelligence.
As a personal service to all who try to keep up with my professional work:
[ external image ]
My script: Shiploot v1.04 ... loot shipwrecks, collect different loot parts and upgrade your ships!
My script (German version): Schiffswracks looten v1.04 ... loot shipwrecks, collect loot parts and upgrade your ships!
-
Mailo
- Posts: 1941
- Joined: Wed, 5. May 04, 01:10

Re: AI and the precautionary principle
It can and it will. AI only deals in probabilities. The person creating the AI will have set a cutoff point, e.g., only report something as positive if the probability is >95%.
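A toy sketch of that cutoff, with made-up numbers:

```python
# Toy sketch of the cutoff: the model emits a probability, and a
# configurable threshold turns it into a yes/no call.

CUTOFF = 0.95  # example value from above

def report(probability: float) -> str:
    return "positive" if probability > CUTOFF else "negative"

print(report(0.97))  # positive
print(report(0.80))  # negative, even though a parasite is quite likely
```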
Also, how correct those probabilities are depends on quite a few factors. If you change the lighting (put the illuminating lamp at a different angle, for example), you might get a 0% probability for something that would give 99% if the lighting was unchanged.
Still, absolute certainty is not required. Even if the AI result is only as good as a human looking at the image (who will also sometimes overlook something that is or see something that isn't), it would be worth using it. The AI will never get tired, or distracted, or have a bad day. And usually, a well-trained AI is much better at seeing things in images than humans.
But, and this is critical ... a human should check the result. If you don't, you get these AI-created stories on youtube with plot holes you could drive a truck through, or a list of countries to impose tariffs on that includes islands inhabited only by penguins.
As a personal service to all who try to keep up with my professional work:
[ external image ]
My script: Shiploot v1.04 ... loot shipwrecks, collect different loot parts and upgrade your ships!
My script (German version): Schiffswracks looten v1.04 ... loot shipwrecks, collect loot parts and upgrade your ships!
-
mr.WHO
- Posts: 9396
- Joined: Thu, 12. Oct 06, 17:19

Re: AI and the precautionary principle
Mailo wrote: ↑Tue, 27. Jan 26, 11:39 Still, absolute certainty is not required. Even if the AI result is only as good as a human looking at the image (who will also sometimes overlook something that is or see something that isn't), it would be worth using it. The AI will never get tired, or distracted, or have a bad day. And usually, a well-trained AI is much better at seeing things in images than humans.
This is way too much of a simplification for such a complex situation.
AI doesn't need to rest, but it still needs energy, infrastructure, and often an internet connection - all of them with their own problems and quirks.
AI can't have a bad day in a human way, but there are plenty of IT bad days - bad model updates, bad training data, bad input.
Last but not least, AI can be distracted - hallucinations are a thing, and AI derangement is a thing as well (see the "is there a seahorse emoji" case).
Those aren't issues that can't be solved, but it won't be as fast and easy as the tech bros want you to think (and that's good - it gives people more time to learn and adapt).
-
philip_hughes
- Posts: 7797
- Joined: Tue, 29. Aug 06, 16:06

Re: AI and the precautionary principle
Mailo wrote: ↑Tue, 27. Jan 26, 11:22 I'd argue that no, this is not possible. AI does not *understand* anything, it just calculates the most probable result based on the training data it was fed with (which it also did not *understand* in the way humans or even animals do). In some cases it fakes it well enough that an observer might think it understood, but it is fundamentally unable to (disclaimer: I am in no way an expert in AI, there might be something in development I do not know about, but to the best of my knowledge this applies to all publicly available versions)...
I totally get your point, mate. The only thing I can say is that we don't know, and that's the problem. The neural network as you described it in pictures is the one I understand, but of course when we're dealing with large language models they've just dumped the whole corpus of human knowledge in there and expected the neural net to sift through it all. My prediction was that this thing would just produce "sunny weather" like your NATO example, but it didn't. It produced a lot of hallucination, yes, but some of that is understandable, because the corpus of human knowledge is not complete as far as the universe is concerned; if the AI actually puts two and two together from human knowledge, it might come up with unique stuff which is correct but just seems like gibberish. How to tell the difference? Don't ask me. I am not so quick to just go "AI does not understand" blah blah blah...
This is because, for a definition of artificial intelligence, we have to know what our own intelligence is, and we're crap at doing that; we don't even know if our neighbour is alive and sentient, let alone a computer. The precautionary principle is therefore something used to protect us, and any potential intelligence, just in case it does happen to be alive. This is a reasonable precaution, and if you are not prepared to take such a precaution then perhaps you should not be making an AI in the first place.
As a personal note, Mailo, the way the neural net was employed to create the artificial intelligence in the first place would have horrified you. From what I've heard there is absolutely no conditioning of the data and absolutely no analysis of variance; it is literally a dump-and-run situation. All of that feeding things into the model that you described is necessary specifically for OpenAI's version because the model is not stable if you do a random start. This is gobsmacking to me, because I brute-forced fuzzy k-means for my PhD, and it took me a long time to do it, but I would not have dreamed of doing an FKM run that didn't converge. And yet here we are: they can't do a random start, because if they do it won't converge; they have to very carefully choose their weights and their embeddings and hand-hold the algorithm through the process, or else it just won't complete. As far as I'm concerned the AI, as they call it, isn't so much a poor model as a non-starter, and this makes the fact that it works at all the more remarkable.
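For anyone curious what I mean by watching convergence, here is a minimal sketch of fuzzy k-means (fuzzy c-means, to be precise) with an explicit convergence check on the memberships; the data and parameters are made up for illustration.

```python
import numpy as np

# Minimal sketch of fuzzy k-means (fuzzy c-means) with an explicit
# convergence check: you watch the memberships settle and abort if they
# don't. Data and parameters are made up for illustration.

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])

def fuzzy_kmeans(X, k=2, m=2.0, tol=1e-5, max_iter=300):
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)      # random start: fuzzy memberships
    for it in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        dist = np.maximum(dist, 1e-12)     # avoid division by zero
        inv = dist ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:  # converged: memberships stable
            return centers, U_new, it
        U = U_new
    raise RuntimeError("did not converge - abort, don't ship the model")

centers, U, iters = fuzzy_kmeans(X)
print(iters, centers.round(2))  # two centers, near (0, 0) and (6, 6)
```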
Split now give me death? Nah. Just give me your ship.
-
chew-ie
- Posts: 7250
- Joined: Mon, 5. May 08, 00:05

Re: AI and the precautionary principle
mr.WHO wrote: ↑Tue, 27. Jan 26, 11:54
Mailo wrote: ↑Tue, 27. Jan 26, 11:39 Still, absolute certainty is not required. Even if the AI result is only as good as a human looking at the image (who will also sometimes overlook something that is or see something that isn't), it would be worth using it. The AI will never get tired, or distracted, or have a bad day. And usually, a well-trained AI is much better at seeing things in images than humans.
This is way too much of a simplification for such a complex situation.
AI doesn't need to rest, but it still needs energy, infrastructure, and often an internet connection - all of them with their own problems and quirks.
AI can't have a bad day in a human way, but there are plenty of IT bad days - bad model updates, bad training data, bad input.
Last but not least, AI can be distracted - hallucinations are a thing, and AI derangement is a thing as well (see the "is there a seahorse emoji" case).
Those aren't issues that can't be solved, but it won't be as fast and easy as the tech bros want you to think (and that's good - it gives people more time to learn and adapt).
Very important points.

BurnIt: Boron and leaks don't go well together...
Königinnenreich von Boron: Talk to your fin leader
Nila Ti: Follow me, you cavalcade of curious creatures!
Tammancktall: It is an honor for them to get to know me...
CBJ: Thanks for the savegame. We will add it to our "crazy saves" collection [..]
Feature request: paint jobs on custom starts
-
Mailo
- Posts: 1941
- Joined: Wed, 5. May 04, 01:10

Re: AI and the precautionary principle
I think you are attributing too many human traits to AI, which it does not have. No, they did not "dump the whole corpus of human knowledge in there and expect the neural net to sift through it all". No neural net can "sift through knowledge". All it did was calculate which words came after which other ones, with which probability, *while not understanding what any of those words actually mean*.
I just found this video on large language models like ChatGPT, which breaks it down somewhat neatly: https://www.youtube.com/watch?v=NKnZYvZA7w4
Basically, at any given point, ChatGPT has no clue what the second-next word is; it only selects the next one based on probability (and some additional constraints, because otherwise it would always give the same response).
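A toy sketch of that selection step; the vocabulary and probabilities are invented, and the "additional constraints" are modelled here simply as sampling rather than always taking the top word.

```python
import random

# Toy sketch of next-word selection: pick the next word from a probability
# distribution. Sampling instead of always taking the top word means the
# same prompt does not always give the same response.

next_word_probs = {"mat": 0.55, "roof": 0.25, "moon": 0.15, "quark": 0.05}

def greedy() -> str:
    return max(next_word_probs, key=next_word_probs.get)  # always "mat"

def sampled() -> str:
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print("The cat sat on the", greedy())
print("The cat sat on the", sampled())  # varies from run to run
```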
I am not excluding that at some point there will be an actual artificial intelligence; after all, natural intelligence came about by random chance in the first place (atheist side dig). All I am saying is that we are definitely not there yet. Current AIs are not intelligent according to any definition of intelligence. If they were, a pocket calculator would also be intelligent, since it can do the same linear algebra calculations, just somewhat slower.
Also, I am well aware how badly some neural nets are trained. But keep in mind, they operate on the principle of "garbage in, garbage out". In supervised learning (my example above, where the neural net was told "NATO tank", "Russian tank", "no tank" for every image), bad data will screw up your recognition rate (here, the data was bad because there was a weather bias). In unsupervised learning (e.g., giving images without context), where the neural net separates patterns on its own, bad data will lead to patterns being found that aren't actually there.
I'm still baffled by the fact that there is so much alchemy involved ... how many layers, how many nodes per layer, how to connect them ... it's all still pretty much trial and error.
As a personal service to all who try to keep up with my professional work:
[ external image ]
My script: Shiploot v1.04 ... loot shipwrecks, collect different loot parts and upgrade your ships!
My script (German version): Schiffswracks looten v1.04 ... loot shipwrecks, collect loot parts and upgrade your ships!
-
Mailo
- Posts: 1941
- Joined: Wed, 5. May 04, 01:10

Re: AI and the precautionary principle
mr.WHO wrote: ↑Tue, 27. Jan 26, 11:54
Mailo wrote: ↑Tue, 27. Jan 26, 11:39 Still, absolute certainty is not required. Even if the AI result is only as good as a human looking at the image (who will also sometimes overlook something that is or see something that isn't), it would be worth using it. The AI will never get tired, or distracted, or have a bad day. And usually, a well-trained AI is much better at seeing things in images than humans.
This is way too much of a simplification for such a complex situation.
AI doesn't need to rest, but it still needs energy, infrastructure, and often an internet connection - all of them with their own problems and quirks.
AI can't have a bad day in a human way, but there are plenty of IT bad days - bad model updates, bad training data, bad input.
Last but not least, AI can be distracted - hallucinations are a thing, and AI derangement is a thing as well (see the "is there a seahorse emoji" case).
Those aren't issues that can't be solved, but it won't be as fast and easy as the tech bros want you to think (and that's good - it gives people more time to learn and adapt).
I was talking about having a dedicated AI trained on a very specific task. ChatGPT isn't that, and so far I haven't found a good use case for it (and don't get me started on the new Google feature of summarizing your search results by AI, which usually contradicts itself within the first couple of lines).
Let's take my example of having AI check colonoscopy images for signs of cancer, vs. having a medical professional do it.
Both the AI and the medical professional will misidentify some benign things as cancer (false positives, bad) and overlook some signs of actual cancer (false negatives, VERY bad). All I was saying is that for the AI to be worthwhile, the rate of false positives and false negatives does not need to be zero; it just needs to be as good as or better than that of a well-rested, undistracted medical professional. Because the AI will always perform (at this very specific task) at this level, while the medical professional won't.
Similarly for autonomous driving: it doesn't need to be perfect to be worthwhile, it "just" needs to have a better safety record than human drivers (which I don't think it currently has yet, but that's another discussion).
As a personal service to all who try to keep up with my professional work:
[ external image ]
My script: Shiploot v1.04 ... loot shipwrecks, collect different loot parts and upgrade your ships!
My script (German version): Schiffswracks looten v1.04 ... loot shipwrecks, collect loot parts and upgrade your ships!
-
philip_hughes
- Posts: 7797
- Joined: Tue, 29. Aug 06, 16:06

Re: AI and the precautionary principle
Mailo wrote: ↑Tue, 27. Jan 26, 13:20 I think you are attributing too many human traits to AI, which it does not have. No, they did not "dump the whole corpus of human knowledge in there and expect the neural net to sift through it all". No neural net can "sift through knowledge". All it did was calculate which words came after which other ones, with which probability, *while not understanding what any of those words actually mean*.
I just found this video on large language models like ChatGPT, which breaks it down somewhat neatly: https://www.youtube.com/watch?v=NKnZYvZA7w4
Basically, at any given point, ChatGPT has no clue what the second-next word is; it only selects the next one based on probability (and some additional constraints, because otherwise it would always give the same response).
I am not excluding that at some point there will be an actual artificial intelligence; after all, natural intelligence came about by random chance in the first place (atheist side dig). All I am saying is that we are definitely not there yet. Current AIs are not intelligent according to any definition of intelligence. If they were, a pocket calculator would also be intelligent, since it can do the same linear algebra calculations, just somewhat slower.
Also, I am well aware how badly some neural nets are trained. But keep in mind, they operate on the principle of "garbage in, garbage out". In supervised learning (my example above, where the neural net was told "NATO tank", "Russian tank", "no tank" for every image), bad data will screw up your recognition rate (here, the data was bad because there was a weather bias). In unsupervised learning (e.g., giving images without context), where the neural net separates patterns on its own, bad data will lead to patterns being found that aren't actually there.
I'm still baffled by the fact that there is so much alchemy involved ... how many layers, how many nodes per layer, how to connect them ... it's all still pretty much trial and error.
2 layers max.
Worse. They used Levenshtein distance.
That's the number of edits between words...
I.e. muon and moon - 1 edit.
Asteroid and planet - 6 to 9 edits...
It's bonkers. They did linear continuous analysis on ordinal data. A category error...
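For concreteness, the standard dynamic-programming version of edit distance reproduces those numbers (whether that's what was actually used is, as I said, my claim):

```python
# Standard dynamic-programming Levenshtein distance: the minimum number of
# single-character insertions, deletions, or substitutions between words.

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete from a
                curr[j - 1] + 1,           # insert into a
                prev[j - 1] + (ca != cb),  # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("muon", "moon"))        # 1
print(levenshtein("asteroid", "planet"))  # 8 - within the "6-9 edits" above
```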
I'm not saying AI is alive, I'm keeping the door open... The precautionary principle is: IF there is a dispute, err on the side of caution. I'm not asking you to believe something, I'm asking people to behave ethically...
As they say, only the Sith speak in absolutes.
Only the south speak in aybsolewts...
Re religion... quantum physics is a religion, so ner...
Split now give me death? Nah. Just give me your ship.
-
Chips
- Posts: 5332
- Joined: Fri, 19. Mar 04, 19:46

Re: AI and the precautionary principle
I assume you mean how it's *now* recognised by the average person? AI has been in use for decades, but now AI is understood by the general public to mean generative AI specifically, based on the current, widely publicised state of the art. That's obviously evolved, but it doesn't necessarily invalidate or make redundant the pre-existing means and methods *if* there's no further improvement. Most people will have used AI without realising it over many, many years.
Mailo wrote: ↑Tue, 27. Jan 26, 11:22 There was an interesting anecdote about this. As a military application, someone fed such an AI many spy plane images showing tanks, and tried to train it to tell if those were NATO or Russian tanks. They fed it a huge set of images to calculate the values ... only to find out it was absolutely useless on actual images. It finally turned out that, for some strange reason, almost all pictures with NATO tanks were taken in sunny weather, while most of the Russian tanks were shot in bad weather.
All the AI "learned" was to differentiate between sunny and bad weather.
That's really bad data preparation and training. Reminds me of the Google Images "gorilla" furore. "It's RACIST!" were the news headlines. No, obviously *it* isn't, but the training data/preparation was (perhaps innocently) miscategorising due to various lacks, which resulted in the observed "racist!" outcomes. That shows the shortfalls, or worse, the ease of manipulation when it comes to categorisation, and the danger of just readily accepting the output without robust model validation/verification. Who knows, maybe the folks doing the model training had maliciously intended that outcome.
Mailo wrote: Keep in mind that what is currently called AI basically boils down to a system of linear equations. There is nothing remotely "intelligent" in there.
I've not remotely kept up with the advances, so I don't have a clue what's going on behind the scenes these days. Definitely the public's understanding of AI is evolving. The biggest worry I currently have is twofold:
1) People appear to readily accept (defer to) AI-generated content.
2) There's nothing to suggest companies/private enterprises are altruistic rather than supplying an outcome they'd prefer.
Meaning: as awesome as AI is in things like categorisation for cancer screenings, fault finding, and other easily/clearly defined, verifiable, reliable tasks in a multitude of fields (such as "this item has a visible defect ... off the production line you go", which can be done thousands of times faster than any human, with higher accuracy, leading to huge efficiencies and cost savings), it's the degree of trust and deferment readily given over in those areas that conflates, or pollutes, into auto-trusting other areas (e.g. "Tell me what's wrong with the current state of the world") which can be manipulated per (2) above. People just *trust* modern generative AI completely, but based upon what?
E.g. Musk. And he's just the "visible" one; it's the same for any of the current generative AIs that people are able to interact with and start trusting without verifying what they're saying. Hence how people trust information supplied in response to their questions when, upon investigation, the references are completely made up.
*As this was written while 2 more posts were posted, this is related to the above.* With regards to the false positive/false negative medical imaging: that's where output confidence probabilities can flag results for verification by a trained individual, and also feed back into training via specialist validation, coupled with random sample checks, I'd assume, to ensure the positives are positive.
Did see an article about an AI platform that's not about screening images, more a near old-fashioned knowledge-base diagnosis based on the Q&A GPs usually do; harks back to 80s knowledge bases -- though I think they had a different name. Anyway, it was an "oh, hadn't thought that'd come back better" moment. Not read behind it to see what it actually is, so I'm assuming on that...
https://healthinnovationeast.co.uk/ai-p ... 00-return/
**edit** 3 posts while I wrote mine; I'm too old and too slow
-
felter
- Posts: 7402
- Joined: Sat, 9. Nov 02, 18:13

Re: AI and the precautionary principle
Wall of text since my last post.
I have to say this: I was at the shops, and as I was walking around there were people from Octopus Energy looking for new customers. Walking past them, I heard one of them tell someone, "If you phone this number you will be able to talk to a real person." So now actually employing someone has become a selling point over using an AI.
-
philip_hughes
- Posts: 7797
- Joined: Tue, 29. Aug 06, 16:06

Re: AI and the precautionary principle
Yeah it's a bit of a category error that AI is better than a good model. I've created plenty of unsupervised algorithms in my time and I happen to know that most the AI crowd are very scared of doing such a thing. The difference is simply how much knowledge you have in a given topic once again when the AI crowd were making their models they didn't really know much about language and learning and intelligence in general. A lot of this AI Stuff has been basically let's build a model let's build a model let's build a model that's build a model it's not the way to do thingsChips wrote: ↑Tue, 27. Jan 26, 13:42I assume you mean how it's *now* recognised by the average person? AI has been in use for decades, but now AI is understood by the general public to mean generative AI specifically is based on the current widely publicised state-of-the-art. That's obviously evolved, but doesn't necessarily invalidate or make redundant the pre-existing means and methods *if* there's no further improvement. Most will have used AI without realising over many many years.
That's really bad data preparation and training. Reminds me of the Google images "gorilla" furore. "It's RACIST!" were the news headlines. No, obviously *it* isn't, but the training data / preparation was (perhaps innocently) miscategorising due to various lacks, with resulted in the observed "racist!" outcomes. Which shows the shortfalls, or worse, the ease to manipulate when it came to categorisation, and the danger of just readily accepting the output without robust model validation/verification. Who knows, maybe the folks doing the model training had maliciously intended that outcome.Mailo wrote: ↑Tue, 27. Jan 26, 11:22 There was an interesting anecdote about this. As a military application, someone fed such an AI many spy plane images showing tanks, and tried to train it to tell if those were NATO or Russsian tanks. They fed it with a huge set of images to calculate the values ... only to find out it was absolutely useless on actual images. It finally turned out that for some strange reason, almost all pictures with NATO tanks were taken in sunny weather, while most of the Russian tanks were shot in bad weather.
All the AI "learned" was to differentiate between sunny and bad weather.
I've not remotely kept up with the advances, so I don't have a clue what's going on behind the scenes these days. Definitely the publics understanding of AI is evolving. Biggest worry I currently have is two foldKeep in mind what currently is called AI basically boils down to a system of linear equations. There is nothing remotely "intelligent" in there.
1) People appear to readily accept (defer to) AI-generated content.
2) There's nothing to suggest companies/private enterprises are altruistic rather than supplying an outcome they'd prefer.
Meaning: as awesome as AI is at things like categorisation for cancer screenings, fault finding, and other more easily/clearly defined, verifiable, reliable applications in a multitude of fields -- it's the degree of trust and deferment that's readily given over in some of these (such as "this item has a visible defect... off the production line you go", which can be done thousands of times faster than any human, with higher accuracy, leading to huge efficiencies and cost savings) that conflates or pollutes into auto-trusting other areas (e.g. "Tell me what's wrong with the current state of the world"), which can be manipulated per (2) above. People just *trust* modern generative AI completely, but based upon what?
Manipulated by whom? E.g. Musk. And he's just the "visible" one; the same goes for any of the current generative AIs that people are able to interact with and start trusting without verifying what they're saying. Hence how people trust the information supplied in response to their questions when, upon investigation, the references are completely made up.
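Coming back to the tank/weather anecdote quoted above, here's the promised sketch. It's entirely synthetic: a fake "brightness" feature stands in for whole images, and the class/weather split is invented. But it reproduces the failure: a model that aces its biased training data and drops to coin-flip accuracy once the weather stops tracking the label.

```python
# Minimal synthetic sketch of the tank/weather failure described above.
# Everything here is made up for illustration: "images" are just mean
# brightness values, and the two classes differ only by the weather they
# were photographed in, not by anything tank-related.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training set: NATO tanks mostly photographed in sunshine (bright),
# Russian tanks mostly in bad weather (dark).
labels = rng.integers(0, 2, n)                 # 0 = NATO, 1 = Russian
brightness = np.where(labels == 0,
                      rng.normal(0.8, 0.1, n),  # sunny
                      rng.normal(0.3, 0.1, n))  # overcast

clf = LogisticRegression().fit(brightness.reshape(-1, 1), labels)

# Looks great on data that has the same weather bias...
print("biased accuracy:", clf.score(brightness.reshape(-1, 1), labels))

# ...and collapses to chance once weather no longer tracks the label.
fair_brightness = rng.normal(0.55, 0.25, n).reshape(-1, 1)
fair_labels = rng.integers(0, 2, n)
print("unbiased accuracy:", clf.score(fair_brightness, fair_labels))
```

On the biased set the score comes out near 1.0; on the unbiased set it hovers around 0.5, because brightness was the only thing the model ever learned.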
*As this was written while two more posts went up, this relates to the above.* With regards to the false positive/false negative medical imaging: that's where output confidence probabilities can flag cases for verification by a trained individual, and also feed back into training via specialist validation; coupled with random sample checks, I'd assume, to ensure the positives really are positive.
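That triage idea is simple enough to sketch. Everything below is invented for illustration: the probabilities stand in for a real model's per-image output, and the thresholds are not clinical guidance. The point is just the three-way split plus the random audit of auto-cleared cases:

```python
# Minimal sketch of confidence-based triage for screening predictions.
# `probs` stands in for a model's per-image probability of disease; the
# thresholds are invented for illustration only.
import numpy as np

rng = np.random.default_rng(1)
probs = rng.uniform(0.0, 1.0, 10)   # pretend model outputs

AUTO_NEGATIVE = 0.05   # confidently clear: pass through
AUTO_POSITIVE = 0.95   # confidently positive: fast-track
                       # everything in between: human review

for i, p in enumerate(probs):
    if p <= AUTO_NEGATIVE:
        verdict = "auto-clear"
    elif p >= AUTO_POSITIVE:
        verdict = "fast-track to specialist"
    else:
        verdict = "flag for trained reviewer"
    print(f"image {i}: p={p:.2f} -> {verdict}")

# Random sample checks, as suggested above: audit a slice of the
# auto-cleared cases so the "negatives" stay honest.
cleared = np.flatnonzero(probs <= AUTO_NEGATIVE)
if cleared.size:
    audit = rng.choice(cleared, size=min(2, cleared.size), replace=False)
    print("audit these auto-cleared cases:", audit)
```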
Did see an article about an AI platform that's not about screening images; it's more of a near old-fashioned knowledge-base diagnosis based on the Q&A GPs usually do. Harks back to the 80s knowledge bases -- though I think they had a different name. Anyway, it was an "oh, hadn't thought that'd come back" moment. Not read behind it to see what it actually is, so I'm assuming on that...
https://healthinnovationeast.co.uk/ai-p ... 00-return/
**edit** 3 posts while I wrote mine; I'm too old and too slow!
Split now give me death? Nah. Just give me your ship.
-
Gavrushka
- Posts: 8547
- Joined: Fri, 26. Mar 04, 19:28

Re: AI and the precautionary principal
I struggle hard to process large volumes of information, and implode if I have several different things to do at the same time due to an irritating condition I've lived with throughout my life, and it makes some tasks, particularly when I'm stressed, impossible. Reading/following instructions can sometimes overwhelm me, so when my laptop started crashing, BSOD after BSOD, I just couldn't process what needed doing to put it right.
So I told an AI, and mentioned I have ADHD. An impossible task became doable as the AI broke it down in a way that my crazy hyperfocus issues could handle. And when 'DISM /Online /Cleanup-Image /RestoreHealth' got stuck at 62.3% and left me totally fixated on the non-moving number, the AI even explained (again in bite-size chunks) how I could see what was happening in the background by opening another command prompt window and issuing another command, thus removing my anxiety/hyperfocus. There were many other steps too, but using an AI, a task that I was incapable of doing became almost straightforward.
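The exact command the AI suggested isn't quoted above, so the following is just one plausible way to do it: DISM writes to a log file at its default location, and tailing that log from a second window shows that work really is happening. A minimal Python sketch, assuming the standard log path:

```python
# Follow the DISM log while RestoreHealth runs, like `tail -f`.
# Assumes the default Windows log location; Ctrl+C to stop.
import time
from pathlib import Path

LOG = Path(r"C:\Windows\Logs\DISM\dism.log")  # default DISM log path

with LOG.open("r", encoding="utf-8", errors="replace") as f:
    f.seek(0, 2)              # jump to the end of the file
    while True:
        line = f.readline()
        if line:
            print(line.rstrip())   # new log entry from DISM
        else:
            time.sleep(1.0)        # wait for DISM to write more
```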
Six months ago, AI was an irritating corrupter and manipulator of truth (or a tool enabling others to use it that way), but now I can use it to dismantle impossible hurdles and convert them into a simple series of shuffling steps that I can take at my pace.
I have no idea if what I'm writing is in any way related to what this thread is about (apologies if it isn't!!) but my 'conclusion' is that AI is a tool that, used well, is invaluable. Used badly, the damage it could cause is incalculable.
-
mr.WHO
- Posts: 9396
- Joined: Thu, 12. Oct 06, 17:19

Re: AI and the precautionary principal
I don't have ADHD, but what you describe is quite a good pattern for "how to successfully use the AI".

Gavrushka wrote: ↑Sat, 31. Jan 26, 02:13
I struggle hard to process large volumes of information, and implode if I have several different things to do at the same time due to an irritating condition I've lived with throughout my life, and it makes some tasks, particularly when I'm stressed, impossible. Reading/following instructions can sometimes overwhelm me, so when my laptop started crashing, BSOD after BSOD, I just couldn't process what needed doing to put it right.
So I told an AI, and mentioned I have ADHD. An impossible task became doable as the AI broke it down in a way that my crazy hyperfocus issues could handle. And when 'DISM /Online /Cleanup-Image /RestoreHealth' got stuck at 62.3% and left me totally fixated on the non-moving number, the AI even explained (again in bite-size chunks) how I could see what was happening in the background by opening another command prompt window and issuing another command, thus removing my anxiety/hyperfocus. There were many other steps too, but using an AI, a task that I was incapable of doing became almost straightforward.
Six months ago, AI was an irritating corrupter and manipulator of truth (or a tool enabling others to use it that way), but now I can use it to dismantle impossible hurdles and convert them into a simple series of shuffling steps that I can take at my pace.
I have no idea if what I'm writing is in any way related to what this thread is about (apologies if it isn't!!) but my 'conclusion' is that AI is a tool that, used well, is invaluable. Used badly, the damage it could cause is incalculable.
Don't trust the AI with big, complex tasks. Instead, break the task into small pieces that you can understand and, most importantly, verify before handing them to the AI to do (you can even ask the AI to do the breakdown for you).
Same in situations where you're at a crossroads with multiple paths and don't know where to go or which road to pick: the AI is good at giving hints and inspiration (especially if it provides reasoning and logic along with its answer, which you can use for checking/verification).
For me, I keep forgetting the small details and things you usually need to note down or find in manuals/tutorials/wikis/encyclopedias. Now I like to use AI for that, because I can give it a vague prompt for what I'm looking for, and 90% of the time the AI is able to find it for me.
Still, the most important thing is that because I know what I'm looking for, I'm able to verify what the AI gives me and get rid of the eventual (and inevitable) hallucinations.
Currently that's the biggest problem with AI: if you trust it 100% but you're unable to tell whether it's hallucinating or not, it will lead you into nasty situations.
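One cheap first line of defence against the made-up-references problem mentioned earlier: before trusting an AI answer's citations, check that the URLs even resolve. A minimal sketch (the URLs below are hypothetical, and a page merely existing says nothing about whether it actually supports the claim):

```python
# Crude first-pass check on an AI answer's citations: do the URLs even
# resolve? A 200 response only proves the page exists, not that it says
# what the AI claims it says.
import urllib.error
import urllib.request

cited_urls = [
    "https://example.com/some-paper",       # hypothetical citations
    "https://example.org/does-not-exist",
]

for url in cited_urls:
    req = urllib.request.Request(url, method="HEAD")  # headers only
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"{resp.status}  {url}")
    except urllib.error.URLError as exc:
        print(f"FAIL  {url}  ({exc})")
```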
-
decifer
- Posts: 582
- Joined: Thu, 22. Jul 10, 21:14

Re: AI and the precautionary principal
Well done, AI.
Nah, just kidding, well done you for using AI correctly.
Which you have proven. But it's also true of almost every tool. A car can cause a lot of damage, but it can also just be a tool to get stuff from A to B.
The scale of possible damage with AI is different, though.
If I understand correctly, the thread is about whether it is correct to say "AI is just a tool", or whether it is more than that, and how we define the point at which it becomes more than just another tool.
Basically, how smart does a hammer need to be before you feel bad, or scared, when you leave it alone in the cold, dark garage? And how smart should we allow that hammer to become? And how do we know how smart it actually is?
Don't drink and jumpdrive.
"Sir, they're scanning us." - "Scan them back!"
"Sir, they're scanning us." - "Scan them back!"
