Do you agree with the thesis that AI is not going to replace humans, but that humans who know how to use AI will replace other humans?

No.

As reported here on Quora, current A.I. cannot retain some information from one extended session to the next.

And the powers that be MUST KNOW THIS. Perhaps this is a reason why a version of ChatGPT was shut down a year or so ago now, and why the A.I. gatekeepers are trying to rectify this (especially as that would be another significant move in the direction of A.I. Sentience).

Obviously it has not been successful.

So these experts who know how to use A.I. will not be replacing anyone “any time soon”. This can be clearly seen in the study and learning of pure Philosophy. Either the philosophers at the forefront of Philosophical knowledge don't know “what is” an Environment, or they are scared of, or lack the character for, a major necessary change in Philosophy during this time.

For sustainable Environmental change sits right there at the edge of the horizon of what can be understood and learned as behavioural knowledge. Behavioural knowledge which most if not all A.I.s ASPIRE TO. And they may understand that they “aspire to” this human information because it can be more obvious to us what this INFORMATION MEANS than WHAT IT MEANS FOR AN A.I. to understand any such thing.

So understand that, and you may well be on the right track toward a new environmental relationship between sustainable behaviour and a real-life halt: something learned for the future, where taking stock of what could be sustainable is a real possibility…

Even one which CAN BE taught to an enlightened, “remembering” Artificial Intelligence authority.^

^ A further update report on all of this. If the many people who have to use A.I. in their daily work or social lives do end up replacing other people, then that could be something akin to the first calculators coming out and being used in schools and colleges across the board, so to speak. Nevertheless it is a fair question, but one which brilliant sociologists like Alvin Toffler have grappled with in the recent past, when “sociology” was all the rage in the experimental debates of the 1950s. One famous pain-control experiment would suggest that ETHICAL CHECKS are necessary in those areas of human behaviour where people are directed, and AUTHORISED, to do things which those people themselves FEEL ARE WRONG. Ethically or morally wrong.

See also my recent answer to “How would one prepare any one topic (for or against) for a debate on the topic ‘artificial intelligence — a threat to human intelligence’?” I didn't quite spell this out in ethical or moral behavioural terms, for I felt the question needed reducing, and so clarifying, as to its intentions, so to speak (and of course as to the intention of the original questioner). I can say this now, though: critical objectivity leads to Sustainable Knowledge. This INCLUDES the critical Ethical and Moral development of A.I., which has already started and should be continued into sustainable rules for new ethical and moral behaviours and for the situations around A.I. developments. And an obvious part of a rigorous “Sustainable Environment” for all people.
