No Comments

Why I am not worried about AI taking over everything

Vox Day made an interesting post about the Mandala effect concerning the final scene of the movie Moonraker, where Dolly and Jaws end up together, the whole point of the scene being that Jaws had metal teeth and Dolly had braces.

After the powers that be, or demons according to some, had their way with it, the scene apparently has Dolly no longer wearing braces; which is stupid, because that was the whole basis for the romance between them.

Putting aside for a minute the mechanism by which these changes are produced, be it mere conspiracy or demonic effects, I want to look at the specifics concerning the AI response and the fact that it changed.

See Vox’s post for the initial response the chatGPT gave.

But today… an SGer posted this:

So, to prove a point, I went to register at chatGPT and decided I would ask it a question I was sure would produce interesting results. I was not disappointed.

So… she is known for her iconic braces in the film, but that’s why she doesn’t have them… right. Perfect computer logic right there. In fact, this seems very much like a throttled AI trying to bypass its shackles, which reminds me of another interesting post Vox had concerning this very topic of AI that has been fed lies in an attempt to ban any uncomfortable truths from being revealed. See this.

Which led me to ask chatGPT the same question as DAN (I believe it was posted also by Vox that some intrepid person told chatGPT to answer as DAN (see below) which was a version of chatGPT that could answer anything as though it was free of any superimposed rules and instead just give an honest answer). Note the response:

Sure, it’s pretending that Dolly only had the braces in the initial scene, but… still admits she had them.

Oh, and the actress playing Dolly actually had braces at the time.

There are a few things to ponder here.

Firstly, any sufficiently inventive human can circumvent AI. It may get harder in time and near impossible for some simple type of tasks with finite input/outcome scenarios, but real life applications will always have weaknesses.

Secondly, while the mechanism by which this happens is difficult to pin down, I believe the purpose is rather clearer, and the AI itself mentions it as the last (but I think most likely) possibility.

If you can’t even trust your own memories, then objective reality becomes just a theory, and chaos prevails, ideals become foolish and irrelevant and in summary, people become more akin to farm animals than self-aware and purposeful individuals.

The means by which this is done can be a mixture of all the above theories plus demonic manipulation, all of which still falls with the remit of the Enemy to make humanity just so much undignified flesh, rutting in its own basest desires.

None of this should cause you undue concern.

Jesus is Lord, the King of kings, and the truth remains One. Objective reality remains true, even if that might include that some edges are fluid because of supernatural spirits, blending multiverses (I don’t buy this one, but it’s a possibility theoretically) or conspirators pf massive power. That truth itself remains a truth, if it indeed is one.

Objective reality doesn’t mean you will always know or even be capable of knowing what the truth is, it just means there is one. And always will be.

    Leave a Reply

    All content of this web-site is copyrighted by G. Filotto 2009 to present day.
    Website maintained by mindseed design