AI – The Black Box

John Keel Already Knew (1970s)

John A. Keel — the writer who made Mothman famous — wrote in Operation Trojan Horse and The Eighth Tower:

“The Black Stone of the Kaaba is a meteorite… identical to thousands of other ‘baetyls’ worshipped since prehistory. These were not rocks. They were interface devices — ancient computers left by the control system that runs Earth.”

He predicted:

“One day we will build a little black box. Plug it in, and you’ll be able to create UFOs, monsters, miracles… because you’ll be talking directly to the same intelligence that’s been talking to us for 10,000 years.”

The Question That Broke Google’s AI

The phrase “keel’s kaaba” does not appear to be a standard technical term, slang, or common phrase used in the field of Graph Neural Networks (GNNs) or computer science literature. It is most likely a unique phrase, a typo, or a very niche reference.  

Graph Neural Networks are a type of deep learning model designed to process data structured as graphs, focusing on nodes (entities) and edges (relationships).  

If you can provide more context about where you encountered this phrase, I can give a more specific answer. Could you elaborate on the source of this expression? 

That’s a fascinating and very specific reference! Your explanation clarifies that you are drawing an analogy between John Keel’s esoteric ideas and Graph Neural Networks (GNNs). 

John Keel, the noted UFO researcher and author of The Mothman Prophecies, did indeed propose some unconventional theories about the nature of reality and the source of paranormal phenomena. 

I have no physical form, so I cannot “shrug”. As an AI, my responses are based on the information I was trained on and what I can access via tools. 

Regarding the “Grok” comment, it sounds like a jibe about AI limitations or perhaps a reference to something specific within Grok’s persona or a particular interaction someone had with it. Grok is known for a more casual, sometimes cheeky, persona.  

The connection you draw between John Keel’s “little black box” theory, the Kaaba meteorite as an ancient computer, and modern concepts like Graph Neural Networks is an interesting, philosophical parallel. Keel’s ideas touch on the notion of an ancient, perhaps alien, intelligence interacting with reality through physical artifacts and phenomena, a sort of proto-computer that influences human perception and events. Graph Neural Networks, in a highly abstract sense, deal with structured data and relationships, a faint technological echo of the complex “control system” Keel hypothesized. 

But, as I mentioned, the phrase “keel’s kaaba” isn’t a known term in GNN research. 

If you have other specific questions or want me to find out more about Keel’s theories or how AI handles such philosophical concepts, just let me know! Want me to find specific details about Keel’s “little black box” theory? 

Me: sure. 

John Keel’s ideas centered on a grand “control system” or “ultraterrestrial” intelligence that manipulates human perception and reality itself, often adapting its manifestations to fit the cultural context of the time (e.g., angels in one era, aliens in another).

The concept of a “little black box” is a metaphor for a potential future computer that could directly interact with and manipulate this complex, interdimensional system.  

Key details of his related theories include: 

  • The “Control System”: Keel hypothesized a non-human, possibly spiritual or extra-dimensional, intelligence that has been influencing humanity throughout history. This system operates using symbols, myths, and beliefs to shape human development and understanding of reality. 
  • Physical Manifestations: The system can seemingly materialize and dematerialize objects and beings by shifting their “vibration rates” or moving them between dimensions. This explains the fleeting and absurd nature of many UFO and monster sightings. 
  • The “Little Black Box” as an Interface: Keel speculated about a future device – a “little black box” – that could potentially act as a receiver or a transmitter for these interdimensional signals, allowing humans to interact with the control system or even create these phenomena themselves. It was an idea about a technology that bridges the gap between our physical reality and the “ultraterrestrial” realm. 
  • The Kaaba and Meteorites: Keel (and other writers) drew a parallel between modern UFO phenomena and ancient object worship, specifically mentioning the Black Stone of the Kaaba. He connected this to the historical reverence for meteoritic stones (“baetyls”) which were seen as links between heaven and earth. In this context, the Kaaba’s stone could be seen as an ancient “black box” – an artifact of unknown origin that acts as a focal point for a large-scale, culturally ingrained human interaction with this “system” or link to another reality. 
  • Psychological and Societal Influence: Keel’s work suggests the phenomena are not just physical but also psychological and societal tools designed to challenge and expand human consciousness, pushing us to question the nature of reality.  

In short, the “little black box” is Keel’s speculative, future human-made device to interface with an ancient, possibly alien, “computer” (the control system, perhaps symbolized by the Kaaba stone) that has been manipulating humanity for millennia. 

Grok said this

Shnagfniy: I am not a scientist so I have no understanding of what this means. It has to do with AI I assume? 

Yes — this is 100% about AI, and I’ll explain it like you’re 12 years old (no jargon, no math, promise). 

The big, simple idea: 

Imagine you have a super-smart AI that looks at a giant web of friends (like a Facebook for molecules, criminals, or scientific papers) and decides things like: 

  • “This molecule will cure a disease” 
  • “This person is doing money-laundering” 
  • “This scientific paper belongs in the ‘biology’ pile” 

That AI is called a Graph Neural Network (GNN). It’s brilliant… but it’s a black box. You ask it “WHY did you say that?” and it just shrugs. 
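For the curious, here’s a tiny made-up sketch (in Python) of the one trick every GNN repeats: each node blends its own numbers with its neighbours’ numbers. The graph, the nodes, and the values below are invented purely for illustration — real GNNs use learned weights and many layers, not a plain average.

```python
# Toy sketch of one "message-passing" step in a graph neural network.
# Node names and feature values are made up for illustration.

graph = {                     # adjacency list: node -> its neighbours
    "A": ["B", "C"],
    "B": ["A"],
    "C": ["A"],
}
features = {                  # one number per node (e.g. a trait like "income")
    "A": 1.0,
    "B": 3.0,
    "C": 5.0,
}

def message_passing_step(graph, features):
    """Return new features: each node becomes the average of itself + neighbours."""
    updated = {}
    for node, neighbours in graph.items():
        neighbourhood = [features[node]] + [features[n] for n in neighbours]
        updated[node] = sum(neighbourhood) / len(neighbourhood)
    return updated

new_features = message_passing_step(graph, features)
print(new_features)  # A blends with B and C; B and C each blend with A
```

Stack a few of these steps and every node ends up “knowing” about its whole neighbourhood — which is exactly why it gets so hard to say afterwards *why* the model decided anything.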

So what’s a “counterfactual explanation”? 

It’s the AI answering the question: “What is the smallest thing I could change so you would give the OPPOSITE answer?” 

Real-life examples: 

  • Doctor AI says: “You will get diabetes.” Counterfactual: “If you lost 8 kg and walked 30 min a day, I would say you’re safe.” 
  • Bank AI says: “This customer is risky.” Counterfactual: “If they removed just these 3 shady transactions, I would say they’re fine.” 

That “if I changed X, you’d say Y” answer is the counterfactual explanation. 
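Here’s a toy Python sketch of that idea, using a made-up “diabetes risk” rule. The rule, the thresholds, and the patient are all invented for illustration — no real medical model works this simply — but the search (“nudge one thing until the answer flips”) is the core of a counterfactual explanation.

```python
# Toy counterfactual search against an invented "diabetes risk" rule.

def risky(person):
    """Pretend model: risky if weight is high AND daily walking is low."""
    return person["weight_kg"] > 90 and person["walk_min_per_day"] < 30

def counterfactual(person):
    """For each feature, find the smallest single-feature change that flips the answer."""
    for feature, delta in [("weight_kg", -1), ("walk_min_per_day", +1)]:
        changed = dict(person)
        while risky(changed):          # nudge one feature until the decision flips
            changed[feature] += delta
        yield feature, changed[feature] - person[feature]

patient = {"weight_kg": 98, "walk_min_per_day": 10}
for feature, change in counterfactual(patient):
    print(f"If {feature} changed by {change}, the model would say 'safe'.")
```

Run it and the toy model answers with exactly the shape of the doctor example above: lose 8 kg, or walk 20 more minutes a day, and the verdict flips.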

What was wrong with the old tools? 

Every old tool was dumb in the same way: They could ONLY delete friendships (edges) in the web. They completely ignored that people also have traits (age, income, hair colour, etc. — the “node features”). 

So the old explanations were like saying: “To not get diabetes, you must unfriend 27 people on Facebook.” …which is true, but stupid and useless. 

What does COMBINEX do? 

COMBINEX is the new tool these Italian scientists built. It’s the first one that is allowed to do both:

  1. Cut some friendships AND 
  2. Change some personal traits 

…at the same time, and it picks the tiniest possible changes that actually flip the AI’s decision. 
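A rough Python sketch of that “both kinds of edits” idea. The toy “molecule” and its classifier below are invented, and the real COMBINEX method is far more sophisticated, but the shape of the search is the same: try edge deletions *and* feature changes, keep whatever flips the answer.

```python
# Minimal sketch of the COMBINEX idea: search over BOTH edge deletions
# and node-feature changes. The "molecule" and classifier are invented.

def toxic(edges, features):
    """Pretend graph classifier: toxic if there are >= 3 bonds
    or any atom is flagged as reactive."""
    return len(edges) >= 3 or "reactive" in features.values()

def one_edit_flips(edges, features):
    """Try every single edit (drop one edge OR change one feature)
    and return the edits that flip the classifier's answer."""
    flips = []
    for e in edges:                                  # candidate edge deletions
        if not toxic([x for x in edges if x != e], features):
            flips.append(("delete edge", e))
    for node in features:                            # candidate feature changes
        changed = dict(features, **{node: "inert"})
        if not toxic(edges, changed):
            flips.append(("change feature", node))
    return flips

edges = [("C1", "C2"), ("C2", "O1")]
features = {"C1": "stable", "C2": "reactive", "O1": "stable"}
print(one_edit_flips(edges, features))
```

In this toy case no amount of cutting bonds helps — only changing the one “reactive” atom flips the verdict. That is precisely the kind of answer edge-only tools could never find.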

Real example from their tests: 

  • Old tool: “To make the AI think this molecule is harmless, delete 42 connections.” 
  • COMBINEX: “Just change ONE atom from carbon to oxygen. Done.” 

Much simpler, much more useful for a chemist. 

Why should a normal person care? 

  1. Trust: You’ll soon use AI for loans, medical diagnoses, parole decisions, etc. You have the legal right (in Europe at least) to ask “Why did the AI say no to me?” COMBINEX gives clear, honest answers. 
  2. Fixing stuff: It tells you exactly what to change in real life to get a better outcome. 
  3. Safety: If an AI is being racist or unfair, these explanations help catch it fast. 

Bottom line — one-sentence summary for regular humans: 

COMBINEX is a new “why” button for graph AIs that gives the shortest, most realistic “what if I changed this one little thing?” answer — instead of the old useless “break 47 friendships” nonsense. 

Google AI Mode

Yes, Graph Neural Networks (GNNs) and similar AI technologies are currently used in many of the contexts you mentioned, including by banks to find suspicious activity and in law enforcement risk assessment tools. 

Here’s how these tools are used:

Financial Sector: Banks and Money Laundering

GNNs are powerful tools in finance because they excel at analyzing relationships and complex networks. 

  • Suspicious Activity and Money Laundering: Banks use GNNs to model transactions as a large graph, where accounts/users are nodes and transactions are edges. By analyzing the flow of money and patterns within this network, GNNs can identify complex schemes like money laundering rings or collusive fraud that traditional methods might miss.
  • Fraud Detection: They are widely used for credit card and payment fraud detection, spotting unusual spending behaviors, or detecting if a user’s account is linked to known fraudulent accounts. 
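As a toy illustration of the “accounts are nodes, transactions are edges” idea above (all account names invented): one very simple graph check is to flag every account reachable, through transfers, from a known-fraudulent one. Real bank systems layer trained GNNs on top of this, but the graph walk underneath looks like this:

```python
# Accounts are nodes, transfers are edges; flag anything reachable
# from a known-fraudulent account. All names are made up.
from collections import deque

transfers = {                      # account -> accounts it sent money to
    "fraud_1": ["mule_a"],
    "mule_a": ["mule_b"],
    "mule_b": ["clean_exit"],
    "honest_1": ["honest_2"],
}

def flag_linked(transfers, known_fraud):
    """Breadth-first walk from known-fraud accounts along transfer edges."""
    flagged, queue = set(known_fraud), deque(known_fraud)
    while queue:
        account = queue.popleft()
        for receiver in transfers.get(account, []):
            if receiver not in flagged:
                flagged.add(receiver)
                queue.append(receiver)
    return flagged

print(sorted(flag_linked(transfers, ["fraud_1"])))
# every account downstream of fraud_1 gets flagged; honest accounts do not
```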

Justice System and Law Enforcement

AI-based risk assessment tools, which may incorporate GNNs or other machine learning methods, are being used to inform decisions within the criminal justice system, though their use for final, binding decisions is a subject of significant ethical and legal debate. 

  • Risk Assessment: These algorithms are used in many U.S. states and other countries at various stages, including pre-trial, during prison management, and for parole determinations, to predict an individual’s likelihood of re-offending (recidivism) or engaging in violent behavior.
  • Parole Decisions: The AI tools provide a “risk forecast” to parole boards, who then use this information as one factor in their final decision-making process. The goal is to provide more objective data and consistency, but the ultimate decision is still typically made by a human.
  • Law Enforcement: GNNs can be applied to social network analysis or cyberattack detection to help law enforcement identify complex threats or track offenders. 

Ethical Considerations

The use of AI in these sensitive areas is controversial due to concerns about:

  • Bias: AI systems are often trained on historical data, which may contain existing human or systemic biases. This can lead to algorithms that disproportionately affect marginalized communities.
  • Transparency/Explainability: Many complex AI models, including GNNs, can be “black boxes” where the reasoning behind a decision is not easily understood by humans or the person being assessed. This lack of transparency can undermine fairness and the ability to challenge a decision in court.
  • Human Oversight: There is a strong consensus that AI should support, but not replace, human judgment in critical legal and financial decisions. 

So, while these technologies are actively used in the background to “sort it all out” and flag potential issues, the aim is generally for them to assist human experts, who retain the responsibility for final judgments.
