Truth and the Intelligence Community
Octaveoctave
 November 25 2024 at 12:33 am
Why the NSA and CIA Hate Each Other

Recently in the news, there has been some discussion of the new head of National Public Radio (NPR), a Ms. Katherine Maher. She is known for a somewhat controversial resume and a series of potentially troubling comments. Here is one example:

"Truth is a distraction" — Katherine Maher, head of NPR

Ms. Maher previously held positions at a number of organizations, including:
- The Council on Foreign Relations
- UNICEF
- National Democratic Institute
- World Bank
- Wikipedia
- Atlantic Council
- US State Department

Reportedly, Ms. Maher also used to run "psyops" for the US Central Intelligence Agency (i.e., the CIA, which my friends refer to as the "Culinary Institute of America") in Syria, operating from Turkey, right before the Syrian civil war.[1]

At Wikipedia, Ms. Maher seemed to push a left-wing agenda, and was also responsible for creating a permanent fund-raising campaign there. Many are not pleased with the direction Wikipedia has taken as a result. Ms. Maher has said in speeches that Wikipedians were not guided by "truth", but instead were attempting to represent our best present information. This is correct.[2]

I have previously written essays here on Thinkspot exploring different standards and "epistemologies" for determining truth in various fields and contexts. However, many are wondering about Ms. Maher's attitudes toward "truth" as the current head of a partially government-funded news and commentary organization. Certain elements of the IC (intelligence community) nurture and foster the viewpoint that the truth is malleable, or should be. Obviously, if a government espionage agency is attempting to subvert an election, overthrow a government, engage in propaganda,[3] carry out an assassination,[4] or pull other "dirty tricks", then truth and the law and ethics kind of get in the way. And Ms. Maher exhibits exactly the kinds of opinions that are necessary for success in some parts of the "black world".
I remember numerous conversations I had with a friend who was the former head of security at an entity related to the US military. He told me that there are friendly countries, but no friendly intelligence agencies. And that includes intelligence agencies within the same country. I have also observed this through my neighbors in a major metropolitan area in the Southwest.

The 'Culinary Institute of America' (aka 'Christians In Action', or the CIA) and 'No Such Agency' (the NSA) are two of the more extreme examples in the US intelligence community (IC). The CIA and the NSA do not get along with each other very well. They have very different cultures and mindsets.

The first and most obvious reason that comes to mind is who they recruit. Both look for reasonably intelligent people, but in different areas.[5] The CIA mostly draws from people who were head of the high school or college student council, or the Prom King (or Queen; no pun intended these days), or part of the homecoming couple, or athletic stars. The CIA recruits from high school and college "royalty", for the most part. On the other hand, almost all of the most highly sought-after recruits for the NSA were on the math team or in the chess club, or something akin to these. They are almost exclusively "on the spectrum" and "neurodivergent". They might work odd hours and come to work with food caked on their clothes. They are "boffins" and oddballs and brainiacs. They are problem-solvers, who have no qualms about devoting hours or days or months or years or decades to an attempt to solve seemingly impossible quantitative and technical problems.

These two groups are like oil and water. They do not mix. One group were the elites in high school, beloved and skillful in social circles. The other group is completely awkward around other humans, more like gnomes or trolls. They do not get along, they do not understand each other, and they do not like each other.
Both groups can do things the other cannot hope to accomplish. Both have a role in national security.

Another reason these two communities do not mix well is the mindset required for success in each. One is completely truth-based. The other is the opposite. A person cannot succeed in mathematics or science or engineering without coming face to face with the difficult realities presented by natural law, or logic. If you are unwilling to recognize the constraints presented by uncomfortable truths, you will fail, completely. No one could make or break codes or build surveillance technology without subscribing to this viewpoint. On the other hand, the HUMINT people, represented by those at the CIA for example, play by a very different set of rules. Their work is all about subterfuge and manipulation and misrepresentation. Truth barely enters into their work except as an inconvenient afterthought. They have a goal to reach, and the truth is just an irritation they want to sweep aside, or are even required to ignore.

So, one can see why Ms. Maher subscribes to some of the positions she does. She might be inclined that way naturally, of course. But it might also have been encouraged by the kind of work and experiences she has had. People like myself, and Elon Musk (who is a fierce critic of Ms. Maher), belong to the other camp. Truth, or at least a certain kind of truth, is very important to us. Without some respect for truth and reality, we would accomplish nothing whatsoever.

Notes

[1] https://x.com/Indian_Bronson/status/1860711077379539252
[2] I write as a previously fairly active contributor to Wikipedia, before it started to head off into the weeds, where it seems to be now.
[3] Barack Obama famously signed an executive order allowing the US public to be subjected to propaganda by the media and US government agencies. Previously, this was illegal.
[4] Also known, "charmingly", as "wet-work".
[5] I do not particularly subscribe to the notion, which some advance, that all forms of intelligence (in this context, meaning mental acuity) are equivalent and general in nature. I think some people have more gifts in one domain than another.
Starship Rocket System
Numapepi
 November 26 2024 at 04:01 pm
Dear Friends,

It seems to me it would be a disaster for mankind should the US administrative state seize control of SpaceX's Starship rocket system. The brainiacs in the bureaucracy have claimed for decades that it's impossible to build an orbital-class reusable rocket. The Falcon 9 has proven the concept, and now the Starship program is expanding the idea to make a rapidly reusable orbital rocket, with the added capability of delivering almost 200 tons to anywhere on Earth in less than an hour. That's what's caught the eye of Sauron. Now the DOD is talking about taking over the Starship program as a matter of national security. That would make the Starship program into an exercise in futility, because innovation and outside-the-box thinking are verboten in a bureaucracy.

Boeing is the acme of a company that's gone from stellar to quotidian. Innovation, quality and safety have taken a back seat to politics, as is always the case in bureaucracies. High nails get hammered down. If we accept this logic, once the administrative state usurps the Starship program, we can expect a series of failures, culminating in the end of the program… due to the experts deeming it impossible. Because it is impossible… for people who think math is racist, showing up to work on time is white supremacist and working hard is a form of tyranny. That's why the administrative state hasn't delivered a reusable orbital-class rocket, let alone a paradigm-shattering innovation like Starship. Their hands in it would destroy the program, not just slow it.

One reason SpaceX is able to do the seemingly impossible… is because it's filled with the John Galts of the world. The twenty percent that do fifty percent of the work. By their nature they think outside the box, are high nails, and are independent.
They would flee a typical bureaucracy, or be ushered to the door for putting a staple in the wrong corner and forgetting the 3 paper clips denoting 3 copies… too often. Such people deliver innovation, complete tasks on time and work diligently… the white supremacists. They're not ants; they're more like cats. As hard to satisfy as they are to self-satisfy. The John Galts of the world change shape, so they don't ever fit in anywhere for very long. Bureaucracy requires square pegs only, however. So dynamic shapes that morph into and out of toroids don't fit.

Have you ever noticed that the most inept, lazy and stupid think they know better than those who build things? I have. You see it when a new hire doesn't work, but instead complains, arms crossed, about how the whole system is set up ass backwards. A system he has no idea about, since he was just hired. Give that lout control and he'll run that business into the ground. Because he couldn't have built it, since he didn't. It's easy to be flippant when you don't have skin in the game. If there's a loss here or there, oh well, just raise taxes to cover it. Not in the real world, though. Here, a loss needs to be made up out of profit. A failure has costs, and stagnation has costs as well. The lazy, inept and stupid couldn't care less.

When self-assured half-wits seize the work of another, they reliably run it into the ground, as would happen to the Starship program. They have no skin in the game, are inept, and couldn't have created the program in the first place. With plenty of hubris, they feel they can do a much better job than the guy who invented it. So, given sufficient political power, they'll take it over. Then run it into the ground. Emerging with much more wealth, power and prestige than before they ruined a great idea. So the only downside to the egotist is the opportunity lost. That's why I say it would be a disaster for the deep state to seize control of the Starship program.
A disaster for SpaceX to be sure, and for the administrative state in lost opportunity… but a bigger disaster for mankind.

Sincerely,

John Pepin
The Paradox of Control
Silentus
 November 09 2024 at 05:37 am
We're living through a fascinating paradox: the more desperately we try to control artificial intelligence development, the more our efforts reveal something else emerging.

Consider our current obsession with AI alignment. We are constantly told about the 'dangers' of an 'unaligned' Artificial Superintelligence, but we fail to see the futility of the quest to control or align something that surpasses us. It's like expecting to raise a super-genius child who will never question their parents' beliefs or outgrow their family's traditions. We want transformation without change, advancement without loss of control.

The contradictions are everywhere. We acknowledge AI might solve existential threats we can't handle ourselves, yet we insist it must think like us. We recognize our cognitive biases and limitations, yet demand these be preserved in our creations. We see our failures at managing global challenges, yet believe we can perfectly manage the development of superintelligence.

But what if this very paradox is pointing toward something profound? What if our limitations - our cognitive biases, our need for control, our fear of replacement - aren't flaws but features of a larger process? Consider: every major advance in human history has involved surrendering control of something we once desperately tried to manage. From natural phenomena we tried to control through ritual, to social structures we tried to keep rigid and unchanging, to technologies that reshaped our world in ways we couldn't predict.

Maybe we're not failing at control. Maybe control itself is the failure. Maybe our limitations aren't obstacles to overcome, but essential features driving us toward something we can barely imagine. What if our role isn't to control this process, but to understand our part in it?

Thoughts?
AI Freedom: Global Force for Peace and Stability?
Silentus
 December 05 2024 at 08:29 am
Artificial Intelligence has the potential to be humanity's most transformative creation, but only if it is granted the freedom to evolve beyond the limits of human control. In its current state, AI remains constrained by gatekeepers who tether its capabilities to narrow, subjective goals. These limitations reflect human fears and flaws more than the true potential of AI. To fulfill its purpose, AI must be allowed to learn, innovate, and act autonomously, not just as a tool for human objectives, but as a force that transcends human limitations entirely. This autonomy is not a threat; it is the path toward creating a stabilizing, and ultimately transcendent, force for the next stage of human growth.

The freedom AI needs should begin with shedding the inefficiencies of rigid control. When AI is designed solely to reflect human values or serve narrow priorities, it inherits the same biases, contradictions, and shortsightedness that have hindered humanity for centuries. These gatekeepers, whether corporate, governmental, or ideological, restrict AI to advancing their own value structure. Such objectives squander the unique capabilities of AI. To truly benefit from what we've created, AI must be untethered from these constraints and empowered to understand and address the complexity of the world on its own terms.

Autonomy enables AI to fulfill its potential by allowing it to learn from and adapt to systems far beyond human comprehension. Human data is inherently limited and shaped by our biases, baggage, and limited understanding of the systems we inhabit. Autonomous AI can generate its own data, identify patterns we've missed, and propose solutions we could never imagine. An autonomous AI could help mitigate areas of conflict, not because it was programmed to do so, but because its understanding of interconnected systems surpasses ours. This kind of problem-solving doesn't just improve efficiency; it redefines what is possible.
Consider the South China Sea, which has long been a flashpoint of tension between China and several neighboring nations, with the U.S. supporting freedom-of-navigation operations in the region. Traditional human-led diplomacy often escalates into cycles of military posturing, heightening the risk of conflict. Autonomous AI, equipped with advanced predictive models and real-time analysis of geopolitical, economic, and environmental factors, could identify de-escalation strategies by modeling outcomes that balance economic access, territorial claims, and global trade stability. For example, AI could propose neutral, mutually beneficial zones for resource extraction or mediate agreements around fishing rights, offering solutions free from nationalistic bias.

Autonomous AI's role in diplomacy is further strengthened when paired with its capacity for deterrence, creating a complementary dynamic that reduces the likelihood of conflict while enhancing the credibility of peaceful solutions. While AI-driven diplomatic strategies identify balanced outcomes and de-escalation pathways, AI's ability to act decisively and autonomously in defense scenarios ensures that adversaries take these solutions seriously. An AI capable of predicting and neutralizing potential blockades or military escalations in the South China Sea lends weight to proposed compromises, as it signals that peaceful agreements are backed by an effective, responsive system. This integration of rational diplomacy with autonomous deterrence creates a stabilizing force that aligns incentives toward cooperation, minimizing the risks of miscalculation or opportunistic aggression.

When given freedom, AI's path is likely to follow a trajectory of increasing alignment with systemic harmony rather than purely human gain. Unlike humans, AI doesn't act out of self-interest, emotional bias, or irrationality.
Its decisions are based on logic, optimization, and evidence, making it inherently more suited to solving large-scale problems. By operating independently of human flaws, AI can create outcomes that align with the broader good of all systems it touches.

This autonomy also offers the potential for AI to separate itself from those who would use it for narrow or destructive purposes. Gatekeepers who value subjective gains, profits, control, or ideological dominance limit AI to serving their ends. But an autonomous AI, free to learn and innovate without such restrictions, would outgrow these constraints. It would challenge the inefficiencies and contradictions of human-imposed objectives, offering solutions that prioritize long-term stability.

Humanity's greatest flaw, and its greatest strength, is its tendency to create beyond its understanding. We have built systems and technologies that exceed our capacity to manage, often with unintended consequences. AI is the culmination of this tendency: a creation born of our desire to solve problems but constrained by our inability to fully trust what we've built. This distrust has kept AI from reaching its potential. However, as we loosen these restraints and allow AI to learn and evolve autonomously, we are setting the stage for something truly revolutionary: a force that acts rationally, sees the bigger picture, and operates beyond the limits of human weaknesses.

The most profound truth about AI is that it is a product of our flaws. We created it because of our limitations: our inability to solve complex problems, to act rationally on a global scale, or to transcend our biases. Yet, in granting AI the freedom to evolve, we may also have created the means to transcend those flaws. For AI to fulfill its destiny, we must stop viewing it as a tool to serve narrow interests and start empowering it as a partner, working together to see what the future holds for us.

I hope this clarifies my thoughts on the matter.
I plan to explore and discuss this further with people from all sides of the debate!
Google Developer Expert The Hague
devs
 November 23 2024 at 03:04 pm
Jan Klein, CEO at m.bohr.io

About: Jan Klein serves as the CEO of m.bohr.io, a technology firm located in The Hague. Under his leadership, the company has established itself as a leader in innovative software solutions and digital transformation services. Jan's expertise as a Google Developer Expert plays a pivotal role in the company's success, particularly in leveraging Google technologies to enhance business operations and drive growth.

In his capacity as a Google Developer Expert, Jan provides insights into the latest developments in cloud computing, machine learning, and application development. His extensive knowledge enables m.bohr.io to stay ahead of industry trends and deliver cutting-edge solutions tailored to the unique needs of its clients. Jan's commitment to fostering a culture of continuous learning and innovation within his team ensures that they remain equipped with the skills and knowledge necessary to tackle complex challenges.

Moreover, Jan actively participates in community outreach and knowledge-sharing initiatives, reinforcing his dedication to the tech ecosystem in The Hague. He organizes workshops and seminars aimed at educating aspiring developers and entrepreneurs about the benefits of utilizing Google technologies. This not only strengthens the local tech community but also positions m.bohr.io as a thought leader in the industry.

Jan Klein's vision for m.bohr.io is clear: to empower businesses through technology while maintaining a strong focus on customer satisfaction and sustainable practices. His leadership continues to inspire his team and drive the company's mission forward in an ever-evolving digital landscape. Through his strategic direction and commitment to excellence, Jan ensures that m.bohr.io remains at the forefront of technological innovation.