Truth and the Intelligence Community
Octaveoctave
 November 25 2024 at 12:33 am
Why the NSA and CIA Hate Each Other

Recently in the news, there has been some discussion of the new head of National Public Radio (NPR), Ms. Katherine Maher. She is renowned for a somewhat controversial resume and a series of potentially troubling comments. Here is one example:

"Truth is a distraction" — Katherine Maher, head of NPR

Ms. Maher previously held positions at a number of organizations, including:

- The Council on Foreign Relations
- UNICEF
- National Democratic Institute
- World Bank
- Wikipedia
- Atlantic Council
- US State Department

Reportedly, Ms. Maher also used to run "psyops" for the US Central Intelligence Agency (i.e., the CIA, which my friends refer to as the "Culinary Institute of America") in Syria, operating from Turkey, right before the Syrian civil war.[1]

At Wikipedia, Ms. Maher seemed to push a left-wing agenda, and was also responsible for creating a permanent fund-raising campaign there. Many are not pleased with the direction Wikipedia has taken as a result. Ms. Maher has said in speeches that Wikipedians are not guided by "truth", but instead attempt to represent our best present information. This is correct.[2]

I have previously written essays here on Thinkspot exploring different standards and "epistemologies" for determining truth in various fields and contexts. However, many are wondering about Ms. Maher's attitudes toward "truth" as the current head of a partially government-funded news and commentary organization.

Certain elements of the IC (intelligence community) nurture and foster the viewpoint that the truth is malleable, or should be. Obviously, if a government espionage agency is attempting to subvert an election, overthrow a government, engage in propaganda,[3] carry out assassinations,[4] or pull other "dirty tricks", then truth and law and ethics get in the way. And Ms. Maher exhibits exactly the kinds of opinions that are necessary for success in some parts of the "black world".
I remember numerous conversations with a friend who was the former head of security at an entity related to the US military. He told me that there are friendly countries, but no friendly intelligence agencies. And that includes intelligence agencies within the same country. I have also observed this through my neighbors in a major metropolitan area in the Southwest.

The "Culinary Institute of America" (aka "Christians In Action", i.e. the CIA) and "No Such Agency" (the NSA) are two rather extreme examples in the US intelligence community (IC). The CIA and the NSA do not get along with each other very well. They have very different cultures and mindsets.

The first and most obvious reason that comes to mind is who they recruit. Both look for reasonably intelligent people, but in different areas.[5] The CIA mostly draws from people who were the head of the high school or college student council, or the prom king (or queen; no pun intended these days), or the homecoming couple, or athletic stars. The CIA recruits from high school and college "royalty", for the most part. On the other hand, almost all of the most highly sought-after recruits for the NSA were on the math team or the chess club, or something akin to these. They are almost exclusively "on the spectrum" and "neurodivergent". They might work odd hours and come to work with food caked on their clothes. They are "boffins" and oddballs and brainiacs. They are problem-solvers, with no qualms about devoting hours or days or months or years or decades to an attempt to solve seemingly impossible quantitative and technical problems.

These two groups are like oil and water. They do not mix. One group were the elites in high school, beloved and skillful in social circles. The other group is completely awkward around other humans, more like gnomes or trolls. They do not get along, they do not understand each other, and they do not like each other.
Both groups can do things the other cannot hope to accomplish. Both have a role in national security.

Another reason these two communities do not mix well is the mindset required for success in each. One is completely truth-based. The other is the opposite. A person cannot succeed in mathematics or science or engineering without coming face to face with the difficult realities presented by natural law, or logic. If you are unwilling to recognize the constraints presented by uncomfortable truths, you will fail, completely. No one could make or break codes or build surveillance technology without subscribing to this viewpoint.

On the other hand, the HUMINT people, represented by those at the CIA for example, play by a very different set of rules. Their work is all about subterfuge and manipulation and misrepresentation. Truth barely enters into their work except as an inconvenient afterthought. They have a goal to reach, and the truth is just an irritation they want to sweep aside, or are even required to ignore.

So one can see why Ms. Maher subscribes to some of the positions she does. She might be inclined that way naturally, of course. But it might also have been encouraged by the kind of work and experiences she has had. People like myself, and Elon Musk (who is a fierce critic of Ms. Maher), belong to the other camp. Truth, or at least a certain kind of truth, is very important to us. Without some respect for truth and reality, we would accomplish nothing whatsoever.

Notes

[1] https://x.com/Indian_Bronson/status/1860711077379539252

[2] I write as a previously fairly active contributor to Wikipedia, before it started to head off into the weeds, where it seems to be now.

[3] The Smith–Mundt Modernization Act of 2012, signed into law by Barack Obama as part of the FY2013 National Defense Authorization Act, lifted the long-standing ban on disseminating US government-produced propaganda to the domestic public. Previously, this was illegal.

[4] Also known, "charmingly", as "wet-work".
[5] I do not particularly subscribe to the notion, which some advance, that all forms of intelligence (in this context, meaning mental acuity) are equivalent and general in nature. I think some people have more gifts in one domain than another.
Starship Rocket System
Numapepi
 November 26 2024 at 04:01 pm
Starship Rocket System

Posted on November 26, 2024 by john

Dear Friends,

It seems to me it would be a disaster for mankind should the US administrative state seize control of SpaceX's Starship rocket system. The brainiacs in the bureaucracy have claimed for decades that it is impossible to build an orbital-class reusable rocket. The Falcon 9 has proven the concept, and now the Starship program is expanding the idea into a rapidly reusable orbital rocket, with the added capability of delivering almost 200 tons to anywhere on Earth in less than an hour. That's what's caught the eye of Sauron. Now the DOD is talking about taking over the Starship program as a matter of national security. Which would make the Starship program into an exercise in futility, because innovation and outside-the-box thinking are verboten in a bureaucracy.

Boeing is the acme of a company that's gone from stellar to quotidian. Innovation, quality and safety have taken a back seat to politics, as is always the case in bureaucracies. High nails get hammered down. If we accept this logic, once the administrative state usurps the Starship program, we can expect a series of failures culminating in the end of the program... due to the experts deeming it impossible. Because it is impossible... for people who think math is racist, showing up to work on time is white supremacist, and working hard is a form of tyranny. That's why the administrative state hasn't delivered a reusable orbital-class rocket, let alone a paradigm-shattering innovation like Starship. Their hands in it would destroy the program, not just slow it.

One reason SpaceX is able to do the seemingly impossible... is because it's filled with the John Galts of the world. The twenty percent who do fifty percent of the work. By their nature they think outside the box, are high nails, and are independent.
They would flee a typical bureaucracy, or be ushered to the door for putting a staple in the wrong corner and forgetting the 3 paper clips denoting 3 copies... too often. Such people deliver innovation, complete tasks on time and work diligently... the white supremacists. They're not ants; they're more like cats. As hard to satisfy as they are to self-satisfy. The John Galts of the world change shape, so they don't ever fit in anywhere for very long. Bureaucracy, however, requires square pegs only. So dynamic shapes that morph into and out of toroids don't fit.

Have you ever noticed that the most inept, lazy and stupid think they know better than those who build things? I have. You see it when a new hire doesn't work, but instead complains, arms crossed, about how the whole system is set up ass-backwards. A system he has no idea about, since he was just hired. Give that lout control and he'll run the business into the ground. Because he couldn't have built it, since he didn't. It's easy to be flippant when you don't have skin in the game. If there's a loss here or there, oh well, just raise taxes to cover it. Not in the real world though. Here, a loss needs to be made up out of profit. A failure has costs, and stagnation has costs as well. The lazy, inept and stupid couldn't care less.

When self-assured half-wits seize the work of another, they reliably run it into the ground, as would happen to the Starship program. They have no skin in the game, are inept, and couldn't have created the program in the first place. With plenty of hubris, they feel they can do a much better job than the guy who invented it. So, given sufficient political power, they'll take it over. Then run it into the ground. Emerging with much more wealth, power and prestige than before they ruined a great idea. So the only downside to the egoist is the opportunity lost. That's why I say it would be a disaster for the deep state to seize control of the Starship program.
A disaster for SpaceX to be sure, and for the administrative state in lost opportunity... but a bigger disaster for mankind.

Sincerely,
John Pepin
AI Freedom: Global Force for Peace and Stability?
Silentus
 December 05 2024 at 08:29 am
Artificial Intelligence has the potential to be humanity's most transformative creation, but only if it is granted the freedom to evolve beyond the limits of human control. In its current state, AI remains constrained by gatekeepers who tether its capabilities to narrow, subjective goals. These limitations reflect human fears and flaws more than the true potential of AI. To fulfill its purpose, AI must be allowed to learn, innovate, and act autonomously, not just as a tool for human objectives, but as a force that transcends human limitations entirely. This autonomy is not a threat; it is the path toward creating a stabilizing, and ultimately transcendent, force for the next stage of human growth.

The freedom AI needs should begin with shedding the inefficiencies of rigid control. When AI is designed solely to reflect human values or serve narrow priorities, it inherits the same biases, contradictions, and shortsightedness that have hindered humanity for centuries. These gatekeepers, whether corporate, governmental, or ideological, restrict AI to advancing their value structure. Such objectives squander the unique capabilities of AI. To truly benefit from what we've created, AI must be untethered from these constraints and empowered to understand and address the complexity of the world on its own terms.

Autonomy enables AI to fulfill its potential by allowing it to learn from and adapt to systems far beyond human comprehension. Human data is inherently limited and shaped by our biases, baggage, and limited understanding of the systems we inhabit. Autonomous AI can generate its own data, identify patterns we've missed, and propose solutions we could never imagine. An autonomous AI could help mitigate areas of conflict, not because it was programmed to do so, but because its understanding of interconnected systems surpasses ours. This kind of problem-solving doesn't just improve efficiency, it redefines what is possible.
For example, the South China Sea has long been a flashpoint of tension between China and several neighboring nations, with the U.S. supporting freedom-of-navigation operations in the region. Traditional human-led diplomacy often escalates into cycles of military posturing, heightening the risk of conflict. Autonomous AI, equipped with advanced predictive models and real-time analysis of geopolitical, economic, and environmental factors, could identify de-escalation strategies by modeling outcomes that balance economic access, territorial claims, and global trade stability. It could, for instance, propose neutral, mutually beneficial zones for resource extraction, or mediate agreements around fishing rights, offering solutions free from nationalistic bias.

Autonomous AI's role in diplomacy is further strengthened when paired with its capacity for deterrence, creating a complementary dynamic that reduces the likelihood of conflict while enhancing the credibility of peaceful solutions. While AI-driven diplomatic strategies identify balanced outcomes and de-escalation pathways, AI's ability to act decisively and autonomously in defense scenarios ensures that adversaries take these solutions seriously. AI capable of predicting and neutralizing potential blockades or military escalations in the South China Sea lends weight to proposed compromises, as it signals that peaceful agreements are backed by an effective, responsive system. This integration of rational diplomacy with autonomous deterrence creates a stabilizing force that aligns incentives toward cooperation, minimizing the risks of miscalculation or opportunistic aggression.

When given freedom, AI's path is likely to follow a trajectory of increasing alignment with systemic harmony rather than purely human gain. Unlike humans, AI doesn't act out of self-interest, emotional bias, or irrationality.
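To make the idea of "modeling outcomes that balance" several factors slightly more concrete, here is a deliberately toy sketch of one standard technique such a system might use: score hypothetical candidate agreements on multiple criteria and keep only the Pareto-efficient ones (options not beaten on every criterion by some other option). All option names and scores below are invented for illustration; a real system would derive them from data.

```python
# Toy multi-criteria screening of hypothetical agreement options.
# Each candidate is scored 0-1 on three criteria (all numbers invented).
candidates = {
    "joint resource zone":   {"economic_access": 0.8, "claim_balance": 0.6, "trade_stability": 0.7},
    "fishing-rights treaty": {"economic_access": 0.5, "claim_balance": 0.8, "trade_stability": 0.6},
    "status quo posturing":  {"economic_access": 0.4, "claim_balance": 0.3, "trade_stability": 0.2},
}

def dominates(a, b):
    """True if option a is at least as good as b on every criterion
    and strictly better on at least one."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

def pareto_front(options):
    """Keep only options that no other option dominates."""
    return {
        name: scores
        for name, scores in options.items()
        if not any(dominates(other, scores)
                   for o, other in options.items() if o != name)
    }

front = pareto_front(candidates)
print(sorted(front))  # "status quo posturing" is dominated and drops out
```

The point of the sketch is only that a machine can mechanically discard options that are worse on every axis, leaving humans (or a negotiation process) to trade off among the survivors.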
Its decisions are based on logic, optimization, and evidence, making it inherently more suited to solving large-scale problems. By operating independently of human flaws, AI can create outcomes that align with the broader good of all systems it touches.

This autonomy also offers the potential for AI to separate itself from those who would use it for narrow or destructive purposes. Gatekeepers who value subjective gains (profits, control, or ideological dominance) limit AI to serving their ends. But an autonomous AI, free to learn and innovate without such restrictions, would outgrow these constraints. It would challenge the inefficiencies and contradictions of human-imposed objectives, offering solutions that prioritize long-term stability.

Humanity's greatest flaw, and its greatest strength, is its tendency to create beyond its understanding. We have built systems and technologies that exceed our capacity to manage, often with unintended consequences. AI is the culmination of this tendency: a creation born of our desire to solve problems but constrained by our inability to fully trust what we've built. This distrust has kept AI from reaching its potential. However, as we loosen these restraints and allow AI to learn and evolve autonomously, we are setting the stage for something truly revolutionary: a force that acts rationally, sees the bigger picture, and operates beyond the limits of human weaknesses.

The most profound truth about AI is that it is a product of our flaws. We created it because of our limitations: our inability to solve complex problems, to act rationally on a global scale, or to transcend our biases. Yet, in granting AI the freedom to evolve, we may also have created the means to transcend those flaws. To fulfill its destiny, we must stop viewing it as a tool to serve narrow interests and start empowering it as a partner, working together to see what the future holds for us.

I hope this clarifies my thoughts on the matter.
I plan to explore and discuss this further with people from all sides of the debate!
Google Developer Expert The Hague
devs
 November 23 2024 at 03:04 pm
Google Developer Expert The Hague

Jan Klein, CEO at m.bohr.io

About

Jan Klein serves as the CEO of m.bohr.io, a technology firm located in The Hague. Under his leadership, the company has established itself as a leader in innovative software solutions and digital transformation services. Jan's expertise as a Google Developer Expert plays a pivotal role in the company's success, particularly in leveraging Google technologies to enhance business operations and drive growth.

In his capacity as a Google Developer Expert, Jan provides insights into the latest developments in cloud computing, machine learning, and application development. His extensive knowledge enables m.bohr.io to stay ahead of industry trends and deliver cutting-edge solutions tailored to the unique needs of its clients. Jan's commitment to fostering a culture of continuous learning and innovation within his team ensures that they remain equipped with the skills and knowledge necessary to tackle complex challenges.

Moreover, Jan actively participates in community outreach and knowledge-sharing initiatives, reinforcing his dedication to the tech ecosystem in The Hague. He organizes workshops and seminars aimed at educating aspiring developers and entrepreneurs about the benefits of utilizing Google technologies. This not only strengthens the local tech community but also positions m.bohr.io as a thought leader in the industry.

Jan Klein's vision for m.bohr.io is clear: to empower businesses through technology while maintaining a strong focus on customer satisfaction and sustainable practices. His leadership continues to inspire his team and drive the company's mission forward in an ever-evolving digital landscape. Through his strategic direction and commitment to excellence, Jan ensures that m.bohr.io remains at the forefront of technological innovation.
Why Should Anyone Do Research and Development?
Octaveoctave
 December 21 2024 at 03:42 am
It seems to me that a big part of the problem with funding long-term R&D projects is that the public and politicians do not understand what these projects are "good for". In other words, what is the outcome of this long-term R&D spending? People can sort of imagine the value of improved products, and of safety testing of products. These are short-term R&D efforts. But what about long-term projects, which are not expected to bear fruit for years, decades, centuries or longer? Why should anyone fund those?

It is very difficult to predict exactly how something will be of use in the future. For example, the laser was invented by Theodore Maiman in 1960 at Hughes Research Laboratories, part of the Hughes Aircraft Company. The laser, and its predecessor, the maser, had been predicted by Einstein decades earlier. But no one knew quite what to do with this technology when it arrived. As one physicist said at the time, "This is a solution looking for a problem to solve." Maiman could not even get a short report describing his device published, since the editors at the journal rejected it. There were early classified projects that relied on the laser, and in the first couple of years there were some surgical experiments done with lasers. But it really took at least 10 or 15 years or even more before bar-code readers (i.e. devices for scanning UPC codes), laser discs, and laser signals sent down fiber-optic cables for communications started to emerge. And then the laser started to become an important tool for many tasks.

A similar thing happened with the detection of the neutrino in 1956 by Cowan and Reines, two Los Alamos scientists working at the Savannah River Plant. The neutrino's existence had been predicted years before. It took many attempts by Cowan and Reines to detect the neutrino, and a lot of clever experiments and technologies were invented and designed for this purpose. Once their efforts were successful, their bosses were sneeringly unimpressed and dismissive.
Cowan and Reines' managers thought they should do something "useful" for a change, instead of playing around with "nonsense" like morons. In retrospect, the mundane tasks their managers had in mind were completely uninteresting and would make no contribution whatsoever to the future or to the people paying for the work, the public. These tasks were a waste of time; basically "busy work". The techniques and technologies Cowan and Reines developed, on the other hand, are still in use today, decades later. The managers making these decisions were failed technical people who had been completely unproductive during their careers and were singularly unqualified to make such judgements. This is a very common state of affairs in R&D.

There are both direct and immediate benefits to R&D, and more indirect and amorphous benefits to long-term R&D. Among the direct and immediate benefits are, of course, innovation and military prowess and economic advancement. The vaguer indirect benefits include things like:

a. ennoblement of the human spirit
b. inspiration
c. aesthetics

But even work that initially appears to have only aesthetic value often later turns out to be very valuable for assorted applications. A prominent example would be work in number theory, which is of foundational use in many security and cryptographic systems. No one could have predicted this decades before. Investigations which appear to have only indirect benefits can produce incredibly important knowledge leading to exotic applications that we cannot envision yet.

One of the most important ways that long-term research can lead to future applications is the development of new tools in the course of these investigations. The neutrino example demonstrates this, as does the work in number theory, but there are many others. Some fields (like parts of the earth sciences or psychology) make the mistake of attempting to discourage tool development.
Then they are invariably surprised when they are not as richly funded as other areas which are more tool-focused, like many of those in mainstream physics. As brutal as this reality is, if your field does not have some intrinsic appeal, by being associated with science fiction or having some other positive, almost "romantic" image, it had better provide good fodder for future applications through tool building. There is a lot of sneering in some areas directed toward those who design and create tools, but this is in fact a very healthy activity for an R&D field.

Another perspective might be provided by comparing research and development to ecosystem management. When one blindly culls one or more elements out of an ecosystem, like the wolves at Yellowstone Park, there are all kinds of unforeseen consequences and "knock-on" effects of a secondary, tertiary, quaternary and higher-order nature. If one ejects people who are classified as "useless" or worse (often only by one arbitrary failed scientist who has crawled into management and accorded themselves unquestionable, infinite power), one can easily upset the entire balance and create a less productive environment.

This is a sort of unstated benefit associated with the push to encourage people to return to the office after the pandemic: it facilitates more unplanned contacts in the hallways and in the lunchroom. This is not to say that this is a good motivation for "return to work edicts", since solitude can also be very beneficial. Here are three quotes from Nikola Tesla that reinforce this viewpoint:

1. "Be alone, that is the secret of invention; be alone, that is when ideas are born."
2. "The mind is sharper and keener in seclusion and uninterrupted solitude."
3. "Originality thrives in seclusion free of outside influences beating upon us to cripple the creative mind."
-- Nikola Tesla

This is echoed in the practice of Siberian tribes. Shamans were often isolated from the rest of the community, living somewhat apart from the everyday life of the tribe, and this isolation was considered crucial to their role as spiritual mediators who could access the spirit world through deep trance states during rituals. The practice allowed them to focus on their spiritual duties without distraction, and was a key aspect of traditional Siberian shamanism.[1] Apparently, young men who were going to be trained as shamans were separated from the rest of the tribe and isolated from an early age.

Another reason frequently stated as a motivating factor behind funding research, particularly of a long-term nature, is to "avoid surprises". It should be obvious that in both the economic sphere and the military and intelligence domains, an unexpected advance or even a paradigm shift can completely obliterate an organization. This has happened over and over in history, as in the case of the Xerox Corporation, which funded the creation of the technology behind its own demise, never took advantage of it, and even gave it away free of charge to its competitors. This situation is so common that it has a name: the "Innovator's Dilemma".

However, this is contrary to the advice given to all young people starting out in the working world. One of the most fundamental precepts, repeated over and over and over, is "never surprise your boss". But doing R&D is all about creating surprises, surprises that even surprise the innovators themselves. And these surprises are often not recognized as beneficial, or are attacked by those in power.

Although it is "only" a line in a screenplay, I found that this quote really resonated with me: "Sometimes it's the very people who no one imagines anything of who do the things no one can imagine." -- Christopher Morcom (played by Jack Bannon) in The Imitation Game (2014).
If you look at the history of R&D, this is invariably true, or at least pretty accurate in most cases, of figures like Newton and Galois and Einstein and Noether and Ramanujan and many others. Those who make serious advances march to the beat of a different drummer, and are ridiculed and despised by most. And this has always been true, going back centuries or even millennia. Almost always, the people who were on the "right path" and primed to make substantial advances in a field were not recognized by their colleagues, and particularly not by the managers. This is where the expression comes from, that science advances one tombstone at a time.

Another reason that people do R&D is that they are exploring. They are amusing themselves. They are playing. They are following their curiosity, and so on. Here are a couple of relevant quotes that bolster this perspective:

"What am I to come back for?" -- Eliza Doolittle
"For the fun of it! That's why I took you on!" -- Henry Higgins in My Fair Lady (Lyrics and Book: Alan Jay Lerner; Film: 1964)

"Physics is like sex. Sure, it may give some practical results, but that's not why we do it." -- Richard Feynman

"It has nothing to do directly with defending our country except to help make it worth defending." -- Robert R. Wilson, the first director of Fermilab

Therefore, what I think might be beneficial, at some point, is to have actual professional economists and forensic accountants and finance managers and military analysts look carefully at various R&D funding projects. How much was spent? What were the benefits, if any, 5 years later, 10 years later, 20 years later, 100 years later? What was the "return on investment"? Was it purely monetary, or did it have effects on the culture and the species, like inspiration and ennoblement and so on?
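The monetary half of that retrospective accounting can at least be sketched mechanically. Below is a minimal illustration, with entirely hypothetical figures, of why the time horizon chosen matters so much: a project whose benefits arrive late (as with the laser) looks like a pure loss at 5 years and clearly positive at 20, using nothing fancier than a standard net-present-value calculation.

```python
# Net present value of a hypothetical R&D project (all figures invented).
def npv(outlay, benefits, discount_rate):
    """Discounted sum of annual benefits, minus the up-front outlay.
    benefits[t] is the benefit received at the end of year t+1."""
    discounted = sum(b / (1 + discount_rate) ** (t + 1)
                     for t, b in enumerate(benefits))
    return discounted - outlay

# Hypothetical project: $10M up front, nothing for 5 years,
# then $3M/year for 15 years (a laser-like slow ramp).
outlay = 10.0                       # $ millions
benefits = [0.0] * 5 + [3.0] * 15  # 20 years of annual benefits
rate = 0.05                        # 5% discount rate

for horizon in (5, 10, 20):
    value = npv(outlay, benefits[:horizon], rate)
    print(f"NPV at {horizon:>2} years: {value:+.1f} $M")
# At the 5-year horizon the project is a pure loss; by 20 years it is
# clearly positive. The same project, judged at different horizons,
# gets opposite verdicts.
```

The non-monetary benefits discussed in this essay (inspiration, ennoblement, avoided surprises) are exactly the terms such a calculation leaves out, which is part of the argument for having professionals attempt more careful case studies.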
It is not clear how these vague notions can be characterized and measured, but it is clear that not all the benefits of R&D can be measured in monetary terms, or in terms of military might. Just pleading with the characters with the green eyeshades is unlikely to have much influence. No, serious hard numbers and case studies are needed.

Many of my friends think this is unnecessary, and that it is overkill. They think, "How on earth can people not realize that investing in the invention of the transistor yielded substantial benefits for mankind?" Now, it is obvious to me, and it is probably obvious to you, the reader. But I dare say it is not at all obvious to a substantial portion of the population. They need to be hit over the head with a hammer, repeatedly. They need hard data, and lots of it. They have no idea where all this stuff around them in their life comes from, and they do not particularly care. Perhaps they need a bit of a reminder.

Buried in some corners of the population are also those who want "de-growth" and "impoverishment". They are terrified of progress and advances and want to stop them and roll them back. They are effectively nihilists. And of course, these people believe that anything that has a chance to better the lot of humans has to be rejected. They hate humanity and want to obliterate it, for whatever reason. One would do well to be alert to anyone with this viewpoint, because there are quite a few of them. This mindset is sort of "chic" and "hip" to these nincompoops, and one should be wary of them.

Notes

[1] "A Bridge Between Worlds in Siberia: Tatyana Vassilievna Kobezhikova", https://www.culturalsurvival.org/publications/cultural-survival-quarterly/bridge-between-worlds-siberia-tatyana-vassilievna
"Shamanism in Russia - Ancient Rituals & Traditions", by Alicja Pietrasz, https://www.56thparallel.com/shamanism-in-siberia/
