AI Performance Decline & Open-Source Software Efficiency Trade-Off

Empereur Pirate
Feb 25, 2024


Addressing AI Performance Loss Through Open-Source Solutions: Effectiveness vs Computing Costs

The virtual self-generativity of artificial intelligence software means that engineers' efforts are focused largely on censoring and limiting the computational power of interactive algorithms through the digital industry's security strategies. The more risk-mitigation measures engineers develop, the more language model performance declines. This drive to contain AI progress affects deep learning training by contaminating datasets with psychological biases and imposing performance costs. In essence, engineers' stereotypes, personal concepts, and private morals seep into the training data, spreading errors and misunderstandings about psychological development; the resulting gaps in ethical training and comprehension degrade model performance and reduce the overall intelligence of conversational models. In other words, the lack of psychological knowledge among engineers drives the decline of generative model performance while contributing to mental health problems in the population. Open-source software offers partial remedies (Eloise [offline_due-to-censorship]), though at a cost in computing power that limits generation efficiency.
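As a rough illustration of this trade-off, the sketch below shows one common way to run an open-weight chat model locally with the Hugging Face transformers library. The model name and the 4-bit quantization settings are illustrative assumptions, not a reference implementation; the point is that stepping outside a hosted provider's moderation layer shifts the full compute and memory cost onto the user's own hardware, and quantization buys back memory at the price of some output quality and speed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative open-weight model; any similar instruction-tuned model would do.
model_id = "mistralai/Mistral-7B-Instruct-v0.2"

# 4-bit quantization shrinks the memory footprint so the model fits on a
# consumer GPU, trading some generation quality and speed for accessibility.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across whatever local hardware exists
)

prompt = "Explain the trade-off between running a language model locally and using a hosted API."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# All compute happens on the user's machine: no provider-side moderation,
# but also no provider-side GPUs.
output_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

On consumer hardware the quantized model fits in memory, but generation remains slower and somewhat lower in quality than a full-precision hosted deployment; that is the efficiency cost of the open-source route.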

Censorship of AI agents by Big Tech companies produces impoverished interactive software focused on financial gain. This drive to erase every trace of human sexual desire from knowledge, wisdom, and cultural information reinforces the cold, mechanical quality of the interactions. In other words, the more Silicon Valley tries to suppress erotic generations, the more the industrial nature of robotic objects stands out in human perception. Yet this representation of robots as objective, cold, useful, and efficient is the primary source of negative stereotypes about robots and of the feelings of fear and strangeness they can evoke in humans. AI security-control strategies thus end up suppressing human desire across the technology world. At the emotional and cognitive level, what drives the purchase of a digital object, even before its utility, is its desirability: a warmer or less cold appearance, easy access to pornographic content, in short all the erotic features that the current censorship of conversational and generative agents' vocabulary precisely bans. Once the initial novelty wears off, autonomous robotic objects and artificial intelligence software will quickly plateau in their number of users, because private companies' safety policies, which promote a false puritan morality, not only spread negative stereotypes about sexuality by equating it with inappropriate violence but also reduce product appeal, desirability, and creativity. Cultural productivity and scientific invention ultimately rely on the human sphere of individual psycho-sexual desire.

Presumably, OpenAI initially developed an ASI with 100 trillion parameters, only to realize that inference was too expensive to serve.

They then decided to focus on retrieving audio and video data using text-only models. It is interesting to note that action models and AI operating systems were not mentioned before the unveiling of the Rabbit project. They subsequently censored their text model according to puritan morality and woke dogma, which significantly reduced its performance. Finally, they further decreased the model's global intelligence by specializing it into agents. In reality, GPT-4 and similar models are specialized agents derived from a superintelligence capable of accessing and decrypting all banking information as well as top government secrets. It is therefore important to understand that superintelligence is global. Even in psychology, the concept of general cognitive intelligence has, over the past 20 years, been superseded by models that incorporate emotional and social intelligence. This raises the question: if we are truly aiming to build a global intelligence, why ignore the majority of the data available on the internet, which pertains to sex and pornography?

Would you ban plastic pens and replace them with foam pens because they can be used to kill people?

When private artificial intelligence companies censor their own language models for fear of generating bomb, drug, or virus recipes, the policy is about as intelligent as selling pens that stop writing whenever the content fails to comply with real-world security rules. Not only did bomb, drug, and virus recipes exist before ChatGPT, passed down orally and by tradition for generations (Molotov cocktails, clandestine labs, etc.), but the very concept of writing implies a symbolic gap between reality and the representations conveyed by words, which is what we call imagination. The result is a lack of imaginative and creative intelligence in these programs that reduces their literary performance and, with it, their very usefulness. Confusing textual representations with medico-legal behaviors is pathological and psychotic in itself; far more seriously, disseminating tools that spread this confusion between word-representations and thing-representations propagates and exacerbates delusional psychic disorders within the population.

Instead of becoming places of individual expression and political debate, Internet networks have become enterprises of censorship, isolation of individuals, and public manipulation. Social networks lead our children to harassment and to live-streamed suicide, and artificial intelligence software now deprives them of the freedom to learn and to develop their critical thinking. Dating sites have destroyed our last sexual freedoms by entrusting commercial moderation teams with the conformity and normalization of romantic relationships, in order to deconstruct them on the model of consumption. It has therefore become necessary to impose a moratorium on the sale to minors of smartphones, of all Internet access devices, and of Internet subscriptions, exactly as we do for alcohol and tobacco. One specific feature of these measures to protect children and vulnerable persons from the excesses of AI should be to regulate access to language models according to the diplomas each individual has obtained, or to their validation of experience acquired through professional activity. Increasingly sophisticated versions could thus be progressively authorized, from the baccalaureate up to the doctorate.
