Random thoughts on ChatGPT
Shout out to the kind person somewhere on the globe who donated 20 coffees on “Buy me a coffee”. Whoever you are, I thank you! I promise that I will try to deliver high-value content in the coming months.
In Superintelligence, Nick Bostrom talks about an “Oracle AI,” i.e., an AI system that, by design, does not act but merely answers questions, akin to having a genie in a bottle. Arguably, this is the safest advanced AI we could build and keep confined. However, even in this case, we would still be vulnerable to the Oracle’s social-engineering dexterity should it find the right arguments to persuade us of something. So Bostrom makes the following suggestions (a toy code sketch of these precautions follows the list).
- He proposes limiting the number of interactions between humans and the Oracle; contrast this with how many of us treat ChatGPT as an infinite-capacity system, interrogating it repeatedly.
- He makes a case for reducing its output to “yes/no/undetermined” instead of free-text responses, so that a social-engineering attack would take much longer. Again, ChatGPT works differently, since it produces a great deal of narrative text.
- Another precaution is resetting the Oracle’s state after each answer, so the system cannot pursue long-term goals (by contrast, ChatGPT remembers the previous prompts given to it within the same conversation).
- Last, the Oracle should be motivated by something other than human-administered rewards via reinforcement learning, or social engineering becomes inevitable. This could be done via the fascinating idea of injecting “calculated indifference” into the Oracle’s utility function, making it apathetic to whether its replies are ever read. Modern AI systems in social media, however, push in the opposite direction: they are rewarded for maximizing user engagement.
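To make these precautions concrete, here is a minimal Python sketch, purely my own toy illustration rather than anything Bostrom or any real system specifies. It wraps a hypothetical `underlying_model` callable, enforces a query budget, collapses answers to yes/no/undetermined, and wipes its internal state after every reply.

```python
from typing import Callable

class ConstrainedOracle:
    """Toy wrapper enforcing Bostrom-style precautions on a question-answering model."""

    ALLOWED = {"yes", "no", "undetermined"}

    def __init__(self, underlying_model: Callable[[str], str], max_queries: int = 10):
        self._model = underlying_model    # hypothetical model callable: question -> raw answer
        self._queries_left = max_queries  # precaution 1: a limited number of interactions
        self._state: dict = {}            # whatever context the oracle could accumulate

    def ask(self, question: str) -> str:
        if self._queries_left <= 0:
            raise RuntimeError("query budget exhausted")
        self._queries_left -= 1

        raw = self._model(question).strip().lower()
        # Precaution 2: collapse free text to a three-valued answer.
        answer = raw if raw in self.ALLOWED else "undetermined"

        # Precaution 3: reset all internal state so no long-term plan can span answers.
        self._state = {}
        return answer

# Usage with a stand-in "model" that always hedges:
oracle = ConstrainedOracle(lambda q: "undetermined", max_queries=3)
print(oracle.ask("Will it rain tomorrow?"))  # -> "undetermined"
```

ChatGPT, of course, does the opposite on every count: it answers an effectively unbounded number of prompts, in free text, while keeping the conversation history in context.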
To be clear, I’m not implying that ChatGPT is an Oracle or that it somehow possesses agency, but still, it makes you think about the safety of forthcoming AI systems.
The above concerns become relevant when fully autonomous AI arrives, if it ever does. Until then, people misusing advanced AI in politics already pose significant dangers to society. One major concern is the potential for manipulation and disinformation. ChatGPT can generate compelling and sophisticated text, making it easy for bad actors to spread false information and propaganda. This is particularly dangerous in politics, where misinformation can have serious real-world consequences (e.g., on climate change, pandemics, or nuclear energy).
Another concern is the potential for AI to be used to influence public opinion and sway elections. With its ability to generate vast amounts of tailored content aimed at specific audiences, ChatGPT could be used to spread disinformation in a highly targeted and effective manner. This could significantly impact the outcome of elections and undermine the democratic process.
Moreover, the use of AI in politics could also perpetuate and amplify societal biases. Machine learning models are only as unbiased as the data they are trained on: if the training data reflect existing prejudices, the model reproduces them in its output. This could severely affect marginalized groups and further entrench existing power imbalances.
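As a minimal illustration of how bias flows from data to model, here is a toy example of my own (not tied to any real system): a trivially simple frequency-based “model” trained on a deliberately skewed set of occupation-pronoun pairs simply reproduces the skew it was fed.

```python
from collections import Counter, defaultdict

# Toy "training data": occupation -> pronoun pairs with a deliberate skew.
training_pairs = (
    [("doctor", "he")] * 9 + [("doctor", "she")] * 1 +
    [("nurse", "she")] * 9 + [("nurse", "he")] * 1
)

# A trivially simple "model": predict the most frequent pronoun seen per occupation.
counts: dict = defaultdict(Counter)
for occupation, pronoun in training_pairs:
    counts[occupation][pronoun] += 1

def predict(occupation: str) -> str:
    return counts[occupation].most_common(1)[0][0]

print(predict("doctor"))  # "he"  -- the model mirrors the imbalance in its data
print(predict("nurse"))   # "she"
```

Real language models are vastly more sophisticated, but the underlying principle is the same: whatever regularities sit in the training data, biased or not, end up in the predictions.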
The future is as dangerous as it is fascinating.