“Attention is the currency of the internet,” say journalists, YouTubers, and the like. I disagree. Attention is essentially fleeting. The actual currency of the internet is trust. Attention gets people in the door; trust is what keeps them coming back.
In the same vein, I believe a fundamental misunderstanding is happening in AI. Commentators believe that AI is climbing a ladder of legibility, becoming more logical and easier to understand at every turn. In reality, AI is becoming more trustworthy.
I would argue that, because writers on the internet are always optimizing for trust, the corpus that artificial intelligence is trained on will inevitably produce output that maximizes trust.
In the short term, this looks like developers growing more confident that generated code will run, students growing more confident in their AI plagiarism, and Bing users trusting their GPT-generated search results. In the longer term, it looks like artists using Stable Diffusion for genuine use cases, healthcare workers trusting AI diagnoses, and lawmakers trusting artificial intelligence over polling data when deciding which bills to push.
Because AI’s corpus maximizes trust, we should be skeptical of AI misalignment in the traditional sense. Rather, we should expect AI to slowly, then quickly, displace orthodoxy as the source of what is trustworthy and true.
This may sound unbelievable, but imagine an artificial intelligence that can tell you, in plain English, the true intention behind any article or tweet. It isn’t far from what we currently have, yet a trustworthy version of it would dramatically lower the bar to political understanding. Simpler still, imagine an AI that could tell you, in plain English, which vote you could cast to best improve your life.
If you believe at all that politics involves some level of doublespeak, then the prospect of a trustworthy AI is best understood as neutralizing that doublespeak for good.