Ex-Palantir engineer turned politician Alex Bores says AI deepfakes are a “solvable problem” if we bring back the free, decades-old technique behind the widespread adoption of HTTPS: using digital certificates to verify that a website is authentic

https://fortune.com/2025/12/27/alex-bores-ai-deepfakes-solvable-problem-c2pa-free-open-source-standard/

27 Comments

  1. inhalingsounds on

    How the hell is that going to solve anything? I can deepfake whatever I want on my computer and spread it like wildfire on social media or WhatsApp. What will HTTPS solve then?

  2. Technical_Ad_440 on

    ah yes, because everyone is going to legitimate, verified sites for that kind of stuff. these guys are gonna be shocked when decentralized becomes a huge thing

  3. SeniorScienceOfficer on

    Why am I not surprised this dingbat is a data scientist and not a software engineer.

  4. NebulousNitrate on

    All modern browsers already block access, by default, to sites not using HTTPS or using invalid certificates. It takes a lot of effort to get around those blocks, and the people who would be easily fooled by deepfakes are probably not the same people who would go to great lengths to bypass HTTPS protections.

  5. I read the article wondering what he knows about HTTPS that I don’t, and the answer is: nothing. He knows nothing about it.

  6. Adrian_Alucard on

    It’s true that it is an easily solvable problem.

    Start making and publishing extremely embarrassing deepfakes about politicians like there is no tomorrow and it will be solved in no time by the people in charge

  7. Ah yes, HTTPS will prevent people from running software on local machines and using decentralized supercomputer clusters.

  8. The solution is to create a middleman to sell you the certificates, who will probably indiscriminately sell the certs to anyone who applies.

    Sounds about right.

  9. ROFL… It’s trivial to hit up Let’s Encrypt and generate a certificate

    Surprised he didn’t suggest NFTs as the solution; that would be even more galaxy-brained.

    Edit: reading the article, and while his plan is a little better than “just use https”, it’s not really much more effective. Basically would require image generators to digitally sign images and declare them made by AI… Even if you got the major AI image generators to agree, it would be pretty trivial to strip and then there’s everyone running models on their personal machine who could sign it however they wanted. *<Insert BartSimpsonYouTried.gif>*
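    The key-trust point in that edit can be made concrete. Here is a minimal sketch (in Go, with an illustrative claim format, not the article's actual scheme) of why a signature alone proves nothing: anyone running a local model can mint a fresh keypair and sign a false claim, and the signature still verifies against their own key. The scheme only bites if verifiers also check that the public key is certified by an authority they trust.

    ```go
    package main

    import (
    	"crypto/ed25519"
    	"crypto/rand"
    	"fmt"
    )

    func main() {
    	// A false provenance claim a local model could attach,
    	// declaring an AI image to be camera-captured.
    	claim := []byte(`{"source":"camera","ai_generated":false}`)

    	// Anyone can generate a fresh keypair...
    	pub, priv, err := ed25519.GenerateKey(rand.Reader)
    	if err != nil {
    		panic(err)
    	}

    	// ...and produce a signature that checks out against it.
    	sig := ed25519.Sign(priv, claim)
    	fmt.Println(ed25519.Verify(pub, claim, sig)) // true

    	// The missing (and hard) step: deciding whether pub itself
    	// belongs to a device or vendor the verifier trusts.
    }
    ```

    In other words, the cryptography is the easy half; the trust decision about whose keys count is where the stripping and self-signing objections land.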

  10. FuckItImLoggingIn on

    Seems infeasible to implement. Media size would explode.

    Also, this is only loosely related to HTTPS. Yes, it would use asymmetric cryptography, but that’s not strictly specific to HTTPS, IMO.

  11. DrMaxwellEdison on

    Folks haven’t read the article, have they?

    > Bores pointed to a “free open-source metadata standard” known as C2PA, short for the Coalition for Content Provenance and Authenticity, which allows creators and platforms to attach tamper-evident credentials to files. The standard can cryptographically record whether a piece of content was captured on a real device, generated by AI, and how it has been edited over time.
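    The mechanism the quote describes boils down to ordinary digital signatures. A minimal sketch (in Go, using an illustrative JSON manifest rather than C2PA's actual schema) of how a tamper-evident credential behaves:

    ```go
    package main

    import (
    	"crypto/ed25519"
    	"crypto/rand"
    	"fmt"
    )

    func main() {
    	// Hypothetical provenance manifest; field names are
    	// illustrative, not C2PA's real format.
    	manifest := []byte(`{"source":"camera","ai_generated":false}`)

    	// The capture device (or generator) holds a signing key whose
    	// public half would be certified by an authority verifiers trust.
    	pub, priv, err := ed25519.GenerateKey(rand.Reader)
    	if err != nil {
    		panic(err)
    	}

    	// Attach a signature over the manifest.
    	sig := ed25519.Sign(priv, manifest)

    	// An intact manifest verifies...
    	fmt.Println(ed25519.Verify(pub, manifest, sig)) // true

    	// ...and any edit to the recorded claim breaks it.
    	edited := []byte(`{"source":"camera","ai_generated":true}`)
    	fmt.Println(ed25519.Verify(pub, edited, sig)) // false
    }
    ```

    Tamper-evident is not tamper-proof, though: nothing stops someone from discarding the manifest and signature entirely, which is why the standard can only prove provenance when credentials are present, not expose their absence.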

  12. If 40% of people didn’t just believe everything they see on Facebook or Twitter, and only got news from actual journalistic outlets, then sure.

    But the news industry is dying, journalism doesn’t have a clear easy profit model, AI is only accelerating that, and a huge fraction of the population doesn’t understand even the basics of digital media literacy.

    So… this amounts to a big “trust us, bro.” Which, like, no, I don’t think I will.

  13. Another technologist trying to apply a technological solution to what is fundamentally a social problem caused by technology. This approach creates a chokepoint on information: if we can only trust what’s cryptographically verified, we need to be able to trust (in the social sense of the word, not the cryptographic one) a small number of organizations to deliver us the truth. Organizations that may not want certain news stories to get out. So we’ve just swapped one problem, not being able to believe what we see, for another: powerful organizations controlling what we see. Neither is good.

  14. Holy fuck, burn this whole thread. A bunch of idiots yapping about an awfully worded title.

    https://c2pa.org is what he is talking about. And he’s correct if we want to know that an image actually came from the camera of a CNN photographer, or something like that. Does it solve every issue? No, but verifying the source is a valid approach.