
Post-Quantum Key Exchange for MCP Authentication

27 November 2025 at 19:42

Explore post-quantum key exchange methods for securing Model Context Protocol (MCP) authentication. Learn about PQuAKE, implementation strategies, and future-proofing AI infrastructure against quantum threats.

The post Post-Quantum Key Exchange for MCP Authentication appeared first on Security Boulevard.

The Trust Crisis: Why Digital Services Are Losing Consumer Confidence

26 November 2025 at 12:45

According to the Thales Consumer Digital Trust Index 2025, global confidence in digital services is slipping fast. After surveying more than 14,000 consumers across 15 countries, the findings are clear: no sector earned high trust ratings from even half its users. Most industries are seeing trust erode, or at best stagnate. In an era...

The post The Trust Crisis: Why Digital Services Are Losing Consumer Confidence appeared first on Security Boulevard.

Understanding the Security of Passkeys

Explore the security of passkeys: how they work, their advantages over passwords, potential risks, and best practices for secure implementation in software development.

The post Understanding the Security of Passkeys appeared first on Security Boulevard.

The Death of Legacy MFA and What Must Rise in Its Place

24 November 2025 at 14:37

Tycoon 2FA proves that the old promises of "strong MFA" came with fine print all along: when an attacker sits invisibly in the middle, your codes, pushes, and one-time passwords become their codes, pushes, and one-time passwords too. Tycoon 2FA: Industrial-Scale Phishing Comes of Age Tycoon 2FA delivers a phishing-as-a-service kit that hands even modestly...

The post The Death of Legacy MFA and What Must Rise in Its Place appeared first on Security Boulevard.

Signing In to Online Accounts

Explore secure methods for signing into online accounts, including SSO, MFA, and password management. Learn how CIAM solutions enhance security and user experience for enterprises.

The post Signing In to Online Accounts appeared first on Security Boulevard.

What is Risk-Based Authentication?

Explore risk-based authentication (RBA) in detail. Learn how it enhances security and user experience in software development, with practical examples and implementation tips.

The post What is Risk-Based Authentication? appeared first on Security Boulevard.

Airtight SEAL

11 November 2025 at 20:39
Over the summer and fall, SEAL saw a lot of development. All of the core SEAL requirements are now implemented, and the promised functionality is finally available!



SEAL? What's SEAL?

I've written a lot of blog entries criticizing different aspects of C2PA. In December 2023, I was in a call with representatives from C2PA and CAI (all from Adobe) about the problems I was seeing. That's when their leadership repeatedly asked questions like, "What is the alternative?" and "Do you have a better solution?"

It took me about two weeks to decide on the initial requirements and architect the framework. Then I started writing the specs and building the implementation. The result was initially announced as "VIDA: Verifiable Identity using Distributed Authentication". But due to a naming conflict with a similar project, we renamed it to "SEAL: Secure Evidence Attribution Label".

C2PA tries to do a lot of things, but ends up doing none of them really well. In contrast, SEAL focuses on just one facet, and it does it incredibly well.

Think of SEAL like a digital notary. It verifies that a file hasn't changed since it was signed, and that the signer is who they say they are. Here's what that means in practice:
  • Authentication: You know who signed it. The signer can be found by name or using an anonymized identifier. In either case, the signature is tied to a domain name. Just as your email address is a unique name at a domain, the SEAL signer is unique to a domain.

  • No impersonations: Nobody else can sign as you, and you can only sign as yourself. Of course, there are a few caveats here. For example, if someone compromises your computer and steals your signing key, then they are you. (SEAL includes revocation options, so this potential impact can be readily mitigated.) And nothing stops someone from using a visually similar name (e.g., "Neal" vs "Nea1" -- spelled with the number "1"), but "similar" is not the same.

  • Tamper proof: After signing, any change to the file or signature will invalidate the signature. (This is a significantly stronger claim than C2PA's weaker "tamper evident" assertion, which doesn't detect all forms of tampering.)

  • No central authority: Everything about SEAL is distributed. You authenticate your signature, and it's easy for a validator to find you.

  • Privacy: Because SEAL's authentication information is stored in DNS, the signer doesn't know who is trying to validate any signature. DNS uses a store-and-forward request approach with caching. Even if I had the ability to watch my own authoritative DNS server, I wouldn't know who requested the authentication, why they were contacting my server (DNS is used for more things than validation), or how many files they were validating. (This is different from C2PA's X.509 and OCSP system, where the certificate owner definitely knows your IP address and when you tried to authenticate the certificate.)

  • Free: Having a domain name is part of doing business on the internet. With SEAL, there is no added cost beyond having a domain name. Moreover, if you don't have a domain name, then you can use a third-party signing service. I currently provide signmydata.com as a free third-party signer. However, anyone can create their own third-party signer. (This is different from C2PA, where acquiring an X.509 signing certificate can cost hundreds of dollars per year.)
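
SEAL's DNS-based lookup (the "No central authority" and "Privacy" items above) works much like DKIM: a validator fetches the signer's key material from a TXT record at the signer's domain. Below is a minimal Python sketch of that lookup; the record location, the "seal=" marker, and the tag names are illustrative assumptions, not the authoritative SEAL record format.

# Minimal sketch: look up a SEAL-style key record in DNS (assumed record layout).
# Requires dnspython: pip install dnspython
import dns.resolver

def fetch_seal_record(domain: str) -> dict:
    """Return the first TXT record at `domain` that looks like a SEAL record,
    parsed as DKIM-style "tag=value" pairs separated by semicolons.

    The "seal=" marker and tag names are illustrative assumptions; the
    authoritative record format is defined by the SEAL specification.
    """
    for rdata in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rdata.strings).decode("utf-8", errors="replace")
        if "seal=" in txt:  # assumed marker for a SEAL record
            fields = {}
            for part in txt.split(";"):
                if "=" in part:
                    key, value = part.split("=", 1)
                    fields[key.strip()] = value.strip()
            return fields
    raise LookupError(f"No SEAL-style TXT record found for {domain}")

# Example (hypothetical domain and tag name):
# record = fetch_seal_record("example.com")
# print(record.get("p"))  # 'p' assumed to hold the base64 public key, as in DKIM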

One Label, Every Format

SEAL is based on a proven and battle-tested concept: DKIM. Virtually every email sent today uses DKIM to ensure that the subject, date, sender, recipients, and contents are not altered after pressing "send". (The only emails I see without DKIM are from spammers, and spam filters rapidly reject emails without DKIM.)

Since DKIM is good enough for protecting email, why not extend it to any file format? Today, SEAL supports:
  • Images: JPEG, PNG, WebP, HEIC, AVIF, GIF, TIFF, SVG, DICOM (for medical imaging files), and even portable pixel maps like PPM, PGM, and PNM.

  • Audio: AAC, AVIF, M4A, MKA, MP3, MPEG, and WAV. (Other than 'raw', this covers practically every audio format you will encounter.)

  • Videos: MP4, 3GP, AVI, AVIF, HEIF, HEVC, DIVX, MKV, MOV (Quicktime), MPEG, and WebM. (Again, this covers almost every video format you will encounter.)

  • Documents: PDF, XML, HTML, plain text, OpenDocument (docx, odt, pptx, etc.), and epub.

  • Package Formats: Java Archive (JAR), Android Application Package (APK), iOS Application Archive (iPA), Mozilla Extension (XPI), Zip, Zip64, and others.

  • Metadata Formats: EXIF, XMP, RIFF, ISO-BMFF, and Matroska.
If you're keeping count, then this is far more formats than C2PA supports. Moreover, it includes some formats that the C2PA and CAI developers have said they will not support.

What's New?

The newest SEAL release brings major functional improvements. These updates expand how SEAL can sign, reference, and verify media, making it more flexible for real-world workflows. The big changes to SEAL? Sidecars, Zip support, source referencing, and inline public keys.

New: Sidecars!

Typically, the SEAL signature is embedded into the file that is being signed. However, sometimes you cannot (or must not) alter the file. A sidecar stores the signature in a separate file. To verify the media, you need both the read-only file being checked and the sidecar file.

When are sidecars useful?
  • Read-only media: Whether it's a CD-ROM, DVD, or a write-blocker, sometimes the media cannot be altered. A sidecar can be used to sign the read-only media by storing the signature in a separate file.

  • Unsupported formats: SEAL supports a huge number of file formats, but we don't support everything. You can always use a sidecar to sign a file, even if it's an otherwise unsupported file format. (To the SEAL sidecar, what you are signing is just "data".)

  • Legal evidence: Legal evidence is often tracked with a cryptographic checksum, like SHA256, SHA1, or MD5. (Yes, legal often uses MD5. Ugh. Then again, they still think FAX is a secure transmission method.) If you change the file, then the checksum should fail to match. (I say "should" because of MD5. Without MD5, it becomes "will fail to match".) If it fails to match, then you have a broken chain of custody. A sidecar permits signing evidence without altering the digital media.
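
Conceptually, a sidecar is just a detached signature: the media is hashed and signed, and the signature lives in its own file. Here is a minimal Python sketch of the idea using an Ed25519 key; the key type, signing parameters, and sidecar layout are illustrative assumptions, not SEAL's actual sidecar format.

# Minimal sketch of the sidecar idea: a detached signature stored in a separate
# file so the original media is never touched. The key type and sidecar layout
# here are illustrative; SEAL's actual sidecar format is defined by its
# specification. Requires: pip install cryptography
from pathlib import Path
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_sidecar(media: Path, key: Ed25519PrivateKey) -> Path:
    """Sign the media file without modifying it; write the signature to a sidecar."""
    sidecar = media.parent / (media.name + ".sig")
    sidecar.write_bytes(key.sign(media.read_bytes()))
    return sidecar

def verify_sidecar(media: Path, sidecar: Path, pub: Ed25519PublicKey) -> bool:
    """Verify the (possibly read-only) media against its sidecar signature."""
    try:
        pub.verify(sidecar.read_bytes(), media.read_bytes())
        return True
    except InvalidSignature:
        return False

# Example (hypothetical file name):
# key = Ed25519PrivateKey.generate()
# sig = sign_sidecar(Path("evidence.img"), key)
# print(verify_sidecar(Path("evidence.img"), sig, key.public_key()))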

New: Zip!

The most recent addition to SEAL is support for Zip and Zip64. This makes SEAL compatible with the myriad of zip-based file types without introducing weird side effects. (OpenDocuments and all of the package formats are really just zip files containing a bunch of internal files.)

Deciding where to add the signature to Zip was the hardest part. I checked with the developers at libzip for the best options. Here are the choices we had and why we went with the approach we use:
  • Option 1: Sidecar. Include a "seal.sig" file (like a sidecar) in the zip archive.
    • Pro: Easy to implement.
    • Con: Users will see an unexpected "seal.sig" file when they open the archive.
    Since we don't want to surprise anyone with an unexpected file, we ruled out this option.

  • Option 2: Archive comment. Stuff the SEAL record in the zip archive's comment field.
    • Pro: Easy to implement.
    • Meh: Limited to 65K. (Unlikely to be a problem.)
    • Con: Repurposes the comment for something other than a comment.
    • Con: Someone using zipinfo or other tools to read the comment will see the SEAL record as a random text string.
    (Although there are more 'cons', none are really that bad.)

  • Option 3: Per-file attribute. Zip permits per-file extra attributes. We can stuff the SEAL in any of these and have it cover the entire archive.
    • Pro: Easy to implement.
    • Con: Repurposes the per-file attribute to span the entire archive. This conflicts with the basic concept of Zip, where each file is stored independently.

  • Option 4: Custom tag. Zip uses a bunch of 4-byte tags to denote different segments. SEAL could define its own unique 4-byte tag.
    • Pro: Flexible.
    • Con: Non-standard. It won't cause problems, but it also won't be retained.
    If this could be standardized, then this would be an ideal solution.

  • Option 5: Custom encryption field. Have the Zip folks add in a place for storing this. For example, they already have a place for storing X.509 certs, but that is very specific to Zip-based encryption.
    • Pro: Could be used by a wide range of Zip-signing technologies.
    • Con: We don't want to repurpose the specific X.509 area because that could cause compatibility problems.
    • Con: There are some numeric codes where you can store data. However, they are not standardized.
    The folks at libzip discouraged this approach.
After chatting with the libzip developers, we agreed that options 1 and 3 are not great, and options 4 and 5 would take years to become standardized. They recommended Option 2, noting that today, almost nobody uses zip archive comments.

For signing a zip file, we just stuff the text-based SEAL signature in the zip archive's comment field. *grin* The signature signs the zip file and all of its contents.

The funny thing about zip files is that they can be embedded into other file formats. (For those computer security "capture the flag" contests, the game makers often stuff zip files in JPEG, MP3, and other file formats.) The sealtool decoder scans the file for any embedded zip files and checks them for SEAL signatures.
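
For the curious, Python's standard zipfile module can read and write the archive comment field directly, which is all this approach needs. A minimal sketch; the record text is a placeholder, since the real SEAL record format comes from the SEAL specification.

# Minimal sketch: store and read back a signature record in a zip archive's
# comment field, which is the approach described above. The record text is a
# placeholder; a real SEAL record follows the SEAL specification.
import zipfile

def write_archive_comment(zip_path: str, record: str) -> None:
    """Open the archive in append mode and set its comment to the record."""
    with zipfile.ZipFile(zip_path, "a") as zf:
        zf.comment = record.encode("utf-8")  # the comment field tops out at 65,535 bytes

def read_archive_comment(zip_path: str) -> str:
    """Read back whatever record is stored in the archive comment."""
    with zipfile.ZipFile(zip_path, "r") as zf:
        return zf.comment.decode("utf-8", errors="replace")

# Example (hypothetical file and record):
# write_archive_comment("example.zip", "<placeholder SEAL record>")
# print(read_archive_comment("example.zip"))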

New: Source Referencing!

This feature was requested by some CDN providers. Here's the problem: most content delivery networks resize, scale, and re-encode media in order to optimize the last-mile delivery. Any of these changes would invalidate the signer's signature.

With SEAL, you can now specify a source URL (src) for the validator to follow. It basically says "I got this content from here." The signer attests to the accuracy of the remote resource. (And they can typically do this by adding less than 200 bytes to the optimized file.)

Along with the source URL, there can also be a cryptographic checksum. This way, if the URL's contents change at a later date (which happens with web content), then you can determine if the URL still contains the source information. In effect, SEAL would tell you "it came from there, but it's not there anymore." This is similar to how bibliography formats, like APA, MLA, or Chicago, require "accessed on" dates for online citations. But SEAL can include a cryptographic checksum that ensures any content at the location matches the cited reference. (As an example, see the Harvard Referencing Guide. Page 42 shows how to cite social media sources, like this blog, when used as a source.)

As an example, your favorite news site may show a picture along with an article. The picture can be SEAL-signed by the news outlet and contain a link to the uncropped, full-size picture -- in case someone wants to fact-check them.

Source referencing provides a very rudimentary type of provenance. It says "The signer attests that this file came from here." It may not be there at a later date, but it was there at one time.
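
A validator's follow-the-source check is straightforward: fetch the URL named in the src field and compare the current content's checksum against the one recorded at signing time. A minimal sketch, with hypothetical URL and digest values:

# Minimal sketch of the source-reference check: fetch the URL named in the src
# field and compare the current content's SHA-256 against the digest recorded
# at signing time. The URL and digest below are hypothetical.
import hashlib
import urllib.request

def source_still_matches(src_url: str, recorded_sha256_hex: str) -> bool:
    """True if the content currently at src_url still hashes to the recorded value."""
    with urllib.request.urlopen(src_url) as response:
        current = hashlib.sha256(response.read()).hexdigest()
    return current == recorded_sha256_hex.lower()

# Example (hypothetical values):
# ok = source_still_matches("https://example.com/full-size.jpg", "9f86d081884c7d65...")
# print("source intact" if ok else "it came from there, but it's not there anymore")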

New: Inline Public Keys!

While Zip impacts the most file formats, inline public keys make the cryptography more flexible and future-proof.

With a typical SEAL signature, the public key is located in DNS. The association with the DNS record authenticates the signer, while the public key validates the cryptography. If the cryptography is invalid, then you cannot authenticate the signer.

With inline public keys, we split the functionality. The public key is stored inside the SEAL signature. This permits validating the cryptography at any time and without network access. You can readily detect post-signing tampering.

To authenticate the signer, we refer to DNS. The DNS record can either store the same public key, or it can store a smaller digest of the public key. If the cryptography is valid and the public key (either the whole key or the digest) exists in the DNS record, then SEAL authenticates the signer.

When should inline public keys be used?
  • Offline validation. Whether you're in airplane mode or sitting in a high security ("air gap") environment, you can still sign and validate media. However, you cannot authenticate the signature until you confirm the public key with the DNS record.

  • Future cryptography. Current cryptographic approaches (e.g., RSA and EC) use public keys that are small enough to fit in a DNS TXT field. However, post-quantum cryptography can have extremely long keys -- too long for DNS. In that case, you can store the public key in the SEAL field and the shorter public key digest in the DNS record.

  • Archivists. Let's face it, companies come and go and domain names may expire or change owners. Data that is verifiable today may not be verifiable if the DNS changes hands. With inline public keys, you can always validate the cryptography, even when the DNS has changed and you can no longer authenticate the signer. For archiving, you can combine the archive with a sidecar that uses an inline public key. This way, you can say that this web archive file (WARC) was accurate at the time it was created, even if the source is no longer online.
Basically, inline public keys introduce a flexibility that the original SEAL solution was lacking.
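
The authentication step reduces to a simple comparison: validate the signature with the inline key, then check that the same key (or its digest) appears in the signer's DNS record. A minimal sketch; the digest algorithm and encodings here are assumptions, not what the SEAL specification mandates.

# Minimal sketch of the inline-key check: the cryptography is validated with the
# public key carried in the signature; the signer is then authenticated by
# comparing that key (or its digest) to the value published in DNS. The digest
# algorithm and encodings here are assumptions, not the SEAL specification.
import base64
import hashlib

def key_matches_dns(inline_key_b64: str, dns_value: str) -> bool:
    """Accept either the full base64 key or a hex SHA-256 digest of it in DNS."""
    key_bytes = base64.b64decode(inline_key_b64)
    digest_hex = hashlib.sha256(key_bytes).hexdigest()
    return dns_value == inline_key_b64 or dns_value.lower() == digest_hex

# If the signature already verified with the inline key, True here means the
# signer is authenticated; False means "valid cryptography, unauthenticated signer".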

Next Up

All of these new additions are fully backwards-compatible with the initial SEAL release. Things that were signed last year can still be validated with this newer code.

While the command-line signer and validator are complete, SEAL still needs more usability -- like an easy-access web front-end. Not for signing, but for validating. A place where you can load a web page and select the file for validating -- entirely in your web browser and without uploading content to a server. For example, another SEAL developer had created a proof-of-concept SEAL validator using TypeScript/JavaScript. I think the next step is to put more effort in this direction.

I'm also going to start incorporating SEAL into FotoForensics. Right now, every analysis image from FotoForensics is tagged with a source media reference. I think it would be great to replace that with a SEAL signature that includes a source reference. Over the years, I've seen a few people present fake FotoForensics analysis images as part of disinformation campaigns. (It's bound to become a bigger problem in the future.) Using SEAL will make that practice detectable.

While I started this effort, SEAL has definitely been a group project. I especially want to thank Shawn, The Boss, Bill not Bob, Bob not Bill, Dave (master of the dead piano), Dodo, BeamMeUp8, bgon, the folks at PASAWG for their initial feedback, and everyone else who has provided assistance, reviews, and criticisms. It's one thing to have a system that claims to provide authentication, provenance, and tamper detection, but it's another to have one that actually works -- reliably, transparently, at scale, and for free.

C2PA in a Court of Law

20 October 2025 at 09:16
Everyone with a new project or new technology wants rapid adoption. The bigger the customer base, the more successful the project can hope to be. However, in the race to be the first one out the door, these developers often overlook the customer's needs. (Just because you can do it, does that mean the customer wants it? And does it solve a problem that the customer actually has?)

Many of my friends, colleagues, and clients work in security-related industries. That explicitly means that they are risk-averse and very slow to adopt new technologies. Even for existing technologies, some of my clients have multi-month release cycles. Between when I give them a code drop and when they deploy it for their own use, a few months might pass. During that time, the code is scanned for malware, tested in an isolated testing environment (regression testing), then tested in a second testing environment (more aggressive testing), maybe a third testing environment (real-world simulation), and then finally deployed to production. It's rare for large companies to just download the code drop and start using it in production.

There's a big reason for being risk-averse. If you work in the financial, medical, or insurance fields, then you have legal liability. You're not going to adopt something that puts you or your company at risk. If you can't trust that a tool's results will be consistent or trustworthy, then you're not going to use it.

In this blog, I explore whether C2PA-signed media, such as photos from the Google Pixel 10, should be accepted as reliable evidence in a court of law. My findings show serious inconsistencies in timestamps, metadata protection, and AI processing. These issues call its forensic reliability into question.

Legally Acceptable

Forensics effectively means "for use in a court of law". For any type of analysis tool, the biggest concern is whether the tool or the results are admissible in court. This means that it needs to comply with the Daubert or Frye standards and the Federal Rules of Evidence. These are the primary requirements needed to ensure that tools and their results can be admissible in a US courtroom.

Daubert and Frye refer to two different guidelines for accepting any tools in a court of law. As a non-attorney, my non-legal understanding is that they are mostly the same thing. Both require the tools, techniques, and methods to be relevant to the case, scientifically sound, and provide reliable interpretations. The main differences:
  • Daubert is used in federal and some state courts. Frye is used in the states that don't rely on Daubert. However, you might hear arguments related to both the Daubert and Frye interpretations in a single courtroom.

  • Frye is based on a "general acceptance" criterion: Is the underlying scientific principle or method "generally accepted" as reliable within the relevant scientific community?

  • Daubert requires the judge to act as the final "gatekeeper". The judge uses specific criteria to evaluate the principles and methodology (not the conclusion generated) before determining if the approach is acceptable. Because Daubert considers multiple factors (such as the error rate and methods used), it is often considered to be a stricter standard.

  • In both cases, judges often rely on precedent. If your tool is accepted in court one time, then it's more likely to be accepted the next time.
Along with Daubert and Frye, the Federal Rules of Evidence (FRE) define criteria for acceptability of both the evidence and anyone testifying about the evidence. These include guidance about relevancy (FRE Rules 401 and 403), expert testimony (FRE Rule 702), and scientific acceptability (FRE Rules 901 and 902). I'm not an attorney, and I'm sure that legal experts can identify additional requirements.

For my FotoForensics service:
  • All analyzers are based on solid (logical) theory. The outputs are deterministic and repeatable. Moreover, other people have been able to implement variations of my analyzers and can generate similar results. (This goes toward reproducibility.)

  • I explicitly avoid using any kind of deep learning AI. (I don't use AI to detect AI, which is the current fad.) This goes toward provability. In fact, every outcome on the commercial FotoForensics service is easily explainable -- it's a white box system, not a black box. When the expert gets up on the stand, they can explain how the software reached any results.

  • FotoForensics has been peer reviewed and referenced in academic publications. The back-end analysis tools also passed a functional review by the Department of Defense Cyber Crime Center (DC3). (The DC3 only gives a pass/fail rating. It passed, with the note "steep learning curve". FotoForensics was created as a front-end to simplify the learning curve.)

  • FotoForensics has been used in a court of law and deemed acceptable. (Technically, the tool itself was never questioned. The experts using the tool were found to be acceptable under FRE Rule 702.)
In contrast to FotoForensics, we have cameras adopting AI content creation and integrating with C2PA. With the Google Pixel 10, we have them combined. Each poses a problem and when combined, they make the problems worse.

Vetting the Pixel 10

Let's say you have a picture from Google's new Pixel 10 smartphone and you want to use it as evidence in a court of law. Is the picture reliable? Can the C2PA signature be used to authenticate the content?

Keep in mind, if you are law enforcement then you are a trained observer and sworn to uphold the law. In that case, saying "Yes, that picture represents the situation that I observed" is good enough. But if you're anyone else, then we need to do a deep analysis of the image -- in case you are trying to submit false media as evidence. This is where we run into problems.

To illustrate these issues, I'm going to use three pictures that "named router" (that's what he calls himself) captured. He literally walked into a store, asked to test a Pixel 10 camera, and took three photos in succession. Click click click. (He gave me permission to use them in this blog.) Here are the pictures with links to the analyzers at FotoForensics, Hintfo, and C2PA's official Content Credentials website, as well as the EXIF timestamp and the trusted timestamp (TsT) from the notary that are found in each picture's metadata:

  • Picture 1 (analyzers: FotoForensics, Hintfo, Content Credentials)
    EXIF: 2025-10-14 17:19:25 +02:00
    Notarized: 2025-10-14 15:19:26 GMT
    Difference: +1 second
    Metadata consistency: Consistent metadata and expected behavior.

  • Picture 2 (analyzers: FotoForensics, Hintfo, Content Credentials)
    EXIF: 2025-10-14 17:19:32 +02:00
    Notarized: 2025-10-14 15:19:32 GMT
    Difference: 0 seconds
    Metadata consistency: Includes a video; an unexplained variation.

  • Picture 3 (analyzers: FotoForensics, Hintfo, Content Credentials)
    EXIF: 2025-10-14 17:19:36 +02:00
    Notarized: 2025-10-14 15:19:35 GMT
    Difference: -1 second
    Metadata consistency: Consistent with the first image. However, the timestamps are inconsistent: the notary predates the EXIF.

If we accept the EXIF and XMP metadata at face value, then:
  • The pictures were taken seconds apart. (Three pictures in 11 seconds.)

  • The EXIF metadata identifies each picture as coming from a Google Pixel 10 camera. Each includes lens settings and device information that seem rational. The EXIF data matches the expectations.

  • There is one problem in the EXIF metadata: Every picture created by the Pixel 10 has EXIF metadata saying that it is a composite image: Composite Image Captured While Shooting. (Technically, this is EXIF tag 0xa460 with value 3. "3" means it is a composite image that was captured while shooting.) This tag means that every picture was adjusted by the camera. Moreover, it was more than the usual auto-exposure and auto-focus that other cameras provide. We do not know if the alterations significantly altered the meaning of the picture. Given that every picture from a Pixel 10 includes this label, it's expected but problematic.

  • Comparing the files, there is a bigger problem. These files were taken in rapid succession. That means the user didn't have time to intentionally change any camera settings. (I confirmed this with the photographer.) The first and third pictures have the same metadata fields in the same order, and both include a Gain Map. (Consistency is good.) However, the second picture includes an additional video attachment. Inconsistency is bad because it can appear to be tampering. (Talking with the photographer, there was nothing special done. We have no idea why the camera decided to randomly attach a video to the JPEG image.)
Many cameras change metadata based on orientation, zoom, or other settings. However, I don't know of any other cameras that arbitrarily change metadata for no reason.
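
For anyone who wants to check their own photos, the CompositeImage flag is an ordinary EXIF tag (0xA460) and can be read with Pillow. A minimal sketch, with a hypothetical file name:

# Minimal sketch: read EXIF tag 0xA460 (CompositeImage) with Pillow to see how a
# photo labels itself. The file name is hypothetical. Requires: pip install Pillow
from PIL import Image

COMPOSITE_IMAGE_TAG = 0xA460  # "CompositeImage", added in Exif 2.32
LABELS = {
    1: "not a composite image",
    2: "general composite image",
    3: "composite image captured while shooting",
}

def composite_status(path: str) -> str:
    exif = Image.open(path).getexif()
    value = exif.get(COMPOSITE_IMAGE_TAG)
    if value is None:
        # The tag normally lives in the Exif sub-IFD (pointer tag 0x8769).
        value = exif.get_ifd(0x8769).get(COMPOSITE_IMAGE_TAG)
    return LABELS.get(value, f"tag absent or unrecognized: {value!r}")

# print(composite_status("pixel10_photo.jpg"))  # Pixel 10 photos reportedly carry value 3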

Metadata, by itself, can be maliciously altered, but that usually isn't the default assumption. Instead, the typical heuristic is: trust the metadata until you see an indication of alteration. At the first sign of any inconsistency, stop trusting the metadata. The same goes for the content: trust the content until you notice one inconsistency. The instant you notice something wrong, you need to call everything into question.

Add in C2PA

Given that every Pixel 10 picture natively includes AI alterations and can randomly change the metadata, this can be very problematic for an examiner. However, it gets worse, because the Pixel 10 includes C2PA metadata.

As mentioned in my earlier blog entry, the Pixel 10's C2PA metadata (manifest) explicitly excludes the EXIF and XMP metadata from the cryptographic signature. Those values can be easily changed without invalidating the cryptographic signature. Again, this doesn't mean that the metadata was altered, but it does mean that the metadata could be altered without detection.

The C2PA metadata includes a timestamp from a trusted source. This is not when the picture was generated. This is when the picture was notarized. It says that the trusted source saw that the data existed at the time it was signed.

What I didn't know at the time I wrote my previous blog entry was that the Pixel 10's Trusted Timestamp Authority (TSA) is built into the camera! They appear to maintain two separate clocks in the Pixel 10. One clock is for the untrusted EXIF timestamps (when the file was created), while the other is for the trusted timestamp (when the notary signed the file). This is where the problem comes in: if both clocks exist on the same device, then I'd expect drift between the clocks to be negligible and times to be consistent.
  • In the first sample picture, the notary's timestamp is one second after the EXIF timestamp. This doesn't bother me since the actual difference could be a fraction of a second. For example, the EXIF data says the picture was captured at 17:19:25.659 seconds (in GMT+02:00). If the signing took 0.35 seconds, then the trusted timestamp would be at 26.xxx seconds -- with integer rounding, this would be 15:19:26 GMT, which matches the timestamp from the trusted notary.

  • The second picture was captured at 17:19:32 +02:00 and notarized within the same second: 15:19:32 GMT. This is consistent.

  • The third picture was captured at 17:19:36 +02:00 and notarized one second earlier! At 15:19:35 GMT. This is a significant inconsistency. Since it's all on the same device, the notary should be at or after the EXIF is generated; never before. Moreover, since the trusted timestamp shows that the file was notarized before the EXIF data was generated, we don't know what information was actually notarized.
Without C2PA, I would have had no reason to distrust the EXIF timestamps. But with C2PA, I now have a strong reason to suspect that a timestamp was altered. Moreover, we have two inconsistencies among these three pictures: one with different metadata and one with the notary generating a signed timestamp before the data was created.
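
The comparison itself is simple arithmetic: normalize the EXIF local time to GMT and subtract the notary's timestamp. A minimal sketch using the values quoted for the third sample picture:

# Minimal sketch of the comparison above: normalize the EXIF local time (with its
# UTC offset) to GMT and subtract it from the notary's trusted timestamp. The
# example values are the ones quoted for the third sample picture.
from datetime import datetime, timezone

def notary_minus_exif_seconds(exif_local: str, notary_gmt: str) -> float:
    """Positive: notarized after capture (expected). Negative: the notary predates the EXIF."""
    captured = datetime.fromisoformat(exif_local).astimezone(timezone.utc)
    notarized = datetime.fromisoformat(notary_gmt).replace(tzinfo=timezone.utc)
    return (notarized - captured).total_seconds()

print(notary_minus_exif_seconds("2025-10-14 17:19:36+02:00", "2025-10-14 15:19:35"))  # -1.0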

As an analyst with no information about the ground truth, it's not just that I wouldn't trust the third picture; I wouldn't trust the entire set! It looks no different than if someone modified the EXIF timestamps on the third picture, and likely altered the first two pictures since the alteration would be undetectable. Moreover, the metadata difference in the second picture could be from someone inserting a different image into the sequence. (That's common with forgeries.)

Unfortunately, knowing the ground truth doesn't make this any better.

With other Google Pixel 10 images, I've seen the unaltered EXIF times and trusted timestamp differ by almost 10 seconds. For example, this haystack image came from DP Review:



The metadata says that the photo was captured on 2025-08-27 01:02:43 GMT and notarized a few seconds later, at 2025-08-27 01:02:46 GMT. The problem is, there's also a depth map attached to the image, and it also contains EXIF metadata. The main image's EXIF data is part of the C2PA exclusion list, but the depth map (and its EXIF data) is protected by a C2PA's cryptographic signature. The depth map's EXIF says it was created at "2025-08-27 01:02:47 GMT", which is one second after the picture was notarized.
  • The entire picture took four seconds to be created.
  • The notary signed the trusted timestamp before the camera finished creating the file.
This doesn't just call into question the trusted timestamp in the initial three pictures or the haystack photo. This calls into question every trusted timestamp from every C2PA-enabled application and device.

Time's Up

Back in 2024, I gave a presentation for the IPTC organization titled "C2PA from the Attacker's Perspective". During the talk, I demonstrated some of the problems that I had previously (privately) reported to the C2PA. Specifically, I showed how anyone could alter the trusted timestamp without detection. (Some of C2PA's steering committee members had previously heard me say it was possible, but they had not seen a working demonstration of the exploit.) There were two core problems that made this alteration possible:
  1. The C2PA implementation by Adobe/CAI was never checking if the trusted timestamp was altered. This meant that I could alter it without detection. (It was pretty entertaining when I demonstrated this live, and people were commenting in the real-time chat with remarks like "I just confirmed it.")

  2. As far as I could tell, the trusted timestamp was not protecting anything else. The trusted timestamp is supposed to cryptographically encompass some other data:

    1. You generate a hash of the data you want to sign.
    2. You send the hash to the Trusted Timestamp Authority (TSA).
    3. The TSA returns a certificate with a signed timestamp.

    Although my IPTC presentation criticized an earlier version of the C2PA specification, the most recent version (C2PA v2.2) retains the same problem. Unfortunately, C2PA's implementation doesn't appear to sign anything of importance. This appears to be a problem that stems from the C2PA specifications. Specifically, section 10.3.2.5 mentions signing a minor portion of the manifest: (my bold for emphasis)
    10.3.2.5.2. Choosing the Payload

    A previous version of this specification used the same value for the payload field in the time-stamp as was used in the Sig_signature as described in Section 10.3.2.4, β€œSigning a Claim”. This payload is henceforth referred to as a "v1 payload" in a "v1 time-stamp" and is considered deprecated. A claim generator shall not create one, but a validator shall process one if present.

    The "v2 payload", of the "v2 time-stamp", is the value of the signature field of the COSE_Sign1_Tagged structure created as part of Section 10.3.2.4, β€œSigning a Claim”. A "v2 payload" shall be used by claim generators performing a time-stamping operation.


    10.3.2.4. Signing a Claim

    Producing the signature is specified in Section 13.2, β€œDigital Signatures”.

    For both types of manifests, standard and update, the payload field of Sig_structure shall be the serialized CBOR of the claim document, and shall use detached content mode.

    The serialized COSE_Sign1_Tagged structure resulting from the digital signature procedure is written into the C2PA Claim Signature box.

    ...
    14.2.2. Use of COSE

    Payloads can either be present inside a COSE signature, or transported separately ("detached content" as described in RFC 8152 section 4.1). In "detached content" mode, the signed data is stored externally to the COSE_Sign1_Tagged structure, and the payload field of the COSE_Sign1_Tagged structure is always nil.
    It appears that other parts of the manifest (outside of the "v2 payload") can be altered after generating the trusted timestamp's signature. This gives the appearance of claiming that the timestamp authenticates the content or manifest, even though it permits the manifest to be changed after generating the trusted timestamp information.
After my live demo, the Adobe/CAI developers fixed the first bug. You can no longer manually alter the trusted timestamp without it being flagged. If you use the command-line c2patool, any attempt to alter the trusted timestamp returns a "timeStamp.mismatch", like:
"informational": [
{
"code": "timeStamp.mismatch",
"url": "self#jumbf=/c2pa/urn:c2pa:2c33ba25-6343-d662-8a8c-4c638f4a3e68",
"explanation": "timestamp did not match signed data"
}
However, as far as I can tell, they never fixed the second problem: the trusted timestamp isn't protecting anything important. (Well, other than itself.) The manifest can be changed after the trusted timestamp is generated, as long as the change doesn't touch the few bytes that the C2PA specification says to use with the trusted timestamp.

With C2PA, the trusted timestamp is supposed to work as a notary. However, as a notary, this is effectively signing a blank piece of paper and letting someone fill in everything around it. The trusted timestamp appears to be generated before the file is fully created and the exclusion list permits the EXIF and XMP metadata to be altered without detection. In any legal context, these fatal flaws would immediately void the notary's certification.

This isn't just a problem with every Google Pixel photo. This seems to be a problem with every signed C2PA image that includes a TSA's trusted timestamp. Since the Google Pixel 10 is on C2PA's conforming product list, this is a problem with the C2PA specification and not the individual implementations.

Core Arguments

At this point, we have a photo from a Pixel 10 with a C2PA signature. What can we trust from it?
  • The Pixel 10 uses AI to alter the image prior to the initial saving. As demonstrated in the previous blog entry, the alterations are extreme enough to be confused with AI-generated images. There appears to be no way to disable these alterations. As a result of the AI manipulation, it becomes questionable whether the alterations change the media's meaning and intent.

  • The Pixel 10 does not protect the EXIF or XMP metadata from tampering. (Those metadata blocks are in the C2PA manifest's exclusion list.) An analyst should completely ignore the C2PA metadata and evaluate the EXIF and XMP independently.

  • The trusted timestamp appears to be mostly independent of the picture's creation time. (I say "mostly" because the camera app appears to call the TSA "sometime" around when the picture's C2PA manifest was being created.)
If the C2PA cryptographic signatures are valid, it only means that the file (the parts that were not excluded from the signature) was not altered after signing; it says nothing about the validity of the content or accuracy of the metadata. A valid C2PA manifest can be trusted to mean that parts of the file are cryptographically unaltered since the camera signed it. However, it does not mean that the visual content is a truthful representation of reality (due to AI) or that the metadata is pristine (due to exclusions).

But what if the C2PA signatures are invalid, or says that the media was altered? Does that mean you can still trust it? Unfortunately, the answer is "no". Just consider the case of the Holocaust deniers. (I blogged about them back in 2016.) Some Holocaust deniers were manufacturing fake Holocaust pictures and getting them inserted into otherwise-legitimate photo collections. Later, they would point out their forgeries (without admitting that they created them) in order to call the entire collection into question.

By the same means, if someone attaches an invalid C2PA manifest to an otherwise-legitimate picture, it will appear to be tampered with. If people believe that C2PA works, then they will assume that the visual content was altered even though the alteration only involved the manifest. (This is the "Liar's Dividend" effect.)

Regardless of whether the C2PA's cryptographic signatures are valid or invalid, the results from the C2PA manifest cannot be trusted.

Summary Judgment

All of this takes us back to whether pictures from a Google Pixel 10, and C2PA in general, should be accepted for use in a court of law. (To reiterate, I'm not a legal scholar and this reflects my non-attorney understanding of how evidence and forensic tools are supposed to be presented in court.)
  • FRE Rule 901(b)(9) covers "Evidence About a Process or System". The proponent must show the "process or system produces an accurate result." In this blog entry, I have shown that the "Pixel 10/C2PA system" fails this on several counts:

    • Lack of "Accurate Result": The AI/Computational Photography from the Pixel 10 means the image is not a simple, accurate record of light, but an interpretation.

    • System Inconsistency: The random inclusion of a video attachment and the random, unexplained time shifts demonstrate that the "system" (the camera software) is not operating consistently or predictably, thus failing to show that it "produces an accurate result" in a reliably repeatable manner.

    • Lack of Relevancy: The trusted timestamp does not appear to protect anything of importance. The manifest and contents can be written after the Trusted Timestamp Authority generates a signed timestamp.

  • FRE Rule 902 goes toward self-authentication. It is my understanding that this rule exists to reduce the burden of authentication for certain reliable records (like certified business records or electronic process results). By failing to cryptographically protect the foundational metadata (EXIF/XMP), the C2PA system prevents the record from even being considered for self-authentication under modern amendments like FRE Rule 902(13) or 902(14) (certified electronic records and data copies), forcing it back to the more difficult FRE Rule 901 requirements.

  • FRE Rule 702 (Expert Testimony) and black boxes. FRE Rule 702 requires an expert's testimony to be based on "reliable principles and methods" which the expert has "reliably applied to the facts of the case." With AI-driven "computational capture", the expert cannot explain why the image looks the way it does, because the specific AI algorithm used to generate the final image is a proprietary black box. This is a significant Daubert and FRE Rule 702 hurdle that the Pixel 10 creates for any expert trying to authenticate the image.

  • The Kumho Tire, Joiner, and Rule 702 Precedent. Even if the C2PA specification is found to be generally valid (Daubert/Frye), the Google Pixel 10's specific implementation may fail the Kumho Tire test because of its unreliable application (the timestamp reversal, the random video attachment, and the exclusion list). An attorney could argue that the manufacturer's implementation is a sloppy or unpredictable application of the C2PA method.
These failures mean that any evidence created by this system raises fundamental challenges. They undermine the reliability, reproducibility, and scientific soundness of any media created by the Pixel 10, and of C2PA in general. Based on these inconsistencies, I believe it would likely fail the Daubert, Frye, and Rule 901 standards for admissibility.

The flaws that I've shown, including inconsistent timestamps and partial signing, are not edge cases or vendor oversights. They are baked into the specification's logic. Even a flawless implementation of C2PA version 2.2 would still produce unverifiable provenance because the standard itself allows essential data to remain outside the trusted boundaries.

Good intent doesn't make a flawed protocol trustworthy. In forensic work, evidence is either verifiable or it isn't. If a standard cannot guarantee integrity, then it doesn't matter how many companies endorse it since it fails to be trustworthy.

On Tuesday Oct 21 (tomorrow), the C2PA/CAI is holding a talk titled, "Beyond a Reasonable Doubt: Authenticity in Courtroom Evidence". (Here's the announcement link that includes the invite.) I've tried to reach out to both the speaker and Adobe about this, but nobody responded to me. I certainly hope people go and ask the hard questions about whether C2PA, in its current form, should be admissible in a court of law. (It would be great if some knowledgeable attorneys showed up and asked questions related to Daubert, Frye, Kumho Tire, Joiner, Rule 901, Rule 902, Rule 702, etc.) My concern is that having an Assistant District Attorney present in this forum may give false credibility toward C2PA. While I have no reason to doubt that the Assistant District Attorney is a credible source, I believe that the technology being reviewed and endorsed by association is not credible.

Vulnerability Reporting

19 September 2025 at 13:01
What do you do when you find a flaw in a piece of computer software or hardware? Depending on the bug, a legitimate researcher might:
  • Report it to the vendor. This is the most desirable solution. It should be easy to find a contact point and report the problem.

  • Publicly tell others. Full disclosure and public disclosure, especially with a history showing that you already tried to contact the vendor, helps everyone. Even if there is no patch currently available, it still helps other people know about the problem and work on mitigation options. (Even if you can't patch the system, you may be able to restrict how the vulnerable part is accessed.)

  • Privately tell contacts. This keeps a new vulnerability from being exploited publicly. Often, a vendor may not have a direct method of reporting, but you might know a friend of a friend who can report it to the vendor through other means.

  • Privately sell it. Keeping vulnerabilities quiet also permits making money by selling bugs privately to other interested parties. Of course, you don't know how the others are going to use the new exploit... But that's why you should try to report it to the vendor first. If the vendor isn't interested, then all bets are off. (You can get a premium if you can demonstrate a working exploit and show that the vendor is not interested in fixing it.)

  • Keep it to yourself. While there is a risk of someone else finding the same problem, sometimes it's better to keep the bug handy in case you need it in the future. (This is especially true if compromising systems is part of your job description, such as professional penetration testers or hired guns.)

  • Do nothing. This option is unfortunately common when the reporting method is unidentified or overly complicated. (I'm trying to report a bug, not build a desk from Ikea. I don't want pages of instructions and an Allen wrench.)
Reporting to the vendor, or at least trying to report to the vendor before resorting to some other option, falls under the well-known best practices of responsible disclosure and full disclosure.

Incentives

Some vendors offer incentives in order to receive bugs. These bug bounty programs often include financial rewards in exchange for informing the vendor and working with them to resolve the issue.

In the old days, bug bounties worked pretty well. (I've even made some money that way.) However, over the years, some companies have perverted the incentives. Rather than paying for the information so they can fix it, they pay for the information and require an agreement to legal terms. For example, some companies have attached stipulations to the payments, such as "by agreeing to this transaction, you will not make it public without coordinating the disclosure with us." (And every time you ask, they will say that they have prioritized it or are still working on it, even years later.) More than a few vendors use bug bounties as a way to bury the vulnerability by paying for silence.

I have personally had too many bad experiences with bug bounties and vendors paying for the privilege of being non-responsive. I don't think bounty programs are worth the effort anymore. Additionally, I won't do bug bounty programs if they require enrolling into any service or are associated with some kind of legal agreement. (If they want me to agree to legal terms, then they need to pay for my attorney to review the terms before I sign it.)

In contrast to bounties, some companies send very nice "thank you" responses to people who took the effort to report a bug. Often, these are swag and not financial. I've received t-shirts, hats, mugs, stickers, and very nice thank-you letters. Unlike bounties, I've found that each time a vendor sends me an unsolicited "thank you" (even if it's just a nice personalized email), they are responsive and actively fix the bug.

While some people report bugs for the money or swag, I have a different incentive. If I found a bug and it impacts me or my clients, then I want it fixed. The best place to fix it is with the vendor, so I try to report the problem. This is very much a self-serving purpose: I want their code to work better for my needs. Then again, many vendors are incredibly responsive because they want to provide the best solution to their customers. My bug reports help them and me.

Jump Through Hoops

When it comes to bug reporting, the one thing I don't want to do is jump through hoops. The most common hoops include closed lists, scavenger hunts, strict reporting requirements, and forced legal terms.
  • Hoop #1: Closed Lists
    Many open source projects want bugs reported to their mailing list, discord channel, IRC server, private forum, or some other service that requires signing up before reporting. For example, Gentoo Linux wants you to sign up with their Bugzilla service in order to submit a bug, and to keep any discussions in their "forums, IRC or mailing lists". Their various discussion lists also require signing up.

    For me, this is the same as a vendor who is unreachable. When I want to report a bug, I don't want to join any lists or forums; I just want to report a bug.

  • Hoop #2: Scavenger Hunts
    Some organizations have turned "where to report" into a scavenger hunt. The Tor Project used to be a good example of this. Prior to 2019, they wanted you to search through all of their lists and try to find the correct contact point. If you contacted the wrong person, or gave up because you couldn't find the right person, then that's your fault.

    However, after years of complaining, the Tor Project finally simplified the reporting process. Now you can easily find multiple ways to report issues to them. (I still think they usually don't do anything when you report to them, but it's a step in the right direction.)

  • Hoop #3: Strict Requirements
    Some companies and organizations have strict requirements for reporting. While I'm fine with providing my name and contact information (in case they have questions or need more details), some forms require you to provide the device's serial number, software version, and other information that the reporter may not have.

    For example, many years ago I found a problem with a standalone kiosk. (I'm not naming names here.) The vendor only had one way to report problems: use their online form. Unfortunately, the reporting form absolutely would not let me report the problem without providing a valid serial number and software version for the device. The problem is, it's not my kiosk. I don't know the serial number, software version, or even the alphanumeric location code. Moreover, the exploit appeared to work on all of their kiosks. But due to their strict requirements, I was unable to report the problem. (I ended up finding a friend of a friend who had a contact in the company.)

    Some software providers only want reports via their Bugzilla service. Bugzilla often has fields for additional information. (Other bug tracking services have similar features.) Unfortunately, I've had some software groups (again, not naming names) eagerly close out the bug report because I didn't have all of their required information. (It's not that I intentionally didn't supply it; I just didn't have that information.) Beyond an automated message telling me that the bug was closed, they never contacted me. As far as I can tell, they never even looked at the bug because the form wasn't complete enough for them. Keep in mind: my bug reports include detailed step-by-step instructions showing how to do the exploit. (In a few of these cases, I ended up selling the exploits privately to interested parties since the vendors were non-responsive.)

  • Hoop #4: Mandatory Terms
    Some companies, like Google, don't have a means to report vulnerabilities without agreeing to their terms. Years ago, Google would accept vulnerability reports with no strings attached. But today, they direct everything to their "Bug Hunters" page. Before reporting a bug, you must agree to their terms.

    For me, any kind of non-negotiable mandatory agreement is a showstopper. Even though their terms seem simple enough, I cannot agree to them. For example, they expect "ethical conduct". I expect to act ethically, but since it's their terms, they get to determine what is and is not ethical conduct.
I can understand signing up if you want to be paid a bounty. (Google promises some very large payouts for rare conditions.) But often, I'm not looking for a bounty -- I just want to report a bug.

Interestingly, language barriers have never been a problem in my experience. Most companies auto-translate or accept reports in English. This makes the remaining hoops (legal terms, closed lists, etc.) stand out, because they are entirely self-imposed.

The Bug Bounty Scam

As a service provider, I also receive a lot of inquiries from people posing as potential bug reporters. Here's a sample email:
From: EMX Access Control Sensors <no-reply@[redacted]>
To: [redacted]@[redacted].com
Subject: Paid Bug Bounty Program Inquiry enquiry

Name:
Email: yaseen17money@hotmail.com
Message: Hi Team, I’m Ghulam Yaseen, a security researcher. I wanted to ask if you offer a paid bug bounty program or any rewards for responsibly disclosed vulnerabilities. If so, could you please share the details.
Yaseen writes to me often. Each time, from a different (compromised) domain. (I'm not worried about redacting his email since dozens of forums already list this email in its entirety.) This isn't a real inquiry/enquiry. A legitimate query would come from their own mail server and use their own name as the sender. I think this is nothing more than testing the mail server to see if it's vulnerable.

In contrast, here's another query I received, and it seems to be from real people (names redacted):
Date: Thu, 21 Aug 2025 05:47:47 +0530
From: [Redacted1] <[redated1]@gmail.com>
Cc: [redacted2]@gmail.com, [Redacted1] <redacted1@gmail.com>
Subject: Inquiry Regarding Responsible Disclosure / Bug Bounty Program

Hello Team,

We are [Redacted1] and [Redacted2], independent security researchers specializing in vulnerability discovery and organizational security assessments.

- [Redacted2] – Microsoft MSRC MVR 2023, with prior vulnerability reports to Microsoft, SAP, BBC, and government entities in India and the UK.
- [Redacted1] – Experienced in bug bounty programs, penetration testing, and large-scale reconnaissance, with findings reported to multiple high-profile organizations.

We came across your organization’s security program information and are interested in contributing by identifying and reporting potential vulnerabilities in a responsible manner.Given our combined expertise in deep reconnaissance, application security, and infrastructure assessments, we believe we could contribute to uncovering critical security issues, including hidden or overlooked vulnerabilities.

Before conducting any form of testing, we would like to confirm the following:

1. Does your organization have an active responsible disclosure or bug bounty program?
2. Could you please define the exact scope of assets that are permitted for testing?
3. Are vulnerabilities discovered on assets outside the listed scopes but still belonging to your organization eligible for rewards?
4. Any rules of engagement, limitations, or legal requirements we should be aware of?
5. Bounty reward structure (if applicable).

We follow strict ethical guidelines and ensure all reports include clear technical detail, reproduction steps, and remediation recommendations.

Looking forward to your guidance and confirmation before initiating any further testing.

Best regards,
[Redacted1] & [Redacted2]

There's no way I'm going to respond to them.
  • If I'm going to have someone audit my servers, it will be someone I contact and not who contacts me out of the blue. As covered by Krebs's 3 Basic Rules for Online Safety, "If you didn’t go looking for it, don’t install it!" The same applies to unsolicited offers for help. I didn't go looking for this, so I'm certainly not going to allow them to audit my services.

  • Responding positively to any of their questions effectively gives them permission to attack your site.
These are not the only examples. I receive this kind of query at least weekly.
  • Some of these inquiries mention that my server has a known vulnerability. However, they don't want to tell me what it is until after they confirm that I have a bug bounty program. If I don't respond, or respond by saying that I don't offer any bounties, then they never tell me the bug. Assuming they actually found something, then this feels like extortion. (Pay them or they won't tell me.)

  • A few people do point out my vulnerabilities. So far, every single case has either been from one of my honeypots or due to my fake /etc/passwd file. (They asked for a password file, so I gave them one. What's the problem?)
The best thing to do if you receive an unsolicited contact like this? Don't respond. Of course, if they do list a vulnerability, then definitely investigate it. Real vulnerabilities should receive real replies.

Sample Pre-reporting Requirements: Google

In my previous blog entry, I wrote about some significant failures in the Google Pixel 10's C2PA implementation. Before writing about it publicly, I tried multiple ways to report the issue:
  1. Direct contact: I repeatedly disclosed my concerns about C2PA to a Google representative. Unfortunately, I believe my concerns were disregarded. Eventually the Google contact directed me to report issues through Google's Bug Hunters system. (That's right: the direct contact didn't want to hear about bugs in his own system unless it came through Google's Bug Bounty service.)

  2. Google Bug Hunters: Google's Bug Hunters system requires agreeing to Google's vulnerability reporting terms before I could even submit the bug. I wasn't looking for a bounty; I simply wanted to report a serious problem. For me, being forced to accept legal terms before reporting is a showstopper.

  3. Private outreach: After I confirmed the flaws in Pixel 10's C2PA functionality, I reached out to my trusted security contacts. In the past, this network has connected me directly to security teams at Amazon, Facebook, and other major vendors. Since Google's C2PA team was non-responsive, I wanted to contact someone in Google's Android security team or legal department; I suspected that they had not independently reviewed the C2PA implementation for its security, trust, and liability implications. (If they had, I doubt this would have shipped in its current form.) Unfortunately, no one had a contact who could receive a report outside of Google's Bug Hunters. (It's really weird how Google directs everything through Bug Hunters.)
At that point, I had exhausted my options. For me, a reporting process that requires accepting legal terms just to submit a vulnerability -- especially when I am already in direct contact with the team responsible -- is a hoop too many. This is why I posted my blog about Pixel 10's C2PA flaws. (Don't be surprised if I eventually post about more Pixel 10 problems without telling Google first. And if someone inside Google reaches out to me, I'd be happy to discuss this directly, without agreeing to any terms.)

Sample Reporting Hoops: Nikon

Around the same time the Pixel 10 was released, Nikon rolled out C2PA support for the Nikon Z6 III camera. Within days, researcher Horshack discovered that he could get any file signed by the camera, which is a serious flaw in the authenticity system. He even released a signed forgery as a demonstration:



If you upload this picture to Adobe's Content Credentials service, it reports that it is a genuine picture from a "Nikon Z6 3" camera, with no indication that it was altered or forged.

To Nikon's credit, the day after DP Review published an article about this, Nikon temporarily suspended their C2PA signing service, saying they had "identified an issue" and would "work diligently" to resolve it. That's a strong response.



Two weeks after disabling the signing service, Nikon announced that they were revoking their signing certificate. (As of this writing, 2025-09-19, the C2PA and CAI have not removed Nikon's certificate from their list of trusted certificates. Right now, every Content Credentials service still reports that pictures signed with the revoked certificate are valid and trusted.)

It's unclear if Horshack ever tried to report directly to Nikon. When I searched for a security contact point, Nikon only listed HackerOne as their reporting mechanism. HackerOne is a bug bounty system that requires enrollment, personal information, and banking details. If you aren’t seeking a bounty, then this is a major hoop that discourages reporting.

The community response to Horshack's public disclosure was mostly positive, with many people alarmed and grateful that the issue came to light. However, a few commenters criticized the public release, suggesting it might hurt Nikon's reputation. While lawsuits are always a theoretical risk, I would argue that a vendor that only accepts reports through gated programs effectively pushes researchers toward public disclosure as the only viable reporting path.

In this case, Nikon acted quickly once the problem went public. This demonstrates that they can respond, but the process could have been much smoother if they provided a simple, open reporting channel.

When one problem is reported, it's not unusual to see other people identify related problems. In the comments to the original reporting, other people detailed additional issues. For example:
  • patrol_taking_9j noted that "NX Studio is completely unable to export JPEG's at all for any RAW or RAW+JPEG NEF files shot with C2PA enabled."

  • Horshack replied to his own posting, noting that pictures appear to be signed hours after capture.

  • Pierre Lagarde remarked that "The only thing I can say is C2PA still looks like a problem by itself, not that much like a solution to anything. At least, I think the inclusion of this feature at this stage seems premature." (I fully agree.)

  • To further demonstrate the problem, Horshack created a second signed forgery:



    As with his first forgery, the Content Credentials service reports that it is a photo from a "Nikon Z6 3" camera.
These additional problems show the power of public disclosure. Had Horshack not made the initial problem public, other people may have not looked as closely and these related concerns may not have been brought to light.

Lower Barriers, Better Security

Bug reporting should not feel like running an obstacle course. Every extra hurdle, whether it's mandatory legal terms, scavenger hunts, closed lists, or bounty-program enrollment, increases the likelihood that a researcher will give up, go public, or sell the exploit privately.

The Google and Nikon cases highlight the same lesson: if you make responsible reporting difficult, then you drive researchers toward public disclosure. That might still result in a fix, but it also increases the window of exposure for everyone who uses the product.

The vendors who handle vulnerability reporting the best are the ones who make it simple: a plain-text email address, a short web form, or even a contact page that doesn't require more than a description and a way to follow up. Many of these vendors don't pay bounties, yet they respond quickly and even say "thank you", which is often enough to keep security researchers engaged.

The industry doesn't need more hurdles. It needs frictionless reporting, fast acknowledgment, and a clear path from discovery to resolution. Good security starts with being willing to listen: make it as easy as possible for the next person who finds a flaw to tell you about it.


Google Pixel 10 and Massive C2PA Failures

5 September 2025 at 12:41
Google recently released their latest-greatest Android phone: the Google Pixel 10. The device has been met with mostly-positive reviews, with the main criticisms around the over-abundance of AI in the device.

However, I've been more interested in one specific feature: the built-in support for C2PA's Content Credentials. For the folks who are new to my blog, I've spent years pointing out problem after problem with C2PA's architecture and implementation. Moreover, I've included working demonstrations of these issues; these problems are not theoretical. C2PA is supposed to provide "provenance" and "authenticity" (the P and A in C2PA), but it's really just snake oil. Having a cryptographically verifiable signature doesn't prove anything about whether the file is trustworthy or how it was created.

A Flawed Premise

A great movie script usually results in a great movie, regardless of how bad the actors are. (In my opinion, The Matrix is an incredible movie despite Keanu Reeves' lackluster performance.) In contrast, a bad script will result in a bad movie, regardless of how many exceptional actors appear in the film, like Cloud Atlas or Movie 43. The same observation applies to computer software: a great architecture usually results in a great implementation, regardless of who implements it, while a bad design will result in a bad implementation despite the best developers.

C2PA starts from a bad architecture design: it makes assumptions based on vaporware, depends on hardware that doesn't exist today, and uses the wrong signing technology.

Google Pixel 10

I first heard that the Google Pixel 10 was going to have built-in C2PA support from Google's C2PA Product Lead, Sherif Hanna. As he posted on LinkedIn:
It's official β€” the Google Pixel 10 is the first smartphone to integrate C2PA Content Credentials in the native Pixel Camera app. This is not just for AI: *every photo* will get Content Credentials at capture, and so will every edit in Google Photosβ€”with or without AI.

Best of all, both Pixel Camera and Google Photos are *conformant Generator Products*, having passed through the C2PA Conformance Program.

If you didn't know better, this sounds like a great announcement! However, when I heard this, I knew it would be bad. But honestly, I didn't expect it to be this bad.

Sample Original Photo

One of my associates (Jeff) received the Google Pixel 10 shortly after it became available. He took a sample photo with C2PA enabled (the default configuration) and sent it to me. Here's the unaltered original picture (click to view it at FotoForensics):



If we evaluate the file:
  • Adobe (a C2PA steering committee member) provides the official "Content Credentials" web service for validating C2PA metadata. According to them, all digital signatures are valid. The site reports that this came from the "Google C2PA SDK for Android" and the signature was issued by "Google LLC" on "Aug 28, 2025 at 8:10 PM MDT" (they show the time relative to your own time zone). According to them, the image is legitimate.

  • Truepic (another C2PA steering committee member) runs a different "Content Credentials" web service. According to them, "Content credentials are invalid because this file was signed by an untrusted source."



    Setting aside the fact that Truepic hasn't updated their trusted certificate list in quite some time, their parsed results claim that the manifest was signed by this signer and that it indicates no AI:
    detected_attributes: {
        is_ai_generated: false,
        is_ai_edited: false,
        contains_ai: false,
        is_camera_captured: false,
        is_visual_edit: false
    }
    Both authoritative sites should authenticate the same content the same way. This contradiction will definitely lead to user confusion.

  • My FotoForensics and Hintfo services display the metadata inside the file. This picture includes a rich set of EXIF, XMP, and MPF metadata, which is typical for a camera-original photo. The EXIF identifies the make and model (Google Pixel 10 Pro), capture timestamp (2025-08-28 22:10:17), and more. (Jeff didn't include GPS information or anything personal.)

  • There's also a C2PA manifest for the "Content Credentials". (It's in the JUMBF metadata block.) FotoForensics shows the basic JUMBF contents, but it's not easy to read. (FotoForensics doesn't try to format the data into something readable because all C2PA information is unreliable. Displaying it will confuse users by giving the C2PA information false credibility.) My Hintfo service shows the parsed data structure:

    • The manifest says it was created using "Google C2PA SDK for Android" and "Created by Pixel Camera".

    • There is a cryptographically signed timestamp that says "2025-08-29T02:10:21+00:00". This is not when the picture was created; this is when the file was notarized by Google's online timestamp service. This timestamp is four seconds after the EXIF data says the picture was captured. This is because it required a network request to Google in order to sign the media.

    • The manifest includes a chain of X.509 certificates for the signing. The signer's name is "Google LLC" and "Pixel Camera". If you trust the name in this certificate, then you can trust the certificate. However, it's just a name. End-users cannot validate that the certificate actually belongs to Google. Moreover, this does not include any unique identifiers for the device or user. Seeing this name is more "branding" than authentication. It's like having "Levi's" stamped on the butt of your jeans.

    Notice that the C2PA manifest does not list the camera's make, model, photo capture time, lens settings, or anything else. That information is only found in the EXIF metadata.

  • Inside the C2PA actions is a notation about the content:
    "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/computationalCapture"
    According to IPTC, this means:
    The media is the result of capturing multiple frames from a real-life source using a digital camera or digital recording device, then automatically merging them into a single frame using digital signal processing techniques and/or non-generative AI. Includes High Dynamic Range (HDR) processing common in smartphone camera apps.

    In other words, this is a composite image. And while it may not be created using a generative AI system ("and/or"), it was definitely combined using some kind of AI-based system.

    (Truepic's results are wrong when they say that no AI was used. They are also wrong when they say that it is not from a camera capture. Of course, someone might point out that Truepic only supports C2PA v2.1 and this picture uses C2PA v2.2. However, there is no C2PA version number in the metadata.)

    As an aside, Jeff assures me that he just took a photo; he didn't do anything special. But the metadata clearly states that it is a composite: "capturing multiple frames" and "automatically merging them". This same tag is seen with other Pixel 10 pictures. It appears that Google's Pixel 10 is taking the same route as the iPhone: they cannot stop altering your pictures and are incapable of taking an unaltered photo.

  • The most disturbing aspect comes from the manifest's exclusion list:
    "assertion_store":  {
    "c2pa.hash.data": {
    "exclusions": {
    [
    {
    "start": "6",
    "length": "11572"
    }
    ],
    [
    {
    "start": "11596",
    "length": "4924"
    }
    ],
    [
    {
    "start": "17126",
    "length": "1158"
    }
    ],
    [
    {
    "start": "18288",
    "length": "65458"
    }
    ],
    [
    {
    "start": "83750",
    "length": "7742"
    }
    ]
    },
    When computing the digital signature, it explicitly ignores:

    • 11,572 bytes beginning at byte 6 in the file. That's the EXIF data. None of the EXIF data is protected by this signature. Unfortunately, that's the only part that defines the make, model, settings, and when the photo was taken.

    • 4,924 bytes starting at position 11,596. That's the JUMBF C2PA manifest. This is the only component that's typically skipped when generating a C2PA record because most of it is protected by different C2PA digital signatures.

    • 1,158 bytes beginning at position 17,126 is the XMP data.

    • 65,458 bytes beginning at position 18,288 is the extended XMP metadata that includes Google's Makernotes.

    • 7,742 bytes beginning at position 83,750 is the continuation of the extended XMP metadata record.

    That's right: everything that identifies when, where, and how this image was created is unprotected by the C2PA signature. C2PA's cryptographic signature only covers the manifest itself and the visual content. It doesn't cover how the content was created.
Without C2PA, anyone can alter the EXIF or XMP metadata. (It's a very common forgery approach.)

With the Google Pixel's C2PA implementation, anyone can still alter the EXIF or XMP metadata. But now there's a digital signature, even if it doesn't identify any alterations.

The problem is that nothing on either of the "Content Credentials" web services reports the exclusion range. If you're a typical user, then you haven't read through the C2PA specifications and will likely assume that the file is trustworthy with tamper-evident protection since the cryptographic signature is valid.
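
To make the exclusion ranges concrete, here is a minimal Python sketch. It is not from Google or the C2PA tooling; the file name is a hypothetical stand-in and the hard-coded ranges are simply the values quoted above. It walks the JPEG's marker segments and reports which ones overlap the excluded byte ranges, which makes it obvious that the metadata segments are the parts left unprotected.

    # Minimal sketch: map the manifest's hash exclusions onto JPEG marker segments.
    # Assumptions: the file name is hypothetical and EXCLUSIONS is copied from the
    # sample manifest above; this is illustrative, not a C2PA validator.
    import struct

    EXCLUSIONS = [(6, 11572), (11596, 4924), (17126, 1158),
                  (18288, 65458), (83750, 7742)]          # (start, length) pairs

    def jpeg_segments(path):
        """Yield (offset, size, marker) for each marker segment before the image data."""
        with open(path, "rb") as fh:
            data = fh.read()
        pos = 2                                           # skip the SOI marker (FF D8)
        while pos + 4 <= len(data) and data[pos] == 0xFF:
            marker = data[pos + 1]
            if marker == 0xDA:                            # SOS: entropy-coded data follows
                break
            size = struct.unpack(">H", data[pos + 2:pos + 4])[0] + 2
            yield pos, size, f"FF{marker:02X}"
            pos += size

    def overlaps_exclusion(offset, size):
        """True if any part of the segment falls inside an excluded range."""
        return any(offset < start + length and start < offset + size
                   for start, length in EXCLUSIONS)

    for off, size, marker in jpeg_segments("pixel10_sample.jpg"):
        state = "EXCLUDED from the hash" if overlaps_exclusion(off, size) else "covered"
        print(f"{marker} segment at byte {off:6d} ({size:6d} bytes): {state}")
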

Forgery Time!

Knowing what I can and cannot edit in the file, I altered the image to create a forgery. Here's my forgery:



  • If you use the official Adobe/CAI Content Credentials validation tool, you will see that the entire file is still cryptographically sound and shows the same authoritative information. There is no indication of alteration or tampering. (The results at Truepic's validation service also haven't changed.)

  • The metadata displayed by FotoForensics and Hintfo shows some of the differences:

    • The device model is "Pixel 11 Pro" instead of "Pixel 10 Pro". I changed the model number.

    • The EXIF software version was "HDR+ 1.0.790960477zd". Now it is "HDR+ 3.14156926536zd". (Really, I can change it to anything.)

    • The EXIF create and modify date has been backdated to 2025-07-20 12:10:17. (One month, 8 days, and 12 hours earlier than the original.)

    Although this is all of the EXIF data that I changed for this example, I could literally change everything.

  • Hintfo shows the decoded JUMBF data that contains the C2PA manifest. I changed the manifest's UUID from "urn:c2pa:486cba89-a3cc-4076-5d91-4557a68e7347" to "urn:neal:neal-wuz-here-neal-wuz-here-neal-wuz". Although the signatures are supposed to protect the manifest, they don't. (This is not the only part of the manifest that can be altered without detection.)
While I cannot change the visual content without generating a new signature, I can change everything in the metadata that describes how the visual content came to exist.

Consistently Inconsistent

Forgeries often stand out due to inconsistencies. However, the Pixel 10's camera has been observed making inconsistent metadata without any malicious intervention. For example:



According to Digital Photography Review, this photo of a truck is an out-of-the-camera original picture from a Pixel 10 using 2x zoom. The EXIF metadata records the subject distance. In this case, the distance claims to be "4,294,967,295 meters", or about 11 times the distance from the Earth to the Moon. (That's one hell of a digital zoom!) Of course, programmers will recognize that as uint32(-1). This shows that the Pixel 10 can naturally record invalid values in the metadata fields.
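
For anyone who wants to check that arithmetic, a trivial Python snippet (nothing device-specific is assumed here) shows that the reported distance is exactly an unsigned 32-bit representation of -1:

    # 4,294,967,295 meters is simply -1 stored in an unsigned 32-bit field.
    print(-1 & 0xFFFFFFFF)                   # 4294967295
    print(f"{-1 & 0xFFFFFFFF:,} meters")     # 4,294,967,295 meters
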

As another example:



DP Review describes this graffiti picture as another out-of-the-camera original using 2x zoom. It also has the "4,294,967,295 meters" problem, but it also has inconsistent timestamps. Specifically:
  • The EXIF metadata has a creation date of "2025-08-25 19:45:28". The time zone is "-06:00", so this is 2025-08-26 01:45:28 GMT.

  • The C2PA-compliant external trusted timestamp authority operated by Google says it notarized the file at 2025-08-26 01:45:30 GMT. This means it took about 2 seconds for the signing request to go over the network.

  • This picture has a few attached parasites. (A parasite refers to non-standard appended data after the end of the main JPEG image.) The XMP metadata identifies these extra JPEG images as the GainMap, Depth, and Confidence maps. Each of these images have their own EXIF data.

    ExifTool only displays the EXIF data for the main image. However, these parasites have their own EXIF data. Using the Strings analyzer at FotoForensics, you can see their EXIF dates. (Scroll to the bottom of the strings listing, then page-up about 3 times.) The data looks like:
    0x0030ed57: 2025:08:25 19:45:31
    0x0030ed8d: 0232
    0x0030ee35: 0100
    0x0030ef09: 2025:08:25 19:45:31
    0x0030ef1d: 2025:08:25 19:45:31
    0x0030ef31: -06:00
    0x0030ef39: -06:00
    0x0030ef41: -06:00
    This data says that the parasites were created at 2025-08-25 19:45:31 -06:00 (that's 2025-08-26 01:45:31 GMT). That is one second after the file was notarized. Moreover, while the C2PA manifest excludes the main image's EXIF data, it includes these parasites and their EXIF data! This indicates that the parasites were created after the file was notarized by Google.
With photos, it's possible for the times to vary by a second. This is because the timestamps usually don't track fractions of a second. For example, if the picture was taken at 28.99 seconds and the file took 0.01 seconds to write, then the created and modified times might be truncated to 28 and 29 seconds. However, there is no explanation for the parasite's timestamp to be 3 seconds after the file was created, or any time after being notarized by the trusted timestamp provider.
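
To make the ordering explicit, here is a small Python sketch that uses only the timestamps quoted above from the graffiti sample (the values are copied from the metadata; nothing here comes from the device itself):

    from datetime import datetime, timedelta, timezone

    local = timezone(timedelta(hours=-6))                         # the "-06:00" zone in the EXIF
    exif_capture  = datetime(2025, 8, 25, 19, 45, 28, tzinfo=local)
    tsa_notarized = datetime(2025, 8, 26,  1, 45, 30, tzinfo=timezone.utc)
    parasite_exif = datetime(2025, 8, 25, 19, 45, 31, tzinfo=local)

    print(tsa_notarized - exif_capture)     # 0:00:02 -- network trip to the timestamp service
    print(parasite_exif - tsa_notarized)    # 0:00:01 -- parasites dated AFTER notarization
    print(parasite_exif - exif_capture)     # 0:00:03 -- three seconds after capture
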

Remember: this is not one of my forgeries. This is native to the camera, and I have no explanation for how Google managed to either post-date the parasites before notarizing, or generated the manifest after having the file notarized. This inconsistent metadata undermines the whole point of C2PA. When genuine Pixel 10 files look forged, investigators will conclude "tampering", even if the file is not manually altered.

With the Pixel 10's C2PA implementation, either the timestamps are untrustworthy, or the C2PA signatures are untrustworthy. But in either case, the recipient of the file cannot trust the data.

However, the problems don't stop there. Both of these sample pictures also include an MPF metadata field. The MPF data typically includes pointers to parasitic images at different resolutions. In the lamp picture, the MPF properly points to the Gain Map (a JPEG attached as a parasite). However, in these truck and graffiti examples, the MPF doesn't point to a JPEG. Typically, applications fail to update the MPF pointers after an alteration, which permits tamper detection. With these examples, we have clear indications of tampering: inconsistent metadata, inconsistent timestamps, evidence of post-dating or an untrusted signature, and a broken MPF. Yet, these are due to the camera app and Google's flawed implementation; they are not caused by a malicious user. Unfortunately, a forensic investigator cannot distinguish an altered Pixel 10 image from an unaltered photo.

Google Pixel 10: Now with Fraud Enabled by Default!

There's a very common insurance fraud scheme where someone will purchase their new insurance policy right after their valuable item is damaged or stolen. They will alter the date on their pre- and post-damage photos so that it appears to be damaged after the policy becomes active.
  • Without C2PA, the insurance investigator will need to carefully evaluate the metadata in order to detect signs of alterations.

  • With C2PA in a Google Pixel 10, the investigator still needs to evaluate the metadata, but now also needs to prove that the C2PA cryptographic signature from Google is meaningless.
Typical users might think that the cryptographic signature provides some assurance that the information is legitimate. However, the Pixel 10's implementation with C2PA is grossly flawed. (Both due to the Pixel 10 and due to C2PA.) There are no trustworthy assurances here.

Privacy Concerns

Beyond their inadequate implementation of the flawed C2PA technology, the Google Pixel 10 introduces serious privacy issues. Specifically, the camera queries Google each time a picture needs to be digitally signed by a trusted signing authority. Moreover, every picture taken on the Google Pixel 10 gets signed.

What can Google know about you?
  • The C2PA signing process generates a digest of the image and sends that digest to the remote trusted timestamp service for signing. Because your device contacted Google to sign the image, Google knows which signature they provided to which IP address and when. The IP address can be used for a rough location estimation. Google may not have a copy of the picture, but they do have a copy of the signature.

  • Since the Pixel 10 queries Google each time a photo is captured, Google knows how often you take pictures and how many pictures you take.

  • While the C2PA metadata can be easily removed, the Pixel 10 reportedly also uses an invisible digital watermark called "SynthID". Of course, the details are kept proprietary because, as Google describes it, "Each watermarking configuration you use should be stored securely and privately, otherwise your watermark may be trivially replicable by others." This means, the only way to validate the watermark is to contact Google and send them a copy of the media for evaluation.
All of this enables user and content tracking. As I understand it, there is no option to disable any of it. If Google's web crawler, email, messaging system, etc. ever sees that signature again, then they know who originated the image, when and where it was created, who received a copy of the media and, depending on how Google acquired the data, when it was received.
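
As a rough illustration of the data flow (this is a generic RFC 3161-style sketch; exactly which bytes Google hashes and which endpoint it contacts are implementation details I am not asserting), only a digest needs to leave the device, yet the timestamp authority still learns plenty:

    import hashlib

    # Only a digest goes to the timestamp authority, but the authority still
    # learns who asked, from which IP address, and when -- and it can recognize
    # the same digest anywhere it surfaces later.
    with open("pixel10_sample.jpg", "rb") as fh:          # hypothetical file name
        digest = hashlib.sha256(fh.read()).hexdigest()
    print("Digest sent for timestamping:", digest)
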

With any other company, you might question the data collection: "While they could collect this, we don't know if they are collecting it." However, Google is widely known to collect as much user information as possible. While I have no proof, I have very little doubt that Google is collecting all of this information (and probably much more).

The inclusion of C2PA into the Pixel 10 appears to be more about user data collection and tracking than authenticity or provenance.

Security and Conformance

C2PA recently introduced their new conformance program. This includes two assurance levels. Level 1 has minimal security requirements, while Level 2 is supposed to be much more difficult to achieve and provides greater confidence in the information within the file.

There is currently only one device on the conforming products list that has achieved assurance level 2: The Google Pixel Camera. That's right, the same one that I just used to create an undetectable forgery and that normally generates inconsistent metadata.

The Provenance and Authenticity Standards Assessment Working Group (PASAWG) is performing a formal evaluation of C2PA. Some folks in the group posed an interesting theory: perhaps the Google Pixel Camera really is compliant with assurance level 2, but only because Google explicitly excludes everything about the hardware and is therefore technically conforming by omitting that information. Think of this like intentionally not attaching your bicycle lock to the entire bike. Sure, the bike can get stolen, but the lock didn't fail!



What if they fixed it?

You're probably wondering how something like this could happen at Google. I mean, regardless of whether you like the company, Google is usually known for cutting edge technology, high quality, and above-average security.
  • Maybe this is just an implementation bug. Maybe nobody at Google did any kind of quality assurance testing on this functionality and it slipped past quality control.

  • Maybe they were so focused on getting that "we use C2PA" and "Assurance Level 2" checkbox for marketing that they didn't mind that it didn't protect any of the metadata.

  • Maybe nobody in Google's security group evaluated C2PA. This would certainly explain how they could put their corporate reputation on this flawed solution.

  • Maybe nobody in Google's legal department was consulted about Google's liability regarding authenticating a forgery that could be used for financial fraud, harassment, or propaganda.
You might be thinking that Google could fix this if they didn't exclude the EXIF and XMP metadata from the cryptographic protection. (That would certainly be a step in the right direction.) Or maybe they could put some device metadata in the manifest for protection? However, you'd still be wrong. The C2PA implementation is still vulnerable to file system and hardware exploits.

These are not the only problems I've found with Google Pixel 10's C2PA implementation. For example:
  • In the last few days, FotoForensics has received a handful of these pictures, including multiple pictures from the same physical device. As far as I can tell, Google uses the exact same four root certificates on every camera:

    • Google LLC, s/n 4B06ED7C78A80AFEB7193539E42F8418336D2F27
    • Google LLC, s/n 4FCA31F82632E6E6B03D6B83AB98B9D61B453722
    • Google LLC, s/n 5EF6120CF4D31EBAEAF13FB9288800D8446676BA
    • Google LLC, s/n 744428E3A7477CEDFDE9BD4D164607A9B95F5730

    I don't know why Google uses multiple root certs. It doesn't seem to be tied to the selected camera or photo options.

    While there are a limited number of root certs, every picture seems to use a different signing certificate, even if it comes from the same camera. It appears that Google may be generating a new signing certificate per picture. What this means: if a device is compromised and used for fraud, they cannot revoke the certificate for that device. Either they have to revoke a root cert that is on every device (revoking everyone's pictures), or they have to issue revocations on a per-photo basis (that doesn't scale).

  • My associates and I have already identified easy ways to alter the timestamps, GPS information, and more. This includes ways that require no technical knowledge. The C2PA proponents will probably claim something like "The C2PA manifest doesn't protect that information!" Yeah, but tell that to the typical user who doesn't understand the technical details. They see a valid signature and assume the picture is valid.

  • There's a physical dismantling (teardown) video on YouTube. At 12:27 - 12:47, you can see the cable for the front-facing camera. At 14:18 - 14:35 and 15:30 - 16:20, you can see how to replace the back-facing cameras. Both options provide a straightforward way for hardware hackers to feed in a false image signal for signing. With this device, the C2PA cryptographic signature excludes the metadata but covers the visual content. Unfortunately, you cannot inherently trust the signed image.

  • Even if you assume that the hardware hasn't been modified, every picture has been tagged by Google as a composite image. That will impact insurance claims, legal evidence, and photo journalism. In fields where a composite image is not permitted, the Google Pixel 10 should not be used.
With Google's current implementation, their C2PA cryptographic signature is as reliable as signing a blank piece of paper. It doesn't protect the important information. But even if they fix their exclusion list, they are still vulnerable to C2PA's fundamental limitations. C2PA gives the appearance of authentication and provenance without providing any validation, and Google's flawed implementation just makes it worse. It's a snake oil solution that provides no meaningful information and no reliable assurances.

A lot of people are excited about the new Google Pixel 10. If you want a device that takes a pretty picture, then the Pixel 10 works. However, if you want to prove that you took the picture, value privacy, or plan to use the photos for proof or evidence, then absolutely avoid the Pixel 10. The cryptographic "proof" provided by the Pixel 10 is worse than having a device without a cryptographic signature. Every picture requires contacting Google, the unaltered metadata is inconsistent, the visual content is labeled as an AI-generated composite, the signed data may be post-dated, and there is no difference between an altered picture and an unaltered photo. I have honestly never encountered a device as untrustworthy as the Pixel 10.



What C2PA Provides

1 August 2025 at 10:00
Last month I released my big bulleted list of C2PA problems. Any one of these issues should make potential adopters think twice. But 27 pages? They should be running away!

Since then, my list has been discussed at the biweekly Provenance and Authenticity Standards Assessment Working Group (PASAWG). The PASAWG is working on an independent evaluation of C2PA. Myself and the other attendees are only there as resources. As resources, we're answering questions and discussing issues, but not doing their research. (We've had some intense discussions between the different attendees.) The PASAWG researchers have not yet disclosed their findings to the group, and as far as I can tell, they do not agree with me on every topic. (Good! It means my opinion is not biasing them!)

Full disclosure: The PASAWG meetings fall under Chatham House rules. That means I can mention the topics discussed, but not attribute the information to any specific person without permission. (I give myself permission to talk about my own comments.)

Clarifications

For the last few meetings, we have been going over topics related to my bulleted list, the associated issues, and clarifying what the C2PA specification really provides. Having said that, I have found nothing that makes me feel any need to update my big bulleted list, except maybe to add more issues to it. There are no inaccuracies or items needing correction. The 27 pages of issues are serious problems.

However, I do want to make a few clarifications.

First, I often refer to C2PA and CAI as "All Adobe All The Time". One of my big criticisms is that both C2PA and CAI seem to be Adobe-driven efforts with very little difference between Adobe, C2PA, and CAI. I still have that impression. However:
  • The C2PA organization appears to be a weak coalition of large tech companies. To my knowledge, every C2PA working group has an Adobe employee as its chair or co-chair. The only exception is the top-level C2PA organization -- it's chaired by a Microsoft employee who is surrounded by Adobe employees. I refer to their structure as a "weak coalition" because Adobe appears to be the primary driving force.

  • The Content Authenticity Initiative (CAI) doesn't just look like Adobe. It is Adobe: owned, managed, and operated by Adobe, with all of the code developed by Adobe employees as part of the Adobe corporation. When you visit the CAI's web site or Content Credentials web service, that's 100% Adobe.
It's the significant overlap of Adobe employees who are part of both C2PA and CAI that causes a lot of confusion. Even some of the Adobe employees mix up the attribution, but they are often corrected by other Adobe employees.

The second clarification comes from the roles of C2PA and CAI. C2PA only provides the specification; there is no implementation or code. CAI provides an implementation. My interpretation is that this is a blame game: if something is wrong, then the C2PA can blame the implementation for not following the specs, while the implementers can blame the specification for not being concise or for any oversights. (If something is broken, then both sides can readily blame the other rather than getting the problem fixed.)

The specifications dictate how the implementation should function. Unless there is a bug in the code, it's a specification issue and not a programming problem. For example, my sidecar swap exploit, which permits undetectable alterations of the visual image in a signed file, is made possible by the specification and not the implementation. The same goes for the new C2PA certificate conformance (which is defined but not implemented yet); choosing to use a new and untested CA management system is a risk from the spec and not the implementation.

The third clarification comes from "defined but not implemented yet". Because C2PA does not release any code, everything about their specification is theoretical. Moreover, the specification is usually 1-2 revisions ahead of any implementations. This makes it easy for C2PA to claim that something works since there are no implementation examples to the contrary. By the time there are implementations that demonstrate the issues, C2PA has moved on to newer requirements and seems to disregard previous findings. However, some of the specification's assumptions are grossly incorrect, such as relying on technologies that do not exist today. (More on that in a moment.)

New Changes: v2.2

The current specification, v2.2, came out a few months ago. Although my bulleted list was written based on v2.1 and earlier, the review was focused on v2.2. When I asked who supports v2.2, the only answer was "OpenAI". Great -- they're a signer. There are no tools that can fully validate v2.2 yet. But there is some partial support.

I've recently noticed that the Adobe/CAI Content Credentials web site no longer displays the embedded user attribution. For example, my Shmoocon forgery used to predominantly display the forged name of a Microsoft employee. However, last month they stopped displaying that. In fact, any pictures (not just my forgeries) that include a user ownership attribution are no longer displayed. This is because the Content Credentials web site is beginning to include some of the C2PA v2.2 specification's features. The feature? User's names are no longer trusted and are now no longer displayed.

That's right: all of those people who previously used C2PA to sign their names will no longer have their names displayed because their names are untrusted. (While the C2PA and Adobe/CAI organizations haven't said this, I think this is in direct response to some of my sample forgeries that included names.)

If you dive into the specifications, there's been a big change: C2PA v2.0 introduced the concepts of "gathered assertions" and "created assertions". However, these concepts were not clearly defined. By v2.2, these became a core requirement. Unfortunately, trying to figure out the purpose and definitions from the specs is as clear as mud. Fortunately, the differences were clarified at the PASAWG meetings. The risks, and what can be trusted, basically break down into four areas: gathered assertions, created assertions, trusted certificates, and reused certificates.

Risk #1: Gathered assertions
Gathered assertions cover any metadata or attribution that comes from an unvetted source, such as a user entering their name, copyright information, or even camera settings from unvetted devices. Because the information is unverified, it is explicitly untrusted.

When you see any information under a gathered assertion, it should be viewed skeptically. In effect, it's as reliable as existing standard metadata fields, like EXIF, IPTC, and XMP. (But if it's just as reliable as existing standards, then why do we need yet-another new way to store the same information?)

Risk #2: Created assertions
Created assertions are supposed to come from known-trusted and vetted hardware. (See the "C2PA Generator Product Security Requirements", section 6.1.2.) However, there is currently no such thing as trusted hardware. (There's one spec for some auto parts that describes a trusted camera sensor for the auto industry, but the specs are not publicly accessible. I can find no independent experts who have evaluated these trusted component specs, no devices use the specs right now, and it's definitely not available to general consumers. Until it's officially released, it's vaporware.) Since the GPS, time, camera sensor, etc. can all be forged or injected, none of these created assertions can be trusted.

This disparity between the specification's theoretical "created assertions" and reality creates a big gap in any C2PA implementation. The specs define the use of created assertions based on trusted hardware, but the reality is that there are no trusted hardware technologies available right now. Just consider the GPS sensor. Regardless of the device, it's going to connect to the board over I2C, UART, or some other publicly-known communication protocol. That means it's a straightforward hardware modification to provide false GPS information over the wire. But it can be easier than that! Apps can provide false GPS information to the C2PA signing app, while external devices can provide false GPS signals to the GPS receiver. Forging GPS information isn't even theoretical; the web site GPSwise shows real-time information (mostly in Europe) where GPS spoofing is occurring right now.



And that's just the GPS sensor. The same goes for the time on the device and the camera's sensor. A determined attacker with direct hardware access can always open the device, replace components (or splice traces), and forge the "trusted sensor" information. This means that the "created assertions" that denote what was photographed, when, and where can never be explicitly trusted.



Remember: Even if you trust your hardware, that doesn't help someone who receives the signed media. A C2PA implementation cannot verify that the hardware hasn't been tampered with, and the recipient cannot validate that trusted hardware was used.

Requiring hardware modifications does increase the level of technical difficulty needed to create a forgery. While your typical user cannot do this, it's not a deterrent for organized crime groups (insurance and medical fraud are billion-dollar-per-year industries), political influencers, propaganda generators, nation-states, or even determined individuals. A signed cat video on Tick Tack or Facegram may come from a legitimate source. However, if there is a legal outcome, political influence, money, or reputation on the line, then the signature should not be explicitly trusted even if it says that it used "trusted hardware".

Risk #3: Trusted Certificates
The C2PA specification uses a chain of X.509 certificates. Each certificate in the chain has two components: the cryptography (I have no issues with the cryptography) and the attribution about who owns each certificate. This attribution is a point of contention among the PASAWG attendees:
  • Some attendees believe that, as long as the root is trusted and we trust that every link in the chain follows the defined procedure of validating users before issuing certificates, then we can trust the name in the certificate. This optimistic view assumes that everyone associated with every node in the chain was trustworthy. Having well-defined policies, transparency, and auditing can help increase this trust and mitigate any risks. In effect, you can trust the name in the cert.

  • Other attendees, including myself, believe that trust attenuates as each new node in the chain is issued. In this pessimistic view, you can trust a chain of length "1" because it's the authoritative root. (We're assuming that the root certs are trusted. If that assumption is wrong, then nothing in C2PA works.) You can trust a length of "2" because the trusted root issued the first link. But every link in the chain beyond that cannot be fully trusted.
This pessimistic view even impacts web certificates. HTTPS gets around this trust attenuation by linking the last node in the chain back to the domain for validation. However, C2PA's certificates do not link back to anywhere. This means that we must trust that nobody in the chain made a mistake and that any mistakes are addressed quickly. ("Quickly" is a relative term. When WoSign and StartCom were found to be issuing unauthorized HTTPS certificates, it took years for them to be delisted as trusted CA services.)

In either case, you -- as the end user -- have no means to automatically validate the name in the signing certificate. You have to trust the signing chain.

As an explicit example, consider the HTTPS certificate used by TruePic's web site. (TruePic is a C2PA steering committee member). When you access their web site, their HTTPS connection currently uses a chain of three X.509 certificates:



  1. The root certificate is attributed to the Internet Security Research Group (ISRG Root X1). I trust this top level root certificate because it's in the CCADB list that is included in every web browser. (To be in the CCADB, they had to go through a digital colonoscopy and come out clean.)

  2. The second certificate is from Let's Encrypt. Specifically, ISRG Root X1 issued a certificate to Let's Encrypt's "R11" group. It's named in the cert. Since I trust Root X1, I assume that Root X1 did a thorough audit of Let's Encrypt before issuing the cert, so I trust Let's Encrypt's cert.

  3. Let's Encrypt then issued a cert to "www.truepic.com". However, their vetting process is really not very sophisticated: if you can show control over the host's DNS entry or web server, then you get a cert. In this case, the certificate's common name (CN) doesn't even name the company -- it just includes the hostname. (This is because Let's Encrypt never asked for the actual company name.) There is also no company address, organization, or even a contact person. The certificate has minimum vetting and no reliable attribution. If we just stop here, then I wouldn't trust it.

    However, there's an extra field in the certificate that specifies the DNS name where the cert should come from. Since this field matches the hostname where I received the cert (www.truepic.com), I know it belongs there. That's the essential cross-validation and is the only reason the cert should be trusted. We can't trust the validation process because, really, there wasn't much validation. And we can't trust the attribution because it was set by the second-level issuer and contains whatever information they wanted to include.
With web-based X.509 certificates, there is that link back to the domain that provides the final validation step. In contrast, C2PA uses a different kind of X.509 certificate that lacks this final validation step. If the C2PA signing certificate chain is longer than two certificates, then the pessimistic view calls the certificate's attribution and vetting process into question. The basic question becomes: How much should you trust that attribution?
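
For a concrete look at that cross-validation step, here is a short Python sketch (the hostname is used only because it was the example above; it needs network access to run). It fetches a server's leaf certificate and prints the subjectAltName entries that the TLS layer matches against the hostname. A C2PA signing certificate has no equivalent field to check.

    import socket, ssl

    host = "www.truepic.com"
    ctx = ssl.create_default_context()            # the platform's default trusted root store
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()              # parsed leaf certificate

    print("Subject: ", cert["subject"])
    print("Issuer:  ", cert["issuer"])
    print("DNS SANs:", [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"])
    # wrap_socket() already verified that one of those SANs matches `host`.
    # A C2PA signing certificate has nothing comparable to match against.
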

Risk #4: Reused Certificates
Most services do not have user-specific signing certificates. For example, every picture signed today by Adobe Firefly uses the same Adobe certificate. The same goes for Microsoft Designer (a Microsoft certificate), OpenAI (a certificate issued by TruePic), and every other system that currently uses C2PA.

The attribution in the signature identifies the product that was used, but not the user who created the media. It's like having "Nike" on your shoes or "Levi's" on your jeans -- it names the brand but doesn't identify the individual. Unless you pay to have your own personalized signing certificate, the signature is not distinct to you. This means that it doesn't help artists protect their works. (Saying that the painter used Golden acrylic paint with a brush by Winsor & Newton doesn't identify the artist.)

As an aside, a personalized signing certificate can cost $50-$300 per year. Given all of C2PA's problems, you're better off using the US Copyright Office. They offer group registration for photographers: $55 for 750 photos per year, and the protection lasts for 70 years beyond the creator's lifetime. This seems like a more cost-effective and reliable option than C2PA.

Missing Goals

Each of these risks with C2PA poses serious concerns. And this is before we get into manifest/sidecar manipulations to alter the visual content without detection, inserting false provenance information, competing valid signatures, reissuing signatures without mentioning changes, applying legitimate signatures to false media, etc. Each of these exploits is independent of the implementation and is due to the specifications.

The C2PA documentation makes many false statements regarding what C2PA provides, including:
  • Section 3, Core Principles: "Content Credentials provides a way to establish provenance of content."

  • Section 5.1: "Helping consumers check the provenance of the media they are consuming."

  • Section 5.2: "Enhancing clarity around provenance and edits for journalistic work."

  • Section 5.3: "Offering publishers opportunities to improve their brand value." (Except that the end consumer cannot prove that it came from the publishers.)

  • Section 5.4: "Providing quality data for indexer / platform content decisions."
This is not the entire list of goals. (I'm literally going section by section through their document.) Unfortunately, you cannot have reliable provenance without validation. C2PA lacks attribution validation so it cannot meet any of these goals. C2PA does not mitigate the risk from someone signing content as you, replacing your own attribution with a competing claim, or associating your valid media with false information (which is a great way to call your own legitimate attribution into question).

What Does C2PA Provide?

An independent report came out of the Netherlands last month that reviews C2PA and whether it can help "combat disinformation by ensuring the authenticity of reporting through digital certificates." (Basically, it's to see if C2PA is appropriate for use by media outlets.) This report was commissioned by NPO Innovatie (NPO), Media Campus NL, and Beeld & Geluid. The report is written in Dutch (Google Translate works well on it) and includes a summary in English. Their key findings (which they included with italics and bold):
C2PA is a representation of authenticity and provenance, but offers no guarantee of the truth or objectivity of the content itself, nor of the factual accuracy of the provenance claims within the manifest.
(Full disclosure: They interviewed many people for this report, including me. However, my opinions are not the dominant view in this report.)

C2PA does not provide trusted attribution information and it provides no means for the end recipient to automatically validate the attribution in the signing certificate. Moreover, the specifications depend on trusted hardware, even though there is no such thing as trusted hardware. This brings up a critical question: If you cannot rely on the information signed using C2PA, then what does C2PA provide?

My colleague, Shawn Masters, likens C2PA's signature to an "endorsement". Like in those political ads, "My name is <name>, and I approve this message." You, as the person watching the commercial, have no means to automatically validate that the entity mentioned in the promotion actually approved the message. (An example of this false attribution happened in New Hampshire in 2024, where a deep fake robocall pretended to be Joe Biden.) Moreover, the endorsement is based on a belief that the information is accurate, backed by the reputation of the endorser.

The same endorsement concept applies to C2PA: As the recipient of signed media, you have no means to automatically validate that the name in the signing cert actually represents the cert. The only things you know: (1) C2PA didn't validate the content, (2) C2PA didn't validate any gathered assertions, and (3) the signer believes the unverifiable created assertions are truthful. When it comes to authenticating media and determining provenance, we need a solution that provides more than "trust", "belief", and endorsements. What we need are verifiable facts, validation, provable attribution, and confirmation.

The Big Bulleted List

10 June 2025 at 12:36
Today's online environment permits the easy manipulation and distribution of digital media, including images, videos, and audio files, which can lead to the spread of misinformation and erode trust. The media authentication problem refers to the challenge of verifying the authenticity and integrity of digital media.

For four years, I have been reviewing the solution offered by the Coalition for Content Provenance and Authenticity (C2PA). In each blog entry, I either focused on different types of exploits, exposed new vulnerabilities, or debunked demonstrations. You would think that, after nearly 30 blogs, I would run out of new problems with it. And yet...

I'm not the only person evaluating C2PA. I know of three other organizations, four government groups, and a half dozen companies that are doing their own independent evaluations. I learned about these groups the easy way: they start with an online search, looking for issues with C2PA's solution, and Google returns my blog entries in the first few results. They reach out to me, we chat, and then they go on with their own investigations. (With my blog entries, it's not just me voicing an opinion. I include technical explanations and step-by-step demonstrations. It's hard to argue with a working exploit.) In every case, they already had their own concerns and suspicions, but nothing concrete. My working examples and detailed descriptions helped them solidify their concerns.

"I've got a little list."

Near the beginning of this year, a couple of different groups asked me if I had a list of the known issues with C2PA. At the time, I didn't. I have my own private notes and my various blog entries, but no formal list. I ended up going through everything and making a bulleted document of concerns, issues, and vulnerabilities. The initial list was 15 PAGES! That's 15 pages of bulleted items, with each bullet describing a different problem.

I had the paper reviewed by some technical peers. Based on their feedback, I changed some statements, elaborated on some details, added more exploits, and included lots of references. It grew to 22 pages.

I noticed that Google Docs defaulted to really wide margins. (I'm not sure why.) I shrank the margins to something reasonable. The reformatting made it a 20-page paper. Then I added more items that had been overlooked. Today, the paper is 27 pages long, including a one-page introduction, one-page conclusion, and some screenshots. (The bulleted list alone is about 24 pages.)

With blogs, you might not realize how much information is included over time. It wasn't until I saw over 20 pages of bulleted concerns -- strictly from public information in my blogs -- that I realized why I was so passionate about these concerns. It's not just one thing. It's issue after issue after issue!

The current paper is almost entirely a list of items previously mentioned in my blog. There are only a few bullets that are from my "future blog topics" list. (After four years, you'd think I'd run out of new concerns. Yet, I'm still not done. At what point should people realize that C2PA is Adobe's Edsel?)

Last week, C2PA, CAI, and IPTC held their "Content Authenticity Summit". Following this closed-door conference, I was contacted by some folks and asked if they could get a copy of my "C2PA issues" list. I decided that, if it's going to get a wider distribution, then I might as well make my paper public. Having said that, here's a link to the current paper:

C2PA Issues and Evaluation (public release, 2025-06-09)

If you have questions, comments, thoughts, think there is something that should be added, or see any errors, please let me know! I'm very open to updating the document based on feedback.

Organizing Content

There are many ways to organize this data, such as by risk level, functionality, technical vs. usability, etc. I decided to organize the document by root cause:
  • Fundamental problems: Issues that cannot be fixed without a significant redesign.

  • Poor design decisions: Issues that could be fixed if C2PA changed their approach. (However, C2PA has been very clear that they do not want to change these decisions.)

  • Implementation issues: Bugs that could be fixed. (Many haven’t been fixed in years, but they are fixable with little or no undesirable side-effects.)
There are also topics that I am intentionally not including in the paper. These include:
  • Severity. While I don't think there are many minor issues, I don't try to classify the severity. Why? Well, one company might think an issue is "critical" while another may argue that it is "severe" or "medium". Severity is subjective, based on your requirements. I think the paper provides enough detail for readers to make their own judgment call.

  • Simple bugs. The existing code base is far from perfect, but I really wanted to focus only on the bugs due to the specifications or design decisions. Problems due to the specifications or policies impact every implementation and not one specific code base. However, the paper does include some complicated bugs, where addressing them will require a significant effort. (For example, c2patool has over 500 dependency packages, many of which are unvetted, include known vulnerabilities, and have not been patched in years.)

  • Solutions. With few exceptions, I do not recommend possible solutions since I am in no position to fix their problems. There may be an undisclosed reason why C2PA, CAI, or Adobe does not want to implement a particular solution. The paper does include some solution discussions, when every solution I could think of just introduces more problems; sometimes developers work themselves into a corner, where there is no good solution.

  • Alternatives. Except as limited examples, I intentionally avoid discussing alternative technologies. (And yes, there are some alternatives with their own trade-offs.) I want this paper to only be focused on C2PA.

  • Wider ramifications. If C2PA is deployed in its current state, it can lead to incredibly serious problems. It would become the start of something with a massive domino effect. Rather than focusing on theoretical outcomes, the paper is directed toward immediate problems and direct effects.

Simply seeing a random list of problems can be overwhelming. I hope this type of organization makes it easier to absorb the information.

The Most Important Issue

While the first question I usually receive is "Do you have a list?", the second question is almost always "What is the most important issue?" Looking at the bulleted list, you might think it would be the misuse of X.509 for certificate management. I mean, lots of problems in the paper fall under that general topic. However, I think the bigger issue is the lack of validation. C2PA is completely based on 'trust':
  • You trust that the name in the certificate represents the signer.

  • You trust that the information in the file (visual content, metadata, timestamps, etc.) is legitimate and not misrepresented by the signer.

  • You trust that the signer didn't alter any information related to the dependencies.

  • You trust that the timestamps are accurate.

And the list goes on. C2PA provides authentication without validation and assumes that the signer is not intentionally malicious. However, if I trust the source of the media, then I don't need C2PA to tell me that I can trust it. And if I don't trust the signer, then nothing in C2PA helps increase the trustworthiness.
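To make that distinction concrete, here's a minimal sketch in plain Python. (It uses the generic cryptography package, not C2PA's actual libraries; the manifest fields and names are made up for illustration.) A valid signature only proves which key signed the bytes and that the bytes haven't changed since signing. It says nothing about whether the claims inside are honest.

  # Minimal sketch: signature verification proves authenticity, not truth.
  # Assumes the "cryptography" package; the manifest fields are hypothetical.
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding, rsa

  # The signer can claim anything in the manifest; nothing checks these fields.
  manifest = b'{"creator": "Trusted News Corp", "captured": "2025-06-01T12:00:00Z"}'

  # The signer controls this key (in C2PA, it would be bound to an X.509 cert).
  signer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  signature = signer_key.sign(manifest, padding.PKCS1v15(), hashes.SHA256())

  # Verification only proves: (1) the manifest was not altered after signing, and
  # (2) it was signed by the holder of this key. Whether "creator" or "captured"
  # are honest is pure trust in the signer.
  try:
      signer_key.public_key().verify(signature, manifest,
                                     padding.PKCS1v15(), hashes.SHA256())
      print("Authentic signature -- but the claims inside remain unvalidated.")
  except InvalidSignature:
      print("Signature does not match the manifest.")

Swap in any signing scheme or certificate chain you like; the gap between "who signed it" and "is it true" stays the same.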

Alternatives

Following some of my initial critiques on my blog, representatives from C2PA and CAI (all Adobe employees) asked me for a video chat. (This was on December 13, 2023.) During the call, C2PA's chief architect became frustrated with my criticisms. He asked me if I could do better. A few weeks later, I created VIDA, which was later renamed to SEAL: the Secure Evidence Attribution Label. SEAL is still being actively developed, with some great additions coming soon. The additions include support for derivation references (simple provenance), support for offline cryptographic validation, and maybe even support for folks who don't have their own domains.

SEAL is a much simpler solution compared to C2PA. While C2PA tries to do a lot, it fails to do any of it properly. In contrast, SEAL focuses on one thing, and it does it incredibly well.

Just as I've been critical of C2PA, I've been looking at SEAL with the same critical view. (I want to know the problems!) I've had my technical peers also review SEAL. I instructed them to be brutal and hyper-critical. The result? Two pages of bullets (it's a three-page PDF with an introduction). Moreover, almost every bullet stems from the general problem of relying on DNS and networked time servers. (Everyone with a domain or time-based signing service has these problems; it's not just SEAL. If/when someone solves these problems, they will be solved for everyone, including SEAL.)
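To show where that dependency lives, here's a minimal sketch of the DNS lookup side. (It uses the dnspython package; the domain and record contents are illustrative, not SEAL's exact record name or wire format.) Before a verifier can check any signature tied to a domain, it first has to successfully resolve that domain's TXT record:

  # Minimal sketch of the DNS dependency. Assumes the "dnspython" package;
  # the domain and record contents are illustrative, not SEAL's exact format.
  import dns.exception
  import dns.resolver

  def fetch_txt(domain: str) -> list[str]:
      """Return the TXT strings published for a domain; raises on DNS failure."""
      answers = dns.resolver.resolve(domain, "TXT")
      records = []
      for rdata in answers:
          # dnspython splits long TXT records into 255-byte chunks; rejoin them.
          records.append(b"".join(rdata.strings).decode("utf-8"))
      return records

  # Every failure mode here -- expired or squatted domains, resolver outages,
  # spoofed answers without DNSSEC -- belongs to DNS itself, not to SEAL.
  try:
      for txt in fetch_txt("example.com"):
          print(txt)
  except dns.resolver.NXDOMAIN:
      print("Domain does not exist (expired, squatted, or mistyped).")
  except dns.exception.DNSException as err:
      print(f"DNS lookup failed: {err}")

Every one of those failure branches is a DNS problem rather than a SEAL problem, which is why the SEAL issues list is so short.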

While I don't think the C2PA paper has many "minor" issues, I think all of SEAL's issues appear to be minor. For example, a problem like domain squatting applies to anyone with a domain. It doesn't directly impact your own domain name, but it could fool users who don't look too closely.

Here's the current draft of the SEAL Issues (2025-06-09 draft). Again, if you see anything wrong, missing, or just have questions or concerns, please let me know!

Independent Reviews

Both of these documents represent my own research and findings, with a few contributions from peers and associates. Other groups are doing their own research. I've shared earlier drafts of these lists with some of those other groups; most seem to use these lists as starting points for their own research. Having said that, I hope that these documents will help raise awareness of the risks associated with adopting new technologies without proper vetting.

I can't help but hum "I've Got a Little List" from The Mikado whenever I work on these bullet points.