• Archives

    Metadata

    The metadata behind tokenURI() is stored at storage.googleapi.com on its centralized server. It is therefore important to have at least three different prominent community members scrape and authenticate the metadata while the origin server is still running. Authentication can be done by verifying the following message with your signer:

    I [public address] confirm that I have scraped the metadata for [contract address] using the provided tokenURI() and stored it in a sqlite database and then archived it. The following sha512 checksum is of the resulting archive: [checksum]
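    For reference, here is a minimal Python sketch of the scrape-and-checksum step that message describes. The RPC endpoint, contract address, token range, and file names are all placeholder assumptions (it also assumes web3.py v6 is installed), so adjust them for the actual collection:

    ```python
    import hashlib
    import json
    import shutil
    import sqlite3
    import urllib.request

    from web3 import Web3  # assumption: web3.py v6

    RPC_URL = "https://example-rpc.invalid"  # assumption: your own RPC endpoint
    CONTRACT = "0x0000000000000000000000000000000000000000"  # assumption: the collection's address
    TOKEN_URI_ABI = [{
        "name": "tokenURI", "type": "function", "stateMutability": "view",
        "inputs": [{"name": "tokenId", "type": "uint256"}],
        "outputs": [{"name": "", "type": "string"}],
    }]

    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    nft = w3.eth.contract(address=Web3.to_checksum_address(CONTRACT), abi=TOKEN_URI_ABI)

    db = sqlite3.connect("metadata.sqlite")
    db.execute("CREATE TABLE IF NOT EXISTS metadata (token_id INTEGER PRIMARY KEY, uri TEXT, body TEXT)")

    for token_id in range(10_000):  # assumption: ids 0..9999
        uri = nft.functions.tokenURI(token_id).call()
        with urllib.request.urlopen(uri) as resp:  # fetch from the origin server
            body = resp.read().decode("utf-8")
        json.loads(body)  # sanity check: must parse as JSON
        db.execute("INSERT OR REPLACE INTO metadata VALUES (?, ?, ?)", (token_id, uri, body))
    db.commit()
    db.close()

    # Archive the database and checksum the archive -- this is the [checksum] value.
    archive = shutil.make_archive("metadata", "gztar", root_dir=".", base_dir="metadata.sqlite")
    with open(archive, "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())
    ```

    Each archiver should run something like this independently against the live origin server so the resulting checksums can be cross-checked.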

    The resulting signatures will be stored here, but eventually it would be nice to store all of the checksums onchain.
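    To go with the attestation above, here is a hypothetical sketch of how the signature could be produced and checked. It assumes the eth-account library and a standard EIP-191 personal_sign style message; the key here is freshly generated only so the example runs on its own, and the contract address and checksum are placeholders:

    ```python
    from eth_account import Account
    from eth_account.messages import encode_defunct  # assumption: eth-account is installed

    acct = Account.create()  # stand-in for the archiver's real wallet key

    text = (
        f"I {acct.address} confirm that I have scraped the metadata for "
        "0x0000000000000000000000000000000000000000 using the provided tokenURI() "
        "and stored it in a sqlite database and then archived it. The following "
        "sha512 checksum is of the resulting archive: <sha512 from the previous step>"
    )
    message = encode_defunct(text=text)

    # Archiver side: sign the attestation.
    signed = Account.sign_message(message, private_key=acct.key)
    print(signed.signature.hex())

    # Anyone else: recover the signer and compare it to the address named in the message.
    recovered = Account.recover_message(message, signature=signed.signature)
    assert recovered == acct.address
    ```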

    Backing up the metadata is a much simpler matter because it's only 250 KB. If you are hosting a copy, please reply here so I can add your link to this page.

    PFP Images

    Backing up the images is a more challenging matter. Each image is around 10 MB, so for 10k images that's 100 GB. There are a couple of ways to approach this, but ideally we can store it efficiently and losslessly. I've tried various quick approaches and one stands out: converting all of the images to lossless WebP. With lossless WebP I can get each file down to about 1 MB, totaling roughly 10 GB. There's potential for better compression, but the better options will all take a while to implement.

    If we decided 10 GB was the best we were going to get, then as of 2025 it would cost about $90 to store it permanently on Arweave. It's not bad, tbh. Coming up with a better solution would cost more than $90 in developer time, although that solution would also scale across many NFT collections. That said, I could also store 10 GB on Amazon S3 for about 12 cents a month, which would take roughly 60 years to reach $90; surely we'll come up with a better compression method by then. That's certainly much better than the roughly 6 years it would take with the original PNG files (100 GB at about $1.20 a month hits $90 in around 6 years).
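    As a starting point, the lossless WebP conversion can be done with something as simple as the sketch below. It assumes Pillow is installed and that the original PNGs sit in a local ./png directory; both are assumptions, and the cwebp CLI (cwebp -lossless -m 6) would do the same job:

    ```python
    from pathlib import Path

    from PIL import Image  # assumption: Pillow is installed

    src = Path("png")   # assumption: original PNGs live here
    dst = Path("webp")
    dst.mkdir(exist_ok=True)

    for png in sorted(src.glob("*.png")):
        out = dst / (png.stem + ".webp")
        # lossless=True keeps every pixel; method=6 trades CPU time for a smaller file
        Image.open(png).save(out, "WEBP", lossless=True, method=6)
        print(out, out.stat().st_size, "bytes")
    ```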

    One of the biggest challenges with this collection is the noise filter. The reason WebP was able to handle it well is its built-in color cache coding. Before we could do anything else, we would have to not only remove the noise but also save it so it can be reapplied during decoding. Once that is done, we can start identifying the visual representation of each trait. Then we can rebuild each image using only its traits, a final diff, and the saved noise. Based on the compression we saw on the metadata, we should be able to compress the images about 20x, which means roughly 500 MB, or $4.35 on Arweave.
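    To make the trait + diff + noise idea concrete, here is a very rough numpy/Pillow sketch of the round trip. The trait compositing and noise extraction are the genuinely hard parts described above and are only placeholders here; the sketch just shows that storing a residual diff keeps the reconstruction exact:

    ```python
    import numpy as np
    from PIL import Image  # assumptions: numpy and Pillow are installed

    def encode(original_path, trait_composite_path, noise):
        """Store only what the composited trait layers cannot reproduce."""
        original = np.asarray(Image.open(original_path).convert("RGB"), dtype=np.int16)
        traits = np.asarray(Image.open(trait_composite_path).convert("RGB"), dtype=np.int16)
        # noise: the saved per-image noise layer (int16 array, same shape as the image)
        return original - traits - noise  # small, mostly-zero residual

    def decode(trait_composite_path, diff, noise):
        """Rebuild the exact original from traits + diff + noise."""
        traits = np.asarray(Image.open(trait_composite_path).convert("RGB"), dtype=np.int16)
        restored = traits + diff + noise  # equals the original by construction
        return Image.fromarray(restored.astype(np.uint8))
    ```

    The diff and noise arrays are what would actually get compressed and uploaded, alongside a shared library of trait layers.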