Another DAM Blog

Blog about Digital Asset Management

How do I avoid duplicate assets in a DAM?


In some cases, organizations have unique file naming conventions, but file names are often created by people, which more often than not yields not-so-unique file names. One person names a file one way and another person names the exact same file another way because they use it differently or in a different place. This demonstrates a clear lack of consistency and governance, and it happens far too often. This is especially true if you are not using a Digital Asset Management (DAM) solution with clear guidelines and stopgaps to catch these sorts of things as part of a workflow.

We end up with collections of assets which may have:

  • many similar file names (are these true redundancies or simply the result of a poor file naming convention?)
  • some assets with the same file names (whether the content is the same or not throughout a folder structure)
  • every asset having a different file name even though some may be exactly the same (content-wise)
  • assets simply copied multiple times across folder structures (which makes sorting by file name out of the question)

So here is the dilemma. What do you do with exact duplicates?

  • Throw away all your IP and start over? (Not wise)
  • Painstakingly open and look through each and every asset, comparing and contrasting them one at a time? (Some people do this, as painful, time-consuming and expensive as it sounds, since it may even take a subject matter expert to review some assets)
  • Have a computer “just do something about it”?
  • Use a file browser to make this review process go faster? (getting warmer, but we can actually do better than visually checking each asset, with or without any metadata)
  • And we have not even begun to discuss different versions of the same assets and different file types (different file extensions)
  • What would you do?

Some DAM solutions will look for matching file names and catch those during the upload process (based on matching file names alone, regardless of whether the actual content is duplicated, as described earlier).

There is an even better way…

Enter the world of algorithms. Yes, an algorithm is complex code, but do not worry: these complex codes can be nicely packaged into easy-to-use and very powerful tools for data deduplication (also referred to as ‘deduping’ or ‘dedupe’ in shorthand). The algorithm reads every bit of each asset, regardless of file name, and computes a checksum. A checksum is a string of letters and numbers (alphanumeric) which acts like a fingerprint, unique to that asset. If an asset is an exact duplicate (truly beyond any visual comparison), it will produce the same checksum. If two assets have the same checksum, they are exact duplicates.
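
For instance, here is a minimal sketch in Python (using the standard hashlib module) of how such a tool might compute an MD5 checksum for an asset; the file name is just a hypothetical example:

```python
import hashlib

def md5_checksum(path, chunk_size=8192):
    """Read a file in chunks and return its MD5 checksum as a hex string."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "logo.png" is a hypothetical asset; any file will do
print(md5_checksum("logo.png"))
```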

How does it work with assets?

  • Add a period in a text file
  • Move a line in a graphic
  • Clip an audio file
  • Color correct a photograph

Doing any of these changes the bits which make up these assets, and that will yield a different checksum. If two assets have the same arrangement of bits, they will produce the same checksum.
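
You can see this in action with two well-known test strings; adding a single period yields an entirely different checksum (a quick demonstration in Python):

```python
import hashlib

a = hashlib.md5(b"The quick brown fox jumps over the lazy dog").hexdigest()
b = hashlib.md5(b"The quick brown fox jumps over the lazy dog.").hexdigest()

print(a)  # 9e107d9d372bb6826bd81d3542a419d6
print(b)  # e4d909c290d0fb1ca068ffaddf22cbd0 (one added period, a completely different fingerprint)
```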

So how accurate is it?

One algorithm called MD5 produces 128-bit checksums, which means there are 2 to the 128th power, or 340,282,366,920,938,463,463,374,607,431,768,211,456, possible values (well beyond an octillion). The odds of two different assets accidentally sharing a checksum are astronomically small. That should be accurate enough for quite a while, don’t you think? Read on. This gets better.

Regardless of the operating system (OS) or the computer (PC/Mac/whatever) you use to run this algorithm, MD5 can catch exact duplicate assets, and when it does, it will produce the exact same checksum or fingerprint.

I briefly mentioned MD5 during a metadata webinar and people got really excited.

Where did MD5 come from?

MD5 was originally designed in 1991 by MIT’s Ron Rivest as a cryptographic hash function producing 128-bit hash values. While the security community has since moved to other algorithms due to known weaknesses, MD5 was long a standard among software vendors for verifying that a download arrived exactly as intended (not tampered with). MD5 has since been replaced by SHA-256 as a U.S. national standard. Since 512-bit algorithms (such as SHA-512) are even more taxing on a system, MD5 remains one of the most commonly used data deduplication methods. Note that MD5 is not recommended for SSL, password security or any other security purpose today. We are talking about using MD5 just for data deduplication here, not security.

How commonly is this tool found?

An MD5 command is built into UNIX-like machines (the md5 command in Apple’s Terminal application, or md5sum on Linux). There are a number of PC programs which use MD5 (or SHA-256) and are available online for a nominal fee. Some DAM systems ship with MD5. Other DAM systems ship with a less powerful algorithm called CRC32, which is a 32-bit checksum.

What does an MD5 checksum look like?

5d41402abc4b2a76b9719d911017c592

To technology folks, this is exciting stuff with major potential. For the rest of us, there is no need to run away, but understand that a DAM should be able to create, read and compare these values. A DAM should also be able to report on this along with the rest of the metadata for every asset available.
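
As it happens, the example checksum above is the MD5 of the string “hello”, which you can verify yourself in Python:

```python
import hashlib

print(hashlib.md5(b"hello").hexdigest())
# 5d41402abc4b2a76b9719d911017c592
```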

What are the benefits of MD5?

  • We can run MD5 on a collection of assets (in a DAM or not) and compare the checksums. If any checksums match, you just found duplicate assets (see the sketch after this list). Several MD5 tools do this comparison of checksums. Handle duplicates however your organization deems fit, in a systematic manner. Just be aware of where the assets were intended to be used, particularly if the file names do not match.
  • We could also search for assets using the checksums (even as metadata on a per asset basis in the DAM if you assign a field to it) to reduce duplicates.
  • We could ask a DAM vendor to compare all checksums in the DAM (one per asset) against any uploads.
  • It is very common to have many duplicate assets within an organization. Some organizations have run MD5 on their assets and reduced duplicate assets by over 80%.
  • MD5 can even work on a string of text (outside of a file) to verify if it is the same as another string of text.
  • This can reduce storage on servers of any duplicates. Why would we want to store exact duplicates repeatedly?
  • MD5 runs on any Operating System and any computer which can handle the checksum function.
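
To make the first benefit above concrete, here is a minimal sketch in Python of scanning a folder of assets, grouping files by MD5 checksum and reporting any groups with more than one file; the folder name is hypothetical:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root):
    """Group every file under root by MD5 checksum.
    Any group with more than one file contains exact duplicates."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    return {d: paths for d, paths in groups.items() if len(paths) > 1}

# "assets" is a hypothetical folder of files to dedupe
for digest, paths in find_duplicates("assets").items():
    print(digest, [str(p) for p in paths])
```

For very large assets, you would read each file in chunks (as in the earlier sketch) rather than loading it into memory all at once.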

What are the risks behind MD5?

  • If embedded metadata (metadata embedded inside the asset) is edited differently between two otherwise duplicate assets, you will get different checksums (duplicate not found).
  • Layered masks not visible to the naked eye may throw MD5 off if one asset has a layered mask and another asset with duplicate content does not (duplicate not found).
  • Collisions may happen. MD5 is no longer recommended for any security needs. SHA-256 trumps MD5.
  • The MD5 tool may tax your system performance while creating and comparing checksums for a collection of assets. SHA-256 is even more taxing on a system. SHA-1 may be less taxing on a system, but can also have collisions (not good for security).
  • This does not necessarily identify nor eliminate all duplicates, but MD5 can help address most of them.
  • People may continue creating and acquiring duplicate assets, but deduplication on a DAM system will act as a stopgap against additional duplicates being introduced to the DAM.

How do you use MD5 on assets?

You could…

  • Run MD5 on all assets already in the DAM (dedupe existing DAM assets)
  • Run MD5 on all assets to be uploaded on an ongoing basis and compare those checksums to the checksums of assets already in the DAM (dedupe all asset uploads against existing DAM assets; see the sketch after the note below)**

**Note that this may, depending on the DAM system, require one of the following:

  • A configuration of the DAM (varies among DAM systems) if it already exists as a feature
  • A customization to the DAM so this process is automated upon upload (if it is not an available feature of the DAM system)
  • A manual effort prior to upload to the DAM, or even outside of the DAM (which may catch fewer duplicates if neither the customization nor the configuration is available).
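
Whichever route applies, the logic of the upload-time check itself is simple. A hypothetical sketch in Python (in practice the set of known checksums would come from the DAM’s metadata store, not a hard-coded set):

```python
import hashlib

# Checksums of assets already in the DAM (hypothetical values;
# a real system would query the DAM's metadata store)
existing_checksums = {"5d41402abc4b2a76b9719d911017c592"}

def should_ingest(file_bytes):
    """Return True if the upload is new, False if it duplicates an existing asset."""
    digest = hashlib.md5(file_bytes).hexdigest()
    if digest in existing_checksums:
        return False  # exact duplicate of an asset already in the DAM
    existing_checksums.add(digest)
    return True
```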

Where can we find more information about data deduplication?

  • Google it.
  • Ask DAM vendor(s) whether they have data deduplication methods in their DAM system. Many people (including vendors) may not be aware of the need for data deduplication. If your DAM vendor does not have it, ask for data deduplication to be part of their roadmap of upcoming improvements, with accompanying documentation. The more people ask, the more likely the vendor will add this to their roadmap.

Let us know when you are ready for assistance in deduplicating your digital assets for your business, or for consulting on your Digital Asset Management needs.

How do you avoid duplicate assets in a DAM?

Author: Henrik de Gyor

Consultant. Podcaster. Writer.

10 thoughts on “How do I avoid duplicate assets in a DAM?”

  1. Very, very good. Thank you, Henrik.

  2. Very good article. However, wouldn’t it be time for enterprise DAM systems to have an integrated digital master approach, so that orphans only exist “virtually”? That would not only decrease the overall management of assets but also secure asset integrity.

  3. What about video clip duplication in a DAM? When we deal with video clips, the digital files are not the same. Even when you take a video clip and compress it twice using the same compression module, you will get two different digital files; the checksums of the files will not be equal.

  4. Obviously, if you have two different video clips, you should get two different checksums.

    When you produce a checksum from a video clip, then compress the same video clip and try to produce a checksum, you will end up with two different checksums. Why? Remember that compression changes/removes the bits of the asset. Change the bits = get a new checksum.

    Thank you for the example.

  5. So, how does your system deal with video clip duplication if the checksums are not the same?
    How does your system deal with versions of the same clip?

  6. I would like to ask the same question as Asher. Does MD5 work with video files, in a video-based environment? Does any checksum work on video files? Too often DAM practitioners refer to DAM in a generic way, but with specific asset types and specific workflow environments in mind. Vendors do this far too often, too.

    Would checksum deduplication algorithms be useless in a mostly-video DAM environment?

    Thanks.

  7. Great question. I will do some tests based on Asher’s comments and check with the authorities on algorithms specific to video, given the clips have the same audio and video content with the same compression ratios.

    It is a choice to keep each edit of a video clip or not.

    Since most professional video editing involves compiling a variety of audio and video clips, along with some compression, into a variety of desired outputs, this will likely yield a different array of bits each time and would likely pose a challenge for a deduplication algorithm to produce the same checksum.

  8. Henrik-
    Great post. I think there are some operational/workflow issues that factor into dealing with duplicates for video. Depending on the situation, you may not, as a rule, want to keep any derivative from the master video encode in a DAM archive…conversely, you may always want to keep every derivative as a unique asset. The question becomes: what are the rules that define what a duplicate is in your environment?

  9. Actually, you would not want to call two video files “duplicates” if they had differing levels of compression. At least I wouldn’t! There is a large difference between a file compressed at 8 MB/sec for DVD-R and one compressed at 1 MB/sec for Internet playback, even though the content is the same. I would want both versions of the file because they are for different deployments. To me, being able to distinguish between two video files that are the same resolution but different compression is good!
