Another DAM Blog

Blog about Digital Asset Management



How do I address user error?

Many people do not read instructions. They may enjoy reading what they want to read, but instructions are rarely among those preferred works of non-fiction. When was the last time you read instructions? Why? There is often the assumption and expectation that things will be easy to understand and easy to use. When you download a new software application to one of your devices, do you see piles of instructions, guides, or manuals? Not likely.

And are we expecting other people to read instructions as well? Newsflash: many do not read instructions even if their lives depend on it. Some manufacturers in certain sectors include instructions to supply warnings, notices, and legal documentation in an attempt to limit their liability in case their product or service is used incorrectly and disaster occurs. An example of this was a product mailed to homes as a free sample. This sample came in a small packet with an image of a lemon on it. People instantly assumed it was lemonade mix, so they mixed it with water and drank it without ever reading what the packet said. The organization received countless calls and complaints from potential consumers who got sick. Most of them never read that this was dishwasher soap with a new lemon scent and an added citric acid cleaning agent. Reading is either clearly overrated for the masses or a means of separating the people who want to be informed from those who are too lazy or busy to bother.

Some manufacturers do not even include instructions anymore. Why? The product or service should be easy enough that users will just want to use it. They expect user adoption to magically happen on high hopes. Maybe that works with some mobile devices and some of their respective apps through simple, smart design. Never mind the precautions, warnings, or issues that could arise. What could possibly go wrong? Users are smart enough to just know how to use it, right? Well, if you add some humans to any equation, you will get some inconsistencies, variables, and yes… errors. Sure, we can blame the:

  • poorly designed user interface (usability testing can help identify these issues)
  • lack of forethought in the system implementation, so everyone must think like the person who created it (user testing can help identify these issues as long as testers walk through all the processes and note what is missing and where)
  • real-world “whoops” that comes from avoiding the end-to-end walk-through of a solution to be sure it works

One organization had a lot of user errors, so they started focusing on the tasks that caused the errors and tracked them. This could identify system flaws needing correction. Here is how to start on the path of accountability (a rough sketch of such a log entry follows the list below). Every time an error occurred:

  • the user was tracked by name (who)
  • the type of error was tracked (what)
  • the frequency of the error was tracked (when)
  • where the issue was occurring in the system was tracked (where)
  • what the user did incorrectly to cause the error was tracked (why)
  • the recommended changes to the process were communicated and documented (how)
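
Purely as an illustration (the structure and field names below are hypothetical, not the organization's actual system), a log entry capturing the who, what, when, where, why, and how might look like this:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch of one error-log entry covering the
# who / what / when / where / why / how items listed above.
@dataclass
class ErrorLogEntry:
    user: str                  # who: the user who made the error
    error_type: str            # what: the type of error
    occurred_at: datetime      # when: frequency comes from counting entries over time
    location: str              # where: the part of the system where it occurred
    cause: str                 # why: what the user did incorrectly
    recommended_change: str    # how: the recommended, documented process change
    resolved: bool = False     # closed out once the follow-up confirms the fix

def error_count(log: list[ErrorLogEntry], error_type: str) -> int:
    """Count how many times a given type of error appears in the log."""
    return sum(1 for entry in log if entry.error_type == error_type)
```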

Every time an error happened, a template email was sent to that user and their supervisor (a sketch of assembling such a message follows the list below), which included:

  • their specific error
  • the impact of this error to other users and the system (there often was one)
  • a recommended fix (for the user to complete)
  • a set time frame to fix the error properly (one to two business days)
  • a follow-up to be sure the fix was completed, and an update to the error log closing out that particular instance of the error.
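
As a minimal sketch, assuming nothing about the organization's actual tooling (the template wording, field names, and function are illustrative only), such a notification email could be assembled like this:

```python
# Hypothetical template for the notification email; the wording and field
# names are illustrative, not the organization's actual message.
EMAIL_TEMPLATE = """\
To: {user}, {supervisor}
Subject: Action needed: {error_type}

Error: {description}
Impact on other users and the system: {impact}
Recommended fix (to be completed by you): {recommended_fix}
Please complete this fix within {deadline_days} business days.
A follow-up will confirm completion and close this entry in the error log.
"""

def build_error_email(user: str, supervisor: str, error_type: str,
                      description: str, impact: str, recommended_fix: str,
                      deadline_days: int = 2) -> str:
    """Fill in the template for one logged error."""
    return EMAIL_TEMPLATE.format(
        user=user, supervisor=supervisor, error_type=error_type,
        description=description, impact=impact,
        recommended_fix=recommended_fix, deadline_days=deadline_days,
    )

print(build_error_email(
    user="j.smith", supervisor="a.jones", error_type="Missing metadata",
    description="Asset uploaded without required keywords",
    impact="Other users cannot find the asset in search",
    recommended_fix="Add the required keywords to the asset record",
))
```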

Before implementing this error-correction process, the policy was fully documented and shared openly. As soon as the process started, error rates dropped significantly. That is the effect of accountability. Prior to this, accountability was not visible. Note that errors do not completely disappear because “perfection” is not a realistic goal for any organization. There is room for improvement in users, processes, and likely the system.

How to have MORE errors

Here are the counterpoints to all this…

  • Assume too much or just assume everything will work the way you expect it to (just like the world will continue to revolve around you)
  • Ignore all issues you encounter. Do not verbally mention nor document in writing the issues for anyone to know about.
  • Do not test thoroughly or just ignore all testing completely  (The testing fairy is coming soon. Just don’t wake up from that dream)
  • Do not verify any information down to the exact character. In fact, just do not check on anything at all
  • Do not follow specific instructions. Do not have clear, up-to-date instructions. In fact, do not have any instructions at all (see assumptions for similar results).
  • Do not explain how or why something works, or even IF it actually works. People are just supposed to know this by osmosis or by being born with this information.
  • Do not have a simple, easy to use GUI. If you really try, you could skip having a GUI completely. 
  • Ignore all usability experts and their literature. Why would you want anyone to actually use the system your company paid for?
  • Believe everyone works and thinks like you (revisit assumptions again)
  • Be sure to have extra-slow processors to make people believe the system is frozen or non-functional. It might be acceptable in some people’s minds if a simple process with a few bits of data takes a half-hour to two hours to yield the results requested.
  • Be sure to blame the end-user when the system is not working, and it is best if the results are inconsistent, just for that added bonus.
  • Confusion is always welcome. With open arms.
  • Do not document anything. When working with other companies, trust everyone freely and believe that they will document everything for you and understand it all your way. Then do not share this documentation openly.
  • Believe everything (including coding) is really easy and will automagically be completed overnight, flawlessly. Every day. With no documentation or specifications. Nor testing.
  • Every IT department can read minds. They have an app for that.
  • Eventually, everyone can read your mind.
  • Trust everyone. What could possibly go wrong? You do not need any verification either.
  • Do not plan ahead.
  • Do not train users. Ok, maybe once and believe they will remember it all. 
  • Do not supply any ongoing support for your user community. They will figure it out.
  • Errors go away if you ignore them enough. Errors do not multiply when you do this. Errors are so much fun. Dream of getting more over time and it will happen in reality.
  • Never take any vacations or breaks. It will not catch up with you in any way.

I only wish these were all so ridiculous that they never happened or were even thought of. Sadly, they do. Too often.

How do you address user error?



How do I create use cases for DAM?

A blog reader asked about how to create use cases for DAM.  I gave a presentation about this topic during a DAM conference.

What use cases did you have before DAM was part of the equation? Before you had a DAM, were your workflows documented?

All too often, use cases are not documented. In fact, they may be locked in multiple silos where each person (even within the same group) does things differently. Therefore, migrating to a workflow with DAM becomes a mystery. Without use cases, user adoption of the DAM is often lower because users do not know why, how, or when to use the DAM. Where does DAM fit in the users’ daily workflow? Use cases can also affect the choice of a DAM solution.

Use cases need to be documented and shared.

Another reason for having use cases is training for new people. How do newly hired people find out how to do their job? Are they born with this knowledge? Should an employer expect everyone to know how to use all the tools and policies of the organization to get their job done?  Not likely.

Enter a new person (new hire) to the organization. What are they supposed to do? What tools are involved? When do they use the DAM and for what purposes? Should new people operate differently than people who have been doing the same tasks for years within the same organization? Not likely, but they often do. Does each person who coaches a new person give their own version of how to do things (plus or minus a few steps)? Is this standardized? This is often due not only to a particular level of experience, but also to a lack of documentation and poor training. And we expect consistency. Somehow. Maybe by mind reading? That is not likely to happen.

When you start researching a DAM for your organization, instead of looking at shiny features, see whether it would work well with your use cases by presenting them to the vendor during a demo. Bring real assets you would likely be working with, along with real use cases. Ask the vendor to demo their solution for your use cases, using your assets and metadata, from start to finish, in front of you.

Start building use cases with what you have and how you do things today. A rough sketch of how the answers might be recorded follows the list below.

  • What do you do today?
  • How do you do it?
  • Who does what?
  • When does it happen?
  • Why is it done that way?
  • What is the process?
  • What tools are used?
  • How could this improve?
  • How can this be done more consistently?
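
Purely as an illustration (this structure and its field names are mine, not a prescribed format), the answers to these questions could be recorded in a simple, shareable record for each use case:

```python
from dataclasses import dataclass, field

# Hypothetical structure for documenting one use case; each field maps
# to one of the questions in the list above.
@dataclass
class UseCase:
    name: str                # what you do today
    roles: list[str]         # who does what (roles, not names)
    trigger: str             # when it happens
    rationale: str           # why it is done that way
    steps: list[str]         # the process, start to finish
    tools: list[str]         # tools used, including where DAM fits
    improvements: list[str] = field(default_factory=list)  # how it could improve or become more consistent

# Example entry (invented for illustration).
photo_delivery = UseCase(
    name="Deliver approved product images to the web team",
    roles=["Photographer", "Photo editor", "Web producer"],
    trigger="New product launch request",
    rationale="The web team needs final, approved imagery in specific sizes",
    steps=["Shoot", "Edit", "Approve", "Upload to DAM with metadata",
           "Web team downloads renditions"],
    tools=["Camera", "Photo editing software", "DAM", "CMS"],
)
```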

Be sure to consider the people, process, and technology (in that order) involved from start to finish. Not sure who, how, or what is involved? Ask by using…

  • Surveys
    • Online or paper form, with long-answer questions, not simply ratings
    • All roles (don’t expect 100% return, even with a prize)
    • Send to everyone including decision makers and potential DAM users doing the daily work
  • Group workshops
    • Be aware of who is talking and who is not
    • Include all group members
    • To keep extroverts from having all the say while introverts remain quiet in the corner getting frustrated, have people take turns talking so everyone contributes
  • Individual interviews of:
    • Not just senior staff, but also junior staff, for varying perspectives
    • Both computer literate and those who prefer analog
    • All roles

When reviewing who does the work, consider their role in the organization, not just their name, so you can build and scale these job functions as needed.

Who makes the initial request? Who or what takes the request? Who handles and processes the request? Where does the request go after that? And after that? And after that? (Note the pattern and fill the gaps.)

How many other people do the same task(s)? Is this redundancy to handle volume or act as a backup? Can this scale up or down today based on the amount of work to do?

What is the volume of requests? Where do the requests get filled/completed? Who does this? Who/What delivers the end product/service?

Consider the whole life cycle of a typical project, from idea to delivery. And walk through all the steps.

How much communication is involved in all this? Likely not enough.  It is not enough to lock decision makers in a room. As discussed earlier, there are different points of view to keep in mind.

Keep the communication channels open among all differing points of view.

Stay positive. When negative points need attention, laugh about them, then find a resolution.

Create roles. Envision the end result. Have a goal. Make it clear. Even try mind mapping. Simplify when in doubt. Follow through. Measure the results.

Avoid jargon and acronyms (so anyone can understand it). Be open to feedback, but have a schedule with deadlines and accountability.

However you create use cases, write them down and share them within your organization.

Let us know when you are ready for vendor neutral consulting on Digital Asset Management. We can also help you create your use cases.

How do you create use cases for DAM?