Model Curation techniques for EA – #3: Validating

In the first two articles I described techniques to make sure that your EA model content is accessible (with suggestions for tidying up and a couple of simple content hacks) and includes navigation guides.

So now users can find their way around successfully. The question is, what will they find when they get there?

In this final article about making EA models sharable, the final challenge is validating, or verifying, the model content.

VALIDATING YOUR PROJECT

1.     Explicit, verified structure

With correct naming, clear signposting and simple navigation, new users may have found their way to exactly the content they need. But if the models they find are inconsistent, all your good work might not help: new users will still be confused. If you sometimes link things together with one connector but elsewhere use a different one, or apply lots of stereotypes inconsistently, they will struggle to work out what anything means.

You need to state clearly, with examples, how your model is structured:

  • which element types and stereotypes you use,
  • which connectors, and where,
  • which standard EA attributes users should expect to see populated,
  • and which tagged values.

This is the ‘meta-model’ which your model obeys. Like the model itself, it is best explained by a set of diagrams, and better still by examples which show the model structure in action.
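One lightweight way to make such a meta-model checkable as well as readable is to capture it as data alongside the diagrams. This is a minimal sketch; all the element types, stereotypes, connectors and tag names in it are illustrative assumptions, not taken from any real model:

```python
# Sketch: a meta-model captured as data so it can be checked automatically.
# Every name below (element types, stereotypes, connectors, tags) is
# hypothetical -- substitute the conventions your own model actually uses.
METAMODEL = {
    "Component": {
        "stereotypes": {"service", "database"},
        "connectors": {"Dependency", "Realization"},
        "required_tags": {"owner", "lifecycle"},
    },
    "Class": {
        "stereotypes": set(),          # plain UML classes, no stereotype
        "connectors": {"Association", "Generalization"},
        "required_tags": {"owner"},
    },
}

def allowed_stereotypes(element_type):
    """Look up the stereotypes the meta-model permits for a type."""
    return METAMODEL.get(element_type, {}).get("stereotypes", set())
```

Because the rules live in one data structure, the same definition can drive documentation and automated checks, so the two cannot drift apart.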

For a complicated model, this is really hard to do manually: there may be hundreds or even thousands of instances of a particular element type, and you probably don’t have time to check them all by hand.

Tools do exist to make this easier: Model Expert (one I built earlier) will first tell you which meta-model you are actually using – which can sometimes be a surprise – and then guide you through finding and fixing the places where you need to do a bit more work to make it consistent. (P.S. Checking of this kind shouldn’t be done just at the end!)
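The idea of discovering the meta-model you are actually using can be sketched in a few lines. This assumes a hypothetical flat export of the repository, a list of element records with "type" and "stereotype" keys (for example, read from a CSV export of the EA model); it is not how any particular tool works:

```python
# Sketch: surface the de facto meta-model of a model from a flat export.
# The record format is an assumption: one dict per element, with "type"
# and (optionally) "stereotype" keys.
from collections import Counter

def summarise_metamodel(elements):
    """Count each (element type, stereotype) pair actually in use."""
    pairs = Counter(
        (e["type"], e.get("stereotype") or "<none>") for e in elements
    )
    return pairs.most_common()

# A toy export showing the kind of inconsistency the summary exposes:
elements = [
    {"type": "Component", "stereotype": "service"},
    {"type": "Component", "stereotype": "service"},
    {"type": "Component", "stereotype": "Service"},  # casing drift
    {"type": "Class"},                               # no stereotype
]

for (etype, stereo), count in summarise_metamodel(elements):
    print(f"{etype:10} {stereo:10} x{count}")
```

Near-duplicate rows such as "service" versus "Service" jump out of a summary like this immediately, which is exactly the sort of surprise an automated check can catch long before handover.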


2.     Trust – validating the people and the process

At the end of all this, you’re asking a new group of potential users to trust your model, and to create a critical project dependency on work they have not produced themselves.

This means they either need to trust you, who produced the original model, or trust the process which created that model.

If you know the next team, and they know you and can talk to you when they have problems, then much of the work described above can be done much faster. But you are creating a new dependency: on you, the model developer.

A more scalable approach, which means you won’t be answering emails from re-users for the next few years, is to make everything above part of a transparent, repeatable process which people can trust (which leads us on to Model Governance, a whole other topic).

Then many more people have a chance to benefit from our models.


If you have any other suggestions that make model sharing and model curation work better, please share.