This initial design explores the interaction of an author with an authoring system. It begins to answer the question: "If we know the kind of content being created, what metadata can we generate on behalf of the user?"
The first image above shows an authoring tool after a video has been uploaded. Once the upload completes, the system automatically switches on the toggles for Sounds and Visuals. The user also chooses to enable Embedded Text and Scenes with Math. Upon saving, the result appears as depicted in the second image.
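The auto-toggle behavior described above can be sketched as follows. This is a minimal illustration, assuming the toggles map onto schema.org-style access mode values ("auditory", "visual", "textOnVisual", "mathOnVisual"); the function and mapping names are hypothetical, not the actual system's API.

```python
# Default access modes inferred from the uploaded content's media type.
# A video implies both sound and visuals, so those toggles switch on
# automatically; the author can still add modes by hand.
AUTO_MODES = {
    "video": {"auditory", "visual"},   # Sounds and Visuals toggled on
    "audio": {"auditory"},
    "image": {"visual"},
}

def generate_access_modes(media_type, user_selected=()):
    """Combine system-inferred and author-chosen access modes."""
    inferred = AUTO_MODES.get(media_type, set())
    return sorted(inferred | set(user_selected))

# A video upload plus the author's manual choices from the example
# (Embedded Text and Scenes with Math):
modes = generate_access_modes("video", {"textOnVisual", "mathOnVisual"})
print(modes)  # ['auditory', 'mathOnVisual', 'textOnVisual', 'visual']
```

The point of the sketch is the division of labor: the system contributes what it can infer from the media type, and the author only supplies what the system cannot detect.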
The second image above shows a video resource as it is viewed in a content repository. Icons in the sidebar help users quickly assess whether the content meets their preferences.
From this design, we have learned that it is possible to simplify the authoring experience with respect to generated metadata. However, the design does not address how a user would interact with relational metadata (i.e., the "hasAdaptation" and "isAdaptationOf" properties).
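To make the open question concrete, the following sketch illustrates what such relational metadata might look like as linked records; the identifiers are hypothetical and the record shapes are an assumption, not part of the design above.

```python
# An original resource and one adaptation, cross-linked in the style of
# the "hasAdaptation" / "isAdaptationOf" properties. Identifiers are
# made up for illustration.
original = {
    "@id": "urn:example:video-1",
    "accessMode": ["auditory", "visual"],
    "hasAdaptation": "urn:example:transcript-1",
}

adaptation = {
    "@id": "urn:example:transcript-1",
    "accessMode": ["textual"],
    "isAdaptationOf": "urn:example:video-1",
}

# The relationship is bidirectional: each record must name the other.
# An authoring UI for this metadata would need to keep both sides in sync.
assert original["hasAdaptation"] == adaptation["@id"]
assert adaptation["isAdaptationOf"] == original["@id"]
```

The bidirectional link is what makes this harder to author than the per-resource toggles: editing one resource implies an edit to another, which a simple toggle-based form cannot express.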