There are various OAuth 2 workflows available (Simon outlined 4). The most typical profile involves the use of an authorisation code and secure server-to-server communication.
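To make the authorisation-code profile concrete: it has a front-channel step (redirect the browser to the authorisation server) and a back-channel step (the site's server exchanges the returned code for a token, server-to-server, so the client secret never touches the browser). A minimal sketch, in which all endpoint URLs, client ids and parameter values are illustrative assumptions rather than our actual protocol:

```python
from urllib.parse import urlencode

def build_authorization_url(auth_base, client_id, redirect_uri, state):
    """Step 1 (front channel): URL to which the user's browser is redirected."""
    params = {
        "response_type": "code",   # selects the authorisation-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,            # CSRF protection, echoed back on the callback
    }
    return auth_base + "?" + urlencode(params)

def build_token_request(token_url, client_id, client_secret, code, redirect_uri):
    """Step 2 (back channel): server-to-server exchange of the code for a token.
    The client secret appears only here, never in the browser."""
    return {
        "url": token_url,
        "data": {
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": redirect_uri,
            "client_id": client_id,
            "client_secret": client_secret,
        },
    }

# Hypothetical values purely for illustration:
url = build_authorization_url("https://auth.example.org/authorize",
                              "uio-site",
                              "https://site.example.org/oauth/callback",
                              "xyz123")
```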
Take an inventory of all the sites we have on which UIO might be deployed, and analyse what technologies are currently used, and how they are managed by ops.
All of these sites are somewhat different. However, we plan to use the current deployment structure of the ILDH as a template for any future such sites and normalise them. ILDH currently uses nginx to serve up static content produced by docpad.
The requirements of the typical OAuth workflow mentioned above appear to militate completely against the use of any kind of fully static site. What can we do about this?
Either we produce a standard "proxy" configuration that makes it as easy as possible to deform the configuration of a static site, OR we allow the use of alternative, less secure OAuth flows (implicit grant type) for this purpose, OR both.
What compromises are possible against the implicit flow variant - how easy are they, and what are their consequences?
If you are able to get someone to visit a site that is not as it claims to be, you can "phish" for their preferences. [This can be mitigated by using HTTPS and verifying the redirect URL]
The access token is sent in the fragment of the HTTP redirect URL -- this may be exposed in browser history, or leak into server logs if it is ever copied into a query string.
Alan: Browsers these days retain a lot of history. Especially in a shared computing environment, this is an unacceptable risk.
Create a "multi-personality" central auth server proxy that can act on behalf of multiple (possibly static) "client" sites, using a proxy configuration to expose on each static client just the endpoints necessary to mediate the OAuth and prefs communication.
We would probably do this via nginx. We would need to place the HTTPS layer IN FRONT of this layer of proxying - otherwise we would be unable to inspect and interfere with traffic at the HTTP level, e.g. to add a custom header or URL field.
A common pattern is to "terminate at the edge" - where the load balancing and HTTPS termination occurs at the edge of the cluster/cloud. The suggestion is that this would be the point in our architecture where the "selective proxying" described above would occur - which then implies that the architectural impact of our "otherwise static sites" remains simply that of static sites.
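To make "terminate at the edge" plus selective proxying concrete, a sketch of the sort of nginx configuration involved -- every hostname, path, certificate location and upstream address here is an assumption, not a decision:

```nginx
# Sketch only: endpoint paths and upstream names are placeholders.
server {
    listen 443 ssl;
    server_name site.example.org;
    ssl_certificate     /etc/nginx/ssl/site.crt;
    ssl_certificate_key /etc/nginx/ssl/site.key;

    # Everything not matched below remains a plain static site
    root /var/www/site;

    # Only the OAuth and prefs endpoints are proxied inward
    location /oauth/ {
        proxy_pass http://auth-proxy;
        proxy_set_header Host $host;  # lets the multi-personality server
                                      # select the right "personality"
    }
    location /preferences/ {
        proxy_pass http://auth-proxy;
        proxy_set_header Host $host;
    }
}

upstream auth-proxy {
    server 10.0.0.5:8081;  # the multi-personality auth server
}
```

Because HTTPS terminates here, the proxy layer can inspect and rewrite traffic at the HTTP level (headers, URL fields) exactly as described above, while the site behind it stays fully static.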
This is a brilliant suggestion.
What are the implications of having the multi-personality server have central control over parts of the workflow that might impact UX with respect to logon? Especially with respect to sites that have some logon semantics of their own - such as Wordpress. Further question - do we currently host Wordpress behind an nginx layer? Answer - yes, we do.
We will need to construct both nginx-level and WordPress-plugin-level embodiments of clients of the multi-personality auth proxy server.
1. Draw up the client's portion of the protocol in terms of the URL endpoints that it expects to operate.
The existing FD server is not a prototype for this since it uses the "client credentials" workflow type.
The existing EASIT integration did this - we are not sure where this is or what it looks like.
We will meet to address TASK ONE at 12 EST on Monday.
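As a strawman for the task 1 discussion, one way of drawing up the client's endpoint surface -- every path name below is a placeholder to be replaced by whatever the meeting (or the recovered EASIT integration) settles on:

```python
# Candidate client-side endpoints -- names are placeholders, not an
# agreed protocol.
CLIENT_ENDPOINTS = {
    # Kicks off the flow by redirecting the browser to the auth server:
    "login": "/oauth/login",
    # Where the auth server redirects the browser after consent:
    "callback": "/oauth/callback",
    # Prefs fetch/store, mediated through the proxy:
    "preferences": "/preferences",
}

def is_client_endpoint(path):
    """True if a request path belongs to the OAuth/prefs surface that must
    be proxied rather than served statically."""
    return any(path == p or path.startswith(p + "/")
               for p in CLIENT_ENDPOINTS.values())
```

A table like this is also what the edge proxy configuration would be generated from, since these are precisely the locations that must not be served statically.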
2. Figure out what needs to be added to this protocol by the "edge proxy" server to provide sufficient information to the "multi-personality" proxy.
Gio: We need to make sure that the edge proxy can be conversationally stateless. Anything stored at the edge should be static info, e.g. a per-client-website secret.
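Gio's constraint might look like this at the nginx level -- the header name and client id are invented for illustration only:

```nginx
# Conversationally stateless: the only per-client state at the edge is
# static configuration such as this site's identity.
location /oauth/ {
    proxy_pass http://auth-proxy;
    proxy_set_header X-UIO-Client-Id "ildh";  # static, baked into config
    # No session state is kept here; the multi-personality server
    # correlates requests via OAuth's own state/code parameters.
}
```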
3. Implement nginx configuration that implements the edge proxy, and build it into a (Vagrant script?) that produces a standard Docker container embodying it in a configuration-driven way. It appears that this can be done purely within nginx's configuration system and will not require any custom plugins or code.
Gio: This may one day GO AWAY and be implemented in Kubernetes or some similar insane thing.
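One possible shape for the configuration-driven container in task 3, relying on the stock nginx image's behaviour of rendering ${VAR} templates found under /etc/nginx/templates at startup (file names and variables here are assumptions):

```dockerfile
# Sketch: a config-driven edge-proxy image, not a finalised build.
FROM nginx:stable
# Per-site values are substituted into this template when the
# container starts, so one image serves every client site.
COPY edge-proxy.conf.template /etc/nginx/templates/default.conf.template
ENV AUTH_PROXY_HOST=auth.internal \
    AUTH_PROXY_PORT=8081
```

This keeps the image itself generic; all per-site variation lives in environment variables supplied at deploy time, which fits the "static info only at the edge" constraint from task 2.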
4. Design and implement the "multi-personality" proxy server - together with its own Vagrant/Docker configuration etc. This will be a standard config-driven Kettle server.