
FICAM Trust Framework Solutions (TFS) Program - Updated

The FICAM Trust Framework Solutions (TFS) is the federated identity framework for the U.S. federal government. It includes guidance, processes and supporting infrastructure to enable secure and streamlined citizen- and business-facing online service delivery.

For the first time since the inception of the Program in 2009, we are releasing a comprehensive update that incorporates Agency implementation feedback, lessons learned from the operational needs of shared service initiatives such as the Federal Cloud Credential Exchange (FCCX), and changes in the private sector marketplace of identity services.

The FICAM Trust Framework Solutions Overview provides a holistic overview of the FICAM TFS Program, including:
  • Description of the components that make up the TFS Program
  • The TFS role in supporting Government-wide policy and National Strategy implementations
  • TFS and its implementation by Government Agencies
  • TFS fast-track process for Financial Institutions required to implement a Customer Identification Program by Government regulators 
  • Relationship to the FICAM Testing Program for on-premise vendor solutions that implement FICAM protocol profiles 

The components of the FICAM TFS Program are:
  • The Trust Framework Provider Adoption Process for All Levels of Assurance describes the process by which the TFS Program evaluates and adopts commercial Trust Frameworks for use by the U.S. federal government
    • Overview of the Trust Framework Adoption Process
    • Incorporation of the privacy trust criteria into the Trust Framework adoption process
    • Updated trust criteria to incorporate NIST SP-800-63-2
    • Streamlined LOA 1 Trust Criteria
    • Introduction of ongoing verification as an OPTIONAL trust criterion
    • Support for Component Identity Services, and associated standardized terminology
    • TFS Program's relationship to entities (CSPs etc.) that are assessed and evaluated by an adopted Trust Framework Provider
       
  • The Authority To Offer Services (ATOS) for FICAM TFS Approved Identity Services makes explicit the requirements that identity services need to satisfy in order to offer their services to the U.S. federal government
    • Clarification of approval decision authority of the FICAM TFS Program
    • Explicit testing and verification of service interfaces to assure conformance to approved protocols and profiles
    • Requirement that the solution provider implement the tested interfaces when offering the service to Government
    • Standards-based attribute requirements to enable identity resolution by Government relying parties at LOA 2 and greater
       
  • The Identity Scheme and Protocol Profile Adoption Process describes the process by which protocol profiles are created, adopted and used by the government to ensure that the RP application and the CSP communicate in a secure, interoperable and reliable manner.
    • Updated to allow the flexibility for Government to adopt protocol profiles created by industry, provided they meet Government needs for security, privacy and interoperability
    • Standardized assurance level URIs for use in protocol profiles
       
  • The Relying Party Guidance for Accepting Externally Issued Credentials provides guidance to Agencies on leveraging federated identity technologies to accept externally issued credentials
     
  • The E-Governance Trust Services Certificate Authority provides a certificate issuance capability that supports the federated identity use cases of Agencies that require endpoint and message level protections
     
  • The E-Governance Trust Services Metadata Services (EGTS Metadata Services), once implemented and made available, provides a trusted mechanism for the collection and distribution of metadata to enable identity federation capabilities
All of the above documents, except for the Relying Party Guidance and the EGTS CA Concept of Operations, are currently in DRAFT status while we seek feedback from our Public and Private sector stakeholders.

For those outside the U.S. federal government, there will be an opportunity to engage in a facilitated discussion and Q&A with the FICAM TFS Program Manager during the December 4, 2013 meeting of the IDESG Trust Framework and Trustmark (TFTM) Committee.

UPDATE 2/7/2014:  The updates to the FICAM TFS have been finalized and are now available.


:- by Anil John
:- Program Manager, FICAM Trust Framework Solutions

FICAM Trust Framework Solutions TFPAP Update v1.1.0

The FICAM Trust Framework Solutions (TFS) Trust Framework Provider Adoption Process (TFPAP) has been updated to v1.1.0 (PDF).
This is a point update that does not change any of the existing TFP processes but instead:
  • Acknowledges an existing internal Government process in order to recognize non-federally issued PKI providers that are cross-certified with the Federal Bridge as approved Credential Service Providers under the FICAM Trust Framework Solutions umbrella. 
  • Incorporates the Trust Framework Solutions (TFS) "branding" under FICAM. 
The relevant text that acknowledges the existing processes is the following:
The FICAM Trust Framework Solutions (TFS) covers remote electronic authentication of human users to IT systems over a network. It does not address the authentication of a person who is physically present.
The TFS is inclusive of externally issued PKI and non-PKI credentials at OMB Levels of Assurance 1, 2, 3 and 4:
  • For PKI based credentials the TFS recognizes the Federal PKI Policy Authority (FPKIPA) as a TFS approved Trust Framework Provider and will rely on its proven criteria and methodology for non-Federally issued PKI credentials. 
  • For non-PKI credentials, each Identity Provider and TFP must demonstrate trust comparable to each of five categories (registration and issuance, tokens, token and credential management, authentication process, and assertions) for each Level of Assurance at which it wishes its credentials to be trusted by government applications (including physical access control systems).
The other point to note is the establishment of the Trust Framework Solutions "branding" under FICAM to acknowledge the C2G and B2G aspects that FICAM is responsible for (FICAM in the Federal Government covers areas beyond C2G and B2G). At a high level, we are bucketing the C2G and B2G pieces under the TFS umbrella and are expecting the TFS, in the near term, to "own" the:
  1. Trust Framework Provider Adoption Process (TFPAP)
  2. The Relying Party Guidance on Accepting Externally Issued Credentials (Currently under internal review)
  3. FICAM TFS Trust Mark (Future)
:- by Anil John

Challenges in Operationalizing Privacy in Identity Federations - Part 3

Part 1 of this series discussed the data minimization principles of anonymity, unlinkability and unobservability and their relationship to identity federation. Part 2 of this series walked through a proxy architecture that provides those principles in a federated authentication system. In this blog post, I would like to expand the discussion of the proxy architecture to include user enrollment and see how the data minimization principles are affected by the need for verified attributes.

In the proxy architecture, when a user arrives for the first time at the relying party (RP) after being authenticated by the IdP/CSP, the RP knows:

  1. A trusted IdP has authenticated the user, 
  2. The Level of Assurance (LOA) of the credential used, and 
  3. The Persistent Anonymous Identifier (PAI) associated with the credential of the authenticated user. 

Assumption: As part of the identity proofing and credential issuance process, the IdP/CSP has collected and verified information about the user (which is a requirement for LOA 2+ scenarios).

The RP starts the user enrollment process by collecting from the user both a shared private piece of data (e.g. account #, access code, SSN etc.) that represents a claim of identity and a set of information (e.g. Name, Address, DOB etc.) that can be used to prove that claim. The data elements collected by the RP need to be a subset of those verified and collected by the IdP/CSP as part of the identity proofing and credential issuance process.

The RP then initiates the attribute verification process:

  • The request is made via the proxy to maintain the PAI to PPID mapping and the PPID to CID mapping
  • The request is for a MATCH/NOMATCH answer and NOT a "Give me all the attributes you have collected during identity proofing"
  • The shared private data (e.g. Account #, Access Code, SSN etc.), which is ALREADY in the RP system, is NOT sent to the IdP/CSP

PrivacyProxy AX

If the response that comes back from the IdP/CSP is positive, the RP:

  • Uses the shared private piece of data to pull up the associated user record 
  • Checks to see if the verified attributes returned match those in the user record
  • If they match, the RP links the PAI to the associated user record locator a.k.a. Program Identifier (PID)
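
To make the sequence above concrete, here is a rough Python sketch of the RP-side logic. It is a sketch only: the proxy_verify() helper, the record store and its methods are hypothetical names standing in for the profiled protocol exchange and the RP's own systems, not part of any FICAM specification.

    # Hypothetical sketch of the RP-side enrollment flow described above.
    # proxy_verify() and user_records are illustrative stand-ins.

    def proxy_verify(pai: str, claimed_attributes: dict) -> bool:
        """Forward a MATCH/NOMATCH verification request through the proxy.

        The proxy maps the PAI back to the PPID/CID known to the IdP/CSP and
        relays only a match decision -- never the attribute values themselves.
        """
        raise NotImplementedError  # transport (e.g. a profiled query) omitted

    def enroll_user(pai: str, shared_secret: str,
                    claimed_attributes: dict, user_records) -> bool:
        """Link an authenticated user's PAI to an existing RP record (PID)."""
        # 1. Ask the IdP/CSP, via the proxy, whether the user-supplied attributes
        #    match what was verified at identity proofing. The shared private
        #    data (account #, access code, SSN, ...) is NOT sent.
        if not proxy_verify(pai, claimed_attributes):
            return False
        # 2. Use the shared private data to pull up the associated local record.
        record = user_records.find_by_shared_secret(shared_secret)
        if record is None:
            return False
        # 3. Check that the verified attributes match those in the user record.
        if not record.matches(claimed_attributes):
            return False
        # 4. Link the PAI to the record locator, a.k.a. the Program Identifier.
        record.link_pai(pai)
        return True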

Critical points to note here are that the IdP still does not know which RP the user went to, the RP still does not know which IdP the user is coming from, but the Proxy now has visibility into the attributes flowing through it.  As such, it is critical to make sure that a security policy backed by an independent audit and verification regime is put in place to assure that the proxy does not collect, store or log the attribute values flowing through it.

We would be interested to hear about how this architecture can be improved or modified to enhance its privacy characteristics.



:- by Anil John

What are FICAM Technical Profiles and Identity Schemes?

A critical technology underpinning of the FICAM Trust Framework Solutions process is enabling the federal government to utilize industry standards. This blog post provides an overview of the FICAM protocol profiling work that allows the federal government to use those standards in a secure and interoperable manner.

As anyone who has been involved in technical protocol standards development will know, a finalized standard is often a compromise. In particular, there is great tension in the standards development process among the need to provide flexibility and extensibility, security and privacy, and interoperability. The result often ends up being a standards document that provides multiple ways of accomplishing the same thing, all of which are "compliant" with the standard but may not be interoperable.

FICAM Profiles and Schemes

For the federal government to utilize industry standards, they need to be widely deployed by multiple vendors, be interoperable, and meet the security and privacy policy requirements articulated by authoritative federal government bodies such as OMB, NIST, CIO Council etc.

This requires the standard to undergo a "Profiling" process that:

  • DOES NOT change the standard in any way
  • DOES take into consideration security requirements of the federal government
  • DOES take into consideration privacy requirements of the federal government
  • Locks down the MUSTs, SHOULDs, SHOULD NOTs etc. in the specification language so that there is assured interoperability between profile implementations
  • Results in a "Test-able" product

When this process was initially envisioned, we were very much focused on authentication.  As such, the end result of the profiling process was the development of "portable identity schemes" which enabled the use of identity federation protocols to convey information for the purpose of authentication.

The "FICAM Profile of SAML 2.0 for Web SSO (PDF)" and the "FICAM OpenID 2.0 Profile (PDF)" are clear examples of portable identity schemes that incorporate standards profiling. We will continue to utilize identity schemes as an item that an identity provider needs to implement in order to interoperate securely with a federal government relying party (service provider).

As our requirements have grown, we have found it necessary to expand beyond authentication to areas such as attribute exchange, authorization and more. Profiles such as the "SAML 2.0 Identifier and Protocol Profiles for BAE v2.0 (PDF)" and "SAML 2.0 Metadata Profile for BAE v2.0" stand on their own and are not authentication related.

We expect this to continue and expand in the future.

As an example, the currently underway work on the "FICAM Profile of OAUTH 2" is not an identity scheme, given that OAUTH 2 requires an additional authentication layer to convey identity information. Once the OAUTH 2 profiling is complete, we will be working to identify and profile the pieces that make up that additional identity layer. The combination may result in a FICAM approved portable identity scheme that utilizes OAUTH 2.

In short, going forward we expect to continue our work to profile protocol standards such that they are usable by themselves, as well as use profiles as building blocks to enable portable identity schemes.


:- by Anil John

How To Implement the Technical Aspects of an Identity Oracle

In the age of attributes, personal data, and data brokers, the concept of Identity Oracles and how they can help to mediate between diverse entities is something worthwhile to consider.  This blog post provides a short introduction to the Identity Oracle concept and discusses the work FICAM is starting in order to address the technical intersection of Identity Oracles and Attribute Providers via a new Backend Attribute Exchange (BAE) Protocol Profile.

The original definition of an "Identity Oracle," coined back in 2006 by Bob Blakley, the current NSTIC IDESG Plenary Chair, is:

  • An organization which derives all of its profit from collection & use of your private information…
  • And therefore treats your information as an asset…
  • And therefore protects your information by answering questions (i.e. providing meta-identity information) based on your information without disclosing your information…
  • Thus keeping both the Relying Party and you happy, while making money.

While that applies to commercial entities pretty well, let me tweak that a bit for the Government sector:

  • An organization which is the authoritative source of some of your private information…
  • And is constrained by law and policy to safeguard your information…
  • And therefore protects your information by answering questions (i.e. providing meta-identity information) based on your information, with your consent and without disclosing your information…
  • Thus keeping both You and the Relying Party happy, while enabling you to conduct safe, secure and privacy preserving online transactions

Identity Oracle
A potential technical interaction between the three entities could be:

  1. Person establishes a relationship with the Identity Oracle. The Identity Oracle provides the person with token(s) that allow the person to vouch for his relationship with the Identity Oracle in different contexts
  2. When the Person needs to conduct a transaction with a Relying Party, he presents the appropriate token, which establishes his relationship to the Identity Oracle
  3. The Relying Party asks the Identity Oracle “Am I allowed to offer service X to the Person with a token Y from You under condition Z?”. The Identity Oracle answers “Yes or No”

Conceptually, this type of question is something you would want to ask an Attribute Provider, but current protocols for attribute query and response are really not set up to enable this type of capability. So, putting aside the business and policy aspects, which are huge, a technical piece that needs to be worked out is how the interaction in step (3) above can happen using widely deployed protocols.

Based on multiple information sharing use cases that have come up, and an internal review of need and value, we have decided to address this requirement within FICAM by working to define a "XACML 3.0 Attribute Verification Profile for BAE 2.0":

BAE XACML

The intent of this effort will be to profile XACML 3.0 messages on the wire, not for authorization, but to enable the construction of a verification request and corresponding response, while keeping the message level and transport level security mechanisms consistent between this new profile and the original SAML 2.0 Identifier & Protocol Profile for BAE 2.0.


:- by Anil John

Challenges in Operationalizing Privacy in Identity Federations - Part 2

Federation implementations enabling privacy enhancing characteristics such as anonymity, unlinkability and unobservability are something that we are very interested in for Government service delivery. This blog post describes one such approach, using a proxy mechanism between the identity provider and the relying party, and articulates some of the trade-offs inherent in such an approach.

The typical identity federation scenario with an externalized Identity/Credential Provider and a Relying Party looks something like this:
Brokered AuthN

  • User's credential identifier (CID) is not released to RP. FICAM Identity Schemes require a Pairwise Pseudonymous Identifier (PPID) to limit the loss of anonymity and unlinkability; IdP keeps track of that mapping internally
  • The Program Identifier (PID) is a user record identifier that only the RP knows about, and the RP establishes the one-time PPID to PID mapping via the user enrollment process
  • The IdP knows about the RP the user is interacting with so there is no mitigation for the loss of unobservability

If implementing this using SAML 2.0, this would be the classic SAML Web SSO Sequence Diagram with a persistent identifier for NameID:

Brokered AuthN SAML
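
For readers who want to see what the persistent-identifier request looks like on the wire, below is a minimal Python sketch of a SAML 2.0 AuthnRequest whose NameIDPolicy asks for the persistent format. The entity ID and IdP URL are placeholders, and signing, encoding and bindings are omitted; this illustrates the standard SAML element, not a FICAM-specific extension.

    # Minimal sketch: a SAML 2.0 AuthnRequest requesting a persistent NameID.
    # Entity IDs and URLs are placeholders; signatures and bindings are omitted.
    import datetime
    import uuid
    import xml.etree.ElementTree as ET

    SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
    SAML = "urn:oasis:names:tc:SAML:2.0:assertion"
    PERSISTENT = "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"

    def build_authn_request(sp_entity_id: str, idp_sso_url: str) -> bytes:
        req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
            "ID": "_" + uuid.uuid4().hex,
            "Version": "2.0",
            "IssueInstant": datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
            "Destination": idp_sso_url,
        })
        issuer = ET.SubElement(req, f"{{{SAML}}}Issuer")
        issuer.text = sp_entity_id
        # Ask the IdP for a pairwise persistent identifier rather than a
        # globally correlatable one.
        ET.SubElement(req, f"{{{SAMLP}}}NameIDPolicy", {
            "Format": PERSISTENT,
            "AllowCreate": "true",
        })
        return ET.tostring(req)

    print(build_authn_request("https://rp.example.gov/sp",
                              "https://idp.example.com/sso").decode())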

One approach that would bring additional privacy enhancing characteristics into this mix is the implementation of a Proxy between the Identity Provider and the Relying Party:

Privacy Proxy

  • CID to PPID mapping is the same as before. Limits loss of anonymity and unlinkability
  • When a PPID comes into the proxy, it generates an associated Persistent Anonymous Identifier (PAI) which is then released to the RP. The proxy manages the persistent PPID to PAI mapping. 
  • Limits loss of unobservability, since IdP has no visibility into which RP the user has gone to or the identifier that they are using at that RP 
  • RP knows that a trusted IdP authenticated the user and the associated LOA level, but nothing more
  • RP manages the PAI to PID mapping
  • The proxy knows of the IdP and the RP but knows nothing other than the PPID and the PAI
  • Forensics requires coordination across IdP, Proxy and RP
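
As a rough illustration of the identifier handling described in the bullets above, here is a minimal Python sketch of the proxy's mapping responsibility. The class and key-derivation approach are illustrative assumptions (a random value stored in a table would work just as well); the point is simply that the proxy persists only pseudonymous identifier pairs.

    # Hypothetical sketch of the proxy's PPID-to-PAI mapping. The proxy sees
    # only pseudonymous identifiers: the PPID asserted by the IdP and the PAI
    # it mints for the RP. It never sees the CID held by the IdP or the PID
    # held by the RP.
    import hashlib
    import hmac

    class PrivacyProxy:
        def __init__(self, secret: bytes):
            self._secret = secret   # key used to derive PAIs (illustrative choice)
            self._mapping = {}      # persisted (PPID, RP) -> PAI table

        def pai_for(self, ppid: str, rp_id: str) -> str:
            """Return the PAI released to a given RP for an incoming PPID."""
            # Scoping the PAI per RP preserves unlinkability across RPs.
            key = (ppid, rp_id)
            pai = self._mapping.get(key)
            if pai is None:
                material = f"{ppid}|{rp_id}".encode()
                pai = hmac.new(self._secret, material, hashlib.sha256).hexdigest()
                self._mapping[key] = pai
            return pai

    proxy = PrivacyProxy(secret=b"example-only-key")
    print(proxy.pai_for("ppid-from-idp-123", "rp.example.gov"))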

As an aside, I would like to acknowledge our colleagues at TBS Canada for how they defined PAI and PID (PDF); I saw no value in coming up with another term to describe the same items, so decided to simply leverage their definitions.

If implementing this in SAML 2.0, the sequence diagram with the proxy taking on both IdP and RP roles as needed would look like:

Privacy Proxy SAML

This architecture works brilliantly as a privacy enhanced credential mediation service, and it can very easily be implemented using current technical protocols. But it does require the responsibility for identity proofing (before enrollment) to rest with the RP, so there are still some questions that need to be explored.

Any insights and lessons learned on working with this type of architecture would be appreciated.


:- by Anil John

Challenges in Operationalizing Privacy in Identity Federations

A critical part of the job of an identity/information management professional is to operationalize privacy in the systems they architect, build and deploy. Unfortunately, it is easier to make that statement than to come up with a rigorous and repeatable process to do it. It is hard because privacy is contextual in nature, and data often moves across organizational and system boundaries where shared context may not exist. This blog post is an attempt to articulate some definitions and considerations regarding operationalizing privacy within the narrow realm of identity federation.

FIPPs

NIST, in its DRAFT SP 800-130 "A Framework for Designing Cryptographic Key Management Systems" (PDF), articulates the three privacy characteristics (Section 4.7) of Anonymity, Unlinkability and Unobservability:

An information management and security policy may state that users of the secure information processing system can be assured of anonymity, unlinkability, and unobservability, if these protections are required. Anonymity assures that public data cannot be related to the owner. Unlinkability assures that two or more related events in an information processing system cannot be related to each other. Finally, unobservability assures that an observer is unable to identify or infer the identities of the parties involved in a transaction.

In his write-up of the NIST CKMS Workshop, Dr. Francisco Corella had this to say as it relates to identity federation:

"[...] One way of reducing the number of passwords to be remembered is to rely on a third-party identity provider (IdP), so that one password (presented to the IdP) can be used to authenticate to any number of relying parties. The Federal Government allows citizens to access government web sites through redirection to several Approved Identity Providers.
But third party login has privacy drawbacks. In usual implementations, anonymity is lost because the relying party learns the user’s identity at the IdP, unlinkability is lost by the use of that identity at multiple relying parties, and unobservability is lost because the IdP is informed of the user’s logins. Profiles of third-party login protocols approved for citizen login to government sites mitigate some of these drawbacks by asking the identity provider to provide different identities for the same user to different relying parties. This mitigates the loss of anonymity, and the loss of unlinkability to a certain extent. (Relying parties by themselves cannot track the user, but they can track the user in collusion with the IdP.) But the loss of unobservability is not mitigated, because the IdP is still informed of the user’s activities.
I believe that the Government should work to develop and promote authentication methods that eliminate passwords while preserving anonymity, unlinkability and unobservability."

Agreed.

The Fair Information Practice Principles (FIPPs) are a core part of the NSTIC vision for the Identity Eco-System, and more concretely, a critical part of the Federal Government's implementation of that vision (FICAM). The FICAM Identity Schemes (i.e. Protocol Profiles for Authentication) require the use of pair-wise pseudonymous identifiers to mitigate the loss of anonymity and loss of unlinkability. The loss of unobservability is still very much a concern, which is why as we move out on our FCCX initiative, we are specifically calling out the issue of "panopticality" as something that is critical for us to address.

We are investing both attention and resources to this area, but have little desire to build a closed eco-system of proprietary technologies with limited interoperability that becomes expensive technology road-kill due to lack of support in the marketplace.

We need the help of standards bodies, technology vendors and other stakeholders in making sure the ability to support these privacy characteristics are baked into the current and future generation of identity protocols and standards. Even more so, we need support for these privacy enhancing characteristics to be adopted and used in the implementations of the same protocols and technologies by the identity thought leaders in this space so that Government can leverage and utilize them as part of delivering Citizen facing services.



:- by Anil John

How To Collect and Deliver Attributes to a Relying Party for User Enrollment

In order for user enrollment to work at a Relying Party (RP), it needs a shared private piece of data that represents a claim of identity (e.g. SSN, Driver's License #) and a set of information that can be used to prove the claim. The manner in which the RP obtains the latter depends to a great degree on the identity verification model that is used. This blog post describes the steps and considerations regarding attribute movement for the purpose of user enrollment.

The steps in this process, from the perspective of the RP, are:

  1. Determine the minimal set of attributes (attribute bundle) needed to uniquely identify a person, map them into an existing record at the RP or create a new record if it does not exist
  2. If the attributes are Personally Identifiable Information (PII), implement the necessary protections (both policy and technology) needed to safeguard them during collection, transit, and at rest
  3. Determine who will collect, verify and bind the attributes to an identifier, and if the assurance level of that binding is acceptable
  4. Determine the secure mechanism needed to move the attributes from the entity that collected them to the Relying Party

1. Determine the minimal set of attributes

RP Data Collection

  • What is the minimum set of attributes needed to uniquely identify a person within a system?
  • Can we standardize this "attribute bundle" across multiple systems? e.g. Does the combination of Social Security Number, Date of Birth, State of Residence serve to uniquely identify a person? More? Less?
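
As a thought experiment on the questions above, here is a minimal Python sketch of identity resolution against an existing record set, assuming a hypothetical bundle of SSN, date of birth and state of residence; whether that is the right minimal bundle is exactly the open question.

    # Illustrative only: the bundle composition and record layout are assumptions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class AttributeBundle:
        ssn: str
        date_of_birth: str        # e.g. "1970-01-31"
        state_of_residence: str

    def resolve_identity(bundle: AttributeBundle, records: dict) -> Optional[str]:
        """Return the record locator that uniquely matches the bundle,
        or None if zero or more than one record matches."""
        matches = [pid for pid, rec in records.items()
                   if (rec["ssn"], rec["dob"], rec["state"]) ==
                      (bundle.ssn, bundle.date_of_birth, bundle.state_of_residence)]
        return matches[0] if len(matches) == 1 else None

    records = {"PID-001": {"ssn": "123-45-6789", "dob": "1970-01-31", "state": "MD"}}
    print(resolve_identity(AttributeBundle("123-45-6789", "1970-01-31", "MD"), records))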


2. Implement PII Protection on collected attributes 

  • Has a Privacy Impact Assessment been done that includes clear identification of the data needed and collected?
  • Do you have the authority to collect this information? How will you track and verify user consent?
  • Have you implemented technical and policy protections on PII information as required?


3. Who will collect and verify the attributes?

  • Will the attributes be collected and verified by an Identity/Credential Provider or a Registration/Identification Service?
  • Will the attributes be collected by the Relying Party and be verified directly or by leveraging a third party service?
  • Does the verification of attributes and the binding to an identity comply with NIST 800-63-1 (PDF) identity proofing requirements?
  • If the verified attributes provided by the IdP/CSP/AP are not sufficient, do you need to implement the ability to request the data directly from the citizen or implement an attribute request/verification capability with a third party service? 

4. How will you securely move the attributes?
Back Channel RP Data Collection

  • How will you move the verified attributes from an IdP/CSP/AP to the RP?
  • Will the attributes be sent every time by the IdP/CSP/AP to the RP? Is there a mechanism to provide a hint regarding first time enrollment?
  • Does support for using an out-of-band attribute call to the IdP/CSP/AP exist with all entities in the flow? If needed, what will you use as the identifier?
  • Does the ability to capture and pass consent regarding attribute release exist?


One other attribute movement mechanism, often used when an Enterprise acts as the IdP, is to provision the RP out of band via some sort of Identity Bridge mechanism. That particular use case comes into play when you are connecting the Enterprise to a SaaS Provider, and it is out of scope here.

Feedback on how you are implementing these types of capabilities in your Enterprise would be appreciated. Are there additional or different approaches or considerations?


:- by Anil John

Attributes Anytime, Anywhere. Extending BAE to Support New Protocols

The Backend Attribute Exchange (BAE) Capability implements a pure Attribute Provider and, by deliberate design, does not provide any authentication functionality. The current technical implementation of the BAE supports a secure FICAM Profile of SAML 2.0 Assertion Query and Response (PDF) which is bound to SOAP.   In this blog post, and as a thought exercise, I am going to walk through some of the approaches, considerations and use cases in how we could extend the BAE to support additional protocols for attribute exchange.

BAE Future Protocols?

XACML

XACML is a protocol that is well understood by the Government community and at v3.0 is a mature standard that has support in multiple COTS products. The use case that I am envisioning is driven more by the need to provide a capability to verify self-asserted attributes rather than pure attribute retrieval:

  1. An entity needs to ask a question and asserts a set of attributes in support of that question, e.g. "Is this person allowed to drive a car in Maryland?" + data found on a Driver's License
  2. There are privacy and/or data security concerns regarding the attributes such that the attribute provider cannot respond with the verified attribute values from the authoritative source
  3. The attribute provider responds with a boolean "Yes|No" and clarification/error data as appropriate

In order to accomplish this, you could profile XACML messages on the wire, NOT for Authorization, but to construct a verification request and a corresponding response given that XACML 3.0 provides:

  • The ability to send multiple attributes and values in and the ability to get a Yes|No or a Match|NoMatch decision back
  • Attribute Categories (pre-defined and custom) for <Subject>, which can carry the self-asserted attributes of the subject, and <Resource>, which can be used to route the request to the appropriate authoritative sources
  • The request message allows for capturing the consent of the <Subject> for attribute release
  • XACML Advice that can be returned to inform the requester of errors (we don't want to use Obligations here, given that the standard requires that the PEP must discharge obligations)
  • Ability to layer in cross-cutting security functionality, at both the message and transport level using existing infrastructure

The BAE could potentially support this as an additional interface that can in effect act as the technical pieces of an Identity Oracle.
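
To illustrate the shape of such an exchange, here is a hedged Python sketch of a verification request and response. It models the messages as plain data rather than reproducing the XACML 3.0 XML schema; the two category URIs are the standard XACML subject and resource categories, but everything else (attribute names, the DMV routing hint, the Match|NoMatch decision strings) is illustrative and not a published profile.

    # Conceptual sketch of a verification request/response in the spirit of the
    # XACML use case above. Attribute names and values are illustrative only.

    ACCESS_SUBJECT = "urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
    RESOURCE = "urn:oasis:names:tc:xacml:3.0:attribute-category:resource"

    def build_verification_request(subject_attrs: dict, authoritative_source: str,
                                   consent_reference: str) -> dict:
        """Assemble a request asserting self-claimed attributes, with routing info."""
        return {
            ACCESS_SUBJECT: {
                # Self-asserted attributes to be verified (e.g. driver's license data)
                **subject_attrs,
                "consent-reference": consent_reference,   # captured consent
            },
            RESOURCE: {
                # Used only to route the query to the right authoritative source
                "authoritative-source": authoritative_source,
            },
        }

    def is_verified(response: dict) -> bool:
        """The provider answers Match/NoMatch; Advice may carry error details."""
        if response.get("advice"):
            print("advice:", response["advice"])
        return response.get("decision") == "Match"

    request = build_verification_request(
        {"family-name": "John", "license-number": "XX-XXXXXXX", "state": "MD"},
        authoritative_source="state-dmv",
        consent_reference="consent-token-123")
    print(is_verified({"decision": "Match", "advice": []}))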

OAUTH 2

OAUTH 2 is a new protocol which, in my mind, has relevance to the Government community because of how it could be utilized to layer identity into mobile devices. This use case is more about implementing pure Attribute Provider functionality using a profile of OAUTH 2 rather than supporting the full OAUTH 2 IdP functionality.

If you take a look at the work that has been done on OpenID Connect (OIDC) as an example, they have defined what is called a UserInfo Endpoint. This endpoint is simply an OAUTH 2 Protected Resource with some specific communication semantics:

  • It requires that an Access Token be sent to it (i.e. UserInfo Request is sent as an OAUTH2 Bearer Token)
  • It returns attributes in cleartext JSON or if needed as a signed/encrypted JWT (i.e. UserInfo Response)
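
As a rough sketch of those semantics, assuming a hypothetical endpoint URL and a previously obtained access token, the call is just the generic OAUTH 2 bearer-token pattern; this is not a FICAM profile, and token acquisition, error handling and signed/encrypted JWT responses are omitted.

    # Minimal sketch of calling an OAUTH 2 protected attribute endpoint in the
    # UserInfo style. The URL and token values are placeholders.
    import requests

    def fetch_attributes(userinfo_url: str, access_token: str) -> dict:
        """Send the access token as a Bearer credential and return the JSON claims."""
        resp = requests.get(
            userinfo_url,
            headers={"Authorization": f"Bearer {access_token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()   # cleartext JSON claims; a JWT would need decoding

    claims = fetch_attributes("https://attribute-provider.example/userinfo",
                              access_token="example-access-token")
    print(claims.get("sub"))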

One thing I am currently not sure about is if the OpenID Connect specification constrains in any way the implementation of the UserInfo Endpoint to the OIDC Identity Provider (i.e. the entity that actually authenticates the end user), or if it in practice can provide a flow/ability to support a "stand-alone" UserInfo endpoint.

The BAE could potentially support this as an additional Attribute Provider interface, and depending on the Authorization Server (OpenID Connect) or Authorization Manager (UMA) based OAUTH 2 flows, could support the appropriate semantics in the request and response.

Comments and perspectives on both are welcome!


:- by Anil John

What is new with the BAE Operational Deployment?

GSA OGP, together with our partner PM-ISE, is moving out on the operational deployment of the FICAM Backend Attribute Exchange (BAE). The PM-ISE blog post "A Detailed Test Scenario for our Law Enforcement Backend Attribute Exchange Pilot" gives details about our primary use case. In this blog post, I am going to map those business and information sharing aspects to some of the technical details of the deployment.

The operational scenario looks like this:

BAE Operational Pilot Flow
In any such scenario, there are always three parties to the transaction. An Identity Provider, a Relying Party and an Attribute Provider.

"A law enforcement officer in Pennsylvania has an account on the Pennsylvania J-Net network, which supports many public safety, law enforcement, and state government agencies and mission communities. To obtain the J-Net account, the officer’s identity and status were vetted and various facts about identity, assignment, and qualifications were captured and maintained in the J-Net user database.

In the course of an investigation, the officer needs to access data on the RISS-Net system."

J-NET is the Identity Provider and RISS-Net Portal is the Relying Party.

"… both J-Net and RISS-Net are members of the National Information Exchange Federation (NIEF), RISS-Net can accept electronic credentials from J-Net once the officer logs into a J-Net account and is authenticated."

One of the primary reasons we are interested in this scenario (beyond the information sharing value of the deployed capability) is the existence of the NIEF Federation. NIEF already counts J-NET and RISS-NET as members, which in turn means that there is an existing framework for collaboration and information sharing between them that we can plug into, and enhance, with the BAE.

One of the critical technical benefits of this relationship, within the context of the BAE deployment, is that the Federation Identifier has been standardized across NIEF (gfipm:2.0:user:FederationId).

When we created the "SAML 2.0 Identifier and Protocol Profiles for BAE v2.0" (PDF), we deliberately separated out the profiling of the identifiers and the profiling of the protocols precisely so that we could "snap-in" new identifiers, without impacting the security of the protocol. We also put in some specific wording that allowed this flexibility; "It is expected that if credentials with [identifiers] other than what is profiled in this document are used in a BAE implementation, the Federation Operator governing that Community of Interest will define the profiles necessary for that credential type."

As part of this pilot, we will be defining an identifier profile for the NIEF Federation Identifier that will be used in the attribute query made to the BAE Attribute Service.
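
Conceptually, the query carrying that identifier might look something like the sketch below. Only the gfipm:2.0:user:FederationId name comes from the pilot; the requested attribute name, the structure and the transport are illustrative placeholders, since the identifier profile itself is still being defined.

    # Conceptual sketch of a BAE attribute query keyed on the NIEF Federation
    # Identifier. Everything except "gfipm:2.0:user:FederationId" is a placeholder.

    def build_bae_query(federation_id: str) -> dict:
        return {
            "subject": {
                # Identifier profile under definition as part of the pilot
                "identifier_type": "gfipm:2.0:user:FederationId",
                "identifier_value": federation_id,
            },
            "requested_attributes": [
                # e.g. 28 CFR Part 23 training status held by BJA
                "28CFRPart23TrainingStatus",
            ],
        }

    print(build_bae_query("example-federation-id"))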

"RISS-Net has defined a policy for access to their information resources, which is expressed in terms of specific characteristics (“attributes”) of authenticated users. The RISS-Net policy requires that a user is certified as a “Law Enforcement Officer”, and has the necessary 28CFRPart 23 training."

The key to keep in mind is that the existing NIEF SAML Federation and the supporting information sharing framework already allows J-NET to assert the "Law Enforcement Officer" (LEO) attributes for their members when they go to access the RISS-Net Portal.

"… although the officer was trained on 28CFRPart23 in a course offered online by the Bureau of Justice Assistance (BJA), this fact is not part of the officer’s J-Net’s record (28CFRPart23 training status is not one of the facts gathered in their vetting process). Thus J-Net cannot provide all the credentials required by RISS-Net for access to the needed data."

And this is the critical value add for this pilot! There is additional information locked up within RISS-Net that can only be accessed if the 28CFRPart23 attribute is provided. J-Net is not able to assert this, but BJA as the authoritative attribute source can. And we are utilizing the BAE Shared Service Infrastructure deployed at GSA to provide them the capability to do so.

An item that we are still exploring is if the information that is available from the NIEF Federation Identifier as well as the J-NET Attribute Assertion gives enough information such that we can uniquely identify an existing record of a trained LEO at BJA. This is still an open question and is critical in making this work.

As you may have noted, I keep calling the deployment of the BAE Infrastructure at GSA a "Shared Service Infrastructure". That is a deliberate choice of words and I will expand on that in the future, especially given that this is not our only pilot use case for the BAE deployment!


:- by Anil John

What is new in the FICAM Trust Framework Provider Adoption Process?

The FICAM Trust Framework Provider Adoption Process (TFPAP) is the mechanism used by the Government to leverage industry-based credentials that citizens already have for use at Government web sites.

The current version of the Trust Framework Provider Adoption Process (PDF) was finalized in 2009. Since that time there has been great progress in E-Government activities, such as the launching of the National Strategy for Trusted Identities in Cyberspace (NSTIC) and the decision to move out on the FCCX initiative.

Input from Agencies that desire to deliver higher value Government to Citizen services combined with the increasing maturity and practical experience around credential and identity proofing offerings for higher Levels of Assurance are factors that affect this process.

To assure that the TFPAP is keeping pace with policy, technology and process advancements, we are starting the work needed to update the Trust Framework Provider Adoption Process. Some of the items we expect to address as part of this update include:

  • Bringing all externally issued credentials from LOA 1 to 4, both non-PKI and PKI (i.e. PIV-I and Medium/HW credentials), under the TFPAP so that there is a consistent policy and guidance about how Agencies can best utilize these externally issued credentials. 
  • Privacy Guidance, which was separately developed by FICAM, will be updated and integrated directly into the new TFPAP.
  • Exploring how best to bring the TFPAP to bear on the Identity Provider / Attribute Provider / Relying Party aspects individually, and together.
  • Integrating a robust and ongoing Test and Evaluation program into the TFPAP
  • More...

Ultimately we are looking to make the TFPAP a more agile process and will be working with multiple stakeholders including, and especially, our existing approved Trust Framework Providers. The goal, as always, is to assure that we meet the needs of both Citizens and Agencies that seek to leverage these externally issued credentials.



:- by Anil John

RFI/RFP Language for Federation Solutions and Identity Proofing Solutions

As noted in my earlier blog post "Comply with Requirements Quickly and Easily with RFI and RFP Templates", FICAM is working to make it easier for Agencies to align with OMB/NIST/FICAM policies. Given below is recommended language, aligned with policy, for incorporation into Agency RFIs and RFPs. The language covers both identity federation solutions, when the Agency is acting as a relying party, as well as identity proofing solutions.

Identity Federation Solution for Agency as Relying Party

Details: A federation solution is typically integrated with an Agency web application and needs to support both government issued credentials and approved non-government issued credentials. Government issued credentials in this case are Agency issued PIV Cards; approved non-government credentials include PIV-I and those governed by the FICAM Trust Framework Solutions Process.

Identity Proofing Service

  • MUST have an identity proofing service capable of implementing [remote and/or in-person] identity proofing processes at [OMB M-04-04 LOA Level(s) here] per NIST SP 800-63-1

Details: NIST SP 800-63-1(PDF) is the authoritative document that provides information on the technical controls and approaches that an Agency must use for remote as well as in-person identity proofing requirements from LOA 1-4. Currently, FICAM does not have a certification process for a stand-alone identity proofing capability; current FICAM certification, via the Trust Framework Adoption Process, applies to a combined identity proofing-credential issuance solution. As such the requirements levied on an Identity Proofing service are based on the foundational requirements that all US Government Agencies must follow in complying with NIST Guidance.

Do keep in mind the following:

  • The focus above is on the technical bits-n-bytes
  • The above is just a starting point; Agencies are free to modify and add on other requirements as needed
  • The above is subject to change based on new and/or updated policies



:- by Anil John

GSA OGP Announces an Industry Day on Federal Federated Identity Solutions

Earlier this year, the White House convened the Federal Cloud Credential Exchange (FCCX) Tiger Team comprised of several federal agencies that have a vital need to better assist the public and reduce Federal costs by moving more services online. In alignment with President Obama’s National Strategy for Trusted Identities in Cyberspace, the FCCX Tiger Team’s objective is to facilitate the Federal government’s early adoption of secure, privacy-enhancing, efficient, easy-to-use, and interoperable identity solutions.

Over the past few months, the FCCX Tiger Team has worked on the use cases and the functional requirements necessary for the operation of an identity federation capability that can be integrated with a government agency web application to support and consume a full range of digital credentials such as PIV, PIV-I, and other third party credentials issued under a FICAM-approved Trust Framework Provider.

In simple terms, the Federal government is interested in leveraging one or more commercially available cloud service providers to streamline the burden agencies face in trusting and integrating with FICAM-approved credentials.

As the next step, the FCCX Tiger Team would like to hear from industry vendors on how they might implement a privacy-enhancing, cloud-based, federated credential exchange service.

If you are a product or solutions provider that has the ability to offer these capabilities and would like to help inform the service, please submit your name and company via e-mail to icam [at] gsa [dot] gov by Wednesday, August 1, 2012 and we will provide more information about the requested written response and associated logistics.

In addition, for those who contact us, GSA Office of Governmentwide Policy (GSA OGP) will be holding an Industry Day on Tuesday, August 7th, 2012 (9am – 12:30pm EST) at GSA OCS, 1275 First Street NE, Washington DC, Room 1201B (NoMa-Gallaudet Station – DC Metro Red Line) to gather more information and answer questions from industry vendors regarding the FCCX initiative. We will be able to host both virtually and in person. In-person space is limited, so let us know your preference when you contact us.

As an overview, the following topics should be addressed in your written response, which will be due by 5 P.M. EDT on Monday, August 20, 2012:

  • Proposed high level architecture for enabling authentication to an Agency application using third party credentials to include:
    • Shared service operated in a cloud environment servicing multiple Agencies
    • Operation in an Agency-hosted environment
  • User interface approaches for selection of approved credentials
  • Credential registration and authentication strategies for citizens with multiple approved credentials
  • User enrollment approaches
  • Assurance level escalation approaches
  • Attribute request/consumption approaches
  • Supported protocols, profiles and schemas for creating and sending assertions
  • Abstracting and streamlining business relationships with FICAM approved credential providers at all levels of assurance
  • Preserving privacy (minimize storage of personal information and “panopticality” of the service)
  • Auditing
  • Scalability of the service
  • Cost models (pay per user or per application using tiered volume discounts, O&M)
  • Other relevant information

UPDATE (8/3/12): We've had a couple of questions about what is meant by "panopticality" above.

Within the context of FCCX it means two things:

  1. It is the ability of Credential Providers to "see" all the Service Providers to which a citizen authenticates
  2. It is the visibility that the FCCX service itself may have into the citizen information that is flowing through it


:- by Deb Gallagher (GSA) & Naomi Lefkovitz (NIST) - FCCX Tiger Team Co-Chairs

Comply with Requirements Quickly and Easily with RFI and RFP Templates

A challenge agencies face when putting out an RFI/RFP is in making sure that the intent of the policies and guidance they need to comply with comes through. From the perspective of the organizations that are responsible for policy and guidance, Agencies getting the language right in the RFI/RFP closes the loop by aligning acquisitions with standards and policy. When it comes to Federal Government Agency Identity, Credential and Access Management RFIs and RFPs, FICAM is working to make this easier for Agencies.

We have taken note of the increasing number of RFIs and RFPs for ICAM components that are going out. At the same time, we also realize that the hard-working folks who are putting these together face challenges when it comes to making sure that the language in the RFI/RFP reflects the required technical standards and policies.

Let me use language from a recent Agency RFI to discuss how we can help:

[...] requirement of integrating remote/on-line proofing functionality into the [Agency's Identity and Access Management Capability] Identity Proofing Services. To be capable of meeting this requirement, a vendor:
  • Must currently hold a Level 2 FICAM certification
  • Shall have the ability to achieve a Level 3 FICAM certification by [Future Date]
  • [More …]

The above sounds reasonable, but there is a problem; there currently is NO FICAM certification for a stand-alone identity proofing capability. FICAM certification, via our adopted Trust Framework Providers, currently applies only to a combined identity proofing and credential issuance solution. By using the language of FICAM certification above and associating it only with ID Proofing, the results end up being:

  • Confusion in the market about what exactly is being asked for
  • Limiting and/or eliminating qualified vendors who may be able to meet the actual intended requirements

Given that this is a Federal Government Agency that has to comply with OMB Levels of Assurance (LOA) requirements and the associated NIST technical implementation guidance for remote identity proofing, the solution to the above is a minor tweak to the language to convey the actual intent:
  • Must have an identity proofing service capable of implementing remote identity proofing process at LOA 2 per NIST 800-63-1
  • Shall have the ability to implement remote identity proofing processes at LOA 3 per NIST 800-63-1 by [Future Date]

So, in order to help Agencies comply up front with OMB, NIST and FICAM guidance, we are currently working on standardized technical language/templates for specific ICAM capabilities (Identity Proofing, Identity Federation etc.). Agencies will be able to easily incorporate this standard language into their RFI/RFP going forward.

If you are an Agency looking for information on ICAM components or policy for an RFI/RFP you are putting together, please feel free to contact us at icam (at) gsa (dot) gov and we would be happy to answer your questions.



:- by Anil John

If You Don't Plan For User Enrollment Now, You'll Hate Federation Later

User enrollment (a.k.a. user activation, user provisioning, account mapping) into a relying party (RP) application is one of those pesky details that is often glossed over in identity federation conversations. But this piece of the last mile integration with the relying party is fundamental to making identity federation work. This blog post describes this critical step and its components.

User Enrollment

Enrollment is defined here as the process by which a link is established between the (credential) identifier of a person and the identifier used within an RP to uniquely identify the record of that person. For the rest of this blog post, I am going to use the term Program Identifier (PID) to refer to this RP record identifier [A hat tip to our Canadian colleagues].

Especially when it comes to government services, a question that needs to be asked is whether the citizen has an existing relationship with the government agency. If one exists, the Gov RP should have the ability to use some shared private information as a starting point to establish the link between a citizen's (credential) identifier (obtained from the credential verifier) and the PID, e.g. Driver's License Number (DL#) if visiting the Motor Vehicle Administration (MVA), or Social Security Number if visiting the Social Security Administration.

It is important to note that this information is already known to the RP, used only as a claim of identity by the citizen, and providing just this information does not constitute proof of identity. i.e. When I provide my DL# to the MVA, I am saying that "Here is DL# XX-XXXXXXX; I claim that I am the Anil John you have on record as being the owner of that DL#". Before enrollment, the MVA would still need to verify that it is indeed me making this claim using an identity proofing process, and not an identity thief who has obtained my DL#.

So for enrollment to work, the RP needs two sets of data:

  1. A shared private piece of data that represents a claim of identity by the citizen
  2. A set of information that can be used to prove that claim to the satisfaction of the RP
If the identity verification is successful, the RP can establish a link between the credential identifier (OpenID PPID, SAML NameID, X.509 SubjectDN etc.) and the PID.
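
A minimal Python sketch of that linking step follows; the record store, the verify_claim() helper and the DL# example are hypothetical stand-ins for the RP's own systems and whatever identity proofing process it uses.

    # Hypothetical sketch of the enrollment linking step described above.

    def enroll(credential_identifier: str, claimed_dl_number: str,
               proof_of_identity: dict, records, verify_claim) -> bool:
        """Link a credential identifier (OpenID PPID, SAML NameID, X.509
        SubjectDN, ...) to the Program Identifier (PID) held by the RP."""
        record = records.find_by_dl_number(claimed_dl_number)  # claim of identity
        if record is None:
            return False      # no existing record to link to; see options below
        if not verify_claim(record, proof_of_identity):
            return False      # claim of identity was not proven; do not link
        record.link_credential(credential_identifier)          # credential <-> PID
        return True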

In cases where shared private information does not exist between the citizen and the agency, some options to consider are:
  1. In-person identity proofing
  2. Attestation from a trusted third-party
  3. Shared service for enrollment across multiple RPs (Privacy implications would have to be carefully worked through)
  4. No linking possible; treat as new record
Are there variations or additions to this step that I have not captured above?


:- by Anil John

How to Verify Citizen Identity Easily and Effectively

Identity verification of citizens is critical to delivering high value government services online. Differing approaches to identity verification include front-end identification, which fits well with existing Government trust models, and back-end identification, based on widely deployed commercial practices. This blog post describes these two common models and the approach and trade-offs associated with each.

One of the reasons for trying to articulate this is that I have found myself recently in multiple settings discussing protocol flows and privacy preserving crypto.  But at the end of the discussion, I often feel as though we have not asked ourselves some foundational questions regarding choices made in identity proofing and credential issuance.  This in turn has resulted in a lack of clarity around the downstream impact of those choices on privacy, security and flexibility. So here goes...

Front End ID Model 2

 

The Front-End Identification is the same as the E-Authentication model that is found in NIST SP-800-63-1 (PDF). It follows the sequence of:
  1. Person Registration/Identification
  2. Binding of Identity and Token
  3. Credential Registration/Issuance
  4. Person Enrollment at Relying Party
Approach
  • Identity proofing is done up front by a credential provider with registration function or using a separate registration authority
  • Credential incorporates assurances of identity
  • User enrollment at RP based on information available from the identity proofing process and/or other sources [Enrollment is defined here as the creation of an association between the person's identifier and the index for the person within the RP application]

Trade-Offs
  • Ability to leverage results of identity proofing across multiple relying parties
  • RPs limited to approved credentials that meet assurance criteria
  • May be harder to utilize a higher LOA credential in transactions that support anonymity and pseudonymity without "beautiful math" or other proxy mechanisms
 

Back End ID Model 2


The Back-End Identification is the E-Authentication model that is found widely deployed in the commercial space. It follows the sequence of:
  1. Credential Registration/Issuance
  2. Person Registration/Identification
  3. Binding of Identity and Token at Relying Party
  4. Person Enrollment at Relying Party
Approach
  • Credential is often little more than a token (i.e. shared secret) and may have little to no assurances of identity
  • Identity proofing done directly by the RP or by using a separate proofing capability to the level required by policy
  • User enrollment at RP based on information available from the identity proofing process and/or other sources [Enrollment is defined here as the creation of an association between the person's identifier and the index for the person within the RP application]

Trade-Offs
  • Any token and/or combination of tokens can be used depending on the “technical strength” needed
  • Proofing process can be as stringent as need be, so the information in the credential can be as anonymous or revealing as need be
  • Inability to leverage identity proofing across multiple RPs
 

Have I accurately captured the two major approaches in play right now? Are there trade-offs that you can think of that should be added to the lists above? 
 
UPDATED: 12/1/2012 with clarifying text and pictures re: where the token-identity binding takes place
 

:- by Anil John

Level of Confidence of What, When, Where and Who?

Last week's blog post by Dr. Peter Alterman on "Why LOA for Attributes Don’t Really Exist" has generated a good bit of conversation on this topic within FICAM working groups, in the Twitter-verse (@Steve_Lockstep, @independentid, @TimW2JIG, @dak3...) and in many other places. I also wanted to call out the recent release of the Kantara Initiative's "Attribute Management Discussion Group - Final Report and Recommendations" (via @IDMachines) as being relevant to this conversation as well.

One challenge with this type of discussion is to make sure that at a specific point in the conversation, we are all discussing the same topic from the same perspective. So before attempting to go further, I wanted to put together a simple framework, and hopefully a common frame of reference, to hang this discussion on:

 

"What"
  • Separate out the discussion on Attribute Providers from the discussion on individual Attributes
  • Separate out the discussion on making a decision (to trust/rely-upon/use) based on inputs provided vs making a decision (to trust/rely-upon/use) based on a "score" that has been provided
"When"
(to trust/rely-upon/use)
  • "Design time" and "Run time"
"Where"
  • Where is the calculation done (local or remote)?
  • Where is the decision (to trust/rely-upon/use) done?
"Who"
  • Party relying on attributes to make a calculation, a decision and/or use in a transaction
  • Provider, aggregator and/or re-seller of attributes
  • Value added service that takes in attributes and other information to provide results/judgements/scores based on those inputs
 

Given the above, some common themes and points that surfaced across these conversations are:
  1. Don't blur the conversations on governance/policy and score/criteria, i.e. the conversation around "This is how you will do this within a community of interest" is distinct and separate from "The criteria for evaluating an Attribute/AP is x, y and z" 
  2. Decisions/Choices regarding Attributes and Attribute Providers, while related, need to be addressed  separately ["What"] 
  3. Decision to trust/rely-upon/use is always local ["Where"], whether it is for attributes or attribute providers
  4. The decision to trust/rely-upon/use an Attribute Provider is typically a design time decision ["When"]
    1. The criteria that feeds this decision (i.e. input to a confidence in AP calculation) is typically more business/process centric e.g. security practices, data quality practices, auditing etc.
    2. There is value in standardizing the above, but it is unknown at present if this standardization can extend beyond a community of interest 
  5. Given that the decision to trust/rely-upon/use an Attribute Provider is typically made out-of-band and at design-time, it is hard to envision a use case for a run-time evaluation based on a confidence score for making a judgement for doing business with an Attribute Provider ["When"]
  6. The decision to trust/rely-upon/use an Attribute is typically a local decision at the Relying Party ["Where"]
  7. The decision to trust/rely-upon/use an Attribute is typically a run-time decision ["When"], given that some of the potential properties associated with an attribute (e.g. unique, authoritative or self-reported, time since last verified, last time changed, last time accessed, last time consented or others) may change in real time
    1. There is value in standardizing these 'attributes of an attribute'
    2. It is currently unknown if these 'attributes of an attribute' can scale beyond a specific community of interest
  8. A Relying Party may choose to directly make the calculation about an Attribute (i.e. a local confidence calculation using the 'attributes of an attribute' as input) or depend on an externally provided confidence "score" ["What"]; a minimal sketch of both options follows this list
    1. The "score" calculation may be outsourced to an external service/capability ["Where"]
    2. This choice of doing it yourself or outsourcing should be left up to the discretion of the RP based on their capabilities and risk profile ["Who"]
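To make item 8 concrete, here is a minimal sketch in Python. The metadata fields, weights and threshold below are illustrative assumptions of mine, not anything defined by the TFS Program; they simply show an RP either computing a local confidence value from the 'attributes of an attribute' or accepting an externally supplied score.

# A minimal sketch, not a specification: field names, weights and the
# threshold are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class AttributeMetadata:
    """Hypothetical 'attributes of an attribute' from item 7 above."""
    self_reported: bool          # authoritative vs. self-reported origin
    last_verified: datetime      # timezone-aware time of last verification
    consented: bool              # whether consent is currently on record

def local_confidence(meta: AttributeMetadata, now: datetime) -> float:
    """RP-local, run-time confidence calculation using illustrative weights."""
    score = 1.0
    if meta.self_reported:
        score -= 0.4             # unverified origin costs the most here
    if now - meta.last_verified > timedelta(days=365):
        score -= 0.3             # stale verification
    if not meta.consented:
        score -= 0.2             # no consent on record
    return max(score, 0.0)

def decide(meta: AttributeMetadata, external_score: Optional[float] = None,
           threshold: float = 0.6) -> bool:
    """The RP either calculates locally or relies on an outsourced 'score'."""
    score = external_score if external_score is not None \
        else local_confidence(meta, datetime.now(timezone.utc))
    return score >= threshold

Whether to keep the calculation local or to outsource it, and where to set the threshold, remains the RP's call based on its capabilities and risk profile, which is exactly the point of item 8.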
Given that we have to evaluate both Attribute Providers and Attributes, it is probably in our shared interest to come up with a common terminology for these evaluation criteria. A recommendation, taking into account many of the conversations in this space to date:
  • Attribute Provider Practice Statement (APPS) for Attribute Providers, Aggregators, Re-Sellers
  • Level of Confidence Criteria (LOCC) for Attributes

As always, this conversation is just starting... 
 
 

:- by Anil John

Why LOA for Attributes Don’t Really Exist

This is a guest post on Authoritative-ness and Attributes by Dr. Peter Alterman. Peter is the Senior Advisor to the NSTIC NPO at NIST, and a thought leader who has done pioneering and award-winning work in areas ranging from IT Security and PKI to Federated Identity. You may not always agree with Peter, but he brings a perspective that is always worth considering. [NOTE: FICAM ACAG WG has not come to any sort of consensus on this topic yet] - Anil John


 

As I have argued in public and private, I continue to believe that the concept of assigning a Level of Assurance to an attribute is bizarre: it makes real-time authorization decisions even more massively burdensome than they already are, and it does nothing but confuse both Users and Relying Parties.

The Laws of Attribute Validity

The simple, basic logic is this: First, an attribute issuer is authoritative for the attributes it issues. If you ask an issuer if a particular user’s asserted attribute is valid, you’ll get a Yes or No answer. If you ask that issuer what the LOA of the user’s attribute is, it will be confused – after all, the issuer issued it. The answer is binary: 1 or 0, T or F, Y or N. Second, a Relying Party is authoritative for determining what attributes it wants/needs for authorization and more importantly, it is authoritative for deciding what attribute authorities to trust. Again the answer is binary: 1 or 0, T or F, Y or N. Any attribute issuer that is not authoritative for the attribute it issues should not be trusted and any RP that has no policy on which attribute providers OR RESELLERS to trust won’t survive in civil court.

Secondary Source Cavil

“But wait,” the fans of attribute LOA say, “what if you ask a secondary source if that same user’s attribute is valid?” This is asking an entity that did not issue the attribute to assert its validity. In this case the RP has to decide how much it trusts the secondary source and then how much it trusts the secondary source to assert the true status of the attribute. Putting aside questions of why one would want to rely on secondary sources in the first place, implicit in this use case is the assumption that the RP has previously decided who to ask about the attribute. If the RP has decided to ask the secondary source, that is also a trust decision which, one would assume, was based on an evaluative process of some sort. After all, why would an RP choose to trust a source just a little bit? It really doesn’t make sense, and it complicates the trust calculation no end. Not to mention raising the eyebrows of both the CISSO and a corporate/agency lawyer, both very bad things.

Thus, the RP decides to trust the assertion of the secondary source. The response back to the RP from the secondary source is binary and the trust decision is binary. Some Federation Operators (or Trust Framework Providers, take your pick) may serve as repositories of trusted sources for attribute assertions as a member service; in that case, the RP would choose to trust the attribute sources of the FO/TFP explicitly. If a Federation Operator/TFP chooses not to trust certain secondary sources, it simply doesn’t add them to its white list. Member RPs that choose to trust the secondary attribute sources would do so based upon local determinations, underscoring the role of prior policy implementation.

Either directly or indirectly, an RP or a TFP makes a binary trust decision about which attribute providers to trust, and so the example reduces to the original law.
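To illustrate how little machinery this requires, here is a minimal Python sketch. The white list and source identifiers are hypothetical assumptions, not drawn from any real registry; the point is only that both decisions stay binary.

# Illustrative only: a design-time white list of trusted attribute sources
# (built directly by the RP or inherited from an FO/TFP), and a run-time
# check that stays binary end to end.
TRUSTED_ATTRIBUTE_SOURCES = {
    "https://ap.example.gov/attributes",        # trusted directly by the RP
    "https://tfp.example.org/whitelisted-ap",   # trusted via an FO/TFP white list
}

def attribute_is_valid(source: str, source_says_valid: bool) -> bool:
    # Two binary decisions: is the source trusted, and did it answer Yes?
    return source in TRUSTED_ATTRIBUTE_SOURCES and source_says_valid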

Transient Validity, aka Dynamic Attributes

Another circumstance where attribute LOA might be considered is querying about an attribute which changes rapidly in real time. One must accept that an attribute is either valid or invalid at the time of query. If temporality is of concern, that is a second attribute entirely, and a trusted timestamp must be part of the attribute validation process. A query from an online business to an end user’s bank would want to know if the user had sufficient funds to cover the transaction at the time the transaction is confirmed. At the time of the query the answer is binary, yes or no. It would also need a trusted timestamp that itself could be validated as True or False. That is, two separate attributes are required, one for content and one for time, both of which must be true for the RP to trust and therefore complete the transaction. Even for ephemeral attributes the answer is directly relevant to the state of the attribute at the time of query and that answer is binary, Y or N, the only difference being that a second trusted attribute – the timestamp – is required. The business makes a binary decision to trust that the user has the funds to pay for the purchase at the time of purchase – and of query – and concludes the sale or rejects it. The case resolves back to binary again.
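A minimal Python sketch of that conjunction, assuming a hypothetical funds check, a hypothetical freshness window, and field names that are illustrative rather than drawn from any protocol:

# Sketch only: for a fast-changing attribute the RP needs two assertions,
# content and time, and each one is validated as True or False.
from datetime import datetime, timedelta, timezone

def accept_dynamic_attribute(sufficient_funds: bool,
                             asserted_at: datetime,     # timezone-aware, from the trusted timestamp
                             timestamp_trusted: bool,
                             max_age: timedelta = timedelta(seconds=30)) -> bool:
    content_ok = sufficient_funds                       # binary: Y or N at the time of query
    time_ok = timestamp_trusted and (datetime.now(timezone.utc) - asserted_at) <= max_age
    return content_ok and time_ok                       # both must be True to complete the sale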

Obscurity of Attribute Source

Admittedly, things can get complicated when the identity of the attribute issuer is obscure, such as US Citizenship for the native-born. However, once again the RP makes an up-front decision about which source or sources it’s going to ask about that attribute. It doesn’t matter what the source is; the point is that the RP will make a decision on which source it deems authoritative and it will trust that source’s assertion of that attribute. In the citizenship example, the RP chooses two sources: the State Department, because if the user has been issued a US passport that’s a priori legal proof of citizenship, or some other governmental entity that keeps a record of the live birth in a US jurisdiction, which is another a priori legal proof of citizenship. However, if the application is a National Security RP, for example, it might query a whole host of data sources to determine if the user holds a passport from some other nation state. In addition to the attribute sources query, which in this case might get quite complex (certainly enough for the user to disconnect and pick up the phone instead), the application will have to include scripts telling it where to look and what answers to look for. And at the end of the whole process, the application is going to make a binary decision about whether to trust that the user is a US citizen or not. All that intermediate drama again resolves down to the original case: the RP makes an up-front determination of what attribute source or sources to trust, though in this one the RP builds a complicated multi-authority search-and-weigh process as part of its authorization determination.
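A stripped-down Python sketch of the simple (non National Security) case, with hypothetical source names standing in for whatever registries the RP actually selects, shows how the up-front selection and the binary outcome fit together:

# Illustrative only: the up-front choice of authoritative sources is a fixed
# list, each source's run-time reply is Y/N, and the final decision is binary.
AUTHORITATIVE_CITIZENSHIP_SOURCES = ["state_dept_passport_record", "state_birth_record"]

def is_us_citizen(replies: dict) -> bool:
    """replies maps a source name to that source's binary answer at run time."""
    # Either a priori legal proof suffices; answers from unselected sources are ignored.
    return any(replies.get(src, False) for src in AUTHORITATIVE_CITIZENSHIP_SOURCES)

# e.g. is_us_citizen({"state_dept_passport_record": True}) returns True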

RPs That Calculate Trust Scores

Many commercial RPs, especially in the financial services industry, calculate scores to determine whether to trust authentication and/or authorization. In these situations the RP is making trust calculations, weighing and scoring. Yet it is the RP that is calculating on the inputs, not calculating the inputs. It uses the authorization data and the attribute data to make authentication and/or authorization decisions with calculation code that is directly relevant to its risk mitigation strategy. In fact, this begins to look a lot like the National Security version of the Obscurity condition.

In these vexed situations, what the RP is doing is not trusting all of the attribute providers; rather, it is calculating a trust decision based upon a previously determined algorithm in which the responses from all of the untrusted providers are somehow transformed into a trusted attribute. The algorithm seems to be based upon determining a trust decision by using multiple attribute sources to reinforce each other in some predetermined way, and this method reminds me of a calculus problem, that is, integrating towards zero (risk), and perhaps that’s what the algorithm even looks like.
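As a rough illustration of that pattern, here is a minimal Python sketch in which answers from individually untrusted providers reinforce each other until the RP's residual risk is low enough to act. The weights and the residual-risk threshold are purely illustrative assumptions.

# Sketch only: each corroborating provider reduces residual risk by a
# predetermined weight; the final decision is still binary.
def corroborated_decision(provider_answers: dict, weights: dict,
                          max_residual_risk: float = 0.2) -> bool:
    """provider_answers: provider -> bool; weights: provider -> risk reduction in [0, 1]."""
    residual_risk = 1.0
    for provider, answer in provider_answers.items():
        if answer:
            # each agreeing source "integrates" the risk a step closer to zero
            residual_risk *= (1.0 - weights.get(provider, 0.0))
    return residual_risk <= max_residual_risk

# e.g. corroborated_decision({"ap1": True, "ap2": True}, {"ap1": 0.6, "ap2": 0.6}) returns True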

Attribute Probability

Colleagues who have reviewed this [position paper] in draft have pointed out that data aggregators sometimes have less high-quality (attribute) data about certain individuals, such as young people, and therefore some include a probability number along with the transmitted attribute data. While it may mean that the data carries a level of assurance assertion to the attribute authority, it is not really a level of assurance assertion to the RP. The RP, again, has chosen to trust a data aggregator as an authoritative attribute source, presumably because it has reviewed the aggregator’s business processes, accepts its model for probability ranking and chooses to incorporate that probability into its own local scoring algorithm or authorization determination process. In other words, the aggregator is deemed authoritative and its probability scoring is considered authoritative as well. This is, yet again, a binary Y or N determination.
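A minimal Python sketch of that final step, with a threshold that is purely an illustrative stand-in for whatever the RP's own risk model dictates:

# Sketch only: the binary decision to trust the aggregator comes first;
# its probability number is then just one input to the RP's local algorithm.
def accept_aggregated_attribute(aggregator_trusted: bool,
                                match_probability: float,
                                threshold: float = 0.8) -> bool:
    return aggregator_trusted and match_probability >= threshold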

Why It Matters

There are compelling operational reasons why assigning assurance levels to attribute assertions, or even to asserters, is a bad idea. Simply put, anything that complicates the architecture of the global trust infrastructure is bad, and especially bad if that complication is built on top of a failure to distinguish between a data input and local data manipulation. As the examples above illuminate, the attribute and the asserter(s) are both trusted by the RP application, while the extent to which the trusted data is reliable is what gets questioned and thus manipulated. Insisting on scoring the so-called trustworthiness of an attribute asserter is in essence assigning an attribute to an attribute, a trustworthiness attribute. The policies, practices, standards and other dangling elements necessary to deploy attributes with attributes, then interpret them and utilize them, even if such global standardization for all RP applications could be attained, constitute an unsupportable waste of resources. Even worse, it threatens to sap the momentum necessary to deploy an attribute management infrastructure even as solutions are beginning to emerge from the conference rooms around the world.

QED, Sort of

The Two Laws of Attribute Validity notwithstanding, people can – and have – created Rube Goldberg-ian use cases that require attribute LOA and have even plopped them into a deployed solution (to increase billable hours, one suspects), but they’re essentially useless. I hate to beat this dead horse, but each case I’ve listened to reduces to a binary decision. The bottom line is that the RP makes policy decisions up front about what attribute provider(s) to trust or not trust, and these individual decisions lead to binary assertions of attribute validity either directly or indirectly through local processing.

:- by Peter Alterman, Ph.D.
