
FICAM TFS TEM on Identity Resolution Needs for Online Service Delivery

The FICAM Trust Framework Solutions (TFS) Program is convening public and private sector experts in identity proofing, identity resolution and privacy for an Identity Resolution Needs for Online Service Delivery Technical Exchange Meeting (TEM) on 5/1/14 from 9:00 AM - 5:00 PM EST in Washington, DC.

REGISTRATION

Save the 5/1/14 date! In-person attendance and early registration (due to limited space) are recommended.


Registration is now closed for this event!

Event Location: GSA, 1800 F St NW, Washington, DC 20405

In-person event logistics information will be provided to registered attendees. Remote attendance information will be made available to registered attendees who are not able to attend in-person.

Questions? Please contact the FICAM TFS Program at TFS.EAO@gsa.gov

BACKGROUND

Identity attributes that are used to uniquely distinguish between individuals (versus describing individuals) are referred to as identifiers. Identity resolution is the ability to resolve identity attributes to a unique individual (e.g. no other individual has the same set of attributes) within a particular context.

Within the context of enabling high value and sensitive online government services to citizens and businesses, the ability to uniquely resolve the identity of an individual is critical to delivering government benefits, entitlements and services.

As part of the recent update to FICAM TFS, we recognized the Agency need for standardized approaches to identity resolution in our Approval process for Credential Service Providers (CSPs) and Identity Managers (IMs).

The study done by the NASPO IDPV Project, "Establishment of Core Identity Attribute Sets & Supplemental Identity Attributes – Report of the IDPV Identity Resolution Project (February 17, 2014)" is currently being used as an industry based starting point for addressing this need. The study proposed 5 equivalent attribute bundles that are sufficient to uniquely distinguish between individuals in at least 95% of cases involving the US population.
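To make the idea of an attribute bundle concrete, here is a minimal sketch, in Python, of checking whether a given bundle resolves to exactly one record within a single system. The field names are invented for illustration and are not the bundles proposed in the NASPO IDPV study.

    # Minimal sketch: does a given attribute bundle resolve to exactly one record?
    # Field names (last_name, dob, zip_code) are illustrative, not from the NASPO study.

    def resolve_identity(records, bundle):
        """Return the single record matched by the bundle, or None if zero or many match."""
        matches = [r for r in records if all(r.get(k) == v for k, v in bundle.items())]
        return matches[0] if len(matches) == 1 else None

    records = [
        {"last_name": "Smith", "dob": "1970-01-01", "zip_code": "20405"},
        {"last_name": "Smith", "dob": "1970-01-01", "zip_code": "21201"},
    ]

    # This bundle is ambiguous (two matches), so resolution fails ...
    print(resolve_identity(records, {"last_name": "Smith", "dob": "1970-01-01"}))  # None
    # ... while adding one more attribute resolves to a unique individual.
    print(resolve_identity(records, {"last_name": "Smith", "dob": "1970-01-01", "zip_code": "20405"}))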

TEM FOCUS

The FICAM TFS Program recognizes, however, that the NASPO IDPV study is a starting point, not an end state. As such, we are convening this TEM to:
  • Articulate the identity federation needs of government agencies as they relate to identity resolution, balancing identity assurance, privacy-respecting approaches and cost-effectiveness
  • Solicit feedback from participants with expertise in identity proofing and attribute management on publicly sharable data, studies and approaches that enable unique identity resolution within the U.S. population for the explicit purpose of delivering high value online government services
  • Identify shortcomings in current studies on this topic, discuss factors to mitigate them, and identify areas to focus on for near term and future research investments

REQUEST FOR DISCUSSION TOPICS and STUDIES

If you have expertise in identity resolution, identity proofing and related privacy aspects, and have data-backed research and results to share on this topic, we are interested in hearing from you. Please contact us at TFS.EAO@gsa.gov by COB 4/16/14 with your proposed discussion topic.


DRAFT AGENDA for 05/01/2014

09:00 AM - 09:30 AM Attendee Check-In
09:30 AM - 10:10 AM Welcome & TEM Overview/Goals/Level Set

10:15 AM - 11:10 AM Agency Viewpoint Panel on Identity Resolution + Privacy
[CMS, DHS, GSA, NIST, SSA, State Dept - Moderated by NIST]
11:15 AM - 11:45 AM Audience Discussion / Q&A

11:45 AM - 01:00 PM LUNCH (On your own) & NETWORKING

01:00 PM - 01:55 PM Industry Viewpoint Panel on Identity Resolution + Privacy
[CertiPath, Experian, ID/DataWeb, LexisNexis, SecureKey, Socure, Symantec - Moderated by NIST]
02:00 PM - 02:30 PM Audience Discussion / Q&A

02:30 PM - 02:45 PM BREAK

02:45 PM - 03:40 PM Joint Panel on Business Models / Cost / Innovation
[Agency & Industry Panelists - Moderated by Kantara Initiative]
03:45 PM - 04:15 PM Audience Discussion and Q&A

04:15 PM - Event Wrap-up

Sign up for our notification list @ http://www.idmanagement.gov/trust-framework-solutions to be kept updated on this and future FICAM TFS news, events and announcements.


:- by Anil John
:- Program Manager, FICAM Trust Framework Solutions

Challenges in Operationalizing Privacy in Identity Federations - Part 3

Part 1 of this series discussed the data minimization principles of anonymity, unlinkability and unobservability and their relationship to identity federation. Part 2 of this series walked through a proxy architecture that provides those principles in a federated authentication system. In this blog post, I would like to expand the discussion of the proxy architecture to include user enrollment and see how the data minimization principles are affected by the need for verified attributes.

In the proxy architecture, when a user arrives for the first time at the relying party (RP) after being authenticated by the IdP/CSP, the RP knows:

  1. A trusted IdP has authenticated the user, 
  2. The Level of Assurance (LOA) of the credential used, and 
  3. The Persistent Anonymous Identifier (PAI) associated with the credential of the authenticated user. 

Assumption: As part of the identity proofing and credential issuance process, the IdP/CSP has collected and verified information about the user (which is a requirement for LOA 2+ scenarios).

The RP starts the user enrollment process by collecting from the user both a shared private piece of data (e.g. account #, access code, SSN) that represents a claim of identity and a set of information (e.g. Name, Address, DOB) that can be used to prove that claim. The data elements collected by the RP need to be a subset of those collected and verified by the IdP/CSP as part of the identity proofing and credential issuance process.

The RP then initiates the attribute verification process:

  • The request is made via the proxy to maintain the PAI to PPID mapping and the PPID to CID mapping
  • The request is for a MATCH/NOMATCH answer and NOT a "Give me all the attributes you have collected during identity proofing"
  • The shared private data (e.g. Account #, Access Code, SSN etc.), which is ALREADY in the RP system, is NOT sent to the IdP/CSP

[Figure: PrivacyProxy AX]

If the response that comes back from the IdP/CSP is positive, the RP:

  • Uses the shared private piece of data to pull up the associated user record 
  • Checks to see if the verified attributes returned match those in the user record
  • If they match, the RP links the PAI to the associated user record locator a.k.a. Program Identifier (PID)

Critical points to note here are that the IdP still does not know which RP the user went to, and the RP still does not know which IdP the user is coming from, but the Proxy now has visibility into the attributes flowing through it. As such, it is critical to put in place a security policy, backed by an independent audit and verification regime, to ensure that the proxy does not collect, store or log the attribute values flowing through it.
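As a thought aid, here is a minimal sketch of the RP-side enrollment logic described above. It is Python pseudocode under stated assumptions, not a reference implementation: verify_via_proxy() stands in for the proxied MATCH/NOMATCH call, and all field names are hypothetical.

    # Hypothetical sketch of the RP enrollment step in the proxy architecture.
    # verify_via_proxy() stands in for the MATCH/NOMATCH attribute verification call
    # that travels RP -> Proxy -> IdP/CSP; it never returns attribute values.

    def verify_via_proxy(pai, claimed_attributes):
        """Ask the IdP/CSP (via the proxy) whether the claimed attributes match what it
        verified at identity proofing time. Returns only True (MATCH) or False (NOMATCH)."""
        ...  # proxy maps PAI -> PPID, IdP maps PPID -> CID internally

    def enroll_user(rp_records, pai, shared_secret, claimed_attributes):
        # 1. The shared private data (account #, access code, SSN, ...) is already known
        #    to the RP and is used only to locate the candidate record; it is NOT sent out.
        record = rp_records.get(shared_secret)
        if record is None:
            return None
        # 2. Ask the IdP/CSP, via the proxy, for a MATCH/NOMATCH answer on the claimed attributes.
        if not verify_via_proxy(pai, claimed_attributes):
            return None
        # 3. Check the user-supplied attributes against the RP's own record.
        if any(record.get(k) != v for k, v in claimed_attributes.items()):
            return None
        # 4. Link the PAI to the record locator (Program Identifier / PID).
        record["pai"] = pai
        return record["pid"]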

We would be interested to hear about how this architecture can be improved or modified to enhance its privacy characteristics.



:- by Anil John

How To Implement the Technical Aspects of an Identity Oracle

In the age of attributes, personal data, and data brokers, the concept of Identity Oracles and how they can help to mediate between diverse entities is something worthwhile to consider.  This blog post provides a short introduction to the Identity Oracle concept and discusses the work FICAM is starting in order to address the technical intersection of Identity Oracles and Attribute Providers via a new Backend Attribute Exchange (BAE) Protocol Profile.

The original definition of an "Identity Oracle", coined back in 2006 by Bob Blakley, the current NSTIC IDESG Plenary Chair, is:

  • An organization which derives all of its profit from collection & use of your private information…
  • And therefore treats your information as an asset…
  • And therefore protects your information by answering questions (i.e. providing meta-identity information) based on your information without disclosing your information…
  • Thus keeping both the Relying Party and you happy, while making money.

While that applies to commercial entities pretty well, let me tweak that a bit for the Government sector:

  • An organization which is the authoritative source of some of your private information…
  • And is constrained by law and policy to safeguard your information…
  • And therefore protects your information by answering questions (i.e. providing meta-identity information) based on your information, with your consent and without disclosing your information…
  • Thus keeping both You and the Relying Party happy, while enabling you to conduct safe, secure and privacy preserving online transactions

[Figure: Identity Oracle]
A potential technical interaction between the three entities could be:

  1. Person establishes a relationship with the Identity Oracle. The Identity Oracle provides the person with token(s) that allow the person to vouch for his relationship with the Identity Oracle in different contexts
  2. When the Person needs to conduct a transaction with a Relying Party, he presents the appropriate token, which establishes his relationship to the Identity Oracle
  3. The Relying Party asks the Identity Oracle “Am I allowed to offer service X to the Person with a token Y from You under condition Z?”. The Identity Oracle answers “Yes or No”

Conceptually, this type of question is something you would want to ask an Attribute Provider, but current protocols for attribute query and response are really not set up to enable this type of capability.  So putting aside the business and policy aspects, which are huge, a technical piece that needs to happen is to define how the interaction in step (3) above can happen using widely deployed protocols.

Based on multiple information sharing use cases that have come up, and an internal review of need and value, we have decided to address this requirement within FICAM by working to define a "XACML 3.0 Attribute Verification Profile for BAE 2.0":

[Figure: BAE XACML]

The intent of this effort will be to profile XACML 3.0 messages on the wire, not for authorization, but to enable the construction of a verification request and corresponding response, while keeping the message level and transport level security mechanisms consistent between this new profile and the original SAML 2.0 Identifier & Protocol Profile for BAE 2.0.
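As a rough illustration of that intent (the profile was still being defined when this was written, so treat this as a sketch only), a verification exchange patterned on an XACML 3.0 request/response might be shaped like the Python data structures below. The subject/resource grouping mirrors the XACML attribute categories; the attribute identifiers and source URN are hypothetical.

    # Rough shape of a verification exchange patterned on XACML 3.0 Request/Response.
    # The AccessSubject/Resource grouping mirrors XACML attribute categories;
    # the attribute IDs and the authoritative-source URN are hypothetical.

    verification_request = {
        "Request": {
            # Self-asserted attributes of the subject to be verified
            "AccessSubject": {
                "Attribute": [
                    {"AttributeId": "urn:example:attr:family-name", "Value": "John"},
                    {"AttributeId": "urn:example:attr:date-of-birth", "Value": "1970-01-01"},
                ]
            },
            # Resource category used to route the request to the authoritative source
            "Resource": {
                "Attribute": [
                    {"AttributeId": "urn:example:attr:authoritative-source", "Value": "urn:example:ap:mva"},
                ]
            },
        }
    }

    # The response is a decision, not the attribute values themselves:
    # "Permit" maps to MATCH, "Deny" to NOMATCH, with Advice carrying any error detail.
    verification_response = {
        "Response": {"Result": {"Decision": "Permit", "AssociatedAdvice": []}}
    }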


:- by Anil John

Challenges in Operationalizing Privacy in Identity Federations - Part 2

We are very interested in federation implementations that enable privacy enhancing characteristics such as anonymity, unlinkability and unobservability for Government service delivery. This blog post describes one such approach, using a proxy mechanism between the identity provider and the relying party, and articulates some of the trade-offs inherent in such an approach.

The typical identity federation scenario with an externalized Identity/Credential Provider and a Relying Party looks something like this:
[Figure: Brokered AuthN]

  • User's credential identifier (CID) is not released to RP. FICAM Identity Schemes require a Pairwise Pseudonymous Identifier (PPID) to limit the loss of anonymity and unlinkability; IdP keeps track of that mapping internally
  • The Program Identifier (PID) is a user record identifier that only the RP knows about, and the RP establishes the one-time PPID to PID mapping via the user enrollment process
  • The IdP knows about the RP the user is interacting with, so there is no mitigation for the loss of unobservability

If implementing this using SAML 2.0, this would be the classic SAML Web SSO Sequence Diagram with a persistent identifier for NameID:

[Figure: Brokered AuthN SAML]

One approach that would bring additional privacy enhancing characteristics into this mix is the implementation of a Proxy between the Identity Provider and the Relying Party:

[Figure: Privacy Proxy]

  • CID to PPID mapping is the same as before. Limits loss of anonymity and unlinkability
  • When a PPID comes into the proxy, it generates an associated Persistent Anonymous Identifier (PAI) which is then released to the RP. The proxy manages the persistent PPID to PAI mapping (see the sketch after this list).
  • Limits loss of unobservability, since the IdP has no visibility into which RP the user has gone to or the identifier that they are using at that RP
  • RP knows that a trusted IdP authenticated the user and the associated LOA level, but nothing more
  • RP manages the PAI to PID mapping
  • The proxy knows of the IdP and the RP but knows nothing other than the PPID and the PAI
  • Forensics requires coordination across IdP, Proxy and RP
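Here is the sketch referenced above: a minimal Python illustration of the proxy's identifier handling. The in-memory mapping store and the way the PAI is minted are assumptions for illustration only.

    import uuid

    # Illustrative proxy-side identifier mapping: PPID in (from the IdP), PAI out (to the RP).
    # The proxy persists only the PPID <-> PAI pairing; it never sees the CID or the PID.

    ppid_to_pai = {}  # in practice a persistent, protected store

    def pai_for(ppid):
        """Return the persistent anonymous identifier for a PPID, minting one on first use."""
        if ppid not in ppid_to_pai:
            ppid_to_pai[ppid] = str(uuid.uuid4())  # opaque, unrelated to the PPID value
        return ppid_to_pai[ppid]

    # Same PPID always yields the same PAI, so the RP sees a stable identifier
    # without learning which IdP issued the underlying credential.
    assert pai_for("ppid-123") == pai_for("ppid-123")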

As an aside, I would like to acknowledge our colleagues at TBS Canada for how they defined PAI and PID (PDF); I saw no value in coming up with another term to describe the same items, so decided to simply leverage their definitions.

If implementing this in SAML 2.0, the sequence diagram with the proxy taking on both IdP and RP roles as needed would look like:

[Figure: Privacy Proxy SAML]

This architecture works brilliantly as a privacy enhanced credential mediation service and it can very easily be implemented using current technical protocols. But it does require the responsibility for identity proofing (before enrollment) to rest with the RP. So some questions that need to be explored are:

Any insights and lessons learned on working with this type of architecture would be appreciated.


:- by Anil John

How To Collect and Deliver Attributes to a Relying Party for User Enrollment

In order for user enrollment to work at a Relying Party (RP) it needs a shared private piece of data that represents a claim of identity (e.g. SSN, Drivers License #) and a set of information that can be used to prove the claim. The manner in which the RP obtains the latter depends to a great degree on the identity verification model that is used. This blog post describes the steps and considerations regarding attribute movement for the purpose of user enrollment.

The steps in this process, from the perspective of the RP, are:

  1. Determine the minimal set of attributes (attribute bundle) needed to uniquely identify a person, map them into an existing record at the RP or create a new record if it does not exist
  2. If the attributes are Personally Identifiable Information (PII), implement the necessary protections (both policy and technology) needed to safeguard them during collection, transit, and at rest
  3. Determine who will collect, verify and bind the attributes to an identifier, and if the assurance level of that binding is acceptable
  4. Determine the secure mechanism needed to move the attributes from the entity that collected them to the Relying Party

1. Determine the minimal set of attributes

[Figure: RP Data Collection]
  • What is the minimum set of attributes needed to uniquely identify a person within a system?
  • Can we standardize this "attribute bundle" across multiple systems? e.g. Does the combination of Social Security Number, Date of Birth, State of Residence serve to uniquely identify a person? More? Less?


2. Implement PII Protection on collected attributes 

  • Has a Privacy Impact Assessment been done that includes clear identification of the data needed and collected?
  • Do you have the authority to collect this information? How will you track and verify user consent?
  • Have you implemented technical and policy protections on PII information as required?


3. Who will collect and verify the attributes?

  • Will the attributes be collected and verified by an Identity/Credential Provider or a Registration/Identification Service?
  • Will the attributes be collected by the Relying Party and be verified directly or by leveraging a third party service?
  • Does the verification of attributes and the binding to an identity comply with NIST 800-63-1 (PDF) identity proofing requirements?
  • If the verified attributes provided by the IdP/CSP/AP are not sufficient, do you need to implement the ability to request the data directly from the citizen or implement an attribute request/verification capability with a third party service? 

4. How will you securely move the attributes?

[Figure: Back Channel RP Data Collection]
  • How will you move the verified attributes from an IdP/CSP/AP to the RP?
  • Will the attributes be sent every time by the IdP/CSP/AP to the RP? Is there a mechanism to provide a hint regarding first time enrollment?
  • Does support for using an out-of-band attribute call to the IdP/CSP/AP exist with all entities in the flow? If needed, what will you use as the identifier? (A simplified sketch of such a call follows this list.)
  • Does the ability to capture and pass consent regarding attribute release exist?
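The simplified sketch referenced above: a hedged Python illustration of a back channel attribute request from the RP to an IdP/CSP/AP. The endpoint, parameter names and JSON shape are placeholders; a production exchange would be a signed, profiled protocol message (e.g. a SAML attribute query) rather than this bare HTTP call.

    import requests  # illustrative only; the real exchange would be signed SAML over SOAP/TLS

    def fetch_enrollment_attributes(ap_endpoint, subject_identifier, requested, consent_token):
        """Back channel attribute request from the RP to an IdP/CSP/AP.
        All parameter names and the payload shape are placeholders, not a defined profile."""
        payload = {
            "subject": subject_identifier,   # e.g. the identifier agreed for this federation
            "attributes": requested,         # e.g. ["name", "address", "dob"]
            "consent": consent_token,        # evidence that the user consented to release
        }
        response = requests.post(ap_endpoint, json=payload, timeout=10)
        response.raise_for_status()
        return response.json()               # verified attribute values used for enrollment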


One other attribute movement mechanism, often used when the Enterprise acts as an IdP, is to provision an RP out of band via some sort of Identity Bridge mechanism. That particular use case comes into play when you are connecting the Enterprise to a SaaS Provider, and it is out of scope here.

Feedback on how you are implementing these types of capabilities in your Enterprise would be appreciated. Are there additional or different approaches or considerations?


:- by Anil John

Attributes Anytime, Anywhere. Extending BAE to Support New Protocols

The Backend Attribute Exchange (BAE) Capability implements a pure Attribute Provider and, by deliberate design, does not provide any authentication functionality. The current technical implementation of the BAE supports a secure FICAM Profile of SAML 2.0 Assertion Query and Response (PDF) which is bound to SOAP.   In this blog post, and as a thought exercise, I am going to walk through some of the approaches, considerations and use cases in how we could extend the BAE to support additional protocols for attribute exchange.

[Figure: BAE Future Protocols?]

XACML

XACML is a protocol that is well understood by the Government community and at v3.0 is a mature standard that has support in multiple COTS products. The use case that I am envisioning is driven more by the need to provide a capability to verify self-asserted attributes rather than pure attribute retrieval:

  1. An entity needs to ask a question and asserts a set of attributes in support of that question, e.g. "Is this person allowed to drive a car in Maryland?" plus data found on a Driver's License
  2. There are privacy and/or data security concerns regarding the attributes such that the attribute provider cannot respond with the verified attribute values from the authoritative source
  3. The attribute provider responds with a boolean "Yes|No" and clarification/error data as appropriate

In order to accomplish this, you could profile XACML messages on the wire, NOT for Authorization, but to construct a verification request and a corresponding response given that XACML 3.0 provides:

  • The ability to send multiple attributes and values in and the ability to get a Yes|No or a Match|NoMatch decision back
  • Attribute Categories (pre-defined and custom) for <Subject>, which can carry the self-asserted attributes of the subject, and <Resource>, which can be used to route the request to the appropriate authoritative sources
  • The request message allows for capturing the consent of the <Subject> for attribute release
  • XACML Advice that can be returned to inform the requester of errors (Don't want to use obligations here, given that the standard requires that the PEP must discharge obligations)
  • Ability to layer in cross-cutting security functionality, at both the message and transport level using existing infrastructure

The BAE could potentially support this as an additional interface that can in effect act as the technical pieces of an Identity Oracle.

OAUTH 2

OAUTH 2 is a new protocol which, in my mind, has relevance to the Government community because of how it could be utilized to layer identity into mobile devices. This use case is more about implementing pure Attribute Provider functionality using a profile of OAUTH 2 rather than supporting the full OAUTH 2 IdP functionality.

If you take a look at the work that has been done on OpenID Connect (OIDC) as an example, they have defined what is called a UserInfo Endpoint. This endpoint is simply an OAUTH 2 Protected Resource with some specific communication semantics:

  • It requires that an Access Token be sent to it (i.e. UserInfo Request is sent as an OAUTH2 Bearer Token)
  • It returns attributes in cleartext JSON or if needed as a signed/encrypted JWT (i.e. UserInfo Response)

One thing I am currently not sure about is whether the OpenID Connect specification constrains in any way the implementation of the UserInfo Endpoint to the OIDC Identity Provider (i.e. the entity that actually authenticates the end user), or whether it can in practice support a "stand-alone" UserInfo endpoint.
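For what it is worth, the mechanics of calling a UserInfo endpoint are straightforward. Here is a minimal Python sketch; the endpoint URL is a placeholder, and token acquisition, discovery and JWT validation are out of scope.

    import requests

    def fetch_userinfo(userinfo_endpoint, access_token):
        """Call an OAUTH 2 protected UserInfo endpoint with a bearer access token.
        The endpoint URL used below is a placeholder."""
        response = requests.get(
            userinfo_endpoint,
            headers={"Authorization": f"Bearer {access_token}"},
            timeout=10,
        )
        response.raise_for_status()
        # Per OpenID Connect, the response is a JSON object of claims
        # (or a signed/encrypted JWT, which would need verification instead).
        return response.json()

    # Example usage (placeholder endpoint and token):
    # claims = fetch_userinfo("https://op.example.gov/userinfo", access_token)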

The BAE could potentially support this as an additional Attribute Provider interface, and depending on the Authorization Server (OpenID Connect) or Authorization Manager (UMA) based OAUTH 2 flows, could support the appropriate semantics in the request and response.

Comments and perspectives on both are welcome!


:- by Anil John

From AAES to BAE - Implementing Collection and Sharing of Identity Data

The Federal Identity, Credential and Access Management (FICAM) Roadmap and Implementation Guidance (PDF) calls out the need to implement the ability to streamline the collection and sharing of digital identity data (Initiative 5). The Authoritative Attribute Exchange Services (AAES) is the architectural construct shown in the Roadmap as the mechanism that can implement this capability. This blog post describes the capabilities needed in an AAES and outlines a concrete method for implementing it: deploying a Backend Attribute Exchange (BAE) infrastructure.

The AAES is a point of architectural abstraction between authoritative sources of identity information and the systems and applications that need that information.

[Figure: FICAM AAES]

At a high level, you can separate the functional requirements of an AAES into two buckets:

Authoritative Attribute Manager
  • Correlate attributes from various attribute sources (a toy sketch of this role follows the lists below)
  • De-conflict discrepancies across attribute sources
  • Implement a data model for entity attributes
  • Provide a consolidated view of the pieces of an entity gathered from multiple sources

Authoritative Attribute Distributor
  • Primary point of query for systems and applications
  • Provide a customized and tailored view of data
  • Support requests for attributes from both internal and external (to the organization standing up the AAES) consumers
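The toy sketch referenced above, illustrating the correlate/de-conflict/consolidate responsibilities of the Manager role in Python. The source names and the precedence rule are invented for this example; a real deployment would provide this through a virtual/meta directory engine.

    # Toy illustration of aggregation/join and de-confliction across attribute sources.
    # Source names and the precedence order are invented for this example.

    SOURCE_PRECEDENCE = ["hr_system", "training_system", "directory"]  # most authoritative first

    def consolidated_view(entity_id, sources):
        """Merge one entity's attributes from several sources, letting the more
        authoritative source win when two sources disagree."""
        merged = {}
        for name in reversed(SOURCE_PRECEDENCE):       # apply least authoritative first
            record = sources.get(name, {}).get(entity_id, {})
            merged.update(record)                       # later (more authoritative) overwrites
        return merged

    sources = {
        "directory": {"e-123": {"surname": "Smyth", "email": "a@agency.gov"}},
        "hr_system": {"e-123": {"surname": "Smith", "employee_id": "123"}},
    }
    print(consolidated_view("e-123", sources))  # surname "Smith" wins over "Smyth"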


In order to meet these requirements, the implementation would need to provide capabilities "in the middle" such as Aggregation & Join, Mapping & Transformation, Routing & Load Balancing, Security & Audit and Local Storage (for caching) while providing standardized interfaces and connectors to applications and data sources.

A combination of a Virtual/Meta Directory Engine and a XML Security Gateway provides such a mix of capabilities:

[Figure: FICAM AAES Implementation]
The implementation of such an infrastructure is something we now have extensive experience with, from a combination of prototypes and proof-of-concepts, end-to-end pilots, as well as operational deployments of the various infrastructure elements. That is the reason why we chose these infrastructure elements as the foundational pieces for the Backend Attribute Exchange (BAE) infrastructure we are currently deploying:

[Figure: FICAM AAES BAE]

As you can see above, there are also two supporting elements to the BAE infrastructure that we have deployed/are deploying: the BAE Metadata Service and the E-Government Trust Services (EGTS) Certificate Authority (CA). The BAE Metadata Service will be the authoritative source of the metadata related to the BAE deployment, and the EGTS CA will issue the Non-Person Entity (NPE) certificates that will be used to assure message level security across the members of the BAE "Attribute Federation".

In short, while the AAES is an abstract architectural construct, the infrastructure elements that make up the BAE are an example of a physical implementation of such a construct. It is being deployed in the near term to demonstrate operational capability with the goal of making it available as a shared service capability going forward.


:- by Anil John

What is new with the BAE Operational Deployment?

GSA OGP, together with our partner PM-ISE, is moving out on the operational deployment of the FICAM Backend Attribute Exchange (BAE). The PM-ISE blog post "A Detailed Test Scenario for our Law Enforcement Backend Attribute Exchange Pilot" gives details about our primary use case. In this blog post, I am going to map those business and information sharing aspects to some of the technical details of the deployment.

The operational scenario looks like this:

[Figure: BAE Operational Pilot Flow]
In any such scenario, there are always three parties to the transaction. An Identity Provider, a Relying Party and an Attribute Provider.

"A law enforcement officer in Pennsylvania has an account on the Pennsylvania J-Net network, which supports many public safety, law enforcement, and state government agencies and mission communities. To obtain the J-Net account, the officer’s identity and status were vetted and various facts about identity, assignment, and qualifications were captured and maintained in the J-Net user database.

In the course of an investigation, the officer needs to access data on the RISS-Net system."

J-NET is the Identity Provider and RISS-Net Portal is the Relying Party.

"… both J-Net and RISS-Net are members of the National Information Exchange Federation (NIEF), RISS-Net can accept electronic credentials from J-Net once the officer logs into a J-Net account and is authenticated."

One of the primary reasons we are interested in this scenario (beyond the information sharing value of the deployed capability) is the existence of the NIEF Federation. NIEF already counts J-NET and RISS-NET as members, which in turn means that there is an existing framework for collaboration and information sharing between them that we can plug into, and enhance, with the BAE.

One of the critical technical benefits of this relationship, within the context of the BAE deployment, is that the Federation Identifier has been standardized across NIEF (gfipm:2.0:user:FederationId).

When we created the "SAML 2.0 Identifier and Protocol Profiles for BAE v2.0" (PDF), we deliberately separated out the profiling of the identifiers and the profiling of the protocols precisely so that we could "snap-in" new identifiers, without impacting the security of the protocol. We also put in some specific wording that allowed this flexibility; "It is expected that if credentials with [identifiers] other than what is profiled in this document are used in a BAE implementation, the Federation Operator governing that Community of Interest will define the profiles necessary for that credential type."

As part of this pilot, we will be defining an identifier profile for the NIEF Federation Identifier that will be used in the attribute query made to the BAE Attribute Service.
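To give a feel for what that query might look like, here is a sketch of a SAML 2.0 AttributeQuery skeleton keyed on the NIEF Federation Identifier, written as a Python string for illustration. The NameID Format usage, the requested attribute name and the issuer value are assumptions; signatures, the SOAP binding and the message security required by the BAE profile are omitted.

    # Illustrative only: skeleton of a SAML 2.0 AttributeQuery keyed on the NIEF
    # Federation Identifier. The NameID Format, issuer and attribute name are assumptions.

    federation_id = "GFIPM:IDP:JNET:USER:EXAMPLE"  # placeholder value

    attribute_query = f"""
    <samlp:AttributeQuery xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                          xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                          ID="_example" Version="2.0" IssueInstant="2012-01-01T00:00:00Z">
      <saml:Issuer>https://rp.riss.example</saml:Issuer>
      <saml:Subject>
        <saml:NameID Format="gfipm:2.0:user:FederationId">{federation_id}</saml:NameID>
      </saml:Subject>
      <saml:Attribute Name="urn:example:attr:28CFRPart23TrainingStatus"/>
    </samlp:AttributeQuery>
    """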

"RISS-Net has defined a policy for access to their information resources, which is expressed in terms of specific characteristics (“attributes”) of authenticated users. The RISS-Net policy requires that a user is certified as a “Law Enforcement Officer”, and has the necessary 28CFRPart 23 training."

The key to keep in mind is that the existing NIEF SAML Federation and the supporting information sharing framework already allows J-NET to assert the "Law Enforcement Officer" (LEO) attributes for their members when they go to access the RISS-Net Portal.

"… although the officer was trained on 28CFRPart23 in a course offered online by the Bureau of Justice Assistance (BJA), this fact is not part of the officer’s J-Net’s record (28CFRPart23 training status is not one of the facts gathered in their vetting process). Thus J-Net cannot provide all the credentials required by RISS-Net for access to the needed data."

And this is the critical value add for this pilot! There is additional information locked up within RISS-Net that can only be accessed if the 28CFRPart23 attribute is provided. J-Net is not able to assert this, but BJA as the authoritative attribute source can. And we are utilizing the BAE Shared Service Infrastructure deployed at GSA to provide them the capability to do so.

An item that we are still exploring is whether the information available from the NIEF Federation Identifier, together with the J-NET attribute assertion, is enough to uniquely identify an existing record of a trained LEO at BJA. This is still an open question and is critical to making this work.

As you may have noted, I keep calling the deployment of the BAE Infrastructure at GSA a "Shared Service Infrastructure". That is a deliberate choice of words and I will expand on that in the future, especially given that this is not our only pilot use case for the BAE deployment!


:- by Anil John

GSA OGP Announces an Industry Day on Federal Federated Identity Solutions

Earlier this year, the White House convened the Federal Cloud Credential Exchange (FCCX) Tiger Team comprised of several federal agencies that have a vital need to better assist the public and reduce Federal costs by moving more services online. In alignment with President Obama’s National Strategy for Trusted Identities in Cyberspace, the FCCX Tiger Team’s objective is to facilitate the Federal government’s early adoption of secure, privacy-enhancing, efficient, easy-to-use, and interoperable identity solutions.

Over the past few months, the FCCX Tiger Team has worked on the use cases and the functional requirements necessary for the operation of an identity federation capability that can be integrated with a government agency web application to support and consume a full range of digital credentials such as PIV, PIV-I, and other third party credentials issued under a FICAM-approved Trust Framework Provider.

In simple terms, the Federal government is interested in leveraging one or more commercially available cloud service providers to streamline the burden agencies face in trusting and integrating with FICAM-approved credentials.

As the next step, the FCCX Tiger Team would like to hear from industry vendors on how they might implement a privacy-enhancing, cloud-based, federated credential exchange service.

If you are a product or solutions provider that has the ability to offer these capabilities and would like to help inform the service, please submit your name and company via e-mail to icam [at] gsa [dot] gov by Wednesday, August 1, 2012 and we will provide more information about the requested written response and associated logistics.

In addition, for those who contact us, GSA Office of Governmentwide Policy (GSA OGP) will be holding an Industry Day on Tuesday, August 7th, 2012 (9am – 12:30pm EST) at GSA OCS, 1275 First Street NE, Washington DC, Room 1201B (NoMa-Gallaudet Station – DC Metro Red Line) to gather more information and answer questions from industry vendors regarding the FCCX initiative. We will be able to host both virtually and in person. In person space is limited, so let us know your preference when you contact us.

As an overview, the following topics should be addressed in your written response, which will be due by 5 P.M. EDT on Monday, August 20, 2012:

  • Proposed high level architecture for enabling authentication to an Agency application using third party credentials to include:
    • Shared service operated in a cloud environment servicing multiple Agencies
    • Operation in an Agency-hosted environment
  • User interface approaches for selection of approved credentials
  • Credential registration and authentication strategies for citizens with multiple approved credentials
  • User enrollment approaches
  • Assurance level escalation approaches
  • Attribute request/consumption approaches
  • Supported protocols, profiles and schemas for creating and sending assertions
  • Abstracting and streamlining business relationships with FICAM approved credential providers at all levels of assurance
  • Preserving privacy (minimize storage of personal information and “panopticality” of the service)
  • Auditing
  • Scalability of the service
  • Cost models (Pay per User or application using tiered volume discounts, O&M)
  • Other relevant information

UPDATE (8/3/12): We've had a couple of questions about what is meant by "panopticality" above.

Within the context of FCCX it means two things:

  1. It is the ability of Credential Providers to "see" all the Service Providers to which a citizen authenticates
  2. It is the visibility that the FCCX service itself may have into the citizen information that is flowing thru it


:- by Deb Gallagher (GSA) & Naomi Lefkovitz (NIST) - FCCX Tiger Team Co-Chairs

Access Control and Attribute Management WG Industry Day Invitation

The FICAM Access Control & Attribute Management Working Group (ACAGWG) is working to address the needs of the Federal Government for access control, lifecycle management of attributes, and the associated governance processes around entitlements and privileges. If you are interested in engaging with this cross-government (Federal, Defense, IC and more…) working group during our upcoming industry day, please read on...

Why are we holding this event?

We have little desire to re-invent the wheel and would like to leverage lessons learned and best practices from real world implementations. 
This event is designed to help us learn more about the current state-of-practice in the commercial sector around attribute providers and their business models, as well as identity and access governance approaches.

When and where is this event being held?

September 5, 2012 in the Washington, DC area. 

What are we looking for?

A demonstrated case study (references to operational systems are preferred) to include information such as:
  1. Attribute lifecycle management
  2. Provisioning/de-provisioning of attributes
  3. Processes for semantic and syntactic alignment of attributes
  4. Attestation of attributes
  5. Provenance of attributes
  6. Attribute metadata
  7. Data quality management and practices of attribute providers
  8. Attribute provider business models
  9. Defining, generating and sharing access policies
  10. Enterprise privilege and entitlement management practices
  11. Separation of duties
  12. Other topics related to Attribute Providers as well as Identity and Access Governance
Elements the case study should explore and include:
  1. The type of infrastructure in place
  2. Processes in place for managing attributes
  3. The process for deciding the appropriate attribute and access policies for the domain
  4. Determining Levels of confidence in attribute providers
  5. What factors go into making a decision to "trust" an attribute provider
  6. Design time vs. run-time decision factors
  7. Expanded uses beyond the original intent of the attribute and access policies

What are we NOT looking for?

  • Product demos
  • Marketing slide-ware
This is a group that has deep technical and policy expertise. We are fine with you taking some time at the end of the case study to map it into your product/service. The majority of the case study, however, should focus on the concept of operations, business models, processes and decisions that went into the case study. 

What is the first step?

Please submit a one page high-level abstract (PDF) with details of your case study to ICAM [at] gsa [dot] gov by 5:00 p.m. on July 31st, 2012.  A member of our planning committee will be in contact with those whose submissions most closely align with what we are seeking.


:- by Anil John

If You Don't Plan For User Enrollment Now, You'll Hate Federation Later

User enrollment (a.k.a. user activation, user provisioning, account mapping) into a relying party (RP) application is one of those pesky details that is often glossed over in identity federation conversations. But this piece of the last mile integration with the relying party is fundamental to making identity federation work. This blog post describes this critical step and its components.

[Figure: User Enrollment]

Enrollment is defined here as the process by which a link is established between the (credential) identifier of a person and the identifier used within an RP to uniquely identify the record of that person. For the rest of this blog post, I am going to use the term Program Identifier (PID) to refer to this RP record identifier [A hat tip to our Canadian colleagues].

Especially when it comes to government services, a question that needs to be asked is if the citizen has an existing relationship with the government agency. If it exists, the Gov RP should have the ability to use some shared private information as a starting point to establish the link between a citizen's (credential) identifier (obtained from the credential verifier) and the PID. e.g. Driver's License Number (DL#) if visiting the Motor Vehicle Administration (MVA) or Social Security Number if visiting the Social Security Administration.

It is important to note that this information is already known to the RP, used only as a claim of identity by the citizen, and providing just this information does not constitute proof of identity. i.e. When I provide my DL# to the MVA, I am saying that "Here is DL# XX-XXXXXXX; I claim that I am the Anil John you have on record as being the owner of that DL#". Before enrollment, the MVA would still need to verify that it is indeed me making this claim using an identity proofing process, and not an identity thief who has obtained my DL#.

So for enrollment to work, the RP needs two sets of data:

  1. A shared private piece of data that represents a claim of identity by the citizen
  2. A set of information that can be used to prove that claim to the satisfaction of the RP
If the identity verification is successful, the RP can establish a link between the credential identifier (OpenID PPID, SAML NameID, X.509 SubjectDN etc.) and the PID.
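A bare-bones sketch of that linking step in Python, with the identity proofing check stubbed out and all names illustrative:

    # Bare-bones illustration of enrollment: link a credential identifier
    # (OpenID PPID, SAML NameID, X.509 SubjectDN, ...) to the RP's record identifier (PID).

    def proof_of_identity_accepted(record, supporting_information):
        """Stub for the RP's identity proofing / verification step."""
        ...

    def enroll(rp_records, credential_identifier, claimed_identifier, supporting_information):
        record = rp_records.get(claimed_identifier)   # e.g. keyed by DL# at the MVA
        if record is None:
            return None                               # no record found for the claimed identifier
        if not proof_of_identity_accepted(record, supporting_information):
            return None                               # the claim alone is not proof of identity
        record["credential_identifier"] = credential_identifier
        return record["pid"]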

In cases where shared private information does not exist between the citizen and the agency, some options to consider are:
  1. In-person identity proofing
  2. Attestation from a trusted third-party
  3. Shared service for enrollment across multiple RPs (Privacy implications would have to be carefully worked through)
  4. No linking possible; treat as new record
Are there variations or additions to this step that I have not captured above?


:- by Anil John

Level of Confidence of What, When, Where and Who?

Last week's blog post by Dr. Peter Alterman on "Why LOA for Attributes Don’t Really Exist" has generated a good bit of conversation on this topic within FICAM working groups, in the Twitter-verse (@Steve_Lockstep, @independentid, @TimW2JIG, @dak3...) and in many other places. I also wanted to call out the recent release of the Kantara Initiative's "Attribute Management Discussion Group - Final Report and Recommendations" (via @IDMachines) as being relevant to this conversation as well.

One challenge with this type of discussion is to make sure that at a specific point in the conversation, we are all discussing the same topic from the same perspective. So before attempting to go further, I wanted to put together a simple framework, and hopefully a common frame of reference, to hang this discussion on:

 

"What"
  • Separate out the discussion on Attribute Providers from the discussion on individual Attributes
  • Separate out the discussion on making a decision (to trust/rely-upon/use) based on inputs provided vs making a decision (to trust/rely-upon/use) based on a "score" that has been provided
"When"
(to trust/rely-upon/use)
  • "Design time" and "Run time"
"Where"
  • Where is the calculation done (local or remote)?
  • Where is the decision (to trust/rely-upon/use) done?
"Who"
  • Party relying on attributes to make a calculation, a decision and/or use in a transaction
  • Provider, aggregator and/or re-seller of attributes
  • Value added service that takes in attributes and other information to provide results/judgements/scores based on those inputs
 

Given the above, some common themes and points that surfaced across these conversations are:
  1. Don't blur the conversations on governance/policy and score/criteria, i.e. the conversation around "This is how you will do this within a community of interest" is distinct and separate from "The criteria for evaluating an Attribute/AP is x, y and z"
  2. Decisions/Choices regarding Attributes and Attribute Providers, while related, need to be addressed separately ["What"]
  3. Decision to trust/rely-upon/use is always local ["Where"], whether it is for attributes or attribute providers
  4. The decision to trust/rely-upon/use an Attribute Provider is typically a design time decision ["When"]
    1. The criteria that feeds this decision (i.e. input to a confidence in AP calculation) is typically more business/process centric e.g. security practices, data quality practices, auditing etc.
    2. There is value in standardizing the above, but it is unknown at present if this standardization can extend beyond a community of interest 
  5. Given that the decision to trust/rely-upon/use an Attribute Provider is typically made out-of-band and at design-time, it is hard to envision a use case for a run-time evaluation based on a confidence score for making a judgement for doing business with an Attribute Provider ["When"]
  6. The decision to trust/rely-upon/use an Attribute is typically a local decision at the Relying Party ["Where"]
  7. The decision to trust/rely-upon/use an Attribute is typically a run-time decision ["When"], given that some of the potential properties associated with an attribute (e.g. unique, authoritative or self-reported, time since last verified, last time changed, last time accessed, last time consented or others) may change in real time
    1. There is value in standardizing these 'attributes of an attribute'
    2. It is currently unknown if these 'attributes of an attribute' can scale beyond a specific community of interest
  8. A Relying Party may choose to directly make the calculation about an Attribute (i.e. a local confidence calculation using the 'attributes of an attribute' as input; a sketch follows this list) or depend on an externally provided confidence "score" ["What"]
    1. The "score" calculation may be outsourced to an external service/capability ["Where"]
    2. This choice of doing it yourself or outsourcing should be left up to the discretion of the RP based on their capabilities and risk profile ["Who"]
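The sketch referenced in point 8: a toy local confidence calculation in Python that uses a few 'attributes of an attribute' as inputs. The metadata fields and the weights are invented, not a proposed standard.

    from datetime import date

    # Toy local confidence calculation using 'attributes of an attribute' as inputs.
    # The metadata fields and the weighting are invented for illustration only.

    def attribute_confidence(meta, today=None):
        """Return a 0..1 confidence figure for a single attribute value."""
        today = today or date.today()
        score = 0.5 if meta["self_reported"] else 0.9        # authoritative vs self-reported
        days_since_verified = (today - meta["last_verified"]).days
        if days_since_verified > 365:                        # stale verification lowers confidence
            score -= 0.2
        if not meta["consented"]:                            # no consent on record
            score -= 0.3
        return max(0.0, min(1.0, score))

    meta = {"self_reported": False, "last_verified": date(2012, 1, 15), "consented": True}
    print(attribute_confidence(meta, today=date(2012, 6, 1)))  # 0.9 in this example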
Given that we have to evaluate both Attribute Providers and Attributes, it is probably in all of our shared interest to come up with a common terminology for what we call these evaluation criteria. A recommendation, taking into account many of the conversations in this space to date:
  • Attribute Provider Practice Statement (APPS) for Attribute Providers, Aggregators, Re-Sellers
  • Level of Confidence Criteria (LOCC) for Attributes

As always, this conversation is just starting... 
 
 

:- by Anil John

Why LOA for Attributes Don’t Really Exist

This is a guest post on Authoritative-ness and Attributes by Dr. Peter Alterman. Peter is the Senior Advisor to the NSTIC NPO at NIST, and a thought leader who has done pioneering and award-winning work in areas ranging from IT Security and PKI to Federated Identity. You may not always agree with Peter, but he brings a perspective that is always worth considering. [NOTE: FICAM ACAG WG has not come to any sort of consensus on this topic yet] - Anil John


 

As I have argued in public and private, I continue to believe that the concept of assigning a Level of Assurance to an attribute is bizarre, making real-time authorization decisions even more massively burdensome than they can be, and does nothing but confuse both Users and Relying Parties.

The Laws of Attribute Validity

The simple, basic logic is this: First, an attribute issuer is authoritative for the attributes it issues. If you ask an issuer if a particular user’s asserted attribute is valid, you’ll get a Yes or No answer. If you ask that issuer what the LOA of the user’s attribute is, it will be confused – after all, the issuer issued it. The answer is binary: 1 or 0, T or F, Y or N. Second, a Relying Party is authoritative for determining what attributes it wants/needs for authorization and more importantly, it is authoritative for deciding what attribute authorities to trust. Again the answer is binary: 1 or 0, T or F, Y or N. Any attribute issuer that is not authoritative for the attribute it issues should not be trusted and any RP that has no policy on which attribute providers OR RESELLERS to trust won’t survive in civil court.

Secondary Source Cavil

“But wait,” the fans of attribute LOA say, what if you ask a secondary source if that same user’s attribute is valid. This is asking an entity that did not issue the attribute to assert its validity. In this case the RP has to decide how much it trusts the secondary source and then how much it trusts the secondary source to assert the true status of the attribute. Putting aside questions of why one would want to rely on secondary sources in the first place, implicit in this use case is the assumption that the RP has previously decided who to ask about the attribute. If the RP has decided to ask the secondary source, that is also a trust decision which one would assume would have been based on an evaluative process of some sort. After all, why would an RP choose to trust a source just a little bit? Really doesn’t make sense and complicates the trust calculation no end. Not to mention raising the eyebrows of both the CISSO and a corporate/agency lawyer, both very bad things.

Thus, the RP decides to trust the assertion of the secondary source. The response back to the RP from the secondary source is binary and the trust decision is binary. Some Federation Operators (or Trust Framework Providers, take your pick) may serve as repositories of trusted sources for attribute assertions as a member service; in that case, too, the RP would choose to trust the attribute sources of the FO/TFP explicitly. If a Federation Operator/TFP chooses not to trust certain secondary sources, it simply doesn’t add them to its white list. Member RPs that choose to trust the secondary attribute sources would do so based upon local determinations, underscoring the role of prior policy implementation.

Either directly or indirectly, an RP or a TFP makes a binary trust decision about which attribute providers to trust, and so the example reduces to the original law.

Transient Validity, aka Dynamic Attributes

Another circumstance where attribute LOA might be considered is querying about an attribute which changes rapidly in real time. One must accept that an attribute is either valid or invalid at the time of query. If temporality is of concern, that is a whole second attribute and a trusted timestamp must be a necessary part of the attribute validation process. A query from an online business to an end user’s bank would want to know if the user had sufficient funds to cover the transaction at the time the transaction is confirmed. At the time of the query the answer is binary, yes or no. It would also need a trusted timestamp that itself could be validated as True or False. That is, two separate attributes are required, one for content and one for time, both of which must be true for the RP to trust and therefore complete the transaction. Even for ephemeral attributes the answer is directly relevant to the state of the attribute at the time of query and that answer is binary, Y or N, the only difference being that a second trusted attribute – the timestamp – is required. The business makes a binary decision to trust that the user has the funds to pay for the purchase at the time of purchase – and of query - and concludes the sale or rejects it. The case resolves back to binary again.

Obscurity of Attribute Source

Admittedly, things can get complicated when the identity of the attribute issuer is obscure, such as US Citizenship for the native-born. However, once again the RP makes an up-front decision about which source or sources it’s going to ask about that attribute. It doesn’t matter what the source is; the point is that the RP will make a decision on which source it deems authoritative and it will trust that source’s assertion of that attribute. In the citizenship example, the RP chooses two sources: the State Department because if the user has been issued a US passport that’s a priori legal proof of citizenship, or some other governmental entity that keeps a record of the live birth in a US jurisdiction, which is another a priori legal proof of citizenship. However, if the application is a National Security RP for example, it might query a whole host of data sources to determine if the user holds a passport from some other nation state. In addition to the attribute sources query, which in this case might get quite complex (certainly enough for the user to disconnect and pick up the phone instead), the application will have to include scripts telling it where to look and what answers to look for. And at the end of the whole process, the application is going to make a binary decision about whether to trust that the user is a US citizen or not and all that intermediate drama again resolves down to the original case, that the RP makes an up-front determination what attribute source or sources to trust, though in this one the RP builds a complicated multi-authority search and weigh process as part of its authorization determination.

RPs That Calculate Trust Scores

Many commercial RPs, especially in the financial services industry, calculate scores to determine whether to trust authentication and/or authorization. In these situations the RP is making trust calculations, weighing and scoring. Yet it is the RP that is calculating on the inputs, not calculating the inputs. It uses the authorization data and the attribute data to make authentication and/or authorization decisions with calculation code that is directly relevant to its risk mitigation strategy. In fact, this begins to look a lot like the National Security version of the Obscurity condition.

In these vexed situations, what the RP is doing is not trusting all attribute providers and calculating a trust decision based upon a previously-determined algorithm in which all the responses from all the untrusted providers somehow are transformed into a trusted attribute. The algorithm seems to be based upon determining a trust decision by using multiple attribute sources to reinforce each other in some predetermined way, and this method reminds me of a calculus problem, that is, integrating towards zero (risk) and perhaps that’s what the algorithm even looks like.

Attribute Probability

Colleagues who have reviewed this [position paper] in draft have pointed out that data aggregators sometimes have fewer high quality (attribute) data about certain individuals, such as young people, and therefore some include a probability number along with the transmitted attribute data. While it may mean that the data carries a level of assurance assertion to the attribute authority, it’s not really a level of assurance assertion to the RP. The RP, again, has chosen to trust a data aggregator as an authoritative attribute source, presumably because it has reviewed the aggregator’s business processes, accepts its model for probability ranking and chooses to incorporate that probability into its own local scoring algorithm or authorization determination process. In other words, the aggregator is deemed authoritative and its probability scoring is considered authoritative as well. This is, yet again, a binary Y or N determination.

Why It Matters

There are compelling operational reasons why assigning assurance levels to attribute assertions, or even asserters, is a bad idea. It’s because, simply, anything that complicates the architecture of the global trust infrastructure is bad and especially bad if that complication is built on top of a failure to distinguish between a data input and local data manipulation. As the example above illuminates, the attribute and asserter(s) are both trusted by the RP application while the extent to which the trusted data is reliable is questionable and thus manipulated. Insisting on scoring the so-called trustworthiness of an attribute asserter is in essence assigning an attribute to an attribute, a trustworthiness attribute. The policies, practices, standards and other dangling elements necessary to deploy attributes with attributes, then interpret them and utilize them, even if such global standardization for all RP applications could be attained, constitutes an unsupportable waste of resources. Even worse, it threatens to sap the momentum necessary to deploy an attribute management infrastructure even as solutions are beginning to emerge from the conference rooms around the world.

QED, Sort of

The Two Laws of Attribute Validity notwithstanding, people can – and have – created Rube Goldberg-ian use cases that require attribute LOA and have even plopped them into a deployed solution (to increase billable hours, one suspects), but they’re essentially useless. I hate to beat this dead horse, but each case I’ve listened to reduces to a binary decision. The bottom line is that the RP makes policy decisions up front about what attribute provider(s) to trust or not trust, and these individual decisions lead to binary assertions of attribute validity, either directly or indirectly through local processing.

:- by Peter Alterman, Ph.D.


It Depends a.k.a. Access Decisions are Contextual

This week, I had the pleasure of attending and presenting at the InCommon Confab. It was a great day and a half event organized by Jacob Farmer at Indiana University and many others from the InCommon Team.

InCommon's mission is to support a common trust framework for U.S. Education and Research, and their Identity Assurance Program offers its more than 200 Identity Providers the ability to certify their practices at the InCommon Bronze (LOA 1) and InCommon Silver (LOA 2) Levels. Given their scope and reach across the Research and Education Sector, as well as their maturity in the Identity Federation space, we are very fortunate that they have chosen to become a FICAM approved Trust Framework Provider whose Bronze (LOA 1) and Silver (LOA 2) certified IdP Credentials can be used to access Federal Government Web Sites.

Ian Glazer (Gartner), one of the other keynote speakers, has a good write-up on his blog about the event, so I won't repeat it here [Go, read, and come back. I'll wait]. The great thing about the conversation that took place is that we are finally getting past the authentication and LOA conversations to what really matters when it comes to getting things done, which is tackling the hard challenges around distributed/federated/cross-organizational authorization to enable collaboration and the sharing of information.

In my presentation, I had the following slide, which broke out attributes of a person into Identity, Authority, Contextual and Preference.

In my own mind, I had lumped together what I, and many others over the years, had taken to calling "Environmental" attributes into the Contextual bucket. But, as you can also see, I had subordinated that context element under the Person umbrella.

The conversation that we had over the last couple of days, in my own mind, called that bucketing into question.

A potential starting point, as articulated by Ian, is that Context is anything that is not Person or Resource related, and as such it promotes Context to be a first class citizen along-side Person and Resource Attributes. This leads to:
The "External Attribute" component of Context maps pretty easily into what we have traditionally called "Environmental Attributes":

  • Operational Status e.g. Threat-Level-1, Declared-DSCA-Event
  • Inside the building on business/agency infrastructure
  • Coming from a specific IP block
  • Connecting using VPN
  • Host based scans report as healthy
  • etc.

But the "Shared Contextual Attributes" are something new to think about and explore, as they bring a relationship component into the mix that could potentially be very interesting and, if we can work through it to a shared understanding, address questions such as:

  • Is context where we can convey data handling expectations that come along with access to data? 
  • Obligations and responsibilities? 
  • Where semantics can be attached to and sent along with authority attributes?
  • ?
There was pretty general agreement that we really are not sure at this point, but that we do need to put some think-time on this. What it should NOT BE is that Context becomes the grab-all bucket into which everything that is not Subject and Resource gets stuffed as that would make it quickly irrelevant and useless from an access control decision point of view.

Really looking forward to further conversations on this topic.


:- by Anil John

Shared Services and Government as Attribute Service Provider

FICAM Roadmap and Implementation Guide articulates the need to provide government-wide services for common ICAM requirements. In addition, an execution priority for FICAM is to demonstrate the value of policy driven access control in Government systems.  One of the ways that we are moving forward in this area is by piloting the operational use of attribute services, backed by attribute providers, that can act as a single point of query for relying parties.

What is an attribute provider (AP)?

The National Strategy for Trusted Identities in Cyberspace (PDF) describes an AP as being "... responsible for the processes associated with establishing and maintaining identity attributes. Attribute maintenance includes validating, updating, and revoking the attribute claim. An attribute provider asserts trusted, validated attribute claims in response to attribute requests from relying parties."

Why is this important?

  • We are moving into an era where dynamic, contextual, policy driven mechanisms are needed to make real time access control decisions at the moment of need.
  • The policy driven nature of the decisions requires that the decision making capability be externalized from systems/applications/services and not be embedded within, and that policy be treated as a first class citizen.
  • The input to these decisions are based on information about the subject, information about the resource, and contextual information that are often expressed as attributes.
  • These attributes can reside in multiple sources where the level of confidence a relying party can have in an attribute may vary and has many components (Working on this one).
  • The relevant attributes are retrieved (“pulled”) from the variety of sources at the moment when a subject needs to access a system and are not pre-provisioned into the system (a rough sketch follows this list).
  • Standards! Standards! Standards! All of the moving parts here (finding/correlating attributes, movement of attributes across organizational boundaries, decision control mechanisms etc.) need to be using standards based interfaces and technologies.
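As a rough sketch of what "pulled at the moment of need" can look like from a relying party's point of view, here it is in Python. The attribute service call is a placeholder returning canned data, and the policy is deliberately trivial.

    # Rough sketch: pull subject attributes at decision time and evaluate a policy.
    # query_attribute_service() is a placeholder for a standards-based call
    # (e.g. a BAE SAML attribute query); here it just returns canned data.

    def query_attribute_service(subject_id, requested):
        """Placeholder for a back-end attribute query to an authoritative source."""
        return {"role": "LEO", "training": ["28CFRPart23"]}

    def access_decision(subject_id, resource, context):
        subject_attrs = query_attribute_service(subject_id, ["role", "training"])
        # Decision uses subject, resource, and contextual inputs; nothing is pre-provisioned.
        permitted = (
            subject_attrs.get("role") == resource["required_role"]
            and resource["required_training"] in subject_attrs.get("training", [])
            and context.get("network") == "trusted"
        )
        return "Permit" if permitted else "Deny"

    print(access_decision(
        "officer-1",
        {"required_role": "LEO", "required_training": "28CFRPart23"},
        {"network": "trusted"},
    ))  # Permit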


How will this capability be implemented?

As a first step, we are partnering with PM-ISE on an operational pilot (real missions, real data, real systems, real users) of the FICAM Backend Attribute Exchange (BAE) capability.

The BAE capability provides a "... standards-based architecture and interface specification to securely obtain attributes of subjects from authoritative sources in order to make access control decisions."

If interested in its technical details, do check out the final version of the BAE v2 technical documentation set:


As someone who has been involved with the BAE since the first prototype, it is interesting for me to look back on the timeline for how we got here [Full Disclosure: Some of the links below point to blog entries from before I entered Federal Government Service; At that time, I was a Contractor supporting the DHS Science & Technology Directorate as the Technical Lead for their Identity Management Testbed]


:- by Anil John

FICAM Roadmap and Implementation Guide Overview

The Federal Identity, Credential and Access Management (FICAM) Roadmap and Implementation Guidance v2 (PDF) is a pretty comprehensive document, i.e. an abundance of good stuff to read!

Deb Gallagher (ICAMSC Co-Chair) and I had the opportunity yesterday to provide an overview of the FICAM Roadmap at an open event to a cross-section of people from .gov/.mil/IC, folks who support those communities, and some who were simply interested in the topic.

Deb provided an Overview of the FICAM Roadmap while I focused on its Initiative 5, which gives guidance to Agencies on how they can streamline the collection and sharing of digital identity data as part of their alignment to the FICAM Roadmap.

The presentation is provided below for your viewing pleasure and for download (PDF). Thanks to everyone who attended and for the many questions on the topic. Hope we were able to provide some of the answers you were looking for.



:- by Anil John

To LOA or not to LOA (for Attributes)... NOT!

At both the recent ISOC sponsored Attribute Workshop as well as the Attribute Management Panel at the NSTIC/IDTrust Workshop today, multiple people used the term "LOA of Attributes".  I protest (protested?) this potentially confusing use of the term in this context.

The term Level of Assurance (LOA), as currently used, is all about assurances of identity. In particular, within the context of OMB M-04-04 (PDF) and NIST SP 800-63 (PDF), it is defined as:
  1. the degree of confidence in the vetting process used to establish the identity of the individual to whom the credential was issued (i.e. the identity proofing component) and 
  2. the degree of confidence that the individual who uses the credential is the individual to whom the credential was issued (i.e. the "technical strength" of the credential itself)
Applying the LOA terminology to Attributes brings confusion, and given the multiple folks who brought up this point and the resonance the comments got, I hope this usage will be discouraged by the community going forward.

I hope that we can come up with and agree on another term to convey our intent here, which is to denote the Measure of Confidence or a Level of Confidence you can have in an Attribute. The components that make up that level/measure TBD.

:- by Anil John