
From AAES to BAE - Implementing Collection and Sharing of Identity Data

The Federal Identity, Credential and Access Management (FICAM) Roadmap and Implementation Guidance (PDF) calls out the need to streamline the collection and sharing of digital identity data (Initiative 5). The Authoritative Attribute Exchange Services (AAES) is the architectural construct shown in the Roadmap as the mechanism that can implement this capability. This blog post describes the capabilities needed in an AAES and outlines a concrete method for implementing it: deploying a Backend Attribute Exchange (BAE) infrastructure.

The AAES is a point of architectural abstraction between authoritative sources of identity information and the systems and applications that need that information.

FICAM AAES

At a high level, you can separate the functional requirements of an AAES into two buckets:

Authoritative Attribute Manager
  • Correlate attributes from various attribute sources
  • De-conflict discrepancies across attribute sources
  • Implement a data model for entity attributes
  • Provide a consolidated view of the pieces of an entity gathered from multiple sources

Authoritative Attribute Distributor
  • Primary point of query for systems and applications
  • Provide a customized and tailored view of data
  • Support requests for attributes from both internal and external (to the organization standing up the AAES) consumers


In order to meet these requirements, the implementation would need to provide capabilities "in the middle" such as Aggregation & Join, Mapping & Transformation, Routing & Load Balancing, Security & Audit and Local Storage (for caching) while providing standardized interfaces and connectors to applications and data sources.
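
The "Aggregation & Join" capability in the middle can be sketched in a few lines. This is a minimal illustration, assuming hypothetical source names, a shared join key, and a simple source-precedence rule for de-conflicting discrepancies; a real AAES would do this inside a virtual/meta directory engine.

```python
# Hypothetical attribute sources, keyed by a shared subject identifier.
HR_SOURCE = {"jdoe": {"surname": "Doe", "clearance": "Secret"}}
TRAINING_SOURCE = {"jdoe": {"surname": "DOE", "cfr28_trained": True}}

# De-confliction rule: earlier sources in this list win on discrepancies.
PRECEDENCE = ["hr", "training"]

def join_entity(subject_id):
    """Provide a consolidated view of one entity across all sources."""
    views = {"hr": HR_SOURCE.get(subject_id, {}),
             "training": TRAINING_SOURCE.get(subject_id, {})}
    merged = {}
    # Apply lowest-precedence sources first so higher-precedence values overwrite.
    for source in reversed(PRECEDENCE):
        merged.update(views[source])
    return merged

print(join_entity("jdoe"))
```

Here the HR source wins the conflicting `surname` value, while the non-conflicting training attribute survives the join.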

A combination of a Virtual/Meta Directory Engine and an XML Security Gateway provides such a mix of capabilities:

FICAM AAES Implementation
The implementation of such an infrastructure is something we now have extensive experience with, from a combination of prototypes and proofs of concept, end-to-end pilots, and operational deployments of the various infrastructure elements. That is why we chose these infrastructure elements as the foundational pieces for the Backend Attribute Exchange (BAE) infrastructure we are currently deploying:

FICAM AAES BAE

As you can see above, there are also two supporting elements to the BAE infrastructure that we have deployed/are deploying: the BAE Metadata Service and the E-Government Trust Services (EGTS) Certificate Authority (CA). The BAE Metadata Service will be the authoritative source of the metadata related to the BAE deployment, and the EGTS CA will issue the Non-Person Entity (NPE) certificates that will be used to assure message-level security across the members of the BAE "Attribute Federation".

In short, while the AAES is an abstract architectural construct, the infrastructure elements that make up the BAE are an example of a physical implementation of such a construct. It is being deployed in the near term to demonstrate operational capability with the goal of making it available as a shared service capability going forward.


:- by Anil John

What is new with the BAE Operational Deployment?

GSA OGP, together with our partner PM-ISE, is moving out on the operational deployment of the FICAM Backend Attribute Exchange (BAE). The PM-ISE blog post "A Detailed Test Scenario for our Law Enforcement Backend Attribute Exchange Pilot" gives details about our primary use case. In this blog post, I am going to map those business and information sharing aspects to some of the technical details of the deployment.

The operational scenario looks like this:

BAE Operational Pilot Flow
In any such scenario, there are always three parties to the transaction: an Identity Provider, a Relying Party and an Attribute Provider.

"A law enforcement officer in Pennsylvania has an account on the Pennsylvania J-Net network, which supports many public safety, law enforcement, and state government agencies and mission communities. To obtain the J-Net account, the officer’s identity and status were vetted and various facts about identity, assignment, and qualifications were captured and maintained in the J-Net user database.

In the course of an investigation, the officer needs to access data on the RISS-Net system."

J-NET is the Identity Provider and RISS-Net Portal is the Relying Party.

"… both J-Net and RISS-Net are members of the National Information Exchange Federation (NIEF), RISS-Net can accept electronic credentials from J-Net once the officer logs into a J-Net account and is authenticated."

One of the primary reasons we are interested in this scenario (beyond the information sharing value of the deployed capability) is the existence of the NIEF Federation. NIEF already counts J-NET and RISS-NET as members, which in turn means that there is an existing framework for collaboration and information sharing between them that we can plug into, and enhance, with the BAE.

One of the critical technical benefits of this relationship, within the context of the BAE deployment, is that the Federation Identifier has been standardized across NIEF (gfipm:2.0:user:FederationId).

When we created the "SAML 2.0 Identifier and Protocol Profiles for BAE v2.0" (PDF), we deliberately separated out the profiling of the identifiers and the profiling of the protocols precisely so that we could "snap-in" new identifiers, without impacting the security of the protocol. We also put in some specific wording that allowed this flexibility; "It is expected that if credentials with [identifiers] other than what is profiled in this document are used in a BAE implementation, the Federation Operator governing that Community of Interest will define the profiles necessary for that credential type."

As part of this pilot, we will be defining an identifier profile for the NIEF Federation Identifier that will be used in the attribute query made to the BAE Attribute Service.
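
To make the moving parts concrete, here is a rough sketch of what such an attribute query could look like when serialized, built with Python's standard XML library. The SAML 2.0 namespaces are the standard ones, but the NameID Format URI, the FederationId value, and the requested attribute name are all illustrative assumptions, not values from the actual NIEF identifier profile.

```python
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_attribute_query(federation_id, requested_attr):
    """Build a bare-bones SAML 2.0 AttributeQuery (unsigned, for illustration)."""
    query = ET.Element(f"{{{SAMLP}}}AttributeQuery", {"ID": "_q1", "Version": "2.0"})
    subject = ET.SubElement(query, f"{{{SAML}}}Subject")
    name_id = ET.SubElement(subject, f"{{{SAML}}}NameID",
                            {"Format": "urn:nief:federation-id"})  # assumed Format URI
    name_id.text = federation_id
    ET.SubElement(query, f"{{{SAML}}}Attribute", {"Name": requested_attr})
    return ET.tostring(query, encoding="unicode")

query_xml = build_attribute_query(
    "GFIPM:IDP:JNET:USER:jdoe",              # hypothetical FederationId value
    "gfipm:2.0:user:28CFRPart23Certified")   # hypothetical attribute name
print(query_xml)
```

The point of the identifier/protocol separation is visible here: swapping in a different identifier type only changes the NameID Format and value, not the shape or security of the query itself.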

"RISS-Net has defined a policy for access to their information resources, which is expressed in terms of specific characteristics (“attributes”) of authenticated users. The RISS-Net policy requires that a user is certified as a “Law Enforcement Officer”, and has the necessary 28CFRPart 23 training."

The key to keep in mind is that the existing NIEF SAML Federation and the supporting information sharing framework already allows J-NET to assert the "Law Enforcement Officer" (LEO) attributes for their members when they go to access the RISS-Net Portal.

"… although the officer was trained on 28CFRPart23 in a course offered online by the Bureau of Justice Assistance (BJA), this fact is not part of the officer’s J-Net’s record (28CFRPart23 training status is not one of the facts gathered in their vetting process). Thus J-Net cannot provide all the credentials required by RISS-Net for access to the needed data."

And this is the critical value add for this pilot! There is additional information locked up within RISS-Net that can only be accessed if the 28CFRPart23 attribute is provided. J-Net is not able to assert this, but BJA as the authoritative attribute source can. And we are utilizing the BAE Shared Service Infrastructure deployed at GSA to provide them the capability to do so.

An item that we are still exploring is whether the information available from the NIEF Federation Identifier and the J-NET attribute assertion is sufficient to uniquely identify an existing record of a trained LEO at BJA. This is still an open question and is critical to making this work.
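
That open question can be framed as a matching problem. The sketch below assumes hypothetical BJA record fields and a fallback from an (unpopulated) federation identifier to IdP-asserted attributes; the point is that the query must resolve to exactly one record or fail.

```python
# Hypothetical BJA training records; none carry a federation identifier yet.
BJA_RECORDS = [
    {"federation_id": None, "surname": "Doe", "agency": "PA-JNET", "cfr28_trained": True},
    {"federation_id": None, "surname": "Doe", "agency": "OH-LEADS", "cfr28_trained": False},
]

def match_record(federation_id, surname, agency):
    """Return the single matching record, or None if zero or many match."""
    exact = [r for r in BJA_RECORDS
             if federation_id is not None and r["federation_id"] == federation_id]
    if len(exact) == 1:
        return exact[0]
    # Fall back to the attributes asserted by the IdP.
    candidates = [r for r in BJA_RECORDS
                  if r["surname"] == surname and r["agency"] == agency]
    return candidates[0] if len(candidates) == 1 else None

print(match_record("GFIPM:IDP:JNET:USER:jdoe", "Doe", "PA-JNET"))
```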

As you may have noted, I keep calling the deployment of the BAE Infrastructure at GSA a "Shared Service Infrastructure". That is a deliberate choice of words and I will expand on it in the future, especially given that this is not our only pilot use case for the BAE deployment!


:- by Anil John

Access Control and Attribute Management WG Industry Day Invitation

The FICAM Access Control & Attribute Management Working Group (ACAGWG) is working to address the needs of the Federal Government for access control, lifecycle management of attributes, and the associated governance processes around entitlements and privileges. If you are interested in engaging with this cross-government (Federal, Defense, IC and more…) working group during our upcoming industry day, please read on...

Why are we holding this event?

We have little desire to re-invent the wheel and would like to leverage lessons learned and best practices from real world implementations. 
This event is designed to help us learn more about the current state-of-practice in the commercial sector around attribute providers and their business models, as well as identity and access governance approaches.

When and where is this event being held?

September 5, 2012 in the Washington, DC area. 

What are we looking for?

A demonstrated case study (references to operational systems are preferred) to include information such as:
  1. Attribute lifecycle management
  2. Provisioning/de-provisioning of attributes
  3. Processes for semantic and syntactic alignment of attributes
  4. Attestation of attributes
  5. Provenance of attributes
  6. Attribute metadata
  7. Data quality management and practices of attribute providers
  8. Attribute provider business models
  9. Defining, generating and sharing access policies
  10. Enterprise privilege and entitlement management practices
  11. Separation of duties
  12. Other topics related to Attribute Providers as well as Identity and Access Governance
Elements the case study should explore and include:
  1. The type of infrastructure in place
  2. Processes in place for managing attributes
  3. The process for deciding the appropriate attribute and access policies for the domain
  4. Determining Levels of confidence in attribute providers
  5. What factors go into making a decision to "trust" an attribute provider
  6. Design time vs. run-time decision factors
  7. Expanded uses beyond the original intent of the attribute and access policies

What are we NOT looking for?

  • Product demos
  • Marketing slide-ware
This is a group that has deep technical and policy expertise. We are fine with you taking some time at the end of the case study to map it into your product/service. The majority of the case study, however, should focus on the concept of operations, business models, processes and decisions that went into the case study. 

What is the first step?

Please submit a one page high-level abstract (PDF) with details of your case study to ICAM [at] gsa [dot] gov by 5:00 p.m. on July 31st, 2012.  A member of our planning committee will be in contact with those whose submissions most closely align with what we are seeking.


:- by Anil John

Level of Confidence of What, When, Where and Who?

Last week's blog post by Dr. Peter Alterman on "Why LOA for Attributes Don’t Really Exist" has generated a good bit of conversation on this topic within FICAM working groups, in the Twitter-verse (@Steve_Lockstep, @independentid, @TimW2JIG, @dak3...) and in many other places. I also wanted to call out the recent release of the Kantara Initiative's "Attribute Management Discussion Group - Final Report and Recommendations" (via @IDMachines) as being relevant to this conversation as well.

One challenge with this type of discussion is to make sure that at a specific point in the conversation, we are all discussing the same topic from the same perspective. So before attempting to go further, I wanted to put together a simple framework, and hopefully a common frame of reference, to hang this discussion on:

 

"What"
  • Separate out the discussion on Attribute Providers from the discussion on individual Attributes
  • Separate out the discussion on making a decision (to trust/rely-upon/use) based on inputs provided vs making a decision (to trust/rely-upon/use) based on a "score" that has been provided
"When"
(to trust/rely-upon/use)
  • "Design time" and "Run time"
"Where"
  • Where is the calculation done (local or remote)?
  • Where is the decision (to trust/rely-upon/use) done?
"Who"
  • Party relying on attributes to make a calculation, a decision and/or use in a transaction
  • Provider, aggregator and/or re-seller of attributes
  • Value added service that takes in attributes and other information to provide results/judgements/scores based on those inputs
 

Given the above, some common themes and points that surfaced across these conversations are:
  1. Don't blur the conversations on governance/policy and score/criteria, i.e. the conversation around "This is how you will do this within a community of interest" is distinct and separate from "The criteria for evaluating an Attribute/AP is x, y and z" 
  2. Decisions/Choices regarding Attributes and Attribute Providers, while related, need to be addressed  separately ["What"] 
  3. Decision to trust/rely-upon/use is always local ["Where"], whether it is for attributes or attribute providers
  4. The decision to trust/rely-upon/use an Attribute Provider is typically a design time decision ["When"]
    1. The criteria that feeds this decision (i.e. input to a confidence in AP calculation) is typically more business/process centric e.g. security practices, data quality practices, auditing etc.
    2. There is value in standardizing the above, but it is unknown at present if this standardization can extend beyond a community of interest 
  5. Given that the decision to trust/rely-upon/use an Attribute Provider is typically made out-of-band and at design-time, it is hard to envision a use case for a run-time evaluation based on a confidence score for making a judgement for doing business with an Attribute Provider ["When"]
  6. The decision to trust/rely-upon/use an Attribute is typically a local decision at the Relying Party ["Where"]
  7. The decision to trust/rely-upon/use an Attribute is typically a run-time decision ["When"], given that some of the potential properties associated with an attribute (e.g. unique, authoritative or self-reported, time since last verified, last time changed, last time accessed, last time consented or others) may change in real time
    1. There is value in standardizing these 'attributes of an attribute'
    2. It is currently unknown if these 'attributes of an attribute' can scale beyond a specific community of interest
  8. A Relying Party may choose to directly make the calculation about an Attribute (i.e. local confidence calculation based using the 'attributes of an attribute' as input) or depend on an externally provided confidence "score" ["What"]
    1. The "score" calculation may be outsourced to an external service/capability ["Where"]
    2. This choice of doing it yourself or outsourcing should be left up to the discretion of the RP based on their capabilities and risk profile ["Who"]
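
As a thought experiment on item 8, a local confidence calculation over the 'attributes of an attribute' might look like the sketch below. The metadata fields, weights, and freshness window are invented for illustration; nothing here is a standardized criteria set.

```python
from datetime import date

def confidence(meta, today):
    """Local, run-time confidence score computed by the RP from attribute metadata."""
    score = 0.0
    if meta["authoritative"]:
        score += 0.5
    # Freshness: full credit within 90 days of last verification, decaying after.
    days = (today - meta["last_verified"]).days
    score += 0.3 if days <= 90 else max(0.0, 0.3 - 0.001 * (days - 90))
    if meta["consented"]:
        score += 0.2
    return round(score, 3)

meta = {"authoritative": True, "last_verified": date(2012, 5, 1), "consented": True}
print(confidence(meta, date(2012, 7, 1)))
```

An RP could run this locally, or an external service could return the score; either way the choice of threshold, and whether to outsource the calculation at all, stays with the RP.
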
Given that we have to evaluate both Attribute Providers and Attributes, it is probably in our shared interest to come up with a common terminology for these evaluation criteria. A recommendation, taking into account many of the conversations in this space to date:
  • Attribute Provider Practice Statement (APPS) for Attribute Providers, Aggregators, Re-Sellers
  • Level of Confidence Criteria (LOCC) for Attributes

As always, this conversation is just starting... 
 
 

:- by Anil John

Why LOA for Attributes Don’t Really Exist

This is a guest post on Authoritative-ness and Attributes by Dr. Peter Alterman. Peter is the Senior Advisor to the NSTIC NPO at NIST, and a thought leader who has done pioneering and award-winning work in areas ranging from IT Security and PKI to Federated Identity. You may not always agree with Peter, but he brings a perspective that is always worth considering. [NOTE: FICAM ACAG WG has not come to any sort of consensus on this topic yet] - Anil John


 

As I have argued in public and private, I continue to believe that the concept of assigning a Level of Assurance to an attribute is bizarre: it makes real-time authorization decisions even more burdensome than they already are, and does nothing but confuse both Users and Relying Parties.

The Laws of Attribute Validity

The simple, basic logic is this: First, an attribute issuer is authoritative for the attributes it issues. If you ask an issuer if a particular user’s asserted attribute is valid, you’ll get a Yes or No answer. If you ask that issuer what the LOA of the user’s attribute is, it will be confused – after all, the issuer issued it. The answer is binary: 1 or 0, T or F, Y or N. Second, a Relying Party is authoritative for determining what attributes it wants/needs for authorization and more importantly, it is authoritative for deciding what attribute authorities to trust. Again the answer is binary: 1 or 0, T or F, Y or N. Any attribute issuer that is not authoritative for the attribute it issues should not be trusted and any RP that has no policy on which attribute providers OR RESELLERS to trust won’t survive in civil court.

Secondary Source Cavil

"But wait," the fans of attribute LOA say, "what if you ask a secondary source if that same user's attribute is valid?" This is asking an entity that did not issue the attribute to assert its validity. In this case the RP has to decide how much it trusts the secondary source, and then how much it trusts the secondary source to assert the true status of the attribute. Putting aside questions of why one would want to rely on secondary sources in the first place, implicit in this use case is the assumption that the RP has previously decided who to ask about the attribute. If the RP has decided to ask the secondary source, that is also a trust decision, which one would assume would have been based on an evaluative process of some sort. After all, why would an RP choose to trust a source just a little bit? It really doesn't make sense, and it complicates the trust calculation no end. Not to mention raising the eyebrows of both the CISO and a corporate/agency lawyer, both very bad things.

Thus, the RP decides to trust the assertion of the secondary source. The response back to the RP from the secondary source is binary, and the trust decision is binary. Some Federation Operators (or Trust Framework Providers, take your pick) may serve as repositories of trusted sources for attribute assertions as a member service; in that case, too, the RP would choose to trust the attribute sources of the FO/TFP explicitly. If a Federation Operator/TFP chooses not to trust certain secondary sources, it simply doesn't add them to its white list. Member RPs that choose to trust the secondary attribute sources would do so based upon local determinations, underscoring the role of prior policy implementation.

Either directly or indirectly, an RP or a TFP makes a binary trust decision about which attribute providers to trust, and so the example reduces to the original law.

Transient Validity, aka Dynamic Attributes

Another circumstance where attribute LOA might be considered is querying about an attribute which changes rapidly in real time. One must accept that an attribute is either valid or invalid at the time of query. If temporality is of concern, that is a whole second attribute and a trusted timestamp must be a necessary part of the attribute validation process. A query from an online business to an end user’s bank would want to know if the user had sufficient funds to cover the transaction at the time the transaction is confirmed. At the time of the query the answer is binary, yes or no. It would also need a trusted timestamp that itself could be validated as True or False. That is, two separate attributes are required, one for content and one for time, both of which must be true for the RP to trust and therefore complete the transaction. Even for ephemeral attributes the answer is directly relevant to the state of the attribute at the time of query and that answer is binary, Y or N, the only difference being that a second trusted attribute – the timestamp – is required. The business makes a binary decision to trust that the user has the funds to pay for the purchase at the time of purchase – and of query - and concludes the sale or rejects it. The case resolves back to binary again.
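
The two-attribute check described above reduces to a conjunction of binary answers. A minimal sketch, assuming an invented freshness window for the trusted timestamp:

```python
from datetime import datetime, timedelta

MAX_SKEW = timedelta(minutes=5)  # assumed freshness window for the trusted timestamp

def authorize(funds_ok, asserted_at, now):
    """Both attribute checks are binary; the authorization is their conjunction."""
    timestamp_ok = abs(now - asserted_at) <= MAX_SKEW
    return funds_ok and timestamp_ok

now = datetime(2012, 7, 1, 12, 0, 0)
print(authorize(True, now - timedelta(minutes=2), now))   # fresh assertion
print(authorize(True, now - timedelta(hours=1), now))     # stale timestamp
```

There is no score anywhere in this flow: each input resolves to true or false, and the transaction completes only if both do.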

Obscurity of Attribute Source

Admittedly, things can get complicated when the identity of the attribute issuer is obscure, such as US Citizenship for the native-born. However, once again the RP makes an up-front decision about which source or sources it’s going to ask about that attribute. It doesn’t matter what the source is; the point is that the RP will make a decision on which source it deems authoritative and it will trust that source’s assertion of that attribute. In the citizenship example, the RP chooses two sources: the State Department because if the user has been issued a US passport that’s a priori legal proof of citizenship, or some other governmental entity that keeps a record of the live birth in a US jurisdiction, which is another a priori legal proof of citizenship. However, if the application is a National Security RP for example, it might query a whole host of data sources to determine if the user holds a passport from some other nation state. In addition to the attribute sources query, which in this case might get quite complex (certainly enough for the user to disconnect and pick up the phone instead), the application will have to include scripts telling it where to look and what answers to look for. And at the end of the whole process, the application is going to make a binary decision about whether to trust that the user is a US citizen or not and all that intermediate drama again resolves down to the original case, that the RP makes an up-front determination what attribute source or sources to trust, though in this one the RP builds a complicated multi-authority search and weigh process as part of its authorization determination.

RPs That Calculate Trust Scores

Many commercial RPs, especially in the financial services industry, calculate scores to determine whether to trust authentication and/or authorization. In these situations the RP is making trust calculations, weighing and scoring. Yet it is the RP that is calculating on the inputs, not calculating the inputs. It uses the authorization data and the attribute data to make authentication and/or authorization decisions with calculation code that is directly relevant to its risk mitigation strategy. In fact, this begins to look a lot like the National Security version of the Obscurity condition.

In these vexed situations, what the RP is doing is not trusting all attribute providers and calculating a trust decision based upon a previously-determined algorithm in which all the responses from all the untrusted providers somehow are transformed into a trusted attribute. The algorithm seems to be based upon determining a trust decision by using multiple attribute sources to reinforce each other in some predetermined way, and this method reminds me of a calculus problem, that is, integrating towards zero (risk) and perhaps that’s what the algorithm even looks like.

Attribute Probability

Colleagues who have reviewed this [position paper] in draft have pointed out that data aggregators sometimes have fewer high quality (attribute) data about certain individuals, such as young people, and therefore some include a probability number along with the transmitted attribute data. While it may mean that the data carries a level of assurance assertion to the attribute authority, it’s not really a level of assurance assertion to the RP. The RP, again, has chosen to trust a data aggregator as an authoritative attribute source, presumably because it has reviewed the aggregator’s business processes, accepts its model for probability ranking and chooses to incorporate that probability into its own local scoring algorithm or authorization determination process. In other words, the aggregator is deemed authoritative and its probability scoring is considered authoritative as well. This is, yet again, a binary Y or N determination.

Why It Matters

There are compelling operational reasons why assigning assurance levels to attribute assertions, or even asserters, is a bad idea. It’s because, simply, anything that complicates the architecture of the global trust infrastructure is bad and especially bad if that complication is built on top of a failure to distinguish between a data input and local data manipulation. As the example above illuminates, the attribute and asserter(s) are both trusted by the RP application while the extent to which the trusted data is reliable is questionable and thus manipulated. Insisting on scoring the so-called trustworthiness of an attribute asserter is in essence assigning an attribute to an attribute, a trustworthiness attribute. The policies, practices, standards and other dangling elements necessary to deploy attributes with attributes, then interpret them and utilize them, even if such global standardization for all RP applications could be attained, constitutes an unsupportable waste of resources. Even worse, it threatens to sap the momentum necessary to deploy an attribute management infrastructure even as solutions are beginning to emerge from the conference rooms around the world.

QED, Sort of

The Two Laws of Attribute Validity notwithstanding, people can – and have – created Rube Goldberg-ian use cases that require attribute LOA and have even plopped them into a deployed solution (to increase billable hours, one suspects), but they're essentially useless. I hate to beat this dead horse, but each case I've listened to reduces to a binary decision. The bottom line is that the RP makes policy decisions up front about which attribute provider(s) to trust or not trust, and these individual decisions lead to binary assertions of attribute validity, either directly or indirectly through local processing.

:- by Peter Alterman, Ph.D.


It Depends a.k.a. Access Decisions are Contextual

This week, I had the pleasure of attending and presenting at the InCommon Confab. It was a great day and a half event organized by Jacob Farmer at Indiana University and many others from the InCommon Team.

InCommon's mission is to support a common trust framework for U.S. Education and Research, and their Identity Assurance Program offers its more than 200 Identity Providers the ability to certify their practices at the InCommon Bronze (LOA 1) and InCommon Silver (LOA 2) Levels. Given their scope and reach across the Research and Education Sector, as well as their maturity in the Identity Federation space, we are very fortunate that they have chosen to become a FICAM approved Trust Framework Provider whose Bronze (LOA 1) and Silver (LOA 2) certified IdP Credentials can be used to access Federal Government Web Sites.

Ian Glazer (Gartner), one of the other keynote speakers, has a good write-up on his blog about the event, so I won't repeat it here [Go, read, and come back. I'll wait]. The great thing about the conversation that took place is that we are finally getting past the authentication and LOA conversations to what really matters when it comes to getting things done: tackling the hard challenges around distributed/federated/cross-organizational authorization to enable collaboration and the sharing of information.

In my presentation, I had the following slide, which broke out attributes of a person into Identity, Authority, Contextual and Preference.

In my own mind, I had lumped together what I, and many others over the years, had taken to calling "Environmental" attributes into the Contextual bucket. But, as you can also see, I had subordinated that context element under the Person umbrella.

The conversation that we had over the last couple of days, in my own mind, called that bucketing into question.

A potential starting point, as articulated by Ian, is that Context is anything that is not Person or Resource related; as such, it promotes Context to be a first-class citizen alongside Person and Resource Attributes. This leads to:
The "External Attribute" component of Context maps pretty easily into what we have traditionally called "Environmental Attributes":

  • Operational Status e.g. Threat-Level-1, Declared-DSCA-Event
  • Inside the building on business/agency infrastructure
  • Coming from a specific IP block
  • Connecting using VPN
  • Host based scans report as healthy
  • etc.

But the "Shared Contextual Attributes" are something new to think about and explore, as they bring a relationship component into the mix that could potentially be very interesting and, if we can work through it to a shared understanding, address questions such as:

  • Is context where we can convey data handling expectations that come along with access to data? 
  • Obligations and responsibilities? 
  • Where semantics can be attached to and sent along with authority attributes?
  • ?
There was pretty general agreement that we really are not sure at this point, but that we do need to put some think-time on this. What Context should NOT become is the grab-all bucket into which everything that is not Subject or Resource gets stuffed, as that would quickly make it irrelevant and useless from an access control decision point of view.

Really looking forward to further conversations on this topic.


:- by Anil John

Shared Services and Government as Attribute Service Provider

The FICAM Roadmap and Implementation Guidance articulates the need to provide government-wide services for common ICAM requirements. In addition, an execution priority for FICAM is to demonstrate the value of policy driven access control in Government systems. One of the ways that we are moving forward in this area is by piloting the operational use of attribute services, backed by attribute providers, that can act as a single point of query for relying parties.

What is an attribute provider (AP)?

The National Strategy for Trusted Identities in Cyberspace (PDF) describes an AP as being "... responsible for the processes associated with establishing and maintaining identity attributes. Attribute maintenance includes validating, updating, and revoking the attribute claim. An attribute provider asserts trusted, validated attribute claims in response to attribute requests from relying parties."

Why is this important?

  • We are moving into an era where dynamic, contextual, policy driven mechanisms are needed to make real time access control decisions at the moment of need.
  • The policy driven nature of the decisions requires that the decision-making capability be externalized from systems/applications/services rather than embedded within them, and that policy be treated as a first class citizen.
  • The inputs to these decisions are information about the subject, information about the resource, and contextual information, often expressed as attributes.
  • These attributes can reside in multiple sources where the level of confidence a relying party can have in an attribute may vary and has many components (Working on this one).
  • The relevant attributes are retrieved (“pulled”) from the variety of sources at the moment when a subject needs to access a system and are not pre-provisioned into the system.
  • Standards! Standards! Standards! All of the moving parts here (finding/correlating attributes, movement of attributes across organizational boundaries, decision control mechanisms etc.) need to use standards-based interfaces and technologies.
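
The bullets above can be sketched as a toy policy decision point: the application externalizes the decision, and subject attributes are pulled from a source at the moment of need rather than pre-provisioned. The policy rules, attribute names, and source below are illustrative assumptions, not the BAE interfaces themselves.

```python
# Hypothetical attribute source, acting as the single point of query.
ATTRIBUTE_SOURCE = {"officer1": {"role": "LEO", "cfr28_trained": True}}

# Policy, externalized from the application: resource -> required attributes.
POLICY = {"riss-net:case-data": {"role": "LEO", "cfr28_trained": True}}

def decide(subject_id, resource):
    """Policy decision point: pull attributes at the moment of need, then evaluate."""
    required = POLICY.get(resource)
    if required is None:
        return "Deny"  # deny by default for unknown resources
    attrs = ATTRIBUTE_SOURCE.get(subject_id, {})  # "pulled", not pre-provisioned
    ok = all(attrs.get(name) == value for name, value in required.items())
    return "Permit" if ok else "Deny"

print(decide("officer1", "riss-net:case-data"))
```

Because the policy is data rather than code embedded in the application, it can be changed, audited, and governed independently of the systems that enforce it.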


How will this capability be implemented?

As a first step, we are partnering with PM-ISE on an operational pilot (real missions, real data, real systems, real users) of the FICAM Backend Attribute Exchange (BAE) capability.

The BAE capability provides a "... standards-based architecture and interface specification to securely obtain attributes of subjects from authoritative sources in order to make access control decisions."

If interested in its technical details, do check out the final version of the BAE v2 technical documentation set:


As someone who has been involved with the BAE since the first prototype, it is interesting for me to look back on the timeline for how we got here [Full Disclosure: Some of the links below point to blog entries from before I entered Federal Government Service; At that time, I was a Contractor supporting the DHS Science & Technology Directorate as the Technical Lead for their Identity Management Testbed]


:- by Anil John

To LOA or not to LOA (for Attributes)... NOT!

At both the recent ISOC sponsored Attribute Workshop as well as the Attribute Management Panel at the NSTIC/IDTrust Workshop today, multiple people used the term "LOA of Attributes".  I protest (protested?) this potentially confusing use of the term in this context.

The term Level of Assurance (LOA), as currently used, is all about assurance of identity. In particular, within the context of OMB-04-04 (PDF) and NIST SP 800-63 (PDF), it is defined as:
  1. the degree of confidence in the vetting process used to establish the identity of the individual to whom the credential was issued (i.e. the identity proofing component) and 
  2. the degree of confidence that the individual who uses the credential is the individual to whom the credential was issued (i.e. the "technical strength" of the credential itself)
Applying the LOA terminology to Attributes brings confusion, and given the multiple folks who brought up this point and the resonance the comments got, I hope this usage will be discouraged by the community going forward.

I hope that we can come up with and agree on another term to convey our intent here, which is to denote the Measure of Confidence or Level of Confidence you can have in an Attribute. The components that make up that level/measure are TBD.

:- by Anil John

Access Control and Attribute Management in FICAM

As mentioned earlier, one of the priorities for FICAM is to invest in and demonstrate the value of policy driven access control within Government systems. To that end, one of the Working Groups that has been stood up as part of our annual program of work review is the "Access Control and Attribute Management Working Group (ACAGWG)" which I am Co-Chairing.

The ACAG Working Group's current functions are to:

  • Focus on Person Attributes for Access Control
    • Establish initial set of Enterprise Access Control Attributes
    • Develop processes for modification of the Enterprise Access Control Attribute set
  • Leverage and, when possible, incorporate best practices and lessons learned
    • Outreach and collaboration to gather attribute use best practices and lessons learned
    • Facilitate exchange and trusted use of attributes across the Federal Government
  • Develop and implement attribute governance processes across the Federal Government
    • “Authoritative-ness”

We had an opportunity to engage with the wider community that is doing attribute work at the "Moving Forward with an Internet Attribute Infrastructure" workshop yesterday, which was hosted by the Internet Society. I wanted to take a quick moment to thank Heather Flanagan and Karen O'Donoghue for pulling this together, and to thank as well the great set of folks who participated (InCommon, OASIS, OIX and more...) and provided their input and perspectives on the work they are doing in this domain.

:- by Anil John