
Access Control and Attribute Management WG Industry Day Invitation

The FICAM Access Control & Attribute Management Working Group (ACAGWG) is working to address the needs of the Federal Government for access control, lifecycle management of attributes, and the associated governance processes around entitlements and privileges. If you are interested in engaging with this cross-government (Federal, Defense, IC and more…) working group during our upcoming industry day, please read on...

Why are we holding this event?

We have little desire to re-invent the wheel and would like to leverage lessons learned and best practices from real world implementations. 
This event is designed to help us learn more about the current state-of-practice in the commercial sector around attribute providers and their business models, as well as identity and access governance approaches.

When and where is this event being held?

September 5, 2012 in the Washington, DC area. 

What are we looking for?

A demonstrated case study (references to operational systems are preferred) to include information such as:
  1. Attribute lifecycle management
  2. Provisioning/de-provisioning of attributes
  3. Processes for semantic and syntactic alignment of attributes
  4. Attestation of attributes
  5. Provenance of attributes
  6. Attribute metadata
  7. Data quality management and practices of attribute providers
  8. Attribute provider business models
  9. Defining, generating and sharing access policies
  10. Enterprise privilege and entitlement management practices
  11. Separation of duties
  12. Other topics related to Attribute Providers as well as Identity and Access Governance
Elements the case study should explore and include:
  1. The type of infrastructure in place
  2. Processes in place for managing attributes
  3. The process for deciding the appropriate attribute and access policies for the domain
  4. Determining Levels of confidence in attribute providers
  5. What factors go into making a decision to "trust" an attribute provider
  6. Design time vs. run-time decision factors
  7. Expanded uses beyond the original intent of the attribute and access policies

What are we NOT looking for?

  • Product demos
  • Marketing slide-ware
This is a group that has deep technical and policy expertise. We are fine with you taking some time at the end of the case study to map it into your product/service. The majority of the case study, however, should focus on the concept of operations, business models, processes and decisions that went into the case study. 

What is the first step?

Please submit a one page high-level abstract (PDF) with details of your case study to ICAM [at] gsa [dot] gov by 5:00 p.m. on July 31st, 2012.  A member of our planning committee will be in contact with those whose submissions most closely align with what we are seeking.


:- by Anil John

Level of Confidence of What, When, Where and Who?

Last week's blog post by Dr. Peter Alterman on "Why LOA for Attributes Don’t Really Exist" has generated a good bit of conversation on this topic within FICAM working groups, in the Twitter-verse (@Steve_Lockstep, @independentid, @TimW2JIG, @dak3...) and in many other places. I also wanted to call out the recent release of the Kantara Initiative's "Attribute Management Discussion Group - Final Report and Recommendations" (via @IDMachines) as being relevant to this conversation as well.

One challenge with this type of discussion is to make sure that at a specific point in the conversation, we are all discussing the same topic from the same perspective. So before attempting to go further, I wanted to put together a simple framework, and hopefully a common frame of reference, to hang this discussion on:

 

"What"
  • Separate out the discussion on Attribute Providers from the discussion on individual Attributes
  • Separate out the discussion on making a decision (to trust/rely-upon/use) based on inputs provided vs making a decision (to trust/rely-upon/use) based on a "score" that has been provided
"When"
(to trust/rely-upon/use)
  • "Design time" and "Run time"
"Where"
  • Where is the calculation done (local or remote)?
  • Where is the decision (to trust/rely-upon/use) done?
"Who"
  • Party relying on attributes to make a calculation, a decision and/or use in a transaction
  • Provider, aggregator and/or re-seller of attributes
  • Value added service that takes in attributes and other information to provide results/judgements/scores based on those inputs
 

Given the above, some common themes and points that surfaced across these conversations are:
  1. Don't blur the conversations on governance/policy and score/criteria, i.e., the conversation around "This is how you will do this within a community of interest" is distinct and separate from "The criteria for evaluating an Attribute/AP is x, y and z" 
  2. Decisions/choices regarding Attributes and Attribute Providers, while related, need to be addressed separately ["What"] 
  3. Decision to trust/rely-upon/use is always local ["Where"], whether it is for attributes or attribute providers
  4. The decision to trust/rely-upon/use an Attribute Provider is typically a design time decision ["When"]
    1. The criteria that feeds this decision (i.e. input to a confidence in AP calculation) is typically more business/process centric e.g. security practices, data quality practices, auditing etc.
    2. There is value in standardizing the above, but it is unknown at present if this standardization can extend beyond a community of interest 
  5. Given that the decision to trust/rely-upon/use an Attribute Provider is typically made out-of-band and at design-time, it is hard to envision a use case for a run-time evaluation based on a confidence score for making a judgement for doing business with an Attribute Provider ["When"]
  6. The decision to trust/rely-upon/use an Attribute is typically a local decision at the Relying Party ["Where"]
  7. The decision to trust/rely-upon/use an Attribute is typically a run-time decision ["When"], given that some of the potential properties associated with an attribute (e.g. unique, authoritative or self-reported, time since last verified, last time changed, last time accessed, last time consented or others) may change in real time
    1. There is value in standardizing these 'attributes of an attribute'
    2. It is currently unknown if these 'attributes of an attribute' can scale beyond a specific community of interest
  8. A Relying Party may choose to directly make the calculation about an Attribute (i.e. local confidence calculation based using the 'attributes of an attribute' as input) or depend on an externally provided confidence "score" ["What"]
    1. The "score" calculation may be outsourced to an external service/capability ["Where"]
    2. This choice of doing it yourself or outsourcing should be left up to the discretion of the RP based on their capabilities and risk profile ["Who"]
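Point 8's choice between a local confidence calculation and an externally provided score can be sketched as follows. This is a minimal illustration only: the metadata field names, the decay scheme, and the threshold are all hypothetical assumptions, not an agreed FICAM criterion.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical "attributes of an attribute" metadata; the field names
# here are illustrative, not drawn from any standard.
attribute = {
    "name": "clearance",
    "value": "secret",
    "self_reported": False,
    "last_verified": datetime.now(timezone.utc) - timedelta(days=30),
}

def local_confidence(attr, max_age_days=90):
    """Toy confidence score in [0, 1], computed locally by the RP."""
    score = 1.0
    if attr["self_reported"]:
        score *= 0.5  # self-asserted data counts for less
    age = datetime.now(timezone.utc) - attr["last_verified"]
    staleness = min(age / timedelta(days=max_age_days), 1.0)
    score *= 1.0 - 0.5 * staleness  # decay toward 0.5 as data goes stale
    return score

def rp_decision(attr, threshold=0.7):
    """Whatever the score, the run-time decision itself stays binary."""
    return local_confidence(attr) >= threshold
```

An RP that instead consumes an external "score" would simply substitute the provider's number for `local_confidence` and keep the same binary threshold check.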
Given that we have to evaluate both Attribute Providers and Attributes, it is probably in our shared interest to come up with a common terminology for these evaluation criteria. A recommendation, taking into account many of the conversations in this space to date:
  • Attribute Provider Practice Statement (APPS) for Attribute Providers, Aggregators, Re-Sellers
  • Level of Confidence Criteria (LOCC) for Attributes

As always, this conversation is just starting... 
 
 

:- by Anil John

Why LOA for Attributes Don’t Really Exist

This is a guest post on Authoritative-ness and Attributes by Dr. Peter Alterman. Peter is the Senior Advisor to the NSTIC NPO at NIST, and a thought leader who has done pioneering and award-winning work in areas ranging from IT Security and PKI to Federated Identity. You may not always agree with Peter, but he brings a perspective that is always worth considering. [NOTE: FICAM ACAG WG has not come to any sort of consensus on this topic yet] - Anil John


 

As I have argued in public and private, I continue to believe that the concept of assigning a Level of Assurance to an attribute is bizarre: it makes real-time authorization decisions even more burdensome than they already are, and it does nothing but confuse both Users and Relying Parties.

The Laws of Attribute Validity

The simple, basic logic is this: First, an attribute issuer is authoritative for the attributes it issues. If you ask an issuer if a particular user’s asserted attribute is valid, you’ll get a Yes or No answer. If you ask that issuer what the LOA of the user’s attribute is, it will be confused – after all, the issuer issued it. The answer is binary: 1 or 0, T or F, Y or N. Second, a Relying Party is authoritative for determining what attributes it wants/needs for authorization and more importantly, it is authoritative for deciding what attribute authorities to trust. Again the answer is binary: 1 or 0, T or F, Y or N. Any attribute issuer that is not authoritative for the attribute it issues should not be trusted and any RP that has no policy on which attribute providers OR RESELLERS to trust won’t survive in civil court.

Secondary Source Cavil

“But wait,” the fans of attribute LOA say, “what if you ask a secondary source if that same user’s attribute is valid?” This is asking an entity that did not issue the attribute to assert its validity. In this case the RP has to decide how much it trusts the secondary source and then how much it trusts the secondary source to assert the true status of the attribute. Putting aside questions of why one would want to rely on secondary sources in the first place, implicit in this use case is the assumption that the RP has previously decided whom to ask about the attribute. If the RP has decided to ask the secondary source, that is also a trust decision, one that presumably would have been based on an evaluative process of some sort. After all, why would an RP choose to trust a source just a little bit? It really doesn’t make sense, and it complicates the trust calculation no end. Not to mention raising the eyebrows of both the CISO and a corporate/agency lawyer, both very bad things.

Thus, the RP decides to trust the assertion of the secondary source. The response back to the RP from the secondary source is binary and the trust decision is binary. Some Federation Operators (or Trust Framework Providers, take your pick) may serve as repositories of trusted sources for attribute assertions as a member service; in that case, too, the RP would choose to trust the attribute sources of the FO/TFP explicitly. If a Federation Operator/TFP chooses not to trust certain secondary sources, it simply doesn’t add them to its white list. Member RPs that choose to trust the secondary attribute sources would do so based upon local determinations, underscoring the role of prior policy implementation.

Either directly or indirectly, an RP or a TFP makes a binary trust decision about which attribute providers to trust, and so the example reduces to the original law.

Transient Validity, aka Dynamic Attributes

Another circumstance where attribute LOA might be considered is querying about an attribute which changes rapidly in real time. One must accept that an attribute is either valid or invalid at the time of query. If temporality is of concern, that is a whole second attribute and a trusted timestamp must be a necessary part of the attribute validation process. A query from an online business to an end user’s bank would want to know if the user had sufficient funds to cover the transaction at the time the transaction is confirmed. At the time of the query the answer is binary, yes or no. It would also need a trusted timestamp that itself could be validated as True or False. That is, two separate attributes are required, one for content and one for time, both of which must be true for the RP to trust and therefore complete the transaction. Even for ephemeral attributes the answer is directly relevant to the state of the attribute at the time of query and that answer is binary, Y or N, the only difference being that a second trusted attribute – the timestamp – is required. The business makes a binary decision to trust that the user has the funds to pay for the purchase at the time of purchase – and of query - and concludes the sale or rejects it. The case resolves back to binary again.
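The two-attribute shape of that example, one attribute for content and one for time, each validated as True or False, can be sketched as follows. Function names and the freshness window are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

def sufficient_funds(balance_cents, amount_cents):
    # The content attribute: does the user have the funds right now? (Y/N)
    return balance_cents >= amount_cents

def timestamp_fresh(ts, now, max_skew=timedelta(seconds=60)):
    # The second attribute: a trusted timestamp validated as True/False.
    return abs(now - ts) <= max_skew

def complete_sale(balance_cents, amount_cents, ts, now):
    # Both binary answers must be True for the RP to conclude the sale;
    # there is no partial score anywhere, only a conjunction of Y/N checks.
    return sufficient_funds(balance_cents, amount_cents) and timestamp_fresh(ts, now)
```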

Obscurity of Attribute Source

Admittedly, things can get complicated when the identity of the attribute issuer is obscure, such as US Citizenship for the native-born. However, once again the RP makes an up-front decision about which source or sources it’s going to ask about that attribute. It doesn’t matter what the source is; the point is that the RP will make a decision on which source it deems authoritative and it will trust that source’s assertion of that attribute. In the citizenship example, the RP chooses two sources: the State Department because if the user has been issued a US passport that’s a priori legal proof of citizenship, or some other governmental entity that keeps a record of the live birth in a US jurisdiction, which is another a priori legal proof of citizenship. However, if the application is a National Security RP for example, it might query a whole host of data sources to determine if the user holds a passport from some other nation state. In addition to the attribute sources query, which in this case might get quite complex (certainly enough for the user to disconnect and pick up the phone instead), the application will have to include scripts telling it where to look and what answers to look for. And at the end of the whole process, the application is going to make a binary decision about whether to trust that the user is a US citizen or not and all that intermediate drama again resolves down to the original case, that the RP makes an up-front determination what attribute source or sources to trust, though in this one the RP builds a complicated multi-authority search and weigh process as part of its authorization determination.

RPs That Calculate Trust Scores

Many commercial RPs, especially in the financial services industry, calculate scores to determine whether to trust authentication and/or authorization. In these situations the RP is making trust calculations, weighing and scoring. Yet it is the RP that is calculating on the inputs, not calculating the inputs. It uses the authorization data and the attribute data to make authentication and/or authorization decisions with calculation code that is directly relevant to its risk mitigation strategy. In fact, this begins to look a lot like the National Security version of the Obscurity condition.

In these vexed situations, what the RP is doing is not trusting all attribute providers and calculating a trust decision based upon a previously-determined algorithm in which all the responses from all the untrusted providers somehow are transformed into a trusted attribute. The algorithm seems to be based upon determining a trust decision by using multiple attribute sources to reinforce each other in some predetermined way, and this method reminds me of a calculus problem, that is, integrating towards zero (risk) and perhaps that’s what the algorithm even looks like.
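One way to picture that kind of reinforcement algorithm is a weighted vote over binary answers from individually untrusted sources. The weighting scheme below is an invented illustration, not a description of any real RP's algorithm:

```python
def corroborated_decision(responses, weights, threshold):
    """Combine Y/N answers from several sources into one binary decision.

    responses: dict mapping source name -> bool answer
    weights:   dict mapping source name -> float weight (RP's own choice)
    The output is still binary: the intermediate score exists only
    inside the RP's local calculation.
    """
    total = sum(weights.values())
    score = sum(weights[s] for s, ok in responses.items() if ok)
    return (score / total) >= threshold
```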

Attribute Probability

Colleagues who have reviewed this [position paper] in draft have pointed out that data aggregators sometimes have less high-quality (attribute) data about certain individuals, such as young people, and therefore some include a probability number along with the transmitted attribute data. While this may mean that the data carries a level of assurance assertion from the attribute authority, it’s not really a level of assurance assertion to the RP. The RP, again, has chosen to trust a data aggregator as an authoritative attribute source, presumably because it has reviewed the aggregator’s business processes, accepts its model for probability ranking and chooses to incorporate that probability into its own local scoring algorithm or authorization determination process. In other words, the aggregator is deemed authoritative and its probability scoring is considered authoritative as well. This is, yet again, a binary Y or N determination.

Why It Matters

There are compelling operational reasons why assigning assurance levels to attribute assertions, or even asserters, is a bad idea. It’s because, simply, anything that complicates the architecture of the global trust infrastructure is bad and especially bad if that complication is built on top of a failure to distinguish between a data input and local data manipulation. As the example above illuminates, the attribute and asserter(s) are both trusted by the RP application while the extent to which the trusted data is reliable is questionable and thus manipulated. Insisting on scoring the so-called trustworthiness of an attribute asserter is in essence assigning an attribute to an attribute, a trustworthiness attribute. The policies, practices, standards and other dangling elements necessary to deploy attributes with attributes, then interpret them and utilize them, even if such global standardization for all RP applications could be attained, constitutes an unsupportable waste of resources. Even worse, it threatens to sap the momentum necessary to deploy an attribute management infrastructure even as solutions are beginning to emerge from the conference rooms around the world.

QED, Sort of

The Two Laws of Attribute Validity notwithstanding, people can – and have – created Rube Goldberg-ian use cases that require attribute LOA and have even plopped them into a deployed solution (to increase billable hours, one suspects), but they’re essentially useless. I hate to beat this dead horse, but each case I’ve listened to reduces to a binary decision. The bottom line is that the RP makes policy decisions up front about which attribute provider(s) to trust or not trust, and these individual decisions lead to binary assertions of attribute validity either directly or indirectly through local processing.

:- by Peter Alterman, Ph.D.


Shared Services and Government as Attribute Service Provider

FICAM Roadmap and Implementation Guide articulates the need to provide government-wide services for common ICAM requirements. In addition, an execution priority for FICAM is to demonstrate the value of policy driven access control in Government systems.  One of the ways that we are moving forward in this area is by piloting the operational use of attribute services, backed by attribute providers, that can act as a single point of query for relying parties.

What is an attribute provider (AP)?

The National Strategy for Trusted Identities in Cyberspace (PDF) describes an AP as being "... responsible for the processes associated with establishing and maintaining identity attributes. Attribute maintenance includes validating, updating, and revoking the attribute claim. An attribute provider asserts trusted, validated attribute claims in response to attribute requests from relying parties."

Why is this important?

  • We are moving into an era where dynamic, contextual, policy driven mechanisms are needed to make real time access control decisions at the moment of need.
  • The policy driven nature of these decisions requires that the decision making capability be externalized from systems/applications/services rather than embedded within them, and that policy be treated as a first class citizen.
  • The inputs to these decisions are information about the subject, information about the resource, and contextual information, all often expressed as attributes.
  • These attributes can reside in multiple sources, where the level of confidence a relying party can have in an attribute may vary and has many components (Working on this one).
  • The relevant attributes are retrieved (“pulled”) from the variety of sources at the moment when a subject needs to access a system and are not pre-provisioned into the system.
  • Standards! Standards! Standards! All of the moving parts here (finding/correlating attributes, movement of attributes across organizational boundaries, decision control mechanisms etc.) need to use standards based interfaces and technologies.
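The bullets above (externalized decisions, policy as a first-class citizen, attributes pulled at the moment of need) can be sketched as follows. The policy format, the attribute-service interface, and all names here are hypothetical stand-ins; a real deployment would use standards such as XACML for policy and the BAE interfaces for attribute retrieval:

```python
# Policy treated as data, external to the application that enforces it.
POLICY = {
    "resource": "case-file",
    "require": {"citizenship": "US", "role": "investigator"},
}

def fetch_attributes(subject_id, names, attribute_service):
    # Attributes are pulled from the attribute service at decision time,
    # not pre-provisioned into the application.
    return {n: attribute_service(subject_id, n) for n in names}

def decide(subject_id, policy, attribute_service):
    # Externalized decision point: the application only asks yes/no.
    required = policy["require"]
    attrs = fetch_attributes(subject_id, required, attribute_service)
    return all(attrs.get(n) == v for n, v in required.items())

# Stub standing in for a real attribute provider behind a single
# point of query.
def stub_service(subject_id, name):
    store = {("alice", "citizenship"): "US", ("alice", "role"): "investigator"}
    return store.get((subject_id, name))
```

Because the policy is data rather than code, changing who may access the resource means editing `POLICY`, not redeploying the application.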


How will this capability be implemented?

As a first step, we are partnering with PM-ISE on an operational pilot (real missions, real data, real systems, real users) of the FICAM Backend Attribute Exchange (BAE) capability.

The BAE capability provides a "... standards-based architecture and interface specification to securely obtain attributes of subjects from authoritative sources in order to make access control decisions."

If interested in its technical details, do check out the final version of the BAE v2 technical documentation set:


As someone who has been involved with the BAE since the first prototype, it is interesting for me to look back on the timeline for how we got here [Full Disclosure: Some of the links below point to blog entries from before I entered Federal Government Service; At that time, I was a Contractor supporting the DHS Science & Technology Directorate as the Technical Lead for their Identity Management Testbed]


:- by Anil John

Access Control and Attribute Management in FICAM

As mentioned earlier, one of the priorities for FICAM is to invest in and demonstrate the value of policy driven access control within Government systems. To that end, one of the Working Groups that has been stood up as part of our annual program of work review is the "Access Control and Attribute Management Working Group (ACAGWG)" which I am Co-Chairing.

ACAG Working Group's current functions are to:

  • Focus on Person Attributes for Access Control
    • Establish initial set of Enterprise Access Control Attributes
    • Develop processes for modification of the Enterprise Access Control Attribute set
  • Leverage and, when possible, incorporate best practices and lessons learned
    • Outreach and collaboration to gather attribute use best practices and lessons learned
    • Facilitate exchange and trusted use of attributes across the Federal Government
  • Develop and implement attribute governance processes across the Federal Government
    • “Authoritative-ness”

We had an opportunity to engage with the wider community doing attribute work at a "Moving Forward with an Internet Attribute Infrastructure" workshop yesterday, hosted by the Internet Society. I wanted to take a quick moment to thank Heather Flanagan and Karen O'Donoghue for pulling this together, and to thank as well the great set of folks who participated (InCommon, OASIS, OIX and more...) and provided their input and perspectives on the work they are doing in this domain.

:- by Anil John