Author: iamdavid

  • Troubleshooting Okta Advanced Server Access (ASA)

    This post looks at the tools to use when troubleshooting issues with Okta Advanced Server Access (ASA). It’s not an “if you see this error, go do this” article – Google is great for that! Instead, it looks at where to find diagnostic information to help troubleshoot issues.

    Revisiting the Okta Components and Flows

    A typical ASA flow will involve the Okta Identity Cloud (for authentication), the ASA Cloud Service (for authorization and PKI), the ASA Client, an ASA Server Agent and maybe one or more Bastions/Gateways.

    This article assumes you’re familiar with the components and their use. If not, have a look at the PAM (Incl. ASA) page.

    There are a few standard flows for any SSH or RDP session in ASA (not including the common authN/authZ with Okta and ASA):

    • Client -> Target
    • Client -> Bastion -> Target
    • Client -> Gateway -> Target
    • Client -> (combinations of bastion/gateway) -> Target
    This article does not specifically address the new AD-Joined feature that is in limited Early Access, but the Client -> Gateway -> Target flow will cover some of the components.

    It is important to understand what flow you’re looking at. There are two places to determine the flow:

    • The ASA Audit report (Logging -> Audits in the ASA admin console) will tell you whether a connection has gone directly to a target or whether it’s been routed via one or more bastions.
    • The ASA Project configuration will tell you what Gateway selectors are used and the Gateways menu item will show the Hostname of the related Gateway.

    The following screenshots show finding the gateway used by a project.

    Understanding the flows will tell you where you need to go (which components) to determine where something is breaking down. The following sections will look at the individual components.


    Logs in Okta and the ASA Cloud Service

    Given that the Okta Identity Cloud and ASA Cloud Service are SaaS services, we can only access the logging via the various admin consoles (or APIs). Okta will give you events related to user authentication. An example is shown below, showing sign-on to Okta, MFA prompts and SSO to ASA.

    As with all Okta System Log events, you can expand each event and drill down into the detail – there’s a wealth of information that may help troubleshooting (like timestamps and user IDs).

    The ASA audit trail in the ASA admin console will show ASA-specific events. In the example below you can see establishing a client session, issuing of certificates (for the two hops, client -> bastion, bastion -> target), and the hops (bastion and target).

    Again, these event logs indicate IPs/hostnames (ASA hostnames), timestamps and user IDs.


    Troubleshooting Issues with the Client

    The client (sft) will be running on users’ workstations, connecting to the ASA cloud service, and calling the local SSH/RDP clients.

    On a Mac or Linux system you should see logs in <user>/Library/Logs/ScaleFT/sft or equivalent. On a Windows system you should see the logs in <user>\AppData\Local\ScaleFT\logs.
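    A small helper makes the per-platform locations concrete (a sketch only: the Mac and Windows paths are the defaults noted above, and the Linux location is left open since it is only “equivalent”, varying by distribution):

```python
from pathlib import PurePosixPath, PureWindowsPath

def sft_client_log_dir(system, home):
    """Default sft client log directory for a platform (sketch).

    The Mac and Windows paths are the defaults described above; the
    Linux location varies by distribution, so None is returned there.
    """
    if system == "Darwin":
        return PurePosixPath(home) / "Library" / "Logs" / "ScaleFT" / "sft"
    if system == "Windows":
        return PureWindowsPath(home) / "AppData" / "Local" / "ScaleFT" / "logs"
    return None  # Linux: check your distribution's equivalent per-user location

print(sft_client_log_dir("Darwin", "/Users/alice"))
```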

    You can also run the sft client in debug mode, as follows.

    Windows Workstations

    # In PowerShell
    PS C:\Users\<user>> $env:SFT_DEBUG="1"
    PS C:\Users\<user>> sft rdp <server_name>

    # Windows CMD
    C:\Users\<user>> set SFT_DEBUG=1
    C:\Users\<user>> sft rdp <server_name>

    Mac/Linux Workstations

    ~ % SFT_DEBUG=1 sft rdp <server_name>

    With SSH commands you can also pass the verbosity flag (-v for debug level 1, -vv for level 2 and -vvv for level 3). For example:

    <user>@<machine> sft % ssh -v ubuntu-target
    OpenSSH_8.6p1, LibreSSL 3.3.5
    debug1: Reading configuration data /Users/<user>/.ssh/config
    debug1: Executing command: '/usr/local/bin/sft resolve -q  ubuntu-target'
    debug1: /Users/<user>/.ssh/config line 5: Applying options for *
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: /etc/ssh/ssh_config line 21: include /etc/ssh/ssh_config.d/* matched no files
    debug1: /etc/ssh/ssh_config line 54: Applying options for *
    debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling
    debug1: Executing proxy command: exec "/usr/local/bin/sft" proxycommand  ubuntu-target
    debug1: identity file /Users/<user>/.ssh/id_rsa type 0
    ...
    debug1: Local version string SSH-2.0-OpenSSH_8.6
    debug1: Remote protocol version 2.0, remote software version SFT-PROXY2
    debug1: compat_banner: no match: SFT-PROXY2
    debug1: Authenticating to ubuntu-target:22 as '<user>'
    debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory
    debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: algorithm: curve25519-sha256@libssh.org
    debug1: kex: host key algorithm: rsa-sha2-512
    debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
    debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
    debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
    debug1: SSH2_MSG_KEX_ECDH_REPLY received
    debug1: Server host key: ssh-rsa SHA256:<blah>
    debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory
    debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory
    debug1: Host 'ubuntu-target' is known and matches the RSA host key.
    debug1: Found key in /Users/<user>/Library/Application Support/ScaleFT/proxycommand_known_hosts:25
    debug1: rekey out after 134217728 blocks
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: rekey in after 134217728 blocks
    debug1: Will attempt key: /Users/<user>/.ssh/id_rsa RSA ...
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentication succeeded (none).
    Authenticated to ubuntu-target (via proxy).
    debug1: channel 0: new [client-session]
    debug1: Entering interactive session.
    debug1: pledge: proc
    debug1: Sending environment.
    debug1: channel 0: setting env LANG = "en_AU.UTF-8"
    debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
    Welcome to Ubuntu 16.04.7 LTS (GNU/Linux 4.4.0-1128-aws x86_64)
    

    The above shows how the ssh alias actually invokes sft, with sft running through its hostname resolution and proxy command, and ssh authenticating with a certificate.


    Troubleshooting Issues with the Server Agent

    The ASA Server Agent shows up on systems as sftd (the ScaleFT daemon).

    Windows Servers

    On a Windows server the log files can be found in: C:\Windows\System32\config\systemprofile\AppData\Local\scaleft\Logs\sftd.

    It is also worth reading https://help.okta.com/asa/en-us/Content/Topics/Adv_Server_Access/docs/windows.htm to understand the SSH tunnelling mechanism used on Windows.

    Linux Servers

    Logging with the ASA Linux Server Agent can differ depending on the Linux platform. From a colleague: “When the ASA agent is installed on a Linux server, it identifies if the server is running systemd (RHEL7+, etc.), and specifically the journald component of systemd. If it finds journald, the ASA agent will use systemd-journald.service for logging. If systemd is NOT present, ASA will fall back to whichever syslog server is available, i.e. syslog-ng or rsyslog”. For most Linux servers you can use the journalctl -u sftd command to see the logs. Otherwise, look in the /var/log/sftd/ folder.
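    The detection described above can be approximated with the conventional systemd-presence test (checking for /run/systemd/system, which only exists under a running systemd). This is a sketch of the decision, not the agent’s actual code:

```python
from pathlib import Path

def uses_journald():
    """True if systemd (and hence journald) is running on this host.

    /run/systemd/system exists only when systemd is the running init;
    this is the conventional check and may differ from sftd's own logic.
    """
    return Path("/run/systemd/system").is_dir()

def sftd_log_hint():
    """Where to look for ASA Server Agent logs on this host."""
    return "journalctl -u sftd" if uses_journald() else "/var/log/sftd/"

print(sftd_log_hint())
```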

    You may also find useful information in system/security logs. For example, the following is from the /var/log/auth.log on an Ubuntu system and shows user activity via ssh.

    Apr  1 01:11:37 localhost sshd[2153]: Accepted publickey for kent_brockman from 172.31.34.158 port 47648 ssh2: RSA-CERT ID rZUPREEytTqER4VPJOvzh5ewB0g= (serial 0) CA RSA SHA256:WzysIL+zhessHyJ8ZbeimAjL1NHE4bnSweblUU8+k6Q
    Apr  1 01:11:37 localhost sshd[2153]: pam_unix(sshd:session): session opened for user kent_brockman by (uid=0)
    Apr  1 01:11:37 localhost systemd: pam_unix(systemd-user:session): session opened for user kent_brockman by (uid=0)
    Apr  1 01:11:37 localhost systemd-logind[1175]: New session 1 of user kent_brockman.
    Apr  1 01:12:08 localhost sudo: kent_brockman : TTY=pts/0 ; PWD=/usr/common-scripts ; USER=root ; COMMAND=/usr/sbin/useradd jsmiff
    Apr  1 01:12:08 localhost sudo: pam_unix(sudo:session): session opened for user root by kent_brockman(uid=0)
    Apr  1 01:12:15 localhost sudo: kent_brockman : TTY=pts/0 ; PWD=/usr/common-scripts ; USER=root ; COMMAND=/usr/sbin/deluser jsmiff
    Apr  1 01:12:15 localhost sudo: pam_unix(sudo:session): session opened for user root by kent_brockman(uid=0)
    Apr  1 01:12:19 localhost sshd[2153]: pam_unix(sshd:session): session closed for user kent_brockman

    If you plumb your system logs to a SIEM, it may be easier to search through events in the SIEM.
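    If you need to pull user activity out of these logs programmatically (say, before they reach a SIEM), a few lines of scripting is enough. The sketch below parses the “Accepted publickey” sshd lines shown above; adjust the pattern to your distribution’s syslog format:

```python
import re

# Matches sshd "Accepted publickey" lines like those in the auth.log above
LOGIN = re.compile(
    r"^(?P<ts>\w{3}\s+\d+ \d\d:\d\d:\d\d) \S+ sshd\[\d+\]: "
    r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) port \d+"
)

def parse_logins(lines):
    """Yield (timestamp, user, source_ip) for each accepted SSH login."""
    for line in lines:
        m = LOGIN.match(line)
        if m:
            yield m.group("ts"), m.group("user"), m.group("ip")

sample = [
    "Apr  1 01:11:37 localhost sshd[2153]: Accepted publickey for "
    "kent_brockman from 172.31.34.158 port 47648 ssh2: RSA-CERT ID ...",
]
print(list(parse_logins(sample)))  # [('Apr  1 01:11:37', 'kent_brockman', '172.31.34.158')]
```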


    Troubleshooting Issues with the Gateway

    Whilst an ASA Bastion is just another server running sftd, an ASA Gateway runs a different service entirely – sft-gatewayd (although a gateway may also be running the agent, so you could see the sftd process running).

    root@ip-172-31-35-163:/home/okta_admin# ps -ef | grep sft
    root       804     1  0 00:59 ?        00:00:00 /usr/sbin/sftd
    root       828     1  0 00:59 ?        00:00:00 /usr/sbin/sft-gatewayd service
    sft-gat+   943   828  0 00:59 ?        00:00:00 /usr/sbin/sft-gatewayd proxy --log-level info
    sftd       966   804  0 00:59 ?        00:00:00 /usr/sbin/sftd _broker

    The gateways only run on Linux (see https://help.okta.com/asa/en-us/Content/Topics/Adv_Server_Access/docs/supported-os.htm).

    The /etc/sft/sft-gatewayd.yaml file contains the gateway configuration settings, including the log level and the session capture storage directory (/var/log/sft/sessions by default).

    root@ip-172-31-35-163:/etc/sft# cat sft-gatewayd.yaml 
    # Setup token from ASA. This is required for the gateway to start correctly.
    SetupToken: sft-gw.<remove-from-capture>
    
    # Verbosity of the logs. info is the default and recommended. debug or error
    # levels are also available.
    LogLevel: info
    
    # The network address clients will be instructed to use to access this gateway.
    # AccessAddress: "1.1.1.1"
    # The network port clients will be instructed to use to access this gateway.
    # AccessPort: 7234
    
    # The network address that the gateway will listen on.
    # ListenAddress: "0.0.0.0"
    # The network port that the gateway will listen on.
    # ListenPort: 7234
    
    # The directory where finalized session logs will be stored.
    # SessionLogDir: "/var/log/sft/sessions"
    
    # SessionLogFlushInterval controls how frequently logs for an active session
    ...

    As with the Server Agent on a systemd-based system, there are no logs in the /var/log/sftd folder. You need to run journalctl -u sft-gatewayd to see the gateway logs.

    The gateway logs can show a lot of information about traffic traversing the gateway.


    This concludes this post on troubleshooting.

  • IGA and PAM – Managing Identities in a Red Hat OpenShift Environment

    You might have missed it as there wasn’t a lot of press, but IBM recently acquired a small startup called Red Hat. As with many IBMers, I have been on a steep learning curve to understand the capabilities this brings. As an interesting exercise, I thought I’d treat the OpenShift stack as an identity project and look at how the identities and their access in the various layers of the stack could be managed and governed.

    This article provides an overview of the Red Hat OpenShift stack and the Identity Governance and Administration (IGA) aspects of it. The stack is also a great candidate for Privileged Access Management (PAM). The article does not provide a detailed explanation of Red Hat OpenShift or Red Hat Enterprise Linux.

    Note, this article is concerned with the Red Hat OpenShift stack, not Red Hat OpenStack, which is a different (but related) technology set.

    Red Hat OpenShift and Other Components in the Stack

    Red Hat OpenShift (https://www.openshift.com and https://www.redhat.com/en/technologies/cloud-computing/openshift) is, to quote the Red Hat website, “an enterprise-ready Kubernetes container platform with full-stack automated operations to manage hybrid cloud and multicloud deployments.” It provides the middleware to run container-based applications in on-prem, private and public cloud environments. You will sometimes see it referred to as the “OpenShift Container Platform”.

    The packaging of OpenShift contains both Red Hat Enterprise Linux (RHEL) and Red Hat OpenShift (RHOS). The OpenShift components are shown in blue in the following diagram and provide the container infrastructure and management and lifecycle automation for the containers. The Security capabilities include authentication and authorization.

    The Red Hat OpenShift Stack

    From an identity perspective, it’s important to understand the layers in the stack. From bottom to top they are:

    1. Infrastructure (teal) – the underlying operating environment; physical or virtual servers, private or public clouds,

    2. The Container OS (red) – labeled the “Enterprise-Grade Container OS”, which is Red Hat Enterprise Linux (RHEL),

    3. Red Hat OpenShift (blue) – based on Kubernetes and providing the runtime environment, and

    4. The Applications (green) – the containers running the business applications

    Let’s look at these layers and the identity management implications in more detail.

    Layer 1: The Underlying Infrastructure

    The OpenShift stack can run on different types of infrastructure: OpenShift runs on RHEL, and RHEL runs on various platforms – physical servers, virtual servers, or private or public clouds.

    Where RHEL is running on physical servers, it is the native operating system. Thus, management of identities is just management of Linux accounts and groups.

    If there are virtual servers, then there will be an operating system underneath the hypervisor. The flavor of operating system will vary (dependent on the hypervisor) but there will be some form of operating system identities (Windows Server, Linux/Unix, z/OS etc.) to be managed. These should be tightly controlled administrative users subject to Privileged Access Management (PAM) or highly-governed identity management.

    OpenShift can run on many “clouds”, including Red Hat OpenStack, AWS, Microsoft Azure, Google Cloud Platform, VMware vSphere, and Red Hat Virtualization (see https://blog.openshift.com/openshift-container-platform-reference-architecture-implementation-guides/).

    When running on private or public cloud infrastructure, such as Amazon Web Services, there will be identities to manage. Each of these platforms has unique interfaces to manage them. For example:

    • AWS has its own identities and access control tied to “accounts” (a customer using AWS services) and these identities can be managed via AWS APIs
    • Red Hat OpenStack uses a combination of LDAP for users and groups, and an internal component called “keystone” to manage the authorization objects (the authZ function: permissions, roles, projects)

    Irrespective of the infrastructure underlying RHOS and RHEL, the identities should be tightly controlled administrative users subject to Privileged Access Management (PAM) or highly-governed identity management.

    Layer 2: The Container OS

    Red Hat Enterprise Linux (RHEL) provides the foundation for Red Hat OpenShift (RHOS).

    The native identities (locally stored credentials) will be RHEL users (and groups). Access control will be a mix of the traditional Unix file permissions and policy defined in SELinux (probably also su/sudo access). IGA integration for local identities will be some form of Linux connector leveraging SSH.

    The RHEL implementation may also leverage remote authentication services via a Pluggable Authentication Module (PAM) that would also return the user and groups to Linux for authorization decisions using the access control mechanisms described above. This may be Active Directory (AD), Identity Management in Red Hat Enterprise Linux (IdM), or just LDAP (RH DS or OpenLDAP) and Kerberos. IGA integration for these systems will vary, but most will support an LDAP connector or similar standard connector.

    As with the identities in the underlying infrastructure, these identities should be tightly controlled administrative users subject to Privileged Access Management (PAM) or highly-governed identity management.

    In addition to the Red Hat documentation, there is a detailed SELinux guide at: http://freecomputerbooks.com/books/The_SELinux_Notebook-4th_Edition.pdf

    SELinux provides better security control over applications that use OpenShift because all processes are labeled according to the SELinux policy. For further information on Red Hat OpenShift and SELinux, see: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-openshift.

    Layer 3: Red Hat OpenShift

    There are many components in OpenShift that may have their own access policy configuration, like Istio, but we will focus on the core OpenShift identity and access objects.

    The OpenShift Container Platform implements a Role-based Access Control (RBAC) model. In addition to users and groups, it has the following objects:

    • Rules – sets of permitted verbs on objects, like having permissions/rights on resources
    • Roles – collections of rules, like (access) roles in most systems, and
    • Bindings – associations between users and/or groups and roles

    These objects may be tied to specific projects or apply to the entire cluster. The following figure is from the OpenShift documentation (https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html-single/architecture/index#architecture-additional-concepts-authorization) and shows some examples.

    Open Shift Access Objects

    Users in OpenShift may be regular users, system users and service accounts. 

    Note that from an identity governance perspective this model supports users being connected directly to roles. This means any governance solution that only consumes users and groups may not see the entire picture of the access users have.
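    This blind spot is easy to demonstrate. In the sketch below (hypothetical users and bindings, not the OpenShift API), carol is bound directly to a role, so a governance view built only from group memberships misses her access entirely:

```python
# Users can reach a role via a group, or be bound to it directly.
group_members = {"ops-team": {"alice", "bob"}}
role_bindings = [
    ("Group", "ops-team", "admin"),   # group -> role binding
    ("User", "carol", "admin"),       # direct user -> role binding
]

def effective_roles(user):
    """All roles a user holds, via groups or direct bindings."""
    roles = set()
    for kind, subject, role in role_bindings:
        if kind == "User" and subject == user:
            roles.add(role)
        elif kind == "Group" and user in group_members.get(subject, set()):
            roles.add(role)
    return roles

def roles_via_groups_only(user):
    """What a groups-only governance view would see."""
    return {
        role
        for kind, subject, role in role_bindings
        if kind == "Group" and user in group_members.get(subject, set())
    }

print(effective_roles("carol"))        # {'admin'}
print(roles_via_groups_only("carol"))  # set() - carol's access is invisible
```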

    OpenShift provides a number of mechanisms to create and manage users:

    • Just-in-time (JIT) provisioning – a user logs on to OpenShift and a user object is created
    • API – APIs are available to create/manage users and groups
    • LDAP Integration – OpenShift can integrate with an LDAP directory so that the groups and group memberships in OpenShift match those in the LDAP directory.

    If LDAP Integration is used, there is a LDAP Synchronization function in OpenShift. The configuration of this includes mapping of LDAP groups to OpenShift groups and can also specify white- and black-lists of groups. This sync can be scheduled to run periodically via a cron job. There are also commands and the OpenShift API to drive synchronization.
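    Conceptually, the whitelist/blacklist selection reduces to a simple filter. The sketch below is illustrative only – in practice the behaviour is driven by an LDAPSyncConfig file and run with the oc adm groups sync command:

```python
def groups_to_sync(ldap_groups, whitelist=None, blacklist=None):
    """Apply whitelist then blacklist to decide which LDAP groups sync."""
    selected = set(ldap_groups)
    if whitelist:
        selected &= set(whitelist)   # keep only whitelisted groups
    if blacklist:
        selected -= set(blacklist)   # then drop blacklisted groups
    return selected

ldap = {"cn=devs", "cn=ops", "cn=everyone"}
print(groups_to_sync(ldap,
                     whitelist={"cn=devs", "cn=ops"},
                     blacklist={"cn=ops"}))   # {'cn=devs'}
```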

    So, any OpenShift deployment may have user and group objects stored in LDAP, and will have user, group, rule, role and binding definitions stored in etcd.

    Management of users, groups and memberships in LDAP will involve standard LDAP (or Microsoft Active Directory) connectors, with OpenShift’s LDAP sync then pulling the changes through.

    Management of users, groups and memberships (and any other access-related objects) in OpenShift will involve using the OpenShift REST APIs for the different objects (e.g. https://docs.openshift.com/container-platform/3.11/rest_api/apis-user.openshift.io/v1.Group.html for Groups).

    Users in OpenShift may be privileged or regular users. Management of the privileged users should be via Privileged Access Management (PAM) or highly-governed identity management, whereas regular users can be managed using standard identity management processes.

    Layer 4: Applications and Containers

    The “top” layer in the stack is the business applications running in containers, hosted and managed by OpenShift. These applications will have the same identity management challenges as any cloud applications. Management may be:

    • Via an API directly into the application, either a standard API like SCIM or a bespoke API,
    • To an application repository running on persistent storage, or
    • To an enterprise repository (like Microsoft Azure Active Directory) that is leveraged by the application

    At this layer you will have a mix of privileged users (more likely application administrators than infrastructure administrators) and ordinary users.

    Management is no different from managing identities on other cloud applications, with standard connectors (like SCIM, LDAP) or application-specific connectors.

    This concludes the exploration of the different layers in an OpenShift stack and the implications on identity management for each layer. The next section brings them together for a holistic view.

    Provisioning & Governance of the Entire OpenShift Stack

    As we have seen above, there are identity management needs at all layers of the stack. There are varying types of users at the different layers; from high-privilege (and high-risk) infrastructure administrators at the lower levels, up to privileged and ordinary application users at the higher layers. There are also varying types of systems with identities and access; some managed using standard interfaces (like LDAP) and others requiring bespoke integration using APIs.

    A single identity management system could manage all the different types of users and their access membership. It could implement the different levels of controls (such as access approval) needed for the different levels of user.

    Traditional identity management systems work best when accounts can be tied to individuals. Given the focus on privileged accounts in the lower layers of the stack, use of a Privileged Access Management (PAM) system would be advisable. PAM solutions control the credentials of the privileged accounts and who has access to them at any particular time – which is great for teams of administrators that need to share privileged accounts.

    Ideally you would leverage a combination of IGA and PAM solutions, with the IGA solution managing all the user-based access and access to the privileged accounts, and the PAM solution managing the privileged accounts themselves. This is shown in the following figure.

    Applying IGA and PAM to the OpenShift Stack

    With a combined IGA+PAM solution, you can easily implement governance controls. You can see all the access for individuals – either directly managed in the IGA solution or via access into the PAM solution (e.g. John Smith can use the RHEL root account). You can apply access (re)certification to users and their access. 

    You can also apply risk ratings to the accounts across the different layers (is access to a privileged account in the infrastructure or RHEL layer higher risk than one in OpenShift or the containers?). You can also apply Separation of Duties (SoD) controls to different accesses across the different layers. For example, it may be appropriate to highlight who has administrative privileges in both AWS and RHEL, or RHEL and OpenShift.
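    A cross-layer SoD check like this reduces to a set intersection over per-layer privileged-account assignments (the users and layers below are hypothetical):

```python
# Who holds privileged accounts at each layer of the stack (made-up data)
privileged = {
    "aws":       {"alice", "dave"},
    "rhel":      {"alice", "bob"},
    "openshift": {"bob", "carol"},
}

def sod_violations(layer_a, layer_b):
    """Users with administrative privileges in both layers."""
    return privileged[layer_a] & privileged[layer_b]

print(sod_violations("aws", "rhel"))        # {'alice'}
print(sod_violations("rhel", "openshift"))  # {'bob'}
```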

    And as with any IGA solution, it makes sense to tie into an identity-aware Security Information and Event Management (SIEM) platform, so you have visibility into what users are doing with the access they have.

    One caution on identity governance: as we have seen, some of the layers allow users to be connected directly to permissions. Most IGA systems will only manage users and groups, not permissions, so you may not get a complete picture of what access a user has. Even if the IGA system supports the extended data model with permissions and permission membership, consuming this fine-grained access from different layers in the stack (e.g. SELinux in RHEL, roles/rules in OpenShift) may be impractical.

    Conclusion

    In this article I have provided an introduction to the Red Hat OpenShift stack and explored each of the layers; the underlying infrastructure, the container OS (Red Hat Enterprise Linux), the OpenShift Container Platform, and the business applications running in containers. 

    For each layer I have explored the identity management implications. For most layers the identity management is following practices established over many years; users and/or groups, mapped to roles and/or permissions. Often standard interfaces, like LDAP, are used. Sometimes there are bespoke mechanisms leveraging APIs for managing identities and access.

    The stack lends itself to a standalone IGA deployment managing all identities in one place. The large number of privileged accounts involved, particularly in the lower layers of the stack, suggests use of a Privileged Access Management (PAM) solution would be beneficial. A combined IGA and PAM solution would be optimal. This also supports the implementation of governance controls like access recertification and Separation of Duties controls.

    This article originally appeared on 8 Aug 2019 on LinkedIn; https://www.linkedin.com/pulse/iga-pam-managing-identities-red-hat-openshift-edwards-iamdavid-/

  • SCIM Will Solve All Your IGA Problems, Right?

    Continuing my theme of exploring IGA topics and “the Cloud”, I thought it worthwhile looking at SCIM and its adoption since it appeared eight years ago.

    The System for Cross-domain Identity Management, or SCIM, is the current rockstar of Identity Governance and Administration (IGA). It’s a lightweight data model utilizing JSON and REST that seems to solve all the SaaS identity management scenarios. But how pervasive is it, and does it address all IGA data needs? In this article we will look at SCIM, its pervasiveness and how it addresses different IGA use cases.

    What Is SCIM?

    SCIM (http://www.simplecloud.info) started life as Simple (or Simplified) Cloud Identity Management and was later renamed to the System for Cross-domain Identity Management. Various attempts over the years have tried to provide a universal identity management data model and interchange mechanism. We have seen the Directory Services Markup Language (DSML) and the Service Provisioning Markup Language (SPML). Both failed to take hold in the market, mainly due to their complexity and bloated implementations. The market needed a simple and lightweight model.

    SCIM leverages both JavaScript Object Notation (JSON) and Representational State Transfer (REST) – two common cloud-friendly protocols leveraging HTTP. It provides an extensible data model with the core data objects being User (person + account) and Group (groups of users). This is shown in the following figure from the SCIM website.

    The Standard SCIM Model

    Like we saw with LDAP with inetOrgPerson and group/groupOfNames, the SCIM User and Group objects are generic representations.

    The User object is a generic representation of a person, with attributes such as name, userName, phoneNumbers and emails. The example from the SCIM website shows a sample User object.

    The Group object is just a collection of members. The example from the SCIM website shows a sample Group object.

    The SCIM User and Group objects are suitable for systems that implement simple access models – basic users (or accounts) and access via groups, with groups representing a collection of users. This includes many of the current SaaS applications that only need that level of access control.
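    To make the simplicity concrete, a minimal SCIM 2.0 User and Group look like the following (schema URNs per RFC 7643; userName is the only required User attribute beyond the schemas declaration, and the values here are made up):

```python
import json

# A minimal SCIM 2.0 User resource (RFC 7643 core schema)
user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "kent_brockman",
    "name": {"givenName": "Kent", "familyName": "Brockman"},
    "emails": [{"value": "kent@example.com", "primary": True}],
}

# A minimal SCIM 2.0 Group resource - just a display name and members
group = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
    "displayName": "News Team",
    "members": [{"value": "2819c223-7f76-453a-919d-413861904646",
                 "display": "kent_brockman"}],
}

print(json.dumps(user, indent=2))
```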

    But this doesn’t address the need for complex account types or access models (think Microsoft Active Directory or IBM z/OS RACF, and even some SaaS applications). There has also been a draft SCIM Password Management Extension (https://tools.ietf.org/id/draft-hunt-scim-password-mgmt-00.txt).

    SCIM is designed to be extensible; other types of resources can be added. As this is data passed between two components (e.g. between an IGA system and a target system), any changes to the standard objects or addition of new resource types requires the modification of the endpoints at either end (different REST URLs, generation/consumption of the data).

    SCIM is leveraged by many IGA/IAM solutions, many extending the standard for their own use. 

    How Pervasive is SCIM?

    Whilst use of SCIM for on-premise systems is very limited, it is significant and growing in SaaS, such as G Suite, Microsoft Office365 and salesforce.com. 

    Analysis of the connectors for Microsoft Azure Active Directory (https://docs.microsoft.com/en-au/azure/active-directory/saas-apps/tutorial-list) shows that about 2/3 (~40 of ~60) use either standard SCIM (v1 or v2) or a variation of SCIM. Of the remainder, a small number (~5) implement their own REST-based APIs and others (~15) have their own APIs (which may be Web Services based). Many of the non-SCIM APIs address more than just Users and Groups.

    Of the SCIM implementations, most will manage Users and Groups. Some will only manage Users (a constraint of the target systems) and some will manage more than just Users and Groups.

    So, whilst the use of SCIM is not universal, it can be considered pervasive in the cloud.

    Can SCIM Address all IGA Integration Needs?

    As discussed above, the standard SCIM resources are User and Group. The User resource represents a generic user object, similar to what inetOrgPerson does in LDAP. The Group resource is a collection of users, similar to group or groupOfNames in LDAP. If the target system (such as an on-prem system or SaaS system) only needs a simple user representation or user groups to manage access, then SCIM is the perfect fit.

    But it will not meet all IGA needs, including the need for access data other than group membership, the need for complex account types, or where the target system does not expose a SCIM interface.

    First, the standard access model implemented by many systems consists of: users (accounts), users mapped to groups, and groups mapped to permissions/rights/resources. This provides a simple delineation of management – the system owners/administrators can manage the group-to-permissions/rights/resources mappings (i.e. the fine-grained access for the system) and security admins or the help desk can manage the user accounts and group memberships. But this is not the model used by many systems; some systems allow users to be connected directly to permissions/rights/resources or have multiple layers of access (e.g. groups and roles). For identity management you may need to manage these relationships. From a governance perspective, just having visibility into accounts and group memberships may not give a complete view of a user’s access, particularly when group naming/descriptions have no real bearing on the access they provide.

    In this scenario SCIM needs to be extended to provide custom resource types, both data objects (e.g. permissions) and memberships. Extending SCIM includes extending the schemas as well as the endpoints on both ends.

    Next, the standard User resource does not support the account complexity required by many systems. Microsoft Active Directory and IBM z/OS RACF accounts, for example, have dozens of unique attributes: the IBM AD Identity adapter has around 200 attributes managed on AD accounts (including Lync and Exchange attributes) and the IBM z/OS RACF Identity adapter has over 120 attributes across multiple segments.

    You could extend the common SCIM User schema with the extra attributes needed for each target system, but that doesn’t scale and could be incredibly cumbersome. Or you could implement a unique account resource type for each different account type, and then implement REST endpoints and the code supporting them for each. For example, covering AD and RACF you would need https://example.com/{v}/ADAccount and https://example.com/{v}/RACFAccount endpoints for all the Create/Read/Replace/Delete/Update/Search/Bulk operations, and the underlying code to process each.

    Finally, many systems just do not implement a SCIM interface. If you were to implement SCIM universally for all your systems, you would need to build some form of adapter layer to interface SCIM with the target system. This may be an unnecessary additional transformation layer (adding another point of potential failure between the IGA system and the target, not to mention additional unnecessary processing).

    Conclusion

    Whilst SCIM lives up to its original mantra of being lightweight and simple, that comes at a cost. It is seeing increased adoption across SaaS applications (around two-thirds of those managed by Microsoft Azure AD) where the applications need only simple users, or users and groups.

    However, there are situations where standard SCIM does not cut it: the need for complex access models, the need for complex accounts, and systems that do not provide a SCIM interface. In these cases, SCIM would need to be extended through custom resource definitions and custom endpoints.

    SCIM is reasonably well suited to most SaaS applications. It may be that the industry moves to more standard access models. However, it’s unlikely that all legacy on-prem systems will be simplified enough to use standard SCIM or expose SCIM interfaces.

    This article originally appeared on 5 Aug 2019 on LinkedIn; https://www.linkedin.com/pulse/scim-solve-all-your-iga-problems-right-david-edwards-iamdavid-/

  • IGA Cloud or On-Prem – Have You Checked the Plumbing?

    A major decision for all software deployments, including Identity Governance and Administration (IGA) deployments, is what platform to deploy to: cloud, on-premise or a hybrid of the two. Many IGA products are available as both cloud-based and on-prem. Some on-prem products can be hosted as SaaS or managed service offerings in the cloud. Some of the newer IGA products are “born on the cloud” and have no on-prem option. There are many non-functional considerations of cloud vs. on-prem, such as installation and maintenance of servers and operating systems, and the dynamic scalability the cloud offers. These are as relevant to an IGA project as to any other project.

    One consideration that is unique to IGA projects is the target systems – specifically the repositories and their accounts and access that are to be managed and/or governed. It may be that the target systems will put constraints on the selection of IGA product. It is better to understand where your identity data is and how you need to manage it prior to selecting an IGA product.

    This article will look at some considerations relating to “the plumbing” – what model of IGA solution is needed, what target systems are affected, and what identity data is to be managed on those systems. Then it will look at the architectural patterns that may need to be used, and how to decide which one you need.

    What Model of IGA Do You Need?

    There are three basic models of IGA I have encountered: read-only identity governance, read-write identity governance and read-write identity management (or mixtures of them).

    Identity governance is concerned with attestation/certification, risk management, separation of duties (SoD), auditing, reporting and analytics. If your governance requirements are just to prove that you are reviewing access periodically and identifying any SoD violations, you may only need a read-only identity governance solution.

    In this model, the target systems containing the users and access are disconnected from the identity governance system. Some disconnected process is used to upload accounts and access into the governance system (like importing CSV files), this data is used for analysis/reporting/recertification, and any changes needed (such as revoking access) are manually applied to the target systems.

    This is a perfectly valid approach if it meets your requirements and is by far the simplest approach to an identity governance solution.

    The read-write identity governance model has the identity governance system connected to the target systems via some integration (referred to as adapters or connectors, sometimes agents). This integration provides two flows: reconciliation, which pulls account and access data into the identity governance system, and provisioning, which pushes account and access changes (often access membership) back to the target system.
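The reconciliation half of that flow is, at its core, a set comparison between what exists on the target and what the governance system believes. A minimal sketch, with invented account and group names:

```python
# Sketch of reconciliation: compare accounts pulled from a target system
# against the governance system's view and report drift. Both inputs are
# plain dicts of account name -> set of group memberships.

def reconcile(target_accounts, governed_accounts):
    """Return accounts and memberships present on the target but unknown
    to the governance system (candidates for review or revocation)."""
    drift = {}
    for account, groups in target_accounts.items():
        expected = governed_accounts.get(account)
        if expected is None:
            # Account exists on the target but not in governance: an orphan.
            drift[account] = {"status": "orphan", "groups": sorted(groups)}
        elif groups != expected:
            drift[account] = {
                "status": "mismatch",
                "extra": sorted(groups - expected),
                "missing": sorted(expected - groups),
            }
    return drift

# Invented example data: one out-of-policy membership, one orphan account.
target = {"jsmith": {"finance", "admins"}, "old_svc": {"admins"}}
governed = {"jsmith": {"finance"}}
print(reconcile(target, governed))
```

Real adapters deal with attribute mapping, paging and incremental change detection on top of this, but the diff-and-flag pattern is the essence of the reconciliation flow.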

    This “plumbing” between the identity governance system and the target systems is often implemented in a flexible framework that supports multiple protocols (like LDAP, SCIM, JDBC/ODBC) and communication mechanisms (REST, SOAP, TCP/IP). We also talk of agentless and agent-based connections to target systems. Agentless connections don’t need an agent deployed on the target systems and rely on a remote protocol that works across a network (or the internet). Agent-based connections require some sort of agent to be deployed on (or near) the target system as there is no agentless mechanism to access account and access data. The latter is usually required for legacy systems (e.g. mainframe) or systems with rich access control mechanisms (like SAP R/3 systems).

    The read-write identity management model is focused more on the identity management capabilities, like an access catalog, access requests, birthright provisioning, workflow and entitlement policy. Whilst the focus is different to an identity governance system, it has the same read-write integration needs for target systems; reconciliation of accounts and access, and provisioning of account and access changes.

    Many organizations have requirements that cross both identity management and identity governance. The two have different business drivers and likely different owners and sponsors in an organization, which makes IGA deployments challenging. But it is entirely feasible to have an integrated identity management and identity governance solution (via a single product or an integration of products), leveraging the one integration platform.

    If the read-only identity governance model is all that’s required, you’re unlikely to see any constraints between cloud and on-prem IGA products; almost every product supports some batch or bulk-load mechanism. However, if you need a read-write model, then you will need to do further analysis of in-scope target systems and the identity data to be managed and see if that constrains your product options.

    What Target Systems Need to be Managed?

    If your IGA solution is to integrate with target systems, you need to know what they are, where they are and how you can connect to them.

    Do you have SaaS applications in the cloud to connect to, and do they support some standard mechanism for identity management/governance, like SCIM? Most major SaaS applications will provide a standard interface for managing accounts and access. Do you have SaaS systems that don’t have a standard interface? Is a custom connector possible, and can it be developed (by yourself or the vendor)? Deployment of an agent to a SaaS application in the cloud is unlikely. Or can these applications be de-scoped?

    Do you have on-prem applications to connect to? Do they support a standard mechanism for identity management/governance, like LDAP or JDBC? Unlike born-on-the-cloud applications that grew up with standards, many on-prem applications built their own interfaces. There are still many legacy applications that don’t provide an identity management interface and some custom integration is required to directly update datastores or perform screen-scraping against a UI.

    Can these on-prem applications be managed remotely, and from outside your network? And if so, how? Do they support an HTTP-friendly protocol so you don’t need additional holes in your firewall? Do they provide an external gateway “on the edge” that you can securely connect to from a cloud IGA system? Or do they require unique ports and protocols, possibly with agents deployed?

    What Identity Data is to be Managed on those Target Systems?

    Identity management is concerned with managing accounts (users) and access, where access may be:

    • Attributes on the account – such as a flag or default access level (e.g. IBM z/OS RACF user profiles having default access and flags like “OPERATOR”),
    • A group the account is a member of – where these groups are tied to permissions on the target system (such as Microsoft Active Directory Groups being mapped to permissions on Microsoft Windows systems), and
    • Direct mapping of accounts to permissions – whilst this is considered bad practice it is supported in many target systems (like Amazon Web Services and IBM z/OS RACF).

    Traditionally, Identity Management has focused on the first two: accounts with attributes, and account group membership. Direct mapping of accounts to permissions should be reserved for admin or service accounts outside the control of an Identity Management system (though that is not always the case).

    Identity Governance has shifted the focus to understanding all access to systems, the controls around managing this access, and risks associated with access. To get a complete picture of access, you may need visibility into not just the account attributes and group memberships, but also any account to permission mappings.

    Some systems, like IBM z/OS RACF and SAP Application Server ABAP, have incredibly complex access models. Presenting the users and groups in the IGA system may not be sufficient to see all access a person has in these systems. Also, depending on how systems have been configured, access groups may not indicate the access they represent. In these cases, the IGA system may need to dig deeper into the access model on the target systems. This fine-grained entitlement approach is normally read-only – the permissions and the account memberships of those permissions are pulled into the IGA system for visibility, analytics, recertification and reporting. But it is rare for integration to create/modify/delete permissions on target systems – that is the realm of the system administrator.

    This complex IGA data need presents a challenge to many IGA products. All products will provide management of accounts, account attributes and group memberships. Some will support consumption of target system permissions. Very few will consume and/or manage fine-grained permissions in complex systems like RACF or SAP AS. 

    Many products today are standardizing on the System for Cross-domain Identity Management (SCIM, originally Simple Cloud Identity Management). The standard implementation only supports a standard user (account) and groups, not permissions or target system-specific account schemas, though there is scope to extend SCIM. Similarly, directory implementations that support LDAP often focus on accounts and groups (like using LDAP in front of a RACF system) and may not expose fine-grained permissions.

    Whilst most IGA products will support accounts and groups management, if you need more you will need to look to IGA products that support different schemas, come with integration configured for what you need, or support customization so you can do it yourself.

    IGA Architectural Patterns for Different Target Systems

    In the previous sections we have looked at the constraints you need to consider around IGA: what model (read-only IdG, read-write IdG, read-write IdM, or a mix), the target systems you need to manage, and the identity data to be managed on those systems. These will dictate the architectural pattern you may need to consider for your IGA solution.

    First off, if all you need is a read-only Identity Governance system, then it doesn’t matter what your solution looks like as long as it can consume account and access data in some form. It won’t matter if the IGA system is on-prem or in the cloud.

    Next, if you need to manage accounts and access in a cloud target system, then assuming that SaaS solution provides a web-friendly remote mechanism, it doesn’t matter if the IGA system is on-prem or in the cloud. The secure connectivity concerns about cloud-to-cloud connectivity are the same as for on-prem-to-cloud connectivity (most organizations will have applications running on-prem that connect out to cloud services). This is shown below.

    Communicating between IGA Systems and Account Repositories on the Cloud

    There may be a concern about accessing fine-grained access information in a cloud service, and whether specific IGA products support this, but that won’t have a bearing on whether the IGA system is hosted on-prem or in the cloud (other than the ability of IGA products to get and process fine-grained access data).

    This leaves us with the last scenario – the need to manage (read-write) accounts and access in on-prem systems (potentially with the need to access fine-grained access data, but probably in a read-only mode) from either an on-prem IGA system or a cloud IGA system.

    Managing on-prem targets from an on-prem IGA system should be trivial: network connectivity, use of SSL, maybe proxying across network zones.

    The challenge is using a cloud IGA solution to manage on-prem targets. If you can access the target system via an HTTP-friendly mechanism (web services, REST) then you only need connections over an HTTP/S port (preferably via a reverse proxy). There are probably external systems accessing the network via this means already, so it should not represent a major issue to the IT Security team.

    If not, you may need to look at one of the following:

    • Using a Virtual Private Network (VPN). VPNs were popular in the early days of cloud, particularly for these one-to-many scenarios, but have fallen out of favor due to maintenance overheads and issues with visibility. There are similar mechanisms to provide a secured “pipe” from the cloud into an on-prem environment.
    • Opening up dedicated ports in the external firewalls to allow direct connection to each target system from the cloud IGA system. This is a bad practice as it provides an increased attack surface to external parties. It’s also not scalable for deployments with many (and changing) target systems as you need to constantly manage firewall rules (which increases the risk of issues).
    • Leveraging some sort of gateway on the edge of the network. This is an emerging pattern from IGA vendors. It may also be called a bridge, bus or exchange.

    These options are shown below.

    Different Cloud IGA to On-prem Target Patterns

    It is highly likely that if you’ve got legacy on-prem systems, some will not support a http-friendly integration and you need to look at one of the options above. VPNs and more firewall holes are not recommended. Some IGA vendors provide a gateway approach, where the cloud IGA system securely connects to the gateway. It might be possible to leverage an on-prem IGA system as a provisioning hub to achieve the same thing. Again, some vendors support this hybrid model. 

    There are other patterns outside the scope of traditional IGA, such as deploying a cloud directory service that synchronizes with on-prem directories (like Microsoft Azure Active Directory). These may be a way to avoid the constraints of cloud IGA to on-prem target system integrations, but may not solve all the plumbing problems depending on your requirements.

    Conclusion

    In the previous sections we have discussed how the requirements for systems to be managed by an IGA solution might impact on the choice of a cloud, on-prem or hybrid IGA product. 

    Before selecting an IGA product, you should consider the IGA model you need (now and future); a read-only identity governance solution, a read-write identity governance solution, a read-write identity management solution, or a mix of these.

    You also need to consider the target systems where the accounts and access reside that need to be managed. Where are they and how can those account and access repositories be accessed? Tied to this is the amount of identity data to be managed on those systems; are users (accounts) and groups sufficient for your governance requirements, or do you need to get into fine-grained access? Can off-the-shelf integration (adapters/connectors) meet these needs or would bespoke integration need to be developed?

    With an understanding of the target systems and identity data to be managed, you can look at any architectural constraints they pose:

    • If you only need a read-only identity governance solution, choice of cloud-based or on-prem IGA product is irrelevant; it’s all a matter of how to get account and access data to the product.
    • If your target systems are all cloud and you only need to work with user and group data, then choice of cloud-based or on-prem IGA product is irrelevant; both cloud and on-prem products will have the same considerations in accessing cloud target systems. The complexity will arise if you need more than user and group data and there is no off-the-shelf integration available, and you need to develop bespoke integration. You may need to look at the products that allow this flexibility.
    • If you also have to manage on-prem target systems, then choice of cloud or on-prem IGA product will need more consideration. On-prem IGA systems will be able to access on-prem target systems (with appropriate network connectivity and security). However, cloud IGA systems will need to connect into the network to talk to on-prem target systems. This may dictate the need for a VPN, opening up more firewall ports and use of an on-prem gateway to proxy connections. This may represent a constraint on the choice of IGA product.

    It is important to understand what you are managing prior to deciding on cloud, on-prem or hybrid and choosing an IGA product. Failure to do so could lead to problems down the track.

    This article originally appeared on 29 Jul 2019 on LinkedIn; https://www.linkedin.com/pulse/iga-cloud-on-prem-have-you-checked-plumbing-david-edwards-iamdavid-/

  • How Much Workflow Do You Need for Your IGA Project?

    Workflow is a core capability in any Identity Governance and Administration (IGA) deployment; IGA is all about automating the business processes around managing and governing users and their access. 

    IGA deployments often take much longer than anticipated and don’t achieve all of what the project set out to do. Why? There are many factors, but the automation of workflow processes consistently comes up as one of them.

    We often hear concerns that the workflow engine of the chosen IGA product can’t meet the business’s workflow requirements. Perhaps the issue isn’t the flexibility of the tool but rather the business requirements around workflow. In this article we will explore IGA workflow and why it’s important to understand your workflow needs before choosing an IGA product.

    Workflow in IGA

    Workflow can be used for many functions in an IGA solution, but we tend to focus on two types – let’s call them external (or approval) and internal (or operational). 

    The external workflows present the forms for data entry/review and drive the user interactive processes. These may be approval for access requests, role changes, access certification and other processes. All major IGA products support some level of external workflow. They may provide for simple linear steps for different groups of approvers (like user manager, application owner) perhaps with escalations and risk checking (like Separation of Duties checks). The other end of the scale is for very flexible workflows with branching, looping, returning a request to the initiator for additional information, sub-processes, and custom activities supporting scripts or programs. Most IGA products will sit somewhere on this simple-complex workflow spectrum. 

    A trend over recent years has been to leverage external ticketing or service desk systems to provide the forms and workflow capability and pass the approved request to the IGA system. This has the benefit of providing the same interface and usability as other provisioning requests in an organization. 

    There are often also internal flows that drive the operation of the IGA product: what steps does the product take when requested access is approved; what does it do when bulk-loading new users needing access; what does it do when reconciling accounts and access against policy?

    Some IGA products will allow modification of some operational processes through workflow customization, or the provision of exit points in the process to plug in some custom logic to alter the data or the flow. Other IGA products do not provide a means to alter the internal processes.

    What Workflow Should You Strive For?

    When planning an IGA deployment, there is a tendency to focus on the external workflows, specifically approval workflows. This is the area of greatest impact to the end users and something the business wants to get right. If user interaction with a tool is too hard, people will find ways around it or take shortcuts. If a tool is more consumable it will be used not bypassed.

    As an industry we strive for simplification of workflows. Gartner, in their “Critical Capabilities for Identity Governance and Administration” report (June 2018), stresses that IGA projects should focus on business process re-engineering and try to adopt a standard (linear) approval workflow across all requests, rather than adopting unique ones for each application. They even recommend a simple four-stage pattern: Policy Analysis, Manager Approval, Resource Approval, Control Approval.
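A linear pattern like that is straightforward to model. This sketch uses the four stage names from the Gartner report; the check functions and request shape are invented placeholders, not any product’s API:

```python
# Sketch of a linear four-stage approval workflow: each stage runs in
# order and the first rejection stops the flow. Stage names follow the
# Gartner pattern; everything else is illustrative.

STAGES = ["policy_analysis", "manager_approval",
          "resource_approval", "control_approval"]

def run_linear_workflow(request, checks):
    """Run each stage in order; return the outcome and the deciding stage."""
    for stage in STAGES:
        if not checks[stage](request):
            return {"request": request, "outcome": "rejected", "stage": stage}
    return {"request": request, "outcome": "approved", "stage": None}

# Invented example: every stage approves except the resource owner.
checks = {
    "policy_analysis": lambda r: True,     # e.g. SoD check passes
    "manager_approval": lambda r: True,
    "resource_approval": lambda r: False,  # resource owner rejects
    "control_approval": lambda r: True,
}
print(run_linear_workflow({"user": "jsmith", "access": "finance"}, checks))
```

The appeal of the pattern is visible in the code: no branching, no looping, no per-application variants, so every request follows the same auditable path.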

    Adopting a standard and simple approach to approval workflows has a number of benefits:

    • It gives a greater range of IGA products to select from – almost every IGA product supports this standard four-stage approach. If all vendors support a simple workflow approach, it’s one less differentiator to worry about and you can focus on the other business requirements and which vendor best supports them.
    • Deployment will be simpler and faster – given the design and implementation of workflows is one of the challenges in existing deployments, simplifying and standardizing your workflow processes will reduce the risk and complexity of a deployment.
    • End-user enablement is a lot easier – if you only have one standard process, education is simpler.
    • Operation and maintenance are easier – a single simple approval process is less likely to cause help-desk calls, and there is little maintenance required.

    The goal should be to strive to adopt a single, simple workflow process for all access requests.

    But, There’s Always That Unique Requirement

    Whilst that is a noble goal, and would certainly help with deployment simplicity, almost every major IGA deployment seems to have unique workflow requirements.

    Often there is complexity required by the business for external (approval) workflows, such as branching, looping, and customizable nodes. For example; after manager approval, the flow routes the request off to a department secretary who dynamically decides which senior manager to route the request to. Another common requirement is to be able to route the request back to the originator to supply additional data for the request or have one of the approvers supplement the request data. Sometimes there’s a special condition that needs to be programmatically added to the flow.

    We have also seen deployments that have unique requirements requiring alteration of the internal operations. Perhaps there is a need for internal data massaging after the request has been approved. Perhaps an update, like changing the email account for a user, requires calling out to and updating an external HR system. Perhaps reconciling accounts and access may need to trigger some corrective actions. These are all use cases that come up and whilst some IGA products support them through configuration, often you need to rely on modifying workflows or some programmatic approach.

    Whilst these business requirements may be seen as historical, political or driven by other needs, they are usually valid for the business, or a specific part of the business. There is only so long one can have a design conversation around what “should” be done vs. what “must” be done.

    How to Approach Workflow for an IGA Project?

    As outlined above, there is a spectrum of workflow complexity, from the simple linear approach as recommended by Gartner, right up to the very flexible complex workflows. 

    Almost every IGA product will support the simple approval workflows. So, if you have chosen to adopt simple linear workflows, then you can safely choose any IGA product.

    However, you do not want the situation where you have decided to adopt simple workflows and have chosen a product that supports only simple workflows, then get partway through a deployment to find that you have more complex workflow needs that the product chosen will not support. 

    Therefore it is important to understand what your current workflow processes are and how you plan to implement them into the IGA product before you go to market to select an IGA product. You may need to apply some project discipline on the business stakeholders to decide on the workflow approach and stick to it – if you’ve decided you will only support simple workflows, then get the stakeholders to agree on this and kill any project changes that introduce workflow complexity. If you cannot guarantee that, then your selection of IGA product may need to support very flexible complex workflows.

    Finally, what about cloud and IGA? Surely cloud should simplify this problem. Cloud provides many operational benefits, but unfortunately it does not help with IGA workflows. Some cloud IGA products support flexible and complex workflows, but many only support the simple workflows, and cloud-based solutions often don’t give you the ability to get under the covers to extend the application. Most of the on-prem (legacy) IGA products support flexible, complex workflows, and as they are on your servers you have more flexibility to get into the application.

    In closing, any IGA project will need workflow – we are automating business processes. The challenge is understanding what workflow you really need versus what you think you need. Ideally you would choose the simple approach recommended by Gartner. But if not, you should identify the level of complexity you need, agree on it, then choose the IGA product. This will save the pain of getting midway through a deployment and discovering your IGA product doesn’t do what your business needs it to do.

      

    This article originally appeared on 19 Jul 2019 on LinkedIn; https://www.linkedin.com/pulse/how-much-workflow-do-you-need-your-iga-project-edwards-iamdavid-/

  • Risk-based Access Approval with IBM’s IGA Products

    Identity Governance and Administration (IGA) solutions are all about reducing the risk to businesses that users and their access represent. But they also need to maintain ease of use so that users don’t find ways to circumvent IGA controls and introduce more risk.

    With IGA tools, like IBM Security Identity Governance and Intelligence (IGI) and IBM Security Identity Manager (ISIM), we have various controls to reduce risk, such as role-based access, approval workflows on request-based access, risk policy (such as Separation of Duties policy), risk mitigation and recertification. If applied correctly, these can effectively manage and reduce the user-based risk.

    But what if we could use risk to simplify the user experience? One way we could do this is to use risk determination to decide whether we need to apply approval workflow. If a request represents low risk, why waste time in approval? The following article looks at how this could be applied in IBM’s IGA solutions – IGI and ISIM.

    Identifying Risk

    There are many measures of risk that can be relevant to an IGA solution, such as risk policies, user risk scores, application risk scores and access entitlement risk scores. Let’s have a look at these.

    Risk Policy

    Risk policies, like Separation of Duties (SoD), Sensitive Access (SA) or Privileged Access, are a way to define risk. These policies define access entitlements that carry risk, and possibly a risk rating. Both IGI and ISIM have the concept of SoD, and IGI adds a risk rating (high, medium, low). IGI also has SA policies, where specific entitlements carry a level of risk.

    IBM Security Secret Server (https://www.ibm.com/au-en/marketplace/secret-server) defines privileged “secrets” to allow privileged access. When integrated with IGI (via the supplied adapter) these secrets can be managed in risk policies, like SoD and SA policies.

    We leverage this in IGI in the out-of-the-box approval workflows – if a risk violation is detected by an access entitlement request, we can escalate to a risk owner for review. Using Rules in IGI we can programmatically access risk policies. This is also theoretically possible in ISIM using API calls in Operation nodes in workflow.

    User Risk

    We can consider user risk: do we have users that are “riskier” than others? Does their access represent greater risk to the business? If so, how do we know and use this risk? Can we bypass approval if the user is below a certain threshold of risk?

    IGI maintains a level of risk based on the highest level of all risk violations for that user, resulting in a high/medium/low rating. This can be accessed programmatically in Rules.

    We can also leverage analytics tools to determine user risk. IBM QRadar SIEM User Behavior Analytics (UBA) can determine a degree of risk associated with a user, but currently there is no simple way to get that risk back into IGI or ISIM.

    IBM Security QRadar User Behavior Analytics

    IBM Cloud Identity Analyze (CIA, part of the Cloud Identity family, but currently in beta) uses a combination of rules and intelligence to associate risk with users and their access. This risk information is returned to IGI and stored in the IGI DB, where it can be accessed programmatically from within a Rule. A similar capability is expected to be added to ISIM in the future, but today one could access the user risk scores in CIA and assign them to an attribute on the PERSON object.

    IBM Cloud Identity Analyze Dashboard

    Thus, it’s possible to programmatically access user risk information, either native to IGI or from an external analytics engine (UBA or CIA) plumbed into IGI. A similar mechanism could be built to get external risk into ISIM.
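Once a user risk rating is available, the bypass decision itself is simple. This sketch mirrors IGI’s high/medium/low user risk ratings, but the bypass threshold is an invented policy choice, not product behaviour (an actual IGI Rule would express this in Java):

```python
# Sketch of risk-based approval bypass: skip approval workflow only when
# the user's risk rating is at or below a configured threshold. The
# high/medium/low ratings mirror IGI; the threshold policy is invented.

RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

def needs_approval(user_risk, threshold="low"):
    """Return True if the request should go through approval workflow."""
    return RISK_ORDER[user_risk] > RISK_ORDER[threshold]

# A low-risk user sails through; anyone above the threshold is routed
# to the normal approval workflow.
for risk in ("low", "medium", "high"):
    print(risk, "->", "approval" if needs_approval(risk) else "auto-approve")
```

Raising the threshold to "medium" would trade a little risk for fewer approval tasks; that trade-off is exactly the policy decision the business needs to own.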

    Application Risk

    In any business some applications represent greater risk to the business than others. The core customer application carries far greater risk than an intranet information website or learning management tool. Can we associate a risk with an application and use that to decide whether to bypass approval?

    Neither IGI nor ISIM has the concept of application risk. Whilst IGI does have SoD/SA policies and these relate to access entitlements belonging to applications, we don’t consolidate or show a score anywhere. You could arbitrarily assign a risk rating to applications and store it on the Application in IGI (or APPLICATION object in ISIM) and access it programmatically.

    External analytics tools, like UBA and CIA, can also determine risk for applications. With CIA we can return this risk information to the IGI DB and access it programmatically, similar to user risk.

    Access Entitlement Risk

    We have talked about risk associated with users and risk associated with applications. What about risk associated with specific access entitlements? Any application will have access entitlements that are considered riskier than others, such as privileged accesses.

    Neither IGI nor ISIM has the concept of access entitlement risk – there is no attribute stored on the entitlement (e.g. group, role), nor a mechanism to determine risk and apply it to an attribute on the entitlement. Risk ratings could be arbitrarily defined, manually assigned to an attribute on the entitlement, and then accessed programmatically.

    Within IGI we could also use an SA policy to associate risk with an entitlement. If a specific entitlement is considered risky and needs to be managed as a risk, you can create an SA policy for that entitlement and associate a level of risk with it.

    If IBM Security Secret Server is integrated with IGI (or ISIM), we could take that association and apply a risk rating to the associated entitlement. For example, we might have a Rule that processes new entitlements coming from Secret Server: if it’s a secret, set a “low” risk rating; if it’s a folder, set a “medium” rating; if it’s a folder under specific branches of a folder tree, set a “high” rating. This would run as the Secret Server access entitlement is added to IGI.
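    That mapping logic is simple enough to sketch. The following Python fragment illustrates the idea only – an actual implementation would be an IGI Rule written in Java, and the entitlement types and branch names here are hypothetical:

```python
# Illustrative sketch of the Secret Server risk-mapping logic described above.
# A real implementation would be an IGI Rule (Java); the entitlement types and
# folder branch names below are invented for illustration.

HIGH_RISK_BRANCHES = ("Production/", "Domain Admin/")  # hypothetical branches

def risk_for_entitlement(ent_type, path=""):
    """Return a risk rating for an incoming Secret Server entitlement."""
    if ent_type == "secret":
        return "low"
    if ent_type == "folder":
        # folders under specific branches are considered high risk
        if any(path.startswith(b) for b in HIGH_RISK_BRANCHES):
            return "high"
        return "medium"
    return "medium"  # conservative default for unrecognized types

print(risk_for_entitlement("secret"))                   # low
print(risk_for_entitlement("folder", "HR/Payroll"))     # medium
print(risk_for_entitlement("folder", "Production/DB"))  # high
```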

    External analytics tools, like UBA and CIA, can also determine risk for access entitlements. With CIA we can return this risk information to the IGI DB and access it programmatically, similar to user and application risk.

    Making Use of the Risk Information to Modify Approval Flows

    The previous section showed how we can collect risk information in our IGA tools (IGI and ISIM). This may be risk policy (SoD, SA), user risk, application risk and access entitlement risk. With this information stored in the local repositories (or accessible programmatically) we can now look at how we use this information.

    Using Risk in ISIM Approval Flows

    As can be seen above, IGI is the strongest when it comes to risk management – it’s one of the main reasons for the product. However, ISIM can access and leverage risk information programmatically and use it to determine approval flow. How? By using Operation nodes in a workflow and an associated Java program.

    Operation nodes allow execution of a Java program. An Operation node would be the first step in the approval workflow: it would gather the risk information (SoD, user, application, access entitlement) and make a decision, and the resulting flow would branch either to the approval steps or straight to the end.

    Using Risk in IGI Approval Flows

    Implementing a mechanism to skip over approvals in a workflow process involves having a Rule associated as a Post-action on the GEN step. This Rule would collect the relevant risk information (SoD/SA, user, application, access entitlement) and decide whether approval is needed. If not (i.e. the risk is low), the Rule can automatically approve the next approval step in the request and let the flow proceed.
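    The core of either approach (ISIM Operation node or IGI Post-action Rule) is the same gating decision. A sketch of that logic, with the risk inputs and thresholds as assumptions:

```python
# Illustrative risk-gating decision, as used by an ISIM Operation node or an
# IGI Post-action Rule. The inputs and the "low means auto-approve" threshold
# are assumptions; a real implementation would read these from IGI/ISIM.

def approval_required(sod_violations, user_risk, app_risk, entitlement_risk):
    """Decide whether an access request needs human approval.

    Any SoD/SA violation forces approval; otherwise approval is required
    only if any risk dimension is above "low".
    """
    if sod_violations:
        return True
    return any(r != "low" for r in (user_risk, app_risk, entitlement_risk))

# Low risk everywhere: the workflow would skip (auto-approve) the step.
print(approval_required(0, "low", "low", "low"))     # False
# A medium application risk routes the request to the approvers.
print(approval_required(0, "low", "medium", "low"))  # True
```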

    Adding Rules to IGI Workflow

    There are examples of this type of Rule processing on the IGI Rules git (https://github.com/IBM-Security/igi-rules). See the Rules guide (PDF) and the examples in the Sample Rules / Other Rules / Workflow folder.

    Workflow Rules examples on the IBM Security igi-rules Git

    Further Reading and Information

    For implementing custom logic in ISIM, there are ample redbooks and tech notes available on the web. For IGI, the best resource is the IGI Rules git (https://github.com/IBM-Security/igi-rules).

    This article was originally published on the IBM Security Community; https://community.ibm.com/community/user/security/communities/community-home?communitykey=e7c36119-46d7-42f2-97a9-b44f0cc89c6d&tab=groupdetails

  • IGDM Part 3 – Implementing the Identity Governance Data Model

    This article is the third in a series of three looking at a proposed common Identity Governance Data Model (IGDM). This third article suggests an implementation of the model using a SCIM-like approach.

    This model attempts to address the needs of managing heterogeneous complex target system access models in an Identity Governance and Administration (IGA) environment.

    The proposed IGDM is designed to standardize identity management and governance data flows between IGA systems and target systems hosting access repositories, by providing a common data structure that could be implemented with clients/servers at both the IGA systems and the target systems.

    The proposed data model is shown below.

    The Proposed Identity Governance Data Model E-R Diagram

    It provides for a standard set of objects, such as Person, Account, Resource and Permission, and relationships between objects. This allows for different target system access models, some of which apply changes via objects and attributes, and some of which apply changes via relationships.

    This article will propose an implementation of the IGDM using a SCIM-like approach to address the needs of hybrid-cloud and multi-cloud environments, whilst managing the variability that you will get with different IGA object schemas (e.g. different account types). This is the last article in the series.

    Introduction

    The previous articles in this thread presented and validated a proposed Identity Governance Data Model (IGDM) to meet the needs of large and complex Identity Governance and Administration (IGA) ecosystems with identity management/governance tools and target systems with complex access models.

    The proposed IGDM is shown in the following figure.

    The Proposed Identity Governance Data Model

    It describes a set of objects and relationships to provide a standard approach to IGA data that is shipped between IGA tools and target systems.

    This data model supports the variability in object schemas we see across IGA systems, such as different account types, or different access or resource types. However, it needs an implementation to become real.

    Prior attempts at creating a standard identity management data model have had limited traction in the marketplace, and vendors have either implemented bespoke models or heavily extended common models. Solutions like LDAP Data Interchange Format (LDIF), Directory Services Markup Language (DSML) and Service Provisioning Markup Language (SPML) were seen by the industry as very flexible but complex and bloated, particularly in the cloud era. They are also focused more on the implementation of the methods than on the data.

    The System for Cross-Domain Identity Management (SCIM, or Simplified Cloud Identity Management) was designed to be lightweight and internet-friendly, leveraging Representational State Transfer (REST) protocols, but the resources it defines are far too simple for complex IGA needs.

    This article will present how IGDM could be implemented in a SCIM-like protocol to leverage the benefits of SCIM but also support the variability needed of complex access models without having to constantly rewrite SCIM endpoints.

    Implementing the IGDM

    A possible implementation of the IGDM involves leveraging protocols that are HTTP friendly and lightweight, but flexible enough to support multiple object schemas (like different account types) without constantly rewriting endpoints. As discussed above, older standards like SPML, DSML and LDIF are not appropriate. We will explore implementing IGDM in a SCIM-like protocol.

    Why Not Just Use SCIM?

    SCIM certainly ticks many of the boxes of a modern identity data transfer protocol; it leverages REST for the endpoints and methods and JSON for lightweight data definition.

    But standard SCIM defines a limited set of resources: User (a combined user and account object), Group (a collection of users) and others you can define yourself. There is no delineation of users from accounts, and the data model is simple, so using it for the IGDM would require extending the supplied resource types or creating new ones.

    The standard SCIM resources aren’t a good fit for the IGDM, as shown in the following table.

    Mapping standard SCIM resources to the IGDM

    Note 1.  – The IGDM Account object can be partially met by the SCIM User resource.

    The implication of the SCIM design is that any extended or new resource type involves modifying the endpoints to build/extract the resource contents. If an endpoint is written to process Account objects, it would need to be modified to handle (for example) AD Account or RACF Account objects. This restricts its use where there are many resource types (e.g. many account types with different attributes) and also limits interoperability if you change the endpoints.

    Also, there is no concept of accesses or target system resources, so these would also need to be defined. Using standard SCIM for the IGDM would be a lot of work.

    A SCIM-like Implementation – Light and Flexible

    Given that legacy standards like DSML and SPML are considered prescriptive but heavy, and SCIM is considered lightweight but needing extensive work to support the complexity of IGDM, can we develop an implementation that leverages the best of both approaches?

    We could build a SCIM-like implementation (REST-based) that implements a small set of IGDM object types, and each could support a mix of fixed and variable content. The IGDM object types are: Person, Account, Access, Permission, Resource and Mapping.

    Each of these would have a standard fixed component, like in the current SCIM implementation, for each of the object types. They would also have a variable component that supports the different variations of object type. For example, an Account object would have a fixed portion for all Account objects, plus a variable portion for the different account types, like AD and RACF.

    Implementation of the endpoints would need to be coded to handle the object type, like Account, and then pluggable components would provide the logic and schema to support processing specific account types, like AD or RACF. This is shown in the following figure.

    Implementing IGDM in a SCIM-like mechanism

    With the endpoints coded to support the object types defined in the IGDM (Person, Account, Access, Permission, Resource and Mapping) the endpoint should not need to change. Then the pluggable components, say for AD or RACF, can be added or updated without changing the endpoint.

    Being REST-based, this SCIM-like implementation could use the standard REST operations, such as:

    • Create: POST https://example.com/{v}/{resource}
    • Read: GET https://example.com/{v}/{resource}/{id}
    • Replace: PUT https://example.com/{v}/{resource}/{id}
    • Delete: DELETE https://example.com/{v}/{resource}/{id}
    • Update: PATCH https://example.com/{v}/{resource}/{id}
    • Search: GET https://example.com/{v}/{resource}?filter={attribute}{op}{value}&sortBy={attributeName}&sortOrder={ascending|descending}
    • Bulk: POST https://example.com/{v}/Bulk

    It would also support the other SCIM mechanisms, like use of JSON and authentication.
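    As an illustration of the endpoint patterns listed above, a client-side helper might assemble the URLs as follows. The base URL and version string come straight from the patterns and are placeholders, not a real API:

```python
# Build SCIM-like IGDM endpoint URLs following the REST patterns above.
# https://example.com and the resource/version names are placeholders.
from urllib.parse import urlencode

BASE = "https://example.com"

def endpoint(version, resource, obj_id=None, **query):
    """Return a URL for a resource collection, a single object, or a search."""
    url = f"{BASE}/{version}/{resource}"
    if obj_id is not None:
        url += f"/{obj_id}"        # Read/Replace/Delete/Update target
    if query:
        url += "?" + urlencode(query)  # Search: filter, sortBy, sortOrder...
    return url

print(endpoint("v1", "Account", "42"))
# https://example.com/v1/Account/42
print(endpoint("v1", "Account", filter='userName eq "jdoe"', sortBy="userName"))
```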

    Implementation Examples

    This section describes some sample implementations of standard IGDM object types: Person, Account, Access, Permission and Mapping.

    The Person object may need to support different Person types (such as employee and contractor) that may share some attributes, but also have unique attributes. These could be based on the SCIM Person schema.

    It would have a standard structure, and the flexible blob, as shown in the following example.

    Sample Person object in SCIM-like IGDM
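    A hypothetical sketch of such a Person object follows: a fixed common portion plus a variable extension blob keyed by person type. The schema URNs and attribute names are invented for illustration:

```python
import json

# Hypothetical Person object with a fixed portion (id, name, personType)
# and a variable portion for the Contractor person type.
# All schema URNs and attribute names are illustrative assumptions.
person = {
    "schemas": ["urn:igdm:core:1.0:Person",
                "urn:igdm:ext:1.0:Contractor"],
    "id": "p-1001",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "personType": "Contractor",
    "urn:igdm:ext:1.0:Contractor": {      # variable portion
        "agency": "Acme Staffing",
        "contractEnd": "2024-06-30"
    }
}
print(json.dumps(person, indent=2))
```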

    Similarly, the Account object type needs to support different account schemas, each with very different attribute sets. An example of this is shown below.

    Sample Account object in SCIM-like IGDM
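    Sketching the same pattern for accounts, an AD account and a RACF account might share a fixed portion while carrying very different variable portions. Again, the URNs and attributes are illustrative assumptions, not a defined schema:

```python
# Hypothetical Account objects sharing a fixed portion (userName, target)
# with account-type-specific variable portions. URNs/attributes are invented.
ad_account = {
    "schemas": ["urn:igdm:core:1.0:Account", "urn:igdm:ext:1.0:ADAccount"],
    "userName": "jdoe",
    "target": "corp.example.com",
    "urn:igdm:ext:1.0:ADAccount": {
        "sAMAccountName": "jdoe",
        "userPrincipalName": "jdoe@corp.example.com"
    }
}
racf_account = {
    "schemas": ["urn:igdm:core:1.0:Account", "urn:igdm:ext:1.0:RACFAccount"],
    "userName": "JDOE",
    "target": "SYSA",
    "urn:igdm:ext:1.0:RACFAccount": {
        "defaultGroup": "SYS1",
        "segments": {"TSO": {"acctNum": "ACCT01"}}  # RACF user segments
    }
}
# The endpoint only needs the fixed portion; a pluggable component selected
# by the extension schema handles the variable portion.
for acct in (ad_account, racf_account):
    print(acct["schemas"][1], "->", acct["userName"])
```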

    In IGDM we defined different types of Access; AccessGroup = collection of users, AccessRole = collection of permissions, ACLs = M:N mapping of users/groups to permissions/resources, and AccessRules = code/rule used for access evaluation. As with the Person and Account object types, we can implement a single object type (Access) to support these. An example of this is shown below (an AccessGroup with multiple members).

    Sample Access object in SCIM-like IGDM
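    A hypothetical sketch of an AccessGroup carried in the single Access object type, with an accessType discriminator selecting the variant (all names and identifiers invented):

```python
# Hypothetical Access object: one object type covers AccessGroup, AccessRole,
# ACL and AccessRule via an accessType discriminator. Names are illustrative.
access_group = {
    "schemas": ["urn:igdm:core:1.0:Access"],
    "accessType": "AccessGroup",
    "id": "g-finance",
    "displayName": "Finance Users",
    "members": [
        {"type": "Account", "value": "a-100"},
        {"type": "Account", "value": "a-101"},
        {"type": "AccessGroup", "value": "g-fin-admins"}  # nested group
    ]
}
print(access_group["accessType"], "with", len(access_group["members"]), "members")
```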

    Permissions in IGDM are normally fixed values supplied by a target system. Thus, they would be sent from a target system to the identity management/governance tool, but not pushed back down. The permissions may be single values in a set, or more complex with values specified. The following two examples show this.

    Sample Permission object in SCIM-like IGDM
    Sample Permission object in SCIM-like IGDM
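    A hypothetical sketch of the two shapes described above – a simple fixed-value permission, and a permission with values specified (the SAP-style valued example is an assumption):

```python
# Hypothetical Permission objects. These flow from the target system to the
# IGA tool only. Names, URNs and the valued example are illustrative.
simple_perm = {
    "schemas": ["urn:igdm:core:1.0:Permission"],
    "name": "READ"                      # one value from a fixed set
}
valued_perm = {
    "schemas": ["urn:igdm:core:1.0:Permission"],
    "name": "S_TCODE",                  # e.g. an SAP-style authorization
    "values": {"transaction": "ABCD", "company": "124", "mode": "read-only"}
}
print(simple_perm["name"], "/", valued_perm["name"])
```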

    The previous examples have shown objects shipped between target systems and identity management/governance tools, and the events may represent search results (such as in a reconciliation) or add/modify/delete operations (i.e. provisioning). However, there is often a need to ship relationships, such as the change to a group membership, in IGDM.

    This can be done with the Mapping object type. An example, showing an “AssignmentAndPermit” change, is below.

    Sample Mapping relationship in SCIM-like IGDM
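    A hypothetical sketch of such a Mapping object, carrying a single assignment-with-permission change as a self-contained message (all names and identifiers invented):

```python
# Hypothetical Mapping object: ships one relationship change between an
# account and an access group, with attached permissions. Names are invented.
mapping = {
    "schemas": ["urn:igdm:core:1.0:Mapping"],
    "mappingType": "AssignmentAndPermit",
    "operation": "add",                 # add / remove the relationship
    "subject": {"type": "Account", "value": "a-100"},
    "object": {"type": "AccessGroup", "value": "g-finance"},
    "permissions": [{"name": "READ"}]   # permissions attached to the mapping
}
print(mapping["operation"], mapping["subject"]["value"],
      "->", mapping["object"]["value"])
```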

    Thus, we can implement all the IGDM constructs in a SCIM-like implementation leveraging the SCIM mechanisms but adding a pluggable variable structure to support the variations in IGDM data object schemas.

    Conclusion

    The Identity Governance Data Model (IGDM) describes a rich data model for the passage of identity governance data between identity management/governance tools and target systems with access repositories in an IGA environment.

    Legacy identity management models and standards, like DSML and SPML, are data rich, but heavy implementations that have not been widely adopted. SCIM is very popular for lightweight cloud-based implementations but would need significant extension to support a rich model like IGDM.

    In this article we have described a SCIM-like implementation that supports the IGDM and the variability needed across different implementations of IGDM object types, like ADAccount and RACFAccount objects, and provided sample implementations showing how the fixed and variable components could be used.

    This is the last article in the series on the Identity Governance Data Model (IGDM). Is it real? Not yet, but hopefully one day it will.

    This article originally appeared on the IBM Security IAM Blog: https://www.ibm.com/blogs/security-identity-access/

  • IGDM Part 2 – Validating the Proposed Identity Governance Data Model

    This article is the second in a series of three looking at a proposed common Identity Governance Data Model (IGDM). This second article validates the model against some common complex applications.

    This model attempts to address the needs of managing heterogeneous complex target system access models in an Identity Governance and Administration (IGA) environment.

    The proposed IGDM is designed to standardize identity management and governance data flows between IGA systems and target systems hosting access repositories, by providing a common data structure that could be implemented with clients/servers at both the IGA systems and the target systems.

    The proposed data model is shown below.

    Proposed Identity Governance Data Model E-R Diagram

    It provides for a standard set of objects, such as Person, Account, Resource and Permission, and relationships between objects. This allows for different target system access models, some of which apply changes via objects and attributes, and some of which apply changes via relationships.

    This article will validate the data model against the access models used by many enterprise applications. A later article will suggest an implementation.

    Introduction

    The previous article in this thread presented a proposed Identity Governance Data Model (IGDM) to meet the needs of large and complex Identity Governance and Administration (IGA) ecosystems with identity management/governance tools and target systems with complex access models.

    The proposed IGDM is shown in the following figure.

    The Proposed Identity Governance Data Model

    It describes a set of objects and relationships to provide a standard approach to IGA data that is shipped between IGA tools and target systems.

    This data model was developed by analyzing the access models used by many enterprise applications, such as IBM Security Access Manager (ISAM), IBM z/OS RACF, Microsoft Active Directory (AD) with Office365 and Sharepoint, Microsoft SQL Server, Oracle Database, SAP, Amazon Web Services (AWS) and salesforce.com. This represents a mix of on-premise and cloud enterprise applications with complex access models.

    This article will present how some of these enterprise application access models fit into the proposed IGDM as a form of validation of the data model.

    Validating the IGDM

    We will explore the access models of a range of enterprise applications to show how they fit in the IGDM. The applications explored are: IBM Security Access Manager (ISAM), z/OS RACF, SAP and Amazon Web Services (AWS).

    ISAM and the IGDM

    The IBM Security Access Manager (ISAM) access model can be summarized as follows:

    • ISAM Users are accounts, and there are groups of accounts
    • ISAM uses Access Control Lists (ACLs) to map users and/or groups to sets of resources with associated access levels. For example, Group PowerUsers may access the Accounting branch of the web objectspace (and objects under it) with Read and Traverse permission
    • ISAM secures Protected Objects, like web pages or branches of the web page structure
    • ISAM has an extensive set of access levels and supports the creation of custom ones
    • ISAM has Protected Object Policies for time-of-day and other restrictions on access
    • ISAM has Access Rules stored as an XSL blob and evaluated at run time

    The ISAM access model will map to the following (highlighted in orange) objects and relationships in the IGDM.

    ISAM Access Objects in the IGDM

    The IGDM objects and their mapping to ISAM access model objects is shown in the following table (IdM = IGDM).

    The ISAM access model fits nicely in the IGDM.

    z/OS RACF and the IGDM

    The z/OS Resource Access Control Facility (RACF) is one of the Enterprise Security Managers (ESMs) used on z/OS mainframes. It provides a very rich but complex access model – there are over fifty rules evaluated when determining whether and how a user can access a resource.

    Whilst this is nowhere near a complete representation of RACF, the following constructs are common in RACF environments:

    • RACF User Profiles are accounts, user objects have default access (tied to attributes) and user profiles can have multiple user segments (attribute sets)
    • RACF Group Profiles group users, can be in a hierarchy for administration and can have default access associated with a group
    • RACF defines Resource Profiles: Datasets and other Resources. Resources can have default access.
    • Users are assigned to Groups (via the Connect command) and there may be some default permissions on the connection
    • Access is granted via Resource Access Lists (defined via the Permit command) and can have standard or conditional access lists (conditional has scoping, e.g. can only be run via a specified program)
    • Resources can be defined in access lists as discrete or generic (i.e. with wildcarding)
    • Permission lists will include an access authority (e.g. NONE, READ, UPDATE etc.). They could also include a set of conditional/WHEN clauses

    This is a simple, but representative, view of the RACF constructs.

    The RACF access model will map to the following (highlighted in orange) objects and relationships in the IGDM.

    z/OS RACF Access Objects and the IGDM

    The IGDM objects and their mapping to RACF access model objects is shown in the following table (IdM = IGDM).

    At this level of detail, the RACF access model fits with the IGDM. There may be more esoteric RACF constructs that don’t fit the model, but this will address many customer deployments of RACF.

    SAP and the IGDM

    SAP represents a suite of products that have been developed or acquired by SAP SE over the last thirty years. However, when we talk about SAP we are normally referring to the core SAP modules that share the common ABAP/Netweaver framework.

    The SAP access model can be summarized as follows:

    • SAP accounts are called Users, and have a user name and other attributes
    • Users are mapped to Roles or Profiles. Roles can be composite or single, and profiles can be composite or manual. Composite roles contain single roles, and a single role may be in multiple composite roles. Composite profiles contain manual profiles, and a manual profile may be in multiple composite profiles.
    • Roles and Profiles contain Authorizations, which may be transaction codes (T-CODEs) or other Authorization objects. These can contain multiple fields/values to further restrict access. For example, a user may be able to run transaction ABCD but only for company 124 and in read-only mode. Thus, a role/profile may represent a complex set of access objects.
    • Groups in SAP are used for bulk administration and are not tied to access.

    The SAP access model will map to the following (highlighted in orange) objects and relationships in the IGDM.

    The SAP Access Model in the IGDM

    The IGDM objects and their mapping to SAP access model objects is shown in the following table (IdM = IGDM).

    The complexity of SAP Authorizations can be handled in the Resource + Resource Policy objects, with the Permission object handling the allowable fields and values. The rest is simple Account – Access Role – Resource mapping.

    AWS and the IGDM

    Amazon Web Services (AWS) can use its own accounts and access objects and can also leverage external services like LDAP. The term “Account” in AWS refers to an account with AWS – it may represent an individual but will normally refer to a company that has an account to run services in AWS, and it is not directly related to the access model.

    The AWS access model can be summarized as follows:

    • An AWS user account is called a User, which includes login credentials, and there can be groups of users
    • User Permissions, via attached policies, control the ability to perform tasks using AWS Resources
    • AWS Roles are a special item. Roles are used for applications to interact with services or for delegated remote access (e.g. via federation), not user-based management. They are collections of permissions, but are not associated directly with users or groups
    • IAM policies grant/deny permission to one or more Amazon EC2 actions, and the policies are mapped to one or more users or groups
    • AWS also has resource-based policies
    • AWS can also use Access Control Lists (ACLs) – mapping of multiple identities to resource access at different levels

    A sample AWS IAM policy is shown below.
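    The original figure is an image; a representative IAM identity-based policy of the kind it would show is sketched below (the bucket name is hypothetical):

```python
import json

# Representative AWS IAM identity-based policy: allows listing the contents
# of a single S3 bucket. The bucket name is a hypothetical example.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-bucket"
        }
    ]
}
print(json.dumps(policy, indent=2))
```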

    The AWS access model will map to the following (highlighted in orange) objects and relationships in the IGDM.

    The AWS Access Model in the IGDM

    The IGDM objects and their mapping to AWS access model objects is shown in the following table (IdM = IGDM).

    Excluding the objects not directly related to users, the AWS access model fits in the IGDM.

    This concludes the validation of the IGDM by looking at the access models of various enterprise systems.

    Conclusion

    The Identity Governance Data Model (IGDM) was defined by analyzing the access models of various enterprise applications such as IBM Security Access Manager (ISAM), IBM z/OS RACF, Microsoft Active Directory (AD) with Office365 and Sharepoint, Microsoft SQL Server, Oracle Database, SAP, Amazon Web Services (AWS) and salesforce.com. This represents a mix of on-premise and cloud enterprise applications with complex access models.

    In this article we have described the access models of ISAM, z/OS RACF, SAP and AWS and how they map into the proposed IGDM, to show how the IGDM can represent different target system access models.

    The next article in the thread will look at how the IGDM could be implemented in a SCIM-like mechanism.

    This article originally appeared on the IBM Security IAM Blog: https://www.ibm.com/blogs/security-identity-access/

  • IGDM Part 1 – Proposing an Identity Governance Data Model

    This article is the first in a series of three looking at a proposed common Identity Governance Data Model (IGDM). This first article proposes the model.

    This model attempts to address the needs of managing heterogeneous complex target system access models in an Identity Governance and Administration (IGA) environment.

    The proposed IGDM is designed to standardize identity management and governance data flows between IGA systems and target systems hosting access repositories, by providing a common data structure that could be implemented with clients/servers at both the IGA systems and the target systems.

    The proposed data model is shown below.

    Proposed Identity Governance Data Model E-R Diagram

    It provides for a standard set of objects, such as Person, Account, Resource and Permission, and relationships between objects. This allows for different target system access models, some of which apply changes via objects and attributes, and some of which apply changes via relationships.

    This article will describe the proposed data model. Subsequent articles will validate the proposed data model against common target systems and suggest an implementation.

    Introduction

    Identity Management (the management of user accounts and access) is a mature domain within Identity and Access Management (IAM) having developed over the past twenty years. It covers the standard CRUD operations (create, read, update, delete) for managing target system accounts and access rights.

    More recently this has expanded to include governance scenarios, such as access recertification, risk management, role mining and fine-grained permission analysis and reporting. In parallel, the scope of what’s now known as IGA (Identity Governance and Administration) has grown from the traditional IT-focused on-prem systems to hybrid-cloud and multi-cloud patterns with cloud and on-prem target systems (and access repositories) managed by one or more identity management/governance tools.

    As the IGA landscape has grown, so has the need for a common data format for the interchange of IGA data.

    The Need for a Common IGA Data Model

    Over the years many data standards and transmission protocols have developed, such as LDAP Data Interchange Format (LDIF), Directory Services Markup Language (DSML) and Service Provisioning Markup Language (SPML). These have enjoyed varying degrees of adoption, but none became universal.

    The System for Cross-Domain Identity Management (SCIM, aka Simplified Cloud Identity Management) is currently popular. It is a very lightweight implementation, which is great for cloud deployment patterns; however, the data model, as with many before it, does not address complex access models. It has only the concepts of users (including accounts) and groups.

    Many enterprise applications and products used today, such as SAP and IBM z/OS RACF, have very complex access data models. In addition to users/accounts and groups, there are often multiple levels of resources, permissions, access rights, ACLs etc.

    SCIM has two major limitations for complex IGA data needs: the data model is very simple, meaning extensions are required; and each extension for a specific need requires extending the endpoints that send/receive the data. For example, SCIM only supports users and groups out of the box. If many different account schemas are needed (e.g. Microsoft AD, SAP, z/OS RACF, salesforce.com, AWS) then custom SCIM user resources are required, and each one will require SCIM endpoints to be coded to support the different SCIM resources.

    A final challenge in this ever-expanding IGA world is the lack of standard terminology – terms such as user, group and role are overloaded and lead to confusion when working across many systems.

    This article proposes an Identity Governance Data Model to address these needs.

    The Proposed Identity Governance Data Model

    By analyzing the access models of many common enterprise applications, I have been able to come up with a data model that addresses most, if not all, access model needs.

    The Proposed Model

    The proposed Identity Governance Data Model (IGDM) is shown in the following figure.

    The Proposed Identity Governance Data Model

    The model includes the primary IGA objects of Person, Account and Resource. It also includes other access objects or relationships that may be used by different target systems to control access for the accounts. These will be explored in the following sections.

    The idea behind the IGDM is that the one model can be used, but message flows between IGA tools and target systems may only use some of the objects or relationships (as needed by the operation and the target system access repository).

    Person and Account Data Objects

    The model has person and account data objects:

    • Person – the person object with attributes describes a real person, such as an employee, contractor or customer. Person objects are often managed in IGA systems. There may be groupings of Persons to support bulk administration on some systems (the groups are not normally managed by IGA systems).
    • Account – the account object represents a set of credentials for a person on a specific target system, including account attributes. Whilst accounts are unique to target systems, account attributes (like userid) may be common across multiple accounts. Some target systems will use attributes as permissions (e.g. z/OS RACF), and some will collect attributes into subsets (like segments in z/OS RACF). Account management is a key component of IGA. There may also be user profiles to define standard or default attributes for types of accounts or users.

    These are fairly standard in any IGA implementation but need to be flexible to support different schemas.

    Access Data Objects

    The model describes different access data objects:

    • Access Group – an Access Group is a collection of accounts, possibly with additional attributes associated with the group object (including attribute permissions) and may be in a hierarchy (i.e. groups within groups). Groups will be associated with Access Roles, ACLs or tied directly to Resources.
    • Access Role – an Access Role is a collection of permissions, possibly with additional attributes associated with the role object. Access Roles are normally used to assign a common set of permissions to one or more users/groups and applied to resources, and thus can be thought of as the accesses needed for a job role or job function.
    • Access Control List – an Access Control List (ACL) is an access construct used by some target systems (such as IBM Security Access Manager, or ISAM) and represents a many-to-many mapping of users/groups to roles/permissions. You could think of Access Groups and Access Roles as limited membership ACLs.
    • Access Rules – an Access Rule is some programmatic logic implemented by a target system to evaluate access (e.g. ISAM). The object contains the code/logic and would be understandable by the target system but not necessarily by the IGA tool. However, there may need to be sharing of the access rules for visibility in the IGA tool.

    Note the use of “Access” as a prefix for Groups, Roles and Rules in the IGDM. This is deliberate to avoid ambiguity with the use of these objects as compared to “group” and “role” which are overloaded terms.
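The three structural access objects can be sketched as follows. Again, the class and field names are illustrative assumptions, not a defined IGDM schema; the point is the shape of each object: groups as (possibly nested) account collections, roles as permission collections, and ACLs as many-to-many subject-to-role mappings.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AccessGroup:
    """Collection of accounts; may nest (groups within groups)."""
    name: str
    members: List[str] = field(default_factory=list)            # account ids
    subgroups: List["AccessGroup"] = field(default_factory=list)

@dataclass
class AccessRole:
    """Collection of permissions applied to resources."""
    name: str
    permissions: List[str] = field(default_factory=list)        # e.g. ["READ", "UPDATE"]

@dataclass
class ACLEntry:
    """One row of a many-to-many mapping of users/groups to roles/permissions."""
    subject: str    # a user or a group
    role: str       # a role or permission set

# An ACL is just a list of such entries
acl = [ACLEntry("grp-admins", "admin"), ACLEntry("jsmith", "reader")]

# A group of groups, as allowed by the hierarchy
ops = AccessGroup("ops", members=["jsmith"])
it = AccessGroup("it", subgroups=[ops])
```

Note how, under this shape, an Access Group or Access Role really is a special case of an ACL with membership restricted to one side of the mapping.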

    Resource Data Objects

    The model describes the following resource data objects:

    • Resource – a resource is a target system object being secured, such as a file or transaction definition, and will be accessed by a user on that system. In some target system access models, users (accounts) can be mapped to resources directly (which is considered bad practice in IGA) or via groups (better IGA practice). Some target systems will support wildcarding or generic references in resource definitions (such as z/OS RACF).
    • Resource Policies – there may be policies applied to resources, such as time-of-day restrictions, auditing levels, or IP access restrictions (as in ISAM).
    • Permissions – a permission may be an access scope or access level. Often target systems will employ a standard, fixed set of permissions (such as NONE, READ, UPDATE… in z/OS RACF) that are applied to a resource definition when assigning it to a user/group. For example, John may use the ABCD resource but only at the READ level. There may be allowable values within a specific permission set, and the target system may allow creation of custom permissions (although this is rare, as permissions are often coded into the target system application).

    Permissions are often used in conjunction with object relationships (next section).
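The resource-side objects and the John/ABCD/READ example above can be sketched like this. The names (Resource, Grant, PERMISSION_LEVELS) are hypothetical; the fixed permission set shown is the RACF-style one the text mentions, used purely to illustrate a permission applied when assigning a resource to a user/group.

```python
from dataclasses import dataclass, field

# A fixed, target-system-defined permission set (RACF-style levels as an example)
PERMISSION_LEVELS = ["NONE", "READ", "UPDATE", "CONTROL", "ALTER"]

@dataclass
class Resource:
    """A target-system object being secured; the name may be generic/wildcarded."""
    name: str                                        # e.g. "PROD.PAYROLL.*"
    policies: dict = field(default_factory=dict)     # e.g. {"time_of_day": "08:00-18:00"}

@dataclass
class Grant:
    """A permission applied when assigning a resource to a user/group."""
    subject: str      # account or group
    resource: str
    level: str        # must be one of the target system's allowable values

    def __post_init__(self):
        if self.level not in PERMISSION_LEVELS:
            raise ValueError(f"level must be one of {PERMISSION_LEVELS}")

# "John may use the ABCD resource but only at the READ level"
g = Grant(subject="JOHN", resource="ABCD", level="READ")
```

The validation in __post_init__ reflects the point that permissions usually come from a fixed set coded into the target system, rather than being freely invented.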

    Relationships in the Data Model

    Relationships are used throughout the data model to define mappings between objects. Some of these may be logical, where a relationship is actually held on one of the objects (e.g. an account having a group attribute with a list of access groups). Others are separate objects in their own right, possibly having attributes/permissions directly attached to the relationship model.

    From an identity management perspective, a change to access may be sent as a change to a relationship even if the target system doesn’t implement the relationship as a separate object. For example, if a Microsoft Active Directory (AD) system is being managed by an IGA tool, changes to group membership may be sent as an “add this account to this group” or “remove this account from this group” message, rather than a complete refresh of the account or of the data object holding the relationship attribute. Some current identity management systems send the whole account object with a full group list, which may inadvertently wipe un-sync’d changes; alternatively, multiple group-membership messages may be processed out of order, leaving some memberships incorrect. Sending each membership change in isolation avoids both problems.

    The relationships defined in the data model are shown in the following figure.

    IGDM Relationships

    Note that there are no explicit relationships defined for Permissions as they are normally attached directly to the other objects and relationships.

    This concludes the definition of the proposed Identity Governance Data Model (IGDM).

    Conclusion

    The Identity Governance Data Model (IGDM) proposed in the body of this article represents an attempt to standardize the structure and definition of Identity Governance and Administration (IGA) data flowing between IGA tools (identity management/governance products) and target system account repositories.

    The model provides for some standard definitions and terms to remove the ambiguity that often comes from overloaded IGA terms like group and role.

    The model is based on the analysis of many enterprise applications, both on-prem and cloud, and allows rich access-model information to be passed between systems. This prescriptive approach addresses shortcomings of existing standards such as the System for Cross-Domain Identity Management (SCIM) data model.

    The next article in this series will validate the data model by applying it to some common enterprise systems. The last article in the series will look at how the model could be implemented in a SCIM-like standard.

    This article originally appeared on the IBM Security IAM Blog: https://www.ibm.com/blogs/security-identity-access/

  • Welcome to IAmDavid

    Welcome to my IAM (Identity and Access Management) blog, focusing on IGA (Identity Governance and Administration), PAM (Privileged Access Management) and associated aspects of IAM. I’ve been writing articles on IBM’s Security Community and LinkedIn, but I thought it would make more sense to have them in one place – here. Since starting the blog I’ve moved over to Okta products.