Author: iamdavid

  • Using the Secrets API with Okta Privileged Access

    Okta Privileged Access can store and retrieve generic secrets in its vault. This can be done via the user interface, the sft client on the command line, or via the Secrets API. This article will explore the Secrets API for managing secrets in the vault.

    Overview

    Secrets management involves both folders (and a folder hierarchy) and the secrets stored in a folder. Access to (and maintenance of) the folders and secrets is controlled by Security Policy in Okta Privileged Access. More information on secrets, folders and management can be found in the Secrets section of the product documentation. Folders and secrets can be managed in the Okta Privileged Access user interface, via the client (sft) or by using the API. This article is focused on the APIs.

    Most of these APIs can be implemented in Okta Workflows leveraging either a Generic API (HTTP) Connector or the new Okta Privileged Access Connector. Some APIs require use of JSON Web Encryption (JWE) for secrets and, as Okta Workflows does not have a mechanism to encrypt/decrypt JWEs, the Create/Update/Reveal a Secret APIs cannot be run in Okta Workflows.

    Secrets API

    There are a set of Secrets APIs available to programmatically access and manage the secrets and folders.

    When working with secrets/folders, there are two hierarchies you may need to contend with – the resource group/project structure and the folder hierarchy attached to specific resource groups/projects. The following figure shows multiple resource groups, one with multiple projects. Within each project, there is a hierarchy of secret folders.

    You will normally need the secret id (or folder id) to manage a secret/folder via the APIs. In many cases you only need to know the resource group and project IDs to find the relevant secret/folder. However, some APIs require traversing or navigating a folder structure within a specific resource group/project. This is covered when discussing the different APIs in the following sections.

    Top-level Folders APIs

    When we talk about folders we often distinguish between top-level folders and other folders. Top-level folders are attached to a Project in a Resource Group and are the top of any tree hierarchy. Other folders are attached to the top-level or other folders to create that hierarchy.

    There are three APIs specifically for top-level folders:

    • List top-level Secret Folders for a Project – list all top-level folders in a project, returning an array that includes the id, name and description of the folders. For example, if this was run for Project BB in the above diagram, it would return a list with Top-level Folder 3, Top-level Folder 3A, and Top-level Folder 3B.
    • List top-level Secret Folders for a Team – list ALL top-level folders in an Okta Privileged Access team. It returns the id, name and description, along with information about the resource group and project. As most other folder/secret APIs need the Id of the resource group and project, this API call may be useful in a search function. In the example above it would return a list with all six Top-level Folders (1, 2, 3, 3A, 3B, 4).
    • List top-level Secret Folders for User – list ALL top-level folders for a user. Note that this API does not allow for a user to be specified and just uses the service user making the API call, so it may not provide a lot of value.

    There are no CRUD APIs specifically for top-level folders. You use the folder APIs to create, read, update and delete top-level folders (see below). To use the create API to create a top-level folder, you do not specify a parent_folder_id in the body.
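    The create-folder call can be sketched in Python as follows. Note that the endpoint path and field names here are assumptions modelled on the API descriptions in this article (not verbatim documentation), and the session object stands in for an authenticated HTTP client; the same call creates a top-level or nested folder depending on whether parent_folder_id is present.

```python
def create_secret_folder(session, base_url, team, rg_id, project_id,
                         name, description="", parent_folder_id=None):
    """Create a secret folder; omit parent_folder_id for a top-level folder.

    The path below is an assumption modelled on the documented
    /v1/teams/... URL structure, not a verbatim endpoint.
    """
    body = {"name": name, "description": description}
    if parent_folder_id is not None:
        body["parent_folder_id"] = parent_folder_id
    url = (f"{base_url}/v1/teams/{team}/resource_groups/{rg_id}"
           f"/projects/{project_id}/secret_folders")
    resp = session.post(url, json=body)
    resp.raise_for_status()
    return resp.json()  # includes the new folder's id and path
```

    An authenticated requests.Session (with the bearer token set in its headers) can be passed straight in as the session argument.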

    Folders APIs

    The folders APIs are:

    • Create a Secret Folder – create a folder in a project. If a parent_folder_id is specified it will place the new folder under the specified parent; otherwise it will create a top-level folder. The response will include the id of the new folder and its path (an array representing the hierarchy).
    • Retrieve a Secret Folder – this will return the name, description and create/update info for a folder.
    • Update a Secret Folder – this allows updating the name and optionally the description of a folder.
    • Delete a Secret Folder – this deletes a secret folder.
    • List all items in a Secret Folder – list all items (secrets and folders) in a folder. Note that this is a “single-level” search – it only returns the items at this level (it does not recursively search through sub folders). This API supports pagination.

    There is no Team-wide search API available. If you need to find a folder (or a secret within it) you will need to traverse the folder hierarchy, starting with the list of top-level folders and using the List all items API on each.
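    That traversal amounts to a breadth-first walk of the folder tree. The sketch below is a minimal illustration of the logic; the two fetch functions are injected so the traversal is independent of the HTTP layer, and the item shape ("id", "name", "type" fields) is an assumption based on the API descriptions above.

```python
from collections import deque

def find_folder_by_name(list_top_level, list_items, target_name):
    """Breadth-first search of a project's folder tree for a folder name.

    list_top_level() -> list of {"id", "name"} dicts (top-level folders)
    list_items(folder_id) -> list of {"id", "name", "type"} dicts, where
        "type" is "folder" or "secret" (a single-level listing)
    Returns the matching folder dict, or None if not found.
    """
    queue = deque(list_top_level())
    while queue:
        folder = queue.popleft()
        if folder["name"] == target_name:
            return folder
        # Only sub-folders are traversed further; secrets are leaf items.
        queue.extend(i for i in list_items(folder["id"])
                     if i.get("type") == "folder")
    return None
```

    In practice the two callables would wrap the List top-level Secret Folders for a Project and List all items in a Secret Folder APIs (handling pagination on the latter).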

    Secrets APIs

    There are five APIs for working with secrets:

    • Create a Secret – this will take an encrypted secret value and store it in the vault, associated with a resource group, project and folder.
    • Retrieve a Secret – this will get details of a secret, such as name and description, created/updated details, and path. Note that you don’t need to traverse the folder structure to find a specific secret; you just specify the resource group, project and secret ids (which can be found using the Resolve Secret or Folder API).
    • Reveal a Secret – whereas the Retrieve API will provide details of the secret, it does not show the secret values. The Reveal API is passed a public key which is used to encrypt the secret values and return them.
    • Update a Secret – update the name, description and/or secret value of a secret.
    • Delete a Secret – delete the secret.

    Most of these are straightforward to use. However, the Create, Reveal and Update APIs need to work with public/private keys and JSON Web Encryption (JWE) secrets (i.e. they require encryption/decryption of secrets). This is explored in more detail below.

    Common APIs for Secrets and Folders

    There is a single API that applies to both secrets and folders – Resolve Secret or Folder. You tell it where the secret or folder is and it will return the id of the secret or folder.

    It has limited value as it’s not a search function – you need to know where the secret/folder is to use it. But it does mean you don’t need to traverse the folder hierarchy, just the resource group/project hierarchy, to find the right one. For example, if you ran it against Resource Group B / Project BB (in the figure above) it would search for a secret across all seven folders in that project. Thus, to search, you could list all resource groups and the projects within them, and then use this Resolve API to check each resource group/project pair.
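    That search pattern can be sketched as follows. All the endpoint paths and the resolve request body below are assumptions based on the API descriptions in this article, not verbatim documentation, and the session stands in for an authenticated HTTP client.

```python
def find_secret_across_team(session, base_url, team, secret_path):
    """Search every resource group / project pair for a secret.

    Iterates all resource groups and their projects, calling an assumed
    Resolve endpoint in each; returns (rg_id, project_id, secret_id)
    for the first match, or None. Response shapes are assumptions.
    """
    rgs = session.get(
        f"{base_url}/v1/teams/{team}/resource_groups").json()["list"]
    for rg in rgs:
        projects = session.get(
            f"{base_url}/v1/teams/{team}/resource_groups/{rg['id']}/projects"
        ).json()["list"]
        for project in projects:
            resp = session.post(
                f"{base_url}/v1/teams/{team}/resource_groups/{rg['id']}"
                f"/projects/{project['id']}/secrets/resolve",
                json={"path": secret_path},
            )
            if resp.status_code == 200:
                return rg["id"], project["id"], resp.json().get("id")
    return None
```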

    Creating, Updating and Revealing Secrets

    Whilst most of the APIs are reasonably simple to use, the Create/Update a Secret and Reveal a Secret APIs are more complicated as they require JWE encryption/decryption capabilities.

    The following diagram shows the main flows between a client (where the APIs are run from) and Okta Privileged Access when using the Create/Update/Reveal a Secret APIs.

    There are two sets of flows:

    1. The top set (orange arrows) show the flows to create or update a secret, which require obtaining the public key from Okta Privileged Access to encrypt the secret.
    2. The bottom set (green arrows) show the flows to reveal a secret, which require creating a private/public key pair, passing the public key to Okta Privileged Access so it can encrypt the secret, then using the private key to decrypt the secret in the client.

    These flows will be described in the following sections.

    Conventions with Keys and Encryption

    There are some common conventions around the keys and encryption that may not be clear in the product documentation:

    • The create and update options use the public key from the Okta Privileged Access team. This is downloaded using the /v1/teams/:team_name/vault/jwks.json endpoint.
    • The Okta Privileged Access public key uses an RSA key type with the RSA-OAEP-256 algorithm. Any private/public key pair you use should do the same. A key size of 2048 is fine. The key id does not appear to be used.
    • The encrypted secret (provided by Okta Privileged Access in a reveal operation, or passed to Okta Privileged Access in a create or update operation) is a fully serialized JWE. The two serialization options are described in the JSON Web Encryption (JWE) Overview section of the RFC. The default for many JWE libraries is compact serialization. The Okta Privileged Access APIs require full serialization, which the spec refers to as JWE JSON Serialization.

    The following sections will expand on the flows and provide examples of keys and encryption used.

    Encrypt and Create/Update a Secret

    The process involved in creating (or updating) a secret is shown in the top half of the diagram above.

    The first step is to get the public key from your instance of Okta Privileged Access. This only needs to be done once as the public key will not change. An example jwks.json file is shown below.

    Once this file is downloaded, the steps to create (or update) a secret are:

    1. Use this public key to encrypt the secret into a fully serialized JWE
    2. Use the Create a Secret (or Update a Secret) API to upload the encrypted secret to Okta Privileged Access
    3. Okta Privileged Access will decrypt the secret_jwe with its private key and store it in the vault

    When the public key is used to encrypt the secret into a fully serialized JWE, it will look something like the following (containing “protected”, “encrypted_key”, “iv”, “ciphertext” and “tag” fields).

    This is then passed to the Create/Update a Secret API in the body along with the secret name and parent_folder_id.

    If the operation is successful, you should be able to reveal the secret in the Okta Privileged Access user interface.

    Reveal and Decrypt a Secret

    The process involved in revealing and decrypting a secret is shown in the bottom half of the diagram above and has three basic steps:

    1. Generate a key pair (private/public)
    2. Call the Reveal a Secret API passing over the public key, where the API will use the public key to encrypt the secret and return the secret_jwe value
    3. Use the matching private key to decrypt the secret_jwe value

    To do this you will need a library for your programming language of choice that can manage keys, JWK/JWE and encryption with RSA.

    The following is a Python example to perform all three steps above (thanks to my colleague Rajesh Kumar for the code).

    It starts by generating a Key ID, then creating a key with Key Type of RSA, size of 2048 and the Key ID. A public key for this is exported. This is a JSON Web Key formatted object (example shown below).

    The next section builds the API URL, sets the payload to be the “public_key” and then POSTs the request. See the Reveal a Secret documentation for more information. If successful (status_code = 200) it returns the fully serialized JWE in the secret_jwe field (example shown below).

    The last section will use the private key to decrypt/decode the secret.

    This concludes the exploration of the create/update and reveal operations.

    Conclusion

    Okta Privileged Access provides a generic secrets capability in its vault. Secrets, and the folders for storing them, can be managed via the web user interface, the client or the Secrets API. This article has explored the different APIs available to manage folders and secrets. It has dived into the secret create, update and reveal operations, and the key management and JWE encryption needed to work with those APIs.

  • Bulk Imports of Sudo Rules for Okta Privileged Access using Workflows

    This article showcases two new features of Okta Privileged Access – Sudo command bundles and the Okta Privileged Access Workflows connector. It shows how a standard workflow mechanism can be used for bulk-loading sudo commands, specifically for commands to work with OpenLDAP.

    Introduction

    Okta recently released two new capabilities to Okta Privileged Access.

    The first is Sudo command bundles, where sets of commands can be bundled together and distributed to Linux servers as sudo command files (sudoers.d files). This provides a more granular access model, sitting between the user-level and admin-level access that was previously available.

    The second is an Okta Privileged Access Workflows connector. This provides Workflows actions for many of the Okta Privileged Access APIs, and where there isn’t a card provided, there’s a Custom API Action card. Using the connector significantly simplifies calling the APIs, as you configure authentication when you set up the connector and don’t need to worry about it when building and running the workflows.

    Combining these two features, we can implement a bulk import mechanism for Sudo command bundles. This article explores a mechanism built in Okta Workflows to import a CSV file with a set of commands and create a command bundle in Okta Privileged Access.

    The article uses a pair of CSV files for two sets of OpenLDAP commands. But you could build out a library of command set files and import them into Okta Privileged Access as needed.

    An Example – Importing OpenLDAP Commands

    In this section we look at an implementation of the utility, importing two sets of OpenLDAP commands for an Ubuntu installation.

    OpenLDAP Commands

    OpenLDAP has two sets of commands:

    • A set of commands for use of the product, like ldapsearch for searching for objects in the directory. These are found in /usr/bin and all start with “ldap“.
    • A set of commands for administration of the product. These include a set of “slap” executables found in /usr/sbin. There are other administrative commands, such as dpkg-reconfigure slapd.

    A standard installation of OpenLDAP will have both sets of commands executable by anyone, relying on directory-level access control to restrict who can do what.

    We have chosen these commands as an example for the import as they are easy to define. If you were to implement them as sudo command bundles in Okta Privileged Access you should tighten up “world” permissions on the files on the relevant server.

    The Import Files

    The import mechanism we are showing here imports a single CSV file for each sudo command bundle in Okta Privileged Access. They have a fixed structure (that will be explained later in this article). The name of the file is used as the name of the sudo command bundle. The filename cannot have spaces.

    The first file is OpenLDAP_Ubuntu_User_commands. It contains all of the /usr/bin/ldap* commands.

    The second file is OpenLDAP_Ubuntu_Admin_commands. It contains all of the /usr/sbin/slap* commands and also the /usr/sbin/dpkg-reconfigure slapd command.

    Using files like this means you can maintain a library of the command bundles outside of Okta Privileged Access, perhaps leveraging some version control mechanism.

    Command Bundles after Import

    After these files have been imported using the Workflows-based import mechanism, you see two sudo command bundles. They are named the same as the filename so that they can be easily tracked back to the source files. The user associated with them is the service user assigned to the Workflows connector.

    Looking at the user commands bundle, we can see the name and description (the description can be entered at import time; otherwise the import mechanism will derive it from the name, replacing - or _ characters with spaces).

    If we scroll down the command bundle, we see the various commands that were in the CSV file.

    Similarly the admin commands can be seen in the other sudo command bundle.

    The import mechanism only creates the command bundles; it does not do anything with policies or rules. Also, if a command bundle has been created and assigned to a policy rule, you cannot re-import the CSV (e.g. to update the list of commands) whilst it is assigned. You need to remove it from the policy rule, update the command bundle and then reassign it. You could automate this using the actions in the Okta Privileged Access Workflows connector, but that has not been done in this exercise.

    Policies and Rules

    To test out the new command bundles, we created a new policy and set a group of Principals.

    Within the policy, two rules were created, one for the user command bundle and one for the admin command bundle. As always, controls/conditionals are assigned based on risk (i.e. MFA and session recording for user commands, Access Request and session recording for admin commands).

    Within each rule, the appropriate sudo command bundle was assigned.

    The rules and policy were saved and the policy enabled.

    The User Experience

    When one of the Principals of the new policy attempts to connect to the server, the list of access methods includes the two sudo level individual account options corresponding to the new policy rules.

    In this case the Linux admin selects the LDAP-admin-commands option. When they try to run one of the slap* commands without sudo they are denied. But with sudo they can run the command.

    This concludes looking at how the import tool is used and what it produces.

    The Import Mechanism

    The import mechanism is built in Okta Workflows and leverages the Okta Privileged Access connector. It relies on a standard import file structure. These could be CSV files, but in this example the workflows have been built to consume Google Sheets documents on a Google Drive.

    Import File Structure

    The import file is structured as follows (the file shown deliberately has garbage data).

    The columns are:

    • CommandType – one of executable (an executable with or without arguments), raw (complete command with executable plus a fixed set of arguments) or directory (execute any file in this directory). If any other values are used, the utility may break or produce unexpected results.
    • Command – the executable (for executable), executable+arguments (for raw) or directory (for directory). If it is a directory it must have the trailing slash.
    • ArgsType – for executables, one of custom (expecting a list of arguments), none (no arguments allowed) or any (any arguments are acceptable). If the CommandType is raw or directory, you don’t specify an ArgsType.
    • Args – the arguments to be passed to the executable. This is only valid for executables where the ArgsType is custom.
    • Notes – information to make managing the file easier; these values aren’t used in creating the command bundle in Okta Privileged Access.

    The column headings must be as shown above (no spaces).
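    As a hypothetical illustration of the structure (these rows are made up for this article, not the garbage data in the screenshot), an import file might look like:

```csv
CommandType,Command,ArgsType,Args,Notes
executable,/usr/bin/ldapsearch,any,,search the directory
executable,/usr/sbin/slapcat,none,,dump the database to LDIF
executable,/usr/sbin/usermod,custom,-aG sudo,specific arguments only
raw,/usr/sbin/dpkg-reconfigure slapd,,,fixed command plus arguments
directory,/usr/local/ldap-scripts/,,,any file in this directory
```

    Note the trailing slash on the directory row and the empty ArgsType for the raw and directory rows, per the column rules above.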

    Workflows

    The import mechanism is implemented in Okta Workflows with a single table and four workflows.

    Connectors

    The flows below use two connections in addition to the normal workflows functions:

    • An Okta Privileged Access connector, and
    • A Google Drive connector (note that the two Google Drive cards are looking for a folder called sudo rules)

    These must be configured before the workflows can run.

    The Table

    A Workflows table is used as a temporary storage location for the import file contents. This is because Workflows cannot process a CSV file directly – the CSV needs to be imported into a table and then each row can be processed.

    The table headings are the same as the import file shown above: CommandType, Command, ArgsType, Args and Notes.

    The table is wiped at the start of each import.

    The Flows

    There are four flows, the main flow and three helper flows.

    Some of the flows include the new Okta Privileged Access Workflows connector. We will explore the main flow and the sub flow using this connector. The other two subflows (helper flows) are using fairly standard workflows cards to massage data and won’t be explored in detail.

    The Main Flow

    The main flow is set as a Delegated Flow so it can be run by anyone authorised in the Okta Admin Console (they do not need to be Workflows administrators). You might have a group of Okta Privileged Access administrators who you also grant the relevant Okta admin rights to invoke this delegated flow.

    The workflow has the following flow:

    1. Get the filename and description (optionally) from the requester, and if the filename is empty, abort the flow.
    2. If a description was not set, build one from the filename (replacing - and _ with spaces)
    3. Import the CSV file and store it in the Workflows table
    4. Get each command (table row) and format them into the body object needed for the API
    5. Check for an existing sudo command bundle and if found, delete it
    6. Use the API to create the new command bundle

    The import step (3. above) looks like the following.

    In this case we have built the import mechanism to import from a specific folder on a Google Drive (you could also import using other connectors like Excel Online). In this set of cards we search the specific folder for a file matching the filename and then use the File ID returned to Download the file into Okta Workflows. We then clear the temporary table and import the CSV file into the table (note that there’s no mapping, so the CSV headers must match the column names).

    The results of this are shown in The Table section above.

    The next section of the flow (4. above) will build the Body object that needs to be passed to the API to create the bundle.

    The table of commands is read into a list and that list is transformed into an object to suit the API. An object is then built for the other arguments in the body. The resulting body looks like the following.

    The last sections of the main flow (5. and 6. above) will use a helper flow to check for and delete any existing command bundle with the same name, then use the API to create the new bundle.

    We are using the Okta Privileged Access Connector Custom API Action card with POST to create the new command bundle. There are no actions in the connector for the sudo command bundles, but you can access any API via the Custom API Action card without having to worry about authentication credentials (which you would if you just used the Workflows HTTP Connector).

    The Relative URL for the API is /sudo_command_bundles. It is passing in the body as shown above to create the command bundle.

    This concludes the main flow.

    Note that at the time of writing this article, the API documentation for the sudo_command_bundles endpoint is not available.

    Subflow To Check and Delete An Existing Command Bundle

    The SUB – Find and Delete Existing Bundle helper flow also uses the Okta Privileged Access connector to run some API commands and is worth exploring. The role of this flow is to check if there is an existing command bundle of the same name and if so, delete it. This is because you cannot have two command bundles with the same name.

    The first part of the flow will build a query parameter (count=200) and then use the Custom API Action to GET from /sudo_command_bundles. This means it will get up to 200 command bundles. It then extracts the list from the Body of the response.

    Then there are some cards to find the current name in the existing list of command bundles and continue if one is found.

    The last part of the flow will get the item found, extract the id of the bundle and use it to build the Relative URL for the API (/sudo_command_bundles/<id>). This is then used in a Custom API Action DELETE card to delete the existing bundle.
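    Outside of Workflows, the same check-and-delete logic can be sketched in a few lines of Python. The /sudo_command_bundles paths come from this article, but recall that the endpoint is not yet formally documented, so the response shape (a "list" field in the body) is an assumption; the session stands in for an authenticated HTTP client.

```python
def delete_bundle_if_exists(session, base_url, name):
    """Find a sudo command bundle by name and delete it if present.

    Returns the deleted bundle's id, or None if no bundle matched.
    The "list" field in the GET response body is an assumption.
    """
    resp = session.get(f"{base_url}/sudo_command_bundles",
                       params={"count": 200})  # fetch up to 200 bundles
    resp.raise_for_status()
    for bundle in resp.json().get("list", []):
        if bundle.get("name") == name:
            session.delete(f"{base_url}/sudo_command_bundles/{bundle['id']}")
            return bundle["id"]
    return None
```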

    This concludes this subflow.

    Running the Import Tool

    Let’s now walk through the creation of the test sudo bundle shown above. As mentioned, the main flow begins with a Delegated Flow card. So an admin who has access goes to the Delegated flows menu item.

    They can see the MAIN flow to import sudo commands, and click the Run button.

    They enter a filename and optionally a description and click Run.

    The flow will run in the background. To see the results, you can look at the Okta System Log.

    In this case there is an event to show that the Delegated flow ran (with a SUCCESS status). After this there are two PAM events to show an older command bundle of the same name being deleted and the new one being created.

    If we go to the new Sudo command bundle we see the name based on the filename and the description as entered.

    The command bundle also shows the commands in that file, such as the following.

    If these were real commands, the command bundle could be attached to a policy rule and tested.

    Conclusion

    This article has shown how we can implement a bulk import mechanism for Sudo command bundles in Okta Privileged Access using Okta Workflows and the new Okta Privileged Access Workflows connector. We have walked through an example of two sets of OpenLDAP commands and how the import mechanism is configured.

    The article has highlighted the use of the Custom API Action card that comes with the connector and how it can be used to GET, POST and DELETE bundles with the /sudo_command_bundles API endpoint.

    You could use this mechanism to build a library of command sets and import them into Okta Privileged Access as needed. This may be simpler than manually adding them to the Okta Privileged Access UI.

    See also:

  • Centrally Managing SUDO Rules with Okta Privileged Access

    Sudo provides a granular access control mechanism on many *nix variants (if you run a Mac, sudo is the thing prompting for the password when you try to do something). The ability to centrally manage sudo rules and grant access via policy has recently been added to Okta Privileged Access. This article explores the new feature.

    Introduction

    To quote Wikipedia: “sudo is a program for Unix-like computer operating systems that enables users to run programs with the security privileges of another user, by default the superuser.” In most scenarios, it is used to elevate user privileges so that an ordinary user can run commands as the superuser.

    This fits within the mantras of “least privilege” and “zero standing privileges”, where users do not need elevated privileges, but can be granted them through being assigned to groups mapped to sudo rules files containing allowed commands and restrictions. Most Linux systems (and MacOS machines) will have a default sudo rule that allows any user to run any command as long as they know their password.

    Okta Privileged Access supports use cases where users connect to a server as themselves (with the account provisioned just in time) and are elevated to the appropriate level of access. You could have a spectrum of access methods:

    1. Individual access with no additional privileges,
    2. Individual access with some elevated privileges based on membership in sudo rules, and
    3. Individual access with full administrative access (i.e. they can run any command as root via sudo).

    This was explored earlier in Leveraging Zero Standing Privileges and Shared Account Access with Okta Privileged Access.

    Often the challenge with sudo is managing consistent rules across a large environment. Without good configuration management, there is a risk that someone could get more access than they need. There are tools that simplify this by centrally managing sudo rules tied to a directory and groups in that directory (like RedHat IdM). Okta Privileged Access also centrally manages sudo rules and ties them to users via security policies. But it also dynamically creates and destroys the rules on the servers so users cannot inadvertently get access they aren’t entitled to.

    This help article explores how Okta Privileged Access manages and uses sudo rules.

    If you are familiar with Okta Advanced Server Access and how it implemented sudo rules, this is similar and achieves the same outcome but in the context of the new Okta Privileged Access data model. This is one of the features to bring parity between the two products.

    Configuration

    This feature is generally available across all Okta Privileged Access tenants. To use sudo rules, you need to set up the Sudo Command Bundles and assign them to Policies.

    Resource Management

    There is a new menu item under RESOURCE ADMINISTRATION (available to anyone with the Resource Administration role) called Sudo Commands.

    The term “Sudo Command Bundle” is used to refer to logical groupings of sudo commands. You may define bundles that relate to job role, such as DBA or Website administrator, and put in all the commands for that role. When you assign bundles to policies, you can have multiple bundles assigned to one user, so it may make sense to make the bundles fairly granular (like a job function).

    Sudo Command Bundles are created, managed and deleted from this page. Note that once a command bundle is assigned to a policy, it cannot be modified or deleted (this is a security control). Each command bundle will have a name and description.

    The name will be used to build the sudoers.d file on the relevant Linux servers.

    Then there will be a set of commands that are restricted.

    Commands can be Executables with any, none or specific arguments specified. They can also be Raw (the exact command to be executed, which users can’t modify) or Directory (execute any command in the specified directory).

    The last section is the Advanced configurations.

    You can run the commands as any non-root user and also specify how the commands are run. The default is NOPASSWD to allow the commands to be run without needing a password. See the sudo documentation for the other options.

    Policy Management

    With Sudo Command Bundles defined, they need to be assigned to security policies via a rule. Obviously it needs to be an SSH session rule.

    The bundles are assigned in a new option under Access Methods.

    There are now three radio buttons for individual accounts – you can set user-level permissions, admin-level permissions OR the new user-level with sudo commands.

    Selecting this option allows assignment of one or more Sudo Command Bundles. This means that if you have multiple granular command bundles (tied to different job functions) you can collect them together here to give all relevant access to any users mapped as principals to this policy.

    You also need to define a Sudo Display Name. This will appear to the user when they connect to a server governed by this policy.

    Depending on the risk associated with the commands you’re granting via this rule, you may also want to enable controls like session recording, MFA or access request. These are the same as for any other policy rule.

    Use

    Let’s look at the user experience, and then at what’s going on with the sudoers.d files whilst the user is connected.

    User Experience

    With the new rule and policy saved (and enabled), users are given an access method that includes the sudo command bundle. For example, the following screenshot shows the different access methods offered to the user based on the policies they are assigned to. It includes the individual and admin-level individual access, but also individual access with sudo restrictions.

    Note that the description includes the Sudo Display Name set in the policy rule – (sudo level individual account with user-management-commands).

    When the user connects to the server with the sudo option, a sudoers.d file is dynamically created for them (along with the user account and user personal group). When they attempt to run a command that is not in that sudoers.d file, they are prompted for their password (the default sudo rules for a Linux server). But there is no password for that account, so they cannot run the command. However any command that is in that file can be run without requiring a password.

    Thus granular user access can be controlled through the Sudo Command Bundles and Policies.

    A Look at the sudoers.d File Created

    Before closing out this article it’s worthwhile looking at how this is implemented on the Linux server. It is leveraging standard sudo on Linux. There are some rules files in the /etc/sudoers.d folder. This includes the 90-*** file shipped with the OS. There may also be a 95-scaleft file for full administrative access on the server (from Okta Privileged Access) – used for the admin-level individual account option.

    The new file, created as the user connected, is for the 10-user-management-commands bundle created earlier, for the user larry_linuxadmin. You can see this in the filename (bundle name + “okta” + user name + unique id).
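    Based on that naming convention, a small helper can predict the filename for a given bundle, user and id. This is an illustrative sketch only – the exact separators and id format used by the agent may differ:

```python
def sudoers_filename(bundle: str, user: str, unique_id: str) -> str:
    """Compose a sudoers.d drop-in filename following the convention
    described above: bundle name + "okta" + user name + unique id.
    The hyphen separators are an assumption for illustration."""
    return f"{bundle}-okta-{user}-{unique_id}"

# Using the names from this article (id shortened for readability):
name = sudoers_filename("10-user-management-commands", "larry_linuxadmin", "60b360dc")
```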

    The new 10-*** file contains some comments and two commands.

    The first defines a command alias (Cmnd_Alias) that lists all of the commands from the command bundle.

    Cmnd_Alias SCALEFT_LARRY_LINUXADMIN_60B360DC_A081_41D3_9470_1AC65F0FD80D_CMDS = /usr/sbin/addgroup *, /usr/sbin/delgroup *, /usr/sbin/deluser *, /usr/sbin/useradd *, /usr/sbin/usermod *

    The second line maps the user’s personal group (%larry_linuxadmin) to that command alias with the option of ALL=NOPASSWD, which means the commands can be run without requiring a password.

    %larry_linuxadmin ALL= NOPASSWD: SCALEFT_LARRY_LINUXADMIN_60B360DC_A081_41D3_9470_1AC65F0FD80D_CMDS
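    Together, those two lines form a minimal sudoers drop-in: an alias definition and a group rule that references it. As a sketch (not an Okta-provided check – visudo -c -f <file> remains the authoritative syntax check), you can verify that shape programmatically:

```python
import re

CMND_ALIAS = re.compile(r"^Cmnd_Alias\s+(\w+)\s*=\s*(.+)$")
GROUP_RULE = re.compile(r"^%(\S+)\s+ALL\s*=\s*NOPASSWD:\s*(\w+)$")

def references_alias(alias_line: str, rule_line: str) -> bool:
    """Check that the group rule references the alias defined above it."""
    alias = CMND_ALIAS.match(alias_line)
    rule = GROUP_RULE.match(rule_line)
    return bool(alias and rule and rule.group(2) == alias.group(1))

ok = references_alias(
    "Cmnd_Alias MY_CMDS = /usr/sbin/useradd *, /usr/sbin/usermod *",
    "%larry_linuxadmin ALL= NOPASSWD: MY_CMDS",
)
```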

    This file will be removed when the user session ends. If there are multiple users connected from Okta Privileged Access with sudo entitlements you may see multiple files in this folder.

    Conclusion

    In this article we have introduced the new sudo capability with Okta Privileged Access. Being able to centrally manage sudo rules and assign them to security policy significantly reduces risk due to misconfiguration of sudo rules files across many servers and it also increases usability by tying it into the Okta Privileged Access framework, so users can decide how much entitlement they need to do a particular task.

  • Generating Okta Privileged Access Reports with the new Workflows Connector

    Okta recently released a Workflows connector for Okta Privileged Access. It provides an abstraction of many of the Okta Privileged Access APIs to make working with them in Workflows easier.

    This article is an exploration of using the new connector to produce Okta Privileged Access reports, specifically access reports for users and resources.

    Introduction

    Okta Privileged Access (OPA) has a reporting capability exposed through the Access Reports API. The reporting mechanism is asynchronous where you would request a report to be created, check to see if the report is ready to be downloaded, and then download the report in a CSV formatted block of text. Up until the Workflows Connector was released, you would need to establish the appropriate credentials and run the API calls directly.

    The new Workflows Okta Privileged Access Connector handles all of the authentication for the API calls and abstracts the API calls. There are action cards to Create Access Report, Retrieve Access Report, and Download Access Report. As the create report API requires IDs (like a userID and serverID), there are also actions to Find Users, and Find Servers. These will take any string and search for OPA users or servers that match and return a list of users/servers (including IDs) that you can interrogate to get a specific ID to report on. The different actions provided gives you a lot of flexibility in how you build your flows.

    This article provides an example pair of flows, one to produce a User Access Report and one to produce a Server Access Report. As we walk through the examples, we will show how the different actions listed above are used.

    Overview

    For this article, two Workflow flows have been built that will consume a search argument (user or server), generate and download the report using the APIs, then email the report CSV to a user. The two flows are exposed as Delegated flows. This means there is no need for the person who needs to run the report to go into the Okta Workflows UI, they can run them (if they have the access) from within the Okta Admin Console.

    The two delegated flows are shown below.

    Let’s run through the User Access Report flow. Clicking Run beside the report produces a dialog box prompting for any arguments to be passed to the calling flow. In this case we’re passing a search string for a user in OPA.

    Clicking Run submits the flow, and you get a confirmation message.

    After the flow completes, an email is received with the report attached.

    Opening the report file we can see the details of the user, the resource, and the policies available. These show the account the user can use (shared, such as root, or individual), the privileges for that user (user, sudo or admin) and any conditions on using that account (like MFA, Approval, Gateway and Session Recording).

    The mechanism for the Server Access Report is exactly the same, just that the resultant report is focused on all access to a specific resource.

    Now that we’ve seen the inputs and outputs of the reporting mechanism, we will dig into the implementation in Workflows.

    Construction of the Flows

    To implement this we needed:

    • An Okta Privileged Access connector, an Okta connector and an Email connector (in this case GMail), and
    • A set of workflows (delegated and helper flows) to implement the processes.

    The flows are shown below.

    We will explore the Okta Privileged Access connector and the flows in the following sections.

    Creating a Connection

    The steps to create and authorize a new connection are detailed here. You could create a service user in Okta (with super admin rights) and assign it to the Okta Privileged Access application so the Workflows connection will use it.

    To create the connection you will need a service user in Okta Privileged Access and that service user will need to have administrator roles in Okta Privileged Access (currently it needs Resource Administrator and Security Policy Administrator, but may need PAM Administrator in the future if user/group management functions are added). This is because the APIs used involve working with most of the data objects in OPA.

    In this case we have created a service user called workflows-connector and created the API Key for them.

    When creating the connection you will need to specify:

    • The Name of the connection and a Description (optional),
    • The Okta Privileged Access Team Name (this would be in the URL, https://<team>.pam.okta.com or https://<team>.pam.oktapreview.com),
    • The API Key ID and Key for the service account, and
    • The Okta Privileged Access Base URL

    The base URL should not have a trailing /. An example is shown below.
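    Since the base URL must not have a trailing slash, it can help to normalize inputs before saving the connection. The helper below is a hypothetical sketch (not part of the connector); it simply applies the URL patterns listed above:

```python
def opa_base_url(team: str, preview: bool = False) -> str:
    """Compose the OPA base URL for a team, guaranteed to have
    no trailing slash. URL patterns follow those shown above."""
    domain = "pam.oktapreview.com" if preview else "pam.okta.com"
    return f"https://{team}.{domain}".rstrip("/")
```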

    With this configured the Workflows can leverage the Okta Privileged Access actions.

    The Delegated (Top-level) Flows to Create and Download the Reports

    There are two delegated flows, one for the user access report and one for the server access report. Both have the same structure:

    1. Check a search string has been passed in and call a sub-flow to find the ID of the user / server that matches the query and create the report.
    2. Use a sub-flow to check if the report is ready to be downloaded, and if not, wait a bit and retry
    3. Check if the completed report is empty and if so, end the flow
    4. Use a sub-flow to download the report
    5. Extract the CSV formatted report and store in a Workflows table
    6. Export the table to a CSV file and email it to the user who requested the report

    The following screen shots show this for the Server Access Report. The input card is the Delegated Flow card and it is consuming the search string (server search string in this case). It checks that the search string is non-blank then calls the S20 sub-flow to find that server and generate the report.

    The next section will wait for five seconds and then use a sub-flow (S50) to get the details of the report, including its status. The status should be COMPLETED or COMPLETED_BUT_EMPTY to proceed.

    If it doesn’t have one of those two statuses, there is another wait of 15 seconds and then the process is repeated.

    Note that this is a very inelegant way of implementing what should be a do-until loop. You could instead create a sub-flow that is called recursively and breaks out (returns) when the status is COMPLETED.
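    Outside Workflows, the same wait-and-retry logic is a simple polling loop. The sketch below separates the loop from the status lookup (injected as a callable) so it can be tested without calling the real Retrieve Access Report API; the status values are those described in this article:

```python
import time

TERMINAL_STATUSES = {"COMPLETED", "COMPLETED_BUT_EMPTY", "ERROR"}

def wait_for_report(get_status, initial_wait=5, retry_wait=15, max_tries=20):
    """Poll get_status() until the report reaches a terminal status.
    get_status is any callable returning the report status string."""
    time.sleep(initial_wait)
    for _ in range(max_tries):
        status = get_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(retry_wait)
    raise TimeoutError("report did not complete in time")
```

    In a real implementation, get_status would wrap the Retrieve Access Report call (or the equivalent connector action).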

    If the status is COMPLETED_BUT_EMPTY it will end the flow with a message saying it’s empty. If not, it will call a sub-flow (S60) to download the report and then split up the CSV Report text block into a list (with a carriage return as the separator).

    We want to store the report in a Workflows table because Workflows isn’t great at processing CSV blobs of text. To do this we first empty out the report table and then call a sub-flow (S80) to parse out the fields in each CSV row and store them in a table. This table is then exported as a CSV file (note that it was a Workflows text field, now it’s a Workflows file).
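    Outside Workflows this split-and-parse step is simpler, since a standard CSV parser also handles quoting and embedded commas. A sketch, assuming the downloaded report arrives as a single block of text (the column names are illustrative only):

```python
import csv
import io

def report_rows(csv_text: str):
    """Parse a CSV-formatted report block into a header row and data rows."""
    reader = csv.reader(io.StringIO(csv_text))
    rows = [row for row in reader if row]  # drop blank lines
    return rows[0], rows[1:]

header, rows = report_rows("user,resource,account,privilege\n"
                           "larry,web01,individual,sudo\n")
```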

    The last section will lookup the user who initiated the delegated flow to get their email address(es) and these are used to send an email with the report CSV file.

    This completes the two delegated flows.

    The Sub-flows Used

    Whilst there are six sub-flows listed above, we will focus on the three that use the Workflows Connector actions: S10, S50 and S60.

    The first is the S10 – Create User Report (or the similar S20 – Create Server Report) flow. It is passed the search string (i.e. username string or servername string for S20).

    It uses the OPA Connector – Find Users card to search for a user and return a list of user objects (including the id) and a count of the number of users found that match the string. This is using the “contains” clause with the List all Users for a Team API call. (If using the Find Servers action, it will search all servers for the string being found in any of the hostname, canonical_name or access_address.)

    In this example it is expecting a single user (or server) to match and will error out if none, or more than one, are found.
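    That exactly-one-match guard is easy to express in code. The sketch below operates on the list of records returned by a Find Users/Find Servers search (the id field follows the connector output described above; the rest is illustrative):

```python
def single_match_id(results, query):
    """Return the id of the single matching record, or raise if the
    search found zero or multiple matches (mirroring the flow's error-out)."""
    if len(results) != 1:
        raise ValueError(
            f"expected exactly one match for {query!r}, found {len(results)}")
    return results[0]["id"]
```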

    In the second part of the workflow it will extract the ID from the single record found. Then this is used with an OPA Connector – Create Access Report card to get OPA to create the report. It returns the ID of the new report.

    As mentioned earlier, reports are generated as an asynchronous process and the time taken to generate a report will depend on the number of OPA objects it needs to process and other activities going on within your OPA team. Before downloading a report, you need to check for a completed status (“COMPLETED”, “COMPLETED_BUT_EMPTY”, or “ERROR”). A status of “CREATED” means OPA has accepted the request to create the report and “IN PROGRESS” means the report is being created but is not yet ready.

    In this example set of flows, S50 – Is Report Ready is used to get the status and return it. It uses the OPA Connector – Retrieve Access Report card to get the details of the report (including the status) based on the Report ID passed into the sub-flow. It returns the status which is evaluated by the calling flow.

    Once the report is ready to be downloaded, the S60 – Download Report sub-flow is used to download the report via the OPA Connector – Download Access Report card, and this is returned as a text field to the calling flow.

    From a Workflows design perspective, the last two highlighted flows are trivial and probably don’t justify separate helper flows, but it does make the high-level flows cleaner and easier to read.

    There are other sub-flows in use, but these ones show the use of the OPA Connector actions to search for users/servers, create the report, check the report status and download the report.

    Conclusion

    Being able to run reports to see what access users have or how users can access resources is important for many Privileged Access Management implementations. Okta Privileged Access has an Access Report capability that has been accessible via APIs since the product went GA last December.

    Okta has just released an Okta Privileged Access Workflows connector that includes action cards to create and download Access Reports. This article has explored how this connector can be used to do this and some considerations around how to structure Workflows using the connector actions.

  • Okta Privileged Access and Automation with DevOps Tools

    This article looks at how Okta Privileged Access (OPA) can leverage DevOps tooling for automation in large infrastructure environments.

    Introduction

    Okta Privileged Access (OPA) provides privileged access management (PAM) for multiple use cases, such as securing access to privileged credentials (secrets) and privileged access to servers. Where there is a large environment needing PAM, customers often look to devops tooling to install and maintain components.

    The following figure shows the architecture of OPA.

    It comprises a number of cloud services (OPA, Okta WIC and Okta Access Requests) plus infrastructure components that are deployed to customer machines. These include the OPA Client deployed to user workstations, the OPA (Server) Agent deployed to servers and optionally a Gateway. More details on the components and interactions can be found in A Technical Introduction to Okta Privileged Access.

    From an automation perspective it is common to look at the deployment and management of clients, and the deployment and automation of agents and gateways (highlighted in red). You may also find some automation of polices, resources and roles (highlighted in green).

    The infrastructure components (Client, Agent and Gateway) are the same as for OPA’s predecessor, Okta Advanced Server Access (ASA), so the mechanisms available for automation are largely the same.

    Client Deployment and Management

    OPA Clients are software components installed onto users’ workstations. The client is supported on macOS, Windows and multiple flavours of Linux (see the list of supported operating systems).

    Whilst the client can be installed manually, many organisations deploy and manage the MSI/package via their normal software deployment tool.

    Once the client software component is installed, the user must enroll the client, and in the process authenticate against Okta/OPA. In some instances you can also silently install the OPA client (which includes auto enrollment).

    Agent and Gateway Deployment and Management

    There are different approaches to automation of the deployment and management of Agents and Gateways, including:

    • Baking the components and enrollment tokens into a “gold” image
    • Including deployment into run scripts provided by the cloud platform
    • Including in devops scripts that create/start a server instance in a cloud platform
    • Leveraging a devops tool like Terraform, Chef, Puppet, or Ansible

    We will explore some of these.

    Baked into the Image

    If you leverage “gold” images or use something like Docker to deploy new instances of a server, you could bake in the relevant OPA agent (or gateway) and enrollment key into that image. When you run up an instance of that image, the new server will connect to OPA and be put into the Resource Group / Project associated with that enrollment token.

    This may be appropriate if there are standard images for server types (e.g. web servers, DB servers) and you want them to be managed in the same way with OPA. You would need to ensure that any Policies you have would automatically adapt to new instances of the image starting up (e.g. using labels tied to the image, either system generated or custom, rather than tying policy to hostnames or IPs).

    This approach will not provide for any sort of management of the Agent (or Gateway). It will remain at the version that was baked into the image unless some other form of systems management is applied. The image should be regularly updated with the latest Agent (or Gateway).

    Included in IaaS Run Scripts

    Cloud platforms, such as AWS, have mechanisms to run scripts on instance startup. For example you can use the user data field on an AWS instance to run a script (see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html).

    The code to install the Agent (or Gateway) and enroll could be added to the run scripts.

    Note that the following is an old example and should not be used verbatim. Check with the deploy and manage servers section of the product documentation.

    Depending on the platform you may be able to parameterise the script and also to get the latest version. But this is for server startup and not for ongoing maintenance.

    Included in Build Scripts for DevOps Tools

    If you use a DevOps tool, like Terraform, you can also pass commands into the startup process. The following is an example of a Terraform script (main.tf in this case) that builds a new server in AWS and passes code to the AWS user_data field to run the install and enrollment of the Agent.

    This can be parameterised (e.g. using a variables.tf file or similar) to pass in arguments to run. Again this is a start state and this mechanism will not perform any management of the Agent once installed.

    Many of the DevOps tools, like Terraform, Chef, Puppet and Ansible, can be used with OPA, but none of them have out-of-the-box capabilities for managing the infrastructure components. For example Terraform has an Okta Provider and an Okta Advanced Server Access Provider that can build objects in Okta or ASA (like groups) but neither will manage the Client, Agent or Gateway software packages. You need to add the native script commands into the tool, such as the example above.

    Note that the Okta ASA Provider for Terraform will not work with OPA given the significant change to the data model. An Okta OPA Provider is not yet available.

    Examples and Samples

    This section looks at some large-scale examples and provides links to some sample code. The two examples are based on Okta Advanced Server Access (ASA) but as the Agents are the same for both ASA and OPA, the mechanisms could be applied to OPA (except for highlighted exceptions).

    Terraform Example

    The following diagram shows how a customer with a large and dynamic server environment was able to leverage Terraform to automate deployment of the ASA agent and some ASA configuration into the process of running up a new server instance.

    When a new customer signs up for their service, a set of Terraform scripts are run to create a new tenant for that customer. There are two parallel streams:

    • The top stream in the diagram is using the (ASA) Terraform provider to create a new Project in ASA, assign Groups to that project and generate an Enrollment token for the Project.
    • The bottom stream is creating the objects in AWS, a VPC and EC2 server instances. This includes installing the ASA agent onto the servers and using the Enrollment token from the other stream to enroll the server instances into the Project

    This means that as a new tenant is spun up for a customer, the Agent is installed and assigned to the Project which grants admins access to connect to the server instances.

    Note that this is for ASA and leverages the ASA Terraform provider for the ASA objects (project, group assignment and enrollment token). The Terraform provider has not yet been updated to cover the new OPA data model. To implement this mechanism with OPA would involve using the OPA API instead of the Terraform provider.

    More details on this can be found at https://developer.okta.com/blog/2020/04/24/okta-terraform-automate-identity-and-infrastructure and https://www.okta.com/blog/2020/04/adapting-to-the-cloud-operating-model-using-okta-hashicorp.

    Ansible Example

    Terraform is great when you can describe your environment programmatically and you manage everything by creating/destroying instances. But for systems outside of this, you need a solution that can be used like a systems management tool to manage the installations.

    Most systems management tools will support this. They need to know about the servers and be able to run scripts on them. For ASA or OPA you could build scripts to install/enroll the agent, check if the agent is running and restart it if it’s not, and even check for outdated versions and update them.

    There are many large scale ASA customers who have done this with Ansible.

    Some Sample Code

    Some colleagues were developing a set of OPA utilities that are available at https://github.com/Okta-PAM-Resource-Kit. These are not provided by, nor supported by, Okta and are provided as examples to help with OPA deployments.

    Under the scripts/installation folder are scripts for both Linux and Windows agent installation.

    The Linux script will:

    • Add the OPA/ASA repos to the local package manager when possible, allowing easy updates using standard tools
    • Test for the presence of TLS inspection (MITM) which would interfere with outbound calls to Okta
    • Extract a useful server name from the “Name” tag in AWS. (Allow tags in instance metadata must be enabled.)
    • Create default configuration files for ASA Server Tools and ASA Gateway
    • Create enrollment token file for ASA Server Tools
    • Create setup token file for ASA Gateway
    • And finally, install OPA/ASA Server Tools, OPA/ASA Gateway (and Transcoder on RDP capable OSes), OPA/ASA Client Tools, or any combination of the three.

    The Windows script comes in both a CMD and PowerShell form, and is similar to the Linux script.

    These scripts could be used as examples for implementing automation in Terraform, Ansible or other DevOps tools.

    Conclusion

    This article has looked at how automation can be applied to an Okta Privileged Access deployment, particularly the infrastructure components such as the Client and Agent. It has looked at how automation could be applied to both the initial install of the components and also the ongoing management of them, with some examples and links to sample code.

  • Using Custom Labels in OPA for More Flexible Policies

    This article looks at the new custom labels feature in Okta Privileged Access (OPA) and how they can be used to make policy management and assignment more flexible.

    This is a parity feature that was available in Okta Advanced Server Access and is now available in OPA.

    Labels in Okta Privileged Access

    When a server is enrolled in OPA, the system will generate a set of labels for that server.

    The system will generate labels for the hostname, canonical name (if set in the agent configuration file), operating system, and os type. If the server is hosted in a cloud environment like AWS, you will also get labels pertaining to that cloud service (like the cloud_provider, aws_account_id and aws_availability_zone labels shown above). As these are system-generated, they are prefixed with system (e.g. system.hostname).

    These labels are like tags and can be used to assign sets of servers to a Security Policy Rule. For example you could have a Policy Rule to apply to all Linux servers, or only those running a specific ubuntu version. Or you could have a Policy Rule that applies to all servers belonging to a specific AWS account or availability zone.

    Whilst this may be enough for some OPA deployments, there have been calls to allow custom labels. For example you may want to categorize servers into PROD, UAT, DEV etc. and have different Policy Rules. Or maybe you want to have different uses of servers (e.g. DB, Web) and have different Policy Rules for the different types.

    This calls for custom labels. They were available with Okta Advanced Server Access and are now available with OPA.

    Defining Custom Labels

    Before implementing custom labels, you should have a think about the taxonomy you want to apply. You should identify the types of labels and their possible values. That way you can define Policy Rules and then have new servers automatically assigned to them via the labels. You don’t want to have to revisit your policy rules each time you run up a new server instance.

    A note of caution on using custom labels: anyone with access to the configuration file can set labels and so dictate which OPA policies apply to accessing the server. This could allow access to someone who shouldn’t have it, or could potentially lock out access to the server. As per the documentation, “Okta recommends exercising caution when using custom labels.”

    Defining the custom labels is straightforward – you add them to the agent configuration file sftd.yaml.

    The file is in different locations for Linux and Windows – see Configure the Okta Privileged Access server agent in the product documentation. It talks about the file location and has a section on custom labels.

    A Linux example is shown below.

    In this case there are two labels – environment and function.

    It is important to note that the sftd.yaml file is very sensitive to spacing. You need to make sure that there are two spaces before each label (e.g. <space><space>environment).
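    Because the file is whitespace-sensitive, a small pre-deployment check can catch indentation mistakes before restarting the agent. The sketch below only enforces the two-space rule for lines under a labels section – it is not a full YAML validator, and the label names are this article’s examples:

```python
def check_label_indent(sftd_yaml: str) -> list:
    """Return label lines under the labels section that are not
    indented with exactly two spaces."""
    bad, in_labels = [], False
    for line in sftd_yaml.splitlines():
        if not line.strip():
            continue
        if not line.startswith(" "):  # a new top-level key
            in_labels = line.strip().lower().startswith("labels")
            continue
        if in_labels and not (line.startswith("  ") and line[2] != " "):
            bad.append(line)
    return bad

good = "Labels:\n  environment: prod\n  function: web\n"
```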

    Custom Labels in OPA

    Once the agent is recycled (or it rereads the configuration file) you will see the new labels in OPA.

    Notice that the new labels are prefixed sftd (e.g. sftd.environment and sftd.function). This is to distinguish them from the system-generated labels.

    These labels are ready to use.

    Using Custom Labels

    Custom labels are no different to system-generated labels when defining policy rules.

    When creating or editing a Policy Rule, if you enable the Select resources by system generated label option and click into the Add resources field, you will see the custom labels as well as the system-generated ones.

    That’s it, very straightforward – add the labels to the agent configuration file and then apply the labels to policy. If you use automation to add the agent to servers, you could extend the automation scripts to set relevant labels. If you have the Policy Rules already set, there’s no additional human intervention required.

  • MFA Can Now Be Applied to Secret Access Policy in Okta Privileged Access

    Okta Privileged Access (OPA) has had the option to turn on Multifactor Authentication (MFA) for server access policy for some time. This has now been extended to cover secret access policy.

    If you have worked with OPA Policy Rules for Secrets you will be familiar with the following that shows the permissions that can be set for folder and secret access.

    Below this is the familiar option to apply an Access Request flow to the rule (Approval requests). A new option has been added for Multifactor authentication.

    Enabling MFA expands the dialog to show the MFA options.

    These are the same as those applied to server access rules. It will leverage the Authenticators defined in Okta (example below).

    The assurance level dictates whether any two factor types will do (e.g. Possession + Knowledge or Possession + Biometric) or whether a Phishing resistant authenticator is required (e.g. Okta Verify or FIDO2 (WebAuthn)).

    The reauthentication frequency is how often the user is prompted for MFA, either on every action or only after a specified duration from the last authentication. Note that if you have a folder hierarchy, the every guarded action option will mean that every folder traversal and secret access will prompt for MFA which may not be a desired user experience. If this is the case, it may be better to have more secret access rules, with MFA only being applied in high risk scenarios (or set a duration).

    Once applied, MFA will be shown as a Conditional against the policy rule.

    That’s it. It’s the same as for server access policies and has the same user experience.

    It will allow administrators to apply MFA based on risk, alongside the approval requests control.

  • Customisable Access Certification Reviewer Content in OIG

    This article looks at the new customisable reviewer content in Okta Identity Governance (OIG) Access Certifications.

    The doc link for this new feature is https://help.okta.com/oie/en-us/content/topics/identity-governance/access-certification/iga-ac-customizable-context.htm.

    Introduction

    Access Certification (or recertification, attestation) is a key capability in any Identity Governance product and it is the one most likely to cause friction with business users. If you’re responsible for running an aspect of the business, recertifying the access of your direct reports is probably not high on the priority list. So it’s important that the process to review access is as straightforward and usable as possible.

    Okta has gone to great lengths to make the user review interface as simple and usable as possible. But up until now the column headings and attributes displayed when reviewing an Access Certification Campaign were fixed and many customers have asked for the ability to modify the attributes used.

    This new feature makes user reviews more flexible and will allow:

    • Specification of the attributes to appear in a review,
    • The ability to sort and size the columns on the review summary page
    • The ability to filter the reviews by attributes, and
    • The ability for the reviewer to select the attributes displayed on the summary page

    We will explore these features below.

    Enabling the New Feature

    This feature is currently in self-service Early Access (EA) and needs to be enabled in Settings > Features under the Early access heading. It’s called “Access Certifications – Customizable Reviewer Context”.

    When this feature moves out of EA, this feature setting will go away and the feature will be enabled by default.

    Configuration of the New Feature

    When you navigate to the Identity Governance > Access Certifications menu item, you will notice the page has changed subtly. The previous Active, Scheduled and Closed tabs have been made selection boxes (with the number of each shown). In the example below, the Active campaigns are showing (and there is one of them).

    There are now two tabs, Campaigns and Settings, with Campaigns being the default view.

    The Settings tab contains the new contextual information, i.e. the attributes presented for users, resources and additional information.

    The Edit button allows changing the attributes, with pull-down sections for each.

    The User information section allows for selection/deselection of base and custom attributes (for example the Current Project attribute is a custom attribute).

    The Resource information contains both attributes for applications and groups to be reviewed.

    The Additional information is currently used for entitlements in the Entitlement Management capability and Governance history, but may be expanded in the future.

    When saved and a new campaign is created/launched, it will adopt these changed settings. There is no change to the configuration screens to modify the new context for a specific campaign.

    Review Summary Changes

    When the reviewer opens the new campaign, they will see some changes from previously.

    They are:

    1. Filters – there is a list of active filters, and a button to set/manage the filters
    2. A Sort option for each of the columns
    3. Flexible columns – where you have modified the columns in the Settings page
    4. Resize bars – so you can resize the width of the columns
    5. A Menu icon for more actions – the only current option is to customize the view

    Let’s look at these.

    Filters

    You can apply filters on any attribute available to the campaign.

    Some require exact matches, some can use Contains/Starts with. When selecting items like resources, you will get a matching dropdown list. You can have multiple conditions in the filter.
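    Conceptually, each filter condition is a predicate on an attribute, and multiple conditions must all hold for a row to be shown. The sketch below is illustrative only (not the product’s implementation, and the attribute names and rows are made up), just to show the exact/contains/starts-with semantics:

    ```python
    def matches(row: dict, conditions: list) -> bool:
        """True if the row satisfies every (attribute, operator, value) condition."""
        ops = {
            "equals": lambda attr, val: attr == val,
            "contains": lambda attr, val: val in attr,
            "starts with": lambda attr, val: attr.startswith(val),
        }
        return all(ops[op](str(row.get(attr, "")), value)
                   for attr, op, value in conditions)

    rows = [
        {"user": "Ada Admin", "resource": "Payroll App"},
        {"user": "Bob Builder", "resource": "Payroll App"},
    ]
    # Multiple conditions are combined: both must match.
    filtered = [r for r in rows
                if matches(r, [("user", "starts with", "Ada"),
                               ("resource", "contains", "Payroll")])]
    ```

    Here `filtered` retains only the first row, since only it satisfies both conditions.
    
    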

    This results in a filtered view.

    You can remove filters by clicking the cross icon in the filter bar, or by going back into the filter edit screen and changing them there.

    Sort Option

    Selecting any of the column headings will sort by that column, and you can toggle between ascending and descending order.

    Column Resizing

    You can grab the resize bars and move them to see more/less of a column (there are minimum widths).

    Changing the Columns

    Using the Menu > Customize view option, you can select/deselect attributes.

    For example, removing email and adding the two description attributes results in the columns changing.

    Note that if there is too much information to display, you get a scroll bar at the bottom.

    Review Details Changes

    The attributes shown on the slide-out Review Details panel reflect those selected in the Access Certification campaign Settings page.

    In this case some user details were removed and the Current Project added, and some of the Resource details have changed as per the Settings changes.

    The reviewer cannot select which of these are displayed.

    Conclusion

    This article has explored the new customizable access certification reviewer context feature in Okta Identity Governance. It introduces a number of changes, such as: selecting which attributes are displayed in a campaign; changing, sorting and sizing of columns; and filtering of data.

    Businesses can apply a blanket set of attributes that make sense to them. Perhaps not all the standard user profile attributes are used, but they have custom ones they want to show to the reviewer. This feature allows that.

    It also makes the review process more usable for the reviewer, giving them greater control over the review information they are presented with and use to make review decisions.

    Together these changes make access certification campaigns more consumable and usable, meaning business users are more likely to complete them rather than avoid them.

  • The New Checkout Feature in Okta Privileged Access

    This article provides information on the latest feature released for Okta Privileged Access – Checkout. This feature allows setting exclusive checkout on shared accounts and managing the checkout/check-in of those accounts.

    Pre-Reqs

    The feature is available in Okta Privileged Access preview and production teams. You do not need to “turn on” any features.

    As always, you should keep your infrastructure components at the latest release. In this case, the client (“sft”) should be at 1.81.1 or higher (the 5 Jun Release Notes highlighted this – https://help.okta.com/oie/en-us/content/topics/releasenotes/privileged-access/privileged-access-release-notes.htm).
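    If you want to script that version check, compare the version components numerically rather than as strings (so that, say, 1.9.x does not sort above 1.81.x). Below is a minimal Python sketch, assuming the client reports a dotted version string such as 1.81.1 (for example via a --version flag); the helper names are illustrative, not part of the product.

    ```python
    def parse_version(version: str) -> tuple:
        """Parse a dotted version string like '1.81.1' into a comparable tuple."""
        return tuple(int(part) for part in version.strip().split("."))

    def meets_minimum(installed: str, minimum: str = "1.81.1") -> bool:
        """True if the installed client version is at or above the minimum."""
        return parse_version(installed) >= parse_version(minimum)

    print(meets_minimum("1.81.1"))  # True
    print(meets_minimum("1.80.0"))  # False
    ```

    Tuple comparison handles each numeric component in order, which a plain string comparison would get wrong.
    
    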

    Enable Checkout for Servers in a Project

    Checkout is enabled at the Project level. This means it can apply to some or all shared accounts on all servers in a Project. Note that it does not apply to individual accounts: as they cannot be shared, there’s no need for exclusivity.

    The product documentation describes how to enable the feature. https://help.okta.com/oie/en-us/content/topics/privileged-access/pam-configure-checkout.htm

    When accessing a Project within a Resource Group, there is a new section on the Settings tab titled Checkout Settings.

    When you edit the settings, the section expands to show the options. When you enable it, you have two sets of options:

    • The scope – whether to apply checkout to all shared accounts, an include list or an exclude list
    • The checkout time – how long a shared account is checked out before automatic check in and password rotation

    There is also a new tab in the Project called Checked Out Accounts, that will provide an admin view of checked-out accounts.

    There is also the option to set checkout time overrides in specific Policy Rules.

    Now that we have the feature enabled, let’s look at how it’s used from an end-user perspective.

    Checkout from the Command Line

    If checkout is enabled for accounts that a user can use (access methods), there will be an indication on the command line when they connect.

    If someone else tries to use that account, they will get an indication that it is already checked out.

    The experience is slightly different if using the OPA web UI.

    Checkout from the Web UI

    When a user goes to their server list in the web UI, the first thing they will notice is that the Connect button has been removed from the server list page.

    However, if they click on the server they want to access, they see more options. On the Accounts tab, they can see the status and any conditions on use of the account. There is also a View details button.

    Clicking this button produces a slide-out window that shows information about the account, such as the status (and a checkout button if the account isn’t already checked out), the max checkout time and, if it is checked out, how long it has remaining.

    Back on the Accounts tab there is also a more options (three vertical dots) icon with a single option – Check in. If the account is not checked out, this option is disabled. If the account is checked out, the option is enabled.

    Users can use this button to check the account in. This will not rotate the password or close any active session, but it will mean that others can check out the account.

    Finally, there is the traditional Connect button to start an SSH/RDP session.

    Forcing a Check In

    Administrators can also view and manage checked out accounts. This is done on the Project Checked Out Accounts tab we showed earlier. If an account is Checked Out, you will see this in the Status column. There are two ways to force a check in, the first being to use the Force a checkin option under Actions.

    This will present a confirmation dialog.

    The second way is to click on the account name to see the details of the account. This is the same account view that has been in the product for some time, but has had additional information added to show checkout status and a new Force check-in button.

    Clicking the button produces the same confirmation dialog. Then the Status of the checkout changes to Checkin in progress.

    The status change is also reflected in the Checked Out Accounts list.

    Conclusion

    And that’s it – you can now apply exclusive checkout to shared accounts in Okta Privileged Access. You can apply checkout to specific shared accounts within Projects, set the checkout duration, and apply overrides to specific servers/accounts within Policy Rules.

    Users can see if an account is enabled for checkout when they go to use it and whether it’s already been checked out. They can manually check it in, as can administrators. Or they can wait until the checkin duration expires and the password is rotated.

    This feature enhances both the usability and auditability of shared accounts for those times when you can’t just use JIT provisioned individual accounts and permissions.

  • Managing Access in Okta Privileged Access with the new OIG Resource Catalog

    Okta has released into Early Access a new feature called the Access Request Conditions and Resource Catalog, or more simply the Resource Catalog. This is a new way to configure and use access requests in Okta Identity Governance. This article shows how this can be applied to access within Okta Privileged Access.

    Introduction

    Okta Privileged Access has users and groups provisioned from Okta. The groups are used to assign users to Okta Privileged Access administration roles and security policies. So whilst the access is defined in Okta Privileged Access, users are assigned to groups in Okta and then pushed to Okta Privileged Access.

    As we’re concerned about controlling privileged access, it makes sense to apply stringent controls through Okta Identity Governance (OIG) such as access requests processes (with approvals and fixed durations) and access certification to the groups pushed to Okta Privileged Access. A new Resource Catalog feature has been added to OIG to allow requesting access from within the Okta Dashboard.

    This article explores using the new OIG Resource Catalog with groups pushed to Okta Privileged Access and assigned to roles and policies. The following figure shows the components involved in what the article will explore.

    The bottom of the diagram is Okta Privileged Access with admin roles and security policies, with users assigned via groups. These groups are pushed from Okta (Workforce Identity Cloud).

    Within Okta, these groups are exposed for access requests through conditions on the Okta Privileged Access app.

    When a user uses the new Request Access function in the Okta Dashboard, they are presented with the apps (in this case Okta Privileged Access) and accesses (in this case groups) they can request. When they request one of these groups, the Okta Access Requests component will drive an approval flow and reviewers (such as the user’s manager) will use the Okta Access Requests component to review and approve/deny the access request.

    When access is granted, the user uses the Okta Privileged Access tile on the dashboard to SSO into Okta Privileged Access and has the roles/policies based on the groups assigned. This will become clearer as we walk through the sections below.

    For more information on Okta Privileged Access, see:

    A Quick Revision on Groups in Okta and Access in Okta Privileged Access

    Okta Privileged Access is an application in Okta like most other SaaS applications. It supports OIDC and SCIM. It has users assigned to it, either directly or via groups.

    Any users assigned to the app in Okta will be provisioned to Okta Privileged Access.

    It can also have push groups assigned, and these groups (with their membership) will be provisioned to Okta Privileged Access.

    These push groups could be mapped to administrative roles in Okta Privileged Access or mapped to security policies to grant access to privileged resources.

    Thus Okta Privileged Access policy and admin role membership can be managed in Okta through group management. This could be done through manual administration, APIs, lifecycle (group) rules or using the access requests capabilities in Okta Identity Governance (OIG).

    The latter approach is suited to privileged access as you can implement a zero standing privileges model where a user may be defined in the PAM solution but must request appropriate privileged access through a process and have that access automatically removed after a period of time. The remainder of this article will look at how the new OIG Resource Catalog can be used to implement this for Okta Privileged Access.

    Building Request Conditions for Okta Privileged Access

    In this section we will show how we can configure access requests for Okta Privileged Access using the new Resource Catalog feature.

    As shown above, Okta Privileged Access is an application in Okta with users and groups assigned. When you enable the new feature, the application will show a new tab called Access requests.

    Within Access requests we can define one or more conditions for users to request access. Different conditions can be tied to different user groups, so you might have a low level of access available to all users but administrative access only available to a small group. Conditions also define what access is granted (e.g. application level, groups associated with the app or application entitlements), the duration of access and the approval sequence to run.

    For our Okta Privileged Access app, we will set up two conditions:

    1. A “system admin” condition to request access to a server (one of the server access groups), with manager approval and a two-hour time limit
    2. A “PAM admin” condition to request access to perform resource or security administration, with both manager and service owner approval, and a four-hour time limit

    You could assign these conditions to different groups of users (e.g. a system admin group and a PAM administration group) but for this example, we will use a single group (the PAM All Users group assigned to Okta Privileged Access).

    Let’s walk through the creation of a condition.

    Creating the First Condition

    On clicking the Create condition button shown above, the administrator is presented with a page to define the condition. It contains the condition name and four sections:

    1. Requester scope – who can request this access. In this case it is the users in the group PAM All Users
    2. Access level – whether the user is requesting access to the app or to specific groups associated with the app. In this case it will be the sysadmin groups defined in Okta and mapped as push groups to the Okta Privileged Access app (PAM Linux Sysadmins and PAM Windows Sysadmins).
    3. Access duration – how long the user retains access. In this case it will be two hours.
    4. Approval sequence – the approval flow to run. In this case the sequence will prompt for a business justification and request manager approval

    The first three for this condition are shown below.

    Approval sequences are separate items that can be reusable blocks attached to multiple conditions (for example the sequence we will build for manager approval may be appropriate for multiple conditions, not just this one).

    When we click the Select sequence button, there are no pre-existing sequences, so the admin must create a new sequence. This opens a new browser tab where the sequence is defined. It has a name and a set of approval steps.

    In this case we add the steps to:

    1. Request the Business Justification from the requester (the user requesting access) and
    2. Prompt for approval from the requester’s (user’s) manager

    This is shown below.

    This sequence is saved and assigned to the condition.

    This condition is then saved (created) and enabled. This application now has one access request condition.

    Next we can create the second condition.

    Creating the Second Condition

    The second condition is similar but with a different set of groups, duration and sequence. This is shown below.

    This condition requires a more stringent approval sequence – manager approval and a service owner approval. This is shown below with the first two steps the same as earlier, and a service owner approval set to a specific user (it could be set to the group owners or another Okta group, which would be a more flexible approach).

    Again this sequence is saved and assigned to the condition. The condition is then saved and enabled. We now have two access request conditions assigned to the Okta Privileged Access app.

    Now that there are two conditions applied to the Okta Privileged Access app, we can test access requests.

    Testing the Conditions

    Our first user, Larry Linuxadmin, needs to cross to the dark side and perform some Windows system administration. His colleague, Wendy Winadmin, has been temporarily assigned to the PAM admin team and needs some admin access.

    Larry Requests Access to Windows Sysadmin Permissions

    To get the additional sysadmin access, Larry logs into his Okta dashboard and clicks on the Request access button.

    This opens the new Dashboard Request Access app where he can see the apps he can request access for. He clicks on the Okta Privileged Access tile.

    He is presented with a list of all the accesses he can request.

    This list matches all the groups assigned to the two conditions above – the first three shown are on the PAM admin condition, and the last two are on the system admin condition. He sees the full list because both conditions are scoped to a group that he’s a member of.

    Note that the description for the access (group) is pulled from the group description. In this case, the group descriptions have been populated with the permissions in Okta Privileged Access (covered in OPA Getting Roles into the Group Description) and risks (covered in OPA Determining and Highlighting Risks in Roles and Policies). This is not standard configuration and requires Okta Workflows.

    He selects the PAM Windows Sysadmins access and then is prompted for a Business Justification. He enters the justification and submits the request.

    This triggers the approval sequence to run. The only approver is his manager, who gets an email telling them there’s a request to review (or if the Slack or Microsoft Teams integration is enabled, they will see a notification there).

    The manager goes into the Okta Access Requests app and sees the request awaiting his action.

    Selecting the request, he sees the details (who, what app, the group and justification).

    He approves the request and the access is granted. This also triggers the timer to automatically remove access in two hours.

    Looking at the requested group in Okta, you can see Larry has been added.

    The Okta system log shows Larry being added to the group and the group being pushed down to Okta Privileged Access (almost immediately!). Note that it also shows events related to Larry’s request for access which can be used for auditing.
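    Those membership events can also be pulled programmatically for auditing via the Okta System Log API (GET /api/v1/logs), filtering on the group.user_membership.add event type. A hedged Python sketch that builds such a query (the org URL and group name are illustrative):

    ```python
    from urllib.parse import urlencode

    def build_group_audit_query(org_url: str, group_name: str) -> str:
        """Build a System Log API URL for users being added to a given group.

        group.user_membership.add is the Okta event type for a user being
        added to a group; the target group is matched on its display name.
        """
        filter_expr = (
            'eventType eq "group.user_membership.add" and '
            f'target.displayName eq "{group_name}"'
        )
        return f"{org_url}/api/v1/logs?" + urlencode(
            {"filter": filter_expr, "limit": 100}
        )

    url = build_group_audit_query("https://example.okta.com", "PAM Windows Sysadmins")
    # The URL would then be called with an API token, e.g.:
    #   requests.get(url, headers={"Authorization": f"SSWS {api_token}"})
    ```

    The matching removal events (group.user_membership.remove) can be queried the same way to confirm the timed revocation occurred.
    
    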

    Within Okta Privileged Access Larry has been added to the PAM Windows Sysadmins group and can perform actions assigned to that group (i.e. RDP’ing to a set of Windows servers).

    In two hours that access will automatically be revoked and this will also be reflected in the Okta system log.

    Wendy Requests Access to PAM Resource Admin Role

    Next Wendy Winadmin, who’s been assigned to the PAM admin team for a while, needs to be made a member of the OPA Resource Administrators group so she can manage a project in a resource group.

    As with Larry above, she goes to the Okta Dashboard, selects the Request access button and is shown the list of apps she can request access for. She selects the Okta Privileged Access app and is presented with the same list of accesses as Larry saw.

    She selects the access, specifies a Business Justification and submits the request. Her manager gets notified, reviews the request and approves.

    Wendy hasn’t seen any notification that her request has been completed, so she returns to the Request access menu and clicks on the Okta Privileged Access app again. She can see the request is still in a submitted state, with a link to view it.

    She clicks on the link to view the progress of the request and can see her manager has approved, but it’s still waiting on the service owner.

    Once the service owner approves, the access is granted.

    We can see she was added to the group in Okta, it was pushed to Okta Privileged Access and she is now a member of the group there.

    Wendy SSOs into Okta Privileged Access from the Okta Dashboard and sees she can now access the Resource Administration functions.

    After four hours this access will be removed.

    Conclusion

    In this article we have shown how the new OIG Resource Catalog feature can be used to allow users to request privileged access in Okta Privileged Access from the Okta Dashboard. These access requests are based on being assigned to groups that are pushed to Okta Privileged Access and mapped to admin roles and/or policies to grant access.

    The article has shown how these access request processes (conditions) can be built for the Okta Privileged Access app, and then leveraged by end users, with approval flows and fixed access durations.

    Centralised management in Okta Identity Governance of groups used in Okta Privileged Access can be effective in applying a Zero Standing Privileges model and reducing the risks associated with using a PAM solution.

  • Privileged Access Management for AWS using Okta Workforce Solutions

    This article is a summary of a presentation I recently gave looking at Okta Workforce Identity Cloud and Amazon Web Services (AWS). It is focused on how privileged access management can be applied to AWS users and access, leveraging the different Identity and Access Management (IAM) capabilities in Okta.

    Note that this article talks about a CIEM capability that was provided with Okta Privileged Access for a time as no-cost trial. This feature has now been removed. Products like Okta Identity Security Posture Management provide an equivalent capability.

    Introduction

    Privileged Access Management (PAM) as an IAM domain has been around for a long time. The original premise was that privileged access could be separated from regular access, almost a binary user vs. super-admin delineation. Modern app access is far more nuanced than that – there may be shared accounts with privileged access to be managed, but there is also the need to prioritise granting individual access with the right level of privileges delivered in a Just-in-Time (JIT) approach to drive towards a Zero Standing Privileges model.

    Amazon Web Services (AWS) is a good example of the complexity of privileged access management. It can involve Account root users, shared users, local users and federated users. Other than the root user (which is effectively the super user), these may have varying levels of access based on the roles/permission sets they are assigned or can assume. This means privileged access management for AWS will cover the major domains of IAM: controlling which entitlements a user assumes at runtime in Access Management, governing which entitlements a user is allowed to hold in Identity Governance and Administration (IGA), and PAM-specific controls for AWS.

    This article explores these different aspects of AWS privileged access and how Okta addresses them. It’s focussed on user-based access via SSO and doesn’t touch on service-based access.

    It is worthwhile reading this article on a background to AWS users and policies.

    Access Management and AWS Privileges

    Most AWS users will be federated users or local AWS IAM Identity Center users. When you SSO into AWS, those users assume entitlements (Roles for federated users and Permission Sets in Accounts for local users). These entitlements could include privileged access, where the Roles/Permission Sets are mapped to privileged permissions in AWS.

    Okta has the Okta Integration Network (OIN) which provides multiple integrations for AWS. Of interest to this article are the AWS Account Federation integration and the AWS IAM Identity Center integration.

    The AWS Account Federation integration is for use where AWS Identity and Access Management is used to manage federated access and is tied to a specific AWS Account. With federated users and AWS IAM, there are no local accounts in AWS. When a user clicks on a tile for AWS Account Federation in the Okta Dashboard, they are presented with a list of AWS Roles they can assume and they connect as that Role into AWS.

    The AWS IAM Identity Center integration is used where AWS IAM Identity Center is used across one or more AWS Accounts. Local users are defined in AWS IAM Identity Center, and they are mapped (either directly or via groups) to Permission Sets that allow access in AWS. When a user clicks on a tile for AWS IAM Identity Center in the Okta Dashboard, they select an AWS Account and then the Permission Set for that account, and are given access in AWS as that local user with that Permission Set.

    These two integrations and the flows are shown in the following figure.

    Thus SSO’ing into AWS via one of these integrations may involve assuming some privileged access via Roles or Permission Sets. But how do we map the users to these entitlements?

    Identity Administration, Governance and AWS Access

    In the previous section I showed how the different OIN integrations can let users SSO into AWS and assume privileged entitlements. In this section we look at the identity management and governance aspects.

    Managing AWS IAM Role Assignment

    When defining an application in Okta leveraging the AWS Account Federation integration, the application profile will be populated with a list of Roles for the corresponding AWS Account. These Roles are available to be assigned in the app user profile.

    As with any application user profiles in Okta, attributes (such as Role) can be assigned directly or via groups. It makes sense to set up different groups to assign logical sets of assignments and have the group assignment define the Roles a user gets.

    Thus to assign users to AWS Roles, you assign them to the relevant groups in Okta.

    Note that the AWS Account Federation integration does not provision users to AWS (as they are federated users that don’t exist in AWS IAM) but will pull in the list of roles and also manage the mechanism where users select the role to assume on SSO.

    Managing AWS IAM Identity Center Permission Set Assignment

    The AWS IAM Identity Center integration acts differently – the AWS Permission Sets are not imported into the Okta app profile and available in app assignment, so you cannot use the group assignment approach to manage the entitlement mapping.

    For applications using AWS IAM Identity Center, you need to define Push Groups for the application and then map these groups in IAM Identity Center to Permission Sets. This is shown below.

    Thus to assign users to AWS Permission Sets, you assign them to the relevant group in Okta that is then pushed to AWS IAM Identity Center.

    Note that the AWS IAM Identity Center integration provisions users from Okta to AWS IAM Identity Center. Users must be in AWS IAM Identity Center and be assigned to the relevant AWS Groups (pushed from Okta) for them to be able to SSO, so any users in the push groups for the app must also be assigned to the app.

    Using Group Management to Control Assignment

    Common to the two approaches above is the use of group membership in Okta – whether it is groups assigned to an AWS Account Federation app assigning Roles, or groups pushed to AWS IAM Identity Center and mapped to Permission Sets.

    The following figure shows an example of different groups in Okta mapped into AWS IAM Identity Center and how they are managed in Okta.

    In the example, there are three groups pushed to AWS and assigned to different Permission Sets. One of these groups (Network Admin Group) is managed through a group rule. The other two groups can be requested via Access Requests.

    The following sections will look at the different patterns: use of group rules for automatic assignment; use of access requests for dynamic assignment; and use of access certification to re-validate the need for access.

    More information on this pattern with Okta Groups and AWS IAM Identity Center can be found at https://aws.amazon.com/blogs/apn/just-in-time-least-privileged-access-to-aws-administrative-roles-with-okta-and-aws-identity-center/.

    Automatic Assignment to Entitlements via Group Rules

    Automated end-to-end user lifecycle often involves automatic assignment to entitlements based on attributes coming from HR, such as department, job code or location. This is sometimes called role-based or attribute-based assignment. Within Okta, this is achieved through Group Rules tied to attributes on user profiles. When user profiles change, such as in response to a user change coming from an HR system, the group rules are re-evaluated to determine what groups the user should belong to and any application assignment changes that triggers.

    The following figure shows a group rule for automatic assignment to the AWS-NA-Prod-NetworkAdmin group based on the user department being “Network”.
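    The same kind of rule can also be created programmatically via the Okta Groups API (POST /api/v1/groups/rules). As a hedged sketch, the helper below builds the request body for such a rule; the rule name and group ID are placeholders, and a newly created rule must still be activated separately before it takes effect.

    ```python
    import json

    def build_group_rule(name: str, expression: str, group_ids: list) -> dict:
        """Build the request body for POST /api/v1/groups/rules.

        The condition uses Okta Expression Language, e.g. user.department=="Network".
        """
        return {
            "type": "group_rule",
            "name": name,
            "conditions": {
                "expression": {"value": expression, "type": "urn:okta:expression:1.0"}
            },
            "actions": {"assignUserToGroups": {"groupIds": group_ids}},
        }

    rule = build_group_rule(
        "AWS NetworkAdmin auto-assign",      # illustrative rule name
        'user.department=="Network"',
        ["00g_example_group_id"],            # hypothetical group ID
    )
    print(json.dumps(rule, indent=2))
    ```

    The payload would then be POSTed to the org with an appropriate API token; the group IDs are the Okta IDs of the groups the rule should assign users to.
    
    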

    If the user was moved out of the Network department they would automatically be removed from the group and lose the associated AWS entitlement.

    Requesting Access to AWS Entitlements via Access Requests

    The Access Requests function in Okta Identity Governance (OIG) can be used to allow users to request AWS privileged access via a group, with a flow to drive approvals (e.g. manager or service owner).

    For example, I’ve set up two Request Types (flows) in OIG, one to grant short-term (2 hour) access to Permission Sets/Roles via one of the Okta groups, and another for longer-term access (until a specified date). Note that the “Role Required” is a list of the three groups shown in the previous section that are mapped to Permission Sets.

    As these groups represent privileged access, you can apply mitigating controls, such as limiting access to a short time and/or having multiple levels of approval (e.g. manager AND service owner). The first flow above will automatically remove group membership (thus assignment to the Permission Set) after two hours. The second one will automatically remove it on the chosen date.

    Reviewing Access to AWS Entitlements via Access Certification

    We can also use the Okta Identity Governance (OIG) Access Certifications mechanism to review who has access to the groups granting access to the Permission Sets (or Roles in AWS IAM).

    The following figure shows the Access Certification manager view for their employees and the access they have into AWS Permission Sets via Okta groups.

    The reviewer – in this case the user’s manager, but it could be a service owner or another role – reviews the access and can remove it by revoking access.

    Access Certifications for privileged access is a good complementary control to how access is assigned – if there are holes in the access assignment process, regular access reviews can minimise the risk of inappropriate privileged access.

    A Note on Okta Workflows and AWS

    It is worth noting that Okta Workflows has a number of AWS Connectors, including the AWS Multi-Account Access connector and the AWS Lambda connector. Both of these can be used to manage the entitlements within AWS. The former provides actions to List, Add and Remove entitlements. The latter is used to execute Lambda functions, and it’s a common pattern to embed API calls into these Lambda functions and call them from workflows.

    Both could be used to manage AWS entitlements such as assigning users to permission sets in accounts.
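    As a hedged sketch of that pattern, the AWS SSO Admin API’s CreateAccountAssignment operation assigns a principal to a permission set in an account. The helper below builds the parameters the boto3 `sso-admin` client would consume (for example inside a Lambda invoked from a workflow); all ARNs and IDs are placeholders.

    ```python
    def build_assignment_params(instance_arn: str, account_id: str,
                                permission_set_arn: str, principal_id: str,
                                principal_type: str = "USER") -> dict:
        """Build kwargs for the sso-admin CreateAccountAssignment call.

        principal_type is "USER" or "GROUP"; principal_id is the Identity
        Center identity-store ID of the user or group being assigned.
        """
        return {
            "InstanceArn": instance_arn,
            "TargetId": account_id,
            "TargetType": "AWS_ACCOUNT",
            "PermissionSetArn": permission_set_arn,
            "PrincipalType": principal_type,
            "PrincipalId": principal_id,
        }

    params = build_assignment_params(
        "arn:aws:sso:::instance/ssoins-EXAMPLE",                  # placeholder
        "111122223333",                                           # placeholder account
        "arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",  # placeholder
        "example-identity-store-id",                              # placeholder
    )
    # Inside a Lambda, boto3 would make the actual call, e.g.:
    #   boto3.client("sso-admin").create_account_assignment(**params)
    ```

    The removal counterpart (DeleteAccountAssignment) takes the same shape of parameters, which is what makes the time-boxed grant/revoke pattern straightforward to automate.
    
    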

    Privileged Access Management Capabilities for AWS

    The previous sections have looked at controlling access to AWS privileged entitlements through groups and different mechanisms in Okta such as the OIN integrations and Okta Identity Governance. But what about Okta Privileged Access, the new Privileged Access Management product? It can help with the following:

    • Controlling access to servers hosted on AWS EC2
    • Storing the AWS Account root user password in the Secrets Vault, and
    • Providing visibility into the entitlement topology and risks with Cloud Infrastructure Entitlement Management

    The first is a fairly standard use case that applies to any servers, irrespective of whether they are on-prem physical/virtual servers or cloud-hosted servers. Okta Privileged Access can manage SSH/RDP connections to Linux and Windows servers and, through policy, control who can access them and how (such as requiring MFA or manager approval).

    Storing the AWS Account Root User Password in the Secrets Vault

    The AWS Account Root User in AWS is a special user. It is the super admin used to set up and manage the AWS Account. As it is such a powerful account, its credentials must be controlled appropriately. It should be a break-glass account, with other individual accounts given administrative rights as needed. See https://docs.aws.amazon.com/IAM/latest/UserGuide/root-user-best-practices.html for more recommendations on the root user.

    But like any shared account, there may be times where it is needed, and potentially needed by different people. It makes sense to store this account in a vault, like the Secrets Vault in Okta Privileged Access. Storing the password in the vault means controls, such as MFA and access approval, can be applied to accessing the password.

    For example, you could set up a secret folder structure in Okta Privileged Access for the root user credentials for your different AWS accounts (perhaps different for development, testing and production), define the access policy for each, and store the root user creds in the appropriate folder. When an admin goes to access the password they drill down to the relevant folder and open the secret.

    In this example, there is a policy applied to that secret requiring the user’s manager to approve the request to reveal the password. Attempting to reveal the credentials triggers an approval request sent to the manager.

    Once approved, the user can reveal the credentials and copy them out to use when logging into AWS.

    Note that there is no API to manage the password of the Account root user – password management must be performed in the UI. Thus the passwords must be manually managed in the vault. When changed in AWS, they must be updated in the vault to reflect the new value.

    All access to these secrets is recorded in the Okta System Log, as shown below.

    They could be reported on there, plumbed to a SIEM for analysis and reporting, or have bespoke automation run through Okta Workflows.
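    As a sketch of the automation option, the System Log can be queried over the Okta `/api/v1/logs` API with an eventType filter. The event type string used here ("pam.secret.reveal") is an assumption for illustration; check your own System Log for the exact event types your org records for secret access.

```python
# Sketch: query the Okta System Log API for secret access events.
# The event type "pam.secret.reveal" is an assumed value -- verify the
# actual eventType strings in your org's System Log before using.
import json
import urllib.parse
import urllib.request

def build_logs_url(org_url, event_type):
    """Build a System Log query URL filtered to one event type."""
    params = urllib.parse.urlencode({"filter": f'eventType eq "{event_type}"'})
    return f"{org_url}/api/v1/logs?{params}"

def fetch_events(org_url, api_token, event_type="pam.secret.reveal"):
    """Fetch matching System Log events using an SSWS API token."""
    req = urllib.request.Request(
        build_logs_url(org_url, event_type),
        headers={"Authorization": f"SSWS {api_token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```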

    Cloud Infrastructure Entitlement Management for AWS

    The last privileged access mechanism for AWS to mention is the new Cloud Infrastructure Entitlement Management (CIEM) capability. It provides a mechanism to go and discover AWS entitlements, build a topology view of who has access to what and how, and perform some risk heuristics to determine over-permissioning.

    Within Okta Privileged Access you define connections to AWS accounts then schedule jobs to gather and analyse entitlements. This produces a topology view of resources, permissions, groups and users with risk.

    This CIEM capability is currently in preview and focussed on the RDS service in AWS. In time it will develop to cover more services and a broader set of risk rules.

    This capability provides another means to understand privileged access in an AWS Account.

    Summary

    In this article I have explored the different ways that Okta can support privileged access management for AWS users and entitlements across the domains of Access Management, Identity Governance and Administration (IGA) and PAM.

    The following figure shows the components and integration points discussed.

    They are:

    • The two OIN integrations (AWS Account Federation and AWS IAM Identity Centre) allow users to single sign-on to AWS and assume an entitlement within AWS, which could be a privileged entitlement
    • When using the AWS Account Federation integration, AWS Roles are imported and can be assigned to individuals or groups in Okta. When group membership changes in Okta, the Roles a user can select on SSO changes.
    • When using the AWS IAM Identity Centre integration, users and groups are provisioned to AWS IAM IC and then the groups are mapped to Permission Sets in Accounts. When group membership changes in Okta, this is reflected in the permission set assignment in AWS.
    • Okta Workflows may also be used for AWS entitlement management
    • Using groups to manage AWS Roles / Permission Set assignment means that Okta Identity Governance (IGA) controls like automated lifecycle, access request and access certification can be applied.
    • The Okta Privileged Access CIEM capability can consume entitlement relationships and present the topology and risk of those entitlements.
    • Access to servers running in AWS EC2 can be managed with controls like MFA and access approval.
    • AWS secrets, such as the AWS Account root user, can be stored in the Okta Privileged Access Vault with policy-based controls wrapped around to restrict who can access them and how.

    This is a wide-reaching set of capabilities to manage privileged access in AWS. Not all of them will be relevant to every AWS environment, but effective use can reduce the risk of improper access and use of privileges in AWS.

  • OIG APIs – Use Okta Connector in Workflows Now

    This short post is for the information of people who may look at some of the older OIG API and Workflows articles on this site and find they no longer work. You should be using the Okta Connector with the Custom API Action card now instead of the old generic API Connector card.

    The OLD Way to Use OIG APIs in Workflows

    When we first started writing articles on how to extend Okta Identity Governance (OIG) with APIs and Workflows, the Okta Workflows OAuth app had not been updated to include the OIG API scopes. This meant that you needed to use the generic API Connector.

    The following figure is an example of this:

    The flow shown here is fairly standard:

    1. Set up the Authorization object (SSWS <token>) using a subflow with a token tied to an admin user in Okta
    2. Set up the URL for the generic API call, which includes the https:// and the Okta org domain name, with the API endpoint relative URL
    3. Construct a query object and/or body object depending on the needs of the API
    4. Call the API Connector card passing in the full URL, query/body objects and the authorization object

    The key here is setting up the authorization object and the full URL.
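    Outside of Workflows, the same two pieces can be sketched in plain Python. The endpoint shown is an example OIG relative URL; the org domain and token are placeholders.

```python
# Sketch of what the old generic API Connector flow assembles:
# an SSWS authorization header and a full URL.

def build_auth_header(api_token):
    """Okta API tokens are passed as 'SSWS <token>'."""
    return {"Authorization": f"SSWS {api_token}"}

def build_full_url(org_domain, relative_path):
    """Combine the Okta org domain with the API endpoint's relative URL."""
    return f"https://{org_domain}{relative_path}"

url = build_full_url("example.okta.com", "/governance/api/v1/requests")
headers = build_auth_header("00a-example-token")  # placeholder token
```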

    The NEW Way to Use OIG APIs in Workflows

    The Okta Workflow OAuth app was updated a while back to include the governance scopes. If you try to run the old method you may find workflows failing with a 401 Unauthorized error (errorCode “E0000011” errorSummary “Invalid token provided”). This is an indication you need to update your flows to use the Okta Connector with the Custom API Action card.

    There are two things to be done: enable the relevant scopes for the app and update your flows.

    Enable Okta API Scopes in Okta Workflows OAuth App

    First, there are multiple Okta API Scopes that can be granted in the Okta Workflows OAuth app. They are fairly generic – read or manage for the three types of functions (access certs, access requests and entitlements).

    There are also four new scopes introduced for the new Resource Catalog approach to access requests.

    You will need to grant the level of access to the API calls you want to make in workflows.

    Once you update the scopes in the app, you will need to reauthenticate the connection in Workflows.

    Note that there is only one OAuth app, so you will need to grant the highest level of access to cover any API call you will make across all of your workflows, and any workflow could use higher level permissions than it needs. You should apply controls around who is building flows and what APIs they are including in the flows.

    Use the Okta Connector with Custom API Action in Workflows

    With the scopes granted, you can update your flows to use the Custom API Action card from the Okta Connector.

    Here is the flow from above, but changed to use the new method.

    Changes from the old approach:

    • You no longer need to use an API token and set up the authorization object – the Okta connector handles this
    • The URL passed into a Custom API Action card is the relative URL (e.g. /governance/api/v1/requests) not the full URL
    • The query/body arguments are the same.

    The output is the same for both (Status Code, Headers and Body), so your processing of the response won’t change.

    This approach is much cleaner. You don’t need to worry about storing an API token somewhere (like in a table or hardcoded into a flow) and you don’t need to worry about updating the token when it expires. Also you don’t need to worry about the Okta URL as it’s also tied to the connector, making the flows more portable.

    Hopefully this gives you enough information on updating your flows to the new method of making OIG API calls from Workflows.

  • A Look at the new Govern Okta Admin Roles feature

    This article is a walkthrough of the new Govern Okta Admin Roles feature in Okta Workforce Identity Cloud (WIC).

    Overview of the Feature

    This new feature builds on the flexible and customisable administration roles that have been available on Okta WIC for some time. It treats the Okta Admin Console as an application with entitlements and governance controls are applied to it. These are (from the help documentation):

    • Admin role bundle is a combination of role and resource set. Govern Okta admin roles treats an admin role bundle as a group of entitlements that are associated with the Admin Console.
    • Access Requests allows you to streamline the process of requesting access to an admin role bundle. It provides an easy and secure way for users to submit requests, and automatically sends those requests to approvers for action. Once a request is approved, the user has time-bound access to the requested admin role.
    • Access Certifications helps you create audit campaigns to periodically review, approve, and revoke users’ admin role assignments. This helps avoid the accumulation of elevated or privileged access to a resource.

    If you are familiar with Okta Identity Governance with Entitlement Management, these concepts will be familiar to you. Some of the user interface screens will also look familiar, but this function is the first to use the new request catalog interfaces.

    The rest of this article is a technical exploration of the new feature, looking at how it’s enabled, and how you set up and use admin role bundles, access requests and access certification.

    Product documentation can be found at: https://help.okta.com/oie/en-us/content/topics/security/governance-admin-roles/govern-admin-roles.htm.

    There is also a video of this functionality: https://www.youtube.com/watch?v=JyeJhv5E09E.

    Enabling the Feature

    This feature is in early access and is being assigned progressively to Okta WIC orgs. Once it is assigned to an org, you will see it appear in the Settings > Features list to be enabled.

    You will need to enable it.

    To confirm the feature is enabled, go to the Security > Administrators menu item and check that the Governance tab is there.

    You can now create Admin role bundles and Access requests for the bundles.

    It’s worth checking the Get started page in the product documentation to ensure you have everything set up correctly.

    Create Admin Role Bundles

    Selecting the Governance tab presents a new page that describes the new Govern Okta admin roles function and allows the creation of Admin role bundles and Access requests.

    To create a new bundle, click the + Create bundle button.

    On the next page specify a Name and Description. You also need to specify the Admin role that this bundle will apply to. Note that you can only associate one role with a bundle (although you could have multiple bundles applying to the same admin role).

    Some roles, such as the Super Administrator role above, do not allow access to be further restricted. However, most admin roles allow resources or resource sets to be applied. For example, the following role bundle restricts the Application Administrator role to some specific application instances.

    When saving the bundle you will get a confirmation dialog (which you can close, or it will dismiss itself).

    With admin role bundles created you can create access requests to apply to them.

    Create and Test Access Requests

    In this section we will look at creating and running access requests on Admin role bundles.

    Create an Access Request

    The behaviour when creating an Access request depends on whether this is the first time you are doing it. Access request instances are called conditions, as they define the conditions for a request.

    First Time

    To create a new Access request, you click on the Access request button. If this is the first time you’ve done this in the org, there will be some additional backend provisioning required and you will see the message as shown below.

    Refresh until the Create condition button is available.

    Create Condition

    Click the + Create condition button to create a condition.

    Access requests use the same functionality as the Okta Access Requests component (also shipped with Okta Identity Governance and Okta Privileged Access) but use a new interface embedded in the Okta admin console. If you are familiar with Access Requests in OIG or OktaPA, the following will make sense.

    The Access request condition has four sections: Requester scope, Access scope, Access duration and Approval sequence.

    The Requester scope defines who can request this access – either everyone or a set of Okta groups. This is equivalent to the Audience definition in Access Requests (but you can specify multiple groups and there are no Teams).

    In this example there is an Okta group that contains the admins who are allowed to request access to the Super Admin role.

    The Access scope section defines which Admin role bundles apply to this condition. This is equivalent to the assignment actions in Access Requests.

    The Access duration section defines how long a user will retain the Admin role bundles. It could be something the user specifies (question) or fixed. In the example it is set to 2 hours – two hours after the user gets access to the Super Admin role, they will automatically lose it. This is equivalent to using a timer in Access Requests between assigning access and removing access.

    The last section of a condition is the Approval sequence. You can build up a library of approval sequences that can be used in different Access request conditions. You may have some requests needing two levels of approval (say the user’s manager and a service owner group), so they can share a sequence that does that. Note that the Govern Admin roles feature enforces a minimum of two levels of approval, as Okta Admin roles are considered high risk entitlements.

    Click the Select sequence button to see the sequences you can select from.

    The first time you run this, you will need to create a sequence (there are no pre-existing sequences to select from).

    Create Approval Sequence

    To create a new sequence, click the plus icon beside Create sequence.

    A new browser tab opens and you are presented with the new interface for creating or modifying sequences (equivalent to Request Types in Access Requests). A standard template sequence is shown with the Trigger, two Approval steps and a Deliver step. You cannot modify the Trigger or Deliver steps, but you can modify the Approval steps.

    The first thing to do is to give the sequence a name. Click the pencil icon.

    Give it a Name and Description then click Continue.

    For this sequence we’re setting the user’s manager as the first approver. Select the first Approval step and select Requester’s manager for the Assign to.

    Note that you can’t name the approval steps.

    For the second approver we set it to a specific user but it could be an Okta group or Okta group owners (e.g. it might make sense to have a group of people who can request admin access, and have the owners of that group be the access approvers, which would make this sequence reusable across different groups).

    We can also add extra steps in by using the plus icons between the existing steps.

    In this case, we’re going to add a question to get a business justification from the requester. Clicking the plus icon allows selection of different steps that can be added, such as questions and actions. We select the question option and set the Prompt.

    When finished you Save the sequence and close the tab. To see the new sequence in the list, you click the refresh icon.

    The new sequence should appear.

    Assign an Approval Sequence

    To assign the sequence, click the sequence and the Select sequence button.

    The sequence is now assigned to the Access request condition.

    The last thing to do is to Create the condition.

    Note that currently there are limitations around editing Access request conditions and sequences that will be improved over time.

    Publish an Access Request Condition

    A new Access Request condition is created in a Disabled state. This means it’s not visible in the Request Catalog.

    To enable a condition you need to use the Enable action.

    Then it should show as enabled.

    This new Access request condition is now ready for use.

    Request Admin Role Access

    This section walks through the request flow.

    User Request

    The user requesting access to an Admin role bundle uses the new Request access button on the Okta Dashboard.

    This opens the new Request Catalog view. All applications that the user can request are shown in their own tile. In this case only the Okta Admin Console application is shown.

    Selecting this application tile presents the access level selection option. If there were multiple Admin role bundles exposed for this user, they would see multiple items in the list.

    There is also information on the duration of access and the Business Justification field from the question added to the sequence above.

    The user selects the access level, enters a Business Justification and clicks the Submit request button.

    Manager Approval

    Once submitted, the request follows the approval sequence for this access level (i.e. the sequence created above). In this case there was the user’s manager approval step, then a second approval step.

    Performing approvals is the same as for older Access Requests. The reviewer (such as the requester’s manager) will get an email to say they have a request to review. In this case the manager selects the Request Access tile on their Okta Dashboard.

    They are presented with the open requests needing action.

    They open the request and can see information about it. They have the option to Approve or Deny the request.

    Once approved it proceeds to the second level approver. Once they approve, the request completes and the access is granted.

    Access Granted

    When the request completes the user gets an email to indicate it has completed. If they refresh their Okta Dashboard, they will see that the Admin button has appeared.

    When they go into Okta they see the full list of menu items as they are a Super Admin.

    After two hours this access will be automatically removed.

    Access Certification of Admin Roles

    The other governance control applied to the Admin role bundles is Access Certification. You can create and execute access review campaigns for the Admin role bundle entitlements in the same way that you do for any other application entitlements.

    Create a Campaign

    Creating a campaign is the same as creating any other Resource campaign in Okta. On the Resources page, you need to select a type of Applications, enable the Review entitlements option and select the Okta Admin Console application.

    The subsequent steps to create a campaign are the same as for other campaigns.

    Note that there are some conditions, mostly relating to self-review, that won’t allow a campaign to be built. Okta Admin roles are considered high risk entitlements, so it doesn’t make sense to allow a user to recertify themselves.

    Launch the Campaign

    The campaign will automatically launch on the specified date/time or can be manually launched. This is the same as for every other campaign.

    In this example, we did not restrict the users, so every user assigned to an Admin role (whether via a bundle, via a group, or directly) will show up. In the example below, the first entry was assigned the Super Admin bundle, whereas the other users were assigned by traditional approaches.

    This campaign is now launched and notifications have been sent to the reviewers.

    Review Access in the Campaign

    As a reviewer, you review a campaign in the same way that you review any other campaign. It may be via the link in the email sent or by clicking the Okta Access Certification Reviews tile on the Dashboard and selecting the campaign.

    When reviewing a user-resource entry, the reviewer can click on the row to see more details about the entry. In this case they can see that the user is assigned to the Super Admin bundle (with the bundle description shown) and what the bundle assigns. This can be used by the reviewer to decide if they should approve or revoke the bundle.

    This completes the example of building and running an Access Certification campaign for an Admin role bundle.

    Conclusion

    This article has provided a technical walkthrough of the new Govern Okta admin roles feature. It has looked at the Admin Console application “entitlements” – the Admin role bundles. It has walked through the creation and use of both the Access Requests and Access Certification capabilities for them.

    To borrow from the product documentation, the Govern Okta admin roles feature provides these important security features:

    • Orgs have more control over who can access your org’s admin roles and resources.
    • Time-bound admin access helps ensure that sensitive permissions and resources are protected.
    • Unnecessary standing admin assignments are eliminated.
    • Users can easily request admin access, and orgs can quickly grant and revoke that access.
    • Campaigns allow you to review users’ admin role assignments periodically to avoid accumulation of elevated or privileged access.

    Your Okta implementation will benefit from improved security and reduced risk of compromise if the Govern Okta admin roles feature is deployed effectively, allowing standing privileges to be significantly reduced.

  • Consolidating Nested Lists in Okta Workflows

    Working with lists in Okta Workflows is common, but sometimes the list processing actions can be overwhelming and confusing. In this article I look at how I approached a problem of consolidating nested lists with a standard pattern of Lists actions. It should give you an idea of how you can use different Lists actions to achieve operations on complex lists.

    The Problem

    I needed to find a list of server objects that matched a search argument using APIs in Okta Workflows. There was no single API I could use to produce a filtered list of all servers in the environment, so I needed to drill down through various levels of objects to get to the objects I wanted. In this specific case (working with Okta Privileged Access) I needed to find a list of resource groups, and for each resource group I needed to find all projects, and for each project I needed to find all servers defined and then see if the found servers matched my search argument. This involved working with Lists in Okta Workflows.

    There are many ways to process lists, including List – For Each, List – Map and List – Reduce. The For Each action wouldn’t return the data I needed and the Reduce action only returns a single value (see Learn the Differences Between Three Workflows List Functions: For Each, Map, and Reduce). The only option was the Map action, but using it I ended up with nested lists (three-deep) with empty objects (i.e. servers not found) possibly at each level. I had to come up with an approach to filter out the empty rows at each level and consolidate lists of lists into a single list.

    Overview of the Solution

    For this solution I had to drill down into multiple subflows to generate lists. In summary:

    • At the top level flow I called an API to return a list of all Resource Groups. For each item in the Resource Group list, I called a subflow
      • The first subflow used the Resource Group Id to call another API to return a list of Projects in the Resource Group. For each item in the Projects list, I called another subflow
        • The second subflow used the Project Id to call another API to return a list of Servers in the Project. For each item in the Servers list, I called another subflow.
          • The third (and last) subflow checked each server against a search argument and either returned an object for that server (if a match) or nothing (if not a match)
        • The second subflow then returns the list of server objects in the project to the first subflow
      • The first subflow then returns the list of server objects in the projects in the resource group to the main flow
    • The main flow then has a list of all matching server objects in the projects in the resource groups across the Okta PA team.

    At each level (i.e. in the main flow, first subflow and second subflow) there needs to be actions to remove blanks and consolidate any nested lists so that the final list is just a single-level list of matching server objects.

    The standard pattern I used for processing a list of lists at the different levels is shown below.

    The steps are:

    1. First is a List – Map action to call a subflow (helper flow) to process each item in the list. This returns a list of results (objects) from the subflow, with the same number of items as was passed in.
    2. Next is a List – Filter action working on the returned flow to strip out the blank entries
    3. Then a List – Pluck action will pull out the objects by key and return just an object list
    4. Finally a List – Flatten action will consolidate the nested lists into a single list. If the called subflow (helper flow) is returning just an object rather than a list, then this step isn’t needed

    Don’t worry if this doesn’t make sense, the following sections will explain with examples. The examples below are based on the first subflow, which has two levels of subflow below it, so will be returned a list of objects (that are lists themselves).
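    The four-card pattern can be approximated in plain Python to show the data shapes at each step. The helper function below stands in for the subflow's API call, and the sample project ids and server names are made up for illustration; the "found_server_list" key matches the one used in the article.

```python
# A plain-Python approximation of the four-card pattern:
# List - Map, List - Filter, List - Pluck, List - Flatten.

def helper_flow(project_id):
    """Stand-in for the subflow: some projects match servers, others don't."""
    fake_results = {
        "p1": [{"name": "web01"}, {"name": "web02"}],
        "p2": [],
        "p3": [{"name": "db01"}],
        "p4": [],
    }
    return {"found_server_list": fake_results[project_id]}

ids = ["p1", "p2", "p3", "p4"]

# Step 1 - Map: run the helper over every item (same number out as in)
mapped = [helper_flow(i) for i in ids]

# Step 2 - Filter: drop items where found_server_list is empty
filtered = [item for item in mapped if item["found_server_list"]]

# Step 3 - Pluck: pull the value for the key, leaving a list of lists
plucked = [item["found_server_list"] for item in filtered]

# Step 4 - Flatten: consolidate the nested lists into a single list
flattened = [server for sublist in plucked for server in sublist]
# flattened: [{"name": "web01"}, {"name": "web02"}, {"name": "db01"}]
```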

    Step 1 – List-Map

    The first step is to use a List – Map action to run a subflow across each item in a list. A Map action will return a list with exactly the same number of items as the list passed in. The input and output formats can be different.

    In the following example I’m passing in a list of id’s (text fields) and returning a list of objects. There are four items passed into the Map action and four items returned.

    The called subflow will process each item and return a value. In this case the called subflow is making an API call using the id passed in, and the result is a nested list of server objects or blanks (empty lists) if nothing was found.

    In this case, the item returned (called “found_server_list”) is an object that contains a list. This returned list may be empty or contain a list of objects. This is because the called flow is returning a list of objects (and the calling flow is receiving a list of objects). If you had returned a list of text, you would see just the server objects, but no “found_server_list” key – which we need for the next step.

    Step 2 – List-Filter

    The second step is a List – Filter action to filter out any blank items from the list returned from the List – Map action.

    The Filter action has an operator of is not empty on a path of found_server_list. This will only return non-empty items in the list.

    You can see that the list passed into the Filter action has four items but two of them are empty ("found_server_list":[]). These are discarded to leave two items (which are two objects, one containing a list with two server objects and one containing a list with one server object).

    Step 3 – List-Pluck

    The third step is a List – Pluck action to pluck out the found_server_list values and return just the objects within them. The Pluck function pulls, from each object in a list, the value matching the specified key.

    In this case there are two objects with a key of found_server_list and the result is two lists containing three server objects (two in the first list, one in the second).

    Step 4 – List-Flatten

    The last step is a List – Flatten action to reduce a list of lists to a single list.

    In this case, the two lists of servers within the one top-level list are consolidated into a single list of three server objects.

    This is the final list for this level. As this is a subflow, the list is passed up to the next level and filtered/consolidated again until the main flow has a single list of objects representing the servers that match the query.

    Conclusion

    This article has shown how a combination of List actions can be used to consolidate multi-layered lists of lists. It showed how the List – Map, List – Filter, List – Pluck and List – Flatten cards can be used together to filter out empty entries and produce a single-level list of objects, no matter how deep the list nesting is.

    This article should give you enough detail so you can go try this yourself and understand how the different actions can be used and how to process complex lists of lists.

  • OIG Entitlement Management Videos on YouTube

    Some colleagues have recently published a set of videos on YouTube (Okta channel) highlighting some of the features of the new Entitlement Management capability in Okta Identity Governance (see our Entitlement Management page for more information on the product).

    Most of the videos will show up by searching for “entitlement” and “okta” (https://www.youtube.com/results?search_query=entitlement+okta).

    It may pay to subscribe to that channel as there’s bound to be more content added over time.

    Overview Videos

    Application-Specific Integrations

  • Okta Privileged Access – Determining and Highlighting Risk in Roles and Policies

    Okta Privileged Access provides a flexible framework for controlling who can access what privileged resources and how. This includes resource groups for managing resources, security policies for controlling access, administrative roles to manage them, and principals to use them. Invariably, configuring the PAM solution will introduce risk. But how do you monitor and manage the risk in your environment? This article looks at how risk in an Okta Privileged Access environment could be determined and exposed through an Access Certification campaign.

    This is a follow-on article to OPA and Access Certification – Getting Roles into the Group Description where I looked at how to update Okta group descriptions to include Okta Privileged Access roles and policies.

    Okta Privileged Access and Risk

    Okta Privileged Access (OPA) is a Privileged Access Management product – it controls access to privileged systems and accounts. There will always be some risk associated with managing this access. It is important to minimise risk through effective implementation of policy. But in a complex deployment, risk may not be apparent due to overlapping or unexpected combinations of policy. It is sometimes hard to see where policies intersect or overlap and the implications of this.

    In OPA, this risk may be due to the user assignment (through groups) to policies – i.e. who can access what privileged resource and how. It may also come from the administrative roles assigned to users (through groups) – i.e. who can set or modify the policy. There should be a plan for the design of these groups, roles and policies. There should also be review mechanisms, such as discussed in this article.

    Defining Risk

    Any risk definition would be a combination of the level of access and the type of resource being accessed. Accessing a production server with a mission critical database as the superuser of that server carries a lot more risk than regular (non-admin) access to a test server.

    Understanding the risk associated with a resource being accessed is largely a business problem – someone needs to assign a rating to different resources. However we can look at the roles and policies in OPA to get an understanding of risk. For example accessing a server using the superadmin account or admin elevation is higher risk than accessing a server with a non-admin account.

    For this article I looked at the different options for admin roles and security policies in OPA and built a mechanism to evaluate them. The following sections define the risks I’ve identified and the level I’ve assigned to them – LOW, MEDIUM or HIGH. This is subjective but can be used as a guide.

    Also, the data model on the policies means that there are multiple levels, such as rules within policies and different resource assignments and conditions in each rule. The approach I have taken is that the first of the highest risks bubbles up to the higher level. For example, if a rule has two HIGH level risks and one MEDIUM level risk, the rule will be considered HIGH risk, with the first HIGH risk used as the reason. For the sake of the exercise, it doesn’t really matter which HIGH risk is used, just that the rule is a HIGH level risk.
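    This bubbling-up logic amounts to taking a maximum over an ordered set of levels, keeping the first winner. A minimal Python sketch (the function name and (level, reason) tuple shape are my own illustration, not part of the Workflows implementation):

```python
# Risk levels in ascending order of severity.
RISK_ORDER = ["LOW", "MEDIUM", "HIGH"]

def highest_risk(risks):
    """Return the first entry with the highest risk level.

    Each entry is a (level, reason) tuple. Ties keep the earliest
    occurrence, so the *first* of the highest risks bubbles up.
    """
    best = None
    for level, reason in risks:
        if best is None or RISK_ORDER.index(level) > RISK_ORDER.index(best[0]):
            best = (level, reason)
    return best

# Two HIGH risks and one MEDIUM: the first HIGH wins.
rule_risks = [
    ("MEDIUM", "shared account with MFA"),
    ("HIGH", "root account, no controls"),
    ("HIGH", "admin elevation, no controls"),
]
```

    The same function works at both levels: first across the risks within a rule, then across the rule results within a policy.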

    Role Risk

    There are different OPA administrative role assignments, with the system-wide roles having more scope (thus risk) than the delegated roles.

    For this exercise I have assumed the following risk conditions:

    • Any of PAM Administrator (pam_admin), Security Administrator (security_admin) or Resource Administrator (resource_admin) = HIGH
    • Any of the Delegated roles = MEDIUM

    These roles are mapped directly to groups.
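    As a sketch, the role-to-risk mapping is a simple lookup. The system-wide role names below are from the article; treating any other assigned role as delegated is my simplification:

```python
# System-wide roles carry HIGH risk; delegated roles MEDIUM.
SYSTEM_WIDE_ROLES = {"pam_admin", "security_admin", "resource_admin"}

def role_risk(roles):
    """Map a group's OPA admin roles to a risk level (None if no roles)."""
    if any(r in SYSTEM_WIDE_ROLES for r in roles):
        return "HIGH"
    if roles:  # any remaining role is treated as delegated
        return "MEDIUM"
    return None
```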

    Secret Policy Risk

    OPA policy can be either Secret Policy or Server Policy. Secret policy defines what actions can be taken on folders and secrets, with administrative actions of create/update/delete and read-only actions of list and reveal (secrets). Secret policies can also have conditions tied to them, such as access request and MFA (soon).

    For this exercise I have assumed the following risk conditions:

    • Any of create/update/delete on folders or secrets, without any additional controls = HIGH
    • Any of create/update/delete on folders or secrets, with any additional controls = MEDIUM
    • Read-only (folder/secret list & secret reveal) = LOW

    You could argue that the risk relates to the secrets being accessed, but we cannot determine that just by looking at policies. You would need to assign risk ratings to different folders and secrets.
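    These secret policy conditions can be sketched as a small decision function. The administrative action names below are assumptions (folder_create appears later in the article; the rest follow the same naming pattern):

```python
# Administrative actions on folders and secrets (names assumed).
ADMIN_ACTIONS = {"folder_create", "folder_update", "folder_delete",
                 "secret_create", "secret_update", "secret_delete"}

def secret_rule_risk(actions, conditions):
    """Evaluate a secret rule: admin actions without controls are HIGH,
    with controls MEDIUM, and read-only access is LOW."""
    if ADMIN_ACTIONS & set(actions):
        level = "MEDIUM" if conditions else "HIGH"
        return (level, "administrative folder/secret actions")
    return ("LOW", "read-only access")
```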

    Server Policy Risk

    Server Policy in OPA controls which servers a user can access, what accounts they can use (individual or shared), if the individual account can be elevated to admin rights, and what controls to apply (access request, MFA, session recording etc.).

    For this exercise I have assumed the following risk conditions:

    • System admin account (root, Administrator) without additional controls = HIGH
    • System admin account (root, Administrator) with additional controls = MEDIUM
    • Other shared account without additional controls = MEDIUM
    • Other shared account with additional controls = LOW
    • Elevated to admin without additional controls = HIGH
    • Elevated to admin with additional controls = MEDIUM
    • Individual without elevation = LOW

    For this, I have decided that only ‘Access Request’ and ‘MFA’ would be appropriate mitigation controls as they are checked before access is granted. You might consider ‘Session Recording’ a mitigating control, but it is after the fact and less likely to reduce risk.
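    These server risk conditions can be expressed as an ordered set of checks over four boolean flags: a system admin account present, another shared account present, admin elevation present, and a mitigating control (access request or MFA) present. A sketch of that decision logic:

```python
def server_rule_risk(superadmin, shared, elevated, mitigated):
    """Apply the server risk conditions above, most severe first."""
    if (superadmin or elevated) and not mitigated:
        return "HIGH"
    if superadmin or elevated:
        return "MEDIUM"
    if shared:
        return "LOW" if mitigated else "MEDIUM"
    return "LOW"  # individual account without elevation
```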

    As with secret policies, you should also consider the servers being accessed and the risk of them being accessed, but we can only look at the policies to determine risk.

    Determining Risk for Policies and Groups

    The first part of this exercise is to look at the risk associated with policies (and therefore groups of users assigned to the policies). This section will look at the process of collection and then explore the Workflows used to do it.

    A later section will look at combining this with role-based risk and writing that to group descriptions.

    Collecting the Risks by Group and Policy

    Let’s look at how we can determine risk in policies and thus the groups assigned to them.

    All detail on the policies can be accessed by using the List All Security Policies API endpoint. This endpoint will return a list of complex objects, each one representing a single security policy. These include:

    • The id, name and description of the policy
    • The principals, a list of the user_groups (and users) assigned to the policy
    • The rules, a list of all the rules in a policy. Each rule may contain:
      • The id, name and resource type (secret or server)
      • A resource selector – defining what resource it is assigned to. This is for secret resources
      • Any resource_groups a secret policy may be assigned to
      • The privileges, such as the actions the policy allows for secrets and folders or whether an individual user can be elevated to an admin level when connecting to a server
      • The conditions, any access conditions such as an access request or requiring MFA

    To determine a risk level (and assign a reason) all of the policies and their rules must be unpacked, and the different aspects of the policies analysed to find the risk conditions shown earlier.
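    The unpacking step can be sketched as a generator that flattens the policy list into (group, policy, rule) rows. The field names mirror those listed above but are simplified; the actual API payload shape may differ:

```python
def unpack_policies(policies):
    """Flatten the policy list into (group_name, policy_name, rule) rows."""
    for policy in policies:
        groups = [g["name"] for g in policy["principals"]["user_groups"]]
        for rule in policy["rules"]:
            for group in groups:
                yield (group, policy["name"], rule)

# Illustrative (simplified) policy object:
policies = [{
    "name": "prod-servers",
    "principals": {"user_groups": [{"name": "ops"}, {"name": "dba"}]},
    "rules": [{"name": "rule1", "resource_type": "server"}],
}]
```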

    As there could be multiple risks within a single rule and multiple rules in a policy, the highest must be determined and bubbled up to the policy level. Groups are assigned at the policy level, so the group risk includes the risk of every policy it is connected to.

    The following (Workflows table) shows the completed evaluation of risk in my OPA environment.

    Let’s look at how this was implemented.

    Workflows to Derive Policy Risk

    The following workflows are used to produce the table above.

    The PAM30* workflows are operating on the policy list and each policy in the list:

    • PAM30 – Will use the policies API to get a list of all policies. It calls PAM30a for each policy in the list.
    • PAM30a – Will process each policy in the list, stripping out the principals, the policy details (name, desc etc.) and the rules. For each rule it calls RSK10 to perform the rule risk analysis. It will consolidate the risk from all the rules, then call PAM30b to write out the table shown above.
    • PAM30b – This writes out each group-policy combination to the table.

    These flows are fairly straightforward, using standard API cards and object/list manipulation.

    The RSK** flows are evaluating the risk in each policy rule.

    The RSK10 flow will run for each rule, and it determines whether a rule is for secrets or servers (resource_type). It will call RSK11 for secret rules and RSK12 for server rules. The logic for the different types of policies could all be in the if/else branch in the RSK10 flow, but I decided to split them out into their own subflows to make it easier to understand and maintain.

    The RSK11 flow runs for each secret rule.

    It is passed the rule object and will strip out the privileges[0].privilege_value which contains the actions (such as folder_create). The actions are extracted into their own T/F values. Then a True/False Or card is used to determine if any of the administrative actions are true.

    It then checks if any conditions are present for the rule. There is currently only the access request condition (MFA soon for secrets), so any conditions (i.e. list.length > 0) means there are mitigating controls. In this case the risk level is set to MEDIUM with a message; if there are no conditions, the risk level is set to HIGH with a message. These are passed back to RSK10 for consolidation with any other rules for this policy.

    The RSK12 flow and its subflows run for each server rule. It is passed the rule object and extracts the name, resource_selector, privileges and conditions. Then there are four sections:

    • Find Risky Accounts – use RSK12a to determine if root/Administrator, other shared accounts or ordinary accounts are represented in the resource_selectors
    • Find Elevated Privileges – use RSK12b to determine if there are any admin_level escalations for ordinary accounts
    • Find Mitigating Controls – use RSK12c to determine if any of the conditions are for access request or MFA
    • Check Risk Rules – the earlier risk conditions for servers are built into if/elseif/else branches to determine the risk level and message.

    Each of the first three sections follow a similar pattern.

    They call the subflow using a List Map card, which will return the results in a list, with an entry for each resource_selector/privilege/condition passed in. This is because each of these are lists and could have multiple values. When the list is returned, we check to see if there’s something we’re looking for – a superadmin account (root/Administrator), another shared account, an elevated individual account or a mitigating control. This is done using List Find cards. If found, the value is used for the last section – the check risk rules.
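    In Python terms, the List Map / List Find pattern is a map over the list followed by a first-match search. A sketch (the result-entry keys are illustrative):

```python
def find_flag(results, key):
    """Return the first mapped result where `key` is truthy (List Find)."""
    return next((r for r in results if r.get(key)), None)

# e.g. per-selector results from mapping an RSK12a-style check:
selector_results = [
    {"superadmin": False, "accounts": ["deploy"]},
    {"superadmin": True, "accounts": ["root"]},
]
```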

    At the end, the risk level and message for the rule is passed back to RSK10 for consolidation with any other rules for this policy.

    The RSK12a flow is checking the selector_type for individual assignments (individual_server_account) or label-based assignments (server_label).

    It will look for root/Administrator in the account list and if found flag it as a superuser account. If there’s another shared account assigned it will flag it as such. It also collects the account list and builds a message for the risk. These are returned to RSK12.

    The RSK12b flow is looking for admin elevations in the privileges list.

    It looks for a privilege_type of principal_account_ssh/rdp and, if found, flags the rule as having admin permissions. It will return that flag, a message and an accounts list to RSK12.

    The RSK12c flow is checking for access requests or MFA conditions.

    If it finds either an access_request or mfa condition_type it flags it as a mitigation and returns that, along with text, to RSK12.
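    The RSK12b and RSK12c checks reduce to membership tests over the privilege and condition lists. A sketch using the field names from the article (the exact API values may differ):

```python
def has_admin_elevation(privileges):
    """RSK12b-style check: look for an admin-level elevation privilege."""
    return any(p.get("privilege_type") in
               ("principal_account_ssh", "principal_account_rdp")
               for p in privileges)

def has_mitigation(conditions):
    """RSK12c-style check: access request or MFA conditions count as
    mitigating controls."""
    return any(c.get("condition_type") in ("access_request", "mfa")
               for c in conditions)
```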

    As mentioned earlier, the results from RSK12a, RSK12b and RSK12c are used with the risk conditions to determine the highest risk for this rule. The risks for each rule in a policy are bubbled up to PAM30a and the highest one is selected for the policy. This is what is recorded in the table shown earlier.

    Next we will use that information to help access certifications.

    Viewing Risk in Access Certification

    Above we explored how we can determine risk on policies. We can combine this with risk based on roles to provide a level of risk (and the reasons for it) for groups assigned to both policies and roles, and present that in the Okta group description. This can be useful in an access certification campaign to see the risk associated with a user’s membership of a specific group.

    We can leverage the mechanism above and merge it with a mechanism I built earlier (see article here) to present the risk along with the roles/policies for each group.

    Showing Risk in Group Description

    In this article I showed how the Okta group description can be updated to include a summary of the roles and policies the group is assigned to. Combining this with the mechanism described above, the risk can be added to that description.

    This is very useful in a User Access Review (Access Certification campaign for users) where a manager needs to decide if the user still needs privileged access. The slide-out window for each review (user-resource) row includes the group description.

    Whilst the description overflows the field, you can mouse over to see the full description including the roles/policies and associated risk levels.

    This should make the life of the business reviewer a lot easier.

    Okta Workflows Implementation

    This was implemented in Okta Workflows and leverages three components:

    • The entire set of flows covered in this article,
    • The mechanism to determine risk for policies covered above, and
    • A subflow to determine risk based on admin roles assigned to the group.

    As per this article, there are two main flows: MAIN and MAINa. MAIN will get the list of groups in OPA and for each one call MAINa to process that group. MAINa will:

    • Check the group is not “owners” or “everyone” (and stop if it is either)
    • Get all policies this group is assigned to with risk (this is the mechanism described above)
    • Search for the group in Okta and stop if it’s not found
    • Use a new flow, RSK20, to determine the admin role risks for this group
    • Get group details from Okta
    • Build and update the group description in Okta

    This new flow, RSK20, determines the risk based on the admin roles assigned. It looks for each of the five standard roles (three system-wide and two delegated) in the list of roles passed in.

    If any of the three system-wide roles are found, it sets a HIGH risk level. If either of the two delegated roles are found, it sets a MEDIUM risk level.

    Then it converts the role list to a string and appends the risk level. This is returned to the calling flow and added to the group description shown earlier.
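    Putting it together, the description update amounts to joining the per-policy risk summaries with the role summary and truncating to fit. A sketch (the separator, summary format and length limit are my assumptions, not verified Okta limits):

```python
def build_description(policy_summaries, role_summary, limit=1000):
    """Combine policy and role risk summaries into one group description,
    truncated to an assumed maximum description length."""
    parts = list(policy_summaries)
    if role_summary:
        parts.append(role_summary)
    return " | ".join(parts)[:limit]
```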

    Conclusion

    This article has shown how you can look at the Okta Privileged Access roles and policies to determine a level of risk for users assigned to groups. It presented a set of risky conditions that could be in the policies and roles and how they can be programmatically evaluated and presented, including by using Okta group descriptions to highlight risk in access certification campaigns.

    Ultimately risk will also depend on the privileged resources (such as servers and accounts) that roles and policies apply to, in combination with the types of access and controls applied in Okta Privileged Access. But that would require a wider business evaluation of these resources. Hopefully this article gives you some insight into how you could start the process by looking at the roles and policies and who is assigned to them.