Author: user

Securing Attached IAM Roles

They see me role-in…

I had an interesting question come up recently that, I’m ashamed to admit, I hadn’t given a huge amount of thought to before:

If a very permissive role is attached to an EC2 instance, how do you go about preventing unprivileged users from using the granted permissions?

If you’ve spent any time using AWS, you know that you can attach IAM roles directly to some services, like EC2. This has at least a couple of pretty awesome advantages:

  1. It allows you to grant access to applications and services for specific resources within AWS. If an application running on EC2 needs to store backups to an S3 bucket, attach an appropriate role and it just works.
  2. All credentials are temporary. No access keys or anything to share out which could leak and expose sensitive data or allow unintended access.

How does it work?

Continuing with the EC2 service as an example, there is a special link-local IP address (169.254.169.254) reachable from within the instance that serves the Instance Metadata Service (IMDS). You can even browse it using the curl command:
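As a sketch, the IMDSv2 flow looks like this from inside the instance (the role name gatekeeper_role is my assumption; substitute whatever role is attached):

```shell
# IMDSv2: request a session token first, then use it to query the
# metadata service. This only works from within an EC2 instance.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# List the attached role name(s)
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# Fetch the temporary credentials for that role
# ("gatekeeper_role" is a placeholder - substitute your own role name)
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/gatekeeper_role"
```

The final call returns a JSON document containing an AccessKeyId, SecretAccessKey, Token, and an Expiration timestamp.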

Whenever the AWS CLI or an SDK needs access, it makes an HTTP call to this IP address, requesting credentials. The metadata service then calls the AWS Security Token Service (STS), which generates a set of short-term, temporary credentials (an Access Key, Secret Key, and Session Token) and passes them back for use by the application.

So, what’s one to do?

Back to the question: how do you prevent an unprivileged user from using the attached role? Short answer: you can’t, really. There isn’t any way to pass the OS user’s identity through the metadata service to AWS. When an IAM role is attached to an Amazon EC2 instance, any user logged into that instance can retrieve the role’s credentials and inherit its permissions. This comes down to the separation between security “OF” the Cloud versus security “IN” the Cloud: you’re responsible for knowing what you’re doing once you start poking holes in your security. If you’re letting a user into a system, you’d better be sure about them.

One approach is to configure the attached role as just a “gatekeeper” role. This adds an extra step that prevents bad actors from having too much access by default. When applications or users need elevated permissions, they use the attached role to assume the administrative one.

So, what does this look like?

Without an IAM role associated, no permissions available:

With administrative role attached, S3 bucket list operation is successful:

Ok, now let’s change things up a bit. First off, we attach the gatekeeper role:
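I did this through the console, but the same attachment could be sketched from the CLI (the instance ID and profile name below are placeholders, not values from my environment):

```shell
# Attach the gatekeeper instance profile to a running instance.
# Instance ID and profile name are hypothetical - substitute your own.
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=gatekeeper_role
```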


IAM Role Trust Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

IAM Policy – assume_s3_bucket_full_access
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<ACCOUNT #>:role/s3_test_full_access"
    }
  ]
}

IAM Role – s3_test_full_access
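One detail worth calling out: for the gatekeeper approach to work, the trust policy on s3_test_full_access has to allow the attached role to assume it. Assuming the attached role is named gatekeeper_role (that name is my assumption), it would look something like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACCOUNT #>:role/gatekeeper_role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```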

With gatekeeper role attached:

Now, after we assume the alternate role, we are able to successfully use the elevated permissions and view the S3 buckets:

Note: I’m using the following script to perform the aws sts assume-role command. Since it exports environment variables, it needs to be sourced (e.g. source ./assume_role.sh) rather than run in a subshell:

#!/bin/bash

# Get the current Linux username
LINUX_USER=$(whoami)

# Map Linux users to their specific IAM role ARNs
declare -A ROLE_MAPPING
ROLE_MAPPING[ubuntu]="arn:aws:iam::<ACCOUNT#>:role/s3_test_full_access"


# Check if the current user has a mapped role
if [ -n "${ROLE_MAPPING[$LINUX_USER]}" ]; then
  ROLE_ARN=${ROLE_MAPPING[$LINUX_USER]}

  # Assume the role and get temporary credentials
  CREDS=$(aws sts assume-role --role-arn "$ROLE_ARN" --role-session-name "${LINUX_USER}-session" --query 'Credentials' --output 'json')

  # Export the credentials as environment variables
  if [ -n "$CREDS" ]; then
    export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.AccessKeyId')
    export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.SecretAccessKey')
    export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r '.SessionToken')
    echo "Assumed IAM role: $ROLE_ARN"
  else
    echo "Failed to assume IAM role."
    unset AWS_ACCESS_KEY_ID
    unset AWS_SECRET_ACCESS_KEY
    unset AWS_SESSION_TOKEN
  fi
else
  # Unset credentials for users without a mapped role
  unset AWS_ACCESS_KEY_ID
  unset AWS_SECRET_ACCESS_KEY
  unset AWS_SESSION_TOKEN
fi
# Print the AWS Access Key - Troubleshooting only
echo "$AWS_ACCESS_KEY_ID"
# Perform an AWS S3 ls to test - Troubleshooting only
# aws s3 ls

Warning!

You’ve probably already guessed this by now, but if the administrative role’s name is known, anybody with access to the system can still assume it. You would then need to take extra precautions around who is allowed to run specific applications. For instance, you could lock down the aws command so that only a specific group can execute it. That would still leave concerns around general users who could initiate the API call directly, so it ultimately comes down to being stringent about who is allowed onto the system in the first place.
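Locking down the aws binary could be sketched along these lines (the install path /usr/local/bin/aws, the group name awsadmins, and the ubuntu user are all assumptions for illustration):

```shell
# Restrict the aws binary so only root and members of "awsadmins" can run it.
# Path, group name, and username below are assumptions - adjust to your setup.
sudo groupadd -f awsadmins                 # create the group if it doesn't exist
sudo usermod -aG awsadmins ubuntu          # allow the "ubuntu" user
sudo chown root:awsadmins /usr/local/bin/aws
sudo chmod 750 /usr/local/bin/aws          # rwxr-x---: no execute bit for "other"
```

This doesn’t stop a determined user with their own copy of the CLI or an SDK, which is why it’s a mitigation rather than a fix.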

So, anyway, this was a fun challenge. Feel free to reach out and let me know how this could be done differently or, preferably, better!

Solo IT – Table for one, please…

I’ve been in IT for a while at this point (20+ years) and have worked with loads of businesses. Something that seems to be constant is that most never take full advantage of the functionality they’ve already bought. Between virtualization and Cloud technologies, there are capabilities baked in that would not only save time and improve quality of life, but also save money. It’s like buying an expensive car and never using the air conditioner or the Eco mode a lot of newer cars have. If you’re moving to the Cloud and you’re doing everything exactly as you were in your physical data center, you’re 100% guaranteed to increase your costs.

But how can we take the opposite, extreme approach? How could we use the capabilities Cloud enables to manage the infrastructure of a business most efficiently using self-service and automation, while also prioritizing security and resilience? Once we start making progress on this, efficiency and cost reduction don’t just apply to the technology, but to the people as well.

Managing IT is complex and time-consuming. Even with the necessary skills, oftentimes you simply don’t have enough time in the day to properly manage everything. Humans can’t scale quite like the infrastructure. So, taking advantage of what Cloud offers would free people up to spend more time on the “fun” stuff like projects and upskilling. Seriously, nothing hastens burnout more than toil and boredom.

So….where to start?

Let’s look at this from the perspective of being the sole IT operator for a small to medium-sized business with an environment that is either moderately sized or moderately complex. All decisions should prioritize cost and efficiency. This is the beginning of a series that I hope will ultimately provide a playbook that will go from hitting the ground on Day 0 leading to Day X and ongoing operations. The path I aim to follow will be:

  1. Gain Access and Perform Discovery – Gaining and securing initial access to the environment and figuring out what we have to work with.
  2. Establish Foundational Management and Security – Hardening network and security along with implementing disaster recovery
  3. Implement Automation and Optimization

I aim to provide:

  • Recommended tooling
  • Discussion (hit me up on LinkedIn, tell me where I’ve gone wrong. πŸ™‚ )
  • Reference architectures
  • Code examples

With that being said, “The best laid plans of mice and men often go awry.” Plus, I have ADHD and kids with ADHD, so…I’ll do my best. πŸ™‚

Belated Update (One Pass, One Fail)

So I’ve been really slow to post an update, but I took and passed the KCNA (Kubernetes and Cloud Native Associate) back in April. Definitely an easier test than the previous ones, as expected. If you’ve already done the CKA or CKAD, this will be a walk in the park.

Next, I took and absolutely BOMBED the CKS. That was completely avoidable and my own fault as I didn’t give it nearly enough study time.

I’m going to regroup and possibly make an attempt for the KCSA before attempting the CKS test again. Updates coming soon(ish). πŸ™‚

CKAD Complete, KCNA up next!

What’s up?!

Good morning/afternoon/evening, all! So, I took the CKAD test a couple of weeks ago and managed to pass! That’s two down, three to go. πŸ™‚

Overall, this one was more challenging than the CKA, but I enjoyed it more. It focuses more on the usage than the administration of Kubernetes, which, yes, I realize is a very obvious statement. It will have you building and troubleshooting the actual constructs within, like services, deployments, etc. You will need every minute of the allowed time, and you will certainly need a good understanding of how to use the docs site.

As far as resources used for this one, I once again used a Udemy course created by Mumshad Mannambeth and KodeKloud. Check out the Kubernetes Certified Application Developer (CKAD) course over there if interested, and keep an eye out for promos and discount codes.

What’s next?!

I’ll be working on the Kubernetes and Cloud Native Associate (KCNA) next. This one focuses on Kubernetes in the broader context of Cloud Native. From what I understand reading reviews and the exam content (here!), this looks to be more of an introductory certification that covers Kubernetes at a high level with the addition of a lot of public Cloud concepts and terminology. So, I’m hoping that I’ll have a bit of a head start, given my previous experience with AWS and Cloud in general. I’ve tentatively scheduled this one for a few weeks out (April 7th).

Wish me luck!

© 2025 CloudHarbinger
