Directoryless User Administration in AWS/IAM, Terraform and CI/CD

I just wrapped up some work on a little project with unique requirements. It uses Terraform to provision infrastructure within AWS, which isn't terribly hard by itself. The trick was making the platform, infrastructure and code as reusable as possible while still meeting customer-specific privacy and security requirements.

The requirements and curve balls were unique enough to make this project a little challenging:

  • Create and manage IAM users inside an AWS account.
  • Provision IAM roles inside subaccounts within the organization (or inside the main account if your use case is not as complex as this).
  • Provision sts:AssumeRole permissions on those roles based on group membership from an identity provider.

Sounds simple, right? Well, let’s add in the curve balls:

  • You cannot set sts:AssumeRole trust policies based on IAM group membership (this is an AWS limitation). You can do it with SAML or some other kind of federated access, but in this case that was not available to us. We had to provide some way to do this without an IdP, managing users only inside IAM. With pure IAM, you can only grant users permission to assume roles on a user-by-user basis. Ick.
  • Do not hard-code the usernames or group membership inside the Terraform.
  • Make it work with a CI/CD deployment – this means you can’t use a local workstation tfvars file to define the users.
  • Treat the usernames and group memberships as sensitive information – which means they must be encrypted.

Setting up CI/CD to work with your Terraform deployment is outside the scope of this article. I’m only focusing on the little bits of code that I used to make this work. Let’s just assume that code that is pushed to your master branch is deployed to production within AWS.

How did we pull this off? AWS' EC2 Systems Manager Parameter Store to the rescue. Parameter Store simply stores key/value pairs, typed as a String, StringList, or SecureString. A SecureString requires the use of a KMS key, so you'll have to create a KMS key manually or through your Terraform code.
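If you let Terraform own the key, a minimal sketch looks something like this (the alias name here is my own placeholder):

resource "aws_kms_key" "parameter_store" {
  description             = "Encrypts SecureString objects in Parameter Store"
  deletion_window_in_days = 30
}

resource "aws_kms_alias" "parameter_store" {
  # Hypothetical alias name; use whatever convention you like.
  name          = "alias/parameter-store"
  target_key_id = "${aws_kms_key.parameter_store.key_id}"
}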

After the KMS key is created, set up your parameters. In my use case, I set up four parameters. The first parameter is a SecureString: just a comma-delimited list of the usernames you wish to create. Terraform can automagically decrypt the parameter store object through code, provided the user executing the code is allowed to use the key to decrypt it. Create this parameter store object and its value with the web console or the AWS CLI, something like aws ssm put-parameter --name iam-user-list --type SecureString --key-id alias/parameter-store --value "alice,bob" (the usernames and key alias here are placeholders). You don't want to create the parameter store object in your Terraform code, since one of the requirements was to NOT hard-code the usernames or use tfvars.

The Terraform code to read the value looks like this:

data "aws_ssm_parameter" "iam_user_list" {
  name = "iam-user-list"
}

All this does is give your Terraform a handle on the parameter store object; you read the value by referencing ${data.aws_ssm_parameter.iam_user_list.value} elsewhere in your code. Terraform will go out to AWS, find the parameter you supplied in the "name" property and read it into memory. Now it's available for use in other places.
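As a quick illustration (purely for sanity-checking; you probably don't want to expose this value as an output for real), you can confirm the lookup works with a sensitive output:

output "iam_user_list" {
  # "sensitive" masks the value in the CLI output.
  sensitive = true
  value     = "${data.aws_ssm_parameter.iam_user_list.value}"
}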

Remember though, we supplied the values as a comma-delimited list. This is important because that’s where things get tricky.

First we have to create the users identified in that parameter store object. The best way to accomplish this is to use a local to split the comma-delimited value into a usable list, then loop through that list and create the users.

locals {
  # Split the comma-delimited SecureString into a proper Terraform list.
  user_list = ["${split(",", data.aws_ssm_parameter.iam_user_list.value)}"]
}

# Create one IAM user per name in the list.
resource "aws_iam_user" "iam_users" {
  count = "${length(local.user_list)}"
  name  = "${local.user_list[count.index]}"
}

Now if you run your Terraform code, you'll end up with new IAM users named from the list in the parameter you provided. Better yet, if you add or remove names in that string, Terraform will automatically adjust the next time you run the code manually or CI/CD executes it. Congratulations, you have basic user management! It might be even more useful to have a Lambda function run this routine every so often, but we didn't do it that way for this particular use case.

This doesn't set up the users with access keys, passwords or MFA devices. Sorry, that's harder to do. For now I just handle that in the web console or CLI.
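That said, if you want Terraform to at least create console passwords, the provider does have an aws_iam_user_login_profile resource. Here's a rough sketch under that assumption; the Keybase identity is a placeholder, and the PGP key is required so the generated password lands in state encrypted rather than in plaintext:

resource "aws_iam_user_login_profile" "iam_users" {
  count = "${length(local.user_list)}"
  user  = "${element(aws_iam_user.iam_users.*.name, count.index)}"

  # Hypothetical Keybase identity used to encrypt the generated password.
  pgp_key = "keybase:example_user"
}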

Next, let's handle the really tricky part. Again, there's no way to set up a group and use group membership to decide who should get assume-role permission. But that's OK; we can handle this in a similar fashion. Build parameters similar to the iam-user-list parameter, each holding a comma-delimited list of the users that should belong to that "group." Make sure the IAM users actually exist before you go further, because Terraform will get mad at you if you try to put users that do not exist into an sts:AssumeRole policy.

Just like before, set up a data object that reads your new parameter.

data "aws_ssm_parameter" "admin_iam_role_list" {
  name = "admin-iam-role-list"
}

This will expose the contents of that parameter to your Terraform template as ${data.aws_ssm_parameter.admin_iam_role_list.value}. Apply the same locals trick as above and iterate through the list to build out the user ARNs that should be set in the assume-role permissions.

locals {
  admin_iam_role_list = ["${split(",", data.aws_ssm_parameter.admin_iam_role_list.value)}"]
}

resource "aws_iam_role" "admin_role" {
  name = "${var.admin_role_name}"

  # formatlist() builds one user ARN per name in the list, and
  # jsonencode() drops them into the trust policy as a JSON array.
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": ${jsonencode(formatlist("arn:aws:iam::%s:user/%s", var.aws_account_id, local.admin_iam_role_list))}
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

The next time your Terraform template runs, it will iterate through the comma-delimited list of users in your parameter store and add them to the sts:AssumeRole policy in your role. We're actually using this in AWS subaccounts (using provider aliases) so that we can centrally manage IAM users in one AWS account while provisioning roles in other AWS accounts, managing the use of those roles like group membership.
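A minimal sketch of that provider-alias arrangement, assuming your pipeline can assume a provisioning role in the subaccount (the account ID and role name below are placeholders):

# Default provider: the account where the IAM users live.
provider "aws" {
  region = "us-east-1"
}

# Aliased provider: a subaccount, reached by assuming a role there.
provider "aws" {
  alias  = "tenant"
  region = "us-east-1"

  assume_role {
    # Hypothetical provisioning role in the subaccount.
    role_arn = "arn:aws:iam::111111111111:role/terraform-provisioner"
  }
}

Add provider = "aws.tenant" to the aws_iam_role resource above and the role gets created in the subaccount, while the users stay put in the main account.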

There you have it: directoryless, basic IAM user and role management in Terraform, with no additional infrastructure and a slightly more secure way of handling the sensitive bits. Best of all, your CI/CD pipeline provisions from exactly the same information as the developers who deploy the infrastructure.

How to fix an Elastic Beanstalk/RDS breakup

Did you create a multi-tier Elastic Beanstalk deployment? Did you tie it to CodePipeline to deploy out of Github? Has it been working well until just recently?

…did you accidentally leave RDS attached to your worker tier?

This post is for you.

I built an Elastic Beanstalk deployment for a customer with exactly those characteristics. It worked great for about a year, until suddenly… the developer of the application reported that he was no longer able to deploy his code changes. Every deployment failed and rolled back to the last known good state, which included older versions of his code. This was bad news for everyone because we had a Monday-morning deadline to demo code changes to a new customer.

Sunday morning offered me a chance to sit and focus on this. I'd been trying to understand the problem for a few days, and after some focus and coffee I was finally able to get to the bottom of it.

First, let's cover what was actually happening. When the developer pushed his code updates through CodePipeline, Elastic Beanstalk worked through its "magic" (cough) to update the config to its "known good state" (which was wrong) and failed to apply the changes because of CloudFormation problems. That triggered a rollback of CloudFormation, CodePipeline, and the Elastic Beanstalk config changes. Hence the failure.

How did it all get out of whack?

Several mistakes were committed, most of them on my part; some of them are just problems with Elastic Beanstalk itself. Here's the no-no list:

  1. Don't let Elastic Beanstalk manage your RDS instance. Remove all references to RDS in all tiers before you build your RDS instance. Even AWS tells you not to do this. I missed the reference in the worker tier.
  2. If you proceed with RDS tied to your EB environment, do NOT use the RDS console to make changes to the RDS instance. EB won't know about the changes and will get really angry when they don't match. In our case, we did some performance testing and changed the instance size from db.t2.micro to db.m4.large and the storage from 20 GB to 100 GB, and we made those changes in the RDS console rather than the EB console. Don't do that.
  3. There is one setting you should change in the RDS console: turn off automatic minor version upgrades. In our case, RDS was upgrading the minor version of the database and, once again, EB got angry. Worse yet, you can't change the minor version in EB's console; it's locked. That's EB's fault. But whatever.

Those three items led to a huge bag of fail whenever our developer pushed changes. Elastic Beanstalk would initiate changes, but see that RDS’ configuration was out of whack from its understanding. It would fail and roll everything back.

But wait – there’s more!

Elastic Beanstalk was also using some very old CloudFormation to make changes to the RDS instance. It was still using DBSecurityGroups, which apparently is illegal now… at least in our case. We were running PostgreSQL at minor version 9.6.6, and it looks like the RDS team has moved on from DBSecurityGroups and now enforces the use of VPC security groups. Therefore, any change to RDS would completely fail with the error:

Updating RDS database named: <instance> failed Reason: DB Security Groups can no longer be associated with this DB Instance. Use VPC Security Groups instead.

Ouch.

How do you fix all of this mess?

Let’s go over how Elastic Beanstalk actually works. I’ll be describing some of the simple concepts that are covered in documentation on the AWS site. Bookmark it and keep it handy.

First things first. You need to understand that Elastic Beanstalk is really driven by a simple YAML file. This YAML file is specific to the "environment," which is a child of the "application" in Elastic Beanstalk. This always confuses me, because I think of an environment as a place to put an application, but Elastic Beanstalk is backwards from how I think. AWS has a pretty good document on how to look at this YAML file and see what's going on.

In this case, I was able to save the configuration as described in the AWS document. I then visited the S3 bucket and saw a few things that were making my life difficult. The document also left a clue about how EB drives changes to the RDS instance via CloudFormation, which I knew was happening. If you're using Elastic Beanstalk, take a few minutes to go look at your CloudFormation console. You'll see a template in there, one for each EB "environment" you have deployed. The top of your EB environment dashboard displays an "environment ID" in a very small font; that ID corresponds to the CloudFormation template ID in the CloudFormation console. You can see the nitty-gritty of what EB is trying to do in there.

But Elastic Beanstalk was coughing up some invalid CloudFormation. How do I know? That security group error was actually coming out of CloudFormation; I could see the error event in there, and CloudFormation is the service that actually triggers the rollback. CloudFormation and RDS are enforcing the change away from DBSecurityGroups to VPC security groups, but when Elastic Beanstalk creates the CloudFormation template to initiate the change, it still uses DBSecurityGroups.

In one troubleshooting session I manually fixed the CloudFormation JSON that Elastic Beanstalk was spitting out, making the security group changes the way CloudFormation and RDS expect, and pushed it through by hand. It worked. However, if I initiated a change through Elastic Beanstalk, or the developer pushed a code update, it would generate the invalid CloudFormation and fail once again.

Let me take a quick break to lay out what's happening here. When you make a change in Elastic Beanstalk, my new understanding is that this happens:

  1. The Elastic Beanstalk console writes a new YAML config file to S3.
  2. Elastic Beanstalk parses the config file and decides what changes should be made.
  3. Elastic Beanstalk generates a CloudFormation JSON template and saves it to S3.
  4. Elastic Beanstalk pokes CloudFormation and asks it to update.
  5. CloudFormation updates; if a failure is encountered, it rolls back and tells Elastic Beanstalk that everything is hosed.
  6. Elastic Beanstalk rolls back the deployed code to a known good state.

Now I understand the root cause here. RDS made a change to enforce the security group update. Elastic Beanstalk can’t seem to figure that out.

Here’s how to resolve this.

Look at the AWS documentation on Elastic Beanstalk's configuration referenced above. Follow their steps to save the configuration file from the console. Then get your favorite code editor out, download the file and manipulate it by hand.

I changed the RDS properties to reflect reality. EB still thought it was PostgreSQL 9.6.2 on a db.t2.micro with 20 GB of storage, so I updated those properties to match what actually exists.

Then, I saw it. At the bottom of the file, there is a block of YAML that tells Elastic Beanstalk where to pick up the CloudFormation JSON and feed it parameters. The default value was:

Extensions:
  RDS.EBConsoleSnippet:
    Order: null
    SourceLocation: https://s3.amazonaws.com/elasticbeanstalk-env-resources-us-east-1/eb_snippets/rds/rds.json

Take a look at that URL. Go ahead. I’ll wait.

See it?

It’s the bad CloudFormation template.

How did I resolve this? Well, I downloaded that template and modified it in my code editor to change the DBSecurityGroup resources into VPC security group resources. I had to manually add the SecurityGroupIngress information too, but because I speak CloudFormation this wasn't too hard. It's cheating a little bit, but not a big deal.

I created a new S3 bucket and uploaded my new CloudFormation JSON template into it. Then I revisited the YAML config and changed the SourceLocation URL to point to my new private copy of the template (e.g. https://s3.amazonaws.com/my-eb-snippets/rds.json, where the bucket name is whatever you picked).

Go back to the Elastic Beanstalk console, "load" the configuration template and wham, it worked. Everything was fine.

Now I know how Elastic Beanstalk really works, and I figured out some super-advanced ways to bend it to my bidding.

I hope this helps you understand Elastic Beanstalk a little more – it certainly helped me. Now I know how to trick Elastic Beanstalk into working if it hoses up again.

Since it's working, turn off automatic minor version upgrades in RDS to keep this from happening again, then use your AWS support plan to tell them that Elastic Beanstalk has a bug with CloudFormation and RDS security groups 🙂

Happy cloud days.

 

US Federal Government Declares War on the Matrixed Contractor Employee

DFARS.

If you don’t know what that is, look it up. I’m not going to go into it in this article. I only want to discuss the ramifications of DFARS and how it’s being interpreted/implemented.

Every federal contractor company I've worked for has a "matrixed" business model. To save money, they employ you on a single federal contract but "leverage your expertise" on other federal contracts, so you end up working on multiple projects across multiple agencies. Because federal agencies refuse to get along and agree on standards, this means you get to go through multiple clearances and obtain multiple credentials (e.g. CAC or PIV cards and usernames/passwords).

This is a little disingenuous on the part of the contractor company. In my experience, they will tie you to a single contract and then matrix you to others, but if the funding lapses on the primary contract, they'll show you the door. Valuable employees are kept; others at a lower level (but still matrixed!) get laid off.

That’s another issue that is between you and your company.

Anyway… DFARS. The way companies and agencies are interpreting DFARS is the subject of this article. Basically, if you’re a matrixed employee, the end result is that you will end up with one laptop and one mobile device per project.

That’s right.

If you're matrixed across three different projects, you will end up with three laptops and three different mobile devices, and none of these devices will be allowed to communicate with the other agencies. Your company will likely issue a company-specific laptop and mobile device as well. In my case, this could mean four separate devices just to do my work.

That sounds reasonable, but it’s woefully ignorant of how a matrixed employee does business. Every agency expects the employee to be devoted to their contract, even if they are on record as having only a slice of time. The agency/customer expects that employee to be available at any time… not just during certain hours of the day.

The end result is that the matrixed employee is expected to manage multiple meeting requests across multiple devices without a single integrated view of meeting and work conflicts. This means the employee will miss meetings, emails and lort knows what else.

I predict this will be rolled back within a few years.

It’s untenable.

Me? I’m going to set “out of office” replies that notify senders that I only check my email and calendar during certain parts of the day. They’ll receive that autoreply every time they email me. Sure, I can set it to reply once a day.

I wouldn’t want to like… be annoying, or something.

Amazon API Gateway is now in GovCloud

I just got a note that Amazon API Gateway is now available in AWS GovCloud. This makes things more interesting for GovCloud for sure, but it’s just a minor stepping stone. Remember, just because it’s in GovCloud doesn’t mean it’s FedRAMP’d (even though it probably is).

Cloud Migrations… A Word of Advice

If you hired a cloud consultant that heartily recommends a “lift and shift” migration and they assure you that everything will be fine…

Fire them.

It won’t be fine.

Microsoft bought SwiftKey?

I must really be out of the loop. I had no idea Microsoft bought SwiftKey. Anyway, they are killing the Windows Phone keyboard for iOS and focusing exclusively on SwiftKey.

When Microsoft does things that make sense, I'm always surprised. When they do things that do not make sense (like beefing Skype for the iPhone), I am rarely surprised.

Microsoft's Windows Phone keyboard for the iPhone is dead – The Verge
https://apple.news/AzmC65qmoQ9iobO0hF4VDPA

iOS 11 Dock Tip and a Files App Shortcoming

In iOS 11, the Dock isn't just for single apps: you can drag an app group into the Dock as well. I just found this out by trying it. That makes the Dock much more useful for app switching when you swipe up from the bottom.

I'm really liking iOS 11 a lot. It sold me a new iPad: a 12.9" iPad Pro with 256 GB of storage. I just had to have it. I can almost turn it into my work machine, but the Files app let me down. I couldn't rename a file extension with the Files app, and I needed to, so that hosed me. I reported it, though 🙂

Microsoft Paint is dead

Microsoft will be killing off "Microsoft Paint" in the next release of Windows 10 (the so-called "Fall Creators Update").

This article on The Verge points out the various things being shed. Microsoft Paint seems to be the most significant user-facing casualty, but I can imagine some enterprises will have difficulty with the other changes.

Publish from Github to S3

If you've visited this site recently, you'll notice that I'm really sick of WordPress and trying to get it out of my life completely. I'm sick of the security issues, the overhead, the ridiculousness, the databases… all of it. I wanted to go back to something more static and simple. WordPress is all well and good and easy to use, but it suffers from some really nasty performance and security issues. I know there are ways around the performance problems, but I shouldn't have to deal with that for a simple website. If this site had multiple authors, maybe I'd stand up something more WordPress-esque, but it's my own personal site and there's just no reason to do it that way.

AWS GovCloud and CloudFormation

Be careful when you're working with CloudFormation in the AWS GovCloud region. Almost every code snippet available on the Internet refers to the public regions of AWS. If you're creating resources in GovCloud with CloudFormation templates, there are subtle differences.

For instance, referring to an S3 bucket in a code snippet is:

"Resource": { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "myExampleBucket" }, "/*" ]]},

But if your bucket is in GovCloud, your ARN is different:

"Resource": { "Fn::Join" : ["", ["arn:aws-us-gov:s3:::", { "Ref" : "myExampleBucket" }, "/*" ]]},

Subtle things like that can make CloudFormation development a real hoot. One way to sidestep this particular difference is the AWS::Partition pseudo parameter, which resolves to aws in the public regions and aws-us-gov in GovCloud. Be careful.