Directoryless User Administration in AWS/IAM, Terraform and CI/CD

I just completed some work on a little project with some unique requirements. It’s a project that uses Terraform to provision infrastructure within AWS. That’s not too terribly hard. We’re trying to make the platform, infrastructure and code as reusable as possible while maintaining customer-specific privacy and security requirements.

The requirements and curve balls were unique enough to make this project a little challenging:

  • Create and manage IAM users inside an AWS account.
  • Provision IAM roles inside subaccounts within the organization (or inside the main account if your use case is not as complex as this).
  • Provision sts-assume-role permissions on those roles based on group membership from an identity provider.

Sounds simple, right? Well, let’s add in the curve balls:

  • You cannot set sts-assume-role policies based on IAM group membership (this is an AWS limitation). You can do this with SAML and/or some kind of federated access, but in this case that was not available to us. We had to provide some way to do this without an IdP and only manage users inside IAM. If you’re using pure IAM, you can only provision users to assume roles on a user-by-user basis. Ick.
  • Do not hard-code the usernames or group membership inside the Terraform.
  • Make it work with a CI/CD deployment – this means you can’t use a local workstation tfvars file to define the users.
  • Treat the usernames and group memberships as sensitive information – which means they must be encrypted.

Setting up CI/CD to work with your Terraform deployment is outside the scope of this article. I’m only focusing on the little bits of code that I used to make this work. Let’s just assume that code that is pushed to your master branch is deployed to production within AWS.

How did we pull this off? AWS Systems Manager (SSM) Parameter Store to the rescue. Parameter Store lets you store simple key/value pairs. You can store a String, StringList, or SecureString. A SecureString requires the use of a KMS key. So you’ll have to create a KMS key manually or through your Terraform code.
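If you go the Terraform route for the key, a minimal sketch might look like the following. The description, alias name, and deletion window are my own placeholders, not values from the project:

```hcl
# Hypothetical KMS key used to encrypt SecureString parameters.
resource "aws_kms_key" "parameter_store" {
  description             = "Key for SSM Parameter Store SecureStrings"
  deletion_window_in_days = 30
}

# A friendly alias so the key is easy to reference from the CLI.
resource "aws_kms_alias" "parameter_store" {
  name          = "alias/parameter-store"
  target_key_id = "${aws_kms_key.parameter_store.key_id}"
}
```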

After the KMS key is created, set up your parameters. In my use case, I set up four parameters. The first parameter is a SecureString. It’s just a comma-delimited list of usernames you wish to create. Terraform can automagically decrypt the parameter store object through code, provided the user executing the code has access to use the key to decrypt the parameter store object. You can use the web console or AWS CLI to create this parameter store object and its value. You don’t want to create the parameter store object in your Terraform code, since one of the requirements was to NOT hard-code the user names or use tfvars.
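Creating that first parameter from the AWS CLI might look something like this. The key alias and usernames here are placeholders; only the parameter name (`iam-user-list`) matches what the Terraform reads:

```shell
# Store the comma-delimited user list as an encrypted SecureString.
aws ssm put-parameter \
  --name "iam-user-list" \
  --type "SecureString" \
  --key-id "alias/parameter-store" \
  --value "alice,bob,carol"
```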

The Terraform code to read the value looks like this:

data "aws_ssm_parameter" "iam_user_list" {
  name = "iam-user-list"
}

All this does is set up a method inside your Terraform to look at the parameter store object and read the value by calling: ${data.aws_ssm_parameter.iam_user_list.value} elsewhere in your code. Terraform will go out to AWS, find the parameter you supplied in the “name” property and read it into memory. Now it’s available for use in other places.

Remember though, we supplied the values as a comma-delimited list. This is important because that’s where things get tricky.

First we have to create the users identified in that parameter store object. The best way to accomplish this is to use a local to split the comma-delimited value into a usable list, then loop through that list and create the users.

locals {
  user_list = ["${split(",", data.aws_ssm_parameter.iam_user_list.value)}"]
}

resource "aws_iam_user" "iam_users" {
  count = "${length(local.user_list)}"
  name  = "${local.user_list[count.index]}"
}

Now if you run your Terraform code, you’ll end up with new IAM users named from the list you provided in the parameter store object. Better yet, if you add or remove names from that string, Terraform will adjust automatically the next time you run the code manually or CI/CD executes it. Congratulations, you have basic user management! It may be even more useful to write a Lambda function that runs this routine every so often, but we didn’t do it that way for this particular use case.

This doesn’t set up the users with access keys, passwords or MFA devices. Sorry, that’s harder to do. For now I just handle that in the web console or CLI.
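That said, Terraform does have resources for console passwords and access keys if you want to go further. A sketch, assuming you have a Keybase identity (the `keybase:some_username` value is a placeholder) to encrypt the generated secrets with:

```hcl
# Hypothetical: console password and access key for the first user in the list.
resource "aws_iam_user_login_profile" "example" {
  user    = "${aws_iam_user.iam_users.0.name}"
  pgp_key = "keybase:some_username"
}

resource "aws_iam_access_key" "example" {
  user    = "${aws_iam_user.iam_users.0.name}"
  pgp_key = "keybase:some_username"
}
```

The encrypted password and secret key come back as attributes on these resources; you’d still need to distribute them out-of-band.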

Next, let’s handle the really tricky part. Again, there’s no way to set up a group and use group membership to decide who should get an assume role permission. But that’s ok. We can handle this in a similar fashion. Build parameters that are similar to the iam_user_list parameter. Put a comma-delimited list of the users that should belong to the “group” in this parameter. Make sure the IAM users actually exist before you go further, because Terraform will get mad at you if you try to set up sts-assume-role policies for users that do not exist.

Just like before, set up a data object that reads your new parameter.

data "aws_ssm_parameter" "admin_iam_role_list" {
  name = "admin-iam-role-list"
}

This will expose the contents of that parameter to your Terraform template as: ${data.aws_ssm_parameter.admin_iam_role_list.value}. Apply the same locals trick as above and iterate through your list to build out a list of user ARNs that should be set in the assume-role permissions.

locals {
  admin_iam_role_list = ["${split(",", data.aws_ssm_parameter.admin_iam_role_list.value)}"]
}

resource "aws_iam_role" "admin_role" {
  name = "${var.admin_role_name}"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": ${jsonencode(formatlist("arn:aws:iam::%s:user/%s", var.aws_account_id, local.admin_iam_role_list))}
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

The next time your Terraform template runs, it will iterate through the comma-delimited list of users in your parameter store and add them to the sts-assume-role policy on your role. We’re actually using this in AWS subaccounts (using provider aliases) so that we can centrally manage IAM users in one AWS account while provisioning roles in other AWS accounts and managing the use of those roles like group membership.
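The cross-account piece leans on Terraform provider aliases. A sketch of how that can be wired up, with the subaccount ID, region, and role name as placeholders:

```hcl
# Hypothetical provider alias that assumes a deployment role in a subaccount.
provider "aws" {
  alias  = "subaccount"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/TerraformDeploy"
  }
}

# Roles created against this provider land in the subaccount, while the
# IAM users (and the user ARNs in the policy) stay in the main account.
resource "aws_iam_role" "subaccount_admin_role" {
  provider = "aws.subaccount"
  name     = "${var.admin_role_name}"

  # Same assume-role policy document as shown above.
  assume_role_policy = "${var.admin_assume_role_policy}"
}
```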

There you have it: directoryless, basic IAM user and role management in Terraform, with no additional infrastructure and a slightly more secure way of handling sensitive values. Best of all, your CI/CD pipeline provisions the same information your developers do when they deploy the infrastructure.

Amazon API Gateway is now in GovCloud

I just got a note that Amazon API Gateway is now available in AWS GovCloud. This makes things more interesting for GovCloud for sure, but it’s just a minor stepping stone. Remember, just because it’s in GovCloud doesn’t mean it’s FedRAMP’d (even though it probably is).

Microsoft Paint is dead

Microsoft will be killing off “Microsoft Paint” in the next release of Windows 10 (the so-called “Fall Creator’s Update”).

This article on the Verge points out the various things that are being shed. Microsoft Paint seems to be the most significant user-facing thing, but I can imagine some enterprises will have difficulty with other changes.

Publish from Github to S3

If you’ve visited this site recently, you’ll discover that I’m really sick of WordPress. I’m trying to get WordPress out of my life completely. I’m sick of the security issues, the overhead, the ridiculousness, the databases… all of it. I’m just sick of it. I wanted to go back to something more static and more simple. WordPress is all well and good and easy to use, but it also suffers from some really nasty performance and security issues. I know there are ways around the performance issues, but I shouldn’t have to deal with that. If I wanted to deal with that for a simple website, then maybe I would stand up something a little more WordPress-esque for multiple authors. But this is my own personal site and there’s just no reason to do it that way.

AWS GovCloud and CloudFormation

Be careful when you’re working with CloudFormation in the AWS GovCloud region. Almost every code snippet available on the Internet refers to the public regions of AWS. If you’re creating resources in GovCloud with CloudFormation templates, there are subtle differences.

For instance, referring to an S3 bucket in a code snippet is:

"Resource": { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "myExampleBucket" }, "/*" ]]},

But if your bucket is in GovCloud, your arn is different:

"Resource": { "Fn::Join" : ["", ["arn:aws-us-gov:s3:::", { "Ref" : "myExampleBucket" }, "/*" ]]},

Subtle things like that can make CloudFormation development a real hoot. Be careful.
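One way to avoid hard-coding the partition entirely is the AWS::Partition pseudo parameter, which resolves to aws, aws-us-gov, or aws-cn depending on where the stack runs. Assuming your CloudFormation tooling supports it, the same snippet becomes portable:

```json
"Resource": { "Fn::Join" : ["", ["arn:", { "Ref" : "AWS::Partition" }, ":s3:::", { "Ref" : "myExampleBucket" }, "/*" ]]},
```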

AWS Solutions Architect Pro Test

Yesterday I sat for the most difficult IT certification test I’ve ever attempted – the AWS Solutions Architect Professional test.

I passed it… by the skin of my teeth.

I’ve essentially studied for this test for two years or more. I took the Solutions Architect Associate test two years ago and I’ve been involved with AWS projects ever since. Actually I was involved in AWS projects since before that test.

I also attended an advanced Solutions Architect Professional bootcamp and another training class provided to AWS partners that I can’t remember right now.

None of them prepared me for the absolute difficulty of this exam.

I’m glad it’s over. But I’m already looking forward to 2018, when I have to take it again to keep the certification current. Sigh.

Unpopular Opinion Post: Microsoft Azure is toast (as a public service)

I really think Microsoft Azure is screwed.

It’ll still be around to power Microsoft’s backend services, but as a public offering to compete against AWS… it’s toast.

Also… OneDrive… seriously, wtf?

AWS Solutions Architect Associate Level

Today (kind of on a lark) I drove to Chattanooga, TN to take the AWS solutions architect associate level test. I passed by the skin of my teeth.

Decided to come to Hooter’s to chill out and have lunch before the drive home.


See you cats at AWS re:invent in November.

Why Windows is Broken: Part 1

The two characters from the ads who personify a PC (left, John Hodgman) and a Mac (Justin Long). (Photo credit: Wikipedia)

One of my dear friends on G+ saw my earlier blog post whining about what’s wrong with Windows 8. He challenged me to dive deeper into the complaints. I decided that would be a good blogging mini-series, even though I’m trying to steer this blog clear of purely technical crap. That’s a long-winded way of saying, “Challenge accepted.” Besides, I’m just idling while Adobe Creative Cloud soaks up my hard drive space.

Let me first put this blog mini-series into context. If you look over this blog you’ll see that in 2006 I made a marked, deliberate switch to the Mac platform. I was sitting in Microsoft building 26, I believe it is (or 25)… wherever the test lab is located… in Redmond. We were running a massive amount of tests on our proposed Exchange system design. This Exchange system had kept all of us up for many, many nights at a time. That’s when Steve Jobs announced the switch to Intel. Since I had to support Mac, Windows and Linux platforms at NASA, I wanted to get one of these machines immediately. I fell in love with it.

During the course of my love affair with the Mac I discovered that there are a great many ills with Windows that are bizarre and ridiculous shortcomings. I know a lot of people have issues with the Windows 8 GUI. I’m one of those people. I didn’t like the Office 2010 ribbon and I still despise it. I generally am not in favor of software that rearranges menus based on what it thinks you want or need to do. Ironically, the people who claim that’s a good feature often complain about Apple’s control over the platform, but that’s another debate.

If this makes you take my feedback with a grain of salt, that’s fine, I understand. Ultimately, we’re all trying to get the same things done. We all work together on this shared collective called the Internet and it’s up to us to choose… individually… how our sausage is made. Every day I find myself booting Windows to do something because I feel like doing it there. Perhaps I feel like booting Linux to do something because it’s more fun to execute it there. Whatever. Let’s take all of that out of the mix and figure out why Windows has not been my platform of choice for full-time production since 2006.

When Chris challenged me to this I decided I would take each one of these topics one by one and work through them, stream of consciousness style. Some of this will come off as ranting or even rage. You’ve been warned. I’m not going to go in order. The first one I’ll discuss is:

The concept of application installations (and all the garbage that comes with it – DLL’s and the like) is completely broken.

When you first use a Mac and understand how applications work and why they’re portable, this annoyance becomes more of a glaring misstep. An application that you install in Windows becomes a permanent extension of your operating system. The only exception to this rule is lightweight applications like Adobe AIR or Java applets.

Don’t believe me? Consider how many hours you’ve spent trying to exorcise a piece of software from your Windows OS since Windows 95. When you install a piece of software using an installer like Windows Installer or Installshield, most or all of the following happens:

  • Binary executables are written to the hard drive.
  • Slices of code are written to the system registry to tell Windows where these binaries are located and what to do with them.
  • Dynamic link libraries (*.DLL) files are written to the hard drive. In most cases these are DLL’s that are provided by Microsoft themselves, written to your hard drive to support the application you installed.
  • Bits are scattered into your Windows folder in some cases.
  • Shell extensions are installed in various places to support plugging into Windows Explorer or other applications (also written to the registry).
  • Bits of files are written into the system to inform Windows what files were written into the system and where they are located so that if you ever hit the uninstall button, Windows can theoretically remove them.

Several of these concepts are completely broken and fully responsible for why your Windows installation gets worse and worse over time. When I was a full-time Windows user, I had to reinstall the operating system at least once a year to return it back to a useful state. OS X and Linux have proven to me that the entire concept of that is just ridiculous. An operating system should not get slower as you use it. Why the hell would you want to use it if that’s going to happen?

Let’s dissect each of these bullet points and dive a little deeper into why they’re bad.

Binary executables are written to the hard drive.

Well, okie. That’s not too bad. That’s why you’re installing something.

Slices of code are written to the system registry to tell Windows where these binaries are located and what to do with them.

The registry is a single-file database that exists on every Windows installation. It’s a single. File. Database. Microsoft has long had this bizarre fascination with databases as The Answer To All Performance Problems. They will cram all of your Outlook data into a single file database. Not only will they do that… let’s one-up that a little. They’ll cram thousands of users’ worth of data into a single-file database when you use Exchange. That’s right – it doesn’t matter how much money you spend on hardware to make Exchange perform better. If that single file becomes corrupt, thousands of users lose all of their data and you either get to restore it or pray they have an offline copy of it.

But I digress. We’re talking about the registry. It’s a single-file database. Sure, there are bits of the registry that make up the user side of it and it’s stored in your profile, but the fact is the system registry is a single-file database. This means every Windows installation on the planet has a single point of failure. Consider the fact that it’s a database. This means that it suffers from regular database problems. If it gets corrupted, it’s toast. If you add data to it, that data is a permanent relic of the database. Even if you try to delete the data you end up with white space where the data once stood. You can try to compact the white space if you like, but the fact is those little bits of code are permanent. If you pull out those pieces of data from a rogue install there is a good chance it will harm some other portion of the registry or your system. This is why viruses are such a pain to destroy in Windows and once you get one, you really should just nuke the system from orbit and start over with a fresh installation.

The amount of time and money lost to the care and feeding of the registry is insane. The whole concept of this thing is broken and should not exist. I didn’t even realize what a problem this thing is until I used a Mac where… lo and behold… there is no registry to corrupt.

It’s been nice.

If you uninstall an application you will find that bits and pieces of code are left in the registry. The uninstaller does not remove these entries in the registry because it could either damage the system’s ability to handle something else or because the uninstaller is so awful it just forgot to remove it. Even if it does remove it, you get the white space issue I mentioned. So yeah, whatever you installed is a permanent relic of your Windows installation until you reformat. I find that ridiculous.

Dynamic link libraries (*.DLL) files are written to the hard drive. In most cases these are DLL’s that are provided by Microsoft themselves, written to your hard drive to support the application you installed.

These are runtime libraries that are tested and declared compatible with your application. Over time, you will have multiple copies of these DLL files in multiple places. If you install later versions of applications or other applications that use these DLL’s, they could be overwritten or trashed. Most of the time you just end up with multiple copies of multiple versions of multiple DLL’s. The end result is a troubleshooting nightmare when things go wrong. If you spend more than 4 hours fixing a Windows problem, most of the time you should just reformat and reinstall. Ridiculous.

Bits are scattered into your Windows folder in some cases.

This is something that is bound to happen with almost any operating system, I admit. In some cases you’ll install a kext (kernel extension) or whatnot on the Mac and it’ll write something out to operating system’s base folder. I get that. I generally do not like that. I like my operating system to be as read-only as possible. Windows does a lot to keep the user out of the c:\windows folder. Why can’t it do the same to keep applications out of there, if it’s so bad to do?

Shell extensions are installed in various places to support plugging into Windows Explorer or other applications (also written to the registry).

This is kind of the same as writing to the c:\windows folder, but sometimes these shell extensions are written to c:\Program Files or the (x86) version if you have 64-bit Windows. (Three places to store programs? Really?) They’re really, really hard to get rid of if you need to pull them out. (See the registry complaints).

Bits of files are written into the system to inform Windows what files were written into the system and where they are located so that if you ever hit the uninstall button, Windows can theoretically remove them.

My problem with this is that no uninstaller… ever… in the entire history of Windows installers… has ever properly removed all files left behind by a program installation. Seriously. If you go out and uninstall a program, take a few minutes to look over the registry and hard drive folders (Program Files, etc.). That program is still all over your hard drive. Again, Adobe AIR and other lightweight applications are the exception here.

Once you install a program, it’s a permanent part of your OS. Period. The only way to get rid of it is to reformat.

I never really understood that there was a better way to do this until I got on a Mac. Mac handles directories with a .app extension as an application. All of the files and resources required to run that particular app are bundled inside that directory. There are obvious advantages to this:

  • Drag an app to the trash. The directory is deleted and the app is permanently removed from your system. All traces of it are gone.
  • Drag an app to an email message. OS X will zip up the directory and attach the zip file.
  • Applications are sandboxed into their own directory and user space.

Linux is a similar story, but sometimes you end up with binaries in /usr/bin or /usr/local/bin and it can be just as hard to extract.

There you have it. I dissected one of the reasons Windows is broken. All of you people complaining about the GUI in Windows 8… cut Microsoft some slack. They’re trying some new GUI. GUI can be changed and people can get used to it. The real problems with Windows aren’t being addressed. It’s the same problems that have been around since the early 90’s and I don’t see Microsoft getting rid of the way they do things. They’re locked into one type of software engineering and it’s not going to stop.

That’ll keep me away.


Introducing Windows Red: A serious plan to fix Windows 8 | Microsoft windows – InfoWorld


This is an interesting article, but it still addresses the overall cosmetic flaws with Windows. Windows has had much more fundamental flaws since the days of Windows 3.11 that still go unfixed. I’ll list a few of them here. These flaws prevent me from ever using Windows again as my main personal operating system and if you knew better, you’d feel the same way.

  1. File system events do not work properly.
  2. The concept of application installations (and all the garbage that comes with it – DLL’s and the like) is completely broken.
  3. The concept of drivers, both signed and unsigned, is broken.
  4. Windows does a really shitty job at supporting standards (calendaring, email, instant messaging… I’m looking at you).
  5. NTFS needs to advance beyond its current incarnation. (To be fair, HFS+ does too).
  6. Windows is far, far too fat. It used to be bloated before. Now it’s just ridiculous.

Now let’s look at some of the more recent destruction with Windows that really takes this to a whole new level. Here I’ll rope in some other items that really irritate me.

  1. Office 2013 is really, really broken. The file formats issue was a real problem in Office 2010. Now it’s just a disaster. The amount of people calling me because documents do not open in different versions of Office is absolutely ridiculous.
  2. The Office server ecosystem is in a state that requires your entire enterprise to move to the same versions/releases of Office and servers all at once to support full functionality. I have yet to meet a single enterprise (Microsoft’s own included!) that can support this type of migration and arrangement. This isn’t realistic and I don’t understand why Microsoft continues to shrug it off.
  3. The Microsoft account ecosystem is a disaster. Have you tried telling Office 2013 that you have an Office 365 account AND a Microsoft account? Good luck with that.
  4. On the server side, it’s way… WAY too hard and time consuming to actually implement functionality. The amount of knobs and switches you have to tweak to make something work on EVERY server is insane.

I continue to be baffled as to why anyone puts up with this… or spends money on it.