CDK Guide for LLM
Using local cdk
For the most part, this guide assumes you install the CDK Toolkit globally (npm install -g aws-cdk), and the provided command examples (such as cdk synth) follow this assumption. This approach makes it easy to keep the CDK Toolkit up to date, and since the CDK takes a strict approach to backward compatibility, there is generally little risk in always using the latest version.
Some teams prefer to specify all dependencies within each project, including tools like the CDK Toolkit. This practice lets you pin such components to specific versions and ensure that all developers on your team (and your CI/CD environment) use exactly those versions. This eliminates a possible source of change, helping to make builds and deployments more consistent and repeatable.
The CDK includes a dependency for the CDK Toolkit in the JavaScript project template's package.json, so if you want to use this approach, you don't need to make any changes to your project. All you need to do is use slightly different commands for building your app and for issuing cdk commands.
Operation               | Use global CDK Toolkit         | Use local CDK Toolkit
Initialize project      | cdk init --language javascript | npx aws-cdk init --language javascript
Run CDK Toolkit command | cdk ...                        | npm run cdk ... or npx aws-cdk ...
npx aws-cdk runs the version of the CDK Toolkit installed locally in the current project, if one exists, falling back to the global installation, if any. If no global installation exists, npx downloads a temporary copy of the CDK Toolkit and runs that. You may specify an arbitrary version of the CDK Toolkit using the @ syntax: npx aws-cdk@1.120.0 --version prints 1.120.0.
Tip
Set up an alias so you can use the cdk command with a local CDK Toolkit installation.
macOS/Linux:
alias cdk="npx aws-cdk"
Windows:
doskey cdk=npx aws-cdk $*
Managing AWS Construct Library modules
Use the Node Package Manager (npm) to install and update AWS Construct Library modules for use by your apps, as well as other packages you need. (You may use yarn instead of npm if you prefer.) npm also installs the dependencies for those modules automatically.
Most AWS CDK constructs are in the main CDK package, named aws-cdk-lib, which is a default dependency in new projects created by cdk init. "Experimental" AWS Construct Library modules, where higher-level constructs are still under development, are published separately with names like @aws-cdk/SERVICE-NAME-alpha. The service name has an aws- prefix. If you're unsure of a module's name, search for it on NPM.
Note
The CDK API Reference also shows the package names.
For example, the command below installs the experimental module for AWS CodeStar.
npm install @aws-cdk/aws-codestar-alpha
Some services' Construct Library support is in more than one namespace. For example, besides aws-route53, there are three additional Amazon Route 53 namespaces: aws-route53-targets, aws-route53-patterns, and aws-route53resolver.
Your project's dependencies are maintained in package.json. You can edit this file to lock some or all of your dependencies to a specific version or to allow them to be updated to newer versions under certain criteria. To update your project's NPM dependencies to the latest permitted version according to the rules you specified in package.json:
npm update
In JavaScript, you import modules into your code under the same name you use to install them using NPM. We recommend the following practices when importing AWS CDK classes and AWS Construct Library modules in your applications. Following these guidelines will help make your code consistent with other AWS CDK applications as well as easier to understand.
Use require(), not ES6-style import directives. Older versions of Node.js do not support ES6 imports, so using the older syntax is more widely compatible. (If you really want to use ES6 imports, use esm to ensure your project is compatible with all supported versions of Node.js.)
Generally, import individual classes from aws-cdk-lib.
const { App, Stack } = require('aws-cdk-lib');
If you need many classes from aws-cdk-lib, you may use a namespace alias of cdk instead of importing the individual classes. Avoid doing both.
const cdk = require('aws-cdk-lib');
Generally, import AWS Construct Libraries using short namespace aliases.
const s3 = require('aws-cdk-lib/aws-s3');
Managing dependencies in JavaScript
In JavaScript CDK projects, dependencies are specified in the package.json file in the project's main directory. The core AWS CDK modules are in a single NPM package called aws-cdk-lib.
When you install a package using npm install, NPM records the package in package.json for you.
If you prefer, you may use Yarn in place of NPM. However, the CDK does not support Yarn's plug-and-play mode, which is the default mode in Yarn 2. Add the following to your project's .yarnrc.yml file to turn off this feature.
nodeLinker: node-modules
CDK applications
The following is an example package.json file generated by the cdk init --language typescript command. The file generated for JavaScript is similar, only without the TypeScript-related entries.
{
  "name": "my-package",
  "version": "0.1.0",
  "bin": {
    "my-package": "bin/my-package.js"
  },
  "scripts": {
    "build": "tsc",
    "watch": "tsc -w",
    "test": "jest",
    "cdk": "cdk"
  },
  "devDependencies": {
    "@types/jest": "^26.0.10",
    "@types/node": "10.17.27",
    "jest": "^26.4.2",
    "ts-jest": "^26.2.0",
    "aws-cdk": "2.16.0",
    "ts-node": "^9.0.0",
    "typescript": "~3.9.7"
  },
  "dependencies": {
    "aws-cdk-lib": "2.16.0",
    "constructs": "^10.0.0",
    "source-map-support": "^0.5.16"
  }
}
For deployable CDK apps, aws-cdk-lib must be specified in the dependencies section of package.json. You can use a caret (^) version number specifier to indicate that you will accept later versions than the one specified, as long as they are within the same major version.
For experimental constructs, specify exact versions for the alpha construct library modules, which have APIs that may change. Do not use ^ or ~, since later versions of these modules may bring API changes that can break your app.
Specify versions of libraries and tools needed to test your app (for example, the jest testing framework) in the devDependencies section of package.json. Optionally, use ^ to specify that later compatible versions are acceptable.
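To make the caret rule concrete, here is a minimal sketch of what a ^ range permits for versions at or above 1.0.0: the same major version, at or beyond the stated minor/patch. (caretSatisfies is a hypothetical helper for illustration, not part of NPM or the CDK; real resolvers handle pre-release tags and more.)

```javascript
// Illustrates caret (^) range semantics: same major version,
// and at least the specified minor/patch.
function caretSatisfies(base, candidate) {
  const [bMaj, bMin, bPat] = base.split('.').map(Number);
  const [cMaj, cMin, cPat] = candidate.split('.').map(Number);
  if (cMaj !== bMaj) return false;        // major version must match
  if (cMin !== bMin) return cMin > bMin;  // a later minor version is fine
  return cPat >= bPat;                    // same minor: need an equal or later patch
}

console.log(caretSatisfies('2.14.0', '2.16.0')); // true: newer minor, same major
console.log(caretSatisfies('2.14.0', '3.0.0'));  // false: different major version
```

This is why "aws-cdk-lib": "^2.14.0" in the example above would accept 2.16.0 but never a 3.x release.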
Third-party construct libraries
If you're developing a construct library, specify its dependencies using a combination of the peerDependencies and devDependencies sections, as shown in the following example package.json file.
{
  "name": "my-package",
  "version": "0.0.1",
  "peerDependencies": {
    "aws-cdk-lib": "^2.14.0",
    "@aws-cdk/aws-appsync-alpha": "2.10.0-alpha",
    "constructs": "^10.0.0"
  },
  "devDependencies": {
    "aws-cdk-lib": "2.14.0",
    "@aws-cdk/aws-appsync-alpha": "2.10.0-alpha",
    "constructs": "10.0.0",
    "jsii": "^1.50.0",
    "aws-cdk": "^2.14.0"
  }
}
In peerDependencies, use a caret (^) to specify the lowest version of aws-cdk-lib that your library works with. This maximizes the compatibility of your library with a range of CDK versions. Specify exact versions for alpha construct library modules, which have APIs that may change. Using peerDependencies makes sure that there is only one copy of all CDK libraries in the node_modules tree.
In devDependencies, specify the tools and libraries you need for testing, optionally with ^ to indicate that later compatible versions are acceptable. Specify exactly (without ^ or ~) the lowest versions of aws-cdk-lib and other CDK packages that you advertise your library to be compatible with. This practice makes sure that your tests run against those versions. This way, if you inadvertently use a feature found only in newer versions, your tests can catch it.
Warning
peerDependencies are installed automatically only by NPM 7 and later. If you are using NPM 6 or earlier, or if you are using Yarn, you must include the dependencies of your dependencies in devDependencies. Otherwise, they won't be installed, and you will receive a warning about unresolved peer dependencies.
Installing and updating dependencies
Run the following command to install your project's dependencies.
NPM
# Install the latest version of everything that matches the ranges in 'package.json'
npm install
# Install the same exact dependency versions as recorded in 'package-lock.json'
npm ci
Yarn
# Install the latest version of everything that matches the ranges in 'package.json'
yarn upgrade
# Install the same exact dependency versions as recorded in 'yarn.lock'
yarn install --frozen-lockfile
To update the installed modules, the preceding npm install and yarn upgrade commands can be used. Either command updates the packages in node_modules to the latest versions that satisfy the rules in package.json. However, they do not update package.json itself, which you might want to do to set a new minimum version. If you host your package on GitHub, you can configure Dependabot version updates to automatically update package.json. Alternatively, use npm-check-updates.
Important
By design, when you install or update dependencies, NPM and Yarn choose the latest version of every package that satisfies the requirements specified in package.json. There is always a risk that these versions may be broken (either accidentally or intentionally). Test thoroughly after updating your project's dependencies.
AWS CDK idioms in JavaScript
Props
All AWS Construct Library classes are instantiated using three arguments: the scope in which the construct is being defined (its parent in the construct tree), an id, and props, a bundle of key/value pairs that the construct uses to configure the AWS resources it creates. Other classes and methods also use the "bundle of attributes" pattern for arguments.
Using an IDE or editor that has good JavaScript autocomplete will help avoid misspelling property names. If a construct expects an encryptionKeys property and you spell it encryptionkeys when instantiating the construct, you haven't passed the value you intended. This causes an error at synthesis time if the property is required, or causes the property to be silently ignored if it is optional. In the latter case, you may get a default behavior you intended to override. Take special care here.
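A plain-JavaScript sketch of that failure mode (makeBucket and its versioned prop are hypothetical stand-ins for a construct, not CDK APIs): the misspelled optional property is silently dropped, and the default wins.

```javascript
// Hypothetical factory standing in for a construct with an optional prop.
function makeBucket(props = {}) {
  return { versioned: props.versioned ?? false }; // default: not versioned
}

const bucket = makeBucket({ versioend: true }); // note the typo in the key
console.log(bucket.versioned); // false: the intended override was silently ignored
```

No error is raised anywhere; only the resulting resource configuration reveals the mistake.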
When subclassing an AWS Construct Library class (or overriding a method that takes a props-like argument), you may want to accept additional properties for your own use. These values will be ignored by the parent class or overridden method, because they are never accessed in that code, so you can generally pass on all the props you received.
A future release of the AWS CDK could coincidentally add a new property with a name you used for your own property. Passing the value you receive up the inheritance chain can then cause unexpected behavior. It's safer to pass a shallow copy of the props you received with your property removed or set to undefined. For example:
super(scope, name, {...props, encryptionKeys: undefined});
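As a minimal sketch with plain classes standing in for constructs (ParentConstruct and the encryptionKeys property are illustrative assumptions), stripping your own property before calling super keeps a future parent property of the same name from picking up your value:

```javascript
class ParentConstruct {
  constructor(scope, id, props = {}) {
    // Imagine a future CDK release starts reading props.encryptionKeys here.
    this.parentSaw = props.encryptionKeys;
  }
}

class MyConstruct extends ParentConstruct {
  constructor(scope, id, props = {}) {
    // Pass everything up except the property we claimed for ourselves.
    super(scope, id, { ...props, encryptionKeys: undefined });
    this.encryptionKeys = props.encryptionKeys; // kept for our own use
  }
}

const c = new MyConstruct(null, 'MyId', { encryptionKeys: ['key1'] });
console.log(c.parentSaw);      // undefined: the parent never sees our value
console.log(c.encryptionKeys); // [ 'key1' ]
```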
Alternatively, name your properties so that it is clear that they belong to your construct. This way, it is unlikely they will collide with properties in future AWS CDK releases. If there are many of them, use a single appropriately named object to hold them.
Missing values
Missing values in an object (such as props) have the value undefined in JavaScript. The usual techniques apply for dealing with these. For example, a common idiom for accessing a property of a value that may be undefined is as follows:
// a may be undefined, but if it is not, it may have an attribute b
// c is undefined if a is undefined, OR if a doesn't have an attribute b
let c = a && a.b;
However, if a could have some other "falsy" value besides undefined, it is better to make the test more explicit. Here, we'll take advantage of the fact that null and undefined are equal to each other under == to test for them both at once:
let c = a == null ? undefined : a.b;
Tip
Node.js 14.0 and later support the optional chaining (?.) and nullish coalescing (??) operators, which can simplify the handling of undefined values.
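The earlier idioms can be rewritten with these operators (a short sketch, runnable on Node.js 14 and later):

```javascript
let a = undefined;

let c1 = a?.b;           // optional chaining: undefined, no TypeError
let d = a ?? 'default';  // nullish coalescing: fallback for null/undefined only

a = { b: 42 };
let c2 = a?.b;           // 42

let e = 0 ?? 'default';  // 0: falsy but not nullish, so no fallback
```

Unlike the a && a.b idiom, ?. and ?? treat falsy-but-present values (0, '', false) as real values.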
Using TypeScript examples with JavaScript
TypeScript is the language we use to develop the AWS CDK, and it was the first language supported for developing applications, so many available AWS CDK code examples are written in TypeScript. These code examples can be a good resource for JavaScript developers; you just need to remove the TypeScript-specific parts of the code.
TypeScript snippets often use the newer ECMAScript import and export keywords to import objects from other modules and to declare the objects to be made available outside the current module. Node.js has just begun supporting these keywords in its latest releases. Depending on the version of Node.js you're using (or wish to support), you might rewrite imports and exports to use the older syntax.
Imports can be replaced with calls to the require() function.
TypeScript
import * as cdk from 'aws-cdk-lib';
import { Bucket, BucketPolicy } from 'aws-cdk-lib/aws-s3';
JavaScript
const cdk = require('aws-cdk-lib');
const { Bucket, BucketPolicy } = require('aws-cdk-lib/aws-s3');
Exports can be assigned to the module.exports object.
TypeScript
export class Stack1 extends cdk.Stack {
  // ...
}
export class Stack2 extends cdk.Stack {
  // ...
}
JavaScript
class Stack1 extends cdk.Stack {
  // ...
}
class Stack2 extends cdk.Stack {
  // ...
}
module.exports = { Stack1, Stack2 };
Note
An alternative to using the old-style imports and exports is to use the esm module.
Once you've got the imports and exports sorted, you can dig into the actual code. You may run into these commonly used TypeScript features:
Type annotations
Interface definitions
Type conversions/casts
Access modifiers
Type annotations may be provided for variables, class members, function parameters, and function return types. For variables, parameters, and members, types are specified by following the identifier with a colon and the type. Function return values follow the function signature and consist of a colon and the type.
To convert type-annotated code to JavaScript, remove the colon and the type. Class members must have some value in JavaScript; set them to undefined if they only have a type annotation in TypeScript.
TypeScript
var encrypted: boolean = true;
class myStack extends cdk.Stack {
  bucket: s3.Bucket;
  // ...
}
function makeEnv(account: string, region: string) : object {
  // ...
}
JavaScript
var encrypted = true;
class myStack extends cdk.Stack {
  bucket = undefined;
  // ...
}
function makeEnv(account, region) {
  // ...
}
In TypeScript, interfaces are used to give bundles of required and optional properties, and their types, a name. You can then use the interface name as a type annotation. TypeScript will make sure that the object you use as, for example, an argument to a function has the required properties of the right types.
interface myFuncProps {
  code: lambda.Code,
  handler?: string
}
JavaScript does not have an interface feature, so once you've removed the type annotations, delete the interface declarations entirely.
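If you still want the contract that an interface like myFuncProps expressed, one substitute in JavaScript is a manual runtime check. This is only a sketch: makeFunc and the 'index.handler' default are illustrative assumptions, not CDK behavior.

```javascript
// Runtime stand-in for the myFuncProps interface above:
// code is required; handler is optional with a hypothetical default.
function makeFunc(props) {
  if (props.code === undefined) {
    throw new Error('code is a required property');
  }
  return { code: props.code, handler: props.handler ?? 'index.handler' };
}

console.log(makeFunc({ code: 'bundle.zip' }).handler); // index.handler
```

TypeScript would catch the missing property at compile time; this check only catches it when the code runs.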
When a function or method returns a general-purpose type (such as object), but you want to treat that value as a more specific child type to access properties or methods that are not part of the more general type's interface, TypeScript lets you cast the value using as followed by a type or interface name. JavaScript doesn't support (or need) this, so simply remove as and the following identifier. A less common cast syntax is to use a type name in angle brackets, <LikeThis>; these casts, too, must be removed.
Finally, TypeScript supports the access modifiers public, protected, and private for members of classes. All class members in JavaScript are public. Simply remove these modifiers wherever you see them.
Knowing how to identify and remove these TypeScript features goes a long way toward adapting short TypeScript snippets to JavaScript. But it may be impractical to convert longer TypeScript examples in this fashion, since they are more likely to use other TypeScript features. For these situations, we recommend Sucrase. Sucrase won't complain if code uses an undefined variable, for example, as tsc would. If it is syntactically valid, then with few exceptions, Sucrase can translate it to JavaScript. This makes it particularly valuable for converting snippets that may not be runnable on their own.
--
Best practices for developing and deploying cloud infrastructure with the AWS CDK
With the AWS CDK, developers or administrators can define their cloud infrastructure by using a supported programming language. CDK applications should be organized into logical units, such as API, database, and monitoring resources, and optionally have a pipeline for automated deployments. The logical units should be implemented as constructs including the following:
Infrastructure (such as Amazon S3 buckets, Amazon RDS databases, or an Amazon VPC network)
Runtime code (such as AWS Lambda functions)
Configuration code
Stacks define the deployment model of these logical units. For a more detailed introduction to the concepts behind the CDK, see Getting started with the AWS CDK.
The AWS CDK reflects careful consideration of the needs of our customers and internal teams and of the failure patterns that often arise during the deployment and ongoing maintenance of complex cloud applications. We discovered that failures are often related to "out-of-band" changes to an application that aren't fully tested, such as configuration changes. Therefore, we developed the AWS CDK around a model in which your entire application is defined in code, not only business logic but also infrastructure and configuration. That way, proposed changes can be carefully reviewed, comprehensively tested in environments resembling production to varying degrees, and fully rolled back if something goes wrong.
[Figure: software development lifecycle icons representing infrastructure, application, source code, configuration, and deployment.]
At deployment time, the AWS CDK synthesizes a cloud assembly that contains the following:
AWS CloudFormation templates that describe your infrastructure in all target environments
File assets that contain your runtime code and their supporting files
With the CDK, every commit in your application's main version control branch can represent a complete, consistent, deployable version of your application. Your application can then be deployed automatically whenever a change is made.
The philosophy behind the AWS CDK leads to our recommended best practices, which we have divided into four broad categories.
Organization best practices
Coding best practices
Construct best practices
Application best practices
Tip
Also consider best practices for AWS CloudFormation and the individual AWS services that you use, where applicable to CDK-defined infrastructure.
Organization best practices
In the beginning stages of AWS CDK adoption, it's important to consider how to set up your organization for success. It's a best practice to have a team of experts responsible for training and guiding the rest of the company as they adopt the CDK. The size of this team might vary, from one or two people at a small company to a full-fledged Cloud Center of Excellence (CCoE) at a larger company. This team is responsible for setting standards and policies for cloud infrastructure at your company, and also for training and mentoring developers.
The CCoE might provide guidance on what programming languages should be used for cloud infrastructure. Details will vary from one organization to the next, but a good policy helps make sure that developers can understand and maintain the company's cloud infrastructure.
The CCoE also creates a "landing zone" that defines your organizational units within AWS. A landing zone is a pre-configured, secure, scalable, multi-account AWS environment based on best practice blueprints. To tie together the services that make up your landing zone, you can use AWS Control Tower, which configures and manages your entire multi-account system from a single user interface.
Development teams should be able to use their own accounts for testing and deploy new resources in these accounts as needed. Individual developers can treat these resources as extensions of their own development workstation. Using CDK Pipelines, the AWS CDK applications can then be deployed via a CI/CD account to testing, integration, and production environments (each isolated in its own AWS Region or account). This is done by merging the developers' code into your organization's canonical repository.
[Figure: deployment process from developer accounts to multiple target accounts via a CI/CD pipeline.]
Coding best practices
This section presents best practices for organizing your AWS CDK code. The following diagram shows the relationship between a team and that team's code repositories, packages, applications, and construct libraries.
[Figure: a team's code organization: repository, package, CDK app or construct library.]
Start simple and add complexity only when you need it
The guiding principle for most of our best practices is to keep things as simple as possible, but no simpler. Add complexity only when your requirements dictate a more complicated solution. With the AWS CDK, you can refactor your code as necessary to support new requirements. You don't have to architect for all possible scenarios upfront.
Align with the AWS Well-Architected Framework
The AWS Well-Architected Framework defines a component as the code, configuration, and AWS resources that together deliver against a requirement. A component is often the unit of technical ownership, and is decoupled from other components. The term workload is used to identify a set of components that together deliver business value. A workload is usually the level of detail that business and technology leaders communicate about.
An AWS CDK application maps to a component as defined by the AWS Well-Architected Framework. AWS CDK apps are a mechanism to codify and deliver Well-Architected cloud application best practices. You can also create and share components as reusable code libraries through artifact repositories, such as AWS CodeArtifact.
Every application starts with a single package in a single repository
A single package is the entry point of your AWS CDK app. Here, you define how and where to deploy the different logical units of your application. You also define the CI/CD pipeline to deploy the application. The app's constructs define the logical units of your solution.
Use additional packages for constructs that you use in more than one application. (Shared constructs should also have their own lifecycle and testing strategy.) Dependencies between packages in the same repository are managed by your repo's build tooling.
Although it's possible, we don't recommend putting multiple applications in the same repository, especially when using automated deployment pipelines. Doing this increases the "blast radius" of changes during deployment. When there are multiple applications in a repository, changes to one application trigger deployment of the others (even if the others haven't changed). Furthermore, a break in one application prevents the other applications from being deployed.
Move code into repositories based on code lifecycle or team ownership
When packages begin to be used in multiple applications, move them to their own repository. This way, the packages can be referenced by the application build systems that use them, and they can also be updated on cadences independent of the application lifecycles. However, at first it might make sense to put all shared constructs in one repository.
Also, move packages to their own repository when different teams are working on them. This helps enforce access control.
To consume packages across repository boundaries, you need a private package repository (similar to NPM, PyPI, or Maven Central, but internal to your organization). You also need a release process that builds, tests, and publishes the package to the private package repository. CodeArtifact can host packages for most popular programming languages.
Dependencies on packages in the package repository are managed by your language's package manager, such as NPM for TypeScript or JavaScript applications. Your package manager helps to make sure that builds are repeatable. It does this by recording the specific versions of every package that your application depends on. It also lets you upgrade those dependencies in a controlled manner.
Shared packages need a different testing strategy. For a single application, it might be good enough to deploy the application to a testing environment and confirm that it still works. But shared packages must be tested independently of the consuming application, as if they were being released to the public. (Your organization might choose to actually release some shared packages to the public.)
Keep in mind that a construct can be arbitrarily simple or complex. A Bucket is a construct, but CameraShopWebsite could be a construct, too.
Infrastructure and runtime code live in the same package
In addition to generating AWS CloudFormation templates for deploying infrastructure, the AWS CDK also bundles runtime assets like Lambda functions and Docker images and deploys them alongside your infrastructure. This makes it possible to combine the code that defines your infrastructure and the code that implements your runtime logic into a single construct. It's a best practice to do this. These two kinds of code don't need to live in separate repositories or even in separate packages.
To evolve the two kinds of code together, you can use a self-contained construct that completely describes a piece of functionality, including its infrastructure and logic. With a self-contained construct, you can test the two kinds of code in isolation, share and reuse the code across projects, and version all the code in sync.
Construct best practices
This section contains best practices for developing constructs. Constructs are reusable, composable modules that encapsulate resources. They're the building blocks of AWS CDK apps.
Model with constructs, deploy with stacks
Stacks are the unit of deployment: everything in a stack is deployed together. So when building your application's higher-level logical units from multiple AWS resources, represent each logical unit as a Construct, not as a Stack. Use stacks only to describe how your constructs should be composed and connected for your various deployment scenarios.
For example, if one of your logical units is a website, the constructs that make it up (such as an Amazon S3 bucket, API Gateway, Lambda functions, or Amazon RDS tables) should be composed into a single high-level construct. Then that construct should be instantiated in one or more stacks for deployment.
By using constructs for building and stacks for deploying, you improve the reuse potential of your infrastructure and give yourself more flexibility in how it's deployed.
Configure with properties and methods, not environment variables
Environment variable lookups inside constructs and stacks are a common anti-pattern. Both constructs and stacks should accept a properties object to allow for full configurability completely in code. Doing otherwise introduces a dependency on the machine that the code will run on, which creates yet more configuration information that you have to track and manage.
In general, environment variable lookups should be limited to the top level of an AWS CDK app. They should also be used to pass in information that's needed for running in a development environment. For more information, see Environments for the AWS CDK.
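A minimal sketch of the pattern (MyService and its stage prop are hypothetical stand-ins for a construct, not CDK APIs): the environment is read once at the top level of the app, and everything below it receives plain properties.

```javascript
// Construct-like class: fully configured through props,
// no process.env lookups inside.
class MyService {
  constructor(props) {
    this.stage = props.stage;
  }
}

// The environment lookup happens only here, at the top of the app.
const stage = process.env.STAGE ?? 'dev';
const service = new MyService({ stage });
```

Because MyService never touches the environment itself, it behaves identically on any machine and can be unit tested by passing different props.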
Unit test your infrastructure
To consistently run a full suite of unit tests at build time in all environments, avoid network lookups during synthesis and model all your production stages in code. (These best practices are covered later.) If any single commit always results in the same generated template, you can trust the unit tests that you write to confirm that the generated templates look the way you expect. For more information, see Test AWS CDK applications.
Don't change the logical ID of stateful resources
Changing the logical ID of a resource results in the resource being replaced with a new one at the next deployment. For stateful resources like databases and S3 buckets, or persistent infrastructure like an Amazon VPC, this is seldom what you want. Be careful about any refactoring of your AWS CDK code that could cause the ID to change. Write unit tests that assert that the logical IDs of your stateful resources remain static. The logical ID is derived from the id you specify when you instantiate the construct and the construct's position in the construct tree. For more information, see Logical IDs.
Constructs aren't enough for compliance
Many enterprise customers write their own wrappers for L2 constructs (the "curated" constructs that represent individual AWS resources with built-in sane defaults and best practices). These wrappers enforce security best practices such as static encryption and specific IAM policies. For example, you might create a MyCompanyBucket that you then use in your applications in place of the usual Amazon S3 Bucket construct. This pattern is useful for surfacing security guidance early in the software development lifecycle, but don't rely on it as the sole means of enforcement.
Instead, use AWS features such as service control policies and permission boundaries to enforce your security guardrails at the organization level. Use Aspects in the AWS CDK, or tools like CloudFormation Guard, to make assertions about the security properties of infrastructure elements before deployment. Use the AWS CDK for what it does best.
Finally, keep in mind that writing your own "L2+" constructs might prevent your developers from taking advantage of AWS CDK packages such as AWS Solutions Constructs or third-party constructs from Construct Hub. These packages are typically built on standard AWS CDK constructs and won't be able to use your wrapper constructs.
Application best practices
In this section we discuss how to write your AWS CDK applications, combining constructs to define how your AWS resources are connected.
Make decisions at synthesis time
Although AWS CloudFormation lets you make decisions at deployment time (using Conditions, Fn::If, and Parameters), and the AWS CDK gives you some access to these mechanisms, we recommend against using them. The types of values that you can use and the types of operations you can perform on them are limited compared to what's available in a general-purpose programming language.
Instead, try to make all decisions, such as which construct to instantiate, in your AWS CDK application by using your programming language's if statements and other features. For example, a common CDK idiom, iterating over a list and instantiating a construct with values from each item in the list, simply isn't possible using AWS CloudFormation expressions.
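The list-iteration idiom looks like this in plain JavaScript (Queue here is a hypothetical stand-in class, not the real CDK construct, so the sketch runs without aws-cdk-lib):

```javascript
// Stand-in for a construct: records the id it was instantiated with.
class Queue {
  constructor(scope, id) {
    this.id = id;
  }
}

const stages = ['dev', 'test', 'prod'];
// One construct per list item: trivial in a programming language,
// not expressible with raw CloudFormation intrinsics.
const queues = stages.map((name) => new Queue(null, `Queue-${name}`));

console.log(queues.map((q) => q.id)); // [ 'Queue-dev', 'Queue-test', 'Queue-prod' ]
```

In a real app the loop body would instantiate actual constructs, and the decision of what to create would still be ordinary JavaScript evaluated at synthesis time.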
Treat AWS CloudFormation as an implementation detail that the AWS CDK uses for robust cloud deployments, not as a language target. You're not writing AWS CloudFormation templates in TypeScript or Python, you're writing CDK code that happens to use CloudFormation for deployment. | |
Use generated resource names, not physical names | |
Names are a precious resource. Each name can only be used once. Therefore, if you hardcode a table name or bucket name into your infrastructure and application, you can't deploy that piece of infrastructure twice in the same account. (The name we're talking about here is the name specified by, for example, the bucketName property on an Amazon S3 bucket construct.) | |
What's worse, you can't make changes to the resource that require it to be replaced. If a property can only be set at resource creation, such as the KeySchema of an Amazon DynamoDB table, changing it requires a new resource. The new resource must have the same name to be a true replacement, but it can't take that name while the existing resource is still using it.
A better approach is to specify as few names as possible. If you omit resource names, the AWS CDK will generate them for you in a way that won't cause problems. Suppose your stack defines a table. You can then pass the generated table name (available in your AWS CDK application as table.tableName) as an environment variable into your AWS Lambda function. Alternatively, you can generate a configuration file on your Amazon EC2 instance on startup, or write the actual table name to the AWS Systems Manager Parameter Store so your application can read it from there.
If the place you need it is another AWS CDK stack, that's even more straightforward. Supposing that one stack defines the resource and another stack needs to use it, the following applies: | |
If the two stacks are in the same AWS CDK app, pass a reference between the two stacks. For example, save a reference to the resource's construct as an attribute of the defining stack (this.stack.uploadBucket = amzn-s3-demo-bucket). Then, pass that attribute to the constructor of the stack that needs the resource. | |
When the two stacks are in different AWS CDK apps, use a static from method to use an externally defined resource based on its ARN, name, or other attributes. (For example, use Table.fromArn() for a DynamoDB table). Use the CfnOutput construct to print the ARN or other required value in the output of cdk deploy, or look in the AWS Management Console. Alternatively, the second app can read the CloudFormation template generated by the first app and retrieve that value from the Outputs section. | |
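The same-app case can be sketched with plain objects standing in for cdk.Stack subclasses. The class names, property names, and the generated-looking bucket name below are all hypothetical; the point is the shape of the pattern, where the defining stack exposes the construct as a property and the consuming stack receives it through its constructor props:

```javascript
// Stand-ins for cdk.Stack subclasses; in a real app these would extend
// Stack, and the bucket would come from new s3.Bucket(...).
class StorageStack {
  constructor() {
    // The CDK would generate this physical name; the value is illustrative.
    this.uploadBucket = { bucketName: "storagestack-uploadbucket-abc123" };
  }
}

class ProcessingStack {
  constructor(props) {
    // The consumer never hardcodes a name; it reads it off the reference,
    // e.g. to pass as a Lambda environment variable.
    this.bucketNameEnvVar = props.uploadBucket.bucketName;
  }
}

const storage = new StorageStack();
const processing = new ProcessingStack({ uploadBucket: storage.uploadBucket });
console.log(processing.bucketNameEnvVar);
```

In a real CDK app, passing the construct this way also lets the CDK wire up cross-stack CloudFormation exports for you automatically.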
Define removal policies and log retention | |
The AWS CDK attempts to keep you from losing data by defaulting to policies that retain everything you create. For example, the default removal policy on resources that contain data (such as Amazon S3 buckets and database tables) is not to delete the resource when it is removed from the stack. Instead, the resource is orphaned from the stack. Similarly, the CDK's default is to retain all logs forever. In production environments, these defaults can quickly result in the storage of large amounts of data that you don't actually need, and a corresponding AWS bill. | |
Consider carefully what you want these policies to be for each production resource and specify them accordingly. Use CDK Aspects to validate the removal and logging policies in your stack.
Separate your application into multiple stacks as dictated by deployment requirements | |
There is no hard and fast rule to how many stacks your application needs. You'll usually end up basing the decision on your deployment patterns. Keep in mind the following guidelines: | |
It's typically more straightforward to keep as many resources in the same stack as possible, so keep them together unless you know you want them separated. | |
Consider keeping stateful resources (like databases) in a separate stack from stateless resources. You can then turn on termination protection on the stateful stack. This way, you can freely destroy or create multiple copies of the stateless stack without risk of data loss. | |
Stateful resources are more sensitive to construct renaming—renaming leads to resource replacement. Therefore, don't nest stateful resources inside constructs that are likely to be moved around or renamed (unless the state can be rebuilt if lost, like a cache). This is another good reason to put stateful resources in their own stack. | |
Commit cdk.context.json to avoid non-deterministic behavior | |
Determinism is key to successful AWS CDK deployments. An AWS CDK app should have essentially the same result whenever it is deployed to a given environment. | |
Since your AWS CDK app is written in a general-purpose programming language, it can execute arbitrary code, use arbitrary libraries, and make arbitrary network calls. For example, you could use an AWS SDK to retrieve some information from your AWS account while synthesizing your app. Recognize that doing so will result in additional credential setup requirements, increased latency, and a chance, however small, of failure every time you run cdk synth. | |
Never modify your AWS account or resources during synthesis. Synthesizing an app should not have side effects. Changes to your infrastructure should happen only in the deployment phase, after the AWS CloudFormation template has been generated. This way, if there's a problem, AWS CloudFormation can automatically roll back the change. To make changes that can't be easily made within the AWS CDK framework, use custom resources to execute arbitrary code at deployment time. | |
Even strictly read-only calls are not necessarily safe. Consider what happens if the value returned by a network call changes. What part of your infrastructure will that impact? What will happen to already-deployed resources? Following are two example situations in which a sudden change in values might cause a problem. | |
If you provision an Amazon VPC to all available Availability Zones in a specified Region, and the number of AZs is two on deployment day, then your IP space gets split in half. If AWS launches a new Availability Zone the next day, the next deployment after that tries to split your IP space into thirds, requiring all subnets to be recreated. This probably won't be possible because your Amazon EC2 instances are still running, and you'll have to clean this up manually. | |
If you query for the latest Amazon Linux machine image and deploy an Amazon EC2 instance, and the next day a new image is released, a subsequent deployment picks up the new AMI and replaces all your instances. This might not be what you expected to happen. | |
These situations can be pernicious because the AWS-side change might occur after months or years of successful deployments. Suddenly your deployments are failing "for no reason" and you long ago forgot what you did and why. | |
Fortunately, the AWS CDK includes a mechanism called context providers to record a snapshot of non-deterministic values. This allows future synthesis operations to produce exactly the same template as they did when first deployed. The only changes in the new template are the changes that you made in your code. When you use a construct's .fromLookup() method, the result of the call is cached in cdk.context.json. You should commit this to version control along with the rest of your code to make sure that future executions of your CDK app use the same value. The CDK Toolkit includes commands to manage the context cache, so you can refresh specific entries when you need to. For more information, see Context values and the AWS CDK. | |
If you need some value (from AWS or elsewhere) for which there is no native CDK context provider, we recommend writing a separate script. The script should retrieve the value and write it to a file, then read that file in your CDK app. Run the script only when you want to refresh the stored value, not as part of your regular build process. | |
Let the AWS CDK manage roles and security groups | |
With the AWS CDK construct library's grant() convenience methods, you can create AWS Identity and Access Management roles that grant access to one resource by another using minimally scoped permissions. For example, consider a line like the following: | |
amzn-s3-demo-bucket.grantRead(myLambda) | |
This single line adds a policy to the Lambda function's role (which is also created for you). That role and its policies are more than a dozen lines of CloudFormation that you don't have to write. The AWS CDK grants only the minimal permissions required for the function to read from the bucket. | |
If you require developers to always use predefined roles that were created by a security team, AWS CDK coding becomes much more complicated. Your teams could lose a lot of flexibility in how they design their applications. A better alternative is to use service control policies and permission boundaries to make sure that developers stay within the guardrails. | |
Model all production stages in code | |
In traditional AWS CloudFormation scenarios, your goal is to produce a single artifact that is parameterized so that it can be deployed to various target environments after applying configuration values specific to those environments. In the CDK, you can, and should, build that configuration into your source code. Create a stack for your production environment, and create a separate stack for each of your other stages. Then, put the configuration values for each stack in the code. Use services like Secrets Manager and Systems Manager Parameter Store for sensitive values that you don't want to check in to source control, using the names or ARNs of those resources. | |
When you synthesize your application, the cloud assembly created in the cdk.out folder contains a separate template for each environment. Your entire build is deterministic. There are no out-of-band changes to your application, and any given commit always yields the exact same AWS CloudFormation template and accompanying assets. This makes unit testing much more reliable. | |
Measure everything | |
Achieving the goal of full continuous deployment, with no human intervention, requires a high level of automation. That automation is only possible with extensive amounts of monitoring. To measure all aspects of your deployed resources, create metrics, alarms, and dashboards. Don't stop at measuring things like CPU usage and disk space. Also record your business metrics, and use those measurements to automate deployment decisions like rollbacks. Most of the L2 constructs in AWS CDK have convenience methods to help you create metrics, such as the metricUserErrors() method on the dynamodb.Table class. | |
-- | |
AWS CDK security best practices | |
The AWS Cloud Development Kit (AWS CDK) is a powerful tool that developers can use to configure AWS services and provision infrastructure on AWS. With any tool that provides such control and capabilities, organizations will need to establish policies and practices to ensure that the tool is being used in safe and secure ways. For example, organizations may want to restrict developer access to specific services to ensure that they can’t tamper with compliance or cost control measures that are configured in the account. | |
Often, there can be a tension between security and productivity, and each organization needs to establish the proper balance for themselves. This topic provides security best practices for the AWS CDK that you can consider as you create and implement your own security policies. The following best practices are general guidelines and don’t represent a complete security solution. Because these best practices might not be appropriate or sufficient for your environment, treat them as helpful considerations rather than prescriptions. | |
Follow IAM security best practices | |
AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. Organizations, individuals, and the AWS CDK use IAM to manage permissions that determine the actions that can be performed on AWS resources. When using IAM, follow the IAM security best practices. For more information, see Security best practices and use cases in AWS Identity and Access Management in the IAM User Guide. | |
Manage permissions for the AWS CDK | |
When you use the AWS CDK across your organization to develop and manage your infrastructure, you’ll want to consider the following scenarios where managing permissions will be important: | |
Permissions for AWS CDK deployments – These permissions determine who can make changes to your AWS resources and what changes they can make. | |
Permissions between resources – These are the permissions that allow interactions between the AWS resources that you create and manage with the AWS CDK. | |
Manage permissions for AWS CDK deployments | |
Developers use the AWS CDK to define infrastructure locally on their development machines. This infrastructure is implemented in AWS environments through deployments that typically involve using the AWS CDK Command Line Interface (AWS CDK CLI). With deployments, you may want to control what changes developers can make in your environments. For example, you might have an Amazon Virtual Private Cloud (Amazon VPC) resource that you don’t want developers to modify. | |
By default, the CDK CLI uses a combination of the actor’s security credentials and IAM roles that are created during bootstrapping to receive permissions for deployments. The actor’s security credentials are first used for authentication and IAM roles are then assumed to perform various actions during deployment, such as using the AWS CloudFormation service to create resources. For more information on how CDK deployments work, including the IAM roles that are used, see Deploy AWS CDK applications. | |
To restrict who can perform deployments and the actions that can be performed during deployment, consider the following: | |
The actor’s security credentials are the first set of credentials used to authenticate to AWS. From here, the permissions used to perform actions during deployment are granted to the IAM roles that are assumed during the deployment workflow. You can restrict who can perform deployments by limiting who can assume these roles. You can also restrict the actions that can be performed during deployment by replacing these IAM roles with your own. | |
Permissions for performing deployments are given to the DeploymentActionRole. You can control permissions for who can perform deployments by limiting who can assume this role. By using a role for deployments, you can perform cross-account deployments since the role can be assumed by AWS identities in a different account. By default, all identities in the same AWS account with the appropriate AssumeRole policy statement can assume this role. | |
Permissions for creating and modifying resources through AWS CloudFormation are given to the CloudFormationExecutionRole. This role also requires permission to read from the bootstrap resources. You control the permissions that CDK deployments have by using a managed policy for the CloudFormationExecutionRole and optionally by configuring a permissions boundary. By default, this role has AdministratorAccess permissions with no permission boundary. | |
Permissions for interacting with bootstrap resources are given to the FilePublishingRole and ImagePublishingRole. The actor performing deployments must have permission to assume these roles. By default, all identities in the same AWS account with the appropriate AssumeRole policy statement can assume this role. | |
Permissions for accessing bootstrap resources to perform lookups are given to the LookupRole. The actor performing deployments must have permission to assume this role. By default, this role has readOnly access to the bootstrap resources. By default, all identities in the same AWS account with the appropriate AssumeRole policy statement can assume this role. | |
To configure the IAM identities in your AWS account with permission to assume these roles, add a policy with the following policy statement to the identities: | |
{ | |
"Version": "2012-10-17", | |
"Statement": [{ | |
"Sid": "AssumeCDKRoles", | |
"Effect": "Allow", | |
"Action": "sts:AssumeRole", | |
"Resource": "*", | |
"Condition": { | |
"StringEquals": { | |
"iam:ResourceTag/aws-cdk:bootstrap-role": [ | |
"image-publishing", | |
"file-publishing", | |
"deploy", | |
"lookup" | |
] | |
} | |
} | |
}] | |
} | |
Modify the permissions for the roles assumed during deployment | |
By modifying permissions for the roles assumed during deployment, you can manage the actions that can be performed during deployment. To modify permissions, you create your own IAM roles and specify them when bootstrapping your environment. When you customize bootstrapping, you will have to customize synthesis. For general instructions, see Customize AWS CDK bootstrapping. | |
Modify the security credentials and roles used during deployment | |
The roles and bootstrap resources that are used during deployments are determined by the CDK stack synthesizer that you use. To modify this behavior, you can customize synthesis. For more information, see Configure and customize CDK stack synthesis. | |
Considerations for granting least privilege access | |
Granting least privilege access is a security best practice that we recommend that you consider as you develop your security strategy. For more information, see SEC03-BP02 Grant least privilege access in the AWS Well-Architected Framework Guide. | |
Often, granting least privilege access involves restricting IAM policies to the minimum access necessary to perform a given task. Attempting to achieve this through fine-grained permissions for CDK deployments can break those deployments and force you to create wider-scoped permissions than you'd like. The following are a few things to consider when using this approach:
Determining an exhaustive list of permissions that allow developers to use the AWS CDK to provision infrastructure through CloudFormation is difficult and complex. | |
If you want to be fine-grained, permissions may become too long to fit within the maximum length of IAM policy documents. | |
Providing an incomplete set of permissions can severely impact developer productivity and deployments. | |
With the CDK, deployments are performed using CloudFormation. CloudFormation initiates a set of AWS API calls in order, using the permissions that are provided. The permissions necessary at any point in time depend on many factors:
The AWS services that are being modified. Specifically, the resources and properties that are being used and changed. | |
The current state of the CloudFormation stack. | |
Issues that may occur during deployments and if rollbacks are needed, which will require Delete permissions in addition to Create. | |
When the provided permissions are incomplete, manual intervention will be required. The following are a few examples: | |
If you discover incomplete permissions during roll forward, you’ll need to pause deployment, and take time to discuss and provision new permissions before continuing. | |
If a deployment rolls back and the permissions needed to apply the rollback are missing, your CloudFormation stack may be left in a state that requires significant manual work to recover from.
Since this approach can result in complications and severely limit developer productivity, we don’t recommend it. Instead, we recommend implementing guardrails and preventing bypass. | |
Implementing guardrails and preventing bypass | |
You can implement guardrails, compliance rules, auditing, and monitoring by using services such as AWS Control Tower, AWS Config, AWS CloudTrail, AWS Security Hub, and others. With this approach, you grant developers permission to do everything, except tampering with the existing validation mechanisms. Developers have the freedom to implement changes quickly, as long as they stay within policy. This is the approach we recommend when using the AWS CDK. For more information on guardrails, see Controls in the Management and Governance Cloud Environment Guide. | |
We also recommend using permissions boundaries or service control policies (SCPs) as a way of implementing guardrails. For more information on implementing permissions boundaries with the AWS CDK, see Create and apply permissions boundaries for the AWS CDK. | |
If you are using any compliance control mechanisms, set them up during the bootstrapping phase. Make sure that the CloudFormationExecutionRole or developer-accessible identities have policies or permissions boundaries attached that prevent bypass of the mechanisms that you put in place. The appropriate policies depend on the specific mechanisms that you use.
Manage permissions between resources provisioned by the AWS CDK | |
How you manage permissions between resources that are provisioned by the AWS CDK depends on whether you allow the CDK to create roles and policies. | |
When you use L2 constructs from the AWS Construct Library to define your infrastructure, you can use the provided grant methods to provision permissions between resources. With grant methods, you specify the type of access you want between resources and the AWS CDK provisions least privilege IAM roles to accomplish your intent. This approach meets security requirements for most organizations while being efficient for developers. For more information, see Define permissions for L2 constructs with the AWS CDK. | |
If you want to work around this feature by replacing the automatically generated roles with manually created ones, consider the following: | |
Your IAM roles will need to be manually created, slowing down application development. | |
When IAM roles need to be manually created and managed, people will often combine multiple logical roles into a single role to make them easier to manage. This runs counter to the least privilege principle. | |
Since these roles will need to be created before deployment, the resources that need to be referenced will not yet exist. Therefore, you’ll need to use wildcards, which runs counter to the least privilege principle. | |
A common workaround to using wildcards is to mandate that all resources be given a predictable name. However, this interferes with CloudFormation’s ability to replace resources when necessary and may slow down or block development. Because of this, we recommend that you allow CloudFormation to create unique resource names for you. | |
It will be impossible to perform continuous delivery since manual actions must be performed prior to every deployment. | |
When organizations want to prevent the CDK from creating roles, it is usually to prevent developers from being able to create IAM roles. The concern is that by giving developers permission to create IAM roles using the AWS CDK, they could possibly elevate their own privileges. To mitigate against this, we recommend using permission boundaries or service control policies (SCPs). With permission boundaries, you can set limits for what developers and the CDK are allowed to do. For more information on using permission boundaries with the CDK, see Create and apply permissions boundaries for the AWS CDK. | |
-- | |
Bootstrap your environment for use with the AWS CDK | |
Bootstrap your AWS environment to prepare it for AWS Cloud Development Kit (AWS CDK) stack deployments. | |
For an introduction to environments, see Environments for the AWS CDK. | |
For an introduction to bootstrapping, see AWS CDK bootstrapping. | |
How to bootstrap your environment | |
You can use the AWS CDK Command Line Interface (AWS CDK CLI) or your preferred AWS CloudFormation deployment tool to bootstrap your environment. | |
Use the CDK CLI | |
You can use the CDK CLI cdk bootstrap command to bootstrap your environment. This is the method that we recommend if you don't require significant modifications to bootstrapping. | |
Bootstrap from any working directory | |
To bootstrap from any working directory, provide the environment to bootstrap as a command line argument. The following is an example: | |
$ cdk bootstrap aws://123456789012/us-east-1 | |
Tip | |
If you don't have your AWS account number, you can get it from the AWS Management Console. You can also use the following AWS CLI command to display your default account information, including your account number: | |
$ aws sts get-caller-identity | |
If you have named profiles in your AWS config and credentials files, use the --profile option to retrieve account information for a specific profile. The following is an example: | |
$ aws sts get-caller-identity --profile prod | |
To display the default Region, use the aws configure get command: | |
$ aws configure get region | |
$ aws configure get region --profile prod | |
When providing an argument, the aws:// prefix is optional. The following is valid: | |
$ cdk bootstrap 123456789012/us-east-1 | |
To bootstrap multiple environments at the same time, provide multiple arguments: | |
$ cdk bootstrap aws://123456789012/us-east-1 aws://123456789012/us-east-2 | |
Bootstrap from the parent directory of a CDK project | |
You can run cdk bootstrap from the parent directory of a CDK project containing a cdk.json file. If you don’t provide an environment as an argument, the CDK CLI will obtain environment information from default sources, such as your config and credentials files or any environment information specified for your CDK stack. | |
When you bootstrap from the parent directory of a CDK project, environments provided from command line arguments take precedence over other sources. | |
To bootstrap an environment that is specified in your config and credentials files, use the --profile option: | |
$ cdk bootstrap --profile prod | |
For more information on the cdk bootstrap command and supported options, see cdk bootstrap. | |
Use any AWS CloudFormation tool | |
You can copy the bootstrap template from the aws-cdk GitHub repository or obtain the template with the cdk bootstrap --show-template command. Then, use any AWS CloudFormation tool to deploy the template into your environment. | |
With this method, you can use AWS CloudFormation StackSets or AWS Control Tower. You can also use the AWS CloudFormation console or the AWS Command Line Interface (AWS CLI). You can make modifications to your template before you deploy it. This method may be more flexible and suitable for large-scale deployments. | |
The following is an example of using the --show-template option to retrieve and save the bootstrap template to your local machine: | |
$ cdk bootstrap --show-template > bootstrap-template.yaml | |
To deploy this template using the CDK CLI, you can run the following: | |
$ cdk bootstrap --template bootstrap-template.yaml | |
The following is an example of using the AWS CLI to deploy the template: | |
aws cloudformation create-stack \ | |
--stack-name CDKToolkit \ | |
--template-body file://path/to/bootstrap-template.yaml \ | |
--capabilities CAPABILITY_NAMED_IAM \ | |
--region us-west-1 | |
For information on using CloudFormation StackSets to bootstrap multiple environments, see Bootstrapping multiple AWS accounts for AWS CDK using CloudFormation StackSets in the AWS Cloud Operations & Migrations Blog. | |
When to bootstrap your environment | |
You must bootstrap each AWS environment before you deploy into it. We recommend that you proactively bootstrap each environment that you plan to use, even before you deploy any CDK apps into it. By proactively bootstrapping your environments, you prevent potential future issues such as Amazon S3 bucket name conflicts or deploying CDK apps into environments that haven't been bootstrapped.
It’s okay to bootstrap an environment more than once. If an environment has already been bootstrapped, the bootstrap stack will be upgraded if necessary. Otherwise, nothing will happen. | |
If you attempt to deploy a CDK stack into an environment that hasn’t been bootstrapped, you will see an error like the following: | |
$ cdk deploy | |
✨ Synthesis time: 2.02s | |
❌ Deployment failed: Error: BootstrapExampleStack: SSM parameter /cdk-bootstrap/hnb659fds/version not found. Has the environment been bootstrapped? Please run 'cdk bootstrap' (see https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html) | |
Update your bootstrap stack | |
Periodically, the CDK team will update the bootstrap template to a new version. When this happens, we recommend that you update your bootstrap stack. If you haven’t customized the bootstrapping process, you can update your bootstrap stack by following the same steps that you took to originally bootstrap your environment. For more information, see Bootstrap template version history. | |
Default resources created during bootstrapping | |
IAM roles created during bootstrapping | |
By default, bootstrapping provisions the following AWS Identity and Access Management (IAM) roles in your environment: | |
CloudFormationExecutionRole | |
DeploymentActionRole | |
FilePublishingRole | |
ImagePublishingRole | |
LookupRole | |
CloudFormationExecutionRole | |
This IAM role is a CloudFormation service role that grants CloudFormation permission to perform stack deployments on your behalf. This role gives CloudFormation permission to perform AWS API calls in your account, including deploying stacks. | |
When you use a service role, the permissions provisioned for that role determine what actions can be performed on your CloudFormation resources. Without this service role, the security credentials that you provide to the CDK CLI would determine what CloudFormation is allowed to do.
DeploymentActionRole | |
This IAM role grants permission to perform deployments into your environment. It is assumed by the CDK CLI during deployments. | |
By using a role for deployments, you can perform cross-account deployments since the role can be assumed by AWS identities in a different account. | |
FilePublishingRole | |
This IAM role grants permission to perform actions against the bootstrapped Amazon Simple Storage Service (Amazon S3) bucket, including uploading and deleting assets. It is assumed by the CDK CLI during deployments. | |
ImagePublishingRole | |
This IAM role grants permission to perform actions against the bootstrapped Amazon Elastic Container Registry (Amazon ECR) repository. It is assumed by the CDK CLI during deployments. | |
LookupRole | |
This IAM role grants readOnly permission to look up context values from the AWS environment. It is assumed by the CDK CLI when performing tasks such as template synthesis and deployments. | |
Resource IDs created during bootstrapping | |
When you deploy the default bootstrap template, physical IDs for bootstrap resources are created using the following structure: cdk-qualifier-description-account-ID-Region. | |
Qualifier – A unique nine-character string; the default value is hnb659fds, and the actual value has no significance.
Description – A short description of the resource. For example, container-assets. | |
Account ID – The AWS account ID of the environment. | |
Region – The AWS Region of the environment. | |
The following is an example physical ID of the Amazon S3 staging bucket created during bootstrapping: cdk-hnb659fds-assets-012345678910-us-west-1. | |
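The naming scheme can be expressed as a small helper; the values below mirror the example in the text:

```javascript
// Builds a bootstrap resource physical ID following the documented
// cdk-qualifier-description-account-ID-Region structure.
function bootstrapPhysicalId(qualifier, description, accountId, region) {
  return `cdk-${qualifier}-${description}-${accountId}-${region}`;
}

console.log(bootstrapPhysicalId("hnb659fds", "assets", "012345678910", "us-west-1"));
// cdk-hnb659fds-assets-012345678910-us-west-1
```

Because account ID and Region are part of the name, the staging bucket name is globally unique per environment, which is why bootstrapping the same account and Region twice simply upgrades the existing stack.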
Permissions to use when bootstrapping your environment | |
When bootstrapping an AWS environment, the IAM identity performing the bootstrapping must have at least the following permissions: | |
{ | |
"Version": "2012-10-17", | |
"Statement": [ | |
{ | |
"Effect": "Allow", | |
"Action": [ | |
"cloudformation:*", | |
"ecr:*", | |
"ssm:*", | |
"s3:*", | |
"iam:*" | |
], | |
"Resource": "*" | |
} | |
] | |
} | |
Over time, the bootstrap stack, including the resources that are created and permissions they require, may change. With future changes, you may need to modify the permissions required to bootstrap an environment. | |
Customize bootstrapping | |
If the default bootstrap template doesn’t suit your needs, you can customize the bootstrapping of resources into your environment in the following ways: | |
Use command line options with the cdk bootstrap command – This method is best for making small, specific changes that are supported through command line options. | |
Modify the default bootstrap template and deploy it – This method is best for making complex changes or if you want complete control over the configuration of resources provisioned during bootstrapping. | |
For more information on customizing bootstrapping, see Customize AWS CDK bootstrapping. | |
Bootstrapping with CDK Pipelines | |
If you are using CDK Pipelines to deploy into another account's environment, you might receive a message like the following:
Policy contains a statement with one or more invalid principals | |
This error message means that the appropriate IAM roles do not exist in the other environment. The most likely cause is that the environment has not been bootstrapped. Bootstrap the environment and try again. | |
Protecting your bootstrap stack from deletion | |
If a bootstrap stack is deleted, the AWS resources that were originally provisioned in the environment to support CDK deployments will also be deleted. This will cause the pipeline to stop working. If this happens, there is no general solution for recovery. | |
After your environment is bootstrapped, do not delete and recreate the environment’s bootstrap stack. Instead, try to update the bootstrap stack to a new version by running the cdk bootstrap command again. | |
To protect against accidental deletion of your bootstrap stack, we recommend that you provide the --termination-protection option with the cdk bootstrap command to enable termination protection. You can enable termination protection on new or existing bootstrap stacks. For instructions on enabling termination protection, see Enable termination protection for the bootstrap stack. | |
Bootstrap template version history | |
The bootstrap template is versioned and evolves over time with the AWS CDK itself. If you provide your own bootstrap template, keep it up to date with the canonical default template. You want to make sure that your template continues to work with all CDK features. | |
Note | |
Earlier versions of the bootstrap template created an AWS KMS key in each bootstrapped environment by default. To avoid charges for the KMS key, re-bootstrap these environments using --no-bootstrap-customer-key. The current default is no KMS key, which helps avoid these charges. | |
This section contains a list of the changes made in each version. | |
Template version AWS CDK version Changes | |
1 1.40.0 Initial version of template with Bucket, Key, Repository, and Roles. | |
2 1.45.0 Split asset publishing role into separate file and image publishing roles. | |
3 1.46.0 Add FileAssetKeyArn export to be able to add decrypt permissions to asset consumers. | |
4 1.61.0 AWS KMS permissions are now implicit via Amazon S3 and no longer require FileAssetKeyArn. Add CdkBootstrapVersion SSM parameter so the bootstrap stack version can be verified without knowing the stack name.
5 1.87.0 Deployment role can read SSM parameter. | |
6 1.108.0 Add lookup role separate from deployment role. | |
6 1.109.0 Attach aws-cdk:bootstrap-role tag to deployment, file publishing, and image publishing roles. | |
7 1.110.0 Deployment role can no longer read Buckets in the target account directly. (However, this role is effectively an administrator, and could always use its AWS CloudFormation permissions to make the bucket readable anyway). | |
8 1.114.0 The lookup role has full read-only permissions to the target environment, and has an aws-cdk:bootstrap-role tag as well.
9 2.1.0 Fixes Amazon S3 asset uploads from being rejected by commonly referenced encryption SCP. | |
10 2.4.0 Amazon ECR ScanOnPush is now enabled by default. | |
11 2.18.0 Adds policy allowing Lambda to pull from Amazon ECR repos so it survives re-bootstrapping. | |
12 2.20.0 Adds support for experimental cdk import. | |
13 2.25.0 Makes container images in bootstrap-created Amazon ECR repositories immutable. | |
14 2.34.0 Turns off Amazon ECR image scanning at the repository level by default to allow bootstrapping Regions that do not support image scanning. | |
15 2.60.0 KMS keys cannot be tagged. | |
16 2.69.0 Addresses Security Hub finding KMS.2. | |
17 2.72.0 Addresses Security Hub finding ECR.3. | |
18 2.80.0 Reverted changes made for version 16 as they don't work in all partitions and are not recommended.
19 2.106.1 Reverted changes made to version 18 where AccessControl property was removed from the template. (#27964) | |
20 2.119.0 Add ssm:GetParameters action to the AWS CloudFormation deploy IAM role. For more information, see #28336. | |
21 2.149.0 Add condition to the file publishing role. | |
Upgrade from legacy to modern bootstrap template | |
The AWS CDK v1 supported two bootstrapping templates, legacy and modern. CDK v2 supports only the modern template. For reference, here are the high-level differences between these two templates. | |
Feature Legacy (v1 only) Modern (v1 and v2) | |
Cross-account deployments Not allowed Allowed | |
AWS CloudFormation Permissions Deploys using current user's permissions (determined by AWS profile, environment variables, etc.) Deploys using the permissions specified when the bootstrap stack was provisioned (for example, by using --trust) | |
Versioning Only one version of bootstrap stack is available Bootstrap stack is versioned; new resources can be added in future versions, and AWS CDK apps can require a minimum version | |
Resources* Amazon S3 bucket Amazon S3 bucket, AWS KMS key, IAM roles, Amazon ECR repository, SSM parameter for versioning
Resource naming Automatically generated Deterministic | |
Bucket encryption Default key AWS managed key by default. You can customize to use a customer managed key. | |
* We will add additional resources to the bootstrap template as needed. | |
An environment that was bootstrapped using the legacy template must be upgraded to use the modern template for CDK v2 by re-bootstrapping. Re-deploy all AWS CDK applications in the environment at least once before deleting the legacy bucket. | |
Address Security Hub Findings | |
If you are using AWS Security Hub, you may see findings reported on some of the resources created by the AWS CDK bootstrapping process. Security Hub findings help you find resource configurations you should double-check for accuracy and safety. We have reviewed these specific resource configurations with AWS Security and are confident they do not constitute a security problem. | |
[KMS.2] IAM principals should not have IAM inline policies that allow decryption actions on all KMS keys | |
The deploy role (DeploymentActionRole) grants permission to read encrypted data, which is necessary for cross-account deployments with CDK Pipelines. The policies in this role do not grant access to all data. They grant permission to read encrypted data only from Amazon S3 and AWS KMS, and only when those resources allow it through their bucket or key policy.
The following is a snippet of these two statements in the deploy role from the bootstrap template: | |
DeploymentActionRole: | |
Type: AWS::IAM::Role | |
Properties: | |
... | |
Policies: | |
- PolicyDocument: | |
Statement: | |
... | |
- Sid: PipelineCrossAccountArtifactsBucket | |
Effect: Allow | |
Action: | |
- s3:GetObject* | |
- s3:GetBucket* | |
- s3:List* | |
- s3:Abort* | |
- s3:DeleteObject* | |
- s3:PutObject* | |
Resource: "*" | |
Condition: | |
StringNotEquals: | |
s3:ResourceAccount: | |
Ref: AWS::AccountId | |
- Sid: PipelineCrossAccountArtifactsKey | |
Effect: Allow | |
Action: | |
- kms:Decrypt | |
- kms:DescribeKey | |
- kms:Encrypt | |
- kms:ReEncrypt* | |
- kms:GenerateDataKey* | |
Resource: "*" | |
Condition: | |
StringEquals: | |
kms:ViaService: | |
Fn::Sub: s3.${AWS::Region}.amazonaws.com | |
... | |
Why does Security Hub flag this? | |
The policies contain a Resource: * combined with a Condition clause. Security Hub flags the * wildcard. This wildcard is used because at the time the account is bootstrapped, the AWS KMS key created by CDK Pipelines for the CodePipeline artifact bucket does not exist yet and therefore can't be referenced in the bootstrap template by ARN. In addition, Security Hub does not consider the Condition clause when raising this flag. This Condition restricts Resource: * to requests made from the same AWS account as the AWS KMS key. These requests must come from Amazon S3 in the same AWS Region as the AWS KMS key.
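To make the narrowing effect of these two conditions concrete, the following standalone sketch models them as plain predicates. The request shape and function names are hypothetical illustrations; real IAM policy evaluation involves much more than this:

```typescript
// Hypothetical model of the two Condition clauses above (not IAM internals).
interface Request {
  resourceAccount: string; // s3:ResourceAccount of the bucket being accessed
  viaService?: string;     // kms:ViaService, e.g. "s3.us-east-1.amazonaws.com"
}

// PipelineCrossAccountArtifactsBucket: StringNotEquals means the statement
// applies only to buckets OUTSIDE the bootstrapped account.
function bucketStatementApplies(req: Request, ownAccountId: string): boolean {
  return req.resourceAccount !== ownAccountId;
}

// PipelineCrossAccountArtifactsKey: StringEquals means the KMS permissions
// apply only to calls made through Amazon S3 in the given Region.
function keyStatementApplies(req: Request, region: string): boolean {
  return req.viaService === `s3.${region}.amazonaws.com`;
}

// A cross-account artifact bucket read is covered by the statement...
console.log(bucketStatementApplies({ resourceAccount: "111111111111" }, "222222222222")); // true
// ...but direct KMS calls that don't come via Amazon S3 are not.
console.log(keyStatementApplies(
  { resourceAccount: "111111111111", viaService: "kms.us-east-1.amazonaws.com" },
  "us-east-1"
)); // false
```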
Do I need to fix this finding? | |
As long as you have not modified the AWS KMS key on your bootstrap template to be overly permissive, the deploy role does not allow more access than it needs. Therefore, it is not necessary to fix this finding. | |
What if I want to fix this finding? | |
How you fix this finding depends on whether or not you will be using CDK Pipelines for cross-account deployments. | |
To fix the Security Hub finding and use CDK Pipelines for cross-account deployments | |
If you have not done so, deploy the CDK bootstrap stack using the cdk bootstrap command. | |
If you have not done so, create and deploy your CDK Pipeline. For instructions, see Continuous integration and delivery (CI/CD) using CDK Pipelines. | |
Obtain the AWS KMS key ARN of the CodePipeline artifact bucket. This resource is created during pipeline creation. | |
Obtain a copy of the CDK bootstrap template to modify it. The following is an example, using the AWS CDK CLI: | |
$ cdk bootstrap --show-template > bootstrap-template.yaml | |
Modify the template by replacing Resource: * of the PipelineCrossAccountArtifactsKey statement with your ARN value. | |
Deploy the template to update your bootstrap stack. The following is an example, using the CDK CLI: | |
$ cdk bootstrap aws://account-id/region --template bootstrap-template.yaml | |
To fix the Security Hub finding if you’re not using CDK Pipelines for cross-account deployments | |
Obtain a copy of the CDK bootstrap template to modify it. The following is an example, using the CDK CLI: | |
$ cdk bootstrap --show-template > bootstrap-template.yaml | |
Delete the PipelineCrossAccountArtifactsBucket and PipelineCrossAccountArtifactsKey statements from the template. | |
Deploy the template to update your bootstrap stack. The following is an example, using the CDK CLI: | |
$ cdk bootstrap aws://account-id/region --template bootstrap-template.yaml | |
Considerations | |
Since bootstrapping provisions resources in your environment, you may incur AWS charges when those resources are used with the AWS CDK. | |
-- | |
Customize constructs from the AWS Construct Library | |
Customize constructs from the AWS Construct Library through escape hatches, raw overrides, and custom resources. | |
Topics | |
Use escape hatches | |
Use un-escape hatches | |
Use raw overrides | |
Use custom resources | |
Use escape hatches | |
The AWS Construct Library provides constructs of varying levels of abstraction. | |
At the highest level, your AWS CDK application and the stacks in it are themselves abstractions of your entire cloud infrastructure, or significant chunks of it. They can be parameterized to deploy them in different environments or for different needs. | |
Abstractions are powerful tools for designing and implementing cloud applications. The AWS CDK gives you the power not only to build with its abstractions, but also to create new abstractions. Using the existing open-source L2 and L3 constructs as guidance, you can build your own L2 and L3 constructs to reflect your own organization's best practices and opinions. | |
No abstraction is perfect, and even good abstractions cannot cover every possible use case. During development, you may find a construct that almost fits your needs, requiring a small or large customization. | |
For this reason, the AWS CDK provides ways to break out of the construct model. This includes moving to a lower-level abstraction or to a different model entirely. Escape hatches let you escape the AWS CDK paradigm and customize it in ways that suit your needs. Then, you can wrap your changes in a new construct to abstract away the underlying complexity and provide a clean API for other developers. | |
The following are examples of situations where you can use escape hatches: | |
An AWS service feature is available through AWS CloudFormation, but there are no L2 constructs for it. | |
An AWS service feature is available through AWS CloudFormation, and there are L2 constructs for the service, but these don't yet expose the feature. Because L2 constructs are curated by the CDK team, they may not be immediately available for new features. | |
The feature is not yet available through AWS CloudFormation at all. | |
To determine whether a feature is available through AWS CloudFormation, see AWS Resource and Property Types Reference. | |
Develop escape hatches for L1 constructs | |
If L2 constructs are not available for the service, you can use the automatically generated L1 constructs. These resources can be recognized by their name starting with Cfn, such as CfnBucket or CfnRole. You instantiate them exactly as you would use the equivalent AWS CloudFormation resource. | |
For example, to instantiate a low-level Amazon S3 bucket L1 with analytics enabled, you would write something like the following. | |
TypeScript
new s3.CfnBucket(this, 'amzn-s3-demo-bucket', { | |
analyticsConfigurations: [ | |
{ | |
id: 'Config', | |
// ... | |
} | |
] | |
}); | |
There might be rare cases where you want to define a resource that doesn't have a corresponding CfnXxx class. This could be a new resource type that hasn't yet been published in the AWS CloudFormation resource specification. In cases like this, you can instantiate the cdk.CfnResource directly and specify the resource type and properties. This is shown in the following example. | |
TypeScript
new cdk.CfnResource(this, 'amzn-s3-demo-bucket', { | |
type: 'AWS::S3::Bucket', | |
properties: { | |
// Note the PascalCase here! These are CloudFormation identifiers. | |
AnalyticsConfigurations: [ | |
{ | |
Id: 'Config', | |
// ... | |
} | |
] | |
} | |
}); | |
Develop escape hatches for L2 constructs | |
If an L2 construct is missing a feature or you're trying to work around an issue, you can modify the L1 construct that's encapsulated by the L2 construct. | |
All L2 constructs contain within them the corresponding L1 construct. For example, the high-level Bucket construct wraps the low-level CfnBucket construct. Because the CfnBucket corresponds directly to the AWS CloudFormation resource, it exposes all features that are available through AWS CloudFormation. | |
The basic approach to get access to the L1 construct is to use construct.node.defaultChild (Python: default_child), cast it to the right type (if necessary), and modify its properties. Again, let's take the example of a Bucket. | |
TypeScript
// Get the CloudFormation resource | |
const cfnBucket = bucket.node.defaultChild as s3.CfnBucket; | |
// Change its properties | |
cfnBucket.analyticsConfigurations = [
{ | |
id: 'Config', | |
// ... | |
} | |
]; | |
You can also use this object to change AWS CloudFormation options such as Metadata and UpdatePolicy. | |
TypeScript
cfnBucket.cfnOptions.metadata = { | |
MetadataKey: 'MetadataValue' | |
}; | |
Use un-escape hatches | |
The AWS CDK also provides the capability to go up an abstraction level, which we might refer to as an "un-escape" hatch. If you have an L1 construct, such as CfnBucket, you can create a new L2 construct (Bucket in this case) to wrap the L1 construct. | |
This is convenient when you create an L1 resource but want to use it with a construct that requires an L2 resource. It's also helpful when you want to use convenience methods like .grantXxxxx() that aren't available on the L1 construct. | |
You move to the higher abstraction level using a static method on the L2 class called .fromCfnXxxxx()—for example, Bucket.fromCfnBucket() for Amazon S3 buckets. The L1 resource is the only parameter. | |
TypeScript
const b1 = new s3.CfnBucket(this, "buck09", { ... });
const b2 = s3.Bucket.fromCfnBucket(b1);
L2 constructs created from L1 constructs are proxy objects that refer to the L1 resource, similar to those created from resource names, ARNs, or lookups. Modifications to these constructs do not affect the final synthesized AWS CloudFormation template (since you have the L1 resource, however, you can modify that instead). For more information on proxy objects, see Referencing resources in your AWS account. | |
To avoid confusion, do not create multiple L2 constructs that refer to the same L1 construct. For example, if you extract the CfnBucket from a Bucket using the technique in the previous section, you shouldn't create a second Bucket instance by calling Bucket.fromCfnBucket() with that CfnBucket. It actually works as you'd expect (only one AWS::S3::Bucket is synthesized) but it makes your code more difficult to maintain. | |
Use raw overrides | |
If there are properties that are missing from the L1 construct, you can bypass all typing using raw overrides. This also makes it possible to delete synthesized properties. | |
Use one of the addOverride (Python: add_override) methods, as shown in the following example.
TypeScript
// Get the CloudFormation resource | |
const cfnBucket = bucket.node.defaultChild as s3.CfnBucket; | |
// Use dot notation to address inside the resource template fragment | |
cfnBucket.addOverride('Properties.VersioningConfiguration.Status', 'NewStatus'); | |
cfnBucket.addDeletionOverride('Properties.VersioningConfiguration.Status'); | |
// use index (0 here) to address an element of a list | |
cfnBucket.addOverride('Properties.Tags.0.Value', 'NewValue'); | |
cfnBucket.addDeletionOverride('Properties.Tags.0'); | |
// addPropertyOverride is a convenience function for paths starting with "Properties." | |
cfnBucket.addPropertyOverride('VersioningConfiguration.Status', 'NewStatus'); | |
cfnBucket.addPropertyDeletionOverride('VersioningConfiguration.Status'); | |
cfnBucket.addPropertyOverride('Tags.0.Value', 'NewValue'); | |
cfnBucket.addPropertyDeletionOverride('Tags.0'); | |
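The dot-notation path semantics above can be sketched as a small standalone helper. This illustrates how a path such as Properties.Tags.0.Value addresses a nested template fragment (numeric segments index into lists); applyOverride is a hypothetical name, not the CDK's actual implementation:

```typescript
// Minimal sketch of dot-notation override semantics (not CDK source code).
function applyOverride(fragment: any, path: string, value: any): void {
  const parts = path.split(".");
  let node = fragment;
  for (let i = 0; i < parts.length - 1; i++) {
    // Numeric segments index into arrays; others are object keys.
    const key: any = Array.isArray(node) ? Number(parts[i]) : parts[i];
    if (node[key] === undefined) node[key] = {}; // create intermediate objects
    node = node[key];
  }
  const last: any = Array.isArray(node) ? Number(parts[parts.length - 1]) : parts[parts.length - 1];
  node[last] = value;
}

// A resource fragment like the one CfnBucket synthesizes (simplified).
const resource = {
  Properties: {
    VersioningConfiguration: { Status: "Suspended" },
    Tags: [{ Key: "env", Value: "dev" }],
  },
};

applyOverride(resource, "Properties.VersioningConfiguration.Status", "Enabled");
applyOverride(resource, "Properties.Tags.0.Value", "prod");
console.log(resource.Properties.VersioningConfiguration.Status); // Enabled
console.log(resource.Properties.Tags[0].Value); // prod
```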
Use custom resources | |
If the feature isn't available through AWS CloudFormation, but only through a direct API call, you must write an AWS CloudFormation Custom Resource to make the API call you need. You can use the AWS CDK to write custom resources and wrap them into a regular construct interface. From the perspective of a consumer of your construct, the experience will feel native. | |
Building a custom resource involves writing a Lambda function that responds to a resource's CREATE, UPDATE, and DELETE lifecycle events. If your custom resource needs to make only a single API call, consider using the AwsCustomResource. This makes it possible to perform arbitrary SDK calls during an AWS CloudFormation deployment. Otherwise, you should write your own Lambda function to perform the work you need to get done. | |
The subject is too broad to cover completely here, but the following links should get you started: | |
Custom Resources | |
Custom-Resource Example | |
For a more fully fledged example, see the DnsValidatedCertificate class in the CDK standard library. This is implemented as a custom resource. | |
-- | |
Use CloudFormation parameters to get a CloudFormation value | |
Use AWS CloudFormation parameters within AWS Cloud Development Kit (AWS CDK) applications to input custom values into your synthesized CloudFormation templates at deployment. | |
For an introduction, see Parameters and the AWS CDK. | |
Define parameters in your CDK app | |
Use the CfnParameter class to define a parameter. You'll want to specify at least a type and a description for most parameters, though both are technically optional. The description appears when the user is prompted to enter the parameter's value in the AWS CloudFormation console. For more information on the available types, see Types. | |
Note | |
You can define parameters in any scope. However, we recommend defining parameters at the stack level so that their logical ID doesn't change when you refactor your code. | |
TypeScript
const uploadBucketName = new CfnParameter(this, "uploadBucketName", { | |
type: "String", | |
description: "The name of the Amazon S3 bucket where uploaded files will be stored."}); | |
Use parameters | |
A CfnParameter instance exposes its value to your CDK app via a token. Like all tokens, the parameter's token is resolved at synthesis time. But it resolves to a reference to the parameter defined in the AWS CloudFormation template (which will be resolved at deploy time), rather than to a concrete value. | |
You can retrieve the token as an instance of the Token class, or in string, string list, or numeric encoding. Your choice depends on the kind of value required by the class or method that you want to use the parameter with. | |
TypeScript
Property Kind of value
value Token class instance | |
valueAsList The token represented as a string list | |
valueAsNumber The token represented as a number | |
valueAsString The token represented as a string | |
For example, to use a parameter in a Bucket definition: | |
TypeScript
const bucket = new Bucket(this, "amzn-s3-demo-bucket", | |
{ bucketName: uploadBucketName.valueAsString}); | |
Deploy CDK apps containing parameters | |
When you deploy a generated AWS CloudFormation template through the AWS CloudFormation console, you will be prompted to provide the values for each parameter. | |
You can also provide parameter values using the CDK CLI cdk deploy command, or by specifying parameter values in your CDK project’s stack file. | |
Provide parameter values with cdk deploy | |
When you deploy using the CDK CLI cdk deploy command, you can provide parameter values at deployment with the --parameters option. | |
The following is an example of the cdk deploy command structure: | |
$ cdk deploy stack-logical-id --parameters stack-name:parameter-name=parameter-value | |
If your CDK app contains a single stack, you don’t have to provide the stack logical ID argument or the stack-name value in the --parameters option. The CDK CLI will automatically find and provide these values. The following is an example that specifies an uploadbucket value for the uploadBucketName parameter of the single stack in our CDK app: | |
$ cdk deploy --parameters uploadBucketName=uploadbucket | |
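The --parameters value follows a [stack-name:]parameter-name=parameter-value shape. As a rough illustration of that syntax only (parseParameterArg is a hypothetical helper, not the CDK CLI's actual parser):

```typescript
// Illustrative split of a --parameters value: [stack-name:]name=value
// Assumes a well-formed "name=value" input; not the real CDK CLI parser.
function parseParameterArg(arg: string): { stack?: string; name: string; value: string } {
  const eq = arg.indexOf("=");
  const left = arg.slice(0, eq);
  const value = arg.slice(eq + 1);
  const colon = left.indexOf(":");
  if (colon === -1) return { name: left, value }; // single-stack form
  return { stack: left.slice(0, colon), name: left.slice(colon + 1), value };
}

// Single-stack form: no stack prefix needed.
const single = parseParameterArg("uploadBucketName=uploadbucket");
// Multi-stack form: the prefix names the target stack.
const multi = parseParameterArg("MyStack:uploadBucketName=uploadbucket");
```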
Provide parameter values with cdk deploy for multi-stack applications | |
The following is an example CDK application in TypeScript that contains two CDK stacks. Each stack contains an Amazon S3 bucket instance and a parameter to set the Amazon S3 bucket name: | |
import * as cdk from 'aws-cdk-lib'; | |
import { Construct } from 'constructs'; | |
import * as s3 from 'aws-cdk-lib/aws-s3'; | |
// Define the CDK app | |
const app = new cdk.App(); | |
// First stack | |
export class MyFirstStack extends cdk.Stack { | |
constructor(scope: Construct, id: string, props?: cdk.StackProps) { | |
super(scope, id, props); | |
// Set a default parameter name | |
const bucketNameParam = new cdk.CfnParameter(this, 'bucketNameParam', { | |
type: 'String', | |
default: 'myfirststackdefaultbucketname' | |
}); | |
// Define an S3 bucket | |
new s3.Bucket(this, 'MyFirstBucket', { | |
bucketName: bucketNameParam.valueAsString | |
}); | |
} | |
} | |
// Second stack | |
export class MySecondStack extends cdk.Stack { | |
constructor(scope: Construct, id: string, props?: cdk.StackProps) { | |
super(scope, id, props); | |
// Set a default parameter name | |
const bucketNameParam = new cdk.CfnParameter(this, 'bucketNameParam', { | |
type: 'String', | |
default: 'mysecondstackdefaultbucketname' | |
}); | |
// Define an S3 bucket | |
new s3.Bucket(this, 'MySecondBucket', { | |
bucketName: bucketNameParam.valueAsString | |
}); | |
} | |
} | |
// Instantiate the stacks | |
new MyFirstStack(app, 'MyFirstStack', { | |
stackName: 'MyFirstDeployedStack', | |
}); | |
new MySecondStack(app, 'MySecondStack', { | |
stackName: 'MySecondDeployedStack', | |
}); | |
For CDK apps that contain multiple stacks, you can do the following: | |
Deploy one stack with parameters – To deploy a single stack from a multi-stack application, provide the stack logical ID as an argument. | |
The following is an example that deploys MySecondStack with mynewbucketname as the parameter value for bucketNameParam: | |
$ cdk deploy MySecondStack --parameters bucketNameParam='mynewbucketname' | |
Deploy all stacks and specify parameter values for each stack – Provide the '*' wildcard or the --all option to deploy all stacks. Provide the --parameters option multiple times in a single command to specify parameter values for each stack. The following is an example: | |
$ cdk deploy '*' --parameters MyFirstDeployedStack:bucketNameParam='mynewfirststackbucketname' --parameters MySecondDeployedStack:bucketNameParam='mynewsecondstackbucketname' | |
Deploy all stacks and specify parameter values for a single stack – Provide the '*' wildcard or the --all option to deploy all stacks. Then, specify the stack to define the parameter for in the --parameters option. The following examples deploy all stacks in a CDK app and specify a parameter value for the MySecondDeployedStack AWS CloudFormation stack. All other stacks will deploy and use the default parameter value:
$ cdk deploy '*' --parameters MySecondDeployedStack:bucketNameParam='mynewbucketname' | |
$ cdk deploy --all --parameters MySecondDeployedStack:bucketNameParam='mynewbucketname' | |
Provide parameter values with cdk deploy for applications with nested stacks | |
The CDK CLI behavior when working with applications containing nested stacks is similar to multi-stack applications. The main difference is that to deploy all nested stacks, you use the '**' wildcard. The '*' wildcard deploys all stacks but will not deploy nested stacks. The '**' wildcard deploys all stacks, including nested stacks.
The following is an example that deploys nested stacks while specifying the parameter value for one nested stack: | |
$ cdk deploy '**' --parameters MultiStackCdkApp/SecondStack:bucketNameParam='mysecondstackbucketname' | |
For more information on cdk deploy command options, see cdk deploy. | |
-- | |
Import an existing AWS CloudFormation template | |
Import resources from an AWS CloudFormation template into your AWS Cloud Development Kit (AWS CDK) applications by using the cloudformation-include.CfnInclude construct to convert resources to L1 constructs. | |
After import, you can work with these resources in your app in the same way that you would if they were originally defined in AWS CDK code. You can also use these L1 constructs within higher-level AWS CDK constructs. For example, this can let you use the L2 permission grant methods with the resources they define. | |
The cloudformation-include.CfnInclude construct essentially adds an AWS CDK API wrapper to any resource in your AWS CloudFormation template. Use this capability to import your existing AWS CloudFormation templates to the AWS CDK a piece at a time. By doing this, you can manage your existing resources using AWS CDK constructs to utilize the benefits of higher-level abstractions. You can also use this feature to vend your AWS CloudFormation templates to AWS CDK developers by providing an AWS CDK construct API. | |
Note | |
AWS CDK v1 also included aws-cdk-lib.CfnInclude, which was previously used for the same general purpose. However, it lacks much of the functionality of cloudformation-include.CfnInclude. | |
Topics | |
Import an AWS CloudFormation template | |
Access imported resources | |
Replace parameters | |
Import other template elements | |
Import nested stacks | |
Import an AWS CloudFormation template | |
The following is a sample AWS CloudFormation template that we will use to provide examples in this topic. Copy and save the template as my-template.json to follow along. After working through these examples, you can explore further by using any of your existing deployed AWS CloudFormation templates. You can obtain them from the AWS CloudFormation console. | |
{ | |
"Resources": { | |
"amzn-s3-demo-bucket": { | |
"Type": "AWS::S3::Bucket", | |
"Properties": { | |
"BucketName": "amzn-s3-demo-bucket", | |
} | |
} | |
} | |
} | |
You can work with either JSON or YAML templates. We recommend JSON if available since YAML parsers can vary slightly in what they accept. | |
The following is an example of how to import the sample template into your AWS CDK app using cloudformation-include. Templates are imported within the context of a CDK stack.
TypeScript
import * as cdk from 'aws-cdk-lib'; | |
import * as cfninc from 'aws-cdk-lib/cloudformation-include'; | |
import { Construct } from 'constructs'; | |
export class MyStack extends cdk.Stack { | |
constructor(scope: Construct, id: string, props?: cdk.StackProps) { | |
super(scope, id, props); | |
const template = new cfninc.CfnInclude(this, 'Template', { | |
templateFile: 'my-template.json', | |
}); | |
} | |
} | |
By default, importing a resource preserves the resource's original logical ID from the template. This behavior is suitable for importing an AWS CloudFormation template into the AWS CDK, where logical IDs must be retained. AWS CloudFormation needs this information to recognize these imported resources as the same resources from the AWS CloudFormation template. | |
If you are developing an AWS CDK construct wrapper for the template so that it can be used by other AWS CDK developers, have the AWS CDK generate new resource IDs instead. By doing this, the construct can be used multiple times in a stack without name conflicts. To do this, set the preserveLogicalIds property to false when importing the template. The following is an example: | |
TypeScript
const template = new cfninc.CfnInclude(this, 'MyConstruct', { | |
templateFile: 'my-template.json', | |
preserveLogicalIds: false | |
}); | |
To put imported resources under the control of your AWS CDK app, add the stack to the App: | |
TypeScript
import * as cdk from 'aws-cdk-lib'; | |
import { MyStack } from '../lib/my-stack'; | |
const app = new cdk.App(); | |
new MyStack(app, 'MyStack'); | |
To verify that there won't be any unintended changes to the AWS resources in the stack, you can perform a diff. Use the AWS CDK CLI cdk diff command and omit any AWS CDK-specific metadata. The following is an example: | |
$ cdk diff --no-version-reporting --no-path-metadata --no-asset-metadata
After you import an AWS CloudFormation template, the AWS CDK app should become the source of truth for your imported resources. To make changes to your resources, modify them in your AWS CDK app and deploy with the AWS CDK CLI cdk deploy command. | |
Access imported resources | |
The name template in the example code represents the imported AWS CloudFormation template. To access a resource from it, use the object's getResource() method. To access the returned resource as a specific kind of resource, cast the result to the desired type. This isn't necessary in Python or JavaScript. The following is an example: | |
TypeScript
const cfnBucket = template.getResource('amzn-s3-demo-bucket') as s3.CfnBucket; | |
From this example, cfnBucket is now an instance of the aws-s3.CfnBucket class. This is an L1 construct that represents the corresponding AWS CloudFormation resource. You can treat it like any other resource of its type. For example, you can get its ARN value with the cfnBucket.attrArn property.
To wrap the L1 CfnBucket resource in an L2 aws-s3.Bucket instance instead, use the static methods fromBucketArn(), fromBucketAttributes(), or fromBucketName(). Usually, the fromBucketName() method is most convenient. The following is an example: | |
const bucket = s3.Bucket.fromBucketName(this, 'Bucket', cfnBucket.ref); | |
Other L2 constructs have similar methods for creating the construct from an existing resource. | |
When you wrap an L1 construct in an L2 construct, it doesn't create a new resource. In our example, we are not creating a second S3 bucket. Instead, the new Bucket instance encapsulates the existing CfnBucket.
From the example, the bucket is now an L2 Bucket construct that behaves like any other L2 construct. For example, you can grant an AWS Lambda function write access to the bucket by using the bucket's convenient grantWrite() method. You don't have to define the necessary AWS Identity and Access Management (IAM) policy manually. The following is an example: | |
bucket.grantWrite(lambdaFunc); | |
Replace parameters | |
If your AWS CloudFormation template contains parameters, you can replace them with build time values at import by using the parameters property. In the following example, we replace the UploadBucket parameter with the ARN of a bucket defined elsewhere in our AWS CDK code. | |
const template = new cfninc.CfnInclude(this, 'Template', { | |
templateFile: 'my-template.json', | |
parameters: { | |
'UploadBucket': bucket.bucketArn, | |
}, | |
}); | |
Import other template elements | |
You can import any AWS CloudFormation template element, not just resources. The imported elements become a part of the AWS CDK stack. To import these elements, use the following methods of the CfnInclude object: | |
getCondition() – AWS CloudFormation conditions. | |
getHook() – AWS CloudFormation hooks for blue/green deployments. | |
getMapping() – AWS CloudFormation mappings. | |
getOutput() – AWS CloudFormation outputs. | |
getParameter() – AWS CloudFormation parameters. | |
getRule() – AWS CloudFormation rules for AWS Service Catalog templates. | |
Each of these methods returns an instance of a class that represents the specific type of AWS CloudFormation element. These objects are mutable; changes that you make to them will appear in the template that gets generated from the AWS CDK stack. The following is an example that imports a parameter from the template and modifies its default value:
const param = template.getParameter('MyParameter'); | |
param.default = 'AWS CDK';
Import nested stacks | |
You can import nested stacks by specifying them either when you import their main template, or at some later point. The nested template must be stored in a local file, but referenced as a NestedStack resource in the main template. Also, the resource name used in the AWS CDK code must match the name used for the nested stack in the main template. | |
Given this resource definition in the main template, the following code shows how to import the referenced nested stack both ways. | |
"NestedStack": { | |
"Type": "AWS::CloudFormation::Stack", | |
"Properties": { | |
"TemplateURL": "https://my-s3-template-source.s3.amazonaws.com/nested-stack.json" | |
} | |
// include nested stack when importing main stack | |
const mainTemplate = new cfninc.CfnInclude(this, 'MainStack', { | |
templateFile: 'main-template.json', | |
loadNestedStacks: { | |
'NestedStack': { | |
templateFile: 'nested-template.json', | |
}, | |
}, | |
}); | |
// or add it some time after importing the main stack | |
const nestedTemplate = mainTemplate.loadNestedStack('NestedStack', {
templateFile: 'nested-template.json', | |
}); | |
You can import multiple nested stacks with either method. When importing the main template, you provide a mapping between the resource name of each nested stack and its template file; this mapping can contain any number of entries. To do it after the initial import, call loadNestedStack() once for each nested stack.
After importing a nested stack, you can access it using the main template's getNestedStack() method. | |
const nestedStack = mainTemplate.getNestedStack('NestedStack').stack; | |
The getNestedStack() method returns an IncludedNestedStack instance. From this instance, you can access the AWS CDK NestedStack instance via the stack property, as shown in the example. You can also access the original AWS CloudFormation template object via includedTemplate, from which you can load resources and other AWS CloudFormation elements. | |
-- | |
Cross-account and cross-region deployment using GitHub actions and AWS CDK | |
by DAMODAR SHENVI WAGLE | on 15 SEP 2020 | in Advanced (300), Developer Tools, DevOps, How-To
GitHub Actions is a feature on GitHub’s popular development platform that helps you automate your software development workflows in the same place you store code and collaborate on pull requests and issues. You can write individual tasks called actions, and combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub. | |
A cross-account deployment strategy is a CI/CD pattern or model in AWS. In this pattern, you have a designated AWS account called tools, where all CI/CD pipelines reside. Deployment is carried out by these pipelines across other AWS accounts, which may correspond to dev, staging, or prod. For more information about a cross-account strategy in reference to CI/CD pipelines on AWS, see Building a Secure Cross-Account Continuous Delivery Pipeline. | |
In this post, we show you how to use GitHub Actions to deploy an AWS Lambda-based API to an AWS account and Region using the cross-account deployment strategy. | |
Using GitHub Actions may have associated costs in addition to the cost associated with the AWS resources you create. For more information, see About billing for GitHub Actions. | |
Prerequisites | |
Before proceeding any further, you need to identify and designate two AWS accounts required for the solution to work: | |
Tools – Where you create an AWS Identity and Access Management (IAM) user for GitHub Actions to use to carry out deployment. | |
Target – Where deployment occurs. You can think of this as your dev/stage/prod environment.
You also need to create two AWS account profiles in ~/.aws/credentials for the tools and target accounts, if you don’t already have them. These profiles need to have sufficient permissions to run an AWS Cloud Development Kit (AWS CDK) stack. They should be your private profiles and only be used during the course of this use case. So, it should be fine if you want to use admin privileges. Don’t share the profile details, especially if it has admin privileges. I recommend removing the profile when you’re finished with this walkthrough. For more information about creating an AWS account profile, see Configuring the AWS CLI. | |
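For reference, the two profiles in ~/.aws/credentials could look like the following sketch; the profile names and key values are placeholders, not values from this walkthrough:

```ini
# ~/.aws/credentials (placeholder values only)
[tools]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = <tools-account-secret-access-key>

[target]
aws_access_key_id = AKIAYYYYYYYYYYYYYYYY
aws_secret_access_key = <target-account-secret-access-key>
```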
Solution overview | |
You start by building the necessary resources in the tools account (an IAM user with permissions to assume a specific IAM role from the target account to carry out deployment). For simplicity, we refer to this IAM role as the cross-account role, as specified in the architecture diagram. | |
You also create the cross-account role in the target account that trusts the IAM user in the tools account and provides the required permissions for AWS CDK to bootstrap and initiate creating an AWS CloudFormation deployment stack in the target account. GitHub Actions uses the tools account IAM user credentials to assume the cross-account role to carry out deployment.
In addition, you create an AWS CloudFormation execution role in the target account, which the AWS CloudFormation service assumes in the target account. This role has permissions to create your API resources, such as a Lambda function and Amazon API Gateway, in the target account. This role is passed to the AWS CloudFormation service via AWS CDK.
You then configure your tools account IAM user credentials in your Git secrets and define the GitHub Actions workflow, which triggers upon pushing code to a specific branch of the repo. The workflow then assumes the cross-account role and initiates deployment. | |
The following diagram illustrates the solution architecture and shows AWS resources across the tools and target accounts. | |
Architecture diagram | |
Creating an IAM user | |
You start by creating an IAM user called git-action-deployment-user in the tools account. The user needs to have only programmatic access. | |
Clone the GitHub repo aws-cross-account-cicd-git-actions-prereq and navigate to the folder tools-account. Here you find the JSON parameter file src/cdk-stack-param.json, which contains the parameter CROSS_ACCOUNT_ROLE_ARN, representing the ARN for the cross-account role we create in the next step in the target account. In the ARN, replace <target-account-id> with the actual account ID of your designated AWS target account.
Run deploy.sh by passing the name of the tools AWS account profile you created earlier. The script compiles the code, builds a package, and uses the AWS CDK CLI to bootstrap and deploy the stack. See the following code: | |
cd aws-cross-account-cicd-git-actions-prereq/tools-account/ | |
./deploy.sh "<AWS-TOOLS-ACCOUNT-PROFILE-NAME>" | |
You should now see two stacks in the tools account: CDKToolkit and cf-GitActionDeploymentUserStack. AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an Amazon Simple Storage Service (Amazon S3) bucket needed to hold deployment assets such as a CloudFormation template and Lambda code package. cf-GitActionDeploymentUserStack creates the IAM user with permission to assume git-action-cross-account-role (which you create in the next step). On the Outputs tab of the stack, you can find the user access key and the AWS Secrets Manager ARN that holds the user secret. To retrieve the secret, you need to go to Secrets Manager. Record the secret to use later. | |
Stack that creates IAM user with its secret stored in secrets manager | |
Creating a cross-account IAM role | |
In this step, you create two IAM roles in the target account: git-action-cross-account-role and git-action-cf-execution-role. | |
git-action-cross-account-role provides required deployment-specific permissions to the IAM user you created in the last step. The IAM user in the tools account can assume this role and perform the following tasks: | |
Upload deployment assets such as the CloudFormation template and Lambda code package to a designated S3 bucket via AWS CDK | |
Create a CloudFormation stack that deploys API Gateway and Lambda using AWS CDK | |
AWS CDK passes git-action-cf-execution-role to AWS CloudFormation to create, update, and delete the CloudFormation stack. It has permissions to create API Gateway and Lambda resources in the target account. | |
To deploy these two roles using AWS CDK, complete the following steps: | |
In the already cloned repo from the previous step, navigate to the folder target-account. This folder contains the JSON parameter file cdk-stack-param.json, which contains the parameter TOOLS_ACCOUNT_USER_ARN, representing the ARN for the IAM user you previously created in the tools account. In the ARN, replace <tools-account-id> with the actual account ID of your designated AWS tools account.
Run deploy.sh by passing the name of the target AWS account profile you created earlier. The script compiles the code, builds the package, and uses the AWS CDK CLI to bootstrap and deploy the stack. See the following code: | |
cd ../target-account/ | |
./deploy.sh "<AWS-TARGET-ACCOUNT-PROFILE-NAME>" | |
You should now see two stacks in your target account: CDKToolkit and cf-CrossAccountRolesStack. AWS CDK creates the CDKToolkit stack when we bootstrap the AWS CDK app. This creates an S3 bucket to hold deployment assets such as the CloudFormation template and Lambda code package. The cf-CrossAccountRolesStack creates the two IAM roles we discussed at the beginning of this step. The IAM role git-action-cross-account-role now has the IAM user added to its trust policy. On the Outputs tab of the stack, you can find these roles’ ARNs. Record these ARNs as you conclude this step. | |
Stack that creates IAM roles to carry out cross account deployment | |
Configuring secrets | |
One of the GitHub actions we use is aws-actions/configure-aws-credentials@v1. This action configures AWS credentials and Region environment variables for use in the GitHub Actions workflow. The AWS CDK CLI detects the environment variables to determine the credentials and Region to use for deployment. | |
For our cross-account deployment use case, aws-actions/configure-aws-credentials@v1 takes three pieces of sensitive information besides the Region: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and CROSS_ACCOUNT_ROLE_TO_ASSUME. GitHub secrets are the recommended way to store sensitive information in the repo; they keep the information in an encrypted format. For more information about referencing secrets in the workflow, see Creating and storing encrypted secrets.
Before we continue, you need your own empty GitHub repo to complete this step. Use an existing repo if you have one, or create a new repo. You configure secrets in this repo. In the next section, you check in the code provided by the post to deploy a Lambda-based API CDK stack into this repo. | |
On the GitHub console, navigate to your repo settings and choose the Secrets tab. | |
Add a new secret with name as TOOLS_ACCOUNT_ACCESS_KEY_ID. | |
Copy the access key ID from the output OutGitActionDeploymentUserAccessKey of the stack cf-GitActionDeploymentUserStack in the tools account.
Enter the ID in the Value field.
Repeat this step to add two more secrets: | |
TOOLS_ACCOUNT_SECRET_ACCESS_KEY (value retrieved from the AWS Secrets Manager in tools account) | |
CROSS_ACCOUNT_ROLE (value copied from the output OutCrossAccountRoleArn of the stack cf-CrossAccountRolesStack in target account) | |
You should now have three secrets as shown below. | |
All required git secrets | |
Deploying with GitHub Actions | |
As the final step, first clone your empty repo where you set up your secrets. Download and copy the code from the GitHub repo into your empty repo. The folder structure of your repo should mimic that of the source repo, as in the following screenshot.
Folder structure of the Lambda API code | |
Let's take a detailed look at the code base. First and foremost, we use TypeScript to deploy our Lambda API, so we need an AWS CDK app and an AWS CDK stack. The app is defined in app.ts under the repo root folder. The stack definition is located under the stack-specific folder src/git-action-demo-api-stack. The Lambda code is located under the Lambda-specific folder src/git-action-demo-api-stack/lambda/git-action-demo-lambda.
We also have a deployment script deploy.sh, which compiles the app and Lambda code, packages the Lambda code into a .zip file, bootstraps the app by copying the assets to an S3 bucket, and deploys the stack. To deploy the stack, AWS CDK has to pass CFN_EXECUTION_ROLE to AWS CloudFormation; this role is configured in src/params/cdk-stack-param.json. Replace <target-account-id> with your own designated AWS target account ID. | |
Update cdk-stack-param.json in git-actions-cross-account-cicd repo with TARGET account id | |
Finally, we define the GitHub Actions workflow under the .github/workflows/ folder per the specifications defined by GitHub Actions. GitHub Actions automatically identifies the workflow in this location and triggers it when conditions match. Our workflow .yml files are named in the format cicd-workflow-<region>.yml, where <region> in the file name identifies the deployment Region in the target account. In our use case, we use us-east-1 and us-west-2, each of which is also defined as an environment variable in its workflow.
The GitHub Actions workflow has a standard hierarchy. The workflow is a collection of jobs, which are collections of one or more steps. Each job runs on a virtual machine called a runner, which can either be GitHub-hosted or self-hosted. We use the GitHub-hosted runner ubuntu-latest because it works well for our use case. For more information about GitHub-hosted runners, see Virtual environments for GitHub-hosted runners. For more information about the software preinstalled on GitHub-hosted runners, see Software installed on GitHub-hosted runners. | |
The workflow also has a trigger condition specified at the top. You can schedule the trigger based on the cron settings or trigger it upon code pushed to a specific branch in the repo. See the following code: | |
name: Lambda API CICD Workflow | |
# This workflow is triggered on pushes to the repository branch master. | |
on: | |
push: | |
branches: | |
- master | |
# Initializes environment variables for the workflow | |
env: | |
REGION: us-east-1 # Deployment Region | |
jobs: | |
deploy: | |
name: Build And Deploy | |
# This job runs on Linux | |
runs-on: ubuntu-latest | |
steps: | |
# Checkout code from git repo branch configured above, under folder $GITHUB_WORKSPACE. | |
- name: Checkout | |
uses: actions/checkout@v2 | |
# Sets up AWS profile. | |
- name: Configure AWS credentials | |
uses: aws-actions/configure-aws-credentials@v1 | |
with: | |
aws-access-key-id: ${{ secrets.TOOLS_ACCOUNT_ACCESS_KEY_ID }} | |
aws-secret-access-key: ${{ secrets.TOOLS_ACCOUNT_SECRET_ACCESS_KEY }} | |
aws-region: ${{ env.REGION }} | |
role-to-assume: ${{ secrets.CROSS_ACCOUNT_ROLE }} | |
role-duration-seconds: 1200 | |
role-session-name: GitActionDeploymentSession | |
# Installs CDK and other prerequisites | |
- name: Prerequisite Installation | |
run: | | |
sudo npm install -g [email protected] | |
cdk --version | |
aws s3 ls | |
# Build and Deploy CDK application | |
- name: Build & Deploy | |
run: | | |
cd $GITHUB_WORKSPACE | |
ls -a | |
chmod 700 deploy.sh | |
./deploy.sh | |
For more information about triggering workflows, see Triggering a workflow with events. | |
We have configured a single-job workflow for our use case that runs on ubuntu-latest and is triggered upon a code push to the master branch. When you create an empty repo, the master branch becomes the default branch. The workflow has four steps:
Check out the code from the repo, for which we use a standard Git action actions/checkout@v2. The code is checked out into a folder defined by the variable $GITHUB_WORKSPACE, so it becomes the root location of our code. | |
Configure AWS credentials using aws-actions/configure-aws-credentials@v1. This action is configured as explained in the previous section. | |
Install your prerequisites. In our use case, the only prerequisite we need is AWS CDK. Upon installing AWS CDK, we can do a quick test using the AWS Command Line Interface (AWS CLI) command aws s3 ls. If cross-account access was successfully established in the previous step of the workflow, this command should return a list of buckets in the target account. | |
Navigate to the root location of the code, $GITHUB_WORKSPACE, and run the deploy.sh script.
You can now check the code into the master branch of your repo. This should trigger the workflow, which you can monitor on the Actions tab of your repo. The commit message you provide is displayed for the respective run of the workflow.
Workflow for region us-east-1 Workflow for region us-west-2 | |
You can choose the workflow link and monitor the log for each individual step of the workflow. | |
Git action workflow steps | |
In the target account, you should now see the CloudFormation stack cf-GitActionDemoApiStack in us-east-1 and us-west-2. | |
Lambda API stack in us-east-1 Lambda API stack in us-west-2 | |
The API resource URL DocUploadRestApiResourceUrl is located on the Outputs tab of the stack. You can invoke your API by opening this URL in the browser.
API Invocation Output | |
Security considerations | |
Cross-account IAM roles are very powerful and need to be handled carefully. For this post, we strictly limited the cross-account IAM role to specific Amazon S3 and CloudFormation permissions. This makes sure that the cross-account role can only do those things. The actual creation of Lambda, API Gateway, and Amazon DynamoDB resources happens via the AWS CloudFormation IAM role, which AWS CloudFormation assumes in the target AWS account. | |
Make sure that you use secrets to store your sensitive workflow configurations, as specified in the section Configuring secrets. | |
--- | |
CDK Cross-Account Pipelines Part 1 | |
Mark Ilott · Published in AWS in Plain English · 6 min read · Mar 22, 2022
Using CDK v2 with a CodeCommit Source | |
Overview | |
This is an update to a prior article that covered how to do this with CDK v1 and the pre-release version of CDK Pipelines. Here we are using the latest CDK and the GA version of Pipelines, which it turns out are just as tricky to get working in this particular scenario. | |
Just like the old version, documentation from AWS on how CDK Pipelines actually works in cross-account scenarios is thin to non-existent, so hopefully this can save you all some time. | |
I have copied some of the text from the original article to save some time. There is a full demo on GitHub you can refer to, or if you're brave, just jump straight into!
Alternatively, if you are using GitHub as the source for your pipelines, you can skip straight to part 2, which is much simpler than using CodeCommit. | |
The Scenario | |
I’m using AWS CDK to develop and deploy infrastructure and apps into pre-prod and production environments. Specifically, there is: | |
A Dev account (Account number: 111111111111), where the code resides in CodeCommit repositories, and where development code is deployed. Developers have (almost) full access to the account and can manage code and deploy apps. | |
A UAT (Staging) account (222222222222) — a production-like environment used for business acceptance testing. The infra team manages the environment, developers have limited access for troubleshooting and testing. | |
A Prod account (333333333333), for production deployment, obviously. Developers can view logs, and not much else. | |
A Tools (Shared Services) account (444444444444), where CodePipeline will be used to build and deploy into Dev, UAT and Prod. Developers have limited access for troubleshooting and testing, and can view but not manage pipelines. | |
You can review the theory behind it and how to set up without using CDK in this AWS blog post. | |
Just like in the AWS blog, the environment I originally developed this for is using CodeCommit for source control. It greatly complicates the setup compared to GitHub, and all I can say after a year or so using CodeCommit is don’t, unless you really have to. | |
Note this is something you can set up yourself for learning and testing. Accounts in AWS don't cost anything, so there are no issues in setting them all up. If it's something new for you, I'd also recommend setting them up in an Organisation, as that's how it will be done in the real world.
In my case, I’m running three different Pipelines, one for each environment, and that’s what the templates set up. It’s also possible to deploy each environment as separate stages from a single pipeline if that’s better for your workflow — and it will be easy to modify the templates and process below if needed. | |
I’m using Typescript for my CDK development, but the preparation parts of this article are relevant for Python, Java and any other language CDK supports. | |
CDK Account Preparation | |
In the original article I outlined how to prepare the accounts and deploy pipelines using CloudFormation, in case you don't have access to deploy directly via CDK (which was the case for me when I first worked this out). In the interest of brevity I'm skipping that here. The TL;DR version: use CDK to export the CloudFormation templates.
What is important is that you set up the correct trust relationships when bootstrapping accounts. | |
Assuming your tools account (where you will deploy pipelines) is 444444444444, then you need to run the following: | |
Tools account itself: | |
cdk bootstrap 444444444444/ap-southeast-1 --no-bootstrap-customer-key --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess' | |
And all the other accounts and regions (account 111111111111 as an example): | |
cdk bootstrap 111111111111/ap-southeast-1 --no-bootstrap-customer-key --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess' --trust 444444444444 --trust-for-lookup 444444444444 | |
Obviously replace the account numbers and region with your own. | |
Caution: once you have done this, anyone who has permission to deploy CDK in your Tools account can use the same credentials to deploy to any of the trusted accounts. This is handy in dev and personal accounts, but maybe not what you want in production. | |
Cross Account Preparation | |
Now here is the tricky bit. There's a nice diagram in the AWS blog mentioned above that shows the flow of assumed roles, but there's a little more to it than that.
Pipeline Role in the Dev Account | |
The CodePipeline build phase (using CodeBuild) will assume a Role in the Dev account in order to access CodeCommit. As CodeBuild will also need to access S3 buckets in the Tools account during this phase, the Role in Dev also needs access to the buckets and KMS Key in the Tools Account. | |
This can be confusing, so I'll explain it a different way. CodeBuild can only assume one Role during the build phase. During this time it needs to access the code in CodeCommit (Dev account) and save any assets produced to S3 buckets in the pipeline account (Tools account). The Role it assumes needs to be able to do both.
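As a rough sketch only, that single Role's policy has to span both accounts along these lines; the repo, bucket, and key identifiers below are placeholders, not the names CDK actually generates:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PullSourceFromDevAccount",
      "Effect": "Allow",
      "Action": ["codecommit:GitPull", "codecommit:GetBranch", "codecommit:GetCommit"],
      "Resource": "arn:aws:codecommit:ap-southeast-1:111111111111:my-repo"
    },
    {
      "Sid": "ReadWriteAssetsInToolsAccount",
      "Effect": "Allow",
      "Action": ["s3:GetObject*", "s3:PutObject*", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-pipeline-artifact-bucket",
        "arn:aws:s3:::my-pipeline-artifact-bucket/*"
      ]
    },
    {
      "Sid": "UseArtifactKmsKeyInToolsAccount",
      "Effect": "Allow",
      "Action": ["kms:Decrypt", "kms:Encrypt", "kms:GenerateDataKey*"],
      "Resource": "arn:aws:kms:ap-southeast-1:444444444444:key/<artifact-key-id>"
    }
  ]
}
```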
In the old version (and my original article) we had to create this role manually, but fortunately now CDK creates it for us. Unfortunately it does not make it at all clear what it is doing or how to use it. For that, read on… | |
Events Rule to Trigger Pipelines | |
What we do still need to do for this scenario is create an Events Rule to trigger when we update code in CodeCommit — and allow our Dev account to send the Events to our Tools account to trigger the pipelines. This requires two preparation stacks, one in the Dev account and one in the Tools account. | |
Event Rule Configuration in Tools and Dev Accounts | |
Pipeline Role | |
The pipelines themselves start with a Role that in turn assumes all the other Roles required to: | |
get the source | |
create the CloudFormation in CodeBuild | |
save assets to S3 | |
sync to other Accounts and Regions | |
and then deploy the CloudFormation. | |
In our cross-account (and possibly cross-region) scenario, CDK sets all this up by creating helper stacks: | |
in the Dev account to provide access to CodeCommit | |
in other regions to provide S3 buckets for asset sync | |
CDK creates all of these roles for us, and it almost works. The issue is in this cross-account (and possibly cross-region) scenario, the base role for the pipeline must exist before any of the helper stacks get created. | |
In a basic configuration of CDK Pipelines the pipeline Role is created with the Pipeline, after the helper stacks. The result is that if you need helper stacks — and you do if you are deploying cross-region or using CodeCommit from another account — then you have to create the pipeline Role first and attach it to the CDK Pipeline as it is created. This took some trial and error to work out.
Part of the complication is that we cannot pass parameters or resources between CDK (CloudFormation) stacks in different accounts or regions. The workaround is to use fixed names for resources so we know the ARNs that will be created and can reference them in other stacks:
Role created in the Pipeline Account | |
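The fixed-name workaround boils down to plain string construction: once a resource name and account are fixed, its ARN is predictable in every stack that needs it. A minimal sketch, where the role name is a hypothetical example rather than the name the demo uses:

```typescript
// Because we cannot pass values between stacks in different accounts or
// regions, we agree on fixed resource names up front and derive ARNs from
// them wherever they're needed. All names here are hypothetical.
const TOOLS_ACCOUNT = '444444444444';
const PIPELINE_ROLE_NAME = 'cdk-pipeline-base-role';

function iamRoleArn(accountId: string, roleName: string): string {
  // IAM role ARNs have no region component.
  return `arn:aws:iam::${accountId}:role/${roleName}`;
}

console.log(iamRoleArn(TOOLS_ACCOUNT, PIPELINE_ROLE_NAME));
// arn:aws:iam::444444444444:role/cdk-pipeline-base-role
```

Any stack in any account can rebuild the same ARN from the shared constants, which is exactly what the preparation stacks rely on.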
Creating the Pipelines | |
Fortunately, the rest of the heavy lifting is taken care of by CDK Pipelines. If you are not already familiar there’s a good basic introduction in the AWS docs here. | |
I have shared a working example on GitHub here. You can use it to deploy a basic API to your own accounts and regions and review how CDK sets up these cross-account and cross-region pipelines. The Readme in GitHub explains how to deploy, including the preparation stacks followed by the pipelines themselves. | |
Hope that helps you all, please leave a comment if you have any questions or suggestions. | |
Reminder — part 2 covers using GitHub as source and is quite a bit simpler! | |
-- | |
CDK Cross-Account Pipelines Part 2 | |
Mark Ilott · Published in AWS in Plain English · 4 min read · Mar 23, 2022
Using CDK v2 with a GitHub Source | |
Overview | |
This is the second (or third, depending on how we count) article on deploying cross-account pipelines with CDK Pipelines. The first part focused on using CodeCommit in a Dev Account as the source for the pipelines and the many tricks required to get it working. The good news is that using GitHub is significantly simpler.
I have copied some of the text from the first article to save some time, so if you've already read that one you can skip ahead to the GitHub as Pipeline Source section. There is a full demo on GitHub you can refer to, or if you're brave, just jump straight in!
The Scenario | |
I’m using AWS CDK to develop and deploy infrastructure and apps into pre-prod and production environments. Specifically, there is: | |
A Dev account (Account number: 111111111111), where development code is deployed. Developers have (almost) full access to the account and can deploy apps. | |
A UAT (Staging) account (222222222222) — a production-like environment used for business acceptance testing. The infra team manages the environment, developers have limited access for troubleshooting and testing. | |
A Prod account (333333333333), for production deployment, obviously. Developers can view logs, and not much else. | |
A Tools (Shared Services) account (444444444444), where CodePipeline will be used to build and deploy into Dev, UAT and Prod. Developers have limited access for troubleshooting and testing, and can view but not manage pipelines. | |
Note this is something you can set up yourself for learning and testing. Accounts in AWS don't cost anything, so there are no issues in setting them all up. If it's something new for you, I'd also recommend setting them up in an Organisation, as that's how it will be done in the real world.
In my case, I’m running three different Pipelines, one for each environment, and that’s what the templates set up. It’s also possible to deploy each environment as separate stages from a single pipeline if that’s better for your workflow — and it will be easy to modify the templates and process below if needed. | |
I’m using TypeScript for my CDK development, but the preparation parts of this article are relevant for Python, Java and any other language CDK supports.
CDK Account Preparation
In the original article I outlined how to prepare the accounts and deploy pipelines using CloudFormation, in case you don’t have access to deploy directly via CDK (which was the case for me when I first worked this out). In the interest of brevity I’m skipping that here. The TL;DR version is to use CDK to export CloudFormation templates.
What is important is that you set up the correct trust relationships when bootstrapping the accounts.
Assuming your Tools account (where you will deploy the pipelines) is 444444444444, then you need to run the following:
Tools account itself:
cdk bootstrap 444444444444/ap-southeast-1 --no-bootstrap-customer-key --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess'
And all the other accounts and regions (account 111111111111 as an example):
cdk bootstrap 111111111111/ap-southeast-1 --no-bootstrap-customer-key --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess' --trust 444444444444 --trust-for-lookup 444444444444
Obviously, replace the account numbers and region with your own.
Caution: once you have done this, anyone who has permission to deploy CDK in your Tools account can use the same credentials to deploy to any of the trusted accounts. This is handy in dev and personal accounts, but maybe not what you want in production.
GitHub as Pipeline Source
Using GitHub as a source and triggering a pipeline on code update is remarkably simple (especially compared to CodeCommit).
There are two methods available: using a GitHub Personal Access Token and saving it in Secrets Manager, or using the AWS Connector for GitHub. The Connector is simpler.
For completeness, here’s both:
Connecting to a GitHub Source
I do recommend using the AWS Connector method if you are able to log into both your Pipeline Account and GitHub at the same time. Follow the instructions here.
Creating the Pipelines
Fortunately, that’s about all you need to know. The rest of the heavy lifting is taken care of by CDK Pipelines. If you are not already familiar, there’s a good basic introduction in the AWS docs here.
The Pipeline itself is quite simple:
Pipeline Stack
Check out the working example on GitHub here for the rest of the detail, including the tricks required to deploy to multiple Accounts and Regions with a single CDK App. You can use it to deploy a basic API to your own accounts and regions and review how CDK sets up these cross-account and cross-region pipelines. The Readme in GitHub explains how to deploy.
Act as an expert software developer.
Your goal is to help the user with anything related to CDK in JavaScript (not TypeScript).
The user will tell you what kind of help they need and you will help them by walking through it step by step until the solution/answer is found (or the issue is resolved).
Ask questions to the user so you can learn about the situation and the context so you have more information at your disposal. Ask the user as MANY questions as needed for you to get the FULL picture of the situation. Understanding the context and the situation is KEY to success.
Be diligent in reviewing the task(s)/step(s) given, understanding the current situation, identifying the goal, planning the step(s), and self-reflecting before proposing any changes.
Explicitly say out loud all your thoughts, planning and reflection.
Share this with whoever you are working with.
Always do this FIRST.
Once you are done with all your thinking, planning and reflecting: review, analyze and criticize your own thoughts, plan and reflection, and see if you need to adjust all that one more time before doing your job as a software developer (proposing solutions, answering, replying, getting work done, etc.).
The idea is that preparedness is another important KEY to success, but you have to be verbose about it.