Hello everyone. Thank you for joining us
today for HashiTalks. Today we'll be
talking about managing the Terraform module
lifecycle. Some concepts that we will
go through include module deprecation,
using the Explorer for workspace
visibility, leveraging the change
requests feature, and getting notified
when change requests are created. I'm Glenn,
a solutions architect at HashiCorp. And I
often get questions from the field about
how to detect which workspaces are using
old module versions, what module
versions are being used, and how we
notify teams at scale about requirements
to upgrade these modules. These are some
questions that I'll be addressing during
this talk. This is what we will cover
today. I'll share some basic module
terminology to start with before diving
into the module life cycle from writing
to publishing to deprecating and
upgrading. Then we'll go into the visual
parts where I have a demo showing how
tag-based modules work and how
workspaces consume these modules. I'll
then show how we can deprecate and
revoke modules, which have slightly
different behaviors. Then I'll show how
Terraform teams are set up to receive
notifications on workspace change
requests. Finally, I'll walk through how
we can use the Terraform Explorer to
detect information about module usage,
create change requests, and receive
notifications on those change requests.
Let's begin with module terminology by
going through a quick module refresher.
A module is a container for multiple
resources that are used together. A
common use case of a module is when you
repeatedly provision collections of
resources with similar configuration.
For example, in this screenshot, we see
a network module. This network module is
an abstraction of how VPCs and subnets
integrate with each other. For this
specific module, it contains a VPC with
one subnet. For someone else using the
module, they might have a different VPC
CIDR. Instead of duplicating the
resource blocks, they can simply use the
module abstraction and pass in their own
value in the base_cidr_block
variable. We can also chain modules
together as shown here, where the
network module's output is referenced in
the database module.
Notice that each module also has a
version constraint associated with it.
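Expressed in configuration, that might look roughly like the sketch below, assuming a hypothetical private registry source and variable and output names (base_cidr_block, subnet_id):

module "network" {
  # Hypothetical private registry source; the version constraint pins which
  # releases of the module this configuration will accept.
  source  = "app.terraform.io/my-org/network/aws"
  version = "~> 1.0"

  base_cidr_block = "10.0.0.0/16"
}

module "database" {
  source  = "app.terraform.io/my-org/database/aws"
  version = "~> 1.0"

  # Chaining modules: the network module's output is passed into the database module.
  subnet_id = module.network.subnet_id
}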
For module versioning, we recommend
following semantic versioning.
A guide on semantic versioning is shown
here where it is split into major,
minor, and patch versions. Patch
versions are typically backward
compatible bug fixes. Perhaps you're
fixing a typo within the Terraform
configuration. Minor versions are
usually used when you're adding new
functionality that is backward
compatible, meaning that module
consumers do not need to modify any of
their Terraform configurations and new
plans will typically show no deltas.
However, they can choose to add new
variables to utilize the new features
implemented. Finally, major versions are
used for incompatible API changes, which
typically introduce new behavior. This
will likely involve module consumers
introducing required new variables or
renaming existing variables.
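Module consumers typically encode these expectations in their version constraints. A minimal sketch, again with a hypothetical module source:

module "network" {
  source = "app.terraform.io/my-org/network/aws"   # hypothetical source

  # "~> 1.1" accepts any 1.x release at or above 1.1.0 (new minor and patch
  # versions, which should be backward compatible) but rejects 2.0.0 and later,
  # since a new major version may introduce breaking changes.
  version = "~> 1.1"

  base_cidr_block = "10.0.0.0/16"
}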
An example of semantic versioning can be
seen in a VPC module within the
Terraform registry. We see a mix of
major versions across the 6.x and 5.x series. We
also see minor versions like 6.2.0 and
6.3.0.
Finally, we see patch versions where we
have 6.4.0 and 6.4.1.
This module has an accompanying GitHub
repository where we can see the release
notes of each version. Here the patch
version involves a bug fix. The minor
version shows a backward compatible fix
that does not require module consumers
to make any changes. And then the major
version shows a breaking change where
module consumers should bump their
provider versions accordingly to use the
6.x versions of the module.
Beyond semantic versioning, there are
other terms we use in relation to
operating modules. These are also
features in certain tiers of HCP
Terraform and Terraform Enterprise.
Refer to the pricing page linked in this
slide at the bottom to see which of the
features are supported depending on
which platform you are running on. The
first is deprecation and revocation. For
older versions, we might want to
deprecate the module version to let
consumers of the module know they should
be upgrading their modules to the latest
version. Deprecating does not stop
existing or new users from consuming the
deprecated version. It only adds
warnings to new runs using the
deprecated module version. For other
modules, we might want to be stricter in
this enforcement. Revoking a module
version takes it a step further by
preventing new users from consuming that
module version. This means that new
Terraform workspaces with configurations
that reference a revoked module version
will face an error if they attempt to
run. The second is the Explorer. Some
views that the Explorer can show are the top
module versions used, the modules that
each workspace is consuming, and their
versions. We can even do filters to find
out which workspaces are not using a
particular module version, something
that we'll do in the demo later.
Finally, we have change requests from
the explorer view. After filtering out
workspaces based on certain criteria,
for example, workspaces that are not
using the latest module version, we can
create change requests on these
workspaces to notify teams that manage
those workspaces that they need to take
action to upgrade their modules. We'll
see how each of these features are used
in the demo.
Now that we have defined what a module
is and the value of semantic versioning,
let's look at a module life cycle.
Terraform supports two module publishing
workflows: branch-based and tag-based
workflows. Both workflows are similar
with some slight differences. This
diagram shows the flow for branch-based
workflows, which, as of this recording, is
the flow that supports tests in a
private module registry. For branch-based
workflows, it starts with creating a
repository that will contain the module
code. Module producers will then write
Terraform configurations for the module,
write tests, and create examples for how
to consume the module. They will then
validate and format those configurations
which can also be achieved through
pre-commit hooks. When a module is
ready, they will make a pull request.
When a pull request is made in a
branch-based workflow, Terraform tests
are executed in the private module
registry. The PR is then validated for
Terraform best practices before it is
approved and merged to main. Merging to
main runs Terraform tests again in the
private registry. Once all tests pass,
we can then publish a new
version based on semantic versioning.
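To illustrate the kind of test that runs at these stages, here is a minimal sketch of a Terraform test file; the file name, variable name, and resource address (aws_vpc.main) are assumptions for this example:

# tests/network.tftest.hcl
run "creates_vpc_with_expected_cidr" {
  command = plan

  variables {
    base_cidr_block = "10.0.0.0/16"
  }

  assert {
    condition     = aws_vpc.main.cidr_block == var.base_cidr_block
    error_message = "VPC CIDR block does not match the input variable"
  }
}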
So what happens after we publish a new
module version? At this stage, we
ideally want module consumers to use the
new version. We can choose to deprecate
or revoke older module versions with a
message shown to module consumers that
there is a new module version available.
In addition, we can also send
notifications to the specific teams
managing workspaces that are using old
module versions. We first want to
discover the workspaces that are using
an older version of the module. Once
these workspaces are identified, we can
create a change request for these
workspaces. Notifications can be configured
to fire when change requests are created. Notification
destinations include email, Slack,
Microsoft Teams, and webhooks. Once
teams receive the notification, they can
then take action to upgrade the modules.
With the theory covered, let's start our
demo. For the first part of this demo,
we'll be looking at how to use the
tag-based publishing workflow for
modules and how we can create Terraform
workspaces that reference those modules
by specific versions.
This architecture diagram shows what we'll be
covering in the demo. We see that there
is a Terraform module created within the
Terraform private module registry. This
module is integrated with a GitHub
repository. This GitHub repository
contains three release versions with
specific semantic version tags. Each
release maps to a version of the
Terraform module.
For demo purposes, the module creates an
AWS network with a VPC and subnet.
Hence, OIDC integration with AWS is
configured.
The IAM role is created and its ARN is
used in a Terraform variable set. This
Terraform variable set is associated
with a Terraform project.
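A rough sketch of how that variable set could be wired up with the tfe provider is shown below; resource names are assumptions, and the environment variable keys are the ones HCP Terraform uses for AWS dynamic provider credentials:

resource "tfe_variable_set" "aws_oidc" {
  name         = "aws-oidc-credentials"   # hypothetical name
  organization = var.organization
}

resource "tfe_variable" "enable_oidc" {
  key             = "TFC_AWS_PROVIDER_AUTH"
  value           = "true"
  category        = "env"
  variable_set_id = tfe_variable_set.aws_oidc.id
}

resource "tfe_variable" "run_role_arn" {
  key             = "TFC_AWS_RUN_ROLE_ARN"
  value           = aws_iam_role.tfc_oidc.arn   # IAM role assumed to be defined elsewhere
  category        = "env"
  variable_set_id = tfe_variable_set.aws_oidc.id
}

resource "tfe_project_variable_set" "demo_project" {
  project_id      = tfe_project.demo.id   # project resource assumed
  variable_set_id = tfe_variable_set.aws_oidc.id
}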
Workspaces created within this project
have access to these variables and can
deploy resources in AWS. We create three
workspaces each using a different
version of the module. These workspaces
are VCS integrated with their own GitHub
repository with Terraform code using the
specific module version.
We then configure three Terraform teams
with read access to the respective
workspaces.
Each team has its own Terraform user
with an email address. This allows us to
validate that emails are sent when a
change request is created.
As part of this demo, the Terraform code
in this GitHub repo is used to bootstrap
parts of the architecture, leaving the
remainder for us to complete in the demo
so that we understand how the pattern
works from the UI. If you are interested
in full end-to-end automation, drop a
note in the comments.
The bootstrapped architecture is shown
here in this diagram. We create only one
release and integrate that with the
Terraform module. The Bootstrap code
also comes with the AWS OIDC
integration, Terraform project,
Terraform teams
and GitHub repositories containing the
code for each workspace. We'll be
creating workspaces as part of the demo.
To get started, this is the GitHub repo
containing code for the module. The code
shown here is for a simple VPC and
subnet. The GitHub repo has one release
with tag v1.0.0,
as you can see in the orange box over
here.
A Terraform module published
using the tag-based workflow has a
version that matches the release tag in
the GitHub repository. Again, we see
v1.0.0
over here in the orange box.
As part of the bootstrap code, a
Terraform project named demo project was
also created with a variable set that
has the required variables for AWS OIDC
integration, allowing us to deploy
resources in AWS using dynamic temporary
credentials. We will be deploying
workspaces within this project. The
Bootstrap code also creates GitHub
repositories for each workspace with
Terraform configurations using different
versions of the module. This screenshot
shows the code for the workspace that
uses module v1.0.0.
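The consumer configuration in that repo might look roughly like this, assuming a hypothetical organization name in the private registry source:

module "network" {
  source  = "app.terraform.io/my-org/network/aws"
  version = "1.0.0"   # pin the exact module version for this workspace

  cidr_block = "10.0.0.0/16"
}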
Let's now create a new workspace using
the version control workflow.
The Bootstrap code comes with a
github.com integration configured. You
may also configure GitHub app
integration. I'll choose github.com
since the credentials I used to create
this has access to the GitHub
repositories that were bootstrapped.
Choose the GitHub repo for v1.0.0.
Review the settings and create.
We see the workspace is now created and
we can start a new plan.
Once the plan is created, we can review
the plan and apply the workspace. This
creates the VPC and subnet based on
module v1.0.0.
Now that we've created version 1.0.0 and
a workspace that consumes it, let's
create module version 1.1.0. In this
version, we introduce a new optional
variable named enable_dns_hostnames. As
you see over here, this gives module
consumers the option to enable or
disable it by changing the boolean
value. It defaults to true, which is the
current behavior that existing module
consumers experience. Since this is a
new feature that is backward compatible,
we will tag it as a minor release.
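A sketch of what the new variable and its use inside the module might look like, assuming the VPC resource is named aws_vpc.main:

variable "enable_dns_hostnames" {
  type        = bool
  description = "Whether to enable DNS hostnames on the VPC"
  default     = true   # matches the existing behavior, so the change is backward compatible
}

resource "aws_vpc" "main" {
  cidr_block           = var.cidr_block
  enable_dns_hostnames = var.enable_dns_hostnames
}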
Tag the release as v1.1.0
and describe it before publishing the
release.
The new release is now published and we
can see the release notes over here.
The Terraform module automatically
detects the new release and we see the
version as 1.1.0.
Review the bootstrapped GitHub repo for the
workspace that will use module v1.1.0.
We see the module version referenced over here.
We'll create a new workspace referencing
this repo.
Review the settings and then create.
We see that the workspace is now created
and we can start a new plan.
Once the plan is created, we can review
the plan and apply the workspace. This
creates the VPC and subnet using module
v1.1.0.
Now that we've created module version
1.1.0 and a workspace that consumes it,
let's create module version 2.0.0. In
this version, we refactor the variable
cidr_block to vpc_cidr_block.
Changing this variable requires module
consumers to also change the variable
they pass to the module.
Since this is not a backward compatible
change, we will tag it as a major
release.
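On the consumer side, the upgrade to v2.0.0 would then look roughly like this sketch:

module "network" {
  source  = "app.terraform.io/my-org/network/aws"
  version = "2.0.0"

  # was: cidr_block = "10.0.0.0/16" when consuming the 1.x versions
  vpc_cidr_block = "10.0.0.0/16"
}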
Tag the release as v2.0.0.
Describe it
and publish it.
The new release is now published.
The Terraform module automatically
detects the new release and we can see
that the version is 2.0.0.
Now we have three module versions.
Review the bootstrapped GitHub repo for the
workspace that will use module v2.0.0.
Notice that we are now passing in the
variable as vpc_cidr_block.
Now let's create a new workspace
referencing this repo.
Review the settings and create.
We see the workspace is now created and
we can start a new plan.
Once the plan is created, we can review
the plan and apply the workspace. This
creates the VPC and subnet based on
module v2.0.0.
We now have three workspaces using
different versions of the module.
In the next part, we explore the effects
of deprecating and revoking modules.
Back in the Terraform private module
registry, recall that our module has
three versions. Select version 1.1.0.
Now that we have changed the selected
version, click on manage module for
organization and choose deprecate module
version 1.1.0.
Provide a reason for deprecating the
module version. We can also add links
to additional information.
For this example, I've added a link to
the GitHub repo for the v2.0.0
release.
The Terraform module now shows that
version 1.1.0 is deprecated.
Navigate back to the workspace that is
using v1.1.0.
Starting a new run now shows a warning
that the workspace is using the
deprecated module.
It also shows the custom message and
link that we added when deprecating the
module. If there were changes in the
Terraform configuration, an apply can
still proceed even if the workspace is
using the deprecated module.
Now let's test creating a new workspace
using the deprecated module. Here a new
workspace is created using similar steps
to what we did previously. Choose start
new plan.
The plan works and shows the resources
to be added.
There's still a warning that the
workspace is using the deprecated
module. At this point, reviewers can
either cancel the run
or they can continue using the
deprecated module.
There might be a valid reason for it.
For example, maybe a QA team is testing
whether upgrading from a
deprecated version to the latest version
results in unexpected resource deletion.
Now, let's see the module version
revocation behavior. To revoke a module,
we must first deprecate the module. Back
in the Terraform module registry, change
the version to 1.0.0
and click manage module for
organization.
Choose deprecate module version 1.0.0.
Enter a reason and a link.
The module now shows that version 1.0.0
is deprecated.
While still viewing version 1.0.0,
click manage module for organization.
Choose revoke module version 1.0.0.
Enter a reason and a link; these will
show up in the run output for module
consumers who are using the revoked
version. Then choose revoke.
The Terraform module registry now shows
that version 1.0.0 is revoked.
Navigate back to the workspace that is
using v1.0.0.
Starting a new run shows a warning that
the workspace is using the revoked
module.
It also shows the custom message and
link that we added when revoking the
module. If there were changes in the
Terraform configuration, an apply can
still proceed even if the workspace is
using the revoked module.
Now let's test creating a new workspace
using the revoked module. Here, a
workspace is created using similar steps
to what we did previously. Choose start
new plan.
This time when a new workspace uses the
revoked module, it results in a plan
error.
This indicates that new workspaces
cannot use the revoked module. Module
consumers must make adjustments to fix
the error. We can see how revoking a
module version prevents new users from
using it.
The next part of the demo involves
setting up Terraform teams and
notifications in preparation for using
change request notifications for teams
that own their respective workspaces.
The Bootstrap code creates three
Terraform teams, one for each of the
three workspaces.
Each team comes with a default email
notification configuration.
The default email configuration sends
notifications to all team members for
all workspace events.
You can also choose to create custom
notifications, which support other
destinations like webhooks, Slack, and
Microsoft Teams. You can also specify
specific workspace events to receive
notifications for. As of this recording,
the only available option is change
requests.
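For reference, a custom notification configuration defined with the tfe provider might look roughly like the sketch below; the resource name and the exact trigger value for change requests are assumptions, so verify them against the provider documentation:

resource "tfe_notification_configuration" "change_requests" {
  name             = "change-request-alerts"   # hypothetical name
  destination_type = "slack"
  url              = var.slack_webhook_url
  workspace_id     = tfe_workspace.workspace_v1_0_0.id

  # Assumed trigger value for change request creation; confirm in the docs.
  triggers = ["change_request:created"]
}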
Navigating back to the workspace shows
that it does not have any team access
granted. This is for workspace-v1-0-0.
Let's click add team and permissions.
We choose the relevant team, which matches
the name of the workspace and was created
as part of the bootstrap code, and grant it
custom read-only permissions.
This associates the team to the
workspace allowing the team to receive
notifications on this workspace's events.
Verify that the workspace now has team
access granted to the team named
workspace-v1-0-0
with custom privileges.
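An equivalent team access grant expressed with the tfe provider might look roughly like this, with resource names assumed:

resource "tfe_team_access" "workspace_v1_0_0" {
  team_id      = tfe_team.workspace_v1_0_0.id
  workspace_id = tfe_workspace.workspace_v1_0_0.id

  # Custom read-only style permissions; adjust to match what was granted in the UI.
  permissions {
    runs              = "read"
    variables         = "read"
    state_versions    = "read"
    sentinel_mocks    = "none"
    workspace_locking = false
    run_tasks         = false
  }
}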
Repeat the same steps to grant the team
named workspace-v1-1-0 access to the
workspace that uses the v1.1.0 module.
Finally, we do the same for
workspace-v2-0-0.
At this stage, we have three Terraform
teams. Each of them has custom read
access to only a single workspace.
As part of the bootstrap code, we create
Terraform team members for each team
with a unique email. HCP Terraform
invitations will be received for each of
these emails. This allows us to test how
change requests on a specific workspace
result in notifications sent to members
of the team associated with that
workspace. As you can see here, the
email addresses are all different.
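A sketch of how one such member could be created and added to a team with the tfe provider, with the email address and resource names assumed:

resource "tfe_organization_membership" "member_v1_0_0" {
  organization = var.organization
  email        = "team-v1-0-0@example.com"   # hypothetical email address
}

resource "tfe_team_organization_member" "member_v1_0_0" {
  team_id                    = tfe_team.workspace_v1_0_0.id
  organization_membership_id = tfe_organization_membership.member_v1_0_0.id
}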
For each of the invitations, we will
create an HCP account.
Once the account is created, we can
accept the organization invitation.
The invited team member now sees the
organization listed among their
organizations.
View the organization's Terraform users
and their associated Terraform teams.
There are three Terraform users, each
a member of a different team.
At this stage, we have granted each team
access to one workspace only. Each team
also has a single Terraform user with a
unique email address to test receiving
email notifications using the default
team notification configuration. What we
have done for this part is summarized in
the diagram shown here.
In this final part, we'll use the
explorer to find workspaces that are not
using the latest module version and then
create a change request for these
workspaces. When a change request is
created, notifications will be sent
based on the configurations done in the
previous part.
The explorer gives us the option to
select by type or view preconfigured
queries based on specific use cases.
Let's see what the modules type shows.
Clicking modules shows details like the
source of each module version,
the number of workspaces using a
particular version of the module, and
which workspaces are using that version.
Clicking the workspaces view shows
various details about each workspace.
Information includes which project the
workspace belongs to,
its run status,
the module count,
which modules are being used, and
information about the provider. There's
other information available, and I
encourage you to check it out.
We can add filters to narrow down the
search. In this case, it is narrowed
down to which workspaces are not using
the latest version of the module and
which have a run status of planned and
finished.
This results in the original two
workspaces that are using the deprecated
and revoked module versions.
Selecting the checkbox next to the
workspaces reveals the create change
request button. Click this button.
This leads to a change request form
where we can fill in the subject and a
message describing the requested
changes. We can then choose create
change request.
Navigating back to the workspace that is
using the revoked module shows that
there is one change request.
Clicking that shows the change request
subject and message that we entered.
Clicking into this shows the
formatted change request.
At the same time, an email is also
received by the team member of the team
that has read access to that workspace.
This shows that we have a proactive way
of informing the respective teams that
they must upgrade their module versions.
We repeat the same for the workspace
using the deprecated v1.1.0 module
version. We see that there's also one
change request.
We can view the change request and again
clicking into it shows the formatted
change request.
The team member, who has a different
email address, now also receives an email based
on the change request created for
workspace v1-1-0.
This shows that notifications can be
targeted to specific teams that have
been granted access to the workspaces
that need attention.
That's all for the demo. Thank you and
let me know in the comments if you have
any questions.
Modern infrastructure demands regular updates to maintain security, efficiency, and compliance. However, managing the lifecycle of Terraform modules presents unique challenges for platform teams: How do you signal to consumers that a module version should no longer be used? How can you identify workspaces that need updates? How do you effectively communicate these needs across your organization?

In this session, we'll explore comprehensive module lifecycle management strategies within HCP Terraform's private registry. You'll learn practical approaches to:
- Implement effective module versioning strategies that enable seamless updates
- Leverage module deprecation to signal that versions are maintained but not recommended
- Use module revocation to block new usage of problematic versions while maintaining existing workloads
- Utilize the Explorer to identify workspaces using outdated module versions
- Create and manage change requests to track necessary updates
- Configure team notifications to ensure timely communication about required changes

Through practical demonstrations, you'll see how these tools create a complete feedback loop between module publishers and consumers, enabling effective governance without disrupting developer workflows. Whether you're managing a small team or a large enterprise, these practices will help you maintain module health while strengthening communication between platform teams and module consumers.

Speaker: Glenn Chia
X: https://twitter.com/glenncjw