name,ring,quadrant,isNew,description
Path-to-production mapping,Adopt,Techniques,TRUE,"

Although path-to-production mapping has been a near-universal practice at Thoughtworks since Continuous Delivery was codified, we often come across organizations unfamiliar with it. The activity is most often done in a workshop with a cross-functional group of people — everyone involved in designing, developing, releasing and operating the software — around a shared whiteboard (or virtual equivalent). First, the steps in the process are listed in order, from the developer workstation all the way to production. Then a facilitated session is used to capture further information and pain points. The most common technique we see is based on value-stream mapping, although plenty of process-map variants are equally valuable. The activity is often eye-opening for participants: they identify delays, risks and inconsistencies, and they continue to use the visual representation for the continuous improvement of the build and deploy process. We consider this technique so foundational that we were surprised to discover we hadn't blipped it before.

" Component visual regression testing,Trial,Techniques,FALSE,"

Visual regression testing is a useful and powerful tool to have in your toolbox, but it carries a significant cost when done for entire pages. With the rise of component-based frameworks such as React and Vue, we've also seen the rise of component visual regression testing. This technique strikes a good balance between value and cost to ensure that no undesired visuals have been added to the application. In our experience, component visual regression testing produces fewer false positives and promotes a good architectural style. Combined with tools such as Vite and webpack's Hot Module Replacement (HMR), it can feel like a paradigm shift for applying test-driven development to front-end development.

" Dragonfly,Assess,Platforms,TRUE,"

Dragonfly is a new in-memory data store with Redis- and Memcached-compatible APIs. It leverages the new Linux-specific io_uring API for I/O and implements novel algorithms and data structures on top of a multithreaded, shared-nothing architecture. Because of these clever implementation choices, Dragonfly achieves impressive performance results. Although Redis continues to be our default choice for in-memory data store solutions, we do think Dragonfly is an interesting choice to assess.
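
Because the API is protocol-compatible, an existing Redis client can usually be pointed at a Dragonfly instance without code changes. A minimal sketch using ioredis (host, port and keys are illustrative):

import Redis from 'ioredis';

// Dragonfly speaks the Redis protocol, so the usual client works as-is
const cache = new Redis({ host: 'localhost', port: 6379 });

async function demo(): Promise<void> {
  await cache.set('greeting', 'hello', 'EX', 60); // same commands and options as Redis
  console.log(await cache.get('greeting'));
}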

" OrioleDB,Hold,Platforms,FALSE,"

OrioleDB is a new storage engine for PostgreSQL. Our teams use PostgreSQL a lot, but its storage engine was originally designed for hard drives. Although there are several options to tune for modern hardware, it can be difficult and cumbersome to achieve optimal results. OrioleDB addresses these challenges by implementing a cloud-native storage engine with explicit support for solid-state drives (SSDs) and nonvolatile random-access memory (NVRAM). To try the new engine, first install the enhancement patches to the current table access methods and then install OrioleDB as a PostgreSQL extension. We believe OrioleDB has great potential to address several long-pending issues in PostgreSQL, and we encourage you to carefully assess it.

" AWS Backup Vault Lock,Trial,Tools,TRUE,"

When implementing robust, secure and reliable disaster recovery, it's necessary to ensure that backups can't be deleted or altered before their expiry, whether maliciously or accidentally. Previously, with AWS Backup, these policies and guarantees had to be implemented by hand. Recently, AWS added the Vault Lock feature to ensure backups are immutable and tamper-proof. AWS Backup Vault Lock enforces retention and deletion policies and prevents even those with administrator privileges from altering or deleting backup files. This has proved to be a valuable addition and fills a previously unaddressed gap.
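
As a rough sketch of what enabling the lock can look like with the AWS SDK for JavaScript v3 (vault name, region and retention values are examples; the command name follows the SDK's standard naming for the PutBackupVaultLockConfiguration API):

import { BackupClient, PutBackupVaultLockConfigurationCommand } from '@aws-sdk/client-backup';

const client = new BackupClient({ region: 'eu-west-1' });

await client.send(
  new PutBackupVaultLockConfigurationCommand({
    BackupVaultName: 'prod-backups',
    MinRetentionDays: 30, // recovery points can't be deleted before this
    MaxRetentionDays: 365, // nor kept longer than this
    ChangeableForDays: 3, // once this grace period ends, the lock itself becomes immutable
  }),
);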

" Databricks Overwatch,Assess,Tools,FALSE,"

Databricks Overwatch is a Databricks Labs project that enables teams to analyze various operational metrics of Databricks workloads around cost, governance and performance, with support for running what-if experiments. It's essentially a set of data pipelines that populate tables in Databricks, which can then be analyzed using tools like notebooks. Overwatch is very much a power tool, and it's still in its early stages: setting it up takes some effort — in our case it required Databricks solution architects to help configure it and populate a price reference table for cost calculations — but we expect adoption to get easier over time. The level of analysis it makes possible is deeper than what cloud providers' cost analysis tools allow. For example, we were able to analyze the cost of job failures — recognizing that failing fast saves money compared to jobs that only fail near the final step — and break down costs by various groupings (workspace, cluster, job, notebook, team). We also appreciated the improved operational visibility: we could easily audit access controls around cluster configurations and analyze operational metrics such as the longest-running notebook or the largest read/write volume. Overwatch can analyze historical data, but its real-time mode allows for alerting, which helps you add appropriate controls to your Databricks workloads.

" io-ts,Adopt,Languages & Frameworks,TRUE,"

Our teams developing in TypeScript are finding io-ts invaluable, especially when interacting with APIs that ultimately result in the creation of objects with specific types. When working with TypeScript, getting data into the bounds of the type system (i.e., from the aforementioned APIs) can lead to run-time errors that can be hard to find and debug. io-ts bridges the gap between compile-time type checking and run-time consumption of external data by providing encode and decode functions. Given the experiences of our teams and the elegance of its approach, we think io-ts is worth adopting.
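
As a brief illustration, a codec defined with io-ts serves as both the compile-time type and the run-time validator; the rawResponseBody below is a stand-in for whatever your HTTP client returns:

import * as t from 'io-ts';
import { isRight } from 'fp-ts/Either';

declare const rawResponseBody: string; // stand-in for an HTTP response body

// The codec doubles as the TypeScript type
const User = t.type({
  id: t.number,
  name: t.string,
});
type User = t.TypeOf<typeof User>;

// Validate untyped external data at run time instead of trusting a cast
const result = User.decode(JSON.parse(rawResponseBody));
if (isRight(result)) {
  const user: User = result.right; // safely typed from here on
  console.log(user.name);
} else {
  // handle the decoding errors rather than failing deep inside the application
}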

" Carbon,Hold,Languages & Frameworks,FALSE,"

We're seeing some interest in the Carbon programming language. That doesn't come as a surprise: it has Google's backing and is presented as a natural successor to C++. In our opinion, C++ can't be replaced fast enough: over the past decades, software engineers have shown that writing safe and error-free C++ code is extremely difficult and time-consuming. While Carbon is an interesting concept with its focus on migration from C++, without a working compiler it's clearly a long way from being usable, and there are other modern programming languages that are good choices if you want to migrate away from C++. It's too early to tell whether Carbon will become the natural successor to C++, but, from today's perspective, we recommend that teams look at Rust and Go rather than postpone a migration because they're waiting for Carbon to arrive.

" Team cognitive load,Adopt,Techniques,TRUE,"

Team interaction is a key concept when redesigning an organization for business agility and speed. These interactions will be reflected in the software being built (see Conway's Law) and indicate how effectively teams can autonomously deliver value to their customers. Our advice is to be intentional about how teams are designed and how they interact. Because we believe that organizational design and team interactions evolve over time, we think it's particularly important to measure and keep track of the team cognitive load, which indicates how easy or difficult teams find building, testing and maintaining their services. We've been using a template to assess team cognitive load that is based on ideas by the authors of the Team Topologies book.

" Threat modeling,Adopt,Techniques,FALSE,"

We continue to recommend that teams carry out threat modeling — a set of techniques to help you identify and classify potential threats during the development process — but we want to emphasize that this is not a one-off activity only done at the start of projects; teams need to avoid the security sandwich. This is because throughout the lifetime of any software, new threats will emerge and existing ones will continue to evolve thanks to external events and ongoing changes to requirements and architecture. This means that threat modeling needs to be repeated periodically — the frequency of repetition will depend on the circumstances and will need to consider factors such as the cost of running the exercise and the potential risk to the business. When used in conjunction with other techniques, such as establishing cross-functional security requirements to address common risks in the project's technologies and using automated security scanners, threat modeling can be a powerful asset.

" Backstage,Adopt,Techniques,FALSE,"

In an increasingly digital world, improving developer effectiveness in large organizations is often a core concern of senior leaders. We've seen enough value with developer portals in general and Backstage in particular that we're happy to recommend it in Adopt. Backstage is an open-source developer portal platform created by Spotify that improves discovery of software assets across the organization. Its TechDocs feature uses Markdown files that live alongside the code for each service, which nicely balances the need for centralized discovery with the need for distributed ownership of assets. Backstage supports software templates to accelerate new development and a plugin architecture that allows for extensibility and adaptability into an organization's infrastructure ecosystem. Backstage Service Catalog uses YAML files to track ownership and metadata for all the software in an organization's ecosystem; it even lets you track third-party SaaS software, where tracking ownership is usually all that's required.

" Delta Lake,Adopt,Techniques,FALSE,"

Delta Lake is an open-source storage layer, implemented by Databricks, that attempts to bring ACID transactions to big data processing. In our Databricks-enabled data lake or data mesh projects, our teams prefer using Delta Lake storage over the direct use of file storage types such as AWS S3 or ADLS. Until recently, Delta Lake was a closed, proprietary product from Databricks, but it's now open source and accessible to non-Databricks platforms. However, our recommendation of Delta Lake as a default choice currently extends only to Databricks projects that use Parquet file formats. Delta Lake facilitates concurrent data read/write use cases where file-level transactionality is required. We find Delta Lake's seamless integration with Apache Spark batch and micro-batch APIs very helpful, particularly features such as time travel (accessing data at a particular point in time or reverting to a commit) as well as schema evolution support on write.

" Delta Lake,Adopt,Techniques,FALSE,"

Delta Lake is an open-source storage layer, implemented by Databricks, that attempts to bring ACID transactions to big data processing. In our Databricks-enabled data lake or data mesh projects, our teams prefer using Delta Lake storage over the direct use of file storage types such as AWS S3 or ADLS. Until recently, Delta Lake has been a closed proprietary product from Databricks, but it's now open source and accessible to non-Databricks platforms. However, our recommendation of Delta Lake as a default choice currently extends only to Databricks projects that use Parquet file formats. Delta Lake facilitates concurrent data read/write use cases where file-level transactionality is required. We find Delta Lake's seamless integration with Apache Spark batch and micro-batch APIs very helpful, particularly features such as time travel (accessing data at a particular point in time or commit reversion) as well as schema evolution support on write.

" Great Expectations,Adopt,Techniques,FALSE,"

Great Expectations has become a sensible default for our teams in the data quality space, which is why we recommend adopting it — not only for the lack of better alternatives but also because our teams have reported great results in several client projects. Great Expectations is a framework that allows you to craft built-in controls that flag anomalies or quality issues in data pipelines. Just as unit tests run in a build pipeline, Great Expectations makes assertions during the execution of a data pipeline. We like its simplicity and ease of use — the rules stored in JSON can be modified by our data domain experts without necessarily needing data engineering skills.

" Kotest,Adopt,Techniques,FALSE,"

Kotest (previously KotlinTest) is a stand-alone testing tool for the Kotlin ecosystem that is widely used among our teams across various Kotlin implementations — native, JVM or JavaScript. Its key advantages are that it offers a variety of testing styles for structuring test suites and that it comes with a comprehensive set of matchers, which allow for expressive tests in an elegant internal DSL. In addition to its support for property-based testing, our teams like the solid IntelliJ plugin and the supportive community. Many of our developers consider it their first choice and recommend that those still using JUnit with Kotlin consider switching over to Kotest.

" NestJS,Adopt,Techniques,FALSE,"

In the past, we've cautioned about Node overload, and we're still cautious about the reasons to choose it. However, in scenarios where Node.js is required to build back-end applications, our teams report that NestJS is a suitable option for creating testable, scalable, loosely coupled and easily maintainable enterprise applications. NestJS is a TypeScript-first framework that makes the development of Node.js applications safer and less error-prone. NestJS is opinionated, encouraging SOLID principles and an Angular-inspired architecture out of the box.
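
A minimal sketch of the style NestJS encourages; the OrdersService and OrdersController names are illustrative, not part of the framework:

import { Controller, Get, Injectable, Module } from '@nestjs/common';

// Business logic lives in an injectable service, which keeps controllers thin and testable
@Injectable()
class OrdersService {
  findAll() {
    return [{ id: 1, status: 'shipped' }];
  }
}

@Controller('orders')
class OrdersController {
  constructor(private readonly ordersService: OrdersService) {}

  @Get()
  findAll() {
    return this.ordersService.findAll();
  }
}

// The module wires everything together and can be exercised with Nest's testing utilities
@Module({ controllers: [OrdersController], providers: [OrdersService] })
export class OrdersModule {}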

" React Query,Adopt,Techniques,FALSE,"

React Query is often described as the missing data-fetching library for React. Fetching, caching, synchronizing and updating server state is a common requirement in many React applications, and although the requirements are well understood, getting the implementation right is notoriously difficult. React Query provides a straightforward solution using hooks. It works hand-in-hand with existing async data-fetching libraries like axios, Fetch and GraphQL since they are built on promises. As an application developer, you simply pass a function that resolves your data and leave everything else to the framework. We like that it works out of the box but still offers a lot of configuration when needed. The developer tools, unfortunately not yet available for React Native, also help developers new to the framework understand how it works. For React Native, you can use a third-party developer tools plugin utilizing Flipper. In our experience, version 3 of React Query brought the stability needed to be used in production with our clients.
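
To give a feel for the hooks-based approach (the /api/todos endpoint and Todo type are made up for the example):

import { useQuery } from 'react-query';

type Todo = { id: number; title: string };

// Any promise-returning fetcher works: fetch, axios or a GraphQL client
const fetchTodos = (): Promise<Todo[]> =>
  fetch('/api/todos').then((response) => response.json());

export function useTodos() {
  // React Query takes care of caching, deduplication, retries and background refetching
  return useQuery('todos', fetchTodos);
}

// In a component: const { data, isLoading, error } = useTodos();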

" Swift Package Manager,Adopt,Techniques,FALSE,"

When introduced in 2014, Swift didn't come with a package manager. Later, Swift Package Manager was created as an official Apple open-source project, and this solution has continued to develop and mature. Our teams rely increasingly on SwiftPM because most packages can be included through it and because the processes for both creators and consumers of packages have been streamlined. In the previous Radar, we recommended trialing it, but we now believe it makes sense to select it as the default when starting new projects. For existing projects using tools like CocoaPods or Carthage, it might be worth a quick experiment to gauge the effort of migrating and to check whether all dependencies are available.

" Carbon efficiency as an architectural characteristic,Assess,Techniques,TRUE,"

Sustainability is a topic that demands the attention of enterprises. In the software development space its importance has increased, and we're now seeing different ways to approach the topic. Looking at the carbon footprint of building software, for example, we recommend assessing carbon efficiency as an architectural characteristic. An architecture that takes carbon efficiency into consideration is one where design and infrastructure choices have been made to minimize energy consumption and therefore carbon emissions. The measurement tooling and advice in this space are maturing, making it feasible for teams to consider carbon efficiency alongside other factors such as performance, scalability, financial cost and security. Like almost everything in software architecture, this should be considered a trade-off; our advice is to think about it as one additional characteristic in a whole set of relevant quality attributes that are driven and prioritized by organizational goals, not left to a small cadre of experts to ponder in a siloed manner.

" CUPID,Assess,Techniques,TRUE,"

How do you approach writing good code? How do you judge if you've written good code? As software developers, we're always looking for catchy rules, principles and patterns that we can use to share a language and values with each other when it comes to writing simple, easy-to-change code.

Daniel Terhorst-North has recently made a new attempt at creating such a checklist for good code. He argues that instead of sticking to a set of rules like SOLID, using a set of properties to aim for is more generally applicable. He came up with what he calls the CUPID properties to describe what we should strive for to achieve ""joyful"" code: Code should be composable, follow the Unix philosophy and be predictable, idiomatic and domain based.

" GitHub push protection,Assess,Techniques,TRUE,"

The accidental publication of secrets seems to be a perennial issue, with tools such as Talisman popping up to help with the problem. Before now, GitHub Enterprise Cloud users with an Advanced Security license could enable secret scanning on their accounts, and any secrets (API keys, access tokens, credentials, etc.) that were accidentally committed and pushed would trigger an alert. GitHub push protection takes this one step further, and brings it one step earlier in the development workflow, by blocking changes from being pushed at all if secrets are detected. This needs to be configured for the organization and applies, of course, only to license holders, but additional protection against publishing secrets is to be welcomed.

" Local-first application,Assess,Techniques,TRUE,"

In a centralized application, the data on the server is the single source of truth — any modification to the data must go through the server. Local data is subordinate to the server version. This seems like a natural and inevitable choice to enable collaboration among multiple users of the software. Local-first application, or local-first software, is a set of principles that enables both collaboration and local data ownership. It prioritizes the use of local storage and local networks over servers in remote data centers or the cloud. Techniques like conflict-free replicated data types (CRDTs) and peer-to-peer (P2P) networks have the potential to be a foundational technology for realizing local-first software.
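
To make the CRDT idea concrete, here is a minimal, purely illustrative grow-only counter; real local-first applications would typically reach for a library such as Automerge or Yjs rather than hand-rolling this:

// Each replica only increments its own slot; merging takes the per-replica maximum,
// so concurrent updates on different devices converge without a central server.
type GCounter = Record<string, number>;

const increment = (state: GCounter, replicaId: string): GCounter => ({
  ...state,
  [replicaId]: (state[replicaId] ?? 0) + 1,
});

const merge = (a: GCounter, b: GCounter): GCounter => {
  const merged: GCounter = { ...a };
  for (const [replica, count] of Object.entries(b)) {
    merged[replica] = Math.max(merged[replica] ?? 0, count);
  }
  return merged;
};

const total = (state: GCounter): number =>
  Object.values(state).reduce((sum, n) => sum + n, 0);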

" Metrics store,Assess,Techniques,TRUE,"

Metrics store, sometimes referred to as headless business intelligence (BI), is a layer that decouples metrics definitions from their usage in reports and visualizations. Traditionally, metrics are defined inside the context of BI tools, but this approach leads to duplication and inconsistencies as different teams use them in different contexts. By decoupling the definition in the metrics store, we get clear and consistent reuse across BI reports, visualizations and even embedded analytics. This technique is not new; for example, Airbnb introduced Minerva a year ago. However, we're now seeing considerable traction in the data and analytics ecosystem with more tools supporting metrics stores out of the box.

" Server-driven UI,Assess,Techniques,TRUE,"

Server-driven UI continues to be a hot topic of discussion in mobile circles because it offers the potential for developers to take advantage of faster change cycles without falling foul of an app store's policies around revalidation of the mobile app itself. Server-driven UI separates the rendering into a generic container in the mobile app while the structure and data for each view is provided by the server. This means that changes that once required a round trip to an app store can now be accomplished via simple changes to the responses the server sends. While some very large mobile app teams have had great success with this technique, it also requires a substantial investment in building and maintaining a complex proprietary framework. Such an investment requires a compelling business case. Until the case is made, it might be best to proceed with caution; indeed, we've experienced some horrendous, overly configurable messes that didn't actually deliver on the promised benefits. But with the backing of behemoths such as Airbnb and Lyft, we may very well see some useful frameworks emerge that help tame the complexity. Watch this space.
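
In its simplest form, the idea looks something like the sketch below; the component names and the shape of the server payload are entirely illustrative and leave out the versioning and validation concerns a real framework has to solve:

import { createElement, ReactNode } from 'react';

// The server describes the screen as data rather than the client hard-coding the layout
type ServerView = {
  component: 'stack' | 'heading' | 'text';
  props?: Record<string, unknown>;
  children?: ServerView[];
};

// The app ships a small registry of known building blocks...
const registry: Record<ServerView['component'], string> = {
  stack: 'div',
  heading: 'h1',
  text: 'p',
};

// ...and a generic container renders whatever structure the server returns
export function renderServerView(view: ServerView): ReactNode {
  return createElement(
    registry[view.component],
    view.props ?? {},
    ...(view.children ?? []).map(renderServerView),
  );
}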

" SLIs and SLOs as code,Assess,Techniques,TRUE,"

Since Google first popularized service-level indicators (SLIs) and service-level objectives (SLOs) as part of its site reliability engineering (SRE) practice, observability tools like Datadog, Honeycomb and Dynatrace have incorporated SLO monitoring into their toolchains. OpenSLO is an emerging standard that allows defining SLIs and SLOs as code, using a declarative, vendor-neutral specification language based on the YAML format used by Kubernetes. While the standard is still quite new, we're seeing some encouraging momentum, such as Sumo Logic's contribution of the slogen tool to generate monitoring and dashboards. We're excited by the promise of versioning SLI and SLO definitions in code and updating observability tooling as part of the CI/CD pipeline of the service being deployed.

" Synthetic data for testing models,Assess,Techniques,TRUE,"

During our discussions for this edition of the Radar, several tools and applications for synthetic data generation came up. As the tools mature, we've found that using synthetic data for testing models is a powerful and broadly useful technique. Although not intended as a substitute for real data in validating the discrimination power of machine-learning models, synthetic data can be used in a variety of situations. For example, it can be used to guard against catastrophic model failure in response to rarely occurring events or to test data pipelines without exposing personally identifiable information. Synthetic data is also useful for exploring edge cases that lack real data or for identifying model bias. Some helpful tools for generating data include Faker and Synth, which generate data that conforms to desired statistical properties, and tools like Synthetic Data Vault that can generate data that mimics the properties of an input data set.
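
As a toy illustration of the idea, the sketch below generates records with a chosen statistical shape and a deliberately inflated rare-event rate so that pipelines and models can be exercised against cases real data barely contains:

type Transaction = { amount: number; isChargeback: boolean };

// Box-Muller transform: turn two uniform samples into one normally distributed sample
const sampleNormal = (mean: number, stdDev: number): number => {
  const u1 = 1 - Math.random(); // avoid log(0)
  const u2 = Math.random();
  return mean + stdDev * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
};

// The 5% chargeback rate is deliberately exaggerated to surface rare-event behavior
const generateTransactions = (count: number, chargebackRate = 0.05): Transaction[] =>
  Array.from({ length: count }, () => ({
    amount: Math.max(0, sampleNormal(80, 25)),
    isChargeback: Math.random() < chargebackRate,
  }));

// generateTransactions(10000) yields a data set with roughly 500 synthetic chargebacks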

" TinyML,Assess,Techniques,TRUE,"

We continue to be excited by the TinyML technique and the ability to create machine learning (ML) models designed to run on low-powered and mobile devices. Until recently, executing an ML model was seen as computationally expensive and, in some cases, required special-purpose hardware. While creating the models still broadly falls into that category, they can now be built in a way that allows them to run on small, low-cost, low-power-consumption devices. If you've been considering using ML but thought it unrealistic because of compute or network constraints, then this technique is worth assessing.

" Verifiable credentials,Assess,Techniques,TRUE,"

When we first included it in the Radar two years ago, verifiable credentials (VC) was an intriguing standard with some promising potential applications, but it wasn't widely known or understood outside the community of enthusiasts. This was particularly true when it came to the credential-granting institutions, such as state governments, that would be responsible for implementing the standards. Two years and one pandemic later, the demand for cryptographically secure, privacy-respecting and machine-verifiable electronic credentials has grown and, as a result, governments are starting to wake up to VC's potential. We're now starting to see VC crop up in our work for public-sector clients. The W3C standard puts credential holders at the center, which mirrors our experience with physical credentials: users can put their verifiable credentials in their own digital wallets and show them to anyone at any time without the permission of the credentials' issuer. This decentralized approach also enables users to better manage and selectively disclose their own information, which greatly improves data privacy protection. For example, powered by zero-knowledge proof technology, you can construct a verifiable credential that proves you're an adult without revealing your birthdate. It's important to note that although many VC-based decentralized identity solutions rely on blockchain technology, blockchain is not a prerequisite for all VC implementations.
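
For a sense of the data model, a trimmed-down credential following the shape of the W3C specification looks roughly like this; the issuer, subject and credential type are placeholders, and the proof is elided:

// Sketch of a W3C verifiable credential; values are illustrative only
const ageCredential = {
  '@context': ['https://www.w3.org/2018/credentials/v1'],
  type: ['VerifiableCredential', 'AgeOver18Credential'],
  issuer: 'did:example:government-registry',
  issuanceDate: '2022-10-01T00:00:00Z',
  credentialSubject: {
    id: 'did:example:holder-123',
    ageOver18: true, // the claim itself; no birthdate is disclosed
  },
  proof: {
    type: 'Ed25519Signature2020',
    // signature created by the issuer; anyone can verify it without contacting the issuer
  },
};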

" Clasp,Assess,Techniques,TRUE,"

Unfortunately, a big part of the world still runs on spreadsheets and will continue to do so. They're the ultimate tool to let anyone build those small custom tools tailored to their exact needs. However, when you want to enhance them with a level of logic that requires ""real"" code, the low-code nature of spreadsheets can then become a constraint. If you're with a company that, like Thoughtworks, uses Google's G-Suite, Clasp enables you to apply at least some Continuous Delivery practices to Apps Script code. You can write the code outside of the Apps Script project, which creates options for testing, source control and build pipelines; it even lets you use TypeScript. Clasp has been around for a while, and you shouldn’t expect a programming environment with all of the usual comforts, but it can greatly improve the experience of using Apps Script.
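
As a small, illustrative example, a function like the one below can be written locally in TypeScript, kept in version control, run through a build pipeline and then pushed to the Apps Script project with clasp push (the sheet name and function are made up):

// Code.ts, maintained outside the Apps Script editor and deployed via clasp
function appendAuditRow(action: string): void {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Audit');
  if (sheet) {
    sheet.appendRow([new Date(), Session.getActiveUser().getEmail(), action]);
  }
}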

" Databricks Overwatch,Assess,Techniques,TRUE,"

Databricks Overwatch is a Databricks Labs project that enables teams to analyze various operational metrics of Databricks workloads around cost, governance and performance with support to run what-if experiments. It's essentially a set of data pipelines that populate tables in Databricks, which can then be analyzed using tools like notebooks. Overwatch is very much a power tool; however, it's still in its early stages and it may take some effort to set it up — our use of it required Databricks solution architects to help set it up and populate a price reference table for cost calculations — but we expect adoption to get easier over time. The level of analysis made possible by Overwatch is deeper than what is allowed by cloud providers' cost analysis tools. For example, we were able to analyze the cost of job failures — recognizing that failing fast saves money compared to jobs that only fail near the final step — and break down the cost by various groupings (workspace, cluster, job, notebook, team). We also appreciated the improved operational visibility, as we could easily audit access controls around cluster configurations and analyze operational metrics like finding the longest running notebook or largest read/write volume. Overwatch can analyze historical data, but its real-time mode allows for alerting which helps you to add appropriate controls to your Databricks workloads.

" dbtvault,Assess,Techniques,TRUE,"

Data Vault 2.0 is a data modeling methodology and design pattern intended to improve the flexibility of data warehouses compared to other popular modeling approaches. Data Vault 2.0 can be applied to any data store, such as Snowflake or Databricks. When implementing Data Vault warehouses, we've found the dbtvault package for dbt to be a helpful tool. dbtvault provides a set of Jinja templates that generate and execute the ETL scripts necessary to populate a Data Vault warehouse. Although dbtvault has some rough edges — it lacks support for enforcing implied uniqueness or performing incremental loads — overall, it fills a niche and requires minimal configuration to get started.

" git-together,Assess,Techniques,TRUE,"

We're always looking for ways to remove small frictions from pair programming, which is why we're excited by git-together, a tool written in Rust that simplifies git commit attribution during pairing. By aliasing git-together as git, the tool allows you to add extensions to git config that capture committer information, aliasing each committer by their initials. Changing pairs (or switching to soloing or mob programming) requires you to run git with, followed by the initials of the pair (for example: git with bb cc), allowing you to resume your regular git workflow afterward. Every time you commit, git-together will rotate through the pair as the official author that git stores, and it will automatically add any other authors to the bottom of the commit message. The configuration can be checked in with the repo, allowing git-together to work automatically after cloning a repo.

" Harness Cloud Cost Management,Assess,Techniques,TRUE,"

Harness Cloud Cost Management is a commercial tool that works across all three of the major cloud providers and their managed Kubernetes clusters to help visualize and manage cloud costs. The product calculates a cost efficiency score by looking at idle resources as well as resources not allocated to any workload and uses historical trends to help optimize resource allocation. The dashboards highlight cost spikes and allow a user to register unexpected anomalies, which are then fed into their reinforcement learning algorithm around anomaly detection. Cloud Cost Management can recommend adjustments to limits for memory and CPU usage, with options to optimize for either cost or performance. ""Perspectives"" allows you to group costs based on organizationally defined filters (which could correspond to business units, teams or products) and automate report distribution to bring visibility into cloud spend. We believe Cloud Cost Management offers a compelling feature set to help organizations mature their FinOps practices.

" Infracost,Assess,Techniques,TRUE,"

We continue to see organizations move to the cloud without properly understanding how they will track ongoing spend. We previously blipped run cost as architecture fitness function, and Infracost is a tool that aims to make these cloud cost trade-offs visible in Terraform pull requests. It's open-source software and available for macOS, Linux, Windows and Docker and supports pricing for AWS, GCP and Microsoft Azure out of the box. It also provides a public API that can be queried for current cost data. We remain excited by its potential, especially when it comes to gaining better cost visibility in the IDE.

" Karpenter,Assess,Techniques,TRUE,"

One of the fundamental capabilities of Kubernetes is its ability to automatically launch new pods when additional capacity is needed and shut them down when loads decrease. This horizontal autoscaling is a useful feature, but it can only work if the nodes needed to host the pods already exist. While Cluster Autoscaler can do some rudimentary cluster expansion triggered by pod failures, it has limited flexibility; Karpenter, however, is an open-source, operator-based Kubernetes autoscaler with more smarts built in: it analyzes the current workloads and the pod scheduling constraints to automatically select an appropriate instance type and then start or stop it as needed. Karpenter is an operator in the spirit of tools like Crossplane that can provision cloud resources outside the cluster. Karpenter is an attractive companion to the autoscaling services cloud vendors provide natively with their managed Kubernetes clusters. For example, AWS now supports Karpenter as a first-class alternative to Cluster Autoscaler in its EKS managed Kubernetes service.

" Mizu,Assess,Techniques,TRUE,"

Mizu is an API traffic viewer for Kubernetes. Unlike other tools, Mizu does not require instrumentation or code changes. It runs as a DaemonSet to inject a container at the node level in your Kubernetes cluster and performs tcpdump-like operations. We find it useful as a debugging tool, as it can observe all API communications across multiple protocols (REST, gRPC, Kafka, AMQP and Redis) in real time.

" Soda Core,Assess,Techniques,TRUE,"

Soda Core is an open-source data quality and observability tool. We talked about Great Expectations previously in the Radar, and Soda Core is an alternative with a key difference — you express the data validations in a DSL called SodaCL (previously called Soda SQL) rather than in Python functions. Once the validations are written, they can be executed as part of a data pipeline or scheduled to run programmatically. As we become increasingly data-driven, it's critical to maintain data quality, and we encourage you to assess Soda Core.

" Teller,Assess,Techniques,TRUE,"

Teller is an open-source universal secret manager for developers that ensures the correct environment variables are set when starting an application. However, it's not a vault itself — it's a CLI tool that connects to a variety of sources, ranging from cloud secrets providers to third-party solutions like HashiCorp Vault to local environment files. Teller has additional functionality to scan for vault-kept secrets in your code, to redact secrets from logs, to detect drift between secrets providers and to sync between them. Given the sensitivity of accessing secrets, we can't emphasize enough the need to secure the supply chain for open-source dependencies, but we appreciate how easy the CLI is to use in local development environments, CI/CD pipelines and deployment automation.

"