A guide to getting started with C4 diagrams and Structurizr

Discover the benefits of using Structurizr and its Domain Specific Language (DSL) to create interactive and versioned C4 diagrams as code, enabling precise C4 modelling and visualization for your software architecture.

What are C4 diagrams?

The C4 model diagram strategy was created to establish a standard set of simple notation principles for visually documenting software architectures.

It consists of a standard set of hierarchical abstractions (software systems, containers, components and code) that are used to create a set of hierarchical diagrams (system context, containers, components and code). The code-level diagram is rarely created, other than being generated from the code itself.

Each level of diagram digs deeper into the details of the system you are trying to communicate and can be thought of as a different scale on a map.

The C4 model for visualising software architecture from c4model.com

For detailed explanations see The C4 model for visualising software architecture

Structurizr is a toolset that helps you code, manage and display C4 diagrams. This guide will help you get set up and started quickly.
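
As a taster, the hierarchy of abstractions maps directly onto the Structurizr DSL that we will use later. A minimal sketch (the element names here are purely illustrative):

workspace {
    model {
        webShop = softwareSystem "Web Shop" {
            web = container "Web Application" {
                basket = component "Basket Controller"
            }
            db = container "Database"
        }
    }
}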

You can use a variety of tools to create diagrams that conform to this standard. The key ones I have used are Visio (with a C4 stencil), Mermaid and Structurizr.

For a full list of tools see C4 model tools

Using Structurizr and Domain Specific Language (DSL)

Getting started with Visio is straightforward, just using the stencil. However, what we are after is diagrams as code that we can version along with our design and code changes. To do this we can use either Mermaid or the Structurizr Domain Specific Language (DSL). The DSL allows for more precise C4 modelling, and Structurizr allows for creating interactive visualizations.

This is a quick getting-started guide for locally rendering views of your models on Windows. You can read all the details and options of Structurizr and the DSL syntax at Structurizr.

Prerequisites

You will need Docker Desktop installed on your local machine, as Structurizr Lite runs in a Docker container.

Setup Steps and getting started

1. Working Folder

You will need a folder dedicated to the Structurizr C4 diagram code and configuration files. Ideally this will be for a single domain and inside your local git folders. That way you can branch, version and commit your diagrams as versioned code.

For example: c:\gitrepos\myprojectrepo\docs\architecture\diagrams\my-domain\structurizr-data\

Note: Currently you can only have one workspace per folder, so you may need several workspaces, such as one per domain.

2. Docker setup

After you have installed Docker Desktop, the next step is to ‘pull’ (download) the Structurizr Docker image from a Docker registry, typically Docker Hub. By pulling this image, you’re downloading the necessary files to run Structurizr Lite in a Docker container on your local machine. To do this run the following command:

# Pull the Structurizr docker image
docker pull structurizr/lite

The easiest way to execute these commands is from the Docker Desktop built-in Terminal. Click on the ‘>_ Terminal’ button at the bottom.
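
If you want to confirm the image has downloaded, you can list it from the same terminal:

# List the local Structurizr Lite image
docker image ls structurizr/lite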

3. Create and run a container

Once the image is downloaded and ready in Docker, run the following command from the Docker terminal in this format:

docker run -it --rm -p <local port>:8080 -v "<host machine Structurizr folder>:/usr/local/structurizr" structurizr/lite

The -it option combines two flags:
-i (interactive) keeps STDIN open even if not attached.
-t (tty) allocates a pseudo-TTY, which allows you to interact with the container.

The --rm option automatically removes the container when it exits, ensuring no leftover containers. You may not want to use it if you are going to run this container repeatedly and have the necessary disk space.

In this case it would be something like…

docker run -it --rm -p 8080:8080 -v "c:\gitrepos\myprojectrepo\docs\architecture\diagrams\my-domain\structurizr-data\:/usr/local/structurizr" structurizr/lite

Once you have your command line tested, it’s a good idea to put it into a readme.md file in the structurizr-data folder, so that others can easily start a container pointing to the correct folders (unless everyone uses the same root paths they will need to adjust it for themselves, but it’s a good starting point).

If you have all the paths correct then you should see a successful start-up logged in the terminal.

4. Access Structurizr Web Dashboard

Once the container is running you can now simply open a browser and navigate to http://localhost:<local port> or in our example http://localhost:8080

A default Structurizr workspace will be automatically created for you, and a corresponding set of files and folders will be added to your folder.

It’s the workspace.dsl that you will be editing to define your architectural model and how it should be visualized.
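
For reference, the generated workspace.dsl is just a small starter model; it should look roughly like this (the exact contents may vary between Structurizr Lite versions):

workspace {

    model {
        user = person "User"
        softwareSystem = softwareSystem "Software System"

        user -> softwareSystem "Uses"
    }

    views {
        systemContext softwareSystem {
            include *
            autolayout lr
        }

        theme default
    }

}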

Quick-start on modelling using DSL

workspace.dsl

The workspace.dsl file is a text-based domain-specific language (DSL) file used to define a software architecture model based on the C4 model. This file allows you to describe the elements of your software system, their relationships, and how they should be visualized. Here are some key components:

  • Model Definition: Defines the people, software systems, containers, components, and their relationships.
{
    "model": {
        "people": [
            {
                "id": "1",
                "name": "User",
                "description": "A user of my software system."
            }
        ],
        "softwareSystems": [
            {
                "id": "2",
                "name": "Software System",
                "description": "My software system."
            }
        ],
        "relationships": [
            {
                "sourceId": "1",
                "destinationId": "2",
                "description": "Uses"
            }
        ]
    }
}
  • Views: Specifies how the elements should be visualized in different diagrams (e.g., system context, container, component diagrams).
{
    "views": {
        "systemContextViews": [
            {
                "softwareSystemId": "2",
                "description": "System Context diagram for Software System",
                "elements": [
                    { "id": "1" },
                    { "id": "2" }
                ],
                "automaticLayout": true
            }
        ],
        "styles": {
            "elements": [
                {
                    "tag": "Software System",
                    "background": "#1168bd",
                    "color": "#ffffff"
                },
                {
                    "tag": "Person",
                    "shape": "person",
                    "background": "#08427b",
                    "color": "#ffffff"
                }
            ]
        }
    }
}
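
For comparison, the same model, view and styles expressed in the DSL would look roughly like this (a sketch, not the exact file Structurizr Lite generates for you):

workspace {

    model {
        user = person "User" "A user of my software system."
        softwareSystem = softwareSystem "Software System" "My software system."

        user -> softwareSystem "Uses"
    }

    views {
        systemContext softwareSystem "SystemContext" {
            include *
            autolayout
        }

        styles {
            element "Software System" {
                background #1168bd
                color #ffffff
            }
            element "Person" {
                shape person
                background #08427b
                color #ffffff
            }
        }
    }

}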

workspace.json

Both files serve the same purpose but cater to different use cases. The DSL file is more human-readable and easier to write manually, while the JSON file is better suited for automated processing and integration with other tools.

The Structurizr Lite web app generates and maintains this file, and you should very rarely need to view or update it manually.

.structurizr folder

The .structurizr folder is a directory used by Structurizr to store various configuration and data files related to your workspace, such as images, other assets, logs and temporary files used during operations. This is managed by the app and should not be interfered with 😉

Tips for building your model

Add a title and description to your workspace

workspace "Model Title Here" "Add a description of the model here" {

    model {
    ...
    }
    views {
    ...
    }
    configuration {
    ...
    }
}

Set Identifiers as hierarchical

By default, all identifiers are treated as being globally scoped; however, by using the !identifiers keyword you can specify that element identifiers should be treated as hierarchical (relationship identifiers are unaffected by this setting).

workspace {

    !identifiers hierarchical

    model {
        softwareSystem1 = softwareSystem "Software System 1" {
            api = container "API"
        }

        softwareSystem2 = softwareSystem "Software System 2" {
            api = container "API"
        }
    }
}

So now each api can have the same local name, but be referenced as softwareSystem1.api and softwareSystem2.api respectively.
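
For example, a relationship between the two containers can then be declared unambiguously at the model level (the description and technology values here are illustrative):

# hierarchical references to the two containers named "API"
softwareSystem1.api -> softwareSystem2.api "Makes API calls to" "HTTPS"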

see Identifiers | Structurizr

Add your users/personas at the top

Syntax

person <name> [description] [tags] {
    // Define properties and relationships here
}

Example

model {
        customer = person "Online Shopping Customer" "A customer"
        picker = person "Picker" "A warehouse picker" "Warehouse"
        whmanager = person "Warehouse Manager" "A warehouse manager" "Warehouse"
        despatcher = person "Despatch Operator" "A warehouse despatcher" "Warehouse"
        ...

Then your high level systems

In Structurizr DSL, the softwareSystem element is used to define a software system within your architecture model. A software system represents a major part of your overall system, typically encompassing multiple containers and components.

Syntax

softwareSystem <name> [description] [tags] {
    // Define properties and relationships here
}

Example

webapp = softwareSystem "Store Web App" "The main web store" "Existing System"
email = softwareSystem "E-mail System" "The internal Microsoft Exchange e-mail system." "Existing System"
atm = softwareSystem "ATM" "Allows customers to withdraw cash." "Existing System"

Use Groups

The group element is used to define a named grouping of elements, which will be rendered as a boundary around those elements. This is useful for organizing and visually separating different parts of your architecture model.

Example

workspace {
    model {
        group "Company 1" {
            a = softwareSystem "System A" "Description of System A."
        }
        group "Company 2" {
            b = softwareSystem "System B" "Description of System B."
        }
        a -> b "Uses"
    }
    views {
        systemLandscape {
            include *
            autoLayout lr
        }
        styles {
            element "Group" {
                color #ff0000
            }
        }
    }
}

Use the Azure or other custom themes

The theme element is used to apply a set of predefined styles to your diagrams and is used in the views section. Themes help you maintain a consistent look and feel across your diagrams, especially when using common visual styles for elements and relationships.

Syntax

theme <url>

Example of the Azure theme

Here I have also added some shapes that get overridden by the Azure theme.

views {
        // theme default
        themes https://static.structurizr.com/themes/microsoft-azure-2021.01.26/theme.json
        // https://structurizr.com/help/theme?url=https://static.structurizr.com/themes/microsoft-azure-2021.01.26/theme.json
        styles {
            element "Database" {
                shape "cylinder"
            }
            element "Person" {
                shape "Person"
                background "#08427b"
                color "#ffffff"
            }
            element "Software System" {
                background "#1168bd"
                color "#ffffff"
            }
        }
        ...

For more themes see Themes | Structurizr

Allow manual layout in Structurizr Lite

The autolayout element is used to automatically arrange the elements in a view, making it easier to create well-organized and visually appealing diagrams without manually positioning each element.

Syntax

autolayout [direction] [rankSeparation] [nodeSeparation]
  • Direction: Specifies the direction of the layout. Possible values are:
    • lr (left to right)
    • rl (right to left)
    • tb (top to bottom)
    • bt (bottom to top)
  • Rank Separation: (Optional) Specifies the separation between ranks (levels) in the layout. Default is 300.
  • Node Separation: (Optional) Specifies the separation between nodes (elements) in the layout. Default is 300

When you add this to a view definition, you will not be able to manually reposition the shapes in the GUI. Just remove or comment out the line to allow manual layout.

views {        
       systemLandscape "SystemLandscape" {
            include *
            // autolayout lr
        }

        systemContext eCommerceSystem "SystemContext" {
            include *
            autolayout lr
        }
        ...

Here the SystemLandscape view is manually arranged and the eCommerceSystem context view is arranged automatically left-to-right.

Notes, Examples and References

Structuring YAML Pipelines and nested variables in Azure DevOps

When managing pipelines for large and complex repositories with multiple ‘Platforms’, each containing multiple apps and services, then the folder structure and variable strategy can be complicated. However, if this is done right, then the payoff for template reuse is dramatic.

Here, I outline my approach to the pipeline folder and YAML structure only. The variable structure allows a full set of naming conventions to default easily across all your projects’ apps and delegates standards to organisation and platform levels. This, ideally, leaves only app-specific configuration for your dev teams to manage and worry about.

This strategy rests on top of my general approach to structuring a mono-repository. For more details on that see Mono-repository folder structures.

Continue reading “Structuring YAML Pipelines and nested variables in Azure DevOps”

Mono-repository folder structures

Every developer has their own way of structuring their code base. There is no right or wrong way, but some strategies have at least had some logical thought 😉

This is a sample of how I generally structure my mono-repos when they need to scale to many organisational platforms, apps, and projects.

Continue reading “Mono-repository folder structures”

Rest API Paging Strategy

There is a lot written about paging strategies for REST APIs. This is my simple and quick take on the subject and what I generally implement when rolling my own APIs.

Any API that returns a collection should have some form of result limiting. This avoids overloading your web servers, database servers and networks, and avoids presenting a very easy distributed denial of service (DDoS) attack vector.

I don’t cover other optimisations like filtering and sorting, although these do have a major impact too.

There are two major API standards/frameworks that I would follow when implementing my own paging strategy:

  • OData
  • GraphQL

Adding to that, there are two primary pagination styles:

  • Offset
    • Most common. Set the number of records in a page and then the record offset to use.
    • If there are more results, the server also returns some metadata that contains information about the current page, such as the total number of pages and the link for the next page.
  • Keyset / Cursor based
    • Less common. Harder to program. Uses a specific cursor, which is a unique identifier for a specific item in the list, then returns this record and the next specified number of results.
    • If there are more results, the server also returns a cursor for the next page.
Standard / Framework | Primary Paging Method | Query Properties
OData                | Offset                | skip (the offset from the beginning of the results), top (the number of records to retrieve in each page)
GraphQL              | Keyset / Cursor based | first (the number of records to return), after (the cursor of the record to show after)

Table showing high-level paging methods
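
To make the difference concrete, here is roughly what a request for a single page looks like in each style (the endpoint, entity set and GraphQL schema are hypothetical):

# Offset style (OData): skip the first 20 records, return the next 10
GET https://api.example.com/odata/Books?$top=10&$skip=20&$count=true

# Keyset / cursor style (GraphQL): return the 10 records after the given cursor
query {
  books(first: 10, after: "Ym9vazoyMA==") {
    edges { node { id title } }
    pageInfo { endCursor hasNextPage }
  }
}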

My General Principles

  1. Use Offset paging unless you have large data sets and performance challenges.
  2. Use ‘skip’ and ‘top’ for the parameters, as this matches OData, which you may want to implement later anyway.
  3. Always have a default page size (‘top’) and use it if ‘top’ is not specified.
  4. Always return a total query set count (integer) by default. e.g., $count defaults to true, but can be turned off, if necessary, by passing false.
  5. If more records exist, then always return a ‘nextLink’ URL.
  6. Put the results array into a ‘value’ field to match that of OData.

Response Structure and Paging Metadata

  • nextLink: the URL to retrieve the next page of results (only returned if more records exist)
  • totalCount: the total number of records in the full query set

Example OData Return Results

{
    "@odata.context": "http://localhost:5000/odata/$metadata#Books",
    "@odata.count": 100,
    "@odata.nextLink": "http://localhost:5000/odata/Books?$count=true&$top=2&$skip=1",
    "value": [
        {
            "Id": 1,
            "ISBN": "978-0-321-87758-1",
            "Title": "Essential C#5.0",
            "Author": "Mark Michaelis"
        },
        {
            "Id": 2,
            "ISBN": "063-6-920-02371-5",
            "Title": "Enterprise Games",
            "Author": "Michael Hugos",
        }
    ]
}

Example of hand-rolled API Return Results

{
    "count": 100,
    "nextLink": "http://localhost:5000/Books?$count=true&$top=2&$skip=1",
    "value": [
        {
            "Id": 1,
            "ISBN": "978-0-321-87758-1",
            "Title": "Essential C#5.0",
            "Author": "Mark Michaelis"
        },
        {
            "Id": 2,
            "ISBN": "063-6-920-02371-5",
            "Title": "Enterprise Games",
            "Author": "Michael Hugos",
        }
    ]
}

Data Changes Complications

There are a few complexities that may be of concern if you require an exact, immutable record set. In most cases this is not critical, but it could be in a financial or audit type scenario, that is, where the complete record set needs to remain unaltered across all the paging activity.

An example would be where a field in the recordset must add up exactly to a total at a specific point in time.

As most datasets are dynamic, records can be added or removed at any point. That means you can’t guarantee that the same record won’t be returned on different pages, that the records at a given page/offset will remain the same, or that the total count won’t change.

There are several solutions to this problem; I will only suggest the easiest here, and it’s not perfect either.
The dataset needs a created timestamp field, as well as a soft-delete ‘isDeleted’ or ‘deletedAt’ type field. Then you can return all the records (including the soft-deleted ones) created before a specified time (i.e. the time of the first request in the sequence).

This would ensure consistency in the records returned, but not necessarily the values in those records.
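
Using OData-style conventions, a paged request pinned to a snapshot time might look like this (the endpoint, entity set and field name are illustrative):

# Page through the snapshot taken at 10:00 - soft-deleted records are intentionally included
GET https://api.example.com/odata/Books?$filter=createdAt le 2024-01-01T10:00:00Z&$top=10&$skip=20&$count=true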

References

Pagination | GraphQL

Pagination – OData | Microsoft Learn

PageSize, Top and MaxTop – OData | Microsoft Learn

Add NextPageLink and $count for collection property – OData | Microsoft Learn

How to publish a multi-page Azure DevOps Wiki to PDF (and pipeline it)

Although you can print a single page of your wiki to a PDF using the browser, it’s problematic when you have a more complex structured multi-page wiki and you need to distribute or archive it as a single file.

Fortunately, thanks to the great initiative by Max Melcher and his AzureDevOps.WikiPDFExport tool, combined with Richard Fennell’s WIKI PDF Export Tasks, we can not only produce pretty good multi-page PDFs of our wikis, but also create a pipeline to automate their production.

The documentation for both these tools is good, but I have included here some additional tips and more complete steps to quickly get your pipelines setup.

Prerequisites

To follow the steps outlined below you will need to:

  1. Download the latest azuredevops-export-wiki.exe from GitHub (or the source code and build it yourself)
    • I create a local folder like C:\MyApps\AzureDevOps-Export-Wiki\ and drop the EXE there. Then I can execute all my command lines and see the outputs there too.
  2. Add the WIKI PDF Export Tasks extension from the Visual Studio Marketplace to your Azure DevOps organisation: WIKI PDF Export Tasks – Visual Studio Marketplace

The Setup

Assume we have an Azure DevOps (Azdo) project called ‘MyAzdoProject‘. This has a default code repository with the same name and, once the wiki is created, a wiki repository called ‘MyAzdoProject.wiki‘.

You can clone this Wiki repo by selecting the ‘Clone wiki’ from the Wiki menu.

In the code repository, I have created a folder called /resources/wiki-pdf-styles/ to hold the following:

  • Header HTML template file
  • Footer HTML template file
  • CSS Style Sheet

In the Wiki, we may have documentation for several apps, and each may have several sections such as Architecture, UX design, Requirements notes, etc.

For this illustration I only want to output the Architecture pages and subpages for App1, i.e. everything below /App1/Architecture/** in the wiki.

The Resource Files

My resource files are as follows (the file names include the app’s short code, ‘app1’, so each app can have independent files):

header-app1.html

<div style='padding-left: 10px; margin: 0; -webkit-print-color-adjust: exact; border-bottom:1px solid grey; color: grey; width: 100%; text-align: left; font-size: 6px;'>
Nicholas Rogoff - My Cool App 1 - Architecture
</div>

footer-app1.html

<div style='padding-right: 10px; padding-top:2px; margin: 0; -webkit-print-color-adjust: exact; border-top:1px solid grey; color: grey; width: 100%; text-align: right; font-size: 6px;'>Copyright Nicholas Rogoff 2023 |
 Printed: <span class='date'></span> | Page: <span class='pageNumber'></span> of <span class='totalPages'></span>
</div>

styles.css

body {
  font-family: "Segoe UI Emoji", Tahoma, Geneva, Verdana, sans-serif;
  font-size: 10pt;
}

h1 {
  font-size: 25pt;
  color: #521e59;
}

h2 {
  font-size: 20pt;
  color: #3b868d;
}

h3 {
  font-size: 15pt;
  color: #f39000;
}

h4 {
  font-size: 12pt;
  color: #ec644a;
}

img {
  max-width: 100%;
  max-height: 800px;
}

/* Workaround to add a cover page */
.toc {
  page-break-after: always;
}

/* target a span with class title inside an h1 */
h1 span.title {
  page-break-before: avoid !important;
  page-break-after: always;
  padding-top: 25%;
  text-align: center;
  font-size: 48px;
}

/* make tables have a grey border */
table {
  border-collapse: collapse;
  border: 1px solid #ccc;
}

/* make table cells have a grey border */
td,
th {
  border: 1px solid #ccc;
  padding: 5px;
}

The Command Line

You can manually run azuredevops-export-wiki.exe (download the latest from Releases · MaxMelcher/AzureDevOps.WikiPDFExport (github.com)) locally on a clone of your wiki repository. This is useful not just to output the PDF, but also to quickly refine your customizations, such as parameters, templates and CSS.

I have used the following parameters:

  • -p / --path
    • Path to the wiki folder. If not provided, the current folder of the executable is used.
  • -o / --output
    • The path to the export file including the filename, e.g. c:\output.pdf
  • --footer-template-path, --header-template-path
    • Local paths to the header and footer HTML files
  • --css
    • Local path to the CSS file
  • --breakPage
    • Adds a page break for each wiki page/file
  • --pathToHeading
    • Adds a path to the heading of each page. This can be formatted in the CSS
  • --highlight-code
    • Highlight code blocks using highlight.js
  • --globaltoc
    • This sets the title for a global Table of Contents. As you will see, I have used this in combination with the CSS to add in a main header title.

…so my final command line looks like this:

.\azuredevops-export-wiki.exe 
  -p "C:\GitRepos\MyAzdoProject.wiki\MyAzdoProject\App1\Architecture" 
  -o "output.pdf"  
  --footer-template-path "C:\GitRepos\MyAzdoProject\MyAzdoProject\resources\wiki-pdf-styles\footer-app1.html"  
  --header-template-path "C:\GitRepos\MyAzdoProject\MyAzdoProject\resources\wiki-pdf-styles\header-app1.html" 
  --css "C:\GitRepos\MyAzdoProject\MyAzdoProject\resources\wiki-pdf-styles\styles.css" 
  --breakPage 
  --pathToHeading 
  --highlight-code 
  --globaltoc "<span class='title'>Nicholas Rogoff Cool App 1 Architecture Wiki</span>" -v

You can run and refine this command line locally and generate the output.

You can also do a lot more styling with the CSS than I have done and refine it to your requirements. Just add the --debug flag to the command line above and the intermediate HTML file is produced. You can then see all the classes that you can play with.

The Pipeline

I decided to create a YAML pipeline template, as I often have many apps and extensive wiki documentation. Printing the whole wiki to a single PDF is not feasible and hits limitations, so I have several pipelines to output distinct parts of the wiki structure.

The YAML Task Template (publish-wiki-to-pdf-cd-task-template.yml)

Everything in this template is parameterized to allow flexible usage. You can also set the defaults and simplify the consuming pipelines.

# Task template for generating the PDF from the Wiki

parameters:
  - name: LocalWikiCloneFolder
    displayName: The local path to clone the wiki repo to
    type: string
    default: '$(System.DefaultWorkingDirectory)\wikirepo'
  - name: WikiRootExportPath
    displayName: The path in the Wiki to export to PDF
    type: string
  - name: ProjectShortCode
    displayName: The short code for the project. Used to pick up the custom headers and footers files
    type: string
  - name: CustomFilesPath
    displayName: The local path to the custom files on the build agent in the main repo
    type: string
    default: '\resources\wiki-pdf-styles\**'
  - name: PdfOutputFileName
    displayName: The filename for the output pdf, including the extension
    type: string
    default: '$(ProjectShortCode)-Wiki.pdf'
  - name: WikiRepoUrl
    displayName: The URL of the Wiki repo
    type: string
    default: 'https://myorg@dev.azure.com/myorg/MyAzdoProject/_git/MyAzdoProject.wiki'
  - name: PdfTitleHeading
    displayName: The title heading for the PDF
    type: string
    default: 'Nicholas Rogoff - $(ProjectShortCode) - Wiki'
    
steps:
- task: CopyFiles@2
  displayName: Copy-Headers-Footers-Styles
  inputs:
    Contents: '${{ parameters.CustomFilesPath }}'
    TargetFolder: '$(System.DefaultWorkingDirectory)\styles\'
    OverWrite: true
  enabled: false
- task: WikiPdfExportTask@3
  displayName: Create-PDF
  inputs:
    cloneRepo: true
    repo: '${{ parameters.WikiRepoUrl }}'
    useAgentToken: true
    localpath: '${{ parameters.LocalWikiCloneFolder }}'
    rootExportPath: '${{ parameters.WikiRootExportPath }}'
    outputFile: '$(System.DefaultWorkingDirectory)\${{ parameters.PdfOutputFileName }}'
    ExtraParameters: '--footer-template-path "$(System.DefaultWorkingDirectory)\resources\wiki-pdf-styles\footer-${{ parameters.ProjectShortCode }}.html" --header-template-path "$(System.DefaultWorkingDirectory)\resources\wiki-pdf-styles\header-${{ parameters.ProjectShortCode }}.html" --css "$(System.DefaultWorkingDirectory)\resources\wiki-pdf-styles\styles.css" --breakPage --pathToHeading --highlight-code --globaltoc "<span class=''title''>${{ parameters.PdfTitleHeading }}</span>" -v'
- task: PublishBuildArtifacts@1
  displayName: Publish-Artifact
  inputs:
    PathtoPublish: '$(System.DefaultWorkingDirectory)\${{ parameters.PdfOutputFileName }}'
    ArtifactName: 'drop'
    publishLocation: 'Container'

The main pipeline

# Publishes the wiki to PDFs

trigger: none
pr: none

# schedules:
# - cron: "0 0 * * *"
#   displayName: Daily midnight wiki publish
#   branches:
#     include:
#     - main

#   always: true

pool:
  vmImage: windows-latest

# Setting the build number to the date as work-around to include in the Title as $(Build.BuildNumber)
name: $(Date:yyyy-MM-dd)

variables:
- name: projectShortCode
  value: 'app1'
- name: localWikiCloneFolder
  value: '$(System.DefaultWorkingDirectory)\wikirepo'
- name: wikiRootExportPath
  value: '$(localWikiCloneFolder)\MyAzdoProject\App1\Architecture'
- name: customFilesPath
  value: '\resources\wiki-pdf-styles\**'
- name: wikiRepoUrl
  value: 'https://myorg@dev.azure.com/myorg/MyAzdoProject/_git/MyAzdoProject.wiki'
- name: pdfOutputFilename
  value: '$(ProjectShortCode)-Architecture-Wiki.pdf'
- name: pdfTitleHeading
  value: 'Nicholas Rogoff - $(ProjectShortCode) - Architecture Wiki $(Build.BuildNumber)'

steps:
- template: './templates/publish-wiki-to-pdf-cd-task-template.yml'
  parameters:
    LocalWikiCloneFolder: $(localWikiCloneFolder)
    WikiRootExportPath: '$(wikiRootExportPath)'
    ProjectShortCode: '$(projectShortCode)'
    CustomFilesPath: '$(customFilesPath)'
    PdfOutputFileName: '$(pdfOutputFilename)'
    WikiRepoUrl: '$(wikiRepoUrl)'
    PdfTitleHeading: '$(pdfTitleHeading)'

I have left in some running options here. The default is completely manual, but I have added for reference, commented out, the format for a scheduled operation, as well as on every change (not recommended).

I have also used ‘name:’ (which is referenced as $(Build.BuildNumber)) to create a date in the format I wanted for the header page.

When this pipeline runs the PDF artifact can be downloaded. You can obviously add a new step to copy the file to any destination that suits your requirements.

Changing your PowerShell Prompt

Ever had a PowerShell session where you are far down the folder path, and the prompt is so long it gets hard to see your commands and responses?

If only you could change the prompt to be a lot shorter…well, you can easily \o/

The PowerShell prompt is determined by the built-in Prompt function. You can customize the prompt by creating your own Prompt function and saving it in your PowerShell profile.

This sounds complicated, but the Prompt function is not protected, so to change the prompt for the current session only, just execute the function code as shown below, or with your own custom version and voila!

To make your custom prompt more permanent you need to save it to your PowerShell profile. This means saving the function to the Microsoft.PowerShell_profile.ps1 file in the appropriate location. Depending on the scope there are several locations, but I’m staying simple and changing mine for just me on my machine! 😉

  • Locations for Current user, Current Host are:
    • Windows – $HOME\Documents\PowerShell\Microsoft.PowerShell_profile.ps1
    • Linux – ~/.config/powershell/Microsoft.Powershell_profile.ps1
    • macOS – ~/.config/powershell/Microsoft.Powershell_profile.ps1

For full information on profiles see about Profiles – PowerShell | Microsoft Learn.
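
A quick way to get your custom function into the profile is to use the built-in $PROFILE variable, for example:

# Create the profile file if it does not already exist
if (-not (Test-Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force
}

# Open it in Notepad and paste in your custom prompt function
notepad $PROFILE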

Very Short Prompt

This prompt just shows the Drive letter no matter where you are in the folders

function prompt {(get-location).drive.name+"\...>"}

This looks like this…

Screenshot of the terminal

My Preferred Shorter Prompt

This shows the Drive letter, ‘…’ for any intermediate folders and then just the last folder name, so you know your end destination.

function prompt {"PS " + (get-location).drive.name+":\...\"+ $( ( get-item $pwd ).Name ) +">"}

and looks like this…

Screenshot of the terminal

Reset back to the Default

Use this to set everything back to the original 🙂

function prompt {
    $(if (Test-Path variable:/PSDebugContext) { '[DBG]: ' }
      else { '' }) + 'PS ' + $(Get-Location) +
        $(if ($NestedPromptLevel -ge 1) { '>>' }) + '> '
}

For full details see about Prompts – PowerShell | Microsoft Learn