Earlier this week, I travelled to Groningen, NL to participate in DevNetNoord. Little did I know I would be the only English speaker, but it did give me a chance to practice the four sentences of Dutch that I’ve learned.
I did a talk that is close to my heart:
Nullable Reference Types: It’s Actually About Non-Nullable Reference Types
In this talk, I show how Nullable Reference Types work and why I think it’s important for many projects to switch over to them. This includes:
You can see my code and slides here if you missed it:
Also, I did a Coding Short that explores some of the same ideas:
Nullable Reference Types: Or, Why Do I Need to Use the ? So Much!
Once upon a time, I worked with Chris Sells and the software arm of DevelopMentor (trip down memory lane, huh). We built a developer tool, and it was my first experience working on a product where developers were our primary customers. It left me with a bad taste in my mouth.
It was a great experience working with titans of our industry. Without a doubt, one of my best professional experiences. But it was marred by learning how developers think about products that they could use.
I think this is true whether it is an open source project or a commercial product. Developers can get focused on “How hard could it be?”, which has led many of them to eschew “other people’s code” and want to build it themselves. I think this is a mistake. Let’s talk about it.
We used to say Buy vs. Build, but it isn’t always about money. Whether you buy a tool/framework or adopt an open source solution, the decision is similar: do you allot time to build the code you need, or do you adopt another technology that will take time to learn and get used to? I think there are two schools of thought:
If you’re making a conscious decision about this, then good for you. But, I’ve noticed that often companies will make decisions based on fear. Sure, there are risks to having dependencies.
But adopting solutions comes with what I like to call “old code”. When you’re not the only consumer of foundational code, you can have higher confidence that it is a stable product (for the most part). Do you really want to take ownership of fundamental parts of your architecture?
Let’s take the example of messaging. You could choose to save money on Azure ServiceBus by building your own message queuing solution. But what is the benefit? I think we tend to forget about the real costs of building systems. If you have a $100K/yr developer and that one developer could create something in-house in 4-6 months, that is up to $50K in front-loaded costs. Adopting a service or dependency would save you that, though there are costs in adopting too.
You can still make solid decisions about the range of the solutions you need. For example, instead of Azure ServiceBus, RabbitMQ, NServiceBus, or MassTransit might be better.
Another example of this is when I run into developers who insist on Vanilla JavaScript/DOM instead of using frameworks like Vue, Angular, Svelte and React. Sure, you don’t have to learn anything new, but building your own reactivity and manipulating the DOM directly can be really difficult. I’d rather spend my time on the domain problems than building a framework. That’s just me.
Building from scratch is hard…and can be costly in other ways. What do you think?
In the past couple of years, I’ve been looking at my career and my impending future. As many of you might know, I was contemplating moving from independent to employee. I’ve been independent since about 2007 (and this is my 40th year in software). It was a big question I had to ask myself: was it worth trading flexibility for security and healthcare? But that is just the background.
I hadn’t interviewed or performed a job search in many years. I thought I could just jump in like I did when I was younger. I couldn’t. Everything seemed to have changed.
A combination of mass layoffs in tech and the irrational thought that AI was going to replace many of us (see my Coding Short) meant the job market was far tighter than I’d ever seen it before. Where was I to fit in? Ultimately, I decided to make a big life change and start a new company in the Netherlands. But the process did leave me with some observations about the job crunch. Let’s talk about it.
We used to review resumes and use the interviews to work out whether people were a good match. Recruiters have always used acronym-based bias when matching people and companies, but ultimately that also missed lots of great candidates.
But even these interviews were rife with bad ideas. Coding on whiteboards, abstract thinking tests (e.g. “How many manhole covers are in the US?”), and gotcha questions were all bad ideas.
Ultimately, hiring someone was a risk but we tried to mitigate that risk by looking for people who fit the ‘culture’. That’s actually why I used to get jobs easily, I look the part. I look like someone out of central casting for the “Comic Book Guy”. I fit the impression of a good developer.
It wasn’t perfect, but I feel like it relied less on the perfect resume. How does it work now?
I think the industry has completely changed how we evaluate possible employees, using Robotic Process Automation (RPA) and Applicant Tracking Systems (ATS). Essentially, this is jargon for using machine learning to filter out resumes that do not fit into a narrow focus.
This has led to an industry of ATS-beating tools (e.g. JobScan) in an “arms race” where the ATS improves to counteract the ATS-beating software. Both sides of this battle seem to do little to help companies find the right people. But it means that most resumes submitted electronically (directly to a company’s website or through LinkedIn) are outright rejected almost immediately.
What frustrates me about this is that these technologies force potential employees to be good at creating resumes with far too many buzzwords instead of describing their actual experience. Great people are falling through the cracks.
It also encourages people to rewrite or modify their resume for each and every position. For many of us, that means having two base resumes: one that is scanned easily by ATS systems and a good-looking one (which is usually rejected by ATS systems). Then, requiring us to add and remove items to match some magical set of skills is a waste of everyone’s time.
Some companies are also trying to test developers in other ways:
This doesn’t seem better and is leaving wide swaths of great developers looking for jobs for months/years.
Some of this may be attributed to the Bootcamp-ification of the workforce. By adding a lot of entry-level developers while promising them that jobs are easy to get, we’ve accidentally excluded good developers and lied to newly trained developers about the state of our industry.
I do not have a magic bullet. But I do have some opinions:
Stop hiring people for skills: You’re hiring for the ability to learn and adapt. The tech industry is too volatile to think that today’s skills are going to be what you need in 1, 2, or 5 years.
Stop Testing Syntax: Interview people for how they think instead of how to solve a task. Remembering syntax is unimportant in today’s development environment.
Find People Who Are Adaptable: The worst thing that happens in an interview these days is when a developer refuses to admit they do not know something. This is a byproduct of the “10x developer”, the “superstar developer”, or even the “everyone is a senior software dev” mentality. The ability to find the answer is so much more important than knowing the answer. If our software processes are iterative, I expect developers to be good at the
"fail->learn from failure->try again"
workflow. If they can’t admit they are wrong, there is no room for trial and error.
Do we need resumes? Of course. But I think resumes should represent the person, not an application for the job. If we’re going to use resume evaluation software, making it smarter and lowering the bar of entry is important. I know that hiring managers want to whittle down 1,000 resumes to three people to interview. But this is incredibly short-sighted.
Will it get better? I have my doubts.
What do you think?
We’re three weeks into our new lives in The Netherlands. So much is happening; it’s been an anxiety-laden experience. We’ve started Dutch lessons (Dank u wel), planned for our furniture to arrive, and started the emigration process. So far, so good.
As I’ve been talking about this adventure a lot, I want to apologize if you want me to get back to technical content — it’s coming, I promise. We’ve been asked quite a bit how we’re able to emigrate to The Netherlands. Let me share what we know so far.
There are several ways to be allowed to stay in the Netherlands and I am not an expert at all. For us, we’re taking advantage of the Dutch American Friendship Treaty (DAFT). The treaty allows for permission to come to the Netherlands to do one of two things:
In both of these cases, your work must serve an essential interest of the Dutch economy or Dutch culture.
For us, we wanted to create a Dutch version of our company, Wilder Minds. We could have just created a DBA (Doing Business As), but I am so used to having a company as an umbrella for the work I do that it seemed like the obvious choice.
For the Dutch company, we needed to invest €4500 in the new company. (We actually ended up with €9000 since my wife and I are co-owners.) My wife could have simply registered as my spouse, but I didn’t want two-tiered access for us. That investment needs to essentially sit in the bank account for the length of our visa. I think we also needed to show some amount of cash reserves to confirm we can pay our own way for a while; I can’t quite remember.
There are several steps before you can complete the approval process:
That BSN is kind of a gateway to a lot of other steps toward normalization. Once we have it (soon, I hope), we’ll be able to open bank accounts and get Dutch phone numbers. The bank account is crucial as most payments are made through your bank card. Retailers will take credit cards, but many of the kinds of services you need (internet, phone, electric, etc.) require a Dutch bank account.
Finally, once we get approval (hopefully by April), we’ll be able to stay in the country for two years. After that, we’ll be able to ask for a renewal for an additional two years. After those four years, we can start the process for a permanent residence permit. This is the avenue we’re taking.
I’ve been asked a lot about getting Dutch Citizenship. I doubt either of us qualify, but I actually want to keep my US citizenship (voting, etc.)
From here, we’re waiting for our house to sell in the US (let me know if you’re looking for a place in Atlanta) ;) Once that is complete, we’ll be looking for an apartment/house to buy. But that’s a longer story I’ll share later.
Until next update!
It’s been a wild couple of months for the Wildermuth family! In that short space of time, we’ve relocated to The Hague in the Netherlands. It’s been a scant 36 hours since we arrived, and there is so much to do.
Now that we’re getting settled, I can get back to work teaching and creating content! Coming up (hopefully next week), I’ll be resuming my weekly Coding Shorts YouTube videos.
In addition, I’m happy to announce a new instance of my Virtual ASP.NET Architecture Course. In this course, I’m covering:
This course covers how to plan and build distributed applications with .NET, including how to use .NET Aspire in your own applications.
For more information:
I’m also still available for coaching, training and consulting. Feel free to reach out at https://wildermuth.com/en/#contact!
Back in 1993, I moved to Amsterdam with a guitar and $70. Not my brightest move. I spent much of the next two years playing music on the street (i.e. busking) in and out of Amsterdam. It was an amazing part of my life. I don't regret a minute of it.
Since the day I came back from Amsterdam to be ‘an adult’, I’ve held a torch for the Dutch and the Netherlands. Any conference anywhere near northern Europe has been my excuse to head back to the country. The pie-in-the-sky dream has been to move back and start a company.
Guess what? It’s happening! Recently, my wife and I have been discussing a change of life. As I get older, it’s been important to both of us that we stay active, enjoy more of our time, and live out our dreams. This means going back to a car-free, walking/biking lifestyle that the Netherlands makes possible and that is really difficult here in the States.
Starting early next year, we’re moving to The Hague (Den Haag) to start a new life there. While my wife and I will be out of the US, I’ll still be doing the same things I’ve always done including my training (e.g. Pluralsight et al.), creating YouTube videos, and my consulting work. Being in Europe will let me expand my company’s (Wilder Minds) reach.
I want to thank all of my readers, students, viewers, and clients for following me across my career journey. This change doesn’t mean anything different, just a different time zone!
If you’re in or around the Netherlands, please don’t hesitate to contact me with opportunities: https://wildermuth.com/contact
I’ve been remiss. I recently gave a talk at the Atlanta .NET Users’ Group and promised to post the source code. About time I got to this ;)
I gave a talk on how to use Aspire in .NET 8/9. We walked through how to add Aspire to an existing project and make use of this new technology. If you have/had questions, please don’t hesitate to comment below!
Here are the slides and code:
The talk wasn’t recorded, but I have similar content on my YouTube Channel.
Other questions, feel free to contact via Contact Me.
I just finished giving my two talks at TechBash in Pennsylvania. Great to visit the Poconos this time of year. I also attended and spoke at the Atlanta Developers Conference last Saturday. Great audiences and great questions.
I wanted to share some of the examples and slides from these talks:
@ AtlDevCon: Aspire to Connect - A talk where I showed the attendees how to add Aspire to an existing app with ASP.NET Core API, A Vue App, Redis server and RabbitMQ for a queue.
@ TechBash: Aspire to Connect - A talk where I showed the attendees how to add Aspire to an existing app with ASP.NET Core API, A Vue App, Redis server and RabbitMQ for a queue.
@ TechBash: Lock It Down:
Using Azure Entra for .NET APIs and SPAs - A talk where I demonstrated how to hook up Azure Entra ID to login using a JavaScript front end and how to validate the JWT on the back-end.
Other questions, feel free to contact via Contact Me or on Twitter.
Chrome recently started injecting a little pop-up into many websites to encourage you to sign in with your Google account. I hate it. After a lot of searching (and a heroic Twitter user), I got it to go away.
I’m mostly adding this here so I can find it next time, but I hope it helps others.
Here is the problem:
I don’t use my Google Account as my main identity, so I never want this. After delving into my Google account and Chrome settings, I was close to just signing out of Chrome entirely when I asked Twitter. I got an answer from Terry Beard:
To use this, just copy this to the address bar of your Chrome:
chrome://settings/content/federatedIdentityApi
Hope this helps and that it shows up in my search results next time I forget how to do this.
At one of my clients (he’ll be thrilled he made it in another blog post), I was showing him how to structure a complex Linq query. This came as a surprise to him and I thought it was worth a quick blog entry.
We’ve all been taught how Linq queries should look (using the Query syntax):
// Query Syntax
IEnumerable<int> numQuery1 =
from num in numbers
where num % 2 == 0
orderby num
select num;
This works fine, but like many of us, we’re used to the method syntax:
// Method Syntax
IEnumerable<int> numQuery1 = numbers
.Where(n => n % 2 == 0)
.OrderBy(n => n)
.ToList();
They both accomplish the same thing but I tend to prefer the method syntax. For me, the biggest difference is being able to compose the query. What I mean is this:
// Composing Linq
var qry = numbers.Where(n => n % 2 == 0);
if (excludeFours)
{
// Extend the Query
qry = qry.Where(n => n % 4 != 0);
}
// Add more linq operations
qry = qry.OrderBy(n => n);
var noFours = qry.ToList();
I think this is useful in a couple of ways. First, when you need to modify a query based on input, this is less clunky than two completely different queries. But more importantly, I think breaking up a complex query into individual steps can help the readability of the query. For example:
// Using Entity Framework
IQueryable<Order> query = ctx.Orders
.Include(o => o.Items);
if (dateOrder) query = query.OrderByDescending(o => o.OrderDate);
var result = await query.ToListAsync();
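As an aside (my own example, not from the original conversation): part of why this kind of composition is cheap is that LINQ queries are deferred. Building the query does no work; it only executes when you enumerate it:
// Deferred execution in a nutshell: building the query does nothing yet
var numbers = new List<int> { 1, 2, 3, 4 };
var evens = numbers.Where(n => n % 2 == 0); // nothing runs here
numbers.Add(6);                             // change the source after composing the query
// The query executes now and sees the new element: prints "2, 4, 6"
Console.WriteLine(string.Join(", ", evens));
The same is true of the Entity Framework example above: nothing hits the database until ToListAsync runs.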
While I think we’ve done a poor job of talking about how Linq really works, I’m hoping this helps a little bit.
In my job as a consultant, I often code review Vue applications. Structuring a Vue (or Nuxt) app can be a challenge as your project grows. It is common for me to see views with a lot of business logic and computed values. This isn’t necessarily a bad thing, but it can incur technical debt. Let’s talk about it!
For example, I have a simple address book app that I’m using:
Let’s start with the easy part, I’ve got a component:
<div class="mr-6">
<entry-list @on-loading="onLoading"
@on-error="onError"/>
</div>
In order to react to any state it needs, we’re using emits (e.g. events). So when the wait cursor is needed, we get an event emitted to show or hide the cursor. Same for errors. So we have to communicate through props and emits. But I’m getting ahead of myself; let’s look at the component’s properties that it binds to:
const router = useRouter();
const currentId = ref(0);
const entries = reactive(new Array<EntryLookupModel>());
const filter = ref("");
Then it binds the entries (et al.):
<div class="h-[calc(100vh-14rem)] overflow-y-scroll bg-yellow">
<ul class="menu text-lg">
<li v-for="e in entries"
:key="e.id">
<div @click="onSelected(e)"
:class="{
'text-white': currentId === e.id,
'font-bold': currentId === e.id
}">{{ e.displayName }}</div>
</li>
</ul>
</div>
Simple, huh? Just like most Vue projects you’ve seen, especially examples (like I write too). But to serve this data, we need some business logic:
function onSelected(item: EntryLookupModel) {
router.push(`/details/${item.id}`);
currentId.value = item.id;
}
onMounted(async () => {
await loadLookupList();
})
async function loadLookupList() {
if (entries.length === 0) {
try {
emits("onLoading", true);
const result = await http.get<Array<EntryLookupModel>>(
"/api/entries/lookup");
entries.splice(0, entries.length, ...result);
sortEntities();
} catch (e: any) {
emits("onError", e);
} finally {
emits("onLoading", false);
}
}
}
function sortEntities() {
entries.sort((a, b) => {
return a.displayName < b.displayName ? -1 :
(a.displayName > b.displayName ? 1 : 0)
});
}
Not too bad, but it makes this simple view complex. And if we wanted to test this component, we’d have to do an integration test and fire up something like Playwright to exercise the actual rendered component. This works, but your tests are much more fragile and take a long time to run.
Enter Pinia (or any shared object). Pinia allows you to create a store that, essentially, is a shared object that can hold your business logic. By removing the business logic from the components, we can also unit test it. I’m a fan. Let’s see what we would do to change this.
Note, this isn’t really a tutorial on how to use Pinia, but if you want the details look here:
First, let’s create a store:
export const useStore = defineStore("main", {
state: () => {
return {
entries: new Array<EntryLookupModel>(),
filter: "",
errorMessage: "",
isBusy: false
};
},
});
You create a store using defineStore and expose it as a composable, so the first caller that retrieves the store creates the instance. But, importantly, every other call to useStore will retrieve that same instance. So, in our component, we’d just use useStore to load the main store:
const store = useStore();
And to bind to the store, you’d just use the store. For example, in the component:
<div class="h-[calc(100vh-14rem)] overflow-y-scroll bg-yellow">
<ul class="menu text-lg">
<li v-for="e in store.entries"
:key="e.id">
<div @click="onSelected(e)"
:class="{
'text-white': currentId === e.id,
'font-bold': currentId === e.id
}">{{ e.displayName }}</div>
</li>
</ul>
</div>
All the same data binding happens, it’s just wrapped up in the store. But what about that business logic? Pinia handles that with actions:
export const useStore = defineStore("main", {
state: () => {
return {
entries: new Array<EntryLookupModel>(),
filter: "",
errorMessage: "",
isBusy: false
};
},
actions: {
async loadLookupList() {
if (this.entries.length === 0) {
try {
this.startRequest();
const result = await http.get<Array<EntryLookupModel>>(
"/api/entries/lookup");
this.entries.splice(0, this.entries.length, ...result);
this.sortEntities();
} catch (e: any) {
this.errorMessage = e;
} finally {
this.isBusy = false;
}
}
},
...
}
});
If we push the loading of the data into the actions member, we can add any functions we need to expose to the application. For example, instead of using a local function, we can just access it from the store:
onMounted(async () => {
await store.loadLookupList();
})
Again, we’re just deferring to the store. You might be asking why? This centralizes the data and logic into a shared object. To show this, notice that the store also has members for errorMessage and isBusy. As a reminder, we were using events to tell the App.vue that the loading or error state has changed. But since we’re just using a reactive object in the store, we can skip all that plumbing and instead just use the store from the App.vue:
<script setup lang="ts">
const store = useStore();
// const errorMessage = ref("");
// const isBusy = ref(false);
// function onLoading(value: boolean) { isBusy.value = value}
// function onError(value: string) { errorMessage.value = value}
...
</script>
<template>
...
<div class="mr-6">
<entry-list/>
</div>
</section>
<section class="flex-grow">
<div class="flex gap-2 h-[calc(100vh-5rem)]">
<div class="p-2 flex-grow">
<div class="bg-warning w-full p-2 text-xl"
v-if="store.errorMessage">
{{ store.errorMessage }}
</div>
<div class="bg-primary w-full p-2 text-xl"
v-if="store.isBusy">
Loading...
</div>
...
</template>
So, the logic of errors and isBusy (et al.) is contained in this simple store. My component now only cares about local state that it might need (e.g. the currentId that is picked so the other pane can be shown):
<script setup lang="ts">
...
const store = useStore();
const router = useRouter();
const currentId = ref(0);
function onSelected(item: EntryLookupModel) {
router.push(`/details/${item.id}`);
currentId.value = item.id;
}
onMounted(async () => {
await store.loadLookupList();
})
watch(router.currentRoute, () => {
if (router.currentRoute.value.name === "home") {
currentId.value = 0;
}
})
</script>
But what if we need some computed values? Pinia handles this with getters:
getters: {
entryList: (state) => {
if (state.filter) {
return state.entries
.filter((e) => e.displayName
.toLowerCase()
.includes(state.filter.toLowerCase()));
} else {
return state.entries;
}
}
}
Each getter is a computed value. So when the state of the store changes, this is computed and can be bound to. You may have noticed a filter property. To handle the change, we’re just binding to an input:
<input class="input join-item caret-neutral text-sm"
placeholder="Search..."
v-model="store.filter" />
Since this is bound to the filter, when a user types into it, our entryList will change. You’ll notice that in the getter, we’re just filtering the list of entries based on the filter. So, if we switch the binding to the entryList, we’ll be binding to the computed value:
<ul class="menu text-lg">
<!-- was "store.entries" -->
<li v-for="e in store.entryList"
:key="e.id">
<div @click="onSelected(e)"
:class="{
'text-white': currentId === e.id,
'font-bold': currentId === e.id
}">{{ e.displayName }}</div>
</li>
</ul>
Except for binding to the filter and to the entryList, the component doesn’t need to know about any of this.
So why are we doing this? So we can unit test the store itself. Make sense?
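To make that concrete, here is a minimal sketch of what a store test might look like with Vitest. The import path and the exact shape of EntryLookupModel are assumptions; adjust them for your project:
import { describe, it, expect, beforeEach } from "vitest";
import { setActivePinia, createPinia } from "pinia";
import { useStore } from "@/stores/main"; // hypothetical path to the store above

describe("main store", () => {
  beforeEach(() => {
    // Give each test its own fresh Pinia instance
    setActivePinia(createPinia());
  });

  it("filters entries via the entryList getter", () => {
    const store = useStore();

    // Seed state directly: no component, no browser, no Playwright
    store.entries.push(
      { id: 1, displayName: "Ada Lovelace" },
      { id: 2, displayName: "Grace Hopper" });

    store.filter = "ada";

    expect(store.entryList).toHaveLength(1);
    expect(store.entryList[0].displayName).toBe("Ada Lovelace");
  });
});
These tests run in milliseconds because they never touch the DOM or the network.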
I recently was working on a project where the client wanted the health checks to be part of the OpenAPI specification. Here’s how you can do it.
Announcement: In case you’re new here, I’ve just launched my blog as part of a new combined website. Take a gander around and tell me what you think!
ASP.NET Core supports health checks out of the box. You can add them to your project by adding the Microsoft.AspNetCore.Diagnostics.HealthChecks NuGet package. Once you’ve added the package, you can add the health check dependencies to your project by adding them to the ServiceCollection:
builder.Services.AddHealthChecks();
Once you’ve added the health checks, you need to map them to an endpoint (usually /health). You do that by calling MapHealthChecks:
app.MapHealthChecks("/health");
This works great. If you need to use the health checks, you can just call the /health endpoint.
At the client, our APIs were generated via some tooling by reading the OpenAPI spec. The client wanted the health checks to be part of the OpenAPI specification so that they could call it with the same generated code. But how to get it to work?
The solution is to not use the MapHealthChecks method, but instead to build an API (in my case, a Minimal API) to perform the health checks. Here’s how you can do it:
app.MapGet("/health", async (HealthCheckService healthCheck,
IHttpContextAccessor contextAccessor) =>
{
var report = await healthCheck.CheckHealthAsync();
if (report.Status == HealthStatus.Healthy)
{
return Results.Ok(new { Success = true });
}
else
{
return Results.Problem("Unhealthy", statusCode: 500);
}
}); // ...
This works great. One of the reasons I decided to do it this way is that, instead of just a string, I wanted to return some context about the health check. This way, the client can know what is wrong with the health check.
NOTE: Returning reasons for the failure can be a security risk. Be careful not to return any sensitive information.
I found the best way to do this is to create a problem report with some information:
var report = await healthCheck.CheckHealthAsync();
if (report.Status == HealthStatus.Healthy)
{
return Results.Ok(new { Success = true });
}
else
{
var failures = report.Entries
.Select(e => e.Value.Description)
.ToArray();
var details = new ProblemDetails()
{
Instance = contextAccessor.HttpContext.Request.GetServerUrl(),
Status = 503,
Title = "Healthcheck Failed",
Type = "healthchecks",
Detail = string.Join(Environment.NewLine, failures)
};
return Results.Problem(details);
}
By creating a problem detail, I can specify the URL that was used, the status code to use (503 in this case), and a list of the failures. The report that the CheckHealthAsync method returns has a dictionary of the health checks. I’m just using the description of each health check as the failure reason. Remember, when you call AddHealthChecks you can add additional checks, like this one for testing the DbContext connection string:
builder.Services.AddHealthChecks()
  // From the
  // Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore
  // package
  .AddDbContextCheck<ShoeContext>();
Then you can add some additional information for the OpenAPI specification:
app.MapGet("/health", async (HealthCheckService healthCheck,
IHttpContextAccessor contextAccessor) => { ... })
.Produces(200)
.ProducesProblem(503)
.WithName("HealthCheck")
.WithTags("HealthCheck")
.AllowAnonymous();
Make sense? Let me know what you think!
I’ll make this post pretty quick. I’ve been looking at my NuGet packages and they’re kind of a mess. Not just the packages, but the naming and branding. To start this annoying process, I’ve decided to move all my NuGet packages that support Minimal APIs to a common GitHub repo and package naming.
This package helps you organize your Minimal APIs by using a code generator to automate registration of APIs that implement an IApi interface. You can read more about it here: Docs.
If you’ve been using my package to organize your Minimal APIs, the name of the package has been changed:
Was: WilderMinds.MinimalApiDiscovery
Now: MinimalApis.Discovery
The old package has been deprecated, and you can install the new package by simply running:
> dotnet remove package WilderMinds.MinimalApiDiscovery
> dotnet add package MinimalApis.Discovery
The second package in this repository is MinimalApis.FluentValidation. I’m a big fan of how FluentValidation works, but as I was teaching Minimal APIs, it was tedious to add validation. In .NET 7, Microsoft introduced Endpoint Filters as a good solution. You can read more about how this works at: Docs
This package hasn’t changed name, but has been moved from beta to release. You can update or install this package:
> dotnet add package MinimalApis.FluentValidation
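If you’re curious what this looks like under the hood, here is a rough sketch of validating a request body with a plain .NET 7 endpoint filter and FluentValidation. To be clear, this is my own illustration of the technique, not necessarily the exact API of MinimalApis.FluentValidation (see the Docs link for real usage):
using FluentValidation;

// A generic filter that validates the first endpoint argument of type T
public class ValidationFilter<T> : IEndpointFilter where T : class
{
  public async ValueTask<object?> InvokeAsync(
    EndpointFilterInvocationContext context,
    EndpointFilterDelegate next)
  {
    // Resolve the validator for T from DI (you register it yourself)
    var validator = context.HttpContext.RequestServices
      .GetService<IValidator<T>>();
    var argument = context.Arguments.OfType<T>().FirstOrDefault();

    if (validator is not null && argument is not null)
    {
      var result = await validator.ValidateAsync(argument);
      if (!result.IsValid)
      {
        // Short-circuit with a 400 and the validation errors
        // (ToDictionary() exists in recent FluentValidation versions)
        return Results.ValidationProblem(result.ToDictionary());
      }
    }

    return await next(context);
  }
}

// Usage on an endpoint (CustomerModel and its validator are hypothetical):
// app.MapPost("/api/customers", SaveCustomer)
//    .AddEndpointFilter<ValidationFilter<CustomerModel>>();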
Let me know what you think!
It’s been a while, huh? I haven’t been blogging much (as I’ve been dedicating my time to my YouTube channel) - so I thought it was time to give you a quick update. I have a series of Nuget packages that I’ve created to help with .NET Core development. Let’s take a look:
This is my newest package. It adds support for using FluentValidation as an endpoint filter in Minimal APIs. To install:
> dotnet add package MinimalApis.FluentValidation
I created this package to support structuring your Minimal APIs. It has a source generator that will register all your Minimal APIs with one call at startup. The strategy here was to avoid having to put anything in the DI layer, since Minimal APIs are static lambdas. To install:
> dotnet add package WilderMinds.MinimalApiDiscovery
This is a small package I created to wrap the complexity of writing images to Azure Blob Storage. Take a look! To install:
> dotnet add package WilderMinds.AzureImageStorageService
This is an older package I wrote to handle the MetaWeblog API in my own blog. This API is used by some tools to post new blog entries. To install it:
> dotnet add package WilderMinds.MetaWeblog
Another package I wrote to support my blog, but some people find it useful. In early .NET Core, there wasn’t a solution for exposing an RSS feed from your content. This package does just that. To install it:
> dotnet add package WilderMinds.RssSyndication
Finally, a small NuGet package to allow you to inject the Swagger Hierarchy plugin for Swagger/OpenAPI to create levels of hierarchy in your Swagger configurations. Install it here:
> dotnet add package WilderMinds.SwaggerHierarchySupport
In my last blog post, I mentioned that I was pivoting to what I’m doing next. It feels like a lot of people are going through an upheaval. Is it systemic?
To be clear, I have no idea what’s happening, but it looks like a lot of organizations have taken advantage of the current landscape to trim their rolls. Of course, some of you might think AI is the culprit, but I talked about that in one of my rants if you want to go flame me there ;)
Over the past month or so, I’ve watched layoffs at Pluralsight, LinkedIn, Plex, Microsoft, and I am sure more that I haven’t noticed. Earlier in the year, the announcements from Facebook, Alphabet, Twitter, and Microsoft had already left some people worried.
A lot of the affected jobs seem to be in developer relations. This seems like a pattern. Sure, startups like SourceGraph are hiring, but can they absorb the other layoffs? I don’t know.
Am I concerned? For my own future, sure. I’m 54 and ageism exists, but I have faith in my abilities. The real concern for me is that Developer Advocates are being cut. There is a real movement toward Discord as the home for documentation and developer relationships. I think this trend is newer than the pandemic.
The other side of the coin is that the overall unemployment rate is actually pretty low. For me, that meant I wasn’t looking at jobs outside of the tech companies. Sometimes I forget that most tech jobs aren’t in tech companies; they’re at companies like Ford, Geico, and Wells Fargo. So, if you’re looking, don’t forget that tons of other companies have openings (in fact, reach out if you are a Vue developer who can work remote, I know of a job or two).
Lastly, I wanted to highlight a few people that I know are looking and that I think are genuinely great. Here are their LinkedIn links:
I am not sure I’m looking for a job yet, but I’m still writing courses and working with clients for the time being. This fall may tell a different tale.
I went to my blog the other day and noticed my last story here was in February. I guess I got a little distracted. So, what have I been up to? Let’s talk about it.
Over the last couple of years, like many bloggers, I’ve seen my readership dwindle. This doesn’t mean I think it’s time to abandon the blog. But with so many other things taking my time, I suspect I won’t be blogging quite as regularly as I have in the past. After 1730 blog posts, this blog has been really important to me. I’d never abandon it.
So, if I’m not blogging, what am I doing?
The most obvious answer to this is the film I’ve been working on since the beginning of Covid: Man Enough to Heal. Post-production on the film wrapped in May. Since then, I’ve submitted it to film festivals and engaged a sales agent to find distribution. I’m quite happy with the results and can’t wait to share here when it’s available to watch!
Since the beginning of the year, I’ve been working full-time updating and creating new courses for Pluralsight. Right now, I’m in the middle of updating my long ASP.NET Core, End-to-End course for .NET 6 (and .NET 8 when it ships). The other courses I’ve released or updated this year include:
While blogging has waned, I’ve been focused on doing short videos I’m calling “Coding Shorts”. These videos are ten or so minutes long so I can teach one discrete skill or technology. I’ve made 67 of these so far. Here’s one of my recent ones if you’re interested in getting a taste of them:
I’ve also authored a handful of “Rants” where I talk about the industry and my opinions about what is important. You can find all the videos at my channel:
Lastly, I’ve been spending time thinking about what is next. Every once in a while (~10 years), I find the need to change the direction of my career. Teaching and training have been great, but I think I’m ready for another challenge. In the past, these pivots have been about what things I’ve focused on. Some of these include:
But where do I go next? I have no idea. But I realize that this is likely my last pivot. This means I’m looking to do something that excites me and that I think is important to do. But who knows what that is. I’m scaling back my training and doing more client work, but I’d love to find some clients that are doing important things. If you think you’re one of those companies, feel free to reach out on my work site:
In case you don’t know, I release a newsletter every week with the articles that I find useful — both software related and other tech (e.g. Space, Science). If you want to subscribe, feel free to visit:
I’ve been posting and making videos about ideas I’ve had for discovering Minimal APIs instead of mapping them all in Program.cs for a while. I’ve finally codified it into an experimental NuGet package. Let’s talk about how it works.
I also made a Coding Short video that covers this same topic, if you’d rather watch than read:
The package can be installed via the dotnet CLI:
dotnet add package WilderMinds.MinimalApiDiscovery
Once it is installed, you can use an interface called IApi to implement classes that register Minimal APIs. The IApi interface looks like this:
/// <summary>
/// An interface for Identifying and registering APIs
/// </summary>
public interface IApi
{
/// <summary>
/// This is automatically called by the library to add your APIs
/// </summary>
/// <param name="app">The WebApplication object to register the API </param>
void Register(WebApplication app);
}
Essentially, you implement classes that get passed the WebApplication object to map your API calls:
public class StateApi : IApi
{
public void Register(WebApplication app)
{
app.MapGet("/api/states", (StateCollection states) =>
{
return states;
});
}
}
This would allow you to register a number of related API calls. I think one class per API is too restrictive. When used in .NET 7 and later, you could make a class per group:
public void Register(WebApplication app)
{
var group = app.MapGroup("/api/films");
group.MapGet("", async (BechdelRepository repo) =>
{
return Results.Ok(await repo.GetAll());
})
.Produces(200);
group.MapGet("{id:regex(tt[0-9]*)}",
async (BechdelRepository repo, string id) =>
{
Console.WriteLine(id);
var film = await repo.GetOne(id);
if (film is null) return Results.NotFound("Couldn't find Film");
return Results.Ok(film);
})
.Produces(200);
group.MapGet("{year:int}", async (BechdelRepository repo,
int year,
bool? passed = false) =>
{
var results = await repo.GetByYear(year, passed);
if (results.Count() == 0)
{
return Results.NoContent();
}
return Results.Ok(results);
})
.Produces(200);
group.MapPost("", (Film model) =>
{
return Results.Created($"/api/films/{model.IMDBId}", model);
})
.Produces(201);
}
Because lambdas are missing some features (e.g. default parameter values), you can always move the lambdas to static methods:
public void Register(WebApplication app)
{
var grp = app.MapGroup("/api/customers");
grp.MapGet("", GetCustomers);
grp.MapGet("{id:int}", GetCustomer);
grp.MapPost("{id:int}", SaveCustomer);
grp.MapPut("{id:int}", UpdateCustomer);
grp.MapDelete("{id:int}", DeleteCustomer);
}
static async Task<IResult> GetCustomers(CustomerRepository repo)
{
return Results.Ok(await repo.GetCustomers());
}
//...
The reason for the suggestion of using static methods (instance methods would work too) is that you do not want these methods to rely on state. You might think that constructor service injection would be a good idea:
public class CustomerApi : IApi
{
private CustomerRepository _repo;
// MinimalApiDiscovery will log a warning because
// the repo will become a singleton and lifetime
// will be tied to the implementation methods.
// Better to use method injection in this case.
public CustomerApi(CustomerRepository repo)
{
_repo = repo;
}
// ...
This doesn’t work well: the call to Register happens once at startup, and since this class shares that state, the injected service becomes a singleton for the lifetime of the server. The library will log a warning if you do this to help you avoid it. Because of that, I suggest that you use static methods instead to prevent this from accidentally happening.
NOTE: I considered using static interfaces, but that requires that the instance is still a non-static class. It would also limit this library to use in .NET 7/C# 11 - which I didn’t want to do. It works in .NET 6 and above.
When you’ve created these classes, you can simply make two calls at startup to register all IApi classes:
using UsingMinimalApiDiscovery.Data;
using WilderMinds.MinimalApiDiscovery;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddTransient<CustomerRepository>();
builder.Services.AddTransient<StateCollection>();
// Add all IApi classes to the Service Collection
builder.Services.AddApis();
var app = builder.Build();
// Call Register on all IApi classes
app.MapApis();
app.Run();
The idea here is to use reflection to find all IApi classes and add them to the service collection. Then the call to MapApis() will get all IApi instances from the service collection and call Register.
The call to AddApis simply uses reflection to find all classes that implement IApi and adds them to the service collection:
var apis = assembly.GetTypes()
.Where(t => t.IsAssignableTo(typeof(IApi)) &&
t.IsClass &&
!t.IsAbstract)
.ToArray();
// Add them all to the Service Collection
foreach (var api in apis)
{
// ...
coll.Add(new ServiceDescriptor(typeof(IApi), api, lifetime));
}
Once they’re all registered, the call to MapApis is pretty simple:
var apis = app.Services.GetServices<IApi>();
foreach (var api in apis)
{
if (api is null) throw new InvalidProgramException("Apis not found");
api.Register(app);
}
While I’m happy with this use of Reflection since it is only a ‘startup’ time cost, I have it on my list to look at using a Source Generator instead.
If you have experience with Source Generators and want to give it a shot, feel free to do a pull request at https://github.com/wilder-minds/minimalapidiscovery.
I’m also considering removing AddApis and just having the MapApis call reflect to find all the IApi classes and call Register, since we don’t actually need them in the Service Collection.
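For what it’s worth, here is a rough sketch of what that might look like; purely hypothetical, and not how the shipped package works today:
using System.Reflection;

public static class ApiRegistrationExtensions
{
  // Hypothetical all-in-one MapApis that skips the Service Collection entirely
  public static WebApplication MapApis(this WebApplication app, Assembly? assembly = null)
  {
    // Default to the entry assembly if the caller doesn't pass one
    assembly ??= Assembly.GetEntryAssembly()!;

    var apiTypes = assembly.GetTypes()
      .Where(t => t.IsAssignableTo(typeof(IApi)) && t.IsClass && !t.IsAbstract);

    foreach (var type in apiTypes)
    {
      // Create a throw-away instance just to call Register; nothing goes into DI,
      // so there is no accidental singleton state (the classes would need a
      // parameterless constructor for this to work)
      if (Activator.CreateInstance(type) is IApi api)
      {
        api.Register(app);
      }
    }

    return app;
  }
}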
You can see the complete source and example here:
As you likely know if you’ve read my blog before, I have spent the last decade or so creating courses to be viewed on Pluralsight. I love making these kinds of video-based courses, but I’ve decided to get back to instructor led training a bit.
While my video courses really benefit a lot of learners, I’ve realized that some people learn better with direct interaction with a live teacher. In addition, I have missed the direct impact of working with students.
I’m proud to announce my first two instructor-led courses:
ASP.NET Core: Building Sites and APIs - April 11-13, 2023
Building Apps with Vue, Vite and TypeScript - May 9-11, 2023
These courses will be taught online (via Zoom). This sort of remote teaching can be taxing for many people, so I am teaching each course as three half-days. Each day, I’ll hold the class from noon to 5pm (Eastern Time Zone).
Early Bird Pricing until March 24th: $699
I hope you’ll join me at these new courses!
This topic has been on my TODO list for quite a while now. As I work with clients, many of them are just ignoring the warnings that you get from Nullable Reference Types. When Microsoft changed to make them the default, some developers seemed to be confused by the need. Here is my take on them:
I also made a Coding Short video that covers this same topic, if you’d rather watch than read:
There have always been two different kinds of objects in C#: value types and reference types. Value types are created on the stack (therefore they go away without needing to be garbage collected), and reference types are created on the heap (needing to be garbage collected). Primitive types and structs are value types, and everything else is a reference type, including strings. So we could do this:
int x = 5;
string y = null;
By design, value types couldn’t be null. They just couldn’t be:
int x = null; // Error
string y = null;
There were occasions where we needed null on value types, so they introduced the Nullable<T> struct. Essentially, this allowed you to make value types nullable:
Nullable<int> x = null; // No problem
They did add some syntactical sugar for Nullable<T> by just using a question mark:
int? x = null; // Same as Nullable<int>
But why nullability? So you can test for whether a value exists:
int? x = null;
if (x.HasValue) Write(x);
While this works, you could test for null as well:
int? x = null;
if (x is not null) Write(x);
OK, that is what nullable value types are, but reference types already support null. Reference types do support being null, but they do not support disallowing null. That’s the difference. By enabling Nullable Reference Types, all reference types (by default) do not support null unless you define them with the question mark:
object x = null; // Doesn’t work
But utilizing the null type definition:
object? x = null; // works
As C# developers, we spend a lot of time worrying about whether an object is null (since anyone can pass a null for parameters or properties). So, enabling Nullable Reference Types makes that impossible. By default, new projects (since .NET 6) have enabled Nullable Reference Types by default. But how?
In C# 8, they added the ability to enable Nullable Reference Types. There are two ways to enable it: a file-based declaration or a project-level flag. For projects that want to opt into Nullable Reference Types slowly, you can use the file declarations:
#nullable enable
object x = null; // Doesn't work, null isn't supported
#nullable disable
But for most projects, this is done at the project level:
<!--csproj-->
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net7.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>
</Project>
The <Nullable/> property is what enables the feature.
When you enable this, it will produce warnings for applying null to reference types. But you can even turn these into errors to force a project to address the changes:
<WarningsAsErrors>Nullable</WarningsAsErrors>
So, you’ve gotten this far, so let’s talk some basics. When defining a variable, you can opt into nullability by declaring the type as nullable:
string? x = null;
That means anywhere you’re just defining the type (without inferring the type), C# will assume that null isn’t a valid value:
string x = "Hello";
if (x is null) // No longer necessary, this can't be null
{
// ...
}
But what happens when we infer the type? For value types, it is assumed to be a non-nullable type, but for reference types…nullable:
var u = 15; // int
var s = ""; // string?
var t = new String('-', 20); // string?
This is actually one of the reasons I’m moving to the new syntax for creating objects:
object s = new(); // object - not nullable
Not exactly about nullable reference types, but in this case, the object is not null because we’re making sure it’s not nullable.
When clients have moved here, the biggest pain they seem to run into is with classes (et al.). After spending so many years writing simple data classes like so:
public class Customer
{
public int Id { get; set;}
public string Name { get; set;} // Warning
public DateOnly Birthdate { get; set;}
public string Phone { get;set;} // Warning
}
Properties that aren’t nullable are expected to be set before the end of the constructor. There are two ways to address this: make them nullable, or initialize the properties.
Making the properties nullable has the benefit of being more descriptive of the actual usage of the property:
public class Customer
{
public int Id { get; set;}
public string? Name { get; set;} // null unless you set it
public DateOnly Birthdate { get; set;}
public string? Phone { get;set;} // null unless you set it
}
Alternatively, you can set the value:
public class Customer
{
public int Id { get; set;}
public string Name { get; set;} = "";
public DateOnly Birthdate { get; set;}
public string Phone { get;set;} = "";
}
Or,
public class Customer
{
public int Id { get; set;}
public string Name { get; set;}
public DateOnly Birthdate { get; set;}
public string Phone { get;set;}
public Customer(string name, string phone)
{
Name = name;
Phone = phone;
}
}
It may, at first, seem like trouble for certain types of classes. In fact, it is not uncommon to opt out of nullability for entity classes:
#nullable disable
public class Customer
{
public int Id { get; set;}
public string Name { get; set;} // No Warning
public DateOnly Birthdate { get; set;}
public string Phone { get;set;} // No Warning
}
#nullable enable
When you start using nullable properties on objects, you quickly run into warnings:
Customer customer = new();
WriteLine($"Name: {customer.Name}"); // Warning
The warning is because the compiler can’t confirm it is not null (Name is nullable). This is one of the uncomfortable parts of using Nullable Reference Types. So we can wrap it with a test for null (like you’ve probably been doing for a long time):
Customer customer = new();
if (customer.Name is not null)
{
WriteLine($"Name: {customer.Name}");
}
At that point, the compiler can be sure it’s not null because you tested it. But this seems like a lot of work to determine null. Instead, we can use some syntactical sugar to shorten this:
Customer customer = new();
WriteLine($"Name: {customer?.Name}"); // Warning
The ?. is simply a shortcut: if customer is null, it just returns null. This allows you to deal with nested nullable types pretty easily:
Customer customer = new();
WriteLine($"Name: {customer.Name?.FirstName}"); // Warning
In this example, you can see that the ? is used at multiple places in the code, as Name could be null and FirstName could also be null.
This also affects how you will allocate a variable that might be null. For example:
Customer customer = new();
string name = customer.Name; // Warning, Name might be null
The null coalescing operator can be used here to define a default:
Customer customer = new();
string name = customer.Name ?? "No Name Specified"; // No warning; falls back if Name is null
The ?? operator allows for a fallback in case of null, which should simplify some common scenarios.
But sometimes we need to help the compiler figure out whether something is null. You might know that a particular object is not null even if it is a nullable property. There is an additional syntax that supports telling the compiler that you know better: just use the ! syntax.
Customer customer = new();
string name = customer.Name!; // I know it's never null
This just tells the compiler what you expect; it doesn’t add a runtime check. If the Name actually is null, you’ll still get a NullReferenceException as soon as it is dereferenced, so only use it when you’re sure. The bang symbol (e.g. !) is used at the end of the variable. So if you need to chain these, you’ll put the bang at each level:
Customer customer = new();
string name = customer.Name!.FirstName!; // I know they're never null
While using Nullable Reference Types could be seen as a way to over-complicate your code, these bits of syntactical sugar can simplify dealing with nullables.
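To put those pieces together, here is a small recap example of my own (FindCustomer is just a hypothetical stand-in that may return null):
Customer? customer = FindCustomer(); // might be null

// ?. short-circuits to null, ?? supplies a fallback
string name = customer?.Name ?? "No Name Specified";

// ! only when you know better than the compiler; it suppresses the
// warning but adds no runtime check
int length = customer!.Name!.Length;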
Just like any other code, you can use the question-mark to specify that a value is nullable:
public class SomeEntity<TKey>
{
public TKey? Key { get; set; }
}
The problem with this is that the type specified for TKey could also be nullable:
SomeEntity<string?> entity = new();
But this results in a warning because you can’t have a nullable of a nullable. The generated type might look like this:
public class SomeEntity<string?>
{
public string?? Key { get; set; }
}
Notice the double question mark. It also suggests that the generic class doesn’t quite know whether to initialize it or not, since it doesn’t know about the nullability. To get around this, you can use the notnull constraint:
public class SomeEntity<TKey> where TKey : notnull
{
public TKey? Key { get; set; }
}
That way the generic type can be in control of the nullability instead of the caller.
I hope that this quick intro into Nullable Reference Types helps you get your head around the ‘why’ and ‘how’ of Nullable Reference Types. Please comment if you have more questions and/or complaints!
I’ve worked with Progressive Web Application plug-ins with several SPA frameworks. Most of them are pretty simple to implement. But when I learned about Vite’s plug-in, I was intrigued since that would work across different SPA frameworks. Let’s take a look at it.
I also made a Coding Short video that covers this same topic, if you’d rather watch than read:
The Vite plug-in for PWA works at the Vite/Build level, not for your specific framework (or lack of a framework). That means it will work for Vue, React, SvelteKit and Vanilla JS (and any other Vite-powered development). Before we do any of this, we have a working website:
To install it, you just need to add it to your development-time dependencies:
> npm i vite-plugin-pwa --save-dev
Once installed, you can add it to your vite.config.js file:
...
import { VitePWA } from "vite-plugin-pwa";
// https://vitejs.dev/config/
export default defineConfig({
plugins: [
vue(),
VitePWA()
],
...
})
With this installed, you’ll see that your builds will generate some extra files:
build started...
✓ 30 modules transformed.
../wwwroot/registerSW.js 0.13 kB
../wwwroot/manifest.webmanifest 0.14 kB
../wwwroot/index.html 0.56 kB
../wwwroot/assets/index-cfd5afe3.css 7.14 kB │ gzip: 1.97 kB
../wwwroot/assets/index-25653f73.js 75.09 kB │ gzip: 30.04 kB
built in 1378ms.
PWA v0.14.1
mode generateSW
precache 5 entries (80.98 KiB)
files generated
..\wwwroot\sw.js
..\wwwroot\workbox-519d0965.js
The files generated by the plug-in include the service worker (sw.js), the workbox runtime, the registration script (registerSW.js), and the web manifest (manifest.webmanifest).
With this generated, you should see the “install icon” on supported browsers:
You can customize the metadata that is used by adding a manifest object to the plug-in options:
// https://vitejs.dev/config/
export default defineConfig({
plugins: [
vue(),
VitePWA({
manifest: {
icons: [
{
src: "/icons/512.png",
sizes: "512x512",
type: "image/png",
purpose: "any maskable"
}
]
}
})],
...
The properties that you can customize in the manifest are all defined here.
If you run the example now, you can look at the manifest for errors or omissions:
If you click on the Application tab in the tools, you can see that it is complaining about missing icons for different operating systems.
If you switch to the Service Worker, you can see it is running:
But how does this work? The Service Worker can intercept network requests and serve the content necessary to load up the project. In fact, if you look at the “Cache Storage”, you’ll see the standard cache of the web page’s files:
The library that supports all of this is called workbox, so if you look at that cache, you’ll see the files that are being cached to load this offline (including .html, .js, .css, etc.). So, let’s try making the app go offline to see what happens:
You can go offline in the Network tab by changing the networking to offline. If you refresh the page, you’ll get something that looks like this:
But what happened? The cache (see earlier) is only caching the files needed to serve the page, not for any functionality.
How do we fix this? Luckily, the plug-in supports changing the workbox settings to create your own caches (called runtimeCaching). To do this, we return to the vite.config.js file:
// https://vitejs.dev/config/
export default defineConfig({
plugins: [
vue(),
VitePWA({
manifest: {
icons: [
{
src: "/icons/512.png",
sizes: "512x512",
type: "image/png",
purpose: "any maskable",
},
],
},
workbox: {
runtimeCaching: [
{
urlPattern: ({ url }) => {
return url.pathname.startsWith("/api");
},
handler: "CacheFirst" as const,
options: {
cacheName: "api-cache",
cacheableResponse: {
statuses: [0, 200],
},
},
},
],
},
}),
],
...
By creating a section for workbox, we can configure a number of things, but for us, we want to create an API cache. You can see that I’m testing all requests for /api and caching all GETs into our own cache. By enabling this, the customers reappear. We can see (and interrogate) the cache in the Application tools:
This sort of all-encompassing cache might not be realistic, but you could cache non-volatile API calls. This isn’t a solution for handling offline changes; for that, you can use application-specific code to write them to session or local storage (see the rough sketch below).
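As a rough illustration of that last point (my own sketch, with a hypothetical /api/entries endpoint), you could queue changes in localStorage while offline and replay them when the connection returns:
const QUEUE_KEY = "pending-changes";

export function queueChange(change: unknown) {
  // Append the change to a queue persisted in localStorage
  const queue = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
  queue.push(change);
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
}

export async function flushChanges() {
  const queue: unknown[] = JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
  for (const change of queue) {
    // Hypothetical endpoint; replace with your real API call
    await fetch("/api/entries", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(change),
    });
  }
  localStorage.removeItem(QUEUE_KEY);
}

// Replay the queue whenever connectivity returns
window.addEventListener("online", () => void flushChanges());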
I hope you’ve seen how the Vite PWA plug-in works and how you can use it to install your website as a local application!
You can find the example of the project here:
I’ve spent the last couple of months working on a new Pluralsight course about Modules in JavaScript. I’ve been writing JavaScript (and TypeScript) for a lot of years. But digging into the course made me understand how some of this modularity actually worked. Let’s talk about some things that surprised me.
I also made a Coding Short video that covers this same topic, if you’d rather watch than read:
Node.js has had module support since long before ECMAScript got its act together and started supporting modules. CommonJS was an early standard for exposing modules, so most of the Node.js projects I’ve worked on just assumed that I had to use CommonJS. For example, here's a simple import using CommonJS (e.g. require()):
// index.js
const invoices = require("./invoices.js");
Since ECMAScript Modules (ESM) are now supported, you could just rename your index.js (in our case) to index.mjs and it would allow us to use ECMAScript modules:
// index.mjs
import invoices from "./invoices.mjs";
But, for me, I like that Node.js allows us to change the default module type to ESM:
{
"name": "before",
"version": "1.0.0",
"description": "",
"main": "index.js",
"type": "module", // commonjs is the default
"scripts": {
"start": "node ./index.js"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"lodash": "^4.17.21"
}
}
Then we can just use ESM in our code without the renaming:
// index.js
import invoices from "./invoices.js";
So, you can use ESM to load all your own code where you’ve defined your modules directly. This works with your own projects or npm packages. If you need one of your own files to be treated as CommonJS, you can rename it to .cjs:
// invoices.cjs
module.exports = [...];
But npm packages are still mostly defined as CommonJS modules. How do we use them? For example, we can bring in an npm package (in this case lodash) like so:
import lodash from "lodash";
This allows us to use the lodash object as we like. But there is a limitation. Ordinarily, you could destructure it to get just the round function we need:
import { round } from "lodash";
But with ESM in Node.js, that doesn’t work. This is because of a fundamental difference in how CommonJS defines named elements and how ESM does. Instead, you need to import the default export, but then you can destructure it manually:
import lodash from "lodash";
const { round } = lodash;
It’s a minor nit, but if you know how CommonJS modules define names (as I show in my course), it actually makes sense.
I’ve been using ESM for a while and had never run into the import() function. I used to think that CommonJS was the only module system that allowed for late-bound imports. But, alas, I was wrong.
The import function allows you to request an import at runtime, though it is asynchronous so you have to deal with the promise. For example:
export async function calculateTotal(invoice) {
  const { taxRates } = await import("./taxRates.js");
  const rate = taxRates[invoice.state];
  const total = invoice.amount + invoice.amount * rate;
  return {
    rate,
    total,
  };
}
You can see that the import allows you to load the module the first time calculateTotal() is called. This does mean that the caller has to deal with the asynchrony too:
invoices.forEach(async i => {
const { rate, total } = await calculateTotal(i);
console.log(`Invoice: ${i.invoiceNumber}, Date: ${i.invoiceDate}
Gross: $${round(i.amount,2)}
Tax Rate: ${rate * 100}%
Net: $${round(total,2)}`);
});
Note that the forEach callback is now async so you can use await inside it to deal with the asynchrony (though forEach itself won’t wait for those callbacks to finish; see the sketch below).
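If you do need everything processed before moving on, a plain for...of loop is one alternative. Here’s a small sketch (mine, not from the original sample) that relies on ESM’s top-level await:

// Alternative sketch: for...of awaits each invoice in turn, so the loop
// finishes only after every total has been calculated.
// (Assumes the same imports as above: invoices, calculateTotal, and lodash's round.)
for (const i of invoices) {
  const { rate, total } = await calculateTotal(i);
  console.log(`Invoice: ${i.invoiceNumber}, Tax Rate: ${rate * 100}%, Net: $${round(total, 2)}`);
}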
You can find the example of the project here:
I was playing with ChatGPT (like we all are) and wondered whether it could write a blog post in the style of my writing. The exact prompt was:
Write an article in the style of Shawn Wildermuth
The article is shown below, but clearly it isn’t in my voice. It doesn’t use the words “pragmatic” or “dogma”. Go figure.
As a software developer, we are constantly tasked with solving problems for our users and clients. The key to success in this role is to understand the needs of the people you are building software for. This requires a deep understanding of their problems, their perspectives, and the experiences they are going through. In short, it requires empathy.
Empathy is the ability to understand and share the feelings of others. In software development, this means taking the time to understand the needs of our users and clients, and then using that understanding to build software that meets their needs. When we build software with empathy, we create solutions that are intuitive, user-friendly, and effective.
There are several key benefits to building software with empathy:
Better User Experience: By understanding the needs and perspectives of our users, we can create a better user experience. This means that our software will be more intuitive and easier to use, leading to increased adoption and satisfaction.
Increased Collaboration: When we have empathy for our users and clients, we are more likely to collaborate with them and work together to solve their problems. This leads to stronger relationships and more successful projects.
Improved Problem-Solving Skills: When we understand the perspectives of our users and clients, we are better equipped to solve their problems. This leads to more creative and effective solutions, and a better outcome for everyone involved.
There are several ways to develop empathy in software development:
In conclusion, empathy is a powerful tool in software development. By understanding the needs and perspectives of our users and clients, we can build software that is more intuitive, user-friendly, and effective. So, next time you’re working on a project, take a step back, put yourself in your users’ shoes, and build with empathy. Your users and clients will thank you!
not by: Shawn Wildermuth
Most of my job these days is creating tutorials or examples. This means I often start from scratch with a new type of project. I usually just cruft up some sample data for my project to start. While investigating some alternatives to Automapper (video and blog coming soon), I wanted to be able to create a bunch of sample data. I, luckily, ran into a tool that I was surprised I’d never heard of: Bogus. Let’s talk about it.
I also made a Coding Short video that covers this same topic, if you’d rather watch than read:
Bogus is a library that works with C#, F#, and VB.NET and can be used to create repeatable, fake data for applications. It is essentially a port of the similar JavaScript library, faker.js. It accomplishes this with generators (called Fakers) that have a set of rules for generating one or more fake objects. Built into Bogus is a set of generalized rules for common data categories (i.e. Addresses, Companies, People, Phone Numbers, etc.). Enough talk, let’s see how it works. The full repo is at:
To install Bogus, you can use the Package Manager or just the dotnet CLI:
> dotnet add package Bogus
You start out by creating an instance of a class called Faker&lt;T&gt;. From that class you use a fluent syntax to set up rules for creating sample data. But let’s start with our POCO for a Customer:
public class Customer
{
  public int Id { get; set; }
  public string? CompanyName { get; set; }
  public string? Phone { get; set; }
  public string? ContactName { get; set; }
  public int AddressId { get; set; }
  public Address? Address { get; set; }
  public IEnumerable<Order>? Orders { get; set; }
}
Notice that aside from simple properties, we have a one-to-one relationship to an Address and a one-to-many relationship with Orders. Let’s start by creating a faker for the Customer object and its simple properties:
var customerFaker = new Faker<Customer>();
We can then use the RuleFor method to specify a rule for the CompanyName property:
var customerFaker = new Faker<Customer>()
  .RuleFor(c => c.CompanyName, f => f.Company.CompanyName());
The first parameter of the RuleFor method is a lambda that picks the property on Customer that I want to fake. The second parameter is another lambda that describes how to generate the property. While we could write any code we need here, the most common case is to use the Faker object that’s passed in and its built-in semantics. In this case, we’re using the Company category to generate a company name.
If we continue this, we can fake more simple properties like so:
var customerFaker = new Faker<Customer>()
  .RuleFor(c => c.CompanyName, f => f.Company.CompanyName())
  .RuleFor(c => c.ContactName, f => f.Name.FullName())
  .RuleFor(c => c.Phone, f => f.Phone.PhoneNumberFormat());
You can see here that we’re using the Name category and the Phone category. The Bogus library has a large set of these built-in semantics. Sometimes we’ll need to use custom code to generate data we need. For example, we’ll want to generate IDs for the generated customers. One strategy is to just create a local integer and assign it with simple code:
var id = 1;
var customerFaker = new Faker<Customer>()
  .RuleFor(c => c.Id, _ => id++)
  .RuleFor(c => c.CompanyName, f => f.Company.CompanyName())
  .RuleFor(c => c.ContactName, f => f.Name.FullName())
  .RuleFor(c => c.Phone, f => f.Phone.PhoneNumberFormat());
Here we just have an integer (which the rule captures in a closure) and we increment it every time a new customer is created.
To use the Faker, we can just call Generate() with the number of objects we want:
var customers = customerFaker.Generate(1000);
This will create a thousand fake customers.
By default, the generation of customers is random, so every time you create an instance of the Faker object (e.g. new Faker&lt;Customer&gt;()), you get different customers. When you want a consistent set of fake data, you can use a seed to ensure that you get the same data every time. To do this, you just need to set the seed to the same number:
public class CustomerFaker : Faker<Customer>
{
  public CustomerFaker()
  {
    var id = 1;
    UseSeed(1969) // Use any number
      .RuleFor(c => c.Id, _ => id++)
      .RuleFor(c => c.CompanyName, f => f.Company.CompanyName())
      .RuleFor(c => c.ContactName, f => f.Name.FullName())
      .RuleFor(c => c.Phone, f => f.Phone.PhoneNumberFormat());
  }
}
var customers = new CustomerFaker().Generate(1000);
When you do this, you’re guaranteed to get the same customers. But this applies to the entire instance of the faker, because every call to Generate produces the next set of faked data. For example:
var customerFaker = new CustomerFaker();
var customers = customerFaker.Generate(1);
var companyName = customers.First().CompanyName;
var newCustomers = customerFaker.Generate(1);
Assert.IsTrue(companyName == newCustomers.First().CompanyName); // FAILS
This is because the repeatable data is per-instance: the first call to Generate gives you the first repeatable object, and the second call to Generate gives you the second.
But if you create a new instance, the names are guaranteed to match:
var customerFaker = new CustomerFaker();
var customers = customerFaker.Generate(1);
var companyName = customers.First().CompanyName;
var newFaker = new CustomerFaker();
var newCustomers = newFaker.Generate(1);
Assert.IsTrue(companyName == newCustomers.First().CompanyName); // TRUE
This supports the idea of repeatable sample data!
In our Customer class, we have a property for an Address. We can create a Faker for the Address too:
public class AddressFaker : Faker<Address>
{
  public AddressFaker()
  {
    var id = 0;
    UseSeed(1969)
      .RuleFor(c => c.Id, f => ++id)
      .RuleFor(c => c.Address1, f => f.Address.StreetAddress())
      .RuleFor(c => c.Address2, f => f.Address.SecondaryAddress())
      .RuleFor(c => c.City, f => f.Address.City())
      .RuleFor(c => c.StateProvince, f => f.Address.State())
      .RuleFor(c => c.PostalCode, f => f.Address.ZipCode());
  }
}
Again, there is a category for the type of data we need, and we can decide how to generate sample addresses. One thing you might want is to optionally leave out certain parts of the fake data. For example, I want some of the Address2 properties to be null to replicate addresses that don’t have an apartment/suite number. To do this, you can use the OrNull() method:
.RuleFor(c => c.Address2, f => f.Address.SecondaryAddress()
.OrNull(f, .5f))
The OrNull method takes the faker object and a value between 0 and 1 that determines how often to generate a null value. In this example, we’re specifying that we want half (or 50%) of the addresses to have a null for their secondary address.
Now that we have a faker that does what we want, let’s use it to generate addresses too!
public class CustomerFaker : Faker<Customer>
{
  AddressFaker _addrFaker = new AddressFaker();

  public CustomerFaker()
  {
    var id = 1;
    UseSeed(1969) // Use any number
      .RuleFor(c => c.Id, _ => id++)
      .RuleFor(c => c.CompanyName, f => f.Company.CompanyName())
      .RuleFor(c => c.ContactName, f => f.Name.FullName())
      .RuleFor(c => c.Phone, f => f.Phone.PhoneNumberFormat())
      .RuleFor(c => c.Address, _ => _addrFaker.Generate(1)
        .First()
        .OrNull(_, .1f));
  }
}
Notice that we’re creating an instance of the AddressFaker and then using it when we specify the rule for the Customer’s Address property. We can even use OrNull to only generate addresses for 90% of the customers.
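The Customer class also has a one-to-many Orders collection that we haven’t touched. As a rough sketch (assuming a simple Order class with Id, OrderDate, and Total properties, which may not match the actual sample repo), you could fake those the same way and wire them into the CustomerFaker:

// Rough sketch only: the Order shape here (Id, OrderDate, Total) is assumed.
public class OrderFaker : Faker<Order>
{
  public OrderFaker()
  {
    var id = 1;
    UseSeed(1969)
      .RuleFor(o => o.Id, _ => id++)
      .RuleFor(o => o.OrderDate, f => f.Date.Past(2))          // sometime in the last two years
      .RuleFor(o => o.Total, f => f.Finance.Amount(10, 5000)); // between 10 and 5000
  }
}

// Then, inside CustomerFaker's constructor, something like:
// .RuleFor(c => c.Orders, f => new OrderFaker().Generate(f.Random.Int(1, 10)));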
There is a lot more to the Bogus library, but hopefully this will get you started. To get the example code from the video and this blog post, see the Github Repo:
If you’ve heard me talk about Vite in the past (and so commonly mispronounce it), you know I am a fan. With many Vue, React, and SvelteKit applications moving to Vite, I’ve been investigating how to integrate it for development and production into ASP.NET Core applications. Let’s see what I found out.
I also made a Coding Short video that covers this same topic, if you’d rather watch than read:
Normally, we’ve used bundlers (Webpack, Rollup) at development time to watch for changes and hot-swap or reload pages as necessary. Vite approaches development differently. While Vite also hot-swaps code, it does so without bundling the whole project; instead, it exposes a dev server for the project that relies on script modules.
For example, to start a project, you need to just point at the entry file:
<body>
<div id="app"></div>
<script type="module" src="/src/main.js"></script>
</body>
Most modern browsers now support the module script type. In this case, Vite loads main.js and then just follows imports and exports to load all the parts of the project that you need. This means that startup is incredibly fast since there really isn’t a bundling step.
When you’re developing directly with Vite, you can just start it at the command line and it will serve the index.html as well as the script/resource files. Vite isn’t really a production-ready web server, though; serving the files this way is about having a great development-time experience.
For production, it still bundles projects (by default with Rollup) in the same way that these frameworks have always done.
With this different approach, integrating with ASP.NET Core presents some challenges.
A little background: our project is a simple ASP.NET Core project with a Vite project as a subdirectory called “Client”:
Before we talk about how to get Vite working for development, let’s talk about how it will work when you publish your app for production (or other non-development builds).
In a Vite project, you can use Vite to build your project by using the build command (shown here in a package.json file’s scripts):
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview"
},
Vite uses Rollup (by default) to build and package your project for production. So we can configure Vite to output our project by modifying the vite.config.js file and adding a build configuration:
// https://vitejs.dev/config/
export default defineConfig({
  plugins: [vue()],
  build: {
    outDir: "../wwwroot/client",
    emptyOutDir: true,
  },
  resolve: {
    alias: {
      "@": fileURLToPath(new URL("./src", import.meta.url)),
    },
  },
});
The outDir is pointing at our wwwroot folder where the ASP.NET Core project will have access to the build files. The emptyOutDir is specifically there to empty the folder before building so you don’t get any extra assets littering it.
When we build the project, we get several files generated:
vite v4.0.4 building for production...
✓ 31 modules transformed.
../wwwroot/client/index.html 0.45 kB
../wwwroot/client/assets/index-ca646b5b.css 1.22 kB │ gzip: 0.48 kB
../wwwroot/client/assets/index-bdf9da80.js 77.78 kB │ gzip: 30.96 kB
Then we can just reference these files in the host page (a Razor page in this example):
@page

@section Styles {
  <link rel="stylesheet" href="/client/assets/index-ca646b5b.css" />
}
@section Scripts {
  <script src="/client/assets/index-bdf9da80.js"></script>
}

<h1>Film List</h1>
<div id="app"></div>
Notice that we’re also adding any markup (the div#app in this example) that the Vite project needs.
But we have a problem: the Vite build is generating cache-busting names (the random string after index-). This will change on every build, so we can use some special tag helpers:
@section Styles {
  <link rel="stylesheet" asp-href-include="/client/assets/index-*.css" />
}
@section Scripts {
  <script asp-src-include="/client/assets/index-*.js"></script>
}
By using the asp-href-include and asp-src-include tag helpers, we can use a wildcard to include the right files for us.
Lastly, we need to actually run the build. We can do this by adding it to the .csproj file. By adding a Target that runs before publish, we can just execute the build:
<Target Name="CompileClient"
        BeforeTargets="Publish">
  <Exec WorkingDirectory="./client"
        Command="npm install" />
  <Exec WorkingDirectory="./client"
        Command="npm run build" />
</Target>
Notice that we’re calling npm install first to be sure that all the packages exist, and that we’re using the WorkingDirectory attribute to point at our client directory.
Now, when you publish the project (manually or in a build script), the Vite project is built too!
But we came to talk about development, let’s talk about that next.
Like we saw earlier, during development you would run Vite and it would load scripts on demand using the type=module approach. When you run Vite in this mode, it is essentially running a server for the markup and a single, large SPA. If you’re creating APIs with ASP.NET Core and just hosting your SPA as a single HTML file, this works perfectly.
NOTE: There is a package called Microsoft.AspNetCore.SpaServices.Extensions that is meant to do this, but it is not well documented and may be deprecated by now. It didn’t work well with Vite, though it might for Angular and React using their CLIs.
But in many cases, you’ll want to host one or more SPAs on specific pages of your project. How do we handle this since both ASP.NET Core and Vite will be serving files?
During development you’ll want to run both servers and just use Vite to serve the assets (.js/.css) for your project. To do this, let’s look at the Razor page again:
@page

@section Styles {
  <link rel="stylesheet" asp-href-include="/client/assets/index-*.css" />
}
@section Scripts {
  <script asp-src-include="/client/assets/index-*.js"></script>
}

<h1>Film List</h1>
<div id="app"></div>
What we want to do here is only use these style and script tags in production, so we can surround them with an environment tag for production:
<environment include="Production">
  @section Styles {
    <link rel="stylesheet" asp-href-include="/client/assets/index-*.css" />
  }
  @section Scripts {
    <script asp-src-include="/client/assets/index-*.js"></script>
  }
</environment>
This will set it up to only use the build assets during production. We can then add an environment tag for development:
<environment include="Development">
  <script type="module" src="http://localhost:5000/src/main.js"></script>
</environment>
You’ll notice that we’re using the Vite server to serve the main.js file. If you remember from earlier, this will load other assets on-demand and hot-swap them as necessary.
In this way you get the best of both worlds. But we have a problem:
In our example, we’re hosting the SPA on a page whose URL is http://localhost:8000/FilmList. This comes from the Razor page’s route. But our Vite project (Vue in this case) is using history-style routing. That means that when it navigates, it takes over the URL. So when we navigate to our SPA’s home page, it changes the URL to http://localhost:8000/, and for the list page it changes the URL to http://localhost:8000/films (which are based on the project’s own routing, not server-side routing).
The problem is that if we refresh the page or open that URL directly, it fails because we’re not serving our Razor page at those URLs. There are two fixes here. First, we need to tell the Vite project what the base address for our project is. We can do this in vite.config.js:
// https://vitejs.dev/config/
export default defineConfig({
  plugins: [vue()],
  base: "/FilmList",
  build: {
    outDir: "../wwwroot/client",
    emptyOutDir: true,
  },
  resolve: {
    alias: {
      "@": fileURLToPath(new URL("./src", import.meta.url)),
    },
  },
});
You can see the addition of the base property, which lets the app know what the base URL for the project is. In this way, the navigation will start at that base URL. This is better, but then our SPA’s route for the list of films is http://localhost:8000/FilmList/films. That isn’t a valid server route either. We need a way to have SPA-internal URLs serve the SPA page (and let the internal routing do the right thing).
To do this, you can simply use fallback routes in the ASP.NET Core server. You’ll want to make sure that any fallbacks are specified after all your other routes (e.g. Razor Pages, Controllers, and Minimal APIs). You do this by using the MapFallback calls. For example, in our case (since we’re using Razor Pages) we can use MapFallbackToPage like so:
app.MapGet("api/films", async (BechdelDataService ds, int? page, int? pageSize) =>
{
  //...
}).Produces<IEnumerable<Film>>(contentType: "application/json")
  .Produces(404)
  .ProducesProblem(500);
app.MapFallbackToPage("/FilmList");
app.Run();
Note that this fallback is not redirecting, just serving that page. That way the URL is preserved for the Vite project to use for its own routing.
This will fall back to the FilmList Razor page (which contains our SPA) for any URL that isn’t matched by routing. That might be too broad, though; you may want to give the fallback a pattern (so it only falls back for that page’s URLs) like so:
app.MapFallbackToPage("/FilmList/{*path}", "/FilmList");
The first parameter of MapFallbackToPage allows you to specify a routing pattern that this fallback applies to. In this way, any URLs that start with /FilmList will just fall back to that page.
I hope this helps some of you using Vite for your own projects. You can get the example for this project at:
https://github.com/shawnwildermuth/codingshorts/tree/main/aspnetvite
I’m happy to answer any of your questions below if I’ve been unclear about any of this!
Thanks for reading.
I recently released a Coding Short video and a blog post about the new JWT Tooling in .NET 7. It was received well, but I didn’t dig into some of the real details of what is happening. If you need to catch up, here is the blog post:
What I didn’t have a chance to explain is everything that the user-jwts tool actually does. It makes several changes:
What it doesn’t do is wire up your startup to use JwtBearer authentication; it only configures the tool as an issuer of JWTs. That wiring is still up to you (a rough sketch follows). Let’s walk through what the tool does change.
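For completeness, here’s a minimal sketch (my addition) of the wiring you still have to add yourself, assuming the Microsoft.AspNetCore.Authentication.JwtBearer package is referenced; in .NET 7 the parameterless AddJwtBearer() can pick up the Authentication:Schemes:Bearer configuration shown below:

var builder = WebApplication.CreateBuilder(args);

// The user-jwts tool only writes configuration; you still register the handler.
builder.Services.AddAuthentication("Bearer")
                .AddJwtBearer();
builder.Services.AddAuthorization();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

// ... map your endpoints, e.g. app.MapGet(...).RequireAuthorization() ...

app.Run();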
The first step is that it adds a new Authentication section to your development settings (appsettings.Development.json):
"Authentication": {
"Schemes": {
"Bearer": {
"ValidAudiences": [
"http://localhost:38015",
"https://localhost:44384",
"http://localhost:5241",
"https://localhost:7254"
],
"ValidIssuer": "dotnet-user-jwts"
}
}
}
It gets the valid audiences by looking at the launchSettings.json (in the Properties folder) of your project. The ValidIssuer is there to match the issuer of the JWTs as configured in user-secrets (see the next section for what I mean).
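For reference, those audience values come straight from the applicationUrl entries in that file; an abbreviated launchSettings.json with the ports above would look roughly like this (the profile name is illustrative):

// Properties/launchSettings.json (abbreviated)
{
  "iisSettings": {
    "iisExpress": {
      "applicationUrl": "http://localhost:38015",
      "sslPort": 44384
    }
  },
  "profiles": {
    "MyApi": {
      "commandName": "Project",
      "applicationUrl": "https://localhost:7254;http://localhost:5241"
    }
  }
}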
In order to sign the JWTs, it needs some security information, and that information lives in the user-secrets file. If your project didn’t support user-secrets yet, the tool adds a UserSecretsId to the project file and then stores the information there. To see what it added, just list the secrets for the project:
> dotnet user-secrets list
In my case, this returns the secret and valid issuer information (this is a throw-away project, so leaking this secret doesn’t matter):
Authentication:Schemes:Bearer:SigningKeys:0:Value = R98yic+EGjR0asjN8eHe2nSLlhBB8tWcebIxHmcOSko=
Authentication:Schemes:Bearer:SigningKeys:0:Length = 32
Authentication:Schemes:Bearer:SigningKeys:0:Issuer = dotnet-user-jwts
Authentication:Schemes:Bearer:SigningKeys:0:Id = e1b964aa
What’s interesting here is that some of these defaults are configurable. By default, when the tool issues a JWT, it uses your operating-system user name as the identity. If you don’t want that, you can simply override it:
> dotnet user-jwts create -n shawn@aol.com
You can even change the name of the issuer with:
> dotnet user-jwts create -n shawn@wildermuth.com --issuer YourIssuerName
I was able to use this information to prototype issuing JWTs (via an API) that re-use the tool’s configuration. You can see here that the GetSection call matches the information in user-secrets:
var bearer = _config.GetSection("Authentication:Schemes:Bearer");
if (bearer is not null)
{
  var uniqueKey = bearer.GetSection("SigningKeys")
                        .Get<SymmetricSecurityKey[]>()?
                        .First()
                        .Key;
  var issuer = bearer["ValidIssuer"];
  var audiences = bearer.GetSection("ValidAudiences")
                        .Get<string[]>();

  var key = new SymmetricSecurityKey(uniqueKey);
  var creds = new SigningCredentials(key, SecurityAlgorithms.HmacSha256Signature);

  // ... use issuer, audiences, and creds to build the token (see the sketch below)
}
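From there, actually minting a token is straightforward. Here’s a rough sketch of the rest (my addition, not from the original post), using System.IdentityModel.Tokens.Jwt with the values read above:

using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using Microsoft.IdentityModel.Tokens;

string CreateToken(string issuer, string audience,
                   SigningCredentials creds, string userName)
{
  // A short-lived token with a subject claim, signed with the same key the tool uses
  var token = new JwtSecurityToken(
    issuer: issuer,
    audience: audience,
    claims: new[] { new Claim(JwtRegisteredClaimNames.Sub, userName) },
    expires: DateTime.UtcNow.AddHours(1),
    signingCredentials: creds);

  return new JwtSecurityTokenHandler().WriteToken(token);
}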
Hope this answers some of your questions. Ping me below if anything is still unclear!