The adapter pattern in Go
Abstracting the stuff you don’t care about
Every time you encounter a testability problem, there is an underlying design problem. If your code is not testable, then it is not a good design.
—Michael Feathers, “The Deep Synergy between Testability and Good Design”
How do you test a database without a database? Don’t worry, this isn’t one of those Zen puzzles. I have something more practical, but equally enlightening, in mind.
Testing external dependencies
No program is an island, and we often have to communicate with other programs in order to get our work done. For example, we might use some external database such as PostgreSQL, or an internet API such as the weather service we dealt with in An API client in Go.
Any external dependency of this kind presents both a design problem and a testing problem. Sometimes we can solve both problems at once, by using the adapter pattern.
An adapter is a way of grouping together all the code in our system that deals with a particular dependency. For example, we might group all the code that knows how to talk to a specific API into one package, or function, and we could call it the “adapter” for that API.
Adapters are ambassadors
Adapters are also sometimes called ambassadors: their job is to “represent” us to the external system, and vice versa. They deliver vital messages to the foreign embassy, translating them into the appropriate “language” so that they’ll be understood. In turn, they translate and bring back to us the response in a language we can understand.
The net effect of the adapter is to decouple all knowledge about the specifics of the external system from the rest of the program. We can treat it as an obliging ambassador that will ask questions of some foreign power on our behalf, and bring us back the results in a conveniently-shaped diplomatic bag.
Encapsulating all dependency-specific knowledge in a single component, then, solves both our design problem and our testability problem. It means that we don’t need to call the remote API in our tests, and in turn the status of our tests doesn’t depend on whether some external service is available.
Example: a database adapter
Let’s see how the adapter pattern might work with a dependency like some SQL database, for example. Suppose we need to store product information for Acme Widgets, Inc, and we’d like to access it using the classic CRUD methods: Create, Read, Update, and Delete.
So let’s say we define some Widget struct:
type Widget struct {
    ID   string
    Name string
}
Our first attempt at a constructor for Widget might look something like this, with the gory details omitted:
func Create(db *sql.DB, w Widget) (ID string, err error) {
    // SQL: create widgets table if it doesn't exist
    // SQL: insert into widgets table
    // handle possible error
    return w.ID, nil
}
We take some *sql.DB object representing a database handle, instantiated using some specific driver (for example, Postgres). We’ll use that to execute the necessary SQL queries (omitted here) to add the specified new widget to the database.
Dependency expertise and business logic don’t mix
This is fine, of course, and most Go applications that use databases look something like this. But it’s a little awkward, in a couple of important ways. First, knowledge about the specific database server (for example, the idiosyncrasies of its SQL syntax) is embedded in a function that should really only contain business logic: that is, code that implements rules about widgets for our specific customer or problem domain.
We don’t want this key business logic all tangled up with the code that constructs SQL queries for a specific database server. That’s just bad design, because it violates the Single Responsibility Principle: any given function should do more or less one thing. And we’d have to copy and paste the same logic into any other function that stores widgets in a different way.
The more serious problem is that now it’s impossible to test our widget logic without having an external database available, and making real queries against it. Even if this is only some local test server, it’s still annoying. We can’t just run go test: we have to use a Makefile or Docker Compose file or something to start the Postgres server first.
Actually, it is possible to start external services automatically in a Go test, either by running commands via os/exec, or by starting containers using a package such as testcontainers. That’s a valid approach, but a bit heavyweight: it’s sumo, not judo.
Let’s invent an abstract “widget store”
The adapter pattern gives us a more elegant way to design this problem out of existence. How would that work? Well, the underlying issue is that the widget logic is uncomfortably tightly coupled with the “storing things in Postgres” code. Let’s start by breaking that dependency.
It’s presumably not crucial to widgets that they be stored in Postgres, specifically. So let’s invent some completely abstract widget store, described by an interface:
type Store interface {
    Store(Widget) (string, error)
}
We can implement this interface using any storage technology we choose. All we need to do is provide a suitable Store method, and make it work.
Now we can change Create to take an abstract Store, instead of something specific like a *sql.DB:
func Create(s Store, w Widget) (ID string, err error) {
    ID, err = s.Store(w)
    if err != nil {
        return "", err
    }
    return ID, nil
}
Building a trivial Store for testing
In a real application, Create would probably do some widget-related business logic (validation, for example), which we can imagine wanting to test in isolation.
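For instance, Create might reject obviously invalid widgets before calling the store. Here’s a minimal, self-contained sketch of that idea; the “non-empty ID” rule and the nullStore helper are invented for illustration, not part of the real widget package:

```go
package main

import (
	"errors"
	"fmt"
)

// Widget and Store as in the text.
type Widget struct {
	ID   string
	Name string
}

type Store interface {
	Store(Widget) (string, error)
}

// Create validates the widget before handing it to the store.
// The "non-empty ID" rule is an invented example of business
// logic we might want to test in isolation.
func Create(s Store, w Widget) (string, error) {
	if w.ID == "" {
		return "", errors.New("widget ID must not be empty")
	}
	return s.Store(w)
}

// nullStore is a do-nothing Store, just enough to exercise Create.
type nullStore struct{}

func (nullStore) Store(w Widget) (string, error) { return w.ID, nil }

func main() {
	_, err := Create(nullStore{}, Widget{}) // empty ID: should fail
	fmt.Println(err != nil)                 // true
}
```

Because Create depends only on the Store interface, this validation logic is testable with any trivial implementation we like.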
To do that, we still need something that implements Store, for test purposes. But this can be as trivial as we like. In fact, we could use a Go map. The data won’t be persistent, but that doesn’t matter; it only needs to persist for the duration of the test.
type mapStore struct {
    m    *sync.Mutex
    data map[string]widget.Widget
}

func newMapStore() *mapStore {
    return &mapStore{
        m:    new(sync.Mutex),
        data: map[string]widget.Widget{},
    }
}

func (ms *mapStore) Store(w widget.Widget) (string, error) {
    ms.m.Lock()
    defer ms.m.Unlock()
    ms.data[w.ID] = w
    return w.ID, nil
}
Even though this is only a test fixture, we’d still like it to be concurrency safe, so that a mapStore could be shared between parallel tests if necessary. The protective mutex makes this possible.
This isn’t too dissimilar from the example we developed in Walking with filesystems, where we used an fstest.MapFS as a fast, trivial implementation of the file-tree interface fs.FS.
Testing the business logic
Great. With that preparatory work out of the way, we can go ahead and write a test for Create:
func TestCreate_GivesNoErrorForValidWidget(t *testing.T) {
    t.Parallel()
    s := newMapStore()
    w := widget.Widget{
        ID: "test widget",
    }
    wantID := "test widget"
    gotID, err := widget.Create(s, w)
    if err != nil {
        t.Errorf("unexpected error: %v", err)
    }
    if wantID != gotID {
        t.Error(cmp.Diff(wantID, gotID))
    }
}
We can run this test without any awkward external dependencies, such as a Postgres server. That makes our test suite faster and easier to run, and by decoupling the widget logic from the storage logic, we’ve also improved the overall architecture of our package.
A Postgres adapter that also implements Store
In the real program, though, we’ll probably want to store widget data in something like Postgres. So we’ll also need an implementation of Store that uses Postgres as the underlying storage technology.
Suppose we write something like this, for example:
type PostgresStore struct {
    DB *sql.DB
}

func (p *PostgresStore) Store(w Widget) (ID string, err error) {
    // horrible SQL goes here
    // handle errors, etc
    return ID, nil
}
This is an equally valid implementation of the Store interface, because it provides a Store method. The only major difference from the mapStore we built earlier is that there are about 1.3 million lines of code behind it, because it talks to Postgres. Thank goodness we don’t have to test all that code just to know that Create works!
Testing the adapter behaviour by chunking
However, we do also need to know that our PostgresStore works. How can we test it? We could connect it to a real Postgres server, of course, but that just puts us right back where we started. Could we use chunking to avoid this?
With the weather client program in An API client in Go, we split up the API adapter’s behaviour into inbound and outbound chunks. In that case, the outbound part knew how to format the URI for the request, based on the user’s location and key, while the inbound part knew how to decode the weather API’s response into data we can use. Each of these chunks of behaviour was pretty easy to test in isolation.
“Outbound”, in the case of our PostgresStore example, would mean that, given a widget, the adapter generates the correct SQL query to insert it into the database. That’s fairly easy to test, because it’s just string matching. We can play around with a real Postgres and figure out what the SQL needs to be, then check that the adapter generates it correctly.
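One way to make that outbound chunk directly testable is to pull the query-building into a pure function that the adapter then uses. The insertSQL helper below is hypothetical, not part of the code above; it simply shows how “given a widget, produce the SQL” becomes a plain string-matching test:

```go
package main

import "fmt"

type Widget struct {
	ID   string
	Name string
}

// insertSQL is a hypothetical helper that builds the INSERT
// statement for a widget, using Postgres-style $n placeholders.
// Because it's a pure function of the Widget, testing it is
// just a matter of comparing strings and argument slices.
func insertSQL(w Widget) (query string, args []any) {
	return "INSERT INTO widgets (id, name) VALUES ($1, $2)",
		[]any{w.ID, w.Name}
}

func main() {
	q, args := insertSQL(Widget{ID: "w1", Name: "Sprocket"})
	fmt.Println(q)
	fmt.Println(args[0], args[1])
}
```

A test for this function needs no database at all: it asserts that the generated query and arguments are exactly what we worked out against a real Postgres beforehand.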
What about the “inbound” side? Well, our Store interface is deliberately very simple: we can only store widget information, not query it. In a real application, though, we’d also need to retrieve widgets from the Store, and so we’d need to add a Retrieve method to the interface. Its behaviour would be the inbound side of our Postgres adapter. Let’s briefly talk about what that would involve, and how to test it.
In the Postgres case, implementing Retrieve would mean doing a SQL query to get the required data, and then translating the resulting sql.Row object, if any, to our Widget type.
Faking a database using sqlmock
This is awkward to test using a real database, as we’ve seen, but it’s also pretty difficult to fake a sql.DB. Fortunately, we don’t have to, because the sqlmock package does exactly this useful job.
We can use sqlmock to construct a very lightweight DB object that does nothing but respond to a specific query with some static data. After all, we don’t need to test that Postgres works. If it doesn’t, that’s not our problem, thank goodness.
All we need to test on our side is that if we get a row object containing some specified data, we can correctly translate it into a Widget.
Let’s write a helper function to construct a PostgresStore using this fake DB:
import "github.com/DATA-DOG/go-sqlmock"

func fakePostgresStore(t *testing.T) widget.PostgresStore {
    db, mock, err := sqlmock.New()
    if err != nil {
        t.Fatal(err)
    }
    t.Cleanup(func() {
        db.Close()
    })
    query := "SELECT id, name FROM widgets"
    rows := sqlmock.NewRows([]string{"id", "name"}).
        AddRow("widget01", "Acme Giant Rubber Band")
    mock.ExpectQuery(query).WillReturnRows(rows)
    return widget.PostgresStore{
        DB: db,
    }
}
We call it a “fake PostgresStore”, but that’s just a manner of speaking. It’s a perfectly genuine PostgresStore, and there’s a genuine *sql.DB hidden inside that abstraction. It’s just not connected to a real Postgres server. Instead, we’re impersonating a (very simple-minded) Postgres server that only accepts one specific SQL query, and always responds with a single row of fake data.
Testing the adapter against our fake DB
Now we can use our “fake” PostgresStore in a test. We’ll call its Retrieve method and check that we get back the Widget described by our canned test data:
func TestPostgresStore_Retrieve(t *testing.T) {
    t.Parallel()
    ps := fakePostgresStore(t)
    want := widget.Widget{
        ID:   "widget01",
        Name: "Acme Giant Rubber Band",
    }
    got, err := ps.Retrieve("widget01")
    if err != nil {
        t.Fatal(err)
    }
    if !cmp.Equal(want, got) {
        t.Error(cmp.Diff(want, got))
    }
}
Very neat! Finally, let’s write the Retrieve method and check that it passes our test.
func (ps *PostgresStore) Retrieve(ID string) (Widget, error) {
    w := Widget{}
    ctx := context.Background()
    row := ps.DB.QueryRowContext(ctx,
        "SELECT id, name FROM widgets WHERE id = $1", ID)
    err := row.Scan(&w.ID, &w.Name)
    if err != nil {
        return Widget{}, err
    }
    return w, nil
}
And we still didn’t need a real Postgres server, or any other external dependencies. Of course, our confidence in the correctness of the code only goes as far as our confidence that our SQL query is right, and it might not be.
Similarly, the canned row data returned by our fake might not match that returned by a real server. So at some point we’ll need to test the program against a real server.
The work we’ve done here, though, has greatly reduced the scope of our dependency on that server. In fact, we might only need it for one or two tests that we run every once in a while, just to confirm that our assumptions about how it will behave are still valid.
Adapters are just good design
Thanks to the synergy between testability and good design, a pattern that we introduced to simplify testing has actually resulted in a better architecture for our program. We’ve decoupled the “knowing about widgets” code from the “knowing about Postgres” code, by creating the Store abstraction and the PostgresStore adapter that plugs into it.
That change makes it easier to test our program, to be sure, but it also makes it easier to understand and reason about. Indeed, it makes it more testable because it makes it easier to reason about. We don’t have to worry about two different and unrelated layers of behaviour interfering with each other and messing up our test results.
Should we want to introduce the option for a different database backend at some point, such as SQLite, or MySQL, or some arbitrary cloud storage API, that’s now much easier. All we need to do is write a suitable adapter that implements the Store interface.
We could even make it so that users can supply their own Store adapters, to talk to whatever kind of storage engine they want, and they will magically just work with our Widget business logic. How delightful!