RTS Game: Building the game, plan of attack

In my previous post I announced the start of my dream: building my own game.

In this post I will elaborate further on the goal I have set and how I intend to reach it. Writing a game is, like any project, quite a challenge. It is a continuous process of ‘zooming in’ (doing the actual work) and ‘zooming out’ (keeping an eye on the bigger picture). Only that way can you be sure you reach the goal in the most efficient manner.

Like any project, to keep it clear what to do, there is a list of tasks. It is wise to (ball-park) estimate how much time they will consume. At the same time there is a desired ‘launch date’. This creates a certain tension, as you want as much value (features/tasks) done by the launch date. Usually there is a minimal set of tasks you have to get done.

Since I am now the developer and ‘product owner’ at the same time, I experience both sides: I need to investigate what to do to reach my goals, and at the same time estimate and do the actual work.

This blog post covers the questions:

  • What needs to be done?
  • When will they be done?
  • What is the ‘end-date’?

What needs to be done?

In the grand scheme of things (zoomed out), roughly:

  • make the game ‘feature complete’
  • make my own graphics, sounds, etc
  • easily distributable & installable

What is ‘feature complete’?

I am heavily inspired by the first versions of C&C (RA), and I feel that to have a minimum playable game there should be at least:

  • 1 resource to ‘mine’ to earn money with (est: 8 hours)
  • 1 ‘power’ resource (est: 4 hours)
  • 1 faction to play with as a player (est: 0 hours)
  • A simple tech-tree (structures) (est: 8 hours)
    • Central structure to produce other structures (Const yard in C&C)
    • Power plant
    • Barracks
    • Factory
    • Refinery
  • A few units which have a Rock-Paper-Scissors mechanism (est: 16 hours)
    • Infantry
      • Two types: rocket launchers and soldiers
    • Light unit (fast, trike/quad)
      • 1 or 2 types
    • Heavy unit (tank)
      • 1 or 2 types
  • A (random generated) map to play on (est: 4 hours)
  • A very simple AI to play against (est: 12 hours)
  • A clear objective (i.e. destroy the enemy) (est: 4 hours)
  • A beginning and ‘end’ of the game. (menu to start from, and ‘game won/lost’ screen). (est: 8 hours)

And I think if you really want to push it you can shave off some features. I believe the first step is to come ‘full circle’ as soon as possible, meaning you have the concept of a game: it has a beginning and an end.

From here on I can expand the scope towards my ideal game. But not before I have done the two other important things.

Estimation: 64-80 hours (20% deviation)

Make my own graphics, sounds, …

The next important part is graphics, sounds, music, story, etc. As for graphics: at the moment I use Dune 2 graphics as placeholders. Once I have the game mechanics in place and I know which units/structures I need for the minimum version, I can start creating/acquiring these. So in a way they are not required immediately, but they are required whenever I want to commercially release my own game.

Changing the graphics will certainly have an impact on some implementations. Although I do set up the code as flexibly as possible, there are always cases that I have not thought about.

There is a caveat. Creating graphics is hard. It is not my primary skill. To tackle this I could:

  • learn to create my own (? hours)
  • find someone else who is willing to do graphics for me (0 hours, but $$$)
  • find premade graphics in a marketplace and use those (4 hours searching, and $$$)

The same logic can be applied to Sounds, possibly Music.

Estimation: 0 – 10,000 hours

Seriously: this is a risk and I need to decide on a strategy to get to a more reliable number of hours.

Easily distributable & installable

Release early, release often. It is a phrase I use all the time when I am working for clients. The same goes for my game. There is a difference, however. Usually I work on mid-to-large web applications. There is no need for customers to install anything to use the latest version of the website. When an organisation has implemented Continuous Deployment, it can easily deploy new versions at will. Customers automatically have the latest version on their next visit to the website.

For applications that need to be installed there are several platforms out there. There is a reason why the concept of an ‘App store’ is famous: it delivers a web-like experience.

There is a lot to gain here. The first step, though, is to make sure any user is able to download a distribution for their platform (Windows, Mac OS, Linux) and install the game. I already took the liberty of deciding to support these platforms. So yes, I target the PC platform. No mobile, game consoles, etc.

In that sense there are a few steps to be taken:

  1. Offer a distribution somewhere (website? distribution platform? need to figure out)
  2. Provide an easy installation procedure (how? install4j? etc)
  3. Provide an app-store like experience. (use a distribution platform like Steam?)

Estimation: 8-16 hours

Estimation is based on creating an installer.

When will it be done?

OK great, so there is a lot to do. So when will all this be done?

Let’s bring a few numbers together: the estimates and the available time.

Bringing the estimates together

I take the hours from all three sections above and sum them.

Min: 64 + 0 + 8 = 72

Max: 80 + 0 + 16 = 96

(I purposely did not add the 10,000 hours from the graphics section; it is seriously way too uncertain.)

So basically, I am estimating that 72 to 96 hours should be enough. Meaning, given an 8-hour work day, it would take 9 to 12 days to get a minimum version which is distributable in the easiest way. Without doing anything about graphics, sounds, etc. (using Dune 2 graphics as a ‘stub’).

Estimated time needed: 72 to 96 hours

This excludes graphics.

How much time do I have?

To calculate the amount of time I realistically have, I also do a min/max calculation. The minimum being the amount of time I am 100% sure I have; the max being added hours I can probably spend, but should not count on.

Minimum hours are easy. I have at least 8 hours a week (1 work day a week) to spend. And every 8 weeks I take a week off to spend even more time. This means that in a period of 10 weeks I can spend 14 days.

The max would be 0 to 8 extra hours a week. I can sometimes spend a weekend, an evening, sometimes more evenings. It really depends on a lot of things.

I like to think in periods of 10 weeks; I consider a full 10 weeks an ‘iteration’. When working on Magic Gatherers we used this mechanism and it worked out pretty well. Also, every iteration had a particular focus. The first iteration, for instance, was ‘from idea to launched product’. For this game it would be different of course.

One other aspect of time is ‘when should it be done?’. The easiest thing would be to use the iteration end date. Considering that the first work day falls within this week, this week counts as the first of the iteration. An iteration of 10 weeks means the end date is the 29th of October.

Meaning:

End date: 29th of October 2017

Time available: 14 days to 19 days (112 to 152 hours)

Looks like an easy feat!… oh wait, I have seen this before. This probably means I forgot something. Ah yes, the graphics… and so many more unknowns. Looks like it will be a close one.

So when will it be done?

Yes, good question. In this case I choose the fixed-time, flexible-scope approach. I do know that if the minimal scope is not met, I will not launch the product. Then again, I really DO want to launch a product, so I will probably be very harsh on the scope and just make it fit.

This brings me to another topic: priorities and goals. What do I try to achieve by releasing something at the end of iteration #1? I will elaborate on that in a different blog post.

Conclusion

It looks like it is feasible to get a minimalistic, feature complete game done within the first iteration. That would mean that by the 29th of October (at the latest) a downloadable and installable game should be available.

However, there are more things that need to be done that were not explicitly defined in the three overall phases. Think of writing dev blogs, YouTube videos demoing features, a monthly in-depth video for patrons, and so forth.

The only way to know is to just do it!

Kickstarting my dream! – building my own Real-Time-Strategy Game

My dream is to build my own games and make a living out of it. It is fairly obvious to me now, but at the same time I never dared to make it this explicit.

Once you decide to really pursue your dreams, you get into some kind of planning stage. How am I going to build any game? What kind of game? And how do you find the time? Practical stuff, dreamy stuff. There is so much needed to build a game, where to begin?

For me it is obvious I want to build an RTS game. I have fond memories of Dune 2, Command & Conquer and Warcraft. Those games got me into programming. No wonder I built Dunedit, Arrakis, Dune 2 – The Maker, and am still working on my own engine today. You could say: I have a strong desire to build my own RTS.

So what is needed to build a game on your own? Obviously you need practical skills: coding, graphics, sounds, music, etc. I’m convinced you can learn just about anything – although that does not make you an expert at it. I can code, and I have some skill in music. And if I can’t ‘create’ it myself, there are various resources out there…

Then the next question is, when am I going to build this? It requires some serious time. How am I going to make time for this? Especially if you have a family, work, etc.

The answer to that is: you ‘make’ time. My experience with working on Magic Gatherers is that I can free up at least 1 day a week, and every 8/9 weeks 5 days. Freeing up basically means taking unpaid leave.

This means that, over a period of 10 weeks, I can allocate 14 days. A full two weeks of time I can spend on my own stuff: 10 weekly days + 4 extra days (from the 5-day week).

As a freelancer I can more easily free up time, but it also means I need to watch my cash flow and make sure that my family can rely on a steady income. This is my starting point; from here I want to free up more time (but I need to prepare for that).

In the meantime, given that I know the time I can spend on the game this year, it becomes obvious that I have to prioritize the things to do. This is related to goals, which I will talk about in a later post.

If you want to follow my progress, and get a more in-depth view of what I encounter while working towards my dream, follow (or support) me on Patreon.

Magic Gatherers looking for Beta testers!

Do you play Magic The Gathering? Do you want to keep track of your games (win/lose ratio)? Perhaps you want to know which deck was used? How about finding new players to play against or organizing your own Tournament?

Sounds awesome right!?

That’s something I am working on right now with 2 other guys.

We play MTG and we believe it should be a social and fun happening. Anything technical should be supportive: as unintrusive as possible while helping you where it can.

To do that we’re building an application that can do the following:

  • keep track of your matches, which deck did you play with against whom and did you win/lose?
  • easy way to find other people to play against (duels, a Tournament, etc)
  • easy Tournament organisation. Set up a Tournament yourself, or join one easily.
  • have fun achievements along the way

If you’re interested, head over to our website and subscribe to our mailing list to get invited to the first version of the app asap!

Perhaps you have some ideas of your own? Don’t hesitate to share them, either via email or by commenting on this reddit post.

    Integration testing your Asp .Net Core App – Dealing with Anti Request Forgery (CSRF), Form Data and Cookies

    1. Integration testing your asp .net core app with an in memory database
    2. Integration testing your asp .net core app dealing with anti request forgery csrf formdata and cookies (this)

    This post can be considered a sequel to Setting up integration testing in Asp.Net Core. It builds upon some code provided there.

    Running integration tests is awesome. (Be mindful though: They Are A Scam as well).

    So great, you got it up and running. You will probably run into a few (practical) things when dealing with slightly more useful scenarios. In this case I describe visiting a page (GET) and POSTing a form. We also deal with the case where you have set up Anti Request Forgery.

    I will give you a few hints of code. For your convenience I have also put them up as GitHub Gists.

    Disclaimer about the presented code:
    I have come to this code after some investigation, but I lost track of what source led to what piece of code. So if you are (or know) the source of pieces of this code, please notify me and I will give credit where credit is due.

    Now that we have that out of the way, let’s get started!

    Context – some example code

    To make things easier to explain, let’s say you have a GET and a POST action defined (for the same URL). The GET delivers a view with a form; in your integration test you want to do a POST to the same URL as if the user filled in the form. (Since we’re not UI testing, we don’t care about the HTML – no need to play browser here.)

    In our example, let’s say we have some code like this:

    // GET request to awesomesauce
    [Route("awesomesauce")]
    public async Task<IActionResult> AwesomeSauce()
    {
    	var model = new MyAwesomeModel();
    	return View(model);
    }
    
    
    // POST request to awesomesauce
    [HttpPost, Route("awesomesauce"), ValidateAntiForgeryToken]
    public async Task<IActionResult> AwesomeSauce(MyAwesomeModel myAwesomeModel)
    {
    	// if valid, do stuff
    	
    	// else...
    	return View(myAwesomeModel);
    }
    

    Your integration test would look like this:

    
    [Collection("Integration tests collection")]
    public class AwesomeSauceTest : AbstractControllerIntegrationTest
    {
    	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
    	{
    	}
    
    	[Fact]
    	public async Task Visits_AwesomeSauce_And_Posts_Data()
    	{
    		var response = await client.GetAsync("/awesomesauce");
    		response.EnsureSuccessStatusCode();
    
    		// How do we do this? Send data (POST) - with anti request forgery token and all?... lets find out!
    		//var response = await client.SendAsync(requestMessage);
    	}
    }
    

    In the test we marked our questions. How do we post data? And how do we send this AntiRequestForgery token?

    Let’s begin with POSTing data. In a convenient world I would like to present a Dictionary with keys and values, then simply pass that as the request BODY and let some helper method transform it into a true HttpRequestMessage. I made such a thing myself.
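
    In essence it only has to wrap the dictionary in a FormUrlEncodedContent. Here is a minimal sketch of what such a PostRequestHelper can look like; I include the CreateWithCookiesFromResponse variant right away, which delegates to the CookiesHelper shown further below (CopyCookiesFromResponse is simply the name I give that delegation here):

    using System.Collections.Generic;
    using System.Net.Http;
    
    public static class PostRequestHelper
    {
    	// Builds a POST request with the given form fields as an
    	// application/x-www-form-urlencoded body.
    	public static HttpRequestMessage Create(string path, Dictionary<string, string> formPostBodyData)
    	{
    		return new HttpRequestMessage(HttpMethod.Post, path)
    		{
    			Content = new FormUrlEncodedContent(formPostBodyData)
    		};
    	}
    
    	// Same as Create, but also copies the cookies from a previous
    	// (GET) response onto the request, via the CookiesHelper below.
    	public static HttpRequestMessage CreateWithCookiesFromResponse(string path,
    		Dictionary<string, string> formPostBodyData, HttpResponseMessage response)
    	{
    		var request = Create(path, formPostBodyData);
    		return CookiesHelper.CopyCookiesFromResponse(request, response);
    	}
    }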

    With the PostRequestHelper we can now POST data like so:

    public class AwesomeSauceTest : AbstractControllerIntegrationTest
    {
    	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
    	{
    	}
    
    	[Fact]
    	public async Task Visits_AwesomeSauce_And_Posts_Data()
    	{
    		var response = await client.GetAsync("/awesomesauce");
    		response.EnsureSuccessStatusCode();
    
    		var formPostBodyData = new Dictionary<string, string>
    			{
    				{"Awesomesauce.Foo", "Bar"},
    				{"Awesomesauce.AnotherKey", "Baz"},
    				{"Any_Other_Form_Key", "Any_Other_Value"}
    			};
    
    		var requestMessage = PostRequestHelper.Create("/awesomesauce", formPostBodyData);
    
    		// TODO: AntiRequestForgery token
    
    		response = await client.SendAsync(requestMessage);
    
    		// Assert
    	}
    }
    

    Well, that is easy, isn’t it?

    If you paid attention you already saw a hint in the PostRequestHelper about a CookiesHelper. Although it is not needed to deal with AntiRequestForgery, it is a handy tool. I’ll explain it below.

    Dealing with the AntiRequestForgery token

    In general it is easy: you do a GET and in its response you receive a token. You extract that token, put it on your next POST request, and you’re done.

    To extract the token, you can use a small helper.
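
    A sketch of such an AntiForgeryHelper: it assumes the token ends up in the HTML as the usual hidden __RequestVerificationToken input and fishes its value out with a regular expression (adapt the regex if your markup differs):

    using System.Net.Http;
    using System.Text.RegularExpressions;
    using System.Threading.Tasks;
    
    public static class AntiForgeryHelper
    {
    	// Reads the HTML of a GET response and pulls out the value of the
    	// hidden __RequestVerificationToken input rendered in the form.
    	public static async Task<string> ExtractAntiForgeryToken(HttpResponseMessage response)
    	{
    		var html = await response.Content.ReadAsStringAsync();
    		var match = Regex.Match(html,
    			@"name=""__RequestVerificationToken""[^>]*\bvalue=""([^""]+)""");
    
    		return match.Success ? match.Groups[1].Value : null;
    	}
    }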

    Now we can use it in our test like so:

    public class AwesomeSauceTest : AbstractControllerIntegrationTest
    {
    	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
    	{
    	}
    
    	[Fact]
    	public async Task Visits_AwesomeSauce_And_Posts_Data()
    	{
    		var response = await client.GetAsync("/awesomesauce");
    		response.EnsureSuccessStatusCode();
    
    		string antiForgeryToken = await AntiForgeryHelper.ExtractAntiForgeryToken(response);
    
    		var formPostBodyData = new Dictionary<string, string>
    			{
    				{"__RequestVerificationToken", antiForgeryToken}, // Add token
    				{"Awesomesauce.Foo", "Bar"},
    				{"Awesomesauce.AnotherKey", "Baz"},
    				{"Any_Other_Form_Key", "Any_Other_Value"}
    			};
    
    		var requestMessage = PostRequestHelper.Create("/awesomesauce", formPostBodyData);
    
    		response = await client.SendAsync(requestMessage);
    
    		// Assert
    	}
    }
    

    And voila, you can now do a POST request which will pass the token and make the POST happen. By omitting the token (or using a different one) you can test whether your action is protected against CSRF. Although I would not try to test the framework itself, I would advise having tests in place that make sure specific controller actions are protected.
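
    For example, an extra [Fact] in the same test class could look like this. The 400 Bad Request is what the ValidateAntiForgeryToken filter returns by default when token validation fails; adjust the expected status code if you handle that differently:

    	[Fact]
    	public async Task Posting_Without_AntiForgeryToken_Is_Rejected()
    	{
    		// note: no __RequestVerificationToken in the form data
    		var formPostBodyData = new Dictionary<string, string>
    			{
    				{"Awesomesauce.Foo", "Bar"}
    			};
    
    		var requestMessage = PostRequestHelper.Create("/awesomesauce", formPostBodyData);
    
    		var response = await client.SendAsync(requestMessage);
    
    		// needs 'using System.Net;' for HttpStatusCode
    		Assert.Equal(HttpStatusCode.BadRequest, response.StatusCode);
    	}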

    Dealing with Cookies

    As a bonus, let’s deal with cookies. You probably need to deal with those. Sometimes you need to post the data again (as if you were a real browser). In that case, to make life easier, there is a method on the PostRequestHelper called CreateWithCookiesFromResponse. This basically creates a POST request and copies over your cookies from a (previous) GET request.

    The CookiesHelper takes care of extracting cookies from a response and putting them back on a new request.
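
    A sketch of what that can look like (the method names are simply my own minimal take):

    using System.Collections.Generic;
    using System.Linq;
    using System.Net.Http;
    
    public static class CookiesHelper
    {
    	// All Set-Cookie header values of a (GET) response.
    	public static IEnumerable<string> ExtractCookiesFromResponse(HttpResponseMessage response)
    	{
    		IEnumerable<string> values;
    		return response.Headers.TryGetValues("Set-Cookie", out values)
    			? values
    			: Enumerable.Empty<string>();
    	}
    
    	// Puts the given cookies on a request as a single Cookie header,
    	// keeping only the name=value part (attributes like Path are dropped).
    	public static HttpRequestMessage PutCookiesOnRequest(HttpRequestMessage request, IEnumerable<string> cookies)
    	{
    		var nameValuePairs = cookies.Select(c => c.Split(';').First()).ToList();
    		if (nameValuePairs.Any())
    		{
    			request.Headers.Add("Cookie", string.Join("; ", nameValuePairs));
    		}
    		return request;
    	}
    
    	// Convenience method used by PostRequestHelper.CreateWithCookiesFromResponse.
    	public static HttpRequestMessage CopyCookiesFromResponse(HttpRequestMessage request, HttpResponseMessage response)
    	{
    		return PutCookiesOnRequest(request, ExtractCookiesFromResponse(response));
    	}
    }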

    In our example test above, we could have used it like this:

    public class AwesomeSauceTest : AbstractControllerIntegrationTest
    {
    	public AwesomeSauceTest(TestServerFixture testServerFixture) : base(testServerFixture)
    	{
    	}
    
    	[Fact]
    	public async Task Visits_AwesomeSauce_And_Posts_Data()
    	{
    		var response = await client.GetAsync("/awesomesauce"); // this returns cookies in response
    		response.EnsureSuccessStatusCode();
    
    		string antiForgeryToken = await AntiForgeryHelper.ExtractAntiForgeryToken(response);
    
    		var formPostBodyData = new Dictionary<string, string>
    			{
    				{"__RequestVerificationToken", antiForgeryToken}, // Add token
    				{"Awesomesauce.Foo", "Bar"},
    				{"Awesomesauce.AnotherKey", "Baz"},
    				{"Any_Other_Form_Key", "Any_Other_Value"}
    			};
    
    		// Copy cookies from response
    		var requestMessage = PostRequestHelper.CreateWithCookiesFromResponse("/awesomesauce", formPostBodyData, response);
    
    		response = await client.SendAsync(requestMessage);
    
    		// Assert
    	}
    }
    

    Conclusion

    After we have set up integration testing, we want to do some basic interactions with our application. Using the AntiForgeryHelper we can extract a token. Using the PostRequestHelper we can construct a new request with form data and easily send it. Combined, they can make any scenario work with CSRF protection.

    In case you need to pass over cookies information you can use the CookiesHelper.

    Integration Testing your Asp .Net Core app with an in memory database

    Parts:

    1. Integration testing your asp .net core app with an in memory database (this)
    2. Integration testing your asp .net core app dealing with anti request forgery csrf formdata and cookies

    Revisions:

    • 14th August 2016 – updated for .Net Core 1.0
    • 29th April 2016 – first version covering RC1

    Recently I have been working with .Net (C#). And we’re working in .Net Core, which is awesome (new stuff, wooh yeah!). I wanted to set up integration testing and it was tough to find resources to make it all happen. Which is pretty obvious considering how new some of this stuff is.

    I found some articles scattered around this topic, but there is no full guide that goes from ‘start’ to ‘full integration testing + in memory database’. So, because of that lack, here is my take.

    I couldn’t have made it this far without some notable resources (see below) and the answer to my GitHub question (with a friendly and very constructive response, thanks Asp.Net guys!).

    Overview: What this blog post covers

    1. Setting everything up – your first integration test
    2. Run your tests against an in memory database + making sure the memory database has its tables set up.
    3. Then make it as fast as possible

    Do note: an in memory database is NOT the same as your SQL Server. But if that does not bother you (at least not for these test cases), no worries there.

    Step 1: First make it work – setting everything up

    I assume you don’t have any integration test running yet. I am using xUnit. Read this well-written article[#1] on how to set up your integration test base. I summarise it quickly here, but if you get stuck read the article. Then get back here.

    Hook up your dependencies; here are mine (taken from project.json):

    "dependencies": {
        ... // my project dependencies
        "FluentAssertions": "4.2.1",
        "xunit": "2.1.0",
        "xunit.runner.dnx": "2.1.0-rc1-build204",
        "Microsoft.AspNetCore.TestHost": "1.0.0",
    }

    Within your test class, define:

    
    	public TestServer server { get; }
    
    	public HttpClient client { get; }
    
    

    Then in the constructor of your test class:

    var builder = new WebHostBuilder().UseStartup<Startup>();
    
    server = new TestServer(builder);
    
    client = server.CreateClient();
    

    Now also create a test case. Something along the lines of:

    
    [Fact]
    public async void TestVisitRoot() {
        var response = await client.GetAsync("/");
        response.EnsureSuccessStatusCode();
    }
    
    

    This is basically the example from the original article, but stripped down (where applicable).

    Try running the test case first. It should run the app as if you ran it normally and visit the homepage. It also connects to your real database, web services and whatnot.

    Congrats, you completed step one. On to the next, where we will be…

    Step 2: Replacing database with an in-memory SQLite database

    I assume you use a SQL Server in your ‘real world’ scenario. For integration testing you want to have a predictable state before running the test. An empty database is pretty predictable (after you fill it up with test data ;-)).

    Also an in-memory database saves you the hassle of dealing with files, permissions, removing (temp) files, etc.

    In order to inject our in memory database, we need to override the Startup class. We need to create a seam in our class so we can write our test-specific (i.e. overriding) database setup code there.

    We start by creating a class TestStartup which extends from Startup. Then we make sure we use TestStartup in the constructor of our test class:

    var builder = new WebHostBuilder()
    	.UseStartup<TestStartup>(); // use our TestStartup version
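
    The TestStartup class itself can start out as little more than an empty subclass. A minimal sketch, assuming your Startup has the usual constructor taking an IHostingEnvironment (the interesting overrides are added in the steps below):

    using Microsoft.AspNetCore.Hosting;
    
    public class TestStartup : Startup
    {
    	// assumes Startup has the usual constructor taking an IHostingEnvironment;
    	// match whatever your own Startup constructor looks like
    	public TestStartup(IHostingEnvironment env) : base(env)
    	{
    	}
    
    	// the virtual SetUpDataBase and EnsureDatabaseCreated methods of Startup
    	// will be overridden here, as shown in the steps below
    }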
    
    

    In your Startup class you need a method where you set up your database. You probably do this within a Configure or ConfigureServices method. Instead of wiring up the database within that method, extract that code into a separate method and call it SetUpDataBase.

    The code in that method might look a bit like this:

    public virtual void SetUpDataBase(IServiceCollection services)
    {
    	services
    		.AddEntityFramework()
    		.AddSqlServer()
    		.AddDbContext<YourDatabaseContext>(options =>
    			options.UseSqlServer(
    				Configuration["Data:DefaultConnection:ConnectionString"]
    		));
    }
    

    Make sure you define this method as virtual. This allows us to override it within our TestStartup class.

    Before we override the method, we need to make sure we have the appropriate SQLite dependency defined in our project.json. So add it. This should make the dependencies look like:

    ...  
    "dependencies": {
        ...
        "FluentAssertions": "4.2.1",
        "xunit": "2.1.0",
        "xunit.runner.dnx": "2.1.0-rc1-build204",
        "Microsoft.AspNetCore.TestHost": "1.0.0",
        "Microsoft.EntityFrameworkCore.InMemory": "1.0.0",
        "Microsoft.EntityFrameworkCore.Sqlite": "1.0.0",
      }
    

    Now, in your TestStartup, override the SetUpDataBase method and let it set up your SQLite in memory database:

    public override void SetUpDataBase(IServiceCollection services)
    {
    	var connectionStringBuilder = new SqliteConnectionStringBuilder { DataSource = ":memory:" };
    	var connectionString = connectionStringBuilder.ToString();
    	var connection = new SqliteConnection(connectionString);
    
    	services
    		.AddEntityFrameworkSqlite()
    		.AddDbContext<YourDatabaseContext>(
    			options => options.UseSqlite(connection)
    		);
    }
    

    Try running your app. See how it behaves.

    You might run into problems where it complains about not having a database setup, or no tables being found. No worries, there are a few things left to do.

    Ensure creation of database

    At some place in your webapp you most likely create your dbContext, along the lines of:

    //Create Database
    using (var serviceScope = app.ApplicationServices.GetRequiredService<IServiceScopeFactory>()
    	.CreateScope())
    {
    	var dbContext = serviceScope.ServiceProvider.GetService<YourDatabaseContext>();
    
    	// run Migrations
    	dbContext.Database.Migrate();
    }
    

    To make sure your in-memory database actually has a database set up (with tables, etc.), in general you want to do this…:

    //Create Database
    using (var serviceScope = app.ApplicationServices.GetRequiredService<IServiceScopeFactory>()
    	.CreateScope())
    {
    	var dbContext = serviceScope.ServiceProvider.GetService<YourDatabaseContext>();
    
    	dbContext.Database.OpenConnection(); // see Resource #2 link why we do this
    	dbContext.Database.EnsureCreated();
    
    	// run Migrations
    	dbContext.Database.Migrate();
    }
    
    

    Of course you only want this for integration tests, so don’t leave it like that. Again, create a seam. You get:

    // method in Startup.cs
    public virtual void EnsureDatabaseCreated(YourDatabaseContext dbContext) {
    	// run Migrations
    	dbContext.Database.Migrate();
    }
    
    // within your Configure method:
    using (var serviceScope = app.ApplicationServices.GetRequiredService<IServiceScopeFactory>()
    	.CreateScope())
    {
    	var dbContext = serviceScope.ServiceProvider.GetService<YourDatabaseContext>();
    	EnsureDatabaseCreated(dbContext);
    }
    

    And in your TestStartup you override it like so:

    // method in TestStartup.cs
    public override void EnsureDatabaseCreated(YourDatabaseContext dbContext) {
    	dbContext.Database.OpenConnection(); // see Resource #2 link why we do this
    	dbContext.Database.EnsureCreated();
    
    	// now run the real migrations via the base class
    	base.EnsureDatabaseCreated(dbContext);
    }
    

    Same trick. Now re-run your test. It should work now. You could leave it like this. There are a few challenges up ahead, like dealing with cookies, anti-request forgery and so on. I might blog about those too.

    Note: overriding like this might not be the only/best approach after RC1, as there are changes coming in RC2 that should make it way easier to add your own dependencies/setup.

    Now, the downside of integration tests is that booting them up is very slow compared to unit tests. So you preferably want to do that only once and then run all your tests. Yes, that also has downsides: your tests should be careful when sharing state throughout one webapp run. So make sure you keep your tests isolated.

    Step 3: Speed up your integration tests! Use a TestFixture + xUnit collection

    Inspired by another article[#3] and xUnit’s CollectionDefinition feature, you can make sure your webapp is only booted once.

    Sharing webapp between test cases using a TestFixture

    This solves the problem: creating a web app for each test case.

    To do this, in short, create a new class, for instance TestServerFixture. Move your client/server setup into this class, so it looks like this:

    public class TestServerFixture : IDisposable
    {
    	public TestServer server { get; }
    
    	public HttpClient client { get; }
    
        public TestServerFixture()
        {
    		// Arrange
    		var builder = new WebHostBuilder()
    			.UseEnvironment("Development")
    			.UseStartup<TestStartup>();
    			// anything else you might need?....
    
    		server = new TestServer(builder);
    
    		client = server.CreateClient();
    	}
    
    	public void Dispose()
        {
    		server.Dispose();
    		client.Dispose();
        }
    }
    

    Note the differences: the TestServerFixture implements the IDisposable interface, and your setup, which was in your test class constructor, has moved to the constructor of the fixture.

    Now, to make things easier (as you will create more and more integration test classes), create an abstract test class which is set up to receive the TestServerFixture. Then you can extend from this abstract class in your concrete test classes.

    The abstract class would look like this:

    public abstract class AbstractIntegrationTest : IClassFixture<TestServerFixture>
    {
    	public readonly HttpClient client;
        public readonly TestServer server;
    
    
        // here we get our testServerFixture, also see above IClassFixture.
        protected AbstractIntegrationTest(TestServerFixture testServerFixture)
    	{
    		client = testServerFixture.client;
    		server = testServerFixture.server;
    	}
    }
    

    As you can see we use an IClassFixture, which is used for shared context between test cases.

    This little bit of boilerplate now allows our concrete test classes to look like this:

    public class MyAwesomeIntegrationTest : AbstractIntegrationTest
    {
    	public MyAwesomeIntegrationTest(TestServerFixture testServerFixture) : base(testServerFixture)
    	{
    	}
    
    	[Fact]
    	public async void TestVisitRoot() {
    		var response = await client.GetAsync("/");
    		response.EnsureSuccessStatusCode();
    	}
    
    	// etc more tests...
    }
    

    Sharing your webapp between test classes using xUnit’s CollectionFixture

    This solves the problem: creating a web app (using a TestFixture) for each test class.

    So, awesome, you have multiple test classes and you notice that you boot up your webapp every time, and you want to solve this for (some) classes. Well, that is possible, with obvious pros and cons which I won’t dive into (beware of state! ;-)).

    Setting up a CollectionFixture is also explained here. But for completeness’ sake, let me rephrase:

    First create a class that we will use to define a collection. Like so:

    [CollectionDefinition("Integration tests collection")]
    public class IntegrationTestsCollection
    {
    	// This class has no code, and is never created. Its purpose is simply
    	// to be the place to apply [CollectionDefinition] and all the
    	// ICollectionFixture<> interfaces.
    }
    

    Now, above all test classes you want to include in the Integration tests collection, simply add this line above the class definition:

    [Collection("Integration tests collection")]
    

    Which, in our example above, makes it look like this:

    [Collection("Integration tests collection")]
    public class MyAwesomeIntegrationTest : AbstractIntegrationTest
    {
    	public MyAwesomeIntegrationTest(TestServerFixture testServerFixture) : base(testServerFixture)
    	{
    	}
    
    	// your tests here...
    }
    

    Now run your tests again and note that your webapp boots up only once per collection. Hence if you put everything in one collection, your webapp will only boot once.

    Conclusion

    We can run integration tests. If we want, we can run them against an in memory database. Using seams we can inject our own test setup code, which will probably change in RC2 or later. If we want to speed up our tests we can put them in one “Integration tests collection” and use a single TestFixture.

    Resources:

    1. Asp.net docs – Integration testing
    2. SQLite in memory database create table does not work
    3. Fast testing
    4. My original question/ticket at Github