Wednesday, September 3, 2014

Every unit test framework I can think of comes with a way to ignore tests; usually it's as simple as adding an attribute to the test. While I was thinking about what syntax to use for ignore in AAATest, I started wondering whether it should be a feature at all.
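In nUnit, for example, skipping a test is a single attribute; the test below still compiles but never runs, and merely shows up as ignored in the results (the Account class is a made-up stand-in):

using System;
using NUnit.Framework;

[TestFixture]
public class AccountTests {
    [Test]
    [Ignore("Broken since the repository refactor")] // skipped and reported as ignored
    public void Withdraw_beyond_balance_throws() {
        var account = new Account(balance: 0); // hypothetical class under test
        Assert.Throws<InvalidOperationException>(() => account.Withdraw(10));
    }
}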
The biggest problem with ignored unit tests is that there is nothing compelling anyone to turn them back on. Once they are ignored they have a tendency to stay ignored forever. A test can stay in an ignored state for years without anyone noticing or caring. The person who decided to ignore it in the first place could be long gone.
At the moment I am leaning towards not providing a mechanism to ignore tests (and not just because it's the easiest thing to do :p). It would force the user to either delete or comment out the test.
Deleting would obviously force you to consider whether the test will really be needed in future. Deleting is something we don't do lightly, only when we are absolutely sure. Commenting is often used as a soft delete; many a person has argued that if code is commented out then it should be deleted. Does this rule hold for unit tests? I'm still not sure, but I'm leaning toward yes.
On a side note, I wonder if it's possible to create an NDepend rule that fails if code is commented out?
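I haven't tried it, but NDepend's CQLinq does expose comment metrics, so the closest thing I can imagine is a rule that flags suspiciously comment-heavy methods; note it can't truly distinguish commented-out code from legitimate comments:

// CQLinq sketch: NbLinesOfComment and PercentageComment are real metrics,
// but the thresholds are guesses and "commented-out code" is only inferred.
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfComment > 10 && m.PercentageComment > 50
select new { m, m.NbLinesOfComment, m.PercentageComment }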
The only functional difference between commenting and ignoring is that commented code does not have any maintenance overhead. If you're ignoring the test, is paying the maintenance cost for it worthwhile? Only if you plan to enable it again. If you're ignoring a test for long enough that it gets out of sync with the code, should it be deleted? Absolutely.
The project I'm working on at the moment has a failing test; it is expected to fail and is something I intend to work on soon. In this case I should have been working in a branch and wasn't. By the time the branch is ready to be pulled back in, the test should be passing.
A branch is the correct place for incomplete code and is therefore allowed to contain failing tests. If your code is in an incomplete state for long enough that you want to ignore tests then you really should be working on a branch anyway.
My current, though not staunchly held, opinion is that tests should be deleted rather than ignored, and that being able to ignore tests is a hack for poor SCM usage. Until someone provides me with a better argument, AAATest will not be able to ignore tests.
Tuesday, September 2, 2014
Appveyor Impressions
So I wanted to set up a CI process for AAATest. I don't have a server lying around, so I thought I'd give this newfangled cloud CI a try. Technically I do have a machine lying around that I could use, but it's on the other side of the room; I can reach the cloud from here. The cloud CI of choice was Appveyor.
First Impressions

My first impression wasn't great. My project is set up to use a git submodule for the wiki, which is just how github works. I created a project file to hold the documentation and the main solution references this. Personally I'd prefer the documentation to exist in the same repo, but that is an issue for another day.
The UI build, as far as I could tell, did not allow for submodules, so straight away I had to resort to the Appveyor build tool, configured via a yaml file in the root of the repository. This is when the problems with Appveyor became immediately apparent.
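For context, the yaml equivalent of "check out with submodules, build, test" is roughly the following. This is my sketch rather than the project's actual file, the solution name is illustrative, and the exact key names are whatever the Appveyor docs currently say:

# appveyor.yml - pull submodules in before building, since the UI build won't.
version: 0.1.{build}
install:
  - git submodule update --init --recursive
build:
  project: AAATest.sln
test:
  assemblies:
    - '**\*.Tests.dll'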
A while ago I wrote about avoiding spaghetti builds. Appveyor violates every one of these rules. I'll go into detail on the biggest issues below.
Who's the Boss?

Even Abed would be confused by this. Does the source contain the build or does the build control the source? It seems to be both. The checkout process goes like this:
1. Appveyor checks out the code.
2. Appveyor executes the build found in the repository.
3. The build checks out more code.
There is a circular relationship between the source and the build, bound to end in tears. This circular relationship leads to the next issue.
No Local Builds

One of the golden rules of a build system is that it has to be able to run locally. Without this you can't try a change without committing it. If the build takes several minutes or more (integration tests, deployments, etc.) then your downtime between iterations is just as long, which is not great for productivity.
Imagine if you had to commit your code and wait for a cloud service to compile before seeing any errors. This is essentially what appveyor forces upon your build.
Build Configuration == Build Process

The other major issue is that there is no real way to configure a build. The configuration is either global or tied to a branch. This limits you to a single build process, and there are many good reasons to have more than one.
I generally want to compile and test on every check-in, but integration tests take much longer and I'm happy for them to run less often. I might want performance tests to be compiled in release mode. I might want deployment to be run only manually, etc.
These scenarios are impossible with the way configuration is handled by Appveyor.
What is Appveyor?

Is it a CI server, competing with TeamCity? Is it a build framework, competing with nant/msbuild? Is it a deployment server, competing with Octopus?
Unfortunately the answer seems to be all three. The only good news is that the CI server is the part that shows the most promise, which is the only part of it I'm interested in using.
Conclusion

Will it work as a CI server for AAATest? At the moment I think it will, just barely. AAATest is a very simple project; build, test and deploy (nuget) are the only build steps required and I think Appveyor will manage.
For a more complex project, with complex configuration scenarios? I think you would drive yourself mad.
I'm hoping they really focus on the CI server part in future and leave the building and deployment to better tools.
Introducing AAATest
I've had a bit of time on my hands lately and was determined to finish one of my projects, or at least make enough progress that I have something to show for it, something that can be improved upon later.
The project I decided on was a unit test framework. I had started work on it several months ago but never got much past the exploratory phase. It was just an experiment to see how far I could push the boundaries of c# and the .net framework, to see how much could be done in a simpler and more expressive way.
The framework is quickly approaching its 0.1 BFW (barely works) milestone and, unlike most of my projects, I'm quite pleased with the direction. So I thought now would be a good time to start writing about it. Today will be about why it was created; future posts will go into more depth on some design decisions and my experiences setting up publishing for a new library.
Warning: Some of the comments below might seem like a criticism of nUnit. My intention isn't to criticize it but to contrast it with my own effort. I've happily used nUnit for the best part of a decade and will probably use it for many years in future.
Unit Testing Evolution

Test frameworks have barely evolved over the last decade. In that time the .net community (and Microsoft itself) has changed quite dramatically. MVC has been embraced as the way to build web apps. Nuget and the countless OSS tools it provides have been embraced. Continuous integration and deployment are no longer foreign words.
But our test frameworks are largely the same. If I had to use a release of nUnit that is ten years old I doubt I would notice. This isn't because nUnit is bad, quite the opposite is true. It hasn't changed because it works, it works well, and we've all just learned to accept the warts as the way things are.
The other part of the problem, I think, is that our test frameworks are general test frameworks. nUnit works well for pure unit testing and it works well for integration tests. Being versatile is a good thing, but that lack of direction has led many down the wrong path with unit testing.
On the other hand, integration testing tools have evolved quite a bit. There was nothing like SpecFlow a decade ago. Selenium barely worked at the time. Of course, with the largely static pages of the period there wasn't as much need for browser-driven testing.
So why did integration testing continue to evolve while unit testing did not? I believe it's because new tools were developed whose focus was entirely on creating a better experience for integration tests. In contrast, our tools for unit testing stayed stuck in their generalist philosophy.
Narrowing Focus

When I started work on AAATest I wanted to see what would happen if I created a test framework purely for unit testing. I was sick of adding arrange, act and assert comments to every single test. AAATest would be more expressive and have this baked in.
My IOC containers know what dependencies my classes require and work it out just fine at run time. But my tests? They require me to create the class in every test fixture, and not just the class but all of its dependencies. AAATest would automatically manage these dependencies.
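For contrast, this is the ceremony being replaced: nUnit plus a mocking library, with OrderService and IOrderRepository as made-up stand-ins. Every test repeats the three comments and the constructor wiring that AAATest is meant to absorb into the framework:

using System;
using Moq;
using NUnit.Framework;

[TestFixture]
public class OrderServiceTests {
    [Test]
    public void Submitting_an_order_saves_it() {
        // Arrange: wire up the class under test and its mocks by hand, every time.
        var repository = new Mock<IOrderRepository>();
        var service = new OrderService(repository.Object);
        // Act
        service.Submit(new Order());
        // Assert
        repository.Verify(r => r.Save(It.IsAny<Order>()));
    }
}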
I wanted it to work with idiomatic code and to push people down the path of writing idiomatic, modern c# code. It is an opinionated framework, the exact extent of which I don't know yet.
Presenting

Check out the AAATest github page. I could go on, but github has the majority of the content and is much better than blogger for code samples.
I also put together the tutorial on TDD with AAATest. I'm planning to extend this in future with more examples.
So far it can only run the tests included in the example test project, but you have to start somewhere.
Thursday, July 24, 2014
8 Biggest Mistakes of Application Architecture
We spend a lot of time arguing over the minutiae of software development. Exceptions or return codes, dynamic or static typing, OO or functional. IMO these have much less to do with the quality of software produced than the architecture does, yet we seem to spend less time discussing that. This is the big picture stuff that makes or breaks a product (and its owners) and in my experience, terrible architecture seems to be the norm in the enterprise world.
The following is a rant about some of the worst anti-patterns I've seen in enterprise software. Each one I have experienced at several companies, though (usually) not all at once. Most of them are cargo cult practices, some can be traced back to official Microsoft guidance, some have no reason at all but just seem to exist anyway.
1. Logical structure != Physical structure
I ran into this one just today. We have two services, A and B, that are different but related, yet they share most or all of the same internal libraries. Let's pretend there are good reasons why these services need to be deployed on different machines, because sometimes there are. This does not mean they need to be in separate .csproj files.
It's just as easy, perhaps even easier, to deploy a single application to multiple machines with different configurations.
The physical deployment of an application does not have to be a 1:1 mapping with the solution structure.
2. Development is not Production
On a related note, it is often claimed that a developer's machine should closely resemble production. I can't think of anything worse. The purpose of a development environment is to DEVELOP; anything that aids this endeavor is welcome, anything that inhibits it is not.
I want to be developing. I don't care about how many configurations the application will be split into in production. I don't want to manage IIS application pools. I don't want to set up SSL certificates. I don't even want to have to go to the login screen every time I recompile. These are all production issues and you shouldn't have to deal with them unless you're configuring a production environment.
A good rule of thumb here is how long it takes a brand new developer to be set up. The answer should be minutes, not hours or days. I actually (very briefly) worked at a company that considered it normal for new developers to spend a week setting up their environment. Companies like this tend to receive the double blow of a week of downtime and a high turnover rate.
It's been 15 years since this appeared at number 2 on the Joel Test. I'm honestly amazed how many companies still screw it up.
3. The .csproj Fetish
There are many good reasons to separate code into different projects. An MVC app and a WCF service might share common functionality in a library project (or maybe not, see 1). But there are just as many good reasons NOT to separate code into a million tiny assemblies.
Recently I came across a solution that had 8 projects just to handle pdf conversion. HtmlToPdfConverter, JpegToPdfConverter, etc. Each project had a single class in a single file. The classes combined were barely long enough to deserve splitting up, let alone into a separate project each.
All this achieves is to slow down build and debug times. MSBuild slows down dramatically as you increase the number of projects. Try feeding your source tree directly into csc and see just how quickly c# code can be compiled.
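If you've never done it, compiling a whole source tree without MSBuild is a one-liner (the paths are illustrative):

rem One assembly from the whole tree, no project files involved.
csc /target:library /out:build\App.dll /recurse:src\*.cs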
Long before IDE's held all our code together, things were organized into folders. Fortunately, IDE's are capable of using this technological wizardry, giving us a very simple mechanism for grouping related functionality. Folders exist, use them!
4. Repositories on Repositories
ORMs frequently get a bad rap for generating n+1 queries, but in the world of enterprise architecture I all too often see a different cause: the Repository Pattern.
ORMs actually come with a repository pattern built in. In nHibernate (which I'll use as my example) the repository is the ISession interface. It's simple and flexible; it can handle nearly every type of query you need, and the ones it can't should probably not be performed by an ORM anyway.
An architect or developer will come along and build their own repository on top of this one, but with fewer features and no flexibility. Just in case you want to change your database one day. How many of you have ever actually seen that happen?
The problem with putting data access behind a repository is that the repository has no context of what data is required for the task at hand. Does it need all the children of that entity? Should it load a parent entity? What should it filter by? It ends with a million permutations of a Find method for queries that the ORM's own repository is perfectly capable of expressing.
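For example, the calling code can tell ISession exactly what shape of data the current screen needs. The entity names here are invented, but the API is stock NHibernate:

using System.Linq;
using NHibernate;
using NHibernate.Linq;

// The ORM's own "repository": the query states precisely what to load,
// so the children arrive in one round trip instead of n+1 lazy loads.
using (ISession session = sessionFactory.OpenSession()) {
    var orders = session.Query<Order>()
        .Where(o => o.Customer.Id == customerId)
        .FetchMany(o => o.Lines)
        .ToList();
}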
Why does this cause n+1 errors? Because it abstracts away the very tool that has the best way of resolving them.
Advanced Idiots will return a model with lazy loading, essentially creating an ORM on top of an ORM.
5. The Service Layer
This one only applies to web apps.
I'm not talking about a logical service layer here, but a real physical one, usually with WCF as the glue. I think the root cause of this is the same as number 1: architects assuming that the physical structure and the logical structure need to match completely.
Numbers everyone should know are unfortunately numbers that architects don't seem to know. Getting data from RAM is a lot faster than getting it from the network; the canonical figures put a main memory reference at around 100 nanoseconds and a round trip within one datacenter at around 500,000. The service is also likely to make network connections of its own, usually to a database. The end response time is cumulative; having to make multiple service calls quickly adds up, and it won't even show up in a profiler.
The most frequent justification I hear for this is scalability. Having a service layer on a different machine than the web server will somehow, magically, help the application scale in a way that having multiple web servers (maybe even a farm) will not. If the applications weren't spending so much time serializing objects and waiting for synchronous remote procedure calls then maybe they would scale a bit more linearly.
The other justification I hear a lot is that service oriented architectures are considered good architecture. I'll probably save this for another rant, but a service layer and SOA are not synonyms, they are completely unrelated to one another.
Advanced Idiots will combine this with the repository pattern and have n+1 issues spanning multiple network hops.
Super advanced idiots will have services that call services that call services. Displaying a single piece of data to a user can cross a dozen network/process boundaries.
Super amazeball idiots will instantiate a service layer in its own process every time a method needs to be called on it. Seriously, I've seen this in a production environment handling millions of dollars a month.
6. Replacing new with Resolve
This isn't an attack on IOC containers; I think they're great, but in the wrong hands they are disastrous. The concept is simple enough, which is why every man and his dog have created a toy IOC container. Unfortunately, a lot of people don't seem to get to the end of tutorial 1.
An IOC container is designed to wire up structural dependencies. If a class name ends with Service, Factory or Handler it is a good candidate to go in the container. If the methods of a class contain adjectives then it is a good candidate to go in the container. If a class name ends with Model, DTO, Request or Response then it probably should NOT go in the container. If a class frequently appears in a list it should probably NOT go in the container.
If an application is full of calls to Resolve<T>() then it was built by someone who doesn't understand the basic concepts of IOC.
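The shape to aim for instead looks roughly like this; Container, Register and Resolve are stand-ins for whatever real container you use, and the domain types are invented:

// Plain constructor injection: the class never sees the container.
public class OrderService {
    private readonly IOrderRepository repository;
    public OrderService(IOrderRepository repository) {
        this.repository = repository;
    }
}

// The composition root: ideally the only Resolve call in the application.
public static class Bootstrapper {
    public static OrderService Build() {
        var container = new Container(); // stand-in for a real container
        container.Register<IOrderRepository, SqlOrderRepository>();
        return container.Resolve<OrderService>();
    }
}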
Advanced Idiots will combine this with service layers (see number 5). I've actually seen IOC containers used to resolve factories that call web services that resolve POCO objects. I've actually seen an architect defend this as good architecture. I've actually left work at lunch time before the urge to kill became too great...
7. CRUD
So many applications start out life as a simple forms-over-data interface. But using CRUD as the basis for an application architecture is doomed from the start. CRUD is built around the data model, and non-developers are horrible at thinking about the data model; they really shouldn't have to.
There is simple crud, like active record, but this is about the more advanced CRUD machinery: CRUD DAO's, CRUD Repositories, CRUD View Models, CRUD DTO's. So much work for such a limited pattern.
The root of this, of course, is the developer's innate desire to generalize common patterns and come up with elaborate uses of generics, reducing the amount of code we have to write to solve the problem wrong.
Software should be modeled around users and the actions they perform, not rows in a database.
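Concretely, the difference looks like this (all names invented): instead of flattening the user's intent into a generic row update, model the action they actually performed and let it carry its meaning all the way down.

// CRUD: what did the user actually do? Nobody knows.
customerRepository.Update(customerDto);

// Task based: the action is the unit of design.
public class SuspendCustomerCommand {
    public int CustomerId;
    public string Reason;
}

system.Execute(new SuspendCustomerCommand { CustomerId = 42, Reason = "Non-payment" });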
8. Deployment
There seems to be a rule that the more a company talks about being agile, the worse their deployment procedures are. I would argue that being able to deploy frequently (dare I say continuously) is one of the defining features that show your agility as an organization.
If you cannot have a bug fix in a customer's hands within minutes of it being fixed (excluding testing), then your organization is not agile. Telling users that they have to wait for next month's deploy to have a critical issue fixed is not agile. Being unable to put up and tear down test environments on a whim is not agile.
In far too many companies, doing a production release is a nerve-wracking affair that can consume several days. Numbers 1, 2 and 5 are the major causes of deployment headaches.
Conclusion
The cause of every one of these issues has been the same: someone with architect in their title. We really should reconsider what the role of an architect is and whether they should be involved with technical decisions, or involved at all.
Thursday, March 13, 2014
Android: Take back your builds
So I started getting back into android development, reviving a little app I published a few years ago. The app itself was fairly crappy by android 1.6 standards (get off my lawn!) and now it's just looking terrible.
So I fired up the shiny new android studio, but it looked like I was going to have to start the project from scratch and port everything over. After getting frustrated trying to do the simplest things, like adding a library, I went back to Eclipse. Several arcane build errors and several even more arcane eclipse ones later, it was clear this wasn't going to work.
Aside from the terrible UIs of both IDEs, there was something much more important: I wasn't getting any functionality out of them either, they were just getting in my way. I wanted to try the new atom editor but as yet don't have an invite, so I went back to trusty old vim with ant. This is when I discovered android's dirty little secret: the build system.
Monolithic Madness
A build system, as with other aspects of development, works best when you start with discrete parts and assemble them into a whole. I expected to find some ant tasks that wrap the android tools, ready to be plugged in so I could get going.
Alas, google instead gives us monolithic tools that define your build system for you. Attempting to go deeper gives you a "here be dragons" warning. These tools are at the very center of android development, and yet their documentation consists of:
The other platform tools, such as aidl, aapt, dexdump, and dx, are typically called by the Android build tools or Android Development Tools (ADT), so you rarely need to invoke these tools directly. As a general rule, you should rely on the build tools or the ADT plugin to call them as needed.
I firmly believe that this is at the center of why android IDE's are so lackluster: they have to conform to this monolithic build system. Aside from the "one build to rule them all" approach, building an android app is a very complicated procedure. This isn't "compile and run" like you find in other types of projects; these are the steps to build a basic, runnable android application (illustrated here):
- Compile layouts, resources, etc into a resource file.
- Generate an R.java source file; this is needed just to be able to compile your code.
- Compile your real code.
- Turn your .class files into .dex files.
- Combine your resource file with your .dex files into an .apk.
- Sign your .apk.
- Finally we get to run!
So to create a build system the way you want it, you have to replicate all of this with little to no documentation. And that is what I'm about to do.
Step 1: Housekeeping
The first thing we have to do when defining our build is to know where the android sdk, our libraries and everything else are, so in ant we have:
<property name="android.sdk" location="D:\Program Files (x86)\Android\android-studio\sdk" />
<property name="android.aapt" location="${android.sdk}\build-tools\android-4.4.2\aapt.exe" />
<property name="android.jar" location="${android.sdk}\platforms\android-19\android.jar" />
<property name="android.dex" location="${android.sdk}\build-tools\android-4.4.2\dx.bat" />
<property name="android.adb" location="${android.sdk}\platform-tools\adb.exe" />
<property name="build.resource" location="build\aapt\resource.jar" />

<record name="buildlog.txt" action="start" append="false" />

<!-- The bare minimum that ships with the app. -->
<path id="libPackage">
    <fileset dir="lib\">
        <include name="android-binding-0.45-update.jar" />
        <include name="guice-2.0-no_aop.jar" />
        <include name="roboguice-1.1.2.jar" />
    </fileset>
</path>
<!-- Everything needed to compile: libPackage + android.jar. -->
<path id="libApp">
    <pathelement location="${android.jar}" />
    <path refid="libPackage" />
</path>
<!-- Everything needed to test: libApp + the test-only jars. -->
<path id="libTest">
    <fileset dir="lib\">
        <include name="hamcrest-core-1.3.jar" />
        <include name="junit-4.11.jar" />
    </fileset>
    <path refid="libApp" />
</path>

<pathconvert property="info.libPackage" refid="libPackage" pathsep="; " />
<pathconvert property="info.libApp" refid="libApp" pathsep="; " />
<pathconvert property="info.libTest" refid="libTest" pathsep="; " />

<echo message="android.sdk: ${android.sdk}" />
<echo message="android.aapt: ${android.aapt}" />
<echo message="android.jar: ${android.jar}" />
<echo message="android.dex: ${android.dex}" />
<echo message="build.resource: ${build.resource}" />

<echo message="libPackage:" />
<echo message="${info.libPackage}" />
<echo message="libApp:" />
<echo message="${info.libApp}" />
<echo message="libTest:" />
<echo message="${info.libTest}" />
It might seem a bit verbose, but having everything defined here, with the actual values echoed in the output, will make debugging a lot simpler. libPackage is the bare minimum of libraries that we need to deploy with our app. libApp is the superset of these that we need to compile, basically libPackage + android.jar. libTest is a superset of that, adding libraries like junit.
Our first two real targets are the clean and init ones:
<target name="clean">
    <delete dir="build" />
</target>

<target name="init">
    <tstamp/>
    <mkdir dir="build"/>
    <mkdir dir="build\aapt\"/>
    <mkdir dir="build\javac\app"/>
    <mkdir dir="build\javac\test"/>
    <mkdir dir="build\dex"/>
</target>
Step 2: Compiling
Next in line is compiling our project:
<target name="build" depends="init" >
    <!-- Compile resources and generate R.java via aapt. -->
    <exec executable="${android.aapt}" failonerror="true">
        <arg value="package" />
        <arg value="-f" />
        <arg value="-v" />
        <arg value="-M" />
        <arg path="src\app\AndroidManifest.xml" />
        <arg value="-A" />
        <arg path="src\app\assets" />
        <arg value="-I" />
        <arg path="${android.jar}" />
        <arg value="-m" />
        <arg value="-J" />
        <arg path="build\aapt\" />
        <arg value="-F" />
        <arg path="${build.resource}" />
        <arg value="-S" />
        <arg path="src\app\res" />
        <arg value="--rename-manifest-package" />
        <arg value="my.new.package.name" />
    </exec>

    <!-- Compile the application code, including the generated R.java. -->
    <javac destdir="build\javac\app" includeantruntime="false" classpathref="libApp" >
        <src path="src\app\" />
        <src path="build\aapt\com" />
    </javac>

    <!-- Compile the unit tests against the app classes. -->
    <javac destdir="build\javac\test" includeantruntime="false" >
        <src path="src\test\" />
        <classpath>
            <pathelement location="build\javac\app"/>
            <path refid="libTest"/>
        </classpath>
    </javac>
</target>
The first and most unfamiliar part is where we invoke the aapt tool. I honestly don't know what half of these options do because it's so poorly documented, but the important ones are:
- -M The location of your android manifest file.
- -I The location of the android.jar that you're using.
- -J The location of the R.java file (needed to compile your real code).
- -F The location of the generated resource.jar.
- -S The location of your resource files.
The first javac compiles our actual application, referencing the libApp libraries. The source files used are src\app (Application code) and build\aapt\com (R.java). The second compiles our unit tests, making sure to add our application .class files to the classpath.
Step 3: Testing
This is a very generic task that runs our unit tests. For java developers it should look fairly standard: just run junit with our compiled .class files and the libTest libraries on the classpath:
<target name="test" depends="init, build" >
    <junit haltonerror="true" haltonfailure="false" enableTestListenerEvents="true">
        <classpath>
            <pathelement location="build\javac\app"/>
            <pathelement location="build\javac\test"/>
            <path refid="libTest"/>
        </classpath>
        <formatter type="plain" usefile="false"/>
        <batchtest>
            <fileset dir="build\javac\test" includes="**/*.class" />
        </batchtest>
    </junit>
</target>
At this point we've created our standard "build". It is completely independent of the IDE; with VI, for example, I simply type :mak<enter> and all my tests run in approximately 4 seconds, for fast feedback.
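For anyone wanting the same setup, that works by pointing vim's makeprg at ant; one line in .vimrc, assuming the test target shown above:

" Make :mak run the ant test target.
set makeprg=ant\ test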
Unfortunately unit tests aren't enough and we will want to actually run our app. In the next article I'll cover packaging, signing and deploying.
Saturday, December 28, 2013
Creating an MVC framework part 2 - First Decisions and Code
In writing the first bit of code I quickly came across a few key decisions. The very first one was what sort of routing I wanted to include. This led to a deeper question of what sort of applications I wanted the framework to be suitable for.
My career has largely been spent creating line of business applications, and that is the priority focus of the framework. Eventually I would like to expand on that, but with limited time you have to simplify or get nowhere. I also didn't intend for it to be particularly opinionated, but I'm naturally getting pushed that way. Again, I hope to make it more flexible in future but for now it's simply not a priority.
The first casualty of this is routing, otherwise known as pretty URLs. This feature is simply not that important in a line of business app, and it simplifies things not to have to worry about it at this point. For now, the only route is /{controller}/{action}/{querystring}.
The other big decision I made is project structure. I'm sick of having the various parts of an application strewn about the project, or several projects, based on type. The grouping in this framework is going to be done by function. A standard project layout will be along the lines of:
MvcProject
    _Common
        _Layout.cshtml
    Product
        _Controller.cs
        List.js
        List.cshtml
Let there be Light
So I created a new web application and in the global.asax.cs I put:
protected void Application_Start(object sender, EventArgs e) {
    RouteTable.Routes.Add(new Route("{*url}", new RequestHandler()));
}
This tells ASP that we want to handle all incoming URLs with our request handler, which is all I want for now. RequestHandler looks like this:
public class RequestHandler : IRouteHandler, IHttpHandler {
    IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext) { return this; }
    bool IHttpHandler.IsReusable { get { return true; } }
    void IHttpHandler.ProcessRequest(HttpContext context) {
        context.Response.Write("Hello World!");
    }
}
IRouteHandler and IHttpHandler are the interfaces we need to implement to receive requests from ASP. I'm not sure if it's a good idea to combine them or not, but it works for now. The real meat and potatoes starts with ProcessRequest though; it might not look like much, but this is the entry point to our framework. This is where the adventure begins.
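To make the /{controller}/{action}/{querystring} route concrete, the first job of ProcessRequest will be something like the following. This is my sketch of the next step, not code from the framework, and the defaults are assumptions:

void IHttpHandler.ProcessRequest(HttpContext context) {
    // Split "/product/view?id=42" into its route parts.
    string[] parts = context.Request.Path.Trim('/').Split('/');
    string controller = parts.Length > 0 && parts[0].Length > 0 ? parts[0] : "home"; // assumed default
    string action = parts.Length > 1 ? parts[1] : "index"; // assumed default
    // Next: locate the controller type, bind the query string to its
    // parameter object and invoke the action. That's for a later post.
    context.Response.Write(controller + "/" + action);
}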
Wednesday, December 18, 2013
Creating an MVC framework part 1 - Why???
So I was working on an application framework, something that would take some of the drudgery out of creating a new application. I like things to be nicely structured, but setting up this architecture takes time. Usually by the time I've set it up, the urge to solve the original problem has waned and my projects folder is filled with yet another orphan.
I just wanted to solve the most common things. Validation, logging and permissions are the obvious ones; I find the implementation of these in microsoft's MVC framework to be pretty terrible, and it's almost always one of the first things I replace. I like the command/query model, so I wanted that in place. I want to be able to offer desktop/mobile integration further down the line. I want easy integration with knockoutjs. Most importantly, I don't want to have to set all this up every single time.
So as I sporadically put various parts together I noticed one thing: MS MVC is getting in my way and stopping me from creating this framework. It must go! So what was once going to be an application framework is now going to be an MVC framework as well.
A touch of arrogance
I think if you're going to replace something then you need to replace it with something better, which requires at least a modest level of arrogance. You have to know what you want to do better and hopefully have a rough idea of how to get there.
As I got further through my exploratory and brainstorming phase I started to get more confident that yes, I can make a better MVC.
Controllers
The first thing I started looking at is what I wanted my controllers to look like. I came up with this:
public abstract class Controller {
    [Route("/product/view/*/{Id}")]
    public abstract object View(ProductDetailsQuery query);

    public abstract object List(ProductListQuery query);

    [Roles("ProductManager")]
    [HttpPost]
    public object Create(ProductCreateCommand command) {
        System.Execute(command);
        User.Message(MessageType.Success, "Product has been created");
        return User.Redirect("/Product/List");
    }
}
There are a number of interesting things here. First of all, the controller is responsible for defining a public API, so routes and permissions are defined by attributes. MS MVC has recently added attribute based routing as well.
A permissions system will be provided by the framework that will allow role based and finer grained permissions.
The controller is abstract. For many common actions, like view and list in the above sample, all the controller does is delegate responsibility. Abstract actions will have the ability to be routed by default. Saving those couple of lines of code was one of the main drivers for my framework; yes, I'm that anal.
The HttpPost may seem fairly standard, but by convention this will enlist a transaction.
All actions return objects. There are no view results, json results or anything like that. It's one object in, one object out. The format of the result is determined by the caller.
We don't have to check if ModelState.IsValid on *every single function*. That is done for us.
It's just called Controller. I will mention the project layout at a later date.
There are a couple of big interfaces in there, User and System. These are basically wrappers for real objects. I really like this abstraction because it makes it explicit who we're interacting with. We're not interacting with NotificationSubSystem, we're interacting with the user. We aren't interacting with the command executor, we are interacting with the system.
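Pulling those conventions together, the action invoker the framework will need looks roughly like this. None of it exists yet and every name in it is illustrative:

// Sketch of what the framework does around a POST action.
public object Invoke(Controller controller, object command) {
    var errors = Validator.Validate(command); // ModelState.IsValid, done once and centrally
    if (errors.Any()) {
        return User.ValidationFailed(errors); // plain object out; the caller picks the format
    }
    using (var transaction = BeginTransaction()) { // the [HttpPost] convention
        var result = Dispatch(controller, command); // find and call the matching action
        transaction.Commit();
        return result;
    }
}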
Fingers Crossed
This is the current plan, anyway. Some ideas may turn out to be bad ones, some might be brilliant, and there could be many more additions. Finding out which is which should be a fun journey.