Generic boolean value converter for WPF

When implementing user interfaces with WPF and using data binding, you often need to show something differently based on a boolean value in your view model. Here is a handy converter to help you.

Sample usage

<GenericBooleanConverter x:Key="booleanToVisibility" TrueValue="Visible" FalseValue="Collapsed" />
<GenericBooleanConverter x:Key="booleanToOrientation" TrueValue="Vertical" FalseValue="Horizontal" />

<StackPanel Orientation="{Binding IsMyViewModelVertical, Converter={StaticResource booleanToOrientation}}"
            Visibility="{Binding IsMyViewModelVisible, Converter={StaticResource booleanToVisibility}}">
  <Label>I am a label that is sometimes visible</Label>
  <Label>I am a label that is sometimes to the left and sometimes below the previous label</Label>
</StackPanel>
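The converter class itself is not shown above. A minimal sketch could look like this (the class and property names follow the usage example; the rest is my assumption):

```csharp
using System;
using System.Globalization;
using System.Windows.Data;

// Maps a bound boolean to one of two arbitrary values. The properties are
// object-typed, so the same class works for Visibility, Orientation, brushes
// and so on; in practice WPF can usually convert the string set in XAML
// (e.g. "Visible") to the target property type after the converter runs.
public class GenericBooleanConverter : IValueConverter
{
    public object TrueValue { get; set; }
    public object FalseValue { get; set; }

    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return (value is bool) && (bool)value ? TrueValue : FalseValue;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // Reverse lookup for two-way bindings.
        return Equals(value, TrueValue);
    }
}
```

Remember to map the converter’s CLR namespace to a XAML namespace prefix before declaring the resources.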


Differentiating price in hours-based software development contracts

There is a certain market price for a software developer with a certain amount of relevant work experience. My company uses these prices regardless of who the client is or what the actual task is. With this strategy, the only way to increase revenue is to hire more developers and win more deals. That is not easy for a relatively unknown company like ours.

Coffee shops take advantage of buyers’ price insensitivity by selling special coffees at much higher margins than regular coffees, even though the production costs are almost the same. Could we charge more from customers who are willing to pay more? Below is an example of a differentiated price list.

Product              Price    Salary  Admin. costs  Profit  Profit increase
Software developer   100€/h   50€/h   25€/h         25€/h   –
Certified developer  120€/h   50€/h   25€/h         45€/h   +80%
Senior developer     150€/h   50€/h   25€/h         75€/h   +200%

All three developer types cost the company the same. Developer certifications are relatively cheap. They usually prove nothing, but some customers might be willing to pay extra to make sure that the developer is familiar with the technology. A senior developer could be a “trusted” developer who has worked for the company for a couple of years and has already proven the ability to deliver.


Unit testing Castle Active Record using SQLite in-memory database

Apologies for the formatting, but here is how I enabled SQLite in-memory mode for database-hitting unit tests in Visual Studio. The approach is not ideal, but it works.

The software

Download the Finisar SQLite provider. Copy the files SQLite.NET.dll and SQLite3.dll into your test project root. Add a reference to SQLite.NET.dll. Change the “Copy to Output Directory” property of SQLite3.dll from “Copy always” to “Copy if newer”.

Custom connection provider

An SQLite in-memory database destroys all its data when the connection is closed. By default, Castle Active Record closes the connection after every operation, such as CreateSchema(). There may be a property setting for sharing the same connection between a test setup and teardown, but it was quite easy to write a custom connection provider that does the same thing. Here is the code.

public class SQLiteInMemoryTestingConnectionProvider : NHibernate.Connection.DriverConnectionProvider
{
    // One shared connection for the whole test keeps the in-memory database alive.
    public static System.Data.IDbConnection Connection = null;

    public override System.Data.IDbConnection GetConnection()
    {
        if (Connection == null)
            Connection = base.GetConnection();
        return Connection;
    }

    // Never close here; the test teardown closes the connection explicitly.
    public override void CloseConnection(System.Data.IDbConnection conn) { }
}

Configuring the Active Record

Set the following properties for Active Record initialization:

hibernate.connection.driver_class = NHibernate.Driver.SQLiteDriver
hibernate.dialect = NHibernate.Dialect.SQLiteDialect
hibernate.connection.provider = MyNamespace.SQLiteInMemoryTestingConnectionProvider, MyAssemblyNameContainingTheProvider
hibernate.query.substitutions = true=1;false=0
hibernate.connection.connection_string = Data Source=:memory:;Version=3;New=True;
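The same properties can also be supplied from code through an InPlaceConfigurationSource. A sketch (the exact Add overload varies between Active Record versions, and MyEntity stands for your own ActiveRecord class):

```csharp
using System.Collections.Generic;
using Castle.ActiveRecord;
using Castle.ActiveRecord.Framework.Config;

public static class TestDatabase
{
    public static void InitializeActiveRecord()
    {
        IDictionary<string, string> properties = new Dictionary<string, string>();
        properties["hibernate.connection.driver_class"] = "NHibernate.Driver.SQLiteDriver";
        properties["hibernate.dialect"] = "NHibernate.Dialect.SQLiteDialect";
        properties["hibernate.connection.provider"] =
            "MyNamespace.SQLiteInMemoryTestingConnectionProvider, MyAssemblyNameContainingTheProvider";
        properties["hibernate.query.substitutions"] = "true=1;false=0";
        properties["hibernate.connection.connection_string"] = "Data Source=:memory:;Version=3;New=True;";

        // Register the properties and start Active Record for the test assembly's entities.
        InPlaceConfigurationSource source = new InPlaceConfigurationSource();
        source.Add(typeof(ActiveRecordBase), properties);
        ActiveRecordStarter.Initialize(source, typeof(MyEntity));
    }
}
```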

Closing the connection after each test

In your test tear down method, close the connection manually:

if (SQLiteInMemoryTestingConnectionProvider.Connection != null)
    SQLiteInMemoryTestingConnectionProvider.Connection.Close();

SQLiteInMemoryTestingConnectionProvider.Connection = null;

This causes the data created by the test to be destroyed. Use test setup to recreate the database schema for the following test.
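Putting the pieces together, a fixture could look like this (an NUnit-style sketch; initialization is assumed to happen once elsewhere before the fixture runs):

```csharp
using Castle.ActiveRecord;
using NUnit.Framework;

[TestFixture]
public class InMemoryDatabaseTests
{
    [SetUp]
    public void SetUp()
    {
        // The provider opens a fresh connection lazily;
        // recreate the schema for this test.
        ActiveRecordStarter.CreateSchema();
    }

    [TearDown]
    public void TearDown()
    {
        // Closing the shared connection wipes the in-memory database.
        if (SQLiteInMemoryTestingConnectionProvider.Connection != null)
            SQLiteInMemoryTestingConnectionProvider.Connection.Close();
        SQLiteInMemoryTestingConnectionProvider.Connection = null;
    }
}
```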

Experiences with LINQ to SQL O/R mapper

I have been experimenting with LINQ to SQL. It seems like a nice tool for accessing existing databases and for database-first, or database-driven, development. For an object-driven approach, the tool has some rather unpleasant limitations. So far I have encountered the following “problems”:

  1. LINQ to SQL supports only the “one table per inheritance hierarchy” persistence strategy. Creating a base class with common attributes like Id, timestamps or owners for all persistent objects no longer sounds like a good idea.
  2. LINQ to SQL does not seem to support many-to-many relations transparently. If your model has many-to-many relations, you have to introduce an artificial relation class into your object model, with the primary keys of the related entities as properties.
  3. It seems that you cannot have two automatically updating DateTime properties in your classes (Type = DateTime, Time Stamp = true). I was trying to auto-create a table with the properties “CreatedAt” and “ModifiedAt” and make the first one synchronize on inserts and the other on updates. Instead of an auto-generated table I got the error “System.InvalidOperationException: Members ‘System.DateTime CreatedAt’ and ‘System.DateTime ModifiedAt’ both marked as row version.”.
  4. I don’t really know where I should define the default values for my persistent classes. Maybe by writing a partial method OnCreated()?
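Regarding point 4: the designer-generated entity classes declare a partial method OnCreated() and call it from the constructor, so defaults can probably go there. A sketch with a hypothetical Order entity (the class and property names are made up):

```csharp
using System;

// The other half of this partial class is generated by the LINQ to SQL
// designer; Order, CreatedAt and Status are hypothetical names.
public partial class Order
{
    partial void OnCreated()
    {
        // Defaults applied whenever the generated constructor runs.
        CreatedAt = DateTime.Now;
        Status = "New";
    }
}
```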

Luckily we still have tools like Castle Active Record (or NHibernate) for object-driven development. And you might be able to use LINQ with NHibernate.

Java and Microsoft C#.NET web service interoperability

It isn’t always just plug and play when trying to integrate two systems using web services, especially when the other end is using some ancient Java thing (e.g. a stone-age version of Axis) as its web service framework.

Q: When calling the service using a test tool the response seems to be valid. When calling a Java web service from C# the actual call succeeds but the response object is null. What is wrong?

A: Most likely the response XML is not valid. Pay special attention to the message’s root element namespace. Is it there? A quick-and-dirty fix can be done by subclassing the generated web service proxy class, overriding the GetReaderForMessage method and adding the missing namespace definition using string manipulation.
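A sketch of that fix (the proxy class name, element name and namespace URI below are placeholders for your own service):

```csharp
using System.IO;
using System.Web.Services.Protocols;
using System.Xml;

// Subclass of the generated SoapHttpClientProtocol proxy. Patches the
// missing namespace into the response XML before it is deserialized.
public class FixedServiceProxy : GeneratedServiceProxy
{
    protected override XmlReader GetReaderForMessage(SoapClientMessage message, int bufferSize)
    {
        string xml = new StreamReader(message.Stream).ReadToEnd();
        xml = xml.Replace(
            "<myResponse>",
            "<myResponse xmlns=\"http://example.com/expected-namespace\">");
        return XmlReader.Create(new StringReader(xml));
    }
}
```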

Q: The response message says it’s encoded as UTF-8, and this seems to be true (I inspected the binary message using a hex editor). Still, some non-ASCII characters like Ä or Ö appear as question marks when read from the deserialized response object. What is wrong?

A: It seems that the .NET Framework determines the encoding from the HTTP headers. Check the “Content-Type” response header. It should match the encoding used in the actual message (e.g. “text/xml; charset=utf-8”).

Test-driven development in a challenging environment

I started writing my master’s thesis today. It will be a case study on a project where I re-implement part of an existing system using test-driven development. What makes it interesting is that the component to be implemented consists of a SharePoint Web Part user interface and integrates with Microsoft Project Server. I haven’t done any .NET or SharePoint development in ages, and the guys at work said that it’s quite difficult to write unit tests for that kind of stuff. Of course I didn’t believe them!

I think I’ll be making notes on interesting findings here if there will be any.

Code coverage

Full test coverage is a good goal to reach for but it does not prove your program correct.

public int multiply(int a, int b) {
  return a + b;
}

public void testMultiply() {
  assertEquals( 4, multiply(2,2) );
}

The above multiply method has full test coverage (statement, path) and it passes the test(s) but it clearly doesn’t do what it is supposed to do.

Generating code documentation from unit tests

I have been practicing test-driven development for a year now. I like it. There is one problem, though, that I’d like to solve. It has been said that unit tests document what the code does and how to use it. In most projects that I have been involved in, this is not a very accurate description. The unit tests are normally not very well structured, and their names do not describe very well what the code should do. This is likely due to not refactoring the tests as often as the code.

Here is a fragment from a hypothetical unit test:

public class UnitTestingPracticeTest extends TestCase {

  public void testUnitTestDescribesWhatTheCodeDoes() {
    // ...
  }

  public void testUnitTestDescribesHowToUseTheCode() {
    UnitTest unitTest = new UnitTest(ClassUnderTest.class);
    // ...
  }
}

At least to me, this does not seem very readable. If I want to understand what the unit test is trying to tell me, I have to read it word by word. Compared to normal text, reading unit test code is a lot slower. If the tests were more readable, would that make people pay more attention to them and keep them as sharply refactored and intentional as the actual code? I have been thinking of a tool that would reformat the test code to be more readable. How about something along these lines:

Module documentation

This could be generated entirely from the unit test code. All it takes is that people make unit test names as descriptive as possible.
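The name-to-text part of such a tool is simple. A quick sketch of turning a JUnit-style test name into a sentence (the class name is made up):

```csharp
using System;
using System.Text.RegularExpressions;

static class TestNameFormatter
{
    // "testUnitTestDescribesWhatTheCodeDoes"
    //   -> "Unit test describes what the code does"
    public static string ToSentence(string testName)
    {
        // Drop the "test" prefix, then split the camel-case words.
        string name = Regex.Replace(testName, "^test", "");
        string spaced = Regex.Replace(name, "(?<=[a-z0-9])(?=[A-Z])", " ").ToLower();
        return char.ToUpper(spaced[0]) + spaced.Substring(1);
    }
}
```

Running every test method name through something like this, and indenting the test body under the resulting sentence, would already read much more like documentation.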

OpenOffice page styles and page numbering

I just learned how to use page styles AND page numbering in OpenOffice (actually in NeoOffice, but this should apply to OpenOffice as well). When I tried to use page styles for the first time, it changed the page style of the entire document, which was not what I wanted. Page numbering becomes problematic when you have a cover page or a table of contents and you do not want the numbering to start from the literal first page (e.g. the cover page).

The thing with page styles seems to be that you have to break your document into parts using a thing called a manual break. Here is how to use different page styles in one document:

  1. Select “File > New > Text document” to create a new document
  2. Open the “Styles and Formatting” panel:
    (screenshot: Styles and Formatting panel)
  3. Create a new page style called “Cover page”:
    (screenshot: creating a page style)
    Here you can define custom layout settings for your cover page.
  4. Double-click the created style to apply it to the current page
  5. Write something on the cover page (do not use the built-in heading styles)
  6. Select “Insert > Manual break…”, leave the “Type” radio button at “Page break”, but do not leave the “Style” selection at “[None]”. In this example I select the page style “Index”, because the next page will be the table of contents.
  7. Select “Insert > Indexes and tables > Indexes and tables…”, select the type “Table of contents” and click OK.

OK, now we have a two-page document with a cover page and a table of contents, each page using a different page style. You can modify your page styles using the “Styles and Formatting” panel.

Now, to make the actual document page numbering start from the next page, do the following:

  1. This is the most important step for making the page numbering work! Select “Insert > Manual break…” after the table of contents and use the following settings:
    (screenshot: manual page break for starting page numbering)
    It is very important that you select a page style for the next page and check the “Change page number” box. You can leave the number field at its default.
  2. Select “Insert > Header > Default” (or Footer) to add a header to every content page
  3. Activate the header and select “Insert > Fields > Page number”
  4. Now write a few pages of something structured (i.e. using the built-in heading styles)
  5. Update your table of contents and enjoy the correct page numbering!

I hope this helps someone :)