Wednesday, 27 February 2013

Lex.Db for Windows 8 Store Apps (Part 2) - Deleting and Indexing


In the last post we looked at the basic features of Lex.DB and where it may be useful for Windows 8 Store application development. This time I will dig a little deeper into the API and look at how we can use the indexing features, which allow us to query the database in a more useful manner, as well as how we delete records from the database.

Indexes are fields which we nominate for searching data. Within a table we are rarely likely to search on all of the fields when looking for data, but it's just as likely that we won't want to search only by the primary key. Lex.DB is lightweight and fast as it holds only its indexes in memory when running. This allows us to request specific records based on these indexes, without the overhead of all of the other fields being held in memory for no reason. This does require a small amount of configuration to tell Lex.DB which fields should be indexed, but after that the usage is pretty simple.

NOTE: Lex.DB is being actively developed at the time of writing and features are changing often. Newer versions may implement features beyond the scope of this post, which I may cover in future blog posts.


We're going to extend our earlier simple demo to demonstrate how we can create indexes and then query against them. The first task is to modify our Initialisation method to declare which properties we wish to use as indexes. For our Person sample we're going to index the Surname field to allow us to query records for those matching specific surnames.
private async Task InitLexDB()
{
    _db.Map<Person>().Automap(i => i.PersonID, true)
        .WithIndex("Surname", i => i.Surname);

    await _db.InitializeAsync();
}
Here we use the WithIndex method, which takes two arguments. The first is the name for the index as a string. The second is an expression selecting which property is added to the index.
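The call can be chained to declare further indexes in the same way. As a hedged sketch (the Forename index here is my own assumption for illustration; only the Surname index is used in this post), declaring two indexes might look like this:

```csharp
// Sketch: chaining a second index declaration onto the same mapping.
// The Forename index is an assumption for illustration only.
_db.Map<Person>().Automap(i => i.PersonID, true)
    .WithIndex("Surname", i => i.Surname)
    .WithIndex("Forename", i => i.Forename);

await _db.InitializeAsync();
```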

Our next task will be to update the AddPeople method we previously created. We'll first purge the database and then populate it with a set of records we can work against.
private async Task AddPeople()
{
    await _db.PurgeAsync();

    Person newPerson1 = new Person { Forename = "Joe", Surname = "Bloggs" };
    Person newPerson2 = new Person { Forename = "James", Surname = "Smith" };
    Person newPerson3 = new Person { Forename = "David", Surname = "Peterson" };
    Person newPerson4 = new Person { Forename = "Steve", Surname = "Gordon" };
    Person newPerson5 = new Person { Forename = "David", Surname = "Gordon" };
    Person newPerson6 = new Person { Forename = "Colin", Surname = "Gordon" };
    Person newPerson7 = new Person { Forename = "Michael", Surname = "Gordon" };

    List<Person> newPeople = new List<Person> {
        newPerson1, newPerson2, newPerson3, newPerson4,
        newPerson5, newPerson6, newPerson7 };

    await _db.Table<Person>().SaveAsync(newPeople);
}
The PurgeAsync method on the database will delete all records in all tables so we can start with a clean slate. We'll look at that again later in this blog entry. With these changes we now have an index as well as some data we can query against.

Querying By An Index

First we'll create a simple method which covers a likely real-world requirement. It will allow us to supply a surname as an argument and will return a list of the people who have that surname.
private async Task<List<Person>> LoadBySurname(string surname)
{
    List<Person> people = await _db.Table<Person>()
        .IndexQueryByKey("Surname", surname)
        .ToListAsync();

    return people;
}
The work is done using the IndexQueryByKey method which takes in two arguments. The first is the name of the index we want to use, in this case "Surname". The second is the value you wish to search for. Here we pass along the surname string which has been passed into the LoadBySurname method. We use ToListAsync to return a task of type List<Person> asynchronously.
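Called against the test data we populated in AddPeople, a quick sketch of using this method might look like the following (the Debug output is just for illustration):

```csharp
// Query the Surname index declared during initialisation.
List<Person> gordons = await LoadBySurname("Gordon");

// Our AddPeople test data contains four people with the surname Gordon.
foreach (Person p in gordons)
{
    Debug.WriteLine(p.Forename + " " + p.Surname);
}
```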

Using Take and Skip

Lex.DB has a few other extension methods we can take advantage of to limit the results being returned. A common requirement is to return a limited number of records. Let's pretend our table holds 1,000 people. It's unlikely we want to deal with all of those together; instead we may want to look at just the first 20. We can do that using the Take(x) method.
await _db.Table<Person>().Query<int>().Take(20).ToListAsync();
The Take method expects an integer to instruct it how many records to return. This example would return the first 20 records from the table.

This alone may have its uses, but a more realistic requirement will be to gracefully manage paging data. The scenario here is to allow the user to step through data in pages. We could return the whole list of people and then juggle it in memory to manage the paging, but that's pretty inefficient on large tables. It's better to ask the database for just the limited number of records required to show the current page of data. This will mean more database calls, which have their own overheads, but done correctly this is likely to suit quite a few paging scenarios.

This is pretty easy to achieve by using the Skip(x) extension.
await _db.Table<Person>().Query<int>().Skip(10).Take(10).ToListAsync();
This will interrogate the table and return a list of 10 records, skipping the first 10 (i.e. records 11 to 20). We could quite easily build a simple paging system around these two commands.
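Putting the two together, a page request can be sketched as a small helper (GetPage, pageNumber and pageSize are my own names, not part of Lex.DB; the query calls are those shown above):

```csharp
// Hypothetical paging helper built on the Query, Skip and Take methods
// shown above. pageNumber is 1-based, so page 1 skips 0 records,
// page 2 skips pageSize records, and so on.
private async Task<List<Person>> GetPage(int pageNumber, int pageSize)
{
    return await _db.Table<Person>()
        .Query<int>()
        .Skip((pageNumber - 1) * pageSize)
        .Take(pageSize)
        .ToListAsync();
}
```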

Querying Between Ranges

One final index querying feature I want to cover in this post is how to supply a greater than / less than range for the returned records. There are a couple of methods on the IndexQuery that we can use to limit the range of the returned records. Let's say that we want to return only the people whose Surname begins with the letter G. We can write the following query.
private async Task<List<Person>> LoadBySurnameRange()
{
    var table = _db.Table<Person>();
    var index = table.IndexQuery<string>("Surname");
    var query = await index.GreaterThan("G", true).LessThan("H").ToListAsync();

    return query;
}
First we create a variable to hold the table object and then an index variable to hold the base IndexQuery. The important line is the next one, where we chain the GreaterThan and LessThan methods. These take two arguments. The first is the value we wish to compare against; the second is an optional boolean indicating whether records equal to that value should be included.

In this case we want all surnames of "G" or higher, so we use "G" for the first argument and specify true for the equal-to argument. We then use the LessThan method to state that we want to restrict the results to any Surname which is less than "H". We don't use the equal-to argument here as we wouldn't want to capture anyone with a Surname of exactly "H". Finally we use the ToListAsync method to return a list of the results.

The methods can be used together as we have here, or independently.
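Because the two bounds combine, a generic "surnames beginning with…" helper falls out naturally. A sketch (LoadBySurnameInitial is my own name; the range logic simply generalises the G-to-H example above):

```csharp
// Sketch: load everyone whose surname starts with a given initial letter,
// e.g. 'G' becomes the range ["G", "H"): "G" inclusive up to "H" exclusive.
private async Task<List<Person>> LoadBySurnameInitial(char initial)
{
    string lower = initial.ToString();               // e.g. "G"
    string upper = ((char)(initial + 1)).ToString(); // e.g. "H"

    return await _db.Table<Person>()
        .IndexQuery<string>("Surname")
        .GreaterThan(lower, true)  // include surnames equal to the lower bound
        .LessThan(upper)           // exclude the next letter onwards
        .ToListAsync();
}
```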

NOTE: In earlier versions of Lex.DB there were bugs with these methods and the results were often not as expected. Thanks to the project owner, Lex Lavnikov, for taking the feedback on board and fixing this very quickly. Please ensure you update to the latest version using NuGet.

Deleting Records

Deleting records in Lex.DB is pretty straightforward, and there are a few methods we can use to do this. If we know the primary key for the record, we can use the DeleteByKeyAsync method on the DbTable class, which requires just the primary key to be passed in.
await _db.Table<Person>().DeleteByKeyAsync(2);
We can also use the DeleteAsync method which takes in the object (or a list of objects) to be deleted. In our test application we can add a simple method to demonstrate this.
private async Task DeletePerson(List<Person> people)
{
    await _db.Table<Person>().DeleteAsync<Person>(people);
}
To use this method we will add the following code to the page's OnNavigatedTo method.
int recordCountAtStart = await _db.Table<Person>().CountAsync();

List<Person> peopleToDelete = await LoadBySurname("Bloggs");
await DeletePerson(peopleToDelete);

int recordCountAtEnd = await _db.Table<Person>().CountAsync();
The first line uses a method called CountAsync to count the records on the Person table. This is so that we can validate the number of records which have been deleted is correct. The second line uses a LoadBySurname method we added earlier which allows us to query our Surname index for a particular surname. In this case, because I know the test data we have populated into the database I know we should have one matching record.

The next line uses our DeletePerson method, which we know takes a list of people as its argument. Here we pass in the result of the preceding query, which is our person with a surname of Bloggs. Finally I run the count again and can see that my table now has only 6 records, confirming the expected single record has been deleted.

Earlier we looked at a command used in the AddPeople method which purged the whole database. To end this section I'll show a corresponding command on the DbTable which we can use if we need to purge only a single table in our database.
await _db.Table<Person>().PurgeAsync();


In this post we have reviewed two more important concepts for working with Lex.DB: indexing and deletion. Both are fairly simple using the available methods. My early impressions so far are that Lex.DB might offer a good solution for working with data in my Windows 8 applications. The API is still new and a little rough in places.

My next step is to pull down the source and dig into it a little further, as it looks like a fun project to try to contribute to. It takes me out of my comfort zone and will hopefully expand my C# knowledge. I'll continue this series with some of my findings. I also plan to start a new project to build a more real-world example application for Lex.DB which will use better design practices such as the repository pattern.

I'm curious to see how Lex.DB works in a system where I need to store related objects in one-to-many and many-to-many situations. Currently Lex.DB does not support this, but I hope to find a viable pattern which I can wrap around it to achieve the end goal.

Monday, 25 February 2013

Model Inheritance and Relationships in Entity Framework Code First


I was recently assisting with an Entity Framework related query on the ASP.NET forum. The question seemed pretty straightforward and I assumed that an example would already exist for the scenario. However, after searching I didn't find anything that directly addressed the specific query. Rather than give up, I decided to make a console application to try out the scenario for myself. It was good timing, as I expect to have a very similar requirement in one of my upcoming web applications, so this was an opportunity to play with the concept. I've decided to share the resulting code here as I haven't found many blog posts covering this specific use case.

The Scenario

The person posting the question was building a basic appointment system for a website. They needed an appointments entity, a clients entity and a staff entity. Each appointment would have a related staff member and client. No problems there; however, they wanted to utilise inheritance on their clients and staff objects whilst having both map to a table named people in the database. The area they were stuck on seemed to be defining the relationships on the appointments table to manage having a relationship for a staff member and a client both linking back to the single people table. This all needed to be defined with Entity Framework Code First, which would then generate the database tables.

The Solution

So, nothing too complex there, and my first thought was that I should be able to find an existing blog post to link to. As I've mentioned, after a bit of Googling nothing seemed to cover this particular requirement directly. There are many posts on using inheritance with Entity Framework, but those didn't seem to show a scenario where the resulting table would need to have two relationships with another table. So I opened up Visual Studio and threw together a small console application to build out the example.

I started with the models which were pretty simple for the purposes of testing.
public abstract class Person
{
    public int PersonID { get; set; }
    public string Name { get; set; }
}

public class StaffMember : Person
{
    public string BadgeNumber { get; set; }
    public virtual ICollection<Appointment> Appointments { get; set; }
}

public class Client : Person
{
    public virtual ICollection<Appointment> Appointments { get; set; }
}

public class Appointment
{
    public int AppointmentID { get; set; }
    public DateTime Date { get; set; }

    public int? StaffMemberID { get; set; }
    [ForeignKey("StaffMemberID")]
    public virtual StaffMember StaffMember { get; set; }

    public int? ClientID { get; set; }
    [ForeignKey("ClientID")]
    public virtual Client Client { get; set; }
}
The things to take note of here are that Person is an abstract class, allowing me to use it as a base for the StaffMember and Client classes. I could just as easily have included a Name property on the Client and StaffMember classes and allowed EF to generate a table for each class. However, in a real-world situation there are likely to be many more shared properties (forename, surname, DOB, title, telephone number etc.) and it doesn't make a lot of sense to define these multiple times. Clients and StaffMembers are both types of Person, and using this base class makes the code a little cleaner to work with as they only have a few distinct properties each.

The other thing I want to draw attention to is the ForeignKey data annotations. These are used to create friendlier names for the generated Foreign Keys on the appointment table. As you can see, this table has two foreign keys, one for the StaffMember who will be leading the appointment and one for the Client who is attending. The actual relationships will be with the people table as although we have separate classes to represent these two variants of Person, only a single table is created.

This is the default inheritance behaviour for Code First which is known as Table Per Hierarchy. Entity Framework is clever enough to know that these classes derive from Person and will only produce a single table which is capable of representing both derived objects. Note that this is configurable and you could choose to represent these objects in separate tables just as easily.

The resulting table has the following fields
  • PersonID
  • Name
  • BadgeNumber
  • Discriminator
The BadgeNumber will only have data when the object that is saved is of type StaffMember. If Clients had specific fields these would also be included and used only for Client objects. These fields are therefore nullable since they will not always be used (a Client will never have an employee badge number). This won't always be acceptable for the design of the database. Take for example the BadgeNumber in this example: if I wanted to enforce within the database that all StaffMembers must have a badge number, we couldn't do that in this design. In that case another of the inheritance settings would need to be used to create a Table Per Type structure. This still allows us to have a base object, but any objects derived from that base class will have their own table, linked back by primary key to the base people table. This allows stronger rules regarding nullability to be configured. It's up to you and your database architect to decide what is most appropriate for your application.
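For reference, switching from the default to Table Per Type is a small change in OnModelCreating. A sketch using the standard Code First ToTable mapping (the table names here are my own choice):

```csharp
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Map each type to its own table; the derived tables link back to the
    // shared People table by primary key (Table Per Type inheritance).
    modelBuilder.Entity<Person>().ToTable("People");
    modelBuilder.Entity<StaffMember>().ToTable("StaffMembers");
    modelBuilder.Entity<Client>().ToTable("Clients");
}
```

With this in place there is no Discriminator column; the type of each record is instead implied by which derived table holds its row.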

The last field is Discriminator which is an Entity Framework generated column to allow it to record which actual type of object is being stored in the table. In this case some records will have a discriminator of "StaffMember", others will have "Client". We can use this when querying the context to specify which type of objects we want back which I'll show a little further on.

Next up I created the context to tell Entity Framework about the classes I wanted to include in my database.
public class MyContext : DbContext
{
    public DbSet<Person> People { get; set; }
    public DbSet<Appointment> Appointments { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // No custom mapping is required for this example.
    }

    public void Seed(MyContext context)
    {
        Client c1 = new Client { Name = "Steve Client" };
        StaffMember sm1 = new StaffMember { Name = "Staff 1", BadgeNumber = "1234" };
        StaffMember sm2 = new StaffMember { Name = "Staff 2", BadgeNumber = "5678" };

        context.Appointments.Add(new Appointment { Client = c1, StaffMember = sm1, Date = DateTime.Now.AddDays(1) });
        context.Appointments.Add(new Appointment { Client = c1, StaffMember = sm2, Date = DateTime.Now.AddDays(2) });
        context.Appointments.Add(new Appointment { Client = c1, StaffMember = sm2, Date = DateTime.Now.AddDays(3) });

        context.SaveChanges();
    }

    public MyContext()
    {
        Database.SetInitializer<MyContext>(new DropCreateDatabaseAlways<MyContext>());
    }
}
This may look a little complex, but the main parts to focus on are the two DbSets I have created to instruct Entity Framework to model the Appointments and People tables. I don't need to specify our derived classes here as they will be mapped to the People table whenever they are saved by Entity Framework.

I use a constructor to tell the initializer to always drop and recreate my database. This is so I can always start with a clean database for the purposes of this test app; it would certainly not be set in production! Finally I use a Seed method to populate some Staff, Client and Appointment records in the database so I can do some test queries.

Here you can see that I can new up StaffMembers or Clients as I need them and then add them to the People entity in the context. Entity Framework takes responsibility for mapping these correctly and including the discriminator value.

At this point I'm nearly there. I have a domain model defined which I can code against and I have told Entity Framework how to map that to a database for us. I have populated the database with some test data and the final part of the application will show how we can query against it.
static void Main(string[] args)
{
    using (var db = new MyContext())
    {
        var client = db.People.OfType<Client>().FirstOrDefault();
        Console.WriteLine(client.Name);

        foreach (var a in client.Appointments)
        {
            Console.WriteLine(string.Format("{0} - {1}", a.Date, a.StaffMember.Name));
        }
    }
}

I'm using a console application to keep things simple, so all the logic is entered directly in the Main method. I use a simple command to access the People entity and request the first matching record. The key part of this is the OfType extension method, which allows me to specify that I want a Client object. This is where Entity Framework uses the Discriminator field in the database to know which type of object each record represents.

With the client in hand I can write out their name and then loop through to display a list of their appointments.
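Since the navigation properties are virtual, the appointments in the loop above are lazy loaded on first access. If the extra round trips were a concern, Entity Framework's Include method could eagerly load the related data in a single query. A sketch (this assumes a using directive for System.Data.Entity, which provides the lambda Include overload):

```csharp
// Eagerly load each client's appointments, and each appointment's
// staff member, as part of the initial query instead of relying on
// one lazy-load query per navigation property.
var client = db.People.OfType<Client>()
    .Include(c => c.Appointments.Select(a => a.StaffMember))
    .FirstOrDefault();
```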


And there we have it. This has been a simple example of how inheritance can be used to share base classes between our domain objects while mapping to a suitable database structure using Entity Framework. We also covered how to set up our relationships to handle the case where a table needs to relate to the base table more than once, as the Appointments did in this example.

Friday, 22 February 2013

Lex.Db for Windows 8 Store Apps (Part 1) - The Basics


For my first technical post on this new blog I wanted to cover a new project called Lex.DB which has caught my interest today. For a brief summary of this project I can do no better than the description on the GitHub page:
"Lex.DB is a lightweight, superfast, in-process database engine, completely written in C#".
Great, so where would we use this? Well, in my case it looks like an interesting option for local data storage in Windows 8 Store applications. I've recently started developing Windows 8 Store apps and one problem area I hit early on was the lack of a built-in SQL-like storage mechanism for local data. Microsoft's approach and demos really seemed to push connected storage to cloud-based solutions and didn't cover many approaches for disconnected local storage. My research led to two main options, both of which I have been experimenting with.

Option 1 was to manage my own serialisation of data to and from XML files using the DataContractSerializer. I have an application in development at the moment using this, which stores persistent application data in XML files that are then loaded into in-memory objects while the app is running. This approach works but has some limitations, in that most of the time I have a large amount of the data loaded into memory. This is perhaps fine for the small application I'm working on, but not scalable for larger line-of-business-style apps.

Option 2 was SQLite, a library providing an in-process transactional SQL database engine that can be used with WinRT with the help of a NuGet package called SQLite-Net. I won't go into the details of how this works in this post, but will say that it's a good option for Windows 8 Store apps from what I've seen so far. It does have some downsides due to its platform-dependent nature, the biggest of which is the need to compile your application three times whenever you wish to deploy a new version.

So that brings me to Lex.DB, which I have only briefly worked with so far. This project doesn't have the same platform issues as SQLite, and it also avoids the need for all data to be held in memory at runtime. Instead, it seems that only the indexes are held in memory so that data can be queried and the full objects retrieved as needed. My initial impressions are quite good, although some features such as relationships are not currently supported. It may therefore require some work to build a workable solution for all but the simplest of data models. A big advantage is that it supports both Windows 8 Store apps and Windows Phone 8 apps.

This post will aim to cover the basics of using Lex.DB in a Windows 8 Store application. Later posts will dive in a bit deeper with some sample code I've started putting together to use Lex.DB in a more real world sample application. At this time I don't know how scalable this can be for a larger application where we need to manage relationships, but we can take the journey together and see where it leads us.

NOTE: Lex.DB is being actively developed at the time of writing and features are changing often. Newer versions may implement features beyond the scope of this post, which I may cover in future blog posts.

A small caveat before we start: I'm self-taught in C# and certainly lack knowledge in lots of areas, so my code may be suboptimal or even downright wrong! I welcome any comments or suggestions to improve it as I go.

Let's Get Started

In this post I'm going to show a very simple app to demonstrate saving and loading data. I won't concern myself with splitting out code in the MVVM pattern and will simply use the code behind of the MainPage to show the core usage of Lex.DB.

We start with a new Blank C# Windows Store application in Visual Studio. The first thing we need to do is to bring in Lex.DB using NuGet.

To start with we need a model to represent the objects we will be storing using Lex.DB.

Let's create a folder called Models in the project and create a new class to represent the data model, which for this sample we'll call Person. This class simply has some basic properties to represent a person object.
public class Person
{
    public int PersonID { get; set; }
    public string Forename { get; set; }
    public string Surname { get; set; }
}
On the MainPage.xaml.cs we first need to add a using statement to access Lex.DB.
using Lex.Db;
We need an instance of Lex.DB's DbInstance class, which we'll include as a private variable to be used by the page's methods as required.
DbInstance _db = new DbInstance("SampleDB");
The argument we pass here is the name we want for our database. Our next step is to tell Lex.DB which models we want to store. This initialisation code needs to run only once, when the page is loaded, so we'll use the OnNavigatedTo method for this demo. To separate things a little I've created an async method to handle the actual initialisation asynchronously (any such data access should be async to avoid blocking the UI), which is then called from the page's OnNavigatedTo method.
protected async override void OnNavigatedTo(NavigationEventArgs e)
{
    await InitLexDB();
}

private async Task InitLexDB()
{
    _db.Map<Person>().Automap(i => i.PersonID, true);
    await _db.InitializeAsync();
}
The Map method is where we tell Lex.DB which model classes we wish to store. Lex.DB includes an Automap option which maps all of the public properties using reflection. For this demo we won't go any further than this, but there are ways to be more specific regarding the mapping of properties. Automap takes two arguments (the second of which is optional). The first is an expression identifying which field is going to be our primary key. The second is a boolean to determine whether Lex.DB should manage the key for us (like an auto-incrementing ID in SQL). I've chosen to utilise this, but we could set it to false if we want to control the key ourselves. The next line initialises everything using the asynchronous method.

At this point we have everything set up and can begin reading and writing to the database using its asynchronous methods. Normally these would be triggered by UI events and the requirements of the different views, but to keep this post focused I want to simply add a new Person object each time the MainPage is loaded. After this the full list of objects is retrieved and displayed using a TextBlock element. This bears little resemblance to real-world usage, but it shows how simple Lex.DB is to use.

To do this we create two new async methods: one to add a person and another to return a list of all Person records from the database.
private async Task AddPerson()
{
    Person newPerson = new Person { Forename = "Joe", Surname = "Bloggs" };
    await _db.Table<Person>().SaveAsync(newPerson);
}

private async Task<List<Person>> LoadAllPeople()
{
    Person[] people = await _db.Table<Person>().LoadAllAsync();
    return people.ToList();
}
The AddPerson method first creates a new Person object. We then use the SaveAsync method on the generic Table method to store this new Person in the database. If we weren't concerned about this being async we could use the simpler Save syntax:
_db.Save(newPerson);
In this case Lex.DB works out where to store the object based on its type. But this is Windows 8, so async is the way to go.

The LoadAllPeople method returns a Task of type List<Person>. The LoadAllAsync method does the work, returning all of the records in the Person table. Since LoadAllAsync doesn't expose a ToList method, we store the returned data in an array and then return it on the next line using the array's own ToList extension method.

Finally we can update OnNavigatedTo to use the new methods we have created. After we have loaded the data we loop through the list of people and, for simplicity, just add the names to a TextBlock.
protected async override void OnNavigatedTo(NavigationEventArgs e)
{
    await InitLexDB();
    await AddPerson();

    List<Person> people = await LoadAllPeople();

    foreach (Person p in people)
    {
        DataOutput.Text += p.Forename + " " + p.Surname + " - ";
    }
}
And that's our completed demo for this post. One thing I will quickly show is that, just as easily as pulling all of the records from the database, we can pull a single person using the ID value if we know it. To do that we use the following command:
Person person = await _db.Table<Person>().LoadByKeyAsync(1);
The LoadByKeyAsync method takes in the value of the primary key we are searching for. This is similar to Entity Framework's DbSet Find() method.


In this post I've hopefully shown the basic usage of Lex.DB which will be the ground work for further exploration in my next posts. We'll look at the Indexing features which can be utilised and then start thinking about how we would need to structure a Windows 8 Store application to begin using Lex.DB in a more real world manner.

View Part 2