
Showing posts from 2011

Java and the Intrinsic Lock

I was working on converting a Java application to a multi-threaded application today and wanted to use the simplest means possible. This involves the following steps:

- Make the class implement Runnable or descend from the Thread class
- Put the previously sequential code into threads
- Use the "synchronized" keyword on methods that should be critical sections
- Join the threads before working on the results

Now, there are variations on this theme. Because each object in Java has an intrinsic lock, you can use any old Object as a mutex by wrapping synchronized(object){} around the critical section you want to protect. So this:

```java
private synchronized void foo() {
    // critical section stuff
}
```

is the same as this:

```java
private void foo() {
    synchronized (this) {
        // critical section stuff
    }
}
```

And when trying to synchronize multiple threads, both are wrong! The problem is that each object has an intrinsic lock, not each class. This is easiest to see in the second case. Because each t...
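To make the failure mode concrete, here is a minimal sketch (this is not the code from the post; the Worker class and the sharedCounter field are invented for illustration). Two threads increment a shared counter. Synchronizing on one shared lock object gives real mutual exclusion; synchronizing on "this" would not, because each Worker instance would be locking its own intrinsic lock.

```java
// Sketch: two Runnable instances updating shared state.
public class Worker implements Runnable {
    // One lock object shared by every Worker instance.
    private static final Object LOCK = new Object();
    private static int sharedCounter = 0;

    @Override
    public void run() {
        for (int i = 0; i < 100_000; i++) {
            synchronized (LOCK) {           // shared lock: real mutual exclusion
                sharedCounter++;
            }
            // synchronized (this) { ... }  // per-instance lock: no protection here
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(new Worker());
        Thread t2 = new Thread(new Worker());
        t1.start();
        t2.start();
        t1.join();                          // join before using the result
        t2.join();
        System.out.println(sharedCounter);  // 200000 with the shared lock
    }
}
```

If the shared lock is swapped for synchronized(this), each thread contends only with itself and the final count can come up short of 200000, which is exactly the problem described above.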

Threaded Search

I spent a little bit of time threadifying the classic mergesort. This sort works by recursively dividing the array to be sorted down to some depth, and then merging the sorted pieces of the array back together. This sort also has the desirable characteristic that it sorts in place: no additional memory allocation is required. This is a *good thing* when you have hundreds of thousands of elements in your array. In this particular implementation, an insertion sort is used when the recursive division reaches fewer than 100 elements. Multi-threading *could* occur at the first n levels of bifurcation, at the bottom level only, or at some intermediate number of levels. Limiting the number of threads, and making that limit a small multiple of the number of CPU cores, keeps the threading overhead and context switching limited as well. The optimal multiplier appears to be quite low, only about 2 threads per CPU core. For these reasons, I chose to thread at only one depth level. Let's lo...
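As a rough sketch of that idea (this is not the post's code, and unlike it, this version is not in place: it merges through a scratch buffer for brevity; the class and method names are invented), threading only at the first level of the recursion, which yields two worker threads, looks something like this:

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch: merge sort that spawns threads only at the top level of the
// recursion, then sorts each half sequentially on its own thread.
public class ThreadedMergeSort {
    private static final int INSERTION_CUTOFF = 100; // small runs: insertion sort

    public static void parallelSort(int[] a) throws InterruptedException {
        int mid = a.length / 2;
        int[] buf = new int[a.length];
        Thread left  = new Thread(() -> sort(a, buf, 0, mid));
        Thread right = new Thread(() -> sort(a, buf, mid, a.length));
        left.start();
        right.start();
        left.join();                    // wait for both halves before merging
        right.join();
        merge(a, buf, 0, mid, a.length);
    }

    private static void sort(int[] a, int[] buf, int lo, int hi) {
        if (hi - lo < INSERTION_CUTOFF) { insertionSort(a, lo, hi); return; }
        int mid = lo + (hi - lo) / 2;
        sort(a, buf, lo, mid);
        sort(a, buf, mid, hi);
        merge(a, buf, lo, mid, hi);
    }

    private static void merge(int[] a, int[] buf, int lo, int mid, int hi) {
        int i = lo, j = mid, k = lo;
        while (i < mid && j < hi) buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) buf[k++] = a[i++];
        while (j < hi)  buf[k++] = a[j++];
        System.arraycopy(buf, lo, a, lo, hi - lo);
    }

    private static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i < hi; i++) {
            int v = a[i], j = i - 1;
            while (j >= lo && a[j] > v) a[j + 1] = a[j--];
            a[j + 1] = v;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int[] data = ThreadLocalRandom.current().ints(200_000).toArray();
        parallelSort(data);
        System.out.println("sorted: " + isSorted(data));
    }

    private static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++) if (a[i - 1] > a[i]) return false;
        return true;
    }
}
```

Each half is sorted sequentially on its own thread, the main thread joins both halves before the final merge, and small runs fall back to insertion sort, as described above.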

The most efficient algorithm to scan a bitmap

Yesterday in class, we were learning about the Linux scheduler: how you can have several different algorithms for scheduling, and so on. All very nice. The actual scheduler used in Linux 2.6 has 140 FIFO queues, one for each possible priority. The scheduler takes the first job from the highest-priority queue. The queues have a 140-bit bitmap. Because the bitmap gets scanned every single time there is a new quantum, the algorithm for scanning the bitmap needs to be very fast. Our professor asked how we would go about scanning the bitmap. As an example, he put up an algorithm that would compare a single bit, then shift the bits left and keep track of the number of shifts. Obviously there is room for improvement there. I suggested that the algorithm should be just an integer base-2 log function. To which the professor asked me how I would implement it. I wanted to say: "in hardware", but he was already moving forward. But really, the question of "what is the most significant bit?" ...
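The kernel's bitmap scan is done with C and assembly helpers, but the "integer base-2 log of a word" idea is easy to sketch in Java (this sketch is mine, not kernel code, and the class and method names are invented). Integer.numberOfLeadingZeros is a JIT intrinsic on mainstream JVMs and typically compiles down to a single count-leading-zeros or bit-scan instruction, which is about as close to "in hardware" as Java gets.

```java
// Sketch only: the "integer base-2 log" view of the question,
// i.e. the index of the most significant set bit of a word.
public class HighestBit {
    static int log2(int word) {
        if (word == 0) {
            throw new IllegalArgumentException("no bits set");
        }
        // numberOfLeadingZeros is a JIT intrinsic on mainstream JVMs and
        // usually becomes a single count-leading-zeros/bit-scan instruction.
        return 31 - Integer.numberOfLeadingZeros(word);
    }

    public static void main(String[] args) {
        System.out.println(log2(1));        // 0
        System.out.println(log2(0b10110));  // 4
        System.out.println(log2(1 << 20));  // 20
    }
}
```

In practice a 140-bit bitmap would be checked a machine word at a time, so the bit-scan only has to run on the first non-zero word.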

file_operations structure changes

Working on a device driver for Linux 3.0.4. In class we are modifying some existing code:

```c
static struct file_operations simple_fops = {
    .read    = simple_read,
    .write   = simple_write,
    .ioctl   = NULL,
    .open    = simple_open,
    .release = simple_release,
};
```

However, this doesn't compile with the latest kernel source. The error is:

```
error: unknown field ‘ioctl’ specified in initializer
```

The new structure, as defined in <source>/include/linux/fs.h, is:

```c
struct file_operations {
    struct module *owner;
    loff_t (*llseek) (struct file *, loff_t, int);
    ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
    ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
    ssize_t (*aio_read) (struct kiocb *, const struct iovec *, unsigned long, loff_t);
    ssize_t (*aio_write) (struct kiocb *, const struct iovec *, unsigned long, loff_t);
    int (*readdir) (struct file *, void *, fil...
```

WCF Ria Services result set size limitation

While creating what should be a simple Silverlight WCF RIA Services application, the client returned the following error for some datasets:

```
System.ServiceModel.DomainServices.Client.DomainOperationException: Load operation failed for query 'GetBins'.
Value cannot be null.
Parameter name: validationErrors
 ---> System.ArgumentNullException: Value cannot be null.
```

By using Firebug and limiting the size of the returned dataset, it became evident that past a certain size the dataset wasn't returned with the HTTP response at all. It turns out that this is a WCF RIA Services configuration issue. The default value for MaxItemsInObjectGraph was something way below the total number of records returned. The solution was to set MaxItemsInObjectGraph to some larger number, either in the web.config or in the domain service class definition itself, as follows:

```csharp
[ServiceBehavior(MaxItemsInObjectGraph = 2147483647)]
public class myDomainService : DomainSer...
```

Silverlight Context Menu

Silverlight 4. I can't believe how many times I've had to search for this one because I couldn't remember what it was. So, it gets added to my permanent list of memories. The only real trick, to get the right namespace, is to drag a MenuItem onto the page, then delete it and add this code inside whatever you want to have a context menu:

```xml
<toolkit:ContextMenuService.ContextMenu>
  <toolkit:ContextMenu>
    <toolkit:MenuItem Header="Add child"/>
    <toolkit:MenuItem Header="Remove"/>
  </toolkit:ContextMenu>
</toolkit:ContextMenuService.ContextMenu>
```

Sorting the results of a MS SQL table-valued function

For whatever reason, MS SQL table-valued functions don't get the respect they deserve. LINQ to Entities pretends that they don't exist at all. I mean, seriously, how hard would it be to allow you to import them in the same way that you can import stored procedures? Anyway, as a mechanism for abstracting your database schema, I like them. They are pre-compiled and strongly typed, and their results are cached, so they can get very fast. One thing they don't do well is return sorted values. The following code won't work well when contained in a table-valued function:

```sql
SELECT * FROM Table1 ORDER BY Column1
```

This requires us to resort to doing something like this:

```sql
SELECT TOP 100 PERCENT * FROM Table1 ORDER BY Column1
```

And even then the ORDER BY can be ignored sometimes. Consider a table that consists of a key, a HierarchyId, and a description. We want to unwind the HierarchyId so that we have key, parent key, and description. This is necessary if we are working with the hierarchy outside of...