Responsive WPF User Interfaces Part 2

January 20th, 2009 / LeeCampbell

WPF threading model

While WPF is often referred to as having a single-threaded model, by default it actually uses two threads. One thread renders the display (the render thread); the other manages the user interface (the UI thread). The render thread effectively runs in the background and you don’t have to worry about it, while the UI thread receives input, handles events, paints the screen, and runs application code.
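To make the division of labour concrete, here is a minimal sketch (my own, not code from the original post; the names ReportWindow, FetchReport and statusText are illustrative). A background thread does the slow work, and the result is handed back to the UI thread via the element's Dispatcher, which the next section looks at in more detail:

  using System;
  using System.Threading;
  using System.Windows;
  using System.Windows.Controls;
  using System.Windows.Threading;

  // Sketch: do slow work off the UI thread, then marshal the result back.
  public class ReportWindow : Window
  {
      private readonly TextBlock statusText = new TextBlock { Text = "Loading..." };

      public ReportWindow()
      {
          Content = statusText;
          ThreadPool.QueueUserWorkItem(_ =>
          {
              string result = FetchReport();   // long-running work, safely off the UI thread

              // statusText.Text = result;     // would throw InvalidOperationException:
              //                               // a different thread owns this object

              // Hand the UI update back to the UI thread via the element's Dispatcher.
              statusText.Dispatcher.BeginInvoke(
                  DispatcherPriority.Normal,
                  new Action(() => statusText.Text = result));
          });
      }

      private static string FetchReport()
      {
          Thread.Sleep(2000);                  // simulate a slow service call
          return "Report ready";
      }
  }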

The Dispatcher

The UI thread in WPF introduces a concept called the Dispatcher. For a while, exactly what the Dispatcher was eluded me, and I had elevated it to the realm of magic. However, I now like to think of it as a system made up of a prioritized queue of work items and a loop that pulls items off that queue and runs them, one at a time, on the UI thread…
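A rough way to see that queue-and-loop behaviour for yourself is the console sketch below (again my own, not code from the post; it only needs a reference to WindowsBase). It queues a few work items at different priorities on the current thread's Dispatcher and then pumps it:

  using System;
  using System.Windows.Threading;

  // The Dispatcher keeps a prioritized queue of delegates and processes them
  // one at a time on its thread. Items queued at a higher DispatcherPriority
  // run first, regardless of the order in which they were queued.
  class DispatcherQueueDemo
  {
      static void Main()
      {
          Dispatcher dispatcher = Dispatcher.CurrentDispatcher;

          dispatcher.BeginInvoke(DispatcherPriority.Background,
              new Action(() => Console.WriteLine("background item")));
          dispatcher.BeginInvoke(DispatcherPriority.Normal,
              new Action(() => Console.WriteLine("normal item")));
          dispatcher.BeginInvoke(DispatcherPriority.Send,
              new Action(() => Console.WriteLine("send (highest priority) item")));

          // Queue a shutdown at the lowest priority so the loop exits once drained.
          dispatcher.BeginInvoke(DispatcherPriority.SystemIdle,
              new Action(dispatcher.InvokeShutdown));

          Dispatcher.Run();   // prints: send, normal, background
      }
  }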

Responsive WPF User Interfaces Part 1

January 20th, 2009 / LeeCampbell


WPF is a fantastic platform for creating user interfaces (UIs) that embrace all the sexy and powerful fashions of the day. Sexy features like reflections, animations, flow documents & 3D are all native to WPF. Powerful programming concepts such as declarative programming & test-driven development are available and encouraged. While many developers new to WPF have a hard enough time learning XAML and the binding model that comes with it, I feel that a key concept of usable interfaces can get forgotten: responsiveness. This, therefore, is the first in a series of posts on producing responsive UIs in WPF.
This series of posts will keep a narrow focus on producing UIs that are responsive. Where necessary I will digress to cover essential background concepts. There is an assumption, however, that the audience has basic WPF skills and has at least covered the WPF Hands on Lab 1.

Declarative Programming

I thought it would be nice to jump straight to some code that gives us responsive user interfaces straight out of the box. WPF has the concept of Storyboards, which allow a property to transition from one value to another over a given time. The great thing about Storyboards is that they cater for binding, they don’t have to be linear transitions, they are smart enough to cancel themselves out when they are no longer valid, and you can program them declaratively in XAML.
The example below shows the usage of two Storyboards. These Storyboards declare that the opacity of the target ("glow") shall change from its current value to 1 (timeline 1) or 0 (timeline 2). Each Storyboard is set to run for 0.3 seconds. The first timeline is started by the trigger associated with the IsMouseOver property. The second is associated with the exit of the condition that started the first timeline. This means that when we mouse over a button, a glow will gradually illuminate, and when we mouse off, the illumination will fade away. What is great about Storyboards in this example is that if you mouse over then mouse off in less than 0.3 seconds, the first Storyboard hands over to the second without completing, and the second fades the opacity from wherever it was, somewhere between 0 and 1, down to 0.
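The original markup is not reproduced in this excerpt, but a sketch of the XAML being described would look something like the following (the control template and the element name "glow" are illustrative; only the two opacity animations and the IsMouseOver trigger follow the description above):

  <Button Content="Hover me">
    <Button.Template>
      <ControlTemplate TargetType="Button">
        <Grid>
          <Rectangle x:Name="glow" Fill="LightBlue" Opacity="0"/>
          <ContentPresenter HorizontalAlignment="Center" VerticalAlignment="Center"/>
        </Grid>
        <ControlTemplate.Triggers>
          <Trigger Property="IsMouseOver" Value="True">
            <!-- Timeline 1: fade the glow from its current value up to 1 over 0.3s -->
            <Trigger.EnterActions>
              <BeginStoryboard>
                <Storyboard>
                  <DoubleAnimation Storyboard.TargetName="glow"
                                   Storyboard.TargetProperty="Opacity"
                                   To="1" Duration="0:0:0.3"/>
                </Storyboard>
              </BeginStoryboard>
            </Trigger.EnterActions>
            <!-- Timeline 2: fade back down to 0 when the condition exits -->
            <Trigger.ExitActions>
              <BeginStoryboard>
                <Storyboard>
                  <DoubleAnimation Storyboard.TargetName="glow"
                                   Storyboard.TargetProperty="Opacity"
                                   To="0" Duration="0:0:0.3"/>
                </Storyboard>
              </BeginStoryboard>
            </Trigger.ExitActions>
          </Trigger>
        </ControlTemplate.Triggers>
      </ControlTemplate>
    </Button.Template>
  </Button>

Because no From value is given, each animation starts from the current opacity, which is what gives the smooth hand-over when the mouse leaves before the first timeline completes.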

Responsive UIs in WPF – Dispatchers to Concurrency to testability

January 20th, 2009 / LeeCampbell

Welcome to my series on building responsive WPF applications. This series, aimed at the intermediate WPF developer, guides you towards building WPF applications that won’t freeze up on your users under load. Users expect applications to be responsive, and they will lose confidence in applications that noticeably freeze, even for the smallest pause.

In this series I will introduce you to:

  • wonderful features that WPF offers to allow you to avoid multi-threaded programming for simple tasks
  • the threading model that WPF implements and why you should care
  • creating responsive single threaded applications

Must-haves for future grid-computing

October 2nd, 2008

I’ve been test-flying a few datagrid/datafabric products lately and had a nice opportunity to try them out on gigabit Ethernet and InfiniBand. The thing I noticed with all distributed-data systems is that synchronous replication is always a bottleneck, because you’re forced to wait on the replication ack before proceeding with the next operation. Because of this, gigabit Ethernet’s latency puts a ceiling of roughly 5,000 operations per second on the throughput you can achieve.
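As a back-of-the-envelope check (my numbers, not the original post's), assume each synchronous write has to wait roughly 200 microseconds for its replication ack to make the round trip over gigabit Ethernet; serialized, that caps throughput at about 5,000 operations per second:

  using System;

  // Sketch: with synchronous replication, each operation waits for its ack
  // before the next can start, so throughput <= 1 / round-trip time.
  // The 200 microsecond figure is an assumed, typical Gig-E round trip.
  class ReplicationCeiling
  {
      static void Main()
      {
          double roundTripSeconds = 200e-6;                 // ~200 microseconds per ack
          double maxOpsPerSecond = 1.0 / roundTripSeconds;  // serialized ceiling
          Console.WriteLine(maxOpsPerSecond);               // 5000, the "~5k ceiling"
      }
  }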

For any grid vendor, then, it’s crucial to roll out support for modern, ultra-low-latency interconnects, as their latency characteristics blow away any Gig-E numbers. For real-time price publishers and for algo trading, latency is the key issue, more than throughput.

Waters Magazine: Flying By Wire

August 1st, 2008 / newyorkscot

Waters Magazine has just published my article “Flying By Wire” in its Open Platform section of the August issue. The article discusses how advanced trading systems need better control systems to dependably innovate and take new opportunities to market. I draw an analogy between trading systems and modern jet aircraft where stability, performance and control are essential characteristics that need to be considered during design, development and testing. Read the article on the Lab49 website here.

Parallel Programming in Native Code

June 6th, 2008 / Kenny Kerr : Technology

Rick Molloy just mentioned to me that he’s created a new blog on MSDN to coincide with Stephen’s talk at TechEd today about the Concurrency Runtime for Visual C++.

Parallel Programming in Native Code

Welcome to the Parallel Programming in Native Code blog.  I started this blog so that I and others on my team would have a place to talk about topics relating to native concurrency.  I want to use this blog to provide early looks into what we’re thinking about, give announcements about any publicly available content or CTPs and of course respond to feedback that we receive from readers and customers…

HPC Considered Harmful

May 30th, 2008 / Development in a Blink

This is how Greg Wilson, of the Department of Computer Science at the University of Toronto, begins his presentation.

I do like that he points out

Software engineering for science has to address three fundamental issues […]

It’s a shame “getting the right answer” and “being productive” don’t make the list.

Cluster Primitives: MPI, MPI.NET, Large Data, and Passing Classes

April 27th, 2008 / Andre de Cavaignac

The Message Passing Interface (MPI) standard, and its .NET implementation, MPI.NET, have been cornerstones of development on compute clusters. The standard supplies a simple yet primitive way of sending and receiving data between running compute processes.
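By way of illustration, here is a minimal point-to-point sketch in the style of the MPI.NET tutorials (my own, not code from the original post; exact method overloads may differ between MPI.NET versions). Rank 0 sends a message that rank 1 receives; run it under mpiexec with at least two processes:

  using System;
  using MPI;

  class PointToPoint
  {
      static void Main(string[] args)
      {
          using (new MPI.Environment(ref args))        // initialize and finalize MPI
          {
              Intracommunicator comm = Communicator.world;
              if (comm.Rank == 0)
              {
                  comm.Send("hello from rank 0", 1, 0);        // (value, destination, tag)
              }
              else if (comm.Rank == 1)
              {
                  string msg = comm.Receive<string>(0, 0);     // (source, tag)
                  Console.WriteLine(msg);
              }
          }
      }
  }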

A virtuous pairing

April 16th, 2008 / Joe on Computing

I ran across the following comments on an El Reg article about virtualization: Comments about “Virtualization: Nothing New”.

He’s right, isn’t he? Having the same company offer both virtualization and grid solutions is a truly virtuous pairing. First you tell people they need a grid solution to make a huge pool of computers look like a single one, then after it’s all set up you sell them virtualization software. It’s beautiful. It’s like if, for example, Nestle and Jenny Craig got together to simultaneously offer chocolate products and dieting solutions. Oh wait, they did: Nestle to buy Jenny Craig.

Podcast: Lab49, ScaleOut, and Microsoft Talk About Distributed Cache

About three weeks ago, I had the opportunity to sit down with Bill Bain of ScaleOut Software and the two Joes, Joe Cleaver and Joe Rubino, from Microsoft’s Financial Services Industry Evangelism team after I gave my presentation on distributed caches at Microsoft’s 6th Annual Financial Services Developer Conference. The two Joes recorded a podcast of our conversation.

Bill, Joe, and Joe, thanks for the opportunity to talk with you guys.

Not-So-Hidden Latency Part 2 – Trader/Comprehension Latency

March 20th, 2008

Following on from my previous post on Not-So-Hidden Latency, another topic Tom Groenfeldt and I had started discussing earlier this week was something we at the lab have been thinking about for some time: trader latency or comprehension latency. I’ll explain below.

As the search for low latency has continued, it has focused on two things: (a) reducing the latency in receiving and processing market data and (b) reducing the latency in executing a transaction. Now, insofar as algorithmic trading is concerned, the search has also focused on reducing the time it takes for algorithms to make execution decisions based upon that incoming market data, i.e., the time it takes them to get from (a) to (b).

Gigabit, We Hardly Knew You

It seems just yesterday that 10Mbit 10BASE-T Ethernet networks were the norm, and the workstation wonks I worked with years ago at US Navy CINCPACFLT in Pearl Harbor, Hawaii, jockeyed to have high-speed ATM fiber run to their offices. Sure, this was the age when dual bonded ISDN lines represented the state of the art in home Internet connectivity, but who really needed that much bandwidth? What did we have to transfer? Email? Usenet posts? Gopher pages?

Post-Game: Microsoft Financial Services Developer Conference

As mentioned in a previous post, I spent two days last week at the 6th Annual Microsoft Financial Services Developer Conference, and I have to say that it was a great event.

On Wednesday, I gave my talk on distributed caches:

The room was packed, folks were asking great questions, and the feedback I got was very positive. For folks who are already knee-deep in high-performance computing and distributed caches, the presentation may not offer much that isn’t already known (except perhaps the later sections on the performance tests we ran in the lab and on advanced techniques like object segmentation). But given that Microsoft had given this conference a clear emphasis on HPC, and that many developers in attendance were relatively new to the subject, the presentation seemed to strike a fair balance between background and practice.

Parallel Programming with C++ – Part 4 – I/O Completion Ports

January 3rd, 2008 / Kenny Kerr : Technology

So far in the Parallel Programming with C++ series I’ve talked about asynchronous procedure calls (APCs) and how they can be used to build efficient and responsive client applications quite easily by waiting for I/O requests to complete asynchronously without having to create additional worker threads.

Although APCs are the most efficient way to perform asynchronous I/O, they have one drawback: an APC will only ever complete on the same thread that initiated the operation. This usually isn’t a problem for client applications, but it is woefully inadequate for server applications. Client applications may also find it unacceptable if they need to process a lot of I/O requests. The problem is that if only a single thread is being used, then only a single processor is in use, and any additional processors are likely sitting idle.

Windows Compute Cluster (HPC) Basics: Running Map/Reduce Models on CCS 2003

December 26th, 2007 / Andre de Cavaignac

After not writing anything about HPC or Windows Compute Cluster for a while, I figured it's about time I write *something* about it, because I've been working with it so much recently!

Download the Sample Project (Visual Studio 2008, Beta 2)

Windows Compute Cluster Server 2003 is Microsoft's relatively new implementation of cluster computing on Windows. Although Microsoft is starting behind in this game (Unix has been in grid computing for a long time now), the .NET development environment and ease of administration make CCS a very compelling environment. The first version of CCS is not very feature-rich, but it provides the core components required to build distributed applications. Version 2 (named Windows HPC Server 2008), which is currently in beta, offers a much wider range of functionality and should add some new ideas and a competitive edge to the Windows grid-computing world.

Parallel Programming with C++ – Part 3 – Queuing Asynchronous Procedure Calls

December 13th, 2007 / Kenny Kerr : Technology

In part 1 of the Parallel Programming with C++ series I introduced asynchronous procedure calls (APCs) and how they can be used with alertable I/O to process asynchronous I/O requests without blocking an application’s thread. In part 2, I showed how APC handling can be integrated with a window’s message loop.

Parallel Programming with C++ – Part 2 – Asynchronous Procedure Calls and Window Messages

December 12th, 2007 / Kenny Kerr : Technology

In part 1 of the Parallel Programming with C++ series I introduced asynchronous procedure calls (APCs) and how they can be used with alertable I/O to process asynchronous I/O requests without blocking an application’s thread.

Of course, the example still ended up blocking, since the SleepEx function was used to flush the APC queue. Fortunately, that’s not the only function Windows provides to place a thread in an alertable state; in fact, Windows provides a number of such functions with different characteristics. One that is particularly useful for client applications is MsgWaitForMultipleObjectsEx, as it allows you to integrate APC handling into a thread’s message loop.

Microsoft Financial Services Industry Chat

October 26th, 2007

My chat with Joe Cleaver & Joe Rubino has now been posted here. Have a listen, it clocks in at around 40(!) minutes, so sit back & enjoy.

Interesting apps

August 30th, 2007 / Development in a Blink

  • LINQPad supports LINQ to objects, LINQ to SQL, and LINQ to XML; in fact, everything in C# 3.0 and .NET Framework 3.5. LINQPad is also a learning tool for experimenting with this new technology.
  • MPI.NET is a .NET wrapper for the MS-MPI implementation that enables C# developers to create high-performance computing applications. It needs the Microsoft Compute Cluster Pack SDK.
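For example, a throwaway LINQ-to-objects query of the sort LINQPad is handy for experimenting with (my own illustrative snippet, pasted in as C# statements; the data is made up):

  var trades = new[]
  {
      new { Symbol = "MSFT", Quantity = 100, Price = 27.5m },
      new { Symbol = "GOOG", Quantity =  20, Price = 510m  },
      new { Symbol = "MSFT", Quantity =  50, Price = 28.1m },
  };

  // Group the trades by symbol and sum the notional value of each group.
  var bySymbol =
      from t in trades
      group t by t.Symbol into g
      select new { Symbol = g.Key, Notional = g.Sum(x => x.Quantity * x.Price) };

  foreach (var row in bySymbol)
      Console.WriteLine("{0}: {1}", row.Symbol, row.Notional);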

Amazon’s Grid on Demand

August 23rd, 2007

Amazon are pioneering a rent-a-grid service called EC2, which looks extremely interesting. I hope to spend some time playing with it this weekend; it sounds great for distributed Erlang testing!