The eagerly awaited Intel Sandy Bridge processors were finally announced yesterday and have received superb reviews. Read more at MacRumors, Engadget, TechReport, Intel, and Intel Blogs (an older link). Amongst improvements on all fronts, they feature vastly improved graphics performance and battery life. These really can’t come to the Mac line soon enough. No doubt Apple will be touting 15 hours of battery life with these if they’re touting 10 hours now. A mid-year release, I reckon, along with Lion, though I’d like to see them in the MacBook Air more than any other model, as it is, without a doubt, best of breed now. To quote Intel on what I think is the most significant feature of this release:
Is anyone using CUDA.NET or OpenCL.NET? I’m thinking that a few GTX 480s, each with 480 cores, will provide some power for crunching structural risk. Does anyone know of any pros/cons of using either CUDA or OpenCL?
Recently (2010-09-21) I spent all day learning about the newly released Microsoft High Performance Computing (HPC) product, version 2008 R2. I had a chance to work briefly with the initial version of the product back in 2007, and I saw many good improvements over that first version. The list is too long to enumerate in full – the highlights are better node management (including pre/post compute state, verification and distribution of software/patches to nodes, etc.), status and problem reporting, root-cause identification, etc. One very helpful addition is “sanity” checks that can be run pre-deployment or at any time. Many immature (usually homegrown) grid management solutions are bounced like yo-yos daily just to verify that the nodes are in a good state. They also make it difficult to visualize the state of the grid, or to find outlier nodes in performance for root-cause analysis – all tasks in which HPC 2008 R2 has a superior offering. Back in 2007, one of my managers flat out asked if there was a way to use HPC to admin an existing in-house Linux/Java grid.
“HPC nodes in Windows Azure” sounds like a sensible addition. Let’s hope Microsoft improve the HPC API – specifically the SOA stuff.
For once I wasn’t flying to New York. But as usual, one has to read something during these long flights.
MSR continues to push forwards with Dryad. But when will we see Dryad sold as part of the HPC product?
My question is how real-time is their solution? What products are they supporting in this real-time world? Whose model are they using?
To build the DryadLINQ samples you need to install the HPC 2008 client SDK, followed by DryadLINQ_x86.msi or DryadLINQ_x64.msi. I compiled the AddPair sample to test the HPC Dryad install, which generated this output:
DryadLinq0.dll was built successfully.
Query 0 Output: file://\\Lab49HPC\Drop Area\output\17325799-fb79-4c93-a624-80f8f
DryadLinq1.dll was built successfully.
[PartitionedTable: file://\\Lab49HPC\Drop Area\output\AddPair.pt]
RangePartition(p => p.Left,_)
Select(x => (x.Left + x.Right))
13/05/2010 15:35:39 Connecting to HPC cluster.
13/05/2010 15:35:39 Creating job submission information.
13/05/2010 15:35:39 Requesting min of 2 and max of 1000 nodes.
13/05/2010 15:35:39 Copying 8 files to server
13/05/2010 15:35:40 Submitting job.
13/05/2010 15:35:51 Job submitted.
The job to create this table is still queued. Waiting …
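Stripped of the distribution machinery, the plan in that output is just a range partition followed by a projection. As a single-machine sketch of what AddPair computes (the data and the partition boundary below are hypothetical, and Python stands in for the C# query):

```python
# Toy sketch of the AddPair query plan:
#   RangePartition(p => p.Left, _)  splits records into key ranges on Left,
#   Select(x => x.Left + x.Right)   then maps each pair to its sum.
from bisect import bisect_right

pairs = [(7, 1), (2, 3), (9, 4), (4, 4)]  # hypothetical (Left, Right) records
boundaries = [5]                          # hypothetical range-partition boundary

# RangePartition: route each record to a partition by its Left key.
partitions = [[] for _ in range(len(boundaries) + 1)]
for p in pairs:
    partitions[bisect_right(boundaries, p[0])].append(p)

# Select: the per-partition projection each Dryad vertex would apply.
results = [[left + right for (left, right) in part] for part in partitions]
print(results)  # [[5, 8], [8, 13]]
```

On the cluster, each inner list would live in its own partition of the output PartitionedTable and be produced by a separate vertex.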
To ensure jobs are purged from the queue, it’s important to call Session.Close() and BrokerClient.Close(true).
The message queues will also be purged when their corresponding job’s TTL expires. The messages aren’t deleted when the job ends (for durable sessions) because a client may come back to retrieve the results after the job completes and resources are given back to the cluster.
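To make that lifecycle concrete, here is a toy single-process model of the behaviour described above (the class, method names, and TTL value are all mine, not the HPC API): responses outlive the job so a disconnected client can come back for them, and they disappear on an explicit purge or once the TTL lapses.

```python
# Toy model of durable-session message retention (illustrative only).
class DurableQueue:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.messages = []
        self.job_ended_at = None

    def job_ends(self, now):
        # Resources go back to the cluster, but messages are kept for the TTL.
        self.job_ended_at = now

    def fetch(self, now):
        # A reconnecting client still sees results until the TTL expires.
        if self.job_ended_at is not None and now - self.job_ended_at > self.ttl:
            self.messages = []
        return list(self.messages)

    def purge(self):
        # Analogue of BrokerClient.Close(true): explicit early cleanup.
        self.messages = []

q = DurableQueue(ttl_seconds=60)
q.messages = ["result-1", "result-2"]
q.job_ends(now=0)
print(q.fetch(now=30))   # ['result-1', 'result-2']  (client came back in time)
print(q.fetch(now=120))  # []                        (TTL expired)
```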
Firstly, if you’re going to do anything with DryadLINQ you need to get some serious hardware (read: RAM) – the scavenged hardware that I’ve used so far for the HPC work might not be sufficient. The DryadLINQ Programming Guide is a must-read, as it provides information on many of the samples that come with the SDK.
It’s nice to be back in NYC – it’s been a year or so – and luckily today was pleasant from a weather perspective. Sometimes you need to make these short-duration trips to help out clients; luckily my body clock doesn’t suffer from jetlag.
Visual Studio 2010 contains innovations in the parallel computing space to help developers build parallel applications.
So here, in all its glory, is my latest Proof Of Concept (POC). Leveraging everything I have blogged about previously, it’s time to see if we can move on from the Excel RunnerHPC world. The diagram below essentially provides a high-level architecture of what I hope will be a useful guide to leveraging Windows Server 2008 HPC to calculate Market Risk.
It appears that if you use the default broker that comes with HPC 2008 R2, you get a durable broker and can use DurableSession. However, move to a custom broker and you appear to be unable to leverage the default broker’s durability, hence you’re completely on your own. Goodbye, DurableSession.
The HPC SDK has examples covering both the Scheduler/Job/Task API and HPC SOA. I was hoping that tasks would be more than what they provide, but essentially they are just a way of executing an application on a node. I’ll explain more soon as to what I was hoping to use tasks for – specifically around the pre-tasks of a job.
Interesting, but I don’t quite think job templates are dynamic enough for what I want in my Market Risk POC. I may be wrong, but templates look a little too regimented for the dynamic resource orchestration I want, and believe is required, in risk land. I may be proved wrong in the coming weeks, but for the moment let’s see where we go with custom brokers.
I’m sure the HPC team could improve the error messages displayed in the HPC Cluster Manager/View Tasks/Results pane:
Exception has been thrown by the target of an invocation.
The above is really no help, especially given that the code is .NET compiled in debug mode. Can’t we at least get a stack trace?
Platform EGO offers a nice feature that helps in SLA land – resource orchestration. I suspect Windows HPC 2008 R2 isn’t quite up to Platform’s feature set.
When you install Dryad, the installer looks for “Microsoft HPC Pack\Bin”, but R2 puts HpcScheduler.exe in “Microsoft HPC Pack 2008 R2\Bin”. Hence do the following:
In the Program Files folder where “Microsoft HPC Pack 2008 R2” is located, create a folder named “Microsoft HPC Pack”.
Create a “Bin” folder in “Microsoft HPC Pack” and copy HpcScheduler.exe from “Microsoft HPC Pack 2008 R2\Bin” to “Microsoft HPC Pack\Bin”.
Install Dryad on the cluster.
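The first two steps above can be sketched as follows – a Python sketch using a scratch directory as a stand-in for C:\Program Files and an empty file as a stand-in for the real HpcScheduler.exe (on an actual node this is simply a mkdir and a file copy from an elevated prompt):

```python
# Sketch of the Dryad installer workaround; a temp directory stands in for
# C:\Program Files, and an empty file stands in for the real HpcScheduler.exe.
import os
import shutil
import tempfile

program_files = tempfile.mkdtemp()
r2_bin = os.path.join(program_files, "Microsoft HPC Pack 2008 R2", "Bin")
legacy_bin = os.path.join(program_files, "Microsoft HPC Pack", "Bin")

# What R2 actually installs.
os.makedirs(r2_bin)
open(os.path.join(r2_bin, "HpcScheduler.exe"), "wb").close()

# The Dryad installer only probes "Microsoft HPC Pack\Bin", so create that
# folder and copy the scheduler binary across.
os.makedirs(legacy_bin)
shutil.copy(os.path.join(r2_bin, "HpcScheduler.exe"), legacy_bin)

print(sorted(os.listdir(legacy_bin)))  # ['HpcScheduler.exe']
```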