Wednesday, 25 July 2012
In order to build robust cloud applications, a client that calls a service has to handle four scenarios: success, partial success, transient failure, and failure.
Partial Success (Success with conditions)
A partial success occurs when a service accomplishes only part of a requested task. This might be a query where you ask for the last 100 transactions and only the last 50 are returned, or the service creates an order entry but does not submit the order. Usually a reason is supplied with the partial success. Based on that reason, the client has to decide what to do next.
Transient Failure
Transient failures occur when some resource (like a network connection) is temporarily unavailable. You might see this as a timeout, or as error information indicating what occurred. As discussed in a previous post, continually retrying a transient resource impedes scalability because resources are held while the retries occur. It is better to retry a few times and, if you still cannot access the resource, treat it as a complete failure.
Failure
With a failure, you might try another strategy before you treat the resource access as failed. You might relax some conditions and thereby achieve a partial success. You might access another resource that can accomplish the same task (say, obtaining a credit rating), albeit at greater cost. In any case, all failures should be reported to the client. You can summarize this responsibility in this diagram:
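The retry-then-fail policy for transient failures can be sketched as follows. This is a minimal illustration, not any particular SDK's API; the `TransientError` type, attempt count, and backoff constants are all assumptions.

```python
import random
import time

class TransientError(Exception):
    """Raised when a resource (network, service) is temporarily unavailable."""

def call_with_retry(operation, max_attempts=3, base_delay=0.1):
    """Try the operation a few times, backing off between attempts, then give up.

    Retrying forever holds on to resources and impedes scalability, so after
    max_attempts the transient failure is escalated to a complete failure.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # now a complete failure; caller may try another strategy
            # jittered exponential backoff before the next attempt
            time.sleep(base_delay * (2 ** attempt) * random.random())

# An operation that times out twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("connection timed out")
    return "last 100 transactions"

print(call_with_retry(flaky))  # succeeds on the third attempt
```

The caller that catches the final exception is the place to apply the failure strategies discussed above: relax conditions, or fall back to a costlier resource.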
Bill Wilder helped me formulate these thoughts.
Thursday, 27 October 2011
Tuesday, 14 December 2010
Windows Azure provides two storage mechanisms: SQL Azure and Azure Storage tables. Which one should you use?
Can Relational Databases Scale?
SQL Azure is basically SQL Server in the cloud. To get meaningful results from a query, you need a consistent set of data.
Transactions allow for data to be inserted according to the ACID principle: all related information is changed together.
The longer the database lock manager keeps locks, the higher the likelihood two transactions will modify the same data. As transactions wait for locks to clear, transactions will either be slower to complete, or transactions will time out and must be abandoned or retried. Data availability decreases.
Content distribution networks enable read-only data to be delivered quickly to overcome the speed of light boundary. They are useless for modifiable data. The laws of physics drive a set of diminishing economic returns on bandwidth. You can only move so much data so fast.
Jim Gray pointed out years ago that computational power gets cheaper faster than network bandwidth. It makes more economic sense to compute where the data is rather than moving it to a computing center. Data is often naturally distributed.
Is connectivity to that data always possible? Some people believe that connectivity will always be available. Cell phone connectivity problems, data center outages, equipment upgrades, and last-mile problems indicate that is never going to happen.
Computing in multiple places leads to increased latency. Latency means longer lock retention. Increased lock retention means decreased availability.
Most people think of scaling in terms of large numbers of users: Amazon, Facebook, or Google. But scalability pressures also come from the geographic distribution of users, the transmission of large quantities of data, or any bottleneck that lengthens the time of a database transaction.
The economics of distributed computing argue in favor of many small machines, rather than one large machine. Google does not handle its search system with one large machine, but many commodity processors. If you have one large database, scaling up to a new machine can cost hours or days.
The CAP Theorem
Eric Brewer’s CAP Theorem summarizes the discussion. Given the constraints of consistency, availability, and partitioning, you can only have two of the three. We are comfortable with the world of a single database or database cluster with minimal latency, where we have consistency and availability.
If we are forced to partition our data should we give up on availability or consistency? Let us first look at the best way to partition, and then ask whether we want consistency or availability.
What is the best way to partition?
If economics, the laws of physics, and current technology limits argue in favor of partitioning, what is the best way to partition? Distributed objects, whether by DCOM, CORBA, or RMI, failed for many reasons. The RPC model increases latencies that inhibit scalability. You cannot ignore the existence of the network. Distributed transactions fail as well because once you get beyond a local network, the latencies of two-phase commit impede scalability.
Two better alternatives exist: a key value/type store such as Azure Storage Services, or partitioning data across relational databases without distributed transactions.
Storage Services allow multiple partitions of tables with entries. Only CRUD operations exist: no foreign key relations, no joins, no constraints, and no schemas. Consistency must be handled programmatically. This model works well with hundreds of commodity processors and can achieve massive scalability.
One can partition SQL Azure horizontally or vertically. With horizontal partitioning, we divide table rows across databases. With vertical partitioning, we divide table columns across databases. Within a database you have transactional consistency, but there are no transactions across databases.
Horizontal partitioning works especially well when the data divides naturally: company subsidiaries that are geographically separate, historical analysis, or of different functional areas such as user feedback and active orders. Vertical partitioning works well when updates and queries use different pieces of data.
In all these cases we have to deal with data that might be stale or inconsistent.
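As a concrete illustration of horizontal partitioning without distributed transactions, here is a sketch of routing rows to the shard that owns them. The region names and connection strings are hypothetical, not from any real deployment.

```python
# Illustrative shard map: each geographic region owns one relational database.
SHARDS = {
    "emea": "Server=emea-db;Database=orders",
    "apac": "Server=apac-db;Database=orders",
    "amer": "Server=amer-db;Database=orders",
}

def shard_for(region):
    """Route a row to the database that owns its horizontal partition.

    Within one shard you keep full transactional consistency; no transaction
    spans shards, so cross-shard views may be stale or inconsistent.
    """
    try:
        return SHARDS[region]
    except KeyError:
        raise ValueError(f"no shard configured for region {region!r}")

print(shard_for("emea"))  # Server=emea-db;Database=orders
```

The routing function is the whole trick: every transaction touches exactly one shard, which is why the scheme scales where two-phase commit does not.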
Consistency or Availability?
Ask a simple question: What is the cost of an apology?
The number of available books on Amazon is a cached value, not guaranteed to be correct. If Amazon ran a distributed transaction over all the shopping cart orders, the book inventory system, and the shipping system, it could never build a massively scalable front-end user interface. Transactions would be dependent on user interactions that could range from 5 seconds to hours, assuming the shopping cart is not abandoned. It is impractical to keep database locks that long. Since most of the time you get your book, availability is a better choice than consistency.
Airline reservation systems are similar. A database used for read-only flight shopping is updated periodically. Another database is used for reservations. Occasionally, you cannot get the price or flight you wanted. Using one database to achieve consistency would make searching for fares or making reservations take forever.
Both cases have an ultimate source of truth: the inventory database, or the reservations database.
Businesses have to be prepared to apologize anyway. Checks bounce, the last book in the inventory turns out to be defective, or the vendor drops the last crystal vase. We often have to make records and reality consistent.
Software State is not the State of the World
We have fostered a myth that the state of the software has to be identical to the state of the world at all times. This often makes software applications difficult to use, or impossible to write. What it is worth paying to get it absolutely right is a business decision. As Amazon and the airlines illustrate, the cost of lost business and convenience sometimes offsets the occasional problems of inconsistent data. You must then design for eventual consistency.
Scalability is based on the constraints of your application, the volume of data transmitted, or the number and geographic distribution of your users.
Need absolute consistency? Use the relational model. Need high availability? Use Azure tables, or the partitioned relational model. Availability is a subjective measure; you might partition and still get consistency.
If the nature of your world changes, however, it is not easy to shift from the relational model to a partitioned model.
Monday, 22 November 2010
Determining how to divide your Azure table storage into multiple partitions is based on how your data is accessed. Here is an example of how to partition data, assuming that reads predominate over writes.
Consider an application that sells tickets to various events. Typical questions, and the attributes accessed by the queries, are:
How many tickets are left for an event? (date, location, event)
What events occur on which date?
When is a particular artist coming to town?
When can I get a ticket for a type of event?
Which artists are coming to town?
The queries are listed in frequency order. The most common query, how many tickets are available for an event, uses event, date, and location. Across the other queries, the most common combination of attributes is artist or date for a given location.
With Azure tables you only have two keys: partition and row. The fastest query is always the one based on the partition key. This leads us to the suggestion that the partition key should be location, since it is involved in all but one of the queries. The row key should be date concatenated with event. This gives a quick result for the most common query. The remaining queries require table scans, but all except one are helped by the partitioning scheme. In reality, that query is probably location-based as well.
The added bonus of this arrangement is that it allows for geographic distribution to the data centers closest to the customers.
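The key design above can be sketched in a few lines. The entity shape and key formats are illustrative assumptions, not a real SDK schema.

```python
def make_keys(location, date, event):
    """Build partition and row keys for a ticket-inventory entity.

    Queries scoped to a location hit a single partition; within it, the row
    key sorts by date, so "events at this location on this date" becomes a
    fast range scan rather than a full table scan.
    """
    partition_key = location
    row_key = f"{date}_{event}"  # ISO dates sort correctly as strings
    return partition_key, row_key

pk, rk = make_keys("Boston", "2012-07-25", "SpringsteenConcert")
print(pk, rk)  # Boston 2012-07-25_SpringsteenConcert
```

The date-first row key is the design choice that matters: it makes the most frequent query a key lookup plus a prefix scan inside one partition.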
Wednesday, 15 September 2010
"Government, without popular information, or the means of acquiring it, is but a Prologue to a Farce or a Tragedy; or, perhaps both. Knowledge will forever govern ignorance."
What is it?
Control over information is a societal danger similar to control over economic resources or political power. Representative government will not survive without the information to help us create meaningful policies. Otherwise, advocates will too easily lead us to the conclusion they want us to support.
How does one get access to this data?
Right now, it is not easy to get access to authoritative data. If you have money you search for it, purchase it, or do the research to obtain it. Often, you have to negotiate licensing and payment terms. Why can’t we shop for data the same way we find food, clothing, shelter, or leisure activities? None of these activities requires extensive searches or complex legal negotiations.
Why can’t we have a marketplace for data?
Microsoft Dallas is a marketplace for data. It provides a standard way to purchase, license, and download data. Currently it is a CTP, and no doubt will undergo a name change, but the idea will not.
The data providers could be commercial or private. Right now, they range from government agencies such as NASA or the UN to private concerns such as Info USA and NAVTEQ. You can easily find out their reputations so you know how authoritative they are.
As a CTP there is no charge, but the product offering will have either transaction/query or subscription based pricing. Microsoft has promised “easy to understand licensing”.
What are the opportunities?
There is one billing relationship in the marketplace because Microsoft will handle the payment mechanisms. Content Providers will not have to bill individual users. They will not have to write a licensing agreement for each user. Large provider organizations can deal with businesses or individuals that in other circumstances would not have provided a reasonable economic return. Small data providers can offer their data where it would have previously been economically unfeasible. Content Users would then be able to easily find data that would have been difficult to find or otherwise unavailable. The licensing terms will be very clear, avoiding another potential legal headache. Small businesses can create new business opportunities.
The marketplace itself is scalable because it runs on Microsoft Azure.
For application developers, Dallas is about your imagination. What kind of business combinations can you imagine?
How do you access the data?
Dallas will use the standard OData API. Hence Dallas data can be used from Java, PHP, or on an iPhone. The data itself can be structured or unstructured.
An example of unstructured data is the Mars rover pictures. The Associated Press uses both structured and unstructured data. The news articles are just text, but there are relationships between various story categories.
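Because access is plain OData over HTTP, a client only has to compose a query URL. The sketch below builds one with standard OData query options; the service root and entity set names are hypothetical placeholders, not actual Dallas feeds.

```python
from urllib.parse import urlencode

def odata_query(service_root, entity_set, filter_expr=None, top=None):
    """Compose an OData GET URL; any HTTP client in any language can issue it."""
    options = {}
    if filter_expr:
        options["$filter"] = filter_expr
    if top is not None:
        options["$top"] = str(top)
    url = f"{service_root}/{entity_set}"
    if options:
        url += "?" + urlencode(options)
    return url

# Hypothetical feed: the ten most recent science articles.
print(odata_query("https://example.com/data.svc", "NewsArticles",
                  filter_expr="Category eq 'Science'", top=10))
```

This URL-composition step is all that language bindings really wrap, which is why the same feed is reachable from Java, PHP, or a phone.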
Dallas can integrate with the Azure AppFabric Access Control Service.
Your imagination is the limit.
The standard API is very simple. The only real limit is your ability to imagine the possibilities for combining data.
What kind of combinations can you think of?
Wednesday, 18 August 2010
Sunday, 11 July 2010
Commodity hardware has gotten very cheap. Hence it often makes more economic sense to spread the load in the cloud over several cheap, commodity servers, rather than one large expensive server.
Microsoft's Azure data pricing makes this very clear. One Gigabyte of SQL Azure costs about $10 per month. Azure table storage costs $0.15 per GB per month.
The data transfer costs are the same for both. With Azure table storage you pay $0.01 for each 10,000 storage transactions.
To break even with the SQL Azure price, you could perform about 9,850,000 storage transactions per month. That is a lot of transactions!
Another way to look at the cost is to suppose you need only 2,600,000 storage transactions a month (1 a second assuming an equal time distribution over the day). That would cost you only $2.60. That means you could store almost 50 GB worth of data. To store 50 GB worth of data in SQL Azure would cost about $500 / month.
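The arithmetic above can be checked in a few lines, using the 2010 prices quoted in this post.

```python
# 2010 prices from this post: SQL Azure ~$10 per GB/month, table storage
# $0.15 per GB/month, plus $0.01 per 10,000 storage transactions.
SQL_PER_GB = 10.00
TABLE_PER_GB = 0.15
PER_10K_TX = 0.01

def sql_cost(gb):
    return gb * SQL_PER_GB

def table_cost(gb, transactions):
    return gb * TABLE_PER_GB + (transactions / 10_000) * PER_10K_TX

# Break-even transaction volume for one GB moved off SQL Azure:
breakeven_tx = (SQL_PER_GB - TABLE_PER_GB) / PER_10K_TX * 10_000
print(round(breakeven_tx))  # 9850000

# One transaction per second over a 30-day month:
tx_per_month = 60 * 60 * 24 * 30
print(round(table_cost(50, tx_per_month), 2))  # 50 GB in table storage: 10.09
print(sql_cost(50))  # 50 GB in SQL Azure: 500.0
```
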
If you don't need the relational model, it is a lot cheaper to use table or blob storage.
Sunday, 27 December 2009
One way to approach the different architectural implications is to look at the various vendor offerings and see how they match the various application types.
You can divide the cloud vendors into four categories, although one vendor might have offerings in more than one category:
Platform as a Service providers
Software as a Service providers
Application as a Service providers
Cloud Appliance Vendors
The Platform as a Service providers attempt to provide a cloud operating system for users to build an application on. An operating system serves two basic functions: it abstracts the underlying hardware and manages the platform resources for each process or user. Google App Engine, Amazon EC2, Microsoft Azure, and Force.com are examples of platform providers.
The most restrictive platform is the Google App Engine, because you program to the Google API, which makes it difficult to port to another platform. On the other hand, precisely because you program to a specific API, Google can scale your application and provide recovery for a failed application.
At the other extreme is Amazon. Amazon gives you a virtual machine with which you can program directly against the native OS installed on the virtual machine. This freedom comes with a price. Since the Amazon virtual machine has no knowledge of the application you are running, it cannot provide recovery or scaling for you. You are responsible for doing that. You can use third-party software, but that is just a means of fulfilling your responsibility.
Microsoft tries to achieve a balance between these two approaches. By using .NET you have a greater degree of portability than the Google API. You could move your application to an Amazon VM or even your own servers. By using metadata to describe your application to the cloud fabric, the Azure infrastructure can provide recovery and scalability.
The first architectural dimension (ignoring for a moment the relative economics, which will be explored in another post) is then how much responsibility you want to take for scaling and recovery vs. the degrees of programming freedom you want to have. Of course the choice between the Google API and Microsoft Azure might come down to the skill set of your developers, but in my opinion, for any significant application, the architectural implications of the platform choice should be the more important factor.
Sunday, 05 July 2009
I just did an interview on .NET rocks about cloud computing.
We covered a whole bunch of topics including:
what is cloud computing
comparing the various offerings of Google, Force.com, Amazon, and Microsoft
the social and economic environment required for cloud computing
the implications for transactional computing and the relational model
the importance of price and SLA for Microsoft, whose offering is different from Amazon's and Google's
the need for rich clients even in the world of cloud computing.
Sunday, 25 January 2009
I have uploaded the slides and code for my talk on Windows Azure at the Microsoft MSDN day in Boston on January 22.
The talk was a combination of slides from several PDC talks with some of my own additions. I went through the fundamental architecture of the Azure cloud operating system and the basic elements of the Azure cloud services (i.e. Identity, Workflow, Live, SQL Data Services).
I did two demos. The first was a simple thumbnail generator. It illustrated a simple, scalable architecture using web roles and worker roles that used the primitive Azure operating system storage for blobs and queues. It also demonstrated how you model and configure a cloud application. The second, using the SQL Data Services, demonstrated how to integrate a non-cloud application (on a desktop or server) with the cloud. The app used a variety of industry standard mechanisms (WS*, REST, HTTP Get) to create and query data.
Wednesday, 29 October 2008
At the PDC Microsoft announced its answer to Amazon and Google's cloud computing services.
This answer has two parts: the Azure platform and hosted applications. Unfortunately people confuse these two aspects of cloud computing although they do have some features in common.
The idea behind Azure is to have a hosted operating systems platform. Companies and individuals will be able to build applications that run on infrastructure inside one of Microsoft's data centers. Hosted services are applications that companies and individuals will use instead of running them on their own computers.
For example, a company wants to build a document approval system. It can outsource the infrastructure on which it runs by building the application on top of a cloud computing platform such as Azure. My web site and blog do not run on my own servers; I use a hosting company. That is an example of using a hosted application.
As people get more sophisticated about cloud computing we will see these two types as endpoints on a continuum. Right now as you start to think about cloud computing and where it makes sense, it is easier to treat these as distinct approaches.
The economics of outsourcing your computing infrastructure and certain applications is compelling as Nicholas Carr has argued.
Companies will be able to vary capacity as needed. They can focus scarce economic resources on building the software the organization needs, as opposed to the specialized skills needed to run computing infrastructure. Many small and mid-sized companies are already using hosting companies to run their applications. The next logical step is hosting on an operating system in the cloud.
Salesforce.com has already proven the viability of hosted CRM applications. If I am a small business and I need Microsoft Exchange, I have several choices. I can hire somebody who knows how to run an Exchange server. I can take one of my already overburdened computer people and hope they can become expert enough on Exchange to run it without problems. Or I can outsource to a company that knows about Exchange, the appropriate patches, security issues, and how to get it to scale. The choice seems pretty clear to most businesses.
We are at the beginning of the cloud computing wave, and there are many legitimate concerns. What about service outages as Amazon and Salesforce.com have had that prevent us from accessing our critical applications and data? What about privacy issues? I have discussed the cloud privacy issue in a podcast. People are concerned about the ownership of information in the cloud.
All these are legitimate concerns. But we have faced these issues before. Think of the electric power industry. We produce and consume all kinds of products and services using electric power. Electric power is reliable enough that nobody produces their own power any more. Even survivalists still get their usual power from the grid.
This did not happen overnight. There were bitter arguments over the AC and DC standards for electric power transmission. Thomas Edison (the champion of DC power) built an alternating-current electric chair for executing prisoners to demonstrate the "horrors" of Nikola Tesla's approach. There were bitter financial struggles between competing companies. Read Thomas Parke Hughes' classic work "Networks of Power: Electrification in Western Society, 1880-1930". Yet in the end we have reliable electric power.
Large-scale computing utilities could provide computation much more efficiently than individual businesses. Compare the energy and pollution efficiency of large-scale electric utilities with individual automobiles.
Large companies with the ability to hire and retain infrastructure professionals might decide to build rather than outsource. Some companies may decide to do their own hosting for their own individual reasons.
You probably already have information in the cloud if you have ever used Amazon.com. You have already given plenty of information to banks, credit card companies, and other companies you have dealt with. This information surely already resides on a computer somewhere. Life is full of trust decisions that you make without realizing it.
Very few people grow their own food, sew their own clothes, build their own houses, or (even in these tenuous financial times) keep their money in their mattresses any more. We have learnt to trust in an economic system to provide these things. This too did not happen overnight.
I personally believe that Internet connectivity will never be 100% reliable, but how much reliability will be needed depends on the mission criticality of an application. That is why there will always be a role for rich clients and synchronization services.
Hosting companies will have to be large to have the financial stability to handle lawsuits and survive for the long term. We will have to develop the institutional and legal infrastructure to handle what happens to data and applications when a hosting company fails. We learned how to do this with bank failures, and we will learn how to do this with hosting companies.
This could easily take 50 years with many false starts. People tend to overestimate what will happen in 5 years, and underestimate what will happen in 10-15 years.
Azure, the color Microsoft picked for the name of its platform, is the color of a bright, cloudless day. Interesting metaphor for a cloud computing platform. Is the future of clouds clear?
Tuesday, 09 September 2008
To further simplify the example, let us assume that we want to use the certificate to encrypt a message from the client to the service. It is easy to apply what we discuss here to other scenarios.
As we discussed in the previous post, we need to generate two certificates: the root certificate that represents the Certificate Authority, and the certificate that represents the identity of the client or service. We will also create a Certificate Revocation List (CRL).
We will use a tool called makecert to generate our certificates. Makecert, which ships with the .NET platform, allows you to build an X509 certificate that can be used for development and testing. It does three things:
It generates a public and private key pair
It associates the key pair with a name
It binds the name to the public key in a certificate
Many of the published examples use makecert to both create and install the certificate. We will do the installation in a separate step because this approach is closer to the use of real certificates. Separating the steps also allows the certificates to be installed on many machines instead of just one. This makes distributing certificates to developer machines much easier.
First we will create the Root Certificate with the following command:
makecert -sv RootCATest.pvk -r -n "CN=RootCATest" RootCATest.cer
-n specifies the name for the root certificate authority. The convention is to prefix the name with "CN=", where CN stands for "Common Name".
-r indicates that the certificate will be a root certificate because it is self-signed.
-sv specifies the file that contains the private key. The private key will be used for signing certificates issued by this certificate authority. Makecert will ask you for a password to protect the private key in the file.
RootCATest.cer will have just the public key. It is in the Canonical Encoding Rules (CER) format. This is the file that will be installed on machines as the root of the trust chain.
Next we will create a certificate revocation list:
makecert -crl -n "CN=RootCATest" -r -sv RootCATest.pvk RootCATest.crl
-crl indicates we are creating a revocation list.
-n is the name of the root certificate authority.
-r indicates that this is the CRL for the root certificate; it is self-signed.
-sv indicates the file that contains the private key.
RootCATest.crl, the final argument, is the name of the CRL file.
At this point we could install the root certificate, but we will wait until we have finished with the certificate we will use in our scenario.
Here we need two files. We will need a CER file for the client machine so that we can install the public key associated with the service. Then we will create a PKCS12 format file that will be used to install the public and private key in the service.
The initial step is:
makecert -ic RootCATest.cer -iv RootCATest.pvk -n "CN=TempCert" -sv TempCert.pvk -pe -sky exchange TempCert.cer
-n specifies the name for the certificate.
-sv specifies the file for the certificate's private key. This must be unique for each certificate created. If you try to reuse a name, you will get an error message.
-iv specifies the name of the container file for the private key of the root certificate created in the first step.
-ic specifies the name of the root certificate file created in the first step.
-sky specifies what kind of key we are creating. Using the exchange option enables the certificate to be used for signing and encrypting the message.
-pe specifies that the private key is exportable and is included with the certificate. For message security this is required because you need the corresponding private key.
The name of the CER file for the certificate is specified at the end of the command.
Now we need to create the PKCS12 file. We will use the Software Publisher Certificate Test Tool to create a Software Publisher's Certificate. You then use this format to create the PKCS12 file with the pvkimprt tool.
We now have four files. The next step is to install these on the appropriate machines. I could not get certmgr to work properly to do an automated install. The Winhttpcertcfg tool works for PKCS12 format files, but not CER format files. We will use the MMC snap-in for this.
Run the mmc snap-in tool (type mmc in the Run menu). First we will open the Certificates snap-in: choose Add/Remove Snap-In. When you add the snap-in, choose the local computer account for the computer on which you want to install the certificate (usually the local one).
We want to install the root certificate on both the client and service machines in the Trusted Root Certificate Store. Select that store, right-click, and install both the RootCATest.cer and RootCATest.crl files. On the client side you want to install only the public key in the TempCert.cer file. On the service side only, you want to install the PKCS12 format file (TempCert.pvk), which has the private key for the certificate. Install that in the Personal store. For private key installation you will have to provide the password for the PKCS12 file.
On the service side, we need to give the identity of the running process (NETWORK SERVICE) the rights to read the private key. We use two tools, FindPrivateKey and cacls, to do this. Run the following command:
for /F "delims=" %%i in ('FindPrivateKey.exe My LocalMachine -n "CN=TempITNCert" -a') do (cacls.exe "%%i" /E /G "NT AUTHORITY\NETWORK SERVICE":R)
Remember to delete these certificates when you are finished with them.
Sunday, 24 August 2008
Working with X509 certificates can be very frustrating for WCF developers.
This is the first of two posts. In this post I will explain just enough of the background of X509 certificates so that I can explain in the next post how to create and use certificates during .NET development with WCF. The second post is here.
I do not know any good books for a developer that explain how to use certificates. Even the excellent books on WCF just give you the certificates you need to get the sample code to work. They do not really explain why you are installing different certificates into different stores, or how to generate the certificates you need to get your software to work. Very often the examples run on one machine with the client and service sharing the same store. This is not a realistic scenario.
Obviously I cannot explain all about certificates in one blog post. I just wish to share some knowledge. Hopefully it will spare you some grief.
Here is the problem I want to solve.
Suppose you have a set of web services that is accessed by either an ASP.NET or rich client. The service requires the client application to use an X509 certificate to access the service. This could be to encrypt the data, to identify the client, to sign the data to avoid repudiation, or for a number of other reasons. How do you install the certificates on the client and service machines?
X509 technology is based on asymmetric encryption: each identity has a public key, which can be freely distributed, and a private key, which is kept secret.
In the encryption scenario, the client would use the public key of the service to encrypt the traffic. The service would use its private key to decrypt the message. In the identification scenario, the service would use the public key of the client to verify a message signed with the client's private key.
One of the key issues is how you can be sure that the public key is associated with a given identity. Perhaps somebody substituted their key for the one you should be using. Perhaps somebody is hijacking calls to the service, or you made a mistake in the address of the service. A classic example of these types of vulnerabilities is the "man in the middle attack". Another key issue is that the private key cannot be read or modified by unauthorized parties.
Public Key Infrastructure (PKI) is the name for a technology that uses a certificate authority (CA) to bind the public key to an identity. This identity is unique to the certificate authority. X509 is a standard for implementing a PKI. An X509 certificate represents an association between an identity and a public key.
An X509 certificate is issued by a given Certificate Authority to represent its guarantee that a public key is associated with a particular identity. How much trust you have in the certificate depends on how much you trust the CA, and on the amount of identity verification the CA did. For example, VeriSign issues different types of certificates depending on how much verification was done. Sometimes organizations will be their own certificate authorities and issue certificates because they want the maximum amount of control.
The relationship between a CA and its issued certificates is represented in the "chain of trust". Each X509 certificate is signed with the private key of the CA. In order to verify the chain of trust you need the CA's public key. If you are your own CA authority you can distribute the X509 certificate representing this "root certificate". Some browsers and operating systems install root certificates as part of their setup, so the manufacturer of the browser or operating system is part of the chain of trust.
The X509 standard also includes the certificate revocation list (CRL), a mechanism for checking whether a certificate has been revoked by the CA. The standard does not specify how often this checking is done; by default, Internet Explorer and Firefox do not check for certificate revocation at all. Certificates also contain an expiration date.
Another approach to trust is called "peer to peer" trust, or the "web of trust". Given its difficulties, peer trust is not practical for most Internet applications. It can, however, make development scenarios simpler. Since your development environment should mimic your deployment environment, I do not recommend using peer to peer trust unless it is practical for your deployed solution.
There are various formats for transmitting certificates; we will be interested in two of them. The Canonical Encoding Rules (CER) format will be used to transmit the public key of a given identity. The PKCS12 format will be used to transmit both the public and private keys, with the private key password protected.
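The distinction between the two formats shows up directly in the .NET certificate API. In this sketch the file names and password are invented placeholders; a `.cer` file carries only the certificate and public key, while a password-protected PKCS12 (`.pfx`) file also carries the private key:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;

class CertLoadDemo
{
    static void Main()
    {
        // A .cer file: certificate plus public key only.
        X509Certificate2 publicCert = new X509Certificate2(@"service.cer");
        Console.WriteLine(publicCert.Subject);
        Console.WriteLine(publicCert.HasPrivateKey); // False

        // A PKCS12 (.pfx) file: certificate plus password-protected private key.
        X509Certificate2 fullCert = new X509Certificate2(@"client.pfx", "pfx-password");
        Console.WriteLine(fullCert.HasPrivateKey);   // True
    }
}
```

This is why a service can freely hand out its `.cer` file to clients, while the `.pfx` file must be guarded as carefully as the private key itself.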
The next post will describe the mechanisms for creating and installing certificates in a .NET environment.
Sunday, 01 June 2008
On Friday, June 6 of Microsoft's Tech-Ed I will be hosting a Birds of a Feather Session on the topic "Software + Services is For Small Companies Too". It will be held in Room S330 E at noon.
To continue the conversation, please add your comments and opinions to this blog post. If you are unable to attend, feel free to add your thoughts here as well.
Here are some questions to get you started thinking about the topic:
What is Software + Services?
Are small companies afraid of software + services? Are they afraid of cloud computing? Why?
Doesn't cloud computing leverage the efforts of small companies? If cloud computing makes IT a commodity, doesn't this allow small companies to be even more nimble in their development efforts?
What are the real advantages that large companies have over small companies? What about the innovator's dilemma? How do large companies keep their current customers happy while assuring future growth through innovation? Doesn't this help small companies? Doesn't cloud computing help small companies innovate even more?
Thursday, 03 April 2008
I have put my VSLive! talk, explaining how to use Windows Communication Foundation and Windows Workflow Foundation together to create distributed applications, in the Presentations section of my web site.
Friday, 28 March 2008
Quick answer: when I don't know about it, and when two experienced co-workers don't know about it either.
I was working on a workflow code sample for an upcoming talk when I started getting ridiculous compilation errors.
The compiler could not find the rules definition file when it was clearly available. The workflow designer could find it because I could associate it with a policy activity. The compiler falsely complained about an incorrect type association in a data bind, but it was clearly correct. Once again the designer had no problem doing the data bind.
I tried to find an answer on Google with little success. After two hours of experimenting, I tried a different Google query and came up with the following link: https://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=612335&SiteID=1.
The essence of the solution is the following:
"this is a well-known problem with code files that have designable classes in them - the class that is to be designed has to be the first class in the file. If you do the same thing in Windows Forms you get the following error: the class Form1 can be designed, but is not the first class in the file. Visual Studio requires that designers use the first class in the file. Move the class code so that it is the first class in the file and try loading the designer again."
It turns out I had changed a struct that was defined first in my file to a class. I moved that class to the end of the file and "mirabile dictu" everything worked.
So if this is a well-known problem, why can't we get an error message just like in the Windows Forms case?
While it was clearly my mistake, Microsoft has a share of the blame here. This requirement no doubt makes it easier to build the workflow designer, but it would have been just as easy to check whether the designable class was defined first and issue an error message when it was not.
Tuesday, 04 March 2008
I am going to be giving two talks and a workshop at VS Live! in San Francisco.
The first talk is an "Introduction to Windows Workflow Foundation"
where I explain both the business reasons why Microsoft developed Workflow Foundation as well as the technical fundamentals. This talk will help you understand not only how to build workflows, but when it makes sense to do so and when to use some other technology.
The second is "Workflow Services Using WCF and WWF". WCF allows you to encapsulate business functionality into a service. Windows Workflow Foundation allows you to integrate these services into long-running business processes. The latest version of the .NET Framework (3.5) makes it much easier to use these technologies together to build some very powerful business applications.
On Thursday I will give a whole-day tutorial on Workflow Foundation, where we will dive into the details of how to use this technology to build business applications.
Other speakers will talk about VSTS, ALM, Silverlight, AJAX, .NET Framework 3.0 and 3.5, Sharepoint 2007, Windows WF, Visual Studio 2008, SQL Server 2008, and much more.
If you have not already registered for VSLive San Francisco, you can receive a $695 discount on the Gold Passport if you register using priority code SPSTI. More at www.vslive.com/sf
Tuesday, 12 February 2008
One of the great features in Visual Studio is the ability to start up more than one project at the same time. You do not need to create two solutions, for example, for a client and a server in order to debug them both.
I thought everybody knew how to do this, but when I found out that two members of a project team I am working with did not, I decided to blog how to do this.
Select the solution in the Solution Explorer, right mouse click and you will see the following menu:
Select the Set Startup Projects menu item, and a property page will appear that lists all the projects in the solution. For example:
You can associate an action with each of the projects: None, Start, or Start without debugging.
When you start execution, the projects you selected will begin executing. If you enabled debugging and set breakpoints, the debugger will stop at the appropriate places.
Thursday, 17 January 2008
Thursday, 22 November 2007
The Windows Workflow
Foundation (WF) ships with a Policy Activity that allows you to execute a set
of rules against your workflow. This activity contains a design time rules
editor that allows you to create a set of rules. At run time, the Policy
Activity runs these rules using the WF Rules engine.
Among other features, the rules engine allows you to prioritize rules and to set a chaining policy to govern rules evaluation. The rules engine uses a set of Code DOM expressions to represent the rules. These rules can be run against any managed object, not just a workflow. Hence, the mechanisms of the rules engine have nothing to do with workflow. You can actually instantiate and use this rules engine without having to embed it inside of a workflow. You can use this rules engine to build rules-driven .NET applications.
I gave a talk at
the last Las Vegas VSLive! that demonstrates how to do this. The first sample
in the talk uses a workflow to demonstrate the power of the rules engine. The
second and third samples use a very simple example to demonstrate how to use
the engine outside of a workflow.
Two problems have to be solved: you have to create a set of Code DOM expressions for the rules, and you have to host the engine, supplying it the rules and the object to run the rules against.
While the details
are in the slides and the examples, here is the gist of the solution.
To use the rules engine at runtime, you pull the workflow rules out of some storage mechanism; the first sample uses a file. A WorkflowMarkupSerializer instance deserializes the stored rules to an instance of the RuleSet class. A RuleValidation instance validates the rules against the type of the business object you will run the rules against. The RuleSet's Execute method, passed a RuleExecution instance, invokes the rules engine and runs the rules.
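Those steps reduce to a few lines of hosting code. This is a sketch under stated assumptions: the `Order` business object, its fields, and the `OrderRules.rules` file name are invented for illustration; the WF types (`WorkflowMarkupSerializer`, `RuleSet`, `RuleValidation`, `RuleExecution`) are from the .NET Framework 3.0 `System.Workflow.*` assemblies:

```csharp
using System;
using System.Workflow.Activities.Rules;
using System.Workflow.ComponentModel.Serialization;
using System.Xml;

// Hypothetical business object the rules run against.
public class Order
{
    public decimal Total;
    public decimal Discount;
}

public static class RuleRunner
{
    public static void Run(Order order)
    {
        // Deserialize the stored rules into a RuleSet.
        WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();
        RuleSet ruleSet;
        using (XmlTextReader reader = new XmlTextReader("OrderRules.rules"))
        {
            ruleSet = (RuleSet)serializer.Deserialize(reader);
        }

        // Validate the rules against the business object's type.
        RuleValidation validation = new RuleValidation(typeof(Order), null);
        if (!ruleSet.Validate(validation))
            throw new InvalidOperationException("RuleSet failed validation");

        // Execute the rules against the object instance; the rules may
        // read and modify the object's fields as a side effect.
        RuleExecution execution = new RuleExecution(validation, order);
        ruleSet.Execute(execution);
    }
}
```

Note that no workflow runtime is involved anywhere in this code: the engine is hosted directly in the application.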
How do you create
the rules? Ideally you would use some domain language, or domain based
application, that would generate the rules as Code DOM expressions. If you were
masochistic enough, you could create those expressions by hand.
As an alternative,
the second sample hosts the Workflow rules editor dialog (RuleSetDialog class)
to let you create the rules. Unfortunately, like the workflow
designer, this is a programmer's tool, not a business analyst's tool. A WorkflowMarkupSerializer
instance is used to serialize the rules to the appropriate storage.
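A sketch of hosting the editor and saving the result, under the same caveats as above (the `Order` type and the output file name are invented; `RuleSetDialog` lives in `System.Workflow.Activities.Rules.Design`):

```csharp
using System.Windows.Forms;
using System.Workflow.Activities.Rules;
using System.Workflow.Activities.Rules.Design;
using System.Workflow.ComponentModel.Serialization;
using System.Xml;

// Hypothetical business object the edited rules will target.
public class Order
{
    public decimal Total;
    public decimal Discount;
}

public static class RuleEditorHost
{
    public static void EditAndSave()
    {
        // Show the WF rule editor for a new, empty rule set
        // whose conditions and actions are typed against Order.
        RuleSetDialog dialog = new RuleSetDialog(typeof(Order), null, new RuleSet("OrderRules"));
        if (dialog.ShowDialog() != DialogResult.OK)
            return;

        // Serialize the edited rules back to storage.
        WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();
        using (XmlTextWriter writer = new XmlTextWriter("OrderRules.rules", null))
        {
            serializer.Serialize(writer, dialog.RuleSet);
        }
    }
}
```

The file written here is exactly the kind of storage the runtime hosting code reads back in, which closes the loop between authoring and execution.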
I would be interested in hearing about how people use this engine to build rules-driven applications.
Monday, 20 August 2007
My series of four digital articles has been published by Addison-Wesley. You can get the links to purchase them, and the associated source code, from my web site.
I have tried to explain, in practical terms, what you need to know to actually build real world software using Windows Workflow. There is a tiny amount of theory to explain the underpinnings. The vast majority of the explanation uses code examples to illustrate all the key points. The last shortcut in the series has two extended examples that illustrate how to build custom activities.
Sunday, 29 October 2006
Here are good instructions on how to install RC1 for the .NET Framework 3.0: http://blogs.msdn.com/pandrew/archive/2006/09/07/745701.aspx. People, including myself, have been having problems getting the Workflow Extensions for Visual Studio 2005 installed. I moved the installer file (Visual Studio 2005 Extensions for Windows Workflow Foundation RC5(EN).exe) to a different directory from the other installation files. The workflow extensions then installed just fine.
Tuesday, 08 March 2005
Microsoft's Indigo platform will unify all the divergent transport technologies (ASMX, WSE, COM+, MSMQ, Remoting) that are in use today. For building a service on the .NET platform this is the technology you will use.
What technology should you use today?
The ASMX platform's programming model is the same as Indigo's. Attributes indicating what technologies (security, reliability, etc.) you want the infrastructure to use are applied to methods. Hence, a converter will be provided to convert ASMX code to Indigo code.
Does this mean ASMX should be the technology of choice? I would argue that WSE is the better technology to use. WSE's programming model is not that of Indigo. Classes and inheritance are used to interact with the WSE infrastructure. WSE will interoperate with Indigo. Nonetheless, the conceptual model of WSE is identical to that of Indigo.
ASMX is tied to the HTTP transport and its request / response protocol. It encourages programmers to think of a service call as a remote procedure call with programming types, not as an interoperable, versioned XML document message validated by XML Schema.
Service developers need to think of request / response as one of several possible message exchange patterns (MEP). The most fundamental MEP, the one all MEPs are built from, as the WS-Addressing spec makes clear, is the one-way asynchronous message. Business services tend to be asynchronous; you apply for a loan and you do not hear back for days.
Service messages can go through intermediaries before reaching the ultimate recipient. Each message segment may go over transports other than HTTP.
WSE's transport classes allow you to build services that use different MEPs over various transports. The SOAP envelope classes make it easy to build the SOAP message body as XML, or serialized XML objects. You learn to think in terms of XML documents and messages, not execution environment dependent types.
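As an illustration of that messaging style, here is a sketch of a one-way send using the WSE 2.0 messaging classes. This is an assumption-laden example recalled from that library, not authoritative WSE documentation: the endpoint URI, action URI, and message schema are all invented, and the exact API shape should be checked against the WSE 2.0 reference:

```csharp
using System;
using Microsoft.Web.Services2;            // assumed: SoapEnvelope
using Microsoft.Web.Services2.Addressing; // assumed: Action
using Microsoft.Web.Services2.Messaging;  // assumed: SoapSender

class OneWaySenderDemo
{
    static void Main()
    {
        // Build the SOAP body as plain XML -- the message, not a CLR type,
        // is the contract between sender and receiver.
        SoapEnvelope envelope = new SoapEnvelope();
        envelope.CreateBody().InnerXml =
            "<LoanApplication xmlns='urn:example:loans'>" +
            "<Amount>250000</Amount>" +
            "</LoanApplication>";
        envelope.Context.Addressing.Action = new Action("urn:example:loans:apply");

        // Fire-and-forget over the TCP transport: no response is awaited,
        // matching the one-way MEP described above.
        SoapSender sender = new SoapSender(new Uri("soap.tcp://localhost:8080/LoanService"));
        sender.Send(envelope);
    }
}
```

The point of the sketch is the shape of the code: you construct an XML document, address it, and send it one way over a non-HTTP transport.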
Using this conceptual model, your services will last longer and be easier to evolve in a business environment. That will be of more use to your business than a technology with a better upgrade path whose services have to be rewritten sooner because they were poorly designed and implemented.
Sunday, 29 February 2004
When the speakers on the .NET track of the Syscon Edge 2004 conference got together, Carl Franklin and I were talking about why people think that C# is the "official language" for .NET. I told him that even though most of my consulting is in C#, I think that attitude is wrong. I believe it is important to elaborate why I feel this way.
People who feel that VB.NET is an inferior language to C#, or that somehow C# is a "better language", or the "official language" for accessing the .NET Framework Class Library are just plain wrong. My personal opinion is that I prefer C# to VB.NET because I like the compact syntax among other things, but that is a personal judgement.
People who talk that way about VB.NET are confusing three issues.
First issue: suitability to access the Framework Class Library (FCL). Every example in my book "Application Development Using C# and .NET" has been translated into VB.NET and works exactly the same way. I have used the same courseware for both C# and VB.NET training, with the only difference being that the examples were in different languages. From the point of view of the FCL, everything C# can do, VB.NET can do as well.
Second issue: suitability to a given task. Equality before the FCL, or the Common Language Runtime, is not everything. Perl.NET can do things that C# cannot. Does that make Perl.NET a better language than C#? No, it just makes it a better choice in some cases. If you need to use unsafe mode, you need C#. You cannot overload operators in VB.NET. You might find VB.NET's late binding more convenient than using the reflection API in C#. You might like background compilation in VB.NET. It is possible that for certain features the IL that C# generates is more efficient than the IL that VB.NET generates. I do not know if this is true, but even if it is, it probably does not matter for most applications. After all, in some performance situations managed C++ is better than C#. For those interested in the differences between the languages, look at O'Reilly's C# and VB.NET Conversion pocket reference.
Finally: de gustibus non disputandum est; there are matters of personal preference. I like C#'s compactness. I think it has certain advantages, but that is a matter of taste. Taste is important even in technical matters, but do not confuse taste with other factors, or mistake taste for intuition.
I wish VB.NET programmers a long and productive life. VB.NET programmers should not feel inferior.