Wednesday, December 14, 2016

Generating a self-signed certificate for multiple wildcard domains using Subject Alternative Name

Recently I needed to enable HTTPS on several IIS sites on several machines. Some of the sites share part of the domain name and all of them can be accessed by more than one domain:
  • Site A:
  • Site A:
  • Site B:
  • Site B:
  • Site C:
  • Site C:
  • Site D:
  • Site D:

and so on...

Please bear in mind that I wanted to use the default HTTPS port (443) for all of them - otherwise I would need to maintain a map of ports, and taking into account the number of environments involved (more than 100 combinations) it would be impossible (or at least hard) to remember them. IIS has a limitation: over HTTPS it allows only one site to be bound to a given IP and port combination. It is a classic chicken-and-egg problem - IIS must know upfront which certificate to use during the SSL handshake, but at that point the HTTP request with the Host header (which is needed to determine the domain name and hence the site) hasn't been sent yet. So I needed a single certificate for all of the sites and brands.
One cert to rule them all...
To do that we need a certificate with the SAN (Subject Alternative Name) extension. Inside it we can list all of the domain combinations. To make things simpler we can even use wildcard domains - handy in case a new brand is added in the future.

I checked the internet, but most of the entries about SAN certificate generation were about OpenSSL, which doesn't work very well on Windows (there is a port, but a poor one in my opinion). Then I found out that the old makecert tool has been superseded on Windows by a totally new and better tool: PSPKI - the Public Key Infrastructure PowerShell module. That was exactly what I needed and it saved me a lot of time.

To use the new tool just follow these instructions:
  1. Download PSPKI module from here:
  2. It requires installing Remote Server Administration Tools for Windows.
    If you have Windows 7 use this link:
    If you have Windows 10 (as I do) use this one:
    You can easily find the links for other operating systems.
  3. Open PowerShell console with administrative privileges.
  4. Run the script below (line breaks added for clarity):
    Import-Module PSPKI
    New-SelfSignedCertificateEx `
        -Subject "CN=*" `
        -KeyUsage "KeyEncipherment, DigitalSignature" `
        -SAN "*","*","*","*" `
        -ProviderName "Microsoft Software Key Storage Provider" `
        -AlgorithmName ecdsa_p256 `
        -KeyLength 256 `
        -SignatureAlgorithm sha256 `
        -Path c:\SSLCertificate.pfx `
        -Password (ConvertTo-SecureString "password" -AsPlainText -Force)
Please note that you need to list all of the domains in the SAN parameter, as the Subject value is ignored when a SAN is present in the certificate. I prefer to repeat the first domain's name in the Subject anyway.

Such a certificate can be used in the HTTPS bindings of all the sites. To avoid browser security errors (except in Firefox, which has its own certificate store) you need to install the certificate into the Personal and the Trusted Root Certification Authorities stores on the Local Machine.
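For that installation step, here is a sketch using the built-in PKI cmdlets (available since Windows 8 / Server 2012); the path and password below assume the values from the generation script - adjust them to your own:

```powershell
$pwd = ConvertTo-SecureString "password" -AsPlainText -Force
# Personal store - used by the IIS HTTPS bindings
Import-PfxCertificate -FilePath C:\SSLCertificate.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pwd
# Trusted Root store - makes the self-signed certificate trusted on this machine
Import-PfxCertificate -FilePath C:\SSLCertificate.pfx -CertStoreLocation Cert:\LocalMachine\Root -Password $pwd
```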

Tuesday, October 08, 2013

Serializing list of enums in .NET

Recently I've been coding some WCF methods and I got a strange exception during WCF message serialization. I needed to send a list of enums to the WCF service. There are a couple of ways to do that.
The first solution is using the [Flags] attribute to combine several enum values into one variable - but then you need to use powers of 2 for the enum values. I couldn't do that because of the requirements and the large number of those enums already stored in the database. It would require writing and applying a lot of scripts just to correct the old enum values in the db.
Another way is passing a list of enums, so I decided to use that approach. But strangely I encountered a little problem with it. Below is a sample from the WCF data contract:

 [DataContract]
 public class SomeClass
 {
     [DataMember]
     public List<SomeEnum> SomeEnums { get; set; }
 }

Looks pretty simple, but if we try to pass an empty list of those values we'll encounter the exception below:
Enum value '0' is invalid for type 'SomeEnum' and cannot be serialized. Ensure that the necessary enum values are present and are marked with EnumMemberAttribute attribute if the type has DataContractAttribute attribute.
The code is valid. The problem is that during serialization the .NET Framework tries to serialize the empty list, whose capacity is set to something bigger than 0. So the solution is simple - we just need to trim the capacity down to the actual element count (0 for an empty list) using the Count property.

 someObject.SomeEnums.Capacity = someObject.SomeEnums.Count;  

It's a shame that it doesn't work like that out of the box. In my opinion the .NET Framework should check this before serialization and not throw any exception. The code isn't doing anything invalid, so it shouldn't throw.

Sunday, October 06, 2013

Gaining MCSD - about Microsoft exams

Recently I passed the last exam required to obtain the Microsoft Certified Solutions Developer title. Now I'm fully certified in web development on the .NET Framework. That wasn't my first encounter with Microsoft exams, and because of that I want to leave some comments about them.

First of all I must say that preparation for some of them took me more time than I expected. That's because the exams test knowledge in a very detailed way - sometimes, in my opinion, in too detailed a way. Of course they should be difficult enough that holding the title means something, but in some areas we are asked which attributes in a WCF configuration file are valid(!). That's not fair in my opinion - I don't think anybody edits those files without referencing the MSDN documentation or some kind of editor.
Below are the exams I passed, in chronological order.

70-513 TS: Windows Communication Foundation Development with Microsoft .NET Framework 4
It was mostly about configuring WCF services. I have a couple of caveats - as I said earlier, I don't think that knowing all the configuration options by heart is a good thing; we need to be realistic about it.

70-480 Programming in HTML5 with JavaScript and CSS3
This exam was offered for free by Microsoft, so I decided to give it a try. Before preparing for it I thought it might contain some questions about Metro-specific styles and extensions, but it was purely about HTML5 and CSS3 without mentioning Metro UI at all.

70-516 TS: Accessing Data with Microsoft .NET Framework 4
This exam checked general knowledge about Entity Framework, LINQ to SQL and similar topics. I think it is a very important one, because almost every application being developed involves working with data in some way.

70-515 TS: Web Applications Development with Microsoft .NET Framework 4
Because I've been working a lot with web technologies, mostly ASP.NET MVC, I thought it would be the easiest one for me to pass... and I was right - on this particular one I managed to gain the maximum score of 1000 points. What didn't I like about this exam? The fact that it tests knowledge of ASP.NET MVC 2 when the current version is MVC 4 - the framework has changed a lot in that time. Fortunately there weren't too many questions about MVC at all.

70-519 PRO: Designing and Developing Web Applications Using Microsoft .NET Framework 4
This exam was the last one and in some way it tries to sum up the knowledge tested in the three earlier exams. Because of that it was also a rather easy one. After passing it I obtained the MCPD title.

70-486 Developing ASP.NET MVC4 Web Applications
That one is the next version of 70-515 - I was complaining that 70-515 covered MVC 2, so they've prepared a new version with MVC 4 on board (which means MVC 3 isn't included in any exam at all) - and there were even questions about mobile development. This exam is definitely the most appropriate one on the web developer path. It really tests knowledge that is crucial in real-life scenarios. A very important one.

70-487 Developing Windows Azure and Web Services
That one is, in my opinion, the new version of 70-513, because it mostly examines WCF knowledge. It's a pity that there were only a couple of (maybe 3 or 4) questions about Windows Azure. They should have removed Windows Azure from the title, because the exam only tests general knowledge of it. Most questions were about WCF and LINQ.

To sum up, the exams are really different - some of them test really important knowledge, which is good, and some of them check whether you can remember an XML configuration schema properly, which is bad. The good thing is that preparing for the exams standardizes your knowledge and makes you more confident in it. Even though I already had some knowledge of the particular topics, after the preparation my self-confidence had risen.

Another interesting thing is that some people (especially me) need goals to achieve - some kind of path to follow. It is really satisfying when you finish something that you have been working towards for a long time.

Sunday, February 24, 2013


As I mentioned some time ago, Microsoft is offering a free 3-month trial account for the BrowserStack service. Because I'm currently working on an application that should support a variety of browsers, including old IE versions (fortunately only starting from IE7 - not IE6), I decided to give it a try. I must say that I'm really impressed by it. But first let me explain some more details.

What is it

It is a service that gives us the possibility to use almost every browser running on an operating system of our choice (though Linux is missing - room for improvement in the future). We get access to a remote browser running somewhere in the cloud to check how our application works in it. No more virtual machines, no more installing anything - BrowserStack runs in the browser, which simplifies a lot of things.

How it works - live sessions

First of all we choose which browser and which operating system we want to test our application on. Then an instance of this browser is prepared very quickly - the average preparation time was about 1 minute. That is acceptable, because preparing your own virtual machine and starting it would definitely take much more time. After that minute we can control the remote machine through an Adobe Flash application. It feels like a remote desktop limited to the browser window. Of course we can use developer tools in almost every browser - a really good idea, because without them the service would be useless.


Most applications that need to be tested run locally on the developer's machine. For BrowserStack that isn't a problem - there is a tunneling feature that makes our local sites available to the remote browsers. The tunnel is set up by a simple Java application.

Automated testing

The most interesting thing is that the service comes with an API for creating browser instances programmatically (from any language that can send an HTTP request). We can specify the URL that should be opened when the browser starts. This is very useful because we can leverage it for automated JavaScript tests of our application - for example, we can write and run Jasmine unit tests in the cloud. To collect test results from the remote browser we need something that can send data from the JavaScript code in the remote browser back to our computer - and that is where node.js comes in handy. There are several test runners on the internet (Testem, Yeti, etc.) that are built on top of node.js and send test results back to our computer through sockets. Of course they travel through the tunnel that was prepared earlier.

Thursday, February 21, 2013

Disable attach security warning

Visual Studio by default shows a message when you try to attach to a process, saying:
Attaching to this process can potentially harm your computer.  If the information below looks suspicious or you are unsure, do not attach to this process.
Because I attach many times a day, it became really annoying. A couple of minutes of googling and here is the solution for disabling it:
  1. First you need to make sure that Visual Studio isn't running.
  2. Next you need to modify a registry key, depending on which version of Visual Studio you have.

    For Visual Studio 2008:

    For Visual Studio 2010:

    For Visual Studio 2012:

    In all cases simply set DisableAttachSecurityWarning to 1.
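As an illustration of what the change looks like - the exact hive paths come from the links above, but for Visual Studio 2010 the value is commonly reported under the key shown below (2008 and 2012 use the same layout under 9.0 and 11.0 respectively); treat the path as an assumption and verify it against the linked articles before applying:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\Debugger]
"DisableAttachSecurityWarning"=dword:00000001
```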
That's all.

Wednesday, February 20, 2013


Recently Microsoft did something really interesting. Everybody knows that Internet Explorer (especially the older versions) shouldn't even be called a browser. But Microsoft Windows is still the most popular operating system used commercially, and so its browser is too. Every web developer knows that making something work in IE is a really challenging task. Microsoft has learned a lesson over the last couple of years and wants to repair its image in that field. Internet Explorer 9 and 10 aren't the best browsers, but in my opinion they are now acceptable. They can't repair the old versions though, and years will pass before those disappear completely.

So they have prepared a website: which tries to make developing web applications that support Internet Explorer easier.

First of all they've prepared a tool that can scan our website and produce a report to help us improve the general look and feel of our site. In some situations it can be helpful, but it's always better to see how our site looks in the real browser.

The second great feature is a set of virtual machines with all Internet Explorer versions starting from IE7. Apart from that there is a nice bonus - a 3-month free trial account for BrowserStack, a service we can use to test how our application looks in a variety of browsers and operating systems, including mobile ones. This service is really amazing.

Microsoft has also prepared a developer's guide for applications that must work in older versions of their browser.

All of that looks really good, and I think Microsoft has realized how difficult it is to develop web sites for IE. When developing on a computer with Internet Explorer 9 installed, I've been using the IE7 and IE8 modes to test whether everything is OK with the application, but to be honest I can remember several situations in which those modes didn't really reflect the true behavior of the old browsers. So it's a real step forward from Microsoft in the developers' direction. Hopefully the next versions of Internet Explorer won't be remembered for the problems around them.

Saturday, November 03, 2012

Visual Studio 2010 best extensions

Developers are mostly lazy. Nobody wants to do the same things every day - repeating the same activities can be very boring and unproductive. Of course a modern IDE should have multiple functionalities that improve the speed of coding and development, but it's really hard to satisfy all users at once.

Below I present my favourite extensions for Visual Studio 2010. Working without them would be much slower and more frustrating. This set isn't focused on any particular Visual Studio project type, but I've been using them mainly in web development - ASP.NET MVC and WCF projects.


Visual Studio's default formatting engine isn't the best solution out there. It is very inefficient when formatting mixed HTML and C# (Razor). This extension tries to tackle that problem. In my opinion it's definitely better than the default engine, although in the field of formatting Razor views there are still some things to fix. But when it comes to formatting pure C#, I don't think we can find a better solution.

Collapse Solution

A very small extension - it adds one option to the solution context menu with which we can collapse all projects in the solution. It can save us time when we have a solution with a big number of projects inside.

Go To Definition

I worked for a long time with Java and PHP, and for those technologies I mainly used the NetBeans IDE. I don't want to start a war over whether Eclipse or NetBeans is better, but I got used to one feature when exploring code in NetBeans: CTRL + click on an identifier opens the definition of a class, interface, etc. I don't know why this feature was mapped to F12 in Visual Studio - F12 is far, far away from the normal hand position; CTRL is a better choice for this. This extension simply adds the CTRL + click functionality to Visual Studio.

Indent Guides

When we look at code that is indented several levels deep, it's hard to tell which closing bracket matches which opening bracket. This is another simple extension that fixes the problem by drawing lines from the opening to the closing bracket. No more guessing!


There is a best practice saying that code should explain itself without the need for comments, and that when you think something should be commented it has probably been badly written. Whatever the case may be, everybody works with comments from time to time. For a quick distinction between code and comments, this extension makes comments italic. I know this is a little cosmetic change, but I prefer them to look like that.

Spell Checker

Staying on the topic of comments: they should be written in a spoken language, so I think they should have a spell checker bound to them. This extension does exactly that. I may be a pedantic person, but I really don't like typos in ANY text.

VS10x Code Map v2

Making our way through big files (I know that classes with a large number of methods have bad metrics, etc.) can be frustrating. By default we have a selector with a list of methods, fields and properties, but to select something we have to scroll through it, which isn't effective when the list is long. A better solution is to have this list open all the time, so we simply click an element on it and don't need to scroll at all. This extension adds a code map window which simplifies exploring code. It has a couple of useful configuration options which help organize the displayed code map.

VS10x Method Block Highlighter

A very simple extension which allows us to colorize method blocks - it can be helpful when editing code in several places in one file at once.

VSCommands for Visual Studio 2010

The last but definitely not least extension on the list - frankly speaking, my favourite one. I really like the "Locate in Solution" feature, which is very helpful when we want to locate the currently edited file in Solution Explorer. I don't like the built-in Visual Studio version of this feature, because we can only turn it on or off - there is no on-demand locating, and it's really annoying when Solution Explorer jumps every time we change files. Apart from this we can edit project and solution files directly, which can spare us another editing tool.

Of course everybody has his or her own best practices and favourite extensions, but I feel comfortable working with the set mentioned above.

Friday, November 02, 2012

Interviews as a way to improve your skills

Living in the IT world isn't easy. Staying in touch with the most recent technologies and any updates from the IT world requires a lot of time and energy. How can we quickly gain knowledge about how to improve our skills and be more attractive? The solution is very easy: go to as many interviews as you can - at as many companies as you can (in the industry of our interest, of course). There are a lot of free benefits from this:
  • we can quickly learn what's new - which technologies are currently most used and which knowledge is key
  • we get used to stressful conversations about our career and personality - so there are no surprises later
  • we can take free tests on different technologies
Having these three points we can quickly identify our weak spots and the areas in which we need to improve. Of course this technique is rather long-term (a year or more), but it can be very beneficial.

Usually we can easily get through the first step of recruitment in any company, because it's typically done by someone who can barely judge our true skills (no offence). This first contact person usually knows nothing about technology and IT. It's easy to pass this step because it's mainly handled by recruitment and human resources companies.

The second (optional) step is harder - we speak (usually over the phone) with someone from our potential employer. It depends on the company, but usually this person knows much more about the candidate requirements and the technology. Still, it's relatively easy to pass this step too. Of course we must have some knowledge of the given topic, but most of the questions are about what we've been doing at our previous company.

The last step is the most difficult one - an interview with a technical person, the lead programmer or somebody similar: the master technical guru of the company or team. Here we get to the essentials. We are asked to solve difficult problems, so we can identify where our weak points are. Typically we take some tests, so we can check what we need to learn and what we're really good at. And even when we've failed at some tasks, we can ask how they should have been done and what the answers are.

Tests and questions are usually similar between companies, so it's easy to learn just by doing them. I once had four interviews (of the third-step kind) scheduled in a row on one day. The first one was a disaster - I knew I really didn't fit the position, but I wanted to try for fun. I learned a good lesson from it, asked a couple of questions and left unemployed but smarter. The next two interviews didn't go as badly as the first, but I still wasn't satisfied with the results. By the time of the last one I'd been through three exhausting recruitment sessions in which I'd learned a big set of new things and solutions. It paid off, because in that last interview I was asked questions similar to those from the three earlier (failed) conversations - it was just a piece of cake to answer all of them.

One day of intense interviews can give you a lot of experience. It's really worth sending your CV to a big number of companies from time to time. Even regularly searching through job offers can give you some knowledge about which capabilities are most wanted on the job market.

Wednesday, October 24, 2012

Cloud Computing definition

This time I'm going to talk once more about the Cloud and scalability. There are a couple of misunderstandings on this topic, so I'll try to explain it.

Developing scalable applications is based on several techniques, but all of these techniques are connected in some way - they all use the compute power of distributed computers. One of the most popular solutions is Cloud Computing, which is a step forward from Fabric, Grid and Cluster Computing: those also use distributed machines, but they have no scalability capabilities built in.

This new technique didn't grow up in one day - it's all about evolution. It all started with SOA (Service Oriented Architecture), a model of organizing software into loosely coupled, separate components which talk to each other through contracts. Many companies started to rebuild their solutions to work that way, because it's easier to build and maintain systems composed of smaller blocks. Another step of the evolution was fast clusters offering big amounts of compute resources. But it quickly turned out that these clusters generate really big costs, so the companies that owned them tried to find a solution. That solution is today called Cloud Computing - renting compute and storage resources to other companies or even individuals.

Every successful application will sooner or later have to face "Internet scale". It is very important to prepare services for this effect - applications must be scalable enough to serve millions of users at the same time. Developing this kind of application isn't trivial, and there are many things that architects must take into account when planning the architecture of the whole system.

There are several models of cloud services and many vendors, so it's hard to decide which company or which model would best fit the current requirements. The Cloud isn't a solution for everything, but there are some types of applications which would clearly benefit from cloud solutions:
  • applications with big nonstructural data
  • streaming media content
  • SOA components
Autoscaling these types of applications can be very easy in the cloud. But first of all we need to define what the real Cloud is. There are many definitions, but the most popular is the one from NIST. Here are the key characteristics of Cloud Computing (cited from the enclosed link):
  • On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
  • Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
  • Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
  • Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
  • Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
So only when our solution has all of the elements mentioned above can we call it a Cloud.

Tuesday, October 23, 2012


There are a lot of materials about agile software development - tons of books about how quick software development can be. But how does it really look inside? How does it look from the developer's perspective?

In most cases it looks like this:

Agile teams mostly work with Scrum, because it's really easy to implement and really easy to work with. But to be honest, neither I nor any of my developer fellows know a single company that really implements it fully. Of course everybody has daily meetings, but sometimes there are exceptions to this rule; everybody knows that we need to do a retrospective, but not every time, etc. It's really a fact that most companies have a customized version of Scrum. There's even a name for it: ScrumBut. It's really funny that the laziness of some managers and developers has gained its own name.

Scrum isn't the only methodology that gets badly customized - it's a really common way of doing software. I think there are some explanations for this strange fact. Firstly, we have two worlds meeting here: managers and IT. Developers have their own language, and they really don't see the need to be managed - people don't like being controlled and monitored all the time (daily meetings carry the discomfort of being unable to hide that we've done nothing productive today). And from the other perspective, we have some strangely dressed guys who have been doing something all day long, and we can't even tell what it was exactly...

Finding a solution to this problem isn't easy. Mostly there are communication problems - managers have zero knowledge about IT, and developers have zero knowledge about project management. Both sides are very different, and both sides need to cooperate. It would be great if each camp thought about the other and about how to make life easier for both.

Tuesday, March 06, 2012

Submit is not a function

When you see something like this in your JavaScript console, beware. It took me a little while to find out what the problem was, but the problem is really simple. Let me explain.

After doing something, the user is redirected to a page on which there is a form and one button (just in case somebody is not using JavaScript). But for the greater part of users I added a script that clicks this only button on the page.

I was trying to do something like this:
<script type="text/javascript">
with no success - I kept getting this strange message: "Submit is not a function".

I tried several combinations - maybe something didn't exist, maybe I'd mixed MooTools and jQuery too much. But at last I found the solution, and I hope this post will help somebody.

In the form (which I got from the Internet) I had an input of type "submit" whose name was (?!?) "submit". Very strange in my opinion - in effect, when I typed .submit() the browser resolved the name to this input element rather than to the form's submit method - and that's the reason for this creepy error.

So watch out and try to avoid copying forms from the Internet.
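The mechanism can be reproduced with a plain object - browsers expose every named control as a property of its form, so an input named "submit" shadows the form's submit() method. A small node-runnable simulation of that lookup (the form and field names here are made up for illustration):

```javascript
// A stand-in for an HTMLFormElement with its submit() method.
var form = {
  submit: function () { return "form submitted"; }
};

// Browsers also expose each control as a named property on the form,
// so <input type="submit" name="submit"> overwrites the method:
var submitButton = { type: "submit", name: "submit", value: "Go" };
form[submitButton.name] = submitButton;

console.log(typeof form.submit); // "object" - calling form.submit() now throws
```

Renaming the input (e.g. name="submitBtn"), or calling HTMLFormElement.prototype.submit.call(form) in the browser, avoids the clash.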

Wednesday, February 29, 2012


Today I want to write something about cloud computing, and about cloudbursting especially.

The first thing worth mentioning is the fact that there are several models of using the cloud in general. Everybody knows and understands what the public cloud is - everybody can use as many resources as they want from a public pool. Even individuals can buy some resources.

But not everybody is happy with this model - there are some companies that don't want to physically share their infrastructure with other companies. Of course data stored in data centers is much safer than in any other place, but it's very difficult to convince some people of this new model of computing.

Anyway, we have two major types of clouds: public and private. In the public cloud the resources are shared between everybody, and in the private cloud we don't physically share any machine or data with anybody.

But there are some situations when this doesn't work. Because of financial limitations, private clouds are usually smaller than the massive public cloud solutions. So sometimes, when they need more compute power, companies are forced to use the public cloud to cope with unpredictable workloads. It's called "cloudbursting" - a really fancy name that, in my opinion, hides real fears of using the public cloud.

So cloudbursting is using the public cloud to outsource some computing from our own cloud solution. It shouldn't be used too much, because that would mean that our private cloud isn't powerful enough and/or we're losing too much time developing cloudbursting solutions, which can be nontrivial. If these bursts happen too often, it's a first signal that something is wrong with the architecture of our services. It should happen only at load spikes.

Cloudbursting is a first step towards moving the whole company to the public cloud.

Tuesday, November 08, 2011

Windows Azure - introduction

Recently I made myself do some work and start exploring the unexplored. The first target I chose was Windows Azure - Microsoft's cloud services platform. So here are my thoughts and first impressions about it.

The first thing you must do is create an account in the Microsoft Online Services Customer Portal: As a Microsoft client you must purchase a subscription for Windows Azure services. For all new clients there is a trial period in which we can test Windows Azure for free - a very good option to start with; we can just try it for ourselves.

When we have access we can sign in to the Management Portal. From there we can manage all our Windows Azure services and check what is going on and how our application is working. We can create database servers, compute workers, etc. in just a few clicks - we choose where we want to place our servers and after a while everything is up and ready for deploying our application. All this information is presented in a responsive web interface. My first impression was very good - it's very intuitive in my opinion, and it's the first web page using Silverlight that doesn't make me angry. Who knows how to use Silverlight properly better than Microsoft? Good for them, for now.

But to do something smart and not pay too much, we need to know something more about Windows Azure and about what cloud services are in general. According to Microsoft:
Microsoft thinks of the cloud as simply an approach to computing that enables applications to be delivered at scale for a variety of workloads and client devices.
I mostly agree with that, but I think there is more to cloud computing than just a scaling mechanism - the cloud of course gives us an opportunity to handle lots of computation, but this approach wouldn't have its name without using a very large number of computers and treating them like one big, powerful one.

There are a few business models of cloud services: IaaS, PaaS and SaaS.

IaaS was one of the first approaches, used by Amazon Web Services Elastic Compute Cloud (EC2). We don't have to worry about networking, storage, servers and virtualization, but we still have to consider things like operating systems, middleware, etc.

PaaS is the best fit for Windows Azure - all we need to worry about is our applications and data. It simplifies our tasks and helps us concentrate on our product.

SaaS is another, more abstract level of managing our business - everything we want is delivered to us as a service and we don't have to worry about anything. We all use SaaS and often don't even realize it - Google delivers many of its services this way (Gmail, Calendar, etc.).

Ok, so now that we know something about what cloud computing is, let's get back to how Microsoft tries to approach it.

Windows Azure Platform consists of three main components:
  • Windows Azure
  • SQL Azure
  • Windows Azure AppFabric
These three main parts are the core of Windows Azure. So let's have a closer look at each of them.

Windows Azure component includes three subcomponents:
  • Compute (responsible for doing heavy work)
  • Storage (not database, just simple storage)
  • Virtual Network (connects them with each other)
SQL Azure is just a cloud-evolved SQL Server with minor differences:
  • Database (relational database, compatible with SQL Server)
  • Reporting (similar to regular Reporting Services)
  • Data Sync (synchronization)
Windows Azure AppFabric provides the middleware services:
  • Service Bus (general-purpose application bus, available at internet scale)
  • Access Control (rules-driven access control for cloud applications)
  • Caching (general-purpose distributed cache mechanism)
As we can see, the Windows Azure Platform consists of several loosely coupled services we can use. From these building blocks we can assemble our applications very quickly.

The core functionality is of course Compute, in which we can also use non-Microsoft technologies (Java, PHP and almost anything else). Last but not least, we can migrate our existing solutions to the cloud in a few simple steps and, for example, make our web application run on several distributed instances. We can also migrate our existing databases to the cloud - we can use SQL Azure just like a regular SQL Server.

So to sum up: Windows Azure is a very easy way to build scalable network applications. All we have to worry about is our application and our idea - we can focus only on our product.

Friday, September 02, 2011

ASP.NET MVC3 - Razor view engine

Recently I've started working with ASP.NET MVC3 after a break from .NET. My first impression when looking at the new project was: hey, where are the controls known from .aspx pages? I used to work with them, but now the files have strange .cshtml extensions.
ASP.NET MVC3 comes with a new default view engine - Razor - and a very nice engine it is, I must say. Its syntax is easy to remember and generally very intuitive and lightweight.
Some examples:
Hello @ViewBag.UserName
As you can see, there is no such thing as a closing tag. We only write '@' and that's all. Now have a look at a for loop:
@foreach (var element in Model)
{
    <li>@element</li>
}
Mixing HTML and Razor is very unobtrusive. If we want to run some code inside a view (a bad practice), we can just type:

@{
    string text = "Hello";
    text += " World";
}
Our layout page can look as follows:

<html>
<head>
    <title>Title of our page</title>
</head>
<body>
    <div>Common part of all pages</div>
    @RenderBody()
</body>
</html>

Writing our own view helpers is also very easy:

@helper MyHelper(string text)
{
    <strong>@text</strong>
}
Razor has some small disadvantages - we can't use the big collection of ready-made server controls - but we can have ASPX and Razor views in one project, so it's not a problem at all. With Razor we get better control over the generated HTML, and better-looking code and views - Razor is really worth using. As I said earlier, Razor doesn't let us use .NET controls, and we also can't use the designer view when creating views - but we get something in exchange: a very good set of ready-made helpers hidden in @Html.
We have helpers for creating forms, form inputs, and links to other actions, and for rendering other views or actions inside the current view. For example, this code renders a form input and the validation message for it:
@Html.EditorFor(model => model.Name)
@Html.ValidationMessageFor(model => model.Name)  

When talking about Razor I can't omit the @model keyword, which is very helpful. We can create strongly typed views, so a view can display only one model class. In the action we write:
return View(modelToRender);
And at the beginning of the view we put something like this:
@model OurNamespace.TypeOfOurClassToRender
From now on this view renders only instances of this class. To use the instance inside the view we just type, for example:
@Model.Name
Take a good look, because there is a difference between these keywords - one starts with an upper-case "M", and it's a very important difference.
So I can say that Razor is very pleasant to work with. If you are not sure about using Razor in your projects, don't hesitate to try it - you're going to love it for its simplicity and ease of use.

Thursday, August 25, 2011

Worse is better

Recently I've come across a lot of the development problems that every developer finds in his or her everyday work. Every time I faced a design problem, I had two options. First: design it in a good way, taking into account the future development of this software. Alternatively, we can always write something in a crude, bad way, but a few times faster.

And I have to say it's a very difficult question which way to choose - because we will be building something that nobody will ever see inside (I'm not counting future developers of our system). So is it worth doing it at our best? Yes, it's worth spending some time if we'll be developing the system for a couple of years; but if something works identically either way, why not choose the shortest way to do it?

I'll be punished for what I'm about to say... take the MVC architecture for example. It's common knowledge what it is for: to separate model classes from views and from logic. The main reason for this separation is to keep eventual changes simple - and I agree with that, but come on - do you know anybody who has ever switched to another database in a project? Or changed the view / templating engine? It's a rather rare situation, yet everybody wants to have this possibility. I think it's mostly about being cool. We can say: oh, I can switch my DB to another one and my website will still work! In the meantime that website could be running much more efficiently. Because if we stay on an abstraction level compatible with every DB solution, we lose the advantages given to us by a specific DB engine. And if we do use specific options of one DB engine, changing it to another becomes complicated again. So we end up with code that is blocking us, even though we chose it precisely so that nothing would block us.

It's very funny in my opinion, and we should always consider our choices before we start. And I don't blame the MVC pattern - it's the best pattern for building websites and the like. The only thing I blame is bad software management and bad planning.

So, returning to the main point: it's not always worth doing something the best possible way - it's better to satisfy the project requirements as fast as possible. From the business point of view it's better to deliver something fast than to work on it for a couple of years and deliver the same product, just a little easier to develop in the future - if our competition delivers something earlier, future development of our idea won't even exist.

So my point is: whether we're building a website or even desktop software, we must focus on our requirements, not on doing everything the best possible way. On the other hand, we must be careful enough to deliver a solution that fulfills all of our requirements.

So good requirements and a good business plan are everything we need to deliver the best IT solutions.

Friday, August 12, 2011

How to become a good programmer

Being a programmer isn't very easy. Of course learning one programming language is easy, but truly mastering it takes a lot of time and a lot of problems solved in that specific language. Learning the next language, if it's in the same paradigm, is easier than the first one, and so on.

If somebody is intelligent enough, he or she can learn any new technology and any new language. It only takes time. I don't say that finding free time while working and / or studying is easy, but in our job we must keep learning all the time. Languages, frameworks and patterns are still evolving.

When it comes to design patterns, it's the same situation - the only thing that matters is using them in practice. Only when we're actually building something do we encounter real-life problems and solve them. Year by year the problems to solve get harder and harder, and people keep searching for solutions to the more difficult ones.

People say a developer is good when he tends to meet deadlines. But in my opinion, being a good programmer means being curious about how an application works inside. Only if we know what will be in memory and what the computer must do when we do A or when we do B can we write effective applications. I know that in business, time is very important - but not at the cost of doing something well.

I've worked in a company (I won't name it) in which everything was done in two ways:
- as fast as we can, writing dirty code
- as long as it takes, repairing the dirty code written earlier
After some time I noticed that 90% of programmers' time could be saved by doing something well once. So doing things well should be our goal from the start.

Today's languages make programming a lot easier - we don't need to worry about memory consumption or the heavy weight of an application. People will buy faster machines and / or the client will buy a bigger server - it's just not our problem, they say.

Computer games today are made using graphic tools and only a few scripts (with some exceptions, of course). If we look at the demoscene, we can find really cool graphics and really cool stuff that runs faster and looks much better than the games we can buy.

The difference is huge - the demos are only 4 KB to 64 KB in size. And they look very nice and produce the best graphics possible at good performance - how is that possible? Because these small apps weren't made under the stress of time and money - they were made by people who really enjoy what they are doing.

So in my opinion we should do something that interests us, or quit and find another job. Because only if we're doing something that is fun for us will we do it with enough passion and curiosity to do it well.

Thursday, August 11, 2011

Javascript: The Good Parts

In my projects I've encountered many problems involving doing something in JavaScript. When I first hit a problem that required it, I searched the internet and found jQuery - which is very helpful. I copied it and it worked! Really simple!

But later, when I got stuck on a problem unknown to me and nobody could help me with it, I realized something was wrong. I knew object-oriented programming really well, but with this strange syntax I was helpless. Then I found this book.

When I read the table of contents I realized what I was missing. After reading it I can say that I now understand what JS really is. The author definitely described solutions to my problems - and now it's clear to me what I was misunderstanding.

In every language we can solve a problem in multiple ways - it's up to us which way we prefer, but some ways are easier than others and some just produce cleaner code. JavaScript is a language in which a beginner with experience in other object-oriented languages can make a huge number of mistakes. Programming in JS is very tricky, but also very fun. You must make sure you really know what you're doing and what's happening in the code before building something big.

The author explains typical anti-patterns and shows what dangers await us if we do something wrong.

So now, after reading this book and after several talks with people smarter than me, I can say that very few people know this language really well - and it's a shame, because I see real prospects in it. Its popularity is growing really fast in the dev community.

To sum up I recommend this book to every developer.

Thursday, July 14, 2011

Developers salary

As a developer I'm always searching for job offers on the Internet, and I always wonder: how much would they pay me for this position? Here in Poland, talking about salary at work is taboo. There are only a few offers in IT where we can see the salary before we go to the interview.

It's rather strange, and many Polish companies abuse this fact - I've been to countless interviews and wasted huge amounts of time getting there and gathering information about the company, etc., only to hear that they can offer me about half of my financial expectations.

I must say I'm disappointed by that. Many of my friends from university or from my past jobs work for salaries that are very small relative to what they can do.

I think that in Poland we must wait a couple of years to catch up with other, bigger countries on this topic. In my opinion, every job offer should include salary information (it could have some margins). It would spare everybody some time.

On that note, I googled developer salaries and found an interesting article - "How much should you pay developers?".
Its author describes a few principles for financing a development team.

First, an algorithm assigns points in a few categories. Each point is worth a fixed amount of money - there are no margins - so it's clear and fair for everybody.
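A minimal sketch of such a point-based scheme (the categories, scores and value per point are all made up for illustration - the article doesn't give concrete numbers):

```javascript
// Every category contributes points; every point is worth the same
// fixed amount of money, with no negotiation margins.
var VALUE_PER_POINT = 250; // illustrative amount of money per point

function monthlySalary(scores) {
    var points = 0;
    for (var category in scores) {
        if (scores.hasOwnProperty(category)) {
            points += scores[category];
        }
    }
    return points * VALUE_PER_POINT;
}

// Two developers with identical scores always earn exactly the same:
var salary = monthlySalary({ experience: 4, skills: 5, responsibility: 3 });
// 12 points * 250 = 3000
```

The point is that pay becomes a pure function of the scores - which is what makes the transparency principle below workable.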

The second principle (my favourite) is transparency. Everybody knows how much the others earn and why - it clarifies what we need to do to earn more, and there's no situation where I'm dissatisfied because a colleague with less experience earns more than me. It's clear to everybody, and the team can even talk about their salaries in public.

Another principle is competitiveness - if we know that in company X we can get twice as much for doing the same work, it's obvious we'll quit our current job and take the other one. But if we don't know, we may stay there for life. Companies should make use of this information.

In Poland many companies are very greedy. I've met, and still meet, managers who search for IT specialists with a minimum of 3 years of experience and offer them the salary of a shop assistant. It's even sadder when we realize that some people will agree to work for them.

So to sum up: my point is that every job offer should include salary information, and we should always check whether that salary is fair for the position.

Sunday, July 03, 2011


Some time ago I was searching for a book that would help me master the techniques of SEO and SEM.

First I tried to find some articles on the Internet, but there were only scattered publications loosely connected with each other. Then I found an interesting book - a bestseller about SEO - "SEO Warrior".

I was the kind of person who thought all this SEO stuff was something we could do once and forget about. And I was really wrong... In fact, the most important thing on our site is the content. Of course we can help search engines index our site, but if we don't have interesting articles and material, it will be very hard.

We can acquire some extra traffic from social media, but again, if we don't have something really interesting it will be even harder. The book describes these techniques as SMO.

Another important thing about our website is external links - the more links point to our site, the better the chances that someone will find us in the big world of the Internet.

I must say I was impressed by it. It's definitely worth reading - everybody responsible for e-commerce should read it.

The book also has a companion website.

So if you're looking for a compendium of techniques and you're preparing a checklist of what to do to bring traffic to your website, I can recommend this book as a good source of knowledge.