Microsoft finally announced on July 22nd, 2009 that Windows 7 and Windows Server 2008 R2 (which share the same code base and are developed in lock-step) have reached the released to manufacturing (RTM) milestone. The RTM of Windows 7 and Server 2008 R2 had been expected for quite some time, with rumors pointing to July 13th, 2009, although the official timeline only promised the second half of July 2009. The official Windows 7 RTM build version, 6.1.7600.16385, had already been confirmed days earlier.

With the RTM, Windows 7 and Windows Server 2008 R2 development has finally wrapped up: the code is final and will be released to OEMs (original equipment manufacturers) and system builders within 48 hours (Windows 7 OEM availability is July 24, 2009), giving them time to build the operating system into computers and other “smart” hardware so that Windows 7 and Server 2008 R2 powered machines are available in time for the worldwide general launches. Windows 7 is set to debut publicly on October 22nd, 2009, and Windows Server 2008 R2 will be generally available on or before that date, according to Microsoft’s press release. The full release schedule of Windows 7 RTM to OEMs, MSDN, TechNet, Action Packs and Microsoft partners has also been announced.

According to the Windows 7 Team Blog, the RTM is build 7600, declared and signed off after an RTM contender passed all validation checks and met the significant RTM quality bar. Microsoft released Windows 7 Beta as build 7000 and Windows 7 RC as build 7100. Steve Ballmer, Microsoft Chief Executive Officer, also confirmed that Windows 7 had been finalized during Microsoft Global Exchange (MGX) in Atlanta, Georgia later in the day.

Windows Server 2008 R2 and the free standalone Hyper-V Server 2008 R2 have also been declared at the RTM milestone by the Windows Server Division Blog. The Windows Server 2008 R2 and Hyper-V Server 2008 R2 release schedule is slightly faster but broadly similar to Windows 7’s, with evaluation software available for download in the first half of August and the full product available to customers with Software Assurance in the second half of August.

The Windows 7 and Windows Server 2008 R2 builds being released to manufacturing were compiled on Monday, July 13, with a full build version string of 6.1.7600.16385, also written as 7600.16385.090713-1255, as confirmed by Larry Osterman, a Microsoft software design engineer with more than 20 years at the company, on his blog. The official RTM build was signed off on July 17, 2009, in a long process that only completed today.

The RTM of Windows 7 and Windows Server 2008 R2 also officially marks the end of the Windows 7 and Server 2008 R2 alpha, beta and release candidate phases. However, don’t expect Windows 7 development to stop: future updates will come soon (probably sooner than most expect) in the form of hotfixes and service packs, with rumors of Service Pack 1 (SP1) already emerging.

While no official, untouched and unmodified Windows 7 and Windows Server 2008 R2 DVD ISO images have leaked yet, end-user self-made ISO images of Windows 7 RTM and home-made ISOs of Windows Server 2008 R2 RTM are already available for download. Although these are not stamped with Microsoft’s official signature, they are based on the original install.wim (the archive that stores all Windows 7 system files) extracted from the original Windows 7 RTM DVD.

Windows 7 RTM includes a version check that blocks the upgrade path from pre-release versions of Windows 7 (e.g. Windows 7 RC or Beta) to the final RTM build. Use this hack to modify cversion.ini and allow an in-place upgrade from a prerelease version of Windows 7.
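For reference, that check lives in the sources\cversion.ini file on the installation media. The sketch below shows the general idea, assuming the standard layout of the file; the exact stock build numbers vary between releases, and the point is simply to lower MinClient below the pre-release build you are upgrading from:

[HostBuild]
; lowered so a Beta (7000) or RC (7100) installation passes the version check
MinClient=7000.0
MinServer=7000.0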

Update: The original Windows 7 RTM ISO (x64 and x86), the original untouched Windows 7 RTM OEM ISO (32-bit and 64-bit) and the Windows 7 E RTM ISO (32-bit and 64-bit) have leaked.

Lastly, the Engineering Windows 7 blog has published a video clip showing the final few minutes before RTM, a sign-off process in which each and every team that contributed to Windows formally commits to having successfully executed the work necessary for the product to enter the release process. The video shows the Windows 7 team gathering one last time (for Windows 7) in the “Ship Room”, where a representative from each team literally signs to signify their team’s readiness for manufacturing.

Sometimes you might need to generate a kernel memory dump file to troubleshoot issues related to kernel-mode components. There are three methods you can use to do so, as described below:

* Crash.exe
* CrashOnCTRLScroll
* NMI (NMICrashDump)

Crash.exe is available in both command-line and GUI versions.

To use CrashOnCTRLScroll, you need to create the following registry entry:

* KEY Name: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters
* Entry Name: CrashOnCTRLScroll
* Type: DWORD
* Value: 1(enabled), 0(disabled)

You need to restart the server for the changes to take effect. After you have restarted the server, hold down the right CTRL key and press SCROLL LOCK twice (CTRL+Scroll Lock, Scroll Lock) to crash the server. A memory dump file will be generated at the default location on the system drive.
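If you would rather set the value from an elevated command prompt than through the Registry Editor, the standard reg add command creates the entry described above. The i8042prt key shown here applies to PS/2 keyboards; for USB keyboards the same value goes under ...\Services\kbdhid\Parameters instead:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters" /v CrashOnCTRLScroll /t REG_DWORD /d 1 /f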

To use the third method (NMI), you need to make sure that your server supports Non-Maskable Interrupt (NMI) capabilities. To use it, create a registry entry at the following location and then press the NMI switch on the server to crash it and generate a kernel memory dump file:

* KEY Name: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl
* Entry Name: NMICrashDump
* Type: DWORD
* Value: 1
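The equivalent one-liner from an elevated command prompt is shown below; it simply creates the value described above:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v NMICrashDump /t REG_DWORD /d 1 /f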

Introduction

In Windows Server 2008, Microsoft has brought back a feature that we have not seen since Windows NT: Read Only Domain Controllers. In this article, I will explain why Microsoft did this, and the advantages of using Read Only Domain Controllers.

I hardly ever watch television, but when I sat down to write this article, I couldn’t help but remember an episode of 30 Rock that I saw a while back. In that episode, the show’s main character, Liz Lemon, was dating a guy who was the only person in New York City who was still selling pagers. When Liz told him that nobody uses pagers anymore because everybody uses cell phones, he insisted that technology was cyclical, and that the pager was going to make a big comeback.

Although the remark was intended to be comical, I think that technology is more cyclical than most people realize. For example, I do not expect to ever see the pager making a comeback, but is cell phone texting really all that different from the text based pagers that we all had fifteen years ago?

Perhaps a better example of the cyclical nature of some technology is a new type of domain controller found in Windows Server 2008 called a Read Only Domain Controller, or RODC. The reason I say this is an example of cyclical technology is that, in a way, RODCs are a holdover from more than a decade ago.

Windows NT was Microsoft’s first Windows Server operating system. Like modern Windows Server operating systems, Windows NT fully supported the use of domains. What was different though, was that only one domain controller within each domain was writable. This domain controller, known as the Primary Domain Controller or PDC, was the only domain controller that an administrator could write information to. The primary domain controller would then propagate updates to the other domain controllers within the domain. These other domain controllers were known as backup domain controllers, and were read only in the sense that they could only be updated by the primary domain controller.

Although this domain model worked, it had its downside. Most notably, a problem with the primary domain controller could cripple the entire domain. As you probably know, Microsoft introduced some major changes to the domain model when they released Windows 2000 Server. Windows 2000 Server introduced two new technologies for domain controllers, both of which are still in use today; the Active Directory, and the multi master domain model.

Although there is still a PDC emulator role and a few other specialized roles, for the most part every domain controller in a multi master domain model is writable. That means that an administrator can apply an update to any domain controller, and the update will eventually be propagated to all of the other domain controllers in the domain.

The multi master domain model was retained in Windows Server 2003, and is still used in Windows Server 2008. However, Windows Server 2008 also allows you to create Read Only Domain Controllers. RODCs are domain controllers on which the Active Directory database cannot be updated directly by administrators. The only way of updating these domain controllers is to apply a change to a writable domain controller, and then allow the change to propagate to a RODC. Sound familiar?

As you can see, RODCs are nothing short of a relic from the days of Windows NT. In this case technology truly has become cyclical! Of course Microsoft would not have brought back RODCs if there were not some advantage to doing so.

Before I begin explaining why Microsoft brought back RODCs, let me first clarify that the use of RODCs is completely optional. If you want every domain controller in your entire forest to be writable, then you can certainly do that.

The other thing that I want to quickly mention is that even though RODCs are very similar to the Backup Domain Controllers (BDCs) that were used in the days of Windows NT, they have evolved a bit. There are a couple of things that are unique about RODCs, and I will point those things out as we go along.

OK, so why did Microsoft bring back RODCs? It has to do with the challenges of supporting branch offices. Branch offices have traditionally been tough to support because of their isolation and because of the nature of the connection between the corporate headquarters and the branch office.

Traditionally, there have been several different options for managing branch offices, but each has its own set of advantages and disadvantages. One of the more common ways of dealing with branch offices is to keep all of the servers in the main office, and provide the branch office users connectivity to those servers through a WAN link.

Of course the most obvious disadvantage to using this method is that if the WAN link goes down then the users who are in the branch office are unable to do much of anything, because they are completely cut off from all of the server resources. Even if the WAN link is functional though, productivity may suffer because the WAN links are often slow and easily congested.

Another common option for dealing with branch offices is to place at least one domain controller in the branch office. Often times, this domain controller will also act as a DNS server and as a global catalog server. That way if the WAN link goes down, the users in the branch office will at least be able to log into the network. Depending on the nature of the branch office user’s jobs, there may also be other servers located at the branch office.

While this solution usually works out pretty well, there are some disadvantages to using it. The primary disadvantage is the cost. Placing servers in branch offices requires the organization to shell out money for server hardware and for any necessary software licenses. There are also support costs to consider. An organization needs to determine whether they want to hire full time IT staff to support the branch office, or if they can deal with the amount of time that it takes the IT staff to travel from the main office to the branch office when onsite support is needed.

Another issue with keeping servers at the branch office is security. It has been my experience that servers located outside of the datacenter are basically unsupervised. They are often just locked in a closet at the branch office, and employees at the office who have a key to the closet have to be trusted not to mess with the servers.

As I mentioned earlier, WAN connections can often be slow and unreliable. Herein lies another problem with placing servers in a branch office. Domain controller replication traffic can congest the WAN link.

This is where RODCs come into play. RODCs are just like any other domain controllers, except that the Active Directory database is not directly writable. Placing an RODC at a branch office does not get rid of Active Directory replication traffic, but it does reduce the workload of the bridgehead servers because only inbound replication traffic is allowed.

RODCs may also improve security, because people at the branch office cannot make any changes to the Active Directory database. Furthermore, by default no account credentials (password hashes) are replicated to RODCs. This means that if someone were to steal an RODC, they would not be able to use the information that they get off of it as a means of hacking user accounts. The fact that credentials are not stored on RODCs also reduces the amount of replication traffic that flows across the WAN link, but it does mean that, with some exceptions, user authentication still depends on the WAN link being available.
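If you ever need to confirm this behavior on a deployed RODC, a couple of repadmin commands are helpful. The name rodc01 below is a placeholder for your own RODC, and this is only a quick sketch rather than a full procedure:

repadmin /showrepl rodc01
(shows the RODC's inbound replication partners; the RODC never appears as a replication source for the writable domain controllers)

repadmin /prp view rodc01 reveal
(lists the accounts whose passwords have been cached on the RODC under its Password Replication Policy)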

Introduction

We all need proper DNS resolution for our network applications. When this is not working, what do you do? Let us find out…

Let’s face it: when DNS resolution is not working, using anything on your computer that has to do with networking is painful, because there is a good chance it will not work. DNS really is not a “nice feature” of a network, it is a requirement. As a network admin, I have heard the alarming cry of end users moaning that the network is down, when the cause was actually the DNS servers. In these cases I assure them that the network is up and running fine, but that the DNS servers are down! As you can imagine, that does not go over very well with them, because to an end user it is all the same thing. DNS is “the network” (not that they know what DNS is anyway).

So how do you troubleshoot this critical network infrastructure service when you are on an end user PC (or your PC) and DNS is not resolving a DNS name? Here are the 10 tips and tricks that I recommend you try to get DNS working again…

1. Check for network connectivity

Many times, if you open your web browser, go to a URL, and that URL fails to bring up a website, you might erroneously blame DNS. In reality, the issue is much more likely to be caused by your network connectivity. This is especially true if you are using wireless networking on a laptop. With wireless security protocols, the key will be periodically renegotiated or the signal strength will fade, causing a loss of network connectivity. Of course, you can lose network connectivity on any type of network.

In other words, before blaming DNS for your problems, start troubleshooting by checking “OSI Layer 1 – Physical” first, and then check your network connectivity. Figure 1 shows what a healthy wireless connection with a valid Internet connection looks like.


Figure 1: Good Wireless Network Connection

Notice how the Access is Local and Internet. If it just said “Local” then you do not have a valid network address (you only have an automatic private IP address, or APIPA, which starts with 169.254.x.x).

This brings me to my next point. Make sure that you have a valid IP address on your network. To check, go to View Status on the screen above and then to Details, where you can verify your IP address and your DNS Server IP addresses. Again, if you have a 169.254.x.x IP address you will never get to the Internet. Here is what it looks like:


Figure 2: Verifying your IP address and DNS Server IP addresses
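If you prefer the command line, you can see the same information by running the command below and checking the IPv4 Address, Default Gateway and DNS Servers lines in its output:

ipconfig /all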

2. Verify your DNS server IP addresses are correct and in order

Once you know that you have network connectivity and a valid IP address, let us move on to digging deeper into DNS by verifying that your DNS Server IP addresses are correct and are in the right order.

If you look at Figure 2 above, you can see the IPv4 DNS Server IP addresses. Notice that these are both on my local LAN / subnet so that I can access them even if my default gateway is down. This is how it works on most enterprise networks. However, your DNS servers do not always have to be on your subnet. In fact, with most ISPs, the DNS Server IPs would not even be on the same subnet as the default gateway.

In most home/SMB router configurations, there are no dedicated DNS servers on the network; instead, the router proxies DNS requests to the real DNS servers. In that case, your DNS Server IP address may be the same as your router’s.

Finally, make sure that your DNS Servers are in the right order. In my case, with the graphic in Figure 2, my local DNS Server is 10.0.1.20. It is configured to forward any names that it cannot resolve to 10.0.1.1, my local router. That router is proxying DNS to my ISP’s DNS Servers. I can look up those DNS Servers on my router, shown below in Figure 3.

Figure 3: My ISP’s DNS Servers as listed on my router

That brings me to two more points. First, make sure that your DNS Servers are in the right order. If you have a local DNS Server, like I do, and you are looking up a local DNS name, you want your PC client to lookup that local DNS name in the local DNS Server FIRST, before the Internet DNS Server. Thus, your local DNS server needs to be first in your DNS settings as these DNS Server IPs are in the order that they will be used.

Secondly, you should be able to ping the IP address of your ISP’s DNS Servers. So, just as my DNS servers are listed above on my router, I can verify that I can ping them even from my local PC:

Figure 4: Pinging my ISP’s DNS Server

Notice how the response time from the ping to my ISP’s DNS Server is horrible. This could cause slow DNS lookups, or even failures, if it takes too long for the DNS server to respond.
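For reference, the pings themselves are just the standard command, run against the router and then against the ISP's DNS servers; substitute your own addresses for the examples from my network:

ping 10.0.1.1
ping <your ISP's DNS server IP>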

3. Ping the IP address of the host you are trying to get to (if it is known)

A quick way to prove that it is a DNS issue and not a network issue is to ping the IP address of the host that you are trying to get to. If the connection to the DNS name fails but the connection to the IP address succeeds, then you know that your issue has to do with DNS.

I know that if your DNS Server is not functioning then it could be hard to figure out what the IP address is that you want to connect to. Thus, to carry out this test, you would have to have a network diagram or, like many network admins do, just have the IP address of a common host memorized.

If this works, until the DNS server is available again, you could manually put an entry in your hosts file to map the IP to the hostname.
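As a rough sketch, a hosts file entry is just an IP address followed by the name; the address and host name below are hypothetical, and on Windows the file lives at C:\Windows\System32\drivers\etc\hosts:

10.0.1.50    intranet.example.com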


4. Find out what DNS server is being used with nslookup

You can use the nslookup command to find out a ton of information about your DNS resolution. One of the simple things to do is to use it to see which DNS server is providing you an answer, and which DNS server is not.



Notice, in Figure 5, how my local DNS server failed to respond but my ISP’s DNS server did provide me a “non-authoritative answer”, meaning that it does not host the domain but can provide a response.

Figure 5: nslookup output

You can also use nslookup to compare the responses from different DNS servers by manually telling it which DNS server to use.
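For example, the two lookups below compare the answer from whatever DNS server the client is configured to use with the answer from a specific server named on the command line; the 10.0.1.1 address is just the router from my setup above, so substitute your own:

nslookup www.google.com
nslookup www.google.com 10.0.1.1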


5. Check your DNS suffix

If you are looking up a local host on the DNS server for a domain that your PC is a member of, you might be connecting to that host without using the FQDN (fully qualified domain name), counting on the DNS suffix to fill in the rest. For example, if I were to connect to “server1”, the DNS server could have multiple entries for that DNS name. You should have your network adaptor configured with the connection-specific DNS suffix, as shown on the first line of the graphic above, labeled Figure 1. Notice how in that graphic my DNS suffix is wiredbraincoffee.com. Whenever I enter just a DNS name like server1, the DNS suffix will be added to the end of it to make it server1.wiredbraincoffee.com.

You should verify that your DNS suffix is correct.
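A quick command-line check is to filter the ipconfig output for the suffix lines; this is just a convenience one-liner:

ipconfig /all | findstr /i "suffix"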

6. Make sure that your DNS settings are configured to pull the DNS IP from the DHCP server

It is likely that you would want your network adaptor to obtain DNS Server IP addresses from the DHCP Server.  If you look at the graphic below, this adaptor has manually specified DNS Server IP addresses.

You may need to change to “Obtain DNS server address automatically” in order to get a new DNS server IP. To do this, open the Properties of your network adaptor, select Internet Protocol Version 4 (TCP/IPv4), and click Properties.
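If you would rather make this change from the command prompt, netsh can switch the adaptor back to DHCP-assigned DNS; the adaptor name “Local Area Connection” below is an assumption and must match the name shown in your Network Connections folder:

netsh interface ip set dns name="Local Area Connection" source=dhcp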

7. Release and renew your DHCP Server IP address (and DNS information)

Even if your adaptor is set to pull DNS information from DHCP, it is possible that you have an IP address conflict or stale DNS server information. After choosing to obtain the IP and DNS info automatically, I like to release my IP address and renew it.

While you can do this through Windows network diagnostics in your network configuration, I like to do it at the command prompt. If you have UAC enabled, make sure you run the Windows cmd prompt as administrator, then do:


IPCONFIG /RELEASE

IPCONFIG /RENEW

Then, do an IPCONFIG /ALL to see what your new IP and DNS Server info looks like.

8. Check the DNS Server and restart services or reboot if necessary

Of course, if the DNS server is really hung, or down, or incorrectly configured, you are not going to be able to fix that at the client side. You may be able to bypass the down server somehow, but not fix it.

Thus, it is very likely that you, or the admin responsible for the DNS server, need to check the DNS Server status and configuration to resolve your DNS issue.

9. Reboot your small office / home DNS router

As I mentioned above in #2 and showed in Figure 3, on home and small office routers, the DNS server settings are typically handed out via DHCP with the DNS server set to the IP of the router and the router will proxy the DNS to the ISP’s DNS server.

Just as it is possible that your local PC has bad network info (including DNS server IP addresses), it is also possible that your router has bad info. To ensure that your router has the latest DNS server information, you may want to do a DHCP release and renew on the router’s WAN interface with the ISP. Or, the easier option may be just to reboot the router to get the latest info.

10. Contact your ISP

We all know how painful it can be to contact an ISP and try to resolve a network issue. Still, if your PC is ultimately getting DNS resolution from your ISP’s DNS servers, you may need to contact the ISP, as a last resort.

When you add drives to your computer, such as an extra hard drive, a CD drive, or a storage device that corresponds to a drive, Windows automatically assigns letters to the drives. However, this assignment might not suit your system; for example, you might have mapped a network drive to the same letter that Windows assigns to a new drive. When you want to change drive letters, follow these steps:

  1. Right-click My Computer, and then click Manage.
  2. Under Computer Management, click Disk Management. In the right pane, you’ll see your drives listed. CD-ROM drives are listed at the bottom of the pane.
  3. Right-click the drive or device you want to change, and then click Change Drive Letter and Paths.
  4. Click Change, click Assign the following drive letter, click the drive letter you want to assign, and then click OK.

You will not be able to change the boot or system drive letter in this manner. Many MS-DOS-based and Windows-based programs make references to a specific drive letter (for example, environment variables). If you modify the drive letter, these programs may not function correctly.
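If you prefer the command line, diskpart can reassign drive letters as well. This is only a sketch; the volume number 3 is an example that you would replace with the number reported by list volume:

diskpart
list volume
select volume 3
assign letter=E
exit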

One popular post can bring you more traffic and links than a month’s worth of your usual content.

In this post, I want to set you a challenge with the potential to launch your blog into the stratosphere.

Make the next post you write your most popular post ever.

The following ten tips form my key advice for tackling this task. I used all of them when hitting the Digg front page for the first time. There’s no blueprint you can follow to write an incredibly popular post, but you won’t have a chance unless you try. I’m confident these tips will give you a good shot at success.

1. Time is more important than talent. Work on something for eight hours and you can bet it will be good. You don’t need to spend that long, however (though that’s how long it took me to craft the first post I wrote that hit the Digg front page). More time means you can refine, format and fill your post with plenty of value. Take the time to really craft your content. It will show in the finished product.

2. Use your best idea. A post will never become wildly popular unless it fulfills a need, and does so emphatically. What’s something your niche wants but hasn’t got yet? Can you assemble a whole lot of really awesome (targeted) resources in one place? The more your post helps people, the better it will do.

3. Use formatting to your advantage. These days, social media is key when it comes to launching your posts into the stratosphere. Social media users are notoriously spoiled for choice, however. Use formatting to emphasize the best aspects of your post. Hone in on your funniest lines, your most profound bits of advice, your best resources. Make them stand out.

4. Brainstorm headlines. There are probably one or two bloggers who’ve completely mastered the art of writing headlines for social media (you’ll know who they are). The rest of us haven’t been blessed with such skills. When you see a great headline, chances are it’s option #12 of a dozen choices. Few of us can think of a great headline straight away. Spend ten minutes brainstorming and you’re bound to stumble across something that works. A weak headline will cripple your post’s chances of success. It’s essential that you put a lot of work into getting it right.

5. Invest plenty of value in your post. Ever bookmarked or voted for something without completely reading it? We’ve all done it. It’s because of the ‘Wow’ factor — the presence of enough promised value in one place gets the reader enthusiastic about the post straight away. Instead of 5 tips, why not share 50? Instead of 9 resources, why not 40 or more?

7. Beauty is in the eye of the beholder. If your post looks good, it will draw readers in. Take the time to add images, thumbnails and formatting to what you create. Make your post a visual feast. With so much web content presented in a bland way, your post is guaranteed to stand out.

8. Tell them what you’re going to tell them. Readers will skip your waffly introduction. You can say the same thing in fewer words, particularly when you’re writing for an impatient reader: someone who wants to get straight into your tips/resources/opinions. Use your introduction to highlight why the reader should stick with your post. There’s a reason my post introductions mainly consist of: “In this post, I’m going to do this, this and that.” It’s what people really want to know: what am I getting in exchange for my attention?

9. Send messages with links. The best way to get a blogger to investigate your blog is by linking to them. We’ve got a natural desire to know what’s being said about us. If your post becomes really popular, each link inside it should send enough traffic outwards to be worth investigating. Be generous with your outbound links when writing your most popular post. It gives other bloggers an incentive to link to you, because it’s ultimately more promotion for them.

10. Utilize your network. If you want people to Digg, Stumble or Reddit your post, there’s no reason why you need to sit back with fingers crossed and hope it happens. Ask them. Your loyal readers like you. You entertain them, or teach them, or help them. If voting is a simple matter of clicking a link they’ll be more than happy to do so. Ask for votes in your post and email readers and social media influencers. In most cases you will need to get the snowball rolling. After that, others will do most of the work for you.

Bonus tip:

11. Examine what worked before. Study your most popular posts so far. What’s common about them? Why did they work? What needs did they address? In creating your most popular post, it’s important to learn by example and build on what has worked for your blog in the past. Another good idea is to analyze the most popular posts on other blogs in your niche. Why did they work? What’s remarkable about them? You can transfer those qualities over into what you write.

Introduction

Windows Server 2008 offers a lot of improvements over Windows 2003, but the backup program is not one of them. This article discusses the issues that you need to know about before you attempt to back up your Windows 2008 server.

Microsoft has included a low end backup utility (NTBACKUP) with Windows Server ever since Windows NT 3.51 was released. Although NTBACKUP has undergone a few changes over the years, it has always retained the same basic structure. When Microsoft created Windows Server 2008, they decided to completely rewrite the backup application. In doing so, they have made some major changes to it that any Windows Administrator who is considering deploying Windows Server 2008 needs to be aware of.

Compatibility Issues

The first change that many administrators notice is that NTBACKUP is no longer called NTBACKUP, but rather Windows Server Backup. The new name is far from being the most important change though. From an administrator’s standpoint, the most important change that you need to be aware of is that Windows Server Backup is not compatible with backups that you have made using NTBACKUP.

If you use NTBACKUP to back your data up to an external hard drive or to a network drive, then the data is encapsulated within a .BKF file. Although Microsoft has used the .BKF format for many years now, they have discontinued support for it in Windows Server 2008.

All is not lost though. If you have data that is backed up in .BKF format, you can restore that data to a Windows 2008 server. You just can not do it natively. Instead, you will have to download Microsoft’s Windows NT Backup – Restore Utility. This utility will not allow you to create backups in .BKF format, but it will allow you to restore your data.

That is the good news. The bad news is that unlike its predecessors, Windows Server Backup does not offer support for tape drives. Therefore if you have been using NTBACKUP to write data to tape backup, then you are going to want to leave at least one Windows 2003 server on your network so that you can retrieve the data off of your backup tapes should the need arise.

Access to Windows Server Backup

Another thing about Windows Server Backup that seems to throw some administrators a curve ball is the fact that it is not installed by default. In the past, Microsoft has always included NTBACKUP in a default Windows installation, but if you want to use Windows Server Backup, you have to install it first. Fortunately, this is not difficult to do.

To install Windows Server Backup, open the Server Manager, and click on the Features container. Next, click on the Add Features link, and Windows will display a list of the available features. Select the Windows Server Backup Features check box, and click Next. Take a moment to look at the summary screen and verify that you have selected the correct feature to be installed. Assuming that everything looks good, click the Install button. When the installation process completes, click the Close button.
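If you would rather script the installation, Server Manager's command-line counterpart can add the feature as well. This is a sketch only; to the best of my knowledge the feature name is Backup-Features on Windows Server 2008 (it includes the command-line tools), but it is worth confirming with servermanagercmd -query before you run it:

servermanagercmd -install Backup-Features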

Loss of Flexibility

Some of Microsoft’s decisions in the way that they designed Windows Server Backup almost make sense. For example, I can see why they dropped support for tape drives. It is probably because tape drives are starting to go extinct in favor of disk based backup solutions. Good tape backup drives do not come cheap though, so I wish that Microsoft would continue to allow us to use them, but I digress.

Some of the other design changes really do not make sense to me. For example, if you want to run a scheduled backup, you have to provide Windows with a dedicated hard drive that it can use. Granted, hard drives are cheap these days, but requiring a dedicated drive all but rules out backup media rotation or storing backups offsite. You can use external dedicated hard drives, but there are practicality issues to consider.

Furthermore, when I say that a dedicated drive is required, I do mean dedicated. Windows will not even give you access to the drive through Windows Explorer. You can only access it through Windows Server Backup.

Fortunately, this does not mean that a dedicated hard drive is your only option for backing up Windows. If you are running a scheduled backup, then you pretty much have to write the backup to a dedicated hard drive (although you can get around this restriction if you are into scripting). If you are performing a non scheduled backup, you have the option of writing the backup to a UNC share, or even to removable media.
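As a sketch of the non-scheduled case, a one-off backup of the C: volume to a network share can be run with wbadmin; the share path below is an example you would replace with your own:

wbadmin start backup -backupTarget:\\backupserver\backups -include:C: -quiet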

In my opinion, the area in which Windows Server Backup has lost the most flexibility is in the fact that it does not allow you to backup individual files or folders. Yes, you read that right. The lowest level of granularity that is supported is an entire volume. You can backup a volume, or the entire server, but you really do not have any other choices.

I tend to think that this has something to do with the new backup file format that Windows Server Backup uses. Rather than using .BKF files, Windows Server Backup writes backup in .VHD format. I will talk more about this in the next section.

Why Did They Do It?

So why did Microsoft take a feature that has been working well for the last decade and basically ruin it? Well, I have not had the opportunity to ask anyone at Microsoft this question, but I suspect that it may have something to do with trying to discourage administrators from using Windows Server Backup as an enterprise backup solution.

Microsoft has always told us that NTBACKUP should only be used as a lightweight backup solution, and yet I know plenty of administrators who use it as a comprehensive backup solution for their entire enterprise. Windows Server Backup’s restrictions make it easy to backup an individual server, but make it completely impractical to use it to backup an entire organization.

I think that another reason why Microsoft has made these changes may have to do with the new backup format. Rather than writing backups as .BKF files, backups are written as .VHD files. As you may know, .VHD files are virtual hard drive files. You can not take a Windows Server 2008 backup file, link it to Virtual Server, and boot off of it (although that is probably going to be possible at some point in the future). You can however, mount a Windows backup file as a volume in Virtual Server. This feature provides administrators with a very easy way of extracting individual files from a backup set.

Conclusion

If you have read through this article, then you know that I consider Windows Server Backup to be a major disappointment. Even so, it is important to remember that you do not absolutely have to use it. You can still run NTBACKUP off of another server, or you could resort to using a third party backup application. It is also important to keep in mind that Windows Server Backup does have a few good points.

Windows Server 2008 offers a lot of improvements over Windows 2003, but the backup program is not one of them. Even so, there are a few redeeming features. This article discusses the issues that you need to know about before you attempt to back up your Windows 2008 server.

If you read the first part of this article series, then you know that I am not exactly a big fan of the new Windows Backup program. Even so, I did not want to just write an article bashing the new backup utility, and have that be the end of it. Windows Backup does have some good points, and I would not be doing my job if I did not tell you about them. Therefore I want to wrap up the series by telling you about some of Windows Backups good points.

Before I Begin

Before I get started, there is one additional caveat to using Windows Backup that I want to mention. This really should have gone in my last article, but I forgot to mention it. Windows Backup can only back up volumes that are using the NTFS file system. Volumes formatted using other file systems cannot be backed up.

Simplified Restoration

The best change that Microsoft made in Windows Server backup (at least in my humble opinion) was that they made it a lot easier to perform restorations. Even though you can not pick and choose which files and folders you want to back up, you do have the option of restoring individual files and folders. Of course you have always been able to do that with NTBACKUP.

You will notice Microsoft’s simplified restoration if you ever need to restore an incremental backup. Previously, restoring an incremental backup usually meant that you had to restore multiple backups. Now, you can just choose the date that you want to restore your data from, and the restoration process will restore any necessary files or folders automatically, even if the data is scattered across multiple incremental backups.

Another area in which Microsoft has made some improvements to Windows Server Backup is in its ability to restore the Windows operating system. I have to confess that I have yet to use the Windows Server 2008 version of Windows Backup to perform a bare metal restore, but I have used the Windows Vista version. Aside from a couple of minor quirks, performing a bare metal restore is really simple.

If you have ever performed a bare metal restore on a machine that was running Windows Server 2003, using NTBACKUP, then you know that there was quite a bit of work involved in the process. Whenever I have had to perform a full restoration of a Windows Server 2003 machine, I had to install the Windows operating system before I could even begin the restore process. I also found through experience that the restore process usually would not work right unless I also installed the same service pack that the server was running at the time that the backup was made.

In contrast, I performed a bare metal restore of a Windows Vista machine last week. Like Windows Server 2008, Windows Vista also uses the Windows Server backup program. There are some minor differences between the two versions, but they are very similar to each other.

At any rate, I had made a full system backup to a USB hard drive. I then installed a new hard drive into the machine. I did not bother to format the drive, partition it, or do anything else to prepare it for use. I simply inserted my Windows Vista installation disk into the machine and booted off of it. Rather than installing Windows Vista, I chose the Repair option, followed by the option to restore my backup. Windows Backup took care of everything. My hard drive was automatically partitioned and formatted, and my PC was returned to its previous state in no time.

Faster Backups

This leads me to another improvement I want to talk about. Windows Backup seems to run more quickly than NTBACKUP did. I have not actually timed the backup process, but it does feel faster than what I was used to with NTBACKUP, and Microsoft also claims that Windows Backup performs better than NTBACKUP because of the way that it uses block-level backup technology and the Volume Shadow Copy Service (VSS).

Whenever you perform a full backup, Windows scans the disk that is being backed up and copies any hard drive blocks that contain data. These blocks are copied to a .VHD file, which is the same file format as the virtual hard drive files used by some of Microsoft’s virtualization products. Because of the way that these blocks are copied, the backup is not compressed. It is, however, smaller than the volume that is being backed up, because only blocks containing data are copied.

If you happen to perform an incremental backup, then Windows will scan the hard drive to see which blocks contain new data, or data that has changed since the previous backup was made. Only these blocks are backed up, which makes incremental backups really fast.

Of course this raises the question of what happens to the data that is stored in the virtual hard drive file when you perform an incremental backup. Old data that was previously residing in blocks that are being replaced is written to the shadow copy storage area. The Volume Shadow Copy Service is used to differentiate between backup sets, and to track where the various blocks are being written to within the shadow copy storage.

Bare Metal Restoration

Windows Backup’s use of block-level backup technology has at least one side effect that you need to be aware of when you are performing a bare metal restore. When you perform a bare metal restore using Windows Backup, the new hard drive will be partitioned identically to the way that the old one was. If the new hard drive is larger than the old one, you will find that there is lots of wasted space on the drive. That does not mean that you cannot use this space; it simply means that the space is not used by default. You always have the option of extending a volume so that you can make use of empty space on the drive.

Manageability

One last thing that I want to mention is that there are some very welcome changes to Windows Backup in regard to its manageability. For starters, Windows Backup can finally be run within the Microsoft Management Console. This means that you can use the console to manage backups scheduled to run on other servers.

The other really welcome change is that you can control virtually every aspect of the backup process through the WBADMIN command. NTBACKUP was also command line driven, but the WBADMIN command offers a whole lot more flexibility. You can see a summary of some of the WBADMIN commands here.
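A few representative WBADMIN commands are sketched below; the schedule and target values are only examples, and wbadmin /? on your own server is the authoritative reference for the exact syntax it accepts:

wbadmin get versions
(lists the backups that are available to restore from)

wbadmin get disks
(lists the disks that can be used as backup targets)

wbadmin enable backup -addtarget:<disk ID from wbadmin get disks> -schedule:21:00 -include:C: -quiet
(creates a daily scheduled backup of the C: volume at 9:00 PM)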

Conclusion

As you can see, Windows Server Backup is not all bad. In spite of the fact that I miss some of the features (OK, most of the features) from NTBACKUP, Windows Backup does have its good points.