Wednesday, January 20, 2016

Enable MAPI over HTTP on Exchange 2013

Using Exchange PowerShell:

Set-MapiVirtualDirectory -Identity "servername\mapi (Default Web Site)" -InternalUrl https://subdomain.domain.com/mapi -ExternalUrl https://subdomain.domain.com/mapi -IISAuthenticationMethods NTLM,Negotiate

Set-OrganizationConfig -MapiHttpEnabled $true
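
After the change replicates you can confirm that MAPI over HTTP is active. A quick read-back check (property names as returned by the standard Exchange 2013 cmdlets):

Get-OrganizationConfig | Format-List MapiHttpEnabled
Get-MapiVirtualDirectory -Server servername | Format-List InternalUrl,ExternalUrl,IISAuthenticationMethods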

Enable Multiple RDP Sessions 2012 Server


Enable Multiple RDP Sessions
  1. Log into the server using Remote Desktop.
  2. Open the start screen (press the Windows key), type gpedit.msc, and open it.
  3. Go to Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Connections.
  4. Set Restrict Remote Desktop Services users to a single Remote Desktop Services session to Disabled.
  5. Double click Limit number of connections and set the RD Maximum Connections allowed to 999999.
Disable Multiple RDP Sessions
  1. Log into the server using Remote Desktop.
  2. Open the start screen (press the Windows key), type gpedit.msc, and open it.
  3. Go to Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Connections.
  4. Set Restrict Remote Desktop Services users to a single Remote Desktop Services session to Enabled.
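
If you prefer to script this instead of clicking through gpedit.msc, the same toggle lives in the registry (a sketch; fSingleSessionPerUser is the value the policy above controls, so back up the key before changing it):

# 0 = allow multiple sessions per user, 1 = restrict to a single session
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' -Name fSingleSessionPerUser -Value 0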

Wednesday, January 6, 2016

Reduce the size of the WinSxS Directory and Free up Disk Space with a New Update for Windows 7 SP1 Clients

I found this interesting link:


Friday, December 11, 2015

Windows Server 2012 Deduplication is Amazing!


The following article describes how to use Windows Server data deduplication on a Solid State Drive (SSD) that holds active Hyper-V virtual machines.

Coloring Outside the Lines Statement:
This configuration is not supported by Microsoft.  See Plan to Deploy Data Deduplication for more information.  Use these procedures at your own risk. That said, it works great for me.  Your mileage may vary.

A while back I decided to add another 224GB SATA III SSD to my blistering Windows Server 2012 Hyper-V server for my active VMs.  The performance is outstanding and it makes the server dead silent.  I moved my primary always-on Hyper-V VM workloads to this new SSD:

  • Domain Controller on WS2012
  • Exchange 2010 multi-role server on WS2012
  • TMG server on WS2008 R2
These VMs took 134GB, or 60%, of the capacity of the drive which was fine at the time.  Later, I added a multi-role Exchange 2013 server which took up another 60GB of space.  That left me with only 13% free space, which didn't leave much room for VHD expansion and certainly not enough to host any other VMs.  Rather than buy another larger and more expensive SSD, I decided to see how data deduplication performs in Windows Server 2012.
Add the Data Deduplication Feature
Data Deduplication is a feature of the File and Storage Services role in Windows Server 2012.  It's not installed by default, so you need to install it using the Add Roles and Features Wizard or by using the following PowerShell commands:
PS C:\> Import-Module ServerManager
PS C:\> Add-WindowsFeature -Name FS-Data-Deduplication
PS C:\> Import-Module Deduplication
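
To confirm the feature landed before going further, read it back:

PS C:\> Get-WindowsFeature -Name FS-Data-Deduplication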

Next, you need to enable data deduplication on the volume.  Use the File and Storage Services node of Server Manager and click Volumes.  Then right-click the drive you want to configure for deduplication and select Configure Data Deduplication, as shown below:

Configuring Data Deduplication on Volume X:
So far, this is how you normally configure deduplication for a volume.  You would normally configure deduplication to run on files older than X days, enable background optimization, and schedule throughput optimization to run at specified days and times.  It's pretty much a "set it and forget it" configuration.

From here on I'm going to customize deduplication for my Hyper-V SSD.

In the Configure Data Deduplication Settings for the SSD, select Enable data deduplication and configure it to deduplicate files older than 0 days. Click the Set Deduplication Schedule button and uncheck Enable background optimization, Enable throughput optimization, and Create a second schedule for throughput optimization.

Enable Data Deduplication for Files Older Than 0 Days

Disable Background Optimization and Throughput Optimization Schedules
Click OK twice to finish the configuration.  What we've done is enabled data deduplication for all files on the volume, but deduplication will not run in real-time or on a schedule.  Note that these deduplication schedule settings are global and affect all drives configured for deduplication on the server.

You can also configure these data deduplication settings from PowerShell using the following commands:
PS C:\> Enable-DedupVolume X:
PS C:\> Set-Dedupvolume X: -MinimumFileAgeDays 0
PS C:\> Set-DedupSchedule -Name "BackgroundOptimization", "ThroughputOptimization", "ThroughputOptimization-2" -Enabled $false
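
You can read the settings back to confirm the configuration took effect (property names per the Deduplication module):

PS C:\> Get-DedupVolume X: | Format-List Enabled,MinimumFileAgeDays
PS C:\> Get-DedupSchedule | Format-Table Name,Type,Enabled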
This configuration works around the reason Microsoft does not support data deduplication on drives that host Hyper-V VMs.  Mounted VMs are always open for writing and have a fairly high change rate.  This is why Microsoft says, "Deduplication is not supported for files that are open and constantly changing for extended periods of time or that have high I/O requirements."

In order to deduplicate the files and recover substantial disk space you need to shut down the VMs hosted on the volume and then run deduplication manually with this command:
PS C:\> Start-DedupJob -Volume X: -Type Optimization
This manual deduplication job can take some time to run depending on the amount of data and the speed of your drive.  In my environment it took about 90 minutes to deduplicate a 224GB SATA III SSD that was 87% full.  You can monitor the progress of the deduplication job at any time using the Get-DedupJob cmdlet.  The cmdlet shows the percentage of progress, but does not return any output once the job finishes.
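
For example, while the job runs you can poll it like this (column names per the Deduplication module):

PS C:\> Get-DedupJob | Format-Table Type,Progress,State,Volume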

You can also monitor the job using Resource Monitor, as shown below:

Resource Monitor During Deduplication
Here you can see that the Microsoft File Server Data Management Host process (fsdmhost.exe) is processing the X: volume.  When the deduplication process completes, the X: volume queue length will return to 0.

Once deduplication completes you can restart your VMs, check the level of deduplication, and see how much space has been recovered.  From the File and Storage Services console, right-click the volume and select Properties:

Properties of Deduplicated SSD Volume
Here we can see that 256GB of raw data has been deduplicated to 61.5GB on this 224GB SSD disk - a savings of 75%!!!  That leaves 162GB of raw disk storage free.  I could easily create or move additional VMs to this disk and run the deduplication job again.
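
The same numbers are available from PowerShell (property names per the Deduplication module; SavingsRate is the percentage shown in the GUI):

PS C:\> Get-DedupVolume X: | Format-List Capacity,FreeSpace,SavedSpace,SavingsRate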

The drive above now actually holds more reconstituted data than the capacity of the drive itself with no noticeable degradation in performance.  It currently hosts the following active Hyper-V VMs:

  • Domain Controller on WS2012
  • Exchange 2010 multi-role server on WS2012
  • TMG server on WS2008 R2
  • Exchange 2013 multi-role server on WS2012
  • Exchange 2013 CAS on WS2012
  • Exchange 2013 Mailbox Server on WS2012
A few caveats:

  • Because real-time optimization is not being performed, the VMs will grow over time as changes are made and data is added. The manual deduplication job needs to be re-run as needed to recover space.
  • Since the SSD actually contains more raw data than the drive can hold, I'm unable to disable deduplication without moving some data off the volume first.
  • Even though more VMs can be added to this volume, you have to be sure that there is sufficient free space on the volume to perform deduplication.
For even more information about Windows Server 2012 data deduplication, I encourage you to read Step-by-Step: Reduce Storage Costs with Data Deduplication in Windows Server 2012!
I hope you find this article useful in your own deployments and I'm interested to know what your experience is.  Please leave a comment below!

Lab Server Builds and Parts Lists (Dec-2015)


Build your own blistering fast Windows Hyper-V lab server starting at $900!

I'm very pleased to provide you my latest EXPTA Gen6 home lab server builds. Advances in hardware and virtualization technology have made it possible for IT Pros to build sophisticated systems that host more VMs than ever before. My Home Lab Server Survey results show that while there's still tremendous interest in 32GB entry-level servers at around $1,000, there's also a lot of interest in 64GB servers at the $1,700 price point.

Based on these survey results and for the first time ever, I'm providing three different server builds:

  • Intel Core i5 quad-core, 32GB RAM, SSD, small form-factor for $900. I can finally break the $1,000 barrier without sacrificing quality! This makes it super-easy for IT Pros to build a blistering fast Windows Hyper-V server that can run many VMs.
  • Intel Core i7 hex-core, 64GB RAM, SSD, ATX form-factor for $1,725. This build is geared toward those who want double the VM density and outstanding performance.
  • Intel Xeon E5 hex-core, 64GB RAM, SSD, ATX form-factor for $1,835. This build uses true server hardware for the ultimate in reliability and scalability.
Each of the three server builds uses components from the vendors' hardware compatibility lists to ensure the utmost in reliability. They will all run Windows Server 2012 R2 and should be "future-proof" to run the upcoming Windows Server 2016 release.

Each build uses the same storage format -- a 256GB SSD for the OS, a 500GB or 1TB SSD for regularly running high performance VMs, and a 1TB traditional hard drive for storing ISOs, software applications, and base images. Each server utilizes SATA III 6Gb/s drives and USB 3.0 ports for the fastest I/O performance.

Most survey respondents indicated that they did not need step-by-step installation guides. If you do need help, look back at my previous Gen4 and Gen5 server build articles for assistance.

As usual, I link to Amazon for components and prices. Amazon does a very good job of maintaining stock, has an excellent return policy, and most of these items are eligible for free two-day shipping via Amazon Prime. If you don't have Prime, you can sign up for a free trial here and cancel after you order the equipment if you want. Please note that it's normal for Amazon prices to fluctuate (usually down) over time.

Add an IP Block List Provider to Exchange Server 2013 Edge Transport

One of the transport agents that is installed on the Exchange 2013 Edge Transport server is the connection filter agent.
[PS] C:\>Get-TransportAgent

Identity                                           Enabled         Priority
--------                                           -------         --------
Connection Filtering Agent                         True            1
Address Rewriting Inbound Agent                    True            2
Edge Rule Agent                                    True            3
Content Filter Agent                               True            4
Sender Id Agent                                    True            5
Sender Filter Agent                                True            6
Recipient Filter Agent                             True            7
Protocol Analysis Agent                            True            8
Attachment Filtering Agent                         True            9
Address Rewriting Outbound Agent                   True            10
The connection filter agent looks at the IP address of a server that is making an SMTP connection to the Edge Transport server and decides whether to block or allow the connection. It makes the decision by looking up the IP address in a block list, allow list, or by querying a block/allow list provider.

When your Exchange organization is receiving spam you can add the IP addresses of the spammers to an IP block list on the Edge Transport server. However, this is quite inefficient, as you’ll constantly be adding new IP addresses to the list.

A more effective approach is to use one or more IP block list providers, such as Spamhaus (my personal favourite) or SpamCop.

To add Spamhaus to your connection filter agent run the following Exchange Management Shell command on the Edge Transport server.

[PS] C:\>Add-IPBlockListProvider -Name Spamhaus -LookupDomain zen.spamhaus.org -AnyMatch $true -Enabled $true -RejectionResponse "IP address is listed by Spamhaus"
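
You can verify that lookups against the provider are working with Test-IPBlockListProvider; 127.0.0.2 is the conventional always-listed DNSBL test address:

[PS] C:\>Test-IPBlockListProvider -Identity Spamhaus -IPAddress 127.0.0.2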

Note that you can change the rejection message that is sent back to the sender.
[PS] C:\>Set-IPBlockListProvider Spamhaus -RejectionResponse "IP address is listed by Spamhaus Zen."

You can add multiple providers, just make sure you check their guidance on whether there are issues adding multiple lookup domains from the same provider. Also make sure you check their terms and conditions and comply with any commercial usage policies they have.
[PS] C:\>Get-IPBlockListProvider
Name                                    LookupDomain                            Priority
----                                    ------------                            --------
Spamhaus                                zen.spamhaus.org                        1
SpamCop                                 bl.spamcop.net                          2
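
Providers are queried in priority order. If you later decide a different provider should be checked first, adjust it with the -Priority parameter:

[PS] C:\>Set-IPBlockListProvider SpamCop -Priority 1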
After the block list provider has been in place for a day or two you can see the results by running the Get-AntispamTopRBLProviders.ps1 script that ships with Exchange.
[PS] C:\Program Files\Microsoft\Exchange Server\V15\scripts>.\get-AntispamTopRBLProviders.ps1

Name     Value
----     -----
Spamhaus    12

Wednesday, November 25, 2015

How to Defrag an Exchange 2010 Mailbox Database




Exchange Server 2010 mailbox databases grow in size as the data within them grows. But they will never shrink when data is removed from them.

For example, if you have a 20GB mailbox database file and move 4GB worth of mailboxes to another database, the file will remain at 20GB in size.

However, the database itself will have 4GB of “white space” in it, which is space that is available for new data to be written without growing the size of the file.

Your options to reclaim that space are to either:

  • Create a new mailbox database and move all the mailboxes to that database
  • Perform an offline defrag of the existing database to shrink the file

Each option has pros and cons. An offline defrag involves an outage for all users on that database, but may be more convenient if there is no additional storage available to allocate to the Exchange server to hold the new database.

On the other hand, a mailbox migration has fewer risks and can be less disruptive as a whole, but it generates a lot of transaction logging that needs to be kept under control, so it may take longer (i.e. several nights/weekends to migrate) as opposed to just one outage for a defrag.

Determining Free Space in an Exchange 2010 Mailbox Database

In Exchange 2010 you can see how big your mailbox databases are, and how much white space they have, by running the following command in the Exchange Management Shell.

[PS] C:\>Get-MailboxDatabase -Status | ft name,databasesize,availablenewmailboxspace -auto

Name             DatabaseSize                    AvailableNewMailboxSpace
----             ------------                    ------------------------
MB-HO-01         18.26 GB (19,604,766,720 bytes) 9.544 GB (10,247,766,016 bytes)
MB-HO-02         15.63 GB (16,785,670,144 bytes) 3.696 GB (3,968,761,856 bytes)
MB-HO-Archive-01 648.1 MB (679,542,784 bytes)    134.6 MB (141,164,544 bytes)

In the example above the database MB-HO-01 is 18.26GB in size but has 9.544GB of white space due to archiving that has occurred. If you want to reclaim that disk space then the file can be shrunk by using eseutil to defrag it.

In this example I will demonstrate how to defrag a mailbox database for a single Exchange 2010 Mailbox server that is not a member of a Database Availability Group.

Do not follow the procedure in this document if your Mailbox server is a member of a DAG. Before you defrag any mailbox database please read and understand the pros and cons of this operation and make the best decision for your specific situation.

Preparing to Defrag an Exchange 2010 Mailbox Database

The first thing to be aware of when planning a defrag is that you can only perform this task when the database is dismounted. This means that users with mailboxes on that database will not be able to access their email while you are defragging it.

The second thing to be aware of is that you need some available disk space to perform the defrag. This is because a new file is written during the defrag process, so for a period of time both the old and new files will exist, as well as a temporary file that eseutil creates.

So to plan for an Exchange 2010 mailbox database defrag you need an amount of free space equivalent to 1.1x the predicted size of the new file.

In this example that would be:

18.26 – 9.544 = 8.7

8.7 x 1.1 = 9.57

In other words, I’ll need about 10GB of free disk space to run this defrag. Since I don’t have that much free space on the same drive as the database I will need to specify a different temporary location when I run eseutil. This can be another local drive or a UNC path; just be aware that if you are using a UNC path the defrag will take longer due to network latency.
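
If you'd rather not do the arithmetic by hand, the same estimate can be scripted (a sketch; DatabaseSize and AvailableNewMailboxSpace are ByteQuantifiedSize values, which expose a ToBytes() method):

[PS] C:\>$db = Get-MailboxDatabase MB-HO-01 -Status
[PS] C:\>$needed = ($db.DatabaseSize.ToBytes() - $db.AvailableNewMailboxSpace.ToBytes()) * 1.1
[PS] C:\>"{0:N2} GB of temp space needed" -f ($needed / 1GB)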

Before proceeding you should be sure that you have a good, working backup that you can use for recovery if something goes wrong during the defrag.

Using ESEUtil to Defrag an Exchange 2010 Mailbox Database

Open the Exchange Management Shell and navigate to the folder containing the database file.

cd D:\Data\MB-HO-01

Dismount the mailbox database.

Dismount-Database MB-HO-01
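
Confirm the database really is dismounted before running eseutil:

Get-MailboxDatabase MB-HO-01 -Status | Format-Table Name,Mounted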

Now run ESEUtil to defrag the file.

[PS] D:\Data\MB-HO-01>eseutil /d MB-HO-01.edb /t\\testserver\defrag\temp.edb

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 14.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating DEFRAGMENTATION mode...
Database: MB-HO-01.edb

Defragmentation Status (% complete)

0 10 20 30 40 50 60 70 80 90 100

Moving '\\testserver\defrag\temp.edb' to 'MB-HO-01.edb'...
File Copy Status (% complete)

0 10 20 30 40 50 60 70 80 90 100

It is recommended that you immediately perform a full backup
of this database. If you restore a backup made before the
defragmentation, the database will be rolled back to the state
it was in at the time of that backup.

Operation completed successfully in 3788.218 seconds.

Mount the database again.

Mount-Database MB-HO-01

You can now see that the file is smaller, and all the white space is gone.

Get-MailboxDatabase -Status | ft name,databasesize,availablenewmailboxspace -auto

Name             DatabaseSize                    AvailableNewMailboxSpace
----             ------------                    ------------------------
MB-HO-01         8.328 GB (8,942,190,592 bytes)  5.219 MB (5,472,256 bytes)
MB-HO-02         15.63 GB (16,785,670,144 bytes) 3.696 GB (3,968,761,856 bytes)
MB-HO-Archive-01 648.1 MB (679,542,784 bytes)    134.6 MB (141,164,544 bytes)

As ESEUtil noted when it completed, you should run a full backup of the database at your next backup window.