Thursday, April 30, 2015

Michael's meanderings: TinyMonitorService

This is my first binary release of anything.

TinyMonitorService is exactly what it sounds like. It is a service that checks a few key characteristics of the computer on which it is installed: processor utilization, memory utilization, and disk-space utilization. It also reports on a subset of the event logs on that computer.

The event logs that are searched are:

System
Application
DNS Server
Directory Service
File Replication Service

Errors in the event logs are sent via email to a specified address. The file log-excludeditems.txt allows you to identify specific log entries that should not be sent via email. A few entries are so common that they are ignored outright, which makes one wonder why Microsoft classified those particular items as errors at all!

TinyMonitorService began as a PowerShell script. However, the overhead associated with PowerShell is fairly high and running it as a service, while possible, is not very practical.

I had a number of clients who needed basic monitoring, but did not own a monitoring solution, and for whatever reason, could not utilize one of the other free solutions available. Initially, it was installed exclusively on Exchange Servers, but after a few enhancements, it became a good-enough solution for most servers.

Why don't I release the source? Quite frankly, I'm not happy with it. I'm somewhat embarrassed by it. The service is written in C#, almost a line-for-line translation of the PowerShell script, plus the things that are necessary to create a Windows service (which came from MSDN). It's pretty damn ugly.

Expand the contents of TinyMonitorService.zip into C:\TinyMonitorService. Then execute the configuration PowerShell script in that folder:

powershell.exe -file TinyMonitorConfig-v2.ps1

After you have done this, create the service as described in ReadMe.txt in that folder.
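ReadMe.txt is the authoritative source for that step, but if you just want the general shape of it, a typical service registration with sc.exe looks like the sketch below. The service name and executable name here are assumptions on my part; substitute whatever ReadMe.txt actually specifies. Note that sc.exe requires the space after each equals sign.

sc.exe create TinyMonitorService binPath= "C:\TinyMonitorService\TinyMonitorService.exe" start= auto
sc.exe start TinyMonitorService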

If you have questions, please email me.

Please follow me at @essentialexch on Twitter.



from Exchange News Full Article

Tony Redmond: Formatting the “Office 365 for Exchange Professionals” eBook

The last stage in our preparations to launch “Office 365 for Exchange Professionals” on May 4 at the Microsoft Ignite conference has centered on formatting the eBook. Those who buy a copy will be able to download PDF and EPUB versions. The PDF is best if you want to read on a PC while EPUB is supported by iPads and other eBook readers.

Word for Windows 2013 does an excellent job of generating PDF output and the only thing we have to check is how the text, graphics, and other elements flow from page to page. For example, it’s generally not good to split a table across two pages or to start a new section a line or two from the bottom of a page.

EPUB is a different matter. We generate the EPUB files using Calibre, a really nice open source eBook manager. Some might not like its user interface, but I haven't run into any problems. I like programs that just work, and Calibre falls into this category, provided of course that the input you give it is reasonable. We put the Word document through several runs to produce an EPUB file, checking the output each time to detect issues that we had to fix. Here is the list of problems that we encountered.

Graphics with bad anchors: Word uses “anchors” to fix where graphics and other objects should appear on a page. Typically you want a graphic to move in line with the text to which it refers and the anchor is usually fixed to the figure caption. However, if you use Word as a simple word processor and only ever create documents of a few pages, you probably have never heard about anchors, nor do you worry about text flowing to the top and bottom of a graphic or around a graphic. Our mixed team of authors had varying degrees of experience with Word and we generated a variety of anchors in different places, many of which were absolutely wrong. You find this out when generating output files as graphics then appear in the strangest of places. The only solution is to go through all the chapters and fix the anchors correctly.

Anchors in a Word document

Oh, and by the way, always use the Insert Picture command to insert a graphic file (JPEG, BMP, etc.) in Word. You can paste a graphic into Word and this will format and print properly for PDF, but the graphics are likely to be dropped when you generate files in an eBook format.

The next thing to avoid is complex tables. Word allows you to get very creative with the way you lay out rows and columns in tables and the colors used to highlight table headings and so on. But complex tables have a nasty habit of being thoroughly munged during conversion, so it's best to stay with simple tables that you can be sure will be supported on an eBook reader. For example, the table shown below has a reasonable chance of surviving the transfer to an eBook. Its sole problem is its width, as eBook screens tend to be narrower than an A4 or letter-size page. For this reason, narrow tables usually produce better results, so when we go through the book to apply final formatting, we check the width of tables and adjust as necessary.

A simple (but wide) table

It’s common to construct books from a set of files, each of which is a chapter or other significant section. If you use fields to track figure and table numbers such as Figure 3-1 and Table 7-2, you can run into problems when the time comes to combine everything. It all looks OK when you bring all the files together until you print the file. At that time the fields will be updated and you’ll end up with Figure 3-34 where it should be Figure 3-1. What’s happened here is that all the figures in chapters 1 and 2 have been numbered consecutively so the first figure in chapter 3 starts off at 34 instead of 1.

The fix is easy. Select all the text in the document (Ctrl+A) and then lock the fields (Ctrl+F11). The fields will now maintain whatever value they have and won't renumber when printing (and outputting to PDF is a form of printing). Fields can be updated selectively by being unlocked (Ctrl+Shift+F11) and then updated (F9).

During the final formatting of the book, we look through pages to make sure that the layout is attractive. We don’t want tables or code examples to split across pages and it always looks better if a section starts on a fresh page. Page breaks are inserted to create the necessary flow and everything looks great in the PDF output. However, those page breaks create a new problem for eBooks because the flow on these devices is more dynamic than in a PDF. We therefore have to remove the page breaks before generating the eBook output files. Fortunately, there’s a reasonably easy way to remove the breaks. In all cases, inserting extra carriage returns to format a document is a bad idea as the extra returns invariably get in the way of formatting.

Section break near the bottom of a page

Our book is about Exchange, so it is reasonable to anticipate that PowerShell is mentioned frequently. In fact, we have tons of PowerShell examples and have formatted them in 8-point Consolas to clearly identify code. But here's the thing: most eBook readers are limited to the fonts they support, and all our nicely formatted PowerShell examples that look so good in the PDF file don't look as good in the EPUB format, and look even worse if you output MOBI (for Amazon Kindle). The newer AZW3 format supports embedded fonts but, very oddly, you can't upload AZW3 files to Amazon's Kindle publishing platform.

We've played around with various approaches, but the PowerShell sample code doesn't look as good as we want in the EPUB files. I'm sure it is possible to get around this problem with other tools, but we don't have the time to pursue that option at present. Another route might be to outsource the problem to a company that specializes in preparing output for eBook readers, but the cost of processing a 22MB, 630-page, 240,000-word document with 347 graphics and many PowerShell examples is likely to run to several thousand dollars. Our EPUB output is not that bad!

Using the free PC EPUB reader to review Office 365 for Exchange Professionals

One thing we did find out and fix is that it is a very bad idea to include PowerShell examples in the format that we use (enclosed in 1×1 tables to separate them from text) inside bulleted lists, as extra bullets can appear when the Word document is converted to EPUB. Thanks to Justin Harris for helping us chase this problem down to a good resolution.

These are just some of the issues we have run into in the last week. The good news is that the EPUB output has improved steadily and now looks quite good, especially on an iPad. But it also looks quite good on a PC using the free EPUB reader.

We have been asked if we will publish through Amazon in addition to selling books through exchangeserverpro.com. Well, we have done the work to understand the publishing process and what we need to generate both MOBI and AZW3 format files. Apart from the substantial fees charged by Amazon, we have two major problems to overcome. First, we are not happy with the way that PowerShell code examples are output. That problem might go away when Amazon allows AZW3 files to be uploaded.

The second issue is how Amazon deals with book updates. Given the dynamic nature of Office 365, we have a plan to produce regular updates for the book and are considering September as a good target for the first update. Given the work required to update the content, we would like to charge a fee for the update. We have not settled on the details yet, but those who have purchased a previous version will probably receive a large discount. Amazon doesn't seem to have a mechanism to allow this to happen. Book updates use a mechanism that is suitable for updates to a novel, where you might want to fix some formatting problems or misspellings, but we might want to rewrite large sections to reflect new functionality or information. It seems like the only way to do this on Amazon is to publish a brand new book. So we would have "Office 365 for Exchange Professionals (May 2015 edition)", "Office 365 for Exchange Professionals (September 2015 edition)", and so on. Pushing out new editions is OK, but we wouldn't be able to offer a discount to those who had bought before.

We don’t intend to use Amazon for now. We will track developments and make sure that we can publish if necessary and it’s possible that we have the wrong end of the stick or misunderstand the process. If all our problems disappear, then we will reconsider. But life is seldom that simple.

In the interim, we are locked and loaded and ready for May 4. Site subscribers have access to a discounted version of the book now (the files won’t be available until Monday), but if you’re interested, you can head over to exchangeserverpro.com to buy your copy!

Follow Tony @12Knocksinna




from Exchange News Full Article

Exchange Team Blog: Troubleshooting High CPU utilization issues in Exchange 2013

Introduction

In Exchange support we see a wide range of support issues. Few can be more difficult to troubleshoot than performance issues. Part of the reason for that is the ambiguity of the term "performance issue": it can manifest as anything from random client disconnects to database failovers or slow mobile device syncing. One of the most common performance issues we see is one where the CPU is running higher than expected. "High CPU" can be a bit of an ambiguous term as well. What exactly is high? How long does it occur? When does it occur? All of these are questions that have to be answered before you can really start getting to the cause of the issue. For example, say you consider 'high' to be 75% CPU utilization during the day. Are you experiencing a problem, are databases inadequately balanced, or is the server just undersized? What about a 100% CPU condition? Does it happen for 10 seconds at a time or 10 minutes at a time? Does it only happen when clients first log on in the morning, or after a failover? In this article I'll go into some common causes of high CPU utilization issues in Exchange 2013 and how to troubleshoot them.

At this point I should note that this article is about Exchange 2013 specifically, not earlier versions. High CPU issues across versions do have some things in common; however, much of the data in this article is specific to Exchange 2013. There are some fairly significant differences between Exchange 2010 and Exchange 2013 that change the best practices and troubleshooting methodology, including completely different megacycle requirements, different versions of the .NET Framework, and a different implementation of .NET Garbage Collection. Therefore, I will not be covering Exchange 2010 in this post.

Common Configuration Issues

Those of us who have worked enough performance issues start by following a list of things to check first. This was actually the main motivation for a TechNet article we recently published called Exchange Server 2013 Sizing and Configuration Recommendations. I'm not going to duplicate everything in the article here; I suggest you read it if you are interested in this topic. I will, however, touch on a few of the high points.

.NET Framework version

Exchange 2013 runs on version 4.5 of the .NET Framework. The .NET team has published updates to .NET 4.5, released as versions 4.5.1 and 4.5.2. All of these versions are supported on Exchange 2013. However, I would strongly recommend that 4.5.2 be the default choice for any Exchange 2013 installation unless you have very specific reasons not to use it. There have been multiple performance related fixes from version to version, some of which impact Exchange 2013 fairly heavily. We've seen more than a few of these in support. You can save yourself a lot of trouble by upgrading to 4.5.2 as soon as possible, if you are not already there. It should also be noted that 4.5.2 is the latest version as of the publishing of this blog post. Future releases will contain even more improvements so be sure to always check for the latest available version. You can read more about the different versions of the .NET Framework here.

Power Management

I started losing count a while back of the number of high CPU cases I encountered that were caused by misconfigured power management. Power management sounds like a good thing, right? In many cases it is. Power management allows the hardware or the OS to, among other things, throttle power to the CPU and turn off an idle network card when it isn't in use. On workstations and perhaps on certain servers this can be a good thing. It saves power, lowers the electric bill, gives you a nice low carbon footprint, and makes vegetables taste good. So why is this a bad thing? Consider this: you have a server running consistently at about 80% CPU throughout the work day. You've run the sizing numbers over and over, and you should be closer to 55%. You don't see any unusual client activity. Everything looks great except the CPU utilization. Now what if you were to find out that your 2.4GHz cores are only operating at 1.2GHz most of the time? That might make a difference in your reported CPU utilization. For Exchange the guidance is straightforward: if hardware power management is an option, don't use it. You should allow the operating system to manage power, and you should always use the "High performance" power plan in Windows. Even if you aren't using hardware-based power management, just having the power plan set to the default "Balanced" can be enough to throttle the CPU power.
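As a quick sanity check, you can confirm and correct the active power plan from an elevated command prompt (SCHEME_MIN is the built-in Windows alias for the High performance plan):

powercfg /getactivescheme
powercfg /setactive SCHEME_MIN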

How do you know if this is happening? On a physical server the answer is easy. There is a counter in performance monitor called "Processor Information(_Total)\% of Maximum Frequency". This should always be at 100. Anything lower indicates that the CPU is being throttled, which is usually a result of some kind of power management, either at the hardware or OS level. On a virtual server things get a bit more complicated. When the Exchange server is a VM guest, it is difficult to completely trust the CPU performance numbers; if power is being throttled at the VM host layer, it will not be overly apparent to the guest OS. You need to use the performance monitoring tools of the VM host to check for processor power throttling.
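If you would rather spot-check this from PowerShell than open perfmon, Get-Counter can sample the same counter (a minute of five-second samples shown here; any interval works):

Get-Counter -Counter '\Processor Information(_Total)\% of Maximum Frequency' -SampleInterval 5 -MaxSamples 12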

Screenshot of CPU throttling in Perfmon.

Health Checker

We've recently published a PowerShell script on the TechNet gallery that makes checking for common configuration issues easy. The script reports Hardware/Processor information, NIC settings, Power plan, Pagefile settings, .NET Framework version, and some other items. It also has a Client Access load balancing check (current connections per server) and a Mailbox Report (active/passive database and mailbox total per server). It can be executed remotely and can run against all servers in the Organization at once, to save the trouble of having to check all of these settings individually on each server. The TechNet gallery posting contains more details on the script as well as some common usage syntax.

Sizing

After we've ruled out the common causes from the previous section, we have to move on to sizing. Perhaps the CPU is running high because the server doesn't have enough megacycles to keep up with the load being placed on it. Sizing Exchange 2013 is covered in multiple blog posts. If you want a good understanding of sizing, I suggest reading Jeff Mealiffe's post Ask the Perf Guy: Sizing Exchange 2013 Deployments. If you haven't done it already, you should also run through Ross Smith IV's sizing calculator. Most deployments have utilized the calculator for planning and sizing. I'm a support guy, so I'm approaching this topic from the angle of troubleshooting an existing environment. In the world of troubleshooting we don't need to size and plan a deployment, but we do need to know enough about it to know whether a performance problem is simply an issue of being undersized. Troubleshooting a high CPU issue with no knowledge of sizing can at best be difficult and many times just not possible. CPU sizing comes down to this question: do I have enough available megacycles to handle the load?

Easy enough, right? Not quite. How many available megacycles you have is fairly straightforward to determine, although it does require a bit of math. The basic formula (taken directly from Jeff's sizing blog) is as follows:

Available megacycles per core = (target platform per-core score / baseline per-core score) x MHz per core of baseline platform

Two of these numbers are already known. The MHz per core of the baseline platform is always 2000, and the baseline per-core score value is always 33.75. Again, this is specific to Exchange 2013 only. All you need now is your target platform's per-core score value. This value is the SPECint 2006 rating of your server divided by the total number of physical cores. If you don't want to look the rating up on the SPEC website, you can find it with the Exchange Processor Query Tool. Say our SPECint 2006 rating on a 12-core server is 430, giving us a per-core rating of 35.83 (430/12). The formula now looks like this:

Available megacycles per core = (35.83 / 33.75) x 2000 = 2123.26

2123.26 megacycles per core, times 12 cores, gives you 25,479 total megacycles available. Now we have to find out the required megacycles. This is a bit more complicated. It depends on the number of active and passive mailboxes you have, along with the message profile (messages sent/received per day) and any multipliers that may be required by 3rd party products. Luckily, there is a script to help with this as well.
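Before turning to that script, the available-megacycles half of the math is easy to sanity-check in PowerShell. Here is a minimal sketch using the example values from above (the variable names are mine; plug in your own SPECint rating and core count):

$baselineMHzPerCore   = 2000     # fixed baseline value for Exchange 2013 sizing
$baselinePerCoreScore = 33.75    # fixed baseline value for Exchange 2013 sizing
$specInt2006Rating    = 430      # example SPECint 2006 rating of the target server
$physicalCores        = 12
$perCoreScore      = $specInt2006Rating / $physicalCores                             # 35.83
$megacyclesPerCore = ($perCoreScore / $baselinePerCoreScore) * $baselineMHzPerCore   # ~2123
"{0:N0} total available megacycles" -f ($megacyclesPerCore * $physicalCores)         # ~25,479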

The Exchange 2013 CPU Sizing Checker will run these numbers for you. You can pass in all of the profile information but it is easier to just import the values directly from your sizing calculator results. Syntax can be found on the download page.

Screenshot of the Sizing Checker.

Version 7.2 of the Sizing Calculator also lets you get an idea of the expected CPU utilization. The difference is that it calculates expected CPU utilization based on the number of active and passive mailboxes planned, taking the values from the Input page of the spreadsheet (as opposed to querying the mailbox server for a current total). The new features in version 7.2 provide insight into what to expect from a CPU utilization standpoint in several scenarios: Normal Runtime (no failures, evenly distributed databases), Single Failure (a single server in the datacenter has failed, resulting in database copy activation), Double Failure (two servers in the datacenter have failed, resulting in database copy activation), Site Failure (a datacenter has failed, requiring failover to another datacenter), and Worst Failure (the worst possible failure based on the design requirements for the environment).

Message Profile and Multiplier

By now you're probably saying "this is nice, but how do I know my message profile and multiplier numbers?" Great question. The message profile numbers on a live production deployment can actually be determined by yet another great script from Dan Sheehan called Generate-MessageProfiles.ps1, available on TechNet Gallery. This script will parse your transport logs and give you an actual number of messages sent/received per day. In addition to publishing the script, Dan has written a blog post that explains the script and its usage in detail.

That works for message profiles. What about the multiplier? This is the tough one. Some 3rd party vendors will actually give you a suggested multiplier for their software. Sometimes this information is not available. In this case you can use the previously referenced Exchange 2013 CPU Sizing Checker script to reverse-engineer the multiplier. Let's say you run the script with a multiplier of 1.0. It gives you a CPU number of 50%, which is the average CPU usage you can expect from the Exchange-specific processes during the busiest hours of the day. You, however, are seeing a value closer to 65%. You can run the script again, modifying the multiplier, until you get a result close to 65%. Once you do, that gives you an idea of what multiplier number you should be using in your sizing plans.

As previously mentioned, version 7.2 of the sizing calculator has the ability to predict CPU values based on your planned deployment numbers. This means that you can modify the “Megacycles Multiplication Factor” in the profile settings on the calculator’s Input tab and view the results in the “CPU Utilization/Dag” section on the Role Requirements tab to get an idea of which multiplier value suits your deployment best. In most cases this is preferable to using the script as the calculator is faster and designed around helping you plan your deployment (as opposed to the script which is more for troubleshooting).

Oversizing

Contrary to what you may think, it is possible to oversize your servers from a CPU standpoint. This doesn't come down to raw processing power. It might be an inefficient use of hardware in some cases to deploy on servers with high core counts, but too much processing power isn't the problem. When I talk about oversizing I'm talking less about the available megacycles and more about the number of cores. Exchange 2013 was developed to run on commodity-type servers. Testing is generally done on servers with 2 sockets and about 16-20 cores. This means that if you deploy on servers with a much larger core count you may run into scalability issues. Core count is used to determine settings at the application level that can make a difference in performance. For example, in processes that use Server mode Garbage Collection we create one managed heap per core (you can read in detail about Garbage Collection in .NET 4.5 here). This can significantly increase the memory footprint of the process, and it goes up the more cores you have. We also use core count to determine the minimum number of threads in the threadpool of many of our processes. The default is 9 per core; if you have a 32-core server, that's 288 threads. If, for example, there is a sudden burst of activity, you could have a lot of threads trying to do work concurrently. Some of the locking mechanisms for thread safety in Exchange 2013 were not designed to work as efficiently in high core count scenarios as they do in the recommended core count range. This means that under certain conditions, having too many cores can actually lead to a high CPU condition. Hyper-Threading can also have an effect here, since a 16-core Hyper-Threaded server will appear to Exchange as having 32 cores. This is one of several reasons why we recommend leaving Hyper-Threading disabled. These are just a few examples, but they show that staying within the recommendations made by the product group when it comes to server sizing is extremely important. Scaling out rather than up is better from a cost standpoint, a high availability standpoint, and a product design standpoint.

Single Process Causing High CPU

Generally if you have a CPU throttling issue or are undersized, you will see high CPU that will not seem to be caused by a single process. Rather, the server just looks "busy". The CPU utilization is high, but no single process appears to be the cause. There are times though where a single process can be causing the CPU to go high. In this section we will go over some tricks with performance monitor to narrow down the offending process and dig a bit into why it may be happening.

Perfmon Logs

Perfmon is great, but what if you were not capturing perfmon data when the problem happened? Luckily, Exchange 2013 includes the ability to capture daily performance data, and this feature is turned on by default. The logs are usually located in the Exchange Server installation folder under "V15\Logging\Diagnostics\DailyPerformanceLogs". These are binary log (*.blg) files that are readable by perfmon.exe. To review one, launch perfmon, go to Monitoring Tools\Performance Monitor, click the "View Log Data" button, select "Log Files" under Data Source, click Add, and browse to the file you wish to view. The built-in log capturing feature has to balance between gathering useful data and not taking up too much disk space, so it does not capture every single counter and it only captures on a one-minute interval. In most cases this is enough to get started. If you find you need a more robust counter set or a shorter sample interval, you can use ExPerfWiz to set up a more custom capture. A tip here: if you want to collect this information regularly and from multiple servers, check out this blog post.
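If you prefer to skim a daily log from PowerShell before opening it in perfmon, Import-Counter reads .blg files directly. The file name below is a made-up example; point the path at an actual file in the DailyPerformanceLogs folder on your install drive:

$log = 'C:\Program Files\Microsoft\Exchange Server\V15\Logging\Diagnostics\DailyPerformanceLogs\ExchangeDiagnosticsDailyPerformanceLog_sample.blg'
# Show the five busiest one-minute samples of overall CPU in the capture
Import-Counter -Path $log -Counter '\Process(_Total)\% Processor Time' |
    Select-Object -ExpandProperty CounterSamples |
    Sort-Object CookedValue -Descending |
    Select-Object -First 5 Timestamp, CookedValue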

Perfmon Analysis

The very first counter I load when analyzing a perfmon log for a high CPU issue is "Process(_Total)\% Processor Time". It gives you an idea of the total CPU utilization for the server. This is important because, first and foremost, you need to make sure the capture contains the high CPU condition. With this counter a CPU utilization increase should be easy to spot. If it was a brief burst, you can then zoom into the time that it happened to get a closer look at what else was going on. Note the difference between Process(_Total) and Processor(_Total): Processor is based on a scale of 0-100 (overall CPU usage in percent), while Process(_Total) is based on the core count of the server. If you have a 16-core server, a 100% CPU spike would have a value of 1600. Either one can be used to start, as long as you realize the difference. If you are looking at a perfmon capture and don't know the total number of cores, just look at the highest number in the instances window under the Processor counter. It is a zero-based collection, each number representing a core; if 23 is the highest number, you have 24 cores. During this phase of troubleshooting it may be best to change the vertical scale of the perfmon window. To do this, right-click in the window, choose Properties, select the Graph tab, and change the maximum to core count x 100. In our 16-core example you would change it to 1600.

Now that you know that there was a high CPU condition and when it occurred, we can start narrowing down what caused it. The next thing to do is load all instances under "Process\% Processor Time". You can ignore "_Total" as we're already using it as our measurement for overall CPU utilization. You can also ignore Idle for now, as it will inversely mirror "_Total". Look for any specific process that goes up in tandem with the overall CPU utilization. If there isn't one in particular, you don't have a single process causing the issue. This tends to point to some of the topics covered in the previous sections, such as sizing, load, and CPU throttling.

Mapping w3wp instances to application pools

Let's say you do find one particular process that is causing the high CPU condition. Suppose that the process has the name "w3wp#1". What exactly are you supposed to do with that? Exchange runs multiple application pools in IIS for the various protocols it supports. We need to find out which application pool "w3wp#1" maps to. Luckily, perfmon has the information we need; you just need to know how to find it.

The first thing you want to do is load the counter "Process(w3wp#1)\ID Process". This will give you the process ID (PID) of that w3wp instance. Let's say it's 22480. With that information we go back to the counter load screen and look under "W3SVC_W3WP". Click on any of the counters. Below you will see a window that contains entries with the format PID_AppPool. In our example it says 22480_MSExchangeSyncAppPool. That tells us that w3wp#1 belongs to the Exchange ActiveSync application pool. Now we know that ActiveSync is the cause of our high CPU. At this point you can remove all of the counters from your view except for "Process(w3wp#1)\% Processor Time" as the extra clutter is no longer needed. You may also want to set the vertical scale back to 100 and right click on the counter and choose "Scale Selected Counters".

I should also note here that due to managed availability health checks, sometimes an application pool is restarted. When this happens the PID and the w3wp instance may change. Pay attention to the “Process(w3wp*)\ID Process” counter for the worker process you are interested in. If this value changes that means the process was recycled, the PID changed, and perhaps the w3wp instance as well. You will need to verify if the instance changed after the process recycled to make sure you are still looking at the right information.
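If the server is still in the high CPU state, you can also cross-check the PID-to-application-pool mapping outside of perfmon with the IIS appcmd utility:

C:\Windows\System32\inetsrv\appcmd.exe list wp

which prints one line per running worker process, along the lines of the following (the PID is, of course, illustrative):

WP "22480" (applicationPool:MSExchangeSyncAppPool)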

What is the process doing?

Now that we've narrowed it down to w3wp#1 and know that ActiveSync is the cause of our issue, we can start to dig into troubleshooting it specifically. These methods can be used on multiple other application pools, but this example will be specific to ActiveSync. The most common thing to look for is a burst in activity. We can load up the counter "MSExchangeActiveSync\Requests /sec" to see if there was an increase in requests around the time of the problem. Either way, we then know whether increased request traffic led to the CPU increase. If it did, we need to find the cause of the traffic. It's a good idea to check the counter "MSExchange IS Mailbox(_Total)\Messages Delivered /sec". If this ticks up right before the CPU increase, it tells you that there was a burst of incoming messages that likely triggered it. You can then review the transport logs for clues. If it wasn't message delivery, it may have been some mobile device activity. In this case you can use Log Parser Studio to analyze the IIS logs for trends in ActiveSync traffic.
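To watch those two counters together in real time, a quick Get-Counter sample works. The counter paths below are as written above; the exact spacing of Exchange counter names can vary slightly between builds, so copy the paths from perfmon if PowerShell reports them as not found:

Get-Counter -Counter '\MSExchangeActiveSync\Requests/sec','\MSExchange IS Mailbox(_Total)\Messages Delivered/sec' -SampleInterval 5 -MaxSamples 12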

Garbage Collection (GC)

If there was no noticeable increase in request traffic or message delivery before the increase, there may be something inside the process causing it. Garbage collection is a common trigger. You can look at ".NET CLR Memory(w3wp#1)\% Time in Garbage Collection". If it sustains higher than 10% during the issue, it could trigger high CPU. If this is the case, also look at ".NET CLR Memory(w3wp#1)\Allocated Bytes /sec". If this counter sustains above 50,000,000 during the high CPU condition and is coupled with an increase in "% Time in Garbage Collection", it means the Garbage Collector may not be able to keep up with the load being placed on it. I want to note very clearly here that if you encounter this, Garbage Collection throughput usually isn't the root of the problem; it is another symptom. Increases of this type usually indicate abnormal load is being placed on the system. It is much better to find the root cause of this and eliminate it rather than to start changing the garbage collector settings to compensate.

RPC Operations/sec

This is perhaps the best counter we have in mapping client activity to high CPU. You can load up "MSExchangeIS Client Type(*)\RPC Operations /sec" to get an idea of how many RPC requests are being issued against the Information Store by client type. Usually the highest offenders will be momt (Requests from the RPC Client Access Service, usually Outlook MAPI clients), contentindexing, webservices (EWS), and transport (mail delivery). You really need to have a baseline of your environment to know what "normal" is but you can definitely use this counter to compare to the overall CPU utilization to see if client requests are causing a CPU utilization increase.
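A quick way to see which client types are the top RPC consumers at any given moment is to sample and sort that same counter from PowerShell:

(Get-Counter -Counter '\MSExchangeIS Client Type(*)\RPC Operations/sec').CounterSamples |
    Sort-Object CookedValue -Descending |
    Select-Object -First 5 InstanceName, CookedValue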

Log Parser Studio (LPS)

If I were stuck on a desert island and had to troubleshoot Exchange performance issues for food, and could only bring two tools, they would be perfmon and Log Parser Studio. LPS contains several built-in queries to help you easily analyze traffic for the various protocols used by Exchange. You can use it to see which devices generate the most ActiveSync hits per day, EWS requests by client type, RPC Client Access MAPI client versions by percentage, and many others. The built-in queries are great for just about anything you'd need to find out; if you need more and know a bit of TSQL, you can even write your own. LPS is covered in depth in Kary Wall's blog post. If you get to the point where you have narrowed down the client type causing your issue, LPS is usually the next step.

Conclusion

Performance is a vast topic and I don't expect this blog post will make you an expert immediately, but hopefully it has given you enough tips and tricks to start tracking down Exchange 2013 high CPU issues on your own. If there are other topics you would like to see us blog about in the realm of Exchange performance please leave feedback below. Happy troubleshooting!

Marc Nivens



from Exchange News Full Article

MSExchange.org: Deploying an Exchange 2013 Hybrid Lab Environment in Windows Azure (Part 26)

In this article we will verify the hybrid configuration that has been configured on the on-premises side.

from Exchange News Full Article

msexchange.org: GSX, the watchful eye for all IT administrators!



from Exchange News Full Article

Wednesday, April 29, 2015

Michael's meanderings: Forcing a Server's Active Directory Site

In January 2010 I wrote a blog post Where oh where, did my AD site go...[Alternate title: It's the DNS, stupid.]. In that blog post I discussed a situation where an incorrect DC locator record could cause a server to report itself as a member of an improper Active Directory site. That can cause a number of issues with Exchange.

I am in the process of migrating that same customer to Exchange 2013 (the prior blog post was written when migrating a particular customer to Exchange 2010).

The first Exchange 2013 server was brought online after the OS was installed. I went through the normal process of installing the Exchange 2013 role and feature prerequisites, installed UCMA 4.0, etc. When it came time for the first actual step in installing Exchange 2013, PrepareSchema, setup.exe reported that the Schema Master FSMO was not in the same Active Directory site as the computer running setup.

Huh?

Of course it was. I know this requirement and made certain it was satisfied! The Schema Master FSMO was in the AD site named "10-129-59". The new server was in the same subnet.

However, when executing "nltest /dsgetsite", nltest reported that the AD site was "Default-First-Site-Name". Uh, wow.

I immediately reviewed AD Sites and Services to ensure that AD Subnets and AD Sites were properly configured. Indeed, they were. Next, I reviewed the customer's DNS, in detail, as described in the above blog post. The DNS was correct.

Finally, with little hope of success, I tried resetting the secure channel to the proper FSMO DC. That succeeded.

So, I rebooted. After the reboot, the secure channel was again reset to a DC in "Default-First-Site-Name". OK, I tried the same thing again (resetting the secure channel and then rebooting) with no change in behavior.

No need to try a third time. That would meet a classical definition of insanity. :)

I spent a limited amount of time investigating the particular reasons for why this should occur. But when it comes down to it, as a consultant, my job is to accomplish this project. So, I went out to find ways to ensure that a particular computer is a member of a particular AD site.

It turns out to be pretty simple. You must set a registry value for this key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters

The value is called SiteName and is of type REG_SZ (the name is case sensitive).

In my case, I set SiteName to "10-129-59" and closed regedit.exe (of course you can set this value in many ways - you can use PowerShell, .NET, Win32, reg.exe - whatever you wish to use). Documentation says that restarting the NetLogon service should correct everything, but that is not my experience. After rebooting the server, the computer came up in the proper AD site and I was able to proceed with installing Exchange Server 2013.
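For reference, the same change can be scripted. Either of the following is equivalent to the regedit steps above (site name taken from this example):

reg.exe add "HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" /v SiteName /t REG_SZ /d "10-129-59" /f

Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters' -Name SiteName -Value '10-129-59'

Follow with a reboot, then verify with "nltest /dsgetsite".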

Follow me on Twitter: @essentialexch



from Exchange News Full Article

msexchange.org: The TOP 3 objectives to achieve at Microsoft Ignite



from Exchange News Full Article

MSExchange.org: Product Review: Stellar Mailbox Exchange Toolkit

This article reviews Stellar Mailbox Exchange Toolkit.

from Exchange News Full Article

msexchange.org: Lync 2013 Operations Checklists - April 2015

How cool is this? I would love to see this for other Microsoft products.

from Exchange News Full Article

msexchange.org: Office App Compatibility Kit

Are you planning to write an Office App? Outlook could use some more ;) If yes, this might interest you:

from Exchange News Full Article

Tuesday, April 28, 2015

MSExchange.org: Exchange Archiving: On-Premises vs Cloud-Based (Part 1)

A real-world analysis of the benefits and caveats of mailbox archiving, leveraged by the compliance features in Exchange 2013 (on-premises) versus the cloud-based service provided by Office 365.

from Exchange News Full Article

Monday, April 27, 2015

Exchange Team Blog: Generating user message profiles for use with the Exchange Calculators

Greetings Exchange Community!

My name is Dan Sheehan, and I work as a Premier Field Engineer for Microsoft, specializing in Microsoft Exchange. As a long time Exchange engineer I am an avid PowerShell scripter, and as such I end up writing a lot of PowerShell scripts.

Today I present to you one of those scripts that assists Exchange administrators/service owners with generating an Exchange “user message profile”. This “user message profile” is a critical part of the information entered into the Exchange Server Role Requirements Calculator and the Exchange Client Network Bandwidth Calculator (more on those below).

The script, which is published here on the TechNet Gallery, is designed to work in environments of all sizes, and has been tested in environments with hundreds of Exchange sites. The current version works with the Management Shell of Exchange 2010 and 2013, and I am working on a version for Exchange 2007. I have a number of scripts published on the TechNet Gallery, both from before I joined Microsoft and after, and I encourage you to check them out as well as the TechNet Gallery in general.

Without any further ado, on to the script.

Background

An Exchange "user message profile" represents the number of messages a user sends and receives in a day, and the average size of those messages. This critical information is used by the Role Requirements Calculator to determine the typical workload a group of users will place on an Exchange system, which in turn is used to properly size a new Exchange environment design. This information is also used by the Client Bandwidth Calculator to estimate the potential bandwidth impact email users will have on the network, depending on the client types and versions used.

Some Exchange service owners “guesstimate” a couple of different user message profiles based on the anticipated workload, while others use data from their existing environment to try and create a messaging profile based on recent user activity. Gathering the necessary information based on recent user activity and creating a user message profile is not an easy task, and quite often service owners turn to third party tools for assistance with this process.

This PowerShell script was created to assist Exchange service owners who want to generate average user message profiles based upon their current environment, but don’t have or want to use a third party tool to gather the necessary information and generate a message profile.

There are other messaging statistics gathering scripts published on the Internet, such as this one by Mjolinor on the TechNet Gallery and this one by our own Neil Johnson (who, BTW, is responsible for the Client Bandwidth Calculator). Typically those types of "messagestats" scripts create a per-user report of all messaging activity, which takes a long time, includes information beyond what is required to create a user message profile, and produces output that requires further manipulation to come up with an average user message profile. This script, on the other hand, focuses on just the messages sent and received by users, which is faster than gathering all messaging activity, and provides a user message profile per Exchange (AD) site versus individual user results.

Functionality

The script uses native Exchange PowerShell cmdlets to extract the mailbox count from mailbox role servers and mailbox messaging activity from the Hub Transport role server message tracking logs for the specified date range. The information is then processed to obtain user/mailbox message profiles consisting of averages for sent messages, received messages, and message sizes.

The script requires a start and end date, and can be run multiple times to accumulate groups/blocks of days into the final output. For instance, instead of gathering 30 straight days of data from the Exchange servers, which would include weekend days that generally skew the averages downward due to reduced user load, the script can be run 4 consecutive times against the 4 groupings of weekdays within that 30-day period, which helps keep the averages reflective of a typical work day, as sketched below. The output to a CSV file is then performed on the 4th and final run.
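A sketch of that pattern follows. The dates are illustrative (Mondays through 12:00AM Saturdays, per the EndDate behavior described in the parameter list below), the output file name is just an example, and I am assuming, as the OutCSVFile note below suggests, that you run all four commands in the same shell session so the accumulated in-memory data carries across runs:

.\Generate-MessageProfile.ps1 -ADSites * -StartDate 12/1/2014 -EndDate 12/6/2014
.\Generate-MessageProfile.ps1 -ADSites * -StartDate 12/8/2014 -EndDate 12/13/2014
.\Generate-MessageProfile.ps1 -ADSites * -StartDate 12/15/2014 -EndDate 12/20/2014
.\Generate-MessageProfile.ps1 -ADSites * -StartDate 12/22/2014 -EndDate 12/27/2014 -OutCSVFile FourWeeks.CSV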

The script can be run against Exchange servers in specific AD sites, collections of AD sites, or all AD sites, and the generated message profiles that are returned are organized by AD site. The ability to specify a specific collection of AD sites is important for multi-site international Exchange deployments because not every location around the world follows a Monday through Friday work week. This functionality can be combined with the script’s ability to accumulate and combine data from multiple runs into a single report, even if some sites had to be queried using different date ranges.

The script can optionally provide a "total" summary user message profile for users across all collected sites under the site name of "~All Sites", which will show up at the top of the output. The collected data can be exported to a CSV file at the end of each script run; otherwise it is automatically stored in a PowerShell variable for further manipulation.

The script provides detailed output to the screen, including tiered progress bars indicating which site is currently being processed, which server in that site is being processed, and what specific processing activity is occurring. The script output also includes an execution time summary at the end, so you can plan for future data gathering time requirements.

Resultant Data

There are a number of script parameters (covered below) that can be used to exclude certain types of mailboxes and messages from the data gathering and subsequent output of the script. For example, if you exclude all RoomMailboxes from the data gathering, they won't be reflected in the script's output. This means the words "all" and "total" below refer to the messages and mailboxes the script was told to gather and process, and not necessarily all of the data available on the servers.

The data in the output is grouped into the following columns per Exchange site (as well as the optional “~All Sites” entry):


  1. Site Name – This is the name of the AD site that the Exchange servers live in, as defined in AD Sites and Services.
  2. Mailboxes – This is the count of all mailboxes discovered in the site. This information is used by both Calculators.
  3. AvgTotalMsgs – This is the average count of sent and received messages for the mailboxes in the site. This information is used by the Role Requirements Calculator.
  4. AvgTotalKB – This is the average size in KB of all included sent and received messages in the site. This information is used by both Calculators.
  5. AvgSentMsgs – This is the average count of sent messages for the mailboxes in the site. This information is used by the Client Network Bandwidth Calculator.
  6. AvgRcvdMsgs – This is the average count of received messages for the mailboxes in the site. This information is used by the Client Network Bandwidth Calculator.
  7. AvgSentKB – This is the average size in KB of sent messages for the mailboxes in the site.
  8. AvgRcvdKB – This is the average size in KB of received messages for the mailboxes in the site.
  9. SentMsgs – This is the total amount of sent messages for the mailboxes in the site.
  10. RcvdMsgs – This is the total amount of received messages for the mailboxes in the site.
  11. SentKB – This is the total size in KB of all sent messages for the mailboxes in the site.
  12. RcvdKB – This is the total size in KB of all received messages for the mailboxes in the site.
  13. UTCOffset – This is the UTC time zone offset for the AD site. This information is used by the Client Network Bandwidth Calculator.
  14. TimeSpan – This represents the amount of time difference between the clock on the local computer running the script and the clock of the remote server being processed. This is informational only.
  15. TotalDays – This represents the number of days collected for the site. This information is needed by the script when you are using it to combine multiple runs into a single output.

Parameters

The script has a number of parameters to allow administrators control what goes into/is excluded from the user message profile generation process. Most of the parameters are grouped into one of three “parameter sets”, with the exception of one parameter that is in 2 sets and a couple that are not in any set.

Parameter sets group related parameters together; once a parameter in one set is chosen, the only other available parameters are those in that same set and those that aren't assigned to any set. Furthermore, a required parameter is only required within its parameter set, meaning that if you are using one parameter set, the required parameters in other sets don't apply.

If the concept of parameter sets is a little confusing and you are using Exchange 2013, then you can use the PowerShell 3 (and later) cmdlet Show-Command with the script to create a graphical representation of the parameter sets like this:

Show-Command .\Generate-MessageProfile.ps1

which pops up a window presenting the parameters grouped by parameter set.

The script also supports the traditional -Verbose and -Debug switches in addition to what’s listed below:

ADSites (Gather set, Optional)
When set to "*", indicates that all AD sites with Exchange should be processed. Alternatively, explicit site names, site names with wildcards, or any combination thereof can be used to specify multiple AD sites to filter on. If no site is defined, the local AD site will be used. The format for multiple sites is each site name in quotes, separated by a comma with no spaces, such as: "Site1","Site2","AltSite*", etc.

StartDate (Gather set, Required)
Specifies the day to start the message tracking log search on, which starts at 12:00AM. The format is MM/DD/YYYY.

EndDate (Gather set, Required)
Specifies the day to end the message tracking log search on, which ends at 12:00AM. This means that if you want to search Monday through Friday, you need to specify the end date of Saturday so the search stops at 12:00AM Saturday. The format is MM/DD/YYYY.

ExcludePFData (Gather set, Optional)
Tries to filter out messages sent to or from Exchange 2010 Public Folder databases.
NOTE: This parameter is not recommended because it relies on message subject line filtering, which could potentially filter out user messages. Additionally, it does not filter out all Public Folder messaging data, because some Public Folder message subject lines were not included due to the high likelihood that users would use them in their own messages.

ExcludeHealthData (Gather set, Optional)
Excludes messages sent to, and the inclusion of, "extest_" mailboxes and Exchange 2013 "HealthMailbox" mailboxes.
NOTE: Because the extest and HealthMailboxes can generate a lot of traffic, it is recommended to use this switch to get a more accurate message profile reflection of your users.

ExcludeRoomMailboxes (Gather set, Optional)
Excludes messages sent to, and the inclusion of, room mailboxes. By default, equipment and discovery mailboxes are excluded from the collection, as they negatively skew the average user message profile. Room mailboxes are included by default because they can send/receive email.
NOTE: This parameter is not recommended if you have active conference room booking in your environment, as that means you have active message traffic to and from room mailboxes.

BypassRPCCheck (Gather set, Optional)
Instructs the script to bypass testing RPC connectivity to the remote computers by using Get-WMIObject. Bypassing the RPC check should not be necessary as long as the account running the script has the appropriate permissions to connect to WMI on the remote computers.

ExcludeSites (Gather and Import sets, Optional)
Specifies which sites should be excluded from data processing. This is useful when you want to use a wildcard to gather data from multiple sites but want to exclude specific sites that would normally be included in the wildcard collection. For data importing, this is useful when you want to exclude sites from a previous collection. The format for multiple sites is each site name in quotes, separated by a comma with no spaces, such as: "Site1","Site2", etc.
NOTE: Wildcards are not supported.

InCSVFile (Import set, Required)
Specifies the path and file name of the CSV to import previously gathered data from.

InMemory (Existing set, Required)
Instructs the script to only use existing in-memory data. This is intended only to be used with the AverageAllSites parameter switch.

AverageAllSites (no set, Optional)
Instructs the script to create an "~All Sites" entry in the collection that represents an average message profile of all sites collected. If an "~All Sites" entry already exists, its data is overwritten with the updated data.

OutCSVFile (no set, Optional)
Specifies the path and file name of the CSV to export the gathered data to. If this parameter is omitted, the collected data is saved in the shell variable $MessageProfile.
NOTE: Do not use this parameter until the last of multiple cumulative runs (such as consecutive collections of work weeks to avoid weekends), so that only the complete data set is exported to a CSV.

NOTE: This list of parameters will be updated on the TechNet Gallery posting as the script is updated.

Examples

The following are just some examples of the script being used:

1. Process Exchange servers in all sites starting on Monday 12/1/2014 through the end of Friday 12/5/2014. Export the data, excluding the message data for Exchange 2013 HealthMailboxes and any extest_ mailboxes, to the AllSites.CSV file.

Generate-MessageProfile.ps1 -ADSites * -StartDate 12/1/2014 -EndDate 12/6/2014 -OutCSVFile AllSites.CSV -ExcludeHealthData

2. Process Exchange servers in AD sites whose name starts with "East", starting on Monday 12/1/2014 through the end of Monday 12/1/2014. Output the additional Verbose and Debug information to the screen while the script is running. The collected data is made available in the $MessageProfile variable after the script completes.

Generate-MessageProfile.ps1 -ADSites East* -StartDate 12/1/2014 -EndDate 12/2/2014 -Verbose -Debug

3. Process Exchange servers in the EastDC1 AD site, and any sites that start with the name "West", starting on Monday 12/1/2014 through the end of Tuesday 12/30/2014. Export the data, which should exclude most Public Folder traffic, to the MultiSites.CSV file.

Generate-MessageProfile.ps1 -ADSites "EastDC1","West*" -StartDate 12/1/2014 -EndDate 12/31/2014 -OutCSVFile MultiSites.CSV -ExcludePFData

4. Import the data from the PreviousCollection CSV file in the current working directory into the in-memory data collection $MessageProfile for future use.

Generate-MessageProfile.ps1 -InCSVFile .\PreviousCollection.CSV

5. Process the previously collected data stored in the $MessageProfile variable and add an average of all the sites to the data collection under the site name "~All Sites".

Generate-MessageProfile.ps1 -InMemory -AverageAllSites

FAQ

1. Is the output generated by this script an accurate representation of my users' messaging profile, which I can use in other tools such as the Role Requirements Calculator?

  • This script generates a point-in-time reflection of your users' messaging activity. The data is only as good as the date range(s) you selected, the data you opted to include or exclude, and the information stored on the accessible servers. For example, if you ran this script during a date range that included a holiday when a lot of users took vacation, the information is going to reflect a lower average message profile than a more "normal" work period would.
  • Taking into consideration that this script will only reflect the messaging activity of your users during your selected date range, you should use the output as a guideline for formulating the message profile that represents your users in other tools.

2. Should I inflate/enhance the message profile produced by this script to give myself some “elbow room” in my Exchange system design?

  • If you are designing an email system that is going to need to last for multiple years, it's probably a good idea to increase the numbers slightly to account for future growth of your system and the likelihood that your users will increase their message profile over time. How much you inflate the information is up to you.

3. The messaging profile for my users seems lower than I expected. What are some factors that could contribute to this, and how can I increase the values generated by the script?

  • Review the data range(s) you chose when running the script to see if they were periods of time where user activity was expected to be low.
  • If your date range(s) include weekends/non-work days, re-run the script excluding those days. This may require multiple cumulative runs if you want to include multiple work weeks in the average.
  • If you have a lot of resource rooms that are rarely used but you did not exclude them, then try re-running the script with the ExcludeRoomMailboxes parameter to see if the averages increase. Conversely if you used some of the script’s parameters to exclude data, re-running the query without the exclusions may increase the average as well. You will need to test various parameter combinations in your environment until you are happy with the results.
  • If you recently decommissioned any Hub Transport role servers in a site, then the message tracking logs stored on those servers that provide user activity details were removed as well. Therefore it is highly recommended that this script only be run on sites that have not had any Hub Transport role servers decommissioned during the specified time ranges. The script even has a built-in warning when it detects that a Hub Transport role server was added to a site during the specified date range, to remind you that if another Hub Transport role server was recently removed from that site, the user message profile could be negatively affected.

4. Why don’t I see any per-user information? Why is this site-based?

  • This script was designed to maximize speed by gathering messaging profile information on a per-site basis to facilitate the use of both the Role Requirements and Client Network Bandwidth Calculators. The Client Bandwidth Calculator wants the message profile information on a per-site basis, and the per-site basis works for the Requirements Calculator as well. Reporting on per-user information is being considered for a future version of this script.
  • Per-user information is not needed for either Calculator. Separate user profiles can optionally be put into each Calculator using the same message profile but reflecting other differences, such as larger mailboxes or expected IOPS increases (for example, when a group of users is also using mobile devices).
  • If you require per-user reporting, please use one of the scripts I referenced in the Background section.

5. Why did I get an alert that one or more sites were skipped or excluded?

  • A site will be skipped if there were connectivity issues to any server in the site. Since a message profile for a site must contain data from all of the servers, missing data from even one server could result in incomplete information. Therefore the script will skip the site if it encounters connectivity issues to even one server, rather than report only partial data.
  • A site will be excluded if there are no mailboxes or messaging activity found in it. Passive Exchange DR sites with no active mailbox databases are an example of a site that will be safely excluded. Even though there may be active Hub Transport servers in those sites, their message tracking data is not needed as they will hand messages off to Hub Transport role servers in the site(s) with the target mailboxes. The logs from those final Hub Transport role servers will in turn be used for the message profile generation.
  • If any sites were skipped for data collection issues, they will be recorded in a $SkippedSites variable which will be available after the script finishes. This allows you to re-run the script and specify $SkippedSites as the value for the ADSites parameter, which focuses the data gathering on only those skipped sites. This is helpful when the skips were caused by temporary WAN connectivity issues, and another run of the script will process those skipped sites successfully.
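For example, a follow-up run targeting only the previously skipped sites might look like this (the dates are illustrative and should match your original run so the TotalDays values stay consistent, per FAQ 10):

Generate-MessageProfile.ps1 -ADSites $SkippedSites -StartDate 12/1/2014 -EndDate 12/31/2014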

6. Why can’t I specify the hours of a day I want to be searched in addition to the days?

  • The script is designed to work with whole/entire days, not fractions of a day, to create the averages. Specifying a time of day would result in a fraction of a day, which is not supported in creating a “per day” user message profile average.

7. Why does the EndDate need to be the day following the day I want to stop reporting on?

  • When only a date is used for a “DateTime” variable, PowerShell assigns the time for that day as 12:00AM. For the StartDate, that time is exactly what needs to be used, as it represents the entire day starting at 12:00AM. However, for the EndDate this causes the data collection to stop at 12:00AM on the specified day; therefore the EndDate needs to be the day following the last day you want included in the output (see the example after this list).
  • The script has logic built in to ensure that the Start date does not occur in the future, that the End date does not occur before the Start date, that the Start date is at least one day prior to the current date, and that the End date is no later than the current date.
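For example, to include all activity from December 1 through December 31, 2014, the EndDate must be January 1, 2015 (the site name here is illustrative):

Generate-MessageProfile.ps1 -ADSites "EastDC1" -StartDate 12/1/2014 -EndDate 1/1/2015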

8. Why would I want to store data in a CSV file and then later import it with the script?

  • Sometimes some sites just can’t be reached over the WAN. This allows the data collection to be performed locally on a server in the remote site, and the data then transferred back to the main site via a CSV file, where it can be imported into the main data collection (see the sketch after this list).
  • This functionality also allows you to take data collections from different points in time, such as over the course of several weeks or months, and import them into a single longer-term user message profile generation.
  • This functionality also allows you to remove sites from the in-memory collection: export the data to a CSV, then re-import it into a new collection using the ExcludeSites parameter to block the import of the unwanted sites.
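A hedged sketch of that remote-site workflow, using only the parameters shown earlier (the file, site, and date values are illustrative): collect and export locally in the remote site, then import at the main site, optionally blocking unwanted sites:

Generate-MessageProfile.ps1 -ADSites "RemoteSite1" -StartDate 12/1/2014 -EndDate 12/31/2014 -OutCSVFile RemoteSite1.CSV
Generate-MessageProfile.ps1 -InCSVFile .\RemoteSite1.CSV -ExcludeSites "UnwantedSite1"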

9. What is the purpose of the InMemory parameter?

  • The only reason to use this switch is if you already have your data loaded into memory, either through one or more gathering or importing processes, and want to use the AverageAllSites parameter to provide a single global user message profile under the site name of “~All Sites”. Essentially this parameter allows you to bypass gathering or importing data and just use what is already “in memory”.

10. Why do I get an error about “inconsistent number of days” when I try to use the AverageAllSites parameter?

  • The process that generates a single global user message profile requires that the value for TotalDays be the same for all collected sites. Otherwise the aggregated data would be represented incorrectly because the TotalDays value is used to calculate the “per day” average. You need to review your site data, most likely by exporting it to a CSV file and reviewing it manually, to determine which sites have different TotalDays recorded and deal with them accordingly.
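One quick, hedged way to spot the mismatch without leaving PowerShell, assuming each site entry in $MessageProfile exposes the TotalDays value described above:

$MessageProfile | Group-Object TotalDays | Select Count,Name

If more than one group comes back, the Name column shows the differing TotalDays values you need to reconcile.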

11. Why is the information saved to the $MessageProfile variable if I don’t use the -OutCSVFile parameter? Also, how do I “wipe” the collected data from memory so I can start over?

  • Storing the data inside of a PowerShell variable is necessary if you want to run the script multiple times to accumulate data, because the script uses this variable to store the cumulative data in between runs.
  • This also allows you to take the in-memory $MessageProfile variable data and pass it to any other PowerShell scripts or commands that you wish.
  • You have the option of using the command “$MessageProfile | Export-CSV ….” to create your own CSV if you decide to later store the collected data in a CSV file.
  • To clear the $MessageProfile data from memory use the following command:

$MessageProfile = $Null

12. Why does the output of the script include a value called “TimeSpan” and also the time zone of the remote site?

  • The time span represents the delta in hours, positive or negative, between the server running the script and the remote server it is connecting to. By default when the Get-MessageTrackingLog cmdlet is executed against a remote server, the DateTime values used for the start and end dates passed to it are always from the perspective of the server running the cmdlet. This means that if the computer running the cmdlet is 5 hours behind the remote server, then the dates (which include a time of day) passed to that remote server by the cmdlet would actually be 5 hours behind your intended date.
  • The script uses this time span to properly offset the DateTime values as they are passed to the Get-MessageTrackingLog cmdlet, so they are always processed by the remote server with the original intended dates (and the 12:00AM time of day). Following the example above, the script will add 5 hours to the date when the cmdlet is run against the remote server. Since this value is crucial to accurate script execution, it is recorded in the output for tracking purposes (a sketch of the offset logic follows this list).
  • The Client Network Bandwidth Calculator wants to know the time zone of the user message profile being specified. To facilitate use of this calculator, the site’s time zone information is recorded in the output of the script.
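As a rough illustration of the offset logic described above (a hedged sketch, not the script’s actual code; the remote time zone ID is hypothetical):

# Delta between the remote site's offset and this server's offset
$remoteTimeZone = [System.TimeZoneInfo]::FindSystemTimeZoneById("W. Europe Standard Time")
$timeSpan = $remoteTimeZone.GetUtcOffset((Get-Date)) - ([System.TimeZoneInfo]::Local).GetUtcOffset((Get-Date))
# Shift the intended 12:00AM start date so the remote server processes the original intended day
$adjustedStart = (Get-Date "12/1/2014") + $timeSpan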

13. Why did you build in an ExcludePFData parameter switch if it doesn’t exclude all Public Folder traffic?

  • Initial testing of the script showed that dedicated Public Folder servers reflected a large amount of Public Folder replication-based Hub Transport messaging activity.
  • Because the most accurate depiction of the user messaging profile was desired, a switch was added to try to filter out some of the Public Folder replication data. Since the only way to consistently identify Public Folder traffic was by matching keywords in the message subject line, the filter strips out messages whose subjects contain Public Folder replication phrases that real users are unlikely to use, which limits the chance of accidentally stripping actual user messages.

14. I see Equipment and Discovery mailboxes are excluded; why aren’t Arbitration mailboxes excluded?

  • Equipment and Discovery mailboxes do not send and receive email through the Hub Transport service, so including them would only serve to negatively impact the user message profile.
  • Arbitration mailboxes, on the other hand, are normally limited in number, and therefore including them in the mailbox count is not expected to dramatically impact the message profile in a negative way. At the same time, messages can be sent to and received from Arbitration mailboxes, depending on the organization’s use of features like moderated Distribution Groups, so including them could positively impact the message profile.

Conclusion

So there you have it: a PowerShell script to assist you with generating an average user message profile for your environment, with a number of options for you to tailor it to your preferences. I hope you find it useful with the two calculators, and also in any future troubleshooting of your existing environment.

When I finish the Exchange 2007 version, I will attach it to the TechNet Gallery posting, so if you are looking for that version please check back periodically. Likewise, as I make enhancements or other changes to the script, I will update the TechNet Gallery posting.

Lastly, I am always open to suggestions and ideas, so please feel free to leave a comment here or reach out to me directly.

Thanks and happy PowerShelling!

Dan Sheehan
Senior Premier Field Engineer



from Exchange News Full Article

Anderson Patricio: How does the Exchange Server 2013 setup resume process work?

In this tutorial we are going to test how the Exchange Server 2013 setup resume process works. To test it properly, we will turn off the domain controller during the Exchange Server 2013 setup; as we know, Active Directory is the foundation for Exchange Server.

Solution


In the image below we can see the error message displayed when a Domain Controller is not available. By default, a new Exchange Organization requires 15 steps, and the issue occurred during Step 8, which means that at this point Steps 1 to 7 were completed successfully. After getting the error message there is nothing we can do but click Exit and solve the situation before trying the installation process again.

[Image: setup error indicating that a Domain Controller is not available]

In our case the solution was easy: we brought the domain controllers up and ran setup.exe again, and the following page was displayed. The setup detects that the last attempt was not completed and will retry the process; from the administrator side, we just need to click Next.

[Image: setup detecting an incomplete installation and offering to resume]

The setup will resume, and we can see that only 8 steps are now left; the reason is that the first 7 steps, which completed successfully, are not repeated in this resumed setup process.

[Image: resumed setup showing only the remaining 8 steps]

So, now you know: if something goes south during the setup, we can always resume the operation by running setup.exe again. Just make sure that you solve the error before trying again.

Please join our ITPROCentral.com community at http://ift.tt/1vM4SGL



from Exchange News Full Article

Exchange Group Team Blog: Outlook says .. Microsoft Exchange is not available

This is a good one and has been doing my head in!

I had a user mailbox that could log in to OWA but not Outlook.

With Outlook you would get an error saying:

"Cannot open your default e-mail folders. Microsoft Exchange is not available. Ether there are network problems or the exchange Computer is down for maintenance"

If you run:

Get-LogonStatistics -Identity <email address>

Then have a look at the FullMailboxDirectoryName.

This should match the user’s legacyExchangeDN. In my case, another user had the same address as a proxy address.
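To track down which object holds the conflicting address, something along these lines might work (a hedged sketch assuming an Exchange management shell; the mailbox address is hypothetical):

# Grab the directory name Outlook resolved for the affected mailbox
$dn = (Get-LogonStatistics -Identity user@contoso.com | Select-Object -First 1).FullMailboxDirectoryName
# Find every recipient carrying that value among its proxy addresses
Get-Recipient -ResultSize Unlimited | Where-Object { $_.EmailAddresses -like "*$dn*" }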

Just removed it and everything worked fine!




from Exchange News Full Article

MSExchange.org: Product Review: QUADROtech ArchiveShuttle

In this review we will take a look at QUADROtech ArchiveShuttle.

from Exchange News Full Article

msexchange.org: How to remove non-routable domains from your Exchange Organization



from Exchange News Full Article

msexchange.org: Enhanced non-delivery reports (NDRs) in Office 365



from Exchange News Full Article

msexchange.org: Centralized and Customizable backup reports using Data Protection Manager

Update Rollup 5 for System Center 2012 R2 Data Protection Manager brought new and enhanced reporting capabilities.

from Exchange News Full Article

msexchange.org: New Office Visio Stencil - April 2015

These stencils contain more than 300 icons to help you create visual representations of Microsoft Office or Microsoft Office 365 deployments including Microsoft Exchange Server 2013, Microsoft Lync Server 2013, and Microsoft SharePoint Server 2013. The zip file now includes both stencil sets from 2012 and 2014.

from Exchange News Full Article

msexchange.org: RMS Tools updates

Download the latest updates to use and manage RMS.

from Exchange News Full Article

Tony Redmond: Fixing a “FailedAndSuspended” content index for an Exchange 2013 database

For space reasons, this text is another bit that was cut out of my Exchange 2013 Inside Out: Mailbox and High Availability book. FWIW, here it is…

Imagine my annoyance when I ran the Get-MailboxDatabaseCopyStatus cmdlet on a test Exchange 2013 server and found that three of the databases reported a “Failed and Suspended” status for their content index. Despite the rumors to the contrary, I don’t spend my days idly running commands against Exchange to see what happens. I’d been playing with health sets and health reports in an effort to understand these concepts better, and was provoked into finding out why my server was deemed unhealthy for data protection when I ran the command to extract the health report based on the data protection health set:

Get-HealthReport -Identity ExServer1 -HealthSet DataProtection

The content index is necessary to enable fast client searches so it is something to be concerned about. Failed and suspended means that Exchange hasn’t been able to fix any problems that it might encounter with the content indexes during normal operations and a reseed is necessary. I wasn’t all that worried in this instance because only test databases were involved, but it’s good to run a tidy shop so the problem had to be addressed.

[Image: the failed index]

These databases form part of a Database Availability Group (DAG). Normally when a content index fails and needs to be reseeded, you simply run the Update-MailboxDatabaseCopy cmdlet and specify the CatalogOnly switch to request Exchange to reseed the content index from a good copy belonging to another database copy. But when you’re running a single-copy database there’s no other good copy (of either the database or the content index) hanging around waiting to be called into play.
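In the normal multi-copy case, that reseed is a one-liner along these lines (the database and server names here are hypothetical):

Update-MailboxDatabaseCopy -Identity "DB2\ExServer2" -CatalogOnly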

Hmmm… TechNet wasn’t too helpful on the topic of reseeding a single-copy database, and the suggestions offered on various websites all leaned toward a complete rebuild of the index. Eventually I decided to go with that plan because there didn’t seem to be any good alternative. Note that users are able to keep on working with their mailboxes even when a database has a failed index. It just means that searches performed with OWA will be slower.

In any case, I used the following steps to get my three errant databases back to good health (a scripted sketch follows the list).

  1. Stopped the Microsoft Exchange Search service
  2. Stopped the Microsoft Exchange Search Host Controller service
  3. Dismounted the database
  4. Deleted the [guid.single] folder in the folder holding the database file. Guid is the Globally Unique Identifier for the database. You can find this by running Get-MailboxDatabase database-name | Select Guid. For example, the folder you need would be something like d:\Databases\DB2\79c03cca-9b53-4959-982a-8773591c5f70.single
  5. Restarted the Microsoft Exchange Search Host Controller service
  6. Restarted the Microsoft Exchange Search service
  7. Remounted the database
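For reference, here is a hedged PowerShell rendition of those steps (the database name and path are hypothetical; the service names are the Exchange 2013 defaults):

# Stop the search services so the index files are released
Stop-Service MSExchangeFastSearch
Stop-Service HostControllerService
Dismount-Database -Identity DB2 -Confirm:$false
# Remove the [guid].single content index folder beside the database file
$guid = (Get-MailboxDatabase DB2).Guid
Remove-Item "D:\Databases\DB2\$guid.single" -Recurse -Force
Start-Service HostControllerService
Start-Service MSExchangeFastSearch
Mount-Database -Identity DB2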

As each database was remounted, the Search service recognized that its content index was missing and began the process of rebuilding the index. The content index status will remain as “Failed” until the rebuild is complete. A couple of minutes later all was well and the server reported full health. Of course, this was a relatively small database so the Search Foundation didn’t have too much work to do to recreate the content index. The process will take longer as the database size grows; it is definitely not something that you will want to do if the index fails for a large database.
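You can keep an eye on the rebuild with something like this (the database name is hypothetical):

Get-MailboxDatabaseCopyStatus -Identity DB2 | Select Name,ContentIndexState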

I’m not recommending that you delete folders on a production server. Then again, I hope that on a production server you’ll have more than a single copy of any database within the DAG (remember, two copies provide basic redundancy, three is much better, and four copies provide a warm blanket feeling) and will therefore be able to run Update-MailboxDatabaseCopy. But if you do get into a hole, you might be able to use the steps outlined above to get out of it. And that’s always a good thing, isn’t it?

Follow Tony @12Knocksinna




from Exchange News Full Article

Sunday, April 26, 2015

Subject Exchange: Update for Outlook Junk E-mail Filter – April 2015

Microsoft has recently released the April updates for the Outlook 2007/2010/2013 Junk E-mail Filter.

This update provides the Junk E-mail Filter in Microsoft Office Outlook with a more current definition of which e-mail messages should be considered junk e-mail.

The update is available for Outlook 2007, Outlook 2010 (32-bit, 64-bit) and Outlook 2013 (32-bit, 64-bit) or you can use Microsoft Update. As usual, the updates come with the corresponding Knowledge Base article:



from Exchange News Full Article

Saturday, April 25, 2015

EighTwOne: Official 70-341 and 70-342 Preparation Books

For those striving for Exchange certification, there is nothing like good written material to prepare you for the exam at hand. Of course, hands-on experience is invaluable, but not everyone has that or knows where to start. The TechNet content is also great reference material, but it is written with a support audience in mind and is not intended to guide you through the exam subjects.

Recently, Microsoft Press released the following two titles:

Both books are written by authors with proper field experience, and both Bhargav and Reid also taught on the Microsoft Certified Master (MCM/MCSM) program at Microsoft in Redmond. If getting certified for Exchange 2013 is on your roadmap, be sure to check out these titles.



from Exchange News Full Article

Subject Exchange: Weekend reading



from Exchange News Full Article

Friday, April 24, 2015

msexchange.org: Skype for Business 2015 Protocol Workloads Poster - April 2015

Download this poster to better understand requirements of Skype for Business 2015 servers and clients under different levels of service complexity.

from Exchange News Full Article

msexchange.org: System Center Management Packs for DPM 2012 R2 Reporting, DedupReporter, Discovery and Monitoring

This download contains the management packs required to monitor and generate reports on Data Protection Manager(DPM) Servers using System Center Operations Manager.

from Exchange News Full Article

msexchange.org: IdFix DirSync Error Remediation Tool v1.08

IdFix is used to perform discovery and remediation of identity objects and their attributes in an on-premises Active Directory environment in preparation for migration to Office 365. IdFix is intended for the Active Directory administrators responsible for DirSync with the Office 365 service.

from Exchange News Full Article

msexchange.org: Microsoft Azure Active Directory Sync Services v1.0.0491.0413

Azure AD Sync allows you to onboard to Azure Active Directory and Office 365.

from Exchange News Full Article

msexchange.org: Update Rollup 3 for Microsoft Dynamics CRM 2013 Service Pack 1 (KB 3016464)

Microsoft has released Update Rollup 3 for Microsoft Dynamics CRM 2013 Service Pack 1 (SP1). This update rollup contains new updates that improve security, performance, and stability.

from Exchange News Full Article

Subject Exchange: Keep calm and… Happy 10th Anniversary!


It was 10 years ago that this blog was born with an “Ehlo World” post. Many things happened in the email space in these 10 years: Exchange evolved from version 2003 to 2013 (soon to be 2016), the way we consume messages shifted to mobile, the average mailbox quota went from a couple of MB to a few (at least) GB and we witnessed the rise of the Online services.

On the personal side, although I’m no longer an MVP (for professional reasons), I still miss those days and being part of that passionate community. Nevertheless, I try to keep in touch with the technical communities in general by attending events, writing articles, and blogging, although not as much as I’d like because my daytime job keeps me really busy (in fact, so busy that I missed this anniversary by 6 days).

Please stay tuned for the next 10 years!



from Exchange News Full Article

Thursday, April 23, 2015

msexchange.org: Lepide Software announces launch of Lepide Exchange Migration Services

April 22, 2015: Lepide Software Pvt. Ltd. announces the launch of Lepide Exchange Migration Services, a remote, end-to-end migration service for performing Exchange-to-Exchange (on-premises) migrations and Exchange to Office 365 migrations.

from Exchange News Full Article

Tony Redmond: Preparing to launch “Office 365 for Exchange Professionals”

[Image: “Office 365 for Exchange Professionals” cover]

As you might know, we closed off the formal writing phase for “Office 365 for Exchange Professionals” on April 15 to be sure that we would have books available for the first day of the Microsoft Ignite conference in Chicago (May 4). I guess it’s natural that some would ask why we would stop writing 19 days before the books are needed. After all, Microsoft has continued to pump out announcements relating to Office 365 on the Office Blog with these posts appearing recently:

Enhancing transparency and control for Office 365 customers (April 21): New security features that aren’t in Office 365 yet, but will be over the next year or so. This was an announcement for the RSA conference, but contained some nuggets that affected chapters 2 and 17.

Evolving Data Loss Prevention in SharePoint Online/OneDrive for Business and Office applications (April 21): Thankfully we knew all about this stuff because we were on the preview program, so DLP for SharePoint and OneDrive for Business is covered in chapter 16.

Enhanced non-delivery reports (NDRs) in Office 365 (April 17): A really nice enhancement that isn’t available yet, so our mail flow chapter (11) survived.

Office 365 Video begins worldwide rollout and gets mobile (April 16): Video has been available to First Release tenants for a while and we had it well covered in chapter 18, but a number of new details came to light when Video achieved general availability.

Office 365 now supports larger email messages—up to 150 MB (April 15): This was an easy detail to insert into chapter 6. I’ve also commented on the topic on WindowsITPro.com.

New Office Delve People Experiences in Office 365 (April 14): This update arrived just before we stopped writing, but it caused some issues because Delve now boasted a new user interface. So we had to get screen shots to insert into chapter 18 as well as check out the new information to make sure that our text was still accurate.

Other stuff has happened to affect the content of the book, but the detail above is enough to give you the picture of the dynamic environment found inside Office 365 and the difficulty often found when writing about cloud services. Every day we have been scanning headlines and our inboxes to figure out what’s happening inside Office 365 so that we can adjust as required. Fortunately, because we are creating an eBook, we can update text much more easily and faster than we would be able to do for a traditional printed book.

At least, we can in the Word documents that we have for each chapter and the overall book file. Word does a good job of generating PDFs, so that part is handled quickly. The fun starts when it comes to generating files in the format used by mobile book readers. We’re focusing on EPUB to start with, as this is a free and open format supported by many readers. We might then look at other eBook formats, including Kindle.

The route from Word to EPUB is via the Caliber eBook generator. This decision leverages the experience of Paul Cunningham, who has generated quite a few eBooks for his exchangeserverpro.com site. However, Office 365 for Exchange Professionals is both larger (18 chapters, 1 appendix, 630 pages, 240,000+ words) and more complex in terms of styles and formatting than Paul’s previous books, so we are running into a couple of interesting challenges.

Based on some recommendations, we looked at using Adobe InDesign to format the Word content for publication. InDesign is used by many professional designers and layout specialists to prepare files for publication. This is especially important when dealing with printers because the RGB colors used for screens need to be translated into the CMYK palette used by printers. InDesign does all of this when it outputs files for printers.

Adobe offers a 30-day free trial to see whether InDesign is for you. There’s no doubt that InDesign is a powerful layout and formatting program for many types of documents but using that power requires a steep learning curve. I have been using various forms of text editors and layout tools since the early 1980s, including VAX Document (think TeX for VAX) and DECwrite, which InDesign reminded me of a lot. DECwrite was a “Compound Document Architecture” WYSIWYG editor that ran under the DECwindows windowing system on VAXstations in the 1989-1994 period. I used it to create my ALL-IN-1 books and generally enjoyed the powerful formatting capabilities that were built into DECwrite, even if the program was prone to crashes (thankfully, its ability to recover work was excellent).

The DECwrite engineer who dealt with many of my cases was Jo-Ann Snover, married to Jeffrey Snover, then also in DEC engineering and since famous as the father of PowerShell (this video tells all). Jeffrey is now the lead architect for Windows… It’s amazing how people are connected in the industry.

In any case, two days of dedicated trial and error plus lots of Internet searches and sitting through a number of video tutorials got me to the point where I could import the book from Word in a reasonably efficient manner and generate an EPUB file. The imported content needed a fair amount of fix-up beforehand to make sure that graphics were in the right place and that the flow looked good, but I got there. Many of the problems originated in the source Word documents and needed to be fixed there to ensure that any future import would work smoothly.

But the output I generated was inferior to what Paul is able to create with Caliber, so, seeing that we do not need to create files for printing, we decided to revert to Plan A and go with Caliber, which is how we are currently proceeding.

When we have everything done, I’ll publish another post to give some advice on how to format and fix Word documents if you are interested in publishing technical books along this route. It might interest some!

Follow Tony @12Knocksinna




from Exchange News Full Article