Jetstress – Too Many IOPS? Andrew Higginbotham

Hello all,

This is a shout-out to my friend Andrew Higginbotham. He is a multi-MVP and an MCM in Exchange Server, and he penned a very useful article about Jetstress.

The issue is Page Fault Stalls/sec, and the subject is SSDs (solid-state drives).

I admit to not spending much time in Jetstress, as I don't work on design elements as much as I do Skype. Andrew has come to my rescue on design issues and Jetstress on more than one occasion.

It turns out you should read this if you are using SSD flash drives with Jetstress: Here

This quick reference on my blog is meant to support Andrew's blog, and I recommend you read everything he writes. He is truly one of the best Exchange people around.

Andrew, thanks for your time on this case. I hate not being the expert, but I am proud to work on a team with such strengths. I am just glad to be part of a team of individuals whose strengths complement each other.



Edge Replication Status is false and the Last Update Creation time stops updating for the command Get-CsManagementStoreReplicationStatus

When it comes to Edge replication checking, the output below looks like a false positive. But I know we all like to see true. So you see where the date says 6/22? That means the last status report was a few months earlier. That missing update creation time is possibly saying the replication is not working. This is not hard to fix, so let's fix it!


Perform the steps below:

  1. Go to a Front End server and open the Skype for Business Management Shell
  2. Run the command Export-CsConfiguration -FileName C:\
  3. Copy the file to the Edge server.
  4. Open the Skype for Business Deployment Wizard
  5. Choose Install or Update Skype for Business Server System
  6. Choose Install Local Configuration Store
  7. Browse to the file and finish the wizard.
  8. You can restart the Edge server or just wait several minutes.
  9. If this fails, restart the SFB replication services on the FE and Edge servers.
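For shell fans, the steps above can be sketched roughly like this. The file path here is just an example, and the REPLICA service name is what I have seen on recent builds; adjust both for your environment:

```powershell
### On a Front End server, export the configuration (path is an example)
Export-CsConfiguration -FileName C:\Temp\CsConfig.zip

### Copy CsConfig.zip to the Edge server, then on the Edge server
### import it into the local configuration store
Import-CsConfiguration -FileName C:\Temp\CsConfig.zip -LocalStore

### Optionally restart the replica service instead of waiting
Restart-Service REPLICA

### Back on the Front End, verify that UpToDate flips to True
Get-CsManagementStoreReplicationStatus
```

If the Deployment Wizard route works for you, stick with it; this is just the same operation without the GUI.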


This is the point at which you browse to the configuration zip file. It's step 7.

I hope this helps with your issue. I have seen replication just stop refreshing, and in my experience this procedure normally fixes it.



Getting Accurate Latency from Dynamically Expanding Hyper-V Virtual Machine Disks


This article is about the tool called Hyper-V Performance Monitor Tool (PowerShell).

You can download it from the TechNet article further down the page, or use the link above.

The name Hyper-V gets thrown around loosely these days, whether you are talking about virtualization, performance tuning, planning, or any other aspect of the product life cycle of a new host deployment.

Over the last few years, we have made rapid changes from physical host machines for production workloads to these virtual monstrosities that now host our whole company.

Along with this change, you may recall that early Hyper-V documentation gently let us know that monitoring inside the virtual machine was not going to give results with parity to the physical counters, depending on configuration. This is for a few reasons, which are beyond this article's scope. However, I would like to shine a light on the topic so more people can think differently about their virtual performance.

The most common measure of how well a server is performing is latency in milliseconds. Everyone is most concerned with how much latency is in the storage system, perhaps with good reason. SAN storage vendors can perform so fast nowadays that you can throw the Empire State Building at a server and the latency is less than 10 milliseconds (ms). Or is it throughput?

To be clear, we are interested more in latency than throughput. Latency should be minimized; as it drops, throughput will generally increase.
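To make that relationship concrete, Little's Law for storage says throughput (IOPS) is roughly outstanding I/Os divided by latency. A quick sketch, with made-up numbers:

```powershell
### Little's Law for storage: IOPS = queue depth / latency (in seconds)
$queueDepth = 32                          # outstanding I/Os in flight
$latencyMs  = 2                           # average latency in milliseconds
$iops = $queueDepth / ($latencyMs / 1000) # 32 / 0.002
"$iops IOPS"                              # 16000 IOPS at queue depth 32 and 2 ms
```

This is why a latency number without the queue depth behind it tells you very little about how the system is actually performing.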

Can I make a case that Counters are not reliable?

Well, let me tell you: actual latency is not as easily obtained as you would think. If I told you that your tests are lying to you, would you believe me? Let's say you're in shock. Without knowing your design, my generalized answer has a very high chance of being correct. The issue is that you have a SAN, and you are trying to get latency by measuring responses to a file that goes through four different filters and then has to wait until it gets queued to a disk subsystem that is always expanding. I promise you, your numbers are incorrect.

If you cannot quantify how latency and throughput are different but related, then you should definitely keep reading.

Storage Latency of VM guests

There are many problems with calculating storage latency, but disk is the model we are going to use to illustrate how tricky it can be to find out how your VM is performing.

The most common approach to gathering latency information is to use a command-line tool. Normally the tool will work fine. The model breaks down when the disk itself is changing, along with the RAM and processor availability. The bottom line is that a virtual machine may lie to you about resource numbers at any given time. Add to the mix that the clock cycle is a weakness in any virtualization platform, which means that the calculation of time itself can produce poor results: good math with bad numbers.

There is a crowd of you who will say that is bull. Well, all I can say is: don't read this, and good luck solving your latency issues.

Let me try to list some areas where the numbers may go awry. I am just giving a one-line explanation with a link so you can read more. I don't want this to be about the problems; below, I talk more about the solution. Read more if you have a specific issue:


I could keep going. Do you get the feeling there are a ton of variables that change how storage latencies should be calculated?

From my experience, I have found that every set of servers is its own data set of network behavior. There are some basic assumptions I have found to pass along to admins who want to find out the latency of virtual machines.

Guidelines for VM latency Study

Who to Blame

So again, the basic message is that the calculation of latencies is totally based on the sum of the deployment factors. In one data center you may find under-reporting, and in another over-reporting. Support agents do not have the onus to prove why one is slower than the other. We will have to look at your design and deployment and try to make a story of the things we can identify. It is not likely we will find the moment where the deployment deviated from your baseline storage latency measurements. We offer best effort, but encourage you to strip down your deployment to make a core baseline latency for a dynamically expanding VM. All VMs will be compared to that one, and we go from there.

Using the Stop Gap solutions for Monitoring Virtual Machines

So, just a few years ago this issue with VM monitoring was not easily remedied. You could certainly use the Perfmon counters to get VM stats, but customers just want to run DiskSpd or SQLIO and get an output to look at. That did not exist for quite some time. Thankfully there is now a script out there that carves out some parity with those tools. The link is at the TechNet Gallery:


Hyper-V Performance Monitor Tool (PowerShell)

Below is the walk through of the basic performance collection.

You just run the script from an admin PowerShell. There are a few ways to run it:


### export data to csv via GUI, defaults to current dir
.\Monitor-HyperVGuestPerformance.ps1 -ExportToCsv

### retrieve data as PSobjects, great for parsing and logging, -Name parameter is optional, defaults to automatic discovery
.\Monitor-HyperVGuestPerformance.ps1 -PSobjects

### specify host and interval/samples manually
.\Monitor-HyperVGuestPerformance.ps1 -Name host1,host2 -PSobjects -Interval 2 -MaxSamples 5

### accepts pipeline input, e.g. host names piped in
'host1','host2' | .\Monitor-HyperVGuestPerformance.ps1 -PSobjects

### Log to SQL server with Write-ObjectToSQL, this example uses SQL auth
.\Monitor-HyperVGuestPerformance.ps1 -PSobjects | Write-ObjectToSQL -TableName table -Database db -Server server -Credential (Get-Credential)



If the domain connection fails, it tries for a Local connection:



In my case, I ran the tool on the host, and the GUI below popped up. All I did was hit Monitor, and I got an exported vm_perfmon_stats file. This file can be used to find your latency.


While this method may not be pretty, it does follow the rules for Hyper-V guests. The main purpose of this tool is to be used instead of SQLIO or DiskSpd; tools like those should be used for hardware testing. A Hyper-V server running on iSCSI shared storage with two VHDX files attached is likely going to come back with erroneous latencies. This may not be perfect, but I do believe you will see a consistent result that is not a totally unbelievable number.

Here you can see I changed the sample count and interval:


And I get a time-frame to wait for the test results:


Find the link at the Microsoft TechNet Gallery. Thank you for taking the time to read about storage performance for Hyper-V virtual machines.

I hope this helps in your Baseline Studies.

The result is a nice little Excel display of the data, which I cleaned up a little by adding colors to the Excel fields.




Using the JPerf Graphical Interface for iperf for Basic Network Testing, and DiskSpd for Storage Testing


Late night, y'all.

It's 11:52 PM on 3/28/2017. It has been pouring in Edmond for a few hours now. For some reason I am not sleepy yet, so I thought I would spend a few minutes putting together how to use a tool I was interested in.

JPerf is a Java GUI that runs on Windows and takes the place of the iperf command line. This little GUI should do most of the things you need between two Windows machines. If you want to do it on Linux, you can just use the iperf command line.
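For reference, the raw iperf commands that JPerf drives under the hood look roughly like this. The IP address is made up, and the exact executable name depends on your download:

```powershell
### On the server machine: listen on the default port (5001 for classic iperf)
.\iperf.exe -s

### On the client machine: run a 10-second test against the server
.\iperf.exe -c 192.168.1.10 -t 10

### Optional: report every 2 seconds and use 4 parallel streams
.\iperf.exe -c 192.168.1.10 -t 10 -i 2 -P 4
```

Everything JPerf's checkboxes do maps back to flags like these, which is handy when you want to script a repeatable baseline.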

Half the battle with JPerf is just finding the bits. They are located here:

JPERF Download

There is also a copy of DiskSpd.

The concept of running JPerf is simple:

  • Java must be installed on both machines
  • Unzip the JPerf folder to the Desktop
  • Use PowerShell 5 to run the batch file on both machines
  • The server side just receives the test data; to start the server, you just select the Server radio button and hit Run IPerf in the upper right-hand corner.
  • This will look like this:




To run the tool, all you have to do is run a batch file called jperf.bat:



Now, after starting JPerf on the server machine, you simply run the same batch file on the machine that is going to be the client. Choose the Client radio button, the server port, and the server IP. You have other choices, but you can test various packet configurations after you have shown that the two applications succeed at the basic test.


So now that you have both GUIs up, choose Run IPerf on the server, then on the client. You should see some graphing immediately.


The tool is nice for just showing how the network is working between two points, under various network conditions that you can control.

If you look at the enclosed user guide, you will find where it calls out a baseline number you can use for making comparisons. The average file transfer size and the time for that average transfer, in the last line of the test, is what I am talking about (see figure below).

Take note of these numbers. You can use them as your baseline to compare other situations or other iterations of the test.


That’s IT!

Good luck, and I hope this helps to explain how you can use iperf with a basic GUI to provide good troubleshooting information. The tool is called JPerf, and it uses iperf. It's all in the download folder.

Next Time

Watch for next time, when I will review the replacement for SQLIO. The tool is called DiskSpd. We will talk about why a DiskSpd test is not valid for Hyper-V virtual machine storage; Hyper-V performance is not as easy as just running DiskSpd.

Microsoft even states that dynamically expanding disks should not be used for production. That is also where counters start to come back with irrational results.

Here is something to read on Hyper-V storage optimization until I have time to write the next article.

Auto-Attendant announces there has been a System Error! and hangs up!


Hello all. I had a support case where the answer was right in front of me, but I didn't initially put my hand right on the problem. Actually, there were two incidents with the same customer. Let me share both stories with you.

The first issue was an Auto Attendant that failed to use voice to interface with a user. Instead, it screams out "System error!" In fact, the error simply said there was a system error, and the call hung up. It was very nasty, and there was no definitive help in the event log or anywhere else.

The workaround plan to fix this was to simply upgrade the Exchange CU and the .NET Framework version. As it turns out, even that would not have helped this situation.

So if your environment is just one gateway, one dial plan, one Auto Attendant, and one hunt group, there is a good chance you won't have this problem. However, that may also be the perfect setup for the problem. The issue looks like the product of recreating the dial plan multiple times. Below is a good result. But what if the results were the same for three lines in a row? If you only have one dial plan, that would be something to be alarmed about!


Not so much the picture above, but the one below:

So with multiple hunt groups per dial plan, the search behavior changes and you may get an undocumented failure. Not so much because it's a problem failure, but because, oftentimes, documentation is written in terms of what an application will do, not what it won't do. So, word to the wise: check your hunt groups.

I saw the problem when there are multiple hunt groups and only one dial plan. But I think you can see how a search outside of the local dial plan could cause a problem.

When you run the Get-UMHuntGroup command, you will see the hunt groups displayed, one set for every dial plan. This is an area where old hunt groups may pile up from dial plans you created and deleted.

It's fine to have old hunt groups, but if they have the same name, there could be a system error.
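One way to spot duplicate hunt group names across dial plans is a quick sketch like this, run from the Exchange Management Shell (the property selection is just what I find useful; adjust to taste):

```powershell
### List every UM hunt group, then flag names that appear more than once
Get-UMHuntGroup |
    Select-Object Name, UMDialPlan, PilotIdentifier |
    Group-Object Name |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object { $_.Group }
```

Anything this outputs is a hunt group name shared by more than one object, which is exactly the situation that can trip the system error.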

The system error occurs if you have a ghost hunt group object and:

the name of the hunt group matches the one you are using for your production dial plan, and you have "in the entire organization" checked in the address book and operator access settings of the Auto Attendant configuration.


Changing the radio button to "this dial plan only" protects the Auto Attendant from issues with other hunt groups or dial plans. This is a lesson learned for me! My default setting was not "this dial plan only". Such an easy thing to miss!

This seems more like a corner case than a call driver, but I have to mention it, as I am sure we will see it again! Have a great Roman holiday!



Move-CsUser Fails from the Command Line when Migrating a User to a New Pool, but Succeeds from the Graphical Interface


Hello All,

I am hoping to make some videos about SFB, but I am still low on time. In the meantime, I hope these articles are helpful to some. My friend called me with an interesting problem: his Move-CsUser command failed from the command line, while the GUI move succeeded. Below I provide a few things to check and set to repair the issue.

Figure 1. Roman Numerals of Lync Issues Colosseum-Entrance_LII


There are a couple of reasons for the failure you are having. I will list them below, along with the most plausible solutions:

I. The difference between the command line and GUI is permissions related. When you open the command line, you need to be a member of the following groups:

  • 1. RTCUniversalUserAdmins (not CSUserAdministrators)
  • 2. CsAdministrator
  • 3. I know you think you have the proper permissions, but please check; this is often gotten wrong
      • a. You will check and see you have two memberships: CsAdministrator and RTCUniversalServerAdmins
      • b. That is not enough; you need to be a member of CsAdministrator and RTCUniversalUserAdmins
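A quick way to check which of these groups your account is actually in is a sketch like this (it assumes the ActiveDirectory module is available on the box; the group names are the ones from the list above):

```powershell
### Show which of the required RTC/CS groups the current user belongs to
Import-Module ActiveDirectory
$required = 'RTCUniversalUserAdmins', 'CsAdministrator'
Get-ADPrincipalGroupMembership -Identity $env:USERNAME |
    Where-Object { $required -contains $_.Name }
```

If either group is missing from the output, fix the membership and open a fresh shell before retrying Move-CsUser, since group changes only take effect on a new logon token.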

II. The other side of this issue is the user. The user may have been one of many users whose account was created without inheritable permissions. The Lync move command will fail! Fix it before running the move command!

Move command fails due to user permissions

III. Is the user a legacy OCS user? Your error contains the text OCSADUser. Without the full text of the error there is some guesswork here, but perhaps try this out:

Lync fails to move between pools

    • a. Port 135 is blocked between pools. (Not sure how the GUI gets around that.)
    • b. Run Get-CsManagementStoreReplicationStatus on all servers. Correct any failures.
    • c. Check any SBAs; they need the right ports open, etc.
    • d. Did you try -Force yet? Try it out. If it succeeds, then we likely have a data issue.
    • e. Run Get-CsFabricPoolState and Get-CsBackupServiceStatus. If either fails, then we know this needs to be fixed first.
    • f. Move-CsLegacyUser -Identity “”-Target “

IV. Are the users potentially legacy OCS users? They could be. Try Move-CsLegacyUser

V. Whether legacy or not, the database may have a problem. Try checking the database using the items below.

  • a. The error in this link may not match, but it contains the how-to for checking for database corruption with DBAnalyze.
  • b. If the user database is not right and you can't repair it, then you may have to homogenize the data by completing the CMS move or moving the CMS to another machine.
  • c. Or you may want to export and import the user data after running -Force on the move command; see item VIII below.

VI. User or pool attributes are wrong, corrupt, or not changeable in AD. Note the following attributes. You can even change them manually if you know the values for the desired state. For the pool:

  • a. msRTCSIP-PoolDomainFQDN
  • b. msRTCSIP-PoolDisplayName
  • c. msRTCSIP-BackEndServer

For the user:

  • a. msRTCSIP-UserRoutingGroupId
  • b. msRTCSIP-UserEnabled
  • c. msRTCSIP-PrimaryHomeServer

VII. Lync Server Move-CsUser and Move-CsLegacyUser commands fail with an error like "SetMoveResourceData failed because the user is not provisioned."

VIII. This is a perfect little process if -Force works. The commands are restated below. Thanks, FlinchBot:

  • a. Export-CsUserData -UserFilter “” -Poolfqdn -filename “e:\
  • b. Move-CsUser “” -Target -Force
  • c. Update-CsUserData -UserFilter “” -FileName “e:\” -Verbose

IX. If you move back a version, it will automatically fail without -Force. Here is a long-time disclaimer:

“WARNING: Moving a user from the current version to an earlier version (or to a service version) can cause data loss”

X. I just had to get to ten; now I know my Roman numerals. OK, I am leaving you with a more complex example, which combines two of my fixes from above. I think I have captured a good number of the reasons why Move-CsUser may fail.

Bonus #11 – Issue with Move command and AD Connect


I hope this has been fun and informative. This is a summary article about the many reasons you may not be able to run Move-CsUser from the command line. I will leave you with a couple of last articles that have to do with getting all the user objects that may be causing things to fail. You can manually parse the list to see if any show up with a problem.




Update all of your Skype For Business Servers



Good morning, class. Today I just wanted to put into your hands a needed cheat sheet that brings together all of the update steps for SFB into one simple upgrade document for any set of SFB servers. So let's begin.

Pre-Requisite Install work for Skype for Business Updates

Updates should be done in the following groups, in the following order:

  • Standard Edition Servers
  • Front End Servers
  • Mediation Servers, Directors, and Edge Servers
  • Back End SQL Servers

To begin, if you have Skype for Business (SFB) Standard Edition, you will follow this process:

Standard Edition Updates for SFB Server Environment

  • 1 Stop-CsWindowsService
  • 2 net stop w3svc
  • 3 SkypeServerUpdateInstaller.exe
  • 4 Once this is complete, move to step 5
  • 5 Close the update window and open a new SFB Management Shell
  • 6 Start-CsWindowsService
  • 7 net start w3svc
  • 8 Depending on your database setup, you run one of the following:
    1. Install-CsDatabase -Update -ConfiguredDatabases -SqlServerFqdn <SQL Server FQDN>
    2. Install-CsDatabase -Update -ConfiguredDatabases -SqlServerFqdn <SQL Server FQDN> -ExcludeCollocatedStores
    3. Install-CsDatabase -Update -LocalDatabases
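Put together, a Standard Edition pass looks roughly like this sketch. The installer path is a placeholder, and I show only the local-databases variant of the database update; pick the variant that matches your setup:

```powershell
### Stop Skype for Business services and IIS before patching
Stop-CsWindowsService
net stop w3svc

### Run the cumulative update installer (path is an example),
### then close and reopen the SFB Management Shell
.\SkypeServerUpdateInstaller.exe

### In the new shell: start services and IIS again
Start-CsWindowsService
net start w3svc

### Update the databases (local variant shown; see the list above)
Install-CsDatabase -Update -LocalDatabases
```

The key detail is stopping services before the installer runs and starting them again from a fresh shell afterward.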

If you have Standard Edition SFB, stop; you have completed all you need for your deployment. If you have SFB Enterprise, follow the steps below.

SFB Enterprise Updates of Servers and SQL

Front End Servers are patched first. Patch one pool at a time, one server at a time.

Run: 1. Get-CsPoolUpgradeReadinessState

Only if you get a failure, and only if your results show a missing replica, do you run these commands:

    • Get-CsPoolFabricState -PoolFqdn <PoolFQDN>
    • Reset-CsPoolRegistrarState -ResetType QuorumLossRecovery

Otherwise Continue

  • 2. (Ignore if non-clustered or mirrored) Invoke-CsComputerFailOver -ComputerName <Front End server to be patched>
  • 3. Get-CsWindowsService (services will be running) & Get-CsPoolFabricState (the fabric will show one less server in the pool)
  • 4. Run the latest installer package
  • 5. Don't be impatient!
  • 6. Only when the updates are done, move to step 7 or 8
  • 7. (Ignore if non-clustered or mirrored) Invoke-CsComputerFailBack -ComputerName <Front End>
  • 8. Check for a pending restart. Be aware you may want to do the restart before moving to the next server
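For an Enterprise pool, the per-server pass above can be sketched like this. The server name and installer path are placeholders:

```powershell
### Check the pool can tolerate losing one server before you start
Get-CsPoolUpgradeReadinessState

### Drain and fail over the server to be patched
Invoke-CsComputerFailOver -ComputerName fe01.contoso.com

### Run the installer and wait for it to finish completely
.\SkypeServerUpdateInstaller.exe

### Fail the server back into the pool
Invoke-CsComputerFailBack -ComputerName fe01.contoso.com

### Restart here if the installer flagged a pending reboot,
### then repeat for the next server in the pool
```

Resist the urge to start the next server while the installer is still working; that is what step 5 above is warning you about.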

Mediations Servers, Director and Edge Servers Patch One at a time

The steps are the same for this group of servers as well. However, they need to be completed as a separate group. So begin with the same steps, and when complete, make sure you have restarted your servers if needed. Edge servers should be done together as well.

Back end SQL Servers and other SQL Servers

Once all of these servers in your deployment are updated, you need to update the SQL instances:

  • 1. On the Back End SQL machines, OR on the master FE server of your pool (RTCLocal), or the Monitoring database:

      • Install-CsDatabase -Update -ConfiguredDatabases -SqlServerFqdn
  • 2. If any of #1 is on the back end with the root BE database instance, use:
      • Install-CsDatabase -ConfiguredDatabases -SqlServerFqdn <FEBE.FQDN> -ExcludeCollocatedStores -Verbose

Once you have completed the final step, you can then run the command Start-CsPool, which should cause the SFB pool to verify the pool fabric is healthy and start everything up again properly.

Having done this, you will have successfully updated your Skype for Business environment. I hope this makes it a little easier to carry out your updates.

Yours Truly.