For anyone trying to troubleshoot a slow SQL Server, I wanted to come up with a test that takes the SQL issue and generalizes it. Why does this need to be generalized? I have found that a customer or a support team may introduce bias into every aspect of a test, beginning with the data. Customer data makes it impossible to produce a repeatable result. You may say this database does not run as fast as your favorite one on a separate server, but you cannot accurately prove one server is faster or slower than another that way. Why? For the basic idea, take a look at another case where I lay out some basic testing tenets to go by. I will restate them here. They sound like car rules, but they are universal testing rules you can apply to any situation.
From Car Rules to Computers
The performance should be documented and repeatable.
More than one test should be run, and simple is usually more realistic.
Tests should be standardized, down to a science, so that if applied to another matching scenario, you would expect similar results.
Keep the test short. The longer the test, the more variables can be introduced.
Do not focus on why two separate car models do not perform the same; instead, introduce a baseline for what a reasonable car should perform like. Then prove or disprove your baseline.
In order to get a good, unbiased test result for SQL, I came up with a dynamically created SQL database that gets created once. Once created, you can run some tests on this standardized database and compare with results on, say, your laptop, or another machine where your processor, memory, and disk resources are similar. All you have to do is follow the method. One simply must not use one's own data.
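The actual CreateSupportTest.sql is not reproduced here, but a minimal sketch of the idea might look like the following: every row is generated, identical, and synthetic, so no customer data can bias the result (all object names are hypothetical):

```sql
-- Hypothetical sketch of the idea behind CreateSupportTest.sql:
-- a standardized database created once, with fully synthetic rows.
CREATE DATABASE SupportTest;
GO
USE SupportTest;
GO
CREATE TABLE dbo.BaselineRows (
    Id        INT IDENTITY(1,1) PRIMARY KEY,
    Payload   CHAR(100) NOT NULL DEFAULT REPLICATE('x', 100),
    CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
-- @RowCount is the single variable you adjust (the default is 1 million).
DECLARE @RowCount INT = 1000000, @i INT = 0;
WHILE @i < @RowCount
BEGIN
    INSERT INTO dbo.BaselineRows DEFAULT VALUES;
    SET @i += 1;
END;
```

Because every machine generates identical rows, the only number that differs between machines is the elapsed time.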
The SQL baseline for customers who report Server A is slower than Server B.
When a customer claims that one machine is slower than the other, there is always the possibility the customer has an actual baseline. However, when they say one is slower than another, this usually indicates they don’t know what a baseline is.
A baseline is a collection of metrics about the server, taken at greenfield time. When the server is first deployed with SQL, a baseline should be taken. Future claims of a slow server should then be measured against the server itself, not against another server.
When a person wants to compare two servers, it is almost an impossible ask. It's like asking us to explain why two people do not complete a personality test in a similar way. From a support standpoint it is a fruitless pursuit, and trying to fulfill the request often creates a bad customer experience (CE).
The goal of this process is to give Support and the customer a way to meet on common ground. The customer claim that the server is slow may as well be translated into "The data on my servers does not match!" And they are correct. And we don't support data. The key word is data. This question of "slow-er" pulls us into the customer's data sets.
This process gives us a way to use our own data set. The advantage cannot be overstated. We will be telling them whether one machine is slower or it is not.
Accepting that one machine is generally slower, do not underestimate this result as the customer re-introduces his production elements. If the baseline test shows a machine 20% slower, then any difference of more than 20% will be due to specific workloads introduced by the customer. All of the SQL subject matter experts have known this, but we all spend weeks trying to find the leverage to prove it. Without an absolute, we could not substantiate that claim, and these cases lasted for months. The method below should cut these down to a two-day case at most.
In the following test for SQL you will see four files, which compose a method of baselining SQL performance without using biased data from the customer or a third-party company. This test avoids the implications of caching and indexing, so it is a perfectly simple test to illustrate the capabilities of two machines.
This test was devised due to customer demand. Customers often ask us to compare two adjacent machines. Often these comparisons can only be done using apples-to-oranges methods, and the cases end up being a point of contention for the customer and for support teams. The goal of this test is to mitigate that disparity.
Here are the files you will need
Figure 1. Files you will need
The Excel spreadsheet is to be filled out and returned to Support. We keep a master copy of this spreadsheet to monitor the script's performance against a variety of machines and situations. Over time, we will have a database of how this script performs, on average, across a multitude of platforms. And the simple measure we are obtaining is time: how long does it take the baseline query to complete?
The results of the script should answer the question: is my server really slower than average, or slower than another server? To answer it, strict adherence to the rules must occur. This test must be run with all other operations terminated on the SQL server. There should be no antivirus running and no other applications running. Other than a baseline Windows machine with core applications and services running, the server should be running SQL with no client connections. In other words, the SQL machine needs to be out of production. There are columns in SupportBaseline.xlsx to record otherwise, but it will be noted in the analysis that the machine was in production and that the results may not reflect a true baseline.
Several baseline runs can be collected, with the single variable being the total number of rows this script will create. The default is 1 million, and a million rows is the recommendation on average. However, depending on how powerful the server is, or how much downtime you are allowed, you can adjust this variable to fit your needs. CreateSupportTest.sql is the file where this change is made; see below.
Figure 2. Where to adjust how long the script will run
How long do I run the initial test?
As a general rule, 1 million rows should take less than 15 minutes on a reasonable SQL server. However, performance degrades fast; for example, a SQL VM with only 3 GB of RAM will take 121 minutes to run the query. So the first run should be 100,000 rows. Then multiply the length of time it takes to complete by 10.
That is how long a million rows should take to complete. You can judge how many rows to choose depending on how long you want the query to take.
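If you prefer to let SQL Server do the timing for you, here is a small sketch of the trial-run arithmetic (the wrapper procedure name is hypothetical; the real script may record time differently):

```sql
-- Time a 100,000-row trial run, then multiply by 10 to estimate a million rows.
DECLARE @start DATETIME2 = SYSUTCDATETIME();

EXEC dbo.RunSupportTest @RowCount = 100000;   -- hypothetical wrapper for the test

DECLARE @trialSec INT = DATEDIFF(SECOND, @start, SYSUTCDATETIME());
SELECT @trialSec      AS TrialSeconds,
       @trialSec * 10 AS EstimatedSecondsPerMillionRows;
```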
Record the results in the Excel spreadsheet SupportBaseline.xlsx. Use the start and stop times and it will auto-populate the execution time.
Repeat as necessary, populating the spreadsheet and returning it to Louis Reeves in Support. He is keeping the overall list of how the query runs in several different scenarios and can give you more information about how your query results compare to other machines running the same query.
When you are finished testing a server, there are two cleanup scripts. Run DropSupportTest.sql; here is a how-to if you need it.
That's it. Now you can complicate things by running tools like Diskspd against these machines, but it will be best to keep it simple and stay with the program laid out here. If you want to look at Diskspd, go ahead and read The Fallacy of Performance, or: Are You Bringing Your Support Agent Apples or Oranges? That will help you plan your Diskspd commands. So here you really have two ways of testing the claim of a "slow" server:
I hope this series of articles is helpful in troubleshooting issues with model data.
For all you Terminal Server, Remote Desktop Services, or RDP geeks out there, let me spend a minute clarifying a call driver that continues to be popular.
The scenario is deploying Remote Desktop Services in a workgroup. Call this a corner case, or call it what you will. The reason this is a popular support call is that two articles are needed to complete the setup.
Oh sure, Microsoft does tell you to add a policy after your setup, but they only specify that you use GPEDIT. Not much help there… until today!
First you need to deploy the roles correctly. The specific KB I chose for this article is the one you would use for the simplest setup, one that keeps you clear of the very common missteps of walking through the setup in Server Manager. If you did your deployment correctly, you didn't even need Server Manager.
So far, so good. Now this is where we start to diverge from some existing documentation.
If you are in a workgroup, go to “edit local users and groups”
Find the group folder and create a group for your RDP users and add your users to this group.
Alternatively, you may add your users to the Remote Desktop Users group that is already there.
Remember the group you are using; it becomes important later.
Now you are going to edit the Local Policy by doing the following:
Start and Run GPEDIT.MSC
Navigate to the following:
Local Computer Policy ->Computer Configuration-> Administrative Templates-> Windows Components-> Remote Desktop Services-> Remote Desktop Session Host-> Licensing
Figure 1. GPEDIT.MSC
You will now see the two settings you will be enabling.
Use the specified Remote Desktop license servers – Value: the IP address of the RD license server.
Set the Remote Desktop licensing mode – Value: 2 or 4. 2 is for Device CALs and 4 is for User CALs.
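For reference, these two policy settings correspond, to the best of my knowledge, to registry values under the Terminal Services policy key; verify the value names on your own build before relying on this, as the GPEDIT UI remains the supported path. A hedged PowerShell sketch:

```powershell
# Assumed registry equivalents of the two GPEDIT settings above.
# The license server address is an example.
$ts = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
New-Item -Path $ts -Force | Out-Null

# 2 = Per Device CAL, 4 = Per User CAL
Set-ItemProperty -Path $ts -Name 'LicensingMode' -Value 4 -Type DWord

# Point the session host at your RD license server
Set-ItemProperty -Path $ts -Name 'LicenseServers' -Value '192.0.2.10' -Type String
```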
Figure 2. Local Policy.
Now there is another policy to set. For this, go back to the top and start out at Local Computer Policy like before (GPEDIT.MSC). See Figure 3.
Figure 3 GPEDIT.MSC
Expand Computer Configuration, expand Windows Settings, expand Security Settings, expand Local Policies, and then click User Rights Assignment.
Enable this policy and add the group you used earlier (it is highlighted in this article above). In addition, add the Remote Desktop Users group to this policy if desired. Do not add your administrator name here; the admin already has access, and adding your admin name can lock you out. So it is best to stick to adding the Remote Desktop Users group.
Notice the list says Administrators (plural); that is fine, but the single Administrator account should not be in this list. There is a well-known break if you do that. When you are finished, you just need to add the users you want to give access to, to the group we just added to the policy (likely the Remote Desktop Users group).
Step 8 comes right out of KB2833839.
Open an elevated Windows PowerShell prompt
Type the following command on the PS prompt and press Enter:
$obj = gwmi -namespace "Root/CIMV2/TerminalServices" Win32_TerminalServiceSetting
Run the following command to set the licensing mode:
Note: Value = 2 for Per device, Value = 4 for Per User
Run the following command to replace the machine name with License Server:
Run the following command to verify the settings that are configured using above mentioned steps:
You should see the server name in the output.
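The individual commands referenced in the steps above come from KB2833839; as best I recall them, the full sequence looks like this (replace LicServer with your own license server name, and check the KB for the authoritative version):

```powershell
# From an elevated PowerShell prompt, per KB2833839:
$obj = gwmi -namespace "Root/CIMV2/TerminalServices" Win32_TerminalServiceSetting

# Set the licensing mode: 2 = Per Device, 4 = Per User
$obj.ChangeMode(4)

# Replace the machine name with your license server
$obj.SetSpecifiedLicenseServerList("LicServer")

# Verify the settings configured in the steps above
$obj.GetSpecifiedLicenseServerList()
```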
You have now covered all your bases, and your RDP deployment should be happy! It will be happy because you paid attention to all the right things!
Now, I did find an interesting article that I can't really comment on. However, it is an interesting read, and it deals with some issues you could run across.
Well, that is it. I hope this has been helpful. This can basically be set up for workgroups and non-workgroups alike.
I wrote this article for several reasons. There are a lot of reasons why you may want to go from a software team to hardware, or vice versa. I make no comment as to why you would do either.
That is to say, there are some circumstances where you are better served with a software team, and some where you will see better results with hardware and the NDIS driver.
I will leave the why up to you. However, if you have the need, here is a rough guide to get you through converting your network on a software-teamed 2012 or 2016 cluster to hardware NICs and isolation.
This article is high on text and low on screenshots. All I can do is apologize in advance.
Some Additional titles:
Converting the Windows Cluster from Software Teaming to Best Practices, Using Hardware Isolation.
Another title to this article could be switching from Software teaming to Hardware isolation for CSV clusters.
My software-teamed cluster performs poorly.
Two Standards in two Documents
Why would you perform this change? You would do this if you're using 1GB NICs for your cluster and software teaming is not working well for you.
I have found that the network changes around 2012 may be better suited to 10GB NICs than 1GB NICs. I find that 1GB NIC aggregation may not always be the best way to set up a cluster, depending on the needs of the customer and the type of software running.
I find that the original Network Best practices suit some customers from a maintenance perspective, as well as ease of understanding, when the IT persons are not full time network admins.
This article requires that your storage networks are on a separate switch from the clustering networks.
Furthermore, the cluster networks should ideally be isolated with VLANs, by subnet, to prevent any collisions of packets from a unique subnet. If the network is flat, that may not stop you from moving to this setup; just know that some isolation is recommended. But using a separate subnet per virtual network is still a must.
The Design of your network should be based on the Initial Configuration Guide for Hyper-V networks for a CSV cluster (2010).
If you do a Bing search for "network guide for Hyper-V clustering", you will find this article.
The 2010 documents lay out the premise of Windows clustering, with the goal of having CSV live migrations optimized. The 2013 documents show the embedded method of isolating the traffic within the teamed interfaces.
The Opinion Section
Having been an engineer on lots of Hyper-V cluster cases, I can say the main reason for CSV cluster issues was some form of misconfiguring the network relative to the best practices.
In 2012, we added the additional choices of using a Microsoft software team and/or LBFO options, using scripting to send the traffic down specific virtual network adapters.
The change in Server 2012 and 2016 has changed the perception of what can be done with the new technology. I now see the majority of Hyper-V networks using Microsoft teaming, which is fine. It's fine as long as you are using virtual LANs, creating the isolation along with the team. So the whole message of Server 2012 and 2016 is teaming and isolation. What I am seeing all too often is just the teaming portion.
For this reason, I am bringing back the original specifications to make the statement that isolation is the key element of a CSV cluster. Teaming can be fine, but proper isolation comes first.
This means that if you are not a network person, and don't have access to a network department to set things up for you, then you may want to stick with the original guidelines, which I am about to present to you. In this article, we are going to convert a cluster from a Microsoft teamed network to a vanilla hardware-isolated cluster network.
The assumption here is that all of your network adapters have been thrown into a team or two, and no VLANs were created to isolate your traffic. If there were VLANs created, you simply use PowerShell to delete them: take the commands you used to build the team, changing New-NetLbfoTeam to Remove-NetLbfoTeam, etc.
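As a sketch of that teardown, assuming a team named "ClusterTeam" and a VLAN ID of 20 (both hypothetical), the PowerShell looks roughly like this:

```powershell
# List what exists before deleting anything.
Get-NetLbfoTeam
Get-NetLbfoTeamNic -Team "ClusterTeam"

# Remove any VLAN (team NIC) interfaces created on the team.
Remove-NetLbfoTeamNic -Team "ClusterTeam" -VlanID 20 -Confirm:$false

# Remove the team itself, returning its members to stand-alone physical NICs.
Remove-NetLbfoTeam -Name "ClusterTeam" -Confirm:$false
```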
You will start on a node that is not being used by the cluster. We will make the changes one node at a time. When the first node is done, move to the next.
Please understand, this is a major change. You will likely have to take an outage to get this done across the whole cluster. I would make the changes on the first node (Node 1), then shut all cluster nodes down, bringing up the node you converted. This will now be the primary cluster node, and as you update each additional node, they will come online into the cluster represented by your Node 1.
How to change your cluster, one node at a time
So here are the steps:
Run ipconfig /all > c:\ipconfig.txt on all cluster nodes.
Pull the MACs of the software-teamed NICs in Server Manager -> NIC Teaming -> Teams.
Save these files, even placing them in a spreadsheet, so you can match up the NICs you will be using. You need to make sure that you understand which NIC on each server will play each of the roles: VMNIC, Live Migrate, CSV, Management, Storage, and Replication.
The columns of your spreadsheet are: NIC name, NIC description, MAC address, old IP address, new IP address, cluster workload, and VLAN if necessary.
Remember not to record NICs that have parentheses around them or are multiplex adapters. These will be going away; we will rely only on physical adapters.
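One way to collect the spreadsheet columns above without transcribing by hand is a short PowerShell pass on each node; a sketch (output paths are examples):

```powershell
# Capture the raw IP configuration, as in step 1.
ipconfig /all > "C:\$env:COMPUTERNAME-ipconfig.txt"

# Export name, description, and MAC for physical NICs only,
# skipping multiplexor (teamed) adapters as advised above.
Get-NetAdapter -Physical |
    Where-Object { $_.InterfaceDescription -notmatch 'Multiplexor' } |
    Select-Object Name, InterfaceDescription, MacAddress |
    Export-Csv "C:\$env:COMPUTERNAME-nics.csv" -NoTypeInformation
```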
The rows of your spreadsheet are as follows:
VMNIC (ex. no subnet)
LIVE MIGRATE (ex. 10.10.11.x)
CSV (ex. 10.10.12.x)
HOST (ex. 10.10.13.x)
Replication (ex. 10.10.14.x)
So your subnets need to be assigned per the list above. The VMNICs won't have an IP because they will all be dedicated to the Hyper-V switch. The easiest way to express this is that every Live Migrate NIC will be on the 10.10.11 network, every CSV NIC on the 10.10.12 network, every host NIC on the 10.10.13 network, and every Replication NIC on the 10.10.14 network. Every host will have a dedicated NIC for each of these networks.
Replication above would be ideal. If you don't have enough NICs to have this network, that's OK. This NIC may also be called a Storage NIC; if so, it will have Client for Microsoft Networks disabled, and Register in DNS unchecked.
The cluster workload column of your spreadsheet can be populated from the section below called Workloads.
Hopefully, these tidbits have helped you prepare for the actual work, because here we go. This will need to happen on every cluster Node.
Go into every VM and change the NIC to Not Configured.
Remove all network adapters from the software teams.
Delete the virtual Hyper-V network, if external.
Verify that all virtual NICs are gone. Restart the server if they are not.
Label 5 (or more) NICs as follows (minimum 4): Management, Live Migrate, CSV, VMNIC, and
Storage1, 2, etc.
Go to Programs and Features and right-click the network adapter management application.
Select the "Change" option.
Add NDIS if Broadcom, or whatever the acronym is for the advanced service in your NIC brand's installer.
Place an IP, subnet mask, DNS, and gateway on the Management NIC.
Place only an IP and subnet mask on the Storage NICs. Disable Client for Microsoft Networks, and uncheck Register in DNS.
Re-create the Hyper-V network using the VMNIC. It will be external and dedicated to the VMs (check the box).
For 1GB NICs, go to every NIC in Device Manager and disable VMQ. For 10GB NICs, leave it enabled.
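Several of the GUI steps above can also be done in PowerShell on 2012/2016. A hedged sketch, with example adapter names and addresses (substitute your own):

```powershell
# Rename and address a role NIC (example: Live Migrate on 10.10.11.x).
Rename-NetAdapter -Name "Ethernet 2" -NewName "LiveMigrate"
New-NetIPAddress -InterfaceAlias "LiveMigrate" -IPAddress 10.10.11.21 -PrefixLength 24

# Storage NIC: IP only, Client for Microsoft Networks off, no DNS registration.
New-NetIPAddress -InterfaceAlias "Storage1" -IPAddress 10.10.15.21 -PrefixLength 24
Disable-NetAdapterBinding -Name "Storage1" -ComponentID ms_msclient
Set-DnsClient -InterfaceAlias "Storage1" -RegisterThisConnection:$false

# Re-create the external Hyper-V switch on the dedicated VMNIC,
# not shared with the management OS.
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMNIC" -AllowManagementOS:$false

# 1GB NICs only: disable VMQ (leave it enabled on 10GB NICs).
Disable-NetAdapterVmq -Name "VMNIC"
```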
That's it. Once you have done the second machine, make sure you can ping across to the corresponding NIC on host 1.
As long as you pass the ping test, you may move on to changing the other nodes in the cluster.
Once you have completed the changes and all machines ping properly, you may restart the nodes in a round-robin style if you have any kinks.
The only thing left to do is to set your priorities for the networks. The priorities are in the center pane of cluster manager; choose Networks in the left selection pane.
The settings should be changed to the following:
VMNIC – NONE (dedicated to VMS)
LIVE MIGRATE – Cluster Communication only
CSV – Cluster Communication only
Management- client only
Replication- Cluster and client
Storage1, 2 etc. – N/A no cluster communication
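The same role settings can be applied with PowerShell. A sketch, assuming the cluster networks picked up the names used above (yours may be named Cluster Network 1, 2, and so on); note that the cluster exposes no literal "client only" role, so Management is set to cluster-and-client here:

```powershell
# Role values: 0 = not used by the cluster, 1 = cluster communication only,
# 3 = cluster and client. The VMNIC has no IP, so it never appears here.
Import-Module FailoverClusters

(Get-ClusterNetwork "LiveMigrate").Role = 1
(Get-ClusterNetwork "CSV").Role         = 1
(Get-ClusterNetwork "Management").Role  = 3   # clients connect through this one
(Get-ClusterNetwork "Replication").Role = 3
(Get-ClusterNetwork "Storage1").Role    = 0   # no cluster communication
```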
And that's it. Once you have that established, your cluster will now be working on physical NICs only.
Now, with the NICs you have left, you may go back and add selective hardware teaming where it makes sense. No teaming on storage NICs. Honestly, it may be better to wait and see if you have any area of weakness. For example, if the VMs seem slow during intense workdays, you may remove the VMNIC from Hyper-V, team it, and then put the multiplexor back in as the VMNIC used for Hyper-V VMs.
I hope this has been helpful. Please remember that this works automatically because the cluster is able to identify NICs on different servers and automatically matches them by subnet. The NICs from the various servers will show up together in the network tab because they are on the same subnet. As such, the cluster knows they will be performing a specific job in the cluster, based on the settings laid out in this article.
This is not better or worse than software teaming, but it may be the right choice for some IT teams who are more familiar with hardware NIC assignments as part of their job description.
Also, please understand the days of teaming with 1GB NICs are coming to a close. 10GB NICs are quickly becoming the standard. The difference between these setups becomes moot when you're no longer using 1GB adapters for clustering.
Hello everyone. A strange series of events led me to this solution. I had three installs of Windows, done from three different media, with a different version of Server 2012 on every install.
This includes all versions of Server 2012 and 2012 R2; the minimum media was RTM. In every single install, server updates failed no matter what I did.
There is a variety of things seen in windowsupdate.log. However, you are ultimately going to find yourself looking at updates to remove, install, or otherwise. If updates cannot be installed, manually or otherwise, why don't you go ahead and try this first? If it doesn't work, go on with your troubleshooting. I think you will be surprised.
1. Run GPEDIT.MSC.
2. Choose Computer Configuration.
3. Choose the fifth selection from the top in the center pane:
"Specify intranet Microsoft update service location"
4. The policy default is Not Configured.
5. Change this to Disabled.
Now close the policy editor and restart the Windows Update service. Then go back to the Windows Update center, and you should notice that the screen has changed. If not, go ahead and click the Check for updates button.
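For reference, that policy is backed by the WindowsUpdate policy key in the registry, so you can inspect (and, to the best of my knowledge, clear) the same state from PowerShell; verify the paths on your build before relying on this sketch:

```powershell
# A stale WSUS entry under this key is the usual culprit.
$wu = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
Get-ItemProperty -Path $wu -ErrorAction SilentlyContinue |
    Select-Object WUServer, WUStatusServer

# Rough equivalent of setting the policy to Disabled:
Remove-ItemProperty -Path $wu -Name WUServer, WUStatusServer -ErrorAction SilentlyContinue
Set-ItemProperty -Path "$wu\AU" -Name UseWUServer -Value 0 -ErrorAction SilentlyContinue

# Restart the Windows Update service so the change is picked up.
Restart-Service wuauserv
```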
In my case, there was a message that said Microsoft Update needed to update itself. It automatically sent a file down and repaired itself. Updates worked normally after that!
I wanted to share this one, as it looks to me that this could be a very large call driver for support centers.
I hope this fix finds you, and I hope you fixed your customer in a quick timeframe.
As proof that I have had a lot of Windows Update failures lately, I present the weirdest one I have seen, called Audit Mode Failures to Update. Take a look at that one if this article does not fix you up; it has some basic troubleshooting steps in it.
To begin, let me share an article that articulates what many of you have gone through to get to this point. Many of you started at Lync 2010 and are now on Skype for Business. Are you using the same hardware? If so, your problems have come from the past to haunt you.
*Disclaimer: I use Lync, SFB, and Skype for Business all to refer to the same general product. I believe we are all moving to just Skype, but it would be nice if MS rolled it all into just Skype. One word, please!
This article is written for a customer I have in a nation near the United States. He has a well-used Lync 2013 deployment. He has also had a lot of trouble, from having calls fail on Fridays to having the web sites fail on Wednesdays, every week, for several weeks in a row.
I had some time to look at this case from a distance, after being very close to the situation for several weeks in a row. Through a series of chance activities, this customer ended up going to the VMware department instead of Lync. This was a very good route! As it turns out, the VMware agent caught the virtual machine with 4 GB of RAM!
What a coincidence! I had caught the same customer changing his resources myself! That was all it took. All the pieces were clear.
I need to express in no uncertain terms how vigilant you must be if you decide to virtualize Lync or Skype for Business! While there may be some issues with Lync and Hyper-V, I am being honest when I say that most of the virtual issues have been with VMware. Now, I know I may take flak for that statement. I am a Microsoft support agent, so I do have a bias. But I don't make that statement to support my own brand. I am a Lync professional, and I am telling you my honest experience over 7 years with Lync and Skype for Business.
With that said, let me break it down for you, as much as I can. Let me first bulletize what the supported Lync Server would look like on ESX:
Skype Server on ESX
1. The VHDX needs to be fully expanded, and should not be a remote (iSCSI) disk.
The Ram must not be dynamic, and must be a minimum of 24GB of ram. 32 or more is recommended.
The processors must not have NUMA complications, and Hyper threading should be disabled. The ESX should only be presenting physical processors on a machine where SFB is installed.
Networking is in flux; I think this is where some issues are still being resolved. Let me say that I recommend you use a dedicated physical NIC for the VMware VM, whichever NIC type that is. VMXNET3 NICs seem to have an issue, but please keep reading the documentation. Example and Example
The Balloon Driver needs to be disabled completely.
A Lync/SFB machine should not be moved, or migrated from host to host, even for maintenance.
Vms should be left alone to run, and cannot stand having resources change at all. This is a real time application, and cannot be live migrated like many machine types can.
Do not use snapshots on Vmware, as this will just cause an issue, if you ever tried to roll back. Just full back up, and restore if necessary.
Do not install Antivirus or threat protection on Lync systems.
Rely on the security provided by an Edge and reverse proxy server.
I suppose I could keep going. So there, I said it! I don't know if you're getting the feeling that Lync should be on a physical machine? I have seen many VM Lync deployments, and I won't go so far as to say that. I will say: if you choose SFB and you deploy to ESX, you must remain committed to the best practices. You can't slip even once. I guarantee you will see problems.
EDIT Due to Feedback from another Skype Fan
I was called out on the fact that I cannot maintain that Skype cannot be virtualized. In today's market, we must be able to function with any kind of hypervisor. I do not disagree with that position. Believe me, I did not want to come off that way. So let me clarify: virtualization is not the problem so much as resource control. I am not backtracking much. I am saying that you are not likely to be free to just move a Lync server around if you don't want interruptions. The SDP protocol is not built to be live migrated! My commenter below rightfully said that all three of the technologies I mentioned worked. I don't disagree. The issue with real-time voice comes in those parts of the day when things are busy and you find you need to move resources around, or you have automation to move resources around. This is not going to work with Skype! Skype needs dedicated resources, and you should not change them!
I hope it is clear what my intentions are! I love VMware and I love Hyper-V. However, all of my problem environments are with VMware. This is on Lync, not VMware! I can say simply that Skype is not very happy with resources that change dynamically. So now… back to your article!
So before you go looking for where I am incorrect in these statements, please start with the requirements for today's SFB server. The requirements for an SFB server are not small. Then I don't mind if you jump into the fire. Here, let me supply some oxygen for you! Below, find ways to enter the battle:
I think that is enough to get you started. So no matter what you think about Lync, it all boils down to two things, Compute resources, and the network. If either of those is a fail, then your lync deployment is a fail. Microsoft puts it more eloquently:
My friends, you came to this site, perhaps to find out what not to do with your Lync server. Look at the statement above. My experience is that if you have been having issues for months, there is a 69% chance your answer is in the blue text above. My chances of being right are much lower if you deploy your Lync server like the bullet points at the top of this article.
Fuel? Check. Oxygen? Check. Do you need a light?
Good evening all and I hope your VMS are tuned and your Free memory is abundant!
Good evening. Copying files takes one or more ESX servers down? Yes: using a UNC path to copy a file from VMa to VMb could purple-screen your ESX server. All you need to do is use the E1000 VM NIC type. Synopsis: don't use the E1000 vNIC on ESX 5.x if you use Windows VMs? Apparently so!
This problem is not Lync specific, but I saw it happen with a verifiable pattern. To reproduce this issue, all that was necessary was to copy a large file between virtual machines that had a common storage device (SAN). All virtual machines on multiple hosts failed, and a hard power-down was required in some cases to return the servers to functionality. A ticket was put in to VMware, and the results were surprising.
1. The Issue was confirmed as a problem from VMware under KB-2059053
2. The Recommendation from VMware was to use the VMXNET3 virtual adapter and reduce the usage of the E1000 series adapter as much as possible.
3. In addition, disable RSS within the Windows virtual machine. For more information, see the Resolution section of the KB article Poor network performance or high network latency on Windows virtual machines (2008925).
4. This problem may occur on ESX 5.0, 5.1 or 5.5
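For step 3, disabling RSS inside the Windows guest can be done either globally or per adapter; a sketch (the adapter name is an example, and the VMware KB remains the authoritative guidance):

```powershell
# Globally, inside the Windows VM:
netsh int tcp set global rss=disabled

# Or per adapter on Server 2012 and later:
Disable-NetAdapterRss -Name "Ethernet"
```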
The simple operation of copying a file between virtual machines could occur under many circumstances, so I would add tags to suggest this could affect any Windows implementation on VMware. Please beware: this issue could cause multiple VMware hosts to fail, taking down all machines on each host. We saw this occur on a copy of a Lync update .exe from one server to another. I placed this in the Lync blog section of my posts, but any Windows application on a VM with the VMware E1000 virtual NIC is susceptible to this problem.
The cause is listed as- This issue occurs when the rx Ring buffer fills up and the max Rx ring is set to more than 2. The next Rx packet received that is handled by the second ring is NULL, causing a processing error.
My understanding is that the Microsoft Certified Solutions Master (MCSM) program has been canceled in its entirety. If I didn't know someone personally who received the email, I wouldn't be making this public appeal. Unfortunately, this has touched me and persons close to me who have dedicated their lives to Microsoft products and technical knowhow. The MCSM rotation and program has been canceled. The blogs started reporting this as truth before noon on 8/31/2013. You can reference several blogs, but this is the one I read:
We are contacting you to let you know we are making a change to the Microsoft Certified Master, Microsoft Certified Solutions Master, and Microsoft Certified Architect certifications. As technology changes so do Microsoft certifications and as such, we are continuing to evolve the Microsoft certification program. Microsoft will no longer offer Masters and Architect level training rotations and will be retiring the Masters level certification exams as of October 1, 2013.
I would urge anyone who has an elevated position and opportunity to express grief and dismay at this decision. This move seems to be justified as allowing Microsoft to take its focus off of on-premises products and create a market in the cloud. I feel this decision was made in haste, and it is really only going to cause a greater rift in the market. Microsoft Wave 15 products really do have need for specialized knowledge. Taking away the goal of aspiring engineers and support personnel not only removes the drive and impetus for goal-oriented career path planning, but also causes long-term Microsoft professionals to believe we support products the company does not believe in.
I appeal to the powers that be in Microsoft: the MCM or MCSM program is just good business. If Microsoft is going to succeed, it should embrace the groups who have a vested interest in its success. If there is no way to be a master of a technology, won't most of the smartest individuals move to platforms that do not just unplug from their distinguished talent?
It is sad that someone in MS decided to drop this on a Friday night, leaving emails for all to see on Saturday. This will not stop the issue from being brought to light during business hours, when all can see what you have done.
Every other serious software technology has a master-level designation. It is only the laggards who will be waiting around. Our careers are serious, and we can push any software we feel good about. I don't feel good about Lync or Exchange today.