tag:blogger.com,1999:blog-45103943518701372282024-03-14T02:15:23.687-04:00Gnawgnus RealmRandom Scripts, Admin tips, etcUnknownnoreply@blogger.comBlogger141125tag:blogger.com,1999:blog-4510394351870137228.post-9045227622950859092017-12-08T06:50:00.002-05:002017-12-08T06:50:42.581-05:00ASEv2 subnet sizingWhen Microsoft rolled out the original App Service Environment the recommended subnet size was 64 addresses (aka /26). The ASEv1 series was limited to 50 worker processes (minus update domain overhead, etc.). With the introduction of ASEv2 they now support up to 100 worker processes, so naturally the question is whether you need to use larger subnets - and the answer is yes.<br />
<br />
In an ASE, each App Service Plan (a container of apps) is equivalent to 1 worker, which is really a VM. Each worker consumes 1 IP address, and even if you only follow the general guideline of leaving 20% or more free capacity for scaling and other events, that still puts you in the ballpark of 80 IP addresses. On top of that, the ASEv2 itself consumes 7 IP addresses (with an ILB) between the hidden front-end servers, file servers, and fault-tolerant instances of the small/medium/large images. And if you're running a multi-tenant configuration you'll consume even more IPs, depending on how many IP addresses you attach to it. <br />
<br />
If you're never planning to exceed, say, ~30 App Service Plans in your ASEv2 then you can probably get away with a /26, but you're doing so knowing that you're risking scaling or capacity issues down the road. If you really want to cover your bases properly, use a /25 (128 IP) subnet.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-54854352967554464842017-02-20T15:55:00.000-05:002017-02-20T15:55:24.626-05:00365 now supports SHA256 signed tokens from your ADFS<br />
Not sure when they're going to cut off the old SHA-1, but it doesn't hurt to update early. It's an easy change which shouldn't have any negative impact on your production environment. Instructions link below:<br />
<br />
<a href="https://docs.microsoft.com/en-us/azure/active-directory/active-directory-federation-sha256-guidance">https://docs.microsoft.com/en-us/azure/active-directory/active-directory-federation-sha256-guidance</a><br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-53969994130174458652017-02-09T21:42:00.003-05:002017-02-09T21:42:56.214-05:00Palo Alto NTSTATUS: NT_STATUS_ACCESS_DENIED - Access deniedBeing able to transparently tie in a particular user to traffic passing through your firewall is a great feature (and fairly common in the current gen of firewalls) - provided you set it up right.<br />
<br />
I followed the instructions at<br />
<a href="https://live.paloaltonetworks.com/t5/Configuration-Articles/How-to-Configure-Agentless-User-ID/ta-p/62122">https://live.paloaltonetworks.com/t5/Configuration-Articles/How-to-Configure-Agentless-User-ID/ta-p/62122</a><br />
and set up the dedicated LDAP user on my Windows 2012 R2 domain, assigning it to the Distributed COM Users, Server Operators, and Event Log Readers groups. Then I set up the WMI permissions and started seeing Access Denied next to my discovered domain controllers. I SSH'd into the Palo to check the mp-log and useridd.log and ran into the NT_STATUS_ACCESS_DENIED error. After some troubleshooting I realized what I'd messed up - I had misread the instructions for the WMI edit and drilled down to 'Security' when the instructions had intended for me to stop at CIMv2 before editing the properties. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheQP-JyzqC2K1lKUtZUzyxNnJsA30D4wzGfaVgyYRWlIX-l2165zpwhbQeCeJmVTSUQpIoPynsk7clEg9miANN3acCSShWWdkojQSfZeOlv6AOgPC8AZEO43o0llMQtzay9t2ywtLJoc5V/s1600/2017-02-09_17-11-01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheQP-JyzqC2K1lKUtZUzyxNnJsA30D4wzGfaVgyYRWlIX-l2165zpwhbQeCeJmVTSUQpIoPynsk7clEg9miANN3acCSShWWdkojQSfZeOlv6AOgPC8AZEO43o0llMQtzay9t2ywtLJoc5V/s400/2017-02-09_17-11-01.jpg" width="400" /></a></div>
After fixing my mistake, the access denied message went away. Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-4510394351870137228.post-63060366067534109032017-01-26T14:32:00.003-05:002017-01-26T14:32:50.369-05:00Ubiquiti Unifi - an SME's best friend - resistance is futileIt's often difficult in small to medium IT shops to get enough budget to build a network that's stable enough to let you sleep at night. For the most part you either pay a premium for Cisco Catalyst, Juniper, etc. and then spend hours learning how to use them properly, or you wind up buying small-business gear like the SG300, Netgear, or Linksys and pray daily for uptime while accepting lower performance. It's kind of like buying a SonicWall instead of a Cisco ASA or a Palo Alto firewall.<br />
<br />
A colleague of mine recently introduced me to Ubiquiti Networks, which has been around for a little over a decade and has a decent following. Their approach to network design places a high emphasis on a dedicated controller machine or Cloud Key, which in turn manages every other UniFi device in your network. You define all your VLANs, WAP networks, and other settings in the controller and then 'adopt' your other devices. The controller handles all the upgrades and provisioning of devices after they've been adopted, and provides statistics on clients, bandwidth usage, and types of hardware. <br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbV0hnZhll8gp_si9FADaJSXykxwtH1qtujMihK4BldeF4NthHmm3K7JnG9EoKUv-koOIq76a-U1G9yG-YlCG01RYddiJBEP3N_bpyZhXeKN4pUafKPyuMcDyN5YBrh9vQ3zyNDweIX1Xe/s1600/devices.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="123" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbV0hnZhll8gp_si9FADaJSXykxwtH1qtujMihK4BldeF4NthHmm3K7JnG9EoKUv-koOIq76a-U1G9yG-YlCG01RYddiJBEP3N_bpyZhXeKN4pUafKPyuMcDyN5YBrh9vQ3zyNDweIX1Xe/s400/devices.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">One console to rule them all.</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5ezHoUnOz2hVXCgQLHVDFzt68OKyX9YCTQ4z8_3IW7Nq8ASs43cAAmAy9CU6lJbFRUqrLc-EYGQqMPhCxWfoilwFHqHYSDtt4hjXu2wvo-tUBWYzb8Hfn89Ku6RhvfSDuHhf7EEBQWqip/s1600/clients.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="118" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5ezHoUnOz2hVXCgQLHVDFzt68OKyX9YCTQ4z8_3IW7Nq8ASs43cAAmAy9CU6lJbFRUqrLc-EYGQqMPhCxWfoilwFHqHYSDtt4hjXu2wvo-tUBWYzb8Hfn89Ku6RhvfSDuHhf7EEBQWqip/s400/clients.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">bandwidth hogs can't hide.</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0ty1fq0if6qV0S2m81s9jCn4ECcCo9wIVdPwkZllemELfATepB5kJcmZTN-B8O-eTQq8RCHwEHkygsgCcHKMa-jm8MzVDPsAk86dCk9mw9sI4aoBp1xEMth77EuofIFiBvo5rSuS0SA_L/s1600/zoom1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0ty1fq0if6qV0S2m81s9jCn4ECcCo9wIVdPwkZllemELfATepB5kJcmZTN-B8O-eTQq8RCHwEHkygsgCcHKMa-jm8MzVDPsAk86dCk9mw9sI4aoBp1xEMth77EuofIFiBvo5rSuS0SA_L/s320/zoom1.png" width="203" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">basic switching - and yes it has STP.</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
The built-in Map function is pretty nifty as well. It allows you to upload a floor layout and then define a map scale. You then drag and drop the devices from inventory and the map updates to show you hotspot coverage, topology and other useful network management data. And yes, this is all without buying an additional software package! <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxQs2WsWUOWFtw1i-7ZjJ4bRhREmw0q5nmcR-xv2RFP1kQrHWhZbfrP680FJqYyaC9OwnTGtF5RN__ULyvbDzplfK8IQK0C58bOYp2SxDDh7J4K1Gk418R1V2XdmfdL7BAuuWWsnV0X-2B/s1600/heatmap.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="207" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxQs2WsWUOWFtw1i-7ZjJ4bRhREmw0q5nmcR-xv2RFP1kQrHWhZbfrP680FJqYyaC9OwnTGtF5RN__ULyvbDzplfK8IQK0C58bOYp2SxDDh7J4K1Gk418R1V2XdmfdL7BAuuWWsnV0X-2B/s320/heatmap.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Wireless Cover map - labels removed</td></tr>
</tbody></table>
<br />
<br />
I was able to replace the whole wireless network for a 16,000 sq ft facility for just under $1k.<br />
<br />
My deployment:<br />
a) 1 UniFi Cloud Key (~$95 on Amazon) - powered over PoE, with a smaller footprint than a dedicated controller machine. <br />
b) 1 UniFi 24-port PoE 250W switch (~$365 on Amazon)<br />
c) multiple UniFi AP-AC-Pro wireless access points (~$129 each on Amazon). All PoE-based, with ridiculous indoor range compared to the Cisco WAP551 units that we used to have.<br />
<br />
Implementation:<br />
<i>Note: Make sure you have working DHCP on your network to make configuring the devices easier. </i><br />
<br />
1) Rack mounted the switch, plugged in the cloud key, ran cabling to WAPs from the switch.<br />
2) Configured the Cloud Key - set up multiple wireless networks (limit 4). The WAPs auto-switch between 2.4 and 5 GHz using the same wireless network name, so both client types work. I set each wireless network to its own VLAN and enabled RADIUS authentication on the more secure one.<br />
3) I 'adopted' the switch and the WAPs through the cloud controller interface. And then I went ahead and hit the 'upgrade' button next to each to get the latest firmware.<br />
<br />
<b>--------------- And that was all it took -------------</b><br />
<br />
Flat out, the stuff works. Wireless handoff from WAP to WAP and all my client devices worked without a hitch. I'd definitely recommend them if you're doing a greenfield deployment or if you're just looking to upgrade your small to medium sized network. <br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-53035810057346619082017-01-25T11:02:00.001-05:002017-01-25T11:02:28.411-05:00Extending your on premise AD (hybrid 365) into the Azure CloudSure, if your on-premises Active Directory is already being synchronized with Office 365 then you've most likely already been exposed to the benefits of single sign-on. And perhaps you've even spun up your own Azure subscription and set your synchronized Azure AD as the authentication provider, so your team can assign Azure admin roles to your on-premises credentials. There's one more nifty thing you can do: use Azure AD Domain Services (ADDS) to extend your AD into Azure and provide domain services to the VMs inside your subscription (domain join, single sign-on inside the VM, etc.).<br />
<br />
The other alternative would be to spin up some servers, build out a site-to-site VPN, dcpromo the boxes, set up the AD site(s), and then manage it old school. On the upside you'll have more control over your AD and it'll be a complete replica of your on-prem setup. The downside is that you'll have more boxes to patch, more replication traffic to pay for, and possibly split FSMO roles. There isn't a wrong answer; it just depends on whether you feel that your datacenter is more secure than Azure and what your company's needs are. In my case, I decided to explore the ADDS route. <br />
<br />
Enabling my Azure AD instance started out pretty straightforward, got a little murky with the virtual networks, and then took some patience for password sync. I used Microsoft's documentation at <a href="https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-getting-started">https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-getting-started</a> (Make sure to use the 'synched tenant' instructions for password sync)<br />
<br />
Steps a through e below cover just setting up the basic ADDS. The steps after that explain how I got it integrated with a Resource Management virtual network and VMs using Peering.<br />
<br />
a) Created the AAD DC Administrators Group - this is a special group that is automatically inherited into your new ADDS so you'll want to put your admin accounts in here.<br />
b) ADDS currently only works with the old type of virtual network and not the newer Resource Manager one. So I had to create a legacy virtual network.<br />
c) After enabling ADDS it took around 15 minutes to provision. I chose the 'yourcompany.onmicrosoft.com' domain name and connected it to my new legacy virtual network. Once provisioned, it popped out a new DNS IP.<br />
d) I then edited the legacy virtual network and specified the IP address of the new ADDS as its DNS server, making it the default DNS service for that virtual network. Note: after another hour, a second DNS IP showed up in the ADDS view. The names you give the DNS entries in the virtual network don't matter.<br />
e) I then ran the PowerShell script in the link above to force a full sync in my AAD instance. The first two variables have to be edited by hand before you run the script. If you're not sure what your connectors are called, open the Synchronization Service Manager and view the Connectors tab. (Hint - the one that ends in 'AAD' is your $azureadConnector.)<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxh-0Zd4i6yJr8qdRcxQd5vFTewAfX2yWvD1jGTwyjOkkTWJSOsg-_kTCBvglVj7p0_7dLl9CfaU7dNz4C-O3olLSfYB4Z7fVomP1TswnAoYqvtn-CftArR6F2kgmF3S4py-Z4CRdNk65G/s1600/aad1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="97" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxh-0Zd4i6yJr8qdRcxQd5vFTewAfX2yWvD1jGTwyjOkkTWJSOsg-_kTCBvglVj7p0_7dLl9CfaU7dNz4C-O3olLSfYB4Z7fVomP1TswnAoYqvtn-CftArR6F2kgmF3S4py-Z4CRdNk65G/s400/aad1.png" width="400" /></a></div>
<br />
f) I created a new virtual network in the 'new' Azure portal - making sure that the IP range did not overlap the IP range of the legacy virtual network. (10.10.0.0/24 vs 10.20.0.0/24 and not 10.0.0.0/8 and 10.20.0.0/16 which would have collided).<br />
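The overlap check from step f) can be sanity-checked with Python's standard-library ipaddress module - a quick sketch, not part of any Azure tooling:<br />

```python
import ipaddress

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    """True if the two address spaces collide (and therefore cannot be peered)."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(overlaps("10.10.0.0/24", "10.20.0.0/24"))  # False - safe to peer
print(overlaps("10.0.0.0/8", "10.20.0.0/16"))    # True  - would have collided
```

Worth running against every existing range before you create the peering, since Azure will reject overlapping address spaces.<br />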
g) To get both virtual networks to play nicely, you can either build a VPN and/or gateway, or you can just use virtual network peering, which merges the two together much like joining two switches with a cable in a Layer 2 fashion. From the 'new' Azure portal, under Virtual Networks, I selected the virtual network (ARM type) that I created earlier and then Peerings.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiodcf-VryeRGbnY6c9icucuFu8eFwBdnNUwfExLrleqcdQ3HcwWVyrl9b3aHeu3A1oe5zVrAMbHgqYhP1xwM3DmE2XTPt23tLe10BBJLrF2QNiOQ-DGZVxlLL4p4ep1rNDQ2aVLRzWpoYP/s1600/peering1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiodcf-VryeRGbnY6c9icucuFu8eFwBdnNUwfExLrleqcdQ3HcwWVyrl9b3aHeu3A1oe5zVrAMbHgqYhP1xwM3DmE2XTPt23tLe10BBJLrF2QNiOQ-DGZVxlLL4p4ep1rNDQ2aVLRzWpoYP/s320/peering1.png" width="150" /></a></div>
<br />
<br />
h) I clicked Add at the top of the blade, gave the peering connector a name, chose Resource manager (important), assigned it the same subscription as everything else, and then chose the Classic virtual network from the selector.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0X57wgIj3VrGmrH5c5AbVCFRUDqVOm_yTVL8rsoT0yT7LZqGDKKEOuZXbD8nerGKrsQVg_lI-fQMPrVcCEs6zQ7hNKMYzMBIPqop88nPDncd2hy4j9aUZF7ESFOsai_bwMzajp7n6lerS/s1600/peering2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0X57wgIj3VrGmrH5c5AbVCFRUDqVOm_yTVL8rsoT0yT7LZqGDKKEOuZXbD8nerGKrsQVg_lI-fQMPrVcCEs6zQ7hNKMYzMBIPqop88nPDncd2hy4j9aUZF7ESFOsai_bwMzajp7n6lerS/s640/peering2.png" width="537" /></a></div>
<br />
i) Then I went back in and updated the DNS settings for the ARM virtual network. Remember, out of the box each virtual network defaults to the Azure-provided DNS. I was not able to join a VM to ADDS until I changed it to use the DNS servers for ADDS. (There is a chance it would have eventually worked without this step, but it's up to you whether you have the time to wait it out.)<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-LZrQjDUKEpuPZeki5xjssOC7hyTJd75JBbU91U-toShXnbdYI7pml5ZgkwBKRN_DGo1IYsgtA5B3b5csmMqV3AWl99ka858FTnpbqztQgWWgpuYk0Xan4tSpR7WuZUYLfDecFnhBkIhi/s1600/peering3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="347" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-LZrQjDUKEpuPZeki5xjssOC7hyTJd75JBbU91U-toShXnbdYI7pml5ZgkwBKRN_DGo1IYsgtA5B3b5csmMqV3AWl99ka858FTnpbqztQgWWgpuYk0Xan4tSpR7WuZUYLfDecFnhBkIhi/s400/peering3.png" width="400" /></a></div>
<br />
<br />
j) I provisioned a new machine, booted it up, and joined the yourcompany.onmicrosoft.com domain using the on-premises credentials that I'd put in the AAD DC Administrators group.<br />
<br />
<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-19106545238702239262016-08-17T09:12:00.001-04:002016-08-17T09:12:50.612-04:00The return of UNCHardenedPath problems.Last week we rolled out some new GPO security settings which made our Windows 10 machines stop processing group policy changes. First we noticed the GPP drive maps had stopped working, and when we ran <i>gpupdate /force</i> manually it failed, citing that it couldn't access gpt.ini for<br />
<i>31B2F340-016D-11D2-945F-00C04FB984F9 </i>(aka the Default Domain Policy). <br />
While researching it we found many articles on how Windows 10 has UNC hardening enabled by default, and how the various patches (MS15-011, MS15-014) had affected many users in GPO environments. We weren't using user filtering, and all of our GPOs had Authenticated Users listed with Read and Apply permissions, so that wasn't it. For testing, we added the registry keys to disable mutual authentication on a laptop.<br />
<br />
New-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths" -Name "\\*\SYSVOL" -Value "RequireMutualAuthentication=0" -PropertyType "String"<br />
<br />
New-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths" -Name "\\*\NETLOGON" -Value "RequireMutualAuthentication=0" -PropertyType "String"<br />
<br />
<i>(Note: the cmdlet parameter is -PropertyType; if the HardenedPaths key doesn't exist yet, create it first with New-Item.)</i><br />
<br />
We were able to run <i>gpupdate /force </i>successfully after that, but we didn't like that solution because it meant we'd have to manually update a lot of machines - even login scripts were broken at this point. It also didn't make sense that Microsoft would have implemented all these security controls if they didn't work, so we continued researching. We found the next clue at the end of Sean Greenbaum's post - patch <b>MS16-075 / KB 3161561</b>, which was released in June and had reportedly caused issues for users trying to access SYSVOL shares. <br />
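If you ever did need to push that registry workaround to many machines at once, one option - my own idea here, not something from the Microsoft guidance - would be to generate a .reg file and distribute it. A minimal sketch:<br />

```python
# Build a .reg file containing the two hardened-path overrides.
HARDENED_KEY = r"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths"

def workaround_reg(paths=(r"\\*\SYSVOL", r"\\*\NETLOGON")) -> str:
    lines = ["Windows Registry Editor Version 5.00", "", f"[{HARDENED_KEY}]"]
    for p in paths:
        # .reg syntax requires backslashes doubled inside quoted value names
        name = p.replace("\\", "\\\\")
        lines.append(f'"{name}"="RequireMutualAuthentication=0"')
    return "\n".join(lines)

print(workaround_reg())
```

The resulting file could be imported with <i>reg import</i> via a startup script - though as the rest of this post shows, fixing the root cause on the domain controllers was the better answer.<br />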
<br />
The workaround listed was to set the <i>SmbServerNameHardeningLevel </i>to 0 under<br />
<i>HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters</i><br />
on the domain controller servers. That registry key corresponds to the GPO security policy<br />
<b>Ensure 'Microsoft network server: Server SPN target name validation level' is set to 'Accept if provided by client' or higher</b><br />
which was one of the settings that we'd changed the week before. Setting that to Off changes SmbServerNameHardeningLevel to 0. Once that change was made on the Domain Controller GPO and applied, all of our client issues were resolved.<br />
<br />
Ultimately this came down to insufficient testing on our part and it is one of the risks of trying to harden down existing systems.<br />
<br />
References:<br />
<a href="https://blogs.technet.microsoft.com/askpfeplat/2015/02/22/guidance-on-deployment-of-ms15-011-and-ms15-014/">https://blogs.technet.microsoft.com/askpfeplat/2015/02/22/guidance-on-deployment-of-ms15-011-and-ms15-014/</a><br />
<br />
<a href="https://blogs.technet.microsoft.com/askpfeplat/2016/07/05/who-broke-my-user-gpos/">https://blogs.technet.microsoft.com/askpfeplat/2016/07/05/who-broke-my-user-gpos/</a><br />
<a href="https://social.technet.microsoft.com/Forums/en-US/6a20e3f6-728a-4aa9-831a-6133f446ea08/gpos-do-not-apply-on-windows-10-enterprise-x64?forum=winserverGP">https://social.technet.microsoft.com/Forums/en-US/6a20e3f6-728a-4aa9-831a-6133f446ea08/gpos-do-not-apply-on-windows-10-enterprise-x64?forum=winserverGP</a><br />
<br />
<a href="https://community.spiceworks.com/topic/1389891-windows-10-and-sysvol-netlogon">https://community.spiceworks.com/topic/1389891-windows-10-and-sysvol-netlogon</a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-17879449734441842092016-06-17T16:14:00.004-04:002016-06-17T16:14:45.184-04:00Veeam error after Hyper-V migration<br />
In general I find Veeam Backup and Replication 9 performs brilliantly once it's configured, but sometimes infrastructure changes can really throw it for a loop. I recently had to shuffle several VMs between Hyper-V hosts using the built-in Move command, and afterwards Veeam started throwing errors on some of the VMs. (<b>Task failed: failed to expand object. Error: Cannot find VM on host...</b>)<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRbAFKv2Mru-YrypPakJhO8HO4P6Gjm2Rfb6bJUsLW5RhTBLJc-5aFpHz6aUeScFrwaCYbhiGWBXCaMjzhUXtndS7I_om8bk93eG74dw0d8udTAFmr80SMuTdJTyGgwbyFjwZQXlwWfquI/s1600/veeam1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="144" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRbAFKv2Mru-YrypPakJhO8HO4P6Gjm2Rfb6bJUsLW5RhTBLJc-5aFpHz6aUeScFrwaCYbhiGWBXCaMjzhUXtndS7I_om8bk93eG74dw0d8udTAFmr80SMuTdJTyGgwbyFjwZQXlwWfquI/s640/veeam1.jpg" width="640" /></a></div>
<br />
The main thing they all had in common was that they were configured to use alternate guest OS credentials (which Veeam uses to take the internal snapshots). In the Veeam GUI these all appear to be tagged by VM name, but I suspect that on the back end it latches onto either the GUID or the host server name, so moving the VMs made Veeam treat them as new entities.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWO-6bhrKat0Sb1UTWc7j_Xv8V40xX_KCRjs7aKPkSoYdku-nEMQBRXsNBRo3kT1NLo6xKyrj9IuqsHkysTVhcSchDpWts-0ev5vSRbhOB9UFoJar1WaQ0yPZqs3zOAakGbvfPOduIp6zB/s1600/veeam2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="394" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWO-6bhrKat0Sb1UTWc7j_Xv8V40xX_KCRjs7aKPkSoYdku-nEMQBRXsNBRo3kT1NLo6xKyrj9IuqsHkysTVhcSchDpWts-0ev5vSRbhOB9UFoJar1WaQ0yPZqs3zOAakGbvfPOduIp6zB/s640/veeam2.jpg" width="640" /></a></div>
<br />
The fix is a relatively straightforward but manual process: remove them, add them back from the newly associated servers (under <b>Guest Processing, Credentials...</b>), set the right credentials for them, then hit OK and Finish. That fixes this particular error so you won't see it again on the next run. <br />
<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-68292677365063991712016-02-09T12:16:00.000-05:002016-02-09T12:16:02.573-05:00Configuring LDAP auth from Palo Alto PA-500 firewalls to Windows 2012 R2 AD servers<br />
For the most part this is covered in the Palo Alto admin guides, but if, like me, you just wind up owning one of these at work and don't have a bunch of time to decipher them, you might find this useful. Configuring Palo Altos is a lot like object-oriented programming: you have to 'build' out all your components and then chain them together, which makes troubleshooting more fun.<br />
<h2>
LDAP Config (using PanOS release 7.x):</h2>
<br />
<h3>
<b>Step 1</b> - </h3>
Device Tab -> Server Profiles -> LDAP. From here, Add a new Server Profile, give it a meaningful name like domain-ldap, and populate the server list. <br />Enter your Base DN.<br />
Enter your Bind DN - in my case I created a dedicated service account and entered it in UPN format as 'accountname@domainname.com'. Then enter the password for the account so it'll be able to access the directory.<br />
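If you're unsure how to form the Base DN, for AD it's normally just the DNS domain name split into DC= components. A tiny illustrative helper (my own, not part of PAN-OS):<br />

```python
def base_dn(domain: str) -> str:
    """Convert a DNS domain name into an AD-style LDAP Base DN."""
    return ",".join(f"DC={part}" for part in domain.split("."))

print(base_dn("domainname.com"))  # DC=domainname,DC=com
```

So a forest root of corp.example.com would get a Base DN of DC=corp,DC=example,DC=com.<br />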
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiP3o8r7lF7KgA7NVZdSIBtaoXisETohZvrMipF0hTyDRBXPGDVoUK_HUwaVtQkWpQ3Tuasq5kffd3ErJAmtWF-OBuEY6cfu6sUGHgDCP6thB11AJiRsluXQ-0eoZmZGCzNHZ8fgiH7azRS/s1600/2016-02-09_11-49-04.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="137" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiP3o8r7lF7KgA7NVZdSIBtaoXisETohZvrMipF0hTyDRBXPGDVoUK_HUwaVtQkWpQ3Tuasq5kffd3ErJAmtWF-OBuEY6cfu6sUGHgDCP6thB11AJiRsluXQ-0eoZmZGCzNHZ8fgiH7azRS/s320/2016-02-09_11-49-04.jpg" width="320" /></a></div>
<br /><br />For AD LDAP, go ahead and uncheck the Require SSL/TLS checkbox.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTIT28bgZLvIejxlhg5ouvc6_5Qy6sKDi7Z9M8nHIia_O_-SMaWxrnwKeRheNDziMp5DjYt8SvHd_2g90T2lQPwZiH40c5PPwUmq_sXjVXZW19UC8A2qf7JRAX24V_X6Cgjr6WCy3bvvUf/s1600/step1b.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="344" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTIT28bgZLvIejxlhg5ouvc6_5Qy6sKDi7Z9M8nHIia_O_-SMaWxrnwKeRheNDziMp5DjYt8SvHd_2g90T2lQPwZiH40c5PPwUmq_sXjVXZW19UC8A2qf7JRAX24V_X6Cgjr6WCy3bvvUf/s640/step1b.jpg" width="640" /></a></div>
<br />
And Commit your changes<br />
<h3>
Step 2</h3>
Now go to the Authentication Profile (also on the Device Tab) and click Add. <br />
Give it a meaningful name like ldap-authprofile. <br />
Then choose the Server Profile that we created in step 1 from the drop down list.<br />
The Login Attribute should be sAMAccountName. (No, I don't know if that's case sensitive.)<br />
Important - Fill in the User Domain with the NETBIOS name of your domain. Yes, I know it's 2016 and we're still stuck with it. It'll make a difference later on if you try to do Group Filtering.<br />
If you're setting up an Allow list then click the Advanced Tab and enter in the LDAP strings for your groups.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF7H4a70HPG9-KjXMbRAo-P_mOnuUTZq7tLzKt4z4ioKGBPvwqg-5aGxDrT9LChLu10vn8YlIv6itikCQAuYuVFVURem_SA6fbQUGUF1XqABCG2CFtC92ykdKCFCSP-Fiy4bLlCnYXjhj_/s1600/step2a.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="286" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF7H4a70HPG9-KjXMbRAo-P_mOnuUTZq7tLzKt4z4ioKGBPvwqg-5aGxDrT9LChLu10vn8YlIv6itikCQAuYuVFVURem_SA6fbQUGUF1XqABCG2CFtC92ykdKCFCSP-Fiy4bLlCnYXjhj_/s640/step2a.jpg" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzdK_WeEEqDQQVvrYCOGeLFGH_oqeg-eemUeDTakaTeAXiIN-_Ioor4JqerHC-eVxC1-f9GNALbbnigS5yv_CBa5as0BiDFBOCmBuZ8SKnluvgaMDU2oGG3UOVAdMyB6q5dtT5vymgbqUn/s1600/step2b.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="342" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzdK_WeEEqDQQVvrYCOGeLFGH_oqeg-eemUeDTakaTeAXiIN-_Ioor4JqerHC-eVxC1-f9GNALbbnigS5yv_CBa5as0BiDFBOCmBuZ8SKnluvgaMDU2oGG3UOVAdMyB6q5dtT5vymgbqUn/s640/step2b.jpg" width="640" /></a></div>
<br />
And Commit your changes<br />
<h3>
Optional Step 3 - Group filtering/search</h3>
If you're using Group Filtering, make sure to go under User Identification, then to the Group Mappings setup tab and Add those groups in.<br />
Click Add, then choose the Server Profile that we created in Step 1.<br />
Go to the Group Include List Tab, and drill down to your group. <br />
<i>Note: if you can't drill down, then you don't have a working LDAP connection. Check your settings and make sure your AD controllers are listening. Also, keep in mind that the traffic will come from the MGT port on the Palo Alto, which may have a different IP.</i><br />
<i><br /></i>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgc40q8hIU83D1FoHYg-a7HX_UsE9UON-iDHXmW26bmLY-3EWahGJsvSJ8RUATfwYv51OloHDqiq_WEr3uHQFh8j8wqE3haeFLbNJNjbM7AjGaezUgGArML5GqhalHOoNh7JX3XcRChuz_1/s1600/step3a.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="242" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgc40q8hIU83D1FoHYg-a7HX_UsE9UON-iDHXmW26bmLY-3EWahGJsvSJ8RUATfwYv51OloHDqiq_WEr3uHQFh8j8wqE3haeFLbNJNjbM7AjGaezUgGArML5GqhalHOoNh7JX3XcRChuz_1/s400/step3a.jpg" width="400" /></a></div>
<i><br /></i>
Click Ok. Commit your changes.<br />
<br />
At this point you should have a fully functional LDAP Authentication Profile which you can feed into other objects like Authentication Sequences, GlobalProtect Gateways, etc. <br />
<br />
<b>Troubleshooting tips:</b><br />
The default caching period is about an hour. If you're doing testing you'll want to force that cache to empty out. From a console/ssh connection - run<br />
<b>debug user-id refresh group-mapping all</b><br />
to refresh the LDAP cache.<br />
<br />
PanOS 7.x also has a new feature to help you troubleshoot authentication from a command line. Details here:<br />
<a href="http://dsg0.com/t/palo-atlo-networks-user-authentication-test-through-cli/273">http://dsg0.com/t/palo-atlo-networks-user-authentication-test-through-cli/273</a><br />
<br />
Good Luck!<br />
<br />
<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-78544518464618122172016-02-06T15:03:00.000-05:002016-02-06T15:03:11.551-05:00Veeam failed to create snapshot (Microsoft Software Shadow Copy provider 1.0) (mode: Veeam application-aware processing) on hyper-vWe recently abandoned Backup Exec and transitioned to Veeam Backup and Replication, and as with most product transitions we ran into a few hiccups. Aside from having to adjust Shadow Copy space limits on some VMs and Hyper-V hosts, we also ran into snapshot errors. BE uses agents to handle quiescing, while Veeam contacts the VM directly to request snapshot creation. Some of our VMs were in DMZs and other areas and not on a domain, so the default credentials that Veeam was using were not able to authenticate and create snapshots.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCU0vbNK_lG1B2G1y6A4M0V4DzRKNppXONQinK11Sf7O7hzTwKEp8o4qMm82pvTYlcGvaxUVsUDFrK2pUD4X3CG65H0Ims3EoE4KNK6n7qHbx9j23QsV5N2B2Zao5AFBQe_cG4lJVyPhgW/s1600/veeam1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="63" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCU0vbNK_lG1B2G1y6A4M0V4DzRKNppXONQinK11Sf7O7hzTwKEp8o4qMm82pvTYlcGvaxUVsUDFrK2pUD4X3CG65H0Ims3EoE4KNK6n7qHbx9j23QsV5N2B2Zao5AFBQe_cG4lJVyPhgW/s640/veeam1.jpg" width="640" /></a></div>
<span id="goog_1860358930"></span><span id="goog_1860358931"></span><br />
The fix for this was to add additional credentials and map them to each of the errant VMs directly.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgavX5_Z8r37poVGw4v1ItfUjcSVlwGgEAX71WzXHgAVUAOaVavc0QZGKqqHwVruVDqAy5PVMVnan3fgWrEggzQ4uAhihFHwY5B6s20yqqwkCGR7ZwPQB1V6w1G3o4EUBBIZ2GBkGuZekme/s1600/veeam2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgavX5_Z8r37poVGw4v1ItfUjcSVlwGgEAX71WzXHgAVUAOaVavc0QZGKqqHwVruVDqAy5PVMVnan3fgWrEggzQ4uAhihFHwY5B6s20yqqwkCGR7ZwPQB1V6w1G3o4EUBBIZ2GBkGuZekme/s640/veeam2.jpg" width="640" /></a></div>
<br />
After we got that sorted out, the rest was a breeze.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-9571294071823827902015-02-19T09:36:00.002-05:002015-02-19T09:36:23.601-05:00Bitlocker could not be enabled - Dell Latitude 7440Sometimes it just doesn't pay to disable stuff in BIOS. I recently had problems enabling BitLocker on some Latitude 7440 units. After the initial reboot, an error would pop up saying that the BitLocker encryption key could not be obtained. So I checked the usual suspects - clearing the TPM, making sure the TPM was recognized in Device Manager, etc. And then I remembered the USB settings that we'd changed to lock down the laptops more.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7ijB4f7OrgAm51__V9U9OB6CM6MscL5FER9-yOwuw0NbJdvIeJhSoj9BlzzahMDzeLNxPWWeOkoqtLFw5yW_v85Kl9BKgO1q0Sh4Jh4gkNK4swfUsjGlxTBLZhMh1ejzoYKdx8-4Tw3UX/s1600/bitlocker+error.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7ijB4f7OrgAm51__V9U9OB6CM6MscL5FER9-yOwuw0NbJdvIeJhSoj9BlzzahMDzeLNxPWWeOkoqtLFw5yW_v85Kl9BKgO1q0Sh4Jh4gkNK4swfUsjGlxTBLZhMh1ejzoYKdx8-4Tw3UX/s1600/bitlocker+error.PNG" height="335" width="400" /></a></div>
<br />
Specifically, under System Configuration, USB Configuration, "<b>Enable Boot Support</b>" had been disabled just to make sure our users wouldn't be able to boot off USB devices. I wouldn't have equated that BitLocker error with that setting, but as soon as we re-enabled it on the laptops we were able to enable BitLocker.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-69692680429694355692014-05-23T10:59:00.001-04:002014-05-23T10:59:37.567-04:00Lync 2013 android client error connecting to ADFS 3.0 federated 365 serviceSo after our migration to ADFS 3.0 from the old ADFS 2.0 servers, my Android-based Lync users started getting <b>we can't sign you in, please try again</b> errors during login. After digging around I found this forum entry from Jeffr.M, which points out that the Lync Android app has an issue with servers that can support multiple certificates on the same IP.<br />
<br />
<a href="http://community.office365.com/en-us/f/173/t/223414.aspx?pageindex=2">http://community.office365.com/en-us/f/173/t/223414.aspx?pageindex=2</a><br />
<br />
The commands below add a new default catch-all SSL listener to your server. If you're using a Web Application Proxy like I am, you'll want to run them on that server as well.<br />
<br />
<strong style="background-color: white; color: #43515b; font-family: SegoeUI-Regular-final; letter-spacing: 0.3999999761581421px;">netsh http show sslcert</strong><br />
<strong style="background-color: white; color: #43515b; font-family: SegoeUI-Regular-final; letter-spacing: 0.3999999761581421px;"><br /></strong>
<span style="background-color: white; color: #43515b; font-family: SegoeUI-Regular-final; letter-spacing: 0.3999999761581421px;">The command above will show you all the listeners and their associated certificate hashes and application IDs. You'll need those for the next step.</span><br />
<br />
<strong style="background-color: white; color: #43515b; font-family: SegoeUI-Regular-final; letter-spacing: 0.3999999761581421px;">netsh http add sslcert ipport=0.0.0.0:443 certhash=INSERTHASHHERE appid='{INSERTAPPIDHERE}'</strong><br />
<strong style="background-color: white; color: #43515b; font-family: SegoeUI-Regular-final; font-size: 13px; letter-spacing: 0.3999999761581421px;"><br /></strong>
<span style="background-color: white; color: #43515b; font-family: SegoeUI-Regular-final; letter-spacing: 0.3999999761581421px;"><i>Note the ticks around the appid. PowerShell sometimes eats curly brackets, so you'll get an error if you don't use the "'" marks. More info <a href="http://stackoverflow.com/questions/779228/the-parameter-is-incorrect-error-using-netsh-http-add-sslcert">here</a></i></span><br />
<span style="background-color: white; color: #43515b; font-family: SegoeUI-Regular-final; letter-spacing: 0.3999999761581421px;"><i><br /></i></span>
<span style="background-color: white; color: #43515b; font-family: SegoeUI-Regular-final; letter-spacing: 0.3999999761581421px;"><i>Note 2: If you're thinking it's easier to just copy/paste the certificate hash from the MMC Certificates panel - Don't. That method often introduces hidden characters which will take forever to debug.</i></span><br />
<strong style="background-color: white; color: #43515b; font-family: SegoeUI-Regular-final; font-size: 13px; letter-spacing: 0.3999999761581421px;"><br /></strong>
After you do that on your ADFS 3.0 and WAP (Web Application Proxy) servers, restart the ADFS services on them and your Android Lync clients will start working again.<br />
<br />
On a related note, if your OneDrive authentication isn't working, try disabling the /adfs/services/trust/2005/windowstransport endpoint (disable it on the proxy if you're using one, or just disable it in both places to be safe). There's a bug with the windowstransport endpoint in ADFS 3.0 and OneDrive authentication.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-1929420122128269232014-01-07T11:07:00.004-05:002014-01-07T11:07:59.273-05:00Fix for Wake after sleep freeze on Dell LatitudesThis turned out to be an issue with the O2Micro SD/MMC drivers on the E6420/E6430 units that we had. After adding new drivers to our MDT server, I started getting reports from users stating that their laptops were completely freezing up after waking from sleep - no response to keyboard, mouse, etc. No minidumps were generated, <b>powercfg -energy</b> didn't show any major issues, and Event Viewer was useless. It was occurring on both Windows 7 and Windows 8.1 builds. <br />
<br />
It wasn't until after I started disabling hardware components in Device Manager that I found a correlation between disabling the O2Micro SD/MMC controllers and it being able to wake from sleep. (I rebooted after each diagnostic test just to make sure all changes were in full effect)<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjarlQBZ5Crcbv-9Kqh8J9gzUzxEGQ3UGZnkzgYc5MiC94w3Z8VNIhw3HxUEJ22qx34_vuNy_tUPXSlbbFe_3fM7gIxegJSJkwfIbPRASs3RJebVAPkZBm1hdgbwCTCtD2uowVqahDpgMPj/s1600/Capture.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjarlQBZ5Crcbv-9Kqh8J9gzUzxEGQ3UGZnkzgYc5MiC94w3Z8VNIhw3HxUEJ22qx34_vuNy_tUPXSlbbFe_3fM7gIxegJSJkwfIbPRASs3RJebVAPkZBm1hdgbwCTCtD2uowVqahDpgMPj/s1600/Capture.JPG" height="271" width="400" /></a></div>
<br />
Installing an older version of the driver and rebooting fixed the problem on all the laptops that were having the hang issue. Of course, just disabling the SD/MMC controllers is a fine fix too.<br />
<br />
<br />Unknownnoreply@blogger.com215tag:blogger.com,1999:blog-4510394351870137228.post-16675977353770816602013-11-07T13:44:00.003-05:002013-11-07T13:44:40.612-05:00How to access BIOS on a Dell Venue 8 ProTurn the Tablet off.<br />
Press the power button once and then hold the Volume down button for a few seconds. Let go a couple of seconds after the Dell logo appears.<br />
And now you're in BIOS. <br />
<br />Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-4510394351870137228.post-55626895011119225742013-06-18T08:30:00.002-04:002013-06-18T08:30:34.281-04:00Exchange 365 hybrid Remote move request- found yet another method for getting the operation could not be performed because the GUID could not be found.Just when you think you've got the hang of your hybrid exchange deployment (on-premise and cloud), the cloud throws another curve ball at you. I thought I had the process nailed down but apparently I forgot the old rule of 'order is important'.<br />
<br />
Scenario: You need to create a new user, so you go to the AD controller and create or copy them. Then I went to my DirSync server, forced a "Start-OnlineCoexistenceSync", and waited a few minutes for it to finish. At that point, what I should have done was go to the on-premise Exchange server, create the new mailbox, and then run DirSync again. Instead, since I already had the office365.com admin portal up, I went ahead and assigned a license to the user since the object was already sync'd up to the cloud. When I then went to submit a Remote Move Request of the mailbox from my on-premise server to the cloud, I got the friendly "The operation couldn't be performed because object &lt;your GUID here&gt; couldn't be found on &lt;your assigned Office 365 server here&gt;." At this point I hadn't figured out what I'd done wrong, so I forced DirSync a few more times, unassigned the license, reassigned the license, etc. In the end, I actually wound up having to delete the new AD account I'd created and do the whole thing over again, but this time I created the local mailbox BEFORE I assigned a license in the cloud. Apparently if you assign a license to a user who doesn't have an Exchange GUID in their AD attributes yet, it hoses things up.<br />
<br />
Order:<br />
1. Create AD user<br />
2. Create local mailbox<br />
3. Force DirSync<br />
4. Move mailbox to the cloud<br />
5. Assign a license in the 365 admin portal<br />
<br />
Notes: <br />
Steps 4 and 5 are interchangeable.<br />
We create the mailbox locally first so that we retain the ability to move it back from the cloud to on-premise later if needed. DirSync does NOT replicate a GUID created initially from the 365 cloud back to your local AD.<br />
<br />Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-4510394351870137228.post-29213427330370569682013-05-14T10:51:00.001-04:002013-05-14T10:51:27.582-04:00Migrating from on-premise BES on Exchange 2010 to Blackberry cloud services on office 365I'd been waiting a very long time to finally be rid of my on-premise BlackBerry Enterprise Server, and the BlackBerry cloud services (at the compelling price of free) were a light at the end of the tunnel for the Office 365 upgrade. But as you know, with BlackBerry there are many things that can go wrong, and most roads lead to phone wipes - which wasn't an option for me.<div>
<br /></div>
<div>
The initial activation was a piece of cake. I just went into the office 365 admin portal and activated it under Service Settings -> Mobile. Then waited 20 minutes as recommended on this guide: <a href="http://www.proexchange.be/blogs/office365/archive/2012/03/08/migrate-from-on-premise-blackberry-enterprise-server-to-blackberry-business-cloud-services-in-office-365.aspx">http://www.proexchange.be/blogs/office365/archive/2012/03/08/migrate-from-on-premise-blackberry-enterprise-server-to-blackberry-business-cloud-services-in-office-365.aspx</a></div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRXS5JAlm1s4KRFxVuouEI-gfk4Ze6706w6KOJGkcNxT4Ih48WzEJMyqsebkNyWND0ny6qfH258ZDj8raaVB2dPncFKCFfndnurQNDWIeHUmpslYyONuEHIpD7IdMRtobNJxkXfcdldMvh/s1600/BB1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="196" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRXS5JAlm1s4KRFxVuouEI-gfk4Ze6706w6KOJGkcNxT4Ih48WzEJMyqsebkNyWND0ny6qfH258ZDj8raaVB2dPncFKCFfndnurQNDWIeHUmpslYyONuEHIpD7IdMRtobNJxkXfcdldMvh/s640/BB1.png" width="640" /></a></div>
<div>
<br /></div>
<div>
Before proceeding, I had all my BlackBerry users check to make sure they had the Enterprise Activation app installed. </div>
<div>
Then I added my first user, sent them the invite, and migrated their mailbox to Office 365 - at which point I found out that the user had never verified their BlackBerry ID, so they couldn't download the app. Oh, and they were on their last password attempt before it would wipe on them. After having fixed that, we then tried the activation and it balked and told us to wipe the device because there was already another account on it. Now wiping this particular employee's device was really just not an option. So after digging around, I found a page that told us we could initiate an organization-data-only wipe from the BES console.</div>
<div>
<a href="http://docs.blackberry.com/en/admin/deliverables/27983/How_device_deletes_work_data_1303191_11.jsp">http://docs.blackberry.com/en/admin/deliverables/27983/How_device_deletes_work_data_1303191_11.jsp</a></div>
<div>
"In the Device activation list, click Delete only the organization data and remove device. "</div>
<div>
<br /></div>
<div>
That worked properly and then we were able to get a little further along into the enterprise activation where it decided to get stuck forever while contacting the server. To fix that, we just yanked out the battery for 30 seconds, then plugged it back in and tried again. And voila - a fully functional blackberry on blackberry cloud services connected to an office 365 account.</div>
<div>
<br /></div>
<div>
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-44110231736361731262013-05-02T09:44:00.000-04:002013-05-02T09:44:04.414-04:00remote move request not an accepted domain for your organizationSo you've got your fancy hybrid configuration all set up between your on-premise Exchange 2010 SP3 server(s) and the Office 365 cloud, and you only get this error on some mailboxes. Your DirSync is working fine, and your ADFS is even working for once. Depending on your situation, you may have noticed that it's mostly older user mailboxes that are giving you grief. In my case we used to have other domain names in use, and when those were decommissioned the extra email addresses were never removed from the mailboxes. For the remote move to work, every email address domain attached to the mailbox has to be listed under the Domains tab in the Office 365 admin page. Remove the extras, wait a while, force a DirSync, wait a while longer, and try again - it'll go through.<br />
<br />
Example:<br />
<br />
UserA has these email addresses:<br />
usera@contoso.com<br />
usera@contoso.mail.onmicrosoft.com<br />
<br />
UserB has these email addresses:<br />
userb@contoso.com<br />
userb@contoso.mail.onmicrosoft.com<br />
userb@notcontoso.net<br />
<br />
If the domains tab in office 365 only has contoso.com and contoso.mail.onmicrosoft.com, then UserA will move but UserB will fail.<br />
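The check is easy to script. Below is a minimal Python sketch of the same logic (the user names, addresses, and domain list are the hypothetical ones from the example above, not anything pulled from Office 365):

```python
# Domains listed under the Domains tab in the Office 365 admin page
ACCEPTED_DOMAINS = {"contoso.com", "contoso.mail.onmicrosoft.com"}

def blocked_addresses(addresses, accepted=ACCEPTED_DOMAINS):
    """Return the email addresses whose domain is not an accepted domain."""
    return [a for a in addresses if a.rsplit("@", 1)[-1].lower() not in accepted]

user_a = ["usera@contoso.com", "usera@contoso.mail.onmicrosoft.com"]
user_b = ["userb@contoso.com", "userb@contoso.mail.onmicrosoft.com",
          "userb@notcontoso.net"]

print(blocked_addresses(user_a))  # empty list -> remote move should succeed
print(blocked_addresses(user_b))  # notcontoso.net address -> remove it, sync, retry
```

Running something like this against your mailboxes' proxy addresses before a migration wave saves a lot of failed-move head scratching.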
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-63440740839303145892012-10-29T12:02:00.000-04:002012-10-29T12:02:01.759-04:00Windows RT touch cover keyboard not working fix<br />
So one of my managers got their brand new, shiny Windows RT units in today. And the touch keyboard wouldn't work at all. Of course, finding any hints online during the launch week of a new device is fun and after trying out several solutions such as re-docking, refreshing, and cursing profusely, we tried one last thing we saw on the forums - rubbing alcohol.<br />
Yes, after seeing a post from rhalbert10 at <a href="http://forums.wpcentral.com/surface-windows-rt/199669.htm">http://forums.wpcentral.com/surface-windows-rt/199669.htm</a> we grabbed a bottle of rubbing alcohol and some q-tip swabs and cleaned off both sets of shiny, brand new, untarnished, pristine looking connectors. We let it air dry for 3 minutes and then redocked the touch cover. And it started working fine...<br />
<br />Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-4510394351870137228.post-52926018416385456722012-09-12T08:26:00.000-04:002012-09-12T08:26:05.223-04:00Lync 2010 clients stuck in Offline state after patching<br />
So after applying all the latest patches to my Lync 2010 server, my users started complaining that they were stuck in the Offline state but still 'connected'. I noticed some errors in the event log related to SSL problems, so after digging around I went into the Lync Deployment Wizard and ran the certificate wizard. One of my external certificate entries was displaying as 'missing'. After digging further I figured out that one of my certificates had expired but hadn't caused any problems, so I hadn't noticed. I installed an updated certificate and assigned it using the wizard, and shortly after, all my Lync clients switched back to an Available status.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-77829688198348857092012-08-04T23:57:00.000-04:002012-08-04T23:57:04.323-04:00Scheduling non-humans in project 2010So I was helping someone muddle through making a Project plan and ran into scheduling fun. Some human-performed tasks had to be scheduled with predecessors that ran on computer days. The humans only work Monday-Friday, as opposed to the computers, which ran 7 days a week. I found several articles online that showed how to make a copy of the Standard calendar, and they all said to change the days to include the weekends. This seemed to work until we got a dozen or so entries into the plan, and then it tried to schedule a Finish task on the same day as a Start for the same computer resource. After fumbling around for a bit, I figured out how to change the Display from just Date to Date and Time (00/00/00 00:00am/pm) and I noticed some odd start/stop times. Ultimately the issue was that the Standard calendar Work week has the time defined as 08:00 to Noon and 13:00 to 17:00. My cloned calendar had just been set to 08:00 to 17:00, which Project treated as a 9-hour work day and applied 8 hours of work, leaving a remainder. So yes, now my computers get a lunch hour too and all is well.<br />
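The arithmetic behind that calendar fix is worth spelling out. A quick Python sketch (just illustrative math, not the Project object model):

```python
# Standard calendar work week: 08:00 to noon, then 13:00 to 17:00
standard_hours = (12 - 8) + (17 - 13)  # two 4-hour blocks

# Cloned calendar mistakenly defined as one continuous 08:00-17:00 block
cloned_hours = 17 - 8

# Project books 8 hours of work per day, so the cloned calendar leaves
# a leftover hour each day that skews start/finish times on the plan
remainder = cloned_hours - 8

print(standard_hours, cloned_hours, remainder)
```

That stray hour per day is exactly why the cloned calendar produced the odd start/stop times until the lunch break was restored.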
<br />Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-4510394351870137228.post-36671925892311582772012-07-06T21:05:00.003-04:002012-07-06T21:05:50.207-04:00Syncthru LDAP to 2008 active directoryI had the opportunity recently to work with one of the newer large multifunction Samsung copiers this month. The Syncthru web interface is fairly feature rich but the documentation really could use more examples in some places. My bane for 2 hours was figuring out how to populate the address book inside it by doing an LDAP pull from Active Directory. <br />
The initial setup of the LDAP connector went through pretty quickly. I just went to Security -&gt; Network Security and then down to LDAP Server on the left menu. I then clicked Add to enter in my LDAP server. I added in the IP address of one of my domain controllers and used port 3268 to start with, because you want to keep it simple initially and introducing LDAPS would just add one more thing to troubleshoot. Fill in your AD domain name in DC=yourdomain,DC=com format. Choose simple and enter your username in DOMAINNAME\username format. Note that this is the first oddity, in that we're mixing the NetBIOS domain name\username format and LDAP convention on the same form.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj_Z2mc0PddGw4QXQ1PhrR4wZVZ_1_3A_22TId-Du2b5OtjthdxteWZlfyXiY-mFf60nWA8cQRiQJXBUC2CiRNGHx-1mCXeQx8fwPqdWFDzLdNGgadffZjiZyY3l1ynGrrcwZ_lGh7QIWR/s1600/ldappull1a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="464" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj_Z2mc0PddGw4QXQ1PhrR4wZVZ_1_3A_22TId-Du2b5OtjthdxteWZlfyXiY-mFf60nWA8cQRiQJXBUC2CiRNGHx-1mCXeQx8fwPqdWFDzLdNGgadffZjiZyY3l1ynGrrcwZ_lGh7QIWR/s640/ldappull1a.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
On the second half of that window, don't check the LDAPS yet!!! </div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg36AKnNrvsoSQGmpIbrn2MqS18H9JODl-Qs0wZ9FJIzGHDG4Z70px0TC5gbwMh06MiVS8ZfcOlzipC7QsKAKh0j5N7L06XMunO4yizHCTrp_BybZ2aDaKAbSi3NmlK5XDVpY3HUXtYzIB9/s1600/ldappull1b.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="417" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg36AKnNrvsoSQGmpIbrn2MqS18H9JODl-Qs0wZ9FJIzGHDG4Z70px0TC5gbwMh06MiVS8ZfcOlzipC7QsKAKh0j5N7L06XMunO4yizHCTrp_BybZ2aDaKAbSi3NmlK5XDVpY3HUXtYzIB9/s640/ldappull1b.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
Click on the TEST button at the very bottom and make sure you get all OK/Success. </div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi16-SY0atK4NE73V5PEUvhEX2TQIdDNswhdIYhRWg3pcQArn1rts3hMGqgrZenx9PMCGW22tmz6LCTg2IAkfyaODfPfPtAtR0efnrwW2fR9oWPa7blDtgw6fYEEynYQVfcVsMNRwbp10Qb/s1600/ldaptest.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="281" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi16-SY0atK4NE73V5PEUvhEX2TQIdDNswhdIYhRWg3pcQArn1rts3hMGqgrZenx9PMCGW22tmz6LCTg2IAkfyaODfPfPtAtR0efnrwW2fR9oWPa7blDtgw6fYEEynYQVfcVsMNRwbp10Qb/s640/ldaptest.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Once that works, then click the Apply button at the top to save these settings.</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
So now we're halfway done and ready for the twists. Go to the Address book and then click on the LDAP button at the top right.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiL0Ymmx81XmHf9fuD6sDFLD2Ba3C6j6RQOFTuuzI8hGZeh7vOBOc8r5zP8vkQxEjhojuH5xOjfWiVzc7CVuKD-a_3lfai5wAlRurVIm6f4IeHtjSEEa-9N-U8JVPyS-7BpWJARasoO3C9Z/s1600/ldapbutton.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiL0Ymmx81XmHf9fuD6sDFLD2Ba3C6j6RQOFTuuzI8hGZeh7vOBOc8r5zP8vkQxEjhojuH5xOjfWiVzc7CVuKD-a_3lfai5wAlRurVIm6f4IeHtjSEEa-9N-U8JVPyS-7BpWJARasoO3C9Z/s400/ldapbutton.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Now for the GOTCHAS! </div>
<div class="separator" style="clear: both; text-align: left;">
<b>a) I couldn't get it to search recursively</b></div>
<div class="separator" style="clear: both; text-align: left;">
<b>b) It only worked when the user account I used to authenticate against AD was in the same ORG that I was searching. (My AD is set to not allow anonymous searching so I have to use authentication.)</b></div>
<div class="separator" style="clear: both; text-align: left;">
<b>c) The login ID is in <span style="font-size: large;">CN=firstname lastname</span> format. This is different than the domainname\username from the other LDAP screen.</b></div>
<div class="separator" style="clear: both; text-align: left;">
<b>d) The search root is the full path to the exact ORG that you want to pull from. (note the OU=test, OU=US prepended)</b></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizZpDLXSGTkyEWEc_XXhggLa1SiP8LzyJRV6ayW0cYgU38g0jIMa6qnqq4DVwjV0hrRWhDBzavHUjxH7Qi7e-PlWVw-k-J8_hDKnv2_NMYyYxoBgg4p6dxvVN4REJLmFAoMmxCpfxI0Szl/s1600/ldappull2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="321" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizZpDLXSGTkyEWEc_XXhggLa1SiP8LzyJRV6ayW0cYgU38g0jIMa6qnqq4DVwjV0hrRWhDBzavHUjxH7Qi7e-PlWVw-k-J8_hDKnv2_NMYyYxoBgg4p6dxvVN4REJLmFAoMmxCpfxI0Szl/s640/ldappull2.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
To keep it simple, I used (mail=*) for my search filter. Click on the Search button when done and IF you are successful, a list of people will show up. Just click the Apply button to pull them all into the Address book (you can always delete the ones you don't want later from inside the copier). If you botched it, you'll get Incorrect Filter errors.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Repeat for your other ORG units, remembering to use an account inside each one for the Login ID. If you make it past the inconsistencies of the interface and the limitations of the AD implementation of LDAP you're home free. Once you're done you'll have a fully functional Scan to Email function that works great.</div>
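To summarize the gotchas, here are the value formats the two screens expect, sketched as Python strings (the account name and OU path are made-up placeholders, not real values):

```python
# LDAP Server screen (Security -> Network Security): NetBIOS-style bind account
domain_dn = "DC=yourdomain,DC=com"  # AD domain name in LDAP format
bind_user = r"YOURDOMAIN\jsmith"    # DOMAINNAME\username format

# Address book LDAP pull screen: CN-style login ID plus an explicit OU root
login_id = "CN=John Smith"                  # CN=firstname lastname format
search_root = "OU=test,OU=US," + domain_dn  # full path to the exact ORG to pull
search_filter = "(mail=*)"                  # only entries that have an email address

print(search_root)
```

The mismatch between the DOMAINNAME\username style on one screen and the CN= style on the other is the main thing that eats your time.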
<br />
<br />Unknownnoreply@blogger.com8tag:blogger.com,1999:blog-4510394351870137228.post-40025644497826997852012-06-20T22:30:00.000-04:002012-06-20T22:30:43.867-04:00LDAPS, php, windows server 2008 r2 and the Unknown CA errorIt's never a good day when I have to use IIS and PHP in the same sentence. I was trying to set up an open source program to do LDAP authentication against my Active Directory servers, and it worked fine without encryption on port 389. Since I'm not fond of passing credentials in clear text across networks, I then tried to set it up for LDAPS, at which time it started failing. I ran a Wireshark capture on it and the glaring fatal error of "<b>Unknown CA</b>" reared its ugly head. After spending considerable time making sure my AD certificates were up to date, the CA cert was imported to the local machine's certificate store, and several LDP.exe tests just to make sure, I turned my attention to figuring out how to make LDAP skip past that error. PHP had been installed using the Microsoft platform installer, so of course very little matched up with most of the articles I found, since folders like c:\openldap\sysconf don't exist, much less the ldap.conf file, whose location appears to shift depending on which DLL your install came with. <br />
Anyway, the key I needed was <b>TLS_REQCERT never</b> which would tell ldap to go fly a kite if it didn't like the CA. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiod_85hiGn3h19Np3QcYXZYCgCGDlbOk77QUz6lMaZ2Tvc9usHf3bl6Kk-qbzjdZxPoFeaAmAFvRZwQy_RJtoo5vz1MHxwz05cBJ57rE5RxJ4SroSkY8hlb-37j3uidgm9EsfKcouK2yhw/s1600/ldap1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="226" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiod_85hiGn3h19Np3QcYXZYCgCGDlbOk77QUz6lMaZ2Tvc9usHf3bl6Kk-qbzjdZxPoFeaAmAFvRZwQy_RJtoo5vz1MHxwz05cBJ57rE5RxJ4SroSkY8hlb-37j3uidgm9EsfKcouK2yhw/s400/ldap1.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
So yes, that's all that you have to put in the ldap.conf file and then save it out as type "All Files" so notepad doesn't attach a hidden .txt to your filename. Depending on your DLL, you'll either need to drop it in the root of your inetpub drive or in c:\openldap\sysconf. Or do like I did and just dump it in both places. Then run an IISRESET or reboot the server and voila, LDAPS starts working.</div>
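For reference, the entire ldap.conf in my case was this one directive (assuming, as discussed above, that you accept skipping CA validation):

```
# ldap.conf - tell the LDAP client not to validate the server certificate's CA
TLS_REQCERT never
```

Drop that file in whichever location your DLL reads it from, restart IIS, and LDAPS binds start succeeding.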
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Yes, it is slightly less secure since it's not checking the CA but at least it's not clear text.</div>
<br />
Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-4510394351870137228.post-19553610926069133912012-05-10T07:54:00.000-04:002012-05-10T07:54:23.831-04:00Making NPS logs legible with notepad++Overall I do like NPS in Windows 2008 but reading the logs is just painful. I know there are aftermarket solutions but sometimes you just need to be able to read these things with something freely available. Notepad++ is part of my standard toolkit and overall is just a great tool. When you open an NPS log you'll notice that each line is over 2000 characters long. Since all the tags look pretty orderly, I went to Language and told it to interpret is as XML. Now I had pretty, colorized 2000+ character long lines. After a little digging online, I figured out how to do a find/replace to insert a carriage return between each back to back tag.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWOoajeRXOIqEcnno71UUszUg7qcrvXzIB0_SpEcM5GsqaPiKPVtySCzo8DSdig2fhFA293FMg-QbzeSJiRbUcnfw_3tBartTSWxsQxEOiKtskTDl_pgP2PPfoW1bpJQVW4ZRfSaq5366u/s1600/replace.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="384" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWOoajeRXOIqEcnno71UUszUg7qcrvXzIB0_SpEcM5GsqaPiKPVtySCzo8DSdig2fhFA293FMg-QbzeSJiRbUcnfw_3tBartTSWxsQxEOiKtskTDl_pgP2PPfoW1bpJQVW4ZRfSaq5366u/s640/replace.png" width="640" /></a></div>
<br />
You have to remember to select "<b>Regular Expression</b>" before clicking <b>Replace <u>A</u>ll.</b> Now everything fits on the width of the screen and now all you have to do is decipher all the tags.<br />
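If you'd rather script the same transformation, here's a small Python sketch using the standard re module. It mirrors the Notepad++ find/replace by inserting a newline between every pair of back-to-back tags (the sample line is a made-up, shortened stand-in for a real 2000+ character NPS log line):

```python
import re

# A shortened stand-in for one very long NPS log line: back-to-back XML tags
raw = "<Event><Timestamp>03/14/2012</Timestamp><Computer>NPS01</Computer></Event>"

# Insert a newline wherever a ">" is immediately followed by a "<";
# the lookahead keeps the "<" so the next tag starts the new line
pretty = re.sub(r">(?=<)", ">\n", raw)

print(pretty)
```

Point it at a whole log file instead of one string and you get the same screen-width-friendly output without opening an editor.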
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilBebMPGYANn5CaoTfVZh-3fRLc_XqnNrP41_Rj9CMXKWD26UmBLj7Q6fPIgXXXt1lxE8TrmbzcgFTroIAHjlZxpGBBLc32v1vZMyR43tP3zjBDZ8jAzTpqhgR63QW7NsQUcQmJoiHGfOw/s1600/nap+status.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="343" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilBebMPGYANn5CaoTfVZh-3fRLc_XqnNrP41_Rj9CMXKWD26UmBLj7Q6fPIgXXXt1lxE8TrmbzcgFTroIAHjlZxpGBBLc32v1vZMyR43tP3zjBDZ8jAzTpqhgR63QW7NsQUcQmJoiHGfOw/s400/nap+status.png" width="400" /></a></div>
<br />Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-4510394351870137228.post-25489791166327502172012-04-18T08:44:00.001-04:002012-04-18T08:44:35.902-04:00Microsoft Certified Solutions AssociateSo I've been out of the certification loop for a couple of years - mainly due to workload, and the time/cost vs. gain just not being worth it once you hit a high enough level. I always try not to certify too far past the level I currently work at, because a lot of that knowledge will just drain away when you're not using it. <br />
On a whim I logged into the MCP page this week and noticed some changes since my last visit. I guess it was only a matter of time until Microsoft pushed out new 'cloudy' certifications. One bittersweet surprise was that I gained a new certification - Microsoft Certified Solutions Associate - apparently just for being a 2008 MCSA (back when the A stood for 'administrator'). I guess I can't complain too much about a free new title; it's just that after all those years of hearing horror stories about paper MCSEs and fly-by-night Microsoft certs, coupled with a few real-life experiences with 'book smart, practical dumb' certification holders, I'd already felt my certifications were being devalued. I guess it's time to get off my duff and get back to working on these before I get left behind. So today I'll start studying for my "Microsoft Certified Solutions Expert" cert, which will be available June 11th, 2012. If I find anything useful while brushing up I'll be sure to post.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-38082548452793026412012-04-17T14:45:00.002-04:002012-04-17T14:45:24.087-04:00MDT 2012After spotting the release announcement on Aidan Finn's blog I went ahead and downloaded MDT 2012 so I could upgrade my old MDT 2010 Update 1 installation. That turned out to be a chore since TMG 2010 kept trying to eat the install file. Installation afterward was a breeze: I just dumped it on top of the old one, and once I got into the deployment manager an exclamation mark over my deployment share reminded me to upgrade it to the latest version. Running PowerShell scripts is now a built-in task option, and it now supports Security Compliance Manager templates, so there's some new stuff to play with. I also noticed several screens seemed a bit more polished and, if I'm not mistaken, a few new options in the default task sequence steps. <br />
So far so good and no issues - I've currently got an LTI deployment running to test out the new monitoring console.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuwqqW4-s87vmhLJy_sc6bk34PSv_UgZpa8KWR_iD_ttZz4BbI1E6JItRA2BC8whbqk5mlG_O8Zhd_yG7k_x-XQLjsDEw4ExHujiU8bwNeARZ8Vte_nj3h011NgH9JVvQyOUTeysJ1SiBr/s1600/powershell.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="217" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuwqqW4-s87vmhLJy_sc6bk34PSv_UgZpa8KWR_iD_ttZz4BbI1E6JItRA2BC8whbqk5mlG_O8Zhd_yG7k_x-XQLjsDEw4ExHujiU8bwNeARZ8Vte_nj3h011NgH9JVvQyOUTeysJ1SiBr/s320/powershell.png" width="320" /></a></div>
<br />
Monitoring console:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHumKAQuDG98TWpleghJmdWEPnSwRL1Ht-TC2DRVURvP4R-LAaQ8MyUYrvt7mzRQ6-Dgtz235b9uY6A5QC_tDRVd5ztvqD19BXwh2WIrltIf7guzsCqLPq6ngf1bcDOVM7JK8Sw6OdsAfj/s1600/monitoring+console.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="492" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHumKAQuDG98TWpleghJmdWEPnSwRL1Ht-TC2DRVURvP4R-LAaQ8MyUYrvt7mzRQ6-Dgtz235b9uY6A5QC_tDRVd5ztvqD19BXwh2WIrltIf7guzsCqLPq6ngf1bcDOVM7JK8Sw6OdsAfj/s640/monitoring+console.png" width="640" /></a></div>
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4510394351870137228.post-17161324007194308652012-01-17T18:51:00.002-05:002012-01-17T18:51:47.640-05:00Galaxy Tab WiFi stops working every few daysHaving finally gotten fed up with rebooting my Galaxy Tab every few days to get it to work with my home Netgear router, I started trolling through forums for a solution. Suffice it to say, Android has a long way to go on DHCP and WiFi if even half of what's posted on those forums is accurate. Fortunately I managed to stumble on a fix that worked for me. My WiFi network was set up to accept both WPA and WPA2. I removed WPA support, leaving only WPA2 on, and I haven't had to reboot in the past few weeks. (There's some kind of rekeying issue with WPA version 1 that crops up every few days.) I'd also had intermittent issues with my Cisco WAPs, so I applied the same change to them and am waiting to see if it helps.<br />
Unknownnoreply@blogger.com2