Converting a Cheap ($2) ST-Link v2 Clone into a Hardware GPG Key

drew 29 Sep, 2018 0 comments GPG, Hacking, Hardware

Several weeks back, I decided to re-explore the concept of hardware devices for storing GPG keys.  The Yubikey Neo, 4, and 5 all have this functionality (with varying key-length options depending on the model), but these devices are not inexpensive ($40+, though Wired magazine has been running a deal for the past several months where you can get a Yubikey 4 with a 1-year subscription to the magazine for $5).  In my search for inexpensive alternatives, I came across this post, where the author purchased very inexpensive ST-Link v2 clone devices and changed the firmware to an open-source hardware GPG implementation, called Gnuk.  I have dabbled with hardware hacking more and more over the past few years, and I’m no stranger to flashing firmware (for example, I’ve been a long-time fan of DD-WRT and OpenWRT), so I thought this sounded like a really great project.  The article was well written and had links to everything I needed to buy and install, so everything should be easy, right? Of course not, it never is, and that’s why I am writing this post rather than just letting you find the one I originally worked off of.

I was not initially familiar with the ST-Link v2, but I learned that it is a hardware tool used for interfacing with devices via SWD (Serial Wire Debug).  SWD is a protocol found in ARM devices and uses fewer wires than JTAG.  STMicroelectronics makes the official ST-Link v2 device, which a Google search shows going for between $10 and $25.  AliExpress, however, has tons of sellers offering clone devices for approximately $2.  These devices feature an STM32F103 microcontroller, which is compatible with the Gnuk firmware (or in other words, Gnuk offers this chip as a build target).  I went searching on AliExpress, found the cheapest device I could (factoring in shipping), and bought 5 units.  Why 5?  I figured they'd make a cool little geek gift for some of my friends, but realistically I anticipated breaking at least one of them.  Also, the nice thing about these devices is that since each unit is itself an SWD programmer, you can use one of them to re-flash the others.  My order was estimated to arrive in 19-39 days, and I was pleasantly surprised when they arrived in around 15 days.

The programmer with the cover on

With the devices now on hand, I began by compiling the firmware.  That was my first stumbling block.  The repo where the project is stored had moved since the article I was referencing was published, so I needed to track that down.  Eventually I found the repo, and cloned it onto my dev system (for reference, I was using Kali Linux, since that was the only Linux VM I had on my machine; any Debian-based variant should work fine with my instructions).

git clone https://salsa.debian.org/gnuk-team/gnuk/gnuk.git
cd gnuk/
git submodule update --init
cd src/
./configure --vidpid=234b:0000 --target=ST_DONGLE
mkdir build
make
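If the build finishes cleanly, the firmware image should end up in build/gnuk.bin under the src/ directory (that's the ~128KB file we'll flash later), so a quick sanity check from src/:

ls -lh build/gnuk.bin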

Once I built the firmware, I needed to attach the device to my debugger and connect using OpenOCD.  Initially, I tried using one of the ST-Link v2 devices I had purchased to program another unit, but upon running into issues I instead tried using my Bus Pirate, which also supports SWD.  However, I was unsuccessful in connecting to the device with either of these programmers.  Multiple reference articles listed the debug layout as (starting from the spot closest to the USB port): SWDIO, GND, SWCLK, v3.3.  Upon further inspection, I found that the units I had were physically different from those pictured in the article I referenced (and another that I found later).  In the reference posts, the 4 debug points on the board were holes going all the way through the board.  On mine, however, they were just pads.  The underside of the boards did not have any appearance of being connected to the topside, so I was not able to poke probes through the board for a good connection.  That was the first hardware problem I encountered.  I found that if I connected v3.3 and GND from the debugger to the target device (on the header pins, rather than the debug pads) I could power the device just the same as if I were connecting them to the debug pads, meaning I had fewer unwieldy connections to deal with.  With that connected, the device powered up, but I could not get any type of data connection.  For reference, I was using clips to connect to the board so that I could grab on to the pads, since I couldn't push anything through.

I decided that I would solder my probes to the board.  So, I needed to buy a soldering iron, since I had no clue where mine was (it'd been years).  I decided on a kit that included a multimeter, so I could test the voltage levels coming from the board; I wanted to approach this slightly less blindly than I had thus far.  Once that kit arrived, I attempted to solder wires to the 2 data pads on the board (pads 1 and 3, if counting from 1, with 1 being closest to the USB port).  I *finally* was able to get it soldered (this stuff is really small!), but I still could not get it to connect to the ST-Link or Bus Pirate.  Also, I burned the crap out of some of the debug points in the process (some out of error, some out of frustration).  I then busted out the multimeter and started probing the board.  What I found was that the points on the board were not only physically different from the reference article boards, they were actually arranged differently too.  I found that point 1 was v3.3 and point 4 was GND.  Upon connecting power to point 1 and ground to point 4, the LED lit up just as it had when connecting to the header pins, so I knew at least those points were correct.  Points 2 and 3 had to be SWDIO and SWCLK, right?  Thankfully there were only 2 combinations I needed to test to identify which point was which.  Guess which combination worked? Neither!

One of the devices that I burned the crap out of debug points 1 and 3

At this point I was at a loss.  I thought I had done everything correctly from a troubleshooting standpoint, but I could not explain why those two points on the board would not give me debug access.  In my frustration, I ended up buying 2 more sets of 5 boards each from AliExpress, from 2 different sellers, each listing multiple pictures of the board showing that they had debug holes rather than pads.  That was 2 days ago, so now I had to wait another 15-30 days for them to arrive.  So much for this being an easy, fun project… For reference, these are the two that I bought that I'm still waiting to arrive and that *should* be much easier to make work: https://www.aliexpress.com/item/ST-Link-V2-Programming-Unit-mini-STM8-STM32-Emulator-Downloader-M89-New/32631496848.html?spm=a2g0s.9042311.0.0.15ff4c4djHAJY3, https://www.aliexpress.com/item/1Set-ST-LINK-Stlink-ST-Link-V2-Mini-STM8-STM32-Simulator-Download-Programmer-Programming-With-Cover/32887597480.html?spm=a2g0s.9042311.0.0.15ff4c4djHAJY3. This is the bad one (stay away from it!): https://www.aliexpress.com/item/1PCS-ST-LINK-Stlink-ST-Link-V2-Mini-STM8-STM32-Simulator-Download-Programmer-Programming-With-Cover/32792925130.html?spm=a2g0s.9042311.0.0.15ff4c4djHAJY3

I’m curious and impatient, though.  So yesterday I decided to take one more stab at this.  I had no idea what debug points 2 and 3 were on that board and why they would not give me SWD access to the chip.  However, I knew what the chip itself was (due to the imprint on it that said “STM32F103”), so I thought that could help.  I Googled the pinout of the chip and identified the SWDIO and SWCLK pins.  In the image below, you can see that pin 34 corresponds to SWDIO and pin 37 is SWCLK.

STM32 chip pinout

So, now that I knew which pins on the actual chip were needed for debugging, I attempted to connect my clips to the pins.  This was about as difficult as I expected, as these pins are super small.  I couldn't get pin 34 connected by itself, though I was able to connect pin 37 (since it was at the end of a row, and as such easier to grab).  Eventually I decided to give up on precision and just attempt to make contact with the pins by grabbing each one needed along with a few around it.  That was much easier than attaching to only the single needed pins.  Note: I don't know if this would work on similar or different chips, and I suppose it could potentially cause damage, so proceed with caution.

Connected to pins 34 and 37 directly on-chip

I fired up OpenOCD using my Bus Pirate, and was finally able to connect!  Hallelujah!  I was able to execute the needed commands to unlock the device, reset it, and erase the flash.  However, I was not able to write my firmware file.  Maybe I was just too impatient, but I literally gave this thing 20+ minutes to write a 128KB file, and it just wouldn’t.  From the other posts I read, it appeared from the output of OpenOCD that it should only take a matter of seconds to write the file.  Frustrated again, I decided to try to use one of the devices I bought to act as the debugger, too.  I used the OpenOCD configuration below to connect.

Note: if you don’t have OpenOCD already, you can install it by typing:

sudo apt install openocd

OpenOCD config for ST-Link v2 (called openocd.cfg moving forward):

telnet_port 4444
source [find interface/stlink-v2.cfg]
# WORKAREASIZE has to be set before the target config is sourced for it to take effect
set WORKAREASIZE 0x10000
source [find target/stm32f1x.cfg]
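For anyone going the Bus Pirate route instead, a roughly equivalent config is below.  This is just a sketch based on my setup: it assumes the Bus Pirate enumerates as /dev/ttyUSB0, and the exact option names vary a bit between OpenOCD versions, so double-check against the interface/buspirate.cfg shipped with yours.  Either way, the same openocd -f openocd.cfg invocation shown next works.

telnet_port 4444
source [find interface/buspirate.cfg]
buspirate_port /dev/ttyUSB0
transport select swd
set WORKAREASIZE 0x10000
source [find target/stm32f1x.cfg]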

To connect, run the command:

openocd -f openocd.cfg

You should see a message about connecting to the device, and it should not exit within a few seconds. After that, open a new terminal and connect to OpenOCD using Telnet:

telnet localhost 4444

Once connected via Telnet, you need to unlock the device, reset it, erase memory (for good measure), write the Gnuk binary (adjust the path below to wherever your gnuk.bin ended up), lock the device, and then reset it once more:

reset halt
stm32f1x unlock 0
reset halt
stm32f1x mass_erase 0
flash write_bank 0 /root/src/build/gnuk.bin 0
stm32f1x lock 0
reset halt

After that, once you connect the device to the USB port of your machine it should be detected as a GPG card! You can confirm by running the command:

gpg --card-status
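From here you can either generate a new key pair directly on the token or move an existing key over to it.  As a rough sketch (I won't walk through the full key setup in this post), the on-device generation route looks like this:

gpg --card-edit
# then, at the gpg/card> prompt:
admin
generate
# to move an existing key instead, use the keytocard command from gpg --edit-key <your key ID>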

References:
https://nx3d.org/gnuk-st-link-v2/
https://blog.danman.eu/2-usb-crypto-token-for-use-with-gpg-and-ssh/
https://www.earth.li/~noodles/blog/2015/08/program-fst01-with-buspirate.html

Fix for Cron Failing on VMware vCenter Server Appliance (VCSA) 6.5

drew 19 Apr, 2017 1 Comment VMware

When trying to enable scheduled jobs via cron on VMware VCSA 6.5, I kept seeing the errors below, and my job would not run.

2017-04-19T09:56:01.996673-04:00 VCSA crond[104661]: PAM _pam_load_conf_file: unable to open config for password-auth
2017-04-19T09:56:01.996797-04:00 VCSA crond[104661]: PAM _pam_load_conf_file: unable to open config for password-auth
2017-04-19T09:56:01.996907-04:00 VCSA crond[104661]: PAM _pam_load_conf_file: unable to open config for password-auth
2017-04-19T09:56:01.997010-04:00 VCSA crond[104661]: (root) PAM ERROR (Permission denied)
2017-04-19T09:56:01.997116-04:00 VCSA crond[104661]: (root) FAILED to authorize user with PAM (Permission denied)

The contents of /etc/pam.d/crond had 3 references to “password-auth”; however, there was no file in /etc/pam.d called “password-auth”.  I changed “password-auth” to “system-auth” in /etc/pam.d/crond, as seen below, and everything worked.

account required pam_access.so
account include system-auth
session required pam_loginuid.so
session include system-auth
auth include system-auth
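If you'd rather not edit the file by hand, the same substitution can be done in one shot (back up the original first):

cp /etc/pam.d/crond /etc/pam.d/crond.bak
sed -i 's/password-auth/system-auth/g' /etc/pam.d/crond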

Solution for Subnet Conflict Between VPN and LAN

We had an issue at work where VPN users were having intermittent difficulty reaching servers due to a subnet conflict, i.e. the VPN network is 192.168.1.X and so is the LAN they're connecting from. It displayed itself in a strange way: application connections would fail repeated attempts, but after pinging the servers in question the first ping would fail, the next 3 would work, and then the application would work for a while. I finally developed a solution. Here's what I did: create a PowerShell script that identifies the IP address of the VPN connection, then creates static routes for the 192.168.1.0/24 and 192.168.0.0/24 subnets (the latter being another common home subnet, as well as the one running at the office) pointing out the VPN connection's IP with a lower metric number than everything else. Then, set up a scheduled task to run automatically when the VPN connection occurs (task triggered by event). Save the script as "Fix_VPN.ps1" at the root of the C drive and import the scheduled task (you may need to make sure that the event I have triggering this is also generated by whatever VPN solution you're using; we're using the built-in RAS (SSTP)).

Powershell Script:

 
$ErrorActionPreference = 'silentlycontinue'
$ip = $null
# Find the network adapter whose name contains "VPN" and grab its assigned IP address
$nics = [System.Net.NetworkInformation.NetworkInterface]::GetAllNetworkInterfaces()
foreach ($nic in $nics) {
   if($nic.Name -like "*VPN*"){
      $props = $nic.GetIPProperties()
      $addresses = $props.UnicastAddresses
      foreach ($addr in $addresses) {
         $ip = $($addr.Address.IPAddressToString)
      }
      break
   }
}
# If a VPN address was found, re-point the conflicting subnets at the VPN adapter
# with a low metric so these routes win over the local LAN routes
if($ip -ne $null){
   route delete 192.168.1.0 METRIC 1 | Out-Null
   route delete 192.168.0.0 METRIC 1 | Out-Null
   route add 192.168.1.0 mask 255.255.255.0 $ip METRIC 1 | Out-Null
   route add 192.168.0.0 mask 255.255.255.0 $ip METRIC 1 | Out-Null
}
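To test the script by hand before wiring up the trigger, run it from an elevated prompt while connected to the VPN, then check that the new routes point at the VPN adapter's IP (the path here matches where the post says to save the script):

powershell -ExecutionPolicy Bypass -File C:\Fix_VPN.ps1
route print 192.168.1.*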

Scheduled Task:

   <?xml version="1.0" encoding="UTF-16"?>
    <Task version="1.2" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
      <RegistrationInfo>
        <Date>2016-01-28T12:11:28.5687701</Date>
        <Author>AGreenBHM</Author>
      </RegistrationInfo>
      <Triggers>
        <EventTrigger>
          <Enabled>true</Enabled>
          <Subscription>&lt;QueryList&gt;&lt;Query Id="0" Path="System"&gt;&lt;Select Path="System"&gt;*[System[Provider[@Name='Rasman'] and EventID=20267]]&lt;/Select&gt;&lt;/Query&gt;&lt;/QueryList&gt;</Subscription>
        </EventTrigger>
      </Triggers>
      <Principals>
        <Principal id="Author">
          <UserId>S-1-5-18</UserId>
          <RunLevel>HighestAvailable</RunLevel>
        </Principal>
      </Principals>
      <Settings>
        <MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
        <DisallowStartIfOnBatteries>false</DisallowStartIfOnBatteries>
        <StopIfGoingOnBatteries>true</StopIfGoingOnBatteries>
        <AllowHardTerminate>true</AllowHardTerminate>
        <StartWhenAvailable>false</StartWhenAvailable>
        <RunOnlyIfNetworkAvailable>false</RunOnlyIfNetworkAvailable>
        <IdleSettings>
          <StopOnIdleEnd>true</StopOnIdleEnd>
          <RestartOnIdle>false</RestartOnIdle>
        </IdleSettings>
        <AllowStartOnDemand>true</AllowStartOnDemand>
        <Enabled>true</Enabled>
        <Hidden>false</Hidden>
        <RunOnlyIfIdle>false</RunOnlyIfIdle>
        <WakeToRun>false</WakeToRun>
        <ExecutionTimeLimit>P3D</ExecutionTimeLimit>
        <Priority>7</Priority>
      </Settings>
      <Actions Context="Author">
        <Exec>
          <Command>powershell</Command>
          <Arguments>C:\Fix_VPN.ps1</Arguments>
        </Exec>
      </Actions>
    </Task>
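To import the task, save the XML to a file and register it from an elevated prompt (the task name and file path below are just examples):

schtasks /Create /TN "Fix_VPN" /XML C:\Fix_VPN_Task.xml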

Nginx (Reverse SSL Proxy) with ModSecurity (Web App Firewall) on CentOS 7 (Part 1)

Today I'll demonstrate how to install the Nginx webserver/reverse proxy, with the ModSecurity web application firewall, configured as a reverse SSL proxy, on CentOS 7.  This is useful in scenarios where you are terminating incoming SSL traffic at a centralized location and are interested in implementing a web application firewall to protect the web servers sitting behind the proxy.  This setup can also support multiple sites (i.e. different host names) using Server Name Indication (SNI) for SSL.  If multiple sites are being protected, you will need a different SSL certificate for each site (unless a wildcard or SAN certificate will suffice for the sites in question, in which case SNI is not used).  There is an abundance of information available online showing how to do this on Ubuntu/Debian, but not for CentOS.  In Part 1 (this post) we'll compile everything, install Nginx, and configure a reverse SSL proxy, and in Part 2 we'll configure ModSecurity.

The first step is to deploy a fresh VM with CentOS 7.  We are using the x64 version, but I don’t see why the x86 version would be any different.

Log on to the server and either elevate to a root shell or execute all commands below using sudo.

Here we are downloading and installing the EPEL (Extra Packages for Enterprise Linux) repository to provide access to some of the prerequisite packages we need.  After that, we install all of the required packages.

rpm -ivh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum update
yum install gc gcc gcc-c++ pcre-devel zlib-devel make wget openssl-devel libxml2-devel libxslt-devel gd-devel perl-ExtUtils-Embed GeoIP-devel gperftools gperftools-devel libatomic_ops-devel libtool httpd-devel automake curl-devel

After installing all of the prerequisite packages we create a user named “nginx” for the daemon to run as.  We also set the login shell to “nologin” to prevent a compromised account from gaining access to a real shell.

      
useradd nginx       
usermod -s /sbin/nologin nginx       

Now let’s download ModSecurity to the /usr/src/ directory, untar it, and change to the unpacked folder.  At the time of this writing the latest version was 2.9.0.  You can find downloads for the latest version at https://www.modsecurity.org.

      
cd /usr/src/       
wget https://www.modsecurity.org/tarball/2.9.0/modsecurity-2.9.0.tar.gz       
tar xzvf modsecurity-2.9.0.tar.gz       
cd modsecurity-2.9.0/       

Now inside the source directory, run autogen.sh, then configure with the “enable-standalone-module” option, and finally make the module.

      
./autogen.sh       
./configure --enable-standalone-module       
make       

After building the ModSecurity module, move back to the /usr/src/ directory, download Nginx, untar it, and then enter the unpacked directory.  The latest version of Nginx at the time of this writing was 1.9.7.  Newer versions can be found at http://www.nginx.org.

      
cd /usr/src/       
wget http://nginx.org/download/nginx-1.9.7.tar.gz       
tar xzvf nginx-1.9.7.tar.gz       
cd nginx-1.9.7/       

After navigating to the extracted directory run configure, then make, and then make install.  The “add-module” option set with configure will point to the nginx/modsecurity/ directory which should be inside of the extracted ModSecurity directory from earlier.  The default behavior of nginx is to install to /usr/local/nginx, but in the command below we’re changing the location of where some files are stored. First, the binary “nginx” is being saved directly in /usr/sbin/ to make things easier dealing with $PATH. We’re setting the conf path to the /etc/nginx/ directory, the PID path to the /var/run/ directory, and the log paths to /var/log/nginx/. I have two reasons for changing these paths: 1 – to keep the respective folders in the standard locations, and 2 – to allow SELinux to function using default rules. There are default SELinux rules related to /sbin/nginx, /etc/nginx/, and /var/log/nginx/, and I’ve found it much easier to work with SELinux if I just keep everything possible in the locations that are already predefined. There will be more later showing how to add SELinux rules to account for the directories that aren’t predefined.

      
./configure --user=nginx --group=nginx --with-pcre-jit --with-debug --with-ipv6 --with-http_ssl_module --add-module=/usr/src/modsecurity-2.9.0/nginx/modsecurity --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log
make       
make install       
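Before moving on, you can sanity-check that the ModSecurity module and the custom paths actually made it into the binary by dumping the compile-time options:

nginx -V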

Now let's set up a systemd service to allow Nginx to autostart with the server.  In the first block of code we'll navigate to the proper directory and create a file to tell systemd what to do; the second block contains the contents of that file.

cd /usr/lib/systemd/system
nano nginx.service

#
# nginx signals reference doc:
# http://nginx.org/en/docs/control.html
#
[Unit]
Description=A high performance web server and a reverse proxy server
After=network.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
# start-stop-daemon is a Debian tool and isn't present on CentOS; ask nginx for a graceful shutdown instead
ExecStop=/bin/kill -s QUIT $MAINPID
TimeoutStopSec=5
KillMode=mixed

[Install]
WantedBy=multi-user.target
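After saving the unit file, reload systemd so it picks up the new service (we'll enable and start it at the end, once nginx is configured):

systemctl daemon-reload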

Now we need to edit the conf file for Nginx.  Before we do that, though, we create two folders, sites-available and sites-enabled, just like in Apache, to hold our vhosts.

mkdir /etc/nginx/sites-available
mkdir /etc/nginx/sites-enabled
nano /etc/nginx/nginx.conf

Change the config file to what’s listed below. NOTE: For “worker_processes” set the number to the number of cores you wish to dedicate to the server.

user nginx;
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    gzip  on;
    include /etc/nginx/sites-enabled/*;
}

Here we generate Diffie-Hellman parameters larger than OpenSSL's default size of 1024 bits.  We'll do 4096; be warned that generating a group this size can take a while.

cd /etc/ssl/certs
openssl dhparam -out dhparam.pem 4096

Now we need to create a folder to hold our SSL files. Inside of that folder we’ll create subfolders for each vhost’s certs. Though not shown below, place your certificate and key files in the vhost SSL directory. We also create a vhost config file in the sites-available directory.

mkdir /etc/nginx/ssl
mkdir /etc/nginx/ssl/localhost
nano /etc/nginx/sites-available/ssl.conf

Add the following to the file just created. Make sure for the “ssl_certificate” and “ssl_certificate_key” lines you point to the proper locations. Set the “server_name” line to the hostname of your server (make sure it matches the SSL certificate). Set the “proxy_pass” line to point to the server you’re proxying to. In the example below we’re connecting to port 80 locally, but in most cases this would be a separate server. If you have several SSL certificates and want to define one of the vhosts as default, add “default_server” on the “listen” line, between the port number and semicolon.

server {
        listen   443;

        server_name localhost;
        ssl                  on;
        ssl_certificate      /etc/nginx/ssl/localhost/localhost.crt;
        ssl_certificate_key  /etc/nginx/ssl/localhost/localhost.key;

        ssl_session_timeout  5m;

        ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        location / {
          proxy_set_header X-Real-IP  $remote_addr;
          proxy_set_header X-Forwarded-For $remote_addr;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_read_timeout 90;
          proxy_pass http://127.0.0.1;         
        }
}

Create a symlink from the site(s) in the sites-available directory to the sites-enabled directory to activate the sites. Then test to make sure the config files don’t contain any errors.

ln -s /etc/nginx/sites-available/ssl.conf /etc/nginx/sites-enabled/ssl.conf
nginx -T

Assuming nginx -T doesn’t return any errors, everything should be all ready to go! Enable the service, start it, then verify it’s running.

systemctl enable nginx.service
systemctl start nginx.service
systemctl status nginx.service

If you need to open up the firewall to allow incoming traffic, use the commands below:

firewall-cmd --add-port=443/tcp --permanent
firewall-cmd --reload

Python – Mass-Mail w/ Attachment

I wrote a Python script today to mass-mail files to our clients. Our wonderful ERP system screwed up again, leaving us with PDFs of the information we needed to send out, the filenames being prefixed with a unique ID per-recipient. Rather than re-generate all documents and have them sent out properly (which likely wouldn’t have happened due to bugs in the system from a recent upgrade) we wrote this script to mail out the documents we already had. We created a CSV file with two columns: column one has the unique filename prefix, column two has the corresponding recipient’s email address.
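For illustration, a couple of rows from that CSV would look something like this (the prefixes and addresses here are made up):

10452,ap@clientone.com
10587,billing@clienttwo.net

The script itself: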

import csv
import os
import smtplib
import mimetypes
from email.mime.multipart import MIMEMultipart
from email import encoders
from email.message import Message
from email.mime.audio import MIMEAudio
from email.mime.base import MIMEBase
from email.mime.image import MIMEImage
from email.mime.text import MIMEText

def readCSV():
    clientCSVList = []
    try:
        #set below to the CSV file you want to read.  Two columns:
        #1 - file prefix to search
        #2 - email address to send file to 
        with open("csvFile", 'rb') as csvfile: 
            clientReader = csv.reader(csvfile, delimiter=',')

            missing = 0
            for clientNum, email in clientReader:
                found = False
                for file in os.listdir("folder_with_files"):
                    if file.startswith(clientNum):
                        clientCSVList.append([clientNum, email, file])
                        found = True
                        continue
                if found == False:
                    missing += 1
                    print clientNum + " " + email

        print clientCSVList
        print "Missing: " + str(missing)
    except Exception as e:
        print "Exception: " + str(e.args)

    try:
        for client in clientCSVList:
            send_email("sender@domain.com", client[1], "Email Subject Here", "Email body text goes here", client[2])
            print client[0] + " Success"

    except Exception as e:
        print "Exception: " + str(e.args)
        print client[0] + " Failure"

def send_email(sender, receiver, subject, message, fileattachment=None):
    emailfrom = sender
    emailto = receiver
    fileToSend = fileattachment
    username = "smtp_username"
    password = "smtp_password"

    msg = MIMEMultipart()
    msg["From"] = emailfrom
    msg["To"] = emailto
    msg["Subject"] = subject
    # attach the body as the first MIME part (setting a "Message" header does not produce a body)
    msg.attach(MIMEText(message))

    ctype, encoding = mimetypes.guess_type(fileToSend)
    if ctype is None or encoding is not None:
        ctype = "application/octet-stream"

    maintype, subtype = ctype.split("/", 1)

    if maintype == "text":
        fp = open(fileToSend)
        # Note: we should handle calculating the charset
        attachment = MIMEText(fp.read(), _subtype=subtype)
        fp.close()
    elif maintype == "image":
        fp = open(fileToSend, "rb")
        attachment = MIMEImage(fp.read(), _subtype=subtype)
        fp.close()
    elif maintype == "audio":
        fp = open(fileToSend, "rb")
        attachment = MIMEAudio(fp.read(), _subtype=subtype)
        fp.close()
    else:
        fp = open(fileToSend, "rb")
        attachment = MIMEBase(maintype, subtype)
        attachment.set_payload(fp.read())
        fp.close()
        encoders.encode_base64(attachment)
    attachment.add_header("Content-Disposition", "attachment", filename=fileToSend)
    msg.attach(attachment)

    server = smtplib.SMTP("server:25") #insert SMTP server:port in quotes
    server.starttls()
    server.login(username,password)
    server.sendmail(emailfrom, emailto, msg.as_string())
    server.quit()

def main():
    readCSV()

if __name__ == "__main__":
    main()

Office 365 Outlook Configuration Problem

drew 26 Apr, 2013 3 Comments Office 365

TL;DR – Disable IPv6 for the network adapter

I moonlight as the CIO of a startup owned by some friends in my spare time and decided last year that the best solution for a new business that didn't want to invest any capital in computer hardware was cloud computing.  Their needs were fairly basic – email access.  No need for a dedicated Exchange box sitting in the non-existent server room.  However, access via Outlook and mobile devices was mandatory, as was calendar and contact sharing, so we opted to go with a hosted Exchange solution rather than a service offering only POP or IMAP.  I chose Office 365 P1 (http://office.microsoft.com/en-us/business/office-365-small-business-small-business-software-FX103887194.aspx).  At $6/month per person, it was a no-brainer.

I set this up for my friends on their smartphones and laptops (and my own phone as well) and have been very happy with it since then.  However, Microsoft ran some upgrades 2 weeks ago (one of the major upgrades being moving to Exchange 2013).  During this upgrade (and afterwards), the company's President was having issues with Outlook not connecting.  It worked fine on his phone, and we were usually able to get in through Outlook Web App, but Outlook was always hit or miss.  During one of these periods, I was unable to get into OWA myself, so I chalked it up to upgrade issues and told my friend to let me know if the problem persisted.  It did, so I went to work.

After testing out a few things within Outlook, I eventually removed the email account from the profile and attempted to re-add it.  This was a bad idea, as now Outlook would hang when trying to open the profile (which contained other important data).  Eventually I just removed the problematic account and we were able to get into the profile, but he still needed the account back.  This shouldn’t have been a difficult thing to do.  By default, Office 365 configures DNS settings for your domain, including Autodiscover, and my friend is using Outlook 2010 which should have worked flawlessly with this, but this was not the case.

The problem we were having was getting Outlook to configure the Exchange settings.  We would type the email address and password, and sometimes Outlook would autoconfigure itself using the Autodiscover settings; other times it would throw an error about “unable to establish an encrypted connection”.  After a few tries we manually input the server settings, but we were still unable to make the account work.  The strangest part is that after seemingly setting itself up properly using Autodiscover (which validated to me that we had the right settings), Outlook would hang when trying to authenticate, i.e. it would either not pop up a username/password prompt, or sometimes it would, but then reject the credentials we gave it, which I had already validated were correct.

Anyways, after working with Microsoft Office 365 phone support for over an hour (which is included with the low-cost monthly subscription, by the way), I eventually gave up for the day and decided to try again in the morning.

The next morning I remoted back into my friend's machine and tried again.  Same issues as before, sporadic issues that appeared sometimes but not others, and never able to fully complete the process.  At some point I decided to ping Google just to see what the response was (not sure what was going through my mind, but I'm thankful for whatever that was).  I noticed that I was receiving an IPv6 response back from Google.  WTF is this, I thought.  Then I decided to check the network settings and came across my issue.  My friend was using a Verizon Wireless 4G aircard to connect to the Internet.  This card was so kind as to assign him both an IPv4 and an IPv6 address, standard operating procedure within Windows 7.  However, these were both public addresses, not a link-local (i.e. non-routable) IPv6 address.  I suspected that Outlook was having some issue connecting to Office 365 because of this.  My best guess is that Outlook was trying to connect using IPv6, then failing over to IPv4 (but not consistently), which explained why *sometimes* it would connect to Autodiscover, but not other times, and why, after it did successfully Autodiscover, we couldn't make it past there.

My solution was simple: go into the network properties for the adapter and uncheck the IPv6 box.  No reboot required.  A problem that took several hours was solved in about 10 seconds.
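For what it's worth, on newer versions of Windows (8/Server 2012 and later) the same unchecking can be done from PowerShell; that wasn't an option on the Windows 7 laptop in question, but as a sketch (the adapter name is whatever Get-NetAdapter shows for your connection):

Disable-NetAdapterBinding -Name "Ethernet" -ComponentID ms_tcpip6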

Manually Run “Fix Permissions” From Recovery

I own a Samsung Galaxy Nexus running Android 4.0.4 (Ice Cream Sandwich).  My phone is rooted and running a custom recovery (Clockworkmod Touch 5.8.0.2).  I’m not sure if it’s been the case since I purchased the phone (and originally had the most recent non-touch version of Clockworkmod) or if it’s after upgrading to the touch version of CWM or if it’s as a result of me installing a custom ROM and just generally f—ing with my phone, but “Fix Permissions” doesn’t work from recovery any more.  This wouldn’t be a big deal if I could run this from Rom Manager, however, my phone seems to stall every time I run this while Android is booted.  It appears that it hangs while processing running applications, as every time it stalled I exited out of Fix Permissions (in Rom Manager), force closed the app it stalled on, then re-ran Fix Perms, and I’d make a little bit more progress, only to stall again on another app.  Finally I FC’d enough apps that Fix Perms completed, but I had mixed feelings about the quality of the job I’d just performed.

When you run Fix Perms, the first line (helpfully) prints the script that is being run, which is “/data/data/com.koushikdutta.rommanager/files/fix_permissions”.  Using this info, I attempted to boot into recovery and run the script manually from the ADB shell of my PC.  Again, Fix Perms seemed to stall, only this time I knew it couldn’t be a running app that caused the issue (remember, I’m booted into recovery!).  I issued an

“adb pull /data/data/com.koushikdutta.rommanager/files/fix_permissions”

to get a copy of the Fix Perms script on my computer and checked it out.  It turns out that at the top of the script there is a big warning that states “Warning: if you want to run this script in cm-recovery change the above to #! /sbin/sh”.  Fair enough.  I changed the first line of the script to “#! /sbin/sh” and then pushed the script back to my phone using the following command:

“adb push fix_permissions /data/media”

I pushed the file to the “/data/media” directory instead of the original directory the file came from as that would have overwritten the original and not allowed it to run while Android is booted (not that it’s working anyway…).  NOTE: The “/data/media” directory on the Galaxy Nexus is the virtual SD card (the GN doesn’t have removable media, so it tricks Android into thinking this directory is an SD card).  If you’re using a different phone you’ll likely have to load the script into a different directory.  It shouldn’t make a difference where you place the file, I just wouldn’t put it in the “/data/data” directory.
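As an aside, if you'd rather not open an editor for the shebang change, sed on the pulled copy does the same thing (run this on your PC, in the directory holding the pulled file, before pushing it back):

sed -i '1s|^#!.*|#! /sbin/sh|' fix_permissions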

Once you’ve loaded the script onto your phone, set the proper executable permissions by issuing the command below. Remember to change the file path to wherever you placed the file.

“chmod +x /data/media/fix_permissions”

After that’s done, run the script by issuing:

“/data/media/fix_permissions”

The script should run and fix your file system permissions without stalling.

Squid Reverse SSL Proxy With Multiple Sites (on Windows)

I’ve been running my Exchange 2010 OWA site on a non-standard port (default is 443) for a while so that I can run SSL for my personal website (you’re reading it) on the standard port.  My server is hosted at home, and I only have 1 public IP, so unless I (re-)installed everything on a single server, I could only have 1 site running on port 443.

This has been fine, up until yesterday, when I attempted to install Exchange 2010 SP2.  The installation kept failing due to some issue with IIS.  I assumed it was due to the fact that I had moved the OWA site from “Default Site Name” to its own site on the same server.  To remedy, I removed the OWA, ECP and ActiveSync virtual directories and re-installed them in the default location on my Exchange box.  After doing this, SP2 installed fine.  However, now I am back at my original problem (2 servers needing incoming traffic on port 443).  I could re-do the custom OWA setup, but what a PITA that'd be to do every time an Exchange update comes out (theoretically).

My solution: Squid!  I fought Squid until the wee hours of the morning before calling it quits and going to bed.  This morning I tried my configuration again, and this time I have success.  The goal, of course, was to have Squid running on a local server and receiving all traffic on port 443, and based on the URL of the incoming request, redirect traffic to the proper server.

First thing you need to do is download Squid.  The Windows binary can be obtained from here: http://squid.acmeconsulting.it/download/dl-squid.html.  After you download it (it’s a zip file), extract it to c:/squid.  By default, Squid expects to be here, so if you place it somewhere else you’ll have a little bit more work to do.  This is my first experience with Squid, and as such I opted to use the stable release for Windows, which is 2.7, Stable8.  Also, since I’m setting up  a reverse proxy using SSL, I downloaded the SSL support version.

My configuration for squid.conf is below (edited to remove any personal information).  This file needs to be located in the squid/etc directory.  By default, there are example files in there for all config files.  You’ll also need to rename cachemgr.conf.default and mime.conf.default to cachemgr.conf and mime.conf, respectively.  I didn’t edit any settings within those files as I was under the impression that the defaults were fine.  There’s a 4th file in there that doesn’t even need to be configured or renamed for this setup.

The first section here is setting the cache directory (if you opt not to use the default).  Please note: the default location is c:/squid/var/cache, however, if you simply extracted the Squid Windows binary zip to c:/squid, the /var/cache directory does not exist and will cause an error.  Please create that directory (or whatever directory you specify below). Also, ensure that when typing directory paths you use the "/" slash instead of the normal Windows "\" slash.

#Set Cache Directory
cache_dir ufs c:/path/to/cache_dir 100 16 256

This next section is specifying that we want HTTPS to run on port 443 and accelerate (cache) reverse requests. Then it points to the SSL certificate and the corresponding private key. I installed a non-password protected key, as I found that Squid would hang when starting if the key was protected (makes sense). The last part, "vhost", tells Squid that there are multiple sites. I found that without "vhost", my config didn’t work as expected. Also, most documentation I found suggested adding "defaultsite=(url of default site)" to this line. This caused me major headaches, as Squid kept redirecting to the wrong site and I’d get 404 errors when the requested files didn’t exist on the server I was directed to.

#Define Port(s) and Cert(s) (if appropriate)#
https_port 443 accel cert=c:/path/to/cert/cert.crt key=c:/path/to/key/priv.key vhost

Here, we define the sites we are proxying to. "cache_peer" is followed by the internal IP address of the target host. 443 is the port number we’re directing to. "ssl" tells Squid to use SSL. "login=PASS" allows for pass-through authentication (such as when using .htaccess). "name" specifies the name we’ll use to refer to this host within Squid.

#Define Sites#
cache_peer 1.1.1.1 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=wordpress
cache_peer 1.1.1.2 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=exchange

Here we’re configuring some ACLs for our proxy. I chose to add "-acl" to the name of my ACLs just for clarity. The last 2 parts of the first line (and 1 part of the second line) specify the destination URL(s) that the ACL will work for. The last line, "acl all", defines an ACL for all IP addresses. This will be used to deny all traffic access that doesn’t meet our other conditions.

#ACLs#
acl wordpress-acl dstdomain site.com www.site.com
acl exchange-acl dstdomain owa.exchange.com
acl all src 0.0.0.0/0.0.0.0

Here we’re enabling the ACLs (as before we just defined them). "http_access allow" allows traffic using the specified ACL. My "cache_peer_access" statements seem to be saying the same thing as the "http_access" statements, however, this is what I ended up with after tons of trial and error, so I’m leaving it as-is. Those statements may be redundant and only 1 might be necessary. The last line, "never_direct", was only needed for Exchange OWA. If you’re not hosting OWA, you can probably leave that off.

#Enable ACLs#
http_access allow wordpress-acl
cache_peer_access wordpress allow wordpress-acl
http_access allow exchange-acl
cache_peer_access exchange allow exchange-acl
never_direct allow exchange-acl

Here we're denying all other requests access to our servers behind the proxy.

#Deny all others#
cache_peer_access wordpress deny all
cache_peer_access exchange deny all

 

And that's it!  Install Squid as a service (if you haven't done so already) by using the “squid.exe -i” command.  The squid.exe file is in the squid/sbin/ directory.  If you unzipped Squid to any other directory than c:/squid, you'll need to add another switch (I think it's “-f”) to the install command to specify the location of everything.  Documentation for the Windows binary can be found here: http://squid.acmeconsulting.it/Squid27.html.

Please make sure that there are no port conflicts on the host running Squid.  I found that when another program was running on port 443 Squid would hang when started from the Services console.  Also, make sure that you have the necessary firewall ports open.  Then start Squid from the Services console.

Now verify that everything’s working properly.  Squid will create a log file in the same directory as the squid.exe file if there are any errors upon initialization.  After that, logs will be created in the squid/var/logs directory.

VMware vCenter Storage View is blank/not updating

drew 12 Mar, 2012 0 comments VMware

We upgraded our VMware infrastructure to v5 back around the time it was released late last summer. Since then I've been having an issue where the Storage View feature of vCenter would not load. I would receive a mostly blank screen that had a link to update the Storage View. Clicking that link had no effect. I finally started messing around trying to troubleshoot a few weeks ago (after nearly 6 months without Storage View). Something I did resolved the issue, and it seemed to have been working since then.

Today I loaded my VI Client up and clicked the Storage View tab to check on something. To my displeasure I realized SV was broken again. This time, I finally determined why: the server vCenter is installed on had a port conflict with another process. The port in question was 8080, which was also being used by IIS for some other service I had installed at some point in the past (certainly NOT at the time that SV broke originally). I changed the port that the IIS instance was running on, restarted the VMware services, and voila, SV worked again.

Like I said above, the IIS instance was running prior to upgrading to vSphere 5, so I'm not sure exactly what caused the conflict. I don't know if vCenter 5 installs a new service on port 8080 by default or if I, in my infinite wisdom, set the port to 8080. I know I set IIS to 8080 at one point to AVOID a port conflict with vCenter; however, I wasn't aware that vCenter was running any web services on ports other than 80 and 443. Long story short, make sure that the vCenter Management Webservices HTTP service isn't conflicting with anything else (according to VMware, 8080 is the default port for the aforementioned service).
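If you want to track down this kind of conflict faster than I did, you can check what's bound to the suspect port from a command prompt on the vCenter server and then look up the owning process (replace 1234 with the PID that netstat reports):

netstat -ano | findstr :8080
tasklist /FI "PID eq 1234"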

XBMC MKV – Green Video

drew 24 Sep, 2011 2 Comments XBMC

I recently installed XBMC on my living room server in order to better supply my 1080p TV with downloaded media. Upon trying to play an MKV x264 movie, I was presented with an all-green screen but the video's proper audio. It turns out that this was caused by enabling hardware acceleration, which is found under Settings->Video->Playback. I disabled hardware acceleration (disabled is actually the default setting; I had enabled it last night, thinking “why not”), and the video plays just fine now.