Nginx (Reverse SSL Proxy) with ModSecurity (Web App Firewall) on CentOS 7 (Part 1)

Today I’ll demonstrate how to install the Nginx webserver/reverse proxy, with the ModSecurity web application firewall, configured as a reverse SSL proxy, on CentOS 7.  This is useful in scenarios where you are terminating incoming SSL traffic at a centralized location and want a web application firewall to protect the web servers sitting behind the proxy.  This setup can also support multiple sites (i.e., different host names) using Server Name Indication (SNI) for SSL.  If multiple sites are being protected you will need a different SSL certificate for each site (unless a wildcard or SAN certificate will suffice for the sites in question, in which case SNI is not used).  There is an abundance of information available online showing how to do this on Ubuntu/Debian, but not for CentOS.  In Part 1 (this post) we’ll compile everything, install Nginx, and configure a reverse SSL proxy; in Part 2 we’ll configure ModSecurity.

The first step is to deploy a fresh VM with CentOS 7.  We are using the x64 version, but I don’t see why the x86 version would be any different.

Log on to the server and either elevate to a root shell or execute all commands below using sudo.

Here we download and install the EPEL (Extra Packages for Enterprise Linux) repository to provide access to some of the prerequisite packages we need.  After that we install all of the required packages.

rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum update
yum install gc gcc gcc-c++ pcre-devel zlib-devel make wget openssl-devel libxml2-devel libxslt-devel gd-devel perl-ExtUtils-Embed GeoIP-devel gperftools gperftools-devel libatomic_ops-devel libtool httpd-devel automake curl-devel

After installing all of the prerequisite packages we create a user named “nginx” for the daemon to run as.  We also set the login shell to “nologin” to prevent a compromised account from gaining access to a real shell.

useradd nginx       
usermod -s /sbin/nologin nginx       
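
If you prefer, the two commands above can be combined into one:

useradd -s /sbin/nologin nginx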

Now let’s download ModSecurity to the /usr/src/ directory, untar it, and change to the unpacked folder.  At the time of this writing the latest version was 2.9.0.  You can find downloads for the latest version on the ModSecurity website.

cd /usr/src/       
tar xzvf modsecurity-2.9.0.tar.gz       
cd modsecurity-2.9.0/       

Now, inside the source directory, run configure with the “enable-standalone-module” option, and then make the module.

./configure --enable-standalone-module
make

After building the ModSecurity module, move back to the /usr/src/ directory, download Nginx, untar it, and then enter the unpacked directory.  The latest version of Nginx at the time of this writing was 1.9.7.  Newer versions can be found at nginx.org.

cd /usr/src/       
tar xzvf nginx-1.9.7.tar.gz       
cd nginx-1.9.7/       

After navigating to the extracted directory, run configure, then make, and then make install.  The “add-module” option passed to configure points to the nginx/modsecurity/ directory, which should be inside the extracted ModSecurity directory from earlier.  After compilation we create a symlink from the binary created in /usr/local/nginx/sbin/ to the /usr/sbin/ directory so it’s automatically in $PATH.  Then we create a directory called logs, since that’s the folder the PID file will live in.  Next, we change the owner of the Nginx files to root, for security, and finally we change permissions to remove all access for anyone who is not the root user or in the root group.

NOTE: Originally I was under the impression the “nginx” user had to have access to these files, but I was mistaken.  Nginx starts as the root user and then spawns worker processes under the account we specified earlier (in this case, the “nginx” user).  The master process, running as root, is the one that needs access to the SSL cert, the logs, the conf, etc.  That being the case, make sure to lock down the necessary folders so only root has access; in the event the “nginx” user is compromised, the sensitive files should still be secure.  Also, despite setting the user and group owner of all files within the nginx directory to root, after starting the service the following directories automatically change their user owner to “nginx”: cache, proxy_temp, scgi_temp, uwsgi_temp, client_body_temp, fastcgi_temp.

./configure --user=nginx --group=nginx --with-pcre-jit --with-debug --with-ipv6 --with-http_ssl_module --add-module=/usr/src/modsecurity-2.9.0/nginx/modsecurity
make
make install
ln -s /usr/local/nginx/sbin/nginx /usr/sbin/nginx
mkdir /usr/local/nginx/logs
chown -R root:root /usr/local/nginx
chmod -R o-rwx /usr/local/nginx
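
To double-check the lockdown took effect, you can eyeball the ownership and permissions afterwards:

ls -ld /usr/local/nginx /usr/local/nginx/conf /usr/local/nginx/logs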

Now let’s set up a systemd service to allow Nginx to autostart with the server.  In the first block of code we’ll navigate to the proper directory and create the file that tells systemd what to do; the second block contains the contents of that file.

cd /usr/lib/systemd/system
nano nginx.service
# nginx signals reference doc: https://nginx.org/en/docs/control.html
[Unit]
Description=A high performance web server and a reverse proxy server
After=network.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=/usr/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
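
Since we just created a brand new unit file, tell systemd to reload its configuration so it picks up the service:

systemctl daemon-reload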


Now we need to edit the conf file for Nginx.  Before we do that, though, we create two folders, sites-available and sites-enabled (just like in Apache), to hold our vhosts.

mkdir /usr/local/nginx/sites-available
mkdir /usr/local/nginx/sites-enabled
nano /usr/local/nginx/conf/nginx.conf

Change the config file to what’s listed below. NOTE: Set “worker_processes” to the number of CPU cores you wish to dedicate to the server.

user nginx;
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    gzip  on;
    include /usr/local/nginx/sites-enabled/*;
}
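
If you’re unsure how many cores the VM actually has, you can check before settling on a “worker_processes” value:

nproc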

Here we generate Diffie-Hellman parameters larger than OpenSSL’s default size of 1024 bits. We’ll do 4096.

cd /etc/ssl/certs
openssl dhparam -out dhparam.pem 4096
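
Generating 4096-bit parameters can take quite a while.  Once it finishes, a quick sanity check will confirm the size (the first line of output should read “DH Parameters: (4096 bit)”):

openssl dhparam -in dhparam.pem -text -noout | head -n 1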

Now we need to create a folder to hold our SSL files. Inside of that folder we’ll create subfolders for each vhost’s certs. Though not shown below, place your certificate and key files in the vhost SSL directory. We also create a vhost config file in the sites-available directory.

mkdir /usr/local/nginx/ssl
mkdir /usr/local/nginx/ssl/localhost
nano /usr/local/nginx/sites-available/ssl.conf
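
If you don’t have a real certificate yet and just want to test the proxy, you can generate a throwaway self-signed pair in place (test use only; in production use a certificate from a real CA):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -subj "/CN=localhost" -keyout /usr/local/nginx/ssl/localhost/localhost.key -out /usr/local/nginx/ssl/localhost/localhost.crt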

Add the following to the file just created. Make sure for the “ssl_certificate” and “ssl_certificate_key” lines you point to the proper locations. Set the “server_name” line to the hostname of your server (make sure it matches the SSL certificate). Set the “proxy_pass” line to point to the server you’re proxying to. In the example below we’re connecting to port 80 locally, but in most cases this would be a separate server. If you have several SSL certificates and want to define one of the vhosts as default, add “default_server” on the “listen” line, between the port number and semicolon.

server {
        listen   443;

        server_name localhost;
        ssl                  on;
        ssl_certificate      /usr/local/nginx/ssl/localhost/localhost.crt;
        ssl_certificate_key  /usr/local/nginx/ssl/localhost/localhost.key;

        ssl_session_timeout  5m;

        ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        location / {
          proxy_pass http://localhost:80;
          proxy_set_header X-Real-IP  $remote_addr;
          proxy_set_header X-Forwarded-For $remote_addr;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_read_timeout 90;
        }
}
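
For example, to make this vhost the default, the listen line would become:

        listen   443 default_server;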

Create a symlink from the site(s) in the sites-available directory to the sites-enabled directory to activate the sites. Then test to make sure the config files don’t contain any errors.

ln -s /usr/local/nginx/sites-available/ssl.conf /usr/local/nginx/sites-enabled/ssl.conf
nginx -T

Assuming nginx -T doesn’t return any errors, everything should be all ready to go! Enable the service, start it, then verify it’s running.

systemctl enable nginx.service
systemctl start nginx.service
systemctl status nginx.service
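
For a quick smoke test from the server itself, curl the site over HTTPS (the -k flag skips certificate validation, which you’ll need if you’re testing with a self-signed cert):

curl -kI https://localhost/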

If you need to open up the firewall to allow incoming traffic, use the commands below.

firewall-cmd --add-port=443/tcp --permanent
firewall-cmd --reload
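
Alternatively, firewalld ships with a predefined “https” service that opens the same port:

firewall-cmd --add-service=https --permanent
firewall-cmd --reload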

Python – Mass-Mail w/ Attachment

I wrote a Python script today to mass-mail files to our clients. Our wonderful ERP system screwed up again, leaving us with PDFs of the information we needed to send out, the filenames being prefixed with a unique ID per-recipient. Rather than re-generate all documents and have them sent out properly (which likely wouldn’t have happened due to bugs in the system from a recent upgrade) we wrote this script to mail out the documents we already had. We created a CSV file with two columns: column one has the unique filename prefix, column two has the corresponding recipient’s email address.
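
As a hypothetical example (the real file obviously contained actual client IDs and addresses), the CSV looks something like this:

10001,client1@example.com
10002,client2@example.com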

import csv
import os
import smtplib
import mimetypes
from email.mime.multipart import MIMEMultipart
from email import encoders
from email.message import Message
from email.mime.audio import MIMEAudio
from email.mime.base import MIMEBase
from email.mime.image import MIMEImage
from email.mime.text import MIMEText

def readCSV():
    clientCSVList = []
    try:
        #set below to the CSV file you want to read.  Two columns:
        #1 - file prefix to search
        #2 - email address to send file to
        with open("csvFile", 'rb') as csvfile:
            clientReader = csv.reader(csvfile, delimiter=',')

            missing = 0
            for clientNum, email in clientReader:
                found = False
                for file in os.listdir("folder_with_files"):
                    if file.startswith(clientNum):
                        #store the full path so send_email can open the file later
                        clientCSVList.append([clientNum, email, os.path.join("folder_with_files", file)])
                        found = True
                if found == False:
                    #no file matched this client's prefix; report it
                    missing += 1
                    print clientNum + " " + email

        print clientCSVList
        print "Missing: " + str(missing)
    except Exception as e:
        print "Exception: " + str(e.args)

    try:
        for client in clientCSVList:
            send_email("", client[1], "Email Subject Here", "This text is supposed to be the body, but doesn't seem to appear", client[2])
            print client[0] + " Success"
    except Exception as e:
        print "Exception: " + str(e.args)
        print client[0] + " Failure"

def send_email(sender, receiver, subject, message, fileattachment=None):
    emailfrom = sender
    emailto = receiver
    fileToSend = fileattachment
    username = "smtp_username"
    password = "smtp_password"

    msg = MIMEMultipart()
    msg["From"] = emailfrom
    msg["To"] = emailto
    msg["Subject"] = subject
    #attach the message text as a MIME part so it actually appears as the body
    msg.attach(MIMEText(message))

    ctype, encoding = mimetypes.guess_type(fileToSend)
    if ctype is None or encoding is not None:
        ctype = "application/octet-stream"

    maintype, subtype = ctype.split("/", 1)

    if maintype == "text":
        fp = open(fileToSend)
        # Note: we should handle calculating the charset
        attachment = MIMEText(fp.read(), _subtype=subtype)
    elif maintype == "image":
        fp = open(fileToSend, "rb")
        attachment = MIMEImage(fp.read(), _subtype=subtype)
    elif maintype == "audio":
        fp = open(fileToSend, "rb")
        attachment = MIMEAudio(fp.read(), _subtype=subtype)
    else:
        #everything else (e.g. our PDFs) is sent as base64-encoded binary
        fp = open(fileToSend, "rb")
        attachment = MIMEBase(maintype, subtype)
        attachment.set_payload(fp.read())
        encoders.encode_base64(attachment)
    fp.close()
    attachment.add_header("Content-Disposition", "attachment", filename=os.path.basename(fileToSend))
    msg.attach(attachment)

    server = smtplib.SMTP("server:25") #insert SMTP server:port in quotes
    server.login(username, password) #remove if your SMTP server doesn't require authentication
    server.sendmail(emailfrom, emailto, msg.as_string())
    server.quit()

def main():
    readCSV()

if __name__ == "__main__":
    main()

Office 365 Outlook Configuration Problem

TL;DR – Disable IPv6 for the network adapter

In my spare time I moonlight as the CIO of a startup owned by some friends, and last year I decided that the best solution for a new business that didn’t want to invest any capital in computer hardware was cloud computing.  Their needs were fairly basic: email access.  There was no need for a dedicated Exchange box sitting in the non-existent server room.  However, access via Outlook and mobile devices was mandatory, as was calendar and contact sharing, so we opted to go with a hosted Exchange solution rather than a service offering only POP or IMAP.  I chose the Office 365 P1 plan.  At $6/month per person, it was a no-brainer.

I set this up for my friends on their smartphones and laptops (and my own phone as well) and have been very happy with it since then.  However, Microsoft ran some upgrades two weeks ago (one of the major changes being the move to Exchange 2013).  During this upgrade (and afterwards), the company’s President was having issues with Outlook not connecting.  It worked fine on his phone, and we were usually able to get in through Outlook Web App, but Outlook was always hit or miss.  During one of these periods I was unable to get into OWA myself, so I chalked it up to upgrade issues and told my friend to let me know if the problem persisted.  It did, so I went to work.

After testing out a few things within Outlook, I eventually removed the email account from the profile and attempted to re-add it.  This was a bad idea, as now Outlook would hang when trying to open the profile (which contained other important data).  Eventually I just removed the problematic account and we were able to get into the profile, but he still needed the account back.  This shouldn’t have been a difficult thing to do.  By default, Office 365 configures DNS settings for your domain, including Autodiscover, and my friend is using Outlook 2010 which should have worked flawlessly with this, but this was not the case.

The problem we were having was getting Outlook to configure the Exchange settings.  We would type the email address and password, and sometimes Outlook would autoconfigure itself using the Autodiscover settings; other times it would throw an error about being “unable to establish an encrypted connection”.  After a few tries we manually input the server settings, but we were still unable to make the account work.  The strangest part is that after seemingly setting itself up properly using Autodiscover (which validated to me that we had the right settings), Outlook would hang when trying to authenticate, i.e., it would either not pop up a username/password prompt, or sometimes it would, but then reject the credentials we gave it, which I had already validated were correct.

Anyways, after working with Microsoft Office 365 phone support for over an hour (which is included with the low-cost monthly subscription, by the way), I eventually gave up for the day and decided to try again in the morning.

The next morning I remoted back into my friend’s machine and tried again.  Same issues as before: sporadic problems that appeared sometimes but not others, and never a fully completed setup.  At some point I decided to ping Google just to see what the response was (not sure what was going through my mind, but I’m thankful for whatever it was).  I noticed that I was receiving an IPv6 response back from Google.  WTF is this, I thought.  Then I decided to check the network settings and came across my issue.  My friend was using a Verizon Wireless 4G aircard to connect to the Internet.  This card was so kind as to assign him both an IPv4 and an IPv6 address, standard operating procedure within Windows 7.  However, these were both public addresses; the IPv6 address was not a link-local (i.e., non-routable) one.  I decided that Outlook was having some issue connecting to Office 365 because of this.  My best guess is that Outlook was trying to connect using IPv6, then failing over to IPv4 (but not consistently), which explained why *sometimes* it would connect to Autodiscover but not other times, and why, after it did successfully Autodiscover, we couldn’t make it past there.

My solution was simple: go into the network properties for the adapter and uncheck the IPv6 box.  No reboot required.  A problem that took several hours was solved in about 10 seconds.

Manually Run “Fix Permissions” From Recovery

I own a Samsung Galaxy Nexus running Android 4.0.4 (Ice Cream Sandwich).  My phone is rooted and running a custom recovery (ClockworkMod Touch).  I’m not sure if it’s been the case since I purchased the phone (when I originally had the most recent non-touch version of ClockworkMod), or if it started after upgrading to the touch version of CWM, or if it’s a result of me installing a custom ROM and just generally f—ing with my phone, but “Fix Permissions” doesn’t work from recovery any more.  This wouldn’t be a big deal if I could run it from ROM Manager; however, my phone seems to stall every time I run it while Android is booted.  It appears to hang while processing running applications: every time it stalled I exited out of Fix Permissions (in ROM Manager), force closed the app it stalled on, then re-ran Fix Perms, and I’d make a little more progress, only to stall again on another app.  Finally I FC’d enough apps that Fix Perms completed, but I had mixed feelings about the quality of the job I’d just performed.

When you run Fix Perms, the first line (helpfully) prints the script that is being run, which is “/data/data/com.koushikdutta.rommanager/files/fix_permissions”.  Using this info, I attempted to boot into recovery and run the script manually from the ADB shell of my PC.  Again, Fix Perms seemed to stall, only this time I knew it couldn’t be a running app that caused the issue (remember, I’m booted into recovery!).  I issued an

“adb pull /data/data/com.koushikdutta.rommanager/files/fix_permissions”

to get a copy of the Fix Perms script on my computer and checked it out.  It turns out that at the top of the script there is a big warning that states “Warning: if you want to run this script in cm-recovery change the above to #! /sbin/sh”.  Fair enough.  I changed the first line of the script to “#! /sbin/sh” and then pushed the script back to my phone using the following command:

“adb push fix_permissions /data/media”

I pushed the file to the “/data/media” directory instead of the original directory the file came from as that would have overwritten the original and not allowed it to run while Android is booted (not that it’s working anyway…).  NOTE: The “/data/media” directory on the Galaxy Nexus is the virtual SD card (the GN doesn’t have removable media, so it tricks Android into thinking this directory is an SD card).  If you’re using a different phone you’ll likely have to load the script into a different directory.  It shouldn’t make a difference where you place the file, I just wouldn’t put it in the “/data/data” directory.

Once you’ve loaded the script onto your phone, set the proper executable permissions by issuing the command below. Remember to change the file path to wherever you placed the file.

“chmod +x /data/media/fix_permissions”

After that’s done, run the script by issuing:

“/data/media/fix_permissions”

The script should run and fix your file system permissions without stalling.

Squid Reverse SSL Proxy With Multiple Sites (on Windows)

I’ve been running my Exchange 2010 OWA site on a non-standard port (default is 443) for a while so that I can run SSL for my personal website (you’re reading it) on the standard port.  My server is hosted at home, and I only have 1 public IP, so unless I (re-)installed everything on a single server, I could only have 1 site running on port 443.

This was fine up until yesterday, when I attempted to install Exchange 2010 SP2.  The installation kept failing due to some issue with IIS.  I assumed it was due to the fact that I had moved the OWA site from “Default Site Name” to its own site on the same server.  To remedy this, I removed the OWA, ECP and ActiveSync virtual directories and re-installed them in the default location on my Exchange box.  After doing this, SP2 installed fine.  However, now I am back at my original problem (two servers needing incoming traffic on port 443).  I could re-do the custom OWA setup, but what a PITA that’d be to do every time an Exchange update comes out (theoretically).

My solution: Squid!  I fought Squid until the wee hours of the morning before calling it quits and going to bed.  This morning I tried my configuration again, and this time I had success.  The goal, of course, was to have Squid running on a local server, receiving all traffic on port 443 and, based on the URL of the incoming request, redirecting traffic to the proper server.

First thing you need to do is download Squid.  The Windows binary can be obtained from the Squid website.  After you download it (it’s a zip file), extract it to c:/squid.  By default Squid expects to be there, so if you place it somewhere else you’ll have a little more work to do.  This is my first experience with Squid, and as such I opted to use the stable release for Windows, which is 2.7 Stable8.  Also, since I’m setting up a reverse proxy using SSL, I downloaded the SSL support version.

My configuration for squid.conf is below (edited to remove any personal information).  This file needs to be located in the squid/etc directory.  By default, there are example files in there for all config files.  You’ll also need to rename cachemgr.conf.default and mime.conf.default to cachemgr.conf and mime.conf, respectively.  I didn’t edit any settings within those files as I was under the impression that the defaults were fine.  There’s a 4th file in there that doesn’t even need to be configured or renamed for this setup.

The first section here is setting the cache directory (if you opt not to use the default).  Please note: the default location is c:/squid/var/cache, however, if you simply extracted the Squid Windows binary zip to c:/squid, the /var/cache directory does not exist and will cause an error.  Please create that directory (or whatever directory you specify below). Also, ensure that when typing directory paths you use the "/" slash instead of the normal Windows "\" slash.

#Set Cache Directory
cache_dir ufs c:/path/to/cache_dir 100 16 256

This next section is specifying that we want HTTPS to run on port 443 and accelerate (cache) reverse requests. Then it points to the SSL certificate and the corresponding private key. I installed a non-password protected key, as I found that Squid would hang when starting if the key was protected (makes sense). The last part, "vhost", tells Squid that there are multiple sites. I found that without "vhost", my config didn’t work as expected. Also, most documentation I found suggested adding "defaultsite=(url of default site)" to this line. This caused me major headaches, as Squid kept redirecting to the wrong site and I’d get 404 errors when the requested files didn’t exist on the server I was directed to.

#Define Port(s) and Cert(s) (if appropriate)#
https_port 443 accel cert=c:/path/to/cert/cert.crt key=c:/path/to/key/priv.key vhost

Here, we define the sites we are proxying to. "cache_peer" is followed by the internal IP address of the target host (the addresses below are placeholders; substitute your own). 443 is the port number we’re directing to. "ssl" tells Squid to use SSL. "login=PASS" allows for pass-through authentication (such as when using .htaccess). "name" specifies the name we’ll use to refer to this host within Squid.

#Define Sites#
cache_peer 192.168.1.10 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=wordpress
cache_peer 192.168.1.11 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=exchange

Here we’re configuring some ACLs for our proxy. I chose to add "-acl" to the name of my ACLs just for clarity. The last 2 parts of the first line (and 1 part of the second line) specify the destination URL(s) that the ACL will match (example.com placeholders are shown below; substitute your own hostnames). The last line, "acl all", defines an ACL for all IP addresses. This will be used to deny access to all traffic that doesn’t meet our other conditions.

acl wordpress-acl dstdomain www.example.com example.com
acl exchange-acl dstdomain mail.example.com
acl all src 0.0.0.0/0.0.0.0

Here we’re enabling the ACLs (as before we just defined them). "http_access allow" allows traffic using the specified ACL. My "cache_peer_access" statements seem to be saying the same thing as the "http_access" statements, however, this is what I ended up with after tons of trial and error, so I’m leaving it as-is. Those statements may be redundant and only 1 might be necessary. The last line, "never_direct", was only needed for Exchange OWA. If you’re not hosting OWA, you can probably leave that off.

#Enable ACLs#
http_access allow wordpress-acl
cache_peer_access wordpress allow wordpress-acl
http_access allow exchange-acl
cache_peer_access exchange allow exchange-acl
never_direct allow exchange-acl

Here we’re denying all other requests access to the servers behind the proxy.

#Deny all others#
cache_peer_access wordpress deny all
cache_peer_access exchange deny all


And that’s it!  Install Squid as a service (if you haven’t done so already) by using the “squid.exe -i” command.  The squid.exe file is in the squid/sbin/ directory.  If you unzipped Squid to any directory other than c:/squid, you’ll need to add another switch (I believe it’s “-f”) to the install command to specify the location of everything.  Documentation for the Windows binary is available on the Squid website.
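
For example, assuming the default c:/squid layout (the config path and the “-f” switch here are my best guess, so double-check against the documentation):

c:\squid\sbin\squid.exe -i -f c:/squid/etc/squid.conf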

Please make sure that there are no port conflicts on the host running Squid.  I found that when another program was running on port 443 Squid would hang when started from the Services console.  Also, make sure that you have the necessary firewall ports open.  Then start Squid from the Services console.

Now verify that everything’s working properly.  Squid will create a log file in the same directory as the squid.exe file if there are any errors upon initialization.  After that, logs will be created in the squid/var/logs directory.

VMware vCenter Storage View is blank/not updating

We upgraded our VMware infrastructure to v5 back around the time it was released late last summer. Since then I’d been having an issue where the Storage View feature of vCenter would not load. I would receive a mostly blank screen with a link to update the Storage View; clicking that link had no effect. I finally started troubleshooting a few weeks ago (after nearly 6 months without Storage View). Something I did resolved the issue, and it seemed to be working since.

Today I loaded my VI Client up and clicked the Storage View tab to check on something. To my displeasure I realized SV was broken again. This time, I finally determined why – the server vCenter is installed on had a port conflict with another process, the port in question being 8080, which was also being used by IIS for some other service I had installed at some point in the past (certainly NOT at the time that SV broke originally). I changed the port that the IIS instance was running on, restarted the VMware services, and voila, SV worked again.
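
If you suspect the same problem, you can see exactly what’s listening on port 8080 (then match the PID against Task Manager):

netstat -ano | findstr :8080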

Like I said above, the IIS instance was running prior to upgrading to vSphere 5, so I’m not sure exactly what caused the conflict. I don’t know if vCenter 5 installs a new service to port 8080 by default or if I, in my infinite wisdom, set the port to use 8080. I know I set IIS to 8080 at one point to AVOID a port conflict with vCenter, however, I wasn’t aware that vCenter was running any web services on ports other than 80 and 443. Long story short, make sure that the vCenter Management Webservices HTTP service isn’t conflicting with anything else (according to VMware 8080 is the default port for the aforementioned service).

XBMC MKV – Green Video

I recently installed XBMC on my living room server in order to better supply my 1080p TV with downloaded media. Upon trying to play an MKV x264 movie, I was presented with an all-green screen but the video’s proper audio. It turns out this was caused by enabling hardware acceleration, found under Settings->Video->Playback. I disabled hardware acceleration (which is actually the default setting; I had enabled it last night, thinking “why not”), and the video plays just fine now.

The State of Bitcoin

It’s currently 12:17pm EST (16:17 GMT) and Mt. Gox (the most prominent Bitcoin exchange, which last Sunday shut down all trading due to a huge site hack) was supposed to resume trading at 11am EST. Currently the site says 15:30 GMT (45 minutes ago) is when trading is to resume; however, that obviously isn’t going to happen. Hundreds of people are hanging out in the #mtgox channel on the Freenode IRC network, anxiously awaiting the return of Mt. Gox.

Stay tuned for updates…

Windows 7 – New User Cannot Logon [Solved]

Upon creating a new user and attempting to login, I received the following error message: "The User Profile Service failed the logon. User profile cannot be loaded." I’m running Windows 7 x64 SP1.

My current setup is: 
C: drive (SSD) Program Files, Windows, ProgramData.
X: drive (HDD) Users, and everything else.

I use the Administrator account as my account. To set up the system, I created a temporary user, logged in, changed the registry to point the Users directory to the X: drive (and moved the Default and Public folders to the X:\Users directory). I then enabled the Admin account and logged in with it. This created my profile on the X drive.
However, I was trying to create another user account to test with.  The account is named "Test".  I receive the quoted error message upon attempting to log in.  I checked the X:\Users folder and there is no profile for the Test account, not even a folder for it.  I tried copying the Default profile and renaming it, but no dice.  I also verified that the folder seemed to have the proper permissions (Full Control for System, Administrator, Administrators, as the Test account was an admin account as well).  There was no entry in the registry under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\ for the Test user, so I got a utility that could find that user’s SID and created an entry for it, but still, I received the original error message when trying to log in.

I finally solved the problem.  It turns out that the X:\Users\Default folder needed to give the "Everyone" group Read, List folder contents, and Read and Execute permissions. When I copied the Default profile via the user profile screen, it didn’t apply correct permissions; it just inherited those of the destination folder.  Event Viewer showed that Windows didn’t have permission to copy the Default profile to the X:\Users\Test folder.  When I gave the Everyone group permission to the Default profile folder, I was then able to login with new users (and subsequently have them automatically create profiles).
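
For reference, the same grant can be applied from an elevated command prompt (assuming the X:\Users layout described above):

icacls X:\Users\Default /grant Everyone:(OI)(CI)RX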

SFC /scannow error 0x000006ba – The RPC Server is unavailable

While attempting to run a System File Check (sfc /scannow) on a Windows XP machine today, I received the error 0x000006ba – The RPC Server is unavailable.  I checked the services console and saw that the RPC service was running.  After scratching my head for a few minutes, I decided that the SFC component must have been disabled (as the CD I installed from was a torrented ISO that had the latest updates).  I figured the ISO must have been modified to disable the file checker for whatever reason.

Sidenote for the haters: I have a valid license, but I was deploying about 20 refurbished machines for coworkers and didn’t have the time or desire to install all of the Windows updates that were needed to get the machines fully updated.

Anyway, after scouring the ultranet for a little bit, I came across an nLite setting that was probably employed when creating the Windows disk I downloaded.  The setting is a registry entry that disables SFC.  It’s located at HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Winlogon.  It’s a DWORD value titled “SfcDisable” and was set to some random hex value.  Change this to 0 and restart, and SFC should work again.  Note: There was another value called SfcDisabled which was already set to 0.  That threw me off for a moment, but then I saw the proper DWORD I was looking for, which was not already zero.
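
If you’d rather make the change from a command prompt than regedit, this one-liner does the same thing (a restart is still required):

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v SfcDisable /t REG_DWORD /d 0 /f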