TLS Peer Verification w/PHP 5.6 and WordPress SMTP Email plugin

We ran into an issue after upgrading from PHP 5.5 to 5.6: we were no longer able to send email via the awesome WordPress SMTP Email plugin. It turns out that PHP 5.6 introduced some enhancements to its OpenSSL implementation, and stream wrappers now verify peer certificates and hostnames by default. This was causing the form submissions on our site to fail. Clearly there were some issues with our Postfix config and certs. While we sorted those out, we were able to “solve” the immediate problem by disabling peer verification in the plugin, without editing core files. Thankfully the plugin developer included a filter that allows us to modify the options array that is passed to PHPMailer.
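Here is a sketch of the idea using WordPress core's phpmailer_init action rather than the plugin's own filter (whose exact name I will leave to the plugin docs); it assumes a bundled PHPMailer new enough (5.2.10+) to support SMTPOptions:

    // Relax TLS peer verification for outgoing SMTP mail.
    // This is a stopgap while the Postfix certs get fixed, not a real fix.
    add_action('phpmailer_init', function ($phpmailer) {
        $phpmailer->SMTPOptions = array(
            'ssl' => array(
                'verify_peer'       => false,
                'verify_peer_name'  => false,
                'allow_self_signed' => true,
            ),
        );
    });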

Woot!

Thanks, random WordPress forum user, for the solution!

Whitelist IPs in Nginx

I want to whitelist my clients' IP addresses (and my office IPs) so they can view a site, while the rest of the world is redirected to another site, using Nginx. My Nginx server is behind a load balancer.

Using the geo module I am able to do this rather easily. By default, geo will use $remote_addr for the IP address. However, because our server is behind a load balancer this will not work, as it would always be the IP of the load balancer. You can pass in a parameter to geo to specify where it should get the IP value. In this case, we want to get the IP from $http_x_forwarded_for.
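The geo block lives in the http context and looks something like this (the addresses are placeholders):

    geo $http_x_forwarded_for $redirect_ips {
        default  1;   # everyone else gets redirected
        1.2.3.4  0;   # my IP
        5.6.7.8  0;   # office IP
    }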

What this does is assign the variable $redirect_ips the value listed after each IP address. So, if my IP is 1.2.3.4, $redirect_ips will have a value of 0, or false. If my IP is not matched, it gets the default value of 1, or true.

Ok, with that, my server directive now looks like:
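Something along these lines, with placeholder hostnames:

    server {
        listen 80;
        server_name example.com;

        # Non-whitelisted visitors get sent elsewhere.
        if ($redirect_ips) {
            return 302 http://www.example.org$request_uri;
        }

        # ... the rest of the site configuration ...
    }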


Setting up Git HTTP Backend for local collaboration

You want to share a topic branch with a colleague but do not want to push that branch upstream to GitHub/Bitbucket/GitLab, etc. How do you do this? You could create a patch and email it. Or you could use Apache and allow your colleague to pull from your repo directly. This takes a bit more time to set up, but is the most convenient for everyone involved. The basic idea is that you “host” the git repos on your local machine and push your commits to them as you develop. You then make your git repos available via Apache on your internal network so team members can pull from your local repo.

First create a place to store your repos. Let’s also create a test repo to work with to make sure everything is working.
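Something like this, assuming ~/git as the storage location:

    mkdir -p ~/git
    cd ~/git
    git init --bare testproject.git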

Next let's set up Apache (I am using OS X El Capitan with Apache 2.4).

Edit /private/etc/apache2/httpd.conf.

Ensure the following modules are being loaded.
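git-http-backend runs as a CGI script, so we need mod_cgi, along with mod_alias and mod_env for the directives used below (the module paths are the stock OS X ones):

    LoadModule alias_module libexec/apache2/mod_alias.so
    LoadModule env_module libexec/apache2/mod_env.so
    LoadModule cgi_module libexec/apache2/mod_cgi.so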

Uncomment the following line:
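This is the vhosts include, which matches the file we edit next:

    Include /private/etc/apache2/extra/httpd-vhosts.conf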

Edit /private/etc/apache2/extra/httpd-vhosts.conf.

I removed the existing VirtualHosts, since I actually do all of my development with Vagrant and Linux and really have no need for anything more than a single VirtualHost on my Mac.

Edit the paths to suit your local setup.
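A sketch of the vhost; GIT_PROJECT_ROOT and the git-http-backend paths are the parts to adjust (git --exec-path will tell you where git-http-backend lives on your machine):

    <VirtualHost *:80>
        ServerName localhost

        # Where the bare repos live; export them all over HTTP.
        SetEnv GIT_PROJECT_ROOT /Users/yourname/git
        SetEnv GIT_HTTP_EXPORT_ALL

        # Route /git/ requests to the git-http-backend CGI.
        ScriptAlias /git/ /Library/Developer/CommandLineTools/usr/libexec/git-core/git-http-backend/

        <Directory "/Library/Developer/CommandLineTools/usr/libexec/git-core">
            Options +ExecCGI
            AllowOverride None
            Require all granted
        </Directory>
    </VirtualHost>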

Restart Apache.
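    sudo apachectl restart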

Your git repos will now be available at http://localhost/git/<REPONAME>.git.

You should now be able to clone your empty repo.
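    git clone http://localhost/git/testproject.git

git will warn that you appear to have cloned an empty repository; that is expected.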

Let’s test it out.
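    cd testproject
    echo "hello" > README
    git add README
    git commit -m "Initial commit"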

You should be able to make changes and push to your remote.
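    git push origin master

If the push is rejected, you may need the http.receivepack tweak shown below; unauthenticated HTTP pushes are disabled by default.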

There it is!

Now you can have a colleague pull changes directly from you. Simply provide them with your public address, for example http://192.168.1.120/git/testproject.git, and they should now be able to clone your repo, add your remote, pull changes, etc.

If you want other people to be able to push to your repo you will have to explicitly allow this. In your testproject.git repo, set the http.receivepack value to true:
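    cd ~/git/testproject.git
    git config http.receivepack true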

Ohai Plugin for OpenVZ VMs to get public IP Address

Media Temple uses the OpenVZ virtualization system, and I have quite a few Media Temple servers under Chef management. The one thing that has made management difficult is that, by default, Ohai reports 127.0.0.1 as the node's IP address during a Chef run, which means I cannot use knife to execute commands from my workstation.

For example, when I run knife node show mydv.server.com I get the following:
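Something like this (abridged):

    Node Name:   mydv.server.com
    Environment: _default
    FQDN:        mydv.server.com
    IP:          127.0.0.1
    ...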

Kinda sucks. If I try to execute something on all of my MT servers, say knife ssh 'tags:mt' 'sudo chef-client', I will get a ton of failed connection errors because knife is trying to connect to 127.0.0.1.

The solution is to get Ohai to return the correct IP. OpenVZ assigns the actual public IP to its virtual interfaces, while the primary interface reports 127.0.0.1. This Ohai plugin will retrieve the correct IP.
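Here is a sketch of the plugin in the old Ohai 6 style (which is what the /etc/chef/plugins path below expects); the venet0:0 interface name is an assumption based on the usual OpenVZ setup:

    # Report the public IPv4 address from the OpenVZ venet alias
    # instead of the loopback address on the primary interface.
    provides "ipaddress"
    require_plugin "network"

    iface = network["interfaces"]["venet0:0"]
    if iface && iface["addresses"]
      public_ip, _info = iface["addresses"].find { |_addr, info| info["family"] == "inet" }
      ipaddress public_ip if public_ip
    end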

Put this in your Ohai plugins directory for Chef, /etc/chef/plugins/openvz.rb, and when Chef runs it will pick up the correct IP address.

Now when I run knife node show mydv.server.com, I get the server's actual public IP.


VirtualBox Bug Related to Sendfile

I have been doing more web development with Vagrant and VirtualBox. It’s a nice way to keep my dev environment nearly the same as my production environments. Recently I was doing some front-end coding and was running into the most bizarre errors with my JavaScript.

It pays to read the documentation. Had I read it more thoroughly, I would have known about this before wasting time debugging it. Oops!

Turns out there is a bug with VirtualBox's shared folder support and sendfile. The bug prevents the VM from serving updated versions of any file in the shared directory. Obviously this is not good for web development.

The solution is easy enough. You just have to disable sendfile in your web server.

In Apache:
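    EnableSendfile Off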

In Nginx:
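    sendfile off;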

Update:

The Vagrant documentation does include some information about it: https://docs.vagrantup.com/v2/synced-folders/virtualbox.html

Set Up a Development Environment with Vagrant & Chef

I use Chef to manage and provision new staging and production servers in our company. It takes a lot of the headache out of managing multiple servers and allows me to fire up new web & data servers for our projects with ease. I have several cookbooks that I use to configure servers and to setup/configure websites. In a nutshell, it’s rad, and website deployments have never been easier.

For my local development environment I currently run Ubuntu, with Apache, Nginx, PHP 5.3, Ruby 1.9.3, Ruby 2.0, MySQL 5.5, etc. Some of our projects use Node, Redis, and MongoDB. Ideally I would offload all of these different servers into virtual machines designed for the task, identical in configuration to the staging and production servers.

Enter Vagrant. Vagrant is a tool to configure development environments.

How I expect this to work:

* I want to use my native development tools (NetBeans, Sublime Text, Git, etc.) on my workstation.
* I want to use the VM to serve the project files.
* I do not want to have to deploy my local code to the VM for testing and review.
* I will mount my project directory as a shared path in the VM.
* I will build the VM using my Chef cookbooks.

Ok, not so bad. Vagrant makes this really easy.

Install Vagrant.
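Vagrant ships as a native package for each platform; grab it from the Vagrant downloads page. On Ubuntu, for example:

    sudo dpkg -i vagrant_*.deb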

Once installed you want to add a box to build your VM from. There are many to choose from. I prefer CentOS for my servers, and will add one from http://www.vagrantbox.es/. All of our production machines use CentOS or RHEL, so my development VM should use CentOS.
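The box name here is what we will reference later; the URL is a placeholder for whichever CentOS 6.4 box you pick:

    vagrant box add centos-6-4 http://example.com/boxes/centos-6-4.box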

Now you need to create your Vagrant project. I have considered creating the Vagrant config file in my project and putting it under version control. Currently I just have a directory for Vagrant projects. Either way works.
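Create a project directory (the path is just my convention) and initialize it:

    mkdir -p ~/vagrant/myproject && cd ~/vagrant/myproject
    vagrant init centos-6-4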

This will create a Vagrantfile, which describes how to build the VM: it defines the network settings, shared directories, and how to provision the machine using Chef. When running vagrant init you can pass the name of the box to use; I used the box we added earlier, “centos-6-4”. If you leave this parameter off you can always edit the Vagrantfile to change it.

Configuration is rather minimal. Not a whole lot we need to do to get something running. Open Vagrantfile in your text editor.

I want my VM to use the local network. You could opt for a private network, which I believe is the default. You can also set up port forwarding here, for example to forward requests to http://localhost:8080 to port 80 on the VM. I just set up a public network for my VMs because oftentimes I have people in the office review work on my server from their own machines.
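    config.vm.network "public_network"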

Let's set up the shared folders. The first path is the local directory you want to share, relative to the Vagrantfile. The second is the mount point in the VM. Since the Vagrantfile is in the root of my project, I will set the shared directory to the current directory. Set the mount point to wherever you want to serve your project from. I tend to do things the Enterprise Linux way, and will put my web projects under /var/.
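The mount point here is a hypothetical project path:

    config.vm.synced_folder ".", "/var/www/myproject"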

Now, tell Vagrant how to provision the VM using Chef.
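A sketch; the cookbook paths and recipe names are placeholders for my own layout:

    config.vm.provision :chef_solo do |chef|
      chef.cookbooks_path = "../chef/cookbooks"
      chef.roles_path     = "../chef/roles"
      chef.data_bags_path = "../chef/data_bags"
      chef.add_recipe     "role-webserver"
      chef.add_recipe     "site-myproject"
      chef.json = { "site" => { "environment" => "development" } }
    end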

This is pretty straightforward: we tell Vagrant where our Chef data is stored, which recipes to run, and pass along any attributes we want Chef to use. Let me briefly explain how I organize my Chef cookbooks. I use a fat-recipe/skinny-role approach. I have a cloud server cookbook to manage AWS and Rackspace instances, role recipes, and site recipes. A role recipe defines how the node should act: is it an Nginx server? A MySQL server? Will it run PHP-FPM? And so on. A site recipe defines how the website will be configured: it creates an Apache vhost file, sets up a PHP-FPM pool, creates an Nginx proxy to a NodeJS app. I also have data bags that correspond to different environments, so production uses a different hostname than staging, has a different MySQL configuration, and so on. When Chef runs, it detects the environment, loads the corresponding data bag, and configures the site and node.

There is one more step before we can start up our VM. We need to install the vagrant-omnibus plugin.
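    vagrant plugin install vagrant-omnibus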

The Omnibus Vagrant plugin automatically hooks into the Vagrant provisioning middleware and bootstraps the VM for us by installing Chef. It is required if you are going to provision the VM with Chef.

Ok, when that is installed you can fire up the VM.
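    vagrant up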

And there we go. You can continue working on your project locally, but serve it using a VM configured identically to your production servers. Have fun with your kick ass new dev environment!

The resulting Vagrantfile, using the same placeholder paths and names as above, should look something like:
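    # -*- mode: ruby -*-
    # vi: set ft=ruby :

    Vagrant.configure("2") do |config|
      config.vm.box = "centos-6-4"

      # Let vagrant-omnibus install Chef on the VM.
      config.omnibus.chef_version = :latest

      config.vm.network "public_network"
      config.vm.synced_folder ".", "/var/www/myproject"

      config.vm.provision :chef_solo do |chef|
        chef.cookbooks_path = "../chef/cookbooks"
        chef.roles_path     = "../chef/roles"
        chef.data_bags_path = "../chef/data_bags"
        chef.add_recipe     "role-webserver"
        chef.add_recipe     "site-myproject"
        chef.json = { "site" => { "environment" => "development" } }
      end
    end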

Troubleshooting extremely high MySQL CPU Usage

MySQL CPU usage was spiking upwards of 1000%, and the load average was around 50-60. I could not even SSH into the machine, at least not immediately.

Since I could not actually get into the machine I had it restarted. Just as soon as the machine came back up MySQL CPU usage jumped right back up to 1000%.

Once I was finally able to shut MySQL down, I had to discover _why_ the load was so ridiculously high.

The MySQL slow query log showed that a WordPress plugin was making a SHIT TON of queries, and MySQL could not keep up. The plugin in question was WP Security & Audit Log, which logs activity on a WordPress site: failed logins, successful logins, new user accounts, etc. The queries in question were all related to failed logins. Checking the Apache access logs showed a massive spike in users attempting to brute-force passwords. Thankfully none of them got through.

I gathered a list of offending IPs and blocked them. I also changed all user passwords, just in case, and disabled the plugin until a resolution can be found and an alternative put in place.
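The blocking itself was just firewall drops, something like this (placeholder address):

    sudo iptables -A INPUT -s 203.0.113.45 -j DROP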

A few things here:

1. MySQL is probably not tuned well, and should be tuned for this particular site.
2. The audit plugin makes too many unnecessary queries and I do not think it is meant to handle the amount of traffic the site was getting. I really should look into that further.
3. Our Nagios configuration does not monitor traffic spikes on websites. If it did, I could have caught this _before_ it became so bad.

Securing Git repository from accidental exposure using Chef

It was brought to my attention at the office that a few of our recently launched websites had publicly exposed .git repository information. Unscrupulous users could use the exposed data to pull down the entire commit history, giving them unfiltered access to what is basically the blueprint for the website.

What if someone accidentally uploaded a config file to the repository with sensitive information in it? Or what if the user was able to discover a major security vulnerability in the code that would have otherwise remained “safe”? Scary.

There is definitely too much risk in allowing public access to your .git directory.

I created a Chef recipe called gitsec.rb to deploy a very easy and quick fix to 20 web servers.
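Here is a sketch of the recipe; the paths assume RHEL/CentOS, and the template names match the ones below:

    # gitsec.rb -- block public access to .git directories.

    if ::File.exist?("/etc/init.d/httpd")
      service "httpd" do
        supports :reload => true
        action :nothing
      end

      # Apache picks this up globally via conf.d.
      template "/etc/httpd/conf.d/gitsec.conf" do
        source "gitsec-apache.conf"
        owner "root"
        group "root"
        mode "0644"
        notifies :reload, "service[httpd]"
      end
    end

    if ::File.exist?("/etc/init.d/nginx")
      service "nginx" do
        supports :reload => true
        action :nothing
      end

      # Nginx location blocks are only valid inside a server block,
      # so this snippet must be include'd from each vhost.
      template "/etc/nginx/gitsec.conf" do
        source "gitsec-nginx.conf"
        owner "root"
        group "root"
        mode "0644"
        notifies :reload, "service[nginx]"
      end
    end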

I was not sure of the best way to determine whether a service exists using Chef, so I just check for the appropriate init script. All of our servers are RHEL or CentOS, so I do not do any platform detection here. It would be trivial to add, so I will leave that to you!

You will also want to create a couple of templates:

gitsec-apache.conf
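    # Return 404 for any path containing a .git directory.
    RedirectMatch 404 /\.git(/|$)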

gitsec-nginx.conf
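    # Include this from inside each server block.
    location ~ /\.git(/|$) {
        return 404;
    }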

At first I was returning a 403 status code, but realized that it was still announcing that a file existed at that location. A 404 is better: it does not expose the existence of the file.

This recipe is part of my initial web server setup now.