Tuesday, 5 December 2017

Run ad-hoc ansible commands against your vagrant box

OK, 4 years down the line, here's my next pearl of wisdom. I'm not going to make statements about keeping this blog updated, because, as you can see, they generally fail :)

I'm working with Vagrant boxes running on a VMware vSphere cluster (perhaps some details of that in a later blog post).

I'd like to run "ansible -m setup" against the box so that I can see what facts I can grab for a role.

First I tried this:

$ ansible -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory default -m ping

..but it failed with this message:

 default | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Received disconnect from 10.1.1.1 port 22:2: Too many authentication failures\r\nAuthentication failed.\r\n",
    "unreachable": true
}


So, I attempted to debug by looking at what vagrant was doing when running "vagrant ssh":

$ export VAGRANT_LOG=debug
$ vagrant ssh

<snip LOTS and LOTS of output>

INFO ssh: Invoking SSH: ssh ["vagrant@10.1.1.1", "-p", "22", "-o", "LogLevel=FATAL", "-o", "Compression=yes", "-o", "DSAAuthentication=yes", "-o", "IdentitiesOnly=yes", "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null", "-i", "/home/jerry/myansiblecode/.vagrant/machines/default/vsphere/private_key"]

..by selectively deleting the "-o" arguments until the ssh login failed, I found that the magic incantation was "-o IdentitiesOnly=yes" (I have many, many private key files in ~/.ssh). But how to tell ansible to do this?

$ ansible -i /home/jerry/myansiblecode/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory default --ssh-extra-args="-o IdentitiesOnly=yes" -m ping

default | SUCCESS => {
    "changed": false,
    "ping": "pong"
}


\0/
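Typing --ssh-extra-args every time gets old; the same option can live in an ansible.cfg next to the Vagrantfile. A minimal sketch, assuming a reasonably recent Ansible (the inventory path is the one from above):

```ini
; ansible.cfg - hypothetical, dropped next to the Vagrantfile
[defaults]
inventory = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory

[ssh_connection]
; only offer the key given with -i, not everything in ~/.ssh.
; note: setting ssh_args replaces Ansible's default ssh options
; (ControlMaster et al), so add those back if you rely on connection sharing
ssh_args = -o IdentitiesOnly=yes
```

With that in place, a bare "ansible default -m ping" should behave like the long command line above.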

Wednesday, 2 January 2013

Adding PPAs from behind a firewall

An Ubuntu user wishes to try out some nifty software from outside the main Ubuntu repositories; however, they are annoyingly limited to only port 80 outgoing, so apt-add-repository won't work properly: it times out while waiting to fetch the signing key from Ubuntu's servers. There is a solution.

First, add the PPA's repo manually. The repository lines can be found by expanding the "Technical details about this PPA" section on the PPA's Launchpad page. Stick those lines in your /etc/apt/sources.list



Now, do a sudo apt-get update, and it will do its thing, but barf on verifying the signature, because it can't contact the keyserver.

$ sudo apt-get update
Ign http://gg.archive.ubuntu.com quantal InRelease

........
W: GPG error: http://ppa.launchpad.net quantal Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A777609328949509

Now for the good bit. Make sure http_proxy is set, and use apt-key to import the key (use the key identifier from the error output earlier):

$ export http_proxy=http://<your proxy address:port>
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A777609328949509
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.rGta6LkQ9i --trustdb-name /etc/apt//trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv-keys A777609328949509
gpg: requesting key 28949509 from hkp server keyserver.ubuntu.com
gpg: key 28949509: public key "Launchpad Gwendal Le Bihan" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

And there you have it. The PPA should now work.
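As an aside, if the proxy route isn't available either, the keyserver can usually be reached directly over port 80 by naming the hkp port explicitly. A hedged sketch using the key ID from the error above; note that gpg reports the short ID, which is just the last 8 hex digits of the long one:

```shell
# long key ID, straight from apt's NO_PUBKEY error
KEY=A777609328949509

# gpg's output refers to the short ID: the last 8 hex digits
SHORT=$(printf '%s' "$KEY" | tail -c 8)
echo "$SHORT"    # 28949509

# with only port 80 open, point apt-key at hkp on port 80 (needs root + network):
#   sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys "$KEY"
```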

Thursday, 25 October 2012

APC, memcache and varnish

I've recently set up these little puppies after some web developer friends of mine explained their usefulness in speeding up certain things like Drupal.

It's kinda not surprising when you realise that people are developing things on Drupal, which is a layer on top of PHP, itself an interpreted language (i.e. S.L.O.W.), which then has to go and talk to a MySQL database (useful, ubiquitous, relatively easy to use, but not necessarily the best designed or speediest database out there, so I've been told), which then has to go and grab the content, perhaps off a S.L.O.W. spinning hard drive, and serve it to the end user. When you consider that according to Drupal's web site, some of the biggest sites on the web are running Drupal, you can see that there may be room for some optimisation.

Enter a couple of enhancements to PHP itself, which implement and improve caching, and a reverse-proxy, which itself is another form of caching.

Bear in mind that this post is written from the POV of a sysadmin, I possibly don't have the insights into how these things actually do their job at a deeper level, but I know how to get them installed and working.

APC

APC, or the Alternative PHP Cache, is fairly self-explanatory. According to Wikipedia, it "optimises..and caches data and compiled bytecode" - that is, it takes stuff that has been interpreted and sticks it in memory, where it's probably several orders of magnitude faster to access.

Getting this into PHP is as easy as finding the relevant package for your distribution's package manager of choice. I'm going with Ubuntu here, as that will possibly find a wider audience, but I did it first on my Gentoo box at home, and there wasn't really much difference.
$ sudo apt-get install php-apc 
..................... 
Setting up php-apc (3.1.7-1) ...

Easy! And in Ubuntu, installing this package will insert the necessary line into the relevant PHP config files:

apache2/conf.d/apc.ini:extension=apc.so
cli/conf.d/apc.ini:extension=apc.so
conf.d/apc.ini:extension=apc.so
To check that this is enabled, we make a small webpage available which uses the phpinfo function, like this:

<?php
phpinfo();
?>
We should now be able to see that APC is enabled by going to http://<your webserver>/test.php
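phpinfo in a browser is one way to look; from a shell, "php -m" lists loaded modules, so a quick grep does the same job. A sketch - the live command is commented out since it assumes the PHP CLI is installed, and the sample module list below it is made up:

```shell
# live check (assumes the php CLI package is installed):
#   php -m | grep -i apc

# the same filter against a hypothetical module list
MODULES='apc
memcached
mysql'
printf '%s\n' "$MODULES" | grep -qx apc && echo "apc enabled"
```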

Memcache

OK, memcache next. Cribbing from Wikipedia again, it seems that memcache is a more general purpose application, which many other apps can use to stuff data into memory, so it's faster to access.

This is installed in much the same way as apc:

$ sudo apt-get install memcached
...........................
Starting memcached: memcached_1

$ sudo apt-get install php5-memcached
............................
Creating config file /etc/php5/conf.d/memcached.ini with new version

and the latter package sticks similar lines in PHP's configs:

apache2/conf.d/memcached.ini:extension=memcached.so
cli/conf.d/memcached.ini:extension=memcached.so
conf.d/memcached.ini:extension=memcached.so
phpinfo shows memcache is enabled in PHP:


A word on security


It seems that on older machines, memcache may be started in an insecure manner, allowing anyone on the internet to connect to it, if firewall rules allow. The quick answer to this is to only let memcache listen on its local interface. You can check this by doing the following:

$ ps aux |grep memcache
memcache 24752  0.0  0.0 316888  1108 ?        Sl   12:18   0:00 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1

You're looking for the bit that says "-l 127.0.0.1". If that isn't there, find the config file it should be in (according to which distro you're running), put it in, and restart memcache. On Ubuntu, this file is /etc/memcached.conf, and it's there as of 12.04. On Redhat/Fedora/CentOS, it appears to be /etc/sysconfig/memcached; on Gentoo, it's /etc/conf.d/memcached, etc.

This will ensure that memcache only communicates with things that are running on your server, and no digital Tom, Dick or Harry can come along and connect to your particular memcache instance. Phew, you're a little bit safer from the denizens of the internet for the time being.
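That eyeball check can be scripted, too. A small sketch that pattern-matches the memcached command line; the sample line is copied from the ps output above, and the live version would feed in the output of "ps aux | grep '[m]emcached'" instead:

```shell
# sample memcached command line (from the ps output above)
LINE='/usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1'

case "$LINE" in
    *'-l 127.0.0.1'*) echo "memcached is loopback-only" ;;
    *)                echo "WARNING: memcached may be listening on all interfaces" ;;
esac
```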

Varnish

Varnish is slightly different in that it isn't linked to the LAMP stack as such, but sits in front of the webserver, basically caching stuff that goes out to the internet, making said stuff much quicker to get to than, for instance:

Web client (browser or app) -> Apache -> PHP -> MySQL -> PHP -> Apache -> Web client

Instead, with Varnish installed, you're potentially looking at:

Web client -> Varnish (cached data) - > Web client

Assuming the content has been accessed once, and of course assuming Varnish has been set up correctly, it's easy to see how this could potentially be much faster, especially considering that once a request hits the PHP and MySQL parts of the process, it could be slowed down even further if APC and memcache are not installed.

So, installation is essentially the same as we've seen above - via the package manager:

$ sudo apt-get install varnish
.............................
 * Starting HTTP accelerator varnishd                                           [ OK ]

To configure varnish as it would be used in the wild we're going to have it listening on TCP port 80, the standard destination for HTTP requests. Of course, this being the case, we'll need to tell the webserver to listen on a port other than 80, in this example, we'll use port 8080. We'll then tell varnish where the webserver is. The webserver I'm using in this example is apache. The distro is Ubuntu, YCFMV (your config files may vary)

In /etc/apache2/ports.conf, change:

NameVirtualHost *:80
Listen 80

to

NameVirtualHost *:8080
Listen 8080


in /etc/apache2/sites-enabled/000-default, change:

<VirtualHost *:80>

to

<VirtualHost *:8080>


Then, restart your webserver.

Now, we point varnish to the webserver (the "backend" in varnish parlance). We're using localhost because varnish is running on the same machine as the webserver.

Editing /etc/varnish/default.vcl:

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

To get varnish to listen on port 80, edit /etc/default/varnish:

DAEMON_OPTS="-a :6081 \

becomes

DAEMON_OPTS="-a :80 \

And then restart varnish via /etc/init.d/varnish restart.

You can check this is working by running varnishlog from the command line, and accessing some pages on your server - you should see some output scrolling up your console window.
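Another way to confirm traffic is really going through Varnish: it stamps responses with Via and X-Varnish headers, so inspecting the headers with curl works too. A sketch - the live command is commented out, and the sample headers below it are hypothetical:

```shell
# live check against your own server:
#   curl -sI http://localhost/ | grep -iE 'via|x-varnish|age'

# the same filter against hypothetical response headers
HDRS='HTTP/1.1 200 OK
Via: 1.1 varnish
X-Varnish: 131073 131070
Age: 13'
printf '%s\n' "$HDRS" | grep -qi '^via: .*varnish' && echo "served via varnish"
```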

There is some optimisation to be done on the configuration of varnish - but this can vary according to the needs of your site. I'll cover this in a subsequent post.

Enjoy!



Friday, 31 August 2012

Gmail and postfix

If I don't get all this stuff down, there's a danger I'll forget it, so this is the first of, I hope, many posts on how to do things, what to type, etc. Hopefully, someone will google it and get some use out of it. If you just have, Hiya!

I recently fancied sending mails from my Linux box at home. I decided to use postfix, as I've had some dealings with sendmail at work and while it wasn't too bad, throughout my whole working life, everyone has always moaned about how hard it is to use.

I used someone else's tutorial, of course - it's here. I wonder who was the very first person to work all this stuff out?

So, first set up your postfix. Add the following lines to /etc/postfix/main.cf:

smtp_use_tls = yes
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_tls_cert_file = /etc/postfix/FOO-cert.pem
smtp_tls_key_file = /etc/postfix/FOO-key.pem
smtp_tls_session_cache_database = btree:/var/run/smtp_tls_session_cache

smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtpd_sasl_local_domain = $myhostname
smtp_sasl_security_options = noanonymous

Most of that stuff is to allow postfix to authenticate with Gmail using SSL, I think. It's all very well supplying your password, but Gmail's SMTP setup requires more than that to believe you are who you say you are, and not some evil spambot.

This means setting up SSL. Forgive me if it sounds like I don't know what I'm talking about here: I don't really - still a bit iffy on SSL. I've dealt with SSH for years, however, so I have a good grasp of what's going on with public-key encryption, I've just never had to deal with SSL before. What this does is to generate a public/private key pair. You encrypt stuff using the public key, and the holder of the private key can decrypt it, but it is nigh on impossible to reverse engineer the private key from the public one, so your data is theoretically very safe.

Here's how it's done, using a combination of other people's tutorials (this is a good one), and what I managed to pull out of my bash history.

This bit sets up the certificate:
# /etc/ssl/misc/CA.pl -newca
Making CA certificate ...
Generating a 1024 bit RSA private key
..........++++++
.........++++++
writing new private key to './demoCA/private/./cakey.pem'
Enter PEM pass phrase: <type a password here>
Verifying - Enter PEM pass phrase: <retype the password>
-----
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank 
For some fields there will be a default value,
If you enter '.', the field will be left blank.
----- 
Country Name (2 letter code) [AU]: <enter> 
State or Province Name (full name) [Some-State]: <enter> 
Locality Name (eg, city) []: <enter> 
Organization Name (eg, company) [Internet Widgits Pty Ltd]: <enter> 
Organizational Unit Name (eg, section) []: <enter> 
Common Name (eg, YOUR name) []: <your name> 
Email Address []: <your email> 
Please enter the following 'extra' attributes to be sent with your certificate request
A challenge password []: <enter>
An optional company name []: <enter>

Using configuration from /usr/lib/ssl/openssl.cnf 
Enter pass phrase for ./demoCA/private/./cakey.pem: <same password as before> 
Check that the request matches the signature  
Signature ok
 
This bit sets up the SSL key pair:

Fill in your own values for this bit; I don't think they're actually used for anything:

# openssl req -new -nodes -subj '/CN=<some value>/O=<org>/C=<country>/ST=<state>/L=<location>/emailAddress=<email address>' -keyout FOO-key.pem -out FOO-req.pem -days 3650
Then:

# openssl ca -out FOO-cert.pem -infiles FOO-req.pem
# cp demoCA/cacert.pem FOO-key.pem FOO-cert.pem /etc/postfix
# chmod 644 /etc/postfix/FOO-cert.pem /etc/postfix/cacert.pem
# chmod 400 /etc/postfix/FOO-key.pem

..and that's that. Now you have to tell postfix how to log in. This is done by pointing it to a hashed password map, which saves you having to put the actual password in the main postfix config file. The relevant line in /etc/postfix/main.cf is:

smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

So, populate this file as follows:


[smtp.gmail.com]:587 <email_address@gmail.com>:<password>

But this won't work until the file is turned into a hashed map, as I found out the other day - that's what the "hash:" bit in the config means. I loves me some hash. Corned beef hash, that is.

# postmap /etc/postfix/sasl_passwd
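Two follow-ups that aren't in my bash history but are, as far as I know, standard practice: lock the credentials file down, and use postmap -q to confirm the lookup returns the user:password part of the entry. The live commands are commented out (they need root and a working postfix install), and the sample entry is made up:

```shell
# live versions (need root):
#   sudo chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
#   sudo postmap -q "[smtp.gmail.com]:587" hash:/etc/postfix/sasl_passwd

# what the lookup should return: everything after the first space
ENTRY='[smtp.gmail.com]:587 someone@gmail.com:s3cret'
printf '%s\n' "$ENTRY" | cut -d' ' -f2
# -> someone@gmail.com:s3cret
```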

And that's it. Enjoy, hope this helps someone somewhere.


Stumbling, blinking, into the sunlight

Hmmm, four and a half years. Still, at least the web has been spared my gibberish all this time.

For my next trick, I'll publish a quick documentation post, so that I don't forget what I did to get something working.

TTFN

Thursday, 14 February 2008

Hello again

Well, after letting this thing languish for almost a year, I've been inspired to start spouting rubbish again.

Look out for ensuing gibberish over the coming weeks, months, years.

Be seeing you...

Wednesday, 21 March 2007

amule & amuled

Well, work's not too hectic at the moment, so I thought I'd get a few posts knocked off while I had a chance.

The trusty P-III upstairs has recently taken on ed2k duties at the Steele household. I'll get round to writing about this at some point, but it used to be Win2k server running emule, and now it's FC5 and it's running...aMule. Emule itself is open source, and probably the most-used successor of the original eDonkey 2000 client. I've been running it for years on low-end hardware with Win2k and it's hardly ever done me wrong. Actually I've been using Emule Plus, but there's very little difference, AFAIK...

This software seems pretty neat. You have to install wxWidgets (whatever they may be) to get it to work, but this didn't present a challenge to me on FC5.

Now, I used to run Emule 24x7 on the Win2k box in a Terminal Services session, and this worked pretty well. I used to need to log in to add ed2k links and, after my failed attempts at getting bandwidth shaping to work, throttle back the bandwidth when my wife wanted to web browse (chuckle). After a couple of weeks of using Amule in FreeNX, I just wasn't getting the same experience. Suspending the session more often than not seemed to cause amule to just stop downloading (though this could have been my imagination), and it was pretty unresponsive (probably more to do with the old hardware it's running on than anything else).

Enter amuled. This is, as the name suggests, a lovely little bit of software from the developer of amule which is simply amule without a GUI. There's a few tricks to getting it working though, hence this JE...

Assuming you've got amuled running, and have tried amulecmd (the CLI tool for amuled), amuleGUI and/or amuleweb without success:

Make sure the following is present in your ~/.aMule/amule.conf and your ~/.aMule/remote.conf for good measure:

ECUseTCPPort=1

You may have to create this line, or change its value from 0 to 1, as apparently Unix sockets are not working in this release (2.1.3). So use good ol' TCP/IP...

Next, enter the md5sum of the password. There's a way to determine this on the command line, but I don't have it to hand at the moment. Enter this as:

Password=b43555325d34534340534e435

That's not my actual password hash, BTW :)
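For the record, the command-line trick is, as far as I can tell, just md5sum over the password with no trailing newline (hypothetical password shown); paste the result into the Password= line:

```shell
# printf avoids the trailing newline that echo would add to the hash input
printf '%s' 'secret' | md5sum | cut -d' ' -f1
# -> 5ebe2294ecd0e0f08eab7690d2a6ee69
```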

With these two bits in place, it should be possible to connect to amuled via amulecmd, amulegui and amuleweb. I found this info tucked away on a forum somewhere, so hopefully this will help it reach a wider audience.

Goodnight and Good luck :)