• How to create and configure EC2 instances for Rails hosting with CentOS using Ansible

    Introduction

    In this fairly extensive post I will walk you through the process of creating, from scratch, an EC2 box ready for deploying your Rails app using Ansible. Along the way I will show how to write a simple Ansible module which, while not strictly necessary, will illustrate a few points as well.

    Read on →

  • Check progress of a mysql database import

    If you’ve ever had to do a huge mysql import, you’ll probably understand the pain of not being able to know how long it will take to complete.

    At work we use the backup gem to store daily snapshots of our databases, the main one being several gigabytes in size. This gem basically does a mysqldump with configurable options and takes care of maintaining a number of old snapshots, compressing the data and sending notifications on completion and failure of backup jobs.

    When the time comes to restore one of those backups, you are basically in the situation in which you simply have to run a mysql command with the exported sql file as input, which can take ages to complete depending on the size of the file and the speed of the system.

    The command used to import the database snapshot from the backup gem may look like this:

    tar -x -v -O -f database_snapshot.tar path_to_the_database_file_inside_the_tar_file.sql.gz | zcat | mysql -u mysql_user -h mysql_host -ppassword database_name

    What this command does is extract the gzipped SQL file from the tar archive and send it as input to a mysql command against the database you want to restore (piping it through zcat first to gunzip it).

    And then the waiting game begins.

    There is a way, though, to get an estimate of the amount of work already done, which can be a big help for the impatient like myself. You only need to make use of the good old proc filesystem on Linux.

    The first thing you need to do is find the tar process you just started:

    ps ax | grep "database_snapshot\.tar" | grep -v grep

    This command assumes that no other process has that string in its command line.

    We are really interested in the pid of the process, which we can extract with a few more unix commands and pipes appended to the previous one:

    ps ax | grep "database_snapshot\.tar" | grep -v grep | tail -n1 | cut -d" " -f 1

    This basically takes the last line of the process list output (with tail), splits it into fields using the space as a delimiter and keeps the first one (with cut). Note that depending on your OS and the format of the ps output you may have to tweak this.
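    On most Linux systems the same pid can be obtained more robustly with pgrep from the procps package (assuming it is installed), since -f matches against the full command line and -n picks the newest match, avoiding the grep/tail/cut pipeline entirely. A minimal sketch, using a dummy long-running process in place of the real tar pipeline:

```shell
# Start a long-running dummy process to stand in for the real tar pipeline.
tail -f /dev/null &
dummy_pid=$!

# pgrep -f matches the full command line; -n returns only the newest match,
# so the pgrep process itself and older unrelated processes are excluded.
pid=$(pgrep -n -f "tail -f /dev/null")
echo "$pid"

kill "$dummy_pid"
```

    In the real scenario you would replace the pattern with "database_snapshot\.tar".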

    After we have the pid of the tar process, we can see what it is doing through the proc filesystem. The information we are interested in is the file descriptors it has open, which are listed in the folder /proc/<pid>/fd. If we list the files in that folder, we will get an output similar to this one:

    [rails@ip-10-51-43-240 ~]$ sudo ls -l /proc/7719/fd
    total 0
    lrwx------ 1 rails rails 64 Jan 22 15:38 0 -> /dev/pts/1
    l-wx------ 1 rails rails 64 Jan 22 15:38 1 -> pipe:[55359574]
    lrwx------ 1 rails rails 64 Jan 22 15:36 2 -> /dev/pts/1
    lr-x------ 1 rails rails 64 Jan 22 15:38 3 -> /path/to/database_snapshot.tar

    The important one for our purposes is the number 3 in this case, which is the file descriptor for the file tar is unpacking.

    We can get this number using a similar strategy:

    ls -la /proc/7719/fd/ | grep "database_snapshot\.tar" | cut -d" " -f 9
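    A less fragile way to find that descriptor, avoiding the guessing of column positions in the ls output, is to resolve each symlink in /proc/<pid>/fd with readlink. Here is a self-contained sketch that opens a file on fd 9 of the current shell (pid $$) and then locates it the same way you would for the tar process:

```shell
# Demonstration: open a file on fd 9 of the current shell, then locate that
# descriptor by scanning /proc/$$/fd with readlink, exactly as you would do
# for the tar process pid.
tmpfile=$(mktemp)
exec 9<"$tmpfile"

found=""
for fd in /proc/$$/fd/*; do
  case "$(readlink "$fd")" in
    *"$tmpfile") found="${fd##*/}" ;;   # keep just the fd number
  esac
done
echo "$found"

exec 9<&-
rm -f "$tmpfile"
```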

    With that number, we can now check the file /proc/<pid>/fdinfo/<fd>, which will contain something like this:

    [rails@ip-10-51-43-240 ~]$ cat /proc/7719/fdinfo/3
    pos:    4692643840
    flags:  0100000

    The useful part of this output is the pos field, which tells us the position within the file the process is currently at. Since tar processes the file sequentially, knowing this position means we know what percentage of the file tar has processed so far.

    Now the only thing we need to do is check the size of the original tar file and divide the two numbers to get the percentage done.

    To get the pos field we can use some more unix commands:

    cat /proc/7719/fdinfo/3 | head -n1 | cut -f 2

    To get the original file size, we can use the stat command:

    stat -c %s /path/to/database_snapshot.tar

    Finally we can use bc to get the percentage by just dividing both values:

    echo "`cat /proc/7719/fdinfo/3 | head -n1 | cut -f 2`/`stat -c %s /path/to/database_snapshot.tar` * 100" | bc -l

    To put it all together in a nice script, you can use this one as a template:

    file_path="<full path to your tar db snapshot>"
    file_size=`stat -c %s "$file_path"`
    file="<filename of your db snapshot>"
    pid=`ps ax | grep "$file" | grep -v grep | tail -n1 | cut -d" " -f 1`
    fdid=`ls -la /proc/$pid/fd/ | grep "$file" | cut -d" " -f 9`
    pos=`cat /proc/$pid/fdinfo/$fdid | head -n1 | cut -f 2`
    echo "$pos / $file_size * 100" | bc -l

    I developed this article and script following the tips in this Stack Overflow answer: http://stackoverflow.com/questions/5748565/how-to-see-progress-of-csv-upload-in-mysql/14851765#14851765

  • ARM assembler in Raspberry Pi – Chapter 17

    In chapter 10 we saw the basics of calling a function. In this chapter we will cover more topics related to functions.

    Read on →

  • Create a temporary zip file to send as response in Rails

    We have been doing a painful migration from Rails 2 to Rails 3 for several months at work, and while refactoring some code the other day I had to do something in a not-so-straightforward way, so I thought I'd share it.

    Basically, we had an action that would group several files into a zip file and return them to the user as a response. In the old code, a randomly named file was created in the /tmp folder of the hosting machine, used as the zip file by the rubyzip gem, and then returned in the controller response as an attachment.

    During the migration we’ve been replacing all that bespoke temp file generation with proper Tempfile objects. This was just another one of those replacements. But it turned out not to be that simple.

    My initial thought was that something like this would do the trick:

    filename = 'attachment.zip'
    temp_file = Tempfile.new(filename)
    
    Zip::File.open(temp_file.path, Zip::File::CREATE) do |zip_file|
        #put files in here
    end
    zip_data = File.read(temp_file.path)
    send_data(zip_data, :type => 'application/zip', :filename => filename)

    But it did not. The reason is that the open method, when used with the Zip::File::CREATE flag, expects the file either not to exist or to already be a zip file (that is, to have the correct zip structure in it). Neither case applies to a freshly created Tempfile, which exists but is empty, so the method didn't work.

    So, as a solution, you have to open the temporary file with the Zip::OutputStream class first and initialize it so it becomes an empty zip file; after that you can open it the usual way. Here's a full yet simple example of how to achieve this:

    #Attachment name
    filename = 'basket_images-'+params[:delivery_date].gsub(/[^0-9]/,'')+'.zip'
    temp_file = Tempfile.new(filename)
    
    begin
      #This is the tricky part
      #Initialize the temp file as a zip file
      Zip::OutputStream.open(temp_file) { |zos| }
    
      #Add files to the zip file as usual
      Zip::File.open(temp_file.path, Zip::File::CREATE) do |zip|
        #Put files in here
      end
    
      #Read the binary data from the file
      zip_data = File.read(temp_file.path)
    
      #Send the data to the browser as an attachment
      #We do not send the file directly because it will
      #get deleted before rails actually starts sending it
      send_data(zip_data, :type => 'application/zip', :filename => filename)
    ensure
      #Close and delete the temp file
      temp_file.close
      temp_file.unlink
    end
  • ARM assembler in Raspberry Pi – Chapter 16

    We saw in chapters 6 and 12 several control structures, but we left out a common one: the switch, also known as select/case. In this chapter we will see how we can implement it in ARM assembler.

    Read on →