Jenkins parallel pipeline

I was very excited to learn about the ‘parallel’ feature in Jenkins Pipeline, but there are many gotchas when making use of it (many of which are documented here: Jenkins Pipeline Example). After trying and reading a few different solutions, the following worked for me (notice in the screenshot that the browser jobs run in parallel!).

pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        echo 'hello'
      }
    }
    stage('Test') {
      steps {
        script {
            def jobs = [:]
            def browsers = ["Chrome", "Firefox"]
            for (int i = 0; i < browsers.size(); i++) {
                // assign to a local variable so each closure captures its own browser
                def browser = browsers.get(i)
                def jobName = "printing $browser"
                jobs[jobName] = doJob(browser)
            }
            parallel jobs
        }
      }
    }
    stage('Deploy') {
      steps {
        echo 'deploying'
      }
    }
  }
}

def doJob(browser) {
    return {
        node {
            echo "testing in $browser"
        }
    }
}

React Native: Application is not registered

- This is either due to a require() error during initialisation or failure to call AppRegistry.registerComponent.

I had been struggling with this for some time and the answer was right there… I had to make sure I called AppRegistry.registerComponent with the right params/app name – I had renamed my app and forgotten to update the class name and also the name passed to registerComponent.

And, one way to rename a project is to change the name in package.json and then run react-native upgrade.
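As a minimal sketch of the fix (the app and component names here are illustrative), the string passed to AppRegistry.registerComponent must match the app name the native projects were generated with:

```javascript
// index.js – 'MyRenamedApp' must match the registered app name used by
// the native side (the name here is illustrative)
import { AppRegistry } from 'react-native';
import App from './App';

AppRegistry.registerComponent('MyRenamedApp', () => App);
```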

High System CPU usage

After upgrading a webserver running Apache to Debian Jessie (from Wheezy), I noticed that the system CPU usage was higher. Running strace on one of the Apache processes gave me very little info:

strace -c -p 10112
Process 10112 attached - interrupt to quit
^CProcess 10112 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 95.71   28.549784     3172198         9         4 futex

I had previously tried spreading interrupts across cores and limiting network activity – both common steps when working out why System CPU is high – but they didn’t bring the CPU usage down.

One culprit remained: futex/mutex locks. I changed the default Mutex [1] to file and, magically, the System CPU usage went down.
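For reference, the change amounts to a one-line directive in the Apache 2.4 configuration – a sketch, assuming a Debian-style setup where APACHE_LOCK_DIR points at a directory writable by the server:

```apache
# Use file-based mutexes instead of the compiled-in default for all mutexes
Mutex file:${APACHE_LOCK_DIR} default
```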

[1] Apache 2.4 Mutex doc

HHVM notes

  • Impressive throughput improvements (>100%) with the app that I am working on.
  • phpinfo() doesn’t output what you would expect.
  • xhprof output_dir doesn’t get read from ini files, need to set that up in the constructor of XHProfRuns_Default.
  • Set hhvm.server.thread_count to a high value (>=MaxRequestWorkers), otherwise a few slow MySQL queries could bring the server to a halt; minimal doc here: HHVM server architecture (worker thread => hhvm.server.thread_count). Suggest keeping it higher while JITing is happening.
  • If using New Relic, tough luck:
    – The unofficial New Relic HHVM extension uses XHProf internally, so you cannot get any data out of your own XHProf usage.
    – The extension relies on an agent SDK that has no support for MySQL slow traces.
    – Very low MySQL time in transactions.
    – Strange traces in transactions.
  • CGI differences (apache_getenv is not available – use $_SERVER; SCRIPT_NAME will not be the same as REQUEST_URI).
  • Use realpath in imageftbox, relative paths for fonts don’t work.
  • Use Apache 2.4 as it has FastCGI support.
  • hhvm.log.header = true to have datetime in hhvm log.
  • HHVM log will also contain slow sql.
  • .hhbc was getting huge; it turned out to be due to Smarty file caching being enabled (the cached files were themselves PHP files that HHVM was compiling).
  • The .hhbc file is an SQLite 3 file that one can query (that is how I worked out the above).
  • High timeout values in memcached were leading to very high System CPU usage.
  • @ wasn’t suppressing errors (this could be New Relic related).
  • Friendly folks in the hhvm IRC channel (get link from HHVM homepage), need to be online during daytime in the US.
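Since the .hhbc cache is an SQLite file, it can be inspected with any SQLite client. A minimal sketch of listing its tables – the path is an assumption, and the schema varies between HHVM versions:

```python
import sqlite3

def list_tables(path):
    """List the tables in an SQLite file, e.g. HHVM's .hhbc bytecode cache."""
    conn = sqlite3.connect(path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        conn.close()

# Example (path is hypothetical):
# print(list_tables('/var/cache/hhvm/hhvm.hhbc'))
```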

HTTPS and GA referral exclusion

After converting a site to be served over TLS, we noticed drop-off rates increasing on some pages. The cause: users were being redirected to a payment page on another domain that was also served over TLS. Since both sites were now served over TLS, Google Analytics was picking up the referrer (an HTTPS -> HTTP navigation passes no referrer, but HTTPS -> HTTPS does) and counting the payment page/domain as a new source – and therefore counting the redirect as an exit. The fix was simple: I added the payment domain to the referral exclusion list.

Google Analytics referral exclusion

casperjs output to html

Documenting what I had to do.

Used XSLT from here:
nosetest xslt

Problems and Fixes:

  1. Firefox was inserting “transformiix” as the root element, which caused the DOCTYPE to be spat out. I fixed this by adding the following attribute to the xsl:output element:
    doctype-public="-//W3C//DTD HTML 4.0//EN"

    and removed the doctype declaration from the above XSLT.

  2. My version of casperjs was setting an empty xml namespace (xmlns="") on the root element.

    After unsuccessfully trying to match the namespace in the XSLT, I gave up and removed the namespace from the xunit XML:

    sed -i 's@ xmlns="" @ @' "output.xml"
  3. Inserted the stylesheet reference into the XML so it can be rendered in the browser:

    sed -i 's@ encoding="UTF-8"?>@ encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="nosetests.xslt" ?>@' "output.xml"
  4. Removed the timestamp info from the XSLT, as it was in the UTC timezone and could be confusing when looking at the results (attempts to convert to my timezone on the client were unsuccessful).

mysql setup for phabricator

Create a phabricator user to be used in the phabricator config.
Grant that user the CREATE privilege so it can create the phabricator databases.
Grant all permissions on the phabricator tables to that user (the daemons/upgrade scripts require different permissions – the permission list from the doc wasn’t enough, so I granted all permissions to the user).

CREATE USER 'phabricator'@'%' IDENTIFIED BY '{password}';
GRANT CREATE ON `phabricator\_%`.* TO 'phabricator'@'%';
GRANT ALL ON `phabricator\_%`.* TO 'phabricator'@'%';
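To confirm the grants took effect, a quick check:

```sql
SHOW GRANTS FOR 'phabricator'@'%';
```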

Keepalived instance not entering FAILED state

When a monitored interface goes down, the instance immediately enters the FAILED state and the other instance gets into the MASTER state.

But if you have a script block for a check – say you are monitoring HAProxy – and HAProxy goes down, the MASTER will not enter the FAILED state unless you do this:

Set the weight to a negative number (if the MASTER priority is 101 and the BACKUP priority is 100, the weight could be -2).

This way, when HAProxy goes down, the priority of the master becomes 101 - 2 = 99; the backup, with a priority of 100, wins the election and enters the MASTER state.

When HAProxy on the master comes back, its priority increases by 2 to become 101 again, and if you have nopreempt disabled, this instance will re-enter the MASTER state.
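The setup above can be sketched as a keepalived config fragment – the interface, virtual router id, and check command are illustrative, not taken from the original setup:

```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # exit 0 while an haproxy process exists
    interval 2
    weight -2                     # subtract 2 from priority while the check fails
}

vrrp_instance VI_1 {
    interface eth0
    virtual_router_id 51
    priority 101                  # the backup node would use 100
    track_script {
        chk_haproxy
    }
}
```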