Today I Learned

Replacing num.times.map with Array.new(num)

Problem

I want to create an array of objects, each built from an incrementing integer index. For example, an array of hashes containing strings built from the incrementing number.

Standard Solution

my_array = num.times.map do |index|
  build_hash(index)
end

BUT...rubocop didn't like this:

... C: Performance/TimesMap: Use Array.new with a block instead of .times.map

Improved Solution

The improved solution allows us to build the same array with fewer chained methods, thus improving readability:

my_array = Array.new(num) do |index|
  build_hash(index)
end
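
To make this concrete, here is a runnable sketch; `build_hash` below is a hypothetical stand-in for whatever object you construct from the index:

```ruby
# Hypothetical stand-in for whatever object you build per index
def build_hash(index)
  { name: "item-#{index}" }
end

# Array.new yields each index (0, 1, 2, ...) to the block
my_array = Array.new(3) { |index| build_hash(index) }
# => [{ name: "item-0" }, { name: "item-1" }, { name: "item-2" }]
```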

pgcli — a great alternative to psql

Ever wished for a feature-rich, intuitive command-line PostgreSQL client? Look no further! Presenting pgcli, a result of pouring creativity into features rather than the name of the tool.

Supports:

  • Smart autocompletion for almost anything. Even column names in the select query are properly autocompleted from the table you are selecting from;
  • Multi-line query editing support;
  • SQL Syntax highlighting;
  • CLI parameters compatible with psql params. No need to relearn anything;
  • Vi mode, allowing you to edit multi-line queries using some of the vi bindings. There are Emacs bindings too;
  • Installation as easy as pip install pgcli (and if that fails, fix is as easy as xcode-select --install (usually));

Grab your copy today!

Speeding Up Rake Command Completion

The speed of rake command completion is determined by the startup time of your Rails app.

If your Rails app is slow to start up, the oh-my-zsh rake-fast plugin will make rake command completion tolerable again.

Edit your zsh configuration file:

plugins=(... rake-fast ...)

After refreshing your shell instance issue the following command:

$ rake_refresh

Take a deep breath. Enjoy fast rake command completion.

Record File Handle Usage in OSX

lsof is a helpful tool for looking at what files a process currently has open; however, sometimes a process may only access a file for a second, and lsof may miss the moment.

For OSX we also have Instruments. This is included with Xcode and is pretty straightforward to use:

  • Open Instruments
  • Select File Activity
  • Select the process
  • Hit Record
  • Perform your action
  • Stop Recording

You can also save the log for later analysis.

webpack-merge is a thing and it is beautiful

Let's say you want to redefine a loader from your base webpack config in your production webpack config. Try webpack-merge.smart!

webpack.base.config.js

const config = {
  entry: "./index.js",
  module: {
    rules: [
      {
        test: /\.css?$/,
        exclude: /node_modules/,
        use: [{
          loader: "css-loader",
          options: { sourceMap: true }, // set to true
        }],
      },
      {
        test: /\.css?$/,
        include: /node_modules/,
        use: ["css-loader"],
      },
    ],
  },
};

webpack.production.config.js

const merge = require("webpack-merge");
const baseConfig = require("./webpack.base.config");

const productionConfig = {
  module: {
    rules: [{
      test: /\.css?$/,
      exclude: /node_modules/,
      use: [{
        loader: "css-loader",
        options: { sourceMap: false }, // override to false
      }],
    }],
  },
};

module.exports = merge.smart(baseConfig, productionConfig);

Result

const config = {
  entry: "./index.js",
  module: {
    rules: [
      {
        test: /\.css?$/,
        exclude: /node_modules/,
        use: [{
          loader: "css-loader",
          options: { sourceMap: false }, // yep! it's false
        }],
      },
      {
        test: /\.css?$/, // but we didn't touch this rule
        include: /node_modules/,
        use: ["css-loader"],
      },
    ], // and we didn't append anything either!
  },
};

webpack-merge.smart is aware of the shape of a webpack configuration and allows you to update only the rule you want.

Check it out: https://github.com/survivejs/webpack-merge#smart-merging

Mocha: Fail on any console error message

PROBLEM

I want to write a Mocha (JS) test that fails if any warning is printed to the console. My use case: I want to test whether the correct prop type is passed to a React component. PropTypes exists for this purpose, but it only prints a message to the console instead of failing the test.

SOLUTION

Use the following code, either at the beginning of your test file or in a global helper such as specHelper.js, to stub out console.error (here with Sinon's stub) so that it throws an Error and thus fails the test.

before(() => stub(console, "error", (warning) => {
  throw new Error(warning);
}));

after(() => console.error.restore());

Reference: this gist.

Capybara will skip invisible elements by default

While working on an acceptance test for a date range filter in GO, we were having an issue where Capybara couldn't find an element on the page, even though we could verify it was there. Eventually we realized that the element had an opacity of 0, and that Capybara was passing it over. To illustrate, imagine you have an element with the id #myElement.

CSS:

#myElement { opacity: 0; }

And in your Rails spec:

page.find("#myElement")

The spec will fail, because #myElement can't be found.

Fortunately, there is a visible option that can be set to false so that Capybara doesn't skip the element. So now, changing the line in the spec to:

page.find("#myElement", visible: false)

will cause it to pass.

Encrypt data using psql + keybase

To export any query to a CSV and send it to stdout one can use:

psql -c "\copy (select version()) to stdout csv header"

So you can replace select version() with any query in the above command and the results will be dumped to your terminal. If you have any sensitive data that is not already encrypted, you can pipe these results directly to keybase, as in:

psql -c "\copy (select version()) to stdout csv header" | keybase encrypt diogob

Where diogob is the recipient of your message (or your own username in case you want to store this file for future use).

SQL's WITH RECURSIVE Query

While optimizing queries against a self-referential table, we found a neat SQL solution. It uses a common table expression as a working table to query against iteratively.

Here's an example of using WITH RECURSIVE with a modified nested set example of clothing categories that finds all paths through the categories:

CREATE TEMPORARY TABLE categories (id INT, name text, parent_category_id INT);

INSERT INTO categories VALUES
  (1, 'Clothing', null),
  (2, 'Men''s', 1),
  (3, 'Women''s', 1),
  (4, 'Suits', 2),
  (5, 'Dresses', 3),
  (6, 'Skirts', 3),
  (7, 'Jackets', 4),
  (8, 'Evening Gowns', 5);

WITH RECURSIVE category_hierarchies AS
(SELECT id, parent_category_id, name AS full_path
 FROM categories
 WHERE parent_category_id is NULL

 UNION ALL

 SELECT child_categories.id,
        child_categories.parent_category_id,
        parent_categories.full_path || ' -> ' || child_categories.name as full_path
 FROM categories AS child_categories
 INNER JOIN category_hierarchies AS parent_categories
   ON child_categories.parent_category_id = parent_categories.id
)
SELECT full_path FROM category_hierarchies ORDER BY full_path;

Produces paths through all categories:

  • Clothing
  • Clothing -> Men's
  • Clothing -> Men's -> Suits
  • Clothing -> Men's -> Suits -> Jackets
  • Clothing -> Women's
  • Clothing -> Women's -> Dresses
  • Clothing -> Women's -> Dresses -> Evening Gowns
  • Clothing -> Women's -> Skirts

Read more about WITH RECURSIVE queries

Run last command in BASH/ZSH (they're different)

To run the last executed command in BASH, execute the following:

!!

In ZSH, things are a little different. Using !! will only expand the command into the shell prompt. You would have to press enter again to execute it. Rather, if you want to immediately execute the last command similar to BASH, use this:

r

If you prefer for ZSH behaviour to match that of BASH, then add setopt no_hist_verify to your .zshrc file.

Serializing many value objects to database columns

While reading the IDDD book on serialization of value objects there is this description of an approach called ORM and Many Values Serialized into a Single Column. It's good to note that some of the main objections to this approach are technology related and barely applicable in a world of Rails' ActiveRecord + PostgreSQL.

The objections presented by the book are:

  • Column width: It mentions that serializing to varchar fields will meet some limitations imposed by Oracle and MySQL implementations. In PostgreSQL, besides having composite types (e.g. json or array), the limit on any column is much higher (1GB).
  • Must query: The book states that this approach cannot be used if the values must be queried. This is another limitation imposed by the underlying technology. Using PostgreSQL one can easily query composite values and even create indexes over them.
  • Requires custom user type: This is not related to the database technology but is heavily biased towards hibernate. In Rails' ActiveRecord the custom serializers require very little boilerplate and it offers out of the box support for json, array and range types.

Am I executing the correct executable?

Q: When I run a command (let's say rails), which executable is it executing?

A:

> which rails
/Users/username/.rvm/gems/ruby-2.1.6/bin/rails

Q: Ah, I see which one it's running. And it's not the right one! Where are all the potential executables, given the current PATH?

A:

> where rails
/Users/username/.rvm/gems/ruby-2.1.6/bin/rails
/usr/bin/rails

Now I know whether it's a PATH ordering issue, or whether it's not included in PATH at all.

Turn AWS tags into a useful data structure with jq

The JSON responses from the AWS API contain tags in a data structure like this:

"Tags": [
    {
        "Value": "consul-test-jf",
        "Key": "Name"
    },
    {
        "Value": "test-jf",
        "Key": "consul-group"
    },
    {
        "Value": "server",
        "Key": "consul-role"
    }
]

This structure is awkward to query with jq, but you can map it into a normal object like this:

jq '<path to Tags> | map({"key": .Key, "value": .Value}) | from_entries'

Which returns an object that looks like this:

{
  "consul-role": "server",
  "consul-group": "test-jf",
  "Name": "consul-test-jf"
}
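
For comparison, the same reshaping is easy to sketch in Ruby once the JSON is in hand (handy when the AWS response is already being processed in a script):

```ruby
require "json"

# The same Tags structure returned by the AWS API
tags = JSON.parse(<<~TAGS)
  [{"Value": "consul-test-jf", "Key": "Name"},
   {"Value": "test-jf", "Key": "consul-group"},
   {"Value": "server", "Key": "consul-role"}]
TAGS

# Turn the [{"Key" => ..., "Value" => ...}] pairs into a normal hash
tags_by_key = tags.map { |tag| [tag["Key"], tag["Value"]] }.to_h
# => {"Name"=>"consul-test-jf", "consul-group"=>"test-jf", "consul-role"=>"server"}
```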

Comparing Version Strings in Ruby

While writing a Ruby script, I needed to check the version of a binary dependency. The --version switch gets me the data, but how do I compare it to the required version?

The binary follows semver, so a quick and dirty attempt might be:

"1.4.2".gsub(".", "") >= "1.3.1".gsub(".", "")
# => true

Unfortunately, this is misleading: we are lexicographically comparing the strings and these strings happen to have the same length. Thus, "142" comes after "131".

Testing that version "1.200.0" is newer than "1.9.0" will fail as "120" comes before "190".

It would be straightforward to write a small class to parse the string and compare the major, minor, and patch values. But Ruby has a quick solution provided by RubyGems. Since Ruby 1.9, RubyGems has been included in Ruby's standard library:

Gem::Version.new("1.200.1") >= Gem::Version.new("1.3.1")
# => true

Gem also provides a way to handle pessimistic constraints:

dependency = Gem::Dependency.new("", "~> 1.3.1")
dependency.match?("", "1.3.9")
# => true
dependency.match?("", "1.4.1")
# => false
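
Gem::Version also fixes the "1.200.0" vs "1.9.0" trap above, which makes it handy for sorting version lists:

```ruby
# Gem::Version compares version segments numerically, not lexicographically
versions = ["1.9.0", "1.200.0", "1.10.3"]
sorted = versions.sort_by { |version| Gem::Version.new(version) }
# => ["1.9.0", "1.10.3", "1.200.0"]
```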

Configuring a Rails app to redirect http to https

Problem

I have a Rails app on Heroku that is serving up a site on http and https. Google oAuth's callback URL is for https, so attempting to log into the site from the http URL fails.

Solution

The intention was to serve up the site just from the https url, so the solution is to configure Rails to redirect all http traffic to https.

In config/environments/production.rb:

  config.force_ssl = true

Resource: http://stackoverflow.com/questions/27377386/force-ssl-for-heroku-apps-running-in-eu-region

Decorator Pattern in Ruby with SimpleDelegator

The Decorator Pattern allows us to chain new behaviours to objects without modifying the underlying objects. It is an application of the Open/Closed Principle. This pattern is useful for example when we need to tack on logging, monitoring, and other non-functional requirements to objects.

In Java or C# this can be achieved using interfaces. In Ruby, we can use the SimpleDelegator class to achieve this:

require "delegate"

class FooDecorator < SimpleDelegator
  def bar
    "This is a decorated #{__getobj__.bar}"
  end
end

class Foo
  def bar
    "bar"
  end

  def fiz
    "Fiz"
  end
end

decorated = FooDecorator.new(Foo.new)
puts decorated.bar # outputs "This is a decorated bar"
puts decorated.fiz # outputs "Fiz"

double_decorated = FooDecorator.new(FooDecorator.new(Foo.new))
puts double_decorated.bar # outputs "This is a decorated This is a decorated bar"


Non-Invasive Monitoring of Socket Traffic

Problem

I would like to diagnose failures to communicate with an external service over a network socket, without making modifications to the code or otherwise disturbing a production-like environment.

Solution

One writes to or reads from a socket by making a request to the kernel (a.k.a. a syscall). This requires the file descriptor (numerical identifier) of the socket and either the message to be sent over the socket or a buffer that will receive the next message read from the socket.

Using strace (or dtruss on MacOS), one can inspect the stream of syscalls issued to the kernel and the arguments for each syscall. First, find the ID of the process that will be communicating over the socket:

ryan@staging ~ $ ps ax | grep unicorn
99999 ?        Sl     0:00 unicorn worker[0]

Then attach to the process with strace:

ryan@staging ~ $ strace -p 99999
Process 99999 attached
[pid 99999] write(11, "Hello", 6) = 6
[pid 99999] read(11, 0xBAAAAAAD, 64) = -1 EAGAIN (Resource temporarily unavailable)

Here, a Hello message was sent with a write syscall over the socket with file descriptor 11, while the read syscall returned EAGAIN because no data was available to read yet.

Attach to Local Ruby Process with Debugger

RubyMine has a nice feature that allows you to debug a Rails app without restarting the server.

With the server running,

1) Run the Attach to Local Process... action from RubyMine

2) RubyMine will show a list of Ruby processes running. Pick the one running your server

3) Wait for RubyMine to connect to the process

4) Add a break point in RubyMine

5) Execute the action on the web application that hits that breakpoint

6) Execution will stop on that line. Now you can use all the nice tools the RubyMine debugger gives you.

I'm really excited about this new feature and I hope you are too. You can read more about it here


Prefer sort_by to sort when providing a block

Prefer the sort_by method over the sort method whenever you provide a block to define the comparison.

Common form:

line_adds.sort { |x, y| x.elements["ItemRef/ListID"].text <=> 
  y.elements["ItemRef/ListID"].text }

Preferred form:

line_adds.sort_by { |x| x.elements["ItemRef/ListID"].text }

For small collections both techniques have similar performance profiles. When the sort key is something simple like an integer there is no performance benefit from sort_by.

The performance difference is especially noticeable if the sort key is expensive to compute and/or you have a large collection to sort.

The algorithm that yields the performance benefit is known as the Schwartzian Transform.
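
A minimal sketch of the difference, with String#reverse standing in for an expensive key computation:

```ruby
words = %w[banana cherry apple]

# sort: the key (here, reverse) is recomputed on every comparison
by_sort = words.sort { |x, y| x.reverse <=> y.reverse }

# sort_by: each key is computed once, cached, then compared (the Schwartzian Transform)
by_sort_by = words.sort_by { |word| word.reverse }

by_sort == by_sort_by # => true; same order, fewer key computations
```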

React will conditionally batch calls to setState()

React tries to be smart and batches calls to setState() when it's called from a UI event context (e.g. a button click). This has ramifications for your code: the setState() call is no longer synchronous, and accessing this.state immediately afterwards will refer to the old state.

E.g.

this.state = { hello: false };
...
onClick() {
   this.setState({ hello: true });
   console.log(this.state.hello); //<=== will print false instead of true
}

However, if setState is called from a context that is not a UI event, it becomes synchronous:

this.state = { hello: false };
...
changeState() {
   this.setState({ hello: true });
   console.log(this.state.hello); //<=== will print true!
}

There's more info here on the topic of batching setState calls: https://www.bennadel.com/blog/2893-setstate-state-mutation-operation-may-be-synchronous-in-reactjs.htm

Ruby print to replace contents on same line

In Ruby, the print command can be used with the '\r' (carriage return) character to bring the cursor back to the beginning of the printed line, so that the next print call will replace the contents already outputted to that line. This is a very useful tool for printing status updates in a CLI script. For example:

print "#{index} done. Progress: %.2f%%" % (index.to_f / items * 100) + "\r" if (index % 10) == 0

This will print and replace a line in STDOUT to report the status of a list of items being processed by a function, like so:

200 done. Progress: 15.00%
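
Putting it together in a loop (note the doubled %% needed to print a literal percent sign from a format string):

```ruby
items = 100

(1..items).each do |index|
  next unless (index % 10).zero?
  # \r returns the cursor to the start of the line; the next print overwrites it
  print "#{index} done. Progress: %.2f%%\r" % (index.to_f / items * 100)
end
puts # move past the status line when finished
```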

Typewriters still hold a lasting impact on modern-day computing!

Hotkey to switch control mode in Mac Screen Share

I use MacOS screen sharing to power pair programming sessions that I have in my development team. There are two modes for the navigator to use when observing the driver's screen (assuming that the screen being shared is of the driver): Observe Mode to disallow taking control of the screen, or a self-explanatory Control Mode.

I like being in Observe Mode as the navigator so that I don't mistakenly take control of the driver's screen and start polluting it with accidental keystrokes. But if I ever need to take control, I would have to click the correct icon with the mouse. This gets annoying when I am observing in Full Screen mode (which is almost always), since I would have to exit full screen first in order to switch to taking control.

SOLUTION: I can instead use the CMD-ALT-X key combination to quickly switch control mode :D

Why Git Uses (:<BRANCH>) to Delete Remote Branch

It would appear that the colon in git push origin :<branch-to-delete> is used exclusively to delete branches. But such is not the case.

The format for the refspec is*:

<source>:<destination>

This tells Git to push the source branch to the destination branch in the remote. So if the source is blank, we get a leading colon, which has the effect of deleting the destination branch. It's like saying: push nothing to the destination.

*You can learn more about the refspec in its entirety in this Stack Overflow answer

Performance Metrics for Scripts Using Command Line

To quickly collect performance metrics for a script via command line:

  1. Start running the script. Make note of the process name that the script is running as (e.g. ruby)
  2. Create a script called profiler.sh with this content: ps aux | grep $1 | head -1 | awk '{print "CPU="$3 ", MEM="$4 ", RSS="$6}'
  3. Make the profiler executable: chmod +x profiler.sh
  4. Execute the profiler in a watch session every minute: watch -n 60 --no-title "./profiler.sh SCRIPT_IDENTIFIER | tee -a logfile", where SCRIPT_IDENTIFIER is any text that we can use to grep for the process in the ps aux output.
  5. After your script is done running or you have enough data points, observe the output in logfile.

NOTE: RSS is resident set size

ZDT Column Rename in a Distributed System

When deploying code to a highly available distributed system, any two sequential versions of the code can end up running at the same time. Therefore they need to be compatible.

  1. Add the new column, keep the columns in sync when updating.
  2. Migrate the data and start using the new column, falling back to the old column if the new column is blank; continue keeping the columns in sync.
  3. Remove all dependencies on the old column, only use the new column, do not sync them anymore.
  4. Drop the column.

When in Rails, Step #3 requires some special care as the column needs to be marked for removal:

module MarkColumnsForRemoval
  def mark_columns_for_removal(*columns_marked_for_removal)
    @columns_marked_for_removal = columns_marked_for_removal.map(&:to_s)
  end

  ##
  # Overrides ActiveRecord's list of the database columns in order to hide a column which we intend to delete
  # This ensures that ActiveRecord does not try to read or write to the column
  #
  def columns
    cols = super
    cols.reject { |col| (@columns_marked_for_removal || []).include?(col.name.to_s) }
  end
end

class SomeModel < ActiveRecord::Base
  # Remove this as part of step 4 when dropping the old_column
  extend MarkColumnsForRemoval
  mark_columns_for_removal :old_column
end

A quick deep dive into 'rake gettext:find'

Problem

I am using Ruby Gettext to manage translations. But today, when I ran rake gettext:find to update my PO files, none of them got updated.

Why??

The Investigation

After some digging, I noticed that Ruby Gettext defines one FileTask (a specific type of Rake task) per PO file, which delegates the work to GNU gettext.

FileTask looks at the timestamps of dependent files, and only executes the supplied block if any of the dependent files have a timestamp later than the file to update.

For example:

dependent_files = ["translations_template_file.pot"]
file "file_to_update" => dependent_files do
  # update the file
end

Why gettext:find was not doing anything

It turned out that gettext uses two FileTasks.

One to update the template:

files_needing_translations = ["file1.js", "file2.rb"]
file "translations_template_file.pot" => files_needing_translations do
  # update the translations template file
end

and another to update the PO file:

file "en-US/translation_file.po" => ["translations_template_file.pot"] do
  # update "en-US/translations.po"
end

The reason gettext:find did not do anything was because none of the files needing translation were updated, thus no PO files were updated.

Solution

> touch one_of_the_files_that_gettext_looks_at.js
> rake gettext:find

the .then(onSuccess, onError) anti-pattern

Before:

somePromise().then(
  function onSuccess (res) {
    // stuff happens, but oh no!
    // an error is thrown in here!
  },
  function onError (err) {
    // request-only error handler
  }
);

After:

somePromise()
  .then(function onSuccess (res) {
    // stuff happens, but oh no!
    // an error is thrown in here!
  })
  .catch(function onError (err) {
    // yay! The error thrown in the function above
    // can be handled here or rethrown to be handled elsewhere.
  });

More details here.

Compounding expectations in Rspec and Chai

When I had multiple expectations on the same object in rspec, I would write the code like so:

expect(page).to have_content("Foo")
expect(page).to have_content("Bar")
expect(page).to have_content("Other Stuff")

You can save yourself some typing if you instead use compound expectations, chaining with and after the previous expectation. Doing so allows the previous code to be written as:

expect(page).to have_content("Foo")
  .and have_content("Bar")
  .and have_content("Other Stuff")

The same concept also exists in the Chai JavaScript testing library (documentation):

expect(page).to.contain("Foo")
  .and.contain("Bar")
  .and.contain("Other Stuff");

Add executable flags in git file

There is support in the git add command to make a file tracked in your git repository executable. For example, let's say you added a foo.sh script to your repo but forgot to add the executable bit to its file permissions. You can now do this:

git add --chmod=+x foo.sh

One gotcha of this approach is that this will only change the permissions tracked by git, but not the actual permissions of the file on YOUR filesystem. You will still need to run chmod +x foo.sh to modify your local permissions. However, your teammates should be able to pick up the permission changes from a git pull.

Courtesy of http://stackoverflow.com/a/38285435/814576

ES2015 Arrow fns do not have the arguments object

const myArrowFn = (/*unknown arity*/) => {
  console.log(arguments); // the enclosing scope's arguments (or a ReferenceError), NOT this function's!
};
function myFn(/*unknown arity*/) {
  console.log(arguments); // the arguments passed to myFn, as you'd expect
}

My takeaway: only use arrow functions when they're necessary, which actually isn't that often! Plain old named JS functions are still powerful and if necessary can still easily be bound with .bind(this).

Related reading: https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Functions/arguments

Matching array subset in Ruby

Problem:

How do you evaluate whether one array is a subset of another? For example, are the elements [a,c] included in [a,b,c]?

First attempt:

I was hoping to find something like Array.include?([...]), but this only checks if the array includes the argument as one of its values.

Second attempt:

Another approach is to pass a block into Array.any?

!arr1.any? { |e| !arr2.include?(e) }

But the double negation is rather indirect and doesn't easily reveal the intent.

I considered extracting a method to name the functionality:

def subset?(arr1, arr2)
  !arr1.any? { |e| !arr2.include?(e) }
end

But it's still difficult to read, as it's not clear whether arr1 is a subset of arr2, or vice versa.

Final Solution:

The Enumerable module includes a to_set method to convert the array to a set, and Set includes a subset? method.

arr1.to_set.subset?(arr2.to_set)

Technically, you need to require set.rb to get this method defined on Enumerable:

require "set"

arr1.to_set.subset?(arr2.to_set)

But you get this require for free in Rails.
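
Checking the example from the top of this note:

```ruby
require "set"

# Is [a, c] a subset of [a, b, c]?
[:a, :c].to_set.subset?([:a, :b, :c].to_set) # => true
[:a, :d].to_set.subset?([:a, :b, :c].to_set) # => false, since :d is missing
```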

Add extra line to git commit message from CLI

You can add extra lines to your commit messages by adding an extra -m flag to the git commit flag. This is useful if you have extra information that you want captured in your commit, but you don't want it in your commit message header. For example:

git commit -am "Updates the README with copyright information" -m "This conforms to requirements from Legal."

Will produce the following commit message:

Updates the README with copyright information

This conforms to requirements from Legal.

Now your commit message is split up into a header and a body. You can also add another -m flag for a footer.

Rolling back Rails migrations

There are a bunch of ways to roll back migrations, so I figured I'd capture them in Q & A format.

Let's say the following migration files exist:

> ls db/migrate

20160613172644_migration_1
20160614173819_migration_2
20160615142814_migration_3
20160615160123_migration_4
20160615174549_migration_5

Q: How do I roll back the last migration?
A: rake db:rollback

Q: How do I roll back the last 3 migrations?
A: rake db:rollback STEP=3

Q: How do I roll back a specific migration?
A: rake db:migrate:down VERSION=20160615142814
Details:
The timestamp comes from the filename: 20160615142814_migration_3

and... the one I learned today:

Q: How do I roll back all the migrations past a certain version?
A: rake db:migrate VERSION=20160615142814.
Details:
The above will keep the following:

20160613172644_migration_1
20160614173819_migration_2
20160615142814_migration_3

and roll back the following:

20160615160123_migration_4
20160615174549_migration_5

In other words, it will keep all the migrations up to and including the version you specified.

RSpec Matchers for Array Comparisons

Whenever you are matching arrays ask yourself two questions:

  • Is order important?
  • Am I matching a subset of the elements or all of the elements?

How I decide on a matcher:

  1. Choose between the eq and be matcher if order is important.
  2. Choose the include matcher if you want to match on a subset of the elements.
  3. Choose between the match_array and contain_exactly matcher if you want to match all elements (and order doesn't matter).

Below is an example of an improvement to a previously intermittent test. I replaced the eq matcher with the match_array matcher because I wanted to match all location_ids and order doesn't matter.

expect(location_ids).to eq([location_2.id, location_3.id])
expect(location_ids).to match_array([location_2.id, location_3.id])

The root cause of the intermittent test was that the locations were being retrieved from the database with no order specified. From the PostgreSQL documentation: If sorting is not chosen, the rows will be returned in an unspecified order. The actual order in that case will depend on the scan and join plan types and the order on disk, but it must not be relied on.

How to see invisible text in iTerm2

Yesterday, I tried to run 'npm test' for a new project and the text was invisible (i.e. the same color as the background color of my chosen color scheme for iTerm2). You can find a long discussion about this problem here: https://github.com/altercation/solarized/issues/220

Buried in this discussion was the solution: iTerm2 -> Preferences -> Profiles -> Colors -> Minimum contrast -> Move slider about a third of the way

Testing an Independent Mixin With RSpec

Objective: write a spec for the Inventory::Query mixin.

Note: the mixin is independent of the including class as it does not depend on any instance variables or instance methods.

Original Approach

class InventoryQueryTest
  include Inventory::Query
end
subject(:inventory_query) { InventoryQueryTest.new }

Preferred Approach

subject(:inventory_query) { (Class.new { include Inventory::Query }).new }

Advantage

Simpler and avoids polluting the global namespace with a test class.
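
A self-contained sketch of the preferred approach. The Inventory::Query internals aren't shown in this note, so the in_stock? method below is a hypothetical stand-in:

```ruby
module Inventory
  module Query
    # Hypothetical method standing in for the real mixin's API
    def in_stock?(count)
      count > 0
    end
  end
end

# No named test class needed: an anonymous class hosts the mixin
inventory_query = (Class.new { include Inventory::Query }).new
inventory_query.in_stock?(3) # => true
```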

Prettify JSON in the browser console

Problem

I want to check the shape of data for an XHR request in Chrome. So I go to the Network panel in the inspector.

When I check the response tab, I see the following:

[{"id":43,"child_node":{"active":true,"name":"name","created_at":"2015-05-25T16:55:09.600-04:00"},"notes":null}]

Not very inspectable.

When I check the preview tab, it's a fancy preview mode, with all the nodes folded:

v [{id: 43,…}, {id: 44,…}, {id: 46,…}, {id: 45,…}]
> 0: {id: 43,…}
> 1: {id: 44,…}
> 2: {id: 46,…}
> 3: {id: 45,…}

Not easy to check the shape of the data either.

Solution

JSON.stringify to the rescue!

function prettifyJson(json) {
  console.log(JSON.stringify(
    json,      // copied from Response tab
    undefined, // ignore this argument (or read link below)
    2          // spaces to indent
  ));
};

Paste the above into the Chrome inspector.

Then copy the response in the response tab, and call the function:

>> prettifyJson([{"id":43,"child_node":{"active":true,"name":"name","created_at":"2015-05-25T16:55:09.600-04:00"},"notes":null}])

Output:

[
  {
    "id": 43,
    "child_node": {
      "active": true,
      "name": "name",
      "created_at": "2015-05-25T16:55:09.600-04:00"
    },
    "notes": null
  }
]

// Tada!!

Resource:

https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify

Using WIP acceptance specs

Context

I usually follow this approach when working on a story:

  1. Write a failing acceptance spec.
  2. Do a spike to validate the proposed solution. Get the spike to pass.
  3. Capture learnings, and blow away the spike changes.
  4. Properly TDD away at the solution.

One annoyance with this approach was:

What do I do with the failing acceptance spec?

I usually try not to commit failing specs, since that makes git bisect less useful when I'm trying to see what broke it.

Solution

RSpec tags to the rescue.

Configure your specs to ignore wip specs by default:

RSpec.configure do |c|
  c.filter_run_excluding wip: true
end

Write a WIP spec:

it 'tests my yet-to-be-added feature', :wip do
  "my test"
end

Run the spec:

rspec my_acceptance_spec.rb --tag=wip

The acceptance spec can be committed, because it won't run as part of your regular test suite.

Once the story is done, make sure you remove the :wip tag!