Dan Bradbury

I enjoy making and breaking things. I'll post notes of my adventures.

Different browsers are the worst

While working on a personal project I ran into an issue with a Bootstrap navbar collapse. In my local testing everything went fine, so I pushed and hoped everything would behave properly.. I grabbed my iPhone 5 and took a look, only to see that the dropdown was not working at all. After doing some googling I came across a [SO post](http://stackoverflow.com/questions/20960405/bootstrap-3-dropdown-on-ipad-not-working) that accurately described the shitty situation I found myself in: the dropdown working in all browsers (including IE) and failing on all iOS devices. The poster was apparently using an `<a>` tag without the *href* attribute, which would fail to trigger the collapsible menu. That's all fine and good, but I was trying to use a `<span>` and was too lazy to wrap my one line in an `<a>` tag, so I hunted for a better solution.. My original (almost functional) trigger was a bare `<span>` carrying a `data-toggle` attribute and nothing else. Can you spot what's missing from that simple `data-toggle` setup? It turns out you need to add `cursor: pointer` to the style of whatever the trigger element might be.. If you're with the majority using links and buttons to trigger collapsible content, everything will work as expected and no problems will be had. For people who do what they want, there's shit like this to deal with. And that's the web for you: use some CSS/JS library like Bootstrap in hopes of saving yourself time, and then tackle random shit like this. For a novice I'd imagine this would be an aggravating roadblock that would halt all progress for a few solid hours until they gave up and used a button or link to accomplish the same thing as adding the `cursor: pointer` styling. If you want to do work with web applications, enjoy things like this, because this is what we deal with on the daily.
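For illustration, here's a hedged sketch of that kind of `<span>` trigger with the fix applied — the target, classes, and content are made up for the example, not my exact markup:

```html
<!-- Illustrative only: without cursor: pointer, iOS Safari never
     treats the span as clickable and the collapse never fires -->
<span data-toggle="collapse" data-target="#main-nav" style="cursor: pointer;">
  Menu
</span>
<div class="collapse" id="main-nav">
  <!-- collapsible content goes here -->
</div>
```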

Into the Abyss with Turbolinks

Previous attempts to adopt `turbolinks` during upgrades or new projects led me to the conclusion that I have a burning hatred for everything the project stands for (rage hatred is the worst kind..). From conversations with other Rails folks and former CTOs it seemed like `turbolinks` was something I could avoid without batting an eyelash (see [comparisons to Windows 8 decision making](http://corlewsolutions.com/articles/article-9-remove-uninstall-delete-turbolinks-in-rails-4) or just ask a local Rails expert what their experience with `turbolinks` has been like). As someone who previously ignored the efforts being made by DHH and the core team, I would just start a new project with `--skip-turbolinks` to ensure my own sanity and continue with the hammering. Since I'm a bit late to this conversation, it's nice to read posts like [Yehuda Katz's problem with turbolinks](https://plus.google.com/+YehudaKatz/posts/A65agXRynUn) and [3 reasons why I shouldn't use Turbolinks](http://cobwwweb.com/turbolinks-not-worth-the-effort) to get my hopes and dreams crushed.. Here is just the beginning of the headache one can look forward to when continuing down through the thorns:

> ### Duplicate Bound Events
> Because JavaScript is not reloaded, events are not unbound when the body is replaced. So, if you're using generic selectors to bind events, you're binding those events on every Turbolinks page load. [This] often leads to undesirable behavior.

Alright, so to be honest this isn't that bad. People can bitch about global state all they want, but as someone who enjoys thinking in a "game loop" I don't mind this and feel like I can easily write my own code to these standards.

> ### Audit Third-Party Code
> You audit all third-party code that you use to make sure that they do not rely on DOM Ready events, or if they do, that their DOM Ready events are idempotent.

And this is where it starts to get fun..
I just stumbled upon a bug that reared its head because of these two issues and I wanted to post a solution that I may find myself using more moving forward.. Imagine we are using [`typeahead.js`](https://twitter.github.io/typeahead.js/) and we want to initialize our typeahead input on a given page. Here's what the JS might look like:

```javascript
$('#searchBar .typeahead').typeahead({
  hint: true,
  highlight: true,
  minLength: 2
}, {
  name: 'estados',
  source: matcher(items)
});
```

A pretty harmless call that you are probably going to copy-paste in the first time you mess with `typeahead.js`. It works and you move on.. But be careful, because `turbolinks` will give you some interesting behavior if we navigate between the page that has this piece of JS and another page. `Turbolinks` will invoke this each time the page is "loaded". Because of this we will spawn a new instance of the typeahead input and the associated hint div each time.. For some reason (one I don't care to look into) `typeahead.js` will spawn a new instance and hide the others rather than truly cleaning up. No matter what, we are left to fend for ourselves in the wilds of `turbolinks`, so we search for a solution. I figure we can just handle global state a little better than your typical inline JS would. To do this we simply wrap the initializer in a conditional that verifies the number of typeahead inputs already present on the screen. With proper naming we should be able to expand this approach to multiple typeahead instances.

```javascript
if ($('.typeahead.tt-input').length < 1) {
  $('#searchBar .typeahead').typeahead({ /* same options as above */ });
}
```

With that extra check we are able to handle the global state that turbolinks creates when naturally navigating and attempting to speed up our page. A recent webcast featuring DHH got me thinking about how simple the problem of a web application really is.
The server demands are not a problem whatsoever (30ms response times are all you need to be perfect; anything lower is not truly noticeable or necessary). We have an issue when it comes to how the rest of the "page load" occurs for the user. We all know the "hard refresh" links, the ones that clearly jump you to a new page with new content. Loading a new page is the same old same old that we've been doing since we could serve shit up. Of course the new way is the "one page app" that allows the user to navigate without ever having to disengage from the page they were on. IMO the trend is getting a bit insane (I've always felt the JS community was a bit heavy handed with trying new things..) and trying to keep up with the latest quarrels and trends is tiring. Where is the solution to the seamless application? It's clear that some will say Ember or React are the way forward to building beautiful apps that will take over the world, but I'm not sure I believe a JS framework is what will carry an application. So why learn all that unnecessary complexity when HTML5 is here? If `Turbolinks` lives up to the intro of its `README`, I will be a happy Rails camper.

> Turbolinks makes navigating your web application faster. Get the performance benefits of a single-page application without the added complexity of a client-side JavaScript framework. Use HTML to render your views on the server side and link to pages as usual. When you follow a link, Turbolinks automatically fetches the page, swaps in its `<body>`, and merges its `<head>`, all without incurring the cost of a full page load.

C'mon `Turbolinks`, don't let me down again..

Scaling Images with HTML5 Canvas

> Had intended to post this 8 months ago but it got lost in the sea of gists..

This is old news by now for most, but I had quite a bit of fun implementing it for myself and figured I'd share my code and some learnings that came along with it. The basic idea is to use `canvas` to render an uploaded image and then utilize the `toDataURL` method on canvas to retrieve a Base64 encoded version of the image. In the example included here we will just direct link to the newly scaled image, but you could imagine that we kick off an ajax request and actually process the image (in PHP, `base64_decode` FTW). Without any more tangential delay let's take a look at the code.

```html
<!-- Illustrative markup; the IDs match what the script below expects -->
<input type="file" id="imageFile" accept="image/*">
<input type="text" id="width" placeholder="width">
<input type="text" id="height" placeholder="height">
<button id="saveImage">Save Image</button>
<canvas id="canvas" width="300" height="300"></canvas>
```

The above HTML shouldn't need any explanation, but if it does feel free to open the attached JSFiddle to get a feel for it..

```javascript
(function(){
  (function(){
    document.getElementById("imageFile").addEventListener("change", fileChanged, false);
    document.getElementById("width").addEventListener("keyup", sizeChanged, false);
    document.getElementById("height").addEventListener("keyup", sizeChanged, false);
    document.getElementById("saveImage").addEventListener("click", share, false);
  }());

  var currentImage,
      canvas = document.getElementById("canvas");

  function sizeChanged() {
    var dimension = this.id, // the inputs are ids "width" and "height"
        value = this.value;
    canvas[dimension] = value;
    if (currentImage) { renderImage(); }
  }

  function fileChanged() {
    var file = this.files[0],
        imageType = /^image\//;
    if (!imageType.test(file.type)) {
      console.error("not an image yo!");
    } else {
      var reader = new FileReader();
      reader.onload = function(e) {
        currentImage = e.target.result;
        renderImage();
      };
      reader.readAsDataURL(file);
    }
  }

  function renderImage() {
    var data = currentImage,
        image = document.createElement("img");
    image.src = data;
    image.onload = function() {
      var context = canvas.getContext("2d");
      context.drawImage(this, 0, 0, canvas.width, canvas.height);
    };
  }

  function share() {
    document.location = canvas.toDataURL();
  }
}());
```

In order to bring the HTML to life we need to attach a few event handlers and define some basic functionality. The first thing to tackle is the actual file upload. The File API has been part of the DOM since HTML5 and will be used here to open the uploaded file from the `<input type="file">` on the `"change"` event. Inside of the change event there are 2 things that we want to do: (1) confirm the file type, and (2) render the file onto the canvas.
To confirm the file type we can use the MIME type given to us by `file.type` and do a simple regex test (`/^image\//`) before attempting to render the unknown file (even though we've added `accept="image/*"` on the input, that can be easily modified to attempt to upload any file). Once we are convinced that the user has uploaded an image, it's time to read the file and send it off to the canvas to render. [`FileReader`'s](https://developer.mozilla.org/en-US/docs/Web/API/FileReader) [`readAsDataURL`](https://developer.mozilla.org/en-US/docs/Web/API/FileReader/readAsDataURL) will allow us to process the file asynchronously and provides an `onload` callback that gives us the ability to set the newly read image and ask the canvas to draw.

### Additional Reading

- [Using files in Web Applications - Mozilla Dev](https://developer.mozilla.org/en-US/docs/Using_files_from_web_applications)

Playing the Twitter-game

> I am not a marketer nor do I have any real prior experience managing PR/social media for a company of any size. This is just a write-up of some of my learnings while out in the wild.

By all accounts I am a Twitter novice; I joined a few years ago but don't really keep up with it (I rage-tweet a politician or media personality from time to time but not much more). From other business ventures I've learned that having a strong presence on Twitter + Facebook can be a great way to drive traffic to your site and keep folks updated, but I had never invested any time in growing the profiles (outside of word of mouth and the usual social links in the footer of a site). For my most recent project I decided to take an active role in the growth of my Twitter account and to attempt to use a few of these automation tools / APIs to make my life a little easier.

I started the journey by trying out a product called `ManageFlitter.com`; I had gone all in and decided to buy the smallest plan they offered to maximize my "strategic follows". After about 2 days of bulk requesting it became obvious that the "snazzy dashboard" views were nothing more than a facade.. I was hitting rate limits and unable to process a single follow request with the access grant I had enabled for the site. At this point I started angrily emailing support to figure out why I was being blocked without hitting any of the actual API / request limits listed in the [twitter guidelines](https://support.twitter.com/articles/18311). Here is the initial diagnosis I received (steps to fix are omitted since they were useless..):

> Thank you for writing into ManageFlitter support on this. Unfortunately Twitter seems to have revoked some 3rd party app tokens over the last few weeks. This causes "invalid or expired token" errors in the notifications page and keeps your account from being able to process actions, post tweets into Twitter, or from allowing your ManageFlitter account to properly sync with Twitter.
Hmm, so at this point I was frustrated, because there is no way my token should have been revoked! Obviously they were using keys so that they could make these "analytic queries" on the Twitter user base, and they had messed something up on their end that made it impossible to proceed. I pressed along that line of thinking and received the following "helpful" response.

> I am sorry to hear this. There seems to be a small group of Twitter users currently having this issue with ManageFlitter and some have noted that once their account is revoked, it is far easier for their account to continue to be revoked afterwards.
>
> Some users have suggested that waiting 24 hours helps reset the account. Others have noted that the amount of actions they perform in sequence also matters greatly to avoid the revoked access. Some have noted that after they perform 100 clicks, if they wait 20-30 seconds (usually enough time to go to the Notifications page and see if Twitter is revoking access), then continuing to perform more clicks.
>
> There is no particular reason why some accounts are being affected and others are not. We have been in touch with Twitter and unfortunately Twitter haven't offered much in the way of remedy for us or our users.

TL;DR: I was told to wait for the problem to fix itself.. This threw a massive wrench in my plans to bulk follow people inside the industry and hope that some percentage of them would follow me back. After a few more angry emails I was told to just wait it out.. At that moment I pulled out the classic `"I will have to proceed with a claim to PayPal / the Better Business Bureau"` argument to get my money back and move on to another option. After getting my money back I decided to ride the free train with `Tweepi`, which had no problems for the first week of usage, so I decided to buy a month of the lowest tier to use some of the analytics / follow tools being offered.
With 2 weeks on the platform I can say that I'm very happy with what I paid for and will continue to use it in the future (until my follower count levels out a bit). So why am I writing this article if I am just using a service to accomplish the task for me? While Tweepi does a lot for me, it still imposes artificial limits on follows/unfollows in a 24 hour period (see pic below). ![](http://i.imgur.com/lxGicMA.png) You can see that the service has some limitations, the main one being that I can follow more people than I can unfollow in a given day. While that makes sense with Twitter's policies, my goal is a raw numbers game where I'd like to follow as many people as possible in hopes they follow me back. Whether they follow me back or not, I am content to unfollow and continue with my following numbers game. Through this process I was able to drive my followed count up quite a bit (considering my actual follower count) ![](http://i.imgur.com/7lMskhe.png) ![](http://i.imgur.com/xgFbtwG.png) but I still had this problem of the unbalanced `follow:followers` ratio that I wanted to correct. If I was active on Tweepi there was no way for me to drive this down without having to completely stop following people for a period while I unfollowed the max every day. So today I decided to have a little fun inside the browser and see what I could do. :grin:

Since Twitter has a web + mobile application I could obviously sit and click through each of the people I was following to reduce the number but.. ![](http://colsblog.com/wp-content/uploads/2014/07/notime.jpg) So let's see how well formatted the Twitter follower page is (and since it's Twitter we know it's going to be well organized). When arriving at the page we see the trusty un/follow button for each of the accounts we follow ![](http://i.imgur.com/WVI23gR.png) and we also notice that Twitter has some infinite scroll magic going on to continuously load the 1000s of people we follow.
With that knowledge in our hands it's time to craft some jQuery-flavored code to do the clicking for us:

```javascript
$.each($('.user-actions-follow-button'), function(index, element) {
  $(element).click();
});
```

Pretty easy to click through each of the buttons on the page, but that's only going to account for the ones we have manually scrolled through.. Not sufficient, since we have >4000 to unfollow but <20 buttons on the page. So let's handle that damned auto-scrolling:

```javascript
var count = 0;
function st() {
  $("html, body").animate({ scrollTop: $(document).height() }, "fast");
  if (count < 2000) {
    count += 1;
    setTimeout(st, 500);
  }
}
st();
```

You might be thinking: why not just for loop this shit?! The scroll animation needs a bit of time to allow for the page load; if you call it too fast the entire page will bug out and the "click button" code won't work as expected. So we just use `setTimeout` and let that sucker run (good time to take a stretch or make some coffee). When you come back you should hopefully be at the bottom of the screen (when `GridTimeline-footer` shows up you know you are done) :D Run the click code and patiently wait for your browser to slow down drastically and eventually unfollow your entire list. The result should look something like this ![](http://i.imgur.com/qTz4KtF.png) The 1 follower there threw me off, since when I clicked on the link for my followers there wasn't anyone listed. At this point I was suspicious that I may have set off one of the limits that would have deactivated my account. I checked my emails and didn't see any warnings or notifications from Twitter, but did start seeing this whenever I tried to follow someone on their website (learn more [here](https://support.twitter.com/articles/66885)): ![](http://i.imgur.com/ot7Ouon.png) At this point I was thinking I had just fucked myself and gotten my account banned or locked in some way.
During this time of panic I decided to refresh my browser and saw some funky behavior on my profile page.... ![](http://i.imgur.com/3IUBOXT.png) No "Following" count at all?! And I can't follow anyone because of some unknown policy breach.. After writing a short sob story to Twitter about how I had "accidentally unfollowed everyone" (cover my ass), I thought about the locking problem a bit more.. Hmm, what about that Tweepi token I was using before? Who would have guessed that it would work and allow me to follow people again! ![](http://i.imgur.com/TGpVV06.png) So with a little bit of crafted JavaScript I was able to drop that Following count down without having to fight any artificial limits imposed on me by some third party. I'm incredibly happy with the results (as I am not banned and my account is working as expected) and plan to reproduce this with another client in the future. It's always a good feeling when adding a new tool to the utility belt.

Replacing SimpleCov

After fighting with [`simplecov`](https://github.com/colszowka/simplecov) for a little longer than I would like to admit — I was attempting to get it to start analyzing a group of files that were the meat and potatoes of my application (a Goliath application) — I found that none of the default configs (`SimpleCov.start 'rails'`, etc.) nor the filters allowed my files to be tracked and printed in the handy coverage HTML file. Because of all this struggling I decided to go ahead and create my own crude coverage module; I'll be using this post to discuss my learnings and share an early working iteration.

To get started I wanted the invocation of coverage to be exactly the same as `simplecov`, so let's start with the goal of adding `CrudeCov.start` inside of our `spec_helper.rb` to keep track of the files we care about. Before diving into the code I did a little research on how `SimpleCov.start` worked. I was mainly looking for information on how it was able to keep track of files with only a single invocation inside of the `spec_helper`. Inside of [`lib/simplecov.rb`](https://github.com/colszowka/simplecov/blob/master/lib/simplecov.rb#L40-L53) we find a definition of the `start` method, which checks to see if the water is friendly (`SimpleCov.usable?`) and then starts the tracking with a call to `Coverage.start`. At this point during my investigation I was pretty sure that `Coverage` was a `Class`/`Module` defined within the `simplecov` source; after some grepping within the repo I only found one other reference to `Coverage`, inside of [`lib/simplecov/jruby_fix.rb`](https://github.com/colszowka/simplecov/blob/master/lib/simplecov/jruby_fix.rb). Unfortunately that reference is just as the name implies, a `jruby`-specific fix for the `Coverage` module that overrides the `result` method. When I saw that this was the only reference to the module, I ran off to google and was incredibly pleased to find that `Coverage` is a `Ruby` module!
According to the [Ruby 2.0 Coverage doc](http://ruby-doc.org/stdlib-2.0.0/libdoc/coverage/rdoc/Coverage.html):

> Coverage provides coverage measurement feature for Ruby. This feature is experimental, so these APIs may be changed in future.

With that note about this being an experimental feature, let's be flexible and see what we can do (`simplecov` uses it and it's a pretty successful gem). The usage note in the doc also looks fairly promising:

> 1. require "coverage.so"
> 2. do `Coverage.start`
> 3. require or load Ruby source file
> 4. `Coverage.result` will return a hash that contains filename as key and coverage array as value. A coverage array gives, for each line, the number of line execution by the interpreter. A nil value means coverage is disabled for this line (lines like `else` and `end`).

So we don't have to worry about #1 (it will be loaded by Ruby) and can start with #2: call `Coverage.start`, load all the files that matter, and then use `Coverage.result` (which "returns a hash that contains filename as key and coverage array as value and disables coverage measurement") to see how well the files have been covered. As a note, Coverage will pick up **any** file that has been required after `Coverage.start`, so it's a good idea to have a way to selectively find the files that you want coverage results on (e.g. an Array of keys like `Dir['./app/apis/*rb']` to grab the coverage results you want). Since we don't have any intention of supporting `JRuby`, we should be able to use `Coverage` as-is for our `CrudeCov` example.
Let's start off with `#start` and `#print_result` (used after our test suite finishes):

```ruby
module CrudeCov
  class << self
    def start
      @filelist = []
      Coverage.start
    end

    def print_result
      cov_results = Coverage.result
      root = File.dirname(__FILE__)[0..-6]

      filelist = [
        "./app/apis/untested_endpoint.rb",
        "./app/apis/covered_endpoint.rb"
      ]

      filelist.each do |file|
        # process file results
        # coverage results return an Array ([1,0,..,nil,3]) where each value is
        # the number of times the line was hit and size = the number of lines;
        # this makes for easy matching when creating the pretty html result file
        file_results = cov_results[file]
        results = file_results.compact.sort # remove all nil entries & sort to help with calculations

        puts "Results for: #{file}"
        total_lines = results.length.to_f
        covered_lines = total_lines - results.find_index(1)
        percentage = (covered_lines / total_lines).round(2) * 100
        puts "#{percentage}% Covered (#{covered_lines} of #{total_lines} Lines Covered)"
      end

      # create html for easy viewing outside of shell
    end
  end
end
```

Our `CrudeCov` module above is pretty straightforward and covers our basic needs: (1) a one-line call to add to our `spec_helper`, and (2) a print method that we can call after our suite is finished running (ideally the module would figure out which test framework is being used and ensure the hook is made to print results at the end of the suite). With the example above we will have to explicitly ensure that the `print_result` method is called. Assuming that we are testing with `RSpec`, our `spec_helper` will look something like this:

```ruby
require 'crudecov'

CrudeCov.start

# require project files..

RSpec.configure do |config|
  # your other config..
  config.after(:suite) do
    CrudeCov.print_result
  end
end
```

With that basic setup you will get a printout of the coverage percentages for all files that have been included in the `filelist`.
In less than 30 lines of code we were able to build an incredibly simple coverage module that we could use in a project to sanity check a file that may potentially be lacking coverage, or to confirm proper testing. From that simple example you can start to see how a project like `simplecov` would come into being and how something as simple as `CrudeCov` could grow into a full Ruby coverage suite. With the legitimate need to get data on the effectiveness of your tests, SaaS solutions like [`Coveralls`](https://coveralls.io/) (which did not recognize a Goliath application) and gems like `simplecov`, `rcov` and `cover_me` have all become relied-upon staples of the TDD community. What's the point of even doing TDD if you aren't covering new lines of code that could result in bugs down the road? For that reason alone I'd say it's worthwhile to implement some sort of coverage tool when all the rest have failed.