Remember that crazy Javascript quiz from 6 years ago? Craving to solve another set of mind-bending snippets no sensible developer would ever use in their code? Looking for a new installment of the most ridiculous Javascript interview questions?
Look no further! The "ECMAScript Two Thousand Fifteen" installment of good old Javascript Quiz is finally here.
The rules are as usual:
The quiz goes over such ES6 topics as: classes, computed properties, spread operator, generators, template strings, and shorthand properties. It's relatively easy, but still tricky. It tries to cover various ES6 features — a little bit of this, a little bit of that — but it's certainly still only a tiny subset.
If you can think of other silly riddle ideas to break one's head against, please post them in the comments. For a slightly harder version, feel free to explore some of the tests in our compat table or perhaps something from TC39 official test suite.
Ready? Here we go.
Here be quiz result
I hope you enjoyed it. I'll try to write up an explanation for these in the near future.
The innerText
property.
That quirky, non-standard way of retrieving an element's text, [introduced by Internet Explorer](https://msdn.microsoft.com/en-us/library/ie/ms533899%28v=vs.85%29.aspx) and later "copied" by both WebKit/Blink and Opera for web-compatibility reasons. It's usually seen in combination with textContent
— as a cross-browser way of using the standard property followed by the proprietary one:
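Typically something along these lines (an illustrative example, not a quote from any particular codebase):

```js
var text = element.textContent || element.innerText;
```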
Or as the main webcompat offender in [numerous Mozilla tickets](https://bugzilla.mozilla.org/show_bug.cgi?id=264412#c24) — since Mozilla is the only major browser refusing to add this non-standard property — when someone doesn't know what they're doing, skipping the textContent
"fallback" altogether:
innerText
is pretty much always frowned upon. After all, why would you want to use a non-standard property that does the "same" thing as a standard one? Very few people venture to actually check the differences, and on the surface it certainly appears as if there are none. Those curious enough to investigate further usually do find them, but only slight ones, and only when retrieving text, not setting it.
Back in 2009, I did just that. And I even wrote [this StackOverflow answer](http://stackoverflow.com/a/1359822/130652) on the exact differences — slight whitespace deviations, things like inclusion of <script> contents by textContent
(but not innerText
), differences in interface (Node
vs. HTMLElement
), and so on.
All this time I was strongly convinced that there isn't much else to know about textContent
vs. innerText
. Just steer away from innerText
, use this "combo" for cross-browser code, keep in mind slight differences, and you're golden.
Little did I know that I was merely looking at the tip of the iceberg and that my perception of innerText
would change drastically. What you're about to hear is the story of Internet Explorer getting something right, the real differences between these properties, and how we probably want to standardize this red-headed stepchild.
Let's first see just how different textContent and innerText actually are.
Here's a simple example:
See the Pen gbEWvR by Juriy Zaytsev (@kangax) on CodePen.
Notice how innerText represents, almost precisely, how the text appears on the page. textContent
, on the other hand, does something strange — it ignores newlines created by <br> and around styled-as-block elements (<span> in this case). But it preserves spaces as they are defined in the markup. What does it actually do?
Looking at the [spec](http://www.w3.org/TR/2004/REC-DOM-Level-3-Core-20040407/core.html#Node3-textContent), we get this:
This attribute returns the text content of this node and its descendants. [...]
On getting, no serialization is performed, the returned string does not contain any markup. No whitespace normalization is performed and the returned string does not contain the white spaces in element content (see the attribute Text.isElementContentWhitespace). [...]
The string returned is made of the text content of this node depending on its type, as defined below:
For ELEMENT_NODE, ATTRIBUTE_NODE, ENTITY_NODE, ENTITY_REFERENCE_NODE, DOCUMENT_FRAGMENT_NODE:
concatenation of the textContent attribute value of every child node, excluding COMMENT_NODE and PROCESSING_INSTRUCTION_NODE nodes. This is the empty string if the node has no children.
For TEXT_NODE, CDATA_SECTION_NODE, COMMENT_NODE, PROCESSING_INSTRUCTION_NODE: nodeValue
In other words, textContent
returns concatenated text of all text nodes. Which is almost like taking markup (i.e. innerHTML
) and stripping the tags from it. Notice that no whitespace normalization is performed; the text and whitespace are essentially spit out the same way they're defined in the markup. If you have a giant chunk of newlines in the HTML source, you'll have them as part of textContent
as well.
While investigating these issues, I came across a [fantastic blog post by Mike Wilcox](http://clubajax.org/plain-text-vs-innertext-vs-textcontent/) from 2010, and pretty much the only place where someone tries to bring attention to this issue. In it, Mike takes a stab at the same things I'm describing here, saying these true-to-the-bone words:
Internet Explorer implemented innerText in version 4.0, and it’s a useful, if misunderstood feature. [...]

The most common usage for these properties is while working on a rich text editor, when you need to “get the plain text” or for other functional reasons. [...]

Because “no whitespace normalization is performed”, what textContent is essentially doing is acting like a PRE element. The markup is stripped, but otherwise what we get is exactly what was in the HTML document — including tabs, spaces, lack of spaces, and line breaks. It’s getting the source code from the HTML! What good this is, I really don’t know.

Knowing these differences, we can see just how potentially misleading (and dangerous) a typical textContent || innerText retrieval is. It's pretty much like saying:
innerText
is as if the text was selected and copied off the page. In fact, this is exactly what WebKit/Blink does — it [uses the same code](http://lists.w3.org/Archives/Public/public-html/2011Jul/0133.html) for Selection#toString
serialization and innerText
!
Speaking of that — if innerText
is essentially the same thing as stringified selection, shouldn't it be possible to emulate it via Selection#toString
?
It sure is, but as you can imagine, the performance of such a thing [leaves much to be desired](http://jsperf.com/innertext-vs-selection-tostring/4) — we need to save the current selection, then change the selection to contain the entire element contents, get its string representation, then restore the original selection:
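A sketch of that dance, using the standard Selection and Range APIs (not the exact snippet from the original post):

```js
function getSelectionText(el) {
  var selection = window.getSelection();

  // 1. save the current selection
  var savedRanges = [];
  for (var i = 0; i < selection.rangeCount; i++) {
    savedRanges.push(selection.getRangeAt(i));
  }

  // 2. select the entire contents of the element
  var range = document.createRange();
  range.selectNodeContents(el);
  selection.removeAllRanges();
  selection.addRange(range);

  // 3. stringify the selection
  var text = selection.toString();

  // 4. restore the original selection
  selection.removeAllRanges();
  for (var j = 0; j < savedRanges.length; j++) {
    selection.addRange(savedRanges[j]);
  }

  return text;
}
```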
The problems with this Frankenstein of a workaround are performance, complexity, and clarity. It shouldn't be so hard to get a "plain text" representation of an element. Especially when there's an already-"implemented" property that does just that.
textContent
and Selection#toString
are poor contenders in cases like this; innerText
is exactly what we need. Except that it's non-standard, and unsupported by one major browser. Thankfully, at least Chrome (Blink) and Safari (WebKit) were considerate enough to imitate it. One would hope there are no deviations between their implementations. Or are there?
Curious about the exact behavior of innerText, I wanted to see the differences between the 2 engines. Since there was nothing like this out there, I set out to explore it. In true ["cross-browser madness" traditions](http://unixpapa.com/js/key.html), what I found was not for the faint of heart.
A quick note on Opera's "support" of innerText first. To sustain web compatibility, Opera simply went ahead and "aliased" innerText to textContent. That's right, in Opera, innerText would return nothing close to what we see in IE or WebKit. There's simply no point including it in a table; it would diverge in every single aspect, and we can just consider it as never implemented.
Another important difference between textContent and innerText is performance.
You can find dozens of [tests on jsperf.com comparing innerText and textContent](http://jsperf.com/search?q=innerText) — innerText is often dozens of times slower.
One of them mentions innerText being up to 300x slower (although that seems like a particularly rare case) and advises against using it entirely.
Knowing the underlying concepts of both properties, this shouldn't come as a surprise. After all, innerText
requires knowledge of layout and [anything that touches layout is expensive](http://gent.ilcore.com/2011/03/how-not-to-trigger-layout-in-webkit.html).
So for all intents and purposes, innerText
is significantly slower than textContent
. And if all you need is to retrieve the text of an element without any kind of style awareness, you should — by all means — use textContent
instead. However, this style awareness of innerText
is exactly what we need when retrieving text "as presented"; and that comes with a price.
jQuery, of course, has its own text() method. But how exactly does it work and what does it use — the textContent || innerText
combo or something else? Turns out, jQuery [takes a safe route](https://github.com/jquery/jquery/blob/7602dc708dc6d9d0ae9982aadb9fa4615a9c49fa/external/sizzle/dist/sizzle.js#L942-L971) — it either returns textContent
(if available), or manually does what textContent
is supposed to do — iterates over all children and concatenates their nodeValue
's. Apparently, at one point jQuery **did** use innerText
, but then [ran into good old whitespace differences](http://bugs.jquery.com/ticket/11153) and decided to ditch it altogether.
So if we wanted to use jQuery to get real text representation (à la innerText
), we can't use jQuery's text()
since it's basically a cross-browser textContent
. We would need to roll our own solution.
By now we've seen that innerText is pretty damn useful; we went over the underlying concept, browser differences, performance implications, and saw how even the almighty jQuery is of no help.
You would think that by now this property is standardized or at least making its way into the standard.
Well, not so fast.
Back in 2010, Adam Barth (of Google) [proposes to spec innerText](http://lists.w3.org/Archives/Public/public-whatwg-archive/2010Aug/0455.html) on the WHATWG mailing list. Funny enough, all Adam wants is to set pure text (not markup!) of an element in a secure way. He also doesn't know about textContent
, which would certainly be a preferred (standard) way of doing that. Fortunately, Mike Wilcox, whose blog post I mentioned earlier, chimes in with:
In addition to Adam's comments, there is no standard, stable way of *getting* the text from a series of nodes. textContent returns everything, including tabs, white space, and even script content. [...] innerText is one of those things IE got right, just like innerHTML. Let's please consider making that a standard instead of removing it.

In the same thread, Robert O'Callahan (of Mozilla) [doubts usefulness of innerText](http://lists.w3.org/Archives/Public/public-whatwg-archive/2010Aug/0477.html) but also adds:
But if Mike Wilcox or others want to make the case that innerText is actually a useful and needed feature, we should hear it. Or if someone from Webkit or Opera wants to explain why they added it, that would be useful too.

Ian Hixie is open to adding it to a spec if it's needed for web compatibility. While Rob O'Callahan considers this a redundant feature, Maciej Stachowiak (of WebKit/Apple) hits the nail on the head with [this fantastic reply](http://lists.w3.org/Archives/Public/public-whatwg-archive/2010Aug/0480.html):
Is it a genuinely useful feature? Yes, the ability to get plaintext content as rendered is a useful feature and annoying to implement from scratch. To give one very marginal data point, it's used by our regression text framework to output the plaintext version of a page, for tests where layout is irrelevant. A more hypothetical use would be a rich text editor that has a "convert to plaintext" feature. textContent is not as useful for these use cases, since it doesn't handle line breaks and unrendered whitespace properly.

[...]

These factors would tend to weigh against removing it.

To which Rob gives a reasonable reply:

There are lots of ways people might want to do that. For example, "convert to plaintext" features often introduce characters for list bullets (e.g. '*') and item numbers. (E.g., Mac TextEdit does.) Safari 5 doesn't do either. [...] Satisfying more than a small number of potential users with a single attribute may be difficult.

And the conversation dies out.
See the Pen emXMKZ by Juriy Zaytsev (@kangax) on CodePen.
Notice that "opacity: 0" elements are not displayed, yet they are part ofinnerText
. Ditto with infamous "text-indent: -999px" hiding technique. The bullets from the list are not accounted for and neither is dynamically generated content (via ::after pseudo selector). Paragraphs only create 1 newline, even though in reality they could have gigantic margins.
But I think that's OK.
If you think of innerText
as text copied from the page, most of these "artifacts" make perfect sense. Just because a chunk of text is given "opacity: 0" doesn't mean that it shouldn't be part of output. It's a purely presentational concern, just like bullets, space between paragraphs or indented text. What matters is **structural preservation** — block-styled elements should create newlines, inline ones should be inline.
One iffy aspect is probably "text-transform". Should capitalized or uppercased text be preserved? WebKit/Blink think it should; Internet Explorer doesn't. Is it part of a text itself or merely styling?
Another one is "visibility: hidden". Similar to "opacity: 0" (and unlike "display: none"), a text is still part of the flow, it just can't be seen. Common sense would suggest that it should still be part of the output. And while Internet Explorer does just that, WebKit/Blink disagrees (also being curiously inconsistent with their "opacity: 0" behavior).
Elements that aren't known to a browser pose an additional problem. For example, WebKit/Blink recently started supporting <template> element. That element is not displayed, and so it is not part of innerText
. To Internet Explorer, however, it's nothing but an unknown inline element, and of course it outputs its contents.
In 2011, another innerText proposal [is posted to the WHATWG mailing list](http://lists.w3.org/Archives/Public/public-html/2011Jul/0133.html), this time by Aryeh Gregor. Aryeh proposes to either:
1. remove innerText entirely
2. change innerText to be like textContent
3. spec innerText according to whatever IE/WebKit are doing

On the problem with (3), speccing innerText:
The problem with (3) is that it would be very hard to spec; it would be even harder to spec in a way that all browsers can implement; and any spec would probably have to be quite incompatible anyway with the existing implementations that follow the general approach. [...]

Indeed, as we've seen from the tests, compatibility proves to be a serious issue. If we were to standardize innerText, which of the 2 behaviors should we put in a spec?
Another problem is the reliance on Selection.toString() (as expressed by Boris Zbarsky):

It's not clear whether the latter is in fact an option; that depends on how Selection.toString gets specified and whether UAs are willing to do the same for innerText as they do for Selection.toString....

So far the only proposal I've seen for Selection.toString is "do what the copy operation does", which is neither well-defined nor acceptable for innerText. In my opinion.

In the end, we're left with [this WHATWG ticket by Aryeh](https://www.w3.org/Bugs/Public/show_bug.cgi?id=13145) on specifying innerText. Things look rather grim, as evidenced in one of the comments:
I've been told in no uncertain terms that it's not practical for non-Gecko browsers to remove. Depending on the rendering tree to the extent WebKit does, on the other hand, is insanely complicated to spec in terms of standard stuff like DOM and CSS. Also, it potentially breaks for detached nodes (WebKit behaves totally differently in that case). [...] But Gecko people seemed pretty unhappy about this kind of complexity and rendering dependence in a DOM property. And on the other hand, I got the impression WebKit is reluctant to rewrite their innerText implementation at all. So I'm figuring that the spec that will be implemented by the most browsers possible is one that's as simple as possible, basically just a compat shim. If multiple implementers actually want to implement something like the innerText spec I started writing, I'd be happy to resume work on it, but that wasn't my impression.

We can't remove it, can't change it, can't spec it to depend on rendering, and spec'ing it would be quite difficult :)
So can innerText ever be standardized, or will it forever stay an unspecified legacy with 2 different implementations?
My hope is that the test suite and compatibility table are the first step in making things better. We need to know exactly how engines differ, and we need to have a good understanding of what to include in a spec. I'm sure this doesn't cover all cases, but it's a start (other aspects worth exploring: shadow DOM, detached nodes).
I think this test suite should be enough to write 90%-complete spec of innerText
. The biggest issue is deciding which behavior to choose among IE and WebKit/Blink.
The plan could be:
1. Write a spec
2. Try to converge IE and WebKit/Blink behavior
3. Implement spec'd behavior in Firefox
Seeing [how amazing Microsoft has been](https://status.modern.ie/) recently, I really hope we can make this happen.
So what would it take to implement something like innerText ourselves?
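Below is a rough, naive sketch of such an implementation — my reconstruction based on the description that follows, not the original post's code:

```js
function innerText(el) {
  function getStyle(node, prop) {
    return window.getComputedStyle(node, null).getPropertyValue(prop);
  }
  function isBlock(node) {
    return /^(block|list-item|table|table-row|table-caption)$/.test(getStyle(node, 'display'));
  }
  function collect(node, withinFormatted) {
    if (node.nodeType === 3) { // text node
      // in a "formatted" (white-space: pre-*) context keep contents as is,
      // otherwise collapse whitespace runs into a single space
      return withinFormatted ? node.nodeValue : node.nodeValue.replace(/\s+/g, ' ');
    }
    if (node.nodeType !== 1) return ''; // ignore comments, etc.

    var tag = node.nodeName.toLowerCase();
    if (tag === 'script' || tag === 'style') return '';
    if (tag === 'br') return '\n';

    var formatted = /^pre/.test(getStyle(node, 'white-space'));
    var text = '';
    for (var i = 0; i < node.childNodes.length; i++) {
      text += collect(node.childNodes[i], formatted);
    }
    if (tag === 'td') text += '\t'; // (naively) separate table cells with tabs

    // block-styled elements are surrounded by newlines; inline ones are output as is
    return isBlock(node) ? '\n' + text + '\n' : text;
  }
  return collect(el, false).replace(/^\n+|\n+$/g, '');
}
```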
A couple of important tasks here:
1. Checking if a text node is within "formatted" context (i.e. a child of "white-space: pre-*" node), in which case its contents should be concatenated as is; otherwise collapse all whitespaces to 1.
2. Checking if a node is block-styled ("block", "list-item", "table", etc.), in which case it has to be surrounded by newlines; otherwise, it's inline and its contents are output as is.
Then there are things like ignoring <script>, <style>, etc. nodes and inserting a tab ("\t") between <td> elements (to follow WebKit/Blink's lead).
This is still a very minimal and naive implementation. For one, it doesn't collapse newlines between block elements — quite an important aspect. In order to do that, we need to keep track of more state — to know information about the previous node's style. It also doesn't normalize whitespace in a "true" manner — a text node with leading and trailing spaces, for example, should have those spaces stripped if it is (the only node?) in a block element.
This needs more work, but it's a decent start.
It would also be a good idea to write an innerText
implementation in Javascript, with unit tests for each of the "features" in the compat table. Perhaps even supporting 2 modes — IE and WebKit/Blink. An implementation like this could then be simply integrated into non-supporting engines (or used as a proper polyfill).
I'd love to hear your thoughts, ideas, experiences, criticism. I hope (with all of your help) we can make some improvement in this direction. And even if nothing changes, at least some light was shed on this very misunderstood ancient feature.
It was a sunny Monday morning when I woke up to an article on HackerNews, simply named “This in Javascript”. Curious to see what all the attention was about, I started skimming through. As expected, there were mentions of this
in global scope, this
in function calls, this
in constructor instantiation, and so on. It was a long article. And the more I looked through, the more I realized just how overwhelming this topic might seem to folks unfamiliar with intricacies of this
, especially when thrown into a myriad of various examples with seemingly random behavior.
It made me remember a moment from a few years ago when I first read Crockford’s Good Parts. In it, Douglas succinctly laid out a piece of information that immediately made everything much clearer in my head:
The `this` parameter is very important in object oriented programming, and its value is determined by the invocation pattern. There are four patterns of invocation in JavaScript: the method invocation pattern, the function invocation pattern, the constructor invocation pattern, and the apply invocation pattern. The patterns differ in how the bonus parameter this is initialized.
Determined by invocation and only 4 cases? Well, that’s certainly pretty simple.
With this thought in mind, I went back to HackerNews, wondering if anyone else thought the subject was presented as something way too complicated. I wasn’t the only one. Lots of folks chimed in with explanations similar to the one from Good Parts, like this one:
Even more simply, I'd just say:
1) The keyword "this" refers to whatever is left of the dot at call-time.
2) If there's nothing to the left of the dot, then "this" is the root scope (e.g. Window).
3) A few functions change the behavior of "this"—bind, call and apply
4) The keyword "new" binds this to the object just created
Great and simple breakdown. But one point caught my attention — “whatever is left of the dot at call-time”. Seems pretty self-explanatory. For foo.bar()
, this
would refer to foo
; for foo.bar.baz()
, this
would refer to foo.bar
, and so on. But what about something like (f = foo.bar)()
? After all, it seems that “whatever is left of the dot at call time” is foo.bar
. Would that make this
refer to foo
?
Eager to save the world from unusual results in obscure cases, I rushed to leave a prompt comment on how the concept of “left of the dot” could be hairy; that, for best results, one should understand the concept of references and their base values.
It is then that I shockingly realized that this concept of references actually hasn’t been covered all that much! In fact, searching for “javascript reference” yielded anything from cheatsheets to “pass-by-reference vs. pass-by-value” discussions, and not at all what I wanted. It had to be fixed.
And so this brings me here.
I’ll try to explain what these mysterious References are in Javascript (by which, of course, I mean ECMAScript) and how fun it is to learn this
behavior through them. Once you understand References, you’ll also notice that reading ECMAScript spec is much easier.
But before we continue, quick disclaimer on the excerpt from Good Parts.
The book was written in the times when ES3 roamed the prairies, and by now we're fully in the age of ES5.
What changed? Not much.
There’s 2 additions, or rather sub-points to the list of 4:
Function invocation that happens in strict mode now has its this
value set to undefined
. Actually, it would be more correct to say that it does NOT have its this
“coerced” to global object. That’s what was happening in ES3 and what happens in ES5-non-strict. Strict mode simply avoids that extra step, letting undefined
propagate through.
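A quick illustration (my own example, not from the original list):

```js
function f() { return this; }

function g() {
  'use strict';
  return this;
}

f(); // global object (`this` coerced to the global object)
g(); // undefined (no coercion in strict mode)
```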
And then there’s good old Function.prototype.bind
which is hard to even call an addition. It’s essentially call/apply wrapped in a function, permanently binding this
value to whatever was passed to bind()
. It’s in the same bracket as call
and apply
, except for its “static” nature.
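For example (again, an illustration of my own):

```js
var obj = { name: 'obj' };

function whatIsThis() { return this; }

var bound = whatIsThis.bind(obj);

bound();        // obj
bound.call({}); // still obj — the binding is permanent
```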
Alright, on to the References.
To be honest, I wasn’t that surprised to find very little information on References in Javascript. After all, it’s not part of the language per se. References are only a mechanism, used to describe certain behaviors in ECMAScript. They’re not really “visible” to the outside world. They are vital for engine implementors, and users of the language don’t need to know about them.
Except when understanding them brings a whole new level of clarity.
Coming back to my original “obscure” example:
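The original snippet isn't reproduced here verbatim; it was presumably something along these lines:

```js
var foo = {
  bar: function() {
    console.log(this);
  }
};

var f;

foo.bar();       // foo
(f = foo.bar)(); // global object (or undefined in strict mode)
```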
How do we know that 1st one’s this
references foo
, but 2nd one — global object (or undefined
)?
Astute readers will rightfully notice — “well, the expression to the left of ()
evaluates to f
, right after assignment; and so it’s the same as calling f()
, making this function invocation rather than method invocation.”
Alright, and what about this:
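Presumably a snippet along these lines:

```js
(1, foo.bar)(); // ?
```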
“Oh, that’s just grouping operator! It evaluates from left to right so it must be the same as foo.bar(), making this
reference foo
”
“Strange”
And how about this:
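Presumably:

```js
(foo.bar)(); // ?
```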
“Well… considering last example, it must be undefined
as well then? There must be something about those parenthesis”
“Ok, I’m confused”
ECMAScript defines Reference as a “resolved name binding”. It’s an abstract entity that consists of three components — base, name, and strict flag. The first 2 are what’s important for us at the moment.
There are 2 cases when Reference is created: in the process of Identifier resolution and during property access. In other words, foo
creates a Reference and foo.bar
(or foo['bar']
) creates a Reference. Neither literals — 1
, "foo"
, /x/
, { }
, [ 1,2,3 ]
, etc., nor function expressions — (function(){})
— create references.
Here’s a simple cheat sheet:
Example | Reference? | Notes |
---|---|---|
"foo" | No | |
123 | No | |
/x/ | No | |
({}) | No | |
(function(){}) | No | |
foo | Yes | Could be unresolved reference if `foo` is not defined |
foo.bar | Yes | Property reference |
(123).toString | Yes | Property reference |
(function(){}).toString | Yes | Property reference |
(1,foo.bar) | No | Already evaluated, BUT see grouping operator exception |
(f = foo.bar) | No | Already evaluated, BUT see grouping operator exception |
(foo) | Yes | Grouping operator does not evaluate reference |
(foo.bar) | Yes | Ditto with property reference |
Don’t worry about last 4 for now; we’ll take a look at those shortly.
Every time a Reference is created, its components — “base”, “name”, “strict” — are set to some values. The strict flag is easy — it’s there to denote if code is in strict mode or not. The “name” component is set to identifier or property name that’s being resolved, and the base is set to either property object or environment record.
It might help to think of References as plain JS objects with a null [[Prototype]] (i.e. with no “prototype chain”), containing only “base”, “name”, and “strict” properties; this is how we can illustrate them below:
When Identifier foo
is resolved, a Reference is created like so:
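As a plain-object illustration (names here are only illustrative; the base is an environment record, e.g. the global one):

```js
var fooReference = {
  base: environmentRecord,
  name: 'foo',
  strict: false
};
```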
and this is what’s created for property accessor foo.bar
:
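Again as an illustration:

```js
var fooBarReference = {
  base: foo,
  name: 'bar',
  strict: false
};
```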
This is a so-called “Property Reference”.
There’s also a 3rd scenario — Unresolvable Reference. When an Identifier can’t be found anywhere in the scope chain, a Reference is returned with base value set to undefined
:
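Illustrated the same way:

```js
var unresolvableReference = {
  base: undefined,
  name: 'iDontExist',
  strict: false
};
```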
As you probably know, Unresolvable References could blow up if not “properly used”, resulting in an infamous ReferenceError (“foo is not defined”).
Essentially, References are a simple mechanism of representing name bindings; it’s a way to abstract both object-property resolution and variable resolution into a unified data structure — base + name — whether that base is a regular JS object (as in property access) or an Environment Record (a link in a “scope chain”, as in identifier resolution).
So what’s the use of all this? Now that we know what ECMAScript does under the hood, how does this apply to this
behavior, foo()
vs. foo.bar()
vs. (f = foo.bar)()
and all that?
What do foo()
, foo.bar()
, and (f = foo.bar)()
all have in common? They’re function calls.
If we take a look at what happens when Function Call takes place, we’ll see something very interesting:
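Paraphrasing the relevant part of ES5's §11.2.3 (Function Calls) from memory, not a verbatim excerpt:

```
1. Let ref be the result of evaluating MemberExpression.
2. Let func be GetValue(ref).
3.-5. (argument handling, type checks; omitted)
6. If Type(ref) is Reference, then
     a. If IsPropertyReference(ref) is true, let thisValue be GetBase(ref).
     b. Else (the base of ref is an Environment Record),
        let thisValue be GetBase(ref)'s ImplicitThisValue.
7. Else (Type(ref) is not Reference), let thisValue be undefined.
```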
Notice step 6, which basically explains both #1 and #2 from Crockford's list of 4.
We take expression before ()
. Is it a property reference? (foo.bar()
) Then use its base value as this
. And what’s a base value of foo.bar
? We already know that it’s foo
. Hence foo.bar()
is called with this=foo
.
Is it NOT a property reference? Ok, then it must be a regular reference with Environment Record as its base — foo()
. In that case, use ImplicitThisValue as this
(and ImplicitThisValue of Environment Record is always set to undefined
). Hence foo()
is called with this=undefined
.
Finally, if it’s NOT a reference at all — (function(){})()
— use undefined
as this
value again.
Are you feeling like this right now?
Armed with this knowledge, let's see if we can explain this
behavior of (f = foo.bar)()
, (1,foo.bar)()
, and (foo.bar)()
in terms more robust than “whatever is left of the dot”.
Let’s start with the first one. The expression in question is known as Simple Assignment (=). foo = 1
, g = function(){}
, and so on. If we look at the steps taken to evaluate Simple Assignment, we’ll see one important detail:
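Paraphrasing ES5's §11.13.1 (Simple Assignment), again from memory:

```
1. Let lref be the result of evaluating LeftHandSideExpression.
2. Let rref be the result of evaluating AssignmentExpression.
3. Let rval be GetValue(rref).
...
```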
Notice that the expression on the right is passed through internal GetValue()
before assignment. GetValue()
in its turn, transforms foo.bar
Reference into an actual function object. And of course then we proceed to the usual Function Call with NOT a reference, which results in this=undefined
. As you can see, (f = foo.bar)()
only looks similar to foo.bar()
but is actually “closer” to (function(){})()
in a sense that it’s an (evaluated) expression rather than an (untouched) Reference.
The same story happens with comma operator:
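Paraphrasing ES5's §11.14 (Comma Operator):

```
1. Let lref be the result of evaluating Expression.
2. Call GetValue(lref).
3. Let rref be the result of evaluating AssignmentExpression.
4. Return GetValue(rref).
```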
(1,foo.bar)()
is evaluated as a function object and Function Call with NOT a reference results in this=undefined
.
Finally, what about grouping operator? Does it also evaluate its expression?
And here we’re in for surprise!
Even though it’s so similar to (1,foo.bar)()
and (f = foo.bar)()
, grouping operator does NOT evaluate its expression. It even says so plain and simple — it may return a reference; no evaluation happens. This is why foo.bar()
and (foo.bar)()
are absolutely identical, having this
set to foo
since a Reference is created and passed to a Function call.
It’s worth mentioning that ES5 spec technically allows function calls to return a reference. However, this is only reserved for host objects, and none of the built-in (or user-defined) functions do that.
An example of this (non-existent, but permitted) behavior is something like this:
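Purely hypothetical, with a made-up host function name:

```js
// imagine getFooBar() (a host function) returned a Reference { base: foo, name: 'bar' }
getFooBar()(); // `this` inside bar would then be `foo`
```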
Of course, the current behavior is that non-Reference is passed to a Function call, resulting in this=undefined/global object (unless bar
was already bound to foo
earlier).
Now that we understand References, we can take a look at a few other places for a better understanding. Take, for example, the typeof operator:
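Paraphrasing ES5's §11.4.3 (The typeof Operator):

```
1. Let val be the result of evaluating UnaryExpression.
2. If Type(val) is Reference, then
     a. If IsUnresolvableReference(val) is true, return "undefined".
     b. Let val be GetValue(val).
3. Return a String determined by Type(val) ...
```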
Here is that “secret” for why we can pass unresolvable reference to typeof
and not have it blow up.
On the other hand, if we were to use an unresolvable reference without typeof
, as a plain statement somewhere in code:
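For example:

```js
iDontExist; // ReferenceError: iDontExist is not defined
```

An expression statement evaluates its expression and then calls GetValue on the result (ES5 §12.4, paraphrased), and GetValue throws a ReferenceError when handed an unresolvable Reference (ES5 §8.7.1).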
Notice how the Reference is passed to GetValue(), which is then responsible for stopping execution if the Reference is an unresolvable one. It all starts to make sense.
Finally, what about good old delete operator?
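Paraphrasing ES5's §11.4.1 (The delete Operator):

```
1. Let ref be the result of evaluating UnaryExpression.
2. If Type(ref) is not Reference, return true.
3. If IsUnresolvableReference(ref): throw a SyntaxError in strict mode, otherwise return true.
4. If IsPropertyReference(ref), return the result of calling the [[Delete]] internal method
   of ToObject(GetBase(ref)) with GetReferencedName(ref).
5. Else (ref's base is an Environment Record): throw a SyntaxError in strict mode, otherwise
   return the result of calling DeleteBinding with GetReferencedName(ref).
```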
What might have looked like mumbo-jumbo is now pretty nice and clear:
- If the operand is not a Reference at all, there's nothing to delete and true is returned (delete 1, delete /x/)
- If it's an unresolvable Reference, true is returned as well, or a SyntaxError is thrown in strict mode (delete iDontExist)
- If it's a property Reference, the property is deleted from the base object (delete foo.bar)
- Otherwise the base is an Environment Record, and DeleteBinding is called on it, which returns false for variable and function declarations, or throws a SyntaxError in strict mode (delete foo)
And that’s a wrap!
Hopefully you now understand the underlying mechanism of References in Javascript; how they’re used in various places and how we can “utilize” them to explain this
behavior even in non-trivial constructs.
Note that everything I mentioned in this post was based on ES5, being current standard and the most implemented one at the moment. ES6 might have some changes, but that’s a story for another day.
If you’re curious to know more — check out section 8.7 of ES5 spec, including internal methods GetValue()
, PutValue()
, and more.
P.S. Big thanks to Rick Waldron for review and suggestions!
Skip straight to TL;DR.
Kitchensink is your usual behemoth app.
I created it a couple of years ago to showcase everything that Fabric.js — a full-blown <canvas> library — is capable of. We already had some demos illustrating this and that functionality, but kitchensink was meant to be kind of a general sandbox.
You could quickly try things out — add simple shapes or images or SVG’s or text; move them around, scale, rotate, delete, group, change colors, opacity; experiment with locking or z-index properties; serialize canvas into image or JSON or SVG; and so on.
And so there was a good old, single kitchensink.js file (accompanied by kitchensink.html and kitchensink.css) — just a bunch of procedural commands and conditions, really. Pressed that button? Add a rectangle to the canvas. Pressed another one? Load an image. Was object selected on canvas? Enable that button and update its text. You get the idea.
But things change, and over time the app grew and grew until the once-simple kitchensink.js became too big for its own good. I was starting to notice more and more repetition, and problems with navigating and maintaining the code. Those small, weird glitches that live in apps without an authoritative data source crept in as well.
I was looking at a 1000+ LOC JS file, realizing it’s time to refactor.
But there was a bit of a pickle. You see, kitchensink is all about managing <canvas>, through Fabric, and frankly I had no idea how to best approach an app like this. If this was your usual “User” or “Collection” or “TodoItem” data coming from a server or elsewhere, I’d quickly throw together few Backbone.Model
’s and call it a day. But with Fabric, we have an object model on top of <canvas>, so there’s just a collection of abstract visual objects and a way to operate on those objects.
Is it possible to shoehorn MVC onto all this? What exactly would become a model or the views? Is it even a good idea?
The following is my step-by-step refactoring path, including close look at some MVC-ish solutions. You can use it to get ideas on revamping your own spaghetti app, and/or to see how to approach design of <canvas>-based app, specifically. Each step is made as a separate commit in fabricjs.com repo on github.
Before changing anything, I decided to do a little experiment and statically analyze the complexity of the app. Not to tell me that it was in a shitty state; that I already knew. I wanted to see how it changed based on different solutions.
There are a few ways to analyze JS code at the moment. There's the complexity-report npm package, as well as jscomplexity.org (both rely on escomplex). There's Plato, which provides visual tracking of complexity (based on complexity-report). And there's good old JSHint; it has its own cyclomatic complexity calculation.
I used complexity-report
because it has more granular analysis and has this useful metric — “Maintainability”. What exactly is it and why should we care about it?
Here’s a simple example:
This chunk of code has cyclomatic complexity (CC) of 1. It’s just a single function call. No conditional operators, no loops. Yet, it’s pretty scary (actual code from Fabric.js, btw; shame on me).
Now look at this code:
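Again a stand-in, not the original — same work, same cyclomatic complexity of 1:

```js
var dx = b.left + b.width / 2 - (a.left + a.width / 2);
var dy = b.top + b.height / 2 - (a.top + a.height / 2);
var distance = Math.sqrt(dx * dx + dy * dy);

console.log(Math.round(distance * 100) / 100);
```

(The maintainability figures quoted below refer to the original snippets, not to these stand-ins.)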
It also has cyclomatic complexity of 1. But it’s clearly significantly easier to understand and maintain.
Maintainability, on the other hand, is reported as 151 for the first one and 159 for the second (the higher the better; 171 being the highest). It’s still not a big difference but it’s definitely more representative of overall state of the code, unlike cyclomatic complexity.
complexity-report
defines maintainability as a function of not just cyclomatic complexity but also lines of code and overall volume of operators & operands (effort):
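From what I recall of escomplex's formula, it's roughly the following (treat the exact coefficients as approximate):

```
maintainability = 171
                  - (3.42 * ln(effort))
                  - (0.23 * ln(cyclomatic complexity))
                  - (16.2 * ln(logical SLOC))
```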
Suffice it to say, it gives a more accurate picture of code simplicity and maintainability.
It all started with this one big, linear, 1057 LOC JS file. Kitchensink never had any complex DOM/AJAX interactions or animations, so I never even used jQuery in there. Just a plain vanilla JS.
I started by porting all existing DOM interactions to jQuery. I wasn’t expecting any great improvements in code; jQuery can’t help with architectural changes. But it did remove some repetitive code - class handling, element removal, event handling, etc.
It would have also provided good foundation for further improvements, in case I decided to go with Backbone or any other higher-level tools.
Notice how it shaved off ~50 lines of code and even improved complexity from 132 to 116 (mainly removing some DOM handling conditions: think toggleClass
, etc.).
With easy stuff out of the way, I tried to figure out what to do next. I’ve used Backbone in the past, and I’ve been meaning to try out Angular and/or Ember — 2 of the most popular higher-level solutions. This would be a perfect way to learn them.
Still unsure of how to proceed, I decided to do something very simple. Instead of figuring out which library is the be-all-end-all, I went on to fix the most obvious issue — tight coupling of view logic and all the other (let’s call it “business”) logic.
I broke kitchensink.js into 3 files: model.js, view.js, and utils.js.
Utils were just some language-level methods used by the app (like getRandomColor
or supportsColorpicker
). View was to contain purely UI code, and it would reach out to model.js for any state or actions on that state. I called it model.js but really it was a combination of model and controller. The bottom line was that it had nothing to do with the presentation logic.
So this kind of mess (in previous code):
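Not the original code — a made-up illustration of the kind of tangle being described, with DOM handling, canvas access and "business" decisions all in one place:

```js
document.getElementById('lock-horizontally').onclick = function() {
  var activeObject = canvas.getActiveObject();
  if (!activeObject) return;
  activeObject.lockMovementX = !activeObject.lockMovementX;
  this.innerHTML = activeObject.lockMovementX
    ? 'Unlock horizontal movement'
    : 'Lock horizontal movement';
};
```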
was now separated into view concern:
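Something like this sketch (view.js), with only presentation logic, delegating everything else to the model:

```js
$('#lock-horizontally').on('click', function() {
  kitchensink.toggleHorizontalLock();
  $(this).text(kitchensink.getHorizontalLock()
    ? 'Unlock horizontal movement'
    : 'Lock horizontal movement');
});
```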
and model/controller concern:
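And something like this sketch (model.js), holding state and actions with nothing about the DOM:

```js
var kitchensink = {
  getHorizontalLock: function() {
    var object = canvas.getActiveObject();
    return !!(object && object.lockMovementX);
  },
  toggleHorizontalLock: function() {
    var object = canvas.getActiveObject();
    if (object) {
      object.lockMovementX = !object.lockMovementX;
    }
  }
};
```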
Separating presentation logic from everything else had a dramatic effect on the state of the app.
Yes, there was an inevitable increase in lines of code (714 -> 829) but both complexity and maintainability skyrocketed. Overall CC went from 116 to 98, but more importantly, it was significantly less per-file. The biggest chunk was now in the model (cc=70) and the view became thin and easy to follow (cc=26).
Maintainability rose from 104 to ~125.
Looking at the code revealed a few more possible optimizations. One of them was to use convention when enabling/disabling buttons representing canvas object actions. Instead of keeping references to them in the code, then disabling/enabling them through those references, I gave them all a specific class name ("btn-object-action") right in the markup, then toggled their state with the help of jQuery.
The changes weren’t very impressive, but complexity of the view went down from 26 to 21, SLOC went from 829 to 805. Not bad.
At this point, the app complexity was concentrated in model/controller file. There wasn’t much I could do about it since it was all pure “business” logic: creating objects, manipulating objects, keeping their state, etc.
However, there was still some room for improvement in the view corner.
I decided to start with Backbone. I only needed a fraction of its capabilities, but Backbone is relatively “lean” and provides a nice, declarative abstraction of certain common view operations, such as event handling. Changing plain kitchensink view object to Backbone.View
would allow me to take advantage of that.
Instead of assigning event handlers manually, there was now this:
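A hypothetical sketch of such a declarative events hash (handler names are made up; the selectors are only representative):

```js
var KitchensinkView = Backbone.View.extend({
  el: document.body,
  events: {
    'click #lock-horizontally': 'toggleHorizontalLock',
    'change #opacity': 'changeOpacity'
  },
  toggleHorizontalLock: function() {
    kitchensink.toggleHorizontalLock();
  },
  changeOpacity: function(e) {
    kitchensink.setOpacity(e.target.value);
  }
});
```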
At the same time, model-controller was now implemented as Backbone.Model
and was letting views know when to update themselves. This was an important change towards a different architecture. View was now observing model-controller for changes, and re-rendering itself accordingly. And model-controller fired change event whenever something would change on canvas itself.
In model-controller:
Remember I mentioned abstract canvas state and interactions?
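A sketch of that bridge, assuming `kitchensink` is the Backbone.Model-based model-controller and `canvas` is the fabric.Canvas instance:

```js
['object:selected', 'object:added', 'selection:cleared'].forEach(function(eventName) {
  canvas.on(eventName, function() {
    kitchensink.trigger('change');
  });
});
```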
Notice the bridge between canvas and model/controller object: “object:selected”, “object:added”, “selection:cleared” canvas/Fabric events were all forwarded as controller’s “change” one.
In view:
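Again a sketch rather than the original code — the view simply re-renders whenever the model-controller fires "change":

```js
var ObjectControlsView = Backbone.View.extend({
  initialize: function() {
    this.listenTo(kitchensink, 'change', this.render);
  },
  render: function() {
    var hasActive = !!kitchensink.getSelected();
    this.$('.btn-object-action').prop('disabled', !hasActive);
    return this;
  }
});
```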
As an example, now when a user selected an object on the canvas, the model-controller would trigger a change event and the view would re-render itself. Then during render, the view would ask the model-controller — is there an active object? — and depending on the answer, render the corresponding buttons in either enabled or disabled state, with one text or the other.
This felt like a good improvement in the right direction, architecture-wise.
Views became more declarative and easier to follow. SLOC went down a bit (787 -> 772), and view complexity was now even less (from 21 to 16). Unfortunately, maintainability of model went slightly down.
Backbone made views more declarative, but there was still some repetition I wasn’t happy with:
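A sketch of that kind of repetition (the handler and text are made up) — note the same selector in the events hash and again in render():

```js
var ObjectControlsView = Backbone.View.extend({
  events: {
    'click #lock-horizontally': function() {
      kitchensink.toggleHorizontalLock();
    }
  },
  render: function() {
    this.$('#lock-horizontally').text(
      kitchensink.getHorizontalLock() ? 'Unlock horizontal movement' : 'Lock horizontal movement');
    return this;
  }
});
```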
Notice how “#lock-horizontally” selector is repeated twice. This is bad both for maintainance (my main concern) and performance. In the past, I’ve used a tiny backbone.unclassified extension to alleviate this problem, and so I went with it again:
Notice how we create an “identifier” for an element in ui
“map”, and then use that identifier in both events “map” and in the rendering code.
This made views even more declarative, albeit at the expense of slightly more cruft overall. Complexity and maintainability stayed more or less the same.
The KitchensinkView was already clean and beautiful. Half of it was simple declarative one-liners (clicked this button? call that model method) and the rest was pretty simple and linear rendering logic.
But there was something else.
Entire view logic/rendering of an app was stuffed in one file. The declarative “events” hash, for example, was spanning ~200 lines and was becoming daunting to look through. More importantly, this one file included multiple concerns: object controls section, section for adding objects, global canvas controls section, text handling section, and so on. Yes, these are all view concerns but they’re also logically distinct view concerns.
What to do? Break it into multiple views!
The code size obviously increased once again, but look what happened with views maintainability. It went from 132 to 145! A significant and expected improvement.
Of course I didn’t need complexity report to tell me that things got better. I was now looking at 5 beautiful concise view files, each with its own rendering logic and behavior. As a nice side effect, some of the views (e.g. AddCommandsView
) became entirely declarative.
At this point, I was fully satisfied with the way things turned out.
Backbone (with unclassified extension) and multiple views made for a pretty clean app. Backbone felt almost perfect here as there was none of the more complicated logic of nested views/collections, animations/transition, routing, etc. I knew that adding new functionality or changing existing one would be straightforward; multiple views meant easy scaling and easy addition of new ones.
What could possibly be better…
Determined to continue further and see where it takes me, I took another look at the views:
This is ObjectControlsView
and I’m only showing 2 functionalities here: lock toggling button and opacity slider. Notice how both of their behavior have something in common. There’s event (“click” or “change”) that maps to a model action, and then there’s rendering logic — updating button text or updating slider value.
Don’t you find the cruft inside render
just a bit too repetitive and unnecessary? Wouldn’t it be great if we could just update “opacity” or toggle lock value on a model, not caring about rendering of corresponding control? So that opacity slider automatically knew to update itself, once opacity on a model changed. Ditto for toggling button.
Did someone say… data binding?
Of course! I just had to see what introducing data-binding would do to an app. Unfortunately, Backbone doesn’t have it built-in, unlike other MV* solutions — Knockout, Angular, Ember, etc.
I wanted to stick to Backbone for now, instead of trying something completely different, which meant using an addon of some sort.
I tried backbone.stickit first, but couldn’t get it to work at all with kitchensink’s model/controller methods.
You see, binding view to a regular Backbone model is easy with “stickit”. Just define a hash with selector ↔ attribute mapping:
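For a plain model it would look something like this (attribute names made up; `this.stickit()` wires the declared bindings up):

```js
var ObjectControlsView = Backbone.View.extend({
  bindings: {
    '#opacity': 'opacity',
    '#angle': 'angle'
  },
  render: function() {
    this.stickit();
    return this;
  }
});
```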
Unfortunately, our model is <canvas>-based and all the state needs to be set & retrieved via a proxy. This means using methods, not properties.
We can’t just map opacity slider to “opacity” attribute on a model. We need to map it to canvas.getActiveObject().opacity
(possibly checking that getActiveObject()
returns object in the first place) via custom getters/setters.
Next there was Epoxy.js, which defines bindings like so:
Again, easy with plain attributes. Not so much with methods. I tried to implement it via computed properties but without much success.
Next there was Rivets.js and, as I was expecting another painful "adaptation", it surprisingly just worked out of the box!
Rivets turned out to be pretty low-level, but also very flexible. Docs quickly revealed how to use methods instead of properties. The binding could be initialized like so:
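Something like this (assuming `kitchensink` is the model/controller object described earlier):

```js
rivets.bind(document.body, { app: kitchensink });
```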
And the markup would then be parsed for any “rv-…” attributes (prefix could be changed). For example:
The great thing was that I could just write app.getBgColor
and it would call that method on kitchensink
since that’s what was passed to rivets.bind()
as an app
. No limitations of only working with Backbone.Model
attributes. While this worked for one-way binding, with 2-way binding (where view also needs to update the model), I would need to write custom adapter…
It sounded daunting but turned out rather straightforward:
Now, I could add this in markup (notice the use of special ^
separator, instead of default .
):
..and it would use a nice convention of calling getCanvasBgColor
as a getter and setCanvasBgColor
as a setter, when changing the colorpicker value.
There was no longer a need for manual (even if declarative) event listeners:
I didn’t exactly like this whole setup.
I’d prefer to have bindings right in the code, to have a “birds-view” understanding of which elements map to which behavior. It would also be easier and more understandable to map multiple elements. If I wanted a set of buttons to toggle their enabled/disabled state according to certain state of canvas — and I did want that — I couldn’t just do something like:
I had to write custom binder instead, and that’s certainly more obscure and harder to understand. Speaking of custom binders…
Rivets makes it easy to create them. Binders are those "rv-…" directives we saw earlier. There are a few built-in ones — "rv-value", "rv-checked", "rv-on-click" — and it's easy to define your own.
In order to toggle buttons state, I wrote this simple 1-way binder:
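A sketch of such a one-way binder (Rivets one-way binders are plain functions receiving the element and the bound value; jQuery assumed):

```js
rivets.binders.enable = function(el, value) {
  // enable/disable all descendant buttons depending on the bound boolean
  $(el).find('button').prop('disabled', !value);
};
```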
It was now possible to use “rv-enable” on a parent element to enable or disable descendant buttons:
But imagine reading unknown markup like this, trying to understand which directive controls what, and how far it spans…
Another binder I added was “rv-val”, as an alternative to “rv-value” (with the exception of observing “keyup” rather than “change” event on an element):
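A sketch of a two-way binder along those lines — the exact binder interface varies between Rivets versions, so treat this as an approximation:

```js
rivets.binders.val = {
  publishes: true,
  bind: function(el) {
    $(el).on('keyup', this.publish);
  },
  unbind: function(el) {
    $(el).off('keyup', this.publish);
  },
  // reuse the built-in value routine for rendering, as mentioned below
  routine: rivets.binders.value.routine
};
```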
You can see that adding binders is simple, they’re easy to read, and you can even reuse existing behavior (rivets.binders.value.routine
in this case).
Finally, there’s a convenient support for formatting, which was just perfect for changing toggleable text on some elements:
Notice how “rv-text” contents include | toggle smth smth
. This is a custom formatter, defined like this:
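Roughly like this — a formatter receives the bound value plus any arguments, and here simply picks one of two outputs:

```js
rivets.formatters.toggle = function(value, onTruthy, onFalsy) {
  return value ? onTruthy : onFalsy;
};
```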
The button text was now determined according to app^horizontalLock
(which desugars to app.getHorizontalLock()
) and when passed to toggle
formatter, would come out either as one or the other. Unfortunately, formatter falls a bit short; it seems that its values can’t be strings, which makes things much less convenient.
Unlike with behavior, keeping alternative UI text directly in HTML felt perfect. Text stays where text should be — in markup; it makes localization easier; it’s easy to follow.
On the other hand, I didn’t like keeping model/controller actions right in the markup:
It’s especially bad when some of the view behavior is somewhere in a JS-based view/controller, and some — in the markup. YMMV.
So what happened to the code?
After moving app logic from JS views to HTML (via Rivets’ “rv-“ attributes), all that was left from the views were these 3 lines:
Amazing, right? Or not so much?
Yes, we practically eliminated JS-based view, moving logic/behavior to markup and/or model-controller. But let’s look at the stats:
There was now additional (32 SLOC) data_binding_adapter.js file which included all the customizations and additions for Rivets.js. Still, there was a dramatic reduction of SLOC (830 -> 715); expected, since a lot of logic was moved to the markup. View’s maintainability was still ~145 but model-controller surprisingly went from 116 to 125! Even though more code moved to model-controller, that code was now simpler — usually a pair of getter/setter’s for particular state.
So how does this compare to the very first step — the monolithic spaghetti code?
Improvement across the board. And what about HTML, where so much logic was moved to?
Ok, 100 lines longer, and only 3KB heavier. Doesn’t seem too bad.
But was this really an improvement? All the HTML declarations and all the abstraction felt like 1 step forward, 2 steps back. It seemed harder to understand and likely harder to maintain. While complexity tool showed improvement, it was only improvement on JS side, and of course it couldn’t give holistic analysis.
I wanted to take a step back and try something else.
Aside from markup contamination, the problem was model-controller becoming too fat; that one file that was still sitting at 70 complexity.
What if I kept Rivets.js for now, but broke the model-controller into multiple controllers, each for a distinct behavior? A very thin model would then serve as a proxy between <canvas> and controller actions. After some experimentation and pondering on the best way to organize something like that, I ended up with this:
The model was now <canvas> itself! There were no JS-based views, and all the logic was in 5 distinct controllers. But how was this possible? Shouldn’t canvas actions go through some kind of proxy to normalize all the canvas.getActiveObject().{get|set}Something()
voodoo? Yes, it was still needed, but all the proxying was now happening in controller itself.
I created CanvasController
, inheriting from Backbone.Model
(to have event managing), and gave it very minimal generic behavior (getStyle
, setStyle
, triggerChange
). Those methods are what served as proxy between canvas and controllers. Controllers implemented specific getters/setters via those methods (inherited from a parent CanvasController
“class”).
How did this all look complexity-wise?
SLOC stayed the same but what happened to complexity? Not only did it go down to a total of 68, the max complexity per file was now only 18! There was no longer a big business-logic file with cc=70, but small controller files with cc<=20. Definitely an improvement.
Unfortunately, maintainability went slightly down (to 128), likely due to all the additional cruft.
Even though this was the best case complexity-wise, I still wasn’t too happy with this solution. There were still bindings in HTML and canvas controllers felt a bit too overly abstracted (i.e. it would take some time to understand how app works, how to change or extend it).
Multiple controllers reminded me of what I’ve seen in Angular tutorials. It seemed natural to try and see how Angular compares to the last (Backbone + Rivets) solution, since it looked so similar.
Angular learning curve is definitely steeper. It took me ~2 days to understand and get comfortable with Rivets data-binding. It took ~2 weeks to understand and get comfortable with Angular data-binding (watches, digest cycle, directives, etc.).
Overall, implementing kitchensink via Angular felt very similar to Backbone + Rivets combo. But, as with everything, there were pros and cons.
In Angular, there’s no need to Function#bind
methods to a model (when calling them from within attribute values). For example, rv-on-click="app.foo"
calls app.foo()
in context of element, whereas Angular’s ng-click="foo()"
calls foo in context of $scope. This proves to be more convenient.
Using the same example of rv-on-click="app.foo"
vs. ng-click="foo()"
, the parentheses after the name make it clearer that it's a function call.
Function calls are also more concise. For example, rv-show="app.getSelected"
vs. ng-show="getSelected()"
. There’s no need to specify app
since getSelected
is looked up automatically on $scope
.
Mostly syntactic preference, but
<button>{{ ... }}</button>
(in Angular) is easier to read/understand than <button rv-text></button>
.
The biggest drawback was getting started and understanding how to plug kitchensink's unique "model" into Angular. I was also unlucky to have run into an issue with {{ … }} conflicting with Jekyll’s {{ … }}. Took quite some time to figure out why in the world Angular was not "initializing"…
It’s a bit annoying that Angular’s methods start with $
and “interfere” with a common convention of referencing jQuery objects via $xxx
. Just a minor additional cognitive burden if you’re used to that notation.
There were some minor things like Angular’s $element.find()
limiting lookup by tagName even when jQuery was available. Weird.
Most importantly, custom 2-way binding was non-trivial, unlike with Rivets, whose documentation made it very clear. With Angular, it's pretty much impossible to use custom accessors in attribute values. We can't do that elegant Rivets trick of app^selected
desugaring to app.getSelected()
and app.setSelected()
. Of course Angular’s directives kind of solve this, but it’s not the same.
Why? Because in Rivets, you can plug this custom adapter anywhere, including Rivet’s “native” binders!
Take this radio group, use built-in rv-checked
attribute, and it just works:
This cannot be done in Angular, and so we need to implement our own "radio group" handling via a directive. Directives are somewhat similar to Rivets' ones, although of course much more powerful.
To implement accessors, I created bindValueTo
directive, to be used like this:
Now, slider would call getFontSize()
to retrieve the value, and setFontSize(value)
to set it. Once I understood directives, it was fairly straightforward:
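A hypothetical sketch of such a directive, not the original implementation. It assumes the attribute value names the accessor pair (e.g. bind-value-to="fontSize" mapping to $scope.getFontSize()/setFontSize()) and that the module name is "kitchensink":

```js
angular.module('kitchensink').directive('bindValueTo', function() {
  return {
    restrict: 'A',
    link: function($scope, $element, attrs) {
      var name = attrs.bindValueTo;
      var accessor = name.charAt(0).toUpperCase() + name.slice(1);

      // model -> view: re-render the control whenever the getter's value changes
      $scope.$watch(function() {
        return $scope['get' + accessor]();
      }, function(value) {
        $element.val(value);
      });

      // view -> model: push changes back through the setter
      var eventName = $element[0].type === 'radio' ? 'click' : 'change';
      $element.on(eventName, function() {
        var value = $element.val();
        $scope.$apply(function() {
          $scope['set' + accessor](value);
        });
      });
    }
  };
});
```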
Notice the additional $element[0].type === 'radio'
branch for that radio group case I mentioned earlier.
When it comes to Angular, I feel it’s important to strike a balance between abstraction & clarity. Take this toggle button, for example. A common chunk of functionality in kitchensink, used a dozen times.
Now, this is a fairly understandable piece of markup — putting the issue of mixed content/behavior aside — accessor methods toggling the state, element class and text updating accordingly. Yet, this is a common functionality. So to avoid repetition, it could be abstracted away into its own directive.
Imagine it being written like this:
Certainly cleaner and easier to read, but is it easier to understand? As with any abstraction, there’s now an extra level underneath, so it’s not immediately clear what’s going on.
So how did porting to Angular affect complexity/maintainability scores?
Comparing to previous Backbone/Rivets combo, SLOC went from 715 to 660. Complexity — from 68 to 65, and maintainability — from 128 to 126. Interesting.
The reduction in SLOC was expected, knowing Angular’s nature of controller “entrees” right in markup. Complexity and maintainability, on the other hand, practically stayed the same.
If you’re wondering how this refactoring affected size of the main HTML file, the picture is very simple and straightforward.
As expected, it’s been continuously growing little by little, with the spike from markup-based solutions like Rivets and Angular. Curiously, while Angular resulted in higher SLOC, it was actually less KB comparing to Rivets.
Unfortunately, other MV* libraries (Ember, Knockout, etc.) didn't make it into my exploration. I was constrained on time, and I had already arrived at a much more maintainable solution. I do hope to try something else in the near future. It'll be interesting to see how yet another concept ties into the app. Stay tuned for part 2.
My final conclusion was that Backbone+Rivets and Angular provided relatively similar benefits, with almost exact complexity/maintainability scores, and only different distribution of logic (attributes in markup vs. methods in JS “controller”). The pros/cons I mentioned earlier are what constituted the main difference.
Path of exploration: Vanilla JS (initial mess) -> jQuery (cleaner) -> UI/business logic separation (much cleaner) -> Backbone (slightly better) -> Backbone.unclassified (slightly better) -> Backbone & multiple views (significantly better) -> Rivets (better or worse?) -> Multiple controllers (possibly better) -> Angular.js (better or same?)
MVC framework is not always necessary when refactoring or creating small/medium-sized client-side app. Separating presentation logic from “business” logic is often enough to produce clean and maintainable architecture.
Backbone is great but almost always comes out a bit too low-level. Backbone.unclassified is a great addition to remove some repetition in the views.
Rivets.js is a nice library-agnostic data-binding tool, that could be used on top of Backbone to remove lots of repetitive view logic.
Complexity tools like complexity-report
or JSHint
can aid with refactoring but shouldn’t be followed blindly. Use common sense and time-tested principles (SRP, DRY, separate presentation logic) when refactoring/designing an app.
Don’t forget to look at a big picture. When the size of JS goes down, what happens to the markup? It could be that you’re just shifting things around without any significant improvements.
4 years ago I wrote about and released HTMLMinifier. Back then, there were almost no tools for proper HTML minification, unless you count things like “Absolute HTML Compressor” for Windows 95/98/XP/2000 or the Java-based HTMLCompressor.
I haven’t been working on it all that much, but occasionally would add a feature, fix a bug, add some tests, refactor, or pull someone’s generous contribution.
Fast forward to these days, and HTMLMinifier is no longer a simple experimental piece of Javascript. With over 400 tests, running on Node.js and packaged on NPM (with 120+ dependents), with a CLI, grunt/gulp modules, a benchmarking suite, and a number of improvements over the years, it has become a rather viable tool for anyone looking to squeeze the most out of front-end performance.
Seeing how the minifier has gained quite a few new additions over the years, I thought I’d give a quick rundown of what has changed and what it’s now capable of.
We still rely on John Resig’s HTML parser but it is now heavily tweaked to conform to HTML5 and to provide more flexible parsing.
A common problem was the inability to “properly” recognize block elements within inline ones (e.g. a <div> inside an <a>).
This was not allowed in HTML4 but is now OK in HTML5.
Another issue was with custom elements (e.g. `<my-component>test</my-component>`). While technically not part of HTML5, browsers do tolerate such cases, and so does the minifier.
Two other commonly requested features were keeping the closing slash on tags and case-sensitivity. Both of these are useful when minifying SVG (or XHTML) documents. Having an HTML4 parser at heart, and considering that in 99% of cases trailing slashes serve no purpose, the minifier would always drop them from the output. It still does, but you can now turn this behavior off.
Ditto for case-sensitivity — there’s an option for those looking to have finer control.
With the rise of client-side MVC frameworks, HTML comments became more than just comments. In Knockout, for example, there’s a thing called containerless control flow syntax, where you can have something like this:
It’s useful to be able to ignore such comments while removing “regular” ones, so the minifier now allows for exactly that:
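For illustration, here is roughly how that looks with the Node API (a minimal sketch: the exact regexes below are my own guess at a Knockout-friendly setup; `ignoreCustomComments` takes an array of patterns for comments that should survive):

```js
var minify = require('html-minifier').minify;

var input =
  '<ul>' +
    '<!-- ko foreach: items -->' +
      '<li data-bind="text: name"></li>' +
    '<!-- /ko -->' +
  '</ul>';

minify(input, {
  removeComments: true,
  // keep Knockout's containerless control-flow comments
  ignoreCustomComments: [/^\s*ko\b/, /^\s*\/ko\s*$/]
});
```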
Relatedly, we’ve also added support for generic ignored comments — those starting with `<!--!`. You might recognize this pattern from the de-facto standard among Javascript libraries — comments starting with `/*!` are ignored by minifiers and are often used for licenses.
If you’d like to exclude an entire chunk of markup from minification, you can now simply wrap it with `<!-- htmlmin:ignore -->` and it’ll stay untouched.
Finally, we now ignore anything surrounded by `<%...%>` and `<?...?>`, which is often useful when working with server-side templates, etc.
Another twist on regular HTML we see in client-side MVC frameworks is non-standard attribute names, values, and everything in between.
Example of Handlebars’ dynamic attributes:
Most HTML4/5 parsers will fail here, choking on `{` in `{{#if` as an invalid attribute name character.
We worked around this by adding support for the `customAttrSurround` option, in which you can specify an array of regexes to match anything surrounding attributes:
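A sketch of how this might be used; the option expects an array of opening/closing regex pairs, and the patterns below are only my guess at what a Handlebars-flavored setup could look like:

```js
var minify = require('html-minifier').minify;

minify('<input {{#if checked}}checked="checked"{{/if}} type="checkbox">', {
  collapseWhitespace: true,
  customAttrSurround: [
    // Handlebars block helpers wrapping an entire attribute
    [/\{\{#if\s+[^}]+\}\}/, /\{\{\/if\}\}/],
    [/\{\{#unless\s+[^}]+\}\}/, /\{\{\/unless\}\}/]
  ]
});
```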
But wait, there’s more! Attribute names are not the only offenders.
Here’s an example from Polymer; notice `?=` as the attribute assignment characters:
Only a few days ago we added support for the `customAttrAssign` option, similar to `customAttrSurround` (thanks Duncan Beevers!), which you can call like so:
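Something along these lines (the Polymer-style markup is only an illustration):

```js
var minify = require('html-minifier').minify;

minify('<div flex?="{{mode !== \'cover\'}}"></div>', {
  collapseWhitespace: true,
  // treat "?=" as a valid attribute assignment sequence
  customAttrAssign: [/\?=/]
});
```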
Continuing on the topic of MVC frameworks, we’ve also added support for an often-used pattern of scripts-as-templates:
AngularJS:
Ember.js
There’s no reason not to minify the contents of such scripts, and you can now do this via the `processScripts` directive:
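A rough sketch of what that looks like (the MIME types are whatever your templates actually use; the ones below are just common examples):

```js
var minify = require('html-minifier').minify;

var html =
  '<script type="text/ng-template" id="item.html">' +
    '<li>  {{ item.name }}  </li>' +
  '</script>';

minify(html, {
  collapseWhitespace: true,
  removeComments: true,
  // also run the minifier over markup inside script-based templates
  processScripts: ['text/ng-template', 'text/x-handlebars-template']
});
```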
Now, what about “regular” scripts?
We decided to go a step further, providing a way to minify the contents of <script> elements and event handler attributes (“onclick”, “onload”, etc.). This is delegated to the excellent UglifyJS2.
CSS isn’t left behind either; we can now pass contents of style elements and style attributes through clean-css, which happens to be the best CSS compressor at the moment.
Both of these features are optional.
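In option form (a minimal sketch; double-check the option names against the current docs):

```js
var minify = require('html-minifier').minify;

var html = '<p style="color: #ff0000;" onclick="alert( 1 + 1 );">hi</p>';

minify(html, {
  minifyJS: true,  // <script> contents and on* attributes go through UglifyJS2
  minifyCSS: true  // <style> contents and style="" attributes go through clean-css
});
```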
If you’d like to play it safe and make the minifier always leave at least 1 whitespace character where it would otherwise completely remove it, there’s now an option for that — `conservativeCollapse`.
This could come in useful if your page layout/rendering depends on whitespace, such as in this example:
The minifier doesn’t know that the element preceding the input is rendered as inline-block; it doesn’t know that the whitespace around it is significant. Removing that whitespace would render the checkbox too close (squished) to its “label”.
This is when “conservativeCollapse” (and that extra space) comes in useful.
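In option form, that is just one extra flag on top of the usual whitespace collapsing (a minimal sketch):

```js
var minify = require('html-minifier').minify;

minify('<span class="label">Remember me</span> <input type="checkbox">', {
  collapseWhitespace: true,
  // leave at least one space where whitespace would otherwise be removed,
  // so layouts that depend on it (like the checkbox above) don't break
  conservativeCollapse: true
});
```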
Another recently-introduced customization is maximum line length. An interesting use case is that some email servers automatically add a new line after 1000 characters, which breaks (minified) HTML. You can now specify line length to add newlines at valid breakpoints.
We also have a benchmark suite now that goes over a number of “source” files (front pages of popular websites), minifies them, then reports size comparison and time spent on minification.
How does HTMLMinifier compare [1] to the other solutions out there (Will Peavy’s online minifier and a Java-based HTMLCompressor)?
Site | Original size (KB) | HTMLMinifier (KB) | Will Peavy (KB) | htmlcompressor.com (KB) |
---|---|---|---|---|
HTMLMinifier page | 48.8 | 37.3 | 43.3 | 41.9 |
ES6 table | 117.9 | 79.9 | 92 | 91.9 |
MSN | 156.6 | 133 | 145 | 138.3 |
Stackoverflow | 200.4 | 159.5 | 168.3 | 163.3 |
Amazon | 245.9 | 206.3 | 225 | 218.5 |
Wikipedia | 401.4 | 380.6 | 396.3 | n/a |
Eloquent Javascript | 869.5 | 830 | 872 | n/a |
Not too bad!
Notice the remarkable savings (~40KB) on large static files such as the one-page Eloquent Javascript.
Minifier has come a long way, but there’s always room for improvement.
There are a few more bugs to squash and a few features to add. I also believe there are more optimizations we could perform to get the best savings — whether it’s reordering attributes to aid gzip compression or more aggressive content removal (spaces, attributes, values, etc.).
One concern I have is how long it takes to minify large (500KB+) files. While it’s unlikely that someone uses the minifier in real time (rather than as a one-time compilation step), it’s still unacceptable for minification to take more than 1-2 minutes. This is something we could try fixing in the future.
We can also monitor performance stats — both size (as well as gzipped?) and time taken — on each commit, to get a good picture of whether things change for the better or worse.
As always, I welcome you to try minifier in your projects, report any bugs/suggestions, and help with whatever you can. Huge thanks goes to all the contributors without whom we wouldn’t have come this far!
[1] Benchmarks performed on OS X 10.9.4 (2.3GHz Core i7).
Choosing a good piece of Javascript is hard.
Every time I come across a newly-released, shiny plugin or library, I wonder what’s going on underneath. Yes, it looks pretty and convenient but what does underlying code look like? Does it browser sniff or extend the DOM? Does it pollute global scope? What about compatibility with older browsers; could it be that it utilizes, say, ES5 getters/setters making it unusable in IE<9?
I always wished there was a way to quickly check how well a certain script behaves. Not like we did back in the days.
The best tool for a code quality check like this is undoubtedly JSHint [1]. It can answer most of those questions and many more. Unfortunately, the “many more” part is a bit of a problem. Plugging script code into jshint.com usually yields tons of issues, not just with browser compatibility or global variables but also code style. These checks are a must for your own scripts, but for 3rd party code, I don’t really care about missing semicolons (despite my love of them), whether constructors begin with uppercase, or if assignments happen in conditional statements. I only wish to know how well a script behaves on the outside. Now, sloppy code style can certainly be an indication of bad overall quality. But more often than not it’s a preference, not a problem.
A few days ago, I decided to hack something together; something simple that would allow me to quickly plug in a script and see the big picture.
So I made JSCritic.
Plug in script code and it answers some of the more pressing questions.
I tried using Esprima at first, but quickly realized that most of the checks I care about are already in JSHint. So why not piggyback on that? JSCritic turned out to be a simple wrapper on top of it. I originally wrote it in Node, to be able to pass it a filename and quickly see the results, then ported it to run in a browser.
You can still run it in both.
Another thing I wanted to see is minified script size. Some plugins have minified versions, some don’t, some use better minifiers, some worse. I decided to minify content through UglifyJS — a de facto standard of minification at the moment — to get an objective overview of code size. Unfortunately, the browser version of UglifyJS seems to choke more often than the Node one, so it might be safer to use the latter.
I have to say that JSCritic is more of a prototype at the moment. Static analysis has its limitations, and so does JSHint. I haven’t had much time to polish it, but I’m hoping to improve it in the near future or with the help of ever-awesome contributors. One thing to emphasize is that for best results you should use non-minified source code (you’ll see exactly why below)!
If you want to know more about tests, implementation details, and drawbacks, read on. Otherwise, hope you find it as useful as I do.
Let’s first take a look at global variables detection. Unfortunately, it seems to be very simplistic in JSHint, failing to catch cases other than plain variable/function declarations in top level code.
Here it catches `foo`, `bar`, and `qux` as expected, but fails with all of these:
Granted, detecting globals via static analysis is hard. A more robust solution would be to actually execute the code and compare the global object “signature”, just like I did in the detect-global bookmarklet back in 2009 (based on a script by Remy Sharp). Unfortunately, executing a script is also not always easy, and global properties could be exported from various places (e.g. methods that need to be called explicitly); we have no idea which places those are.
Still, JSHint catches a good number of globals and accidental leaks like these:
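For instance, leaks of this sort (a contrived illustration of the kind of thing the underlying JSHint analysis reports):

```js
function init() {
  // "var" is missing, so `counter` silently becomes a global
  counter = 0;

  // same story with the loop variable `i`
  for (i = 0; i < 10; i++) {
    counter += i;
  }
}
```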
It gives a decent overview, but you should still look through the variables carefully, as some of them might be false positives. I’m hoping this will be made more robust in future JSHint versions (or we could try using a hybrid detection method — both via static analysis and through the global object signature).
Detecting native object extensions has a few limitations as well. While it catches both Array and String extensions in an example like this:
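(The snippet below is my own contrived illustration of the kind of direct prototype extension that gets flagged.)

```js
// extending natives directly: the sort of thing the check picks up
Array.prototype.last = function() {
  return this[this.length - 1];
};

String.prototype.capitalize = function() {
  return this.charAt(0).toUpperCase() + this.slice(1);
};
```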
..it fails with all of these:
As you can see, it’s also simplistic and could have false negatives. There’s an open JSHint issue for this.
Just like with other checks, there are false positives and false negatives. Here are some of them, just to give an idea of what to expect:
and with `document.write`:
I included 3 checks for browser/engine compatibility — Mozilla-only extensions (let expressions, expression closures, multiple catch blocks, etc.), things IE chokes on (e.g. extra comma), and ES6 additions (array comprehensions, generators, imports, etc.). All of these things could affect cross-browser support.
To detect browser sniffing, we first check statically for an occurrence of the `navigator` implied global (via JSHint), then check the source for an occurrence of `navigator.userAgent`. This covers a lot of cases, but obviously won’t catch any obscurities, so be careful. To make things easier, a chunk of code surrounding `navigator.userAgent` is pasted for inspection purposes. You can quickly check what it’s there for (is it for non-critical enhancement purposes, or could it cause subtle bugs and/or full breakage?).
Finally, I included unused variables check from JSHint. While not exactly an indication of external script behavior, seeing lots of those could be an indication of sloppy (and potentially buggy) code. I put it all the way at the end, as this is the least important check.
So there it is. The set of rules can definitely be made larger (does it use ES5 features? does it use browser-sniffing-like inference? does it extend the DOM?) and more accurate in the future. For now you can use JSCritic as a quick first look under the hood.
[1] and perhaps ESLint, but I haven't had a chance to look into it.
It’s always fun to see something described as “magic” in Javascript world.
One such example I came across recently was AngularJS’s dependency injection mechanism. I wasn’t familiar with the concept, but seeing it in practice, it looked clever and convenient. Not very magical, though.
What is it about? In short: defining required “modules” via function parameters. Like so:
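A minimal sketch of the pattern (the module and controller names here are made up):

```js
angular.module('demoApp', [])
  .controller('MainCtrl', function($scope, $timeout, $http) {
    // Angular looks at this function's parameter names and injects
    // the matching services: $scope, $timeout, $http
    $timeout(function() {
      $http.get('/items').then(function(response) {
        $scope.items = response.data;
      });
    }, 100);
  });
```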
Notice the `$scope`, `$timeout`, `$http` identifiers.
Aha. So instead of passing them as strings or vars or whatever, they’re defined as part of the source. And of course to “read” the source there could only be one thing involved…
The kind that we used in Prototype.js to implement $super back in 2007? Yep, that one. Later making its way to Resig’s simple inheritance (used in a safe fashion) and other places.
Seeing a modern framework like Angular use function decompilation surprised me. Even though it wasn’t something Angular relied on exclusively, this black magic has been somewhat frowned upon for years. I wrote about some of the problems associated with it back in 2009.
Something so inherently non-standard and so varying among implementations could only be compared to user agent sniffing.
But is it, really? Could it be that things are not nearly as bad these days? I last investigated this 4 years ago — a significant chunk of time. Could it be that implementations have come to some kind of unification when it comes to function string representation? Am I completely outdated?
Curious, I decided to take a look at the current state of affairs. Could function decompilation be relied on right now? What exactly could we rely on?
But first..
To put it simply, function decompilation is the process of accessing function code as a string (then parsing its body or extracting arguments or whatever).
In Javascript, this is done via the `toString()` of function objects, so `fn.toString()`, `String(fn)`, `fn + ''`, or anything else that delegates to `Function.prototype.toString`.
The reason this is deemed unreliable in Javascript is due to its non-standard nature. A famous quote from ES5 spec states:
15.3.4.2 Function.prototype.toString( )
An implementation-dependent representation of the function is returned. This representation has the syntax of a FunctionDeclaration. Note in particular that the use and placement of white space, line terminators, and semicolons within the representation String is implementation-dependent.
Of course, when something is implementation-dependent, it’s bound to deviate in all kinds of ways imaginable.
..and it does. You would think that a function like this:
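(The function below is just a made-up sample.)

```js
function hi(name) {
  // greet politely
  return 'Hello, ' + name + '!';
}
```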
.. would be serialized to a string like this:
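That is, something along these lines:

```js
String(hi);
// "function hi(name) {
//   // greet politely
//   return 'Hello, ' + name + '!';
// }"
```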
And it almost does. Except when some engines omit newlines. And others omit comments. And others omit “dead code”. And others include comments around (!) the function. And others hide the source completely…
Back in the day, things were really bad. Safari <=2.x, for example, didn’t even conform to valid Function Declaration syntax. It would go wild with things like “(Internal Function)” or “[function]”, or drop identifiers from NFEs, just because.
Back in the day, some of the mobile browsers (Blackberry, Opera Turbo) hid the code completely (replacing it with a polite “/* source code not available */” comment or similar), supposedly to “save” on memory. A fair optimization.
But what about today? Surely, things must have gotten better. There’s a convergence of engines, domination of relatively sane WebKit, lots of standardization, and tremendous increase in engines performance.
And indeed, things are looking good. But it’s not nearly all nice and peachy yet, and there’s more “fun” on the horizon.
I made a simple test page, checking various cases of functions and their string representations. Then I tested it on desktop browsers, including pretty “old” ones (IE6+, FF3+, Safari4+, Opera 9.6+, Chrome), as well as a slew of mobile browsers, and looked at common patterns.
It’s important to understand different purposes of function decompilation in Javascript.
Serializing native, built-in functions is different from serializing user-defined ones. In the case of Angular, for example, we’re talking about user-defined functions, so we don’t have to concern ourselves with the way native functions are serialized. Moreover, if we’re talking about retrieving arguments only, there are definitely fewer deviations to deal with than if we wanted to “parse” the source code.
Some things are more reliable; others — less so.
When it comes to user-defined functions, things are pretty uniform.
Aside from oddball and dying environments like IE<9 — which sometimes include comments (and even parens) around functions in their string representation — or Konqueror, which omits function body brackets from `new Function`-generated functions.
Most of the deviations are in whitespace (and newlines). Some browsers (e.g. Firefox <17) strip all comments from source code, and remove “dead”, unreachable code.
But don’t get too excited as we’ll talk about what future holds in just a bit…
Things are also a bit hectic with generated functions (using `new Function(...)`), but not by much. While most of the engines create a function with an “anonymous” identifier, the spacing and newlines are inconsistent. Chrome also inserts an extra comment after the parameter list (an extra comment never hurts, right?).
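For example, a generated two-argument function (the exact whitespace and that empty comment are from memory, so treat the output as approximate):

```js
var add = new Function('x, y', 'return x + y');

String(add);
// In Chrome, something along the lines of:
// "function anonymous(x, y
// /**/) {
// return x + y
// }"
```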
Every single supporting engine that I’ve tested represents bound (via `Function.prototype.bind`) functions the same way as native functions. Yes, that means bound functions “lose” their source from their string representation.
Arguably this is a reasonable thing to do; although a bit of a “wat?” when you first see it — why not use “[bound code]” instead?
Curiously, some engines (e.g. latest WebKit) preserve function’s original identifier and some don’t.
What about non-standard extensions? Like Mozilla’s expression closures.
Yep, those are still represented as they’re written; without function body brackets (technically, a violation of Function Declaration syntax, which the MDN page on Function.prototype.toString doesn’t even mention; something to fix!).
I was almost done writing a test case, when a sudden thought crossed my mind. Hold on a second… What about EcmaScript 6?!
All those new additions to the language; new syntax that changes the way functions look — classes, generators, rest params, default params, arrow functions. Won’t they affect function representation as well?
A quick test shed light on this — they do, of course. Firefox 24+, leading the ES6 brigade, reveals the string representation of these new constructs:
Examining ES6 spec confirms this further:
An implementation-dependent String source code representation of the this object is returned. This representation has the syntax of a FunctionDeclaration, FunctionExpression, GeneratorDeclaration, GeneratorExpression, ClassDeclaration, ClassExpression, ArrowFunction, MethodDefinition, or GeneratorMethod depending upon the actual characteristics of the object. In particular that the use and placement of white space, line terminators, and semicolons within the representation String is implementation-dependent.
If the object was defined using ECMAScript code and the returned string representation is in the form of a FunctionDeclaration, FunctionExpression, GeneratorDeclaration, GeneratorExpression, ClassDeclaration, ClassExpression, or ArrowFunction then the representation must be such that if the string is evaluated, using eval in a lexical context that is equivalent to the lexical context used to create the original object, it will result in a new functionally equivalent object. The returned source code must not mention freely any variables that were not mentioned freely by the original function’s source code, even if these “extra” names were originally in scope. If the source code string does not meet these criteria then it must be a string for which eval will throw a SyntaxError exception.
Notice how ES6 still leaves function representation implementation-dependent although clarifying that it no longer conforms to just FunctionDeclaration syntax. Also notice an interesting additional requirement — “returned source code must not mention freely any variables that were not mentioned freely by the original function’s source code” (bonus points if you understood this in less than 7 tries).
I’m unclear on how this will affect future engines and their representation. But one thing is certain. With the rise of ES6, function representation is no longer just an optional identifier followed by parameters and function body. There’s a whole lot of new stuff coming.
Regexes will, once again, have to be updated to account for all the changes (did I say it’s similar to UA sniffing? hint, hint).
I should also mention a couple of old chestnuts that never quite sit well with function decompilation — minifiers and preprocessors.
Minifiers like UglifyJS, and preprocessors/compilers like Caja tend to tweak the hell out of source code and rename parameters. This is why Angular’s dependency injection doesn’t work with minifiers unless alternative methods are used.
Perhaps not a big deal, but still a relevant issue and definitely something to keep in mind.
To sum things up: it appears that function decompilation is becoming safer, but — depending on your parsing needs — it might still be unwise to rely on it exclusively.
Thinking to use it in your app/library?
Remember that:
P.S. Functions with overwritten `toString` methods and/or `Proxy.createFunction` are a different kind of beast; we can consider those a special case that would require a special consideration.
Special thanks to Andrea Giammarchi for providing some of the mobile tests (not available on BrowserStack).
Moving from Wordpress to Github Pages (and Jekyll) is the best thing that ever happened to this blog. I’ve been meaning to do it for a while and finally found some time over these holidays.
Jekyll is a static site generator, and Github Pages allow for seamless hosting of its content. I’ve been using it on fabricjs.com for a couple of years now; once you get familiar with the workflow, it’s simple and straightforward.
If you mainly care about content and enjoy minimalism, Jekyll/gh-pages is just a perfect combo that doesn’t get in the way and lets you focus on writing.
Since starting perfectionkills.com in Aug 2007 — mostly wanting to share Prototype.js tutorials, tips, and scripts — I wrote 55 posts (worth 70,000 words). There’s been 914,000 visits and 1,160,000 pageviews. People left 1744 comments (most popular posts being Understanding delete, Javascript quiz, and Profiling CSS for fun and profit).
Here’s to the next 50 posts and a million visits! :)
I recently started working on adding some good-looking brushes to Fabric.js. We've had free drawing functionality for a while, but it was... laughable. Just a simple pencil of varying thickness. Far from anything you would see in those amazing drawing applications popping up in the last few years — Mr. doob's Harmony, deviantART's Muro, or mudcu.be Sketchpad. Freedrawing is one of the strongest points of canvas, so it's a shame not to have something good in a canvas library like Fabric.
image by Krzysztof Banaś
I started experimenting with different styles and techniques — edge smoothing, bezier curves, ink and chalk and pen and stamp and patterns — oh my. Turns out there's not much written about this on the web. Not in the context of Javascript and <canvas>, anyway. The best you can do is look at the demos source code to get a glimpse of what's going on.
So I got an idea to create a sort of interactive tutorial. Taking you from the very basics (drawing a primitive mouse-following line on canvas), all the way to those harmony brushes, with their sophisticated curves and strokes, spanning from the edges and curling around into weirdly beautiful structures. The tutorial pretty much reflects my own path of exploration.
I'll go over different code implementations of brushes so that you can understand how to implement free drawing on canvas yourself. And you can play around with things as we go.
Before proceeding, it's good to have a general understanding of HTML5 canvas.
So let's start with a very basic approach.
Check out this Pen!
We observe "mousedown", "mousemove", and "mouseup" events on the canvas. On "mousedown", we move the pointer to the clicked coordinates (`ctx.moveTo`). On "mousemove", we draw a line to the new coordinates of the mouse (`ctx.lineTo`). Finally, on "mouseup", we end drawing by setting the `isDrawing` flag to false. This flag is used to prevent drawing when just moving the mouse over the canvas (without first clicking it). You could avoid the flag by assigning the "onmousemove" event handler right in the "onmousedown" one (and then removing it in "onmouseup"), but a flag is a simple solution that works just as well.
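Here's a minimal sketch of what such a pen boils down to (assuming the canvas sits at the top-left of the page, so `clientX`/`clientY` can be used as-is):

```js
var canvas = document.getElementById('c');
var ctx = canvas.getContext('2d');
var isDrawing = false;

canvas.onmousedown = function(e) {
  isDrawing = true;
  ctx.moveTo(e.clientX, e.clientY);
};

canvas.onmousemove = function(e) {
  if (!isDrawing) return;
  ctx.lineTo(e.clientX, e.clientY);
  ctx.stroke();
};

canvas.onmouseup = function() {
  isDrawing = false;
};
```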
Well, that's a start. Now, we can control the line thickness by changing the value of `ctx.lineWidth`. However, with a thick line come jagged edges. This happens on "sharp turns" and can be solved by setting `ctx.lineJoin` and `ctx.lineCap` to "round" (see MDN for examples of how these affect rendering).
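The relevant settings are just a few properties on the context (the values here are arbitrary):

```js
ctx.lineWidth = 10;     // a thick line makes the jagged corners obvious
ctx.lineJoin = 'round'; // round out the corners between segments
ctx.lineCap = 'round';  // ...and the ends of each segment
```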
Check out this Pen!
Now the lines are not jagged around corners. But they aren't very smooth on the edges either. This is because there's no antialiasing happening here (controlling antialiasing on canvas has never been straightforward). So how do we emulate it?
One way to make edges smooth is with the help of shadows.
Check out this Pen!
All we've added is `ctx.shadowBlur` and `ctx.shadowColor`.
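Roughly like this (the blur radius and color are arbitrary):

```js
ctx.lineWidth = 10;
ctx.lineJoin = ctx.lineCap = 'round';
ctx.shadowBlur = 10;              // soften the edges with a blur
ctx.shadowColor = 'rgb(0, 0, 0)'; // same color as the stroke
```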
Edges are definitely smoother now, since lines are surrounded with a shadow. But there's still a little problem. Notice how the line is thinner and blurrier at the beginning, but becomes thicker and more solid towards the tail. An interesting effect on its own, but perhaps not exactly what we want. So why does this happen?
Turns out this is due to shadows overlapping each other. The shadow from the current stroke overlaps the shadow from the previous stroke, which overlaps the shadow from the one before it, and so on. The more overlapping shadows, the less blurry and the thicker the line is. So how would we go about fixing this?
One way to avoid these kinds of issues is to always stroke once. Instead of blindly stroking on every mousemove, we can introduce state — store the points in an array, and always stroke through them once.
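A sketch of the mousemove handler under that approach (mousedown/mouseup still just toggle `isDrawing` and seed the array):

```js
var points = [];

canvas.onmousemove = function(e) {
  if (!isDrawing) return;
  points.push({ x: e.clientX, y: e.clientY });

  // redraw the whole path in a single stroke, so shadows
  // (or anything else) are applied exactly once
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.beginPath();
  ctx.moveTo(points[0].x, points[0].y);
  for (var i = 1; i < points.length; i++) {
    ctx.lineTo(points[i].x, points[i].y);
  }
  ctx.stroke();
};
```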
Check out this Pen!
As you can see, it looks the same as the first example. Now we can try adding shadow here. Notice how it stays even throughout entire path.
Check out this Pen!
Another smoothing option is to use radial gradients. Gradients allow for more even color distribution, unlike shadows, which often come out more blurry than "smooth".
Check out this Pen!
But, as you can see, stroking with a gradient has other issues. Notice how we're simply filling an area with a circular gradient on each mousemove. When moving the mouse quickly, we get a sequence of disconnected circles rather than a straight line with smooth edges.
One way to solve this is by generating additional points whenever there's too much distance between any of them.
Check out this Pen!
Finally a decently smooth curve!
You might notice a small change in the above example. Instead of storing all points of a path, we only store the last one. And we always stroke from that last one to the current one. Having the last point is all we really need to calculate the distance between it and the current one. If the distance is too large, we stroke more in between. The good thing about this approach is that we use less memory by not keeping an entire `points` array!
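In sketch form, the in-between filling looks something like this (the 5px step and the helper names `distanceBetween`, `angleBetween`, `drawGradientDab` are my own; `lastPoint` is set on mousedown):

```js
function distanceBetween(p1, p2) {
  return Math.sqrt(Math.pow(p2.x - p1.x, 2) + Math.pow(p2.y - p1.y, 2));
}
function angleBetween(p1, p2) {
  return Math.atan2(p2.x - p1.x, p2.y - p1.y);
}

canvas.onmousemove = function(e) {
  if (!isDrawing) return;
  var currentPoint = { x: e.clientX, y: e.clientY };
  var dist = distanceBetween(lastPoint, currentPoint);
  var angle = angleBetween(lastPoint, currentPoint);

  // fill the positions in between, so fast mouse moves
  // don't leave gaps between the gradient "dabs"
  for (var i = 0; i < dist; i += 5) {
    var x = lastPoint.x + Math.sin(angle) * i;
    var y = lastPoint.y + Math.cos(angle) * i;
    drawGradientDab(ctx, x, y); // draws one radial-gradient circle at (x, y)
  }

  lastPoint = currentPoint;
};
```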
One interesting concept I came across was using bezier curves instead of straight lines. This allows the curves of a free-drawn path to be naturally smoother. The idea is to replace the straight-line stroke with `quadraticCurveTo`, using the middle point between two consecutive points as the curve's end point and the point itself as the control point. Try it:
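A sketch of the redraw loop with curves (same `points` array as in the earlier examples):

```js
canvas.onmousemove = function(e) {
  if (!isDrawing) return;
  points.push({ x: e.clientX, y: e.clientY });

  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.beginPath();
  ctx.moveTo(points[0].x, points[0].y);

  for (var i = 1; i < points.length - 1; i++) {
    // the point itself acts as the control point; the midpoint
    // between it and the next point becomes the curve's end point
    var midX = (points[i].x + points[i + 1].x) / 2;
    var midY = (points[i].y + points[i + 1].y) / 2;
    ctx.quadraticCurveTo(points[i].x, points[i].y, midX, midY);
  }
  ctx.stroke();
};
```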
Check out this Pen!
So there you have it: some basic variations of drawing and smoothing lines, from a simple few-liner to a more complex curve-based solution. Let's move on to something more fun.
One of the tricks in a realistic brush toolbox is to simply stroke with an image. I came across this technique in this blog post by Andrew Trice. The idea is to fill with an image of a little chunk of a stroke, using the last-point technique. This opens up a huge number of possibilities.
Check out this Pen!
Depending on an image, we can achieve different brush styles. In this case, it's something resembling a thick brush.
An interesting twist (excuse the pun) on the previous technique is to fill the path with the same image but rotate it randomly every time it's rendered. If we do this, we can get something resembling fur (or a garland?).
Check out this Pen!
When it comes to simulating a pen, a nice solution is to simply randomize the segment width of a path! We can still use the good old `moveTo`+`lineTo` combination, but change "lineWidth" every time a stroke occurs. Here's how it looks:
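Roughly (the 2 to 4px range is arbitrary; `lastPoint` is tracked the same way as before):

```js
canvas.onmousemove = function(e) {
  if (!isDrawing) return;
  var currentPoint = { x: e.clientX, y: e.clientY };

  ctx.beginPath();
  // keep the randomization within a narrow range, so the line
  // wobbles like ink rather than jumping around
  ctx.lineWidth = 2 + Math.random() * 2;
  ctx.moveTo(lastPoint.x, lastPoint.y);
  ctx.lineTo(currentPoint.x, currentPoint.y);
  ctx.stroke();

  lastPoint = currentPoint;
};
```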
Check out this Pen!
One thing to keep in mind is that, in order for the drawing to look realistic, the randomized values should not be too far apart.
Another pen simulation is done via multiple strokes. Instead of stroking between points once, we add 2 more passes. But we don't want to stroke at the same spot, as that wouldn't change anything. Instead, we take a couple of random points (blue dots in the picture) next to the original ones (green dots in the picture), and stroke from there. So instead of 1 line, we get 2 extra lines "sloppily" stroked right next to the original one. A perfect simulation of a pen!
Check out this Pen!
There's so much you can do with this "multiple stroke" technique. I urge you to try your own variations. Here's one example: if we increase the line thickness and offset the 2nd pass just slightly, we get a simulation of a thick brush. Those blank spots on the edges are what make it look realistic.
Check out this Pen!
If we implement multiple strokes, but at even and small offsets, we can get something resembling a sliced brush again. This time, without using an image. The path simply comes out skewed.
Check out this Pen!
If we take the same brush as in the previous example, and give each stroke less and less opacity, we get an interesting effect like this.
Check out this Pen!
But enough with straight strokes. Can we apply the same technique to, say, a bezier-curve-based path? Of course. We just need to draw each curve at an offset from the original points. This is how it looks:
Check out this Pen!
We can also use the same "fading" technique, where each line has less opacity. This makes these lines look even more elegant.
Check out this Pen!
As with straight strokes, the possibilities with bezier curves are endless.
Now that we've learned how to stroke lines and curves, implementing a stamp brush couldn't be simpler! All we need is to draw a certain shape on every mouse move, at the location of the mouse. That's it. Here's an example of stamping with a red circle.
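In its simplest form (radius and color picked arbitrarily):

```js
canvas.onmousemove = function(e) {
  if (!isDrawing) return;

  // stamp a filled circle at the current mouse position
  ctx.beginPath();
  ctx.arc(e.clientX, e.clientY, 10, 0, Math.PI * 2);
  ctx.fillStyle = 'red';
  ctx.fill();
};
```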
Check out this Pen!
You can see the same issues with intermediate points, which we can solve with the same prefilling technique. Prefilling, in the case of stamps, tends to create pretty interesting trail-like or tube-like effects. You can control the density of the tube by changing the interval at which points are prefilled between the last point and the current one.
Check out this Pen!
Of course we can always spice things up, changing each stamp in some way. For example, randomly varying radius and opacity in the 1st example gives us this.
Check out this Pen!
When it comes to the kind of stamp, you can really go as far as you want — anything from basic shapes (e.g. a circle), like we've just seen, to more complex paths made of hundreds or thousands of curves. The only limiting factor here is performance. Here's an example of stamping with a simple five-pointed star.
Check out this Pen!
And here's the same star, but rotated randomly on each move, for a bit more natural feel.
Check out this Pen!
Heck, let's randomize even more — size, angle, opacity, color, thickness! Now isn't that fun.
Check out this Pen!
We're also not limited to just shapes. One option is to manipulate pixels around the mouse point directly. A simple example would be to just randomize their color and location.
Check out this Pen!
Now that we went over stroking and stamping, let's take a look at a completely different beast — patterns. We can use canvas' `createPattern`, filling the path with it as we go. This makes for some very interesting effects. Let's take a look at a simple dot pattern.
Check out this Pen!
Notice how the pattern is created here. We're instantiating a mini canvas, drawing a circle on it, then using that canvas as a pattern on the main canvas! We might just as well have used a plain image, but the beauty of using a canvas is that we have programmatic access to it and can change it any way we like. This means we can create dynamic patterns, e.g. changing the color of a circle in a pattern, its radius, etc. It also means that we can experiment with patterns more quickly and easily.
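The gist of it, in sketch form (sizes and colors are arbitrary):

```js
// build the pattern on a tiny off-screen canvas
var patternCanvas = document.createElement('canvas');
patternCanvas.width = patternCanvas.height = 20;

var patternCtx = patternCanvas.getContext('2d');
patternCtx.fillStyle = 'red';
patternCtx.beginPath();
patternCtx.arc(10, 10, 5, 0, Math.PI * 2);
patternCtx.fill();

// then use that mini canvas as the stroke style of the main one
ctx.strokeStyle = ctx.createPattern(patternCanvas, 'repeat');
ctx.lineWidth = 25;
ctx.lineJoin = ctx.lineCap = 'round';
```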
Based on the previous example, you should be able to create something similar. Let's say, a horizontal-lines pattern.
Check out this Pen!
...or vertical lines, with interchanging colors.
Check out this Pen!
...or even multiple lines with varying colors. Once again, everything is possible. Just think of some pattern and try to create it on a mini canvas. The rest is taken care of by `createPattern` and path filling.
Check out this Pen!
Finally, here's an example of using an image-based pattern together with a bezier-curved path. All that's changed here is that we're passing an image object to `createPattern` (and then assigning the resulting pattern to `strokeStyle`).
Check out this Pen!
Now, what about the good old spray brush? There are a few ways we can implement it. One of them is to simply fill an area (pixels) around the mouse point with color. The larger the area (radius), the thicker the spray is. The more pixels we fill, the denser it is.
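A sketch of that first variation (the density and radius values are arbitrary):

```js
canvas.onmousemove = function(e) {
  if (!isDrawing) return;

  var density = 50; // how many pixels to paint per mousemove
  var radius = 20;  // how far from the pointer they can land

  for (var i = 0; i < density; i++) {
    var offsetX = (Math.random() * 2 - 1) * radius;
    var offsetY = (Math.random() * 2 - 1) * radius;
    ctx.fillRect(e.clientX + offsetX, e.clientY + offsetY, 1, 1);
  }
};
```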
Check out this Pen!
You might notice that the previous approach does not really paint like a real spray. A real spray paints an area continuously, not just when we move the mouse/brush. In order to achieve this, we need to paint the area at a constant interval while the mouse is pressed. This way, certain areas can be made darker just by "holding the spray" there longer.
Check out this Pen!
The previous example is more realistic but not fully so. Real spray throws paint over a round area, not rectangular. So let's try to distribute pixels over a round area.
Check out this Pen!
Much better.
Finally, is there anything else we can do to make the spray more realistic? Aside from using an image as a stamp, of course. We can certainly make the paint spread out even more sporadically, as it would in real life. If we change the opacity of each of the painted pixels, we get a very similar effect.
Check out this Pen!
The concept of connecting neighbour points was popularized by zefrank's Scribbler and Mr. doob's Harmony. If you remember Harmony brushes like sketchy, shaded, chrome — that's the effect I'm talking about.
The idea is: add additional strokes between nearby points of an already-drawn path. This usually creates the effect of a sketch, or a web, or shading of some sort; the additional strokes add the illusion of darker spots in small, "bent" areas.
A naive approach would be to take our first simple example of a point-based brush, and add extra stroking. For each point along the path, we would stroke towards one of the previous points on the path:
Check out this Pen!
You can kind of start to see something resembling Harmony's brushes, but it's not exactly the same. It could be made better by reducing the opacity (i.e. contrast) of the additional strokes, to make them more realistic and shadowy. But to recreate the effect fully, we need to follow a different algorithm.
Check out this Pen!
The part responsible for "nearby" stroking is this:
```js
var lastPoint = points[points.length - 1];
var dx, dy, d;

for (var i = 0, len = points.length; i < len; i++) {
  dx = points[i].x - lastPoint.x;
  dy = points[i].y - lastPoint.y;
  d = dx * dx + dy * dy; // squared distance (cheaper than Math.sqrt)

  // only connect points that are close enough to the last one
  if (d < 1000) {
    ctx.beginPath();
    ctx.strokeStyle = 'rgba(0,0,0,0.3)';
    // offset both ends by 20% of the delta, so the extra strokes
    // don't start and end exactly on the original points
    ctx.moveTo(lastPoint.x + (dx * 0.2), lastPoint.y + (dy * 0.2));
    ctx.lineTo(points[i].x - (dx * 0.2), points[i].y - (dy * 0.2));
    ctx.stroke();
  }
}
```
What's going on here? Looks crazy. Took me a while to understand but the concept is strikingly simple!
When drawing a line, we check the entire already-drawn path, comparing all of its points to the current (last) one. If a point is within a certain proximity (`d < 1000`) of the last one, we move the pointer to it and stroke a line from there to the current point. The `dx * 0.2` and `dy * 0.2` give those additional strokes a bit of an offset.
That's it. Simple idea, powerful effect.
An interesting twist on this technique — seen in Harmony — is to create a fur effect. Instead of stroking towards the nearby point (from the last one), the stroke is made in the opposite direction. With a little bit of offset, it produces furry strokes around certain (close) areas.
Check out this Pen!
Shortly after investigating Harmony brushes, I came across this wonderful blog post by Lukáš Tvrdý, explaining nicely some of the variations of neighbor-points technique. He describes how different parameters affect the strokes and the kind of effects they produce. Definitely worth checking out.
So there you have it — some of the basic, as well as more interesting, drawing techniques. We've only scratched the surface here. There are endless possibilities to customize any of these brushes, creating even more exciting effects. Change opacity or color, width or offsets, introduce a random factor, and a whole new effect is born.
Try experimenting with them on your own!
“Most people overestimate what they can do in one year and underestimate what they can do in ten years.”
– Bill Gates
What if I told you that you can become a superhero? Yes, just like the ones from the comics and movies we’ve all come to love since childhood.
“The Amazing Spider-Man”. “Batman, the Dark Knight”. Or even a man of steel — Superman. Characters with extraordinary superpowers, fighting the good fight against villains of the night.
I’m here to tell you that you can become one of them. Yes, you, my dear developer. You, who spend a good chunk of the day in front of the monitor.
“But how is this possible?” — you ask me.
“Have you completely lost your mind? Spent one night too many in the debris of code, perhaps?” “Obviously, superpowers are called extraordinary for a good reason — not everyone has them. And the entire concept of a superhuman — aside from being a silly fairytale — is definitely about extraordinary specimens, not real human beings. We just don’t have superpowers or capabilities of that sort, no matter how much we wish for them.”
Well, damn.
You got me there. I think I’ll just have to come straight out and admit that I lied.
You actually won’t become a Superman. Or even a Spider-Man. Sorry. But don’t close this tab just yet, as I do have a point here.
So we can’t have an amazing ability to create spider web in seconds. Or to fly through the skies with the speed of light. Then what is this nonsense I’m talking about?
Something I’ve come to realize over the past few years is that a superhuman is hidden in all of us. We just don’t know about it, or don’t care about it, or never try to unleash the potential. When I talk about superhuman, I don’t mean the above-mentioned comic-book abilities, but physical and mental abilities as compared to an average human being.
“Superhuman hidden in all of us?” — I hear your laugh — “Is this one of those deep, motivational quotes you found on Pinterest?”
The dictionary defines “superhuman” as “having or showing exceptional ability or powers”. Can we achieve exceptional abilities compared to an average human being? Of course! And guess what: to that same human being, those abilities will come off as pretty damn close to superpowers. How’s that for a superhero?
The sad state of affairs these days is that the average person is so physically out of shape that the difference between what we are and what we can achieve is mind-blowing. In programmer circles the situation is even worse, since our lifestyle — coupled with our lazy nature — takes its toll, and the gap becomes even wider. A regular computer-facing-on-a-daily-basis person has incredibly low levels of physical strength, endurance, mobility, speed, and work capacity.
There’s no reason for this and it has to stop.
This is certainly easier said than done, so I’d like to share something with you. I’d like to show you my outlook on things, and tell you a couple of ideas that will help unleash those super abilities hidden within you. As weird as it might sound, becoming a better you can actually be fun, and here’s how.
When I was 12 years old, a friend of mine introduced me to my very first computer game — Baldur’s Gate. A fascinating role-playing adventure, set in a Dungeons & Dragons universe. Just like in any game of such kind, you start with a character. A weak and unskillful, pale version of what you can become. As you travel through the mysterious world full of dangerous monsters, the character “grows”. It becomes a better, stronger, faster version of itself; learning new skills and improving the existing ones. Turning into a superhuman machine — powerful and dangerous to any opposing force on its way.
For some reason, this “character growing” aspect has always fascinated me. Yes, the story is engaging and the fantasy world is amazing on its own. But there’s nothing like reaching that new level, learning a new skill, or becoming a slightly more powerful version of yourself. Even if it’s all happening in a virtual world.
Have you ever felt the same about your game character? Ever caught your heart beating faster when you’ve found new spell in a Skyrim world? Or finally got that Fallout perk?
So here’s the fun part — that character can be you. A few months of a proper physical regimen gives +1 to your strength. Practice some acrobatic skills — +1 to agility. Engage in long-lasting activity — +1 to endurance.
“You’ve got to be joking!” — you say. “I have a project to finish, and there’s still IE testing to be done. And you want me to start practicing acrobatic skills? I get enough by juggling DOM quirks in my mind. This is seriously the dumbest idea ever.”
Why is it that we’re so good at perfecting virtual things, yet fail so much with our real-life self?
The idea is this: the gaming aspect could be a huge motivation. Try treating yourself as a game character and it could do wonders. Don’t care about games? Treat it as a pet project. Just like that library you released a few months ago, carefully planning and thinking through the API, adding thorough documentation, making sure there are unit tests covering all functionality; there’s always new stuff to add, things to improve, polish. The very same principle could be used to make a better you.
Unlike in games, there’s almost no level cap in real life. Well, technically there is, but it’s so far away that it will likely take you years and years to reach it. When I realized this, it shocked the hell out of me; it made me feel absolutely empowered. No more reaching maximum level 40 and losing interest in a game from not being able to improve.
We’re talking about life-long leveling up here. And the possibilities for growth are endless.
You can work on strength, speed, power, agility, endurance — you name it. Or all of them at the same time, slowly reaching extraordinary levels in all of them. Now that’s superhuman.
Before you dismiss all this as some silly games, think about health benefits and aesthetic improvements that come along, just in case you happen to care about things like that.
Of course you care about things like that! We all do. Great physique increases confidence and good health makes the rest of your life more enjoyable.
But here’s the cool thing: focusing on performance does wonders for both health and looks. Focusing just on looks or health usually does little for performance. Yes, those two goals are great on their own, but I can’t repeat this enough — if you focus on performance, looks and health will follow.
You get the best of all worlds.
As geeks, we possess OCD-like features, which is probably what makes the gaming aspect so effective. We’re also smart about what needs to be done. But we’re lazy. So the biggest thing preventing you from becoming a better you is just… starting. Once you start, it really doesn’t take long to see the progress; to feel the “leveling up” aspect. And that’s fun. Instead of a new spell, you’ll be thinking about the new achievement of doing 10 pull-ups. Instead of increasing a strength point, you’ll plan for squatting your bodyweight. Instead of a speed boost, you’ll aim to run an 8-minute mile.
With all this gaming talk, I just have to bring up Fitocracy — a social network and a tracking tool that brings gaming ideas to the world of fitness.
I’ve been using it for the last 2 years, just for fun, and while my motivation comes from the obsessive head, I can see the kind of boost it gives to hundreds of people on a daily basis.
It’s one of the reasons I know that gaming works. If you’re lacking some motivation, I suggest you give it a try. It might just be that perfect push. And it keeps things fun; points, achievements, badges, levels — it’s all there.
So now that you decided to give it a shot and become a superhero version of yourself (right? right?) Where should you start?
One thing to keep in mind is that the fitness world — just like the web development one — is often dark and full of terrors, full of misconceptions and ignorant advice. It’s good to take things with a grain of salt, and always use common sense.
There are a few ways to get started: whether it’s bodyweight training or running or sports or weight training — each has its own benefits. I have my own ideas of self-improvement, focused around strength, speed, and mobility, utilizing both free-weight and bodyweight training. But that’s a story for another time.
Two of the best resources I can recommend:
The r/fitness FAQ on Reddit has no-nonsense and very accurate information on the topic of fitness. In particular, the getting started section should be all you need in the beginning.
T-nation articles are some of the highest-quality you can find online, although they’re more advanced (don’t pay attention to horrific images used on the site, and occasional advertisement; the content is written by the top coaches in the world and is always legit).
There have also been a few excellent books that reinforce the idea of “less is more” and of “easy”, accessible, non-intimidating physical progress:
Everything I said above was about becoming a better you. But if you’re asking yourself — “Who is this poor delusional bastard? And why does he think he can become a superhero?” — I owe you a quick explanation.
Ten years ago my physical shape was close to zero. I couldn’t run without getting out of breath, couldn’t do a single pull-up. Just one of those chubby kids who would rather play another game of Warcraft than go outside for some basketball. The kind of physical shape I’m in today would be absolutely superhuman to my 10-years-back self. If I had been told back then that I’d be able to squat with 300lbs on my back, run 5km, easily perform 20+ pull-ups, 50+ pushups, or overhead press my bodyweight, I would have thought I had ended up in some kind of professional sport. Or was spending hours every day in the lame gym instead of doing more fun things by the computer.
But the truth is that I still spend most of my time by the computer, enjoying the wonderful world of front end. And while I do hit the gym, it’s only 3-4 times a week and not more than 1-1.5 hours each time.
The amazing capabilities of our bodies can be developed to great effect without huge effort or time investment. Just by doing the right thing and “sticking to it”. You’ll be amazed at what becomes possible in a few years.
Am I somehow special? Definitely not. Any of you can achieve the same, and likely more, if you only make it part of your routine. Become a superhero for yourself, for your kid or your spouse; become the best version of you.
So join me in this wonderful journey to an ultimate self. And good luck.