The Mixin bills itself as a Sass and front-end meetup, and I have been wanting to go for quite some time. What follows are my major takeaways from the event.
Getting to the Event at PlanGrid
I took BART to the 16th Street Mission station, and it was a quick two-block walk to the event. I made my way to the building and walked down a pretty nondescript corridor. There was a small sign stating that the event was on the 4th floor.
What I found was a beautiful space that seemed to occupy the entire 4th (top) floor of the building. It was stripped down to its load-bearing essentials, accented with nice warm wood, and wonderfully lit.
There were 3 speakers lined up for the evening, covering Sass, design tokens, and front-end performance.
The first was Sass.
Sass Update by Kaelig Deloumeau-Prigent
Kaelig (@kaelig) took the forefront and after a few pointed jokes about our recent election, took us on a blow-by-blow account of what is happening in the Sass world. More specifically, what is new with the Sass 3.5 release.
I won’t take you through all of the specifics; instead, I want to share 3 major takeaways with you.
First Class Functions
The big difference here is that functions are now treated as first-class citizens. If you want to dig a bit deeper into what that means, I suggest taking a look at this Stack Overflow post, which has examples in several languages.
In short, prior to this change a Sass function could only return a value. That was its sole purpose; it could not really be used in any other way.
Now it is possible to assign functions to variables, pass them as arguments, return them from other functions, and include them in other data structures. If you want to learn a bit more, I suggest taking a look at Kaelig’s Medium article Making sense out of Sass 3.5 first-class functions.
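Here is a minimal sketch of what this looks like in practice. The `get-function` and `call` functions are the real Sass 3.5 additions; the `adjust` wrapper and the color values are just my own illustration:

```scss
// A function that accepts another function as an argument (Sass 3.5+).
@function adjust($fn, $color) {
  @return call($fn, $color, 10%);
}

// get-function() looks up a function by name and returns a first-class
// function value, which can be passed around like any other value.
.button {
  background: adjust(get-function('darken'), #336699);

  &:hover {
    background: adjust(get-function('lighten'), #336699);
  }
}
```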
The Dart Sass Refactor
Lately, Sass has fractured into two flavors.
There is Ruby Sass, the original: trusted, but slow and showing its age. And there is LibSass, which goes by a few other names, a port of the original Sass to C/C++. It is much faster, but sometimes lags a bit behind in features.
ZURB Foundation recently moved to LibSass in ZURB Foundation v5 first with Grunt and now with Gulp. This has resulted in huge speed improvements.
This might not be that big of a deal on smaller projects, but on much larger Sass builds it can mean the difference between waiting 1 second and 12 seconds for the CSS to rebuild. LibSass largely solves this problem by being roughly 10 times faster.
So why rewrite Sass in Dart?
There are two main reasons.
First, you get much faster run-time speeds than the Ruby variant.
We already know from experience and the LibSass port that speed is something that people want, and it can really make your pipeline much more reactive.
Who wants to sit around staring at their screen, waiting for some process to finish?
The second thing that we get is faster iteration.
Fast iteration is generally a benefit of Ruby, but it comes with poor runtime speed; iterating on a C codebase versus a Ruby one is no contest.
Without knowing the exact reasons myself, it seems that Dart hits a sweet spot, allowing the Sass core team to quickly build out new features and ship bug fixes without having to work in a language like C.
So, moving forward, it seems safe to assume that, after some learning curve, we will see a Sass variant that is much faster than the Ruby one and updated more frequently.
This works great for me. Personally, I prefer a faster solution with feature parity over the absolute fastest one that lags a few features behind.
The third takeaway from Kaelig’s talk was a tool called sass-lint. I have seen CSS linting tools added to many build pipelines, and it is nice to see a lint tool that can work directly on the unparsed .sass or .scss files.
Although he didn’t get into the specifics, I plan to spend some more time exploring this tool in my personal and client projects.
There are both Grunt and Gulp task-runner integrations, as well as IDE integrations for Sublime Text, Atom, and others.
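To give a feel for it, here is a minimal `.sass-lint.yml` sketch. The rule names below (`no-ids`, `no-important`, `indentation`) come from sass-lint’s documented rule set, but double-check the rule docs before copying this:

```yaml
# Minimal sass-lint config sketch; 0 = off, 1 = warn, 2 = error.
files:
  include: 'scss/**/*.scss'
rules:
  no-ids: 2          # error on #id selectors
  no-important: 1    # warn on !important
  indentation:
    - 2
    - size: 2        # enforce 2-space indentation
```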
Design Tokens by Jina Anne
The 2nd talk was by the organizer of the event, Jina Anne (@jina) from Salesforce.
Before getting into Design Tokens, she took us on a whirlwind tour of her history with design systems including the design and styleguide for the sass-lang.org site.
She explained how she used YAML and an ERB-generated SCSS file with Ruby Middleman to generate things such as color swatches.
She summed this up by saying, “Design Systems contain whatever your organization needs to communicate design decisions.”
She then continued to talk about her experience working on Lightning, the Design System used by Salesforce. She talked about how there were over 100 people on core UX and over 20,000 employees at the company.
As you might imagine, she said “Scaling enterprise design is tricky.”
She gave some examples of questions people keep asking over and over, like where to find the icons. She also showed some of the “redlines” (hi-fi mocks annotated with dimensions) and compared the situation to the xkcd comic about how standards proliferate.
This is a great analogy and great point that she made. I think that it is human nature to complicate things. Even if we are hoping to “fix” things we often end up with another competing standard.
She explained it was difficult to understand what changes were being made to which documents and this led to a lack of continuity amongst their properties.
She also explained that in a recent audit, there were 116 different text colors, 120 different background colors and 73 different font sizes.
To help combat this, she proposed the idea of Design Tokens, which lets you keep your design system agnostic and maintain a single source of truth for colors and font sizes.
Tokens were nothing new to me, having recently worked on a localized and internationalized site using Ruby Middleman and i18n. In most tools you end up creating tokenized strings that represent the text of a button or other user-interface elements. For example, you might have ‘BUTTON_SAVE’ and then create a data set that maps that token to its text in many different languages.
In a similar fashion, Salesforce UX has created a tool called Theo, which allows you to create such tokens for colors.
Then from that single list of colors you can export a variety of formats so that you can have color consistency across web, iOS and Android platforms.
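Roughly, the idea looks like this. The token file below is only illustrative, in the spirit of Theo — the names and schema here are my own, not Theo’s exact format:

```yaml
# Illustrative token file; see Theo's docs for its real schema.
props:
  - name: color-text-default
    value: "#16325c"
    type: color
  - name: color-background-inverse
    value: "#061c3f"
    type: color
  - name: font-size-medium
    value: "1rem"
    type: size
```

From a single file like this, an exporter can emit `$color-text-default: #16325c;` for SCSS on the web, a JSON or plist map for iOS, and XML resources for Android, so every platform reads from the same source of truth.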
One of the larger benefits I saw was supporting variants like cozy vs. compact spacing and night mode. I have worked on several night-mode projects recently, and a tool that makes implementing them faster and more consistent is a win in my book.
She closed her talk by saying there is no product too small to realize the benefits of using design systems in your work.
The Hateful Weight: A [Front-end] Performance Talk by Henri Helvetica
The 3rd talk of the evening was by Henri Helvetica (@HenriHelvetica) (not his real last name, but pretty cool) from Toronto. He recently spoke at the O’Reilly Velocity Conference, and this evening he discussed why performance is so important to good UX.
The talk started off with a somewhat comical historical journey of how we got here (to super-bloated web pages), and the key point was that the amazing cameras on smartphones have set the bar very high.
Now everyone is snapping super high res images without any thought as to how this impacts people without massively fast internet connections.
Really, even if your connection is fast, why waste all of that bandwidth?
Some interesting stats: the average web page weighs 2.5 MB, and 64% of that is images. On top of that, 72% of sites send the same data to mobile devices.
To hammer this point further, he talked about emerging markets and how modern browsers are no longer trusting developers. They are crushing the page size on their end to improve the experience of the end user.
So being intentional about your images, their formats, and their compression is critical to improving your page weight.
He gave an interesting breakdown of image distribution on the web:
- 1% SVG
- 1% WebP
- 24% GIF (this one is baffling to me)
- 28% PNG
- 44% JPEG
He then went on with an awesome breakdown of the different formats and where you can see some real savings.
For example, he said that the EXIF data in a PNG, largely useless on a web page, can make up as much as 14% of an image’s file size. He suggested using tools such as ImageOptim, at a minimum, to trim these unnecessary bytes from your PNGs.
He also broke down more esoteric and emerging formats such as WebP (Chrome), JPEG 2000 (Apple), and JPEG XR (Microsoft).
He suggested that the best route is to serve each browser the format that delivers the best quality at the lowest file size.
He gave the example of forever21.com.
Viewing the source in different browsers, he found the following image formats being served:
- Firefox: JPEG
- Safari: JPEG 2000
- Chrome: WebP
- IE: JPEG XR
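One way to implement this kind of per-browser format negotiation is in markup, with the picture element. This is only a sketch — the file names are illustrative, and a site like forever21.com may well do the negotiation server-side via Accept headers or a CDN instead:

```html
<picture>
  <!-- Browsers pick the first source whose MIME type they support. -->
  <source type="image/webp" srcset="product.webp"> <!-- Chrome -->
  <source type="image/jp2"  srcset="product.jp2">  <!-- Safari -->
  <source type="image/jxr"  srcset="product.jxr">  <!-- IE / Edge -->
  <img src="product.jpg" alt="Product photo">      <!-- fallback for everyone else -->
</picture>
```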
He also suggested that you should absolutely be lazy loading your images, stating that typically 80% of image requests are for images below the fold.
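The common library-based pattern looks something like this (attribute names are illustrative; newer browsers have since added a native hint as well):

```html
<!-- Library pattern: the real URL lives in data-src and a script
     swaps it into src once the image nears the viewport. -->
<img data-src="photo-large.jpg" src="placeholder.gif" alt="Product photo">

<!-- Native hint supported by newer browsers: -->
<img src="photo-large.jpg" loading="lazy" alt="Product photo">
```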
Although he didn’t get into the specifics about how to manage this type of setup, he suggested that a CDN can automate most of the image format conversion.
The final shocker: the Google Pixel site (for the Google smartphone) weighed in at 37 MB! That is huge.
I am a huge proponent of performance tuning the front-end of your sites and properties and found the presentation to be very on point.
I agree with Henri that taking care of your images is “low hanging fruit” and can have big rewards without a lot of investment into making changes.
As a closing note, he suggested that if you want to dig deeper, you should check out the book High Performance Images.
If you want to take a look at his slides, you can download the PDF from afast.site.
The event was fantastic and I look forward to attending the next meeting in early 2017. My understanding is that it is a quarterly event.
Images used by permission courtesy of the Mixin SF.