Feed Viewer Demo



USB-IF Publishes Audio Over USB Type-C Specifications

Roku Unveils 2016 Streaming Media Players with 4Kp60 and HDR Support

Best Laptops: Q3 2016

The SilverStone SX700-LPT SFX 700W PSU Review

ADATA Launches XPG SX8000: High-End M.2 NVMe SSD Featuring 3D MLC NAND

Giveaway: OCZ VX500 & RD400 SSDs

The Lenovo ThinkPad X1 Yoga Review: OLED and LCD Tested

BlackBerry Stops Development of Smartphones, Set to Outsource Hardware Development

Best Android Phones: Q3 2016

NVIDIA Teases Xavier, a High-Performance ARM SoC for Drive PX & AI

GTC Europe 2016: NVIDIA Keynote Live Blog with CEO Jen-Hsun Huang

Xiaomi Mi 5s and Mi 5s Plus Announced

Razer Updates The DeathAdder Elite Gaming Mouse

New ARM IP Launched: CMN-600 Interconnect for 128 Cores and DMC-620, an 8Ch DDR4 IMC

CEVA Launches Fifth-Generation Machine Learning Image and Vision DSP Solution: CEVA-XM6

AMD Announces Embedded Radeon E9260 & E9550 - Polaris for Embedded Markets

Xilinx Launches Cost-Optimized Portfolio: New Spartan, Artix and Zynq Solutions

The Phononic HEX 2.0 TEC CPU Cooler Review

Satechi and StarTech USB 3.1 Gen 2 Type-C HDD/SSD Enclosures Review

New Chrome OS Update Enables Google Play on Acer’s and ASUS Chromebooks

AMD 7th Gen Bristol Ridge and AM4 Analysis: Up to A12-9800, B350/A320 Chipset, OEMs first, PIBs Later

NVIDIA Releases 372.90 WHQL Game Ready Driver

AMD Releases Radeon Software Crimson Edition 16.9.2 - Support for Forza Horizon 3

NZXT Unveils Fully Customizable Aer RGB LED Fans

NVIDIA Announces Gears of War 4 Game Bundle for GTX 1080 and 1070


Superhero Bits: Luke Cage’s Ties To The Avengers, Justice Society of America, Psylocke Training & More

Tim Burton’s ‘Miss Peregrine’s Home for Peculiar Children’ Needs More Peculiarity [Review]

FX’s ‘Archer’ Ending After Season 10

‘Star Trek’ Fan Film Lawsuit Moving Forward; JJ Abrams’ Claims “Are Irrelevant”

HBO Wants ‘Game of Thrones’ Spinoff: “It’s About Finding the Right Take”

Cool Stuff: Hot Toys ‘Star Wars: the Force Awakens’ Luke Skywalker 1/6th Scale Figure

Lee Daniels Is Working On A Musical About His Own Life Similar To Fellini’s ‘8 1/2’

‘War for the Planet of the Apes’ Plot Revealed: Caesar vs. The Colonel… and Himself

Ranking the Movies of Director Peter Berg: Plenty of Handheld Chaos & Full Hearts

Quentin Tarantino Almost Made a ‘Luke Cage’ Movie, Discusses His Hopes For Marvel’s Show

‘Jurassic World 2’ Will Not Be ‘Jurassic War’; More Animatronics, Suspenseful and Scary

New ‘Trollhunters’ Photos Show Off Guillermo Del Toro’s Animated Netflix Series

‘American Honey’ Star Sasha Lane on Road Tripping and Finding Hope in Flyover Country [Fantastic Fest Interview]

Daniel Craig Still the “First Choice” to Play James Bond

Zack Snyder Teases Deathstroke in ‘Justice League’ Set Photo

Is Doctor Strange’s Eye of Agamotto the Fifth Infinity Stone?

New ‘Fast 8’ Image Reveals When The First Trailer Will Arrive

‘Westworld’ Review: An Exciting, Disturbing, and Thoughtful Reimagining

‘What We Do In The Shadows’ Gets A TV Series Spin-Off

New ‘Rules Don’t Apply’ Trailer: Warren Beatty’s Long-Awaited Return Is Almost Here

NBC Working on ‘The Italian Job’ TV Series

‘Inferno’ Clips: Ben Foster Has the Cure for Humanity

Get A Sneak Peek at ‘Harry Potter and the Chamber of Secrets’ Illustrated Edition

Cool Stuff: Adam Savage Builds A Mobile Movie Theater In A Pick-Up Truck

‘Down Under’ Is the Skin-Crawling, Bleakly Hilarious Race Riot Comedy That 2016 Deserves [Fantastic Fest Review]

Smashing Magazine

Desktop Wallpaper Calendars: October 2016

Automating Art Direction With The Responsive Image Breakpoints Generator

Building Hybrid Apps With ChakraCore

Building Social: A Case Study On Progressive Enhancement

Developing For Virtual Reality: What We Learned

Stretching The Limits Of What’s Possible

Choosing The Right Prototyping Tool

How To Design Error States For Mobile Apps

Understanding REST And RPC For HTTP APIs

The Thumb Zone: Designing For Mobile Users

The Art Of Hand Lettering

Driving App Engagement With Personalization Techniques

Creating Websites With Dropbox-Powered Hosting Tools

How To Boost Your Conversion Rates With Psychologically Validated Principles

Content Security Policy, Your Future Best Friend

Reducing Cognitive Overload For A Better User Experience

How To Scale React Applications

Breaking Out Of The Box: Design Inspiration (September 2016)

Redesigning SGS’ Seven-Level Navigation System: A Case Study

The Building Blocks Of Progressive Web Apps

Freebie: Flat Line UX And E-Commerce Icon Sets (83 Icons, AI, EPS, PNG, SVG)

Web Development Reading List #152: On Not Shipping, Pure JS Functions, And SameSite Cookies

Responsive Images In WordPress With Art Direction

Desktop Wallpaper Calendars: September 2016

Prototyping For Success

USB-IF Publishes Audio Over USB Type-C Specifications

AnandTech — 9/30/2016 8:00:00 AM

The USB Implementers Forum this week published the USB Audio Device Class 3.0 specification, which standardizes audio over the USB Type-C interface. The new spec enables hardware makers to eliminate traditional 3.5mm mini-jacks from their devices and use USB-C ports to connect headsets and other audio equipment. Makers of peripherals can also build audio solutions that use USB-C instead of traditional analog connectors. Developers of the standard hope that eliminating mini-jacks will help to make devices slimmer, smarter and less power hungry.

The industry, led by Intel and some other companies, has been mulling over replacing the traditional 3.5mm mini-jack connector for some time now. The main motives for the replacement are the need to simplify the internal architecture of devices by removing analog and some audio processing components (which enables further miniaturization), to minimize the number of external connectors, to improve power management, and to add smart features to headsets and other audio equipment. We discussed USB Type-C Audio Technology briefly earlier this year and mentioned that this is not the first time that the industry has tried to use USB instead of the good old mini-jack. The important difference between the contemporary initiative and past attempts is that today the primary goal is to replace the 3.5mm jack in portable devices.

As reported, the USB Audio Device Class 3.0 specification supports both analog and digital audio. Analog audio is easy to implement, and it does not impact data transfers or other functionality of USB-C cables since it uses the two secondary bus (SBU) pins. Some device makers may find the analog audio feature of the standard a relatively simple way to add certain smart capabilities to their headsets without a major redesign of hosts. While analog USB-C audio will not help to shrink the dimensions of portables, it could be particularly useful for non-mobile devices, where miniaturization is not crucial, but where port space is at a premium or where additional features either make sense (infotainment, sport equipment, etc.) or are fundamental (VR HMDs).

The USB ADC 3.0 spec defines minimum interoperability across analog and digital devices in order to avoid end-user confusion over incompatible hardware. In fact, all ADC 3.0-compliant hosts should support so-called headset adapter devices, which allow analog headsets to be connected to USB-C ports. However, digital audio is one of the primary reasons why companies like Intel wanted to develop USB-C audio technology in the first place, so expect them to promote it.

According to the USB ADC 3.0 standard, digital USB-C headphones will feature special multi-function processing units (MPUs), which will, to a large degree, define the feature set and quality of headsets. The MPUs will handle host and sink synchronization (a key challenge for digital USB audio), digital-to-analog conversion, low-latency active noise cancellation, acoustic echo canceling, equalization, microphone automatic gain control, volume control and other functions. Such chips will also contain the programmable amplifiers and pre-amplifiers that are currently located inside devices. In addition, USB ADC 3.0-compatible MPUs will support the USB Audio Type-III and Type-IV formats (the latest compressed formats) while retaining compatibility with the formats supported by ADC 1.0 and 2.0. Finally, among the features mandated for USB-C Audio devices are new Power Domains (which allow devices to put certain domains into sleep mode when not in use) as well as BADD (basic audio device definition) 3.0 profiles for saving power and for simplified discovery and management of various audio equipment (each type of device has its own BADD profile).
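As a concrete illustration of how a host can tell these spec generations apart: in USB audio, the interface class code is 0x01 (AUDIO), and the ADC revision is signaled through the interface protocol field of the interface descriptor (0x20 for ADC 2.0; ADC 3.0 is expected to use 0x30). The sketch below parses a standard 9-byte USB interface descriptor along those lines; the protocol values and the example descriptor are illustrative assumptions and should be checked against the published specification before use.

```python
# Hypothetical sketch: classify a USB audio interface from its raw 9-byte
# standard interface descriptor. The 0x30 protocol code for ADC 3.0 is an
# assumption based on the 0x20 code used by ADC 2.0 -- verify against the spec.

AUDIO_CLASS = 0x01          # bInterfaceClass for audio interfaces
INTERFACE_DESCRIPTOR = 0x04  # bDescriptorType for an interface descriptor

ADC_VERSIONS = {
    0x00: "ADC 1.0",
    0x20: "ADC 2.0",
    0x30: "ADC 3.0",  # assumed IP_VERSION code for the new spec
}

def classify_audio_interface(descriptor: bytes):
    """Return the ADC version string for a standard 9-byte interface
    descriptor, or None if it does not describe an audio interface."""
    if len(descriptor) < 9 or descriptor[1] != INTERFACE_DESCRIPTOR:
        return None
    b_interface_class = descriptor[5]     # offset 5: bInterfaceClass
    b_interface_protocol = descriptor[7]  # offset 7: bInterfaceProtocol
    if b_interface_class != AUDIO_CLASS:
        return None
    return ADC_VERSIONS.get(b_interface_protocol, "unknown ADC version")

# Made-up example descriptor for an ADC 3.0 audio control interface.
example = bytes([0x09, 0x04, 0x00, 0x00, 0x00, 0x01, 0x01, 0x30, 0x00])
print(classify_audio_interface(example))  # → ADC 3.0
```

A host-side audio stack would do this kind of dispatch before deciding whether to talk to a device via the legacy ADC 1.0/2.0 paths or the new ADC 3.0 feature set.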

Over the past few months, Conexant has introduced three USB-C Audio MPUs for headsets, docking stations and other equipment. Assuming that these chips are compliant with the USB ADC 3.0 spec from a hardware standpoint, and the software is ready, actual devices featuring USB-C Audio could arrive in the coming months. Pricing of the first USB ADC 3.0-compliant MPUs is unknown, but in general such ICs are not particularly expensive. Moreover, as developers adopt smaller process technologies and a larger number of such chips hit the market, prices should fall further. In the end, it will be interesting to see where digital headphone prices end up. The MPUs will certainly add to the total bill of materials for a set of headphones, but they add new functionality as well, so the big question is how manufacturers factor all of that into device pricing.

A number of companies, including Apple and LeEco, have already introduced smartphones that do not use traditional mini-jacks, and Google added support for USB DAC devices to Android over a year ago. The finalization of the USB ADC 3.0 spec, the introduction of USB-C audio ICs, and the design decisions of smartphone makers all demonstrate that the industry is trying to eliminate 3.5mm jacks from mobile devices. The big question is whether the rest of the industry plans to do the same. It is true that portables are the primary music-listening devices for many people. However, there are dozens of applications that still rely on analog connectors, and hundreds of millions of people who use them to consume or create content. To eradicate 3.5mm jacks completely, USB-C Audio promoters will have to work with thousands of vendors, and that takes time. Consequently, it is too early to say that this is the end for the good old mini-jack.

Images by Conexant, USB IF.

Source: USB IF

Comments

sprockkets - Friday, September 30, 2016:
Lol, less power hungry? Yeah, right, bs. Who's going to power the internal audio speakers and earpiece? All this does is move what was in the phone to a stupid dongle. Screw apple and moto for doing this.

DanNeely - Friday, September 30, 2016:
What you're forgetting is that since your phone maker is the one who will be blamed for the battery not lasting long enough is that outside of extreme budget models they're likely to spend the extra $0.50 on the BoM for a high power efficiency DAC. The tiny company in China who manufactured the cheap no-name dongle you bought on Amazon. Not so much.

ImSpartacus - Friday, September 30, 2016:
Oh god. I didn't think about this. Fuck. This is not good.

Old_Fogie_Late_Bloomer - Friday, September 30, 2016:
Oh goodie, another way for manufacturers to shave off a few cents and fractions of a millimeter. If only there was some way to capitalize on slightly thicker devices, like, I don't know, bigger batteries?

Scabies - Friday, September 30, 2016:
No, you will take your tiny battery and monster screen and you will like it.

rms141 - Friday, September 30, 2016:
Congratulations, the thinner phone means your battery case will now take less room in your pocket.

Zak - Friday, September 30, 2016:
Yeah, no kidding. There was no technical reason to remove the headphone jack. And I bet this standard will not come with iPhones. Apple will have their own "standard". You know, "for the customer benefit".

gsilver - Friday, September 30, 2016:
I believe that the word that you are looking for is "courage"

baka_toroi - Friday, September 30, 2016:
Well, Apple devices use a different connector than USB Type-C, so of course this won't come with iPhones.

cygnus1 - Friday, September 30, 2016:
The Apple Lightning connector has had similar capability for several years. The only difference to my knowledge, is that it doesn't support analog output.


Roku Unveils 2016 Streaming Media Players with 4Kp60 and HDR Support

AnandTech — 9/30/2016 6:30:00 AM

The most affordable STBs from the new lineup are the Roku Express and Roku Express+ players, which connect to 802.11n Wi-Fi, support up to 1080p video, and retail for $30 and $40, respectively. The Roku Express+ is especially notable here as it is the only new player from the company in the last two years to support RCA composite video for older, pre-HDMI televisions. Meanwhile, these entry-level players complement the company's Streaming Stick released earlier this year, which has similar capabilities but is more portable and more expensive ($50).

The considerably more advanced Roku Premiere, Roku Premiere+ and Roku Ultra are based on more powerful SoCs with four CPU cores, enabling 4Kp60 video decoding as well as additional functionality. Furthermore, the premium players also feature dual-band 802.11ac MIMO Wi-Fi connectivity. Among the higher-end players, the Roku Premiere+ and the Roku Ultra also support HDR video via the HDR10 standard (but note that Dolby Vision is not supported). In addition, both players are equipped with microSD card readers for additional channel storage and USB ports for local playback. The baseline 4Kp60 Premiere STB goes for $80, while the HDR-capable Premiere+ costs $100. Meanwhile, the top-of-the-range Roku Ultra is available for $130. For the additional $30, owners get a more advanced remote with a speaker (for the lost remote finder feature), a digital optical audio port, and improved support for lossless audio formats like ALAC and FLAC (but no Dolby Atmos).

The new Roku Express, Roku Premiere, Roku Premiere+ and Roku Ultra STBs will be available in stores on October 9 and can be pre-ordered immediately. The Roku Express+ will be sold exclusively at Walmart.

Gallery: Roku Unveils Streaming Media Players with 4Kp60 and HDR Support

Source: Roku

Comments

svan1971 - Friday, September 30, 2016:
I went with Apple TV simply because of the obnoxious advertising buttons on the Roku 4 remote one of which is for a channel that is bankrupt and out of business (Rdio)


Copyright 2016 AnandTech

Best Laptops: Q3 2016

AnandTech — 9/30/2016 5:15:00 AM

Coming to the end of Q3, there's been a nice refresh of many laptops. Intel has recently launched its first Kaby Lake processors in the U and Y series, which are dual-core, low-wattage parts; Skylake remains the current architecture for quad-core and higher-wattage CPUs. In addition, NVIDIA has recently released its first Pascal graphics cards for laptops, but only at the high end for gaming laptops.

For a full dive into Kaby Lake, check out our coverage here, and for Pascal updates, check out this article.

Low Cost Laptops

Low cost has a whole new meaning now. With Microsoft changing the pricing on Windows for low cost devices, it has opened up a new PC competitor to the Chromebook. There are plenty of compromises with devices that cost at or around $200, especially the TN displays, but performance is enough for light work.

HP Stream 11

HP has once again updated the HP Stream that they launched a couple of years ago. The pop of color sets this apart from a lot of the other devices around, and despite the low price, the build quality is pretty good. The TN display is the biggest detractor, along with the low amount of eMMC storage, but with Windows 10 the 32 GB is sufficient for the OS, and you can add a microSD card for extra apps and data storage. The 11.6” model still features Braswell with the Celeron N3060, but HP has doubled the RAM to 4 GB, which should be a nice boost from the 2 GB they had before. They’ve even added a USB-C port, even though it’s only USB 3.0 speed, and the horrible single-channel wireless has been upgraded to a 2x2 802.11ac NIC. I’d like HP to offer a better display, and more storage, but still for $199 this is a pretty decent laptop, and it’s gotten a lot better without the price going up.

ThinkPad 11e

Built for education, but still offering some of the “must haves” in a notebook, the ThinkPad 11e costs about double the HP Stream 11, but can be had with up to a Core i3-6100U, which is going to offer a lot more performance than the Stream’s Atom processor. It is also only available with SSD storage, unlike most of the notebooks in this price range: you can get 128, 192, or 256 GB of SATA SSD. The 42 Wh battery should offer decent battery life, and it comes with the Intel 802.11ac wireless card. The big letdown is the 1366x768 TN display, but to get to this price you’re going to have to give something up. Starting at less than $500, it offers decent value.


Ultrabooks

Ultrabooks have moved the laptop forward, with sleek and thin designs that still feature good performance with the Core i-U series processors, and even thinner and lighter models are available with the Core m-Y series models. The definition has expanded somewhat over the years, but a good Ultrabook will have at least a 1920x1080 IPS display, SSD storage, and over eight hours of battery life, with many of them over ten now. If I were to recommend an everyday notebook, it would be an Ultrabook. The traditional laptop form factor is less compromised for notebook tasks than most of the 2-in-1 designs, and there are some great choices now.

HP Spectre

HP launched a new entrant in the Ultrabook category with the “world’s thinnest laptop”, which they are calling the Spectre. It’s not quite the lightest, but at 2.45 lbs it is still very light, and the design is stunning. U series Core processors are available with 8 GB of memory, and HP has gone with PCIe storage in 256 or 512 GB offerings. The display is a 1920x1080 IPS model at 13.3 inches. The very thin design precludes USB-A ports, but the Spectre does have three USB-C ports, two of them capable of Thunderbolt 3. The Spectre is just 10.4 mm thick, yet despite this they have still included a keyboard with a solid 1.3 mm of travel. The Spectre starts at $1169.99, which is a lot, but it’s a stunner.

Dell XPS 13

The reigning Ultrabook on the best-of lists is generally the Dell XPS 13. The Infinity Display makes it stand apart, with very thin bezels packing a large display into a small chassis. The downside of this is the webcam, which is mounted below the display, a placement that might make this a non-starter for people who do a lot of video chat; despite this, Dell has crafted a great machine here. Dell has recently updated this to a Kaby Lake processor, up to the Core i7-7500U. The outgoing model did offer Iris graphics on the i7 version, but not right away, so we’ll see if Dell brings back this option once the Iris Kaby Lake processors are available. They’ve also switched from Broadcom NICs to Killer, because Broadcom is exiting the market. They now quote up to 22 hours of battery life on the 1080p model thanks to the improved efficiency of Kaby Lake as well as a 60 Wh battery, up from 56 Wh last year. I love the aluminum outside with the black carbon fibre weave on the keyboard deck, and the black keys make the backlighting stand out with great contrast. The XPS 13 starts at $799 for the i3 model.


ASUS ZenBook UX305CA

ASUS packs a lot into the UX305CA, and you likely get more Ultrabook for the money with this model than pretty much any other. At an MSRP of just $699, the UX305CA features a Skylake Core m3 processor, 8 GB of memory, and 256 GB of SSD storage. ASUS hasn’t yet announced an updated version of this, but the Skylake version still offers plenty of value. Compare that to a Dell XPS 13, which is hundreds more to get a model with that much RAM and storage. The Core m CPU is plenty for most tasks, and with the 4.5 W TDP you get the advantage of a fanless device. ASUS includes a 1920x1080 IPS display as well. If you want a thin and light, all-aluminum laptop but don’t want to break the bank, the ASUS UX305CA deserves serious consideration.

Razer Blade Stealth

Razer has also updated the Stealth with Kaby Lake, and even more importantly they’ve increased the battery capacity as well. The Razer Blade Stealth is a fantastic notebook that was hindered by its battery life, and the new model should offer at least a bit longer time away from the mains. This CNC aluminum notebook mimics the larger Razer Blade 14 in appearance, yet is very thin and light. I also like that Razer offers just a single CPU choice in the Core i7-7500U, and now has 16 GB of memory, but they didn’t increase the starting price of $999. It’s also the only laptop on this list to feature per-key RGB backlighting on the keyboard, allowing some pretty nifty looks. It can be connected to the Razer Core external graphics dock with a single Thunderbolt 3 cable as well, which is going to offer a massive boost in gaming performance when docked. I really like what Razer is doing in this market, and their pricing is very competitive.


Apple MacBook

Love it or hate it, the MacBook is the only Mac to make the list this go-around. Apple updated it to use Skylake Core m CPUs, and although I would expect the rest of their lineup to be updated soon, possibly to Kaby Lake, this is the only MacBook based on a current-generation CPU at the moment. The display is great, and Apple continues to buck the trend and use 16:10 aspect ratio displays. Apple’s MacBook keyboard is a big change from normal laptops, leveraging butterfly switches to keep the travel consistent despite having a very short throw. The trackpad has no click action at all, and instead uses haptic feedback. The biggest controversy is the single USB-C port, which is also the charging port, but despite this the Retina display and fanless design make it a great portable laptop if you need a Mac. It’s pretty hard to recommend the Air at this point, since it still features a low resolution TN display and old processors.


Convertibles

As much as I love an Ultrabook when I need a true laptop experience, there are some great convertible devices out there too which can serve multiple roles. They may not be the best laptop and they may not be the best tablet, but they can generally handle either chore well enough.

Microsoft Surface Pro 4

The best convertible is the Surface Pro 4. This 12.3-inch tablet has basically created the 2-in-1 tablet market, with many competitors now creating similar devices, from Dell to Google and Apple. The Surface Pro 4 certainly sets the bar high compared to the other Windows based devices, and with the legacy software support, is highly productive. All the changes from the Surface Pro 3 to the Surface Pro 4 are subtle, with a slightly larger display in the same chassis size, higher resolution, and Skylake processors, but there are new features too like the lightning fast Windows Hello facial recognition camera. Possibly the best new feature is an accessory, with the new Type Cover offering edge to edge keys and a much larger glass trackpad, meaning the Surface Pro 4 can double as a laptop much better than any previous model could. Starting with the Core m3 processor, the Surface Pro 4 starts at $899, but the more popular Core i5 version with 8 GB of memory and 256 GB of storage costs $1199 without the Type Cover. It’s not the most inexpensive 2-in-1, but it’s a leader in this category.

Microsoft Surface Book

Software issues plagued the Surface Book at launch, but Microsoft seems to have sorted them all out. The Surface Book is now easily recommended as a great 2-in-1 if you need something that’s more of a laptop than a tablet. The 13.5-inch 3:2 display with its 3000x2000 resolution is one of the best displays on a laptop, with a sharp resolution and great contrast. Performance is solid too with either a Core i5-6300U or Core i7-6600U, and you can also get discrete NVIDIA graphics with a custom GT 940M. It’s not a gaming powerhouse, but the NVIDIA option is pretty much double the integrated performance. The all-magnesium body gives the Surface Book a great look and feel, and the keyboard and trackpad are some of the best on any Ultrabook as well. The Surface Book is not perfect though; the device is heavier than traditional Ultrabooks and the weight balance makes it feel heavier than it is. Also, there’s the price, which starts at $1349 and goes all the way up to $3199 for a Core i7 with 16 GB of memory, 1 TB of SSD storage, and the dGPU. Still, it’s got solid performance, good battery life, and a great detachable tablet.

Lenovo Yoga 910

Lenovo pretty much invented the flip-around convertible with their Yoga series, and the latest Yoga 910 takes it all to the next level. It features Kaby Lake processors, up to the Core i7-7500U, along with up to 16 GB of memory, and it keeps the fantastic watch band hinge introduced on the Yoga 3 Pro. The big upgrade this year is the new display, with edge-to-edge bezels similar to the XPS 13. They’ve increased the panel size from 13.3” to 13.9” and offer both a 1920x1080 IPS panel and a 3840x2160 IPS panel. I would assume this means the RGBW subpixel arrangement is also gone, which should help out a lot on color accuracy and contrast. It is available in three colors, starting at $1299, and will be available in October.

Large Laptops

For some people, a 13.3-inch or 14-inch laptop is just too small. Maybe they need more performance, and the quad-core chips in larger laptops and better discrete GPUs are necessary. Maybe they just like the larger display. There are some great large form factor laptops that are available too.

Dell XPS 15

Dell took the winning formula with the XPS 13 and applied it to their larger XPS 15, and the result is a great looking laptop, which has a 15.6-inch display in a smaller than normal chassis. The XPS 15 features quad-core 45-Watt Intel Core processors, and the NVIDIA GTX 960M discrete graphics card, which is a big jump in performance over what’s available in any Ultrabook. You can get a UHD display with 100% of the Adobe RGB gamut as well, although the battery life takes a big hit with that many pixels, so the base 1920x1080 offering may be better suited to those that need a bit more time away from the power outlet. The keyboard and trackpad are both excellent, just like the XPS 13, and it features the same styling cues. The XPS 15 starts at $999.

ASUS ZenBook UX501VW

ASUS makes some pretty fantastic looking aluminum notebooks in their ZenBook series, and the UX501VW is a great looking 15-inch notebook. It comes with a Core i7-6700HQ and GTX 960M, so performance will be excellent, and ASUS offers both 1920x1080, and 3840x2160 IPS display choices. It weighs in at 2.06 kg, which is decent for a notebook this size. ASUS generally comes in a bit less expensive than an XPS 15 as well.

Comments

Eden-K121D - Friday, September 30, 2016:
Does anybody care?

arayoflight - Friday, September 30, 2016:
Except all those who own a laptop, no.



The SilverStone SX700-LPT SFX 700W PSU Review

AnandTech — 9/30/2016 5:00:00 AM

As PC gaming continues to grow, more and more PCs are finding their way into the living room. As such, the demand for small, elegant computers that are powerful enough to be used as gaming machines is constantly on the rise. Several reputable manufacturers have presented products specifically designed for living room PC gaming, from subtle gaming cases to specialized keyboards/mice.

One major challenge with developing these small form factor (SFF) gaming systems is power. A gaming PC can require a lot of power, which can be an issue with cases that only support SFX PSUs. As the market for SFX units is small and such systems were not expected to have high power requirements to begin with, there are very few designs available with a power output higher than 500 Watts.

SilverStone is a company that is strongly focused on the design and development of SFF cases, with several of their recent products designed to be used primarily as gaming machines. They are one of the very few companies that offer advanced, high-performance SFX PSUs. In this review we are taking a look at the SX700-LPT, their latest and greatest SFX PSU design. The SX700-LPT is 80Plus Platinum certified and has a maximum power output of 700 Watts, theoretically making it the most advanced consumer SFX PSU available today.

Packaging and Bundle

SilverStone ships the SX700-LPT in a cardboard box that is relatively large for an SFX PSU. The box is very sturdy and the PSU is sandwiched between thick polystyrene foam pieces, providing ample shipping protection. The most basic features of the PSU are listed on the front of the box, and more details are printed on the back.

The bundle of the SX700-LPT is spartan, with the company supplying only a manual, an AC power cable and four black mounting screws. The manual is extensive and detailed. SilverStone does not provide an SFX-to-ATX adapter with the SX700-LPT, which is peculiar considering that they do so with less powerful units.

This is a fully modular design, so every cable can be detached, including the 24-pin ATX cable. All of the cables are "flat" and ribbon-like, including the thick 24-pin ATX cable; apparently, SilverStone is trying to save as much space as possible. Be warned that these cables are much shorter than those of a regular ATX unit, with the ATX power cable being just 30 cm (11.8") long. Every cable uses black wires and black connectors, with the sole exception of the PSU-side connectors of the PCI Express power cables, which are blue.

Comments

Eden-K121D - Friday, September 30, 2016:
Interesting product although I don't think anyone needs SFC PSUs of more than 500 Watts



ADATA Launches XPG SX8000: High-End M.2 NVMe SSD Featuring 3D MLC NAND

AnandTech — 9/29/2016 5:00:00 PM

ADATA on Thursday introduced its first lineup of SSDs powered by 3D MLC NAND flash memory. The XPG SX8000 drives promise up to 2.4 GB/s read speed as well as the enhanced reliability of 3D NAND.

ADATA’s XPG SX8000 lineup of SSDs will include 128 GB, 256 GB, 512 GB and 1 TB configurations, offering different levels of performance at different price points. The drives are based on Silicon Motion’s SM2260 controller (which sports two ARM Cortex cores, eight NAND flash channels, LDPC ECC technology, 256-bit AES support and so on) and 3D MLC NAND flash from an unknown manufacturer (IMFT is the most likely supplier, but SK Hynix is a possible supplier as well). The drives come in the M.2-2280 form factor and use a PCIe 3.0 x4 interface.

The manufacturer rates the XPG SX8000’s sequential read performance at up to 2400 MB/s and its write performance at up to 1000 MB/s when pseudo-SLC caching is used. As for random performance, the new drives can deliver up to 100K/140K 4KB read/write IOPS. It is important to note that the 128 GB model is considerably slower than the other SKUs in the family, and the 512 GB configuration is needed to demonstrate the full capabilities of the SM2260 controller.
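For context, the quoted random-I/O figures can be converted into equivalent throughput by multiplying by the 4 KB transfer size. This back-of-the-envelope arithmetic is ours, not the article's, and assumes 4096-byte transfers:

```python
# Convert rated random IOPS into equivalent throughput.
# Assumes 4 KiB (4096-byte) transfers; 1 MB is taken as 10**6 bytes,
# matching the convention used for the sequential ratings.

def iops_to_mb_per_s(iops: int, block_bytes: int = 4096) -> float:
    """Throughput in MB/s implied by a given IOPS rating."""
    return iops * block_bytes / 1_000_000

print(f"random read  ~ {iops_to_mb_per_s(100_000):.1f} MB/s")  # 100K read IOPS
print(f"random write ~ {iops_to_mb_per_s(140_000):.1f} MB/s")  # 140K write IOPS
```

So the 100K/140K IOPS ratings work out to roughly 410 MB/s and 573 MB/s of random throughput, well below the sequential peaks, which is typical for NAND-based SSDs.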

Since the SX8000 SSDs belong to ADATA’s flagship XPG lineup, the company ships these drives with a five-year warranty. Moreover, thanks to the improved reliability of 3D NAND compared to traditional planar NAND made using ultra-small process technology, the manufacturer also rates the XPG SX8000 for two million hours MTBF, 0.5 million (or 25%) higher compared to previous-gen XPG SSDs.

For several years, Samsung has been the only supplier of high-end SSDs based on 3D MLC NAND flash memory, offering high performance and improved reliability. Recently, companies like IMFT have started mass production of their 3D NAND for SSDs, and independent drive makers can now release their own SSDs featuring 3D MLC flash. Being one of the largest suppliers of NAND-based storage devices, ADATA is naturally among the first to offer advanced SSDs powered by 3D MLC with its XPG SX8000 family. What is noteworthy is that last month Micron (which co-owns IMFT with Intel) decided to cancel its 3D MLC/SM2260-based Crucial Ballistix TX3 M.2 SSDs for an undisclosed reason. As a result, ADATA gets to join a rather exclusive club of non-Samsung M.2 NVMe drive vendors. Unfortunately, prices have yet to be announced, so we'll have to see if (and by how much) ADATA pushes prices below what Toshiba and Samsung have been charging for their own M.2 NVMe SSDs.

Finally, along with today's release, ADATA is also prepping an upgraded version of the XPG SX8000 due in late October, which will feature increased performance. The upcoming SSDs are primarily geared towards desktop users and will require a heatsink, making them incompatible with the vast majority of notebooks.

Gallery: ADATA Launches XPG SX8000: High-End M.2 SSD Featuring 3D MLC NAND

Source: ADATA


Giveaway: OCZ VX500 & RD400 SSDs

AnandTech — 9/29/2016 11:30:00 AM

It’s been a while since we’ve last had a hardware giveaway, so for those of you looking for some new hardware, you’re in luck. As part of the recent launch of our updated forums, the forum community team will be holding an Ask The Experts-styled Q&A session with OCZ, and along with that will be giving away a few OCZ SSDs. The prizes include the 512GB and 1TB versions of the recently launched OCZ VX500 SATA SSD, and a 512GB OCZ RD400 NVMe M.2 SSD.

The giveaway itself is open now, and will be running through October 14th on our forums. Meanwhile, the community team is soliciting questions for the Q&A, so please be sure to submit any questions you have. You can find the full details for submitting questions, along with entry instructions for the giveaway itself, over on the storage section of our forums.

Source: AnandTech Forums

View All Comments

  • Crono

  • - Thursday, September 29, 2016 -

  • link

  • Nice promo! I haven't had an OCZ drive in a while (OCZ Vertex 2). Speed of the Toshiba OCZ NVME SSD looks great, though. Thinking of buying the 256GB one if I don't win for my primary boot/OS drive.
  • Reply
  • 1



The Lenovo ThinkPad X1 Yoga Review: OLED and LCD Tested

AnandTech — 9/29/2016 7:30:00 AM

Earlier this year at CES, Lenovo took the wraps off their latest lineup of premium business class notebooks, and they revamped the X1 lineup completely. Originally the X1 was just the X1 Carbon notebook, but Lenovo has decided to expand the X1 series to include the aforementioned X1 Carbon, along with the X1 Yoga and X1 Tablet. So the ThinkPad Yoga is now the ThinkPad X1 Yoga, and as such it keeps the same thin and light design of the X1 Carbon.

Thin and light is the key here, and the X1 Yoga doesn’t disappoint. The X1 Yoga is only 16.8 mm (0.66”) thick and weighs 1270 grams (2.8 lbs). While not the thinnest and lightest notebook around, don’t forget that the X1 Yoga also features a 360° hinge, allowing it to be used in several touch-centric modes, including tablet, stand, and tent, just like the other Yoga devices Lenovo sells. Lenovo also pointed out that the X1 Yoga is even thinner and lighter than the original X1 Carbon, despite including touch and the convertible hinges.

Lenovo is offering plenty of choices to outfit the X1 Yoga, with a baseline offering of an Intel Core i5-6200U and 8 GB of LPDDR3-1866. You can upgrade to the i5-6300U, i7-6500U, or i7-6600U, with RAM offerings up to 16 GB. For storage, Lenovo has gone all-NVMe, with choices from 128 GB to 512 GB. On the display side, the 14-inch panel can be either a 1920x1080 IPS, 2560x1440 IPS, or 2560x1440 OLED model.

Lenovo also offers plenty of connectivity on the X1 Yoga, including three USB 3.0 ports, HDMI, DisplayPort, and a OneLink connector for its docking stations. There are no USB Type-C ports, but the X1 Yoga does have microSD support for additional storage, and LTE-A as an option for those who want to be as untethered as possible. Wireless is supplied via the Intel 8260 wireless card, and as a business-focused device it can be had with vPro as well.

Lenovo also includes a stylus built into the laptop, which charges while docked. It’s not as big or as comfortable as the one included with something like the Surface Book, but for many people the fact that it docks inside the chassis more than makes up for that: the stylus is always available and less likely to get misplaced.

Lenovo has gone with a 52 Wh battery for this laptop, putting it over the 50 Wh baseline for Ultrabooks. That’s pretty good considering the inclusion of a stylus and the thin nature of this device.

Comments

  • mooninite - Thursday, September 29, 2016:
    $1800 and no Iris graphics? I'll pass.

  • ddriver - Thursday, September 29, 2016:
    Knock yourself out.

  • Senti - Thursday, September 29, 2016:
    I expect USB Type-C in what you call a "premium notebook" today. And better than Intel HD 520 graphics... It's sad to see that OLEDs are still "not quite ready". Battery life with web browsing was the last nail in the coffin.



BlackBerry Stops Development of Smartphones, Set to Outsource Hardware Development

AnandTech — 9/28/2016 10:00:00 AM

BlackBerry on Wednesday said it would cease internal development of its hardware and will transfer that function to its partners. While BlackBerry-branded devices will remain on the market, BlackBerry itself will focus completely on software and will not invest in device development. The announcement edges the company closer to exiting the hardware business after years of considering such a move.

“The company plans to end all internal hardware development and will outsource that function to partners,” said John Chen, CEO and chairman of BlackBerry. “This allows us to reduce capital requirements and enhance return on invested capital," continued Chen.

Less than three years ago, BlackBerry inked a strategic partnership with Foxconn, under which the two companies jointly developed certain BlackBerry-branded smartphones; Foxconn then built the hardware and managed the entire inventory associated with these devices. Now, the company intends to cease all of its hardware-related R&D activities and outsource this function to others. BlackBerry will instead focus on development of extra-secure versions of Google’s Android operating system (recently the company introduced its own version of Android 6.0 that is used on the DTEK50 smartphone) as well as applications with enhanced security available through its BlackBerry Hub+ service.

In addition to Foxconn, BlackBerry has worked with other hardware makers. BlackBerry’s DTEK50 smartphone, released earlier this year, resembles Alcatel’s Idol 4 handset developed by China’s TCL. Therefore, right now BlackBerry has at least two partners that can build smartphones carrying the well-known brand on their own. In fact, this deal with BlackBerry puts TCL in an interesting position, because it can now make handsets under both the BlackBerry and Palm brands (in addition to the Alcatel trademark, which TCL uses for its own smartphones).

Today, BlackBerry also announced its first licensing agreement, with joint venture PT Merah Putih, an Indonesia-based company. Under the terms of the agreement, the latter manages production and distribution of BlackBerry-branded devices running BlackBerry’s Android software. While it is not completely clear to what degree PT Merah Putih develops its hardware in-house (typically, such companies outsource design of their products to others), it is more than likely that the actual devices are made by an ODM, such as Foxconn or TCL.

BlackBerry has been considering an exit from the hardware business for several years now, ever since the company appointed John Chen as CEO. The head of the company has said on multiple occasions that software and security technologies are BlackBerry's main strengths, and warned that the firm could drop hardware completely if the business did not turn profitable. As it turns out, BlackBerry will cease development of its own smartphones but will allow others to continue it. Therefore, BlackBerry-branded devices will remain on the market, but the company will not spend big money on their development.

Source: BlackBerry

Comments

  • BrokenCrayons - Wednesday, September 28, 2016:
    BlackBerry shedding hardware development is no surprise. In fact, what is a surprise is the company's desire to continue attempting to function at all rather than just shutting down. I think what'd be best for them at this point is to start offering their security features via Google Play. If they went ad-supported, they could develop BlackBerry-themed SMS apps, mail, and a UI dress-up for free. The company would have to be pretty small at that point, but I think the best use of the brand identity is in putting their paw print logo on Android apps.

  • ddriver - Wednesday, September 28, 2016:
    That's right, forget the hardware and focus on the hype. You will need quite a lot of hype to make dummies pay a premium for a brand that has failed to produce anything worthy for years.

  • SeleniumGlow - Wednesday, September 28, 2016:
    What I hoped was that BlackBerry would license out their BlackBerry OS to other phone manufacturers like Microsoft did (or does?) with Windows. It was a well-designed OS in my point of view.

  • BillBear - Wednesday, September 28, 2016:
    Does anybody remember when the iPhone was doomed because it didn't have a physical keyboard?



Best Android Phones: Q3 2016

AnandTech — 9/28/2016 6:00:00 AM

After the usual summer respite, our Q3 2016 Best Android Smartphones guide arrives in the middle of the fall frenzy, which has already produced a number of new phones, including the Samsung Galaxy Note7, Apple iPhone 7 and 7 Plus, Honor 8, the new modular Moto Z family from Motorola, and even a new brand from Huawei—Nova—to name just a few. There are still several high-profile products to come, too, such as the LG V20, two new Nexus phones, and a new Mate phablet from Huawei. Several of the Chinese OEMs are also releasing some interesting phones at reasonable prices that will interest our international readers.

Keeping in mind that our guide only includes phones that we’ve reviewed, and that we do not have the bandwidth to review every phone that’s available, here are the Android phones we currently like.

Best Android Phablet:

Samsung Galaxy Note5

Until Samsung sorts out the Galaxy Note7’s battery issue, the Note5 remains our top phablet choice. Its 5.7-inch 2560x1440 SAMOLED display is still one of the best available, with excellent black levels, reasonably good brightness, and several different display modes ranging from a very accurate sRGB mode to a couple of wider gamut modes with more vivid colors. Its 16MP rear camera with PDAF and OIS is also one of the best we’ve tested. The Note7’s camera focuses and snaps photos more quickly, and produces better images in low-light scenes, but the Note5’s camera still has the edge in daytime image quality.

The Note5’s Exynos 7420 SoC was the first to use Samsung’s 14nm LPE FinFET process, and its four ARM Cortex-A57 CPU cores running at up to 2.1GHz and four Cortex-A53 cores running at up to 1.5GHz still deliver quick performance. Its 4GB of LPDDR4 RAM gives Samsung’s memory-hungry TouchWiz UI some extra room to work.

The piece of hardware that really makes Samsung’s Note line unique is the S-Pen. Being able to jot down notes, sketch pictures, sign documents, annotate screenshots, and select and manipulate text with the active stylus makes the Note5 a good choice for people who use their phone for work as well as communication and killing time. Just be sure not to insert it into its silo backwards, or you’ll have to break the phone to get it back out.

People have a love/hate relationship with TouchWiz, and while some questionable design elements remain and it suffers from some performance hiccups, it does include some useful phablet features, including the ability to shrink the whole screen by pressing the home button three times, the option to use a smaller keyboard for one-handed thumb typing, and the two-pane Multi Window feature that allows you to work in two apps at the same time. The Note5 should receive an update to Android 7 at some point in the future, but Samsung has not set an exact date.

Best High-End Android Smartphones:

Galaxy S7 and HTC 10

Earlier this year, Samsung released its seventh generation Galaxy S series. The Galaxy S7 improves upon the design and features of the popular Galaxy S6. The design is very similar, but Samsung has tweaked the curvature of the back, edges, and cover glass to make the phone significantly more ergonomic. The chassis does get thicker and heavier, but this allows for a significant reduction to the camera hump and an increase in battery capacity.

As far as specs go, the Galaxy S7 comes in two versions. Both have a 5.1-inch 2560x1440 SAMOLED display, 32GB or 64GB of UFS 2.0 NAND, 4GB of LPDDR4 memory, a 12MP Sony IMX260 camera with an f/1.7 aperture, and a 3000mAh battery. Depending on where you live, you'll get either Qualcomm's Snapdragon 820 or Samsung's Exynos 8890 SoC, both of which use custom ARM CPU cores. More specifically, the US, Japan, and China versions receive the Snapdragon 820, while the rest of the world gets the Exynos 8890.

Regardless of which Galaxy S7 you get, you'll be getting the best hardware Samsung has to offer. The Galaxy S6 was a good phone, but it was not perfect. The S7 addresses several of its shortcomings with a more ergonomic design, a larger battery, support for microSD cards, and the return of IP68 dust and water protection.

The other phone worth discussing at the high end is the HTC 10, which manages to best the Galaxy S7 in at least a few areas. In terms of audio quality, design, OEM UI, and other areas like perceptual latency, I would argue that HTC is clearly ahead of Samsung. HTC also has proper USB 3.1 and USB-C support, which makes the device more future-proof than the Galaxy S7’s microUSB connector. The front-facing camera is also clearly better, on the basis of having OIS and optics that can actually focus on a subject instead of being set to infinity at all times.

However, Samsung is clearly ahead in display quality, and its camera is the fastest I’ve ever seen in any phone, bar none. Samsung is also shipping better WiFi implementations right now in terms of antenna sensitivity and software, along with IP68 water resistance and magstripe payments for the US and South Korea.

To further muddy the waters, there are areas where HTC and Samsung trade blows. While Samsung’s camera is clearly faster, HTC often has better detail in its images, especially at the center of the frame, while the Galaxy S7 has better detail at the edges. HTC's noise reduction tends to be a bit less heavy-handed, and sharpening artifacts aren’t nearly as strong as they are on the Galaxy S7. HTC’s larger sensor also means that it’s possible to get actual DSLR-like bokeh with macro shots, which is honestly something I’ve never seen from any other smartphone camera.

Overall, I think it’s fair to say that the HTC 10 is a solid choice. If I had to pick between the two I would probably lean towards the HTC 10, but this is based on personal priorities; I don’t think you can really go wrong with either. The HTC 10 currently costs 699 USD when bought unlocked through HTC, in Carbon Gray or Glacial Silver with 32 GB of internal storage. That's a bit more than the Galaxy S7, but considering that smartphones are often used for 2-3 years now, I don’t think 50 dollars should be a major point in favor of or against a phone.

Best Mid-Range Android Smartphone:

OnePlus 3

The OnePlus 3, with its impressive list of hardware at a reasonable price, is still our (upper) mid-range choice. The Motorola Moto Z Play Droid is about the same price and includes a nice display, a good camera, and a large battery—not to mention support for Moto Mods such as the Hasselblad True Zoom Mod—but its eight Cortex-A53 CPU cores and Adreno 506 GPU cannot offer the same level of performance as the OnePlus 3’s Snapdragon 820 SoC. The Moto Z Play Droid also comes with less RAM (3GB), less internal storage (32GB), and lacks 802.11ac Wi-Fi. Its little brother, the Moto G4 Plus, costs less than the OnePlus 3—$299 for 4GB of RAM and 64GB of internal NAND—but again falls short of the OnePlus 3’s overall user experience.

Huawei’s Honor 8 is another contender that costs the same as the OnePlus 3 and is available in the US and internationally. We’re not far enough into our review to give it a thumbs up or thumbs down, but it’s a nice looking phone with decent specs. It also has a smaller 5.2-inch display, giving it a smaller footprint than the OnePlus 3.

When we first looked at the OnePlus 3, Brandon discovered that the display’s grayscale and color accuracy were quite poor, its video quality was subpar, and it evicted apps from RAM too aggressively, especially considering that it comes with 6GB of LPDDR4; however, in subsequent software updates OnePlus has either fixed or improved each of these issues.

The build quality of the OnePlus 3 is excellent, its 16MP rear camera with PDAF and OIS takes nice photos, and its Snapdragon 820 SoC delivers good performance. It also includes 64GB of internal UFS 2.0 NAND storage but no microSD slot, and the usual array of wireless connectivity options including NFC—something the OnePlus 2 lacked. The OnePlus 3 comes in only one configuration and costs $399.

Best Budget Android Smartphones:

Huawei Honor 5X (US) and Xiaomi Redmi Note 3 Pro

While the rest of the planet is awash with lower-cost phones containing decent hardware, it’s difficult to recommend a budget smartphone for the US market. Take the Xiaomi Redmi Note 3 Pro, for example. Its Snapdragon 650 SoC contains two high-performance Cortex-A72 CPU cores running at up to 1.8GHz and four Cortex-A53 cores at up to 1.4GHz, which easily outperforms the standard octa-core A53 SoCs common at this price point. Its performance is really quite remarkable, rivaling some upper mid-range and flagship devices. The Adreno 510 GPU supports the latest graphics APIs, including support for tessellation, and is fast enough to play most games currently available. Battery life is excellent too, thanks in part to a large 4050 mAh battery. There’s even an infrared blaster and support for 802.11ac Wi-Fi and FM radio.

Of course, some sacrifices need to be made to reach such a low price. The Redmi Note 3 Pro’s weakest component is its 5.5-inch 1080p IPS display, whose poor black level and inaccurate white point and gamma calibration hurt image quality. The panel’s backlight does not fully cover the sRGB gamut, which further reduces color accuracy. While not perfect, though, the phone clearly moves the bar higher in this segment and raises our expectations for future lower-cost phones.

Unfortunately, the Redmi Note 3 Pro, like most phones made by Chinese OEMs, is not sold in the US and does not support the LTE frequencies used by US carriers. Instead, US consumers must choose from a number of underwhelming phones such as the LG X Power, with its Snapdragon 212 SoC that uses four Cortex-A7 CPU cores—not even A53s—and 1.5GB of RAM. The Huawei Honor 5X cannot match the Redmi Note 3 Pro’s performance or photo quality, but it remains a solid option for the US despite being almost a year old. Even the recently released Moto G4 and G4 Play really do not bring anything new. The Honor 5X recently received a long-awaited update to Android 6.0 and EMUI 4.0 and is still available for about $200.

Comments

  • Meteor2 - Wednesday, September 28, 2016:
    Mmmm, I think I would have waited a week and seen what Google has to announce next week before posting this review. I suspect we're going to see the 'best Android phone' then. Maybe even what's next after Android.

  • Meteor2 - Wednesday, September 28, 2016:
    Oh and, the best Android phones are the Nexus or the forthcoming Pixel phones. Nothing else is getting Android 7 for ages, and it's all you could want in a phone OS.

  • tsk2k - Wednesday, September 28, 2016:
    Next week 2/3rds of this list can be replaced by the Google Pixel phone.



NVIDIA Teases Xavier, a High-Performance ARM SoC for Drive PX & AI

AnandTech — 9/28/2016 3:45:00 AM

Ever since NVIDIA bowed out of the highly competitive (and high pressure) market for mobile ARM SoCs, there has been quite a bit of speculation over what would happen with NVIDIA’s SoC business. With the company enjoying a good degree of success with projects like the Drive system and Jetson, signs have pointed towards NVIDIA continuing their SoC efforts. But in what direction they would go remained a mystery, as the public roadmap ended with the current-generation Parker SoC. However we finally have an answer to that, and the answer is Xavier.

At NVIDIA’s GTC Europe 2016 conference this morning, the company teased just a bit of information on its next-generation Tegra SoC, which it is calling Xavier (ed: in keeping with comic book codenames, this is Professor Xavier of the X-Men). Details on the chip are light – it won’t even sample until over a year from now – but NVIDIA has laid out just enough information to make it clear that the Tegra group has left mobile behind for good, and the company is now focused on high-performance SoCs for cars and other devices further up the power/performance spectrum.

So what’s Xavier? In a nutshell, it’s the next generation of Tegra, done bigger and badder. NVIDIA is essentially aiming to capture much of the complete Drive PX 2 system’s computational power (2x SoC + 2x dGPU) on a single SoC. This SoC will have 7 billion transistors – about as many as a GP104 GPU – and will be built on TSMC’s 16nm FinFET+ process. (To put this in perspective, at GP104-like transistor density, we'd be looking at an SoC nearly 300mm² in size.)

Under the hood, NVIDIA has revealed just a bit of what to expect. The CPU will be composed of 8 custom ARM cores. The name “Denver” wasn’t used in this presentation, so at this point it’s anyone’s guess whether this is Denver 3 or another new design altogether. Meanwhile on the GPU side, we’ll be looking at a Volta-generation design with 512 CUDA cores. Unfortunately, we don’t know anything substantial about Volta at this time; the architecture was bumped back on NVIDIA’s previous roadmaps in favor of Pascal, and as Pascal just launched in the last few months, NVIDIA hasn’t said anything further about Volta.

Meanwhile, NVIDIA’s performance expectations for Xavier are significant. As mentioned before, the company wants to condense much of Drive PX 2 into a single chip. With Xavier, NVIDIA wants to reach 20 Deep Learning Tera-Ops (DL TOPS), a metric for measuring 8-bit integer operations. 20 DL TOPS happens to be what Drive PX 2 can hit, and about 43% of what NVIDIA’s flagship Tesla P40 can offer in a 250W card. And perhaps more surprising still, NVIDIA wants to do all of this at 20W, or 1 DL TOPS per watt, one-quarter of the power consumption of Drive PX 2 - a lofty goal given that Xavier is based on the same 16nm process as Pascal and all of the Drive PX 2’s various processors.
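These throughput and efficiency figures are easy to sanity-check with a quick back-of-the-envelope script. Xavier's numbers are NVIDIA's stated targets from above; the Tesla P40's 47 INT8 TOPS and 250W board power are that card's published specs:

```python
# Back-of-the-envelope check of the efficiency figures quoted above.
# Xavier's numbers are NVIDIA's stated targets; the Tesla P40 figures
# (47 INT8 TOPS, 250 W) are that card's published specs.

xavier_tops, xavier_watts = 20.0, 20.0
p40_tops, p40_watts = 47.0, 250.0

# Xavier vs. Tesla P40 throughput (the "about 43%" above)
print(f"Xavier / P40 throughput: {xavier_tops / p40_tops:.1%}")

# Efficiency in DL TOPS per watt
print(f"Xavier: {xavier_tops / xavier_watts:.2f} TOPS/W")
print(f"P40:    {p40_tops / p40_watts:.2f} TOPS/W")

# "One-quarter of the power consumption of Drive PX 2" at the same
# 20 DL TOPS implies the PX 2 platform draws roughly:
print(f"Implied Drive PX 2 power: {4 * xavier_watts:.0f} W")
```

Note that the implied ~80W figure for Drive PX 2 follows purely from the "one-quarter of the power" claim above, not from a published NVIDIA spec.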

NVIDIA’s envisioned application for Xavier, as you might expect, is focused on further ramping up their automotive business. They are pitching Xavier as an “AI Supercomputer” in relation to its planned high INT8 performance, which in turn is a key component of fast neural network inferencing. What NVIDIA is essentially proposing then is a beast of an inference processor, one that unlike their Tesla discrete GPUs can function on a stand-alone basis. Coupled with this will be some new computer vision hardware to feed Xavier, including a pair of 8K video processors and what NVIDIA is calling a “new computer vision accelerator.”

Wrapping things up, as we mentioned before, Xavier is a far-future product for NVIDIA. While the company is teasing it today, the SoC won’t begin sampling until Q4 of 2017, which in turn implies that volume shipments won’t happen until 2018 at the earliest. That said, with their new focus on the automotive market, NVIDIA has shifted from an industry of agile competitors and cut-throat competition to one where their customers would like as much of a heads-up as possible. So these kinds of early announcements are likely to become par for the course for NVIDIA.


GTC Europe 2016: NVIDIA Keynote Live Blog with CEO Jen-Hsun Huang

AnandTech — 9/28/2016 12:17:00 AM

AnandTech Live Blog: the newest updates are at the top.

04:49AM EDT - 'NVIDIA invented the GPU, and 10 years ago we invented GPU computing'

04:49AM EDT - 'The ability to perceive and the ability to learn are fundamentals of AI - we now have the three pillars to solve large-scale AI problems'

04:48AM EDT - 'These three achievements are great: we now have the ability to simulate human brains: learning, sight and sound'

04:48AM EDT - 'Reminder, humans don't achieve 0% error rate'

04:47AM EDT - 'The English language is fairly difficult for computers to understand, especially in a noisy environment'

04:47AM EDT - Correction, Microsoft hit 6.3% error rate in speech recognition

04:45AM EDT - 'Speech will not only change how we interact with computers, but what computers can do'

04:45AM EDT - 'Speech recognition is one of the most researched areas in AI'

04:44AM EDT - 'Traditional CV approaches wouldn't ever work for auto'

04:44AM EDT - 'One of the big challenges is autonomous vehicles'

04:44AM EDT - 'Now, Deep Learning can beat humans at image recognition - it has achieved 'Super Human' levels'

04:43AM EDT - 'As we grow, the computational complexity of these networks becomes even greater'

04:42AM EDT - 'e.g., 2015 where Deep Learning beat humans at ImageNet, 2016 where speech recognition reaches sub-3% in conversational speech'

04:41AM EDT - 'Now, not a week goes by when there's a new breakthrough or milestone reached'

04:40AM EDT - 'One of the most exciting events in computing for the last 25 years'

04:40AM EDT - 'The neural network out of that paper, 'AlexNet' beat seasoned Computer Vision veterans with hand tuned algorithms at ImageNet'

04:39AM EDT - 'ImageNet Classification with Deep Convolutional Neural Networks' by Alex Krizhevsky at the University of Toronto

04:38AM EDT - 'Deep Neural Nets were then developed on GPUs to solve this'

04:38AM EDT - 'The handicap lasted two decades'

04:37AM EDT - 'It required a large amount of data to write its own software, which is computationally exhausting'

04:37AM EDT - 'Deep Learning was in the process, and the ability to generalize learning was a great thing, but it had a handicap'

04:35AM EDT - 'A brand new type of processor is needed for this revolution - it happened in 2012 with the Titan X'

04:35AM EDT - 'Windows, ARM, Android'

04:35AM EDT - 'In each era of computing, a new computing platform emerged'

04:34AM EDT - 'Now, we have software that writes software. Machines learn. And soon, machines will build machines.'

04:34AM EDT - '10 years later, we have the AI revolution'

04:33AM EDT - 'We could put high performance compute technology in the hands of 3 billion people'

04:33AM EDT - 'In 2006, the mobile revolution and Amazon AWS happened'

04:33AM EDT - 'Several things at once came together to make the PC era something special'

04:32AM EDT - 'We're at the beginning of something important, the 4th industrial revolution'

04:32AM EDT - 'GPUs can do what normal computing cannot'

04:31AM EDT - JSH to the stage

04:30AM EDT - Mentioning AlphaGO

04:30AM EDT - 'Using AI to sort trash'

04:30AM EDT - 'Using AI to deliver relief in harsh conditions' (drones)

04:30AM EDT - 'Deep Learning is helping farmers analyze crop data in days what used to take years'

04:29AM EDT - Opening video

04:27AM EDT - We're about to start

04:25AM EDT - This is essentially GTC on the road - they're doing 5 or 6 of these satellite events around the world after the main GTC

04:25AM EDT - This is a satellite event to the main GTC in San Francisco. By comparison the main GTC has 5000 attendees, this one has 1600-ish

04:24AM EDT - I'm here at the first GTC Europe event, ready to go for the Keynote talk hosted by CEO Jen-Hsun Huang.


Xiaomi Mi 5s and Mi 5s Plus Announced

AnandTech — 9/27/2016 1:25:00 PM

Xiaomi announced two new flagship smartphones today. The Mi 5s and Mi 5s Plus are updates to the Mi 5 / Mi 5 Pro and Mi 5 Plus phones that were announced at MWC 2016 in February, and pack some new hardware inside a new brushed-aluminum chassis.

Both the Mi 5s and Mi 5s Plus use Qualcomm’s Snapdragon 821 SoC, an updated version of the popular Snapdragon 820 that’s inside the Mi 5 phones and many of the other flagship phones we’ve seen this year. With Snapdragon 821, max frequencies increase to 2.34GHz for the two Kryo CPU cores in the performance cluster and 2.19GHz for the two Kryo cores in the power cluster. Complementing the quad-core CPU is Qualcomm’s Adreno 530 GPU, which also sees a small 5% increase in peak frequency, to 653MHz. While it’s unclear if the 821 includes any micro-architectural changes, Qualcomm has likely done some layout optimization, as it’s quoting a 5% increase in power efficiency. The Mi 5s and Mi 5s Plus still pair the SoC with LPDDR4 RAM and UFS 2.0 NAND like their predecessors.

Note: We're still trying to confirm the Mi 5s and Mi 5s Plus specifications with Xiaomi.

The Mi 5s still comes with a 5.15-inch 1080p IPS LCD. This is an extended-color-gamut panel that will display exceptionally vivid, but inaccurate, colors. Xiaomi claims the display will reach a peak brightness of 600 nits, which it achieves by increasing the number of LEDs in the backlight assembly from the typical 12 to 14 found in most edge-lit IPS displays to 16, a feature also shared with the Mi 5. This improves power efficiency by 17%, according to Xiaomi, presumably from running more LEDs at lower individual output levels. The Mi 5s Plus has a larger 5.7-inch 1080p IPS display with a pixel density of 386ppi, which is still decent for an LCD.
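The quoted pixel density is straightforward to verify: ppi is just the diagonal pixel count divided by the diagonal size in inches. A quick check:

```python
# Pixel density check for the Xiaomi panels: ppi is the diagonal
# resolution in pixels divided by the diagonal size in inches.
import math

def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

print(f"Mi 5s Plus: {ppi(1920, 1080, 5.7):.0f} ppi")   # matches the 386ppi spec
print(f"Mi 5s:      {ppi(1920, 1080, 5.15):.0f} ppi")  # the smaller panel
```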

Xiaomi Mi 5s

While the front camera still uses a 4MP sensor with large 2.0μm pixels, both new phones receive new rear cameras. The Mi 5s looks to improve low-light performance by using a larger format Sony IMX378 Exmor RS sensor that features 1.55µm pixels; however, image resolution drops to 12MP, the same as Samsung’s Galaxy S7 and Apple’s iPhone 7. The Mi 5s Plus has the more interesting camera setup, employing dual 13MP sensors. Similar to Huawei’s P9 and Honor 8, the Mi 5s Plus uses one sensor for capturing color images and the other sensor for capturing black and white images. The black and white camera lacks an RGB Bayer filter, allowing it to capture more light than a color camera. By combining the output of both sensors, the Mi 5s Plus can theoretically capture brighter images with higher contrast and less noise. The P9 and Honor 8 also use the second camera for measuring depth, aiding camera focusing and allowing the user to adjust bokeh effects after the image is captured, but it’s not clear if the Mi 5s Plus also has these capabilities.
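As an illustration of the color-plus-mono idea (a generic sketch of luminance fusion, not Xiaomi's or Huawei's actual pipeline), one simple approach converts the color pixel to YCbCr, blends the mono sensor's cleaner, brighter luminance into the Y channel, and converts back:

```python
# Illustrative sketch of dual-sensor luminance fusion, not any vendor's
# actual pipeline. The mono sensor, lacking a Bayer filter, records a
# cleaner luminance signal; chroma still comes from the color sensor.

def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return r, g, b

def fuse_pixel(rgb, mono_luma, weight=0.7):
    """Blend the mono sensor's luminance into a color pixel's Y channel."""
    y, cb, cr = rgb_to_ycbcr(*rgb)
    fused_y = weight * mono_luma + (1.0 - weight) * y
    return ycbcr_to_rgb(fused_y, cb, cr)

# A dim color pixel paired with a brighter mono-sensor reading:
print(fuse_pixel((90, 60, 40), mono_luma=80.0))
```

A real pipeline must also register the two sensors' slightly different viewpoints before fusing, which is the same parallax that yields the depth information mentioned above.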

Xiaomi Mi 5s Plus

The other big change is a completely new chassis made entirely from brushed aluminum. The back edges are still curved, but there’s no longer any glass or ceramic on the back like the Mi 5 and Mi 5 Pro, respectively. The change to aluminum means the Mi 5s now includes plastic antenna lines on the top and bottom of the back panel. The Mi 5s Plus goes a different route by using plastic inserts at the top and bottom that try to blend in by mimicking the color and texture of the surrounding aluminum.

The Mi 5s Plus includes a circular, capacitive fingerprint sensor on the back that’s slightly recessed, making it easier to locate. The Mi 5s goes the less conventional route with an ultrasonic fingerprint sensor that sits below the edge-to-edge cover glass on the front. Both phones use capacitive buttons rather than onscreen navigation controls and 2.5D cover glass that blends into a chamfered edge on the aluminum frame.

Both phones come in four different colors—silver, gray, gold, and pink—and will be available for sale in China starting September 29.



Copyright 2016 AnandTech

Razer Updates The DeathAdder Elite Gaming Mouse

AnandTech — 9/27/2016 7:00:00 AM

Although Razer has become one of the best-known gaming computer companies, they got their start with gaming mice, and today Razer is launching the next iteration of the best-selling gaming mouse of all time, the Razer DeathAdder Elite. The DeathAdder series was first introduced in 2006.

As an iterative update, this could have amounted to little more than some new lights, but this release brings a new Razer 5G optical sensor, rated for up to 16,000 DPI, the highest yet. It can also track at 450 inches per second, another new standard, and supports up to 50 g of acceleration. Razer also claims the DeathAdder Elite has the highest measured resolution accuracy in a gaming mouse at 99.4 percent. If high speed and precision are required, this mouse appears to have that sewn up.

The more interesting bit though is that Razer has also upped their game on the switches. Razer has co-designed and produced new mechanical switches with Omron, which are “optimized for the fastest response times” and more importantly to me, an increased durability rating of 50 million clicks.

Razer has also included an improved tactile scroll wheel design. I’ve used the DeathAdder in the past, and one of the things that made me abandon it was the scroll wheel, which gave plenty of grip, but would actually wear through the skin on my finger due to the sharp nubs on the wheel. Hopefully the new version is improved in this regard. For fast gaming, the extra grip is likely a nice bonus, but for everyday use I found it uncomfortable.

The overall design hasn’t changed, which is a good thing, since it was a pretty comfortable and ergonomic gaming mouse. It also keeps the Razer Chroma RGB LED lighting system as well, so you can customize away. The mouse has seven programmable buttons, 1000 Hz polling, and a 2.1 m / 7 ft braided USB cable. It weighs in at 105 grams.
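Combining the quoted specs gives a sense of the numbers involved; this back-of-envelope arithmetic assumes the sensor can sustain its maximum DPI at its maximum tracking speed:

```python
# Illustrative arithmetic from the quoted specs (assumes max DPI at max speed).
dpi = 16000            # counts per inch
ips = 450              # maximum tracking speed, inches per second
polling_hz = 1000      # USB reports per second

counts_per_second = dpi * ips                    # worst-case motion data rate
counts_per_report = counts_per_second // polling_hz
print(counts_per_report)  # counts the mouse must convey in each 1 ms report
```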

The mouse is available for pre-order starting today for $69.99 USD, with worldwide shipments starting in October.

Source: Razer

Gallery: Razer Updates The DeathAdder Elite Gaming Mouse


  • Eden-K121D - Tuesday, September 27, 2016
    Cool. I'm a noob. Can anyone explain the fuss about Gaming Mice?

  • JoeyJoJo123 - Tuesday, September 27, 2016
    Sounds pretty cool; hopefully it supersedes Logitech's G502 Proteus Core's PixArt PMW3366 sensor in terms of accuracy.
    I was never a fan of the lighting or the logo, but the shape was always good and Razer mice are relatively light in comparison to some other brands. I like using basic-looking but well-performing mice, like the Zowie FK1. If this Razer lives up to the claims presented in the article, I'd really like to have that performance in a Zowie shell.
    It's nice to see that mouse manufacturers increasingly care about the baseline performance of their products, rather than issuing differently designed plastic shells with the same internals as they did for about two decades. I suppose we can also thank e-sports publicity for raising awareness that the market does want better-performing and more accurate PC peripherals.



New ARM IP Launched: CMN-600 Interconnect for 128 Cores and DMC-620, an 8Ch DDR4 IMC

AnandTech — 9/27/2016 7:00:00 AM

You need much more than a good CPU core to conquer the server world. As more cores are added, the way data moves from one part of the silicon to another becomes more important. ARM today announced a new and faster member of its SoC interconnect IP offerings in the form of the CMN-600 (CMN stands for 'coherent mesh network', as opposed to the 'cache coherent network' of the CCN series). This is a direct update to the CCN-500 series, which we've discussed at AnandTech before.

The idea behind a coherent mesh between cores, as it stands in the ARM server SoC space, is that you can put a number of CPU clusters (e.g. four lots of 4xA53) and accelerators (custom or other IP) into one piece of silicon. Each part of the SoC has to work with everything else, and for that ARM offers a variety of interconnect licences for users who want to choose from ARM's IP range. For ARM licensees who pick multiple ARM parts, this makes it easier to combine high core counts and accelerators in one large SoC.

The previous generation interconnect, the CCN-512, could support 12 clusters of 4 cores and maintain coherency, allowing for large 48-core chips. The new CMN-600 can support up to 128 cores (32 clusters of 4). Also part of the announcement is an 'agile system cache', a way for I/O devices to allocate memory and cache lines directly into the L3, reducing the latency of I/O without having to touch the cores.

Also in the announcement is a new memory controller. The old DMC-520, which was limited to four channels of DDR3, is being superseded by the DMC-620 controller, which supports eight channels of DDR4. Each DMC-620 channel can support up to 1 TB of DDR4, giving a potential SoC capacity of 8 TB.

According to ARM's simulations, the improved memory controller offers 50% lower latency and up to 5 times more bandwidth. The new DMC is also advertised as supporting DDR4-3200: 3200 MT/s offers twice as much bandwidth as 1600 MT/s, and doubling the channels doubles bandwidth again, which accounts for a 4x increase. It is interesting, then, that ARM claims 5x, which suggests efficiency improvements as well.
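The raw arithmetic behind that bandwidth claim can be checked directly; peak bandwidth is simply transfer rate times bus width times channel count (64-bit channels assumed, as is standard for DDR3/DDR4):

```python
# Peak-bandwidth arithmetic behind ARM's DMC-620 claims (illustrative; real
# sustained bandwidth depends on controller efficiency, which the 5x figure implies improved).

def peak_bw_gbps(transfer_rate_mts, channels, bus_width_bytes=8):
    """Peak bandwidth in GB/s for a DDR interface (64-bit channel = 8 bytes)."""
    return transfer_rate_mts * bus_width_bytes * channels / 1000

old = peak_bw_gbps(1600, 4)   # DMC-520: four channels of DDR3-1600
new = peak_bw_gbps(3200, 8)   # DMC-620: eight channels of DDR4-3200
print(old, new, new / old)    # 51.2 GB/s vs 204.8 GB/s, a 4x increase
```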

If you double the number of cores and memory controllers, you expect twice as much performance in the almost perfectly scaling SPECint_rate2006. ARM claims that its simulations show 64 A72 cores running 2.5 times faster than 32 A72 cores, courtesy of the improved memory controller. If true, that is quite impressive; by comparison, we did not see such a jump in performance in the Xeon world when DDR3 was replaced by DDR4. Even more impressive is the claim that the maximum compute performance of a 64x A72 SoC can go up by a factor of six compared to a 16x A57 variant. But we must note that the A57 was not exactly a success in the server world: so far only AMD has cooked up a server SoC with it, and it was slower and more power hungry than the much older Atom C2000.

We have little doubt we will find the new CMN-600 and/or DMC-620 in many server solutions. The big question will be one of application: who will use this interconnect technology in their server SoCs? As most licensees do not disclose this information, it is hard to find out. As far as we know, Cavium uses its own interconnect technology, which would suggest Qualcomm or Avago/Broadcom are the most likely candidates.

Source: ARM


  • shelbystripes - Tuesday, September 27, 2016
    So this enables putting 128 ARM cores on a single piece of silicon? Even with little cores and a 14nm process, that's going to be a pretty large die.
    This would be pretty cool for building a big.LITTLE supercomputer though. A small block of 4 big cores to manage the OS and task scheduling, and then 124 little cores in parallel... Add a high-speed interconnect to talk to other nodes and external storage servers, and you've got an entire HPC node as an SoC. Want a half million ARM cores in a single 19" rack?



CEVA Launches Fifth-Generation Machine Learning Image and Vision DSP Solution: CEVA-XM6

AnandTech — 9/27/2016 5:30:00 AM

Deep learning, neural networks and image/vision processing already form a large field, however many of the applications that rely on them are still in their infancy. Automotive is the prime example that uses all of these areas, and solutions to the automotive 'problem' require significant understanding and development in both hardware and software - the ability to process data with high accuracy in real-time opens up a number of doors for other machine learning codes, and all that comes afterwards is cost and power. The CEVA-XM4 DSP was aimed at being the first programmable DSP to support deep learning, and the new XM6 IP (along with its software ecosystem) is being launched today under the heading of stronger efficiency, more compute, and new patents regarding power saving features.

Playing the IP Game

When CEVA launched the XM4 DSP, with the ability to infer pre-trained algorithms in fixed-point math to a similar (~1%) accuracy as the full algorithms, it won a number of awards from analysts in the field, claiming high performance and power efficiency over competing solutions and the initial progression for a software framework. The IP announcement was back in Q1 2015, with licensees coming on board over the next year and the first production silicon using the IP rolling off the line this year. Since then, CEVA has announced its CDNN2 platform, a one-button compilation tool for trained networks to be converted into suitable code for CEVA's XM IPs. The new XM6 integrates the previous XM4 features, with improved configurations, access to hardware accelerators, new hardware accelerators, and still retains compatibility with the CDNN2 platform such that code suitable for XM4 can be run on XM6 with improved performance.

CEVA is in the IP business, like ARM, and works with semiconductor licensees that then sell to OEMs. This typically results in a long time-to-market, especially when industries such as security and automotive are moving at a rapid pace. CEVA is promoting the XM6 as a scalable, programmable DSP that can scale across markets with a single code base, while also using additional features to improve power, cost and performance.

The announcement today covers the new XM6 DSP, CEVA's new set of imaging and vision software libraries, a set of new hardware accelerators and integration into the CDNN2 ecosystem. CDNN2 is a one-button compilation tool, detecting convolution and applying the best methodology for data transfer over the logic blocks and accelerators.

XM6 will support OpenCL and C++ development tools, and the software elements include CEVA's computer vision, neural network and vision processing libraries with third-party tools as well. The hardware implements an AXI interconnect for the processing parts of the standard XM6 core to interact with the accelerators and memory. Along with the XM6 IP, there are hardware accelerators for convolution (CDNN assistance) allowing lower power fixed function hardware to cope with difficult parts of neural network systems such as GoogleNet, De-Warp for adjusting images taken on fish-eye or distorted lenses (once the distortion of an image is known, the math for the transform is fixed-function friendly), as well as other third party hardware accelerators.

The XM6 promotes two new specific hardware features that will aid the majority of image processing and machine learning algorithms. The first is scatter-gather, or the ability to read values from 32 addresses in L1 cache into vector registers in a single cycle. The CDNN2 compilation tool identifies serial code loading and implements vectorization to allow this feature, and scatter-gather improves data loading time when the data required is distributed through the memory structure. As the XM6 is configurable IP, the size/associativity of the L1 data store is adjustable at the silicon design level, and CEVA has stated that this feature will work with any size L1. The vector registers for processing at this level are 8-wide VLIW implementations, meaning 'feed the beast' is even more important than usual.
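Conceptually, a gather behaves like the following sketch (illustrative Python, not CEVA's actual ISA): one 'instruction' returns a full vector of values from scattered, non-contiguous addresses, where scalar code would need one load per element.

```python
def gather(memory, addresses):
    """Simulate a vector gather: one conceptual step returns all requested words."""
    return [memory[a] for a in addresses]

l1_cache = list(range(100, 200))           # stand-in for L1 data memory
addresses = [3, 17, 42, 5, 99, 0, 64, 31]  # scattered, non-contiguous indices
vector_reg = gather(l1_cache, addresses)
print(vector_reg)
```

In hardware the win is that all eight (or, per CEVA, up to 32) loads complete in a single cycle rather than serially, which is exactly the vectorization CDNN2 tries to uncover in serial loading code.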

The second feature is called 'sliding-window' data processing, and this specific technique for vision processing has been patented by CEVA. There are many ways to process an image, and typically an algorithm will use a block or tile of pixels at once to perform what it needs to. For the intelligence part, a number of these blocks will overlap, resulting in areas of the image being reused at different parts of the computation. CEVA's method is to retain that data, resulting in fewer bits being needed in the next step of analysis. If this sounds straightforward (I was doing something similar with 3D differential equation analysis back in 2009), it is, and I was surprised that it had not been implemented in vision/image processing before. Reusing old data (assuming you have somewhere to store it) saves time and saves energy.
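The saving is easy to quantify with a toy example; for a 3-wide window sliding over a 16-element signal (illustrative numbers, not CEVA's implementation), retaining the overlap means each element is loaded only once:

```python
# Toy model of sliding-window data reuse: adjacent 3-wide windows share two of
# three elements, so keeping the overlap in local storage cuts memory loads.
signal = list(range(16))
WIN = 3
num_windows = len(signal) - WIN + 1

# Naive: every window re-loads all of its elements from memory.
naive_loads = num_windows * WIN

# Sliding-window: load the first window, then just one new element per step.
reuse_loads = WIN + (num_windows - 1)

print(naive_loads, reuse_loads)  # 42 loads vs 16 loads
```

The ratio approaches the window width as the image grows, and the saving compounds in 2D, where tiles overlap along both axes.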

CEVA is claiming up to a 3x performance gain in heavy vector workloads for XM6 over XM4, with an average of 2x improvement for like-for-like ported kernels. The XM6 is also more configurable than the XM4 from a code perspective, offering '50% more control'.

With the specific CDNN hardware accelerator (HWA), CEVA cites that convolution layers in ecosystems such as GoogleNet consume the majority of cycles. The CDNN HWA takes this code and implements fixed hardware for it with 512 MACs using 16-bit support for up to an 8x performance gain (and 95% utilization). CEVA mentioned that a 12-bit implementation would save die area and cost for a minimal reduction in accuracy, however there are a number of developers requesting full 16-bit support for future projects, hence the choice.

Two of the big competitors for CEVA in this space, for automotive image/visual processing, are Mobileye and NVIDIA, with the latter promoting the TX1 for both training and inference for neural networks. Based on a TX1 on TSMC's 20nm planar process at 690 MHz, CEVA states that its internal simulations give a single XM6-based platform 25x the efficiency and 4x the speed on AlexNet and GoogleNet (with the XM6 also at 20nm, even though it will most likely be implemented at 16nm FinFET or 28nm). This would mean, extrapolating the single-batch TX1 data published, that the XM6 running AlexNet at FP16 can process 268 images per second compared to 67, at around 800 mW compared to 5.1 W. At 16FF, this power number is likely to be significantly lower (CEVA told us that their internal metrics were initially done at 28nm/16FF, but were redone on 20nm for an apples-to-apples comparison with the TX1). It should be noted that the TX1 numbers were provided for multi-batch, which offered better efficiency than single batch; other comparison numbers were not provided. CEVA also implements power gating with a DVFS scheme that allows low power modes when various parts of the DSP or accelerators are idle.
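The quoted figures are internally consistent, as a quick check of the arithmetic shows (using the numbers as given in the text):

```python
# Reproducing the arithmetic behind CEVA's TX1 comparison, with the figures as
# quoted: both chips at 20nm, AlexNet at FP16, single batch for the XM6 numbers.
xm6_ips, xm6_w = 268, 0.8   # images per second, watts
tx1_ips, tx1_w = 67, 5.1

speedup = xm6_ips / tx1_ips                          # raw throughput ratio
efficiency = (xm6_ips / xm6_w) / (tx1_ips / tx1_w)   # images/s per watt ratio
print(round(speedup, 1), round(efficiency, 1))       # ~4x speed, ~25x efficiency
```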

Obviously the advantage that NVIDIA has with their solution is availability and CUDA/OpenCL software development, both of which CEVA is attempting to address with one-button software platforms like CDNN2 and improved hardware such as XM6. It will be interesting to see which semiconductor partners and future implementations will combine this image processing with machine learning in the future. CEVA states that smartphones, automotive, security and commercial (drones, automation) applications are prime targets.

Source: CEVA


AMD Announces Embedded Radeon E9260 & E9550 - Polaris for Embedded Markets

AnandTech — 9/27/2016 5:00:00 AM

While it’s AMD’s consumer products that get the most fanfare with new GPU launches – and rightfully so – AMD and their Radeon brand also have a solid (if quiet) business in the discrete embedded market. Here, system designers utilize discrete video cards for commercial, all-in one products. And while the technology is much the same as on the consumer side, the use cases differ, as do the support requirements. For that reason, AMD offers a separate lineup of products just for this market under the Radeon Embedded moniker.

Now that we’ve seen AMD’s new Polaris architecture launch in the consumer world, AMD is taking the next step by refreshing the Radeon Embedded product lineup to use these new parts. To that end, this morning AMD is announcing two new Radeon Embedded video cards: the E9260 and the E9550. Based on the Polaris 11 and Polaris 10 GPUs respectively, these parts are updating the “high performance” and “ultra-high performance” segments of AMD’s embedded offerings.

We’ll start things off with the Embedded Radeon E9550, which is the new top-performance card in AMD’s embedded lineup. Based on AMD’s Polaris 10 GPU, this is essentially an embedded version of the consumer Radeon RX 480, offering the same number of SPs at roughly the same clockspeed. This part supersedes the last-generation E8950, which is based on AMD’s Tonga GPU, and is rated to offer around 93% better performance, thanks to the slightly wider GPU and generous clockspeed bump.

The E9550 is offered in a single design, an MXM Type-B card that’s rated for 95W. These embedded-class MXM cards are typically based on AMD’s mobile consumer designs, and while I don’t have proper photos for comparison – AMD’s supplied photos are stock photos of older products – I’m sure it’s the same story here. Otherwise, the card is outfitted with 8GB of GDDR5, like the E8950 before it, and is capable of driving up to 6 displays. Finally, AMD will be offering the card for sale for 3 years, which again is par for the course here for AMD.

Following up behind the E9550 is the E9260, the next step down in the refreshed Embedded Radeon lineup. This card is based on AMD’s Polaris 11 GPU, and is similar to the consumer Radeon RX 460, meaning it’s not quite a fully enabled GPU. Within AMD’s lineup it replaces the E8870, offering 2.5 TFLOPS of single precision floating point performance to the former’s 1.5 TFLOPS. AMD doesn’t list official clockspeeds for this card, but based on the throughput rating this puts its boost clock at around 1.4GHz. The card is paired with 4GB of GDDR5 on a 128-bit bus.
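The ~1.4GHz estimate can be reproduced with a back-of-envelope calculation, assuming the E9260 carries the same 896 stream processors as the consumer RX 460 (an assumption, since AMD does not list the SP count here) and counting each FMA as two FLOPs:

```python
# Back-of-envelope boost clock estimate for the E9260.
sps = 896                   # assumed: same SP count as the consumer RX 460
flops = 2.5e12              # 2.5 TFLOPS single precision, per AMD
ops_per_clock = sps * 2     # one fused multiply-add = 2 FLOPs per SP per clock

boost_clock_ghz = flops / ops_per_clock / 1e9
print(round(boost_clock_ghz, 2))  # ~1.4 GHz
```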

Meanwhile on the power front, the E9260 is being rated for up to 50W. Notably, this is down from the 75W designation of its predecessor, as the underlying Polaris 11 GPU aims for lower power consumption. And unlike its more powerful sibling, the E9260 is being offered in two form factors: an MXM Type-A card, and a half height half length (HHHL) PCIe card. Both cards have identical performance specifications, differing only in their form factor and display options. Both cards can support up to 5 displays, though the PCIe card only has 4 physical outputs (so you’d technically need an MST hub for the 5th). Finally, both versions of the card will be offered by AMD for 5 years, which at this point would mean through 2021.

Moving on, besides the immediate performance benefits of Polaris, AMD is also looking to leverage Polaris’s updated display controller and multimedia capabilities for the embedded market. Of particular note here is support for full H.265 video encoding and decoding, something the previous generation products lacked. And display connectivity is greatly improved too, with both HDMI 2.0 support and DisplayPort 1.3/1.4 support.

The immediate market for these cards will be the same general markets that previous generation products have been pitched at, including digital signage, casino gaming, and medical, all of whom make use of GPUs in various degrees and need parts to be available for a defined period of time. Across all of these markets AMD is especially playing up the 4K and HDR capabilities of the new cards, along of course with overall improved performance.

At the same time however, AMD’s embedded group is also looking towards the future, trying to encourage customers to make better use of their GPUs for compute tasks, a market AMD considers to be in its infancy. This includes automated image analysis/diagnosis, machine learning inferencing to allow a casino machine or digital sign to react to a customer, and GPU beamforming for medical. And of course, AMD always has an eye on VR and AR, though for the embedded market in particular that’s going to be more off the beaten path.

Wrapping things up, AMD tells us that the new Embedded Radeon cards will be shipping in the next quarter. The E9260 will be shipping in production in the next couple of weeks, while the E9550 will be coming towards the end of Q4.

Gallery: AMD Embedded Radeon E9260 & E9550 Launch Deck


Xilinx Launches Cost-Optimized Portfolio: New Spartan, Artix and Zynq Solutions

AnandTech — 9/27/2016 3:00:00 AM

Some of the key elements of the embedded market are cost, power and efficiency. A number of applications for embedded vision and IoT, as complexity increases, rely on additional fixed-function or variable-function hardware to accelerate throughput, and a design might use multiple FPGA/SoC devices to achieve this goal. The FPGA market is large, however Xilinx is in the process of redefining its product ecosystem to include SoCs with FPGAs built in: silicon with both general purpose ARM processors (Cortex-A series) and programmable logic gates to deal with algorithm acceleration, especially when it comes to sensor fusion/programmable IO and low-cost devices. As a result, Xilinx is releasing a new single-core Zynq 7000 series SoC with an embedded FPGA, as well as new Spartan-7 and Artix-7 FPGAs focused on low cost.

The new Spartan-7, built on 28nm TSMC and measuring 8x8mm, is aimed at sensor fusion and connectivity, while the Artix-7 is for the signal processing market. The Zynq-7000 is for the programmable SoC space, featuring a single ARM Core (various models starting with Cortex-A9, moving up to dual/quad Cortex-A53) allowing for on-chip analytical functions as well as host-less driven implementations and bitstream encryption. The all-in-one SoC with onboard FPGA adds a benefit in bringing a floorplan design of multiple chips down from three to one, with the potential to reduce power and offer improved security by keeping the interchip connections on silicon. While the Zynq family isn’t new, the 7000 series for this announcement is aimed squarely at embedded, integrated and industrial IoT platforms by providing a cost-optimized solution.

Gallery: Xilinx Product Specifications

We spoke with Xilinx’s Steve Glaser, SVP of Corporate Strategy, who explained that Xilinx wants to be in the prime position to tackle four key areas: Cloud, Embedded Vision, Industrial IoT and 5G. The use in the cloud is indicative of high focused workloads that land between ASICs and general purpose compute, but also for networking (Infiniband) and storage, with Xilinx IP in interfaces, SSD controllers, smart networking, video conversion and big data. This coincides with the announcement of the CCIX Consortium for coherent interconnects in accelerators.

Embedded Vision is a big part of the future vision of Xilinx, particularly in automotive and ADAS systems. Part of this involves machine learning, and the ability to apply different implementations on programmable silicon as the algorithms adapt and change over time. Xilinx cites a considerable performance and efficiency benefit over GPU solutions, and a wider range of applicability over fixed function hardware.

Industrial IoT (I-IoT) spans medical, factory, surveillance, robotics, transportation, and other typical industry verticals where monitoring and programmability go hand-in-hand. Steve Glaser cited that Xilinx has an 80% market share in I-IoT penetration, quoting billions of dollars in savings industry wide for very small efficiency gains on the production line.

One thing worth noting is that FPGA and mixed SoC/FPGA implementations require substantial software on top to operate effectively. Xilinx plays in all the major computer vision and neural network implementations, and we were told it aims to streamline compilation with simple pragmas that identify code structures for FPGA implementation. This is where the mixed SoC/FPGA implementations, we are told, work best, allowing the analytics on the ARM cores to adjust the FPGA on the fly as required, depending on sensor input or algorithm adjustment.

Xilinx sits in the position of being a hardware provider for a solution, but not the end-goal solution provider, if that makes sense. Its customers are the ones that implement what we see in automotive or industrial, so Xilinx typically discusses its hardware at a very general level, but it still requires an understanding of the markets they focus on to discuss which applications may benefit from FPGA or mixed SoC/FPGA implementations. When visiting any event about IoT or compute as a journalist, there is always some discussion around FPGA implementation and that transition from hardware to software to product. Xilinx is confident about its position in the FPGA market, and Intel's acquisition of Altera has raised a lot of questions about FPGA roadmaps in the markets where both companies used to compete, with a number of big customers now willing to work on both sides of the fence to keep their options open.

On the new cost-optimized chip announcement, the Spartan-7, Artix-7 and Zynq-7000 will be enabled in the 2016.3 release of the Vivado Design Suite and Xilinx SDx environments later this year, with production devices shipping in Q1 2017.

Gallery: Xilinx Slide Deck


The Phononic HEX 2.0 TEC CPU Cooler Review

AnandTech — 9/26/2016 5:30:00 AM

Ever since the birth of the first commercial computers, cooling has always been an issue. While the first chips hardly required significant cooling, the rapid advancements of the past few decades and the high commercial demand led to significant research and development efforts placed towards the improvement of cooling solutions and methods.


Semiconductor cooling, particularly cooling for enthusiast PCs, has come a long way, with hundreds of advanced coolers available and liquid cooling no longer reserved only for hardcore enthusiasts. With the mass production and competitive pricing of all-in-one (AIO) liquid coolers, basic liquid cooling systems can be easily found inside typical living room PCs. Competitive overclockers still experiment and use some extreme cooling methods (e.g. liquid nitrogen), but such sub-zero methods usually can only be used (very) temporarily.

One of the PC CPU cooling methods originally explored by overclockers in the 90's is the use of a thermoelectric cooler (TEC). These devices had a few advantages but also crippling disadvantages that prevented the technology from finding wide commercial use in consumer PCs. A handful of commercial CPU coolers with a pre-installed TEC appeared many years ago, but not a single one of them found commercial success.

Today we are having our first contact with Phononic, a newcomer in the PC cooling market. The company was founded back in 2009, is based in North Carolina and is focused on the research and development of advanced cooling and refrigeration solutions. Their first and currently only CPU cooler, the HEX 2.0, is a very surprising and unique product. It looks like a relatively small tower cooler, yet it has an integrated electronically controlled TEC heat pump that is even partially controllable via software.

A few Words About Thermoelectric Coolers (TECs)

Simply put, a TEC consists of two metallic plates; when current is applied, one side heats up and the other side cools down. The cold side is typically the one facing the CPU, with a conventional cooling system removing the heat from the hot side. Because a TEC is inefficient, its hot side produces more heat than the CPU alone, so strong air or water cooling is required alongside the TEC to realize its advantages.

The technical description is that a TEC is two metallic plates with semiconductor junctions sandwiched between them. When electrical energy in the form of DC current is introduced, the device pumps thermal energy from one side to the other (the Peltier effect), creating a temperature difference between the two sides. There are however a few problems when working with TECs:

1. Condensation. A typical TEC can produce a temperature difference of up to 70 °C between its cold and hot sides. Assuming that a heatsink is mounted to the hot side and is capable of maintaining a near-room temperature, the cold side of an uncontrolled TEC can be significantly colder than its ambient surroundings. That will cause condensation, which is disastrous inside a PC.

2. Efficiency. TECs are generally inefficient, with an efficiency usually lower than 15%, which means that they consume disproportionally high amounts of electrical energy for the work they actually offer.

3. Added heat. The electrical energy losses of the TEC are converted directly to thermal energy and transferred to its hot side. Therefore, the heatsink has to deal with the thermal load of the system plus the energy losses of the TEC, increasing the size and performance requirements.
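To make problems 1 and 3 concrete, here is a small sketch with hypothetical numbers: a Magnus-formula dew point check for the condensation risk, and the simple energy balance that sets the heatsink's load.

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Magnus approximation for the dew point (valid roughly 0-60 degC)."""
    a, b = 17.27, 237.7
    gamma = a * temp_c / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return b * gamma / (a - gamma)

# Problem 1: in a 25 degC room at 50% humidity, any cold-side surface below
# roughly 14 degC will collect condensation.
print(round(dew_point_c(25.0, 50.0), 1))

# Problem 3: everything the TEC consumes ends up as extra heat on the hot side.
cpu_load_w = 100.0   # heat pumped from the cold side (hypothetical CPU)
tec_input_w = 60.0   # electrical power drawn by the TEC (hypothetical)
heatsink_load_w = cpu_load_w + tec_input_w
print(heatsink_load_w)  # the heatsink must dissipate 160 W, not 100 W
```

These numbers are illustrative only; Phononic's controller actively limits the cold-side temperature, which is precisely how the HEX 2.0 sidesteps the condensation problem.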

All that being said, any company willing to tackle the challenges of the physics behind TECs is welcome to try, especially if it ends up as a commercial product for home PCs. That is why we got the HEX 2.0 in for review.

Packaging & Bundle

We received the Hex 2.0 in a well-designed and very sturdy cardboard box. The walls of the box are very thick and the cooler itself is protected by several layers of cardboard, providing excellent shipping protection.

Alongside the cooler, Phononic supplies the necessary mounting hardware, the required cables, a simple but useful guide, a simple screwdriver tool and a generous amount of thermal grease. The thermal grease that the company supplies should be enough for perhaps a dozen applications.


  • saratoga4 - Monday, September 26, 2016
    >Note that a large percentage of this energy consumption is inserted as additional thermal load for the cooler to dissipate.
    All of it should add to the air cooler's load. Energy is conserved.

  • ImSpartacus - Monday, September 26, 2016
    This is one of those times where you're reminded why coolers use the tech that they use. It seems to be tough to beat.
    Good review though. Always interesting to see new things.



Satechi and StarTech USB 3.1 Gen 2 Type-C HDD/SSD Enclosures Review

AnandTech — 9/26/2016 3:00:00 AM

Storage bridges come in many varieties within the internal and external market segments. On the external side, they usually have one or more downstream SATA ports. The most popular uplink port is some sort of USB connection. eSATA as an uplink interface is on the way out. High-end products have Thunderbolt support. Within the USB storage bridge market, device vendors have multiple opportunities to tune their product design for specific use-cases.

Today's review will take a look at the StarTech.com S251BU31C3CB and the Satechi B01FWT2N3K. Both of them are USB 3.1 Gen 2 Type-C enclosures for 2.5" SATA drives with a metallic exterior. There are some subtle differences between the two - the StarTech.com unit has the Type-C cable integrated, and the chassis is designed to be able to stow away that cable for easy portability. The Satechi unit has a Type-C cable included in the package. Consumers interested in the aesthetics aspects might also find the Satechi unit attractive, as it comes in three different colors - Space Gray, Gold and Silver.

Gallery: StarTech.com S251BU31C3CB and Satechi B01FWT2N3K

Both the units come with a screwdriver and appropriate screws. The StarTech.com unit has screw mounts for the 2.5" drive to the internal plastic bay, and the bay to the metal chassis. The Satechi unit has screw mounts only for the bay to the metal chassis (and it comes pre-installed, as can be seen in the above gallery). The pictures in the gallery also show that the StarTech.com unit uses the ASMedia ASM1351 bridge chip, while the Satechi unit uses the VIA Labs VL716 bridge chip.

Consumers need to keep the following aspects in mind for external storage devices / enclosures with a USB interface:

  • Support for UASP (USB-attached SCSI protocol) for better performance (reduced protocol overhead and support for SATA Native Command Queueing (NCQ))
  • Support for TRIM to ensure SSDs in the external enclosure can operate optimally in the long run
  • Support for S.M.A.R.T passthrough to enable monitoring of the internal SATA device by the host OS

Our evaluation routine for storage bridges borrows heavily from the testing methodology for direct-attached storage devices. The testbed hardware is reused. CrystalDiskMark is used for a quick overview, as it helps determine availability of UASP support and provides some performance numbers under ideal scenarios. Real-world performance testing is done with our custom test suite involving robocopy benchmarks and PCMark 8's storage bench. We use the Crucial MX200 500GB SSD as the storage drive for all the bridges / enclosures that we are evaluating as part of this review series.

The table below presents the detailed specifications and miscellaneous aspects of the units and how they compare.

Performance Benchmarks

CrystalDiskMark uses four different access traces for reads and writes over a configurable region size. Two of the traces are sequential accesses, while two are 4K random accesses. Internally, CrystalDiskMark uses the Microsoft DiskSpd storage testing tool. The 'Seq Q32T1' sequential traces use 128K block size with a queue depth of 32 from a single thread, while the '4K Q32T1' ones do random 4K accesses with the same queue and thread configurations. The plain 'Seq' traces use a 1MiB block size. The plain '4K' ones are similar to the '4K Q32T1' except that only a single queue and single thread are used.

Comparing the '4K Q32T1' and '4K' numbers can quickly tell us whether the storage device supports NCQ (native command queuing) / UASP (USB-attached SCSI protocol). If the numbers for the two access traces are in the same ballpark, NCQ / UASP is not supported. This assumes that the host port / drivers on the PC support UASP. Here, NCQ / UASP is clearly supported by both enclosures: the '4K Q32T1' read and write numbers are more than 8x and 3x the '4K' numbers, respectively.
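The comparison described above can be sketched as a tiny helper. This is only an illustration of the heuristic; the numbers below are made up to be in the same ballpark as the review's observations, not the actual measurements.

```python
def supports_ncq_uasp(q32_mbps: float, q1_mbps: float, threshold: float = 2.0) -> bool:
    """Heuristic: queued 4K results far ahead of single-queue 4K results
    imply the bridge is servicing commands via NCQ / UASP."""
    return q32_mbps >= threshold * q1_mbps

# Illustrative numbers (reads scaled ~8x, writes ~3x with queue depth):
print(supports_ncq_uasp(280.0, 34.0))  # True - reads
print(supports_ncq_uasp(250.0, 80.0))  # True - writes
```

If the two traces land within the threshold of each other, the device is almost certainly falling back to one-command-at-a-time BOT-style transfers.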

Moving on to the real-world benchmarks, we first look at the results from our custom robocopy test. In this test, we transfer three folders with the following characteristics.

  • Photos: 15.6 GB collection of 4320 photos (RAW as well as JPEGs) in 61 sub-folders
  • Videos: 16.1 GB collection of 244 videos (MP4 as well as MOVs) in 6 sub-folders
  • BR: 10.7 GB Blu-ray folder structure of the IDT Benchmark Blu-ray (the same that we use in our robocopy tests for NAS systems)

The test starts off with the Photos folder in a RAM drive in the testbed. robocopy is used with default arguments to mirror it onto the storage drive under test. The content on the RAM drive is then deleted, and robocopy is used again to transfer the content back from the storage drive under test to the RAM drive. The first segment gives the write speed, while the second one gives the read speed for the storage device. The segments end with a purge of the contents from the storage device. This process is repeated thrice and the average of all the runs is recorded as the performance number. The same procedure is adopted for the Videos and the BR folders.
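The write/read timing loop above can be approximated cross-platform with Python's standard library in place of robocopy. The folder names and file sizes here are stand-ins for the actual test corpus, and a temporary directory stands in for the RAM drive and the device under test.

```python
import shutil
import tempfile
import time
from pathlib import Path

def timed_copy(src: Path, dst: Path) -> float:
    """Copy the `src` tree to `dst` and return throughput in MB/s."""
    start = time.perf_counter()
    shutil.copytree(src, dst)
    elapsed = time.perf_counter() - start
    total = sum(f.stat().st_size for f in dst.rglob("*") if f.is_file())
    return total / elapsed / 1e6

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    src = root / "photos"  # stand-in for the Photos folder on the RAM drive
    src.mkdir()
    for i in range(4):
        (src / f"img_{i}.raw").write_bytes(b"\0" * 1_000_000)

    write_mbps = timed_copy(src, root / "device")               # RAM drive -> device under test
    read_mbps = timed_copy(root / "device", root / "readback")  # device under test -> RAM drive
    print(write_mbps > 0 and read_mbps > 0)  # True
```

A real run would repeat each segment three times, average the results, and purge the destination between runs, as the methodology describes.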

[Charts: Photos, Videos, and Blu-ray folder read and write transfer rates]

High-performance external storage devices can also be used for editing multimedia files directly off the unit. They can also be used as OS-to-go boot drives. Evaluation of this aspect is done using PCMark 8's storage bench. The storage workload involves games as well as multimedia editing applications. The command line version allows us to cherry-pick storage traces to run on a target drive. We chose the following traces.

  • Adobe Photoshop (Light)
  • Adobe Photoshop (Heavy)
  • Adobe After Effects
  • Adobe Illustrator

Usually, PCMark 8 reports the time to complete the trace, but the detailed log report has the read and write bandwidth figures, which we present in our performance tables. Note that the bandwidth numbers reported in the results don't involve idle time compression. Results might appear low, but that is part of the workload characteristic. Since the same CPU is used for all configurations, comparing the numbers for each trace across different DAS units is possible.

[Charts: PCMark 8 read and write bandwidths for the Adobe Photoshop (Light), Photoshop (Heavy), After Effects, and Illustrator traces]

The StarTech.com unit and the Satechi unit perform very similarly in almost all the benchmarks. There are a few traces for which the Satechi performs better, but the StarTech.com unit also has an equal number of access traces for which its performance comes out on top.

Thermal Aspects and Power Consumption

The thermal design of external storage enclosures has now come into focus, as high-speed SSDs and interfaces such as USB 3.1 Gen 2 can easily drive up temperatures. This aspect is an important one, as the last thing that users want to see when copying over, say, 100 GB of data to the drive inside the enclosure, is the transfer rate dropping to USB 2.0 speeds. In order to identify the effectiveness with which the enclosure can take away heat from the internal drive, we instrumented our robocopy DAS benchmark suite to record various parameters while the robocopy process took place in the background. Internal temperatures can only be gathered for enclosures that support S.M.A.R.T passthrough. Readers can click on the graphs below to view the full-sized versions. Between the two enclosures, the Satechi one ended up with the SSD at 51C, while the StarTech.com one ended up at 55C. That said, there is no issue with overheating or performance consistency in either enclosure.

It is challenging to isolate the power consumption of the storage bridge alone while treating the unit as a black box. In order to study this aspect in a comparative manner, we use the same SSDs (Crucial MX200 500GB) in the units and process the same workloads on them (CrystalDiskMark 5.1.2's benchmark traces with a region size of 8GB and the number of repetitions set to 5). Plugable's USBC-TKEY power delivery sniffer was placed between the host PC and the storage bridge to record the power consumption. The average power consumption for each access trace was recorded. The pictures below present the numbers in a compact and easy to compare manner.
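The per-trace averaging described above reduces to a simple dictionary comprehension. The sample wattages below are made up for illustration; the real numbers come from the USBC-TKEY log.

```python
from statistics import mean

# trace name -> watts sampled by the power sniffer during that trace
samples = {
    "Seq Q32T1 Read": [4.1, 4.3, 4.2],
    "4K Q32T1 Write": [3.6, 3.8, 3.7],
}
averages = {trace: round(mean(watts), 2) for trace, watts in samples.items()}
print(averages)  # {'Seq Q32T1 Read': 4.2, '4K Q32T1 Write': 3.7}
```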

Any difference in power consumption for the same access trace between two different units is down to the storage bridge itself (since the drive used is the same in all cases). As we could guess from the temperature graphs, the StarTech.com unit / ASMedia ASM1351 bridge consumes slightly more power compared to the Satechi unit / VIA Labs VL716 bridge.

Miscellaneous Aspects and Concluding Remarks

Storage bridges that support UASP fully can translate the SCSI UNMAP command to TRIM commands for SSDs connected to the downstream port. Checking for TRIM support has been a bit tricky so far. CyberShadow's trimcheck is a quick tool to get the status of TRIM support. However, it presents a couple of challenges: it sometimes returns INDETERMINATE after processing, and, in case TRIM comes back as NOT WORKING or not kicked in yet, it is not clear whether the blame lies with the OS / file system or the storage controller / bridge chip or the SSD itself. In order to get a clear idea, our TRIM check routine adopts the following strategy:

  • Format the SSD in NTFS
  • Load the trimcheck program onto it and execute it
  • Run the PowerShell command Optimize-Volume -DriveLetter Z -ReTrim -Verbose (assuming that the drive connected to the storage bridge is mounted with the drive letter Z)
  • Re-execute trimcheck to determine the status
Conclusions can be made based on the results from the last two steps.
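How the last two steps map to a verdict can be sketched as follows. The status strings are illustrative, not trimcheck's literal output, and the decision logic is our reading of the routine above.

```python
def trim_verdict(before: str, after: str) -> str:
    """Interpret trimcheck-style statuses observed before and after the
    Optimize-Volume -ReTrim pass."""
    if after == "WORKING":
        return "TRIM passes through the bridge"
    if before == after == "NOT WORKING":
        return "TRIM blocked by the OS / file system, the bridge chip, or the SSD"
    return "inconclusive - re-run the routine"

print(trim_verdict("NOT WORKING", "WORKING"))  # TRIM passes through the bridge
```

The explicit -ReTrim pass is what removes the ambiguity of trimcheck's INDETERMINATE result: if TRIM still does not register afterwards, the OS side has been ruled out.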

The Satechi unit / VIA Labs VL716 bridge chip has no trouble supporting TRIM for SSDs connected to it. We repeated the same test with the StarTech.com enclosures.

The StarTech.com unit / ASMedia ASM1351 bridge chip has no trouble supporting TRIM. However, we must mention here that we are using a review unit from the latest production batch that has a newer firmware. StarTech.com will be hosting the firmware update for older units on their product support page soon.

Coming to the business end of the review, we find that the StarTech.com unit as well as the Satechi unit target slightly different customers. With the choice of colors, the Satechi unit might find wider acceptance with the average consumer. On the other hand, the StarTech.com unit with an integrated cable and a stowing mechanism allows for a more compact and portable unit. The possibility of a missing cable to connect to the PC doesn't arise. However, it also means that users will need an adapter to use with older systems that have only Type-A ports.

For use-cases involving 2.5" hard drives, I would lean towards the StarTech.com unit because of the extra screws that keep the hard drive mounted to the plastic bay that slides out. The Satechi unit relies only on an elastic pad at the end opposite the SATA port, so vibration handling could be an issue, since the drive might not be mounted snugly. This is not an issue for SSDs at all.

Based on our measurements of power consumption for various access traces, the VIA Labs bridge chip appears to be more power efficient than the ASMedia one. For use-cases involving notebooks and other battery-powered systems, the Satechi unit with the VIA Labs bridge chip might be preferable.

Both the Satechi and StarTech.com units perform in a similar manner and there is not much difference in the price. We have no hesitation in recommending either unit for purchase, and the reader can opt for the one most relevant to the use-case.


New Chrome OS Update Enables Google Play on Acer’s and ASUS Chromebooks

AnandTech — 9/24/2016 12:30:00 PM

Google has released an update for its Chrome OS that enables select Chromebooks to run apps designed for Google Android and access the Google Play Store. Right now, only two mobile PCs, one from Acer and another from ASUS, are compatible with the new build, but the release confirms Google’s intention to enable Android software on its OS for PCs.

Google has been trying to bring programs developed for Android to its Chrome OS platform for over two years now. At first, it tried to encourage developers to port certain apps to Chrome OS, but that only worked out for a limited number of programs. At its I/O conference this year, Google announced plans to alter Chrome OS to enable all Android apps from the Play Store to work in sandbox environments. Then, the company made its Play Store available to select Chromebook models running dev or beta channel builds. Finally, starting from this month, the Play Store is heading to stable Chrome OS builds.

Last week Google released Stable channel 53.0.2785.129 (Platform version: 8530.90.0) for the Acer Chromebook R11 (C738T) as well as the ASUS Chromebook Flip. The update contains numerous bug fixes, security updates, feature improvements as well as the Google Play Store (beta). Those who have already received the new stable channel version will need to enable the Play Store in the Chrome settings.

It is not completely clear which kernel and security features are required to run Android apps in sandboxes, but at present the update is only available for the aforementioned two laptops, and not even for Google’s own 2016 Chromebook Pixel. It remains unknown when, and whether, Google intends to enable the Play Store on other Chromebook devices. Meanwhile, one of the reasons why Google chose the Acer Chromebook R11 (C738T) and the ASUS Chromebook Flip as the first Google Play Store-compatible laptops could be their flip form-factor: it is easier for consumers to use Android tablet apps on a device that can transform into a tablet.

Android applications will make Google’s Chrome OS a bit more attractive to those who are looking for an alternative to web apps. Since there are hundreds of millions of active Android users, compatibility of Chrome OS with those applications could be good for Google's PC platform. However, keep in mind that many Chromebooks are built to rely on cloud-based services rather than on locally stored programs, which is why they only feature a limited amount of NAND flash-based storage. Therefore, to a certain degree, Android apps on Chrome OS will alter the concept of this platform and will require makers of hardware to take that into account when they design their next-gen Chromebooks.

Sources: Google, AndroidAuthority, Android Police.

View All Comments

alpha64 - Saturday, September 24, 2016:
Google has a list of Chrome devices that will support Android apps here:
https://sites.google.com/a/chromium.org/dev/chromium-os/chrome-os-systems-supporting-android-apps



AMD 7th Gen Bristol Ridge and AM4 Analysis: Up to A12-9800, B350/A320 Chipset, OEMs first, PIBs Later

AnandTech — 9/23/2016 5:00:00 AM

Over the last two weeks, AMD officially launched their 7th Generation Bristol Ridge processors as well as the new AM4 socket and related chipsets. The launch was somewhat muted, as the target for the initial launch is purely to the big system OEMs and system integrators, such as Lenovo, HP, Dell and others – for users wanting to build their own systems, ‘Product-in-Box’ units (called PIBs) for self-build systems will come at the end of the year. We held off on the announcement because the launch and briefings left a number of questions unanswered as to the potential matrix of configurations, specifications of the hardware and how it all connects together. We got a number of answers, so let’s delve in.

The CPUs

The seven APUs and one CPU being launched for OEM systems span from a high-frequency A12 part using the 7th Generation microarchitecture (we call it Excavator v2) down to the A6, and they all build on the Bristol Ridge notebook parts that were launched earlier in the year, this time focused on the desktop. AMD essentially skipped the 6th Gen, Carrizo, for desktop, as the design was significantly mobile focused – we ended up with one CPU, the Athlon X4 845 (which we reviewed), with DDR3 support but no integrated graphics. Using the updated 28nm process from TSMC, AMD was able to tweak the microarchitecture and allow full-on APUs for the desktop using a similar design.

The full list of processors is as follows:

AMD’s mainstream processors will now hit a maximum of 65W in their official thermal design power (TDP), with the launch offering a number of 65W and 35W parts. There is the potential to offer CPUs with a configurable TDP; however, much like the older parts that supported 65W/45W modes, it was seldom used, and chances are we will see OEMs stick with the default design power windows here. Also of note is the naming scheme: any 35W part now has an ‘E’ at the end of the processor name, allowing for easier identification.

As part of this review, we were able to snag a few extra configuration specifications for each of the processors, including the number of streaming processors in each, base GPU frequencies, base Northbridge frequencies (more on the NB later), and confirmation that all the APUs launched will support DDR4-2400 at JEDEC sub-timings.


NVIDIA Releases 372.90 WHQL Game Ready Driver

AnandTech — 9/21/2016 9:00:00 PM

Not to be outdone by AMD, NVIDIA also has their own driver release this evening, with the release of driver version 372.90.

Among the fixes in this latest driver are several game stability and G-Sync issues. In Mirror's Edge Catalyst, NVIDIA has fixed an issue with the Intensity slider, flickering has been fixed in Star Wars: The Old Republic, and a crash during extended gameplay sessions in Rise of the Tomb Raider has been buffed out. Meanwhile, G-Sync has received two fixes this time around, with NVIDIA addressing lag in G-Sync windowed mode and removing screen tearing in World of Warcraft that was occurring with in-game V-Sync enabled.

A more impactful fix, since this issue made the news a couple of months ago, is a fix for the HTC Vive when running the video feed through DisplayPort. It turns out that the headset was not lighting up despite connecting, but those who wish to use DisplayPort instead of HDMI for their VR endeavors should now be able to do so. Lastly, NVIDIA Ansel will be enabled by default in the driver for white-listed games.

Bundled in with all of these fixes we are also given game ready support for Forza Horizon 3, the latest racing game to be published by Microsoft Studios. Forza is seeing release next week on Tuesday the 27th, though Ultimate Edition owners will get a head start this Friday.

Anyone interested can download the updated drivers through GeForce Experience or on the NVIDIA driver download page. More information on this update and further issues can be found in the 372.90 release notes.

Source: NVIDIA


AMD Releases Radeon Software Crimson Edition 16.9.2 - Support for Forza Horizon 3

AnandTech — 9/21/2016 8:45:00 PM

As more games approach this fall, we can expect GPU manufacturers to keep updates rolling out so that our cards can be ready for the latest games as they arrive. To that end, AMD's Radeon Software release 16.9.2 - driver Version 16.40.2311 - brings with it a collection of fixes, a new CrossFire profile, and support for Microsoft’s latest racing sim, Forza Horizon 3.

Starting with bug fixes, the latest driver addresses RX 400 series issues, including intermittent mouse cursor corruption, occasional crashes during video playback in Mozilla Firefox, and flickering in Rocket League when running in CrossFire mode. Continuing on with CrossFire-related issues, AMD has resolved small amounts of stutter while playing Deus Ex: Mankind Divided in CrossFire mode under DX11, and the possibility of Ashes of the Singularity crashing while playing with DX12 and Multi-GPU enabled.

Meanwhile, the Radeon Settings application has a few more fixes of its own this month. Previously, upgrading from an earlier version of Radeon Software Crimson Edition may have caused user settings in Radeon Settings to reset to defaults; this has been corrected, as has a Radeon Settings crash under Windows 10 Anniversary Edition.

Finally, as part of this hotfix we will also get a CrossFire profile for the upcoming capital ship combat game Dreadnought, and launch-day support for Forza Horizon 3, the latest racing game to be published by Microsoft Studios. Forza is seeing release next week on Tuesday the 27th, though Ultimate Edition owners will get a head start this Friday.

As always, those interested in reading more or installing the updated hotfix drivers for AMD’s desktop, mobile, and integrated GPUs can find them either under the driver update section in Radeon Settings or on AMD’s Radeon Software Crimson Edition download page.

Source: AMD

View All Comments

RaichuPls - Thursday, September 22, 2016:
RX480 deep dive when? RX 460/470 reviews?



NZXT Unveils Fully Customizable Aer RGB LED Fans

AnandTech — 9/21/2016 11:00:00 AM

For system builders looking for a bit of extra flair, NZXT has announced a new family of computer fans with RGB LEDs, whose lighting can be customized using the company’s HUE+ hardware controller as well as CAM software. The 120- and 140-mm Aer RGB fans are designed for modders and high-end system builders, who would like to add maximum customization to their rigs and change their lighting settings on the fly.

The NZXT Aer RGB fans feature eight embedded LEDs covered with a special matte light-scattering material. The LEDs can be dynamically controlled using the NZXT CAM software when the fan is plugged into the company’s HUE+ module (sold separately). Each module can control up to five Aer RGB fans, which can be daisy chained but still controlled separately. The CAM app and HUE+ modules support various presets for different lighting effects (breathing, pulse, fading, etc.) and allow users to create their own presets as well, which enables everyone to build highly-custom PCs while using off-the-shelf components. The new Aer RGB fans complement NZXT’s LED strips, which can also be controlled using the HUE+.

NZXT offers the Aer RGB fans in both 120 mm and 140 mm form-factors. Both fans use fluid dynamic bearings, operate at 500 – 1500 RPM, and produce 31 dBA and 33 dBA of noise, respectively.

The manufacturer will start to sell its Aer RGB fans worldwide in late October. Since we are talking about unique, aesthetics-focused products, they are not going to be cheap: the Aer RGB120 will cost $29.99, whereas the Aer RGB140 will be priced at $34.99. For those who do not have a HUE+, NZXT will offer starter packs with two Aer RGB fans and one module for $79.99 (120 mm fans) and $89.99 (140 mm fans). In addition, the company will offer triple packs consisting of three fans of the same size for $79.99 (120 mm fans) and $89.99 (140 mm fans).


NVIDIA Announces Gears of War 4 Game Bundle for GTX 1080 and 1070

AnandTech — 9/21/2016 5:00:00 AM

As we enter the AAA blockbuster season and leave the avalanche of GPU releases behind us, we are beginning to see the first game bundles approach. Those considering an upgrade, but perhaps not quite convinced thus far, may see an enticing offer sway them over in these coming months.

Kicking off this week is a new bundle for high-end cards that will see Microsoft's upcoming Gears of War 4 bundled with GeForce GTX 1070 and 1080 graphics cards, including notebook variations thereof. Interestingly, this title is one of Microsoft's first AAA "Xbox Play Anywhere" games, which means purchasing (or in this case, receiving) a game allows it to be played on either the PC or the Xbox One. Along these lines, the game is receiving a simultaneous launch on both platforms on October 11th, nearly 10 years after the original's Xbox 360 release.

Interesting features here will be cross-platform online campaign co-op, cross-platform online Horde mode gameplay and platform specific competitive multiplayer. Of course it is worth noting the PC version provides enhanced graphics and supports 4K and 21:9 resolutions. Along with this we have the usual gamut of graphics options we’ve become accustomed to such as lighting, Anti-Aliasing, and texture settings with GeForce Experience available for those who don’t want to tinker.

It's worth noting that Microsoft's official "recommended" spec calls for either the GeForce GTX 1060 or the Radeon RX 480, which NVIDIA says is good for 1440p. Meanwhile, for gamers receiving the game as part of the GeForce bundle, the GTX 1080 meets Microsoft's "ideal" (4K) specification, and the GTX 1070 won't be far behind. Otherwise, as is usually the case, this bundle isn't being extended to the top-tier Titan X Pascal, nor is NVIDIA running any bundles with the GTX 1060 at this time.

This offer will run from now until October 30, 2016, or while supplies last. There are a couple of important notes to keep in mind, though. The game is only available on the Windows 10 Anniversary Update, so customers on earlier versions of Windows will be able to redeem the code but unable to view or download the game. Also, shoppers are encouraged to verify that a seller is participating in this bundle before buying, since NVIDIA cannot give codes to those that didn’t purchase from participating retailers/etailers.

Source: NVIDIA

View All Comments

Kevin G - Wednesday, September 21, 2016:
"the GTX 1070 meets Microsoft's "ideal" (4K) specification, and the GTX 1070 won't be far behind"
I'm pretty sure GTX 1070 vs GTX 1070 results in a tie. I think there is a typo here.



Superhero Bits: Luke Cage’s Ties To The Avengers, Justice Society of America, Psylocke Training & More

/Film — 10/1/2016 12:30:29 AM

Posted on Friday, September 30th, 2016 by Ethan Anderton

Want to see more concept art of Ant-Man when he was considered for Team Iron Man? Are you ready to see the Justice Society of America assembled in a new photo from DC’s Legends of Tomorrow? Have you seen how much training Olivia Munn did for X-Men: Apocalypse? How does Luke Cage tie into The Avengers? All that and more in this edition of Superhero Bits.

@prattprattpratt it’s on! Well done to whoever made this

A photo posted by @tomholland2013 on Sep 27, 2016 at 10:43am PDT

Thanks to a clever fan’s meme work, Tom Holland playfully challenged Chris Pratt to a dance-off.

Deathstroke co-creator Marv Wolfman expresses his excitement to see Joe Manganiello take on the villain role.

See some snippets of new footage from an international TV spot for Doctor Strange, arriving in just over a month.

Birth.Movies.Death. has a little primer on Smallville as the show arrives in its entirety on Hulu starting on October 1st.

That's a wrap for Skurge !!

Big Thanks @TaikaWaititi @Marvel and the amazing cast n crew , So much Fun ! pic.twitter.com/ArD6O70H9b

Karl Urban posted a photo of Thor: Ragnarok director Taika Waititi announcing that his work as Skurge is all done.

Get a jump start on future episodes of Supergirl, The Flash and Arrow with released synopses for the new seasons.

Concept artist Andy Park revealed some more art when Ant-Man could have ended up on Team Iron Man.

Susanna Thompson will be returning to Arrow as Moira Queen for the show’s 100th episode coming up this season.






Copyright © 2005-2016 /Film. Privacy Policy. Web design by Pro Blog Design. Logo Concept by: Illumination Ink

All names, trademarks and images are copyright their respective owners. Affiliate links used when available.

Tim Burton’s ‘Miss Peregrine’s Home for Peculiar Children’ Needs More Peculiarity [Review]

/Film — 10/1/2016 12:00:22 AM

Posted on Friday, September 30th, 2016 by Angie Han

On paper, Miss Peregrine’s Home for Peculiar Children sounds like a perfect combination of talent, material, and timing. It’s essentially an X-Men movie, in keeping with the current craze for superhero films, but one with a fanciful gothic vibe. It is directed by the master of fanciful gothic vibes, Tim Burton — who knows a thing or two about superheroes and big-budget blockbusters already. It’s led by the living Tim Burton drawings Eva Green and Asa Butterfield. Oh, and it’s based on a bestselling novel by Ransom Riggs.

In short, it has all the makings of a big hit that brings some much-needed quirkiness back to the multiplex. So why, then, does it all feel so… uninspired? So familiar? So not-very-peculiar?

Miss Peregrine opens in sleepy suburban Florida, where teenage Jake Portman (Asa Butterfield) lives a life so unremarkable he might as well not exist at all. His parents (Chris O’Dowd and Kim Dickens) barely seem to notice him; the popular kids at school certainly don’t. His closest friend might be his paternal grandfather Abraham (Terence Stamp), but sadly he is killed off by a Slenderman-like creature within minutes of our meeting him.

As Jake struggles to come to terms with Abraham’s death — and the uneasy feeling that something isn’t quite right about it — he heads to Wales to seek out the orphanage where Abraham grew up. He soon learns, though, that it’s no ordinary children’s home. It’s a haven for “peculiar” kids with superpowers, hidden inside a time loop set to September 3, 1943 and overseen by the unflappable Miss Peregrine (Eva Green). Imagine if Mary Poppins took over Xavier’s School for Gifted Youngsters and furnished it with wares found by searching for “steampunk” on Etsy, and you’ve more or less got the idea.

But wait, there’s more: Miss Peregrine and her children are in danger thanks to the evil Mr. Barron (Samuel L. Jackson), who commands a team of wights (evil peculiars, basically) and hollowgasts (those Slenderman-like monsters) and believes these peculiars are their key to immortality. And wouldn’t you know it, it turns out Jake has a secret of his own that might make him the peculiars’ last, best hope for survival.

FX’s ‘Archer’ Ending After Season 10

/Film — 9/30/2016 11:30:05 PM

Over the summer, FX renewed its spy spoof Archer for not one, not two, but three seasons, bringing the total season count up to ten and keeping the show on air through about 2019. Once that season ends, though, it’ll be lights out for Sterling and Lana and Malory and the rest of the gang. “The plan is to end Archer after season 10,” series creator Adam Reed said in a recent interview.

Reed spilled the beans about the Archer end date on the Murmur podcast (via Splitsider):

The plan is to end Archer after season 10. I don’t know that anybody has talked about that, but that is definitely my plan – to do 8, 9, and 10 and they’re gonna be each shorter seasons of just eight episodes – and then wind it up.

I was gonna end it after 8, but then I had sort of a brain explosion of a way that I could do three more seasons and really keep my interest up. So the three seasons that are coming up are gonna be pretty different from what has come before, and they’re gonna be different from each other.

The first seven seasons of Archer consisted of 10-13 episodes each, but when FX renewed the show they confirmed that seasons eight through ten would have eight episodes each. Season eight will premiere in 2017 and seasons nine and ten will likely air in 2018 and 2019. Reed’s reveal that he originally wanted to end the show after season eight may explain why it took FX longer than usual to get around to announcing the renewal.

Archer started out as a spy spoof, but it’s really more of a workplace comedy than anything else. Reed hasn’t been afraid to throw the occasional curveball to keep things interesting. Season five soft-rebooted the show as Archer: Vice, a Miami Vice-inspired satire that saw the main characters start their own drug cartel. The show reverted to its usual format in season six and then soft-rebooted again with season seven — this time as a sort of Magnum P.I. sendup, with the characters opening a detective agency.

It should be interesting to see what other surprises Reed has in store for us over the next three seasons, especially if the looming end date gives him more freedom to explore dramatic plot twists or pursue deeper character development. (But not too much deeper, hopefully, because these characters’ unrepentant assholery is why we love them.) Heck, maybe Reed and producer Matt Thompson will even try out the live-action idea they’ve tossed around in the past. Anything’s possible with this show, especially as we inch closer to the end.

‘Star Trek’ Fan Film Lawsuit Moving Forward; JJ Abrams’ Claims “Are Irrelevant”

/Film — 9/30/2016 11:00:36 PM

Last year, over $1 million was raised on Kickstarter and Indiegogo for Axanar, a Star Trek fan film. The ambition was to make both a short film and a polished, feature-length fan film, but then CBS and Paramount stepped in and filed a lawsuit to prevent Axanar from going into production, claiming the film violates their intellectual property rights. After a bit of an uproar surrounding the lawsuit, Star Trek Into Darkness director J.J. Abrams said the lawsuit would soon go away. But it hasn’t.

Update from Editor Peter Sciretta: This week, the movie studios filed paperwork in court to dismiss Abrams’ comments as an irrelevant third-party statement. According to TF, Axanar has requested to obtain any communications the studios had with Justin Lin and J.J. Abrams over the issue, but according to their lawyers, CBS and Paramount refused to hand anything over, claiming the information is irrelevant to the case.

In the joint motion, CBS and Paramount reiterate that the directors of the Star Trek films are not authorized to speak on behalf of the movie studio:

“J.J. Abrams is a producer/director of certain Star Trek Copyrighted Works and Justin Lin was the director of Star Trek Beyond. Neither Mr. Abrams nor Mr. Lin is an authorized representative of either of the Plaintiffs.” … “A third party’s statement about the merits of this lawsuit has absolutely no bearing on the amount of money Defendants obtained by their infringing conduct, nor does it bear on any other aspect of damages.”

It does seem ridiculous that the studios are refusing to drop the case, especially since J.J. Abrams made his comments while speaking at the studio-produced, internet-broadcast Star Trek Beyond fan event.

Our previous story, written by Jack Giroux on June 17th, 2016, follows:

Not much has changed since Abrams’ comments, though, as Paramount and CBS haven’t dropped the lawsuit yet. Below, find out more about the Star Trek fan film lawsuit.

It was reported that Abrams and Star Trek Beyond director Justin Lin convinced Paramount not to move forward with the lawsuit. A month ago, Abrams said it would come to an end “within a few weeks”:

A few months back there was a fan movie and this lawsuit that happened between the studio and these fans, and Justin was sort of outraged by this as a longtime fan. We started talking about it and realized this wasn’t an appropriate way to deal with the fans. The fans of Star Trek are part of this world. We went to the studio and pushed them to stop this lawsuit. Within a few weeks, it’ll be announced that this lawsuit is going away.

Following Abrams’ comments and a tweeted statement from Paramount, Axanar Productions filed a counterclaim. They asked the judge to declare their fan film — which takes place 21 years prior to “Where No Man Has Gone Before,” the first episode of the original Star Trek — did not infringe on Paramount and CBS’ copyrights, but their motion to dismiss got rejected.

According to The Hollywood Reporter, this past Wednesday Paramount and CBS informed a California federal judge that their lawsuit remains pending. In their original statement, the plaintiffs said:

The Axanar Works infringe Plaintiffs’ works by using innumerable copyrighted elements of Star Trek, including its settings, characters, species, and themes.

Axanar producer Alec Peters was originally confident there wouldn’t be any legal issues, especially after meeting with CBS. Last year, Peters told The Wrap:

CBS has a long history of accepting fan films. I think Axanar has become so popular that CBS realizes that we’re just making their brand that much better.

Peters predicted there’d be a backlash against CBS and Paramount if a lawsuit was filed, and, in that regard, he was proven correct. With Axanar, Peters and all involved wanted to “minimize the intellectual property” they would use. For example, they wouldn’t use Star Trek logos and would avoid the word as much as possible. They also wouldn’t make a profit from the film, which they thought would be enough to avoid a potential lawsuit.

Paramount and CBS could still be considering dropping the lawsuit after Abrams and Lin urged them to, but at this moment in time, it’s still pending. Here’s the Axanar team’s statement following Abrams’ hopeful comments:

While we’re grateful to receive the public support of JJ Abrams and Justin Lin, as the lawsuit remains pending, we want to make sure we go through all the proper steps to make sure all matters are settled with CBS and Paramount. Our goal from the beginning of this legal matter has been to address the concerns of the plaintiffs in a way that still allows us to tell the story of AXANAR and meets the expectations of the over 10,000 fans who financially supported our project.

HBO Wants ‘Game of Thrones’ Spinoff: “It’s About Finding the Right Take”

/Film — 9/30/2016 10:30:32 PM

Posted on Friday, September 30th, 2016 by Angie Han

We’ve got just two seasons left of Game of Thrones, with the eighth and final season set to air in 2018. But don’t assume that’ll be the last we see of Westeros. The network has previously floated the possibility of a Game of Thrones spinoff, and HBO programming president Casey Bloys says the idea is still under consideration. It’s just about finding the “right take.”

Game of Thrones has been a massive success for HBO on just about every level. It’s been showered in critical acclaim and permeated popular culture. Ratings have increased each season, peaking with the most recent finale. And just a few weeks ago it set the all-time record for most Emmy wins of any scripted series. In short, it’s no wonder HBO wants to keep the Game of Thrones franchise going as long as possible.

But Game of Thrones is also telling a single story, and that means the show has to come to a definitive end eventually — much to HBO’s regret. “In a perfect world, Game of Thrones would keep going and we wouldn’t have to deal with any of this,” Bloys told The Hollywood Reporter. But he sees an opportunity to keep the franchise going with a spinoff. “There are so many properties and areas to go to,” he said. “For us, it’s about finding the right take with the right writer.”

Bloys clarified that HBO is not actively developing any further Game of Thrones series at the moment. “Not yet,” he said. But it sounds like they’ve at least started to toss some ideas around. “There are things that sound interesting, but at this point, we have no writers assigned or anything like that.”

For his part, Game of Thrones author George R.R. Martin seems open to the idea of a spinoff, though he’s stopped short of confirming anything.

I do have thousands of pages of fake history of everything that led up to Game of Thrones. There’s a wealth of material there, and I’m still writing more, but at the moment, we still have this show to finish, and I still have two books to finish. So, that’s all speculation.

As for who would write this (still theoretical) spinoff, Game of Thrones showrunners D.B. Weiss and David Benioff have already ruled themselves out. “I’m sure there will be other series set in Westeros, but for us this is it,” said Benioff. Bloys, however, hopes they won’t leave the franchise entirely. “I would not expect them to do it, because they’re going to need to decompress for a good amount of time,” he said, “but it would certainly be nice to have their involvement in some way. At what level? I have no idea.”

Cool Stuff: Hot Toys ‘Star Wars: the Force Awakens’ Luke Skywalker 1/6th Scale Figure

/Film — 9/30/2016 10:00:50 PM

Posted on Friday, September 30th, 2016 by Peter Sciretta

It may be “Rogue Friday,” but not all the Star Wars announcements today concern Rogue One: A Star Wars Story. Yesterday we shared with you the latest in Hot Toys’ Movie Masterpiece Series, a bunch of 1/6th scale figures from Rogue One. Today we bring you a 1/6th scale figure that many fans have been waiting for — Luke Skywalker from Star Wars: The Force Awakens. That’s right, a collectible figure replicating Mark Hamill in the fantastic ending sequence of JJ Abrams’ Star Wars sequel. Hit the jump to take a look at the Hot Toys Star Wars: The Force Awakens Luke Skywalker 1/6th Scale Figure.

Hot Toys Star Wars: The Force Awakens Luke Skywalker 1/6th Scale Figure Photos

Hot Toys has released the first images of their Star Wars: The Force Awakens Luke Skywalker 1/6th Scale Figure. I must admit, Hot Toys usually nails the facial and head sculpt, providing scary-real replicas of the actor’s features. But I’m not that impressed with the sculpt for old Luke Skywalker. Something just seems a bit off. Here is the official info from Hot Toys:

“Luke Skywalker? I thought he was a myth.” – Rey. In the aftermath of the fall of the Empire, Luke Skywalker, the last surviving Jedi, has put himself in exile after his attempt to train a new generation of Jedi went horribly awry. As a new threat to the galaxy known as the First Order emerges, Leia, Han, Chewie, and a group of Resistance heroes risk their lives trying to locate Luke’s whereabouts – with the hope of bringing him back into the fold in their desperate struggle to restore peace and justice to the galaxy.

The much-anticipated 1/6th scale Luke Skywalker Collectible Figure from Star Wars: The Force Awakens is specially crafted based on the image of Mark Hamill as Luke Skywalker in the film featuring a newly developed head sculpt, specially tailored costume, a mechanical right hand, and a Star Wars-themed figure stand. When you pre-order now, you can also receive a specially designed diorama figure base as a pre-order bonus accessory.

Hot Toys – MMS390 – Star Wars: The Force Awakens 1/6th scale Luke Skywalker Collectible Figure Specification

~ Movie Masterpiece Series ~

The 1/6th scale Luke Skywalker Collectible Figure’s special features:

  • Authentic and detailed likeness of Mark Hamill as Luke Skywalker in Star Wars: The Force Awakens
  • Movie-accurate facial expression with detailed skin texture
  • Body with over 30 points of articulation
  • Approximately 28 cm tall
  • Three (3) pieces of interchangeable mechanical right hand, including:
    • One (1) right fist
    • One (1) relaxed right hand
    • One (1) gesturing right hand
  • Three (3) pieces of interchangeable left hands, including:
    • One (1) left fist
    • One (1) relaxed left hand
    • One (1) gesturing left hand
  • Each head sculpt is specially hand-painted

Lee Daniels Is Working On A Musical About His Own Life Similar To Fellini’s ‘8 1/2’

/Film — 9/30/2016 9:30:15 PM

With four feature films and a TV series under his belt, Lee Daniels has created a rather successful career as a director, writer, and producer. But has he achieved enough and lived a compelling enough life to merit an autobiographical musical? The filmmaker seems to think so because he’s currently in talks to make the project happen.

Find out about the Lee Daniels musical biopic after the jump.

Speaking with Billboard (via The Playlist) about his forthcoming 2017 television series Star, Lee Daniels revealed that the biographical musical is in development:

“My publicist will kill me, but I’m in talks about doing a musical film about my life. I’ve had a pretty interesting life. I’ve come from the projects. I’ve been homeless. It’ll have original music and sort of be like Fellini’s ‘8 1/2‘ or ‘All That Jazz.’”

It’s rather bold even to think that your own life is interesting enough for a biographical drama, let alone one that’s also a musical. Even bolder is the comparison to iconic pictures such as Federico Fellini’s 8 1/2 (which served as inspiration for the musical Nine) and the musical All That Jazz, which was also inspired by the aforementioned surreal Italian film. But Daniels does have the kind of life that could make for a juicy drama.

Daniels revealed details about his past while appearing at a SAG-AFTRA Foundation benefit event last fall (via THR), expanding a little on that detail about being homeless:

“I was homeless for a little bit. I didn’t pay my rent in Hollywood when I got here,” said Daniels. “I lived in a church and I started directing theater and it was empowering. I’ve been very blessed because I shouldn’t be here today. I’m very blessed that I wasn’t shot growing up. I’m very blessed that I didn’t die of HIV as many of my friends did — and I held them in my arms in the ’80s — because certainly I was destined to do that. That I’m not HIV [positive] is a miracle. I then went on to do drugs because I thought that I should die, and I survived two heart attacks.”

Daniels’ father was a Philadelphia police officer who was killed in the line of duty. Sadly, his father was also physically abusive after the future filmmaker came out as gay. And all this was before his professional career.

Lee Daniels is a self-made man in Hollywood. According to his Wikipedia page, Daniels started working as a receptionist at a nursing agency in California after college. Realizing he could run the same business himself, he quit and created his own agency, which recruited 5,000 nurses. These skills led him to work as a casting director and manager, going on to represent actors like Wes Bentley around the time he broke through with American Beauty.

While I still think that comparisons to Fellini’s 8 1/2 are a little audacious, you can’t say Daniels hasn’t been through hardships and adversity before finding success. The question is whether this musical would be something as surreal as the Italian filmmaker’s “autobiographical” movie, or maybe a little more restrained. It’s pretty early in development at this point, so we’ll see if this movie even gets off the ground anytime soon.

‘War for the Planet of the Apes’ Plot Revealed: Caesar vs. The Colonel… and Himself

/Film — 9/30/2016 9:00:14 PM

Posted on Friday, September 30th, 2016 by Peter Sciretta

20th Century Fox has announced that War for the Planet of the Apes will be providing a sneak preview to attendees of New York Comic-Con next week. But more importantly, they have released the first official plot synopsis for the upcoming Dawn of the Planet of the Apes sequel. Hit the jump to learn about the War for the Planet of the Apes plot and also find out more about the NYCC festivities.

Here is the official plot synopsis for War For The Planet Of The Apes:

In War for the Planet of the Apes, the third chapter of the critically acclaimed blockbuster franchise, Caesar and his apes are forced into a deadly conflict with an army of humans led by a ruthless Colonel. After the apes suffer unimaginable losses, Caesar wrestles with his darker instincts and begins his own mythic quest to avenge his kind. As the journey finally brings them face to face, Caesar and the Colonel are pitted against each other in an epic battle that will determine the fate of both their species and the future of the planet.

This plot summary is particularly interesting because it confirms that War will take place some time after the events of Dawn of the Planet of the Apes. That film ended with the humans calling for military assistance from another faction, and its originally shot but deleted ending scene would have shown Caesar and his apes looking out from the Golden Gate Bridge at the oncoming battleships.

This story appears to take place in a different location with a new villain, the Colonel, who we know will be played by Woody Harrelson. It’s also fascinating that the story will focus on Caesar struggling with his darker instincts, and it seems like sort of a mythic western journey. Here is the behind-the-scenes photo that director Matt Reeves tweeted at the start of production:

It has begun. @ApesMovies pic.twitter.com/6AZVzWfJXW

— Matt Reeves (@mattreevesLA) October 17, 2015

Previously, Reeves said he wants “the story to be able to connect from the human to the ape world.”

“So first one, [Rise of the Planet of the Apes] is this sort of how [Caesar] goes from humble beginnings to becoming a revolutionary. In Dawn, he rose to the occasion of becoming a leader, a great leader in really challenging difficult times. The notion of what we’re after in the third is continue that trajectory to how he becomes the seminal figure in ape history and almost becomes sort of like an ape Moses of sorts, a kind of mythic ascension. We’re trying to play out those themes and try to explore it in this universe of exploring human nature under the guise of apes.”

He later stated that the film would focus on Caesar more than the last two did, calling him a “unique character in that he was raised as human but he’s an ape, and he wasn’t part of either.” Serkis has said that at the start of War, the “ape community has fallen apart” and Caesar is “going to have to lead the apes in darker times” with a potential war on the horizon.

As for the information about the NYCC event, here goes:

At New York Comic-Con, 20th Century Fox will debut an exclusive behind-the-scenes look at the creation and filming process, as well as never-before-seen film footage. At the event, director Matt Reeves and producer Dylan Clark will join Andy Serkis on stage to discuss “the unique relationship between performance capture acting and filmmaking.” Attendees can catch the panel on Thursday, October 6th at 8:30 pm at the Regal E-Walk Theater, 247 West 42nd Street.

War for the Planet of the Apes will hit theaters nationwide on July 14th, 2017. The film stars Andy Serkis, Woody Harrelson, Steve Zahn, Terry Notary and Karin Konoval.

Ranking the Movies of Director Peter Berg: Plenty of Handheld Chaos & Full Hearts

/Film — 9/30/2016 8:00:44 PM

Posted on Friday, September 30th, 2016 by Jack Giroux

Very little is showy about Peter Berg‘s movies. He’s typically a filmmaker who manages to stay invisible, often successfully trying his hand at different genres. His strengths — his eye for performances and grasp on tension, in particular — are never overt in his movies. He’s a director that can build and build pressure over an extended period and create a great sense of geography with some quick cutting, but again, his skills never draw your attention away from the story.

As the director’s latest film, Deepwater Horizon, hits theaters, I wanted to take a look back at his career so far. Unfortunately, I didn’t have the opportunity to see Deepwater Horizon before putting together this list, but the reviews are enough to convince me to see it as soon as possible. Below, check out our Peter Berg ranking.



This box office bomb isn’t a disaster. There’s hardly anything offensive about Berg’s misfire, and it’s about as self-aware as a summer movie gets. With $200 million, Berg made a movie featuring Rihanna and Brooklyn Decker in starring roles (not a knock), a brief appearance from Liam Neeson, aliens, and a few references to the game. This is Berg’s attempt at making an extremely audience-friendly movie, but in trying to appeal to everyone, it lacks specificity, anything fresh to make it stand out. Worst of all, Battleship is also overlong, lacking excitement or any characters of substance. Taylor Kitsch and everyone else are serviceable considering what they’re asked to do, but Battleship is too thin to maintain interest for over two hours. Maybe if this story of a naval ship facing off against aliens were a brisk, fast-paced action movie it’d entertain, but this is just another summer movie that’s more bloated than huge. To the filmmakers’ credit, it’s sometimes appealing to the eye, and the special effects are often impressive, like in the sequence pictured above.



This Will Smith star vehicle was a disappointment at the time of its 2008 release. Berg’s film attempts to give us a bottom-of-the-barrel type of superhero with Hancock, a sloppy and unpleasant drunk who often does more harm than good. It’s a good setup, but the third act of Hancock goes off the rails. Tonally, it becomes a different movie, and the change isn’t justified. The first half of Hancock is a funny, slightly dark comedy, but the tonal turn and the third act twists are jarring and unsatisfying. Berg maintains his intimate handheld shooting style with Hancock; his fingerprints are more evident here than in Battleship. The performances from Smith, Jason Bateman, and Charlize Theron are unsurprisingly good. Smith still somehow manages to keep his signature charm while playing one of his more unlikable characters. Hancock is frustrating and messy, but Smith provides some fun.

Quentin Tarantino Almost Made a ‘Luke Cage’ Movie, Discusses His Hopes For Marvel’s Show

/Film — 9/30/2016 7:30:53 PM

Today brings the entire first season of Marvel’s new Netflix series Luke Cage to everyone’s streaming devices. But did you know Quentin Tarantino once almost made a movie focusing on the Hero for Hire before he decided to make Pulp Fiction?

The revelation that a Quentin Tarantino Luke Cage movie was once a possibility came when the filmmaker was making publicity rounds for The Hateful Eight last winter. Now that the series is out, he was recently asked what he’d like to see from the series adaptation of the comic book.

First up, here’s what Quentin Tarantino had to say about his abandoned plans for a Luke Cage movie when he appeared on the Nerdist podcast back in December of 2015:

One of the things I wanted to do before Pulp Fiction to some degree or another…one of the outside projects that I considered doing was doing a Luke Cage movie.

Tarantino even had an idea of who should play Luke Cage in the movie, but it was discussions with friends that turned him off of the prospect of even trying to make it happen. The director explains:

In the case of Luke Cage, it was my comic geek friends that almost talked me out of it, because I thought [Laurence] Fishburne back in the day would’ve been a great Luke Cage, and they were talking about Wesley Snipes. And I could see them both, but it was like ‘I think Fish would be better.’ And they go ‘Yeah…he could work out and everything, but he doesn’t have the bod that Wesley Snipes has, and Luke Cage needs to have the bod.’ And I literally was so turned off that that would be their both starting and ending point, that it literally put it in my head that, if I do a comic book movie, it should be an original character. It should be something I create rather than try to fit in.

Even if Tarantino had decided to pursue a Luke Cage movie around the mid-1990s, I can’t imagine the development would have gone well at a studio. After all, this was a time when the only superheroes on the big screen were Christopher Reeve as Superman and Michael Keaton as Batman, soon to be Val Kilmer as Batman. Bringing one of the more obscure Marvel heroes to the big screen likely wouldn’t have been met with much enthusiasm by studio executives, especially before comic book movies were all the rage.

So now that it’s a bit easier to bring a superhero like Luke Cage to life on screen, what does Tarantino hope to see from Netflix’s series? Well, he might be a little pickier than most. Here’s what he had to say to Yahoo:

To tell you the truth, I might be one of the pains in their asses because I love the way the character was presented so much in the ’70s. I’m not really that open to a rethinking on who he was. I just think that first issue, that origin issue … was so good, and it was really Marvel’s attempt to try to do a blaxploitation movie vibe as one of their superhero comics. And I thought they nailed it. Absolutely nailed it. So, just take that Issue 1 and put it in script form and do that.

But considering the current political climate and racial tension, perhaps rethinking Luke Cage was wise. Society was in a different place in the 1970s, and blaxploitation may not fit all that well in the framework of contemporary culture. Plus, the show seems progressive in its own right, since it tackles certain race topics from today’s headlines.

The only way to find out if Luke Cage is done right is to watch the show on Netflix right now.

‘Jurassic World 2’ Will Not Be ‘Jurassic War’; More Animatronics, Suspenseful and Scary

/Film — 9/30/2016 7:00:10 PM

The Orphanage/The Impossible filmmaker J.A. Bayona is busy working on the Jurassic World sequel with writers Derek Connolly and Colin Trevorrow. Trevorrow, who directed Jurassic World, also serves as a producer, and we’ve heard he’s very involved in the new sequel. He appeared on a recent episode of the InGeneral podcast and talked about how they are planning to use more animatronic dinosaurs in the next film, which he also describes as “more suspenseful and scary.” Hit the jump to learn the details.

Why J.A. Bayona Is Directing Jurassic World 2

Trevorrow explained on the InGeneral Podcast that he “wanted Bayona to direct it long before anyone ever heard that was a possibility.” In fact, in an Empire interview six months before Bayona was announced, he teased that “There are some pretty cool Spanish horror directors whose Jurassic Park movie I’d love to see as a fan.” He says the story was “built around his skillset,” and that is why the sequel “will be more suspenseful and scary”: “It’s just the way it’s designed; it’s the way the story plays out.”

Trevorrow seems very involved. Talking about the collaborative nature of the project, he revealed that he’s been in the office every day since July “working closely with J.A. [Bayona], listening to his instincts, and honing the script with Derek [Connolly] to make sure it’s something that all of us believe in.”

“Film has become so cutthroat and competitive; it felt like an opportunity to create a situation where two directors could really collaborate. It’s rare these days, but it’s something that the directors that we admire used to do all the time—one writes and produces and the other directs, and the end result is something that’s unique to both of them.”

Emulating The Structure Of Jurassic Park

The sequel will try to emulate the structure of the original Jurassic Park, placing the biggest action sequence in the middle and then funneling down into an intimate and personal ending:

“That is a model that worked very well and I’m very interested in, J.A. is going to be perfect for something like that.”

I think if you look at the structure of the original Steven Spielberg film, this quote is very telling about the sequel.

Learning From Jurassic World And Borrowing From Michael Crichton

Trevorrow said that this film will be a lot different from the last because they are building the story from the ground up, as opposed to the last movie, which had been in development for over a decade before he came on board. While he had creative control over his vision, coming on that far along does bring some baggage. He admits there was also pressure for the last film to introduce the Jurassic Park franchise to a new generation who maybe had not seen the original.

As for Jurassic World 2, Trevorrow realizes that in our sequel-cynical times, “this has to prove it has a reason to exist beyond just making more money.” Trevorrow believes it does and says there is more story to tell, and they are expanding on Jurassic Park author Michael Crichton‘s original ideas. They have even included a bit of dialogue from Crichton’s original book in the sequel.

More Animatronics

Having learned some lessons from Jurassic World, Trevorrow admits that one of the “key motivations of this movie is not that we need to make it bigger necessarily for it to be equally compelling to people.” He realizes it’s just as thrilling to have small, grounded, suspenseful moments as massive CG-filled action sequences. And speaking of CG, Trevorrow says, “There will be animatronics for sure.”

“We’ll follow the same general rule as all of the films in the franchise which is the animatronic dinosaurs are best used when standing still or moving at the hips or the neck. They can’t run or perform complex physical actions. And anything beyond that you go to animation. The same rules applied in Jurassic Park. I think the lack of animatronics in Jurassic World had more to do with the physicality of the Indominus, the way the animal moved. It was very fast and fluid, it ran a lot, and needed to move its arms and legs and neck and tail all at once. It wasn’t a lumbering creature. And we’ve written some opportunities for animatronics into this movie, because it has to start at the script level—and I can definitely tell you that Bayona has the same priorities, he is all about going practical whenever possible.”

This Won’t Be Jurassic War

He’s certainly saying all the right things. One of the things I loved about Jurassic Park and Jurassic World was the concept of a dinosaur theme park. I’ve never been as interested in the militarized dinosaur storyline that has been teased by the series. Thankfully it doesn’t sound like that is the direction they are taking the Jurassic World trilogy:

“As a writer of the thing, I’m not that interested in militarized dinosaurs, at least not in practice. I liked it in theory as the pipe dream of a lunatic. When that idea was first presented to me as part of an earlier script, it was something that the character that ended up being Owen was for, that he supported, something that he was actively doing even at the beginning. Derek [Connolly] and I, one of our first reactions was ‘No if anyone’s gonna militarize raptors that’s what the bad guy does, he’s insane.'”

He says he leans more toward the idea of proliferation and open sourcing than militarization, so don’t expect Jurassic War to be the title of the next film. Trevorrow joked that you’d need six movies to get to a place where that kind of storyline would fit with our reality. Hearing him say that is very reassuring.

Giving larger clues as to what we might expect, Trevorrow says his interest in this story “is to look at our relationship with animals, how we’ve used and abused them, and what that relationship says about us; how we share the planet with other living things.”

“We’ve seen the movie about don’t mess with science. And we’ve seen the movie about how corporate greed can put the needs of a few over the needs of many. And now I’m interested in exploring further our relationship with other living creatures on this earth and how we can use dinosaurs as a parable for that.”

They are currently prepping the film in London and will be shooting on soundstages in the UK, but Trevorrow says it’s a common misconception that the movie’s story will therefore take place there. As for where the sequel will be set, he’s not ready to reveal that. We had heard that production would return to Hawaii, and Colin confirms that while it is a primary location, it’s not the main setting for this sequel’s story.

You can listen to Colin Trevorrow’s full interview on the InGeneral Podcast below. It’s a great interview, and even director J.A. Bayona makes a brief appearance at the end:

The yet-to-be-titled Jurassic World sequel is scheduled to hit theaters on June 22nd, 2018.

New ‘Trollhunters’ Photos Show Off Guillermo Del Toro’s Animated Netflix Series

/Film — 9/30/2016 6:30:15 PM

Originally planned to be a feature film at Walt Disney Pictures back in 2010, Trollhunters is now an animated series over at Netflix, produced by DreamWorks Animation. It’s the latest endeavor from Guillermo del Toro, who has been passionate about the project for years now. The story was first published last summer as a young adult novel, and now we’re getting a better look at the series as the premiere on Netflix later this year gets closer.

Check out the new Trollhunters photos after the jump.

Entertainment Weekly debuted the new photos from the show, which features the voice of the late Anton Yelchin as a 15-year-old named Jim who comes to be a defender of the good trolls who are at war with some opposing bad trolls. This is all happening underneath his hometown, San Bernardino.

Jim isn’t alone in this fight, though. He has his friend Toby (voiced by Charlie Saxton) to fight along with him, as well as one of the good trolls named Blinky (voiced by Kelsey Grammer), who lends a couple extra hands with his four arms. Of course, all this fighting has to happen in between the mundane activities of school.

Though Yelchin was killed in a tragic accident earlier this year, he had already completed most of his work on the series, so his character didn’t need to be recast. Del Toro says, “We went through great pains to ensure that his voice is preserved for the series. He was so passionate about it, and he had so much fun doing this.” The director added there was a “simple goodness and beautiful soul” that Yelchin brought to his performance for Jim. Presumably if the show is popular enough to get more seasons, that role will be recast, but it’s nice to have one of Yelchin’s last performances intact.

Here’s the official synopsis of the book published last summer:

Jim Sturges is your typical teen in suburban San Bernardino, one with an embarrassingly overprotective dad, a best friend named “Tubby” who shares his hatred of all things torturous (like gym class), and a crush on a girl who doesn’t know he exists. But everything changes for Jim when a 45-year-old mystery resurfaces, threatening the lives of everyone in his seemingly sleepy town. Soon Jim has to team up with a band of unlikely (and some un-human) heroes to battle the monsters he never knew existed.

‘American Honey’ Star Sasha Lane on Road Tripping and Finding Hope in Flyover Country [Fantastic Fest Interview]

/Film — 9/30/2016 6:00:46 PM

Posted on Friday, September 30th, 2016 by Jacob Hall

Andrea Arnold’s American Honey is a remarkable movie and one of the best films of 2016. At the center of this intimate and quietly epic drama is newcomer Sasha Lane as Star, a young woman who escapes her abusive home by joining a “mag crew” of equally disaffected youth. We follow this crew as they travel from state to state, peddling magazines, having misadventures, and finding hope and pain in every nook and cranny of the American heartland.

Lane gives the kind of raw and brutally real performance you do not often find from more polished and experienced actors. The same applies to an interview in a karaoke room at Fantastic Fest – she’s not one to offer a canned answer. Over the course of a too-brief conversation, we spoke about working with a director as empathetic as Andrea Arnold, what it’s like to work with Shia LaBeouf, and how most movies turn away from the subject matter explored in American Honey.

I saw American Honey at 8:00 in the morning and I was worried because it is a very long movie and I was very tired. But I was riveted from the first scene. It flies by.


What was it like when you watched it for the first time? How did you feel?

Dude, I was like…it was so intense. I couldn’t even watch it as a movie and still to this day can’t watch it anymore because there’s so much emotion and it’s so intense. I was just remembering how I felt on that day and how my brother was there and so many different things. But it’s cool. You get into it. The trailer I feel like is a good overall [representation] but the movie hits hard. It hits hard, like when “God’s Whisper” comes on.

The trailer does a good job of selling the more exciting aspects of the movie, but there is a lot of subtlety in it. A lot of unseen emotions.


Correct me if I’m wrong, but [director] Andrea Arnold found you on a beach, right?


So how’d that happen?

It’s just as weird as it sounds. Literally. She picked me out from the beach and we ended up talking and I stayed a week with her. It was the most bizarre, random thing ever, but so organic – “I’m doing a movie, I dig who you are, I want to get to know you more, throw you in some situations.” By the end of it, she’s like “You ready to go? Because I’m ready to go” and I’m like “Yep, let’s go!”

Were you aware of her at all before you met her?

No! But I went and watched Fish Tank after and I really dug the whole aesthetic of it and who she was and how she was describing Star as a strong person. Even if she’s naive and impulsive, she’s free-spirited. She’s a good strong girl. She’s not just the come-save-me type thing.

You’re from Texas, right?


So had you visited many of the areas seen in the movie before? Oklahoma and the middle of America?

I hadn’t… Oklahoma is the only place I had been, but knowing the midwest and Texas and knowing the type of life and those kinds of people…it was very familiar, even though I hadn’t been there.

Since movies are so often made on the coasts, I felt like this is one of the few movies to really, truly get this part of the country. Even at its ugliest, it’s familiar.

Yeah! Exactly. People are always like “Did you learn things about America? How was it like road tripping?” And I’m like “No, I know that!” I’m from that. Even if I hadn’t been there, it was very much familiar territory.

And this was filmed as a road trip. You guys did really just drive all around these states as a group. How much of the shoot was carefully structured and how much of it was figuring it out as you went along?

Pretty much all of it was written. The parts in the van were the most…that’s documentary style. Because if anything, [Andrea Arnold] would say “I want you to bring up this, but do as you do.” Sometimes we’d be in the van and they’d randomly turn the camera on. But we’re just chilling. We’re just doing as we do as we talked. That was improv’d, documentary style. The rest was directed, but we were still free to say as we said it and that’s why editing was awful! [Laughs]

The cast really feels like a unit who have been traveling together for a long time. Did you have the chance to hang out before the movie? Did you do a lot of hanging out off set to strengthen your bonds?

We met a week before we started filming and from then on, we were just in it. I had days where I worked without them a lot, but as soon as I was done filming, I’d go hang out with them and go right back to work. We were constantly together. We lived in hotels, so all we had to do was hang out in parking lots. And we were in that van, so we were scrunched up next to each other in a van, so it’s just like…I know you. I know you.

A lot of sing-alongs.

Yeah! Which is cool because it brings people together. That connection was very much inevitable.

Was there a lot of music played on the set? Music is so important in the movie.

We actually had to cheat certain things, because we had each song playing as we were going through it. It was very real, very alive.

Daniel Craig Still the “First Choice” to Play James Bond

/Film — 9/30/2016 5:30:03 PM

Posted on Friday, September 30th, 2016 by Angie Han

It’s been fun to speculate about who the next James Bond could be, especially since the current Bond has said on the record that he’d rather slash his wrists than play Bond again. However, any predictions may be premature at this point. According to Callum McDougall, who’s executive produced all the Daniel Craig Bond films to date, Craig remains “absolutely the first choice” to play 007 in the next film.

McDougall touched upon the future of the Bond franchise during an appearance on BBC Radio 4’s Today program (via Deadline). Asked whether Craig would return as Bond, McDougall responded, “I wish I knew.” But he made it very clear that he and the team at Eon Productions are hoping Craig will. “We love Daniel. We would love Daniel to return as Bond,” he said. “Without any question he is absolutely Michael G. Wilson and Barbara Broccoli’s first choice. I know they’re hoping for him to come back.”

Craig has confirmed he is contracted for one more Bond movie, and he’s since tried to downplay the whole “slash my wrists” thing, so it’s not completely crazy to think he might return. While he does have a few other projects lined up, including the Showtime miniseries Purity, it doesn’t sound like they’ll necessarily prevent him from reprising the Bond role if he really wants to.

But Craig does not sound like he really wants to. The star seems to hate his most famous character, and has said that if he does do another Bond movie, “it would only be for the money.” And for what it’s worth, his good friend Mark Strong has said another Daniel Craig James Bond movie is “probably never going to happen.” Meanwhile, there are reports that the producers have quietly started talking to potential replacements.

Although it’s not unheard of for actors or producers to play coy with the press as a negotiation tactic, Craig’s comments seem to go beyond mere strategizing. He seems genuinely eager to move on, to the point where he’s willing to piss off his own bosses by criticizing his own character in public. He might get dragged back kicking and screaming yet, but it’s hard to believe he’s going to be happy about it.

It’s easy to understand why the producers want him back. Craig helped revitalize the franchise and his films have been massive hits. Plus, they probably want to avoid the fan fatigue that comes with rebooting a franchise too quickly (remember Amazing Spider-Man?). But at some point, it may just be easier for both sides to go their separate ways. If that happens, we’ve got a few ideas about whom we’d like to see Eon hire next.

Zack Snyder Teases Deathstroke in ‘Justice League’ Set Photo

/Film — 9/30/2016 5:00:56 PM

A few weeks ago, Ben Affleck posted a brief video revealing that Deathstroke would be making his way to the DC Extended Universe. Shortly afterward, Geoff Johns confirmed that Joe Manganiello had been cast in the role for Affleck’s Batman solo movie, which is expected out around 2018 or 2019. But it looks like we might get to meet the villain a little bit earlier than that.

Director Zack Snyder has posted a behind-the-scenes photo from the set of next year’s Justice League, which doesn’t look like much of anything at first glance but upon closer examination seems to suggest Deathstroke will appear in that movie.

Here’s the original photo posted by Snyder:

#JusticeLeague #Cosplay pic.twitter.com/JwzoevN2HI

— Zack Snyder (@ZackSnyder) September 29, 2016

Why is Snyder wearing a Bat-gauntlet to fiddle around on his tablet? Well, why not? Don’t pretend you wouldn’t do the exact same thing in his situation.

But the photo gets more interesting when you zoom in on what he’s looking at. @TheDCEU used Photoshop magic to get a closer look.

.@ZackSnyder posts new behind-the-scenes pic. And with a li'l bit of photoshop and enhancements, appear to be Deathstroke on the storyboard. pic.twitter.com/XsG1kzYZ6d

— DC Extended Universe (@TheDCEU) September 29, 2016

Yup, it looks like a Justice League storyboard featuring Deathstroke. In the comics, Deathstroke is the alter ego of mercenary and assassin Slade Wilson. Thanks to an experimental serum, he has enhanced speed, stamina, endurance, and reflexes as well as an accelerated healing factor. On top of all that, he boasts extensive martial arts and combat training and a brilliant tactical mind. Justice League will be the character’s first time appearing in a live-action movie, though he has previously appeared in shows like Arrow and Smallville.

It’s a little tougher to puzzle out whom Deathstroke is talking to, since the image doesn’t offer a good look at their face. But their apparent baldness could point to Lex Luthor. We already know Jesse Eisenberg is set to return for Justice League, after all.

Elsewhere on Snyder’s desk, you can also make out an Om symbol and a Charles Bukowski quote. Perhaps that means Justice League will see Cyborg take up yoga and Aquaman start a book club; more likely, they’re just bits of decoration or inspiration for Snyder.

Justice League hits theaters November 17, 2017. The film stars Ben Affleck as Batman, Gal Gadot as Wonder Woman, Henry Cavill as Superman, Jason Momoa as Aquaman, Ezra Miller as the Flash, and Ray Fisher as Cyborg, plus Eisenberg, Amy Adams as Lois Lane, Jeremy Irons as Alfred Pennyworth, J.K. Simmons as Commissioner Gordon, Willem Dafoe as Nuidis Vulko, Amber Heard as Mera, and, we now know, Joe Manganiello as Deathstroke.

Fueled by his restored faith in humanity and inspired by Superman’s selfless act, Bruce Wayne enlists the help of his newfound ally, Diana Prince, to face an even greater enemy. Together, Batman and Wonder Woman work quickly to find and recruit a team of metahumans to stand against this newly awakened threat. But despite the formation of this unprecedented league of heroes — Batman, Wonder Woman, Aquaman, Cyborg and The Flash — it may already be too late to save the planet from an assault of catastrophic proportions.

Is Doctor Strange’s Eye of Agamotto the Fifth Infinity Stone?

/Film — 9/30/2016 4:00:00 PM

A History of the Infinity Stones Thus Far

Before we delve in, let’s back up and take a look at the playing field. Thanos, in an attempt to court Death herself, is trying to destroy Earth, obtaining the six Infinity Stones to put in his Infinity Gauntlet, which will make him unstoppable. The Marvel Studio films have introduced us to four of the Infinity Stones so far. Each stone has its own unique powers and color:

  • Space Stone [Blue]: Nazi/Hydra commander Johann Schmidt found the Tesseract in a box hidden behind a sculptured mural of Yggdrasil, the world tree. He tried to use the object’s power to take over the world but was stopped by Captain America, as seen in Captain America: The First Avenger. The cosmic cube fell into the dark depths of the ocean, from which it was recovered by SHIELD founder Howard Stark. Years later, SHIELD director Nick Fury hired astrophysicist Dr. Erik Selvig to research and study the Tesseract with hopes of unlocking its power. In The Avengers, Loki was able to steal the cube and use its power to open a portal allowing a Chitauri army to attack Earth. After the Avengers helped save the planet, Thor used the power of the Tesseract to transport Loki back to Asgard, where it has remained in Odin’s vault. But with Loki taking the form of Odin, is the stone safe?
  • Reality Stone [Red]: Malekith and his army of Dark Elves attempted to harness the power of the Aether and use its destructive power to destroy all of the Nine Realms. In Thor: The Dark World, the Aether was regained by Asgard, where it was decided that it was too dangerous to keep two Infinity Stones so close together. So they gave the Aether to Taneleer Tivan (aka the Collector) for safekeeping. But is the stone safe? Tivan seems to be after the other gems, and his museum blew up during the events of Guardians of the Galaxy.
  • Power Stone [Purple]: Discovered by Star-Lord hidden away in Morag’s Temple Vault, the Orb is now safely locked away in the Nova Corps’ high-security vault on Xandar.
  • Mind Stone [Yellow]: Thanos gave Loki a Scepter containing the Mind Stone to aid him in the invasion of Earth (as seen in The Avengers). After the Battle of New York, Tony Stark experimented with the Stone as a power source. In Avengers: Age of Ultron, the stone was implanted by Ultron in the Vision’s head to bring him to life.

Those are the four known Infinity Stones already introduced in the Marvel Cinematic Universe. The other two gems seen in the comics include Soul and Time stones. Is it possible that Doctor Strange’s Eye of Agamotto could be the Time Stone?

What If the Eye of Agamotto Is an Infinity Stone?

In the comic books, Doctor Strange’s Eye of Agamotto is an amulet that Strange wears on his chest. Created by writer Stan Lee and artist Steve Ditko, it first appeared in “The Origin of Dr. Strange,” an eight-page story in Strange Tales #115, published in December 1963. In the comics, Strange has described the Eye as “one of the most powerful mystic conduits on this physical plane.” The Eye is a weapon of wisdom that can radiate a mystical light which allows Strange to see through all disguises and illusions, witness past events, and track both ethereal and corporeal beings by their psychic or magical emissions. Strange even used the amulet to combat Thanos in the comics.

The Time Gem is green in the comics, the same color as the glow from inside the amulet in the movie. The gem grants its user “total control over the past, present, and future,” which includes seeing into the past, slowing down the flow of time and traveling through time. Marvel Studios has adapted many of the above MacGuffins from the comics into Infinity Stones for the movie universe, so why not the Eye of Agamotto? It seems to fit better than any of the other examples.

Kevin Feige Provides Some Clues

Marvel Studios head Kevin Feige has said that the movie version of the Eye of Agamotto has the ability to “screw around with time.” Here is the full quote:

In this film, the Eye is a very important relic that can be quite dangerous if used in the wrong hands, because it has the ability to do any number of things, the most dangerous of which is, it can sort of manipulate probabilities. Which is also another way of saying, ‘screw around with time’ — which is part of our story.

More recently, we talked to producer Kevin Feige on the set of Doctor Strange and asked him if an Infinity Stone that we haven’t seen yet might appear in Doctor Strange. Here is what Feige said:

If you’re tracking such things, perhaps. But we don’t get into it in this movie because, again, we’ve got…

At that moment, one of the journalists interjected, asking “Shall we look at an eye of Agamotto?” as the prop was sitting right behind Feige. The Marvel Studios head quickly responded:

It’s closed. You can look at it as long as you want. But again, there’s a lot to take in in this movie, there are a lot of new concepts, there are a lot of new characters, there’s a lot of new mythologies that we didn’t want to clutter up by telling you about other MacGuffins.

These new comments seem to suggest that the Eye of Agamotto is the Time Stone, but that this film will not delve into that fact. So if this is the case and Doctor Strange is in possession of the Eye of Agamotto at the end of the film, it seems like the brilliant yet arrogant sorcerer will have a much larger role to play in Avengers: Infinity War than most fans are anticipating.

The Infinity War Will Change Dimensions

Doctor Strange will also introduce the idea of multiple dimensions to the Marvel Cinematic Universe. And remember that Infinity War co-director Anthony Russo has used that word in his teases for Avengers 3:

Those movies are intended to be the culmination of everything that’s happened in the MCU from the very first Iron Man movie years ago. So they will end up changing the MCU more profoundly than any movie has yet, and there will be some things that come to an end in those movies, dimensions of the MCU will end in those films, dimensions will change forever in those films, dimensions will find new life in those films. They’re a real threshold for what we’ve come to know as the Marvel Cinematic Universe, and I think that dimension can be a little intimidating but also exciting.

New ‘Fast 8’ Image Reveals When The First Trailer Will Arrive

/Film — 9/30/2016 3:30:09 PM

Posted on Friday, September 30th, 2016 by Ethan Anderton

Back in August, production wrapped on Fast 8, the next sequel in the surprisingly long-running Fast & Furious franchise. The official page for the film series announced the end of production and teased the arrival of the first trailer in December. Now Vin Diesel has taken to his Facebook page with a more specific update featuring an image of himself from the sequel and an exact date for the trailer’s release.

Find out the Fast 8 trailer release date after the jump.

Here’s the photo that Vin Diesel posted to Facebook:

Funnily enough, Vin Diesel actually made this announcement a few days ago on #TorrettoTuesday where he said, “The trailer for F8 is going to blow your mind… New York City December 11th. You will see why there was tension, you will understand the intensity.” This appears to be a reference to the feud that erupted between Diesel and co-star Dwayne Johnson, though there was a rumor circulating that it was just a publicity stunt for WrestleMania. But maybe Diesel and Johnson’s characters end up at odds again somehow in the sequel?

We’ll learn what all this tension talk is about when the first trailer arrives on December 11th. Presumably we’ll see the trailer in theaters later that month too, but with which movie? More than likely it won’t be attached to Universal’s release of Sing that month, since Fast 8 isn’t really a kids movie. The most viable options are Rogue One: A Star Wars Story, Assassin’s Creed and Passengers, all blockbusters likely to reach the trailer’s desired audience, and it just might end up attached to all of them.

This time the latest Fast & Furious sequel is directed by F. Gary Gray (Straight Outta Compton, Law Abiding Citizen) with a cast that includes the usual crew of Michelle Rodriguez, Tyrese Gibson, Chris “Ludacris” Bridges and Dwayne Johnson. In addition, Nathalie Emmanuel is returning from Furious 7 along with Jason Statham and Kurt Russell. Newcomers to the cast include Charlize Theron, Scott Eastwood, Kristofer Hivju and Helen Mirren.

The sequel will take Dominic Toretto and his crew to previously uncharted territory, with locations including Cuba, New York City and Iceland. Surely they’ll be creating plenty of destruction and chaos in all of these locations, and we can’t wait to see how ridiculous the action gets this time around.

Fast 8 hits theaters on April 14, 2017.

‘Westworld’ Review: An Exciting, Disturbing, and Thoughtful Reimagining

/Film — 9/30/2016 3:00:14 PM

Michael Crichton‘s Westworld is a movie packed with ideas, but it doesn’t add up to much more than robots going berserk. While it’s an enjoyable film with some terrific sequences, there’s a goldmine of untapped ideas in it. The creators of HBO’s Westworld, Jonathan Nolan and Lisa Joy, take real time to explore some of those ideas — and plenty more they bring to the table — in the thrilling, unsettling, and thoughtful first four episodes of season one.

Below, read our Westworld review.

For the low, low price of $40,000 a day you can become the hero or outlaw you’ve always dreamed of becoming. Westworld, in the eyes of its creator Dr. Robert Ford (Sir Anthony Hopkins), is far more than a high-tech theme park; it’s a place for visitors to understand their potential. These visitors — the main two of whom are played by Jimmi Simpson and Ben Barnes — are called “newcomers” by the hosts (i.e., the artificial intelligence). The hosts provide the newcomers with whatever they desire, no matter how revolting. They’re completely unaware of the roles they’re playing in the narratives written for them and the new arrivals.

The sweet and wholesome lady of Westworld, Dolores Abernathy (Evan Rachel Wood), for example, is programmed to have no memory of her previous loops and all the horrors her synthetic eyes have seen. She begins to ask questions, though. A glitch in Ford’s most recent update has caused some bugs in the hosts, but are Dolores or Maeve Millay (Thandie Newton) experiencing glitches or are they, in some way, evolving? Dr. Robert Ford and Westworld’s head of programming, Bernard Lowe (Jeffrey Wright), are interested in finding out, for reasons that aren’t entirely clear at first.

In the pilot, “The Original,” Joy and director Jonathan Nolan spend more time introducing Westworld through action than talking heads. Rather than primarily sticking to the point-of-view of a pair of humans, like the original film, Nolan and Joy spend most of the pilot with the characters most familiar with the park. With the help of Dolores, Nolan and Joy show us a bit of Westworld through her tragic routine. She’s experienced the worst of Westworld, almost on a daily basis. We see a day in the life of Dolores in the pilot, and it’s genuinely unsettling and made all the more horrific by Dolores’ endearing, wide-eyed innocence. Whether Dolores’ pain is programmed doesn’t really matter. Evan Rachel Wood always makes her host’s heartbreak or fear feel real, and the same goes for Thandie Newton and other actors playing the hosts.

Plenty of hosts, including Dolores, have suffered at the hands of The Man in Black (Ed Harris), a mystery guest who’s been coming to the park for 30 years. What makes these instances of horror, when the newcomers have their sadistic fun, truly upsetting isn’t so much the realistic depiction of violence but how the newcomers often revel in it. Whenever The Man in Black or another human hurts a host in delight, their cold distance or joy, contrasted with the very human horror expressed by the hosts, is something straight out of a nightmare.

All the people see the hosts in a different light. Westworld is a large ensemble story — I haven’t even mentioned James Marsden or Sidse Babett Knudsen‘s pivotal roles yet — with so many different perspectives presented. Anybody watching the show will likely find common ground with someone working at Westworld or visiting Westworld, in regards to how they view the role of the hosts. Some folks look at them as expensive toys, while others recognize the humanity in them. Every perspective or opinion you can imagine someone having about Westworld is included.

Nolan, Joy, and all involved leave no stones unturned when it comes to Westworld. The ins and outs of this seemingly endless environment are laid out with absolute clarity — and often with humor thanks to Shannon Woodward, who plays a charming, foul-mouthed employee of Westworld. Already the scope of the show is big, partially because of how many different roles and moving pieces there are involved in bringing Dr. Robert Ford’s vision to life. We see the exhaustive work it actually takes to keep this place running. Nolan and Joy establish the world and characters without any trouble, while also raising a few alluring questions. There are already a handful of mysteries at play in Westworld. We’ll have to wait and see how they’re paid off, but already from the start, the show has you asking questions about motivations and the bigger picture.

What’s most exciting about Westworld isn’t the questions, the surprising amount of laughs, or all the violence and sex expected from HBO. In one scene, Dr. Robert Ford explains the appeal of Westworld: people come back for the fine details, not the shock and horror. Westworld‘s shock and horror are top-notch, but the subtleties are what make the first handful of episodes addictive. I’m in the midst of watching the episodes again, and on the second watch, new details flourish, especially in the pilot. The J.J. Abrams-executive-produced show has scenes, performances, and lines to be studied under a microscope. Westworld, at least at the start, is as rewarding as it is entertaining.

‘What We Do In The Shadows’ Gets A TV Series Spin-Off

/Film — 9/30/2016 2:30:25 PM

Posted on Friday, September 30th, 2016 by Ethan Anderton

Over a year ago, there were rumblings of not only a What We Do In The Shadows sequel, but also a TV series spin-off. Since then, we’ve learned that the feature film sequel will be called We’re Wolves, following Rhys Darby, Stuart Rutherford and the rest of the werewolves we see in the first movie. Just recently we heard the sequel is probably at least a couple years away, but we hadn’t gotten an update on the spin-off series. At the time, co-director, co-writer and co-star Jemaine Clement thought getting the series off the ground would be a long shot, but today we have good news on the What We Do In The Shadows TV series front.

Last year, Jemaine Clement revealed that he and fellow co-director, co-writer, co-star Taika Waititi (who is busy at work on Thor: Ragnarok right now) had pitched a show in New Zealand following the two cops Karen and Mike (played by Karen O’Leary and Mike Minogue) seen in the movie. However, at the time Clement wasn’t optimistic about the show getting made, explaining, “We’ve already been told before we even handed it in that there’s no money for comedy.”

Well, it sounds like some money was scrounged up, because Radio New Zealand reports New Zealand On Air (NZOA) has decided to fund the series with $1.4 million, and it has a six-episode order to air on TVNZ 2 down in New Zealand.

The show is called Paranormal Event Response Unit, and it will follow Mike and Karen as they protect people “from supernatural phenomena in their own police reality series.” The two cops treat their encounters with vampires and werewolves rather nonchalantly in What We Do In The Shadows, and seeing the other duties they have as part of a unit meant to specifically deal with these matters has the potential to be extremely funny.

NZOA’s chief executive Jane Wrightson said, “We are delighted that Jemaine Clement and Taika Waititi are bringing their talents to smaller screens in Paranormal Event Response Unit.” Of course, this isn’t the first time Clement and Waititi have worked in television; the two collaborated several times on Flight of the Conchords.

The question is whether we’ll actually get to see this series in the United States or not. Since this show is specifically funded by NZOA and is slated to air in the country, there’s no guarantee it will be imported to the United States. But considering the popularity of What We Do In The Shadows, it’s bound to be picked up for distribution over here.

New ‘Rules Don’t Apply’ Trailer: Warren Beatty’s Long-Awaited Return Is Almost Here

/Film — 9/30/2016 2:00:37 PM

Rules Don’t Apply brings Warren Beatty back to us. Beatty, who hasn’t directed since 1998’s hilarious Bulworth, has returned with a movie that defines “longtime passion project.” Ever since seeing Howard Hughes in person in 1973, Beatty has wanted to make a film about the man. He finally did it with Rules Don’t Apply, although it’s less a biopic and more a romance between Lily Collins and Alden Ehrenreich‘s characters.

Below, watch the new Rules Don’t Apply trailer.

The 1958-set film is about aspiring actress Marla Mabrey (Collins) and her relationship with her engaged driver Frank Forbes (Ehrenreich). They’re both employed by Howard Hughes (Beatty), and the troubled billionaire forbids relationships between drivers and actresses. The two deeply religious youngsters face all sorts of new challenges as they work for Mr. Hughes. Rules Don’t Apply co-stars Alec Baldwin, Annette Bening, Matthew Broderick, Dabney Coleman, Candice Bergen, Haley Bennett, Steve Coogan, Taissa Farmiga, Ed Harris, Oliver Platt, and Martin Sheen.

Here’s the new trailer for Rules Don’t Apply, which remains one of our most anticipated movies of the year:

This trailer is slightly more focused than the previous one we saw, which was overlong and a little messy. The comedy is Marla Mabrey and Frank Forbes’ story, which this trailer places more emphasis on. Obviously, Beatty has a significant role to play as Hughes, but he’s not the focus of it. Beatty, who looks like good fun as Hughes, started shooting the film in early 2014. As the director tends to do, he took his time in the editing room. Rules Don’t Apply will finally debut in November at the AFI Film Festival. Until then, fingers crossed we’ll hear more from the typically press-shy Beatty, who participated in a Reddit Q&A this morning to promote his upcoming picture. Here are two expectedly short but amusing responses from Beatty:

Beatty’s fans might also be pleased to learn he intends to one day write an autobiography.

Here’s the official synopsis for Rules Don’t Apply:

It’s Hollywood, 1958. Small town beauty queen and devout Baptist virgin Marla Mabrey (Lily Collins), under contract to the infamous Howard Hughes (Warren Beatty), arrives in Los Angeles. At the airport, she meets her driver Frank Forbes (Alden Ehrenreich), who is engaged to be married to his 7th grade sweetheart and is a deeply religious Methodist. Their instant attraction not only puts their religious convictions to the test, but also defies Hughes’ #1 rule: no employee is allowed to have any relationship whatsoever with a contract actress. Hughes’ behavior intersects with Marla and Frank in very separate and unexpected ways, and as they are drawn deeper into his bizarre world, their values are challenged and their lives are changed.

NBC Working on ‘The Italian Job’ TV Series

/Film — 9/30/2016 1:30:13 PM

F. Gary Gray’s remake of The Italian Job is light fun. For a few years after the film was released, we’d hear bits and pieces of news about The Brazilian Job. There were scripts written for the sequel, but Paramount never moved forward with it. Since that project becomes more unrealistic as the years go by, that leaves room for The Italian Job television series, inspired by both Peter Collinson’s original 1969 film and Gray’s remake.

Below, learn more about the potential Italian Job TV series.

According to Deadline, NBC has given The Italian Job television show a script commitment plus penalty. Writer and executive producer Rob Weiss (Ballers), Benjamin Brand (IFC’s Bollywood Hero) and a producer of the 2003 version, Donald De Line, are involved in the project, which hails from Paramount TV. The setup of the show quite clearly differs from the movies.

Charlie Croker (previously played by Michael Caine in 1969 and Mark Wahlberg in 2003) is still The Italian Job’s protagonist, but this time, he isn’t trying to steal gold. In the show, Charlie, “a handsome and charming ex-con,” tries to go straight, but Charlie and the rest of his team are pulled back into a life of crime when they have the opportunity to free their “patriarch” from jail. Ideally, the show will have a sense of humor, style of action, and romance that’s in tune with the movies.

For many years, The Brazilian Job was one of those slightly odd sequels, like Five Brothers, that never sounded very realistic. The cast and Gray would talk about it now and then, but there never seemed to be any serious moves made to get the sequel to theaters. The key people at Paramount who remade The Italian Job are long gone, which, in Gray’s words, was part of the reason why the sequel — which David Twohy (Pitch Black) once wrote a script for — faced trouble getting made.

The Italian Job is joining Shooter, Snatch, School of Rock, Lethal Weapon, and, the bright and shining example, Fargo, as one of the many films turning into television shows. So far, these film-to-TV adaptations have been hit and miss, but it’s easy to imagine The Italian Job as a TV show, with perhaps Charlie Croker and his crew plotting a new scheme each season. We’ll have to wait and see if we even get a first season of The Italian Job, though.

‘Inferno’ Clips: Ben Foster Has the Cure for Humanity

/Film — 9/30/2016 1:00:57 PM

After an absence of over seven years from theaters, everybody’s favorite symbologist, Robert Langdon, returns in next month’s Inferno. Back in his franchise role is Tom Hanks, playing author Dan Brown‘s character for the third time. In the Ron Howard-directed sequel, it’s once again up to Langdon to save the day.

Below, check out a few Inferno clips.

Dante Alighieri’s The Inferno holds the clues to the latest mystery Langdon attempts to solve. After Langdon wakes up in a hospital without any memory of the past 36 hours, he discovers he’s been framed. Dr. Sienna Brooks (Felicity Jones) will help him regain his memories and prove his innocence. They’ll also have to do their best to prevent a virus from wiping out half of humanity. Inferno, which was adapted by screenwriter David Koepp (Premium Rush), co-stars Ben Foster (Hell or High Water) as the filthy rich, brainiac villain Bertrand Zobrist; Irrfan Khan (Life of Pi) as Harry Sims; and Omar Sy (The Intouchables) as Christoph Brüder, the head of the SRS team.

Here’s a compilation of Inferno clips (clip four is when the real spoilers start):

Perhaps my memory of Howard’s The Da Vinci Code and Angels & Demons is failing me, but there’s a sense of humor in these clips that’s slightly out of the ordinary for Robert Langdon’s world. Irrfan Khan and Felicity Jones’ jokes (“rhetorical”) help Inferno come across as less dry than its two predecessors. It certainly wouldn’t hurt this franchise to have a little more fun, like showing an infected and decaying Tom Hanks (which is the highlight of these clips). Based on the clips, there’s still plenty of exposition and Langdon scrambling to interpret pieces of art, but from what we’ve seen so far, Inferno looks like a step up for Howard’s franchise.

Here’s the official synopsis:

Academy Award® winner Ron Howard returns to direct the latest bestseller in Dan Brown’s (Da Vinci Code) billion-dollar Robert Langdon series, Inferno, which finds the famous symbologist (again played by Tom Hanks) on a trail of clues tied to the great Dante himself. When Langdon wakes up in an Italian hospital with amnesia, he teams up with Sienna Brooks (Felicity Jones), a doctor he hopes will help him recover his memories. Together, they race across Europe and against the clock to stop a madman from unleashing a global virus that would wipe out half of the world’s population.

Inferno opens in theaters on October 28, 2016.


Featured Posts

‘Down Under’ Is the Skin-Crawling, Bleakly Hilarious Race Riot Comedy That 2016 Deserves [Fantastic Fest Review]

‘A Dark Song’ Makes Occult Magic Scary (and Human) Again [Fantastic Fest Review]

Watch: Animated ‘Indiana Jones’ Movie ‘The Adventures of Indiana Jones’ From Patrick Schoenmaker

‘Shin Godzilla’ Is a Fascinating and Frustrating Return for the King of the Monsters [Fantastic Fest Review]

Is IMDb Thinking of Switching to a 5-Star Rating System, or Prioritizing Critic Ratings?

Why Doctor Strange’s Costume Was A Big Challenge To Adapt For The Big Screen

Get A Sneak Peek at ‘Harry Potter and the Chamber of Secrets’ Illustrated Edition

/Film — 9/30/2016 12:30:28 PM

Posted on Friday, September 30th, 2016 by Ethan Anderton

In case you hadn’t heard, last year Bloomsbury and Scholastic started releasing illustrated editions of all of the Harry Potter books. We’re not just talking about a few sketches tossed throughout the book. These illustrated editions of the books contain some beautiful, wonderfully detailed, full color images by artist Jim Kay that truly enhance the magical adventures within.

Following last year’s release of the Harry Potter and the Sorcerer’s Stone Illustrated Edition, the illustrated version of Harry Potter and the Chamber of Secrets hits shelves on October 4th, and a new video gives us a sneak peek at some of the images within.

Here’s an animated glimpse of Harry Potter and the Chamber of Secrets Illustrated Edition:

Prepare to be spellbound by Jim Kay’s dazzling full-color illustrations in this stunning new edition of J.K. Rowling’s Harry Potter and the Chamber of Secrets. Breathtaking scenes, dark themes and unforgettable characters – including Dobby and Gilderoy Lockhart – await inside this fully illustrated edition. With paint, pencil and pixels, award-winning illustrator Jim Kay conjures the wizarding world as we have never seen it before. Fizzing with magic and brimming with humor, this inspired reimagining will captivate fans and new readers alike, as Harry and his friends, now in their second year at Hogwarts School of Witchcraft and Wizardry, seek out a legendary chamber and the deadly secret that lies at its heart.

These are beefy books, but since they’re wider, more text fits on certain pages, so the page count is actually lower than that of the regular text-only versions. I’m wondering just how big they’ll be once we get to books like Goblet of Fire and Order of the Phoenix, but we’ve got a couple of years before we see those released. You can still pre-order Harry Potter and the Chamber of Secrets Illustrated Edition right here.



Cool Stuff: Adam Savage Builds A Mobile Movie Theater In A Pick-Up Truck

/Film — 9/30/2016 12:00:48 PM

Mythbusters co-host Adam Savage has built plenty of awesome props, gadgets, mechanisms and more over the long history of the Discovery Channel program. But his latest endeavor for the folks at Tested might be the coolest creation yet for all you cinephiles out there.

As part of a cross-promotion with Honda, Adam Savage turned a Ridgeline pick-up truck into a mobile movie theater for those times when you want to go camping but would like everyone to be able to watch a movie without crowding around a laptop.

Watch how Adam Savage builds a mobile movie theater thanks to Tested:

It may not be perfect, but it’s a pretty cool little contraption to put in the back of a pick-up truck. The screen likely wouldn’t be all that great on windy nights, but weather has an impact on all sorts of camping activities, so that’s just part of the experience.

Of course, this isn’t exactly a practical build for just anyone to attempt. Not everyone has a projector lying around, and those inflatable movie screens usually cost a couple hundred dollars. So if this is something you’re looking to put in the back of your pick-up truck, hopefully you have some extra cash lying around.

Even though this is a cool creation by Adam Savage, your average cinephile and camping fan may be better served by stringing a sheet on a clothesline and anchoring it to the ground. You still need your own projector, obviously, but that’s the price you pay for creating a movie theater experience in the outdoors.



‘Down Under’ Is the Skin-Crawling, Bleakly Hilarious Race Riot Comedy That 2016 Deserves [Fantastic Fest Review]

/Film — 9/30/2016 1:00:30 AM

Set in the aftermath of the Cronulla race riots that rocked Sydney, Australia in 2005, Down Under follows two groups of fictional doofuses on a collision course. On one side, you have a gang of young white friends who are sick and tired of Muslim immigrants taking over their beaches. On the other, you have a group of Lebanese immigrants who are sick and tired of being pushed around. Both sides are armed and both sides are hopelessly dim and fueled entirely by macho rage.

Making a comedy about racial violence feels like a recipe for disaster, but writer-director Abe Forsythe and his cast walk this tightrope and make it look effortless. The easiest comparison is Chris Morris’ Four Lions, which mined comedy from a team of bumbling terrorists, and it is also the most accurate. Like that film, Down Under is very funny without losing sight of the bigger picture. It never trivializes its subject matter, knowing when to press pause on the laughs and let the darkness and pain and tragedy of this material wash over you. And Down Under also knows when to meld horror and comedy into single moments, delivering gags so silly and so sad that they dare you to laugh.

If Down Under has another cinematic cousin, it would be the work of Jody Hill, who has mastered the art of mining comedy from toxic masculinity and the victims it leaves in its wake. Forsythe shoots his characters with a deadpan eye, holding back and letting their idiotic behavior speak for itself. No one winks at the camera as they’re loading a trunk full of a dozen canisters of gasoline just in case, or showing off their increasingly absurd Ned Kelly tattoos, or doing donuts in their crappy cars, or pausing their hunt for immigrants so they can pick up some shawarma. These characters, so ably performed by some very funny people, are portrayed as real people, making their racism and the increasingly poor decisions they make to serve their worst impulses all the more horrifying…and all the more hilarious.

And there is also an incredible recurring joke about a bad mixtape. The margins surrounding the film are chock-full of tiny pleasures.

Down Under doesn’t handle its subject matter with care. It doesn’t wear gloves and goggles. It takes the subject of race relations, dips it in gasoline, and molds it into shape near an open flame. It wants you to get angry. It wants to enrage you. And it wants you to laugh. Because recognizing the absurdity of this situation, realizing that there is nothing more absurd than angry men going to war over the color of their skin, feels vital. Comedy is our great weapon: aim it at hatred and watch every excuse dissolve before your very eyes. In a year dominated by ugly politics, Down Under feels devastating and relevant (there’s even an extended exchange about building a wall). It wants us to laugh, to recognize the absurdity of this all, so we do not become the joke on screen.

/Film Rating: 8.0 out of 10



Desktop Wallpaper Calendars: October 2016

Smashing Magazine — 9/30/2016 9:58:23 AM


A new month means new wallpapers! This journey has been going on for eight years now, and each month artists and designers from across the globe challenge their artistic skills to provide some fresh inspiration for your desktop. And, well, it wasn’t any different this time around.

This post features their designs for October 2016. The collection is a mix of ideas and styles, of wallpapers that are a little more distinctive than the usual crowd. All wallpapers come in versions with and without a calendar and can be downloaded for free — just choose your favorite. A big thank-you to everyone who shared their ideas with us! Happy October!

Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper.
  • You can feature your work in our magazine by taking part in our Desktop Wallpaper Calendars series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?


“I’m just a sucker for Halloween, candy, tiny witches and giant kittens. And you can’t tell me that October is not Halloween, because I’ve waited the whole year for this. I thought that I would make illustration central to this calendar so I started with the idea of a tiny witch who’s stolen a ton of candy along with her cat — who’s gotten herself in trouble and can’t unstick the bubble gum from her giant teeth. A typical Halloween scene, right?” — Designed by Kalashniköv from Spain.


Howl For October

“No Halloween scary tales, witches and goblins fright us! We have our friend Wolfey whose howling scares away the dark, and we can wait for All Hallows Eve in the peace and comfort of our home.” — Designed by PopArt Studio from Serbia.


Festival Of The Dead

“Shadows of a thousand years rise again unseen, Voices whisper in the trees, ‘Tonight is Halloween!’ — Dexter Kozen.” — Designed by Suman Sil from India.


Spooky Town

Designed by Xenia Latii from Germany.


Cute Halloween

“Halloween is one of my favorite holidays, so I decided to create four of my favorite monsters.” — Designed by Maria Keller from Mexico.


Scary Monsters

Designed by Servanne from France.


Trick Or Treat!

“It’s Halloween on the 31st (of course), and I think this poor kid might have picked the wrong house to go Trick or Treat-ing!” — Designed by James Mitchell from the United Kingdom.


Coffee Time

“Chilly autumn days are here. It’s time to make a cup of good hot coffee.” — Designed by Milada Černá Ovec from the Czech Republic.


Leaf City

Designed by Katarzyna Szporna from Poland.


Celebrating Durga Puja

“Durga Puja – the ceremonial worship of the mother goddess, is one of the most important festivals of India. Apart from being a religious festival for the Hindus, it is also an occasion for reunion and rejuvenation and a celebration of traditional culture and customs. While the rituals entail ten days of fast, feast and worship, the last four days – Saptami, Ashtami, Navami and Dashami – are celebrated with much gaiety and grandeur in India and abroad, especially in Bengal.” — Designed by Dipanjan Karmakar from India.


Time For Explorers!

“October is the perfect month to explore new things. We have just passed summer, and autumn is coming with a lot of surprises! I love October!” — Designed by Veronica Valenzuela from Spain.


Fall Is Here

“October is the peak of autumn here in Pennsylvania, and the colorful leaves and landscape are more beautiful than any other time of the year.” — Designed by Marc Andre from the United States.


Nurture Nature

“In the midst of our rat races to the top, we forgot how to listen to and understand the rhythms of nature. Relax, take a deep breath and listen to it! And preserve our natural heritage – after all, another good planet is hard to find.” — Designed by Faheem Nistar from India.


Let Us Live And Let Live

“We are all created equal. Nature showers one and all with unbiased love and compassion. Each and every other creature has the right to exist just as we human beings. This October, we have come up with a design that reflects the importance of protecting nature and the creatures who live in there, contributing our part to maintain a balance in the natural ecosystem which is the home to many.” — Designed by Acodez IT Solutions from India.


Stars And Sun

“Looking closely at a sunflower, I noticed that there are little stars inside the heart of it.” — Designed by Philippe Brouard from France.


Leaves Dance

“The world is beautiful even in the most inclement weather. Every moment is wonderful…” — Designed by Anastasiya from Russia.


Untouched Beauty

“Was touched by the amazing landscape during my short vacation between Lake Tahoe and Yosemite Park in California. While driving out there in the open, I was making frequent stops to capture the overwhelming beautiful landscape that was surrounding me. Photo is taken on my way to Yosemite Park from Lake Tahoe, this was a small abandoned lake that seemed literally untouched by any human hand.” — Designed by Ognen Trpeski from the United States.


Join In Next Month!


Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us, but rather designed from scratch by the artists themselves.

Automating Art Direction With The Responsive Image Breakpoints Generator

Smashing Magazine — 9/29/2016 10:17:40 AM


Four years ago, Jason Grigsby asked a surprisingly difficult question: How do you pick responsive image breakpoints? A year later, he had an answer: Ideally, we’d set responsive image performance budgets to achieve “sensible jumps in file size.” Cloudinary built a tool that implements this idea, and the response from the community was universal: “Great! Now, what else can it do?” Today, we have an answer: art direction!

Since its release earlier this year, the Responsive Image Breakpoints Generator has been turning high-resolution originals into responsive <img>s with sensible srcsets at the push of a button. Today, we’re launching version 2, which allows you to pair layout breakpoints with aspect ratios, and generate art-directed <picture> markup, with smart-cropped image resources to match. Check it out, and read on.

Responsive Image Breakpoints: Asked And Answered


Why did we build this tool in the first place?

Responsive images send different people different resources, each tailored to their particular context; a responsive image is an image that adapts. That adaptation can happen along a number of different axes. Most of the time, most developers only need adaptive resolution — we want to send high-resolution images to large viewports and/or high-density displays, and lower-resolution images to everybody else. Jason’s question about responsive image breakpoints concerns this sort of adaptation.

When we’re crafting images that adapt to various resolutions, we need to generate a range of different-sized resources. We need to pick a maximum resolution, a minimum resolution and (here’s the tricky bit) some sizes in between. The maximum and minimum can be figured out based on the page’s layout and some reasonable assumptions about devices. But when developers began implementing responsive images, it wasn’t at all clear how to size the in-betweens. Some people picked a fixed-step size between image widths:

Rectangles showing the relative dimensions of a group of srcset resources that use a fixed-step-size strategy.

Others picked a fixed number of steps and used it for every range:

Rectangles showing the relative dimensions of three groups of srcset resources that use a fixed-number-of-steps strategy.

Some people picked common display widths:

Rectangles showing the relative dimensions of a group of srcset resources scaled to common display widths.

At the time, because I was lazy and didn’t like managing many resources, I favored doubling:

Rectangles showing the relative dimensions of a group of srcset resources scaled using a doubling strategy.

All of these strategies are essentially arbitrary. Jason thought there had to be a better way. And eventually he realized that we shouldn’t be thinking about these steps in terms of pixels at all. We should be aiming for “sensible jumps in file size”; these steps should be defined in terms of bytes.

For example, let’s say we have the following two JPEGs:

300 pixels wide (37 KB)

1200 pixels wide (333 KB)

The biggest reason we don’t want to send the 1200-pixel-wide resource to someone who only needs the small one isn’t the extra pixels; it’s the extra 296 KB of useless data. But different images compress differently; while a complex photograph like this might increase precipitously in byte size with every increase in pixel size, a simple logo might not add much weight at all. For instance, this 1000-pixel-wide PNG is only 8 KB larger than the 200-pixel-wide version.
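This "sensible jumps in file size" strategy boils down to a simple search: walk the candidate widths and emit a breakpoint whenever the encoded size has grown by a chosen byte budget. Here is a rough Python sketch of the idea (not Cloudinary's actual implementation; the `encoded_size` callback stands in for re-encoding the image at each candidate width, and the quadratic toy function is only a stand-in for a complex photo):

```python
def pick_breakpoints(encoded_size, min_w, max_w, step_bytes, max_images=20):
    """Pick responsive image breakpoints by byte budget, not pixels.

    encoded_size(w) -> approximate file size in bytes at width w
    (a stand-in here for actually re-encoding the image at each width).
    """
    breakpoints = [min_w]
    last_size = encoded_size(min_w)
    for w in range(min_w + 1, max_w + 1):
        size = encoded_size(w)
        if size - last_size >= step_bytes:   # a "sensible jump" happened
            breakpoints.append(w)
            last_size = size
            if len(breakpoints) == max_images:
                break
    if breakpoints[-1] != max_w:             # always include the largest size
        breakpoints.append(max_w)
    return breakpoints

# Toy model of a complex photo: byte size grows roughly quadratically with width.
photo = lambda w: w * w // 100
print(pick_breakpoints(photo, 300, 1200, step_bytes=3000))
```

A flatter size curve (like the logo example above) would yield far fewer breakpoints from the same budget, which is exactly the point of working in bytes rather than pixels.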

Sadly, there haven’t been any readily usable tools to generate images at target byte sizes. And, ideally, you’d want something that could generate whole ranges of responsive image resources for you — not just one at a time. Cloudinary has built that tool!


A screenshot of the Responsive Image Breakpoints Generator (View large version)

And it has released it as a free, open-source web app.

But the people wanted more.

The Next Frontier? Automatic Art Direction!


So, we had built a solution to the breakpoints problem and, in the process, built a tool that made generating resolution-adaptable images easy. Upload a high-resolution original, and get back a fully responsive <img> with sensible breakpoints and the resources to back it up.

That basic workflow — upload an image, get back a responsive image — is appealing. We’d been focusing on the breakpoints problem, but when we released our solution, people were quick to ask, “What else can it do?”

Remember when I said that resolution-based adaptation is what most developers need, most of the time? Sometimes, it’s not enough. Sometimes, we want to adapt our images along an orthogonal axis: art direction.

Any time we alter our images visually to fit a different context, we’re “art directing.” A resolution-adaptable image will look identical everywhere — it only resizes. An art-directed image changes in visually noticeable ways. Most of the time, that means cropping, either to fit a new layout or to keep the most important bits of the image visible when it’s viewed at small physical sizes.


On small screens, we want to zoom in on the image’s subject.

People asked us for automatic art direction.

… Which is a hard problem! It requires knowing what the “most important” parts of an image are. Bits and bytes are easy enough to program around; computer vision and fuzzy notions of “importance” are something else entirely.

For instance, given this image…


(Image source: Cloudinary) (View large version)

… a dumb algorithm might simply crop in on the center:


(Image source: Cloudinary)

What you need is an algorithm that can somehow “see” the cat and intelligently crop in on it.

It took us a few months, but we built this, too, and packaged it as a feature available to all Cloudinary users.

Here’s how it works: When you specify that you want to crop your image with “automatic gravity” (g_auto), the image is run through a series of tests, including edge-detection, face-detection and visual uniqueness. These different criteria are then all used to generate a heat map of the “most important” parts of the image.


The master rolled-up heat map (View large version)

A frame with the new proportions is then rolled over the image, possible crops are scored, and a winner is chosen. Here’s a visualization of the rolling frame algorithm (using a different source image):
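The rolling-frame step amounts to an exhaustive window search over the heat map: slide a frame with the target proportions across every position, score it by the importance it covers, and keep the winner. A toy Python sketch of that search (Cloudinary's real scorer is far more sophisticated; this only illustrates the mechanism):

```python
def best_crop(heat, crop_h, crop_w):
    """Slide a crop_h x crop_w frame over a 2D importance heat map and
    return the (top, left) of the highest-scoring position."""
    rows, cols = len(heat), len(heat[0])
    best_score, best_pos = float("-inf"), (0, 0)
    for top in range(rows - crop_h + 1):
        for left in range(cols - crop_w + 1):
            # Score = total importance covered by this candidate crop.
            score = sum(heat[top + i][left + j]
                        for i in range(crop_h) for j in range(crop_w))
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

# Toy heat map: the "importance" (say, a detected cat face) clusters
# around row 1, column 2.
heat = [[0, 0, 0, 0],
        [0, 0, 5, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(best_crop(heat, 2, 2))  # → (1, 1): the window covering the hot spot
```

A dumb center crop would ignore the heat map entirely; scoring candidate windows is what lets the crop follow the subject.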

It was immediately obvious that we could and should use g_auto’s smarts to add automatic art direction to the Generator. After a few upgrades to the markup logic and some (surprisingly tricky) UX decisions, we did it: Version 2 of the tool — now with art direction — is live.

How do you use the Responsive Image Breakpoints Generator?

The workflow has been largely carried over from the first version: Upload an image (or pick one of the presets), and set your maximum and minimum resolutions, a step size (in bytes!), and a maximum number of resources (alternatively, you can simply use our pretty-good-most-of-the-time defaults). Click “Generate,” et voilà! You’ll get a visual representation of the resulting image’s responsive breakpoints, some sample markup, and a big honkin’ “download images” button.

The new version has a new set of inputs, though, which enable art direction. They’re turned off by default. Let’s turn a couple of them on and regenerate, shall we?

The first output section is unchanged: It contains our “desktop” (i.e. full) image, responsively breakpointed to perfection. But below it is a new section, which shows off our new, smartly cropped image:

And below that, we now have all of the markup we need for an art-directed <picture> element that switches between the two crops at a layout breakpoint.

Finally, there’s a live <picture> example that shows you what all of that markup actually does.

Let’s circle back and look at the art direction inputs in a little more detail.

Each big box maps to a device type, and each device type has been assigned a layout breakpoint. The text under the device type’s name shows the specific media query that, when true, will trigger this crop.

Below that, we can specify the aspect ratio that we want to crop to on this device type.

Below that, we specify how wide the image will appear relative to the width of the viewport on this type of device. Will it take up the whole viewport (100%) or less than that? The tool uses this percentage to generate simple sizes markup — which specifies how large the image is in the layout. If you’re using this code in production, you’ll probably want to go back into the example markup and tailor these sizes values to match your particular layout more precisely. But depending on your layout, inputting rough estimates here might be good enough.

And there you have it: simple, push-button art direction.
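Behind the button, the generated markup is just string assembly: one media-query-gated `<source>` for the cropped variant, plus a full-frame fallback `<img>`, each with its own srcset and sizes. A simplified sketch of that assembly (the `?w=`/`&ar=` query scheme, the 768px breakpoint, and the widths below are illustrative assumptions, not the Generator's actual URLs):

```python
def picture_markup(base_url, mobile_widths, desktop_widths,
                   mobile_ar="1:1", breakpoint_px=768, viewport_pct=100):
    """Assemble art-directed <picture> markup: a cropped <source> for
    narrow viewports plus a full-frame fallback <img>.
    The ?w=/&ar= query scheme is illustrative, not a real CDN's URLs."""
    def srcset(widths, ar=None):
        crop = f"&ar={ar}" if ar else ""
        return ", ".join(f"{base_url}?w={w}{crop} {w}w" for w in widths)

    sizes = f"{viewport_pct}vw"  # how wide the image renders in the layout
    return (
        "<picture>\n"
        f'  <source media="(max-width: {breakpoint_px}px)" '
        f'srcset="{srcset(mobile_widths, mobile_ar)}" sizes="{sizes}">\n'
        f'  <img src="{base_url}?w={desktop_widths[-1]}" '
        f'srcset="{srcset(desktop_widths)}" sizes="{sizes}">\n'
        "</picture>"
    )

print(picture_markup("cat.jpg", [200, 400], [300, 625, 1200]))
```

The browser evaluates the `<source>` media query first, so narrow viewports get the smart-cropped square variants while everything else falls through to the full-frame `<img>`.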

What if you want to work with more than one image at a time? If you’re building entire websites with hundreds or thousands (or hundreds of thousands!) of images — especially if you’re working with user-generated content — you’ll want more than push-button ease; you’ll need full automation. For that, there’s Cloudinary’s API, which you can use to directly call the smart-cropping and responsive image breakpoints functions that power the Generator. With the API, you can create customized, optimized and fully automated responsive image workflows for projects of any shape or size.

For instance, a few lines of Ruby code will upload an image to Cloudinary, smart-crop it to a 16:9 aspect ratio, and generate a set of downscaled resources with sensible responsive image breakpoints.
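The shape of that call can be sketched via the `responsive_breakpoints` upload option, shown here as the options payload Cloudinary's Python SDK would pass to `cloudinary.uploader.upload` (the parameter names follow Cloudinary's documented upload option; the specific widths, byte step, and file name are assumptions for illustration):

```python
# Sketch of the upload options only: the actual upload needs credentials and
# a network, e.g. cloudinary.uploader.upload("photo.jpg", **upload_options).
# Values below are illustrative assumptions, not the article's exact settings.
upload_options = {
    "responsive_breakpoints": {
        "create_derived": True,       # actually generate the downscaled files
        "bytes_step": 20000,          # "sensible jump" budget: 20 KB
        "min_width": 200,
        "max_width": 1000,
        "max_images": 20,
        "transformation": {           # smart-crop to 16:9 before scaling
            "crop": "fill",
            "aspect_ratio": "16:9",
            "gravity": "auto",        # g_auto: content-aware cropping
        },
    },
}
print(sorted(upload_options["responsive_breakpoints"]))
```

Everything the Generator does interactively is driven by this one nested option: the transformation defines the art-directed crop, and the breakpoint fields define the byte-budget search.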

If you work only on the front end, all of this functionality is available via URL parameters, too! A URL powered by Client Hints and smart-cropping does the same thing on download that the Ruby code above does on upload, and it delivers different, dynamically optimized resources to different devices, responsively.

A tremendous amount of smarts is packed into that little URL!
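Those smarts compose as comma-separated transformation segments in the delivery URL path. A sketch of assembling such a URL (the `demo` cloud name, `photo.jpg` public ID, and the exact parameter set are assumptions; `w_auto`/`dpr_auto` are the Client Hints-driven parameters and `g_auto` the smart crop):

```python
def delivery_url(cloud, public_id, **params):
    """Build a Cloudinary-style delivery URL: transformation parameters
    join into a single comma-separated path segment."""
    order = ["w", "dpr", "c", "ar", "g"]  # fixed order for a deterministic URL
    segs = [f"{k}_{params[k]}" for k in order if k in params]
    return (f"https://res.cloudinary.com/{cloud}/image/upload/"
            f"{','.join(segs)}/{public_id}")

url = delivery_url("demo", "photo.jpg",
                   w="auto", dpr="auto", c="fill", ar="16:9", g="auto")
print(url)
# https://res.cloudinary.com/demo/image/upload/w_auto,dpr_auto,c_fill,ar_16:9,g_auto/photo.jpg
```

One short path segment thus encodes width negotiation, device pixel ratio, the 16:9 fill crop, and content-aware gravity all at once.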

But back to the Generator. Now, it can do more than “just” pick your image breakpoints — it can pick your art-directed crops, too. And it will generate all of the tedious resources and markup for you; upload one high-resolution original, and get back all of the markup and downscaled resources you need to include a scalable and art-directed image on your web page.

Have I mentioned that the Responsive Image Breakpoints Generator is free? And open-source? Give it a whirl, and please send us feedback. Who knows, maybe we’ll be back again soon with version 3!

Building Hybrid Apps With ChakraCore

Smashing Magazine — 9/28/2016 8:02:53 AM

There are many reasons to embed JavaScript capabilities into an app. One example may be to take a dependency on a JavaScript library that has not yet been ported to the language you’re developing in. Another reason could be your desire to allow users to eval small routines or functions in JavaScript, e.g., in data processing applications.


ChakraCore on Linux and OS X (Image credit) (Large preview)

ChakraCore provides the high-performance JavaScript engine that powers the Microsoft Edge browser and Windows applications written with WinJS. The key reason for our investigation of ChakraCore was to support React Native — a framework for declaring applications using JavaScript and the React programming model — on the Universal Windows Platform.

Hello, ChakraCore


Embedding ChakraCore in a C# application is quite easy. To start, grab a copy of the JavaScript runtime wrapper from GitHub8. Include this code directly in your project or build your own library dependency out of it, whichever better suits your needs. There is also a very simple console application9 that shows how to evaluate JavaScript source code and convert values from the JavaScript runtime into C# strings.

Building Apps With ChakraCore


There are a few extra steps involved when building C# applications with ChakraCore embedded. As of the time of writing, there are no public binaries for ChakraCore. But don’t fret. Building ChakraCore is as easy as this:

  • Open the solution in Visual Studio (VS 2015 and the Windows 10 SDK are required if you wish to build for ARM).
  • Build the solution from Visual Studio.
  • The build output will be placed in Build\VcBuild\bin, relative to your Git root folder.

If you wish to build from the command line, open up a Developer Command Prompt for Visual Studio, navigate to the Git root folder for ChakraCore, and run:
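The command itself was stripped from this copy; an invocation along these lines (the platform and configuration values are examples) matches the ChakraCore build instructions:

```
msbuild /m /p:Platform=x64 /p:Configuration=Release Build\Chakra.Core.sln
```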

You’ll want to replace the Configuration and Platform parameters with the proper settings for your build.

Now that you have a version of ChakraCore.dll, you have some options for how to ship it with your application. The simplest way is to just copy and paste the binary into your build output folder. For convenience, I drafted a simple MSBuild target to include in your .csproj to automatically copy these binaries for you each time you build:
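The target itself was lost in extraction. A minimal sketch of such a target (the item name is arbitrary, and the `ChakraCore.*` pattern and `$(ReferencesPath)` property match the surrounding description) could look like this:

```xml
<Target Name="AfterBuild">
  <ItemGroup>
    <!-- Pick up ChakraCore.dll (and any companion files) from the references path. -->
    <ChakraBinaries Include="$(ReferencesPath)\ChakraCore.*" />
  </ItemGroup>
  <Copy SourceFiles="@(ChakraBinaries)" DestinationFolder="$(OutDir)" />
</Target>
```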

For those who don’t speak MSBuild, one of the MSBuild conventions is to run targets in your project named AfterBuild after the build is complete. The above bit of XML roughly translates to “after the build is complete, search the references path for files that match the pattern ChakraCore.* and copy those files to the output directory.” You’ll need to set the $(ReferencesPath) property in your .csproj as well.

If you are building your application for multiple platforms, it helps to drop the ChakraCore.dll dependencies in folder names based off of your build configuration and platform. E.g., consider the following structure:
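An illustrative layout (the References root and the platform folder names are placeholders):

```
References\Debug\x86\ChakraCore.dll
References\Debug\x64\ChakraCore.dll
References\Release\x86\ChakraCore.dll
References\Release\x64\ChakraCore.dll
```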

That way you can declare the MSBuild property $(ReferencesPath) based on your build properties, e.g.
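A property definition along these lines would resolve the right folder for each build (the path segments are illustrative):

```xml
<PropertyGroup>
  <ReferencesPath>$(SolutionDir)References\$(Configuration)\$(Platform)</ReferencesPath>
</PropertyGroup>
```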

JavaScript Value Types In ChakraCore


The first step to building more complex applications with ChakraCore is understanding the data model. JavaScript is a dynamic, untyped language that supports first-class functions. The data model for JavaScript values in ChakraCore supports these designs. Here are the value types supported in Chakra:

  • Undefined
  • Null
  • Number
  • String
  • Boolean
  • Object
  • Function
  • Error
  • Array

String Conversion With Serialization and Parsing


There are a number of ways of marshalling data from the CLR to the JavaScript runtime. A simple way is to parse and serialize the data as a JSON string once it enters the runtime, as follows:
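The snippet did not survive extraction. Reconstructing it from the description that follows, and assuming the sample wrapper’s API (the `JavaScriptRuntime`, `JavaScriptContext` and `JavaScriptValue` names may differ in your copy), it would look roughly like:

```csharp
// Illustrative reconstruction; wrapper type and member names are assumptions.
var runtime = JavaScriptRuntime.Create();
var context = runtime.CreateContext();
JavaScriptContext.Current = context;

// Marshal the JSON in as a string and parse it inside the runtime.
JavaScriptValue parsedInput = JavaScriptContext.RunScript(
    "var parsedInput = JSON.parse('{\"foo\":42}'); parsedInput");

// ...apply your logic to parsedInput here, then marshal the result back out...
JavaScriptValue output = JavaScriptContext.RunScript("JSON.stringify(parsedInput)");
string result = output.ToString();
```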

In the above code, we marshal the JSON data {"foo":42} into the runtime as a string and parse it using the JSON.parse function. The result is a JavaScript object, which we use as input to the JSON.stringify function; we then call the ToString() method on the result value to put it back into a .NET string. The idea, of course, is to use the parsedInput object as input to your logic running in Chakra, and to apply the stringify function only when you need to marshal data back out.

Direct Object Model Conversion (With Json.NET)


An alternative approach to the string-based approach in the previous section would be to use the Chakra native APIs to construct the objects directly in the JavaScript runtime. While you can choose whatever JSON data model you desire for your C# application, we chose Json.NET due to its popularity and performance characteristics. The basic outcome we are looking for is a function from JavaScriptValue (the Chakra data model) to JToken (the Json.NET data model) and the inverse function from JToken to JavaScriptValue. Since JSON is a tree data structure, a recursive visitor is a good approach for implementing the converters.

Here is the logic for the visitor class that converts values from JavaScriptValue to JToken:

And here is the inverse logic from JToken to JavaScript value:

As with any recursive algorithm, there are base cases and recursion steps. In this case, the base cases are the “leaf nodes” of the JSON tree (i.e., undefined, null, numbers, Booleans, strings) and the recursive steps occur when we encounter arrays and objects.

The goal of direct object model conversion is to lessen pressure on the garbage collector as serialization and parsing will generate a lot of intermediate strings. Bear in mind that your choice of .NET object model for JSON (Json.NET in the examples above) may also have an impact on your decision to use the direct object model conversion method outlined in this section or the string serialization / parsing method outlined in the previous section. If your decision is based purely on throughput, and your application is not GC-bound, the string-marshaling approach will outperform the direct object model conversion (especially with the back-and-forth overhead from native to managed code for large JSON trees).

You should evaluate the performance impact of either approach on your scenario before choosing one or the other. To assist in that investigation, I’ve published a simple tool for calculating throughput and garbage collection impact for both the CLR and Chakra on GitHub12.

ChakraCore Threading Requirements


The ChakraCore runtime is single-threaded in the sense that only one thread may have access to it at a time. This does not mean, however, that you must designate a thread to do all the work on the JavaScriptRuntime (although it may be easier to do so).

Setting up the JavaScript runtime is relatively straightforward:
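The snippet is missing from this copy; with the sample wrapper from GitHub (type and member names may differ in your copy), setup amounts to creating a runtime and a context:

```csharp
// Illustrative; names follow the sample JavaScript runtime wrapper.
JavaScriptRuntime runtime = JavaScriptRuntime.Create();
JavaScriptContext context = runtime.CreateContext();
```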

Before you can use this runtime on any thread, you must first set the context for a particular thread:

When you are done using that thread for JavaScript work for the time being, be sure to reset the JavaScript context to an invalid value:

At some later point, on any other thread, simply recreate or reassign the context as above. If you attempt to assign the context simultaneously on two different threads for the same runtime, ChakraCore will throw an exception like this:

While it is appropriate to throw an exception, nothing should prevent you from using multiple threads concurrently for two different runtimes. Similarly, if you attempt to dispose the runtime without first resetting the context to an invalid value, ChakraCore will throw an exception notifying that the runtime is in use:

If you encounter the “runtime is in use” exception that stems from disposing the runtime before unsetting the context, double check your JavaScript thread activity for any asynchronous behavior. The way async/await works in C# generally allows for any thread from the thread pool to carry out a continuation after the completion of an asynchronous operation. For ChakraCore to function properly, the context must be unset by the exact same physical thread (not logical thread) that set it initially. For more information, consult the Microsoft Developer Network site on Task Parallelism13.

Thread Queue Options


In our implementation of React Native on Windows, we considered a few different approaches to ensuring that all JavaScript operations are single-threaded. React Native has three main threads of activity: the UI thread, the background native module thread and the JavaScript thread. Since JavaScript work can originate from either the native module thread or the UI thread, and, generally speaking, each thread does not block waiting for completion of activity on any other thread, we also have the requirement of implementing a FIFO queue for the JavaScript work.

ThreadPool Thread Capture


One of the options we considered was to block a thread pool thread permanently for evaluating JavaScript operations. Here’s the sample code for that:
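The sample was stripped from this copy. A sketch of the idea, using a BlockingCollection as the FIFO queue (the JavaScriptContext member names are assumptions about the wrapper, and `context` is the context created during setup):

```csharp
// Illustrative sketch; JavaScriptContext member names are assumptions.
var jsQueue = new BlockingCollection<Action>();

// Permanently capture one thread-pool thread for all JavaScript work.
Task.Run(() =>
{
    JavaScriptContext.Current = context;
    try
    {
        // GetConsumingEnumerable blocks until work arrives, preserving FIFO order.
        foreach (var action in jsQueue.GetConsumingEnumerable())
            action();
    }
    finally
    {
        JavaScriptContext.Current = JavaScriptContext.Invalid;
    }
});

// Callers enqueue work from any thread:
jsQueue.Add(() => JavaScriptContext.RunScript("/* ... */"));
```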

The benefit of this approach is its simplicity: we know a single thread is running all of the JavaScript operations. The drawback is that we permanently block a thread pool thread, so it cannot be used for other work.

Task Scheduler


Another approach we considered uses the .NET framework’s TaskScheduler14. There are a few ways to create a task scheduler that limits concurrency and guarantees FIFO, but for simplicity, we use this one from MSDN15.

The benefit to this approach is that it does not require any blocking operations.
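A sketch of the scheduler-based approach, using the LimitedConcurrencyLevelTaskScheduler from the MSDN sample mentioned above with a concurrency level of one (the JavaScriptContext member names are, again, assumptions about the wrapper):

```csharp
// One task at a time, in FIFO order, without permanently blocking a thread.
var scheduler = new LimitedConcurrencyLevelTaskScheduler(1); // MSDN sample class
var factory = new TaskFactory(scheduler);

Func<Action, Task> runOnJavaScriptQueue = work => factory.StartNew(() =>
{
    JavaScriptContext.Current = context;
    try { work(); }
    finally { JavaScriptContext.Current = JavaScriptContext.Invalid; }
});
```

Because each queued task sets and unsets the context on whichever pool thread runs it, the same physical thread that set the context also resets it, satisfying ChakraCore’s threading requirement.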

Garbage Collection


One of the main deterrents from using ChakraCore in conjunction with another managed language like C# is the complexity of competing garbage collectors. ChakraCore has a few hooks to give you more control over how garbage collection in the JavaScript runtime is managed. For more information, check out the documentation on runtime resource usage16.

Conclusion: To JIT or not to JIT?


Depending on your application, you may want to weigh the overhead of the JIT compiler against the frequency at which you run certain functions. In case you do decide that the overhead of the JIT compiler is not worth the trade-off, here’s how you can disable it:
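The snippet is missing here. In the JSRT API this is a runtime-creation attribute, so with the sample wrapper it would look something like this (the enum member name mirrors the native JsRuntimeAttributeDisableNativeCodeGeneration flag and is an assumption about the wrapper):

```csharp
// Create the runtime with native code generation (the JIT) disabled.
var runtime = JavaScriptRuntime.Create(
    JavaScriptRuntimeAttributes.DisableNativeCodeGeneration);
```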

Running the just-in-time (JIT) compiler in ChakraCore is entirely optional — without it, all of the JavaScript code will simply be fully interpreted.

Building Social: A Case Study On Progressive Enhancement

Smashing Magazine — 9/27/2016 10:00:56 AM

We talk a lot about progressive enhancement and how it improves backwards compatibility. But how straightforward is it to apply progressive enhancement concepts to a real-life project? When designing a rich interactive experience, it can be difficult to determine what can be implemented purely using HTML and CSS and what absolutely requires JavaScript.

Through this case study on redesigning the Building Social1 website, we’ll share some simple yet often overlooked front-end techniques that defer the use of JavaScript as much as possible, while providing some neat JavaScript enhancements, too.

Introducing Building Social


Building Social is an invitation-only app that connects people who share an office building — who might otherwise never meet — into a building-specific social media platform. People are able to access building events, conversations, likes, replies, trivia contests, a marketplace, and one-click reporting to and from the building’s management, as well as receive emergency and general information notifications.

The initial one-page design concept for the Building Social website was art directed by Patrick Riley and Paul Stanton from their New York office. We were handed a static mock-up of the design concept, along with a video mock-up of how various elements should move. Our job was to experiment with the design further and, more importantly, to bring it to life in the browser.


The initial static mock-up of the website (View large version3)

Considering that the project was relatively small in size, there was no need to establish a specific design or development strategy — unlike, for instance, with the SGS project4. This meant we were able to focus predominantly on the implementation and a number of specific progressive enhancement challenges, including:

  • automated versus synced string swapping;
  • static versus dynamically blended section backgrounds;
  • manual versus scroll position-based animation toggling;
  • pure-CSS modal-box forms enhanced with JavaScript.

Everything Except HTML Is an Optional Extra


Our first task was to set a baseline experience by applying properly structured and semantic HTML5 to all of the design elements. Considering that Patrick and Paul wanted to make use of video content, we also had to ensure there was a fallback for video content6.

Challenge 1: Automated Vs. Synced String Swapping


At first glance, the header appears to be just pure decoration. However, it actually contains usable content, promoting several key features of the Building Social service. Patrick and Paul wanted to include a video feature in the header, too, with each video scene associated with a specific service feature.

Progressive enhancement summary:

  • Mark up key service features in an HTML unordered list to provide a baseline experience should the video, JavaScript and/or CSS fail to load.
  • Animate service features via CSS keyframe animations.
  • When the video loads, swap out each service feature after each subsequent scene change, while keeping it in sync with the video playhead.

Having established the baseline HTML experience, we tried simply to animate the list of service features independent of the video using CSS keyframe animations, as each scene lasts for 3 seconds (21 seconds in total). Below is our original CSS snippet:
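That snippet didn’t survive extraction. A reconstruction of the idea (the class names and exact percentages are ours) keeps each of the seven features visible for its 3-second slot of the 21-second loop:

```css
.feature {
  opacity: 0;
  animation: feature-cycle 21s infinite;
}

/* Stagger each list item into its own 3-second slot. */
.feature:nth-child(2) { animation-delay: 3s; }
.feature:nth-child(3) { animation-delay: 6s; }
.feature:nth-child(4) { animation-delay: 9s; }
.feature:nth-child(5) { animation-delay: 12s; }
.feature:nth-child(6) { animation-delay: 15s; }
.feature:nth-child(7) { animation-delay: 18s; }

/* Visible for one seventh (~14.3%) of the loop, hidden for the rest. */
@keyframes feature-cycle {
  0%, 14.3%   { opacity: 1; }
  14.4%, 100% { opacity: 0; }
}
```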


Perhaps not the most elegant CSS snippet you’ll ever see, but it certainly did the job. However, we were fully aware that there might be times when the video fails to load altogether or when JavaScript fails to initiate properly. In such instances, a static background image is displayed with each service feature, swapped out according to the keyframe timings seen in the CSS snippet above.

Each key service feature is swapped out using only CSS.

When the video fully loads, we then have to ensure that each service feature remains in sync after every scene change:

Each key service feature is synced after every scene change in the video.

While we could have simply relied on CSS keyframe animations, the risk was that the animated list would quickly get out of sync should the video finish loading after the other assets. The solution: synced string swapping and a small snippet of JavaScript to keep track of the playhead position.

When the JavaScript loads, it begins to track the video’s playhead position, with the service feature string being swapped out depending on the retrieved value. This is done by listening to the timeupdate event in JavaScript (with jQuery) and toggling the class name for the respective element whenever the video’s currentTime matches the condition:
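The snippet was lost in extraction. A sketch of the idea follows; the element id, class name and exact condition are our reconstruction, reading the 0.9 factor as toggling at 90% of the way through each 3-second scene:

```javascript
var SCENE_LENGTH = 3;   // seconds per scene
var FEATURE_COUNT = 7;  // seven scenes, 21 seconds in total
var LEAD = SCENE_LENGTH * 0.1; // swap 0.3s early so the CSS animation can run

// Pure helper: which feature belongs to the current playhead position?
function activeFeature(currentTime) {
  return Math.floor((currentTime + LEAD) / SCENE_LENGTH) % FEATURE_COUNT;
}

// Browser-only wiring with jQuery, as in the article.
if (typeof jQuery !== 'undefined') {
  jQuery('#header-video').on('timeupdate', function () {
    jQuery('.feature')
      .removeClass('is-active')
      .eq(activeFeature(this.currentTime))
      .addClass('is-active');
  });
}
```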

Notice that we used a factor of 0.9 to toggle the class just before the actual scene change. This allows enough time for the animation to execute properly, while the animation itself is still controlled via CSS:
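That CSS didn’t survive extraction; it can be as small as a transition on the toggled class (the class names here are our assumption):

```css
.feature {
  opacity: 0;
  transform: translateY(1em);
  transition: opacity 0.3s ease-in-out, transform 0.3s ease-in-out;
}

.feature.is-active {
  opacity: 1;
  transform: translateY(0);
}
```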

Challenge 2: Static Vs. Dynamically Blended Section Backgrounds


By default, each page section has a designated background color, along with one or more background images. Nothing too extravagant there, except that the New York team really wanted the section backgrounds to smoothly blend into each other as the user scrolls down the page.


Static section backgrounds are the default, and blended backgrounds are added as a progressive enhancement. (View large version8)

Progressive enhancement summary:

  • Set static background colors by default for each page section.
  • Apply enhanced background blending via JavaScript.

Blended backgrounds aren’t something you see or work on every day. Nevertheless, some interesting examples are already out there; for instance, the website for the design agency ustwo9. To create this effect consistently when the user scrolls, we had to use JavaScript, but only as a progressive enhancement. After researching and testing a number of promising JavaScript-based libraries, we settled on Scroll Magic10 for two reasons:

  • It is well documented, with plenty of examples that could be applied to what we needed.
  • It is lightweight, weighing in at 8 KB.

Our initial idea was to detect the position of each successive page section in order to apply the relevant CSS background gradient. However, it soon became apparent that no gradient was being applied whenever we stopped scrolling! Because only a single solid background color was applied to the body element, the gradient was indeed an optical illusion created by the scrolling:

Blended section backgrounds create the illusion of an infinite gradient.

Upon realizing that we had been the victims of an optical illusion up to that stage, we discarded the gradient idea. Instead, we decided simply to create a motion tween between the solid background colors of the adjoining page sections whenever the user scrolls down the page:
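The snippet is missing from this copy. With Scroll Magic plus the GSAP tween plugin (the section selectors and colors are placeholders, and the scene options follow Scroll Magic’s documented API), the wiring looks roughly like:

```javascript
// Solid background colors for each successive page section (placeholder values).
var sectionColors = ['#2c3e50', '#c0392b', '#16a085', '#8e44ad'];

// Browser-only wiring; requires ScrollMagic, its animation.gsap plugin and TweenMax.
if (typeof ScrollMagic !== 'undefined' && typeof TweenMax !== 'undefined') {
  var controller = new ScrollMagic.Controller();

  sectionColors.forEach(function (color, i) {
    new ScrollMagic.Scene({
      triggerElement: '#section-' + (i + 1),
      triggerHook: 1,    // start when the section enters the viewport
      duration: '100%'   // tween over one viewport height of scrolling
    })
      .setTween(TweenMax.to('body', 1, { backgroundColor: color }))
      .addTo(controller);
  });
}
```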

Having established that this approach works, we then adopted a similar configuration, using Scroll Magic again, to trigger scene animations that are tied to the scroll position. But before we attempted to do that, we had to make sure that these animations could be triggered without having to rely on JavaScript.

Challenge 3: Manual Vs. Scroll-Position-Based Animation Toggling


We really wanted to incorporate the idea of movement whenever the user scrolls down the page, but at the same time ensure that it isn’t overwhelming (as is often the case with a lot of websites that contain the parallax effect).

Progressive enhancement summary:

  • Use the CSS :hover pseudo-class to toggle animations by default.
  • Toggle animations based on the scroll position using JavaScript.

For simple animations that are not central to the content, the animation options available in CSS are more than sufficient. For instance, the animation length and timing functions, coupled with some of the animatable CSS properties such as positioning, margins and opacity, go a long way. Furthermore, if any elements in a single scene don’t require any specific independent interaction, then only a single event is needed to trigger all of the animation elements.


Simple animation triggered with the :hover pseudo-class (View large version12)

In our case, because the sections were 100% wide, we were able to trigger the animation in each section by using the :hover pseudo-class. Doing so gave us a quick and unobtrusive way to trigger the animations without having to use JavaScript. However, this method doesn’t work on touch-enabled devices, and considering that animations are, for the most part, visual enhancements, we decided to discard them for small viewports.


More complex animation triggered when scrolling (View large version14)

Upon detecting that the JavaScript has fully loaded, we trigger the animations based on the scroll position. By also including background elements in the animations and by separately adjusting the timings for each, we are able to control each individual scene more precisely, thus enhancing the overall experience.

Challenge 4: Pure-CSS Modal-Box Form Enhanced With JavaScript


Considering that the initial concept was a one-page design, with two distinct calls to action (i.e. for landlords and “providers”), we really wanted to ensure that the user isn’t taken away from the page. Furthermore, due to the floating text on the right, progressively disclosing any subsequent contact forms below each call to action would not have been ideal. Therefore, modal boxes were the best solution.

Progressive enhancement summary:

  • Toggle modal boxes by default using only semantic HTML and CSS.
  • Enhance the experience by introducing inline validation and form submission that does not require page reloading via JavaScript.

Toggling elements on and off is very easy when you write semantic HTML, meaning that elements can be progressively enhanced without polluting the actual code. For instance, by ensuring that each toggled element has a semantic id attribute and anchor link (one that points to the same id), we were able not only to provide hooks to support interesting interactions, but also to dramatically improve the accessibility and usability of the unstyled page.


A pure-CSS modal-box web form (View large version16)

But how exactly can anchor links toggle modal boxes? CSS :target pseudo-class to the rescue! Whenever an element’s id matches the hash string in the browser’s address bar, the :target pseudo-class gets triggered, meaning that the element can also be styled differently. Furthermore, the CSS is quite simple:
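The snippet didn’t survive extraction; the core of the technique (our class name) is just a box that is hidden by default and becomes visible when targeted:

```css
.modal {
  visibility: hidden;
  opacity: 0;
  transition: opacity 0.3s ease-in-out;
}

.modal:target {
  visibility: visible;
  opacity: 1;
}
```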

If you’re aware of the :target pseudo-class technique (after all, it has been around17 for quite some time now), then you will have probably used it already to toggle elements without having to rely on JavaScript.

When you also include a second anchor link that refers back to the initial anchor element (or its parent), you’re then able to use it as a “close” link. This is because the :target pseudo-class, which matches the modal box’s id, is no longer active when the “close” link is clicked. Here’s a quick example:
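Reconstructed markup for the toggle and close links (the ids are ours):

```html
<!-- The anchor that opens the modal; its id doubles as the "close" target. -->
<a href="#contact-modal" id="contact-open">Contact us</a>

<div class="modal" id="contact-modal">
  <form action="/contact" method="post">
    <!-- form fields -->
  </form>
  <!-- Anchoring back to the opener deactivates :target and closes the modal. -->
  <a href="#contact-open" class="modal-close">Close</a>
</div>
```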

In addition, when you anchor to the element from which the modal was initially toggled, it will set the page precisely back to its position before the user activated the modal.

We can push the use of the :target pseudo-class even further. Let’s say the modal box contains a web form, and we want to display the result of the form submission in that same modal box (for instance, as a confirmation message).


Display the confirmation message in the modal box to create the illusion of speed and to keep the user focused on the task. (View large version19)

To do so, we simply need to add a hash id (an id identical to the modal box’s id) to the resulting URL — the same one we send to the user after they have submitted the form. Here is a simple example using a header redirect in PHP:
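The example was stripped from this copy; a minimal version (the URL, query flag and id are placeholders) would be:

```php
<?php
// After processing the submission, redirect back with the modal's id in the
// URL hash so that :target keeps the modal open, now showing the confirmation.
header('Location: /index.php?submitted=1#contact-modal');
exit;
```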

On a really fast Internet connection, it will appear as if the browser never reloaded the page. On a slow connection or a less capable device, instead of reloading the page and returning the user to the top, the browser will automatically skip to the confirmation message, thus dramatically improving the user experience.

With this basic functionality in place, we can enhance it further with JavaScript by introducing inline validation and form submissions that do not require full page reloads20 (using a good ol’ XML HTTP request (XHR), otherwise known as AJAX). Doing so not only will shave off a bit of time, but will make the experience more convenient if there happen to be any input errors.

Lastly, why not provide support for closing the modal box with the Escape key? Following the same principle of toggling classes and anchoring to different elements, the code required to do so is relatively simple:
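The snippet was lost in extraction. A sketch of the idea (the hash values are our placeholder ids) separates the decision from the DOM wiring:

```javascript
// Pure helper: should the Escape key close the currently targeted modal?
function shouldCloseModal(key, hash) {
  return key === 'Escape' && hash === '#contact-modal';
}

// Browser-only wiring: re-anchoring to the opener deactivates :target.
if (typeof document !== 'undefined') {
  document.addEventListener('keyup', function (e) {
    if (shouldCloseModal(e.key, window.location.hash)) {
      window.location.hash = '#contact-open';
    }
  });
}
```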

Taking everything we have discussed into account, below is a list of progressive enhancement options for the modal-box contact form:

  • Toggle and close the modal box purely with CSS — JavaScript not required.
  • Enable basic HTML5 validation using the required attribute and proper input types (for instance, type="email", which validates the entry against the correct email address pattern) — JavaScript not required.
  • Fully reload a form submission, automatically opening the form in a modal box, and returning a confirmation message or a server-side-validated error message — JavaScript not required.
  • Validate input fields inline — requires JavaScript and duplicate validation logic on both the client side and server side, which could prove difficult to maintain.
  • Submit forms via XHR and return a confirmation message or a server-side-validated error message without reloading the page or closing the modal box.
  • Additional enhancements are possible — for instance, closing the modal box with the Escape key.

If the form is relatively simple and sent to the server using XHR, then the fourth step (inline validation) can be skipped; otherwise, depending on the project, it will create unnecessary maintenance overhead. However, as you’re probably aware, forms should always be validated server-side, even if you implement client-side validation.

Bonus Tip: The Illusion Of Randomness


What do you think of first when you need to randomize something on the client side? Does it involve JavaScript’s Math.random() by any chance?

For this project, we wanted to randomize the clouds in the “Providers” section (the red section), to mimic real cloud movement in nature. Clouds generally sit at different heights, but move more or less in the same direction. Also, from the viewer’s perspective, clouds often appear to travel at different speeds relative to each other.


Clouds move in the same direction but at different speeds. (View large version23)

This resulted in an animation that uses the same traveling distances but is set at different durations:
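The animation itself was stripped from this copy. The gist (class names and durations are illustrative) is one keyframe animation shared by all clouds, with a different animation-duration per cloud:

```css
.cloud {
  animation-name: drift;
  animation-timing-function: linear;
  animation-iteration-count: infinite;
}

/* Same travel distance, different durations = different apparent speeds. */
.cloud--near { animation-duration: 21s; }
.cloud--mid  { animation-duration: 34s; }
.cloud--far  { animation-duration: 55s; }

@keyframes drift {
  from { transform: translateX(-20%); }
  to   { transform: translateX(120%); }
}
```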

The technique above proves that sometimes there is no need to use JavaScript. If we step back and revisit the problem and goal, we can achieve the same result simply by using readily available and widely supported technologies but in a much smarter way.

Conclusion: It Can Be Done!


Some of the aforementioned techniques can also be readily applied to other UI elements, such as tabs, tables and dropdowns. By using native HTML functionality, coupled with pseudo-selectors and CSS animation, you can provide relatively simple yet engaging interactive experiences, requiring little effort on your part.

While there’s certainly nothing wrong with using JavaScript to enhance an experience (JavaScript-based animation can be just as fast as CSS-only animation), do think about semantics, accessibility and performance before diving head first into any solution. Semantic HTML in particular, which has been widely evangelized for many years, reinforces the meaning of content through markup rather than through its presentation. Therefore, investigate multiple options — in most cases, you’ll be able to identify the least complex solution.

Finally, by being creative and using the basic tools at your disposal, you can improve performance and accessibility, as well as simplify code maintenance. Keep in mind that no two projects are ever quite the same, and each is likely to have specific use cases in terms of performance and accessibility. As such, metrics and data may vary. For instance, the UK’s Government Digital Service determined that 1.1%27 (or 1 in 93) of its users miss out on JavaScript enhancements. In attempting to show that progressive enhancement is faster28, Jake Archibald ran a series of convincing tests, although, as he points out, “the uncertainty of real life” can affect the fairness of any test. But the point, as always, is that by getting content on the screen as soon as possible, you will improve the user experience, and in doing so, you will earn a few extra karma points along the way. Everybody wins!

(vf, il, al)




Developing For Virtual Reality: What We Learned

Smashing Magazine — 9/26/2016 10:31:13 AM



With the tools getting more user-friendly and affordable, virtual reality (VR) development is easier to get involved in than ever before. Our team at Clearbridge Mobile1 recently jumped on the opportunity to develop immersive VR content for the Samsung Gear VR, using Samsung’s 360 camera.

The result is ClearVR, a mobile application demo that enables users to explore the features, pricing, interiors and exteriors of listed vehicles. Developing this demo project gave us a better understanding of VR development for our future projects, including scaling, stereoscopic display and motion-tracking practices. This article is an introductory guide to developing for VR, with the lessons we learned along the way.

ClearVR demo video

Developing For Virtual Reality


VR is one of the most stimulating and exciting new fields to develop for. It gives creators the freedom to bring their ideas to life. Any developer can create content for VR. The only requirement is a yearning for innovation and problem-solving. When first developing for VR, creators must consider the different UI, how to scale for a 3D environment and the devices they’re working with. VR brings together several technologies, including 3D stereoscopic displays, motion-tracking hardware, new input devices, computers and mobile phones.

A main challenge when developing for VR is understanding what the user will expect to do and how you’ll be able to meet and exceed their expectations. Many factors, including performance, UI and scaling, play a huge part in upholding the user’s experience. Basic elements of 2D applications can’t be applied in VR content, so you need to know how the user will behave once they enter the virtual environment, and then develop accordingly.

Technology And Tools Used


Here are the prerequisites for VR development:

  • Hardware
  • Samsung Galaxy Note 4 or 5, S6, S6 Edge, S6 Edge+, S7 or S7 Edge; Samsung Gear VR headset; Samsung Gear 360 camera; an Android phone running Android 4.4 or newer (API level 19). You’ll want to use devices that have a lot of RAM, to push more polygons.
  • Tools

Getting Started


Unity has several features that make Android development easier, especially for creating games, but GearVRf is a good choice to consider if you’re on a tight budget. GearVRf is a native Java framework created by Samsung and released as an open-source library.

Gear VR is a headset that combines Oculus’ optic lenses and head-tracking technology in a housing that holds a mobile phone with a high-resolution display. It was released by Samsung in late 2014 at $199 and can be purchased through Samsung’s online store. Gear VR’s housing has several intuitive input controls, including an eye-adjustment wheel, as well as volume buttons, a headphone jack and a trackpad. It offers a better experience than the Oculus DK2, with a nice form factor and a fantastic display resolution.

To begin, we followed “Build Configuration for Android Studio” in the “GearVRf Developer Guide.” These steps got us to a good base point, until we hit an error: the native development kit (NDK) build command couldn’t be found in Android Studio. To work around the issue, we removed the automatic build step and ran the ndk-build command, which is packaged with the Gear VR SDK, from Terminal instead, where the NDK was recognized immediately. Building outside of Android Studio compiled all of the SDK files successfully the first time, and we checked the compiled output into our repository; the build command was removed from the Android Studio build steps so that it wouldn’t fail on every compile, with the compiled SDK files used instead. As development alternatives, we also considered Unreal Engine and Unity, but decided that Samsung’s GearVRf was the best choice for us in terms of ease of use and budgetary restrictions.

We looked at Samsung’s GearVRf and existing demos to identify the inconsistencies in previous VR content, such as fragmentation, low pixel density, low frame rate and overall poor user experience. To increase the quality of your content, it’s crucial to optimize your code as much as possible for a seamless and nausea-free user experience. To achieve this, reduce faces that won’t be seen by the user, limit overdraws to optimize the graphics processing unit (GPU), and limit the level of detail in objects. Sample GearVRf applications, available in the GearVRf SDK, can provide you with valuable insight for writing your own VR applications.

Next, our product team defined the product in a discovery session. They mapped out what features the mobile app would have and created a road map to guide our development team. These features included a 3D model of each car’s exterior, with the ability to enter the car to see a 360° interior. They also wanted the app to showcase the car’s description and special features in a text box above the car model. From this, we determined what we could do with the resources we had. Most importantly, we wanted our VR application to have a smoother user experience, with high-quality content. Samsung’s 360 camera helped us create more compelling visuals, which are essential to making the experience truly immersive. With a rough idea from our product team, we laid out the architecture of the app, decided which scenes to include, and created a basic code structure from there.

Basic Terminology


  • Mesh
  • There are several ways to draw 3D graphics, but the most common is to use a mesh, which is an object composed of one or more polygonal shapes, constructed out of vertices (x, y, z) that define coordinate positions in the 3D space. The polygons typically used in meshes are triangles (three vertices) and quads (four vertices). Note that 3D meshes are often referred to as models.
  • Polygon count
  • The polygon count, also known as the polycount, is the number of faces in the object that you’re modelling. The higher the polygon count, the higher the file size and the longer the rendering time your project will take. To optimize the VR experience (depending on the devices you’re using), simplify the polygon count to increase the frame rate.
  • Textures and materials
  • The surface of a mesh is defined using additional attributes beyond the x, y and z vertex positions. Surface attributes can be as simple as a single solid color, or they can be more complex, such as light reflecting off of an object. Materials are the surface properties of a mesh and depend on the light in the scene.
  • Bounding box
  • A bounding box is a simple geometric box that is represented with three-dimensional points, with the minimum and maximum coordinates located diagonally from the bottom-left to upper-right corners of the object. A bounding box is created as mesh meta data and, in most cases, works best for selecting and choosing an object. Essentially, it’s a box placed around an object that enables the user to select the object with greater ease.
  • Stereoscopic displays
  • VR is about creating a 3D visual representation of an experience that conveys a sense of depth. To create depth, VR hardware systems such as the Gear VR headset employ a 3D display, also known as a stereoscopic display or head-mounted display. To create an illusion of depth, developers must generate a separate image for each eye, one slightly offset from the other. This creates a parallax effect (where the brain perceives depth based on the difference in the position of objects). Many VR headsets on the market also apply barrel distortion to the rendered images to counteract the pincushion distortion introduced by their lenses.
  • Motion-tracking hardware
  • This helps the brain believe it’s in another place by tracking movements of the head to update the rendered scene in real time. Headsets such as Gear VR use a high-speed inertial measurement unit (IMU) to achieve this, combining gyroscope and accelerometer hardware that measures changes in rotation. Even the slightest lag will break the immersive experience for the user.
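To make a couple of these terms concrete, here is a small illustrative sketch (plain Python, not the GearVRf API) that builds a quad out of two triangles and computes its bounding box from the vertices’ minimum and maximum coordinates:

```python
# A mesh is just vertices (x, y, z) plus faces that index into them.
vertices = [
    (0.0, 0.0, 0.0),  # 0: bottom-left
    (1.0, 0.0, 0.0),  # 1: bottom-right
    (1.0, 1.0, 0.0),  # 2: top-right
    (0.0, 1.0, 0.0),  # 3: top-left
]
# A quad rendered as two triangles; the polygon count here is 2.
triangles = [(0, 1, 2), (0, 2, 3)]

def bounding_box(verts):
    """Min and max corners of the axis-aligned box enclosing the mesh."""
    xs, ys, zs = zip(*verts)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

print(bounding_box(vertices))  # ((0.0, 0.0, 0.0), (1.0, 1.0, 0.0))
```

A picking system can then test rays or taps against this simple box instead of against every triangle of the mesh, which is why bounding boxes work well for object selection.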



There are several important classes in a GVRf project:

  • GVRContext
  • Similar to Android Context, this is mainly used to load resources and transition to a new scene.
  • GVRActivity
  • Used as a starting point for all GearVRf projects, this triggers the GVRScript (GVRMain in version 3) via setScript (or setMain) and handles some UI event detection (clicks, swipes, back-button presses). There’s usually only one GVRActivity in a project; different user interfaces are implemented through different GVRScenes. A very basic set-up subclasses GVRActivity and registers the main script when the activity is created.
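A minimal set-up might look like the sketch below. This assumes the GearVRf 3.x API; the class names MyVRActivity and MyMain and the settings file name are placeholders, not taken from the original project:

```java
import android.os.Bundle;

import org.gearvrf.GVRActivity;
import org.gearvrf.GVRContext;
import org.gearvrf.GVRMain;

public class MyVRActivity extends GVRActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // In version 3 the entry point is a GVRMain subclass registered
        // with setMain(); earlier versions use GVRScript and setScript().
        // The second argument names the VR settings XML (an assumption here).
        setMain(new MyMain(), "gvr.xml");
    }

    private static class MyMain extends GVRMain {
        @Override
        public void onInit(GVRContext gvrContext) {
            // Build the initial GVRScene here: load meshes and textures,
            // then add scene objects to gvrContext.getMainScene().
        }
    }
}
```

From there, each distinct user interface gets its own GVRScene, and the GVRContext passed to onInit is used to load resources and switch between them.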

Stretching The Limits Of What’s Possible

Smashing Magazine — 9/23/2016 10:36:36 AM


An Interview With Matan Stauber

  • September 23rd, 2016

Designing with “big data” is a challenging task. Matan Stauber, however, took it to the next level. With an impressive outcome. Having studied Visual Communication at Bezalel Academy of Art and Design, Israel’s national school of art, Matan realized a very ambitious final project: an interactive timeline of our galaxy’s history — 14 billion years, from the Big Bang to today.

We talked to Matan about Histography41, about the idea behind it, and how he managed to bring it to life. An interview about stretching the limits of what’s possible.

Q: Matan, what’s the idea behind Histography and how did it come to be?


Stauber: A timeline is one of the most popular ways of visualizing history, but timelines are usually limited to a specific time period. Histography is an interactive timeline that allows viewers to explore all of history, across 14 billion years, all the way back to the Big Bang. The site draws historical events from Wikipedia and self-updates daily with newly recorded events. Every dot in Histography represents a historic event, along with videos, articles, and images. Viewers can adjust to any time range, from decades to billions of years, and even compare historic events using different categories, such as war and inventions in the Middle Ages.

For a while before creating Histography, I was interested in Wikipedia and what a redesign of it could look like. One of the features I had in mind was a view where all of Wikipedia’s events are arranged on a historical timeline. Since Wikipedia contains events from every time period we know (from this current year back to the Big Bang), a timeline of all Wikipedia events is a timeline of all history as we know it. I found this concept so exciting that it quickly became the brief for the entire project.


Histography41 visualizes all of known history in an interactive timeline. (View large version5)

Q: How does Histography work from a technical perspective?

Stauber: I created a system that scans Wikipedia, searching for historic events, indexing them, and trying to determine how important each one is. For every event the system then asks Google for an image, YouTube for a video, and Wikipedia for an article.

Imagine telling a person: “Go to every Wiki page you can find. Every time you come across a date and you think it’s a historic event, write it down. For each of them, go to Google and ask for a photo, and go to YouTube and see if there is a relevant video, and give each one a rating score of 1–100. Events with a Wiki page will get a higher score than events without. Longer, more popular articles will get higher scores than short, unpopular ones.”

The reason behind the rating system is that there were a lot of very niche historic events that are probably not relevant to most people. My first experience with Histography was exploring for a very long time before I could find events I could relate to. The solution was to promote the more “important” events, the ones people are writing and talking about. You can’t really tell, but every time you move your cursor, the system tries to pick the more interesting event out of the events in the radius around your cursor.
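The cursor behavior Stauber describes can be approximated with a small sketch (plain Python; the event fields, coordinates and scores are invented for illustration and are not from Histography’s actual code):

```python
import math

# Each dot is a historic event with a precomputed 1-100 "importance" score.
events = [
    {"title": "Moon landing", "x": 120, "y": 80, "score": 95},
    {"title": "Obscure treaty", "x": 118, "y": 82, "score": 12},
    {"title": "Printing press", "x": 300, "y": 40, "score": 88},
]

def pick_event(events, cursor, radius=10):
    """Return the highest-scored event within `radius` of the cursor."""
    cx, cy = cursor
    nearby = [e for e in events
              if math.hypot(e["x"] - cx, e["y"] - cy) <= radius]
    return max(nearby, key=lambda e: e["score"], default=None)

print(pick_event(events, (119, 81))["title"])  # Moon landing
```

Both the moon landing and the obscure treaty fall inside the radius here, but the higher-rated event wins, which is exactly the “promote the more important events” effect described above.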

As you hover over the dots, Histography picks the most interesting event from the radius around your cursor.

Q: Could you explain what is going on behind the scenes the moment a user has selected a timeframe?

Stauber: Every time you select a timeframe, the system constructs a new graph layout. Each column in this new layout can represent anything from months to millions of years (depending on how big your selected timeframe is).

The next step is that each of the thousands of dots on the screen gets its own new location. Every millisecond or so, the system runs through all of the dots and moves each of them slightly closer to its new location. This is what drives the animation. Because each dot has its own speed, it feels as if the particles come flying in, rather than moving robotically.
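That per-dot easing can be sketched in a few lines (illustrative Python; Histography itself does this in JavaScript on top of Pixi, and the easing constants here are invented):

```python
def step(dots, default_ease=0.1):
    """Move every dot a fraction of the remaining distance to its target."""
    for d in dots:
        ease = d.get("ease", default_ease)
        d["x"] += (d["tx"] - d["x"]) * ease
        d["y"] += (d["ty"] - d["y"]) * ease

# Two dots with different speeds converge on the same layout position.
dots = [{"x": 0.0, "y": 0.0, "tx": 100.0, "ty": 50.0, "ease": 0.2},
        {"x": 0.0, "y": 0.0, "tx": 100.0, "ty": 50.0, "ease": 0.05}]
for _ in range(60):   # roughly one second of animation frames
    step(dots)
```

After 60 frames the faster dot has essentially arrived while the slower one is still approaching, which is what makes the swarm of particles feel organic rather than mechanical.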

When I started thinking about how to design such a timeline, one that can support billions of years and thousands of events, the main concern was proportions. The TV series “Cosmos” once calculated that if you place all of the galaxy’s history on a one-year calendar, all of human history will fill the last second of that calendar. I knew that in order to create a timeline of all history, it could not be limited to a specific period. It would need to be able to show events on different scales and to allow people to focus on a single year as well as billions of them. Trying to answer that required many different design concepts and explorations — from 3D timelines to Matryoshka-type designs, and going back to the drawing board time after time.

A nice detail: When you change the timeframe, new events, represented by dots, come flying in dynamically.

Q: Please describe the process of building this mammoth project: Where did you start? What were the first steps you took?

Stauber: It started by trying to map all the different types of existing timelines, whether printed in a newspaper or displayed in a museum. I wanted to know what binds timelines to a specific period and what I could do differently so that a timeline of all history would be possible.

The next step was taking a notebook and starting to draw ideas and concepts. Most of them were useless, but every once in a while there was a drawing that made me think “Oh, this might actually work.”


How does one build a timeline of all of history? Matan tinkered with a lot of concepts before settling on one. Here one of his early sketches. (View large version7)


The timeline now in use on Histography. It lets you select a timeframe yourself, or you can choose a certain period, the Iron Age, for example. (View large version9)

Q: What were the challenges you had to master along the way?

Stauber: A big problem was navigation. How do you navigate from billions of years down to a specific one? The slider at the bottom of the page allows viewers to adjust the time period they would like to see. You could jump to a specific period, like the Iron Age, or choose a custom one. It’s designed in a way that the “sensitivity” of it changes as you move from one period to another. For example, if you are looking at -3.7 billion years and move the slider a bit, the next one would be -3.6 billion (100 million years). But if you are looking at 1937, the next one would be 1938 (one year). It’s basically an exponential function: the closer you get to the present, the smaller the steps become.
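One way to sketch that kind of non-linear slider mapping (plain Python; the exponent and scaling constants are invented for illustration, not Histography’s actual values):

```python
def years_back(t, total=14e9, a=10):
    """Map slider position t in [0, 1] to years before the present.

    Exponential, so equal slider movements cover only a few years near
    the present but hundreds of millions of years near the Big Bang.
    """
    return total * (10 ** (a * t) - 1) / (10 ** a - 1)

# The same slider nudge at opposite ends of the scale:
near_present = years_back(0.011) - years_back(0.010)   # a fraction of a year
near_big_bang = years_back(0.991) - years_back(0.990)  # ~hundreds of Myr
print(near_present, near_big_bang)
```

The endpoints behave as a slider should: t = 0 maps to the present (zero years back) and t = 1 maps to the full 14 billion years, while the step size in between varies by many orders of magnitude.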


To make navigating through 14 billion years of history as convenient as possible, Matan made the slider sensitive. Here a sketch of the concept. (View large version11)

Q: What techniques and tools did you use to build Histography? Were there any existing libraries you could rely on to ease the job or did you have to build everything from scratch?

Stauber: A lot of notebook sketches. I like designing in Sketch (by Bohemian Coding) and Adobe Illustrator. The code in Histography is CSS, JavaScript and WebGL (running on Pixi) for all the events flying onto the screen. Pixi.js12 is a great library for 2D graphics using the powerful WebGL. The rest is mostly made from scratch. I’d say that I recommend using WebGL only when you have complex graphics. For most projects, CSS will give you great results in a shorter time.

Q: How long did the project take to build?

Stauber: Histography was created in four months (end to end) as my final student project at the Bezalel Academy of Arts and Design.

Q: Did Histography turn out as you expected it or were there sacrifices you had to make?

Stauber: One idea that I really liked and had to sacrifice was having another view that could present all of the historic events on a global map. When doing data visualization, there is sometimes a gap between the ideas we have and the actual data. In this case, the data just didn’t contain locations for most of the historic events I had.

Q: Were there things you wish you’d done differently, both in terms of design and code?

Stauber: Not differently, but there are many sides to Histography I would like to expand. For example, all the data I’m using comes from the English Wikipedia (which is the biggest data source), and in many ways this information represents the Western view of history. I would love to see what a Chinese or Arab Histography would look like.

Q: Did you have any inspiration for this project?

Stauber: I’ll typically try to find inspiration outside the field I’m currently working on. If I’m doing graphic design, I’ll look at industrial design, architecture and such. In the case of Histography, some of the things that really opened the way I think about history came from natural history museums and history-related exhibitions.

Q: Where do you see information design heading?

Stauber: There is a lot of discussion about “big data” and for a good reason: We have never harvested and stored such a vast amount of information and never in history was this information so accessible to the general public. The question is: What can we do with all of this data? I think we will see more and more projects trying to find ways to make information on large scales more approachable.

Q: Are there any examples of data visualization you can point us at that inspire you or which, in your opinion, get things right?

Stauber: There is a great project called “We Feel Fine13” by Jonathan Harris that scans Twitter for tweets starting with “I feel…”. The result is an interactive map of human emotion from all around the globe.

Q: Last, but not least: Do you have a piece of advice you’d like to share with our readers?

Stauber: Big data and infographics in general, like other areas of design, are about storytelling. It’s about the story of your data, and as such, it has to be coherent. A common mistake is to try to communicate too many ideas. There are too many infographics that are beautiful but simply too complex to understand. My advice would be to think of the first glimpse of your project and what story it tells.

Thank you, Matan, for offering us a deeper look into your work! We’re already curious what you will come up with in the future.





Choosing The Right Prototyping Tool

Smashing Magazine — 9/22/2016 11:19:16 AM


  • September 22nd, 2016

When it comes to creating prototypes, so many tools and methods are out there that choosing one is no easy task. Which one is the best? Spoiler alert: There is no “best” because it all depends on what you need at the moment! Here I’ll share some insight into what to consider when you need to pick up a prototyping solution.

I’ve always wanted to stay up to date on the latest design and prototyping tools, testing them shortly after they launch, just to see if any of them might improve my workflow and enable me to achieve better results. In the beginning, a few years ago, I think it was easier than it is now to decide whether a new tool was useful. Nowadays, apps are being released every day, and it’s kind of difficult to give them all a proper try.


My list of prototyping tools2: I swear it was much shorter in the beginning! (View large version3)

In a desperate attempt to become more conscious about each prototyping tool’s features and characteristics, and to better decide which to try next, I started to compile a short list of prototyping tools4. Gradually, as I added more and more items to the list, it got out of control — a reliable sign that too many solutions exist.

Perhaps it’s because of this situation that, quite often, after having presented at a conference or taught a class, some of the attendees would ask for my advice, wanting to know which is the best tool out there. Honestly, I don’t feel capable of giving a straight answer, because, as with choosing a pair of running shoes, the “best” often depends on your needs at that particular moment and on what outcome you want to achieve5.

I guess after a while, I have developed a kind of sixth sense for design — an ability to understand (or, at least, believe — I’m not Superman, after all) whether a given tool is worth trying just by giving it a quick look.

Luckily for you, you don’t need a sixth sense or any other superpowers in order to choose a prototyping tool. There are other, more objective means of choosing one. It all depends on your current priorities, so let’s start there.

1. Learning Curve


The most effective methods of learning take advantage of previous knowledge, so that we don’t have to start from scratch. This is what we call “knowledge transfer”: applying previously acquired knowledge to a new situation. It is also useful when you’re learning how to use a new prototyping tool: The ones with a familiar interface and a familiar set of tools will probably be easier to learn than ones that are new in every aspect.

This is especially true for Adobe’s suite, in which every app is designed to resemble the others. You know where the panels and dialogs will be6, and the similarities make it easier to learn new apps within the suite or to switch between apps — for example, from Illustrator7 to Photoshop8.

But also compare how much time you expect to invest in learning a new tool with how much time you expect to actually be using that tool in your design process. The ideal situation would be to dedicate a little time to learning a tool that you will use frequently or even every day.

2. Support For Teamwork


I need my prototyping tool to consolidate feedback from clients about my designs, so that I can use the information to improve my work and then share a new, better version.


With InVision, gathering feedback and comments about a design is easy. (View large version10)

To achieve this, I’ll usually upload my design screens to InVision11, where the client can add comments about a particular feature in the exact spot they’re referring to. Then, I’ll have a chance to reply to the comment or close it if the issue has been resolved.

But if you work in a company, then not only should the client feel like a part of the team, so should your fellow designers. It’s important to have a tool that enables your workmates to share and upload their own versions of your design screens, so that everybody stays on the same page while contributing to the project. Tools such as InVision present the general activity of a project in a kind of timeline view, so that you can stay up to date and keep track of all changes.

3. Level Of Fidelity


From day one, when we have only a basic idea of what a product will be, our prototype starts to evolve, fueled by learning. That’s why we design in iterations, and in each phase we test different things according to our priorities.

For example, at the very beginning, when we don’t know whether an idea is valid, it is not advisable to focus on design details, such as color palette or grid system. Instead, we should be prototyping. And the prototyping tool we choose will depend on the fidelity we’re aiming for (i.e. how close the prototype should be to the intended final product).

Fidelity can build incrementally: low when we simply want to test the idea (the tool should allow navigation from one screen to another), medium when we’re focusing on layout, information and interaction design (the tool should be capable of more precise design), and high when the most important things are visual design, animation and micro-interactions (the tool should be capable of adding motion and transitions).

Each tool should help us to achieve the prototype we need — and perhaps not much more — and then enable us to move quickly to the next stage, where that tool might not be needed anymore.

Low Fidelity


When I merely want to test the idea for a digital product, an app that gives me a lot of control over the design is not convenient, because I will easily get distracted by details that are not relevant during that stage. More important is being able to navigate from one screen to another, without wondering about whether elements of the interface have the proper size or layout. (Yes, I know it’s difficult to resist the temptation to align elements, but believe me, it’s not crucial at this point.)


While many prefer to do their conceptual work digitally, there’s a freedom in putting an old-school pencil to paper13. (View large version14)

When I have just come up with an idea and go straight to the computer, I often find myself asking questions such as what size should the design document be or what colors should I choose — when I don’t even know whether the concept is on the right track. That’s why, in moments like these, I prefer to use the oldest and most basic option: pen and paper.

The idea is not new15:

“But why should we start with sketching?” you might ask. The reason is because getting caught up in pixel-precision this early on in a project by going straight to digital is just too easy, and it’ll cost a bit of time in the long run.[…] Dropping back to pencil and paper is both a fast and easy way to get your ideas out so that you can start iterating.

Using pen and paper, I won’t be worried about any of the design-specific details that I mentioned before. Instead, I can focus on the idea.


Using pen and paper during the early stages of the prototyping process. (View large version17)

I can quickly draw a design to capture what I have in mind, and then, using a tool such as Marvel18 or POP19, take pictures of it to build a working prototype that includes gestures and transitions, in order to test some basic flows. The good thing about prototyping this way is that if the concept fails (but you have to continue working on that million-dollar idea), you won’t feel attached to your work, and restarting with a different approach will be very easy.


Marvel allows you to take pictures of a design on paper and add interaction, but you can also design a basic interface directly on your phone. (View large version21)

Tip: If you are designing temporary views, such as alerts, tooltips or short feedback messages, you can draw them separate from the main interface. Then, cut one of the messages with scissors, and put the little piece of paper on top of the main design. This way, you can take one picture with the message, and another without. You’ll have two screens for the price of one and without having to draw the two versions by hand!


Create two screens for the price of one. (View large version23)

Medium Fidelity


Pen and paper are fine, but there comes a point in the design process when they’re not enough. When I’m sure about the app’s core concept and I have already made some basic prototypes on paper, I need a different tool to move forward. Normally, when we talk about medium-fidelity prototypes, we are referring to wireframes whose primary purpose is to convey interaction and information.

When I design wireframes, I try to use real information as much as possible. However, I don’t always have all of the data at my disposal at this stage. So, I usually have to approximate the final text, graphics and colors, because these are tied to branding. (Don’t blame me: Those guys are always late!). At least I can focus on achieving a convincing layout and interaction.


For many designers, Sketch has been a game-changer, especially for its focus on interface design and getting rid of all the things you don’t need. (View large version25)

During this stage, I normally use Sketch26. This tool is relatively easy to use and helps me take my paper design concept to the next level. Using Sketch, I can easily reuse UI elements, so that I don’t have to start from scratch, and I can benefit from many standard UI components. There are also plenty of additional interface components that you can use to build layouts, like the ones found on Sketch App Sources27. As my process keeps going, I can also control the degree of customization of those elements and decide where to pause for user testing.

Using these design components is also a good idea if you want to align with user expectations and not over-design. Normally, designing everything from scratch will take a lot more time (and developers will take more time, too, when they implement your design). That’s why it’s better to reuse common UI elements, such as lists, dialogs, forms and tabs.

But (yes, there’s always a “but”) Sketch is a Mac-only tool; so, if you are using Windows, you’ll have to rely on something else. Balsamiq28 and OmniGraffle29 are well known and have been available for a while. A couple of newer UI design tools are web-based (and so don’t need any setup or installation): Gravit30 and Figma31.

High Fidelity


When your prototype grows and gets closer to being a viable product, you will need to design components that were less relevant before32, like infrequent dialogs, some feedback messages (error messages and messages that show the result of an action), empty states, disabled buttons and so on.

Basically, during the earlier stages of low- and medium-fidelity prototyping, we were focused on structure, information and flow and on a small set of core use cases. As the design matures, we need to consider more:

  • additional use cases (often, less frequent ones);
  • edge cases and contingencies (for example, what happens in a check-out flow if a credit card is rejected?);
  • error prevention and handling.

All of these use cases are important to consider for a good UX, but they shouldn’t be the first things we design. We start with the core use cases and focus on the most relevant and salient aspects of the design. Then, we include the edge conditions in order to complete and validate the design.

At this stage, then, it becomes increasingly important to choose a tool that gives you granular control over the components of the design, so that you can determine the appearance and behavior of each element of the UI.

A while ago, I used Axure33 for these types of tasks. In fact, in one of my first job interviews in Barcelona, they asked me if I knew how to use it, because it was being used widely across that company. Of course, I said yes in order to get the job, and in the days before starting work, I learned it inside out. It was then that I discovered its full potential, using features such as conditionals, which enable you to show and hide dialogs, banners and other temporary blocks of information depending on the user’s interaction. This comes in handy because it minimizes the number of screens that have to be designed completely from scratch.

If you have been reading carefully, you have probably realized by now that I have been focused mostly on static designs. What about animation? This is becoming more and more important, not only because animation can be found everywhere in modern interfaces, but also because it is very hard to communicate with the rest of the team how you want something to move or fly without showing a sample.

When it comes to prototyping animations, micro-interactions and transitions, I divide prototyping tools into two groups:

  • tools that have familiar UIs and that don’t require you to learn any code;
  • tools with which you get your hands dirty with at least a few lines of code.

In the first group, a few new tools have appeared, such as Pixate34, Principle35 and Flinto36. In many situations, you would use these tools to prototype not the whole app, but only a subset of screens, to see how different elements will be displayed or how to transition from one state (or screen) to another.


If you want more precise control over your designs, then Framer is a good option. (View large version38)

If you want to go a step further, you might opt for the second group. This set of apps might look less familiar to designers, but you will have more precise control over animations. Also, in many cases, you can use native components to achieve a more realistic outcome, thereby making the move from prototype to final code easier. If you’d like to go even further, I would suggest trying Framer39 (which is based on JavaScript) or Facebook Origami40, whose accompanying Origami Studio41 will be released later this year, allowing you to export code snippets that can be sent to developers.

For iOS, you could use Interface Builder42, which enables you to design interfaces using native iOS components in a visual environment. (This solution is completely code-free — yay!). For Android, there’s Android Studio43.

4. Integration With Your Workflow


Another point to consider when choosing a prototyping tool is how well it fits your design process44 and other tools you regularly use. Prototyping is part of a much broader process that includes researching users, testing, gathering metrics, communicating the idea to stakeholders, and sharing designs with developers for final implementation.

You probably won’t find one tool that does everything (more on this later), but prototyping tools should at least help you move through the process smoothly, especially when you are expected to iterate under tight deadlines.

For example, if you are designing in Photoshop, Illustrator or Sketch, it would be great if your prototyping software could directly use the files produced by these apps without requiring you to export assets separately and then build everything from scratch to create the interactions.

Personally, I’m pretty satisfied with Sketch (again). I can export images and even use the original, editable Sketch file and upload it to a different tool to complete my prototype. When I want to add interactions, I upload files to Marvel45, and when I need to animate, to Framer or Flinto.


Lookback integrates with other tools, making for a smooth design workflow, without major interruptions when moving from one stage to the next. (View large version47)

One of the last (and most important) steps when building a prototype is testing it and gathering information (gestures, taps and responses) from real users so that you can improve the product. Tools such as InVision and Marvel connect with Lookback48, enabling you to test the app and record video at the same time, so that you can analyze the data with the rest of the team.

5. Ease Of Use And Comfort


Finally, how you feel with a prototyping tool is important! If you are going to be using it every day — and sometimes even on Saturdays and Sundays if you are a freelancer, like me — it should feel good, right?

This is personal, so my advice here is limited. Look for a tool that satisfies you, not one that makes your work harder, puts hurdles in your way, slows you down, adds extra steps or forces you to find workarounds.



Given that so many design and prototyping tools are out there today (and I didn’t even mention them all), you might feel intimidated. Perhaps that’s why we are starting to expect the appearance of “one tool to rule them all” — an app that enables us to create all kinds of designs and even make prototypes.

In a way, we are starting to see this with Adobe Experience Design CC49 (a new design tool that allows you to link between design screens) and Sketch (when used with plugins such as Craft Prototype50 for interactions and AnimateMate51 for animations).


When crafting something, we are used to switching between tools depending on what we need. Why should digital design be any different? (View large version53)

What does the future of design and prototyping tools hold? I’m not sure, but I think that if we go in this direction, we might end up with an overly complex tool, like a Swiss Army knife: plenty of little tools, none of them truly useful. Also, other professionals, such as surgeons and mechanics, use different tools depending on the occasion. Why should we designers be any different? One of the most important things is to identify which tool is most suitable for the given job.

In any case, don’t obsess about the tools so much. They are supposed to help us shape our ideas; they should not determine or constrain how our products look or behave.

I also understand that the guidelines above will be pretty useless if your corporation forces you to use a particular tool (as did mine once). If that is the case, you could try to persuade your team to at least try something different, if your reasoning is clear and logical. Perhaps some of the points above would support your argument.

Lastly, be wary when someone tells you that a certain tool is “the best” or “the easiest to learn.” This is highly subjective, and you should discover it on your own. In the end, you, like me and everybody else, are different.

How To Design Error States For Mobile Apps

Smashing Magazine — 9/21/2016 8:51:50 AM


To err is human. Errors occur when people engage with user interfaces. Sometimes they happen because users make mistakes; sometimes they happen because an app fails. Whatever the cause, these errors, and how they are handled, have a huge impact on the user experience. Bad error handling paired with useless error messages can fill users with frustration and can lead them to abandon your app.

In this article, we’ll examine how the design of apps can be optimized to prevent user errors and how to create effective error messages in cases when errors occur independently of user input. We’ll also see how well-crafted error handling can turn a moment of failure into a moment of delight. Adobe introduced a new design and wireframing app called Experience Design (Adobe XD) that lets you design interactive wireframes and error states. You can download and test Adobe XD1 for free.

What Is An Error State?


An error state is a screen that is shown when things go wrong: the user is getting something other than their desired state. Because errors can occur in surprising combinations, these states can include anything from incompatible user operations (such as invalid data input) to the inability of an app to connect to the server, or even to process a user request.


Error state screens. Image credit: Material Design7. (Large view8)

Every error, regardless of cause, becomes a point of friction for your users and blocks them from moving forward in their experience. Luckily, well-designed error handling can help reduce that friction.

Prevention Is Better Than Cure


If you design apps, you should be familiar with the most common in-app interactions that could lead to the error state (error-prone conditions). For example, it’s usually hard to correctly fill out a form on the first attempt, or it’s impossible to properly sync data if the device has a poor network connection. You should take these cases into account to minimize the possibility of errors. In other words, it’s better to prevent users from making errors in the first place by offering suggestions, utilizing constraints, and being flexible.

For instance, if you’re allowing people to search for a hotel reservation, why make past dates available and display an error if users select dates that are in the past?


Large view10

As shown in the Booking.com example, you can simply use a date selector that allows users to only choose today’s date or dates in the future. This will force users to pick a date range that fits.
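The same guard can be expressed in code. Below is a minimal sketch of a date validator that simply refuses past dates, so the “date is in the past” error can never appear (a hypothetical helper, not code from the Booking.com app):

```javascript
// Returns true only for today or future dates, so a date picker wired to
// this guard never lets the user select a past date in the first place.
function isSelectableDate(date, today = new Date()) {
  // Compare against midnight of "today" so any time today still counts.
  const startOfToday = new Date(today.getFullYear(), today.getMonth(), today.getDate());
  return date.getTime() >= startOfToday.getTime();
}
```

A date picker wired to this guard would disable (gray out) every date for which it returns false.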


The date picker in the Booking.com app displays a full monthly calendar, but grays out past dates so users can choose proper dates. (Large preview12)

Error Screen For Form Validation


A form is a conversation. Like any conversation, it should be a consistent exchange between two parties: the user and your app. Validation plays an essential part in this conversation; it is meant to guide users through the difficult moments of errors and uncertainty, and, when done right, it can turn an ambiguous interaction into a clear one. Generally speaking, good form validation consists of four important elements:

  • The right time to inform about errors (or success)
  • The right place for the validation output
  • The right color for the message
  • Clear language for the message

Right Time (Inline Validation)


Form validation errors are inevitable and a natural part of data input (since users’ input is error-prone). Yes, error-prone conditions should be minimized, but validation errors will never be eliminated entirely. So, the most important question is, “How do you make it easy for the user to recover from form errors?”

Users dislike going through the process of filling out a form, only to find out at submission that they’ve made an error. It’s especially frustrating when you complete a long form and once you’ve pressed submit, you are rewarded with multiple error messages. And it’s even more annoying when it isn’t clear what errors you’ve committed, and where.


Image credit: Stackexchange14. (Large view15)

Validation should inform users about the correctness of a provided answer right after they have provided it. The primary principle of good form validation is this: “Talk to the users! Tell them what is wrong!” Real-time inline validation immediately informs users about the correctness of the data they provide, which lets them correct errors faster, without having to wait until they press the submit button.

However, avoid validating on each keystroke because, in most cases, you simply cannot verify an answer until someone has finished typing it. Forms that validate during data entry punish users as soon as they start entering data.


Google Forms states the email isn’t valid when you’re not done typing it. (Image credit: Medium242017) (Large preview18)

On the other hand, forms that validate only after data entry don’t inform users soon enough that they have fixed an error.


Validation in Apple Store is performed after the data entry. (Image credit: Medium242017) (Large preview21)

Mihael Konjević, in his article22 “Inline validation in forms — designing the experience,” examined different validation strategies and proposed a hybrid validation strategy: reward early, punish late.


Hybrid — reward early, punish late — approach. (Image credit: Medium242017) (Large preview25)
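The hybrid strategy amounts to a small piece of state logic: validate on blur while the field has no error yet, but once the field has been marked invalid, re-validate on every keystroke so the error clears the moment the user fixes it. A sketch (hypothetical helper, not code from Konjević’s article):

```javascript
// "Reward early, punish late": decide whether to run validation for a given
// event, based on whether the field is already marked invalid.
function shouldValidate(eventType, fieldHasError) {
  if (eventType === "blur") {
    return true; // punish late: always check when the user leaves the field
  }
  if (eventType === "input") {
    return fieldHasError; // reward early: while fixing an error, re-check each keystroke
  }
  return false;
}
```

Wiring this into the form’s event handlers gives users immediate feedback while they repair a mistake, without nagging them while they are still typing a valid answer.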

Right Place


Proximity is another important tool. When deciding where to place your validation messages, follow this rule of thumb: always put the message in the context of the action. If you want to inform the user about an error in a particular field, show the message next to that field. Instant validation is best positioned to the right-hand side of the input or, failing that, immediately below it.


Communicate form errors in real time. (Image credit: ThinkwithGoogle27) (Large preview28)

Right Color (Intuitive Design)


Color is one of the best tools to use when designing validation. Because it works on an instinctual level, adding red to error messages, yellow to warning messages, and green to success messages is incredibly powerful. But, make sure that colors in your digital interface are accessible for your users. It’s a crucial aspect of a well-executed visual design.

Error text should be legible, with a proper color and noticeable contrast against its background. Image credit: Material Design29

Clear Message


A typical error might state, “The email is invalid,” without telling the user why it’s invalid. (Is it a typo? Is the address already taken?) Straightforward instructions or guidelines make all the difference. In the example, the form tells the user that this email address is already in use, so there is no guessing, confusion or frustration; it then offers some options (either to log in or to recover the password).


App Errors: Failure To Load Data (Large preview31)

Okay, it’s time to display an error page to indicate that something has gone wrong. As an example, let’s take a situation when connectivity is down and a user is on a screen that is only available online. You should use this opportunity to make people aware that you know what is happening and follow the model of immediate helpfulness — your error message should be a helping hand for your users. That’s why you should never show:

  • A raw error message. Messages that contain an app’s internal error codes or abbreviations, such as “an error of type 2 has occurred,” are cryptic and scary.


This error message was written by a developer for a developer. (Large preview33)

  • A dead-end error message. Such error states don’t provide any helpful information for users.


Spotify’s35 error screen just states ‘An error occurred’ and doesn’t provide any constructive advice on how to fix the problem. (Large preview36)

  • A vague error message. The error screen in the example below gives users the same amount of information as the previous one. Users won’t have any clue what it means or what to do next.


Buffer38 has a well-designed error state, but the copy won’t mean much to users. (Image credit: emptystat.es39) (Large preview40)

Don’t scare users with errors. Also, don’t assume people know about the context of a message or assume they are tech-savvy enough to figure things out. Instead, tell people what’s wrong in plain language. To achieve this, you should avoid using technical jargon and express everything in the user’s vocabulary.

Make your error message both readable and helpful — error states must include concise, polite, and instructive copy that clearly states:

  • What went wrong and possibly why.
  • What’s the next step the user should take to fix the error.


Remote app42 explains why a user cannot see anything, and how to solve it. (Large preview43)

Incorporate Imagery And Humor Into Error States


Error states are an excellent opportunity to utilize icons and illustrations, because people respond better to visual information than plain text. But, you can go further and incorporate unique imagery that is branded to match your app, yet still be helpful for your users. It’s a good way to both humanize your message and communicate the app’s personality.


Azendoo45 uses a memorable illustration and humorous copy which encourage users to solve the problem. (Large preview46)

Humor is the spice of life. A bit of humor never hurts and can help diffuse the frustration of an error. You can find plenty of great examples of humorous error messages at Littlebigdetails47. Here are some of my favorites:

  • Basecamp: when there is a form field error, the character on the left makes a surprising facial expression.



Image credits: Des Traynor50

  • A cheeky error message is displayed when you type too many full stops when creating a new account in Gmail51.

Image credit: Simon Souris52

However, be careful with humor, because it may not always be appropriate in an error message; it really depends on the severity of the error. For instance, humor works well for a simple problem such as a “404 Page Not Found” error. But when a user is losing a significant amount of time due to a failure, saying “Uh oh!” is entirely inappropriate.


(Image credit: Thomas Fuchs54) (Large preview55)

Comprehensive Checklist Of A Perfect Error Page


Perfect error pages are a helping hand for your users and should have the following six qualities:

  • Error messages appear dynamically, as soon as the problem occurs, immediately informing users about it.
  • Keep all user input safe. Your app shouldn’t undo, destroy or delete anything entered or uploaded by the user in the event of an error.
  • Speak the same language as the user. Clearly state what went wrong (and possibly why) and what the next step is for the user to fix the error.
  • Don’t shock or confuse users: the message shouldn’t be dramatic.
  • Don’t hijack control of the system: if the problem isn’t critical, the user should be able to interact with as much of the rest of the app as possible.
  • Use a little humor to humanize the problem.

404 Not Found Error


The main goal of a 404 page is to direct users to the page they were looking for as quickly as possible. Your 404 page should offer a few key links and directions the user can choose between. A safe bet is to have a “Home” link as the primary action on the 404 page — a quick and friendly way to start over. You can also place a “Report this page” link so that users can quickly report a broken page, but make sure that the primary action (the link to the home page) carries a stronger visual weight.


(Image credit: Dribbble57) (Large preview58)

Cannot Login


Login screens are usually relatively minimal, with a field for a username and another for a password. But minimal doesn’t always mean simple. There are many reasons why a user might be stuck on the login screen. The main rule for a login page is very simple: don’t make the user guess.

Let’s propose solutions for the most common problems using examples from MailChimp59, which does a great job with error messaging.

  • User forgets his username. If you detect that the problem is an unknown username, you should offer a link to let the user fix it. Tell users where they can get it (e.g. “check email from us”) or provide a link to the username recovery.


  • Users make multiple attempts to log in with an incorrect password. To prevent brute-force attacks, user accounts are often temporarily locked after too many failed login attempts. This is a necessary security practice, but be sure to warn users before their account is locked.


Credit Card Rejection


A credit card rejection can be caused by (1) errors in the data formatting (a typo or missing data) or (2) a declined card (for example, an expired card or suspected fraud). Gabriel Tomescu62, in his article “The anatomy of a credit card form,” suggests the following strategy for both error states:

For the first problem, you should follow standard real-time inline validation practice and visually indicate an error:


(Image credit: uxdesign64) (Large preview65)

However, when a card is declined by the payment network, it usually looks like fraud, and you need to clear the data entered by the user. Even then, you still need to notify the user of what has happened, and the error message should be as clear as possible.


(Image credit: uxdesign67) (Large preview68)

Connectivity Is Down


Internet access is not ubiquitous, and offline support should be a crucial consideration for nearly every modern app. When connectivity is down, you should try to provide a rich offline experience. Users should be able to interact with as much of the rest of your app as possible. This means the app should have cached content to provide a good offline experience.

Daniel Sauble, in his article69, provides great insight into how social, mapping and productivity apps function offline. He suggests that it’s better to cache a little of everything than a lot of some things and nothing of others: when users open an app, they expect to see content, regardless of whether they’re connected to the internet. If the content isn’t there, they’ll get frustrated and switch to a different app that does a better job of caching the information they want to see.

Make sure your app functions offline as well as it possibly can. Here is some practical advice from Robert Woo70 that can be incorporated into almost every app on the market.

Save the last state. Below you can see two apps made for content delivery. The CNN app provides a better user experience by caching the last view and providing users with the headlines for the articles that were last loaded.


(Image credit: rocketfarmstudios72) (Large preview73)

Provide offline functionality and features. There are features on every app that can (and should) work without an internet connection. Let’s take Evernote as an example. The app is entirely functional offline: you can edit existing notes or write a new one, and the app will sync everything up with the cloud once you’re connected again.

(Image credit: emptystates74) (Large preview75)



The best error message is the one that never shows up. It is always better to prevent errors from happening in the first place by guiding users in the right direction ahead of time. But, when errors do arise, well-designed error handling not only helps teach users how to use the app as you intended, but also prevents users from feeling ignorant. Of course, the error state is one of the least-desirable states to design for. However, if you put a lot of effort into this state, your product will be infinitely more enjoyable to use.

Recommended Materials


This article is part of the UX design series sponsored by Adobe. The newly introduced Experience Design app80 is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app.

You can check out more inspiring projects created with Adobe XD on Behance81, and also visit the Adobe XD blog82 to stay updated and informed. Adobe XD is being updated with new features frequently, and since it’s in public Beta, you can download and test it for free83.

(ms, vf, il, al)



Understanding REST And RPC For HTTP APIs

Smashing Magazine — 9/20/2016 9:37:51 AM



For the last few years, whenever somebody wants to start building an HTTP API, they pretty much exclusively use REST as the go-to architectural style, over alternative approaches such as XML-RPC, SOAP and JSON-RPC. REST is made out by many to be ultimately superior to the other “RPC-based” approaches, which is a bit misleading because they are just different.

This article discusses these two approaches in the context of building HTTP APIs, because that is how they are most commonly used. REST and RPC can both be used via other transportation protocols, such as AMQP, but that is another topic entirely.

REST stands for “representational state transfer,” described by Roy Fielding in his dissertation1. Sadly, that dissertation is not widely read, so many people have their own idea of what REST is, which leads to a lot of confusion and disagreement. REST is all about a client-server relationship, in which server-side data are made available through representations of data in simple formats, often JSON and XML. These representations portray resources, or collections of resources, which are then potentially modifiable, with actions and relationships made discoverable via a method known as hypermedia. Hypermedia is fundamental to REST, and is essentially just the concept of providing links to other resources.

Beyond hypermedia there are a few other constraints, such as:

  • REST must be stateless: it does not persist sessions between requests.
  • Responses should declare cacheability: this helps your API scale if clients respect the rules.
  • REST focuses on uniformity: if you’re using HTTP, you should utilize HTTP features whenever possible, instead of inventing conventions.

These constraints (plus a few more2) allow the REST architecture to help APIs last for decades, not just years.

Before REST became popular (after companies such as Twitter and Facebook labeled their APIs as REST), most APIs were built using XML-RPC or SOAP. XML-RPC was problematic because ensuring the data types of XML payloads is tough: in XML, a lot of things are just strings, so you need to layer metadata on top to describe which fields correspond to which data types. This became part of the basis for SOAP (Simple Object Access Protocol). XML-RPC and SOAP, along with custom homegrown solutions, dominated the API landscape for a long time, and they were all RPC-based HTTP APIs.

The “RPC” part stands for “remote procedure call,” and it’s essentially the same as calling a function in JavaScript, PHP, Python and so on, taking a method name and arguments. Seeing as XML is not everyone’s cup of tea, an RPC API could use the JSON-RPC protocol3, or you could roll a custom JSON-based API, as Slack4 has done with its Web API5.

Take this example RPC call:
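An RPC-style HTTP call puts the method name in the URL and the arguments in the body. A sketch, with a hypothetical sayHello method:

```
POST /sayHello HTTP/1.1
Host: api.example.com
Content-Type: application/json

{"name": "Racey McRacerson"}
```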

In JavaScript, we would do the same by defining a function, and later we’d call it elsewhere:
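A sketch in plain JavaScript, mirroring the hypothetical RPC method name and argument:

```javascript
// Define the function once...
function sayHello(name) {
  return "Hello, " + name + "!";
}

// ...and call it elsewhere, just as an RPC client calls a remote method.
sayHello("Racey McRacerson");
```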

The idea is the same. An API is built by defining public methods; then, the methods are called with arguments. RPC is just a bunch of functions, but in the context of an HTTP API, that entails putting the method in the URL and the arguments in the query string or body. SOAP can be incredibly verbose for accessing similar-but-different data, like reporting. If you search “SOAP example” on Google, you’ll find an example from Google that demonstrates a method named getAdUnitsByStatement, which looks like this:
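An abbreviated sketch of what such a SOAP request looks like; the envelope structure is representative, but the namespaces and values here are illustrative, not the verbatim Google sample:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:svc="https://example.com/apis/ads/InventoryService">
  <soapenv:Header>
    <svc:RequestHeader>
      <svc:networkCode>123456</svc:networkCode>
      <svc:applicationName>ExampleApp</svc:applicationName>
    </svc:RequestHeader>
  </soapenv:Header>
  <soapenv:Body>
    <svc:getAdUnitsByStatement>
      <svc:filterStatement>
        <svc:query>WHERE parentId IS NULL LIMIT 500</svc:query>
      </svc:filterStatement>
    </svc:getAdUnitsByStatement>
  </soapenv:Body>
</soapenv:Envelope>
```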

This is a huge payload, all there simply to wrap a single filter-statement argument.

In JavaScript, that would look like this:
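A sketch with a hypothetical client object; the verbose SOAP call collapses into a single function call:

```javascript
// Hypothetical client: the whole SOAP ceremony reduces to one method call
// whose only real argument is the filter statement.
const api = {
  getAdUnitsByStatement: function (statement) {
    // A real client would issue the HTTP request here; this stub just
    // records what would be sent, to show the shape of the call.
    return { method: "getAdUnitsByStatement", statement: statement };
  },
};

api.getAdUnitsByStatement("WHERE parentId IS NULL LIMIT 500");
```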

In a simpler JSON API, it might look more like this:
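A sketch of the same call over a simpler JSON API (hypothetical endpoint and field names):

```
POST /getAdUnits HTTP/1.1
Host: api.example.com
Content-Type: application/json

{"statement": "WHERE parentId IS NULL LIMIT 500"}
```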

Even though this payload is much easier, we still need to have different methods for getAdUnitsByStatement and getAdUnitsBySomethingElse. REST very quickly starts to look “better” when you look at examples like this, because it allows generic endpoints to be combined with query string items (for example, GET /ads?statement={foo} or GET /ads?something={bar}). You can combine query string items to get GET /ads?statement={foo}&limit=500, soon getting rid of that strange SQL-style syntax being sent as an argument.

So far, REST is looking superior, but only because these examples are using RPC for something that REST is more adept at handling. This article will not attempt to outline which is “better,” but rather will help you make an informed decision about when one approach might be more appropriate.

What Are They For?


RPC-based APIs are great for actions (that is, procedures or commands).

REST-based APIs are great for modeling your domain (that is, resources or entities), making CRUD (create, read, update, delete) available for all of your data.

REST is not only CRUD, but things are done through mainly CRUD-based operations. REST will use HTTP methods such as GET, POST, PUT, DELETE, OPTIONS and, hopefully, PATCH to provide semantic meaning for the intention of the action being taken.

RPC, however, would not do that. Most use only GET and POST, with GET being used to fetch information and POST being used for everything else. It is common to see RPC APIs using something like POST /deleteFoo, with a body of { "id": 1 }, instead of the REST approach, which would be DELETE /foos/1.

This is not an important difference; it’s simply an implementation detail. The biggest difference in my opinion is in how actions are handled. In RPC, you just have POST /doWhateverThingNow, and that’s rather clear. But with REST, using these CRUD-like operations can make you feel like REST is no good at handling anything other than CRUD.

Well, that is not entirely the case. Triggering actions can be done with either approach; but, in REST, that trigger can be thought of more like an aftereffect. For example, if you want to “Send a message” to a user, RPC would be this:
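In RPC style, the action itself is the endpoint. A sketch with hypothetical URLs and field names:

```
POST /sendMessage HTTP/1.1
Host: api.example.com
Content-Type: application/json

{"userId": 501, "message": "Hello!"}
```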

But in REST, the same action would be this:
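In REST style, the same action is modeled as creating a resource in the user’s messages collection (again with hypothetical URLs and field names):

```
POST /users/501/messages HTTP/1.1
Host: api.example.com
Content-Type: application/json

{"message": "Hello!"}
```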

There’s quite a conceptual difference here, even if they look rather similar:

  • RPC: We are sending a message, and that might end up storing something in the database to keep a history, which might be another RPC call with possibly the same field names — who knows?
  • REST: We are creating a message resource in the user’s messages collection. We can see a history of these easily by doing a GET on the same URL, and the message will be sent in the background.

This “actions happen as an aftereffect” approach can be used in REST to take care of a lot of things. Imagine a carpooling app that has “trips.” Those trips need “start,” “finish” and “cancel” actions, or else the user would never know when they started or finished.

In a REST API, you already have GET /trips and POST /trips, so a lot of people would try to use endpoints that look a bit like sub-resources for these actions:

  • POST /trips/123/start
  • POST /trips/123/finish
  • POST /trips/123/cancel

This is basically jamming RPC-style endpoints into a REST API, which is certainly a popular solution but is technically not REST. This crossover is a sign of how hard it can be to put actions into REST. While it might not be obvious at first, it is possible. One approach is to use a state machine, on something like a status field:

Just like any other field, you can PATCH the new value of status and have some logic in the background fire off any important actions:
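Updating the status then looks like any other field update (hypothetical URL):

```
PATCH /trips/123 HTTP/1.1
Host: api.example.com
Content-Type: application/json

{"status": "finished"}
```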

Statesman6 is an incredibly simple state machine for Ruby, written by the GoCardless7 team. There are many other state machines in many other languages, but this is an easy one to demonstrate.

Basically, here in your controllers, lib code or DDD8 logic somewhere, you can check to see if "status" was passed in the PATCH request, and, if so, you can try to transition to it:
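The article’s example uses Statesman in Ruby; the same idea can be sketched in JavaScript, with an afterTransition callback standing in for Statesman’s after_transition block (hypothetical code, for illustration):

```javascript
// Allowed transitions for a trip's "status" field.
var TRANSITIONS = {
  locating: ["started", "cancelled"],
  started: ["finished", "cancelled"],
};

// Try to move the trip to newStatus; on success, run the side effects,
// otherwise throw, which would surface as a validation error to the client.
function transitionTo(trip, newStatus, afterTransition) {
  var allowed = TRANSITIONS[trip.status] || [];
  if (allowed.indexOf(newStatus) === -1) {
    throw new Error("Cannot transition from " + trip.status + " to " + newStatus);
  }
  trip.status = newStatus;
  afterTransition(trip); // e.g. send a push notification, start GPS tracking
  return trip;
}
```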

When this code is executed, it will either make the transition successfully and run whatever logic was defined in the after_transition block, or throw an error.

The success actions could be anything: sending an email, firing off a push notification, contacting another service to start watching the driver’s GPS location to report where the car is — whatever you like.

There was no need for a POST /startTrip RPC method or a REST-ish POST /trips/123/start endpoint, because it could simply be handled consistently within the conventions of the REST API.

When Actions Can’t Be Afterthoughts


We’ve seen here two approaches to fitting actions inside a REST API without breaking its RESTfulness, but depending on the type of application the API is being built for, these approaches might start to feel less and less logical and more like jumping through hoops. One might start to wonder, Why am I trying to jam all of these actions into a REST API? An RPC API might be a great alternative, or it could be a new service to complement an existing REST API. Slack uses an RPC-based Web API, because what it’s working on just would not fit into REST nicely. Imagine trying to offer “kick,” “ban” or “leave” options for users to leave or be removed from a single channel or from the whole Slack team, using only REST:
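The obvious first attempt would be something like this (hypothetical URL):

```
DELETE /users/jerkface HTTP/1.1
Host: slack.example.com
```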

DELETE seems like the most appropriate HTTP method to use at first, but this request is so vague. It could mean closing the user’s account entirely, which might be very different to banning the user. While it could be either of those options, it definitely would not be kick or leave. Another approach might be to try PATCHing:
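A sketch of that attempt (hypothetical field value):

```
PATCH /users/jerkface HTTP/1.1
Host: slack.example.com
Content-Type: application/json

{"status": "kicked"}
```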

This would be a weird thing to do, because the user’s status wouldn’t be globally kicked for everything, so it would need further information passed to it to specify a channel:
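Which leads to something like this (still hypothetical):

```
PATCH /users/jerkface HTTP/1.1
Host: slack.example.com
Content-Type: application/json

{"status": "kicked", "channel": "#commute"}
```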

Some folks try this, but this is still odd because there is a new arbitrary field being passed, and this field doesn’t actually exist for the user otherwise. Giving up on that approach, we could try working with relationships:
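For example, treating the channel membership itself as the resource (hypothetical URL):

```
DELETE /channels/commute/users/jerkface HTTP/1.1
Host: slack.example.com
```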

This is a bit better because we’re no longer messing with the global /users/jerkface resource, but it is still missing a “kick,” “ban” or “leave” option, and putting that into the body or query string is once again just adding arbitrary fields in an RPC way.

The only other approach that comes to mind is to create a kicks collection, a bans collection and a leaves collection, with endpoints such as POST /kicks, POST /bans and POST /leaves to match. These collections would allow metadata specific to the resource, such as the channel a user is being kicked from, but it feels a lot like forcing an application into a paradigm that doesn’t fit.

Slack’s Web API looks like this:
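A sketch based on Slack’s public channels.kick method; the token and IDs here are placeholders:

```
POST /api/channels.kick HTTP/1.1
Host: slack.com
Content-Type: application/x-www-form-urlencoded

token=xoxp-EXAMPLE-TOKEN&channel=C1234567890&user=U0987654321
```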

Nice and easy! We’re just sending arguments for the task at hand, just like you would in any programming language that has functions.

One simple rule of thumb is this:

  • If an API is mostly actions, maybe it should be RPC.
  • If an API is mostly CRUD and is manipulating related data, maybe it should be REST.

What if neither is a clear winner? Which approach do you pick?

Use Both REST And RPC


The idea that you need to pick one approach and have only one API is a bit of a falsehood. An application could very easily have multiple APIs or additional services that are not considered the “main” API. With any API or service that exposes HTTP endpoints, you have the choice between following the rules of REST or RPC, and maybe you would have one REST API and a few RPC services. For example, at a conference, somebody asked this question:

We have a REST API to manage a web hosting company. We can create new server instances and assign them to users, which works nicely, but how do we restart servers and run commands on batches of servers via the API in a RESTful way?

There’s no real way to do this that isn’t horrible, other than creating a simple RPC-style service that has a POST /restartServer method and a POST /execServer method, which could be executed on servers built and maintained via the REST server.



Knowing the differences between REST and RPC can be incredibly useful when you are planning a new API, and it can really help when you are working on features for existing APIs. It’s best not to mix styles in a single API, because this could confuse both consumers of your API and any tools that expect one set of conventions (REST, for example) and fall over when they instead see a different set of conventions (RPC). Use REST when it makes sense, or use RPC if it is more appropriate. Or use both and have the best of both worlds!

(rb, yk, al, il)




The Thumb Zone: Designing For Mobile Users

Smashing Magazine — 9/19/2016 9:28:26 AM



If there is one thing that will stand the test of time, it’s thumb placement on mobile devices. This makes consideration of the “thumb zone”1, a term coined in Steven Hoober’s research, an important factor in the design and development of mobile interfaces.

Have you ever interacted with a mobile website or app that simply didn’t play nice with your thumbs? Perhaps you’ve had to stretch to get to an important menu, or swiping turned into a battle with multiple swiping elements. Mishaps such as these reveal poor consideration of the thumb zone.

In this article, I will share the knowledge I’ve acquired about the thumb zone and how to apply its rules to navigation, cards and swipe gestures.

Learning From The Best


As mentioned, Steven Hoober researched and wrote about the thumb zone in Designing Mobile Interfaces. This is where I first encountered the notion that it might be important to consider thumbs while developing.

Along with Hoober, Josh Clark has recorded in-depth information on how people hold their devices in his book Designing for Touch. You can read an excerpt on A List Apart.

Using Hoober and Clark’s study of how thumbs interact on devices, I performed user testing on wireframes that varied the location of design elements. My tests ran with navigation elements on the top and bottom of the screen, cards with buttons in different locations, and gesture areas outside and inside the thumb zone.

My test results validated Hoober and Clark’s research, while providing solid evidence of what works and what doesn’t in design. Below, I’ll share my findings on the design elements I tested. Let’s get started!

Thumbs Vs. Touchscreens


Opposable thumbs are nice to have, aren’t they? In addition to making us way cooler than jellyfish, thumbs are also key to how we interact with our mobile touchscreen devices. Hoober’s research shows that 49% of people hold their smartphones with one hand, relying on thumbs to do the heavy lifting. Clark took this even further and determined that 75% of interactions are thumb-driven.

With this understanding of hand placement, we can conclude that certain zones for thumb movement apply to most smartphones. We’ll define them as easy-to-reach, hard-to-reach and in-between areas.


Thumb-zone mapping for left- and right-handed users. The “combined” zone shows the best possible placement areas for most users. Image Credit: Designing for Touch by Josh Clark. (View the large version)

The trick is to design for the flow of the thumb zone. This provides a framework for making better design decisions, creating human-friendly experiences and getting fewer headaches. Through user testing and experimentation, I’ve discovered a few ways to use this knowledge in everyday development.
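One way to put the zone map to work is a rough classifier that buckets a tap coordinate into the three areas. The boundaries below are illustrative guesses for a right-handed grip on a roughly 375×667pt screen, not measurements from Hoober’s or Clark’s data:

```typescript
type Zone = "easy" | "stretch" | "hard";

// Screen in logical points, origin at top-left. Boundaries are
// hypothetical: the bottom-center region is easiest for a right
// thumb, the top of the screen hardest.
function classifyThumbZone(
  x: number,
  y: number,
  width = 375,
  height = 667
): Zone {
  const nx = x / width;  // 0 = left edge, 1 = right edge
  const ny = y / height; // 0 = top, 1 = bottom

  // Lower part of the screen, away from the far left edge: easy to reach.
  if (ny > 0.6 && nx > 0.2) return "easy";
  // Top quarter, or high up on the far left: a real stretch.
  if (ny < 0.25 || (ny < 0.4 && nx < 0.25)) return "hard";
  return "stretch";
}
```

A design-review script could use something like this to flag primary actions that land in the “hard” bucket.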

Problems With Navigation


We all remember a time when mobile navigation was simply a dropdown list of links. It wasn’t pretty, but it got the job done. Today, we see endless examples of navigation patterns. What’s the best fit for the thumb zone?

The natural movement of the user is the first thing I learned to take into account. Ask questions: “Does my app have a long list of links?” “Do I need to mix menus?” “What goes well with my website design?” The answers to these questions will help you determine where to place navigational triggers and hooks.

If your app has a long list of links, then you’ll probably want to use a full-screen overlay menu. This type of menu affords space for you to organize the list, social buttons and other useful content. The pattern scales well between desktop and mobile devices, and the menu provides an opportunity to align clickable elements within the thumb zone.

Huge has always made great use of full-screen overlay menus on mobile devices:


Huge uses a full-screen overlay menu. (View large version)

On the flip side, if your app does not have a long list of links, then a sticky menu might be best. This type of menu attaches to the top or bottom of the screen and provides real estate for many links, depending on the design.

Airbnb’s mobile app has a sticky menu, attached to the bottom of the screen, providing easy access to important booking, messaging and listing information:


Airbnb’s mobile app has a fixed footer. (View large version)

If you have a large website, mixing menus might work. Because mixing menus can get complex, it’s helpful to prioritize menu links based on their importance in the app. Sticky menus are great for commonly visited links, whereas full-screen and drawer menus come in handy for important but not high-priority links.

Consider Facebook’s mobile app:


Facebook’s mobile app combines multiple fixed menus and drawers. (View large version)

Facebook mixes menus based on the size of content within them. In the screenshot above, we see two sticky menus, each containing valuable links for the user. The top sticky menu is in the stretch zone, but just low enough on the page that it feels natural. The bottom sticky menu items are organized to provide comfortable tapping of popular links.

By gathering user data, practicing good design and leveraging the thumb zone, Facebook is owning sticky menus. The next time you’re trolling your friend’s posts, remember the series of decisions that have made your trolling experience that much better.

Remember that in addition to keeping important navigation items within the thumb zone, placing links outside of the friendly zone is acceptable at times. The general rule is to keep frequently used links in the easy-to-reach zone and to keep infrequently used links in the hard-to-reach zone.

Keeping Cards Friendly


Next, we’re going to review how a well-designed card pattern can work for your app. The card pattern has been widely used for a while now. Cards are quick, easy and predictable; they provide bursts of information in small doses, making it easy to deliver the right content at the right time.

Often, we couple cards with actions: send, save, done, close, etc.


Poncho: Wake Up Weather’s card pattern. (View large version)

In this example we see the Poncho: Wake Up Weather app. This is a great example of placing actionable links within a card: The weather report doesn’t require a thumb tap, so it’s placed way inside the unreachable zone. The action item — in this case, a share button — is placed directly in the natural zone.

On the other side, Poncho places its “location search” and “use current location” links far inside the hard-to-reach zone. This is acceptable: A user would use those features infrequently, because the app remembers your location from the last time it was open.

On the flip side, there are times when card patterns don’t utilize the thumb zone. A prime example of this is Etsy’s mobile app. During checkout, Etsy provides a form in a popup card for the user to enter their shipping information:


Flaws in Etsy’s checkout card pattern. (View large version)

At first glance, this use of a card seems appropriate and design-savvy. Digging deeper, we see flaws. The first problem is the “Cancel” link in the top-left corner. Does that link close the card or cancel the checkout process? (If I’m confused, others surely will be, too.) Also, the “x” is at the edge of the thumb zone, forcing the user to stretch to reach it.

Here’s a dilemma: Adding a close button to a top corner of a card is a common pattern, but it goes against the thumb-zone rubric. If you’re breaking out of the thumb zone to meet a user’s expectations, look for an alternative solution. We could experiment by adding a close button at the bottom of the card, or — since cards are best when delivering short bursts of content — we could try limiting the length of content in cards.

As the card design fad takes hold, it’s a good idea to run designs through the thumb-zone map to ensure that most elements are easily accessible and not confusing. Avoid following trends; instead, make human-oriented decisions throughout the design and development of your app.

Gestures and Movement


The gesture: tap, double-tap, swipe, drag, pinch and press. These are the icing on the smartphone cake. Gestures enable us to engage with technology through our sense of touch.

You might be able to guess where this is going. Keep gestures within the thumb zone. More importantly, allow the user to perform gestures naturally. This seems obvious, but to really pull off a comfortable experience, it’s important to calculate where the gesture should happen.

Let’s focus on the swipe interaction. Through swipe-tracking scripts, I found some really interesting movement data.


Visualization of swipe-gesture data found during user testing. Image Credit: Designing for Touch by Josh Clark. (View large version)

In the map above, circles represent taps, and arrows represent swipes. The data that I collected from tests show that users usually swipe somewhere from the device’s edge towards the middle, diagonally downward. I also found that users generally swipe in the natural area of the thumb zone.

Originally, I had the misconception that users swipe horizontally across, which created problems when measuring thumb areas for swipe gestures. My design specifications did not provide enough room to swipe without triggering another swipe area simultaneously. As with most mobile design elements, consider the thumb space required for swiping. I’ve found an appropriate size of swipe areas to be at least 45 pixels tall and wide.

With all of this information, we can conclude that it’s better to place swipe-gesture actions in easy-to-reach areas, while also allowing enough space to prevent accidental inputs.
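The findings above — swipes travel diagonally downward rather than straight across, and targets need at least 45 pixels — can be encoded in a small recognizer. The 45px minimum comes from the article; the drift tolerance is my own illustrative choice:

```typescript
type Point = { x: number; y: number };

// Minimum target size from the user tests above; drift ratio is a guess.
const MIN_SWIPE_AREA_PX = 45;

// Returns true when a touch path qualifies as a horizontal swipe,
// allowing the downward diagonal drift real thumbs produce.
function isSwipe(start: Point, end: Point, maxDriftRatio = 0.6): boolean {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  // Must travel at least the minimum distance horizontally...
  if (Math.abs(dx) < MIN_SWIPE_AREA_PX) return false;
  // ...and vertical drift must stay proportionally small, so that
  // vertical scrolls are not swallowed as swipes.
  return Math.abs(dy) <= Math.abs(dx) * maxDriftRatio;
}
```

Accepting the diagonal, rather than demanding a perfectly horizontal path, is exactly what keeps the gesture comfortable in the natural zone.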

A great example of the swipe gesture is Google’s Inbox app.


Google Inbox supports swipe gestures in the right places, with the right amount of space. (View large version)

The smart decisions here are:

  • keep swipe gestures out of hard-to-reach areas;
  • provide enough tapping space;
  • allow swipes to start anywhere in each email block element.

With all of this, gestures feel natural and comfortable, making email management faster and less complicated. Keep on keepin’ on, Google!



What have we learned? Hopefully, you better understand why the thumb zone is important. Remember these points:

  • Mobile devices and languages will change, but as long as there are touchscreens, the thumb zone will remain a critical part of design.
  • Navigational design is thumb-friendly when important links are in the easy-to-reach zone and unimportant links are in the hard-to-reach zones.
  • Cards are a powerful design asset when content and actions are thumb zone-friendly.
  • Determining swipe gesture areas becomes simpler when we consider how a person’s thumb swipes against a glass screen.

Useful Links


I’ll leave you with this: Keep reading! There is much to learn from other people in the industry. Below are a few links on the subject of designing for humans:

  • Designing for Touch, Josh Clark

The Art Of Hand Lettering

Smashing Magazine — 9/16/2016 2:33:25 PM


  • September 16th, 2016

Hand lettering has taken the world by storm. It has become the beautiful connection — a juxtaposition — between design and words. Letter forms are broken down into their shapes, flourishes, and textures.

Hand lettering speaks volumes. This is an art form which allows us to see the space between the letters, and the style of the lettering as a piece of art that can deeply evoke emotions and bring meaning — nostalgia, happiness, joy, and love.

I’ve selected some of the hand lettering artists whose work continues to speak strongly and boldly of this beautiful way of creating art. Art that moves us to feel, to act, to be, to believe, to participate, and to belong. That is exactly what art does. It connects us all to the common thread between us. It is a game of connect-the-dots. We see patterns, themes, and feel emotions when art stops us — and moves us. Calligraphy — or hand lettering — can do all of this. Here are some of the best hand letterers around.

Thiago Bellotti


Thiago Bellotti stands out among hand lettering artists for his beautiful Victorian lettering. His work is highly detailed and full of brush lettering, swashes and flourishes.



(Image credit) (Follow Thiago on Instagram)

Katie Daisy


Katie Daisy is a long-time Etsy artist whose free-spirited art and hand lettering spread the positive vibes of nostalgia, nature, freedom, adventure, and beauty through her whimsical wildflower and nature art.


(Image credit) (Follow Katie on Instagram)

Andy Lethbridge


Andy’s hand lettering is a script with an impressive typographical structure. The details and architecture of his letters are just beautiful.



(Image credit) (Follow Andy on Instagram)

Valerie McKeehan


Valerie McKeehan is the creative director and founder of Lily & Val. She is also the author of The Complete Book of Chalk Lettering. Valerie works primarily in chalk — her work is worth checking out!


(Image credit) (Follow Valerie on Instagram)

Dinara Mirtalipova


Dinara Mirtalipova is a hand lettering artist with a unique style of her own. She’s a self-taught illustrator and designer who has worked at American Greetings and is now a freelance designer. Her work has a beautiful hand-written feel that cannot be ignored.


(Image credit) (Follow Dinara on Instagram)

Nathan Douglas Yoder


Nathan is a hand lettering artist, illustrator and animator. If you take a closer look, you’ll see that his work is mostly inspired by vintage charm.



(Image credit) (Follow Nathan on Instagram)

Lindsay Sherbondy


Lindsay Sherbondy is a hand lettering artist based in Wisconsin. Her hand lettering can be found on book covers as well as in her retail store. You may have already seen her beautiful work on one of Shauna Niequist’s books.


(Image credit) (Follow Lindsay on Instagram)

Ken Barber


Ken is a typeface designer at the illustrious House Industries, known for their beautiful fonts.



(Image credit) (Follow Ken on Instagram)

Ruth Simmons Chou


Ruth Simmons Chou’s work takes a modern approach to the traditional pen-and-nib style of lettering. She combines this with her gorgeous watercolor paintings to create her pretty style.




(Image credit) (Follow Ruth on Instagram)

Joseph Alessio


Joseph Alessio is a hand lettering artist whose work is simply stunning. He can really turn objects like pennies, snow, and more into beautiful works of art.



(Image credit) (Follow Joseph on Instagram)

Molly Jacques


Molly is perhaps the quintessential hand lettering artist. A veteran, she teaches calligraphy workshops to thousands around the world. She has corporate clients, and her workshops are booked solid. Well, no wonder — just look at her impressive talent.



(Image credit) (Follow Molly on Instagram)

Andreas Hansen


Andreas Hansen is a hand lettering artist from Denmark. His style is a brush lettering script with a black ink vibe. Impressive!



(Image credit) (Follow Andreas on Instagram)

Jessica Hische


Jessica is an iconic letterer, illustrator, type designer, and author, best known for her personal projects, like ‘Daily Drop Cap.’ A true veteran in the field, she is always coming up with new visions for the future of lettering.




(Image credit) (Follow Jessica on Instagram)

Neil Secretario


Neil is a hand lettering artist with a hand-drawn type style. The typeforms he has created are simply stunning. A true art form — wouldn’t you agree?



(Image credit) (Follow Neil on Instagram)

Kal Barteski


Kal Barteski is an amazing artist and brush script lettering artist. She is passionate about wildlife conservation and education. Her work is often described as “meaningful, authentic, and poetic”. She believes in art’s power to connect and heal, and has created art for charities. Highly inspiring!




(Image credit) (Follow Kal on Instagram)

Zachary Smith


Zachary Smith’s hand lettering has a very hipster feel to it. Like a trip up the mountains with a warm mug of hot chocolate in your hand. It’s a vintage rubber-stamp-pad look that meets a hand-lettered-in-pencil style.



(Image credit) (Follow Zachary on Instagram)

Karla Lim


Karla Lim is based in Vancouver, Canada, but loves that her work can travel all over the world. She specializes in heirloom wedding invitations and calligraphy.


(Image credit) (Follow Karla on Instagram)

Ian Barnard


Ian Barnard is an artist who created the beautiful font named “Outbound”. His work is beautifully scripted and has an adventure-seeking vibe, with a faith-based feel.



(Image credit) (Follow Ian on Instagram)

Minna So


Minna is a freelance graphic designer and illustrator. Her work has a playful hand-written style. She specializes in hand lettering and illustrative designs, and her work has been featured by notable clients like Pinterest.


(Image credit) (Follow Minna on Instagram)

Max Pirsky


Max Pirsky is an up-and-coming hand lettering artist. His work is lively and full of gusto with lots of Jackson Pollock-style paint splatters. His work is just satisfying to look at.



(Image credit)

Lauren Saylor


Lauren started out as a hobbyist, then turned her Etsy shop into a real business, working from home, after teaching herself calligraphy and working with brands she loves on her blog.



(Image credit) (Follow Lauren on Instagram)

Ged Palmer


Ged is a sign painter and lettering artist based in London. His lettering and sign painting are reminiscent of the vintage sign painters of yesterday. Ged creates large-scale murals in collaboration with other artists. His work is captivating.


(Image credit) (Follow Ged on Instagram)

Julie Song


Julie Song is an illustrator, calligrapher and lettering artist based in the Bay Area in California. Her work has also been featured in several books, including How to Style Your Brand by Fiona Humberstone. Her work is worth checking out!


(Image credit) (Follow Julie on Instagram)

Scott Biersack


Scott is a New York hand lettering artist with decidedly vintage-style lettering skills. His work ranges from vector artwork to hand-drawn typeforms. His script is beautiful, and the connecting flourished details make it unique.



(Image credit) (Follow Scott on Instagram)

Melissa Esplin


Melissa teaches some great online calligraphy classes. She’s a hand lettering artist that has been around for a while. She is well known for her classes in the calligraphy and hand lettering circle.


(Image credit) (Follow Melissa on Instagram)

Matt Vergotis


Matt is a hand lettering artist who uses any medium — even markers — to create his hand lettering art. Matt’s style has a very street quality to it. It’s got flourishes that are quick and urban.



(Image credit) (Follow Matt on Instagram)

Alison Carmicheal


Alison is a hand lettering artist whose work appears in print as well as on book covers. Based in London, she studied graphic design at Ravensbourne College of Design. She has been working commercially for about 15 years, winning many industry accolades. Her work is full of romantic flourishes and free-spirited strokes.


(Image credit) (Follow Alison on Instagram)

Bryan Patrick Todd


Bryan is a hand letterer and muralist who creates large-scale lettering projects with a vintage vibe.



(Image credit) (Follow Bryan on Instagram)

Tara Royer Steele


I believe that Tara Royer Steele has the best handwriting. She has covered her charming bakery with encouraging quotes and words. Even the bathroom walls are full of uplifting quotes! Her chalkboards feature an ever-changing plethora of wise words, catchy sayings, scripture, and inspiration.



(Image credit) (Follow Tara on Instagram)

Paul von Excite


Paul is a logo specialist, lettering maniac, typography killer and branding expert. That’s right: all in one.



(Image credit) (Follow Paul on Instagram)

Kristen Drozdowski


Worthwhile Paper is a collection of lively screen-printed paper goods designed by Kristen Drozdowski and her husband. Their goal is to create meaningful print work to share with others. In an effort to spread happiness using art, they create designs inspired by nature, plants, and feel-good experiences.


(Image credit) (Follow Kristen on Instagram)

Luke Choice


Luke Choice uses bold, bright colors in his hand lettering for a very cool style. He also uses a brush-and-grunge style combined with a chrome look for a highly stylized feel, like melted, liquid type.



(Image credit) (Follow Luke on Instagram)

Lisa Congdon


Lisa is an author, fine artist and illustrator who is known for her colorful abstract paintings, intricate line drawings, pattern design and hand lettering. Her work is very cheerful, colorful, and unique.


(Image credit) (Follow Lisa on Instagram)

Mateusz Witczak


Mateusz has a beautiful hand-drawn style to his hand lettering. It has an almost gothic lettering style reminiscent of the old vintage letterers. Take a look!



(Image credit) (Follow Mateusz on Instagram)

Jess Levitz


The combination of upper and lowercase letters really makes Jess’ work shine, and her flourished cursive hand lettering will make you swoon.


(Image credit) (Follow Jessica on Instagram)

João Neves


João has a very beautiful style of hand lettering. It is very unique, stylized, and lovely.



(Image credit) (Follow João on Instagram)

Elizabeth McKenzie


Elizabeth has a unique style of hand lettering and illustrations. She even self-published the amazing hand lettered and illustrated “The ABCs of Homesteading” — a children’s book every homesteading family will want on their shelves. She’s a regular contributor to Taproot magazine (a lovely magazine for homesteaders), and she supports many causes with her work.



(Image credit) (Follow Elizabeth on Instagram)

Jason Vandenberg


Jason is a hand lettering artist based in Toronto. His work features lots of details and flourishes. Very modern and worth checking out!



(Image credit) (Follow Jason on Instagram)

Mary Kate McDevitt


Here’s a pioneer in the hand lettering industry. Mary Kate is an author, illustrator, and hand lettering artist. Her work is iconic; you can probably recognize it by the way she letters. She uses bold colors, lots of design elements, and vintage style in her lettering.


Driving App Engagement With Personalization Techniques

Smashing Magazine — 9/15/2016 10:37:38 AM


  • September 15th, 2016

Once upon a time, in the not-so-distant past, people considered websites to be a prime indication of how users’ attention was brief and unforgiving. Remember the dreaded bounce rate? Remember the numerous times you worried that your content and graphics might not be 100% clear to users? That was nothing. Compared to mobile, engaging users on the web is a piece of cake.

Researchers claim that we humans can no longer brag about being able to concentrate for a full 12 seconds on a coherent thought, thanks to the digitization of our lives. We now lose concentration after about 8 seconds, which, sadly, is 1 second less than the attention span of a goldfish. This is the attention-deficit state of mobile users that you need to overcome in order to successfully engage with them. Mobile users don’t just have brief attention spans — they also expect immediate satisfaction when interacting with your app, otherwise they might be quick to close or even uninstall it.

So, what should you do when tasked with “improving app engagement”? There are several actions you should take, but one of the most crucial is to get up close and personal with users. If you don’t segment and personalize your users’ journeys, then you should expect lower rates of conversion and retention.

Whether your product is a bookstore, baby supplies store or retail app, your users expect you to know who they are, what they want to get from your app and how they prefer to receive it. This means that only using your users’ first names in messages is not enough.


(View large version)

Here are the three fundamentals you should follow:

  • Show that you understand them. To become personal with your app’s users, you have to truly know who they are and what they want. Segment your audiences, understand their activity patterns in your app, and respond appropriately to their interests and preferences. Several tools on the market enable you to analyze your users’ in-app behavior, and you can also connect the app to your CRM for deeper segmentation.
  • Communicate with users at the right mobile moment. Analyzing your users’ general usage and past behavior is just one step. You also have to track their real-time interaction with the app and act upon it. People open your app to perform a particular action. Make sure to interact with them using mobile-engagement features such as surveys, messages and banners in a way that respects the reason they opened your app in the first place and that adds value to what they were planning to do.
  • Provide meaningful content. Mobile engagement needs to be done in context. You need to know the point a user has reached in their journey, their demographics, their physical location, and information about their overall app usage. All of these should be taken into account for optimal personalization.

To better explain these three points, here are five examples of using personalization to drive mobile engagement.

Gamified In-App Messages


The first step towards personalization is to segment your users. Collecting data on users’ past interactions with your app will enable you to segment them by how active they are (visit frequency, time in app, actual usage, etc.).

Each and every app has its power users — the people who are most active and loyal. According to Capgemini Consulting, a customer who is loyal to a brand delivers a 23% premium in share of wallet, revenue and profitability compared to the average customer. Loyal users should receive a totally different type of in-app message than less active or dormant users, who might feel harassed by gamified messages.
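A minimal sketch of that segmentation step, with cutoffs that are illustrative rather than taken from any analytics product:

```typescript
type Segment = "power" | "casual" | "dormant";

interface UserActivity {
  sessionsLast30Days: number;
  daysSinceLastOpen: number;
}

// Hypothetical thresholds; a real app would tune these from its own data.
function segmentUser(a: UserActivity): Segment {
  if (a.daysSinceLastOpen > 30) return "dormant";
  if (a.sessionsLast30Days >= 12) return "power";
  return "casual";
}

// Route each segment to a different message style: power users get
// gamified messages, dormant users a gentler win-back nudge.
function pickMessageStyle(s: Segment): "gamified" | "standard" | "win-back" {
  return s === "power" ? "gamified" : s === "dormant" ? "win-back" : "standard";
}
```

Keeping segmentation and message selection as two separate functions makes it easy to tune the thresholds without touching the messaging logic.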

At the right time and in the relevant context, power users should receive in-app messages that are gamified and that drive them to perform a certain action.

In the example below, a frequent user of a bookstore app is asked to recommend a book to a friend to gain 15 points to become a “Super Reader.” This is a great example of how gamified messages give users the feeling that they are an integral part of your app’s community. In addition, they will be thrilled to be recognized for their commitment to your app and to know that you appreciate them.

When users feel this way, they are more likely to rate the app or share it with their friends and connections, helping to create a community of readers who use the app.
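The “Super Reader” mechanic above can be sketched as a tiny points ledger. The badge name and the 15-point reward echo the example; the promotion threshold is my own placeholder:

```typescript
interface ReaderProfile {
  points: number;
  badge: "Reader" | "Super Reader";
}

const SUPER_READER_THRESHOLD = 100; // hypothetical cutoff

// Award points for an action (e.g. recommending a book to a friend: +15)
// and promote the user once they cross the threshold.
function award(profile: ReaderProfile, pts: number): ReaderProfile {
  const points = profile.points + pts;
  return {
    points,
    badge: points >= SUPER_READER_THRESHOLD ? "Super Reader" : "Reader",
  };
}
```

Returning a new profile rather than mutating the old one keeps the ledger easy to log and replay when debugging reward logic.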


Encourage your users to engage with gamified incentives. (View large version)

Gamified in-app messages can be as simple as this one or more complex. You might choose to give your power users credits, points, access to special features and offers, and more.

Whatever type of gamified in-app message you choose, make it fun and make it personal.

Video Messages


Let’s say you want to drive users to learn more about one of your products and to purchase it. Understanding how the product works and its value is a strong motivator for consumers to purchase it, and even to learn about more products that they might purchase in the future.

If you’ve already used banners or in-app messages (triggered at the right time, of course) to promote this particular product, but your users aren’t converting as much as you’d like, then a video could be a great way to improve your conversion rates. In the example below, an explainer video is triggered when users who have already engaged with items such as baby food and baby clothes in past sessions are now visiting your toys section for the first time. The video drives purchasing intent by explaining how the baby toy would nurture their baby’s development.
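That trigger condition — past engagement with related categories plus a first visit to a new section — reduces to a small predicate. The category and screen names below just echo the example and are otherwise hypothetical:

```typescript
interface SessionHistory {
  engagedCategories: Set<string>; // categories interacted with in past sessions
  visitedScreens: Set<string>;    // screens seen before the current session
}

// Show the explainer video only on the first visit to the toys
// section by a user who has already engaged with related baby items.
function shouldShowExplainerVideo(
  h: SessionHistory,
  currentScreen: string
): boolean {
  const hasRelatedEngagement =
    h.engagedCategories.has("baby-food") ||
    h.engagedCategories.has("baby-clothes");
  const firstVisit = !h.visitedScreens.has(currentScreen);
  return currentScreen === "toys" && hasRelatedEngagement && firstVisit;
}
```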


Use videos to trigger an emotional reaction to your brand among your segmented users. (View large version)

Combining the right video with the right mobile moment could lead to better engagement results.

Don’t forget, too, that video is increasingly popular among mobile users. According to an IAB study, video is on the rise, with 35% of respondents viewing content on mobile devices (in the US, it’s almost 50%). Using mobile video messages is a great way to communicate with users to convey a message that requires a more detailed explanation, or to deliver an emotional message that connects retail buyers to your brand.

Reminder Push Notifications


It is not uncommon for a user to add items to their shopping cart but then suddenly not continue with the purchasing process. You want the user to complete the action as soon as possible, though. Push notifications are the ideal solution in this type of scenario, because they can be highly personalized and can contain a deep link that takes the user to exactly where they left off. Push notifications without personalization can drive users insane, whereas one small personalized message — triggered at the right mobile moment — can push (pun intended) users in the right direction.

When possible, add new information to push notifications that can help drive conversion decisions. For example, receiving an offer for free shipping after having added products to their shopping cart could persuade a user to take the next step and finalize the order. Just like the gamification example in our first point, push notifications serve to recognize and reward past activity in the app.
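A sketch of composing such a notification: the payload shape and the deep-link URL scheme are invented for illustration and are not any push platform’s real API:

```typescript
interface CartState {
  userFirstName: string;
  abandonedItemCount: number;
  cartId: string;
  qualifiesForFreeShipping: boolean;
}

interface PushPayload {
  title: string;
  body: string;
  deepLink: string; // takes the user straight back to where they left off
}

// Build a personalized cart-reminder notification, surfacing the
// free-shipping perk as the new information that nudges conversion.
function buildCartReminder(c: CartState): PushPayload {
  const perk = c.qualifiesForFreeShipping ? " Shipping is on us." : "";
  return {
    title: `${c.userFirstName}, your cart misses you`,
    body: `${c.abandonedItemCount} item(s) are waiting.${perk}`,
    deepLink: `myshop://cart/${c.cartId}`, // hypothetical URL scheme
  };
}
```

The deep link is what makes this a personalized reminder rather than noise: tapping it resumes the transaction instead of dropping the user on the home screen.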


Push notifications can drive users to complete actions. You can also use in-app messages after a certain amount of time has passed since a user started a transaction and didn’t complete it.

Use Surveys To Ask And Respond To User Feedback


You can’t only talk to your users; you need to listen, too. Whereas tracking their “digital body language” can help you understand what users feel about your app, surveys are more about listening to them and taking the conversation to the next level.

An important step when using surveys is to provide each user with different feedback (such as messages, a secondary survey screen, a video, etc.) and not to use identical “thank you” messages, which are sometimes a turn-off.

In the example below, a user who says that they wouldn’t likely return to a hotel is shown another screen that asks them their main reason for not returning. Meanwhile, those who respond very positively are asked whether they would like to write a short review.


Respond differently to each piece of feedback. Keep the conversation going. (View large version)

A combo survey personalizes the responses of users and drives them to complete the proper action in the right mobile moment.
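Branching the follow-up on the score, NPS-style, might look like the sketch below; the 0 to 10 scale and the cutoffs are my own illustrative choices:

```typescript
type FollowUp = "ask-reason" | "generic-thanks" | "ask-review";

// Route a 0-10 likelihood-to-return score to a different next screen,
// instead of one identical "thank you" for everyone.
function surveyFollowUp(score: number): FollowUp {
  if (score <= 6) return "ask-reason"; // detractors: ask the main reason
  if (score >= 9) return "ask-review"; // promoters: invite a short review
  return "generic-thanks";             // passives: a simple thanks
}
```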

Offer Coupons Based On The User’s Journey And Purchasing History


Coupons have a better chance of being redeemed if they are relevant to the customer’s purchasing history. But you also need to consider the right moment to present a coupon. Choose a time when the user is most likely to be interested in what the coupon offers.

Let’s say you have a customer who has looked at a particular pair of shoes several times throughout the day but has yet to purchase them. To drive the user to purchase, you could offer a time-limited coupon of 20% off the item the next time they visit the screen.

Targeting the right users at the right moment is key to increasing your conversion rate.
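The shoe example above reduces to a small trigger rule: repeated views of one item with no purchase, then a coupon on the next visit. The 20% discount is from the example; the view cutoff and expiry window are placeholders:

```typescript
interface ItemInterest {
  viewsToday: number;
  purchased: boolean;
}

interface CouponOffer {
  percentOff: number;
  expiresInMinutes: number;
}

// Offer a time-limited 20%-off coupon once an item has been viewed
// several times in a day without a purchase; otherwise offer nothing.
function maybeOfferCoupon(i: ItemInterest): CouponOffer | null {
  if (i.purchased || i.viewsToday < 3) return null; // 3 views: hypothetical cutoff
  return { percentOff: 20, expiresInMinutes: 60 };  // 60 min: placeholder expiry
}
```

Returning `null` for everyone else is the point: a coupon a user never earned is exactly the kind of untargeted message that erodes trust.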


A coupon offer should relate to past behavior and to real-time interaction in the app.

Understanding the individual user’s journey and how they are using the app overall is key to personalizing your mobile app experience. Mobile users have high expectations of the mobile experience and brief attention spans, so careful personalization and in-context awareness are crucial for effective communication and engagement with them.

(yk, il, al)



Hold on, Tiger! Thank you for reading the article. Did you know that we also publish printed books and run friendly conferences – crafted for pros like you? Like SmashingConf Barcelona, on October 25–26, with smart design patterns and front-end techniques.

Creating Websites With Dropbox-Powered Hosting Tools

Smashing Magazine — 9/14/2016 11:00:03 AM

We use ad-blockers as well, you know. We gotta keep those servers running though. Did you know that we publish useful books and run friendly conferences — crafted for pros like yourself? E.g. upcoming SmashingConf Barcelona, dedicated to smart front-end techniques and design patterns.

Quick Tips


  • September 14th, 2016

Let’s say you want to quickly sketch out your idea for a website, or just quickly whip up a small site for testing purposes. Neither should take a lot of time to build, nor should it require a full-stack toolkit. So, where and how do you start? Have you tried creating a website with a Dropbox-powered hosting tool? These tools can provide a fast and easy solution for such occasions: You don’t have to fiddle with servers or bother with deployment, and some of them even come with pre-configured templates that you can use or customize to spare you coding time.

In today’s article, we’ve collected nine tools for you that hook up to your Dropbox to streamline the process of creating and hosting static websites. However, before you settle on a Dropbox-powered platform, always balance the pros and cons to make sure it is the right choice for your project — including performance-wise. As for prototyping, the following tools are a great way to make your workflow more efficient so you can spend more time on the details that really matter.

Small Victories


One creation tool that is based on Dropbox is Small Victories. The idea behind it is simple: The tool connects to your Dropbox, creates a special folder there, and whatever you drop into the folder (be it HTML, Markdown or TXT files, images or videos) will automatically be turned into a website. No installation, server, CMS or uploading necessary; every change you make in the Dropbox folder will be pushed to your site instantly. To ease the job, you can draw on nine ready-made themes (among them a homepage, an e-commerce product page and a feed stream) or, if you want more control, you can code your own static HTML files instead and use Dropbox essentially as a hosting service.


Drop your files into a designated Dropbox folder and Small Victories will make a website out of them.

To keep everything as simple as possible, the content will be displayed on your site in the order in which it appears in your Dropbox folder. This requires a bit of planning ahead when it comes to choosing file names, and, as it turns out, numbering them is the easiest way to maintain a structure. Alternatively, you can sort the content by date, or list the file names, comma-delimited, in the order you want them to appear in the settings.txt file. Despite its simplicity, the tool isn’t inflexible at all; it surprises with quite a few customization features. You can add your own styles and scripts to the default CSS and JavaScript files, while Google Fonts and Typekit support give you even more freedom.

By default, every site created with Small Victories will be hosted as a subdomain of .smvi.co, but you can also use a custom domain if you prefer. Just enter the domain in the settings.txt file and make sure to register it in your domain’s DNS settings, too. If you want to make your content available only to a selected group of users (to show an idea to a client, for example, or to share slides), you can also set up password protection. Especially when you’re looking for fast results or want to collaborate with your team members, the tool is convenient to work with. A shared Dropbox folder is all it takes.

Here are some examples of sites that were built using Small Victories:


vis:dmcg, an inspiration showcase.


Salon des Refusés, an exhibition of art and photography.


XXIX Store, an e-commerce site.


Wise App’s product page for an app.



Pancake

Also based on the “drop your files into a Dropbox folder” idea is Pancake. Apart from HTML, it supports .md and .txt and creates a page for every text file you save to the respective Dropbox folder. By default, your project will be hosted as a subdomain of .pancakeapps.com (which is SSL-secured, by the way), but you can also use your own domain with a Pancake site. Apart from Dropbox sync, the platform offers a beta version with git push that comes with version control and supports popular generators such as Jekyll, Sphinx, and Wintersmith.


Pancake serves HTML and text files directly from your Dropbox folder.



DropPages

DropPages launched back in 2011 and offers the same approach as the tools above. Three sub-folders in your Dropbox’s project folder house text (which can also be written in Markdown), images and CSS, and, finally, HTML files to render the text. You simply edit the files on your computer and save, and they’ll automatically sync with your live site. If you don’t want to edit live content, create a folder called _drafts and get started with your new content in there. When you’re done, overwrite the live version with it. Content is minified, gzipped and cached. DropPages is free to use.


DropPages syncs between your Dropbox and your website to make editing content as painless as possible.



Site44

Site44 updates your website as soon as you’ve saved any changes within the folder you’ve created in Dropbox (it constantly monitors the folder for exactly this purpose). Sites created with the tool usually live on a .site44.com sub-domain, but using custom domains is possible as well. For fast response times, your content will be cached on Site44’s servers. Advanced features include password protection, custom 404 pages, and support for redirects. A 30-day trial is free; personal plans start at $4.95 per month.


Site44 monitors a Dropbox folder in which you save your website assets. As soon as you make changes to this folder, your website will automatically be updated.



Sitebox.io

Another static site generator living in symbiosis with Dropbox is Sitebox.io. You can code your own layout (Markdown is supported) or use one of three ready-made, responsive themes. The ability to set a meta description for every page helps make your website search-engine friendly. A nice bonus: The tool won’t push all changes you make to the folder live instantly. Instead, you can preview your changes first, and when everything is as you want it to be, you can publish in one click. Sitebox is free to use with up to five pages on a .sitebox.io subdomain; a professional license with an unlimited number of websites and the option to connect your own domain is available for $10 a month.


An example site created with Sitebox and the default Unison theme.

Cloud Cannon


Cloud Cannon is a simple CMS that syncs back and forth with Dropbox or, alternatively, GitHub and Bitbucket, to build and deploy your website to a test or custom domain. The focus is on collaboration between developers and non-developers. While developers have full control over the source code and can use their favorite tools to build static or Jekyll sites, non-developers can update content visually in the inline editor. For improved performance, websites built with Cloud Cannon are optimized and their assets are served from a CDN.

Here’s a handy feature: You can set up a staging site for testing purposes and easily push to the live site when you’re ready. Restricting public access to your site or to parts of it is also possible. Plans for Cloud Cannon start at $25 for one user and unlimited websites; more expensive agency and enterprise plans are also available. If you just want to give it a try, there’s a 30-day free trial, too.


Cloud Cannon manages the balancing act between giving developers full freedom and empowering non-developers and clients to edit content themselves.



KISSr

No frills, just a Dropbox-powered hosting service — that’s KISSr. You save your website to Dropbox, and when you update files, the changes will be copied to KISSr’s servers. One prototype site is free; for $5 per month you get unlimited sites, storage, and bandwidth.


KISSr provides simple Dropbox web hosting without requiring a personal FTP server.



Paperplane

Paperplane spares you from fiddling around with FTP by connecting to your Dropbox (or GitHub, if you prefer). To use it, pick a name, point Paperplane to your files, and that’s it — the service will transform your assets into a website. Custom domains can be used, too. Paperplane costs $9 per month, but you can also test it out for free with a maximum of three sites and no custom domains.


Paperplane wants to make static hosting simple.



Synkee

Synkee works differently than the other tools on our list. It connects to your Dropbox but doesn’t replace an FTP server; simple deployment and version control are the magic words here. A typical workflow with Synkee is as follows: You save your website assets to Dropbox, edit them with your favorite text editor, and the changes get synced to your website’s server as your Dropbox syncs. Deployments can be handled via a dashboard, either manually or automatically whenever you save a file. Built-in version control and the option to revert changes on the FTP server also add to a more efficient workflow. Synkee also works with GitHub and Bitbucket and offers a two-week free trial. After the trial has ended, plans start at $5 per month for one user and ten projects; team plans are also available.


Synkee lets you deploy and sync websites that you save in Dropbox to your FTP server.

What are your experiences? Have you used one of these tools before? Or do you know of one we haven’t listed? Let us know in the comment section below.




How To Boost Your Conversion Rates With Psychologically Validated Principles

Smashing Magazine — 9/13/2016 11:09:34 AM


  • September 13th, 2016

It is often easy to overlook the underlying principles that compel people to take action. Instead, we tend to obsess over minute details — things like button color, pricing and headlines. While these things can compel users to take action, it is worth considering the psychological principles that influence users’ behavior.

Unfortunately, few organizations try to understand what influences user action. Research by Eisenberg Holdings shows that for every $92 the average company spends attracting customers, a meager $1 is spent converting them. Real conversion optimization is rooted deeply in psychology.

In this article, we will analyze seven psychology studies that date as far back as 1961. Each experiment raises principles that will help you boost conversions on your website. Some of the experiments are so controversial that they will make you cringe, but the lessons are fundamental.

The Authority Principle: Leverage The Influence Of Authority Figures To Get People To Act


In perhaps the most famous study about obedience in psychology, Yale University psychologist Stanley Milgram conducted a series of experiments to observe how people react when instructed by an authority figure to do something that conflicts with their conscience. The aim of the experiment was to see how far people would go to obey authority, even if the act of obedience involved harming someone else and acting against their conscience.

For the studies, which began in 1961, Milgram selected participants for his experiment by placing an advert in a newspaper. Once people responded to the advert, Milgram paired the participants and cast lots to determine which of each pair would be the learner and which the teacher. Unbeknownst to the participants, the experiment was rigged — all of them would be teachers, while all of Milgram’s associates would be chosen as learners.

The learner (Milgram’s associate) was taken into a room and connected to an electric chair; the teacher (one of the participants) was then taken to a room next door that contained a row of switches, marked with a scale of 15 to 450 volts — with 15 volts being a “slight shock” and 450 volts producing a “fatal shock.” The teacher was able to see the reactions of the learner through a screen.


(Image: Gina Perry)

Once in the other room, the researcher told the teacher (i.e. the participant) to administer an electric shock every time the learner answered a question incorrectly. The learner was then asked a series of questions and mainly gave wrong answers (on purpose). In return, the authority figure — dressed in a gray lab coat — asked the teacher to administer an electric shock for each wrong answer. The result was stunning: 65% of participants administered the electric shock up to the maximum 450 volts, even when the learner had long stopped showing signs of breathing. In a variation of the experiment, the authority figure was replaced with an ordinary person, and compliance dropped to just 20%.

Milgram’s experiment shows that we will go to great lengths to obey orders, especially from those seen as legitimate authorities (whether legal or moral).

How to Use The Authority Principle to Boost Conversions


The authority principle can be used to boost conversions in your business. For instance, getting authority endorsements will always go a long way toward boosting your conversions and profits. You are far better off, of course, not trying to sell products to people who don’t want them, but even scrupulous websites can boost conversions by tapping into the power of authority. Here are some tips:

  • Get an authority figure or respected celebrity in your industry to endorse you. A great example of the effectiveness of endorsements from authority figures is Dr. Oz. Dr. Oz is renowned in the health field, and products will sell out at stores as soon as he recommends them. The phrase “Dr. Oz Approved” currently has 1.6 million results in Google, showing how seriously people take his recommendations.

Content Security Policy, Your Future Best Friend

Smashing Magazine — 9/12/2016 11:19:14 AM


  • September 12th, 2016

A long time ago, my personal website was attacked. I do not know how it happened, but it happened. Fortunately, the damage from the attack was quite minor: A piece of JavaScript was inserted at the bottom of some pages. I updated the FTP and other credentials, cleaned up some files, and that was that.

One point made me mad: At the time, there was no simple solution that could have informed me there was a problem and — more importantly — that could have protected the website’s visitors from this annoying piece of code.

A solution exists now, and it is a technology that succeeds in both roles. Its name is Content Security Policy (CSP).

What Is A CSP?


The idea is quite simple: By sending a CSP header from a website, you are telling the browser what it is authorized to execute and what it is authorized to block.

Here is an example with PHP:
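The snippet itself did not survive here, so the following is a minimal sketch rather than the article’s exact code, assuming the directives discussed in the next section (our own origin, plus Google Analytics as a script source):

```php
<?php
// Illustrative only: authorize assets from our own origin by default,
// and add Google Analytics as an extra allowed source of scripts.
header("Content-Security-Policy: default-src 'self'; script-src 'self' www.google-analytics.com");
```

Any server-side language, or the web server’s own configuration, can send the same header.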

Some Directives


You may define global rules or define rules related to a type of asset:

The base directive is default-src: If no directive is defined for a type of asset, then the browser will use this value.
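As an assumed illustration (the exact header was elided, but it would match the explanation that follows):

```
Content-Security-Policy: default-src 'self'; script-src 'self' www.google-analytics.com
```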

In this example, we’ve authorized the domain name www.google-analytics.com as a source of JavaScript files to use on our website. We’ve added the keyword 'self'; if we redefined the directive script-src with another rule, it would override default-src rules.

If no scheme or port is specified, then it enforces the same scheme or port from the current page. This prevents mixed content. If the page is https://example.com, then you wouldn’t be able to load http://www.google-analytics.com/file.js because it would be blocked (the scheme wouldn’t match). However, there is an exception to allow a scheme upgrade. If http://example.com tries to load https://www.google-analytics.com/file.js, then the scheme or port would be allowed to change to facilitate the scheme upgrade.
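An assumed illustration of such a rule, allowing data: URIs for content embedded via CSS (background images and fonts, for example):

```
Content-Security-Policy: default-src 'self'; img-src 'self' data:; font-src 'self' data:
```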

In this example, the keyword data: authorizes embedded content in CSS files.

Under the CSP level 1 specification, you may also define rules for the following:

  • img-src: valid sources of images
  • connect-src: applies to XMLHttpRequest (AJAX), WebSocket or EventSource
  • font-src: valid sources of fonts
  • object-src: valid sources of plugins (for example, <object>, <embed>, <applet>)
  • media-src: valid sources of <audio> and <video>

CSP level 2 rules include the following:

  • child-src: valid sources of web workers and elements such as <frame> and <iframe> (this replaces the deprecated frame-src from CSP level 1)
  • form-action: valid sources that can be used as an HTML <form> action
  • frame-ancestors: valid sources for embedding the resource using <frame>, <iframe>, <object>, <embed> or <applet>
  • upgrade-insecure-requests: instructs user agents to rewrite URL schemes, changing HTTP to HTTPS (for websites with a lot of old URLs that need to be rewritten)

For better backwards-compatibility with deprecated properties, you may simply copy the contents of the current directive and duplicate them in the deprecated one. For example, you may copy the contents of child-src and duplicate them in frame-src.

CSP 2 allows you to whitelist paths (CSP 1 allows only domains to be whitelisted). So, rather than whitelisting all of www.foo.com, you could whitelist www.foo.com/some/folder to restrict it further. This does require CSP 2 support in the browser, but it is obviously more secure.

An Example


I made a simple example for the Paris Web 2015 conference, where I presented a talk entitled “CSP in Action.”

Without CSP, the page would look like this:


[Screenshot: the example page rendered without CSP]

Not very nice. What if we enabled the following CSP directives?
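For illustration, a strict policy along these lines (an assumption, not necessarily the talk’s exact directives):

```
Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self'; img-src 'self'
```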

What would the browser do? It would (very strictly) apply these directives under the primary rule of CSP, which is that anything not authorized in a CSP directive will be blocked (“blocked” meaning not executed, not displayed and not used by the website).

By default in CSP, inline scripts and styles are not authorized, which means that every <script>, onclick or style attribute will be blocked. You could authorize inline CSS with style-src 'unsafe-inline' ;.

In a modern browser with CSP support, the example would look like this:


[Screenshot: the example page with the CSP directives applied]

What happened? The browser applied the directives and rejected anything that was not authorized. It sent these notifications to the console:


[Screenshot: the CSP violation notifications in the browser console]

If you’re still not convinced of the value of CSP, have a look at Aaron Gustafson’s article “More Proof We Don’t Control Our Web Pages.”

Of course, you may use stricter directives than the ones in the example provided above:

  • set default-src to 'none',
  • specify what you need for each rule,
  • specify the exact paths of required files,
  • etc.



CSP is not a nightly feature requiring three flags to be activated in order for it to work. CSP levels 1 and 2 are candidate recommendations! Browser support for CSP level 1 is excellent.


[Table: browser support for CSP level 1]

The level 2 specification is more recent, so it is a bit less supported.


[Table: browser support for CSP level 2]

CSP level 3 is an early draft now, so it is not yet supported, but you can already do great things with levels 1 and 2.

Other Considerations


CSP has been designed to reduce cross-site scripting (XSS) risks, which is why enabling inline scripts in script-src directives is not recommended. Firefox illustrates this issue very nicely: In the browser, hit Shift + F2 and type security csp, and it will show you directives and advice. For example, here it is used on Twitter’s website:


[Screenshot: the CSP audit of Twitter’s website in Firefox’s developer tools]

Another possibility for inline scripts or inline styles, if you really have to use them, is to create a hash value. For example, suppose you need to have this inline script:
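The hash quoted below matches the inline snippet used as the example in the CSP level 2 specification, so the script in question was presumably something like:

```html
<script>alert('Hello, world.');</script>
```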

You might add 'sha256-qznLcsROx4GACP2dm0UCKCzCG-HiZ1guq6ZZDob_Tng=' as a valid source in your script-src directives. The hash generated is the result of this in PHP:
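A hedged sketch of that computation, assuming the `alert('Hello, world.');` snippet from the CSP level 2 specification’s example:

```php
<?php
// Raw (binary) SHA-256 digest of the exact script contents, Base64-encoded.
// Note: CSP also accepts the URL-safe Base64 alphabet ('-' and '_'),
// which is what the value quoted above uses.
echo base64_encode(hash('sha256', "alert('Hello, world.');", true));
```

The hashed string must match the script’s contents byte for byte, whitespace included.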

I said earlier that CSP is designed to reduce XSS risks — I could have added, “… and reduce the risks of unsolicited content.” With CSP, you have to know where your sources of content are and what they are doing on your front end (inline styles, etc.). CSP can also help you force contributors, developers and others to respect your rules about sources of content!

Now your question is, “OK, this is great, but how do we use it in a production environment?”

How To Use It In The Real World


The easiest way to get discouraged with using CSP the first time is to test it in a live environment, thinking, “This will be easy. My code is bad ass and perfectly clean.” Don’t do this. I did it. It’s stupid, trust me.

As I explained, CSP directives are activated with a CSP header — there is no middle ground. You are the weak link here. You might forget to authorize something or forget a piece of code on your website. CSP will not forgive your oversight. However, two features of CSP greatly simplify this problem.



Remember the notifications that CSP sends to the console? The directive report-uri can be used to tell the browser to send them to the specified address. Reports are sent in JSON format.
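Such a header might look like this, pointing at the csp-parser.php endpoint used below:

```
Content-Security-Policy: default-src 'self'; report-uri /csp-parser.php
```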

So, in the csp-parser.php file, we can process the data sent by the browser. Here is the most basic example in PHP:
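The code was elided here; a hedged sketch of such a parser (the recipient address is a placeholder):

```php
<?php
// Read the raw JSON report that the browser POSTs to the report-uri endpoint.
$rawReport = file_get_contents('php://input');

// Re-encode it readably and forward it by email.
if ($report = json_decode($rawReport, true)) {
    mail('webmaster@example.com', 'CSP report', json_encode($report, JSON_PRETTY_PRINT));
}
```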

This notification will be transformed into an email. During development, you might not need anything more complex than this.

For a production environment (or a more visited development environment), you’d better use a way other than email to collect information, because there is no auth or rate limiting on the endpoint, and CSP can be very noisy. Just imagine a page that generates 100 CSP notifications (for example, a script that displays images from an unauthorized source) and that is viewed 100 times a day — you could get 10,000 notifications a day!

A service such as report-uri.io can be used to simplify the management of reporting. You can see other simple examples for report-uri (with a database, with some optimizations, etc.) on GitHub.



As we have seen, the biggest issue is that there is no middle ground between CSP being enabled and disabled. However, a feature named report-only sends a slightly different header:
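The report-only variant uses a different header name, for example:

```
Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-parser.php
```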

Basically, this tells the browser, “Act as if these CSP directives were being applied, but do not block anything. Just send me the notifications.” It is a great way to test directives without the risk of blocking any required assets.

With report-only and report-uri, you can test CSP directives with no risk, and you can monitor in real time everything CSP-related on a website. These two features are really powerful for deploying and maintaining CSP!

Why CSP Is Cool


CSP is most important for your users: They don’t have to suffer any unsolicited scripts or content or XSS vulnerabilities on your website.

The most important advantage of CSP for website maintainers is awareness. If you’ve set strict rules for image sources, and a script kiddie attempts to insert an image on your website from an unauthorized source, that image will be blocked, and you will be notified instantly.

Developers, meanwhile, need to know exactly what their front-end code does, and CSP helps them master that. They will be prompted to refactor parts of their code (avoiding inline functions and styles, etc.) and to follow best practices.

How CSP Could Be Even Cooler


Ironically, CSP is too efficient in some browsers — it creates bugs with bookmarklets. So, do not update your CSP directives to allow bookmarklets. We can’t blame any one browser in particular; all of them have issues:

Most of the time, the bugs are false positives in blocked notifications. All browser vendors are working on these issues, so we can expect fixes soon. Anyway, this should not stop you from using CSP.

General Information


Reducing Cognitive Overload For A Better User Experience

Smashing Magazine — 9/9/2016 11:41:42 AM


  • September 9th, 2016

“Getting in the way of a speeding freight train usually doesn’t end well. It takes a lot of effort to shift the course of something with that much momentum. Rather than forcing people to divert their attention from their primary task, come to where they are.”

– Luke Wroblewski, Product Director at Google

The best user experience is the one the user doesn’t notice. It appears smooth and simple on the surface, but hundreds of crucial design decisions have been made to guide, entertain and prevent trouble. If the user experience design does what it’s supposed to do, the user won’t notice any of the work that went into it. The less users have to think about the interface or design, the more they can focus on accomplishing their goal on your website. Your job as a designer is to give them a straight path to their goal by clearing out the obstacles beforehand.

After all, consider the alternative. Complicated and confusing interfaces force users to find solutions to problems that shouldn’t be there in the first place. A user who feels confused by the options, the interface, the navigation and so on will likely feel overwhelmed in their thinking process. Even momentary pauses are enough to rip users back into the reality that they’re sitting in front of their computer.

This excessive thinking is called cognitive overload, and here we’ll explain how you can avoid it. First, we need to explain what exactly in our brains is at risk of being overloaded.


(Image: Dierk Schaefer)

The Scientific Roots Of Cognitive Overload


Cognitive load refers to the total amount of information your working memory can handle. Cognitive overload happens when your working memory receives more information than it can handle comfortably, leading to frustration and compromised decision-making.

But what does that mean, really? What exactly is working memory? And what does this have to do with design? The first step is to understand the origin of cognitive load theory.

John Sweller and Cognitive Load Theory


While the study of cognition dates back centuries, it wasn’t until the 1980s that Australian educational psychologist John Sweller applied the research to instructional design. Sweller sought to discern the best conditions for learners of any kind to retain the information they were taught. In other words, what are the best strategies for making a lesson stick?

Sweller’s work culminated in the 1988 publication of “Cognitive Load Theory, Learning Difficulty, and Instructional Design” (PDF), reworked and republished later in 1994. His work incorporated the data organizational constructs known as schema and outlined both effective and ineffective teaching methods, but his findings on the limitations of working memory are what designers tend to find most useful.

In many ways, Sweller’s work expanded on the information processing theories of George Miller, a cognitive psychologist and linguist of the 1950s who tested the limits of short-term memory. Miller’s research has since ingrained itself in digital design, especially the technique of chunking, discussed later in this article. Miller was also responsible for the paper “The Magical Number Seven, Plus or Minus Two” (PDF), which prompted many designers to limit menu items to between five and nine — although this technique has since been demoted in digital design.

While these strategies were originally intended for the field of education, they apply equally to user experience (UX) design. As we’ll explain, the same techniques that enhance memorability and learning also reduce user annoyance.

Working Memory


What if every time you wanted to open the fridge, you had to answer a Sphinxian riddle like, “What walks on four feet in the morning, two in the afternoon and three at night?”

It would get old after a while, right? But according to cognitive load theory, that’s the same kind of frustration users feel with poor UX design.


How To Scale React Applications

Smashing Magazine — 9/8/2016 9:49:35 AM


  • September 8th, 2016

We recently released version 3 of React Boilerplate, one of the most popular React starter kits, after several months of work. The team spoke with hundreds of developers about how they build and scale their web applications, and I want to share some things we learned along the way.


The tweet that announced the release of version 3 of React Boilerplate

We realized early on in the process that we didn’t want it to be “just another boilerplate.” We wanted to give developers who were starting a company or building a product the best foundation to start from and to scale.

Traditionally, scaling was mostly relevant for server-side systems. As more and more users would use your application, you needed to make sure that you could add more servers to your cluster, that your database could be split across multiple servers, and so on.

Nowadays, due to rich web applications, scaling has become an important topic on the front end, too! The front end of a complex app needs to be able to handle a large number of users, developers and parts. These three categories of scaling (users, developers and parts) need to be accounted for; otherwise, there will be problems down the line.

Containers And Components


The first big improvement in clarity for big applications is the differentiation between stateful components (“containers”) and stateless components (“components”). Containers manage data or are connected to the state and generally don’t have styling associated with them. Components, on the other hand, have styling associated with them and aren’t responsible for any data or state management. I found this confusing at first; basically, containers are responsible for how things work, and components are responsible for how things look.

Splitting our components like this enables us to cleanly separate reusable components and intermediary layers of data management. As a result, you can confidently go in and edit your components without worrying about your data structures getting messed up, and you can edit your containers without worrying about the styling getting messed up. Reasoning about and working with your application becomes much easier this way, as the clarity is greatly improved!



Traditionally, developers structured their React applications by type. This means they had folders like actions/, components/, containers/, etc.

Imagine a navigation bar container named NavBar. It would have some state associated with it and a toggleNav action that opens and closes it. This is how the files would be structured when grouped by type:
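A sketch of that layout (the exact file names are illustrative):

```
actions/
    NavBarActions.js
components/
    NavBar.js
constants/
    NavBarConstants.js
containers/
    NavBarContainer.js
reducers/
    NavBarReducer.js
```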

While this works fine for examples, once you have hundreds or potentially thousands of components, development becomes very hard. To add a feature, you would have to search for the correct file in half a dozen different folders with thousands of files. This would quickly become tedious, and confidence in the code base would wane.

After a long discussion in our GitHub issues tracker and trying out a bunch of different structures, we believe we have found a much better solution:

Instead of grouping the files of your application by type, group them by feature! That is, put all files related to one feature (for example, the navigation bar) in the same folder.

Let’s look at what the folder structure would look like for our NavBar example:
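Grouped by feature, everything related to the navigation bar might live in a single folder (again, file names are illustrative):

```
react-app-by-feature/
└── NavBar/
    ├── NavBar.jsx
    ├── actions.js
    ├── constants.js
    ├── reducer.js
    └── styles.css
```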

Developers working on this application would need to go into only a single folder to work on something. And they would need to create only a single folder to add a new feature. Renaming is easy with find and replace, and hundreds of developers could work on the same application at once without causing any conflicts!

When I first read about this way of writing React applications, I thought, “Why would I ever do that? The other way works absolutely fine!” I pride myself on keeping an open mind, though, so I tried it on a small project. I was smitten within 15 minutes. My confidence in the code base was immense, and, with the container-component split, working on it was a breeze.

Two questions popped into my head while working like this, though: “How do we handle styling?” and “How do we handle data-fetching?” Let me tackle these separately.



Apart from architectural decisions, working with CSS in a component-based architecture is hard due to two specific properties of the language itself: global names and inheritance.

Unique Class Names


Imagine this CSS somewhere in a large application:
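A sketch of such a declaration (the class name and styles are hypothetical):

```css
/* Deep inside the app's stylesheets. */
.title {
  background-color: yellow;
  padding: 10px;
}
```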

Immediately, you’ll recognize a problem: title is a very generic name. Another developer (or maybe even the same one some time later) might go in and write this code:
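Something like this, in a completely unrelated part of the code base:

```css
/* Another "title", elsewhere in the app. */
.title {
  border: 1px solid blue;
}
```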

This will create a naming conflict, and suddenly your title will have a blue border and a yellow background everywhere, and you’ll be digging into thousands of files to find the one declaration that has messed everything up!

Thankfully, a few smart developers have come up with a solution to this problem, which they’ve named CSS Modules4. The key to their approach is to co-locate the styles of a component in their folder:
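In practice, that means each component folder carries its own stylesheet, for example:

```
Button/
├── Button.jsx
└── styles.css
```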

The CSS looks exactly the same, except that we don’t have to worry about specific naming conventions, and we can give our code quite generic names:
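A sketch of such a co-located stylesheet (the styles themselves are illustrative):

```css
/* Button/styles.css */
.button {
  display: inline-block;
  padding: 0.5em 1em;
}
```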

We then require (or import) these CSS files into our component and assign our JSX tag a className of styles.button:
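A minimal sketch of that wiring, assuming a bundler with CSS Modules enabled (for example, webpack’s css-loader):

```jsx
// Button/Button.jsx -- a hypothetical component using its co-located styles.
import React from 'react';
import styles from './styles.css';

const Button = (props) => (
  <div className={styles.button}>{props.children}</div>
);

export default Button;
```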

If you now look into the DOM in the browser, you’ll see <div class="MyApp__button__1co1k"></div>! CSS Modules takes care of “uniquifying” our class names by prepending the application’s name and appending a short hash of the contents of the class. This means that the chance of overlapping classes is almost nil, and if they overlap, they will have the same contents anyway (because the hash — that is, the contents — has to be the same).

Reset Properties For Each Component


In CSS, certain properties inherit across nodes. For example, if the parent node has a line-height set and the child doesn’t have anything specified, it will automatically have the same line-height applied as the parent.

In a component-based architecture, that’s not what we want. Imagine a Header component and a Footer component with these styles:
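For example (the values are illustrative):

```css
/* Header/styles.css */
.header {
  line-height: 1.5;
}

/* Footer/styles.css */
.footer {
  line-height: 1;
}
```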

Let’s say we render a Button inside these two components, and suddenly our buttons look different in the header and footer of our page! This is true not only for line-height: About a dozen CSS properties will inherit, and tracking down and getting rid of those bugs in your application would be very hard.

In the front-end world, using a reset style sheet to normalize styles across browsers is quite common. Popular options include Reset CSS, Normalize.css and sanitize.css! What if we took that concept and had a reset for every component?

This is called an auto-reset, and it exists as a plugin for PostCSS5! If you add PostCSS Auto Reset6 to your PostCSS plugins, it’ll do exactly that: wrap a local reset around each component, setting all inheritable properties to their default values to override the inheritance.
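A minimal sketch of the setup, assuming the plugin is installed under the package name postcss-autoreset:

```js
// postcss.config.js
module.exports = {
  plugins: [
    // Wraps a local reset around each component's classes, setting
    // inheritable properties back to their defaults.
    require('postcss-autoreset')(),
  ],
};
```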



The second problem associated with this architecture is data-fetching. Co-locating your actions to your components makes sense for most actions, but data-fetching is inherently a global action that’s not tied to a single component!

Most developers at the moment use Redux Thunk7 to handle data-fetching with Redux. A typical thunked action would look something like this:
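A sketch of such a thunk (the endpoint and action types are hypothetical):

```javascript
// Instead of returning a plain action object, the action creator
// returns a function that receives `dispatch` and fetches the data.
function fetchUser(id) {
  return function (dispatch) {
    dispatch({ type: 'USER_FETCH_REQUESTED', id });
    return fetch(`/api/users/${id}`)
      .then((response) => response.json())
      .then((user) => dispatch({ type: 'USER_FETCH_SUCCEEDED', user }))
      .catch((error) => dispatch({ type: 'USER_FETCH_FAILED', error }));
  };
}
```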

This is a brilliant way to allow data-fetching from the actions, but it has two pain points: Testing those functions is very hard, and, conceptually, having data-fetching in the actions doesn’t quite seem right.

A big benefit of Redux is the pure action creators, which are easily testable. When returning a thunk from an action, suddenly you have to double-call the action, mock the dispatch function, etc.

Recently, a new approach has taken the React world by storm: redux-saga8. redux-saga utilizes ESNext generator functions to make asynchronous code look synchronous, and it makes those asynchronous flows very easy to test. The mental model behind sagas is that they are like a separate thread in your application that handles all asynchronous things, without bothering the rest of the application!

Let me illustrate with an example:
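A sketch of a data-fetching saga. To keep it self-contained, `take`, `call` and `put` are minimal stand-ins here; the real redux-saga effects are plain descriptor objects in just the same way:

```javascript
// Minimal stand-ins for redux-saga's effect creators.
const take = (pattern) => ({ type: 'TAKE', pattern });
const call = (fn, ...args) => ({ type: 'CALL', fn, args });
const put = (action) => ({ type: 'PUT', action });

// A hypothetical API helper; the saga never calls it directly, it
// only *describes* the call via the `call` effect.
const fetchUser = (id) => Promise.resolve({ id, name: 'Jane' });

// The saga: wait for a request action, fetch the user, dispatch the result.
function* fetchUserSaga() {
  while (true) {
    const { id } = yield take('USER_FETCH_REQUESTED');
    try {
      const user = yield call(fetchUser, id);
      yield put({ type: 'USER_FETCH_SUCCEEDED', user });
    } catch (error) {
      yield put({ type: 'USER_FETCH_FAILED', error });
    }
  }
}
```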

Don’t be scared by the strange-looking code: This is a brilliant way to handle asynchronous flows!

The source code above almost reads like a novel, avoids callback hell and, on top of that, is easy to test. Now, you might ask yourself, why is it easy to test? The reason has to do with our ability to test for the “effects” that redux-saga exports without needing them to complete.

These effects that we import at the top of the file are handlers that enable us to easily interact with our redux code:

  • put() dispatches an action from our saga.
  • take() pauses our saga until an action happens in our app.
  • select() gets a part of the redux state (kind of like mapStateToProps).
  • call() calls the function passed as the first argument with the remaining arguments.

Why are these effects useful? Let’s see what the test for our example would look like:
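A self-contained sketch of such a test: a tiny saga plus hand-stepping of its generator (`call` and `put` are minimal stand-ins for redux-saga's plain-object effects; the API helper is hypothetical):

```javascript
const call = (fn, ...args) => ({ type: 'CALL', fn, args });
const put = (action) => ({ type: 'PUT', action });

const fetchData = (url) => Promise.resolve({ url }); // hypothetical API helper

function* loadDataSaga() {
  const data = yield call(fetchData, '/api/data');
  yield put({ type: 'LOAD_DATA_SUCCESS', data });
}

// The "test": step through the generator by hand. Each `next()` runs
// to the next `yield` and hands us an effect descriptor -- no mocking,
// no network, just comparing plain objects.
const gen = loadDataSaga();
const callEffect = gen.next().value;             // describes the fetch
const putEffect = gen.next({ items: [] }).value; // feed in a fake response
```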

ESNext generators don’t run past a yield keyword until generator.next() is called, at which point they run until they encounter the next yield keyword! By using the redux-saga effects, we can thus easily test asynchronous things without needing to mock anything and without relying on the network for our tests.

By the way, we co-locate the test files to the files we are testing, too. Why should they be in a separate folder? That way, all of the files associated with a component are truly in the same folder, even when we’re testing things!

If you think this is where the benefits of redux-saga end, you’d be mistaken! In fact, making data-fetching easy, beautiful and testable might be the smallest of its benefits!

Using redux-saga as Mortar


Our components are now decoupled. They don’t care about any other styling or logic; they are concerned solely with their own business — well, almost.

Imagine a Clock and a Timer component. When a button on the clock is pressed, we want to start the timer; and when the stop button on the timer is pressed, we want to show the time on the clock.

Conventionally, you might have done something like this:
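A sketch of the coupled version (all names are illustrative):

```jsx
// Clock.jsx -- tightly coupled: the Clock reaches into the Timer's
// folder and dispatches the Timer's own action.
import { startTimer } from '../Timer/actions';

const Clock = ({ dispatch }) => (
  <button onClick={() => dispatch(startTimer())}>Start</button>
);

// …and Timer.jsx would import an action from the Clock in the same way.
```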

Suddenly, you cannot use those components separately, and reusing them becomes almost impossible!

Instead, we can use redux-saga as the “mortar” between these decoupled components, so to speak. By listening for certain actions, we can react (pun intended) in different ways, depending on the application, which means that our components are now truly reusable.

Let’s fix our components first:
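A sketch of the decoupled version: each component only announces what happened to it, and a saga translates between them (names are illustrative):

```jsx
// Clock.jsx -- only reports its own event; no knowledge of the Timer.
const Clock = ({ dispatch }) => (
  <button onClick={() => dispatch({ type: 'CLOCK_BUTTON_PRESSED' })}>
    Start
  </button>
);

// sagas.js -- the "mortar": when the clock's button is pressed, start
// the timer. Neither component imports the other.
function* clockTimerSaga() {
  while (true) {
    yield take('CLOCK_BUTTON_PRESSED');
    yield put({ type: 'START_TIMER' });
  }
}
```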

Breaking Out Of The Box: Design Inspiration (September 2016)

Smashing Magazine — 9/7/2016 11:28:18 AM

We use ad-blockers as well, you know. We gotta keep those servers running though. Did you know that we publish useful books and run friendly conferences — crafted for pros like yourself? E.g. upcoming SmashingConf Barcelona, dedicated to smart front-end techniques and design patterns.

  • September 7th, 2016

Inspiration isn’t tied to a specific timeframe, nor does it show up exactly when you need it. There isn’t a magic formula to rely on. Luckily, this year’s summer vacation was fruitful in providing us with many visual stimuli to get the creative process going. Enjoy!

The Hollywood Reporter


This editorial illustration for the Hollywood Reporter has a really nice style. The facial expressions are so well done with just a few simple lines. That’s not easy to accomplish!


Image credit: Ale Giorgini2. (Large preview3)

Wanderlust Alphabet #A


A fantastic series to get the travel bug. Just by looking at this, you’ve probably already guessed that the ‘A’ in this case stands for ‘Amsterdam’. Lovely usage of two complementary colors, cleverly applied, especially to add depth to shadows and highlights.


Image credit: Jack Daly5. (Large preview6)

Wanderlust Alphabet #B


The second installment of the travel-bug illustrations, a series I’ll be sure to follow. The letter ‘B’ stands for ‘Barcelona’. Again, look at how depth is created by using a strong color palette and this beautiful clean 2D style. The famous parts of this city are so well done with such simple lines.


Image credit: Jack Daly8. (Large preview9)

Giphy Pool


James Curran is one of the masters of animated GIFs to watch. This one made me laugh and is so colorfully pleasant. And look at that jump! Just perfect.

Image credit: James Curran10.

Slack — Work Simplified


Great integration of the colors used by Slack11. It shows how crazy work life is. Beautiful illustration style. Just perfect branding material.


Image credit: Multiple Owners13. (Large preview14)

Stamps Inspired (1965)


Illustration based on original stamps from Bulgaria. Quite beautiful! I love the bright yellow in combination with the darker color, as well as the red and orange tones.


Image credit: Fe Ribeiro16. (Large preview17)

Saga Magazine


I first noticed the interesting structure/texture that works so well with the chosen colors. Love the circles, and ‘straight’ 90° lines — very geometrical. Feels a bit like a puzzle.


Image credit: Jonny Wan19. (Large preview20)

Vans Surf Pro Classic


The colors just pop so magnificently off the screen in this surf-related illustration. Very fitting for the subject. Inspiration came from late 80s and 90s surf artwork.


Image credit: Ian Jepson22. (Large preview23)

Firenze Duomo


Lovely illustration of the Duomo in Florence. Color palette is on point. Tasty texture, too. Simply beautiful to look at and get inspired.


Image credit: Bailey Sullivan25. (Large preview26)

In The Woods


Well-taken shot of one of my favorite moments in the month of May. The smell of these beautiful bluebells is just overwhelming. Magical light and bokeh.


Image credit: Kirstin Mckee28. (Large preview29)

Mastercard Campaign


Some clever use of white space. The dog is just so awesomely done.


Image credit: Stephen Kelleher31. (Large preview32)

City Bike Miami


Love the eyes and the illusion of the hair in the wind. Same with the dog ears, so cute. That lovely summer feeling on the bike! Perfectly expressed.


Image credit: Pietari Posti34. (Large preview35)



An illustration which takes on curiosity and exploration of different tastes and flavors. It’s a great composition and an inspiring color palette.


Image credit: Aleksandar Savić1054337. (Large preview38)

Cinque Terre


Great color scheme! Illustration that documents a trip to Italy to see Cinque Terre. It’s a must-see apparently. The geometry combined with how the colors are applied is just so perfect.


Image credit: Bailey Sullivan40. (Large preview41)

Protected Trees


An illustration for The Telegraph’s property section on buying a home with protected trees in the vicinity. Some inspiring choices of shapes and lines.


Image credit: Aleksandar Savić1054337. (Large preview44)

Shop Magazine — Eye Blue


If you like interesting collages, you’ll love the work of Jimmy Turell. The colors and the half-tone effects are the items that made me pick this one.


Image credit: Jimmy Turell46. (Large preview47)

Year In Ideas (2014)


Created for Wired48. Another color combo that works wonderfully well together. I always look at how things are constructed, and I’m quite impressed by this illustration.


Image credit: Vesa Sammalisto50. (Large preview51)

The Westfjords


Beautiful colors of the sky — almost like fire.


Image credit: Conor MacNeill53. (Large preview54)



Wonderful die-cut beer label design. I always admire such great lettering work. Be sure to check out the rest of Ben Didier’s portfolio55, as there is some stellar lettering work in there.


Image credit: Ben Didier57. (Large preview58)

Wanderlust Alphabet #C


The third installment in the Wanderlust alphabet that Jack Daly is creating. This time I believe the ‘C’ stands for ‘Copenhagen’. Interesting palette of colors in this one.


Image credit: Jack Daly60. (Large preview61)

No-Li Small Batch Festival


Nice logo for the No-Li Brewhouse’s Small Batch Festival. The different styles of typefaces really work well together. Beautiful and elegant!


Image credit: Riley Cran63. (Large preview64)

SHOP Magazine Austria Spring/Summer 2016


The cover illustration for the spring edition of SHOP magazine in Austria. It depicts the Museum quarter of Vienna. Lovely combination of geometrical and elegant organic lines. Such perfect soft color tones combined with a few more brighter accents.


Image credit: Andrew Lyons66. (Large preview67)

Girl On The Go


Such a wonderful scene! Especially the colors used and the inspiring details such as the skirt of the woman and the boots of the guy sitting on the bench on the right.


Image credit: Steve Scott69. (Large preview70)

Wanderlust Alphabet #D


The fourth installment in the Wanderlust alphabet that Jack Daly is creating. This time the ‘D’ stands for ‘Dublin’. Just look at how shadows and highlights are applied — such perfect contrast.


Image credit: Jack Daly72. (Large preview73)

An Afternoon At Miticocha


Well worth a two-hour round-trip hike, I would say, if you get to see scenery like this. This place has a beautiful view of Ninachanca, Jirishanca, and Jirishanca Chico. Pure wanderlust!


Image credit: Zolashine75. (Large preview76)

Ponderosa 2016


Not picked just because there’s a bicycle in it ;) Most of all because it has a wonderful composition with fine details.


Image credit: Mads Berg78. (Large preview79)

Music Girls


A wonderful fusion between contemporary and retro. Some inspiring texture work going on in there as well. The wooden floor and the little details on the faces — look at those eyes!


Image credit: Loris Lora81. (Large preview82)

Facebook Events — Naomi


Part of a set of illustrations created for Facebook’s event cover images. Totally loving these colors! Lovely simplistic style, too.


Image credit: Naomi Wilkinson84. (Large preview85)

Hiding Behind Mom


A great example of what is possible with a few pencil strokes. The socks on the girl are adorable.


Image credit: Simona Ciraolo87. (Large preview88)

Look Around


An illustration to get the travel bug going, created for a tourist guide of Lake Garda. Great style and subtle usage of textures.


Image credit: Federica Bordoni90. (Large preview91)



“Let the waves hit your feet and the sand be your seat!” Exactly. I love compositions where there is much to discover. Great mix of colors.


Image credit: Putri Febriana93. (Large preview94)

Summer Bike Ride


Love how the diagonal line adds to the whole composition. Subtle use of shadows and transparency. Such perfect curved lines, especially hair and hat are done so perfectly in every way. Looking at this makes me want to go outside and ride.


Image credit: Tjeerd Broere96. (Large preview97)

Wanderlust Alphabet #E


The fifth installment in the Wanderlust alphabet that Jack Daly is creating. This time the ‘E’ stands for ‘Edinburgh’. The color treatment is great again. So many good details.


Image credit: Jack Daly99. (Large preview100)

American Illustration 33


Super clean and the character really gets your attention. Love how the stockings are done. So simple, yet so elegant.


Image credit: Federica Bordoni102. (Large preview103)

Procesni Mehanize


Illustrating the never-ending cycle. I always love to analyze the many elements that make a fantastic illustration. You can learn a lot from it.


Image credit: Aleksandar Savić1054337. (Large preview106)



One of the hardest things to get right is shooting directly into sunlight. This one nails it beautifully. Summer vibes!


Image credit: Anders Jildén108. (Large preview109)

Focus Magazine Illustration


Adorable cuteness and great usage of some basic shapes. Look at that cute mustache of the guy on the left. The color palette is absolutely perfect.


Image credit: Loulou and Tummie111. (Large preview112)

The Joy Of New Roads


Getting out on your bicycle and discovering new roads and amazing sceneries is a joy hard to describe in words. Lovely light in this photo.


Image credit: David Marcu114. (Large preview115)

City Guide Berlin-London-Paris


Inspiring arrangement of all the different items in this composition. Beautiful 2D style with lovely subtle textures and patterns to finish things off. The colors also draw you in.


Image credit: Maite Franchi117. (Large preview118)

Velorama — Lightyear


Love the combination of line art and typography. Looks so elegant! The bike is so well drawn. It shines! Look at the frame, the handlebars and the saddle.


Image credit: Silence TV120. (Large preview121)

Els Amos Ocults Del Totxo


The Brickmasters in the Shadows. Great editorial illustration. The duplication of the gentleman with the hat is the eye-catcher. The chosen colors make the whole scene complete.


Image credit: Raúl Soria123. (Large preview124)



Right on target! Love what is done with the lines here.


Image credit: Matt Carlson126. (Large preview127)

No Man’s Sky


Pretty fly! It looks highly complicated but is quite simplistic at the same time. The color scheme is on point, too. The subtle background gradient is so perfect.


Image credit: Justin Mezzell129. (Large preview130)

FiveThirtyEight Election


This looks fantastic! The many layers of typography are so inspiring.


Image credit: Bethany Heck132. (Large preview133)

The Secret To Sleep


A lovely muted color palette for starters, and some subtle texture work in combination with double shading make this interesting. I really love the imagination and fantasy.


Image credit: Owen Davey135. (Large preview136)

On Geoengineering


Created for an editorial piece about geoengineering. Love how it all has been translated to the screen.


Image credit: Raúl Soria138. (Large preview139)

Bicycle Adventure Meeting (BAM)


In my opinion, a bit of humor always adds something special to any illustration. This one is about the Bicycle Adventure Meeting, a place where lonely, adventurous bike travelers join together. The lovely bright colors give this illustration a happy feeling.


Image credit: Fabio Consoli141. (Large preview142)

Brooklyn Bridge


Well captured! The tranquility of the water is what does it for me. Just the right shutter speed, I’m assuming, to get the effect.


Image credit: Alexander Rotker144. (Large preview145)

SHOP Magazine — Czech Republic


Charming textured style and an inspiring color palette.


Image credit: Maïté Franchi147. (Large preview148)

Bicycling Magazine


Editorial illustration for an article on the importance of teamwork when learning road biking. I like how the three guys are nicely aligned and how the legs are drawn. As an illustrator, you have the freedom to break with reality in order to achieve beautiful compositions.


Image credit: Douglas Jones150. (Large preview151)

(yk, il)



Redesigning SGS’ Seven-Level Navigation System: A Case Study

Smashing Magazine — 9/6/2016 11:24:39 AM


  • September 6th, 2016

SGS (formerly Société Générale de Surveillance) is a global service organization and provider of inspection, verification, testing and certification services across 14 industries. SGS’ website (along with 60 localized websites) primarily promotes the organization’s core services, as well as provides access to a multitude of useful services, supplementary content and tools. Our goal was to transform sgs.com1 from being desktop-only to being responsive.

This presented a unique set of challenges, especially around the legacy navigation system, which in areas was up to seven levels deep (divided into two parts) and which consisted of some 12,000 individual navigable items.

Our natural reaction upon seeing and using SGS’ navigation system for the first time was that surely the information architecture (IA) had to be simplified because of the sheer volume of navigable links and content. However, considering the navigation had already been optimized for search engines and the IA prior to this project and considering that SGS offers a wide selection of services across many industries (reflected in the volume of content), it was evident that refactoring the IA would not be a part of the solution.


Previous navigation solution on sgs.com (View large version3)

Simply put, the navigation tree’s structure had to remain intact. Even so, that didn’t prevent us from making some minor adjustments to the IA. For instance, “News, Media & Resources” and “SGS Offices & Labs” were moved to the top level, for greater visibility. With the former, it was important to more clearly reflect that SGS regularly publishes news and hosts events. With the latter, it was vital that it, along with the contact page, were easily reachable from anywhere in the website’s structure. Therefore, the key question was how could such a behemoth of a navigation system be made to easily transition between different viewports while still being usable?

Establishing Project Policies


A healthy client-designer relationship4 is essential to the success of every project. Setting clear expectations as well as providing effective guidance ensures not only that key stakeholders remain focused throughout, but also that trust develops between all parties as the project progresses. This was definitely the case with this project; the collaboration between all parties and the mutual appreciation of each other’s roles and expertise were truly remarkable.

However, to ensure that all parties remained focused, we established at the kick-off meeting a number of important guidelines and requirements within which we could also exercise creativity (some of which we insisted on, others of which the client insisted on):

  • Content parity: Content should be accessible on every device and platform and under no circumstances should be hidden on mobile.
  • Performance: The website should perform at least 20% faster than competing websites. This was particularly useful when deciding how much information should go on each page.
  • Accessibility: The website must adhere to WCAG 2.0 level-AA accessibility guidelines. We succeeded in achieving this target, aside from a borderline color-contrast issue caused by the company’s branding.
  • Usability: The in-house team had to extensively validate concepts and conduct in-person and remote usability testing.
  • Uninterrupted business: The redesign shouldn’t disrupt the company’s business at all. Clearly, the task was not to optimize the company’s services, but rather to optimize the website, taking into account established business processes. For instance, we had the freedom to optimize web forms, but the structure of the data in the CRM had to remain intact.

The Three Major Challenges


With key guidelines established and knowing the navigation’s redesign wouldn’t require a significant overhaul of the IA, we subdivided the redesign into three key yet interdependent sets of activities:

  • Layout placement: This was handled mostly by the in-house team, with us suggesting improvements and making sure any decisions wouldn’t have radical implications for other aspects of the new responsive design.
  • Interaction and usability: These were worked on collaboratively with SGS’ design team. Ideas were exchanged via email and in on-site workshops and were regularly validated against users, stakeholders and the overall business requirements.
  • Performance: This was dealt with solely by us, because it was more of a technical challenge and didn’t require any strategic decision-making other than for us to make the new responsive website fast.

Layout Placement


Navigation is a fundamental element of page layout, regardless of the size or complexity of the website. While an off-screen pattern might seem appealing when you’re dealing with such a large-scale navigation system, remember that there can be issues5 when the navigation is not visible to the user.

SGS’ design team had initially tested a variety of concepts, because they had to not just evaluate the navigation interaction, but also create the right balance with the rest of the page and avoid clutter.


A few early (later discarded) concepts of the navigation placed in the layout (View large version7)

Deciding on the Concept


Given the complexity of the website, it was vital that the navigation always remain visible and inform the user where they are within the tree structure. Rather than divide the navigation into two parts in the UI, we wanted the new navigation system to be seamless (from the top level right through to the bottom levels). Therefore, it had to enable the user to easily browse up and down the navigation tree, as well as sideways across the main sections.

To test and validate all of these combinations, we developed a prototype for each of the eight initial navigation concepts. The prototypes confirmed what the in-house team already suspected: The most viable option in terms of usability, maintenance, cross-screen experience, visual clutter and appeal was for the navigation to be placed in the sidebar on large screens and to appear as a dropdown menu on small screens. Essentially, the navigation module would be functionally and visually self-contained, regardless of screen size.


The new navigation module would be visually and interactively identical across different viewports, enabling us to approach the design and development mobile-first. (View large version