James Reads


Day of May 20th, 2019

  • Skeptical U.S. Allies Resist Trump’s New Claims of Threats From Iran

    WASHINGTON — As the Trump administration draws up war plans against Iran over what it says are threats to American troops and interests, a senior British military official told reporters at the Pentagon on Tuesday that he saw no increased risk from Iran or allied militias in Iraq or Syria.

    Read at 08:29 am, May 20th

  • Justin Amash Is Not the Start of Anything

    Rep. Justin Amash speaks during a Politico Playbook Breakfast interview on April 6, 2017 in Washington, DC. On Saturday, Michigan Rep.

    Read at 07:16 am, May 20th

  • Functional-ish JavaScript

    Functional programming is a great discipline to learn and apply when writing JavaScript. Writing stateless, idempotent, side-effect-free code really does solve a lot of problems. But there’s a growing impression in the community that functional programming is an all-or-nothing practice.
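The "functional-ish" idea the excerpt describes can be sketched in a few lines (my own illustration; `applyDiscount` and `checkout` are hypothetical names, not from the article): keep the core logic pure and push side effects to an outer shell, without committing to functional programming everywhere.

```javascript
// Pure: same input always yields the same output, and nothing outside
// the function is touched. Easy to test and reason about.
const applyDiscount = (cart, rate) =>
  cart.map((item) => ({ ...item, price: item.price * (1 - rate) }));

// Impure shell: the side effect (logging/IO) lives here, not in the logic.
function checkout(cart) {
  const discounted = applyDiscount(cart, 0.1);
  console.log(`Total: ${discounted.reduce((sum, i) => sum + i.price, 0)}`);
  return discounted;
}
```

The pure core can be unit-tested in isolation, while the imperative shell stays small; nothing here requires adopting a functional style for the whole codebase.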

    Read at 07:16 am, May 20th

Day of May 19th, 2019

  • Ignoring Trump’s Orders, Hoping He’ll Forget

    On March 29, during a weekend jaunt to Mar-a-Lago, Donald Trump announced a major policy decision that surprised top-ranking officials in several government agencies. The United States was cutting off aid to Honduras, Guatemala, and El Salvador, the president said.

    Read at 09:35 pm, May 19th

  • Announcing TypeScript 3.5 RC

    Today we’re happy to announce the availability of our release candidate (RC) of TypeScript 3.5. Our hope is to collect feedback and early issues to ensure our final release is simple to pick up and use right away. Let’s explore what’s new in 3.5!

    Read at 09:28 pm, May 19th

  • Most American Christians Believe They’re Victims of Discrimination

    Many, many Christians believe they are subject to religious discrimination in the United States.

    Read at 09:19 pm, May 19th

  • Woke Aziz Ansari Debuts in Brooklyn: I’ve Had a ‘Reckoning’ After #MeToo Allegations

    Arriving at the first of Aziz Ansari’s string of New York City stops on his Road to Nowhere tour at the Brooklyn Academy of Music (BAM) means first playing a game of Frogger with the Ubers on Lafayette Avenue and wading through a thick cloud of vape smoke, until you eventually arrive at a checkpoint…

    Read at 12:13 pm, May 19th

  • Replacing Google Analytics with GoAccess

    Google Analytics is a good tool: it’s free, easy to implement, and has served me well over the years.

    Read at 11:34 am, May 19th

  • A Deep Dive into Native Lazy-Loading for Images and Frames

    The initial, server-side HTML response includes an img element without the src attribute, so the browser does not load any data. Instead, the image's URL is set as another attribute in the element's data set, e.g. data-src: <img data-src="https://tiny.pictures/example1.…
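The data-src pattern the excerpt describes can be sketched like this (my own illustration, not the article's code; `activateImage` is a hypothetical name). The core swap is a plain function; in a browser, an IntersectionObserver would typically drive it:

```javascript
// Copy the real URL from data-src into src once the image should load.
// Works on anything element-like with a `dataset.src` entry and a `src` slot.
function activateImage(img) {
  if (img.dataset && img.dataset.src && !img.src) {
    img.src = img.dataset.src;
  }
  return img;
}

// In a browser, observe the placeholders and swap when they near the viewport.
// Guarded so the sketch also runs outside the DOM.
if (typeof IntersectionObserver !== "undefined" && typeof document !== "undefined") {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        activateImage(entry.target);
        observer.unobserve(entry.target);
      }
    }
  });
  document.querySelectorAll("img[data-src]").forEach((el) => observer.observe(el));
}
```

The article's point is that native lazy loading would make this kind of JavaScript machinery unnecessary.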

    Read at 12:02 am, May 19th

  • Footnotes That Work in RSS Readers | CSS-Tricks

    Feedbin is the RSS reader I'm using at the moment. I was reading one of Harry's blog posts on it the other day, and I noticed a nice little interactive touch right inside Feedbin. There was a button-looking element with the number one which, as it turned out, was a footnote. I hovered over it, and it revealed the note.

    The HTML for the footnote on the blog post itself looks like this:

        <p>...they’d managed to place 27.9MB of images onto the Critical Path.
        Almost 30MB of previously non-render blocking assets had just been
        turned into blocking ones on purpose with no escape hatch. Start
        render time was as high as 27.1s over a cable connection<sup id="fnref:1">
        <a href="#fn:1" class="footnote">1</a></sup>.</p>

    Just an anchor link that points to #fn:1, and the <sup> makes it look like a footnote link. This is how the styling would look by default (screenshot in the original post). The HTML for the list of footnotes at the bottom of the blog post looks like this:

        <div class="footnotes">
          <ol>
            <li id="fn:1">
              <p>5Mb up, 1Mb down, 28ms RTT.&nbsp;<a href="#fnref:1" class="reversefootnote">&#x21a9;</a></p>
            </li>
          </ol>
        </div>

    As a little side note, I notice Harry is using scroll-behavior to smooth the scroll. He's also got some nice :target styling in there. All in all, we have: a link to go down and read the note, and a link to pop back up. Nothing special there. No fancy libraries or anything. Just semantic HTML. That should work in any RSS reader, assuming they don't futz with the hash links and maintain the IDs on the elements as written.

    It's Feedbin that sees this markup pattern and decides to do the extra UI styling and fancy interaction. By inspecting what's going on, it looks like they hide the originals and replace them with their own special stuff. Ah ha! A Bigfoot spotting! It's right in their source. That means they fire off Bigfoot when articles are loaded and it does the trick. Like this: See the Pen Bigfoot Footnotes by Chris Coyier (@chriscoyier) on CodePen. That said, it's based on an already functional foundation.

    Lemme end this with that same markup pattern, and I'll try to look at it in different RSS readers to see what they do. Feel free to report what it does in your RSS reader of choice in the comments, if it does anything at all.

    Azul is an abstract board game designed by Michael Kiesling and released by Plan B Games in 2017. From two to four players collect tiles to fill up a 5x5 player board. Players collect tiles by taking all the tiles of one color from a repository, and placing them in a row, taking turns until all the tiles for that round are taken. At that point, one tile from every filled row moves over to each player's 5x5 board, while the rest of the tiles in the filled row are discarded. Each tile scores based on where it is placed in relation to other tiles on the board. Rounds continue until at least one player has made a row of tiles all the way across their 5x5 board.

    Source: Footnotes That Work in RSS Readers | CSS-Tricks

    Read at 08:13 pm, May 19th

  • How Rust’s standard library was vulnerable for years and nobody noticed

    Source: How Rust’s standard library was vulnerable for years and nobody noticed

    Read at 02:27 pm, May 19th

  • Everything You Ever Wanted to Know About inputmode | CSS-Tricks

    The inputmode global attribute provides a hint to browsers for devices with onscreen keyboards to help them decide which keyboard to display when a user has selected any input or textarea element.

        <input type="text" inputmode="" />
        <textarea inputmode="" />

    Unlike changing the type of the form, inputmode doesn’t change the way the browser interprets the input — it instructs the browser which keyboard to display.

    The inputmode attribute has a long history but has only very recently been adopted by the two major mobile browsers: Safari for iOS and Chrome for Android. Before that, it was implemented in Firefox for Android way back in 2012, and then subsequently removed several months later (though it is still available via a flag). Almost six years later, Chrome for Android implemented the feature — and with the recent release of iOS 12.2, Safari now supports it too.

    Browser support at the time of writing. Desktop: Chrome 66, Opera 53, Firefox 20, IE no, Edge 75, Safari no. Mobile: iOS Safari 12.2, Opera Mobile no, Opera Mini no, Android 67, Android Chrome 74, Android Firefox no.

    But before we go deep into the ins and outs of the attribute, consider that the WHATWG living standard provides inputmode documentation while the W3C 5.2 spec no longer lists it in its contents, which suggests they consider it obsolete. Given that WHATWG has documented it and browsers have worked toward supporting it, we’re going to assume WHATWG specifications are the standard.

    inputmode accepts a number of values. Let’s go through them, one by one.

    None

        <input type="text" inputmode="none" />

    We’re starting here because it’s very possible we don’t want any type of keyboard on an input. Using inputmode=none will not show a keyboard at all on Chrome for Android. iOS 12.2 will still show its default alphanumeric keyboard, so specifying none could be sort of a reset for iOS in that regard. Regardless, none is intended for content that renders its own keyboard control.

    Numeric

        <input type="text" inputmode="numeric" />

    This is probably one of the more common inputmode values out in the wild because it’s ideal for inputs that require numbers but no letters — things like PIN entry, zip codes, credit card numbers, etc. Using the numeric value with an input of type="text" may actually make more sense than setting the input to type="number" alone because, unlike a numeric input, inputmode="numeric" can be used with the maxlength, minlength and pattern attributes, making it more versatile for different use cases.

    The numeric value on Chrome Android (left) and iOS 12.2 (right).

    I’ve often seen sites using type=tel on an input to display a numeric keyboard, and that checks out as a workaround, but isn’t semantically correct. If that bums you out, remember that inputs support the pattern attribute; we can add pattern="\d*" to the input for the same effect. That said, only use this if you are certain the input should only allow numeric input, because Android (unlike iOS) doesn’t allow the user to switch the keyboard to use letters, which might inadvertently prevent users from submitting valid data.

    Tel

        <input type="text" inputmode="tel" />

    Entering a telephone number using a standard alphanumeric keyboard can be a pain. For one, each number on a telephone keyboard (except 1 and 0) represents three letters (e.g. 2 represents A, B and C) which are displayed with the number. The alphanumeric keyboard does not reference those letters, so decoding a telephone number containing letters (e.g. 1-800-COLLECT) takes more mental power.

    The tel value on Chrome Android (left) and iOS 12.2 (right).

    Using inputmode set to tel will produce a standard-looking telephone keyboard, including keys for digits 0 to 9, the pound (#) character, and the asterisk (*) character. Plus, we get those alphabetic mnemonic labels (e.g. ABC).

    Decimal

        <input type="text" inputmode="decimal" />

    The decimal value on Chrome Android (left) and iOS 12.2 (right).

    An inputmode set to the decimal value results in a subtle change in iOS, where the keyboard appears to be exactly the same as the tel value but replaces the +*# key with a simple decimal (.). On the flip side, this has no effect on Android, which will simply use the numeric keyboard.

    Email

        <input type="text" inputmode="email" />

    I’m sure you (and at least someone you know) have filled out a form that asks for an email address, only to have to swap keyboards to access the @ character. It’s not life-threatening or anything, but certainly not a great user experience either. That’s where the email value comes in. It brings the @ character into the fray, as well as the decimal (.) character.

    The email value on Chrome Android (left) and iOS 12.2 (right).

    This could also be a useful keyboard to show users who need to enter a Twitter username, given that @ is a core Twitter character for identifying users. However, the email address suggestions that iOS displays above the keyboard may cause confusion. Another use case could be if you have your own email validation script and don't want to use the browser's built-in email validation.

    URL

        <input type="text" inputmode="url" />

    The url value on Chrome Android (left) and iOS 12.2 (right).

    The url value provides a handy shortcut for users to append TLDs (e.g. .com or .co.uk) with a single key, as well as keys typically used in web addresses, like the dot (.) and forward slash (/) characters. The exact TLD displayed on the keyboard is tied to the user’s locale. This could also be a useful keyboard to show users if your input accepts domain names (e.g. css-tricks.com) as well as full URIs (e.g. https://css-tricks.com). Use type="url" instead if your input requires validating the input.

    Search

        <input type="text" inputmode="search" />

    The search value on Chrome Android (left) and iOS 12.2 (right).

    This displays a blue Go key on iOS and a green Enter key on Android, both in place of the Return key. As you may have guessed by the value’s name, search is useful for search forms, providing that submission key to make a query. If you'd like to show Search instead of Enter on iOS and a magnifying glass icon on Android in place of the green arrow, use type=search instead.

    Other things you oughta know

    Chromium-based browsers on Android — such as Microsoft Edge, Brave and Opera — share the same inputmode behavior as Chrome. I haven’t included details of keyboards on iPad for the sake of brevity; it’s mostly the same as iPhone but includes more keys. The same is true for Android tablets, save for third-party keyboards, which might be another topic worth covering. The original proposed spec had the values kana and katakana for Japanese input, but they were never implemented by any browser and have since been removed from the spec. latin-name was also one of the values of the original spec and has since been removed. Interestingly, if it’s used now on Safari for iOS, it will display the user’s name as a suggestion above the keyboard.

    Demo

    Want to see how all of these input modes work for yourself? The original post links a demo you can use on a device with a touchscreen keyboard to see the differences.

    Source: Everything You Ever Wanted to Know About inputmode | CSS-Tricks

    Read at 02:16 pm, May 19th

  • Faster script loading with BinaryAST?

    JavaScript cold starts

    The performance of applications on the web platform is becoming increasingly bottlenecked by startup (load) time. Large amounts of JavaScript code are required to create the rich web experiences that we’ve become used to. When we look at the total size of JavaScript requested on mobile devices from HTTPArchive, we see that an average page loads 350KB of JavaScript, while 10% of pages go over the 1MB threshold. The rise of more complex applications can push these numbers even higher.

    While caching helps, popular websites regularly release new code, which makes cold start (first load) times particularly important. With browsers moving to separate caches for different domains to prevent cross-site leaks, the importance of cold starts is growing even for popular subresources served from CDNs, as they can no longer be safely shared.

    Usually, when talking about cold start performance, the primary factor considered is raw download speed. However, on modern interactive pages one of the other big contributors to cold starts is JavaScript parsing time. This might seem surprising at first, but makes sense: before starting to execute the code, the engine has to first parse the fetched JavaScript, make sure it doesn’t contain any syntax errors, and then compile it to the initial bytecode. As networks become faster, parsing and compilation of JavaScript could become the dominant factor.

    Device capability (CPU or memory performance) is the most important factor in the variance of JavaScript parsing times and, correspondingly, the time to application start.
    A 1MB JavaScript file will take on the order of 100 ms to parse on a modern desktop or high-end mobile device, but can take over a second on an average phone (Moto G4). A more detailed post on the overall cost of parsing, compiling and executing JavaScript shows how JavaScript boot time can vary on different mobile devices. For example, in the case of news.google.com, it can range from 4s on a Pixel 2 to 28s on a low-end device.

    While engines continuously improve raw parsing performance (with V8 in particular doubling it over the past year), as well as moving more things off the main thread, parsers still have to do lots of potentially unnecessary work that consumes memory and battery and might delay the processing of useful resources.

    The “BinaryAST” proposal

    This is where BinaryAST comes in. BinaryAST is a new over-the-wire format for JavaScript, proposed and actively developed by Mozilla, that aims to speed up parsing while keeping the semantics of the original JavaScript intact. It does so by using an efficient binary representation for code and data structures, as well as by storing and providing extra information to guide the parser ahead of time. The name comes from the fact that the format stores the JavaScript source as an AST encoded into a binary file. The specification lives at tc39.github.io/proposal-binary-ast and is being worked on by engineers from Mozilla, Facebook, Bloomberg and Cloudflare.

    “Making sure that web applications start quickly is one of the most important, but also one of the most challenging parts of web development. We know that BinaryAST can radically reduce startup time, but we need to collect real-world data to demonstrate its impact.
    Cloudflare's work on enabling use of BinaryAST with Cloudflare Workers is an important step towards gathering this data at scale.” - Till Schneidereit, Senior Engineering Manager, Developer Technologies, Mozilla

    Parsing JavaScript

    For regular JavaScript code to execute in a browser, the source is parsed into an intermediate representation known as an AST, which describes the syntactic structure of the code. This representation can then be compiled into bytecode or native machine code for execution. A simple example of adding two numbers can be represented in an AST (the original post shows the diagram).

    Parsing JavaScript is not an easy task: no matter which optimisations you apply, it still requires reading the entire text file char by char, while tracking extra context for syntactic analysis. The goal of BinaryAST is to reduce the complexity and the amount of work the browser parser has to do overall, by providing additional information and context at the time and place where the parser needs it. To execute JavaScript delivered as BinaryAST, fewer steps are required (the original post illustrates them in a diagram).

    Another benefit of BinaryAST is that it makes it possible to parse only the critical code necessary for start-up, completely skipping over the unused bits. This can dramatically improve the initial loading time.

    This post will now describe some of the challenges of parsing JavaScript in more detail, explain how the proposed format addresses them, and how we made it possible to run its encoder in Workers.

    Hoisting

    JavaScript relies on hoisting for all declarations: variables, functions, classes.
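The "adding two numbers" AST mentioned above can be sketched as a plain object (my own illustration in the ESTree convention; the node names are the ESTree ones, not necessarily BinaryAST's), with a toy evaluator to show that the tree fully describes the code:

```javascript
// One common (ESTree-style) AST shape for the expression `1 + 2`.
const ast = {
  type: "ExpressionStatement",
  expression: {
    type: "BinaryExpression",
    operator: "+",
    left: { type: "Literal", value: 1 },
    right: { type: "Literal", value: 2 },
  },
};

// A toy evaluator walking the tree: the engine's compiler does the same
// kind of traversal when turning an AST into bytecode.
function evaluate(node) {
  switch (node.type) {
    case "ExpressionStatement":
      return evaluate(node.expression);
    case "BinaryExpression":
      if (node.operator === "+") return evaluate(node.left) + evaluate(node.right);
      throw new Error("unsupported operator");
    case "Literal":
      return node.value;
    default:
      throw new Error("unsupported node type");
  }
}
```

BinaryAST's idea is to ship a binary encoding of a tree like this directly, so the browser never has to recover it from text.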
    Hoisting is a property of the language that allows you to declare items after the point where they’re syntactically used. Let's take the following example:

        function f() { return g(); }
        function g() { return 42; }

    Here, when the parser is looking at the body of f, it doesn’t know yet what g is referring to: it could be an already existing global function or something declared further down in the same file, so it can’t finalise parsing of the original function and start the actual compilation. BinaryAST fixes this by storing all the scope information and making it available upfront, before the actual expressions, as shown by the difference between the initial AST and the enhanced AST in a JSON representation (diagrams in the original post).

    Lazy parsing

    One common technique used by modern engines to improve parsing times is lazy parsing. It utilises the fact that lots of websites include more JavaScript than they actually need, especially for start-up. Working around this involves a set of heuristics that try to guess when any given function body in the code can be safely skipped by the parser initially and delayed for later.
    A common example of such a heuristic is immediately running the full parser for any function that is wrapped in parentheses:

        (function(...

    Such a prefix usually indicates that the following function is going to be an IIFE (immediately-invoked function expression), and so the parser can assume that it will be compiled and executed ASAP and wouldn’t benefit from being skipped over and delayed for later.

        (function() { … })();

    These heuristics significantly improve the performance of the initial parsing and cold starts, but they’re not completely reliable or trivial to implement. One of the reasons is the same as in the previous section: even with lazy parsing, you still need to read the contents, analyse them and store additional scope information for the declarations. Another reason is that the JavaScript specification requires reporting any syntax errors immediately during load time, and not when the code is actually executed. A class of these errors, called early errors, checks for mistakes like usage of reserved words in invalid contexts, strict mode violations, variable name clashes and more. All of these checks require not only lexing the JavaScript source, but also tracking extra state even during lazy parsing.

    Having to do such extra work means you need to be careful about marking functions as lazy too eagerly, especially if they actually end up being executed during the page load. Otherwise you’re making cold start costs even worse, as now every function that is erroneously marked as lazy needs to be parsed twice: once by the lazy parser and then again by the full one. Because BinaryAST is meant to be an output format of tools such as Babel, TypeScript and bundlers such as Webpack, the browser parser can rely on the JavaScript being already analysed and verified by the initial parser.
    This allows it to skip function bodies completely, making lazy parsing essentially free. It reduces the cost of completely unused code: while including it is still a problem in terms of network bandwidth (don’t do this!), at least it’s not affecting parsing times anymore. These benefits apply equally to code that is used later in the page lifecycle (for example, invoked in response to user actions) but is not required during startup.

    A last but not least benefit of this approach is that BinaryAST encodes lazy annotations as part of the format, giving tools and developers direct and full control over the heuristics. For example, a tool targeting the Web platform or a framework CLI can use its domain-specific knowledge to mark some event handlers as lazy or eager depending on the context and the event type.

    Avoiding ambiguity in parsing

    Using a text format for a programming language is great for readability and debugging, but it's not the most efficient representation for parsing and execution. For example, parsing low-level types like numbers, booleans and even strings from text requires extra analysis and computation, which is unnecessary when you can just store them as native binary-encoded values in the first place and read them directly on the other side. Another problem is ambiguity in the grammar itself. It was already an issue in the ES5 world, but could usually be resolved with some extra bookkeeping based on previously seen tokens.
    However, in ES6+ there are productions that can be ambiguous all the way through until they’re parsed completely. For example, a token sequence like:

        (a, {b: c, d}, [e = 1])...

    can start either a parenthesised comma expression with nested object and array literals and an assignment:

        (a, {b: c, d}, [e = 1]); // it was an expression

    or a parameter list of an arrow function expression with nested object and array patterns and a default value:

        (a, {b: c, d}, [e = 1]) => … // it was a parameter list

    Both representations are perfectly valid, but have completely different semantics, and you can’t know which one you’re dealing with until you see the final token. To work around this, parsers usually have to either backtrack, which can easily get exponentially slow, or parse the contents into intermediate node types that are capable of holding both expressions and patterns, with a conversion afterwards. The latter approach preserves linear performance, but makes the implementation more complicated and requires preserving more state. In the BinaryAST format this issue doesn't exist in the first place, because the parser sees the type of each node before it even starts parsing its contents.

    Cloudflare implementation

    Currently, the format is still in flux, but the very first version of the client-side implementation was released under a flag in Firefox Nightly several months ago. Keep in mind this is only an initial unoptimised prototype, and there are already several experiments changing the format to provide improvements to both size and parsing performance. On the producer side, the reference implementation lives at github.com/binast/binjs-ref.
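The grammar ambiguity described above is easy to verify: the same token prefix is valid JavaScript under both readings. A small self-contained demonstration (my own, not from the post):

```javascript
// Shared outer bindings so the "expression" reading has something to refer to.
let e;
const c = 2, d = 3;
const a = 1;

// Reading 1: a parenthesised comma expression. `{b: c, d}` is an object
// literal, `[e = 1]` is an array containing an assignment; the comma
// operator yields the last operand, so `result` is [1] and `e` becomes 1.
const result = (a, {b: c, d}, [e = 1]);

// Reading 2: the very same tokens, followed by `=>`, are a parameter list
// with destructuring patterns and a default value.
const pick = (a, {b: c, d}, [e = 1]) => [a, c, d, e];
```

A parser only learns which reading applies when it reaches (or fails to reach) the `=>` token, which is why text-based parsers must backtrack or build intermediate "expression-or-pattern" nodes here.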
    Our goal was to take this reference implementation and consider how we would deploy it at Cloudflare scale. If you dig into the codebase, you will notice that it currently consists of two parts.

    One is the encoder itself, which is responsible for taking a parsed AST, annotating it with scope and other relevant information, and writing out the result in one of the currently supported formats. This part is written in Rust and is fully native. The other part is what produces that initial AST: the parser. Interestingly, unlike the encoder, it's implemented in JavaScript. Unfortunately, there is currently no battle-tested native JavaScript parser with an open API, let alone one implemented in Rust. There have been a few attempts, but, given the complexity of the JavaScript grammar, it’s better to wait a bit and make sure they’re well-tested before incorporating one into the production encoder.

    On the other hand, over the last few years the JavaScript ecosystem has grown to extensively rely on developer tools implemented in JavaScript itself. In particular, this gave a push to rigorous parser development and testing. There are several JavaScript parser implementations that have been proven to work on thousands of real-world projects. With that in mind, it makes sense that the BinaryAST implementation chose to use one of them (in particular, Shift) and integrated it with the Rust encoder, instead of attempting to use a native parser.

    Connecting Rust and JavaScript

    Integration is where things get interesting. Rust is a native language that can compile to an executable binary, but JavaScript requires a separate engine to be executed. To connect them, we need some way to transfer data between the two without sharing memory. Initially, the reference implementation generated JavaScript code with an embedded input on the fly, passed it to Node.js, and then read the output when the process had finished.
    That code contained a call to the Shift parser with an inlined input string and produced the AST back in JSON format. This doesn’t scale well when parsing lots of JavaScript files, so the first thing we did was transform the Node.js side into a long-living daemon. Now Rust could spawn a required Node.js process just once and keep passing inputs into it and getting responses back as individual messages.

    Running in the cloud

    While the Node.js solution worked fairly well after these optimisations, shipping both a Node.js instance and a native bundle to production requires some effort. It's also potentially risky and requires manual sandboxing of both processes to make sure we don’t accidentally start executing malicious code. On the other hand, the only thing we needed from Node.js was the ability to run the JavaScript parser code. And we already have an isolated JavaScript engine running in the cloud: Cloudflare Workers! By additionally compiling the native Rust encoder to Wasm (which is quite easy with the native toolchain and wasm-bindgen), we can even run both parts of the code in the same process, making cold starts and communication much faster than in the previous model.

    Optimising data transfer

    The next logical step is to reduce the overhead of data transfer. JSON worked fine for communication between separate processes, but with a single process we should be able to retrieve the required bits directly from the JavaScript-based AST. To attempt this, first of all, we needed to move away from direct JSON usage to something more generic that would allow us to support various input formats. The Rust ecosystem already has an amazing serialisation framework for that: Serde. Aside from allowing us to be more flexible in regard to the inputs, rewriting to Serde helped an existing native use case too.
    Now, instead of parsing JSON into an intermediate representation and then walking through it, all the native typed AST structures can be deserialised directly from the stdout pipe of the Node.js process in a streaming manner. This significantly improved both CPU usage and memory pressure.

    But there is one more thing we can do: instead of serialising and deserialising through an intermediate format (let alone a text format like JSON), we should be able to operate [almost] directly on JavaScript values, saving memory and repetitive work. How is this possible? wasm-bindgen provides a type called JsValue that stores a handle to an arbitrary value on the JavaScript side. This handle internally contains an index into a predefined array. Each time a JavaScript value is passed to the Rust side as a result of a function call or a property access, it’s stored in this array and an index is sent to Rust. The next time Rust wants to do something with that value, it passes the index back; the JavaScript side retrieves the original value from the array and performs the required operation.

    By reusing this mechanism, we could implement a Serde deserialiser that requests only the required values from the JS side and immediately converts them to their native representation. It’s now open-sourced at https://github.com/cloudflare/serde-wasm-bindgen.

    At first, we got much worse performance out of this, due to the overhead of more frequent calls between 1) Wasm and JavaScript (SpiderMonkey has improved these recently, but other engines still lag behind) and 2) JavaScript and C++, which also can’t be optimised well in most engines. The JavaScript <-> C++ overhead comes from the usage of TextEncoder to pass strings between JavaScript and Wasm in wasm-bindgen, and, indeed, it showed up as the highest cost in the benchmark profiles.
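The JsValue handle mechanism described above can be sketched in plain JavaScript (my own simplified illustration, not wasm-bindgen's actual code, which also recycles slots through a free list and reserves some low indices):

```javascript
// The predefined array holding JS values on behalf of the Wasm side.
const heap = [];

// Store a JS value and hand an integer index across the Wasm boundary;
// only this number, never the value itself, crosses into Rust.
function addHeapObject(value) {
  heap.push(value);
  return heap.length - 1;
}

// When Rust wants to operate on the value, it passes the index back
// and the JS side looks the original value up.
function getObject(handle) {
  return heap[handle];
}

// Dropping a handle releases the slot so the value can be garbage-collected.
function dropObject(handle) {
  heap[handle] = undefined;
}
```

Because only small integers cross the boundary, a deserialiser built on this scheme can fetch exactly the fields it needs instead of serialising the whole AST to text.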
    This wasn’t surprising: after all, strings can appear not only in value payloads but also in property names, which have to be serialised and sent between JavaScript and Wasm over and over when using a generic JSON-like structure. Luckily, because our deserialiser doesn’t have to be compatible with JSON anymore, we can use our knowledge of Rust types and cache all the serialised property names as JavaScript value handles just once, and then keep reusing them for further property accesses.

    This, combined with some changes to wasm-bindgen which we have upstreamed, allows our deserialiser to be up to 3.5x faster in benchmarks than the original Serde support in wasm-bindgen, while saving ~33% off the resulting code size. Note that for string-heavy data structures it might still be slower than the current JSON-based integration, but the situation is expected to improve over time once the reference types proposal lands natively in Wasm. After implementing and integrating this deserialiser, we used the wasm-pack plugin for Webpack to build a Worker with both the Rust and JavaScript parts combined, and shipped it to some test zones.

    Show me the numbers

    Keep in mind that this proposal is in very early stages, and current benchmarks and demos are not representative of the final outcome (which should improve the numbers much further). As mentioned earlier, BinaryAST can mark functions that should be parsed lazily ahead of time.
By using different levels of lazification in the encoder (https://github.com/binast/binjs-ref/blob/b72aff7dac7c692a604e91f166028af957cdcda5/crates/binjs_es6/src/lazy.rs#L43) and running tests against some popular JavaScript libraries, we found the following speed-ups.

Level 0 (no functions are lazified)

With lazy parsing disabled in both parsers we got a raw parsing speed improvement of between 3 and 10%.

| Name | Source size (kb) | JavaScript parse time (average ms) | BinaryAST parse time (average ms) | Diff (%) |
|---|---|---|---|---|
| React | 20 | 0.403 | 0.385 | -4.56 |
| D3 (v5) | 240 | 11.178 | 10.525 | -6.018 |
| Angular | 180 | 6.985 | 6.331 | -9.822 |
| Babel | 780 | 21.255 | 20.599 | -3.135 |
| Backbone | 32 | 0.775 | 0.699 | -10.312 |
| wabtjs | 1720 | 64.836 | 59.556 | -8.489 |
| Fuzzball (1.2) | 72 | 3.165 | 2.768 | -13.383 |

Level 3 (functions up to 3 levels deep are lazified)

But with the lazification set to skip nested functions of up to 3 levels, we see much more dramatic improvements in parsing time, between 90 and 97%. As mentioned earlier in the post, BinaryAST makes lazy parsing essentially free by completely skipping over the marked functions.

| Name | Source size (kb) | Parse time (average ms) | BinaryAST parse time (average ms) | Diff (%) |
|---|---|---|---|---|
| React | 20 | 0.407 | 0.032 | -92.138 |
| D3 (v5) | 240 | 11.623 | 0.224 | -98.073 |
| Angular | 180 | 7.093 | 0.680 | -90.413 |
| Babel | 780 | 21.100 | 0.895 | -95.758 |
| Backbone | 32 | 0.898 | 0.045 | -94.989 |
| wabtjs | 1720 | 59.802 | 1.601 | -97.323 |
| Fuzzball (1.2) | 72 | 2.937 | 0.089 | -96.970 |

All the numbers are from manual tests on a Linux x64 Intel i7 with 16 GB of RAM.

While these synthetic benchmarks are impressive, they are not representative of real-world scenarios. Normally you will use at least some of the loaded JavaScript during startup.
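If you want a rough feel for raw parse costs on your own machine, forcing a parse without execution is enough. This is a crude sketch, not the harness behind the tables above:

```javascript
// Rough timing of how long the engine takes to parse/compile a script body.
// new Function parses the source string without running the declarations,
// so this is a crude proxy for parse cost only.
const source = "function f(){ return 42; }".repeat(1000);

const start = performance.now();
new Function(source);           // parse + compile, but don't execute
const elapsed = performance.now() - start;
console.log(`parsed ${source.length} bytes in ${elapsed.toFixed(2)} ms`);
```

Real engines cache, lazily parse, and optimise in ways this one-liner can't capture, which is exactly why the article measures real page loads next.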
To check this scenario, we decided to test some realistic pages and demos on desktop and mobile Firefox, and found speed-ups in page loads too.

For a sample application (https://github.com/cloudflare/binjs-demo, https://serve-binjs.that-test.site/) which weighed in at around 1.2 MB of JavaScript, we got the following numbers for initial script execution:

| Device | JavaScript | BinaryAST |
|---|---|---|
| Desktop | 338ms | 314ms |
| Mobile (HTC One M8) | 2019ms | 1455ms |

Here is a video that will give you an idea of the improvement as seen by a user on mobile Firefox (in this case showing the entire page startup time).

The next step is to start gathering data on real-world websites, while improving the underlying format.

How do I test BinaryAST on my website?

We've open-sourced our Worker so that it can be installed on any Cloudflare zone: https://github.com/binast/binjs-ref/tree/cf-wasm.

One thing to be wary of for now is that, even though the result gets stored in the cache, the initial encoding is still an expensive process, and might easily hit CPU limits on any non-trivial JavaScript files and fall back to the unencoded variant. We are working to improve this situation by releasing the BinaryAST encoder as a separate feature with more relaxed limits in the next few days.

Meanwhile, if you want to play with BinaryAST on larger real-world scripts, an alternative option is to use the static binjs_encode tool from https://github.com/binast/binjs-ref to pre-encode JavaScript files ahead of time.
Then, you can use a Worker from https://github.com/cloudflare/binast-cf-worker to serve the resulting BinaryAST assets when supported and requested by the browser.

On the client side, you'll currently need to download Firefox Nightly, go to about:config and enable unrestricted BinaryAST support. Now, when opening a website with either of the Workers installed, Firefox will get BinaryAST instead of JavaScript automatically.

Summary

The amount of JavaScript in modern apps is presenting performance challenges for all consumers. Engine vendors are experimenting with different ways to improve the situation: some are focusing on raw decoding performance, some on parallelizing operations to reduce overall latency, some are researching new optimised formats for data representation, and some are inventing and improving protocols for network delivery.

No matter which one it is, we all have a shared goal of making the Web better and faster. On Cloudflare's side, we're always excited about collaborating with all the vendors and combining various approaches to make that goal closer with every step.

Source: Faster script loading with BinaryAST?

    Read at 02:06 pm, May 19th

  • Iterating a React Design with Styled Components | CSS-Tricks

In a perfect world, our projects would have unlimited resources and time. Our teams would begin coding with well thought out and highly refined UX designs. There would be consensus among developers about the best way to approach styling. There'd be one or more CSS gurus on the team who could ensure that functionality and style could roll out simultaneously without it turning into a train-wreck. I've actually seen this happen in large enterprise environments. It's a beautiful thing. This article is not for those people. On the flip side of the coin is the tiny startup that has zero funding, one or two front-end developers, and a very short timeline to demonstrate some functionality. It doesn't have to look perfect, but it should at least render reasonably well on desktop, tablet, and mobile. This gets them to a point where it can be shown to advisors and early users; maybe even potential investors who've expressed an interest in the concept. Once they get some cashflow from sales and/or investment, they can get a dedicated UX designer and polish the interface. What follows is for this latter group.

Project Kickoff Meeting

Let's invent a company to get the ball rolling. Solar Excursions is a small travel agency aiming to serve the near-future's burgeoning space tourism industry. Our tiny development team has agreed that React will be used for the UI. One of our front-end developers is big on Sass, and the other is enamored with CSS in JavaScript. But they'll be hard pressed to knock out their initial sprint goals; there's certainly no time for arguing about the best possible styling approach. Both coders agree the choice doesn't matter much in the long run, as long as it's consistently executed. They're certain that implementing the styling from scratch under the gun now will incur technical debt that will have to be cleaned up later. After some discussion, the team opts to plan for one or more "styling refactor" sprints.
For now, we'll just focus on getting something up on the screen using React-Bootstrap. That way we'll be able to quickly build working desktop and mobile layouts without much fuss. The less time spent on front-end styling the better, because we'll also need the UI to hook up to the services our backend developer will be cranking out. And, as our application architecture begins to take shape, both front-enders agree it's important that it be unit tested. They have a lot on their plate. Based on my discussions with the Powers That Be, as a dedicated project manager, I slaved over Balsamiq for at least ten minutes to provide the team with mockups for the booking page on desktop and mobile. I assume they'll make tablet meet in the middle and look reasonable. (Initial mockups for the Solar Excursions trip booking page on desktop and mobile.)

Sprint Zero: Review Meeting

Pizza all around! The team worked really hard to hit its goals, and we now have a booking page with a layout that approximates the mockups. The infrastructure for services is coming together, but there's quite a way to go before we can connect the UI to it. In the interim, the front-enders are using a hardcoded mock data structure. (The first iteration of the page in code, using react-bootstrap.) Here's a look at our UI code so far. This is all straightforward React. We're using some of that Hooks hotness, but it's probably passé to most of you by now. The key takeaway to notice here is how four of our five application components import and use components from react-bootstrap. Only the main App component is unaffected. That's because it just composes the top level view with our custom components.
```javascript
// App.js imports
import React, { useState } from "react";
import Navigation from "./Navigation";
import Page from "./Page";

// Navigation.js imports
import React from "react";
import { Navbar, Dropdown, Nav } from "react-bootstrap";

// Page.js imports
import React from "react";
import PosterCarousel from "./PosterCarousel";
import DestinationLayout from "./DestinationLayout";
import { Container, Row, Col } from "react-bootstrap";

// PosterCarousel.js imports
import React from "react";
import { Alert, Carousel, Image } from "react-bootstrap";

// DestinationLayout.js imports
import React, { useState, useEffect } from "react";
import {
  Button,
  Card,
  Col,
  Container,
  Dropdown,
  Jumbotron,
  ListGroup,
  Row,
  ToggleButtonGroup,
  ToggleButton
} from "react-bootstrap";
```

The decision to move fast with Bootstrap has allowed us to hit our sprint goals, but we're already accumulating technical debt. This is just four affected components, but as the application grows, it's clear the "styling refactor" sprints that we planned for are going to become exponentially harder. And we haven't even customized these components much. Once we have tens of components, all using Bootstrap with lots of inline styling to pretty them up, refactoring them to remove react-bootstrap dependencies will be a scary proposition indeed. Rather than building more of the booking pipeline pages, the team decides that we'll spend the next sprint working to isolate the react-bootstrap usage in a custom component kit, since our services are still under construction. Application components will only use components from this kit. That way, when it comes time to wean ourselves from react-bootstrap, the process will be much easier.
We won't have to refactor thirty usages of the react-bootstrap Button throughout the app; we'll just rewrite the internals of our KitButton component.

Sprint One: Review Meeting

Well, that was easy. High-fives. No change to the visual appearance of the UI, but we now have a "kit" folder that's sibling to "components" in our React source. It has a bunch of files like KitButton.js, which basically export renamed react-bootstrap components. An example component from our kit looks like this:

```javascript
// KitButton.js
import { Button, ToggleButton, ToggleButtonGroup } from "react-bootstrap";
export const KitButton = Button;
export const KitToggleButton = ToggleButton;
export const KitToggleButtonGroup = ToggleButtonGroup;
```

We wrap all those kit components up into a module like this:

```javascript
// kit/index.js
import { KitCard } from "./KitCard";
import { KitHero } from "./KitHero";
import { KitList } from "./KitList";
import { KitImage } from "./KitImage";
import { KitCarousel } from "./KitCarousel";
import { KitDropdown } from "./KitDropdown";
import { KitAttribution } from "./KitAttribution";
import { KitNavbar, KitNav } from "./KitNavbar";
import { KitContainer, KitRow, KitCol } from "./KitContainer";
import { KitButton, KitToggleButton, KitToggleButtonGroup } from "./KitButton";

export {
  KitCard,
  KitHero,
  KitList,
  KitImage,
  KitCarousel,
  KitDropdown,
  KitAttribution,
  KitButton,
  KitToggleButton,
  KitToggleButtonGroup,
  KitContainer,
  KitRow,
  KitCol,
  KitNavbar,
  KitNav
};
```

And now our application components are completely free of react-bootstrap.
Here are the imports for the affected components:

```javascript
// Navigation.js imports
import React from "react";
import { KitNavbar, KitNav, KitDropdown } from "../kit";

// Page.js imports
import React from "react";
import PosterCarousel from "./PosterCarousel";
import DestinationLayout from "./DestinationLayout";
import { KitContainer, KitRow, KitCol } from "../kit";

// PosterCarousel.js imports
import React from "react";
import { KitAttribution, KitImage, KitCarousel } from "../kit";

// DestinationLayout.js imports
import React, { useState, useEffect } from "react";
import {
  KitCard,
  KitHero,
  KitList,
  KitButton,
  KitToggleButton,
  KitToggleButtonGroup,
  KitDropdown,
  KitContainer,
  KitRow,
  KitCol
} from "../kit";
```

Although we've corralled all of the react-bootstrap imports into our kit components, our application components still rely a bit on the react-bootstrap implementation, because the attributes we place on our kit component instances are the same as those of react-bootstrap. That constrains us when it comes to re-implementing the kit components, because we need to adhere to the same API. For instance:

```javascript
// From Navigation.js
<KitNavbar bg="dark" variant="dark" fixed="top">
```

Ideally, we wouldn't have to add those react-bootstrap specific attributes when we instantiate our KitNavbar. The front-enders promise to refactor those out as we go, now that we've identified them as problematic. And any new references to react-bootstrap components will go into our kit instead of directly into the application components. Meanwhile, we've shared our mock data with the server engineer, who is working hard to build separate server environments, implement the database schema, and expose some services to us.
That gives us time to add some gloss to our UI in the next sprint — which is good because the Powers That Be would like to see separate themes for each destination. As the user browses destinations, we need to have the UI color scheme change to match the displayed travel poster. Also, we want to try and spiff up those components a bit to begin evolving our own look and feel. Once we have some money coming in, we'll get a designer to do a complete overhaul, but hopefully we can reach a happy medium for our early users.

Sprint Two: Review Meeting

Wow! The team really pulled out all the stops this sprint. We got per-destination themes, customized components, and a lot of the lingering react-bootstrap API implementations removed from the application components. Here's what the desktop looks like now (check out the solarized theme for the red planet!).

In order to pull this off, the front-enders brought in the Styled Components library. It made styling the individual kit components a breeze, as well as adding support for multiple themes. Let's look at a few highlights of their changes for this sprint. First, for global things like pulling in fonts and setting the page body styles, we have a new kit component called KitGlobal.

```javascript
// KitGlobal.js
import { createGlobalStyle } from "styled-components";

export const KitGlobal = createGlobalStyle`
  body {
    @import url('https://fonts.googleapis.com/css?family=Orbitron:500|Nunito:600|Alegreya+Sans+SC:700');
    background-color: ${props => props.theme.foreground};
    overflow-x: hidden;
  }
`;
```

It uses the createGlobalStyle helper to define the CSS for the body element. That imports our desired web fonts from Google, sets the background color to whatever the current theme's "foreground" value is, and turns off overflow in the x-direction to eliminate a pesky horizontal scrollbar. We use that KitGlobal component in the render method of our App component.
Also in the App component, we import ThemeProvider from styled-components, and something called "themes" from ../theme. We use React's useState to set the initial theme to themes.luna and React's useEffect to call setTheme whenever the "destination" changes. The returned component is now wrapped in ThemeProvider, which is passed "theme" as a prop. Here's the App component in its entirety.

```javascript
// App.js
import React, { useState, useEffect } from "react";
import { ThemeProvider } from "styled-components";
import themes from "../theme/";
import { KitGlobal } from "../kit";
import Navigation from "./Navigation";
import Page from "./Page";

export default function App(props) {
  const [destinationIndex, setDestinationIndex] = useState(0);
  const [theme, setTheme] = useState(themes.luna);
  const destination = props.destinations[destinationIndex];

  useEffect(() => {
    setTheme(themes[destination.theme]);
  }, [destination]);

  return (
    <ThemeProvider theme={theme}>
      <React.Fragment>
        <KitGlobal />
        <Navigation
          {...props}
          destinationIndex={destinationIndex}
          setDestinationIndex={setDestinationIndex}
        />
        <Page
          {...props}
          destinationIndex={destinationIndex}
          setDestinationIndex={setDestinationIndex}
        />
      </React.Fragment>
    </ThemeProvider>
  );
}
```

KitGlobal is rendering like any other component. Nothing special there, only that the body tag is affected. ThemeProvider is using the React Context API to pass theme down to whatever components need it (which is all of them). In order to fully understand that, we also need to take a look at what a theme actually is. To create a theme, one of our front-enders took all the travel posters and created palettes for each by extracting the prominent colors. That was fairly simple.
(We used TinyEye for this.)

Obviously, we weren't going to use all the colors. The approach was mainly to dub the two most-used colors foreground and background. Then we took three more colors, generally ordered from lightest to darkest, as accent1, accent2, and accent3. Finally, we picked two contrasting colors to call text1 and text2. For the above destination, that looked like:

```javascript
// theme/index.js (partial list)
const themes = {
  ...
  mars: {
    background: "#a53237",
    foreground: "#f66f40",
    accent1: "#f8986d",
    accent2: "#9c4952",
    accent3: "#f66f40",
    text1: "#f5e5e1",
    text2: "#354f55"
  },
  ...
};

export default themes;
```

Once we have a theme for each destination, and it is being passed into all the components (including the kit components that our application components are now built from), we need to use styled-components to apply those theme colors as well as our custom visual styling, like the panel corners and "border glow." This is a simple example where we made our KitHero component apply the theme and custom styles to the Bootstrap Jumbotron:

```javascript
// KitHero.js
import styled from "styled-components";
import { Jumbotron } from "react-bootstrap";

export const KitHero = styled(Jumbotron)`
  background-color: ${props => props.theme.accent1};
  color: ${props => props.theme.text2};
  border-radius: 7px 25px;
  border-color: ${props => props.theme.accent3};
  border-style: solid;
  border-width: 1px;
  box-shadow: 0 0 1px 2px #fdb813, 0 0 3px 4px #f8986d;
  font-family: "Nunito", sans-serif;
  margin-bottom: 20px;
`;
```

In this case, we're good to go with what gets returned from styled-components, so we just name it KitHero and export it.
When we use it in the application, it looks like this:

```javascript
// DestinationLayout.js (partial code)
const renderHero = () => {
  return (
    <KitHero>
      <h2>{destination.header}</h2>
      <p>{destination.blurb}</p>
      <KitButton>Book Your Trip Now!</KitButton>
    </KitHero>
  );
};
```

Then there are more complex cases where we want to preset some attributes on the react-bootstrap component. For instance, the KitNavbar component, which we identified earlier as having a bunch of react-bootstrap attributes that we'd rather not pass from the application's declaration of the component. Now for a look at how that was handled:

```javascript
// KitNavbar.js (partial code)
import React, { Component } from "react";
import styled from "styled-components";
import { Navbar } from "react-bootstrap";

const StyledBootstrapNavbar = styled(Navbar)`
  background-color: ${props => props.theme.background};
  box-shadow: 0 0 1px 2px #fdb813, 0 0 3px 4px #f8986d;
  display: flex;
  flex-direction: horizontal;
  justify-content: space-between;
  font-family: "Nunito", sans-serif;
`;

export class KitNavbar extends Component {
  render() {
    const { ...props } = this.props;
    return <StyledBootstrapNavbar fixed="top" {...props} />;
  }
}
```

First, we create a component called StyledBootstrapNavbar using styled-components. We were able to handle some of the attributes with the CSS we passed to styled-components. But in order to continue leveraging (for now) the reliable stickiness of the component to the top of the screen while everything else is scrolled, our front-enders elected to continue using react-bootstrap's fixed attribute. In order to do that, we had to create a KitNavbar component that rendered an instance of StyledBootstrapNavbar with the fixed="top" attribute. We also passed through all the props, which include its children.
We only have to create a separate class that renders styled-components' work and passes props through to it if we want to explicitly set some attributes in our kit component by default. In most cases, we can just name and return styled-components' output and use it as we did with KitHero above. Now, when we render the KitNavbar in our application's Navigation component, it looks like this:

```javascript
// Navigation.js (partial code)
return (
  <KitNavbar>
    <KitNavbarBrand>
      <KitLogo />
      Solar Excursions
    </KitNavbarBrand>
    {renderDestinationMenu()}
  </KitNavbar>
);
```

Finally, we took our first stabs at refactoring our kit components away from react-bootstrap. The KitAttribution component is a Bootstrap Alert which, for our purposes, is little more than an ordinary div. We were able to easily refactor to remove its dependency on react-bootstrap. This is the component as it emerged from the previous sprint:

```javascript
// KitAttribution.js (using react-bootstrap)
import { Alert } from "react-bootstrap";
export const KitAttribution = Alert;
```

This is what it looks like now:

```javascript
// KitAttribution.js
import styled from "styled-components";

export const KitAttribution = styled.div`
  text-align: center;
  background-color: ${props => props.theme.accent1};
  color: ${props => props.theme.text2};
  border-radius: 7px 25px;
  border-color: ${props => props.theme.accent3};
  border-style: solid;
  border-width: 1px;
  box-shadow: 0 0 1px 2px #fdb813, 0 0 3px 4px #f8986d;
  font-family: "Alegreya Sans SC", sans-serif;

  > a {
    color: ${props => props.theme.text2};
    font-family: "Nunito", sans-serif;
  }

  > a:hover {
    color: ${props => props.theme.background};
    text-decoration-color: ${props => props.theme.accent3};
  }
`;
```

Notice how we no longer import react-bootstrap, and we use styled.div as the component base.
They won't all be so easy, but it's a process. Here are the results of our team's styling and theming efforts in sprint two. View the themed page on its own here.

Conclusion

After three sprints, our team is well on its way to having a scalable component architecture in place for the UI. We are moving quickly thanks to react-bootstrap, but are no longer piling up loads of technical debt as a result of it. Thanks to styled-components, we were able to implement multiple themes (like how almost every app on the Internet these days sports dark and light modes). We also don't look like an out-of-the-box Bootstrap app anymore. By implementing a custom component kit that contains all references to react-bootstrap, we can refactor away from it as time permits. Fork the final codebase on GitHub.

Source: Iterating a React Design with Styled Components | CSS-Tricks

    Read at 10:57 am, May 19th

  • AddyOsmani.com - We shipped font-display to Google Fonts!

At Google I/O 2019, we announced that we would finally be bringing support for font-display to Google Fonts. I'm happy to share this is now available in production for all Google Fonts users via the new display parameter.

The font-display descriptor lets you decide how your web fonts will render or fall back, depending on how long it takes for them to load. It supports a number of values including auto, block, swap, fallback and optional. Previously, the only way to specify font-display for web fonts from Google Fonts was to self-host them, but this change removes the need to do so. To set font-display, pass the desired value in the querystring display parameter: https://fonts.googleapis.com/css?family=Lobster&display=swap

Here's a demo on Codepen of using display with multiple font families.

Note: The introduction of font-display doesn't negate the benefits of other optimizations to reduce server hops, such as preconnecting to improve your waterfalls. I would still recommend doing this. It's been exciting to see developers already start to see performance gains from the rollout of this feature, and we hope it helps you too:

"Using Google Font's new display param saved us 800ms on First Meaningful Paint" - Josh Deltener (@hecktarzuli), May 16, 2019

Extra history

These patches close out issue #358, which was first filed back in 2016. You might be wondering why it took us so long to support a 'simple' query parameter :) Our original discussions focused on Google Fonts having a high cache hit rate.
The introduction of any new query parameters could be viewed as contentious because additional permutations could reduce cross-site cache hit rates. We then spent time thinking about how to enable this change at the Web Platform level (outside of Fonts) via @font-feature-values. This would have allowed you to control the display policy for @font-face rules that are not directly under your control. Over time, the CSSWG discussed this, but there were proposals for a more general way of doing partial at-rules that made it less viable in the short term. Double-key caching changed the original caching argument (cross-origin caching) and gave us a chance to re-evaluate our options. Ultimately, the Google Fonts team felt there would be enough end-developer value to ship font-display as a query parameter, and we managed to wrap this work up within the last week. Thanks to developers for waiting this out and providing regular feedback along the journey. It was neat to see so many workarounds explored while we worked on this :)

Credits and thanks

This effort would not have been possible without Roderick Sheeter, Nathan Williams, Dave Crossland, Kenji Baheux, Paul Irish, Malte Ubl and Sam Saccone. Thanks for helping get this one across the finish line <3

Source: AddyOsmani.com – We shipped font-display to Google Fonts!
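For reference, wiring the parameter into a page is one line of markup. A minimal sketch (the Lobster family matches the example URL above; swap is the value most sites will want):

```html
<!-- Request Lobster and ask the browser to swap in the web font once it loads -->
<link rel="stylesheet"
      href="https://fonts.googleapis.com/css?family=Lobster&display=swap">
```

Any of the other font-display values (auto, block, fallback, optional) can be passed the same way.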

    Read at 10:30 am, May 19th

  • What's New in Node.js 12: Private Class Fields | www.thecodebarbarian.com

The String#replace() function replaces instances of a substring with another substring, and returns the modified string. This function seems simple at first, but String#replace() can do a whole lot more than just replace 'foo' with 'bar'. In this article, I'll explain some more sophisticated ways to use String#replace(), and highlight some common pitfalls to avoid. Source: What's New in Node.js 12: Private Class Fields | www.thecodebarbarian.com
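The excerpt only hints at those sophisticated uses, but the basics it builds on are quick to recall. A plain-JavaScript sketch (not code from the article):

```javascript
// A string pattern replaces only the first occurrence...
console.log("foo foo".replace("foo", "bar"));  // "bar foo"

// ...while a /g regex replaces all of them.
console.log("foo foo".replace(/foo/g, "bar")); // "bar bar"

// The replacement may also be a function, invoked once per match.
console.log("a1b2".replace(/\d/g, d => String(Number(d) * 2))); // "a2b4"

// Special patterns like $& (the matched text) work in string replacements.
console.log("foo".replace(/o/g, "$&$&")); // "foooo"
```

The first-match-only behavior of string patterns is the most common pitfall the article's title alludes to.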

    Read at 10:28 am, May 19th

  • New Electron Release Cadence | Electron Blog

    🎉 Electron is moving to release a new major stable version every 12 weeks! 🎉 Simply put, Chromium doesn't stop shipping so Electron is not going to slow down either. Chromium releases on a consistent 6-week schedule. To deliver the most up-to-date versions of Chromium in Electron, our schedule needs to track theirs. More information around Chromium's release cycle can be found here. Every 6 weeks, a new Chromium release comes out with new features, bug fixes / security fixes, and V8 improvements. Electron's users have been loud and clear about wanting these changes in a timely manner, so we've adjusted our stable release dates to match every other Chromium stable release. Up first, Electron v6.0.0 will include M76 and is scheduled for stable release on July 30, 2019, the same release day as Chromium M76. You'll have access to new Chromium and V8 features and fixes sooner than before. Importantly, you'll also know when those new changes are coming, so you'll be able to plan with better information than before. The Electron team will continue to support the latest three major versions. For example, when v6.0.0 goes stable on July 30, 2019, we will support v6.x, v5.x, and v4.x, while v3.x will reach End-Of-Life. Please consider joining our App Feedback Program to help us with testing our beta releases and stabilization. Projects who participate in this program test Electron betas on their apps; and in return, the new bugs they find are prioritized for the stable release. The decisions around stable releases before v3.0.0 did not follow a schedule. We added internal schedules to the project with v3.0.0 and v4.0.0. Earlier this year, we decided to publicize our stable release date for the first time for Electron v5.0.0. Announcing our stable release dates was positively received overall and we're excited to continue doing that for future releases. 
In order to better streamline these upgrade-related efforts, our Upgrades and Releases Working Groups were created within our Governance system. They have allowed us to better prioritize and delegate this work, which we hope will become more apparent with each subsequent release. Here is where our new cadence will put us in comparison to Chromium's cadence: 📨 If you have questions, please mail us at info@electronjs.org. Source: New Electron Release Cadence | Electron Blog

    Read at 10:27 am, May 19th

  • GitHub Package Registry: Pros and Cons for the Node.js Ecosystem

Last week there was a big announcement in the developer community: the GitHub Package Registry ✨😱. In this blog post we will cover some pros and cons of the registry and the expected impact on the Node.js ecosystem.

What is a package?

A package is a reusable piece of software which can be downloaded from a global registry into a developer's local environment and included in application code. Because packages act as reusable "building blocks" and typically address common needs (such as API error handling), they can help reduce development time. An individual package may or may not depend on other packages; for example, you may wish to use a package called foo, which depends on another package called bar. Generally speaking, installing foo would automatically install bar as well as any additional dependencies.

What is a package manager?

A package manager lets you manage the dependencies (external code written by you or someone else) that your project needs to work correctly. For JavaScript, the two most popular package managers are npm and yarn.

GitHub Package Registry

GitHub Package Registry is a package management service that makes it easy to publish public or private packages and is fully integrated with GitHub. Everything lives in one place, so you can use the same search, browsing, and management tools to find and publish packages as you do for your repositories.

Pros

- GitHub is cooperating with npm and other services to make sure tooling and workflows are maintained.
- It supports familiar package management tools: JavaScript (npm), Java (Maven), Ruby (RubyGems), .NET (NuGet), and Docker images, with more tools to come.
- It's multi-format: you can host multiple software package types in one registry.
- Access is entirely based on GitHub authentication. You can use the same credentials and permissions for both your application code and packages.
    - Packages on GitHub inherit the visibility and permissions associated with the repository, so organizations no longer need to maintain a separate package registry and mirror permissions across systems.
    - It is possible to use GitHub as a private npm registry without having to create any new credentials or use new tooling.
    - Currently, the GitHub Package Registry is in limited-access beta, and it’s free for both private and public packages during this period. GitHub has pledged that it will always be free for public packages and Docker images.
    - README content and package metadata are rendered on a package listing page.
    - You can set up webhook events for a package in order to be notified when it is published or updated.
    - The registry already has GraphQL and webhook support and can be used with GitHub Actions, so you can fully customize your publishing and post-publishing workflows.
    - It provides analytics for maintainers.
    - Ultimately, GitHub’s registry is backed by Microsoft, which means it has the resources and funds to ensure ongoing maintenance.

    Cons:

    - Right now the registry is in limited beta, so a number of features are expected to arrive soon but are not yet available.
    - Not surprisingly, if your application code and packages all depend on GitHub, it becomes a single point of failure in the unlikely (but not impossible) case that GitHub’s own infrastructure experiences an outage or major issue.
    - When the beta period ends and the GitHub Package Registry becomes generally available, users will have to pay to publish and use private packages.
    - It can be confusing (and tedious) to migrate packages from other registries. GitHub only supports scoped packages for npm, e.g. npm install @nodesource/cool-package instead of npm install cool-package. So if you have non-scoped packages on npm and are considering using GitHub as your registry, the migration can be messy.
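As a sketch of what scoped installs from GitHub look like in practice (hedged: based on GitHub's beta documentation; the @nodesource scope and the token are placeholders, not real values), the npm client can be pointed at GitHub for a single scope via .npmrc:

```ini
; .npmrc — route one scope to GitHub Package Registry, leave everything else on npmjs.org
; (placeholders: @nodesource is the owner scope, YOUR_TOKEN is a GitHub personal access token)
@nodesource:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=YOUR_TOKEN
```

With something like this in place, npm install @nodesource/cool-package would resolve against GitHub, while unscoped installs still come from the default registry.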
    If you have your packages in multiple places like GitHub and npm, it’s possible that you will have different versions of the same package in each registry (with one version being slightly newer while the other is outdated). So it is good practice to keep packages independent of the registry, or to use only one place to store your packages.

    What does this mean for npm users? npm configuration details can be found here. If you want to install something published to GitHub and not npm, you will need a GitHub account and to authenticate with the npm client, providing an access token.

    What does it mean for me as a maintainer of a public npm package? It could mean that you may want to publish your public packages to multiple registries, but it is not yet clear how best to do this. You now have a choice of where to publish your packages, npm or GitHub, defined by the publishConfig.registry field in your package.json. The registry is compatible with npm and allows developers to find and publish their own packages using the same GitHub interface they use for their code.

    Source: GitHub Package Registry: Pros and Cons for the Node.js Ecosystem

    Read at 10:25 am, May 19th

  • API Evolution for REST/HTTP APIs | Phil Sturgeon

    There are a lot of pros and cons to the various approaches to API versioning, but that has been covered in depth before: API Versioning Has No "Right" Way. API evolution is making a comeback these days, with GraphQL and gRPC advocates shouting about it. Whatever API paradigm or implementation you subscribe to, evolution is available to you. REST advocates have been recommending API evolution for decades, but in the past I failed to understand how exactly to handle it. Luckily, as always, tooling and standards for HTTP have been improving, and these days API evolution is a lot easier to wrap your head around.

    What is API evolution? API evolution is the concept of striving to maintain the "I" in API (the request/response body, query parameters, general functionality, etc.), only breaking it when you absolutely have to. It's the idea that API developers bending over backwards to maintain a contract, no matter how annoying that might be, is often more financially and logistically viable than dumping the workload onto a wide array of clients. At some point change cannot be prevented, so at that time evolution suggests you provide sensible warnings to clients, letting them know if a feature they're using is going away, and not bothering them otherwise.

    Examples. The property name exists, and it needs to be split into first_name and last_name. Easy enough. However the data is handled internally (splitting on the first space, the last space, or some other falsehood-defying assumption), you now have two new properties.
    The serializer can change from outputting just their name to outputting all three properties:

    ```ruby
    class UserSerializer
      include FastJsonapi::ObjectSerializer
      attributes :name, :first_name, :last_name

      attribute :name do |object|
        "#{object.first_name} #{object.last_name}"
      end
    end
    ```

    When folks POST or PATCH to your API, if they send a name you can convert it, or if they send first_name and last_name it'll get picked up fine by the serializer. Job done.

    The property price needs to stop being dollars/pounds/whatever, as we're starting to support currencies that don't fit into "unit" and "subunit". Switching to an integer to hold your cents, pence, etc. would fall for just as many fallacies programmers believe about currencies as using float dollars/pounds. To support the widest array of currencies, some folks like to use "micros", a concept explained well by Sift Science. In this case, the new property could easily be called price_micros. If somebody grumps about that and you want a more concise name, just call it amount and point folks towards that property instead. A thesaurus is handy. Why don't we just outright change this value from dollars to micros? Because then we'd start charging $1,000,000 for stuff that should only cost $1, and folks probably wouldn't like that. Now clients can either send the price property, and it'll be converted, or send the new price_micros property. If currency is a property on the resource (or something nearby), then it's easy enough to support price for whatever initial currencies you had (dollar/pound/euro) and throw an error if somebody tries using price for the newer currencies, pointing them instead to the micros property. Nothing broke for existing use cases, and new functionality was added seamlessly.

    We have too many old properties kicking around, and we need to get rid of them. Deprecations can be communicated in a few ways for APIs. For those using OpenAPI v3, you can mark a property as deprecated: true in the documentation.
    That's not ideal, of course, as OpenAPI is usually human-readable documentation, sat out of band on a developer portal somewhere. Rarely are OpenAPI schemas shoved into the response like JSON Schema is, so programmatically clients have no real way to access this. JSON Schema is considering adding a deprecated keyword, and oops, I think I'm in charge of making that happen. I'll get back to doing that after this blog post. The idea here would be to pair the schema with a smart SDK (client code) which detects which properties are being used. If the schema marks the foo field as deprecated, and the client code then calls $response->foo, the SDK can raise a deprecation warning. This is achieved by inspecting the schema file at runtime if you offer your schemas in the Link header, or at compile time if you're distributing schema files with the SDK.

    GraphQL has the advantage when it comes to property deprecation, for sure, as its type system demands that clients specify the properties they want. By knowing which clients are requesting a deprecated property, you can either reach out to that client (manually or automatically), or shove some warnings into the response somewhere to let them know they're asking for a thing which is going away. This is the sort of advantage you get when your type system, clients, etc. are all part of the same package, but HTTP in general can achieve this same functionality through standards.

    All of that said, removing old properties is usually not all that much of a rush or an issue. Over time new developers will be writing new integrations, looking at your new documentation that tells them to use the new property, and your developer newsletters or changelogs let them know to move away from the old one over time.

    A carpooling company has "matches" as a relationship between "drivers" and "passengers", suggesting folks who could ride together, containing properties like "passenger_id" and "driver_id".
    Now we need to support carpools that can have multiple drivers (i.e. Frank and Sally both take it in turns to drive), so this whole matches concept is garbage. At a lot of startups, this sort of conceptual change is common. No number of new properties is going to help out here, as the whole "one record = one match = one driver + one passenger" concept was junk. We'd need to make it so folks could accept a carpool based on the group, and any one of those folks could drive on a given day.

    Luckily, business names often change fairly regularly in the sort of companies that have fundamental changes like this. There is often a better word that folks have been itching to switch to, and evolution gives you a chance to leverage that change to your benefit. Deprecating the whole concept of "matches", a new concept of "riders" can be created. This resource would track folks far beyond just being "matched", through the whole lifecycle of the carpool, thanks to a status property containing pending, active, inactive, blocked, etc. By creating the /riders endpoint, this resource can have a brand new representation. As always, the same database fields can be used internally, and the same internal alerting tools are used for letting folks know about matches (v1 app) or new pending riders (v2 app). The API can create and update "matches" through this new "riders" interface. Clients can then use either one, and the code just figures itself out in the background. Over time the refactoring can be done to move the internal logic more towards riders, and your integration tests / contract tests will confirm that things aren't changing on the outside.

    @cebe asks: How would the matches endpoint return data where there is more than one driver? If the data does not fit the endpoint anymore, it must be broken or fail for such data? Ex-coworker and API mastermind Nicolás Hock-Isaza says: We only exposed the first driver match to the older apps. If the user accepted it, great.
    The other driver riders would be denied. If the user rejected the first one, we would show the next one, and the next one, and the next one. All it takes is a little ingenuity, and API evolution isn't so scary.

    We have all these old endpoints hanging around; can we get rid of them slightly more intelligently than just sending some emails? Yes! API endpoints can be marked with a Sunset header to signal deprecation (and eventual removal) of an endpoint. The Sunset header is an in-development HTTP response header that aims to standardize how URLs are marked for deprecation. tl;dr it looks a bit like this: Sunset: Sat, 31 Dec 2018 23:59:59 GMT. The date is an HTTP date, and it can be combined with a Link: <http://foo.com/something>; rel="sunset" header, which can point to anything that might help a developer know what is going on. Maybe link to your API documentation for the new resource, the OpenAPI/JSON Schema definitions, or even a blog post explaining the change. Ruby on Rails has rails-sunset, and hopefully other frameworks will start adding this functionality. The open-source API gateway Tyk is adding support in an upcoming version. Clients then add a middleware to their HTTP calls, checking for Sunset headers. We do this with faraday-sunset (Ruby), Hunter Skrasek made guzzle-sunset (PHP), and anyone can write a thing that looks for a header and logs it to whatever logging system they're using.

    We need to change some validation rules, but the clients have rules baked in. How do we let them know change is coming? Certain validation rules are very clearly breaking. For example, lowering the maximum length of a string property would break clients who are expecting to be able to send longer names. Folks would have to shorten the property on certain devices, which would be really weird, especially as the client may well be showing it as valid, only to then surface an error from the API.
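To make that concrete: if client-side validation is driven by a schema, the breaking change is easy to spot. This fragment is a hypothetical illustration (not from the article), using standard JSON Schema keywords:

```json
{
  "type": "object",
  "properties": {
    "name": { "type": "string", "maxLength": 40 }
  }
}
```

Lowering maxLength here from 40 to 20 is a clearly breaking change: a name that validated yesterday fails today, on both the client and the server.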
    Other rules may seem like they're backwards compatible, but can still break clients in all sorts of ways. For example, making a string property accept a longer value can lead to problems where an out-of-date client is expecting a length of 20, but an up-to-date client has already been used to get that property up to 40. Again, the user would find that data is valid on one device, but be stuck unable to submit the form on another device. Baking validation rules into client applications based on whatever the documentation says is brittle, so moving client-side validation logic to server-defined JSON Schema can solve these problems, and a bunch more. It also makes evolution a whole bunch easier, because this is just another category of change you are automatically communicating to client applications, without any developers needing to get involved.

    Deprecating a specific type of authentication from an endpoint: it's time to say goodbye to HTTP Basic Auth. If the client is making a request with an authorization header, they have some sort of account. If during the signup for that account you've asked them for an email, you can contact them. If you've not got any way to contact them… tough. Monitor how many folks are using HTTP Basic, blog about it, shove some videos up, and eventually you're just going to have to turn it off. The only other approach to helping out here is an SDK. If you slide some deprecation notices into the code months ahead of the cutoff date, you can throw some warnings saying the code is no longer going to work. This gives you a fighting chance for anyone that keeps a bit up to date. For those that don't, you don't have much choice.
    Shoving a clear error into your HTTP response helps too (here using the amazing RFC 7807: Problem Details for HTTP APIs):

    ```http
    HTTP/1.1 403 Forbidden
    Content-Type: application/problem+json

    {
      "type": "https://example.org/docs/errors#http-basic-removed",
      "title": "Basic authentication is no longer supported",
      "detail": "HTTP Basic has been deprecated since January 1st, 2018, and was removed May 1st, 2018. Applications should switch to OAuth to resume service."
    }
    ```

    Google Maps is using this approach to remove keyless interactions from the Google Maps API. It can be handy in other situations too, like if you're dropping application/xml from your API and want people to know it won't be there forever.

    More power. The above solutions are a little ad hoc, and can lead to branched code paths with a bunch of if statements. You can feature-flag some of this stuff to help keep things a little tidy, and another approach is to write up changes as libraries, something Stripe refers to as "declarative changes". This approach can be a little heavy-handed, but it's something to keep in mind.

    Summary. Evolution involves thinking a little differently about how you approach change. Often there are simple things you can do to keep clients ticking along, and whilst clients will have to change at some point, the whole goal here is to allow them a decent amount of time to make that switch, with the minimal change possible during that switch, and no lock-step deploys required. And yes, whilst making a new endpoint to switch /matches to /riders is essentially the same as /v1/matches and /v2/matches, you've skipped the quagmire of tradeoffs between global versioning, resource versioning, or (gulp) method versioning. Global versioning has its place, but so does evolution. Think about it this way: if implementing some change takes twice as long for API developers compared to other versioning approaches, but saves 6 or 7 client developer teams from having to do a whole bunch of work, testing, etc.
    to chase new versions, this has been worthwhile to the company in terms of engineering hours spent. If you've only got a small number of clients (maybe an iOS and an Android version) for an API that changes drastically every year or two, then global versioning is clearly the way to go.

    Source: API Evolution for REST/HTTP APIs | Phil Sturgeon
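The client-side middleware the article describes ("a thing that looks for a header and logs it") can be sketched in a few lines of Ruby. This is a hypothetical helper, not the actual API of the faraday-sunset gem mentioned above, and the header names assume exact-case keys for simplicity:

```ruby
require 'time'

# Hedged sketch of a client-side Sunset check: given a response's headers,
# return a deprecation warning string if the endpoint is marked for removal.
def sunset_warning(headers)
  sunset = headers['Sunset']
  return nil unless sunset

  cutoff = Time.httpdate(sunset) # the Sunset value is an HTTP-date
  link = headers['Link']         # may point at docs via rel="sunset"
  msg = "endpoint sunsets at #{cutoff.utc}"
  msg += " (details: #{link})" if link
  msg
end

sunset_warning({ 'Sunset' => 'Sat, 31 Dec 2018 23:59:59 GMT' })
# => "endpoint sunsets at 2018-12-31 23:59:59 UTC"
```

A real middleware would hook this into each response and send the message to whatever logging system the client already uses.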

    Read at 10:15 am, May 19th

  • Picking the right API Paradigm | Phil Sturgeon

    A while back I wrote an article called Understanding RPC, REST and GraphQL which outlined the "what" in how these various approaches differ. This got a few people thinking I was saying REST was drastically superior in all ways, which is a common conclusion when folks hear me describe REST as a layer of abstractions on top of RPC… More abstraction does not mean definitively "better"; sometimes that's going to be overkill, so let's look at when you might want to use which.

    Paradigm, implementation, specification. To oversimplify things a bit, it's reasonably fair to say that all APIs conform to a paradigm: "RPC", "REST", or "query language". These are general approaches to building APIs, not a specific tool or specification. They are merely a concept, not a tangible thing. Implementations are something you can actually download, install, and use to build an API that conforms to whatever rules the implementors picked, from whichever paradigms they wanted to use at the time. Specifications (standards, recommendations, etc.) are often drafted by various working groups, to help implementations share functionality in the same way. An API and a client in different languages can work together perfectly if they're all following the specification correctly. For example:

    - SOAP is a W3C recommendation, following the RPC paradigm, with implementations like gSOAP.
    - gRPC is an implementation, following the RPC paradigm, which has no standard or specification from any working group, but its author, Google Inc., did document the protocol.
    - REST is a paradigm, which has never been turned into a specification and has no official implementations, but building a REST API is usually just a case of picking appropriate standards and tooling.

    As you can see, a direct comparison between any of these things is difficult, but at WeWork I needed to find a way to help people make decisions about how to build an API. Why?
    Every paradigm has gone through a few years of being the hot trend, then they pass to the left, and it all circles around over, and over, and over again. Often a new implementation will be the thing that makes a paradigm the hot thing again, like gRPC. With gRPC basically being modern SOAP (both are RPC with a required type system), RPC is back in the limelight after being shunned for decades. Jumping on the latest hot trend for your API is a great way to build something you (or your team) end up really resenting, so finding a way to logically help folks make informed decisions is important.

    Making decisions. I initially wanted to make a diagram to point folks to the appropriate paradigm, but that gets really open-ended. For example, you might want to ask if having a type system is important for the messages, but some implementations and standards from all three mentioned paradigms use types, and some do not. Making the decision between paradigms alone was so vague it was useless, so instead we went with deciding between gRPC, REST, and GraphQL. This is absolutely a false trichotomy, as we could have brought in XML-RPC, JSON-RPC, SOAP, SPARQL, FIQL, Micro, … 😴 Too much choice is no good for anyone, so we decided to choose one implementation for RPC, one implementation for query languages, and REST is in there with our own custom implementation (we have a template new APIs are built from, which will eventually be generated from OpenAPI files).

    Here goes! Wait, what is that "context boundary" thing all about?! Basically, it's the idea that whenever a line is crossed between imaginary boundaries, a few more layers of abstraction should be used to help with the longevity of the system. REST provides those layers of abstraction, and GraphQL provides a few too. That boundary could be as simple as another team/department/company, or a group of systems that just shouldn't know about each other.
    Things within the context can treat their own APIs like "private classes" in programming languages: they can change whenever they want, spin up and down, delete, evolve, change, who cares. When going to another context… probably use things like REST (with hypermedia and JSON Schema) to help those clients last longer without needing developer involvement for most change. This bounded-context bit is really the crux of deciding when to use gRPC and when to use something else. Internally you can do whatever you want, but when there's a chance that the developers involved in clients and servers are not in close communication (when they have other priorities in the sprint, are on a work retreat, or literally don't know each other or have any way to communicate), these layers of abstraction become a lot more useful. Pushing client-side validation to JSON Schema, for instance, is a layer of abstraction that REST allows (and you could totally do it in your own RPC APIs if not using gRPC). Another example would be pushing workflows and resource state to the API instead of having your RPC clients try to figure it out by looking at random properties.

    The when here is important, because should every API be REST or RESTish? Hell no! But REST is very important for more use cases than folks seem to think these days. GraphQL fits in here when the more important parts of REST are not relevant, and the shapes of clients are super different from each other. We've not been recommending it actively at WeWork, and one of the two teams using it has ditched the thing, but I do expect to see it pop up after making this diagram part of our API design guide.

    Implementations. gRPC and GraphQL have officially approved implementations for a wide array of languages, so use those as a starting point. For those of you not working at WeWork (we're hiring, get in touch!), there is a bunch of REST tooling floating around that's not awful.
    It certainly would be lovely if there were a go-to REST implementation, like gRPC + HTTP URLs + JSON Schema for client- and server-side validation and discovery through HATEOAS… that'd be dope.

    Source: Picking the right API Paradigm | Phil Sturgeon

    Read at 10:03 am, May 19th

Day of May 18th, 2019

  • Does Anyone Actually Want Joe Biden to Be President?

    What ‘electability’ seems to mean in 2020 — and what it meant in 2018. The most important requirement for the Democratic Party’s presidential nominee? Electability. It matters more, we keep hearing, than nominating a candidate who has good policies.

    Read at 11:55 pm, May 18th

  • Tulsi Gabbard’s Campaign Is Being Boosted by Putin Apologists

    Hawaii Rep. Tulsi Gabbard’s campaign for the Democratic presidential nomination is being underwritten by some of the nation’s leading Russophiles. Donors to her campaign in the first quarter of the year included: Stephen F.

    Read at 11:51 pm, May 18th

  • Google Has A Secret Page That Records All The Things You've Bought Online

    Gmail's "Purchases" page collects and sorts out all of your online shopping and in-app purchase receipts. Google has a page, tucked away deep in your settings, where all your receipts from shopping online are sorted and saved.

    Read at 11:49 pm, May 18th

  • Google uses Gmail to track a history of things you buy — and it's hard to delete

    Google tracks a lot of what you buy, even if you purchased it elsewhere, like in a store or from Amazon. Last week, CEO Sundar Pichai wrote a New York Times op-ed that said "privacy cannot be a luxury good.

    Read at 11:48 pm, May 18th

  • Republicans, Men and Christians Aren't Trying to Ban Abortions. White People Are

    News outlets across the country have characterized two recent bills that effectively banned abortions in Georgia and Alabama in a number of ways.

    Read at 11:47 pm, May 18th

  • Another Rule Trump Could Break: Primary Challengers Doom Incumbent Presidents

    An incumbent president with a middling approval rating and mounting controversies is usually an easy draw for primary challengers. Look to Gerald Ford, Jimmy Carter and George H.W. Bush.

    Read at 11:40 pm, May 18th

  • Stacey Abrams

    As more people of color claim political power, efforts to block them will accelerate — unless we act. Ms. Abrams is the founder of the voting rights group Fair Fight Action.

    Read at 11:35 pm, May 18th

  • Did Beto Blow It?

    In March 2017, a little-known Democratic congressman named Beto O’Rourke proposed something unusual to Will Hurd, a Republican colleague from a neighboring district: that they rent a car and embark on a 24-hour, 1,600-mile road trip from San Antonio to Washington.

    Read at 11:28 pm, May 18th

  • Time’s Up for Capitalism. But What Comes Next?

    What is the relationship of democracy to time? This question may seem abstract but is actually foundational. In a letter to James Madison, Thomas Jefferson posed the question of whether the dead should have the ability to rule from the grave. Jefferson’s answer to himself was a definitive no.

    Read at 05:00 pm, May 18th

  • The Future of React Router and @reach/router

    tl;dr We are bringing together the best of React Router and Reach Router into a new, hook-based API. React Router will be the surviving project. We'll introduce this API in a minor release (5.x). That means it's 100% backward compatible.

    Read at 04:45 pm, May 18th

  • Rei Colina

    Hello! What's your background and what do you do? I'm the Head of Engineering and R&D at letzNav (https://www.letznav.com), a growing startup where our product lines facilitate application adoption, employee onboarding, and data analytics.

    Read at 11:10 am, May 18th

  • Writing Redux-like simple middleware for React Hooks

    React 16.8 came with the addition of React Hooks. I was curious about how far it goes to allow you to replace Redux and create a middleware like Redux Thunk, Observable or Saga. There are a number of…

    Read at 11:01 am, May 18th

  • Ohio State team doctor sexually abused at least 177 students, investigators say

    An Ohio State University team doctor who died years ago sexually abused at least 177 students over a period of decades so wantonly that students described his examinations as hazing -- and their coaches, trainers, other team doctors and school leaders knew about it, according to an investigative rep

    Read at 10:55 am, May 18th

Day of May 17th, 2019

  • Pentagon will pull money from ballistic missile and surveillance plane programs to fund border wall

    The Pentagon will shift $1.

    Read at 06:00 pm, May 17th

  • 15 Months of Fresh Hell Inside Facebook

    The streets of Davos, Switzerland, were iced over on the night of January 25, 2018, which added a slight element of danger to the prospect of trekking to the Hotel Seehof for George Soros’ annual banquet.

    Read at 02:28 pm, May 17th

  • China is raising tariffs on $60 billion of US goods starting June 1

    These are the stocks posting the largest moves before the bell. Stock futures point to sharp losses as Trump ratchets up pressure against China in trade war.

    Read at 09:38 am, May 17th

  • In Appalachia, Coding Bootcamps That Aim To Retrain Coal Miners Increasingly Show Themselves To Be 'New Collar' Grifters

    A recent class action lawsuit filed in West Virginia against a retraining program that promised unemployed coal miners a foothold in the tech industry offers a cautionary tale to those banking on the rise of a Silicon Holler.

    Read at 09:35 am, May 17th

  • It’s Time to Break Up Facebook

    The last time I saw Mark Zuckerberg was in the summer of 2017, several months before the Cambridge Analytica scandal broke. We met at Facebook’s Menlo Park, Calif., office and drove to his house, in a quiet, leafy neighborhood.

    Read at 09:34 am, May 17th

  • 'She feels like my neighbor': Elizabeth Warren is surprisingly winning voters on the campaign trail by not acting like a politician

    OSAGE, Iowa (AP) — Before Elizabeth Warren pitches voters on her tax plan, she talks about her memories of her mom's struggle to pay the mortgage. Before she talked about government ethics on a recent stop, she told locals their town reminds her of her Oklahoma home.

    Read at 08:24 am, May 17th

  • Lobbyists Working to Undermine Medicare For All Host Congressional Staff at Luxury Resort

    At a luxury resort just outside of the nation’s capital last month, around four dozen senior congressional staffers decamped for a weekend of relaxation and discussion at Salamander Resort & Spa.

    Read at 08:16 am, May 17th

  • Beto’s Long History of Failing Upward

    His band didn’t catch on, his alt-weekly flopped and he lost his highest-profile race. Inside the long, risk-free rise of Beto O’Rourke.

    Read at 08:11 am, May 17th

  • Learning negotiation from Jane Austen

    Looking for a job as a software developer can be scary, exhausting, or overwhelming. Where you apply and how you interview impacts whether you’ll get a job offer, and how good it will be, so in some sense the whole job search is a form of negotiation. So how do you learn to make a good impression, to convince people of your worth, to get picked by the job you want? There are many skills to learn, and in this article I’d like to cover one particular subset. Let us travel to England, some 200 years in the past, and see what we can learn.

    Jane Austen, game theorist. What does a novelist writing in the early 19th century have to do with getting a programming job? In his book Jane Austen, Game Theorist, Michael Suk-Young Chwe argues quite convincingly that Austen’s goal in writing her books is to teach strategic thinking: understanding what and why people do what they do, and how to interact with them accordingly, in order to achieve the outcomes you want. Strategic thinking is a core skill in negotiation: you’re trying to understand what the other side wants (even if they don’t explicitly say it), and to find a way to use that to get what you want. The hiring manager might want someone who both understands their particular technical domain and can help a team grow, whereas you might want a higher salary, or a shorter workweek. Strategic thinking can help you use the one to achieve the other. Strategic thinking is of course a useful skill for anyone, but why would Jane Austen in particular care about it? To answer that we need a little historical context.

    The worst job search ever. Imagine you could only get one job your whole life, that leaving your job was impossible, and that you’d be married to your boss. This is the “job search” that Austen faced in her own life, and it is one of the main topics covered in her books. Austen’s own family, and the people she writes about, were part of a very small and elite minority.
Even the poorest of the families Austen writes about have at least one servant, for example. While the men of the English upper classes, if they were not sufficiently wealthy, could and did work—as lawyers, doctors, officers—their wives and daughters for the most part could not. So if they weren’t married and didn’t have sufficient wealth of their own, upper-class women had very few choices—they could live off money from relations, or take on the social status loss of becoming a governess. Marriage was therefore the presumed path to social status, economic security, and of course it determined who they would live with for the rest of their lives (divorce was basically impossible). Finding the right husband was very important. And getting that husband—who had all the legal and social authority—to respect their wishes after marriage was just as important. And of course the women who didn’t marry lived at the mercy of the family members who supported them. And that’s where strategic thinking comes in: it was a critical skill for women in Austen’s class and circumstances. Learning from Austen If, as Michael Chwe argues, Austin’s goal with her books is to teach strategic thinking, how can you use them to improve your negotiation skills? All of Austen’s books are worth reading—excepting the unfortunate Mansfield Park—but for educational purposes Northanger Abbey is a good starting point. Northanger Abbey is the story of Catherine, a naive young woman, and how she becomes less naive and more strategic. Instead of just reading it as an entertaining novel, you can use it to actively practice your own strategic understanding: In every social interaction, Catherine has a theory about other people’s motivations, why they’re doing or saying certain things. Notice the assumptions underlying her theory, and then come up with your alternative theory or explanation for other characters’ actions. Then, compare both theories as the plot unfolds and you learn more. 
Other characters also offer a variety of opportunities to see strategic thinking—or lack of it—in action. Once you’ve gone through the book and experienced the growth of Catherine’s strategic thinking, start practicing those skills in your life. Why are your coworkers, family, and friends doing what they’re doing? Do they have the same motivations, goals, and expectations that you do? The more you pay attention and compare your assumptions to reality, the more you’ll learn—and the better you’ll do at your next job interview. Ready to get started? You can get a paper copy from the library, or download a free ebook from Project Gutenberg. Source: Learning negotiation from Jane Austen

    Read at 05:32 pm, May 17th

  • Rethinking React State – Michael Jewell – Medium

    Source: Rethinking React State – Michael Jewell – Medium

    Read at 05:15 pm, May 17th

  • Security in 5.2 – Make WordPress Core

    Post originally written by Scott Arciszewski.

    Protection Against Supply-Chain Attacks

    Starting with WordPress 5.2, your website will remain secure even if the wordpress.org servers get hacked. We are now cryptographically signing WordPress updates with a key that is held offline, and your website will verify these signatures before applying updates.

    Signature Verification in WordPress 5.2

    When your WordPress site installs an automatic update, from version 5.2 onwards it will first check for the existence of an x-content-signature header. If one isn’t provided by our update server, your WordPress site will instead query for a filenamehere.sig file and parse it. The signatures were calculated using Ed25519 of the SHA384 hash of the file’s contents. The signature is then base64-encoded for safe transport, no matter how it’s delivered. The signing keys used to release updates are managed by the WordPress.org core development team. The verification key for the initial release of WordPress 5.2 is fRPyrxb/MvVLbdsYi+OOEv4xc+Eqpsj+kkAS6gNOkI0= (expires April 1, 2021). (For the sake of specificity: signing key here means Ed25519 secret key, while verification key means Ed25519 public key.) To verify an update file, your WordPress site will calculate the SHA384 hash of the update file and then verify the Ed25519 signature of this hash. If you’re running PHP 7.1 or older and have not installed the Sodium extension, the signature verification code is provided by sodium_compat. Our signature verification is implemented in the new verify_file_signature() function, inside wp-admin/includes/file.php.

    Modern Cryptography for WordPress Plugins

    The inclusion of sodium_compat in WordPress 5.2 means that plugin developers can start to migrate their custom cryptography code away from mcrypt (which was deprecated in PHP 7.1 and removed in PHP 7.2) and towards libsodium.

    Example Functions

    <?php
    /**
     * @param string $message
     * @param string $key
     * @return string
     */
    function wp_custom_encrypt( $message, $key ) {
        $nonce = random_bytes(24);
        return base64_encode(
            $nonce . sodium_crypto_aead_xchacha20poly1305_ietf_encrypt(
                $message,
                $nonce,
                $nonce,
                $key
            )
        );
    }

    /**
     * @param string $message
     * @param string $key
     * @return string
     */
    function wp_custom_decrypt( $message, $key ) {
        $decoded    = base64_decode($message);
        $nonce      = substr($decoded, 0, 24);
        $ciphertext = substr($decoded, 24);
        return sodium_crypto_aead_xchacha20poly1305_ietf_decrypt(
            $ciphertext,
            $nonce,
            $nonce,
            $key
        );
    }

    How to Seamlessly and Securely Upgrade your Plugins to Use the New Cryptography APIs

    If your plugin uses encryption provided by the abandoned mcrypt extension, there are two strategies for securely migrating your code to use libsodium.

    Strategy 1: All Data Decryptable at Run-Time

    If you can encrypt/decrypt arbitrary records, the most straightforward thing to do is to use mcrypt_decrypt() to obtain the plaintext, then re-encrypt your data using libsodium in one sitting. Then remove the runtime code for handling mcrypt-encrypted messages.

    <?php
    // Do this in one sitting
    $plaintext = mcrypt_decrypt( $mcryptCipher, $oldKey, $ciphertext, $mode, $iv );
    $encrypted = wp_custom_encrypt( $plaintext, $newKey );

    Strategy 2: Only Some Data Decryptable at Run-Time

    If you can’t decrypt all records at once, the best thing to do is to immediately re-encrypt everything using sodium_crypto_secretbox() and then, at a later time, apply the mcrypt-flavored decryption routine (if it’s still encrypted).

    <?php
    /**
     * Migrate legacy ciphertext to libsodium
     *
     * @param string $message
     * @param string $newKey
     * @return string
     */
    function wp_migrate_encrypt( $message, $newKey ) {
        return wp_custom_encrypt( 'legacy:' . base64_encode($message), $newKey );
    }

    /**
     * @param string $message
     * @param string $newKey
     * @param string $oldKey
     * @return string
     */
    function wp_migrate_decrypt( $message, $newKey, $oldKey ) {
        $plaintext = wp_custom_decrypt($message, $newKey);
        if ( substr($plaintext, 0, 7) === 'legacy:' ) {
            $decoded = base64_decode( substr($plaintext, 7) );
            if ( is_string($decoded) ) {
                // Now apply your mcrypt-based decryption code
                $plaintext = mcrypt_decrypt( $mcryptCipher, $oldKey, $decoded, $mode, $iv );
                // Call a re-encrypt routine here
            }
        }
        return $plaintext;
    }

    Avoid Opportunistic Upgrades

    A common mistake some developers make is to try to do an “opportunistic” upgrade: only perform the decrypt-then-re-encrypt routine on an as-needed basis. This is a disaster waiting to happen, and there is a lot of historical precedent for it. Of particular note, Yahoo made this mistake, and as a result had lots of MD5 password hashes lying around their database when they were breached, even though their active users had long since upgraded to bcrypt. Detailed technical information about this new security feature, written by Paragon Initiative Enterprises (the cryptography team that developed it), is available here. Source: (1) Security in 5.2 – Make WordPress Core
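    The PHP above is specific to WordPress and libsodium, but the "tag legacy ciphertext with a marker, unwrap it later" pattern in Strategy 2 is language-agnostic. Here is a minimal TypeScript sketch of the same idea; the function names are mine, and Node's built-in AES-256-GCM stands in for XChaCha20-Poly1305, which is not in Node core:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// Stand-in AEAD cipher: AES-256-GCM from Node's stdlib.
// Payload layout: base64( iv(12) || authTag(16) || ciphertext ).
function encrypt(message: string, key: Buffer): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ct = Buffer.concat([cipher.update(message, 'utf8'), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString('base64');
}

function decrypt(payload: string, key: Buffer): string {
  const buf = Buffer.from(payload, 'base64');
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ct = buf.subarray(28);
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString('utf8');
}

// Strategy 2: wrap the still-encrypted legacy blob under the new cipher now,
// with a 'legacy:' marker, and run the old decryption step later.
function migrateEncrypt(legacyCiphertext: string, newKey: Buffer): string {
  return encrypt('legacy:' + legacyCiphertext, newKey);
}

function migrateDecrypt(
  payload: string,
  newKey: Buffer
): { legacy: boolean; data: string } {
  const plaintext = decrypt(payload, newKey);
  return plaintext.startsWith('legacy:')
    ? { legacy: true, data: plaintext.slice(7) } // still needs the old decrypt
    : { legacy: false, data: plaintext };
}
```

    The `legacy` flag tells the caller whether the old decryption routine still has to run, which is exactly the role of the 'legacy:' prefix check in wp_migrate_decrypt() above.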

    Read at 05:05 pm, May 17th

  • Uber, Lyft drivers manipulate fares at Reagan National causing artificial price surges | WJLA

    Every night, several times a night, Uber and Lyft drivers at Reagan National Airport simultaneously turn off their ride share apps for a minute or two to trick the app into thinking there are no drivers available---creating a price surge. When the fare goes high enough, the drivers turn their apps back on and lock in the higher fare. It's happening in the Uber and Lyft parking lot outside Reagan National Airport. Source: Uber, Lyft drivers manipulate fares at Reagan National causing artificial price surges | WJLA

    Read at 02:39 pm, May 17th

Day of May 16th, 2019

  • No ‘do-over’ on Mueller probe, White House lawyer tells House panel, saying demands for records, staff testimony will be refused

    The White House’s top lawyer told the House Judiciary Committee chairman Wednesday that Congress has no right to a “do-over” of the special counsel’s investigation of President Trump and refused a broad demand for records and testimony from dozens of current and former White House staffers.

    Read at 06:26 pm, May 16th

  • The internet didn't shrink 6% real estate commissions. But this lawsuit might

    After moving eight times as her husband's job transferred them around the world, Lindy Chapman felt she knew a thing or two about selling real estate.

    Read at 06:23 pm, May 16th

  • Type aliases vs. interfaces in TypeScript-based React apps

    Type aliases and interfaces are TypeScript language features that often confuse people who try TypeScript for the first time. What’s the difference between them? When should we use one over the…
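    The full comparison is behind the link; as a quick illustration of the differences usually cited (these examples are mine, not the article's): interfaces can be extended and re-opened via declaration merging, while type aliases cannot merge but can name unions, intersections, and other composite types directly.

```typescript
// Interfaces can be re-opened: both declarations merge into a single Point.
interface Point { x: number; }
interface Point { y: number; }

// Interfaces support extends:
interface Point3D extends Point { z: number; }

// Type aliases can't merge, but can name unions and intersections:
type Status = 'loading' | 'success' | 'error';
type LabeledPoint = Point & { label: string };

const p: Point3D = { x: 1, y: 2, z: 3 };
const lp: LabeledPoint = { x: 0, y: 0, label: 'origin' };
const s: Status = 'success';
```

    In a React codebase the practical upshot is that either form works for typing props; the choice usually comes down to whether you need merging/extending (interface) or union types (alias).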

    Read at 06:09 pm, May 16th

  • These Days, It’s Not About the Polar Bears

    Climate science has struggled mightily with a messaging problem. The well-worn tactic of hitting people over the head with scary climate change facts has proved inadequate at changing behavior or policies in ways big enough to alter the course of global warming.

    Read at 03:01 pm, May 16th

  • What You Can Do to Help Women in States With 6-Week Abortion Bans

    This past week, Georgia became the sixth state to pass an ultra-restrictive law banning abortion at six weeks.

    Read at 09:35 am, May 16th

  • The Real Origins of the Religious Right

    They’ll tell you it was abortion. Sorry, the historical record’s clear: It was segregation.

    Read at 09:33 am, May 16th

  • Televangelist Pat Robertson: Alabama’s abortion ban is ‘extreme’ and has ‘gone too far’

    Alabama’s law, which has been passed by the legislature and signed by the governor, includes a penalty of up to 99 years in prison for doctors who perform abortions and has no exceptions for rape or incest, Robertson noted on his show. “They want to challenge Roe vs.

    Read at 09:17 am, May 16th

  • Uber had a terrible first day as a public company. It might not matter at all.

    Uber’s agonizingly difficult first day as a public company was a cold shower for Silicon Valley’s hottest company, a dousing that drenches both the euphoria of selfie-happy Uber investors and maybe the hopefulness of other startups preparing to follow it to their own IPOs.

    Read at 08:26 am, May 16th

  • ZOZO Redux, or Did I Just Waste 1,500 Words?

    Well that was quick. Around 18 months after launching their ambitious “ZOZOSUIT” product, the Japanese clothing company has shut down all international operations and is no longer offering the custom-fit service.

    Read at 08:24 am, May 16th

  • ‘Build More Housing’ Is No Match for Inequality

    Build more. That’s what a growing number of urbanists hail as the solution to the surging home prices and stark inequality of America’s superstar cities and tech hubs.

    Read at 08:20 am, May 16th

  • Democratic Socialists of America (DSA)

    DSA condemns the draconian new Georgia law outlawing abortion after six weeks. That’s earlier than most people even realize they are pregnant, with no exception for rape or incest. We are happy that legal challenges will likely delay its actual implementation.

    Read at 08:00 am, May 16th

  • How to securely build Docker images for Node.js

    When a Dockerfile doesn’t specify a USER, it defaults to executing the container using the root user. In practice, there are very few reasons why the container should have root privileges. Docker defaults to running containers using the root user.

    Read at 07:54 am, May 16th

  • Everybody’s Having A Great Time Hating De Blasio

    Inside a morning show studio in Times Square, the molten core of capitalism, the former Sandinista mayor of New York confirmed he is running for president. Outside, under the serene countenance of a 10-story Gap ad and a rotating neon one for the state-run Chinese press agency Xinhua, everybody was having an amazing time booing the hell out of him.

    At the Good Morning America barricades, a contingent from East Brooklyn Congregations assembled, carrying signs (“Stop stealing from seniors”) protesting de Blasio’s slowness to build senior housing. Behind them, the New York City Police Benevolent Association (the police union, currently supporting the officer who placed Eric Garner in a lethal chokehold) climbed up on the stone steps with whistles and orange foam fingers, which read “LIAR.” At first, competing chants: “NO / FRIEND / TO LABOR” and “NO / FRIEND / TO SENIORS.” But eventually the entire situation melded into one big “LIAR” chant, subsuming a variety of other chants about inability to run the city and so forth, and uniting the people. Teen tourists put on foam fingers and joined in. A pair of women “on holiday” shouted “LIE-AHHR.” Another family of tourists came up and asked not-quite-audible questions about what the protest was about (audible answer: “because he’s an asshole”), and one clad in a black Houston Rockets hat followed it up with, “Why is he a liar?” Occasionally, people would walk up, ask what was going on, and laugh.

    Setting aside Bill de Blasio, human man, and whatever critiques and commendations you may want to bestow upon him, and whether de Blasio has a chance (since you really never know), and whether there’s a real darkness in this kind of joy: people are having a lot of fun. On Wednesday night, a jogger ran past the mayor’s mansion and shouted, “Don’t do it, Bill!” Told that he was, in fact, doing it, the jogger told NY1’s Grace Rauh, “I can’t believe it. Nobody wants it.” This is practically straight out of last week’s Onion story, where the joke is that de Blasio’s own PAC begins a campaign to stop him from running: “Our team of canvassers will knock on his door every 20 minutes to personally beg him not to do this.” People posted fliers at the YMCA in Brooklyn he inexplicably frequents that say he wouldn’t be allowed inside if he ran for president.

    “I can’t even express… the ludicrous-y of this,” an incensed woman begins in a man-on-the-street package NY1 ran last night (3–2 people saying he has no chance, though one of the two is a kid, but she counts, even if her vote wouldn't). “I’m not saying that,” the woman continued when asked if she thought de Blasio would be a bad candidate. “We don’t need another candidate. We don’t need another candidate. We have issues here in New York that he needs to pay attention to.” Anyway, everybody had a grand time on Thursday morning. Said the mayor inside, “A little serenading.” Source: Everybody’s Having A Great Time Hating De Blasio

    Read at 05:15 pm, May 16th

  • I charged $18,000 for a Static HTML Page

    Not too long ago, I made a living working as a contractor where I would hop from project to project. Some were short term, where I would work for a week and quickly deliver my service. Others lasted a couple months, where I would make enough money to take some time off. I preferred the short ones because they allowed me to charge a much higher rate for a quick job. Not only did I feel like my own boss, but I also felt like I didn't have to work too hard to make a decent living. My highest rates were still reasonable, and I always delivered high quality service.

    That was until I landed a gig with a large company. This company contacted me in urgency and the manager told me they needed someone right away. Someone who required minimum training for maximum performance. For better or worse, that was my motto. This project was exactly the type of work I liked. It was short, fast, and it paid well. After negotiating a decent rate, I received an email with the instructions. They gave me more context for the urgency. Their developer left without prior warning and never updated anyone on the status of his project. "We need your full undivided attention to complete this project. For the duration of the contract, you will work exclusively with us to deliver results in a timely manner. We plan to compensate you for the trouble."

    The instructions were simple: read the requirements, then come up with an estimate of how long it would take to complete the project. This was one of the easier projects I have encountered in my career. It was an HTML page with some minor animations and a few embedded videos. I spent the evening studying the requirements and simulating the implementation in my head. Over the years, I've learned not to write any code for a client until I have a guarantee of pay. I determined that this project would be a day's worth of work. But to be cautious, I quoted 20 hours with a rough total of $1,500. It was a single HTML page after all, and I can only charge them so much. They asked me to come on site to their satellite office 25 miles away. I would have to drive there for the 3 days I would be working for them.

    The next day, I arrived at the satellite office. It was in a shopping center where a secret door led to a secret world where a few workers were churning quietly in their cubicles. The receptionist presented me with a brand new MacBook Pro that I had to set up from scratch. I do prefer using a company's laptop, because they often require contractors to install suspicious software. I spent the day downloading my toolkit, setting up email, ssh keys, and requesting invites to services. In other words, I got nothing done. This is why I quoted 20 hours: I lost 8 hours of my estimated time doing busy work.

    The next day, I was ready to get down to business. Armed with the MacBook Pro, I sent an email to the manager. I told him that I was ready to work and that I was waiting for the aforementioned assets. That day, I stayed in my cubicle under a softly buzzing light, twiddling my fingers until the sun went down. I did the math again. According to my estimate, I had 4 hours left to do the job, which was not so unrealistic for a single HTML page. But needless to say, the next day I spent those remaining 4 hours in a company sponsored lunch, where I ate very well and mingled with other employees. When the time expired, I made sure to send the manager another email, to let him know that I had been present in the company, only I had not received the assets I needed to do the job. That email, of course, was ignored.

    The following Monday, I hesitantly drove the 25 miles. To my surprise, the manager had come down to the satellite office, where he enthusiastically greeted me. He was a nice easy-going guy in his mid thirties. I was confused. He didn't have the urgent tone he had on the phone when he hired me. We had a friendly conversation where no work was mentioned. Later, we went down to lunch, where he paid for my meal. It was a good day. No work was done.

    Call me a creature of habit, but if you feed me and pamper me every day, I get used to it. It turned into a routine. I'd come to work, spend some time online reading and watching videos. I'd send one email a day, so they'd know I was around. Then I'd go get lunch and hang out with whoever had an interesting story to share. At the end of the day, I'd stand up, stretch, let out a well deserved yawn, then drive home. I got used to it. In fact, I was expecting it.

    It was a little disappointing when I finally got an email with a link that pointed to the assets I needed for the job. I came back down to earth, and put on my working face. Only, after spending a few minutes looking through the zip file, I noticed that it was missing the bulk of what I needed. The designer had sent me some Adobe Illustrator files, and I couldn't open them on the MacBook. I replied to the email explaining my concerns and bundled a few other questions to save time. At that point, my quoted 20 hours had long expired. I wanted to get this job over with already.

    Shortly after I clicked send, I received an email. All it said was: "Adding Alex to the thread," and Alex was CC'd on the email. Then Alex replied, adding Steve to the thread. Steve replied saying that Michelle was a designer and she would know more about this. Michelle auto-responded saying that she was on vacation and that all inquiries should be directed to her manager. Her manager replied asking "Who is Ibrahim?" My manager replied excusing himself for not introducing me. As a contractor, I am usually in and out of a company before people notice that I work there. Here, I received a flood of emails welcoming me aboard. The chain of emails continued for a while, and I was forced to answer all those awfully nice messages. Some people were eager to meet me in person. They got a little disappointed when I said that I was all the way down in California. And jealous: they said they were jealous of the beautiful weather.

    They used courtesy to ignore my emails. They used CC to deflect my questions. They used spam to dismiss anything I asked. I spent my days like an archaeologist digging through the deep trenches of emails, hoping to find answers to my questions. You can imagine the level of impostor syndrome I felt every time I remembered that my only task was to build a single static HTML page. The overestimated 20 hour project turned into a 7-week adventure where I enjoyed free lunches, drove 50 miles every day, and dug through emails.

    When I finally completed the project, I sent it to the team on GitHub. All great adventures must come to an end. But shortly after, I received an invitation to have my code reviewed by the whole team on Google Hangout. I had spent more than a month building a single static HTML page, and now the entire team would have to critique my work? In my defense, there were also some JavaScript interactions, and it was responsive, and it also had CSS animations... Impostor. Of course, the video meeting was rescheduled a few times. When it finally happened, my work and I were not the subject of the meeting. They were all sitting in the same room somewhere in New York and talked for a while like a tight knit group. In fact, all they ever said about the project was:

    Person 1: Hey, is anyone working on that sponsored page?
    Person 2: Yeah, I think it's done.
    Person 1: Great, I'll merge it tonight.

    When I went home that night, I realized that I was facing another challenge. I had been working at this company for 7 weeks, and my original quote was for $1,500. That's roughly the equivalent of $11,100 a year, or $214 a week. Or even better: $5.35 an hour. This barely covered my transportation. So, I sent them an invoice where I quoted them for 7 weeks of work at the original hourly rate. The total amounted to $18,000. I was ashamed of course, but what else was I supposed to do? Just like I expected, I got no reply. If there is something that all large companies have in common, it's that they are not very eager to pay their bills on time. I felt like a cheat charging so much for such a simple job, but this was not a charity. I had been driving 50 miles every day to do the job; if the job was not getting done, it was not for my lack of trying. It was their slow responses.

    I got an answer the following week. It was a cold email from the manager where he broke down every day I worked into hourly blocks. Then he highlighted the hours I worked and marked a one hour lunch break each day. At the end, he made some calculations with our agreed upon hourly rate. Apparently, I was wrong. I had miscalculated the total. After adjustment, the total amount they owed me was $21,000. "Please confirm the readjusted hours so accounting can write you a check." I quickly confirmed those hours. Source: I charged $18,000 for a Static HTML Page

    Read at 10:26 am, May 16th

Day of May 15th, 2019

  • Trump's long trade war

    Senior administration officials tell Axios that a trade deal with China isn't close and that the U.S. could be in for a long trade war.

    Read at 11:43 am, May 15th

  • YouTube’s Newest Far-Right, Foul-Mouthed, Red-Pilling Star Is A 14-Year-Old Girl

    "Soph" has nearly a million followers on the giant video platform. The site's executives only have themselves to blame. What does a 14-year-old girl dressed in a chador have to say on YouTube to amass more than 800,000 followers?

    Read at 11:41 am, May 15th

  • Julian Assange: Sweden reopens rape investigation

    Swedish prosecutors have reopened an investigation into a rape allegation made against Wikileaks co-founder Julian Assange in 2010. The inquiry has been revived at the request of the alleged victim's lawyer.

    Read at 09:36 am, May 15th

  • Theresa May set to let MPs decide as Brexit talks hit buffers

    Theresa May is preparing to concede giving Parliament “definitive votes” to decide Brexit terms as furious MPs pile pressure on her and Jeremy Corbyn to abandon their talks.

    Read at 09:30 am, May 15th

  • Tech workers protest data mining firm Palantir for role in immigrant arrests

    Palantir, the CIA-backed data-mining firm co-founded by Donald Trump’s ally Peter Thiel, became the target of an online protest organized by tech activists against the company’s work with US immigration authorities.

    Read at 09:27 am, May 15th

  • Warren vows to pick ex-public school teacher as Education secretary

    Sen. Elizabeth Warren (D-Mass.) vowed on Monday that she will pick a former public school teacher to lead the Department of Education if she wins the White House in 2020.  "In a Warren administration, we'll have a secretary of Education who is committed to public education.

    Read at 09:25 am, May 15th

  • Angry Birds and the end of privacy

    Angry Birds is so 2009, you might say. “I haven’t played Angry Birds since 2012, at the latest,” you might insist. It doesn’t matter. Angry Birds is still part of your life.

    Read at 09:25 am, May 15th

  • Dems plead with Steve Bullock to abandon White House bid for Senate

    Like Beto O’Rourke, John Hickenlooper and Stacey Abrams, the Montana governor is rebuffing pressure to redraw the Senate map. Top Democrats in Montana and Washington are really excited about Gov. Steve Bullock running — for the Senate, not the presidency.

    Read at 09:08 am, May 15th

  • Donald Trump Jr. reaches deal with Senate Intelligence Committee for testimony in June

    Donald Trump, Jr. and the Senate Intelligence Committee have reached a deal for the President's eldest son to appear before the committee behind closed doors in mid-June, a source familiar with the matter told CNN. The two sides reached a deal after the committee issued a subpoena for Trump Jr.

    Read at 09:04 am, May 15th

  • Elegant error handling with the JavaScript Either Monad

    Elegant error handling with the JavaScript Either Monad An earlier version of this article was first published on the LogRocket blog. Let&#x2019;s talk about how we handle errors for a little bit. In JavaScript, we have a built-in language feature for dealing with exceptions. We wrap problematic code in a try&#x2026;catch statement. This lets us write the &#x2018;happy path&#x2019; in the try section, and then deal with any exceptions in the catch section. And this is not a bad thing. It allows us to focus on the task at hand, without having to think about every possible error that might occur. It&#x2019;s definitely better than littering our code with endless if-statements. Without try&#x2026;catch, it gets tedious checking the result of every function call for unexpected values. Exceptions and try&#x2026;catch blocks serve a purpose. But, they have some issues. And they are not the only way to handle errors. In this article, we&#x2019;ll take a look at using the &#x2018;Either monad&#x2019; as an alternative to try...catch. A few things before we continue. In this article, we&#x2019;ll assume you already know about function composition and currying. If you need a minute to brush up on those, that&#x2019;s totally OK. And a word of warning. If you haven&#x2019;t come across things like monads before, they might seem really&#x2026; different. Working with tools like these takes a mind shift. And that can be hard work to start with. Don&#x2019;t worry if you get confused at first. Everyone does. I&#x2019;ve listed some other references at the end that may help. But don&#x2019;t give up. This stuff is intoxicating once you get into it. A sample problem Before we go into what&#x2019;s wrong with exceptions, let&#x2019;s talk about why they exist. There&#x2019;s a reason we have things like exceptions and try&#x2026;catch blocks. They&#x2019;re not all bad all of the time. To explore the topic, we&#x2019;ll attempt to solve an example problem. 
I&#x2019;ve tried to make it at least semi-realistic. Imagine we&#x2019;re writing a function to display a list of notifications. We&#x2019;ve already managed (somehow) to get the data back from the server. But, for whatever reason, the back-end engineers decided to send it in CSV format rather than JSON. The raw data might look something like this: timestamp,content,viewed,href 2018-10-27T05:33:34+00:00,@madhatter invited you to tea,unread,https://example.com/invite/tea/3801 2018-10-26T13:47:12+00:00,@queenofhearts mentioned you in 'Croquet Tournament' discussion,viewed,https://example.com/discussions/croquet/1168 2018-10-25T03:50:08+00:00,@cheshirecat sent you a grin,unread,https://example.com/interactions/grin/88 Now, eventually, we want to render this code as HTML. It might look something like this: &lt;ul class="MessageList"&gt; &lt;li class="Message Message--viewed"&gt; &lt;a href="https://example.com/invite/tea/3801" class="Message-link"&gt;@madhatter invited you to tea&lt;/a&gt; &lt;time datetime="2018-10-27T05:33:34+00:00"&gt;27 October 2018&lt;/time&gt; &lt;li&gt; &lt;li class="Message Message--viewed"&gt; &lt;a href="https://example.com/discussions/croquet/1168" class="Message-link"&gt;@queenofhearts mentioned you in 'Croquet Tournament' discussion&lt;/a&gt; &lt;time datetime="2018-10-26T13:47:12+00:00"&gt;26 October 2018&lt;/time&gt; &lt;/li&gt; &lt;li class="Message Message--viewed"&gt; &lt;a href="https://example.com/interactions/grin/88" class="Message-link"&gt;@cheshirecat sent you a grin&lt;/a&gt; &lt;time datetime="2018-10-25T03:50:08+00:00"&gt;25 October 2018&lt;/time&gt; &lt;/li&gt; &lt;/ul&gt; To keep the problem simple, for now, we&#x2019;ll just focus on processing each line of the CSV data. We start with a few simple functions to process the row. The first one will split a line of text into fields: function splitFields(row) { return row.split('","'); } Now, this function is over-simplified because this is a tutorial. 
Our focus is on error handling, not CSV parsing. If there was ever a comma in one of the messages, this would go horribly wrong. Please do not ever use code like this to parse real CSV data. If you ever do need to parse CSV data, please use a well-tested CSV parsing library. Once we&#x2019;ve split the data, we want to create an object. And we&#x2019;d like each property name to match the CSV headers. Let&#x2019;s assume we&#x2019;ve already parsed the header row somehow. (We&#x2019;ll cover that bit in a moment.) But we&#x2019;ve come to a point where things might start going wrong. We have an error to handle. We throw an error if the length of the row doesn&#x2019;t match the header row. (_.zipObject is a lodash function). function zipRow(headerFields, fieldData) { if (headerFields.length !== fieldData.length) { throw new Error("Row has an unexpected number of fields"); } return _.zipObject(headerFields, fieldData); } After that, we&#x2019;ll add a human-readable date to the object, so that we can print it out in our template. It&#x2019;s a little verbose, as JavaScript doesn&#x2019;t have awesome built-in date formatting support. And once again, we encounter potential problems. If we get an invalid date, our function throws an error. function addDateStr(messageObj) { const errMsg = 'Unable to parse date stamp in message object'; const months = [ 'January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December' ]; const d = new Date(messageObj.datestamp); if (isNaN(d)) { throw new Error(errMsg); } const datestr = `${d.getDate()} ${months[d.getMonth()]} ${d.getFullYear()}`; return {datestr, ...messageObj}; } And finally we take our object, and pass it through a template function to get an HTML string. 
const rowToMessage = _.template(`<li class="Message Message--<%= viewed %>">
  <a href="<%= href %>" class="Message-link"><%= content %></a>
  <time datetime="<%= datestamp %>"><%= datestr %></time>
</li>`);

If we end up with an error, it would also be nice to have a way to print that too:

const showError = _.template(`<li class="Error"><%= message %></li>`);

And once we have all of those in place, we can put them together to create our function that will process each row.

function processRow(headerFieldNames, row) {
    try {
        const fields = splitFields(row);
        const rowObj = zipRow(headerFieldNames, fields);
        const rowObjWithDate = addDateStr(rowObj);
        return rowToMessage(rowObjWithDate);
    } catch(e) {
        return showError(e);
    }
}

So, we have our example function. And it's not too bad, as far as JavaScript code goes. But let's take a closer look at how we're managing exceptions here.

Exceptions: The good parts

So, what's good about try...catch? The thing to note is, in the above example, any of the steps in the try block might throw an error. In zipRow() and addDateStr() we intentionally throw errors. And if a problem happens, then we simply catch the error and show whatever message the error happens to have on the page. Without this mechanism, the code gets really ugly. Here's what it might look like without exceptions. Instead of throwing exceptions, we'll assume that our functions will return null.

function processRowWithoutExceptions(headerFieldNames, row) {
    const fields = splitFields(row);
    const rowObj = zipRow(headerFieldNames, fields);
    if (rowObj === null) {
        return showError(new Error('Encountered a row with an unexpected number of items'));
    }
    const rowObjWithDate = addDateStr(rowObj);
    if (rowObjWithDate === null) {
        return showError(new Error('Unable to parse date in row object'));
    }
    return rowToMessage(rowObjWithDate);
}

As you can see, we end up with a lot of boilerplate if-statements.
The code is more verbose. And it's difficult to follow the main logic. Also, a null value doesn't tell us very much. We don't actually know why the previous function call failed. So, we have to guess. We make up an error message, and call showError(). Without exceptions, the code is messier and harder to follow.

But look again at the version with exception handling. It gives us a nice clear separation between the 'happy path' and the exception handling code. The try part is the happy path, and the catch part is the sad path (so to speak). All of the exception handling happens in one spot. And we can let the individual functions tell us why they failed. All in all, it seems pretty nice. In fact, I think most of us would consider the first example a neat piece of code. Why would we need another approach?

Problems with try...catch exception handling

The good thing about exceptions is they let you ignore those pesky error conditions. But unfortunately, they do that job a little too well. You just throw an exception and move on. We can work out where to catch it later. And we all intend to put that try...catch block in place. Really, we do. But it's not always obvious where it should go. And it's all too easy to forget one. And before you know it, your application crashes.

Another thing to think about is that exceptions make our code impure. Why functional purity is a good thing is a whole other discussion. But let's consider one small aspect of functional purity: referential transparency. A referentially transparent function will always give the same result for a given input. But we can't say this about functions that throw exceptions. At any moment, they might throw an exception instead of returning a value. This makes it more complicated to think about what a piece of code is actually doing. But what if we could have it both ways? What if we could come up with a pure way to handle errors?
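To make the referential-transparency point concrete, here's a minimal sketch. The parseAge functions are hypothetical, not from the article: one sometimes throws, so a call can't always be substituted by a return value; the other always returns a value, with the failure case represented as ordinary data.

```javascript
// Hypothetical example: two ways to parse an age string.

// This version sometimes throws. A call like parseAgeThrowing('cat')
// cannot be replaced by its return value, because it never returns one.
function parseAgeThrowing(str) {
    const n = Number(str);
    if (Number.isNaN(n)) {
        throw new Error('Not a number');
    }
    return n;
}

// This version always returns a value. The failure case is ordinary
// data, so every call can be substituted by its result.
function parseAgePure(str) {
    const n = Number(str);
    return Number.isNaN(n)
        ? { ok: false, error: 'Not a number' }
        : { ok: true, value: n };
}
```

The second function is referentially transparent: given the same input, it always hands back the same plain value, and nothing else can happen.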
Coming up with an alternative

Pure functions always return a value (even if that value is nothing). So our error handling code needs to assume we always return a value. So, as a first attempt, what if we returned an Error object on failure? That is, wherever we were throwing an error, we return it instead. That might look something like this:

function processRowReturningErrors(headerFieldNames, row) {
    const fields = splitFields(row);
    const rowObj = zipRow(headerFieldNames, fields);
    if (rowObj instanceof Error) {
        return showError(rowObj);
    }
    const rowObjWithDate = addDateStr(rowObj);
    if (rowObjWithDate instanceof Error) {
        return showError(rowObjWithDate);
    }
    return rowToMessage(rowObjWithDate);
}

This is not much of an improvement on the version without exceptions. But it is better. We've moved responsibility for the error messages back into the individual functions. But that's about it. We've still got all of those if-statements. It would be really nice if there was some way we could encapsulate the pattern. In other words, if we know we've got an error, don't bother running the rest of the code.

Polymorphism

So, how do we do that? It's a tricky problem. But it's achievable with the magic of polymorphism. If you haven't come across polymorphism before, don't worry. All it means is 'providing a single interface to entities of different types.' In JavaScript, that means we create objects that have methods with the same name and signature. But we give them different behaviors. A classic example of this is application logging. We might want to send our logs to different places depending on what environment we're in. What if we created two logger objects, like so?
const consoleLogger = {
    log: function log(msg) {
        console.log('This is the console logger, logging:', msg);
    }
};

const ajaxLogger = {
    log: function log(msg) {
        return fetch('https://example.com/logger', {method: 'POST', body: msg});
    }
};

Both objects define a log function that expects a single string parameter. But they behave differently. The beauty of this is that we can write code that calls .log(), but doesn't care which object it's using. It might be a consoleLogger or an ajaxLogger. It works either way. For example, the code below would work equally well with either object:

function log(logger, message) {
    logger.log(message);
}

Another example is the .toString() method on all JS objects. We can write a .toString() method on any class that we make. So, perhaps we could create two classes that implement .toString() differently. We'll call them Left and Right (I'll explain why in a moment).

class Left {
    constructor(val) {
        this._val = val;
    }
    toString() {
        const str = this._val.toString();
        return `Left(${str})`;
    }
}

class Right {
    constructor(val) {
        this._val = val;
    }
    toString() {
        const str = this._val.toString();
        return `Right(${str})`;
    }
}

Now, let's create a function that will call .toString() on those two objects:

function trace(val) {
    console.log(val.toString());
    return val;
}

trace(new Left('Hello world'));
// ⦘ Left(Hello world)

trace(new Right('Hello world'));
// ⦘ Right(Hello world)

Not exactly mind-blowing, I know. But the point is that we have two different kinds of behavior using the same interface. That's polymorphism. But notice something interesting. How many if-statements have we used? Zero. None. We've created two different kinds of behavior without a single if-statement in sight. Perhaps we could use something like this to handle our errors...

Left and right

Getting back to our problem, we want to define a happy path and a sad path for our code.
On the happy path, we just keep happily running our code until an error happens or we finish. If we end up on the sad path though, we don't bother with trying to run the code any more. Now, we could call our two classes 'Happy' and 'Sad' to represent the two paths. But we're going to follow the naming conventions that other programming languages and libraries use. That way, if you do any further reading it will be less confusing. So, we'll call our sad path 'Left' and our happy path 'Right' just to stick with convention.

Let's create a method that will take a function and run it if we're on the happy path, but ignore it if we're on the sad path:

/**
 * Left represents the sad path.
 */
class Left {
    constructor(val) {
        this._val = val;
    }
    runFunctionOnlyOnHappyPath() {
        // Left is the sad path. Do nothing.
    }
    toString() {
        const str = this._val.toString();
        return `Left(${str})`;
    }
}

/**
 * Right represents the happy path.
 */
class Right {
    constructor(val) {
        this._val = val;
    }
    runFunctionOnlyOnHappyPath(fn) {
        return fn(this._val);
    }
    toString() {
        const str = this._val.toString();
        return `Right(${str})`;
    }
}

Then we could do something like this:

const leftHello  = new Left('Hello world');
const rightHello = new Right('Hello world');

leftHello.runFunctionOnlyOnHappyPath(trace);
// does nothing

rightHello.runFunctionOnlyOnHappyPath(trace);
// ⦘ Hello world
// ￩ "Hello world"

Map

We're getting closer to something useful, but we're not quite there yet. Our .runFunctionOnlyOnHappyPath() method returns the _val property. That's fine, but it makes things inconvenient if we want to run more than one function. Why? Because we no longer know if we're on the happy path or the sad path. That information is gone as soon as we take the value outside of Left or Right. So, what we can do instead, is return a Left or Right with a new _val inside.
And we'll shorten the name while we're at it. What we're doing is mapping a function from the world of plain values to the world of Left and Right. So we call the method map():

/**
 * Left represents the sad path.
 */
class Left {
    constructor(val) {
        this._val = val;
    }
    map() {
        // Left is the sad path
        // so we do nothing
        return this;
    }
    toString() {
        const str = this._val.toString();
        return `Left(${str})`;
    }
}

/**
 * Right represents the happy path.
 */
class Right {
    constructor(val) {
        this._val = val;
    }
    map(fn) {
        return new Right(fn(this._val));
    }
    toString() {
        const str = this._val.toString();
        return `Right(${str})`;
    }
}

With that in place, we can use Left or Right with a fluent-style syntax:

const leftHello  = new Left('Hello world');
const rightHello = new Right('Hello world');

const helloToGreetings = str => str.replace(/Hello/, 'Greetings,');

leftHello.map(helloToGreetings).map(trace);
// Doesn't print anything to the console
// ￩ Left(Hello world)

rightHello.map(helloToGreetings).map(trace);
// ⦘ Greetings, world
// ￩ Right(Greetings, world)

We've effectively created two tracks. We can put a piece of data on the right track by calling new Right(), and put a piece of data on the left track by calling new Left(). Each class represents a track. The left track is our sad path, and the right track is the happy path. (I've totally stolen this railway metaphor from Scott Wlaschin.) If we map along the right track, we follow the happy path and process the data. If we end up on the left track though, nothing happens. We just keep passing the value down the line. If we were to, say, put an Error in that left track, then we have something very similar to try...catch. We use .map() to move us along the track. As we go on, it gets to be a bit of a pain writing 'a Left or a Right' all the time. So we'll refer to the Left and Right combo together as 'Either'.
It's either a Left or a Right.

Shortcuts for making Either objects

So, the next step would be to rewrite our example functions so that they return an Either. A Left for an Error, or a Right for a value. But, before we do that, let's take some of the tedium out of it. We'll write a couple of little shortcuts. The first is a static method called .of(). All it does is return a new Left or Right. The code might look like this:

Left.of = function of(x) {
    return new Left(x);
};

Right.of = function of(x) {
    return new Right(x);
};

To be honest, I find even Left.of() and Right.of() tedious to write. So I tend to create even shorter shortcuts called left() and right():

function left(x) {
    return Left.of(x);
}

function right(x) {
    return Right.of(x);
}

With those in place, we can start rewriting our application functions:

function zipRow(headerFields, fieldData) {
    const lengthMatch = (headerFields.length === fieldData.length);
    return (!lengthMatch)
        ? left(new Error("Row has an unexpected number of fields"))
        : right(_.zipObject(headerFields, fieldData));
}

function addDateStr(messageObj) {
    const errMsg = 'Unable to parse date stamp in message object';
    const months = [
        'January', 'February', 'March', 'April', 'May', 'June',
        'July', 'August', 'September', 'October', 'November', 'December'
    ];
    const d = new Date(messageObj.datestamp);
    if (isNaN(d)) {
        return left(new Error(errMsg));
    }
    const datestr = `${d.getDate()} ${months[d.getMonth()]} ${d.getFullYear()}`;
    return right({datestr, ...messageObj});
}

The modified functions aren't so very different from the old ones. We just wrap the return value in Left or Right, depending on whether we found an error. That done, we can start reworking our main function that processes a single row.
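As a quick sanity check, here's a condensed, self-contained sketch of the Either-returning zipRow(). It uses a tiny stand-in for lodash's _.zipObject so it runs without the library; the stand-in and the sample values are mine, not the article's.

```javascript
// Condensed Left/Right, just enough to exercise the rewritten zipRow().
class Left { constructor(val) { this._val = val; } }
class Right { constructor(val) { this._val = val; } }
const left = x => new Left(x);
const right = x => new Right(x);

// Stand-in for lodash's _.zipObject: pair up keys with values.
const zipObject = (keys, vals) =>
    keys.reduce((obj, key, i) => ({ ...obj, [key]: vals[i] }), {});

function zipRow(headerFields, fieldData) {
    const lengthMatch = (headerFields.length === fieldData.length);
    return (!lengthMatch)
        ? left(new Error('Row has an unexpected number of fields'))
        : right(zipObject(headerFields, fieldData));
}

const good = zipRow(['datestamp', 'content'], ['2018-10-27', 'hi']);
const bad  = zipRow(['datestamp', 'content'], ['2018-10-27']);
// good is a Right holding {datestamp: '2018-10-27', content: 'hi'};
// bad is a Left holding an Error. Neither call can throw.
```

The important property: both branches return a value, so callers always get an Either back and can decide later what to do with a Left.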
We'll start by putting the row string into an Either with right(), and then map splitFields to split it:

function processRow(headerFields, row) {
    const fieldsEither = right(row).map(splitFields);
    // ...
}

This works just fine, but we get into trouble when we try the same thing with zipRow():

function processRow(headerFields, row) {
    const fieldsEither = right(row).map(splitFields);
    const rowObj = fieldsEither.map(zipRow /* wait. this isn't right */);
    // ...
}

This is because zipRow() expects two parameters. But functions we pass into .map() only get a single value from the ._val property. One way to fix this is to create a curried version of zipRow(). It might look something like this:

function zipRow(headerFields) {
    return function zipRowWithHeaderFields(fieldData) {
        const lengthMatch = (headerFields.length === fieldData.length);
        return (!lengthMatch)
            ? left(new Error("Row has an unexpected number of fields"))
            : right(_.zipObject(headerFields, fieldData));
    };
}

This slight change makes it easier to transform zipRow() so it will work nicely with .map():

function processRow(headerFields, row) {
    const fieldsEither = right(row).map(splitFields);
    const rowObj = fieldsEither.map(zipRow(headerFields));
    // ... But now we have another problem ...
}

Join

Using .map() to run splitFields() is fine, as splitFields() doesn't return an Either. But when we get to running zipRow() we have a problem. Calling zipRow() returns an Either. So, if we use .map() we end up sticking an Either inside an Either. If we go any further we'll be stuck, unless we run .map() inside .map(). This isn't going to work so well. We need some way to join those nested Eithers together into one. So, we'll write a new method, called .join():

/**
 * Left represents the sad path.
 */
class Left {
    constructor(val) {
        this._val = val;
    }
    map() {
        // Left is the sad path
        // so we do nothing
        return this;
    }
    join() {
        // On the sad path, we don't
        // do anything with join
        return this;
    }
    toString() {
        const str = this._val.toString();
        return `Left(${str})`;
    }
}

/**
 * Right represents the happy path.
 */
class Right {
    constructor(val) {
        this._val = val;
    }
    map(fn) {
        return new Right(fn(this._val));
    }
    join() {
        if ((this._val instanceof Left) || (this._val instanceof Right)) {
            return this._val;
        }
        return this;
    }
    toString() {
        const str = this._val.toString();
        return `Right(${str})`;
    }
}

Now we're free to un-nest our values:

function processRow(headerFields, row) {
    const fieldsEither = right(row).map(splitFields);
    const rowObj = fieldsEither.map(zipRow(headerFields)).join();
    const rowObjWithDate = rowObj.map(addDateStr).join();
    // Slowly getting better... but what do we return?
}

Chain

We've made it a lot further. But having to remember to call .join() every time is annoying. This pattern of calling .map() and .join() together is so common that we'll create a shortcut method for it. We'll call it chain(), because it allows us to chain together functions that return a Left or Right.

/**
 * Left represents the sad path.
 */
class Left {
    constructor(val) {
        this._val = val;
    }
    map() {
        // Left is the sad path
        // so we do nothing
        return this;
    }
    join() {
        // On the sad path, we don't
        // do anything with join
        return this;
    }
    chain() {
        // Boring sad path,
        // do nothing.
        return this;
    }
    toString() {
        const str = this._val.toString();
        return `Left(${str})`;
    }
}

/**
 * Right represents the happy path.
 */
class Right {
    constructor(val) {
        this._val = val;
    }
    map(fn) {
        return new Right(fn(this._val));
    }
    join() {
        if ((this._val instanceof Left) || (this._val instanceof Right)) {
            return this._val;
        }
        return this;
    }
    chain(fn) {
        return fn(this._val);
    }
    toString() {
        const str = this._val.toString();
        return `Right(${str})`;
    }
}

Going back to our railway track analogy, .chain() lets us switch over to the left track if an error occurs. Note that the switches only go one way. With that in place, our code is a little clearer:

function processRow(headerFields, row) {
    const fieldsEither = right(row).map(splitFields);
    const rowObj = fieldsEither.chain(zipRow(headerFields));
    const rowObjWithDate = rowObj.chain(addDateStr);
    // Slowly getting better... but what do we return?
}

Doing something with the values

We're nearly done reworking our processRow() function. But what happens when we return the value? Eventually, we want to take different action depending on whether we have a Left or a Right. So we'll write a function that will take the appropriate action accordingly:

function either(leftFunc, rightFunc, e) {
    return (e instanceof Left) ? leftFunc(e._val) : rightFunc(e._val);
}

We've cheated and used the inner values of the Left or Right objects. But we'll pretend you didn't see that.
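To see how .map(), .chain() and either() fit together end to end, here's a condensed, runnable sketch. The keepEven validator and the render function are hypothetical stand-ins for the article's row-processing functions:

```javascript
// Condensed Left/Right with just map() and chain().
class Left {
    constructor(val) { this._val = val; }
    map() { return this; }    // sad path: do nothing
    chain() { return this; }  // sad path: do nothing
}
class Right {
    constructor(val) { this._val = val; }
    map(fn) { return new Right(fn(this._val)); }
    chain(fn) { return fn(this._val); }
}
const left = x => new Left(x);
const right = x => new Right(x);

function either(leftFunc, rightFunc, e) {
    return (e instanceof Left) ? leftFunc(e._val) : rightFunc(e._val);
}

// A chain-able validator: even numbers stay on the happy path.
const keepEven = n => (n % 2 === 0) ? right(n) : left('odd number');

const good = right(4).map(n => n + 2).chain(keepEven); // Right(6)
const bad  = right(3).map(n => n + 2).chain(keepEven); // Left('odd number')

// either() is the single place where we finally unwrap.
const render = e => either(msg => `Error: ${msg}`, n => `Value: ${n}`, e);
// render(good) → 'Value: 6'
// render(bad)  → 'Error: odd number'
```

Once bad becomes a Left, the remaining .map() and .chain() calls pass it through untouched, and either() picks the sad-path branch at the very end.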
We're now able to finish our function:

function processRow(headerFields, row) {
    const fieldsEither = right(row).map(splitFields);
    const rowObj = fieldsEither.chain(zipRow(headerFields));
    const rowObjWithDate = rowObj.chain(addDateStr);
    return either(showError, rowToMessage, rowObjWithDate);
}

And if we're feeling particularly clever, we could write it using a fluent syntax:

function processRow(headerFields, row) {
    const rowObjWithDate = right(row)
        .map(splitFields)
        .chain(zipRow(headerFields))
        .chain(addDateStr);
    return either(showError, rowToMessage, rowObjWithDate);
}

Both versions are pretty neat. Not a try...catch in sight. And no if-statements in our top-level function. If there's a problem with any particular row, we just show an error message at the end. And note that in processRow() the only time we mention Left or Right is at the very start, when we call right(). For the rest, we just use the .map() and .chain() methods to apply the next function.

Ap and lift

This is looking good, but there's one final scenario that we need to consider. Sticking with the example, let's take a look at how we might process the whole CSV data, rather than just each row. We'll need a helper function or three:

function splitCSVToRows(csvData) {
    // There should always be a header row... so if there's no
    // newline character, something is wrong.
    return (csvData.indexOf('\n') < 0)
        ? left(new Error('No header row found in CSV data'))
        : right(csvData.split('\n'));
}

function processRows(headerFields, dataRows) {
    // Note this is Array map, not Either map.
    return dataRows.map(row => processRow(headerFields, row));
}

function showMessages(messages) {
    return `<ul class="Messages">${messages.join('\n')}</ul>`;
}

So, we have a helper function that splits the CSV data into rows. And we get an Either back. Now, we can use .map() and some lodash functions to split out the header row from the data rows.
But we end up in an interesting situation...

function csvToMessages(csvData) {
    const csvRows = splitCSVToRows(csvData);
    const headerFields = csvRows.map(_.head).map(splitFields);
    const dataRows = csvRows.map(_.tail);
    // What's next?
}

We have our header fields and data rows all ready to map over with processRows(). But headerFields and dataRows are both wrapped up inside an Either. We need some way to convert processRows() to a function that works with Eithers. As a first step, we will curry processRows():

function processRows(headerFields) {
    return function processRowsWithHeaderFields(dataRows) {
        // Note this is Array map, not Either map.
        return dataRows.map(row => processRow(headerFields, row));
    };
}

Now, with this in place, we can run an experiment. We have headerFields, which is an Either wrapped around an array. What would happen if we were to take headerFields and call .map() on it with processRows()?

function csvToMessages(csvData) {
    const csvRows = splitCSVToRows(csvData);
    const headerFields = csvRows.map(_.head).map(splitFields);
    const dataRows = csvRows.map(_.tail);
    // How will we pass headerFields and dataRows to
    // processRows()?
    const funcInEither = headerFields.map(processRows);
}

Using .map() here calls the outer function of processRows(), but not the inner one. In other words, processRows() returns a function. And because it's .map(), we still get an Either back. So we end up with a function inside an Either. I gave it away a little with the variable name. funcInEither is an Either. It contains a function that takes an array of strings and returns an array of different strings. We need some way to take that function and call it with the value inside dataRows. To do that, we need to add one more method to our Left and Right classes. We'll call it .ap(), because the standard tells us to. The way to remember it is to recall that ap is short for 'apply'. It helps us apply values to functions.
The method for the Left does nothing, as usual:

// In Left (the sad path)
ap() {
    return this;
}

And for the Right class, the variable name spells out that we expect the other Either to contain a function:

// In Right (the happy path)
ap(otherEither) {
    const functionToRun = otherEither._val;
    return this.map(functionToRun);
}

So, with that in place, we can finish off our main function:

function csvToMessages(csvData) {
    const csvRows = splitCSVToRows(csvData);
    const headerFields = csvRows.map(_.head).map(splitFields);
    const dataRows = csvRows.map(_.tail);
    const funcInEither = headerFields.map(processRows);
    const messagesArr = dataRows.ap(funcInEither);
    return either(showError, showMessages, messagesArr);
}

Now, I've mentioned this before, but I find .ap() a little confusing to work with. Another way to think about it is to say: "I have a function that would normally take two plain values. I want to turn it into a function that takes two Eithers." Now that we have .ap(), we can write a function that will do exactly that. We'll call it liftA2(), again because it's a standard name. It takes a plain function expecting two arguments, and 'lifts' it to work with 'Applicatives'. (Applicatives are things that have an .ap() method and an .of() method.) So, liftA2 is short for 'lift applicative, two parameters'. A liftA2 function might look something like this:

function liftA2(func) {
    return function runApplicativeFunc(a, b) {
        return b.ap(a.map(func));
    };
}

So, our top-level function would use it like this:

function csvToMessages(csvData) {
    const csvRows = splitCSVToRows(csvData);
    const headerFields = csvRows.map(_.head).map(splitFields);
    const dataRows = csvRows.map(_.tail);
    const processRowsA = liftA2(processRows);
    const messagesArr = processRowsA(headerFields, dataRows);
    return either(showError, showMessages, messagesArr);
}

You can see the whole thing in action on CodePen.

Really? Is that it?
Now, why is this any better than just throwing exceptions? Does it seem like an overly complicated way to handle something simple? Well, let's think about why we like exceptions in the first place. If we didn't have exceptions, we would have to write a lot of if-statements all over the place. We would be forever writing code along the lines of 'if the last thing worked keep going, else handle the error'. And we would have to keep handling these errors all through our code. That makes it hard to follow what's going on. Throwing exceptions allows us to jump out of the program flow when something goes wrong. So we don't have to write all those if-statements. We can focus on the happy path.

But there's a catch. Exceptions hide a little too much. When you throw an exception, you make handling the error some other function's problem. It's all too easy to ignore the exception, and let it bubble all the way to the top of the program. The nice thing about Either is that it lets you jump out of the main program flow like you would with an exception. But it's honest about it. You get either a Right or a Left. You can't pretend that Lefts aren't a possibility. Eventually, you have to pull the value out with something like an either() call.

Now, I know that sounds like a pain. But take a look at the code we've written (not the Either classes, but the functions that use them). There's not a lot of exception handling code there. In fact, there's almost none, except for the either() call at the end of csvToMessages() and processRow(). And that's the point. With Either, you get pure error handling that you can't accidentally forget. But without it stomping through your code and adding indentation everywhere.

This is not to say that you should never ever use try...catch. Sometimes that's the right tool for the job, and that's OK.
But it's not the only tool. Using Either gives us some advantages that try...catch can't match. So, perhaps give Either a go sometime. Even if it's tricky at first, I think you'll get to like it. If you do give it a go though, please don't use the implementation from this tutorial. Try one of the well-established libraries like Crocks, Sanctuary, Folktale or Monet. They're better maintained. And I've papered over some things for the sake of simplicity here. If you do give it a go, let me know by sending me a tweet.

Source: Elegant error handling with the JavaScript Either Monad

    Read at 02:13 pm, May 15th

  • It’s okay to hate your wedding day - The Lily

    It’s supposed to be the perfect day. Everyone says so. For women, the indoctrination begins early: first with Disney movies — pre-woke-princess era of Elsa and Moana — then romantic comedies, watched at sleepovers with a dozen other swooning teenage girls. As women approach peak bridal age, targeted Facebook ads step in with the same message: At your wedding, you should be the happiest you’ve ever been. Except maybe you’re not. Weddings consistently rank as one of the most stressful events in a person’s life — right up there with divorce and major injury or death. But while most brides feel free to grumble about all the hard work that goes into planning a wedding, any negative postgame discussion is generally considered taboo, said Maddie Eisenhart, the chief revenue officer at the wedding website A Practical Wedding. “When people ask, ‘How was your wedding,’ we don’t know how to say anything other than ‘amazing.’ We don’t have that language.” On their wedding day, brides often feel an intense pressure to achieve “emotional perfection,” Eisenhart said. “If you have any kind of mixed emotions about the event, it’s like, well, I personally failed at my one job — which was to be joyful all day long.” But even for brides who are head-over-heels in love, a wedding is not always five hours of pure giddy happiness. First, there is the basic logistical stress of orchestrating an event for, on average — on average! — 167 guests. Most brides (and it is still almost always the bride who plans the wedding) have never planned a formal event. Unless the bride can afford to hire a professional, she’ll probably be the one coordinating with the caterer, the florist, the photographer, the officiant — telling everyone where to go and what to do, and figuring out a plan B when something inevitably goes wrong. “It just sucked,” said Laine Barnes, who got married in 2018 in rural Georgia, handling most of the logistics on her own. 
Many of her guests had to cancel because of a hurricane that hit a few days before the wedding. Then the caterer changed his recipes without telling her. “I remember how disgusting the food was. I loved the coconut rice at the tasting, but at the wedding it was like, ‘Oh my god, what is this?’” Barnes fixated on the rice — and exactly how much she’d paid for it — all day. There is also the inevitable stress that comes with bringing all of the couple’s friends and family members together in one room. In the weeks leading up to her wedding, Eisenhart said, several of her closest family members threatened not to come. Two days before, she got into a big fight with her mom. On the day of her wedding, she said, “it still stung.” “We think weddings exist in a bubble,” Eisenhart said. “We think we’ll get engaged and everyone will be on their best behavior because it’s our wedding, and why wouldn’t they be?” But difficult family dynamics don’t just disappear. The bride’s parents might be fighting in the corner. If someone close to the couple has passed away, that person’s absence will still color the day. “I think that expectation mixed with that reality makes it hard for a lot of people,” Eisenhart said. The pressure to be completely, incandescently happy on your wedding day can make even the best wedding hard to enjoy. When Julia Carter, a senior lecturer at the University of West England, was writing her dissertation on bridal magazines, the word “perfect” cropped up in every single tome. For decades, Carter said, women — and only very rarely men — have been urged to aspire to the perfect wedding: the perfect dress, the perfect hair, the perfect cake, the perfect venue. This language, she said, is fueled by the multibillion-dollar wedding industry. “This idea of trying to attain ‘perfect’ is just craziness,” said Gwen Helbush, a wedding planner based in Newark, Calif. “Why would you set yourself up for failure in that way?” Still, Helbush uses the term on her website. 
(“But I say your perfect wedding,” she clarifies, “never just ‘perfect’ all by itself.”) The average cost of a wedding in the United States is a whopping $34,000, according to a 2018 survey by the wedding platform the Knot. Many couples go into debt to pay for the day. It’s consistently seen as something that’s just “worth it,” Carter said — more important even than saving up for a house or paying off student loans. Among brides, Carter said, “there is this commonly repeated myth that you have to have a good wedding to have a good marriage.” If the wedding is full of joy and laughter, the thinking goes, so, too, will the marriage. Even if brides recognize that it’s absolutely ridiculous to think one good night could actually make or break a lifelong relationship, Carter said, the myth — and the suspicion that various guests might buy into it — ratchets up the pressure. And then, on top of everything, there is the expectation that a bride should be “chill.” As women are frantically trying to craft, and enjoy, the perfect wedding, they’re also expected to appear nonchalant about the whole thing, lest they be deemed a “bridezilla.” Particularly since the term bridezilla entered the American lexicon, via the We TV series in the early 2000s, brides have been shamed for caring too much about their weddings, Eisenhart said. These kinds of expectations don’t exist for grooms, she said, presenting a glaring “double standard.” “We pretend that women are just putting on a fun show. You have to care enough to do it right, but you can’t care enough to have any kind of emotional attachment to it.” If things don’t go well, a bride can’t cry or get mad — because even though it’s supposed to be her perfect day, it’s also just a party. Grooms, on the other hand, are generally free to care as little or as much as they’d like. “If you’re not happy as a bride, it’s just one of those things you just keep to yourself,” said Lauren Jones. 
A few months after she got married in the fall of 2016, she had a panic attack outside of a friend’s wedding, triggered by an impulse to compare her friend’s wedding to her own. She couldn’t stop focusing on all the things she wished she’d done differently. “You spend all this money, all these people came to see you, so if you’re not happy, you feel like it’s all your fault. It’s embarrassing.” A wedding is life’s only major milestone with just one socially acceptable emotional response, Eisenhart said. When you graduate from high school or college, have a baby, or buy a house, she says, you’re allowed to dwell on the bad stuff, along with the good: losing friends, losing sleep, losing money. But a wedding is still understood to be emotionally one-dimensional. After her own wedding, which she describes as “less than perfect,” Eisenhart said, she was depressed for months, thinking about everything that went wrong. The hardest part, she said, was having to pretend she’d had a great time. “When you can’t acknowledge the experience you had, I think it makes it a lot worse. It feeds a level of anxiety.” Carter has spent years interviewing couples about their relationships. No bride has ever confessed to not enjoying her wedding. She suspects many of them had less than perfect experiences but will never admit it, even to themselves. “Even if you have a bad day, there is so much pressure to be positive that you sort of retell the story in your head,” Carter said. Eisenhart, for one, is a big proponent of “naming the thing”: She has no problem admitting that her wedding day wasn’t all that great. She’s been happily married for 10 years. She has a house and a son. In the end, she said, other things turn out to be a whole lot more important. Source: It’s okay to hate your wedding day – The Lily

    Read at 11:38 am, May 15th

  • What I learned after writing Clojure for 424 days, straight

    Source: What I learned after writing Clojure for 424 days, straight

    Read at 08:21 am, May 15th

  • Git ransom campaign incident report—Atlassian Bitbucket, GitHub, GitLab

    By John Swanson. Background and summary of event: Today, Atlassian Bitbucket, GitHub, and GitLab are issuing a joint blog post in a coordinated effort to help educate and inform users of the three platforms on security best practices relating to the recent Git ransom incident. Though there is no evidence Atlassian Bitbucket, GitHub, or GitLab products were compromised in any way, we believe it’s important to help the software development community better understand and collectively take steps to protect against this threat. On Thursday, May 2, the security teams of Atlassian Bitbucket, GitHub, and GitLab learned of a series of user account compromises across all three platforms. These account compromises resulted in a number of public and private repositories being held for ransom by an unknown actor. Each of the teams investigated and assessed that all account compromises were the result of unintentional user credential leakage by users or other third parties, likely on systems external to Bitbucket, GitHub, or GitLab. The security and support teams of all three companies have taken and continue to take steps to notify, protect, and help affected users recover from these events. Further, the security teams of all three companies are also collaborating closely to further investigate these events in the interest of the greater Git community. At this time, we are confident that we understand how the account compromises and subsequent ransom events were conducted. This coordinated blog post will outline the details of the ransom event, provide additional information on how our organizations protect users, and arm users with information on recovering from this event and preventing others.
Event details: On the evening of May 2 (UTC), all three companies began responding to reports that user repositories, both public and private, were being wiped and replaced with a single file containing the following ransom note: “To recover your lost data and avoid leaking it: Send us 0.1 Bitcoin (BTC) to our Bitcoin address 1ES14c7qLb5CYhLMUekctxLgc1FV2Ti9DA and contact us by Email at admin@gitsbackup.com with your Git login and a Proof of Payment. If you are unsure if we have your data, contact us and we will send you a proof. Your code is downloaded and backed up on our servers. If we dont receive your payment in the next 10 Days, we will make your code public or use them otherwise.” Through immediate independent investigations, all three companies observed that user accounts were compromised using legitimate credentials including passwords, app passwords, API keys, and personal access tokens. Subsequently, the bad actor performed command-line Git pushes to repositories accessible to these accounts at very high rates, indicating automated methods. These pushes overwrote the repository contents with the ransom note above and erased the commit history of the remote repository. Incident responders from each of the three companies began collaborating to protect users, share intelligence, and identify the source of the activity. All three companies notified the affected users and temporarily suspended or reset those accounts in order to prevent further malicious activity. During the course of the investigation, we identified a third-party credential dump being hosted by the same hosting provider where the account compromise activity had originated. That credential dump comprised roughly one third of the accounts affected by the ransom campaign. All three companies acted to invalidate the credentials contained in that public dump.
Further investigation showed that continuous scanning for publicly exposed .git/config and other environment files has been and continues to be conducted by the same IP address that conducted the account compromises, as recently as May 10. These files can contain sensitive credentials and personal access tokens if care is not taken to prevent their inclusion, and they should not be publicly accessible in repositories or on web servers. This problem is not a new one. More information on the .git directory and the .git/config file is available here and here. Additional IPs residing on the same hosting provider are also exhibiting similar scanning behavior. We are confident that this activity is the source of at least a portion of the compromised credentials. Known ransom activity ceased on May 2. All known affected users have had credentials reset or revoked, and all known affected users have been notified by all three companies. We recommend all users take steps to protect themselves from such attacks; more information on doing so and on restoring affected repositories is available below. How to protect yourself: Enable multi-factor authentication on your software development platform of choice. Use strong and unique passwords for every service. Strong and unique passwords prevent credential reuse if a third party experiences a breach and leaks credentials. Use a password manager (if approved by your organization) to make this easier. Understand the risks associated with the use of personal access tokens. Personal access tokens, used via Git or the API, circumvent multi-factor authentication. Tokens may have read/write access to repositories depending on scope and should be treated like passwords. If you enter your token into the clone URL when cloning or adding a remote, Git writes it to your `.git/config` file in plain text, which may carry a security risk if the `.git/config` file is publicly exposed.
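As a concrete illustration of the plain-text risk described above, the short sketch below shows how a token pasted into a remote URL ends up readable in `.git/config`. The token value and URL are made-up placeholders, not real credentials.

```shell
# Hypothetical demo: a personal access token embedded in a remote URL
# is written verbatim to .git/config. FAKETOKEN123 and example.com are
# placeholders for illustration only.
demo_dir=$(mktemp -d)
cd "$demo_dir"
git init -q repo
cd repo
git remote add origin "https://x-access-token:FAKETOKEN123@example.com/org/repo.git"
# The token is now stored in plain text in the repository config:
grep FAKETOKEN123 .git/config
```

Anyone who can read that file (for example, through a web server that exposes the `.git` directory) can read the token, which is why the advice here prefers environment variables over tokens embedded in URLs.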
When working with the API, use tokens as environment variables instead of hardcoding them into your programs. Do not expose .git directories and .git/config files containing credentials or tokens in public repositories or on web servers. Information on securing `.git/config` files on popular web servers is available on this site. How to recover an affected repository: If you have a full, current copy of the repository on your computer, you can force push to the current HEAD of your local copy using: git push origin HEAD:master --force. Otherwise, you can still clone the repository and recover what remains from it. Should you require additional assistance recovering your repository contents, please refer to the support resources for Bitbucket, GitHub, and GitLab. All three platforms provide robust multi-factor authentication options: Bitbucket provides the ability for admins to require 2-factor authentication and the ability to restrict access to users on certain IP addresses (IP whitelisting) on its Premium plan. GitHub provides token scanning to notify a variety of service providers if secrets are published to public GitHub repositories. GitHub also provides extensive guidance on preventing unauthorized account access. We encourage all users to enable two-factor authentication. GitLab provides secrets detection in 11.9 as part of the SAST functionality. We also encourage users to enable 2FA and set up SSH keys. Thanks to the security and support teams of Atlassian Bitbucket, GitHub, and GitLab, including the following individuals for their contributions to this investigation and blog post: Mark Adams, Ethan Dodge, Sean McLucas, Elisabeth Nagy, Gary Sackett, Andrew Wurster (Atlassian Bitbucket); Matt Anderson, Howard Draper, Jay Swan, John Swanson (GitHub); Paul Harrison, Anthony Saba, Jan Urbanc, Kathy Wang (GitLab). Source: Git ransom campaign incident report—Atlassian Bitbucket, GitHub, GitLab
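The force-push recovery path described above can be sketched end to end. This is a local simulation with throwaway repositories standing in for the hosted remote, not the platforms' official procedure: a "good" clone pushes history to a bare repository, an "attacker" clone wipes it with an orphan commit, and the intact clone force-pushes the good HEAD back.

```shell
# Sketch: simulate a wiped remote and recover it from an intact local
# clone. All repositories here are local throwaways for demonstration.
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare origin.git                  # stand-in for the hosted remote
git -C origin.git symbolic-ref HEAD refs/heads/master
git clone -q "$tmp/origin.git" work
cd work
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "good history"
git push -q origin HEAD:master

# An attacker force-pushes an orphan commit, erasing the remote history:
cd "$tmp"; git clone -q "$tmp/origin.git" attacker; cd attacker
git checkout -q --orphan wiped
git -c user.name=x -c user.email=x@example.com \
    commit -q --allow-empty -m "ransom note"
git push -qf origin wiped:master

# Recovery: any full local copy can force-push the good HEAD back:
cd "$tmp/work"
git push -qf origin HEAD:master
git -C "$tmp/origin.git" log --format=%s master   # prints: good history
```

The key point, as the report notes, is that an untouched local clone retains the full commit history, so a single forced push restores the remote branch.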

    Read at 08:12 am, May 15th

  • Beware the Technocracy: Andrew Yang Hijacks Socialist Talking Points to Advance a Dystopian Vision of Future America - Democratic Socialists of America (DSA)

    May 14, 2019. By Mike Weinstein. In the 1999 blockbuster The Matrix, while surveying a bleak future landscape where humanity’s sole function is to serve as literal robot batteries, Laurence Fishburne’s character muses that “throughout human history, we have been dependent on machines to survive. Fate, it seems, is not without a sense of irony.” Nor without irony, it seems, is the ascendance of wealthy technocrat and 2020 Democratic presidential candidate Andrew Yang in otherwise left-leaning circles. “Who is that guy?” asked the Washington Post in an article published last week, answering enthusiastically that he is a buzz-worthy phenomenon emerging from a crush of 22 Democratic presidential hopefuls. But the discourse pushed by Yang and his endorsers presents a problematic worldview, one that unfortunately hijacks some traditional socialist agendas and subverts them to our current late-capitalist trajectory. In a February 2019 interview with Fox News’ Neil Cavuto, Yang states plainly that he believes “the entire socialism-capitalism dichotomy is out of date… We need to take the best of both worlds and build an economy that centers around how people are doing.” So what is this non-dichotomous economy Yang has in mind? Yang’s primary platform issue is universal basic income (UBI). Let’s put aside his specific policy agenda surrounding this for a moment and focus on how he arrived at this point. The son of a corporate research physicist and statistician, Yang parlayed his Ivy League law school education into a job in corporate law before bouncing around establishing and running tech startups. His most recent endeavor — Venture for America — was apparently established to reproduce himself, by pushing graduates into Silicon Valley-style startups or their venture capital enablers.
Immersed in this world for his whole existence, Yang eventually became self-aware enough to be troubled by the implications he saw for workers being routinely and systematically replaced with technology (enough to publish several books on the subject). Unfortunately for educated, upper-class technology acolytes like Yang, the perceived policy answers for such societal ills are merely optimization routines. Yang and his ilk view the very real lives of working-class individuals, and the market in which they labor, as part of an algorithm to be fine-tuned. Thus we encounter UBI as Yang’s preferred solution. As his campaign website has it, Yang views what he has termed the ‘Freedom Dividend’ — a guaranteed income of $1,000 per month to all Americans — as a “perpetual boost and support [for] job growth and the economy”. How exactly? In a February 2019 interview with Time magazine, Yang described it as a bulwark against “inequality due in part to globalization and automation.” But herein lies Yang’s assumption: automation of labor is a foregone conclusion. To him, this is simply the way the system should be progressing. Nowhere does Yang address the root of the problem. A member of the working class could ask Yang: Why not intervene in this advancing automation? Where are the solutions to ensure workers are not displaced, or are given ownership of their labor? These questions fall on deaf ears because Yang is already living in that AI-ruled world. He sees UBI as the system tweak that enables worker units to survive long enough to find another subroutine to slot into. Yang’s almost single-issue agenda has compounding ramifications for his vision of the future. By overtly telling us that the replacement of workers by AI is inevitable and unstoppable, Yang surreptitiously tells us about the broader economic policy his presidency would support.
While he admits that it will be deeply damaging to workers, labor replacement with AI and automation would presumably be fast-tracked and prioritized under a Yang administration. In a November 2018 interview on Hill.TV, Yang touted endorsements by “80 techies in Silicon Valley”. He devoted an entire 2017 article for Quartz to spreading the gospel of his Silicon Valley buddies: the trend towards worker displacement through AI and automation is irreversible. He goes on in this same article, seemingly without irony, to note that “we should assume that, for millions of people, it’s not going to work.” Which brings us back to UBI. Without doing anything to bolster the working class’s ability to actually own production, and indeed doing exactly the opposite by empowering capitalists to replace their workers, Yang has simply introduced a system where a guaranteed income would flow right back up to the ruling class. He ignores the fundamental fact that the reason workers require money is to pay for their survival. Unsurprisingly, Yang is an aspirant to the likes of billionaire transhumanist Peter Thiel, going so far as to quote Thiel’s investment firm manager Eric Weinstein that “capitalism has been eaten by technology.” For someone who views socialism vs. capitalism as a false dichotomy, Yang has bought into this absurd dichotomy wholesale: that somehow our current technological advances have already liberated us from the yoke of capital, and are just waiting for the right code to help us course-correct. Despite his protestations, Yang is severely out of touch with working people. There is a reason the majority of the public did not care when, in July 2016, 140+ tech entrepreneurs and executives penned an open letter condemning the Trump campaign. If anything, such a letter potentially empowered the campaign. Yang speaks the language of the ruling class, one of inscrutable economics to uphold the narrative of technology as savior.
His aim is to cloak this in popular socialist ideas such as universal healthcare and income. Yang promotes this package as a self-proclaimed “human-centered economy”. It’s worth noting that the robot antagonists in The Matrix had a human-centered economy, too. Mike Weinstein is an at-large DSA member, STEM Administrator for the School of Arts and Sciences at Southern New Hampshire University, and President’s Doctoral Fellow in Environmental Studies at Antioch University New England. Source: Beware the Technocracy: Andrew Yang Hijacks Socialist Talking Points to Advance a Dystopian Vision of Future America – Democratic Socialists of America (DSA)

    Read at 08:08 am, May 15th

  • Clarence Thomas Shows How Supreme Court Would Overturn Roe v. Wade

    OVERRULED: Clarence Thomas Just Showed How Supreme Court Would Overturn Roe v. Wade. In only 318 words, the arch-conservative laid out a roadmap for overturning decisions permitting abortion, same-sex marriage, and more. In 1992, the Supreme Court looked poised to overturn Roe v. Wade, the landmark case protecting abortion rights. They didn’t, however, and the main reason was respect for precedent—specifically, the legal doctrine known as stare decisis, or “let the decision stand.” Would it do the same today, with over 250 laws meant to test the case pending in states across the country? An otherwise obscure case decided this week, Franchise Tax Board of California v. Hyatt, suggests that a majority of the court would not. Hyatt was, in large part, about stare decisis. A 1979 Supreme Court case, Nevada v. Hall, held that citizens can sue a state in another state’s court. In 1998, Gilbert Hyatt did just that as part of a tax dispute, with tens of millions of dollars at stake. This week, the court overruled its 1979 decision by a vote of 5-4 and tossed out Hyatt’s claim. The split was on ideological lines, with the court’s five conservatives in the majority and four liberals in the minority. Of the 18 pages in the majority opinion written by Justice Clarence Thomas, 17 are about the legal question in the case, which revolves around states’ rights, sovereign immunity, and the Constitution. It’s no surprise that Justice Thomas, in particular, wrote this opinion, as states’ rights have been a focus of his for three decades. What was surprising is that stare decisis warranted only 318 words in Justice Thomas’ opinion, almost like an afterthought, and that Justice Thomas summarily waved away this important judicial doctrine. If this is how the court’s conservatives treat sovereign immunity, how will they treat abortion rights? That’s what Justice Stephen Breyer asked in his dissent.
Unlike the majority opinion, Justice Breyer’s dissent devoted over a quarter of its space to stare decisis. And he concluded, “today’s decision can only cause one to wonder which cases the Court will overrule next.” It’s not hard to guess which cases Justice Breyer was wondering about. Because the same logic applied in Hyatt would overturn not only Roe v. Wade but also the court’s precedent on same-sex marriage, Obergefell v. Hodges. How? Let’s look at Justice Thomas’ reasoning. First, Justice Thomas notes that stare decisis is “not an inexorable command” and is “at its weakest when we interpret the Constitution because our interpretation can be altered only by constitutional amendment.” Now, some would say that stare decisis is at its strongest when fundamental constitutional rights are at issue. But for Justice Thomas, in cases like Roe and Obergefell, stare decisis is at its “weakest.” Thomas then goes on to apply a version of the usual stare decisis test, taking into account “the quality of the decision’s reasoning; its consistency with related decisions; legal developments since the decision; and reliance on the decision.” The first prong is the most important. Here, Thomas finds that the 1979 precedent “failed to account for the historical understanding of state sovereign immunity.” But that’s not the same as the decision’s being of poor quality—it’s an imposition of Justice Thomas’ specific, historically oriented “originalism” philosophy. There are, after all, many ways to evaluate the quality of a decision’s reasoning: its principled analysis of the rights in question, its integration of constitutional norms with contemporary reality, and so on. Here, however, Justice Thomas glosses over that jurisprudential debate and simply concludes that a Supreme Court precedent was badly argued—according to his standards. This is the central question in cases like Roe and Obergefell.
No one denies that abortion was banned for much of our country’s history, and that same-sex marriage would have been anathema to the Founders of the republic. The debate is over whether history gets a vote or a veto. If this same standard is applied to Roe and Obergefell, they would go down in flames. The fourth prong is also critical. People depend on the law being stable. Hyatt, for example, filed his suit exactly as the law provided. Now, the rug is pulled out from under him, and all Justice Thomas says is that “we acknowledge that some plaintiffs, such as Hyatt, have relied on Hall by suing sovereign States. Because of our decision to overrule Hall, Hyatt unfortunately will suffer the loss of two decades of litigation expenses and a final judgment against the Board for its egregious conduct.” Unfortunately! Now multiply Hyatt’s misfortune a millionfold. As Justice Breyer wrote, overturning Supreme Court precedents except in the rarest of cases “is to cause the public to become increasingly uncertain about which cases the Court will overrule and which cases are here to stay.” Arguably, many more people rely on Roe and Obergefell than on Hall, and so the reliance prong would be more important in challenges to those cases. But that cuts both ways. For every woman seeking an abortion, there is someone who believes that abortion is murder. In at least a dozen states, a majority of democratically elected legislators are trying to ban or severely limit the practice. Just last week, Georgia became the fourth state this year (joining Kentucky, Mississippi, and Ohio) to ban abortions after only six weeks of pregnancy, in a direct frontal challenge to Roe. And, a future conservative justice might point out, women seeking abortions could simply travel to other states if need be (if, of course, they can afford it). Because Justice Thomas so readily dismisses the reliance claim in Hyatt, it’s easy to see him doing the same in Roe. Likewise in Obergefell.
For 12 years, we lived in a country in which same-sex marriage was legal in some states and illegal in others; is a return to such a world truly untenable? Anyway, unless marriages like mine were retroactively invalidated, who is really relying on same-sex marriage being legal? Prospective couples could, like victims of rape or incest, simply relocate to a state more favorable to their interests. In short, Justice Thomas’ theory of stare decisis is like a roadmap for how to overrule decisions one disagrees with. First, frame the disagreement as one over “quality” rather than principle. Second, trivialize the ways in which people rely on the law as it stands. And third, with the stroke of a pen, wipe out constitutional rights that people like me mistakenly think we possess. Source: Clarence Thomas Shows How Supreme Court Would Overturn Roe v. Wade

    Read at 07:54 am, May 15th

Day of May 14th, 2019

  • They Were Promised Coding Jobs in Appalachia. Now They Say It Was a Fraud.

    BECKLEY, W.Va. — On a spring day in 2017, Stephanie Frame sat down in her hilltop home deep in the mountain hollows to record a video. She began with the litany of local decline: the vanishing jobs in the coal mines, the shuttering stores, the school that closed down.

    Read at 06:46 pm, May 14th

  • Don't be fooled: Joe Biden is no friend of unions

    In San Francisco there’s a high-end boutique called “Unionmade”. There you will find expensive work jackets and overalls, lit by bare bulbs and displayed on unvarnished metal shelves.

    Read at 06:26 pm, May 14th

  • Simplifying Redux

    Is Redux giving you a headache? Do you feel frustrated navigating Redux’s code base? In this article, we will implement the core functionality of Redux in approximately 30 lines of code! TL;DR —…

    Read at 06:17 pm, May 14th

  • Congratulations to Uber, the Worst Performing IPO in U.S. Stock Market History

    Rideshare unicorn Uber doesn’t do anything small. When it was in the game of raising money, it raised close to $25 billion. When it loses that money—and it does every single quarter—it loses it at astronomical burn rates.

    Read at 03:41 pm, May 14th

  • The fierce New Orleans newspaper war is over. Now comes the (surprising) post-mortem.

    The day of the New Orleans Saints home opener in 2013, fans arrived at the Superdome to find a special edition of the Advocate draped over every seat. The newspaper’s main story was about Saints safety and cult hero Steve Gleason, who has ALS.

    Read at 03:32 pm, May 14th

  • Is There a Connection Between Undocumented Immigrants and Crime?

    A lot of research has shown that there’s no causal connection between immigration and crime in the United States.

    Read at 02:21 pm, May 14th

  • Tech activists protest Palantir's work with ICE

    Tech activists have engaged in a multiday protest of tech software company Palantir's work with Immigration and Customs Enforcement (ICE) following revelations that the company's products helped facilitate the arrests of over 400 immigrants.

    Read at 09:41 am, May 14th

  • Rudy Giuliani Plans Ukraine Trip to Push for Inquiries That Could Help Trump

    WASHINGTON — Rudolph W. Giuliani, President Trump’s personal lawyer, is encouraging Ukraine to wade further into sensitive political issues in the United States, seeking to push the incoming government in Kiev to press ahead with investigations that he hopes will benefit Mr. Trump. Mr.

    Read at 09:39 am, May 14th

  • Trump’s interest in stirring Ukraine investigations sows confusion in Kiev 

    MOSCOW —  As President Trump and his inner circle appear increasingly focused on Ukraine as a potential tripwire for Joe Biden and other Democrats, officials about to take power in Kiev are pushing their own message: Leave us out of it.

    Read at 09:33 am, May 14th

  • Palantir's Github Page Is the New Battleground in the Fight Against ICE

    Tech activists continue to organize and win across many of the industry’s biggest firms—and increasingly the online spaces tech workers gather are becoming battlegrounds in their own right.

    Read at 09:30 am, May 14th

  • Trump Is Pressuring Ukraine to Smear Clinton and Biden

    In 2016, Donald Trump’s campaign learned Russia was working to help him win, and many of its members actively sought to exploit that assistance. In 2020, now possessing the powers of the Executive branch, it’s pressuring a foreign government to assist Trump’s reelection campaign.

    Read at 09:25 am, May 14th

  • Why Don’t White Athletes Understand What’s Wrong With Trump?

    So far, the conversation about the upcoming Boston Red Sox visit to Donald Trump’s White House has centered around the people of color who are skipping the event.

    Read at 09:22 am, May 14th

  • Trump Admin Inflated Iran Intel, U.S. Officials Say

    On Sunday, the National Security Council announced that the U.S. was sending a carrier strike group and a bomber task force to the Persian Gulf in response to “troubling and escalatory” warnings from Iran—an eye-popping move that raised fears of a potential military confrontation with Tehran.

    Read at 09:16 am, May 14th

  • Karen White: how 'manipulative' transgender inmate attacked again

    Transgender politics – like any politics – can be divisive. Yet in the case of Karen White, legally still a man put into a female-only prison, both sides of the anti-trans and pro-transgender rights debate are united in the belief mistakes were made.

    Read at 08:33 am, May 14th

  • House Panel Approves Contempt for Barr After Trump Claims Privilege Over Full Mueller Report

    WASHINGTON — The House Judiciary Committee voted Wednesday to recommend that the House hold Attorney General William P. Barr in contempt of Congress for failing to turn over Robert S.

    Read at 08:28 am, May 14th

  • Uber's IPO documents admit that they need to destroy public transit.

    Cap says, "Don't be a scab!" In case you haven't heard, many Uber and Lyft drivers are on strike today nationwide, so if you use either of those services today, not only are you supporting a vile business model and an appalling corporate culture that is destroying your cities, but you're also cross

    Read at 08:22 am, May 14th

  • Democratic Socialists of America (DSA)

    With the new far-right threat a reality in most European capitals, DSA stands with Left parties against what Bernie Sanders has called the continent’s “authoritarian axis.”

    Read at 08:19 am, May 14th

  • Whose side is Twitter on: misogynists or women in public life?

    What happens in Vegas stays in Vegas. It’s the 21st-century proverb that also reflects our nonchalant attitude to social media.

    Read at 08:10 am, May 14th

  • Mnuchin rejects Democrats’ demand to hand over Trump’s tax returns, all but ensuring legal battle

    Treasury Secretary Steven Mnuchin on Monday told House Democrats he would not furnish President Trump’s tax returns despite their legal request, the latest move by Trump administration officials to shield the president from congressional investigations.

    Read at 08:04 am, May 14th

  • There are many reasons not to impeach Trump. The House should do it anyway

    It’s a constitutional crisis all right. So what happens now? An impeachment inquiry in the House won’t send Trump packing before election day 2020 because Senate Republicans won’t convict him of impeachment.

    Read at 08:01 am, May 14th

  • Uber and Lyft drivers deserve a better strike

    This week, labor news was dominated by the story of the Uber / Lyft strike. On Wednesday, rideshare drivers were meant to turn off the apps for a period of a few hours, or even the entire day. Riders were meant to refrain from using the apps to hail rides.

    Read at 07:57 am, May 14th

  • Why the Uber Strike Was a Triumph

    Did the Uber strike “work”? Following Wednesday’s action by Uber and Lyft drivers across the world, some commentators wondered what the workers were trying to achieve, since they could not possibly stop Uber’s debut as a publicly traded company on Friday.

    Read at 07:38 am, May 14th

  • ‘Black Leadership Matters’: Why a Racial Rift Is Growing Among N.Y. Democrats - The New York Times

    A progressive push, fueled by newly energized activists, has alienated the old guard of black leaders, igniting an internal battle with racial overtones.

    As big-dollar political donors recently gathered at a TriBeCa wine bar to honor one of the country’s most powerful black state lawmakers, protesters converged outside. Waving signs and chanting, shouting to be heard in the bar's darkened interior, they demanded an end to big money in politics. They were Democratic activists — and their target was one of their own: Carl E. Heastie, the Democratic speaker of the New York State Assembly. But they also had to shout over the sound of counterprotesters: an equally sized group of black community leaders, who had assembled to support the speaker and denounce the activists.

    The progressive movement in New York has been credited with overturning politics in Albany: The Legislature is now under Democratic control for only the third time in 50 years. But the progressive push, fueled by many newly energized activists, has also alienated some of the party’s old guard of black leaders. Black community leaders have leveled accusations of paternalism. Black lawmakers have warned of a gulf between activists’ priorities and those of their constituents. Even black activists who are part of the insurgent wing have cautioned of overreach by white progressives.

    Mr. Heastie, the first black man to hold the post of speaker of the New York State Assembly, has been accused by some Democratic activists of being slow to embrace a more progressive agenda. “People talk about how black lives matter,” said Charlie King, a longtime Democratic operative and a former senior campaign adviser to Gov. Andrew M. Cuomo. “Well, black leadership matters. If white progressives can’t respect that, they will be made to respect that.”

    Since President Trump’s election in 2016, Democrats nationwide have grappled with whether a new wave of progressive energy — fueled in large part by young people and well-off white suburban women — represents black voters, the longtime pillars of the Democratic Party. In New York, the debate has taken on particular weight. Black Democrats now lead both houses of the State Legislature, after years of Republican opposition. In the Assembly especially, black lawmakers have risen under Mr. Heastie’s leadership, as have those with ties to the Bronx County political machine that Mr. Heastie once led.

    Some of those freshly cemented power brokers are now bristling at the suggestion by newly prominent activists and elected officials that they have not been progressive enough on issues like rent regulation, new taxes on the ultrawealthy and campaign finance reform. They call such criticisms misplaced and racially charged, and they suggest that the activists do not represent the communities they claim to speak for. “What the driving force of this movement cares about isn’t what communities of color care about,” said State Senator Brian Benjamin, a black Democrat who represents Harlem.

    The issue came to a head outside Mr. Heastie’s fund-raiser last month, when progressive activist groups like Indivisible and Rise and Resist, which formed after the 2016 presidential election, organized a protest. Black leaders arrived to counterprotest. The dueling groups lined up on opposite sides of a sidewalk: the protesting activists, many of them white, facing the counterprotesters, all black. The activists “don’t look like us, don’t live with us,” said the Rev. Troy DeCohen, a pastor who leads the United Black Clergy of Westchester. “What they’re trying to do is co-opt what historically has been rooted in the black community,” he added, referring to the black community’s history of social justice activism.

    The new groups draw strong support in primarily white neighborhoods in Manhattan, Brooklyn and Westchester. Several of the protesters at Mr. Heastie’s fund-raiser lived in the West Village. Some of the candidates backed by the new groups last year, though diverse in race and gender, won significantly more votes in gentrifying areas of Brooklyn and Queens than in predominantly black or brown neighborhoods. Their rivals had accused them of siding with gentrifiers over poorer communities. But the groups also include members from diverse demographics; local chapters dot the Bronx, Brooklyn and Queens. They support racial justice priorities such as criminal justice reform and more school funding. They also work closely with unions and longer-standing activist groups that are well known for representing — and being led by — working-class people of color.

    “I was deeply offended by the suggestion that it was only white progressives,” said Jawanza Williams, the lead organizer for VOCAL-NY, which focuses on issues like criminal justice and homelessness. Mr. Williams, who is black and formerly homeless, helped lead the protest outside of Mr. Heastie’s fund-raiser. “It erases the struggle of black organizers who are progressive.”

    The protesters at the fund-raiser emphasized that their criticism was not of the Assembly speaker as a black man, but of the role they said he played in delaying campaign finance reform. “What struck a chord was the hypocrisy,” Livvie Mann, of the group Rise and Resist, said of Mr. Heastie. Ms. Mann, who is white, organized the protest. “Days after the budget, he does a huge fund-raiser, and it felt like a slap in the face.” Kirsten John Foy, president of the activism group Arc of Justice and one of the organizers of the counterprotest, said he agreed with the need to get big money out of politics. But he took issue with the protesters’ tactics and their lack of diversity. Mr. DeCohen said black members of the activist groups had been “brainwashed.” He added, “We always call them the Uncle Toms.” Jason Walker, VOCAL-NY’s campaign director, replied that he was surprised to “see the black faith leaders take the playbook” of racial division. “As a black millennial and a progressive, I’m looking for my leaders to set up the next generation to win,” he said.

    Mr. Heastie, in brief comments to reporters as he entered the fund-raiser, brushed off the criticism. The political action committee for which he was fund-raising gave $50,000 last year to help elect more Democrats to the Senate. “History will show that the Democratic Assembly has always been the progressive champions,” he said. “That’s what people should be looking at, on the actions that we take.”

    The tension arrives at a key moment in New York history: Along with Mr. Heastie’s historic ascent to the speakership, Senator Andrea Stewart-Cousins this year became the first black woman to lead the State Senate. Democrats had seized control of both chambers of the Legislature on a promise to quickly enact sweeping change. But the party has disagreed about what changes, when, and in what order. The $175 billion state budget passed on April 1 included major progressive victories, including limiting cash bail and releasing money for the city’s public housing system. The black leaders said those achievements should be celebrated, and suggested that campaign finance reform was a lower-priority issue. “I’ve never had one person in Central Harlem and East Harlem say, ‘Brian Benjamin, go to Albany and get me public financing,’” said Mr. Benjamin, the state senator, though he said he supports the idea. “They want affordable housing, money for education and criminal justice reform.”

    But proponents of public financing said getting big money out of politics would make other progressive goals possible. Ricky Silver, a lead organizer of the group Empire State Indivisible, called public financing the “tip of the arrowhead as it relates to all progressive issues.” Studies have shown that donor diversity increases in public matching systems. The Rev. Jesse Jackson wrote a recent opinion piece calling the policy a potential “game changer.” White activists also defended their right to criticize Mr. Heastie, and insisted that their protests were not racially motivated. “He, as the leader of the Assembly, represents the entire state,” said Paul Rabin, a member of the group Rise and Resist.

    Still, several black leaders who were not at the protest said that while they agreed with the activist groups’ goals, the groups should be conscious of how their actions might appear to observers. L. Joy Williams, the president of the Brooklyn N.A.A.C.P., said “optics and public perception” of the issues activists were fighting for could sidetrack their cause, rather than advance it. Jamaal T. Bailey, a state senator who represents the Bronx and Westchester and considers Mr. Heastie his political mentor, said Democrats should focus on party unity, citing lyrics from the Jay-Z song “Family Feud.” “Nobody wins when the family feuds,” he said. “What’s better than one Democratic majority? Two.”

    Even black activists who have been heavily involved with the new activist groups warned that certain voices should be careful not to drown out others. Sherese Jackson, who until recently was the only nonwhite board member of Indivisible Nation BK, an activist group in Brooklyn formed after 2016, said the group often discusses how to increase diversity. But the discussions had yet to turn into real change. “It is definitely a struggle as a woman of color,” she said, “feeling 100 percent safe in a mostly white, progressive world.” Events such as the protest against Mr. Heastie, even if well intentioned, could further deter nonwhite people from joining, she said. “The visual alone — I could see how that could come across to people, and it could be a turnoff,” Ms. Jackson said. “This does not help the trust factor.”

    Vivian Wang is a reporter for the Metro Desk, covering New York State politics in Albany. A version of this article appears in print on Page A1 of the New York edition with the headline: Progressive Push Exposes Racial Rift Among Albany Democrats.

    Source: ‘Black Leadership Matters’: Why a Racial Rift Is Growing Among N.Y. Democrats – The New York Times

    Read at 02:07 pm, May 14th

Day of May 13th, 2019

  • The tyranny of ideas

    China Miéville’s novel Embassytown describes a world in which an alien species, the Ariekei, becomes enthralled by a human’s ability to speak their language.

    Read at 11:54 pm, May 13th

  • Part-time software developer jobs don’t exist, right?

    If you’re tired of working long hours, a part-time—or even just four-days-a-week—programming job seems appealing. You’ll still get paid, you’ll still hopefully enjoy your job—but you’ll also have more time for other things in your life.

    Read at 11:42 pm, May 13th

  • The Unholy Alliance of Trans-Exclusionary Radical Feminists and the Right Wing

    In April, the House Judiciary Committee held its first hearing on the latest iteration of the Equality Act, federal legislation that would enshrine sexual orientation and gender identity as protected classes under federal civil rights law.

    Read at 07:47 pm, May 13th

  • How Often do Preferential Rents Rise? Rarely, But More Than They Used To

    New York is on the verge of big changes to its rent regulations, with the state legislature likely to vote on reforms before the June 15 sunset date for the current rent law. One topic of profound disagreement between tenant advocates and property owners is preferential rents.

    Read at 06:39 pm, May 13th

  • Steve Harvey Is Stupid

    Some people know things. Many people, however, are able to eke out a more-than-modest living by convincing people that they know things. They aren’t blessed with the gift of knowledge as much as they are blessed with the gift of charismatic sophistry. Politicians call it “spin.

    Read at 06:36 pm, May 13th

  • This Is What It Sounds Like Hiding In A Dark Classroom During A School Shooting

    "Attention please. Lockdown. Locks, lights, out of sight. Attention please. Lockdown. Locks, lights, out of sight. Attention please. Lockdown. Locks, lights, out of sight. " "Attention please. Lockdown. Locks, lights, out of sight.

    Read at 06:31 pm, May 13th

  • “Am I a bad person?” Why one mom didn’t take her kid to the ER — even after poison control said to.

    Two years ago, 36-year-old Lindsay Clark was facing a terrible decision. Her 2-year-old daughter Lily had gotten into a small bottle of the anti-nausea drug Dramamine.

    Read at 06:29 pm, May 13th

  • We Made A Free Documentary on How to Start a Worker Co-op

    The worker cooperative movement is flourishing. That’s because there’s a growing understanding that the economy and the businesses we work in would fare better if they were owned and run by the people. But starting a worker co-op is difficult, and the process can be daunting.

    Read at 03:31 pm, May 13th

  • Burr holds firm despite GOP anger over Don Jr. subpoena

    Richard Burr faces intense pressure from Republicans to drop his subpoena of President Donald Trump’s eldest son and quickly wrap up the Senate Intelligence Committee’s Russia probe.

    Read at 09:38 am, May 13th

  • Advanced Custom Fields 5.8.0 Introduces ACF Blocks: A PHP Framework for Creating Gutenberg Blocks

    After six months in development, Advanced Custom Fields 5.8.0 was released yesterday with a new PHP-based framework for developing custom Gutenberg block types.

    Read at 09:34 am, May 13th

  • New ECMAScript Modules in Node v12

    If you’re familiar with popular JavaScript frontend frameworks like React, Angular, etc., then the concept of ECMAScript modules won’t be entirely new to you. ES Modules have the import and export syntax we often see in frontend frameworks. Node uses CommonJS, which relies on require() for imports.

    Read at 09:29 am, May 13th

  • Trans-inclusive Design

    Late one night a few years ago, a panicked professor emailed me: “My transgender student’s legal name is showing on our online discussion board. How can I keep him from being outed to his classmates?” Short story: we couldn’t. The professor created an offline workaround with the student.

    Read at 09:23 am, May 13th

  • Writing Testable Code

    Many developers have a hate relationship with testing. However, I believe the main cause of that is code that is highly-coupled and difficult to test. This post states some principles and guidelines…

    Read at 09:16 am, May 13th

  • Why I Love useReducer

    I didn't realize until recently how much I loved the React Hook useReducer.

    Read at 09:09 am, May 13th

  • WordPress 5.2 “Jaco” Released, Includes Fatal PHP Error Protection and A Recovery Mode

    WordPress 5.2 “Jaco”, named after bassist Jaco Pastorius, is now available for download. Normally, I’d start listing new features, but I’m going to do something a little different this time.

    Read at 09:06 am, May 13th

  • Forget Technical Debt — Here's How to Build Technical Wealth

    Andrea Goulet and her business partner sat in her living room, casually reviewing their strategic plan, when an episode of This Old House came on television. It was one of those moments where ideas collide to create something new.

    Read at 09:03 am, May 13th

  • Theme Review Team Leadership Implements Controversial Changes to Trusted Authors Program, Requiring Theme Reviews in Exchange for Making Themes Live

    The WordPress Theme Review team has implemented a controversial change to its Trusted Authors Program that puts a hard requirement on participants to join the theme review team and perform a minimum number of reviews in order to continue having their own themes fast tracked through the review process.

    Read at 08:21 am, May 13th

  • Faster and more feature-rich internationalization APIs · V8

    The ECMAScript Internationalization API Specification (ECMA-402, or Intl) provides key locale-specific functionality such as date formatting, number formatting, plural form selection, and collation.
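    A quick sample of the ECMA-402 functionality the post refers to, using the standard built-in `Intl` object (ships with Node and modern browsers, no imports needed):

```javascript
// Locale-aware number formatting.
const nf = new Intl.NumberFormat('en-US');
console.log(nf.format(1234567.89)); // "1,234,567.89"

// Plural form selection, e.g. for choosing "1 item" vs "2 items".
const pr = new Intl.PluralRules('en-US');
console.log(pr.select(1)); // "one"
console.log(pr.select(2)); // "other"

// Collation: locale-aware sorting (a plain Array.sort by code point
// would put "ä" after "z").
const collator = new Intl.Collator('en');
console.log(['z', 'ä', 'a'].sort(collator.compare)); // [ 'a', 'ä', 'z' ]

// Locale-aware date formatting.
const df = new Intl.DateTimeFormat('en-US', { month: 'long', day: 'numeric' });
console.log(df.format(new Date(2019, 4, 13))); // "May 13"
```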

    Read at 08:14 am, May 13th

  • A year with Spectre: a V8 perspective

    On January 3, 2018, Google Project Zero and others disclosed the first three of a new class of vulnerabilities that affect CPUs that perform speculative execution, dubbed Spectre and Meltdown.

    Read at 08:10 am, May 13th

  • Tabor Theme Now Available as a Free Gatsby Theme for WordPress

    Gatsby WordPress Themes, a project launched earlier this year by a group of collaborators, has just released its second free theme. The team is led by Gatsby and GraphQL aficionados Zac Gordon, Jason Bahl, Muhammad Muhsin, Hussain Thajutheen, and Alexandra Spalato.

    Read at 08:01 am, May 13th

  • Always useMemo your context value

    So when <App /> re-renders, it'll re-render all of the other components. Most of the time this isn't a problem, because <App /> shouldn't re-render very often. But you could imagine this being in any part of the tree of our application.
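    The reference-identity issue behind this advice can be shown without React at all. The `makeMemo` helper below is a hypothetical, single-slot stand-in for `useMemo`'s caching (not React's actual implementation), just to show why a memoized context value keeps consumers from re-rendering:

```javascript
// An object literal created during render is a brand-new reference every
// time, so context consumers that compare by reference think the value
// changed. Memoizing preserves the reference while the inputs are stable.
function makeMemo() {
  let lastDeps = null;
  let lastValue;
  return (factory, deps) => {
    const unchanged =
      lastDeps !== null &&
      deps.length === lastDeps.length &&
      deps.every((d, i) => d === lastDeps[i]);
    if (!unchanged) {
      lastValue = factory(); // recompute only when a dependency changed
      lastDeps = deps;
    }
    return lastValue;
  };
}

const useMemoLike = makeMemo();
const state = { count: 0 };

// Without memoization: each "render" builds a fresh object.
const plain1 = { state };
const plain2 = { state };
console.log(plain1 === plain2); // false -> consumers re-render every time

// With memoization: the same reference survives repeated "renders"
// while the dependencies are unchanged.
const memo1 = useMemoLike(() => ({ state }), [state]);
const memo2 = useMemoLike(() => ({ state }), [state]);
console.log(memo1 === memo2); // true -> consumers can skip re-rendering
```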

    Read at 07:59 am, May 13th

  • Venezuelan Embassy Protection Collective Statement – INTERNATIONALIST 360°

    The Embassy Protection Collective is a group of activists and journalists living in the Venezuelan embassy in Washington, DC at the invitation of the Venezuelan government. The following is a statement released by the Embassy Protection Collective to MintPress News and other media:

    This is the 34th day of our living in the Venezuelan embassy in Washington, DC. We are prepared to stay another 34 days, or however long is needed to resolve the embassy dispute in a peaceful way consistent with international law. This memo is being sent to the US and Venezuela as well as members of our Collective and allies. We are encouraging people to publish this memo as a transparent process is needed to prevent the US from making a unilateral decision that could impact the security of embassies around the world and lead to military conflict.

    There are two ways to resolve the issues around the Venezuelan embassy in DC, which we will explain. Before doing so, we reiterate that our collective is one of independent people and organizations not affiliated with any government. While we are all US citizens, we are not agents of the United States. While we are here with permission of the Venezuelan government, we are not their agents or representatives. We are here in the embassy lawfully. We are breaking no laws. We did not unlawfully enter and we are not trespassing.

    Exiting with a Protecting Power Agreement: The exit from the embassy that best resolves issues to the benefit of the United States and Venezuela is a mutual Protecting Power Agreement. The United States wants a Protecting Power for its embassy in Caracas. Venezuela wants a Protecting Power for its embassy in DC. Such agreements are not uncommon when diplomatic relations are severed. A Protecting Power Agreement would avoid a military conflict that could lead to war. A war in Venezuela would be catastrophic for Venezuela, the United States, and for the region. It would lead to lives lost and mass migration from the chaos and conflict of war. It would cost the United States trillions of dollars and become a quagmire involving allied countries around the world.

    We are serving as interim protectors in the hope that the two nations can negotiate this resolution. If this occurs we will take the banners off the building, pack our materials, and leave voluntarily. The electricity could be turned on and we will drive out. We suggest a video walk-through with embassy officials to show that the Embassy Protection Collective did not damage the building. The only damage to the building has been inflicted by coup supporters in the course of their unprosecuted break-ins.

    The United States violates the Vienna Convention, makes an illegal eviction and unlawful arrests: This approach will violate international law and is fraught with risks. The United States would have to cut the chains in the front door put up by embassy staff and violate the embassy. We have put up barriers there and at other entrances to protect us from constant break-ins and threats from the trespassers whom the police are permitting outside the embassy. The police’s failure to protect the embassy and the US citizens inside has forced us to take these actions.

    The Embassy Protectors will not barricade ourselves, or hide in the embassy in the event of an unlawful entry by police. We will gather together and peacefully assert our rights to remain in the building and uphold international law. Any order to vacate based on a request by coup conspirators that lack governing authority will not be a lawful order. The coup has failed multiple times in Venezuela. The elected government is recognized by the Venezuelan courts under Venezuelan law and by the United Nations under international law. An order by the US-appointed coup plotters would not be legal. Such an entry would put embassies around the world and in the United States at risk. We are concerned about US embassies and personnel around the world if the Vienna Convention is violated at this embassy. It would set a dangerous precedent that would likely be used against US embassies.

    If an illegal eviction and unlawful arrests are made, we will hold all decision-makers in the chain of command and all officers who enforce unlawful orders accountable. If there is a notice that we are trespassing and need to vacate the premises, please provide it to our attorney Mara Verhayden-Hilliard, copied on this memo. We have taken care of this embassy and request a video tour of the building before any arrests.

    We hope a wise and calm solution to this issue can be achieved so escalation of this conflict can be avoided. There is no need for the United States and Venezuela to be enemies. Resolving this embassy dispute diplomatically should lead to negotiations over other issues between the nations. The Embassy Protection Collective, May 13, 2019

    Source: Venezuelan Embassy Protection Collective Statement – INTERNATIONALIST 360°

    Read at 10:52 pm, May 13th

  • WhatsApp voice calls used to inject Israeli spyware on phones | Financial Times

    NSO's Pegasus software can allegedly penetrate any iPhone via one simple missed call on WhatsApp.

    A vulnerability in the messaging app WhatsApp has allowed attackers to inject commercial Israeli spyware on to phones, the company and a spyware technology dealer said. WhatsApp, which is used by 1.5bn people worldwide, discovered in early May that attackers were able to install surveillance software on to both iPhones and Android phones by ringing up targets using the app’s phone call function. The malicious code, developed by the secretive Israeli company NSO Group, could be transmitted even if users did not answer their phones, and the calls often disappeared from call logs, said the spyware dealer, who was recently briefed on the WhatsApp hack.

    WhatsApp is too early into its own investigations of the vulnerability to estimate how many phones were targeted using this method, a person familiar with the issue said. As late as Sunday, as WhatsApp engineers raced to close the loophole, a UK-based human rights lawyer’s phone was targeted using the same method. Researchers at the University of Toronto’s Citizen Lab said they believed that the spyware attack on Sunday was linked to the same vulnerability that WhatsApp was trying to patch.

    NSO’s flagship product is Pegasus, a program that can turn on a phone’s microphone and camera, trawl through emails and messages and collect location data. NSO advertises its products to Middle Eastern and Western intelligence agencies, and says Pegasus is intended for governments to fight terrorism and crime. NSO was recently valued at $1bn in a leveraged buyout that involved the UK private equity fund Novalpina Capital. In the past, human rights campaigners in the Middle East have received text messages over WhatsApp that contained links that would download Pegasus to their phones.

    WhatsApp said that teams of engineers had worked around the clock in San Francisco and London to close the vulnerability. It began rolling out a fix to its servers on Friday last week, WhatsApp said, and issued a patch for customers on Monday. “This attack has all the hallmarks of a private company known to work with governments to deliver spyware that reportedly takes over the functions of mobile phone operating systems,” the company said. “We have briefed a number of human rights organisations to share the information we can, and to work with them to notify civil society.” WhatsApp disclosed the issue to the US Department of Justice last week, according to a person familiar with the matter. A justice department spokesman declined to comment.

    NSO said it had carefully vetted customers and investigated any abuse. Asked about the WhatsApp attacks, NSO said it was investigating the issue. “Under no circumstances would NSO be involved in the operating or identifying of targets of its technology, which is solely operated by intelligence and law enforcement agencies,” the company said. “NSO would not, or could not, use its technology in its own right to target any person or organisation, including this individual [the UK lawyer].”

    The UK lawyer, who declined to be identified, has helped a group of Mexican journalists and government critics, and a Saudi dissident living in Canada, sue NSO in Israel, alleging that the company shares liability for any abuse of its software by clients. John Scott-Railton, a senior researcher at the University of Toronto’s Citizen Lab, said the attack had failed. “We had a strong suspicion that the person’s phone was being targeted, so we observed the suspected attack, and confirmed that it did not result in infection,” said Mr Scott-Railton. “We believe that the measures that WhatsApp put in place in the last several days prevented the attacks from being successful.”

    Other lawyers working on the cases have been approached by people pretending to be potential clients or donors, who then try to obtain information about the ongoing lawsuits, the Associated Press reported in February. “It's upsetting but not surprising that my team has been targeted with the very technology that we are raising concerns about in our lawsuits,” said Alaa Mahajne, a Jerusalem-based lawyer who is handling lawsuits from the Mexican and Saudi citizens. “This desperate reaction to hamper our work and silence us, itself shows how urgent the lawsuits are, as we can see that the abuses are continuing.”

    On Tuesday, NSO will also face a legal challenge to its ability to export its software, which is regulated by the Israeli ministry of defence. Amnesty International, which identified an attempt to hack into the phone of one of its researchers, is backing a group of Israeli citizens and civil rights groups in a filing in Tel Aviv asking the ministry of defence to cancel NSO’s export licence. “NSO Group sells its products to governments who are known for outrageous human rights abuses, giving them the tools to track activists and critics. The attack on Amnesty International was the final straw,” said Danna Ingleton, deputy director of Amnesty Tech. “The Israeli ministry of defence has ignored mounting evidence linking NSO Group to attacks on human rights defenders. As long as products like Pegasus are marketed without proper control and oversight, the rights and safety of Amnesty International’s staff and that of other activists, journalists and dissidents around the world is at risk.”

    Additional reporting by Kadhim Shubber in Washington. Copyright The Financial Times Limited 2019. All rights reserved.

    Source: WhatsApp voice calls used to inject Israeli spyware on phones | Financial Times

    Read at 09:40 pm, May 13th

  • A Call For Web Developers To Deprecate Their CSS – CSS Perverts – Medium

    Source: A Call For Web Developers To Deprecate Their CSS – CSS Perverts – Medium

    Read at 02:58 pm, May 13th

  • Cuomo defends MTA’s use of police in OT fraud crackdown - New York Daily News

    The deployment of cops to LIRR workplaces last week outraged union officials and transit workers. Soon after the move became public, Metropolitan Transportation Authority officials rescinded the decision. Cuomo told the Daily News he did not make the decision to deploy the police, but said random workplace checks were necessary after reports showed the MTA does not have an effective time and attendance management system.

    Source: Cuomo defends MTA’s use of police in OT fraud crackdown – New York Daily News

    Read at 02:53 pm, May 13th

  • Union members lash out at emergency meeting for LIRR OT pay

    NEW YORK - Union members lashed out at the MTA Friday over efforts to keep overtime pay for LIRR employees in check. The MTA's labor force pushed back at an emergency meeting in Manhattan against accusations from the agency's leadership of workers fraudulently collecting overtime pay. Both sides came together to address the report by a fiscal watchdog group that revealed the LIRR paid out $224 million in overtime pay last year - a nearly $50 million increase from 2017.

    Union bosses said it is unfair for MTA leadership to paint all workers with a broad brush. "You're assigning the overtime. Is there an allegation that somehow union-represented workers at the MTA have made their own OT? Conjured up their own overtime? Given themselves OT?" asks John Samuelsen, president of the Transport Workers Union. "It all comes from the bosses. This is not us. This is a management problem."

    MORE: MTA uses its officers to police excessive LIRR overtime | Watchdog: LIRR worker made over $340K in overtime in 2018

    Samuelsen added that he wouldn't be surprised if some workers refuse overtime in protest against the MTA's handling of the controversy. "I would be shocked if this doesn't organically resonate across the subway tracks, the bus system, the railroad tracks, in a dip in productivity to add to all the problems of mismanagement that you're already suffering," says Samuelsen.

    MTA board chairman Patrick Foye announced five workers have either already been disciplined or will be sanctioned for overtime abuses. He also says the MTA will no longer have its police officers monitor overtime practices at LIRR facilities. As News 12 reported, union bosses were outraged when the MTA began policing overtime with its police force, calling it insulting and irresponsible. An MTA board member also called on the agency to hire a former prosecutor to conduct an independent investigation of overtime abuses at the MTA. The overtime controversy comes as the MTA prepares to enter contract negotiations with most of its unions.

    Source: Union members lash out at emergency meeting for LIRR OT pay

    Read at 01:49 pm, May 13th

  • How To Decide Which MetroCard To Buy With The Recent Fare Hike: Gothamist

    While we can't personally make the subway run more efficiently, we can figure out how to make your subway experience as cost-effective as possible. And with the MTA recently raising subway fares again, now's a good time to look at the new MetroCard math that comes with the new fare structure. The 30-day MetroCard has increased from $121 to $127, the 7-day has gone from $32 to $33, and the fare-purchase bonuses have been eliminated. The one upside is that the fares got a lot simpler—you no longer have to divide e=mc2 by the negative square root of pi to figure out how much money to put on your card, and you won't be left with a frustrating $1.19 bonus anymore. Now the only decision is whether to buy a 30-day MetroCard, a 7-day MetroCard, or individual swipes.

    Let's break the options down. The 30-day MetroCard's cost of $127 is the equivalent of ~46.2 single swipes, so if you only ride the subway 46 times in 30 days, single swipes are a better deal, but if you ride 47 times, you'll want the 30-day card. To figure out whether you'll ride 47 times in 30 days, consider a few things:

    - If you use your MetroCard to commute to work and buy it on a Saturday, only 20 of the next 30 days will be business days, which probably means you'll use it less. Buy it Monday through Thursday and you'll have the card for 22 business days. Unfortunately, once the card runs out, you'll have to buy a new one on whatever day that falls. So if you plan to buy a 30-day MetroCard several months in a row, buy the first card on a Monday to postpone having to buy one on a Saturday for as long as possible.

    - Note how many major holidays you get off that fall within the 30-day period. Between December 24th and January 23rd, for example, there are three, which could mean six fewer commutes. The 30-day MetroCard would probably not be a good deal during such a period.

    - If you're not using your MetroCard at least 1 or 2 times per week outside of your commute on average, then the 30-day MetroCard is not for you.

    And now, a twist: the names "30-day MetroCard" and "7-day MetroCard" are not always accurate. More often than not, they are 6.5-day and 29.5-day MetroCards. That's because if you buy a 7-day MetroCard at 11:59 p.m. on a Sunday, it does NOT expire at 11:59 p.m. the following Sunday, but rather at midnight—that's 6 days and 1 minute after you bought it! So you most definitely do not want to buy a 7-day or 30-day MetroCard at night, especially after work when you've already used the card twice. Buy the card early in the day, before you've used it.

    About that 7-day MetroCard: it's probably only worth considering if you're a tourist or doing a city-wide scavenger hunt. Fortunately, the math here is easy: the value of a 7-day MetroCard is exactly 12 rides, so you only save money if you use the card 13 times during those 7 days. That's why the 7-day makes little sense as a commuter card, since 13 rides represent 6.5 commutes in 7 days. It would only be worth it if you worked 7 days a week, and even then the 30-day would probably be a better deal unless the job were very short-term.

    To recap: commuters should buy the 30-day when riding 47 times per month, which is more likely to happen if you buy it in the morning, on a Monday, not during holiday season, and when you'll also be riding on weekends occasionally. Correction: The 30-day MetroCard clock starts ticking at your first swipe, not when you first purchase it. We the Commuters is a weekly newsletter about transportation from WNYC and Gothamist. Source: How To Decide Which MetroCard To Buy With The Recent Fare Hike: Gothamist
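    The break-even arithmetic above can be sketched in a few lines. This is a hypothetical helper, not anything from the article — it just encodes the fares the article cites ($2.75 per swipe, $33 for the 7-day, $127 for the 30-day) and finds the smallest ride count at which an unlimited card beats single swipes.

```python
import math

# Post-hike fares cited in the article (assumed constants).
BASE_FARE = 2.75
SEVEN_DAY = 33.00
THIRTY_DAY = 127.00

def break_even_rides(card_price: float, base_fare: float = BASE_FARE) -> int:
    """Smallest number of rides at which the unlimited card costs
    strictly less than paying per swipe."""
    return math.floor(card_price / base_fare) + 1

print(break_even_rides(SEVEN_DAY))   # 7-day card: matches the article's 13 rides
print(break_even_rides(THIRTY_DAY))  # 30-day card: matches the article's 47 rides
```

    Note the strict inequality: $33 is exactly 12 swipes, so at 12 rides you merely break even and only the 13th ride saves money, which is why the function adds 1 after the floor.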

    Read at 01:32 pm, May 13th

  • AddyOsmani.com - Paint Holding - reducing the flash of white on same-origin navigations

    For a while now, Chrome has eagerly cleared the screen when transitioning to a new page, to give users the reassurance that the page is loading. This "flash of white" is the brief moment during which the browser shows a white paint while loading a page. It can be distracting between navigations, especially when the page is reasonably fast in reaching a more interesting state. And for pages that load lightning fast, this approach is actually detrimental to the user experience. In the following animation, you see an example of what this looks like today. We are big fans of this website, and it kills us that their quality experience has a flash of white, so we wanted to fix it.

    We did so with a new behavior that we're calling Paint Holding, where the browser waits briefly before starting to paint, especially if the page is fast enough. This ensures that the page renders as a whole, delivering a truly instant experience. The way this works is that we defer compositor commits until a given page load signal (PLS) is reached (e.g. first contentful paint, or a fixed timeout). We distinguish between main-thread rendering work and the commit to the impl thread (only the latter is deferred). Waiting until a PLS occurs reduces the likelihood of flashes of white or solid color. Our goal with this work was that navigations in Chrome between two pages of the same origin should have a seamless, fast default experience, with no flash of white or solid-color background between old and new content.

    Try Paint Holding in Chrome Canary (Chrome 76) and let us know what you think. Developers shouldn't have to worry about making any modifications to their pages to take advantage of it. This post was originally published on WebFundamentals. Source: AddyOsmani.com – Paint Holding – reducing the flash of white on same-origin navigations

    Read at 07:50 am, May 13th