James Reads


Day of Jun 8th, 2019

  • The Making of a YouTube Radical

    Caleb Cain was a college dropout looking for direction. He turned to YouTube. Soon, he was pulled into a far-right universe, watching thousands of videos filled with conspiracy theories, misogyny and racism.

    Read at 12:47 pm, Jun 8th

  • Born to be Militaristic

    When I arrived in Vietnam in 1970, members of my platoon had lost a friend to a booby trap just two weeks before. I was smoking dope in a guard bunker when a group of them went into the outskirts of Bongson and gunned down an old woman who was hoeing in her garden.

    Read at 12:10 pm, Jun 8th

  • Climate, Solidarity, and Resistance

    The newly empowered U.S. Left needs a foreign policy. But what should it be?

    Read at 12:08 pm, Jun 8th

  • Yesterday, Twitter’s main character was a union-buster

    Yesterday, University of Chicago professor Agnes Callard tweeted that she had been crossing picket lines all week set up by graduate students at the institution, asking “Am I in the wrong?” As the expression goes, every day on Twitter there is one main character; the goal is never to be it.

    Read at 11:46 am, Jun 8th

  • 'Surveillance capitalism': critic urges Toronto to abandon smart city project

    The 12-acre Quayside project, a partnership between Google’s Sidewalk Labs and the city of Toronto, has come under increasing scrutiny amid concerns over privacy and data harvesting.

    Read at 11:35 am, Jun 8th

  • Python: range is not an iterator!

    After my Loop Better talk at PyGotham 2017 someone asked me a great question: iterators are lazy iterables and range is a lazy iterable in Python 3, so is range an iterator? Unfortunately, I don’t remember the name of the person who asked me this question.
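The article's punchline is easy to check at a REPL. A small sketch (Python 3, not taken from the article itself) of the distinction:

```python
# range is a lazy iterable, but not an iterator: it has no __next__
# method, and it isn't consumed by iterating over it.
numbers = range(3)

print(hasattr(numbers, '__next__'))   # False
print(list(numbers), list(numbers))   # [0, 1, 2] [0, 1, 2] -- reusable

# iter() returns a fresh range_iterator; *that* object is single-use.
it = iter(numbers)
print(next(it), next(it))             # 0 1
print(hasattr(it, '__next__'))        # True
print(list(it))                       # [2] -- partly consumed already
```

So range is a lazy iterable that manufactures iterators on demand, rather than being an iterator itself.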

    Read at 11:28 am, Jun 8th

  • The Iterator Protocol: How for Loops Work in Python

    We’re interviewing for a job and our interviewer has asked us to remove all for loops from a block of code. They then mentioned something about iterators and cackled maniacally while rapping their fingers on the table.
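The technique being teased, for anyone following along, is Python's iterator protocol: a for loop is roughly sugar for calling iter() once and next() repeatedly until StopIteration. A minimal sketch (my own illustration, not code from the article):

```python
def manual_for(iterable, action):
    """Roughly what `for item in iterable: action(item)` does under the hood."""
    iterator = iter(iterable)      # every for loop first asks for an iterator
    while True:
        try:
            item = next(iterator)  # then pulls values one at a time
        except StopIteration:      # the iterator signals exhaustion
            break
        action(item)

collected = []
manual_for([1, 2, 3], collected.append)
print(collected)  # [1, 2, 3]
```

Anything that implements this protocol works in a for loop, which is why lists, files, generators, and dicts can all be looped over the same way.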

    Read at 11:23 am, Jun 8th

  • Exclusive: In secret recording, Pompeo opens up about Venezuelan opposition, says keeping it united ‘has proven devilishly difficult’

    Secretary of State Mike Pompeo offered a candid assessment of Venezuela’s opposition during a closed-door meeting in New York last week, saying that the opponents of President Nicolás Maduro are highly fractious and that U.S.

    Read at 11:20 am, Jun 8th

  • How a watchdog whitewashed its oversight of FEMA’s disaster response with ‘feel good’ reports

    After catastrophic floodwaters submerged wide stretches of southern Louisiana in 2016, displaced homeowners and officials criticized the federal recovery effort as dangerously slow, leaving thousands of people homeless for months.

    Read at 11:15 am, Jun 8th

  • Australia May Well Be the World’s Most Secretive Democracy

    SYDNEY, Australia — One journalist is being investigated for reporting that several boats filled with asylum seekers recently tried to reach Australia from Sri Lanka.

    Read at 11:00 am, Jun 8th

  • I Want to Live in Elizabeth Warren’s America

    The Massachusetts senator is proposing something radical: a country in which adults discuss serious ideas seriously. It’s early, but this much is true: Elizabeth Warren is running the most impressive presidential campaign in ages, certainly the most impressive campaign within my lifetime.

    Read at 10:49 am, Jun 8th

  • Loop better: A deeper look at iteration in Python

    Python's for loops don't work the way for loops do in other languages. In this article we'll dive into Python's for loops to take a look at how they work under the hood and why they work the way they do. We're going to start off our journey by taking a look at some "gotchas."
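One classic "gotcha" of this kind: generators are iterators, so they are single-use, and even a membership check consumes them. A quick sketch (my own example, in the spirit of the article):

```python
squares = (n ** 2 for n in [1, 2, 3])  # a generator: lazy and single-use

print(9 in squares)    # True -- but the check iterated the generator to the end...
print(list(squares))   # [] -- ...so it is now exhausted

# A list is also an iterable, but it can be traversed any number of times.
squares_list = [n ** 2 for n in [1, 2, 3]]
print(9 in squares_list)  # True
print(squares_list)       # still [1, 4, 9]
```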

    Read at 10:18 am, Jun 8th

Day of Jun 7th, 2019

  • Vox Media Employees Walk Out On Final Day Of Union Bargaining

    CEO Jim Bankoff, meanwhile, said he was disappointed in the walkout and said paying higher wages was "not realistic or smart." Several hundred Vox Media employees staged a walkout and stopped writing and publishing stories in an effort to pressure the company to sign their union contract.

    Read at 06:35 pm, Jun 7th

  • Newly Discovered Files Suggest GOP Lawmakers Lied in Court About Racial Gerrymandering to Stop An Election

    The latest bombshell from the formerly secret files of the GOP’s top gerrymandering guru emerged on Thursday, and it’s astounding: Voting rights advocates claim to have evidence that North Carolina Republican lawmakers repeatedly lied to a federal court, and to the public, in a successful effort

    Read at 06:33 pm, Jun 7th

  • Automakers Tell Trump His Pollution Rules Could Mean ‘Untenable’ Instability and Lower Profits

    WASHINGTON — The world’s largest automakers warned President Trump on Thursday that one of his most sweeping deregulatory efforts — his plan to weaken tailpipe pollution standards — threatens to cut their profits and produce “untenable” instability in a crucial manufacturing sector.

    Read at 06:29 pm, Jun 7th

  • Vox Media’s union wants the ‘best contract in digital media.’ If it wins, it would be good for everyone.

    Dylan Matthews covers a bit of everything at the wonkishly inclined five-year-old site Vox.com. His bio lists his interests as, “global development, anti-poverty efforts in the US and abroad, factory farming and animal welfare, and conflicts about the right way to do philanthropy.

    Read at 06:25 pm, Jun 7th

  • Economists Are Obsessed with “Job Creation.” How About Less Work?

    In 1930, the British economist John Maynard Keynes predicted that, by the end of the century, the average workweek would be about 15 hours.

    Read at 06:13 pm, Jun 7th

  • How to Use the Web Share API

    The user is presented with a wide range of options for sharing content compared to the limited number you might have in your DIY implementation. You can improve your page load times by doing away with third-party scripts from individual social platforms.

    Read at 12:10 pm, Jun 7th

  • Pelosi tells Dems she wants to see Trump ‘in prison’

    She also clashed with Judiciary Committee Chairman Jerry Nadler, who pressed her to begin impeachment proceedings.

    Read at 09:50 am, Jun 7th

  • The Coalition Out to Kill Tech as We Know It

    Updated on June 5 at 12:39 p.m. In October 2016, then-President Barack Obama hosted a miniature version of the blowout tech conference South by Southwest, which the White House called South by South Lawn. Obama, as The New York Times put it at the time, had “brought Silicon Valley to Washington.

    Read at 09:37 am, Jun 7th

  • How To Mock Services Using Mountebank and Node.js

    The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program. In complex service-oriented architectures (SOA), programs often need to call multiple services to run through a given workflow.

    Read at 09:27 am, Jun 7th

  • Daily Ethical Design

    Design ethics concerns moral behavior and responsible choices in the practice of design.

    Read at 09:18 am, Jun 7th

  • Men Cause 100% of Unwanted Pregnancies

    Our conversation about abortion places the burden of responsibility on women. I argue men are the root cause. If we actually care about reducing or eliminating abortions, we must hold men accountable.

    Read at 09:09 am, Jun 7th

  • You don't need passport.js - Guide to node.js authentication ✌️

    While third-party authorization services like Google Firebase, AWS Cognito, and Auth0 are gaining popularity, and all-in-one library solutions like passport.js are the industry standard, it's common to see that developers never really understand all the parts involved in the authentication flow.

    Read at 07:40 am, Jun 7th

  • semantic/why-haskell.md at master · github/semantic

    semantic is a Haskell library and command line tool for parsing, analyzing, and comparing source code. In a hurry? Check out the documentation of example uses for the semantic command line tool.

    Usage: run semantic --help for a complete list of up-to-date options. The tool has three subcommands:

    • parse: generate parse trees for one or more paths. Output options: --sexpression (s-expression parse trees, the default), --json, --json-graph (JSON adjacency list), --symbols (JSON symbol list), --dot, --show (debug only, format subject to change without notice), and --quiet (no output, just timing stats).
    • diff: compute changes between two paths. Output options: --sexpression (default), --json, --json-graph, --toc (JSON table-of-contents diff summary), --dot, and --show.
    • graph: compute an import graph (--imports, the default) or a call graph (--calls) for a directory or a top-level entry-point module, with --packages to include a vertex for the package. Output as DOT (default), JSON, or Show; --root, --exclude-dir, --language, and --stdin control the input.

    Language support: Ruby, JavaScript, TypeScript, Python, and Go are fully supported (parse, assign, diff, ToC, symbols, and import graphs, with call graphs under development). PHP, Java, JSON, JSX, Haskell, and Markdown have partial support.

    Development: semantic uses cabal's Nix-style local builds. To get started quickly:

      git clone git@github.com:github/semantic.git
      cd semantic
      git submodule sync --recursive && git submodule update --init --recursive --force
      cabal new-update
      cabal new-build
      cabal new-test
      cabal new-run semantic -- --help

    semantic requires at least GHC 8.6.4; ghcup is recommended for sandboxing GHC versions. Version bounds are based on Stackage LTS versions (the current LTS version is 13.13). stack as a build tool is not officially supported; an unofficial stack.yaml is available, though with no stability guarantees.

    Technology and architecture: semantic reads blobs, generates parse trees for those blobs with tree-sitter (an incremental parsing system for programming tools), assigns those trees into a generalized representation of syntax, performs analysis, computes diffs, or just returns parse trees, and renders output in one of many supported formats.

    Contributions are welcome; see the contribution guidelines and code of conduct for details. Semantic is licensed under the MIT license.

    Source: semantic/why-haskell.md at master · github/semantic

    Read at 11:24 am, Jun 7th

  • Render Snarky Comments in Comic Sans—zachleat.com

    June 07, 2019. I had the pleasure of attending an IndieWebCamp before the amazing Beyond Tellerrand conference a few weeks back and I'm still buzzing from the experience. I can't really express how meaningful this experience was to me. An antithesis to the rat race of social media, IndieWebCamp was a roomful of kindred spirits that care about the web and their own websites and hosting their own content. It felt like the Google Reader days again, when everyone was blogging and writing on their own sites. I dunno if you can tell but I loved it. If you get the chance to attend one of these events, jump on it (I really want to run one in Omaha 👀).

    Webmentions, Disqus, Wordpress. At the event I got a working example of webmentions going on my personal web site. I already had a static copy of my old Disqus comments that I'd exported (which included copies of old Wordpress comments that I'd imported into Disqus 😎). Webmentions are made possible for static web sites when you use webmention.io, a service to log incoming entries. Another service, Bridgy, crawls social networking sites for mentions of my site and sends those over to webmention.io automatically. If I've already lost you, luckily Max Böck wrote up a lovely tutorial on how to do this using Eleventy (his site is amazing, too). Max also created an eleventy-webmentions starter project which has all the code for this. Hopefully we can get some form of this merged back into the upstream eleventy-base-blog too. You can see an example of how the webmentions look on my site at one of my recent blog posts: Google Fonts is Adding font-display.

    Sentiment analysis. Hosting my own content and comments allows me to be a bit more creative with it. So I decided to take this a step further and have a little fun with negative comments. First, how do we find out if a comment is negative? Let's try to use Natural, a plugin on npm. I added a Liquid filter to analyze text and spit out a sentiment value: 0 is neutral, < 0 is negative, and > 0 is positive. Note that this natural language processing isn't 100% accurate (sometimes I'll get a false positive) but this is just a fun demo on my site. And then in my Liquid template, I use this integer value to add a static-comments-reply-salty class:

      {% assign sentiment = webmention.content.text | getSentimentValue %}
      <li class="static-comments-reply{% if sentiment < 0 %} static-comments-reply-salty{% endif %}">
      …

    And then in my stylesheet, I use this class to opt into a lovely stack of Comic Sans, Chalkboard, and of course a fantasy fallback for kicks:

      .static-comments-reply-salty {
        font-family: Comic Sans MS, Chalkboard SE, fantasy;
      }

    As extra credit, I also used the random-case plugin to mODifY tHe TeXt (at David Darnes' excellent recommendation). How does it look? (Before/after screenshots of a real comment on my site.) This isn't intended to be a hot take on Comic Sans. Instead it's meant to change the tone of the negativity to make it sound like a clown is yelling at a kid's birthday party.

    Source: Render Snarky Comments in Comic Sans—zachleat.com

    Read at 10:33 am, Jun 7th

  • The last Soviet citizen: The cosmonaut who was left behind in space - Russia Beyond

    Sergei Krikalev was in space when the Soviet Union collapsed. Unable to come home, he wound up spending twice as long in orbit as originally planned. They simply refused to bring him back.

    While tanks were rolling through Moscow's Red Square, people built barricades on bridges, and Mikhail Gorbachev and the Soviet Union went the way of history, Sergei Krikalev was in space. 350 km away from Earth, the Mir space station was his temporary home. He was nicknamed "the last citizen of the USSR." When the Soviet Union broke apart into 15 separate states in 1991, Krikalev was told that he could not return home because the country that had promised to bring him back no longer existed.

    How did this happen? Four months earlier, Krikalev, a 33-year-old flight engineer, had set off for the Mir space station from the Soviet Baikonur Cosmodrome, which is located in Kazakhstan. Krikalev's mission was supposed to last five months, and his training had not prepared him to be in space longer than this. Then the coup d'état happened. "For us, this came as a complete surprise," Krikalev would recall. "We did not understand what was happening. When we were discussing it, we tried to understand how it would affect the space industry."

    And affect the space industry it did. Krikalev was told there was no money to bring him back. A month later, he still got the same answer: mission control was asking him to stay out there a bit longer. Another month passed, but still the same answer yet again. "They say it's tough for me — not really good for my health. But now the country is in such difficulty, the chance to save money must be (the) top priority," Discover Magazine quoted him as saying.

    The waiting game. In fact, he could have left. There was a Raduga re-entry capsule onboard the Mir, which was designed specifically for making the return to Earth. But taking it would have meant the end of Mir, since there was no one else left to look after it. "I wondered if I had the strength to survive to complete the program. I was not sure," he said. Muscle atrophy, radiation, cancer risk, the immune system becoming weaker with every passing day: these are just some of the possible consequences of a protracted space mission.

    In Krikalev's case, the mission lasted twice as long as originally planned. He spent 311 days, or 10 months, in space, unwittingly setting a world record in the process. Over this time, four scheduled missions were cut to two, and neither of them had space for another flight engineer. Russia, which at that time had major money problems due to hyperinflation, was selling other countries seats to the space station on the Soyuz rocket. For example, Austria bought a seat for $7 million, while Japan purchased one for $12 million to send a TV reporter there. There was even talk of urgently selling off the Mir while it was still in working order. All of this meant that other crew members returned to Earth, while Krikalev, the only flight engineer, could not. Locked up there in space, far from home, he asked them to bring him honey in order to raise his spirits. But there was no honey, and instead they sent him lemon and horseradish.

    The return. Krikalev finally returned to Earth on March 25, 1992, after Germany paid $24 million to purchase a ticket for his replacement, Klaus-Dietrich Flade. Upon landing, a man with the four letters "USSR" and a red Soviet flag on his spacesuit emerged from the Soyuz capsule. One report described his appearance as "pale as flour and sweaty, like a lump of wet dough." By then the whole world had heard about this "victim of space." Four men helped him stand, supporting him as he placed his feet on the ground. One of them threw a fur coat over him, while another brought him a bowl of broth.

    While Krikalev was away, the outskirts of Arkalykh, the city where he landed, had ceased to be Soviet and had instead become part of the independent republic of Kazakhstan. The city where he lived was no longer called Leningrad—it had become St. Petersburg instead. While in space, he had orbited Earth 5,000 times, and the territory of his own country had shrunk by more than 5 million square kilometers. The Communist Party of the Soviet Union, which had ruled the country since the 1920s, had ceased to be a political monopolist and was instead just one of many parties. His monthly salary of 600 rubles, which at the time of his departure into space was considered a good salary for a scientist, had been devalued. Now a bus driver earned twice as much.

    "The change is not that radical," Krikalev would say at a press conference a few days later. "I lived on the territory of Russia, while the republics were united into the Soviet Union. Now I have returned to Russia, which is part of the Commonwealth of Independent States." He would be made a Hero of Russia, and two years later he would go on another space mission, this time becoming the first Russian cosmonaut to fly on a NASA shuttle. A couple of years after that, he was the first to spend time on the new International Space Station.

    Source: The last Soviet citizen: The cosmonaut who was left behind in space – Russia Beyond

    Read at 10:15 am, Jun 7th

  • Unions, Media Companies and TPM | Talking Points Memo

    I was on vacation last week when I got the news that the TPM Union had ratified the contract we'd agreed upon. Without a doubt, the union makes TPM a better company. Now that I'm back in the office, I wanted to talk a bit about why.

    Some of TPM's longtime readers may know me but most of you will not, so let me introduce myself. I'm Joe Ragazzo, executive publisher at TPM. In my previous life I was a journalist but moved over to the "business" side because it upset me how the news industry was dying and I hoped in some small way I could help improve it. We have three simple goals at TPM. We want to do great journalism. We want to be the best media company at which to work. We want to make enough money to do the first two things.

    I think every media company should unionize. No matter the best intentions of management — myself included — there are blind spots. It's arrogant to think that without giving the employees a literal seat at the table, without listening to their concerns and making a good-faith effort to address them, you can be a great employer over the long run. When the TPM Union asked Josh Marshall to recognize the union, he did so within a couple of hours. This gave me a great deal of pride. Over the last year or so, managing editor David Kurtz and I have negotiated with the union's bargaining committee. Josh was in the background; he signed off on major points but largely left the negotiating to us. In my opinion, we came to a mutually beneficial agreement that locks in minimum salaries, annual raises, parental leave, and a number of other things.

    One of the aspects of the contract that I'm most happy with is that a union member will join our Strategy Council. I think this is a big step for TPM. I will not bore you with my philosophies and beliefs about corporate governance, but it would be great for more companies to do something like this. Again, management has blind spots, and giving more of a voice to the employees will only make our journalism better and our employees happier.

    TPM is a small, independent company. We don't have loads of cash sitting around. Every dollar matters. In order for this place to be awesome, everyone has to be excited about working here and feel like they are valued. As management, we can say we care. We can say lots of things. But actually putting our promises into a contract, I think, demonstrates to the employees that it's not just idle chatter.

    If you are not yet a member at TPM, it'd be great if you'd consider joining. (If you're already a member, maybe consider upgrading to Ad Free!) We depend on those memberships. Like I said, we're a small, independent company. Membership dollars don't line the pockets of investors. They pay the salaries of our reporters and programmers, and our rent and other operational costs. The vast majority of TPM's budget goes to paying our personnel — we don't spend money on much else — and the majority of our income comes from memberships. The short version of that is: when you buy a membership, it's money that goes directly to the people who make TPM. We are living our values and I hope you'll support us.

    A final thought. If you work at a media company that isn't unionized, you should consider unionizing. It's better for everyone. You deserve the peace of mind of a contract. The idea that unions are a problem is farcical. The problems in media these days are that large corporations are pillaging local media and private equity is stripping journalism outfits down to their bare bones. There are media executives taking home huge bags of cash while paying employees next to nothing. Sometimes they are taking home huge paychecks, not paying anyone decently — and their companies are still failing. Unions are not some magic elixir and every contract is different, but it's my firm belief that journalism would be in much better shape if everyone unionized and the actual journalists had a seat at the table.

    Source: Unions, Media Companies and TPM | Talking Points Memo

    Read at 08:13 am, Jun 7th

Day of Jun 6th, 2019

  • Automattic Adopts Alex Mills’ Plugins

    Automattic announced today that a team inside the company will be adopting Alex Mills‘ plugins and continuing their development and support.

    Read at 11:13 pm, Jun 6th

  • Peterborough by-election: Nigel Farage's Brexit Party fail to win seat in parliament

    Nigel Farage’s Brexit Party have failed to secure their first parliamentary seat in the Peterborough by-election.

    Read at 11:11 pm, Jun 6th

  • Alexandria Ocasio-Cortez 'encouraged' by work with Ted Cruz's legislative team

    Rep. Alexandria Ocasio-Cortez says a ban on former lawmakers becoming lobbyists, which prompted her unlikely alliance with Sen. Ted Cruz, is one step closer to becoming a reality.

    Read at 11:09 pm, Jun 6th

  • The Agile Labor Union

    In 2001, seventeen American, British and Canadian software engineers and IT managers met at a ski resort in Snowbird, Utah, to start a movement to remake the way software is built.

    Read at 10:59 pm, Jun 6th

  • Intro to the Rust programming language

    Alex Crichton presents an introduction to the Rust programming language

    Read at 02:10 pm, Jun 6th

  • DNC tells Inslee it won't host climate debate

    Democratic presidential hopeful and Washington Gov. Jay Inslee announced Wednesday that the Democratic National Committee (DNC) is rebuffing his repeated entreaties for a primary debate focused on climate change.

    Read at 09:22 am, Jun 6th

  • https://revolutionsperminute.simplecast.com/episodes/healthcare-at-the-intersection-z5h4H5e5

    Read at 09:18 am, Jun 6th

  • Pressure mounts on Japanese lawmaker Hodaka Maruyama, who drank 10 glasses of cognac before suggesting war with Russia over disputed islands | South China Morning Post

    Hodaka Maruyama made his remarks in May, during a visit to one of four Russian-held islands off Hokkaido.

    Source: Pressure mounts on Japanese lawmaker Hodaka Maruyama, who drank 10 glasses of cognac before suggesting war with Russia over disputed islands | South China Morning Post

    Read at 06:25 pm, Jun 6th

  • Why we prefer CSS Custom Properties to SASS variables | CodyHouse

    Since the release of our framework a few months ago, we've been asked by many users why we opted for CSS variables, instead of SASS variables, even though we do use SASS in the framework. In this article, I'll go through the advantages of using custom properties and why they've become crucial in our workflow. Content: Creating and applying color themes Controlling the type scale Controlling the spacing scale Editing vertical rhythm on a component level Abstracting components behavior What's the catch? Can I use CSS variables with a preprocessor? Conclusion 👋 First time you hear about the CodyHouse Framework? Defining variables # In this article I'm assuming you're familiar with the basics of both CSS custom properties and SASS (or any other CSS preprocessor). If you're not, let's start from a basic example: In SCSS: $color-primary: hsl(220, 90%, 56%);
 .link {
 color: $color-primary;
 } In CSS: :root {
 --color-primary: hsl(220, 90%, 56%);
 .link {
 color: var(--color-primary);
 } Native, custom properties allow you to define variables without the need for CSS extensions (i.e., SASS). Are they the same? Not really! Unlike SASS variables, custom properties 1) are scoped to the element they are declared on, 2) cascade and 3) can be manipulated in JavaScript. These three features open a whole new world of possibilities. Let me show you some practical examples! 1. Creating and applying color themes # Here's an example of how you would create two (simplified) color themes using SASS variables: $color-primary: blue;
 $color-text: black;
 $color-bg: white;
 /* invert */
 $color-primary-invert: red;
 $color-text-invert: white;
 $color-bg-invert: black;
 .component {
 color: $color-text;
 background-color: $color-bg;
 a {
 color: $color-primary;
 .component--dark {
 color: $color-text-invert;
 background-color: $color-bg-invert;
 a {
 color: $color-primary-invert;
 } In the example above, we have a 'default' theme, and a 'dark' theme where we invert the colors of background and text. Note that in the dark theme we need to go through each property where the color variables were used, and update them with a new variable. As long as we stick to simplified (non-realistic) examples, no issue arises. What if we have a component with plenty of elements? Once again, we would be forced to rewrite all the properties where the color variables are used and replace the variables. And if you change the main component, you have to double check all the modifiers. So yeah...not so handy! While building our framework, we came up with a different approach based on CSS variables. First of all, let's define the color variables: :root, [data-theme="default"] {
 --color-primary: blue;
 /* color contrasts */
 --color-bg: white;
 --color-contrast-lower: hsl(0, 0%, 95%);
 --color-contrast-low: hsl(240, 1%, 83%);
 --color-contrast-medium: hsl(240, 1%, 48%);
 --color-contrast-high: hsl(240, 4%, 20%);
 --color-contrast-higher: black;
 [data-theme] {
 background-color: var(--color-bg);
 color: var(--color-contrast-high);
 [data-theme="dark"] {
 --color-primary: red;
 /* color contrasts */
 --color-bg: black;
 --color-contrast-lower: hsl(240, 6%, 15%);
 --color-contrast-low: hsl(252, 4%, 25%);
 --color-contrast-medium: hsl(240, 1%, 57%);
 --color-contrast-high: hsl(0, 0%, 89%);
 --color-contrast-higher: white;
 } FYI: in the example above we use data-* attributes to apply a color theme, but this has nothing to do with CSS variables vs. SASS variables. Also, we defined a scale of neutral values using a nomenclature based on the 'contrast level'. The important point is that we don't need to create new color variables for our second (dark) theme. Unlike SASS, we can override the value of existing custom properties. Here's how to apply the color variables to a component: .component {
 color: var(--color-contrast-higher);
 background-color: var(--color-bg);
 border-bottom: 1px solid var(--color-contrast-low);
 a {
 color: var(--color-primary);
 } What about the dark variation of the component? We don't need additional CSS. Because we're overriding and not replacing variables, we only need to apply the correct color variables when we create the component for the first time. It doesn't matter how complicated the component becomes, once you've set the color themes in your _colors.scss file, and applied the color variables to the elements of your components, you can apply color themes in a very simple way: <section data-theme="dark">
 <div class="component">
 <div class="child" data-theme="default"></div>
 </section> In the example above, we've applied the 'dark' color theme to the section, and the 'default' color theme to the .child element. That's right, you can nest color themes! This technique, made possible by the use of CSS custom properties, allows you to do in no time cool stuff like this. 👇 Here are some links in case you want to learn more about how to manage colors using the CodyHouse framework: 2. Controlling the type scale # A type (or modular) scale is a set of harmonious (size) values that are applied to typography elements. Here's how you can set a type scale in SCSS using SASS variables: $text-xs: 0.694em;
 $text-sm: 0.833em;
 $text-base-size: 1em;
 $text-md: 1.2em;
 $text-lg: 1.44em;
$text-xl: 1.728em;

A standard approach would be to create the type scale using a third-party tool (or by doing the math), then import the values into your style as in the example above. While building our framework, we decided to incorporate the whole scale formula into the _typography.scss file. Here's how we set the type scale using CSS variables:

:root {
 // body font size
 --text-base-size: 1em;
 // type scale
 --text-scale-ratio: 1.2;
 --text-xs: calc((1em / var(--text-scale-ratio)) / var(--text-scale-ratio));
 --text-sm: calc(var(--text-xs) * var(--text-scale-ratio));
 --text-md: calc(var(--text-sm) * var(--text-scale-ratio) * var(--text-scale-ratio));
 --text-lg: calc(var(--text-md) * var(--text-scale-ratio));
 --text-xl: calc(var(--text-lg) * var(--text-scale-ratio));
 --text-xxl: calc(var(--text-xl) * var(--text-scale-ratio));
 --text-xxxl: calc(var(--text-xxl) * var(--text-scale-ratio));
}

What's the advantage of such an approach? It lets you control the whole typography system by editing only two variables: --text-base-size (the body font size) and --text-scale-ratio (the scale multiplier). 'Yes, but can't you do the same using SASS variables?' Not if you want to modify your typography at specific breakpoints:

:root {
 @include breakpoint(md) {
 --text-base-size: 1.25em;
 --text-scale-ratio: 1.25;
  }
}

The snippet above is the cornerstone of our responsive approach. Because we use Em relative units, when the --text-base-size (body font size) is modified, both typography and spacing are affected. You end up with a system that resizes all your components with almost no need for media queries at the component level. Here are some useful links on the topic:

3. Controlling the spacing scale #

The spacing scale is the equivalent of the type scale, but applied to space values. Once again, including the scale formula in the framework allowed us to control the spacing system and make it responsive:

:root {
 --space-unit: 1em;
 --space-xxxxs: calc(0.125 * var(--space-unit)); 
 --space-xxxs: calc(0.25 * var(--space-unit));
 --space-xxs: calc(0.375 * var(--space-unit));
 --space-xs: calc(0.5 * var(--space-unit));
 --space-sm: calc(0.75 * var(--space-unit));
 --space-md: calc(1.25 * var(--space-unit));
 --space-lg: calc(2 * var(--space-unit));
 --space-xl: calc(3.25 * var(--space-unit));
 --space-xxl: calc(5.25 * var(--space-unit));
 --space-xxxl: calc(8.5 * var(--space-unit));
 --space-xxxxl: calc(13.75 * var(--space-unit));
}

@supports (--css: variables) {
  :root {
    @include breakpoint(md) {
      --space-unit: 1.25em;
    }
  }
}

This approach becomes particularly powerful when combined with the typography method discussed in the previous chapter. With just a few lines of CSS, you end up with responsive components. One thing I love about using Em units along with this spacing system is that if spacing and typography sizes look right at one breakpoint, they almost certainly look right at all breakpoints, regardless of the fact that the --space-unit value is updated. A corollary is that I can design with almost no need to resize the browser window (except when I want to change the behavior of a component); and when I do resize the window, spacing and typography adapt gracefully. More links on the topic:

4. Editing vertical rhythm on a component level #

Unlike SASS variables, CSS variables can be overridden. One way to take advantage of this is to inject custom properties into other custom properties, creating 'controls' that can be edited at the component level. Here's an example: when you set the vertical spacing of a text component, you probably want to specify line-height and margin-bottom for your elements:

.article {
  h1, h2, h3, h4 {
    line-height: 1.2;
    margin-bottom: $space-xs;
  }

  ul, ol, p, blockquote {
    line-height: 1.5;
    margin-bottom: $space-md;
  }
}

This spacing, however, varies according to where the text is used. For example, if you want your text to be more condensed, you need to create a component variation with different spacing values:

.article--sm {
  h1, h2, h3, h4 {
    line-height: 1.1;
    margin-bottom: $space-xxxs;
  }

  ul, ol, p, blockquote {
    line-height: 1.4;
    margin-bottom: $space-sm;
  }
}

...and so on, any time you wish to update the vertical rhythm. Here's an alternative approach based on CSS variables:

.text-component {
  --component-body-line-height: calc(var(--body-line-height) * var(--line-height-multiplier, 1));
  --component-heading-line-height: calc(var(--heading-line-height) * var(--line-height-multiplier, 1));
  --line-height-multiplier: 1;
  --text-vspace-multiplier: 1;

  h1, h2, h3, h4 {
    line-height: var(--component-heading-line-height);
    margin-bottom: calc(var(--space-xxxs) * var(--text-vspace-multiplier));
  }

  h2, h3, h4 {
    margin-top: calc(var(--space-sm) * var(--text-vspace-multiplier));
  }

  p, blockquote, ul li, ol li {
    line-height: var(--component-body-line-height);
  }

  ul, ol, p, blockquote, .text-component__block, .text-component__img {
    margin-bottom: calc(var(--space-sm) * var(--text-vspace-multiplier));
  }
}

The --line-height-multiplier and --text-vspace-multiplier variables are the two scoped controls of the text component. When we create a modifier of the .text-component class, we only need to override those two variables to edit the vertical spacing:

.article.text-component { // e.g., blog posts
 --line-height-multiplier: 1.13; // increase article line-height
 --text-vspace-multiplier: 1.2; // increase vertical spacing
}

In case you want to take this for a spin:

5. Abstracting component behaviour #

The possibility to override the value of a CSS variable can be used in many ways. In general, any time you can abstract the behavior of a component into one or more variables, you make your life easier when that component needs editing (or when you have to create a variation of it). An example is our Auto Sized Grid component, where we use CSS Grid to create a layout whose gallery items auto-fill the available space based on a min-width set in CSS. We abstract the min-width of the items, storing it in a variable. That min-width value is the only thing you need to modify when you create a variation of the Auto Sized Grid component.

6. What's the catch? #

In two words: browser support. Hold on, though! You can use CSS custom properties in all the ways described in this article with the help of a PostCSS plugin. There are some limitations in what you can do, and some techniques (e.g., editing vertical rhythm) only apply to modern browsers. In these specific cases, you're free to use CSS variables as long as you're not disrupting the experience in older browsers. Check out our documentation for more info about the limitations of using CSS variables today.

7. Can I use CSS variables with a preprocessor? #

Yes! As long as SASS (or any other preprocessor) lets you do things you can't do in CSS, and you need those things, why not use it? SASS is not a library users have to download when they access your website; it's a tool in your workflow. We use SASS, for example, to define color functions that work with CSS variables.

8. Conclusion #

In this article, we've gone through a few examples that demonstrate the advantage of using CSS custom properties over SASS variables. We've focused on how they enable you to create 'controls' that speed up the way you modify components, or set rules that affect typography and spacing.
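As a closing aside, the type-scale formula from earlier can be mirrored in plain JavaScript to check the values it produces. This is a sketch for illustration only (it is not part of the CodyHouse framework; the function and variable names are invented here):

```javascript
// Sketch: the --text-scale-ratio calc() chain from the typography
// section, reproduced in plain JS. Names are illustrative.
function typeScale(ratio = 1.2, base = 1 /* em */) {
  const xs = base / ratio / ratio;  // --text-xs
  const sm = xs * ratio;            // --text-sm
  const md = sm * ratio * ratio;    // --text-md
  const lg = md * ratio;            // --text-lg
  const xl = lg * ratio;            // --text-xl
  return { xs, sm, md, lg, xl };
}
```

With the default ratio of 1.2, the computed values line up with the hard-coded SASS scale shown at the start of the typography section ($text-xs: 0.694em, $text-sm: 0.833em, $text-md: 1.2em, and so on), which is exactly why the formula can replace the hard-coded values.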
We've covered a lot of ground, and I hope you can take something from this post and include it in your work. 😊 Would you like to share how you're using CSS variables, or do you have feedback on the article? Get in touch on Twitter! Source: Why we prefer CSS Custom Properties to SASS variables | CodyHouse

    Read at 04:56 pm, Jun 6th

  • Enabling Modern JavaScript on npm

Enabling Modern JavaScript on npm
30 May 2019 on Modules, Transpilers, Webpack, npm, Ecosystem

Modern JavaScript syntax lets you do more with less code, but how much of the JavaScript we ship to users is actually modern? For the past few years we’ve been writing modern JavaScript (or TypeScript), which is then transpiled to ES5 as a build step. This has let the “state of the art” of JavaScript move forward at a faster pace than could otherwise have been achieved while supporting older browsers.

More recently, developers have adopted differential bundling techniques, where two or more distinct sets of JavaScript files are produced to target different environments. The most common example is the module/nomodule pattern, which leverages native JS Modules (also known as "ES Modules") support as its “cutting the mustard” test: modules-supporting browsers request modern JavaScript (~ES2017), and older browsers request the more heavily polyfilled and transpiled legacy bundles. Compiling for the set of browsers defined by their JS Modules support is made relatively straightforward courtesy of the targets.esmodules option in @babel/preset-env, and Webpack plugins like babel-esm-plugin make producing two sets of JavaScript bundles mostly painless. Given the above, where are all the blog posts and case studies showing the glorious performance and bundle size benefits achieved using this technique? It turns out, shipping modern JavaScript requires more than changing our build targets.

It’s not our code

Current solutions for producing paired modern & legacy bundles focus solely on “authored code” - the code we write that implements an application. These solutions can’t currently help with the code we install from sources like npm - that’s a problem, since some sources place the ratio of installed code to authored code at somewhere in the ballpark of 10:1.
While this ratio will clearly be different for every project, we've consistently found that the JavaScript shipped to users contains a high amount of installed code. Even walking this estimate back, there are clear indications that the ecosystem favors installing existing modules over authoring new one-off modules. In many ways this represents a triumph for Open Source: developers are able to build on the communal value of shared code and collaborate on generalized solutions to their problems in a public forum.

“the dependencies we install from npm are stuck in 2014”

As it turns out, this amazing ecosystem also holds the most important missing piece of our modern JavaScript puzzle: the dependencies we install from npm are stuck in 2014. The modules we publish to npm are “JavaScript”, but that’s where any expectation of uniformity ends. Front-end developers consuming JavaScript from npm near-universally expect that JavaScript to run “in a browser”. Given the diverse set of browsers we need to support, we end up in a situation where modules need to support the lowest common denominator of their consumers’ browser support targets. The eventuality that played out is that we have come to explicitly depend on all code in node_modules being ECMAScript 5. In some very rare cases, developers use bolted-on solutions to detect non-ES5 modules and preprocess them down to their desired output target (here’s a hacky approach you shouldn’t use). As a community, the backwards compatibility of each new ECMAScript version has allowed us to largely ignore the effect this has had on our applications, despite an ever-widening gap between the syntax we write and the syntax found in most of our favorite npm dependencies.

This has led to a general acceptance that npm modules should be transpiled before they are published to the registry. The publishing process generally involves bundling source modules into multiple formats: JS Modules, CommonJS and UMD.
Module authors sometimes denote these different bundles using a set of unofficial fields in a module’s package.json, where "module" points to an .mjs file, "unpkg" points to the UMD bundle, and "main" is still left to reference a CommonJS file. All of these formats affect only a module’s interface - its imports and exports - and this has led to an unfortunate consensus among developers and tooling that even modern JS Modules should be transpiled to a library’s lowest support target.

It has been suggested that package authors could begin allowing modern JavaScript syntax in the entry module denoted in their package.json via the module field. Unfortunately, this approach is incompatible with today’s tooling - more specifically, it’s incompatible with the way we’ve all configured our tooling. These configurations are different for every project, which makes this a massive undertaking, since the tools themselves are not what needs to change. Instead, the changes would need to be made in each and every application’s build configuration.

The reason these constraints hold firm is in large part due to popular bundlers like Webpack and Rollup shipping without a default behavior for whether JavaScript imported from node_modules should be processed. These tools can easily be configured to treat node_modules the same as authored code, but their documentation consistently recommends that developers disable Babel transpilation for node_modules. This recommendation is generally given citing build performance improvements, even though the slower build produces better results for end users. That makes any in-place changes to the semantics of importing code from node_modules exceptionally difficult to propagate through the ecosystem, since the tools don’t actually control what gets transpiled and how. That control rests in the hands of application developers, which means the problem is decentralized.
The module author’s perspective

The authors of our favorite npm modules are also involved. At present, there are five main reasons why module authors end up being forced to transpile their JavaScript before publishing it to npm:

1. We know app developers aren’t transpiling node_modules to match their support targets.
2. We can’t rely on app developers to set up sufficient minification and optimization.
3. Library size must be measured in bundled+minified+gzipped bytes to be realistic.
4. There is still a widespread expectation that npm modules are delivered as ECMAScript 5.
5. Increasing a module’s JS version requirement means the code is unavailable to some users.

When combined, these reasons make it virtually impossible for the author of a popular module to move to modern JavaScript by default. Put yourself in the shoes of a module author: would you be willing to publish only modern syntax, knowing the resulting update would break builds or production deploys for the majority of your users? The npm ecosystem’s current state, and its inability to bifurcate classic vs modern JavaScript publishing, is what holds us back from collectively embracing JS Modules and ES20xx.

Module authoring tools hurt, too

Just like application bundlers being configurable without an implied default behaviour for node_modules, changing the module authoring landscape is an unfortunately distributed problem. Since most module authors tend to roll their own build tooling, as requirements vary from project to project, there isn’t really a set of canonical tools to which changes could be made. Microbundle has been gaining traction as a shared solution, and @pika/pack recently launched with similar goals to optimize the format in which modules are published to npm. Unfortunately, these tools still have a long way to go before being considered widespread.
Assuming a group of solutions like Microbundle, Pika and Angular’s library bundler could be influenced, it may be possible to shift the ecosystem using popular modules as an example. An effort on this scale would likely encounter some resistance from module consumers, since many are not yet aware of the limitations their bundling strategies impose. However, these upended expectations are the very shift our community needs.

Looking Forward

It’s not all doom and gloom. While Webpack and Rollup encourage unprocessed npm module usage only through their documentation, Browserify actually disables all transforms within node_modules by default. That means Browserify could be modified to produce modern/legacy bundles automatically, without requiring every single application developer to change their build configuration. Similarly, opinionated tools built atop Webpack and Rollup provide a few centralized places where we could make changes that bring modern JS to node_modules. If we made these changes within Next.js, Create React App, Angular CLI, Vue CLI and Preact CLI, the resulting build configurations would eventually make their way out to a decent fraction of applications using those tools.

Looking to the vast majority of build systems for JavaScript applications that are one-off or customized per-project, there is no central place to modify them. One option we could consider, as a way to slowly move the community to modern-JS-friendly configurations, would be to modify Webpack to show warnings when JavaScript resources imported from node_modules are left unprocessed. Babel announced some new features last year that allow selective transpiling of node_modules, and Create React App recently started transpiling node_modules using a conservative configuration. Similarly, tools could be created for inspecting our bundled JavaScript to see how much of it is shipped as over-polyfilled or inefficient legacy syntax.
The last piece

Let’s assume we could build automation and guidance into our tools, and that doing so would eventually move the thousands (millions?) of applications using those tools over to configurations that allow modern syntax to be used within node_modules. For this to have any effect, we need to come up with a consistent way for package authors to specify the location of their modern JS source, and also reach consensus on what “modern” means in that context. For a package published 3 years ago, “modern” could have meant ES2015. For a package published today, would “modern” include class fields, BigInt or Dynamic Import? It’s hard to say, since browser support and specification stage vary.

This comes to a head when we consider the effect on differential bundling. For those not familiar, differential bundling refers to a setup that lets us write modern JavaScript, then build separate sets of output bundles targeting different environments. In the most popular usage, we have a set of bundles targeting newer browsers that contains ~ES2015 syntax, and a “legacy” set of bundles for all other browsers that is transpiled down to ES5 and polyfilled. The problem is that, if we assume “modern” to mean “anything newer than ES5”, it becomes impossible to determine what syntax a package contains that needs to be transpiled in order to meet a given browser support target.
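One way to picture the input→output pairing problem: if a package declared the syntax level it ships, a tool could look up which downleveling transforms a given output target needs. This is purely a hypothetical sketch — the syntax levels, transform names and lookup below are invented for illustration; no such field or tool exists in the article:

```javascript
// Hypothetical sketch: map (package syntax level, output target) to the
// set of "downleveling" transforms required. Levels and transform names
// are illustrative, not a real API.
const TRANSFORMS = {
  es2015: ["classes", "tagged-templates"],
  es2017: ["async-await"],
  es2019: ["rest-spread", "for-await"],
};
const LEVELS = ["es5", "es2015", "es2017", "es2019"];

function neededTransforms(packageSyntax, outputTarget) {
  const from = LEVELS.indexOf(packageSyntax);
  const to = LEVELS.indexOf(outputTarget);
  if (from <= to) return []; // already at or below the target: nothing to do
  // Collect the transforms for every level above the target, newest first.
  return LEVELS.slice(to + 1, from + 1)
    .reverse()
    .flatMap((level) => TRANSFORMS[level] || []);
}
```

Even this toy version shows why the matrix grows: every syntax level × target pair is a distinct configuration a build pipeline would have to maintain.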
We can address this problem by establishing a way for packages to express the specific set of syntax features they rely on. However, this still requires maintaining many variant configurations to handle each set of input→output syntax pairs:

Package Syntax        | Output Target        | Example “Downleveling” Transformations
ES5                   | ES5 / nomodule       | none
ES5                   | <script type=module> | none
ES2015 (classes)      | ES5 / nomodule       | classes & tagged templates
ES2015 (classes)      | <script type=module> | none
ES2017 (async/await)  | ES5 / nomodule       | async/await, classes & tagged templates
ES2017 (async/await)  | <script type=module> | none
ES2019                | ES5 / nomodule       | rest/spread, for-await, async/await, classes & tagged templates
ES2019                | <script type=module> | rest/spread & for-await

What would you do?

Over-transpiled JavaScript is an increasing fraction of the code we ship to end users, impacting initial load time and overall runtime performance of the web. We believe this is a problem needing a solution – a solution module authors and consumers can agree upon. The problem space is relatively small, but there are many interested parties with unique constraints. We’re looking to the community for help. What would you suggest to remediate this problem for the entire ecosystem of Open Source JavaScript? We want to hear from you, work with you, and help solve this problem in a scalable way for new syntax revisions. Reach out to us on Twitter: _developit, kristoferbaxter and nomadtechie are all eager to discuss. Jason Miller Source: Enabling Modern JavaScript on npm

    Read at 04:55 pm, Jun 6th

  • Magical, Mystical JavaScript Transducers

Magical, Mystical JavaScript Transducers

In an earlier post we were looking at how to calculate an average using JavaScript’s array methods. And in that article we ran into a dilemma. On the one hand, we could build our solution out of small, simple functions. But that meant doing many passes over the one array. On the other hand, we could do everything in a single pass. But that meant creating a hideously complex reducer. We were forced to choose between elegance and efficiency.

In the same article, though, I hinted at another way. A solution that would give us the elegance of using small, simple functions, but also the efficiency of doing our processing in a single pass through the array. What is this magical solution? It’s a concept called a transducer. Transducers are very cool. They give us a lot of power. But they are also a bit abstract, and that makes them hard to explain. So I could write an epic post explaining where transducers came from and how they work… but someone else has already done it. Eric Elliott has written a lengthy article that explains transducers in depth. So rather than repeat his work, I’m going to encourage you to read that.

So what’s the point of this article then? If Mr Elliott explains transducers so well, what else is left to say? Well, two things: even after reading Mr Elliott’s article twice, I still found it tricky to get my head around, so I thought I’d have a go at explaining how I understand them; and I thought it might be instructive to apply transducers to a specific problem, so we can see them in action and make things concrete. So, in this article, I’ll solve the same problem from my previous article. Transducers are hard. It may take a couple of attempts to get your head around them. So if you’re still confused after reading Mr Elliott’s article, maybe this one will help you along the way.

A practical application of transducers

So, let’s refresh our memory on the problem we’re trying to solve.
We have some data about Victorian-era slang terms:

const victorianSlang = [
  { term: 'doing the bear', found: true, popularity: 108 },
  { term: 'katterzem', found: false, popularity: null },
  { term: 'bone shaker', found: true, popularity: 609 },
  { term: 'smothering a parrot', found: false, popularity: null },
  { term: 'damfino', found: true, popularity: 232 },
  { term: 'rain napper', found: false, popularity: null },
  { term: 'donkey’s breakfast', found: true, popularity: 787 },
  { term: 'rational costume', found: true, popularity: 513 },
  { term: 'mind the grease', found: true, popularity: 154 },
];

We’d like to find the average of all the entries that have a popularity score. Now, one way to solve the problem is using .filter(), .map() and .reduce(). It might look something like this:

// Helper functions
// ----------------
function isFound(item) { return item.found; }
function getPopularity(item) { return item.popularity; }

// We use an object to keep track of multiple values in a single return value.
function addScores({totalPopularity, itemCount}, popularity) {
  return {
    totalPopularity: totalPopularity + popularity,
    itemCount: itemCount + 1,
  };
}

// Calculations
// ------------
const initialInfo = {totalPopularity: 0, itemCount: 0};
const popularityInfo = victorianSlang.filter(isFound)
  .map(getPopularity)
  .reduce(addScores, initialInfo);

// Calculate the average and display.
const {totalPopularity, itemCount} = popularityInfo;
const averagePopularity = totalPopularity / itemCount;
console.log("Average popularity:", averagePopularity);

The problem with this approach is that we have to traverse the array three times: once to filter out the un-found items; again to extract the popularity scores; and once more to calculate the total. This isn’t so bad, except that we’re creating at least two intermediate arrays.
These could potentially take up a lot of memory (if we had a larger data set). But the good thing about this approach is that it breaks the task down into three easy sub-tasks.

Another way to think about transducers

Now, how do we get from our problem to transducers? To make the transition easier, let’s try a thought experiment. Imagine that someone with a lot of power outlawed the use of .filter(), .map() and .flatMap() in JavaScript. It’s a silly thought experiment, I know, but humour me. Imagine you couldn’t use the built-in .filter() or .map() methods. And neither could you write your own versions using for-loops. What would we do?

This situation wouldn’t faze us too much, because we know that we can use .reduce() to do the job of both .filter() and .map(). Here’s how that might look:

// Helper functions
// ----------------
function isFound(item) { return item.found; }
function getPopularity(item) { return item.popularity; }

function filterFoundReducer(foundItems, item) {
  return isFound(item) ? foundItems.concat([item]) : foundItems;
}

function mapPopularityReducer(scores, item) {
  return scores.concat([getPopularity(item)]);
}

// We use an object to keep track of multiple values in a single return value.
function addScores({totalPopularity, itemCount}, popularity) {
  return {
    totalPopularity: totalPopularity + popularity,
    itemCount: itemCount + 1,
  };
}

// Calculations
// ------------
const initialInfo = {totalPopularity: 0, itemCount: 0};
const popularityInfo = victorianSlang.reduce(filterFoundReducer, [])
  .reduce(mapPopularityReducer, [])
  .reduce(addScores, initialInfo);

// Calculate the average and display.
const {totalPopularity, itemCount} = popularityInfo;
const averagePopularity = totalPopularity / itemCount;
console.log("Average popularity:", averagePopularity);

Notice how we chain .reduce() three times there.
We’ve converted our main calculation so that it uses only .reduce(). The imaginary ban on .filter() and .map() hasn’t stopped us. But if this ban were to continue, we might want to make life easier on ourselves. We could save some effort by creating functions for building reducers. For example, we could create one for making filter-style reducers, and another for creating map-style reducers:

function makeFilterReducer(predicate) {
  return (acc, item) => predicate(item) ? acc.concat([item]) : acc;
}

function makeMapReducer(fn) {
  return (acc, item) => acc.concat([fn(item)]);
}

Nice and simple, aren’t they? If we were to use them on our average calculation problem, it might look like this:

const filterFoundReducer = makeFilterReducer(isFound);
const mapPopularityReducer = makeMapReducer(getPopularity);

But, so what? We’re not any closer to solving the average problem more efficiently. When do we get to the transducers? Well, as Mr Elliott says in his article, transducers are tools for modifying reducers. To put it another way, we can think of a transducer as a function that takes a reducer and returns another reducer. If we were to describe that with Haskell types, it might look something like this:

type Reducer = (a, b) => a
transducer :: Reducer -> Reducer

What that means is: a transducer takes a reducer function as input and transforms it in some way. We give it a reducer, and it gives us another reducer function back. Now, we’ve just modified our average-calculating code so that it only uses reducers. No more .filter() and .map(). Instead, we have three separate reducers. So, we’re still traversing the array three times. But what if, instead of three reducers, we used transducers to combine them into one? We could, for example, take a reducer and modify it so that some items are filtered out. The first reducer still runs, but it just never sees some values.
Or, we could modify a reducer so that every item passed to it was transformed or mapped to a different value. That is, every item is transformed before the reducer sees it. In our case, that might look something like this:

// Make a function that takes a reducer and returns a
// new reducer that filters out some items so that the
// original reducer never sees them.
function makeFilterTransducer(predicate) {
  return nextReducer => (acc, item) => predicate(item) ? nextReducer(acc, item) : acc;
}

// Make a function that takes a reducer and returns a new
// reducer that transforms every item before the original
// reducer gets to see it.
function makeMapTransducer(fn) {
  return nextReducer => (acc, item) => nextReducer(acc, fn(item));
}

Earlier, we made convenience functions for creating reducers. Now, instead, we’ve created convenience functions for changing reducers. Our makeFilterTransducer() function takes a reducer and sticks a filter in front of it. Our makeMapTransducer() function takes a reducer and modifies every value going into it. In our average calculation problem, we have a reducer function at the end, addScores(). We can use our new transducer functions to map and filter the values going into it. We end up with a new reducer that does all our filtering, mapping, and adding in one step. It might look like this:

const foundFilterTransducer = makeFilterTransducer(isFound);
const scoreMappingTransducer = makeMapTransducer(getPopularity);
const allInOneReducer = foundFilterTransducer(scoreMappingTransducer(addScores));

const initialInfo = {totalPopularity: 0, itemCount: 0};
const popularityInfo = victorianSlang.reduce(allInOneReducer, initialInfo);

// Calculate the average and display.
const {totalPopularity, itemCount} = popularityInfo;
const averagePopularity = totalPopularity / itemCount;
console.log("Average popularity:", averagePopularity);

And now, we’ve managed to calculate our average in a single pass. We’ve achieved our goal.
We are still building our solution out of tiny, simple functions. (They don’t get much simpler than isFound() and getPopularity().) But we do everything in a single pass. And notice that we were able to compose our transducers together. If we wanted, we could even string a bunch of them together with compose(). This is why smart people like Mr Elliott and Rich Hickey think they’re so interesting.

There’s a lot more to explore with transducers, though. This is just one specific application. If you want to dive in and start using them in your projects, please take note of a few things first. I’ve used non-standard function names in this article to try to make their purpose clear. For example, I use the argument name nextReducer, where Mr Elliott uses step. As a result, the solution here looks a bit uglier because of the long names; if you read Mr Elliott’s article, he uses more standard names and everything looks a bit more elegant. And as Mr Elliott suggests in his article, it’s (usually) best to use someone else’s transducer library. This is because the version written here has been simplified to help make the concepts clear. In practice, there are several edge cases and rules to handle, and a well-written implementation will take care of that for you.

Transducers in Ramda

Speaking of well-written implementations, Ramda has one built in for processing arrays. I thought I’d show how our problem works in Ramda, because Ramda’s way of doing it is a little bit magical. So magical, in fact, that it’s hard to see what’s going on. But once you get it, it’s brilliant. The thing that stumped me for quite a while is that with Ramda, you don’t need to make transducer factories. We don’t need makeFilterTransducer() or makeMapTransducer(). The reason is that Ramda expects you to use its plain ol’ filter() and map() functions. It does some magic behind the scenes to convert them into a transducer for us. And it does all the work of complying with the reducer rules as well.
So, how would we solve the sample problem with Ramda? Well, we would start by using the [transduce()](https://ramdajs.com/docs/#transduce) function. It takes four parameters:

1. The first is a 'transducer'. But, as we mentioned, we just compose plain old Ramda utilities.
2. Then, we pass a final reducer to transform.
3. And then an initial value.
4. And finally, the array to process.

Here's how our solution might look:

```javascript
import {compose, filter, map, transduce} from 'ramda';

// Our utility functions…
function isFound(item) {
    return item.found;
}

function getPopularity(item) {
    return item.popularity;
}

function addScores({totalPopularity, itemCount}, popularity) {
    return {
        totalPopularity: totalPopularity + popularity,
        itemCount: itemCount + 1,
    };
}

// Set up our 'transducer' and our initial value.
const filterAndExtract = compose(filter(isFound), map(getPopularity));
const initVal = {totalPopularity: 0, itemCount: 0};

// Here's where the magic happens.
const {totalPopularity, itemCount} = transduce(
    filterAndExtract, // Transducer function (Ramda magically converts it)
    addScores,        // The final reducer
    initVal,          // Initial value
    victorianSlang    // The array we want to process
);

// And spit out the average at the end.
const averagePopularity = totalPopularity / itemCount;
console.log("Average popularity:", averagePopularity);
```

One thing to note here is that in compose(), I've written filter() first, then map(). This isn't a mistake. It's a quirk of how transducers work. The order you compose in is reversed from the usual. So filter() is applied before map(). And this isn't a Ramda thing, either. It's all transducers. You can see how it happens if you read the examples above (not the Ramda ones).

One final thing to point out: transducers are not limited to processing arrays. They can work with trees, observables (think RxJS) or streams (see Highland.js). Anything that has some concept of reduce(), really. And that's kind of the dream of functional programming.
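That composition-order quirk is easy to verify with plain functions, no Ramda required. In this sketch, compose2 is a hypothetical two-function composer I've written for illustration:

```javascript
// Sketch of why composed transducers run "outside-in":
// compose2(f, g)(x) = f(g(x)), so the filter transducer wraps
// the map transducer and therefore sees each item first.
const compose2 = (f, g) => x => f(g(x));

const makeFilterTransducer = predicate => nextReducer => (acc, item) =>
  predicate(item) ? nextReducer(acc, item) : acc;
const makeMapTransducer = fn => nextReducer => (acc, item) =>
  nextReducer(acc, fn(item));

const sum = (acc, n) => acc + n;

// Filter listed first: the filter runs before the map.
const filterThenMap = compose2(
  makeFilterTransducer(n => n % 2 === 0), // drop odd numbers first…
  makeMapTransducer(n => n * 10)          // …then scale the survivors
);
const result = [1, 2, 3, 4].reduce(filterThenMap(sum), 0);
console.log(result); // 60: only 2 and 4 survive, mapped to 20 and 40

// Reversing the composition changes the answer, because the map
// now runs first and every n * 10 passes the even check.
const mapThenFilter = compose2(
  makeMapTransducer(n => n * 10),
  makeFilterTransducer(n => n % 2 === 0)
);
const resultReversed = [1, 2, 3, 4].reduce(mapThenFilter(sum), 0);
console.log(resultReversed); // 100
```

So the first transducer in the composition is the outermost wrapper, which is why it processes items first — reversed from ordinary function composition over data.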
We write tiny, simple functions like isFound() and getPopularity(). Then we piece them together with things like transduce() and reduce(). And we end up with powerful, performant programs.

So, to sum up, transducers are great. But they can also be confusing. So if anything I've written here confused you, please send me a tweet and let me know. I'd love to hear about it so I can try to improve the explanation. And of course, if you found it useful/helpful, I'd love to hear about that too.

Free Cheat Sheet

If you found this article interesting, you might like the Civilised Guide to JavaScript Array Methods. It's free for anyone who subscribes to receive updates. Acquire your copy.

Source: Magical, Mystical JavaScript Transducers

    Read at 08:17 am, Jun 6th

  • How I decimated Postgres response times for my SaaS

Last week I rolled out a simple patch that decimated the response time of a Postgres query crucial to Checkly. It quite literally went from an average of ~100ms, with peaks up to 1 second, to a steady 1ms to 10ms. However, that patch was just the last step of a longer journey. This post details those steps and all the stuff I learned along the way. We'll look at how I analyzed performance issues, tested fixes, and how simple Postgres optimizations can have spectacular results. The post is fairly Postgres and Heroku heavy, but should work for other platforms too. Also, I can't believe I just used the word "journey" unironically.

Recognizing the pain

Deciding what parts of your app to optimize for performance can be tricky. So let's start at the beginning. Here's a screenshot of our own Checkly dashboard. This is what a typical user sees when he/she logs in, so it better be snappy.

[Image: Checkly dashboard]

The dash shows a bunch of checks, the last 24 results, the success ratios for 24 hours and one day, and the average response times. All of these data points and metrics are in some way querying the check_results table: one of the most crucial tables in our Postgres backend. To break it down, there are two important API requests on this dashboard that hit this table.

- /check-status is a bit of a weird one. It is a snapshot of the current failing/passing status and the various metrics like success ratios, p95 and average response times at the current time. For all checks! These metrics are calculated on the fly using, you guessed it, the check_results table.
- /results-metrics is more straightforward. It just returns an array of check result objects with most of the fluff stripped out. It has pagination and all the bits you would expect.

You can see these API requests flying by in your developer console.
No secret stuff here.

[Image: XHR calls to the API endpoints making up the Checkly dashboard]

In the screenshot above you can see that the /check-status call took 191ms. This is pretty good, as it includes all aggregates and calculations for 18 checks in total. So roughly 10ms per check. However, this was already in the 1.5 to 2.5 second range before the optimizations discussed in this post. Customers have dashboards with 40+ checks, so their performance was pretty miserable. The metrics from Heroku (where we host the customer facing API) also showed this. Lots of spikes, and a p95 in the seconds.

[Image: Heroku metrics showing bad p95]

The /results-metrics calls are allowed to be much slower. They are essentially lazy loaded to fill out the green pointy bars in the little graph. If one takes a second, that's probably fine. Long story short, the /check-status endpoint was getting too slow. It needed fixing. Onwards!

Analysis

Of course, I had a hunch where I could improve performance. I've been growing Checkly for the past year and know the code base inside out.

- The check_results table was growing. Not just due to more customers joining Checkly, but because we changed retention from 7 days to 30 days for raw check results fairly recently. After the 30 days, results are rolled up into a different table.
- All aggregate calculations are done using a timestamptz column.
- Almost all queries on this table filter either on checkId or on created_at.
- Almost all queries on this table are of the ORDER BY <timestamptz> DESC type.

I first turned to Heroku's own toolset.
With the help of this excellent post by schneems and Heroku's own docs on expensive queries, I ran the heroku pg:outliers command, which parses and presents the data from the pg_stat_statements table.

```
$ heroku pg:outliers
 total_exec_time | prop_exec_time |  ncalls   |  sync_io_time   | query
-----------------+----------------+-----------+-----------------+--------------------------------------------------------------
 49:26:45.374033 | 32.4%          | 3,024,907 | 30:37:17.830208 | select "id", "responseTime", "hasFailures", "hasErrors" from "check_results" where "checkId" = $1 order by "created_at" desc limit $2
 34:14:25.316815 | 22.5%          | 248,647   | 02:04:20.841521 | select "checkId", ROUND($2 * (SUM(CASE WHEN "hasFailures" = $3 THEN $4 ELSE $5 END)::FLOAT / COUNT("hasFailures"))::NUMERIC, $6) AS "successRatio" from "check_results" left join "checks" on "checks"."id" = "check_results"."checkId" where "checks"."accountId" = $1 and check_results.created_at > now() - interval $7 group by "checkId"
```

There it was, the query eating up 32% of execution time. A really simple select query, no JOIN statements, nothing fancy.

```
SELECT "id", "responseTime", "hasFailures", "hasErrors"
FROM "check_results"
WHERE "checkId" = $1
ORDER BY "created_at" DESC
LIMIT $2
```

Notice that the second outlier query also has a statement filtering on created_at in addition to filtering on checkId:

```
WHERE check_results.created_at > NOW() - INTERVAL $7
```

However, this is not a smoking gun. Maybe this percentage of execution time is fine. Maybe it's just a lot of queries that execute really fast?
Luckily, Heroku Postgres has a "slowest execution time" tab in its graphical interface, and it showed this exact query in the second position, just after a somewhat slower but much less crucial query.

[Image: Heroku Postgres slowest execution time]

The query was slow and frequently used. That's probably a smoking gun. 🔫

Two observations on the check_results table at this time:

- It has an index on checkId, which is a foreign key to our checks table. It's used to dig up individual check results and works fine.
- I had an index on created_at (a timestamptz field) because I thought this would help. I was (sort of) wrong.

As I dug into the code base, I saw that I used the startedAt timestamptz field also. Actually, querying for startedAt made a lot more sense in our specific context, as created_at was influenced by how quickly our daemon process can gobble up our feed of check results. This was at most some milliseconds, but still.

- All relevant queries were indeed filtering using WHERE "checkId" = x or WHERE created_at > x.
- All relevant queries were indeed ordering using the ORDER BY timestamptz DESC statement.

So, my early hunches were confirmed by the data and some code digging. My solution hypothesis was that I needed to do two things:

1. Use just one timestamptz field in all relevant queries: startedAt.
2. Add a composite index on (checkId, startedAt DESC).

Note that the order of the fields in the composite index is very important, because the first attribute in the index is crucial for searching and sorting. Of course, you can only be sure a fix works by reproducing the situation and measuring the outcome. It's almost computer science.

Reproducing & fixing

You know what grinds my gears? Queries that are slow in production but fast on test and development. And this was exactly what I had on my hands here. I located the piece of code responsible for this query fairly quickly. Hoorah for thin ORMs!
Its response time was blazing on my dev box and on test. Clearly, the table size was just too small to have any impact. The EXPLAIN ANALYZE output of the query in question also showed a completely different execution plan. So I whipped up a script to insert around 500,000 records into my local check_results table and ran EXPLAIN ANALYZE again on our query.

```
-- before the new index
EXPLAIN ANALYZE(SELECT "id", "responseTime", "hasFailures", "hasErrors", "startedAt"
FROM "check_results"
WHERE "checkId" = '0d97011e-12b6-4bb2-ad17-9b22e5f3aeee'
ORDER BY created_at DESC limit 5000)
...
Sort Key: created_at DESC
Sort Method: top-N heapsort  Memory: 1948kB
-> Index Scan using check_results_checkid_index on check_results
   Index Cond: ("checkId" = '0d97011e-12b6-4bb2-ad17-9b22e5f3aeee'::uuid)
...
Planning time: 0.096 ms
Execution time: 25.353 ms
```

The execution time of 25ms seems fast, but keep in mind the table on my dev box is pretty small compared to the live version. Notice the Sort that happens before the index is actually used. With the new index, it looks as follows:

```
-- after the new index
CREATE INDEX check_results_checkid_startedat_desc_index
ON check_results("checkId", "startedAt" DESC)

EXPLAIN ANALYZE(SELECT "id", "responseTime", "hasFailures", "hasErrors", "startedAt"
FROM "check_results"
WHERE "checkId" = '0d97011e-12b6-4bb2-ad17-9b22e5f3aeee'
ORDER BY "startedAt" DESC limit 5000)
...
Index Scan using check_results_checkid_startedat_desc_index on check_results
Index Cond: ("checkId" = '0d97011e-12b6-4bb2-ad17-9b22e5f3aeee'::uuid)
...
Planning time: 0.125 ms
Execution time: 2.744 ms
```

That's 2.7ms execution time. On my local box, already a 10x reduction! The magic is of course in the composite index, as there is no Sort step.
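To ship an index like this, it can be wrapped in a migration. Here is a hedged sketch of what that might look like with knex.js; the migration object shape and use of schema.raw are my own illustration, not Checkly's actual code:

```javascript
// Hypothetical knex.js migration adding the composite index.
// The index name and columns match the EXPLAIN output above;
// wrapping them in a migration like this is an assumption.
const migration = {
  up: knex =>
    knex.schema.raw(
      'CREATE INDEX check_results_checkid_startedat_desc_index ' +
        'ON check_results("checkId", "startedAt" DESC)'
    ),
  down: knex =>
    knex.schema.raw(
      'DROP INDEX check_results_checkid_startedat_desc_index'
    ),
};
```

Keeping a matching down migration means the index can be rolled back if the plan change misbehaves in production.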
The query never needs to scan either; the index is all it needs, and indices are fast. I wrapped the new index in a knex.js migration and pushed it to our test and later live environment. Let's look at some results. Here are the Heroku metrics again. The p95 dropped from ~3 seconds to 175ms.

[Image: Heroku metrics showing great p95]

Also, the "Slowest execution time" tab now shows a query used in a batch job that takes ~30 seconds. Exactly as I expected, and not annoying to users.

Lessons learned

In the end, the solution was fairly trivial. But most performance fixes are. The trick is in finding the right angle, the right test data and confirming your hypothesis. Also, Postgres and Heroku are pretty awesome. Just saying. 😎

banner image: "Shiki Uta, Natsu no Gaku" (Songs of the Four Seasons: Summer), Kuniteru Utagawa, 1808–1876, Japan.

Tim Nolet is the founder of Checkly. He used to work on big Enterprisey Java stuff almost 20 years ago, but now he is vertically aligning divs most of the time. Actually studied to be an art historian.

Source: How I decimated Postgres response times for my SaaS

    Read at 08:04 am, Jun 6th

  • Windowing wars: React-virtualized vs. react-window - LogRocket Blog

react-window is a complete rewrite of react-virtualized. I didn't try to solve as many problems or support as many use cases. Instead I focused on making the package smaller and faster. I also put a lot of thought into making the API (and documentation) as beginner-friendly as possible.

The above is quoted directly from the react-window GitHub by Brian Vaughn, aka bvaughn, the author of both react-window and react-virtualized (and also a member of the React core team).

TL;DR: react-window is newer, faster, and much lighter, but it doesn't do everything react-virtualized can do. Use react-window if you can, but react-virtualized has a lot of bells and whistles that might be pretty useful to you.

In this article, we'll cover:

- What do these libraries do?
- What does react-window do?
- What does react-virtualized do that react-window doesn't do?
- Which one is best for you? 🚀

Question 1: Do you need windowing?

Both react-window and react-virtualized are libraries for windowing. Windowing (aka virtualizing) is a technique for improving the performance of long lists by only writing the visible portion of your list to the DOM. Without windowing, React has to write your entire list to the DOM before one list item is visible. So if I had around 10,000 list items, I'd have to wait for React to write at least 10,000 <div />s to the DOM before the first item in that list is visible. Ouch.

As a reminder, React internally uses a "virtual DOM" to hold your UI state because the "real" DOM is slow and expensive. By windowing, you can speed up your initial render by avoiding the "real" DOM as much as possible.

Question 2: Do you really need windowing?

Though it can improve performance, windowing is not a silver bullet. Windowing improves performance because it delays writing your entire list to the DOM, but the reality is that those items have to be written to the DOM eventually if you want the user to see them.
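The core of windowing is a small calculation: given the scroll offset, render only the items that intersect the viewport. Here is a framework-free sketch for fixed-height items; the function and parameter names are my own, and this is not react-window's actual implementation:

```javascript
// Which slice of a fixed-height list is visible? Illustrative only.
// overscan renders a little extra above/below to reduce flicker.
function visibleRange({scrollTop, viewportHeight, itemHeight, itemCount, overscan = 1}) {
  const first = Math.floor(scrollTop / itemHeight);
  const last = Math.floor((scrollTop + viewportHeight - 1) / itemHeight);
  return {
    start: Math.max(0, first - overscan),          // first index to render
    end: Math.min(itemCount - 1, last + overscan), // last index to render
  };
}

// 10,000 items, 20px tall, 300px viewport, scrolled 1,000px down:
const range = visibleRange({
  scrollTop: 1000,
  viewportHeight: 300,
  itemHeight: 20,
  itemCount: 10000,
});
console.log(range); // { start: 49, end: 65 }
```

Instead of 10,000 nodes, only about 17 rows live in the DOM at any moment; a windowing library re-runs this calculation on every scroll event and positions the rendered slice absolutely.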
If you don't pay for the rendering time upfront, then you'll have to pay for it later. Sometimes windowing can actually decrease perceived performance, because the user has to wait for each individual list item to load on scroll instead of waiting for one eager load of the entire list on mount. In the demo above, notice how the list in the windowed version appears faster, but the non-windowed version feels faster when you're scrolling through it. The windowed version appears faster because it delays rendering the whole list, but it feels slower/looks janky when scrolling fast because it's loading and unloading list items to the DOM.

Whether or not to window greatly depends on your situation and what's important to you:

| | No windowing | Windowing |
| --- | --- | --- |
| Initial load time | ⚠️ Depends on the list size | ✅ (near) Instant |
| List item load time | ✅ (near) Instant | ⚠️ Depends on complexity of the item |
| DOM manipulation occurs | ⚠️ On initial render | ⚠️ On scroll |

In general, I would not recommend windowing if you don't have to. I've made the mistake of windowing when it was unnecessary, and the end result was a slower list that took longer to make and was significantly more complex 😓. Both react-window and react-virtualized are great libraries that make windowing as easy as can be, but they also introduce a few constraints on your UI. Before you window, try making your list normally and see if your environment can handle it. If you're having performance issues, then continue on.

Question 3: Is react-window good enough for you?

As stated by the author of both react-window and react-virtualized, react-window doesn't solve as many problems and doesn't support as many use cases. This might make you think react-window won't solve your problem, but that's not necessarily the case. react-window is just a lighter core with a simpler philosophy.
react-window can still support many of the same use cases as react-virtualized, but it's your responsibility as a developer to use react-window as a building block instead of reaching for react-virtualized for every use case. react-window is just a library that virtualizes lists and grids. That's why it's more than 15 times smaller. Quoting bvaughn again:

Adding a react-virtualized list to a [create-react-app] project increases the (gzipped) build size by ~33.5 KB. Adding a react-window list to a CRA project increases the (gzipped) build size by <2 KB.

Out of the box, react-window only has four components. This is vastly different from the 13 components react-virtualized has: virtualized collection types, plus helpers/decorators for those collection types.

As a general rule of thumb, you should be able to use react-window in place of react-virtualized for tables, lists, and grids. However, you can't use react-window for anything else, including masonry layouts and any other 2-D layouts that don't fit a grid. Here are some demos of using react-window to achieve the same results as react-virtualized:

- Dynamic container sizing (AutoSizer)
- Dynamic item sizing (CellMeasurer). Note: there are some caveats to this approach (as there are caveats to using the actual CellMeasurer in react-virtualized). This cell measurer has to render the contents of the item twice: once to size it, and then once inside the list. It also requires the node to be rendered synchronously with react-dom, so complex list items may seem to appear slower when scrolling.
- Infinite loading (InfiniteLoader), taken directly from the react-window-infinite-loader package
- Arrow key navigation (ArrowStepper)
- Scroll-synced multigrids (MultiGrid + ScrollSync)

Question 4: Should you use react-virtualized anyway?

Quoting from the react-window GitHub again:

If react-window provides the functionality your project needs, I would strongly recommend using it instead of react-virtualized.
However, if you need features that only react-virtualized provides, you have two options:

1. Use react-virtualized. (It's still widely used by a lot of successful projects!)
2. Create a component that decorates one of the react-window primitives and adds the functionality you need. You may even want to release this component to npm (as its own, standalone package)! 🙂

So there's that! react-virtualized is still a great project, but it may do more than you need. However, I would recommend using react-virtualized over react-window if:

- You're already using react-virtualized in your project/on your team. If it ain't broke, don't fix it; more importantly, don't introduce unnecessary code changes.
- You need to virtualize a 2-D collection that is not a grid. This is the only use case that react-virtualized handles that react-window has no support for.
- You want a pre-built solution. react-virtualized has code demos for all its use cases, while react-window just provides virtualized list primitives so you can build off them. If you want docs and pre-made examples with more use cases, then the heavier react-virtualized is for you.

Bottom line

- react-window: newer and faster virtualized list primitives. Use react-window as your virtualized list building block to satisfy your specific use case without bringing in a lot of unnecessary code.
- react-virtualized: a heavier all-in-one that solves, and provides docs/examples for, many use cases, including virtualizing collections that are not grids (e.g., masonry layouts). react-virtualized is still a great library but probably does more than you need it to.

Plug: LogRocket, a DVR for web apps

LogRocket is a frontend logging tool that lets you replay problems as if they happened in your own browser. Instead of guessing why errors happen, or asking users for screenshots and log dumps, LogRocket lets you replay the session to quickly understand what went wrong.
It works perfectly with any app, regardless of framework, and has plugins to log additional context from Redux, Vuex, and @ngrx/store. In addition to logging Redux actions and state, LogRocket records console logs, JavaScript errors, stacktraces, network requests/responses with headers + bodies, browser metadata, and custom logs. It also instruments the DOM to record the HTML and CSS on the page, recreating pixel-perfect videos of even the most complex single-page apps. Try it for free. Source: Windowing wars: React-virtualized vs. react-window – LogRocket Blog

    Read at 07:54 am, Jun 6th

  • Joost de Valk Steps Down as WordPress Marketing Lead – WordPress Tavern

Joost de Valk has announced that he's stepped down from the WordPress Marketing and Communications Lead role. The position was created and awarded to de Valk earlier this year. Not only was it a new position, but it also expanded the leadership roles in the WordPress project.

Despite making progress, de Valk didn't feel as though he was fulfilling the leadership aspect of his role. "My experience over the last few months made me feel that while I was doing things and getting things done, I certainly wasn't leadership. I don't want to pretend I have a say in things I don't have a say in," he said.

Not having a clear definition of what marketing means, and not having people within the project on the same page, contributed to his decision. "There's a stark difference between where I thought I would be in the organization in this role, and where I am actually finding myself now," de Valk said. "Even things that every outsider would consider marketing (release posts, about pages) are created without even so much as talking to me or others in the marketing team. Because I felt left out of all these decisions, I feel I can't be a marketing lead."

He also cited a lack of clarity surrounding his position. "I've been asked dozens of times on Twitter, Facebook and at WordCamps why I now work for Automattic, which of course I don't but that is the perception for a lot of people," he said. "On other occasions, I seem to be the token non-Automattician, which I'm also uncomfortable with."

Because the role had taken a toll on him, de Valk plans to take an extended vacation during the summer and, when he returns, to focus 100% of his efforts on Yoast and his Chief Product Officer role. Matt Mullenweg commented on de Valk's article, thanking him for being willing to try new things and for his passion, impatience, and drive to improve WordPress.
Source: Joost de Valk Steps Down as WordPress Marketing Lead – WordPress Tavern

    Read at 07:49 am, Jun 6th

  • Cat Declawing Ban Is Passed by N.Y. Lawmakers - The New York Times

The measure is for people who “think their furniture is more important than their cat,” a supporter said. Gov. Andrew M. Cuomo would have to sign it.

[Image: Meow. Credit: Dumitru Doru/EPA, via Shutterstock]

ALBANY — New York lawmakers on Tuesday passed a ban on cat declawing, putting the state on the cusp of being the first to outlaw the procedure. The bill, which had been fought for several years by some veterinary groups, would outlaw several types of declawing surgeries except in cases of medical necessity, and forbid any such surgeries for “cosmetic or aesthetic reasons.”

The Assembly sponsor, Linda Rosenthal, a Manhattan Democrat, said those reasons include pet owners who “think their furniture is more important than their cat.” “It’s unnecessary, it’s painful, and it causes the cat problems,” said Ms. Rosenthal, who owns two fully clawed cats, Kitty and Vida. “It’s just brutal.”

New York State joins several cities in banning declawing, including Los Angeles and Denver; several other states, including California, New Jersey and Massachusetts, are also considering bans, according to the Humane Society of the United States, which hailed the New York bill. “Declawing is a convenience surgery, with a very high complication rate, that offers no benefit to the cat,” said Brian Shapiro, the group’s New York director, adding that the procedure causes “an increase in biting and litter-box avoidance, which often results in the cat being surrendered to an animal shelter.”

The declawing bill now awaits the signature of Gov. Andrew M. Cuomo (who is really more of a dog guy); on Tuesday, he said his office would review it. If the bill becomes law, those who violate it could face a $1,000 fine. The bill was passed during an annual and somewhat rare rite of Albany bipartisanship: Animal Advocacy Day, when pet owners and their animal masters flood the Capitol, and Democrats and Republicans join forces to praise each other’s legislation and dote on each other’s pets.
Albany considers whole packs of animal bills each year; there are currently more than a dozen, for example, dealing with dogs, ranging from raising penalties for theft to establishing tax credits for adopting an animal. The State Senate itself passed nine animal-related bills on Tuesday, including bills to require pet stores to have fire protection systems and to increase the fines for people who leave their dogs outside without “adequate shelter.”

The action on cats also came as the Legislature ground toward the scheduled end of the year’s legislative session on June 19. After the election of a fully Democratic-controlled Legislature in November, there was a flurry of activity earlier this year, with major bills on abortion rights, gun control and election reforms. But that pace has slackened in recent months, and Mr. Cuomo has spent much of the last week chiding the Legislature for inaction on issues like legalizing marijuana and renewing rent regulations. Legislative leaders, meanwhile, have defended their work and striven to convey a unified front, issuing a joint statement last week promising to pass “the strongest rent package ever” — and making no mention of Mr. Cuomo.

The cat bill faced no such friction on Tuesday, despite ardent opposition from groups like the New York Veterinary Medical Society, which had argued that declawing should be allowed “when the alternative is abandonment or euthanasia.” The society had also suggested that some cats were declawed — a process formally known as onychectomy — by owners who suffered from diseases like hemophilia, diabetes or immune disorders. “Cats that would lose their home if not declawed face a higher risk of euthanasia than if their owner were able to care for them,” the society said in a statement released in late May.
“They also exchange a life of comfort and care to potentially spend years in conditions that may be far from ideal for long-term living.”

Backers of the ban, however, said that the procedure causes intense, lasting pain for the animal, and likened it to mutilation. “It’s the equivalent of severing a finger at the first knuckle,” said State Senator Michael N. Gianaris, the Queens Democrat who serves as the chamber’s deputy leader. “It’s said that a society can be judged by the way it treats its animals, and by allowing this practice to continue, we have not been setting a good example. Today we can move that in the right direction.”

And for once, Mr. Gianaris’s Republican counterpart agreed. “Animals give us unconditional love,” said State Senator James Tedisco, a Republican who represents a sizable chunk of the Adirondacks and brought his pet corgi, Grace, to the Capitol. “I think that this is the most nonpartisan day we have in the New York State Legislature.”

There were some questions raised by lawmakers during a debate in the Assembly, including from Brian Manktelow, a Republican assemblyman from the Finger Lakes region, who said that declawing should be “a medical decision, not a legislative decision.” He also raised the specter of New Yorkers traveling to other states to have the procedure done. Ms. Rosenthal suggested that New York would, in fact, inspire other states to pass such bans. “There is really never a good reason for a cat to be declawed,” she said, “from the cat’s point of view.”

Jesse McKinley is The Times’s Albany bureau chief. He was previously the San Francisco bureau chief, and a theater columnist and Broadway reporter for the Culture Desk. A version of this article appears in print on Page A21 of the New York edition with the headline: “United by Pets, Albany Bans Declawing of Cats.”

Source: Cat Declawing Ban Is Passed by N.Y. Lawmakers – The New York Times

    Read at 07:46 am, Jun 6th

  • When to useMemo and useCallback

June 04, 2019

Performance optimizations ALWAYS come with a cost but do NOT always come with a benefit. Let's talk about the costs and benefits of useMemo and useCallback.

[Interactive demo: a candy dispenser with buttons to grab snickers, skittles, twix, and milky way.]

The implementation is shown in an interactive editor in the original post. Now I want to ask you a question, and I want you to think hard about it before moving forward. I'm going to make a change to this, and I want you to tell me which will have the better performance characteristics. The only thing I'm going to change is to wrap the dispense function inside React.useCallback:

```javascript
const dispense = React.useCallback(candy => {
  setCandies(allCandies => allCandies.filter(c => c !== candy))
}, [])
```

Here's the original again:

```javascript
const dispense = candy => {
  setCandies(allCandies => allCandies.filter(c => c !== candy))
}
```

So here's my question: in this specific case, which of these is better for performance? Go ahead and submit your guess (this is not recorded anywhere). Let me give you some space to not spoil the answer for you... Keep scrolling... You did answer, didn't you? There, that should do it...

Why is useCallback worse?!

We hear a lot that you should use React.useCallback to improve performance and that "inline functions can be problematic for performance," so how could it ever be better to not useCallback? Just take a step back from our specific example, and even from React, and consider this: every line of code which is executed comes with a cost. Let me refactor the useCallback example a bit (no actual changes, just moving things around) to illustrate things more clearly:

```javascript
const dispense = candy => {
  setCandies(allCandies => allCandies.filter(c => c !== candy))
}
const dispenseCallback = React.useCallback(dispense, [])
```

And here's the original again:

```javascript
const dispense = candy => {
  setCandies(allCandies => allCandies.filter(c => c !== candy))
}
```

Notice anything about these?
Let's look at the diff:

```javascript
  const dispense = candy => {
    setCandies(allCandies => allCandies.filter(c => c !== candy))
  }
+ const dispenseCallback = React.useCallback(dispense, [])
```

Yeah, they're exactly the same, except the useCallback version is doing more work. Not only do we have to define the function, but we also have to define an array ([]) and call React.useCallback, which is itself setting properties, running through logical expressions, etc.

So in both cases, JavaScript must allocate memory for the function definition on every render, and depending on how useCallback is implemented, you may get more allocation for function definitions (this is actually not the case, but the point still stands). This is what I was trying to get across with my Twitter poll: "Assuming this code appears in a React function component, how many function allocations are happening with this code on each render? const a = () => {} And how many are happening with this code? const a = useCallback(() => {}, [])" — Kent C. Dodds (@kentcdodds) June 4, 2019. Granted, several people told me that was worded poorly, so my apologies if you got the wrong answer but actually knew the correct one.

I'd also like to mention that on the second render of the component, the original dispense function gets garbage collected (freeing up memory space) and a new one is created. With useCallback, however, the original dispense function won't get garbage collected and a new one is still created, so you're worse off from a memory perspective as well.

As a related note, if you have dependencies, it's quite possible React is hanging on to a reference to previous functions, because memoization typically means that we keep copies of old values to return in the event we get the same dependencies as given previously.
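React's actual hook storage is more involved, but the retention described above can be sketched in plain JavaScript. (useCallbackLike and makeUseCallbackLike are hypothetical names for illustration, not React APIs.)

```javascript
// Simulate one component instance's useCallback slot: the memoized
// function (and its deps) stays referenced across renders, while the
// non-memoized inline function from each render can be collected.

function depsEqual(a, b) {
  return a.length === b.length && a.every((d, i) => Object.is(d, b[i]))
}

function makeUseCallbackLike() {
  let memoized // simulates React's per-hook state
  return function useCallbackLike(fn, deps) {
    if (memoized && depsEqual(memoized.deps, deps)) {
      return memoized.fn // render 1's function is still alive here
    }
    memoized = { fn, deps }
    return fn
  }
}

const useCallbackLike = makeUseCallbackLike()

// Two "renders": a new inline function is allocated each time either way,
// but the memoized version hands back the first allocation.
const first = useCallbackLike(() => 'dispense', [])
const second = useCallbackLike(() => 'dispense', [])
console.log(first === second) // true: render 2's function is discarded, render 1's is kept
```

Note that with empty deps the first function is retained for the life of the component, which is exactly the memory cost the paragraph above describes.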
The especially astute among you will notice that this means React also has to hang on to a reference to the dependencies for this equality check (which is probably happening anyway thanks to your closure, but it's worth mentioning).

How is useMemo different, but similar?

useMemo is similar to useCallback except it allows you to apply memoization to any value type (not just functions). It does this by accepting a function which returns the value; that function is then only called when the value needs to be retrieved (which typically will only happen once each time an element in the dependencies array changes between renders).

So, if I didn't want to initialize that array of initialCandies on every render, I could make this change:

```javascript
- const initialCandies = ['snickers', 'skittles', 'twix', 'milky way']
+ const initialCandies = React.useMemo(
+   () => ['snickers', 'skittles', 'twix', 'milky way'],
+   [],
+ )
```

That would avoid the problem, but the savings are so minimal that the cost of making the code more complex just isn't worth it. In fact, it's probably worse to use useMemo here, because again we're making a function call, and that code is doing property assignments, etc.

In this particular scenario, what would be even better is to make this change:

```javascript
+ const initialCandies = ['snickers', 'skittles', 'twix', 'milky way']
  function CandyDispenser() {
-   const initialCandies = ['snickers', 'skittles', 'twix', 'milky way']
    const [candies, setCandies] = React.useState(initialCandies)
```

But sometimes you don't have that luxury, because the value is either derived from props or from other variables initialized within the body of the function.

The point is that it doesn't matter either way. The benefit of optimizing that code is so minuscule that your time would be WAY better spent making your product better.

What's the point?

The point is this: performance optimizations are not free.
They ALWAYS come with a cost but do NOT always come with a benefit to offset that cost. Therefore, optimize responsibly.

So when should I useMemo and useCallback?

There are specific reasons both of these hooks are built into React:

1. Referential equality
2. Computationally expensive calculations

Referential equality

If you're new to JavaScript/programming, it won't take long before you learn why this is the case:

```javascript
true === true // true
false === false // true
1 === 1 // true
'a' === 'a' // true

{} === {} // false
[] === [] // false
() => {} === () => {} // false

const z = {}
z === z // true
```

I'm not going to go too deep into this, but suffice it to say that when you define an object inside your React function component, it is not going to be referentially equal to the last time that same object was defined (even if it has all the same properties with all the same values).

There are two situations where referential equality matters in React; let's go through them one at a time.

Dependencies lists

Let's review an example. (Warning: you're about to see some seriously contrived code. Please don't nit-pick it; just focus on the concepts.) The post's example builds an options object from bar and baz props and passes it to useEffect's dependencies list.

The reason this is problematic is that useEffect is going to do a referential equality check on options between every render, and thanks to the way JavaScript works, options will be new every time. So when React tests whether options changed between renders, it'll always evaluate to true, meaning the useEffect callback will be called after every render rather than only when bar and baz change.

There are two things we can do to fix this. The first is to move the object out of the component (or into the effect callback) so it isn't re-created on every render; that's a great option, and if this were a real thing, that's how I'd fix it. But there's one situation when that isn't a practical solution: if bar or baz are themselves (non-primitive) objects/arrays/functions/etc. This is precisely the reason why useCallback and useMemo exist.
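The shallow dependency check described above can be sketched in plain JavaScript. (depsChanged is a hypothetical helper modeled on React's behavior, not its actual implementation.)

```javascript
// React compares each dependency with Object.is; an effect re-runs
// whenever any entry in the list has changed since the last render.
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null) return true // first render: always run
  return !nextDeps.every((dep, i) => Object.is(dep, prevDeps[i]))
}

// A freshly-built options object is a new reference on every render,
// so the effect re-runs every time even though nothing meaningful changed.
const renderA = { options: { bar: 'bar', baz: 'baz' } }
const renderB = { options: { bar: 'bar', baz: 'baz' } }
console.log(depsChanged([renderA.options], [renderB.options])) // true: re-runs

// Depending on the primitives directly: Object.is sees them as equal,
// so the effect only re-runs when bar or baz actually change.
console.log(depsChanged(['bar', 'baz'], ['bar', 'baz'])) // false: skipped
```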
So here's how you'd fix that (all together now): memoize the options object with useMemo (or, if it's a function, with useCallback) so it only changes when bar or baz change. Note that this same thing applies to the dependencies array passed to useEffect, useLayoutEffect, useCallback, and useMemo.

React.memo (and friends)

(Warning: you're about to see some more contrived code. Again, please don't nit-pick it; focus on the concepts.) Check this out: the post's DualCounter component renders two CountButtons, each receiving its count and an increment handler as props.

Every time you click either of those buttons, the DualCounter's state changes, so it re-renders, which in turn re-renders both of the CountButtons. However, the only one that actually needs to re-render is the one that was clicked, right? So if you click the first one, the second one gets re-rendered, but nothing changes. We call this an "unnecessary re-render."

MOST OF THE TIME YOU SHOULD NOT BOTHER OPTIMIZING UNNECESSARY RE-RENDERS. React is VERY fast, and there are so many things I can think of for you to do with your time that would be better than optimizing things like this. In fact, the need to optimize with what I'm about to show you is so rare that I've literally never needed to do it in the three years I worked on PayPal products and the even longer time that I've been working with React.

However, there are situations when rendering can take a substantial amount of time (think highly interactive graphs/charts/animations/etc.). Thanks to the pragmatic nature of React, there's an escape hatch: React.memo. Wrap CountButton in it, and now React will only re-render CountButton when its props change! Woo! But we're not done yet. Remember that whole referential equality thing?
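That bail-out hinges on a shallow, reference-based comparison of props, roughly sketchable in plain JavaScript as follows. (propsEqual is a hypothetical helper for illustration, not React's internal implementation.)

```javascript
// A React.memo-style component is skipped only when every prop is
// Object.is-equal to the previous render's prop.
function propsEqual(prev, next) {
  const prevKeys = Object.keys(prev)
  const nextKeys = Object.keys(next)
  if (prevKeys.length !== nextKeys.length) return false
  return prevKeys.every(key => Object.is(prev[key], next[key]))
}

// Inline handlers: a new function every render, so the memoized
// component re-renders anyway.
const render1 = { onClick: () => {}, count: 0 }
const render2 = { onClick: () => {}, count: 0 }
console.log(propsEqual(render1, render2)) // false: re-renders anyway

// A useCallback-style stable handler: same reference, re-render skipped.
const stableHandler = () => {}
console.log(propsEqual(
  { onClick: stableHandler, count: 0 },
  { onClick: stableHandler, count: 0 },
)) // true
```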
In the DualCounter component, we're defining the increment1 and increment2 functions within the component function, which means that every time DualCounter re-renders, those functions will be new, and therefore React will re-render both of the CountButtons anyway.

So this is the other situation where useCallback and useMemo can be of help: wrap increment1 and increment2 in React.useCallback so they keep the same identity between renders. Now we can avoid the so-called "unnecessary re-renders" of CountButton.

I would like to reiterate that I strongly advise against using React.memo (or its friends PureComponent and shouldComponentUpdate) without measuring, because those optimizations come with a cost, and you need to make sure you know what that cost will be as well as the associated benefit, so you can determine whether it will actually be helpful (and not harmful) in your case. As we observed above, it can be tricky to get right all the time, so you may not be reaping any benefits at all anyway.

Computationally expensive calculations

This is the other reason that useMemo is a built-in hook for React (note that this one does not apply to useCallback). The benefit of useMemo is that you can take a value like {b: props.b} and get it lazily:

```javascript
const a = React.useMemo(() => ({b: props.b}), [props.b])
```

That isn't really useful for a case this simple, but imagine that you've got a function that synchronously calculates a value which is computationally expensive to calculate (I mean, how many apps actually need to calculate prime numbers like this, ever? But it's an example).

That could be pretty slow given the right iterations or multiplier, and there's not much you can do about that specifically. You can't automagically make your user's hardware faster. But you can make it so you never have to calculate the same value twice in a row, which is what useMemo will do for you.

The reason this works is that even though you're defining the function to calculate the primes on every render (which is VERY fast), React is only calling that function when the value is needed.
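Outside React, that "remember the last inputs" trick reads like this in plain JavaScript. (memoizeOne is a hypothetical helper, not React's implementation; the arithmetic stands in for a slow prime-number calculation.)

```javascript
// Cache only the most recent inputs/result, the same policy useMemo uses.
function memoizeOne(fn) {
  let lastDeps = null
  let lastValue
  return function (deps) {
    const same =
      lastDeps !== null &&
      lastDeps.length === deps.length &&
      deps.every((d, i) => Object.is(d, lastDeps[i]))
    if (!same) {
      lastValue = fn(...deps) // only recompute when an input changed
      lastDeps = deps
    }
    return lastValue
  }
}

let calls = 0
const expensive = (iterations, multiplier) => {
  calls += 1 // stands in for the slow calculation
  return iterations * multiplier
}

const memoized = memoizeOne(expensive)
memoized([10, 3]) // computed
memoized([10, 3]) // same inputs: cached value returned
memoized([20, 3]) // input changed: recomputed
console.log(calls) // 2
```

Note that only the last inputs are kept, so alternating between two inputs recomputes every time; that's the trade-off that keeps the memory cost bounded.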
On top of that, React also stores previous values given the inputs and will return the previous value given the same inputs. That's memoization at work.

Conclusion

I'd just like to wrap this up by saying that every abstraction (and performance optimization) comes at a cost. Apply the AHA Programming principle and wait until the abstraction/optimization is screaming at you before applying it, and you'll save yourself from incurring the costs without reaping the benefit.

Specifically, the costs of useCallback and useMemo are that you make the code more complex for your co-workers, you could make a mistake in the dependencies array, and you're potentially making performance worse by invoking the built-in hooks and preventing dependencies and memoized values from being garbage collected. Those are all fine costs to incur if you get the necessary performance benefits, but it's best to measure first.

P.S. If you're among the few who worry that the move to hooks forces us to define functions within our function components where we used to define them as methods on our classes, I would invite you to consider the fact that we've been defining functions in the render phase of our components since day one — for example, every inline arrow function passed to onClick in a class component's render method. Source: When to useMemo and useCallback

    Read at 07:36 am, Jun 6th

  • JAMstack? More like SHAMstack. | CSS-Tricks

    I'm a fan of the whole JAMstack thing. It seems like a healthy web movement. I'm looking forward to both of the upcoming conferences. Of any web trend, #jamstack seems like it will be the least regrettable. — Chris Coyier (@chriscoyier) May 22, 2019 I feel like the acronym might not be quite doing it justice though. Not that I suggest we change it. Once a thing like that has legs, I find it's best to roll with it. Same deal with serverless. Heck, the name of this website is pretty... not great. To me, the most important part of JAMstack is rooted in the concept of static file hosting. Static file hosting is the foundation of all the power. It opens up a bunch of doors, like: Everything can be CDN-hosted. "The edge," as they say. Even the HTML (the M in JAMStack also refers to Markup) can be CDN-hosted, which you otherwise can't do. That gives you an amazing base of speed that encourages you to keep that speed as you build. The project feels easier to work with. Git clone, npm install, build. Deployments are git pushes of a dist folder. It's so cool, for example, Netlify gives you a URL for every build, even on branches you're working on. This is made possible by deploys being kind of immutable. A set of files at a particular point in time. Cloud functions are awesome. Because you don't have a traditional server-side language to reach for, you build it with cloud functions when you do need server-side code — which is a cool way to architect things anyway and is very spiritually connected to all this. Don't think, "Oh, JAMstack is just for Jekyll blogs," or whatever. True, static site generators are extremely JAMstack-y, and JAMstack highly encourages as much prebuilt markup as possible (which is good for speed and SEO and all that), but pre-built markup isn't a requirement. 
I'd go so far as to say that a client-side JavaScript-powered app that ships a <div id="root"></div> and a bundle of JavaScript that hits APIs and builds out the site is still a JAMstack site. It's still statically hosted (probably) with cloud functions serving up data. I’d say “yes”. Perhaps a little more SSR would be good for all the reasons but meh, not required for a jamstack merit badge. — Chris Coyier (@chriscoyier) May 22, 2019 But as long as you're JAMStack anyway, that encourages you to put more in those static files. In that way, it encourages static content as well, when possible. I'd say "server-side rendered" (SSR) as that's the common term, but it's beyond that. It's not a server-side language generating the markup on request; it's built in a build step ahead of time, before deployment. Again, it's not required, just encouraged. So, we've got static-hosted HTML, and all our other files (e.g. CSS, images, etc.) are also static. Then: The J of JAMstack is JavaScript. The A of JAMstack is APIs. They are sorta kinda the same thing. Your JavaScript files are statically hosted. They run, and they talk to APIs if they need to. A common example might be a GraphQL endpoint coughing up some content. An interesting twist here is that you can half-and-half this stuff. In other words, you can pre-build some of the markup, and wait for JavaScript and API calls for other parts. Imagine an e-commerce site with a homepage and a dozen other pages you can pre-build entirely, but then a catalog of thousands of products that would be too impractical to statically generate (too slow). The product pages are just a single scaffolded template that fleshes itself out with client-side API calls. So, if we were to make a new acronym, perhaps we'd include Static Hosting in there and combine the JavaScript and APIs into just APIs, leaving us with... Static Hosting, APIs, and Markup, or the SHAMstack. Errrrr 😬 maybe not. Source: JAMstack? More like SHAMstack. | CSS-Tricks

    Read at 07:28 am, Jun 6th

  • Joe Biden Tried to Cut Contraception Coverage in Obamacare

As vice president, Joe Biden repeatedly sought to undermine the Affordable Care Act’s contraception mandate, working in alliance with the U.S. Conference of Catholic Bishops to push for a broad exemption that would have left millions of women without coverage. Biden’s battle over contraception is a window into his approach to the politics of reproductive freedom, a function of an electoral worldview that centers working-class Catholic men over the interests of women. The issue has been causing his presidential campaign some discomfort — on Wednesday, Biden’s campaign clarified that he remains a supporter of the so-called Hyde Amendment, a provision that bars federal money from being used to fund reproductive health services. Biden had recently told an activist with the ACLU that he opposed the amendment, and wanted to see it repealed. On contraception, according to contemporaneous reporting and to sources involved with the internal debate, Biden had argued that if the regulations implementing the Affordable Care Act were going to mandate coverage, it would anger white, male Catholic voters, and threaten President Obama’s reelection in 2012. Biden’s main ally in the internal fight over contraception was Chief of Staff William Daley; both men are Catholic. Opposing Biden was a faction of mostly women advisers, joined by some men, who argued that Biden had both the policy and the politics wrong. On policy, they noted that if his broad exemption went into effect, upwards of six million women who happened to be employed by religious-affiliated organizations would lose contraception coverage. The politics were just as bad, they argued, given that women were increasingly becoming central to the party’s success. To turn on them on the issue of access to birth control — embracing a fringe position not even adopted by most Catholics who aren’t bishops — would put that support at risk.
Biden has long said that he is personally opposed to abortion, but supports the legal right. His support of Roe v. Wade has not always been full throated. “When it comes to issues like abortion, amnesty, and acid, I’m about as liberal as your grandmother,” Biden said in a June 1974 article. “I don’t like the Supreme Court decision on abortion. I think it went too far. I don’t think that a woman has the sole right to say what should happen to her body.” Because Biden’s anti-Roe comments came so long ago — more than four decades — some have argued they are of little value in gauging his current politics. But his battle against contraception, and his unwillingness to join the bulk of the Democratic field and call for the repeal of the Hyde Amendment, puts him dramatically out of step with today’s party. Biden is so out of step, in fact, that when he was shown polling data during the contraception fight, he dismissed it as inaccurate. He has a view of the American electorate’s politics on abortion that can’t be influenced by new facts. Jake Tapper, then reporting for ABC News, reported in February 2012: The two sides couldn’t even agree about what they were debating. In the fall, [Planned Parenthood head Cecile] Richards brought in polling indicating that the American people overwhelmingly supported the birth control benefit in health insurance. She also highlighted statistics showing the overwhelming use of birth control. The Vice President and others argued that this wouldn’t be seen as an issue of contraception — it would be seen as an issue of religious liberty. They questioned the polling of the rule advocates, arguing that it didn’t explain the issue in full, it ignored the question of what religious groups should have to pay for. 
And they argued that women voters for whom this was an important issue weren’t likely to vote for Mitt Romney, who has drawn a strong anti-abortion line as a presidential candidate, saying he would end federal funding to Planned Parenthood and supporting a “personhood” amendment that defines life as beginning at the moment of fertilization. Similarly, Mike Dorning and Margaret Talev reported: “Vice President Joe Biden and then-White House chief of staff Bill Daley, also Catholics, warned that the mandate would be seen as a government intrusion on religious institutions. Even moderate Catholic voters in battleground states might be alienated, they warned, according to the people familiar with the discussions.” It was, ultimately, public anger that led to Biden and Daley’s defeat on the issue. On January 31, 2012, as the administration was finalizing its policy, it was reported that Susan G. Komen for the Cure had cut its funding of Planned Parenthood, in a push led by abortion foe Karen Handel. The fury over the decision stunned the organization, which backtracked and apologized within a week, as Planned Parenthood raised hundreds of thousands of dollars from angered supporters of abortion rights. “We want to apologize to the American public for recent decisions that cast doubt upon our commitment to our mission of saving women’s lives,” Komen said in a statement on February 3, 2012. The White House watched the affair unfold closely, and the blowback punctured the mythology that there is no real public support for abortion rights. It also sent a signal that if the White House backtracked on access to contraception, it could expect a livid response. The exemption that was ultimately granted, on February 10, was a very narrow one, frustrating the bishops. In his vice presidential debate with Mitt Romney’s running mate, Paul Ryan, Biden attempted to portray it as a broad exemption.
“No religious institution — Catholic or otherwise, including Catholic social services, Georgetown hospital, Mercy hospital, any hospital — none has to either refer contraception, none has to pay for contraception, none has to be a vehicle to get contraception in any insurance policy they provide. That is a fact,” Biden said in the debate. In a rare public disagreement with Biden, the Conference of U.S. Bishops shot back with a statement, accurately saying that Biden’s claim was “not a fact.” Indeed, many religious-affiliated entities that had hoped to win an exemption, and which had Biden’s support inside the White House, had failed. But with Biden now a front-runner for the Democratic nomination for president, they may get another shot at denying access to contraception to their employees. Ryan Grim is the author of We’ve Got People: From Jesse Jackson to Alexandria Ocasio-Cortez, the End of Big Money and the Rise of a Movement.Source: Joe Biden Tried to Cut Contraception Coverage in Obamacare

    Read at 07:02 am, Jun 6th

Day of Jun 5th, 2019

  • React-Redux with TypeScript

    TypeScript is a great language to choose if you are a JavaScript developer and interested in moving towards a statically typed language. Using TypeScript is such a logical move for developers that…

    Read at 10:41 pm, Jun 5th

  • Atlanta Police Try and Shut Down #GunsDownWaterGunsUp But Movement Spreading

    A self-organized movement dubbed, #GunsDownWaterGunsUp has spread to several cities in Georgia and over the next week is poised to branch out into other states.

    Read at 10:24 pm, Jun 5th

  • Revealed: air pollution may be damaging ‘every organ in the body’

    Schraufnagel is concerned that many doctors are unaware of this wide-ranging damage associated with air pollution. “Some have no idea air pollution affects the organs they specialise in. But it affects their organs too and they had better pay attention,” he said.

    Read at 10:17 pm, Jun 5th

  • https://www.valentinog.com/blog/engines/

    Read at 10:12 pm, Jun 5th

  • Anti-Abortion Lawmakers Have No Idea How Women’s Bodies Work

    Last night, the Alabama Senate voted to make abortion illegal from the moment of conception, punishable by 99 years in prison, with no exceptions for rape or incest. It will be the most extreme…

    Read at 10:04 pm, Jun 5th

  • Easy Automatic npm Publishes

    One common question from people using npm to publish, especially on CI systems, is how best to automate the process, especially when dealing with multiple branches.

    Read at 10:00 pm, Jun 5th

  • Christopher Hitchens Debates Jon Stewart on Iraq

    Christopher Hitchens debates Jon Stewart on Iraq.

    Read at 09:39 pm, Jun 5th

  • What Religious Socialists Bring to Our Table

    Adapted from a presentation at the launch event for Social Democracy in the Making: Political and Religious Roots of European Socialism by Gary Dorrien, eminent social ethicist, long-time member of DSA, and friend of this blog. Video of the event is available here.

    Read at 09:28 pm, Jun 5th

  • Getting The Most Out Of Your PostgreSQL Indexes

    In the Postgres world, indexes are essential to efficiently navigate the table data storage (aka the “heap”). Postgres does not maintain a clustering for the heap, and the MVCC architecture leads to multiple versions of the same tuple lying around.

    Read at 08:00 pm, Jun 5th

  • Multi-column indexes

    This post covers standard Postgres B-tree indexes. To make it easier to read, I’ve just referred to “indexes” throughout, but it’s worth bearing in mind that the other Postgres index types behave very differently.

    Read at 07:54 pm, Jun 5th

  • essential rust tools

    Rust has a “community of developers empowered by their tools and each other” (via Katharina Fey in “An async story“).

    Read at 07:50 pm, Jun 5th

  • Bayesian Inference for Hiring Engineers

    To get an interview for a technical position, an engineer must run a gauntlet. Their resume has to get past a recruiter or hiring manager. They have to sound excited and competent on a culture fit phone call. And they need to complete either an online test or a technical phone screen.

    Read at 07:48 pm, Jun 5th

  • New York City Passes Historic Climate Legislation

    Alaska: The impacts of climate warming in Alaska are already occurring, experts have warned. Over the past 50 years, temperatures across Alaska increased by an average of 3.4°F. Winter warming was even greater, rising by an average of 6.3°F, jeopardising its famous glaciers and frozen tundra.

    Read at 07:36 pm, Jun 5th

  • Trump Administration Considered Tariffs on Australia

    WASHINGTON — The Trump administration considered imposing tariffs on imports from Australia last week, but decided against the move amid fierce opposition from military officials and the State Department, according to several people familiar with the discussions.

    Read at 07:32 pm, Jun 5th

  • Kushner unsure whether he'd alert FBI if Russians request another meeting

    Why this matters: Kushner is now in the West Wing as senior adviser to the president.

    Read at 02:53 pm, Jun 5th

  • Cabán Gets Big Money Help For Her “People-Powered” Campaign

    Public defender Tiffany Cabán may lead the way with individual small donor contributors in the Queens District Attorney race, but she also received help from major players, according to the New York State Board of Elections, which released campaign filings today.

    Read at 01:46 pm, Jun 5th

  • As Sunset Park Gentrifies, Residents Accuse Building Owner Of Dropping Manufacturing Commitment

    The owner of Liberty View Industrial Plaza in Sunset Park is looking to back out of a deal with the city to reserve most of the building for manufacturing tenants.

    Read at 01:42 pm, Jun 5th

  • YouTube decides that homophobic harassment does not violate its policies

    YouTube has at last formally responded to an explosive and controversial feud between Vox writer and video host Carlos Maza and conservative YouTuber Steven Crowder.

    Read at 11:56 am, Jun 5th

  • Exclusive: Pompeo delivers unfiltered view of Trump’s Middle East peace plan in off-the-record meeting

    Secretary of State Mike Pompeo delivered a sobering assessment of the prospects of the Trump administration’s long-awaited Middle East peace plan in a closed-door meeting with Jewish leaders, saying “one might argue” that the plan is “unexecutable” and it might not “gain traction.”

    Read at 09:43 am, Jun 5th

  • How New York’s Elite Public Schools Lost Their Black and Hispanic Students

    Donna Lennon will never forget when she learned she had won a seat at Stuyvesant High School, one of the nation’s most revered and selective public schools. Ms.

    Read at 09:33 am, Jun 5th

  • EXCLUSIVE: 'Central Park Jogger' supports city's release of case records, hopes to clear police and DA of allegations they railroaded the accused teens

    Read at 09:27 am, Jun 5th

  • U.S. Requiring Social Media Information From Visa Applicants

    Visa applicants to the United States are required to submit any information about social media accounts they have used in the past five years under a State Department policy that started on Friday.

    Read at 09:22 am, Jun 5th

  • Britain is horribly divided – and that’s also the fault of remainers

    In the buildup to the European elections, I travelled around Britain and had the sense of people talking about the state of the country in wildly different ways. “Democracy is broken,” shouted a Brexit supporter in Swindon.

    Read at 09:20 am, Jun 5th

  • The Right’s Grifter Problem

    Making the click-through worthwhile: A rare single-topic Jolt this morning, as I’ve watched the two millionth “the problem with conservatism is people like you, the solution for conservatism is people like me” debate, and I’m just sick and tired of so many of our brethren averting their eyes

    Read at 08:15 am, Jun 5th

  • House Dems to hold Barr, Ross in contempt over census question

    House Democrats are moving to hold Attorney General William Barr and Commerce Secretary Wilbur Ross in contempt of Congress for defying a subpoena seeking information about efforts to add a citizenship question to the 2020 census. “Unfortunately, your actions are part of a pattern,” Rep.

    Read at 08:04 am, Jun 5th

  • Against Advertising

    Advertisers thrive on perpetuating a system that is ravaging the planet. We can do without them — and a lot of the junk they’re trying to sell us. The job of advertisers is straightforward.

    Read at 08:00 am, Jun 5th

  • San Francisco to force some addicts into treatment

    A city known for its fierce protection of civil rights has voted to force some people with serious mental illness and drug addiction into treatment, even though the move goes against the spirit of normally liberal-minded San Francisco.

    Read at 07:27 am, Jun 5th

  • SF supervisors strike deal to expand forced treatment of mentally ill - SFChronicle.com

    San Francisco supervisors struck a deal Monday to support a controversial law that would expand the city’s ability to force seriously mentally ill people into care — but the plan will likely help only about five people. The Board of Supervisors is expected to approve the legislation on Tuesday, following months of debate over how the city should deal with severely mentally ill people on the streets. The supervisors battled for months over the proposal. But even the legislation’s most ardent supporters say it isn’t the answer to the city’s broken behavioral health care system. The proposal expands the definition of who is eligible for conservatorship, which is court-ordered mental health treatment. If it passes, the city can impose in-patient treatment on someone if they are severely mentally ill, addicted to drugs and have been taken to an emergency crisis unit — known as a 5150 hold — at least eight times in a year. While the city’s Department of Public Health estimates the expanded law would help only about five people, the board was fractured over whether the city’s already clogged mental health care system could adequately help more patients. But on Monday, some supervisors said it was a small step in the right direction. Supervisor Rafael Mandelman, who originally co-sponsored the legislation with Mayor London Breed, said he was pleased with his colleagues’ support, but was “frustrated that it took this long and it was this much work for something that was so incremental.” Some supervisors said they decided to support the legislation after seeing several minor amendments crafted by Supervisors Matt Haney and Norman Yee.
One amendment would ensure that someone is offered a bed in a treatment facility before they are put into inpatient treatment, and offered a housing placement after they are ready to move on. Those guarantees could minimize long wait times for people locked in psychiatric wards, and lower the possibility of people being dumped back on the streets after treatment. Another amendment would guarantee that a specialized care team is in place to offer the people voluntary services before they are forced into care. None of the amendments, however, addressed a fundamental problem that some supervisors had with the legislation: It would add more people to an already clogged mental health care system. About 600 people are currently under court-ordered inpatient and outpatient treatment in San Francisco. Some of those people have found themselves stuck in locked wards for weeks — and in some cases, months — longer than they should be, as they wait for a bed in a more appropriate treatment facility to open up. Long wait times still concern Supervisor Hillary Ronen, originally one of the law’s main opponents. But she said the amendments — particularly the one that ensures housing is offered — persuaded her to support the law. Her support is a big shift from two weeks ago when she called the proposal “unworkable.” Ronen said she also changed her mind after Breed pledged $50 million in next year’s budget to include more than 100 treatment and recovery beds for people suffering from mental health issues and substance abuse. She also touted her recently proposed ballot measure — co-sponsored by Haney — which has the ambitious goal of guaranteeing mental health care for all city residents. “It’s a combination of all those factors together that are allowing me to change my position on this,” she said. 
    But “I think it is such a minor law and it will have such a minor impact compared to the gravity of our problem.” A proposed state law, SB40, which recently passed the Assembly, could increase the number of people eligible under the expanded conservatorship laws to approximately 55. Supervisors Haney, Yee and Ahsha Safaí also said Monday that they would vote for the legislation. That makes eight votes confirmed to The Chronicle, including the other co-sponsors, Supervisors Vallie Brown and Catherine Stefani, and Supervisor Aaron Peskin, who said last week that he would support the legislation. Mandelman had threatened to bring the issue to the voters on the November ballot if his colleagues did not support the measure. On Monday evening, some supporters were still trying to persuade their colleagues to vote for the legislation in an attempt to get unanimous support. The goal, they said, is to show unity. “Our homelessness crisis calls out for bold, persistent experimentation,” he said. But, he added, it’s challenging when “San Franciscans are really divided.” Trisha Thadani is a San Francisco Chronicle staff writer. Email: tthadani@sfchronicle.com Twitter: @TrishaThadani Source: SF supervisors strike deal to expand forced treatment of mentally ill – SFChronicle.com

    Read at 03:08 pm, Jun 5th

  • Take Back Your Web: Tantek Çelik’s Call to Action to Join the Independent Web – WordPress Tavern

    Tantek Çelik, Web Standards Lead at Mozilla and co-founder of IndieWebCamp, delivered an inspirational talk titled “Take Back Your Web” at the most recent beyond tellerrand conference in Düsseldorf, Germany. He opened the presentation with a litany of Facebook’s wrongdoings, taking the world’s largest social network to task for its role in increasing polarization, amplifying rage, and spreading conspiracy theories. Çelik challenged the audience to “stop scrolling Facebook,” because its algorithms are designed to manipulate users’ emotions and behaviors. He noted that it is the only social network with a Wikipedia page dedicated to its criticism. This massive document has a dizzying number of references, which Wikipedia says “may be too long to read and navigate comfortably.” As an alternative to scrolling Facebook, Çelik encouraged attendees to spend time doing nothing, an activity that can be uncomfortable yet productive. The “Take Back Your Web” presentation is a call to action to join the independent web by owning your own domain, content, social connections, and reading experience. Çelik recommends a number of IndieWeb services and tools to empower users to take control of their experiences on the web. With a free site hosted on GitHub, he said the costs of owning your own domain are less than owning a phone or having internet service. Suggestions like this are targeted at developers who share Twitter names instead of domains and post articles on Medium. Setting up a site on GitHub is not a simple task for most. That’s why networks like WordPress.com, along with hosts that provide instant WordPress sites, are so important for enabling average internet users to create their own websites. 
    Çelik referenced Matthias Ott’s recent article “Into the Personal-Website-Verse,” highlighting the section about the value of learning new technologies by implementing them on your own website: “A personal website is also a powerful playground to tinker with new technologies and discover your powers.” It’s one of the few places developers can expand their skills and make mistakes without the pressure to have everything working. Ott enumerates the many benefits of people having their own enduring home on the web and encourages developers to use their powers to make this a reality: As idealistic as this vision of the Web might seem these days, it isn’t that far out of reach. Much of what’s needed, especially the publishing part, is already there. It’s also not as if our sites weren’t already connected in one way or another. Yet much of the discussions and establishment of connections, of that social glue that holds our community together – besides community events in real life, of course –, mostly happens on social media platforms at the moment. But: this is a choice. If we would make the conscious decision to find better ways to connect our personal sites and to enable more social interaction again, and if we would then persistently work on this idea, then we could, bit by bit, influence the development of Web technologies into this direction. What we would end up with is not only a bunch of personal websites but a whole interconnected personal-website-verse. Check out Çelik’s slides for the presentation and the recording below for a little bit of inspiration to re-evaluate your relationship with social networks, create your own site, or revive one that has been neglected. 
    Source: Take Back Your Web: Tantek Çelik’s Call to Action to Join the Independent Web – WordPress Tavern

    Read at 07:57 am, Jun 5th

  • From YouTube to TV, and Back Again: Viral Video Child Stars and Media Flows in the Era of Social Media - Cyborgology

    Social Media Famous Children

    In light of recent discussions around the rights of social media famous children, where various journalists and scholars are calling for more accountability from influencer parents, social media platforms, and everyday audiences, my collaborator A/Prof Tama Leaver and I would like to share some snippets from our paper-in-progress regarding the networked trajectories of child virality for which another stakeholder – TV networks – must be held accountable. The piece of research, ‘From YouTube to TV, and Back Again: Viral Video Child Stars and Media Flows in the Era of Social Media’, was last presented in October 2018 at the Association of Internet Researchers (AoIR) 2018 conference in Montreal.

    YouTube and TV

    While talk shows and reality TV are often considered launching pads for ordinary people seeking to become celebrities, we argue that when children are concerned, especially when those children have had viral success on YouTube or other platforms, their subsequent appearance(s) on television highlight far more complex media flows. At the very least, these flows are increasingly symbiotic, where television networks harness preexisting viral interest online to bolster ratings. However, the networks might also be considered parasitic, exploiting viral children for ratings in a fashion they and their carers may not have been prepared for. 
    In tracing the trajectory of Sophia Grace and Rosie from viral success to The Ellen Show we highlight these complexities, whilst simultaneously raising concerns about the long-term impact of these trajectories on the children being made increasingly and inescapably visible across a range of networks and platforms. We draw on an extended data set largely comprising screengrabs, archived comments, press coverage, and volumes of field notes tracking historical events that unfolded in the public trajectories of young children who go viral on the internet and in the media, but also utilise data derived from an ethnographically informed content analysis of young internet celebrities and a data-driven cultural studies analysis of childhood in the age of tracking devices.

    Sophia Grace and Rosie

    This research takes as its primary case study the trajectory and progress of Sophia Grace Brownlee (b. 2003) and Rosie McClelland (b. 2006), a pair of cousins from Essex, England, better known on the internet as “Sophia Grace and Rosie”. The duo went viral on YouTube at the ages of 8 and 5 when Sophia Grace’s mother uploaded a video of the girls singing Nicki Minaj’s Super Bass in September 2011 (Sophia Grace 2011a), and they were subsequently groomed by The Ellen DeGeneres Show into multi-platform celebrity. The viral video was the debut post on the YouTube channel “Sophia Grace”, and has accumulated over 52 million views as of August 2017. A month later in October 2011, the girls were invited on The Ellen DeGeneres Show to be interviewed by show host Ellen and to reenact their viral performance. In a later segment, Nicki Minaj sprang a surprise on the girls, appearing on stage at a last-minute request to chat and sing with them. 
    The two videos have recorded over 32 million and 122 million views, respectively. So well received were the girls on The Ellen DeGeneres Show and its YouTube channel that shortly after, behind-the-scenes footage of Sophia Grace & Rosie was released on the Show’s YouTube channel, in a bid to capitalize upon their virality and extend the length of their appeal to the show’s audience. Subsequently, the girls were subsumed into the programming of The Ellen DeGeneres Show as they represented the show at various red carpet events and starred in branded content in the YouTube content vernacular of a vlog, promoting various brands and events. Sophia Grace & Rosie eventually became a bona fide staple on The Ellen DeGeneres Show, hosting their own segment known as “‘Tea Time’ with Sophia Grace & Rosie”, with eight episodes between September 2012 and May 2013. It appears that The Ellen DeGeneres Show spotted the talent and viral uptake of the girls early on, inviting them to celebrate their 100 millionth view on YouTube. Over subsequent years, the girls would frequently be featured talking about their personal lives, the experience of Britons regularly visiting America, their family lives, and the impact of their YouTube success, all of which appeared both on The Ellen Show and on the Show’s YouTube channel. As the years passed and the cousins approached teenhood, it became clear that the social media presence of Sophia Grace was more intentionally curated and branded for a career in the (internet) entertainment industry while Rosie faded into the background. Aside from the structural expansion of rebranding her YouTube channel to focus on Sophia Grace rather than the duo and starting a Facebook page as “Sophia Grace The Artist”, Sophia Grace’s digital estates also underwent content expansion as she began to produce her own music, meet the mainstream entertainment industry, and collaborate with fellow internet celebrities. 
    Since turning 13 in 2016, Sophia Grace formally launched her Influencer career by engaging in Influencer content vernacular and YouTube tropes, including participating in internet viral trends unrelated to her music career such as making  and the Oreo challenge, engaging in the attention economy of clickbait such as Q&As addressing her budding romantic life, and expanding her presence into other genres on YouTube such as makeup tutorials.

    Networked Trajectories of Viral Child Celebrity

    Following our fieldwork and content analysis of the social media presence and media coverage of Sophia Grace and Rosie, we offer the following model that details the steps and milestones through which children who first become viral on social media become systemically groomed into multi-media networked celebrities on both social and legacy media:

    Complex Media Flows

    To some extent, the rise and popularity of these viral children can be understood as part of what Graeme Turner calls ‘the demotic turn’, the increasing repositioning of everyday people into the media spotlight, creating a form of celebrity via reality TV, talk shows and so forth (Turner, 2013). This is reinforced by Sophia Grace (& Rosie)’s acknowledgement of The Ellen DeGeneres Show as the springboard for their expanded and extended fame post-virality in several of their public messages. However, we argue that the media flows relating to viral children as exemplified by Sophia Grace & Rosie are more complex. Rather than ‘creating’ the fame of these children, The Ellen DeGeneres Show and similar TV talk show formats opportunistically capitalize upon the social capital of such viral video children by harnessing their fame and packaging it into more accessible, commercial, and deliberate consumption bytes. The girls were viral stars before they were on TV, but the networks channeled, amplified and significantly capitalized on their emergent (viral) fame. 
    So successful is this model of viral kid celebrity factories that The Ellen DeGeneres Show has curated its own series of adorable kids in a playlist of over 200 videos with such viral children engaging in various (commercial) activities on The Show.

    Emerging Conclusions

    Viral fame online and more recognised televisual fame are increasingly blurring, with both symbiotic and parasitic relationships emerging as television networks seek to harness, and create, online attention. Viral children such as Sophia Grace and Rosie exemplify this complexity, where the televisual and online flows are multiple and complex. At the heart of these flows, though, are an increasing number of children whose amplified viral fame must be carefully positioned in commercial, social and care terms. As more and more children are featured online as proto-influencers and microcelebrities, often managed and produced by their parents, and sometimes being amplified and harnessed by more traditional media forms such as television, the rights of the children in these instances – to privacy, to self-determination and so forth (Livingstone & Third, 2017) – must be more robustly and transparently discussed. Historically, child stars have often not fared that well after bursts of fame in the media industries; viral kids need more successful and more carefully mapped trajectories.

    Further Resources

    While we are currently ushering our paper into publication, here are a few more links on the topic that might be useful: slides from our talk here; tweet summary of our key slides here; abstract in video form here; radio interview here; Tama’s work on ‘Intimate Surveillance’ here; Crystal’s work on ‘Family Influencers’ here; pop media version of our work here; Twitter thread + reading list on the history of child influencers here.

    * Dr Crystal Abidin is a socio-cultural anthropologist of vernacular internet cultures, particularly young people’s relationships with internet celebrity, self-curation, and vulnerability. 
She is Senior Research Fellow and ARC DECRA Fellow in Internet Studies at Curtin University. Her books include Internet Celebrity: Understanding Fame Online (Emerald Publishing, 2018), Microcelebrity Around the Globe: Approaches to Cultures of Internet Fame (co-edited with Megan Lindsay Brown, Emerald Publishing, 2018), and Instagram: Visual Social Media Cultures (with Tama Leaver and Tim Highfield, Polity Press, December 2019). Reach her at wishcrys.com or @wishcrys. Source: From YouTube to TV, and Back Again: Viral Video Child Stars and Media Flows in the Era of Social Media – Cyborgology

    Read at 07:56 am, Jun 5th

  • An update on last week's customer shutdown incident

    Source: An update on last week’s customer shutdown incident

    Read at 07:41 am, Jun 5th

Day of Jun 4th, 2019

  • Can New York rein in big real estate?

    Big real estate is on the ropes in New York. When Rep.

    Read at 07:53 pm, Jun 4th

  • Top 10 Inwood Restaurants

    Read at 07:09 pm, Jun 4th

  • Biden’s First Run for President Was a Calamity. Some Missteps Still Resonate.

    In 1988, Joe Biden was prone to embellishment. Hints of that linger today. But unlike then, his message to voters is clear: He’s a stabilizing statesman in a tumultuous time. Joe Biden was riffing again — an R.F.K.

    Read at 07:09 pm, Jun 4th

  • Progressives Should Read Progressive History—So They Don’t Blow It This Time

    "Medicare for All." "The Green New Deal." Calls to overhaul the Supreme Court and replace the Electoral College. Many activists today are heralding a new progressive movement—a successor to the vibrant reform coalition that swept both major political parties in the early years of the 20th century.

    Read at 06:55 pm, Jun 4th

  • A health check playbook for your Postgres database

    I talk with a lot of folks that set their database up, start working with it, and then are surprised by issues that suddenly crop up out of nowhere. The reality is, so many don’t want to have to be a DBA; instead, you would rather build features and just have the database work.

    Read at 09:53 am, Jun 4th

  • A Better Approach for Using Purgecss with Tailwind

    We’re hiring designers, developers, project managers in all 3 of our offices. Learn more and introduce yourself. Purgecss is an indispensable frontend tool, especially when used alongside TailwindCSS.

    Read at 09:46 am, Jun 4th

  • Democratic Socialists of America (DSA)

    When we first started talking to Historians for Peace and Justice about sharing their thoughts with DSA Weekly, we had little idea that this very week, the U.S. president would be trash-talking the entire island of Puerto Rico and catapulting it into front-page news.

    Read at 09:13 am, Jun 4th

  • Docker build caching can lead to insecure images

    Docker builds can be slow, so you want to use Docker’s layer caching, reusing previous builds to speed up the current one. And while this will speed up builds, there’s a downside as well: caching can lead to insecure images. In this article I’ll cover: why caching can mean insecure images, bypassing Docker’s build cache, and the process you need in place to keep your images secure. Note: outside the specific topic under discussion, the Dockerfiles in this article are not examples of best practices, since the added complexity would obscure the main point of the article. Want a best-practices Dockerfile and build system? Check out my Production-Ready Python Containers product.

    The problem: caching means no updates

    Consider the following Dockerfile (and note this is not a best-practices Dockerfile):

        FROM ubuntu:18.04
        RUN apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends python3
        COPY myapp.py .
        CMD python3 myapp.py

    The first time we build it, it will download a variety of Ubuntu packages, which takes a while. The second time we run it, however, docker build uses the cached layers (assuming you ensured the cache is populated):

        $ docker build -t myimage .
        Sending build context to Docker daemon 2.56kB
        Step 1/4 : FROM ubuntu:18.04
         ---> 94e814e2efa8
        Step 2/4 : RUN apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends python3
         ---> Using cache
         ---> 3cea2a611763
        Step 3/4 : COPY myapp.py .
         ---> Using cache
         ---> f6173b1fa111
        Step 4/4 : CMD python3 myapp.py
         ---> Using cache
         ---> 6222b50940a5
        Successfully built 6222b50940a5
        Successfully tagged myimage:latest

    Until you change the text of the second line of the Dockerfile (“apt-get update etc.”), every time you do a build that relies on the cache you’ll get the same Ubuntu packages you installed the first time. 
    As long as you’re relying on caching, you’ll still get the old, insecure packages distributed in your images even after Ubuntu has released security updates.

    Disabling caching

    That suggests that sometimes you’re going to want to bypass the caching. You can do so by passing two arguments to docker build:

    --pull: This pulls the latest version of the base Docker image, instead of using the locally cached one.
    --no-cache: This ensures all additional layers in the Dockerfile get rebuilt from scratch, instead of relying on the layer cache.

    If you add those arguments to docker build, you can be sure the new image has the latest (system-level) packages and security updates.

    Rebuild your images regularly

    If you want both the benefits of caching and security updates within a reasonable amount of time, you will need two build processes: the normal image build process that happens whenever you release new code, and then once a week, or every night, a rebuild of your Docker image from scratch using docker build --pull --no-cache to ensure you have security updates. Source: Docker build caching can lead to insecure images
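    The two-track build process described above can be sketched as a small shell helper. This is a minimal sketch, not a definitive setup: the myimage tag follows the article's example, and the actual scheduling (cron, CI, etc.) is left out; the helper only composes and prints the docker build command to run.

    ```shell
    #!/bin/sh
    # Sketch of the article's two build processes. "myimage" is the
    # article's example tag; substitute your own image name.

    # build_cmd MODE prints the docker build invocation to run:
    #   MODE=fresh -> weekly/nightly from-scratch rebuild (--pull --no-cache),
    #                 which picks up base-image and package security updates
    #   anything else -> normal cached build on each code release
    build_cmd() {
      if [ "$1" = "fresh" ]; then
        echo "docker build --pull --no-cache -t myimage ."
      else
        echo "docker build -t myimage ."
      fi
    }

    build_cmd release   # fast: reuses the layer cache
    build_cmd fresh     # slow: rebuilds every layer from scratch
    ```

    Running the printed "fresh" command on a weekly or nightly schedule, alongside cached builds on every release, gives you fast day-to-day builds without shipping stale packages indefinitely.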

    Read at 01:18 pm, Jun 4th

Day of Jun 3rd, 2019

  • The internal rift over rent regulations

    Even before this year’s state legislative session began, an overhaul of rent laws appeared to be a given. Democrats had won big majorities in the state Senate and Assembly, and Gov.

    Read at 11:44 pm, Jun 3rd

  • Move Over Redux: Apollo-Client as a State Management Solution

    On the Internal Tools team at Circle, we recently modernized a legacy PHP app by introducing React components. Just a handful of months after this initiative began we have close to one-hundred React components in this app!

    Read at 11:31 pm, Jun 3rd

  • New York City's Early Voting Plan Unfairly Benefits White Voters

    Update: On May 30, the New York City Board of Elections announced it would add 19 more early voting sites. But this plan still leaves significant gaps in access, particularly in communities of color. The Board also still does not plan to let people vote at any early voting site in their county.

    Read at 11:19 pm, Jun 3rd

  • Albany Lawmakers Weigh A Ban On The Gay And Trans Panic Defense

    New York could become just the fifth state in the union to ban gay and trans panic defenses in court if legislation in Albany becomes law by the end of June.

    Read at 10:31 pm, Jun 3rd

  • QuickCheck or Fuzzing? Which one to use?

    TL;DR: QuickCheck for fast testing of functions with random values as part of a test suite. Fuzzers (model or mutation based) take longer time to run and are best used for world facing interfaces of the software like file or protocol parsers.

    Read at 09:20 pm, Jun 3rd

  • Rex Tillerson Secretly Meets With House Foreign Affairs Committee to Talk Trump

    Former Secretary of State Rex Tillerson spoke with the leaders of the House Foreign Affairs committee Tuesday in a lengthy session that, an aide said, touched on his time working in the Trump administration, the frictions he had with the president’s son-in-law, and efforts to tackle issues like Ru

    Read at 09:14 pm, Jun 3rd

  • https://revolutionsperminute.simplecast.com/episodes/the-battle-at-hudson-yards-pJ962AZo

    Read at 07:49 pm, Jun 3rd

  • Code Review: How can we do it better?

    CircleCI - Continuous Integration as a service (Sponsored - use the link when signing up to support the channel) http://circleci.funfunfunction.com Pull Requests featured in the video: https://github.com/tc39/test262/pull/... https://github.com/rakibtg/Docker-Ele... Follow on Twitch to get

    Read at 04:30 pm, Jun 3rd

  • JS Party #65 : Building rapid UI with utility-first CSS featuring Adam Wathan

    Rollbar – We move fast and fix things because of Rollbar. Resolve errors in minutes. Deploy with confidence. Learn more at rollbar.com/changelog. Raygun – Unblock your biggest app performance bottlenecks with Raygun APM.

    Read at 12:08 pm, Jun 3rd

  • Plan, Market, and Wal-Market

    More and more people are talking about socialism, but nobody’s doing anything about it. If we’re talking about “nationalizing the means of production,” Bernie Sanders’ avowedly democratic socialist political revolution falls well short.

    Read at 12:06 pm, Jun 3rd

  • Gender Critical | ContraPoints

    Let's go adult human females.✿Patreon: https://www.patreon.com/contrapoints✿Donate: https://paypal.me/contrapoints✿Merch: https://www.teepublic.com/stores/cont...✿Subscribe: https://www.youtube.com/c/ContraPoints✿Live Stream Channel: https://www.youtube.com/c/ContraPoint...✿Twitter: http

    Read at 12:00 pm, Jun 3rd

  • Some thoughts on the New York Health Act

    Over the past few months, I’ve heard from a number of people asking me to co-sponsor S3577, the New York Health Act, a proposal to create a universal single-payer health insurance program that covers all residents of New York State.

    Read at 09:41 am, Jun 3rd

  • Interview about the Historovox with Bob Garfield of On the Media

    My piece on the Historovox has provoked a lot of controversy and pushback. Dan Drezner devoted a column to it in The Washington Post. Others have weighed in on social media. I had a chance to talk with Bob Garfield of the NPR show On the Media about the piece. Have a listen.

    Read at 09:31 am, Jun 3rd

  • Why the imperialists hate Huawei

    The Chinese company Huawei has been targeted by the Donald Trump administration. At Washington’s request, Canada arrested Meng Wanzhou, the chief financial officer of the company, last December. The charge? That the company did business with Iran, contrary to U.S. sanctions on that country.

    Read at 09:28 am, Jun 3rd

  • Why I Love The D.S.A.

    I’m not in the Democratic Socialists of America, but I have been interviewing a number of its members lately, for a forthcoming series of articles in this magazine and an academic project.

    Read at 09:24 am, Jun 3rd

  • Worry About Facebook. Rip Your Hair Out in Screaming Terror About Fox News.

    Novel forms of digital misinformation still pale in comparison with Fox News’ full-time hall of mirrors.

    Read at 09:20 am, Jun 3rd

  • Male Privilege Helped My Political Career. That's Why I'm Endorsing Elizabeth Warren.

    It has become a maddening refrain, every time she drops a new policy proposal, grounded in a compelling vision for a more just and compassionate country. Thoughtful about race, gender, and class. With the details worked out and the new spending paid for in a fair and progressive way. Yes, Elizabeth Warren has the best policies, but… I’m done with the “but.” I’m endorsing Elizabeth Warren for president of the United States. We all know what’s behind the hesitation: a bias toward male leadership. I know this well, because it’s something I have personally benefited from at every turn in my career. At age 23, as a straight, white, young man — bright-eyed but without any evident qualifications — I got a great job as the executive director of a not-for-profit affordable housing group. When I ran for the New York City Council, in one of the most progressive communities in the country, I faced eight other candidates. All men. I’ve tried to put my privilege to good use, as an ally in feminist, anti-racist, and LGBTQ efforts. But I can’t honestly say I’ve grappled seriously with the many ways, subtle and unsubtle, that this bias has benefited me at every step: in my education, my career, in meetings, in fundraising, in the different expectations for my wife and me, in our domestic life — and most certainly in politics. Thanks to leadership from the black community, we’ve started — just barely — to reckon with the legacy of white supremacy in the United States. But even with the strength of the #MeToo movement, we aren’t really doing that with gender. There’s little honest reckoning of the cost, or of the ongoing legacy of patriarchy, which is around us in every element of our economy, our health, our homes, and our politics. 
    (It’s worth noting that black women, the most loyal Democratic constituency, bear the brunt of both.) If you want an example of that legacy, just look at the Democratic primary. As best I can tell, the argument that we should go with Biden, Bernie, Buttigieg, or Beto (rather than Warren, Harris, or Gillibrand) is that we live in a sexist country where women will struggle to be elected — and since the stakes are so high in the Trump era, we can’t risk it. I understand why it feels scary to let go of our addiction to male leadership. Why it feels to some that the men are somehow “more presidential.” Why some pundits believe “electability” means someone who appears the least threatening to some of the white male voters that Trump won. But let’s be clear about the costs of yielding to those feelings — and the victory we would be handing the misogynist in chief whether we beat him or not — by giving in before we even take up the fight. The answer to repression cannot be to accommodate it. The answer is to push forward for a more expansive and inclusive vision of freedom and equality. When Warren does her pinkie swear with young women on the campaign trail, and tells them she’s running for president “because that’s what girls do,” my daughter Rosa thrills to it. But what’s possible here is something much more than the idea that all of our daughters could believe they can grow up to be president. What’s possible is a profound victory for the simple but so deeply powerful idea that women are fully and truly equal. That we would be lucky to have women bosses, governors, and presidents. That we will all be more fully whole, more fully equal, more fully human, when we scrub the rot of sexist abuse, assault, belittling, interrupting, grabbing, objectifying, and diminishing of women. I know that it is possible for men to be exhilarated about the prospect of living in that world. My son Marek and I both would prefer to live in it. This is not about tokenism. 
    If I weren’t confident that Warren could beat Trump, and that she could do the best job, that victory for equality would not be reason enough in itself. But since I really believe she could — that she not only has the best plans, but would be the best president — it’s an awfully good additional one. And if you’re like me, you'll love that she sweats the details: of universal child care, the boldest plan for affordable housing, universal free public college and canceling student loan debt, holding the largest US corporations accountable, and renewing serious antitrust enforcement for a fairer economy, taking concrete actions to protect reproductive freedom, debt relief for Puerto Rico, a smart Ultra-Millionaire Tax to pay for the investments she proposes, and much, much more. You can find all of them on her website. I have enormous admiration for the movement that Bernie Sanders continues to build, boldly shifting what we believe is possible, and mobilizing the power needed for change. As a New York City Council member, I’ve worked enthusiastically with Mayor de Blasio on universal pre-K, affordable housing, and workers' rights, and with Sen. Gillibrand on work/family balance. And as sharply as I’ll criticize Joe Biden in the coming months, if he is the Democratic nominee, I will work hard and vote for him — or any of the other candidates, all of whom would of course be immeasurably better than the vile narcissist in chief, who is doing so much to erode almost every critical institution of our democracy. But I’m wholeheartedly endorsing Warren. If we organize for her in the coming months, we can help her win. If she elevates with our support and wins the primary, I have no doubt she’ll have the momentum and the courage we need to beat Trump. We’d not only strike a blow against our bias for male leadership. We’d have a little clearer picture of what a more equal world would look like. We’d also get to help her implement all those plans. 
And how great would that be, for our daughters and our sons alike? Brad Lander is a member of the New York City Council, and the board chair of Local Progress, a national network of progressive local elected officials. Source: Male Privilege Helped My Political Career. That’s Why I’m Endorsing Elizabeth Warren.

    Read at 06:01 pm, Jun 3rd

  • The economics of package management

    The economics of open source: this is the talk I delivered on day one of JSConfEU 2019. Video link to come; original proposal; slides in Markdown format, intended for display with Deckset; rendered slides on SpeakerDeck. Entropic is the open source project associated with this talk. Many thanks to Jon Cowperthwait and David Zink for their valuable feedback on drafts of this talk. Source: The economics of package management

    Read at 12:33 pm, Jun 3rd

  • Thinking About Values | benmccormick.org

    As a way to help myself grow in my first year as a manager, I’ve been working through Google’s New Manager Training, which they have generously published online. One exercise that stood out to me was a “values selection” process, where they encourage managers to select a ranked list of 5-10 words that resonate with them as values from a list of around 400 “values words”.

    Choosing values is an odd process, because you’re choosing between “goods”. When I first went through this sheet, I circled over 40 words as values that “resonated” with me. They’re things that I value and strive for, or wish that I was more like. But deciding between adventurousness and cowardice is not a “values decision”. A real values decision means you’re trying to decide whether good traits like adventurousness, belonging, or clear-mindedness are more important to you.

    Management often forces you to choose between goods: Will we ship quicker? Aim for higher quality? How do those choices impact work/life balance for our teams? It’s hard to know if we’re making good choices, because the results routinely lack clarity. Indicators lag, causation is fuzzy, and we don’t get to compare to a control group. Values can be guideposts through ambiguity. I’ve seen the power of this at a company level in my current gig, where having a set of values that are kept in the spotlight allows us to weaponize them as decision-making tools.

    Going through the list, I eventually cut things down to 20, 12, 10 and then 8. That still feels like a lot to me, but for each of the 8 I can easily cite work situations I’ve faced in the past month where my response came directly out of that value. In the end I landed on Integrity, Thankfulness, Community, Prudence (in the classical sense), Generosity, Continual Improvement, Accountability, and Justice. I find these easier to connect to my everyday work when I write them out as more detailed sentences, though: Act with integrity. Live with thankfulness for what I’ve been blessed with. I want to work and live life in community. Make decisions based on a rational analysis of reality, not feelings and first impressions (Prudence). Don’t save it all for myself; leave margin and be generous. Continual Improvement + Goodism are a powerful combo. Be accountable for my responsibilities and hold others to the same. Actively work for justice.

    I’ve published them here with a bit more detail. Those are “guideposts” that I can easily quote to myself or post up somewhere, but they still have enough substance to say something meaningful when I’m considering how to handle a low-performing employee, organizational changes, or personal career decisions. I’ve already seen these values bring clarity to my thinking about tough things in a short period of time. If you’re interested, I highly recommend checking out Google’s training. Source: Thinking About Values | benmccormick.org

    Read at 08:15 am, Jun 3rd

  • Self-Host Your Static Assets – CSS Wizardry – CSS Architecture, Web Performance Optimisation, and more, by Harry Roberts

    31 May, 2019. Written by Harry Roberts on CSS Wizardry.

    Table of Contents: What Am I Talking About? · Risk: Slowdowns and Outages · Risk: Service Shutdowns · Risk: Security Vulnerabilities · Mitigation: Subresource Integrity · Penalty: Network Negotiation · Mitigation: preconnect · Penalty: Loss of Prioritisation · Penalty: Caching · Myth: Cross-Domain Caching · Myth: Access to a CDN · Self-Host Your Static Assets

    One of the quickest wins—and one of the first things I recommend my clients do—to make websites faster can at first seem counter-intuitive: you should self-host all of your static assets, forgoing others’ CDNs/infrastructure. In this short and hopefully very straightforward post, I want to outline the disadvantages of hosting your static assets ‘off-site’, and the overwhelming benefits of hosting them on your own origin.

    What Am I Talking About? It’s not uncommon for developers to link to static assets such as libraries or plugins that are hosted at a public/CDN URL. A classic example is jQuery, which we might link to like so: <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"></script>

    There are a number of perceived benefits to doing this, but my aim later in this article is to either debunk these claims, or show how other costs vastly outweigh them.

    It’s convenient. It requires very little effort or brainpower to include files like this. Copy and paste a line of HTML and you’re done. Easy.

    We get access to a CDN. code.jquery.com is served by StackPath, a CDN. By linking to assets on this origin, we get CDN-quality delivery, free!

    Users might already have the file cached. If website-a.com links to https://code.jquery.com/jquery-3.3.1.slim.min.js, and a user goes from there to website-b.com which also links to https://code.jquery.com/jquery-3.3.1.slim.min.js, then the user will already have that file in their cache.
    Risk: Slowdowns and Outages. I won’t go into too much detail in this post, because I have a whole article on the subject of third party resilience and the risks associated with slowdowns and outages. Suffice to say, if you have any critical assets served by third party providers, and that provider is suffering slowdowns or, heaven forbid, outages, it’s pretty bleak news for you. You’re going to suffer, too. If you have any render-blocking CSS or synchronous JS hosted on third party domains, go and bring it onto your own infrastructure right now. Critical assets are far too valuable to leave on someone else’s servers.

    Risk: Service Shutdowns. A far less common occurrence, but what happens if a provider decides they need to shut down the service? This is exactly what Rawgit did in October 2018, yet (at the time of writing) a crude GitHub code search still yielded over a million references to the now-sunset service, and almost 20,000 live sites are still linking to it! Many thanks to Paul Calvano, who very kindly queried the HTTPArchive for me.

    Risk: Security Vulnerabilities. Another thing to take into consideration is the simple question of trust. If we’re bringing content from external sources onto our page, we have to hope that the assets that arrive are the ones we were expecting them to be, and that they’re doing only what we expected them to do. Imagine the damage that would be caused if someone managed to take control of a provider such as code.jquery.com and began serving compromised or malicious payloads. It doesn’t bear thinking about!

    Mitigation: Subresource Integrity. To the credit of all of the providers referenced so far in this article, they do all make use of Subresource Integrity (SRI). SRI is a mechanism by which the provider supplies a hash (technically, a hash that is then Base64 encoded) of the exact file that you both expect and intend to use. The browser can then check that the file you received is indeed the one you requested.
    <script src="https://code.jquery.com/jquery-3.4.1.slim.min.js" integrity="sha256-pasqAKBDmFT4eHoN2ndd6lN370kFiGUFyTiUHWhU7k8=" crossorigin="anonymous"></script>

    Again, if you absolutely must link to an externally hosted static asset, make sure it’s SRI-enabled. You can add SRI yourself using this handy generator.

    Penalty: Network Negotiation. One of the biggest and most immediate penalties we pay is the cost of opening new TCP connections. Every new origin we need to visit needs a connection opening, and that can be very costly: DNS resolution, TCP handshakes, and TLS negotiation all add up, and the story gets worse the higher the latency of the connection is. I’m going to use an example taken straight from Bootstrap’s own Getting Started. They instruct users to include the following four files:

    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="..." crossorigin="anonymous">
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="..." crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="..." crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="..." crossorigin="anonymous"></script>

    These four files are hosted across three different origins, so we’re going to need to open three TCP connections. How much does that cost? Well, on a reasonably fast connection, hosting these static assets off-site is 311ms, or 1.65×, slower than hosting them ourselves. By linking to three different origins in order to serve static assets, we cumulatively lose a needless 805ms to network negotiation. Full test.

    Okay, so not exactly terrifying, but Trainline, a client of mine, found that by reducing latency by 300ms, customers spent an extra £8m a year. This is a pretty quick way to make eight mill.
    By simply moving our assets onto the host domain, we completely remove any extra connection overhead. Full test.

    On a slower, higher-latency connection, the story is much, much worse. Over 3G, the externally-hosted version comes in at an eye-watering 1.765s slower. I thought this was meant to make our site faster?! On a high latency connection, network overhead totals a whopping 5.037s. All completely avoidable. Full test.

    Moving the assets onto our own infrastructure brings load times down from around 5.4s to just 3.6s. By self-hosting our static assets, we don’t need to open any more connections. Full test.

    If this isn’t already a compelling enough reason to self-host your static assets, I’m not sure what is!

    Mitigation: preconnect. Naturally, my whole point here is that you should not host any static assets off-site if you’re otherwise able to self-host them. However, if your hands are somehow tied, then you can use a preconnect Resource Hint to preemptively open a TCP connection to the specified origin(s): <head> ... <link rel="preconnect" href="https://code.jquery.com" /> ... </head> For bonus points, deploying these as HTTP headers will be even faster. N.B. Even if you do implement preconnect, you’re still only going to make a small dent in your lost time: you still need to open the relevant connections, and, especially on high latency connections, it’s unlikely that you’re ever going to fully pay off the overhead upfront.

    Penalty: Loss of Prioritisation. The second penalty comes in the form of a protocol-level optimisation that we miss out on the moment we split content across domains. If you’re running over HTTP/2—which, by now, you should be—you get access to prioritisation. All streams (ergo, resources) within the same TCP connection carry a priority, and the browser and server work in tandem to build a dependency tree of all of these prioritised streams so that we can return critical assets sooner, and perhaps delay the delivery of less important ones. N.B.
    Technically, owing to H/2’s connection coalescence, requests can be prioritised against each other over different domains as long as they share the same IP address.

    If we split our assets across multiple domains, we have to open up several unique TCP connections. We cannot cross-reference any of the priorities within these connections, so we lose the ability to deliver assets in a considered and well designed manner. Compare the two HTTP/2 dependency trees for the off-site and self-hosted versions respectively. Notice how we need to build new dependency trees per origin? Stream IDs 1 and 3 keep recurring. By hosting all content under the same origin, we can build one, more complete dependency tree, in which every stream has a unique ID as they’re all in the same tree.

    Fun fact: Stream IDs with an odd number were initiated by the client; those with an even number were initiated by the server. I honestly don’t think I’ve ever seen an even-numbered ID in the wild. If we serve as much content as possible from one domain, we can let H/2 do its thing and prioritise assets more completely in the hopes of better-timed responses.

    Penalty: Caching. By and large, static asset hosts seem to do pretty well at establishing long-lived max-age directives. This makes sense, as static assets at versioned URLs (as above) will never change, which makes it very safe and sensible to enforce a reasonably aggressive cache policy. That said, this isn’t always the case, and by self-hosting your assets you can design much more bespoke caching strategies.

    Myth: Cross-Domain Caching. A more interesting take is the power of cross-domain caching of assets. That is to say, if lots and lots of sites link to the same CDN-hosted version of, say, jQuery, then surely users are likely to already have that exact file on their machine? Kinda like peer-to-peer resource sharing. This is one of the most common arguments I hear in favour of using a third-party static asset provider.
    Unfortunately, there seems to be no published evidence that backs up these claims: there is nothing to suggest that this is indeed the case. Conversely, recent research by Paul Calvano hints that the opposite might be true: “There is a significant gap in the 1st vs 3rd party resource age of CSS and web fonts. 95% of first party fonts are older than 1 week compared to 50% of 3rd party fonts which are less than 1 week old! This makes a strong case for self hosting web fonts!” In general, third party content seems to be less well cached than first party content. Even more importantly, Safari has completely disabled this feature for fear of abuse where privacy is concerned, so the shared cache technique cannot work for, at the time of writing, 16% of users worldwide. In short, although nice in theory, there is no evidence that cross-domain caching is in any way effective.

    Myth: Access to a CDN. Another commonly touted benefit of using a static asset provider is that they’re likely to be running beefy infrastructure with CDN capabilities: globally distributed, scalable, low-latency, high availability. While this is absolutely true, if you care about performance, you should be running your own content from a CDN already. With the price of modern hosting solutions being what they are (this site is fronted by Cloudflare, which is free), there’s very little excuse for not serving your own assets from one. Put another way: if you think you need a CDN for your jQuery, you’ll need a CDN for everything. Go and get one.

    Self-Host Your Static Assets. There really is very little reason to leave your static assets on anyone else’s infrastructure. The perceived benefits are often a myth, and even if they weren’t, the trade-offs simply aren’t worth it. Loading assets from multiple origins is demonstrably slower. Take ten minutes over the next few days to audit your projects, and fetch any off-site static assets under your own control. Did you enjoy this? Hire me!
Source: Self-Host Your Static Assets – CSS Wizardry – CSS Architecture, Web Performance Optimisation, and more, by Harry Roberts

    Read at 08:08 am, Jun 3rd

Day of Jun 2nd, 2019

  • We All Need to Calm Down About Rare Earths

    That makes them a scary-sounding weapon in economic and diplomatic disputes. Back in 2010, China cut off exports of the minerals to Japan when the two governments were in dispute over ownership of some islands east of Taiwan.

    Read at 10:02 pm, Jun 2nd

  • Cops Across The US Have Been Exposed Posting Racist And Violent Things On Facebook. Here's The Proof.

    A review of the Facebook accounts of thousands of officers around the US — the largest database of its kind — found officers endorsing violence against Muslims, women, and criminal defendants.

    Read at 09:57 pm, Jun 2nd

  • Deceased G.O.P. Strategist’s Hard Drives Reveal New Details on the Census Citizenship Question

    WASHINGTON — Thomas B. Hofeller achieved near-mythic status in the Republican Party as the Michelangelo of gerrymandering, the architect of partisan political maps that cemented the party’s dominance across the country.

    Read at 02:44 pm, Jun 2nd

  • The shittiest project I ever worked on

    Sometimes in job interviews I've been asked to describe a project I worked on that failed. This is the one I always think of first.

    Read at 02:32 pm, Jun 2nd

  • Google Should Google the Definition of ‘Employee’

    Tech companies are goosing profits by relying on contract labor, taking advantage of lax labor laws. The editorial board represents the opinions of the board, its editor and the publisher. It is separate from the newsroom and the Op-Ed section.

    Read at 02:29 pm, Jun 2nd

  • A Big Tent with 30 Pages of Agreement?

    By Annalisa W You may have seen mention in emails from the Steering Committee, or seen events on the calendar, or — if you’re a DSA super-nerd like me — in the Citywide Leadership Committee (CLC) bulletins for the last two CLC meetings, but for those who haven’t been following: over the last

    Read at 02:03 pm, Jun 2nd

  • Climate crisis: UK should dramatically cut working hours to reduce greenhouse gas emissions, study says

    Europeans all need to work far shorter hours each week to help combat the climate crisis, a study has said.

    Read at 12:59 pm, Jun 2nd

  • Making Sense of the New American Right

    I like to start my classes on conservative intellectual history by distinguishing between three groups. There is the Republican Party, with its millions of adherents and spectrum of opinion from very conservative, somewhat conservative, moderate, and yes, liberal.

    Read at 12:41 pm, Jun 2nd

  • Cory Booker and the Orthodox rabbi were like brothers. Now they don’t speak.

    OXFORD, England — The Jewish festival of Purim was in full swing: Music was blasting, family and friends were bouncing to the beat, and 6-foot-3 Cory Booker was laughing and dancing while carrying a 5-foot-6 Orthodox rabbi in a clown suit on his back.

    Read at 12:11 pm, Jun 2nd

  • Keyword (Named) Arguments in Python: How to Use Them

    Keyword arguments are one of those Python features that often seems a little odd for folks moving to Python from many other programming languages. It doesn’t help that folks learning Python often discover the various features of keyword arguments slowly over time.

    Read at 11:59 am, Jun 2nd

  • North Korea executes envoy to failed U.S. summit -media; White House monitoring

    SEOUL (Reuters) - North Korea executed its nuclear envoy to the United States as part of a purge of officials who steered negotiations for a failed summit between leader Kim Jong Un and U.S. President Donald Trump, a South Korean newspaper said on Friday.

    Read at 11:50 am, Jun 2nd

  • North Korea Executed Envoy Over Trump-Kim Summit, Chosun Reports

    North Korea executed its former top nuclear envoy with the U.S. along with four other foreign ministry officials in March after a failed summit between Kim Jong Un and Donald Trump in Vietnam, South Korea’s Chosun Ilbo newspaper reported.

    Read at 11:46 am, Jun 2nd

  • Limping Along

    When you started playing no-limit, especially if you started playing live games, you probably did a lot of limping because that’s what everyone else was doing.

    Read at 11:45 am, Jun 2nd

  • Poker Lessons Learned from “Rich Dad Poor Dad”

    Poker and investing have a lot in common. By looking at some of the best practices in investing, we can reinforce some of the best practices in poker. Thinking of the game in this way can be beneficial because it forces you to play more rationally.

    Read at 11:40 am, Jun 2nd

  • Redirecting

    Read at 09:58 am, Jun 2nd

  • Our military can help lead the fight in combating climate change

    Last year, Hurricane Florence ripped through North Carolina, damaging Camp Lejeune. Hurricane Michael tore through Tyndall Air Force Base in Florida, leaving airplane hangars that housed our fifth-generation aircraft shredded and largely roofless.

    Read at 09:45 am, Jun 2nd

  • Policy Wonk vs. Movement Candidate

    Last week, Massachusetts Senator and presidential hopeful Elizabeth Warren dropped yet another of her Big Policy Ideas™ on Medium. The topic: climate change. If ever there were a problem in need of Warren’s technocratic prowess, it’s the climate crisis.

    Read at 09:40 am, Jun 2nd

  • Lazy Loading In React

    React apps use bundlers like webpack to package their code for the browser, and it is this bundled package that the browser uses to render your React application. Now imagine creating an application…
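    The mechanism lazy loading builds on is plain dynamic import(): bundlers split each dynamically imported module into its own chunk, and React.lazy wraps the same idea for components. A minimal sketch of the underlying behaviour (the module loaded here is an illustrative stand-in for a heavy component chunk):

```javascript
// Lazy loading in a nutshell: a dynamically imported module is only fetched
// and evaluated the first time it is actually needed, not at startup.
async function renderDashboard() {
  // In a real app this would be `import('./Dashboard')` and the bundler
  // would emit a separate chunk for it; Node's built-in 'util' stands in.
  const { format } = await import('util');
  return format('rendered %s', 'Dashboard');
}

renderDashboard().then(console.log); // "rendered Dashboard"
```

    In React itself the equivalent would be `const Dashboard = React.lazy(() => import('./Dashboard'))` rendered inside a `<Suspense>` boundary, so the chunk is only downloaded when the component first renders.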

    Read at 09:40 am, Jun 2nd

  • Should you Invite Phil Laak to your Home Game? - by Brad Laidman

    If you want to have fun, definitely, but you better be able to afford it. You can argue forever about who the best cash game player of all time is or was. The best I’ve ever seen is probably Phil Ivey. But what if every poker player in the world came to me after taking a horrendous beat and losing every penny they had getting themselves all-in as a 99% favorite, where they could gamble everything they owned with checks or markers like in bad scenes from older movies and television shows? If I had a million dollars to back someone, I would give it to Phil Laak.

    Ivey has leaks. He loves to play craps, which has huge monetary variance; he probably has huge cash game swings at poker; and while he was probably morally right about his baccarat stunt, it ended up being a loser and a waste of time. Phil Laak would just steadily grow his bankroll playing against suckers. He’s wicked smart. He’s probably always sober when real money is at stake, and though by nature a legitimate goofball, don’t think for a second that he hasn’t used that image to fund his lifestyle. I remember seeing Phil’s friend Antonio Esfandiari looking at Mike Matusow on television in disbelief, wondering why he was always broke and not just generating income at the Bellagio playing $25-$50 No Limit Hold ‘em like he and Phil were at the time. Matusow would win big tournaments, but it was always an open secret that he never really had any real money. I was able to spend some time with Laak at the 2007 WSOP because one of the kids also covering the event from Amsterdam was able to help Laak fix his PlayStation.

    In the 1986 movie “The Color of Money,” Paul Newman takes a flake played by Tom Cruise and is too successful at turning him into a cutthroat hustler.

    Eddie Felson: You’re some piece of work... You’re also a natural character.
    Vincent Lauria: You see? I been tellin’ her that. I got natural character.
    Eddie Felson: That’s not what I said, kid.
    I said you are a natural character; you’re an incredible flake. But that’s a gift. Some guys spend half their lives trying to invent something like that. You walk into a pool room with that go-go-go, the guys’ll be killing each other, trying to get to you. You got that... But I’ll tell you something, kiddo. You couldn’t find Big Time if you had a road map.

    Stu Ungar was the greatest Gin player in history, but he barely made a dime at it after his first year in Las Vegas. He came into town and destroyed the best players around talking trash, and then no one would play him. Then he got himself banned from local tournaments that he was crushing. I asked Nolan Dalla, who wrote the definitive book about Ungar, “One of a Kind: The Rise and Fall of Stuey,” how the local tournaments got away with banning Ungar, and he told me that Stuey would go in and destroy a bunch of somewhat comfortable Senior Citizens and be really cocky and arrogant about it. They just wanted a pleasant weekend outing. Ungar should have learned from Amarillo Slim, who wasn’t nearly as good at either game. The way to make money at poker is game selection.

    Laak, as I said, is definitely a flake, but he can turn it on and off and knows exactly what he’s doing. I saw Milton Friedman’s nephew playing like a maniac at Caesar’s at six in the morning, and Laak was surprised that he didn’t get a phone call. He was, at least at that time, wired to every juicy game in town. He admitted to me that he wasn’t much of a tournament player and was just in them for the publicity. After he busted out of what was at that time the $50,000 HORSE event, I told him that I had been rooting for him. Laak said laconically, “I was rooting for me too.” Laak was wearing headphones during the tournaments, but he wasn’t zoning out to music. He showed me that they were noise reduction headphones. He was pretending to be listening to music.
    He was actually listening to everyone else, and he was using that PlayStation to keep tabs on the tournament chip counts. He’s fun and nice, but he’s wicked smart, and if you play cash with him, he will act like he put a bad beat on you and shouldn’t have been in the hand. Don’t be fooled: he knows exactly what his pot odds are in every single hand unless he doesn’t care enough about the stakes to try. Even if he loses a hand to you, he may be setting you up for something in the future. I read that the baseball player Manny Ramirez supposedly missed hittable pitches in April so pitchers would give him the same look later in the season when it mattered more. Laak may not be the best player in the world, but if your intent is to make money with little risk, you don’t play high stakes against the best; you play amateurs and charge them the price of your company. For a long time it was well known that the juiciest game in Los Angeles was Larry Flynt’s home game, but it was near impossible to get an invite. Phil Laak probably would have been welcome and probably would have taken a ton of money from that game. Laak isn’t like the rest, who are basically gambling degenerates looking to test themselves. Laak tests himself doing crazy stunts, like when he almost got killed riding an ATV in the desert. When he plays poker, and to tell you the truth I have no idea how much that is these days, he’s there to take your money. It’s just that he’s nice and entertaining enough that it’s probably worth the price if you are wealthy enough. Laak isn’t going to die broke. At least back in 2007, he was a huge fan of televised poker and had a fantastic philosophy towards his profession. “The trick is to do whatever you need to do at the table to achieve fun, freedom, and fulfillment.” Laak was born a math nerd like I was. A lot of math nerds wind up with gambling problems, be it in a casino or in the financial markets.
    I know people who are worth millions and they still can’t quit their addiction to the rush of risk. It’s not gambling to them unless the stakes are high enough to hurt them. There was a famous story about Doyle Brunson trying to get Bill Gates to leave a $2-$4 limit game to join his high stakes table, and Gates probably wanted the competition, but even he wasn’t willing to spend his money to do it. Probably Laak’s only weakness is that he truly does like to play math games, but he’s super smart, so if you get him at the table you’d better hope he’s got his mind on something else or doesn’t care about his results. Personally, I’d much rather hang with Jennifer Tilly than play poker. He told me that they were big Howard Stern fans. She’s always been a character too, and she was taught by Laak, so you know he told her to use everything in her arsenal to disarm you and make you play less than your best against her. She wound up with a ton of money from the dissolution of her marriage to Sam Simon, one of the co-creators of “The Simpsons”. While she was and is extremely beautiful, my guess is that Sam Simon, who I used to be in an online trivia group with, would never have been with someone who wasn’t smart and funny in the first place. When I hung with Laak in 2007, Richard Marcus, who claims to have been a roulette cheater for many years, had just published a book called “Dirty Poker” in which he made a less-than-subtle claim that Tilly’s WSOP bracelet in the 2005 Ladies Event was the result of a fix. Laak’s response to that was pretty convincing. “It didn’t bother us at all. Everybody in the public eye has someone saying something ludicrous about them, but clearly it was such an absurd claim that it didn’t pick up any traction. I read the first ten pages of that book, and it was like a certain Hollywood star, with a certain professional poker playing boyfriend, in a national championship.
    I haven’t talked to a single person who has given the claim any credence, and it’s clearly not true.” Tilly had just made a fool of herself on NBC’s “Poker After Dark” when she didn’t get paid much against Patrik Antonius after flopping top set. If you watch the clip, Phil Ivey and Jennifer Harman look at her like she’s the stupidest person on the planet after her hand was revealed. Laak said that Tilly was a bit nervous playing poker on television and that she knew she played the hand badly, but my guess is that hand made Tilly a ton of money in the long run. You don’t fix a bracelet event. If you are smart, you butcher a hand on a television show and have a ton of rich, friendly drunks invite you to their home games. I don’t think either that hand or that tourney was rigged or thrown, but to me the probability of Tilly doing an Amarillo Slim move taught to her by Phil Laak is infinitely higher than her bracelet not being legit. Invite Phil Laak to your home game if you can afford it. You’ll have a great time, but you’d be foolish if you expected to beat him. If you were really that good, he’d probably say no thank you and go scuba diving or do something else more fun or profitable with his time. Invite him over to play video games. It would be cheaper and you would still probably lose. Source: Should you Invite Phil Laak to your Home Game? – by Brad Laidman

    Read at 10:17 am, Jun 2nd

Day of May 31st, 2019

  • Design on a deadline: How Notion pulled itself back from the brink of failure

    In 2015, productivity tool Notion nearly died. Its founders, Ivan Zhao and Simon Last, had built their app on a suboptimal tech stack, and it crashed constantly. Their angel investment money dwindling, they faced a brutal choice: Fire their fledgling team of 4 and start over, or run out of cash.

    Read at 11:09 pm, May 31st

  • Confidential draft IRS memo says tax returns must be given to Congress unless president invokes executive privilege

    A confidential Internal Revenue Service legal memo says tax returns must be given to Congress unless the president takes the rare step of asserting executive privilege, according to a copy of the memo obtained by The Washington Post.

    Read at 11:02 pm, May 31st

  • New Video Shows Nigel Farage Courting Fringe Right-Wing Figures At A Private Tea Party Hosted At The Ritz

    New footage reveals Nigel Farage privately sought money and help for his new Brexit Party from fringe right-wing figures including a millionaire Putin cheerleader and a self-proclaimed “influencer” who has posted a string of anti-Islam remarks online.

    Read at 06:54 pm, May 31st

  • Fundamentals of Product-Market Fit

    As we wrote the Holloway Guide to Raising Venture Capital (available for purchase very soon), we found ourselves referring to the concept of product-market fit. A lot.

    Read at 06:49 pm, May 31st

  • Track Changes has a Status Hierarchy (and we all know the rules)

    Today I worked on three separate collaborations: feedback on a thesis draft, a paper revision with colleagues at other universities, and a grant proposal with mostly senior scholars. Each collaboration represents my integration with distinct project teams, on which my status varies.

    Read at 06:39 pm, May 31st

  • My Rapist Apologized

    My 12-year-old daughter recently asked me what I think about abortion. She walked into the kitchen, poked around the refrigerator, then spun around and blurted it out: “I can’t decide what I think about abortion. I want to know what you think.” My daughter is an avid consumer of the news.

    Read at 06:37 pm, May 31st

  • The Twitch argument for GitHub Sponsors

    I’m thrilled about the launch of GitHub Sponsors, which signals a strong commitment from GitHub to support financial infrastructure and tooling for developers. It’s arguably the biggest open source funding experiment to date and will create plenty of opportunities to learn from.

    Read at 06:29 pm, May 31st

  • Arpan Sheth

    Hello! What's your background and what do you do? I grew up in Indore, India (one of the cleanest cities in India) and also completed my undergraduate studies in computer science and engineering there.

    Read at 06:21 pm, May 31st

  • America’s Cities Are Unlivable. Blame Wealthy Liberals.

    The demise of a California housing measure shows how progressives abandon progressive values in their own backyards. To live in California at this time is to experience every day the cryptic phrase that George W. Bush once used to describe the invasion of Iraq: “Catastrophic success.”

    Read at 06:03 pm, May 31st

  • Using hooks to replace Redux

    As a beginner React developer coming from Vue, I found myself struggling with Redux and all the boilerplate needed to make simple state management the right way: action types, action…

    Read at 02:31 pm, May 31st

  • The Enduring Scam of Corporate Tax Breaks

    Why these deals have remained popular in Washington despite the overwhelming evidence that they don't work

    Read at 10:55 am, May 31st

  • What I Learned Trying To Secure Congressional Campaigns

    You know how it happens. You try to secure one Congressional campaign, and then another, and pretty soon you can't stop. You'll fly across the country just to brief a Green Party candidate in a district the Republicans carried by 60 points. You want more, more, always looking for that next fix.

    Read at 09:36 am, May 31st

  • Sen. Elizabeth Warren Blasts Big Tech, Advocates Taxing Rich In 2020 Race

    Massachusetts Sen. Elizabeth Warren has long been known as a consumer advocate and a critic of big corporations. But she's not the only progressive seeking the right to challenge President Trump in 2020 who is highlighting economic inequality. Vermont Sen.

    Read at 09:19 am, May 31st

  • D.C.'s new odd couple: AOC, Ted Cruz team up to drain the swamp

    It was a match made on Twitter, where the two sparred as recently as April over the price of a croissant and the minimum wage.

    Read at 08:30 am, May 31st

  • An Illinois Mom Ordered A Shirt For Her 3-Year-Old From A Chinese Retailer That Came With A "Fuck The Police" Slogan

    A mom from Benton, Illinois, hilariously discovered the shirt she ordered for her 3-year-old daughter from a Chinese retailer came with an additional design element that wasn't originally advertised on its site.

    Kelsey Dawn Williamson, 23, told BuzzFeed News she's profoundly confused and has not stopped laughing since she received the T-shirt order from AliExpress, an online, Etsy-like retailer based in Hangzhou, China, that hosts small businesses.

    On May 10, Williamson placed an order for this shirt, which features an iconic image of classic children's book characters Frog and Toad, for her daughter Salem. The shirt retails for about $5. AliExpress.com named the shirt (heavily loaded with search engine optimization terms) "Kids Two Frog Riding Design Baby Boys/Girl TShirt Kids Funny Short Sleeve Tops Children Cute T-Shirt."

    "Salem probably has 50-plus different little boutique outfits from my favorite store on AliExpress," said Williamson. So she did not expect anything unusual to arrive in the mail.

    On Tuesday, however, she opened the package to find the Chinese retailer had taken liberty with 3-year-old Salem's new shirt by adding a slogan to it. "I literally did not know how to react so I just took a few moments to stare at it and try to process," Williamson said. "Of all the things they could have added, why that? On a children’s-size shirt?" she asked.

    (Note: Frog and Toad have become a meme, with "fuck the police" being one of the more popular photoshops that originated on Reddit.)

    She said she FaceTimed her husband about it and they "just screamed together." "We both just lost it, dying of laughter. All he could say was 'Oh shit,'" she added.

    Williamson said Salem, of course, has no idea what the saying means, and she really likes her new Frog and Toad T-shirt. But Salem's mom could not help but share it on Facebook with her friends and family. Her post has since gone viral. Williamson told BuzzFeed News most people are amused by it all.

    However, she's received a handful of messages on Facebook from people shaming her and her daughter for her weight. "People were actually messaging me just to say mean things about her," she said. "A ton of people calling her fat, asking me what I feed her to make her so big, telling me the shirt I bought was too small."

    She said she almost took the post down because of the harassment. But she's trying to stay in good spirits about a funny and faulty product that she's now appreciating more than the originally advertised one. "I’ve told [Salem], 'People really like your frog shirt!'" Williamson said, laughing. "It’s going in her baby box so we can bring it up when she’s older."

    BuzzFeed News has reached out to AliExpress to try to figure out what may have happened with the shirt, and to determine whether the design was intentional.

    Source: An Illinois Mom Ordered A Shirt For Her 3-Year-Old From A Chinese Retailer That Came With A “Fuck The Police” Slogan

    Read at 09:34 pm, May 31st

  • Bringing Cabán over the Finish Line First — NYC Democratic Socialists

    By Queens Electoral Working Group

    Tiffany Cabán’s grassroots campaign for Queens District Attorney has already achieved its first goal: changing the conversation about racist over-policing and mass incarceration in Queens. And with only a month to go until the June 25 Democratic Primary, it looks like the 31-year-old queer, Latina public defender, who is both a DSA member and DSA-endorsed candidate, may win an outright victory, if volunteers and donations keep on coming.

    The change in the conversation is obvious to those who have been attending candidate forums over the past six months. One by one, Cabán’s rivals have adopted at least some of her positions, and even her language about “promoting community stability” to prevent recidivism and crime. At a late April candidate forum sponsored by the Lesbian and Gay Democratic Club of Queens, the most conservative candidate in the race, Judge Greg Lasak, agreed under pressure not to prosecute sex workers and their clients. Cabán has promised this from the beginning, because such prosecution criminalizes work people need to support themselves and makes it impossible for them to report abuse by clients or traffickers.

    Still, Cabán remains the clear progressive voice in the race, pledging to:

    - Never request cash bail (it’s unfair to the poor);
    - Not prosecute cases of drug possession, turnstile jumping, and other so-called broken windows crimes;
    - Publish the names of police who have committed perjury (no one should be charged or convicted based on the testimony of a proven liar), and prosecute corrupt police officers and prosecutors;
    - Require all ADAs to understand and avoid, when possible, charges that could lead to deportation of immigrants;
    - Not cooperate with ICE and help defendants avoid capture by, for example, allowing them not to appear at some hearings;
    - Charge real estate companies, lenders, employers, and healthcare providers with tenant abuse, mortgage lending and foreclosure abuse, wage theft, and overprescription and deceptive marketing of opioids, respectively;
    - Set up community advisory groups to address policy issues, such as how to avoid gun violence, and to participate in allocating discretionary funds, including the $100 million in federal asset forfeiture funds, to badly needed programs.

    Cabán also argues for closing Rikers without building new jails, and that her policies could reduce the jail population from Queens by 80% in less than two years, making a new jail in Queens unnecessary.

    Surprising Strength

    Cabán’s progressive platform led to her receiving the highest overall grade in the 5 Boro Defenders Candidate Guide, and to a remarkably long string of endorsements from grassroots and national organizations, civil rights leaders and elected officials. Endorsements from the Working Families Party and Real Justice in late March and early April were game changers that brought badly needed resources, including staff and mass texting and phone banking support from huge volunteer bases. The VOCAL-NY Action Fund (which organizes formerly incarcerated people, homeless people, substance abusers and AIDS survivors) and Make the Road New York Action Fund (which organizes immigrants) brought the credibility and active participation of directly impacted communities.

    The Ocasio-Cortez endorsement on May 22 was the next huge shot in the arm, bringing visibility, credibility and money. After news of the endorsement broke in The New York Times and trended on social media, canvassers for Cabán noted an immediate pick-up in positive results from voters, and online contributions soared.

    For weeks, Cabán has been cited in online media reports as the leading challenger to Melinda Katz, the perceived frontrunner as the Democratic County machine’s candidate and twice-elected incumbent Queens Borough President. But Katz appears to have lost support recently in Southeast Queens to Councilmember Rory Lancman, the other career politician in the race, who represents much of that area. She may also be losing ground in her Forest Hills/Kew Gardens home base to Lasak.

    It’s impossible to predict with any confidence how an election with seven candidates and no public polling will shake out. Katz, Lancman, and Lasak (the three white candidates in a borough that is 75% people of color) have raked in over $1 million each that they are using to air television commercials and mail literature. The three former prosecutors of color running haven’t gathered strong financial or volunteer support.

    The Cabán campaign is by far the strongest in the field. In a single week in mid-May, its field schedule included more than 70 events. Hundreds of volunteers are knocking on doors and talking to voters at crowded community events in Western Queens (Astoria, Long Island City, Sunnyside, Woodside and Jackson Heights), where Bernie Sanders did well in 2016 and Alexandria Ocasio-Cortez did well in 2018, and where a progressive mobilization led Amazon to give up its plan for a second headquarters. VOCAL-NY is focused on canvassing the Queensbridge and Ravenswood housing projects; Make the Road on Spanish-speaking parts of Corona and East Elmhurst. The Cabán campaign is also pushing to compete in West Central Queens (Rego Park, Forest Hills, and Kew Gardens) and in select parts of southern Queens, from Richmond Hill and Ozone Park to Jamaica, Cambria Heights, Laurelton and the Rockaways. Texting and phone banking across Queens will provide crucial coverage of voters the field campaign misses.

    NYC-DSA has been central to the Cabán campaign from the beginning. We were the first organization to endorse her, and we are leading much of the field campaign and helping in other areas. But Queens is huge. With 2.3 million people and 109 square miles of land, Queens is far larger than Philadelphia, St. Louis or Boston, where other progressive prosecutors have won upsets in recent years. It’s also far larger than any district where a DSA member has been elected.

    We need more volunteers in the field and on the phones, on a repeat basis, to support the final push that can bring Cabán over the finish line in first place. We also need more money, since Cabán is the only candidate to refuse corporate and real estate-related contributions. Help us make Queens safe and just for all its residents. We can make a huge difference in people’s lives immediately, and deliver another blow to the Queens machine while we’re at it.

    Source: Bringing Cabán over the Finish Line First — NYC Democratic Socialists

    Read at 09:19 pm, May 31st

  • Class Components in Vue are No Longer Happening ― Scotch.io

    An upcoming Vue update was set to have classes implemented. In React and Angular, we can create components using JavaScript classes. Some people prefer this way of component creation as it can lead to better readability. It can be a confusing tool, though, since people start to think of JavaScript classes as classes in other languages that have inheritance. JavaScript classes are just syntactic sugar over JavaScript functions, however, and can lead to a bit of confusion.

    In Vue, we create components using objects, like so:

        new Vue({ })

        <script>
        export default {
        }
        </script>

    There was a proposal started on February 26, 2019 on GitHub that would allow us to create components with classes in addition to objects. This was targeted for the Vue 3.0 release. Here were the initially proposed classes:

    In Browser

        class App extends Vue {
          static template = `
            <div @click="increment">
              {{ count }} {{ plusOne }}
            </div>
          `

          count = 0

          created() {
            console.log(this.count)
          }

          get plusOne() {
            return this.count + 1
          }

          increment() {
            this.count++
          }
        }

    In Single File Components

        <template>
          <div @click="increment">
            {{ count }} {{ plusOne }}
            <Foo />
          </div>
        </template>

        <script>
        import Vue from 'vue'
        import Foo from './Foo.vue'

        export default class App extends Vue {
          static components = { Foo }

          count = 0

          created() {
            console.log(this.count)
          }

          get plusOne() {
            return this.count + 1
          }

          increment() {
            this.count++
          }
        }
        </script>

    Pulled directly from the RFC on GitHub:

    Vue's current object-based component API has created some challenges when it comes to type inference. As a result, most users opting into using Vue with TypeScript end up using vue-class-component. This approach works, but with some drawbacks:

    - Internally, Vue 2.x already represents each component instance with an underlying "class". We are using quotes here because it's not using the native ES2015 syntax but the ES5-style constructor/prototype function. Nevertheless, conceptually components are already handled as classes internally.
    - vue-class-component had to implement some inefficient workarounds in order to provide the desired API without altering Vue internals.
    - vue-class-component has to maintain typing compatibility with Vue core, and the maintenance overhead can be eliminated by exposing the class directly from Vue core.

    The primary motivation of native class support is to provide a built-in and more efficient replacement for vue-class-component. The affected target audience are most likely also TypeScript users. The API is also designed to not rely on anything TypeScript-specific: it should work equally well in plain ES, for users who prefer using native ES classes. Note we are not pushing this as a replacement for the existing object-based API - the object-based API will continue to work in 3.0.

    There are two major reasons why the Class API proposal was dropped:

    - Composition functions, classes, and objects would allow us to make the same component three different ways. Vue has always focused on developer experience, so it's comforting to see them try to simplify the developer experience again. They feel that three ways to do the same thing is not the best.
    - With the coming composition functions, TypeScript support is one of the main benefits. Support is better in this approach than in the classes approach.

    With the two new APIs, #22 Advanced Reactivity API and #23 Dynamic Lifecycle Injection, we have a new way of declaring component logic: using function calls. These are inspired by React Hooks. In composition functions, a component's logic will happen in a new setup() method. It is pretty much data() but gives us more flexibility using function calls inside of it.

        import { value, computed, watch, onMounted, inject } from 'vue'

        const App = {
          props: {
            a: String,
            b: Number
          },
          components: {
          },
          setup(props) {
            const count = value(1)
            const plusOne = computed(() => count.value + 1)

            function inc() {
              count.value++
            }

            watch(() => props.b + count.value, val => {
              console.log('changed: ', val)
            })

            onMounted(() => {
              console.log('mounted!')
            })

            const injected = inject(SomeSymbol)

            return {
              count,
              plusOne,
              inc,
              injected
            }
          },
          render({ state, props, slots }) {
          }
        }

    I'm excited to see where the Vue team goes with these "composition functions". I like the idea of thinking of our components more as composed parts, since that's more in line with a JavaScript way of thinking. Classes lead to people thinking in a more object-oriented way. This also leans towards a thought process that is similar to how React is moving with React Hooks. The "composition functions" also allow for better TypeScript support, which in turn leads to a better developer experience and tooling.

    I'm looking forward to seeing where Vue goes next. I don't mind Vue's way of declaring components with objects, and it's looking good with the way they are sticking with it. What are your thoughts on the dropping of the classes proposal?

    Source: Class Components in Vue are No Longer Happening ― Scotch.io
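    As a quick aside on the "classes are just syntactic sugar over functions" point above, here is a minimal plain-JavaScript sketch (the `Counter` names are purely illustrative, not from Vue) showing a class and its equivalent ES5 constructor/prototype form behaving identically:

    ```javascript
    // A class...
    class Counter {
      constructor() {
        this.count = 0;
      }
      increment() {
        this.count++;
      }
    }

    // ...and roughly what it desugars to in the ES5 style Vue 2.x uses internally.
    function CounterFn() {
      this.count = 0;
    }
    CounterFn.prototype.increment = function () {
      this.count++;
    };

    const a = new Counter();
    const b = new CounterFn();
    a.increment();
    b.increment();

    console.log(a.count, b.count);   // 1 1
    console.log(typeof Counter);     // "function" -- a class IS a function
    console.log(typeof CounterFn);   // "function"
    ```

    Both forms produce instances with the same observable behavior, which is why conceptually Vue 2.x components are "already classes" even without the `class` keyword.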

    Read at 08:28 pm, May 31st

Day of May 30th, 2019

  • The World’s Most Annoying Man

    You know, of course, what the most grating and infuriating human behavior is. It is not when another person is simply being unreasonable.

    Read at 11:49 am, May 30th

  • 5 things you didn’t know about React DevTools

    If you’re into React development, chances are you’ve tried the official React DevTools. This browser extension lets you debug your components, and is available for Chrome, Firefox and even as a standalone application for Safari and React Native debugging.

    Read at 08:10 am, May 30th

  • How to run async JavaScript functions in sequence or parallel

    The async and await keywords are a great addition to JavaScript. They make it easier to read (and write) code that runs asynchronously. That includes things like: I’m going to assume you’re familiar with Promises and making a simple async/await call.
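    The sequence-versus-parallel distinction the article covers can be sketched in a few lines of plain Node-style JavaScript (the `delay` helper is illustrative, not from the article):

    ```javascript
    // Resolve with `value` after `ms` milliseconds.
    const delay = (ms, value) =>
      new Promise(resolve => setTimeout(() => resolve(value), ms));

    async function inSequence() {
      // Each await finishes before the next starts: roughly 200ms total.
      const a = await delay(100, 'first');
      const b = await delay(100, 'second');
      return [a, b];
    }

    async function inParallel() {
      // Both timers start immediately; Promise.all waits for both:
      // roughly 100ms total.
      const [a, b] = await Promise.all([
        delay(100, 'first'),
        delay(100, 'second'),
      ]);
      return [a, b];
    }

    inSequence().then(r => console.log('sequence:', r));
    inParallel().then(r => console.log('parallel:', r));
    ```

    Both functions return the same values; the difference is only in total elapsed time, which is why `Promise.all` is the usual choice when the calls don't depend on one another.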

    Read at 08:10 am, May 30th

  • 10 Key Learnings in Rust after 30,000 Lines of Code

    I used to love C and C++. If we date back to the mid 90’s, I did C, probably poor C++ which I thought was great, and Assembly exclusively as part of my reverse engineering/security work.

    Read at 08:07 am, May 30th

  • Python vs Pharo

    Python is widely regarded as an easy language for beginners. But did you know that there’s a much easier language, one that has far greater capabilities? It’s called Pharo, a modern variant of Smalltalk. Let’s compare the two… Pharo is much, much simpler than Python.

    Read at 07:36 am, May 30th

Day of May 29th, 2019

Day of May 28th, 2019

Day of May 26th, 2019