<![CDATA[ Zain Rizvi ]]> https://www.zainrizvi.io https://www.zainrizvi.io/favicon.png Zain Rizvi https://www.zainrizvi.io Mon, 18 Mar 2024 06:13:23 -0700 60 <![CDATA[ Want to make a feature change to PyTorch? ]]> https://www.zainrizvi.io/blog/want-to-make-a-feature-change-to-pytorch/ 6303d354a5b7e0003d11520a Mon, 22 Aug 2022 12:10:53 -0700 This page contains instructions on how to propose and implement feature changes to PyTorch.

Over half the commits made to PyTorch every week come from the wider open source community. Sometimes researchers want a specific feature implemented, companies have hardware they want to support, or a developer has found a bug they really need fixed.

With such a large number of commits coming in, PyTorch needs a process for managing it all to keep the codebase maintainable. For smaller changes, like a five-line bug fix, this takes the form of a regular PR review: make a change, submit a PR, and a core PyTorch contributor will review it soon.

But sometimes you’ll want to contribute a larger change, like an enhancement to an existing function or even a brand new feature. Submitting a 1,000 line PR for a feature no one has heard about rarely results in a good experience (for either the reviewer or the author).

For larger changes like this, we have a more in-depth process that’s similar in spirit to a design review. It ensures that the feature you’re working on becomes something the core owners are happy to accept and maintain going forward.

The Request for Comments

To propose a new feature, you’ll submit a Request For Comments (RFC). This RFC is basically a design proposal where you can share a detailed description of what change you want to make, why it’s needed, and how you propose to implement it.

It’s easier to make changes while your feature is in the ideation phase than in the PR phase, and this doc gives core maintainers an opportunity to suggest refinements before you start coding. For example, they may know of other planned efforts that your work would otherwise collide with, or they may suggest implementation changes that make your feature more broadly usable.

Step 1: Create an RFC

RFCs are located in their own repository.

To create one:

  1. Fork the https://github.com/pytorch/rfcs repository
  2. Copy the template file RFC-0000-template.md to RFC-00xx-your-feature.md and fill it out with your proposal. The template is a guideline; feel free to add sections as appropriate.
  3. You may also have the template simply link to another editor, like a Google Docs file, but please ensure that the document is publicly visible. This can make the proposal easier to edit, but commenting doesn’t scale very well, so please use this option with caution.
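The fork-and-copy steps above can be sketched as a few shell commands. This is a minimal local simulation: the template contents and the RFC-00xx-your-feature.md name are illustrative stand-ins, and in practice you'd clone your fork of https://github.com/pytorch/rfcs instead of creating the repository locally.

```shell
# Simulate a local clone of the rfcs repository
# (in practice: git clone https://github.com/<your-username>/rfcs.git)
mkdir -p rfcs
git -C rfcs init -q
printf '# RFC-0000: Template\n' > rfcs/RFC-0000-template.md  # stand-in for the real template
git -C rfcs add RFC-0000-template.md
git -C rfcs -c user.name=demo -c user.email=demo@example.com commit -qm "template"

# Start the proposal from the template on a dedicated branch
git -C rfcs checkout -qb rfc-your-feature
cp rfcs/RFC-0000-template.md rfcs/RFC-00xx-your-feature.md
# ...edit RFC-00xx-your-feature.md with your proposal, then commit
#    and push the branch to open the PR described in Step 2...
git -C rfcs add RFC-00xx-your-feature.md
git -C rfcs -c user.name=demo -c user.email=demo@example.com commit -qm "Add RFC-00xx: your feature"
```

Working on a dedicated branch keeps the eventual pull request focused on the single RFC file, which makes the review in Step 2 easier.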

Step 2: Get Feedback on The RFC

  1. Submit a pull request titled RFC-00xx-your-feature.md
  2. Before your PR is ready for review, give it the draft label.
  3. Once it’s ready for review, remove the draft label and give it the commenting label
  4. File an issue against the https://github.com/pytorch/pytorch repository requesting a review of your proposal.
  5. In the description, include a short summary of your feature and a link to your RFC PR
  6. PyTorch triage review will route your issue to core contributors with the appropriate expertise.
  7. Build consensus. Those core contributors will review your PR and offer feedback. Revise your proposal as needed until everyone agrees on a path forward. (Note: a proposal may get rejected if it comes with unresolvable drawbacks or goes against the long-term plans of the PyTorch maintainers.)

Step 3: Implement your Feature

  1. If your RFC PR is accepted, you can merge it into the pytorch/rfcs repository and begin working on the implementation.
  2. When you submit PRs to implement your proposal, remember to link to your RFC to help reviewers catch up on the context.
]]>
<![CDATA[ The Software Engineer's Career Ladder ]]> https://www.zainrizvi.io/blog/the-software-engineers-career-ladder/ 62a41d8ae77053003d7fa626 Fri, 10 Jun 2022 22:00:11 -0700 Most tech companies have an established career ladder. They all follow this general form:

  • Junior engineer: Take this tightly defined feature & build it
  • Mid-level engineer: Take this vaguely defined feature & build it
  • Senior engineer: Take this known problem & figure out how to solve it
  • Staff engineer: Take this goal & find the problems we should be solving

Let's dive deeper into the default expectations of these roles and what it takes to zoom past them

Junior Engineer

Take this tightly defined feature & build it

These bright young fledgling college hires have just hatched from their eggs. Precocious and eager to please, but still wet behind the ears.

At this level, managers expect juniors to need a lot of hand-holding and explanations. This generally takes the form of giving them well-defined projects where the path forward is pretty clear.

Mid-level engineer

Take this vaguely defined feature & build it

The tasks given now become more nebulous.

The manager can get away with spending less time on scoping the project. Maybe they haven't validated that their planned approach will work, or they may still be unclear on some of the implementation details. And that's a good thing!

The dev has developed to the point where they can be trusted to fill in the blanks on their own and complete the project anyway.

This way, the young dev has started taking some of the scoping work off of their manager's plate.

Senior engineer

Take this known problem & figure out how to solve it

They're no longer asked to build specific features. Instead, managers simply hand the entire problem over to these engineers to solve on their own.

They have to explore the solution space, identify the best options, and get the approval of their stakeholders (such as their manager, teammates, or customers)

They might even split the solution up into individual features and ask the juniors on their team to work on those. This makes their manager's life even easier: less planning and scoping to do!

Side note: At many large tech companies "Senior engineer" is considered a terminal role. That means everyone is expected to get to this level eventually. But to go any higher you need to work extra hard and also get lucky. The responsibilities change significantly at the next level up, and they're not everyone's cup of tea.

Staff engineer

Take this goal & find the problems we should be solving

You now think at an even higher level: identifying the problem and splitting it up into smaller sub-problems that Senior engineers can be enlisted to solve!

Except, merely identifying the problem isn't enough. You need to be able to persuade other people that the problem is a big deal and that it's more important than all the other problems the company is facing!

That's not easy!

It requires finesse; you have to manage upwards (i.e. politics). If you're successful, you'll likely end up leading the squad tasked with fixing the problem.

And who do you think gets to motivate those senior engineers to work on the problem?

What's in common here?

What do all these levels have in common? Each time you go up from one level to the next you're:

  • Taking on more ambiguous work
  • Influencing and getting support from others to a larger and larger extent

The biggest shift comes from taking a bigger share of ownership over the problems and devising solutions to fix them.

Don't Wait

If you've ever worked on a side project, some of this may have sounded vaguely familiar: It's exactly what you do when you're working on your own!

You do it instinctively then, since you have full context on the problem, you know the goal you're trying to achieve, and you have ideas on how to get there. The bits you don't have to worry about are interpersonal dynamics, and (depending on your goals) finding customers 🙂

And if you can do this in your side projects, you can do it at work too.

Even as a junior engineer, if you work to build up your context around the goals of your team and your org, along with the key problems they're facing, you can start suggesting solutions to those obstacles. Even if your ideas are rejected at first, keep trying. You'll end up practicing those leadership skills, develop that creative muscle, and stand out from the crowd just for trying.

And eventually, you might just find your ideas gaining traction

This article started out as a viral tweet. If you want more content like this, follow me on Twitter or sign up for my newsletter below.

If you want to learn about how to get a better paying software engineering job, you might appreciate The Interview Advice that got me offers from Microsoft, Google and Stripe

]]>
<![CDATA[ Insider's Guide to Passing FAANG Interviews ]]> https://www.zainrizvi.io/blog/insiders-guide-to-passing-faang-interviews/ 629e8fe2e77053003d7fa5cc Mon, 06 Jun 2022 17:22:27 -0700 I've gotten offers from Google, Stripe, Facebook, Microsoft, Patreon, Oracle Cloud, and many more, and spent over 12 years as both the interviewer and the interviewee while working at Google, Microsoft, and Stripe.

Over time, I realized one thing:

Standard interviewing advice falls woefully short

  • What good does it do to practice coding problems for weeks if your mind goes blank in an interview room?
  • Everyone says to be wary of the recruiters, but what if you weren’t?
  • How can you show your “best self” if you’re too afraid to let it out?

Grinding interview questions isn't enough. I tested the answers to these questions multiple times (sometimes by accident).

Turns out:  Conventional wisdom gets you conventional results

But you can do better.

In this course I'll share secrets I discovered the hard way.

Interviewing is a skill and anyone can learn it.

Get the Insider's Guide to Passing FAANG Interviews

👨‍💻 Who is this course for?

These techniques work best for anyone applying for junior to senior software engineering roles.  That's especially true for the parts that describe how to prepare for the coding, design and behavioral portions.

However, most of the principles are timeless and will continue to serve you as you keep going up the ladder.

🗣 What are people saying about the course?

"We all love it. He has wisdom that is absolutely not conventional. I highly recommend it"

-- Louie Bacaj @LBacaj, from the "Engineering Advice You Didn't Ask For" Podcast

"Zain has a super power: formalizing his own experience in an amazing, very interesting way, with tons of insights"

-- Viacheslav Kovalevskyi @b0noi, Engineering Manager at Meta

"I like how it goes beyond the algo/big O notation and focuses on the whole interview. At Amazon, I saw a ton of candidates fail because of weak answers to system design & behavioral questions. It's only $20, so your ROI is astronomical"

-- David Janke, Software Architect

"Zain provides solid advice on how to bring the best version of yourself during the actual interviews. This is a fantastic course and I will definitely be passing it along to others"

--Hasnain Bilgrami, Machine Learning Engineer

Get the Insider's Guide to Passing FAANG Interviews

📚 This course covers:

  • How to get past the paralyzing fear of the interview room
  • How to stay up to date with the changing interview landscape so that you can always get a new offer easily
  • How to get recruiters to work for you
  • What FAANG interviewers actually look for
  • The most common mistakes I see candidates make (which no one gives them feedback on)
  • The difference between interviews for junior and senior roles
  • The mindset to have in each interview type
  • How to find & highlight your best side in behavioral interviews
  • Questions to ask during interviews
  • The best resources to prepare for all interview types
  • How to manage the manager chat
  • How you can start discovering insights of your own that'll help you ace any interview

📘 What is this course NOT?

This is not your traditional interviewing prep course. I won't be teaching you specific algorithms or what Big O notation is, though I will point you to the best resources I've found for learning those things.

Instead, you'll learn the parts of the interview process that no one talks about, the behind the scenes secrets that I had to learn the hard way, and how you can discover even more secrets on your own.

There is no magic bullet. There's no secret hack. Preparing for FAANG interviews takes time and effort. How much time? Preparing for behavioral interviews can take a week or two, while preparing for coding and design interviews could take a couple months each. Be prepared.

But this is a one time effort that will pay dividends for the rest of your career. How much extra income can these few months of work turn into over three or four decades? I'll let you do the math

Most people applying to tech companies only do a third of the interview prep required, and then wonder why they got rejected.

This course will teach you the missing two thirds.

Get the Insider's Guide to Passing FAANG Interviews

🔬 Contents

  • Introduction: 0:00
  • Why people fail: 7:13
  • Going to real interviews for practice: 8:49
  • Practice vs Serious interviews: 14:55
  • Interview process overview: 17:46

Rethinking recruiters

  • Recruiters are friends, not foes: 20:07
  • Hijack the recruiter chat: 22:00
  • Leveraging the recruiter: 24:22

Coding interviews

  • What companies care about:  29:15
  • Common mistakes devs make: 32:12
  • Best resources for learning: 38:53

Design interviews

  • The mindset you need:  41:22
  • The right level of abstraction: 45:01
  • Best resources for learning: 48:48

Behavioral interviews

  • How they're used: 50:57
  • Remembering your best stories: 53:19
  • Show your best side with Amazon's leadership principles: 58:23
  • Tell the story like a STAR: 1:02:16
  • Be curious & ask questions: 1:05:49
  • Manager chat: 1:07:59
  • Handling rejection: 1:09:59
  • Negotiating your offer: 1:13:13
  • Become the interviewer: 1:16:20
  • Recap: 1:21:10
  • Conclusion: 1:24:30

Duration: 1 hour, 25 minutes

Get the Insider's Guide to Passing FAANG Interviews

⚖ Refund policy

If you're not 100% satisfied with the purchase, just reply to the download email within 30 days, and you'll get a full refund. No questions asked.

]]>
<![CDATA[ PARA vs Zettelkasten: The false binary ]]> https://www.zainrizvi.io/blog/para-vs-zettelkasten-the-false-binary/ 61ed318bdda0c2003b166e07 Sun, 23 Jan 2022 02:48:23 -0800 I started practicing PARA and Zettelkasten two years ago.  Here's what I've realized after twenty-four months of practice:

Change the system, not yourself

There's a common mistake people make when following popular note-taking or productivity systems, be it PARA, Zettelkasten, GTD, Bullet Journals, or anything else.

They assume each system is universal.

Those systems weren't built for mass consumption. Rather, their inventors had each felt a gap in their own abilities, where their previous systems didn't match their personalities. Like two jigsaw pieces that don't quite fit.

Those inventors could have chosen to "buckle down", to "try harder", to "be more disciplined."

But, each person decided "No."

"Change myself to match the system? I'll run out of steam trying to live my life on pure grit and will power. Instead, why not change the system to match me?"

That was the key insight that drove each productivity legend.

Each one took the system they'd been handed and started tweaking it, based on what worked for them:

  • Can't make yourself cross-link notes? Stuff similar ones in a folder instead
  • Large quantities of copied quotes making your notes unwieldy? Only allow written quotes.

Each person used different steps. But they found ones which worked with their personalities and their goals.

They all identified the same core challenge, "How do I remind future-me of what present-me knows when future-me actually needs to know it?", and proceeded to evolve solutions that worked well with their own personal constraints.

We can do the same.

The wheel was invented, and now we customize the wheel for every application. Similarly, the popular frameworks are only your starting point.  Adapt them to yourself.  Skip the parts that seem convoluted or tedious. Just adopt the parts that resonate. Maybe that's all you actually need.  It's okay to ignore the rest.

It's not about whether you should do PARA or Zettelkasten or any of the other methodologies.

You shouldn't "do" any of them.  Start from there, and venture forth on your own journey.

My own example

The first time I tried PARA, I had to build two PARA systems. One for my work notes and one for my blogging.

Each one went in a completely different direction.

For work, I had concrete projects I was building towards.  Nearly everything I did went into a Project, and almost everything sent into an Area or Resources folder was never seen again.  I ended up accepting this reality and extended the Projects folder structure to meet my own needs, giving each project its own overview section where I'd track the key documents, conversations, tickets, and even people.

I've been sticking to this PARA system for two years now, and I get more and more value from it with each project as it keeps evolving.  I'll go deeper into how I've implemented it sometime later.

PARA for my personal notes, however, looked completely different.  Over there it was the Projects section which ended up mostly neglected, since I only had enough bandwidth to have one significant personal project active at any time, and I didn't need much to keep track of that one.  Instead, I focused on recording what I read and categorizing everything into Resources folders.  Soon, those notes started to take on a Zettelkasten-esque vibe, as I started linking them to each other.

But linking was tedious, and the payoff far from clear.  Over a few months I found myself linking less and less, dropping it completely once I went on an unintentional writing hiatus.

With both these approaches, I let my intrinsic motivation highlight the note taking which gave me the most value and dove deep on it.  The personal Zettelkasten notes, which always felt like a chore, fell by the wayside (more or less) guilt-free, with lessons learned.  (If I ever pick up Zettelkasten again, it'll be after identifying a much smaller slice that I can adopt and quickly get value from)

The false binary

I'll be sticking with the PARA structure for my work notes for a long time to come!  I'm also starting to notice options for merging targeted Zettelkasten-esque insights into those notes, but this is still the early days for that modification.  I'll have to play around with it a lot more first

And that's exactly the point!  

In summary:

  • Create your own workflow: There's no one right way to do things. There are only ways that work for you and ones that don't.
  • Eat the elephant one bite at a time: These aren't all-or-nothing systems.  Find a slice of value and incorporate it into your existing habits
  • Rinse and repeat: The system is never 'done'.  You'll always keep tweaking it as your circumstances change and you find new friction points

Happy experimenting

]]>
<![CDATA[ Why Software Engineers like Woodworking ]]> https://www.zainrizvi.io/blog/why-software-engineers-like-woodworking/ 60370cfcf688fd00397ea3ff Wed, 24 Feb 2021 18:44:34 -0800 The smell of fresh pine sawdust filled the air, with more floating up as I sanded the last rough corner of the stool. My toddler was happily sanding her own block off to the side.

Woodworking was a new hobby I'd picked up. My old ones, coding, reading, writing, had kept me stuck to my laptop, holed up by myself. Not very toddler friendly.

A surprise gift laid the seeds of an idea. As a boy, the best summer days were when my dad and I would transform scrap wood into bird houses, planes, and eventually even Rube Goldberg devices. It was time to rekindle the flame.

A few YouTube videos and a Home Depot run later, it was time to pause my coding side projects and foray into a new world.

Except it wasn't that different from the old one!

Woodworking felt strangely familiar. Even its dopamine hits triggered the same spots that programming would.

Turns out, the best parts of woodworking aren't actually that different from software engineering:

1. You build your own tools

The best software engineering moments are when you're building your own tools. That's work with purpose.

Woodworkers do it too. They call such tools "jigs".  Some jigs might be built to meet niche needs (like drilling straight holes). Others, just to save money.

As always, you have to decide what's the value of your engineering time.

2. Too many tools

There is a mind-boggling variety of woodworking tools out there, each optimized for a slightly different situation. Want to saw wood with the grain? Against the grain? Need a quick but ugly cut? Need a smooth cut and don't mind it taking longer? What about curved cuts? And that's just for saws!

Each one is optimized for a specific use case, but unless you have an infinite budget you'll need to decide which ones are actually required for what you want to do.

Anyone who's had to choose a storage layer on AWS knows what it's like to understand your tools, figure out what they're best for, and which one most closely matches your needs.

3. Finite (non-monetary) Budget

Money isn't the only thing you budget.

Any limited resource needs to be carefully doled out. With software, the budgets might cover hardware constraints (CPU/memory), networking bandwidth, latency targets, engineering man-hours, etc.

Turns out, there's one inflexible budget with woodworking: Physical space!

There's only so much room in my garage (even after I cleaned it). Whatever tools I get have to fit in there along with my workbench. I have to be really picky about what I spend my storage budget on, and some beloved items, like a razor-sharp table saw, simply won't fit.

4. Design first, build later

In both worlds, sketching out your designs before building pays huge dividends. You get a clearer picture of what you'll make and you figure out how the different parts will interact.

Otherwise, you might have to throw away days worth of work because you took a slightly wrong turn.

5. Waiting

With software, you yell "It's compiling!"

Except with woodworking, you yell "The glue's drying!"

On the plus side, it lets you work on multiple projects at once.

6. You have Users!

Once my wife learned of my plans, the requests for custom pieces started rolling in.  Knowing that whatever I build has an eager recipient awaiting it is rocket fuel for motivation.

And since it's my wife asking for it, it's easier to justify buying the tools I "need".

One big difference

With woodworking you actually get to hold your creations.

Intrigued?

Woodworking has the same highs as software engineering (and even more if you spend enough time around glue).

You get the same fast feedback loops, close interaction with your users, while avoiding some of the more tedious aspects of software engineering. Plus you're never on-call.

I can see why these guys gave up software.

That's not my path though, programming is too near to my heart and my day job lets me do more than just write code.

But it does help answer a question I've asked myself a lot:

What would I do if I was born a hundred years ago, before computers were invented?

And the answer is now clear:

Find something to build.

I write a newsletter sharing insights on how to leverage lessons from across industries to become a better software engineer. Sign up below to get them in your inbox!

Want insights every day instead? Follow me on Twitter

]]>
<![CDATA[ Your confusion is the litmus test ]]> https://www.zainrizvi.io/blog/your-confusion-is-the-litmus-test/ 5fdbbc18abdce2003964084c Fri, 18 Dec 2020 08:44:14 -0800 I was fumbling in the dark. Groping blindly.

It seemed so much simpler a month ago.

"Hey, could you integrate this tool into our service?" my manager had asked. "Sure," I'd replied. "How hard could it be?"

Famous last words.

Now it was my job to take this convoluted piece of security infrastructure, which I knew nothing about, which itself was still under active development, and wrangle it into submission.

It's just code, right?

Every step felt like wading through mud. I failed ten different ways trying to get a single piece to work, only to be told about a magic setting I was supposed to have turned on. It wasn't an isolated incident.

I should add that setting to the docs. Did I mention I was effectively beta testing their instructions? All 73 pages of them?

And I was supposed to finish this integration in five weeks. That wasn't long enough!

Because of the time crunch, I'd been trying to minimize the time spent learning, hoping to use that time to actually solve problems.

Except it wasn't working.

I was firefighting, trying to sprint to the end of the marathon.

Burning out.

It was time to switch gears.

To throw away the firefighter's helmet, and don a detective hat.

What principles would serve better?

1) Go deep often, but keep each dig short

I needed a more holistic view of how the security tool saw the world.

As we learn, we build a mental model of the world, a collection of ideas and impressions about how parts of the world would react in various circumstances. That understanding lets us predict what our tools will do.

Our mental models are never perfect[1], but as we learn we're continually refining them, making them more accurate, and gradually we're able to apply that knowledge to more domains. It's like driving a van once you've already practiced on a sedan.

Transfer learning, if you will.

It lets us work faster, more accurately, and get more done.

Every time you go deep into a problem you're making a bet, guessing this expedition will result in a more useful mental model. Each foray down the rabbit hole is like planting a seed which might eventually bear fruit. As always, it's a balancing act:

"[People] tend to think they can only work on important problems-hence they fail to plant the little acorns which grow into the mighty oak trees.
Not that you should merely work on random things, but on small things which seem to you to have the possibility of future growth."
--from The Art of Doing Science and Engineering

The key is to have many digs, which is only possible if you keep each individual foray short.

Go deep, but time box yourself. Don't go too far in.

What's too far? It's hard to know which depths will yield treasure, but one layer of abstraction deep tends to work well (if you reach assembly code, you've gone too far).

Each trip down a rabbit hole is a roll of the dice, and you won't always roll high. But by keeping each trip short, you leave yourself time to roll that dice over and over again, increasing your chance of rolling that six.

Additional heuristics can help hone your search even further...

2) Your confusion is a litmus test

Go deep on the tasks you're actively working on, specifically when you encounter something that feels confusing.

Confusion means you've reached the limits of your current mental models. It's no longer helping you predict what something will do. The gaps in your mental model are showing and it's time to dig deeper to plug them up.

Your confusion is a litmus test.

And boy was I confused.

Now, why only go deep into stuff you're actively working on? Because that's a giant sign that this hunt will result in a mental model which is actually useful.

"[These] studies are surprisingly broadly applicable because, even if you’re learning about the details of some specific system, that system’s design will contain a juicy core of extractible general principles. Unlike many “general principles” people try to teach you, the ones you learn [this way] are guaranteed to be important to at least one real-world system (the one you’re learning about). And you’ll see them realized in all their messy detail" --from In Defense of Blub Studies

3) There's always a bigger picture

Sometimes you go deep by going broad.

Everything you do is a local optimization within a larger goal, one that you might not be aware of.

I was trying to integrate the security tool into our service, but the end goal was to let our customers have a more secure cloud footprint than anyone else offered. My service was just one of many the customer would be enabling this security offering with. I had to see things from the customer's perspective and understand how they would use this feature.

Not knowing that would risk pushing the boulder up the wrong hill.

Expanding your understanding of that broader picture can be especially helpful when you're designing a solution, preventing you from getting stuck at a local maximum.

The quality of your solution is limited by your understanding of the problem. Without a broad enough context on the problem your solution might even make the actual problem worse! (*cough* Google logo redesign[2] *cough*)

But when you understand the larger problem, you might realize the guy asking for a faster horse cart would be better served by a car.

Richard Hamming describes how he gradually discovered the larger and larger goals his work contributed to, and how he adjusted what he did accordingly:

"There is no single larger picture. For example, when I first had a computer under my complete control, I thought the goal was to get the maximum number of arithmetic operations done by the machine each day. It took only a little while before I grasped the idea that it was the amount of important computing, not the raw volume, that mattered. Later I realized it was not the computing for the mathematics department, where I was located, but the computing for the research division which was important.

Indeed, I soon realized that to get the most value out of the new machines it would be necessary to get the scientists themselves to use the machine directly so they would come to understand the possibilities computers offered for their work and thus produce less actual number crunching, but presumably more of the computing done would be valuable to Bell Telephone Laboratories. Still later I saw I should pay attention to all the needs of the Laboratories, and not just the research department.

Then there was AT&T, and outside AT&T the country, the scientific and engineering communities, and indeed the whole world to be considered. Thus I had obligations to myself, to the department, to the division, to the company, to the parent company, to the country, to the world of scientists and engineers, and to everyone. There was no sharp boundary I could draw and simply ignore everything outside."

To Recap

These are three guidelines for when it's worth going deep:

  1. Go deep often, but keep each dig short. Each exploration is a seed you're planting. Plant many, and some will eventually bear fruit.
  2. Use your confusion as a litmus test. You've found a gap in your mental model of the world, time to fill it.
  3. There's always a bigger picture. Go broad, traveling across tangential nodes to understand the problem space better.

I ignored these principles at my own peril. But turning around and digging deeper not only solidified my own understanding of the problem; I later shared my newfound understanding with others, helping hundreds of people both inside and outside Google understand this highly niche problem.

And yes, the feature shipped on time.


[1] https://fs.blog/2015/11/map-and-territory
[2] https://techcrunch.com/2020/10/06/googles-new-logos-are-bad

]]>
<![CDATA[ Newsletter #19 - Productivity tips from Jeff Bezos and Hugh Jackman ]]> https://www.zainrizvi.io/blog/newsletter-19-productivity-tips-from-jeff-bezos-and-huge-jackman/ 5fb69afe95f9fd0039e3c99f Thu, 19 Nov 2020 08:26:21 -0800 Ever study something, but when you try to explain it realize that you'd understood nothing?

A few sips from the pool of knowledge and we think we know it all!

I find it helpful to combat this tendency by talking to others about new ideas I encounter (my wife is a very patient test subject). Sometimes I don't even say three sentences before realizing how little I learned!

Armed with a better view of my own ignorance, I know where to focus as I dig deeper into the topic.

It's even better when the person you're talking to is already experienced with the topic. I recently traded perspectives with my friend Slava, an engineering manager at Google, on how to identify the work your management chain cares most about.

Check out the full conversation on:

The Nonintuitive Bits episode 38: Start with Principles


Your nuggets for the week

Today we talk about motivation and productivity:

  1. Protecting yourself from success
  2. Positive moods make you productive
  3. Hack your reptilian brain

#1 Protecting yourself from success

"Will you guard your heart against rejection, or will you act when you fall in love?" --Jeff Bezos (source)

It's scary risking your ideas being rejected, but what's the upside you give up?

Take the plunge

Create your Amazon

#2 Positive moods make you productive

Your productivity is much higher when you are in a positive mood, while a bad mood increases your desire to procrastinate.

A good mood even makes you smarter!

Research shows that "doctors put in a positive mood before making a diagnosis show almost three times more intelligence and creativity than doctors in a neutral state."

via Barking up the Wrong Tree: 6 Things The Most Productive People Do Every Day

#3 Hack your reptilian brain

"Just before I do an activity, I imagine it's completed, and focus on that feeling I'll have when it's done and went well" --Hugh Jackman, aka Wolverine (source)

What happens when you see a cookie? Some part of your reptilian brain imagines the sweet gooey goodness as you chew it. Suddenly your whole body craves that cookie.

Our brain saw how something would make us feel in the future, and pushed us to DO that thing and be rewarded!

But sometimes our brains need a nudge to notice those emotional rewards.

Try deliberately putting yourself in your future self's shoes, after the task is done, and feel their sense of success.

You just might find yourself looking forward to the task.

Your Turn 👊

Let's put Hugh Jackman's idea to the test. Try taking some task on your todo list today and imagining how you'll feel once you've completed it.

Did it affect your mood at all?

Reply back and let me know, I reply to every message!

Till next week

--Zain

]]>
<![CDATA[ How to slay a Hydra: Finishing projects ]]> https://www.zainrizvi.io/blog/how-i-started-finishing-more-side-projects/ 5fad6c18f6f0d2003953a05a Thu, 12 Nov 2020 09:28:09 -0800 Am I fighting a hydra? Every time I work around a problem, two others appear! Just how twisted can time zones be?!?

And I thought this project would be easy.

Take a deep breath...Phew. Alright, how do I work around this issue without compromising my vision too much?

That was me a few weeks ago.

After seeing my aunt struggling to convert time zones, I’d taken up the quest to build a site that would make the task easier. I’m always looking for fun side projects to build, and this one seemed like an easy candidate.

I was young and naïve. Time zones are never easy.

Here’s the story of how I designed the web app, the hurdles I faced building it, and the principles I used to keep making progress.

Set clear goals

First I had to know what I was aiming at.

I don’t write code for the sake of writing code; there is always some deeper goal. For me, it’s usually about what I expect to learn and the capabilities I plan to enable. Being clear on the goal is key to cutting scope ruthlessly later on. It lets me see how the plan can be simplified while still offering everything I was after.

My goals for this project:

  • Enable others to easily share timezone links
  • Practice rapid project development and reducing scope by building a feature complete service.
  • Build in public
  • Have fun

That was why I was building. But what was I going to build in particular?

I wrote it down:

I've learned through experience that I tend to only stick with my side projects for two weeks. So I had to narrow down the vision to something that fits in that time frame. Anything unlikely to fit in that time frame got listed as a stretch goal. I’ll probably never do them, but writing those ideas down helps relax my brain, reassuring it that the cool features I just thought of aren’t being ignored (i.e., I’m using the Zeigarnik effect to my advantage).

I had my what and my why. Now for the how.

Focus on the User

I can’t make a good product without looking at it from the user’s point of view. And for this project I had a very specific user in mind:

A person on WhatsApp sending an invitation to their friends to hang out online. They’re not too technically minded, but proficient enough to send a Zoom link. And they don’t want to open up another app or website to create a link. That’s a pain to do.

This was the user persona I kept in mind as my target audience.

My solution: Let them be able to simply type out the link from memory.

Designing the Interface

Given that the url was meant to be typed directly into a chat box, how the url would look would be a big part of the design.

The idea was that the person writing the invitation would write the timestamp into the url, and when anyone clicked on it the site would show them that time converted into their own local time.

This was going to be the entry point to my app, so it had to be easy to use.

You saw my first idea in the screenshot above. It looked perfect. It was short and easy to remember. I was ready to move on to the next step.

Except...

That was my first idea. What are the chances that my first idea would be the best one?

Alright, fine. I'd force myself to write down a few more alternatives.

For a couple minutes I stared at a blinking cursor until I thought of another format.  It took another minute to notice a third possibility.

And then something clicked in my brain and the idea floodgates opened. I jotted down each variation that occurred and some thoughts about them:

We've come a long way from event.at#2009-05-20/4:22pm/PST seeming perfect!

At this point I realized I actually had three components to plan for: time, date, and time zone.

I could design each of them independently, and mix and match the best options. This gave me a bunch of possible alternatives, and more importantly, highlighted the many ways people might make mistakes while typing the url by hand.

Analysis completed, the simplest interface I found was:

People would definitely make mistakes with it though, so I planned to add support for all the other date/time formats I’d discovered. But those were filed away under “stretch goals”. For v1, I’d just focus on this one format.

It turns out, great minds think alike. A few days later I would discover http://mytime.io/, which uses the exact same url format.

Designing: Keep it Simple

Given the two week time constraint, I wanted to keep things as simple as possible. Part of that would be to minimize the amount of new tech I’d have to learn.

To keep things simple, I decided to use the tools I already knew best and build a static site with a C# backend. Time zones are tricky, but I would look up some time zone conversion library (surely there must be one) and I’d generate static html web pages to show people their local time.

The site would be super simple. I’m no html/css expert, and this wasn’t the time to learn. My site would use whatever default look & feel the Visual Studio project template offered.

No databases involved. No state being saved.

I had a nagging thought in the back of my head though: I should add logging.

That would let me discover all the different ways people might type the urls in practice. Folks might misremember how to write the url, and I could treat the common “mistakes” as if they were valid urls.

I jotted down the "stretch goal" and stopped worrying about it.

Tech stack chosen, time to start coding.

Face first into the brick wall

And as I looked for a time zone conversion library I ran face first into a brick wall of problems. Turns out different time zones can have the same abbreviation (e.g. CST is used for Central Standard Time and for China Standard Time), the correct abbreviation changes during daylight saving time (but most folks won’t know that and will enter the wrong time zone), and there isn’t even always an unambiguous conversion from one time zone to another.

Given this ambiguity, what were my options?

I could hard code which time zones I would favor when the url is ambiguous. CST would always be interpreted as Central Standard Time.

I could tweak the tool to auto-fix the time zone by guessing whether or not DST was intended. This would also require limiting the countries I support (doing it for all countries would likely exceed my 2 week budget).

I could change how the time zone is specified. Instead of specifying a time zone abbreviation, I could ask people to specify the city. Out of the 1000 most populated cities in the world, only 6 have the same name. I’d be limiting who gets to use the app, but still be catering to most of the population. But then I’d need a solution for cities with a space in their name.

I chatted with a friend about this and he asked “why not just redirect everyone to Google?”

That...made sense.

Google has already solved this problem, they’ve already built in the right heuristics to guess which time zone you intended when you ask it to convert any time to your local time. By redirecting to them, I could leverage all that for my v1.

Sure, I’m not building a super cool time zone conversion logic, but that wasn’t one of my goals.

Suddenly the problem became very simple. When someone visits my site at event.at/4pm/CST, I’d redirect them to Google with the search term “4pm CST”. Google would automatically convert this to their own local time.
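The redirect logic really is that simple. Here's a minimal Python sketch of it (the actual site used a C# backend; the `google_redirect_url` helper name and this translation are mine):

```python
from urllib.parse import quote_plus

def google_redirect_url(path: str) -> str:
    """Turn a request path like '/4pm/CST' into a Google search URL.

    Google's results page converts a query like '4pm CST' into the
    visitor's local time, so all the time zone heuristics are delegated.
    """
    # Join the URL path segments into a search query, e.g. "4pm CST"
    query = " ".join(seg for seg in path.strip("/").split("/") if seg)
    return "https://www.google.com/search?q=" + quote_plus(query)

print(google_redirect_url("/4pm/CST"))
# https://www.google.com/search?q=4pm+CST
```

All the hard parts, from ambiguous abbreviations to DST, get handed off to Google's own heuristics.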

Incidentally, while testing out the google urls I happened upon a site called http://mytime.io/, which was almost exactly like what I’d originally set out to do, except with slightly better graphics. It’s clearly also a v1 someone built and then didn’t take it further. They ran into the same set of issues that I did.

Now I needed my own domain. So far I had been developing this site at https://localtime.azurewebsites.net/. But it needed a short, memorable url to actually be usable. To my disappointment, I soon learned that .at isn’t a TLD I could actually buy (so event.at wasn’t possible), and I quickly discovered that any easy to memorize domain was either already taken or would cost me hundreds of dollars to acquire.

This was a serious road block.

I spent a couple hours going down the rabbit hole of possible domains I could use, to no avail. I could have compromised and gotten a misspelt domain or something, but then I realized I had already achieved my goals:

  • http://mytime.io got folks 90% of the functionality I’d envisioned. It was certainly good enough for my own use case, and building my own solution to solve that remaining 10% wasn’t very exciting
  • I’d practiced evaluating design and trade offs, which gave me most of my desired development practice
  • I had plenty of content now to write about and share building-in-public style.
  • Learning about time zones was a blast

So..

Mission Accomplished!

If you want to play around with what I built, here’s the horribly messy source code and the live site.

The Principles at Play

A quick recap of the key principles at play here:

  • Be clear about your goals. They let you know how you can simplify your vision. Once you achieve them, you’re done! No shame in leaving the rest of the planned work.
  • Cut scope ruthlessly. Be realistic about how long you’ll work on the project and don’t let the scope exceed that
  • Focus on the user. Keep your target audience in mind, as well as their possible shortcomings. What will it take to give them a great experience?
  • Minimize how much you have to learn. Lessons will come anyways, minimize the additional things you’ll have to learn as part of cutting scope.
  • You don’t have to cater to everyone. If you can give one half of people a great experience and the other half nothing, that’s usually better than giving everyone a mediocre experience.

In the past, I would have considered this project a “failure”, thinking that I hadn’t “finished” it.

Heck, I probably wouldn’t have even gotten a working prototype.

But by being clear about what I actually wanted to get out of the work, I not only made faster progress but could also tell when the project was actually completed.

I didn't have to keep hacking away at the hydra's heads, the beast was already slain.

Want to learn more about how to leverage your own mindset to build better products? Sign up for my weekly newsletter below where I share insights on how to be a better software engineer using principles from psychology.

Want daily insights? Follow me on twitter

]]>
<![CDATA[ Falsehoods programmers believe about time zones ]]> https://www.zainrizvi.io/blog/falsehoods-programmers-believe-about-time-zones/ 5f91a6501e2eb70039e2e274 Thu, 22 Oct 2020 08:57:40 -0700 My aunt has a problem

She loves joining Zoom meetings, but they're all hosted in different time zones. It's hard to remember if she should add 4 hours, subtract 3, or what. She's not the most technical person, so Google isn't an option. She has to ask for help.

Every. Single. Time.

And, for the less technically minded, it's also error-prone.

It got me thinking:

What if event organizers could share a link that would do the work for you? If someone clicked on mytime.at/5pm/EST, they would see their local version of that time. It sounded simple enough.

I began coding.

I later discovered mytime.io had already implemented a very similar thing, and had run into the same pitfalls.

I knew trying to manage time is a fool's errand, but that's what datetime libraries are for. I would merely build an extra time zone conversion layer on top.

Surely that couldn't be complicated

...Right?

I soon discovered just how wrong I was. One after another, I kept learning the falsehood of yet another "fact" that had seemed obviously true. Eventually my original vision became literally impossible to pull off without making serious compromises (more about that in a future blog post).

Hopefully this list will help you avoid the landmines I stepped on. All the falsehoods below are ones I'd considered true at some point in my adult life.

Most of them I believed just one month ago.

Misconception #1: UTC offsets go from -12 to +12

Turns out, UTC offsets span from -12 to +14. Yeah, +14. That gives you 27 hours UTC can be offset by (don't forget the zero offset).

How does it work? UTC-12 has the same time as UTC+12, but is one day behind. Same goes for UTC-11 and UTC+13, etc.

Why that crazy range? It's the result of Pacific islanders deciding they wanted to be on a specific side of the international date line.

It makes for a very jagged international date line.
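You can verify both extremes with Python's standard `zoneinfo` module. This is just an illustrative sketch; the zone data comes from the IANA database installed on your machine:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

moment = datetime(2020, 6, 1, tzinfo=timezone.utc)

# Kiribati's Line Islands sit at UTC+14, the largest offset in use.
print(moment.astimezone(ZoneInfo("Pacific/Kiritimati")).utcoffset())

# The fixed-offset zone "Etc/GMT+12" is UTC-12. (The sign in Etc/
# names is inverted, a holdover from an old POSIX convention.)
print(moment.astimezone(ZoneInfo("Etc/GMT+12")).utcoffset())
```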

Misconception #2: Every UTC offset corresponds to exactly one time zone

Here are 10 distinct time zones which are all at UTC+5:

  • Aqtobe Time
  • Mawson Time
  • Maldives Time
  • Oral Time
  • Pakistan Standard Time
  • French Southern and Antarctic Time
  • Tajikistan Time
  • Turkmenistan Time
  • Uzbekistan Time
  • Yekaterinburg Time

You might be wondering: if they’re all at the same UTC offset, why couldn’t all those countries just use the same time zone? Perhaps Pakistanis weren’t keen on being on “Yekaterinburg Time”.
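As a sketch, you can enumerate these offsets from the IANA database with Python's `zoneinfo` (note this lists IANA zone keys rather than the named zones above, and the exact output depends on your installed tzdata):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo, available_timezones

moment = datetime(2020, 1, 15, tzinfo=timezone.utc)

# Every IANA zone whose offset at this instant is exactly UTC+5.
plus_five = sorted(
    key for key in available_timezones()
    if moment.astimezone(ZoneInfo(key)).utcoffset() == timedelta(hours=5)
)
print(plus_five)  # includes Asia/Karachi, Asia/Yekaterinburg, Indian/Maldives, ...
```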

Misconception #3: There are more countries in the world than time zones

How could this one possibly be wrong? Well...

  1. Many countries want their very own time zone (how many do you think run on Myanmar Time?)
  2. Some countries split themselves up into multiple time zones (e.g. eastern and western times)
  3. Military time alone uses 25 time zones, one for each hour from UTC-12 to UTC+12
  4. DST. More on this one below

Altogether, there are 244 time zones used by the 195 countries in the world.

Misconception #4: Every time zone has exactly one agreed upon name

Ever notice how every time zone consists only of English words? Awfully kind of Spanish and French speaking countries to graciously use our language, right?

Hah, Yeah right.

Eastern Standard Time, Tiempo del Este, and Heure Normale de l'Est are all different names for the exact same time zone.

Have fun coding that into your library.

Misconception #5: Time zones are always offset from UTC by an integer number of hours

India Standard Time is five and a half hours off of UTC. There are many more examples.

Misconception #6: Fine, time zones are always offset from UTC by an integer number of half-hours

Nepal likes to be at the 45 minute UTC offset.

Why does that extra 15 minutes matter so much to them? Because they really want their mountain to have the sun right above it at noon.

But it makes you wonder: what would happen if the mountain ever shifted?

Misconception #7: A country stays at the same UTC offset all year long

Don't forget about Daylight Saving Time! Or as the Europeans call it "Summer Time."  

Countries practicing DST change their UTC offset twice every year.

Misconception #8: There is a standard format for declaring time zones

Hah, I wish. Here are some standards I discovered, there may be more:

Common name

These are the traditional time zone names we’re used to. Example: Pacific Standard Time.

But I don't know if there's an official term for these names; they're just that unstandardized.

IANA zone keys

This is as close to the official standard as you can get. It's not at all official, but it's something the developer community has rallied around.

It's a painstakingly maintained database which contains all known time zone data, representing the entire history of local time for places around the globe. It doesn't give any zone a name though, preferring to use the name of the zone's most prominent city, which leads to:

Prominent city based

This one is "basically bad UI that derives from the IANA zone keys"

Full time zone names come with naming complications, which we discussed above. If that wasn't enough fun, there's also the political implications of recognizing certain time zones such as Israel Standard Time.

Some developers took the safer route and identified time zones only by the name of a prominent city in it, not bothering to map it to a common name. That's why the Ubuntu time zone picker makes you select "New York" instead of "Eastern Standard Time".

Forget time zones, use the raw UTC offset

W3's international standard gave up on the notion of time zones and declared that engineers should only store a timestamp's raw UTC offset.

GPS Coordinates

Fun fact: Many APIs for getting a region's UTC offset only want a UTC time and latitude/longitude coordinates. This lets them define any moment unambiguously and not have to worry about Daylight Saving Time.

If you squint your eyes a bit, you could consider this a fourth standard.

Misconception #9: Daylight Saving Time starts at the same time every year

Did you think this would be the one thing world powers agree on? Each country chooses when to start its own DST.

Misconception #10: A country's time zone never changes

Almost every year some country will pass a law to edit their time zone.

In a particularly memorable example, a few years ago the Samoan islands wanted to be on the other side of the international date line to get the same weekends as their Australian trading partners. So at midnight on Dec 29th, they changed their UTC offset from -11 to +13, skipping Dec 30th and going straight to Dec 31st.

Samoan citizens had one less day to celebrate the holidays that year.

On the plus side, just 40 miles away the American Samoa Islands stayed on the other side of the international date line. Now Samoans can celebrate New Year’s on the Western Island, and then boat over to American Samoa for a second New Year’s party the next night.

Misconception #11: A country stays in the same time zone during Daylight Saving Time

Funny thing about DST, it doesn't actually change the time zone's UTC offset.  Instead, Daylight Saving Time countries switch to a different time zone, with a different name.

For example:

Texas goes from Central Standard Time to Central Daylight Time.

Chile goes from Chile Standard Time to Chile Summer Time.

Misconception #12: Daylight Saving Time starts around March and ends around October

The Southern hemisphere has their summer in the other half of the year. The pattern flips.

Misconception #13: Every time zone has its own name

Which country should get to claim "Eastern Standard Time"?

North America claimed dibs by virtue of inventing the name, but do you think no one objected? Australia thought it sounded like a fine name to use, and so even though the rest of the world refers to their time zone as Australian Eastern Standard Time, some of its own citizens just call it "Eastern Standard Time" (not all of them call it that though).

Misconception #14: Every time zone has its own abbreviation

Which of these is meant when someone says CST?

  • Central Standard Time
  • China Standard Time
  • Cuba Standard Time

And remember how the time zone name changes during Daylight Saving Time? Many people don’t know that and keep using the wrong abbreviations during DST months. CST might be used for Central Daylight Time.

If there's no standard name for time zones, can you really expect one for the abbreviations?
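Python's `zoneinfo` reports these abbreviations via `tzname()`, so you can watch the collision happen directly (a sketch; the abbreviations come from your installed IANA data):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

winter = datetime(2020, 1, 15, 12, 0)
# Two unrelated zones, one abbreviation: both print "CST".
print(winter.replace(tzinfo=ZoneInfo("America/Chicago")).tzname())  # Central Standard Time
print(winter.replace(tzinfo=ZoneInfo("Asia/Shanghai")).tzname())    # China Standard Time

# And during DST the US zone stops being "CST" at all:
summer = datetime(2020, 7, 15, 12, 0)
print(summer.replace(tzinfo=ZoneInfo("America/Chicago")).tzname())  # prints "CDT"
```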

Misconception #15: There is always an unambiguous conversion from one time zone to another

If I say I want to convert 5pm Eastern Standard Time to Pakistan Standard Time, am I talking about the American or Australian Eastern Standard Time?

And is Daylight Saving Time in effect or not?

Okay, it’s tricky. But surely if we include the date and the exact city, then we'd be able to do the conversion reliably, right?

What if the date and time are 1:30 am on Nov 1st, 2020, right when US DST ends and the clock moves backwards?

1:30am occurs twice that morning; how do you know which instance was intended?
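Python models exactly this ambiguity: PEP 495 added a `fold` attribute to `datetime` so you can say which of the two 1:30 ams you mean. A sketch using US Central time:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Chicago")

# 1:30 am on Nov 1, 2020 happened twice in US Central time.
first = datetime(2020, 11, 1, 1, 30, tzinfo=tz)           # fold=0: before the clocks fell back
second = datetime(2020, 11, 1, 1, 30, fold=1, tzinfo=tz)  # fold=1: after

# The first occurrence is CDT (UTC-5), the second is CST (UTC-6).
print(first.tzname(), first.utcoffset())
print(second.tzname(), second.utcoffset())
```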

Misconception #16: Your time zone library can recognize any time zone (you are using a library for this, right?)

Remember all those different potential time zone names and formats? Most libraries will only support one.

And they might be limited by the time zones installed on your local machine.

Yeah, really.

Remember, if time zones can change based on the whims of a local government, then the library will need some external dataset to base its calculations off of. That external dataset just might be the time zones installed on your PC.

Misconception #17: The entire country always shifts during Daylight Saving Time

In the US, Arizona doesn't practice Daylight Saving Time.

Misconception #18: The entire state always shifts during Daylight Saving Time

Within Arizona, the Navajo Nation happily follows Daylight Saving Time.

Misconception #19: Other than DST, every city within a state follows the same time zone

In Indiana, USA, most cities follow Eastern Standard Time but a few decided to follow Central Standard Time.

Misconception #20: Every city sits within exactly one time zone

A few times in history, state lines or time zone lines got drawn without paying attention to who actually lived on that border, cutting a city in half. There are a surprisingly large number of examples.

This enables some really unusual sleep schedules.

It is also why GPS coordinates are more reliable than city names for determining the time zone.

Misconception #21: There’s a designated time zone for every location in the world

The north and south poles have no official time zone. Researchers there just follow their own country's time.

There's no way that could get confusing.

Misconception #22: This is a comprehensive list of misconceptions

These are the misconceptions I've uncovered so far, but I'm sure there are many more waiting to be discovered. Heck, I didn't even realize UTC offsets went all the way up to +14 until just 10 hours before I published this list!

Misconception #23: Redditors will agree these are all misconceptions

Who am I kidding? Reddit never agrees on anything. Seriously though,  this post sparked a conversation which highlighted a few more misconceptions I'd held. Thanks Reddit!

Some of the highlights:

  • #24: Daylight Saving Time starts and stops exactly once each year: When the month of Ramadan starts, some Muslim countries will exit DST, and then re-enter DST once Ramadan ends. It makes sunset (the time to end your fast) arrive earlier (via matthieum)
  • #25: DST offsets are always exactly one hour: The Lord Howe island uses a 30 minute DST offset (via paulrpg), and England once had a 2 hour DST offset (via bandwidthcrisis)
  • #26: Standard Time is the same as Time Zone: They're different concepts (and apparently I've been using them wrong this whole article :P) (via lpsmith)
  • #27: Timezones are always offset from UTC by an integer number of quarter-hours: Amsterdam was once at the UTC + 19 minute, 32.13 second offset. Most of the world simplified it to UTC+0:20 (via DJDavio)
  • #28: Everyone follows their official time zone: Some western parts of China have their own unofficial time zone (via jl2352), and some industries independently decide to ignore DST to mitigate the timezone madness (via bitchkat)
  • #29: You can solve your problems by saving the time as UTC: Saving future timestamps in UTC can still lead to confusion (via AryA_ch)
  • #30: Birth dates tell you who is older:  Not necessarily (via oshkarr)
  • #31: There are exactly 195 countries in the world: Not exactly time zones but another misconception (via kankyo)
  • #32: If you have a UTC timestamp and the GPS coordinates, you can always determine the local time: Palestine and Israel have different time zones. So in the West Bank, the time zone depends on if you're Palestinian or an Israeli settler. If you don't know which person you're computing the time zone for, the local time is ambiguous (via haxney)
  • #33: Historical UTC offsets for a region never change: The 1927 time zone shift for Shanghai has been adjusted at least twice since July 2011 (via JoeIngeno)
  • #34: Daylight Saving Time is the only timezone adjustment: Sometimes countries instead move their clocks backwards in the Winter and call it Winter Time (via Radek Zajic)

Want to hear how I designed the time zone project, worked around the obstacles, balanced trade offs, and what I finally ended up building?

Sign up below to get that essay once it's ready.

]]>
<![CDATA[ How banks help scammers with their bad UI ]]> https://www.zainrizvi.io/blog/how-banks-help-scammers-with-their-bad-ui/ 5f74a41e6f45040045799562 Wed, 30 Sep 2020 09:41:48 -0700 My sister Sana just wanted to earn some money before returning to college. She was looking for a covid-friendly job on craigslist, something to cut down on the student loans she'd have to take out for the year. After searching and scrounging, Sana found someone who needed logistical help taking care of their pets. She applied and was excited to get hired.

First, they needed help buying pet food. They had ordered $1,800 worth of pet food (they bought high quality stuff in bulk apparently) but the seller would only accept payments via Zelle. Her employer was a bit old fashioned and didn't know how to use that service, so they offered a different arrangement: “I’ll mail you a check for $2,000” they said, “please deposit it in your own bank account, and then use $1,800 of it to Zelle the payment to the sellers. The remaining $200 is your fee.”

Sana was a bit suspicious, but she deposited the check into her Chase bank account and it cleared a few days later. Chase’s site showed that the money had been deposited successfully and was available to use, no strings attached.

Seemed like it was legit. So she did the honest thing and Zelle'd the money to the seller.

Happy with the job well done, her "employer" sent her another $2,000 check to deposit and transfer. This person really liked their pets. The same process happened again, and her employer was happy with the job well done.

Two weeks later, Sana’s debit card stopped working. Confused, she checked her account to see what was wrong.

Her heart sank.

Chase showed it as overdrawn.

By more than three thousand dollars.

The checks had bounced, two weeks after Chase’s site had indicated they had “cleared”. Before this whole thing had started, Sana had $300 in her bank account. Now she had negative $3,300.

She had just been scammed out of $3,600.

What happened?

Turns out checks can bounce weeks after they're deposited. It can take that long for the bank to verify they’re legitimate.

Why does it take so long? Here’s a peek behind the scenes:

  1. You deposit your check, and it shows up as pending in your account.
  2. Your bank sends the check to the check-writer’s bank to request the funds.
  3. Their bank verifies if the check is legitimate and the account actually contains the funds. At small or international banks this step is often manual and could take weeks.
  4. Their bank lets your bank know it's legit. That's when the check has "cleared".
The life cycle of a check

I skipped one step though:

It can take more than six months for a check to clear, but banks remove that “pending” status just a couple of days after it's deposited and allow people to spend the money. This might sound a bit iffy, but it's actually a good thing. Most checks are legitimate and people might need the money quickly. But that assumption breaks down with fraudulent checks.

By removing the “pending” status before fully verifying the check’s legitimacy, Chase gave the illusion that the check had already cleared. This is what the scammers count on.  

Today there is no way for a Chase customer to be sure that a check has fully cleared. Unless you have the inside scoop on how banks work, or a finely honed sense of when someone is trying to swindle you, you won't catch it.

When Sana called up Chase, the bank heard her out. But instead of stopping the scammers, their only concern was making sure she paid the remaining $3,000 that they claimed she owed them.

Edit: Chase customer support had one clarification to make. I'd originally said it might take weeks for the check to clear, but they explained it could actually take more than six months. And you'll have no idea when that magic moment occurs.


And she's not the only one who's been hit by this scam.

My friend David Vargas was caught by pretty much the exact same scam. He was sent checks which included his fee and payment that was supposed to be made to someone else. And they only accepted Zelle.

In fact, similar scams have been going on for years, yet most banks don’t seem to care as long as they're not the ones losing money.

What should Chase have done here?

#1 Take responsibility: their system is broken

Is Chase bank directly responsible for scamming people? Of course not. Did they create an environment where scammers could thrive? Absolutely.

The harm was unintentional, but they played a hand in it.

Involuntary manslaughter.

What have other companies done when someone abused their products to deliberately cause harm? They took responsibility for it and fixed the problem at a systematic level.

The Tylenol murders: In 1982, someone started spiking bottles of Tylenol with poison, killing seven people. This was clearly not Tylenol’s fault, but they still immediately paid to recall all bottles in stores and offered anyone who had already purchased the pills free replacements. They took it a step further and developed tamper-proof packaging to prevent anyone from contaminating the medicine again in the future.

Credit Card fraud services: Online payments are inherently risky. Credit card companies know this, and they’ve committed to helping their customers when the inevitable fraud occurs. If you pay someone with your Discover card and the product turns out to be a scam, Discover will accept responsibility and refund you the money, taking a loss if necessary. This dynamic keeps Discover on their toes working to prevent scams before they even happen.

#2 How should they fix it?

Make the status of the deposited check abundantly clear. This is how Chase can tamper-proof their bottles.

If a check has been deposited but not fully cleared, if it hasn't had the actual funds transferred over to the bank, then Chase needs to make that fact clear to customers! This missing piece of UX is what scammers depend on. That’s the systemic flaw here.

If the bank actually made the status of the funds clear to its customers, then it would have a much stronger leg to stand on when claiming innocence.

But as things stand right now, the only way a person could know the check was still at risk is if they already knew about the long clearing time and how banks hide it. Before the results come back from the external bank, Chase itself doesn’t know if the check is valid. Yet they offer customers no hint of that uncertainty.

And instead of taking responsibility, what did Chase actually do?

Chase took option #3: Shakedown their customers

Chase acted like the mafia, shaking down whoever they could to get their money back.

My sister had created this bank account back when she was in high school. For minors, Chase requires a guarantor, someone who will make sure their debts are paid. My dad, also a Chase customer, had listed himself.

And now Chase came after him for the remaining three grand.

He spent hours on the phone with Chase’s customer support trying to get the scam resolved. The reps may have been empathetic, but they weren’t empowered to help.

From the bank's perspective, the money they had already pulled from Sana's account was theirs. Sorry, no negotiation possible. The computer won't let us. And Chase did their darned best to avoid taking a haircut on the rest of the amount they had enabled scammers to steal.

The most those reps could do was suggest that my dad try not paying the remaining three thousand and wait until the debt went to collections. And then pray that the collections department had more leniency. It might hurt his credit rating though, they warned. How much? They didn’t know. The account was scheduled to go to collections at the end of September.

But a month before that date, my dad noticed his own account was now three thousand dollars lighter. Chase had just helped itself to those funds without warning.

And now that Chase is no longer losing money on this scam, customer support tells him “Sorry, there’s nothing we can do.”

With every step Chase made sure it got paid, one way or another. Vito Corleone would be proud.

And if the bank has no skin in the game, why worry about fixing some misleading UI? Chase seems to think people don’t bother switching banks, and that it can treat its customers as captive users.

How widespread must this be?

When a person gets scammed they’re usually hesitant to speak up about it.

They're ashamed of having been duped, afraid of being scorned for their foolishness, and so their story rarely gets told.

Yet I know two people who’ve admitted to being hurt by this scam. What does that imply about the scale of this problem?

And Chase allows it to continue happening, punishing their customers for not having intimate knowledge of how banking works.

Is your blood boiling yet?

I get furious every time I think about it. David Vargas tried to pass his situation off as his own fault, but I was outraged on his behalf.

I want everyone at Chase to see this story. I want every other bank that’s enabling scammers to see this. And I want them to fix their systems to let people know when shady checks have not yet cleared.

When I showed this article to a friend, he got restless; he wanted to take action. “How can I help?” he asked. “Tell me what to do!” At the time I wasn’t sure what could be done, but there is one thing:

My ask to you: Can you help spread the word and make Chase notice this problem? Maybe the internet outrage machine can turn this into a priority and protect the thousands of other people who are being scammed by this.

You could share this article (retweet the Tweet, or post on FB, Reddit, HN, whatever your usual channels are). Get this shared widely. And maybe, just maybe, some executive at Chase will notice and take the lead in transforming the banking industry. My sincerest thank you to everyone who helps.

So far, Chase hasn’t been interested in accepting an ounce of responsibility for their part in this scheme. Why bother fixing the system when you can just pull money from your customer’s bank accounts? But maybe they can change. Who wants to be known as an unwitting accomplice to scammers?

And even if they don't change, remember Chase's missteps and apply those lessons to your own products. Look for how your problems could be fixed at a systemic level.

As for Sana, she won’t be using Chase anymore.

They don’t tamper-proof their bottles.

Chase doesn't seem to get how people work. It's like they haven't heard of psychology! The most successful products leverage psychology alongside their engineering efforts, and others can do the same. You can subscribe below to get new posts in your inbox, and I'll also be sharing any updates on what happens with Chase.

Don't like newsletters? I share the same stuff on twitter

]]>
<![CDATA[ Never Focus on the User ]]> https://www.zainrizvi.io/blog/never-focus-on-the-user/ 5f5bbbf7e36dd700452b6767 Fri, 11 Sep 2020 11:21:44 -0700 Sharing a hard truth about our industry. I wanted to reject this for the longest time. But I finally had to change my mind.

User focused design is wasteful.

Every business knows this, but they're afraid to admit it.

What's their dark secret? They focus on the people who pay them.

“User focused design” is a PR-friendly way to say it, but every successful company actually focuses on their buyers.

Didn’t you work at Google? I’m pretty sure they say "Focus on the user". Technically true, but they don’t focus on the users you’re thinking of.

This goes against everything taught in UX school. UX assumes good design matters, but that isn’t always true.

Are you saying build cruddy products? Not at all. Build the best, but “best” often isn’t what you think it is.

Your time and money are limited. Every time you decide to build a feature, it’s a decision to not build the other amazing features you could have worked on in that time. If you build features that don’t lead to revenue, it gets hard to pay the bills.

Two Types of Users

Choosing the right features becomes even more critical when your product has two types of users:

  1. Those who pay for it
  2. Those who don’t

Most enterprise software falls into this bucket. A small set of executives choose most of the software their companies will use.

This leads to products that seem cruddy yet are extremely popular. Ever had to use one?

  • Workday: A tool companies use to manage employee documents, vacation time, etc. But updating any setting feels like you’re stuck editing a maze of pdfs
  • Jira: the work tracking system that can’t even create a hyperlink correctly

How do they not get beaten by competitors? The thing is:

Software is made for the people who actually pay money: The buyers.

That’s often the department heads and executives; they’re the ones who approve the final purchasing decision. Those people are the software’s bread and butter and will always be given a great experience.

And as for everyone else in their companies? Those employees become the software’s captive users.

Captive users don’t get a vote.

For them, life could be terrible. But it’s okay as long as the buyers remain happy.

This goes against everything taught in UX school!

UX teaches what you should do to make things great for the user. Why you should prioritize that experience is a business decision.

Good UX is considered a best practice, but be careful to first understand why something is a best practice.

UX work assumes user friction materially matters. And it does. If users have alternatives.

But if your users are captive, then like it or not, their needs are less critical to your business. The product only needs to be "good enough" for them to do their job.

Instead, what’s important are the needs of the folks who chose to buy your products.

But doesn’t Google say “focus on the user”?

Google has two types of users:

  1. Average Joe consumer who doesn’t pay anything.
  2. The businesses who want to sell stuff

When it's free, you are the product.

Those businesses want their ads displayed widely, so having consumers voluntarily use their product is a feature that Google offers to businesses. Focusing on the free users is how Google focuses on their buyers.

It makes sense in their business context.

Don’t blindly accept someone’s best practices without understanding why they’re best for them.

How is the math different for other companies?

Why is Workday so Bad?

It’s built for HR. And they love it.

Workday offers HR pretty analytics and graphs to track each employee's activities. Everything about it is designed to make that department’s life easier.  And coincidentally, HR gets to decide if their company purchases Workday.

Many idealistic developers have tried building a Workday competitor and failed.

Why?

They focused on giving the average-joe employee a better experience, spending their energy on the captive audience and neglecting their buyer.

Shockingly, no one would pay for their “superior” product.

Jira’s Justification

Who is Jira’s target customer? It’s certainly not the poor developer who struggles to add a link to his bug report. (I’m still mad at them)

Take a look at Jira’s site and see what they emphasize:

Those pages glamorize having an overview of what work your employees are doing. It’s a tool made for managers. Especially higher level ones. The VP won’t be filing work items, but they’ll be very interested in the insights those dashboards offer.

Jira spends all their software cycles building new features for that VP, ignoring what you and I might consider critical user bugs.

That’s how they stay ahead of their competition.

This feels evil! I want to focus on the user

I get it. I don’t like it either, so I got picky about what I work on.

To avoid this incentive misalignment then try to work on products where either:

  • Your users pay you
  • Free users liking your product is a feature

Remember, even enterprise software has paying users. But they might use different features than the captive users. The VP who bought Jira to look at dashboards? He uses those dashboards! Focus on features he'll find delightful.

Or, you know, build hey.com.

Alternatively, free users liking your product can be a feature.

Who is it a feature for? People who buy ads.

Companies buying ads on Google wouldn’t get nearly as many eyeballs if the search engine didn’t offer amazing results. It’s why Facebook hires psychologists to ensure users keep scrolling that feed.

Understand how much your buyers value a good customer experience and prioritize your work accordingly.

Yes, ideally you'd give every user the most mind blowing experience, but your time is limited. You have only so many hours you can put in. The heat death of the universe will happen before your product becomes perfect.

So what should you do?

Understand what your buyers want! Look at the world from their perspective and fix their pains.

Viacheslav Kovalevskyi lays out a step by step plan for what to focus on:

  1. Define the set of problems that you were thinking to solve (not project or features)
  2. Define the set of real customers impacted by those problems
  3. Assign tasks to onboard specific customers to the solution that solves the problem
  4. Pick your team members who will have a personal task to onboard the customer
  5. Onboard the customer
  6. Generalize solution for the rest of the customers

- Source: First steps towards User Oriented Development Practices

Find your paying customers. Fix their pains. If you don’t focus on your buyer, someone else will.

Whether you’re looking to advance in your career or for your startup to skyrocket, your job is to make your business money. That happens when you persuade buyers to buy.

That’s it.

So forget about the user.

Focus on the buyer.

Want to learn more about how to understand customers and accelerate your career? Subscribe below to get insights sent to you every week. You can also find me on Twitter

]]>
<![CDATA[ Taking my own Advice ]]> https://www.zainrizvi.io/blog/grow-by-giving-feedback/ 5f400d8226ece5004553fb76 Fri, 21 Aug 2020 11:13:17 -0700 You’ve heard how the best way to learn is by teaching, right?

Wrong.

Try grading instead. The single biggest practice that improved my writing was reviewing other people's essays. David Perell made me do it.

I took his class Write of Passage, and on the very first day we were told to write an essay. And give feedback.

To random strangers.

About their essays.

That didn’t seem right.

He expected me to give advice? Wasn’t he the teacher? I wasn’t qualified to do this; I barely knew how to write my own essays. That’s why I was taking the class!

But it was an assignment. And if school had drilled one lesson into me, it was to always do the assignments (yeah, I was that kid). I opened up a classmate’s essay and started reading.

And I noticed something weird.

Parts of the essay were super engaging, he’d written hilarious stories with insightful takeaways.

But other parts left me confused. I had to reread the sentence multiple times to understand what was being said. And on other essays I’d feel the urge to skim past topics I usually cared about.

Why was I feeling bored by information I valued? Why was I enjoying reading about a topic I never cared for?

And then it clicked.

The best content makes the audience feel a certain way

Excitement, intrigue, surprise, they need to be in there. They’re what converts Spartacus from a relic of history into a hero you’re rooting for.

Epiphany in hand, I started paying attention to the emotional journey the essays were sending me on. That’s the journey I want you to go on as well.

You can do this.

You read a lot, right? You’re reading this right now! How does this text make you feel? As you read each sentence, notice the feelings you’re having.

Are you excited?

Maybe surprised?

Or were your eyes glazing over?

The right words create magic

David Perell loves to share this color coded example

Dissect the text

Want to peek behind the curtain?

Ask yourself:

  • Why did I feel that emotion?
  • What made me this engaged?
  • Why was that passage so boring?

Then try to improve it. What could have been done differently? Change the structure a bit, see what happens. It’s okay if you don’t find a better option, but it’s important to take a few minutes to try.

Let the problem simmer in your head.

As you become more conscious about the emotions—both good and bad—different writing styles evoke in you, you'll notice patterns hiding in plain sight. You’ll spot techniques being used. The structure of the Matrix will be revealed.

The text will transform from words dropped on a page into a set of tools you can recognize, pick up and wield.

Do it for long enough, and those patterns become instinct. You flow with the melody on the page, in sync with the rhythm, the music sings on and on, right up till the wrong string gets plucked.

You feel it. Something’s not right. And you know what needs to change.

And you’ll inevitably read stuff which doesn’t pack that punch. They’re useful too! If your interest is slipping, pay attention and ask why? Contrasting good examples with the bad etches the patterns firmly in your mind. Darkness lets you notice the light.

Opportunities are everywhere

This technique isn’t limited to essays.

My friend Robbie Crabtree is a trial lawyer who dissects the emotional nuances in movies. He extracts the essence and infuses it into his courtroom speeches, life imitating art. I’ve learned a crazy amount by reading his material, and I highly recommend checking it out.

The mathematician Richard Hamming taught himself to give gripping speeches by observing other lecturers and noticing how their styles made him feel.

I even apply these tactics at work. I’ve written countless design docs and bug descriptions over the years. Not all of them were closely read. Some weren’t read at all, and I had to repeat the information over and over again in person. It was a pain, but I can empathize. I’ve tried to read documents which made my eyes glaze over and it’s a struggle.

But some docs read like a novel. Why?

They emphasize different things.

Surface different facts.

They lay out information in a different order.

It matters.

I adopted those tactics and my writing gradually became more compelling, my thoughts more clearly expressed, and I was more likely to get a response.

Everything is an opportunity to learn, even filing tickets for a bug. I’ve had tickets leave me scratching my head about what needs to be done, I barely understand the complaint! But other tickets clearly lay out all the important details, I immediately know what needs to be done. Guess which one gets fixed first?

Every time you see something a person has written, or said, or created, that’s a chance to learn from them. Don’t limit yourself to their ideas. Learn how they share them.

The writing class is over now, but I thank David for making me review all those essays.

It turns out I was qualified. And you are too.

Try it out. Read. Notice how you feel. Think about why. And copy the best.

Don’t just teach

Grade

A big thank you to everyone who reviewed drafts of this essay, including Greg Frontiero, Najla Alariefy, Yue Jun, Erik Newhard, and Robbie Crabtree

]]>
<![CDATA[ How I learned to turn Impostor Syndrome into an Advantage - The Impostor's Advantage ]]> https://www.zainrizvi.io/blog/the-impostors-advantage/ 5f36362fb293d90039d713ae Fri, 14 Aug 2020 00:06:00 -0700 My heart was racing. My palms sweating. I was going to be fired.

Performance reviews had just ended, and it was time to meet my manager and be told my results. Except I knew what it would say. How else do you rate a programmer who doesn’t code?

As I stood up from my desk, my eyes fell on my Ship-it plaque, congratulating me for helping release Windows 7. I had joined Microsoft a mere two weeks before it was released, a fresh college hire. There wasn’t a single character I had contributed to that code.

The plaque was a lie. Just like me.

My manager had booked a private conference room to share the results, far away from ears that might overhear anything said. Or begged. The long walk began, each step echoing down the corridor.

I racked my brains, grasping for an excuse to justify keeping my job. Instead my mind kept going back to the last bug I was supposed to fix. I’d spent all day failing to find the problem, finally giving in and asking a teammate for help. He found it in 10 minutes. I was way out of my league.

My boss must have seen it too, I bet that was why he assigned me, the kid, to help government auditors analyze our source code. Help them? I barely understood it myself! But this was more of a “people project.” If it didn’t require writing code, why waste real programmers on it? And so it came to me. I barely stayed afloat, constantly asking my manager for explanations and struggling to relay them to the auditors.

Yeah, I was doomed.

How would I tell my parents? Would I ever get another job? The only coding I did here was with an obsolete technology that no other company cared about; I didn’t even have the skills to land a new job.

What made me think I could work at Microsoft?

I reached the conference room. Could I stall any longer? Huh, it’s the same room he interviewed me in two years ago. I doubt he remembers.

Okay, this is it. Deep breath, poker face on. No matter what, I wouldn’t let him see me sweat.

I stepped inside. Scott was sitting at the table, laptop carefully angled to hide the screen.

“Have a seat” he said, gesturing to his right. As I sat down, Scott looked straight at me. He opened his mouth to give me the news. But it wasn’t what I expected.

“Congratulations, you’ve been promoted”

Huh? No way I heard that right. Keep that poker face tight.

“Keep up the great work! Anything you’d like to ask?”

Wait but...when did I...what?

He hadn’t noticed? I wasn’t about to point out his mistake. Can’t let any surprise show.

“Great, thanks.” That was all I trusted myself to say.

I was safe. For now.

He’d catch on in a few months, I was sure. I couldn’t hide forever.

I spent the next few years preparing for that inevitable day, desperately trying to work on projects that could teach me the skills that would catch a recruiter’s eye. I had to become hireable.

I needed a stronger resume, with skills people cared about. I switched to a new team which built stuff for the cloud: Azure Web Apps. Companies love the cloud, right? Surely I’ll learn industry relevant skills there.

Fast forward four years: I still didn’t feel like I was anything special, yet I kept getting promoted. I kept fooling them somehow, the bureaucratic review process hiding my flaws. But something else also started happening, hinting that, just maybe, I wasn’t as clueless as I thought.

What changed?

People started coming to me for answers.

I still didn’t feel like I knew that much. I was just telling people about the stuff I’d worked on, occasionally pointing younger engineers towards tactics I had seen work well. That didn’t feel like anything original, but folks were finding it useful.

It got really weird when the more senior engineers started asking me about the code base. These were brilliant people who had often helped me over the years. Didn’t they already know everything?

I guess not, but they were still way above my league. It’s not like I knew enough to offer them real advice.

But still...my team seemed to think I was doing well. Would other companies think so too? Was I finally hireable? Only one way to find out: I started applying.

I couldn’t believe the results when multiple job offers came in. And one was from Google! I couldn’t pass that one up. I made the switch.

During orientation, Google spent a lot of time discussing Impostor Syndrome, the feeling of accomplished people belittling their own talents and constantly being terrified of being discovered as a fraud. That’s a thing?

“Raise your hand if you have this feeling” Hah, yeah right, and have me be the only one raising my...oh, wow, that’s a lot of hands. Mine joined the crowd.

As I started working, impostor syndrome came up constantly. It was mentioned at company meetings, folks made memes admitting to it. It was everywhere.

People freely admitted to not knowing stuff. Teammates admitted to not understanding the code, or having no idea how a tool worked. All the stuff I didn’t know, many others didn’t get either.

Seeing everyone admit their ignorance freed me from my own fear. Suddenly, feeling clueless seemed normal.  It was a psychological quirk, not the truth.

My self-confidence grew. And gradually, without quite realizing it, something magical happened.

Impostor syndrome became a tool. I discovered the impostor’s advantage.

Did I notice feeling intimidated about asking a question? I started pushing myself to ask that question. Turns out other people had felt afraid as well, asking that question helped improve everyone’s understanding.  When I started openly admitting to being unfamiliar with a tool or some code, my teammates felt like less of an impostor themselves. Their confidence went up. And they in turn became more likely to admit the same, creating a virtuous cycle boosting the entire team’s morale.

The impostor’s advantage was a super power.

And it offered new insights.

That feeling of being an impostor is your subconscious telling you something: It’s saying you’re about to push yourself past your comfort zone and into the growth zone. Now when an opportunity shows up and impostor syndrome starts twitching in the pit of my stomach, that’s a sign I should jump at it! This led me to take on bigger and more ambitious projects, without worrying about being exposed. Somehow I still delivered results, helped by the various people I was no longer afraid to reach out to.

Every project still started with the thought, “I have no idea what to do here.” But then I’d remind myself, “no one else does either.” That was a surprising lesson about the more senior positions: Their work is so valuable precisely because no one knows exactly what needs to be done. It’s ambiguous. And it requires people who can still push through the uncertainty and forge a path forward.

They embrace the impostor’s advantage.

Looking back, I realize now that in my early days I'd been evaluating myself with biased glasses.  I was comparing myself to people much more senior than me. Of course there would be a skill gap. If one person understood something, I assumed everyone knew it. That was false. As the “systems grow it’s impossible for one person to keep it all in their head”[1]; each person just knew the areas they had personally worked on.

My biggest mistake: I didn’t value the soft skills I brought to the table. Fresh out of college I had taken a significant load off my manager's plate by being the main point of contact with auditors and other teams. The fact that I was not writing code made me think I wasn’t doing anything useful, when in fact the soft skill of being able to work with them was incredibly valuable to the team.

As I continue working and taking on bigger projects, I suspect impostor syndrome will never completely go away. But now I take that as a good thing. An advantage. It’s a sign that I’m growing and stretching myself past my comfort zone.

And in those darkest moments, when self-doubt is at its highest, I remind myself:

I haven’t been fired yet.

One of the ways I keep my impostor syndrome at bay is by interviewing regularly, trying to get at least one job offer every year.

This behavior led to me getting offers from Stripe and Facebook in addition to Google and Microsoft. It's not hard, it's just that no one tells you how to prepare for interviews the right way.

In the course Insider Advice on how to Pass FAANG Interviews I share the tactics I used to prepare and get those offers. Tech interviewing is a learnable skill and anyone can pick it up


Subscribe below to get the latest content whenever I publish anything. Or you could follow me on Twitter

[1] The Manager’s Path, by Camille Fournier

]]>
<![CDATA[ I couldn’t abandon another side project ]]> https://www.zainrizvi.io/blog/do-more-by-doing-less/ 5f2d876590d0ea0039568289 Fri, 07 Aug 2020 10:03:06 -0700 “That deer is huge!”

I’d been driving home late that night. As I came up to my house, my headlights landed on the biggest deer I had ever seen, right in the middle of the road. I live by some woodlands and neighbors had mentioned deer having free reign in there, but this was the first time I got to see one. This guy didn’t grow that big without sharp instincts: when my headlights landed on it, Bambi dashed away...straight up my driveway!

Come back!

I chased after it, hoping to prolong my impromptu safari by another second or two, only to see a white tail disappear into the bushes.

That was too short, I wanted to see Bambi again!

I rarely see animals around the house, but they’re definitely there. We hear coyotes howling at night and were warned of a bobcat prowling the area. They know how to stay out of sight though; I almost never get to see them in person.

Hmm...could I fix that?

What if I set up a camera to look for the animals and tell me when they walked by? I’d get to see them every day! (I'm not going to deny it, I was kinda inspired by the ninja squirrel obstacle course).

Time to brainstorm what I’d build:

  • A live video feed pointed at the woods.
  • Then record video whenever animals walk by. I’ll save it in the cloud!
  • I’d need an iPhone app to notify me when an animal is outside.
  • Oh, and I can stream the video feed to the app.
  • Gotta have AI! I’ll use image recognition to detect which animal is walking by so that I’m only notified when the more unique animals walk by (sorry, raccoons).
  • Finally, some logging to record the time of day each animal showed up. Let’s get scientific

It was totally doable. I was already imagining the day when I’d get an alert on my phone saying “Quick, there’s a deer outside!”

But there was a nagging voice in the back of my head. It whispered: “You’ll never finish this”

See, I have a long line of side projects left abandoned by the wayside. “You’ll abandon this one too” the voice kept insisting, “Don’t bother starting.”

I didn’t want to listen to that voice. But it wasn’t lying either.

Now what?

Alright, it’s calling out a real risk. Risk noticed. What can I do to mitigate it and improve my odds? I thought back to my past projects, hoping to find a hint that would let this one have a happy ending.

I found one.

It sounds crazy, but as I thought through all my old projects, I realized the dead projects had something surprising in common: I abandoned them about two weeks after starting. Almost every side project I completed was something I finished in under two weeks. Two weeks seemed to be when my motivation ran out.

Huh, that was unexpected.

It kinda made sense though. All of those passion projects were something I did for fun, without a large cause propelling them. I had certainly completed large, multi-month side projects before, but all of them solved some pressing need. But the ones which were just for fun? They were forgotten once the enthusiasm wore off. Which apparently takes two weeks.

Watching deer was definitely a “for fun” project. That meant I had a deadline, and not an artificial one. There was an invisible hourglass, and with each passing second my motivation was trickling away.

I had a clock to beat.

How do I do that? There’s no way I could build that many features in two weeks.

It was time to do the same thing I do at work: Cut scope mercilessly.

If I couldn’t complete the project in two weeks, it wasn’t worth attempting. As much as I was in love with the vision, I had to look at the entire project with this lens. So what gets the axe?

First I had to be clear about the *real* problem I was trying to solve.

I wanted to see the animals as they walked by. That required software to notice when they showed up. And something to notify me about it immediately. That’s it. Everything else was fluff.

I had to kill the fluff

Saving footage of the deer would be really cool, but nope. Goodbye cloud recording.

Live streaming video to my phone? Never mind.

Logging? Only at the end, if I have time.

Even for the remaining features I tried to keep cutting scope. For every bit of work I’d ask myself: Do I really need to have this? Is there a simpler way I can achieve the same goal?

Did I really need to build my own app to tell me about the animal outside? A Telegram bot could do that. Great, no need to learn how to build iPhone apps.

It was critical to limit how many new tools I’d have to learn. Learning one new tool would take a while and eat into the two week window. Learning two new tools would guarantee failure.

Maybe this wasn’t the time to figure out a new machine learning library; I’d use a motion-sensing algorithm instead. When the app detected motion, it would take a picture and send it to me. I’d take over the AI’s job and decide if it was interesting.
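The motion check itself can be as simple as frame differencing: compare consecutive frames and flag motion when enough pixels change. Here’s a minimal sketch of that idea in Python (the function name and threshold values are my own illustration, not the project’s actual code):

```python
import numpy as np

def motion_detected(prev_frame, frame, pixel_threshold=25, min_changed_fraction=0.01):
    """Flag motion when enough pixels differ between two grayscale frames."""
    # Absolute per-pixel difference (int16 avoids uint8 wraparound)
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    # Fraction of pixels that changed more than the noise threshold
    changed = (diff > pixel_threshold).mean()
    return bool(changed > min_changed_fraction)

# Example: a still scene vs. one where a bright patch (our "deer") appears
still = np.zeros((120, 160), dtype=np.uint8)
with_deer = still.copy()
with_deer[40:80, 60:100] = 200  # simulated animal entering the frame

print(motion_detected(still, still))      # False: nothing changed
print(motion_detected(still, with_deer))  # True: motion detected
```

In the real setup the frames would come from the camera feed, and a positive result would trigger the snapshot sent through the Telegram bot.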

Sometimes the scope cuts were much more subtle. Some steps which might be best practices at work are unnecessary bloat at home. To know which is which, think about why it’s a best practice in the first place.

Should I write tests for the motion sensing code? I had no idea how to test that. Heck, I won’t be maintaining this code after two weeks anyways. Cut. What about a clean, general way to send notifications to people? I’m the only recipient here, so I’ll just hardcode myself into the Telegram bot instead.
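Hardcoding yourself as the only recipient makes the bot trivial: Telegram bots send messages with a single HTTP call to the Bot API’s sendMessage method. Something along these lines — the token and chat id are placeholders, and `build_alert` is a hypothetical helper, not my actual code:

```python
import json
from urllib import request

BOT_TOKEN = "123456:ABC-REPLACE-ME"  # placeholder token from @BotFather
MY_CHAT_ID = "987654321"             # hardcoded: I'm the only recipient

def build_alert(text):
    # Telegram's sendMessage endpoint needs just a chat_id and the text.
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    payload = {"chat_id": MY_CHAT_ID, "text": text}
    return url, payload

def send_alert(text):
    url, payload = build_alert(text)
    req = request.Request(url, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)  # actually fires the notification

url, payload = build_alert("Motion detected! Go look out the window.")
print(url.endswith("/sendMessage"))  # True
```

No recipient lists, no subscription management, no app store review — one POST request and my phone buzzes.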

But the urge to add “useful” features doesn’t go away. There’s even a name for it: scope creep.

It’s the insistent urge that you should add one more thing. “I should make it easier to code this” “What if I do that with a different tool instead?”

Scope creep happens naturally, and if I didn’t fight it the timer would run out.

Time for the coding spree. I found the right camera on Amazon. Buy. Wrote the motion sensing code, hooked it up to a camera. Done. Wrote a telegram bot. Woohoo! Hooked them up to each other. Boo yeah!

I hit the two week mark, but I was still going strong. I was going to finish this! Calibrating the motion sensor to work outdoors turned out to be trickier than I’d expected but I was making progress...when the last grain of sand trickled out ⏳

Despite my best efforts, a couple days went by when I couldn’t work on the project, and the motivation timer ticked down to zero. Suddenly pushing it to completion felt like a slog. I didn’t want to continue.

No! I had almost finished it! The project was 95% done; it just needed a bit more of a push.

But it just didn’t feel interesting anymore.

There had never been a grand vision driving this thing, I’d started it on a whim. And the motivation was as fleeting as the motive.

A part of me wanted to deny it. “I’ll finish it tomorrow,” I kept thinking. Five tomorrows later I had to accept the truth. It was time to move on and let go of that mental burden. This was a passion project, once the passion wore off it just wasn’t worth slogging through.

But all wasn’t lost. When I started the project I had carefully designed it to really be three distinct projects disguised as one: a motion sensing algorithm, a telegram bot, and an integration project which put it all together. I had completed the first two projects! That would not have happened without me relentlessly cutting scope.

And every project was useful. I’d designed the motion sensor and telegram bot to be independent of the exact application I was developing. Now in any of my future projects I’ll be able to reuse those components if needed, saving me days of work and letting me complete more ambitious projects in that same two week period. I have new tools in my tool belt. That’s still a win!

By trying to do less stuff, I got more stuff done.

So I moved on. But now if that niggling voice ever comes back, I’ll be ready.

And if I see Bambi again, I’ll just take a picture.

]]>
<![CDATA[ Effective Altruism is Suboptimal ]]> https://www.zainrizvi.io/blog/the-most-effective-altruism/ 5f2305207607600039fbde0c Thu, 30 Jul 2020 11:04:11 -0700 I used to only donate to third world countries. That’s where my dollar would stretch the furthest, it was clearly the most logical way to give.

Then fate had me help someone face to face.

It was the third week of November. My team at work had decided to forgo our traditional morale event and instead make Thanksgiving dinner for the homeless. Most people decided to put on an apron and help cook the food, but I thought it would be cool to deliver.

We loaded the van with 80 pounds of turkey goodness and drove down to Tent City 3.

I got out of the van feeling a bit awkward. I’d never done anything like this before, what do you talk to a homeless person about? Fortunately for me, someone had already contacted Tent City.  A couple folks were waiting for us, ready to help unload the food.

As we finished setting up the trays, the dinner call went out. I was expecting a long line of rowdy, hungry people to suddenly appear. Instead, cheerful groups eased by, cracking jokes as they loaded up their plates. “The pie is awful, want me to take it off your hands?” “Thanks for the food guys, you grab a plate too!”

John, one of the men who’d helped unload, walked over and started telling me about the camp. It was like nothing I’d imagined. They had weekly elections, term limits included; John was the group’s leader this week. Every big decision the camp made was put to a vote, majority rules. They ran their own security patrols around the camp and had zero tolerance for drugs. There were even conflict resolution protocols! It was a democracy the Greeks would be proud of, living under blue tarps in an overcrowded parking lot.

Even their living arrangements surprised me. They made agreements with various churches to camp out in their parking lot for three months at a time. Careful not to overstay their welcome, when the three months were up they’d pack their bags and move to some other church.

And then the bombshell: Five days from now their current stay ended and they’d have to leave. But they still didn’t have a place to go.

It was a punch to the gut. Hearing it directly from John made it viscerally real in a way that no powerpoint on poverty could hope to achieve.

I wanted to help. And not just give money.

But...that went against reason. Why not spend that time earning more money and donate it, like I’d always done? If $10/hour labor can do what I would do, isn’t it better to spend that hour earning $40 and donate that instead? Why did this feel different?

Over the next few years I started noticing the answer surfacing in multiple ancient cultures, based on millennia of hard won wisdom. From the east to the west, across Islam, Christianity, Judaism, and the Orient, the sages offer the same advice:

Build your community. Support each other.

Leisure isn’t the Goal

Ancient Japanese wisdom talks about ikigai, the happiness of always being busy.

Busy doing what? Helping others.

Wait, what? Become happier by helping others? How about I become happier by watching Netflix?

It’s a bit counterintuitive, but that’s how our psychology works. Leisure activities like watching TV offer quick dopamine hits, but helping others is what offers that feeling of fulfillment.

And the ancients knew this:

Christianity encourages the service based life, preaching that you “need to be a servant to be truly fulfilled.” No, buying the latest iPhone won’t do it.

Even Islam teaches that the best deed you can do is to help your brother. But it adds a catch: your niyyat — your  intention, your mindset — has to be correct.

When you help others you must actually care about the other person. Prophet Muhammad said “None of you believes until he loves for his brother what he loves for himself.” It’s not enough to do something for others while hoping they’ll reciprocate. You can’t be hoping they’ll like you. Or expect applause. That’s not the right niyyat, and it won't do you much good.

Caring is critical.

“Wait, this sounds contradictory. You’re telling me to help others selflessly because then I’ll feel better? Isn’t that inherently selfish?” Well, kind of, yeah.

It’s not about being perfectly altruistic.

It’s about the things you think of while you’re helping others. The thoughts going through your head in the moment. Have you connected with the other person on a human level? Are you solely focused on their welfare?

Paradoxically, it is when you’re not thinking of the benefits to yourself that you benefit the most.

The benefit to yourself is an afterthought, a gift you notice. It can even be something you think of beforehand to motivate yourself into offering that help in the first place.

But in the moment? It’s gotta be all about the other guy.

“It’s not about me. It’s about others.” --John Maxwell

Let go of the self centered mindset. Focus on helping others because you think they are worth it. Parents know this instinctively: When I’m feeding my toddler I’m not thinking about her reciprocating one day. That’s not how we work.

“Okay, my kids are one thing. But man, what about everyone else? Do I have to care about them all? That sounds hard.” Yeah, that’s a tall order. And you only have the right niyyat when you help people you care about.

So...who do you care about?

Community is Key

It’s hard to care about a statistic.

I used to be a big believer in effective altruism. It emphasizes giving charity to places where the largest number of people will benefit from it. There’s definitely value in that philosophy, but that’s not how our psychology works. People are more motivated to help if you tell them a story of just one person who would benefit from a $100 donation, compared to hearing about 3 million people who need the help.

It’s more fulfilling to help those we relate to.

And while donating money is great, nothing packs a bigger punch than taking the time to personally help people you’ve connected with. The Jewish concept of tzedakah emphasizes that charity "cannot be done to someone – rather, it must be done with someone."

The Japanese take this a step further and emphasize building your moai. A moai is a group of people with common goals who look out for each other. You become part of a community and your focus is on helping your own group.

You lift up those you care about.

Islam has a similar emphasis on community: It encourages congregating morning, noon, and night for daily prayers. Imagine the connection you’d build interacting with people that regularly. Simply accepting a dinner invitation and breaking bread with someone is considered more praiseworthy than most acts of personal worship. It’s another way to cement those ties of brotherhood.

The instructions are clear: know your community, and help your brother.

Communities come in many forms, but remember, to be a moai it must have two components:

  1. Individuals have shared goals
  2. They have each other’s backs.

The internet makes it easy to find people who share your goals, just go check Twitter. But there isn’t anyone looking out for you. You’re a drop in the ocean, a faceless name in the sea. Maybe you get a question answered, a post ‘liked’, or possibly even reshared if you’re lucky. Then one swipe later you’re forgotten, a quick dopamine hit fading away.

That deep connection was never there. It’s not a moai.

Deep relationships come from interacting repeatedly with the same people. By knowing their stories, and them knowing yours. From caring about each other and looking out for each other.

Those bonds are hard to form in large forums. You need 1:1 conversations for that to happen, and even then there’s an upper limit to how many people you can befriend. Studies show people can have only 3-5 closest friends, a couple dozen they can be somewhat close to, and about 150 casual friends. That hundred and fifty is the upper limit of how large a community can get before people stop feeling connected and the group splits apart. Don’t bother joining a group larger than that. If you can’t invest time in people, you can’t hope to be close to them.

If you do find yourself in a large group, find a smaller subgroup you can have closer ties with. I’m currently taking a class called Write of Passage, which has over two hundred people enrolled. As we practice writing essays, we offer each other feedback and chat 1:1 via Twitter or Zoom. Over time, I found myself gravitating towards the same set of people for most of my interactions. Everyone really wants to see the other person succeed. We’re forming our moai.

Together, we lift each other up way higher than any of us could go flying solo.

Build your community. Support each other.

Your community can be anything you want it to be, but it won’t be real until you know their stories and they know yours.

When I drove to Tent City 3, the homeless there were just a stereotype. But hearing their stories and struggles first hand turned them into real individuals who mattered.

I doubt I made a similar impression on them. I didn’t share much of myself, it felt awkward saying anything that might remind them I had an actual home. A couple days later I was probably nothing more to them than “those guys who brought the food.”

So sure, I gave them a meal.

They gave me so much more.

]]>
<![CDATA[ Dangerous Professionals: Hacking the Bureaucracy to Get Stuff Done ]]> https://www.zainrizvi.io/blog/hacking-the-bureaucracy-to-get-stuff-done/ 5f1a842560c6440045622821 Fri, 24 Jul 2020 00:08:17 -0700 Ever tried dealing with a large company, only to get stonewalled? You're talking to a black box that repeats “Sorry, that’s against policy” or “I can’t do that.”

But if you can open up that box the picture changes. The moving pieces become visible. Watch how each one ticks, and you’ll start noticing the systems at play.

Leaders design incentives to be armor plating for the company, protecting its interests. They work well for the most part, but all plating has seams.

The Dangerous Professional finds those seams and slips between them.  They'll dodge gatekeepers and dance around policies, doing whatever it takes to get the job done.

Bypass the Gatekeeper

Companies set up divisions with the job of managing all the incoming complaints. You know them as customer service. Depending on whether you’re that company’s customer or their product, customer service’s role varies from “keep them happy with minimal effort” to “get these people off our backs.” They’re the ones companies want you to contact.

When customer service isn’t being that helpful, look for someone else to complain to. Like the CEO.

Jeff Bezos is famous for reading all customer emails sent directly to him. When a complaint lands in his inbox he forwards it to one of his reports with just a question mark added: ‘?’. Whenever anyone gets this email they drop everything and scramble to solve that customer’s problem. The customer service department doesn’t have that kind of clout.

I witnessed this at Microsoft. The VP got a complaint and forwarded it down to his direct report.

Which got forwarded down again. And again. And again. Each level adding one more manager’s weight. Propelling it faster and faster and faster till it smashed into my team’s inbox at critical velocity. All hands on deck!

Why did this work? Look at what’s happening behind the scenes. Your message went to someone with just enough time to say "This might be important, I’ll delegate it to the relevant team." And it gets fast-tracked through the chain of command.

A funny bit of management psychology comes into play here: If your boss’s boss’s boss tells you to “look into this please,” it jumps to the top of your priority list. Usually folks don’t bother to ask if it should be prioritized higher or lower than the large pile already on their plate.

Achievement unlocked: Teleport to front of line.

Dance around the Rules

Many businesses have departments tasked with nothing but creating rules other departments must follow. Some of those rules get really weird:

Once I was going on a business trip to help with a conference. My company had instituted a hotel budget of $250 per day. But this was a BIG conference. Hotel prices were jacked up, and only the sketchiest places met that price point.

The cheaper hotels were 30 minutes away, but then I’d blow the budget on Uber rides. I asked if I'd have to cancel my conference trip.

Manager: “There’s not actually an Uber limit”

Oh.

I booked that distant hotel and took an Uber every day. Now here’s the kicker: the hotel + Uber cost more than what I would have paid for a hotel within walking distance of the venue. But that’s too complicated for the finance department to write formal rules around. Imagine if they prevented people from getting a ride when it was really needed!

Thus the current policy was born.

It gets better. When you see the policy’s red lines drawn out, you can dance around them like Danny Ocean’s crew navigating a laser maze.

Good sales reps know this and use it to push deals through. Patrick McKenzie offers an example:

"Suppose you are selling a $100k product to a director who has $100k budgeted for it. You do the dance. Purchasing says they need a 10% discount because We're Big Enough To Always Demand One.

Three days later you submit the following: $100k invoice, discounted 10%, for 1 year of SaaS services. Comes out of software budget. $12k invoice, discounted 10%, for 1 year commitment to professional services. Comes out of consulting budget.

Purchasing is *extremely* likely to approve this."

Did the company hire people who couldn’t do basic math? Of course not.

They pay people who know where their time will be most valuable. Writing a policy that covers 95% of situations takes them 10 hours and saves the company $XX million. Writing policies to cover the remaining 5% would take ten times as long and offer only a tiny fraction of the savings. The cost/benefit analysis doesn’t work out.

If you learn what was left out of that missing 5%, you'll have the advantage.

Remember there is no “company.” All of the company’s decisions are actually individuals acting within their own set of incentives. The “company” is what emerges when those individual incentives interact with each other.

You don’t have to fight the entire company.

There is no company

Hidden incentives exist in your own company too. Is a team resisting doing something you’re asking for? Learn why. Find the fulcrum point. New paths will emerge.

For example, I used to struggle with people not responding to my emails when I tried to get their approval on a change I was making. “People are busy,” my manager explained. “Instead say, ‘I’ll be making this change on Thursday unless you object before then.’”

I started doing that.

The objections were rare. The complaints, non-existent.

What Makes Them Tick?

You get the best results when you adapt your approach to the person you’re facing

Now replace "person" with "team" or "department"

The sentence remains true

Figuring out each team's incentives can take some detective work. But once you understand them you’ll learn what makes each group tick. As you get better at this you’ll find yourself able to collaborate with others much more effectively and Get Stuff Done.

Now since you’ve reached the end, you might also enjoy my article Interview Advice that got me offers from Google, Stripe, and Facebook. I share insider tactics I learned over 13+ years of getting offers from multiple tech companies like Google, Facebook, Stripe, and Microsoft.

Interview advice that got me offers from Google, Microsoft, and Stripe
“What would you say if I asked you to design me a service capable of responding to thousands of user requests every second and latency was critical?” “Umm...that you have to solve this problem at work. But you’re out of ideas, and are looking to interviewees for suggestions”

Rather learn more about how to advance up the software engineering career ladder? Sign up for my newsletter below or follow me on Twitter @ZainRzv

]]>
<![CDATA[ What's it like as a Senior Engineer at Google? ]]> https://www.zainrizvi.io/blog/whats-it-like-as-a-senior-engineer/ 5f123d248f96fd003930da43 Thu, 16 Jul 2020 00:09:37 -0700 My experiences working at Google & Microsoft

When I started working at Microsoft, fresh out of college, coding was my life. Writing code was the easiest way to build any cool thing that my brain could imagine.  When I thought about what I’d want to do for the rest of my life I thought that I just wanted to keep coding.

During the next 11 years I became a Senior Engineer at Microsoft and moved on to work at Google and later Stripe. At these higher levels I still get to build, but I use a very different set of tools to do it. There’s a huge mindset shift needed when you go from junior to senior. Writing code becomes a minor part of the job.

Ever built a tool no one used? I have. It sucked. At the senior levels most of your time goes into identifying what needs to be built and how to build it. You have to research what the problem looks like. You talk to others and get everyone to agree on what needs to be done.

These are your new tools:

  • Research the problem
  • Design the solution
  • Build consensus

Research like a Detective

Fresh out of college you get handed tasks where the right answer is pretty straightforward. There isn’t much disagreement on what to do other than the occasional feedback in code reviews.

As you get more experienced your problems become more ambiguous. The path looks hazy. There are multiple routes you could take, but each one hides its own dragons. It’s not about coding anymore. Most of your work goes into research, and you can’t google the answer.

Research can take many forms. It usually involves a combination of reading code, reading documentation, and talking to people. Yes, actual human beings. In fact, that’s where most of the information you’ll need is locked away. Did you ever see Sherlock Holmes using search engines?

There often is no single person who knows the answer you need. Five different people might hold five different pieces of the puzzle you’re assembling. And you don’t know who those five people are. And they don’t know which pieces you need.

You have to find them. Find them and ask the right questions to sift through their brains, uncovering the nuggets you need.

Sifting for nuggets

At Google Cloud Platform, customers would often contact the Technical Solutions Engineers for help when they ran into issues. Those TSEs dug into the problems and fixed them.

My manager had an idea: “Wouldn’t it be great if we could use AI to automate that process?” We had no clue how to do it. Didn’t even know if it was possible. Heck, we weren’t even sure what kind of problems customers were asking for help with. But that was the challenge my manager offered.

I accepted.

Now any AI solution for this kind of a problem requires lots of data. The AI needs to see many broken environments to understand what they look like. And as I searched around I realized we didn’t have that data, it was all locked away in the brains of those TSEs. You can’t train AI with that.

I had to find the patterns. Maybe chatting with the TSEs would reveal something...

Me: “So, what type of problem do you usually face?”

TSE: “Eh, it’s something different every time”

Darn it, the AI future was looking bleak.

Me: “Well, what do you do to solve it?”

TSE: “It depends. Based on the problem, we’ll query one database or another. Then that’ll point us somewhere else, and we keep digging until we find what’s wrong. Then we fix it.”

No solid data on what problems they solve. No repeatable way to fix them. I was ready to give up.

Wait a second.

“Tell me more about these queries you run?”

What if I changed the problem? Maybe I didn’t have to fix those customer issues right off the bat. What if I helped TSEs debug the problems faster? I could automatically run the hundreds of queries they might run and suggest “Hey, this one had a suspicious result. Maybe dig a bit deeper there?” That’s a lot of debugging the TSEs could avoid.

I could even extend this to collect the data needed for an actual AI system. This had potential! The TSEs were excited. My team was excited. My manager was excited. We began coding.

Design: The Art of Balance

With ambiguous problems there is no single right answer anymore. There might not be any answer. What you have is a pain point. It could be your customers’ pains, your team’s pains, or even your own pain. The existence of that pain is the problem. Your job is to remove that pain without introducing even greater pains.

There’s a funny thing about ambiguous problems: they don’t have a clear right answer. Every solution offers certain benefits and has certain downsides. The more of those you discover, the better you’ll be at balancing the tradeoffs you have to make. Some common tradeoffs to consider:

  • How long will it take to develop the solution?
  • What’s the opportunity cost?
  • How risky is it? What happens if that thing fails?
  • How much work will it be to maintain this going forwards?
  • How far will it scale? How far does it need to?

With these ambiguous problems, sometimes the best answer can be “keep doing the thing we’ve been doing.” That was a tough lesson to learn.

Young and Naive

When I was a wee lad four years out of college, I had been asked to come up with a way to make our database upgrades less risky. The team would manually review all the planned changes to make sure they were safe, but once in a while a bug would slip through and the sound of pagers going off would fill the room as everyone frantically tried to fix it.

“Can we build a tool to catch those risky changes?” my manager asked me. Woah, this was a super open-ended problem. Sweet! I was determined to not let him down. This required digging deep into database upgrade best practices (I even read a whole book on it cover to cover). I spent the winter holidays toiling away developing a prototype that could do upgrades safely. And it worked! Kinda.

When I showed my creation to my manager he was worried: “You know what, let’s just stick with doing things the way we do right now.”

Ouch.

It was a tough lesson on risk management, but he made the right call. A bug in my tool could have brought our entire service down. It wasn’t worth the risk.

There were multiple lessons I learned that day:

  • Consider how much risk any new project might add to the system
  • It’s okay to fail. If you never fail then you’re not stretching yourself
  • Get feedback early!

To get that feedback, communication is crucial. Tell people what you’re going to build before you build it and let them warn you about any pitfalls before you step into one. If I had shared that design with my manager before building it we would have cancelled the project weeks earlier. And I would have had a relaxing winter break.

But collecting feedback requires a soft skill: empathy. Can you understand why people disagree with you? What are they valuing differently?

You may not always agree with the feedback, but you have to understand it. Only then can you move forward with a new vision that everyone can get behind.

Build Consensus

Getting that feedback and agreeing on the plan grows more important as your projects get bigger.

You may start off just having to get your manager to agree (he’s the one who gave you that ambiguous task). But you’ll need to build consensus with the rest of your team and even people outside your team who have a stake in your work.

This requires communication skills, both to understand and be understood.

Once I was tasked with creating the next generation of our internal database management system. This was something many teams depended on, and our current solution would stop scaling a year or two down the line. My team had seven different people with eight different opinions about what the system should look like. That included my manager and skip level. Oy vey.

First step was talking to them all to really understand their concerns and priorities. But there was another voice I wanted to hear from: our customers! This was meant to be a system for other engineering teams, how could I build a solution for them without understanding their problems?  It took a bit of digging to even figure out who those users were. This required another soft skill: The art of finding the person you need to talk to.

Eventually I got into a room with them. There they dropped the bombshell “We can’t really justify the work to migrate to any new system. The current one works well enough for us right now and we have more urgent problems to fix.” I talked to three different teams and got the same answer each time. Damn, what’s the point of building a solution if no one will use it?

A migration had to happen, soon the current system would stop meeting our reliability standards. There were a couple routes forwards:

  • Politics: Get my management chain to convince their management chain to force the teams to migrate. Yuck
  • Persuasion: Teach those teams why this pain that they won’t feel for a few years is more important to fix than the pains they’re facing today. That’s a hard thing to prove, and we’d have to make this case to many, many teams. That doesn’t scale well

There was a third option: change the constraints. What if I said ‘no’ to some of the features I’d been asked to add? Removing that let me design the system in a way that we could migrate all our customers automatically. There would be zero work required from them to migrate. We’d swap the engine with the car still zooming down the highway.

This was much more palatable. And by highlighting our users’ pushback I convinced the other stakeholders to drop those constraints as well.

That’s the general flow of any project you work on at the senior levels: You research the problem, gather the pieces, understand the world better. You design a solution, collect feedback and adjust course as needed. Then the implementation begins.

So how do you learn all these skills? Experience. Jump out of the nest and flap your wings. If an opportunity shows up, take it. You won’t feel ready, no one does, but that’s what makes it a learning experience.

Ask for help. Listen to the answers you get. Keep trying. At the end of the project ask for feedback and use it to improve faster. You only learn these skills by practicing.

It’s a new world at the senior levels. You’ll still be building. But with new tools you’ll build bigger and better than before.

Want to get more insider advice on how to pass interviews at Google, Stripe, and other FAANG companies? I share secrets I had to learn the hard way in my course Insider Advice on how to Pass FAANG interviews


Rather learn about how to build these soft skills and grow as an engineer? Subscribe below to get tips sent to you every week. You can also find me on Twitter.

]]>
<![CDATA[ Interview advice that got me offers from Google, Microsoft, and Stripe ]]> https://www.zainrizvi.io/blog/the-interviewing-advice-no-one-shares/ 5f123d248f96fd003930da2f Tue, 07 Jul 2020 18:36:05 -0700 “What would you say if I asked you to design me a service capable of responding to thousands of user requests every second and latency was critical?”

“Umm...that you have to solve this problem at work. But you’re out of ideas, and are looking to interviewees for suggestions”

That’s the actual response I gave the interviewer the first time I was asked a design question. He had a good laugh. But then still made me design the service.

In the decade since I’ve lost track of how many hours I’ve spent in the interview room, on both sides of the table. I’ve worked at Microsoft, Google, and Stripe, and received offers from many other companies. As I interviewed, I realized one thing: standard interviewing advice falls woefully short.

What good does it do to practice coding problems for weeks if your mind goes blank in an interview room? Everyone says to be wary of the recruiters, but what if you weren’t? How can you show your “best self” if you’re too afraid to let it out?

I tested the answers to these questions multiple times (sometimes by accident). Turns out conventional wisdom gets you conventional results. But you can do better. Interviewing is a skill and anyone can learn it.

For some reason no one talks about these aspects of interviewing, but I’ve found them helpful time and time again.

We’ll cover:

  • Using recruiters to your advantage
  • Going to real interviews for practice
  • Being open to learning during interviews
  • Keeping those skills sharp even when you’re not job hunting

Tip #1: Use recruiters to your advantage

Their voice sounds so friendly and helpful when you chat with them on the phone. “Zain, I’m looking forward to seeing you when you come for your interview!” Clearly it’s all an act, past-me assumed. I was sure they were secretly judging me, deciding if I was good enough to work at the company.

And they do judge. But not in the way I was expecting.

Recruiters aren’t evaluating you technically, not by the time you pick up the phone anyways. Their decision on whether you have the technical chops to be worth interviewing was made well before that call. If you’re being offered an interview: Congratulations, you've already passed that bar.

Now the recruiter wants to work with you. Their whole job is to, you know, recruit. They know that, statistically, candidates tend to have poor interview prep skills, and they're happy to help fix that flaw. Why discard strong candidates who can’t interview well? They’ll level the playing field by helping everyone show their best selves when interviewing.

How can you take advantage of this?

Ask them questions! Things like:

  • “What should I do to prepare for the interview?”
  • “What are the company values that would be good to highlight during the interview?”

And be forthright about any problems you run into:

  • If you get sick the day before the interview, call up the recruiter and ask to reschedule. They want to test you when you're at your best!
  • Work pressures left you with no time to prepare? You can still try rescheduling. At worst they'll say "sorry, we can't do that". It will not hurt your chances of getting in.

Your mad technical skills no longer matter to them. If anything, humility and an openness to learning will show you in the best light.

Tip #2: Go to real interviews for practice

You need to build up your interviewing skills. LeetCode is great, but it doesn’t come close to the real thing. Try to interview at actual companies as much as possible. And don't limit yourself to the companies you care about.

Learn to deal with pressure

When you're in a real interview the world changes: You're locked in a cage with a lion. Every heartbeat is a gorilla bashing against the walls. Your mental gears gunk up as your body goes into fight or flight mode. Your clammy hands struggle to write half legible code on the white board. A threat hides behind every shadow. Even an innocuous "Would you like something to drink?" is a nefarious test: do I pick coke or coffee?

You only get that experience in a real interview. And only real interviews teach you how to deal with it. The first interview will kick your ass. So will the second. But once you get a few under your belt you'll get used to the adrenaline rush. Perhaps enjoy it even. You'll become a bullfighter, confidently facing down the charging bull. That’s how you get over the fear.

You may even find that these practice interviews are much easier! When the stakes are low that lion doesn't look so fierce. I've found that I perform the best in the interviews where I don't care about the outcome. I'm much more relaxed and at ease. I can think faster, my brain reaches out to more possibilities. Now, even in important interviews I try to convince myself that I don't really care. That mindset shift is only possible because I got to experience it in the low stakes games first.

Learn to answer the more ambiguous questions

And what about the questions? After each interview write down all the questions you were asked. The same evening look over the questions while they’re still fresh in your mind. Focus especially on the behavioral and design ones which have no clear right answer. Consider how you could have responded better. Are there stories from your life you could have referenced? Wait a few days and look at them again. You’ll find better answers.

Each answer you prepare this way becomes a blob of paint on your palette. Chances are high that you'll come across similar questions in future interviews. Over time you'll be able to mix the answers from your palette to paint a picture highlighting how your abilities make you a valuable asset to the company.

Stay open to serendipity

As you do these interviews, you may discover that the “practice” company is actually interesting after all. Recruiters count on that. I’ve had recruiters suggest I interview even when I told them I didn’t find the company interesting. “Maybe you’ll change your mind after seeing us up close”, they suggest.

And I did. More than once.

Tip #3: Be open to learning during the interview

I learned this one by accident, but boy does it pay off.

At a college career fair once I was walking through the booths with my bag full of swag. My eyes caught a pile of Rubik's cubes being given away by some company I'd never heard of. I wanted one! Of course I couldn't just go up and ask for it directly, so I went and chatted with the guy manning the booth. His name was Vince. A few minutes later I walked away with my prized Rubik's cube in hand. That evening I got a call from Vince offering to conduct a full interview loop on campus. I already had a job offer from a company I liked, but I thought "Sure, why not? I could use the experience."

I had no intention of joining the company, they were some boring finance business. There was nothing to lose. So during the interview I felt free to ask any question I wanted. When I thought I got the answer to an interview question wrong, I'd ask "I don't think I did so great here. What's the right answer to this problem?" (I wanted to learn the answer for future interviews!). When I was asked a challenging question I could grin and delight in the problem solving aspect instead of worrying about how badly it would reflect on me. (Remember my smart aleck remark in the intro? That was this interview.)

Turns out they liked that: the next day I had a second job offer. At a significantly higher salary than my first one. Oh, and they wanted to fly me to New York in two weeks for an introduction to the company. I still didn’t want to join, but a free trip to New York? Sign me up! The company's name: Bloomberg

Bloomberg was a practiced hand at recruiting. They had a full two days of events to leave us starry eyed about the company (I came thiiiiis close to accepting their offer). While there, Vince told me I’d made a great impression by fearlessly asking questions, even when I was stumped.

Since then I’ve stopped hesitating before asking any question in an interview. Let your curiosity run free! Don’t cage it! And as an interviewer, I can attest: sincere interest is always a good sign.

Tip #4: Keep your skills sharp even when you’re not job hunting

James Whittaker recommends trying to get a job offer every year, just to make sure you can.

It's an extremely liberating feeling knowing that even if you lose your job you'll be able to find another one quickly. That's a huge stress off your back.

I haven't been very good about this myself, but every once in a while I'll accept an invitation from a recruiter (it helps to set up your LinkedIn account). I don't bother doing any prep for these interviews, at least not initially. Those interviews show me the areas I need to brush up on or where the industry practices are changing.

For example, in a tech screen I did last summer there was some miscommunication and I didn’t realize they expected actual working code. Instead of the usual pseudocode written in a Google Doc, the interviewer told me to select my language of choice on an online IDE. Now, I don't have a particular language I consider myself super proficient in. I touch many different tools at work and end up having to use a different language about every 5 months. So even basic things like "create an array" tend to require googling the syntax. Given that, what's my favorite language of choice? C#, hands down. So I selected that.

What hadn't occurred to me at the time is that C# is a very verbose language which Visual Studio beautifully automates away the tedious parts of. This online IDE automated nothing. Even for a basic task like creating an array I had to spend precious interview minutes looking up the right package to import and the exact syntax to use. Needless to say, I ran out of time. I crashed and burned there, but it opened my eyes to the way interviews are changing and how I needed to prepare in the future.

Four months later a company called Stripe reached out to me. They also expect you to write working code and even let you use your own IDE. This time I was ready. And now I work there.

These four strategies have helped me over and over again to perform well in the interview room: leveraging the recruiters, doing real interviews for practice, asking questions during interviews, and keeping those skills sharp even after getting the job.

What do all these tactics have in common? They eliminate your fear. The fear that holds you back from letting your best self out. Do them often enough and interview rooms will transform from a threatening jungle into Tarzan’s playground.

“Remember, never let them see you sweat” Vince told me. But when the stakes are low and you’re having fun, who sweats?

Want to learn more insider advice on how to pass tech interviews, which I had to learn the hard way? Check out my video course: Insider Advice on how to Pass FAANG interviews for timeless interview and career advice.

More interested in learning how to develop the skills required to grow as an engineer and get interview offers in the first place? Subscribe below to get regular tips based on engineering and psychology.

You can also find me on Twitter.

]]>
<![CDATA[ Remembering what you Read: Zettelkasten vs P.A.R.A. ]]> https://www.zainrizvi.io/blog/remembering-what-you-read-zettelkasten-vs-para/ 5f123d248f96fd003930da30 Sat, 09 May 2020 02:23:00 -0700 I love reading. But retaining what I read tends to be a challenge. I usually walk away from a book feeling good but with only a faint idea of what was in there. Heck, if I spend a couple hours online I’ll barely remember what articles I read! And it’s not just me, studies show that you only retain a tiny percentage of what you read.

I hated the idea of wasting all that time I spent reading, so a year ago I started looking into ways to retain what I learned.

The Contenders

My first attempt led me to Farnam Street’s tips on remembering what you read. Their concept of writing your notes on the book itself was liberating (it’s okay to WRITE in my book?!?). Actively writing down notes helped me get more insights out of the text, but those thoughts would then be trapped in the analogue world, locked away until I happened to peruse the book some time in the future.

FS also suggested writing down the book’s core ideas from memory right after you finish the book, but this is a process that requires discipline (unless you like testing yourself?) and things requiring discipline have a distressing tendency to not happen.

Those tips were exciting, but I couldn’t stick with it. I needed something different.

Next I came across the Zettelkasten note taking method. Its core idea is to create atomic notes, where each note is about exactly one topic (not more than a few paragraphs tops) and nothing more. Then you file it away in your system by linking that note to other notes which seem most relevant to it. All the notes are written in your own words, so you’re really writing down your own thoughts here.

The key here is that the linking process groups relevant notes together. Now when you’re interested in browsing your notes on a given topic, you’ll easily find them. You get to see how your ideas relate to each other as well as discover interesting ways they may play off against or even contradict one another.

But this technique is time intensive. You have to:

  • Save the initial note, paraphrasing what you learned
  • Search for relevant notes to link it to
  • Potentially update your table of contents to find that note more easily later on

If you take a lot of notes, the stream of incoming notes can quickly leave you overwhelmed. This technique requires time and dedication.

I couldn’t stick with it.

Finally I discovered Building a Second Brain and its P.A.R.A. technique. It offers an easier option for busy people.

With P.A.R.A. you organize all your notes by purpose, not by category. Let’s say you’re trying to build an app. You’ll have a folder called ‘app’ for all notes about it. Now if you study databases in order to build it, you’ll file any notes you take inside the ‘app’ folder, not in a separate ‘databases’ folder.

What does this do? By creating purpose-based folders and putting all notes related to that purpose inside it, we’ve created a new way to group relevant notes together. All your notes related to that purpose are available front and center when you open the folder. This lets you avoid the time consuming process of sorting, organizing, and linking your notes in order to make them useful. Just drop the note in the right folder and BAM, that’s it.
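As a rough sketch, a purpose-based P.A.R.A. layout could look something like the following (the folder names here are invented for illustration; the only thing that matters is that the top level reflects your purposes, not your topics):

```shell
# Hypothetical P.A.R.A. folder layout: notes filed by purpose, not category.
# Database notes for the app go under the 'app' project, not a 'databases' folder.
mkdir -p notes/Projects/app          # active project: all notes needed to ship the app
mkdir -p notes/Areas/health          # ongoing responsibilities with no end date
mkdir -p notes/Resources/psychology  # topics of lasting interest
mkdir -p notes/Archives              # finished or inactive projects, filed away
```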

How do you reference old notes? When you start working on a new project (like a writing assignment) you search the relevant folders and pull out notes that seem relevant to your task. All those notes will go into the new project’s folder. You’re effectively discovering related notes on the fly. You’ve avoided the work of double linking and cross referencing your notes. This solution gets you 80% of the way there with 20% of the effort. Just in Time linking.

When you finish a project, you file away the notes from that project in whichever folder you think they’ll be most useful in, and then archive the project folder. Now you’ve reset the notes to be discoverable the next time you need them.

And it’s not just good for retention. I’m finding that this purpose-based organization is helping me work much more productively on all my projects!

Summarize only when needed

Even the step of summarizing what you read is optimized for efficiency. It’s called Progressive Summarization.

With progressive summarization you don’t bother summarizing what you’re learning, at least not at first. Instead you copy the passages you found most interesting into your notes. If you ever reread those notes in the future, you highlight the phrases that really spoke to you. Only if you reread them yet again do you do the work of summarizing the ideas in your own words.

It’s not that summarizing your notes from the beginning is bad, but if you procrastinate on it while still expecting yourself to do it then you’re setting yourself up for failure. Progressive summarization offers you a way to delay summarization while still retaining value.

Note what’s happened here: Instead of forcing myself to be disciplined about organizing my notes, P.A.R.A. + Progressive Summarization takes advantage of the times when I’m already excited to work on them. Each time I touch the notes, I put in a small amount of effort proportionate to my level of interest in the task. We’ve replaced forced discipline with leveraged excitement.

Limit what you save

There is one other critical aspect of PARA that’s required to keep the system from being overwhelming. Successful followers of the Zettelkasten method seem to follow this instinctively, but it’s rarely mentioned:

You’re highly encouraged to limit the kind of things you save in your second brain to the following:

  • Things related to projects you’re actively working on. Don’t store trivia
  • Store things that surprise you: Don’t store stuff you already know
  • A select set of about 12 problems that you love to think about

Tiago recommends thinking about the 12 problems you care most about and only storing things related to those problems in your notebook (so skip the articles about ancient mummies…unless you’re an archaeologist).

By limiting which topics you put in your second brain you free up more cognitive space to notice what you do store. By storing less you’ll remember more 🤯

Not making your second brain cognitively overwhelming is an under-emphasized part of the PARA system. There shouldn’t be anything in your projects section unless you are actively working on it. The other sections are also meant to be pruned on a regular basis so that they only represent your primary interests. A good rule of thumb: if any folder gains more notes than you can easily skim (~50-100 notes), it might be time to split that folder into two or maybe even delete some notes.

In short

This is by no means a complete comparison of Zettelkasten and P.A.R.A. (that would be a much longer essay), but it captures the major points.

Zettelkasten has its benefits: If you want to be able to casually browse through your notes, looking for ideas to spark your imagination, Zettelkasten will most likely have superior results since the ideas are already summarized right there for you. Zettelkasten makes it easy to compose essays and put together speeches, but that’s because you’ve already done the hard work of writing down your thoughts ahead of time.

Its requirement to link all notes ahead of time is a HUGE barrier to entry, so Zettelkasten may be best suited to people with a strong research-oriented disposition who’re already used to similar practices. The fact that there’s no good software available to help with this makes the process even harder. (Check out Andy Matuschak’s notes for a gorgeous Zettelkasten example)

P.A.R.A. is great for those who don’t have the time (or willpower) to force themselves to write down notes they may never use. Instead, its Just-in-Time philosophy saves many hours and lets you be more productive. Tiago has designed P.A.R.A. to work with most productivity apps, but the process is optimized for his app of choice: Evernote.

All in all, I’m finding P.A.R.A. pretty useful so far. It has yet to pass the ultimate test of any knowledge management system: Will I still be using it three months from now? (ask me after July). I’m already noticing productivity boosts by using the PARA method to store notes for all my projects, so prospects are looking good 😁

August 2020 Update:

Folks started asking, so here's the update

I’m kinda falling behind on some of the proactive archiving/organizing parts of P.A.R.A. But that's where the Just-in-Time aspects really shine. I can organize just the stuff that I’m about to use.

But the highlights: P.A.R.A. has made a night and day difference to my project management style. In the past I organized my notes and todos in a completely different way. I was constantly struggling to find the important bits or would miss important tasks. I'd always be feeling overwhelmed and lost.

I still feel like that today, but it's no longer because of my notes 😅

My system is nowhere near perfect now, but it's waaaaaaay better than what it used to be.

So the system still works. For now at least.

Let's see how it fares after a year of use.

Ask me after March 2021

January 2022 Update: It's a little late, but here are my reflections after two years: PARA vs Zettelkasten: It's a false binary

Would you like to work at a FAANG company? I've worked at Google, Stripe, Facebook, and Microsoft, spending over 12 years as both the interviewer and interviewee. Over time, I realized one thing:

Standard interviewing advice falls woefully short

In the course Insider Advice on how to Pass FAANG Interviews I share the tactics I learned the hard way that you can use to ace your interviews.

Interviewing is a learnable skill, and this course can teach you how to master it.

]]>
<![CDATA[ Quickly Building Products for ACTUAL Customers ]]> https://www.zainrizvi.io/blog/quickly-building-products-for-actual-customers/ 5f123d248f96fd003930da31 Fri, 10 Apr 2020 18:05:00 -0700 This is my tweetstorm on key aspects to focus on when you’re building a SaaS service. It details how features that seem critical may not actually be that important. Experiment to see what your actual customers are demanding, and focus 100% of your effort on that. It’s okay to acquire technical debt along the way.

]]>
<![CDATA[ The Truth about VPC Security Controls ]]> https://www.zainrizvi.io/blog/the-truth-about-vpc-security-controls/ 5f123d248f96fd003930da32 Fri, 21 Feb 2020 18:08:00 -0800 GCP’s VPC Service Controls protection is often described as a virtual firewall for your GCP projects. That’s a useful mental model for your company’s decision makers to think with, but the analogy quickly breaks down if you’re an engineer trying to actually implement VPC-SC protection for your GCP projects.

I learned that the hard way.

Here I’ll describe just what VPC-SC is, why it was needed, and a big mistake I made which you reeeally want to make sure you avoid.

What Problem is VPC-SC Supposed to Solve Anyways?

The core problem VPC-SC is meant to solve is to protect companies against data exfiltration. That’s a fancy way to describe preventing adversaries from copying your private data off of your servers and onto their own.

You can set up many types of protections to prevent attackers from getting access to your machines (let’s start with decent passwords, shall we?), but following the principle of defense in depth, enterprises with more critical data protection requirements sometimes need an additional layer of protection which says “Even if someone hacks into our machines, joke’s on them. They still can’t steal our data!” That’s the core problem we’re trying to solve here.

This problem has been around for a while. How was it solved in the pre-cloud era?

Back then, companies would have a private corporate network. All employee computers would be inside that network, and that network would have a set of strict firewall rules set up to prevent data from being passed to the outside world. The firewall would only be opened up for a few, strictly vetted sites that employees had a real business need to access.

This way, even if an attacker gained access to a machine they still would not be able to send a copy of the data to themselves. The firewall would block any such attempts.

Now what changes if you move to the cloud? Not too much actually!

You can define firewall rules for your cloud VMs as well and you can set those up to match very similar rules to what your local company network had.

But what if you wanted to use any of the other mouth-watering array of services GCP offers? Things like blob storage, pub/sub, or Jupyter notebooks as a service?

Those services live outside the reach of your firewalls. They run on shared infrastructure that serves as the entry point for resources owned by all GCP customers.

The usual solution of opening up a firewall hole to allow traffic just to those services doesn’t quite work. Take Google Cloud Storage (GCS) for example: it’s specifically designed to store data, so it would be trivial for an attacker to take your secret data, push it to their own GCS bucket, and then download the data from there at their leisure.

Some companies worked around this problem by self-hosting their own versions of these services within their private corporate networks. That’s an option, but it’s expensive. You have to run, debug, maintain, and upgrade both the software and servers, all by yourself. Your dev ops team is cursed to toil away in the kitchen making boiled chicken, while longingly looking upon the duck confit delivered to their neighbor’s doorsteps.

VPC-SC provides a better way.

VPC-SC to the Rescue

At a high level, VPC-SC controls are settings you can define across your projects that help you eat from the GCP buffet without worrying about an alligator eating you.

These settings are called your VPC-SC perimeter. You create and enable a VPC-SC perimeter (the virtual firewall) by defining three things:

  • A list of GCP services that you want to set on lockdown mode
  • The specific projects of yours that the lockdown mode should be applied to
  • Exceptions you want to make to the above policy (via Access Context Manager), but let’s ignore this category

Let’s call the set of projects in the perimeter P. By adding GCP services and projects to the perimeter you’re saying you want to see the following behavior implemented by each of those services:

  • Data Ingress limits: the service shouldn’t allow anything outside the projects in P to read or write data in P. So nothing outside the perimeter can touch your data.
  • Data Egress limits: the service shouldn’t allow the projects in P to write data anywhere except into another project in P. So data never gets written outside the perimeter.

The above two clauses put together tell GCP Services to make sure your data stays inside your projects.
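As a sketch, the perimeter definition above can be expressed with the gcloud CLI. The policy ID, project numbers, and service list below are hypothetical placeholders, and the exact flags may vary by gcloud version, so treat this as an illustration rather than a copy-paste recipe:

```shell
# Sketch: create a VPC-SC perimeter (the "virtual firewall") under an
# existing access policy. POLICY_ID and the project numbers are placeholders.
#
#   --resources:           the projects the lockdown applies to (the set P)
#   --restricted-services: the GCP services to put in lockdown mode
gcloud access-context-manager perimeters create my_perimeter \
  --title="My Perimeter" \
  --policy=POLICY_ID \
  --resources=projects/111111111111,projects/222222222222 \
  --restricted-services=storage.googleapis.com,bigquery.googleapis.com
```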

So you’ve combined the traditional firewall + DNS rules that were used to protect computers, and added a layer of VPC-SC service level protection on top of that. These combine to protect your resources from data exfiltration.

How? Now, the only way to access your project’s data is from your VM, which is protected by your custom firewall rules. Even if an adversary manages to get a hold of your GCP credentials they still wouldn’t be able to steal your data. They would first have to breach your firewall to enter your VPC-SC perimeter! (Defense in depth!)

And that’s how you get the virtual firewall!

The more astute among you may have noticed that the VPC-SC controls actually have nothing to do with your VPC network. I never asked, but I assume the name VPC-SC was chosen for marketing purposes to make the ‘virtual firewall’ analogy easier to accept

The Big Misunderstanding

There’s one horrible assumption that people tend to make when setting up VPC-SC networks. I made this mistake too and spent a long time trying to debug it before I realized what I was doing wrong.

Notice how you’re adding services into your VPC-SC perimeter to enable the “lockdown mode”? What happens if you don’t add some service to the perimeter? My naïve assumption was that it would be blocked and inaccessible to my VMs.

That is absolutely, completely, 100% not the case.

In fact, if you don’t add a service to your VPC-SC perimeter then you’ve basically left the route open for your internal VMs to send data to that service but neglected to lock down that service itself. Chickens are in the hen house, but the door is left wide open.

Here comes the fox ready to exfiltrate your data!

In case that wasn’t clear: you must opt in to every service that you want locked down. Opting out means zero protection.

There were probably good technical reasons for setting this up as the default behavior. The first guess which comes to mind would be that not all GCP services support the VPC-SC lockdown yet. But boy, you can really get caught with your pants down if you don’t see this one coming.

How do you protect yourself against this? First step is to include all services that you actually plan to use in your VPC-SC perimeter. Then, block all the services that you do not plan to use. I’m not sure what the recommended way to do this is (probably something at either the DNS or the IAM levels) but that’s what you’ll want to do. Otherwise you’ll have left a hole wide open that’s big enough for an adversary to drive a Humvee full of chickens through.

Please don’t do that

The Power is Yours

There’s a quick overview of what VPC-SC controls offer you and what they don’t. Hope this helped you get a better understanding of how to set it up.

If you have any thoughts, war stories, or corrections, shout out in the comments and let me know!

]]>
<![CDATA[ So you want to do Deep Work? ]]> https://www.zainrizvi.io/blog/so-you-want-to-deep-work/ 5f123d248f96fd003930da33 Thu, 23 Jan 2020 18:12:00 -0800 Deep Work has been called “the ability to focus without distraction on a cognitively demanding task.” Other people call this being in the state of “flow” or “being in the zone” where you can effortlessly focus on your work and be incredibly productive.

“‘The best moments usually occur when a person’s body or mind is stretched to its limits in a voluntary effort to accomplish something difficult and worthwhile.’… this mental state [is] Flow”

In this information era where all mechanical tasks are being automated, to be successful you need to be able to do what machines cannot: be creative. Creativity is something you can generate on demand through deep work, and in his book “Deep Work” Cal Newport explains how you can achieve it.

Below are the key takeaways I had from his book, with a bunch of my own thoughts sprinkled in.

Your Motivation is the Key

“the skillful management of attention is…the key to improving virtually every aspect of your experience.”

Deep Work requires you to concentrate on a topic for long stretches of time. This is an extremely challenging task unless the topic is something that you feel highly motivated to work on.

What happens if you try to work on a task you’re not motivated about? You procrastinate. And delay. And do almost anything else except work on that one task. You might slowly make progress, but you’ll be terribly inefficient.

But if you’re motivated to do something, you can spend hours working on it non-stop and not even feel tired afterwards. We need to harness this motivation.

Even if we feel motivated for a while, our brains are fickle things and sometimes get distracted anyways. It helps to have additional layers of motivation keeping us on track towards our goals. It’s like multiple hands pushing you in the direction you want to go.

Here are some specific tactics to help us stay motivated:

Tactic #1: Focus on What You REALLY Care About

The first step to being strongly motivated is to focus only on the tasks you really care about.

You need to manage your attention so that you don’t have to force yourself to work. Don’t make yourself say ‘no’ to things you want to do. Instead, find the productive things that you are really longing to do and say ‘yes’ to them. That ‘yes’ will be effortless.

For example, you didn’t have to force yourself away from Facebook to watch the last Avengers movie. When you had a chance to watch the movie, Facebook didn’t even enter your mind.

Find work that captures your attention the same way

“To win the battle for willpower, don’t try to say ‘no’ to the things you want to avoid. Instead try to say ‘yes’ to the subject that arouses terrifying longing, and let that terrifying longing crowd out everything else”

Tactic #2: Create Fast Feedback Loops

This is both a productivity tip and a motivational hack.

Fast feedback loops are when you can quickly measure a result which tells you how effectively you’re working.

Creating feedback loops helps you figure out how well you’re progressing towards your goal. The faster you can tell that you’re veering off track, the better.

And seeing that you’re doing good work (or that you need to improve) can push you to keep going or work harder. Conversely, if you can’t see the effects of your actions, you’ll stop caring (goodbye motivation).

Cal specifically mentions measuring what he calls ‘lead measures’, which are items which imply you will be successful (e.g. hours spent in deep work, or mini tasks that you’ve completed). He contrasts that to ‘lag measures’ like number of sales, since the latter takes a lot longer to acquire, meaning it’ll take a lot longer to get that feedback.

It can be tricky to identify good lead measures, but they should be behaviors that drive success on the lag measures. The general idea is to get feedback as fast as possible, which is in line with many other popular philosophies such as MVPs and Fail Fast.

Tactic #3: Compete and Win

Add to your motivation by competing against yourself (or others) with a scoreboard. Cal suggests that people play differently when they’re keeping score. This is really another way of using your lead measures, but you are now attaching a goal to the lead measures and getting your psychology involved as well.

Note that it’s best if you don’t make this a public competition, otherwise you run the risk of optimizing for the wrong metric (the scoreboard) instead of the actual value the scoreboard was meant to represent.

Examples of score boards:

  • Outcomes from your feedback loops
  • Number of hours spent in deep work per day/week
  • Number of lead measures
  • Number of customer sign ups

Tactic #4: Regular Accountability

Have regular meetings of any team that owns a wildly important goal. This could be you setting up a time (weekly or daily) to go over the past week’s scores and plan how to improve the next week. This also plays into the feedback loops that you’re creating and is another angle through which you can make sure your motivation stays high. Social accountability can be a surprisingly motivating lever.

If you’re working solo, you can create accountability in other ways. You can commit to giving regular updates to a group of friends, or to post them on a public forum where your peers will see it.

This tactic can be seen as a variant of a precommitment device.

Tactic #5: Work with Great Intensity

Take your goals, estimate how long it’ll take to complete, and give yourself a drastically reduced deadline. Commit to it publicly if possible, and then work intensely to make it happen.

This is kind of like how in school it felt impossible to write that essay a week before it was due. Yet the day before the deadline words would suddenly pour forth from your fingertips like magic.

Personally, I find it next to impossible to take a self-imposed deadline seriously. I have to pair it with external accountability by telling someone else about that deadline. Knowing that I’ll have to report my status to that second person suddenly makes the deadline real.

Create a Structure for your Deep Work

Many creatives know the pain of sitting in front of a blank piece of paper and thinking ‘now what?’ The best people use a formula that works for them. It seems counterintuitive to use a formula for creativity, but people are most creative when given certain limitations (otherwise they risk information overload). Those people have a specific pattern of actions that helps them focus and concentrate, and you’ll want to develop one for yourself that you’ll use to do your deep thinking.

There’s no one fixed formula because it’ll vary based on your industry, the type of work you do, and even your personality.

However, you can take the following formula as a starting point. Then over time you’ll build on it and adjust it to meet your own needs.

Starting structure:

  1. Carefully review the relevant variables for solving the problem (the things you can affect) and store them in memory
  2. Define the next-step question you need to answer using those variables. Now you have a specific target for your attention
  3. Focus on your question and try to find an answer using the variables
  4. Consolidate your gains by reviewing clearly the answer you arrived at

There are also a couple pitfalls in your thinking you should watch out for:

  • Distractions: thinking about other things besides the really important goal you have
  • Looping: Going over the same problem again and again, rehashing old results without diving deeper into it. When you notice the loop, catch yourself and shift your attention to the next step

Discarding Distractions + Embracing Boredom = Finding Focus

You have to be able to focus on your deep work. Without this skill, your mind will constantly wander to all the distractions bombarding you for attention.

Being able to focus is a skill in and of itself, and you can develop that skill. When you concentrate on something regularly every day (e.g. meditation) you are building up your concentration muscles which will help you have laser focus in other areas of your life.

The way you get better at focusing is to force yourself to get distracted less.

Distractions are things like:

  • Email
  • Social media & web surfing
  • Most of the internet really (you can define specific kinds of internet use as acceptable if it’s critical to your work)
  • 99% of alerts on your phone

Don’t take breaks from Distraction, take breaks from Focus. Focus should be your default state of mind. Only allow yourself to be distracted at predefined times.

You can schedule the occasional break from focus where you can give in to distractions. It has a side benefit of also training yourself to delay gratification.

Implementation tips:

  1. If you need that distraction frequently (e.g. you’re expected to be responsive to email), then schedule shorter, more frequent “distraction” breaks
  2. Absolutely no distractions during the distraction-free time. You must resist temptation even when it seems important; this is training your brain
  3. Schedule distraction time at home as well as work. Your brain should be trained to always be in focus mode, even if the thing you’re focusing on is family.
  4. Turn off all non-critical notifications on your phone
  5. Enable Do Not Disturb mode on your phone by default, maybe just allowing phone calls to come through. You can check your phone for notifications during your distraction breaks or set it up to automatically disable Do Not Disturb mode during your breaks

Cut out your Time Wasters

Eliminate activities that don’t help you achieve your goals.

Cal called this rule “Quit Social Media” but it’s really about removing your biggest time wasters. The logic also applies to TV, YouTube, web surfing, etc.

Social media (and your other time wasters) are tools. They have both positive and negative effects, though the negative comes much more easily.

The Craftsman Approach to Tool Selection: Identify the core factors that determine success and happiness in your professional and personal life. Adopt a tool only if its positive impacts on these factors substantially outweigh its negative impacts.

Apply the 80/20 Rule to Your Internet Habits. Figure out which 20% of your online time gives you 80% of the value.

To do this you need to:

  1. Identify the main high-level goals in both your professional and personal life
  2. For each goal, list the two or three most important activities that help you satisfy the goal.
  • The activities should be specific enough to allow you to clearly picture doing them, but general enough to not be tied to a one-time outcome
  3. Consider the network tools you currently use
  • For each, ask whether the tool has a substantially positive, substantially negative, or little impact on the above identified activities.
  • Only use tools that are substantially positive

Don’t Use the Internet to Entertain Yourself! Make deliberate use of your time outside work. The internet is like a time machine that fast forwards you through hours of your life (like in the movie ‘Click’).

Put more thought into your leisure time. Think ahead about how you want to spend your free time later that day or on a future day. You can use that time to focus on things shown to increase happiness, such as improving your relationships, engaging in structured hobbies, enjoying nature, etc.

Remove Shallow Work as much as Possible

Shallow work consists of non-cognitively demanding, logistical-style tasks that you can often perform while distracted (e.g. filling out forms, sending emails). Essentially it’s any work that’s not deep work.

These efforts tend not to create much new value in the world and are easy to replicate.

The shallow work that increasingly dominates the time and attention of knowledge workers is less vital than it often seems in the moment. Replacing shallow work with deep work means more of your time goes into work that’s actually valuable. (Of course, there is a limit to how much shallow work you can actually cut out.)

Deep work is cognitively exhausting. In the beginning, an hour a day is a reasonable amount of deep work time. Experts can do up to four hours, but rarely more.

The rest of the time can be spent in shallow work.

The following are tactics designed to push you to not waste time on shallow work. They’re all designed around getting you to deliberately choose how you’re spending your time

Tactic #1: Schedule Every Minute of Your Day

At the beginning of each workday, divide your day into blocks and write what activity you’ll be doing in each block.

Issues will come up: your estimates will be wrong, and other obligations or interruptions will arrive unexpectedly.

That’s okay, just remake your schedule at the first chance you get. It’s fine even if you have to redo your schedule a dozen times a day.

The goal here is to force yourself to be conscious about how you’re spending your time. It’s a way to make you think “what’s the best thing I could be doing with my remaining time?” This question will make you less likely to spend time on less productive tasks.

Tactic: add overflow conditional blocks. These are blocks where you continue Activity A if it runs longer than expected, but switch to Activity B if you finish A early.

Tactic: include a ‘miscellaneous’ block for handling generic things that need to be done (email, interruptions, etc)
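Putting those two tactics together, a blocked-out morning might look something like this (the times and tasks are made up for illustration, they’re not from the book):

```text
 8:30 –  9:00   Plan the day, build this schedule
 9:00 – 11:00   Deep work: draft the design doc
11:00 – 11:30   Miscellaneous block (email, interruptions)
11:30 – 12:30   Overflow conditional block: keep drafting if the doc
                isn't done; otherwise switch to reviewing PRs
```

When an interruption blows up the plan, you just redraw the remaining blocks and keep going.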

Tactic #2: Finish your Work at the Same Time Each Day

Fixed-schedule productivity: have a firm goal of not working past a certain time, then work backwards to find productivity strategies that allow you to satisfy this declaration.

  • You drop what can’t be done, and find ways to maximize what you want to do given your limited time budget

Tactic: set drastic quotas on the major sources of shallow endeavors while protecting the deep efforts

  • Be asymmetric in culling your activities to make fixed-schedule productivity work: cut the shallow while preserving the deep
  • Limiting your time forces you to carefully think about your actions, forcing you to be more productive

This is a meta-habit that’s simple to adopt but broad in impact

Give Yourself Down Time

Aka: Don’t take your work home

You need to be able to relax your brain when you’re not working, so that it’s ready to give 100% again when you start working. If you don’t have any problems with that, then go ahead and skip this section. If you find yourself thinking about work when you’re sitting with your family, then read on.

One tactic for this: Have a Shutdown Ritual

The idea is that if you find yourself thinking about things you need to do, you need to come up with a system that you trust to make sure that everything important will be taken care of. GTD, bullet journaling, BASB are all different systems meant to achieve this goal, and they all revolve around the idea of documenting your task list in a way that you trust.

At the end of your workday, have a shutdown procedure so that your brain can stop thinking about what you still need to do. This frees your brain up so that it can rest and recover in the evening.

The process should be:

First, ensure every incomplete task, goal, or project has been reviewed and that for each you have confirmed that either

  1. You have a plan you trust for its completion
  2. It’s captured in a place where it will be revisited when the time is right

If you follow a GTD, bullet journaling, or BASB type system, that system should be meeting the above needs. If it doesn’t, it’s a good idea to analyze your routine to see why it falls short and see what you can tweak to fix that.

Cal adds that when you’re done, have a set phrase you say that indicates completion (e.g. “Shutdown complete”). I never did this part myself, but I can see how the routine-ness of it could help flick a mental trigger for some folks.

Conclusion

Deep Work can be an incredibly rewarding activity, both personally and professionally.

In short, the keys to successfully engaging in deep work are to:

  1. Work on tasks you find Highly Motivating and then motivate yourself some more
  2. Develop a Structure for how you’ll do deep work. Don’t just sit there waiting for inspiration to strike
  3. Train yourself to Get Better at Focusing. You can’t do deep work if you constantly get distracted
  4. Get Rid of your Time Wasters. I’m looking at you, Facebook
  5. Remove Shallow Work as much as possible. Sometimes that work is important, but often it’s not really
  6. Save some Down Time to Recover. Deep work is hard, give your brain time to rest

If you found these tips useful, sign up below to get new articles I write at the intersection of self-improvement, psychology, business, and technology.

]]>
<![CDATA[ How to setup a Free Custom Domain Email Address ]]> https://www.zainrizvi.io/blog/how-to-setup-a-free-custom-domain-email-address/ 5f123d248f96fd003930da34 Wed, 15 Jan 2020 00:00:00 -0800 I recently discovered that it’s possible to combine any domain that you own with your Gmail account and a free MailGun account to get a free custom domain email address!

When you’re done following the below instructions you’ll be able to send and receive emails addressed to you@yourdomain.com directly from your Gmail inbox.

Step 1 - Buy a domain

Okay, so you still have to pay to own your domain, but the rest is free. And if you want a custom domain email address, then chances are you’re interested in your professional reputation. Owning your own domain is a good idea even if you don’t plan to start your own site just yet. I held off on it a couple years and that was enough for a law student to grab zainrizvi.com -_-

I prefer to buy my domains through namecheap.com, though pretty much any company would work. Use whatever you like, but don’t let the decision of which site to use stop you from buying your domain! (Use namecheap.com if you’re uncertain)

Step 2 - Sign up for a mailgun.com account

Edit: Mailgun now charges for mail forwarding, but at the bottom of this post I've listed other services which let you do the same thing for free. Just replace Mailgun with your chosen service.

This is the secret sauce. We’ll use Mailgun to forward any email sent to your domain straight to your Gmail inbox (umm…you should sign up for Gmail too if you haven’t already)

Go to their site at http://www.mailgun.com and sign up. It’s free (even if they want you to give them your credit card)

Step 3 - Register your domain with Mailgun

Within MailGun find the option to add a new domain (it’s in Sending -> Domains -> Add New Domain). Chances are they’ll have set up their new user onboarding to guide you through that exact process. Follow the instructions on the page to set up your domain.

While MailGun does require a credit card before you can add your domain, they won’t charge you anything if you follow the steps in this blog.

At some point they will ask you to update your domain’s DNS records. It seems scary but they walk you through the worst of it.
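For reference, the records Mailgun asks you to add typically look something like the following. Treat these as illustrative: the exact hostnames, the DKIM selector, and the key all come from your Mailgun dashboard, so copy the values it shows you rather than these.

```text
; Receiving mail (MX records)
yourdomain.com.                        MX    10   mxa.mailgun.org.
yourdomain.com.                        MX    10   mxb.mailgun.org.

; Sending mail (SPF and DKIM TXT records;
; the selector and key are shown in your dashboard)
yourdomain.com.                        TXT   "v=spf1 include:mailgun.org ~all"
<selector>._domainkey.yourdomain.com.  TXT   "k=rsa; p=<key from dashboard>"
```

You add these through your domain registrar’s DNS settings page (e.g. Namecheap’s “Advanced DNS” tab).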

Then wait for MailGun to confirm that everything is set up correctly (it tends to take less than an hour, but could be longer).

Step 4 - Setup mail forwarding within Mailgun

Within MailGun go to the Receiving section and click “Create Route”.

Set the “Expression Type” to “match recipient” and then for the recipient enter the exact email address you’d like to have (I’m using example@zainrizvi.io). Ensure the checkbox under “forward” is checked, and enter your gmail address there. Let’s pretend my gmail address is youraddress@gmail.com.

Once you hit save, any emails that get sent to that recipient email address (in this case ‘example@zainrizvi.io’) will get forwarded to your Gmail address.

Now you can receive emails at your custom domain, but if you reply to any email the recipient will see that you’re sending it from your gmail account. We’ll fix that in the next step.

Note: Don’t bother trying to send messages from the same Gmail account that the message is being forwarded to. Gmail tries to be smart and hides any messages you’re sending to yourself. So if you’re testing it out, send the message from a different email address instead.

Step 5 - Tell Gmail to send messages from your custom address

Gmail has this handy feature called “Send Mail As” which we’ll be taking advantage of here.

In Gmail, go to Settings -> Accounts and Import -> “Add another email address”

Enter the custom domain email address you created a route for in Mailgun and click “Next Step”.

The next screen asks you to input your mailgun SMTP credentials.

You can get those from mailgun by navigating in Mailgun to Sending -> Domain Settings -> SMTP Credentials, and ensuring you’re using the correct domain from the drop down menu at the top

Click the “Reset Password” button to get a new password which you can let Gmail use to log into the SMTP server. You’ll also find the username and SMTP Server to use on that page
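The values you end up entering into Gmail will look roughly like this. The server and username below are Mailgun’s usual defaults at the time of writing, so double-check them against what your SMTP Credentials page actually shows:

```text
SMTP Server:  smtp.mailgun.org
Port:         587 (TLS)
Username:     postmaster@yourdomain.com
Password:     <the password you just reset in Mailgun>
```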

Enter the relevant information into the Gmail window and click “Add Account”. Gmail will then send your custom domain email address a message asking to confirm that it’s okay with Gmail sending emails on its behalf. Since you’ve already set up email forwarding, that email will land right in your Gmail inbox :)

Click the link to confirm and you’ll now be able to send messages using your custom address.

As a final step, if you want to make this new email address the main one you use, go back to Gmail -> Settings -> Accounts and Import -> Send Mail As. Find your new email address in that list and click the “make default” button.

All emails you send will now be sent via your custom domain instead!

And Enjoy!

You’re done! Now you’ve added email to your custom domain at no additional cost, and you get to keep using the wonderful Gmail interface.

The One Caveat…

There is one caveat with this setup: In the unlikely event Mailgun ever decides to charge for their route forwarding or SMTP server you would suddenly have to pay to keep this going. However, there are other online services which make a similar setup possible for free, so you could always move to them. Some of the ones folks have pointed out to me are:

If you know of any other services which make a similar setup possible let me know and I’ll add it to the list above!

Did you find this post useful? I’d love to hear it! Drop a comment below or send me a message at my-first-name @ zainrizvi.io.

]]>
<![CDATA[ How to Create Customized Deep Learning Containers ]]> https://www.zainrizvi.io/blog/create-custom-deep-learning-containers/ 5f123d248f96fd003930da35 Tue, 17 Dec 2019 00:00:00 -0800 Ever find yourself needing to install the same packages on all your deep learning notebooks? Or maybe wishing you could send your exact setup to someone else who could run your notebook? Or perhaps you’re a corporation which wants all your data scientists to have some internal libraries on all their notebooks.

Turns out you can. GCP’s AI Platform Notebooks team offers Deep Learning Containers, which are containerized versions of the exact same images you get when you create a regular AI Platform Notebook (full disclosure: that’s my team).

And those containers are 100% free

Why would you want to use one? A quick list of benefits you can expect by using these:

  • Ability to run these deep learning environments anywhere, including directly on your laptop
  • Have your favorite libraries pre-installed by default. You avoid having to customize your notebook environment every time you create a new notebook
  • Have a consistent environment used by all of your data scientists
  • Ability to modify or replace the default Jupyter Lab IDE (if you really want to)

Below I’ll be walking you through the steps I took to create a Jupyter Lab container that lets you run Tensorflow with GPUs, but you can modify these instructions to meet your own exact needs.

Disclaimer: While my team offers the Deep Learning containers (among other products), I myself had never used containers before. So the below is the result of my first real experimentation, and if you know of better ways to achieve what I’m doing please let me know in the comments!

At the bottom of the post are the key lessons I learned:

  • Differences between DLVM images and DL Container images
  • Some productivity hacks for working with Dockerfiles

Prerequisites

In order to follow along with the rest of the post I’ll assume you have the following installed on your computer:

  • docker
  • gcloud (optional)

Download a container

Let’s take a quick look at what containers we have available to us by running

$ gcloud container images list --repository="gcr.io/deeplearning-platform-release"

Currently that command outputs:

$ gcloud container images list --repository="gcr.io/deeplearning-platform-release"
NAME
gcr.io/deeplearning-platform-release/base-cpu
gcr.io/deeplearning-platform-release/base-cu100
gcr.io/deeplearning-platform-release/beam-notebooks
gcr.io/deeplearning-platform-release/pytorch-cpu
gcr.io/deeplearning-platform-release/pytorch-cpu.1-0
gcr.io/deeplearning-platform-release/pytorch-cpu.1-1
gcr.io/deeplearning-platform-release/pytorch-cpu.1-2
gcr.io/deeplearning-platform-release/pytorch-cpu.1-3
gcr.io/deeplearning-platform-release/pytorch-gpu
gcr.io/deeplearning-platform-release/pytorch-gpu.1-0
gcr.io/deeplearning-platform-release/pytorch-gpu.1-1
gcr.io/deeplearning-platform-release/pytorch-gpu.1-2
gcr.io/deeplearning-platform-release/pytorch-gpu.1-3
gcr.io/deeplearning-platform-release/r-cpu
gcr.io/deeplearning-platform-release/r-cpu.3-6
gcr.io/deeplearning-platform-release/tf-cpu
gcr.io/deeplearning-platform-release/tf-cpu.1-13
gcr.io/deeplearning-platform-release/tf-cpu.1-14
gcr.io/deeplearning-platform-release/tf-cpu.1-15
gcr.io/deeplearning-platform-release/tf-gpu
gcr.io/deeplearning-platform-release/tf-gpu.1-13
gcr.io/deeplearning-platform-release/tf-gpu.1-14
gcr.io/deeplearning-platform-release/tf-gpu.1-15
gcr.io/deeplearning-platform-release/tf2-cpu
gcr.io/deeplearning-platform-release/tf2-cpu.2-0
gcr.io/deeplearning-platform-release/tf2-gpu
gcr.io/deeplearning-platform-release/tf2-gpu.2-0

That’s a list of all the different environments available for you to choose from. You can see Tensorflow, Pytorch, R, and others on the list, and most of them come in both CPU and GPU variations.

We’ll take the Tensorflow 2 CPU image and modify it to create our custom environment. My goal here is to create a containerized version of an R environment with support for using GPUs with Tensorflow available out of the box. I previously walked through a script that does all this for you on an AI Platform Notebook, but that script took tens of minutes to run, and who has time to wait that long for each of their notebooks?

This solution will hopefully get us to the point where we get both of those things available in two minutes.

Steps

You can follow along with these instructions by cloning the https://github.com/ZainRizvi/UseRWithGpus/ repository and running the below commands from there.

1. Create your image

We’ll create a super simple image first. We’ll use the Tensorflow 2 CPU image as our base and not change anything other than adding our own name as the maintainer of the new image.

To do this, create a dockerfile and give it the following contents

FROM gcr.io/deeplearning-platform-release/tf2-cpu
LABEL maintainer="Zain Rizvi"

Note: I named my dockerfile tensorflow-2-gpu.Dockerfile and put it under the “dockerfiles” subdirectory, and will be using that for the rest of my examples. But convention is to just name your dockerfile “Dockerfile”

Now cd to the directory that contains that file and run docker build . -f dockerfiles/tensorflow-2-gpu.Dockerfile and Docker will download that image from the GCP repository, apply your custom label to it, and save the resulting image locally.

Note: If you name your dockerfile “Dockerfile” and place it in your current directory, you can skip the -f [filename] parameter.

You’ll see something similar to the following

$ docker build . -f dockerfiles/tensorflow-2-gpu.Dockerfile
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM gcr.io/deeplearning-platform-release/tf2-cpu
latest: Pulling from deeplearning-platform-release/tf2-cpu
35c102085707: Already exists
251f5509d51d: Already exists
…
928e12577c37: Pull complete
48d9ceba06f1: Pull complete
Digest: sha256:88ae24914e15f2df11a03486668e9051ca85b65f8577358e7d965ce6a146f217
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/tf2-cpu:latest
---> e493f17c90d0
Step 2/2 : LABEL maintainer="Zain Rizvi"
---> Running in 561cbb80b0c5
Removing intermediate container 561cbb80b0c5
---> 8cee7adcf9c3
Successfully built 8cee7adcf9c3

Note the id in the last line Successfully built 8cee7adcf9c3. That 8cee7adcf9c3 is a local image id, and it will be important when we want to push our image (a couple steps down).

2. Push your image to a repository

To push your image, you need a registry to push it to. I’ll assume you’re using Docker Hub (which is free for public registries) but you can use whatever registry provider you prefer. For a Docker Hub registry you can go to hub.docker.com and create your public registry. You’ll need to create an account first though if you don’t have one already

Before the push, make sure you’re logged into docker from within the console (enter your password when prompted):

$ docker login --username zainrizvi

Now to push, we need to tell docker which image it should be pushing to our new registry. We do this by tagging the image we built with the path of our registry and adding an optional tag (yeah, the overload of the word ‘tag’ is a bit annoying).

Remember that image id I told you to note earlier (mine was 8cee7adcf9c3)? Now is when you need it. We’ll tag that id with the path to the repository we want to use:

$ docker tag [ImageId] [repo-name]:[image-tag]

Example:

$ docker tag 8cee7adcf9c3 zainrizvi/deeplearning-container-tf2-with-r:latest-gpu

If you run docker images you should now see an image with that repository and tag

$ docker images
REPOSITORY                                    TAG          IMAGE ID       CREATED         SIZE
zainrizvi/deeplearning-container-tf2-with-r   latest-gpu   8cee7adcf9c3   4 minutes ago   6.26GB

However, just because we’ve tagged the image doesn’t mean it actually exists in the repository. We have to do a docker push to get it in there:

$ docker push zainrizvi/deeplearning-container-tf2-with-r

And now if you go to your docker registry you’ll see that the image is there for anyone to view and download

So that was cool, but we didn’t really do anything special. We’re not pre-configuring any of the packages we really need or anything like that.

Let’s now add some actual customizations to this image

3. Customize your image

Let’s extend this Dockerfile to support using Tensorflow with GPUs on an R notebook.

I’ve shared a few scripts on GitHub which can install R onto your AI Platform Notebooks, but those scripts take way too long to run every time you make a new notebook. Instead, I’d rather run the script in a container just once, and then save that container for future notebooks.

The scripts referenced below are chunks of logic I pulled out from these master scripts. You can read more about what those scripts do in this blog post on using R with GPUs. Splitting the logic into multiple scripts made this stuff much easier to debug (what problems did I run into that had to be debugged? I’ll tell you about it in a future post).

FROM gcr.io/deeplearning-platform-release/tf2-gpu
LABEL maintainer="Zain Rizvi"

RUN apt update -y
RUN mkdir steps
COPY steps/* /steps/
RUN chmod +x /steps/*

RUN /steps/1-Install-generic-dependencies.sh
RUN /steps/2-register-with-r-repository-ubuntu.sh
RUN /steps/3-Install-R-and-IRkernel.sh
RUN /steps/4-Install-common-R-packages.sh -m GPU
RUN /steps/5-Add-rpy2-support.sh
RUN /steps/6-Install-keras.sh

And now we can run docker build . -f dockerfiles/tensorflow-2-gpu.Dockerfile again. This time the command will take a long time to complete (because some of those steps are sloooooow).

But once it completes, we’ll again be given a new image Id similar to the one we saw earlier. Just tag that and push it to your registry the same way we did before

$ docker tag xxxxxxxxxxxxx zainrizvi/deeplearning-container-tf2-with-r:latest-gpu
$ docker push zainrizvi/deeplearning-container-tf2-with-r

And now your image is available to use on your registry

Use your image!

To use your newly created image on AI Platform Notebooks:

  1. Go to the notebooks page -> New Instance -> Customize Instance
  2. Under the environment drop down select “Custom container”

Then in the “Docker container image” box enter the path to the registry you pushed your image to. Mine is: zainrizvi/deeplearning-container-tf2-with-r:latest-gpu

Click create, and in a few minutes your notebook will be ready. You can open it up and see that TensorFlow is ready to go.

And there you go, you now have an R notebook that can run Tensorflow on GPUs!

It wasn’t all Roses and Rainbows

The more astute among you may have noticed that while the script I previously demoed was just, well, a single script, the dockerfile above contains six different scripts which seem to be the original script split into six parts. The eagle eyed may even notice that some parts of the script have been slightly changed, and that I’m no longer compiling XGboost.

Turns out the Deep Learning VM images and Deep Learning Containers are not quiiiiite 100% identical…

Key differences encountered:

  • VM images run on Debian OS while containers run on Ubuntu
  • Container images don’t have the CUDA compiler installed, which is (surprise) required to compile GPU binaries. They do contain all the binaries required at runtime though. Turns out the compiler was omitted in order to reduce the size of the docker container.
  • [mild] Containers get very confused if you give them a command that starts with “sudo”. Not a big deal since every command in a container runs as ‘sudo’ anyways

This led to a lot of time spent debugging what I had thought was a solved problem. (And did I mention this was my first time using docker containers?). Which led to…

Key productivity hacks discovered:

  • In your dockerfile, split up your mega script into multiple smaller scripts. Docker will ‘cache’ the results of your previous, successful scripts and restart the build from the script that was changed (downside: this adds more layers to your docker image, but there are workarounds)
  • Edit on the go: If you set up Docker Hub to pull your code from GitHub and build the image, you can make minor 1-minute fixes from your phone directly on GitHub, commit, and go about your day while Docker Hub starts a new build run (which may take 2-4 hours to complete…Docker Hub is slowwwwww. But it’s free, and enables this nice productivity hack)

If you’d like to hear about the craziness I encountered debugging this image (it was over 7 hours of debugging + waiting for scripts to run), sign up on the form below to get an email when that article comes out.

]]>
<![CDATA[ The Essential Git Cheat Sheet ]]> https://www.zainrizvi.io/blog/git-cheat-sheet/ 5f123d248f96fd003930da37 Wed, 27 Nov 2019 00:00:00 -0800 Quick cheat sheet of commonly used git commands that I’d otherwise have to keep Googling to remember

Rebase your branch to the latest master

If your current branch was branched off of master and you want to pull the latest updates pushed to master since then:

git pull --rebase origin master

Pull a branch other than the one you're on

For example, if you're not on the master branch but want to pull and merge master

git fetch origin master:master

Committing Changes

Merge changes with last commit

git commit --amend --no-edit

Squash commits

To squash the last four commits into one, do the following:

git rebase -i HEAD~4

or

git rebase -i [hash of commit before the one last one you want to squash]

In the list that shows up, replace the word “pick” with “squash” or “s” next to all but the oldest commit you’re squashing.
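For example, when squashing the last four commits, the file git opens might start out and end up like this (the hashes and messages here are made up):

```text
# Before:
pick a1b2c3d Add feature
pick e4f5a6b Fix typo
pick c7d8e9f Address review feedback
pick 0a1b2c3 Fix tests

# After your edits:
pick a1b2c3d Add feature
squash e4f5a6b Fix typo
squash c7d8e9f Address review feedback
squash 0a1b2c3 Fix tests
```

Save and close the file, and git will combine all four into a single commit, prompting you for the combined commit message.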

Then, to change the timestamp of the resulting commit:

git commit --amend --reset-author --no-edit

Set the date of the last commit to right now

You may want to do this after you’ve squashed some commits

git commit --amend --no-edit --date "$(date)"

Editing commits

Edit a commit in your history

Say you have commits: A -> B -> HEAD and you want to edit B:

Do:

  1. git rebase -i B~ (note: the tilde is important, you want to point to the commit before B)
  2. In the opened file, go to the commit in the list that you want to edit and change pick to edit. Save and close the file
  3. Make the edits you want, save them, stage them
  4. Do git commit --all --amend --no-edit. This amends the last commit (B)
  5. Type git rebase --continue

More details at: https://stackoverflow.com/a/1186549/21539

Insert a commit in your history

Same as Editing a commit in your history, except in step 4 do a normal commit, not an --amend. This'll add your change as a new commit. Then continue the rebase.

More details at: https://stackoverflow.com/a/32315197/21539

Undoing/Reverting Changes

Discard all local changes to tracked files

git reset --hard

Discard all local changes to a single unstaged file

git checkout -- [file]

Delete all untracked files. This cannot be undone

# first run 'git clean -n' to get a preview of what will be deleted
git clean -f

Discard all local changes

Combine the above to get:

git reset --hard
git clean -f 

Revert/Undo an Entire Commit

Note: this leaves the commit in the branch history

git revert [commit-id]

Revert/Undo changes to a single file in a commit

This will undo the changes to a single file from the given commit id.

The edited file will be unstaged.

git show [commit-id] -- [file] | git apply -R
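To see what that pipe does, here's a throwaway-repo sketch (illustrative file names, my own example): the commit changed two files, and we reverse only one of them:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "Example"

echo v1 > a.txt
git add a.txt
git commit -qm "add a.txt"

echo v2 > a.txt
echo hello > b.txt
git add .
git commit -qm "edit a.txt, add b.txt"

# Reverse only a.txt's part of the last commit; b.txt stays as committed
git show HEAD -- a.txt | git apply -R

cat a.txt   # prints v1 (the change is undone in the working tree, unstaged)
```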

Set your branch head to the remote branch’s head

Use when you want to discard any local changes and start over from the remote branch state

# Replace 'origin/master' with your desired remote branch or 
#   the specific CL you want to set the HEAD to
git reset --hard origin/master 

Pull a new remote branch to your existing repo

For when you want to pull a new branch from your repo origin. You can replace ‘origin’ with your desired remote branch if you’re using something different

git checkout --track origin/[branch_name]

For more details see this article
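Here's a sketch of it end to end, using a second local repository to stand in for the remote (names like new-feature are illustrative):

```shell
set -e
# Build a "remote" repo with a branch we don't have locally yet
src=$(mktemp -d); cd "$src"
git init -q
git config user.email you@example.com
git config user.name "Example"
echo x > f.txt; git add f.txt; git commit -qm "init"
git branch new-feature

# Clone it, then pull down the new branch as a local tracking branch
clone=$(mktemp -d)
git clone -q "$src" "$clone"
cd "$clone"
git checkout -q --track origin/new-feature

git symbolic-ref --short HEAD   # prints new-feature
```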

Rebase your branch to a different parent branch

If your current branch was forked from one branch, but you instead want to have your changes applied on top of a different branch

From the branch that you want to rebase run:

git rebase --onto [desired_parent_branch] [current_parent_branch]

More details here
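A worked sketch in a throwaway repo (branch names are illustrative): feature was cut from develop, and we replay only feature's own commits onto the main branch, leaving develop's commits behind:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "Example"

echo base > base.txt; git add .; git commit -qm "base"
main=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git version

git checkout -qb develop
echo dev > dev.txt; git add .; git commit -qm "dev-work"

git checkout -qb feature
echo feat > feat.txt; git add .; git commit -qm "feat-work"

git checkout -q "$main"
echo more >> base.txt; git commit -qam "main-work"

# Replay only the commits unique to feature (everything past develop)
git checkout -q feature
git rebase -q --onto "$main" develop

git log --format=%s   # prints feat-work, main-work, base (dev-work is gone)
```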

Changing the Branch HEAD

To change the current branch’s head to a different commit

This can be used as a form of undo to remove commits from the branch history

git reset --hard [commit-id]
git push -f # This line changes the head on the remote branch as well

Finding SHA ids for old branch heads

git reflog lets you see what individual local branches used to point to on your box
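For example, the reflog lets you recover a commit you just threw away with a hard reset (throwaway repo, illustrative commits):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "Example"

echo 1 > f.txt; git add f.txt; git commit -qm "one"
echo 2 > f.txt; git commit -qam "two"

git reset -q --hard HEAD~           # oops: commit "two" is gone from the branch
old=$(git rev-parse 'HEAD@{1}')     # reflog entry for where HEAD was before the reset
git reset -q --hard "$old"          # point the branch back at it

git log -1 --format=%s   # prints two
```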

Create a New Branch from Current Branch

git checkout -b [newBranchName]

And then to push that branch to a remote repository:

git push -u origin [newBranchName]

Diffing Changes

Key thing to note is that appending ~ to the end of a commit makes it refer to its parent. Appending ~N instead makes it refer to its Nth ancestor (e.g. HEAD~2 is the grandparent of HEAD)

Diffing the last commit

git diff HEAD~ HEAD

Diffing a specific commit

git diff [commitId]~ [commitId] #e.g. git diff f23a4s~ f23a4s

Diffing the last N commits

git diff HEAD~N HEAD # e.g diff last 4 commits: git diff HEAD~4 HEAD

Pretty Git Log printout

Add the following to your git config file to enable pretty logs (based on work by Filipe Kiss)

alias.lg=!git lg1
alias.lg1=!git lg1-specific
alias.lg2=!git lg2-specific
alias.lg3=!git lg3-specific
alias.lgs=!git lg1 --simplify-by-decoration
alias.lg1s=!git lg1-specific --all --simplify-by-decoration
alias.lg2s=!git lg2-specific --all --simplify-by-decoration
alias.lg3s=!git lg3-specific --all --simplify-by-decoration
alias.lg1-specific=log --graph --abbrev-commit --decorate --format=format:'%C(bold blue)%h%C(reset) - %C(bold green)(%ar)%C(reset) %C(white)%s%C(reset) %C(dim white)- %an%C(reset)%C(auto)%d%C(reset)'
alias.lg2-specific=log --graph --abbrev-commit --decorate --format=format:'%C(bold blue)%h%C(reset) - %C(bold cyan)%aD%C(reset) %C(bold green)(%ar)%C(reset)%C(auto)%d%C(reset)%n'' %C(white)%s%C(reset) %C(dim white)- %an%C(reset)'
alias.lg3-specific=log --graph --abbrev-commit --decorate --format=format:'%C(bold blue)%h%C(reset) - %C(bold cyan)%aD%C(reset) %C(bold green)(%ar)%C(reset) %C(bold cyan)(committed: %cD)%C(reset) %C(auto)%d%C(reset)%n'' %C(white)%s%C(reset)%n'' %C(dim white)- %an <%ae> %C(reset) %C(dim white)(committer: %cn <%ce>)%C(reset)'

With the above in your git config file you can just enter the below to see your history:

git lg

Or to limit history to just the past N commits:

git lg -[# of commits]  # Example: git lg -5
]]>
<![CDATA[ How to Use GPUs with R in Jupyter Lab ]]> https://www.zainrizvi.io/blog/using-gpus-with-r-in-jupyter-lab/ 5f123d248f96fd003930da36 Wed, 27 Nov 2019 00:00:00 -0800 Have you ever tried installing drivers for your Nvidia GPUs? The first time I tried, I spent the better part of an afternoon trying to get that done.

And once I realized I also had to recompile multiple packages to actually use those GPUs, I was one error message away from being this guy:

Things have gotten a lot better since then.

In this post I’ll share an easy way to set up your R language Jupyter Notebooks to use GPUs. (Though if you prefer to use R outside of a notebook, these steps let you do that too)

It’s a deep dive into one slide of a talk I gave at Nvidia’s GTC 2019 conference a few weeks ago.

The Easy Way

There are three things you need to get going:

  1. A machine with Nvidia GPU drivers installed
  2. R and Jupyter Lab installed
  3. Recompiled versions of the R packages that need it to use GPUs

If you use AI Platform Notebooks or Deep Learning VM images, the Nvidia GPU drivers will be pre-installed for you (notebooks will give you the easiest experience). You can also find offerings from other companies that have the drivers pre-installed, taking care of step 1.

Once your machine with GPU drivers is ready, SSH into it and run the following command:

sudo -- sh -c 'wget -O - https://raw.githubusercontent.com/ZainRizvi/UseRWithGpus/master/install-r-gpu.sh | bash'

There, one line and you’re done.

It downloads a script from my GitHub repository and executes it on your machine, handling all the tricky parts. That’s it, you can now stop reading this article.

However, if you’re anything like me, you may be a liiiiittle bit wary of running random code from the internet.

Let’s go deeper into what exactly this script does and make sure it’s safe to run.

What’s going on here?

I’ll walk through the code step by step to explain what it does. You can open up the code on GitHub if you’d like to see the full file.

1. Install common packages

Let’s take it from the top:

#!/bin/bash

# Install R

#required by multiple popular R packages
apt install -y \
    libssl-dev \
    libcurl4-openssl-dev \
    libxml2 \
    libxml2-dev

We’re installing some packages via apt. These are dependencies for some of the R packages we want.

Seems safe enough

2. Installing R

Turns out installing R is a little complicated. You have to:

  1. Install additional dependencies
  2. Add a whole new repository to your config
  3. Tell your computer to trust that new repository
  4. Then install R, presumably from that new repository

And the code for it:

# Install the latest version of R from the official repository
apt install apt-transport-https software-properties-common ocl-icd-opencl-dev -y
apt install dirmngr --install-recommends -y
apt-key adv --keyserver keys.gnupg.net --recv-key 'E19F5F87128899B192B1A2C2AD5F960A256A04AF'

add-apt-repository "deb http://cloud.r-project.org/bin/linux/debian stretch-cran35/"

apt update
apt install r-base -y

The steps start to seem a bit iffy here (add a new key? a new repository?), but these are indeed part of the official instructions. Feels shady, but it really is legit. The official docs and various other tutorials all say the same.

(Still feels like 👇)

3. Integrate with Jupyter Lab/Jupyter Notebooks

Now we set up Jupyter Lab (or Jupyter Notebooks if you’re using that) to use R.

We install the IRkernel and register it with Jupyter.

You can skip this step if you’re not planning to use Jupyter Lab or Jupyter Notebooks

# Install IRkernel
Rscript -e "install.packages(c('repr', 'IRdisplay', 'IRkernel'), type = 'source', repos='http://cran.us.r-project.org')"

# Register IRkernel with Jupyter
Rscript -e "IRkernel::installspec(user = FALSE)"

4. Install your favorite R packages

This part is nice and simple. We install whatever R packages you want from CRAN. Feel free to install a different set of packages from what I chose.

Note that over here you can only install those packages which do not need to be recompiled for usage with GPUs. The notable example is XGBoost (a handy ML library): the version installed here is the standard CRAN build, which doesn’t use GPUs, so it gets recompiled further down.

# Install various R packages

function install_r_package() {
    PACKAGE="${1}"
    echo "installing ${PACKAGE}"
    Rscript -e "install.packages(c('${PACKAGE}'))"
    # install.packages always returns 0 code, even if install actually failed
    echo "validating install of  ${PACKAGE}"
    Rscript -e "library('${PACKAGE}')"
    if [[ $? -ne 0 ]]; then
        echo "R package ${PACKAGE} failed to install."
        exit 1
    fi
}

function install_r_packages() {
    PACKAGES=(${@})
    for PACKAGE in "${PACKAGES[@]}"; do
        install_r_package "${PACKAGE}"
    done
}

# Install google specific packages
CLOUD_PACKAGES=( \
  'cloudml' \
  'bigrquery' \
  'googleCloudStorageR' \
  'googleComputeEngineR' \
  'googleAuthR' \
  'googleAnalyticsR' \
  'keras' \
)
install_r_packages "${CLOUD_PACKAGES[@]}"

# Install other packages
OTHER_PACKAGES=( \
  'tidyverse' \
  'httpuv' \
  'ggplot2' \
  'devtools' \
  'gpuR' \
  'xgboost' \
)

install_r_packages "${OTHER_PACKAGES[@]}"

5. Setup the default installation dir for your R packages

By default R will write packages to a location which is not writeable without sudo access, making it tricky to install packages, especially from within a Jupyter notebook.

The below code sets up a new directory ~/.R/library to be used as the default location. This requires creating a default environment variable that will always be set on boot, and verifying that the folder always exists every time your VM boots up.

# Setup the default location for user-installed packages
export R_LIB_SETUP="/etc/profile.d/r_user_lib.sh"
cat << 'EOF' > "$R_LIB_SETUP"
export R_LIBS_USER=~/.R/library
# Ensure this directory exists at startup.  It needs to be in a persistent,
# user writable location.
mkdir -p "${R_LIBS_USER}"
EOF

chmod +x "${R_LIB_SETUP}"

6. Compile and install XGBoost for GPU

This is the most complicated step of the whole process.

The default xgboost on CRAN doesn’t support GPUs, so we have to compile it from scratch.

However, the version of cmake on Ubuntu is too out of date to be able to compile xgboost (at least that’s the case on the default image used by AI Platform Notebooks).

A newer version is not available in the repository, so we have to download and install it directly.

# Install cmake (required to compile xgboost)
wget https://github.com/Kitware/CMake/releases/download/v3.16.0-rc2/cmake-3.16.0-rc2-Linux-x86_64.sh

chmod +x cmake-3.16.0-rc2-Linux-x86_64.sh
CMAKE_DIR=/opt/cmake-custom
sudo mkdir $CMAKE_DIR
sudo ./cmake-3.16.0-rc2-Linux-x86_64.sh --skip-license --prefix=$CMAKE_DIR --exclude-subdir
rm cmake-3.16.0-rc2-Linux-x86_64.sh

sudo ln -s $CMAKE_DIR/bin/* /usr/local/bin

The steps are:

  1. Download the cmake v3.16.0 installer
  2. Make it executable
  3. Create a directory to install it into
  4. Execute the installer
  5. Clean up afterwards
  6. Add the new cmake to PATH

And then of course we have to compile xgboost itself:

# Install xgboost
cd
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost
mkdir build
cd build
cmake -DUSE_CUDA=ON -DR_LIB=ON -DUSE_NCCL=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.1 -DNCCL_ROOT=/usr/local/nccl2 ..

sudo make -j4
sudo make install

That’s another download, build, and install.

Note that the cmake command takes a bunch of flags. The current command is optimized for running on AI Platform Notebooks, but you’ll want to modify -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.1 to point to wherever your own CUDA files are located

7. Install rpy2 for Python + R magic

Ok, this step isn’t strictly necessary, but it lets you do something really cool. With this you’ll be able to create Python notebooks and then call R functions from inside the Python notebook!!!

You can even pass variables back and forth between the two languages! You can run python code, get an output table, pass that output to your R code to view the data in a pretty graph.

It’ll let you use each language for whatever it’s best at, using the best tool for each job!

function install_pip2_package() {
    pip2 install --upgrade --upgrade-strategy only-if-needed --force-reinstall "$@" || exit 1
}

function install_pip3_package() {
    pip3 install --upgrade --upgrade-strategy only-if-needed --force-reinstall "$@" || exit 1
}

function install_pip_package() {
    install_pip2_package "$1"
    install_pip3_package "$1"
}

# Install rpy2

# To invoke R code in a python notebook, run the following code in a cell:
#   import rpy2.robjects as robjects
#   import rpy2.robjects.lib.ggplot2 as ggplot2
#   %load_ext rpy2.ipython
#
# Then you can use the %R and %%R magic commands to run R code

install_pip_package tzlocal # required by rpy2
# 3.0.5 is the last version that works with Python 3.5
install_pip3_package rpy2==3.0.5 # Code in both Python & R at the same time

8. Restart your VM

Remember how in step 5 we created a file to set your environment variable at boot time? We never actually executed that file.

Let’s reboot your machine now so that the script takes effect

# Reboot so that R user-installed packages path change takes effect
sudo reboot

Aaaaaaaand Done

Whew, that was a lot of steps. It would be a pain to run those every time you create a new VM. Fortunately you can just download and run the script I mentioned earlier, and directly start using GPUs within your R notebooks.

Want to make it even Faster?

The above script is convenient, but it still takes a good amount of time for it to finish running (around X0 minutes). Personally, I’d rather not wait that long for my notebook to be ready.

If you’d like to have your notebook be ready in just two minutes instead of twenty, you can create a Custom Deep Learning container with all of the above pre-installed.

I’ll be writing instructions on how to set up Custom Deep Learning Containers for GPU-based R projects (coming soon). Subscribe below to get an email when the article is ready!

]]>
<![CDATA[ Use Virtual Environments Inside Jupyter Notebooks & Jupyter Lab [Best Practices] ]]> https://www.zainrizvi.io/blog/jupyter-notebooks-best-practices-use-virtual-environments/ 5f123d248f96fd003930da38 Fri, 08 Nov 2019 00:00:00 -0800 Using Virtual Environments has become a standard best practice in the Python community. They allow you to work on multiple python projects at the same time, without one accidentally corrupting the dependencies of another. While using these Virtual Environments has become the norm with Python projects, they haven’t yet caught on in Python notebooks. However, they’re easy to add to your Jupyter Notebook or Jupyter Lab setup. This post will describe just how you can use them.

What will this enable you to do?

By following these steps, you can have multiple notebooks running on the same machine in Jupyter Lab, where each notebook uses its own versions of potentially conflicting python packages.

We’ll do this by creating an isolated python virtual environment for each notebook, so that each notebook runs inside its own environment.

If you’re using Google’s AI Platform Notebooks, the scripts below will allow you to keep using the awesome deep learning packages that come pre-installed on them, while isolating each of your notebooks from any new packages you install for any other notebook.

Why does it matter?

Benefit from established Python best practices! Having each project in a separate virtual environment is an existing best practice for python projects, so it seems logical to extend this behavior to python notebooks as well. Of course, this will only apply to Python notebooks and not notebooks using other languages

How do you set it up?

First you’ll need a Jupyter Lab notebook environment. If you don’t have one already you can quickly create one using Google Cloud’s AI Platform Notebooks

Go to your Jupyter Lab notebook and in its terminal enter the following (replacing “myenv” with whatever name you want to give your environment):

# You only need to run this command once per-VM
sudo apt-get install python3-venv -y

# The rest of these steps should be run every time you create
#  a new notebook (and want a virtual environment for it)

cd the/directory/your/notebook/will/be/in

# Create the virtual environment
# The '--system-site-packages' flag allows the python packages 
#  we installed by default to remain accessible in the virtual 
#  environment.  It's best to use this flag if you're doing this
#  on AI Platform Notebooks so that you keep all the pre-baked 
#  goodness
python3 -m venv myenv --system-site-packages
source myenv/bin/activate #activate the virtual env

# Register this env with jupyter lab. It’ll now show up in the
#  launcher & kernels list once you refresh the page
python -m ipykernel install --user --name=myenv

# Any python packages you pip install now will persist only in
#  this environment
deactivate # exit the virtual env

After running the above code, you’ll need to refresh your JupyterLab tab for the changes to be visible
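If you want to confirm the --system-site-packages flag took effect, it gets recorded in the venv's pyvenv.cfg file. A quick sketch (the demoenv name and /tmp location are just for illustration; --without-pip only keeps the check fast):

```shell
# Create a throwaway venv with access to the system site-packages
python3 -m venv --without-pip --system-site-packages /tmp/demoenv

# The flag is persisted in the venv's config file
grep include-system-site-packages /tmp/demoenv/pyvenv.cfg
# prints: include-system-site-packages = true
```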

The Results

Here’s my launcher after I added two customized environments (“myenv” and “zainsenv”).

And if you try to switch kernels from within a notebook, you’ll see the virtual environments available as kernels for you to use

See it in Action

Here you can see I installed the pytube package in one environment

That package was not visible in the other environment. Then I installed an older version of the same package in the second environment (let’s pretend it needed the older version), and the first environment kept using the newer version of the package

💡
Side note: you might also enjoy my article Interview Advice that got me offers from Google, Stripe, and Microsoft. I share insider tactics I learned over 13+ years that landed me jobs at multiple tech companies including Google, Facebook, Stripe and Microsoft.

And there you have it. You can now create virtual environments on your Jupyter Lab notebooks

Interested in creating Conda Environments instead? Nikolai has a pretty nice write-up here, which I also depended on to write the above steps

]]>
<![CDATA[ Authenticating AI Platform Notebooks against BigQuery in Python ]]> https://www.zainrizvi.io/blog/authenticating-ai-platform-notebooks-against-bigquery-in-python/ 5f123d248f96fd003930da39 Tue, 15 Oct 2019 00:00:00 -0700 When you use AI Platform Notebooks by default any API calls you make to GCP use the default compute service account that your notebook runs under. This makes it easy to start getting stuff done, but sometimes you may want to use BigQuery to query data that your service account doesn’t have access to.

The below instructions describe how to use your personal account to authenticate with BigQuery. This specifically applies to authentication when using a python based notebook. If you want to authenticate on a R based notebook you can find instructions for that here.

Normally you would use gcloud auth login from the jupyter lab terminal to log in to your personal user account and call Google APIs, but the BigQuery library auth works differently for some reason.

Instead, you need to create a credential object containing your user credentials and pass that to the bigquery library.

Install the pydata_google_auth package:

%pip install pydata_google_auth

Restart the kernel: Kernel -> Restart Kernel

Import the library and create your credentials:

import pydata_google_auth

credentials = pydata_google_auth.get_user_credentials(
    ['https://www.googleapis.com/auth/bigquery'],
)

When you execute the above cell you’ll see an output with an authentication link and a text box

Copy that link, paste it into a browser, and authenticate with google. You’ll see an authorization code similar to the below:

Copy that code and paste it into the authentication code input box you saw in your notebook

Next you’ll want to reload the bigquery magic in your notebook. You ‘reload’ instead of ‘load’ because AI Platform Notebooks already loads the bigquery magic for you by default:

%reload_ext google.cloud.bigquery
from google.cloud.bigquery import magics
magics.context.credentials = credentials

Now when you use the bigquery magic it’ll use your personal credentials instead of the service account ones:

%%bigquery
SELECT name, SUM(number) as count
FROM `my-private-project.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT 10

And that’s all there is to it!

If you’d rather use python code than invoke the bigquery magic, just create a client with the user credentials and query away!

from google.cloud import bigquery as bq
client = bq.Client(project="project-name", credentials=credentials)

Thanks to Anthony Brown for sharing instructions on how to use BigQuery with Jupyter Notebooks

]]>
<![CDATA[ Become Creative by Asking Better Questions ]]> https://www.zainrizvi.io/blog/a-more-beautiful-question-summary/ 5f123d248f96fd003930da3a Fri, 09 Aug 2019 00:00:00 -0700 These are the key lessons I discovered while reading Warren Berger’s A More Beautiful Question. Most material is directly from the book with a few of my own ideas interspersed

Why do we need to ask questions?

The economic model of the world is shifting. The nature of knowledge, work, and employment is changing.

  • What business are we in now?
  • Will my jobs still be needed in a few years?
  • Is “knowing” obsolete in the age of Google?
  • How do you move from asking to action?

The Power of Inquiry: Asking questions is now more important than knowing

Everything starts with “why?”

A Question’s Life Cycle

  1. Stage 1: Why is something the way it is? Learn about the problem
  2. Stage 2: What if something was done differently? Generate ideas for improvements or solutions
  3. Stage 3: How can this idea be implemented?
  4. Implement the solution

These are described in more detail further down

Fighting Information Overload

With Information Overload we need to Focus on the Context (tips from John Seely Brown)

  • How valid are these beliefs?
  • Is there an agenda behind this information?
  • Is the data stale? Out of date?
  • How does it relate to other information I have?
  • What assumptions am I (or the others) making?

5 Learning Skills and Habits

When you learn anything new, ask for the following:

  1. Evidence - Is it true? How trustworthy is this info?
  2. Viewpoints - How would this look from other perspectives?
  3. Connection - Noticing any patterns? Have I seen something similar before?
  4. Conjecture - What if this information was different? What implications would it have?
  5. Relevance - Why does this matter? (Did I just waste my time reading that?)

Learning to ask “Why?”

  • Why does this situation exist?
  • Why is it a problem?
  • Why does it create a need or opportunity?
  • Who is it a need/opportunity for?
  • Why has no one addressed this yet?
  • Why do I want to spend more time thinking about this?

When you have a question, let it brew in your head. It’ll lead you to combinatorial thinking where you draw connections between it and other things you know or learn.

Breaking Down the Innovation Question Process

Start with Why? => Ask What if? => Figure out How?

Stage 1: Why?

How to get a Good Why Question?

  1. Step back
  2. Notice what others miss
  3. Challenge assumptions (including your own)
  4. Get a deeper understanding of the situation/problem through contextual inquiry
  5. Question the questions being asked (including your own)
  6. Take ownership of a particular question

Questioning the Questions

Your questions can be biased

  • Why did I come up with that question?
  • What assumptions does that question make?
  • Is there a different question I should ask?
  • Repeat the above until you get a good answer

Tools for Improving Questions

  • Shrink or Grow the question in scope (“is X true for the world?” vs “is X true for my city?”)
  • Open or Close the questions to catch assumptions (“Why is X true?” vs “Is X true?”)
  • Tailor questions to the specific relevant context, aka Contextual Inquiry (“Is X true in India?”)

Contextual Inquiry

Talk to actual people and listen to their stories, problems, situations, etc

This requires you to be committed to this question. Otherwise you won’t bother doing this

Productive Obsession

Actively pursuing a question. You’ll start thinking of it all the time.

This gradually leads you to the “What If?” stage

Stage 2: What If?

Here are some tactics that can help you come up with ideas when you’re in the “What If” stage

Connective Inquiry

  • Study various fields and let it percolate in your head
  • Boost: Take a few disparate resources/facts/tools and see how they could be used to solve a problem

Thinking Wrong

Thinking in ways that have nothing to do with the problem you’re trying to solve. Mixing and matching random things

This is a form of Divergent Thinking

Connection Exercise (page 113 of the book)

  1. Take a random word and think of ideas around it. You can rearrange letters to get other words, etc.
  2. Take a 2nd random word and do the same
  3. Can you combine both words?
  4. Do it in the context of the problem you want to solve

Random Connections

  • Try to connect random, nonsensical things
  • An oven that doesn’t heat. What can you do with that?
  • A car that doesn’t move

Invert Reality

What if restaurants gave you a menu only when you left?

What impact would that have?

Idea Storming

  • Come up with at least 50 questions about the problem
  • Be sure to push yourself to 50 questions. That’s around when the best questions come
  • This is an alternative to brainstorming

“How Might We…?” - This tends to open up the thinking process better than the question “How can we?”

Perspective Shift: Ask questions from the perspective of another person (“How would Hollywood address this?”)

Stage 3: How?

You came up with many ideas in Step 2. Now chose one idea to pursue. It’s time for convergent thinking

Quickly Identify Bad Ideas via Feedback

You need to get feedback on your idea, and get it fast. There are multiple ways to do this

One tactic: Share the idea to get feedback. Something like a napkin sketch and asking “what do you think of…” could be a good quick prototype to use

Rapid experimentation: find the smallest possible experiment that you can run to validate or invalidate your idea. This is a form of creating a Minimum Viable Product

In every experiment you must ask: What will I Learn?

  • Then work backwards to the simplest experiment which will teach you that. This is the real MVP

Note: experimenting will lead to many failures and disappointments

  • Instead of focusing on the failure, focus on the whys behind that failure and what you can learn from it
  • Note what went right
  • Ask “Am I failing differently each time?”
  • Remember, you’re much better off if you fail after a few hours or days of effort instead of after many months of effort!
  • Edison said “I have not failed. I’ve just found 10,000 ways that won’t work.”

Learn to note when you or others are experiencing Cognitive Dissonance, a mental discomfort when facing conflicting attitudes, beliefs, or behaviors

  • This is a sign that you are doing something new
  • This is for getting early feedback on your idea

Collaborative Inquiry

Get help from others!

  • “Do you find this interesting? Want to join me and try to answer it?”
  • To the internet!
  • Not Impossible Labs - A site to find others
“Many more people are drawn to an existing idea they can join in on & help to improve or advance, rather than starting from scratch on their own.”

You are inviting collaborators as equals on a project. “Your” question becomes theirs too.

Questions belong to everyone

“Share your question in hope of you getting something new: solution, perspective, insight, purpose, etc. That thing will be yours”

Questioning in Business

  • Why are we in business? And what business are we really in?
  • What if our company didn’t exist?
  • How can we make a better experiment?
  • Should mission statements be mission questions?
  • How might we create a culture of inquiry?

Living Better by Asking the Right Questions

  • What if we start with what we already have?
  • What if you made one small change?
  • What if you could not fail?
  • How will you find your beautiful question?

What is your goal/purpose in life?

  • Your mission should fit in one sentence
  • This is a mountain you are climbing. Why are you climbing it?

How can I develop a sense of community?

  • Family
  • Friends
  • Neighbors
  • Block party?

What did I love doing as a child? Can I still do any of it?

What are you doing when you feel most beautiful/happy?

“It’s easier to act your way into a new way of thinking than to think your way into a new way of acting”
  • Fake it till you make it
  • [Zain aside: I’m not sure how true this statement is. It seems to apply only to certain types of things]

If we can’t agree on an answer, can we instead agree on a question?

Teaching to Question

  • Can a school be built on questions?
  • Can we teach ourselves to question?
  • How can we make “being wrong” less threatening?

Interesting Teaching Style: Activity-Permissive Education by Mayo Clinic. Lets kids move as they learn

How can parents teach their kids to question?

  • Ask the kids open ended questions
  • Get the kids to ask you questions (and guide them to certain questions)
  • Don’t answer those questions. Instead explore through experiments or personal experience and have the kid form a hypothesis about the answer
  • When kids come home from school, ask them “Did you ask a good question today?”

Question Formulation Training

This is a strategy that has been successfully used in classrooms

First offer a premise (e.g. “torture can be justified”)

Stage 1: Divide people into groups to Brainstorm questions

Rules:

  • Write each question down
  • Don’t debate or try to answer questions
  • Just keep trying to think of more questions

Stage 2: Improve the questions by converting open questions to closed and vice versa. For example you’d change between “why is torture effective” <==> “is torture effective”

  • This shows that the same question asked differently can make you think in a different space

Stage 3: Prioritize. Identify the top 3 questions to move the discussion forwards

  • It teaches them how to analyze questions and find ones to pursue further

Question Teaching Process

  1. Teachers design a focus for the questions
  2. Students produce questions
  3. Students improve their questions (open vs closed)
  4. Students prioritize their questions (top 3)
  5. Students and teachers decide on next steps for acting on those questions
  6. Students reflect on what they’ve learned

Imagine: What if the teacher was willing to ask questions that they themselves don’t know the answer to? If the class is interested then…“Lets figure this out together”

Other thoughts

How do you reward questioning in others or in children to encourage this behavior?

  • Would the process of discovery be its own reward?

Ideas for a game based on asking questions:

  • Learn to ask better questions via proposing and improving questions on a given prompt
  • Example prompt: Given some set of random resources, how can you solve problem X?
]]>
<![CDATA[ So Good They Can't Ignore You - Key Ideas ]]> https://www.zainrizvi.io/blog/so-good-they-can-t-ignore-you-key-ideas/ 5f123d248f96fd003930da3b Wed, 07 Aug 2019 00:00:00 -0700 These are my notes from Cal Newport’s book So Good They Can’t Ignore You: Why Skills Trump Passion in the Quest for Work You Love

I’m going to skip the persuasive arguments that Cal listed and jump straight to the conclusions that resonated with me

Value your Autonomy

“Find your Passion” is a lie. That’s not how humans work

Reality: We become passionate about any work that can meet certain requirements

The Three Keys to Enjoying Work (from Self Determination Theory):

  1. Autonomy - You have control and make your own decisions
  2. Competence - You’re good at it
  3. Social Connections - You’re connected to others

You’re likely to enjoy any work that gives you the above three.

Going deeper, great work tends to have the following traits:

  • It lets you be Creative
  • The work has Impact
  • You can Control what you do

Note that each of these falls under one of the above three keys

Develop Rare Skills to become Valuable

Develop rare and valuable skills to stand out in your job.

Cal calls these skills “capital” since these are the “money” that you can use to “purchase/acquire” higher value positions.

Developing these rare skills/capital requires stretching your skills and developing rapid feedback loops so you can improve faster.

Steps to stretch your skill:

  1. Identify the skill/capital you’re going after
  2. Define a “good” level of that skill (your goal)
  3. Stretch yourself in that skill via deliberate practice

Traps to Avoid

There are two traps that you might fall into as you try to advance in your career:

Control Trap #1: Grabbing too much responsibility

You can’t sustain control without having the skills/capital to back it up. You won’t be able to deliver on your promises and will fail.

Of course, it’s hard to know when you’re really ready for the next stage. So this really becomes a judgement call

Control Trap #2: Trading away freedom for other rewards

When you become more valuable, employers will want to offer you incentives to get you to keep providing value for them.

This could be a good thing. You may be happy with the extra money, etc. But this may also reduce your freedom & your control over your own life.

The lesson here is to be conscious of the trade-off you’re making so that you know you’re doing what’s best for you

Make your work Remarkable

A great project will be based on a mission. But in order to be successful, that project and mission need to be remarkable

What is meant by remarkable?

  • People will talk about it (remark) in casual conversation (“you have to see this”, viral messages, etc)
  • Controversial things also fall into this bucket. Use this knowledge with caution

Tactic for increasing your product’s chances to be remarked upon:

Spread the word about your project in a venue which supports those remarks. “Supports” could refer to a venue filled with people interested in the project, a venue that makes sharing easy etc. (e.g. a social media forum focused on an area related to your project)

]]>
<![CDATA[ Using BigRQuery on GCP AI Platform Notebooks ]]> https://www.zainrizvi.io/blog/authenticating-to-bigrquery-on-gcp-ai-platform-notebooks/ 5f123d248f96fd003930da3c Mon, 05 Aug 2019 00:00:00 -0700 Note: These instructions can be used whenever you’re using Jupyter Lab on a remote machine

GCP AI Platform Notebooks makes it really easy to run notebooks on Jupyter Lab and even offers R language notebooks. R is great for crunching large data sets, and a popular place for people to store their data is on BigQuery.

The most popular library for accessing BigQuery in R is the open source library BigRQuery. It’s an extremely useful library, but it has the downside that the authentication step will try one of two things:

  1. It either will prompt people for extra input on the command line
  2. Or open up a port on http://localhost to listen for GCP authentication

#1 doesn’t work with any Jupyter Notebook since notebooks are not designed to accept extra commands in the middle of an execution

#2 won’t work if you’re connected to Jupyter Lab on a remote machine (the http://localhost will point you to the wrong VM!)

Since neither of the normal authentication methods will work here, this post describes two different ways to authenticate yourself to BigQuery from within an AI Platform Notebook:

  1. Authenticate using your normal GCP user credentials
  2. Authenticate using a service account

Prereq: Create an AI Platform Notebook for R

First create a new AI Platform notebook. This notebook is where you’ll be trying to use BigRQuery

  1. Go to http://console.cloud.google.com/mlengine/notebooks/instances
  2. Select ‘New Instance’ -> ‘R 3.x’ -> Create
  3. Wait for the notebook to be created and click “OPEN JUPYTERLAB”

Option #1: Authenticating using your GCP user credentials

This method uses the Jupyter Lab terminal to run the interactive commands and cache the credentials. Once you’re authenticated, you can switch to a notebook and it’ll use the credentials in the cache.

First, start R in a Terminal

Run R

You’ll get the output:

jupyter@r-20190802-172922:\~$ R

R version 3.6.1 (2019-07-05) -- "Action of the Toes"
Copyright (C) 2019 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

Next we install the required packages

As of this writing, BigRQuery needs the dev version of gargle for this authentication to work. Later you shouldn’t need to explicitly install gargle.

Run the following commands to install the packages:

install.packages("httpuv")
install.packages("devtools")
devtools::install_github("r-lib/gargle") # get the dev version of gargle
install.packages("bigrquery")
install.packages("readr") # To read BigQuery results

Those packages will take ~10 minutes to install

Next, import the required libraries and authenticate yourself by running the command bq_auth(use_oob = TRUE)

Commands to run:

library(httpuv)
library(gargle)
library(bigrquery)
bq_auth(use_oob = TRUE)

Say yes when it asks about caching the OAuth credentials.

You’ll see an error like the following

> library(httpuv)
> library(gargle)
> library(bigrquery)
> bq_auth(use_oob = TRUE)
> Is it OK to cache OAuth access credentials in the folder '/home/jupyter/.R/gargle/gargle-oauth' between R sessions?

1: Yes
2: No

Selection: 1
Enter authorization code: /usr/bin/xdg-open: 778: /usr/bin/xdg-open: www-browser: not found
/usr/bin/xdg-open: 778: /usr/bin/xdg-open: links2: not found
/usr/bin/xdg-open: 778: /usr/bin/xdg-open: elinks: not found
/usr/bin/xdg-open: 778: /usr/bin/xdg-open: links: not found
/usr/bin/xdg-open: 778: /usr/bin/xdg-open: lynx: not found
/usr/bin/xdg-open: 778: /usr/bin/xdg-open: w3m: not found
xdg-open: no method available for opening 'https://accounts.google.com/o/oauth2/auth?client_id=603366585132-0l3n5tr582q443rnomebdeeo0156b2bc.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code'

Here’s where it gets tricky. It looks like it’s only giving you a list of errors, but if you look closely you’ll see an https://accounts.google.com url buried in there. Copy/paste that url into a new browser window and you’ll see the usual Google Auth page.

Log in and give Tidyverse the permissions it’s requesting. It’ll take you to a screen giving you a secret code:

Copy that code and paste it into your JupyterLab terminal and hit Enter.

I know, it doesn’t look like the terminal is waiting for any kind of input, but it actually is (hopefully gargle will fix this soon).

Sample output:

> library(httpuv)
> library(gargle)
> library(bigrquery)
> bq_auth(use_oob = TRUE)
Is it OK to cache OAuth access credentials in the folder '/home/jupyter/.R/gargle/gargle-oauth' between R sessions?

1: Yes
2: No

Selection: 1
Enter authorization code: /usr/bin/xdg-open: 778: /usr/bin/xdg-open: www-browser: not found
/usr/bin/xdg-open: 778: /usr/bin/xdg-open: links2: not found
/usr/bin/xdg-open: 778: /usr/bin/xdg-open: elinks: not found
/usr/bin/xdg-open: 778: /usr/bin/xdg-open: links: not found
/usr/bin/xdg-open: 778: /usr/bin/xdg-open: lynx: not found
/usr/bin/xdg-open: 778: /usr/bin/xdg-open: w3m: not found
xdg-open: no method available for opening 'https://accounts.google.com/o/oauth2/auth?client_id=603366585132-0l3n5tr582q443rnomebdeeo0156b2bc.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code'
4/lgjskDFGSjkwETSgsjGSKEJTssfgKWlgjskDFGSjkwETSgsjGSKEJTssfgKWlgjsk <===== the GCP auth code I copy/pasted in
>

Now you can verify that your credentials have actually been cached.

> bq_auth(use_oob = TRUE) <===== retrying the auth to see if it worked
The bigrquery package is requesting access to your Google account. Select a pre-authorised account or enter '0' to obtaina new token. Press Esc/Ctrl + C to abort.

1: xxxxxxx@gmail.com  <===== The auth credentials were cached

Selection: 1

If you now try to authenticate to bigrquery using your email, it’ll work (if bq_auth() returns with no message then that means it worked. Not the most intuitive, I know)

> bq_auth(email="xxxxxxx@gmail.com")
>

Now you can create a new R notebook within Jupyter Lab and authenticate yourself!

Create a new R notebook:

Run the following code within your notebook. It’ll pull the authentication credentials for the given email addresses from the cache saved earlier:

library(httpuv)
library(gargle)
library(bigrquery)

bq_auth(email="xxxxxxx@gmail.com")

project_id <- 'my-project-id'
test_query_text <- "SELECT * FROM `bigquery-public-data.usa_names.usa_1910_current` LIMIT 10"

test_results <- query_exec(test_query_text, project_id, use_legacy_sql = FALSE)

test_results # print the results

Option 2: Authenticate using a Service Account

This method involves creating a new service account in GCP, saving the key for that service account on your notebook, and using that key to authenticate to GCP.

Full instructions for using this method are available here:

https://cloud.google.com/ml-engine/docs/notebooks/use-r-bigquery#create_a_service_account_key

]]>
<![CDATA[ Five Tactics for Tackling Complex Problems at Work ]]> https://www.zainrizvi.io/blog/five-tactics-for-tackling-complex-problems/ 5f123d248f96fd003930da3d Sat, 10 Nov 2018 00:00:00 -0800 “The definition of genius is taking the complex and making it simple.” –Albert Einstein

In today’s world the most valuable employees are the ones who can tackle complex problems. And the best of those are the ones who can make the complex problems look simple.

But how can someone actually do that? Below are some tactics I’ve used which can make a big difference.

These tactics are designed to get me thinking about the problem space in different ways. The different perspectives often yield insights that help separate the core components into smaller, more manageable units (and of course this list is always a work in progress)

1. Ask Questions and Challenge Assumptions

Questions are at the heart of discovery. Every step here requires asking questions and striving to find the answers. One question you should be asking yourself is “What assumptions can I challenge?”

Note: Challenging assumptions includes challenging your own assumptions. The mere act of trying to discover what assumptions you and others are making can give you a new perspective on your project.

2. Discover the Core Problem

What is the core problem(s) your boss (or whoever gave you this task) is trying to solve? What are your customer’s burning pains? Note: Your customer is whoever will use the thing you’re making. It could be your boss, your IT department, or your company’s customers.

Go beyond the mere features being asked for and get to the heart of the problem. This is the difference between “I need a SQL database” and “I need a reliable way to access and modify my business critical data”. Perhaps you can think of a better way to solve the problem.

Ask questions like: Who cares about this problem? Why is it important to them? If there are no good answers to these questions, is the problem even worth working on? Maybe this is an opportunity to quit early

Understanding the burning pains often involves talking to stakeholders and customers. It’s hard to overstate how valuable the insights you’ll gain from this step can be

3. Componentize your Solution

You should have some idea for a solution at this point. If you don’t, take some time to come up with one.

Take a close look at that solution. Which pieces could be split into separate modules or components? Can any of those components provide value independently? If not, can any be tweaked so that they do provide independent value?

4. Find your MVP

Think about what would make a good MVP (Minimum Viable Product) for your product. Remember, the goal of the MVP is to learn information that will affect the design of the final product. Try to identify at least three different questions you want answered and what MVPs you could create to answer them (hint: the same MVP is unlikely to answer all your questions)

Get creative in what you consider an MVP. Maybe showing random strangers at Starbucks a napkin drawing of your app’s layout would be good enough.

5. Let your Subconscious Work

If you follow steps 1-4, you will most likely have developed a much better understanding of your problem and already come up with a bunch of insights.

Great, go work on them!

Now that you’ve spent so much time researching the problem you’ll probably find yourself also thinking about it in your spare time, when you’re in the shower, driving, etc. This is when all the different pieces you’ve been studying for so long can suddenly click together in a new way, giving you a fresh insight.

This eureka moment might happen a week or two (or more) after you’ve started implementing your previous ideas. Don’t be afraid to throw those old ones away in favor of something you now know is better!

The Real Secret

The common theme here is to develop a deep understanding of the problem you’re working on, which can lead to fresh insights others may not have. The above tactics give you that understanding by forcing you to look at the issue from different angles.

But undoubtedly there are other ways to achieve that understanding as well. If you use different tactics to achieve this understanding I’d love to hear them!

]]>
<![CDATA[ 5 Habits for Better Learning ]]> https://www.zainrizvi.io/blog/five-habits-for-better-learning/ 5f123d248f96fd003930da3e Tue, 28 Aug 2018 00:00:00 -0700 These mental habits were developed to teach school children how to think critically and become problem solvers. It’s a valuable skill for adults to learn as well.

When presented with any new information think about:

  1. Evidence: Why do you think this information is true or false? What should count? What do you think you know and why?
  2. Viewpoints: How would this look from other perspectives? How would other people or companies think about this differently?
  3. Connections: Noticing any patterns? Have you seen something like this before?
  4. Conjecture: What if something about this was different? How would that impact things?
  5. Relevance: Why does this matter?

(Source: A More Beautiful Question by Warren Berger)

These tips will help you get a deeper understanding of whatever you’re studying, be it something technical, political, social, etc.

]]>
<![CDATA[ Salary Negotiation Tips ]]> https://www.zainrizvi.io/blog/salary-negotiation-tips/ 5f123d248f96fd003930da3f Fri, 20 Jul 2018 00:00:00 -0700 When I was applying for a new job a few years ago I read everything I could find on salary negotiation, a core life skill everyone should know.

Here's what I found. At the bottom you'll find more links, including ones with word for word scripts on what to say:

16 Principles to Remember during a Negotiation

1. They need to like you

  • If you get an offer, these people like you and want to keep liking you (but you’re not the only person in their life)

2. They have to believe that you deserve it

  • Don’t ever ask for something without saying why you deserve it
  • Sometimes saying why you deserve it might make them like you less (#1) so be careful

3. They need to be able to justify and act on it internally

  • Need to figure out where they’re flexible and where they are not flexible

4. You need to be flexible about which currency they pay you in

  • You shouldn’t care about salary, bonus, options, city, etc. Should focus on value of entire deal
  • The more ways you give them to pay you, the more likely you are to get paid
  • Including perhaps not rewarding you today, but rewarding you later on
  • They need to believe that they can get you
  • People won’t go fight for you and spend political capital for you if they think you’re going to refuse at the end of the day

5. Avoid the minutia

  • Think about the interests that are really important to you, but not haggle over every little thing (you don’t want to sour the negotiation and make them not like you: see point #1)

6. Try to learn as much as you can at every moment.

  • Need to understand the person across the table from you
  • Learn as much as you can before the negotiation
  • What do they want, need?
  • Talk to folks in the organization, friends interviewing there, etc
  • Anytime someone says something you didn’t expect or is ambiguous, try to figure out what is going on so you know which world you’re in.

7. Negotiate multiple issues simultaneously

  • For example, if there are multiple things you don’t like about the offer, mention all those things at once
  • Don’t be the guy who says: Hey, can you fix this one thing? “ok” Great, can you do this too? “alright” Cool, can you do this too please?
  • When you mention the few things that you need, it’s important to signal to them what’s most important and what’s less important
  • If you don’t they may pick the two things that are easy for them but less important to you. They may think they’re meeting you half way when you’re not really satisfied

8. What is not negotiable today may be negotiable tomorrow

  • Today they may be hiring 20 people just like you and have a hard time differentiating between you. But 6 months down the line they might be convinced that you’re different, or they’re in a different phase of hiring and have more flexibility
  • What may not be negotiable on day 1 of the interview process may be negotiable on day 3
  • E.g., if they tell you we can’t delay the deadline date for you to respond to our offer. But as the deadline gets closer, at that point they may be able to change the deadline later on
  • When they say “no” to something, ask them “Can you help me understand why that is hard to do?”
  • “Under what circumstances would you be able to do this?”
  • “Have you ever done something like this for a person”

9. Stay at the table

  • Stay in touch
  • Perhaps things they couldn’t share with you before the offer, they can share with you after the offer. Or after you have the job, or after you’ve worked there for 3 months, etc

10. Sometimes the other side might bring up something that you wish they didn’t

  • Examples:
  • Do you have an alternate job offer?
  • Did your summer internship give you a job offer?
  • Instead of hoping they don’t ask those questions, prepare in advance to answer those toughest questions
  • What could they ask to put you in a defensive position?
  • When they ask you questions you wish they hadn’t asked?
  • Don’t get stuck on what they’re asking you. Try to figure out why they’re asking you that
  • What is the intent of the question? Where are they going with this?
  • Step back, ask them “Can you help me understand where you’re going with this?”

11. Avoid/ignore/downplay ultimatums of any kind (in either direction)

  • Don’t make them either
  • If they make one, pretend it was never said
  • It’s possible that at some point they themselves will realize that the position they took will result in no deal, and the last thing you want is for them to feel like they’ve painted themselves into a corner. The best way to give them a way out without losing face is to pretend it wasn’t even said.
  • If it’s a real ultimatum, they’ll let you know (they’ll repeat it multiple times)

12. Companies don’t negotiate, people negotiate

  • Negotiating with your future boss is different from negotiating with HR
  • It’s okay if HR is slightly annoyed with you, your boss being annoyed may be a problem
  • HR is hiring 100s of people, your boss is hiring very few people and may be willing to go fight to get you
  • Don’t let one person in the company ruin your view of the company

13. Don’t be in a mad rush to get offers

  • Sometimes you want the interview process to be a bit slow so that you have time to interview other places
  • You have to think about the portfolio of deals you’re negotiating, try to see if you can have the offers come in closer to each other

14. Tell the truth

  • Deception isn’t worth it

15. Shoot for an 11 out of 10

  • After the negotiation with you, your boss is going to think about how much do they look forward to working with you. You want them to think you’re an 11/10, not even a 10/10.
  • Negotiate in a way, openly, honestly, with empathy, with give and take, that afterwards they like you even more than they did before

16. If you’re thinking about how happy you’ll be in life, how you negotiate is not very important

  • The job you choose, the industry you go into, etc are all much more important
💡
If you'd like to learn how to get a job offer in the first place, you might like to read Interview Advice that got me offers from Google, Stripe, and Microsoft. I share insider tactics I learned over 13+ years that landed me jobs at multiple tech companies including Google, Facebook, Stripe and Microsoft.

More references you should Check Out

The first three include word for word scripts and are highly recommended

Word-for-Word Scripts on What to Say During the Negotiation and Why they Work

By Haseeb Qureshi

Negotiation Video, Including Scripts on What to Say and Why it Works

Ramit Sethi's How to Negotiate Your Salary

Make sure you're considering more than just the salary

This episode from the Engineering Advice You Didn't Ask For show gives you great tips on key areas to consider when comparing two offers.

Salary isn't necessarily the most important!

Engineering Advice You Didn't Ask For on Negotiating your best tech offer

Book + Free email course with exact scripts on salary negotiations

Josh Doody offers some great material on on his site about salary negotiations.

If you prefer, he also offers 1:1 coaching services. He comes endorsed by Patrick McKenzie (the guy who wrote the article right above this one)

https://fearlesssalarynegotiation.com/salary-negotiation-guide/

Lecture on Salary Negotiation

By Harvard Business School’s Prof. Deepak Malhotra: How to Negotiate Your Job Offer

The above notes were mostly based on this lecture.

Why you really should negotiate your salary

This one is the gold standard for salary negotiations in the tech industry, but a lot of it applies more generally as well:

Salary Negotiation: Make More Money, Be More Valued (by patio11@)

💡
Want to get more interview and career tips? You can:
- Sign up for my newsletter below
- Follow me on twitter @ZainRzv
- Check out my course Insider Advice on how to Pass FAANG Interviews

]]>
<![CDATA[ How to: Redirect the default *.azurewebsites.net domain to your custom domain on Azure Web Apps ]]> https://www.zainrizvi.io/blog/block-default-azure-websites-domain/ 5f123d248f96fd003930da40 Sat, 07 May 2016 00:00:00 -0700 When you create a new website using Azure Web Apps you get a default <sitename>.azurewebsites.net domain assigned to your site. That’s great, but what if you add a custom host name to your site and don’t want people to be able to access your default *.azurewebsites.net domain anymore? (You paid good money for that custom domain.) This post explains how to redirect all traffic aimed at your site’s default domain to your custom domain instead.

It’s really simple: you just need to add a redirect rule to your site’s web.config file. You can do that by adding the following rewrite rule to the web.config file in your wwwroot folder. If you don’t have a web.config file, create one and paste the text below into it, changing the host names to match your site’s:

<configuration>
  <system.webServer>  
    <rewrite>  
        <rules>  
          <rule name="Redirect requests to default azure websites domain" stopProcessing="true">
            <match url="(.*)" />  
            <conditions logicalGrouping="MatchAny">
              <add input="{HTTP_HOST}" pattern="^yoursite\.azurewebsites\.net$" />
            </conditions>
            <action type="Redirect" url="http://www.yoursite.com/{R:0}" />  
          </rule>  
        </rules>  
    </rewrite>  
  </system.webServer>  
</configuration>  

Basically we’re telling IIS to take any request where the host name matches the RegEx pattern “^yoursite.azurewebsites.net$” and return an HTTP 301 response. The response will include the originally requested url, except it’ll be pointing to your custom “www.yoursite.com” domain instead. When the user’s browser reads that 301 response and the new url, it will automatically load that new url instead. It’ll even change the address the user sees in the address bar.

That’s great, so how do we parse the above code exactly? I’m not a fan of copying code unless I know exactly what it’s doing.

So let’s see what’s going on here:

<configuration>
  <system.webServer>  
  ...
  </system.webServer>  
</configuration>  

This part tells IIS that we’re modifying the web server’s configuration settings. The next section is where it starts to get tricky:

<rewrite>  
    <rules> 
    ... 
    </rules>  
</rewrite>  

The <rewrite> tag tells IIS that the elements it encloses are settings for the the URL Rewrite module. The <rules> tag lists all the rules that we want that module to follow. In our case, we want it to follow a rule that will return a HTTP 301 redirect response to the client (the user’s web browser).

Now for the actual rule:

<rule name="Redirect requests to default azure websites domain" stopProcessing="true">
  <match url="(.*)" />  
  <conditions logicalGrouping="MatchAny">
    <add input="{HTTP_HOST}" pattern="^yoursite\.azurewebsites\.net$" />
  </conditions>
  <action type="Redirect" url="http://www.yoursite.com/{R:0}" />  
</rule>  

The “name” attribute is just for human readability. It doesn’t affect execution at all. stopProcessing="true" tells the rewrite module that if this rule applies to the incoming request, it shouldn’t bother processing any other rules after this one, because they won’t matter. In our case we only have one rule, so this attribute doesn’t do anything, but it can save you some CPU if you have more rules defined.

Next is the <match url="(.*)" /> section. Inside it is a regEx pattern, and since this regEx matches all possible inputs, it tells IIS to apply this rule to every request, no matter what its url is.

Then comes the conditions section. We set logicalGrouping="MatchAny" to tell IIS to execute the rule if any of the following conditions hold true. Right now we only have one condition, so again it doesn’t matter, but if you had multiple conditions (for example, multiple domain names you wanted to forward to your custom domain name) you could list them all here. Alternatively, you could set it to “MatchAll” to tell IIS to only run the action if the request matches all the conditions given.

Here’s the condition we used:

<add input="{HTTP_HOST}" pattern="^yoursite\.azurewebsites\.net$" />

It says to look at the http host and evaluate the condition as true if the host matches the given regEx pattern, which we set to your default azure domain name.
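As a sanity check, here’s a quick Python sketch of what that pattern does and doesn’t match (purely illustrative — IIS evaluates the pattern itself, and the host names are the same placeholders used above):

```python
import re

# The same pattern used in the <add input="{HTTP_HOST}" ...> condition
default_domain = re.compile(r"^yoursite\.azurewebsites\.net$")

# The default domain matches, so those requests get redirected
assert default_domain.match("yoursite.azurewebsites.net")

# The custom domain does NOT match, so those requests are served
# normally -- this is what prevents an infinite redirect loop
assert not default_domain.match("www.yoursite.com")

# The dots are escaped, so look-alike host names don't sneak through either
assert not default_domain.match("yoursiteXazurewebsitesXnet")
```

Note that without the `^` and `$` anchors, a host name that merely *contained* the default domain would also match, which is why they’re included.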

The last bit is the action, the meat of the whole rule:

<action type="Redirect" url="http://www.yoursite.com/{R:0}" />  

Here we’re telling the Rewrite module what action to take (what it’s actually supposed to do) when a request matches the above rules. The action we want is “Redirect”, which returns the HTTP 301 to the client along with the url the client should be redirected to. That’s where we get to specify our desired domain name.

But we don’t want to send the user to the root of the domain name, so we add in the /{R:0} bit, which (put simply) says “Look at the original url we matched against in the <match> tag, and stick that in.” More precisely, {R:0} is a back-reference to the entire string matched by the regEx in the <match> tag ({R:1} would be the first capture group; since our pattern is “(.*)”, the two happen to be identical here).
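To tie the pieces together, here’s a small Python simulation of the whole rule (an illustration of the logic only, not how IIS actually implements it; the host and path values are made-up placeholders):

```python
import re

def apply_rule(host, path):
    """Simulate the rewrite rule: return the redirect url, or None if the rule doesn't fire."""
    # The <conditions> check: only requests to the default domain are redirected
    if not re.match(r"^yoursite\.azurewebsites\.net$", host):
        return None
    # The <match url="(.*)"> check: matches every requested path
    m = re.match(r"(.*)", path)
    # {R:0} is the entire string matched above, appended to the new domain
    return "http://www.yoursite.com/" + m.group(0)

print(apply_rule("yoursite.azurewebsites.net", "blog/my-post"))
# -> http://www.yoursite.com/blog/my-post
print(apply_rule("www.yoursite.com", "blog/my-post"))
# -> None (already on the custom domain, so no redirect)
```

The second call returning None is the important part: requests that already arrive on the custom domain fall through untouched, so the browser never gets stuck in a redirect loop.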

And there you have it, that’s how you can redirect all request for your default azure domain to your custom domain.

💡
Side note: you might also enjoy my article Interview Advice that got me offers from Google, Stripe, and Microsoft. I share insider tactics I learned over 13+ years that landed me jobs at multiple tech companies including Google, Facebook, Stripe and Microsoft.

As a final example, here’s the web.config file’s content for my site:

<configuration>
  <system.webServer>  
    <rewrite>  
        <rules>  
          <rule name="Redirect requests to default azure websites domain" stopProcessing="true">
            <match url="(.*)" />  
            <conditions logicalGrouping="MatchAny">
              <add input="{HTTP_HOST}" pattern="^zainrizvi\.azurewebsites\.net$" />
            </conditions>
            <action type="Redirect" url="http://www.zainrizvi.io/{R:0}" />  
          </rule>  
        </rules>  
    </rewrite>  
  </system.webServer>  
</configuration>  
Final web.config contents
]]>
<![CDATA[ Deploy Statically Generated Sites with Yeoman ]]> https://www.zainrizvi.io/blog/deploy-statically-generated-sites-with-yeoman/ 5f123d248f96fd003930da42 Tue, 13 Oct 2015 00:00:00 -0700 There are a lot of awesome static site generators out there. It’s not always easy to figure out how to setup continuous deployment for them though.

This post will describe how to deploy a statically generated site using yeoman angular to Azure Web Apps, but these steps can be applied to deploy any statically generated site to Azure Web Apps.


Deploying the initial site

I tried using yeoman’s gulp-angular generator. I made a quick site following their tutorial, setup continuous deployment via github, navigated to the newly deployed site and I saw…huh?

img

What’s going on here?

Using Kudu for debugging

Luckily all Azure Web Apps come with a handy Kudu site that gives you command line access to your site. You can get to it at https://<yourSiteName>.scm.azurewebsites.net/DebugConsole.


I navigated to the site’s D:\home\site\wwwroot folder and saw all the content was there. And that’s when I face-palmed and realized that the statically generated site is saved to the dist folder, and that folder isn’t even part of the deployment!!!

Luckily, that’s easy enough to fix.

Check the static site into the source code

The first fix is to include the dist folder in the source code. You just need to remove the /dist line from the .gitignore file. Easy enough.
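If you’d rather script the change, it’s a one-liner with grep. Here’s a minimal sketch; the .gitignore contents below are hypothetical, so adjust them to whatever your generator actually produced:

```shell
# Hypothetical .gitignore as produced by the generator
printf 'node_modules\n.tmp\n/dist\n' > .gitignore

# Drop just the /dist line so the generated site gets committed
grep -v '^/dist$' .gitignore > .gitignore.tmp && mv .gitignore.tmp .gitignore

cat .gitignore
```

After this, a plain `git add dist` will pick up the generated files on your next commit.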

Now when you deploy your site to Azure Web Apps your site exists in the new D:\home\site\wwwroot\dist folder!


(FYI: with yo angular you have to run grunt once to actually generate the dist folder before you check in your code.)

But your site still doesn’t work…because Azure Web Apps is expecting the site’s content to be in D:\home\site\wwwroot.

Darn.

Custom deployment settings to the rescue!

Add a .deployment file to the root folder of your code and paste the below inside:

[config]
project = dist

This will tell Azure Web Apps that the root folder for your site is the dist folder. Now your site will be hosted from the D:\home\site\wwwroot\dist folder. If your static site generator puts your site in some other folder, set project to that folder’s name.

Check in the file, deploy it to Azure Web Apps, and see the magic happen.


You can find a full copy of the sample code here on GitHub, with check-ins corresponding to each step of this tutorial: https://github.com/ZainRizvi/YoAngularOnAzureWebApps

You can see the final working site here: http://yoangularonazurewebapps.azurewebsites.net

]]>
<![CDATA[ Backup just the Important Parts of your Site with Azure Web Apps ]]> https://www.zainrizvi.io/blog/creating-partial-backups-of-your-site-with-azure-web-apps/ 5f123d248f96fd003930da41 Fri, 05 Jun 2015 00:00:00 -0700 Introducing a way to back up just the parts of your website that matter most.

Introduction

Azure Web Apps provides powerful backup/restore functionality. (Because disasters can happen to anyone)

However, sometimes you don’t want to back up everything on your site, especially if you back up your site regularly, or if your site has over 10GB of content (that’s the max amount you can back up at a time).

For example, you probably don’t want to back up the log files. Or if you set up weekly backups you won’t want to fill up your storage account with static content that never changes, like old blog posts or images.

Partial backups will let you choose exactly which files you want to back up.

Specify the files you don’t want to backup

You can create a list of files and folders to exclude from the backup.

You save the list as a text file called _backup.filter in the wwwroot folder of your site. An easy way to access this is through the Kudu Console at http://{yoursite}.scm.azurewebsites.net/DebugConsole.

The instructions below will be using the Kudu Console to create the _backup.filter file, but you can use your favorite deployment method to put the file there.

What to do

I’ve got a site that contains log files and static images from past years that are never going to change.

I already have a full backup of the site which includes the old images. Now I want to back up the site every day, but I don’t want to pay for storing log files or the static image files that never change.

The steps below show how I’d exclude those files from the backup.

Identify the files and folders you don’t want to back up

This is easy. I already know I don’t want to back up any log files, so I want to exclude D:\home\site\wwwroot\Logs.

There’s another log file folder that all Azure Web Apps have at D:\home\LogFiles. Let’s exclude that too.

I also don’t want to back up the images from previous years over and over again. So let’s add D:\home\site\wwwroot\Images\2013 and D:\home\site\wwwroot\Images\2014 to the list as well.

Finally, let’s not back up the brand.png file in the Images folder either, just to show we can blacklist individual files as well. It’s located at D:\home\site\wwwroot\Images\brand.png.

This gives us the following paths that we don’t want to back up:

  • D:\home\site\wwwroot\Logs
  • D:\home\LogFiles
  • D:\home\site\wwwroot\Images\2013
  • D:\home\site\wwwroot\Images\2014
  • D:\home\site\wwwroot\Images\brand.png

Create the exclusion list

You save the blacklist of files and folders that you don’t want to back up in a special file called _backup.filter. Create the file and place it at D:\home\site\wwwroot\_backup.filter.

List all the files and folders you don’t want to back up in the _backup.filter file. Add the full path of each folder or file relative to D:\home, one path per line.

So for my site, D:\home\site\wwwroot\Logs becomes \site\wwwroot\Logs, D:\home\LogFiles becomes \LogFiles, so on and so forth, resulting in the following contents for my _backup.filter:

\site\wwwroot\Logs
\LogFiles
\site\wwwroot\Images\2013
\site\wwwroot\Images\2014
\site\wwwroot\Images\brand.png

Note the starting \ at the beginning of each line. That’s important.
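The path trimming is mechanical, so if you have many exclusions you can generate the file instead of writing it by hand. Here’s a small sketch of my own (the paths are the ones from this example; swap in your own):

```python
# Turn absolute paths under D:\home into _backup.filter lines
prefix = r"D:\home"
paths = [
    r"D:\home\site\wwwroot\Logs",
    r"D:\home\LogFiles",
    r"D:\home\site\wwwroot\Images\2013",
    r"D:\home\site\wwwroot\Images\2014",
    r"D:\home\site\wwwroot\Images\brand.png",
]

# Strip the D:\home prefix; the required leading backslash is kept automatically
filter_lines = [p[len(prefix):] for p in paths if p.startswith(prefix)]
print("\n".join(filter_lines))
```

Write the output to _backup.filter and you’re done.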

Run a backup

Now you can run backups the same way you would normally do it. Manually, automatically, either way is fine.

Any files and folders that fall under the filters listed in _backup.filter will be excluded from the backup. This means the log files and the 2013 and 2014 image files will no longer be backed up.

Restoring your backed up site

You restore partial backups of your site the same way you would restore a regular backup. It’ll do the right thing.

The technical details

With full (non-partial) backups, all content on the site is normally replaced with whatever is in the backup. If a file is on the site but not in the backup, it gets deleted.

When restoring a partial backup, however, any content located in one of the blacklisted folders (like D:\home\site\wwwroot\Images\2014 for my site) will be left as is. And if individual files were blacklisted, they’ll also be left alone during the restore.
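To make the difference concrete, here’s a toy model of that restore behavior. This is my own illustration, not Azure’s actual code; the paths and contents are made up:

```python
# Toy model of a partial-backup restore (illustration only, not Azure's code)
site = {
    "site/wwwroot/index.html": "broken",
    "site/wwwroot/Logs/a.log": "log data",
    "site/wwwroot/Images/2014/pic.png": "old image",
}
backup = {"site/wwwroot/index.html": "working"}
excluded = ["site/wwwroot/Logs", "site/wwwroot/Images/2014"]

def restore_partial(site, backup, excluded):
    # Files under an excluded path survive the restore untouched...
    result = {p: c for p, c in site.items()
              if any(p == e or p.startswith(e + "/") for e in excluded)}
    # ...everything else is replaced by the backup's view of the site
    result.update(backup)
    return result

print(restore_partial(site, backup, excluded))
```

Note that index.html comes back from the backup, while the excluded log and image files keep whatever was on the live site.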

Best Practices

What do you do when disaster strikes and you have to restore your site? Make sure you’re prepared beforehand.

Yeah, you have partial backups, but take at least one full backup of the site first so that you have all your site’s contents backed up (this is worst case scenario planning). Then when you’re restoring your backups you can first restore the full backup of the site, and then restore the latest partial backup on top of it.

Here’s why: it lets you use Deployment Slots to test your restored site. You can even test the restore process without ever touching your production site. And testing your restore process is a Very Good Thing. You never know when you might run into some subtle gotcha, like I did when I tried restoring my blog, and end up losing half your content.

My horror story

My blog is powered by the Ghost blogging platform. Like a responsible dev I created a backup of my site and everything was great. Then one day I got a message saying that there was a new version of Ghost available and I could upgrade my blog to it. Great!

I created one more backup of my site to capture the latest blog posts, and proceeded to upgrade Ghost.

On my production site.

Bad mistake.

Something went wrong with the upgrade, my home screen just showed a blank screen. “No problem” I thought, “I’ll simply restore the backup I just took.”

I restored the backup, saw everything come back…except the blog posts.

WHAT???

Turns out, in the Ghost upgrade notes there’s this warning:

If you try to backup the data while Ghost is running…the data doesn’t actually get backed up.

Bummer.

If I had tried the restore on a test slot first I would have seen this issue and not lost all my posts.

Such is life. It can happen to the best of us.

]]>