Framed pen and ink of Brian Kernighan's famous Hello World program: 'main(){printf("hello, world\n");}'

Hello Remote World: How does it feel to onboard remotely?

I moderated a Rice University panel with six recent Computer Science graduates to learn how they were faring in their transition to their new post-graduation world. Luckily for computer science majors, the job market has remained relatively strong despite the pandemic. The jobs were already easy to do remotely, the companies already invest in and use remote communication tools like Slack, Zoom, and GitHub, and the tech industry has seen increasing use of its products while everyone is stuck at home. Here is what I learned.

Panel of recent Rice Computer Science graduates starting their next step in their career during the pandemic: https://events.rice.edu/#!view/event/date/20200917/event_id/122108
  • All of the working members of the panel had interned before at the places they landed. That made their remote transition easier because they already knew the culture and their team. This is an unanticipated benefit of those internships that organizations offer.
  • Some moved to the city where their new job was and, if they had friends there, they were meeting in parks socially. Several were glad they hadn’t moved and were relying on their hometown friendships.
  • The new graduate student had moved but was taking classes remotely. He was able to get to know fellow graduate students in the classes that had breakout sessions, but it definitely was harder.  
  • The big companies were still doing two-week orientations – just online – with a combination of talks, labs, and icebreakers. I would like to have dug into that more. What on earth is two weeks of orientation online like? 
  • The key to the social activities seemed to be having a variety to try out, because some are awkward and some work well, which varies by person. Some of the social activities are clearly helpful for work life and others work better for building friendships.  Lunches and happy hours were more awkward, but still good for team building. Some companies were offering ways to have a randomly chosen coffee chat, and ways to get a mentor. 
  • One company offers monthly wellness in-days, rather than a day off. It is a day in, but with work-life-balance themes (health, yoga, Earth Day).  
  • Since this is a CS panel, folks were taking advantage of social Slack channels (pets, alone-together, games) and online game groups organized through work (Codenames, Jackbox, Scribble, Brackets).
  • Not having a commute was a real plus.
  • As you might expect, working hours have shifted. Several mentioned working long hours without realizing it because there is no transition, and others had shifted to working later into evening hours to take advantage of outdoor activities earlier in the day. 
  • They all missed being able to ‘roll your chair’ over to a colleague and ask questions, chat in the break rooms, etc. 

Building the RIGHT Thing

Product managers are responsible for creating and delivering the right thing to their customers. 

Product Managers are sometimes called product ‘CEOs’ because of their central position of authority with respect to the product, and because of the need to communicate the vision for the product both to an audience of ‘investors’ (the company leadership) who must provide people and resources, and to their own team who must deliver the product. They are not ‘owners,’ however. Ownership conveys the wrong set of skills. Product Managers aren’t buying and investing as an owner would, and they aren’t deciding where the team invests time and energy based on their own personal preferences as an owner would. Instead, a good Product Manager has to learn the needs of their customers, and figure out how their team can fulfill them to achieve the organization’s strategic goals. 

Great product managers are skilled at determining what the right thing to build is with the consumer in mind, communicating that to leaders, colleagues, and team members, and then working productively and flexibly to deliver the right value to their customers. 

I recently gave a talk to Rice University Computer Science Alumni as part of a panel on Product Management as a career, and you can access my talk and slides below. In this series of posts, I will be going into a lot more depth to explain more ‘tools’ for the product management toolbox and how to use them effectively to create useful and beneficial products.

Panel: bit.ly/product-management-career-panel (Minute 36:44)
Slides: bit.ly/fletcher-on-product-management 

Crowd 'speaking' with labels "Level Up!, 5 stars, A+"

Leveling up crowd-sourced educational content — with a little help from machine learning and lessons from HCOMP19.

More lessons from HCOMP 2019

Crowd-sourcing content: Although publishers (I work for one) create high quality content that is written, curated, and reviewed by subject matter experts (SMEs, pronounced ‘smeez’ in the industry), there are all sorts of reasons that we always need more content. In the area of assessments, learners need many, many practice items to continually test and refine their understanding and skills. Also, faculty and practice tools need a supply of new items to ‘test’ what students know and can do at a certain point in time. Students need really good examples that are clearly explained. (Khan Academy is a great example of a massive collection of clearly explained examples). When students make attempts to solve homework problems, they also need feedback about what NOT to do and why those aren’t the correct approaches. Therefore, we also need to know the core concepts required for each activity or practice. 

Faculty and students are already creating content! Because faculty and students are already producing lots of content themselves as part of their normal workflow, and faculty are assembling and relating content and learning activities, it would be great to figure out how to leverage the best of that content and relationship labeling for learning. A paper by Bates et al. looked at student-generated questions and solutions in the PeerWise platform (https://journals.aps.org/prper/pdf/10.1103/PhysRevSTPER.10.020105) and found that, with proper training, students generate high quality questions and explanations. 

So at HCOMP 2019 I was listening for ways to crowdsource examples and ground truth. In particular, it would be useful to see if machine learning algorithms could find faculty and student work that is highly effective at helping students improve their understanding. The two papers below address both finding content exemplars and training people to get better at producing effective content. 

Some highly-rated examples are better than others. Doroudi et al. wanted to see what effect showing highly rated peer-generated examples to crowd workers would have on the quality of work they submitted. In this study, the workers were writing informative comparison reviews of products to help a consumer decide which product better fits their needs. The researchers started with a small curated ground truth of high quality reviews. Workers that viewed highly rated reviews before writing their own ended up producing better (more highly rated) reviews. That isn’t surprising, but interestingly, even among equally highly rated reviews, some were much more effective at helping improve work! The researchers used machine learning to determine the most effective training examples. So that suggests that, while viewing any highly rated examples will improve new contributors’ content, we can improve the training process even more by selecting and showing the examples with the best track record of improving workers’ content creations.  
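The selection idea can be sketched in a few lines of Python. This is only an illustration of the general approach, not the paper’s method: the example IDs, quality scores, and baseline below are all invented.

```python
# Sketch: rank training examples not by their own rating, but by how much
# workers' subsequent reviews improved after viewing each one.
from statistics import mean

# (example_id, quality_of_review_written_after_viewing) observations --
# invented data for illustration.
observations = [
    ("ex_a", 4.2), ("ex_a", 4.5), ("ex_a", 4.1),
    ("ex_b", 3.1), ("ex_b", 3.4),
    ("ex_c", 4.8), ("ex_c", 4.6),
]
baseline = 3.0  # hypothetical mean quality with no training example shown

def effectiveness(obs, baseline):
    """Average quality lift attributed to each training example."""
    lifts = {}
    for ex_id, quality in obs:
        lifts.setdefault(ex_id, []).append(quality - baseline)
    return {ex_id: mean(vals) for ex_id, vals in lifts.items()}

# Show new workers the examples with the best track record first.
ranked = sorted(effectiveness(observations, baseline).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked)
```

Note that two examples with the same rating (say, ex_a and ex_c both rated highly by reviewers) can still earn very different effectiveness scores, which is the paper’s key observation.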

Using training and leveling up:  Madge et al. introduced a crowd-worker training ‘game’ with a concept of game levels. They showed the method was more effective than standard practice at producing highly effective crowd workers. Furthermore, they showed how machine learning algorithms could determine which tasks belonged in each game level by observing experimentally how difficult tasks were. 

Crowd workers are often used to generate descriptive labels for large datasets. Examples include tagging content in images (‘dog’), identifying topics in Twitter feeds (‘restaurant review’), and labeling the difficulty of a particular homework problem (‘easy’). In this particular study, workers were identifying noun phrases in sentences. The typical method of finding good crowd workers is to start out by giving a new worker tasks that have a known “right answer” and then picking the workers that are best at those tasks to do the new tasks you actually want completed. The available tasks are then distributed to workers randomly, meaning a worker might get an easy or difficult task at any time. These researchers showed that you could train new workers using a ‘game’, so that they improve over time and are able to do more and more difficult tasks (harder and harder levels of the game), and the overall quality of labeling for the group of workers is better.  
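A minimal sketch of the leveling idea, assuming difficulty is estimated from how often workers agree with the majority label (the task names, accuracy numbers, and binning scheme below are invented for illustration, not taken from the paper):

```python
# Sketch: bin labeling tasks into game levels by observed difficulty,
# where difficulty = fraction of workers matching the majority label.

def assign_levels(task_accuracy, n_levels=3):
    """Easiest tasks (highest observed accuracy) go in level 1;
    the hardest tasks land in the top level."""
    ordered = sorted(task_accuracy, key=task_accuracy.get, reverse=True)
    per_level = max(1, -(-len(ordered) // n_levels))  # ceiling division
    return {task: i // per_level + 1 for i, task in enumerate(ordered)}

task_accuracy = {           # invented agreement rates per task
    "tag_simple_np": 0.95,  # easy: nearly all workers agree
    "tag_nested_np": 0.70,
    "tag_coordination": 0.55,
    "tag_ellipsis": 0.40,   # hard: workers frequently disagree
}
levels = assign_levels(task_accuracy, n_levels=2)
print(levels)
```

New workers would then start on level-1 tasks and unlock higher levels as their own accuracy improves, instead of receiving easy and hard tasks at random.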

Better education content: Faculty and students could become more effective producers of education content with the help of these two techniques. Motivating, training and selecting contributors via comparison with highly rated examples and leveling up to ‘harder’ or more complex content would be useful to help contributors to create high quality learning content (example solutions, labeling topics and difficulty, giving feedback). These techniques also sound really promising for training students to generate explanations for their peers, and potentially to train them to give more effective peer feedback. 

personified lock (unlocked) fighting personified corona virus

Open Vs. COVID Round 2: Collaborating for the Knockout

Opening up information is one of the keys to a concerted and effective response to the COVID-19 Pandemic. We need to make sure that insights from the world-wide treatment efforts are discovered quickly and shared widely to ensure effective preventions and treatments spread. A case in point on the power of coordinating and sharing medical information comes from the Castleman’s Collaborative, illustrating how open processes and open resources can be an asset to humanity in this global health crisis.

“Doing in a year what often takes a decade.”1 I recently caught Terry Gross’ interview with David Fajgenbaum on Fresh Air about ‘crowdsourcing’ a cure for Castleman’s disease, which he suffers from. It’s a great listen. Castleman’s is a deadly disease about which little is known because of its rarity and the dispersal of cases. Luckily for the world, Fajgenbaum was in medical school when he had his first attack and, because he was close to finishing his medical degree, he began researching.  He discovered that little coordinated information and study was available. The Castleman’s Collaborative, https://cdcn.org/, has come up with an eight step research process that is crowdsourced, prioritized, funded, executed, and published (almost all freely available). Much of the research targets off-label use of FDA-approved drugs, since development of specifically targeted new drugs is prohibitively expensive and time consuming, but many, many drugs already exist and might be effective. While, individually, rare diseases impact few people, cumulatively, many rare conditions impact large numbers of people, and so applying this collaborative method can be tremendously impactful. 

They are applying the same collaborative method to COVID-19 treatment: The Castleman Collaborative is now applying the methodology to COVID-19, https://cdcn.org/corona/. Their first step is compiling a database of off-label use of drugs to treat COVID-19 symptoms, aptly named CORONA for COvid-19 Registry of Off-label & New Agents. Having that database openly available to those around the world who are researching treatments helps prevent duplication, combine efforts in the medical community, and lets researchers and practitioners build on each other’s knowledge, which has always been the promise and practice of science.  

‘Open’ is a critical tool for solving hard, global problems. Open resources (education, data, software) and collaborative processes are unique in their ability to pivot to address new crises and public concerns, because they remove barriers to building on previous work and disseminating knowledge quickly and widely. In the case of the pandemic, this gives open collaboration a fighting chance against the virus, which also spreads widely and builds on its own infectious success. My day job is helping students learn and achieve by building effective products, which also benefits from and is accelerated by open content, open research, and collaborative team processes. My interest in open software started in graduate school and the more I learn, the more I believe in its potential to help the world. In this global health crisis, open solutions are continuing to establish themselves as an integral part of the thriving open software ecosystem that I am proud to be a part of. 

1 From the Castleman Collaborative website, July 6, 2020

Student sleeping akimbo at desk covered with books.

The Second Shift: Does homework fly in the face of current productivity research?

Students are doing homework after a full day, and may be caring for siblings, working, and helping out at home. Some of them don’t have adequate tech or space to work. Homework is a second or third shift for them and may be increasing educational inequity.

Is ed-tech exacerbating inequity?

I have been thinking a lot about where ed-tech might be exacerbating existing inequity. And that led me to read a colleague’s tweet of “Homework is a Social Justice Issue”, originally published here in 2015. It talks about the underlying assumptions being made when we give homework, especially in K12: that students have the time, background knowledge, and tools to do school work at home. If students are working or taking care of younger siblings, they don’t have the time. If the work is the kind that tends to draw in affluent students’ parents to help, then students probably don’t yet have the background knowledge to do it alone. And if the homework is on a laptop/phone and requires internet access, or requires space to organize and maintain materials, they may not have the tools. We must take these environmental realities into account when designing and building educational software that will meet the needs of students from all walks of life. 

Long hours don’t work.

I recently realized that the people I meet with after 4pm aren’t getting the same creativity and deep listening as people that talk to me at 9 or 10am. It made me wonder why we are asking students to do a second or often third shift, when the research on the harms of overwork to productivity is so clear (here’s a summary of the harms), and there are similarly real harms to work quality (see this study on long medical shifts). Do you want someone in their 18th hour doing brain surgery on you? 

When and what to assign?

So, even IF students have the time, background knowledge, and tools, does it really make sense to ask them to work a second shift? Students do need time to grapple with hard problems, and many students need quiet to work. So it isn’t an easy problem to fix. The article suggests that if you are assigning homework in K12, you should ask yourself these questions.

  1. “Does the task sit low on Bloom’s Taxonomy? In other words, are students likely to be able to do it independently?
  2. If not, does the task build primarily on work already performed or begun in class? In other words, have students already had sufficient opportunity to dig deep into the task and work through their difficulties in the presence of peers and/or the teacher?
  3. Does the task require only the technology to which all students have sufficient access outside of school?
  4. Can the task reasonably be accomplished, alongside homework from other classes, by students whose home life includes part-time work, significant household responsibilities, or a heightened level of anxiety at home?”
    https://modernlearners.com/homework-is-a-social-justice-issue/

How could ed-tech help? 

Homework systems and courseware could make it easy and safe for students to provide feedback on their assignments, including individual questions and tasks within their assignments. Rather than focusing so much on giving analytics about students, ed-tech could provide instructors with analytics about the assignments, questions, and tasks they give. Which ones seem to require a lot of prerequisite knowledge that students don’t already have? Which ones seem to help students do well in the course? Which questions behave like “weed-out” questions? Maybe ed-tech should find ways to collect demographic information and measure outcomes to report on inequitable results, while protecting student privacy. 
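As a rough illustration of what assignment-level analytics could look like, here is a sketch that flags potential “weed-out” questions by comparing each question’s success rate to the assignment average. The question IDs, scores, and threshold are all invented; a real system would also need the outcome correlations and privacy protections mentioned above.

```python
# Sketch of assignment analytics: flag questions whose success rate
# trails the assignment average by a wide margin ("weed-out" behavior).
from statistics import mean

# question_id -> per-student scores (1 = correct, 0 = incorrect);
# invented data for illustration.
scores = {
    "q1": [1, 1, 1, 0, 1, 1],
    "q2": [1, 0, 1, 1, 1, 0],
    "q3": [0, 0, 1, 0, 0, 0],   # most students miss this one
}

def flag_weed_out(scores, margin=0.3):
    """Return questions whose success rate is `margin` below average."""
    rates = {q: mean(s) for q, s in scores.items()}
    avg = mean(rates.values())
    return [q for q, r in rates.items() if r < avg - margin]

print(flag_weed_out(scores))
```

An instructor dashboard built on something like this could surface the flagged questions alongside the prerequisite concepts they seem to demand, rather than only reporting on individual students.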

I am interested in hearing your ideas, too.

See you earlier tomorrow! And by the way, I have started making sure that the people that I mostly speak to later in the day occasionally meet with me at an earlier time, so that they get the benefit of my full listening capacity and creative potential.