Category Archives: ethical software

Student sleeping akimbo at desk covered with books.

The Second Shift: Does homework fly in the face of current productivity research?

Students are doing homework after a full day, and many are also caring for siblings, working, or helping out at home. Some of them don’t have adequate tech or space to work. Homework is a second or third shift for them, and it may be increasing educational inequity.

Is ed-tech exacerbating inequity?

I have been thinking a lot about where ed-tech might be exacerbating existing inequity. That led me to read “Homework Is a Social Justice Issue”, originally published in 2015, which a colleague had tweeted. It talks about the underlying assumptions we make when we give homework, especially in K12: that students have the time, background knowledge, and tools to do school work at home. If students are working or taking care of younger siblings, they don’t have the time. If the work is the kind that often lures in the parents of affluent students to help, then students without that help probably don’t have the background knowledge yet. And if the homework is on a laptop or phone and requires internet access, or requires space to organize and maintain materials, they may not have the tools. We must take these environmental realities into account when designing and building educational software that will meet the needs of students from all walks of life.

Long hours don’t work.

I recently realized that the people I meet with after 4pm aren’t getting the same creativity and deep listening as the people who talk to me at 9 or 10am. It made me wonder why we are asking students to do a second or often third shift, when the research on the harms of overwork to productivity is so clear (here’s a summary of the harms), and there are similarly real harms to work quality (see this study on long medical shifts). Do you want someone in their 18th hour doing brain surgery on you?

When and what to assign?

So, even IF students have the time, background knowledge, and tools, does it really make sense to ask them to work a second shift? Students do need time to grapple with hard problems, and many students need quiet to work. So it isn’t an easy problem to fix. The article suggests that if you are assigning homework in K12, you should ask yourself these questions.

  1. “Does the task sit low on Bloom’s Taxonomy? In other words, are students likely to be able to do it independently?
  2. If not, does the task build primarily on work already performed or begun in class? In other words, have students already had sufficient opportunity to dig deep into the task and work through their difficulties in the presence of peers and/or the teacher?
  3. Does the task require only the technology to which all students have sufficient access outside of school?
  4. Can the task reasonably be accomplished, alongside homework from other classes, by students whose home life includes part-time work, significant household responsibilities, or a heightened level of anxiety at home?”
    https://modernlearners.com/homework-is-a-social-justice-issue/

How could ed-tech help? 

Homework systems and courseware could make it easy and safe for students to provide feedback on their assignments, including individual questions and tasks within their assignments. Rather than focusing so much on giving analytics about students, ed-tech could provide instructors with analytics about the assignments, questions, and tasks they give. Which ones seem to require a lot of prerequisite knowledge that students don’t already have? Which ones seem to help students do well in the course? Which questions behave like “weed-out” questions? Maybe ed-tech should find ways to collect demographic information and measure outcomes to report on inequitable results, while protecting student privacy. 
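
To make this concrete, here is a small, purely illustrative sketch of the kind of assignment-level analytics a homework system could surface. It computes two classic item statistics from a hypothetical response matrix: difficulty (the fraction of students who got a question right) and discrimination (how strongly getting the question right tracks overall performance). The data shape and names are my own assumptions, not any particular product’s API; a very hard question with high discrimination is the sort an instructor might want to review as a possible “weed-out” item.

```typescript
// Hypothetical shape: one row per student, one boolean per question.
type ResponseMatrix = boolean[][]; // responses[student][question]

interface ItemStats {
  question: number;
  difficulty: number;     // fraction of students answering correctly (lower = harder)
  discrimination: number; // point-biserial correlation between the item and total score
}

const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

function itemStats(responses: ResponseMatrix): ItemStats[] {
  const nStudents = responses.length;
  const nQuestions = responses[0]?.length ?? 0;
  const totals = responses.map(row => row.filter(Boolean).length);
  const meanTotal = avg(totals);
  const sdTotal = Math.sqrt(avg(totals.map(t => (t - meanTotal) ** 2)));

  const stats: ItemStats[] = [];
  for (let q = 0; q < nQuestions; q++) {
    const correct = responses.map(row => row[q]);
    const p = correct.filter(Boolean).length / nStudents; // difficulty
    let discrimination = 0;
    if (sdTotal > 0 && p > 0 && p < 1) {
      // Compare mean total score of students who got the item right vs. wrong.
      const meanRight = avg(totals.filter((_, i) => correct[i]));
      const meanWrong = avg(totals.filter((_, i) => !correct[i]));
      discrimination = ((meanRight - meanWrong) / sdTotal) * Math.sqrt(p * (1 - p));
    }
    stats.push({ question: q, difficulty: p, discrimination });
  }
  return stats;
}

// Example: 4 students, 3 questions. Question 2 is answered correctly only by
// the strongest student, so it shows up as hard but highly discriminating.
console.log(itemStats([
  [true, true, true],
  [true, true, false],
  [true, false, false],
  [false, false, false],
]));
```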

I am interested in hearing your ideas, too.

See you earlier tomorrow! And by the way, I have started making sure that the people I mostly speak to later in the day occasionally meet with me at an earlier time, so that they get the benefit of my full listening capacity and creative potential.

Getting to the truth, the ground truth, and nothing but the ground truth.

Takeaways for learning from HCOMP 2019, Part 2

At HCOMP 2019, there was a lot of information about machine learning that I found relevant to building educational technology. Surprisingly to me, I didn’t find other ed-tech companies and organizations at the Fairness, Accountability, and Transparency conference I went to last year in Atlanta or the 2019 HCOMP conference. Maybe ed-tech organizations don’t have research groups that are publishing openly and thus they don’t come to these academic conferences. Maybe readers of this blog will send me pointers to who I missed! 

Mini machine learning terminology primer from a novice (skippable if you already know these): To train a machine learning algorithm that is going to decide something or categorize something, you need to start out with a set of things for which you already know the correct decisions or categories. Those are the ‘ground truths’ that you use to train the algorithm. You can think of the algorithm as a toddler. If you want the algorithm to recognize and distinguish dogs from cats, you need to show it a bunch of dogs and cats and tell it what they are. Mom and Dad say: “look, a kitty”; “see the puppy?”

An algorithm can be ‘over-fitted’ to the ground truth you give it. In the toddler analogy, your toddler knows the animals you showed them (that Fifi is a cat and Fido is a dog), but can’t identify new animals, for example the neighbor’s pet cat.

To add a further wrinkle, if you are creating a ground truth, it is always great if you have Mom and Dad to create the labels, but sometimes all you can get is toddlers (novices) labeling. Using novices to train is related to the idea of wisdom of the crowd, where the opinion of a collection of people is used rather than that of a single expert.

You can also introduce bias into your algorithm by showing it only calico cats in the training phase, causing it to label only calicos as “cats” later on. Recent real-world examples of training bias come from facial recognition algorithms that were trained on light-skinned people and therefore have trouble recognizing black and brown faces.
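
To make the over-fitting analogy a bit more tangible, here is a deliberately silly sketch of my own (not from any of the papers discussed here): a “classifier” that just memorizes the labeled examples it was shown. It looks perfect on its training data and is useless on the neighbor’s cat, which is exactly how an over-fitted model behaves on new data.

```typescript
// A toy "model" that memorizes its training labels verbatim.
type Label = "cat" | "dog";

class MemorizingClassifier {
  private memory = new Map<string, Label>();

  // "Training" is just storing the ground-truth labels we were shown.
  train(examples: Array<{ name: string; label: Label }>): void {
    for (const ex of examples) {
      this.memory.set(ex.name, ex.label);
    }
  }

  // Perfect on anything seen during training, clueless otherwise.
  predict(name: string): Label | "no idea" {
    return this.memory.get(name) ?? "no idea";
  }
}

const toddler = new MemorizingClassifier();
toddler.train([
  { name: "Fifi", label: "cat" },
  { name: "Fido", label: "dog" },
]);

console.log(toddler.predict("Fifi"));            // "cat"  (training data: looks great!)
console.log(toddler.predict("neighbor's cat"));  // "no idea" (fails to generalize)
```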

Creating ground truth: A whole chunk of the talks were about different ways of creating ‘ground truths’ using ‘wisdom of the crowd’ techniques. Ed-tech needs quite a bit of ground-truth about the world to train algorithms to help students learn effectively. “How difficult is this task or problem?” “What concepts are needed to do this task/problem?” “What concepts are a part of this text/example/explanation/video?” “Is this solution to this task/problem correct, partially correct, displaying a misconception, or just plain wrong?” 

Finding the best-of-the-crowd: Several of the presentations were about finding and motivating the best of the crowd. If you can find and/or train ‘experts’ in the crowd, you can get to the ground-truth at lower cost (in time or money). I am hoping that ed-tech can use these techniques to crowdsource effective practice exercises, examples, solutions, and explanations. 

  1. Wisdom of the toddlers: Heinecke et al. (https://aaai.org/ojs/index.php/HCOMP/article/view/5279) described a three-step method for obtaining a ground truth from non-experts. First, they used a large number of people and expensive mathematical methods to obtain a small ground truth. (Sticking with the cats and dogs example from the primer above: you have a large number of toddlers tell you whether a few animals are cats or dogs, and use math to decide which animals ARE cats and ARE dogs using the wisdom of the toddlers.) Step 2 is to find the small subset of those people who were best at determining the ground truth, and use them to create more ground truth. (Find a group of toddlers who together labeled the cats and dogs correctly, and have them label a whole bunch more cats and dogs.) Finally, you use the large set of ground truth to train a machine learning algorithm; there is a minimal sketch of the selection idea just after this list. I think this is very exciting for learning content, because we have students and faculty doing their day-to-day work and we might be able to find sets of them that can help answer the questions above.
  2. Misconceptions of the herd: One complicating factor for educational technology ground truths is the prominent presence of misconceptions. The Best Paper winner at the conference, Simoiu et al. (https://aaai.org/ojs/index.php/HCOMP/article/view/5271), found an interesting, relevant, and in hindsight unsurprising result. This group did a systematic study of crowds answering 1,000 questions from 50 different topical domains. They found that averaging the crowd’s answers almost always yields significantly better results than the median (50th percentile) person. They also wanted to see the effect of social influence on the crowd. When they showed individuals the current ‘consensus’ (the three most popular answers so far), the crowd could be swayed by early wrong answers and thus, on average, did NOT perform better than the average unswayed person. Since misconceptions (wrong answers due to faulty understanding) are a well-known phenomenon in learning and are particularly resistant to change (if you haven’t seen Derek Muller’s wonderful 6-minute TED talk about this, go see it now!), we need to be particularly careful not to aid their contagion when introducing social features.
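
Here is the minimal sketch of the selection idea from item 1, under my own simplifying assumptions: a plain majority vote stands in for the paper’s more expensive aggregation methods, and “reliability” is just agreement with that provisional consensus. None of the names here come from the paper itself.

```typescript
// labels[annotator][item] = that annotator's label for that item.
type Labels = string[][];

// Step 1 (simplified): majority vote per item as a provisional ground truth.
function majorityVote(labels: Labels): string[] {
  const nItems = labels[0].length;
  const consensus: string[] = [];
  for (let i = 0; i < nItems; i++) {
    const counts = new Map<string, number>();
    for (const annotator of labels) {
      counts.set(annotator[i], (counts.get(annotator[i]) ?? 0) + 1);
    }
    consensus.push(Array.from(counts.entries()).sort((a, b) => b[1] - a[1])[0][0]);
  }
  return consensus;
}

// Step 2: keep the annotators whose labels agree most often with the
// provisional consensus; they would label the larger dataset next.
function reliableAnnotators(labels: Labels, keep: number): number[] {
  const consensus = majorityVote(labels);
  return labels
    .map((annotator, idx) => ({
      idx,
      score: annotator.filter((l, i) => l === consensus[i]).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, keep)
    .map(a => a.idx);
}

// Example: three annotators label four animals; annotator 2 disagrees a lot.
const labels: Labels = [
  ["cat", "dog", "cat", "dog"],
  ["cat", "dog", "cat", "cat"],
  ["dog", "cat", "cat", "dog"],
];
console.log(reliableAnnotators(labels, 2)); // [0, 1]: annotator 2 drops out
```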

Are misconceptions like overfitting in machine learning? As an aside, my friend and colleague Sidney Burrus told an interesting story that sheds light on the persistence of misconceptions. Sidney talked about how, at the transition point between an earth-centered and a sun-centered model of the solar system, the earth-centered model was much better at predicting orbits, because people had spent a lot of time adding detail to that model so it would correctly predict known phenomena. The first sun-centered models, however, used circular orbits and did a poor job of prediction, even though they ultimately had more ‘truth’ in them. Those early earth-centered models were tightly ‘fitted’ to the known orbits; they would not have been good at predicting new orbits, just as an over-fitted machine learning model will fail on new data.

Accessibility Sprint – Part 3: Giving non-visual feedback for learning from interacting with PhET simulations

This is the third part of a series of blog posts about a coding sprint about creating interactive online learning that is usable for people with disabilities.

The first post gives an overview of the coding sprint. Each of these subsequent posts describes the work of one team.

Sims Team Goal

Make the University of Colorado Boulder’s well-respected, freely available, open-source PhET simulations more accessible for students who cannot see the simulation. By providing just the right amount of aural feedback about what is happening in the simulation after an action taken by a learner, blind and low-vision students could interact with the simulation, hear the results, and try additional actions to understand the underlying physics principles.

For example, PhET has been working on making their Balloons and Static Electricity simulation accessible by including scene descriptions that screen readers read aloud, in order to orient learners who can’t just look around to see what looks controllable. The controls are all accessible via keyboard actions. But when a learner takes an action, for instance removing a charged wall that is keeping the balloon steady, the resulting balloon movement must be described. It would overwhelm the listener if small changes were repetitively described, and it can be confusing if messages end up being read out of logical order. For instance, messages about the balloon’s movement might end up being read after a message describing it reaching an object and stopping.

Balloon simulation on laptop screen with sweater, balloon, and wall showing. Scene description listed to the right.
This image shows a balloon simulation of static electricity moving between a sweater and a balloon. Beside the visual is information encoded in the DOM that is read aloud by assistive technology when particular actions are taken, to help the learner operate this sim and understand the results of their actions.

This group decided to work on extending the messaging reported by this balloon sim, in order to better report very dynamic events, such as the learner moving the balloon, or the balloon moving itself (attracted to the sweater), without overwhelming and overlapping messages. To do this, they designed an UtteranceQueue, which is a FIFO (first in, first out) message queue with certain rules: it takes an object that contains an utterance, the object the utterance is associated with, an expected utterance time (a delay before the next utterance), and a callback that returns a boolean, to allow the utterance to be cancelled, rather than spoken, when it reaches the front of the queue. This should allow a simulation programmer to design the set of messages a particular object should report. For example, the balloon would report being moved, as well as its state of charge and whether it is stuck to something. The callback would allow, for example, the balloon movement messages to cancel themselves if the balloon is in fact now stuck to the sweater or wall.
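
Here is a rough TypeScript sketch of the queue behavior described above. The actual PhET implementation lives in the repository linked below and differs in detail; the names and the speech call here are my own stand-ins. Each queued item carries the utterance text, the object it is about, a delay to observe before the next utterance, and a cancel check that runs when the item reaches the front.

```typescript
// One queued announcement, as described above.
interface QueuedUtterance {
  text: string;                 // what to say
  source: object;               // the sim object the utterance is about
  delayMs: number;              // pause before the next utterance is spoken
  shouldCancel?: () => boolean; // if true at dequeue time, skip this utterance
}

class UtteranceQueue {
  private queue: QueuedUtterance[] = [];
  private speaking = false;

  add(utterance: QueuedUtterance): void {
    this.queue.push(utterance); // FIFO: new messages go to the back
    this.drain();
  }

  private drain(): void {
    if (this.speaking) return;
    const next = this.queue.shift();
    if (!next) return;

    // A stale message (e.g. "balloon is drifting") cancels itself if its
    // condition no longer holds (e.g. the balloon is now stuck to the sweater).
    if (next.shouldCancel?.()) {
      this.drain();
      return;
    }

    this.speaking = true;
    speak(next.text); // stand-in for updating an ARIA live region or speech API
    setTimeout(() => {
      this.speaking = false;
      this.drain();
    }, next.delayMs);
  }
}

// Placeholder: in a real sim this would write to an aria-live element.
function speak(text: string): void {
  console.log(`[narration] ${text}`);
}
```

A real implementation also has to interleave with the screen reader’s own announcements; the sketch only shows the FIFO-plus-cancel behavior.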

Laptop with balloon sim and code inspector
This image shows the same balloon sim with the browser’s code inspector open, revealing what is controlling the simulation.

Testing (of the earlier version)

While the above development was occurring, one of the team members, Kelly, tested the feedback announcer function in the existing version of the balloon sim (the one before the code sprint) and got some user feedback for the group. The person she tested with had worked with the sim before, but not with the new scene narration. Her test subject found the amount of narration to be just about right. He did, however, want a way to repeat some narration.

Demo

At the end of the day, this group demonstrated the operation of the new UtteranceQueue when the wall is removed and the balloon starts drifting toward the sweater. The movement was described without excessive repetition, and when the balloon reached the sweater, that event was narrated. No other messages followed.

People who worked in this group: Jesse Greenberg, Darron Guinness, Ross Reedstrom, Kelly Lancaster

Code

https://github.com/phetsims/balloons-and-static-electricity/tree/ocad-hacks


Accessibility sprint – part 2: Creating a mobile-friendly and accessible Infobox for maps

This is the second part of a series of blog posts about a coding sprint that happened the day before CSUN 17. The sprint was about creating interactive online learning that is usable for people with disabilities. This whole software area is called accessibility, and is also known as inclusive design.

The first post gives an overview of the coding sprint. Each of these subsequent posts describes the work of one team.

Creating a mobile-friendly and accessible Infobox for maps

Team Goal

Create a widget for helping people who are blind or have low vision explore maps that display statistical information (think popular vote winners in the US). This type of map is called a choropleth.

The existing infobox widget takes statistical data in a simple format and works with hot spots on an SVG map to bring up an info box as a user mouses over or tabs to different regions on the map. The current version, however, isn’t accessible for low-vision users, doesn’t work well with screen readers, and doesn’t work on mobile. The team worked on improving these aspects of the widget (which can be reused for any statistical map).

Map of the 2016 US presidential election popular vote results by state, with blue, red, pink and light blue colors for each state. An info box is open for New Mexico with the winner and actual vote totals.
United States 2016 presidential race: Popular vote by state. 

Demo at the end of the day

Doug Schepers demonstrated the improvements. The demo showed the map tool improved for low-vision and screen reader access. For low vision, the state selection outline was thickened, the info box contrast was increased, the info box was made resizable, and its placement was adjusted to make sure the selected state was not covered. The ability to select the next state by tabbing through the states was added; selection is currently in alphabetical order, and a better system would improve that navigation as well. He also demonstrated using a screen reader to select a state and hear its info box read aloud. The widget uses ARIA live regions to announce updates, and the statistical data is formatted as simple name-value pairs.
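
As a rough illustration of that last point (my own sketch, not the team’s code, which is in the repository linked below), an aria-live region lets the widget announce a selected state’s name-value pairs without moving keyboard focus:

```typescript
// Hypothetical data shape: simple name/value pairs per region.
interface RegionData {
  name: string;                    // e.g. "New Mexico"
  values: Record<string, string>;  // e.g. { Winner: "Candidate A", Votes: "1,234,567" }
}

// A visually hidden element with aria-live="polite": screen readers announce
// whatever text is written into it, without stealing focus from the map.
const liveRegion = document.createElement("div");
liveRegion.setAttribute("aria-live", "polite");
liveRegion.className = "visually-hidden";
document.body.appendChild(liveRegion);

// Called when a state is selected by mouse, tab, or touch.
function announceRegion(data: RegionData): void {
  const details = Object.entries(data.values)
    .map(([name, value]) => `${name}: ${value}`)
    .join(", ");
  liveRegion.textContent = `${data.name}. ${details}`;
}

// Example (placeholder numbers, not real election data):
// announceRegion({ name: "New Mexico",
//   values: { Winner: "Candidate A", "Popular vote": "1,234,567" } });
```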

The ultimate goal is to define a simple standard for describing statistical map data and provide an open-source, reusable, accessible widget for interacting with these maps.

Doug Schepers and Derek Riemer worked together.

The code is available here: https://github.com/benetech/Accessible-Interactives-Dev/tree/master/MapInteractives

CSUN 17 Accessibility Coding Sprint for People with Disabilities (Making learning accessible) – Part 1

Last week, my colleagues at OpenStax, Phil Schatz and Ross Reedstrom, and I attended the 2nd annual pre-CSUN (but third overall) accessibility coding sprint to help make learning materials usable by people with disabilities.

Prior accessibility coding sprints

The first took place in 2013 and was jointly sponsored by my Shuttleworth Foundation fellowship and Benetech and held at the offices of SRI. You can read more about that one in these earlier posts (2013-accessibility-post-1, post-2, post-3, post-4, and post-5). The second took place last year before the CSUN 2016 Accessibility Technology Conference in sunny San Diego and was again sponsored by funds from my Shuttleworth Foundation fellowship and by Benetech. That one focused specifically on tools for creating accessible math. Read more in Benetech’s blog post under “Sprinting towards accessible math”, Murray Sargent’s follow-up post on accessible trees, and Jamie Teh’s post about creating an open-source proof-of-concept extension of the math speech rules used by the NVDA screen reader to make them sound more natural.

This year’s sprint

Four tables with 10 participants conferring and working at laptops
Participants at work

This one again took place in not-quite-as-sunny San Diego (California has been getting lots of rain) before this year’s CSUN-17 conference. The focus was on making interactive learning content accessible. And the very cool thing from my perspective is that my fellowship had nothing to do with the organization of this one: Benetech and Macmillan Learning sponsored and organized it. The attendance was the largest ever, with around 30 in-person participants and 5 or so attending remotely. We had several developers who both create accessible software and use assistive technology themselves.

As in previous sprints, we spent time initially getting to know each other and brainstorming, and then divided into multiple teams, ranging from a single person to five people, each working to prototype, explore, or make progress on a particular accessibility feature. In upcoming posts, I will highlight each team’s goals and what they demonstrated at the end of the day.

Upcoming posts (links will be added as subsequent posts appear)

  • Creating responsive (mobile-friendly) and accessible (screen-reader friendly) Infobox for maps
  • Giving non-visual feedback for learning from interacting with PhET simulations
  • Using alternatives to drag and drop for matching, ordering, and categorization tasks
  • Personalizing website interfaces for better accessibility (both sensory and cognitive)
  • Standardizing the display of math in publications
  • A Nemeth and UEB Braille symbols table
  • Using MathJax to produce Braille output from LaTeX math
