Tag Archives: sprint

Sprinting with SPEN

This week I spent a day in Atlanta ‘sprinting’ to create practice problems with the Signal Processing Education Network (SPEN). SPEN is an NSF-funded project bringing together five universities to create a broad teaching network and body of open educational resources (OER) for teaching and learning signal processing at the undergraduate level.

Sprint goal: creating practice problems. The SPEN network is enhancing existing open materials and creating new open textbook teaching materials integrated with interactive simulations and a rich body of homework, test, and practice problems. This sprint brought together 33 people (faculty and graduate students) to create practice problems and upload them to two question banks; each question ended up in both. The first, Quadbase, is an open question bank that anyone can add questions to and anyone can take questions from. It can serve interactive tutoring systems that pull targeted practice questions to help students learn and retain knowledge, teachers building homework sets, and learners looking for practice problems. The second is Georgia Tech’s Intelligent Tutoring System, which is both a question bank and a tutoring system that Georgia Tech faculty and students use in their undergraduate signal processing classes.

This sprint was the first time SPEN got together for a day of group content creation. The sprint ran incredibly smoothly and the result was 160+ new questions for the databases. 

The procedure:

  • Prep: Participants bring existing materials: Before the sprint, organizers requested that participants bring any homework and problem sets that they already use.
  • Instructions: Sprint organizers created a set of instructions with question topic prompts and sample questions for several different types of questions (multiple choice, matching, free response).
  • Groups: The organizers put people in groups of 3 – 5 at round tables and gave each group a set of topics and a shared Google Doc to work in together. The Google Doc contained sample questions that could be copied and pasted to create new ones. Math was entered in TeX, a dense notation that can be converted into attractively typeset equations, which most of the participants already knew and used fluently.
  • Creation: Groups worked for 2 hours creating questions.
  • Signaling completion: When a question was finished, someone would highlight it in green.
  • Uploading to the question banks: A separate group of four people spent the whole day watching for the green highlighting and then copying the questions into the two different question banks. It was labor-intensive, but meant that every question could end up in both banks despite each bank having slightly different formats. It also meant that participants didn’t have to learn new tools.
  • Review: Groups then reviewed the questions for 1 hour. Two groups merged to do the review (making groups of 6 to 10), editing the questions to improve them.
  • Repeat: The whole process was repeated in the afternoon.
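For example, the convolution sum below is the kind of expression participants could type directly into the shared Google Doc as TeX source (the particular formula is only an illustration, not one of the sprint’s questions):

```latex
% Dense TeX source as typed into the shared doc; a converter
% (e.g., LaTeX or MathJax) renders it as a typeset equation.
y[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k]
```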

My observations

  • Pre-workshop prep not done: Very few participants brought existing materials, so it appears important that a sprint’s success not depend on advance preparation.
  • Paper collaboration: Groups collaborated on creating questions by drawing on paper and talking about them. They did not collaborate directly in the Google Docs. Individuals would write up entire questions in the doc and then highlight them green to signal the uploaders.
  • Review by reading and solving: To review the questions, recall that two groups would merge or swap. So groups 1 and 2 would review together. Group 2 would review group 1’s questions and perhaps ask some questions about the intention for each question. Then group 2 would actually solve all of group 1’s questions. And of course group 1 would be doing the same with group 2’s questions.
  • Review resulted in considerable change: The review process resulted in substantial revision of the questions. Revisions occurred for clarity, correctness (after solving), and also for transcription errors as the questions got put into the question banks.
  • Pedagogy benefits: In addition to the creation of a large body of questions that are now globally available and shareable, the process itself was valuable as participants discussed and reviewed and improved the questions. Thus the time that participants donated to the sharing process was also valuable pedagogically.


Sprinting with Connexions

First progress implementing a bit of a publishing API for OER, based on SWORD and AtomPub.

Last week at the Plone East Symposium in State College, PA, Plone developers across the US gathered together to learn and share about using Plone in educational settings. At the end of the week, Friday and Saturday, about half the attendees stayed to “sprint” (original plan, full report). At sprints, people develop working code together on various projects in order to share expertise, learn from each other, and expand networks of technical mentors. Knowing that Connexions already had a partial implementation of SWORD for creating modules from Word documents, and that SWORD is likely to be the backbone for the OER Publishing API (your comments, approval, concerns welcome), I brought a sprint topic to the symposium — “OER Publishing API: Extend Connexions SWORD implementation”. Connexions provided an expert, Phil Schatz, to lead the sprint and we created a milestone to track the work. Carl Scheffler joined Phil and me working on SWORD, and we got advice and help from Michael Mulich (Penn State), Ross Reedstrom and Ed Woodward at Connexions.

What the Connexions/Rhaptos SWORD service does now:

The current Connexions SWORD service is tailored to a very specific client, the Open Journal System (OJS). It takes a zip of a Word file and a METS file with some metadata and a bibliographic entry that is used to insert a reference to the original publication of the article in a journal. The service then creates a new, unpublished module with the content of the Word file, and puts it in a work area chosen by the client.
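Concretely, this kind of deposit is a single AtomPub POST of the zip package to a collection URL, with the packaging format declared in a header. Here is a minimal sketch in Python; the file names, endpoint URL, and packaging identifier are hypothetical, and the real Connexions service’s values may differ:

```python
import io
import zipfile
import urllib.request

# Build the deposit package in memory: a Word document plus a METS
# metadata file, zipped together (file names here are illustrative).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as pkg:
    pkg.writestr("article.doc", b"...word document bytes...")
    pkg.writestr("mets.xml",
                 "<mets:mets xmlns:mets='http://www.loc.gov/METS/'/>")

# A SWORD 1.3 deposit is an AtomPub POST of the package to a
# collection URL, with the packaging declared via the X-Packaging
# header. The URL and packaging identifier below are assumptions,
# not the actual Connexions values.
request = urllib.request.Request(
    "http://example.org/sword/workspace/col-123",
    data=buf.getvalue(),
    headers={
        "Content-Type": "application/zip",
        "X-Packaging": "http://purl.org/net/sword-types/METSDSpaceSIP",
        "Content-Disposition": "filename=deposit.zip",
    },
    method="POST",
)
# urllib.request.urlopen(request) would perform the deposit; per the
# SWORD profile, the service responds with an Atom entry describing
# the newly created resource (here, the unpublished module).
```

The request is only constructed, not sent, since the endpoint is illustrative; swapping in a real collection URL and credentials is all an actual client would add.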

What we got done at the sprint:

  1. Reorganized the existing SWORD code to make it cleaner.
  2. Extended the service to accept either a Word file or the Connexions native format.
  3. Changed the service to pull the title and abstract from standard locations.
  4. Got the SWORD client toolkit, EasyDeposit, to work with the new code (and partially with the existing code).