The Web Is Your MOOC, and Portfolios To The Rescue

I'm getting ready to head in to DrupalCon, where over the next few days I'll be talking education and open learning with anyone who is interested.

And as I'm heading in, I have MOOCs on the brain - not because I'm particularly a fan of MOOCs, but because of the tendency to take a great thing (in this case, information and interpersonal exchanges distributed broadly over the web) and reduce it into something that feels more manageable, but is ultimately something lesser (in this case, MOOC platforms). More on this later.

The Web Is Your MOOC

Part of the reason that I'm thinking these thoughts prior to heading into DrupalCon is that I've long held the notion that open source communities have been engaging in effective peer-supported learning, even while many for-profit companies and academic communities have been struggling to distill the process of peer-supported learning into something resembling a replicable product. From having participated in and built many types of learning communities over the years, I've found that simpler is often better - many open source communities have done amazing work with listservs and issue queues, and many more feature-rich platforms have withered because, over time, a site owner's "must-have" feature becomes the post-launch usability nightmare. There's a moral in there about user-centered design and user testing, but that's a subject for another post.

But getting back to MOOCs: the early MOOCs - the ones run by Stephen Downes, Alec Couros, Dave Cormier, George Siemens, and others (and yes, I know I'm forgetting people - please fill in the gaps in the comments) - encouraged participation from anywhere. If you had a blog with an RSS feed, you were in. Participants remained in control of their work (depending, of course, on the publishing tool they were using; open source platforms generally offer more options for data ownership and portability than their closed brethren). The MOOC was like a marauding mob of information, with the potential to sprout anywhere.

It's All About The Portfolio

In the post-lifestream, post-MOOC era, it's been rare to see much excitement about portfolios. This doesn't surprise me, because like all good ideas, portfolios have been around for a while, and thus lack the shiny newness that generates great marketing copy. However, the need for the concept hasn't diminished - any time you see a site that promises to collect the sources of your learning into a single location, so you can show your employers what you know, you should think, "portfolio." All of the sites that promise to simplify collecting and curating your digital footprint? Portfolios. A lot of the conversations around documenting and receiving credit for informal learning have their roots (and possibly solutions) in portfolios.

In the conversations we have had about portfolios over the years, we have seen three main barriers, or areas of misunderstanding:

  • Distinguishing between a working and a presentation portfolio: simply put, the working portfolio is a running collection of just about everything you do. The presentation portfolio is a selection of elements from the working portfolio, chosen for a specific purpose. Portfolios can serve different purposes for different audiences, and the relationship between the working portfolio and the presentation portfolio is key.
  • Portfolios need care and feeding over time: as mentioned before, the working portfolio is messy. Periodically, the working portfolio needs to be pruned and cleaned up. But, messy is great, and if it's not messy, that could be a sign that things aren't working as they should.
  • Ownership and control of the portfolio: because most portfolio implementations are paid for by an organization, the organization usually controls access to the portfolio and any information in it. Organizational control is also seen as an essential element to assessment. However, this flies in the face of learner control and ownership of the means by which they learn. Ultimately, this is a data portability issue with implications for the learning experience. More on this later.

Concluding Thoughts

One of the things that has been particularly underwhelming about the corporate MOOCs that have cropped up is their uncanny resemblance to an LMS with an open enrollment policy. While there are many differences between the platform-style MOOCs (https://chronicle.com/article/Providers-of-Free-MOOCs-Now/136117/) and the original versions, the lack of learner control is a key element. Like Vegas, work in a MOOC stays in a MOOC (unless, of course, a company pays money to study student data).

In the platform-style MOOCs, the open web is missing. From a learner perspective, the portfolio is MIA. For a learner, throwing the evidence of your learning into a space that someone else controls isn't a viable long term strategy.

So, if you're at DrupalCon and want to talk open learning, let's make some time and sit down together. Open source, and the methodologies that support sustainable open source development, have a lot in common with open learning. I'd love to hear what other people are doing in this space.

Upgrading to Drupal 7.20 and Fixing Broken Image Paths

On a project that is still in development, we recently did a core upgrade as part of our pre-launch preparations. The project involved a data migration of tens of thousands of nodes and users; part of the migration involved manual cleanup of image data to account for a responsive design in the new site and different use of screen real estate between the old site and the new site.

However, as is noted on the 7.20 Release Notes, there are some issues that can arise on sites using the Insert module after upgrading to Drupal 7.20. The diff on the release notes page for the 7.20 release gives a sense of how the ramifications of these issues are still evolving; given that there are a little over 42,000 sites reporting use of Insert in D7, there will likely be other people affected by issues similar to what we experienced.

What We Experienced

After we upgraded, none of the images that had been inserted into text areas via the Insert module were displaying. The images were still present in the file system, but given that they were derivatives created via imagecache, a single flush of imagecache, or a modification to an image style, would have made them disappear - poof! - leaving the pre-existing paths pointing at nothing.

As is noted in this issue, older versions of the Insert module don't work cleanly with Drupal 7.20. As several commenters posted in that issue, all relative links stopped working. This led to the creation of a sandbox project that begins to address the problem. However, cleaning up pre-existing content is heavily dependent on the site in which that content was created or edited.

Next Steps

After researching the issue, Jeff developed an update script that we tested in dev prior to rolling live on staging.

As mentioned above, the specifics of updating content will vary based on your precise site configuration, so the chances of this script working cleanly on your site with no modification are virtually nil.

However, in the interest of saving the next people to face this issue some time, the script is on GitHub; please fork away and have at it.
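To give a sense of the shape of the fix (without pretending this is the actual script), content cleanup for this class of problem usually amounts to rewriting the stale derivative paths stored in body fields. The table and column names below assume a stock Drupal 7 body field, and the "large" style name is made up; your field tables, style names, and path patterns will differ:

```sql
-- Hypothetical sketch only: rewrite stale imagecache derivative paths in
-- body fields back to the original file paths. "field_data_body" and
-- "body_value" match a stock D7 body field; the "large" style is invented.
UPDATE field_data_body
SET body_value = REPLACE(
  body_value,
  'sites/default/files/styles/large/public/',
  'sites/default/files/'
)
WHERE body_value LIKE '%sites/default/files/styles/large/public/%';
```

Run anything like this against a copy of the database first, and clear caches afterward so rendered nodes pick up the rewritten markup. An update script is a natural home for site-specific logic like this, since it can be run repeatably across environments.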

Last day to submit for User Experience Track at DrupalCon Portland 2013

For the first time ever, DrupalCon is featuring a User Experience track. We will have 13 sessions discussing user experience methods, tools, and philosophies.

Today, Friday February 15th, is the last day to submit sessions for DrupalCon. Get your sessions in!

UX Track Featured Speakers

We already have three featured speakers planned for the track.

Here is our focus for the User Experience track.

The user experience process is key to the success of the development of websites and web applications. From user research, interviews, and analytics, we learn what the user actually wants and needs, not what we assume they want. At this year's DrupalCon, we present a new User Experience track to show our community's dedication to user needs.

Main Themes

  • Explaining what user experience is and why it’s important
  • UX for mobile and tablet, Responsive UX
  • Speeding up the design process using UX tools and techniques
  • UX for multilingual sites - especially RTL
  • Content Strategy


  • UX professionals within Drupal
  • UX professionals outside of Drupal
  • Backend devs interested in UX
  • Frontend devs interested in UX

When We Talk About Open Content, This Is What We Talk About

Over the next six months, we have three scheduled events supporting communities developing open content. They are taking place on the following dates and times:


These events are being run unconference style. Over the next few weeks, we will be documenting the planning (both logistics and content-related) needed to run a successful event. Our goal is to create a replicable blueprint that supports anyone, anywhere putting on their own open content event. We will update this post with information on the San Francisco and Portland events in early 2013.

We have also talked with a few other people in different cities, and it is possible that we will add other dates to this list. If you are interested in hosting an event, please be in touch. We want to see community focused open content authoring events become a common part of the landscape.

As part of our work with open content, we are also working on freely available open source web-based software that will allow communities to create, distribute, remix, and redistribute their own open content. This software allows organizations to create their own resources, textbooks, and supporting material, which they can then share if, when, and how they choose. We have already built the tools that allow this content to be exported in ePub 3.0 and .mobi formats, so that any content created within this site can be browsed on the web, and/or exported and read on Android and iOS devices. We are also putting considerable focus on the user experience of authors, and on the design of the site across all devices that connect to the internet. As part of this work (as well as for some client work) we recently built Zoundation, a Foundation-based theme. This earlier writeup provides additional background on Foundation.

Our goal is to be as close to fully transparent in our work as possible; any software we release will be freely available under an open source license, and as an installable site built in Drupal, and we will regularly blog about our progress and our process. After the Open Content event in Philadelphia, we are presenting at Educon on this work, and looking to grow the network of potential collaborators. To be clear, when I say "collaborators" in this context, I mean both technological and educational, as both skillsets are required to make this grow in a sustainable way.

While we have written about open content in the past, we find it both useful and necessary to revisit our definitions and make sure that we're not working on any assumptions that are out of date, or otherwise crazy. In general terms, when we talk about open content, this is part of the foundation holding up the conversation.

Granularity

When creating open content, it needs to be easy to break a collection of resources up into its component parts. As an example of what we mean, a unit on the French Revolution can stand on its own, but someone coming along looking to adapt the material should be able to extract the information directly relevant to the Tennis Court Oath, and only use that.

Some formats (PDF, Flash, SCORM, etc.), regardless of how the content is licensed, require work to disassemble into their component parts before the material can be reused. At times, organizations that market their work as open put technological barriers between users and content as a means to complicate the process of reuse. Keeping the concept of granularity in mind when designing systems for open content, and when authoring open content, can help ensure that no unnecessary barriers to use and reuse are placed between people and information.

Licensing

Licensing is a topic worthy of many posts; over the years, many of these posts have been written by people far more knowledgeable on the subject than me.

As a matter of personal preference, I strongly prefer the Attribution Non-Commercial Share-Alike license. This license allows for reuse and modification, by anyone, in any work, provided they are not using it commercially, provided they attribute the original work, and provided they share it under a license that supports non-commercial reuse. Part of the reason I like this license is that if someone wants to reuse my work commercially, all they need to do is ask. The non-commercial clause is a lot better than the status quo, and the need to ask permission is the same as for material covered under traditional copyright.

However, when remixing content from various sources, the combination of the Non-Commercial and Share-Alike clauses can prevent reuse of content from different sources. As an example, suppose a person has content from two sources: one licensed under the Attribution Non-Commercial Share-Alike license, and the other under an Attribution Share-Alike license.

The Attribution part is easy, but things start to get dicey with the Share-Alike portion of the license. It's very unclear what license the derivative work can or should be released under. Within the FunnyMonkey office, Jeff Graham has been telling me this for years, and due to my innate stubbornness I have only come to realize the accuracy of what he has been telling me in the last few months. This post on data migration also demonstrates some of the issues at play here; while the focus of the writeup is data migration, the section on License Chaining is directly relevant to open content.

And, until this gray area gets cleaned up, we are advocating for use of the Attribution Share-Alike license. The thinking behind this is that the Share-Alike component of the license will prevent anyone from appropriating open content and interfering with the free reuse of derivative works. It's a good thing the textbook companies don't have many lawyers, and that they aren't litigious about inane details.

The short version: licensing is not simple, but the Attribution Share-Alike license simplifies more problems than it creates.

Sharing, and the various layers of sharing

So far in this post, we have spent some time focusing on the ideal setup, rather than the practicalities.

However, all open content becomes open through a simple act of sharing. There are countless reasons people give to not share their material: It's not good/coherent/clear/polished enough; I only wrote this for me; I need to be able to collect usage data, etc, etc, etc. However, let's set these arguments aside, and ask a simple question:

What happens if a piece of work gets shared out in any format, be it a PDF, a word-processing document, a Google Doc, something linked within a Tumblr or Posterous - really, just some low-level, relatively straightforward mechanism to share?

First, no one might find it. But, that's no different than the status quo. If work isn't shared, no one will find it there either.

Second, no one might use it. See answer above.

But the reality is, if it's on the internet, someone, somewhere will stumble across it. And, out of the people stumbling across it, someone will find it useful.

Reuse cannot occur without the initial act of sharing starting things off.

And yes, I realize that earlier I was talking about granularity, the need for formats that support reuse, and so on. All of that still holds, but if we look at creating open content as an ongoing process of refinement, redistribution, and reuse, information in less usable formats can be curated and converted into more usable formats. The process of bringing good information into reusable formats is one of the key goals of the Open Content Barn Raisings that we are holding.

And it all starts with sharing what you have created under a license that supports reuse.

Web to print

I have seen many open content initiatives get mired in the perceived need to support a web-to-print (not "ctrl-P" print, but "professional textbook" print) workflow. This is a business or organizational need, not a learning need. It's 2013; between a responsive design that works on the web across devices, .mobi and ePub export, and the ability to (ctrl-P) print sections, we have the majority of learner-centered use cases covered. If an organization needs to support print on demand, they can develop a workflow that makes sense for their organization - this is a problem that has been solved in many ways, but it is not a foundational concern for learners. I haven't encountered many learners reading content on their phone saying, "I really wish I could convert this free ebook into a textbook I could pay sixty dollars for."

Open Content as Teacher Professional Development

If a group of teachers is working together to develop resources to both use in their classes and share for reuse internationally, that sounds like a great use of professional development hours. One of the benefits of having content reused over time, across geographic areas, is that teachers working within the community will have the benefit of feedback on their work from a broader range of professionals than is possible within a single school, district, college, or university.

We have talked about this before, but a broader use and adoption of open content has the potential to shift how we think about Teacher Professional Development. Additionally, if we look at a body of open content that has been created by a group of educators over time, that body of work begins to look suspiciously like a professional work portfolio.

Closing Notes

Open Content is about many things, but a facet that surfaces repeatedly over time has to do with choices. Using open content is a clear way of demonstrating to teachers and learners that we have options. Over the next few weeks, in the lead up to the first event in January, we'll start documenting the planning steps needed to hold an event, and the steps needed to create good open content.

Image Credit: "Climbing" taken by Alex Indigo, published under an Attribution license.

Vagrant and Puppet for Drupal development


We here at FunnyMonkey have been using virtualized development environments for several years. Fortunately for us, this has all been with VirtualBox, so making the transition to Vagrant made perfect sense. Before this transition, we used a set of shell scripts to configure our environments identically. While that process was certainly adequate, and a major improvement on XAMPP/MAMP/etc., it still left a bit to be desired. Vagrant combined with Puppet addresses the shortcomings of our previous approach.

What is vagrant?

Vagrant bills itself as "Virtualized development made easy," and this is a fairly accurate self-assessment. From our experience, Vagrant is an improvement on a straight VirtualBox development environment. So far, in our limited use, Vagrant solves several problems that previously required extensive documentation or manual workarounds for each deployment. With Vagrant, we solve several issues:

  1. What is our starting point? Vagrant allows pre-defined boxes to be your configurable starting point. These can have all sorts of options and configuration pre-baked.
  2. How do we handle host-specific issues? Specifically, how do we handle the myriad OS and hardware issues?
    • How many CPUs or cores should be associated with this virtual environment?
    • How much RAM should this virtual machine use?
    • How do I get an active routable internet address on this host?
    • Are there any other specific workarounds that need to be accounted for?

    These issues, as well as quite a few others, are addressed via the Vagrantfile; with a well-defined hierarchy, it is easy to provide an inheritance structure, with local defaults trumping any values that may have been set in the originating box definition, and project-specific settings trumping all other values.

  3. How do we ensure the box is configured as our project needs it? Vagrant kicks off Chef, Puppet, or even custom shell scripts to handle project-specific configuration. The rest of this documentation assumes the use of Puppet.
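As a concrete illustration of those host- and project-specific knobs, here is a minimal Vagrantfile sketch. The box name, address, and resource values are hypothetical, and the syntax follows the Vagrant 1.0-era configuration format:

```ruby
# Hypothetical sketch: host- and project-specific settings live here and
# override anything inherited from the base box definition.
Vagrant::Config.run do |config|
  config.vm.box = "precise64"                  # pre-defined starting point
  config.vm.network :hostonly, "33.33.33.10"   # address reachable from the host

  # Host-specific resource tuning (RAM and CPU count).
  config.vm.customize ["modifyvm", :id, "--memory", 1024, "--cpus", 2]

  # Hand project-specific configuration off to Puppet.
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "nodes.pp"
  end
end
```

Because the Vagrantfile is checked in alongside the project, every team member gets the same answers to these questions for free.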

What is puppet?

Puppet is a configuration management tool. That means that Puppet ensures the system it runs on is configured in exactly the manner the Puppet configuration defines. In large-scale production or cloud computing environments this is absolutely critical, as it ensures that each machine is configured identically. In development environments, this is valuable for several reasons:

  1. In deployments requiring special configuration, the development environment can be configured precisely the same as the production environment.
  2. Development environments can be unique per client project. Rather than overloading a XAMPP/MAMP installation with a sub-directory or virtual host per project, each development environment can be standalone.
  3. The time required to deploy a development environment is a small fixed cost and is quite predictable.
  4. Development environments can have exactly the packages and configuration they need. Does your current project require the latest PHP, or maybe Compass, or Apache Solr? Using a virtualized environment allows you to pick the best-in-class OS and package management for your particular project, and allows that configuration to be used by other members of your team just by using the same development platform.
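To make that concrete, a per-project Puppet manifest might look something like the sketch below. The package names assume a Debian/Ubuntu guest and are illustrative, not taken from our actual configuration:

```puppet
# Hypothetical nodes.pp sketch: ensure the LAMP pieces a Drupal project
# needs are installed and running.
package { ['apache2', 'php5', 'php5-mysql', 'php5-gd', 'mysql-server']:
  ensure => installed,
}

service { 'apache2':
  ensure  => running,
  enable  => true,
  require => Package['apache2'],
}

service { 'mysql':
  ensure  => running,
  enable  => true,
  require => Package['mysql-server'],
}
```

Because Puppet is declarative, re-running it converges the box back to this state - which is exactly the property that makes ad-hoc manual changes inside the box a bad idea.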

Why should I care?


Using defined development environments that mimic or are identical to production allows every team member to have meaningful conversations about real issues, rather than being mired in OS-specific or configuration-specific details. That is, it lets development team members focus on the real problems rather than:

  • How much memory is PHP configured to use?
  • What version of X are you using?

Most importantly, it lets us assume that everyone is starting at point A. This means that, for a specific project, each team member has the same directions to get from point A to point B. We no longer need a completely unscalable set of instructions on how to get each team member to point A in the first place. Instead, we can focus on the needs of our project.

How do I get started?

You can follow the Vagrant getting started instructions here. We have put the scripts we have developed on GitHub; feel free to grab them and fork them. If you follow these steps through "Install Vagrant," you can then clone our git repository from GitHub, which contains some bash scripts and a Vagrant-with-Puppet configuration designed for Drupal development.

  1. Install vagrant
  2. git clone git@github.com:FunnyMonkey/fm-vagrant.git
  3. cd fm-vagrant
  4. run ./build.sh. This creates the Vagrantfile and nodes.pp.
  5. vagrant up
  6. Start using your virtual environment.

There is additional documentation included in the README on GitHub.


Why not just pre-define a box for Drupal development and remove puppet?

While you could certainly do this, we feel that the best approach is to configure on the fly. This provides several benefits:

  1. You can have a consistent basic configuration that every project uses. This affords a certain amount of familiarity in ensuring that each project has a similar starting point.
  2. Each project can have exactly the components it needs rather than every component that every project could need.
  3. A pre-baked box or configuration is decent in a static environment. Over time, security exploits will be discovered and corresponding fixes will be implemented. Occasionally, these security fixes will create incompatibilities with a project's configured features. As such, a project needs to grow and evolve in a similar manner to how its parent OS and configuration grow and adjust.

Why not grab Drupal as part of the install?

While you can certainly do this, Puppet will reset any changes you make if you do not make them via the Puppet configuration.

What does this workflow look like?

Once you have the basics set up you just clone the vagrant repository for each project, and then 'vagrant up' a new environment.

I prefer to connect over ssh to the virtual machine, as that way everything is self-contained in the virtual box. You could also set up shared folders that map to your host OS, if that workflow is better for you or your OS and editor combination does not support editing files over ssh. From here, your regular workflow should take effect.
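If you go the shared-folder route, the mapping is a single line in the Vagrantfile (the paths here are hypothetical, using Vagrant 1.0-era syntax):

```ruby
# Hypothetical sketch: expose /var/www in the guest at ./docroot on the
# host, so a host-side editor can work on the same files the VM serves.
Vagrant::Config.run do |config|
  config.vm.share_folder "docroot", "/var/www", "./docroot"
end
```

The tradeoff is that shared folders can be noticeably slower for large codebases, which is part of why editing over ssh inside the box appeals to me.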

As always if you notice any errors, inaccuracies, or ways in which this approach could be improved please chime in.

Getting Better Faster - Thoughts From BADCamp

While down in Berkeley for BADCamp, I had the chance to go out to lunch with Jeff Graham, Chach Sikes, and Catrina Roallos. We got to talking about ways to help people working in technology (or wanting to expand what they do with technology) learn the requisite skills needed to continue to grow.

We talked about ways of finding community - either within open source projects or in hackfests - and about how the connections there can be key. And we also talked about how learning leading edge development best practice - for both front end and back end developers - really isn't widely available within schools.

One thing Chach brought up in the conversation stuck with me. When she brought it up, she made sure to point out that it was advice she had been given by several people over time. The reason it stuck with me is that it's so simple, yet it's the kind of thing that can help you no matter your experience level (and really, it can help both within tech and in other disciplines).

The advice Chach gave is this: if you work on solving a problem for more than half an hour and you are still stuck, stop and ask a question.

This is some seriously awesome advice. First, it ensures that a person is making a concerted effort to solve the problem on their own before reaching out. This helps avoid obvious and/or lazy questions. Doing some initial legwork also leads to informed questions; an informed, focused question is a lot easier to answer than a general fishing expedition.

This approach also assumes that you have a community, or at least a place where you can ask the question. If you don't know where to ask questions about your specific project, places like StackExchange or even Quora can be good places to start. But, the thing that's awesome about asking a question is that it implicitly acknowledges that none of us ever needs to work in a vacuum, and that it's okay to not know the answer to everything. And, in situations where you don't know the answer, seeking out smart people is a great idea.

Additionally, setting a time limit helps ensure that you don't get lost down a rabbit hole. If we work on something without success for too long, it's natural and normal to get discouraged, frustrated, or angry, and these states of mind rarely lead to our best work. We all have different thresholds, but setting a time limit helps minimize the chance that we'll lose half a day trying to chase down a solution.

As we continued talking over lunch, it was pretty obvious that experienced developers have work habits that are only tangentially related to technology, but that these habits are a key part to their continued success. Individually, none of these habits are a magic bullet, but taken collectively, each strategy helps create incremental improvement.

And that's how we get better.

Over the next few weeks and months, we are planning on doing some work around helping people learn both the technical skills and the less tangible habits and strategies that allow people to have more options in their lives. Things are in the early stages yet, and we'll be updating here with details as things progress, but as we get started on this path, I wanted to share this out. There are simple things we do - habits that work for us and help us work more effectively - that can be shared and taught, and that will help others. One of the things I love about the advice Chach shared is that it can be put to use pretty much immediately, and it can work anywhere - for Drupal developers, for designers, for sysadmins, or outside technology.

What are some tricks, habits, or strategies that have worked for you?

Building Drupal Style Tiles using Foundation and SCSS

This past weekend, the Pacific Northwest Drupal Summit in Seattle, WA, was buzzing with talk of Foundation, an amazing responsive web framework built with HTML5 and SCSS.

I led a session on Building Drupal Style Tiles using Foundation and SCSS and provided sample code for attendees to download and install to create their own style tiles.

Original video is available on YouTube.

What are Style Tiles?

Style Tiles are not a new concept in design. Graphic and interior designers often use mood boards or similar deliverables to develop aesthetics before beginning design. Everyone can now style-tile-ize whatever they want - just sign up for Pinterest and start pinning, and you too can create a palette for your new living room or a palette of cute baby animals. (Why I really love Pinterest.)

Style Tiles for the web were made popular by Samantha Warren of Twitter. She has developed the site styletil.es which discusses the value that style tiles have in the design process and provides a Photoshop template download so you can create your own tiles. She has written a great article on A List Apart discussing the style tile process.

I have used Photoshop style tiles in multiple projects and have found them to be an invaluable step in the design process. By creating style tiles, a designer is able to flesh out multiple palettes, typography choices, and design patterns without taking all the time required to design an entire homepage. I especially love creating the different patterns and only giving the client a "sneak peek" of them. Once the client chooses the option they want to move forward with, the style tile process makes it easy to duplicate the Photoshop layer patterns and begin the homepage design.

One thing I have removed from my style tiles is the descriptive adjectives. I have found that they are distracting to the clients and that often, the clients focus more on the words than the design.

Creating Style Tiles Using HTML5 and SCSS

This summer while sitting on an airplane and designing a style tile in Photoshop, I thought to myself:

"Wouldn't it be cool if I had a style tile built in HTML that I can just plug colors and other stylistic elements into SCSS variables and create an interactive style tile?"


"Wouldn't it be cool if I could build style tiles that use Drupal HTML code that I can style and then reuse code and mixins in my site's theme? "


"Wouldn't it be cool if those style tiles were responsive so that the client could get a feel for how elements change in different devices?"

At the same time, I had just discovered Zurb's Foundation web framework and was totally swooning over it. I still am.

I immediately closed Photoshop, downloaded Foundation and started coding.

What is Foundation?

So what is Foundation? Foundation is, and I quote Zurb on this (and totally agree), "The most advanced responsive front-end framework in the world."

Foundation is written in HTML5 and SCSS to provide a responsive open-source code set that developers can download and use for prototyping or site building.

Foundation comes with a huge list of features. Some features are:

  • 12-column typographic grid designed to work on almost any sized device screen
  • Typography based on a golden ratio modular scale
  • Multiple button styles, patterns and sizes plus hover and active states
  • Custom form styles with validation states and checkbox/radio styles
  • Multiple navigation styles with drop down support and configurable for responsiveness
  • Responsive image or content slider
  • Modal dialogs

Whoa. So great!
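To give a feel for how the grid works, here is a minimal, illustrative sketch of Foundation-style grid markup: one row split into two equal columns that stack on small screens. The class names follow Foundation 3-era conventions; check the documentation for the version you download, since grid class names have changed between releases.

```html
<!-- Illustrative only: a Foundation-style responsive row.
     Class names follow Foundation 3-era conventions and may
     differ in other versions. -->
<div class="row">
  <div class="six columns">
    <h4>Typography</h4>
    <p>Headings, body copy and link styles go here.</p>
  </div>
  <div class="six columns">
    <h4>Swatches</h4>
    <p>Color swatches for the proposed palette go here.</p>
  </div>
</div>
```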

Building the Style Tiles in Foundation

Download Style Tile Code

I use Foundation to create the two-column responsive layout, and I have created two directories: base and example. Each style tile shows and defines the styles for the following elements: typography, swatches, link styles, button styles, tabbed navigation, tags, pager and status/warning/error messages. I have included app.scss and settings.scss, and modified specific settings in settings.scss to define colors and fonts.

I created tile.scss to provide the layout and the swatch mixin for the style tile. I have also included an empty overrides.scss for any new styles.
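As a rough sketch of the pattern (the variable and mixin names here are illustrative, not the actual contents of settings.scss or tile.scss), plugging a palette into SCSS variables and rendering swatches might look like this:

```scss
// Illustrative overrides along the lines of settings.scss --
// variable names are examples, not Foundation's actual settings.
$primary-color: #2ba6cb;
$secondary-color: #e9e9e9;
$body-font-family: "Georgia", serif;

// A swatch mixin in the spirit of tile.scss: each swatch is a
// simple block filled with one color from the palette.
@mixin swatch($color) {
  background-color: $color;
  width: 100px;
  height: 100px;
  float: left;
  margin-right: 10px;
}

.swatch-primary   { @include swatch($primary-color); }
.swatch-secondary { @include swatch($secondary-color); }
```

Because the colors and fonts live in variables, producing a second style tile for the client to compare is a matter of swapping a handful of values rather than redrawing Photoshop layers.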

Base Style Tile

The base style tile shows Foundation in action using its default colors and styles.

In the example style tile, I have removed the Foundation classes on HTML elements and I added styles to the overrides.scss. These styles and mixins are intended to be the code that a front-end developer can use in their Drupal theme.

How can these style tiles be used?

During the presentation at the Pacific Northwest Drupal Summit, the group and I discussed different potential uses of this code. Most straightforwardly, these style tiles can replace Photoshop-based style tiles. While the designer may still do a bit of work in Photoshop, the style tile itself will be presented in HTML.

We thought that this could be a tool to help facilitate communication between designers and front-end developers to help clarify how the design is carried out into interaction. This tool could also serve as a client deliverable to show how fonts will render, allow the designer to showcase different webfonts and demonstrate how specific elements will change with user interaction.

Potentially, as a Drupal-specific project, we can access the Styleguide module through hook_styleguide() and hook_styleguide_alter() to modify the Style guide output so that it contains style tile elements such as color swatches.
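As a hedged sketch of what that might look like (the item keys follow the pattern documented in the Styleguide module's styleguide.api.php; verify against the version of the module you install):

```php
<?php
/**
 * Implements hook_styleguide().
 *
 * Illustrative only: adds a style tile color swatch to the
 * Styleguide output. The exact item keys ('title', 'description',
 * 'content') should be checked against styleguide.api.php in the
 * release you are using.
 */
function mytheme_styleguide() {
  $items['swatch_primary'] = array(
    'title' => t('Primary color swatch'),
    'description' => t('The primary brand color from the style tile.'),
    'content' => '<div class="swatch swatch-primary"></div>',
  );
  return $items;
}
```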

I would love to hear any other ideas people have. Please download it, fork it, and file bugs or questions in the GitHub issue queue.

Using Drupal In Education Unconference in Portland - Save the Date

At DrupalCon Denver, we had an extremely successful Education Unconference and we're planning to do it again in Portland. The planning for this event is in the very early stages; we have nailed down a date - Monday, May 20, 2013 - and are in the process of securing a venue. The event will take place in Portland, Oregon, and as soon as we finalize the location, we will update the announcement page with details. One thing to note at the outset, though, is that we want to make this event an opportunity for people familiar with Drupal and people familiar with education to come together and discuss common issues. If you work in either one of these areas, please come - we all have a lot to learn from one another, and with one another.

Based on discussions in Denver (and within the Drupal community over the years), several general areas of interest continue to emerge. Some likely topics could include:

  • Large scale deployments, and how to balance the needs of individuals/units within an organization against maintaining a reasonably standardized platform;
  • Responsive web design;
  • Strategies for mobile web applications;
  • Ensuring accessibility within web sites;
  • Supporting communities of practice;
  • Drupal as a traditional LMS;
  • Using Drupal to support informal and inquiry-driven learning.

Admission is free of charge; just register here! The event will follow an unconference format, so if there is something you want to talk about, propose a topic, find some like-minded individuals, and let the conversation start.

As with the Denver unconference, we have similar high-level goals:

  • Facilitate connections between people working in the education space who would not have the opportunity to interact within the larger venue of DrupalCon;
  • Generate conversations among people working in different areas of education; in this way, K12 folks could talk to Higher Ed, people working in Libraries could talk to other stakeholders, etc - while there are many differences in what we do, there are also similarities, and it would be good to see some opportunities for collaboration materialize;
  • Set the stage for more focused BoFs at DrupalCon - rather than spend the first BoF of DrupalCon figuring out who wants to talk about what, we could lay the groundwork at the unconference for ongoing conversations throughout DrupalCon;
  • Discuss development methodologies and best practices that are making our lives easier, and more productive;
  • Demonstrate and discuss example sites, and talk about how we built great sites that help people learn more effectively;
  • Your idea here: https://2013.drupalpdx.org/forum/education-unconference-planning

So, mark May 20, 2013, on your calendars. The second Using Drupal in Education Unconference is on! We look forward to seeing you there.

Drupal Presentation Notes, and the Role of Open Source in Mainstream Ed Tech

I spent the last few days in San Diego, California at the 2012 ISTE conference, running a session for people to learn about Drupal. I have added the notes from my presentation to the Tutorials section of the handbook.

During the conference, I wandered onto the vendor floor to touch base with some friends who were working in different booths. Once I recovered from the shock of a bright orange gimp-man shilling hardware, I was struck by the overwhelming lack of open source representation: aside from a single Moodle vendor, I didn't see any.

Orange Man

This paucity is all the more striking because of the amazing, innovative work I see happening within education using open source tools. On a very regular basis, I see schools using a range of open source tools to support curriculum mapping, online classes, collaborative projects, community outreach, professional development, portfolios - and in these cases, schools aren't paying exorbitant fees to vendors, or losing control of the work performed by teachers and students, or ceding flexibility for convenience. They are just working - intelligently, intentionally, making mindful progress towards articulated goals, and using open source tools to support and extend that work.

But this narrative seemed largely missing at ISTE - possibly, this is due to the company I keep, as I tend to gravitate toward people who are doing the day-to-day work in the classroom. But from visiting the vendor floor, the story of educational technology - at least this year - seemed to be one of convenience and speed over vision and ongoing effort. Technology - at least the vision of it articulated and sold on the vendor floor - is the panacea: the thing that makes the difficult easy, and makes all of us smarter.

Open source has a role to play in articulating a different narrative about education - a narrative that values individual effort within a community that is loosely united toward a common goal. The development model within open source communities (and this model has been in place and thriving well before the days of Web 2.0-ifying everything) has always supported (in general terms) peer to peer learning and support. The absence of open source companies in the larger mainstream educational technology world is a loss for both open source and educational technology.

For an additional perspective on the state of data control and access to data, Audrey Watters has a piece over at Hack Education on how vendors responded to questions about data portability and APIs. She was aided on the quest by Kin Lane, who knows a thing or three about APIs.

Julio and Organic Groups

Last week, we announced that we had put together documentation and a demo site for Julio, our distribution for schools, school districts, and academic departments for K-12 and higher education.

As we have been building school web sites over the years, a common feature request we received was: "I want people to be able to put content in one place on the web site, but only that place." Translated, this meant that they wanted to decentralize control of the web site, and allow people freedom within the areas where they are responsible.


So, for example, the football coach can put anything she wants into the football team section of the web site, but she cannot put anything anywhere else.

As Drupal has evolved over the years (for us, starting in 4.5) we solved this in many different ways. However, starting midway through D5, and with some consistency in D6, we began using Organic Groups as the tool to address this functional requirement. In D5 and D6, this involved overriding many OG features on a pretty regular basis, as the default settings provided options within the user interface (UI) that ranged from distracting to scary, depending on the end user and their privileges within the site.

For Drupal 7, Organic Groups was rewritten from the ground up. To understate things, this was an enormous undertaking. The lead developer on this is Amitai Burstein from Gizra, and on this rewrite, he nailed it. The rewrite makes smart use of entities in D7, and, last November, Amitai made the key decision to simplify Organic Groups by deprecating the OG Group entity. This change has many benefits, and one of the more immediately tangible benefits is that the views integration is now much cleaner, which simplifies the work of site builders. We had been working with OG in D7 for about a year when Amitai announced his proposed changes, and the direction he was talking about dovetailed pretty cleanly with our experiences up to that point.

It's also worth noting that Amitai's approach to maintaining OG is pretty incredible; the work required for the initial rewrite of Organic Groups for Drupal 7 was considerable, and the rewrite for the 2.x branch was no small feat either. It would have been easy to stick with the original approach in the 7.x-1.x branch, but Amitai made the call to simplify Organic Groups to make it easier to use, and that's a great choice for all of us in the community.

In addition to sponsoring some of the original development, we have tried to be pretty active in the OG issue queue, helping with patches and testing as part of our ongoing work. We have been working with the 2.x branch of OG since late January, 2012, and it has been incredibly useful. In D5 and D6, delivering a site built on top of OG required putting a significant amount of work into UI tweaks. The core functionality of OG was (and remains) incredibly flexible, but the rewrite simplifies the process of delivering a site that has powerful and extensible community sections while being easy to use. From a site maintainer/builder perspective, one of the main improvements in the D7 version of OG is the ability to provide rights on a per-group basis.
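As an illustrative sketch of the per-group rights idea (assuming the OG 7.x-2.x API; function names and signatures should be verified against the og.module release you install), granting a user a role inside one specific group - so her rights apply there and nowhere else - might look like this:

```php
<?php
// Illustrative only: grant a user an OG role inside one specific
// group. Assumes the OG 7.x-2.x API; verify og_roles() and
// og_role_grant() signatures against the module you install.
$group_type = 'node';  // The entity type of the group.
$gid = 123;            // Hypothetical node ID of the football team group.
$uid = 45;             // Hypothetical user ID of the coach.

// Look up the roles defined for this group, then grant one of them.
// 'team_section' and 'content editor' are hypothetical names.
$roles = og_roles($group_type, 'team_section', $gid);
$rid = array_search('content editor', $roles);

if ($rid !== FALSE) {
  og_role_grant($group_type, $gid, $uid, $rid);
}
```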

Ongoing development and improvements are taking place in the 2.x branch; if you have been waiting to test this out, what are you waiting for? Grab a copy and start testing now!

For more information and background on Organic Groups, check out Amitai going over some of the details from his DrupalCon Denver session: OG7 - Pride and Prejudice.

Image Credit: "grilled sugar snap peas" taken by woodleywonderworks, published under an Attribution Non-Commercial No Derivatives license.
