What I’ve been up to

Happy New Year! It’s been a while since I’ve posted an update here. Time to get cracking on those work projects for the new year, both ongoing and new ones. Here’s a brief outline of what I’ve got going on:

Quickbase

The improvements we’ve seen in staff efficiency over the past year have been tremendous, including:

  • better tracking of IT requests
  • automation of much of the course creation process that was previously handled manually
  • automation of much of the quality assurance process
  • better tracking and automation of permissions, IP agreements, and copyright sources

Over the next few months, we also plan to integrate budget information, time tracking, and tasks. It’s a lot of work, but it’s always rewarding to get good feedback and see real results in the form of better use of people’s time. No more redundancies and bloated amounts of time spent on “administrivia”!

Evolution usability testing

This one’s coming soon, and it’s something I’m pretty excited about. I’ll be working with Mike Brooks on a plan for testing our new and improved Evolution student and faculty interfaces. The new interface is definitely rolling out in most World Campus courses over the next year, and we have a good opportunity now to do some usability testing with our students and faculty and to develop a longer-term feedback plan for after the rollout. We’ll be trying out a newish usability testing application, Silverback: http://silverbackapp.com/

Blogs

I don’t really have a specific plan or application in mind here yet, but I’d love to see Blogs@PSU play more of a role in World Campus courses and programs. I’m tinkering with creating a “geoblog” similar to the one Chris Stubbs built over at ETS for students studying abroad (I’ve consulted with him about this):
http://geoblog.psu.edu/
I think something like this would be a great way for World Campus students to connect and share. The visual of the map adds a sense of place and connection that would be lost if students simply described in text where they live or work. Stay tuned.

Mobile announcements

Over the summer I took a one-day workshop on building mobile Web and native iPhone apps. One day is only enough time to dabble, but I was able to put together a very crude prototype of a mobile announcement app that we could use in our World Campus courses (a rough sketch of the idea is below). I hope to refine the prototype a little further and share it with my colleagues.
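To give a sense of what an app like that might involve (this is not the actual prototype, just a minimal sketch), the core could be as simple as a jQuery script that pulls a JSON feed of announcements and renders it as a list. The feed URL and field names here are hypothetical placeholders.

```
// Minimal sketch of a mobile Web announcements list -- not the actual prototype.
// Assumes jQuery is loaded and the page contains <ul id="announcements"></ul>.
// The feed URL and its fields (title, date, body) are hypothetical placeholders.
$(function () {
  $.getJSON('/courses/demo101/announcements.json', function (items) {
    $.each(items, function (i, item) {
      $('<li></li>')
        .append($('<strong></strong>').text(item.title + ' (' + item.date + ')'))
        .append($('<p></p>').text(item.body))
        .appendTo('#announcements');
    });
  });
});
```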

The Wave fizzled – or did it?

A few months back I had a “light bulb moment” and wrote about the potential for workplace communication and collaboration that I was starting to see in Google Wave. Soon after writing that piece, I saw a rapid decline in Wave’s usage among my fellow early adopters. It was easy to see why. Google Wave was available by invitation only, which obviously hurt buy-in from potential collaborators. It was also a tad buggy and undeveloped, as is typical of a “beta” offering.

Over the past few months, though, I’ve observed a pickup in interest around Wave, and this was due to a number of things:

  1. Google finally removed the “invitation-only” restriction, and allowed anyone with an e-mail address to be added to a wave.
  2. The development of many more gadgets, robots, and other extensions that potentially made Wave more useful.
  3. The development of stand-alone and mobile applications for Wave.
Thanks to these developments and the renewed buzz I’d observed in the educational technology community, I decided to put last year’s “light bulb moment” to the test. I moved a few of the projects I’ve got going at work into Wave, in the hopes of facilitating focused conversations and collaboration around them. I was starting to see just a little traction, and was planning to focus my next blog post on those efforts, when, suddenly…
Google announced it was killing Wave. As in, stopping development immediately, and stopping the hosting of Wave by the end of the year.
What are the lessons to be learned here? Well, it was obviously a business decision on Google’s part. Many successful new technologies follow an adoption pattern of hype among early adopters, followed by a lag in interest (read: “flat” or “slow” uptake, or perhaps even a drop), followed by steady mass adoption. We saw this model perfectly with Twitter – enthusiastic early adopters, then a lag, then slow, steady mass adoption to the point where it is today. Was that secondary uptick not looking good enough to Google? Was there no secondary uptick at all, with me only seeing slow, returning enthusiasm among my peers in the ed tech community? We may never know. As I said, it’s a business decision. In a tough economy especially, it’s probably necessary to let go of projects we may be altruistically attached to when the returns don’t justify continued investment, sunk costs notwithstanding. (It was a rather sudden announcement, though; even the Google Wave Blog made no mention of it as of the time of this posting.)
I still think there’s hope. Wave is mostly open-source, and someone else might pick it up, perhaps even turning out an enterprise version. A company with good business sense would learn the lessons from Google’s mistakes and improve the user experience, attempt to understand real world use cases, write better documentation, market it well, etc. All in all, I still strongly feel that a focused, multimodal, real-time communication platform like this has vast potential in terms of keeping people and projects on track. As I said back in November, I find e-mail to be cognitively distracting and a terrible way for collaboration to happen. I can say the same thing about Twitter (sorry Twitter fanboys and fangirls). It’s all just too much noise.
I’m very interested to hear your thoughts in comments.

NMC2010 – Reflections on Mimi Ito Keynote

According to her bio in the NMC conference program, Mimi Ito is a “cultural anthropologist who studies new media use.” She focuses on informal learning among peers and has done extensive research on the online Japanese anime fandom community. danah boyd (@zephoria on Twitter), another new media scholar whose work I’ve admired for years, is excited to be here:

[Screenshot of danah boyd’s tweet]

Given this ringing endorsement, I expected a powerhouse of a talk. I was not disappointed.

Mimi opens by mentioning The Shallows: What the Internet Is Doing to Our Brains, a highly rated new release on Amazon by Nicholas Carr. In the book, Carr discusses distracted culture in the Internet age and its perceived negative effects on human intelligence. A similar theme can be found in The Dumbest Generation by Mark Bauerlein. In contrast, you have the work of technology embracers like Don Tapscott (Wikinomics, Grown Up Digital). It’s all too familiar, according to Ito (and I agree): the same old polarization between those who embrace new technology or other cultural elements and those who blast them. Yet both views capture something real. It helps to remember that “opportunities and risks are inextricably entwined” (missed attribution). I’ll highlight at this point one quote from Ms. Ito that had the audience applauding:

Google isn’t making us stupid, we have only ourselves to blame for that.

Indeed. If we are distractible, we will find something to distract us. The technology itself is a neutral entity. There is indeed great opportunity with the social Internet: information at our fingertips, on our desks, or in our pockets. There’s also an undeniable temptation toward distraction. But again, we can’t blame the technology.

Now for the meat of Mimi’s talk. There is an opportunity for peer-to-peer learning here that has not yet been fully realized, and very few institutions are taking advantage of it. We have a mindset of separating entertainment from learning, so we treat these social networks and online cultures as outside the realm of education. This is wrong – we need to tap into the spaces where learners are clearly engaged.

There is a culture clash here. We expect students to meet a standardized set of objectives year after year, and we get upset when they copy each other’s work. Instead, we need to reward the act of building on the work of others when creativity and depth are added. We watched some hilarious videos of the “lip sync” and, um, “crazy antics” variety set to popular music, and we viewed other types of remixes.

Jonathan McIntosh remixes TV advertisements to create a whole new critical narrative:

Buffy vs Edward – more than just a funny video, it offers a critical view of gender roles:



So, what would it look like to use that deep peer interaction and engagement as the actual focus of learning? Students are interacting in their networks and not just with static content – we know this. So why are we not working to create social interactive wrappers around our content? Twitter user @skiley13 quips on this point:

[Screenshot of @skiley13’s tweet]

Indeed. So which version of the “net generation” is correct? Socially engaged or dumbed down? Is that even the question we should be asking as educators? People of all ages have always found means of distraction; indeed, a New York Times editorial by Steven Pinker published just today (well worth the click-through and a read) argues this very point – there have always been detractors claiming that new media makes us stupid, whether that new media is the printing press or the Internet. Rather, today we have a unique opportunity to use these technologies in a way that keeps our students engaged. If we don’t meet this challenge, we will be the ones “dumbing down” our students by forcing them to conform to old models of teaching and learning, and ultimately losing them. There are already examples of faculty and institutions doing this – the rest of us need to get on board.

MacArthur Foundation – Re-imagining Learning in the 21st Century

I would love to hear your take on the keynote or on any point I’ve made on this evolving topic.

My week with the iPad

One week is hardly enough time to really understand how the iPad might apply to and enhance work, play, and life, but I have to give my iPad back so others at the office have a chance to take it home. (Is it telling that I said “my iPad” just now? Heh heh…)

Without further ado, here are some of my initial takes on my week with the iPad. Further time and testing might have revealed more likes/dislikes and good/bad/needs improvement features, but I think I did a decent job putting it through its paces. Apple’s all about the user experience, so my review will focus on my impressions of that.
General: The touch screen interface is very intuitive. Of course, I already have an iPhone, so the transition was pretty much seamless for me. The keyboard leaves a lot to be desired, though. I actually found myself making a lot more mistakes than I do typing on my iPhone: it’s too big for the one-finger or two-thumb techniques that work well on the iPhone, too small to comfortably type on with both hands, and without being able to feel the keys, it’s very easy to slip.
Reading: In general, I found reading on the device to be a very positive experience and I feel this is one area where it really shines. After a couple years of trying to read longer items on my iPhone (news articles, blog posts, etc.), I found the experience on the iPad to be quite welcome. The USA Today and New York Times iPad apps are intuitive and very much like actually reading a newspaper. Text size can be changed on-the-fly and images zoomed. Reading books through iBooks was even nicer, though there were ultimately problems. First the good: 
  • The text and page-flipping interface are quite lovely and provide a seamless experience (read: no delay in rendering when flipping pages, unlike the Kindle).
  • Text size can again be changed on-the-fly.
  • Screen brightness can be changed on-the-fly, giving you the opportunity to lessen eye strain.
  • Orientation can be “locked” with a switch on the side of the iPad, so if you tend to fidget and move around while reading, you won’t have to worry about your iPad flipping its orientation.
  • Text can be easily searched, bookmarked, and copied and pasted.
  • Much, MUCH better than reading on either an iPhone or a laptop. The iPhone is just too small for longer reading, and a laptop is too awkward and restricting (you can’t really roll over on your side).
Now the not-so-good:
  • Though screen brightness can be changed on-the-fly, as I mentioned, it’s still ultimately a backlit LCD display, which can create numerous problems. First, there is the problem of glare. I’m not a fan of reading in the bright sun regardless (even in the shade), but for anyone who enjoys reading on the beach, this could be a problem. I had trouble with glare even trying to read on the bus with sunlight coming in the windows, or at home with my night lamp on. I found it was easier to just turn the lamp off and read by the glow of the iPad. Tough to get used to at first, but I might be able to over time. Still, I spend a lot of time in front of a backlit monitor in the course of my job – do I really want to do that late at night in my leisure time too?
  • Though, as I said, it’s much better than an iPhone or a laptop, it still has some weight to it, which may make it less than ideal for long stretches of leisure reading. Again, maybe something to just get used to.
  • This is an odd one, but I wanted to note it. The iPad is COLD. I noticed it particularly these last couple of nights, since we’ve been having a freak May cold snap. It’s nice to cuddle up under the covers with a “dead tree” book on cold nights, but the iPad’s aluminum backing seems to suck some of that warmth out of you. I suppose the coldness could be lessened with the right kind of cover, but that would add to the weight.
Games: I installed the Words with Friends HD app (pretty much Scrabble) since I already had a couple of games going with friends on my iPhone and wanted to try it out. What a difference! This is a true strength of the iPad – as a gaming interface, it is superior to just about anything else. It’s large enough for a complicated game like Words with Friends to take place without a lot of zooming around, and it’s much more portable than a laptop. And the tiles are pretty close to real-life Scrabble tiles! I like Words with Friends because there is no time limit or pressure like there is in a real-life game; you play at your leisure, when you have the time, and you are notified when your friends have made a move. Chess with Friends is another social game that works this way; I imagine the interface and gameplay are just as lovely there.
Other: I love to cook, so when I saw the Epicurious app for iPad, I was full of excitement! On the iPad it works much the same as on the iPhone – you search for recipes by season, by dish type, or by whatever you have on hand. You can select multiple recipes, perhaps for a dinner party you’re planning or just the family menu for the week, and add them to your “shopping list,” where the ingredients are compiled and organized by type (produce, dry goods, seasonings, etc.). The list is presented as a checklist, which you can start checking off at home based on what you already have in your kitchen. From there you have your complete shopping list! Since it’s organized by type, it’s very quick and easy to shop from. Then, when you’re ready to prepare your meals, you can just set the iPad up on a stand in your kitchen and use the very lovely and easy-to-read recipe interface as your reference.
I took the iPad with my shopping list to Wegman’s over the weekend. Though it’s very nice and easy to shop with, it was ultimately awkward to lug around – I couldn’t exactly stick it in my pocket, and I found myself longing for my iPhone for that purpose. Ideally, there would be a way for my iPad (I said “my iPad” again, uh oh) to communicate over Bluetooth with my iPhone, so that I could compile my shopping list at home and “beam” it to my iPhone to take with me. And vice versa: if I’m out shopping, spot seasonal or sale items, and find recipes using them through the iPhone Epicurious app, I could save them and “beam” them back to my iPad (said it again!) to work from in my kitchen. In this sense I don’t see the iPad as a truly “mobile” device – I’d rather call it “highly portable” or something like that.
What do you think? Have you had a chance to work/play with the iPad yet?

iPad fever

The tech world and even the popular culture world are abuzz with the imminent arrival of the iPad. We’re starting to see iPad apps popping up on iTunes, and the tech blogs are rich with screenshots and descriptions of them (Mashable and Engadget will show you what I mean).

I’ve been a fan of the iPhone for some time, and it’s hard not to see Apple’s newest product as just an enlarged version of it. The interface and the way one interacts with it are nearly identical on both products. The large majority of the apps I’ve seen thus far for the iPad look like nothing more than enlarged versions of existing iPhone apps, perhaps with a few added bells and whistles. A lot of games are easier to play on the larger interface, but nothing to really stir the soul.

I do see a lot of potential in the iPad as an e-reader, though, and I’m a little disappointed to see this capability getting lost amid all the iPad app buzz. I love the natural touch page-turning as well as the ability to read in portrait or landscape mode. I think it could be a great improvement over the Kindle and other existing e-readers, whose interfaces I never really liked – turning and reading pages by manipulating buttons is just inherently unintuitive. The e-ink technology they use is nice, however, and I wonder how the iPad will fare in terms of eyestrain (iStrain?) with its LED-backlit display.


Mobile phone development

At the end of October last year, I attended a day-long workshop on Quickbase. The sessions at this “Tech Fest” were led by real-world developers who had come up with unique solutions in their own deployments of the Quickbase product. I have blogged previously about the intricacies of the productivity problems we’re trying to solve with our own Quickbase solution, and I believe we’re getting closer to implementing some real solutions that will make everyone’s job in the office easier (thanks in NO SMALL PART to the efforts of our database guru Jeanette Condo). The Tech Fest got me thinking on a grander scale about the possibilities, not only with Quickbase but with other ed-tech-related projects as well. Two sessions in particular inspired me to run with it: one on jQuery and one on using jQTouch for iPhone development.

I’ve recently upgraded to a paid personal account on Safari Books Online, since Penn State’s access only includes a subset of the full Safari library, and not a lot of recent works. I’m learning jQuery fast and finding that I really love it. Just like CSS, jQuery allows you to keep your HTML pages clean and uncluttered. But where CSS controls the styling and appearance of your pages, jQuery adds dynamic and interactive behavior. It’s pretty slick and easy to learn. It helps to know some JavaScript, but luckily I’m not too rusty from my days coding JavaScript in the ’90s. Back then, a lot of JavaScript was inserted directly into the HTML, as was any element styling or document layout (read: HTML tables for layout). I’m most familiar and comfortable with client-side scripting, which is where jQuery does its work, so this is all a piece of cake!
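To illustrate what I mean about keeping the markup clean, here’s a tiny, hedged example: the HTML stays plain, and the behavior is attached from a separate script, much as CSS attaches the presentation separately. The class names are just placeholders I made up for this example, not anything from our actual pages.

```
// Unobtrusive behavior: the HTML stays plain and jQuery attaches the
// interactivity from a separate script, much as CSS attaches the styling.
// Assumes course-page sections marked up like:
//   <h3 class="section-toggle">Lesson 1</h3>
//   <div class="section-body"> ... </div>
// (the class names are placeholders for this example).
$(document).ready(function () {
  $('.section-body').hide();                      // start with sections collapsed
  $('.section-toggle').click(function () {
    $(this).next('.section-body').slideToggle();  // expand/collapse on click
  });
});
```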
Here are the books I have on my Safari shelf for learning jQuery (with links to their Amazon pages):
Now to return to the title and the original purpose of this post. Knowing jQuery is a good foundation for becoming familiar with jQTouch, a jQuery plugin for building Web apps that look and behave like native apps on the iPhone and other mobile devices (so I’m told, but only real-world testing will tell; a bare-bones sketch of how it’s set up is below). Supposedly there are also utilities for turning your jQTouch-based mobile apps into native iPhone apps (negating the need to learn much Objective-C). Mobile apps for productivity purposes in the workplace sound intriguing to me. Time tracking or project management while on the go? That would potentially eliminate some of the inevitable “catching up” on these necessary evils when returning from a conference or offsite meeting. Maybe I am just dreaming, but I think it would be fun to try. Besides, in a more mission-focused sense, if we are to pay attention to the needs of our learners, mobile learning is really looking like the next big thing. Perhaps it is better to rephrase “mobile learning” as “reaching our learners where they are,” because I think that is really what we are looking at enabling with mobile development. The 2010 Horizon Report lists mobile computing (their term) as a technology for educators to adopt in one year or less. We are here now, folks!
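I haven’t built anything real with jQTouch yet, so take this as a rough, untested sketch based on its documented pattern: each “screen” is a top-level div with an id, ordinary hash links trigger the slide transitions, and a single initialization call wires it all up. The option names and icon file below are assumptions on my part and may differ between versions.

```
// Bare-bones jQTouch setup -- a sketch only, not something I've run on a device.
// Assumes jquery.js and jqtouch.js are loaded, and the page body contains
// jQTouch-style "screens": top-level divs with ids, for example
//   <div id="home"> ... <a href="#announcements">Announcements</a> ... </div>
//   <div id="announcements"> ... <a href="#home" class="back">Back</a> ... </div>
// Plain hash links like these trigger the native-feeling slide transitions.
var jQT = new $.jQTouch({
  icon: 'announcements-icon.png',   // hypothetical home-screen icon
  statusBar: 'black-translucent'    // option names may differ between versions
});
```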
In that vein, I plan to look at mobile development from a strategic and planning standpoint by reading this:
This book seems to touch on the actual details of mobile app development but does not delve much into it. For the real nitty-gritty, I plan to read this:
One more thing on the jQuery front. I have some ideas, based on the exercises I’ve done, for ways to improve the usability and interactivity of our course content pages that I plan to share with the Evolution programming team.
That’s all. If you have any thoughts on any of this, please leave a comment. In particular, if you know of any good resources or books on jQuery, jQTouch, or mobile development that I haven’t listed, please let me know.

Livescribe Pulse pen

My department has a small budget with which to purchase and evaluate new technologies, and recently I got the chance to evaluate the Livescribe Pulse pen. The system pairs special dot-patterned paper with a pen that hides a camera and a microphone inside. The camera records the pen’s motions against the dot pattern on the paper. The pen has ink, of course, but the ink is really only for the user; it is irrelevant to the technology. The microphone records the pen user’s voice, allowing for what Livescribe has dubbed “pencasting” – real-time recording of writing or drawing along with an audio description of what is going on.

Some excellent possible uses for the pen include writing out and demonstrating math equations, formulas, and graphs, as well as pen-and-ink drawing. Any of these would be useful for providing a “chalkboard”-type experience to distance learners.
There are some drawbacks, of course. The output is a proprietary format hosted on the Livescribe site – not good if you’re thinking of doing in-house enhancements to the pencasts (like adding captioning for accessibility purposes). Also, though students can use the pen to demonstrate and submit their work, it does not produce a file that the professor can then mark up and return.
All in all, a nice way of demonstrating problem-solving techniques, but not a great way of providing a true two-way or social experience.