Simulation of the Mission to the Moon

I’ve always been amazed by the first mission to the moon.  Tons of simulations had to be done carefully, and a lot of theoretical assumptions had to be made.  During the mission itself, a lot of feedback data needed to be collected for future improvement.  A trip to the moon was expensive, time-consuming, and life-critical, so the amount of energy and minds put into the mission was understandably tremendous.

I did have those days when a simple code editor wasn’t even available and compiling was so expensive that it was justifiable for me to write code on paper and debug it as much as I could before even trying to type it, carefully, line by line, into a terminal.  On my paper there were areas for code, registers, flags, and memory maps.  When things didn’t go as expected, the first thing to do was to go through the whole paper-debugging process again.

I thought those days were gone.

Not quite.

Recently I have been working on a project that involves security patching, feature testing, and load testing.  The platform on which I develop the application is different from both the production platform and the test platform.  Usually this wouldn’t be a problem if the application stayed purely in the application layer.  Unfortunately, the security problem is OS-dependent, and I have no access to either the test server or the production server.  All I can do is rely on documents and information given by the system admins of those servers.  The development cycle is like this:

  1. I write code according to the information I am given and hope it will run well on the test server.
  2. I pack the code and email it to the system admins of the test server.
  3. When the system admins get the code, they deploy it, run it, and tell me the symptoms.  I can also run it remotely.  However, some symptoms don’t appear via the web, so I have to rely on the system admins to email me back the error messages.
  4. Judging from the error messages (or lack thereof), I have to guess what’s going on and fix the problems.  Then jump back to step 1.
  5. Sometimes I can’t figure out what’s wrong, so I have to write extra debug code that outputs explicit error messages.  Jump back to step 1.
  6. Sometimes the debug code is not simple, and there may be bugs in the debug code itself.  So I debug the debug code by staring at it really hard.  Then jump back to step 5.
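Step 5 of the cycle, the explicit debug output, can be sketched as a small wrapper.  This is only my illustration (the function name `debug_call` and the log format are made up, not the project’s actual code); the idea is to capture platform details and full tracebacks in one place, so a single log emailed back by the admins is enough to diagnose a failure:

```python
import logging
import platform
import sys
import traceback

# Hypothetical sketch: verbose debug wrapper for code that runs on a
# server I can't log in to.  Everything goes to one log the admins can
# email back.
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("remote-debug")

def debug_call(fn, *args, **kwargs):
    """Run fn, logging enough context to diagnose a failure remotely."""
    # The OS-dependent bug makes platform details worth recording.
    log.debug("platform=%s python=%s", platform.platform(), sys.version.split()[0])
    try:
        result = fn(*args, **kwargs)
        log.debug("%s returned %r", fn.__name__, result)
        return result
    except Exception:
        # Full traceback in the log, since the symptom may not appear via web.
        log.error("call failed:\n%s", traceback.format_exc())
        raise
```

Wrapping the suspect entry points this way at least turns “the page is blank” into a concrete traceback in the next email.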

To be fair, the system admins are very knowledgeable, responsive, and helpful.  I think they provide me the best support they can, under the restriction that I cannot have access to the testing environment.

Does the cycle have to be this time-consuming and painful?  I certainly hope not.  Until it can be improved, I use nostalgia to keep my spirits up.

Learner’s Perspective

One of my recent projects is helping Mary Ramsey build the Music Lab in Warnock, and recently we’ve been writing a user guide for the facilities.  To make the guide clearer, quite a few pictures needed to be included.  Most of them are photos from a digital camera or annotated screenshots.  However, I needed a clean block diagram as a schematic, to replace a noisy photo and explain the conceptual architecture of the configuration.

I went to David for help and it has been quite a mind-opening journey for me.

One of the questions David asked me was, “What details are important for the users in this diagram?”  For me, it’s like a Zen question.  The whole point of a diagram is to let users grasp the main idea easily.  However, David also mentioned that if we can differentiate each block by using a meaningful icon (as opposed to a blank block with a label), it will help users, too.  It becomes a balancing act that requires some wisdom: we want enough detail for the icons to be easily recognizable, but not so much unimportant detail that it distracts users from the main point.

Another interesting point we discussed was which components are important for the users.  The Music Lab provides the facility for users to connect their own headphones and guitar for music recording and production.  I told David that the headphones and the guitar should be in the diagram so users know how to connect them, even though when users come into the lab, they won’t actually see the headphones and the guitar.  David suggested we differentiate the components that are already there from the components that users need to provide by using different greyscale images.  So I automatically said, “That’s a good idea.  Let’s grey out the headphones and the guitar since they are not actually there.”  Then David said that if our purpose is for users to quickly see what they need to do, perhaps greying out the components that users don’t have to touch would be a better idea.

What an enlightening moment!  He composes documents from the users’ point of view and produces documents that are really for the users.

Before David left, he asked me: so how big shall the diagram be?  I found myself unable to answer that question.  He humored me: “Not too big, not too small, just the right size, eh?”  I ended up showing him where his picture would fit in the document and let him decide on the size.

I certainly learned quite something from him.

Web x.0 tools in Meetings

At the recent TLT meeting, Kevin was invited to share his thoughts.  One of the comments he made was, “In 20 years, none, NONE of the technology we are using now for education will still be there.”

As obvious as it sounds, I’d never actually thought about the implications of that fact.  First of all, it is clear that what’s important for us education technologists is not what we happen to be familiar with, but whether we are able to quickly grasp the idea of a new technology and make good use of it.  Secondly, it is important to be well informed on all the competing trends and the communities behind them, so we can ride the right one and direct it to good destinations.

The innovation, or at least the intention to innovate, of the meeting itself fits the theme.  Folks at ETS are already very familiar with the new communication tools used in conferences and meetings, and I’d say these cases are mostly success stories of how these tools help us connect better.  If you are still new to the scene, come to one of our major conferences (#LDSC09 or #tltsym09) and experience a totally different level of live interaction.

Some of the training I received was totally based on oral and aural means; it emphasized close face-to-face communication, 100% single-task attention without multitasking, and memorization.  Even note-taking was considered distracting to the presenter and the note taker.  As powerful and effective as these forms of communication can be in their own right, they are not always suitable in our modern society.  Nowadays we value collective group input, discussion, getting ideas from multiple inspirations, frequent interaction, and multitasking.  If we allow a meeting participant to use notes instead of his memory, is it all that strange to see a group of participants collaborate on a meeting wiki during the meeting?  And does it really matter whether the ideas are exchanged on a wiki, Twitter, IM, text, the Live Question Tool, etc.?

On the other hand, just as we now see note-taking as a sign of taking a meeting seriously, one day we may expect people to use these tools actively.  It is not necessarily true that if a person is not taking notes, she is not actively participating in the meeting.  However, if meetings evolve into a real-time, interactive form of communication, a person not using any of these tools may actually miss a lot of the event.

If our meetings are to move into the new era, we will likely see some changes, and changes by definition are something we are not used to.  For example, active participants may be reading and typing on their laptops instead of looking at the presenter during a live event.  The reason people sit in the back of the venue may be for the sake of wireless signal reception.

Of course, as long as our society allows free will, it is impractical to expect every single mind to actively participate.  People learn to nod back at the presenter without really listening.  Note-takers can doodle, or mindlessly write down the words presented.  People can chat with their buddies about something irrelevant to the meeting.  The tools are not the real problem.  Otherwise, we would feel very justified in using the eyelid holders from A Clockwork Orange, or, as a milder approach, wireless signal blockers.  The real issue is always the mind.  What motivates people to participate?  What’s the real purpose of a meeting?  What are people’s concerns about when, where, and how meetings are conducted?

I see myself as an innovator, or at least an adopter of innovations, and I make efforts to be open-minded while being mindful of what works and of the necessary costs of change.  For me, it’s been, and will continue to be, a journey that defines what it means to be relevant in the modern day.

ITS Training: Crucial Conversations

ITS provides managers and staff members a training course, Crucial Conversations, to improve the quality of communication in the workplace.  At first, I thought it was just another “We need to really listen to each other” type of cliché.  It turned out to be very valuable training for me in both my professional and personal life.

Crucial Conversations defines a conversation as becoming crucial when a situation calls for a change and serious communication is needed.  It sounds self-explanatory, but the skills involved go beyond what we think we can naturally do without awareness.

For example, for an effective conversation to happen, both parties need to be willing to have it, sincerely.  However, most of the time the situation came to that stage because there was some kind of difficulty, and at least one party is less willing to change than the other.  What happens is that the party which initiates the conversation faces resistance from the other party in the form of either silence or violence.  Unfortunately, when we encounter silence or violence, we tend to become part of it.  For example, if the person we try to talk to avoids discussing an uneasy issue, we tend to let it go; if the person gets upset, we tend to get upset, too.

The course teaches some skills to avoid participating in the unproductive or even destructive behavior pattern.  Instead, we learn how to encourage the other party to join our intention of sincere conversation which involves honesty and the intention to improve situations.

Some of the self-observation exercises have made me realize that it takes a lot of awareness to be able to conduct an effective conversation.  One of the most important questions we usually forget to ask ourselves when we are engaged in a heated debate is: “Is this the result I want?  Did what I just said cause the effect I specifically didn’t want?”  We claim that we want to improve the situation, but we forget that it takes real patience to achieve that goal: we need to allow the other party enough time to come over from the defensive side to the mutual side, from which we face the problem together.

All of the skills we learn make it very obvious that the key to successful communication is sincerity.  If we are not sincere enough, none of the techniques will work.

One of the reasons I enjoy my job is that most people I work with are the no-nonsense type.  They mean what they say, and they do what they say.  I like that.  A training like this is only meaningful to those who are serious about change.  The fact that ITS chose such a training and provides it to its staff on such a large scale says a lot about the leaders here.

I am grateful for what I got from the training, and for the reason I got the training.

ITS Training: EASY Re-Engineering

Title: ITS Forum: EASY Re-engineering
Time: 5/29/2009 10:00AM – 11:00AM in 141 Computer Bldg

There was a misunderstanding on my side about what the workshop was about.  I didn’t realize the word “EASY” is the acronym of an internal Human Resources workflow system.  Instead, I thought EASY was capitalized to emphasize the ease of the re-engineering methodology.

Fortunately, the experiences presented in the workshop were still very relevant to system deployment in general.

The challenges the EASY team faced include:

1. They wanted to port the original 3270 terminal interface to a web interface.  This is very understandable: I can immediately imagine the coding cost of a terminal interface, as well as the operator training cost and the cost of maintaining a terminal program on the client side.  One of the major improvements they put in is that all the acronyms on the forms can now easily be explained (or even abandoned) through the use of plain English, thanks to the powerful web interface.

2. They used Java to replace the old code.  Java is one of my favorite languages for its clean design and rich libraries.  It has also been one of the easiest languages for project maintenance, in my own experience.

3. They also decided to take the chance to consolidate the workflow.  There were a few aspects:

3a. Originally, most forms needed multiple levels of authorization, and the routing of authorizations was defined by PSU IDs.  For example, if a form needed the authorization of an employee, her supervisor, the chair of the department, the chair of the school, and the president, the old system had all of those IDs written into it.  If there was any job duty change, or a job change, someone had to search the whole system to replace the old IDs with those of the new person.  To make it worse, a person holding multiple positions/responsibilities might, after a job change, keep only part of the original responsibilities.  The solution is to define ROLES and then assign PSU IDs to the ROLES.  It seems a very obvious approach today, but it was probably difficult with the old framework.  Another improvement is the use of email notification, which is self-explanatory.
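The role-based routing in 3a can be sketched in a few lines.  The role names and IDs below are hypothetical, purely for illustration, not EASY’s actual schema; the point is that a job change touches a single role-to-ID table instead of every form in the system:

```python
# Hypothetical sketch of role-based authorization routing.
# Role names and PSU IDs below are made up for illustration.

# A form's route is defined once, in terms of roles, not people.
APPROVAL_CHAIN = ["employee", "supervisor", "dept_chair", "school_chair", "president"]

# The only table that changes when a person changes jobs.
role_to_id = {
    "employee":     "abc101",
    "supervisor":   "def202",
    "dept_chair":   "ghi303",
    "school_chair": "jkl404",
    "president":    "mno505",
}

def approvers(chain, assignment):
    """Resolve a role-based route to concrete PSU IDs at submission time."""
    return [assignment[role] for role in chain]
```

A job change then becomes a one-line update to `role_to_id` rather than a system-wide search-and-replace of hard-coded IDs.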

3b. Multiple forms were merged (70+ forms were developed over the past 20 years, and a lot of them serve similar functions).  Each form is now a unit that can be freely combined into “processes” for different needs.  This is object-oriented design.  Now there are 13 financial processes and 7 OHR processes.  This is an impressive improvement.
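The form-as-unit idea in 3b amounts to composition.  A minimal sketch, with class and form names entirely my own invention rather than EASY’s:

```python
# Hypothetical sketch: forms as reusable units, composed into processes.

class Form:
    """A single form, usable in any number of processes."""
    def __init__(self, name):
        self.name = name

class Process:
    """A named sequence of Form units assembled for one business need."""
    def __init__(self, name, forms):
        self.name = name
        self.forms = list(forms)

    def steps(self):
        return [form.name for form in self.forms]

# The same form unit can serve two different processes.
salary = Form("salary-change")
new_hire = Process("new-hire", [Form("personal-info"), salary])
promotion = Process("promotion", [salary, Form("title-change")])
```

Merging 70+ overlapping forms down to a small set of units, then recombining them into 20 processes, is exactly this kind of reuse.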

I’m always very interested in the cost of projects.  I was told it’s a huge project involving 8 university units, and about 70 people were involved in the consultation.  I wasn’t able to get detailed information on how the cost estimation was done, but it seems quite obvious that the modernization of the system was unavoidable.  It’s still an ongoing project, and only parts of the workflows have been modernized.  I assume there must have been some HR operation cost estimation involved to prioritize which workflows to improve.

What also interests me is how they made the transition smooth for the users.  Brad and I have always put a lot of thought into pushing out a new feature in Blogs @ PSU.  The EASY folks do something called “limited deployment”: they chose three institutes to test out the new interface before university-wide deployment.  During that time, the beta testers have access to both the new and old interfaces, so they can always fall back to the old system should the new one not work.  And the development team can take feedback during this test.

From this fact, I guess that the development team probably kept the backend data schema, so both interfaces have access to the same data.  This requires an abstract design that separates the raw data from the data processing.  And a logging system is crucial for event retrieval.
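That separation can be sketched as one shared data layer with two independent presentation layers.  Everything here (the record shape, the glossary) is my guess at the design, not EASY’s actual code:

```python
# Hypothetical sketch: old and new interfaces reading the same backend data.

def load_record(record_id, store):
    """Shared raw-data access; both interfaces go through here."""
    return store[record_id]

def render_terminal(record):
    # Old 3270-style view: terse codes.
    return "{}: {}".format(record["code"], record["status"])

def render_web(record, glossary):
    # New web view: the same data, with acronyms expanded to plain English.
    label = glossary.get(record["code"], record["code"])
    return "{}: {}".format(label, record["status"])
```

With this shape, the limited deployment is safe: beta testers can switch between the two views at any time, because neither view owns the data.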

Another thing that pleasantly impressed me was the appearance of Beth Hayes.  She is retired, but volunteered to present some of the initiatives started during her days.  It feels good to see people who enjoy their jobs, and whose jobs are not only jobs but part of their lives and passion.