The Spaces Project is an application containing a selection of guided activity tours designed to educate participants about various elements of nature. In this post I will cover the development steps I took while creating the Pollinator Tours for the Spaces app.

Spaces Development Preview Video


This post covers the following development steps I took while creating this project: building the slide deck framework, then prototyping the GPS, camera, fake scan, Bee Vision, Food Selection, journal, and glossary features.


The Focus

The idea behind the Spaces Project was to create a tour experience that would teach users about pollinators. These tours would guide users to various points of interest around a physical location while encouraging them to observe natural elements that contribute to the survival and habits of pollinators. The intended takeaway was a fun and memorable experience that would lead users to appreciate pollinators and their important role in our world.

This project is primarily intended to be used by families with elementary-school-aged children. Upon arriving at the tour location, each family would be given an iPad running the Spaces Project and, as a group, would be guided by the app to travel from location to location completing activities.

Having little experience developing programs for mobile devices and zero experience developing for Apple/iOS, I was excited for the new challenge.


Slide Decks

Before I jumped into creating any of the activities for this project, I decided to start by creating a few simple templates and a method for navigating through them. Given the style of this project, building a couple of template slides in Unity seemed like the best starting point. I created a basic slide template containing two primary layers: the body and the overlay. The body holds the slide's content, such as interactive activity elements. The overlay contains elements that always need to be visible on top of the body's content, such as the slide title, the primary informative text in the middle of each slide, and the navigation buttons that let the user move to the next and previous slides. With these basic templates set up, I could save development time by rapidly duplicating slides for additional content. This also allowed me to quickly make general changes to all instances of a template slide at once when needed, rather than repeating the same change on each slide individually.

Once this basic slide template was put together, I was able to create a deck system to manage and navigate the slides, as well as variation templates for the various activities within the program.

The slide deck system held groups of slides together in a specific order, similar to a deck of cards. This let me easily navigate forward and backward through a deck's slides, and also jump between decks without losing the active slide progress within any specific deck. The decks also allowed me to reuse some activity slides, such as the map or camera slides, without having to jump around in deck order or create duplicate slides.
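The app itself was built in Unity, but the deck logic described above is easy to sketch in any language. Here is a minimal, hypothetical Python version of the two ideas (a per-deck position, and a manager that switches decks without resetting that position); the class and method names are my own, not the project's:

```python
class Deck:
    """An ordered group of slides with a remembered position."""

    def __init__(self, slides):
        self.slides = slides
        self.index = 0  # progress within this deck

    def current(self):
        return self.slides[self.index]

    def next(self):
        # Stop at the last slide instead of wrapping around.
        if self.index < len(self.slides) - 1:
            self.index += 1
        return self.current()

    def prev(self):
        if self.index > 0:
            self.index -= 1
        return self.current()


class DeckManager:
    """Switches between decks; each deck keeps its own progress."""

    def __init__(self, decks):
        self.decks = decks                  # name -> Deck
        self.active = next(iter(decks))     # default to the first deck

    def switch(self, name):
        # The previous deck's index is untouched, so returning to it
        # resumes exactly where the user left off.
        self.active = name
        return self.decks[name].current()
```

Switching to a reusable deck (say, the map) and back therefore resumes the tour mid-deck, which is the behavior the slide reuse relies on.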


GPS

The first activity I worked on was the initial framework for GPS interactions. I started with the GPS functionality because I anticipated it would be one of the trickiest features on the list the Spaces team asked me to create. When working on projects, I usually prefer to tackle the most difficult or time-consuming elements first.

Since I do most of my development work on a PC, I knew I would not be able to access actual GPS data while initially building this program. So, to start, I wrote some code that let me manually enter placeholder latitude/longitude values, which would be used to place and move a map marker around a map area. The initial setup was fairly straightforward. I gave the code values for the boundaries of the map area and a set of values for the marker's current location. The code would check whether the marker's position fell within the boundaries of the map. If so, it would calculate the ratio of the marker's latitude between the map's north and south latitude boundaries, and the ratio of the marker's longitude between the map's east and west longitude boundaries, then place the map marker accordingly.

After a little tinkering I had a usable demo GPS map, which can be seen in the following demo video. At this time I did not have access to an iPad or a Mac, both of which were required to load the app onto a mobile device for testing. Until I could load the app onto a mobile device, I would not be able to test the GPS map functionality against actual GPS data.
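The placement math above is language-agnostic, so here is a minimal Python sketch of it. The function name, parameters, and map dimensions are hypothetical, chosen just to illustrate the ratio calculation; the real project did this inside Unity:

```python
def gps_to_map(lat, lon, north, south, east, west, map_w, map_h):
    """Convert a latitude/longitude fix to (x, y) on a rectangular map.

    The map's corners are given as north/south latitudes and east/west
    longitudes. Returns None when the fix falls outside the map bounds.
    """
    if not (south <= lat <= north and west <= lon <= east):
        return None  # marker is off the map; don't draw it

    # Ratio of the fix between each pair of opposite boundaries,
    # scaled to the on-screen size of the map image.
    x = (lon - west) / (east - west) * map_w
    y = (lat - south) / (north - south) * map_h  # y = 0 at the south edge
    return x, y
```

A fix exactly halfway between the boundaries lands in the middle of the map, which is the behavior the placeholder lat/long testing relied on.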


Camera

The next feature I prototyped was the camera functionality. The Spaces team asked me to create two variations. The first was an activity that let users take pictures of flowers and pollinators around the tour environment using the iPad's rear-facing camera. The second was an activity where users could use the iPad's front-facing camera to take a "Bee Selfie": a selfie with bee antennae and a proboscis overlaid onto their face. Just as with the GPS framework, I still didn't have an iPad while developing the camera framework. Luckily, thanks to the pandemic and the proliferation of Zoom meetings, I did own a webcam and was able to use it for the preliminary development of the camera features. A demo of the prototype camera functionality can be seen in the video below.


Fake Scan

Next, I created a prototype for the fake scan animation activity. This activity asks users to point the iPad at a specific real-world object and press a "scan" button. Doing so plays a short animation simulating the object being digitally scanned (similar to the fake "scan" technology seen in most sci-fi and crime detective TV shows). Although I would not receive the actual animation, created by Nick Rossi, for the final version of the app until later on, I was able to create a simple stand-in animation for the initial prototype: a small black square that grew to screen size three times in a row before triggering the app to move to the next slide. The animation was activated by tapping the "scan" button on the previous slide. A demo of this prototype scan functionality can be seen in the video below.


Bee Vision

The next feature I worked on was an activity called "Bee Vision". Bees and many types of insects have compound eyes that see the world in ultraviolet, blue, and green light, as opposed to human eyes, which see the world in red, blue, and green light. To illustrate the difference between these two ways of seeing the world around us, the Spaces team asked me to create an activity that would display how an image looks under each. To accomplish this, I created an activity with two images and a slider bar. The two pictures were of the same flower, one taken with a normal camera and the other with an ultraviolet camera, and I layered them on top of each other. As the user moves the slider up and down the screen, the opacity of the top image is adjusted accordingly, making it appear more or less transparent and letting the picture behind it show through. This creates the effect of fading/blending from one vision type to the other and back. A demo of this feature can be seen in the video below.
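Under the hood this is just a linear cross-fade driven by the slider value. A hypothetical Python sketch of the equivalent per-pixel math is below; in the app itself, only the top image's alpha is set and the GPU does the blending:

```python
def blend(normal_px, uv_px, slider):
    """Cross-fade one pixel between the normal-light and UV photos.

    normal_px and uv_px are (r, g, b) tuples; slider runs from
    0.0 (fully normal-light) to 1.0 (fully ultraviolet).
    """
    return tuple(round((1 - slider) * n + slider * u)
                 for n, u in zip(normal_px, uv_px))
```

At the slider's endpoints the output is exactly one source image, and halfway it is an even mix, which produces the smooth fade between the two vision types.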


Food Selection

An important aspect of teaching people about pollinators is educating them about which foods require pollinators in order to grow. In the Food Selection activity, users are asked to mark various food items according to which ones they think require pollinators to grow. To create this activity, I made a scrollable grid of square image placeholders that, when tapped, toggle a green checkmark and a highlight square around the placeholder image. On the next slide, the user would be shown the correct answers (which in this case was all of them). A demo of this feature can be seen in the video below.
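The toggle-and-grade logic behind this slide amounts to simple set bookkeeping. A hypothetical Python sketch (the item names below are placeholders, not the app's actual food list):

```python
def toggle(selected, item):
    """Flip an item's checkmark; selected is a set of item names."""
    updated = set(selected)
    if item in updated:
        updated.remove(item)   # un-check
    else:
        updated.add(item)      # check
    return updated


def grade(selected, requires_pollinators):
    """Compare the user's picks against the answer key."""
    return {
        "correct": selected & requires_pollinators,
        "missed": requires_pollinators - selected,
        "wrong": selected - requires_pollinators,
    }
```

In the actual activity every item belongs to the answer key, so the reveal slide simply checks everything; the sketch above also handles the general case.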


Journal

The next feature I built was the Journal. The plan for this feature was to give the user a place to view the pictures related to the activities they had completed; the journal would act like a scrapbook in this regard. In this initial prototype phase I built the journal as a scrollable grid of thumbnail placeholder images. The user could view a full-size version of an image by tapping its thumbnail, and tapping anywhere on the screen would minimize the full-size image again. A demo of this feature can be seen in the video below.


Glossary

Now that I had created functional prototype demos of each of the activities, I needed to build an activities glossary. The glossary lets users jump directly to specific activities they have already completed, in case they want to repeat any they enjoyed. Each activity's glossary button remains inactive until the user completes the corresponding activity, preventing them from jumping ahead and accidentally skipping an activity they have not done yet. A demo of this feature can be seen in the video below.
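The unlock rule above is a small piece of completion tracking. A hypothetical Python sketch of that logic (the activity names are placeholders, and the real app wires this state to the glossary buttons in Unity):

```python
class Glossary:
    """Tracks which activities are unlocked for replay.

    A glossary button stays inactive until its activity has been
    completed at least once during the tour.
    """

    def __init__(self, activities):
        self.completed = {name: False for name in activities}

    def complete(self, name):
        # Called when the user finishes an activity for the first time.
        self.completed[name] = True

    def is_unlocked(self, name):
        # Drives whether the activity's glossary button is tappable.
        return self.completed[name]
```

Because completion is recorded per activity, finishing one activity never unlocks a later one, which enforces the no-skipping-ahead behavior.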


Next Entry

In the next Spaces Project blog post I will cover the second phase of development, when the fun really begins: things get complicated, new challenges arise, and I acquire an iPad so I can finally test the app on a mobile device.


 

* The Transforming Outdoor Places into Learning Spaces project was made possible in part by the National Science Foundation, grant #1811424. The views, findings, conclusions, or recommendations expressed in this blog do not necessarily represent those of the National Science Foundation. *