The ADEPT (Awareness of Decisions in Evaluation of Promotion and Tenure) team at Georgia Tech works to give academic professionals, such as professors and other faculty, insight into how promotion and tenure decisions are made.

The existing tool was built around 10 years ago and consists of three activities/games: Simulated Meetings, Navigating Your Career, and Annotated Vitae. Each game is primarily targeted towards a particular group of stakeholders. Users take part in a scenario or conversation where they answer questions and make statements based on the information given. They earn points and can read literature that expands on the answers. The games are based on fictional case studies and CVs, inspired by real ones.

tl;dr: the purpose of the games is to let users explore different scenarios, and their consequences, related to promotion and tenure processes.


UX Designer (me) + Front-end Developer + Advising Professor (Field Expert) + Content/Marketing Strategist



How might we redesign an existing Flash-based role-playing game to deliver a better user experience, focusing on inclusivity for stakeholders within the academic space?

Project Constraints

  • Text content (storyline) remains the same and cannot be modified.
  • Only front-end development was feasible given our time and resources; without a back-end, we could not store user information or progress. As a designer, I needed to craft a system that worked around this constraint and warned the user before any loss of progress.


Research time was very limited, as we needed to design, develop, and take the entire tool live in a short span. We dedicated a brief period solely to research and then started designing.

Existing Product

The existing product was an outdated Flash game that had to be downloaded to be played. It also didn’t comply with any of the brand guidelines of Georgia Tech, the product’s owner. Instead of focusing on what didn’t work in the existing product, I decided to concentrate on what worked and carry that forward.

Older designs

Concepts taken forward from past design:

Conversational format
The primary content was framed in the form of a conversation instead of a case study; this mirrored the actual situation, making it easier to connect with and understand the situations.

What is the best answer?
A ‘most appropriate answer’, rather than one single correct answer, encouraged the user to explore and discuss multiple scenarios. Users are scored on multiple aspects to give them some indication of whether they are heading in the right direction.

Learning tools

To begin with, we researched existing learning tools like Duolingo and Coursera to understand what works and what doesn’t when it comes to information architecture, user flows, and task analyses.

Inspirations for design

User Interviews

I spoke to professors at different stages in their careers.

Purpose of interviews:

  • Gauge their level of understanding of the promotion and tenure process
  • Current methods of gaining information
  • Problems, obstacles faced
  • Priority given to this topic
  • Familiarity with the existing tool


Personas + User Goals

Based on our user interviews and our target audience in the academic realm, I created two distinct user personas: an experienced academic who knew the process because he had gone through it himself, and a relative newcomer to the academic space who wants to learn more about the process and do everything by the book. These two personas guided the design process that followed and ensured I designed for a specific user rather than trying to design for everyone.

Design Guidelines

Based on our research, I came up with the following (non-exhaustive) list of design guidelines. The text in green indicates the part of the research that I deduced the guideline from.

User experience should allow for full immersion into the game. (from feedback forms)
Since the game requires a good amount of reading, it is important that the user is fully engaged in the game. Research showed that letting the user ‘play a character/role’ proved essential for this cause.

The design should follow Georgia Tech brand guidelines. (business need)
Since the product is owned by GT, it should follow all brand design guidelines while also encouraging inclusivity by inviting scholars from other schools to use it.

Users should be aware of their progress. (user interviews + constraint)
It is important to constantly display the player’s progress to inform them of how much of the game is completed and how much is left.

Users should have supplementary resources at their disposal. (UX principle of recognition rather than recall)
While playing the game, the user needs to make decisions based on a candidate’s past record. It is therefore important that this record is available on screen to refer to at all times.

Users should be aware of external resources to explore after playing the game. (purpose of game)
The activities are designed for the user to explore different scenarios concerning evaluation and mentoring. The statements in the games are meant to open doors to further discussion. Therefore this game should be able to lead users to other useful resources in the academic world.

Validations and alerts (constraint)
The system should clearly acknowledge user actions and show alerts at crucial points in the user flow. This is important because, due to time and resource constraints, we could not build a functioning back end for the game; on exiting a game, all progress is lost.
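With no back-end, progress lives only in memory, so the key alert is a warning before the user leaves mid-game. A minimal sketch of that behavior in the browser (the state shape and function names here are hypothetical, not the product’s actual code):

```javascript
// Warn only when a game is in progress but not yet finished.
function shouldWarnOnExit(answered, total) {
  return answered > 0 && answered < total;
}

// Browser wiring, guarded so the pure logic above also runs outside a browser.
if (typeof window !== "undefined") {
  const gameState = { answered: 0, total: 12 }; // illustrative in-memory state

  window.addEventListener("beforeunload", (event) => {
    if (shouldWarnOnExit(gameState.answered, gameState.total)) {
      // Modern browsers show a generic "leave site?" confirmation dialog.
      event.preventDefault();
      event.returnValue = "";
    }
  });
}
```

The warning is deliberately suppressed before the first answer and after the last one, so the user is only interrupted when leaving would actually cost them progress.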


The design phase began with revising the task flow and the information architecture to provide a better user experience. I then moved on to low-fidelity paper prototypes and eventually began the visual and UI design part.

User Flow

The user flow for the game was relatively linear. I started with the flow for the existing game and then kept revising it based on the design guidelines.

User Flow

Low Fidelity Prototypes

Introductory Page
Members’ Bio Option 1
Members’ Bio Option 2
Simulated Meeting Main Screen
Simulated Meeting Transcript
Navigate your Career Main Screen

Characters – Visual Design

Our existing content consisted of conversations between different stakeholders, depending on the game. For example, Simulated Meeting was a discussion between four panel members (the user being one of these four members). Therefore, we designed characters to go with the names and scenarios for each of these games.

Visual Design of Characters

Designing for Diversity:  The academic community at Georgia Tech is striving to create a safe and inclusive atmosphere for all its members. One of the requirements for this project was to be part of that initiative. Our characters are designed to represent the diversity in race, gender, and more that is often seen in the faculty at different educational institutions across the country.

User Interface

Design Decisions

Following brand guidelines

While this product is owned by Georgia Tech, its users would come from multiple educational institutions. It was therefore important to follow Georgia Tech’s brand guidelines while retaining a general visual design that is inclusive to all audiences.

Easily distinguishing participants in the conversation

Following the Gestalt principle of similarity, each character was given their own distinct color in the conversation, generally the color of their clothing. This color also served as an accent color for the chat bubbles, letting users easily distinguish who is talking.
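The mapping itself is simple: one accent color per speaker, applied to that speaker’s chat bubbles. A sketch of the idea (the character names and hex values below are invented for illustration, not the actual palette):

```javascript
// Hypothetical speaker-to-accent-color mapping.
const CHARACTER_COLORS = {
  "Dr. Alvarez": "#1f6f8b",
  "Dr. Chen": "#99582a",
  "Dr. Okafor": "#5c374c",
  "You": "#003057",
};

// Resolve a speaker's accent color, with a neutral fallback for unknown names.
function accentColorFor(speaker) {
  return CHARACTER_COLORS[speaker] || "#666666";
}

// In the chat view, the accent is applied to each bubble, e.g. as a border.
function styleBubble(element, speaker) {
  element.style.borderLeft = `4px solid ${accentColorFor(speaker)}`;
}
```

Keeping the lookup in one place means the palette can be tuned (for contrast or color-blind safety) without touching the chat rendering code.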

Recognize rather than Recall

Users would often need to refer to the candidate’s profile (CV and Case Study) while making decisions. The user first reviews this profile on the introductory page of the game. The same profile is then condensed and conveniently placed on the sidebar, from where it can be accessed at any stage in the game.

Estimated Time

Professors are often very busy during their week, and dedicating an undivided stretch of time to this activity may require some planning. The opening screen tells the user the approximate time it would take to complete the activity, so they can make an informed decision about whether to start the game immediately or come back another time.

Progress Bar

The progress bar constantly informs the user of how much of the game is completed and how much remains.
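Because each game is linear with a fixed number of steps, the progress computation reduces to a clamped ratio. A minimal sketch, assuming a step-count model of progress (the selector name is hypothetical):

```javascript
// Percentage of a linear, fixed-step game that is complete.
function progressPercent(currentStep, totalSteps) {
  if (totalSteps <= 0) return 0;
  const clamped = Math.min(Math.max(currentStep, 0), totalSteps);
  return Math.round((clamped / totalSteps) * 100);
}

// Rendering hook, guarded so the logic above stays testable outside a browser.
if (typeof document !== "undefined") {
  const bar = document.querySelector(".progress-bar-fill");
  if (bar) bar.style.width = `${progressPercent(3, 12)}%`;
}
```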


We conducted multiple focus groups and usability testing sessions to get feedback. The focus groups consisted of academic faculty who would ideally be using this tool; both young faculty as well as seasoned professors who would serve as panelists and mentors. We also involved professionals with expertise in accessible design.

Usability Testing

We conducted usability testing with 11 users, some familiar and some unfamiliar with the promotion and tenure system. We used a think-aloud protocol while each user was instructed to play at least one game from start to finish.

We tested for:

  • Understandability: Does the user understand their task?
  • User flow: Since the flow of each game is primarily linear, does the user understand how to get to the next screen?
  • Findability: Can the user easily find content they assume/know is available?
  • Discoverability: Is supplementary content easily discoverable on the screen by the user?


To better quantify user responses to the new design, users were asked to complete a short survey on the usability of each game. The survey primarily consisted of Likert-scale questions and was created and distributed through Qualtrics.
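The actual analysis happened in Qualtrics, but summarizing a question’s Likert responses comes down to a mean plus a response distribution. A hedged sketch of that aggregation, assuming 1–5 scores and one array of responses per question:

```javascript
// Summarize one Likert question: mean score (2 d.p.) and a count per rating 1-5.
function summarizeLikert(scores) {
  const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  const counts = [0, 0, 0, 0, 0]; // counts[i] = number of (i + 1) ratings
  for (const s of scores) counts[s - 1] += 1;
  return { mean: Number(mean.toFixed(2)), counts };
}
```

Reporting the distribution alongside the mean matters for Likert data: a question averaging 3 from all-3s tells a very different story than one averaging 3 from a split of 1s and 5s.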

Focus Groups

I moderated a focus group consisting of our target users who tested the system. As these were academics, it was sufficient to give them a prompt related to a certain aspect of the game, and they would discuss it at length. My main task as a moderator was to keep the participants on track and not let the conversation drift off onto tangents.

The overall feedback from all the testing sessions was positive. The following are some of the observations and insights we received:

  • Users appreciated the indication of the time required to play each game at the start. This prepared them for what was to come.
  • Conversations that were intentionally negative got two distinct responses from users: one of rage, and the other of disbelief. Regardless, both of them started passionate conversations around the topic, which was the main purpose of the product.
  • Some users thought of this product as a test or training, constantly striving to get the correct answer. This goes against the purpose of the product to encourage the user to explore different scenarios. In response to this, we plan to reduce the points section to the background and put more stress on the explanation sections.
  • Users don’t read instructions, especially if they are presented in large chunks and paragraphs.
  • While the language of the actual text content should fit academic standards, the instructions should be straightforward, short, and easy to understand.


Short Term Changes

I made small changes to the design based on the feedback we got in the testing phase.

I broke up the large chunks of text, added bullet points and numbered steps to catch the user’s attention.

Future Iterations

We documented the rest of the feedback for the next group that works on this product. This documentation consisted of the survey results, as well as notes about research and testing. It should serve as a good starting point for the team that takes up this project in the future.


Keep stakeholders involved always
While a majority of our testing happened in the second half of our project, we had a couple of professors who served as mentors and field experts throughout the process.

Don’t assume user behavior
The first iteration of our design was text-heavy, with detailed instructions written in high-level English. Because the users were professors and academics, we assumed they would read every single word. It turned out to be the opposite: users often skipped crucial instructions. We therefore vastly reduced the amount of instructional text and changed it to simpler, more straightforward wording.

Define metrics for evaluation
At the start of the process, we defined what a successful redesign would entail. Metrics revolved around time, ease, pleasurability, and so on. This helped us assess our design in later sessions. Due to constraints set by our clients, we also defined what would not qualify as a metric. For example, the text content and the stories of each individual character were not to be changed, so if users had minor issues with them, we had to overlook those.

I worked as the principal designer on the team, alongside a developer, a mentor, and another designer/marketing professional. At the beginning of the project, we clearly defined our roles and veto powers; I was responsible for final decisions regarding design, of course taking all team members’ opinions into consideration. We set weekly meeting times and goals, often met for secondary meetings during the week when needed, and constantly kept each other updated on our work through Slack.