by Joshua Smith, Retired Chief Master Sergeant
Securing funding for a human performance program in the military requires intimate knowledge of the inner workings, policies, and politics of different military branches. It’s also crucial to outline and implement a well-designed pilot program that provides clear outcomes derived from accurate and consistent data to demonstrate the validity of your request and show how human performance monitoring will benefit candidates, staff, and leadership.
This article aims to walk you through the process I used to obtain funds to launch and sustain a successful human performance program, with the hope that you can do likewise.
DESIGNING A PILOT PROGRAM WITH CLEAR OUTCOMES
To make a strong case for your funding request, you should first run a pilot that shows the value of your human performance initiative. Our pilot program lasted a year, and when it was complete, we compared it against two previous projects and determined that Smartabase was the best tactical performance platform for our purposes.
Before you start, you need to talk to all the main stakeholders and determine what your pilot will consist of and what will constitute unsuccessful, moderately successful, and fully successful outcomes. You also need to decide what the next steps will be after the program is completed.
We broke our pilot down into three distinct but interrelated stages. The first involved figuring out what information we should provide to candidates so they had feedback on their progress. The second was deciding which data we needed to present to staff to help them make more informed risk management decisions. The third was how we could give leadership a way to see which candidates and courses had a higher success rate based on the data collected. From there, we broke down each of the three categories to make sure we knew what we were doing, how we were doing it, and why it would benefit candidates, staff, and leaders alike.
When thinking about how best to present the information to students, we decided to give them the ability to not only view their own data, but also to see how they compared to their cohort and previous candidates who had gone on to graduate.
On the staff side, we wanted to more readily identify high-vulnerability candidates. To zero in on those who had a higher risk of injury, we looked at the readiness of the candidates, which included overtraining markers, hydration levels, and cognitive readiness.
From a leadership perspective, we explored which characteristics made an individual more likely to succeed than their fellow candidates, examining factors like age, sports background, where they came from, and so on. With a broader lens, we also assessed the progress of the current program and compared it to the baseline metrics of previous ones to see year-to-year trends and deviations and figure out whether or not we were moving in the right direction from an overall performance perspective.
JOINING THE DOTS BETWEEN DATA COLLECTION AND MANAGEMENT
Once you’ve determined which data you want to capture during your pilot program and why it will be beneficial, you need to decide which systems you’ll use to collate, manage, visualize, and report on it.
By introducing an athlete management system (AMS) such as Smartabase as early as possible in any human performance program, you'll be able to immediately capture accurate data from whatever wearables and other monitoring devices you decide to use. With an integrated system at each location, you'll start building a real-time, whole-person data twin that puts a member's data from previous training facilities at your fingertips, rather than having to call around to get that data sent over and correlated with the information you have on site.
It’s also important that you don’t overcomplicate things by adding too much too soon. We initially introduced four sensors to monitor each candidate, and it was too much data right out of the gate for everyone, particularly for the instructors who had to process it and make changes to their programs. So we scaled back our data collection and dialed in on just two sensors and their data sets. Then, once staff understood how to utilize the first two, we added an additional sensor. After several months we added another, then another, before finally adding the last. This phased approach allowed staff to become comfortable with what data was available and how it could be used to optimize their program.
We also decided early on to concentrate our pilot on a single course. Once we’d worked out the challenges with a course of 150 candidates and were seeing the outcomes we’d hoped for, then we extended sensors and data collection via Smartabase to all of our pipeline courses to capture a true initial entry through graduation data set.
This involved monitoring the performance of candidates as they moved down two different tracks. The first was our one AFSC program that feeds into TACP (Tactical Air Control Party) training, whereby students went straight from our course into theirs and didn’t move around. The second was for PJs, combat controllers, and special reconnaissance candidates who left the initial program and went into our A&S selection process. After they’re selected, they attend between six and nine different schools over the next couple of years. We wanted their human performance, cognitive, academic, and progress checks to move with them and track their progression and regression as they transitioned between courses. This would allow staff at the start of a current course to identify a performance area in which a candidate had struggled during a course attended 16 months earlier.
One example would be a candidate who struggled with exits during the Military Free Fall parachute course. They passed, but struggled, and now, 16 months later, they begin the final air operations block. Knowing that exiting was a challenge initially, you can provide additional attention focused on their exits from the aircraft. By recognizing this as an area of struggle, staff can put proper risk mitigation measures in place to reduce the likelihood of a repeat incident, and if one occurs, an instructor is there to immediately assist.
AVOIDING PITFALLS AND SUPPORTING THE WHOLE AIRMAN
If you begin a human performance project by going too broad with your pilot, you’re in danger of weakening your case for funding because you won’t be able to present clear and concise data that’s tied to well-defined objectives. It’s better to start simple and small with a targeted intention of what you hope to achieve, and then build outward from there. We also found it beneficial to put Smartabase into place immediately so we could begin capturing, managing, and interpreting data right out of the gate.
Some pilot programs start off by collecting information from wearables and other devices up front, then trying to go back and enter it into an AMS afterward. This is a time-consuming process that creates a delay between when the data is obtained and when it becomes actionable, reducing effectiveness and making staff reactive instead of proactive. The earlier you implement an AMS, the sooner you can start making data-informed decisions and the better the outcomes of your pilot will be. This will make it more likely that you’ll get approval for the funding you need to continue and expand your human performance program.
It will also allow you to take timely action that otherwise wouldn’t be possible. In addition to capturing objective data during our pilot, we also conducted a six-question daily survey with our candidates that asked how well they slept, how motivated they were to train on any given day, and whether they had any problems preventing them from staying focused on the task at hand: graduating. We noticed that one of our candidates’ performance had dropped below his normal high standards during a 48-hour period, and in his survey on day three he noted that he and his wife had a problem.
We brought him in within 45 minutes of the Smartabase notification and he told us that she was stranded in Africa in the midst of a civil war that had just broken out. He hadn’t heard from her in three or four days and didn’t even know if she was still alive. It was the resulting stress that had impacted his output during training. Through DoD channels, we were able to connect this couple via a phone call, and the candidate’s wife assured him that she was safe. This shows how using a system such as Smartabase to manage both objective and subjective data goes far beyond human performance and can positively impact the lives of candidates and warfighters in keeping with the “Whole Airman” approach.
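The general pattern behind this kind of alert can be sketched in a few lines of code. Smartabase's actual notification logic is proprietary, so the function below is purely illustrative: the field names, the two-day window, and the z-score threshold are all invented assumptions, chosen only to show how a candidate's own baseline plus a subjective survey flag might trigger a staff follow-up.

```python
# Hypothetical sketch of baseline-deviation alerting. Thresholds, window
# size, and inputs are assumptions for illustration, not Smartabase's logic.
from statistics import mean, stdev

def flag_candidate(perf_history, recent_scores, survey_flags, z_threshold=-1.5):
    """Flag a candidate when the daily survey reports a problem, or when
    every score in the recent window falls well below their own baseline."""
    if any(survey_flags):      # e.g. "anything preventing focus?" answered yes
        return True
    if len(perf_history) < 2:
        return False           # not enough baseline data yet
    baseline, spread = mean(perf_history), stdev(perf_history)
    if spread == 0:
        return False
    # True only if all recent scores are z_threshold standard deviations
    # (or more) below the candidate's historical baseline.
    return all((s - baseline) / spread < z_threshold for s in recent_scores)

# A strong performer whose output drops for two straight days, or who
# reports a personal problem on the daily survey, gets flagged for staff.
history = [92, 95, 93, 94, 96, 95]
assert flag_candidate(history, [80, 78], [False, False, True])   # survey flag
assert flag_candidate(history, [80, 78], [False, False, False])  # performance drop
assert not flag_candidate(history, [94, 95], [False, False, False])
```

The key design choice, reflected in the story above, is that a subjective survey response alone is enough to trigger human follow-up; the objective data narrows attention, but the conversation with the candidate is what surfaces the real cause.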
FINDING THE RIGHT FUNDING SOURCES AND DEMONSTRATING ROI
With funding, timing is everything. We had a Group Commander who had the foresight to see how a system such as Smartabase could justify the programs and initiatives we were trying to implement, validate standards for courses, and show where we had areas for improvement by analyzing data sets for production improvements and injury prevention. That being said, we still needed to demonstrate a return on investment (ROI). We did this by showing that, once we had run the initial pilot with one course and expanded it to all our pipeline courses, we were able to reduce the number of incoming candidates from 1,700 a year to 800 and still produce the same results, as long as the quality of candidates shipping was the same. As we had 900 fewer personnel to feed, house, train, and treat injuries for, the cost savings alone would pay for the Smartabase license.
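The arithmetic behind that ROI argument is worth making explicit, because it is the shape of the calculation you will put in front of leadership. The candidate numbers come from our program; the per-candidate cost and license fee below are placeholder assumptions you would replace with your own unit's figures.

```python
# Back-of-the-envelope ROI for the candidate-reduction argument.
# Intake numbers are from the article; dollar figures are placeholders.
candidates_before = 1_700    # annual intake before the program
candidates_after = 800       # annual intake after expanding to all courses
cost_per_candidate = 25_000  # assumed annual cost to feed, house, train, treat (USD)
license_cost = 250_000       # assumed annual platform license (USD)

reduction = candidates_before - candidates_after   # 900 fewer personnel
gross_savings = reduction * cost_per_candidate
net_savings = gross_savings - license_cost

print(f"Reduction: {reduction} candidates")
print(f"Gross savings: ${gross_savings:,}")
print(f"Net savings after license: ${net_savings:,}")
```

Even with conservative per-candidate costs, a reduction of this size dwarfs a typical software license, which is why the argument landed: the license is funded many times over before any injury-prevention or staffing savings are counted.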
The other component of making a strong ROI case was that Smartabase took the place of several disparate systems, which saved yet more time and money. With it we can track training equipment issued to students, room assignments and the items assigned to each room as candidates move from course to course, pipeline course scheduling for candidates, and even staff scheduling for courses – just to name a few of the systems that became obsolete with Smartabase implementation. Look at Smartabase not just as a human performance database, but as a lifecycle management system that eliminates other siloed software. That consolidation might pay for Smartabase outright or significantly offset its cost.
Sometimes dedicated funding can be difficult to come by and can take up to several years to be disbursed, even after your human performance project is approved. This is where alternative funding programs can prove useful. Initiatives such as Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) have both paid for human performance projects in the military. Going beyond these options, Other Transaction Authority (OTA) is also a possibility, as it provides “procurement instruments other than contracts, grants, or cooperative agreements enabling flexible business arrangements to acquire research and development to support technology advancement or to quickly develop a prototype,” according to AiDA.
We’ve also seen how certain situations can lead to new funding pipelines opening up, such as those that were made available during the COVID pandemic. Even if warfighters’ health tracking is a secondary goal, emphasizing it in a proposal or grant application could help your unit garner funding that can also be utilized for other aspects of human performance. As a system like Smartabase can unite the benefits of an Electronic Medical Records (EMR) system and an AMS – and even served organizations like the Air Force Research Laboratory as a COVID monitoring platform – it’s a natural fit for such a dual-track initiative.
It’s also important to know the right steps to take during the procurement process. GSA (General Services Administration) and DLA (Defense Logistics Agency) contract vehicles enable units to avoid a lengthy bidding process that can draw out the contracting phase. These vehicles make it easier to reach an agreement with an approved vendor, such as Fusion Sport via Guardian Premier Solutions, so that funds are executed in a timely manner and the human performance project gets up and running quickly.