Overview of Methods
March 2019, Prepared by Laura Brown, Aaron Budris, Kristina Kelly, and Ryan Faulkner
- Introduction and Background
- Overall Approach
- Selection of Participating Trails
- Infrared Counts and Manual Count Calibrations
- Limitations and Sources of Error
- Data Collection
- Adjustments and Manual Count Calibrations
- Intercept Survey Methods
- Survey Tool
- Sampling Schedule
- Summary and Refusals
- Data Cleaning, Aggregation & Analysis
- Limitations and Sources of Error
- Methods Improvement
Introduction and Background
In 2011, the Connecticut National Recreational Trails Program Recreational Trails Plan, published by the CT Department of Energy and Environmental Protection (DEEP), stated that, “Little research has been done regarding the number and types of trail users around the state, potential conflicts and safety concerns.” Without user data, state, municipal and regional planning agencies as well as trail administrators had little information on which to base trail planning and maintenance decisions, evaluate the effectiveness of modifications to the trails, or justify the need for additional resources. Until 2016, such data collection was largely conducted on an ad hoc basis. The Recreational Trails Plan offered a solution to this problem. “Working with some of the academic institutions in the state, the DEEP should develop a protocol for surveying trail users so that the present and future needs of these constituents can be met.” The Connecticut Trail Census was developed as a direct response to this recommendation and was funded as a pilot program by the 2015-16 Recreational Trails Program, concluding in August 2018.
The Connecticut Trail Census is a statewide volunteer data collection and education program that encourages data informed decision-making and promotes resident participation in trail monitoring and advocacy. The Census involves data collection through a trail user intercept survey and infrared user counts on multi-use trail sites throughout the state of Connecticut and makes this data accessible to trail user groups, administrators, government agencies, and the general public. The project is funded by the Department of Energy and Environmental Protection Recreational Trails Program and project partners include the Connecticut State Greenways Council and the Naugatuck Valley Council of Governments.
The Trail Census was developed based on portions of the economic impact study “Pathway to Revitalization – Economic Impacts of Phased Completion of the Naugatuck River Greenway,” conducted on the Naugatuck River Greenway in 2015. This study involved the collection of intercept surveys on completed portions of the greenway as well as short-term infrared counter installations on those segments. Following the publication of this study, there was overwhelming interest from other trails around the state in gathering similar data. As discussed above, only a very limited number of trails in the state were collecting user data. With support from a grant from the Department of Energy and Environmental Protection Recreational Trails Program, a pilot of the Connecticut Trail Census launched in 2016 with twelve pilot sites and was supported through in-kind staffing from the Naugatuck Valley Council of Governments and the University of Connecticut Department of Extension. Methods for the pilot were based on the 2015 study and on direct feedback from trail user groups about the types of data most needed and their intended use.
The 2016 pilot was completed in 2018. Outcomes of the pilot project included:
- installation technical assistance and maintenance of 16 infrared counters (as well as temporary installations of counters at several sites) that logged over 1,401,000 trail uses, with physical quarterly downloads of the data at each site
- development of data analysis tools including manual count techniques and calibration factors
- analysis of over 70 hours of manual counts and 1,044 trail user intercept surveys conducted by 63 local volunteers logging an estimated 1,348 volunteer hours (an estimated value of over $36,000)
- publication of quarterly data reports and annual reports
- hiring and training of a part-time Trail Census Coordinator (hired Spring 2017)
- launch of the new website and data portal www.cttrailcensus.uconn.edu
- delivery of 6 face-to-face trainings and 2 webinars
- establishment of a social media presence and email newsletters for the program
- leveraging of a $40,000 grant in partnership with the University of New Hampshire for Extension colleagues to learn best practices for downtowns to capitalize on trails

The success of the 2016-2018 pilot led to a subsequent application to the Department of Energy and Environmental Protection in 2018, which has provided $206,049 in funds to continue the program to 2020.
The goals of the Connecticut Trail Census are based on learning and insights documented throughout the pilot program through both formal and informal evaluation. The program methods described here are continuously changing to best meet the needs of data users, and these changes will be updated in this document. Throughout the pilot period, a project steering committee consisting of Aaron Budris – NVCOG, Laura Brown – UConn, Kristina Kelly – CTTC Coordinator, and Ryan Faulkner – Data Analysis Staff met weekly to review what was working and what could be improved (these meetings have continued). Staff and project administrators also regularly solicit feedback from the Connecticut Greenways Council, and from Site Coordinators (volunteers who serve as the primary site contacts) through regular email and phone conversations and more formally through an online survey conducted annually. Site Coordinators also provided feedback on the pilot methods through a focus group conducted at the 2017 Connecticut Trail Symposium as well as a formal program evaluation survey completed in September 2018 with 26 respondents. Results of these evaluations are discussed throughout this document as justification for methodological improvements. With the receipt of ongoing funding in late 2018, the Trail Census steering committee formed a project Advisory Committee, currently comprised of 17 representatives including local and statewide trail advocacy groups, regional planners, and trail use advocates. This advisory committee meets quarterly, and one of its primary roles is to review and advise Census staff regarding project methods.
Unlike other trail research projects, the Connecticut Trail Census is an entirely volunteer-based data collection model. Trails selected to participate demonstrate that they have an interest in using the data effectively as well as volunteers who are willing to be trained to appropriately collect it. While this model presents unique opportunities and challenges for the program, the hope is that structuring the Census as a citizen-based volunteer program will promote more active resident participation in trail monitoring and a fuller understanding of the value of trails.
Selection of Participating Trails
The twelve sites selected for the 2016-2018 pilot were chosen by the Connecticut Greenways Council based on a call for applications, issued in October 2016, to trail user groups interested in collecting user data. Applicants were asked to provide: the name of the affiliated organization; reasons for participating; background on the trail segment and its relationship to the applying group or organization (type of trail, year built, average use, connections, and a marketing website or map if available); whether any count or survey data had been collected on the trail in the past, including when and how the data was collected, where it is available, and how it has been used; how the organization would recruit volunteers; what specific locations on the trail might be appropriate for collecting manual counts or intercept data; how the group or organization expected to use the data collected; and whether there was any specific information they felt would be most useful to collect. Of the thirteen applications received, twelve were selected to participate. The application process revealed that the primary questions these trail user groups were most interested in were how many people are using the trails, what types of users are using the trails, their mode and time of use, and where users are from. When the project received notice of additional funding in 2018, an additional application process was opened in December 2018. At the time of publication of this document it is not clear how many additional counters will be made available for new sites, so no additional sites have been notified of participation.
Infrared Counts and Manual Count Calibrations
One goal of the Trail Census is to collect long-term quantitative trail user information in order to gain an understanding of total trail use, identify use patterns and trends, and enable tracking of changes over time. Short-term counts were considered, but it was clear that a more permanent, long-term record of trail use was preferable to avoid the error caused by extrapolating short-term counts into long-term use estimates. A range of pedestrian counter technologies exist that are capable of tallying non-motorized transportation trips on trails. Several technologies were investigated for inclusion in this study, including active infrared, passive infrared, video, radar, pneumatic tubes, and magnetic loops. The Federal Highway Administration (FHWA) offers a good discussion of these technologies in its Bicycle Pedestrian Count Technology Pilot Project report published in December 2016, as does the National Bike Pedestrian Documentation Project’s whitepaper Automatic Count Technologies.
After careful consideration, passive infrared (pyroelectric) trail counters manufactured by TRAFx Research Ltd. in Alberta, Canada were selected as the preferred technology due to comparative cost per unit, durability, ease of setup, and portability. The counters work by detecting the heat difference between passing trail users and the ambient air or background temperature. They were mounted in lockable electrical junction boxes for security purposes as per the manufacturer’s recommendation. The counters record warm objects passing by the count site 24 hours per day, compiling them into one-hour time blocks. The data is recorded in .txt files and must be physically downloaded from the counter using a TRAFx docking station. TRAFx maintains a cloud-based data application that allows users to upload counter data and that can be used to manage, store, and view data, conduct analysis, and produce reports.
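The hourly records can be rolled up into daily totals with a short script. The sketch below assumes a simplified export with one `timestamp,count` row per hour; the actual TRAFx .txt layout differs, and the DataNet application normally handles this aggregation.

```python
import csv
from collections import defaultdict

def daily_totals(path):
    """Sum hourly counts into daily totals.

    Assumes a simplified export format with one 'YYYY-MM-DD HH:MM,count'
    row per hour; the real TRAFx .txt layout may differ.
    """
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for timestamp, count in csv.reader(f):
            day = timestamp.split(" ")[0]  # keep the date portion only
            totals[day] += int(count)
    return dict(totals)
```

A roll-up like this is also a convenient place to flag days with missing hours, which matter later when annual totals are estimated.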
During the program application process, applicants are asked to identify possible counter locations. Once trails are selected to participate, locations are investigated, and program staff meet with trail coordinators in the field to finalize the location and install the counters. Generally, sites are selected close to trailheads, but in an area where trail users are unlikely to congregate, to avoid miscounts caused by groups of people blocking the counter. Counters are affixed to existing sign posts, trees, or fence posts on one side of the trail, facing trail traffic and away from ancillary traffic that might be registered by the counter. Counters are installed per the manufacturer’s recommendations, approximately 3 feet from the ground and avoiding direct sunlight where possible.
At the time of installation, trail coordinators are given a tutorial on the basic operation of the hardware. Since trail counters are set out in often remote areas, and cannot be constantly monitored, there is risk of tampering or the possibility that a counter malfunction might not be caught for an extended period of time. Coordinators are asked to inspect the counters bi-weekly to ensure that they are working properly, and to report vandalism, malfunction or other problems. Information sheets detailing the inspection process were developed and are available on the project website https://cttrailcensus.uconn.edu, and provided in the project materials box that was distributed to coordinators.
Limitations and Sources of Error
There are some limitations to passive infrared counter technology in general, and the TRAFx counter specifically. First, the counters are not capable of determining the type of use so pedestrians, bicyclists, and any other user are indistinguishable in the count data. Second, there are multiple conditions that can result in error in the data. Sources of this error are described in detail below.
Undercounts are a common problem with TRAFx counters. The TRAFx trail counters used are considered “screenline” counters that detect trail users passing the line of “sight” of the sensor. Two individuals walking or riding side by side are counted only once since the counter “sees” one heat signature. This type of undercount is typical, and is referred to as occlusion. The same problem occurs when users pass the counter at the same time in opposite directions, and the undercount can be even greater for larger groups and on heavily trafficked trails. High ambient air or background temperatures can also cause undercounts. As the temperature approaches human body temperature, the differences between the two may become difficult for the counter to distinguish and may result in undercounting. In a similar fashion during very cold weather, highly insulated clothing may prevent counters from registering a difference between the background and clothing surface temperature resulting in undercounts. Yet another cause of undercounting results from user speed. Bicyclists or other users passing the counters at high speed are often not registered by the counter. For these reasons, counters were installed in locations selected to minimize undercounting as per manufacturer recommendations, for instance near trailheads where high speeds are unlikely, and out of direct sunlight to avoid overheating.
There are also conditions that may cause passive IR counters to overcount users. For instance, if trail users stop or congregate in front of a counter they may be counted several times. Pets or wildlife may be inadvertently counted. Heated background vegetation moving in the wind may cause false counts. Understanding these possibilities, the counters were placed in a manner to minimize the risk of overcounts: away from areas where trail users typically congregate, at a height at which most pets or wildlife would not be counted, and positioned to avoid background vegetation in the counter’s field of vision. Trail users will be counted every time they pass the counter, meaning that a trail user who takes an “out and back” route will be counted twice. All of the trail sections counted likely have a high percentage of these types of users, but it is difficult to calculate the percentage to be used in corrections. The published data from the Census therefore represents the number of trail trips or uses, not trail users.
A common reason for data error during the summer months is the issue of insects nesting near the sensor which can either block its ability to sense the temperature change when a trail user travels by or cause it to record false uses. Data loss due to nesting insects can be avoided through regular external checks of the sensor lens and cleaning with a cotton swab.
Technical failures of the TRAFx counters have been limited, but failures can occur due to low batteries or human error in the data collection process as described below. We encountered at least one unexplained technical failure of a TRAFx counter that resulted in severe overcounts. After review of the data, inspection of the counter, and consultation with the manufacturer, this counter was returned to TRAFx and replaced. To help avoid other technical failures, moisture absorption packets inside the counter casing are replaced whenever data is downloaded. While the counter itself has a low battery indicator, batteries are routinely replaced annually. Staff conducting the data downloads are also instructed to check the counter for functionality before and after downloads.
IR counter data is downloaded quarterly by project staff and imported into TRAFx DataNet, a cloud-based portal that allows for the storage, management, and basic analysis of trail use data. A docking station provided by TRAFx is used for this purpose. While this process is relatively simple, if the data download does not complete fully before the docking station is removed from the counter, some data may be lost.
Adjustments and Manual Count Calibrations
In order to account for the potential differences between the actual number of trail uses and what IR counters register, an adjustment factor is used to correct the raw counts for each trail. Coordinators are asked to conduct at least 10 hours of manual counts at the counter location annually; these are compared to the counter records to determine the typical level of over- or undercount and to calculate the appropriate adjustment factor. Volunteers conduct manual counts by visually observing the counter location and recording the number of trail users that pass the counter during one-hour periods synchronized to the counter’s clock using official US government time (www.time.gov). Volunteers are asked to record details about trail users including mode (cyclist, pedestrian, or other), and how many users were passing the counter simultaneously. Manual count forms are provided, as well as a detailed manual count instruction sheet, both digitally on the website and in hard copy in the project materials. A live manual count webinar training, “CT Trail Census – Conducting Manual Counts to Calibrate your Infrared Sensor,” was conducted for trail coordinators and volunteers on March 30, 2017, and the recording is posted to the project website. Completed manual count sheets are submitted to project staff, who enter the reported totals into a spreadsheet. Counter records are then retrieved for comparison to the manual count totals for each hour.
Raw data downloaded from TRAFx counters is adjusted or “calibrated” using the manual count totals. Individual adjustment factors are established for each trail location by dividing the manual count total by the IR count total at that location for identical time periods. In 2018, these factors ranged from 1.29 to 2.65. That is, the IR counter totals are underrepresenting the actual uses such that for every 1 count on the IR counter, 1.29 to 2.65 actual uses occurred. To make adjustments, the IR counter raw data downloaded from TRAFx is multiplied by the adjustment factor. These factors are consistent with factors calculated for other multi-use trails in similar studies. For trails that did not submit manual count forms, the lowest calibration factor (1.29) was applied. For the purposes of reporting, unless otherwise stated, all totals released in count data reports have been adjusted in this manner.
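The calibration arithmetic can be sketched in a few lines; the totals in the example are hypothetical, not values from any Census site.

```python
def adjustment_factor(manual_total, ir_total):
    """Location-specific calibration factor: manual count total divided by
    the IR count total for identical hours. Values above 1.0 indicate the
    IR counter is undercounting actual uses."""
    return manual_total / ir_total

def adjust(raw_counts, factor):
    """Apply a location's factor to its raw IR counts, rounding to whole uses."""
    return [round(c * factor) for c in raw_counts]

# Hypothetical totals: 10 manual-count hours observed 258 uses while the
# IR counter logged 200 for the same hours.
factor = adjustment_factor(258, 200)   # 1.29, the low end of the 2018 range
adjusted = adjust([100, 40], factor)   # [129, 52]
```

Dividing manual by IR counts over identical hours is what makes the factor a per-location correction: each site's occlusion, speed, and placement effects are baked into its own ratio.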
Intercept Survey Methods
The initial intercept survey tool was developed based on questions identified from a similar survey used in the state of Rhode Island and from the National Bicycle and Pedestrian Data Collection Project (NBPDCP), a survey effort sponsored by the Institute of Transportation Engineers Pedestrian and Bicycle Council. While an internet-based survey was explored in the beta test, project staff found that administering the survey via a tablet created difficulties for some users, and providing a link to an online survey resulted in extremely low response rates. Research has shown intercept surveys to be a reliable instrument for collecting information about trail use (Troped, Whitcomb, Hutto, Reed, & Hooker, 2009). A paper-based intercept survey tool was chosen for these reasons. The survey was peer reviewed by a panel of colleagues, professional staff, and trail volunteers; beta tested by staff from the Naugatuck Valley Council of Governments; and piloted on five existing portions of the Naugatuck River Greenway in 2015 as part of a related impact analysis. That study involved the development of a user intercept survey as well as collection of data from infrared counters. Changes were made to the initial survey following the pilot, based on trail volunteer feedback, to improve the clarity of the questions and its usefulness in broader application for this study. Questions on the survey included user perceptions of the trail, type and frequency of use, home zip code, expenditures related to the trail, and demographic information.
The Trail Census uses a modified non-probability stratified sampling schedule. During the pilot period (2016-2018), the research team recommended volunteers collect intercept survey data in May 2017 and September 2017, on the highest-use weeks and days specified by the National Bicycle and Pedestrian Data Collection Project (NBPDCP) and based on volunteer interceptor availability. Mid-September serves as the national week for trail data collection according to the NBPDCP. On each section of trail the Census required 2 hours of data collection on a weekday (Tuesday, Wednesday, or Thursday, 3-5 pm) and 2 hours of data collection on a weekend (Saturday, 10 am-noon).
These dates and times were chosen in the pilot period for several reasons. First, according to NBPDCP methods, this mid-September period “represents a peak period for walking and bicycling, both work‐ and school‐related. Weather conditions across the country are generally conducive, schools have been underway for several weeks, and people have returned from vacations and are back at work.” May, July, and January are suggested as optional additional collection times. Tested extrapolation procedures are available for these sampling times from the NBPDCP based on national trail use data. Second, because there had been no previous data collection on most of these trails, it was not possible to randomize sampling times (which would represent a more accurate sample of users) or to estimate highest-use times. Finally, these sampling times served as a starting point and guide for honing sampling times to best meet the needs of the study and the community. The actual site of the data collection was chosen to accurately represent normal trail use along the segment. These sites varied slightly from the locations chosen for infrared counts. During all data collection times, interceptors were asked to select a location away from the IR counter so as not to influence the count data due to congregation.
In 2018 the data collection time recommendations were changed, primarily to more accurately represent the population of actual trail users and reduce structural and sampling error. The following issues were considered:
The Connecticut Trail Census intercept survey uses a sample-based design. In taking a sample of a population, the sample should be as representative of the total population as possible within the constraints of time and money. The margin of error, sometimes called the “confidence interval,” is a way of understanding how effective a survey is at representing the actual values in a population. The smaller the margin of error, the more confidence we can have in the results. It describes the range of values above and below the observed value that represent what the actual values might be. Unless we can survey every single trail user, we will never be able to know the actual values, so providing a margin of error allows us to know approximately how accurate our estimates are.
The confidence level reflects the certainty that the actual values in the population fall within the margin of error we specified. For example, we might say that we are 95% confident that the actual gender distribution of trail users is within a 10% margin of error (higher or lower) of our observed values.
A sample size is the number of completed surveys. It’s called a sample because it represents only a part of the group of people (or total population) who are using the trail at that location.
These are the steps we used to arrive at our sample methods in order to reduce the error:
Step 1: Define the total population
For the purpose of this survey, the total population is all of the users of a trail at a particular community location. In the Trail Census, the population is not the total population of users of the entire length of a given trail. The population was calculated in 2017 and 2018 using the IR counter data total annual use estimates. In all cases this was the total population of uses divided in half (since most trips recorded are out and back).
Step 2: Decide what level of accuracy is appropriate
Next we determined how much risk we’re willing to take that the results of the survey differ from the population as a whole. This means measuring the margin of error and confidence level for the sample. In the pilot year of the Connecticut Trail Census and based on our counter estimates of numbers of trail users, we believe the margin of error from our survey samples ranged from 7% to 19%. Margins of error are likely to be higher in subgroups based on the variables in question (i.e. gender, age, income, types of use, time of use etc.).
Step 3: Define the sample size
The next step was to balance the confidence level we want and the margin of error we find acceptable, to decide how many completed survey responses we need for each location.
To be able to say with 95% confidence that the actual value is within a 10% margin of error of our observed values, each Trail Census site would need to collect about 100 surveys.
This assumes a relatively high margin of error. However, since most of our trail advocacy groups and constituents are using this data to get a general idea of how these trails are used, this is sufficient for the current use of the data.
In order to be able to say with 95% confidence that the actual value is within a 5% margin of error, each site would need to collect about 380 surveys. Given the time constraints of this project and the many volunteers involved, we advised sites to collect 100 surveys.
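These sample size targets follow from the standard formula for estimating a proportion at 95% confidence, assuming maximum variability (p = 0.5). A minimal sketch:

```python
import math

def sample_size(margin, z=1.96, p=0.5, population=None):
    """Completed surveys needed for a given margin of error at 95%
    confidence (z = 1.96), assuming maximum variability (p = 0.5).
    An optional finite population correction lowers the requirement
    for small trail user populations."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(sample_size(0.10))   # 97  -- the "about 100 surveys" target
print(sample_size(0.05))   # 385 -- the "about 380 surveys" target
```

The finite population correction matters only for very lightly used locations; for trails with tens of thousands of annual uses it barely changes the result.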
Step 4: Calculate the response rate
The response rate is the percentage of actual respondents among those who receive the survey. Based on our observations in the 2017 pilot year, we know our refusal rates are less than about 2%. That is, almost every trail user on the trails during the survey periods completed a survey.
Step 5: Total number of people to survey
Given what we know from the above, each trail should aim to complete about 100 surveys.
Step 6: Choose a sampling method
There are many methods for sampling. In order to be able to generalize about a total population based on the sample, we would use a random, or probability, sample. In this type of sample, every user of a trail at a particular location would have an equal chance of completing the survey. For any trail groups that would like to complete a random sample, we are happy to provide a random sampling schedule for times throughout the year. Given the constraints of our program, however, random sampling is not entirely possible. For our purposes we have adopted a modified non-probability stratified sampling schedule. That is, we have stratified the days of the week (weekends and weekdays) as well as the seasons and times of day, with the hope of ensuring some representation of users within each of these times. This type of sampling will allow us to compare users of various trails across the time schedules and weekends and weekdays, but it is not generalizable to the general population of users at a location or across Connecticut’s multi-use trails as a whole.
These recommendations were to:
- Collect at least 100 surveys (this recommendation is discussed in detail under “Limitations and Sources of Error” below)
- Collect surveys in 2-hour intervals if possible with 2 surveyors.
- Rotate between weekends/weekdays
- Rotate times of day: (Consider the following time blocks. Please review your trail use patterns. Aim to collect data during each of the following periods: 6-8 am, 8 am – 12 pm, 12-2 pm, 2-6 pm, 6-8 pm)
- Aim to collect during every season
- During each surveying session, one surveyor should record refusals and complete the summary sheet
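One way to check a site's coverage against these recommendations is to enumerate the strata (season, day type, time block) programmatically; the labels below simply mirror the recommendations above.

```python
from itertools import product

# Strata mirroring the Census recommendations; volunteers schedule actual
# sessions within each cell based on availability.
SEASONS = ["winter", "spring", "summer", "fall"]
DAY_TYPES = ["weekday", "weekend"]
TIME_BLOCKS = ["6-8 am", "8 am-12 pm", "12-2 pm", "2-6 pm", "6-8 pm"]

def stratified_schedule():
    """Enumerate every season / day-type / time-block cell so a site can
    check its completed survey sessions for coverage of each stratum."""
    return list(product(SEASONS, DAY_TYPES, TIME_BLOCKS))

cells = stratified_schedule()   # 4 seasons x 2 day types x 5 blocks = 40 cells
```

A site need not fill every cell; the point of stratification is to avoid all sessions landing in, say, summer weekend mornings.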
One of the founding principles of the Trail Census is to make trail user data accessible and available to decision makers. For this reason, it is essential that participating volunteers understand that the data is, essentially, theirs, and not data being collected by or for any other entity or research purpose. This is an underpinning value in the development of the program and in the recommendations provided to the volunteers who collect the data. All trainings therefore serve as recommendations to the participating volunteers, and the resulting data is viewed as theirs to share or not. Recommendations are based on learning gained through the program pilot, best practices from other similar survey projects, and efforts to reduce error. To date, no communities have chosen not to share their data for analysis. However, not all communities have been able to fully comply with the recommendations for appropriately implementing the survey. This is described in more detail below.
Training methods have been adapted from year to year to best meet the needs of participating volunteers. In the 2016-2018 pilot period, volunteers representing participating sites attended a one-hour face-to-face training on how best to administer the survey tool. These trainings were held at regional locations to make them most accessible to volunteers. Six face-to-face trainings were held and a total of 38 people participated. Trainings were presented such that attending volunteers could provide training and materials to other volunteers who were not able to attend. In some cases trail coordinators attended the training and conveyed essential information to the volunteers who would ultimately collect the data. At this training volunteers from each collection site received a box of survey tools including 200 copies of the paper surveys, instruction sheets for common questions on the survey, pre-posted envelopes and summary sheets for sending completed surveys, information sheets regarding the University of Connecticut IRB approval, clipboards, pens, orange vests, and post-it notes for identifying any issues on the survey.
Summary and Refusals
Following data collection, trail coordinators are asked to complete a summary sheet documenting the number of refusals, the time, day, and weather conditions during the data collection period, and any other unusual circumstances. Completed de-identified surveys are sent to the Trail Census research team in stamped envelopes provided to volunteers during training. The research team documents the metadata on the Summary and Refusals Form and numbers the surveys. The survey data is subsequently entered manually into Qualtrics software.
Data Cleaning, Aggregation & Analysis
Because UConn faculty are involved in the data analysis portion of this project, the survey analysis portion of the Trail Census was approved (determined exempt for Human Subjects Review) by the UConn Institutional Review Board, number X16-181 in 2017 and #X15-174 in 2018. No University of Connecticut faculty or students were involved in the actual collection of data, and the University was involved only in the aggregation of de-identified data provided by the trail user groups after collection. As described above, the data is viewed as belonging to the trail user groups who collect it, and they make the decision to share it with UConn for analysis.
In 2017 eleven of the fifteen sites (73%) collected an aggregated total of 1,042 surveys. In 2018, ten of the sixteen participating (63%) sites collected an aggregated total of 1,146 surveys.
Under IRB guidelines, data received from minors under the age of 18 could not be considered for analysis, so this data was removed prior to analysis. With data from minors removed there were a total of 1,131 surveys for analysis from 2018 and 1,003 surveys for analysis from 2017. Data was also reviewed prior to analysis to identify surveyor and data entry errors. Additional information about how errors were handled for each question was documented in the “Read Me” tab of the survey data spreadsheet and is available on request. Analysis reports for both the IR counter data and survey data are available at http://cttrailcensus.uconn.edu.
Limitations and Sources of Error
The citizen science structure of this study creates significant challenges for generalizing and communicating the data collected. Because data collection points were selected by the participating trail groups themselves, the sites in the Trail Census are not necessarily an accurate sample of bicycle/pedestrian trails in the state. Further, because the data collected represents use within a particular community around a data collection point, the data cannot be used to accurately estimate total use along the entire length of a trail.
Following the pilot period of data collection in 2017, a significant effort was made to identify and reduce potential sources of error. Because this will never be a fully controlled research design, however, the data should always be viewed as not broadly generalizable or representative of the full population of trail users. The goals of this review effort were to decrease the margin of error of volunteer-administered intercept surveys while also increasing Site Coordinators’ and volunteers’ ability to successfully administer them.
Total Survey Margin of Error
In 2018 the Trail Census survey instrument and processes were reviewed according to factors described in Groves and Lyberg’s “Total Survey Error: Past, Present, and Future,” which discusses sources of error according to the Total Survey Error model. The total survey margins of error were calculated based on the survey sample and the estimated total trail user population at each data collection site. The total user population was calculated using the adjusted annual infrared counter totals. Most counters had some days of missing data; for those days, the average daily count was added to the adjusted counter totals. Because the counters count only “uses,” not “users,” the estimated total annual uses were divided in half to adjust for “out and back” trips, which account for the vast majority of trips on these trails. These adjustments in and of themselves present a source of error.
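The two adjustments described above (filling missing days with the average daily count, then halving total uses to account for out-and-back trips) can be sketched as follows. The function and variable names are illustrative only, not taken from the Trail Census data pipeline:

```python
# Sketch of the counter adjustments described above; names are
# illustrative, not from the Trail Census codebase.

def estimate_annual_users(daily_counts):
    """Estimate total trail users from infrared counter data.

    daily_counts: list of daily use counts, with None for missing days.
    """
    observed = [c for c in daily_counts if c is not None]
    avg_daily = sum(observed) / len(observed)
    # Fill each missing day with the average daily count.
    total_uses = sum(c if c is not None else avg_daily for c in daily_counts)
    # Counters record "uses", not "users"; halve the total to adjust
    # for out-and-back trips, which account for most trips.
    return total_uses / 2

# Example: a 5-day window with one missing day.
print(estimate_annual_users([120, 80, None, 100, 100]))  # → 250.0
```

Both steps introduce error of their own: average-day imputation smooths over seasonal variation, and the flat 50% out-and-back adjustment assumes a trip mix that varies by trail.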
The following formula was used to calculate overall survey margins of error (MOE):
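Assuming the standard margin-of-error formula for a proportion with a finite population correction and p = 0.5 (maximum variability), which is consistent with the description here, the calculation can be sketched as:

```python
import math

# Margin of error for a survey sample drawn from a finite population.
# This assumes the standard formula with a finite population correction
# and p = 0.5 (maximum variability); the exact formula used by the
# Trail Census team may differ.

def margin_of_error(n, N, z=1.96, p=0.5):
    """n: sample size; N: estimated total trail user population."""
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc

# Example: 100 surveys from an estimated 50,000 annual users.
print(round(margin_of_error(100, 50_000) * 100, 1))  # → 9.8 (percent)
```

Note the example reproduces the rough 10% margin of error cited for a 100-survey sample at 95% confidence.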
We assume a confidence level of 95% (z = 1.96). This means we can say with 95% confidence that the characteristics of the actual population of trail users fall within the range of the survey value plus or minus the margin of error shown in Chart 2 below. Some margins of error are quite large due to very small sample sizes. Because our samples were also not random, this margin of error does not account for selection bias, but it gives a rough sense of how representative the survey data is of the overall population. Margins of error may also be calculated for individual quantitative questions on request.
Margins of error for 2018 survey samples ranged from 6.0% on the Hop River Trail in Vernon to 32% on the Farmington Canal Heritage Trail, as shown below in Chart 2. Survey sites were asked to collect at least 100 surveys; based on the known overall use counts on the trails in the program in 2017, this would result in roughly a 10% margin of error at a 95% confidence level.
Chart 2. Overall Survey Margins of Error
Errors of non-observation
Errors of non-observation are primarily errors caused by erratic sampling times. Based on the considerations described above under “Sampling Schedule,” the research team concluded that the level of rigor needed to significantly reduce this error, roughly 30 randomly selected hours of survey data collection spread throughout the year, would be unsustainable for volunteer groups. Refusal rates were observed to be very low at all sites; however, many refusals were bicyclists who could not or would not stop. This may bias the sample of respondents toward foot-based trail traffic.
The charts below compare the survey data collection months, days, and hours to overall trail use across all of the trails for which infrared counter data was available. While overall usage declines from January through March, the survey samples are heavily skewed toward the warmer months and may not be representative of winter users. As shown in Chart 3, significantly more surveys were collected on weekends, particularly Sundays (likely due to surveyor availability), than overall use estimates would suggest. In terms of time of day, the survey samples more closely matched overall use, with few users before 6 am or after 9 pm. These use patterns are likely to vary significantly from trail to trail.
Errors of observation and surveyor error
Following the 2017 pilot, the survey tool was revised to address several sources of error. In particular, some demographic data collected in 2017 was recorded by the interviewer based primarily on observation. Discrepancies in the way interceptors asked respondents to complete some questions also led to error, particularly with regard to group size, spending on that trip to the trail, and annual spending. Another common issue in the 2017 survey was the completion of one survey to represent multiple people. These questions were modified in 2018, and more thorough training was provided to interceptors, to reduce these errors.
Balancing a rigorous research design with the needs of a volunteer-based project and overall project goals has been the primary methodological challenge of the Connecticut Trail Census. Regardless of the recommendations provided to increase the validity of the data, volunteers are unpredictable: schedules change, weather conditions affect people’s willingness to survey or conduct manual counts, and overall willingness to participate fluctuates. The people who completed the training are not always the volunteers conducting the counts or surveys, and those who pass information on to other volunteers may leave out important details. During the 2016 pilot we also discovered that the capacity and involvement of the Trail Site Coordinators (TSCs) played an important role in each site’s ability to meet the recommendations of the project. Some site coordinators already worked with an existing group of volunteers or had the skills to reach out and engage volunteers in the effort. Other Trail Site Coordinators needed support in volunteer engagement or did not see it as part of their role (for instance, if they were professional staff assigned to the role) and so were not able to fulfill the recommendations.
We know, however, that this data is being used. The 26 respondents to a formal program evaluation survey completed in September 2018 indicated they have used Census data in the following ways:
- To make more informed trail decisions – 72.2% of all respondents said they either have (38.9%) or plan to (33.3%) reference CTTC trail use data to make trail decisions. “[We have] used the data to plan where to focus maintenance needs; CTDOT has used [CTTC trail use data] to justify paving versus stone dust applications.” “[We] are using data to prove need for a port-a-potty in this area.”
- In long term planning efforts – 72.2% reported either already (50%) or planning to (22.2%) integrate CTTC trail use data in long term planning efforts. “Hope to use the data to advance the DOT shovel ready plan to create a bike lane.” “[Using the data to] expose...the need for a more robust and coordinated approach to trail operation, maintenance, promotion, funding, data gathering/analysis.” “It is important that when promoting the trails, especially the building/improvement… that we have statistics supporting their use to counter the ‘I never see anyone using the trails’ argument.”
- To communicate trail data to local officials or the public – 77.8% of respondents said they have (61.1%) or plan to (16.7%) communicate CTTC trail use data to local officials or the public. “The data sure made our Mayor and Administration stand up and take notice! Prior to CTTC, they were totally unaware of how much our trails were used. Public also! The value it adds to the local economy, and to quality of life in our town took front seat for the few days of the announcement of the Trail Census results.”
- To identify patterns and trends on the trail – 77.8% of respondents reported having already (50.0%) or planning to (27.8%) use the data to identify patterns and use trends on their trail. [We have used the data to] “understand the most popular trails, where policing needs to be done.” “Many in our community have the impression that not many people use the trails. Jaws dropped when the numbers were shared at a public meeting.”
- To leverage other resources – 72.2% of respondents have (44.4%) or plan to (27.8%) use CTTC trail use data to leverage other resources. “We will use the data for fundraising and for public relations at various events we attend (Farmer’s Market, Main Street Marketplace, etc.).” “Will make note of trail use in further RTP grant applications.”
The results of the evaluation show that use of the Connecticut Trail Census data is primarily at the local community level. However, as a program evaluator commented, “The data [obtained through the Trail Census program] is pushing Connecticut into a role as a model for other states.” Implementing this program as an ongoing uniform, consistent, statewide initiative offers the potential for significant expansion and broader impacts.
Given these results, the research team has decided to focus primarily on the educational potential and partnership opportunities this program offers, rather than solely on rigorous research validity. The following are some research questions this project and its data may be used to inform:
- Evaluate the effectiveness of the volunteer-based/citizen science method – How does survey training affect the quality of data collected? What did we learn? What were the errors, gaps, challenges, and benefits of this method?
- Can we statistically identify similarities between trails using the counter data? Can typologies or factor groups be used to generalize or better understand expected trail use? Can trail typologies, such as those documented by Greg Lindsey (University of Minnesota) and Jeffery Wilson (IUPUI) for the IHTC, be applied to these sites?
- We hypothesize that health impacts represent the greatest economic impacts of the trails. What additional health data would be helpful to understand the health of current users? How does that differ from non-trail users?
- Can we design a sampling system for counter data collection that does not require 24/7 counts, but instead uses short-term counts calibrated against the extensive data collected so far?