Sep 1
By JBNA Quality of Life and Environment Committee
During the month of July, hundreds of James Bay residents completed a Quality of Life Survey sponsored by the James Bay Neighbourhood Association (JBNA).
The JBNA Quality of Life and Environment Committee created the study to identify and measure issues of interest to all residents of James Bay.
The questionnaire design, although based upon the work of other, larger jurisdictions, focused on issues which are of concern to James Bay residents and which are within the purview of the JBNA. A content analysis of JBNA minutes over a three-year period, which determined how frequently topics were mentioned in meetings, resulted in the following table.
The survey instrument was pre-tested amongst the Quality of Life Committee members and the JBNA Board. The pre-test led to several modifications to the wording of questions and to respondent instructions.
Survey results are intended to provide direction to the JBNA Board in setting tasks and priorities for the coming year.
The JBNA Quality of Life and Environment Committee looks forward to presenting the survey results at a JBNA meeting this fall and through the James Bay Beacon.
What gets measured gets managed
From this work, the following categories of topics were developed.
- Community Safety
- Traffic and Transportation
- Access to Amenities
- Quality of Property Development
Given the prominence of People Movement issues, this topic was further investigated by mode (buses, motorcycles, etc.).
Different issues affect different areas within James Bay. Therefore, the survey also contained questions related to the location and orientation of the respondents' residences as well as other characteristics.
The sampling area was the entire residential community of James Bay.
Questionnaires were made available to residents through the James Bay Beacon, delivery of individual questionnaires by volunteers to street accessible mail boxes, delivery of individual questionnaires to a large number of multi-unit buildings where access was granted, and placement of questionnaires at several outlets distributed throughout the neighbourhood.
With a returned sample of 555 and a population of 10,760 residents, the sample will accurately represent the total population 95% of the time (19 times out of 20). This is referred to as the Confidence Level. Moreover, the results for any percentage will be accurate to within +/- 4%. For example, if 50% of residents believe "Littering" should be a community priority, then 19 times out of 20 a random sample of 555 residents will reflect the true population value (50%) within an interval between 46% and 54%. This is referred to as the Confidence Interval.
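The +/- 4% figure can be checked against the standard margin-of-error formula for a proportion at the 95% confidence level. The sketch below assumes the common textbook formula with a finite-population correction; the article does not state the exact calculation the committee used, and `margin_of_error` is an illustrative name.

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Margin of error for a proportion p, sample size n, population N,
    at the 95% confidence level (z = 1.96), with finite-population correction."""
    se = math.sqrt(p * (1 - p) / n)        # standard error of a proportion
    fpc = math.sqrt((N - n) / (N - 1))     # finite-population correction
    return z * se * fpc

# Survey figures: 555 respondents from 10,760 residents, worst case p = 0.5
moe = margin_of_error(555, 10760)
print(f"{moe * 100:.1f}%")  # → 4.1%
```

The result is just over 4%, consistent with the +/- 4% quoted above; p = 0.5 is used because it yields the widest (most conservative) interval.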
The high response to the JBNA survey means that the results closely reflect community views.
There is often bias in surveys. The topic of bias has two broad forms - Objective or Methodological Bias, and Subjective or Researcher Bias.
Objective or Methodological bias does not suggest that the researcher is dishonest - it simply means that there is the potential for error in one or more of the sampling process, the survey questionnaire, or the analysis of the results.
Sampling bias occurs when:
- Particular groups are under-represented in the sample (for example, if the sample were taken only from shoppers at Thrifty's Supermarket). In the case of this survey, questionnaires were made available to all James Bay residents, so this is unlikely to be a source of bias.
- The sampling relies on volunteers (for example, if respondents were required to attend a JBNA meeting to register). Again, this survey was open to all James Bay residents, so this is unlikely to be a source of bias.
- Sub-groups within the population choose not to respond. The survey results are somewhat biased in this respect; in particular, younger James Bay residents are under-represented in the sample. It is important to recognize this element of bias when interpreting the results and to consider ways of reducing it in subsequent surveys. The under-representation of sub-groups does not, however, negate the usefulness of the survey based on those who did respond.
Survey instrument bias occurs when there are leading questions, ambiguous questions, or poorly worded questions. The process of "pre-testing" the questionnaire can identify many such problems. In this particular survey, there were two possible sources of instrument bias.
- The first is that Question 4 asked whether a particular subject (e.g., Littering) had become better, worse, or remained unchanged, and then asked which of the 28 subjects were the top five priorities requiring attention. A number of respondents either did not rank the priorities or did so in ways other than instructed. This resulted in marginally fewer responses to the priority portion of the question; however, given the large sample size and assuming the problematic responses are randomly distributed, the results can be considered an accurate indication of community priorities.
- The second concerns the reasons individuals may not answer a question. Typically these include "Don't Know", "No Opinion", and "Did Not Respond". Ideally, each of these responses would be recorded separately, even though they are often grouped for analysis as "No Response". In this survey - due to space limitations - it was decided that individuals, for whatever reason, could simply choose not to respond to a particular question, and it would be coded in that manner. We therefore cannot determine the underlying reasons for "no response".
Overall, the survey results are a very good representation of the views of James Bay residents.
Subjective bias is a more difficult issue to identify and address because it implies dishonesty on the part of the researchers. It can be recognized in statements such as "There is a hidden agenda!" or "They designed the survey to get what they wanted!" Such statements call into question the results of the survey by questioning the integrity of the researchers.
Such statements are not unusual because:
- Those who receive survey results are interested in the subject and have invested effort in the outcome. For example, a manager receiving a critical report on customer satisfaction will feel "judged" by the outcome.
- We all have opinions which we believe are correct, and different opinions are not always welcome (i.e., we "like" surveys that confirm we are right).
- Unlike qualitative public participation methods (such as community meetings, focus groups, "shareholder" meetings, etc.), quantitative opinion surveys provide a well-defined set of priorities and directions for improvement. This lack of ambiguity can diminish the ability of various interest groups to control the agenda of public debate, the processes by which it happens, and the necessary corrective actions.
- Survey responses can create an expectation of change. This can be difficult for those who, even if they agree with the actions, must marshal the resources to effect change.
Accusations of subjective bias can be addressed in a number of ways, including:
- Confronting the issue by addressing any suggestion of subjective bias directly, as in this article.
- Providing opportunities for open-ended comments, so that topics and viewpoints not covered by the closed-ended questions can be expressed; this was done in the survey.
- Asking an independent body to verify or replicate the results of the survey.
The results are now being analyzed and will be presented at an early opportunity.