Printed copies of my survey in French and English to give out to participants.
I know what information I need to calculate P flows in the city, but asking questions that actually get the information I need is really a separate topic in and of itself. What seems straightforward to me, in my PhD-student context and love of P cycling, isn't necessarily as clear in the context of a waste manager, a market manager, or a farmer/gardener. Apparently there is an art to developing survey questions, and I am giving it a try.
For every P flow I want to calculate, I try to break the flow into components that make sense to the person I will ask. That is, I might be looking for one value for the P coming into the city, but if I am interviewing a gardener who works on his own little plot, he will not know about the city scale, and possibly not about P specifically. I need to break the question down to the types of inputs he might use in his garden that contain P, and ask him how much of each he uses in a year and where he gets these inputs from.
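To make the arithmetic behind this concrete, here is a minimal sketch in Python of how a gardener's reported inputs could be converted into a single P flow. The input categories and P concentrations are invented placeholders for illustration, not measured values from my study:

```python
# Hypothetical P concentrations (kg P per kg of product).
# These numbers are illustrative placeholders, not measured values.
P_CONTENT = {
    "compost": 0.002,      # assumed ~0.2% P by mass
    "bone_meal": 0.06,     # assumed
    "synthetic_npk": 0.04, # assumed; depends on the N-P-K grade
}

def garden_p_input(reported_inputs):
    """Sum P (kg/yr) over the inputs a gardener reports using.

    reported_inputs maps an input name to the kg used per year,
    i.e. the kind of answer the survey questions are designed to elicit.
    """
    return sum(kg_per_year * P_CONTENT[name]
               for name, kg_per_year in reported_inputs.items())

# A gardener who reports 50 kg of compost and 2 kg of bone meal per year:
total_p = garden_p_input({"compost": 50, "bone_meal": 2})
```

In practice, each concentration would come from the literature or product labels, and the input categories would match the answer choices offered in the survey.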
Every time I write up and rewrite a version of the survey (which is different for every type of actor in the system I want information from) I ask myself:
Am I asking the question that will give me the answer I am looking for?
To test whether the answer to the question above is indeed yes, I took the following steps:
1. Review the literature on survey design, and look at articles that used surveys for data collection.
2. Do a pilot survey to see what happens:
I sent my survey by email to friends and family who garden (and gave it to two in person) and asked them to fill it out. This exercise gave me the opportunity to see where people didn't answer at all (because they didn't understand the question or it was "too hard") and where they didn't answer in the way I expected or needed them to (i.e., not providing the information I was really asking for). It also allowed me to see how the delivery method of the survey worked (online vs. in person). I was lucky enough to have in my pilot group a friend with a master's in social psychology who has done a lot of survey work. She was thus able to give me some specific design suggestions about things in my survey that were not working the way I wanted them to.
3. Make changes to the survey based on feedback: Here is a list of the big changes I made to my survey after the pilot study.
-Put more hierarchy into my questions by organizing my "big" questions into categories (general information, inputs, production, consumption, etc.) and asking very specific sub-questions within each.
-Minimize open-ended questions by creating pre-determined, organized answer choices with multiple choice, drop-down menus, and fill-in-the-blank tables. I always allowed for an "other" category, but giving choices increases your chances of a response because the person taking the survey doesn't need to concentrate and remember as much. The one big drawback is that it makes the survey look very long, even though it really isn't (only 8 questions).
-Add prompts for each section of questions that explain the goal of those questions (giving context to the survey respondent and also allowing them to "switch gears" between sections).
-Add definitions or examples for any terms in the questions or answer choices that might be interpreted in more than one way (or at least were in the pilot) or that are technical terms.
4. Get the survey reviewed by peers and by experts in your study system:
It's good to make the survey answerable, but it also needs to collect data that we can use as scientists to answer research questions. I got my advisor, field assistants, collaborators, labmates, and colleagues in social science labs to read through the survey. One example of feedback at this stage was that adding some more general (non-quantitative) questions could ease respondents in, putting them in the right mind space to answer the quantitative questions. My collaborators at another university have more experience in the urban agriculture field in Montreal, so their feedback and comments were extremely important for validating the questions and survey design.
5. Match each survey question to a P flow I want to quantify in the city:
This step allowed me to see if I had any questions (and possible answers) that were not directly relevant to the data I needed to quantify flows, or if some flows had no associated survey question (or other possible data source). It was also important to note which questions were there to determine "local context" and allow for general data collection on which to base assumptions if the respondent did not provide quantitative data (e.g., "check all the inputs you use on this list" instead of just "what quantity of the following inputs do you use?").
6. Create equations (with units and data sources) for each flow to be quantified and check them against the list of survey questions:
Step 6 is in some ways the opposite of step 5 and gave me the opportunity to really make sure I was collecting all the data I needed to calculate my flows.
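The cross-check in steps 5 and 6 can be sketched programmatically. In this hypothetical Python example, the flow name, question IDs, and data sources are invented for illustration, not taken from my actual survey:

```python
# Hypothetical sketch of step 6: one entry per flow, recording the equation,
# its units, and the data source for each term, so gaps are easy to spot.
flows = {
    "P_garden_inputs": {
        "equation": "sum over inputs i of mass_i [kg/yr] * P_content_i [kg P/kg]",
        "units": "kg P/yr",
        "data_sources": {
            "mass_i": "survey Q2 (quantity of each input used per year)",
            "P_content_i": "literature values for P content of each input",
        },
    },
}

# Question IDs actually present in the survey (invented for this example).
survey_questions = {"Q1", "Q2", "Q3"}

# Check that every survey-derived term in each equation maps to a real question.
for flow_name, spec in flows.items():
    for term, source in spec["data_sources"].items():
        if source.startswith("survey"):
            qid = source.split()[1]
            assert qid in survey_questions, f"{flow_name}: no question for {term}"
```

Running the same idea in the other direction (step 5) would loop over `survey_questions` and flag any question not referenced by some flow.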
7. Test the survey with a "real" respondent in the field:
Once my collaborators and I were ready to start collecting data, we chose an actor we knew would be responsive to taking the survey and who already had much of the information we were trying to collect documented. After filling in the survey with them, we realized that we needed to slightly simplify the level of detail of our answer choices: if this respondent did not have such detailed data, it was very unlikely that others would. We realized we could not ask for input data for each type of garden, but rather for all gardens managed by the actor, or per site they managed.
Once the survey is created, one still needs to contact respondents and administer the survey. There are many options for both of these steps and I will discuss some of them in my next post.