Much of the information that will be used to determine the outcome of the VLE review will come directly from students. Data will be collected through surveys and direct interviews with focus groups, and will then be analysed to determine the general student feeling towards the VLE. The same process will be undertaken with academic staff as they are likely to have different views.
Collecting data via a survey is not as simple as just coming up with questions. To get effective responses and to draw meaningful conclusions, the survey needs to be designed with the intended responses in mind. It is also difficult to avoid introducing bias through the phrasing of the questions, and a balance must be struck between being thorough enough to cover all the necessary information and being brief enough to attract plenty of responses.
Principle of the Review
It has been determined that the review will focus on two main objectives, to make the outcome as clear as possible. Firstly, the review will determine criteria for a successful VLE, marking out the properties that the VLE should have in order to be most effective for learning and for teaching. Secondly, a range of possible VLEs will be evaluated against these criteria, including the current VLE, Blackboard. The degree to which Blackboard meets the specification is the benchmark; if other VLEs can beat this benchmark then a change can be considered.
Using this two-part structure allows at least some form of quantitative comparison between the alternatives, which will allow the systems to be ranked mathematically. This is important for removing bias: although human input informs the ranking, the ranking itself is not specified by humans. This subtle difference is the fine line between a subjective decision and an objective one. If the decision can be made objectively, it is more likely to be widely accepted, whereas a subjective decision can be influenced by the reviewer. An objective evaluation is more likely to identify the system that is best for the greatest number of people.
Creating a Specification
The specification, or list of criteria, forms part one of the review. Determining the specification points, and therefore producing a complete list of points on which the VLE will be judged, can be done manually and subjectively. However, care must be taken to cover all of the possible points, because the questionnaires can only establish the importance of points that are already on the list.
The specification will include points such as the cost, which is easily measured and very objective, but also the ease of use, which is much harder to measure and is much more subjective. The importance of this point will therefore have to be evaluated in the questionnaire.
The draft specification is the initial set of criteria, and is created by the review team. Once the draft specification has been completed, it can be used as the basis of the questionnaire, which will aim to determine how important the subjective points are, and how well existing systems match these points.
Writing the Questions
Writing effective questions means writing questions which have meaningful answers. In fact, it makes more sense to play a Jeopardy-style game: a relevant and objective statistic is chosen as the answer, and the question is then written to produce that answer. The value of the statistic will later be determined from the range of responses. Creating questions in this way guarantees that every question can be used to produce an objective parameter.
For example, one of the specification points is that the VLE should be accessible in some form on mobile devices. To establish whether this is really important to the student group, we wish to be able to state that “x% of students would use the VLE on mobile devices, if the functionality was available”. Filling in the gap requires a set of yes/no answers, so the question must be written as something with a yes/no answer, along the lines of “Would you like to use the VLE on a mobile device?” From this, we can establish how important it is that a VLE is available on a mobile device, and present an objective, numerical answer.
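As a rough illustration of how such a yes/no question feeds into the statistic, the sketch below (in Python, with invented response data and variable names) simply counts the ‘yes’ answers and reports them as a percentage of the sample.

```python
# Minimal sketch: turning yes/no survey responses into the statistic
# "x% of students would use the VLE on mobile devices".
# The response values below are hypothetical, not survey data.

responses = ["yes", "no", "yes", "yes", "no", "yes"]

yes_count = sum(1 for r in responses if r == "yes")
percentage = 100 * yes_count / len(responses)

print(f"{percentage:.0f}% of respondents would use the VLE on a mobile device")
```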
This is one type of question. There is a second type of question that must also be considered: evaluating a VLE against a certain measure. It may be desirable to assess how useful a particular feature of a VLE is; the corresponding statistic would be “y% of students felt that this feature benefitted their studies, while z% had not used the feature”. It is important to include the second clause here, because if a feature has not been used it is not possible to tell whether it is useful or not.
The answers to the question must therefore include at least ‘useful, not useful, never used’, and the picture would be clearer if a range of usefulness were offered (‘very useful, useful, no benefit, unhelpful, very unhelpful, never used’). The question becomes: “How helpful did you find this feature to understanding your course?” Note that this is not “How useful did you find this feature?” or “How much did this benefit your studies?”, as those are more subjective questions and do not rank the feature clearly on how effective it is at improving learning in particular.
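The sketch below shows, with hypothetical responses on the scale above, how this second type of question could be tallied: respondents who have never used the feature are excluded before the usefulness percentage is calculated.

```python
from collections import Counter

# Hypothetical responses on the six-point scale described above.
responses = ["useful", "never used", "very useful", "no benefit",
             "useful", "unhelpful", "very useful", "never used"]

counts = Counter(responses)
used = [r for r in responses if r != "never used"]       # only students who used the feature
helpful = sum(1 for r in used if r in ("very useful", "useful"))

print(f"{100 * helpful / len(used):.0f}% of students who used the feature found it helpful; "
      f"{100 * counts['never used'] / len(responses):.0f}% had never used it")
```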
Analysing the Results
Once the questions have been written in the above way, with the intended responses linked to parameters that we wish to know, calculating the value of the parameter is left to the realm of statistics.
The survey will return a statistic based on the responses of a sample. Because we are not interviewing every single student at the university (known as the ‘population’), our results come only from a ‘sample’, a smaller subset of the population. The parameter that we wish to establish is a value that relates to the population, but because we only have data from the sample, all that is known directly is a ‘statistic’. The statistic therefore needs to be used to estimate the parameter, and this requires knowledge of the sample and some neat mathematics.
The sample will have returned a statistic, such as ‘72% of the sample would use the VLE on mobile devices’. In very simple terms, we could use this to estimate that 72% of the whole population would also use the VLE on mobile devices. However, it is possible that the sample consisted mostly of History students, who for some reason do not use the VLE on mobile devices. It does not matter what this reason is, or which group is over-represented, because the bias can be corrected for.
Sampling
This correction uses a technique called stratified sampling. If History students make up 10% of the population, then even if they made up 70% of the sample, their responses are only given a 10% weight in the final parameter. Other responses are given a correspondingly greater weight, so that every faculty group carries the weight it holds in the overall population. This makes the scaling process more accurate.
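A minimal sketch of this reweighting is given below. The faculty names, population shares and response rates are invented for illustration; the point is only that each faculty's response rate is weighted by its share of the population rather than its share of the sample.

```python
# Population share of each faculty (assumed figures, not real data).
population_share = {"History": 0.10, "Engineering": 0.45, "Medicine": 0.45}

# Fraction of each faculty's respondents who answered "yes" (hypothetical).
sample_yes_rate = {"History": 0.20, "Engineering": 0.85, "Medicine": 0.80}

# Weight each faculty's response rate by its share of the population,
# not by its share of the sample.
estimate = sum(population_share[f] * sample_yes_rate[f] for f in population_share)

print(f"Population estimate: {100 * estimate:.0f}% would use the VLE on mobile devices")
```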
There will also be uncertainty in the value of the parameter, due to random fluctuations in the responses that cannot be ironed out. It is therefore correct to state an uncertainty in the value, which as a rule of thumb is taken to be 1 divided by the square root of n, the number of responses. In the case of 100 responses, this uncertainty is 10%. This is valid for the yes/no questions, where the response is binary and the result is a simple fraction.
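As a small worked check of this rule of thumb, assuming n = 100 responses to a yes/no question:

```python
import math

n = 100                               # number of responses to a yes/no question
uncertainty = 1 / math.sqrt(n)        # rule-of-thumb margin of error described above

print(f"Uncertainty on the estimated percentage: ±{100 * uncertainty:.0f}%")  # ±10%
```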
However, for more complex questions, such as how useful the system is, the responses will follow a distribution. The distribution will have a mean and a variance: an average point, and an amount by which the values vary by chance. The sample mean and variance both carry uncertainties of this kind when scaled up to the population, and so stating an absolute value for the population can only be done with caution, since the variance could be large.
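The following sketch uses hypothetical numeric scores for a usefulness question (with ‘never used’ responses already removed, and the categories mapped to 1–5) to show how the sample mean, variance and the uncertainty in the mean could be computed.

```python
import statistics

# Hypothetical scores: very useful = 5 ... very unhelpful = 1.
scores = [5, 4, 4, 3, 5, 2, 4, 3, 4, 5]

mean = statistics.mean(scores)
variance = statistics.variance(scores)          # sample variance
std_error = (variance / len(scores)) ** 0.5     # uncertainty in the mean

print(f"mean = {mean:.2f} +/- {std_error:.2f}, variance = {variance:.2f}")
```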
Using the Data
Once the population parameters have been found, they can be used to fine-tune the VLE specification. In addition, the results will be used to evaluate the current VLE – Blackboard – against this specification and therefore set a benchmark that other VLEs can be compared against.
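One possible way to express this benchmark comparison numerically is sketched below. The criteria, weights and scores are placeholders rather than review results: the weights would come from the survey-derived importance of each specification point, the scores from how well each VLE meets that point, and Blackboard's weighted total serves as the benchmark.

```python
# Illustrative weights and ratings; all names and numbers are assumptions.
weights = {"cost": 0.20, "ease of use": 0.30, "mobile access": 0.25, "features": 0.25}

scores = {   # rating of each VLE against each criterion, on a 0-10 scale
    "Blackboard":      {"cost": 5, "ease of use": 6, "mobile access": 4, "features": 7},
    "Alternative VLE": {"cost": 7, "ease of use": 7, "mobile access": 8, "features": 6},
}

def weighted_score(vle):
    """Sum each criterion score multiplied by that criterion's weight."""
    return sum(weights[c] * scores[vle][c] for c in weights)

benchmark = weighted_score("Blackboard")
for vle in scores:
    total = weighted_score(vle)
    print(f"{vle}: {total:.2f} ({total - benchmark:+.2f} relative to the Blackboard benchmark)")
```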
The methods described are essential in forming comparisons between the different systems in a way that is fair and unbiased to all. Measuring each against a well-defined, objectively written specification ensures that the review is rigorous and complete, and allows straightforward re-evaluation if the review is conducted again in the future.