The problem isn't that we gather data - in both teaching and marketing, data is very important. The problem is that we're not doing the right things with it. If we're really going to make a difference - whether in student outcomes or in changing relationships with stakeholders - we need to be sensible and smart with this data. We need to put data from different sources together (creating 'information') and evaluate this information against our existing understanding (creating 'knowledge').
Exhibit A: The Key Stage 2 Data problem
It's scary how important data from Key Stage 2 exams (taken in the final year of primary school in the UK) are becoming.
Because OFSTED puts so much emphasis on them (wrongly, in my opinion, for the reasons outlined below) as a way of assessing the quality of a school, secondary school leaders are responding by assuming they're a) accurate and b) capable of predicting the progress of individuals. As a result, they're being used to set GCSE targets across all subjects - I've heard of schools giving these predictions to Y7 students in their first reports! The students then, unbelievably, have to show 'progress' every term for the next five years!
The problems with this are manifold. Take the assumptions about accuracy, for example. Key Stage 2 exams are only taken in English and Maths but are used to set Art and Music targets; they are taken at only one point in the year; they can be 'crammed' for, and many schools spend the whole of Y6 focusing on them; some parents hire tutors for SATs; and so on. This 'data' is then abused - in the Guardian example it's used to overpower the professional knowledge of specialist teachers.
The truth is that we can only use KS2 data as part of the picture of a student. It gives some indication of ability, but it might equally reflect supportive parents in primary school, or a student who was young for their year. It certainly isn't an accurate predictor of GCSE scores.
So how should this data be used? It needs to be taken in context - aggregated with other data (such as CAT tests, class tests and teacher assessments) to provide 'information', best seen as a snapshot of all the data available on the student. This information then needs to be analysed against the personal, non-numerical data about the student - are they having problems at home? Do they enjoy some subjects more than others? This can only be done by teachers who know the students, and it is what creates 'knowledge'.
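The 'data to information' step above can be sketched in a few lines of code. This is purely illustrative - the field names, weights, score scale and tolerance are my own assumptions, not any real school's scheme - but it shows the principle: combine several sources into one snapshot, and flag any single source (such as KS2) that disagrees sharply so a teacher who knows the student can supply the context.

```python
# Sketch of 'data -> information': aggregate several assessment sources
# into one snapshot instead of trusting KS2 alone. All names, weights and
# scores are illustrative assumptions on a common 0-100 scale.

def snapshot(scores, weights):
    """Weighted average of whatever assessment data we hold for a student."""
    available = {k: v for k, v in scores.items() if v is not None}
    total_weight = sum(weights[k] for k in available)
    return sum(weights[k] * v for k, v in available.items()) / total_weight

def flag_outliers(scores, combined, tolerance=10):
    """Flag any single source that disagrees sharply with the combined view -
    a prompt for a teacher who knows the student, not an automatic verdict."""
    return [k for k, v in scores.items()
            if v is not None and abs(v - combined) > tolerance]

# Hypothetical student whose KS2 result looks flattering next to everything else.
scores = {"ks2": 82, "cat": 61, "class_tests": 58, "teacher_assessment": 60}
weights = {"ks2": 0.2, "cat": 0.3, "class_tests": 0.25, "teacher_assessment": 0.25}

combined = snapshot(scores, weights)
print(round(combined, 1))               # the aggregated 'information' snapshot
print(flag_outliers(scores, combined))  # sources that need a human look
```

The point of the outlier check is the argument of this post in miniature: the aggregated snapshot is still only 'information', and a flagged discrepancy is a question for a teacher, never an answer in itself.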
Then, by all means, use this knowledge to set targets for the student (and your teachers). But if the targets are unrealistic (and this happens in both directions), don't be surprised if the student becomes disillusioned or over-confident. And don't be surprised if teachers develop stress or start cheating.
As I've written before, the best schools (private and state) are good at developing 'knowledge' of their students and setting personal, 'knowledge-based' targets. The worst schools are those that are blinkered by the 'data' and lose sight of the students as individuals. They're also very unlikely to achieve the OFSTED 'targets'.
The lessons for marketing are similar. Don't jump on one piece of market research that stands out from the rest, or give too much weight to feedback from a single parent or student - instead, work methodically and weigh a wide range of data against your understanding of your market. Markets do change, but not overnight!
And of course - if your school is one of those that looks after individuals - make sure you shout about it. According to research by Pearson, 80% of students want more time with teachers to 'help them understand their dreams' - which sounds exactly like developing knowledge rather than treating them as data!