Posts Tagged ‘elearning’

Open Technologies and Social Media as Enablers of Collaborative Learning

Posted in papers, Writing on May 4th, 2011 by admin

Technologies and Social Media as Enablers of Collaborative Learning

Stephen Quinton and Matthew Allen

to be published in Collaborative Learning 2.0: Open Educational Resources (eds Alexandra Okada, Teresa Connolly and Peter Scott; IGI Publishing, 2011); see http://books.kmi.open.ac.uk/cl2oer/ for more details of this volume.

Abstract

While many educational institutions throughout the world have introduced online learning as a delivery option, there is little evidence to indicate a predominance of solutions that advance pedagogical diversity and learning effectiveness. Aside from a few innovative exceptions, the design of most online learning environments is structured around the conventional instructional model, which inherently does not afford the flexibility required to take full advantage of the socialising and information sharing potential of web 2.0 technologies.

Online learners are not equipped with the tools required to organise their work, group learning is not always readily available, team-focussed problem-based learning activities are not easily supported and managed, and productive engagement with the wider community is not always feasible. Outside the campus intranet, countless people ‘collaborate’ with each other using ‘virtual’ online communities such as Facebook, MySpace, and Twitter.

The Internet continually offers new tools to support such activities, but there is an obvious disparity between what people experience on the Internet and what university online delivery platforms provide. Bridging this gap is only part of the solution as there is also the unrealised potential of students’ web 2.0 expertise to consider. There is something incongruous in the notion of applying web 2.0 technologies to learning and teaching without enlisting the support of the very audience that by and large have been the drivers of web 2.0 innovations.

Full pre-publication paper

Innovative Education Online: Ideas for the future of learning & the Internet

Posted in Reports, Writing on April 25th, 2011 by admin

In 2009 I ran a series of workshops as the first main component of my ALTC Fellowship to group brainstorm and analyse ideas about online learning and web 2.0 technologies.  During these workshops, so many good ideas were raised that I felt compelled to write up a report distilling the wisdom of more than 200 participants at 7 locations so that it might provide something of a guide for others.

At the same time, as I reflected on the workshops and what happened within them, I realised that they gave me an insight into the discourse of e-learning and Web 2.0 versions thereof in contemporary Australian higher education. Thus, I have also reported my responses to and analysis of those workshops. It’s one reason why the report has taken a while to produce and finalise.

Finally, then, here is the report, Innovative Education Online: Ideas for the future of learning & the Internet.

My thanks again to everyone who attended and helped organise these events.

Beyond the Edgeless University

Posted in Events, Summits and Workshops on April 21st, 2011 by admin

Upcoming Workshop

A Question of Boundaries: What Next for the ‘Edgeless University’?

(Workshop details)

I will be organising and facilitating a workshop on the impacts of network technologies on universities at the Oxford Internet Institute in May, focusing on a critical appraisal of the notion of ‘edgelessness’. Here is the extended abstract and plan for the workshop:


In 2009, the UK Demos Foundation released a report, The Edgeless University (Bradwell, 2009), exploring the impact of digital network technologies on British universities.

In the report, subtitled ‘Why higher education must embrace technology’, its author, Peter Bradwell, argues cogently for both the opportunity and the necessity to remake higher education according to the new realities of a world relentlessly connected, digitised and increasingly distributed in time and space away from centralised locations.

Repurposing Richard Lang’s insights about the edgeless city, in which the functions of the city still occur but the form is now more fluid, dispersed and without the clear boundaries that previously helped define ‘the urban’, Bradwell proposes a shift in higher education analogous to that within the popular music industry: technologies will not ‘do away’ with universities but, to prosper, those institutions must change systematically and with a “coherent narrative” to embrace digital networks.

While much has changed in terms of the funding, politics and general cultural climate around higher education in recent times, nothing has changed to lessen the threat of conservatism in the face of global knowledge networking, nor to reduce the opportunity which universities have to become central to the new forms of knowledge and learning which the Internet and related technologies demand.

This workshop will explore practical opportunities and problems that confront academics and institutions of higher learning in light of Bradwell’s prognosis for the technology-oriented future. The focus for the workshop is to ask:

what exactly should a modern comprehensive university do that will unleash the creativity of students and staff and maximise the potential of distributed, edgeless learning while, at the same time, also making the most of the physical spaces which will remain critical markers of ‘a university’? In other words, how can we utilise digital technologies and networks to fashion ‘new’ edges — temporary boundaries, if you like — that assist us in making education a collaborative, collective experience?

Format of Workshop

The workshop will be a half day, including lunch:

  • Introduction / overview (30 minutes – Matthew Allen)
  • Open discussion: what are the key changes needed for enhanced, engaged teaching and learning within the edgeless university paradigm? (30 minutes – plenary)
  • Groups work on the key changes proposed, critically examining their validity, refining them and making sense of the likely outcomes (30 minutes – sub-groups)
  • Lunch (45 minutes)
  • Report back and group presentations and discussion, including consideration of the need for edges to be re-introduced at times (60 minutes)
  • Conclusion, including overall response (15 minutes – speaker TBA)

Examples of authentic learning in Internet Communications III: NET204

Posted in Ideas, Presentations on December 4th, 2010 by admin

See also other posts including the first one, on Web Communications 101, which explains more of the context.

Internet Communities and Social Networks 204

(basic unit description)

One of the most authentic learning experiences we try to offer students in the BA (Internet Communications) is the network conference, the focal point and driving force for the unit NET204. In this unit, the whole learning journey is designed around a 3-week online asynchronous conference in the latter stages of the study period: the first part of the unit involves writing the conference paper, improving it after feedback, and also designing and discussing how to run the conference and promote it.

Because every element of the unit is designed ‘around’ the conference, this unit is more than just an authentic assessment task: rather, it is an authentic learning experience, with the assessment almost ‘blending’ in with that experience. For example, the ‘conference paper’ is submitted, assistance is given, and then students can improve it, rather than, as in traditional approaches, simply being submitted and marked. Very few activities in the real world involve submission of intellectual work that can’t be improved once completed.

While we set up the website and managed submissions, the academics were not the only ‘producers’ and users of the web for knowledge networking: a YouTube video was produced, a Ning group was used, and the conference was promoted through Facebook and Twitter.

Portfolios, digital and reflection: interleaving Michael Dyson

Posted in Conferences, Events, Ideas on December 2nd, 2010 by admin

Listening to Michael Dyson, from Monash, talking about portfolios in teacher education: great presentation.

Dyson says:

  • Education of educators is first of all premised on turning them into people who practise self-development; he gives the example of the very first unit. [So, care of the self is central, and making students include themselves as subjects in the learning process - nice!]
  • Learning is changing dramatically – globalisation, computing, and so on. [But, perhaps, there is an important qualification on some of the more optimistic claims for 'new' learning: learning is embedded within society in ways that shape its possibilities, and not always in ways concerned with 'better' learning. At the very least, the definition of better is contested: is it cheaper? is it more orderly and commodifiable? is it linked to national norms and needs?]
  • The creating mind is the goal. [Interesting - not creative, but more positive and active - creating. Good difference]
  • Reflection is essential to achieving the kind of success sought in self-developmental learning; drawing on Dewey (2003), Dyson emphasises “active persistent and careful consideration”; reflection is not taking “things for granted…[leading to] ethical judgment and strategic actions” (Groundwater-Smith, 2003). [Further work needed, perhaps, to understand reflection for this new generation, if one takes as given the significant changes in knowledge: is reflection as developed in the 20th century the right kind of reflection?]
  • ALACT model – action, looking back, awareness of the essential aspects, create alternatives, trial.

[Image: the ALACT model]

[This is really helpful - I like the added 5th step, compared to the normal action research 4-step model]

  • “the artefacts placed in their portfolio showcase who they are and their current online learning”; these artefacts are attached to the standards which define what it is to be an educated teacher according to the outcomes required. [So portfolios are a clear negotiation of the student's understanding of those requirements and standards?]
  • Exploration of the actual portfolios that students have created, using a paid-for service, iwebfolio (the cost was subsidised). Variety of successes and failures; all the material goes into a digital, not paper, portfolio. Notes that the metadata on when and how material was uploaded is available, unlike with other means of generating a portfolio. [I emphasise: the portfolio is a genuine, real requirement for teaching employment. It is authentic learning]
  • Use of standards / outcomes as an information architecture to drive cognition in inputting information (adding artefacts, commenting etc.). [So, the portfolio is 'scaffolding' into which a building goes, with a clear design brief. It might be a highly structured knowledge engine]

I am wondering if the students genuinely are doing this work for themselves or if they imagine an audience of ‘judges’ – their teachers who grade the portfolio or the employers who might use it? Managing multiple audiences is tricky, even with technology that allows it – because if you can shape the portfolio for several audiences, then does the self-as-audience survive?

Then again, maybe the whole point is that the students are not yet capable of being their own audience.

Some other portfolio software (and look how it is more than just a portfolio…)

http://www.pebblepad.com

Examples of authentic learning in Internet Communications II: WEB206

Posted in Ideas, Presentations on December 1st, 2010 by admin

See also other posts including the first one, on Web Communications 101, which explains more of the context.

Web Publishing 206

(basic unit description)

Students doing the BA (Internet Communications) learn, in WEB101, to create a web presence that acts as the primary locus of their online identity, with links to other services and applications. In Web Publishing 206, the focus moves much more directly to writing effectively for the web (where writing can also include other media, but emphasises the written word).

The authenticity of the assessments in Web Publishing 206 is principally mobilised by requiring students to write regularly, on their blogs, exploring different aspects and techniques of good online writing. The blog is assessed on its own terms, and also as the basis for students’ reflective essays, which ensure that students are thinking about (as well as doing) this crucial online communication task.

Some examples of students’ blogs are:

Notably, most students make virtually no reference to the ‘study’ component of these blogs: these are genuine blogs addressing audiences outside universities. Use of the tag Web206, however, enables academic staff to look into them to find relevant content! And one student cleverly ‘colonised’ the name WEB206: WEB206 | a Curtin University of Technology unit

While in WEB101 there was a strong sense that other students were the audience (along with the teacher), in WEB206 students are developing a much greater awareness of real audiences. In this respect, if no other, the assessment task is significantly advantaged by making it public knowledge networking.

As before, the blogging linked with other services and tools, principally delicious, as in these examples:

Once again, we see the value of the tag – the tag Web206 enables just the relevant links to be pulled from delicious into the blog, enabling a student to also use delicious for many other purposes. In this way, knowledge networking drives the nature of the assessment completion.
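As a rough illustration of how this tag-filtering works mechanically (this is not the actual WEB206 setup; the feed URL, username and the choice of the feedparser library are illustrative assumptions), a per-tag bookmark feed can be pulled into a page with a few lines of Python:

```python
# Sketch only: pull tag-matched bookmarks from a bookmarking service's RSS feed.
# The feed URL below is a hypothetical placeholder; delicious exposed per-user,
# per-tag feeds, and most bookmarking services offer something equivalent.
import feedparser  # third-party: pip install feedparser

TAG = "Web206"
FEED_URL = f"https://bookmarks.example.invalid/rss/some_student/{TAG}"  # placeholder

def tagged_links(url: str, tag: str):
    """Yield (title, link) pairs for feed entries that carry the given tag."""
    feed = feedparser.parse(url)
    for entry in feed.entries:
        tags = {(t.get("term") or "").lower() for t in getattr(entry, "tags", [])}
        if tag.lower() in tags:
            yield entry.title, entry.link

for title, link in tagged_links(FEED_URL, TAG):
    print(f"- {title}: {link}")
```

Because the filtering happens on the tag rather than on the account, the student is free to use the same bookmarking account for any other purpose, which is exactly the point made above.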

More findings from Web206 (which has only just run for the first time in late 2010)  will emerge over time. Thanks to Dr Helen Merrick, chief wrangler of publishing.

—————————————————-

Examples of authentic learning in Internet Communications I: WEB101

Posted in Ideas, Presentations on December 1st, 2010 by admin

The first of several posts, each relating to a different unit of study at Curtin

Introduction

Over the past two years, students in Internet Studies at Curtin University, studying the BA (Internet Communications) and related courses, have been doing a lot of authentic assessment involving online activities. These assignments are authentic in that they are ‘true’ to the content of their studies (that is, aligned with the outcomes), ‘real’ within the likely fields of employment for graduates, and ‘natural’ for the emerging dominance of knowledge networking in society. More on these three variations on authenticity in a moment.

Not all assessments fit this pattern (nor should they), but we have seen significant improvements in the motivation of students to complete and exceed the requirements of assignments, as well as a greater degree of creativity and expression suggesting deeper engagement with learning. It has also, we think, improved students’ attention to more traditional scholarly assignments (such as essays) because of the variety we engendered across all assignment tasks. (And, it should be noted: essays are authentic too – to the lifeworld of academia, which remains important alongside work and other contexts.)

Much of what makes these assessment approaches authentic is that they are public. Here, then, are some examples which suggest some of the value of embracing public knowledge networking as the basis for assessment, at least in courses that involve digital media and communications but, most likely, in any course where students need to work with, communicate and reflect on knowledge and, in doing so, become producers, not just receivers.

Web Communications 101 (WEB101)

A major component of the assessment in this unit is a ‘web presence’. More than a website and blog, a web presence interlinks a central node with linked services and nodes to expand the digital footprint of a user and establish their online identity. The negotiation and communication of identity is central to this unit: it’s not just ‘how to blog’.

A very small number of examples of these web presences are:

Over 400 students have taken the unit: sorry, can’t show them all. In particular, look at how some students have made their web presence almost entirely ‘real’, with bare hints of what it connects to (their study); others have not. Some students, as evidenced by these presences, are now using them as part of other units of study too.

Note that students happily created their own informal, computer-mediated network spaces, such as Web101 – Curtin University | Facebook; and teaching staff also use the web as it was intended – free and rapid information exchange – to support this unit: Web101 Assignments FAQ.

A big part of the unit also involves the use of Twitter: see the most recent Twitter search; delicious is also used.

Please look at “I Tweet Therefore I am?” by Dr Tama Leaver, chief architect of the WEB101 learning experience.

———————————-

As I have argued elsewhere: the authenticity of these assessments is not a simple ‘flip’ from artificial academic work into ‘real’ web work. They are a negotiation and a compromise in which equally valid requirements from both knowledge networking and education are brought into a creative and productive tension. In the next instalment, I will provide some examples of what happens for students in the followup unit to WEB101.

Authentic learning: presentation to NCIQF

Posted in Conferences, Events, keynotes on November 30th, 2010 by admin

On Thursday 2 December, I am presenting at the National Curriculum Innovation and Quality Forum on the subject, “Risks and opportunities in authentic learning via the Internet”.

The basic brief for this keynote presentation is to:

  • summarise approaches to authentic learning in the BA (Internet Communications) at Curtin University;
  • identify the key benefits in using a public knowledge networking approach to authentic learning; and
  • highlight risks and strategies for managing those approaches in the pursuit of authentic learning online.

While I hope to do that, with a particular emphasis on giving some examples from the great work that students in the BA (Internet Communications) have done, I also have found that in preparing my talk I have had to develop a more coherent argument about the nature of authenticity in learning and the relationship between education and learning.

The talk can be found here: https://netcrit.net/content/nciqf2010.pdf

This paper also draws on some specific work I have done on authentic assessment in our online conference unit, Internet Communities and Social Networks 204, and more generally on social media and authentic assessment (presentation in the UK, May 2010).

Some of the examples I refer to will be listed on my blog within the week.

Something new: a “blogshop” on online learning + more online learning tools

Posted in Events, Summits and Workshops on November 23rd, 2010 by admin

Tomorrow I move out of my comfort zone in presenting on the uses of online learning in higher education. I am at the University of Newcastle and will, in the morning, give another version of my presentation on Web 2.0 tools for online learning at university (search for “Matthew Allen”). This presentation will be fine: it has worked well before but is very didactic and controlled.

In the afternoon I am giving a “blogshop”, which is my neologism for a workshop involving blogging. It involves co-present, computer-mediated interactions in which the users (aka labrats) will join and participate in a collaborative blog just for the period of the workshop. The blogshop is called ‘5 Steps Towards new-fashioned online learning’ (at http://knl.posterous.com).

Amongst other things, the blogshop is going to involve Todaysmeet backchannelling, identity creation and management via Gmail (for Posterous and Slideshare), and exploring another ‘top 10’ Web 2.0 tools. I’ve already been extolling the virtues of Posterous, Slinkset, Mind42 and others. Now we are going to start exploring:

  • Chartle (Chartle.net tears down the complexity of online visualizations – offers simplicity, ubiquity and interactivity instead)
  • Flexlists (With FLEXlists you can create simple databases of anything you want, with every field you need. You can share the list with others, invite them to edit the list or just keep it for yourself)
  • Groups (Roll your own social network)
  • Moreganize (Moreganize is a  multifaceted organisation tool. It is suited for both professional and private use and is especially convenient if a larger group of people needs to get organized!)
  • Planetaki (A planet is a place where you can read all the websites you like in a single page. You decide whether your planet is public or private.)
  • Qhub (Qhub is a platform you can use on your blog or website that allows your audience to ask questions and get real answers, it doesn’t just help answer questions it allows a genuine community to develop around your site.)
  • Scribblar (Simple, effective online collaboration Multi-user whiteboard, live audio, image collaboration, text-chat and more)
  • Spaaze (Spaaze is a new visual way to organize pieces of information in a virtual infinite space. Your things, your way.)
  • Squareleaf (Squareleaf is a simple and intuitive virtual whiteboard, complete with all the sticky notes you’ll ever need. Unlike the real thing, our notes don’t fall off all of the time.)
  • Survs (Survs is a collaborative tool that enables you to create online surveys with simplicity and elegance.)
  • Voicethread (With VoiceThread, group conversations are collected and shared in one place from anywhere in the world. All with no software to install.)

(all quotes from the websites concerned)

Posterous rocks. I am now too wedded to the flexibility and power of WordPress to change my main blog, but I think Posterous really has a great ease-of-use factor that, if you want simplicity, recommends it.

The substantive point is this:

developing people’s ability to engage in innovative online learning design is not about the software per se: it is about their ability and attitude to work with the cognitive engineering available via the web to create interactive learning experiences (where interactive implies interactions between computers and humans, as well as between humans themselves). Therefore the blogshop provides, I hope, an experiential learning activity: learning by doing, while thinking and communicating about that experience.

Contact me if you want to repurpose, reuse or otherwise mash up the knowledge networked learning blogshop – it’s Creative Commons licensed.

Surveys of students’ perceptions of teaching: a cautionary tale

Posted in Ideas on July 19th, 2010 by admin

In semester 1 this year Internet Studies staff ran the very successful unit Internet Communities and Social Networks 204/504, through both Curtin and OUA. The centrepiece for this unit was the 3-week online conference which students participated in, by writing conference papers, posting them to our website and then discussing both their own and others’ papers. This very successful conference is now over but you can observe the results at the Debating Communities and Networks site. The unit was, clearly, not your normal ‘teaching and learning experience’ – all assessment, tasks and activities, resources and discussions, were aligned with making the conference work successfully – and ‘learning’ was a secondary (but very successful) outcome.

I am now, in concert with the unit controller Dr Michael Kent, doing some research into the experiences of this unit and what it might tell us about online learning, student motivation, and authentic assessment. I will be sharing some of these thoughts with you elsewhere, including giving a paper called “Going Public with Learning” at a conference in September organised at Murdoch University by Ingrid Richardson. (abstract)

However, something interesting is emerging from the research as it relates to the use and interpretation of the student surveys we use at Curtin (known as Evaluate). Because the unit ran in almost identical fashion for three different cohorts of students, at the same time and with the same teaching staff, curriculum and so on, we are now able to compare and contrast the results from Evaluate based on the differences that might be discerned among the students who respond. The only significant difference is that one cohort was most likely to have also attended a physical classroom for 2 hours a week as well as doing all of the online activity.

This situation is important. As we know, evaluation of teaching at university has become standard now in Australia. Some of the reasons for this situation are good: it is important for academics to treat their teaching as research and to inquire, empirically, into how it is working, both to improve individual units of study and also to become better all-round teachers. But some of the reasons are bad: surveys are often used in crude ways to manage teaching performance (rewards and criticisms both), or they are reported in generalised ways to show how great an area, course or university is, for marketing purposes. And, while there may be some contestation over my characterisation of the reasons as good or bad (after all, perhaps it is good to manage performance using surveys), there can be no doubt that the validity of the research or management based on student surveys rests on the quality and sophistication of the instrument: does the survey measure what it purports to measure?

Evaluate, Curtin’s instrument, has its strengths and weaknesses which you can judge for yourself: here are the items in the survey (to which students respond using a classic Strongly Agree/Agree/Disagree/Strongly Disagree/No opinion scale):

  1. The learning outcomes in this unit are clearly identified
  2. The learning experiences in this unit help me to achieve the learning outcomes
  3. The learning resources in this unit help me to achieve the learning outcomes
  4. The assessment tasks in this unit evaluate my achievement of the learning outcomes
  5. Feedback on my work in this unit helps me to achieve the learning outcomes
  6. The workload in this unit is appropriate to the achievement of the learning outcomes
  7. The quality of teaching in this unit helps me to achieve the learning outcomes
  8. I am motivated to achieve the learning outcomes in this unit
  9. I make best use of the learning experiences in this unit
  10. I think about how I can learn more effectively in this unit
  11. Overall, I am satisfied with this unit

The aim, broadly speaking, is that the survey assesses the curriculum and content of the unit and the design of the learning experience, rather than specific teachers. In other words, Evaluate attempts to assess curriculum, abstracted from the specifics of the teaching and learning activities. It also attempts to provide insight into the students’ mindset through items 8-10, though in practice these items are treated at Curtin as if they were also comments by students on the quality of the unit or its teachers. Thus, in general terms, Evaluate attempts to use student perceptions as a direct measure of the realities of the quality of the teaching and learning experience, with students positioned as informed and reliable judges of that quality.

In most cases at Curtin there is just one cohort of students for each unit completing the Evaluate survey. There is no demographic information to enable internal comparisons. But, for NET204, in semester 1 2010, we had a very unusual situation in which the same unit was taught using three different unit codes, for three different groups, thus enabling three different and differentiated data sets to be generated. One offering was for OUA students (all external); one was for Curtin-based undergraduates (mostly internal); and one was for Curtin-based graduate students (that is, new-to-area coursework students, not higher degree students), who were mostly external. (The samples and populations were: OUA n=21, from 68 possible respondents; Curtin undergraduate n=16, from 35 possible respondents; Curtin graduate n=9, from 16 possible respondents.)

So what happens when we compare the different results achieved in the Evaluate survey for these three cohorts, remembering that, with the exception of the classroom contact for internal students and some separation of students for the first third of the study period, all were treated to an effectively equivalent experience? What can we learn about Evaluate itself when we compare results from a similar activity assessed by three different sorts of students, where the main difference in the ‘learning’ comes from the students themselves?

First of all, the immediate and obvious finding is that Curtin undergrads were less likely to be satisfied with the unit overall (item 11). 95% of OUA students and 100% of Curtin graduates ‘agreed’ (either SA or A) that they were satisfied; only 75% of Curtin undergrads agreed. And, on average, these undergrads scored the unit around 10% lower across all items. In other words, even with caveats about sample size, response rate and so on (caveats that rarely matter for internal management in any case), we get a face-value difference that is somewhat troubling.
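As a rough sketch of the arithmetic behind this comparison (this is not the actual Evaluate export or analysis; only the item 11 agreement rates and respondent counts quoted above are real, and the data structure and labels are illustrative stand-ins), the per-cohort gaps can be tabulated like this:

```python
# Sketch only: compare each cohort's agreement rate (SA + A) with the mean of
# the other cohorts. Item 11 figures and respondent counts are those quoted in
# the post; the structure is an illustrative stand-in for the Evaluate reports.
agreement = {  # percent of respondents who agreed, per cohort, per item
    "OUA (external)": {"item_11_overall_satisfaction": 95.0},
    "Curtin undergraduate": {"item_11_overall_satisfaction": 75.0},
    "Curtin graduate": {"item_11_overall_satisfaction": 100.0},
}
respondents = {"OUA (external)": 21, "Curtin undergraduate": 16, "Curtin graduate": 9}

def gap_from_others(item: str, cohort: str) -> float:
    """Percentage-point gap between one cohort and the mean of the other cohorts."""
    others = [scores[item] for name, scores in agreement.items() if name != cohort]
    return agreement[cohort][item] - sum(others) / len(others)

for cohort in agreement:
    gap = gap_from_others("item_11_overall_satisfaction", cohort)
    print(f"{cohort} (n={respondents[cohort]}): {gap:+.1f} points vs the other cohorts")
```

On these figures, the Curtin undergraduate cohort sits roughly 22 points below the mean of the other two on item 11, which is the face-value difference described above.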

The only reasonable conclusion I can draw from this is that the STUDENTS, not the curriculum or teaching, explain the difference. Curtin undergrads had a class *as well as* all the online work and thus can be assumed to have had a richer / better teaching experience of the same content. Yet they were less satisfied. I conclude that the most likely reason for this is that, on the whole, Curtin undergrads have a more teacher-centric approach to their studies and thus an authentic, challenging learning experience is not as satisfying for them because it does not fit their expectations.

How do I arrive at this conclusion? Well, digging deeper into the data, Curtin undergraduates were notably more likely to agree that they had made best use of the learning experiences (+7% above the average) and were more likely to agree that they thought about how best to study (slightly more than OUA students; a lot more than graduate students). Graduate students and OUA students had lower scores on these self-rating items. I draw the inference that Curtin undergraduates *believe* they are studying well and perceive the difficulties to be the teacher’s fault (they are not taking responsibility for their learning as much as the others); OUA and, especially, graduate students are actually studying well, but take more responsibility for problems, thinking difficulties are their own fault. They are therefore more likely to be satisfied with a unit which challenges them to be responsible for what they are learning (even if they don’t make as much of it as they could).

Let’s also look at the item on feedback: we know feedback is the most troublesome area in all student evaluations and usually the source of the worst scores on Evaluate. Remember that, in this case, all students – across the 3 groups – received exactly the same extensive feedback (including having their main assignment marked and commented on, with suggestions for improvement, and then being able to resubmit it with improvements for a better grade). Even the classroom contact would not have materially changed this situation (and might even have allowed for more feedback). Despite this equivalence, Curtin undergraduates rated feedback 19% lower than the other two groups! My interpretation is that students’ responses to the feedback item are not a reflection of the feedback given but, rather, of their interpretation of what feedback should be. In other words, because Curtin undergraduates got extensive and helpful feedback which required them to do more (so as to learn and improve), they actually believed it was ‘poor’ feedback: it did not fit their inflated expectations the first time around, or their belief that the teacher ought to have told them how to do a good job before the assessment, so that poor performance, and the critical feedback it attracts, would not be their fault in the first place.

Finally, let’s look at the key question of motivation (the unit was specifically designed to maximise motivation by giving students responsibility for their learning). Curtin undergraduates’ agreement with the motivation item was around 12% lower than the other cohorts’; in other words, despite identical approaches to motivating students, the Curtin undergraduates felt themselves to be less motivated. What this suggests (again, not surprisingly) is that motivation is correlated with the internal dynamics of the student, and not necessarily amenable to control by what teachers do. Of course, teachers must be focused on motivating students (indeed, that is the point of authentic assessment in many cases): but surveys must be used cautiously when assessing the degree to which teachers have achieved that goal, since it is, in truth, only possible for students to be motivated when a partnership (rather than a relation of domination and control) is at least approximated.

In conclusion, this unusual situation – 3 different cohorts, all responding in significant numbers to the same survey, on the same unit, with all variables pretty much the same except for cohort membership – shows the challenge of Evaluate and similar surveys. They do a good job of assessing student perceptions of teaching and learning. With some fine analysis they can also suggest ways of managing those perceptions for the better. But what they cannot do is substitute student perceptions for measures or evaluations of actual quality.

Disclaimer: This analysis is not a rigorous statistical reading of the data. That task is, in fact, impossible because of the way the data is collected and presented and, moreover, would require different items to be asked in the first place. The variations that emerge may not be statistically significant but, that said, they do, on the face of it, make me suspect that there is a major difference between what the survey purports to measure and what it actually measures. Furthermore, since the survey results are used for management purposes with little regard to good statistical practice, I am playing by the same rules as those who require the surveys of us.