Events

Web 2.0 from the ground up: take 1

Posted in Conferences, Ideas on September 29th, 2011 by admin

I am speaking in a couple of weeks at the Internet Research 12.0 conference, ‘Performance and Participation’.

My paper, Web 2.0 from the ground up: defining the participatory web in its own terms, is based on an analysis using Leximancer of 750,000+ words used to describe 12,000+ Web 2.0 applications. Some of the fun I am having includes generating dubious yet intriguing infographics…
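As an aside, for anyone wondering what this kind of corpus analysis involves: Leximancer is proprietary concept-mapping software, so I can only gesture at the general idea with a toy sketch — counting frequent terms and their co-occurrences across short application descriptions, which is roughly how candidate concepts, and the links between them, surface. The sample descriptions, stop-word list and helper names below are invented for illustration and are not drawn from the actual corpus.

    # Toy analogue of concept extraction over app descriptions (NOT Leximancer itself).
    from collections import Counter
    from itertools import combinations
    import re

    # Minimal stop-word list; a real analysis would use a far larger one.
    STOPWORDS = {"the", "and", "for", "with", "your", "you"}

    def terms(description):
        """Lowercase a description and keep informative words only."""
        words = re.findall(r"[a-z0-9']+", description.lower())
        return {w for w in words if w not in STOPWORDS and len(w) > 2}

    # Invented sample descriptions standing in for the 12,000+ real ones.
    descriptions = [
        "Share your photos and tag friends in your social network",
        "A collaborative wiki where users create and edit content together",
        "Tag, share and discover bookmarks with other users",
    ]

    term_counts = Counter()   # candidate 'concepts'
    pair_counts = Counter()   # candidate links between concepts
    for d in descriptions:
        ts = terms(d)
        term_counts.update(ts)
        pair_counts.update(combinations(sorted(ts), 2))  # co-occurrence within one description

    print(term_counts.most_common(5))
    print(pair_counts.most_common(5))

Counting co-occurrences like this yields the weighted nodes and edges of a rough concept map; an infographic is then largely a matter of drawing the highest-weighted nodes and links.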

However, I am still struggling to find the right way to explain how I get from this analysis to what I seek to conclude: that the discourse of Web 2.0, very much a language of computing, is now reshaping our sense of self.

What was Web 2.0? Versions and the politics of Internet history

Posted in Events, Presentations, Seminars and presentations on May 4th, 2011 by admin

This presentation, given at the Oxford Internet Institute on 4 May 2011, is a reduced version of a paper of the same title that includes substantially more examples and all appropriate references. Please refer to that paper for a full account.

What was Web 2.0? [full paper]

Introduction

In 2008, the journal Fibreculture published an issue entitled “After Convergence” exploring questions of human, social and technological connectivity within a world where computer networks had led to the convergence of formerly disparate cultural practices. In describing the several contributions to the issue, the editors wrote:

we have asked not only what makes ‘2.0’ distinct from ‘what came before’ but also how it will be understood in the future. We ask this question not least because we are somewhat alarmed by visions of proliferating version control as 2.0 merges with 3.0 and 4.0 looms on the horizon (Bassett et al.)

My paper, in general terms, takes its lead from this critical interest in things ‘2.0’, focusing specifically upon Web 2.0. I will outline the way in which the emergence of Web 2.0 brought to the web the discourse of versions. A history created in versions, a particular form of an object’s history, required that, as well as Web 2.0, there had to be a Web 1.0, and it of course presupposed the emergence of Web 3.0. The articulation of one depended, explicitly or implicitly, on the others.

Is Web 2.0 dead?

Web 3.0 labels many trends in the development of the web that might presage a ‘new’ time involving such ideas as the Semantic Web, both systemically and in specific application; new investment opportunities; and even new scholarly critiques and theories. So, perhaps it is time to ask: “Le Web 2.0 est-il mort?” [“Is Web 2.0 dead?”] (Lequien). Far and wide across the web, the phrase Web 3.0 yields a vast array of search-engine returns, whether they reference marketing slogans, political commentary, technical discussions or techno-evangelist opinion.

Yet the discourse of Web 3.0 bears an uncanny resemblance to the rise of Web 2.0: different in time, not substance, and marked by the same jumble of competing, but inherently irreconcilable, differences of perspective and purpose as people position themselves, their technologies, and their ideals in relation to what has come before and what might come in future. In such circumstances, the real questions to ask, then, are: how did the web come to have versions in the first place; what is the discursive process by which these versions come to make sense; and what is revealed by analysing this history of versions?

Web 2.0 and Web 1.0 – continuity or change

Around the start of 2006, Web 2.0 became the principal way to describe the then-current web rather than being a term which looked towards an as-yet unreached future. Yet several businesses and web services thought exemplary of, or essential to, Web 2.0 date from much earlier times, as do the technologies on which they rely. Examples include blogging (Blogger and LiveJournal), distributed payment services (PayPal), crowd-sourced, user-generated content (Wikipedia), social networking (SixDegrees), and algorithmic search and associated marketing (Google).

Equally, behaviours and sensibilities which have for several years regularly been discussed in terms of Web 2.0 pre-date its origin and extend back beyond the web itself. It has been claimed that:

… the essential difference between Web 1.0 and Web 2.0 is that content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content, while any participant can be a content creator in Web 2.0 and numerous technological aids have been created to maximize the potential for content creation. (Cormode and Krishnamurthy).

The visibility of such behaviours in earlier Internet times suggests that Web 2.0 did not create this change but merely promoted it. Examples from before the web include USENET, bulletin boards, email lists, chat environments, and MUDs, all of which demonstrated the founding basis of socio-technological networking: people wished to share information, create content and work with others in doing so. The World Wide Web did not mean an end to these earlier forms, but enabled a rapid increase in their utility and visibility; there was also significant website creation and curation by individual users, and the concept of the ‘webring’ emerged to enable networks of users and authors to form.

These examples stand in contradiction to the current ‘history’ of Web 2.0, which generally views the technologies, businesses and social formations of the past several years as initiating and making possible an online world of participatory, user-generated and open content and communication. Of course, the point is not that this current history is wrong: histories are not wrong, per se, but contingent on the circumstances and purposes of their creation and circulation; current contestations are fought out through, rather than by reference to, historical accounts.

Juxtaposing examples which suppose a continuity of development online with the dominant history of Web 2.0 as radical transformation allows us to look more deeply at the cultural complexity of the notion of ‘version’. The idea of versions is becoming a cultural commonplace of today’s world because of the rise of computing: the conventions of software development are coming to play a critical role in the consumption of goods and services dependent on digital technology. While it is not new for goods and services to be promoted and sold on the basis that they replace what came before (for example, a new model of car or washing machine), it is definitely the case that, in computing culture, consumers are now receptive to the idea of purchasing something which they know will be replaced in a short period of time by a new version, and willingly enter into this transaction, becoming part of the development cycle as much as being its recipients. Consumers accept, if reluctantly, that the digital products they buy are not entirely finished, and will be regularly ‘patched’.

These points are not merely a comment on current software marketing: they reveal the semiotic work that the discourse of versions can do. Versions allow products to claim to be new, but not threateningly so, because they also sustain continuity and promise an easy transition from that which came before. At the same time, versions allow new products to derogate earlier incarnations of themselves as limited or otherwise demanding of replacement. Through this duality of appeal, a discourse of versions reassures consumers that they were right to buy the product the first time around, but insists that they must now, of course, consume again and not rely on what they have already bought. Ultimately, the fetish object for consumer gratification becomes the process of upgrading, rather than what is possessed after that act.

Perhaps, then, technology might legitimise the move to a new, second version of the web while also sustaining continuity? Code within Web 2.0 is more sophisticated and enables developers and users to do far more, and in many different ways. Yet many Web 2.0 sites do not demonstrate technological sophistication, nor rely on innovations in code. Further, the importance for the computing industry of shifting the way we think of the Internet from channel (media discourse) to platform (computing discourse) cannot be overstated as a rationale, and implies technological change is consequential to a more fundamental commercial re-orientation. Thus, as Berry has argued, the insistence on technology as a discriminator between Web 2.0 and things other is more a demand for what ought to be than an objective description of actual change. Further, if technology is to authorise the legitimacy of claims for a transition to Web 2.0, then necessarily technology is presumed to determine, or at least substantially control, the consequences and meanings of that change.

Not all explanations or discussions of the transition to Web 2.0 relied on appeals to technology, however. We can contrast the contemporary commentators Schauer and Hinchcliffe, who argued, respectively, that new behaviours emerged because of technological development, or that traditional behaviours became influential because of the large number of regular Internet users which the web created. In both cases, however, these authors demonstrate a fundamental tension within the language of Web 2.0: the need to manage the transition between versions, to explain the 2.0 which both ‘breaks’ with the past and also connects to it. This tension is not easily resolvable because, in truth, the tension is what gives ‘versions’ their semiotic cogency.

Yet, without an articulation of change and continuity between ‘then’ and ‘now’ there could be no rationale for Web 2.0 and thus, even as this form of the web was proposed as novel, it had to be presented as less than novel. Throughout the texts of Web 2.0, simple dichotomies of new and old are presented hand-in-glove with the assertion of a contradictory, more developmental path from earlier times, often within a few short sentences. While popular advice might be that “The definition of Web 1.0 completely depends upon the definition of Web 2.0” (Strickland), in fact the existence of Web 2.0 depends utterly on Web 1.0. Without it, the absences and failures which Web 2.0 solves would not be knowable.

Web 2.0 and Web 0.0 – realignment to ideals

There are other ways in which the creation of Web 2.0 came to define the particular sense we have of the history of the web, with strong claims that Web 2.0 returned to the origins of the web, and indeed the Internet more generally, realigning everyday technology and social practice with the ideals which had first given birth to the web. In other words, Web 2.0 was not a continuation from Web 1.0 so much as a ‘reset and restart’, returning the web to its alpha version 0. This alpha version, according to Berners-Lee, was:

a common information space in which we communicate by sharing information [and] … dependent on the Web being so generally used that it became a realistic mirror (or in fact the primary embodiment) of the ways in which we work and play and socialize… we could then use computers to help us analyse it, make sense of what we are doing, where we individually fit in, and how we can better work together

In this respect, the web represented ideals of social practices through network connectivity, inspired by the prototypical cultural forms of the Internet. As Dean puts it, “Web 2.0 designates nonetheless the surprising truth of computer-mediated interactions: the return of the human”.

Within this particular conception of Web 2.0, the web needed a restart because Web 1.0 represented the failure of 1990s business. Not appreciating the web’s ‘true’ origins and seeking only commercial gain, business had imposed upon the web ideas and expectations drawn from the traditional media. Not only had most businesses involved in Web 1.0 mismanaged their own affairs pursuing the illusory goal of media convergence, but in doing so they had threatened the potential of the web to transform the world. The dotcom crash showed that the reality of how and why people used the Internet was not what business had thought, and thus proved the original ideals of open communication, sharing and so on were not only good, but true: “the Internet was literally given back to the people” (Raffl et al.).

Yet the emergence of Web 2.0 saw a return to speculative behaviour and commercial exploitation of the common information space:

The 2005 Web 2.0 conference reminded me of Internet trade shows during the Bubble, full of prowling VCs looking for the next hot startup. There was that same odd atmosphere created by a large number of people determined not to miss out. Miss out on what? They didn’t know. Whatever was going to happen—whatever Web 2.0 turned out to be (Graham).

Thus if Web 2.0 was a return to an earlier time, before Web 1.0, it would be marked by the same political economy as the 1990s, as part of informational capitalism and with competing forces vying to constitute the web as that particular fusion of technology and capital necessary to their commercial interests. Thus Web 2.0 could not completely ‘reset’ the web’s development, for it was intrinsically part of the competition within capital over the ways in which to appropriate the value of consumers’ attention, labour and tastes.

The particular nature of the wrong direction of the web is best understood through a specific analysis of the problem of ‘design’ and its relationship to the technologies through which design came to dominate people’s internet-mediated interactions. The importance of design is evident in the brief article by DiNucci which, in 1999, first coined the term Web 2.0, and in the opposed views of good design held by visual designers (e.g. David Siegel) and HCI experts (e.g. Jakob Nielsen). Exemplifying the way the web disrupted conventions and refused easy definition of norms and standards, the debate sums up the primary techno-capitalist challenge of the 1990s: how might media and computing combine, whether within one site, or within corporations set on paths of convergence. Yet this debate shows how Web 1.0 had diverged from the origins of the Internet, which were resolutely outside of the media, forming a space of information and collaboration that specifically did not draw on classic media forms, tropes or models. By 1995, from the corporate media perspective, rather than being a novelty suited to computer enthusiasts, the “WWW was seen principally as something you watched, rather than something you built a small corner of yourself” (Roscoe), and the source of this maturity was the imposition of media models to explain its future significance.

Hagel at the time stated that “In many respects, Web 2.0 represents a return to origins of Internet”, portraying Web 1.0 as a radical discontinuity from ‘the web’ which might have existed. Web 2.0 could then be proposed as a further discontinuity which would undo Web 1.0. However, Web 2.0 could never fully return to that time as if there had not been this misdirection. The need for the version (rather than a return to just ‘the web’) stemmed from the fact that commercialisation could not be undone, and a new form and approach was required. Furthermore, a vast ‘audience’ had developed: the users whom Berners-Lee’s idealistic vision was to serve. Their needs and expectations, courtesy of Web 1.0, were not what had been assumed in that vision. Just as Web 1.0 could be said to have failed because it did not understand the cultures of use of the Internet, so too there was no way to reset the web without now accounting for the new cultures of use that had emerged in the 1990s.

The discourse of versions

The emergence of Web 3.0 in recent years was inevitable once Web 2.0 started to be used. It is just too easy for technology evangelists to slip into the language of versions to help communicate their messages. But it is not just superficial talk. A move from discussing ‘the web’ to discussing Web 2.0 creates the foundations for a teleology of development, legitimising what had come before, what was current, and what was still to come. The emergence of one version, to replace another, ineluctably requires there to be yet another version, still to come, which in time will become current and, eventually, itself be replaced. How the web came to have a history shows us that, in building this meaningful narrative, a discourse of versions works in three ways.

First, the discourse enables a return to origins, to create the legitimacy for current moves that, far from being developed from the previous version, in fact realign with a trajectory of development originally intended. The recent past is placed to one side and ‘normal’ progress is resumed. Web 1.0 becomes the repressed other, only visible because it explains the contradiction of the move to Web 2.0 from the alpha version zero. In this repression, certain key features of the Internet in society (excess value appropriation; the complex relationship of the Internet and media; and contestations within informational capitalism) are strategically obscured.

Second, a discourse of versions enables a different movement from the past into the present, whereby the recent past is normalised as creating the pre-conditions for what has now emerged. The past is overturned by incorporating it within the self of the present, not repressing Web 1.0 but adding “1” to it. Here, version zero is repressed, since the latest iteration only references that which immediately preceded it and thus helps explain away contradictions which might produce a critique of the latest version.

Third, versions create the conditions for knowing and anticipating the future in an orderly manner, managing what is to come as astutely as what has been, positioning those who control the specific meaning of each revision as the authorities on what ought to be, based on their success in modifying current reality. Web 2.0 is a part-completed project, the model for Web 3.0: thus, the reasons for current failings and problems can be safely ignored because solutions are just one step away, in the next version. The discourse of versions enables the ‘erasure’ of the current version, even as it speaks it, directing attention towards the version next-to-come. The perfectibility of the Internet, and along with it the whole technocratic project that it signifies, is thus reaffirmed.

Conclusion

The dominant, popular history of the web is told through versions; these versions provide the semiotic sites at which critical debates about financial, technological and regulatory issues can be played out, in a fight to define the future through control over the meaning of the past and the referential present. For that reason there is no single, stable accounting of the versions and what they mean: the web is not software engineering, where versions represent agreed and defined iterations of the design and coding process. Yet the origin of versions in engineering is apt: versions create order, control and mastery over a process that might otherwise become impossibly flawed in the absence of a consensus about the history of the application or product. A history of the web told in versions is all about the way that people seek to influence the direction of future development to suit their ideals, profits, or personal ambitions, but only insofar as this historical account becomes the basis for collective, or shared, understanding.

What of the other sorts of histories to which we should pay attention? The shared history of versions of the web, in which periods are defined, originators and pioneers identified, and generalisations made, occludes the private, personal histories of Internet use that tell of the individual experience of connectivity and which reveal a very different kind of relationship between technology and individuals. I will conclude with two examples.

Consider the case of Justin Hall, as described by Walker:

Hall’s narration of his life online began in January 1994, with a simple homepage, and extended into a detailed hypertextual version of his life told in traditional node-and-link HTML. When weblogging software became accessible, Hall started using it, and posted almost daily fragments in this decade-long autobiographical project until early 2005. At this point, Hall posted a video in which he discussed the problems of publicly narrating one’s life at the same time as relating to the people in one’s life, and ceased his personal blogging.

This history (and there are many more like it) stands in stark contrast to the idea that Web 1.0 was a time of static, commercially oriented content produced for a mass audience and that this inappropriate form of the web was heroically undone by the expertise (whether business or technical) of the Web 2.0 revolution, enabling people to lead social lives online. Hall can be characterised as Web 2.0 before the term existed, and as having returned to Web 1.0 during the time of Web 2.0.

Second, the web is effectively designed by the preferences, behaviours and interests of its users and not by software engineers (or indeed communications designers). As Millard and Ross found:

… the relationship between Web 2.0 and those original [hypertext pioneers’] visions is more complex than [expected]: many of the aspirations of the hypertext community have been fulfilled in Web 2.0, but as a collection of diverse applications, interoperating on top of a common Web platform (rather than as one engineered hypertext system)… The Web 2.0 model is heterogeneous, ad-hoc, evolutionary rather than designed…

This example suggests an entirely different history of the web, consisting in the innumerable and largely invisible minute acts of all the users of the web: a crowd-sourced history, visible in the end only in its effects or in the recollection of the place within the crowd which any individual occupied at a given time.

Web 2.0 engendered a history of the Internet (its technologies, peoples, businesses and politics) that both depends on and asserts the primacy of the discourse of versions as the correct way to tell this history. This effective claim, that the only legitimate way in which web histories can be told is with due deference to the technical language of the originating discipline, is, ultimately, the most profound consequence of Web 2.0. Perhaps, if now we are asking “Is Web 2.0 dead?”, we might more positively ask: what other ways might we explore the histories of the web such that users’ agency in their own historicity is more fully realised?

Growing Knowledge: what is the future of research?

Posted in Events, Seminars and presentations on May 4th, 2011 by admin

 

Disclaimer: Live blogging

Growing Knowledge: what is the future of research?

(details)

A Times Higher Education debate hosted by the British Library, featuring Matthew Gamble, David Gauntlett, Aleks Krotoski and Ben Hickey, and chaired by Phil Baty.


Phil Baty starts the debate: it is fundamentally about the way that IT will profoundly change the nature of research. Introduces the speakers.

Hickey

(A-level student)

Has grown up surrounded by network technologies and assumes they will be crucial during his time at university. He ponders, however, whether research collaboration between people and computers might lead more traditional people to question the validity of his work, because the boundaries between him as researcher and the technology are indeterminate. [Cyborg researcher?] Perhaps universities, because of their traditional outlook, may hinder learning and research. On the other hand, maybe technology creates too narrow a vision and the voice of experience from earlier times can shed revealing light on a problem. Points to a problem: the younger people with whom Hickey spoke are largely uninterested in universities and research, seeing them as irrelevant and distanced from the real-world problems they face.

What is revealing about Hickey’s contribution is the way in which someone who has grown up with technologies of networks, intelligent agents and so on construes the role of technology in research: as something that, in effect, stands OUTSIDE of the normal practices of researchers and potentially enables research and learning directly from / with computing code, without human (academic) intervention.

Gamble

(PhD candidate)

Mismatch between the potential that technology provides (connectivity, immediacy and scale) and what is currently normal practice in academic research. This potential is, however, what causes the problems as well. The web might become the “invisible college” which promotes the circulation of scholarly literature outside of the norms of academic journal publishing and, indeed, the formal structures of universities.

Provides the example of crowd-sourced data analysis within the Galaxy Zoo project, where large amounts of data were given to many individuals online for them to micro-analyse, out of interest in the subject. They discovered things which the researchers were not even aware they should be looking for.

Gamble’s critique of traditional science is important: he reveals that lurking within the technologies of network collaboration is, in fact, a deeply ideological project towards openness and altruism. Open science, while often construed as made possible through the Internet and similar tools, is more about a reaction against the institutionalised, narrow and profit-oriented sciences which have emerged over the past fifty years.

Notes the scientists who resist open data (the so-called “selfish scientist”) and who are obsessed with publishing, not finding things out. “Altruism is quickly beaten out of young scientists”. So, there are tools for collaboration, but they are not used significantly.

Concludes by calling for a different mode of publishing: it’s not just open publishing, but also publishing of data, the methods, processes, the discussions about projects and so on.

Krotoski

Discussing Web 2.0 and scholarship. Ponders the reality of such technology in the real world, outside of the world of enthusiasts (such as myself, I should admit). Recounts how she spoke with PhD students as they commenced their studies: almost none of them had any kind of online presence, definitely not blogging and so on. Students told her that they were discouraged by their supervisors from being online and open. They certainly were not taught how to do it. This was, from the traditional perspective, ‘wrong’.

So, she continues, what of the future? She emphasises the validity of blogs or similar: ideas can be trialled and discussed with peers, self-promotion becomes useful (on the basis of quality, not spin), writing becomes a habit, and reflection becomes possible. Krotoski views scientific / technological research in the USA, where this use of social media and Web 2.0 is more prominent, as being influenced by industry, which is not interested in long-term peer-review publishing but in rapid and iterative publishing of ideas and their development.

I wonder if there needs to be greater discrimination between types of ‘web 2.0’ use [which I had discussed with Aleks before the event, so no criticism here]. This discrimination is, pretty much, about identifying the unknown but useful tools of the web which critics of ‘web 2.0’ probably use without realising that these tools could, from another perspective, be seen as web 2.0.

Krotoski comes back to a key point: how do we trust what is online; is it valid and reliable; how can we assess that? The normal position is emphasised — it’s about training people to have the capacity to assess. Baty contributes a point: traditional publishing filters content to give it more reliability.

Gauntlett

Online publishing and distribution of information is very useful, even required, for academics. Open publishing helps the world and is ethically required; it is great, too, for academics because it makes them self-reliant. Moreover, the web and similar tools make academics public intellectuals again, rather than closeted.

Scholarly publishing comes from a time when distribution was very limited and filters were needed because of low bandwidth. Gauntlett has a great view on the failures of the peer-review system: it assumes reviewers are entirely uninvested in the outcome except from a rational scientific perspective. Perhaps academics can do the filtering themselves by using what is good, from their view.

Gauntlett noted he first built a website in 1997; some of the most keen advocates for web 2.0 and knowledge networking are often longer-term Internet users who, perhaps, have understood the web more from a self-creative perspective?

Debate now ensues.

Something of a confusion emerges from the discussion between the academics about peer review: there’s a slight problem with comparing and contrasting peer review with complete ‘openness’ (e.g. Twitter). In fact, the discussion might more usefully concern the reshaping of peer review so that it is more productive, improving and expanding work in a supportive manner. One example is the peer review process of Critical Studies in Peer Production.

Question from audience regarding new kinds of research methods which the Internet might produce — too much data produces new methods; online behaviour produces new methods; a nice contradiction between Gamble enthusing about the Semantic Web and Krotoski worrying about the missing human condition.

Gauntlett makes an interesting comment — it appears that crowd-sourcing can elevate people to being partners in science (as in the Galaxy Zoo), “citizen scientists”; this is like citizen journalists and so on. I read this as another example of the meme/trope of participation and democracy which is ideally or occasionally true but, in fact, is a general mythos within which hierarchies and elites persist.

What was Web 2.0? Versions past, present, future and the development of Internet historicity

Posted in Events, Seminars and presentations on April 22nd, 2011 by admin

Upcoming seminar at the OII, Oxford
 
What was Web 2.0? Versions past, present, future and the development of Internet historicity

4 May 2011

UPDATE: my paper is slightly different, now that it is finished. I have concentrated more on detailing the particular way in which versions came to the web, the consequences of that, and generally exploring the way ‘versions’ work as a particular kind of (popular) historiography. I will work on the historicity stuff next!


In this paper, I discuss the emergence of the historicity of the Internet – that is, the explicit sense, with practical consequences, that the Internet has a history, and that it occupies a place in history which, through our use of it, also defines us as beings in time. While the term historicity has a long tradition within religious scholarship, marking efforts to determine the factual (as opposed to mythic) status of various ‘historical’ figures, I use the term from a more postmodern perspective. From this perspective it might be said that all facts are myths and all myths are facts, except that the politico-cultural discourses within which we know the world determine for us very clear, if contingent, boundaries between fact and myth. Historicity is better understood, therefore, as marking out that state in which the history of a phenomenon is established, and used, for particular purposes, and said phenomenon is therefore experienced as having ‘a place’ in history.

For many years, the Internet existed as a kind of cultural future-in-the-present. For example, in the 1990s, talk of the ‘Internet frontier’ was a metaphor to give cultural substance to this new and inexplicable space called cyberspace. But it was also a temporal metaphor: the frontier was the future, as much as it was a place (perhaps the past as well, so influential was America’s colonial history in this time). The speculative economics of the dotcom boom were, similarly, a future-in-the-present, exploitation of which would (when that future actually arrived) bring untold wealth, as a bare handful of clever domain name squatters found. The alterity of the Internet, where people found freedoms not imaginable in ‘the real world’, was also an alternative time, if you like, a world of future possibilities, made real through the magic of networked computing. The Internet might have had a history (traced on Zakon’s timeline, sketched in Where Wizards Stay Up Late) but it had no historicity.

That has changed because of Web 2.0; not, so much, the technologies of Web 2.0 but the snowballing effects of Tim O’Reilly’s creative marketing of the term. There never was a Web 1.0 … until he (and we) started to discuss Web 2.0. Even then, 1.0 existed as a kind of shadow, rarely spoken but always implicit. Moreover, almost as soon as Web 2.0 had become popular, Web 3.0 was being used as well, despite the fact that what it labelled (the Semantic Web) preceded Web 2.0 (see Allen, 2009).

What can we make of the last decade or so of the web, which has, in popular commentary, clever marketing, and actual socio-technological development, become a second version of the web we had in the 1990s? What are the consequences of coming into history for the Internet, and is there another version yet to come? Or have we reached a time when all we have is ‘the contemporary web’? My conclusions will, I hope, both inform our understanding of the Internet itself and give some guide to how we might research it.

(some of this has been sketched before when I presented on Historicising the Internet at the OII Doctoral Summer School in 2009).

Beyond the Edgeless University

Posted in Events, Summits and Workshops on April 21st, 2011 by admin

Upcoming Workshop

A Question of Boundaries: What Next for the ‘Edgeless University’?

(Workshop details)

I will be organising and facilitating a workshop on the impacts of network technologies on universities at the Oxford Internet Institute in May, focusing on a critical appraisal of the notion of ‘edgelessness’. Here is the extended abstract and plan for the workshop:


In 2009, the UK Demos Foundation released a report, The Edgeless University (Bradwell, 2009), exploring the impact of digital network technologies on British universities.

Subtitled ‘Why higher education must embrace technology’, its author, Peter Bradwell, argues cogently for both the opportunity and necessity to remake higher education according to the new realities of a world relentlessly connected, digitised and increasingly distributed in time and space away from centralised locations.

Repurposing Robert Lang’s insights about the edgeless city, in which the functions of the city occur but the form is now more fluid, dispersed and without the clear boundaries which have previously helped define ‘the urban’, Bradwell proposes a shift in higher education analogous to that within the popular music industry: technologies will not ‘do away’ with universities but, to prosper, those institutions must change systematically and with a “coherent narrative” to embrace digital networks.

While much has changed in terms of the funding, politics and general cultural climate around higher education in recent times, nothing has changed to lessen the threat of conservatism in the face of global knowledge networking, nor to reduce the opportunity which universities have to become central to the new forms of knowledge and learning which the Internet and related technologies demand.

This workshop will explore practical opportunities and problems that confront academics and institutions of higher learning in light of Bradwell’s prognosis for the technology-oriented future. The focus for the workshop is to ask:

what exactly should a modern comprehensive university do that will unleash the creativity of students and staff and maximise the potential of distributed, edgeless learning while, at the same time, also making the most of the physical spaces which will remain critical markers of ‘a university’? In other words, how can we utilise digital technologies and networks to fashion ‘new’ edges — temporary boundaries, if you like — that assist us in making education a collaborative, collective experience?

Format of Workshop

The workshop will be a half-day, including lunch:

  • Introduction / overview (30 minutes – Matthew Allen)
  • Open discussion: what are the key changes needed for enhanced, engaged teaching and learning within the edgeless university paradigm? (30 minutes – plenary)
  • Groups work on the key changes proposed, examining critically their validity, refining them and making sense of the likely outcomes (30 minutes – sub-groups)
  • Lunch (45 minutes)
  • Report back and group presentations and discussion, including consideration of the need for edges to be re-introduced at times (60 minutes)
  • Conclusion, including overall response (15 minutes – speaker TBA)

Portfolios, digital and reflection: interleaving Michael Dyson

Posted in Conferences, Events, Ideas on December 2nd, 2010 by admin

Listening to Michael Dyson, from Monash, talking about portfolios in teacher education: a great presentation.

Dyson says:

  • Education of educators is first of all premised on turning them into people who practise self-development. Gives the example of the very first unit. [So, care of the self is central, and making students include themselves as subjects in the learning process - nice!]
  • Learning is changing dramatically – globalisation, computing, and so on. [But, perhaps, there is an important qualification on some of the more optimistic claims for 'new' learning: learning is embedded within society in ways that shape those possibilities and that are not entirely concerned with 'better' learning. At the very least, the definition of better is contested: is it cheaper? is it more orderly and commodifiable? is it linked to national norms and needs?]
  • The creating mind is the goal. [Interesting - not creative, but more positive and active - creating. Good difference]
  • Reflection is essential to achieving the kind of successes in self-developmental learning; using Dewey (2003), emphasises “active persistent and careful consideration”; reflection is not taking “things for granted…[leading to] ethical judgment and strategic actions” (Groundwater-Smith, 2003). [Further work needed, perhaps, to understand reflection for this new generation, if one takes as given the significant changes in knowledge: is reflection as developed in the 20th century the right kind of reflection?]
  • ALACT model – action, looking back, awareness of the essential aspects, create alternatives, trial.

[image of the ALACT model]

[This is really helpful - I like the added 5th step, compared to the normal action research 4-step model]

  • “the artefacts placed in their portfolio showcase who they are and their current online learning”; these artefacts are attached to the standards which define what it is to be an educated teacher according to the outcomes required. [So portfolios are a clear negotiation of the student's understanding of those requirements and standards?]
  • Exploration of the actual portfolios that students have created, using a paid-for service iwebfolio (was subsidised). Variety of successes and failures, all the material goes into a digital, not paper portfolio. Notes the fact that the metadata on when and how material uploaded is available, unlike other means of generating a portfolio. [I emphasise: the portfolio is a genuine, real requirement for teaching employment. It is authentic learning]
  • Use of standards / outcomes as an information architecture to drive cognition in inputting information (adding artefacts, commenting etc.). [So, the portfolio is 'scaffolding' into which a building goes, with a clear design brief. It might be a highly structured knowledge engine]

I am wondering if the students genuinely are doing this work for themselves or if they imagine an audience of ‘judges’ – their teachers who grade the portfolio or the employers who might use it? Managing multiple audiences is tricky, even with technology that allows it – because if you can shape the portfolio for several audiences… then does the self audience survive?

Then again, maybe the whole point is that the students are not yet capable of being their own audience.

Some other portfolio software (and look how it is more than just a portfolio…)

http://www.pebblepad.com

Authentic learning: presentation to NCIQF

Posted in Conferences, Events, keynotes on November 30th, 2010 by admin

On Thursday 2 December, I am presenting at the National Curriculum Innovation and Quality Forum on the subject, “Risks and opportunities in authentic learning via the Internet”.

The basic brief for this keynote presentation is to:

  • summarise approaches to authentic learning in the BA (Internet Communications) at Curtin University;
  • identify the key benefits in using a public knowledge networking approach to authentic learning; and
  • highlight risks and strategies for managing those approaches in the pursuit of authentic learning online.

While I hope to do that, with a particular emphasis on giving some examples from the great work that students in the BA (Internet Communications) have done, I also have found that in preparing my talk I have had to develop a more coherent argument about the nature of authenticity in learning and the relationship between education and learning.

The talk can be found here: https://netcrit.net/content/nciqf2010.pdf

This paper also draws on some specific work I have done on authentic assessment in our online conference unit, Internet Communities and Social Networks 204, and more generally on social media and authentic assessment (a presentation in the UK, May 2010).

Some of the examples I refer to will be listed on my blog within the week.

Something new: a “blogshop” on online learning + more online learning tools

Posted in Events, Summits and Workshops on November 23rd, 2010 by admin

Tomorrow I move out of my comfort zone in presenting on the uses of online learning in higher education. I am at the University of Newcastle and will, in the morning, give another version of my presentation on Web 2.0 tools for online learning at university (search for “Matthew Allen”). This presentation will be fine: it has worked well before but is very didactic and controlled.

In the afternoon I am giving a “blogshop”, which is my neologism for a workshop-involving-blogging. It involves co-present, computer-mediated interactions in which the users (aka labrats) will join and participate in a collaborative blog just for the period of the workshop. The blogshop is called ‘5 Steps Towards new-fashioned online learning’ (at http://knl.posterous.com).

Amongst other things, the blogshop is going to involve Todaysmeet back-channelling, identity creation and management via Gmail (for Posterous and Slideshare) and exploring another ‘top 10’ of Web 2.0 tools. I’ve already been extolling the virtues of Posterous, Slinkset, Mind42 and others. Now we are going to start exploring:

  • Chartle (Chartle.net tears down the complexity of online visualizations – offers simplicity, ubiquity and interactivity instead)
  • Flexlists (With FLEXlists you can create simple databases of anything you want, with every field you need. You can share the list with others, invite them to edit the list or just keep it for yourself)
  • Groups (Roll your own social network)
  • Moreganize (Moreganize is a multifaceted organisation tool. It is suited for both professional and private use and is especially convenient if a larger group of people needs to get organized!)
  • Planetaki (A planet is a place where you can read all the websites you like in a single page. You decide whether your planet is public or private.)
  • Qhub (Qhub is a platform you can use on your blog or website that allows your audience to ask questions and get real answers, it doesn’t just help answer questions it allows a genuine community to develop around your site.)
  • Scribblar (Simple, effective online collaboration: multi-user whiteboard, live audio, image collaboration, text-chat and more)
  • Spaaze (Spaaze is a new visual way to organize pieces of information in a virtual infinite space. Your things, your way.)
  • Squareleaf (Squareleaf is a simple and intuitive virtual whiteboard, complete with all the sticky notes you’ll ever need. Unlike the real thing, our notes don’t fall off all of the time.)
  • Survs (Survs is a collaborative tool that enables you to create online surveys with simplicity and elegance.)
  • Voicethread (With VoiceThread, group conversations are collected and shared in one place from anywhere in the world. All with no software to install.)

(all quotes from the websites concerned)

Posterous rocks. I am now too wedded to the flexibility and power of WordPress to change my main blog, but I think Posterous really has a great ease-of-use factor that, if you want simplicity, recommends it.

The substantive point is this:

developing people’s ability to engage in innovative online learning design is not about the software per se: it is about their ability and attitude to work with the cognitive engineering available via the web to create interactive learning experiences (where interactive implies interactions between computers and humans, as well as between humans themselves). Therefore the blogshop provides, I hope, an experiential learning activity: learning by doing, while thinking, and communicating about that experience.

Contact me if you want to repurpose, reuse or otherwise mashup the knowledge networked learning blogshop – it’s Creative Commons.

Research for action: a report on a workshop, Making Links 2010

Posted in Events, Summits and Workshops on November 15th, 2010 by admin

On 15 November, as part of the Making Links Conference, Marcus Foth and I organised a workshop entitled Research for Action: Networking University and Community for Social Responsibility. Participants included researchers and activists, based in both universities and community organisations, and the following is a broad-brush summary of some things I learned from participating in a great day (with apologies for any errors in interpretation of what went on).

(Posted before the final discussion, so I can concentrate on that plenary)

Acknowledgments of the great people who spoke today are at the bottom.

Some of what I discuss is:

  • There is no one model for cross-sectoral collaborative research organisation
  • Research projects change; research is projection
  • The grant or article is not the motivation
  • The silent partners of research
  • Who is the researcher?
  • Research and action have different timeframes
  • Learning / Education and research
  • Research, knowledge work, networked ICTs
  • Show me the money

There is no one model for cross-sectoral collaborative research organisation

Three (or more) models of research for action (drawn from contributions from Kath Albury, Marian Tye and Helen Merrick)

  • the collaborative project, articulating a complex array of partners around a specific issue, involving funding, participants and university researchers
  • the choreography – diverse bodies in motion, all in relatively simple partnerships, but the overall result is complex
  • the personal is professional – passion in life underpins research and living. There’s no ‘group’ out there except one to which the researcher already belongs.

All three models involve aspects of the others.

There is a difference between research as a specific search for new knowledge (more or less applied), for which a partnership might be needed or used, and research as a state of mind or way of engaging with the world. Research collaborations could be one or the other, or lead from one to the other. Just be careful about keeping clear what is needed and possible.

Research projects change; research is projection

The research project does not exist outside of the collaborative partnership: the project must be regularly reframed to suit the shifting circumstances that are exposed by doing the research. The shifts in circumstances can be both organisational (i.e. changes in the way the partnership is being operationalised) and also epistemological (what you ‘know’ changes). The project is a ‘projection’, a throwing forward or imagining of where the outcomes will emerge and how, rather than a fixed container.

The grant or article is not the motivation

Researchers who are employed within universities should not approach collaborative community partnerships by saying ‘what is the grant I can get; what are the publications I can generate IF I make this link’. Rather, they should ask ‘what is the social, knowledge benefit that can be achieved FOR the people who are in action’. The consequences (grants, publications) – the currency of academic success – can follow from the productive partnership rather than being the reason for its existence. At the point when the partnership (not the academic) will benefit from such ‘academic’ successes, then they can become part of the explicit doing of the partnership’s work. While this statement might be seen solely as a moral stance which acknowledges the requirement to understand the aims of social research for action, it is also a pragmatic statement of efficiency: seeking the grant, thinking of the article, will not actually create the conditions for partnership.

The silent partners of research

Research collaborations between community organisations and academic researchers always involve ‘silent’ partners whose needs and expectations must also be considered. For example, a silent partner for an academic might be her Head of School, who manages her workload and thus, even while not present, is still involved, in the sense that this person influences the research almost without knowing it. Community organisations represent, but are not the same as, the whole population that is their collective: these people too are silent partners. Governments, funding bodies, and the media who might report on research are all silent partners too. I advance this idea simply to suggest that one of the ways in which trust and explicit sharing of expectations and needs can be done more effectively is if the speaking partners articulate, to one another, the silent partners who might, nevertheless, influence the project.

Who is the researcher?

Research for action implies that the identity of the researcher is not as clear as in traditional research collaborations (between the normatively academic researcher and the group who has a problem they can’t solve). Researchers, from the academy, should perhaps be research coaches who empower the researchers already working within the community, simply by refining their attitudes so that they come to see their work as research. Similarly, the community workers may be doing the research by simply doing their job: the research outcomes might first occur within the community and only then become extracted into ‘research forms’ that are conventionally understood as research. The academic researcher, in this case, is a follower, or observer. The academic researcher might also be the solution to the organisation’s needs!

A related point: whether something is research is also contestable – and ‘doing research’ has its own politics: if it suits a community organisation to be ‘doing research’ then academics can help reframe the work in that way. Researchers, from universities, often need to suspend their desire for research outcomes to recognise instead their desire for involvement. Such a step might actually be quite liberating and productive for it frees academics from some of the more foolish ‘research management’ games which the formal assignment of the title ‘research’ can entail.

An additional point, picking up a different meaning of ‘who’: there is a big difference between researchers who are established institutional academics, emerging academics, doctoral students and so on.

Research and action have different timeframes

One theme from the workshop: timeframes are different for ‘research’ and ‘action’. These timeframes can come into conflict in several ways – the time taken to prepare the research (funding, ethics, clearance) and the time taken to publish results (scholarly journals) are not very ‘active’ in the sense that a pressing problem requires more urgent action. At the same time, while not invalidating the need for action, research works because it doesn’t jump to conclusions: the suspension of judgment opens the space to discover something new, and the time taken can be productive of the civic intelligence that research can build (it is very process oriented). Maybe one answer is to run the action project in parallel with the research project and pragmatically map out the points of overlap.

Related point: the disinterest of the researchers is paramount within the scientific paradigm; the degree of disinterest actually operating in many social realms might be much less than ideal, but disinterest itself remains part of the conventional discourse of research. In many community research projects, interest is the mainspring for both initiating the research and choosing the research methods. I make this point because the apparatuses of research that take the time (ethics clearance etc.) are often designed to ensure / embed the ‘disinterest’.

Learning / Education and research

I am unable to make a clear statement on this point yet, but it seems to me that the collaboration, for many, between university researchers and community organisations is a collaboration between systemic learning (the outcome of research) and individual learning (becoming educated): researchers, on their own, discover knowledge that might or might not become learned; but links with community organisations – where the research comes from that organisation – enable action through which the knowledge becomes learned. Apologies: this doesn’t make much sense yet. But, to bring order to these thoughts: consider ‘popular education’ as the basis for knowledge production questions and activities (thanks Dan!).

Research, knowledge work, networked ICTs

In relation to ICTs and networks, many of the projects identified during the workshop involved social media, digital stories, creative online media, the politics of information, explicit networking and so on. These kinds of projects produce some interesting effects in relation to research. First, they generate textual materials which then serve as the research object; but, more importantly, they remodel the knowledge production process. In the end, research is about producing knowledge: where the tools or means of production (as well as the maintenance of social conventions about what counts as knowledge) remain in the hands of a narrow elite, then everyday people cannot ‘do research’ because they are excluded from the knowledge-work apparatus. But now ICTs, networks, and creative digital technologies make people researchers in the sense that they create, share, mashup and reflect upon knowledge. Once again, research becomes a tricky word: is it the meeting point for academics and community and political groups? Does the shared work of understanding what ‘research’ is (when it is now spread across society in different domains) create the basis for productive partnerships?

Show me the money

(Apologies: it is not just money). What mostly connects community organisations and researchers within universities is the desire to achieve something new, which changes lives, but without all or most of the resources needed to achieve it. To conclude this summary of what I took away from the workshop: partnerships make sense where the work necessary to achieve the partnership achieves more than simply using that time in other productive ways. Researchers and social activists and community developers just don’t have time to waste: but nor do they have enough resources not to contemplate how working together might save them time. Similarly, a researcher might need a community organisation to secure the money; a community organisation might need a researcher to get funding. Only when those interests align can the two collaborate to ‘steal’ from those who dispense the funds – normally governments and corporations with other agendas, limited money and requirements that can be met by alliance.

—————————————————————————

Thank you to:

Doug Schuler: Will we be smart enough soon enough?

Posted in Conferences, Events, keynotes on November 15th, 2010 by admin

Disclaimer: live blogging

Will we be smart enough soon enough?
Putting Civic Intelligence into Practice

Doug Schuler

(Keynote paper, Research for Action Workshop, Making Links 2010 Conference)


Civic intelligence defined pragmatically: people having the ‘smarts’ by which to acquire the things they need to prosper in society.

The world needs ‘our’ help: global problems, local problems – all need attention, and neither those in power nor the operation of the free market will solve them. Doug frames his work by asking: “How smart need we be to solve these problems? Will we be smart soon enough for the problems to be solved before they overwhelm us?”

Civic intelligence is a concept to lead us to the answer to these questions. It refers, effectively, to a judgment of how smart a group might be relative to the problems it faces; it is a form of collective intelligence, focusing on shared problems (e.g. the problems that define the group). Civic intelligence is about being smart, through civic means, to achieve civic goals. A particular modality of this form of collective intelligence is its distribution throughout society. Civic intelligence is a paradigm for activists and researchers.

Examples:

Sustainable prisons: question – “Can prisons save money and the environment while changing lives?”

Sidenote: This example suggests that productive action to solve significant social problems lies in joining together multiple problems – it is not so much finding innovative answers to a single problem but, rather, actively constructing a new problem set in which the action serves two or more problems at once. In this example, spending money on a sustainability project within prison not only makes prisons better at the ostensive goal (rehabilitation), but also contributes to addressing the problem of educating people about how to live and act sustainably, while also, potentially, making prisons more productive and therefore cheaper.

Beehive Collective’s work in relation to land degradation and renewal, “The True Cost of Coal” – sophisticated interweaving of skills and action, notion of research through action at the grass roots.

Sidenote: This example suggests that productive action involves very different paradigms of knowledge work, where creativity, sharing, and working together to represent the world and tell stories about it are more effective in addressing problems (and in doing so building civic intelligence) than traditional models of ‘research’.

Liberating Voices project: promote and assist citizen engagement through thought and action – pattern language responses. Everyone is an activist. Patterns are not recipes: “tools for thought”; patterns “change the flow of what would have happened in its absence”.

Patterns here could be understood as scaffolding for cognitive developmental action – without them, people don’t know where to start even if they know what the goal might be. Patterns don’t determine the outcome but give sufficient support for people to begin work. Moreover, patterns provide a shared language through which people can identify commonalities and work together. Without them, they remain individuated. So, do patterns create a kind of autonomous foundation for collective engagement?

A diverse list of points to define civic intelligence, interesting because of its diversity of categories:

  • civic intelligence builds more civic intelligence (it is productive beyond any specific act)
  • inclusive and participatory
  • efficient and creative
  • real problems (e.g. inequality, not just increased wealth for a few)
  • addresses several problems at once
  • make activism cool (again)

The last point is especially revealing: “Make activism cool (again)”. Schuler comments – “what is preventing people from doing this stuff? It’s not cool”.

I believe this comment taps into the increasingly knowledge- and engineering-focused state of contemporary society – what is now ‘cool’ is doing knowledge work, so demonstrations, ranting and protesting, which used to be cool forms of social activism, now appear insufficiently ‘efficient’ and ‘creative’ for our contemporary society.