‘Assessment is Textured and Finely Grained’.

Robinson, K. (2009):

A Critical Commentary on Assessment in Art and Design Higher Education.

Author: Finola Gaynor (C) 2013



This paper provides a critical commentary on a subset of assessment, focusing on the synthesis of assessment within studio practice units (as opposed to personal professional development or the written element of a programme) for Level 4 Graphic Design students at a UK creative arts university, considered against the Quality Assurance Agency (QAA) Code of Practice and the QAA Art and Design Subject Benchmarks. For the purposes of anonymity the university will be identified as ‘Ottimo University’.

In order to examine assessment, it is first necessary to understand the current assessment frameworks. The QAA Code of Practice is not mandatory, but it documents recommended actions, institutional structures and policies relating to quality and standards in higher education. In addition, the QAA Framework for Higher Education Qualifications (FHEQ)

‘enables higher education providers to communicate to employers; schools; parents; prospective students; professional, statutory and regulatory bodies (PSRBs); and other stakeholders the achievements and attributes represented by the typical higher education qualification titles’. (ENQA, 2008).

The QAA also provides a specific Subject Benchmark Statement for Art and Design,

‘which provides a means for the academic community to describe the nature and characteristics of programmes in a specific subject or subject area’ (QAA, 2008).

This paper will explore the generic model of assessment outlined within the QAA guidelines alongside the assessment policy and practice at Ottimo University.

The paper draws on a broad body of literature and research to examine the practice of assessment within an undergraduate academic framework. The researcher will also refer to the detailed knowledge and experience of Ottimo lecturers, drawn from open-ended discussions held at Ottimo. From this data, the paper intends to highlight key assessment practices and to identify the quality of learning fostered through assessment practices, using the generic learning outcomes and assessment criteria of Ottimo University. It is hoped that the understanding gained from the paper will enable future research.



I have worked within design practice and education for a number of years and, like most design educator-practitioners, my teaching grew out of the historic relationship of teacher-practitioners in art and design Higher Education (HE). Unlike on more traditional text-based courses, this mode of practice has been commonplace since the Royal College of Art (RCA) began delivering a specialised design portfolio of courses in the 1930s and 1940s (RCA 2011). The RCA considered it appropriate that all staff were engaged in industry practice as well as fulfilling their teaching roles. In addition, the Higher Education Academy (HEA) notes that art and design academics would often:

‘Work part-time as a practitioner while simultaneously fulfilling management and academic roles’ (Rees, 2006).

More recently the teacher-practitioner’s role has evolved towards that of a professional educationalist:

‘This culture of learning through practice has persisted and teacher-practitioners today represent a significant number of those delivering and developing the undergraduate curriculum’ (Clews and Mallinder, 2010).

Most of my academic work has focused on the practice of learning, teaching and assessment within the subject of graphic design. For this paper I will also draw on my experience as an assessor in order to draw out comparisons of assessment practice.

One of the most recent influences on assessment in higher education was the introduction of the Bologna Process, a principal aim of which is to provide a comparable, consistent and compatible European educational framework. Consequently the European Higher Education Area (EHEA) was defined, and it is currently made up of 47 countries (International Unit 2012). The EHEA educational framework is made up of three cycles: Bachelor, Master and PhD. Each cycle contains generic descriptors of typical abilities and achievements associated with completion of that cycle (EHEA 2010). Please see figure 1.


Figure 1. First Cycle, Bachelor, Qualification Framework Descriptors (Bologna Process, 2009).

Following the Bologna Process, the Quality Assurance of Student Assessment Working Group was formed; the UK QAA joined in 2007. Its brief was to explore the subject of quality assurance of student assessment practices in HE. The result was the report Assessment Matters (2008), published by the European Association for Quality Assurance in Higher Education (ENQA). The report concluded with a number of key generic points directly related to assessment:

  • emphasis needs to be placed on the careful design of assessments, in particular in terms of validity and reliability;
  • assessment must be aimed at showing achievement of specific learning outcomes;
  • assessment should be undertaken within a holistic framework that does not miss or ‘hide’ the achievement of other, non-explicit outcomes;
  • assessment should be designed to ensure that appropriate links are made between the assessment of a module and the overall learning outcomes of the programme;
  • assessment practices should be kept under review in order to ensure that the impact of new learning environments is recognised (p. 6).

In 2008 the QAA incorporated the key action points into the FHEQ (2008) qualification descriptors. The FHEQ divides the qualification descriptors into two discrete areas: subject knowledge and generic skills.

Ottimo University directly maps its academic infrastructure and assessment regime against FHEQ Level 4 (Certificate in Higher Education). Please see figures 1 and 2. Each unit refers to Learning Outcomes (LOs) that map against the QAA Code of Practice and the FHEQ. The university has generic learning outcomes for every course of study. For example, whether a student is taking Fine Art or Graphic Design, the LOs remain the same; what distinguishes the programme is the introduction of course-specific unit project briefs. Please see figure 3.

The QAA Art and Design Subject Benchmark Statement defines the subject principles, acknowledging the complexity and diversity within the discipline:

‘The outcomes of engagement with these characteristics are equally varied in art and in design, but both require the development of particular cognitive attributes. The role of imagination in the creative process is essential in developing the capacities to observe and visualise, in the identifying and solving of problems, and in the making of critical and reflective judgements. While convergent forms of thinking, which involve rational and analytical skills, are developed in art and design, they are not the only conceptual skills within the repertoire employed by artists and designers. More divergent forms of thinking, which involve generating alternatives, and in which the notion of being ‘correct’ gives way to broader issues of value, are characteristic of the creative process’ (QAA, 2008, p. 3, 2.3).

Ottimo University’s model of synthesis with the QAA frameworks ensures the internal quality of the educational framework and its assessment process for the learner, the educator and the university. Please see figure 4.


Figure 4. Visual framework of qualitative assessment Gaynor (2013)

In theory, given the transferable and generic nature of the framework, any university could replicate Ottimo’s academic infrastructure. However, the intellectual scaffolding underpinning the purposes of assessment methods needs to be identified. Why are we assessing?


The Importance of Assessment

Assessment, and the understanding of assessment, is important to both the educator and the student, and there is a substantial body of research into the quality of student learning. The research undertaken by the likes of Marton, Saljo and Dahlgren (1976 and 1984) provides a conceptual model concerned with the learning approaches of students, based on the outcomes produced.

Within a phenomenographic perspective, this concept of learning refers to ‘deep’, ‘surface’ and ‘strategic’ learning approaches, whereby the student may adopt a multi-modal approach to their learning, either ‘deep’ (intrinsic) or ‘surface’ (extrinsic). Subsequent studies by Ramsden (1992), Biggs (1987, 1993) and Entwistle (1997) expand on the notion of deep and surface learning, indicating that in deep learning the intention is a rich understanding: the learning focus is on what is ‘signified’, and the learner connects previous knowledge with new knowledge. Surface learning, by contrast, relates directly to the completion of the specific task, with the learning focus on ‘signs’: discrete elements of requirements, rote learning and assessment procedures. A strategic learning approach is a derivative of surface learning, whereby the learner is motivated by marks and organizes their learning towards the expectations of the process and of their tutor.

Please see figure 5.

Figure 5. Defining features of approaches to learning (Entwistle 1997, Ch. 1, p. 19).


Entwistle (1998) concludes that the learner’s approach critically impacts their level of understanding, demonstrating that students’ outcomes relate directly to their learning approach, whether surface, deep, strategic or a combination of all three.

The QAA Code of Practice for the assurance of academic quality and standards in higher education, Section 6: Assessment of students (2006), corroborates a phenomenographic viewpoint, generically stating the importance of learning outcomes in promoting student learning, improving performance, evaluating knowledge and promoting the understanding of abilities or skills (QAA 2006, p. 4).

Further research indicates that it is the design of courses (learning outcomes and assessment criteria) and the teaching methodology that encourage or dispel a given learning approach (surface or deep) in students. Bowden (1990, cited by Bowden and Marton 1998) suggests that attributes of course design in HE compel students to adopt the surface approach. For example, a lack of formative feedback, mark-only feedback, late feedback and disconnected learning units would encourage a surface learning approach as an unintended consequence of the course design, which in turn prevents the student from using a deep learning approach. It could be argued that the creation of assessment criteria that instrumentalise learning and prioritize specific areas of assessment may serve only the purposes of academic integrity: for example, excessive adherence to particular referencing styles, teacher omniscience and disproportionate assessment weightings within a module.

‘I suspect that referencing has become a shibboleth for ulterior reasons. It is one of the few respects in which a submission primarily on the practice of teaching can be “correct” or “incorrect”, so it becomes where the academic credibility of the work resides when confidence has been lost in passing professional judgement. The Harvard system of course is correct/incorrect because it is a wholly artificial (but nevertheless sensible and effective) system. Practice is much more muddy’. Atherton (2011).


There is limited research specific to levels of learning and learning outcomes within the field of art and design. Dahlgren’s (1978) case studies, which define the categorization and analysis of learning outcomes, did not include the subject of art and design.

Furthermore, Biggs and Collis’s (1982) general taxonomy describes a generic structure for levels of learning outcome, called SOLO (Structure of the Observed Learning Outcome).

‘In the SOLO taxonomy five levels of learning outcome can be distinguished, of increasing complexity:

  1. Prestructural: no evidence of anything learned.
  2. Unistructural: one correct and relevant element is present.
  3. Multistructural: several relevant elements are present but in an unrelated way, often in list form.
  4. Relational: the relevant elements are integrated into a generalised structure; there is evidence of induction.
  5. Extended abstract: the structure of elements is related to other relevant domains of knowledge; answers are not bounded by the question.’

Biggs and Collis (1982)

Despite the general applicability of the SOLO taxonomy, Gibbs (1993) suggests that its formulation creates issues in assessing learning outcomes for students within the subject of art and design. Gibbs concludes that as the taxonomy is constructed verbally or in writing, the assessment process presupposes an outcome in the shape of a written report or similar.

Conversely, Ottimo University references Bloom’s Taxonomy (Bloom 1956) throughout its academic infrastructure and pedagogic development. Bloom’s taxonomy refers to cognitive learning in order to classify levels and forms of learning. Three domains of learning are identified, and each domain is organised as a series of hierarchical or ‘progressive’ levels. The suggestion is that the learner cannot effectively address higher levels until those lower down have been assimilated. Please see figure 6. For example, during first-aid training the learner may only be concerned at that point with Knowledge, Comprehension and Application, whereas a junior doctor would also be concerned with Synthesis and Evaluation.

Figure 6: Bloom’s Taxonomy diagram (based on Bloom 1956).

The revision of Bloom’s Taxonomy by Anderson and Krathwohl (2001) changed the syntax and the meaning by turning the nouns into verbs and replacing ‘evaluation’ with ‘creating’, explicitly identifying the notion of creating new knowledge. Please see figure 7.

Figure 7: Revised taxonomy of the cognitive domain
following Anderson and Krathwohl (2001).

Ottimo University’s use of Bloom’s Taxonomy provides a clear framework and terms of reference for academic staff to use as an aid in promoting the six categories of learning: knowledge, comprehension, application, analysis, synthesis and evaluation. The Ottimo guide to assessment and feedback (Ottimo 2012) articulates directly with the level descriptors, as well as linking discretely to the learning outcomes and assessment feedback.

Initially, the use of a taxonomy at Ottimo caused great debate amongst academics in relation to its definition and how it might be applied. The academic staff had to reach agreement as to whether the taxonomy was relevant to the discipline, and then on its application within the context of teaching, learning and assessment.

In a conversation at Ottimo on 6 January 2013, the Pro Vice-Chancellor (PVC) stated that:

‘The use of a taxonomy has proven fundamentally important in creating a meaningful conceptual framework for the delivery of teaching, learning and assessment for art and design at Ottimo. Within the context of our academic infrastructure, a taxonomy provided a starting point for debate and an opportunity to forge a broad academic agreement around language, meaning and intentionality that came to shape the curriculum and the staff/student experience.’ (PVC, Ottimo 2013).

Ottimo’s PVC expanded on the importance of a shared understanding for the overall student experience of assessment, assessment feedback and learning outcomes:

‘The importance of a semantic consensus in designing curricula is all too often overlooked in scouring the minutiae of criteria, weightings and outcomes at the expense of a more holistic vision of a start, middle and endgame for learning within the context of a three-year undergraduate degree.’ (PVC, Ottimo 2013).

Therefore it is necessary that the students’ experience of learning is clear, motivating and achievable.

The National Union of Students’ Student Experience Report (2008) found that students’ assessment feedback expectations were not being met, with only a quarter of students receiving effective feedback:

‘only 25 per cent of students receive individual verbal feedback on their assessments, compared with 71 per cent who want individual verbal feedback. A quarter (25 per cent) of students have to wait more than five weeks for feedback on their coursework.’ (NUS 2008).

Similarly, Williams and Kane (2008) note the need for timeliness in assessment feedback, evidencing that students value assessment feedback not only as a measure of attainment and progression, but also as a means to inform their future work:

‘If tutors could give students feedback on the assignments sooner it would help because students could take that feedback advice when completing later assignments’ (BCU, LHDS, 2007, cited in Williams and Kane, 2008).

In summary, students value both the timeliness and the quality of assessment feedback. Therefore, reflective and verbal communicative practices that enhance meaning and understanding are crucial to effective course design and its assessment practices.

Art and Design Assessment in Practice.

Nicol’s (2008) research raises concerns that the way art and design academics assess could be the cause of the negative results related to assessment and feedback within the discipline in the National Student Survey (NSS) (July 2012). Nicol presents the concept that ‘better’ feedback is crucial to successful student learning.

‘To achieve this (effective feedback), students must be actively involved in the processes of assessment, in the different components of the assessment cycle.  In other words, assessment is a partnership, which depends as much on what the student does as what we do as teachers.’ (Nicol, 2008; Nicol, in press).

As an internal moderator observing the assessment practice of academics up to Level 6 (Year 3, undergraduate), my experience was of an ‘academic relationship’, whereby the lecturer and the student share an understanding of what the student has learned and what knowledge and skills have been acquired and demonstrated for the purposes of the final mark. Orr (2012) asserts:

‘Lecturers in my studies shared the view that student identities, their artistic practices and their artworks are enmeshed. Art and design assessment practices are premised on this assumption.’ Orr (2012).

Orr (2012) continues, stating that any disagreement in relation to marks would be deferred to the lecturer ‘who might know the student’. Furthermore, she reiterates that ‘any attempts to dislocate the student from the work are problematic’.

Within art and design, formative assessment manifests itself during group critiques and one-to-one tutorials, as students identify and develop self-awareness and confidence within a ‘real worklife’ scenario. Students identify themselves as ‘the designer’ and the lecturer as ‘the creative director’, receiving peer feedback from ‘other designers’. Blair (2001) alludes to this, stating that ‘Students also found the group crit environment supportive to their learning experience’ (Blair and Orr, 2007). Often this practice of assessment is combined, where both summative and peer assessment occur simultaneously. However, inevitable variables arise depending on the quality and purpose of the critique, causing the encouragement of learning and learning outcomes to vary in quality.


There is evidence that current assessment practices in art and design higher education have evolved into practices that encourage learning or learning outcomes. Essentially, assessment is usually centred on the holistic quality of the design outcome.

‘assessment in art and design might be best understood as an artful social practice’ (Orr, 2012).

Moreover, there is a growing interest in this kind of assessment practice in more traditional ‘text-based’ courses: for example, Doolan and Morris’s (2010) Dialogic Assessment and Feedback (DAF) project, based upon a social constructivist pedagogical approach, whereby the feedback process is co-constructed between tutor–student, student–student and student–tutor. The DAF project was presented in the form of a learning activity workshop during the ALT Conference 2010. Richards (2012) suggests the academic relationship creates a shared ownership within the assessment process, which is not best defined as ‘feedback’ but as a dialogic process. This socially constructed paradigm shift requires lecturers to be explicit about the assessment purpose and process; otherwise students are not able to make the link, as they may define assessment as an end point and fail to recognise that assessment feedback is transferable to their learning.

Currently, dialogic assessment normally refers to formative assessment; this method of assessment is, and has been, embedded in art and design assessment practice since the 16th century. The potential for dialogic approaches within traditional text-based courses needs to be researched further, as the evidence suggests that both staff and students agree that dialogic approaches help forge a shared language and understanding of the purposes and benefits of assessment, thus broadening the student’s powers of critical evaluation.

There is little doubt that the synthesis of assessment processes and practices at Ottimo University with the QAA’s Code of Practice, FHEQ and Art and Design Subject Benchmarks is both evident and successful; moreover, Ottimo’s academic framework provides a robust institutional educational infrastructure that explicitly contributes to students’ learning. Further research towards the demystification of art and design assessment practices would assist student learning in traditional ‘text-based’ subjects. As academics consider the context of new methods of assessment, they must also consider the purpose of the assessment; in other words, they need to identify what skills or knowledge are being assessed. It is established that the closer the connection between teaching, learning outcomes and assessment, the more effective the learning is for the student. If the focus of the assessment relates more to the ease of assessing than to what is defined in the learning outcome, where is the learning?


Ottimo University Definitions (2012)

Course Definition: A validated combination of units (see below) which leads to a designated award.

Learning Outcomes (LO) Definition: That which has been learned or which a student is able to do as a result of study or training.

Level Definition: The level of study for undergraduate and postgraduate courses, as defined by the UK Quality Code. This describes the relative academic complexity, intellectual challenge, depth of learning and degree of learner autonomy required to attain the Learning Outcomes associated with course units.

The Generic Descriptors, derived from the UK Quality Code, articulate the general characteristics associated with each level and provide a template against which units may be aligned. The general ‘fit’ of units against these descriptions allows them to be ascribed to a particular level, assisting the planning of course routes.

Year 1 BA/BSc undergraduate (QAA FHEQ Level 4 – Certificate of Higher Education). Work at this level will enable students to have a sound knowledge of the underlying concepts and principles associated with their area of study and an ability to evaluate and interpret these within the context of that area of study. Students will be able to present, evaluate and interpret qualitative and quantitative data, to develop lines of argument and to make sound judgements in accordance with basic theories and relevant concepts. Typically students will gain the qualities needed for employment requiring the exercise of some personal responsibility.



References

Atherton, J. S. (2011) Doceo: Reflective Journal [online: UK]. Available at http://www.doceo.co.uk/reflection/index.htm (Accessed 12 January 2013).

Biggs, J.B. and Collis, K.F. (1982). Evaluating the Quality of Learning – the SOLO Taxonomy. New York: Academic Press. xii + 245 pp.

Biggs JB (1996) Enhancing Teaching Through Constructive Alignment. Higher Education.32.

Biggs, JB (1999) Teaching for Quality Learning at University, SRHE & OU Press.

Blair, B and Orr, S. (2007) Critiquing the Critique. HEA: Brighton

Available at http://www.adm.heacademy.ac.uk/library/files/adm-hea-projects/learning-and-teaching-projects/crit-final-report.pdf (Accessed on 6 Jan 2013)

Bloom, B. S. (1956) Taxonomy of Educational Objectives, Handbook 1: The Cognitive Domain. New York: David McKay Co Inc.

Bologna Process (2009) The framework of qualifications for the European Higher Education Area Bologna Process: Belgium

Available at http://www.ond.vlaanderen.be/hogeronderwijs/bologna/actionlines/QF_three_cycle_system.htm (Accessed 6 Jan 2013)

Bowden, J. A. (1990) Deep and surface approaches to learning. in M Akbar Hessami and J. Sillitoe (eds.) Deep vs. Surface Teaching and Learning in Engineering and Applied Sciences, Victoria University of Technology, Footscray.

Clews, D. and Mallinder, S. (2010) Looking Out: Effective Engagements with Creative and Cultural Enterprise, Arts Higher Education and the Creative Industries. The Higher Education Academy Art, Design, Media Subject Centre, University of Brighton, Brighton. Available at http://www.creative-campus.org.uk/uploads/1/0/9/7/10973203/clews.pdf (Accessed 5 Jan 2013).

Dahlgren, L.O. and Marton, F. (1978). Students’ conceptions of subject matter: an aspect of learning and teaching in higher education. Studies in Higher Education. 3, 25-35.

Doolan, M. A., Thornton, H. A., & Hilliard, A. (2006). Collaborative Learning: Using technology for fostering those valued practices inherent in constructive environments in traditional education. Journal for the Enhancement of Learning and Teaching , 3 (2), 7 – 17.

Doolan, M and Morris, P (2010) ALT Conference 2010: Alternative Perspectives Dialogic Assessment & Feedback (DAF) Workshop ALT: Leeds. Available at http://homepages.stca.herts.ac.uk/~ct07abf/comqmad/events/2010/alt-c-daf/ (Accessed 11 Jan 2012)

Dunn K. E.  & Mulvenon, S. W. (2009) Critical Review of Research on Formative Assessment: The Limited Scientific Evidence of the Impact of Formative Assessment in Education. Practical Assessment, Research & Evaluation. Volume 14, Number 7. Available at http://www.pareonline.net/pdf/v14n7.pdf (Accessed 7 Jan 2013).

European Higher Education Area (2010) Home EHEA: Romania Available at http://www.ehea.info/ (Accessed 6 Jan 2013).

European Association for Quality Assurance in Higher Education, International working group (2008) Assessment Matters: The quality assurance of student assessment in higher education ENQA: Brussels. Available at http://www.enqa.eu/files/QA%20of%20Student%20Assessment%20Report.pdf (Accessed on 6 Jan 2013).

Gergen, K.J. (1999) An invitation to social construction. London:Sage

International Unit. (2012) The Bologna Process. London: UK. p7.

Available at http://www.international.ac.uk/policy/ehea-bologna-process.aspx (Accessed 6 Jan 2013)

Jeffrey, B. and Craft, A. (2004). Teaching creatively and teaching for creativity: distinctions and relationships. Educational Studies, 30(1), pp. 77–87.

Lave, J., & Wenger, E. (1990). Situated Learning: Legitimate Peripheral Participation. Cambridge, UK: Cambridge University Press.

Mac Donald, J. (2006) Blended Learning and Online Tutoring: a good practice guide. Gower Publishing: USA

Marton, F. and Saljo, R. (1984). Approaches to Learning. In Marton, Hounsell and Entwistle (eds).

Marton, F. and Saljo, R. (1976) ‘On Qualitative Differences in Learning — 1: Outcome and Process’. Brit. J. Educ. Psych. 46, 4-11.

Nicol, D. (2009), Assessment for learner self-regulation: Enhancing achievement in the first year using learning technologies. Assessment and Evaluation in Higher Education, 34 (3) pps 335 -352.

Nystrand, M. (1996). Opening dialogue: Understanding the dynamics of language and learning in the English classroom. New York: Teachers College Press.

Otter S (1995), Learning Outcomes in Higher Education in Burke J, (ed) Outcomes Learning and the Curriclum: Implications for NVQs, GNVQs and Other Qualifications. Falmer Press. London.

Ottimo University (2012) Undergraduate Framework.

Ottimo University (2012) Assessment Handbook.

Ottimo University (2012) Course Guide, Graphic Design.

Ottimo University (2012) Unit Assessment Feedback sample.

Quality Assurance Agency (2006) Code of practice for the assurance of academic quality and standards in higher education. Section 6: Assessment of students. QAA: Gloucester. Available at http://www.qaa.ac.uk/Publications/InformationAndGuidance/Documents/COP_AOS.pdf (Accessed 4 Jan 2013).

Quality Assurance Agency (2008) The framework for higher education qualifications in England, Wales and Northern Ireland. QAA: Gloucester. Available at http://www.qaa.ac.uk/Publications/InformationAndGuidance/Documents/FHEQ08.pdf (Accessed 4 Jan 2013).

Quality Assurance Agency (2008) Subject benchmark statement: Art and design. QAA: Gloucester. Available at http://www.qaa.ac.uk/Publications/InformationAndGuidance/Pages/Subject-benchmark-statement—Art-and-design-.aspx (Accessed 4 Jan 2013).

Quality Assurance Agency, International working group (2008) Assessment Matters: The quality assurance of student assessment in higher education. QAA: Gloucester

Popovich, K (2006) Designing and Implementing Exemplary Content, Curriculum, and Assessment in Art Education, Art Education, Vol. 59, No. 6 pp. 33-39.

Rees, C.,Forbes, P. & Kubler, B. (2006) Student Employability Profiles: A guide for higher education practitioners. Brighton: HEA. p45.

Reynolds, M. & Trehan, K. (2000) Assessment: A critical perspective. Studies in Higher Education, 25:3, 267-278.

RCA (2011) History of the RCA 1837–2011

http://www.rca.ac.uk/Default.aspx?ContentID=161281&GroupID=160461&CategoryID=36283&Contentwithinthissection&More=1 (Accessed 4 Jan 2013).

Richards, C. (2012) Are you sitting comfortably? The Psychologist Vol 24 No 12 P904

Robinson K (ed) (1998), NACCCE Report. Creative Education. NACCCE: London

Tomlinson, M (2008) The degree is not enough : students’ perceptions of the role of higher education credentials for graduate work and employability. British Journal of Sociology of Education, vol 29, no 1, pp 49-61.

Wenger, E. (1998) Communities of Practice: Learning, Meaning and Identity. Cambridge: Cambridge University Press

Williams, J. and Kane, D. (2008) Exploring the National Student Survey: Assessment and feedback issues: HEA: York Available at http://www.heacademy.ac.uk/assets/documents/nss/nss_assessment_and_feedback_issues.pdf (Accessed on 4 Jan 2013).




Bibliography

ASSIST. Questionnaire available from University of Ulster, County Londonderry.

Barrow, R. and Woods, R. (1988) An Introduction to Philosophy of Education, third edition.  London and New York: Routledge.

Bowden, J and Marton, F. (1998) The University of Learning. London: Kogan Page.


Davies A and Reid A (2001). Variation in teacher’s and students’ understanding of professional work and teaching and learning in design, in Re-inventing Design Education. Conference proceedings. Curtin University, Perth.

Entwistle, N. (1998) Conceptions of Learning, Understanding and Teaching in Higher Education [online].  SCRE Fellowship.  Available from: http://www.scre.ac.uk/fellow/fellow98/entwistle.html (Accessed 6 December 2012).

Entwistle, N. (2000) Promoting deep learning through teaching and assessment: conceptual frameworks and educational contexts [online].  (Paper presented at TLRP Conference, Leicester, November 2000).  (Accessed 6 December 2012).

Evans, T. and Murphy, D. (1994) Research in Distance Education 3, third edition.  Geelong: Deakin University.

Falk, J. H., Dierking, L. D. (2002). Lessons Without Limit: How Free-Choice Learning is Transforming Education. Altamira Press

Gibbs, G. (1981) Teaching Students to Learn. Milton Keynes and Philadelphia: Open University Press.

Gosling D & Moon J (2001) How to Use Learning Outcomes and Assessment Criteria, SEEC publications

Marton, F. and Entwistle, N. (mimeo) Phenomenography. Available from: http://tip.psychology.org/marton.html (Accessed 6 December 2012).

Meyer, J. H. F. and Muller, M. W. (1990) ‘Evaluating the quality of student learning. I – an unfolding analysis of the association between perceptions of learning context and approaches to studying at an individual level’. Studies in Higher Education, Vol.15, No. 2, p. 131-154.

Morgan, A. (1993) Improving Your Students’ Learning.  London and Philadelphia: Kogan Page.

Rust, C. (ed., printed 1998) Improving Student Learning – Improving Students as Learners.  Oxford: The Oxford Centre for Staff and Learning Development.

Vygotsky, L.S. (1978) Mind in Society. Cambridge MA: Harvard University Press.