


Developing a Participatory Consultation Process for Quality Reviews: The initial stage of the European University Association's Quality Review of the Dublin Institute of Technology

Author - Aidan Kenny





Communication

Accurate and accessible information flow was viewed as crucial to the success of the consultation process. The majority of the DIT community have access to the DIT intranet (active email accounts: 16,350 students and 2,050 staff), so electronic communication was identified as the primary conduit for information flow. An EUA consultation webpage was constructed and hosted on the Academic Affairs page; it was continuously updated as the process unfolded, and all relevant documents were posted on this site. Access was obtained to the 'all-staff' and 'all-student' email lists, which allowed information and surveys to be sent directly to colleagues' and students' personal addresses. All interested parties were invited, via direct correspondence to their email accounts, to make submissions or suggestions relating to the consultation process.

In order to stimulate awareness and create a readiness to engage in the consultation process, a series of presentations was organized. The principal target groups were: the six Faculty Boards, Academic Council, the Directorate, the Partnership Committee, the Human Resources Department, and stakeholders (the academic staff trade union TUI; the non-academic staff trade unions AMICUS, IMPACT and SIPTU; and the students' representative body, DITSU). See Table 1 for the schedule of presentations.

Research design

The research design utilized a robust multi-method model, which provided data from several different modes of investigation: a quantitative mode, which provided statistical data through the online surveys, and a qualitative mode, which provided descriptive transcripts from focus group sessions and submissions. The premise was to encourage the sample groups to engage in critical self-reflection from their 'lived experience' of the DIT community. The gathered data mapped out participants' attitudes and opinions on potential strengths and weaknesses. The commonality of the research modes was limited to the six themes (variables). It was not envisaged that one mode would feed off the other, but rather that each should stand alone; however, findings could be used in a complementary fashion to align mutual trends or clusters of common issues. It was noted that the quantitative mode is more suited to generalization, while the qualitative mode provides depth and insight. It was envisaged that, by mapping comparisons between the qualitative and quantitative findings, a gauge of the validity of the study could be extrapolated.

The author suggests that both the validity and reliability of the study were bolstered by the nature and experience of the research team. In essence, the SC and support staff are all members of the DIT community, with diverse expertise and 'lived experience'. Their cumulative understanding of the DIT community (policies, strategies, practices, resources) is far-reaching, which gives the research an invaluable 'knowledge stock'. The author proposes that this type of model could be compared to an action research mode, which Thomson and Perry (2004: 405) link to critical theory. The primary characteristics of comparison are: (1) a collaborative approach; (2) critical self-reflection; (3) practical application; (4) the participant/researcher role; and (5) the production of data that informs institutional enhancement. While these five characteristics resonate with the tenets of action research, the author locates the mode within the interpretive paradigm, with a strong alignment to the naturalistic paradigm of Guba and Lincoln.

Quantitative research

The quantitative research comprised two structured online survey instruments, one for staff and one for students. A small team developed both questionnaires; items were constructed from DIT documents relating to the six themes, the criteria in the EUA Guidelines, and the team members' personal experience and understanding of the DIT community. Before the surveys were operationalised, a pilot test run of both was carried out with ten participants to ascertain their usability and technical reliability. The questionnaires were then administered to the target populations via the 'all-student' and 'all-staff' lists. Three reminders were sent out during the operational periods. In the case of the staff survey, different mail-out lists were used: (i) the all-staff list; (ii) the faculty staff list; (iii) the 'Update' staff electronic magazine.

The student questionnaire consisted of a three-question student profile (location of study, full-time or part-time status, classification of registration) and a 14-item attitude and opinion questionnaire using a Likert scale (see Appendix 2). The student population is 20,000, of which 16,500 have active email accounts; this was the target population. To achieve a representative sample from this population, De Vaus (2002) suggests 660 respondents would be necessary, while Sarantakos (1998) suggests 377. The actual number of responses, however, was 960 (see Figure 4 for the respondent profile).

The staff questionnaire consisted of a six-question staff profile (location, grade (two questions), category, length of service and age) and a 60-item attitude and opinion questionnaire using a Likert scale (see Appendix 3). The staff population is 1,800; however, there are 2,200 active staff email accounts (part-time staff bring the total up). To achieve a representative sample from this target population, De Vaus (2002) suggests 237 respondents would be necessary, while Sarantakos (1998) suggests 322. The actual number of responses, however, was 472 (see Figure 5 for the response-rate profile).
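Neither source's derivation is reproduced in this article, but the quoted Sarantakos figures are broadly consistent with the standard finite-population sample-size formula, assuming a 95 per cent confidence level, a ±5 per cent margin of error (e = 0.05) and maximum variability (p = 0.5); these assumptions are an illustration, not stated in the original sources:

n = \frac{\chi^2 \, N \, p(1-p)}{e^2 (N-1) + \chi^2 \, p(1-p)}

where N is the population size and \chi^2 = 3.841 is the critical chi-square value for one degree of freedom at 95 per cent confidence. Substituting the two populations gives

n_{\text{students}} = \frac{3.841 \times 20{,}000 \times 0.25}{0.05^2 \times 19{,}999 + 3.841 \times 0.25} \approx 377, \qquad n_{\text{staff}} = \frac{3.841 \times 2{,}200 \times 0.25}{0.05^2 \times 2{,}199 + 3.841 \times 0.25} \approx 327.

On these assumptions, both actual response counts (960 students, 472 staff) comfortably exceed the suggested minima, corresponding to roughly 6 per cent of active student email accounts and 21 per cent of active staff accounts.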

 

