
Strategies & Processes

The Internal Quality Assurance Cell (IQAC) has contributed significantly to institutionalizing quality assurance strategies and processes.

Response:

MJCET adopted ‘Outcome Based Education’ system for its undergraduate programs in the year 2014. A ‘Program Assessment Committee (PAC)’ was established for each of the eight programs in the institution. The PAC functions as a quality assurance body at the program level and the inputs are leveraged for quality improvement at the institutional level by IQAC. Two significant quality assurance strategies implemented in MJCET are as follows:

I. Hierarchical academic system

The hierarchical academic system is a means of achieving guided participatory management of the faculty in order to improve the teaching-learning process. The system consists of the following hierarchy:

Every faculty member offering a course is a course coordinator who is responsible for planning, delivering and assessing the outcomes of the courses offered by him/her during the semester/year.

A course advisor is designated wherever the same course is offered by different course coordinators to different sections/classes, in order to ensure uniformity of course coverage and assessment. The senior-most faculty member teaching the course is designated as the course advisor. The course advisor advises the other course coordinators on the relative importance of the units, the problems to be solved and the pedagogy to be adopted for effective delivery of the course material.

The Module Coordinator is responsible for supervision of the course coordinators and course advisors offering courses under the module. Each module encompasses all the courses relevant to that specialization and has one module coordinator.

The Program Coordinator oversees the planning, course delivery and attainment of course outcomes. The Program Coordinator chairs the PAC meeting, which takes stock of the attainment of COs and POs and makes recommendations for improvements.

II. Pedagogical Initiatives

On the assessment side, the IQAC recommended that each CO should be assessed by using the following direct tools:

  1. CIE-I or CIE-II
  2. Assignment
  3. Any one of the following tools, depending upon its suitability to the course:
    1. Tutorials: As per the OU curriculum, tutorials are part of the scheme of instruction in a few courses. The Department has, however, introduced tutorials in at least half of the courses in every semester; students solve problems during the tutorial class and submit them at the end of the session.
    2. Quiz: In courses without tutorial sessions, a quiz is used as an assessment tool. The course coordinator administers the quiz to the students during a regular class.
    3. Classroom Problem Solving: In some courses the course coordinator prefers to use classroom problem solving as an assessment tool. The students solve the given problem in the class and the course coordinator assesses the performance in the class itself.
    4. Group Assignment: In certain courses which involve system or component design, a group assignment may be preferred. The students are divided into groups of 4-5, and each group is given a separate question or the same question with varying data.
    5. Seminar: In some courses students may be required to present seminars on selected topics which will be used for CO assessment.

Outcomes through IQAC

Response:

The Program Assessment Committee and the Academic Audit Cell review the academic performance of the program after every semester with reference to quality assurance of the teaching-learning processes. The attainment of COs and POs is also reviewed during the audit. The following two examples demonstrate the role of the PAC/AAC/IQAC in reforming academic structures:

Different assessment tools are employed for assessing the learning levels of the students. The assessment process itself can be either formative or summative. Formative assessment is carried out throughout the course duration, and the internal assessment marks are decided based upon the scores obtained therein.

During the academic audit process, it was decided to lay down guidelines for assessing course outcomes by employing direct assessment tools. The recommended guidelines are:

  1. Four assessment tools should be chosen for each Course Outcome. Three of these are mandatory, namely:
    1. OU end examination (Summative)
    2. Class Test (summative)
    3. Assignment (Formative)
  2. Along with the three mandatory assessment tools, any one of the following may be used as the fourth assessment tool:
    1. Tutorials (Formative), in courses where tutorial is part of work load
    2. Class room problem solving (Formative) in courses where problems exist
    3. Quiz (Formative) in theoretical courses where no problems exist
    4. Minute question (Formative) in any type of course
    5. Seminar (Formative) in any type of course
    6. Group assignment (Formative) in any course
    7. Case Study (Formative) in courses where it is feasible
    8. Any other appropriate tool which is applicable to the entire class

During the review of the question papers for internal assessment, assignments and tutorials, the academic audit observed disparity in the quality and standard of the questions employed in tests, assignments, tutorials etc. It was recommended that question papers be set following Bloom's Taxonomy.

The IQAC introduced the concept of Bloom’s Index in order to quantify the question paper quality in terms of a numeric value.

The following numeric scale is employed to convert data from the subjective realm to a quantitative scale. The resulting scale is a continuous real range between 0 and 10.

0-2 - Remembering

3-4 - Understanding

5-6 - Applying

7-8 - Analyzing and Evaluating

9-10 - Creating

The course coordinator assigns an appropriate level to each question at the time of paper setting. The Bloom's Index is computed for the question paper by taking the weighted average over all the questions in the paper.

Bloom's Index = sum of (maximum marks of the question × numeric value on the Bloom's scale) over all questions in the paper, divided by the sum of the maximum marks of all questions in the paper.
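A minimal sketch of this computation in Python (the function name and the sample paper are illustrative, not part of the institutional system):

```python
def bloom_index(questions):
    """Compute the Bloom's Index of a question paper.

    `questions` is a list of (max_marks, bloom_level) pairs, where
    bloom_level is the numeric value (0-10) assigned from the scale above.
    """
    total_marks = sum(marks for marks, _ in questions)
    weighted = sum(marks * level for marks, level in questions)
    return weighted / total_marks

# A hypothetical 100-mark paper: 30 marks at level 2 (Remembering),
# 30 at level 4 (Understanding), 30 at level 6 (Applying),
# 10 at level 8 (Analyzing and Evaluating).
paper = [(30, 2), (30, 4), (30, 6), (10, 8)]
print(bloom_index(paper))  # 4.4
```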

For the purpose of assessment of quality, the acceptable range of Bloom's index is obtained as follows:

A fair quality of question paper / assignment is presumed to contain the following levels of questions:

  1. 30% of questions from 'Remembering' (score of 2)
  2. 30% of questions from 'Understanding' (score of 4)
  3. 30% of questions from 'Applying' (score of 6)
  4. 10% of questions from 'Analyzing and Evaluating' (score of 8)

For the above levels and corresponding percentage of questions, the Bloom's index is

Bloom’s Index = 0.3 x 2 + 0.3 x 4 + 0.3 x 6 + 0.1 x 8 = 4.4

Hence, the empirical range of Bloom's Index for satisfactory quality of question papers and assignments is taken as 4-6. The full range and its interpretation are presented as:

Bloom’s Index “0-2” - Standard Lower than Acceptable

Bloom’s Index “4-6” - Acceptable range of Bloom's Index for Satisfactory quality

Bloom’s Index “8-10” - Standard Higher than Acceptable

Using rubrics to enrich the reports' evaluation process:

After the introduction of the outcome based education system, computation of attainment of course outcomes became essential. A CO is said to have been attained if the score obtained by a student for a particular question exceeds the satisfactory score benchmark set by the course coordinator.
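This benchmark comparison can be sketched as follows (the scores and benchmark below are hypothetical; in practice the benchmark is set per question by the course coordinator):

```python
def co_attainment_fraction(scores, benchmark):
    """Fraction of students whose score on a CO-mapped question
    exceeds the benchmark set by the course coordinator."""
    attained = sum(1 for score in scores if score > benchmark)
    return attained / len(scores)

# Hypothetical scores (out of 10) for one question, with a benchmark of 6:
scores = [8, 5, 7, 6, 9, 4, 7, 3]
print(co_attainment_fraction(scores, 6))  # 0.5
```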

In the case of assessment tools such as questions in internal assessment, tutorials and assignments, the evaluation is based upon a key which is prepared and made available to all the students. This ensures not only consistency of evaluation but also transparency of the evaluation system.

When it comes to the evaluation of reports such as laboratory records, seminar reports and project reports, there is scope for discrepancy in evaluation by different teachers. The student is also unaware of the benchmarks used for the evaluation of the reports.

In order to overcome this lack of consistency and transparency in the evaluation of reports, the academic audit committee recommended the framing and introduction of appropriate rubrics for the assessment of reports. Accordingly, rubrics have been defined and adopted for:

  1. Project Reports.
  2. Seminar Reports.
  3. Laboratory Records.
Project reports:

The project report is evaluated by the project supervisor for a score of 100 by using the Project Report Assessment Rubric and then scaled to 25. The evaluation parameters are as follows:

  1. Objective, Problem Statement and Methodology - 10
  2. Analysis - 10
  3. Implementation/Design - 20
  4. Project Planning - 10
  5. Results/Drawings/Graphical Artifacts/Conclusion - 10
  6. Project Diaries - 10
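The 100-to-25 scaling mentioned above is plain proportional scaling; a hypothetical sketch (the function name is illustrative):

```python
def scale_score(raw_score, raw_max=100, scaled_max=25):
    """Scale a rubric score out of raw_max down to the scaled_max
    reported in the final assessment."""
    return raw_score * scaled_max / raw_max

# e.g. a rubric score of 84/100 scales to 21/25:
print(scale_score(84))  # 21.0
```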
Seminar Reports:

The seminar report is evaluated by the seminar in-charge, one of the course coordinators, or any faculty member in the Department specialized in the area of the topic, for a score of 50 by using the Seminar Assessment Rubric. The evaluation parameters are as follows:

  1. Written Report - 15
  2. Speaker’s enthusiasm - 5
  3. Speaker’s posture - 5
  4. Ability to speak clearly and distinctly - 5
  5. Speaker’s slides - 10
  6. Speaker’s ability to answer questions - 5
Laboratory Reports:

The Laboratory Record is evaluated for a score of 50. The following are rubric parameters for evaluation:

  1. Write up format - 15
  2. Experimentation, Observations & Calculations - 20
  3. Results and Graphs - 10
  4. Discussion of results - 5