Ismail Aby Jamal

I say man, am I leader...

Sunday, August 15, 2010

Malaysian way of developing our own brand of creativity culture

The New Economic Model (NEM): What Are We To Do As An Executive?


BY TAN SRI LIN SEE-YAN

It is not good enough to have policies to attract and retain talent. Weaknesses have to be dissected and addressed

THE New Economic Model (NEM) was unveiled in March and the 10th Malaysia Plan (2011-15) in June. These aim to transform Malaysian life and fortunes. At the heart is innovation.



The Prime Minister takes every opportunity to drive this home – to succeed, innovation must be pushed harder and harder until it becomes an integral part of the nation’s culture.



As a concept, innovation simply means the nurturing of talent for creativity. Here, creativity can be likened to producing something original and useful.



Viewed differently, to be creative means to deal with the classic creativity challenge of getting divergent thinking (producing unique ideas) and convergent thinking (putting ideas together to improve life) to work in tandem.



According to Prof Paul Torrance (who created the gold standard in creativity assessment), a creative person has an “unusual visual perspective”, matched with an “ability to synthesise diverse elements into meaningful products.”



It’s essentially about getting the left and right brains to operate as one. A recent IBM poll of CEOs identified creativity as the No. 1 “leadership competency” of the future. Unfortunately, we don’t have such a culture.



A culture thing



Since Tun Mahathir Mohamad’s Look East policy, we have yet to succeed in emulating Japan’s innovation culture. Three main elements of this culture remain alien to us: its mentor system of management; acceptance of starting at the bottom so as to understand a firm’s workings at every level; and the way the Japanese function in unison as a workforce committed to the firm’s future. Whatever we have achieved since remains very much a work in progress.



As a matter of public policy, we did try to create a Malaysian way of developing our own brand of creativity culture: by making Malaysia an attractive place to live – with security, good infrastructure and communications, and a unique, relaxed, multi-racial, multi-religious and multi-cultural way of life that foreigners can easily adapt to – and by positioning the nation, with its widespread use of English, as a base for foreign direct investments (FDIs) to come, expand and prosper.



We tried these to make up for what is special to the Japanese, but there was only limited success. We just don’t have the culture, and we can’t (and won’t) change readily enough to develop such a culture.



Earlier this year, in a column titled “On productivity and talent management”, I wrote: “Human capital lies at the core of innovation. Raising productivity requires a labour force of high calibre – committed, motivated and skilled enough to drive transformational change based on excellence over the long term. It’s about tapping potential through acquisition of new skill sets in designing new products and services, and devising new processes and systems to do things smarter and more efficiently. This requires ready access to a talent pool of critical skills and expertise.”



Frankly, we don’t as yet have such a pool. Therefore, we need to go back to basics. This means transforming our education system to emphasise meritocracy and lay the foundation for creative thinking and analysis from day one.



For a start, teaching curriculum, pedagogy and management of education have to be reformed. US President Barack Obama is right: “If we want success for our country, we can’t accept failure in our schools.”



Fortunately, as evident from a recent supplement in The Economist magazine, creativity can be taught. It starts with recognising the new view that creativity is part of normal brain function. The trick is to get the classic divergent-convergent creativity challenge working as a matter of habit.



First, we need to discard the emphasis on IQ in favour of CQ (creativity quotient). Torrance’s creativity index has been shown to be a good predictor of kids’ creative accomplishments as adults.



According to Prof Jonathan Plucker of Indiana University, the correlation to lifetime creative accomplishment was more than three times stronger for childhood CQ than for childhood IQ. However, unlike IQ scores (which rise about 10 points every generation, presumably because enriched environments produce smarter kids), CQ scores in the US and many other rich nations have fallen of late.



This no doubt reflects that kids now spend more and more time in front of TVs and playing video games, rather than engaging in creative activities. Also, there’s the growing lack of creativity development in schools and at home.



The same decline is happening in Malaysia. Reform must adopt a problem-based learning approach – where education is revamped to emphasise the generation of ideas, curricula are driven by real-world enquiry, and pedagogy acquaints teachers with the neuroscience of creativity.



Critics argue our kids already have too much to learn. This is a false trade-off. Creativity thrives on fact-finding and deep research.



High global curriculum standards can still be met – but the content needs to be taught differently. Creativity is prized in Malaysia, but we don’t seem politically committed to unlocking it.



Continuing denial



We have not produced (and are unlikely to produce) talent in sufficient numbers to take us to the next level of becoming a high-income nation. For sure, what got us to where we are today will not get us to where we want to go. To begin with, we have to broaden the human capital base. For this, we need to transform our education system to secure at least a quality supply flow in the next generation.

In the end, it’s not just about sustaining economic growth. We are surrounded with matters of national and international importance crying for creative solutions – from striving for excellence to raising productivity to delivering quality healthcare.



Such solutions emerge from an open marketplace of ideas. These can be sustained by a workforce constantly contributing original ideas and being receptive to ideas of others. What is required is real leadership to effectively harness the vast energies engendered.



The Prime Minister is right in highlighting government as a key component of the creative ecosystem, in what he calls “bringing innovation into government and government into innovation.”



This is to enable the formulation of framework, regulation and policies that support and not hinder innovation. It’s a great policy move but in reality, the Government at large has yet to buy into this transformational change.



If you ask around – as far as talent development and retention goes – much of the Government remains in denial. President Ronald Reagan once joked that the nine most terrifying words in the English language are: “I’m from the government and I’m here to help.” This rings all too true!



Come on, get real



Studies by an old friend Prof Rajah Rasiah identified three underlying causes for Malaysia’s poor showing in last year’s FDI inflows, according to the 2010 World Investment Report – its narrow human capital base, absence of synergy between research and development labs and industry, and inadequate technological absorption, in the face of intensifying competition in Asia especially for talent.



Like it or not, the talent game is dynamic as it is intense. It is not good enough to have a set of policy responses to attract and retain talent. Weaknesses have to be dissected and addressed, and practical solutions neatly designed for effective implementation in a well coordinated fashion.



Most policy pronouncements reflect incentives offered by the Government which it considers attractive. Nobody bothers to ask the targeted talent what they want and what it takes to make them want to move.



The tendency is to assume that, given the right incentives, Malaysian talent overseas would move back and foreign talent would readily be attracted to Malaysia. Hence, the dismal failure of the “brain-gain” programme. The approach is all wrong. Get real!



The bar on talent has since been raised. Fuelling the war for talent, enterprises in Asia are providing higher salaries and perks.



A sea change is taking place in the way businesses organise themselves, create wealth and market their brands and wares worldwide.



The rise of the Web and of tech-based professions in logistics, biotech, life sciences and information technology puts a premium on scientists, engineers, financial analysts and computer geeks.



In Asia, soft skills which were previously sidelined (such as adaptability, English and Chinese skills, ease in fitting into other cultures, negotiation and political savvy), are now in demand.



It’s no longer enough to be talented in Oracle and Java. Global experience, an ability to lead multicultural teams, and diplomatic know-how to move seamlessly across borders, are among the skills in short supply.

The globalised economy has changed everything. Indeed, businesses will ultimately have to rethink the way they recruit and steward talent.



Today, China and India are becoming sources of innovation. Already, these nations are benefiting from “brain-circulation”, with capital and talent returning after value-adding in skills and experience abroad.

This is occurring without government incentives. National ecosystems are evolving nicely for them. It’s happening simply because it makes good business sense. There is much Malaysia can learn from the new reality.



As wealth and power change hands, talent is no longer a buyer’s market for the traditionally rich. The International Monetary Fund projects that by 2015, Asia-Pacific will make up 45% of global gross domestic product, as against 20% for the US and 17% for the eurozone.



The talent drain can only get more intense. We now have a world where talent can be found anywhere. The problem is particularly acute in Asia and Latin America, where breakneck growth is pushing management to the limit.



The talent crunch is real. Throwing money and incentives at talent won’t necessarily solve the problem. We need to think long-term and re-think old ways.



To do that, corporations are already investing to create the talent they lack, going so far as to establish their own universities to shape raw recruits into corporate leaders. In the end, nations need to have a workable process to recognise talent, fast-track careers, and provide fresh opportunities; essentially, to understand what makes them tick.



It needs high-potential programmes to attract and retain key talent within an ecosystem that provides for high living standards, where security and rule of law are taken for granted.



But, risks remain in the global economy. Concerns of citizens must be addressed by developing and investing in them. The quality of tertiary and vocational education has to be raised as a matter of priority. Imported talent will reinforce local talent; only bring in people who can contribute. Striking the right balance is vital.



Former banker Dr Lin is a Harvard-educated economist and a British Chartered Scientist who now spends time writing, teaching and promoting the public interest. Feedback is most welcome at

Five Sweeping Trends That Will Shape Your Company's Future

The New Workforce


Five Sweeping Trends That Will Shape Your Company's Future



Author: Harriet Hankin

Pub Date: November 2004



Overview

Think beyond today’s human resources issues...and into the future.

Today’s workplace is already a tapestry comprising people of countless different backgrounds, ethnicities, age groups, regions, and more. But that diversity is just the beginning of a radical shift in the makeup—and requirements—of tomorrow’s workforce.

The New Workforce gives you a clear picture of the rapid changes now underway—along with the steps required to attract and retain motivated, loyal, and productive employees. Based on a wealth of statistics, research, interviews, and firsthand experience, the book pinpoints five sweeping trends:

* An increasingly aging yet active population: Lifestyle changes and medical advances are keeping people alive and fit into their 90s; financial pressures and personal desire are keeping them working as well. Companies that can harness the power of these experienced and skilled employees will reap concrete financial benefits.

* The decline of the nuclear family and the rise of alternative households: Once considered the unshakable norm, the traditional nuclear family now represents only a small fraction of households. Today’s workforce increasingly consists of female heads of households, same-sex partners, stay-at-home dads, dual-income families, unmarried couples, and other arrangements. And the benefits programs required to support and retain them are quickly evolving to make flexibility a key component.

* Four generations working side by side—with a fifth on the way: The Silent Generation, the Baby Boomers, Generation X, the Baby Boom Echo, and the newest entrants to the world—the Millennium Generation…. Each has competing needs, values, expectations, and working styles. Smart companies will mine the wisdom and experience of their older employees with the energy and stamina of the younger ones to create a powerful multi-generational workforce.

* A workplace that is growing more diverse and more blended: Whether it’s race, ethnicity, religion, gender, or sexual orientation, the workforce is growing more diverse at a faster rate than ever before. Truly successful companies won’t just tolerate diversity; they will accept and respect an increasingly blended workforce.

* The need for a “higher purpose” in the workplace: A paycheck is not the only thing that employees want. Studies show that they also seek a spiritual component, which includes personal growth, balance, and meaningful purpose. Organizations that champion trust, individual respect, and ethical conduct will build committed workforces and creative thinkers.

In addition to mapping the path from current needs to future requirements, The New Workforce supplies powerful ideas for radically revamping HR policies, recruiting efforts, compensation and benefits, and learning and training, including advice on: flexible scheduling, in-house medical support, double family leave, telecommuting, literacy tutoring, sabbatical programs, digital matching, aptitude testing, total-rewards strategies, mentoring up, and much more.

The New Workforce is indispensable for human resources professionals, managers and executives, and entrepreneurs. It’s an all-in-one resource for peering into the immediate future and preparing for the rapidly changing face of tomorrow’s workforce.


Your Price: $27.95

ISBN: 9780814414989

Format: Paper or Softback

About the Author

Harriet Hankin is the national director of business development for the North America Employee Benefits Practice at Willis, Inc., a large global insurance broker. Previously she was the president and an owner of GCI Consulting Group, a benefits design consulting and administration company, which was acquired by Willis in 2005. A featured speaker at conferences throughout the United States, she focuses on the link between general business topics, benefits, and work-life balance. She has won numerous awards, including Pennsylvania's Best 50 Women in Business (2000) and Greater Philadelphia's Ernst & Young Entrepreneur of the Year (2001). She lives in Glenmoore, Pennsylvania, and can be reached at harriet.hankin@willis.com

Thursday, January 28, 2010

A Brief Introduction to Reliability, Validity, and Scaling

Reliability
Simply put, a reliable measuring instrument is one which gives you the same measurements when you repeatedly measure the same unchanged objects or events. We shall briefly discuss here methods of estimating an instrument’s reliability. The theory underlying this discussion is that which is sometimes called “classical measurement theory.” The foundations for this theory were developed by Charles Spearman (1904, “‘General intelligence,’ objectively determined and measured,” American Journal of Psychology, 15, 201-293).
If a measuring instrument were perfectly reliable, then it would have a perfect positive (r = +1) correlation with the true scores. If you measured an object or event twice, and the true scores did not change, then you would get the same measurement both times.
We theorize that our measurements contain random error, but that the mean error is zero. That is, some of our measurements have error that make them lower than the true scores, but others have errors that make them higher than the true scores, with the sum of the score-decreasing errors being equal to the sum of the score increasing errors. Accordingly, random error will not affect the mean of the measurements, but it will increase the variance of the measurements.
Our definition of reliability is the ratio of true-score variance to total measurement variance: reliability = var(true scores) / var(measurements). That is, reliability is the proportion of the variance in the measurement scores that is due to differences in the true scores rather than due to random error.
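This variance-ratio definition is easy to check by simulation. A minimal Python sketch (illustrative only; the normal-error model and the particular variances are assumptions, not from this document):

```python
import random

# Classical measurement theory: each observed score X = T + E,
# with true score T and random error E with mean zero.
random.seed(1)

n = 10_000
true_scores = [random.gauss(50, 10) for _ in range(n)]    # var(T) = 100
observed = [t + random.gauss(0, 5) for t in true_scores]  # var(E) = 25

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Reliability = var(T) / var(X); in theory 100 / (100 + 25) = 0.80.
reliability = variance(true_scores) / variance(observed)
print(round(reliability, 2))
```

With these assumed variances the sample ratio comes out near .80, matching the theoretical value, and it also illustrates why random error inflates the variance of the measurements without shifting their mean.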
Please note that I have ignored systematic (nonrandom) error, optimistically assuming that it is zero or at least small. Systematic error arises when our instrument consistently measures something other than what it was designed to measure. For example, a test of political conservatism might mistakenly also measure personal stinginess.
Also note that I can never know what the reliability of an instrument (a test) is, because I cannot know what the true scores are. I can, however, estimate reliability.
Test-Retest Reliability. The most straightforward method of estimating reliability is to administer the test twice to the same set of subjects and then correlate the two measurements (that at Time 1 and that at Time 2). Pearson r is the index of correlation most often used in this context. If the test is reliable, and the subjects have not changed from Time 1 to Time 2, then we should get a high value of r. We would likely be satisfied if our value of r were at least .70 for instruments used in research, at least .80 (preferably .90 or higher) for instruments used in practical applications such as making psychiatric diagnoses (see my document Nunnally on Reliability). We would also want the mean and standard deviation not to change appreciably from Time 1 to Time 2. On some tests, however, we would expect some increase in the mean due to practice effects.
Alternate/Parallel Forms Reliability. If there are two or more forms of a test, we want to know that the two forms are equivalent (on means, standard deviations, and correlations with other measures) and highly correlated. The r between alternate forms can be used as an estimate of the test’s reliability.
Split-Half Reliability. It may be prohibitively expensive or inconvenient to administer a test twice to estimate its reliability. Also, practice effects or other changes between Time 1 and Time 2 might invalidate test-retest estimates of reliability. An alternative approach is to correlate scores on one random half of the items on the test with the scores on the other random half. That is, just divide the items up into two groups, compute each subject’s score on each half, and correlate the two sets of scores. This is like computing an alternate forms estimate of reliability after producing two alternate forms (split-halves) from a single test. We shall call this coefficient the half-test reliability coefficient, rhh.
Spearman-Brown. One problem with the split-half reliability coefficient is that it is based on alternate forms that have only one-half the number of items that the full test has. Reducing the number of items on a test generally reduces its reliability coefficient. To get a better estimate of the reliability of the full test, we apply the Spearman-Brown correction: rsb = 2·rhh / (1 + rhh).
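The correction is a one-liner in code. A short illustrative Python sketch (the general `factor` form, for a test lengthened by any factor, is the standard prophecy formula; the function name is mine):

```python
def spearman_brown(r_half, factor=2):
    """Project the reliability of a test lengthened by `factor`
    from the reliability r_half of the shorter form.
    With factor=2 this is the split-half correction 2r/(1+r)."""
    return factor * r_half / (1 + (factor - 1) * r_half)

# A half-test reliability of .60 implies a full-test
# reliability of 2(.60) / (1 + .60) = .75.
print(round(spearman_brown(0.60), 3))
```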
Cronbach’s Coefficient Alpha. Another problem with the split-half method is that the reliability estimate obtained using one pair of random halves of the items is likely to differ from that obtained using another pair of random halves of the items. Which random half is the one we should use? One solution to this problem is to compute the Spearman-Brown corrected split-half reliability coefficient for every one of the possible split-halves and then find the mean of those coefficients. This mean is known as Cronbach’s coefficient alpha. Instructions for computing it can be found in my document Cronbach’s Alpha and Maximized Lambda4.
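The usual computational form of alpha, α = k/(k−1) × (1 − Σ item variances / variance of total scores), is equivalent to that mean of corrected split-half coefficients under classical assumptions. An illustrative Python sketch with made-up responses (the author's own instructions use SAS/SPSS, not Python):

```python
def cronbach_alpha(items):
    """items: one inner list of scores per item, respondents in the
    same order in each list.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical data: 3 items answered by 5 respondents.
scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(scores), 2))
```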
Maximized Lambda4. H. G. Osburn (Coefficient alpha and related internal consistency reliability coefficients, Psychological Methods, 2000, 5, 343-355) noted that coefficient alpha is a lower bound to the true reliability of a measuring instrument, and that it may seriously underestimate the true reliability. He used Monte Carlo techniques to study a variety of alternative methods of estimating reliability from internal consistency. His conclusion was that maximized lambda4 was the most consistently accurate of the techniques.
Lambda4 is the rsb for one pair of split-halves of the instrument. To obtain maximized lambda4, one simply computes lambda4 for all possible split-halves and then selects the largest obtained value. The problem is that the number of possible split-halves is (2n)! / (2(n!)²) for a test with 2n items. If there are only four or five items, this is tedious but not unreasonably difficult. If there are more than four or five items, computing maximized lambda4 is unreasonably difficult, but it can be estimated -- see my document Estimating Maximized Lambda4.
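For a small, even number of items the brute-force search is easy to sketch (illustrative Python, not from the original document; the item data are hypothetical):

```python
from itertools import combinations

def max_lambda4(items):
    """Brute-force maximized lambda-4 for a test with an even number of
    items: the largest Spearman-Brown corrected split-half correlation
    over all possible splits (feasible only for small item counts)."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5

    k = len(items)
    n = len(items[0])
    best = -1.0
    # Fix item 0 in the first half so each split is counted only once.
    for rest in combinations(range(1, k), k // 2 - 1):
        first = (0,) + rest
        second = [j for j in range(k) if j not in first]
        h1 = [sum(items[j][i] for j in first) for i in range(n)]
        h2 = [sum(items[j][i] for j in second) for i in range(n)]
        r = pearson(h1, h2)
        best = max(best, 2 * r / (1 + r))
    return best

# Hypothetical data: 4 items answered by 5 respondents (3 splits).
scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
    [4, 5, 3, 3, 2],
]
print(round(max_lambda4(scores), 2))
```

For 4 items there are only 4!/(2·2!·2!) = 3 splits to examine; the factorial growth described above is what makes the exhaustive search impractical for longer tests.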
Construct Validity
Simply put, the construct validity of an operationalization (a measurement or a manipulation) is the extent to which it really measures (or manipulates) what it claims to measure (or manipulate). When the dimension being measured is an abstract construct that is inferred from directly observable events, then we may speak of “construct validity.”
Face Validity. An operationalization has face validity when others agree that it looks like it does measure or manipulate the construct of interest. For example, if I tell you that I am manipulating my subjects’ sexual arousal by having them drink a pint of isotonic saline solution, you would probably be skeptical. On the other hand, if I told you I was measuring my male subjects’ sexual arousal by measuring erection of their penises, you would probably think that measurement to have face validity.
Content Validity. Assume that we can detail the entire population of behavior (or other things) that an operationalization is supposed to capture. Now consider our operationalization to be a sample taken from that population. Our operationalization will have content validity to the extent that the sample is representative of the population. To measure content validity we can do our best to describe the population of interest and then ask experts (people who should know about the construct of interest) to judge how representative our sample is of that population.
Criterion-Related Validity. Here we test the validity of our operationalization by seeing how it is related to other variables. Suppose that we have developed a test of statistics ability. We might employ the following types of criterion-related validity:
Concurrent Validity. Are scores on our instrument strongly correlated with scores on other concurrent variables (variables that are measured at the same time)? For our example, we should be able to show that students who just finished a stats course score higher than those who have never taken a stats course. Also, we should be able to show a strong correlation between score on our test and students’ current level of performance in a stats class.
Predictive Validity. Can our instrument predict future performance on an activity that is related to the construct we are measuring? For our example, is there a strong correlation between scores on our test and the subsequent performance of employees in an occupation that requires the use of statistics?
Convergent Validity. Is our instrument well correlated with measures of other constructs to which it should, theoretically, be related? For our example, we might expect scores on our test to be well correlated with tests of logical thinking, abstract reasoning, verbal ability, and, to a lesser extent, mathematical ability.
Discriminant Validity. Is our instrument not well correlated with measures of other constructs to which it should not be related? For example, we might expect scores on our test not to be well correlated with tests of political conservatism, ethical ideology, love of Italian food, and so on.

Scaling
Scaling involves the construction of instruments for the purpose of measuring abstract concepts such as intelligence, hypomania, ethical ideology, misanthropy, political conservatism, and so on. I shall restrict my discussion to Likert scales, my favorite type of response scale for survey items.
The items on a Likert scale consist of statements with which the respondents are expected to differ with respect to the extent to which they agree with them. For each statement the response scale may have from 4 to 9 response options. Because I have used 5-point optical scanning response forms in my research, I have most often used this response scale:
A = strongly disagree
B = disagree
C = no opinion
D = agree
E = strongly agree

Generating Potential Items. You should start by defining the concept you wish to measure and then generate a large number of potential items. It is a good idea to recruit colleagues to help you generate the items. Some of the items should be worded such that agreement with them represents being high in the measured attribute and others should be worded such that agreement with them represents being low in the measured attribute.
Evaluating the Potential Items.
It is a good idea to get judges to evaluate your pool of potential items. Ask each judge to evaluate each item using the following scale:
1 = agreeing indicates the respondent is very low in the measured attribute
2 = agreeing indicates the respondent is below average in the measured attribute
3 = agreeing does not tell anything about the respondent’s level of the attribute
4 = agreeing indicates the respondent is above average in the measured attribute
5 = agreeing indicates the respondent is very high in the measured attribute
Analyze the data from the judges and select items with very low or very high averages (to get items with good discriminating ability) and little variability (indicating agreement among the judges).
Alternatively, you could ask half of the judges to answer the items as they think a person low in the attribute to be measured would, and the other half to answer the items as would a person high in the attribute to be measured. You would then prefer items which best discriminated between these two groups of judges -- items for which the standardized difference between the group means is greatest.
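The standardized difference described above is essentially a Cohen's-d-style index: the difference between the two group means divided by the pooled standard deviation. A minimal illustrative Python sketch (the judge ratings are made up):

```python
def standardized_difference(low, high):
    """Per-item discrimination index: (mean of high-group ratings minus
    mean of low-group ratings) divided by the pooled standard deviation."""
    def mean(xs):
        return sum(xs) / len(xs)

    def ss(xs):  # sum of squared deviations from the mean
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs)

    pooled_sd = ((ss(low) + ss(high)) / (len(low) + len(high) - 2)) ** 0.5
    return (mean(high) - mean(low)) / pooled_sd

# Hypothetical ratings of one item: five judges answering as a
# low-attribute person would, five answering as a high-attribute person.
low_group = [1, 2, 2, 1, 2]
high_group = [4, 5, 4, 5, 5]
print(round(standardized_difference(low_group, high_group), 2))
```

Items with the largest values of this index discriminate best between the two groups of judges and are the ones to keep.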
Judges can also be asked whether any of the items were unclear or confusing or had other problems.
Pilot Testing the Items. After you have selected what the judges thought were the best items, you can administer the scale to respondents who are asked to answer the questions in a way that reflects their own attitudes. It is a good idea to do this first as a pilot study, but if you are impatient like me you can just go ahead and use the instrument in the research for which you developed it (and hope that no really serious flaws in the instrument appear). Even at this point you can continue your evaluation of the instrument -- at the very least, you should conduct an item analysis (discussed below), which might lead you to drop some of the items on the scale.
Scoring the Items. The most common method of creating a total score from a set of Likert items is simply to sum each person’s responses to each item, where the responses are numerically coded with 1 representing the response associated with the lowest amount of the measured attribute and N (where N = the number of response options) representing the response associated with the highest amount of the measured attribute. For example, for the response scale I showed above, A = 1, B = 2, C = 3, D = 4, and E = 5, assuming that the item is one for which agreement indicates having a high amount of the measured attribute.
You need to be very careful when using a computer to compute total scores. With some software, when you command the program to compute the sum of a certain set of variables (responses to individual items), it will treat missing data (items on which the respondent indicated no answer) as zeros, which can greatly corrupt your data. If you have any missing data, you should check to see if this is a problem with the computer software you are using. If so, you need to find a way to deal with that problem (there are several ways, consult a statistical programmer if necessary).
I generally use means rather than sums when scoring Likert scales. This allows me a simple way to handle missing data. I use the SAS (a very powerful statistical analysis program) function NMISS to determine, for each respondent, how many of the items are unanswered. Then I have the computer drop the data from any subject who has missing data on more than some specified number of items (for example, more than 1 out of 10 items). Then I define the total score as being the mean of the items which were answered. This is equivalent to replacing a missing data point with the mean of the subject’s responses on the other items in that scale -- if all of the items on the scale are measuring the same attribute, then this is a reasonable procedure. This can also be easily done with SPSS.
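The mean-scoring rule just described can be sketched in plain Python (no SPSS or SAS needed). The data and the one-missing-item cutoff here are hypothetical, chosen only to illustrate the logic:

```python
def score_likert(responses, max_missing=1):
    """Mean-score one respondent's Likert items (None = missing).
    Returns None if the respondent skipped more than max_missing items."""
    answered = [r for r in responses if r is not None]
    n_missing = len(responses) - len(answered)
    if n_missing > max_missing:
        return None  # too much missing data; drop this respondent
    return sum(answered) / len(answered)

# A respondent who skipped one of five items still gets a score:
print(score_likert([4, 5, None, 3, 4]))  # 4.0 (mean of the four answered items)
# A respondent who skipped two items is dropped:
print(score_likert([4, None, None, 3, 4]))  # None
```

Using the mean of the answered items is, as noted above, equivalent to replacing each missing response with the mean of that respondent's other responses on the scale.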
If you have some items for which agreement indicates a low amount of the measured attribute and disagreement indicates a high amount of the measured attribute (and you should have some such items), you must remember to reflect (reverse score) the item prior to including it in a total score sum or mean or an item analysis. For example, consider the following two items from a scale that I constructed to measure attitudes about animal rights:
· Animals should be granted the same rights as humans.
· Hunters play an important role in regulating the size of deer populations.
Agreement with the first statement indicates support for animal rights, but agreement with the second statement indicates nonsupport for animal rights. Using the 5-point response scale shown above, I would reflect scores on the second item by subtracting each respondent’s score from 6.
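With the 5-point response scale shown above, reflection is just subtraction from 6 (more generally, from N + 1, where N is the number of response options). A minimal sketch:

```python
N_OPTIONS = 5  # five response options, coded 1..5

def reflect(score, n_options=N_OPTIONS):
    """Reverse-score an item: subtract the response from n_options + 1."""
    return (n_options + 1) - score

# "Strongly agree" (5) on the hunting item becomes 1 (low animal-rights support):
print(reflect(5))  # 1
print(reflect(2))  # 4
```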
Item Analysis. If you believe your scale is unidimensional, you will want to conduct an item analysis. Such an analysis will estimate the reliability of your instrument by measuring the internal consistency of the items, the extent to which the items correlate well with one another. It will also help you identify troublesome items.
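The internal-consistency statistic that SPSS reports in this analysis is Cronbach's alpha, α = k/(k−1) × (1 − Σs²ᵢ / s²_total). A minimal Python sketch of that formula, with two hypothetical items rather than the idealism data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from raw item scores.
    items: one list of scores per item, all covering the same respondents."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        # sample variance (n - 1 in the denominator), as SPSS uses
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Two hypothetical items that track each other closely yield a high alpha:
print(round(cronbach_alpha([[1, 2, 3, 4, 5], [2, 2, 3, 4, 4]]), 3))  # 0.923
```

Remember that any reverse-scored items must be reflected before they go into this computation, or alpha will be badly understated.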
To illustrate item analysis with SPSS, we shall conduct an item analysis on data from one of my past research projects. For each of 154 respondents we have scores on each of ten Likert items. The scale is intended to measure ethical idealism. People high on idealism believe that an action is unethical if it produces any bad consequences, regardless of how many good consequences it might also produce. People low on idealism believe that an action may be ethical if its good consequences outweigh its bad consequences.
Bring the data (KJ-Idealism.sav) into SPSS.
Click Analyze, Scale, Reliability Analysis.

Select all ten items and scoot them to the Items box on the right.

Click the Statistics box.

Check “Scale if item deleted” and then click Continue.


Back on the initial window, click OK.
Look at the output. The Cronbach alpha is .744, which is acceptable.

Look at the Item-Total Statistics.

There are two items, numbers 7 and 10, which have rather low item-total correlations, and the alpha would go up if they were deleted, but not much, so I retained them. It is disturbing that item 7 did not perform better, since failure to do ethical cost/benefit analysis is an important part of the concept of ethical idealism. Perhaps the problem is that this item does not make it clear that we are talking about ethical cost/benefit analysis rather than other cost/benefit analysis. For example, a person might think it just fine to do a personal, financial cost/benefit analysis to decide whether to lease a car or buy a car, but immoral to weigh morally good consequences against morally bad consequences when deciding whether it is proper to keep horses for entertainment purposes (riding them). Somehow I need to find the time to do some more work on improving measurement of the ethical cost/benefit component of ethical idealism.

1. People should make certain that their actions never intentionally harm others even to a small degree.
2. Risks to another should never be tolerated, irrespective of how small the risks might be.
3. The existence of potential harm to others is always wrong, irrespective of the benefits to be gained.
4. One should never psychologically or physically harm another person.
5. One should not perform an action which might in any way threaten the dignity and welfare of another individual.
6. If an action could harm an innocent other, then it should not be done.
7. Deciding whether or not to perform an act by balancing the positive consequences of the act against the negative consequences of the act is immoral.
8. The dignity and welfare of people should be the most important concern in any society.
9. It is never necessary to sacrifice the welfare of others.
10. Moral actions are those which closely match ideals of the most "perfect" action.

Factor Analysis. It may also be useful to conduct a factor analysis on the scale data to see if the scale really is unidimensional. Responses to the individual scale items are the variables in such a factor analysis. These variables are generally well correlated with one another. We wish to reduce the (large) number of variables to a smaller number of factors that capture most of the variance in the observed variables. Each factor is estimated as being a linear (weighted) combination of the observed variables. We could extract as many factors as there are variables, but generally most of those factors would contribute little, so we try to get just a few factors that capture most of the covariance. Our initial extraction generally includes the restriction that the factors be orthogonal, independent of one another.
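The initial orthogonal extraction described above can be sketched with an eigendecomposition of the item correlation matrix (principal-components extraction). This is only a sketch of the idea; SPSS's factor procedure offers other extraction methods and rotations on top of this:

```python
import numpy as np

def principal_factors(data, n_factors=1):
    """Principal-components extraction from the item correlation matrix.
    data: respondents x items array.  Returns the eigenvalues (sorted,
    largest first) and the loadings of each item on the retained factors."""
    r = np.corrcoef(data, rowvar=False)   # item intercorrelation matrix
    eigvals, eigvecs = np.linalg.eigh(r)  # eigh: r is symmetric
    order = np.argsort(eigvals)[::-1]     # largest eigenvalue first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # a loading is the correlation between an item and an (orthogonal) factor
    loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
    return eigvals, loadings
```

For a truly unidimensional scale, the first eigenvalue dominates the rest and every item loads heavily on the first factor.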
Copyright 2009, Karl L. Wuensch - All rights reserved.

Three-Way Nonorthogonal ANOVA on SPSS


The data for this exercise are from the research which was presented in the article: Castellow, W. A., Wuensch, K. L., & Moore, C. H. (1990). Effects of physical attractiveness of the plaintiff and defendant in sexual harassment judgments, Journal of Social Behavior and Personality, 5, 547-562. The classification variables are DEATTR (experimentally manipulated physical attractiveness of the male defendant accused of sexual harassment), GENDER (gender of the mock juror), and PLATTR (experimentally manipulated physical attractiveness of the female plaintiff). The criterion variable is RATING, the mock juror’s rating of the physical attractiveness of the defendant (on a 9-point scale). Please note that this research is “quasi-experimental” in the sense that two of the predictor variables are experimentally manipulated but one (gender) is not.

Download the data file, SS1234.dat, from my data files page. Open the data file with a text editor, such as Word, just to see how the data are arranged. There is one line of data for each subject. A blank space is used as the delimiter (to separate one score from the next score). For each subject, the first score is gender (1 for male and 2 for female), the second is plattr (1 for not attractive, 2 for attractive), the third is deattr (1 for not attractive, 2 for attractive), and the fourth is rating.


Reading A Text Data File Into SPSS
Close the data file and boot up SPSS for Windows. If a dialog window comes up atop the data editor, click CANCEL. From the command bar at the top of the screen, select FILE, READ TEXT DATA. Point SPSS to the directory in which you have placed the data file, SS1234.dat. Change the Files of type parameter to Data(*.dat), select the SS1234.dat file, and click Open.

Now the Text Import Wizard comes to your assistance. The Step 1 screen looks like this:


You can see the first few lines of data in the window. Just click NEXT at this point, advancing to Step 2.


It is all too easy to get in the habit of just automatically clicking Next on Step 2, but you should carefully check to see if the wizard has correctly guessed about the format of your data file. Sometimes it guesses incorrectly, and if you do not correct it, your data will be corrupted during importation. Here the wizard has guessed correctly – the data are delimited and the first row does not contain variable names. Click Next to advance to Step 3.


The data do start on line 1, each line does represent one case (data from one subject), and you do want to read all cases, so just click NEXT again to advance to Step 4.


Blank spaces are used as the delimiters, so you just click NEXT again to advance to Step 5.


Click on the V1 tab above the first column of data, which selects that column. Change the variable name from V1 to gender, then move on to the second column, change its name to PL_Attr, then column three to DE_Attr, and column four to Rating. Then click Next to advance to Step 6.


Just click FINISH and you are returned to the data editor, where you can see the data entered into SPSS. You could ask to save the formatting specifications under a given name that you could specify in Step 1 on a future importation of a text data file with the same structure. You could ask to save the syntax, which would save in a syntax file the commands used to import these data. You could then simply run that syntax file to import the data at a later time. If you were going to use these data in SPSS again, it would be a good idea to save them in an SPSS system file (*.sav). That way you would be spared repeating this routine of reading a text file. To save the data in an SPSS system data file, just click FILE, SAVE AS on the command bar at the top to get this window:


Point the window at the directory where you want to save the *.sav file, give it a name (SS1234), and indicate type SPSS (*.sav). Click SAVE and you are all set.

Conducting the ANOVA
Now, let us do an ANOVA on these data. From the command bar, click ANALYZE, GENERAL LINEAR MODEL, UNIVARIATE. Select the rating variable from the list of variables and use the arrow to move it into the Dependent Variable field. Now select DE_Attr, Gender, and PL_Attr (in that order) as Fixed Factors (your “independent” variables).


Click OPTIONS and under “Estimated Marginal Means” ask to Display Means for the DE_Attr effect. Check “Compare main effects” and take the default LSD (no adjustment of alpha to control familywise error). Under Display ask for Estimates of Effect Size. There are numerous other optional statistics which you could request here.


Click CONTINUE and then OK. You get the analysis.
In the source table, note the following:
Type III sums of squares is the default.
The “Total” sum of squares is uncorrected for the mean – that is, it is simply the sum of the squared scores on the criterion variable.
The “Corrected Total” sum of squares is the corrected sum of squares. This is what we have commonly referred to as the total sum of squares.
The effect size estimate is partial η2. The regular η2 will be smaller. For example, for the main effect of DE_Attr, partial η2 is .890. Regular η2 is 1275.998 ÷ 1476.234 = .86. For DE_Attr there is little difference between partial η2 and regular η2 because the size of the other effects is quite small. The partial η2 for the DE_Attr x Gender interaction is .091. The regular η2 for that effect is 15.894 ÷ 1476.234 = .011, much smaller than the partial η2.
If you wish to put confidence intervals on the values for partial η2, you can use my program Conf-Interval-R2-Regr.sas. If you desire confidence intervals for regular η2, you will need to compute a modified F with the sums of squares for all other tested effects added to the error term.
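The two effect-size computations just described are simple ratios of sums of squares. A minimal Python sketch, using the values from the source table discussed above:

```python
def regular_eta_sq(ss_effect, ss_corrected_total):
    """Regular eta-squared: effect SS over the corrected total SS."""
    return ss_effect / ss_corrected_total

def partial_eta_sq(ss_effect, ss_error):
    """Partial eta-squared: effect SS over (effect SS + error SS)."""
    return ss_effect / (ss_effect + ss_error)

# Sums of squares from the source table discussed above:
print(round(regular_eta_sq(1275.998, 1476.234), 3))  # 0.864 (DE_Attr)
print(round(regular_eta_sq(15.894, 1476.234), 3))    # 0.011 (DE_Attr x Gender)
```

Because partial η2 excludes the other effects' sums of squares from its denominator, it can never be smaller than regular η2, and the gap widens as the other effects grow.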
Notice that SPSS provides unstandardized confidence intervals for the estimated marginal means and the differences between them.

Type I and Type II Sums of Squares
If you wanted those strange Type II sums of squares, you could repeat the analysis, but this time click the MODEL button and then, at the bottom of the window, select Type II sums of squares. If you have previously run the SS1234.sas program, you can verify that the SPSS output is the same as the SAS output for Type II sums of squares.


If you select Type I sums of squares, you will find that the SPSS output is not the same as the SAS output. When you select a full factorial model in SAS with the statement “model rating=DGP” the effects are ordered this way: D G D*G P D*P G*P D*G*P, but when you do the same with SPSS, the effects are ordered this way: D G P D*G D*P G*P D*G*P.

Copyright 2006, Karl L. Wuensch - All rights reserved.

Two-Way Independent Samples ANOVA with SPSS

Obtain the file ANOVA2.SAV on my SPSS Data page. The data are those that appear in Table 17-3 of Howell’s Fundamental statistics for the behavioral sciences (6th ed.) and in Table 13-2 of Howell’s Statistical methods for psychology (6th ed.). The independent variables are age of participant (young or old) and depth of cognitive processing (manipulated by the instructions given to participants prior to presentation of a list of words). The dependent variable is number of words correctly recalled later.
Bring the data file, ANOVA2.SAV, into SPSS. To conduct the factorial analysis, click Analyze, General Linear Model, Univariate. Scoot Items into the Dependent Variable box and Age and Condition into the Fixed Factors box. Click Plots and scoot Condition into the Horizontal Axis box and Age into the Separate Lines box. Click Add, Continue. Click Post Hoc and scoot Condition into the "Post Hoc Tests for" box. Check REGWQ. Click Continue. Click Options, check Descriptive Statistics and Estimates of Effect Size, click Continue. Click OK.
Look at the plot. The plot makes it pretty clear that there is an interaction here. The difference between the oldsters and the youngsters is quite small when the experimental condition is one with little depth of cognitive processing (counting or rhyming), but much greater with higher levels of depth of cognitive processing. With the youngsters, recall performance increases with each increase in depth of processing. With the oldsters, there is an interesting dip in performance in the intentional condition. Perhaps that is a matter of motivation, with oldsters just refusing to follow instructions that ask them to memorize a silly list of words.
Do note that the means plotted here are least squares means (SPSS calls them estimated means). For our data, these are the same as the observed means. We had the same number of scores in each cell of our design. If we had unequal numbers of scores in our cells, then our independent variables would be correlated with one another, and the observed means would be 'contaminated' by the correlations between independent variables. The estimated means represent an attempt to estimate what the cell means would be if the independent variables were not correlated with one another. These estimated means are also available in the Options dialog box.
Look at the output from the omnibus ANOVA. We generally ignore the F for the "Corrected Model” -- that is the F that would be obtained if we were to do a one-way ANOVA, where the groups are our cells. Here it simply tells us that our cell means differ significantly from one another. The two-way factorial ANOVA is really just an orthogonal partitioning of the treatment variance from such a one-way ANOVA -- that variance is partitioned into three components: The two main effects and the one interaction. We also ignore the test of the intercept, which tests the null hypothesis that the mean of all the scores is zero. If you divide each effect's SS by the total SS, you see that the condition effect accounts for a whopping 57% of the total variance, with the age effect only accounting for 9% and the interaction only accounting for 7%. Despite the fact that all three of these effects are statistically significant, one really should keep that in mind, and point out to the readers of the research report that the age and interaction effects are much less in magnitude than is the effect of recall condition (depth of processing).
Look at the within-cell standard deviations. In the text book, Howell says "it is important to note that the data themselves are approximately normally distributed with acceptably equal variances." I beg to differ. Fmax, the ratio of the largest to the smallest cell variance, is 4.52² / 1.42² ≈ 10.1, greater than 10 -- but I am going to ignore that here.
The interpretation of the effect of age is straightforward -- the youngsters recalled significantly more items than did the oldsters, 3.1 items on average. The pooled within-age standard deviation is computed by taking the square root of the mean of the two groups' variances, which here works out to 4.977. The standardized difference, d, is then 3.1/4.977 = .62. Using Cohen's guidelines, that is a medium to large sized effect. In terms of percentage of variance explained, the age effect accounts for about 9% of the total variance, as noted above.
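These two computations are easy to reproduce by hand; a minimal Python sketch using the mean difference and pooled standard deviation given in the text:

```python
import math

def pooled_sd(sd1, sd2):
    """Square root of the mean of two group variances (equal group sizes)."""
    return math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)

def cohens_d(mean_difference, sd_pooled):
    """Standardized mean difference (Cohen's d)."""
    return mean_difference / sd_pooled

# Values from the text: mean age difference = 3.1, pooled SD = 4.977
print(round(cohens_d(3.1, 4.977), 2))  # 0.62
```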
The interpretation of the recall condition means is also pretty simple. With greater depth of processing, recall is better, but the difference between the intentional condition and the imagery condition is too small to be significant, as is the difference between the rhyming condition and the counting condition. The pooled standard deviation within the intentional and counting conditions is computed the same way, as the square root of the mean of those two cell variances. The standardized effect size, d, for the difference between those two condition means (15.65 − 6.75 = 8.90) divided by that pooled standard deviation, is enormous. In terms of percentage of variance explained, recall condition accounts for about 57% of the total variance, as noted above.
Although the significant interaction effect is small (η2 = .07) compared to the main effect of recall condition, we shall investigate it by examining simple main effects. For pedagogical purposes, we shall obtain the simple main effects of age at each level of recall condition as well as the simple main effects of recall condition for each age.
Notice that SPSS gives you values of partial eta-squared. Also note that they sum to more than 100% of the variance. If you want to place confidence intervals on the obtained values of eta-squared, you must compute an adjusted F for each effect, as I have shown you elsewhere. To place confidence intervals on partial eta-squared you need only the F and df values that SPSS reports. Using the NoncF script, here are the confidence intervals:

Return to the Data Editor. Click Data, Split File. Tell SPSS to organize the output by groups based on the Condition variable. OK. Click Analyze, Compare Means, One-Way ANOVA. Scoot Items into the Dependent List and Age into the Factor box. OK.
The results show that the youngsters recalled significantly more items than did the oldsters at the higher levels of processing (adjective, imagery, and intentional), but not at the lower levels (counting and rhyming). The tests we have obtained here employ individual error terms – that is, each test is based on error variance from only the two groups being compared. Given that there is a problem with heterogeneity of variance among our cells, that is actually a good procedure. If we did not have that problem, we might want to get a little more power by using a pooled error term. What we would have to do is take the treatment MS for each of these tests, divide it by the error MS from the overall factorial analysis, and evaluate each resulting F with the same error df used in the overall ANOVA. Our error df would then be 90 instead of 18, which would give us a little more power.
Return to the Data Editor. Click Data, Split File. Tell SPSS to organize the output by groups based on the Age variable. OK. Click Analyze, Compare Means, One-Way ANOVA. Leave Items in the Dependent List and replace Age with Condition in the Factor box. OK.
Note that the effect of condition is significant for both age groups, but is larger in magnitude for the youngsters (η2 = .83) than for the oldsters (η2 = .45). I don't think that the pairwise comparisons here add much to our understanding, but let's look at them briefly. Among the oldsters, mean recall in the adjective, intentional, and imagery conditions was significantly greater than in the rhyming and counting conditions. Among the youngsters, mean recall in the adjective condition was significantly greater than that in the counting and rhyming conditions and significantly less than that in the imagery and intentional conditions.

Writing up the Results – Here is an Example
A 2 x 5 factorial ANOVA was employed to determine the effects of age group and recall condition on participants’ recall of the items. A .05 criterion of statistical significance was employed for all tests. The main effects of age, F(1, 90) = 29.94, p < .001, ηp2 = .25, CI.95 = .11, .38, and recall condition, F(4, 90) = 47.19, p < .001, ηp2 = .68, CI.95 = .55, .74, were statistically significant, as was their interaction, F(4, 90) = 5.93, p < .001, ηp2 = .21, CI.95 = .05, .32; MSE = 8.03 for each effect. Overall, younger participants recalled more items (M = 13.16) than did older participants (M = 10.06). The REGWQ procedure was employed to conduct pairwise comparisons on the marginal means for recall condition. As shown in the table below, recall was better for the conditions which involved greater depth of processing than for the conditions that involved less cognitive processing.

Table 1. The Main Effect of Recall Condition

Recall Condition    Counting    Rhyming    Adjective    Imagery    Intentional
Mean                 6.75 A      7.25 A     12.90 B      15.50 C     15.65 C
Note. Means sharing a letter in their superscript are not significantly different from one another according to REGWQ tests.

The interaction is displayed in the following figure. Recall condition had a significant simple main effect in both the younger participants, F(4, 45) = 53.06, MSE = 6.38, p < .001, η2 = .83, CI.95 = .70, .87, and the older participants, F(4, 45) = 9.08, MSE = 9.68, p < .001, η2 = .45, CI.95 = .18, .57, but the effect was clearly stronger in the younger participants than in the older participants. The younger participants recalled significantly more items than did the older participants in the adjective condition, F(1, 18) = 7.85, MSE = 9.2, p = .012, η2 = .30, CI.95 = .02, .55, the imagery condition, F(1, 18) = 6.54, MSE = 13.49, p = .020, η2 = .27, CI.95 = .005, .52, and the intentional condition, F(1, 18) = 25.23, MSE = 10.56, p < .001, η2 = .58, CI.95 = .23, .74, but the effect of age fell well short of significance in the counting condition, F(1, 18) = 0.46, MSE = 2.69, p = .50, η2 = .03, CI.95 = .00, .25, and in the rhyming condition, F(1, 18) = 0.59, MSE = 4.18, p = .45, η2 = .03, CI.95 = .00, .27.



Copyright 2007, Karl L. Wuensch - All rights reserved.

One-Way Independent Samples ANOVA with SPSS

Download the data file ANOVA1.sav from my SPSS data page. These are contrived data (I created them with a normal random number generator in the SAS statistical package). We shall imagine that we are evaluating the effectiveness of a new drug (Athenopram HBr) for the treatment of persons with depressive and anxiety disorders. Our independent variable is the daily dose of the drug given to such persons, and our dependent variable is a measure of these persons' psychological illness after two months of pharmacotherapy. We have 20 scores in each of five treatment groups.
Bring the data file, ANOVA1.SAV, into SPSS. To do the analysis click Analyze, Compare Means, One-Way ANOVA. Scoot Illness into the Dependent List box and Dose into the Factor box. Click Contrasts, check Polynomial, and select Degree = 4th. Click Continue. Click Post Hoc, check Bonferroni and REGWQ. There are many other pairwise procedures available here too. Click Continue. Click Options and select Descriptive Statistics and Means Plot. Click Continue, OK.
At the bottom of the output is a plot of the means. Take a look at the plot. It appears that the drug is quite effective with 10 and 20 mg doses, but that increasing the dosage beyond that reduces its effectiveness (perhaps by creating problems opposite to those it was intended to alleviate). With data like these, a “trend analysis” would be advised. In such an analysis one attempts to describe the relationship between the independent and dependent variables in terms of a polynomial function. If you remember polynomials from your algebra course, you will recognize that a quadratic function (one with one bend in the curve) would fit our data well. By selecting polynomial contrasts we get, along with the one-way ANOVA, a test of how well a polynomial model fits the data. I selected degree = 4th to get a test not only of a quadratic model but also of more complex (cubic and quartic) polynomial models. The highest degree one can select is k-1, where k is the number of levels of the independent variable.
The descriptive statistics at the top of the output reveal considerable differences among the group standard deviations, but Fmax (ratio of largest group variance to smallest group variance) remains below 4, so we are OK with the homogeneity of variance assumption.
The ANOVA clearly shows that dose is significantly related to illness (between groups p < .001). The trend analysis shows that there is no significant linear relationship between dose and illness (p = .147), but that higher order polynomial trends (quadratic, cubic, and quartic) would account for a significant proportion of the variance in illness (deviation p < .001). The quadratic trend is large (η2 = 6100.889/14554.24 = 42%) and significant (p < .001). The "deviation" test shows us that cubic (which would allow two bends in the curve relating dose to illness) and quartic (three bends) trends (combined) would account for a significant additional proportion of the variance in illness (deviation p = .047). The cubic trend is significant (p = .032), but accounts for so little of the variance in illness (η2 = 389.205/14554.24 = 3%) that it is not of great importance. The quartic (4th order) trend is trivial and not significant. Please do note that if my independent variable were qualitative rather than continuous, then a trend analysis would not be appropriate and I would not have asked for one – I would still get the standard analysis.
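The single-df contrast sums of squares behind this trend analysis can be reproduced by hand. A minimal Python sketch, using the standard orthogonal polynomial coefficients for five equally spaced levels and the rounded group means reported in Table 1 below (n = 20 per group):

```python
# Orthogonal polynomial contrast coefficients for 5 equally spaced levels
linear    = [-2, -1, 0, 1, 2]
quadratic = [ 2, -1, -2, -1, 2]

def ss_contrast(means, coefs, n_per_group):
    """Sum of squares for a single-df contrast on group means (equal n)."""
    psi = sum(c * m for c, m in zip(coefs, means))
    return n_per_group * psi ** 2 / sum(c ** 2 for c in coefs)

# Rounded group means for doses 0, 10, 20, 30, 40 mg (from Table 1 below):
means = [100.8, 85.0, 81.1, 92.5, 101.8]
print(round(ss_contrast(means, quadratic, 20), 1))  # 6128.9
print(round(ss_contrast(means, linear, 20), 1))     # 180.5
```

The quadratic SS lands near the 6100.889 that SPSS reports (the small gap comes from using means rounded to one decimal place), and the tiny linear SS matches the non-significant linear trend.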
Under the title of Post Hoc Tests, SPSS reports first the results of the Bonferroni tests. Each row in this table represents the difference between the mean illness at one dosage and the mean illness at another dosage. The Sig. column tells you whether the difference is significant or not and then you are given a confidence interval for the difference. All of the differences are significant with the exception of 0 mg vs 40 mg, 10 mg vs 20 mg, and 10 mg vs 30 mg.
The results of the REGWQ test are presented in a different format. The table under the title Homogeneous Subsets shows that the mean for 20 mg does not differ significantly from that for 10 mg and the mean for 0 mg does not differ significantly from that for 40 mg. Although not covered in Howell's Fundamentals textbook, the REGWQ is my recommendation for the pairwise comparison procedure to employ in almost all cases where you have more than three groups – but you cannot really do it by hand, you have to use a computer. If you have only three groups, your best choice is to use Fisher's LSD procedure. With four or more groups I strongly recommend the REGWQ.
The overall η2 is computed by hand by taking the among-groups sum of squares and dividing it by the total sum of squares. This estimates the proportion of the variance in the criterion variable which is “explained” by the grouping variable. You should report both the point estimate of that proportion and also put a 95% confidence interval about it.
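For a one-way design, this hand computation can also be recovered from the F ratio and its degrees of freedom, since η2 = df1·F / (df1·F + df2). A quick Python check against the F(4, 95) = 20.78 reported in the write-up below:

```python
def eta_sq_from_f(f, df_effect, df_error):
    """Eta-squared recovered from F and its df (one-way designs only,
    where eta-squared and partial eta-squared coincide)."""
    return (df_effect * f) / (df_effect * f + df_error)

print(round(eta_sq_from_f(20.78, 4, 95), 2))  # 0.47
```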
Below is an example of how to write up these results. While the underlining means method of presenting pairwise comparisons is dandy when you are writing by hand, it is cumbersome when you are using a word processor, and you never see it in published manuscripts. Instead, I present such results in a table, using superscripts to indicate which means differ significantly from which other means. I chose to present the results of the Bonferroni test rather than the REGWQ test, because the pattern of results from the Bonferroni test are more complex and I wanted to show you how to present such complex results.
An analysis of variance indicated that dose of Athenopram significantly affected psychological illness of our patients, F(4, 95) = 20.78, MSE = 81.71, p < .001, η2 = .47, CI.95 = .30, .56. As shown in Table 1, Bonferroni tests indicated that low doses of the drug were associated with significantly better mental health than were high doses or placebo treatment. A trend analysis indicated that the data were well fit by a cubic model, with the quadratic component accounting for a large and significant proportion of the variance in illness (η2 = .42, p < .001) and the cubic trend accounting for a small but significant proportion of the variance (η2 = .03, p = .032).

Table 1
Psychological Illness of Patients
As a Function of Dose of Athenopram
Dose (mg)     M          SD
40            101.8 A    10.66
0             100.8 A     8.82
30             92.5 B     7.24
10             85.0 BC   11.01
20             81.1 C     6.60
Note. Means with the same letter in their superscripts do not differ significantly from one another according to a Bonferroni test with a .05 limit on familywise error rate.
Please see my document Using SPSS to Obtain a Confidence Interval for R2 From Regression. Here are screen shots showing how I got the confidence interval for eta-squared.
Copyright 2006, Karl L. Wuensch - All rights reserved.

ANCOV and Matching with Confounded Variables

Suppose we are interested in the effect of some categorical independent variable upon some continuous dependent variable. We have available data on an extraneous variable that we can use for matching subjects or as a covariate in an ANCOV. If we were manipulating the independent variable, we could match subjects on the covariate and then within each block randomly assign one subject to each treatment group. If our covariate is well correlated with the dependent variable but not correlated with the independent variable, the randomized blocks design or ANCOV removes from what would otherwise be error variance the variance due to the covariate, thus increasing power. If we measure the covariate prior to administering our experimental treatment and then randomly assign subjects to treatment groups (within each block for a randomized blocks design), then any apparent correlation between covariate and independent variable is due to sampling error, and statistically removing the effect of the covariate removes only error variance.
If, however, we cannot randomly assign subjects to levels of the independent variable or if our covariate is measured after administering the treatments, then removing the effect of the covariate may also result in removing the effect of the treatment. In other words, when the independent variable and the extraneous variable are correlated (confounded), you cannot remove from the dependent variable variance due to the extraneous variable without also removing variance due to the independent variable.
Consider this case: We have a nonmanipulated dichotomous "independent variable" and continuous data on a covariate and a "dependent" or criterion variable. We shall imagine that criterion variable is score on a reading aptitude test, the covariate is number of literature courses taken, and the grouping variable is gender. Download the data file Confound.sav from my SPSS Data Page at http://core.ecu.edu/psyc/wuenschk/SPSS/SPSS-Data.htm. Bring the data into SPSS and take a look at them. I recommend that you print a copy of the data and bring the printed copy to class when we discuss them. To print the data, click “File” on the screen that shows the data in data view and select “Print.” In the “Print” window click the “Properties” button and select “Landscape” orientation. Click OK, OK.
The first three columns of scores (after the leftmost column, which has case numbers) are gender (1 is female, 2 is male), number of courses, and aptitude. We match participants on number of courses (before looking at their aptitude scores), obtaining 10 pairs of participants perfectly matched on the covariate. The 4th column of scores indicates matched pair number. Participants with a missing value code (a dot) in this column could not be matched, so they are excluded from the matched pairs analysis. Note that this excludes from the analysis the female participants with very high covariate scores (and, given a positive correlation with the criterion variable, with high aptitude as well) and the male participants with very low covariate (and criterion) scores. The last three columns of data are scores on the criterion variable for matched participants (female, male) followed by the difference score.
Now click Analyze, Correlate, Bivariate. Scoot gender, courses, and aptitude into the “Variables” box and click OK. Look at the output. Number of courses is indeed well correlated with aptitude, and the women scored higher than the men on both courses and aptitude (the negative sign of the point biserial correlation coefficients indicating that the gender 2 scores are lower than the gender 1 scores).
Now click Analyze, Compare Means, Independent Samples T Test. Scoot courses and aptitude into the “Test Variables” box and gender into the “Grouping Variable” box. Click “Define Groups” and enter the number 1 for “Group 1” and 2 for “Group 2” and then click Continue. Click OK and look at the output. The output shows us again that women score higher than men on both courses and aptitude, and gives us the means etc. Note that the analyses so far are based on all 34 cases.
Now click Analyze, Compare Means, One Sample T Test. Scoot apt1, apt2, and diff into the “Test Variable” box, leave the Test value at zero, and click OK. This is equivalent to conducting correlated t tests comparing men and women for our matched pairs. The output shows us that with the matched pairs data, men have reading aptitude (M = 42.5) significantly greater than that of women (M = 37.5). Now, can we make sense out of this? Ignoring the covariate, women had a significantly higher mean than did men, but if we “control” the covariate by matching (excluding high scores from one group and low scores from the other group), we not only remove Group 1’s superiority, but we get Group 2 having the significantly higher mean. In other words, if the two groups did not differ on the covariate, Group 2 would have the higher mean -- but the two groups do differ on the covariate, so asking if the groups would differ on reading aptitude if they did not differ on number of literature courses is somewhat absurd.
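The equivalence between a correlated (paired) t test and a one-sample t test on the difference scores can be sketched in plain Python. The scores below are hypothetical illustrations, not the actual values in Confound.sav:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical matched-pair aptitude scores (NOT the Confound.sav data)
female = [35, 38, 36, 40, 37, 39, 36, 38, 37, 39]
male   = [41, 43, 40, 44, 42, 43, 41, 44, 42, 45]

# The paired t test is just a one-sample t test on the differences,
# testing whether the mean difference is zero.
diffs = [f - m for f, m in zip(female, male)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))
print(f"t = {t:.3f} on {n - 1} df")  # large negative t: males score higher
```

This is exactly what the SPSS One Sample T Test on the diff column computes, with the test value left at zero.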
Now, let us do a quick ANCOV (analysis of covariance) using all 34 participants. Click Analyze, General Linear Model, Univariate. Scoot aptitude into the “Dependent Variables” box, gender into the “Fixed Factors” box, and courses into the “Covariates” box. Do not click OK yet. “Fixed Factors” identifies the categorical predictor variable(s), with “Fixed” meaning that we have sampled all of the values of interest for the factor(s). If we had randomly sampled values from the factor of interest, we would use the “Random Factors” box. “Covariates” identifies continuously distributed predictor variables.
Click the Model button and select the Custom model. Highlight “gender(F)” in the “Factors & Covariates” list and then click the “Build Term(s)” arrow to place “gender(F)” as the first variable in the “Model” list. Place “courses(C)” as the second variable in the model. Now, be sure that “Interaction” is showing in the box just below the Build Term(s) arrow, and click on both “gender(F)” and “courses(C)” in the Factors and Covariates box. That should result in both “gender(F)” and “courses(C)” being highlighted. With those two terms highlighted, click the Build Term(s) arrow. This will result in the third term in the model being “courses*gender,” which is an interaction term. SPSS creates it by computing for each subject the product of the numerical code for gender and the score on the courses variable. If our factor had more than two levels, the factor would be coded as a set of k-1 dummy variables (each coded 0,1), and the interaction component would consist of a set of k-1 products between the covariate and the dummy variables, where k is the number of levels of the factor.
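The dummy coding and the covariate-by-dummy products described above can be sketched as follows. This is an illustrative sketch of the coding scheme, not SPSS's internal implementation, and the data are hypothetical:

```python
def dummy_code(levels, k):
    """Code each observation's factor level as k-1 indicator (0/1) columns;
    level k serves as the reference category."""
    return [[1 if lev == j else 0 for j in range(1, k)] for lev in levels]

def interaction_columns(dummies, covariate):
    """Multiply each dummy column by the covariate score to form the
    k-1 interaction columns."""
    return [[d * x for d in row] for row, x in zip(dummies, covariate)]

levels  = [1, 2, 3, 1, 2, 3]   # a 3-level factor -> 2 dummy columns
courses = [4, 7, 2, 5, 6, 3]   # hypothetical covariate scores
d = dummy_code(levels, k=3)
print(d)                        # each row: [is_level_1, is_level_2]
print(interaction_columns(d, courses))
```

With a two-level factor such as gender, k-1 = 1, so the single interaction column is just the gender code times the courses score, as described above.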
In the “Sum of squares” box, change the type to “Type I.” Verify that the “Univariate: Model” window looks like that below and then click Continue, OK.
Look at the output. Our only interest is in the test of the interaction component, Gender x Courses. The F reported for the interaction component tests the null hypothesis that the slope for predicting aptitude from courses is the same in women as in men. This must be so if we are to do a standard ANCOV, since the ANCOV “adjusts” the criterion scores in both groups (statistically to remove the effect of the covariate on the criterion) using a slope pooled across both groups. The F is clearly nonsignificant, so we go on to do the ANCOV with the interaction term dropped from the model.
Click Analyze, General Linear Model, Univariate, Model. Remove the “courses*gender” interaction term from the Model box -- highlight it and click the “Build Term(s)” arrow. Click Continue, Options. Scoot gender into the “Display Means For” box and check the “Display Descriptive Statistics” box. Verify that the “Univariate: Options” window looks like that below, and then click Continue and then OK.

Look at the output for our ANCOV. Notice that we are still using Type I sums of squares. Type I sums of squares are sequential -- that is, the effect of the first term in the model is evaluated ignoring all of the other terms in the model. Next, the effect of the second term in the model is evaluated after removing from it any overlap between it and the first term in the model -- that is, statistically holding constant the effect of the first term. In statistical jargon, that is evaluating the effect of the second term “adjusted for” or “partialled for” the first term. If there were more than two terms, this would continue, with each effect adjusted for all effects that precede it in the model but ignoring all effects that follow it in the model. If we had selected Type III (unique) sums of squares, each effect in the model would be adjusted for every other effect in the model.
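The sequential logic of Type I sums of squares can be sketched in Python: the SS for a term is the increase in the regression sum of squares when that term is added to a model already containing all preceding terms. The data are hypothetical, and `ols_ssr` is an illustrative helper, not an SPSS routine:

```python
def ols_ssr(X, y):
    """Regression sum of squares for y on the columns of X (plus an
    intercept), via the normal equations and Gaussian elimination."""
    n = len(y)
    Xa = [[1.0] + list(row) for row in X]
    p = len(Xa[0])
    # Build X'X and X'y
    A = [[sum(Xa[i][r] * Xa[i][c] for i in range(n)) for c in range(p)] for r in range(p)]
    b = [sum(Xa[i][r] * y[i] for i in range(n)) for r in range(p)]
    # Solve for the coefficients (partial pivoting, then back-substitution)
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for cc in range(c, p):
                A[r][cc] -= f * A[c][cc]
            b[r] -= f * b[c]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    yhat = [sum(beta[c] * Xa[i][c] for c in range(p)) for i in range(n)]
    ybar = sum(y) / n
    return sum((yh - ybar) ** 2 for yh in yhat)

# Hypothetical confounded data: the covariate and the group code covary
courses = [2, 3, 4, 5, 6, 7, 8, 9]
gender  = [1, 1, 1, 1, 2, 2, 2, 2]
apt     = [30, 32, 35, 36, 38, 41, 42, 45]

ss_courses = ols_ssr([[c] for c in courses], apt)                      # term 1, ignoring gender
ss_both    = ols_ssr([[c, g] for c, g in zip(courses, gender)], apt)   # both terms
ss_gender_given_courses = ss_both - ss_courses                         # Type I SS for gender
print(round(ss_courses, 2), round(ss_gender_given_courses, 2))
```

Because the covariate is entered first, almost all of the variance shared by gender and courses is credited to courses, leaving gender only the increment it explains beyond the covariate.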
With our model and Type I sums of squares, the effect of the covariate was first removed from the aptitude scores. This results in the adjusted aptitude scores of participants who took many literature courses being lowered and the adjusted aptitude scores of participants who took few literature courses being raised. Look back at the scores in columns one through three. Since the women had high covariate scores and the men had low covariate scores, this results in the adjusted mean on the criterion variable being lowered in the women and raised in the men.
The F reported for gender in this analysis tests the null hypothesis that the two adjusted means (given under “Estimated Marginal Means”) are equal in the population. After taking out the "effect" of number of literature courses, men have a mean reading aptitude that is significantly higher than that of women. Once again, statistically controlling the covariate with these confounded data has resulted not only in removing Group 1’s superiority but in producing a significant difference in the opposite direction.
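The adjustment itself is simple arithmetic. A minimal sketch, using hypothetical means and a hypothetical pooled slope (not the Confound.sav values), shows how controlling a confounded covariate can reverse a group difference:

```python
def adjusted_mean(group_y_mean, group_x_mean, grand_x_mean, b):
    """ANCOV-adjusted mean: raw mean minus the pooled slope times the
    group's deviation from the grand mean on the covariate."""
    return group_y_mean - b * (group_x_mean - grand_x_mean)

b = 2.0        # hypothetical pooled slope, aptitude on courses
grand_x = 5.0  # hypothetical grand mean number of courses

# Women: higher raw aptitude AND more courses; men: lower on both.
women_adj = adjusted_mean(40.0, 7.0, grand_x, b)   # pulled down
men_adj   = adjusted_mean(36.0, 3.0, grand_x, b)   # pushed up
print(women_adj, men_adj)  # the raw 4-point advantage for women reverses
```

Because the women's covariate mean lies above the grand mean, their adjusted criterion mean is lowered; the men's is raised by the same mechanism, which is exactly the pattern in the SPSS output.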
Please beware the use of matching or ANCOV in circumstances like this. I have contrived these data to make a point, exaggerating the degree of confounding likely with real data, but we shall see this problem with real data too. For our contrived data, women have significantly higher reading aptitude unless we statistically remove the “effect” of taking more literature courses. Does this mean that men really have higher reading aptitude that is just masked by their not taking many literature courses? I doubt it. People generally take more courses in areas where their aptitude is high rather than low, so statistically removing the gender difference in number of literature courses taken also removes (or reduces or even reverses) the (real, unadjusted) gender difference in aptitude.
Suppose that for these contrived data Group 1 was men, Group 2 women, the covariate a measure of amount eaten daily, and the criterion body weight. Men are significantly heavier than women, but if we statistically hold constant the amount eaten, women have higher adjusted weights than do men. If women ate as much as men, they would weigh more than men. So what? Eating less is part of being a woman; women eat significantly less than men do!
Despite numerous warnings from statisticians about the use of matching and ANCOV with confounded data, psychologists persist in doing it. You be a critical reader and be aware of the severe limitations of such research when you encounter it.
Lest I have overstated the case against ANCOV and matching with covariates confounded with the independent variable, let me state that I believe such analyses can be informative when interpreted with caution and understanding. Multiple regression (which is really what we are doing here) generally involves obtaining partialled (adjusted) statistics (reflecting the contribution of each predictor variable partialled for some or all of the other predictor variables). Such analyses are especially useful with nonexperimental data, where causal attribution is slippery at best. Consider the data collected by statistics student Dechanile Johnson, and used in PSYC 6430 (first semester graduate statistics). Download the data file Weights.sav from my SPSS Data Page at http://core.ecu.edu/psyc/wuenschk/SPSS/SPSS-Data.htm. Bring the data into SPSS and take a look at them. The variables are gender, height, and weight. Use SPSS to conduct the following analyses:
Correlate each variable with each other variable.
Use t tests to compare men with women on both height and weight.
Verify that the interaction between gender and height is not significant with respect to their association with weight.
Conduct an ANCOV to compare the genders on weight, using height as the covariate. Use Type I sums of squares with the covariate entered first in the model. Obtain descriptive statistics and adjusted means (mean weight for men and for women after taking out the gender difference in heights).
Look at your output. Notice that height is well correlated with weight, and that men are significantly taller and heavier than women (point biserial correlations). The T-Test output shows us the means by gender along with associated statistics. Our General Linear Model output shows us that the slope for predicting weight from height does not differ significantly between men and women, and that men still weigh significantly more than women after adjusting for height. The men averaged 163.76 - 123.36 = 40.4 lb. heavier than the women and 70.57 - 64.89 = 5.68 inches taller. These are quite large differences, 2.5 standard deviations in the case of weight, 2.3 in the case of height. The adjusted means differ by less, by only 35.2 lb (160.8 - 125.6). Removing the effect of height did not make the weight difference nonsignificant (if it did, would we conclude that men don’t really weigh more than women?), but it did reduce the difference from 40.4 to 35.2. In other words, some part of the sex difference in weight is due to men being taller, but even if we statistically hold height constant, men are significantly heavier. Why? Well, men have stockier builds and perhaps more dense tissue (more muscle, less fat, not to mention denser crania).







Copyright 2003, Karl L. Wuensch - All rights reserved.


HOW TO: Work With Research Supervisors

This page offers suggestions, advice and tips to help doctoral (PhD / DPhil) students enjoy a productive and effective relationship with their supervisors. The page covers how the nature of supervision should develop over the period of the research programme, and the importance and nature of meetings with supervisors.
What to expect from a research degree supervisor
New students tend to expect supervisors to tell them what to do. Indeed, this may be justified for very short research projects or where the work is tied into a group project and bounded by the efficient use of expensive and heavily utilised equipment. Where this is not so, students may wait for their supervisors to tell them what to do because they think that demonstrating dependence in this way also demonstrates respect. Fortunately, good supervisors realize that they have to wean many students gradually into independence; so they may provide a well-defined task, as something on which supervisor and student can both build – perhaps a pilot project of some sort. If this is what your supervisor does, it may give you a sense of security, but things are unlikely to carry on that way. Many people would argue that they ought not to carry on that way.
Sections in the chapter on interacting effectively with supervisors
The importance of student-supervisor relationships
The composition of supervisory teams
Points to watch for with team supervision
Roles and responsibilities of supervisors and students
The developing nature of supervision
Arranging meetings with a supervisor
Making the most of meetings with supervisors
Keeping records of meetings with supervisors
Asking a supervisor for feedback and advice
Responding to feedback and criticism from a supervisor
Handling dissatisfaction with supervision
At the other extreme, some supervisors toss out a multitude of ideas at the first meeting, which can be overwhelming. If this happens to you, just realize that the ideas are merely possibilities for you to consider, not tasks that you necessarily have to do. Your best course of action is probably to make a note of them and then take them away to think about, to decide which ones comprise essential groundwork and which ones are merely alternative possibilities. There is no single best way to research a topic, although there are numerous bad and non-viable ways. It is you and you alone who have to be intimately involved with what you are doing over a considerable period. So, for all but the shortest of projects, it is essential that you design it so that it appeals to you as well as being acceptable to your supervisor.
As your work progresses, supervisions should become two-way dialogues. Your supervisor will expect you to develop your own ideas – which may have to be bounded for various reasons – but will want to discuss them with you, to give advice and to warn in good time against possible dangers. It is not a sound interpretation of ‘independent work’ for students to continue along their own way, on the mistaken assumption that they do not need supervisions.
Since research means going beyond published work and developing something new, your relationship with your supervisor must accommodate the natural and inevitable fact that you will eventually come to know more about your work than your supervisor. You will need to become comfortable with this and with engaging him or her in academic debate as between equals.
...

Arranging meetings with a supervisor
It is important to distinguish between formal supervisions and informal meetings. There will be specific policies about the timing and duration of the former, probably around a minimum of eight meetings per year. The dates may be roughly laid out for an entire programme of research and require specific documents to be completed and signed at each meeting.
Informal meetings can also form part of the supervisory process, more so in some subject areas than in others. Supervisors may be torn in two directions as far as scheduling these is concerned, and it is helpful to understand why. On the one hand supervisors want to do what they can to be supportive, but on the other they do not want to interfere on the grounds that independent students ought to take the initiative when they need to discuss work which should, after all, be their own. This latter view is reinforced by the formal dictate of most institutions that it is the responsibility of the student to take the initiative in raising problems or difficulties, however elementary they may seem, and to agree a schedule of meetings with the supervisor.
The practical way forward is for you to take steps early on to find out how scheduling supervisions is likely to work best for the unique partnership between you and your supervisor. It is polite to wait a while, to give your supervisor time to raise the matter.


HOW TO: Develop a research proposal for a research degree
Whether or not students are required to prepare a formal research proposal depends to a large extent on their field of study. The extracts on this page outline the essential elements of any research proposal and make some initial suggestions on how to progress to a full and viable proposal.
The contents of a research proposal
Each institution will probably have its own terminology for its formal requirements for a research proposal. In general terms, though, students will be expected to show that the proposed work:
· is worth researching
· lends itself to being researched
· is sufficiently challenging for the level of award concerned
· can be completed within the appropriate time
· can be adequately resourced
· is not likely to be subjected to any serious constraints
· is capable of being done by the student
Sections in the chapter on the research proposal
The requirement to write one’s own research proposal
How the research proposal helps everyone concerned
The limitations of a research proposal
Essential elements of a research proposal
Fleshing out the research proposal
Putting boundaries on the research proposal
The writing style of the research proposal
Issues of time when preparing a research proposal
Adapting the proposal to apply for a small grant or other funds
These criteria may seem deceptively simple, but each one can subsume a multitude of others and, depending on the nature of the proposal, there is likely to be cross linking between them. The detail and emphasis for your particular research proposal must depend on your topic, the department, school or faculty in which you are registered (particularly if your work is multidisciplinary) and the rigour required by your institution, which will be the final arbiter. So use the points to set yourself thinking. You will soon see how some depend on others, and then suitable headings and cross-references will probably present themselves naturally. It is very unlikely indeed that the headings that you end up with will directly reflect the above bullet points.
You may find that a technique known as a ‘mind-map’ is helpful in developing the ideas about what to include in the proposal. On the other hand, you may not. Mind maps do seem to generate strong feelings, one way or the other. If, having read what follows, you prefer to find your own alternative ways of developing content, there is no reason why you shouldn’t do so. Advice on how to use mind maps is widely available, and is also described in the book.
...

Fleshing out the research proposal
... A sound research proposal requires much more than the above orientation. Obviously supervisors will help, but they are busy people, who will expect you to do your own groundwork.
To show that the work is worth researching, you will need to set it into a context of other work that has and has not been done in the general area. This requires a literature survey. Issues of methodology and terminology should guide your thinking. Ethical considerations, depending on your particular research topic, may vary in importance from minimal to very considerable indeed. All these are elaborated on in the book.
Regarding length and detail, you will need to look at the requirements of your institution, as listed in the student handbook or the website. For the norms of your field of study, look at some research proposals which have previously been accepted.


HOW TO: get into a productive routine
It is all too easy to work hard, in terms of putting in time and effort, while achieving next to nothing. One very useful way of overcoming this problem and making sure that your work is always on-target is to stop and check that you are always in one of the roles outlined below. Through appreciating which one you are in, or should be in, at any particular time, work will become much more productive.
Roles in which research students need to operate
Sections in the chapter on getting into a productive routine
The importance of a productive routine
Maintaining a sense of direction: roles in which researchers need to operate
Keeping records of on-going work
Finding out where your time goes
Using time efficiently when supervisions and seminars are cancelled
Matching the task to the time slot
Handling interruptions
Coping with information overload
Managing time at home with partners and family
Managing time at the computer and on the Internet
Attending training
Using research seminars
Networking and serendipity
Keeping ‘office hours’ versus using the ‘psychological moment’
Keeping ‘office hours’ versus keeping going for hours at a time
Matching your approach to your preferred learning style
Using music to manage yourself
Directing your research to suit your personal needs and preferences
Maintaining a healthy lifestyle
Being realistic with yourself
There are four main roles in which research students need to operate, and they are presented below roughly in the order in which research students need to occupy them. There will, however, inevitably be a certain amount of to-ing and fro-ing between them and cycling around them.
An explorer to discover a gap in knowledge around which to form the research problem or problems (or questions etc.). (Students may of course be using different terminology, e.g. ‘research questions’, ‘hypothesis’, ‘focus’, ‘topic’. However, no-one should be gathering data for the sake of it, so research students should always be able to couch what they are doing in terms of a problem to solve, even if different terminology appears in the thesis.) For those students who know their research problems from the outset, the time spent in this role can be very short, although not non-existent, because the problem still needs some refinement. Other students can spend a considerable time in the role. Most of the time this is likely to involve reading round the subject, but research can be such a variable undertaking that students may drop into the role at any stage.
A detective and/or inventor to find solution(s) to the research problem(s) (or questions etc.). The role is that of a detective where the problem is about something unknown and an inventor where the problem is to develop or produce something.
A visionary or creative thinker to develop an original twist or perspective on the work and a fall-back strategy if things don’t go according to plan. Also, if necessary, to find a way of ring-fencing nebulous or discrete investigations into a self-contained piece of work appropriate for the award concerned.
A barrister to make a case in the thesis for the solutions to the research problem, problems or questions (rephrased if necessary in terms of terminology appropriate for the work and field of study.)
Research students may, of course, occupy other roles at times, such as firefighter, manager, negotiator, editor, journalist, etc., but these reflect the sorts of task which everyone, research student or not, has to handle on occasions, and do not generate any sense of overall direction in the research.
... Also of course it is essential to take time off for relaxation and creativity as considered in Chapter 20.


HOW TO: Write progress reports for research
Progress reports are a requirement for all students on research programmes, but how best to construct and use them is often misunderstood. This page offers suggestions, advice, tips and general help, in particular on developing the content of a progress report and the use of literature.
Developing the content of a progress report
... The content of a report must depend on its purpose. For most fields of study, the content of early reports probably ought to be such as to review progress to date and to identify a plan of action for the next phase of the work. Reviewing progress is not merely a matter of cataloguing what tasks one has done, although this will come into it. Rather, it should make a case that what one has done has been thoughtful, directed and competent.
Students should probably include the following in the report, presented where possible as a substantiated argument rather than as a straight description:
How one has defined or developed the research question(s), topic(s) or theme(s) etc., with which the report is concerned – possibly with reference to the original research proposal.
How one is developing the research methodology, stressing how it is appropriate.
How one expects to ensure that appropriate data will be collected which is convincing for its purpose.
How the literature is being used.
How any constraints are being handled.
How subjectivity, where relevant, is being handled.
Progress to date.
Problems or potential problems to be flagged up.
General reflections. These should be relevant, not just padding, and the nature of what is required is likely to vary considerably from one discipline to another.
A plan for the next phase of the work.
Interim reports should build on previous ones and, where appropriate, refer to them. Thus there should be no need for repetition of previously reported material that remains unchanged.
With a formal report such as that to a funding agency, certain headings or sections may be obligatory. They can seem bureaucratic or irrelevant, and if so, they may be there to provide the institution or funding agency with data for other purposes. So it is probably a good idea to start the report by drafting brief notes along the lines indicated by the above bullet points first, and then, in negotiation with supervisors, to edit these together to fit under the required headings. If the headings seem particularly bureaucratic or irrelevant, the help of supervisors will be essential for handling them.

Citing literature in a report or thesis chapter
Sections in the chapter on progress reports for research
The importance of reports during the research programme
Developing the content of a report
Structuring the report
Using basic word processing features to aid structuring
Constructing the introductory paragraph as an orientation to the report
Constructing the final paragraph for effective closure of the report
Citing literature
Adding figures and tables
Adding appendices
Developing an academic writing style
Making the writing process more effective and efficient
Capitalizing on all the features of word processing software
Using reports to get feedback and advice
Towards writing the thesis


HOW TO: avoid unintentional plagiarism in research
This page introduces intellectual ownership and plagiarism. Students from some cultures may reproduce the work of others verbatim in the belief that they are honouring them or merely reproducing the 'best' way of expressing something. Whatever the motives, this is regarded as plagiarism and the page gives some pointers on how to avoid it.
Recognising intellectual property and plagiarism
Everyone has what is known as ‘intellectual copyright’ or ‘intellectual property rights’ on what they write. No formal patent is necessary. Plagiarism is taking the written work of others and passing it off as one’s own – although the meaning is increasingly becoming blurred to include passing off the ideas of others as one’s own. It is not plagiarism to quote short passages, provided that one points out where the quotation comes from and uses it for illustration or criticism. It is plagiarism to copy a chunk of material and present it without indicating its source as if it is one’s own. Plagiarism is a form of fraud and malpractice.
Sections in the chapter on handling ethical issues
The place of ethics in research
Towards an ethical research proposal
Getting the research proposal approved for ethical considerations
The ethics of ownership in research: conflicts of interest
The ethics of ownership of the work of others: plagiarism
Avoiding 'unintentional' plagiarism
What to do if you meet malpractice and fraud
Subject specific ethical guidelines
The Internet, particularly on-line academic journals, may seem to provide considerable scope for taking the written work of others and passing it off as one’s own. Cases are even reported of students with short research projects buying complete theses or dissertations on the Internet. This is something they could never get away with on a full research programme like a PhD, as there are too many checks along the way, which would immediately alert supervisors. In particular supervisors can often spot plagiarized chunks of text because the different authorship of the various sections is so obvious from the different writing styles. To add to the armoury against plagiarism, there are on-line tools which take only minutes to analyze and compare text. Supervisors can run the software themselves, but common practice is to ask students to do it as part of their personal development, and to produce the downloaded report as evidence.
Blatant plagiarism is being taken very seriously indeed. Do it at your peril. Not only would you be risking the most severe of penalties, you would also be destroying the educational value of your programme of work.

Avoiding 'unintentional' plagiarism
Although plagiarism is simply wrong, students from some backgrounds do it in good faith – to indicate that they have studied what the ‘experts’ have written and to honour those experts. Understandable as this may be, it cannot be allowed to continue. It is unlikely to remain unnoticed for long, and no-one would ever accept that a student of more than a few months into a research programme is anything but fully aware that plagiarism is unacceptable. The penalties can be very severe indeed, and can be applied retrospectively, even after students have graduated.
The way to avoid this sort of plagiarism is simple. Every time you use someone else’s work, simply say so and cite the source. If you feel uncomfortable about this or find that your work is consisting of too many quotations or citations from elsewhere, you are probably not subjecting the material to your own independent thought. Your personal critical analysis is what is important. So try to present the work of others in terms of what they 'consider' / 'describe' / 'suggest' / 'argue for' / 'explain' / 'conclude' … etc and then add how much confidence you feel that their work generates and why.
Another plagiarism-avoidance technique is to rewrite what someone else has written, but concentrating on leaving out what is peripheral to one’s own argument (while not misrepresenting); and then stressing where it is in agreement, where it is in disagreement and where it is particularly fascinating from your point of view. By the time you have done this, you may feel quite comfortable that what you have written genuinely is your own and that all you need to do is to cite the source material.


HOW TO: Plan, monitor and record your skills development - Personal Development Planning
Employers expect holders of research degrees such as the PhD to have transferable skills which are not only directly associated with the topic of the doctoral programme but also of a more general nature, appropriate for a wider range of work and for working effectively with others. This page offers suggestions, advice, tips and general help on how to recognise a skilled individual; how to recognise one's own skills; and personal development planning (PDP).
How to recognise a skilled individual
Being skilled carries with it a sense of satisfaction at a job well done. Broadly speaking, a skill is the ability to do something well within minimal time and with minimal effort. A skilled typist, for example, can type a report quickly and accurately, probably without even looking at the keyboard, whereas an unskilled person would have to keep looking for keys and would probably press the wrong ones by mistake. The typing would be awkward, would require excessive concentration and would take an excessive time. It might still get done eventually, but the final product would almost certainly have an amateur look about it. Typing is an example of a skill which is largely manual, but skills can also be interpersonal and intellectual. For example a skilled speaker can comparatively effortlessly hold an audience spellbound; an unskilled speaker might have a go, but the task would consume a great deal of preparation time and emotional energy and would probably not be received particularly well by the audience anyway.
The straightforward division into 'skilled' and 'unskilled' is of course an over-simplification, as there are varying degrees of skills-proficiency. However, knowing what is involved in a skill is never the same as being skilled.

Recognising your own skills
Sections in the chapter on skills development and personal development planning (PDP)
The importance of skills
The characteristics of a skill
The process of becoming skilled
The transferability of skills
Ways of thinking about the skills developed in postgraduate research
Recognizing the skills that you will develop in your own research
A do-it-yourself training needs analysis/skills audit
The joint statement on skills by the Research Councils
Collecting and using evidence to demonstrate skills proficiency
Locating suitable training
‘Personal Development Planning’ (PDP)
The place of PDP in formal assessment processes
In order to extend and develop your skill-set it is important to recognize the skills which you already have. To some extent, all students develop skills as a natural part of progressing through their studies and receiving guidance and feedback from their supervisors. However, unless students are specifically alerted to the fact, few seem to appreciate the richness of what they acquire this way. Once alerted, the skills can be built on and readily developed further.
The following extract suggests a framework for the sorts of skills that are most likely to be developed during an extended research programme. The word 'framework' is used advisedly, because all the skills could be described differently, summarised, elaborated or subdivided. It is important to make adaptations yourself in order to make the terminology more relevant to you and your field of study. All the skills are more sophisticated and have a wider scope than those which first degree graduates can normally claim.


All MPhil/PhD graduates who are adequately able and were properly supervised should be able to claim skills in the specialist research-related aspects of their MPhil/PhD topic. The extent to which these skills are 'transferable' to employment will depend on the individual concerned, the nature of the MPhil/PhD work and the requirements of the employment.
In addition, there are numerous skills which are more 'transferable', which employers would understand and value, and which it is reasonable to expect from PhD and possibly MPhil graduates, over and above those transferable skills which have received so much attention at undergraduate level:
All MPhil/PhD students will, by the time they complete, have spent two, three or more years on a research programme, taking it from first inception through its many and various highs and lows. This is no mean feat and should develop the transferable skill of being able to see any prolonged task or project through to completion. It should include, to varying extents depending on the discipline and the research topic, the abilities to plan, to allocate resources of time and money, to trouble-shoot, to keep up with one's subject, to be flexible and able to change direction where necessary, and to think laterally and creatively to develop alternative approaches. The skill of being able to accommodate change is highly valued by employers, who need people who can anticipate and lead change in a changing world, yet resist it where it is only for its own sake.
All MPhil/PhD students should have learned to set their work in a wider field of knowledge. The process requires extensive study of literature and should develop the transferable skills of being able to sift through large quantities of information, to take on board the points of view of others, challenge premises, question procedures and interpret meaning.
All MPhil/PhD students have to be able to present their work to the academic community, minimally through seminars, progress reports and the thesis. Seminars should develop the oral communication skills of being effective and confident in making formal presentations, in intervening in meetings, participating in group discussions, dealing with criticism and presenting cases. Report and thesis-writing should develop the transferable written communication skills needed for composing effective reports, manuals and press releases and for summarising bulky documents. These communication skills should go far beyond the level acquired during a first degree.
The road to completion of an MPhil/PhD can be a lonely one, particularly in the humanities and social sciences. Yet the skills of coping with isolation are 'transferable' and can be highly valued by employers. They include: self-direction; self-discipline; self-motivation; resilience; tenacity and the abilities to prioritise and juggle a number of tasks at once.
MPhil/PhD students working on group projects, which is most common in the sciences, should be able to claim advanced team-working skills.
Further examples of transferable skills are many and various and depend on the interests of the student and the nature of the research programme. Possibilities include advanced computer literacy, facility with the Internet, the skills of being able to teach effectively, to negotiate access and resources, to network with others, to use project management techniques, and to find one's way around specialist libraries or archives.
A digest of a framework for a transferable skill-set for MPhil/PhD students [extracted with minor modifications from Cryer, P. (1998) 'Transferable skills, marketability and lifelong learning: the particular case of postgraduate research students', Studies in Higher Education, 23(2), 207-216.]

'Personal Development Planning' (PDP)
It is common practice for institutions to offer skills-development to their research students. Although the schemes differ in detail from one institution to another and possibly from one field of study to another, they all provide some sort of framework by which students can monitor, build on and reflect on their personal development. The schemes are generally known as ‘Personal Development Planning’ or PDP.
'[PDP is] a structured and supported process undertaken by an individual to reflect upon their own learning, performance and/or achievement and to plan for their personal, educational and career development’ (QAA 2004, para. 27).
The words ‘structured’, ’supported’ and ’process’ are not included lightly. PDP is not a one-off activity. It is a process because it takes place over time. It is supported in that advice and training activities are on hand during the process. It is structured in that it is tied to phases of the research programme or registration and is rigorously documented. If the structure and support are not there, the procedure is not genuine PDP. So no student can sign up to PDP in its pure form without being in a group, department, or institution which supports it. That is of course no reason why students working in isolation should not adopt what they can of its precepts.
Students will probably be introduced to PDP at their induction where they will be provided with templates of some sort for documenting the process. These may be paper based, on-line or in the form of text files or log books, and they will facilitate looking backwards in a reflective mode and forwards in a planning mode as well as recording achievement. Each student is expected to take the initiative for keeping the documentation up-to-date, although some records will be kept by the professionals who are overseeing the PDP. In some cases these are the individual supervisors, and in other cases they are dedicated PDP staff. There will be regular meetings with professionals, training needs analyses and opportunities for reflection and training.
Out of the training needs analyses will emerge lists of requirements for particular training. In theory, students only have to make a good case for attending a training event (such as a UKGRAD event) for it to be funded. Financial limits do not seem to present major obstacles, because institutions receive dedicated pots of money for students who are funded by the UK Research Councils, and they try not to be divisive towards their other students. In practice, though, the funds are not taken up as fully as they might be, owing to the constraints on students' time. Part-time students seem particularly loath to take time out for training.
PDP generates various documents. Because schemes differ in detail across institutions, it is impossible to generalize about what these documents may be. The following are offered as a broad outline and for guidance only, and are not necessarily comprehensive. If you are participating in a PDP scheme and find a lack of correlation between your documents and these, it is probably because of different terminology or because some documents are contained within others.
Personal information such as name, registration and contact information
Previous qualifications and experience, where relevant
Lists, with dates of, for example, supervisions, courses or conferences attended; presentations delivered; reports written; publications; etc
Documents such as reports of supervisions; training needs analyses; action plans; work plans; laboratory notebooks or log books; reports; records of achievement; etc.
There is always scope for innovative documentation, like, for example, e-portfolios.
The preparation of PDP documents should aid students’ reflections on their personal and professional development; prepare them for lifelong learning generally and their on-going personal and professional development in the world of work; and form a basis for eventual job applications. With respect to job applications, evidence of willingness and ability to learn and records of achievement are particularly important.


HOW TO: Succeed as an 'overseas' / 'international' student
Research students working away from their home countries often face challenges that 'home' students do not. This page considers two of these: how funding issues can affect progress, and the possible challenge of having to think independently in a culture where teachers and supervisors do not expect to be treated as 'knowing it all'.
How funding can affect progress
If your funding comes from your own country, you need to be aware that it can be cripplingly expensive for your funding body. Not only are the fees much higher for overseas students, but the exchange rate may also be unfavourable. Consequently funding bodies demand value for money. Funding for three or (in some cases) four years may seem a comfortable deal at the outset, but students will need to hit the ground running to be sure that all aspects of the work are completed before the money runs out. Only in very special circumstances, and with a great deal of paperwork, will funding be extended. Furthermore, the people back home will expect anyone who has studied away for so long to return as a success. This can 'hang over' international research students as a source of unremitting strain and worry. If this applies to you, you would be well advised to familiarize yourself with the rest of this book as soon as possible, so that you can understand and manage what lies ahead.
Sections in the chapter on succeeding as an 'overseas' research student
The challenges of being a postgraduate research student outside your home country
Preparing yourself while still at home
Selecting a suitable institution
Funding issues and their implications
Timing the application
The challenge of working in another language
The challenge of thinking independently
Other possible challenges
In particular, do make sure that the project you undertake will not be too ambitious in terms of data collection and analyses. Aiming for 'quantity' is not necessarily the best way of achieving the quality, originality and significance appropriate for work at PhD level. You will need to think independently and take advice from supervisors while not following instructions blindly.

Showing that you can think independently
International students may face another significant challenge. It applies where they come from cultures which expect a student never to stray from giving the outward appearance that a teacher is right in all respects all of the time. These cultures value deference, humility and compliance, without displays of emotion. Students from such cultures face a major readjustment when they first arrive in a Western educational system where independent thinking is valued and where students, particularly research students, are expected to demonstrate this in ways which may seem alien and uncomfortable.
Most supervisors are sensitive to the issues and help their students to handle them, but supervisors who have never worked outside their own country may not be. This puts the onus on the international students. The issues will not go away. Remedies are matters for individual preference, often worked out with guidance from more experienced members of the same culture. Often all that is needed is a form of ‘permission’ from supervisors that academic argument and creative thinking are acceptable within the framework of the research; that this is what will please supervisors; and that it will not be regarded as lack of respect. Chapter 6 considers the move towards independence in more detail and suggests ways of taking initiatives with supervisors on this and various other matters.
A related matter is that students from these cultures tend to think that their written work should include chunks copied verbatim from the publications of experts, because this shows that they honour those experts. Whatever the intentions and rationales of the students doing the copying, it is nevertheless an attempt to pass off the work of others as one's own. This is known as 'plagiarism' and the temptation to do it must be overcome. Plagiarism is considered more fully on another page of this site.
Whatever the culture at home, postgraduate research students in a Western culture are expected to work things out for themselves. At the level of postgraduate research no supervisor or teacher will tell students what to do – at least not after a relatively short induction period. General training will be given but, after that, supervisors are there to advise, warn and encourage. It will be a good idea to watch how British students interact with supervisors and take that as a rough model. It is also important to realize that, because supervisors are not all-knowing, they can, just like everyone else, be sufficiently insecure to feel threatened in certain situations.
In contrast there are students from some cultures who may give the impression that, having paid their fees to the institution, it is obliged to give them the corresponding award, regardless of anything else. Such students need to appreciate that their fees are buying opportunity, i.e. the opportunity to develop themselves, and that it is up to them how they use this opportunity. In particular no academic with any professionalism will sign certificates of attendance at training where the student has not participated.


HOW TO: Recognise and develop originality in research
For research to be of PhD standard, all institutional regulations require it to be 'original', but the concept of originality is often misunderstood. This page offers suggestions, advice, tips and general help.
Ways of thinking about originality
A useful way to appreciate the scope of originality is through an analogy, where the research programme can be likened to an exploration into a wilderness at a time in history when the world was still largely unexplored and when explorers still had considerable personal autonomy. In the analogy, the explorer may have certain visions in mind concerning what he or she hopes the expedition will achieve, but appreciates that these may not materialize and is open to alternatives. To avoid cumbersome repetition, the explorer and student will be taken as having different sexes, arbitrarily male and female respectively.
...
Originality in tools, techniques and procedures
Sections in the chapter on originality in research
The need for originality in research
Originality in tools, techniques and procedures
Originality in exploring the unknown/unexplored
Originality in exploring the unanticipated
Originality in data
Originality in transfer of mode or place of use
Originality in byproducts
Originality in the experience
Originality as ‘potentially publishable’
The variety of interpretations and configurations of originality
The balance between originality and conformity
Protecting the ownership of original work
Putting originality into perspective
... In the analogy the explorer uses all the information he can to firm up on why he wants to explore the wilderness and how he might do so within the resources at his command and within any constraints that may exist. He uses this information to plan and organize what background knowledge, procedures, tools, equipment and personnel he will need, tailored to the available resources and constraints. Some procedures may have to be specially designed, some tools and equipment may have to be specially made and some personnel may have to be specially trained or brought in.
Similarly, the student studies the literature, talks to experts and attends relevant training to get background knowledge and to develop an appropriate research methodology. The latter must include decisions about the procedures, tools and techniques, and possibly also the people to be involved. These may be fairly standard in the field of study, but if she uses them in new untested ways, this would justify a claim for originality. Or if she develops new procedures, tools and techniques for a specific purpose, this, too would justify a claim for originality. If neither is the case, her claim for originality must lie in later stages of the work, as suggested in the next few sections.
Originality in exploring the unknown/unexplored
In the analogy the expedition begins along the pre-planned route. If this is previously unexplored, the mere exploration is original work.
Similarly, if the student is conducting a major investigation on something which has never been investigated before, such as a recently discovered insect, star, poem, etc., the work will necessarily be original.
Although ‘originality’ in some types of research is built in, in many fields of study it is not, and its pre-existence should never be taken entirely for granted. So students undertaking research have to learn to live with a certain amount of uncertainty. Living with uncertainty may be difficult, but it is a fact of life for researchers, and can be ameliorated to some extent by welcoming the uncertainty as a precursor of creativity; thinking of the uncertainty as fascination with the unknown; and realizing that committed students do normally manage to complete their programmes of research and earn the award for which they are registered.
Originality in exploring the unanticipated
In the analogy the main route may already have been broadly explored. However, the explorer will, from time to time, come across unexpected and unexplored sidetracks. He may not notice them; or he may continue on the planned route anyway, in which case nothing original is involved. If, however, he does notice the sidetracks, he has to make decisions about whether to explore any of them, and if so, which ones. These decisions may be difficult, because he cannot know whether anything of interest will turn out to lie along them without at least partially exploring them, and doing so will use resources of time and equipment which will delay the expedition on its main route. Yet, one or more of the sidetracks could contain something of such great interest and importance that it would be worth abandoning the expedition as first planned and putting all the resources into exploring the sidetrack.
Similarly, in fairly mundane research, one phase of the work can open up alternative ways forward which have never previously been researched, and it is often these that can provide ‘originality’, as well as the fascination with the unknown that ought to accompany research. They can, on the other hand, equally turn out to be dead-ends which consume time and effort fruitlessly. Researchers cannot know without devoting some time to looking, and even if nothing worthwhile results, a research student can at least claim to have searched for something original.
Originality in data
In the analogy the explorer may make notes and observations along the way which cannot be processed at the time. So he packs them up for carrying back home where they can be examined properly.
Similarly, the student may find herself collecting much unprocessed data which she hopes may provide something usefully ‘original’ later when processed or analysed. This is a perfectly possible way of incorporating originality into work, but it is not at all safe. To do it successfully, students need either good hunches about how the data might be used to advantage or considerable creative abilities.
Originality in transfer of mode or place of use
The explorer may collect all manner of goodies along the way, ranging from what he hoped for when planning the expedition to the entirely unanticipated. These goodies may have an obvious uniqueness, beauty or value, like gold or precious stones. More likely, though, the goodies are commonplace where they were found, but unknown back home, like the potato which Sir Walter Raleigh brought to England from America.
Similarly, originality in research need not be new in absolute terms. It can merely be new to the situation or the discipline. Even data need not be new, in that it is both feasible and acceptable for researchers to make something original and significant with secondary data, i.e. data that they did not gather themselves. This is a route to originality that is often overlooked by research students.
Originality in byproducts
Things may go so badly wrong on the expedition that it has to be abandoned with seemingly nothing achieved. Yet, the illnesses of the team could be used to testify to the diseases that are rampant in the area. Or the torrential storms that washed away the collections of specimens could be monitored for interpretation in terms of what is already known about storms in that type of terrain. Neither of these would have been the purpose of the expedition, but they would be none the less valuable and count as original work.
Similarly, the student may be able to capitalize on things that seem to go wrong. Important equipment may not work; crucial resources may not be available; people may not agree to be interviewed; funding may be withdrawn; or there may be other serious and unforeseen obstacles. Just as in the analogy, a little creative thinking can rescue the situation, which is the primary reason for the third role in which students need to operate. There are almost always byproducts during any research, perhaps the development of a certain piece of equipment or some interesting secondary findings in the literature. These can be moved into the mainstream, focused on or developed further. When the thesis is written, the research problem, theme or focus merely needs to be reformulated to reflect the new nature of the work.
Originality in the experience
Whatever happens on the expedition, the explorer should, provided that he did not give up and return home early, have some interesting stories to tell.
Similarly students who stay the course with their research should be able to tease out something worthwhile from an academic or scholarly standpoint. The creative thinking techniques of Chapter 20 should help.
Originality as 'potentially publishable'
Departing from the analogy, another useful way to stimulate thinking about originality is through the concept of ‘potentially publishable’ in a peer-reviewed journal. This is increasingly being equated to ‘originality’ for students’ research. The work does not necessarily have to be published, only to be worthy of publication, in principle, if suitably written up at a later stage. ‘Potentially publishable’ is a useful notion, because most research, particularly at PhD level, ought to be able to generate at least one, and probably several, journal articles. The focus of any such article would provide an acceptable claim for originality. If, by the time of the examination, the work has already been accepted for publication in a peer reviewed journal, that is a considerable plus.
The variety of interpretations and configurations of originality
It is not very difficult to develop new and original twists to research, and Box 21.1[see book] gives some examples of how real students have done so. You should be able to do it too.


HOW TO: write the thesis / dissertation
Almost irrespective of what a postgraduate research student actually does, he/she is judged on the quality of the PhD or MPhil thesis/dissertation. It is therefore crucial to be able to round it off within the time available and make it of a standard that shows the work in the best possible light. This page offers suggestions, advice, tips and general help, in particular on creating a unified body of material and making the writing process more effective and efficient.
Linking chapters into a unified whole
Chapters of a thesis should link together to make a unified whole with one or more storylines that lead inexorably to make the case or cases for which the thesis is arguing. It is always worth wording the headings of chapters and sections so that they convey as comprehensively as possible what is in them. Then it is helpful to keep an up-to-date contents list, as you work, to be able to see a developing storyline at a glance. It is here that any lack of coherence is likely to show up first. So the technique can save hours of writing that later have to be discarded.
Sections in the chapter on preparing the thesis / dissertation
The importance of the thesis
The need to recap on the writing and referencing techniques of previous chapters
Orientating yourself for the task ahead
Developing a framework of chapters
Developing the content of a chapter
Sequencing the content within a chapter
Linking chapters into one or more storylines
Cross-referencing in the thesis
The writing process
Producing the abstract
Presenting the thesis in accordance with institutional requirements
It should be clear from a chapter’s introduction where that chapter fits into the rest of the storyline, i.e. where it carries on from previous chapters of the thesis. A good technique to accomplish this is to write a few keywords or notes under each of the following headings:
Setting the scene for the chapter, i.e. the general area(s) that the chapter considers.
The gap in knowledge or understanding which the chapter addresses – usually identified as an issue in one or more earlier chapters.
How the chapter fills the gap.
A brief overview of what is in the chapter.
Then edit the notes together to form the introduction to the chapter.
The concluding section or paragraph of a chapter (except of course for the final chapter) should show how the theme of the chapter is carried on elsewhere in the storyline/thesis. The technique for doing this consists of writing a few keywords or some notes under each of the following headings:
What the chapter has done
What new questions the chapter has identified
Where these questions are dealt with.
Then edit the notes together.

Handling the writing process
Writing a thesis is generally a matter of progressively refining chapters in the light of their internal consistency and their relationship to other chapters. This cannot be done quickly, and most students underestimate the time it requires.
It is not usually productive to try to write the chapters of a thesis in sequence. Start with a chapter or several chapters that are currently fascinating you or that you have already come to grips with in your mind. Then develop them in whatever way is easiest for you, be it text on a computer, or scribble on blank sheets of paper, or as a ‘mind map’. The emphasis should be on producing a coherent structure, rather than on grammar or style. When you come to do the actual composition, it is most straightforward to do your own typing and then put it on one side for a time so that you can come back and edit it with a fresh perspective.
Ask your supervisors at what stage they would like to see the drafts. A common procedure is for students to write a chapter of a thesis, submit it to a supervisor and then rewrite to accommodate comments, but it is a mistake then to believe that the revised chapter is completely finished, never to need further modification. The 'storyline' of an entire thesis can never be clear from a single chapter. The full thesis is required, at least in draft. No supervisor will finally 'approve' a chapter in isolation. The scene-setting chapters are most likely to remain unchanged, but the analytical and interpretative ones depend too much on one another. The word 'approve' is in inverted commas, because it is the student's, not the supervisor's, formal responsibility to decide when a thesis (or chapter) is ready for submission.
Updating drafts is so easy on a word processor that some students produce them copiously. So negotiate with your supervisor how many drafts he or she is prepared to comment on and in what detail. Most supervisors have to set some limits.
You and your principal supervisor will have been very close indeed to your work for a considerable time. You, in particular, will know it inside out and back to front. So the links between its components may be entirely obvious to you both, while not being particularly clear to those who have met your work only recently. It is important to minimize misunderstandings and to find out as early as possible where clarification is necessary. Giving departmental seminars will have helped, as will giving conference presentations and writing journal articles. If you have not done any of these recently, then try to find someone new to your work, who will listen to you explaining it or, ideally, will read the draft thesis and say where they have trouble following your arguments.
You must work through the final draft of the thesis in an editorial mode. Finalizing a thesis is always much more time-consuming than expected. The style must be academic; the text must be written to make a case; chapters have to be linked into a storyline; cross-references and ‘pointers’ need to be inserted to keep the reader orientated to what is where and why; there should be no typing or stylistic errors; and tables, figures and references should be complete, accurate and presented in whatever format has been agreed with the supervisor. Pay particular attention to the abstract, contents list, beginning and ends of chapters and the final chapter, as it is these which examiners tend to study first, and it is on these that they may form their impressions – and first impressions count. There may be departmental or institutional guidelines on maximum length.
Throughout the writing and editing process, be meticulous about keeping backups.
Most students choose to prepare the final versions of their theses themselves, although professional copy editors and typists can support to varying extents. If you need help, make enquiries well in advance of your deadline, because such individuals inevitably find that certain times of the year are busier than others. The departmental secretary or the students’ union should be able to make recommendations.
Although most students underestimate the time that a thesis takes, it is also worth pointing out that many students spend longer on it than necessary, either trying to bulk up the quantity or toying with unnecessary stylistic refinements.


HOW TO: Do yourself justice in the oral exam/viva
How students perform in the viva or oral examination can tip the balance in how a PhD thesis or dissertation is judged. This page offers suggestions, advice, tips and general help on how to do oneself justice through advance preparation and by conducting oneself appropriately when meeting the examiners.
Preparing yourself for your oral/viva
A common suggestion is that students should prepare for the oral/viva through a mock examination, with supervisors or others role-playing the examiners. This may or may not be helpful, as a mock examination may not be at all realistic. Only you and your supervisor can decide what is best for you.
Once you know who your examiners will be, it would be sensible to find out what you can about them, to familiarize yourself with their work and find links between it and your own. If at all possible, ask around to find out their examination style.
Since the date of the exam may be several months after completion of your work, reread your thesis a few days beforehand, so that it is at your fingertips. An oral examination is often called a thesis ‘defence’, and thinking of it in those terms may help you to prepare. Reread your thesis as if trying to find fault; if possible, enlist the aid of a friend. Then prepare suitable defences. Defending is not the same as being defensive. Where criticisms seem valid, prepare responses that show you recognize this by saying, for example, what you would like to have done about them if there had been more resources or if you had thought about them at the right time, or what you hope other researchers may still do about them.
It may be helpful to annotate your thesis, using 'Post-It' style stickers, so that you can find key areas quickly. Common early questions are likely to be ‘What did you enjoy most about your work?’ or ‘What would you do differently if you were starting out all over again?’ or ‘How did your Personal Development Planning or skills training influence your work?’. These questions may appear to be simple pleasantries to put you at your ease, but they may mask skilful probing into how well you can appraise your own work and your personal development as a researcher and scholar. Unless you prepare for them, they may throw you and affect how you conduct yourself in the rest of the examination.
Examiners may ask you to present parts of your work orally. They often do this to check that a thesis is a student’s own work and to gauge his or her understanding of it. Come prepared to talk through – and possibly also sketch out – the major ‘route maps’ through your work. This may mean repeating what is already written.
You may also like to prepare some questions for the examiners, although whether or not you use them should be a matter of judgement at the time. You will certainly want to impress with the quality of your thinking, but it would be unwise to raise issues which could seem peripheral or to which examiners might not be able to respond readily. Suitable questions might concern links the examiners may know of with recent related work elsewhere, or advice on how to go about publishing your work.
You will want to be in good form for the examination. Don’t think that drugs or alcohol or chewing gum will relax the tension. They will not. There is some evidence that they make performance worse, and they will probably lower the examiners’ view of you. A clean handkerchief or box of tissues is good insurance, to wipe sweaty palms and even tears, although any tension should disappear rapidly once discussion gets under way.

Conducting yourself in the oral/viva
Although it is understandable that you may be nervous at the prospect of the oral examination, most students find that they enjoy the experience of discussing their work with able and informed individuals. Remember, you are the world’s expert on your work, and your supervisor and the resources of your department should have provided you with sound support throughout your period as a research student. If you are not considered ready to be examined, you should have been told – and if you are considered ready, everything should go smoothly.
There are, however, a few guidelines on conducting yourself:
Take a pen and paper into the examination, along with your thesis.
Act with composure. Say good morning or good afternoon when you enter the room, but do not speak again until you are spoken to, or until the discussion reaches the stage of exhilarated debate. The examiners will want you to be pleasant but they will not be impressed by gregariousness.
Sit squarely on the chair, not poised on the edge. If there is anything about the room arrangement that disturbs you, ask politely for it to be changed.
Show that you are listening attentively to the examiners’ questions. They will expect you to argue, but try to do so without emotion, on the basis of evidence and keeping personalities out of it, showing that you take others’ points of view seriously, even if you do not agree with them. If you are in doubt about what examiners mean or whether you have answered a question in the way they are expecting, ask for clarification. Don’t defend every point; be prepared to concede some, but not too many.
Don’t hesitate to jot points down on paper if this helps you think.


"The Research Student's Guide to Success"




How to decide between qualitative and quantitative research methods
This page introduces the distinctions between qualitative and quantitative research. It aims to help MPhil and PhD students make better-informed decisions about their choice of research methods and techniques, and then to argue more effectively for the validity of their research outcomes.
The nature of 'truth': research paradigms and frameworks
Research should be about discovering 'truth' - but what exactly is 'truth'? It often depends on how someone is looking at things. It is therefore important as a researcher to understand how you are looking at your research and to be able to explain this to everyone else who needs to know about your research.
Common idioms which illustrate how there are (at least) two sides to most viewpoints
One person's junk is another person's treasure.
One person's terrorist is another person's freedom fighter.
One person's meat is another person's poison.
One person's junk is another person's antique.
One person's vice is another person's virtue.
One person's security is another person's prison.
One person's blessing is another person's curse.
Quite generally, a way of looking at the world is known as a 'paradigm'. A 'research paradigm' is a 'school of thought', or framework for thinking, about how research ought to be conducted to ascertain 'truth'. Different writers tend to use different terminologies when discussing research paradigms, depending on where they are coming from. For practical purposes, though, the various paradigms can normally be simplified into just two:
The 'traditional' research paradigm which is essentially quantitative
The 'interpretivist' research paradigm* which is essentially qualitative
This distinction will serve for starters, but be aware that there are any number of different research paradigms in the literature, and that there is no agreement among academics on how many there are or on the finer distinctions between them.
*The term 'interpretivist' research paradigm is due to Denzin and Lincoln (1994), Handbook of Qualitative Research. Beverly Hills, CA: Sage. p. 536.

Quantitative research and the traditional research paradigm
The traditional research paradigm relies on numerical (i.e. quantitative) data and mathematical or statistical treatment of that data. The 'truth' that is uncovered is thus grounded in mathematical logic. The traditional research paradigm lends itself to highly valid and highly reliable research. So why do researchers ever use anything else? The reason is that the traditional research paradigm can only be used where the variables that affect the work can be identified, isolated and relatively precisely measured – and possibly, but not necessarily, also manipulated. This is how research in the natural sciences normally operates. Researchers who can work in this paradigm are fortunate because high reliability and validity are held in great esteem. The proponents of the paradigm tend to take its advantages for granted, and theses grounded in it generally take the high reliability and validity as self-evident.
Living beings, however, are affected by numerous interacting variables, such as tiredness, hunger and stress. These variables cannot normally be isolated from one another or measured, and it is often impractical, and frequently unethical, to hold some constant while manipulating others. Nevertheless the traditional research paradigm can still lend itself to research touching on human and other animate behaviour if the data is numerical and the sample is sufficiently large for the effects of individual vagaries effectively to cancel one another out. An example could be the performance of school leavers in national examinations across a country over a period of years. Another could be an investigation into the yields of a hybrid crop using large fields of control and experimental plants.
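The idea that individual vagaries cancel out in a sufficiently large sample can be illustrated with a short simulation (a sketch for illustration only; the function names and numbers here are invented, not part of the guide). Each 'measurement' is a true value plus random individual variation, and averaging over a larger sample brings the mean closer to the true value:

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def noisy_measurement(true_value=100.0, individual_spread=15.0):
    """One observation of a living subject: the true value plus
    individual variation (tiredness, hunger, stress, etc.)."""
    return true_value + random.gauss(0, individual_spread)

def sample_mean(n):
    """Average of n independent observations."""
    return sum(noisy_measurement() for _ in range(n)) / n

# With a small sample, individual variation dominates the result;
# with a large sample, the variation largely cancels out and the
# mean settles close to the true value of 100.
for n in (5, 100, 10000):
    print(n, round(sample_mean(n), 2))
```

This is the statistical reasoning behind the examples above: national examination results or large field trials involve enough individuals that the averaged 'truth' can be treated mathematically, even though no single subject's variables were controlled.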
Research set in this traditional research paradigm can answer questions about what is happening and the statistical chances of something happening in the future, but - and this is a big but - it cannot directly answer questions about why something is happening or may happen, nor about the existence of anything else that may be relevant, although answers to such questions may be provided by an established theory within which the research fits.

Qualitative research and the 'interpretivist' research paradigm
So the traditional research paradigm is generally not appropriate for research involving small samples of living beings. Then, the variables which stem from individual vagaries and subjectivity do not cancel one another out; neither can variables be readily identified or measured, let alone isolated and held constant while others are varied. Even with a large sample there are sometimes ethical or pragmatic reasons why variables cannot be held constant or manipulated experimentally.
So a different approach is needed and the research has to be set in the interpretivist research paradigm. What this involves is more like in-depth investigations to establish a verdict in a court of law than experiments in a laboratory. The evidence can be circumstantial and even where there are eye-witness accounts, doubt can always be cast on the veracity or reliability of the observers. A verdict must be reached on what is reasonable, i.e. the weight of evidence one way or the other and on the power of the argument. Data gathered within the interpretivist research paradigm is primarily descriptive, although it may be quantitative, as for example in sizes of living areas, coded questionnaires or documentary analysis. The emphasis is on exploration and insight rather than experiment and the mathematical treatment of data.
Research set in the interpretivist research paradigm can address questions about how and why something is happening. It can also address questions about what is happening in a wider context and what is likely to happen in the future - but it can seldom do so with statistical confidence, because the 'truth' is not grounded in mathematical logic. The 'truth' has to be a conclusion in the mind of a reader (or listener), based on the researcher's power of argument. So different recipients of the research may come to understand different 'truths', just as jurors may in a court of law. It is therefore important for those who use the interpretivist research paradigm to present their work as convincingly as possible. If you are working in this paradigm, your supervisor will advise you further.
Research students who use the interpretivist research paradigm normally have to do a considerable amount of justification. In contrast, those who use the traditional research paradigm often never even mention it.
Terminology
Alternative terms for research paradigms which are broadly similar to the traditional research paradigm are: quantitative, scientific, experimental, hard, reductionist, prescriptive, psychometric – and there are inevitably others. Alternative terms for research paradigms which are broadly similar to the interpretivist one are: qualitative, soft, non-traditional, holistic, descriptive, phenomenological, anthropological, naturalistic, illuminative – and again there are others. It must be emphasised that the similarities are in broad terms only. Many academics would argue fiercely about the significances of the differences.

Where next?
You may feel that this page leaves you, as a research student, with a sense of frustration that it does not say more. However the 'more' that individuals seem to want always turns out to be intimately associated with the requirements of their own particular field of study or programme of research. That is where the help of your supervisor is invaluable. Fortunately there is no shortage of books on research design, research methods and research techniques appropriate for particular fields of study, and you can readily find out what they are and study a selection. Then, under the guidance of others in your field of study and, in particular, your supervisor, you should be able to choose meaningfully how to progress your own research and argue for your conclusions.