Clinical reasoning in veterinary practice

Claire Vinten BVMedSci BVM BVS PhD MRCVS SFHEA1*

1Royal Veterinary College, Hawkshead Campus, Hatfield, Hertfordshire, AL9 7TA
*Corresponding Author (clairevinten@outlook.com)


Vol 5, Issue 2 (2020)

Published: 21 May 2020

Reviewed by: Peter Denys Cockcroft (MA VetMB MSc MBA DCHP DVM&S DipECBHM MRCVS), Adam Swallow (BVSc AFHEA MRCVS) and Sarah Baillie (BSc BVSc RCVS Cert)

DOI: 10.18849/VE.V5I2.283




ABSTRACT

Clinical reasoning is the process by which veterinary surgeons integrate a multitude of clinical and contextual factors to make decisions about the diagnoses, treatment options and prognoses of their patients. The brain utilises two methods to achieve this: type one and type two reasoning. Type one relies on shortcuts such as pattern recognition and heuristics to deduce answers without involving working memory. Type two uses working memory to deliberately compute logical analyses. Both reasoning methods have their own sources of error, and research has shown that diagnostic accuracy is increased when they are used together during problem-solving. Despite this, it appears unlikely that clinical reasoning ‘skill’ can be improved directly; instead, the most effective way to improve reasoning performance demonstrated experimentally is to increase and rearrange knowledge. As yet, there is no evidence that overall clinical reasoning error can be reduced in practice.

 

INTRODUCTION

Many times a day, practising veterinary surgeons in all domains have to make clinical decisions regarding the appropriate diagnosis, treatment and prognosis of their patients. Not only do these decisions rest on the clinical presentation of the animal before them, but also on a myriad of contextual factors including finances, available equipment, resources and client wishes (Everitt, 2011; and Durning et al., 2012). The amalgamation and rationalisation of the clinical and contextual factors of a case into a decision is known as clinical reasoning. This article aims to explain the reasoning process used by veterinary surgeons and explore possible ways to improve our clinical decision-making skills.

 

DISCUSSION
How do veterinary surgeons perform clinical reasoning?

Clinical decisions are made in the same way that humans make hundreds of other decisions, major and minor, in their daily lives. To understand the mental processes that embody clinical reasoning, we must look towards cognitive psychology, the scientific study of cognitive abilities (Norman, 2005). Researchers in this field have determined that humans use two overarching reasoning methods, known as type one and type two reasoning. These two systems have their own unique advantages and disadvantages, hence our need for both.

The fundamental difference between the two types of reasoning is their utilisation of working memory (Stanovich & Toplak, 2012; and Evans & Stanovich, 2013). Working memory functions like a ‘whiteboard’ for our brain: it temporarily holds information to make it available for processing, and then either discards that information or stores it in long-term memory (Baddeley, 2003; and Van Merrienboer & Sweller, 2010). It is responsible for our reasoning activities and thus behaviours, but has a very limited capacity. You have probably experienced this when trying to remember an address or a phone number; after a certain load is reached, you lose the ability to store or process information. Distractions play a key role in the effectiveness of working memory: they take up space and thus reduce the ability to reason. Such distractions include both competing cognitive tasks and environmental interruptions, both common occurrences within a veterinary consultation (Mamede et al., 2017; and Norman et al., 2017).

Working memory involvement provides the distinction between the two reasoning methods (Stanovich & Toplak, 2012). Let us first consider type two reasoning, which uses our working memory to process information and reach a decision. This system is usually slow, analytical and consciously directed by the clinician (Eva, 2005). It is an important process as it allows us to think abstractly and to separate relevant and non-relevant elements of a problem. There are several specific reasoning modes we can use within this system, including hypothetico-deductive reasoning, where a hypothesis is tested (‘If there is an infection then I should find elevated leucocytes’), and inductive reasoning, where data are used to reach a conclusion (‘There are elevated leucocytes therefore there could be an infection’) (Croskerry, 2009; and Evans & Stanovich, 2013). In addition to these modes, we can apply frameworks to assist with problem-solving – for example, decision analysis, whereby quantitative calculations are used to determine the treatment option most likely to succeed (Cockcroft, 2007). This is also where evidence-based medicine can assist with decision-making, as research findings are incorporated into the analytical process.
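
To make decision analysis concrete, here is a minimal worked sketch of the underlying arithmetic. The treatment options, probabilities and utility values below are invented purely for illustration (they are not drawn from Cockcroft (2007) or any clinical source); the point is the calculation itself: each option's expected utility is the sum, across its possible outcomes, of the probability of that outcome multiplied by its utility, and the option with the highest expected utility is preferred.

```python
# Minimal decision-analysis sketch: pick the option with the highest
# expected utility. All numbers below are hypothetical illustrations,
# not clinical data.

def expected_utility(outcomes):
    """Sum of probability * utility over an option's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Each option lists (probability, utility) pairs; the probabilities sum
# to 1 and utilities run from 0 (worst outcome) to 1 (best outcome).
options = {
    "surgery": [(0.70, 1.0),   # full recovery
                (0.20, 0.5),   # partial recovery
                (0.10, 0.0)],  # treatment failure
    "medical management": [(0.50, 1.0),
                           (0.40, 0.5),
                           (0.10, 0.0)],
}

for name, outcomes in options.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.2f}")

# With these invented numbers, surgery (0.80) beats medical management (0.70).
best = max(options, key=lambda name: expected_utility(options[name]))
print(f"Preferred option: {best}")
```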

In comparison, type one reasoning is defined by non-reliance on working memory (Stanovich & Toplak, 2012). This means that type one strategies must occur without deliberate thought – usually making them quick and effortless (Croskerry, 2009). This allows us to make judgements in our daily life that we cannot afford to spend time computing – for example, reasoning that the animal coming towards you is very large, with big ears and a trunk, and therefore must be an elephant. Rather than having to consciously calculate this, we can instantly recognise the pattern that is the ‘approaching elephant’ and act accordingly. If humans were unable to utilise type one reasoning for this purpose, we would struggle to achieve anything, as we would spend the majority of our time trying to decipher the world around us (Thammasitboon & Cutrer, 2013).

There are, again, several modes that can be implemented within the group of type one reasoning techniques (all selected and used unconsciously). These range from innate heuristics used by our brains to ‘save space’, to associations learned through repeated exposure until the point of automaticity (Croskerry, 2009). Pattern recognition (as per the elephant example above) is commonly used by clinicians (Eva, 2005; and May, 2013). This entails recognising patterns based on either an exemplar (a previous case that matches) or a prototype (an amalgamation of experience of several cases that forms a set of ‘general rules’) (Evans & Stanovich, 2013). The brain reacts to a trigger (often a case presentation) that stimulates retrieval of a relevant exemplar/prototype – for example, an 8-year-old entire bitch presenting with increased thirst and severe lethargy 6 weeks post-oestrus might trigger ‘pyometra’ exemplar retrieval without conscious thought by the clinician. Context has been shown to be very important in pattern recognition, with studies finding that non-contributing aspects of a condition can be stored within an exemplar (Hatala et al., 1999). For veterinary surgeons these might include characteristics such as coat colour or approachability, factors that are often unrelated to the clinical condition.

Another commonly used mode of type one reasoning is the unconscious application of heuristics. These are ‘shortcuts’ that our brains use to save energy and process problems more quickly (Croskerry, 2009a; and Stanovich & Toplak, 2012). For example, when searching for a possible diagnosis, previous salient diagnoses (perhaps recent, or with a strong emotional impact) will be assumed to be more common and thus more likely – this is the availability heuristic (Norman et al., 2017). Stereotypes are another example of a heuristic that can impact clinical reasoning; have assumptions about the likelihood that an owner will pay for a particular treatment, or about the conditions certain breeds of dog are likely to be suffering from, ever impacted on your decision-making? It may seem that the use of these heuristics would be detrimental to the reasoning process; however, that is not necessarily the case, as they have developed from an evolutionary need to reason quickly and, for the most part, accurately (Croskerry, 2009a).
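
As a toy illustration of how the availability heuristic can distort likelihood estimates, the hypothetical sketch below weights remembered cases by their salience. All diagnoses, frequencies and weights are invented; the point is only that one memorable case is enough to triple the perceived likelihood of a rare differential.

```python
# Toy simulation of the availability heuristic: perceived likelihood is
# driven by how easily cases come to mind, so recent or emotionally salient
# cases are over-weighted relative to their true frequency. All numbers
# are invented for illustration.

# True relative frequency of two differentials (hypothetical).
true_frequency = {"flea allergy": 0.90, "cutaneous lymphoma": 0.10}

# Memories of past cases, each tagged with a salience weight: one recent,
# emotionally striking lymphoma case is weighted far more heavily.
memories = (
    [("flea allergy", 1.0)] * 90          # routine, low-salience cases
    + [("cutaneous lymphoma", 1.0)] * 9   # routine presentations
    + [("cutaneous lymphoma", 30.0)]      # one recent, memorable case
)

def availability_estimate(memories):
    """Perceived likelihoods when recall is weighted by salience."""
    total = sum(weight for _, weight in memories)
    perceived = {}
    for diagnosis, weight in memories:
        perceived[diagnosis] = perceived.get(diagnosis, 0) + weight / total
    return perceived

print("true:     ", true_frequency)
print("perceived:", availability_estimate(memories))
# Perceived lymphoma likelihood is ~0.30 against a true frequency of 0.10.
```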

The key properties of the two types of reasoning are summarised in Table 1.

 

Table 1. Types of reasoning: A comparison of features of type one and type two reasoning (Croskerry, 2009b; and Stanovich & Toplak, 2012)

 

Sources of error in reasoning

Both type one and type two reasoning have weaknesses that need to be managed in order to ensure successful use (Norman et al., 2017). We have already discussed the main limitations of type two reasoning, which are inherent in the nature of the process: time and working memory consumption (Eva, 2005). Type one reasoning can be flawed by the shortcuts it relies on, which become known as cognitive biases when they fail (Klein, 2005). The most significant of these within medical fields is premature closure (Graber et al., 2005), which occurs when other possible differentials are not considered once the first diagnosis is reached. Another example is the tendency for clinicians to search out information that confirms their working diagnosis and disregard information that opposes it; this is known as confirmation bias (Eva & Norman, 2005). Type one processes are also highly context-bound and as such can be influenced by emotions, which under certain circumstances could cause reasoning error (Croskerry & Norman, 2008; and Slovic et al., 2004).

Interestingly, research examining the impact of educating clinicians to detect their own cognitive biases has shown no effect on diagnostic accuracy (Shimizu et al., 2013; and Sherbino et al., 2014). There is also no strong evidence to suggest that attempting to activate type two processes and slow the decision-making process down will improve diagnostic accuracy (Norman et al., 2017). In the past, type one reasoning was thought to be inaccurate and thus was discouraged, particularly in students. However, a substantial body of research has now shown that type one is at the very least as accurate as type two reasoning, if not more so (Coderre et al., 2003; and Eva, 2005). The most effective strategy, however, has been shown to be a combination of the two – dual-process reasoning (Ark et al., 2006).

 

Dual-process reasoning

Dual-process reasoning, illustrated in Figure 1, is the use of both type one and type two reasoning to solve a problem. In reality, all reasoning that occurs in non-laboratory conditions is dual-process, as we can neither stop intuitive type one processes from taking over when they are able to, nor invoke them when no patterns or heuristics are triggered (Evans & Stanovich, 2013).

Dual-process reasoning starts when a stimulus (here, the clinical history and presentation of an animal) triggers type one methods to begin. There has been debate in the literature about whether it is possible to also trigger type two reasoning at the outset, but this seems unlikely due to the fast and unconscious nature of type one processes; they will, essentially, get there first (Croskerry, 2009a; and Evans & Stanovich, 2013). It is possible, however, that pattern recognition will not be able to offer a solution to the problem, in which case type two methods take over automatically. This is the point at which the clinician first becomes consciously aware of their thought processes, as they try to solve the problem analytically. As they do this, pattern recognition and heuristics will still be used wherever possible, remembering that these are not controlled by the clinician. In fact, the most the clinician can do is be aware that heuristics have been used and ‘double-check’ the conclusions they suggest. This leads to a fluid system in which clinicians switch between the two systems of reasoning frequently, eventually reaching a conclusion. There may be any combination of periods of logical problem-solving and periods of automaticity (Croskerry et al., 2014).
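
The control flow just described can be sketched schematically. The toy Python below is not a model from the literature, only an illustration of the loop in Figure 1: recognise_pattern stands in for unconscious type one retrieval (reduced here to a simple lookup), analytic_workup for deliberate type two reasoning, and verify for the clinician's conscious double-checking; all three names and their placeholder bodies are invented for this sketch.

```python
# Toy sketch of the dual-process loop described above. Function names and
# placeholder bodies are hypothetical; real type one retrieval is
# unconscious and far richer than a dictionary lookup.

def recognise_pattern(presentation, known_patterns):
    """Type one: fast, automatic retrieval. Returns a diagnosis or None."""
    return known_patterns.get(frozenset(presentation))

def analytic_workup(presentation):
    """Type two: slow, deliberate reasoning (placeholder for a full workup)."""
    return "diagnosis reached by hypothesis testing"

def verify(diagnosis, presentation):
    """Conscious double-check of an intuitively suggested conclusion."""
    return diagnosis  # placeholder: accept unless the evidence conflicts

def dual_process(presentation, known_patterns):
    # Type one fires first; the clinician cannot choose to skip it.
    diagnosis = recognise_pattern(presentation, known_patterns)
    if diagnosis is None:
        # No pattern triggered: type two takes over automatically.
        return analytic_workup(presentation)
    # The most the clinician can do is double-check a type one conclusion.
    return verify(diagnosis, presentation)

patterns = {frozenset({"polydipsia", "lethargy", "recent oestrus"}): "pyometra"}
print(dual_process({"polydipsia", "lethargy", "recent oestrus"}, patterns))
print(dual_process({"coughing", "exercise intolerance"}, patterns))
```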

Dual-process reasoning allows the weaknesses of both types of decision-making to be minimised; type one will prevent cognitive overload and save time, whilst type two will guard against bias and compensate for a lack of experience. However, it is important to be aware that the weaknesses of both methods can still impact decision-making, particularly in the face of a lack of knowledge (Evans & Stanovich, 2013).

 

Figure 1. Dual-process reasoning. The dual-process model of clinical reasoning in a diagnostic situation, adapted from Croskerry (2009a).

 

Improving clinical reasoning

Diagnostic error has been found to occur in 10–15% of medical cases (Graber et al., 2005; and Graber, 2013). It is likely that diagnostic error is a significant problem within veterinary medicine also (Oxtoby et al., 2015), but data are not available to estimate its prevalence. This concerning figure has prompted much investigation into the improvement of clinical reasoning skills in both students and experienced clinicians. However, as previously noted, mechanisms to reduce error that initially seemed promising (removing cognitive biases and adopting analytical processes) have been shown not to impact diagnostic accuracy (Norman et al., 2017).

So how can clinical reasoning be improved? The answer is not particularly surprising: studies have indicated that the presence and arrangement of knowledge appear to have the biggest impact on improving medical reasoning performance (Norman, 1989; Dory et al., 2010; and Norman et al., 2017). This is supported by a phenomenon known as ‘content specificity’, whereby reasoning accuracy depends on knowledge of the case and not on the level of ‘reasoning skill’ of the practitioner; i.e. performing well on one case does not predict performance on a different case (Norman, 2005; and Dory et al., 2010). The implication is that there appears to be no shortcut to effective reasoning that we can employ as clinicians, or teach to students. What is required is extensive, accessible knowledge – not the ‘heard it in a lecture once’ kind (Norman et al., 2017).

The knowledge used for clinical reasoning resides within illness scripts: mental models of specific illnesses that are stored in long-term memory (Charlin et al., 2007; and Schmidt & Rikers, 2007). Veterinary surgeons have many of them, ranging from common disorders such as flea allergy to rare but important diseases such as bovine spongiform encephalopathy. Information including signs, epidemiology and treatment options is stored together in this way. When presented with a new patient problem, the brain will automatically search through the bank of scripts until one that matches the patient’s condition is found. If there is a direct match, type one reasoning can process the script without the need for working memory involvement. If there are discrepancies between reality and the script, type two reasoning will be required to fill in the gaps (Cutrer et al., 2013). Illness scripts exist at varying levels of completion – for instance, a student who has only learnt about a particular disease in a lecture will have a script that is mostly incomplete. A veterinary surgeon who has been practising for many years and has encountered the disease countless times, with a variety of clinical presentations and outcomes, will have a much more robust script.
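
One way to picture an illness script is as a structured record, as in the hypothetical sketch below. The diseases, signs and matching rule are invented for illustration, and real scripts are far richer (and retrieved unconsciously), but the sketch captures the two behaviours described above: a full match can be processed directly, while unexplained findings are the ‘gaps’ that demand type two reasoning.

```python
from dataclasses import dataclass, field

# A deliberately simplified, hypothetical model of an illness script:
# signs, epidemiology and treatment options stored together as one record.
@dataclass
class IllnessScript:
    disease: str
    signs: set = field(default_factory=set)
    epidemiology: str = ""
    treatments: list = field(default_factory=list)

scripts = [
    IllnessScript("pyometra",
                  signs={"polydipsia", "lethargy", "recent oestrus"},
                  epidemiology="middle-aged entire bitches",
                  treatments=["ovariohysterectomy", "medical management"]),
    IllnessScript("diabetes mellitus",
                  signs={"polydipsia", "polyuria", "weight loss"},
                  epidemiology="middle-aged to older dogs and cats",
                  treatments=["insulin therapy", "dietary management"]),
]

def retrieve(presenting_signs, scripts):
    """Return the best-matching script plus any presenting signs it does not
    explain - the 'gaps' that would require type two reasoning to fill."""
    best = max(scripts, key=lambda s: len(s.signs & presenting_signs))
    unexplained = presenting_signs - best.signs
    return best, unexplained

case = {"polydipsia", "lethargy", "recent oestrus", "inappetence"}
script, gaps = retrieve(case, scripts)
print(script.disease, "| unexplained signs:", gaps)
```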

It is therefore logical that veterinary students should focus on obtaining in-depth knowledge of the common conditions that they will be required to treat regularly in practice, including experiencing a range of possible clinical presentations and treatment options (May, 2013). Having the necessary background knowledge allows analytical processes (type two reasoning) to be used, particularly alongside reasoning frameworks. Refining this knowledge through clinical experience leads to the development of more complete illness scripts, which in turn allow the benefits of type one reasoning to be realised.

But what about experienced veterinary surgeons, with knowledge sufficient to have built complex scripts? Several studies have indicated that rearrangement of knowledge may be effective in improving diagnostic accuracy in experienced practitioners (Norman et al., 2017). Research has focused on the use of structured reflection for this purpose: revisiting a conclusion (no matter which initial reasoning type was used) and re-evaluating the evidence leading to it, considering possible alternatives (Mamede et al., 2008). It is thought that the reflective process impacts upon the storage and retrieval of knowledge; however, the mechanism for this is not yet known. Based on this finding, other methods of retrieving and reorganising knowledge may also have a role in improving clinical reasoning through increasing script functionality – for example, the framework for problem-based inductive clinical reasoning developed by Maddison et al. (2015), which provides a logical approach to decision-making in practice.

The eventual goal should be to form complete illness scripts that allow type one reasoning, thus saving time and working memory for other clinical tasks, such as communication and calculations. However, effective type two reasoning is vital to reach this point.

 

Problem solved?

Reducing error in clinical reasoning has been considered important to improve patient safety within medicine and, more recently, veterinary medicine (Oxtoby et al., 2015). However, there is little evidence that errors in the process of clinical reasoning can be significantly reduced (Eva & Norman, 2005; and Norman et al., 2017). Thus, we may need to circumvent the inaccuracy of human reasoning in order to improve patient outcomes.

Evidence-based medicine (EBM) and decision support systems (DSS) provide two possible options for achieving this. The former uses clinical research findings to suggest a course of action; the latter uses databases consisting of qualitative and quantitative information together with decision-making algorithms (Cockcroft & Holmes, 2008). Their ability to influence the daily practice of veterinary surgeons is as yet unconfirmed, and their use is dependent upon sufficient time and resources being available (Vandeweerd et al., 2012). However, further research into the potential for EBM and DSS to improve clinical outcomes in veterinary patients is warranted.

 

CONCLUSION

There are two methods of clinical reasoning used by humans: type one, which does not rely on working memory, and type two, which does. The former is usually fast and unconscious, whereas the latter is usually slow and deliberate. Both methods have been found to be most effective when used in combination, as dual-process reasoning. Whilst we might be tempted to assume that type one reasoning leads to the majority of errors, this is not the case. In fact, the evidence suggests that increasing knowledge is the only way to reliably improve reasoning performance. There may also be a role for knowledge reorganisation in improving reasoning, although this needs further exploration. There is, as yet, no evidence that improving clinical reasoning will lead to reduced diagnostic error in practice.

 

Conflict of Interest

The author declares no conflicts of interest.

 

References

  1. Ark, T.K., Brooks, L.R. & Eva, K.W. (2006). ‘Giving Learners the Best of Both Worlds: Do Clinical Teachers Need to Guard Against Teaching Pattern Recognition to Novices?’, Academic Medicine, 81(4): 405–409. DOI: http://dx.doi.org/10.1097/00001888-200604000-00017
  2. Baddeley, A. (2003). ‘Working memory: Looking back and looking forward’, Nature Reviews Neuroscience, 4(10): 829–839. DOI: http://dx.doi.org/10.1038/nrn1201
  3. Charlin, B., Boshuizen, H.P.A., Custers, E.J. & Feltovich, P.J. (2007). ‘Scripts and clinical reasoning’, Medical Education, 41(12): 1178–84. DOI: http://dx.doi.org/10.1111/j.1365-2923.2007.02924.x
  4. Cockcroft, P. (2007). ‘Clinical reasoning and decision analysis’, Veterinary Clinics of North America: Small Animal Practice, 37(3): 499–520. DOI: http://dx.doi.org/10.1016/j.cvsm.2007.01.011
  5. Cockcroft, P. & Holmes, M. (2008). Handbook of evidence-based veterinary medicine. Wiley-Blackwell.
  6. Coderre, S., Mandin, H., Harasym, P.H. & Fick, G.H. (2003). ‘Diagnostic reasoning strategies and diagnostic success’, Medical Education, 37(8): 695–703. DOI: http://dx.doi.org/10.1046/j.1365-2923.2003.01577.x
  7. Croskerry, P. (2009a). ‘A universal model of diagnostic reasoning’, Academic Medicine, 84(8): 1022–8. DOI: http://dx.doi.org/10.1097/ACM.0b013e3181ace703
  8. Croskerry, P. (2009b). ‘Clinical cognition and diagnostic error: Applications of a dual process model of reasoning’, Advances in Health Sciences Education, 14(Suppl. 1): 27–35. DOI: http://dx.doi.org/10.1007/s10459-009-9182-2
  9. Croskerry, P. & Norman, G. (2008). ‘Overconfidence in clinical decision making’, The American Journal of Medicine, 121(5): S24-9. DOI: http://dx.doi.org/10.1016/j.amjmed.2008.02.001
  10. Croskerry, P., Petrie, D.A., Reilly, J.B. & Tait, G. (2014). ‘Deciding about fast and slow decisions’, Academic Medicine, 89(2): 197–200. DOI: http://dx.doi.org/10.1097/acm.0000000000000121
  11. Cutrer, W., Sullivan, W. & Fleming, A. (2013). ‘Educational strategies for improving clinical reasoning’, Current Problems in Pediatric and Adolescent Health Care, 43(9): 248–57. DOI: http://dx.doi.org/10.1016/j.cppeds.2013.07.005
  12. Dory, V., Gagnon, R. & Charlin, B. (2010). ‘Is case-specificity content-specificity? An analysis of data from extended-matching questions’, Advances in Health Sciences Education, 15(1): 55–63. DOI: http://dx.doi.org/10.1007/s10459-009-9169-z
  13. Durning, S., Artino, A., Boulet, J., Dorrance, K., van der Vleuten, C. & Schuwirth, L. (2012). ‘The impact of selected contextual factors on experts’ clinical reasoning performance (does context impact clinical reasoning performance in experts?)’, Advances in Health Sciences Education, 17(1): 65–79. DOI: http://dx.doi.org/10.1007/s10459-011-9294-3
  14. Eva, K. (2005). ‘What every teacher needs to know about clinical reasoning’, Medical Education, 39(1): 98–106. DOI: http://dx.doi.org/10.1111/j.1365-2929.2004.01972.x
  15. Eva, K. & Norman, G. (2005). ‘Heuristics and biases--a biased perspective on clinical reasoning’, Medical Education, 39(9): 870–2. DOI: http://dx.doi.org/10.1111/j.1365-2929.2005.02258.x
  16. Evans, J.S.B.T. & Stanovich, K.E. (2013). ‘Dual-Process Theories of Higher Cognition: Advancing the Debate’, Perspectives on Psychological Science, 8(3): 223–241. DOI: http://dx.doi.org/10.1177/1745691612460685
  17. Everitt, S. (2011). Clinical Decision Making in Veterinary Practice. University of Nottingham.
  18. Graber, M.L. (2013). ‘The incidence of diagnostic error in medicine’, BMJ Quality and Safety, 22(Suppl. 2): 21–27. DOI: http://dx.doi.org/10.1136/bmjqs-2012-001615
  19. Graber, M.L., Franklin, N. & Gordon, R. (2005). ‘Diagnostic Error in Internal Medicine’, Archives of Internal Medicine, 165(13): 1493. DOI: http://dx.doi.org/10.1001/archinte.165.13.1493
  20. Hatala, R., Norman, G. & Brooks, L. (1999). ‘Influence of a Single Example on Subsequent Electrocardiogram Interpretation’, Teaching and Learning in Medicine, 11(2): 110–117. DOI: http://dx.doi.org/10.1207/S15328015TL110210
  21. Klein, J. (2005). ‘Five pitfalls in decisions about diagnosis and prescribing’, British Medical Journal, 330(7494): 781–783. DOI: http://dx.doi.org/10.1136/bmj.330.7494.781
  22. Maddison, J.E., Volk, H.A. & Church, D.B. (2015). Clinical Reasoning in Small Animal Practice. John Wiley & Sons Ltd.
  23. Mamede, S., Van Gog, T., Schuit, S.C.E., Van den Berge, K., Van Daele, P.L.A., Bueving, H., Van der Zee, T., Van den Broek, W.W., Van Saase, J.L.C.M. & Schmidt, H.G. (2017). ‘Why patients’ disruptive behaviours impair diagnostic reasoning: A randomised experiment’, BMJ Quality and Safety, 26(1): 13–18. DOI: http://dx.doi.org/10.1136/bmjqs-2015-005065
  24. Mamede, S., Schmidt, H. & Penaforte, J. (2008). ‘Effects of reflective practice on the accuracy of medical diagnoses’, Medical Education, 42(5): 468–475. DOI: http://dx.doi.org/10.1111/j.1365-2923.2008.03030.x
  25. May, S. (2013). ‘Clinical Reasoning and Case-Based Decision Making: The Fundamental Challenge to Veterinary Educators’, Journal of Veterinary Medical Education, 40(3): 200–209. DOI: http://dx.doi.org/10.3138/jvme.0113-008R
  26. Van Merrienboer, J. and Sweller, J. (2010). ‘Cognitive load theory in health professional education: design principles and strategies’, Medical Education, 44(1): 85–93. DOI: http://dx.doi.org/10.1111/j.1365-2923.2009.03498.x
  27. Norman, G. (1989). ‘The Development of Expertise in Dermatology’, Archives of Dermatology, 125(8): 1063. DOI: http://dx.doi.org/10.1001/archderm.1989.01670200039005
  28. Norman, G. (2005). ‘Research in clinical reasoning: past history and current trends’, Medical Education, 39(4): 418–427. DOI: http://dx.doi.org/10.1111/j.1365-2929.2005.02127.x
  29. Norman, G.R., Monteiro, S.D., Sherbino, J., Ilgen, J.S., Schmidt, H.G. & Mamede, S. (2017). ‘The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking’, Academic Medicine, 92(1): 23–30. DOI: http://dx.doi.org/10.1097/ACM.0000000000001421
  30. Oxtoby, C., Ferguson, E., White, K. & Mossop, L. (2015). ‘We need to talk about error: Causes and types of error in veterinary practice’, Veterinary Record, 177(17): 438. DOI: http://dx.doi.org/10.1136/vr.103331
  31. Schmidt, H. & Rikers, R. (2007). ‘How expertise develops in medicine: knowledge encapsulation and illness script formation’, Medical Education, 41(12): 1133–1139. DOI: http://dx.doi.org/10.1111/j.1365-2923.2007.02915.x
  32. Sherbino, J., Kulasegaram, K., Howey, E. & Norman, G. (2014). ‘Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: A controlled trial’, Canadian Journal of Emergency Medicine, 16(1): 34–40. DOI: http://dx.doi.org/10.2310/8000.2013.130860
  33. Shimizu, T., Matsumoto, K. & Tokuda, Y. (2013). ‘Effects of the use of differential diagnosis checklist and general de-biasing checklist on diagnostic performance in comparison to intuitive diagnosis’, Medical Teacher, 35(6). DOI: http://dx.doi.org/10.3109/0142159X.2012.742493
  34. Slovic, P., Finucane, M., Peters, E. & MacGregor, D. (2004). ‘Risk as Analysis and Risk as Feelings: Some Thoughts about Affect, Reason, Risk, and Rationality’, Risk Analysis, 24(2): 311–322. DOI: http://dx.doi.org/10.1111/j.0272-4332.2004.00433.x
  35. Stanovich, K.E. & Toplak, M.E. (2012). ‘Defining features versus incidental correlates of Type 1 and Type 2 processing’, Mind and Society, 11(1): 3–13. DOI: http://dx.doi.org/10.1007/s11299-011-0093-6
  36. Thammasitboon, S. & Cutrer, W. (2013). ‘Diagnostic Decision-Making and Strategies to Improve Diagnosis’, Current Problems in Pediatric and Adolescent Health Care, 43(9): 232–241. DOI: http://dx.doi.org/10.1016/j.cppeds.2013.07.003
  37. Vandeweerd, J., Kirschvink, N., Clegg, P., Vandenput, S., Gustin, P. & Saegerman, C. (2012). ‘Is evidence-based medicine so evident in veterinary research and practice? History, obstacles and perspectives’, The Veterinary Journal, 191(1): 28–34. DOI: http://dx.doi.org/10.1016/j.tvjl.2011.04.013
