Because of the snail's pace at which education has developed, most of us don't really know how to study: we've been told that lectures and reading thousands of pages are the best way to go, and no one really wants to do that all day. That's not the only way to study.

My first year of medical school... You know when you start the year so committed, then eventually you skip lectures once or twice... then you just binge on skipping? Kinda like breaking a diet: "Two weeks in: oh, I'll just have a bite of your mac n' cheese... oh, is that cake? And Doritos? And french fries? Give me all of it, all at once." Anyway, when that happened in first year I started panicking after a while; but after studying with friends who had attended lectures, I found they were almost as clueless as I was. I'm not trying to say lectures are useless... What my fellow first years and I just didn't know was how to use the resources we had, whether we were keen beans or lazy pants, or somewhere in between. I still struggle with study habits, but I've formed some theories since, and I'm going to share them with you.

Reading

While reading should not be the entire basis of your studying, it is the best place to start. It's best to begin with the most basic and detailed sources (e.g. Tortora if it's a topic I'm new to; Kumar and Clark and Davidson's are where I usually start, but there are tons of good ones out there!). I do not feel the need to read every section of a chapter; it's up to the reader's discretion to decide what to read based on objectives. If you do not have time for detailed reading, there are some wonderful simplified books that will give you enough to get through exams (ICT and Crash Course do some great ones!). I start with these if exams are a month or less away. Later, it's good to go through books that provide a summarised overview of things, to make sure you've covered all bases (e.g. Flesh and Bones, the 'Rapid ______' series, the Oxford clinical handbook, etc.).
These are also good if you have one very specific question about a subject.

Video Tutorials

After all that reading, you want the most laid-back studying you can find. This is where Meducation and YouTube become your best friends. (I can post a list of my favourite channels if anyone is interested.) I always email these people to thank them; I know from the nice people who run this website that it takes a tremendous amount of effort and time, and it's us struggling students who have the most to gain. Everyone should use video tutorials. It doesn't matter if you're all Hermione with your books; every single person can benefit from them, especially for OSCEs, where no book can fully portray what you're supposed to do/see/hear during examinations.

Some YouTube channels I like:

https://www.youtube.com/user/TheAnatomyZone
https://www.youtube.com/user/ECGZone
https://www.youtube.com/user/MEDCRAMvideos
https://www.youtube.com/user/awolfnp
https://www.youtube.com/user/harpinmartin
https://www.youtube.com/user/RadiologyChannel

Lectures

We're all thinking it: lectures can be boring. Especially when the speaker has text-vomited all over their slides (seriously, if I can't read it from the back of the lecture hall, there's too much!). It's even worse when they're just reading everything to you, and you're frantically trying to write everything down. Here's the thing: you're not supposed to write everything down. If you can print the slides beforehand, or access them on your laptop/iPad/whatever you use and follow along, do that. You're meant to listen, nodding along thinking ("oh yes, I remember this", or "oooh, that's what happens?", or "I never came across that particular fact, interesting!"). It's also meant to be a chance for you to discuss interesting cases from a doctor's experience. If you're lucky enough to have really interactive lecturers, interact! Don't be shy! Even if you make a fool of yourself, you're more likely to remember what you learned.
If you happen to be in a lecture you're completely unprepared for (basically 70% of the time?), think of it as "throwing everything at a wall and hoping something sticks". Pull up the slides on your smartphone if you have one, and only take notes on interesting or useful things you hear the speaker say. If all else fails, these lectures at least tell you what topics to go home and read about.

Tutorials

My university has gradually increased its use of tutorials, and I couldn't be happier. Make the most of these, because they are a gift. Having the focused attention of a knowledgeable doctor or professor in a small group for a prolonged period of time is hard to lock down during hospital hours. Ask lots of questions and raise topics you're having trouble understanding; this is your protected time.

Discussions

Group study activities are particularly hard to make the most of when everyone in your group is at a different stage of studying, but even so, they can be beneficial. Other people's strengths might be your weaknesses and vice versa, and it's always helpful to hear an explanation of something from someone at your level, because they will neither under- nor overestimate you, and they will not get offended when you tell them "OK, I get it, that's enough." Three of my medic friends and I would meet once a week at one of our houses in the month or two leading up to exams to go through OSCE stations and concepts we didn't understand (food helps too). Besides peer discussions, you should take advantage of discussions with doctors. If a doctor is willing to give you their time, use it well.

Practice Questions

I am a practice question book hoarder. Practice question books not only test and reaffirm your knowledge, which is otherwise hard to do if your exams are cumulative and you have little to no quizzes/tests; they also have concise, useful explanations at the back, and they tell you where the gaps in your studying are.
For my neuro rotation, the doctor giving the first and last lectures gave us a quiz. It was perfect for monitoring our progress, and the same technique can be used in your own studies.

Practical Clinical Experiences

If you freeze up during exams and blank out, and suddenly the only forms of text floating around your brain are Taylor Swift lyrics, these are bound to come to your rescue! "Learn by doing." Take as many histories as you can, do as many clinical exams as you can (in hospital, and on your friends for practice), and see and DO as many clinical procedures as you can; these are all easy and usually enjoyable forms of studying.

Teaching

Have you ever had an experience where one of your peers asks you about something, you give them a fairly good explanation, and then you think to yourself, "Oh wow, I had no idea that was actually in there. High five, me"? If there is ever an opportunity to teach students in the years below you, or fellow students in your year, do it! It will force you to form a simplified yet accurate explanation, and once you've taught others, it is sure to stick in your head. Even if it's something you don't really know about, committing yourself to teaching others forces you to find all the necessary information. Sometimes, if there's a bunch of topics that nobody in my study group wants to do, we each choose one, go home and research it, and explain it to the others to save time. If you're doing this for a presentation, make handouts, diagrams, or anything else that can be used as an aid.
This diagram was created to summarise my dissertation. It shows the numerous methods of immune evasion used by a cancer cell. I did a lot of research on this subject and never found a diagram that brought this many methods together, so I created one.
A bite-size summary of atheroma in 60 seconds. I created this as part of a research project into the application of art and media in medicine. Its purpose is to give a quick and memorable visual summary of the pathological process of atheroma. Created by Cilein Kearns using Blender 2.5 beta. All modelling, lighting, rigging, materials, texturing, animation etc. is 100% my own work; please do not redistribute or host it without my advance permission.
Maybe it’s just me, but I cannot get my head around pharmacology, and antibiotics are certainly doing their best to finish me off! My group at uni decided that this was one area we needed to revise, and the task fell into my hands to provide the material for a revision session. So, the night before the session, I began to panic about how to come up with any useful tips for my group, or indeed anyone at all, to remember anything useful about antibiotics. If only Paracetamoxyfrusebendroneomycin were a real drug, it would make our lives so much easier. Come on Adam Kay and Suman Biswas, get the trials started and create your wonderful super drug. In the meantime, I guess I will just have to keep blissfully singing along to your song. However, that is not going to help me with the task in hand. After a lot of research that even took me beyond the realms of Wikipedia (somewhere I do not often like to go), I found various sources suggesting remembering these Top 10 Rules (and their exceptions):

All cell wall inhibitors are β-lactams (except vancomycin)
All penicillins are water soluble (except nafcillin)
All protein synthesis inhibitors are bacteriostatic (except aminoglycosides)
All cocci are Gram positive (except Neisseria spp.)
All bacilli are Gram negative (except anthrax, tetanus, botulism, diphtheria)
All spirochetes are Gram negative
Tetracyclines and macrolides are used for intracellular bacteria
Pregnant women should not take tetracyclines, aminoglycosides, fluoroquinolones, or sulfonamides
Antibiotics beginning with a 'C' are particularly associated with pseudomembranous colitis
While the penicillins are the most famous for causing allergies, people may also react to cephalosporins

If those work for you, then I guess you can stop reading now... If they don't, I can't promise that I have anything better, but give these other tips that I found a whirl. Alternatively, I have created a page on my own blog called Rang and Dale's Answer to Antibiotics, which summarises their information, so please take a look at that.

Most people will suggest that you can categorise antibiotics in three ways, and it's best to pick one and learn examples of each.

Mode of action: bactericidal (kill) or bacteriostatic (stop multiplying). Two mnemonics to potentially help you remember examples:

We're ECSTaTiC about bacteriostatics: Erythromycin, Clindamycin, Sulphonamides, Tetracyclines, Trimethoprim, Chloramphenicol

Very Finely Proficient At Cell Murder (bactericidal): Vancomycin, Fluoroquinolones, Penicillins, Aminoglycosides, Cephalosporins, Metronidazole

Spectrum of activity: broad-spectrum (Gram positive AND negative) or narrow (Gram positive OR negative)

Mechanism of action: inhibit cell wall synthesis, inhibit nucleic acid synthesis, inhibit protein synthesis, or inhibit cell membrane synthesis

If you have any more weird and wonderful ways to remember antibiotics, let me know and I will add them! As always, thank you for reading.
Mrs Malaika Smith
In a recent article in the BMJ, the author wonders about the reasons behind the rising trend of diagnosing Attention Deficit Hyperactivity Disorder (ADHD), and attempts to infer explanations for it. One possible reason was that the diagnostic criteria, especially those of the DSM, may seem to some to be more inclusive than those of ICD-10. This speculation may explain the rise of the diagnosis wherever the DSM is used officially or has an influence. In a rather constructive way, an alternative to rushing to diagnosis is offered and discussed in some detail. The tentative deduction that the Diagnostic and Statistical Manual (DSM) may be one of the causes of the rising diagnosis, due to raising the age cut-off and widening the inclusion criteria as opposed to the International Classification of Diseases, 10th revision (ICD-10), captured my attention. On reading the ICD-10 diagnostic criteria for research (DCR) and the DSM-5 diagnostic criteria, I found them quite similar in most aspects, even the phraseology that starts with 'often' in many of the criteria; they seem to differ a bit on age. In a way, both classifications attempt to describe the disorder, but it sounds as if someone is trying to explain a person's behaviour to you. This is no substitute for direct clinical learning and observing the behaviour, as if the missing sentence were 'when you see the person, it will be clearer'. El-Islam agrees with the notion that DSM-5 seems to be a bit more inclusive than ICD-10. A colleague of mine, a child psychiatrist doing her MSc thesis on ADHD, told me that DSM-5 seems to be a substantial improvement compared to its predecessor. The criteria, to her, though apparently more inclusive, are more descriptive, with many examples, and she infers that this will pay off in the reliability of the diagnosis. She hopes gene research can yield biological tests for the genes and neurotransmitters implicated in ADHD, e.g. DRD4, DAT, genes 5, 6, 11, etc.
One child psychiatrist regretted the fact that misdiagnosis and under-diagnosis deprive the patient of one of the most effective treatments in psychiatry. It is hoped that the forthcoming diagnostic classification (ICD-11) will address the issue of the diagnosis from a different perspective, or else converge with DSM-5 to provide coherence and a newer generalised standard of practice. The grading of ADHD into mild, moderate, and severe seems to blur the border between disorder and non-disorder; however, this quasi-dimensional approach seems realistic, even though it does not yet translate directly into differences in treatment approach, as it does in the case of mild, moderate, severe, and severe depression with psychotic symptoms, or of intellectual disability. The author states that one counter-argument could be that child psychiatrists are simply better at diagnosing the disorder, and I wonder whether this is a reflection of a genuinely rising trend of the disorder. If ADHD is compared to catatonia, it is generally agreed that catatonia is diagnosed less now; maybe the epidemiology of ADHD is not an artefact, and we may need to look beyond the diagnosis to learn, for example, from environmental factors. Another issue is that there seem to be significant epidemiological differences in the rates of diagnosis across cultures. This raises the question of whether ADHD can be classified as a culture-bound syndrome, whether it is influenced by culture like anorexia nervosa, or whether the differences are simply due to rising awareness of such disorders. Historically, it is difficult to pinpoint the closest predecessor of ADHD. For schizophrenia and mania, older terms may have included insanity; for depression it was probably melancholia; and other terms still reside in contemporary culture, e.g. hypochondriasis, hysteria, paranoia, etc. It would be too simplistic to believe that what is meant by these terms now is exactly what older cultures meant by them, but they are not too far off.
ADHD seems to lack such historical underpinning. Crichton described a disorder he referred to as 'mental restlessness'. Still, who is most often credited with the first description of ADHD, described in his 1902 address to the Royal College of Physicians a number of patients with problems in self-regulation or, as he then termed it, 'moral control' (De Zeeuw et al, 2011). The costs and risks related to over-diagnosis ring a warning bell to enhance scrutiny in the diagnosis, due to the subsequent stigma, costs, and lowered societal expectations; these all seem to stem from the consequences of the methodology of diagnosis. The article touches on an important part of psychiatric diagnosis and classification, which is the subjective nature of disorders. The enormous effort invested in DSM-5 and ICD-10 reflects the best available evidence, but in order to eliminate the subjective nature of illness, a biological test seems to be the only definitive answer, for ADHD in particular and psychiatry in general. Assuming that ADHD is an illness and a homogeneous one, developments in gene studies would seem to hold the key to understanding our current status of diagnosis. The suggested approach of using psychosocial interventions first, and administering treatment only after making sure it is necessary, seems quite reasonable. El-Islam agrees that in ADHD caution prior to giving treatment is a recommended course of action. Another consultant child psychiatrist mentioned that one hour might not be enough to reach a comfortable diagnosis of ADHD; it may take up to 90 minutes to become confident in a clinical diagnosis, in addition to the commonly used rating scales. On the other hand, families and carers may raise the issue of time urgency due to scholastic pressure. In a discussion with Dr Hend Badawy, a colleague child psychiatrist, she stated the following with regard to her own experience and her opinion of the article.
The following is written with her consent. 'ADHD is a clinically based diagnosis that has three core symptoms (inattention, hyperactivity and impulsivity) present in at least two settings. The risk of over-diagnosis in ADHD is potentially problematic; however, it is not confined to ADHD and can be present in other psychiatric diagnoses, as they rely on the subjective experience of the patient and the doctor's interviewing skills. In ADHD in particular, the risk of under-diagnosis is even more problematic. An undiagnosed child who has ADHD may suffer various complications: the moral stigma of 'lack of conduct' due to impulsivity and hyperactivity, poor scholastic achievement, potential alienation, ostracisation and even exclusion by peers due to perceived 'difference', consequent feelings of low self-esteem, and a potentially revengeful attitude on the part of the child. An end result could be the development of substance use disorders, or involvement in dissocial behaviours. The problem of over-diagnosis/under-diagnosis can be helped by an initial step of raising public awareness of ADHD, including campaigns aimed at families, carers, teachers and general practitioners. These campaigns would help people identify children with possible ADHD. The only risk is that child psychiatrists may be met with children whose parents believe they might have the disorder when they do not. In a way, raising awareness can serve like a sensitive laboratory investigation. The next step is that the child psychiatrist should scrutinise the children carefully. The risk of over-diagnosis can be limited via the routine use of checklists, to make sure that practice is standardised and that every child is diagnosed properly according to the diagnostic criteria.
The use of proper scales, such as the Strengths and Difficulties Questionnaire (SDQ) in its two forms (SDQ-P for parents and SDQ-T for teachers), enables the assessor to learn about the behaviour of the child in two different settings. The Conners' scale can help give a better understanding of the magnitude of the problem. Though some may voice criticism because these are mainly filled out by parents and teachers, they are the best tools available at hand. Training in diagnosis, regular auditing, and holding doctors to a standard practice of ensuring that the child and carer have been interviewed thoroughly can help minimise the risk of over-diagnosis. The issue does not stop at diagnosis: follow-up can give a clue as to whether or not the child is improving on the management plan. The effects and side effects of treatments such as methylphenidate should be monitored regularly, including regular measurement of height and weight, and paying attention to nausea, poor appetite, and even the rare side effects which are usually missed. More restrictions and supervision on the medication may have an indirect effect on enhancing the diagnostic assessment. To summarise, public advocacy does not increase the risk of over-diagnosis, just as asking about suicidal ideas does not increase their risk. Awareness may help people learn more, empower them, and lead to more acceptance of the diagnosed child in the community. Even the potential risk of doctors having larger caseloads of children to assess for ADHD may give more exposure to cases and lead to more meaningful epidemiological findings. From my experience, it is quite unlikely to see a marked over-representation of children whose families suspect ADHD without sufficient evidence. ADHD remains a clinical diagnosis, and it is unlikely to be replaced by a biological marker or an imaging test in the near future.
After all, even if objective diagnostic tests do emerge, their value will be doubtful without clinical diagnostic interviewing. It is ironic that the two most effective treatments in psychiatry, methylphenidate and electroconvulsive therapy (ECT), are also the two most controversial. Maybe it is because both were used prior to a full understanding of their mechanism of action; maybe it is because, at the outset, both seem unusual: electricity through the head, and a stimulant for hyperactive children.

Authored by E. Sidhom, H. Badawy

DISCLAIMER: The original post is on the BMJ doc2doc website at http://doc2doc.bmj.com/blogs/clinicalblog/#plckblogpage=BlogPost&plckpostid=Blog%3A15d27772-5908-4452-9411-8eef67833d66Post%3Acb6e5828-8280-4989-9128-d41789ed76ee
BMJ Article: http://www.bmj.com/content/347/bmj.f6172

Bibliography
Badawy, H., personal communication, 2013
El-Islam, M.F., personal communication, 2013
Thomas R., Mitchell G.K., et al., Attention-deficit/hyperactivity disorder: are we helping or harming? British Medical Journal, 2013, 347:f6172
De Zeeuw P., Mandl R.C.W., Hulshoff Pol H.E., et al., Decreased frontostriatal microstructural organization in ADHD. Human Brain Mapping, 2011. DOI: 10.1002/hbm.21335
Diagnostic and Statistical Manual of Mental Disorders, 5th edition, American Psychiatric Association, 2013
Diagnostic and Statistical Manual of Mental Disorders, 4th edition, American Psychiatric Association, 1994
International Classification of Diseases, 10th revision, World Health Organization, 1992
Dr Emad Sidhom
Dr Danielle Reddi is a Pain Research Fellow and Speciality Registrar in Anaesthesia at University College London Hospital, London, NW1 2BU, Dr Natasha Curran is Consultant in Pain and Anaesthesia, UCLH and Dr Robert Stephens is Consultant in Anaesthesia, UCLH.
Introduction to Epidemiology and Public Health

Epidemiology is the study of the patterns, causes, and effects of health and disease in defined populations. It is the cornerstone of public health, and informs policy decisions and evidence-based medicine by identifying risk factors for disease and targets for preventive medicine. Epidemiologists help with study design, collection and statistical analysis of data, and interpretation and dissemination of results (including peer review and occasional systematic review). Epidemiology has helped develop the methodology used in clinical research, public health studies and, to a lesser extent, basic research in the biological sciences.

Public health is "the science and art of preventing disease, prolonging life and promoting health through the organized efforts and informed choices of society, organizations, public and private, communities and individuals" (C.E.A. Winslow, 1920). It is concerned with threats to health based on population health analysis. The population in question can be as small as a handful of people or as large as all the inhabitants of several continents (for instance, in the case of a pandemic). The dimensions of health can encompass "a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity", as defined by the United Nations' World Health Organization. Public health incorporates the interdisciplinary approaches of epidemiology, biostatistics and health services. Environmental health, community health, behavioral health, health economics, public policy, insurance medicine and occupational health (respectively, occupational medicine) are other important subfields.

From Wikipedia, the free encyclopedia: http://en.wikipedia.org/wiki/Epidemiology
What is Problem Based Learning?

During my time at medical school, I enjoyed (at times) a curriculum delivered through the traditional model. As the name suggests, this is the approach experienced by the majority of doctors to date. The traditional model was first implemented by the American Medical College Association and the American Academy of Medicine in 1894 (Barr, 2010) and has been used by the majority of medical schools. It traditionally consists of didactic lectures covering the basic sciences in the initial years, followed by clinical years in which students learn clinical medicine while attending hospital placements.

Is It Better?

A few years after my graduation, I found myself teaching at a university which had fully adopted problem based learning (PBL) in the delivery of its curriculum. PBL is a philosophy of teaching that has increasingly been used in medical education over the past 40 years, rapidly replacing or supplementing the traditional model. It seeks to promote a more integrated and active approach to learning right from the first year, with less reliance on didactic lectures. Having been involved in these two different approaches to medical education, I was interested to explore the evidence for and against each. For the purposes of this blog, I have looked at four specific areas: student attitudes, academic achievement, the academic process of learning, and clinical functioning and skills.

Student Attitudes

Student attitudes to PBL have featured heavily in studies, and many show a clear favourability towards this philosophy of teaching. Blumberg and Eckenfels (1988) found that students in a problem based preclinical curriculum rated it three times higher than those in a traditional group in terms of what they expected to experience, what they would like, and what they actually experienced.
Heale et al (1988) found that physicians in problem-solving sessions rated a Continuing Medical Education short course higher than others who attended traditional lectures and large-group sessions. Vernon and Blake (1993) performed a meta-analysis of 12 studies that looked at attitudes towards PBL and found that PBL was favoured in some way by all of them. PBL appears to be preferred by the majority of students at a range of academic levels. However, Trappler (2006) found that converting a conventional curriculum to a problem based learning model for part of a psychopathology course did not show complete favourability: students preferred conventional lectures given by experts to PBL groups run by mentors who were not experts. They did, however, show a preference for PBL small-group sessions run by experts.

Academic Achievement

Academic achievement is an important factor to assess. Vernon and Blake (1993) compared a number of studies and found that those which could be compared showed a significant trend favouring traditional teaching methods. However, it was felt this might not be reliable: there was significant heterogeneity among the studies that could not be accounted for by chance alone. Interestingly, they found significant geographical variation across the United States, such that New Mexico showed consistently negative effects and Michigan State consistently positive ones. Other studies have shown that the traditional method may produce a slightly better outcome when assessing academic achievement. Schmidt et al (1987) looked at the same progress test taken by students at six different universities in the Netherlands and found that those taught by a traditional approach showed slightly better outcomes. Baca et al (1990) compared the performance of medical students in two separate tracks, one PBL and the other a traditional model.
Baca et al found that PBL students scored slightly lower in the National Board of Medical Examiners (NBME) examinations. Dochy et al (2003) conducted a meta-analysis comparing 43 studies and found that, when considering the effect of PBL on the knowledge of students, the combined effect size is slightly negative.

The academic process of learning

It is important in medical education to enable people to continue lifelong learning, to overcome problems and to fill in knowledge gaps. Coles (1990) and Entwistle (1983) found that PBL students place more emphasis on understanding and meaning than on the rote learning seen more in those taught by a traditional approach. Students on a PBL course also place more focus on using resources such as the library and online sources, whereas those taught in a traditional model place more emphasis on the resources supplied by the faculty itself (Rankin, 1992). It has also been shown that students who learn through a process of problem solving are more likely to use it spontaneously to solve new problems in the future, compared with those taught in a traditional way (Bransford et al, 1989).

Clinical functioning and skills

Clinical competence is an important aspect of medical education and has been measured in studies comparing PBL and traditional methods. The traditional model concentrates the acquisition of clinical competence in the final years of a programme, with hospital placements; in a PBL course it may be integrated earlier on. There are, however, only a few studies that look at clinical competence gained in undergraduate PBL courses. Vernon and Blake (1993) compared some of these studies and found that students obtained better clinical functioning in a PBL setting than with a traditional approach.
This was statistically significant; however, there was still significant heterogeneity among the studies, and for conclusive results 110 studies would have to be compared, rather than the 16 samples they were able to use. They also found that, in contrast to the NBME I giving better results in the traditional model, PBL students score slightly higher in the NBME II and federation licensing examinations, which relate more to clinical functioning than to the basic sciences.

On reflection, this evidence has indicated to me that PBL is a very valuable approach with a number of benefits. The traditional model in which I was taught provided a good level of academic education; however, it may not have supported me as well as a PBL course would have in other areas of medical education, such as academic process, clinical functioning and satisfaction. Current recommendations are for a hybrid of the PBL and traditional approaches to be used (Albanese, 2010), and I would support this view in light of the evidence.

References

Baca, E., Mennin, S. P., Kaufman, A., and Moore-West, M. (1990) A Comparison between a Problem-Based, Community-Orientated Track and a Traditional Track Within One Medical School. In: Innovation in Medical Education: An Evaluation of Its Present Status. New York: Springer Publishing
Barr, D. (2010) Revolution or evolution? Putting the Flexner Report in context. Medical Education; 45: 17-22
Blumberg, P., Eckenfels, E. (1988) A comparison of student satisfaction with their preclinical environment in a traditional and a problem based curriculum. Research in Medical Education: Proceedings of the Twenty-Seventh Annual Conference, pp. 60-65
Bransford, J. D., Franks, J. J., Vye, N. J., & Sherwood, R. D. (1989) New Approaches to Instruction: Because Wisdom Can't Be Told. In S. Vosniadou & A. Ortony (Eds.), Similarity and Analogical Reasoning (pp. 470-497). New York: Cambridge University Press
Coles, C.R. (1990) Evaluating the effects curricula have on student learning: toward a more competent theory for medical education. In: Innovation in Medical Education: An Evaluation of Its Present Status. New York: Springer Publishing, pp. 76-93
Dochy, F., Segers, M., Van den Bossche, P., Gijbels, D. (2003) Effects of problem-based learning: a meta-analysis. Learning and Instruction. 13:5, 533-568
Entwistle, N.J., Ramsden, P. (1983) Understanding Student Learning. London: Croom Helm
Heale, J., Davis, D., Norman, G., Woodward, C., Neufeld, V., Dodd, P. (1988) A randomized controlled trial assessing the impact of problem-based versus didactic teaching methods in CME. Research in Medical Education. 27:72-77
Trappler, B. (2006) Integrated problem-based learning in the neuroscience curriculum - the SUNY Downstate experience. BMC Medical Education 6: 47
Rankin, J.A. (1992) Problem-based medical education: effect on library use. Bulletin of the Medical Library Association 80:36-43
Schmidt, H.G., Dauphinee, W.D., Patel, V.L. (1987) Comparing the effects of problem-based and conventional curricula in an international sample. Journal of Medical Education. 62(4): 305-315
Vernon, D.T., Blake, R.L. (1993) Does problem-based learning work? A meta-analysis of evaluative research. Academic Medicine.
Dr Alastair Buick
over 4 years ago
I recently read a question posted on Meducation around a year ago, the gist of which was "as a medical student, is it too early to start developing commitment to a specialty?" In other words, "even though I haven't graduated yet, should I start building a portfolio of experience and evidence to show that specialty X is what I really want to do?"

MMC revolutionised (for better or worse) the medical career structure, forcing new graduates to decide on a career path much earlier. Many have appreciated the clear delineation of their career pathway. Others have found the 15-month period between leaving university and applying for specialty training too short to make an informed decision (just ask the 10% of FY2s who took a career break last year (i)). Whether right or wrong, there is now less time to rotate round 'SHO' jobs, decide on a career and build a CV capable of winning over an interview panel.

You'll probably find you're in one of two camps at university:

1. Those who are absolutely 110% certain there is nothing they want to do, ever, other than specialty X, or
2. Those who really like specialty X, but also like specialties W, Y and Z and haven't made up their minds.

(A few people find themselves feeling they don't want to be part of any medical career, but that's for another post.)

Students identifying with the first statement are usually concerned that they will not get enough general experience, or that they will be stuck with their decision if they change their minds later on. Those leaning more towards the second may not build as strong a body of evidence for any one specialty; however, it's possible to get involved in activities relevant to a few career options, or in several specialty-specific activities, and subsequently edit the CV for a specific interview.
The key message is that whether you think you have your career mapped out or not, medical school is the perfect time to start collecting evidence that you're interested in a career in a particular specialty: time for extra-curricular activities only becomes scarcer when you have a full-time job complete with long days, nights and weekends. Your experiences at medical school can then be supplemented with taster weeks, teaching and judicious use of your study budget for training days and conferences; bear in mind that all specialties allow at least three years* following FY2 before starting specialty training, which can be used for gaining further experience (but be prepared to justify and defend your actions).

It's also important to consider the manner in which individual specialties require such commitment to be demonstrated. In general terms, the more niche and/or competitive the specialty, the more they will want you to demonstrate that you a) really know what the job entails and b) have made a concerted effort to further your knowledge of the subject. To get a job in neurosurgery, for example, which is not only niche but had a competition ratio of 4.9 in 2013 (ii), you'll need to have gone to courses relevant to neurosurgery and have achievements related to the specialty, such as a neurosurgical elective, attachment or taster experience (iii). Some specialties assess commitment in a variety of situations: this year's radiology interview, for example, had stations on the general overview and future of radiology as a career and a CV-based demonstration of commitment to specialty, as well as a station requiring the interpretation of images. General Practice, on the other hand, which is by its very nature broad, at no point allocates marks specifically for commitment to specialty (or anything else on a CV, for that matter), as selection depends entirely on an exam (SJTs and clinical questions) and skill-based stations at a selection centre.
The person specification* details what is expected and desirable as demonstration of commitment in each specialty. So, how do you actually show you're committed to a specialty? It may be pretty obvious, but try to build a consistent and well-rounded CV. Consider:

• Joining a student committee or group for your specialty. If there isn't one at your university, find some like-minded people and start one
• Asking the firms you work for if you can help with an audit or research, even if data collection doesn't sound very interesting
• Finding a research project (e.g. as part of a related intercalated or higher degree)
• Prizes and examinations relevant to the specialty
• Developing a relevant teaching programme
• Choosing your student selected modules/components, elective and dissertation with your chosen specialty in mind
• Going to teaching or study days aimed at students at the relevant Royal College

Remember it's not just what you've done but also what you've learnt from it; get into the habit of reflecting on what each activity has helped you achieve or understand. This is where many people who appear to have the perfect CV come unstuck: there will always be someone with more presentations, publications and so on, but don't assume that this makes them a dead cert for the job. Whatever you do, make sure you have EVIDENCE that you've done it. Become a bit obsessive. Trust me, you forget a lot, and nothing counts if you can't prove it.

Assessing commitment to specialty aims to highlight who really understands and wants a career in that specialty. From my own recent experience, however, identifying only experiences explicitly related to a specific specialty ignores the transferable and clinically, professionally and personally important skills that would make someone a successful trainee.
I'd be very interested in your views on 'commitment to specialty': for example, do you think the fact that someone has 20 papers in a given specialty means they are necessarily the best for the job? Or are you planning to take a year out post-FY2 to build on your CV and gain more experience? Let us know!

References

*See person specifications for specialty-specific details at http://specialtytraining.hee.nhs.uk/specialty-recruitment/person-specifications-2013/
i. http://www.foundationprogramme.nhs.uk/download.asp?file=F2_career_destination_report_November_2013.pdf
ii. http://specialtytraining.hee.nhs.uk/wp-content/uploads/sites/475/2013/03/Specialty-Training-2013.pdf
iii. http://specialtytraining.hee.nhs.uk/wp-content/uploads/sites/475/2013/03/2014-PS-NEUROSURGERY-ST1-1.02.pdf
Dr Lydia Spurr
almost 4 years ago
Nikhil Wagle, MD, discusses new research and how it is leading the way toward improved treatments for ER+ metastatic breast cancer. Wagle is a physician with t…
almost 2 years ago
As recently as the 1980s, many professionals thought that by the time a baby is born, the structure of its brain is already genetically determined. However, emerging research shows evidence of altered brain functioning as a result of early abuse and neglect; the key to why this occurs appears to lie in how the brain develops.
over 1 year ago
This is a review of 'Research Skills for Medical Students', 1st Edition (Allen, A. K., 2012; Sage: London; ISBN 9780857256010).

Themes – Research Skills, Critical Analysis, Medical Students

Thesis – Research and critical analysis are important skills, as highlighted by Tomorrow's Doctors

Detailed Review

Allen, drawing on many years' experience as a researcher and lecturer at the Institute of Education, Cardiff University, has bridged a gap in the research methodology literature targeted at medical students. Moving away from the somewhat dry and unengaging tone of comparable texts, this book encourages student interaction, empowering the student from start to finish. It is not so much a book as a helpful hand guiding the student through the pitfalls and benefits of research and critical analysis. Part of the Learning Matters Medical Education series, in which each book relates to an outcome of Tomorrow's Doctors, it is written from a lecturer's standpoint, guiding students through making sense of research, judging research quality, carrying out research themselves, writing research articles and getting their writing published. All of these are now imperative skills in what is a very competitive medical employment market. Through clarity, forcefulness, and the correct and direct use of words that may be new to the reader, Allen fully develops the book's objectives with expert narrative skill. Given Allen's interest in global health, it is little wonder that the book's exposition is clear and impartial. Allen consistently refers back to the Tomorrow's Doctors guidelines at the beginning of each chapter, enabling students to link the purpose of that chapter to the grander scheme; this allows Allen to argue the relevance of each chapter to the student before they have disregarded it.
Openly declared as a book aimed at medical students (and Foundation trainees where appropriate), the author's style remains formal but with parent-like undertones. It is written to encapsulate and involve the student reader personally: Allen frequently uses 'you', as if speaking directly to the reader, and includes useful, appropriate activities that engage the reader in the research process, in an easy-to-use, student-friendly format. This book is an excellent guide for all undergraduate health students, not just medical students, and I thank Ann K. Allen for imparting her knowledge in such a useful and interactive way.

This was originally published on medical educator.
over 4 years ago
Although the brain-computer metaphor has served cognitive psychology well, research in cognitive neuroscience has revealed many important differences between brains and computers. Appreciating these differences may be crucial to understanding the mechanisms of neural information processing, and ultimately for the creation of artificial intelligence. Below, I review the most important of these differences (and the consequences to cognitive psychology of failing to recognize them): similar ground is covered in this excellent (though lengthy) lecture.
almost 2 years ago
Global Epidemiology of HIV Infection among Men Who Have Sex with Men | The New York Academy of Sciences
At the NYAS March 2011 Music, Science and Medicine conference, Assistant Professor of Psychiatry and Neuroscience at Mount Sinai School of Medicine and 2010 Blavatnik Award winner, Daniela Schiller, talks to Roger Bingham about how she got into science and reviews research in modifying fear memories.
over 3 years ago