How to offer students feedback
In This Episode.
Or not. Steve and Dave tackle the complex issue of how to respond to student work effectively. Spoiler alert: It’s somewhere between a pat on the back and psychiatric analysis.
Voiceover: Welcome to the Critical Thinking Initiative podcast, where we bring you research-driven solutions to critical thinking education. Why? Because as Bertrand Russell said, most people would sooner die than think. In fact, they do so. And now your hosts, Steve Pearlman and Dave Carillo.
Steve Pearlman: Welcome back to the Critical Thinking Initiative podcast. This is Steve.
Dave Carillo: I’m Dave.
Steve Pearlman: Before we get into our actual podcast, we want to make an exciting announcement. We have a new podcast that has just launched, and it’s called Smarter. Similar to this one, it has a focus on critical thinking and cognitive activity. But unlike this one, the Smarter podcast is geared toward everyone, not just people in education, and not just about educational issues. It’s a short podcast that offers tips for everybody to be smarter, think more critically, and understand how their brains work in their daily lives.
Dave Carillo: Yeah, what we realized is that even though we’re proud of the work we’re doing on the Critical Thinking Initiative podcast for educators, not everybody is an educator. And even if you are, you have to leave school at some point and go out into the real world. And as such, we just wanted to cover that base for you as well.
Steve Pearlman: Part of what I like about what you’re saying, Dave, is just reminding educators that they do, in fact, leave school at some point. I think that’s an important
Dave Carillo: Reminder for everyone, right? For the love of God.
Steve Pearlman: If you think you never leave, look around a little, because you might find that you’re actually outside of the office, right?
Dave Carillo: You may have already left and not even know it. You just don’t even know. Exactly. Those are birds, and those things that the birds are in are called trees.
Steve Pearlman: Smarter will still be rooted in research, because we think everything should always be rooted in research, but we decided to also be a little more playful with it. You’ll be able to hear us riff a little bit more and have a little more fun. There’s always a movie reference that anchors the topic of the episode. So we not only hope you’ll give it a try yourself; we really hope you’ll mention it to all of your friends and colleagues and everyone you know, and just encourage them to give the podcast a try. And of course, as always, like and share it, and like and share this one, and mention it to your friends. The more likes we can get on iTunes and the other platforms, the more it helps us promote the podcast, because it really makes a difference in terms of how quickly it comes up in searches and so on. So if you haven’t done that yet and you can take the 30 seconds to do it, we’d really, really appreciate it.
Dave Carillo: And at the very least, Smarter, the podcast, is shorter than the Critical Thinking Initiative podcast,
Steve Pearlman: So it’s called Smarter: S-M-A-R-T-E-R.
Dave Carillo: E-R, which makes it more smart.
Steve Pearlman: Well, this is how good the podcast is: we felt it wasn’t good enough for us to just call it Smart. That didn’t rise to the level of achievement and quality and usefulness that we felt was embedded in the podcast. We had to call it Smarter.
Dave Carillo: And the second thing is that the sound quality on this particular Critical Thinking Initiative podcast on feedback is not as great as we wanted. Yeah, we just
Steve Pearlman: Had a small audio problem, and we even got it professionally filtered, but all the glitches aren’t out. We think it’s certainly good enough for your listening pleasure, but we still apologize for that, and the next one will be back on track. So without further ado, let’s get back
Steve Pearlman: Into the podcast for today. So we’re going to talk about how to give feedback to students on their writing. And look, here’s the nutshell of this whole topic: as far as we’re concerned, this thing is a mess. There are just so many confounding variables involved, and much more research needs to be done in some very particular directions. What prompted us to get into this today is not only that it’s something that’s been on our docket for a while; it also has to do with a recent article in the Chronicle of Higher Education entitled “How to Give Your Students Better Feedback With Technology.”
Dave Carillo: And this, I believe, is part of a Chronicle series of instructions and guidelines that they offer for all sorts of classroom issues. This one’s by Holly Fiock and Heather Garcia. We don’t want to spend too much time on it, but we do want to at least give you the basic premise so that you get a sense of not only where we’re trying to come at this subject from, but also at least some of the variables going into it. First and foremost, they frame this as, quote, a guide on how to use technology to better evaluate and comment on student work. And I think that’s the first place where we can at least start to talk about this feedback conundrum here,
Steve Pearlman: At least from what we’ve seen over the years in terms of our research on this, there’s no conclusive evidence that technology definitively improves feedback. There’s some research that says it does and some that says it doesn’t; it’s just a very mixed bag. We’ll explain why as we go forward. They’re not necessarily wrong to suggest that it could be advantageous; it’s just that we really want to pull away from the certainty that they’re offering.
Dave Carillo: And if you want to hear a little bit more about that, we have a few key podcasts on technology. In one we talk specifically about the technology of PowerPoint, and in the other, I think, about video games and whether video games can help students to think more critically. But Steve, I’m glad you brought that up, because certainty is a good way of describing it. The other thing I was thinking is that there are some assumptions made here that they don’t get to, and we might not necessarily get to in terms of this article. But with the idea of using technology to better evaluate and comment on student work, we have to ask ourselves: to what extent are we going to assume that the feedback, or any feedback, is going to better evaluate student work, or that evaluation is happening at all? And then there’s the other key moment from this article. Again, this goes back to what Steve is saying, but I think it’s important to mention. Basically, the authors go on to say, quote, to put our cards on the table up front, we are strong advocates of video and audio feedback, and that’s what you will see most emphasized in this guide. Whatever your reservations about audio and video, we would urge every faculty member to give it a try.
Dave Carillo: They go on to say, quote, written feedback is so easy to misconstrue; students often read it as harsher than you intend. By providing feedback with your voice, however, your students will be able to listen to your tone and understand that you are being encouraging and are directing their learning. And so again, it goes back to what you said before, which is that there’s not that much research that necessarily suggests that any kind of technology is going to strengthen student learning or engagement. And whether written feedback is easy to misconstrue is potentially an issue, but not necessarily, depending on the kind of feedback you’re trying to give, and then also the framework for your class and the evaluation criteria and so on. But we do at least recognize that there are moments where technology might be necessary. I mean, if you’re teaching an online class, you can’t avoid it. If you have a class of, like, two hundred and fifty students, you might necessarily need to use something along these lines. But this article suggests that there’s this relationship between the feedback that you’re giving and the technology that you’re using to deliver it, and there’s this sense of certainty about what’s going to come out of that relationship.
Steve Pearlman: They’re trying to move in a positive direction, and video feedback and audio feedback can be perfectly valuable tools. There is some reason to think that students appreciate those media more. We’re in a world of selfies, and we’re all comfortable with video and live audio now. And also, there’s a personal level to that, and that personal level, the relationship with the instructor, has been correlated with students wanting to pay attention to feedback a little bit more. So there’s an arguable rationale, and evidence, for the value of that feedback. The point is simply that there’s a lot more to it than that, and we want to get into what else is involved, because we’d like to try to offer a path through what is a very murky landscape about what works with respect to student feedback. Most of what we’ll reference is in fact about written feedback, which is where most of the research still is, and what most people are still doing. But it’s actually one of the harder areas in terms of being able to offer what we would consider highly confident advice. Still, we’ll try to take you through some ideas and some of the research to give you a sense of some do’s and don’ts that could certainly help everyone improve their ability to give feedback to students. On some level, we want to start by understanding that there is an acknowledgement among researchers, who have recently started to examine what kind of feedback actually makes a difference in terms of the quality of the product that students deliver after that feedback. That would be formative feedback in the process of a paper, or on the first draft of a paper, or feedback on assignment
Steve Pearlman: One trying to influence students’ behavior and the quality of their work on assignments number two and three and four and so forth. People who are studying this now are acknowledging that there is a dearth of research on what kind of feedback actually makes a real difference, and we will get into some of the limited research that’s coming out on that in a little while. But most of the research that’s still valuable here really focuses on students’ attitudes toward feedback: their disposition toward using feedback, why they may or may not engage with the feedback, and what students at least feel is more valuable feedback. Whether the feedback they feel is more valuable actually does produce better results is a much murkier question. But we do have more clarity with respect to what students believe is useful feedback and how they react to feedback as they encounter it. The biggest takeaway from most of the literature on this is a simple one: developed, constructive feedback that students deem positive is more valuable than what students perceive as negative or undeveloped feedback. That is not surprising. Pointing to positive aspects of the student’s work doesn’t mean that you can’t also be critical. But what students report is that when the feedback is very negative, a “your writing really needs a lot of work” kind of comment, it turns them off and makes them disinclined to really take that advice. It’s much more productive to find a way to suggest to students that although there’s an issue within their work, or there’s something that’s challenging, there’s a way to go about addressing it, and to demonstrate that way and assure them that you believe they have the ability, or that if they work hard, they’ll be able to make that change.
Steve Pearlman: So students frequently report on the importance of the positive nature of feedback. However, and this is where it gets murky really fast, there’s a big variable here with respect to the emotional maturity of the students and with respect to their confidence in their writing, and those two are not the same things. Students who are more emotionally mature and students who are more confident in their ability as writers are more inclined to use all feedback, and they’re certainly more resilient when the feedback becomes more critical. Students who have low confidence in their ability as writers, when they receive feedback, even if it’s somewhat positive, but certainly if it’s negative, are turned off entirely from using that feedback moving forward. It can decrease their effort on future work, because it reinforces this idea that they’re not good writers and are never going to be able to do it. Essentially, this is out of Ursula Wingate’s article, “The impact of formative feedback on the development of academic writing.” One of the things she found in her study, when students were given feedback and she compared how it was received by low-achieving students versus high-achieving students, was that low-achieving students, when interviewed later, could hardly even remember some of the feedback they got, relative to the high-achieving students. And that gives you a sense of how that feedback is being absorbed and the kind of impact it’s having on them.
Steve Pearlman: At the same time, we have some evidence that students will say it’s OK for feedback to be critical. But what we actually find is that when they receive it, that theory changes really fast for those students who are low performing, and they don’t really pay attention to it. So they might say, yeah, it’s OK, we understand that it has to be critical. But those who are especially low performing, or low in emotional maturity, certainly back off it very quickly. So let’s set up that initial complication. Students want more positive feedback. They want it to be constructive, and they want it to be developed, meaning longer and more conversational rather than just snippets, two-word responses like “well done” or “nice job” or “smart point.” They want something that’s a little bit more of a dialogue, a little bit more explanatory. Which goes back to something, by the way, that Dave and I talked about in an earlier podcast, which is trying not to use comments in the margins as much, especially if they’re short, snippety answers, because they just don’t have much of an effect on students overall. So again, on the one hand, there’s this desire from students for more positive feedback, but it’s complicated by their emotional maturity and their confidence in their ability as writers. That’s the first layer of complication. But it does start to reveal for us the importance of trying to keep feedback positive, at least in terms of student receptivity.
Dave Carillo: So I hear right off the bat a couple of fairly concrete and easy-to-name variables: students’ emotional maturity and their confidence as writers. And then there’s also what they expect from their educational experience, what they expect to learn, their assumptions. If they have been experiencing one type of dynamic, they might expect certain things when they get into a class where the feedback is challenging them.
Steve Pearlman: That’s a good point, because one of the other confounding variables here, and this is why the Chronicle article, though perhaps advocating for a good thing overall, doesn’t tap into the full depth of the picture, is that there’s other research showing that students’ expectations for their grades are a mitigating factor in how much they’re going to pay attention to the feedback. Students who get a grade that’s below what they expected are less inclined to pay attention to the feedback than students who get a grade that’s equal to or above the grade they expected, especially if they’re lower-performing students to begin with. The essential factor there is that the grade is effectively a negative comment. And even though there might be language around the grade, and the research shows there should always be language around the grade, students do not like it when there is none and really expect to see it, the grade is nevertheless still a commentary on the student as a writer, and they’re having the same kind of reaction to it as to other kinds of feedback. So merely by getting the lower grade, they are less inclined to pay attention to the feedback, especially if they’re lower-performing students.
Dave Carillo: I have an article here by Rawlusyk, out of Red Deer College in Alberta, Canada, entitled “Assessment in Higher Education and Student Learning,” which aims to, quote, examine the occurrence and diversity of assessment practices in higher education and their relationship to student learning. On the face of it, that might not seem completely relevant to a feedback discussion, but there are some good moments in here that speak to this idea that feedback doesn’t exist in a one-to-one ratio where feedback is given and students automatically learn. One of the ways the author attempts to gauge the extent to which certain assessments are being carried out in classrooms is by describing what’s known as a learning-oriented assessment, which, he says, is a concept that represents assessment for learning. And one of the key elements of learning-oriented assessment is that it is authentic: it promotes student learning for the present and the future. Later in the article, he goes on to substantiate that a little more. He quotes Sambell, saying that scholars suggest that authentic learning activities are meaningful, relevant, and have value for lifelong learning. And that’s the first moment I really want to key in on here. Students’ expectations in terms of either grades or feedback are often molded by what they’ve experienced in the past. And what this research says is that the kinds of assessments going on in classrooms aren’t necessarily authentic. The article goes on to say, and we have mentioned this before, quote, results indicate that educators still rely on testing as a main form of assessment, and testing is really only about recalling information and getting a grade.
And if a student goes through class after class of that kind of learning model, that kind of educational dynamic as it were, then if and when they get to a class that asks for something more authentic, something more challenging in terms of cognitive and intellectual work, something that requires feedback, they’re going to respond to that feedback differently if it’s remarkably different from the test-and-grade dynamic they’re used to.
Steve Pearlman: I’m glad you’re bringing this up, because it plays back to some other research that I don’t have in front of me right now about alternative forms of assessment and alternative forms of learning, where students are more involved in the assessment process and in constructing it, in certain regards at least. And what that evidence shows pretty clearly is that students take that more seriously. They get more involved in it. They feel that it is more authentic. In fact, I think I’m referencing back to maybe Falchikov in the late nineties on some of this. But what I appreciate a lot about your point is, let’s put it candidly: if students don’t really give a crap about what they’re learning, if they don’t give a crap about what they’re writing about to begin with, if they don’t give a crap about their educational experience, then yeah, they still might pay attention to feedback to the extent that they know they need or want a better grade, but they’re not really going to get invested in it. And the real first trick that you’re pointing to, to get students invested in the feedback they receive, is to think about the authenticity of the learning and the assessment to begin with, right? That’s critical for this discussion.
Dave Carillo: No, absolutely. And that’s the problem with just talking about, well, you have different options for delivering feedback: it’s not just about how you’re delivering the feedback. It’s the content, it’s the purpose of the feedback, it’s the feedback in relation to the assessment, it’s the feedback in relation to the student’s expectations. To go back to the article, the study showed, quote, a limited application of various types of authentic tasks, so those kinds of things that were geared less toward testing of a finite amount of information and more toward building on cognitive or intellectual tasks. And generally, those kinds of authentic tasks are the harder things for students to do, especially if they only have experience cramming for tests, etc. The tasks that did show up that were deemed authentic were things like written papers, where students were encouraged to, quote, research and become engaged with the information that they should learn. So some of this was going on, but it’s questionable even to what extent these things were authentic; we know a lot of writing assignments just ask for summary. But further, the results indicated that, quote, academics provide feedback on assignments and believe in its value; however, they are not sure that students use or understand the feedback. And that’s one of the big takeaways: even when feedback was being applied to these authentic tasks, they weren’t necessarily sure whether students were using that feedback. Quote, learning occurs only when there is student engagement in the feedback process. And this study did not see too many authentic learning tasks or learning assessments being applied in classes.
Steve Pearlman: I like one of the things you brought up there: there’s a lack of clarity among faculty about whether students use or even understand the feedback. Right. There is some good consensus among those researching this that it’s good to use a rubric, because students view rubrics as positive on the whole, since they do often afford at least some sense of what they’re going to be assessed on. However, and we’ve talked about this before with respect to most rubrics, that’s why we don’t really consider our critical thinking rubric a rubric, but a system, and it comes with a whole text for decoding and explaining it, so that it goes beyond just putting a quick rubric in front of people. Because what we see, and I’m referencing a work here by Felicity Small and Kath Attree called “Undergraduate student responses to feedback: expectations and experiences,” what they write, referencing a few other authors at the same time, is that, quote, research findings suggest that the problem lies in the fact that students may lack the tacit knowledge to understand the comments and are consequently unable to draw out meaningful implications that will help them improve their work. This inability to decode academic statements has been viewed as a reason for students not seeing the same value in feedback as faculty do. So even if we are trying our best in some respects to give feedback that’s clear and precise, that references standards, there’s this problem of whether or not students even understand what those standards are. That’s to your point: unless we involve them in and incorporate them into those standards, which is what we are always advocating with respect to our critical thinking model, so that it’s not just something thrown at the students as an assessment but a process, they’re always going to run into this.
Steve Pearlman: In fact, another study, Higgins, Hartley and Skelton in 2002, reviewed the extent to which students responded to assessment criteria, and only about a third of the students felt that they even understood what the criteria meant. So what are we doing for students in our efforts to give feedback? If they’re not really engaged in co-constructing it or understanding it, then they’re still struggling to understand what the feedback really means. And that’s before, and I want to steal this term from Edd Pitt and Lin Norton, always with attribution, of course, but we’ll totally steal this term, the “emotional backwash”: all those emotional things I talked about earlier that come into play with respect to what they’re doing. So again, we see pretty clearly how murky the waters are here with respect to what’s really effective. What do students really want? They want more developed, more positive feedback. They do like rubrics, but they don’t necessarily even understand them. They’re affected by their emotional maturity. They’re affected by what they expected in terms of a grade. As I said before, students who got a lower grade could take that as a negative if they’re low performing; other students, if they’re higher performing and they get the grade they wanted, or it’s acceptable to them, are then disinclined to look at the feedback as well. So it’s so complicated, and so hard for feedback to be functional, much less powerful, for students if we just use feedback, as you said, as something separate from the learning experience, separate from something in which students are engaged.
Dave Carillo: Case in point: I’m looking at this other article, entitled “The Impact of Computer-Based Feedback on Students’ Written Work,” by El Ebyary and Windeatt. They were looking at a piece of software that I’m not going to name, although it is mentioned, I believe, in the Chronicle piece, that, quote, claims to be able to provide automatic feedback at word, sentence, paragraph, and text level, and they were testing whether it was going to make five hundred and forty-nine Egyptian trainee EFL teachers better writers. Generally, the results were that this platform did seem to encourage students to produce second, revised versions of the essays
Steve Pearlman: That they wrote. If I could, just to clarify: this is only offering feedback that’s sort of on the sentence level, and it’s not offering substantive feedback about content development, about the ideas being developed. Is that accurate?
Dave Carillo: Steve, it’s a great question, and it’s unclear. It’s unclear to me, too, in this way: what this platform claims to be able to measure and provide automated feedback on includes, quote, grammar, mechanics, style, usage, and content and organization. And while it’s unclear, and I don’t want to make any basic assumptions as to what this particular platform does, if it functions along the lines of the overarching direction of writing assignments, what we’re talking about in terms of content is probably less about idea development and more about the presence of things. So, the assignment needs four sources: are there four sources we can track?
Steve Pearlman: I think we can pretty safely agree there’s no A.I. here that’s doing anything to measure the caliber of critical thinking, or whether an idea is interesting or
Dave Carillo: Innovative or anything to that effect, right? And it would be interesting to see how that fleshes out. But one of the reasons I brought this in is that part of what this platform seems to offer is exercises and guidelines on prewriting, and as it turns out, this particular platform did not seem to have an effect on that aspect of the writing process; it didn’t really seem to affect it whatsoever. Moreover, the big finding of this study: although the platform was effective in encouraging the students to produce second, revised versions of the essays that they wrote, and the resubmissions showed evidence that the students had taken notice of the computer-based feedback, as the number of errors identified in the categories used by this platform generally showed a decrease in the second drafts of the essays. That’s good, right? Fewer errors, Dave? Well, that’s why I’m here, Steve. The study goes on to say, and kudos to them for pointing it out, quote, it would be encouraging to feel that students were not only taking notice of the feedback but learning from it too. However, more detailed examination of the results suggested that some students were sometimes simply omitting at least some of the errors identified, rather than learning from the feedback and producing corrected versions.
Steve Pearlman: So here’s a run-on sentence flagged as an error, and so I removed the run-on. Right, exactly. And therefore I no longer have the run-on sentence in my paper, correct? Problem solved. Run-on sentence.
Dave Carillo: Gone, but not learned from, right? They talk about this in terms of this idea of avoidance: the feedback pointed out what was wrong, but they avoided learning anything from it. They used the feedback to omit the errors to produce a, quote, brighter paper. And that’s very much like a test-taking mentality in a lot of ways. So what this software, and a lot of the technology we see, seems to be really good at is pointing out grammatical errors, but students don’t seem to be learning from them. They just seem to be removing those errors to produce a cleaner draft, not a deeper, more substantive, more complex paper.
Steve Pearlman: And what a great segue into what research has shown about what feedback actually produces stronger work. I’m referencing two studies here. One is the Wingate one I already mentioned; the other is “Writing helpful feedback: the influence of feedback type on students’ perceptions and writing performance,” by McGrath, Taylor, and Pychyl, from 2011. What McGrath, Taylor, and Pychyl did was take one hundred students in an intro to psych class over the course of a couple of assignments. They all got generally positive feedback, but it was either what the researchers considered developed feedback or undeveloped feedback. Developed feedback is more narrative: more informal, more conversational, more expansive. Undeveloped feedback would be just a couple of words here and there. And then they studied not only the students’ perceptions of the feedback, but also the students’ performance relative to the feedback. The way it worked was: half the students got the developed feedback on the first assignment, half got undeveloped feedback on the first assignment, and then they switched, so eventually all students got a sequence of developed and undeveloped feedback. And that’s going to be an important factor in this.
Steve Pearlman: So what’s the students’ perception? The perception is very clear: developed feedback is better. And, I don’t know if this was mentioned specifically in this study, but forward-looking feedback is always valued by students. “Here’s how you do it better on the next assignment, and yes, you can do it better; here’s how you’re going to go about doing that.” That kind of move, rather than just pointing out what the flaw is, is considered exceptionally important. But here’s the thing: the students who liked the developed feedback better were only the students who got it first. Interesting. The students who got the undeveloped feedback first and then got the developed feedback didn’t rate the developed feedback as any better. And I think that’s interesting. The researchers don’t really have a complete explanation for it; it’s something around the idea that their expectations for what assessment, for what feedback, is going to do in that class had shifted. All right.
Dave Carillo: Why the hell are you telling me this now, in October, when you should have said something in August?
Steve Pearlman: Something like that; we’re not sure exactly. But yeah, that’s it. Another thing they found, though, when they looked at improvement: there was some improvement in the students who received the developed feedback first, but it was not statistically significant improvement. They don’t have a full explanation for that either, but I think it comes back a lot to what we were talking about before. It’s not only the emotional backwash we talked about; it’s the need for assessment and feedback to be integrated into the learning experience rather than being something unidirectional that just comes from faculty and so on. So that’s the deeper level of it. Now, as for the Wingate piece: she took sixty-eight first-year students in a language and learning class, and she focused again on what she called, based off Walker, “future gap-altering” feedback, which is “here’s how you do it better the next time” instead of just “here’s what the problem is.” And she broke students into three categories after they received the feedback: those who had improved and therefore paid attention to the feedback; those who didn’t seem to observe the feedback and didn’t improve; and those who were high performing, who didn’t improve, but only because they were doing so well the first time that they did just as well the second time. And what she found was that if students did take the feedback, they in fact did improve. But whether or not they took it had a lot to do with the students’ self-efficacy, right, and with the nature of the feedback, and lower-performing students who lacked confidence didn’t take the feedback.
Steve Pearlman: And that’s a fascinating thing, because what they also find in the research, not in this study, but not surprisingly, is that lower-performing students have typically received more negative feedback over their histories, which has only turned them off all the more to feedback, and to writing, and to everything along the way. So why are they lower performing? It’s a cycle. And so what is the nutshell for us about feedback? How do we sum all this up? It’s pretty murky. We know to try to stay positive. We know to try to go with what we would call forward-looking, or to use the term again, “future gap altering” feedback, which is a term we love, and to try to avoid the emotional backwash. Even when we do that, though, we cannot be assured that feedback directly correlates with, or directly prompts, actual student improvement in their work unless it’s something that becomes more integrated into practice. Which is, again, why you and I are always advocating for integrating an assessment that’s also a process: it’s a writing process, or pedagogy, or the way everything around that educational structure is built. So then it’s not just something where we’re giving them feedback on something that’s unknown or something they can’t decode, but rather something that’s integral to the entire learning experience.
Dave Carillo: Right? You remove one element from the ecosystem, and the whole ecosystem crashes. If you want to build a strong critical thinking environment, or any kind of environment, in your class, it’s not just going to be some one-to-one relationship, feedback equals this. There needs to be consideration of what kinds of assessments you’re using, what kinds of assignments, how much practice they’re getting, your pedagogy, whether you’re giving them enough opportunities to fail forward. It’s not just how you’re delivering feedback, or whether it’s audio or video; it’s not just one thing.
Steve Pearlman: I hope that’s really what listeners take deepest to heart. It can’t be
Dave Carillo: Emphasized enough. Right, right.
Steve Pearlman: But in the interest of trying to offer something to take away about feedback itself, I’m going to reference a 2015 article called “Positive Feedback: A Tool for Quality Education in the Field of Medicine,” which offers what’s actually a pretty good list of things to try to do in just offering feedback. And it’s probably a good exit for us from the podcast here. I’m actually just going to read Norton’s summary of this, which originally comes from that article’s authors. It basically says that for feedback to be effective, it needs to focus on the performance and not on the individual; it should be clear and specific; delivered in non-judgmental language; it should emphasize positive aspects; be descriptive rather than evaluative; and it should suggest measures for improvement. If we’re going to follow some basic ground rules for the simpler task of giving feedback, that’s it right there. I think that’s a good summary of it. Sounds good to me. Please like and share the podcast with anyone you can tell about it. We really appreciate it. And thanks for listening, everyone.
Dave Carillo: Take care.
Voiceover: Got questions about critical thinking? Questions about pedagogies related to critical thinking? Questions about writing, reading, grading, or anything else in the critical thinking realm? Contact Steve and Dave at Info at the Critical Thinking Initiative. Talk with your questions or your feedback about the podcast. Thanks for listening.