The TCTI Podcast

Fostering a Growth Mindset in Your Students

PUBLISHED: Aug 16, 2018
CATEGORIES: TCTI

In This Episode

Steve and Dave welcome Dr. Marianne Fallon for a fascinating discussion about the power of a growth mindset for students’ educational performance. Marianne speaks to research on growth vs. fixed mindsets, how even high-performing students can still suffer from a fixed mindset, common mistakes educators make in reinforcing fixed mindsets, and easy measures any educator can use in any class, such as short reflective assignments, to foster “growthy” students!


Podcast Transcript

Voiceover: Welcome to the Critical Thinking Initiative podcast, where we bring you research-driven solutions to critical thinking education. Why? Because, as Bertrand Russell said, most people would sooner die than think; in fact, they do so. And now, your hosts, Steve Pearlman and Dave Carillo.

Steve Pearlman: Everyone, Steve here. Thanks for tuning in. Just a little heads up that we ran into some audio issues on this episode with respect to a new microphone that we will never be using again. We apologize. The issues are minor; just want to give you a heads up not to try to adjust your sound system, because the error is on our side, but we think you'll enjoy the podcast nonetheless. Thanks again for listening. So welcome back to the Critical Thinking Initiative podcast. This is Steve Pearlman.

Dave Carillo: And I’m Dave Carillo.

Steve Pearlman: And today we're talking to you about a reaction we're having to a phenomenon that's emerging in academia, and we're responding at this moment directly to an NPR program by Tovia Smith titled "More States Opting To 'Robo-Grade' Student Essays By Computer." This issue of computerized assessment for writing is certainly emerging a lot now, and there is certainly a lot more to discuss about it, but we're going to locate our discussion today just on this article, because it exposes a lot of the tensions that we want to talk about with respect to this issue. We certainly welcome you to pull up the article for yourself. Again, it's called "More States Opting To 'Robo-Grade' Student Essays By Computer," and it's from June 30th, 2018, on NPR. So support NPR and go read it, and then please support the Critical Thinking Initiative as well. Recommend us to your friends and so on.

Dave Carillo: Ok, so we're going to try to contextualize this article for you. Essentially, this is what's going on: a lot of states are starting to use computers to grade student writing that is submitted for a lot of different high-stakes tests, like placement tests and other tests that are used to measure students' aptitude in a lot of different capacities, including writing. And for a variety of reasons, from how quickly a computer can return results to how ostensibly cheaper it is to use a computer rather than employ hundreds of actual humans, these kinds of programs are now becoming more and more prevalent. This article is essentially trying to frame the discussion, and the discussion it frames for us is that on one side, you have the proponents of this kind of computerized grading saying that over the past twenty-five years, artificial intelligence has become so good in a lot of different realms, right? As examples, they talk about self-driving cars, AI that can detect cancer and carry on conversations, and so on and so forth, such that the AI is now good enough to score student writing. Proponents are really high on the ability of these programs to catch things like spelling errors and grammatical mistakes, but now they're also making the claim that these programs can start to check for other things that are a little bit more comprehensive and substantive, such as staying on topic, the flow of an argument, and so on and so forth.

Dave Carillo: On the other side of this are a group of scholars, and again, this is not an exhaustive article, it gives us the overview, along with plenty of teachers who are dubious about this whole issue, making the argument that grammar is one thing, but you can't ask a computer to grade forms of expression or judge the effectiveness of creativity; any kind of actual intellectual act is beyond the ability of these programs. Some scholars quoted in this article have actually tried to prove that by creating programs that seemingly very easily trick these grading programs into thinking they're reading a really smart essay when they're actually just reading meaningless garbage. And there are some other folks out there who understand that these programs might be here to stay and are starting to tutor students to write the essay that the computer is looking for. So that's the overarching conversation: you have the proponents of this kind of grading saying that the AI is getting better every day; you have English teachers and other educators saying there's no way a computer program is ever going to understand how to grade a student's ability to make meaning; and then you have folks who are either trying to prove that these programs aren't working, or who recognize that these programs are effectively here to stay and are tutoring students to either get through them or around them. And that fairly covers the overarching conversation, yes.

Steve Pearlman: Would this be an appropriate time where I may interject an educated opinion about this, or do we have to hold that off?

Dave Carillo: I was under the opinion that we were never going to have just an opinion. You feel strongly, you’ve got steam coming out of your ears.

Steve Pearlman: Well, to wrap it in a nice little bow: I cannot speak to the future, but at our present level of technology, I would characterize the capacity for an AI to assess the quality of a piece of writing as bullshit. It's never going to... it's not even...

Dave Carillo: Is that your educated opinion? It is? OK.

Steve Pearlman: Although I will say this: some of these programs I've experienced, some of them I have not. But I would think that even if we look at something as simplistic as grammar, which is probably not as intricate and subtle as something like critical thinking, grammar at least has some rules that we could theoretically apply. Take a writer like Julian Barnes, who uses these wonderfully complex, creative uses of punctuation that are legitimate from a grammatical perspective, but certainly not following the most rigid conception, sometimes, of what punctuation should do. And although I haven't tested it on some of these programs, I imagine these programs would reject, or perhaps even throw up in some computerized fashion at, what Barnes is doing, even though he's just a remarkably gifted craftsman.

Dave Carillo: I'm actually glad you brought that up, because, just to let our listeners in on the complex behind-the-scenes goings-on of our podcast here, I have my list of things that we wanted to discuss, and Julian Barnes did not make that list. But I'm loving that, because now that you're talking about Barnes, I'm starting to think of other types of writing and literature that, my guess is, would throw the program for a loop as well. So you were talking about Barnes, but I immediately started to think of John Berryman's The Dream Songs, which are, if I remember correctly, fairly well punctuated but extremely difficult to parse anyhow. And I wonder, again, you're right: even on the subject of grammar, we might find major flaws. But there are a lot of other issues that this brings up, right?

Steve Pearlman: Right. Let's move past the grammatical, because I think, for what most students are doing in terms of their grammatical constructs, that is probably an accessible bar at some point for a computer program.

Dave Carillo: I would agree, although before we move on, I would say again, you're bringing up a really good point with this idea of grammar. We don't have to go into this, but grammar as just a set of rules, as opposed to grammar as a way of making meaning or affecting meaning, is a very fascinating and ongoing discussion as we continue to learn more about how language

Steve Pearlman: Works, right. We can change the grammar of a sentence in multiple ways for it to be correct grammar, but change the meaning of the sentence entirely through grammatical shifts. Yes, and there's no way that the AI at this point can pick up on...

Dave Carillo: Although, I mean, let's go with that, though, because one of the first things that we see in this article, and again, this is going to be a great opportunity for me to read laterally on this subject, but for the purposes of this podcast, this article: the first argument that we have is by Peter Foltz. He's a research professor at the University of Colorado Boulder, but he's also the vice president for research for Pearson, which is one of the larger educational publishing companies out there. And Pearson has an automated scoring program that's graded some thirty-four million student essays. Foltz is on the pro side of this; he says, quote, There will always be people who don't trust it, but we're seeing a lot more breakthroughs in areas like content understanding, and AI is now able to do things which they couldn't do really well before, end quote.

Steve Pearlman: Well, that certainly makes the AI more experienced than I am, because I have not graded thirty-four million essays. Yes, I mean, maybe, let's be honest, I'm coming up on 30 million, but no, not quite thirty-four.

Dave Carillo: It didn't occur to me that we might just be out of our league here, because we're just in the lower millions of student essays read, rather than thirty-four. That's a lot of millions of essays' worth of experience. At any rate, Foltz says computers, and this is the quote in the article, quote, learn, end quote, what's considered good writing by analyzing essays graded by humans, and then the automated programs score essays themselves by scanning for those same features. And that's something we want to get to. But he does go on to say that not only do these programs catch things like spelling and grammar, but they're also getting good at seeing whether students are on topic, at coherence or the flow of an argument, and at the complexity of word choice and sentence structure.
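To make the training scheme Foltz describes concrete, here is a minimal sketch of that kind of pipeline, assuming scikit-learn is available; the feature set, essays, and scores are illustrative stand-ins, not Pearson's actual system:

```python
# Minimal sketch of the scheme Foltz describes: a model "learns" what good
# writing looks like from human-graded essays, then scores new essays by
# scanning for the same surface features. Illustrative only.
import re
from sklearn.linear_model import Ridge

def features(essay: str) -> list[float]:
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return [
        len(words),                                        # essay length
        sum(len(w) for w in words) / max(len(words), 1),   # avg word length
        len(words) / max(len(sentences), 1),               # avg sentence length
        len({w.lower() for w in words}) / max(len(words), 1),  # type/token ratio
    ]

# "Training data": essays already scored by human raters (toy examples).
graded = [
    ("The dog ran. It was fun.", 1.0),
    ("The study's methodology, while rigorous, overlooks confounding "
     "variables that complicate its central claim.", 4.0),
]
model = Ridge().fit([features(e) for e, _ in graded],
                    [score for _, score in graded])

# Scoring a new essay means scanning for those same features; nothing more.
print(model.predict([features("A new essay to be graded automatically.")]))
```

Note that whatever the human raters actually valued, consistent or not, is exactly what such a model inherits, which is the point raised next.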

Steve Pearlman: Well, let's pause right there, because the premise of the training, that the humans are somehow consistent in their grading of essays, or that humans are even valuing the best aspects of the writing, is something that we know from a lot of other research isn't necessarily true. You know, a large percentage of faculty grade more on mechanics and formatting than anything else, and we know that many of them cannot necessarily recognize or articulate or quantify critical thinking that's occurring within a text, or development of argument, or interpretation of text and meaning. And so that's a fascinating way to train an AI. I don't know what another way would be, but it's a fascinating way to train an AI, because we don't know what kind of training, or what, is predicating those human responses.

Dave Carillo: Exactly. There's that initial issue right on the surface there. If the computer is being trained on how to, say, figure out whether a student is on topic, and what's being valued isn't necessarily anything that's worthwhile about staying on the topic, then the machine is still going to be awarding a great score for what would be very bad

Steve Pearlman: Thinking, right? I can write 50 pages tonight on schnauzers and it will all be on schnauzers. Exactly. Don't get me wrong, I will not write a sentence that's not about schnauzers, I should hope. But I don't know if you want to read my 50 pages of schnauzer writing.

Dave Carillo: I will tell you right now: A, don't do that, because I'm not going to read it, and B, I definitely don't want to read your

Steve Pearlman: Schnauzer writing, right? Maybe somebody would, though. Schnauzer fan clubs, say.
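As a toy illustration of the schnauzer point before we move on: here is what a naive "task and focus" check might look like if it reduced to keyword overlap with the prompt. This is a hypothetical sketch, not any vendor's actual algorithm:

```python
# Toy "task and focus" check: score topical overlap with the prompt's words.
# Fifty pages of schnauzer repetition would score highly while saying nothing.
def on_topic_score(prompt: str, essay: str) -> float:
    prompt_words = set(prompt.lower().split())
    essay_words = essay.lower().split()
    if not essay_words:
        return 0.0
    hits = sum(1 for w in essay_words if w in prompt_words)
    return hits / len(essay_words)

vacuous = "the schnauzer is a schnauzer and the schnauzer likes the schnauzer " * 50
print(on_topic_score("describe the schnauzer", vacuous))  # high score, no thinking
```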

Dave Carillo: Well, let's table the schnauzers, because maybe we'll get back to that later in this podcast. Foltz goes on to say, yes, it's good at coherence and it's good at the flow of an argument, and we can talk about what that might mean or might not mean, right? But then he goes on to demonstrate for the reporter, and to demonstrate, he takes a, quote, not-so-stellar sample essay, rife with spelling mistakes and sentence fragments, and runs it by the robo-grader, which instantly spits back a not-so-stellar score. He also goes on to say, quote, It gives an overall score of two out of four. The computer also breaks it down into several categories of sub-scores, showing, for example, a one on spelling and grammar, and a two on task and focus, and so on. So that's issue number one: not only are we not sure what the AI is being trained on, but in at least this one demonstration, instead of proving that the sample essay is not on topic, or that it veers off into illogic, or that it expresses some sort of racist point of view, all he does is give it something that's rife with spelling mistakes and sentence fragments. The easiest of the easy is given here as an example, even though Foltz is making the case that these things can read for both lower-level and higher-level issues. Other states are starting to look into it, too: Utah's cited, Ohio's cited, and Massachusetts is getting excited about this, apparently. Yeah, let's

Steve Pearlman: Talk about the Utah one. Yeah. Cydnee Carter, assessment development coordinator for the Utah State Board of Education, is talking about how they've started to do this. And one of the safeguards that they have in place for their AI system is that the computer can flag papers where something might be amiss; she says that it flags about 20 percent of the cases, and those go to a human reader as a backup mechanism for the assessment. And I think that's at least perhaps a worthwhile safeguard in intention.

Dave Carillo: That makes total sense.

Steve Pearlman: Except we don't know how the human raters are trained. We don't know what the human raters are valuing. We don't know the interrater reliability between the human rater and the computer rater. Are we presuming, therefore, that the human rater is the more authentic assessment, which I would be inclined to do? But the argument being made here is really that the computer is actually the more consistent means of assessment. So where is that reliability breakdown?
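For listeners who want numbers behind that question: agreement between raters is typically measured with something like Cohen's kappa, and a Utah-style safeguard amounts to routing low-confidence essays to a human. A minimal sketch, with made-up scores and a hypothetical threshold standing in for whatever the real system uses:

```python
# Made-up scores for illustration: how often do human and machine agree?
from sklearn.metrics import cohen_kappa_score

human_scores   = [2, 3, 4, 2, 1, 3, 4, 2]
machine_scores = [2, 3, 3, 2, 2, 3, 4, 1]

# Quadratic weights are a common choice for ordinal essay scores.
kappa = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
print(f"human-machine agreement (weighted kappa): {kappa:.2f}")

# Utah-style safeguard: flag low-confidence essays for a human reader.
# The threshold is a stand-in; per the article, about 20 percent get flagged.
def needs_human_review(confidence: float, threshold: float = 0.8) -> bool:
    return confidence < threshold
```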

Dave Carillo: Yeah, no, I think that's a great question. And the other thing that I want to extend here is, just a bit ago you talked about, well, what's this human valuing, right? How are they reading this essay in any different way than the computer? That's something that's huge here; it's on our mind, and our listeners know our podcast talks about it a lot. But again, if all we're checking for, human or computer, is whether the essay is five paragraphs and a conclusion, then it really doesn't matter who's grading, right? I mean, what's happening is that a good grade is being given, most likely, to something that's just structurally sound in terms of how many paragraphs it has. And that's something that echoes throughout this article, and it's something that we both feel strongly about and should probably continue to shout from the mountain: we need to check seriously what's being valued in our students' writing, whether it's being read by a human or a computer, at every stage.

Steve Pearlman: Well, one of the things that we know the computers are looking for is the use of vocabulary, but that's often judged by the length of the word. And one of the things that I am constantly trying to get my students to do, especially as they just emerge from an SAT mindset coming into college, is not to inflate their diction and register just for the sake of trying to sound smarter. I want them to choose the most appropriate word, and very often the most appropriate word is a short word. There is nothing wrong with that, because it's the clearest way to communicate the idea, which is what we in fact want them to accomplish. We don't want these artificially inflated displays of their lexicons, or thesaurus use, just so that they can sound like they are smarter. We want them to actually be smarter.

Dave Carillo: And the implication here is that an essay that is trying to look complexly at race relations, but is using shorter words to do it, is going to be graded more harshly by this computer than an essay that has highfalutin, multisyllabic words but is preaching the superiority of one race over another.
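To see how crude that "vocabulary" signal can be, here is average word length as a scoring feature, which is one commonly cited proxy; the feature choice and the sample sentences are illustrative only:

```python
# If "vocabulary" is proxied by word length, inflated diction wins even
# when the clearest word is the short one. Illustrative feature only.
def avg_word_length(essay: str) -> float:
    words = essay.split()
    return sum(len(w) for w in words) / max(len(words), 1)

clear    = "We must treat all people the same under the law."
inflated = "Humanity must perpetuate equivalent juridical treatment universally."

print(avg_word_length(clear))     # lower "vocabulary" score
print(avg_word_length(inflated))  # higher score, not better thinking
```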

Steve Pearlman: I think it actually raises another question as well, which is: what happens when we have an AI that is starting to grade on racism? To what extent can we trust a machine to place that value judgment, that ethical judgment, on the nature of a piece of writing? I mean, it's hard for us as faculty members to make that assessment, because we want students to be able to develop arguments, sometimes, that we don't agree with. Now, where is the line for that? Well, it's very contextual, right? Right, certainly. And I actually had a student once making a white supremacist argument in a paper. How do you contend with that? I can't just not give it a good grade because I object to the thinking, right? On the other hand, there's a lot of illogic going on in the presentation of that argument. Exactly. It's not my personal opinion; we can demonstrate for the student why that argument is illogical and not based on evidence. But how is AI going to be able to make that assessment, ethically and morally, in terms of its responsibility to the student, to talk about flaws of argumentation versus just the student's stance on a particular subject?

Dave Carillo: I love that point, because that's oftentimes a question that we get asked by faculty when we're dealing specifically with the assessment of writing: what are we actually going to try to value here, and to what extent are we doing a disservice to students if somehow, whether consciously or unconsciously, we're grading them negatively because their thinking doesn't necessarily align with our thinking? But that brings us, at least, to the other side of the argument here. The first individual quoted on the other side is Kelly Henderson, who's an English teacher at Newton South High School. She's quoted as saying the idea is bananas, as far as she's concerned, and she goes on to say, quote, an art form, a form of expression, being evaluated by an algorithm is patently ridiculous, end quote. Another English teacher, Robin Marder, is quoted as saying, What about original ideas? Where is the room for creativity of expression? A computer is going to miss all of that. These are all valid points to some extent, but that also goes back to this idea of, well, who's teaching the AI, and what are they teaching the AI to value? Because I think we would argue that, yeah, the AI is not going to catch these kinds of things, right? But we know from research, like you said earlier, that a lot of faculty, whether they want to or not, end up grading more on grammar because it's more quantifiable; it's easy to do. And there are a lot of other assignments we've seen, and faculty we've worked with, that would be happy to give students strong grades simply for collecting and summarizing a lot of research, right? So the same thing that Henderson is challenging here, whether it's going to reward some vapid drivel that happens to be structurally sound, is something that emerges as an overarching question whether or not we have this robo-grader in the

Steve Pearlman: Conversation at all. It makes me think of this: there was a time when you and I were running an assessment workshop at another university, and we were looking at critical thinking. We were workshopping with faculty, right? And we had a faculty member who was looking at a paper that was generally vacuous in terms of what it was doing intellectually with respect to evidence and formulation of argument and complexity. But she said, Well, I have to give this paper a great grade because it touched my soul.

Dave Carillo: I remember that now. Yeah. Absolutely.

Steve Pearlman: So I don't know how we can ever standardize the touching of the soul, nor should we. And that should not be a basis on which an argumentative paper is graded.

Dave Carillo: Probably

Steve Pearlman: Right, probably not. It might be different with a piece of poetry or something very artistic, where you are just moved by it in an emotional sense. But I don't even want to distill the grading of creative writing down to something that is purely just an emotional response, either. That's equally problematic.

Dave Carillo: Well, especially in creative writing, there's always going to be that emotional response, and I've been in poetry workshops where really excellent poets have made certain points about how certain ideas or emotions are being expressed and how they might be received by the audience, and sometimes that comes down to other issues of craft. But yes, this is the other side of the argument: the computer is just not going to, at least for now, be able to recognize an original idea or creativity of expression in any way, shape, or form. And this is another opportunity for us to think about: well, if we're arguing that grammar isn't the be-all and end-all of the writing, then what is? Can we articulate that? Can we assess it? Our listeners would be able to tell you that yes, you can. You can teach critical thinking, and you can assess it.

Steve Pearlman: And I love this guy, Les Perelman, even though he spells his name wrong.

Dave Carillo: There you go again, thinking there’s only one way to.

Steve Pearlman: There’s only one way to spell Pearlman.

Dave Carillo: You’re so close minded about the word Pearlman.

Steve Pearlman: He and I might have a common ancestry for all I know. You don't know. Shout out to Les. He's created something called BABEL, which stands for Basic Automatic B.S. Essay Language Generator, and it basically, as they explain it in this article, works like a computerized Mad Libs, and it's designed to create papers. So he shows how this thing works, and it instantly spits back a five-hundred-word essay based on a couple of keywords that he enters. And I'd like to read an excerpt from that, if I can,

Dave Carillo: If you can make it through, because it's magnificent.

Steve Pearlman: Right? Yes, I'm going to put this on my wall. "History by mimic has not, and presumably never will be, precipitously but blithely ensconced. Society will always encompass imaginativeness. Many have scrutinized, but a few for an..."

Dave Carillo: Amanuensis, right? Amanuensis. Yes, I'm sure that's it.

Steve Pearlman: "The perjured imaginative lies in the area of theory of knowledge but also the field of literature. Instead of enthralling, the analysis grounds constitute both a disparaging quip and a diligent explanation." And he is just perfectly clear in saying, and I'm quoting him: It makes no sense. There is no meaning. It's not real writing. But what does it get from the grader? It gets a perfect score. It gets six out of six. There you go. So he's constructing this thing that's built with big words and grammatical correctness and some complex sentence structures and so on, but it's blather, right? And he's able to fool the machine, and he frankly says it's scary that that works. But I'm going to read from the transcript now, where Madnani, who

Dave Carillo: Is Nitin Madnani, a senior research scientist at Educational Testing Service, ETS. That's the company that makes the GRE, among other tests, and they also make the GRE's automated scoring program.

Steve Pearlman: And so Madnani responds, in a way, to Perelman here. He says: if somebody is smart enough to pay attention to all the things that the automated system pays attention to, and to incorporate them in their writing, that's no longer gaming, that's good writing, so you kind of do want to give them a good grade. But the passage that we just read, which got a good grade, was nonsensical. And sure, it was hitting some of the checkboxes that this AI is looking at, but it was monkeys throwing darts at a thesaurus, right?
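For the curious, something in the spirit of Perelman's BABEL can be sketched in a few lines; this is emphatically not his actual code, just a Mad Libs-style illustration of why a grader keyed to length, big words, and grammatical form rewards blather:

```python
import random

# Grammatical templates stuffed with long words around a keyword, in the
# Mad Libs spirit attributed to BABEL in the article. Illustration only.
BIG_WORDS = ["ensconced", "imaginativeness", "scrutinized", "disparaging",
             "diligent", "precipitously", "blithely", "amanuensis"]
TEMPLATES = [
    "{kw} by mimic has not, and presumably never will be, {w1} but {w2}.",
    "Society will always encompass {w1}; many have {w2} the {kw}.",
    "The {w1} {kw} lies in the field of theory, instead of a {w2} analysis.",
]

def babble(keyword: str, sentences: int = 6) -> str:
    out = []
    for _ in range(sentences):
        w1, w2 = random.sample(BIG_WORDS, 2)
        out.append(random.choice(TEMPLATES).format(kw=keyword, w1=w1, w2=w2))
    return " ".join(out)

print(babble("history"))  # meaningless, but long-worded and grammatical
```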

Dave Carillo: Well, and Madnani comes back and says the GRE essays are still always scored by a human reader as well as a computer, so that pure babble would never pass the real test. But there are some states, the article goes on to say, that only grade these tests by machine. And that brings us to this other point. So we've got the folks on one side saying this is great, and the folks on the other side saying this is lousy, and then there are these folks in the middle, which I find also interesting in its own way: some individuals recognize that, for better or for worse, these machines are here to stay, and for the students they tutor, whether for tutoring companies or otherwise, to pass these tests, they are simply teaching the students how to write for these computers. And that's where it becomes a difficult situation on another plane entirely. So, as we

Steve Pearlman: Say over and over again, right? What drives what students do? It's the assessment, right? The nature of the assessment is the driving force for how students are going to go about engaging their academic work. And if we have an assessment that can be gamed, then students are going to game it. They're going to find a way to game it, whereas an authentic assessment that's deep and rich in critical thinking and so forth will force students to do that authentic intellectual work. Nevertheless, that's not exactly what we have going on.

Dave Carillo: No, not at all. But we have an individual, Orion Taban, executive director of Stellar GRE, a tutoring company in San Francisco, who realizes that this is the case. Taban is quoted as saying, quote, Students really need to appreciate that they're writing for a machine, and when students agonize over crafting beautiful, wonderfully logically coherent and empirically validated paragraphs, it's like pearls before swine. The computer can't appreciate this, end quote. And what Taban talks about doing is training his students to fabricate evidence and fabricate fake studies that the computer is going to pick up and reward as details, but that are completely made up. He says, I train them in fabricating evidence and fabricating fake studies, which is a lot of fun, quickly adding, But I also tell them not to do this in real life. Which is a major problem on another level entirely, because maybe you tell them not to do it in real life, but if they're rewarded for doing it here, then they're probably at least going to weigh the odds of being rewarded for it elsewhere. Taban advises students to take a basic formula and get creative, and the formula is: pick any year for a study, make up a professor's name, insert your favorite university, and then say "in which the authors analyze" whatever the debate or discussion of the topic of the essay is, and go on from there.
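Taban's formula is mechanical enough to render literally as a template; the professor and study below are invented, which is exactly the problem, and the point is only how shallow a "details" detector must be for this to work:

```python
# Taban's recipe, verbatim as a template: any year, a made-up professor,
# your favorite university, "in which the authors analyze" the topic.
# The example values are fabricated by design; do not do this in real life.
def fake_study(year: int, professor: str, university: str, topic: str) -> str:
    return (f"A {year} study by Professor {professor} of {university}, "
            f"in which the authors analyze {topic}, supports this view.")

print(fake_study(2014, "Grimbleton", "Stanford", "the debate over robo-grading"))
```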

Steve Pearlman: Now, I have no doubt that there's going to be some point at which the AI is going to be able to immediately search that reference through all the databases, find out that the reference potentially doesn't exist, and move on. But it won't as easily, and I don't know if it will ever, at least in our lifetimes, be able to search through the databases, find that the article exists, read that article, and understand the value of the student's reference to the article. Right, exactly. So it might validate that the student is referencing a real article, but I could pull up five thousand articles to reference in a second with a search and just say that I'm referencing one, with something that loosely sounds like a paraphrase.
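A sketch of the asymmetry Steve is predicting: verifying that a cited article exists is a database lookup, while judging whether the student used it well is not. The database here is a made-up stand-in for a real bibliographic index:

```python
# Stand-in for a real bibliographic index.
database = {"Smith 2016": "Robo-grading and its discontents"}

def reference_exists(citation: str) -> bool:
    return citation in database          # cheap and automatable

def reference_used_well(citation: str, essay: str) -> bool:
    # The hard part: reading the source and the student's argument for meaning.
    raise NotImplementedError("requires understanding, not lookup")

print(reference_exists("Smith 2016"))    # True
print(reference_exists("Jones 1999"))    # False, and flaggable
```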

Dave Carillo: Right. And this is something that I want to add. Again, we understand, and we've experienced when we go to different schools and universities and different grade levels, the various issues of workload and class size and administrative requirements and other requirements and testing that a lot of our compatriots are faced with. We understand that. But what's discouraging at times is that every time we turn around, there's another opportunity for a student to be taught: here's how you engage a piece of evidence; here's how you evaluate a piece of evidence; here's how you put one source into conversation with another source. And inevitably, at every turn, they're being taught not to do that, or that summary is all you need, or to just make up the evidence and use these phrases and it's going to sound great. But this is what I wanted to get to. On one end, Madnani at ETS is saying, well, Perelman's blather would never get through, because humans read all our GRE essays. But Madnani is quoted later, in relation to this particular tactic, fake studies used as real details or evidence for an argument, as saying, yeah, we see a lot of that. And he goes on to say it's not the end of the world: even human readers, and this is what I'm keying in on, who may have two minutes to read each essay, would not take the time to fact-check those kinds of details. So that's the first point: on one side we're told that blather would never get through because we have humans reading it, but he seems to be suggesting that our human readers only have two minutes.

Steve Pearlman: That's really the nutshell of this, right? All of this is predicated on the need to assess people and students on this mass scale in this short period of time. And regardless of what else is going on, we know that's a flawed concept. There was research for a while showing that the single biggest indicator of getting a good grade on the SAT writing section was the length of the essay. And so we're not looking at a construct that's really meant to assess quality in the first place.

Dave Carillo: Right, exactly. And he actually goes on to say, and this goes to that question of quality: but if the goal of the assessment is to test whether you are a good English writer, then the facts are secondary. And it comes back to that initial point, right? What are we actually valuing in our students' writing, and what are we teaching the computers to value? In this case he seems to be saying that, even if they are making the argument that the AI is getting so much better that we can now check for the flow or the coherence of an argument, et cetera, that's not quite the case, whether it works or not, because he seems to be drawing a distinction between being a good English writer and how facts are used, or the validity or verity of those facts, right? So make up the facts; if that gets through, so be it, because we're not really testing for that. But don't worry, because we're testing for all sorts of other things. We're assessing all sorts of things.

Steve Pearlman: You can be a great writer, except for not saying anything, or not saying anything true. And this is why I essentially called it bullshit at the beginning, and it probably comes across in our tone today. I think what upsets me about this is the audacity of it. It's not that we can't try to have AI do this; that perhaps is a worthwhile effort for some people to engage in, and maybe something very fruitful comes out of it down the road. I don't know; I'm skeptical of it, but maybe down the road it eventually proves worthwhile. But putting it into play now, in any fashion, when the flaws in this are so obvious, when what it values is so flawed to begin with, is what upsets me. Because I don't mind that you're trying to accomplish this; it just seems like any effort to put this into practice now, really just for the sake of expediency, is irresponsible. We're not developing the student, therefore, and we are not respecting educators and education for what they should really be accomplishing, for the craft that it is, and for the human importance that it serves.

Dave Carillo: And I think that's maybe a great place to conclude: regardless of what we're talking about here, we've met and worked with enough teachers to know that they all want what's best for their students. They are all working really hard to do what they can in their particular circumstances. And to have all that work be, I guess, shoved aside for a test that's going to be graded by a machine, one that's not going to catch what it should be catching, or what people think it's catching; it's going to catch something different, and then really only test the students by testing their ingenuity in beating it. It throws a lot of sand in the face of our colleagues, who are working hard all the time to get every last bit out of their students in the face of sometimes staggering odds. So I guess people know where we stand based on our tone alone. But that's why we started the Critical Thinking Initiative: to show our colleagues and friends that there are ways to get stronger critical thinking out of your students, and stronger engagement in the material, without necessarily taking up any more time in class or even pushing the content of your class to the side.

Steve Pearlman: So that's a great place to end. And as we're wrapping up the episode, first of all, we want to invite listeners to send us articles or other ideas for the podcast, if anything strikes you. Absolutely, anything you think would strike our interests. We try to stay on top of all this as much as we can, but an article like this could slip by us, and we'd love you to bring our attention to these matters if you find something interesting out there. Otherwise, thanks again for supporting the Critical Thinking Initiative, and we'll look forward to talking at you next time.

Dave Carillo: Take care!

Voiceover: This episode of the Critical Thinking Initiative is brought to you by TheCriticalThinkingInitiative.org. Visit the Critical Thinking Initiative for an ever-expanding collection of books, syllabi, videos, and other resources on critical thinking. The Critical Thinking Initiative is always seeking out other people in education who want to work towards raising critical thinking outcomes locally and nationally.
