A few days ago, I posted this picture to Facebook, with the caption, “Find yourself someone who looks at you the way Boggle looks at me when he wants my soup.” Boggle, obviously, is the cat’s name.
A few hours later, my wife sent me this screenshot of Facebook’s suggested replies to this photo:
This struck us both as unsettling: not because the AI has gotten good enough to generate such accurate approximations of how someone might reply (including the cat’s name, the emoji usage, the soup reference, and the casual meme reference to a “spirit animal”), but because offering these as a menu of reactions represents a misunderstanding of the function of these interactions in the first place. Even if the actual text (and emojis) of the response is identical, it feels fundamentally different to me to know that someone typed it out themselves rather than selected it from a menu of options.
What is it that makes it different, though? Is it the effort, knowing that the commenter had to actually go through the process of typing out the letters and selecting the emojis? I don’t think so; I imagined knowing that someone dictated these via voice-to-text or typed these on a computer rather than a smartphone, and it didn’t substantively affect my perception of the message.
Instead, there’s something different about knowing the content was generated by the commenter rather than merely selected. It’s in some ways akin to the distinction between recognition and recall, where the ability to recall something represents stronger understanding than merely the ability to recognize it when prompted. Similarly, the process of generating a response oneself feels to me as if it represents something stronger than merely selecting a pre-generated response. In many ways, this likely connects to why comments that readers write themselves are regarded as more impactful than mere reaction counts: a comment represents something stronger about the feelings of the person leaving it. Offering the ability to select a pre-generated response circumvents that, even if the response selected is identical to the one that the commenter would have left on their own.
Generative AI, I feel, is at a stage right now that all revolutionary technology goes through: we recognize its potential as a powerful new tool, but we haven’t yet identified what needs it addresses. Generative AI is a solution looking for problems. And in the process of searching for the right problems, we’re coming across lots of problems it is not good at solving—such as the problem of needing a first-person account of a real experience with a 2e (twice-exceptional) child in the NYC gifted and talented program. A human could have written a response identical to the one the Meta AI gave in that story, but human-generated, the response would have value; AI-generated, it does not.
Generative AI is a solution looking for problems.
This conundrum comes up a good bit in my teaching. My rules regarding how much students may copy from AI are generally more restrictive than the rules students may have in the workplace. The reason is similar: in an educational context, the work generated is valuable insofar as it represents the student’s knowledge of some content. In the workplace, the work generated is valuable insofar as it is able to accomplish a task. What the work represents is different. Copying code from AI accomplishes one goal, but not the other.
In the opening synchronous meeting for one such class this semester, I was asked about this policy: if the work itself is the same, what does it matter whether it came from AI or not? I explained my thoughts with an analogy: imagine you have an assistant, whether that is an executive assistant at work or a family assistant at home or anyone else whose professional role is helping you with your role. Then, imagine your child’s (or spouse’s, I actually can’t remember which example I used in class) birthday is coming up. You could go out and shop for a present yourself, but you’re busy, so you ask this assistant to go pick out something. If your child found out that your assistant picked out the gift instead of you, would we consider it reasonable for them to be disappointed, even if the gift itself is identical to the one you would have purchased?
My class (those that spoke up, at least) generally agreed yes, it would be reasonable to expect the child to be disappointed: the gift is intended to represent more than just its inherent usefulness and value, but also the thought and effort that went into obtaining it. I continued the analogy by asking: now imagine if the gift was instead a prize selected for an employee-of-the-month sort of program. Would it be as disappointing for the assistant to buy it in that case? Likely not: in that situation, the gift’s value is more direct.
This gets to the core distinction I feel tools using generative AI need to address: to what extent is the artifact they are generating valuable in and of itself, and to what extent is the artifact they are generating valuable only insofar as it is authentic to the way in which it is perceived to have been generated? In the workplace, a block of code may be valuable insofar as it accomplishes the goal of the program, while in a class, it may be valuable only for what it says about the student. In gift-giving, a gift may be valuable in a professional setting based only on its inherent value, while in a personal setting it may be valuable due to a combination of value and the authentic process that generated it. We can apply this analogy in numerous other places; this is why it is appropriate to bring a store-bought cake to a corporate event but not to a baking competition, or why we regularly see internet personalities criticized for apparently using generative AI to author apology videos.
There are implications of this view for two audiences. For regular users, it’s important for us to consider the trade-off between authenticity and whatever value generative AI is delivering when electing to use these tools. For example, there’s a built-in tool here in WordPress that lets me simplify, expand, or summarize this post. I can be a bit verbose, and so I as a user have to consider whether the loss of authenticity that would come from using that tool is worth the apparent gains in readability or simplicity. We each individually need to attend to whether using these tools undermines the authenticity of the artifact we’re producing. This can be tough, of course, because so often using these tools is going to be much easier than producing the artifact ourselves. Generative AI is like the high-fructose corn syrup of content: it’s cheap and easy to use, but doesn’t yield as good of a result and has some long-term impacts if we use it too much. We have to be careful about when we use it because it would be so easy to get carried away.
Generative AI is like the high-fructose corn syrup of content: it’s cheap and easy to use, but doesn’t yield as good of a result and has some long-term impacts if we use it too much.
For those building tools that leverage generative AI, the same implication applies, but at a broader level. To what extent are we helping our users circumvent authenticity, and to what extent are we operating in areas where authenticity wasn’t part of the underlying value of the artifact? One area of rapid development for similar technologies over the past few years has been in photo editing: Photoshop and other tools can now do in a single click what previously took a professional several hours. While this poses some obvious concerns about technology replacing workers and other such issues, I’ve not yet heard concerns raised about the authenticity of this exercise. But when we develop tools that generate content in settings where its only value would be in its authenticity—such as the aforementioned instance of the Meta AI fabricating a story about a child in the New York City school system—we are creating something self-defeating. It’s solving a problem that it inherently can’t solve because it removes the authenticity from the artifact, and the authenticity is the only reason the artifact has relevance in the first place.
So, as both users and creators of Generative AI tools, it’s important that we keep in mind to what extent certain artifacts have value because of what they are, and to what extent they have value because of the authentic process that created them. Generative AI can be a useful tool for creating artifacts whose entire value is just the artifact itself, but if authenticity matters, generative AI is a poor fit.
That said, for the sake of science, I did drop this blog post into Jetpack’s built-in AI summarizer. If I had posted the following blog post instead of the one above, would it be more or less impactful? Is that difference because of the different content, or because of the process that generated it? Does it matter that you know it was AI-shortened? Does it matter that you know the input was a full-length post by me rather than a short prompt to generate a longer post?
A few days ago, I posted a Facebook picture of my cat Boggle with a caption about the way he looks at me when he wants my soup. My wife then sent me a screenshot of Facebook’s suggested replies to this photo, which felt unsettling. The AI-generated responses were accurate but missed the personal touch of someone typing out a message themselves.
This lack of effort in AI-generated replies reminded me of the difference between recognition and recall. Generating a response personally feels more meaningful than selecting a pre-made one, much like how a student’s own work demonstrates their knowledge better than just copying answers.
I explained this in a class with an analogy: If you ask an assistant to buy a gift for your child’s birthday, the child might feel disappointed knowing it wasn’t chosen by you. The gift’s value is tied to the thought and effort behind it, unlike a professional setting where the inherent value of the gift is enough.
This applies to generative AI too. The value of an artifact depends on whether its authenticity or just its existence is important. For instance, a block of code is valuable in a workplace for its functionality, while in a class, it shows the student’s understanding. AI-generated content is useful when the artifact’s value is independent of its creation process, but not when authenticity matters.
We need to be cautious with generative AI. It’s easy to use but can undermine authenticity. Users and tool creators should consider whether the trade-off between ease and authenticity is worth it. For skills like photo editing, AI is helpful because authenticity concerns are less. However, AI-generated personal stories or apologies might lack the necessary authenticity, making them less impactful.
Ultimately, generative AI can simplify creating certain artifacts, but it’s vital to assess when the authenticity of the process matters.
I started my freshman year at Georgia Tech on August 15th, 2005—which itself was the 6,772nd day of my life.
As of today—February 29th, 2024—that day was also 6,772 days ago. I never really left Georgia Tech after I started—I began teaching my own classes a week after finishing my PhD, and even while I worked at Udacity 100% of my time was spent on the OMSCS program. So, that means that as of today, I’ve spent half of my life at Georgia Tech.
This seems like the perfect occasion for a completely unnecessary graphic:
Even now, a little over half of that time has been as a student: 3,546 days across three degrees, with 3,226 as a teacher and researcher since then.
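For anyone who wants to double-check the arithmetic, here’s a quick Python sketch (nothing beyond the dates mentioned above) that confirms the count:

```python
from datetime import date

first_day_at_tech = date(2005, 8, 15)   # start of freshman year
halfway_milestone = date(2024, 2, 29)   # the day this post went up

# Days elapsed between the two dates: matches the 6,772-day figure above
print((halfway_milestone - first_day_at_tech).days)  # 6772

# The student/teacher split quoted above sums to the same total
print(3546 + 3226)  # 6772
```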
I’ve known this day was coming for quite a while, actually. I calculated it and put it on my calendar over two years ago. During that time, I knew I wanted to say something to mark the occasion. I thought about writing about why I stuck around, but the truth is that it has never really occurred to me to even consider leaving. Every stage has led smoothly into the next:
I came to Georgia Tech because I wanted to stay in-state and study computer science (…and because my girlfriend at the time was already here, let’s be honest).
I stuck around for a Master’s because I accidentally graduated earlier than I intended and didn’t have anything else lined up yet.
I stayed for a PhD because I learned late in my Master’s about these new efforts to create intelligent tutoring systems, and—as a private tutor on the side myself—I was instantly really interested in the idea.
I stayed after my PhD because developing a course with my PhD adviser sounded like a fun thing to do for a year.
I stayed after that because I discovered that I love online teaching: it has been everything I love about teaching—plus a lot of what I love about coding—without the stuff I never liked, like having to be compelling in front of a live audience.
And I’m still here because—at the risk of being overly quixotic—I really enjoy what I get to do. There’s a Japanese concept called ikigai that has been summarized by some nice infographics, and it’s rare to find something that sits at the intersection of all four areas—but for me, this does. I enjoy what I get to do, I believe the things we’re working on are improving the world, I (obviously) have made a living at it, and I think I’m halfway decent at it. Parts of it, anyway.
I thought about listing all the people I would want to thank for helping me get here, but that would be a far longer post, and I’m sure I’d still forget several people and be mortified when I realized it. So instead I’ll just say: it’s been a fantastic 6,772 days, and I’m excited to see what the next 6,772 hold.
As I’ve done the last three years (2020, 2021, 2022), I’m ending the year by creating a list of my top ten (well, ten-ish) books that I read the past year. Release dates are all over the place, so rather than try to narrow down the books that I enjoyed that were released in 2023, I figure it’s easier and more interesting to just look at the books that I read during that year.
As always, I generally don’t review the books I read because I tend to think that when I don’t care for a book, it’s more a reflection on my tastes than the book’s quality, but listing my favorites from the year has always struck me as a good compromise.
So, my Top 10(ish) books of 2023, in no particular order, along with a handful of honorable mentions (photo shows the top “ten” themselves):
Opium and Absinthe by Lydia Kang—or The Impossible Girl by Lydia Kang, either would have been one of my choices for the same reasons. I first became acquainted with Lydia Kang when I read her Quackery: A Brief History of the Worst Ways to Cure Everything last year, and I just assumed the novelist must be a different person with the same name until Goodreads showed both under her profile. I figured she was just branching out, but what I found compelling was the extent to which her medical career and writing permeated her novels as well. The level of medical detail in both books really set them apart from other similar-era mysteries.
Providence by Max Barry. I enjoyed this one so much I wrote a dedicated post about it. In a nutshell, it touches on a variety of elements that are usually left out of science fiction. I described it as Becky Chambers meets Orson Scott Card, and I think that’s still accurate: like Card, it’s a more tactical science fiction book than many I’ve read, but like Chambers, it’s deeply focused on the human elements.
The Final Empire by Brandon Sanderson (but really, The Well of Ascension and The Hero of Ages, too). So much has been written in praise of Sanderson already that it’s trite to try to add anything, except that I’ll say this: based on how much praise I’ve heard for Mistborn over the years, I started listening to this series with super-high expectations, and it still exceeded them. (And while I’m idolizing Sanderson—my daughter and I listened to Skyward this year together, and it’s just as phenomenal.)
The Management Style of Supreme Beings by Tom Holt. You could tell me that Terry Pratchett was still alive and had just switched to a pseudonym and I would absolutely believe you. It had that exact same brand of humor, but channeled into a more familiar world.
Project Hail Mary by Andy Weir (and, to a large extent, Artemis by Andy Weir as well). Like Sanderson, Weir is so popular that it seems silly to add my two cents, except to again note that the book exceeded the high expectations I came into it with. I probably nose Project Hail Mary above Artemis based on Ray Porter’s fantastic narration—it pulled in some mental connections to the Bobiverse books that were remarkably compatible. For both books, though, I love how Weir establishes some believable technological rules and then painstakingly follows them to their logical conclusion. I felt like the entirety of Project Hail Mary was set up in the first few pages in a way that it couldn’t have proceeded any differently save for some believable late twists.
The Devotion of Suspect X by Keigo Higashino. This was recommended in Games for Your Mind by Jason Rosenhouse, and appropriately so—it’s a mystery built entirely around logical deduction rather than the absurd coincidences and personalities common in others of the genre. It was particularly fascinating to me how the book managed to show things from both sides’ perspectives, but yet still provide a twist at the end.
Seven and a Half Lessons About the Brain by Lisa Feldman Barrett. This book was so short that I finished it in a day, but I’ve kept coming back to remind myself of some of the lessons. They’re remarkably well-explained, they hit the perfect balance of surprising and yet obvious, and so many of them are pretty directly relevant to everyday life.
Hello World: Being Human in the Age of Algorithms by Hannah Fry. I read this immediately after Life 3.0 by Max Tegmark which was so out there it may as well have been science fiction, and I found it to be a perfect companion, both more down to earth and more immediately relevant. What’s fascinating especially is that it was written in 2018, and yet the release of ChatGPT only made it more relevant and current, not obsolete.
Because Internet: Understanding the New Rules of Language by Gretchen McCulloch. This book, along with Semicolon: The Past, Present, and Future of a Misunderstood Mark by Cecelia Watson, has significantly changed—though somehow also reinforced—my views on grammar, writing, and language in the past year. I used to consider grammar and writing a highly structured, rule-driven process, but these books shed light on how arbitrary the rules truly are, and how acceptable it should be to let them evolve. I feel like grammar is sort of like a political map of Europe: it was radically changing for centuries, and then at some point we said “Freeze!” and took the current state as permanent. When that’s about ending wars (as in Europe), that’s great—when that’s about stifling necessary evolution and creativity (as in language), it isn’t. (Sidenote: this is also one of the best read-by-the-author non-autobiographical audiobooks I think I’ve listened to.)
And a handful of honorable mentions:
Mind Bullet by Jeremy Robinson. Jeremy Robinson remains one of my favorite authors, and Mind Bullet was my favorite by him this year (I also read the entire Project Nemesis series, as well as Torment, Exo-Hunter, and Tribe)—but it’s gotten a little hard to separate his books enough to put one of them in my top of the year. It’s sort of like including all three Mistborn books, except they’re all so similar in style that either they all deserve to be there or none do… but it’s a consistently entertaining style. Whenever I finish a book I didn’t enjoy, I follow up with a Jeremy Robinson because I know it’ll at least be engaging.
Sapiens: A Brief History of Humankind by Yuval Noah Harari. I loved this one, although this is one where its popularity in many ways has taken on a life of its own, and it’s hard to endorse the book without implicitly endorsing some of the decisions that some people have made citing Sapiens as support. But taken solely for its content, I found it as remarkable as many have noted, especially its emphasis on belief in shared fictions and belief in a better future as huge driving factors to the constructs on which modern society is based.
Several People Are Typing by Calvin Kasulke. The fact that a book could even be written this way (as an entire series of Slack conversations) is an achievement of its own, and the fact that it was able to touch on some deeper questions through that medium is even more remarkable.
What Works: Gender Equality by Design by Iris Bohnet. I honestly had difficulty comparing this to the other books I read this year because most of my reading is primarily for pleasure: this one was so relevant to my job and the classes I teach that it felt more like reading for work. It’s not only a fantastic book about designing with equality in mind, but it’s a great book on design in general.
Impromptu by Reid Hoffman. I’ve been an optimist about the positive impact AI can have on society, and this book, for all its issues, was a nice early effort to call out some specific benefits we should focus on developing with these new AI tools.
Of the 119 books I read this year, 88 were audiobooks, 23 were physical books, and 8 were on Kindle. It’s interesting to see that shift: it used to be closer to one-third audiobooks and two-thirds physical, but the kids growing up has eaten into some of the time when I used to read a lot—and morning carpool has added around an hour of daily audiobook listening now that Lucy and I listen to books together.
A few months ago, I tweeted about the AI collaboration policy I added to my course syllabi essentially at the last minute before the summer semester began.
I'm "finally" adding an official policy on collaboration with #ChatGPT (and AI assistants in general) to my course syllabi, and the fact that this is now needed is definitely my "we're living in the future" moment.
For those curious, this is my tentative policy language: <🧵>
— David A. Joyner @davidjoyner@fediscience.org (@DrDavidJoyner) May 14, 2023
I feel like it’s a little trite nowadays to say something “went viral”, but I do know that I woke up the next morning to a ton of notifications and replies. Over the next few months, I had a number of different media requests come up that could be traced directly back to that thread. And frequently in those, I was asked if there was a place where my policies could be found, and… there really wasn’t. I mean, my syllabi are public, but that just shows the policy in isolation, not the rationale behind it. Plus, my classes actually have slightly different policies, so looking at one in isolation only shows a partial view of the overall idea.
In most of my classes, there are a number of specific skills and pieces of knowledge I want students to come away with. AI has the potential to be an enormous asset in these. I love the observation that a student could be writing some code at 11 PM, get stuck, and instead of posting a question on a forum and waiting for a response the next day, work through it with some AI assistance immediately. But at the same time, these agents can often be too powerful: it’s entirely plausible for a student to instead put a problem into ChatGPT or Copilot or any number of other tools and get an answer that fulfills the assignment while developing no understanding of their own. And that’s ultimately the issue: when does the AI assist the student, and when does the AI replace the student?
To try to address this, I came up with a two-fold policy. The first—the formal, binding, enforceable part of the policy—was:
However, all work you submit must be your own. You should never include in your assignment anything that was not written directly by you without proper citation (including quotation marks and in-line citation for direct quotes).
— David A. Joyner @davidjoyner@fediscience.org (@DrDavidJoyner) May 14, 2023
My “we’re living in the future” moment came from the fact that this is exactly the same policy I’ve always had for collaboration with classmates and friends. You can talk about your assignment and your ideas all you want, but the content of the deliverable should always come from you and you alone. That applies to human collaboration, and that applies to AI collaboration as well.
With human collaboration, though, I find that that line is pretty implicitly enforced. There are some clear boundaries. It’s weird to give a classmate edit access to your document. It’d be strange to hand a friend your laptop at a coffee shop and ask them to write a paragraph of your essay. There are gray areas, sure, but the gray zone between collaboration and misconduct is narrow. That’s not to say that students don’t cheat, but rather that when they do, they probably know it.
Collaboration with AI tends to feel a bit different. I think it’s partially because it’s still a tool, meaning that it feels like anything we create with the AI is fundamentally ours—we don’t regard a math problem that we solve or a chair that we build as any less “our” work because we used a calculator or a hammer, though we’d consider it more shared if we instead asked a friend to perform the calculations or pound in a few nails. And this is the argument I hear from people who think we should allow more collaboration with AI: it’s still just a tool, and we should be testing how well people know how to use it.
But what’s key is that in an educational context, the goal is not the product itself, but rather what producing the product says about the learner’s understanding. That’s why it’s okay to buy a cake from a grocery store to bring to a party, but it’s not okay to turn in that same cake for a project in a culinary class. In education, the product is about what it says about the learner. If a student is using AI, we still want to make sure that the product is reflecting something about the learner’s understanding.
And for that reason, I augmented my policy with two additional heuristics:
Heuristic 1: Never hit "Copy" within your conversation with an AI assistant. You can copy your own work into your conversation, but do not copy anything from the conversation back into your assignment.
— David A. Joyner @davidjoyner@fediscience.org (@DrDavidJoyner) May 14, 2023
Heuristic 2: Do not have your assignment and the AI agent open at the same time. Similar to above, use your conversation with the AI as a learning experience, then close the interaction down, open your assignment, and let your assignment reflect your revised knowledge.
— David A. Joyner @davidjoyner@fediscience.org (@DrDavidJoyner) May 14, 2023
Truth be told, I really prefer just the second heuristic, but there are instances—especially in getting feedback from AI on one’s own work—where it’s overly restrictive.
Both heuristics have the same goal: ensure that the AI is contributing to the learner’s understanding, not directly to the assignment. That keeps the assignment as a reflection of the learner’s understanding rather than of their ability to use a tool to generate a product. The learner’s understanding may be enhanced or developed or improved by their collaboration with AI, but the assignment still reflects the understanding, not the collaboration.
There’s a corollary here to something I do in paper-writing. When writing papers with student co-authors, I often review their sections, and I often come across simple things that I think should be changed—minor issues like grammatical errors, the occasional misuse of personal pronouns in formal writing, and so on. If a student is the primary author, it makes sense to give major feedback as comments for the student to incorporate; for minor suggestions, it would often be easier to make the change directly than to explain it, but I usually leave those as comments anyway because that pivots the process into a learning/apprenticeship model. By that same token, sure, there are things generative AI can do that make sense to incorporate directly—that’s part of why there’s been such a rush to incorporate them directly into word processors and email clients and other tools. But reviewing and implementing the correction oneself helps develop an understanding of the rationale behind the correction. It’s indicative of a slightly improved understanding—or at least, I suspect it is.
So, in some ways, my policy is actually more draconian than others. I actually don’t want students to simply click the magic AI grammar-fixing button and have it make suggested changes to their work directly (not least because I subscribe to the view that grammar should not be as restrictive and prescriptive as it currently is seen to be—see Semicolon by Cecelia Watson for more on what I mean there). I’m fine with them receiving those suggestions from such a tool, but the execution of those suggestions should remain with the student.
Of course, there are a couple of wrinkles. First, one of my classes deliberately doesn’t have this policy. It’s a heavily project-oriented class where students propose their own ~100-hour project to complete. The goal is to make an authentic contribution to the field—or, since that’s a hard task in only 100 hours, to at least practice the process by which one might make an authentic contribution to the field. Toward that end, if a tool can allow students to do more in that 100 hours than they could otherwise, so be it! The goal there is to understand how to justify a project proposal in the literature and connect the resulting project with that prior context: if AI allows that project to be more ambitious, all the better. The key is to understand what students are really expected to learn in a particular class, and the extent to which AI can support or undermine that learning.
And second and most importantly: we are right at the beginning of this revolution. Generative AI emerged faster than any comparably revolutionary tool before it. Educators ultimately learned to adjust to prior technological innovations: when students received scientific calculators, we assigned more and tougher problems; when students started writing in word processors rather than with pen and paper, we expected more revisions and more polished results; when students got access to search engines instead of library card catalogs, we expected more sources and better-researched papers. Generative AI has arrived with unprecedented speed, but the fundamental question remains: now that students have a new, powerful tool, how will we alter and raise our expectations for what they can achieve?
And I’m finishing catching up on porting my top-ten-books-I-read-this-year lists to my blog by posting this one from 2022:
As I’ve done the past couple years, I’m making a list of my ten favorite books I read in 2022. No particular order. (Well, actually, the order is the order I read them.) They’re not books that came out this year, just books I read this year.
Factfulness by the late Hans Rosling. I’d love an update to this for the pandemic and new AI era, but it provides a far more useful way to look at the world and understand its development. One of the few books that actually gives some optimism for the future.
Origins: How the Earth Shaped Human History by Lewis Dartnell. I’d heard about how things like oceans receding 60 million years ago informed modern voting patterns, but this book covered so much more, like how geology led to the evolution of intelligence itself.
Six of Crows by Leigh Bardugo. Or, really, the entire Grisha series, but this was the first I read. The characters are captivating, and every book has a remarkable ability to make you think it’s going one way before veering somewhere else while remaining believable.
Little Weirds by Jenny Slate. That rare book that defies any sort of genre boundaries. It’s like the autobiographical version of historical fiction: plenty of truth, but plenty of fantasy as well. It’s beautiful.
What Is Real? by Adam Becker. A fantastic look at the different macro interpretations of quantum mechanics. One of the more accessible quantum physics books I’ve read.
Zoey Punches the Future in the … by Jason Pargin/David Wong. I loved the humor, but I wasn’t expecting such an insightful look at a whole bunch of modern themes: social media, human augmentation, privatization, identity, and more. Plus, great characters.
The Dictionary of Obscure Sorrows by John Koenig. Words created to capture common feelings that are hard to describe without a word to label them. My favorites are galagog, star-stuck, harke, yu yi, grayshift, moledro, and foilsick.
Disclosure: I use Amazon referral links in some of my blog posts. That's mostly just a lightweight way to track and see if anyone's even clicking through. If you buy something through one of these links, I may get a bit of money back and achieve my dream of one day being able to buy the nicer set of kitchen scissors that Amazon sells instead of the bargain variety.