Brecht De Poortere made a neat breakdown of “top” poetry journals. Unfortunately, it’s based on the usual suspects (Best American, Pushcart, BotN). I’m hoping next year Brecht will collaborate with Chill Subs, and possibly Duotrope, Submission Grinder, and others, to create a more nuanced list. Regardless, I’m always thankful people out there want to do this, and I’m confident poets/writers appreciate these lists as reference points.
+
Cliff Garstang released his annual lists. Here’s the one for poetry.
(Note: Cliff’s lists are the ones I’ve followed the longest.)
+
I held off on publishing this piece because I was waiting for Erika Krouse to release her list. Now, it’s been a month… so, I figured, let’s just get this out in the world already! You can see Erika’s list for last year (with 2023 data).
+
Back in 2023, Erik Harper Klass, Founder of Submitit, wrote this piece discussing ‘The Possibly Impossible Task of Ranking Literary Journals’. (As an aside, there are two interviews with Erik on Lit Mag News.) Here, Erik offers a few metrics he uses to evaluate small lit mags in an attempt to determine whether they have staying power.
Overall, Erik looks at:
1. Reputation (which is, of course, subjective).
· But I like that it puts some emphasis on the hearsay regarding what people think about the journal. The myth of the lit mag. There’s a certain je ne sais quoi about heritage journals, often shrouded in a bit of mystery.
2. Lit mags that show up in anthologies.
· This is fast becoming my least favorite metric. I used to think this meant something about quality and value. Now, it seems to mean something about power. What sort of power? My sense is that connections play an outsized role. A longtime concern I’ve had is that editors look to other editors for reassurance. How so? Some editors will look at a contributor’s bio [read: confirmation bias] to confirm that the work they are reading is, in fact, good.
· The literary community is a microcosm of the greater society. In the marketplace, we find a similar trap. It’s often repeated (ad nauseam) that past performance is not an indicator of future results. In the stock market, this means that just because X stock performed well in the past, there’s no guarantee it will provide good ROI in the future. In the literary community, where it’s arguably even easier to misrepresent, there’s no reason to believe that just because a bio indicates someone once published in AGNI or Kenyon Review, said individual is still writing stellar work in the here and now. For one thing, bios often omit timeframes. So, this writer may have published a terrific piece in AGNI 25 years ago… but how are their writing chops today? I’m not saying momentum never carries forward, but I hope you get my point.
3. Masthead
· The presence of a masthead for online lit mags has become an important indicator of authenticity. Basically, the lack of a masthead makes the lit mag suspect. Oh, you want us to pay $5 to submit to the magazine and we don’t even know who you are? The audacity.
4. Top lit mags have usually been around a while
· There’s something to this. That being said, newer journals may be bringing fresh energy, perspectives, and ideas to the space.
· It’s also noted that all too many literary darlings have sadly gone defunct. We live in an age where even notable fixtures are under fire.
+
What’s the answer? I would LOVE to see all of these folks [who create lists] collaborate with each other. It would streamline the process, saving all of them time, and they could make more granular, useful lists.
Soon, AI will be able to make lists like this that are simply data-driven. The human element is going to involve… a special sauce. Curation. I’ve made lists of lit mags that I personally admire. Are these lists biased? Hell yeah. That’s the whole point.
It’s worth noting that Best American, Pushcart, and BotN are also curated and, in turn, biased. We all have unique preferences and that’s what makes taste interesting. For now, AI is not a full-blown agent; generative AI is not able to develop its own personal taste. Only humans can… for now. We should lean into the human element.
I don’t care how AI evaluates literature, so I don’t care what it thinks about lit mags, and the prize and anthology selections don’t seem to influence my submission habits much anyway.
What I’m hoping Duotrope or Chill Subs can bring to the table is their database of submission reports. If an author subs a piece to market A, gets rejected, and only then subs to market B, that’s a signal that they perceive A > B. Aggregate those signals (with Elo, say) and you can get a ranking.
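To make that concrete, here’s a minimal sketch of what Elo-style aggregation over those signals could look like. The market names, the K-factor, and the example data are all invented, and real submission reports would need plenty of cleaning that this ignores.

```python
# A toy Elo aggregation over submission-order signals.
# Everything here (markets, K, signals) is hypothetical.
from collections import defaultdict

K = 32  # standard chess K-factor; the right value for lit mags is anyone's guess

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that the market rated r_a 'beats' the market rated r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# Each tuple (preferred, fallback) encodes: an author subbed to `preferred`,
# was rejected, and only then subbed to `fallback` -- i.e., preferred > fallback.
signals = [
    ("Market A", "Market B"),
    ("Market A", "Market C"),
    ("Market C", "Market B"),
]

ratings = defaultdict(lambda: 1500.0)  # every market starts at the same baseline
for preferred, fallback in signals:
    e = expected_score(ratings[preferred], ratings[fallback])
    ratings[preferred] += K * (1 - e)  # the preferred market "won" this comparison
    ratings[fallback] -= K * (1 - e)

for market, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{market}: {rating:.0f}")
```

One nice property of Elo here is that it’s incremental: new submission reports just nudge the ratings rather than forcing a recompute from scratch.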
Similarly, I’d be very curious if someone put together a database where each row corresponds to a piece and has the following fields:
1. author
2. market where it first appeared
3. list of markets named in the bio
As mind-numbing as Brecht et al. report indexing the anthologies to be, building this would surely require computer assistance, and it could only cover markets that make their content freely available online, but my professional opinion is that it would be feasible. (I won’t do it myself, for various reasons.)
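For what it’s worth, here is one hypothetical way a row of that table could look in code. The field names, types, and example values are my own guesses, not anything Duotrope, Chill Subs, or Brecht actually uses.

```python
# A hypothetical row layout for the piece-level database sketched above.
from dataclasses import dataclass, field

@dataclass
class PieceRecord:
    author: str                 # 1. author
    debut_market: str           # 2. market where the piece first appeared
    bio_markets: list[str] = field(default_factory=list)  # 3. markets named in the bio

# An entirely made-up example row:
row = PieceRecord(
    author="Jane Doe",
    debut_market="Market A",
    bio_markets=["AGNI", "Kenyon Review"],
)
print(row)
```

With rows like that, you could, for instance, count how often each market shows up in the bios of pieces that debuted elsewhere, which gets at the prestige signal discussed above.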