Academic research is being diluted by low-effort, repetitive publications.
Elevator Pitch
The proliferation of formulaic, low-novelty academic papers, particularly in AI, is drowning out valuable research and hindering genuine scientific progress. A solution is needed to identify and prioritize impactful contributions.
Full Description
I came across a professor with 100+ published papers, and the pattern is striking. Almost every paper follows the same formula: take a new YOLO version (v8, v9, v10, v11...), train it on a public dataset from Roboflow, report results, and publish. Repeat for every new YOLO release and every new application domain.
https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22murat+bakirci%22+%22yolo%22&btnG=
As someone who works in computer vision, I can confidently say this entire research output could be replicated by a grad student in a day or two using the Ultralytics repo. No novel architecture, no novel dataset, no new methodology, no real contribution beyond "we ran the latest YOLO on this dataset."
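For context on how little code that pipeline actually involves, here is a minimal sketch of the kind of run being described, assuming the public Ultralytics Python API; the dataset YAML path and checkpoint name are placeholders, not details taken from the papers.

```python
# Minimal sketch of the "latest YOLO + public dataset" pipeline described above.
# Assumes `pip install ultralytics`; data.yaml is a placeholder for a
# Roboflow-exported dataset, and the checkpoint can be swapped for any newer release.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # pretrained checkpoint; swap for v9/v10/v11
model.train(data="path/to/data.yaml",       # public dataset exported from Roboflow
            epochs=100, imgsz=640)
metrics = model.val()                       # evaluate on the validation split
print(metrics.box.map, metrics.box.map50)   # mAP50-95 and mAP50, i.e. the paper's "results"
```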
The papers are getting accepted at IEEE conferences and even in some Q1/Q2 journals, with surprisingly high citation counts.
My questions:
- Is this actually academic misconduct? Is it reportable, or just a peer review failure?
- Is anything being done systemically about this kind of research?
From the Reddit thread (12 top comments)
- 329·Reddit commenter·1mo ago
There's a huge, huge number of papers that do this but with LLMs. 'we prompted ChatGPT and here's what it said' is an entire genre of paper, and it's almost always low-effort trash.
- 104·Reddit commenter·1mo ago·reply
It's not just trash. It's trash that doesn't reproduce after a week or so due to the frequent changes made to API models.
- 103·Reddit commenter·1mo ago
My old PhD team had a professor who would essentially freeze/assume the weights of parts of neural networks, and then report faster training with better results with those weights frozen. He is still publishing, getting 20-30 papers out yearly together with his students, and the department loves him because he single-handedly increases state funding by a relatively big amount. Short answer is that the incentives for research are wrong.
- 75·Reddit commenter·1mo ago
Not misconduct, no. There’s nothing inherently wrong with it, assuming he’s not salami slicing, which is the most obvious form of dishonesty that might be applicable. Of course, it’s probably not that useful. I would imagine this is reflected in the quality of journals most of the papers are published in. Publication count on its own doesn’t mean a great deal.
- 50·Reddit commenter·1mo ago
Are they lying about what they have done? If not, why would it be research misconduct? There are thousands and thousands of PhD students; not everyone will generate great papers. If you see a paper is garbage, just delete it and move on.
- 35·Reddit commenter·1mo ago
I once rushed to do a course project the night before it was due. I opened a Kaggle notebook, got a Kaggle dataset related to blockchain fraud, and spent 1-2 hours implementing simple fraud detection using out-of-the-box tools from sklearn and xgboost. I also found a paper with pretty much the same result, but it had 15 pages, 4 authors, and a few dozen citations. They added a bunch of other pre-processing steps and got the same result as me rushing a course project in 2 hours. That's the quality of many research papers nowadays.
- 34·Reddit commenter·1mo ago
If it's cited and published, it seems to be valuable research. Not everything needs to be novel. Sometimes having a reliable benchmark for YOLO is what other people need.
- 34·Reddit commenter·1mo ago·reply
Citations are not necessarily an indication of value. Some of these citations are [other papers by himself.](https://www.sciencedirect.com/science/article/abs/pii/S0306261925013686) Others look like they just did a literature search for 'YOLO' and cited whatever came up. For example [this paper](https://www.mdpi.com/2504-446X/10/2/126) cites OP's paper to support this sentence: > These factors make it challenging for existing lightweight detectors to simultaneously achieve high accuracy, low latency, and stable throughput on embedded edge platforms [14,15,16]. They 100% wrote this sent…
- 23·Reddit commenter·1mo ago
I think this happens when colleges focus more on quantity than quality. I can think of so many colleges that actually do this. This is not misconduct, but rather just a flaw in the system and how people are using the flaw to their advantage and pushing out stuff like this.
- 23·Reddit commenter·1mo ago·reply
Genuine question, not trying to be facetious. Why is prompting-pattern-based research bad? I find prompting-pattern papers interesting because LLMs have many components that are black boxes, emergent from the way they are trained: i.e., nobody programmed attention patterns logically; the function each head performs emerged as something useful to conserve during training. Same with MLP lookups. The only way you can really inspect how LLMs work is by looking at prompting patterns. Honestly, I find it similar to biology, where the DNA is mysterious. To learn what the DNA …
- 23·Reddit commenter·1mo ago
Sharing a late-stage professor's perspective: There are lots of different kinds of people with the title "professor," and just because one person does this, does not mean that you should do this if you want to become a respected researcher. At the upper stages of academia, we are used to seeing all sorts of "games" people play to juice metrics, like salami-slicing papers, writing non-replicable results, overclaiming, staking territory with shallow studies, etc. Sometimes it works and can convince deans and university administrators that you are important and valuable. But when you get to the …
- 19·Reddit commenter·1mo ago
It's probably fine. Someone has to do that kind of research. It's useful to record historical benchmarks of these things. Research isn't necessarily meant to be hard; it can be easy as long as it's useful. Maybe that prof found a way to make easy contributions which fill a necessary niche. Those publications probably also have a low impact factor.