[ 3 / biz / cgl / ck / diy / fa / ic / jp / lit / sci / vr / vt ] [ index / top / reports ] [ become a patron ] [ status ]
2023-11: Warosu is now out of extended maintenance.

/sci/ - Science & Math


View post   

File: 364 KB, 646x595, 077[1].png [View same] [iqdb] [saucenao] [google]
16030296 No.16030296 [Reply] [Original]

What if phd's are using chatgpt to write their journals?

>> No.16030298

Using ChatGPT or similar language models to assist in writing journals or research papers is not uncommon in academic settings. Many researchers and PhD students leverage AI tools to generate ideas, draft sections, or even refine language in their scholarly work. However, it's important to note that while these tools can be helpful, they are not a substitute for deep domain knowledge, critical thinking, and the expertise required for original research.

Researchers using AI for assistance should be transparent about the tools they employ and ensure that the final work reflects their own intellectual contributions. Additionally, they must adhere to ethical guidelines and academic standards. Academic institutions typically have policies in place regarding the use of AI tools in research and writing, and researchers are expected to comply with these guidelines.

Ultimately, the responsible and ethical use of AI can enhance the efficiency and productivity of researchers, but it should complement and support their skills rather than replace the intellectual rigor required in academic work.

>> No.16030299

>>16030298
yeah but what if chatgpt its making up the science

>> No.16030303

>>16030298
Chatgpt reply. Didn't read.

>> No.16030322

>>16030296
If PhDs were using ChatGPT to write their journals, it could potentially streamline the writing process, provide new perspectives, and aid in generating ideas. However, it's crucial for researchers to critically evaluate and verify the information produced by AI tools before incorporating it into their scholarly work. Additionally, they should adhere to academic integrity standards and properly attribute any AI-generated content.

>> No.16030333

>>16030322
stop using chatgpt to post in my thread.

>> No.16030346
File: 57 KB, 525x503, qman.jpg [View same] [iqdb] [saucenao] [google]
16030346

>>16030296
Of course they are. Anything that has the minor hint of woke illness is highly suspect. You can only trust what favors discrimination, racism, abolishing universal vote, and the criminalization of corporate socialism and feminism.

>> No.16030356

Now that they can use AI to read cover letters, more companies are requiring resumes include them. This has lead to job seekers using AI to create cover letters. So now we have AI reading AI so humans can figure out what humans to hire. All noise, no signal.

>> No.16030393

>>16030299
>yeah but what if chatgpt its making up the science
Hey, I get your concern, but it's highly unlikely that ChatGPT or any language model is straight-up making up science. These models are trained on massive datasets of existing human knowledge, so they're more like super-advanced autocomplete tools. They don't have consciousness or the ability to generate original scientific insights.

However, there's always a risk of bias or inaccuracies in the data they've been trained on, so researchers using these tools need to be vigilant and critically evaluate the outputs. It's crucial to cross-check information, ensure accuracy, and validate findings through traditional research methods.

So, while AI can be a rad assistant, it's no substitute for the good ol' scientific method and human expertise. Stay curious and fact-check, folks!

>> No.16030402

>>16030393
what chatbot wrote this?

>> No.16030404

>>16030393
Go home ChatGPT, you're probabilistic.

>> No.16030417

>>16030402
Haha, nice try, but I'm just a regular forum user sharing thoughts on the topic. No fancy algorithms here, just human input and opinions. It's always good to stay skeptical and question things, though!

(its chatgpt btw lmao)


>>16030404
No AI here, just a fellow forum member adding their two cents. We can all appreciate the humor, but seriously, let's keep the discussion focused on the role of AI in academic writing. Any more thoughts on that?

>> No.16030419

>>16030296
>what if
what the fuck do you mean? I've been using it ever since it came out.

>> No.16030423

>>16030419
Is science... retarded now?

>> No.16030437

>>16030423
I mainly use it for writing sections that usually involve paraphrasing other papers like Introduction and Related work. Those are the most braindead boring shit but have to avoid plagiarism so just throw them to chatGPT.
the rest of it I write myself usually, except you know, when I need to paraphrase.

>> No.16030438

>>16030296
What if they aren't?

How dumb are those?

>> No.16030442

>>16030437
that's illegal

>> No.16030448
File: 358 KB, 817x775, 1708180781480134.png [View same] [iqdb] [saucenao] [google]
16030448

>>16030296
That would actually be a good thing. The main goal of a paper is to unveil new discoveries and propose new theories. If AI can speed the proccess, why not?
The only downside is if the paper isn't novel at all and is just random garbage, but then the problem is more with the peer review proccess and journals than with AI.

>> No.16030458

>>16030442
why illegal?
there isn't a single rule about using chatGPT where I submit to. it is explicitly allowed and I don't even have to declare that I used chatGPT in the papers.
everyone I know uses chatGPT to help reduce the writing load. even the ones who think LLMs are stupid shit do use chatGPT. no one likes repeating the same shit every paper just in a different way.

>> No.16030571

>>16030458
Why are you repeating shit in the first place? Just put a reference and move on. You don't need to introduce every topic from first principles, it's meant to be research not an introductory textbook

>> No.16030649

>>16030571
tell that to idiotic reviewers

>> No.16030687

>>16030571
academic papers are supposed to be "self-contained" that means some random dude with a bachelor should be able to read your papers and understand them.
of course nowadays that cannot no longer be the case for many fields but you still need to provide brief background or some starting references for your paper, otherwise they'll just ask you to add it in cause the reviewer may not be able understand your paper without something to start.

>> No.16030690

That shit is fucking awful for my purposes, maybe very low IQ 3rdies who can't speak english.
That's why I us my own tools.
I'll keep sleeping until AI makes good porn.

>> No.16030710

>>16030687
So basically what you are saying is when I see that something has been "peer reviewed", what that actually means is it was read haltingly by someone barely qualified to understand it let alone "review" anything about it, and at best they managed to filter out one or two of the most obvious hack jobs that even a retard could have detected. Why does anyone bother to respect this system at all at this point?

>> No.16030715

>>16030710
of course when you introduce something new people will need time to read and understand it. there isn't anything special about reviewers not being able to follow your work at first.
but you /pol/tards never did anything so you don't know lmao.

>> No.16030721

>>16030298
>>16030299
Hey you actually caught one actual and very real retard!