  • I think the academic advice about Wikipedia was sadly mistaken.

    Yeah, a lot of people had your perspective about Wikipedia while I was in college, but they are wrong, according to Wikipedia.

    From the link:

    We advise special caution when using Wikipedia as a source for research projects. Normal academic usage of Wikipedia is for getting the general facts of a problem and to gather keywords, references and bibliographical pointers, but not as a source in itself. Remember that Wikipedia is a wiki. Anyone in the world can edit an article, deleting accurate information or adding false information, which the reader may not recognize. Thus, you probably shouldn’t be citing Wikipedia. This is good advice for all tertiary sources such as encyclopedias, which are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, provides overviews of a topic and indicates sources of more extensive information.

    I personally use ChatGPT like I would Wikipedia. It’s a great introduction to a subject, especially in my line of work, which is software development. I can get summarized information about new languages and frameworks really quickly, and then dive into the official documentation once I have a high-level understanding of the topic at hand. Unfortunately, most people do not use LLMs this way.



  • Throughout most of my years of higher education, as well as K-12, I was told that citing Wikipedia was forbidden. In fact, many professors and teachers would automatically fail an assignment if they felt you were using Wikipedia. The claim was that the information was often inaccurate, or changed too frequently to be reliable. This reasoning, while irritating at times, always made sense to me.

    Fast forward to my professional life today. I’ve been told on a number of occasions that I should trust LLMs to give me an accurate answer. I’m told that I will “be left behind” if I don’t use ChatGPT to accomplish things faster. I’m told that my concerns about accuracy and ethics surrounding generative AI are simply “negativity.”

    These tools are (abstractly) referencing random users on the internet as well as Wikipedia, and treating both as legitimate sources of information. That seems crazy to me. How can we trust a technology that just regurgitates flawed sources from our past? I know there are ways to improve accuracy, such as retrieval-augmented generation (RAG), but most people are hitting the LLM directly.

    The culture around generative AI should be scientific and cautious, but instead it feels like a cult with a good marketing team.
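
    For readers unfamiliar with the RAG idea mentioned above, here is a minimal sketch of the concept: retrieve relevant documents first, then paste them into the prompt so the model answers from sources rather than from memory. Everything here is made up for illustration — the keyword-overlap retriever is a stand-in for the embedding search a real system would use, and no actual LLM is called.

    ```python
    # Toy sketch of retrieval-augmented generation (RAG).
    # Step 1: rank a document collection against the query.
    # Step 2: build a prompt that grounds the model in the retrieved text.

    def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
        """Rank documents by shared words with the query; keep the top k.

        A real retriever would use vector embeddings, not word overlap.
        """
        query_words = set(query.lower().split())
        scored = sorted(
            documents,
            key=lambda doc: len(query_words & set(doc.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def build_prompt(query: str, documents: list[str]) -> str:
        """Assemble a grounded prompt: retrieved sources first, then the question."""
        context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
        return (
            "Answer using ONLY the sources below.\n"
            f"Sources:\n{context}\n"
            f"Question: {query}"
        )

    docs = [
        "Rust uses ownership and borrowing for memory safety.",
        "Python is dynamically typed and garbage collected.",
        "Go compiles to a single static binary.",
    ]
    print(build_prompt("How does Rust handle memory safety?", docs))
    ```

    The point of the exercise: the model's answer is only as good as what lands in `Sources:`, which is exactly why "hitting the LLM directly" skips the part that makes the output checkable.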