defgeneric's comments | Hacker News

Having a reputation as the AI company that genuinely cares about education and about integrating AI into it responsibly is a pretty valuable goal. They are now ahead of OpenAI in this respect.

The problem is that there's a conflict of interest here. The extreme case makes it plain: leaving aside feasibility, what if the only real solution is a total ban on AI use in education? Anthropic could never sanction that.


After reading the whole article I still came away with the suspicion that this is a PR piece designed to head off strict controls on LLM usage in education. There is a fundamental problem here beyond cheating (which is mentioned, to their credit, albeit little discussed). Some academic topics are only learned through sustained, even painful, sessions where attention has to be fully devoted, where the feeling of being "stuck" has to be endured, and where the brain is given space and time to do the real work of synthesizing, abstracting, and learning--in short, thinking. The prompt chains where students ask "show your work" and "explain" can be read as the kind of back-and-forth you'd hear between a student and a teacher, but they could also just be evidence of higher forms of "cheating". If students are not really working through the exercises at the end of each chapter, but are instead offloading the task to an LLM, then we're going to have a serious competency issue: nobody will have actually learned anything.

Even in self-study, where the solutions are at the back of the text, we've probably all felt the temptation to give up and just flip to the answer. It would be more responsible of Anthropic to admit that the solution manual to every text ever written is now instantly and freely available. That has to fundamentally change pedagogy. No discipline is safe, not even those like music where you might think the end performance is the main thing (imagine a promising, even great, performer who cheats themselves during their education by offloading all the difficult work in their music theory class to an AI, and comes away having learned essentially nothing).

P.S. There is also the issue of grading on a curve in the current "interim" period where this is all new. Assume a lazy professor, or one refusing to adopt any new kind of teaching/grading method: the "honest" students have no incentive to do it the hard way when half the class is going to cheat.


The BBC's Shock and Awe: The Story of Electricity [0] documentary really made it click for me. The historical development and the conceptual development are woven together nicely.

[0] https://en.wikipedia.org/wiki/Shock_and_Awe:_The_Story_of_El...


While the post talks about big LLMs as a valuable "snapshot" of world knowledge, the same technology can be used for lossless compression: https://bellard.org/ts_zip/.
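
The core trick, as I understand ts_zip, is that a predictive model's next-token probabilities can drive an entropy coder, so the better the model predicts the text, the fewer bits you need. Below is only my own toy sketch of that idea, not Bellard's code: it substitutes a bigram character model for the LLM and rank coding plus zlib for a real arithmetic coder, and all the names are made up for illustration.

    import zlib
    from collections import Counter, defaultdict

    class BigramModel:
        """Order-1 character model standing in for the LLM (ASCII-only toy)."""
        def __init__(self, corpus):
            self.counts = defaultdict(Counter)
            for a, b in zip(corpus, corpus[1:]):
                self.counts[a][b] += 1

        def ranked(self, prev):
            # Candidate next chars, most probable first, then the rest of the byte range.
            seen = [c for c, _ in self.counts[prev].most_common()]
            rest = [chr(i) for i in range(256) if chr(i) not in seen]
            return seen + rest

    def encode(text, model):
        ranks = bytearray()
        prev = "\x00"
        for ch in text:
            ranks.append(model.ranked(prev).index(ch))  # low rank = well predicted
            prev = ch
        return zlib.compress(bytes(ranks))              # generic entropy coder

    def decode(blob, model):
        out, prev = [], "\x00"
        for r in zlib.decompress(blob):
            ch = model.ranked(prev)[r]
            out.append(ch)
            prev = ch
        return "".join(out)

Both sides build the model from the same shared corpus, and decode(encode(text, model), model) round-trips losslessly; the better the model's predictions, the more compressible the rank stream becomes. ts_zip does the same thing properly, pairing a language model with an arithmetic coder.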


Exactly, this is alienation. Marx went on to describe the emergence of capital in history as another form of alienation.


It's funny to see the Vatican reuse Feuerbach's thesis--that humanity created the idea of God and then became enslaved to it--to talk about AI, given that they are the gatekeepers of the original Artificial Idea called God :)

But in this text we can also feel the ideas of the human soul and free will crumbling, ideas that are likewise at the core of secular humanism.

Marxist analysis is also challenged: we can speculate that AI would send the organic composition of capital through the roof... but can you really talk about the OCC when singularity-level AIs resemble Aladdin's lamp or the Green Lantern ring more than a highly automated factory, to say nothing of the possibility that they have agency of their own?


This is correct; the relation to "synopsis" suggested above is a false etymology that only sounds plausible because of the shared syn- prefix.


I was about to assert the same as you with as much confidence, but the etymology source I trust most (EtymOnline) nearly agrees with OP [0]:

> 1763, in reference to tables, charts, etc., "pertaining to or forming a synopsis," from Modern Latin synopticus, from Late Latin synopsis (see synopsis). It was being used specifically of weather charts by 1808. Greek synoptikos meant "taking a general or comprehensive view."

> The English sense "affording a general view of a whole" emerged by mid-19c. The word was used from 1841 specifically of the first three Gospels, on notion of "giving an account of events from the same point of view." Related Synoptical (1660s). The writers of Matthew, Mark, and Luke are synoptists.

The subtle difference from OP's account is that EtymOnline does include some sense that the word 'synoptic' should be understood as describing the way the works relate to one another. But they do say that the connection to 'synopsis' is, in fact, part of the original intent of the usage.

[0] https://www.etymonline.com/word/synoptic


I may be missing the point of this blog, but I don't think I am. I too miss the era of blogging, but this isn't it. The only thing this piece does is name-drop--without having the courage to actually name names--and point out how those "famous" (we can't judge, since they're unnamed) people were so wrong back then and the writer was so right. The line that actually answers the title is a cliché and isn't elaborated on: the web brings like-minded people together from all over the world.


I don't know the context, but Claus Kiefer was on the Physics Frontiers podcast recently talking about this paper:

https://arxiv.org/abs/2305.07331

Gödel's undecidability theorems and the search for a theory of everything

"I investigate the question whether Gödel's undecidability theorems play a crucial role in the search for a unified theory of physics. I conclude that unless the structure of space-time is fundamentally discrete we can never decide whether a given theory is the final one or not. This is relevant for both canonical quantum gravity and string theory."


Perhaps what we should be pushing for is a law that would force full disclosure of the training corpus and require a curated version of the training data to be made available. I'm sure a law like that would have all kinds of unintended consequences, but maybe we'd be better off starting from a strong baseline and carving out exceptions as they arise. While billions have been spent to train these models, the value of the millions of human hours spent creating the content they're trained on should likewise be recognized.


Arguably it's more than just moral superiority, with thin being considered beautiful (high status) and fat being considered ugly (low status).

