Hacker News | pizza234's comments

That's very subjective. Concepts like iteration are inevitable, and they don't look great in a declarative language like HCL.

I also find refactorings considerably harder in a declarative language, since configurations have a rigid structure.


From Calibre's repository README:

> Supports hundreds of AI models via Providers [...] no AI related code is even loaded until you configure an AI provider.

This fork is pretty much useless.


That's not true: there are some menu items and supporting code by default.

Still, the menu items don't interact with AI without you explicitly configuring it.

I bet clicking them without any configuration will just give you an error message.


How many inactive menu items that error out when clicked are acceptable? Are we ok with a Microsoft Word-style ribbon of controls that do nothing?

If UI bugs are really the issue, then one just sends patches to the upstream project - I'm sure the maintainers will be happy to receive fixes for broken menus. A fork for this is useless, and guaranteed to be abandoned.

Not just useless, but another fork that only confounds newcomers and users looking for help. We can be generally opposed to AI without making it a boogeyman.

This sentiment reflects the type of project being worked on - small ones. As projects get bigger, more type information gets lost, which is why it needs to be compensated for, typically via automated (unit) testing.

Having worked with gradual typing, IMO automated testing is not enough to document the code unless the codebase is very disciplined, as Ruby makes it very easy to use flexible data structures that quickly become messy.


> Many more elaborate projects exist. My favorite is the one that compiles something similar to Turbo Pascal to C64 6502

It depends on the purpose. The reference IDE is intended to produce real-world programs (it includes tools for sprites, music, etc.), while high-level language compilers are mostly academic, as they're not performant enough.


llvm-mos https://github.com/llvm-mos/llvm-mos seems to generate good enough code - more than competitive with other high-level languages for the 6502, though perhaps not so much compared with manual assembly coders.
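
For example, plain stdio code cross-compiles to a runnable .prg. This is only a sketch - the build command and the stdio-to-KERNAL routing are my assumptions from the llvm-mos-sdk docs:

    // hello.cc - hypothetical build, per the llvm-mos-sdk docs:
    //   mos-c64-clang++ -Os -o hello.prg hello.cc
    #include <cstdio>

    int main() {
        // The SDK's libc routes stdio through the C64 KERNAL,
        // so this prints straight to the C64 screen.
        puts("HELLO FROM LLVM-MOS");
        return 0;
    }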


Shoes was very limited, and could only be used for extremely simple applications.


VB6 deserved the huge popularity it had, but the reason wasn't the language design; rather, it was its (extremely) rapid GUI application development. That was actually a double-edged sword - it facilitated writing spaghetti code.

> You could do basically everything that you could do in languages like C/C++

As long as there is some form of memory access, any language can do basically everything that one can do in C/C++, but that's not a very meaningful comparison.


> As long as there is some form of memory access, any language can do basically everything that one can do in C/C++, but this doesn't make much sense.

No, VB6 had really easy COM integration, which let you tap into a lot of Windows system components. The same code in C++ often required hundreds of lines of scaffolding, and I'm not exaggerating.
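
To give a sense of the scaffolding, here is a minimal raw-C++ sketch (Excel as the classic Automation example; error checks elided; link ole32.lib and oleaut32.lib):

    #include <windows.h>

    int main() {
        CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

        // Resolve the ProgID to a class ID and start the server.
        CLSID clsid;
        CLSIDFromProgID(L"Excel.Application", &clsid);
        IDispatch* app = nullptr;
        CoCreateInstance(clsid, nullptr, CLSCTX_LOCAL_SERVER,
                         IID_IDispatch, reinterpret_cast<void**>(&app));

        // One property assignment: name lookup, a packed VARIANT,
        // then Invoke. In VB6 this whole dance is `app.Visible = True`.
        DISPID dispid;
        OLECHAR* name = const_cast<OLECHAR*>(L"Visible");
        app->GetIDsOfNames(IID_NULL, &name, 1, LOCALE_USER_DEFAULT, &dispid);

        VARIANT arg;
        VariantInit(&arg);
        arg.vt = VT_BOOL;
        arg.boolVal = VARIANT_TRUE;
        DISPID putid = DISPID_PROPERTYPUT;
        DISPPARAMS params = { &arg, &putid, 1, 1 };
        app->Invoke(dispid, IID_NULL, LOCALE_USER_DEFAULT,
                    DISPATCH_PROPERTYPUT, &params, nullptr, nullptr, nullptr);

        app->Release();
        CoUninitialize();
        return 0;
    }

And this is the late-bound IDispatch route; early binding against raw vtable interfaces trades the Invoke ceremony for header and reference-counting ceremony instead.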


FWIW, the pywin32 Python package and the win32ole Ruby package have streamlined COM integration for Python and Ruby. Not quite as easy as VB6, but it's pretty close. I was even able to tab-complete COM names in the Emacs Python REPL, but I remember it being a little buggy.


It probably still sucks in C, but the C++ DX got a lot better. Importing the IDL would generate wrapper functions that made calling code look much more like normal function calls. It would check the HRESULT and return the out param from the function. They also introduced types like _variant_t that help with boxing and unboxing native types. It still wasn't fun, but it greatly reduced line count.
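
Roughly like this - a sketch, assuming Excel's type library is registered on the machine (the generated namespace and pointer names are illustrative):

    // MSVC-specific: #import generates typed wrappers from the
    // registered type library at compile time.
    #import "progid:Excel.Application" rename_namespace("xl")
    #include <comdef.h>  // _com_error, _variant_t, _bstr_t
    #include <windows.h>

    int main() {
        CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
        try {
            // Generated smart pointer; CoCreateInstance under the hood.
            xl::_ApplicationPtr app(L"Excel.Application");
            // Wrappers check HRESULTs (throwing _com_error on failure)
            // and expose properties and out params as ordinary values.
            app->Visible = VARIANT_TRUE;
        } catch (const _com_error&) {
            // Inspect the failing HRESULT via _com_error::Error().
        }
        CoUninitialize();
        return 0;
    }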


Nah, unless we're talking about C++ Builder's extensions for COM, in Visual C++ land it still sucks big time.

For some reason, there are vocal teams at Microsoft that resist anything in C++ comparable to the ease of use that VB, Delphi, .NET, and C++ Builder have regarding COM.

Hence why we got MFC COM, ATL COM, WRL, WinRT (as a COM evolution), C++/CX, C++/WinRT, WIL, and eventually all of them lose traction with that vocal group, which apparently would rather use COM with bare-bones IDL files, using the command line and vi on Windows, most likely.


Windows has a COM system; VB6 isn't special. You can do that with VB.Net or C# too, and with C and C++. Windows COM is a thing; "VB6 COM" isn't, as VB6 only hooked into Windows COM.


I'm just giving context as to why VB6 was much better than C++ back in the day for building Windows apps. VB.Net and C# didn't exist in the halcyon days of 1998.


> Rust spends almost all of its complexity on eliminating weaknesses that don't even make the top 5 in either list

Uh? Number 2 in CWE is an out-of-bounds write, and the same vulnerability is number 1 in the KEV list.


That falls into "spatial" memory safety, which Zig also provides. Rust's ownership and lifetimes - the core of its design - go toward eliminating use-after-free, aka "temporal" memory safety.
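
To make the spatial/temporal distinction concrete, here is what the two bug classes look like in C++ (the commented-out lines are the bugs):

    #include <vector>

    int main() {
        std::vector<int> v = {1, 2, 3};

        // Spatial: out-of-bounds access. Zig's safe build modes and
        // Rust's indexing catch this at runtime; in C++ it is silent
        // undefined behavior.
        // v[10] = 42;

        // Temporal: use-after-free. push_back may reallocate, leaving
        // 'first' dangling; Rust's borrow checker rejects this pattern
        // at compile time.
        int* first = &v[0];
        v.push_back(4);
        // *first = 7;  // UB: may write into freed memory

        (void)first;
        return 0;
    }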


> Rust's popularity seems to be dropping or holding steady in indexes like TIOBE, and a lot of big "influencers" seem to be over Rust's hype cycle

It is correct that the hype is past its peak; however, the TIOBE trend (if one wants to use that) is actually steadily increasing.

> There is still some development in linux

"Some development" is a miscarachterization - the official addition to the Linux kernel itself is a very big deal, and its adoption is increasing and will continue to do so.

I think that Rust has found its niche in safe low-level programming, and it will slowly take an increasingly dominant role in it (although the ceiling of this area is certainly limited in the global landscape).


Here's my 2 cents: a programming language (or any technology - smartphones, LLMs, etc.) has a honeymoon phase, during which everyone's excited about it, extols its virtues, and focuses on how different it is from everything else.

Once that's over, people start looking at it with a more pragmatic eye - how much better is this really than what I had before? People start focusing less on the gimmicks and more on everyday usability.

For a programming language to be really popular, it needs that something that captures people's imaginations, but ultimately the staying power is determined by how useful it turns out to be.

Kotlin and Swift are very practical, but never had any wow features, so they never got hyped; they just quietly got more popular. Go had it with its green threads and channels, but nowadays most people seem to be not using those that much (I don't think there are a ton of instances of Go processes in prod with 10k threads), but otherwise it's a solid language.

Rust - it's a solid language as well, and an improvement over C++ in some aspects like package management, but its borrow checker and programming style are divisive.

Thing is, unlike goroutines, there's no avoiding the borrow checker, so a lot of people don't really commit to Rust.


> Go had it with its green threads and channels, but nowadays most people seem to be not using those that much (I don't think there are a ton of instances of Go processes in prod with 10k threads)

Err... this is incorrect. Some projects that regularly scale to 10k goroutines on a single instance are Loki and CockroachDB. Even your common NATS server can run 5k+ goroutines on a single instance, and that's not uncommon.


Rust has been adopted by Microsoft, Amazon, etc. It's past the hype phase and well into the "languages people use for work and complain about" phase.

I still like it -- for systems programming, that is. It's a much better C++ basically.


> I still like it -- for systems programming, that is.

Just curious, what language(s) do you prefer for things that you don't classify as "systems programming"?


Go and TypeScript are both nice.


> Anyway C++ isn't as complicated as people say, most of the so called complexity exists for a reason, so if you understand the reasoning it tends to make logical sense.

I think there was a comment on HN by Walter Bright, saying that at some point, C++ became too complex to be fully understood by a single person.

> You can also mostly just stick to the core subset of the language

This works well for tightly controlled codebases (e.g. Quake by Carmack), but I'm not sure how this works in general, especially when project owners change over time.


This issue is not black and white.

It is accepted, within limits, for humans to do transformative work, but it hasn't yet been established what the limits for AIs are, primarily (IMO): 1. whether the work is transformative or not, and 2. whether the scale of transformation/distribution changes the nature of the work.


Embedding other people's work in a vector space, then sampling from the distribution at a different point in the vector space, is not a central member of the "transformative" category. The justifications for allowing transformative uses do not apply to it.


That does seem to be the plurality opinion, yes. But you are responding to someone saying that what counts as transformative hasn't been decided by saying that you have decided. We don't know how human brains do it. What if we found that humans actually do it in the same way? Would that alter the dialog, or should we still give preference to humans? If we should, why should we?


> or should we still give preference to humans? If we should, why should we?

Because of the limited scaling ability of a human brain: you cannot plug more brains into a building to pump out massive amounts of transformative work. It requires a lot for humans to do it, which creates a natural limit to the scale that's possible.

Scale and degree matter even if the process is 100% analogous to how humans do it. The only natural limitation for computers is compute, which requires some physical server space and electricity, both of which can be minimised with further technological advances. This completely changes the foundation of the concept of "transformative work", which before required a human being.


This is a good observation, and motivates adjusting the legal definition of "transformative" even if the previous definition did include what generative AI systems can now do.


> What if we found that humans actually do it in the same way?

We know that humans don't – or, at least, aren't limited to this approach. A quick perusal of an Organization for Transformative Works project (e.g. AO3, Fanlore) will reveal a lot of ideas that are novel. See the current featured article on Fanlore (https://fanlore.org/wiki/Stormtrooper_Rebellion), or (after heavy filtering) the Crack Treated Seriously tag on AO3 (https://archiveofourown.org/works?work_search[sort_column]=k...). You can't get stuff like this from a large language model.


People could be doing their own transformative works, and then posting them to tumblr or whatever with a “Ghibli style” tag or something.


Critiques like this dismiss AI as a bunch of multiplications, while in reality it is backed by extensive research, implementation, and data preparation. There's enormous complexity behind it, making it difficult to categorize as simply transformative or not.


The Pirate Bay is also backed by extensive research, implementation, and data preparation. I'm not dismissing anything as "a bunch of multiplications" – you'll note I talked about embedding in vector spaces, not matrix multiplication. (I do, in fact, know what I'm talking about: if you want to dismiss my criticism, please actually engage with it like the other commenters have.)


Development of a product like ChatGPT has been orders of magnitude more resource-intensive than the Pirate Bay, by any measure. It's perplexing how, when some people think of LLMs, they talk as if they could develop one in an afternoon; it's precisely their complexity that warrants the question of whether they can be considered transformative in nature.


Any type of art is inspired by the art of others. It's the ease with which you can now generate "art" that is the issue. Stealing artists' work while also making it harder than ever for them to make a living is a deeply ethical issue. AI "artists" and "art" disgust me. It's a skill you build over your whole life; taking the shortcut because you're unwilling to learn the craft is deeply insulting to real artists. Good thing traditional art is still somewhat safe from this. Thankfully, this is making it easier to leave highly addictive online platforms, as I boycott AI content of any form.


> It's a skill you build over your whole life; taking the shortcut because

Doesn't this apply to the printing press?

For me the core issue is not that OpenAI can generate some copies of the art; the issue is that some artists cannot earn an honest living, and that people generally do not care about artists. I wonder how many of the people commenting here have ever bought art from an artist.

I personally doubt that AI can make a movie similar to a Studio Ghibli one (I've seen a lot of them, love them, and have paid for them), and I also wonder how much of the issue here is corporate profit rather than poor artists (do you know who owns Studio Ghibli without looking it up?)

It's fine to boycott AI content, but you could also decide to boycott content produced by large corporations for profit.

