I wonder if the concern about "civic institutions" as some unique and special thing within society is a generational thing; as a millennial, I've almost always viewed "universities and a free press" ("rule of law" is much more nebulous) as simply "institutions", or rather "the establishment", the key distinction being that "the establishment" also includes corporations, banks, big capital, etc.

The "institution" of the AI industry is actually a perfect example of this; the so-called "free press" uncritically repeats its hype at every turn, corporations (unsurprisingly) impose AI usage mandates, and even schools and universities ("civic institutions") are getting in on implicitly or explicitly encouraging its use.

Of course this is a simplification, but it certainly makes much more sense to view AI as another way "the establishment" is degrading "society" in general, rather than in terms of some imagined conflict between "civic institutions" and "the private sector", as if there were ever any real distinction between the two.


Yeah, it's quite unbelievable that people are still repeating the "AI everything -> UBI -> everyone comfortable and well-fed" line.

Who's to say governments won't just let society continue on the exact same trajectory it was on before generative AI: "Lost your job? Too bad for you. Here's a gig economy and maybe some minimal food stamps. Innovation marches on!"


If the "author" couldn't be bothered to write it, why should the "reader" bother to read it?

Yep, it never fails. Here's another prediction for "the next two years of software engineering": AI vendors will start using their senior devs' personal domains to publish their advertising pieces, in an attempt to deflect scrutiny when such things are posted to social media.

In the sense of LLMs becoming competent enough, no, it's extremely unlikely.

In the sense of "smart tech founders" and C-suite executives in general using AI hype as an excuse for layoffs and understaffing, that's absolutely in the realm of possibility, and already happening in some places.


I'm glad Paul Louth of https://github.com/louthy/language-ext/ is here in the comments.

At this point basically everyone has been exposed to the concept of `Option/Result/Either/etc.`, and discussions typically end up revolving around the aesthetics of exception throwing vs. method chaining vs. if statements etc. without any concept of the bigger picture.
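
To make the contrast concrete, here's a hedged sketch of the three styles using only the BCL (the little Dictionary store and lookups are hypothetical stand-ins, not any particular library's API):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    var users = new Dictionary<int, string> { [42] = "Ada" };

    // Style 1: exception throwing (the indexer throws on a missing key)
    try { Console.WriteLine(users[7]); }
    catch (KeyNotFoundException) { Console.WriteLine("not found"); }

    // Style 2: if statement (the TryGetValue out-parameter pattern)
    if (users.TryGetValue(7, out var name)) Console.WriteLine(name);
    else Console.WriteLine("not found");

    // Style 3: method chaining (LINQ + null here, since the BCL has no
    // Option<T>; filling that gap is exactly what these libraries do)
    Console.WriteLine(users.Where(kv => kv.Key == 7)
                           .Select(kv => kv.Value)
                           .FirstOrDefault() ?? "not found");

All three do the same thing; the "bigger picture" question is what the rest of the codebase returns and accepts.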

LanguageExt really presents a unified vision for, and experience of, Functional Programming in C# for those who are truly interested, akin to what's been going on in the Scala ecosystem for years.

I've been using it and following its development for a few years now and it continually impresses me and makes C# fresh and exciting each day.


Aww, thanks Mike! And thank you for the contributions and suggestions too :)

> At this point basically everyone has been exposed to the concept of `Option/Result/Either/etc.` and discussions typically end up revolving around the aesthetics of exception throwing vs. method chaining vs. if statements etc. without any concept of the bigger picture.

I think this is a really important point. 12 years ago I created a project called 'csharp-monad' [1], it was the forerunner to language-ext [2], which I still keep on github for posterity. It has the following monadic types:

    Either<L, R>
    IO<T>
    Option<T>
    Parser<T>
    Reader<E,T>
    RWS<R,W,S,T>
    State<S,T>
    Try<T>
    Writer<W,T>
One thing I realised after developing these monadic types was that they're not much use on their own. If your List<T> type's Find method doesn't return Option<T>, then you haven't gained anything.
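
To make that concrete, here's a minimal sketch (hypothetical names, nothing like language-ext's full API) of the smallest thing that actually pays off: an Option type plus a collection method that returns it:

    using System;
    using System.Collections.Generic;

    // A bare-bones Option<T>, for illustration only
    public readonly struct Option<T>
    {
        readonly T value;
        readonly bool isSome;
        Option(T value) { this.value = value; isSome = true; }
        public static Option<T> Some(T value) => new(value);
        public static Option<T> None => default;
        public R Match<R>(Func<T, R> some, Func<R> none) =>
            isSome ? some(value) : none();
    }

    public static class ListExtensions
    {
        // The payoff: a Find that can't silently hand back null/default(T)
        public static Option<T> FindOpt<T>(this List<T> xs, Predicate<T> p)
        {
            foreach (var x in xs)
                if (p(x)) return Option<T>.Some(x);
            return Option<T>.None;
        }
    }

Without something like FindOpt, callers keep falling back to List<T>.Find's null/default(T) convention and the Option type never actually enters the codebase.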

I see others on here taking a similar journey to the one I took over a decade ago. There's an obsession with creating Result types (Either<L, R> and Fin<A> in language-ext, btw) and the other basic monads, but there's no thought as to what comes next. Every one of them will realise that their result type is useless if nothing returns it.

If you're serious about creating declarative code, then you need an ecosystem that is declarative. And that's why I decided that a project called "csharp-monad" was too limiting, so I started again (language-ext) and began writing immutable collections, concurrency primitives, parsers, and effect systems (amongst others), where everything works with everything else. A fully integrated functional ecosystem.
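
To give a flavour of everything working with everything else, a simplified sketch using Option and the Prelude (treat the details as illustrative):

    using System;
    using LanguageExt;
    using static LanguageExt.Prelude;

    Option<int> ParseInt(string s) =>
        int.TryParse(s, out var n) ? Some(n) : None;

    // Monad comprehension over Option: the whole expression
    // short-circuits to None if either parse fails
    Option<int> sum = from a in ParseInt("40")
                      from b in ParseInt("2")
                      select a + b;

    Console.WriteLine(sum.Match(Some: n => $"sum = {n}",
                                None: () => "no sum"));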

The idea is to make something that initially augments the BCL and then replaces/wraps it out of existence. I want to build a complete C# functional framework ecosystem (which admittedly is quite an undertaking for one person).

I'm sometimes a little wary about going all in on the evangelism here. C# devs in general tend to 'stick to what they know' and don't always like the new, especially when it's not idiomatic - you can see it in a number of the sub-threads here. But I made a decision early on to fuck the norms and focus on making something good on its own terms.

And for those that wonder "Why C#?" or "Why not F#?", well C# has one of the best compilers and tooling ecosystems out there, it's got an amazing set of functional language features, it will have ADTs in the next version, and it has a strong library ecosystem. It also has the same kind of borrow-checker-style low-level capability as Rust [3]. So as an all-rounder language it's quite hard to beat: from 'to-the-metal bit-wrangling' right the way up to monad comprehensions. It should be taken more seriously as a functional language, but also just generally as a language that can survive the turmoil of a long-lived project (where mostly you want easy-to-maintain code for the long term, but occasionally you might need to go in and optimise the hell out of something).
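
For anyone curious what that borrow-checker-style capability looks like in practice, the gist is ref structs (standard C#, nothing language-ext-specific):

    using System;

    Span<byte> buffer = stackalloc byte[64]; // stack-only, no GC allocation
    buffer[0] = 0xFF;

    // Span<T> is a ref struct, so the compiler enforces that it never
    // escapes the stack frame: it can't be boxed, stored in a class
    // field, captured by a lambda, or kept across an await. Those are
    // the lifetime-style guarantees explored in [3].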

My approach will piss some people off, but my aim is for it to be like the Cats or Scalaz community within the larger Scala community.

It's certainly a labour of love right now. But over a decade later, I'm still enjoying it, so it can't be all bad.

(PS Mike, I have new highly optimised Foldable functionality coming that is faster than a regular C# for-loop over an array. Watch this space!)

[1] https://github.com/louthy/csharp-monad

[2] https://github.com/louthy/language-ext

[3] https://em-tg.github.io/csborrow/


Since when is a study needed to confirm that enabling a dopamine addiction, especially in developing minds, is a bad idea? Isn't our own direct experience as adults/parents struggling with said addictions enough?


Since a study is needed to determine if anything is true. Sometimes the study is simple: look out the window and see if it's raining. But this is not one of those.


The idea of replicating a consciousness/intelligence in a computer seems to fall apart even under materialist/atheist assumptions: what we experience as consciousness is a product of a vast number of biological systems, not just neurons firing or words spoken/thought. Even considering something as basic as how fundamental bodily movement is to mental development, or how hormones influence mood and ultimately thought, how could anyone ever hope to replicate such things via software in a way that "clicks" to add up to consciousness?


Conflating consciousness and intelligence is going to hopelessly confuse any attempt to understand if or when a machine might achieve either.

(I think there's no reasonable definition of intelligence under which LLMs don't possess some, setting aside arguments about quantity. Whether they have or in principle could have any form of consciousness is much more mysterious -- how would we tell?)


Defining machine consciousness is indeed mysterious; at the end of the day it ultimately depends on how much faith one puts in science fiction rather than on any objective measure.


Seems like a philosophy question, with maybe some input from neuroscience and ML interpretability. I'm not sure what faith in science fiction has to do with it.


I don't see a strong argument here. Are you saying there is a level of complexity involved in biological systems that cannot be simulated? And if so, who says sufficient approximations and abstractions aren't enough to simulate the emergent behavior of said systems?

We can simulate weather (poorly) without modeling every hydrogen atom interaction.


The argument is about causation or generation, not simulation. Of course we can simulate just about anything; I could write a program that just prints "Hello, I'm a conscious being!" instead of "Hello, World!".

The weather example is a good one: you can run a program that simulates the weather in the same way my program above (and LLMs in general) simulates consciousness, but no one would say the program is _causing_ weather in any sense.

Of course, it's entirely possible that more and more people will be convinced AI is generating consciousness, especially when tricks like voice or video chat with the models are employed, but that doesn't mean that the machine is actually conscious in the same way a human body empirically already is.


>but that doesn't mean that the machine is actually conscious in the same way a human body empirically already is

Does it matter? Is a dog/cow/bird/lizard conscious in the same way a human is? We're built from the same basic parts, and yet humans seem to have a higher state of consciousness than other animals around us.

For example, the definition of the word "conscious" is

>aware of and responding to one's surroundings; awake.

I'll grant that we likely mean this in a more general sense, but I'd say we're pretty close to this with machines. They can observe the real world with sensors of different types, and then either directly compute, or use neural nets to make, generalized decisions about what is occurring around them, then proceed to act on those observations.


If you simulate rainy weather, does anything get wet?

(Not my original quote, but can't remember right now where I read it.)

It's similar to asking whether a silicon computer performing intelligent tasks is "conscious".


I guess it depends, can you tell the difference between a weather simulation and the actual world?


Can you?

You have weather readouts. One set is from a weather simulation - a simulated planet with simulated climate. Another is real recordings from the same place on the same planet, taken by real weather-monitoring probes. They have the same starting point, but diverge over time.

Which one is real though? Would you be able to tell?


They're not asking about telling the difference between collected data sets; data sets aren't weather.

The question is: can you tell the difference between the rain you see outside your window and some representation of a simulated environment where the computer says "it's raining here in this simulated environment"? The implied answer is "of course": one is water falling from the sky and the other is a machine.


>The implied answer is "of course": one is water falling from the sky and the other is a machine.

Let's say you're in a room a good distance from the window. Suddenly you hear what sounds like thunder and rain falling. From a distance, it appears that it's raining outside.

Is the rain real? Or is it simulated on a screen well enough you can't tell?

You have input/output devices just like a computer. They don't see reality; they filter out huge amounts of data and your brain just interprets the rest. If our machines get good enough, we may be able to blast signals directly to the brain that say it's raining, and the brain wouldn't have any idea whether it was simulated or not. Much in the same way, it feels like we exist here and not as a 3D hologram an infinite distance away (or whatever other weirdness physics may allow).


So how do we get from "machines may get really good at faking things to human perception" to "machines themselves can have human-like (or 'better') perception"?

The question is so-called consciousness arising out of machines, not machines deceiving human consciousness.

Even then, that deception still doesn't prove equivalency of simulated and real things, unless we're adopting an extreme and self-contradictory subjectivist epistemology.


People seem to way overcomplicate consciousness, especially in machines.

Where does a running video game exist? It's a simulation in the hardware. Where is the consciousness in a human brain? Again, it's electrical signals in the brain.

At the end of the day, a system is what it does. Once it starts simulating the human mind in ways that appear human, we're at the point of insisting a plane has to flap its wings or it's not flying.


You can't look at the "real weather" though. You can only look at the outputs. That's the constraint. Good luck and have fun.

A human brain is a big pile of jellied meat spread. An LLM is a big pile of weights strung together by matrix math. Neither looks "intelligent". Neither is interpretable. The most reliable way we have to compare the two is by comparing the outputs.

You can't drill a hole in one of those and see something that makes you go "oh, it's this one, this one is the Real Intelligence, the other is fake". No easy out for you. You'll have to do it the hard way.


Even granting all of your unfounded assertions: "the output" of one is the rain you see outside; "the output" of the other is a series of notches on a hard drive (or the SSD equivalent, or something in RAM, etc.) that's then represented by pixels on a screen.

The difference between those two things (water and a computer) is plain, unless we want to depart into the territory of questioning whether that perception is accurate (after all, what "output" led us to believe that "jellied meat spread" can really "perceive" anything?), but then "the output" ceases to be any kind of meaningful measure at all.


there is no "real weather". the rain is the weather. the map is not the territory. these are very simple concepts, idk why we need to reevaluate them because we all of a sudden got really good at text synthesis


Everyone's a practical empiricist until our cherished science fiction worldview is called into question; then all of a sudden it's radical skepticism and "How can anyone really know anything, man?"


You experience everything through digital signals; I don't see why those same signals can't be simulated. You are only experiencing the signal your skin sends to tell you there is rain; you don't actually need skin to experience that signal.


Bundling up consciousness with intelligence is a big assumption, as is the assumption that panpsychism is incorrect. You may be right on both counts, but you can't just treat those two assumptions as foregone conclusions.


AGI won't replicate our experience.

But it could be more powerful than us.


Honestly, replicating our experience would be rather wasteful. Much like building planes that work the way birds do just to carry cargo.


Talk to any right-leaning young man who's actively engaged in a church community (even better if they're actually pursuing Christ, and not just chasing an abstract ideal of "community") and you won't hear very much about the "loneliness epidemic", except maybe in reference to their peers.

It may be hard to find them online in order to "talk" to them though, and of course that's the whole point :-) Selection bias at work.


As a "believer" myself (we don't typically use that evangelical terminology but it's close enough) I can completely understand that. People should go to Church for Christ, and anything else, even "community" is a nice bonus but is not going to sustain people when pursued for its own sake, as you're experiencing.

Maybe try a sports or hobby club, or some other activity you're interested in, where people have a specific motivation to get together and can bond and form "community" more naturally?

