Hacker News | Kiyo-Lynn's comments

I haven’t stopped using AI, but I use it less than I did a few months ago. Now I mostly turn to it when I’m stuck or need inspiration. Using it less actually made me more efficient.


It's not forced, but the atmosphere has definitely shifted. These days, before we even start on a task, the first question is often "Can we solve this with AI?" Over time, it starts to feel like we're working around the tool instead of focusing on the actual problem.


I’ve tried all kinds of tools but I always end up back at the terminal. It’s simple and direct, and it helps me stay focused, which makes me work more efficiently. Being productive isn’t always about using the newest tools; it’s about finding what really fits your rhythm. The terminal might look basic, but it’s a minimal and powerful way to get things done.


Feels a bit much to ask students to open up all their social media just to study in the US. People post things without thinking too much, and now that could hurt their future. Not sure this is the kind of message we want to send.


It’s inspiring to see someone at 99 still speak with so much passion about the ocean. Hearing him say he won’t see how it ends feels heavy.

The part comparing bottom trawling to bulldozing underwater forests was powerful. But the recovery of sea otters and whales gives some hope.


When I write with AI, it feels smooth in the moment, but I’m not really thinking through the ideas. The writing sounds fine, but when I look back later, I often can’t remember why I phrased things that way.

Now I try to write my own draft first, then use AI to help polish it. It takes a bit more effort upfront, but I feel like I learn more and remember things better.


The rule of thumb "LLMs are good at reducing text, not expanding it" is a good one here.


Probably interesting to note that this is almost always true of weighted randomness.

If you have something that you consider to be over 50% towards your desired result, reducing the space of the result has a higher chance of removing the negative factor than the positive.

In contrast, whenever the algorithm is less than 100% reliable at producing the positive factor, adding to the result can increase the negative factor more than the positive, given a finite time constraint (i.e., any reasonable non-theoretical application).
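
One way to make that asymmetry concrete is a toy simulation (this is just my own sketch; the parameters p_good, acc, and gen_p are made-up assumptions, not anything measured): model a draft as units that are each good or bad, let "reducing" mean an imperfect editor drops the units it judges bad, and let "expanding" mean a generator appends new units.

    import random

    # Toy model: a draft is a list of units, each good (True) or bad (False).
    # p_good: fraction of good units in the starting draft (assumed > 0.5)
    # acc:    the editor's per-unit judgment accuracy when reducing
    # gen_p:  probability that a newly generated unit is good when expanding

    def make_draft(n, p_good):
        return [random.random() < p_good for _ in range(n)]

    def good_fraction(draft):
        return sum(draft) / len(draft)

    def reduce_draft(draft, acc):
        # Keep a unit only if the editor judges it good; judgments are correct with prob acc.
        kept = [unit for unit in draft if (random.random() < acc) == unit]
        return kept or draft  # guard against emptying the draft entirely

    def expand_draft(draft, k, gen_p):
        # Append k new units, each good with probability gen_p.
        return draft + [random.random() < gen_p for _ in range(k)]

    random.seed(0)
    trials, n, k = 5000, 100, 50
    p_good, acc, gen_p = 0.7, 0.8, 0.7

    reduced = sum(good_fraction(reduce_draft(make_draft(n, p_good), acc))
                  for _ in range(trials)) / trials
    expanded = sum(good_fraction(expand_draft(make_draft(n, p_good), k, gen_p))
                   for _ in range(trials)) / trials
    print("good fraction after reducing: ", round(reduced, 3))   # ~0.90 when acc = 0.8
    print("good fraction after expanding:", round(expanded, 3))  # ~0.70, pinned near gen_p

As long as the editor's judgment is better than chance, cutting raises the good fraction above where it started, while the expanded draft can never do better than the generator's own reliability, which is the asymmetry described above.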


> "LLMs are good at reducing text, not expanding it"

You put it in quote marks, but the only search results are from you writing it here on HN. Obviously LLMs are extremely good at expanding text, which is essentially what they do whenever they continue a prompt. Or did you mean that in a prescriptive way - that it would be better for us to use it more for summarizing rather than expanding?


>You put it in quote marks, but the only search results are from you writing it here on HN.

They said it was a rule of thumb, which is a general rule based on experience. In context with the comment they were replying to, it seems that they are saying that if you want to learn and understand something, you should put the effort in yourself first to synthesize your ideas and write out a full essay, then use an LLM to refine, tighten up, and polish it. In contrast to using an LLM as you go to take your core ideas and expand them. Both might end up very good essays, but your understanding will be much deeper if you follow the "LLMs are good at reducing text, not expanding it" rule.


I think that this conflates two issues though. It seems obvious to me that, in general, the more time and effort I put into a task, the deeper I will understand it. But it's unclear to me how that aspect of learning - that we learn by spending time on a task - is related to what LLMs are good at.

Intentionally taking this to a slightly absurd metaphor - it seemed to me like a person saying that their desire to reduce their alcohol consumption led them to infer the rule of thumb that "waiters are good at bringing food, not drinks".


I think the key is how you define “good” - LLMs certainly can turn small amounts of text into larger amounts effortlessly, but if in doing so the meaningful information is diluted or even damaged by hallucinations, irrelevant info, etc., then that’s clearly not “good” or effective.


These days when I write code, I usually let the AI generate a first draft and then I go in and fix it. The AI does not always get it right, but it helps lay out a lot of the repetitive and boring parts so I can focus on the logic and details. Before, building a small tool might take me an entire evening. Now I can get about 70 to 80 percent done in an hour, and then just spend time debugging and fine-tuning. I still need to understand all the code in the end, but the overall efficiency has definitely improved a lot.


I feel that the effects of fine-tuning are often short-term, and sometimes it can end up overwriting what the model has already learned, making it less intelligent in the process. I lean more towards using adaptive methods, optimizing prompts, and leveraging more efficient ways to handle tasks. This feels more practical and resource-efficient than blindly fine-tuning. We should focus on finding ways to maximize the potential of existing models without damaging their current capabilities, rather than just relying on fine-tuning.


I used to think that monitoring and alerting systems were just there to help you quickly and directly see the problems. But as the systems grew more complex, I found that the dashboards and alerts became overwhelming, and I often couldn’t figure out the root cause of the issue. Recently, I started using AI to help with analysis, and I found that it can give me clues in a few seconds that I might have spent half a day searching for.

While it's much more efficient, sometimes I worry that, even though AI makes problem-solving easier, we might be relying too much on these tools and losing our own ability to judge and analyze.


I would argue that people have been relying on observability tools too much rather than designing systems that are understandable in the first place.


Yes, at any particular task you will be better than AI.

We somehow forget that none of these systems are better than expert humans. If you rely on a tool, you might never develop the skills. Some skills are worth more than others, and you won’t even have the experience to know which ones those are.


This is 100% true and also my experience.

However, many companies just care about how fast you deliver a solution, not about how much you are learning. They do not care anymore.

The speed of the production process is critical to them in many jobs.


The current search engines really feel like a librarian who's always trying to sell you something. I just want to find a simple answer, but I keep getting led to all kinds of other pages. I believe if search engines were more like public libraries, focused on providing information rather than recommending things for commercial reasons, the experience would be so much better.


Did Google take another nosedive? I genuinely did not use it once in the last 6 months. Kagi actually works as a complete substitute now.

