Hacker News | new | past | comments | ask | show | jobs | submit | rashidujang's comments

Tangentially related: I have recently been using my S10+ as a makeshift media server running Jellyfin in Termux. The main problem I had was that it is unsafe to keep a device perpetually charging, and my first thought was to create a Routine that turns the charger off when the battery rises above a certain threshold and back on when it falls below a lower one. This post gives me an alternative idea to try.
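The on/off-threshold idea above is essentially hysteresis. A minimal sketch of the decision logic (the 80/30 thresholds are assumptions, and actually toggling the charger would be done by whatever smart plug or Routine you use):

```python
# Hysteresis logic for keeping a phone's battery in a safe band.
# The thresholds are illustrative assumptions; hook the return value
# up to a smart plug / Routine to actually switch the charger.

UPPER = 80  # stop charging once the battery rises to this percentage
LOWER = 30  # resume charging once it falls to this percentage

def next_charger_state(battery_pct: int, charger_on: bool) -> bool:
    """Decide whether the charger should be on. The two thresholds
    give hysteresis, so the charger doesn't flap on and off around
    a single cutoff."""
    if battery_pct >= UPPER:
        return False
    if battery_pct <= LOWER:
        return True
    return charger_on  # in the middle band, keep the current state
```

The middle band is what keeps the phone from cycling rapidly: between 30% and 80% the charger simply stays in whatever state it was already in.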


This was my impression after reading the article too. I have no doubt that the team at Filevine attempted to secure their systems and have probably thwarted other attackers, but here they were tripped up by what is an unsophisticated attack. It only takes one weak link in the chain to bring down the site.

Security reminds me of the Anna Karenina principle: All happy families are alike; each unhappy family is unhappy in its own way.


> There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development.

It's incredible how 50-year-old advice from The Mythical Man-Month is still not being heeded. Throw in a knee-jerk solution of a "daily call" (sound familiar?) for people already wading knee-deep through work and you have a perfect storm of terrible working conditions. My money is on Google, who in my opinion have not only caught up to, but surpassed, OpenAI with the latest iteration of their AI offerings.


Besides, can't they just allocate more ChatGPT instances to accelerating their development?


> It's incredible how 50-year-old advice from The Mythical Man-Month is still not being heeded.

A lot of advice is that way, which is why it is advice. If following it were easy everyone would just do it all the time, but if it's hard or there are temptations in the other direction, it has to be endlessly repeated.

Plus, there are always those special-snowflake guys who are "that's good advice for you, but for me it's different!"

Also, it wouldn't surprise me if Sam Altman's talents aren't in management or successfully running a large organization, but in Machiavellian manipulation and maneuvering.


Also, Google has plenty of (unmatched?) proprietary data and their own money tree to fuel the money furnace.


As well as their own hardware and a steady cash flow to finance their AI endeavours for longer.


There is always a daily call if a U.S. startup fails. Soon there will be quadrants and Ikigai Venn diagrams on the internal Slack.


Imho it just shows how relatively simple this technology really is, and nobody will have a moat. The bubble will pop.


Not exactly. Infra will win the race. In this aspect, Google is miles ahead of the competition. Their DC solutions scale very well. Their only risk is that the hardware and low level software stack is EXTREMELY custom. They don't even fully leverage OCP. Having said that, this has never been a major problem for Google over their 20+ years of moving away from OTS parts.


But anyone with enough money can make infra. Maybe not at the scale of Google, but maybe that's not necessary (unless you have a continuous stream of fresh high-quality training data).


If making infra means designing their own silicon to target only inference instead of more general GPUs, I can agree with you; otherwise, long-term success comes down to how cheaply they can run the infra compared to competitors.

Depending on Nvidia for your inference means you'll be price gouged for it, Nvidia has a golden goose for now and will milk it as much as possible.

I don't see how a company without optimised hardware can win in the long run.


The silicon can be very generic. I don't see why prices of "tensor" computation units can't come down if the world sees the value in them, just as happened with CPUs.


Anyone with enough money can cross any moat. That's one of the many benefits of having infinite money.


Amazing how the bubble pops either from the technology being too simple, or from it being too complex to turn a profit.


The technology is simple, but you need a ton of hardware. So you either lose because there's lots of competition, or you lose because your hardware costs can't be recouped.


The thought that this might have been done on the recommendation of ChatGPT has me rolling.

Think about it: with how much bad advice is out there on certain topics, it's guaranteed that ChatGPT will repeat common bad advice in many cases.


Don't forget the bleak subtext of all this.

All these engineers working 70 hour weeks for world class sociopaths in some sort of fucked up space race to create a technology that is supposed to make all of them unemployed.


These engineers make enough money to comfortably retire by the time they are replaced with AI.


> technology that is supposed to make all of them unemployed.

To make all of us (other poor fuckers) unemployed.


They are paid exceptionally well though. Way above what market rate for their skill set has been at any point in history. Work long hours for a few years and enjoy freedom for the rest of your life. That's a deal a lot of people would take. No need to feel sorry for the ones in a position to actually get the choice.


You can have a more upbeat take on it all.


You can, but then your model of the world will be less accurate.


Wait, shouldn't their internal agents be able to do all this work by now?


They have a stated goal of an AI researcher by 2028. That's several years away.


From the context, what I gather was meant by the idea of "multiple Indias" is the socioeconomic status of different demographics in India and their app usage. The presence of specific apps tells you which demographic a user belongs to.

In other words, the richest demographic used certain apps and was equated to folks in Mexico, followed by the less rich equated to folks in Indonesia and the poor to Sub-Saharan Africa.


This is so cool, and actually something I'd been thinking of building for a while! Would love a write-up of how the file formats were reverse engineered.


Love it, this seems very similar to https://www.actsofgord.com/


I've been struggling to find a workflow that can easily extract knowledge and insights from audio content on the web and sync it with note-taking systems such as Notion, Readwise or Obsidian, so I decided to create a system that transcribes the audio, summarizes it and shares it with other applications.

Right now we are only targeting podcasts-to-Notion as the vertical slice for the MVP, but in the future we're looking to support "connectors" that can take in other forms of content, such as audiobooks and videos, and share it with other popular note-taking tools.
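The connector idea above can be sketched as two small interfaces, one for content sources and one for note-taking sinks (all names here are hypothetical illustrations, not the actual implementation):

```python
from typing import Callable, List, Protocol

class Source(Protocol):
    """Anything that yields raw content: a podcast feed, audiobook, video, ..."""
    def fetch(self) -> str: ...

class Sink(Protocol):
    """Anything that receives notes: Notion, Readwise, Obsidian, ..."""
    def publish(self, summary: str) -> None: ...

def run_pipeline(source: Source, summarize: Callable[[str], str], sink: Sink) -> None:
    # fetch/transcribe -> summarize -> share
    sink.publish(summarize(source.fetch()))

# Dummy wiring to show the shape of the pipeline:
class EchoSource:
    def fetch(self) -> str:
        return "long transcript ..."

class ListSink:
    def __init__(self) -> None:
        self.notes: List[str] = []

    def publish(self, summary: str) -> None:
        self.notes.append(summary)

sink = ListSink()
run_pipeline(EchoSource(), lambda text: text[:15], sink)
```

Adding a new content type or note-taking target then only means implementing `fetch` or `publish`; the pipeline itself stays unchanged.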

It's been an exciting journey so far and we're looking to launch soon!


Amazing article! In case the author sees this: it'd be great to see a deep dive into how he "found the right place", i.e. the correct breakpoint to produce the decrypted message. It seems to me that if you're able to do that, there are a lot of interesting things you could do.


Probably just painstakingly stepping through it in the debugger.


Hey there, I was confused at this exact question too. This link might help, written by a contributor to llama.cpp: https://github.com/ggerganov/llama.cpp/pull/1684

TLDR: More aggressive quantization (fewer bits per weight) means higher perplexity (i.e. how 'confused' the model is when seeing new text). It's a matter of testing it out and choosing a model that fits your available memory. The higher the quantization number, the better (generally).
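To make the "fits your available memory" part concrete, here's a rough back-of-the-envelope sketch. The bits-per-weight figures below are approximations of common llama.cpp quant types for illustration only; real file sizes vary, and you'd still want to leave headroom for the KV cache:

```python
# Rough bits-per-weight for some llama.cpp quantization types
# (illustrative assumptions, not exact figures).
BITS_PER_WEIGHT = {
    "Q2_K": 2.6,
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
    "F16": 16.0,
}

def approx_size_gb(n_params_billion: float, quant: str) -> float:
    """Approximate model file size in GB for a given quant type."""
    bits = n_params_billion * 1e9 * BITS_PER_WEIGHT[quant]
    return bits / 8 / 1e9  # bits -> bytes -> GB

def best_quant_that_fits(n_params_billion: float, mem_gb: float):
    """Pick the highest-precision (lowest-perplexity) quant that fits."""
    for quant in sorted(BITS_PER_WEIGHT, key=BITS_PER_WEIGHT.get, reverse=True):
        if approx_size_gb(n_params_billion, quant) <= mem_gb:
            return quant
    return None
```

For example, under these assumed figures a 7B model at Q8_0 weighs in around 7.4 GB, so with ~6 GB free you'd drop to Q5_K_M and accept slightly higher perplexity.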


Hey, I'm fairly new to the who's who of the PostgreSQL world; would you mind telling me why Heikki might be able to pull this off?


Not who you asked, but: he is a longtime contributor who has written/redesigned important parts of Postgres (the WAL format, concurrent WAL insertion, 2PC support, parts of SSI support, and much more). And he is just a nice person to work with.


Cool! He seems like a powerhouse in this space - thank you for the answer

