Hacker News | joshma's comments

Founder @ https://airplane.dev here - putting in a shameless plug for Airplane, where you _do_ write code from scratch. :) I think it depends on the builder and use case, and it's not a fit for everything, but our users like / need to manage their tooling as code and are more productive for it.


It's interesting because we came to a different conclusion with Autopilot[0] - context and learning are incredibly important for result quality, and gpt4 doesn't (yet) support fine-tuning but will soon; we'll definitely be taking advantage of that. Not just for quality, but also for speed (less time spent gathering context and processing input tokens).

My view is, everyone has access to chatgpt and github copilot, and so the idea is to provide value in excess of what chatgpt/copilot can do. Part of that is embedding it in the UI, but (especially for internal tools, which tend to be shorter) the improvement isn't huge over copy/paste or using copilot in vs code.

However, beyond UI integration, we can intelligently pull context on related files, connected DBs/resources, SDKs you're using, and so on. And that's something chatgpt can't do (for now). The quality of response, from what we saw, dramatically improved with the right docs and examples pulled in.
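A toy sketch of the general idea - nothing like Airplane's actual retrieval or prompt format, just illustrative keyword scoring to pick which docs/snippets get prepended to the request:

    # Toy context assembly: rank candidate snippets (related files, schema
    # docs, SDK examples) by naive keyword overlap with the request, then
    # prepend the top few to the prompt sent to the model.
    def score(request: str, snippet: str) -> int:
        request_words = set(request.lower().split())
        return sum(1 for word in snippet.lower().split() if word in request_words)

    def build_prompt(request: str, candidate_snippets: list[str], k: int = 3) -> str:
        top = sorted(candidate_snippets, key=lambda s: score(request, s), reverse=True)[:k]
        context = "\n\n".join(top)
        return f"Relevant context:\n{context}\n\nTask:\n{request}"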

And yes, gpt4 does much better on JS (React specifically) and Python. It's just whatever it's trained on, and there's a ton more JS/Python code out there.

[0] https://www.airplane.dev/autopilot


Being able to click a button and send in the script, context, and error message - I believe that's a huge change, not a small one. It speeds up the turnaround time a lot.

I will find out over the next 2 weeks at least. I hope this will change how I program.


cron is a popular starting point on Airplane! We see our users creating tasks that are semi-automated, then adding manual tasks later (and then automating those tasks as well). The nice property of Airplane is that it's very composable and extensible over time.


(Appreciate the feedback! We clearly need to continue to improve how we explain the product. :))

It's probably easiest to explain in terms of what a developer does on Airplane:

1. Dev writes code (e.g. see our Getting Started for views[0]) - these can be simple Python scripts, JS views, shell scripts, etc.

2. Dev uses the `airplane` CLI locally to run and test the code

3. Dev runs `airplane deploy` or pushes to GitHub to deploy the code to Airplane

4. Dev's teammate (or dev) can now visit app.airplane.dev to run the code (views, tasks, runbooks) - the execution defaults to Airplane's servers, but you can also use our self-hosted agents[1] to move the execution (data plane) to your own cloud environment.
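To make step 1 concrete, here's a rough, hypothetical example of the kind of simple Python script this refers to - a small one-off operations task. The function and argument names are made up for illustration (the real parameter handling is covered in the Getting Started docs below); you'd iterate on it locally with the `airplane` CLI (step 2) and push it with `airplane deploy` (step 3).

    # Hypothetical one-off ops task: resend a welcome email to a user.
    # The email-sending call is a stand-in for whatever internal API or
    # database your script actually talks to.
    import sys

    def resend_welcome_email(user_email: str) -> None:
        # ...call your internal email service / SDK here...
        print(f"queued welcome email for {user_email}")

    if __name__ == "__main__":
        resend_welcome_email(sys.argv[1])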

It's similar to GitHub actions in architecture (but for a different domain).

[0] https://docs.airplane.dev/getting-started/views

[1] https://docs.airplane.dev/self-hosting/agents


I thought I understood what you do but with your itemized example, I'm somewhat confused.

1. OK

2. Dev uses the `airplane` CLI as opposed to running `npm`, `python` etc locally?

3. Dev runs `airplane deploy` as opposed to deploying to Heroku?

I think it would be nicer for us if you explained your value proposition in terms of what tools/steps I'd be replacing if I choose to adopt Airplane.


To add on to what Josh said, the main value of Airplane is that we automate a lot of things that would normally require you to write a lot more additional code. So for example, if you build an admin panel using Airplane instead of doing so from scratch, we'll provide the following for you:

* A rich React component library that's optimized for internal tooling (tables, charts, etc)

* Permissions, audit logs, and approval flows that are easily configurable

* Integrations into various systems that an internal tool would normally have to integrate with (e.g. identity providers like Okta, Slack for notifications, etc)

So if you'd expect building that admin panel to take a few days or weeks of work, ideally with Airplane we can reduce that down to a few hours instead.


Part of what bothers me with the current webdev approach is irreducible complexity.

How many layers of 'magic' to help devs deploy do we really need, and is it wise to depend on so many?


2 - correct, depending on the task type we still call through to node etc. But the CLI also provides a dev UI and other niceties specific to Airplane.

3 - correct, the code is built and pushed to us, similar to Heroku.

Our value prop ultimately is that 1) you can turn tools like admin dashboards, data migration scripts, one-off devops operations, etc. into production-grade web apps, and 2) you can do this using code!


Airplane is so cool!

Is there a way to trigger the tasks via webhooks?

E.g. I'm using a signup flow w/ Airtable to check off / approve new sign-ups. A person signs up, we see the profile, sometimes fix some details, and move it into the official Airtable of profiles - but I would love to use Airplane instead, since I'm trying to move off of Airtable for these sorts of things and use a "real" database. Using Views plus webhooks/triggers would be nice for the future, when we just want to "auto approve" or move to the db and Airtable in parallel, etc.


So it's an alternative to e.g. AWS lambdas or GCP cloud functions?


(I'm one of the founders at Airplane.) It started as a reference to "lightweight control plane[0] for your company" - but also the domains were loosely available.

[0] https://www.cloudflare.com/learning/network-layer/what-is-th...


Hey folks! We're Josh and Ravi, co-founders of Airplane (https://www.airplane.dev/).

Airplane is a platform for quickly building internal tools. We let you turn scripts of various types (Python, JS, shell, SQL, REST, ...) into lightweight internal apps for your support, operations, and other teams. Today, we provide UIs, notifications, permissions, approvals, audit logs, and more out of the box. In the future, we'll support increasingly more complicated workflows and interfaces.

I was previously CTO at Benchling (YC S12) and Ravi previously co-founded Heap (YC W13). Across our companies and many others we've talked to, we've seen various combinations of chatbots, scripts, and Jira tickets adding friction, interrupts, and errors to processes across the entire, well, company. We hope Airplane serves as a useful tool to tackle these problems.

We'd love to hear your feedback on Airplane! It's free to sign up and start using.


We currently use a mildly exotic "temporary bastion" approach, where upon request / approval a dev can get a container launched. The container is launched on ECS running an ssh server, pinned to the dev's individual public key, and that container has the appropriate security groups / IAM roles to access various production resources.

Right now, a dev will 1) VPN in to get shallow network access and 2) SSH over the VPN to get deeper network access through the bastion container. Something like a database is security-grouped off so that you need to be on the bastion container to access it.
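For flavor, a rough sketch (not our actual code) of what the launch step could look like with boto3's ECS API - an approval hook runs a one-off task with an sshd image and passes the requesting dev's public key in as an override. The cluster, task definition, container, and environment variable names are all hypothetical.

    # Hypothetical "temporary bastion" launcher: run a one-off ECS task whose
    # sshd entrypoint authorizes only the requesting dev's public key, in a
    # subnet/security group that has the deeper network access.
    import boto3

    ecs = boto3.client("ecs")

    def launch_bastion(dev_public_key: str, subnet_id: str, security_group_id: str) -> str:
        response = ecs.run_task(
            cluster="bastion-cluster",             # hypothetical cluster name
            taskDefinition="temporary-bastion",    # sshd image + scoped IAM task role
            launchType="FARGATE",
            networkConfiguration={
                "awsvpcConfiguration": {
                    "subnets": [subnet_id],
                    "securityGroups": [security_group_id],  # grants e.g. DB access
                }
            },
            overrides={
                "containerOverrides": [
                    {
                        "name": "sshd",
                        "environment": [
                            # entrypoint writes this into authorized_keys
                            {"name": "AUTHORIZED_KEY", "value": dev_public_key},
                        ],
                    }
                ]
            },
        )
        return response["tasks"][0]["taskArn"]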

My question - would Twingate be able to support an ephemeral use case like this? I'm thinking ideally it can be launched as a sidecar container, and a dev could SSH through the twingate container. A lot of solutions I see don't seem to handle ephemeral situations super well, so I was curious.


Hey, great question, and your setup seems very secure, but I’m sure it would be nice to reduce some of the overhead. The right way to support your ephemeral bastion use case with Twingate will ultimately be to use a public API that we plan to launch later this year. That will allow you to programmatically deploy connectors as needed.

However, I’d also question whether you even need your ephemeral bastions anymore with Twingate. A big part of the value is that you can do away with any public entry points (even if they are secured as well as you’ve described) and very tightly control who can access hosts on your deeper network. Do your bastions do more than provide access points? For example, session auditing is pretty common.


Can you explain how this is more secure than SSH to a bastion host via an out of band network?


Could you clarify a bit on "out of band" in this use case? In principle, if you have a way to access your bastion on a completely private--maybe physically separate / leased line--network, then that's going to be extremely secure, but maybe you had a different use case in mind?


Out of band could be as simple as ngrok or Cloudflare Argo - or, as you suggest, a separate connection.

SSH is two-factor (key + password), and Argo, ngrok, or WireGuard to a VPS provides DDoS mitigation plus attack surface concealment and reduction.

I think I’m missing what your product adds.


Gotcha. In your example: nothing. We're okay with that. The level of security that results from the setup you described is what we are hoping Twingate will bring to people with convenience and ease of management built-in. I'm always amazed at the very wide range of sophistication that different teams and companies approach security with, and very, very few companies are at the level of your example. That's what we're excited to help change with this new product.


Benchling (YC S12 - https://www.benchling.com/careers/) is excited to attend! Stop by and chat with us if you're curious about applying software engineering to biology.


(Posting on behalf of Somak)

Hi there, thanks for the feedback! The 2 examples in the write-up are:

1) Molecular Weight: weighted sum across amino acid sequences, using amino acid weights defined in [1]

2) List Concatenation: aggregate lists of added resistances (like ['Ampicillin', 'Kanamycin']) across itself and all ancestors

These are simple, and can be expressed as formulas in Excel or functions. They take < 0.5 s.

A more complex biochemical property for antibodies that can't be as easily expressed in Excel is isoelectric point ([2], see example BioJava implementation at [3]). It requires a binary search, but the search space is constant so usually these calculations finish < 20 s.

Since our implementation is in Python, we can wrap the function in try/catch and, in the catch block, log the error and set the failed computation status.
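To make that concrete, here's a minimal sketch (not Benchling's actual implementation) of the molecular weight example with that try/except wrapping around it - weight values and helper names are illustrative:

    import logging

    logger = logging.getLogger(__name__)

    # Average residue masses in daltons (illustrative subset; one entry per
    # amino acid in practice, per the reference table in [1]).
    RESIDUE_WEIGHTS = {
        "A": 71.08,   # alanine
        "G": 57.05,   # glycine
        "L": 113.16,  # leucine
    }
    WATER_MASS = 18.02  # add back one water per peptide chain

    def molecular_weight(sequence: str) -> float:
        # Weighted sum across the amino acid sequence.
        return sum(RESIDUE_WEIGHTS[residue] for residue in sequence) + WATER_MASS

    def compute_field(sequence: str) -> dict:
        # Wrap the computation so a failure is logged and recorded as a
        # failed status instead of propagating.
        try:
            return {"status": "ok", "value": molecular_weight(sequence)}
        except Exception:
            logger.exception("computed field failed for sequence %r", sequence)
            return {"status": "failed", "value": None}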

[1] https://www.promega.com/-/media/files/resources/technical-re...

[2] https://en.wikipedia.org/wiki/Isoelectric_point

[3] http://biojava.org/docs/api1.9.1/src-html/org/biojava/bio/pr...


Is anyone here running k8s in production with kops? Are there any missing pieces that require "manual" work, like rotating certificates? How feasible is it to run, say, 30 clusters out of the box with a 2-person team?


I'm one of the kops authors, and I will say that a lot of people run k8s clusters created by kops in production - I don't want to name others, but feel free to join sig-aws or kops channels in the kubernetes slack and ask there and I'm sure you'll get lots of reports. In general kops makes it very easy to get a production-suitable cluster; there shouldn't be any manual work required other than occasionally updating kops & kubernetes (which kops makes an easy process).

But: we don't currently support rotating certificates. There used to be a bug in kubernetes which made "live" certificate rotation impossible, but that bug has now been fixed, so it's probably time to revisit it. We create 10-year CA certificates, so rotation isn't something you have to do, other than as good security practice.

If you file an issue (https://github.com/kubernetes/kops/issues) for certificate rotation and any other gaps / questions we'll get to them!


I am curious if you might share your thoughts on kops vs kubeadm for standing up a Kubernetes cluster.


There's no need to choose: kops uses kubeadm (not a lot of it, but more with each release), so choose kops and get kubeadm for free!

kubeadm is intended to be a building block that any installation tool can leverage, rather than each building the same low-level functionality. It isn't primarily meant for end-users, unless you want to build your own installation tool.

We want to accommodate everyone in kops, but there is a trade-off between making things easy vs. being entirely flexible, so there will always be people who can't use kops. You should absolutely use kubeadm if you're building your own installation tool - whether you're sharing it with the world or just within your company. luxas (the primary kubeadm author) does an amazing job.


Thanks, I wasn't aware that it was leveraging kubeadm. This is good to know. I have been really impressed by my limited exposure to Kops so far. Cheers!


How do you handle that kubernetes requires the eth0 ip in no_proxy? Do you set that automatically?

How do you handle that DNS in a corp net can get weird - for instance, in Ubuntu 16.04 the NetworkManager setting for dnsmasq needs to be deactivated?

How do you report dying nodes due to the kernel version and docker version being incompatible?

Do you report why pods are pending?

Does kops wait until a successful health check before it reports a successful deployment (in contrast to helm, which reports success when the docker image isn't even finished pulling)?

Do you run any metrics on the cluster to see if everything is working fine?

Edit: Sorry to disturb the kops marketing effort, but some people still hope for a real, enterprise-ready solution for k8s instead of just more fluff added on a shaky foundation.


kops is an open source project that is part of the kubernetes project; we're all working to solve these things as best we can. Some of these issues are not best solved in kops; for example, we don't try to force a particular monitoring system on you. That said, I'm also a kubernetes contributor, so I'll try to quickly answer:

* no_proxy - kops is getting support for servers that use http_proxy, but I think your issue is a client issue with kubectl proxy and it looks like it is being investigated in #45956. I retagged (what I think are) the right folks.

* DNS, docker version/kernel version: if you let kops, it'll configure the AMI / kernel, docker, DNS, sysctls, everything. So in that scenario everything should just work, because kops controls everything. Obviously things can still go wrong, but I'm much more able to support or diagnose problems with a kops configuration where most things are set correctly than in a general scenario.

* why pods are pending: `kubectl describe pod` shows you why. Your "preferred alerting system" could be more proactive though - see the sketch after this list.

* metrics are probably best handled by a monitoring system, and you should install your preferred system after kops installs the cluster. We try to only install things in kops that are required to get to the kubectl "boot prompt". Lots of options here: prometheus, sysdig, datadog, weave scope, newrelic etc.

* does kops wait for readiness: actually not by default - and this does cause problems. For example, if you hit your AWS instance quota, your kops cluster will silently never come up. Similarly if your chosen instance type isn't available in your AZ. We have a fix for the latter and are working on the former. We have `kops validate`, which will wait, but it's still too hard when something goes wrong - definitely room for improvement here.
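As a tiny sketch of the "more proactive" idea in the pending-pods bullet above (assuming the official kubernetes Python client; wiring the output into an actual alerting system is left out):

    # List Pending pods and print the scheduler's reason, instead of waiting
    # for someone to run `kubectl describe pod` by hand.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()

    pending = v1.list_pod_for_all_namespaces(field_selector="status.phase=Pending")
    for pod in pending.items:
        for condition in pod.status.conditions or []:
            if condition.type == "PodScheduled" and condition.status == "False":
                print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
                      f"{condition.reason}: {condition.message}")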

In general though - where there are things you think we could do better, do open an issue on kops (or kubernetes if it's more of a kubernetes issue)!


Nice, thanks. My feeling is that this is about 75% of what we want, and therefore it may really be the best solution there is right now. I'll bring your responses into my next team meeting.


I sympathize, but HN isn't a support forum. A decent reply would have to be a huge wall of text in the middle of the conversation.


Thanks for the feedback. I agree that a huge wall of text is not desired. I think a single-sentence answer is fine.

For instance: "Yes, we can. We considered most of that and also have some enterprise customers with similar setups. Check out "googleterm A", "googleterm B", "googleterm C". If you don't find all of that join our slack chat to get more details."

And a more likely answer, also single line: "WTF are these questions? We thought docker+k8s already solves that." (I would've also expected solutions from there but don't hope for it anymore.)

PS (actually an edit to the previous post, but it's already too old): For instance OpenShift, as I just found, addresses the docker-version/kernel-version problem via "xxx-excluder" meta packages: https://docs.openshift.com/container-platform/3.4/install_co...

A step in the right direction!


We've been running a small Kubernetes cluster of < 30 nodes that handles a variety of workloads using kops for almost a year now. kops is a significant improvement over other provisioning tools like kube-up.sh and kube-aws and has simplified infrastructure management a great deal. We can provision a brand new cluster and a couple dozen services across multiple namespaces in less than an hour - kops helps a lot with making that process smooth and reliable.

We have run into some issues with kops. Customizing the Kubernetes executables, e.g. using a particular Docker version or storage driver, has been buggy pre-1.5. Upgrading clusters to later Kubernetes versions has left some of the kube-system services, like kube-dns, in a weird state. Occasionally we encounter issues with pods failing to schedule/volumes failing to mount - these are fixed by either restarting the Kubernetes (systemd) services on the problem nodes or by reprovisioning nodes entirely. On one occasion, a bad kops cluster update left our networking in an unrecoverable state (and our cluster inaccessible).

I don't think there are any missing pieces; the initial configuration is what usually takes the most time to set up. You'll have to become familiar with the kops source, as not everything is documented. As far as running 30 clusters with a 2-person team, it's definitely feasible, just complicated when you're constantly switching between clusters.


Definitely some great feedback there - I think most of those are known issues, and not all of them are technically kops issues, but we'll be figuring out how to work around them for kops users. (Switching Docker versions is tricky because k8s is tested with a particular version, so we've been reluctant to make that too easy, and the kube-dns 1.5 -> 1.6 upgrade was particularly problematic). Do file issues for the hung nodes - it might be k8s not kops, but one of the benefits of kops is that it takes a lot of unknowns out of the equation, so we can typically reproduce and track things down faster.

And it is way too hard to switch clusters with kubectl, I agree. I tend to use separate kubeconfig files, and use `export KUBECONFIG=<path>`, but I do hope we can find something better!


Right, the hung nodes issue is probably least related to kops (though it'd be great if in the future, kops could leverage something like node-problem-detector to mitigate similar issues). Of the other issues, the incorrectly applied cluster config (kops decided to update certs for all nodes and messed them up in the process, then proceeded to mess up the Route53 records for the cluster) is the most serious one, and also not likely easy to reproduce. Apart from that, kops has been an excellent tool and we've been very pleased with it.


I run kops in production, and while we've had issues, the authors are responsive and super helpful. The problems we've encountered have more frequently been with k8s itself than kops; mostly been fire and forget except when I've gotta debug which experimental feature I tried to enable broke kops' expectations (or just broke). Ping me in the channels @justinsb mentioned if you want advice.

We're at three live and two dead (decommissioned) clusters with a two man team, and while we regret some decisions, most of the time it just works.


What decisions do you regret?


Using the default networking stack. Basic AWS networking on k8s relies on route tables, which are quite limited - they only support up to 100 routes. We had to use bigger nodes than I'd planned to stay under that limit.


> Only supports up to 100 routes.

I don't know if AWS has the disclaimer up anymore, but the default limit is 50 with limit increases available to 100 with "no guarantee that performance will remain unaffected"... or something like that.

What network type are you using, out of curiosity?


What is the concern with using bigger nodes than planned?

I agree the basic networking has a lot of limitations. Compared with adding more layers of networking, I'd rather have a simpler setup with fewer nodes, even if they are larger.


I've been using it in production for a couple of my clients (Y Combinator companies). Except for a few hiccups it has been pretty great. The only thing is that for HIPAA and PCI compliance environments, some additional changes are needed.

We are slowly open sourcing some of that and more here:

https://github.com/opszero/auditkube


There is support for automatic certificate rotation in the recently released 1.7. Pretty sure this was also in 1.6, albeit as an experimental alpha feature:

http://blog.kubernetes.io/2017/06/kubernetes-1.7-security-ha...


Thanks for the heads up - looks like we'll be adding support very soon then :-)


We (https://www.ReactiveOps.com) run a lot of clusters for our clients in AWS (and also GKE, but..) using kops. It's definitely possible to run a lot of clusters, but kops is only one piece of the puzzle. Autoscaling, self-healing, monitoring and alerting, cluster logging, etc. are all other things you have to deal with, which are non-trivial (they scale workload per cluster, so...)

We open sourced our kube generator code, called pentagon, which uses kops and terraform: https://github.com/reactiveops/pentagon


I saw you mentioned autoscaling first. How do you handle this? Do you just install the autoscaler pod by hand? (edit: just saw the link you provided, not sure if you edited your post to add it or not, but thanks! I followed the link through to https://github.com/reactiveops/pentagon/blob/d983feeaa0a8907... and it looks like I would be interested in #120 and #126, but they don't seem to be GH issue numbers or Pull Requests. Where can I find more?)

It seems like a lot of work like this just "isn't there yet" when it comes to orchestrators like Tectonic, Stackpoint, or kops making this easy for you. (So there's surely a market for people who know how to do this stuff, but it seems like this would be the first feature that every tool supporting AWS would want to have. Unless there are hidden gotchas, and it seems like there would be a lot of blog posts about it if that were the case.)


Based on your experience, would you recommend one vendor over the other? (aws vs gcp)


Last I checked, support was missing for the newer ALB load balancer on AWS. That is a hold-up for some, as the older ELB doesn't scale as well and needs "pre-warming".


kops can set up an ELB for HA apiservers, but I think you're primarily talking about Ingress for your own services. We don't actually take an opinion on which Ingress controller you use, so you can just choose e.g. https://github.com/zalando-incubator/kube-ingress-aws-contro... when your cluster is up.

Maybe kops should use ALBs for the apiserver, and maybe k8s should support ALB for Services of Type=LoadBalancer. Neither of those are supported at the moment, if they should be then open an issue to discuss. (Even if you're going to contribute support yourself, which is always super-appreciated, it's best to start with an issue to discuss!)


Yes, it's not specifically a criticism of kops. Supporting ALB as an ingress controller seems to be the direction, with the CoreOS-contributed code the likely winner.[1]

Thought it worth mentioning though, as the older ELB+k8s isn't great, and because the ALB support hasn't shaken out yet, a cluster created with kops could be suboptimal unless you address it afterwards.

I assume once it all shakes out, kops would support whatever the direction is.

[1] https://github.com/coreos/alb-ingress-controller


Managing ALB outside of k8s works pretty well if you don't deploy new services that often. Map ALB to the ASG and do host/path routing.

