Hacker News | eshamow's comments

I'm not sure I understand why artifacts can't be stored in a different service - even an S3 bucket, if not a real repository service - and fetched dynamically via a build process.

Is there a reason why binary blobs need to be stored directly next to code in order to be versioned?


The way I understand it, it should be possible, though it's more complicated, since Git LFS infers the LFS URL from the repo URL by default. So if you wanted to, say, keep your repo on GitHub and store your LFS files on S3, you'd need to explicitly tell git where to write the files. There are configuration values for that.

Also you'd need the necessary LFS server piece on Amazon's side.
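A hedged sketch of what that configuration might look like. The endpoint URL here is made up; it stands in for whatever LFS-speaking server you put in front of S3:

```shell
# Hypothetical: keep the repo on GitHub but point Git LFS at a separate
# endpoint. Any server that speaks the LFS batch API would do.
# Writing to .lfsconfig (rather than .git/config) means the setting is
# committed alongside the code, so every clone picks it up.
git config -f .lfsconfig lfs.url "https://lfs.example.com/myrepo.git/info/lfs"
cat .lfsconfig
```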


I'm thinking that rather than using git to version the binary artifacts as well, you tag and version your git repo, then tag and name/label your artifacts in another storage service. You then allow a build tool to assemble from both locations.
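The assembly step could be as simple as keying both fetches off the same version label. All names and the artifact URL below are illustrative, not a real service:

```shell
# Hypothetical build step: source checked out at a git tag, binary
# artifact pulled from object storage under the same version label.
VERSION="1.4.2"
ARTIFACT_URL="https://artifacts.example.com/myapp/${VERSION}/assets.tar.gz"
echo "source tag: v${VERSION}"
echo "artifact:   ${ARTIFACT_URL}"
# A real build would then do something like:
#   git checkout "v${VERSION}"
#   curl -fSL -o assets.tar.gz "${ARTIFACT_URL}"
#   tar -xzf assets.tar.gz -C build/
```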


Aside from a second point of failure, how does this integrate with anything? When you push, what piece of software pushes what where? And who pays?


You can put this sort of build framework together with whatever tool you're using (gradle, maven, rake, grunt etc).

The idea isn't to shove everything into a storage bucket, but to assemble a toolchain using components that are fit for purpose. Git is fundamentally not fit for purpose as an artifact repository. There are tools that are.

-Eric


Suggesting that moving to Docker obviates the need for configuration management is frankly naive.

The whole point of CM tools is to make the layout/configuration of systems predictable. Docker doesn't make that redundant. It means that when you build Docker containers, it makes good sense to install a CM tool and use it to do the systems configuration.

Saying that the Dockerfile works like Bash and thus makes life easier is a huge step backwards. Ultimately, administrators have to enter, troubleshoot and debug containers. Moving back to shell script-style configuration inside of containers just kicks the problem down the road.

Docker, and containers in general, are great. And you should treat their contents with the same respect that you do any system.


> It means that when you build Docker containers, it makes good sense to install a CM tool and use it to do the systems configuration.

If you belong to the "one container, one process" camp, I'd say CM has little utility.

Let's say we have a redis image. Using a CM I'd have to:

  * install ruby and various ruby lib pkgs
  * install chef or puppet, and a multitude of rubygems 
  * add recipes/manifests
My image is now several times bigger than it needs to be, and the configuration is more opaque, spread across several files. It's a pain to debug and troubleshoot when something deep in the recesses of chef-solo fails. And why would I bother?

I'd rather have a dockerfile with `apt-get update && apt-get install -y redis-server` and perhaps a line adding a custom config file. Very readable.
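For comparison, a minimal sketch of that Dockerfile. The base image and the assumption that a custom config sits next to the Dockerfile as redis.conf are mine:

```dockerfile
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends redis-server \
    && rm -rf /var/lib/apt/lists/*
# Optional: drop in a custom config instead of the packaged default.
COPY redis.conf /etc/redis/redis.conf
EXPOSE 6379
CMD ["redis-server", "/etc/redis/redis.conf"]
```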


Appreciate the comment. The point I was hoping to get across wasn't that Docker would completely replace CM (or that CM is a bad thing), but that it could help reduce the amount of work in the CM world. As mentioned, we still needed to use Chef anyway (using OpsWorks to get a head start), so at least in this kind of environment CM is still necessary. That said, I can see how the article could be slightly misleading =)


I appreciate your response, but -

When you discuss CM being necessary, you are talking about using it on the host and not within the container.

Ultimately, operability and proper configuration inside the container is critical. Using a Dockerfile with no CM inside it is not much of an improvement on not using CM anywhere.

You don't need stateful CM inside the container. It's fine to fire and forget - use Puppet in apply mode or Chef solo. But there's a reason these tools are used in building AMIs and containers over scripting languages - we've come a long way over the past 10 years, and I still feel that switching to the Dockerfile as a configuration mechanism is like moving back to configure/make/make install.
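One way a masterless, fire-and-forget run might look at image build time. The base image, package name, and manifest path are all assumptions for the sake of the sketch:

```dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y puppet
COPY manifests/redis.pp /tmp/redis.pp
# One-shot, stateless apply: configure the image, then discard the tool.
RUN puppet apply /tmp/redis.pp && apt-get purge -y puppet
CMD ["redis-server"]
```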


What else do you think is happening under the hood when you use a CM tool like, say, Ansible? Ansible translates your configuration to small Python scripts, uploads them to the remote host and runs them.

What I'd like to see is a "script dump" output that still lets you create your yml configuration but converts it into shell scripts that you can call from your Dockerfile without any dependencies.

Of course, now you have an additional build step... and what could you use to tie everything together? make/make build :)
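Short of a script-dump mode, one interim approach is to run Ansible against localhost during the image build. A sketch, with the playbook name and base image assumed:

```dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y ansible
COPY playbook.yml /tmp/playbook.yml
# "-c local" skips SSH entirely; the trailing comma marks an inline inventory.
RUN ansible-playbook -c local -i "localhost," /tmp/playbook.yml
```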


What happens under the hood is almost irrelevant, because it's predictable and repeatable.


There's definitely stuff planned for Europe, although I don't know the specifics about PSE hiring there - you should apply via the jobs page under "future opportunities" anyway.

One thing that's important to remember is that Puppet is still a small(ish) company. If you are the "right" person, and the fit is there, we can still often make good things happen. There's no harm in applying and seeing what can be done.


Sometimes. The description really means "up to 75%" - so they can send you on that many, although they may not.

The more PSEs we hire, though, the more we can keep people closer to a home region. So this situation improves itself as we grow.


Pretty great. I've been through two major release crunches, and while I won't say that a few folks haven't pulled the occasional all-nighter, there isn't anything like the pressure I've seen at other software houses.

In PS it's a bit different, because you're on the road/at engagements often. But they do their best to schedule you so that you're home on the weekends, and you can schedule "no-travel" weeks. In general they've never been anything but accommodating and respectful.

The company is also very spouse/significant-other friendly...I've often seen spouses around the office at the end of the day, during stand-ups at the end of the week, etc.


Disclaimer: I work for Puppet Labs.

The error messages are getting much better - we have a newly formed UX group that is working hard on the command line. If you haven't watched Randall Hansen's UX talk from PuppetConf, it's here - http://bit.ly/zg5D0l [video - YouTube] - he goes into quite a bit of detail about where we're looking to improve.

SSH as a transport mechanism - can you provide some reasoning as to why that's an issue for you? Network/firewall concerns, access controls, or something else?


Er...the delegation thing depends on what the manager is being paid to do.

Not every technical manager can be an expert in all the fields he/she supervises. That's why they hire individual contributors who are more focused on the details or tactical implementation, without having to worry about strategy.


>Er...the delegation thing depends on what the manager is being paid to do.

Unless the manager is specifically being paid to delegate, the manager should do his own work instead of delegating it.

>Not every technical manager can be an expert in all the fields he/she supervises.

That's pretty much the definition of a bad manager.

>the details or tactical implementation without having to worry about the strategic.

and the strategy BS they bring in.

