Update: Vapor has implemented a fix for this, which uses the correct environment file if it exists locally. So just make sure you pull the env file that you need for the build step before running vapor deploy.
I went down a bit of a rabbit hole today, to the point where I was about to send a PR to the laravel/vapor-cli repository.
If I run vapor deploy staging locally, and I have this in .env:
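```shell
# Illustrative value — a test-mode Stripe publishable key
MIX_STRIPE_KEY=pk_test
```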
And this in .env.staging:
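```shell
# Hypothetical placeholder — the key you actually want baked into the staging assets
MIX_STRIPE_KEY=pk_staging
```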
Then mix is going to build `process.env.MIX_STRIPE_KEY` with a value of `pk_test`.
My line of thinking was: I need the Vapor build to use my .env.staging file to build those assets, dammit!
Alas, there is a better solution, mix-env-file.
The steps are as follows, replace "staging" with your environment of choice:
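From memory, the setup looks roughly like this — the exact API may differ, so check the mix-env-file README before copying:

```shell
# Sketch only — verify against the mix-env-file package docs.
npm install mix-env-file --save-dev

# In webpack.mix.js, register the plugin and point it at an env file
# (JavaScript, shown here as comments):
#   require('mix-env-file');
#   mix.env(process.env.ENV_FILE);

# Then build against the env file you want compiled into the assets:
ENV_FILE=./.env.staging npm run production
```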
Even better, this works very easily with a Continuous Integration service like CircleCI: in your vapor.yml build steps, just pull the correct .env as above prior to building assets and you're done.
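In vapor.yml that can look something like the sketch below — the command list is illustrative, not a drop-in config:

```yaml
# Illustrative vapor.yml fragment — adjust commands to your own build
environments:
  staging:
    build:
      - 'composer install --no-dev'
      # Compile assets against the staging env file (mix-env-file approach)
      - 'ENV_FILE=./.env.staging npm run production && rm -rf node_modules'
```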
No sane developer willingly takes on the challenges of DevOps (or SysOps) on top of their already considerable full-stack responsibilities, but for most agencies it's a necessity for building, maintaining and delivering successful projects.
At Coding Labs, we work with self-funded startups and small to medium sized businesses, all of whom hope for viral growth, but are unable to predict when that growth might appear.
And what most clients know (or want to know) about infrastructure could fit on the back of a postage stamp.
They do universally agree that their infrastructure budget is not open-ended, and that their website being down while our team is drinking beer at the local on a Friday arvo is not acceptable.
This presents a problem: how do we prepare our client infrastructure for unforeseen outages, traffic surges and fast paced growth without prematurely over-engineering and overcapitalising?
We have a pretty short (but by no means trivial) list of requirements for our infrastructure:
In my experience - which includes overseeing large, highly available web apps, suffering through attaining an AWS Cloud Architect certification, exhaustive application of "RTFM", and a shit-ton of sheer persistence - I can confirm that infrastructure and deployment strategies can be cobbled together to deliver HA web apps that don't completely break very often.
I would go as far as to say that we have largely ticked off Autohealing and Autoscaling from our list by scratching our own itch, but things get ugly when we talk about Approachability.
Question: who enjoys trawling through the AWS Console, CLI and docs to debug some obscure but crucial part of their infrastructure?
AWS (the awesome backbone on which Vapor is built) can do virtually all things infrastructure, on account of its hundreds of services, which can fulfil so many use cases it will make your head explode.
But when you are using a service developed by a behemoth company founded by a cyborg, you should first stop and try and break it down a little, preferably with a large hammer.
The Laravel team have fortunately invested huge effort to make a very large hammer specifically for AWS at scale: Laravel Vapor.
Laravel Vapor is a killer app, and I mean that literally: it actually killed off two internal DevOps apps I had built to tackle similar problems.
There is something very reassuring knowing that our chosen infrastructure / deployment tool is continuously battle-tested against thousands of diverse Laravel apps, and that the ecosystem will only get stronger for years to come through the wonders of Open Source and the Laravel community at-large.
In fact, there is no comparison to what we have been doing up until now, and therefore no decision to make; to Vapor - onward march!
Not sure about you, but I didn't even know MySQL 8.0 was a thing until I got my Vapor invite. I haven't hit any problems, but I'm expecting to break something that worked in 5.7 any day now.
I'm also a little surprised that RDS Aurora didn't make the initial cut (although I wouldn't be surprised if they roll out support for more RDS engines soon). Aurora is pretty powerful in comparison to the basic MySQL offerings whilst being more cost-effective than Aurora Serverless (I think).
Also keep in mind there is nothing stopping you from deploying your own database infrastructure as you see fit and connecting to it through .env.
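For example, a hand-rolled Aurora cluster can be wired in with the standard Laravel database variables (the hostname and credentials below are made-up placeholders):

```shell
DB_CONNECTION=mysql
DB_HOST=my-cluster.cluster-abc123.ap-southeast-2.rds.amazonaws.com  # placeholder
DB_PORT=3306
DB_DATABASE=app
DB_USERNAME=app
DB_PASSWORD=secret
```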
This happens if you started with Laravel < 5.8.17 and did not add the default config for DynamoDB to config/cache.php.
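If you're upgrading an older app, the missing block looks roughly like this — compare it against a fresh laravel/laravel skeleton to be sure it matches your version:

```php
// config/cache.php — the stock "dynamodb" store shipped with newer skeletons
'dynamodb' => [
    'driver' => 'dynamodb',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
    'table' => env('DYNAMODB_CACHE_TABLE', 'cache'),
],
```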
For reasons I don't fully understand, some exceptions - like the one generated by the DynamoDB omission above - are not getting logged anywhere that I can find.
I checked the Vapor dashboard, the tail command, and directly in CloudWatch, to no avail. 🤷‍♂️
```shell
vapor deploy production
```
It looks too easy and it is too easy. You can run this command on any branch, with any uncommitted changes, and any level of test failures (a security blanket that CI may have protected you from in the past).
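One way to claw some of that safety back is a small wrapper that refuses to deploy from the wrong branch or a dirty working tree. This is just a sketch, not an official Vapor feature; branch name and checks are assumptions you'd tune to your own workflow:

```shell
# Hypothetical pre-deploy guard. Source this file, then call `guarded_deploy`.
guarded_deploy() {
    target="${1:-production}"

    # Refuse to deploy from anything other than master.
    branch="$(git rev-parse --abbrev-ref HEAD 2>/dev/null)"
    if [ "$branch" != "master" ]; then
        echo "Refusing to deploy: on branch '${branch:-unknown}', expected 'master'." >&2
        return 1
    fi

    # Refuse to deploy with uncommitted changes in the tree.
    if ! git diff-index --quiet HEAD -- 2>/dev/null; then
        echo "Refusing to deploy: uncommitted changes present." >&2
        return 1
    fi

    vapor deploy "$target"
}
```

Running tests first (or delegating the whole thing to CI, as below) closes the rest of the gap.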
Thank goodness for:
```shell
vapor rollback production
```
Many heads will be scratched over the coming years thanks to some of the more powerful vapor commands, which will wreak wholesale destruction on inexperienced AWS users.
The team permissions are nice, and should be used carefully to ensure mission critical apps are firewalled from curious juniors.
In some cases it would be advisable to disable team deployments altogether and instead invoke them from the CI server with branch filtering.
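A CircleCI workflow with branch filtering might look roughly like this — the job name is illustrative, standing in for a job that runs `vapor deploy production`:

```yaml
# Illustrative CircleCI fragment: only master ever reaches production
workflows:
  deploy:
    jobs:
      - deploy-production:   # assumed job wrapping `vapor deploy production`
          filters:
            branches:
              only: master
```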
The holy grail for me, to complement Lambda-powered app servers, has to be a serverless autoscaling database. Aurora Serverless is just that, but the downside is that the costs can be rather prohibitive compared to conventional RDS.
Definitely run the calculations on this one before rolling it out to a busy site.
I don't think there is any suggestion from anyone that Laravel Vapor can totally circumvent the need for hands-on AWS experience, and there are definitely going to be times during debugging and customisation when you will need to get down and dirty with AWS.
Just be thankful that Vapor has done most of the boring stuff really, really well, and you will hopefully find yourself in the luxurious position of leveraging AWS rather than simply trying to keep the lights on.