Elastic Beanstalk automatically creates S3 buckets in your AWS account to store the data it needs (manifests, logs, application versions, and so on). If those files are gone, the environment may fail to launch.
Here is a real example: one of our power users set up a lifecycle rule on the Beanstalk bucket to automatically remove files older than 30 days. From a cost-saving perspective this sounds fine: less storage, less cost (not much, though). But it can be extremely dangerous. If files the environment depends on are removed, the environment is broken. Luckily, this happened to one of the dev environments first, and we fixed it before it caused any issue in prod.
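For reference, the kind of rule that caused this looks roughly like the following (a sketch; the rule ID is made up). The dangerous part is that the empty prefix applies the 30-day expiration to every object in the bucket, including Beanstalk's own resources/ keys:

```json
{
  "Rules": [
    {
      "ID": "expire-old-objects",
      "Status": "Enabled",
      "Prefix": "",
      "Expiration": { "Days": 30 }
    }
  ]
}
```

If you must expire objects in this bucket, scope the prefix to something Beanstalk does not depend on instead of the whole bucket.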
Beanstalk failed to launch the environment and threw an error: LaunchWaitCondition failed. The expected number of EC2 instances were not initialized within the given time. Rebuild the environment. If this persists, contact support.
In the Beanstalk log file, you can see it kept trying to download the file and failing every time, until it eventually timed out:
2016-03-14 02:47:26,146 [ERROR] Exception in getting the location of latest version manifest file from bucket elasticbeanstalk-ap-southeast-2-XXXXXXXXXX and prefix resources/environments/e-xxxxxxxx/_runtime/versions/
2016-03-14 02:47:26,146 [ERROR] Encountered exception: ""
Traceback (most recent call last):
  File "/opt/elasticbeanstalk/bin/download_source_bundle", line 71, in retry
    return function(*args, **kwargs)
  File "/opt/elasticbeanstalk/bin/download_source_bundle", line 191, in get_latest_version_manifest_file_s3_key_retry
    raise Exception
Exception
2016-03-14 02:47:26,147 [INFO] Sleeping for 5.000000 seconds before retrying
I checked the bucket and found the manifest file was gone. So I manually created one in the following format and named it manifest_1457926096143 (1457926096143 is a Unix epoch timestamp in milliseconds; you can translate it to a human-readable date by running date -d @1457926096, dropping the last three digits because date expects seconds).
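As a quick sanity check on that timestamp (the 13 digits are milliseconds, which is why it must be divided by 1000 before handing it to GNU date):

```shell
# 1457926096143 is milliseconds since the Unix epoch; GNU date expects
# seconds, so integer division by 1000 drops the millisecond part.
ts_ms=1457926096143
ts_s=$((ts_ms / 1000))
date -u -d "@$ts_s" +'%Y-%m-%d %H:%M:%S UTC'
# prints: 2016-03-14 03:28:16 UTC
```

The date lines up with the failure timestamps in the log above, which is a handy way to confirm you picked a sensible epoch value for the manifest name.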