AWS Chalice deployment with a vendored Oracle client
While working on a SAM-based examination API used by CTI, I encountered Chalice, an AWS project inspired by Flask and built for serverless applications on AWS. Connecting to an on-premise Oracle database is a requirement, so the only option seems to be Oracle's cx_Oracle driver.
Setting up Chalice
Before Chalice, we used Zappa, Serverless, and SAM in some of our other projects; Terraform looks interesting, but we have not had a chance to use it yet. One of the main reasons we chose Chalice is that it is a microframework built specifically for serverless.
At the time of writing, Python 3.7 had been released, but AWS Lambda still runs Python 3.6. I was surprised to find that the Amazon Linux 2 AMI was shipping a beta build of Python 3.7 while I was testing. In the snippets below, $ indicates a shell prompt; after installing Python 3:
```shell
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install chalice
(venv) $ chalice new-project hello && cd hello
```
Vendoring cx_Oracle
Why do we need to vendor a library when Python has pip? cx_Oracle is not a pure-Python library: it depends on proprietary Oracle binaries. The other reason to vendor is for libraries with parts written in compiled languages, which must be built for the target platform.
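A quick way to see the distinction is by file suffix: a pure-Python module ships as `.py` files, while a compiled extension like cx_Oracle's ships as a platform-specific shared object. A minimal sketch (the filename below is just an example):

```python
# Minimal sketch: compiled extension modules use native suffixes
# (`.so` on Linux, `.pyd` on Windows), unlike pure-Python `.py` files.
def is_compiled_extension(filename):
    """Return True if the module file is a native extension."""
    return filename.endswith((".so", ".pyd"))

print(is_compiled_extension("cx_Oracle.cpython-36m-x86_64-linux-gnu.so"))
print(is_compiled_extension("chalice/app.py"))
```

Anything that returns True here must be built (and bundled) for the Lambda runtime's platform, which is exactly what vendoring does.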
There are two choices of build environment for producing the vendored files: the amazonlinux Docker container, or an Amazon Linux AMI on a t2.micro EC2 instance. First, install Python 3 and the required build dependencies in the build environment.
```shell
$ yum install python36-devel gcc
$ pip-3.6 install wheel
```
We use Oracle Instant Client 11.1, so after searching through cx_Oracle's release notes we settled on cx_Oracle 5.2.1 (cx_Oracle 5.2.2 dropped support for Instant Client 11.1). Download the Instant Client for Linux x86-64 (Basic and SDK packages) and decompress them in the build environment.
```shell
$ # unzip oracle files to $PWD/oracle/
$ export ORACLE_HOME=$PWD/oracle/instantclient_11_1/
$ # create soft link for `libclntsh.so` as it might complain
$ ln -s libclntsh.so.11.1 $ORACLE_HOME/libclntsh.so
$ # chalice docs do `pip download` then `pip wheel`, we just skip that here
$ pip-3.6 wheel cx_Oracle==5.2.1
```
Unzip cx_Oracle-5.2.1-cp36-cp36m-linux_x86_64.whl into vendor/, and copy the required dynamic libraries, traced with ldd, into vendor/lib/. Here we took some files from oracle/instantclient_11_1/, plus /lib64/libaio.so.1 from the libaio package.
```shell
(venv) $ unzip cx_Oracle-5.2.1-cp36-cp36m-linux_x86_64.whl -d vendor/
(venv) $ mkdir vendor/lib/
$ # figure out the dependencies with `ldd` and move them to `vendor/lib/`
```
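Picking out which of ldd's reported dependencies to bundle can be scripted. Below is a hypothetical helper that parses ldd output and keeps only the Oracle and libaio libraries; the library-name prefixes and sample paths are illustrative, not a complete list:

```python
# Hypothetical helper: parse `ldd` output and select the libraries that
# must be copied into vendor/lib/ (Oracle client libs and libaio).
import re

NEEDED = ("libclntsh", "libnnz", "libociei", "libaio")

def vendored_libs(ldd_output):
    """Return resolved paths of dependencies we need to bundle."""
    paths = re.findall(r"=>\s+(\S+)\s+\(0x", ldd_output)
    return [p for p in paths if any(n in p for n in NEEDED)]

# Example ldd output (paths are made up for illustration):
sample = """\
    linux-vdso.so.1 =>  (0x00007ffd6e3c2000)
    libclntsh.so.11.1 => /build/oracle/instantclient_11_1/libclntsh.so.11.1 (0x00007f1a2c000000)
    libaio.so.1 => /lib64/libaio.so.1 (0x00007f1a2bdfd000)
    libc.so.6 => /lib64/libc.so.6 (0x00007f1a2ba30000)
"""
print(vendored_libs(sample))
```

System libraries such as libc are deliberately excluded: they are already present in the Lambda runtime, so only the Oracle-specific binaries need to travel with the deployment package.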
Chalice development
As this project contains huge binaries, we use Git LFS (Large File Storage) to store the binaries under vendor/. Git LFS allows faster cloning as more large binaries accumulate in the git repository over time. Let's also stage the changes.
```shell
(venv) $ git init
(venv) $ git lfs install
(venv) $ git lfs track 'vendor/lib/*' 'vendor/*.so'
(venv) $ git add vendor/ .gitattributes
```
After setting up Chalice and Oracle, let's start development. Before the application can work, we need to set the dynamic library path used by Chalice (vendor/ and vendor/lib/) and the credentials for the connection. Tip: chalice local supports hot reload.
```shell
(venv) $ export LD_LIBRARY_PATH=$PWD/vendor/lib/
(venv) $ export ORACLE_CONNECTION=username/password@1.2.3.4/instance
(venv) $ chalice local
```
Below is sample code that takes an optional parameter name on the path / and searches the database when name is provided. Put the following code in the conventional path app.py.
```python
import os

import cx_Oracle
from chalice import Chalice

app = Chalice(app_name='foobar')
if os.getenv('CHALICE_DEBUG'):
    app.debug = True


@app.route('/')
def index():
    params = app.current_request.query_params
    query_name = params.get('name') if params else None
    with cx_Oracle.Connection(os.environ['ORACLE_CONNECTION']) as conn:
        cursor = conn.cursor()
        if query_name:
            cursor.execute("""SELECT name, created_at FROM users
                              WHERE name = :name""", name=query_name)
        else:
            cursor.execute('SELECT name, created_at FROM users')
        return [{
            'name': row[0],
            'createdAt': row[1].date().isoformat(),
        } for row in cursor]
```
After the server reloads, test it with cURL.
```shell
$ curl -H 'Accept: application/json' '127.0.0.1:8000?name=foo'
```
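The JSON the endpoint returns is shaped by the list comprehension at the end of app.py. A standalone sketch of that serialization, using a made-up row in place of a real cursor result:

```python
# Sketch of app.py's row serialization: each (name, created_at) row
# becomes {"name": ..., "createdAt": ...} with an ISO-formatted date.
import datetime

rows = [("foo", datetime.datetime(2018, 7, 1, 12, 30))]  # fake cursor row
payload = [{
    'name': row[0],
    'createdAt': row[1].date().isoformat(),
} for row in rows]
print(payload)  # [{'name': 'foo', 'createdAt': '2018-07-01'}]
```

Note that the DATE column arrives as a Python datetime, so `.date().isoformat()` drops the time component before serialization.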
Chalice deployment
We use Bitbucket (no Microsoft's GitHub T_T), so Bitbucket Pipelines for continuous deployment. One assumption that took quite some time to fix is a Bitbucket pitfall: Pipelines needs an explicit LFS clone. We are also required to deploy via Chalice's CloudFormation packaging. Here is why: by default, Chalice deploys with direct API calls (unlike Zappa) and tracks its deployments in .chalice/deployed/, but since that state would not be available in the pipeline, we have to use the Chalice CloudFormation deployment instead.
Next, add the following to bitbucket-pipelines.yml. Note that we are required to install cx_Oracle, as Chalice complains during packaging when it is not present. We use a manual trigger with Bitbucket Pipeline deployments to push to production.
```yaml
image: cticti/aws-cli

clone:
  lfs: true

pipelines:
  default:
    - step:
        name: Build cloudformation package
        image: python:3.6
        caches:
          - pip
        script:
          - pip install chalice cx_Oracle
          - chalice package --stage dev packaged-dev
          - chalice package --stage prod packaged-prod
        artifacts:
          - packaged-*/**
    - step:
        name: Deploy to test
        deployment: test
        script:
          - aws cloudformation package --template-file packaged-dev/sam.json --s3-bucket foobar-src --output-template-file packaged-dev/packaged-dev.yaml
          - aws cloudformation deploy --template-file packaged-dev/packaged-dev.yaml --stack-name foobar-dev --capabilities CAPABILITY_IAM
          - aws cloudformation describe-stacks --stack-name foobar-dev --query "Stacks[].Outputs[?OutputKey=='EndpointURL'][] | [0].OutputValue"
    - step:
        name: Deploy to production
        deployment: production
        trigger: manual
        script:
          - aws cloudformation package --template-file packaged-prod/sam.json --s3-bucket foobar-src --output-template-file packaged-prod/packaged-prod.yaml
          - aws cloudformation deploy --template-file packaged-prod/packaged-prod.yaml --stack-name foobar-prod --capabilities CAPABILITY_IAM
          - aws cloudformation describe-stacks --stack-name foobar-prod --query "Stacks[].Outputs[?OutputKey=='EndpointURL'][] | [0].OutputValue"
```
We also need to modify .chalice/config.json to suit our needs. We only have dev and prod stages, so the beta stage shown below is unused for now. We also set subnet_ids and security_group_ids for the AWS networking configuration, since the Lambda must run inside a VPC that can reach the on-premise database.
```json
{
  "version": "2.0",
  "app_name": "foobar",
  "api_gateway_stage": "api",
  "subnet_ids": ["subnet-xxxxxxxx", "subnet-xxxxxxxx"],
  "security_group_ids": ["sg-xxxxxxxxxxxxxxxxx"],
  "stages": {
    "dev": {
      "environment_variables": {
        "CHALICE_DEBUG": "true"
      }
    },
    "beta": {},
    "prod": {
      "api_gateway_stage": "prod"
    }
  }
}
```
Now we can git add app.py bitbucket-pipelines.yml .chalice/config.json and then git commit -m 'Initial commit'. Remember to create the S3 bucket used above (foobar-src), which is intentionally left out of this tutorial.
Happy Hacking! 😀