Nowadays Docker is everywhere. It is one of the main components of Continuous Integration / Continuous Delivery environments. That alone indicates Docker should be seen more as a software delivery platform than as a replacement for a virtual machine.
However ...
If you are running an Oracle database using Docker on your local machine to develop an APEX application, you will probably not move that container as a whole to the test and production environments. In that case you would not only deliver a new APEX application to the production environment - which is a good thing - but also overwrite the data in production with the data from your development environment. And that won't make your users very happy.
So in this setup you will be using Docker as a replacement for a virtual machine and not as a delivery platform.
And that's exactly the way Martin is using it, as he described in his recent blog post. It is an ideal way to get up and running with an Oracle database in a Docker container in a very short time (you have to download 3.5 GB, so depending on your connection that might take a while). This works fine for a short PoC. But what if you want your container to live a bit longer than a week or two? If your container gets blown away for some reason, you will lose all your data. The original image is still there, but that contains just the starting situation - a.k.a. an empty database. And from recent experience I can tell you that containers do get destroyed (in my case by an upgrade of the Docker software). The whole concept of containers is that they are ephemeral, lasting only a short period of time.
But luckily we can set up a Docker container in such a way that our data is safe, even when the container is destroyed. We can use volume mapping for that. So we don't download the complete pre-installed database from the link Martin mentioned in his blog; instead we get the Docker build files and the required Oracle database software, and build the image ourselves. Exactly as Maria describes in her blog post.
In my setup, once the image was created, I started the container by issuing this command:
docker run --name oracle -p 1521:1521 -p 5500:5500 \
-v /Users/Roel/docker/database:/opt/oracle/oradata \
oracle/database:12.2.0.1-ee
The advantage is that all my data is stored outside the container, on my local machine. The container itself is static - there will be no changes inside it. So if the container blows up, I can just spin up a new one using the same data that is still stored on my disk. And it seems a bit faster as well.

But if we want to develop APEX applications, we need to have ORDS installed somewhere as well. Of course we could just run ORDS locally, connecting to the (forwarded) port 1521. But it would be way nicer if ORDS ran in a Docker container as well!
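As a quick sanity check, that persistence claim can be exercised by deliberately removing the container and recreating it against the same host directory - a sketch assuming the same paths and image tag as the run command above:

```shell
# Throw the container away on purpose; the datafiles live on the
# host in /Users/Roel/docker/database, so they survive this.
docker rm -f oracle

# Recreate the container with the exact same volume mapping; the
# new container picks up the existing datafiles instead of
# initializing an empty database.
docker run -d --name oracle -p 1521:1521 -p 5500:5500 \
  -v /Users/Roel/docker/database:/opt/oracle/oradata \
  oracle/database:12.2.0.1-ee
```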
Before we start that second container, we have to make sure both containers can connect to each other. This is possible using the --link switch, but that is deprecated. The new way of connecting containers is to create a Docker network:
docker network create my_network
Then we add our running database container, named "oracle", to that network:

docker network connect my_network oracle
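To confirm the attachment worked, Docker can list the containers on the network; the --format template below is just one way to show the names:

```shell
# Print the names of all containers attached to my_network;
# "oracle" should appear in the output.
docker network inspect my_network \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```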
And now we can go looking for a Docker image for our ORDS. If you issue the command:

docker search ords
you'll get a few hits. As we already have a database running, we need an image containing just ORDS and fire that up - in this case the image lucassampsouza/ords_apex:3.0.9:

docker run -t -i \
--name ords \
--network=my_network \
-e DATABASE_HOSTNAME="oracle" \
-e DATABASE_PORT="1521" \
-e DATABASE_SERVICENAME="ORCLPDB1" \
-e DATABASE_PUBLIC_USER_PASS=oracle \
-e APEX_LISTENER_PASS=oracle \
-e APEX_REST_PASS=oracle \
-e ORDS_PASS=oracle \
--volume /Users/Roel/docker/apex/images:/usr/local/tomcat/webapps/i \
-p 8080:8080 \
lucassampsouza/ords_apex:3.0.9
It is version 3.0.9, but of course you can upgrade it yourself - or just wait for a newer version.

Some remarks about this command:
- The --network switch adds this new container to the network immediately.
- I can reference the hostname of the database - normally the name or IP address of the machine the database is running on - simply by the name of the container, "oracle".
- I defined (another) volume mapping for this container so that my APEX images directory is located on my local machine. That way I can easily patch or upgrade APEX without touching the container. If this container gets blown away ... I just fire up a new one with the same command and I am ready to go.
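The name resolution mentioned above is easy to verify from inside the ORDS container (assuming the image ships the usual tooling, such as getent; adjust if it doesn't):

```shell
# On a user-defined network, Docker's embedded DNS resolves
# container names; this prints the internal IP of "oracle".
docker exec ords getent hosts oracle
```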
So now we have two containers running next to each other. Nice. But I need a solution for printing as well. So I asked Dimitri whether he had a Docker image of his APEX Office Print (AOP) solution, and he was kind enough to make one available to me. Very similar to the ORDS one, I started this one up, attached to the same network - and also with another volume mapping, to a directory that holds my license key:
docker run -d \
--name apexofficeprint \
--network=my_network \
-p 8010:8010 \
-v /Users/Roel/docker/apexofficeprint/:/apexofficeprintstartup/ \
apexrnd/apexofficeprint \
-s /apexofficeprintstartup/
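Before wiring AOP into APEX, a quick check from the host that the container is listening can save some debugging (the root path and response are assumptions; adjust for your AOP version):

```shell
# Port 8010 is forwarded to the host, so a simple HTTP probe
# tells us whether the AOP container is accepting connections.
curl -sf http://localhost:8010/ > /dev/null && echo "AOP is up"
```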
So now I only needed two more steps to make it work. First, define the AOP URL in the component settings of AOP in APEX: http://apexofficeprint:8010/ (again, notice the use of the container name in that URL). And finally, open up the ACL for the APEX owner, so the database can connect to the AOP container:

begin
  dbms_network_acl_admin.append_host_ace (
    host       => 'apexofficeprint',
    lower_port => 8010,
    upper_port => 8010,
    ace        => xs$ace_type(privilege_list => xs$name_list('http'),
                              principal_name => 'APEX_050100',
                              principal_type => xs_acl.ptype_db));
end;
/
So this complete configuration is totally ephemeral (love using that word again), as all the important data is stored on my local disk. And I can easily burn or replace containers - for instance when a new version of the ORDS or AOP image becomes available.

To wrap up, this is how it looks as a picture:
Now go and try it yourself!