merge origin/master

commit da8f0d0864
Author: mouyong
Date: 2019-08-05 03:43:48 +08:00

81 changed files with 7296 additions and 1036 deletions

.github/FUNDING.yml (vendored, new file, 6 lines)

@ -0,0 +1,6 @@
# DO NOT CHANGE THIS FILE PLEASE.
open_collective: laradock
ko_fi: laradock
issuehunt: laradock
custom: ['beerpay.io/laradock/laradock', 'paypal.me/mzmmzz']

.github/README.md (vendored, 39 lines changed)

@ -2,7 +2,7 @@
<img src="/.github/home-page-images/laradock-logo.jpg?raw=true" alt="Laradock Logo"/>
</p>
<p align="center">A Docker PHP development environment that facilitates running PHP Apps on Docker</p>
<p align="center">PHP development environment that runs on Docker</p>
<p align="center">
<a href="https://travis-ci.org/laradock/laradock"><img src="https://travis-ci.org/laradock/laradock.svg?branch=master" alt="Build status"></a>
@ -13,10 +13,10 @@
<a href="http://laradock.io/contributing"><img src="https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat" alt="contributions welcome"></a>
</p>
<h4 align="center" style="color:#7d58c2">Use Docker First And Learn About It Later</h4>
<h4 align="center" style="color:#7d58c2">Use Docker First - Then Learn About It Later</h4>
<p align="center">
<a href="https://zalt.me"><img src="http://forthebadge.com/images/badges/built-by-developers.svg" alt="forthebadge" width="240" ></a>
<a href="http://zalt.me"><img src="http://forthebadge.com/images/badges/built-by-developers.svg" alt="forthebadge" width="240" ></a>
</p>
@ -24,17 +24,13 @@
<p align="center">
<a href="http://laradock.io">
<img src="https://s19.postimg.org/ecnn9vdw3/Screen_Shot_2017-08-01_at_5.08.54_AM.png" width=300px" alt="Laradock Docs"/>
<img src="https://raw.githubusercontent.com/laradock/laradock/master/.github/home-page-images/documentation-button.png" width=300px" alt="Laradock Docs"/>
</a>
</p>
## Sponsors
Support this project by becoming a sponsor.
Your logo will show up on the [github repository](https://github.com/laradock/laradock/) index page and the [documentation](http://laradock.io/) main page, with a link to your website. [[Become a sponsor](https://opencollective.com/laradock#sponsor)]
<a href="https://opencollective.com/laradock/sponsor/0/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/0/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/1/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/1/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/2/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/2/avatar.svg"></a>
@ -43,13 +39,14 @@ Your logo will show up on the [github repository](https://github.com/laradock/la
<a href="https://opencollective.com/laradock/sponsor/5/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/5/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/6/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/6/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/7/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/7/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/8/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/8/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/9/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/9/avatar.svg"></a>
For basic sponsorships go to [Open Collective](https://opencollective.com/laradock#sponsor), for golden sponsorships contact <a href = "mailto: support@laradock.io">support@laradock.io</a>.
## Contributors
*Your logo will show up on the [github repository](https://github.com/laradock/laradock/) index page and the [documentation](http://laradock.io/) main page, with a link to your website.*
#### Core contributors:
## People
#### Maintainers:
- [Mahmoud Zalt](https://github.com/Mahmoudz) @mahmoudz | [Twitter](https://twitter.com/Mahmoud_Zalt) | [Site](http://zalt.me)
- [Bo-Yi Wu](https://github.com/appleboy) @appleboy | [Twitter](https://twitter.com/appleboy)
- [Philippe Trépanier](https://github.com/philtrep) @philtrep
@ -62,9 +59,11 @@ Your logo will show up on the [github repository](https://github.com/laradock/la
- [Milan Urukalo](https://github.com/urukalo) @urukalo
- [Vince Chu](https://github.com/vwchu) @vwchu
- [Huadong Zuo](https://github.com/zuohuadong) @zuohuadong
- Join us, by submitting 20 useful PR's.
- [Lan Phan](https://github.com/lanphan) @lanphan
- [Ahkui](https://github.com/ahkui) @ahkui
- Join us.
#### Awesome contributors:
#### Awesome Contributors:
<a href="https://github.com/laradock/laradock/graphs/contributors"><img src="https://opencollective.com/laradock/contributors.svg?width=890" /></a>
@ -74,18 +73,18 @@ Your logo will show up on the [github repository](https://github.com/laradock/la
> Help keep the project development going by [contributing](http://laradock.io/contributing) or donating a little.
> Thanks in advance.
Donate directly via [Paypal](https://www.paypal.me/mzalt)
Donate directly via [Paypal](https://paypal.me/mzmmzz)
[![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.me/mzalt)
or become a backer on [Open Collective](https://opencollective.com/laradock#backer)
<a href="https://opencollective.com/laradock#backers" target="_blank"><img src="https://opencollective.com/laradock/backers.svg?width=890"></a>
[![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://paypal.me/mzmmzz)
or show your support via [Beerpay](https://beerpay.io/laradock/laradock)
[![Beerpay](https://beerpay.io/laradock/laradock/badge.svg?style=flat)](https://beerpay.io/laradock/laradock)
or become a backer on [Open Collective](https://opencollective.com/laradock#backer)
<a href="https://opencollective.com/laradock#backers" target="_blank"><img src="https://opencollective.com/laradock/backers.svg?width=890"></a>
## License

Binary file added (image, 24 KiB; not shown).

.gitignore (vendored, 10 lines changed)

@ -5,4 +5,12 @@
/.project
.docker-sync
/jenkins/jenkins_home
/nginx/ssl/
/logstash/pipeline/*.conf
/logstash/config/pipelines.yml
/nginx/ssl/*.crt
/nginx/ssl/*.key
/nginx/ssl/*.csr
.DS_Store


@ -11,11 +11,13 @@ env:
- PHP_VERSION=7.0 BUILD_SERVICE=workspace
- PHP_VERSION=7.1 BUILD_SERVICE=workspace
- PHP_VERSION=7.2 BUILD_SERVICE=workspace
- PHP_VERSION=7.3 BUILD_SERVICE=workspace
- PHP_VERSION=5.6 BUILD_SERVICE=php-fpm
- PHP_VERSION=7.0 BUILD_SERVICE=php-fpm
- PHP_VERSION=7.1 BUILD_SERVICE=php-fpm
- PHP_VERSION=7.2 BUILD_SERVICE=php-fpm
- PHP_VERSION=7.3 BUILD_SERVICE=php-fpm
- PHP_VERSION=hhvm BUILD_SERVICE=hhvm
@ -23,13 +25,15 @@ env:
- PHP_VERSION=7.0 BUILD_SERVICE=php-worker
- PHP_VERSION=7.1 BUILD_SERVICE=php-worker
- PHP_VERSION=7.2 BUILD_SERVICE=php-worker
- PHP_VERSION=7.3 BUILD_SERVICE=php-worker
- PHP_VERSION=NA BUILD_SERVICE=solr
- PHP_VERSION=NA BUILD_SERVICE="mssql rethinkdb aerospike"
- PHP_VERSION=NA BUILD_SERVICE="blackfire minio percona nginx caddy apache2 mysql mariadb postgres postgres-postgis neo4j mongo redis"
- PHP_VERSION=NA BUILD_SERVICE="blackfire minio percona nginx caddy apache2 mysql mariadb postgres postgres-postgis neo4j mongo redis cassandra"
- PHP_VERSION=NA BUILD_SERVICE="adminer phpmyadmin pgadmin"
- PHP_VERSION=NA BUILD_SERVICE="memcached beanstalkd beanstalkd-console rabbitmq elasticsearch certbot mailhog maildev selenium jenkins proxy proxy2 haproxy"
- PHP_VERSION=NA BUILD_SERVICE="kibana grafana laravel-echo-server"
- PHP_VERSION=NA BUILD_SERVICE="ipython-controller manticore"
# - PHP_VERSION=NA BUILD_SERVICE="aws"
# Installing a newer Docker version


@ -48,42 +48,42 @@ googleAnalytics = "UA-37514928-9"
# ------- MENU START -----------------------------------------
[[menu.main]]
name = "Introduction"
name = "1. Introduction"
url = "introduction/"
weight = 1
[[menu.main]]
name = "Getting Started"
name = "2. Getting Started"
url = "getting-started/"
weight = 2
[[menu.main]]
name = "Documentation"
name = "3. Documentation"
url = "documentation/"
weight = 3
[[menu.main]]
name = "Guides"
name = "4. Guides"
url = "guides/"
weight = 4
[[menu.main]]
name = "Help & Questions"
name = "5. Help & Questions"
url = "help/"
weight = 5
[[menu.main]]
name = "Related Projects"
name = "6. Related Projects"
url = "related-projects/"
weight = 6
[[menu.main]]
name = "Contributing"
name = "7. Contributing"
url = "contributing/"
weight = 7
[[menu.main]]
name = "License"
name = "8. License"
url = "license/"
weight = 8


@ -1,5 +1,5 @@
---
title: Contributing
title: 7. Contributing
type: index
weight: 7
---


@ -1,5 +1,5 @@
---
title: Documentation
title: 3. Documentation
type: index
weight: 3
---
@ -297,6 +297,24 @@ e) set it to `true`
For information on how to configure xDebug with your IDE and make it work, check this [Repository](https://github.com/LarryEitel/laravel-laradock-phpstorm) or follow the next section if you use Linux and PhpStorm.
<br>
<a name="Control-xDebug"></a>
## Start/Stop xDebug:
By installing xDebug, you are enabling it to run on startup by default.
To control the behavior of xDebug (in the `php-fpm` Container), you can run the following commands from the Laradock root folder (at the same prompt where you run docker-compose):
- Stop xDebug from running by default: `./php-fpm/xdebug stop`.
- Start xDebug by default: `./php-fpm/xdebug start`.
- See the status: `./php-fpm/xdebug status`.
Note: If `./php-fpm/xdebug` doesn't execute and gives a `Permission Denied` error, the problem may be that the `xdebug` file doesn't have execute permission. This can be fixed by running `chmod` with the desired permissions.
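For example, a minimal fix (assuming the helper script lives at `php-fpm/xdebug` relative to the Laradock root):

```bash
# give the xdebug helper script execute permission (run from the Laradock root folder)
chmod +x php-fpm/xdebug
```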
<br>
<a name="Install-phpdbg"></a>
## Install phpdbg
@ -320,37 +338,6 @@ PHP_FPM_INSTALL_PHPDBG=true
```
<a name="Setup remote debugging for PhpStorm on Linux"></a>
## Setup remote debugging for PhpStorm on Linux
- Make sure you have followed the steps above in the [Install Xdebug section](#install-xdebug).
- Make sure Xdebug accepts connections and listens on port 9000. (Should be default configuration).
![Debug Configuration](/images/photos/PHPStorm/linux/configuration/debugConfiguration.png "Debug Configuration").
- Create a server with name `laradock` (matches **PHP_IDE_CONFIG** key in environment file) and make sure to map project root path with server correctly.
![Server Configuration](/images/photos/PHPStorm/linux/configuration/serverConfiguration.png "Server Configuration").
- Start listening for debug connections, place a breakpoint and you are good to go !
<br>
<a name="Control-xDebug"></a>
## Start/Stop xDebug:
By installing xDebug, you are enabling it to run on startup by default.
To control the behavior of xDebug (in the `php-fpm` Container), you can run the following commands from the Laradock root folder, (at the same prompt where you run docker-compose):
- Stop xDebug from running by default: `.php-fpm/xdebug stop`.
- Start xDebug by default: `.php-fpm/xdebug start`.
- See the status: `.php-fpm/xdebug status`.
Note: If `.php-fpm/xdebug` doesn't execute and gives `Permission Denied` error the problem can be that file `xdebug` doesn't have execution access. This can be fixed by running `chmod` command with desired access permissions.
<br>
@ -394,6 +381,37 @@ Always download the latest version of [Loaders for ionCube ](http://www.ioncube.
<br>
<a name="Install-SonarQube"></a>
## Install SonarQube (automatic code review tool)
SonarQube® is an automatic code review tool to detect bugs, vulnerabilities and code smells in your code. It can integrate with your existing workflow to enable continuous code inspection across your project branches and pull requests.
<br>
1 - Open the `.env` file
<br>
2 - Search for the `SONARQUBE_HOSTNAME=sonar.example.com` argument
<br>
3 - Set it to your domain, e.g. `sonar.example.com`
<br>
4 - `docker-compose up -d sonarqube`
<br>
5 - Open your browser: http://localhost:9000/
Troubleshooting:
if you encounter a database error:
```
docker-compose exec --user=root postgres bash
source docker-entrypoint-initdb.d/init_sonarqube_db.sh
```
If you encounter a logs error:
```
docker-compose run --user=root --rm sonarqube chown sonarqube:sonarqube /opt/sonarqube/logs
```
[**SonarQube Documentation Here**](https://docs.sonarqube.org/latest/)
@ -409,7 +427,9 @@ Always download the latest version of [Loaders for ionCube ](http://www.ioncube.
<a name="Laradock-for-Production"></a>
## Prepare Laradock for Production
It's recommended for production to create a custom `docker-compose.yml` file. For that reason, Laradock is shipped with `production-docker-compose.yml` which should contain only the containers you are planning to run on production (usage example: `docker-compose -f production-docker-compose.yml up -d nginx mysql redis ...`).
It's recommended for production to create a custom `docker-compose.yml` file, for example `production-docker-compose.yml`.
Your new production `docker-compose.yml` file should contain only the containers you are planning to run in production (usage example: `docker-compose -f production-docker-compose.yml up -d nginx mysql redis ...`).
Note: The database (MySQL/MariaDB/...) ports should not be forwarded in production, because Docker automatically publishes the port on the host (unless specifically told not to), which is quite insecure. So make sure to remove these lines:
@ -532,29 +552,8 @@ phpunit
<a name="Run-Laravel-Queue-Worker"></a>
## Run Laravel Queue Worker
1 - First add `php-worker` container. It will be similar as like PHP-FPM Container.
<br>
a) open the `docker-compose.yml` file
<br>
b) add a new service container by simply copy-paste this section below PHP-FPM container
1 - Create a supervisor configuration file (for example, named `laravel-worker.conf`) for the Laravel Queue Worker in `php-worker/supervisord.d/` by simply copying `laravel-worker.conf.example`
```yaml
php-worker:
build:
context: ./php-worker
args:
- INSTALL_PGSQL=${PHP_WORKER_INSTALL_PGSQL} #Optionally install PGSQL PHP drivers
- INSTALL_BCMATH=${PHP_WORKER_INSTALL_BCMATH} #Optionally install BCMath php package
- INSTALL_SOAP=${PHP_WORKER_INSTALL_SOAP} #Optionally install Soap php package
volumes_from:
- applications
depends_on:
- workspace
extra_hosts:
- "dockerhost:${DOCKER_HOST_IP}"
networks:
- backend
```
2 - Start everything up
```bash
@ -566,6 +565,34 @@ docker-compose up -d php-worker
<br>
<a name="Run-Laravel-Scheduler"></a>
## Run Laravel Scheduler
Laradock provides 2 ways to run the Laravel Scheduler:
1 - Using cron in the workspace container. Most of the time, when you start Laradock, it'll automatically start the workspace container with cron inside, configured to run the `schedule:run` command every minute.
2 - Using Supervisord in php-worker to run `schedule:run`. This approach is suggested when you don't want to start the workspace container in a production environment.
<br>
a) Comment out cron setting in workspace container, file `workspace/crontab/laradock`
```bash
# * * * * * laradock /usr/bin/php /var/www/artisan schedule:run >> /dev/null 2>&1
```
<br>
b) Create a supervisor configuration file (for example, named `laravel-scheduler.conf`) for the Laravel Scheduler in `php-worker/supervisord.d/` by simply copying `laravel-scheduler.conf.example`
<br>
c) Start php-worker container
```bash
docker-compose up -d php-worker
```
<br>
<a name="Use-Mailu"></a>
## Use Mailu
@ -701,6 +728,44 @@ composer require predis/predis:^1.0
<br>
<a name="Use-Redis-Cluster"></a>
## Use Redis Cluster
1 - First make sure you run the Redis-Cluster Container (`redis-cluster`) with the `docker-compose up` command.
```bash
docker-compose up -d redis-cluster
```
2 - Open your Laravel's `config/database.php` and set the redis cluster configuration. Below is an example configuration using phpredis.
Read the [Laravel official documentation](https://laravel.com/docs/5.7/redis#configuration) for more details.
```php
'redis' => [

    'client' => 'phpredis',

    'options' => [
        'cluster' => 'redis',
    ],

    'clusters' => [
        'default' => [
            [
                'host' => 'redis-cluster',
                'password' => null,
                'port' => 7000,
                'database' => 0,
            ],
        ],
    ],

],
```
<br>
<a name="Use-Mongo"></a>
## Use Mongo
@ -817,6 +882,67 @@ docker-compose up -d gitlab
<br>
<a name="Use-Gitlab-Runner"></a>
## Use Gitlab Runner
1 - Retrieve the registration token in your gitlab project (Settings > CI / CD > Runners > Set up a specific Runner manually)
2 - Open the `.env` file and make the following changes:
```
# so that gitlab container will pass the correct domain to gitlab-runner container
GITLAB_DOMAIN_NAME=http://gitlab
GITLAB_RUNNER_REGISTRATION_TOKEN=<value-in-step-1>
# so that gitlab-runner container will send POST request for registration to correct domain
GITLAB_CI_SERVER_URL=http://gitlab
```
3 - Open the `docker-compose.yml` file and make the following changes:
```yml
gitlab-runner:
  environment: # these values will be used during `gitlab-runner register`
    - RUNNER_EXECUTOR=docker # change from shell (default)
    - DOCKER_IMAGE=alpine
    - DOCKER_NETWORK_MODE=laradock_backend
  networks:
    - backend # connect to network where gitlab service is connected
```
4 - Run the Gitlab-Runner Container (`gitlab-runner`) with the `docker-compose up` command. Example:
```bash
docker-compose up -d gitlab-runner
```
5 - Register the gitlab-runner to the gitlab container
```bash
docker-compose exec gitlab-runner bash
gitlab-runner register
```
6 - Create a `.gitlab-ci.yml` file for your pipeline
```yml
before_script:
  - echo Hello!

job1:
  script:
    - echo job1
```
7 - Push changes to gitlab
8 - Verify that the pipeline runs successfully
<br>
<a name="Use-Adminer"></a>
## Use Adminer
@ -917,8 +1043,21 @@ _Note: You can customize the port on which beanstalkd console is listening by ch
<br>
<a name="Use-Confluence"></a>
## Use Confluence
1 - Run the Confluence Container (`confluence`) with the `docker-compose up` command. Example:
```bash
docker-compose up -d confluence
```
2 - Open your browser and visit the localhost on port **8090**: `http://localhost:8090`
**Note:** You can use the trial version, but then you have to buy a licence to keep using it.
You can set a custom Confluence version in `CONFLUENCE_VERSION`. [Find more info in section 'Versioning'](https://hub.docker.com/r/atlassian/confluence-server/)
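For example, a hedged `.env` sketch (the version tag below is an assumption; check the image's 'Versioning' section for the tags that actually exist):

```bash
# assumption: pin Confluence to a specific atlassian/confluence-server image tag
CONFLUENCE_VERSION=6.13.0
```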
<br>
<a name="Use-ElasticSearch"></a>
@ -1011,8 +1150,9 @@ docker-compose up -d rethinkdb
- set the `DB_DATABASE` to `database`.
#### Additional Notes
- You may back up your data using the following reference: [backing up your data](https://www.rethinkdb.com/docs/backup/).
<br>
@ -1114,6 +1254,140 @@ docker-compose up -d grafana
<br>
<a name="Use-Graylog"></a>
## Use Graylog
1 - Boot the container `docker-compose up -d graylog`
2 - Open your Laravel's `.env` file and set `GRAYLOG_PASSWORD` to some password, and `GRAYLOG_SHA256_PASSWORD` to the sha256 representation of your password (`GRAYLOG_SHA256_PASSWORD` is what matters, `GRAYLOG_PASSWORD` is just a reminder of your password).
> Your password must be at least 16 characters long.
> You can generate the sha256 of a password with the following command: `echo -n somesupersecretpassword | sha256sum`
```env
GRAYLOG_PASSWORD=somesupersecretpassword
GRAYLOG_SHA256_PASSWORD=b1cb6e31e172577918c9e7806c572b5ed8477d3f57aa737bee4b5b1db3696f09
```
3 - Go to `http://localhost:9000/` (if your port is not changed)
4 - Authenticate from the app.
> Username: admin
> Password: somesupersecretpassword (if you haven't changed the password)
5 - Go to System -> Inputs and launch a new input.
<br>
<a name="Use-Traefik"></a>
## Use Traefik
To use Traefik you need to make some changes in `traefik/traefik.toml` and `docker-compose.yml`.
1 - Open `traefik.toml` and change the `e-mail` property in the `acme` section.
2 - Change your domain in `acme.domains`. For example: `main = "example.org"`
2.1 - If you have subdomains, you must add them to the `sans` property in the `acme.domains` section.
```bash
[[acme.domains]]
main = "example.org"
sans = ["monitor.example.org", "pma.example.org"]
```
3 - If you need to add basic authentication (https://docs.traefik.io/configuration/entrypoints/#basic-authentication), you just need to add the following text after `[entryPoints.https.tls]`:
```bash
[entryPoints.https.auth.basic]
users = ["user:password"]
```
4 - You need to change the `docker-compose.yml` file to match Traefik's needs. If you want to use Traefik, you must not expose the ports of each container to the internet, but specify some labels instead.
4.1 For example, let's try with NGINX. You must have:
```bash
nginx:
  build:
    context: ./nginx
    args:
      - PHP_UPSTREAM_CONTAINER=${NGINX_PHP_UPSTREAM_CONTAINER}
      - PHP_UPSTREAM_PORT=${NGINX_PHP_UPSTREAM_PORT}
      - CHANGE_SOURCE=${CHANGE_SOURCE}
  volumes:
    - ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
    - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
    - ${NGINX_SITES_PATH}:/etc/nginx/sites-available
  depends_on:
    - php-fpm
  networks:
    - frontend
    - backend
  labels:
    - traefik.backend=nginx
    - traefik.frontend.rule=Host:example.org
    - traefik.port=80
```
instead of
```bash
nginx:
  build:
    context: ./nginx
    args:
      - PHP_UPSTREAM_CONTAINER=${NGINX_PHP_UPSTREAM_CONTAINER}
      - PHP_UPSTREAM_PORT=${NGINX_PHP_UPSTREAM_PORT}
      - CHANGE_SOURCE=${CHANGE_SOURCE}
  volumes:
    - ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
    - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
    - ${NGINX_SITES_PATH}:/etc/nginx/sites-available
    - ${NGINX_SSL_PATH}:/etc/nginx/ssl
  ports:
    - "${NGINX_HOST_HTTP_PORT}:80"
    - "${NGINX_HOST_HTTPS_PORT}:443"
  depends_on:
    - php-fpm
  networks:
    - frontend
    - backend
```
<br>
<a name="Use-Mosquitto"></a>
## Use Mosquitto (MQTT Broker)
1 - Configure Mosquitto: Change Port using `MOSQUITTO_PORT` if you wish to. Default is port 9001.
2 - Run the Mosquitto Container (`mosquitto`) with the `docker-compose up` command:
```bash
docker-compose up -d mosquitto
```
3 - Open your command line and use an MQTT client (e.g. https://github.com/mqttjs/MQTT.js) to subscribe to a topic and publish a message.
4 - Subscribe: `mqtt sub -t 'test' -h localhost -p 9001 -C 'ws' -v`
5 - Publish: `mqtt pub -t 'test' -h localhost -p 9001 -C 'ws' -m 'Hello!'`
<br>
<a name="CodeIgniter"></a>
@ -1139,6 +1413,21 @@ To install CodeIgniter 3 on Laradock all you have to do is the following simple
<br>
<a name="Install-Powerline"></a>
## Install Powerline
1 - Open the `.env` file and set `WORKSPACE_INSTALL_POWERLINE` and `WORKSPACE_INSTALL_PYTHON` to `true`.
2 - Run `docker-compose build workspace`, after the step above.
Note: Powerline requires Python.
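As a sketch, the relevant `.env` entries would be (variable names taken from the steps above):

```bash
# .env: Powerline needs Python available in the workspace image
WORKSPACE_INSTALL_POWERLINE=true
WORKSPACE_INSTALL_PYTHON=true
```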
<br>
<a name="Install-Symfony"></a>
## Install Symfony
@ -1188,6 +1477,19 @@ We also recommend [setting the timezone in Laravel](http://www.camroncade.com/ma
<br>
<a name="Add locales to PHP-FPM"></a>
## Add locales to PHP-FPM
To add locales to the container:
1 - Open the `.env` file and set `PHP_FPM_INSTALL_ADDITIONAL_LOCALES` to `true`.
2 - Add locale codes to `PHP_FPM_ADDITIONAL_LOCALES`.
3 - Re-build your PHP-FPM Container `docker-compose build php-fpm`.
4 - Check enabled locales with `docker-compose exec php-fpm locale -a`
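A minimal `.env` sketch (the locale codes below are example assumptions; list whichever locales you need):

```bash
PHP_FPM_INSTALL_ADDITIONAL_LOCALES=true
# space-separated locale codes to generate inside the php-fpm image (example values)
PHP_FPM_ADDITIONAL_LOCALES="en_US.UTF-8 es_ES.UTF-8 fr_FR.UTF-8"
```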
@ -1198,7 +1500,7 @@ We also recommend [setting the timezone in Laravel](http://www.camroncade.com/ma
You can add your cron jobs to `workspace/crontab/root` after the `php artisan` line.
```
* * * * * php /var/www/artisan schedule:run >> /dev/null 2>&1
* * * * * laradock /usr/bin/php /var/www/artisan schedule:run >> /dev/null 2>&1
# Custom cron
* * * * * root echo "Every Minute" > /var/log/cron.log 2>&1
@ -1266,22 +1568,6 @@ Available versions are: 5.5, 5.6, 5.7, 8.0, or latest. See https://store.docker
<br>
<a name="MySQL-access-from-host"></a>
## MySQL access from host
You can forward the MySQL/MariaDB port to your host by making sure these lines are added to the `mysql` or `mariadb` section of the `docker-compose.yml` or in your [environment specific Compose](https://docs.docker.com/compose/extends/) file.
```
ports:
- "3306:3306"
```
<br>
<a name="MySQL-root-access"></a>
## MySQL root access
@ -1380,6 +1666,23 @@ Enabling Global Composer Install during the build for the container allows you t
<br>
<a name="Magento-2-authentication-credentials"></a>
## Add authentication credential for Magento 2
1 - Open the `.env` file
2 - Search for the `WORKSPACE_COMPOSER_AUTH` argument under the Workspace Container and set it to `true`
3 - Now add your credentials to `workspace/auth.json`
4 - Re-build the Workspace Container `docker-compose build workspace`
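As an illustration, `workspace/auth.json` typically follows Composer's standard `http-basic` format; the keys below are placeholders (assumptions), not real credentials:

```bash
# write placeholder repo.magento.com credentials into workspace/auth.json (Composer http-basic format)
cat > workspace/auth.json <<'EOF'
{
    "http-basic": {
        "repo.magento.com": {
            "username": "<your-magento-public-key>",
            "password": "<your-magento-private-key>"
        }
    }
}
EOF
```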
<br>
<a name="Install-Prestissimo"></a>
## Install Prestissimo
@ -1488,6 +1791,22 @@ To install NPM VUE CLI in the Workspace container
<br>
<a name="Install-NPM-ANGULAR-CLI"></a>
## Install NPM ANGULAR CLI
To install NPM ANGULAR CLI in the Workspace container
1 - Open the `.env` file
2 - Search for the `WORKSPACE_INSTALL_NPM_ANGULAR_CLI` argument under the Workspace Container and set it to `true`
3 - Re-build the container `docker-compose build workspace`
<br>
<a name="Install-Linuxbrew"></a>
@ -1505,6 +1824,47 @@ Linuxbrew is a package manager for Linux. It is the Linux version of MacOS Homeb
<br>
<a name="Install-FFMPEG"></a>
## Install FFMPEG
To install FFMPEG in the Workspace container
1 - Open the `.env` file
2 - Search for the `WORKSPACE_INSTALL_FFMPEG` argument under the Workspace Container and set it to `true`
3 - Re-build the container `docker-compose build workspace`
4 - If you use the `php-worker` container too, please follow the same steps above, especially if you have conversions that have been queued.
**PS** Don't forget to install the binary in the `php-fpm` container too, by applying the same steps above to its container, otherwise you'll get an error when running the `php-ffmpeg` binary.
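A sketch of the relevant `.env` entries (the `php-fpm` and `php-worker` key names below follow the same naming pattern and are assumptions; verify the exact keys in your `.env`):

```bash
WORKSPACE_INSTALL_FFMPEG=true
# assumed equivalents for the other containers -- verify the exact key names in your .env
PHP_FPM_INSTALL_FFMPEG=true
PHP_WORKER_INSTALL_FFMPEG=true
```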
<br>
<a name="Install-GNU-Parallel"></a>
## Install GNU Parallel
GNU Parallel is a command line tool to run multiple processes in parallel.
(see https://www.gnu.org/software/parallel/parallel_tutorial.html)
To install GNU Parallel in the Workspace container
1 - Open the `.env` file
2 - Search for the `WORKSPACE_INSTALL_GNU_PARALLEL` argument under the Workspace Container and set it to `true`
3 - Re-build the container `docker-compose build workspace`
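Once the workspace is rebuilt, a trivial (non-Laradock-specific) usage example to confirm it works:

```bash
# run three echo jobs in parallel inside the workspace container
docker-compose exec workspace bash -c "parallel echo ::: one two three"
```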
<br>
<a name="Common-Aliases"></a>
@ -1640,22 +2000,6 @@ Remote debug Laravel web and phpunit tests.
<br>
<a name="upgrading-laradock"></a>
## Upgrading Laradock
Moving from Docker Toolbox (VirtualBox) to Docker Native (for Mac/Windows). Requires upgrading Laradock from v3.* to v4.*:
1. Stop the docker VM `docker-machine stop {default}`
2. Install Docker for [Mac](https://docs.docker.com/docker-for-mac/) or [Windows](https://docs.docker.com/docker-for-windows/).
3. Upgrade Laradock to `v4.*.*` (`git pull origin master`)
4. Use Laradock as you used to do: `docker-compose up -d nginx mysql`.
**Note:** If you face any problem with the last step above: rebuild all your containers
`docker-compose build --no-cache`
"Warning Containers Data might be lost!"
@ -1720,7 +2064,7 @@ Laradock comes with `sync.sh`, an optional bash script, that automates installin
DOCKER_SYNC_STRATEGY=native_osx
```
3) set `APP_CODE_PATH_CONTAINER=/var/www` to `APP_CODE_PATH_CONTAINER=/var/www:nocopy` in the .env file
3) set `APP_CODE_CONTAINER_FLAG` to `APP_CODE_CONTAINER_FLAG=:nocopy` in the .env file
4) Install the docker-sync gem on the host-machine:
```bash
@ -1823,126 +2167,17 @@ docker-compose up ...
<br>
<a name="Common-Problems"></a>
## Common Problems
<a name="upgrade-laradock"></a>
## Upgrade Laradock
*Here's a list of the common problems you might face, and the possible solutions.*
Moving from Docker Toolbox (VirtualBox) to Docker Native (for Mac/Windows). Requires upgrading Laradock from v3.* to v4.*:
1. Stop the docker VM `docker-machine stop {default}`
2. Install Docker for [Mac](https://docs.docker.com/docker-for-mac/) or [Windows](https://docs.docker.com/docker-for-windows/).
3. Upgrade Laradock to `v4.*.*` (`git pull origin master`)
4. Use Laradock as you used to do: `docker-compose up -d nginx mysql`.
<br>
## I see a blank (white) page instead of the Laravel 'Welcome' page!
Run the following command from the Laravel root directory:
```bash
sudo chmod -R 777 storage bootstrap/cache
```
<br>
## I see "Welcome to nginx" instead of the Laravel App!
Use `http://127.0.0.1` instead of `http://localhost` in your browser.
<br>
## I see an error message containing `address already in use` or `port is already allocated`
Make sure the ports for the services that you are trying to run (22, 80, 443, 3306, etc.) are not already being used by other programs on the host, such as a built-in `apache`/`httpd` service or other development tools you have installed.
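One quick way to find the offending process (assuming `lsof` is available on your host):

```bash
# list processes listening on port 80 (repeat for 443, 3306, ...)
sudo lsof -i :80 -sTCP:LISTEN
```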
<br>
## I get NGINX error 404 Not Found on Windows.
1. Go to docker Settings on your Windows machine.
2. Click on the `Shared Drives` tab and check the drive that contains your project files.
3. Enter your windows username and password.
4. Go to the `reset` tab and click restart docker.
<br>
## The time in my services does not match the current time
1. Make sure you've [changed the timezone](#Change-the-timezone).
2. Stop and rebuild the containers (`docker-compose up -d --build <services>`)
<br>
## I get MySQL connection refused
This error sometimes happens because your Laravel application isn't running on the container localhost IP (which is 127.0.0.1). Steps to fix it:
* Option A
  1. Check your running Laravel application IP by dumping the `Request::ip()` variable using `dd(Request::ip())` anywhere in your application. The result is the IP of your Laravel container.
  2. Change the `DB_HOST` variable in your env file to the IP that you received from the previous step.
* Option B
  1. Change the `DB_HOST` value to the same name as the MySQL docker container. The Laradock docker-compose file currently has this as `mysql`.
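For Option B, a hedged example of the relevant Laravel `.env` entries (the credential values shown are common Laradock defaults and are assumptions; check your Laradock `.env`):

```bash
DB_CONNECTION=mysql
DB_HOST=mysql        # the docker-compose service name of the MySQL container
DB_PORT=3306
DB_DATABASE=default  # assumption: Laradock's default MYSQL_DATABASE
DB_USERNAME=default  # assumption: Laradock's default MYSQL_USER
DB_PASSWORD=secret   # assumption: Laradock's default MYSQL_PASSWORD
```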
## I get stuck when building nginx on `fetch http://mirrors.aliyun.com/alpine/v3.5/main/x86_64/APKINDEX.tar.gz`
As stated in [#749](https://github.com/laradock/laradock/issues/749#issuecomment-419652646), this is already fixed; just set `CHANGE_SOURCE` to false.
## Custom composer repo packagist url and npm registry url
In China, the default composer and npm sources are very slow. You can add the `WORKSPACE_NPM_REGISTRY` and `WORKSPACE_COMPOSER_REPO_PACKAGIST` settings in `.env` to use your custom sources.
Example:
```bash
WORKSPACE_NPM_REGISTRY=https://registry.npm.taobao.org
WORKSPACE_COMPOSER_REPO_PACKAGIST=https://packagist.phpcomposer.com
```
<br>
## I get `Module build failed: Error: write EPIPE` while compiling react application
When you run `npm build` or `yarn dev` to build a react application using webpack with elixir, you may receive an `Error: write EPIPE` while processing .jpg images.
This is caused by an outdated library for processing **.jpg files** in Ubuntu 16.04.
To fix the problem you can follow these steps:
1 - Open the `.env`.
2 - Search for `WORKSPACE_INSTALL_LIBPNG` or add the key if missing.
3 - Set the value to true:
```dotenv
WORKSPACE_INSTALL_LIBPNG=true
```
4 - Finally rebuild the workspace image
```bash
docker-compose build workspace
```
**Note:** If you face any problem with the last step above: rebuild all your containers
`docker-compose build --no-cache`
"Warning Containers Data might be lost!"


@ -1,10 +1,10 @@
---
title: Getting Started
title: 2. Getting Started
type: index
weight: 2
---
## Requirements
## 2.1 Requirements
- [Git](https://git-scm.com/downloads)
- [Docker](https://www.docker.com/products/docker/) `>= 17.12`
@ -12,10 +12,7 @@ weight: 2
## Installation
## 2.2 Installation
Choose the setup that best suits your needs.
@ -44,7 +41,7 @@ Note: If you are not using Git yet for your project, you can use `git clone` ins
*To keep track of your Laradock changes, between your projects and also keep Laradock updated [check these docs](/documentation/#keep-track-of-your-laradock-changes)*
Your folder structure should look like this:
2 - Make sure your folder structure looks like this:
```
+ project-a
@ -55,7 +52,7 @@ Your folder structure should look like this:
*(It's important to rename the laradock folder to a unique name in each project, if you want to run laradock per project).*
> **Now jump to the [Usage](#Usage) section.**
3 - Go to the [Usage](#Usage) section.
<a name="A2"></a>
### A.2) Don't have a PHP project yet:
@ -89,7 +86,7 @@ APP_CODE_PATH_HOST=../project-z/
Make sure to replace `project-z` with your project folder name.
> **Now jump to the [Usage](#Usage) section.**
3 - Go to the [Usage](#Usage) section.
<a name="B"></a>
@ -110,9 +107,11 @@ Your folder structure should look like this:
+ project-2
```
2 - Go to `nginx/sites` and create config files to point to different project directory when visiting different domains.
2 - Go to your web server's sites folder and create config files that point to the different project directories when visiting different domains:
Laradock by default includes `app.conf.example`, `laravel.conf.example` and `symfony.conf.example` as working samples.
For **Nginx** go to `nginx/sites`, for **Apache2** `apache2/sites`.
Laradock by default includes some sample files for you to copy `app.conf.example`, `laravel.conf.example` and `symfony.conf.example`.
3 - change the default names `*.conf`:
@ -125,9 +124,10 @@ You can rename the config files, project folders and domains as you like, just m
127.0.0.1 project-2.test
...
```
If you use Chrome 63 or above for development, don't use `.dev`. [Why?](https://laravel-news.com/chrome-63-now-forces-dev-domains-https). Instead use `.localhost`, `.invalid`, `.test`, or `.example`.
> **Now jump to the [Usage](#Usage) section.**
4 - Go to the [Usage](#Usage) section.
@ -136,7 +136,7 @@ If you use Chrome 63 or above for development, don't use `.dev`. [Why?](https://
<a name="Usage"></a>
## Usage
## 2.3 Usage
**Read Before starting:**
@ -213,7 +213,16 @@ Open your PHP project's `.env` file or whichever configuration file you are read
DB_HOST=mysql
```
You need to use Laradock's default DB credentials, which can be found in the `.env` file (e.g. `MYSQL_USER=`),
or you can change them and rebuild the container.
*If you want to install Laravel as PHP project, see [How to Install Laravel in a Docker Container](#Install-Laravel).*
<br>
5 - Open your browser and visit your localhost address `http://localhost/`. If you followed the multiple projects setup, you can visit `http://project-1.test/` and `http://project-2.test/`.
5 - Open your browser and visit your localhost address.
If you followed the multiple projects setup, you can visit `http://project-1.test/` and `http://project-2.test/`.
[http://localhost:8080](http://localhost:8080)
Make sure you use the right port number as provided by your running server. Ex: NGINX uses port 8080 by default while Apache2 uses 80.


@ -1,21 +1,14 @@
---
title: Guides
title: 4. Guides
type: index
weight: 4
---
* [Production Setup on Digital Ocean](#Digital-Ocean)
* [PHPStorm XDebug Setup](#PHPStorm-Debugging)
* [Running Laravel Dusk Test](#Laravel-Dusk)
<a name="Digital-Ocean"></a>
# Production Setup on Digital Ocean
## Production Setup on Digital Ocean
## Install Docker
### Install Docker
- Visit [DigitalOcean](https://cloud.digitalocean.com/login) and login.
- Click the `Create Droplet` button.
@ -24,7 +17,7 @@ weight: 4
- Continue creating the droplet as you normally would.
- If needed, check your e-mail for the droplet root password.
## SSH to your Server
### SSH to your Server
Find the IP address of the droplet in the DigitalOcean interface. Use it to connect to the server.
@ -40,7 +33,7 @@ You can now check if Docker is available:
$root@server:~# docker
```
## Set Up Your Laravel Project
### Set Up Your Laravel Project
```
$root@server:~# apt-get install git
@ -50,18 +43,12 @@ $root@server:~/laravel/ git submodule add https://github.com/Laradock/laradock.g
$root@server:~/laravel/ cd laradock
```
## Install docker-compose command
```
$root@server:~/laravel/laradock# curl -L https://github.com/docker/compose/releases/download/1.8.0/run.sh > /usr/local/bin/docker-compose
$root@server:~/chmod +x /usr/local/bin/docker-compose
```
## Enter the laradock folder and rename env-example to .env.
### Enter the laradock folder and rename env-example to .env.
```
$root@server:~/laravel/laradock# cp env-example .env
```
## Create Your Laradock Containers
### Create Your Laradock Containers
```
$root@server:~/laravel/laradock# docker-compose up -d nginx mysql
@ -69,13 +56,21 @@ $root@server:~/laravel/laradock# docker-compose up -d nginx mysql
Note that more containers are available, find them in the [docs](http://laradock.io/introduction/#supported-software-containers) or the `docker-compose.yml` file.
## Go to Your Workspace
### Go to Your Workspace
```
docker-compose exec workspace bash
```
## Install and configure Laravel
### Execute commands
If you only want to execute a command and don't want to enter bash, you can execute `docker-compose run workspace <command>`.
```
docker-compose run workspace php artisan migrate
```
### Install and configure Laravel
Let's install Laravel's dependencies, add the `.env` file, generate the key and give proper permissions to the cache folder.
@ -98,7 +93,7 @@ It should show you the Laravel default welcome page.
However, we want it to show up using your custom domain name, as well.
## Using Your Own Domain Name
### Using Your Own Domain Name
Login to your DNS provider, such as Godaddy, Namecheap.
@ -116,7 +111,7 @@ Visit: https://cloud.digitalocean.com/networking/domains
Add your domain name and choose the server IP you provisioned earlier.
## Serving Site With NGINX (HTTP ONLY)
### Serving Site With NGINX (HTTP ONLY)
Go back to command line.
@ -140,14 +135,14 @@ And add `server_name` (your custom domain)
server_name yourdomain.com;
```
## Rebuild Your Nginx
### Rebuild Your Nginx
```
$root@server:~/laravel/laradock# docker-compose down
$root@server:~/laravel/laradock# docker-compose build nginx
```
## Re Run Your Containers MYSQL and NGINX
### Re Run Your Containers MYSQL and NGINX
```
$root@server:~/laravel/laradock/nginx# docker-compose up -d nginx mysql
@ -155,7 +150,7 @@ $root@server:~/laravel/laradock/nginx# docker-compose up -d nginx mysql
**View Your Site with HTTP ONLY (http://yourdomain.com)**
## Run Site on SSL with Let's Encrypt Certificate
### Run Site on SSL with Let's Encrypt Certificate
**Note: You need to Use Caddy here Instead of Nginx**
@ -194,7 +189,7 @@ tls serverbreaker@gmai.com
This is needed prior to creating the Let's Encrypt certificate.
## Run Your Caddy Container without the -d flag and Generate SSL with Let's Encrypt
### Run Your Caddy Container without the -d flag and Generate SSL with Let's Encrypt
```
$root@server:~/laravel/laradock# docker-compose up caddy
@ -215,7 +210,7 @@ caddy_1 | http://yourdomain.com
After it finishes, press `Ctrl` + `C` to exit.
## Stop All Containers and ReRun Caddy and Other Containers on Background
### Stop All Containers and ReRun Caddy and Other Containers on Background
```
$root@server:~/laravel/laradock# docker-compose down
@ -236,326 +231,6 @@ View your Site in the Browser Securely Using HTTPS (https://yourdomain.com)
- [https://caddyserver.com/docs/tls](https://caddyserver.com/docs/tls)
- [https://caddyserver.com/docs/caddyfile](https://caddyserver.com/docs/caddyfile)
<br>
<br>
<br>
<br>
<br>
<a name="PHPStorm-Debugging"></a>
# PHPStorm XDebug Setup
- [Intro](#Intro)
- [Installation](#Installation)
- [Customize laradock/docker-compose.yml](#CustomizeDockerCompose)
- [Clean House](#InstallCleanHouse)
- [Laradock Dial Tone](#InstallLaradockDialTone)
- [hosts](#AddToHosts)
- [Firewall](#FireWall)
- [Enable xDebug on php-fpm](#enablePhpXdebug)
- [PHPStorm Settings](#InstallPHPStorm)
- [Configs](#InstallPHPStormConfigs)
- [Usage](#Usage)
- [Laravel](#UsageLaravel)
- [Run ExampleTest](#UsagePHPStormRunExampleTest)
- [Debug ExampleTest](#UsagePHPStormDebugExampleTest)
- [Debug Web Site](#UsagePHPStormDebugSite)
- [SSH into workspace](#SSHintoWorkspace)
- [KiTTY](#InstallKiTTY)
<a name="Intro"></a>
## Intro
Wiring up [Laravel](https://laravel.com/), [Laradock](https://github.com/Laradock/laradock) [Laravel+Docker] and [PHPStorm](https://www.jetbrains.com/phpstorm/) to play nice together complete with remote xdebug'ing as icing on top! Although this guide is based on `PHPStorm Windows`,
you should be able to adjust accordingly. This guide was written based on Docker for Windows Native.
<a name="Installation"></a>
## Installation
- This guide assumes the following:
- you have already installed and are familiar with Laravel, Laradock and PHPStorm.
- you have installed Laravel as a parent of `laradock`. This guide assumes `/c/_dk/laravel`.
<a name="AddToHosts"></a>
## hosts
- Add `laravel` to your hosts file located on Windows 10 at `C:\Windows\System32\drivers\etc\hosts`. It should be set to the IP of your running container. Mine is: `10.0.75.2`
On Windows you can find it by opening Windows `Hyper-V Manager`.
- ![Windows Hyper-V Manager](images/photos/PHPStorm/Settings/WindowsHyperVManager.png)
- [Hosts File Editor](https://github.com/scottlerch/HostsFileEditor) makes it easy to change your hosts file.
- Set `laravel` to your docker host IP. See [Example](images/photos/SimpleHostsEditor/AddHost_laravel.png).
<a name="FireWall"></a>
## Firewall
Your PHPStorm will need to be able to receive a connection from PHP xdebug either your running workspace or php-fpm containers on port 9000. This means that your Windows Firewall should either enable connections from the Application PHPStorm OR the port.
- It is important to note that if the Application PHPStorm is NOT enabled in the firewall, you will not be able to recreate a rule to override that.
- Also be aware that if you are installing/upgrade different versions of PHPStorm, you MAY have orphaned references to PHPStorm in your Firewall! You may decide to remove orphaned references however in either case, make sure that they are set to receive public TCP traffic.
### Edit laradock/docker-compose.yml
Set the following variables:
```
### Workspace Utilities Container ###############
workspace:
build:
context: ./workspace
args:
- INSTALL_XDEBUG=true
- INSTALL_WORKSPACE_SSH=true
...
### PHP-FPM Container #####################
php-fpm:
build:
context: ./php-fpm
args:
- INSTALL_XDEBUG=true
...
```
### Edit xdebug.ini files
- `laradock/workspace/xdebug.ini`
- `laradock/php-fpm/xdebug.ini`
Set the following variables:
```
xdebug.remote_autostart=1
xdebug.remote_enable=1
xdebug.remote_connect_back=1
xdebug.cli_color=1
```
<a name="InstallCleanHouse"></a>
### Need to clean house first?
Make sure you are starting with a clean state. For example, do you have other Laradock containers and images?
Here are a few things I use to clean things up.
- Delete all containers using `grep laradock_` on the names, see: [Remove all containers based on docker image name](https://linuxconfig.org/remove-all-containners-based-on-docker-image-name).
`docker ps -a | awk '{ print $1,$2 }' | grep laradock_ | awk '{print $1}' | xargs -I {} docker rm {}`
- Delete all images containing `laradock`.
`docker images | awk '{print $1,$2,$3}' | grep laradock_ | awk '{print $3}' | xargs -I {} docker rmi {}`
**Note:** This will only delete images that were built with `Laradock`, **NOT** `laradock/*` which are pulled down by `Laradock` such as `laradock/workspace`, etc.
**Note:** Some may fail with:
`Error response from daemon: conflict: unable to delete 3f38eaed93df (cannot be forced) - image has dependent child images`
- I added this to my `.bashrc` to remove orphaned images.
```
dclean() {
processes=`docker ps -q -f status=exited`
if [ -n "$processes" ]; then
docker rm $processes
fi
images=`docker images -q -f dangling=true`
if [ -n "$images" ]; then
docker rmi $images
fi
}
```
- If you frequently switch configurations for Laradock, you may find that adding the following and added to your `.bashrc` or equivalent useful:
```
# remove laravel* containers
# remove laravel_* images
dcleanlaradockfunction()
{
echo 'Removing ALL containers associated with laradock'
docker ps -a | awk '{ print $1,$2 }' | grep laradock | awk '{print $1}' | xargs -I {} docker rm {}
# remove ALL images associated with laradock_
# does NOT delete laradock/* which are hub images
echo 'Removing ALL images associated with laradock_'
docker images | awk '{print $1,$2,$3}' | grep laradock_ | awk '{print $3}' | xargs -I {} docker rmi {}
echo 'Listing all laradock docker hub images...'
docker images | grep laradock
echo 'dcleanlaradock completed'
}
# associate the above function with an alias
# so can recall/lookup by typing 'alias'
alias dcleanlaradock=dcleanlaradockfunction
```
<a name="InstallLaradockDialTone"></a>
## Let's get a dial-tone with Laravel
```
# barebones at this point
docker-compose up -d nginx mysql
# run
docker-compose ps
# Should see:
Name Command State Ports
-----------------------------------------------------------------------------------------------------------
laradock_mysql_1 docker-entrypoint.sh mysqld Up 0.0.0.0:3306->3306/tcp
laradock_nginx_1 nginx Up 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
laradock_php-fpm_1 php-fpm Up 9000/tcp
laradock_volumes_data_1 true Exit 0
laradock_volumes_source_1 true Exit 0
laradock_workspace_1 /sbin/my_init Up 0.0.0.0:2222->22/tcp
```
<a name="enablePhpXdebug"></a>
## Enable xDebug on php-fpm
In a host terminal sitting in the laradock folder, run: `./php-fpm/xdebug status`
You should see something like the following:
```
xDebug status
laradock_php-fpm_1
PHP 7.0.9 (cli) (built: Aug 10 2016 19:45:48) ( NTS )
Copyright (c) 1997-2016 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2016 Zend Technologies
with Xdebug v2.4.1, Copyright (c) 2002-2016, by Derick Rethans
```
Other commands include `./php-fpm/xdebug start | stop`.
If you have enabled `xdebug=true` in `docker-compose.yml/php-fpm`, `xdebug` will already be running when
`php-fpm` is started and listening for debug info on port 9000.
<a name="InstallPHPStormConfigs"></a>
## PHPStorm Settings
- Here are some settings that are known to work:
- `Settings/BuildDeploymentConnection`
- ![Settings/BuildDeploymentConnection](/images/photos/PHPStorm/Settings/BuildDeploymentConnection.png)
- `Settings/BuildDeploymentConnectionMappings`
- ![Settings/BuildDeploymentConnectionMappings](/images/photos/PHPStorm/Settings/BuildDeploymentConnectionMappings.png)
- `Settings/BuildDeploymentDebugger`
- ![Settings/BuildDeploymentDebugger](/images/photos/PHPStorm/Settings/BuildDeploymentDebugger.png)
- `Settings/EditRunConfigurationRemoteWebDebug`
- ![Settings/EditRunConfigurationRemoteWebDebug](/images/photos/PHPStorm/Settings/EditRunConfigurationRemoteWebDebug.png)
- `Settings/EditRunConfigurationRemoteExampleTestDebug`
- ![Settings/EditRunConfigurationRemoteExampleTestDebug](/images/photos/PHPStorm/Settings/EditRunConfigurationRemoteExampleTestDebug.png)
- `Settings/LangsPHPDebug`
- ![Settings/LangsPHPDebug](/images/photos/PHPStorm/Settings/LangsPHPDebug.png)
- `Settings/LangsPHPInterpreters`
- ![Settings/LangsPHPInterpreters](/images/photos/PHPStorm/Settings/LangsPHPInterpreters.png)
- `Settings/LangsPHPPHPUnit`
- ![Settings/LangsPHPPHPUnit](/images/photos/PHPStorm/Settings/LangsPHPPHPUnit.png)
- `Settings/LangsPHPServers`
- ![Settings/LangsPHPServers](/images/photos/PHPStorm/Settings/LangsPHPServers.png)
- `RemoteHost`
To switch on this view, go to: `Menu/Tools/Deployment/Browse Remote Host`.
- ![RemoteHost](/images/photos/PHPStorm/RemoteHost.png)
- `RemoteWebDebug`
- ![DebugRemoteOn](/images/photos/PHPStorm/DebugRemoteOn.png)
- `EditRunConfigurationRemoteWebDebug`
Go to: `Menu/Run/Edit Configurations`.
- ![EditRunConfigurationRemoteWebDebug](/images/photos/PHPStorm/Settings/EditRunConfigurationRemoteWebDebug.png)
- `EditRunConfigurationRemoteExampleTestDebug`
Go to: `Menu/Run/Edit Configurations`.
- ![EditRunConfigurationRemoteExampleTestDebug](/images/photos/PHPStorm/Settings/EditRunConfigurationRemoteExampleTestDebug.png)
- `WindowsFirewallAllowedApps`
Go to: `Control Panel\All Control Panel Items\Windows Firewall\Allowed apps`.
- ![WindowsFirewallAllowedApps.png](/images/photos/PHPStorm/Settings/WindowsFirewallAllowedApps.png)
- `hosts`
Edit: `C:\Windows\System32\drivers\etc\hosts`.
- ![WindowsFirewallAllowedApps.png](/images/photos/PHPStorm/Settings/hosts.png)
- [Enable xDebug on php-fpm](#enablePhpXdebug)
<a name="Usage"></a>
## Usage
<a name="UsagePHPStormRunExampleTest"></a>
### Run ExampleTest
- right-click on `tests/ExampleTest.php`
- Select: `Run 'ExampleTest.php'` or `Ctrl+Shift+F10`.
- Should pass!! You just ran a remote test via SSH!
<a name="UsagePHPStormDebugExampleTest"></a>
### Debug ExampleTest
- Open to edit: `tests/ExampleTest.php`
- Add a BreakPoint on line 16: `$this->visit('/')`
- right-click on `tests/ExampleTest.php`
- Select: `Debug 'ExampleTest.php'`.
- Should have stopped at the BreakPoint!! You are now debugging locally against a remote Laravel project via SSH!
- ![Remote Test Debugging Success](/images/photos/PHPStorm/RemoteTestDebuggingSuccess.png)
<a name="UsagePHPStormDebugSite"></a>
### Debug WebSite
- In case xDebug is disabled, from the `laradock` folder run:
`./php-fpm/xdebug start`.
- To switch xdebug off, run:
`./php-fpm/xdebug stop`
- Start Remote Debugging
- ![DebugRemoteOn](/images/photos/PHPStorm/DebugRemoteOn.png)
- Open to edit: `bootstrap/app.php`
- Add a BreakPoint on line 14: `$app = new Illuminate\Foundation\Application(`
- Reload [Laravel Site](http://laravel/)
- Should have stopped at the BreakPoint!! You are now debugging locally against a remote Laravel project via SSH!
- ![Remote Debugging Success](/images/photos/PHPStorm/RemoteDebuggingSuccess.png)
<a name="SSHintoWorkspace"></a>
## Let's shell into workspace
Assuming that you are in laradock folder, type:
`ssh -i workspace/insecure_id_rsa -p2222 root@laravel`
**Cha Ching!!!!**
- `workspace/insecure_id_rsa.ppk` may become corrupted. In which case:
- fire up `puttygen`
- import `workspace/insecure_id_rsa`
- save private key to `workspace/insecure_id_rsa.ppk`
<a name="InstallKiTTY"></a>
### KiTTY
[KiTTY](http://www.9bis.net/kitty/) is a fork of PuTTY version 0.67.
- Here are some settings that are working for me:
- ![Session](/images/photos/KiTTY/Session.png)
- ![Terminal](/images/photos/KiTTY/Terminal.png)
- ![Window](/images/photos/KiTTY/Window.png)
- ![WindowAppearance](/images/photos/KiTTY/WindowAppearance.png)
- ![Connection](/images/photos/KiTTY/Connection.png)
- ![ConnectionData](/images/photos/KiTTY/ConnectionData.png)
- ![ConnectionSSH](/images/photos/KiTTY/ConnectionSSH.png)
- ![ConnectionSSHAuth](/images/photos/KiTTY/ConnectionSSHAuth.png)
- ![TerminalShell](/images/photos/KiTTY/TerminalShell.png)
<br>
<br>
<br>
@ -563,13 +238,9 @@ Assuming that you are in laradock folder, type:
<br>
<a name="Laravel-Dusk"></a>
# Running Laravel Dusk Tests
## Running Laravel Dusk Tests
- [Option 1: Without Selenium](#option1-dusk)
- [Option 2: With Selenium](#option2-dusk)
<a name="option1-dusk"></a>
## Option 1: Without Selenium
### Option 1: Without Selenium
- [Intro](#option1-dusk-intro)
- [Workspace Setup](#option1-workspace-setup)
@ -577,14 +248,12 @@ Assuming that you are in laradock folder, type:
- [Choose Chrome Driver Version (Optional)](#option1-choose-chrome-driver-version)
- [Run Dusk Tests](#option1-run-dusk-tests)
<a name="option1-dusk-intro"></a>
### Intro
#### Intro
This is a guide to run Dusk tests in your `workspace` container with headless
google-chrome and chromedriver. It has been tested with Laravel 5.4 and 5.5.
<a name="option1-workspace-setup"></a>
### Workspace Setup
#### Workspace Setup
Update your .env with the following entries:
@ -604,8 +273,7 @@ Then run below to build your workspace.
docker-compose build workspace
```
<a name="option1-application-setup"></a>
### Application Setup
#### Application Setup
Run a `workspace` container and you will be inside the container at `/var/www` directory.
@ -670,8 +338,7 @@ abstract class DuskTestCase extends BaseTestCase
}
```
<a name="option1-choose-chrome-driver-version"></a>
### Choose Chrome Driver Version (Optional)
#### Choose Chrome Driver Version (Optional)
You could choose to use either:
@ -725,8 +392,7 @@ abstract class DuskTestCase extends BaseTestCase
}
```
<a name="option1-run-dusk-tests"></a>
### Run Dusk Tests
#### Run Dusk Tests
Run local server in `workspace` container and run Dusk tests.
@ -743,8 +409,7 @@ PHPUnit 6.4.0 by Sebastian Bergmann and contributors.
Time: 837 ms, Memory: 6.00MB
```
<a name="option2-dusk"></a>
## Option 2: With Selenium
### Option 2: With Selenium
- [Intro](#dusk-intro)
- [DNS Setup](#dns-setup)
@ -752,8 +417,7 @@ Time: 837 ms, Memory: 6.00MB
- [Laravel Dusk Setup](#laravel-dusk-setup)
- [Running Laravel Dusk Tests](#running-tests)
<a name="dusk-intro"></a>
### Intro
#### Intro
Setting up Laravel Dusk tests to run with Laradock appears to be something that
eludes most Laradock users. This guide is designed to show you how to wire them
up to work together. This guide is written with macOS and Linux in mind. As such,
@ -763,8 +427,7 @@ for Windows-specific instructions.
This guide assumes you know how to use a DNS forwarder such as `dnsmasq` or are comfortable
with editing the `/etc/hosts` file for one-off DNS changes.
<a name="dns-setup"></a>
### DNS Setup
#### DNS Setup
According to RFC-2606, only four TLDs are reserved for local testing[^1]:
- `.test`
@ -797,8 +460,7 @@ For example, in your `/etc/hosts` file:
This will ensure that when navigating to `myapp.test`, it will route the
request to `127.0.0.1` which will be handled by Nginx in Laradock.
<a name="docker-compose"></a>
### Docker Compose setup
#### Docker Compose setup
In order to make the Selenium container talk to the Nginx container appropriately,
the `docker-compose.yml` needs to be edited to accommodate this. Make the following
changes:
@ -820,8 +482,7 @@ the Selenium container to make requests to the Nginx container, which is
necessary for running Dusk tests. These changes also link the `nginx` environment
variable to the domain you wired up in your hosts file.
<a name="laravel-dusk-setup"></a>
### Laravel Dusk Setup
#### Laravel Dusk Setup
In order to make Laravel Dusk make the proper request to the Selenium container,
you have to edit the `DuskTestCase.php` file that's provided on the initial
@ -831,13 +492,13 @@ Remote Web Driver attempts to use to set up the Selenium session.
One recommendation for this is to add a separate config option in your `.env.dusk.local`
so it's still possible to run your Dusk tests locally should you want to.
#### .env.dusk.local
##### .env.dusk.local
```
...
USE_SELENIUM=true
```
#### DuskTestCase.php
##### DuskTestCase.php
```php
abstract class DuskTestCase extends BaseTestCase
{
@ -857,8 +518,7 @@ abstract class DuskTestCase extends BaseTestCase
}
```
<a name="running-tests"></a>
### Running Laravel Dusk Tests
#### Running Laravel Dusk Tests
Now that you have everything set up, to run your Dusk tests, you have to SSH
into the workspace container as you normally would:
@ -883,3 +543,326 @@ This invokes the Dusk command from inside the workspace container but when the s
execution, it returns your session to your project directory.
[^1]: [Don't Use .dev for Development](https://iyware.com/dont-use-dev-for-development/)
<br>
<br>
<br>
<br>
<br>
<a name="PHPStorm-Debugging"></a>
## PHPStorm XDebug Setup
- [Intro](#Intro)
- [Installation](#Installation)
- [Customize laradock/docker-compose.yml](#CustomizeDockerCompose)
- [Clean House](#InstallCleanHouse)
- [Laradock Dial Tone](#InstallLaradockDialTone)
- [hosts](#AddToHosts)
- [Firewall](#FireWall)
- [Enable xDebug on php-fpm](#enablePhpXdebug)
- [PHPStorm Settings](#InstallPHPStorm)
- [Configs](#InstallPHPStormConfigs)
- [Usage](#Usage)
- [Laravel](#UsageLaravel)
- [Run ExampleTest](#UsagePHPStormRunExampleTest)
- [Debug ExampleTest](#UsagePHPStormDebugExampleTest)
- [Debug Web Site](#UsagePHPStormDebugSite)
- [SSH into workspace](#SSHintoWorkspace)
- [KiTTY](#InstallKiTTY)
### Intro
Wiring up [Laravel](https://laravel.com/), [Laradock](https://github.com/Laradock/laradock) [Laravel+Docker] and [PHPStorm](https://www.jetbrains.com/phpstorm/) to play nice together complete with remote xdebug'ing as icing on top! Although this guide is based on `PHPStorm Windows`,
you should be able to adjust accordingly. This guide was written based on Docker for Windows Native.
### Installation
- This guide assumes the following:
- you have already installed and are familiar with Laravel, Laradock and PHPStorm.
- you have installed Laravel as a parent of `laradock`. This guide assumes `/c/_dk/laravel`.
### hosts
- Add `laravel` to your hosts file located on Windows 10 at `C:\Windows\System32\drivers\etc\hosts`. It should be set to the IP of your running container. Mine is: `10.0.75.2`
On Windows you can find it by opening Windows `Hyper-V Manager`.
- ![Windows Hyper-V Manager](images/photos/PHPStorm/Settings/WindowsHyperVManager.png)
- [Hosts File Editor](https://github.com/scottlerch/HostsFileEditor) makes it easy to change your hosts file.
- Set `laravel` to your Docker host IP, as in the entry below. See [Example](images/photos/SimpleHostsEditor/AddHost_laravel.png).
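An example hosts entry (the IP is the Docker host IP assumed above; adjust it to yours):

```
10.0.75.2    laravel
```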
### Firewall
PHPStorm needs to be able to receive a connection from PHP Xdebug, from either your running workspace or php-fpm container, on port 9000. This means your Windows Firewall should allow connections either for the PHPStorm application OR for the port.
- It is important to note that if the PHPStorm application is NOT enabled in the firewall, you will not be able to create a port rule that overrides that.
- Also be aware that if you install/upgrade different versions of PHPStorm, you MAY end up with orphaned references to PHPStorm in your Firewall! You may decide to remove the orphaned references; in either case, make sure the remaining entries are set to receive public TCP traffic.
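If you prefer to open the port directly rather than whitelist the application, a rule along these lines (run from an elevated command prompt; the rule name is only an example) should work:

```
netsh advfirewall firewall add rule name="Xdebug 9000" dir=in action=allow protocol=TCP localport=9000
```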
#### Edit laradock/docker-compose.yml
Set the following variables:
```
### Workspace Utilities Container ###############
workspace:
build:
context: ./workspace
args:
- INSTALL_XDEBUG=true
- INSTALL_WORKSPACE_SSH=true
...
### PHP-FPM Container #####################
php-fpm:
build:
context: ./php-fpm
args:
- INSTALL_XDEBUG=true
...
```
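After changing these build arguments, you generally need to rebuild the affected images and restart the containers, for example:

```bash
# rebuild the images that picked up new build args
docker-compose build workspace php-fpm

# bring the stack back up
docker-compose up -d nginx mysql
```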
#### Edit xdebug.ini files
- `laradock/workspace/xdebug.ini`
- `laradock/php-fpm/xdebug.ini`
Set the following variables:
```
xdebug.remote_autostart=1
xdebug.remote_enable=1
xdebug.remote_connect_back=1
xdebug.cli_color=1
```
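To double-check that the settings are picked up inside the container, you can run something like:

```bash
# assuming the php-fpm container is up
docker-compose exec php-fpm php -i | grep -i "xdebug.remote"
```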
#### Need to clean house first?
Make sure you are starting with a clean state. For example, do you have other Laradock containers and images?
Here are a few things I use to clean things up.
- Delete all containers using `grep laradock_` on the names, see: [Remove all containers based on docker image name](https://linuxconfig.org/remove-all-containners-based-on-docker-image-name).
`docker ps -a | awk '{ print $1,$2 }' | grep laradock_ | awk '{print $1}' | xargs -I {} docker rm {}`
- Delete all images containing `laradock`.
`docker images | awk '{print $1,$2,$3}' | grep laradock_ | awk '{print $3}' | xargs -I {} docker rmi {}`
**Note:** This will only delete images that were built with `Laradock`, **NOT** `laradock/*` which are pulled down by `Laradock` such as `laradock/workspace`, etc.
**Note:** Some may fail with:
`Error response from daemon: conflict: unable to delete 3f38eaed93df (cannot be forced) - image has dependent child images`
- I added this to my `.bashrc` to remove orphaned images.
```
dclean() {
processes=`docker ps -q -f status=exited`
if [ -n "$processes" ]; then
docker rm $processes
fi
images=`docker images -q -f dangling=true`
if [ -n "$images" ]; then
docker rmi $images
fi
}
```
- If you frequently switch configurations for Laradock, you may find it useful to add the following to your `.bashrc` or equivalent:
```
# remove laravel* containers
# remove laravel_* images
dcleanlaradockfunction()
{
echo 'Removing ALL containers associated with laradock'
docker ps -a | awk '{ print $1,$2 }' | grep laradock | awk '{print $1}' | xargs -I {} docker rm {}
# remove ALL images associated with laradock_
# does NOT delete laradock/* which are hub images
echo 'Removing ALL images associated with laradock_'
docker images | awk '{print $1,$2,$3}' | grep laradock_ | awk '{print $3}' | xargs -I {} docker rmi {}
echo 'Listing all laradock docker hub images...'
docker images | grep laradock
echo 'dcleanlaradock completed'
}
# associate the above function with an alias
# so can recall/lookup by typing 'alias'
alias dcleanlaradock=dcleanlaradockfunction
```
### Let's get a dial-tone with Laravel
```
# barebones at this point
docker-compose up -d nginx mysql
# run
docker-compose ps
# Should see:
Name Command State Ports
-----------------------------------------------------------------------------------------------------------
laradock_mysql_1 docker-entrypoint.sh mysqld Up 0.0.0.0:3306->3306/tcp
laradock_nginx_1 nginx Up 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
laradock_php-fpm_1 php-fpm Up 9000/tcp
laradock_volumes_data_1 true Exit 0
laradock_volumes_source_1 true Exit 0
laradock_workspace_1 /sbin/my_init Up 0.0.0.0:2222->22/tcp
```
### Enable xDebug on php-fpm
In a host terminal sitting in the laradock folder, run: `./php-fpm/xdebug status`
You should see something like the following:
```
xDebug status
laradock_php-fpm_1
PHP 7.0.9 (cli) (built: Aug 10 2016 19:45:48) ( NTS )
Copyright (c) 1997-2016 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2016 Zend Technologies
with Xdebug v2.4.1, Copyright (c) 2002-2016, by Derick Rethans
```
Other commands include `./php-fpm/xdebug start | stop`.
If you have set `INSTALL_XDEBUG=true` for `php-fpm` in `docker-compose.yml`, `xdebug` will already be running when
`php-fpm` is started, listening for debug connections on port 9000.
### PHPStorm Settings
- Here are some settings that are known to work:
- `Settings/BuildDeploymentConnection`
- ![Settings/BuildDeploymentConnection](/images/photos/PHPStorm/Settings/BuildDeploymentConnection.png)
- `Settings/BuildDeploymentConnectionMappings`
- ![Settings/BuildDeploymentConnectionMappings](/images/photos/PHPStorm/Settings/BuildDeploymentConnectionMappings.png)
- `Settings/BuildDeploymentDebugger`
- ![Settings/BuildDeploymentDebugger](/images/photos/PHPStorm/Settings/BuildDeploymentDebugger.png)
- `Settings/EditRunConfigurationRemoteWebDebug`
- ![Settings/EditRunConfigurationRemoteWebDebug](/images/photos/PHPStorm/Settings/EditRunConfigurationRemoteWebDebug.png)
- `Settings/EditRunConfigurationRemoteExampleTestDebug`
- ![Settings/EditRunConfigurationRemoteExampleTestDebug](/images/photos/PHPStorm/Settings/EditRunConfigurationRemoteExampleTestDebug.png)
- `Settings/LangsPHPDebug`
- ![Settings/LangsPHPDebug](/images/photos/PHPStorm/Settings/LangsPHPDebug.png)
- `Settings/LangsPHPInterpreters`
- ![Settings/LangsPHPInterpreters](/images/photos/PHPStorm/Settings/LangsPHPInterpreters.png)
- `Settings/LangsPHPPHPUnit`
- ![Settings/LangsPHPPHPUnit](/images/photos/PHPStorm/Settings/LangsPHPPHPUnit.png)
- `Settings/LangsPHPServers`
- ![Settings/LangsPHPServers](/images/photos/PHPStorm/Settings/LangsPHPServers.png)
- `RemoteHost`
To switch on this view, go to: `Menu/Tools/Deployment/Browse Remote Host`.
- ![RemoteHost](/images/photos/PHPStorm/RemoteHost.png)
- `RemoteWebDebug`
- ![DebugRemoteOn](/images/photos/PHPStorm/DebugRemoteOn.png)
- `EditRunConfigurationRemoteWebDebug`
Go to: `Menu/Run/Edit Configurations`.
- ![EditRunConfigurationRemoteWebDebug](/images/photos/PHPStorm/Settings/EditRunConfigurationRemoteWebDebug.png)
- `EditRunConfigurationRemoteExampleTestDebug`
Go to: `Menu/Run/Edit Configurations`.
- ![EditRunConfigurationRemoteExampleTestDebug](/images/photos/PHPStorm/Settings/EditRunConfigurationRemoteExampleTestDebug.png)
- `WindowsFirewallAllowedApps`
Go to: `Control Panel\All Control Panel Items\Windows Firewall\Allowed apps`.
- ![WindowsFirewallAllowedApps.png](/images/photos/PHPStorm/Settings/WindowsFirewallAllowedApps.png)
- `hosts`
Edit: `C:\Windows\System32\drivers\etc\hosts`.
- ![WindowsFirewallAllowedApps.png](/images/photos/PHPStorm/Settings/hosts.png)
- [Enable xDebug on php-fpm](#enablePhpXdebug)
### Usage
#### Run ExampleTest
- right-click on `tests/ExampleTest.php`
- Select: `Run 'ExampleTest.php'` or `Ctrl+Shift+F10`.
- Should pass!! You just ran a remote test via SSH!
#### Debug ExampleTest
- Open to edit: `tests/ExampleTest.php`
- Add a BreakPoint on line 16: `$this->visit('/')`
- right-click on `tests/ExampleTest.php`
- Select: `Debug 'ExampleTest.php'`.
- Should have stopped at the BreakPoint!! You are now debugging locally against a remote Laravel project via SSH!
- ![Remote Test Debugging Success](/images/photos/PHPStorm/RemoteTestDebuggingSuccess.png)
#### Debug WebSite
- In case xDebug is disabled, from the `laradock` folder run:
`./php-fpm/xdebug start`.
- To switch xdebug off, run:
`./php-fpm/xdebug stop`
- Start Remote Debugging
- ![DebugRemoteOn](/images/photos/PHPStorm/DebugRemoteOn.png)
- Open to edit: `bootstrap/app.php`
- Add a BreakPoint on line 14: `$app = new Illuminate\Foundation\Application(`
- Reload [Laravel Site](http://laravel/)
- Should have stopped at the BreakPoint!! You are now debugging locally against a remote Laravel project via SSH!
- ![Remote Debugging Success](/images/photos/PHPStorm/RemoteDebuggingSuccess.png)
### Let's shell into workspace
Assuming that you are in the laradock folder, type:
`ssh -i workspace/insecure_id_rsa -p2222 root@laravel`
**Cha Ching!!!!**
- `workspace/insecure_id_rsa.ppk` may become corrupted. If that happens:
- fire up `puttygen`
- import `workspace/insecure_id_rsa`
- save private key to `workspace/insecure_id_rsa.ppk`
#### KiTTY
[KiTTY](http://www.9bis.net/kitty/) is a fork of PuTTY version 0.67.
- Here are some settings that are working for me:
- ![Session](/images/photos/KiTTY/Session.png)
- ![Terminal](/images/photos/KiTTY/Terminal.png)
- ![Window](/images/photos/KiTTY/Window.png)
- ![WindowAppearance](/images/photos/KiTTY/WindowAppearance.png)
- ![Connection](/images/photos/KiTTY/Connection.png)
- ![ConnectionData](/images/photos/KiTTY/ConnectionData.png)
- ![ConnectionSSH](/images/photos/KiTTY/ConnectionSSH.png)
- ![ConnectionSSHAuth](/images/photos/KiTTY/ConnectionSSHAuth.png)
- ![TerminalShell](/images/photos/KiTTY/TerminalShell.png)
<br>
<br>
<br>
<br>
<br>
<a name="Setup remote debugging (PhpStorm)"></a>
## Setup remote debugging for PhpStorm on Linux
- Make sure you have followed the steps above in the [Install Xdebug section](#install-xdebug).
- Make sure Xdebug accepts connections and listens on port 9000. (Should be default configuration).
![Debug Configuration](/images/photos/PHPStorm/linux/configuration/debugConfiguration.png "Debug Configuration").
- Create a server with name `laradock` (matches **PHP_IDE_CONFIG** key in environment file) and make sure to map project root path with server correctly.
![Server Configuration](/images/photos/PHPStorm/linux/configuration/serverConfiguration.png "Server Configuration").
- Start listening for debug connections, place a breakpoint, and you are good to go! (A sample `PHP_IDE_CONFIG` entry is shown below.)
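For reference, the server name comes from the `PHP_IDE_CONFIG` variable in the environment file; assuming you named the server `laradock` as above, the entry would look like:

```bash
PHP_IDE_CONFIG=serverName=laradock
```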

View File

@ -1,5 +1,5 @@
---
title: Help & Questions
title: 5. Help & Questions
type: index
weight: 5
---
@ -7,3 +7,121 @@ weight: 5
Join the chat room on [Gitter](https://gitter.im/Laradock/laradock) and get help and support from the community.
You can also open an [issue](https://github.com/laradock/laradock/issues) on GitHub (it will be labeled as Question) and discuss it with people on [Gitter](https://gitter.im/Laradock/laradock).
<br>
<a name="Common-Problems"></a>
# Common Problems
*Here's a list of the common problems you might face, and the possible solutions.*
<br>
## I see a blank (white) page instead of the Laravel 'Welcome' page!
Run the following command from the Laravel root directory:
```bash
sudo chmod -R 777 storage bootstrap/cache
```
<br>
## I see "Welcome to nginx" instead of the Laravel App!
Use `http://127.0.0.1` instead of `http://localhost` in your browser.
<br>
## I see an error message containing `address already in use` or `port is already allocated`
Make sure the ports for the services that you are trying to run (22, 80, 443, 3306, etc.) are not being used already by other programs on the host, such as a built in `apache`/`httpd` service or other development tools you have installed.
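To find out which process currently holds a port (port 80 in this hypothetical example), you can run something like:

```bash
# works on Linux and macOS
sudo lsof -i :80

# or, on Linux
sudo netstat -tulpn | grep ":80"
```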
<br>
## I get NGINX error 404 Not Found on Windows.
1. Go to docker Settings on your Windows machine.
2. Click on the `Shared Drives` tab and check the drive that contains your project files.
3. Enter your windows username and password.
4. Go to the `reset` tab and click restart docker.
<br>
## The time in my services does not match the current time
1. Make sure you've [changed the timezone](#Change-the-timezone).
2. Stop and rebuild the containers (`docker-compose up -d --build <services>`)
<br>
## I get MySQL connection refused
This error sometimes happens because your Laravel application isn't running on the container localhost IP (which is 127.0.0.1). Steps to fix it:
* Option A
  1. Check your running Laravel application's IP by dumping `Request::ip()` (e.g. `dd(Request::ip())`) anywhere in your application. The result is the IP of your Laravel container.
  2. Change the `DB_HOST` variable in your `.env` to the IP you got from the previous step.
* Option B
   1. Change the `DB_HOST` value to the name of the MySQL docker container. The Laradock docker-compose file currently has this as `mysql` (see the example below).
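For Option B, the relevant entries in your Laravel `.env` would look roughly like this (illustrative values; keep your own credentials):

```bash
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
```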
## I get stuck when building nginx on `fetch http://mirrors.aliyun.com/alpine/v3.5/main/x86_64/APKINDEX.tar.gz`
As stated in [#749](https://github.com/laradock/laradock/issues/749#issuecomment-419652646), this has already been fixed; just set `CHANGE_SOURCE` to false.
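That is, in your Laradock `.env`:

```bash
CHANGE_SOURCE=false
```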
## Custom composer repo packagist url and npm registry url
In China, the default Composer and npm registries are very slow. You can add `WORKSPACE_NPM_REGISTRY` and `WORKSPACE_COMPOSER_REPO_PACKAGIST` settings to `.env` to use a custom mirror.
Example:
```bash
WORKSPACE_NPM_REGISTRY=https://registry.npm.taobao.org
WORKSPACE_COMPOSER_REPO_PACKAGIST=https://packagist.phpcomposer.com
```
<br>
## I get `Module build failed: Error: write EPIPE` while compiling react application
When you run `npm run build` or `yarn dev` to build a React application using webpack with Elixir, you may receive an `Error: write EPIPE` while processing .jpg images.
This is caused by an outdated library for processing **.jpg files** in Ubuntu 16.04.
To fix the problem, follow these steps:
1 - Open the `.env`.
2 - Search for `WORKSPACE_INSTALL_LIBPNG` or add the key if missing.
3 - Set the value to true:
```dotenv
WORKSPACE_INSTALL_LIBPNG=true
```
4 - Finally rebuild the workspace image
```bash
docker-compose build workspace
```

View File

@ -1,24 +1,41 @@
---
title: Introduction
title: 1. Introduction
type: index
weight: 1
---
## Use Docker First - Then Learn About It Later
Laradock is a PHP development environment that runs on Docker.
Supports a variety of useful Docker Images, pre-configured to provide a wonderful PHP development environment.
A full PHP development environment for Docker.
Includes pre-packaged Docker Images, all pre-configured to provide a wonderful PHP development environment.
Laradock is well known in the Laravel community, as the project started with a single focus on running Laravel projects on Docker. Later, due to its wide adoption by the PHP community, it started supporting other PHP projects like Symfony, CodeIgniter, WordPress, Drupal...
![](https://raw.githubusercontent.com/laradock/laradock/master/.github/home-page-images/laradock-logo.jpg)
![](https://s19.postimg.org/jblfytw9f/laradock-logo.jpg)
<a name="sponsors"></a>
## Sponsors
<a href="https://opencollective.com/laradock/sponsor/0/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/0/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/1/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/1/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/2/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/2/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/3/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/3/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/4/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/4/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/5/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/5/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/6/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/6/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/7/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/7/avatar.svg"></a>
For basic sponsorships go to [Open Collective](https://opencollective.com/laradock#sponsor), for golden sponsorships contact <a href = "mailto: support@laradock.io">support@laradock.io</a>.
<br>
*Your logo will show up on the [github repository](https://github.com/laradock/laradock/) index page and the [documentation](http://laradock.io/) main page, with a link to your website.*
## Quick Overview
Let's see how easy it is to install `NGINX`, `PHP`, `Composer`, `MySQL`, `Redis` and `Beanstalkd`:
Let's see how easy it is to set up our demo stack `PHP`, `NGINX`, `MySQL`, `Redis` and `Composer`:
1 - Clone Laradock inside your PHP project:
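For example (a sketch; the exact command depends on whether you use a plain clone or a git submodule):

```bash
git clone https://github.com/laradock/laradock.git
```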
@ -58,10 +75,10 @@ That's it! enjoy :)
<a name="features"></a>
## Features
- Easy switch between PHP versions: 7.2, 7.1, 5.6...
- Easy switch between PHP versions: 7.3, 7.2, 7.1, 5.6...
- Choose your favorite database engine: MySQL, Postgres, MariaDB...
- Run your own combination of software: Memcached, HHVM, Beanstalkd...
- Every software runs on a separate container: PHP-FPM, NGINX, PHP-CLI...
- Run your own stack: Memcached, HHVM, RabbitMQ...
- Each software runs on its own container: PHP-FPM, NGINX, PHP-CLI...
- Easy to customize any container, with simple edit to the `Dockerfile`.
- All Images extends from an official base Image. (Trusted base Images).
- Pre-configured NGINX to host any code at your root directory.
@ -71,39 +88,131 @@ That's it! enjoy :)
- Latest version of the Docker Compose file (`docker-compose`).
- Everything is visible and editable.
- Fast Images Builds.
- More to come every week..
<a name="Supported-Containers"></a>
## Supported Software (Images)
## Supported Software (Docker Images)
Adhering to the separation of concerns principle as promoted by Docker, Laradock runs each piece of software in its own Container.
You can turn On/Off as many instances of any container as you want without worrying about the configurations; everything works like a charm.
> Laradock, adheres to the 'separation of concerns' principle, thus it runs each software on its own Docker Container.
> You can turn On/Off as many instances as you want without worrying about the configurations.
> To run a chosen container from the list below, run `docker-compose up -d {container-name}`.
> The container name `{container-name}` is the same as its folder name. For example, to run the "PHP FPM" container, use the name `php-fpm` (a concrete example follows).
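For instance, to bring up NGINX together with MySQL and Redis:

```bash
docker-compose up -d nginx mysql redis
```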
- **Web Servers:**
- NGINX
- Apache2
- Caddy
- **Load Balancers:**
- HAProxy
- Traefik
- **Database Engines:**
MySQL - MariaDB - Percona - MongoDB - Neo4j - RethinkDB - MSSQL - PostgreSQL - Postgres-PostGIS.
- **Database Management:**
PhpMyAdmin - Adminer - PgAdmin
- **Cache Engines:**
Redis - Memcached - Aerospike
- **PHP Servers:**
NGINX - Apache2 - Caddy
- **PHP Compilers:**
PHP FPM - HHVM
- **Message Queueing:**
Beanstalkd - RabbitMQ - PHP Worker
- **Queueing Management:**
Beanstalkd Console - RabbitMQ Console
- **Random Tools:**
Mailu - HAProxy - Certbot - Blackfire - Selenium - Jenkins - ElasticSearch - Kibana - Grafana - Gitlab - Mailhog - MailDev - Minio - Varnish - Swoole - NetData - Portainer - Laravel Echo - Phalcon...
- PHP FPM
- HHVM
Laradock introduces the **Workspace** Image, as a development environment.
It contains a rich set of helpful tools, all pre-configured to work and integrate with almost any combination of Containers and tools you may choose.
- **Database Management Systems:**
- MySQL
- PostgreSQL
- PostGIS
- MariaDB
- Percona
- MSSQL
- MongoDB
- MongoDB Web UI
- Neo4j
- CouchDB
- RethinkDB
- Cassandra
**Workspace Image Tools**
PHP CLI - Composer - Git - Linuxbrew - Node - V8JS - Gulp - SQLite - xDebug - Envoy - Deployer - Vim - Yarn - SOAP - Drush...
- **Database Management Apps:**
- PhpMyAdmin
- Adminer
- PgAdmin
- **Cache Engines:**
- Redis
- Redis Web UI
- Redis Cluster
- Memcached
- Aerospike
- Varnish
- **Message Brokers:**
- RabbitMQ
- RabbitMQ Admin Console
- Beanstalkd
- Beanstalkd Admin Console
- Eclipse Mosquitto
- PHP Worker
- Laravel Horizon
- **Mail Servers:**
- Mailu
- Mailhog
- MailDev
- **Log Management:**
- GrayLog
- **Testing:**
- Selenium
- **Monitoring:**
- Grafana
- NetData
- **Search Engines:**
- ElasticSearch
- Apache Solr
- Manticore Search
- **IDE's**
- ICE Coder
- Theia
- Web IDE
- **Miscellaneous:**
- Workspace *(Laradock container that includes a rich set of pre-configured useful tools)*
- `PHP CLI`
- `Composer`
- `Git`
- `Vim`
- `xDebug`
- `Linuxbrew`
- `Node`
- `V8JS`
- `Gulp`
- `SQLite`
- `Laravel Envoy`
- `Deployer`
- `Yarn`
- `SOAP`
- `Drush`
- `Wordpress CLI`
- Apache ZooKeeper *(Centralized service for distributed systems, providing a hierarchical key-value store)*
- Kibana *(Visualize your Elasticsearch data and navigate the Elastic Stack)*
- LogStash *(Server-side data processing pipeline that ingests data from a multitude of sources simultaneously)*
- Jenkins *(automation server, that provides plugins to support building, deploying and automating any project)*
- Certbot *(Automatically enable HTTPS on your website)*
- Swoole *(Production-Grade Async programming Framework for PHP)*
- SonarQube *(continuous inspection of code quality to perform automatic reviews with static analysis of code to detect bugs and more)*
- Gitlab *(A single application for the entire software development lifecycle)*
- PostGIS *(Database extender for PostgreSQL. It adds support for geographic objects allowing location queries to be run in SQL)*
- Blackfire *(Empowers all PHP developers and IT/Ops to continuously verify and improve their app's performance)*
- Laravel Echo *(Bring the power of WebSockets to your Laravel applications)*
- Phalcon *(A PHP web framework based on the model-view-controller pattern)*
- Minio *(Cloud storage server released under Apache License v2, compatible with Amazon S3)*
- AWS EB CLI *(CLI that helps you deploy and manage your AWS Elastic Beanstalk applications and environments)*
- Thumbor *(Photo thumbnail service)*
- IPython *(Provides a rich architecture for interactive computing)*
- Jupyter Hub *(Jupyter notebook for multiple users)*
- Portainer *(Build and manage your Docker environments with ease)*
- Docker Registry *(The Docker Registry implementation for storing and distributing Docker images)*
- Docker Web UI *(A browser-based solution for browsing and modifying a private Docker registry)*
You can choose which tools to install in your workspace container and other containers from the `.env` file.
@ -112,30 +221,7 @@ You can choose, which tools to install in your workspace container and other con
If you can't find your Software in the list, build it yourself and submit it. Contributions are welcomed :)
## Sponsors
Support this project by becoming a sponsor.
Your logo will show up on the [github repository](https://github.com/laradock/laradock/) index page and the [documentation](http://laradock.io/) main page, with a link to your website. [[Become a sponsor](https://opencollective.com/laradock#sponsor)]
<a href="https://opencollective.com/laradock/sponsor/0/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/0/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/1/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/1/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/2/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/2/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/3/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/3/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/4/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/4/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/5/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/5/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/6/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/6/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/7/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/7/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/8/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/8/avatar.svg"></a>
<a href="https://opencollective.com/laradock/sponsor/9/website" target="_blank"><img src="https://opencollective.com/laradock/sponsor/9/avatar.svg"></a>
*If you can't find your Software in the list, build it yourself and submit it. Contributions are welcomed :)*
@ -172,7 +258,6 @@ Most importantly Docker can run on Development and on Production (same environme
What's better than a **Demo Video**:
- Laradock v5.* (should be next!)
- Laradock [v4.*](https://www.youtube.com/watch?v=TQii1jDa96Y)
- Laradock [v2.*](https://www.youtube.com/watch?v=-DamFMczwDA)
- Laradock [v0.3](https://www.youtube.com/watch?v=jGkyO6Is_aI)
@ -201,14 +286,14 @@ You are welcome to join our chat room on Gitter.
> Help keeping the project development going, by [contributing](http://laradock.io/contributing) or donating a little.
> Thanks in advance.
Donate directly via [Paypal](https://www.paypal.me/mzalt)
Donate directly via [Paypal](https://paypal.me/mzmmzz)
[![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.me/mzalt)
or become a backer on [Open Collective](https://opencollective.com/laradock#backer)
<a href="https://opencollective.com/laradock#backers" target="_blank"><img src="https://opencollective.com/laradock/backers.svg?width=890"></a>
[![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://paypal.me/mzmmzz)
or show your support via [Beerpay](https://beerpay.io/laradock/laradock)
[![Beerpay](https://beerpay.io/laradock/laradock/badge.svg?style=flat)](https://beerpay.io/laradock/laradock)
or become a backer on [Open Collective](https://opencollective.com/laradock#backer)
<a href="https://opencollective.com/laradock#backers" target="_blank"><img src="https://opencollective.com/laradock/backers.svg?width=890"></a>

View File

@ -1,5 +1,5 @@
---
title: License
title: 8. License
type: index
weight: 8
---

View File

@ -1,5 +1,5 @@
---
title: Related Projects
title: 6. Related Projects
type: index
weight: 6
---

View File

@ -0,0 +1 @@
google.com, pub-9826129398689742, DIRECT, f08c47fec0942fa0

View File

@ -1,7 +1,4 @@
FROM adminer:4.3.0
# Version 4.3.1 contains PostgreSQL login errors. See docs.
# See https://sourceforge.net/p/adminer/bugs-and-features/548/
FROM adminer:4
LABEL maintainer="Patrick Artounian <partounian@gmail.com>"
@ -16,11 +13,15 @@ ARG INSTALL_MSSQL=false
ENV INSTALL_MSSQL ${INSTALL_MSSQL}
RUN if [ ${INSTALL_MSSQL} = true ]; then \
set -xe \
&& apk --update add --no-cache --virtual .phpize-deps $PHPIZE_DEPS unixodbc unixodbc-dev \
&& pecl channel-update pecl.php.net \
&& pecl install pdo_sqlsrv-4.1.8preview sqlsrv-4.1.8preview \
&& echo "extension=sqlsrv.so" > /usr/local/etc/php/conf.d/20-sqlsrv.ini \
&& echo "extension=pdo_sqlsrv.so" > /usr/local/etc/php/conf.d/20-pdo_sqlsrv.ini \
# && apk --update add --no-cache --virtual .phpize-deps $PHPIZE_DEPS unixodbc unixodbc-dev \
# && pecl channel-update pecl.php.net \
# && pecl install pdo_sqlsrv-4.1.8preview sqlsrv-4.1.8preview \
# && echo "extension=sqlsrv.so" > /usr/local/etc/php/conf.d/20-sqlsrv.ini \
# && echo "extension=pdo_sqlsrv.so" > /usr/local/etc/php/conf.d/20-pdo_sqlsrv.ini \
&& apk --update add --no-cache freetds unixodbc \
&& apk --update add --no-cache --virtual .build-deps $PHPIZE_DEPS freetds-dev unixodbc-dev \
&& docker-php-ext-install pdo_dblib \
&& apk del .build-deps \
;fi
USER adminer

View File

@ -1,7 +1,3 @@
FROM aerospike:latest
LABEL maintainer="Luciano Jr <luciano@lucianojr.com.br>"
RUN rm /etc/aerospike/aerospike.conf
COPY aerospike.conf /etc/aerospike/aerospike.conf

View File

@ -1,77 +0,0 @@
# Aerospike database configuration file.
# This stanza must come first.
service {
user root
group root
paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
pidfile /var/run/aerospike/asd.pid
service-threads 4
transaction-queues 4
transaction-threads-per-queue 4
proto-fd-max 15000
}
logging {
# Log file must be an absolute path.
file /var/log/aerospike/aerospike.log {
context any info
}
# Send log messages to stdout
console {
context any critical
}
}
network {
service {
address any
port 3000
# Uncomment the following to set the `access-address` parameter to the
# IP address of the Docker host. This will then allow the server to correctly
# publish the address which applications and other nodes in the cluster to
# use when addressing this node.
# access-address <IPADDR>
}
heartbeat {
# mesh is used for environments that do not support multicast
mode mesh
port 3002
# use asinfo -v 'tip:host=<ADDR>;port=3002' to inform cluster of
# other mesh nodes
mesh-port 3002
interval 150
timeout 10
}
fabric {
port 3001
}
info {
port 3003
}
}
namespace test {
replication-factor 2
memory-size 1G
default-ttl 5d # 5 days, use 0 to never expire/evict.
# storage-engine memory
# To use file storage backing, comment out the line above and use the
# following lines instead.
storage-engine device {
file /opt/aerospike/data/test.dat
filesize 4G
data-in-memory true # Store data in memory in addition to file.
}
}

View File

@ -1,16 +1,7 @@
FROM phusion/baseimage:latest
FROM alpine
LABEL maintainer="Mahmoud Zalt <mahmoud@zalt.me>"
ENV DEBIAN_FRONTEND noninteractive
ENV PATH /usr/local/rvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN apt-get update
RUN apt-get install -y beanstalkd
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
VOLUME /var/lib/beanstalkd/data
RUN apk add --no-cache beanstalkd
EXPOSE 11300
CMD ["/usr/bin/beanstalkd"]
ENTRYPOINT ["/usr/bin/beanstalkd"]

View File

@ -1,27 +1,5 @@
FROM golang:alpine
FROM abiosoft/caddy:no-stats
LABEL maintainer="Huadong Zuo <admin@zuohuadong.cn>"
CMD ["--conf", "/etc/caddy/Caddyfile", "--log", "stdout", "--agree=true"]
RUN apk add --no-cache \
openssh \
git \
build-base && \
go get github.com/abiosoft/caddyplug/caddyplug \
&& caddyplug install-caddy \
apk del build-base
ARG plugins="cors"
## ARG plugins="cors cgi cloudflare azure linode"
RUN caddyplug install ${plugins}
RUN apk add --no-cache inotify-tools \
&& echo -e "#!/bin/sh\nwhile inotifywait -e modify /etc/caddy; do\n\tpkill caddy\ndone " >> /start.sh \
&& chmod +x /start.sh
EXPOSE 80 443
WORKDIR /var/www/public
CMD ["sh","-c","/start.sh & /usr/bin/caddy -conf /etc/caddy/Caddyfile -agree"]
EXPOSE 80 443 2015

5
cassandra/Dockerfile Normal file
View File

@ -0,0 +1,5 @@
ARG CASSANDRA_VERSION=latest
FROM bitnami/cassandra:${CASSANDRA_VERSION}
LABEL maintainer="Stefan Neuhaus <https://www.github.com/stefnats>"

3
couchdb/Dockerfile Normal file
View File

@ -0,0 +1,3 @@
FROM couchdb
EXPOSE 5984

View File

@ -39,6 +39,14 @@ volumes:
driver: ${VOLUMES_DRIVER}
elasticsearch:
driver: ${VOLUMES_DRIVER}
mosquitto:
driver: ${VOLUMES_DRIVER}
confluence:
driver: ${VOLUMES_DRIVER}
sonarqube:
driver: ${VOLUMES_DRIVER}
cassandra:
driver: ${VOLUMES_DRIVER}
services:
@ -56,10 +64,12 @@ services:
- INSTALL_SSH2=${WORKSPACE_INSTALL_SSH2}
- INSTALL_GMP=${WORKSPACE_INSTALL_GMP}
- INSTALL_SOAP=${WORKSPACE_INSTALL_SOAP}
- INSTALL_XSL=${WORKSPACE_INSTALL_XSL}
- INSTALL_LDAP=${WORKSPACE_INSTALL_LDAP}
- INSTALL_IMAP=${WORKSPACE_INSTALL_IMAP}
- INSTALL_MONGO=${WORKSPACE_INSTALL_MONGO}
- INSTALL_AMQP=${WORKSPACE_INSTALL_AMQP}
- INSTALL_CASSANDRA=${WORKSPACE_INSTALL_CASSANDRA}
- INSTALL_PHPREDIS=${WORKSPACE_INSTALL_PHPREDIS}
- INSTALL_MSSQL=${WORKSPACE_INSTALL_MSSQL}
- INSTALL_NODE=${WORKSPACE_INSTALL_NODE}
@ -68,12 +78,14 @@ services:
- INSTALL_NPM_GULP=${WORKSPACE_INSTALL_NPM_GULP}
- INSTALL_NPM_BOWER=${WORKSPACE_INSTALL_NPM_BOWER}
- INSTALL_NPM_VUE_CLI=${WORKSPACE_INSTALL_NPM_VUE_CLI}
- INSTALL_NPM_ANGULAR_CLI=${WORKSPACE_INSTALL_NPM_ANGULAR_CLI}
- INSTALL_DRUSH=${WORKSPACE_INSTALL_DRUSH}
- INSTALL_WP_CLI=${WORKSPACE_INSTALL_WP_CLI}
- INSTALL_DRUPAL_CONSOLE=${WORKSPACE_INSTALL_DRUPAL_CONSOLE}
- INSTALL_AEROSPIKE=${WORKSPACE_INSTALL_AEROSPIKE}
- AEROSPIKE_PHP_REPOSITORY=${AEROSPIKE_PHP_REPOSITORY}
- INSTALL_V8JS=${WORKSPACE_INSTALL_V8JS}
- COMPOSER_GLOBAL_INSTALL=${WORKSPACE_COMPOSER_GLOBAL_INSTALL}
- COMPOSER_AUTH=${WORKSPACE_COMPOSER_AUTH}
- COMPOSER_REPO_PACKAGIST=${WORKSPACE_COMPOSER_REPO_PACKAGIST}
- INSTALL_WORKSPACE_SSH=${WORKSPACE_INSTALL_WORKSPACE_SSH}
- INSTALL_LARAVEL_ENVOY=${WORKSPACE_INSTALL_LARAVEL_ENVOY}
@ -91,9 +103,12 @@ services:
- INSTALL_PG_CLIENT=${WORKSPACE_INSTALL_PG_CLIENT}
- INSTALL_PHALCON=${WORKSPACE_INSTALL_PHALCON}
- INSTALL_SWOOLE=${WORKSPACE_INSTALL_SWOOLE}
- INSTALL_TAINT=${WORKSPACE_INSTALL_TAINT}
- INSTALL_LIBPNG=${WORKSPACE_INSTALL_LIBPNG}
- INSTALL_IONCUBE=${WORKSPACE_INSTALL_IONCUBE}
- INSTALL_MYSQL_CLIENT=${WORKSPACE_INSTALL_MYSQL_CLIENT}
- INSTALL_PING=${WORKSPACE_INSTALL_PING}
- INSTALL_SSHPASS=${WORKSPACE_INSTALL_SSHPASS}
- PUID=${WORKSPACE_PUID}
- PGID=${WORKSPACE_PGID}
- CHROME_DRIVER_VERSION=${WORKSPACE_CHROME_DRIVER_VERSION}
@ -103,8 +118,14 @@ services:
- TZ=${WORKSPACE_TIMEZONE}
- BLACKFIRE_CLIENT_ID=${BLACKFIRE_CLIENT_ID}
- BLACKFIRE_CLIENT_TOKEN=${BLACKFIRE_CLIENT_TOKEN}
- INSTALL_POWERLINE=${WORKSPACE_INSTALL_POWERLINE}
- INSTALL_FFMPEG=${WORKSPACE_INSTALL_FFMPEG}
- INSTALL_GNU_PARALLEL=${WORKSPACE_INSTALL_GNU_PARALLEL}
- http_proxy
- https_proxy
- no_proxy
volumes:
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
extra_hosts:
- "dockerhost:${DOCKER_HOST_IP}"
ports:
@ -131,11 +152,12 @@ services:
- INSTALL_BLACKFIRE=${INSTALL_BLACKFIRE}
- INSTALL_SSH2=${PHP_FPM_INSTALL_SSH2}
- INSTALL_SOAP=${PHP_FPM_INSTALL_SOAP}
- INSTALL_XSL=${PHP_FPM_INSTALL_XSL}
- INSTALL_IMAP=${PHP_FPM_INSTALL_IMAP}
- INSTALL_MONGO=${PHP_FPM_INSTALL_MONGO}
- INSTALL_AMQP=${PHP_FPM_INSTALL_AMQP}
- INSTALL_CASSANDRA=${PHP_FPM_INSTALL_CASSANDRA}
- INSTALL_MSSQL=${PHP_FPM_INSTALL_MSSQL}
- INSTALL_ZIP_ARCHIVE=${PHP_FPM_INSTALL_ZIP_ARCHIVE}
- INSTALL_BCMATH=${PHP_FPM_INSTALL_BCMATH}
- INSTALL_GMP=${PHP_FPM_INSTALL_GMP}
- INSTALL_PHPREDIS=${PHP_FPM_INSTALL_PHPREDIS}
@ -143,24 +165,37 @@ services:
- INSTALL_OPCACHE=${PHP_FPM_INSTALL_OPCACHE}
- INSTALL_EXIF=${PHP_FPM_INSTALL_EXIF}
- INSTALL_AEROSPIKE=${PHP_FPM_INSTALL_AEROSPIKE}
- AEROSPIKE_PHP_REPOSITORY=${AEROSPIKE_PHP_REPOSITORY}
- INSTALL_MYSQLI=${PHP_FPM_INSTALL_MYSQLI}
- INSTALL_PGSQL=${PHP_FPM_INSTALL_PGSQL}
- INSTALL_PG_CLIENT=${PHP_FPM_INSTALL_PG_CLIENT}
- INSTALL_POSTGIS=${PHP_FPM_INSTALL_POSTGIS}
- INSTALL_INTL=${PHP_FPM_INSTALL_INTL}
- INSTALL_GHOSTSCRIPT=${PHP_FPM_INSTALL_GHOSTSCRIPT}
- INSTALL_LDAP=${PHP_FPM_INSTALL_LDAP}
- INSTALL_PHALCON=${PHP_FPM_INSTALL_PHALCON}
- INSTALL_SWOOLE=${PHP_FPM_INSTALL_SWOOLE}
- INSTALL_TAINT=${PHP_FPM_INSTALL_TAINT}
- INSTALL_IMAGE_OPTIMIZERS=${PHP_FPM_INSTALL_IMAGE_OPTIMIZERS}
- INSTALL_IMAGEMAGICK=${PHP_FPM_INSTALL_IMAGEMAGICK}
- INSTALL_CALENDAR=${PHP_FPM_INSTALL_CALENDAR}
- INSTALL_FAKETIME=${PHP_FPM_INSTALL_FAKETIME}
- INSTALL_IONCUBE=${PHP_FPM_INSTALL_IONCUBE}
- INSTALL_APCU=${PHP_FPM_INSTALL_APCU}
- INSTALL_YAML=${PHP_FPM_INSTALL_YAML}
- INSTALL_RDKAFKA=${PHP_FPM_INSTALL_RDKAFKA}
- INSTALL_ADDITIONAL_LOCALES=${PHP_FPM_INSTALL_ADDITIONAL_LOCALES}
- INSTALL_MYSQL_CLIENT=${PHP_FPM_INSTALL_MYSQL_CLIENT}
- INSTALL_PING=${PHP_FPM_INSTALL_PING}
- INSTALL_SSHPASS=${PHP_FPM_INSTALL_SSHPASS}
- ADDITIONAL_LOCALES=${PHP_FPM_ADDITIONAL_LOCALES}
- INSTALL_FFMPEG=${PHP_FPM_FFMPEG}
- INSTALL_XHPROF=${PHP_FPM_INSTALL_XHPROF}
- http_proxy
- https_proxy
- no_proxy
volumes:
- ./php-fpm/php${PHP_VERSION}.ini:/usr/local/etc/php/php.ini
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
expose:
- "9000"
extra_hosts:
@ -182,12 +217,24 @@ services:
context: ./php-worker
args:
- PHP_VERSION=${PHP_VERSION}
- PHALCON_VERSION=${PHALCON_VERSION}
- INSTALL_PGSQL=${PHP_WORKER_INSTALL_PGSQL}
- INSTALL_BCMATH=${PHP_WORKER_INSTALL_BCMATH}
- INSTALL_PHALCON=${PHP_WORKER_INSTALL_PHALCON}
- INSTALL_SOAP=${PHP_WORKER_INSTALL_SOAP}
- INSTALL_ZIP_ARCHIVE=${PHP_WORKER_INSTALL_ZIP_ARCHIVE}
- INSTALL_MYSQL_CLIENT=${PHP_WORKER_INSTALL_MYSQL_CLIENT}
- INSTALL_AMQP=${PHP_WORKER_INSTALL_AMQP}
- INSTALL_CASSANDRA=${PHP_WORKER_INSTALL_CASSANDRA}
- INSTALL_GHOSTSCRIPT=${PHP_WORKER_INSTALL_GHOSTSCRIPT}
- INSTALL_SWOOLE=${PHP_WORKER_INSTALL_SWOOLE}
- INSTALL_TAINT=${PHP_WORKER_INSTALL_TAINT}
- INSTALL_FFMPEG=${PHP_WORKER_INSTALL_FFMPEG}
- INSTALL_GMP=${PHP_WORKER_INSTALL_GMP}
- PUID=${PHP_WORKER_PUID}
- PGID=${PHP_WORKER_PGID}
volumes:
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
- ./php-worker/supervisord.d:/etc/supervisord.d
depends_on:
- workspace
@ -204,6 +251,8 @@ services:
- INSTALL_PGSQL=${PHP_FPM_INSTALL_PGSQL}
- INSTALL_BCMATH=${PHP_FPM_INSTALL_BCMATH}
- INSTALL_MEMCACHED=${PHP_FPM_INSTALL_MEMCACHED}
- INSTALL_SOCKETS=${LARAVEL_HORIZON_INSTALL_SOCKETS}
- INSTALL_CASSANDRA=${PHP_FPM_INSTALL_CASSANDRA}
volumes:
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
- ./laravel-horizon/supervisord.d:/etc/supervisord.d
@ -222,8 +271,11 @@ services:
- PHP_UPSTREAM_CONTAINER=${NGINX_PHP_UPSTREAM_CONTAINER}
- PHP_UPSTREAM_PORT=${NGINX_PHP_UPSTREAM_PORT}
- CHANGE_SOURCE=${CHANGE_SOURCE}
- http_proxy
- https_proxy
- no_proxy
volumes:
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
- ${NGINX_HOST_LOG_PATH}:/var/log/nginx
- ${NGINX_SITES_PATH}:/etc/nginx/sites-available
- ${NGINX_SSL_PATH}:/etc/nginx/ssl
@ -257,7 +309,7 @@ services:
- PHP_UPSTREAM_TIMEOUT=${APACHE_PHP_UPSTREAM_TIMEOUT}
- DOCUMENT_ROOT=${APACHE_DOCUMENT_ROOT}
volumes:
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
- ${APACHE_HOST_LOG_PATH}:/var/log/apache2
- ${APACHE_SITES_PATH}:/etc/apache2/sites-available
ports:
@ -273,7 +325,7 @@ services:
hhvm:
build: ./hhvm
volumes:
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
expose:
- "9000"
depends_on:
@ -352,13 +404,20 @@ services:
### MariaDB ##############################################
mariadb:
build: ./mariadb
build:
context: ./mariadb
args:
- http_proxy
- https_proxy
- no_proxy
- MARIADB_VERSION=${MARIADB_VERSION}
volumes:
- ${DATA_PATH_HOST}/mariadb:/var/lib/mysql
- ${MARIADB_ENTRYPOINT_INITDB}:/docker-entrypoint-initdb.d
ports:
- "${MARIADB_PORT}:3306"
environment:
- TZ=${WORKSPACE_TIMEZONE}
- MYSQL_DATABASE=${MARIADB_DATABASE}
- MYSQL_USER=${MARIADB_USER}
- MYSQL_PASSWORD=${MARIADB_PASSWORD}
@ -378,6 +437,22 @@ services:
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- GITLAB_POSTGRES_INIT=${GITLAB_POSTGRES_INIT}
- GITLAB_POSTGRES_USER=${GITLAB_POSTGRES_USER}
- GITLAB_POSTGRES_PASSWORD=${GITLAB_POSTGRES_PASSWORD}
- GITLAB_POSTGRES_DB=${GITLAB_POSTGRES_DB}
- JUPYTERHUB_POSTGRES_INIT=${JUPYTERHUB_POSTGRES_INIT}
- JUPYTERHUB_POSTGRES_USER=${JUPYTERHUB_POSTGRES_USER}
- JUPYTERHUB_POSTGRES_PASSWORD=${JUPYTERHUB_POSTGRES_PASSWORD}
- JUPYTERHUB_POSTGRES_DB=${JUPYTERHUB_POSTGRES_DB}
- SONARQUBE_POSTGRES_INIT=${SONARQUBE_POSTGRES_INIT}
- SONARQUBE_POSTGRES_DB=${SONARQUBE_POSTGRES_DB}
- SONARQUBE_POSTGRES_USER=${SONARQUBE_POSTGRES_USER}
- SONARQUBE_POSTGRES_PASSWORD=${SONARQUBE_POSTGRES_PASSWORD}
- POSTGRES_CONFLUENCE_INIT=${CONFLUENCE_POSTGRES_INIT}
- POSTGRES_CONFLUENCE_DB=${CONFLUENCE_POSTGRES_DB}
- POSTGRES_CONFLUENCE_USER=${CONFLUENCE_POSTGRES_USER}
- POSTGRES_CONFLUENCE_PASSWORD=${CONFLUENCE_POSTGRES_PASSWORD}
networks:
- backend
@ -438,6 +513,25 @@ services:
networks:
- backend
### Redis Cluster ##########################################
redis-cluster:
build: ./redis-cluster
ports:
- "${REDIS_CLUSTER_PORT_RANGE}:7000-7005"
networks:
- backend
### ZooKeeper #########################################
zookeeper:
build: ./zookeeper
volumes:
- ${DATA_PATH_HOST}/zookeeper/data:/data
- ${DATA_PATH_HOST}/zookeeper/datalog:/datalog
ports:
- "${ZOOKEEPER_PORT}:2181"
networks:
- backend
### Aerospike ##########################################
aerospike:
build: ./aerospike
@ -449,6 +543,10 @@ services:
- "${AEROSPIKE_FABRIC_PORT}:3001"
- "${AEROSPIKE_HEARTBEAT_PORT}:3002"
- "${AEROSPIKE_INFO_PORT}:3003"
environment:
- STORAGE_GB=${AEROSPIKE_STORAGE_GB}
- MEM_GB=${AEROSPIKE_MEM_GB}
- NAMESPACE=${AEROSPIKE_NAMESPACE}
networks:
- backend
@ -486,6 +584,41 @@ services:
environment:
- RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
- RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
hostname: laradock-rabbitmq
volumes:
- ${DATA_PATH_HOST}/rabbitmq:/var/lib/rabbitmq
depends_on:
- php-fpm
networks:
- backend
### Cassandra ############################################
cassandra:
build: ./cassandra
ports:
- "${CASSANDRA_TRANSPORT_PORT_NUMBER}:7000"
- "${CASSANDRA_JMX_PORT_NUMBER}:7199"
- "${CASSANDRA_CQL_PORT_NUMBER}:9042"
privileged: true
environment:
- CASSANDRA_VERSION=${CASSANDRA_VERSION}
- CASSANDRA_TRANSPORT_PORT_NUMBER=${CASSANDRA_TRANSPORT_PORT_NUMBER}
- CASSANDRA_JMX_PORT_NUMBER=${CASSANDRA_JMX_PORT_NUMBER}
- CASSANDRA_CQL_PORT_NUMBER=${CASSANDRA_CQL_PORT_NUMBER}
- CASSANDRA_USER=${CASSANDRA_USER}
- CASSANDRA_PASSWORD_SEEDER=${CASSANDRA_PASSWORD_SEEDER}
- CASSANDRA_PASSWORD=${CASSANDRA_PASSWORD}
- CASSANDRA_NUM_TOKENS=${CASSANDRA_NUM_TOKENS}
- CASSANDRA_HOST=${CASSANDRA_HOST}
- CASSANDRA_CLUSTER_NAME=${CASSANDRA_CLUSTER_NAME}
- CASSANDRA_SEEDS=${CASSANDRA_SEEDS}
- CASSANDRA_ENDPOINT_SNITCH=${CASSANDRA_ENDPOINT_SNITCH}
- CASSANDRA_ENABLE_RPC=${CASSANDRA_ENABLE_RPC}
- CASSANDRA_DATACENTER=${CASSANDRA_DATACENTER}
- CASSANDRA_RACK=${CASSANDRA_RACK}
hostname: laradock-cassandra
volumes:
- ${DATA_PATH_HOST}/cassandra:/var/lib/cassandra
depends_on:
- php-fpm
networks:
@ -505,7 +638,7 @@ services:
caddy:
build: ./caddy
volumes:
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
- ${CADDY_CONFIG_PATH}:/etc/caddy
- ${CADDY_HOST_LOG_PATH}:/var/log/caddy
- ${DATA_PATH_HOST}:/root/.caddy
@ -550,11 +683,14 @@ services:
### pgAdmin ##############################################
pgadmin:
build: ./pgadmin
image: dpage/pgadmin4:latest
environment:
- "PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}"
- "PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}"
ports:
- "5050:5050"
- "${PGADMIN_PORT}:80"
volumes:
- ${DATA_PATH_HOST}/pgadmin-backup:/var/lib/pgadmin/storage/pgadmin4
- ${DATA_PATH_HOST}/pgadmin:/var/lib/pgadmin
depends_on:
- postgres
networks:
@ -568,8 +704,10 @@ services:
- elasticsearch:/usr/share/elasticsearch/data
environment:
- cluster.name=laradock-cluster
- node.name=laradock-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- cluster.initial_master_nodes=laradock-node
ulimits:
memlock:
soft: -1
@ -583,6 +721,24 @@ services:
- frontend
- backend
### Logstash ##############################################
logstash:
build: ./logstash
volumes:
- './logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml'
- './logstash/pipeline:/usr/share/logstash/pipeline'
ports:
- '5001:5001'
environment:
LS_JAVA_OPTS: '-Xmx1g -Xms1g'
env_file:
- .env
networks:
- frontend
- backend
depends_on:
- elasticsearch
### Kibana ##############################################
kibana:
build: ./kibana
@ -710,6 +866,36 @@ services:
networks:
- backend
### Graylog #######################################
graylog:
build: ./graylog
environment:
- GRAYLOG_PASSWORD_SECRET=${GRAYLOG_PASSWORD}
- GRAYLOG_ROOT_PASSWORD_SHA2=${GRAYLOG_SHA256_PASSWORD}
- GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:${GRAYLOG_PORT}/
links:
- mongo
- elasticsearch
depends_on:
- mongo
- elasticsearch
ports:
# Graylog web interface and REST API
- ${GRAYLOG_PORT}:9000
# Syslog TCP
- ${GRAYLOG_SYSLOG_TCP_PORT}:514
# Syslog UDP
- ${GRAYLOG_SYSLOG_UDP_PORT}:514/udp
# GELF TCP
- ${GRAYLOG_GELF_TCP_PORT}:12201
# GELF UDP
- ${GRAYLOG_GELF_UDP_PORT}:12201/udp
user: root
volumes:
- ./graylog/config:/usr/share/graylog/data/config
networks:
- backend
### Laravel Echo Server #######################################
laravel-echo-server:
build:
@ -858,9 +1044,9 @@ services:
### AWS EB-CLI ################################################
aws:
build:
context: ./aws
context: ./aws-eb-cli
volumes:
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
depends_on:
- workspace
tty: true
@ -889,14 +1075,15 @@ services:
redis['enable'] = false
nginx['listen_https'] = false
nginx['listen_port'] = 80
nginx['custom_gitlab_server_config'] = "set_real_ip_from 172.0.0.0/8;\nreal_ip_header X-Real-IP;\nreal_ip_recursive on;"
postgresql['enable'] = false
gitlab_rails['trusted_proxies'] = ['caddy','nginx','apache2']
gitlab_rails['redis_host'] = 'redis'
gitlab_rails['redis_database'] = 8
gitlab_rails['db_host'] = 'postgres'
gitlab_rails['db_username'] = 'laradock_gitlab'
gitlab_rails['db_password'] = 'laradock_gitlab'
gitlab_rails['db_database'] = 'laradock_gitlab'
gitlab_rails['db_host'] = '${GITLAB_POSTGRES_HOST}'
gitlab_rails['db_username'] = '${GITLAB_POSTGRES_USER}'
gitlab_rails['db_password'] = '${GITLAB_POSTGRES_PASSWORD}'
gitlab_rails['db_database'] = '${GITLAB_POSTGRES_DB}'
gitlab_rails['initial_root_password'] = '${GITLAB_ROOT_PASSWORD}'
gitlab_rails['gitlab_shell_ssh_port'] = ${GITLAB_HOST_SSH_PORT}
volumes:
@ -915,11 +1102,14 @@ services:
gitlab-runner:
image: gitlab/gitlab-runner:latest
environment:
- CI_SERVER_URL=${GITLAB_DOMAIN_NAME}
- CI_SERVER_URL=${GITLAB_CI_SERVER_URL}
- REGISTRATION_TOKEN=${GITLAB_RUNNER_REGISTRATION_TOKEN}
- RUNNER_NAME=${COMPOSE_PROJECT_NAME}-runner
- REGISTER_NON_INTERACTIVE=${GITLAB_REGISTER_NON_INTERACTIVE}
- RUNNER_EXECUTOR=shell
volumes:
- ${DATA_PATH_HOST}/gitlab/runner:/etc/gitlab-runner
- /var/run/docker.sock:/var/run/docker.sock:rw
restart: always
### JupyterHub #########################################
jupyterhub:
@ -928,7 +1118,6 @@ services:
depends_on:
- postgres
- jupyterhub-user
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock:rw
- ${DATA_PATH_HOST}/jupyterhub/:/data
@ -949,7 +1138,8 @@ services:
- JUPYTERHUB_OAUTH_CALLBACK_URL=${JUPYTERHUB_OAUTH_CALLBACK_URL}
- JUPYTERHUB_OAUTH_CLIENT_ID=${JUPYTERHUB_OAUTH_CLIENT_ID}
- JUPYTERHUB_OAUTH_CLIENT_SECRET=${JUPYTERHUB_OAUTH_CLIENT_SECRET}
- JUPYTERHUB_LOCAL_NOTEBOOK_IMAGE=${JUPYTERHUB_LOCAL_NOTEBOOK_IMAGE}
- JUPYTERHUB_LOCAL_NOTEBOOK_IMAGE=${COMPOSE_PROJECT_NAME}_jupyterhub-user
- JUPYTERHUB_ENABLE_NVIDIA=${JUPYTERHUB_ENABLE_NVIDIA}
jupyterhub-user:
build:
context: ./jupyterhub
@ -999,9 +1189,10 @@ services:
networks:
- backend
### PHPRedisAdmin ################################################
phpredisadmin:
image: erikdubbelboer/phpredisadmin:latest
### REDISWEBUI ################################################
redis-webui:
build:
context: ./redis-webui
environment:
- ADMIN_USER=${REDIS_WEBUI_USERNAME}
- ADMIN_PASS=${REDIS_WEBUI_PASSWORD}
@ -1018,7 +1209,6 @@ services:
mongo-webui:
build:
context: ./mongo-webui
restart: always
environment:
- ROOT_URL=${MONGO_WEBUI_ROOT_URL}
- MONGO_URL=${MONGO_WEBUI_MONGO_URL}
@ -1293,3 +1483,96 @@ services:
backend:
aliases:
- fetchmail
### TRAEFIK #########################################
traefik:
build:
context: ./traefik
command: --docker
volumes:
- /var/run/docker.sock:/var/run/docker.sock
ports:
- "${TRAEFIK_HOST_HTTP_PORT}:80"
- "${TRAEFIK_HOST_HTTPS_PORT}:443"
networks:
- frontend
- backend
labels:
- traefik.backend=traefik
- traefik.frontend.rule=Host:monitor.localhost
- traefik.port=8080
### MOSQUITTO Broker #########################################
mosquitto:
build:
context: ./mosquitto
volumes:
- ${DATA_PATH_HOST}/mosquitto/data:/mosquitto/data
ports:
- "${MOSQUITTO_PORT}:9001"
networks:
- frontend
- backend
### COUCHDB ###################################################
couchdb:
build:
context: ./couchdb
volumes:
- ${DATA_PATH_HOST}/couchdb/data:/opt/couchdb/data
ports:
- "${COUCHDB_PORT}:5984"
networks:
- backend
### Manticore Search ###########################################
manticore:
build:
context: ./manticore
volumes:
- ${MANTICORE_CONFIG_PATH}:/etc/sphinxsearch
- ${DATA_PATH_HOST}/manticore/data:/var/lib/manticore/data
- ${DATA_PATH_HOST}/manticore/log:/var/lib/manticore/log
ports:
- "${MANTICORE_API_PORT}:9312"
- "${MANTICORE_SPHINXQL_PORT}:9306"
- "${MANTICORE_HTTP_PORT}:9308"
networks:
- backend
### SONARQUBE ################################################
sonarqube:
build:
context: ./sonarqube
hostname: "${SONARQUBE_HOSTNAME}"
volumes:
- ${DATA_PATH_HOST}/sonarqube/conf:/opt/sonarqube/conf
- ${DATA_PATH_HOST}/sonarqube/data:/opt/sonarqube/data
- ${DATA_PATH_HOST}/sonarqube/logs:/opt/sonarqube/logs
- ${DATA_PATH_HOST}/sonarqube/extensions:/opt/sonarqube/extensions
- ${DATA_PATH_HOST}/sonarqube/plugins:/opt/sonarqube/lib/bundled-plugins
ports:
- ${SONARQUBE_PORT}:9000
depends_on:
- postgres
environment:
- sonar.jdbc.username=${SONARQUBE_POSTGRES_USER}
- sonar.jdbc.password=${SONARQUBE_POSTGRES_PASSWORD}
- sonar.jdbc.url=jdbc:postgresql://${SONARQUBE_POSTGRES_HOST}:5432/${SONARQUBE_POSTGRES_DB}
networks:
- backend
- frontend
### CONFLUENCE ################################################
confluence:
container_name: Confluence
image: atlassian/confluence-server:${CONFLUENCE_VERSION}
restart: always
ports:
- "${CONFLUENCE_HOST_HTTP_PORT}:8090"
networks:
- frontend
- backend
depends_on:
- postgres
volumes:
- ${DATA_PATH_HOST}/Confluence:/var/atlassian/application-data

View File

@ -1,3 +1,3 @@
FROM docker.elastic.co/elasticsearch/elasticsearch:6.2.3
FROM docker.elastic.co/elasticsearch/elasticsearch:7.1.1
EXPOSE 9200 9300

View File

@ -7,8 +7,11 @@
# Point to the path of your applications code on your host
APP_CODE_PATH_HOST=../
# Point to where the `APP_CODE_PATH_HOST` should be in the container. You may add flags to the path `:cached`, `:delegated`. When using Docker Sync add `:nocopy`
APP_CODE_PATH_CONTAINER=/var/www:cached
# Point to where the `APP_CODE_PATH_HOST` should be in the container
APP_CODE_PATH_CONTAINER=/var/www
# You may add flags to the path `:cached`, `:delegated`. When using Docker Sync add `:nocopy`
APP_CODE_CONTAINER_FLAG=:cached
# Choose storage path on your machine. For all storage systems
DATA_PATH_HOST=~/.laradock/data
@ -34,7 +37,7 @@ COMPOSE_PROJECT_NAME=laradock
### PHP Version ###########################################
# Select a PHP version of the Workspace and PHP-FPM containers (Does not apply to HHVM). Accepted values: 7.2 - 7.1 - 7.0 - 5.6
# Select a PHP version of the Workspace and PHP-FPM containers (Does not apply to HHVM). Accepted values: 7.3 - 7.2 - 7.1 - 7.0 - 5.6
PHP_VERSION=7.2
### Phalcon Version ###########################################
@ -79,6 +82,7 @@ DOCKER_SYNC_STRATEGY=native_osx
### WORKSPACE #############################################
WORKSPACE_COMPOSER_GLOBAL_INSTALL=true
WORKSPACE_COMPOSER_AUTH=false
WORKSPACE_COMPOSER_REPO_PACKAGIST=
WORKSPACE_INSTALL_NODE=true
WORKSPACE_NODE_VERSION=node
@ -88,6 +92,7 @@ WORKSPACE_YARN_VERSION=latest
WORKSPACE_INSTALL_NPM_GULP=true
WORKSPACE_INSTALL_NPM_BOWER=false
WORKSPACE_INSTALL_NPM_VUE_CLI=true
WORKSPACE_INSTALL_NPM_ANGULAR_CLI=false
WORKSPACE_INSTALL_PHPREDIS=true
WORKSPACE_INSTALL_WORKSPACE_SSH=false
WORKSPACE_INSTALL_SUBVERSION=false
@ -97,13 +102,16 @@ WORKSPACE_INSTALL_SSH2=false
WORKSPACE_INSTALL_LDAP=false
WORKSPACE_INSTALL_GMP=false
WORKSPACE_INSTALL_SOAP=false
WORKSPACE_INSTALL_XSL=false
WORKSPACE_INSTALL_IMAP=false
WORKSPACE_INSTALL_MONGO=false
WORKSPACE_INSTALL_AMQP=false
WORKSPACE_INSTALL_CASSANDRA=false
WORKSPACE_INSTALL_MSSQL=false
WORKSPACE_INSTALL_DRUSH=false
WORKSPACE_DRUSH_VERSION=8.1.17
WORKSPACE_INSTALL_DRUPAL_CONSOLE=false
WORKSPACE_INSTALL_WP_CLI=false
WORKSPACE_INSTALL_AEROSPIKE=false
WORKSPACE_INSTALL_V8JS=false
WORKSPACE_INSTALL_LARAVEL_ENVOY=false
@ -114,6 +122,7 @@ WORKSPACE_INSTALL_LINUXBREW=false
WORKSPACE_INSTALL_MC=false
WORKSPACE_INSTALL_SYMFONY=false
WORKSPACE_INSTALL_PYTHON=false
WORKSPACE_INSTALL_POWERLINE=false
WORKSPACE_INSTALL_IMAGE_OPTIMIZERS=false
WORKSPACE_INSTALL_IMAGEMAGICK=false
WORKSPACE_INSTALL_TERRAFORM=false
@ -121,18 +130,24 @@ WORKSPACE_INSTALL_DUSK_DEPS=false
WORKSPACE_INSTALL_PG_CLIENT=false
WORKSPACE_INSTALL_PHALCON=false
WORKSPACE_INSTALL_SWOOLE=false
WORKSPACE_INSTALL_TAINT=false
WORKSPACE_INSTALL_LIBPNG=false
WORKSPACE_INSTALL_IONCUBE=false
WORKSPACE_INSTALL_MYSQL_CLIENT=false
WORKSPACE_INSTALL_PING=false
WORKSPACE_INSTALL_SSHPASS=false
WORKSPACE_INSTALL_INOTIFY=false
WORKSPACE_INSTALL_FSWATCH=false
WORKSPACE_PUID=1000
WORKSPACE_PGID=1000
WORKSPACE_CHROME_DRIVER_VERSION=2.42
WORKSPACE_TIMEZONE=UTC
WORKSPACE_SSH_PORT=2222
WORKSPACE_INSTALL_FFMPEG=false
WORKSPACE_INSTALL_GNU_PARALLEL=false
### PHP_FPM ###############################################
PHP_FPM_INSTALL_ZIP_ARCHIVE=true
PHP_FPM_INSTALL_BCMATH=true
PHP_FPM_INSTALL_MYSQLI=true
PHP_FPM_INSTALL_INTL=true
@ -142,13 +157,16 @@ PHP_FPM_INSTALL_IMAGE_OPTIMIZERS=true
PHP_FPM_INSTALL_PHPREDIS=true
PHP_FPM_INSTALL_MEMCACHED=false
PHP_FPM_INSTALL_XDEBUG=false
PHP_FPM_INSTALL_XHPROF=false
PHP_FPM_INSTALL_PHPDBG=false
PHP_FPM_INSTALL_IMAP=false
PHP_FPM_INSTALL_MONGO=false
PHP_FPM_INSTALL_AMQP=false
PHP_FPM_INSTALL_CASSANDRA=false
PHP_FPM_INSTALL_MSSQL=false
PHP_FPM_INSTALL_SSH2=false
PHP_FPM_INSTALL_SOAP=false
PHP_FPM_INSTALL_XSL=false
PHP_FPM_INSTALL_GMP=false
PHP_FPM_INSTALL_EXIF=false
PHP_FPM_INSTALL_AEROSPIKE=false
@ -157,20 +175,42 @@ PHP_FPM_INSTALL_GHOSTSCRIPT=false
PHP_FPM_INSTALL_LDAP=false
PHP_FPM_INSTALL_PHALCON=false
PHP_FPM_INSTALL_SWOOLE=false
PHP_FPM_INSTALL_TAINT=false
PHP_FPM_INSTALL_PG_CLIENT=false
PHP_FPM_INSTALL_POSTGIS=false
PHP_FPM_INSTALL_PCNTL=false
PHP_FPM_INSTALL_CALENDAR=false
PHP_FPM_INSTALL_FAKETIME=false
PHP_FPM_INSTALL_IONCUBE=false
PHP_FPM_INSTALL_RDKAFKA=false
PHP_FPM_FAKETIME=-0
PHP_FPM_INSTALL_APCU=false
PHP_FPM_INSTALL_YAML=false
PHP_FPM_INSTALL_ADDITIONAL_LOCALES=false
PHP_FPM_INSTALL_MYSQL_CLIENT=false
PHP_FPM_INSTALL_PING=false
PHP_FPM_INSTALL_SSHPASS=false
PHP_FPM_FFMPEG=false
PHP_FPM_ADDITIONAL_LOCALES="es_ES.UTF-8 fr_FR.UTF-8"
### PHP_WORKER ############################################
PHP_WORKER_INSTALL_PGSQL=false
PHP_WORKER_INSTALL_BCMATH=false
PHP_WORKER_INSTALL_PHALCON=false
PHP_WORKER_INSTALL_SOAP=false
PHP_WORKER_INSTALL_ZIP_ARCHIVE=false
PHP_WORKER_INSTALL_MYSQL_CLIENT=false
PHP_WORKER_INSTALL_AMQP=false
PHP_WORKER_INSTALL_GHOSTSCRIPT=false
PHP_WORKER_INSTALL_SWOOLE=false
PHP_WORKER_INSTALL_TAINT=false
PHP_WORKER_INSTALL_FFMPEG=false
PHP_WORKER_INSTALL_GMP=false
PHP_WORKER_INSTALL_CASSANDRA=false
PHP_WORKER_PUID=1000
PHP_WORKER_PGID=1000
### NGINX #################################################
@ -182,6 +222,10 @@ NGINX_PHP_UPSTREAM_CONTAINER=php-fpm
NGINX_PHP_UPSTREAM_PORT=9000
NGINX_SSL_PATH=./nginx/ssl/
### LARAVEL_HORIZON ################################################
LARAVEL_HORIZON_INSTALL_SOCKETS=false
### APACHE ################################################
APACHE_HOST_HTTP_PORT=80
@ -207,6 +251,14 @@ MYSQL_ENTRYPOINT_INITDB=./mysql/docker-entrypoint-initdb.d
REDIS_PORT=6379
### REDIS CLUSTER #########################################
REDIS_CLUSTER_PORT_RANGE=7000-7005
### ZooKeeper #############################################
ZOOKEEPER_PORT=2181
### Percona ###############################################
PERCONA_DATABASE=homestead
@ -224,6 +276,7 @@ MSSQL_PORT=1433
### MARIADB ###############################################
MARIADB_VERSION=latest
MARIADB_DATABASE=default
MARIADB_USER=default
MARIADB_PASSWORD=secret
@ -330,10 +383,30 @@ JENKINS_HOST_HTTP_PORT=8090
JENKINS_HOST_SLAVE_AGENT_PORT=50000
JENKINS_HOME=./jenkins/jenkins_home
### CONFLUENCE ###############################################
CONFLUENCE_POSTGRES_INIT=true
CONFLUENCE_VERSION=6.13-ubuntu-18.04-adoptopenjdk8
CONFLUENCE_POSTGRES_DB=laradock_confluence
CONFLUENCE_POSTGRES_USER=laradock_confluence
CONFLUENCE_POSTGRES_PASSWORD=laradock_confluence
CONFLUENCE_HOST_HTTP_PORT=8090
### GRAFANA ###############################################
GRAFANA_PORT=3000
### GRAYLOG ###############################################
# password must be 16 characters long
GRAYLOG_PASSWORD=somesupersecretpassword
# sha256 representation of the password
GRAYLOG_SHA256_PASSWORD=b1cb6e31e172577918c9e7806c572b5ed8477d3f57aa737bee4b5b1db3696f09
GRAYLOG_PORT=9000
GRAYLOG_SYSLOG_TCP_PORT=514
GRAYLOG_SYSLOG_UDP_PORT=514
GRAYLOG_GELF_TCP_PORT=12201
GRAYLOG_GELF_UDP_PORT=12201
### BLACKFIRE #############################################
# Create an account on blackfire.io. Don't enable blackfire and xDebug at the same time. # visit https://blackfire.io/docs/24-days/06-installation#install-probe-debian for more info.
@ -349,13 +422,9 @@ AEROSPIKE_SERVICE_PORT=3000
AEROSPIKE_FABRIC_PORT=3001
AEROSPIKE_HEARTBEAT_PORT=3002
AEROSPIKE_INFO_PORT=3003
## Temp solution, this should be in the dockerfile
# for all versions "https://github.com/aerospike/aerospike-client-php/archive/master.tar.gz"
# for php 7.2 (using this branch until the support for 7.2 on master) "https://github.com/aerospike/aerospike-client-php/archive/7.2.0-release-candidate.tar.gz"
AEROSPIKE_PHP_REPOSITORY=https://github.com/aerospike/aerospike-client-php/archive/7.2.0-release-candidate.tar.gz
# for php 5.6
# AEROSPIKE_PHP_REPOSITORY=https://github.com/aerospike/aerospike-client-php5/archive/3.4.15.tar.gz
AEROSPIKE_STORAGE_GB=1
AEROSPIKE_MEM_GB=1
AEROSPIKE_NAMESPACE=test
### RETHINKDB #############################################
@ -492,14 +561,25 @@ SOLR_DATAIMPORTHANDLER_MYSQL=false
SOLR_DATAIMPORTHANDLER_MSSQL=false
### GITLAB ###############################################
GITLAB_POSTGRES_INIT=true
GITLAB_HOST_HTTP_PORT=8989
GITLAB_HOST_HTTPS_PORT=9898
GITLAB_HOST_SSH_PORT=2289
GITLAB_DOMAIN_NAME=http://localhost
GITLAB_ROOT_PASSWORD=laradock
GITLAB_HOST_LOG_PATH=./logs/gitlab
GITLAB_POSTGRES_HOST=postgres
GITLAB_POSTGRES_USER=laradock_gitlab
GITLAB_POSTGRES_PASSWORD=laradock_gitlab
GITLAB_POSTGRES_DB=laradock_gitlab
### GITLAB-RUNNER ###############################################
GITLAB_CI_SERVER_URL=http://localhost:8989
GITLAB_RUNNER_REGISTRATION_TOKEN=<my-registration-token>
GITLAB_REGISTER_NON_INTERACTIVE=true
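# A typical flow for the two GitLab sections above: start GitLab, copy a runner registration token
# from its admin area (e.g. http://localhost:8989/admin/runners, the exact path may vary by GitLab
# version), paste it into GITLAB_RUNNER_REGISTRATION_TOKEN, then start the runner:
#   docker-compose up -d gitlab
#   docker-compose up -d gitlab-runner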
### JUPYTERHUB ###############################################
JUPYTERHUB_POSTGRES_INIT=true
JUPYTERHUB_POSTGRES_HOST=postgres
JUPYTERHUB_POSTGRES_USER=laradock_jupyterhub
JUPYTERHUB_POSTGRES_PASSWORD=laradock_jupyterhub
@ -508,10 +588,10 @@ JUPYTERHUB_PORT=9991
JUPYTERHUB_OAUTH_CALLBACK_URL=http://laradock:9991/hub/oauth_callback
JUPYTERHUB_OAUTH_CLIENT_ID={GITHUB_CLIENT_ID}
JUPYTERHUB_OAUTH_CLIENT_SECRET={GITHUB_CLIENT_SECRET}
JUPYTERHUB_LOCAL_NOTEBOOK_IMAGE=laradock_jupyterhub-user
JUPYTERHUB_CUSTOM_CONFIG=./jupyterhub/jupyterhub_config.py
JUPYTERHUB_USER_DATA=/jupyterhub
JUPYTERHUB_USER_LIST=./jupyterhub/userlist
JUPYTERHUB_ENABLE_NVIDIA=false
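# Setting JUPYTERHUB_ENABLE_NVIDIA=true makes jupyterhub/jupyterhub_config.py pass the nvidia
# runtime to spawned user containers; rebuild and recreate the hub after changing it, e.g.:
#   docker-compose build jupyterhub && docker-compose up -d jupyterhub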
### IPYTHON ##################################################
LARADOCK_IPYTHON_CONTROLLER_IP=127.0.0.1
@ -519,7 +599,7 @@ LARADOCK_IPYTHON_CONTROLLER_IP=127.0.0.1
### NETDATA ###############################################
NETDATA_PORT=19999
### PHPREDISADMIN #########################################
### REDISWEBUI #########################################
REDIS_WEBUI_USERNAME=laradock
REDIS_WEBUI_PASSWORD=laradock
REDIS_WEBUI_CONNECT_HOST=redis
@ -614,3 +694,80 @@ MAILU_ADMIN=true
MAILU_WEBMAIL=rainloop
# Dav server implementation (value: radicale, none)
MAILU_WEBDAV=radicale
### TRAEFIK #################################################
TRAEFIK_HOST_HTTP_PORT=80
TRAEFIK_HOST_HTTPS_PORT=443
### MOSQUITTO #################################################
MOSQUITTO_PORT=9001
### COUCHDB ###################################################
COUCHDB_PORT=5984
### Manticore Search ##########################################
MANTICORE_CONFIG_PATH=./manticore/config
MANTICORE_API_PORT=9312
MANTICORE_SPHINXQL_PORT=9306
MANTICORE_HTTP_PORT=9308
### pgadmin ##################################################
# use this address http://ip6-localhost:5050
PGADMIN_PORT=5050
PGADMIN_DEFAULT_EMAIL=pgadmin4@pgadmin.org
PGADMIN_DEFAULT_PASSWORD=admin
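# Example: start pgAdmin alongside PostgreSQL, open http://ip6-localhost:5050 and sign in with
# PGADMIN_DEFAULT_EMAIL / PGADMIN_DEFAULT_PASSWORD:
#   docker-compose up -d postgres pgadmin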
### SONARQUBE ################################################
## docker-compose up -d sonarqube
## (If you encounter a database error)
## docker-compose exec --user=root postgres
## source docker-entrypoint-initdb.d/init_sonarqube_db.sh
## (If you encounter logs error)
## docker-compose run --user=root --rm sonarqube chown sonarqube:sonarqube /opt/sonarqube/logs
SONARQUBE_HOSTNAME=sonar.example.com
SONARQUBE_PORT=9000
SONARQUBE_POSTGRES_INIT=true
SONARQUBE_POSTGRES_HOST=postgres
SONARQUBE_POSTGRES_DB=sonar
SONARQUBE_POSTGRES_USER=sonar
SONARQUBE_POSTGRES_PASSWORD=sonarPass
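# The database-error steps above can also be run non-interactively, assuming the init script is
# mounted at the postgres image's default init path:
#   docker-compose exec --user=root postgres sh -c ". /docker-entrypoint-initdb.d/init_sonarqube_db.sh"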
### CASSANDRA ################################################
# Cassandra Version, supported tags can be found at https://hub.docker.com/r/bitnami/cassandra/
CASSANDRA_VERSION=latest
# Inter-node cluster communication port. Default: 7000
CASSANDRA_TRANSPORT_PORT_NUMBER=7000
# JMX connections port. Default: 7199
CASSANDRA_JMX_PORT_NUMBER=7199
# Client port. Default: 9042.
CASSANDRA_CQL_PORT_NUMBER=9042
# Cassandra user name. Default: cassandra
CASSANDRA_USER=cassandra
# Password seeder will change the Cassandra default credentials at initialization. In clusters, only one node should be marked as password seeder. Default: no
CASSANDRA_PASSWORD_SEEDER=no
# Cassandra user password. Default: cassandra
CASSANDRA_PASSWORD=cassandra
# Number of tokens for the node. Default: 256.
CASSANDRA_NUM_TOKENS=256
# Hostname used to configure Cassandra. It can be either an IP or a domain. If left empty, it will be resolved to the machine IP.
CASSANDRA_HOST=
# Cluster name to configure Cassandra. Default: My Cluster
CASSANDRA_CLUSTER_NAME="My Cluster"
# Hosts that will act as Cassandra seeds. No default.
CASSANDRA_SEEDS=
# Snitch name (which determines which data centers and racks nodes belong to). Default: SimpleSnitch
CASSANDRA_ENDPOINT_SNITCH=SimpleSnitch
# Enable the thrift RPC endpoint. Default: true
CASSANDRA_ENABLE_RPC=true
# Datacenter name for the cluster. Ignored in SimpleSnitch endpoint snitch. Default: dc1.
CASSANDRA_DATACENTER=dc1
# Rack name for the cluster. Ignored in SimpleSnitch endpoint snitch. Default: rack1.
CASSANDRA_RACK=rack1
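# Quick connectivity check once the service is up (cqlsh is bundled with the bitnami/cassandra
# image and the credentials default to cassandra/cassandra as configured above):
#   docker-compose up -d cassandra
#   docker-compose exec cassandra cqlsh -u cassandra -p cassandra -e "DESCRIBE KEYSPACES;"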

3
graylog/Dockerfile Normal file

@ -0,0 +1,3 @@
FROM graylog/graylog:3.0
EXPOSE 9000
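# Graylog is wired to the mongo and elasticsearch services of this repo (see mongodb_uri and
# elasticsearch_hosts in graylog/config/graylog.conf), so start them together, e.g.:
#   docker-compose up -d mongo elasticsearch graylog
# then open http://localhost:9000 (GRAYLOG_PORT) and log in with admin / admin unless
# root_password_sha2 has been changed.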

481
graylog/config/graylog.conf Normal file

@ -0,0 +1,481 @@
############################
# GRAYLOG CONFIGURATION FILE
############################
#
# This is the Graylog configuration file. The file has to use ISO 8859-1/Latin-1 character encoding.
# Characters that cannot be directly represented in this encoding can be written using Unicode escapes
# as defined in https://docs.oracle.com/javase/specs/jls/se8/html/jls-3.html#jls-3.3, using the \u prefix.
# For example, \u002c.
#
# * Entries are generally expected to be a single line, in one of the following forms:
#
# propertyName=propertyValue
# propertyName:propertyValue
#
# * White space that appears between the property name and property value is ignored,
# so the following are equivalent:
#
# name=Stephen
# name = Stephen
#
# * White space at the beginning of the line is also ignored.
#
# * Lines that start with the comment characters ! or # are ignored. Blank lines are also ignored.
#
# * The property value is generally terminated by the end of the line. White space following the
# property value is not ignored, and is treated as part of the property value.
#
# * A property value can span several lines if each line is terminated by a backslash (\) character.
# For example:
#
# targetCities=\
# Detroit,\
# Chicago,\
# Los Angeles
#
# This is equivalent to targetCities=Detroit,Chicago,Los Angeles (white space at the beginning of lines is ignored).
#
# * The characters newline, carriage return, and tab can be inserted with characters \n, \r, and \t, respectively.
#
# * The backslash character must be escaped as a double backslash. For example:
#
# path=c:\\docs\\doc1
#
# If you are running more than one instance of Graylog server you have to select one of these
# instances as master. The master will perform some periodical tasks that non-masters won't perform.
is_master = true
# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting Graylog server from init scripts or similar.
node_id_file = /usr/share/graylog/data/config/node-id
# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
# Generate one by using for example: pwgen -N 1 -s 96
password_secret = replacethiswithyourownsecret!
# The default root user is named 'admin'
#root_username = admin
# You MUST specify a hash password for the root user (which you only need to initially set up the
# system and in case you lose connectivity to your authentication backend)
# This password cannot be changed using the API or via the web interface. If you need to change it,
# modify it in this file.
# Create one by using for example: echo -n yourpassword | shasum -a 256
# and put the resulting hash value into the following line
# Default password: admin
# CHANGE THIS!
root_password_sha2 = 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
# The email address of the root user.
# Default is empty
#root_email = ""
# The time zone setting of the root user. See http://www.joda.org/joda-time/timezones.html for a list of valid time zones.
# Default is UTC
#root_timezone = UTC
# Set plugin directory here (relative or absolute)
plugin_dir = /usr/share/graylog/plugin
###############
# HTTP settings
###############
#### HTTP bind address
#
# The network interface used by the Graylog HTTP interface.
#
# This network interface must be accessible by all Graylog nodes in the cluster and by all clients
# using the Graylog web interface.
#
# If the port is omitted, Graylog will use port 9000 by default.
#
# Default: 127.0.0.1:9000
#http_bind_address = 127.0.0.1:9000
#http_bind_address = [2001:db8::1]:9000
http_bind_address = 0.0.0.0:9000
#### HTTP publish URI
#
# The HTTP URI of this Graylog node which is used to communicate with the other Graylog nodes in the cluster and by all
# clients using the Graylog web interface.
#
# The URI will be published in the cluster discovery APIs, so that other Graylog nodes will be able to find and connect to this Graylog node.
#
# This configuration setting has to be used if this Graylog node is available on another network interface than $http_bind_address,
# for example if the machine has multiple network interfaces or is behind a NAT gateway.
#
# If $http_bind_address contains a wildcard IPv4 address (0.0.0.0), the first non-loopback IPv4 address of this machine will be used.
# This configuration setting *must not* contain a wildcard address!
#
# Default: http://$http_bind_address/
#http_publish_uri = http://192.168.1.1:9000/
#### External Graylog URI
#
# The public URI of Graylog which will be used by the Graylog web interface to communicate with the Graylog REST API.
#
# The external Graylog URI usually has to be specified, if Graylog is running behind a reverse proxy or load-balancer
# and it will be used to generate URLs addressing entities in the Graylog REST API (see $http_bind_address).
#
# When using Graylog Collector, this URI will be used to receive heartbeat messages and must be accessible for all collectors.
#
# This setting can be overridden on a per-request basis with the "X-Graylog-Server-URL" HTTP request header.
#
# Default: $http_publish_uri
#http_external_uri =
#### Enable CORS headers for HTTP interface
#
# This is necessary for JS-clients accessing the server directly.
# If these are disabled, modern browsers will not be able to retrieve resources from the server.
# This is enabled by default. Uncomment the next line to disable it.
#http_enable_cors = false
#### Enable GZIP support for HTTP interface
#
# This compresses API responses and therefore helps to reduce
# overall round trip times. This is enabled by default. Uncomment the next line to disable it.
#http_enable_gzip = false
# The maximum size of the HTTP request headers in bytes.
#http_max_header_size = 8192
# The size of the thread pool used exclusively for serving the HTTP interface.
#http_thread_pool_size = 16
################
# HTTPS settings
################
#### Enable HTTPS support for the HTTP interface
#
# This secures the communication with the HTTP interface with TLS to prevent request forgery and eavesdropping.
#
# Default: false
#http_enable_tls = true
# The X.509 certificate chain file in PEM format to use for securing the HTTP interface.
#http_tls_cert_file = /path/to/graylog.crt
# The PKCS#8 private key file in PEM format to use for securing the HTTP interface.
#http_tls_key_file = /path/to/graylog.key
# The password to unlock the private key used for securing the HTTP interface.
#http_tls_key_password = secret
# Comma separated list of trusted proxies that are allowed to set the client address with X-Forwarded-For
# header. May be subnets, or hosts.
#trusted_proxies = 127.0.0.1/32, 0:0:0:0:0:0:0:1/128
# List of Elasticsearch hosts Graylog should connect to.
# Need to be specified as a comma-separated list of valid URIs for the http ports of your elasticsearch nodes.
# If one or more of your elasticsearch hosts require authentication, include the credentials in each node URI that
# requires authentication.
#
# Default: http://127.0.0.1:9200
elasticsearch_hosts = http://elasticsearch:9200
# Maximum amount of time to wait for a successful connection to the Elasticsearch HTTP port.
#
# Default: 10 Seconds
#elasticsearch_connect_timeout = 10s
# Maximum amount of time to wait for reading back a response from an Elasticsearch server.
#
# Default: 60 seconds
#elasticsearch_socket_timeout = 60s
# Maximum idle time for an Elasticsearch connection. If this is exceeded, this connection will
# be torn down.
#
# Default: inf
#elasticsearch_idle_timeout = -1s
# Maximum number of total connections to Elasticsearch.
#
# Default: 20
#elasticsearch_max_total_connections = 20
# Maximum number of total connections per Elasticsearch route (normally this means per
# elasticsearch server).
#
# Default: 2
#elasticsearch_max_total_connections_per_route = 2
# Maximum number of times Graylog will retry failed requests to Elasticsearch.
#
# Default: 2
#elasticsearch_max_retries = 2
# Enable automatic Elasticsearch node discovery through Nodes Info,
# see https://www.elastic.co/guide/en/elasticsearch/reference/5.4/cluster-nodes-info.html
#
# WARNING: Automatic node discovery does not work if Elasticsearch requires authentication, e. g. with Shield.
#
# Default: false
#elasticsearch_discovery_enabled = true
# Filter for including/excluding Elasticsearch nodes in discovery according to their custom attributes,
# see https://www.elastic.co/guide/en/elasticsearch/reference/5.4/cluster.html#cluster-nodes
#
# Default: empty
#elasticsearch_discovery_filter = rack:42
# Frequency of the Elasticsearch node discovery.
#
# Default: 30s
# elasticsearch_discovery_frequency = 30s
# Enable payload compression for Elasticsearch requests.
#
# Default: false
#elasticsearch_compression_enabled = true
# Disable checking the version of Elasticsearch for being compatible with this Graylog release.
# WARNING: Using Graylog with unsupported and untested versions of Elasticsearch may lead to data loss!
#elasticsearch_disable_version_check = true
# Disable message retention on this node, i. e. disable Elasticsearch index rotation.
#no_retention = false
# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and should only
# be enabled with care. See also: http://docs.graylog.org/en/2.1/pages/queries.html
allow_leading_wildcard_searches = false
# Do you want to allow searches to be highlighted? Depending on the size of your messages this can be memory hungry and
# should only be enabled after making sure your Elasticsearch cluster has enough memory.
allow_highlighting = false
# Global request timeout for Elasticsearch requests (e. g. during search, index creation, or index time-range
# calculations) based on a best-effort to restrict the runtime of Elasticsearch operations.
# Default: 1m
#elasticsearch_request_timeout = 1m
# Global timeout for index optimization (force merge) requests.
# Default: 1h
#elasticsearch_index_optimization_timeout = 1h
# Maximum number of concurrently running index optimization (force merge) jobs.
# If you are using lots of different index sets, you might want to increase that number.
# Default: 20
#elasticsearch_index_optimization_jobs = 20
# Time interval for index range information cleanups. This setting defines how often stale index range information
# is being purged from the database.
# Default: 1h
#index_ranges_cleanup_interval = 1h
# Batch size for the Elasticsearch output. This is the maximum (!) number of messages the Elasticsearch output
# module will get at once and write to Elasticsearch in a batch call. If the configured batch size has not been
# reached within output_flush_interval seconds, everything that is available will be flushed at once. Remember
# that every outputbuffer processor manages its own batch and performs its own batch write calls.
# ("outputbuffer_processors" variable)
output_batch_size = 500
# Flush interval (in seconds) for the Elasticsearch output. This is the maximum amount of time between two
# batches of messages written to Elasticsearch. It is only effective at all if your minimum number of messages
# for this time period is less than output_batch_size * outputbuffer_processors.
output_flush_interval = 1
# As stream outputs are loaded only on demand, an output which is failing to initialize will be tried over and
# over again. To prevent this, the following configuration options define after how many faults an output will
# not be tried again for an also configurable amount of seconds.
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
# The number of parallel running processors.
# Raise this number if your buffers are filling up.
processbuffer_processors = 5
outputbuffer_processors = 3
# The following settings (outputbuffer_processor_*) configure the thread pools backing each output buffer processor.
# See https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html for technical details
# When the number of threads is greater than the core (see outputbuffer_processor_threads_core_pool_size),
# this is the maximum time in milliseconds that excess idle threads will wait for new tasks before terminating.
# Default: 5000
#outputbuffer_processor_keep_alive_time = 5000
# The number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set
# Default: 3
#outputbuffer_processor_threads_core_pool_size = 3
# The maximum number of threads to allow in the pool
# Default: 30
#outputbuffer_processor_threads_max_pool_size = 30
# UDP receive buffer size for all message inputs (e. g. SyslogUDPInput).
#udp_recvbuffer_sizes = 1048576
# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
# Possible types:
# - yielding
# Compromise between performance and CPU usage.
# - sleeping
# Compromise between performance and CPU usage. Latency spikes can occur after quiet periods.
# - blocking
# High throughput, low latency, higher CPU usage.
# - busy_spinning
# Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.
processor_wait_strategy = blocking
# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
# Must be a power of 2. (512, 1024, 2048, ...)
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
# Enable the disk based message journal.
message_journal_enabled = true
# The directory which will be used to store the message journal. The directory must be exclusively used by Graylog and
# must not contain any other files than the ones created by Graylog itself.
#
# ATTENTION:
# If you create a separate partition for the journal files and use a file system that creates directories like 'lost+found'
# in the root directory, you need to create a sub directory for your journal.
# Otherwise Graylog will log an error message that the journal is corrupt and Graylog will not start.
message_journal_dir = /usr/share/graylog/data/journal
# The journal holds messages before they can be written to Elasticsearch,
# for a maximum of 12 hours or 5 GB, whichever happens first.
# During normal operation the journal will be smaller.
#message_journal_max_age = 12h
#message_journal_max_size = 5gb
#message_journal_flush_age = 1m
#message_journal_flush_interval = 1000000
#message_journal_segment_age = 1h
#message_journal_segment_size = 100mb
# Number of threads used exclusively for dispatching internal events. Default is 2.
#async_eventbus_processors = 2
# How many seconds to wait between marking node as DEAD for possible load balancers and starting the actual
# shutdown process. Set to 0 if you have no status checking load balancers in front.
lb_recognition_period_seconds = 3
# Journal usage percentage that triggers requesting throttling for this server node from load balancers. The feature is
# disabled if not set.
#lb_throttle_threshold_percentage = 95
# Every message is matched against the configured streams and it can happen that a stream contains rules which
# take an unusual amount of time to run, for example if they use regular expressions that perform excessive backtracking.
# This will impact the processing of the entire server. To keep such misbehaving stream rules from impacting other
# streams, Graylog limits the execution time for each stream.
# The default values are noted below, the timeout is in milliseconds.
# If the stream matching for one stream took longer than the timeout value, and this happened more than "max_faults" times
# that stream is disabled and a notification is shown in the web interface.
#stream_processing_timeout = 2000
#stream_processing_max_faults = 3
# Length of the interval in seconds in which the alert conditions for all streams should be checked
# and alarms are being sent.
#alert_check_interval = 60
# Since 0.21 the Graylog server supports pluggable output modules. This means a single message can be written to multiple
# outputs. The next setting defines the timeout for a single output module, including the default output module where all
# messages end up.
#
# Time in milliseconds to wait for all message outputs to finish writing a single message.
#output_module_timeout = 10000
# Time in milliseconds after which a detected stale master node is being rechecked on startup.
#stale_master_timeout = 2000
# Time in milliseconds which Graylog is waiting for all threads to stop on shutdown.
#shutdown_timeout = 30000
# MongoDB connection string
# See https://docs.mongodb.com/manual/reference/connection-string/ for details
mongodb_uri = mongodb://mongo/graylog
# Authenticate against the MongoDB server
#mongodb_uri = mongodb://grayloguser:secret@mongo:27017/graylog
# Use a replica set instead of a single host
#mongodb_uri = mongodb://grayloguser:secret@mongo:27017,mongo:27018,mongo:27019/graylog
# Increase this value according to the maximum connections your MongoDB server can handle from a single client
# if you encounter MongoDB connection problems.
mongodb_max_connections = 100
# Number of threads allowed to be blocked by MongoDB connections multiplier. Default: 5
# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5,
# then 500 threads can block. More than that and an exception will be thrown.
# http://api.mongodb.com/java/current/com/mongodb/MongoOptions.html#threadsAllowedToBlockForConnectionMultiplier
mongodb_threads_allowed_to_block_multiplier = 5
# Drools Rule File (Use to rewrite incoming log messages)
# See: http://docs.graylog.org/en/2.1/pages/drools.html
#rules_file = /etc/graylog/server/rules.drl
# Email transport
#transport_email_enabled = false
#transport_email_hostname = mail.example.com
#transport_email_port = 587
#transport_email_use_auth = true
#transport_email_use_tls = true
#transport_email_use_ssl = true
#transport_email_auth_username = you@example.com
#transport_email_auth_password = secret
#transport_email_subject_prefix = [graylog]
#transport_email_from_email = graylog@example.com
# Specify and uncomment this if you want to include links to the stream in your stream alert mails.
# This should define the fully qualified base url to your web interface exactly the same way as it is accessed by your users.
#transport_email_web_interface_url = https://graylog.example.com
# The default connect timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 5s
#http_connect_timeout = 5s
# The default read timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 10s
#http_read_timeout = 10s
# The default write timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 10s
#http_write_timeout = 10s
# HTTP proxy for outgoing HTTP connections
#http_proxy_uri =
# The threshold of the garbage collection runs. If GC runs take longer than this threshold, a system notification
# will be generated to warn the administrator about possible problems with the system. Default is 1 second.
#gc_warning_threshold = 1s
# Connection timeout for a configured LDAP server (e. g. ActiveDirectory) in milliseconds.
#ldap_connection_timeout = 2000
# Disable the use of SIGAR for collecting system stats
#disable_sigar = false
# The default cache time for dashboard widgets. (Default: 10 seconds, minimum: 1 second)
#dashboard_widget_default_cache_time = 10s
# Automatically load content packs in "content_packs_dir" on the first start of Graylog.
content_packs_loader_enabled = true
# The directory which contains content packs which should be loaded on the first start of Graylog.
content_packs_dir = /usr/share/graylog/data/contentpacks
# A comma-separated list of content packs (files in "content_packs_dir") which should be applied on
# the first start of Graylog.
# Default: empty
content_packs_auto_load = grok-patterns.json
# For some cluster-related REST requests, the node must query all other nodes in the cluster. This is the maximum number
# of threads available for this. Increase it, if '/cluster/*' requests take long to complete.
# Should be http_thread_pool_size * average_cluster_size if you have a high number of concurrent users.
proxied_requests_thread_pool_size = 32

35
graylog/config/log4j2.xml Normal file

@ -0,0 +1,35 @@
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="org.graylog2.log4j" shutdownHook="disable">
<Appenders>
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout pattern="%d %-5p: %c - %m%n"/>
</Console>
<!-- Internal Graylog log appender. Please do not disable. This makes internal log messages available via REST calls. -->
<Memory name="graylog-internal-logs" bufferSize="500"/>
</Appenders>
<Loggers>
<!-- Application Loggers -->
<Logger name="org.graylog2" level="info"/>
<Logger name="com.github.joschi.jadconfig" level="warn"/>
<!-- this emits a harmless warning for ActiveDirectory every time which we can't work around :( -->
<Logger name="org.apache.directory.api.ldap.model.message.BindRequestImpl" level="error"/>
<!-- Prevent DEBUG message about Lucene Expressions not found. -->
<Logger name="org.elasticsearch.script" level="warn"/>
<!-- Disable messages from the version check -->
<Logger name="org.graylog2.periodical.VersionCheckThread" level="off"/>
<!-- Suppress crazy byte array dump of Drools -->
<Logger name="org.drools.compiler.kie.builder.impl.KieRepositoryImpl" level="warn"/>
<!-- Silence chatty natty -->
<Logger name="com.joestelmach.natty.Parser" level="warn"/>
<!-- Silence Kafka log chatter -->
<Logger name="kafka.log.Log" level="warn"/>
<Logger name="kafka.log.OffsetIndex" level="warn"/>
<!-- Silence useless session validation messages -->
<Logger name="org.apache.shiro.session.mgt.AbstractValidatingSessionManager" level="warn"/>
<Root level="warn">
<AppenderRef ref="STDOUT"/>
<AppenderRef ref="graylog-internal-logs"/>
</Root>
</Loggers>
</Configuration>


@ -2,4 +2,8 @@ FROM theiaide/theia
LABEL maintainer="ahkui <ahkui@outlook.com>"
USER root
RUN echo 'fs.inotify.max_user_watches=524288' >> /etc/sysctl.conf
USER theia


@ -4,7 +4,7 @@ LABEL maintainer="ahkui <ahkui@outlook.com>"
USER root
RUN apk add --no-cache build-base
RUN apk add --no-cache build-base zeromq-dev
RUN python -m pip --quiet --no-cache-dir install \
ipyparallel


@ -4,7 +4,7 @@ LABEL maintainer="ahkui <ahkui@outlook.com>"
USER root
RUN apk add --no-cache build-base
RUN apk add --no-cache build-base zeromq-dev
RUN python -m pip --quiet --no-cache-dir install \
ipyparallel \


@ -108,3 +108,7 @@ COPY install-plugins.sh /usr/local/bin/install-plugins.sh
#RUN chmod 644 /var/jenkins_home/.ssh/id_rsa.pub
## ssh-keyscan -H github.com >> ~/.ssh/known_hosts
## ssh-keyscan -H bitbucket.org >> ~/.ssh/known_hosts
# Fix docker permission denied error
USER root
RUN usermod -aG docker jenkins


@ -10,6 +10,7 @@ ENV JUPYTERHUB_OAUTH_CALLBACK_URL ${JUPYTERHUB_OAUTH_CALLBACK_URL}
ENV JUPYTERHUB_OAUTH_CLIENT_ID ${JUPYTERHUB_OAUTH_CLIENT_ID}
ENV JUPYTERHUB_OAUTH_CLIENT_SECRET ${JUPYTERHUB_OAUTH_CLIENT_SECRET}
ENV JUPYTERHUB_LOCAL_NOTEBOOK_IMAGE ${JUPYTERHUB_LOCAL_NOTEBOOK_IMAGE}
ENV JUPYTERHUB_ENABLE_NVIDIA ${JUPYTERHUB_ENABLE_NVIDIA}
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -


@ -6,6 +6,9 @@ import os
c = get_config()
# create system users that don't exist yet
c.LocalAuthenticator.create_system_users = True
def create_dir_hook(spawner):
username = spawner.user.name # get the username
volume_path = os.path.join('/user-data', username)
@ -45,8 +48,12 @@ network_name = os.environ.get('JUPYTERHUB_NETWORK_NAME','laradock_backend')
c.DockerSpawner.use_internal_ip = True
c.DockerSpawner.network_name = network_name
enable_nvidia = os.environ.get('JUPYTERHUB_ENABLE_NVIDIA','false')
# Pass the network name as argument to spawned containers
c.DockerSpawner.extra_host_config = { 'network_mode': network_name, 'runtime': 'nvidia' }
c.DockerSpawner.extra_host_config = { 'network_mode': network_name }
if 'true' == enable_nvidia:
c.DockerSpawner.extra_host_config = { 'network_mode': network_name, 'runtime': 'nvidia' }
pass
# c.DockerSpawner.extra_host_config = { 'network_mode': network_name, "devices":["/dev/nvidiactl","/dev/nvidia-uvm","/dev/nvidia0"] }
# Explicitly set notebook directory because we'll be mounting a host volume to
# it. Most jupyter/docker-stacks *-notebook images run the Notebook server as


@ -1,3 +1,3 @@
FROM docker.elastic.co/kibana/kibana:6.2.3
FROM docker.elastic.co/kibana/kibana:6.6.0
EXPOSE 5601


@ -19,4 +19,4 @@ RUN npm install
COPY laravel-echo-server.json /usr/src/app/laravel-echo-server.json
EXPOSE 3000
CMD [ "npm", "start" ]
CMD [ "npm", "start", "--force" ]


@ -4,7 +4,7 @@
"version": "0.0.1",
"license": "MIT",
"dependencies": {
"laravel-echo-server": "^1.2.8"
"laravel-echo-server": "^1.5.0"
},
"scripts": {
"start": "laravel-echo-server start"


@ -20,10 +20,11 @@ RUN apk --update add wget \
autoconf \
cyrus-sasl-dev \
libgsasl-dev \
supervisor
supervisor \
procps
RUN docker-php-ext-install mysqli mbstring pdo pdo_mysql tokenizer xml pcntl
RUN pecl channel-update pecl.php.net && pecl install memcached mcrypt-1.0.1 && docker-php-ext-enable memcached
RUN pecl channel-update pecl.php.net && pecl install memcached mcrypt-1.0.1 mongodb && docker-php-ext-enable memcached mongodb
#Install BCMath package:
ARG INSTALL_BCMATH=false
@ -31,6 +32,12 @@ RUN if [ ${INSTALL_BCMATH} = true ]; then \
docker-php-ext-install bcmath \
;fi
#Install Sockets package:
ARG INSTALL_SOCKETS=false
RUN if [ ${INSTALL_SOCKETS} = true ]; then \
docker-php-ext-install sockets \
;fi
# Install PostgreSQL drivers:
ARG INSTALL_PGSQL=false
RUN if [ ${INSTALL_PGSQL} = true ]; then \
@ -38,6 +45,28 @@ RUN if [ ${INSTALL_PGSQL} = true ]; then \
&& docker-php-ext-install pdo_pgsql \
;fi
# Install Cassandra drivers:
ARG INSTALL_CASSANDRA=false
RUN if [ ${INSTALL_CASSANDRA} = true ]; then \
apk --update add cassandra-cpp-driver \
;fi
WORKDIR /usr/src
RUN if [ ${INSTALL_CASSANDRA} = true ]; then \
git clone https://github.com/datastax/php-driver.git \
&& cd php-driver/ext \
&& phpize \
&& mkdir -p /usr/src/php-driver/build \
&& cd /usr/src/php-driver/build \
&& ../ext/configure > /dev/null \
&& make clean >/dev/null \
&& make >/dev/null 2>&1 \
&& make install \
&& docker-php-ext-enable cassandra \
;fi
###########################################################################
# PHP Memcached:
###########################################################################

10
logstash/Dockerfile Normal file

@ -0,0 +1,10 @@
FROM docker.elastic.co/logstash/logstash:6.4.2
USER root
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
RUN curl -L -o /usr/share/logstash/lib/mysql-connector-java-5.1.47.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.47/mysql-connector-java-5.1.47.jar
ADD ./pipeline/ /usr/share/logstash/pipeline/
ADD ./config/ /usr/share/logstash/config/
RUN logstash-plugin install logstash-input-jdbc
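# Once the container is running, the JDBC input plugin installed above can be verified with:
#   docker-compose exec logstash logstash-plugin list | grep jdbc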


@ -0,0 +1,5 @@
http.host: "0.0.0.0"
xpack.monitoring.enabled: false
config.reload.automatic: true
path.config: "/usr/share/logstash/pipeline"

5
manticore/Dockerfile Normal file

@ -0,0 +1,5 @@
FROM manticoresearch/manticore
EXPOSE 9306
EXPOSE 9308
EXPOSE 9312


@ -0,0 +1,25 @@
index testrt {
type = rt
rt_mem_limit = 128M
path = /var/lib/manticore/data/testrt
rt_field = title
rt_field = content
rt_attr_uint = gid
}
searchd {
listen = 9312
listen = 9308:http
listen = 9306:mysql41
log = /var/lib/manticore/log/searchd.log
# you can also send query_log to /dev/stdout to be shown in docker logs
query_log = /var/lib/manticore/log/query.log
read_timeout = 5
max_children = 30
pid_file = /var/run/searchd.pid
seamless_rotate = 1
preopen_indexes = 1
unlink_old = 1
binlog_path = /var/lib/manticore/data
}
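# The testrt index above can be exercised through the mysql41 listener (port 9306, published via
# MANTICORE_SPHINXQL_PORT) with any MySQL-compatible client, e.g.:
#   mysql -h 127.0.0.1 -P 9306 -e "INSERT INTO testrt (id, title, content, gid) VALUES (1, 'hello', 'hello manticore', 1); SELECT * FROM testrt WHERE MATCH('hello');"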


@ -1,7 +1,15 @@
FROM mariadb:latest
ARG MARIADB_VERSION=latest
FROM mariadb:${MARIADB_VERSION}
LABEL maintainer="Mahmoud Zalt <mahmoud@zalt.me>"
#####################################
# Set Timezone
#####################################
ARG TZ=UTC
ENV TZ ${TZ}
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone && chown -R mysql:root /var/lib/mysql/
COPY my.cnf /etc/mysql/conf.d/my.cnf
CMD ["mysqld"]
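# MARIADB_VERSION is now a build argument, so a specific release can be pinned from the .env file
# (e.g. MARIADB_VERSION=10.3) and applied with:
#   docker-compose build mariadb && docker-compose up -d mariadb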

5
mosquitto/Dockerfile Normal file

@ -0,0 +1,5 @@
FROM eclipse-mosquitto:latest
LABEL maintainer="Luis Coutinho <luis@luiscoutinho.pt>"
COPY mosquitto.conf /mosquitto/config/

838
mosquitto/mosquitto.conf Normal file

@ -0,0 +1,838 @@
# Config file for mosquitto
#
# See mosquitto.conf(5) for more information.
#
# Default values are shown, uncomment to change.
#
# Use the # character to indicate a comment, but only if it is the
# very first character on the line.
# =================================================================
# General configuration
# =================================================================
# Time in seconds to wait before resending an outgoing QoS=1 or
# QoS=2 message.
#retry_interval 20
# Time in seconds between updates of the $SYS tree.
# Set to 0 to disable the publishing of the $SYS tree.
#sys_interval 10
# Time in seconds between cleaning the internal message store of
# unreferenced messages. Lower values will result in lower memory
# usage but more processor time, higher values will have the
# opposite effect.
# Setting a value of 0 means the unreferenced messages will be
# disposed of as quickly as possible.
#store_clean_interval 10
# Write process id to a file. Default is a blank string which means
# a pid file shouldn't be written.
# This should be set to /var/run/mosquitto.pid if mosquitto is
# being run automatically on boot with an init script and
# start-stop-daemon or similar.
#pid_file
# When run as root, drop privileges to this user and its primary
# group.
# Leave blank to stay as root, but this is not recommended.
# If run as a non-root user, this setting has no effect.
# Note that on Windows this has no effect and so mosquitto should
# be started by the user you wish it to run as.
#user mosquitto
# The maximum number of QoS 1 and 2 messages currently inflight per
# client.
# This includes messages that are partway through handshakes and
# those that are being retried. Defaults to 20. Set to 0 for no
# maximum. Setting to 1 will guarantee in-order delivery of QoS 1
# and 2 messages.
#max_inflight_messages 20
# The maximum number of QoS 1 and 2 messages to hold in a queue
# above those that are currently in-flight. Defaults to 100. Set
# to 0 for no maximum (not recommended).
# See also queue_qos0_messages.
#max_queued_messages 100
# Set to true to queue messages with QoS 0 when a persistent client is
# disconnected. These messages are included in the limit imposed by
# max_queued_messages.
# Defaults to false.
# This is a non-standard option for the MQTT v3.1 spec but is allowed in
# v3.1.1.
#queue_qos0_messages false
# This option sets the maximum publish payload size that the broker will allow.
# Received messages that exceed this size will not be accepted by the broker.
# The default value is 0, which means that all valid MQTT messages are
# accepted. MQTT imposes a maximum payload size of 268435455 bytes.
#message_size_limit 0
# This option controls whether a client is allowed to connect with a zero
# length client id or not. This option only affects clients using MQTT v3.1.1
# and later. If set to false, clients connecting with a zero length client id
# are disconnected. If set to true, clients will be allocated a client id by
# the broker. This means it is only useful for clients with clean session set
# to true.
#allow_zero_length_clientid true
# If allow_zero_length_clientid is true, this option allows you to set a prefix
# to automatically generated client ids to aid visibility in logs.
#auto_id_prefix
# This option allows persistent clients (those with clean session set to false)
# to be removed if they do not reconnect within a certain time frame.
#
# This is a non-standard option in MQTT V3.1 but allowed in MQTT v3.1.1.
#
# Badly designed clients may set clean session to false whilst using a randomly
# generated client id. This leads to persistent clients that will never
# reconnect. This option allows these clients to be removed.
#
# The expiration period should be an integer followed by one of h d w m y for
# hour, day, week, month and year respectively. For example
#
# persistent_client_expiration 2m
# persistent_client_expiration 14d
# persistent_client_expiration 1y
#
# The default if not set is to never expire persistent clients.
#persistent_client_expiration
# If a client is subscribed to multiple subscriptions that overlap, e.g. foo/#
# and foo/+/baz , then MQTT expects that when the broker receives a message on
# a topic that matches both subscriptions, such as foo/bar/baz, then the client
# should only receive the message once.
# Mosquitto keeps track of which clients a message has been sent to in order to
# meet this requirement. The allow_duplicate_messages option allows this
# behaviour to be disabled, which may be useful if you have a large number of
# clients subscribed to the same set of topics and are very concerned about
# minimising memory usage.
# It can be safely set to true if you know in advance that your clients will
# never have overlapping subscriptions, otherwise your clients must be able to
# correctly deal with duplicate messages even when they have QoS=2.
#allow_duplicate_messages false
# The MQTT specification requires that the QoS of a message delivered to a
# subscriber is never upgraded to match the QoS of the subscription. Enabling
# this option changes this behaviour. If upgrade_outgoing_qos is set true,
# messages sent to a subscriber will always match the QoS of its subscription.
# This is a non-standard option explicitly disallowed by the spec.
#upgrade_outgoing_qos false
# =================================================================
# Default listener
# =================================================================
# IP address/hostname to bind the default listener to. If not
# given, the default listener will not be bound to a specific
# address and so will be accessible to all network interfaces.
# bind_address ip-address/host name
#bind_address
# Port to use for the default listener.
port 9001
# The maximum number of client connections to allow. This is
# a per listener setting.
# Default is -1, which means unlimited connections.
# Note that other process limits mean that unlimited connections
# are not really possible. Typically the default maximum number of
# connections possible is around 1024.
#max_connections -1
# Choose the protocol to use when listening.
# This can be either mqtt or websockets.
# Websockets support is currently disabled by default at compile time.
# Certificate based TLS may be used with websockets, except that
# only the cafile, certfile, keyfile and ciphers options are supported.
protocol websockets
# When a listener is using the websockets protocol, it is possible to serve
# http data as well. Set http_dir to a directory which contains the files you
# wish to serve. If this option is not specified, then no normal http
# connections will be possible.
#http_dir
# Set use_username_as_clientid to true to replace the clientid that a client
# connected with with its username. This allows authentication to be tied to
# the clientid, which means that it is possible to prevent one client
# disconnecting another by using the same clientid.
# If a client connects with no username it will be disconnected as not
# authorised when this option is set to true.
# Do not use in conjunction with clientid_prefixes.
# See also use_identity_as_username.
#use_username_as_clientid
# -----------------------------------------------------------------
# Certificate based SSL/TLS support
# -----------------------------------------------------------------
# The following options can be used to enable SSL/TLS support for
# this listener. Note that the recommended port for MQTT over TLS
# is 8883, but this must be set manually.
#
# See also the mosquitto-tls man page.
# At least one of cafile or capath must be defined. They both
# define methods of accessing the PEM encoded Certificate
# Authority certificates that have signed your server certificate
# and that you wish to trust.
# cafile defines the path to a file containing the CA certificates.
# capath defines a directory that will be searched for files
# containing the CA certificates. For capath to work correctly, the
# certificate files must have ".crt" as the file ending and you must run
# "c_rehash <path to capath>" each time you add/remove a certificate.
#cafile
#capath
# Path to the PEM encoded server certificate.
#certfile
# Path to the PEM encoded keyfile.
#keyfile
# This option defines the version of the TLS protocol to use for this listener.
# The default value allows v1.2, v1.1 and v1.0, if they are all supported by
# the version of openssl that the broker was compiled against. For openssl >=
# 1.0.1 the valid values are tlsv1.2 tlsv1.1 and tlsv1. For openssl < 1.0.1 the
# valid values are tlsv1.
#tls_version
# By default a TLS enabled listener will operate in a similar fashion to a
# https enabled web server, in that the server has a certificate signed by a CA
# and the client will verify that it is a trusted certificate. The overall aim
# is encryption of the network traffic. By setting require_certificate to true,
# the client must provide a valid certificate in order for the network
# connection to proceed. This allows access to the broker to be controlled
# outside of the mechanisms provided by MQTT.
#require_certificate false
# If require_certificate is true, you may set use_identity_as_username to true
# to use the CN value from the client certificate as a username. If this is
# true, the password_file option will not be used for this listener.
#use_identity_as_username false
# If you have require_certificate set to true, you can create a certificate
# revocation list file to revoke access to particular client certificates. If
# you have done this, use crlfile to point to the PEM encoded revocation file.
#crlfile
# If you wish to control which encryption ciphers are used, use the ciphers
# option. The list of available ciphers can be obtained using the "openssl
# ciphers" command and should be provided in the same format as the output of
# that command.
# If unset defaults to DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2:@STRENGTH
#ciphers DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2:@STRENGTH
# -----------------------------------------------------------------
# Pre-shared-key based SSL/TLS support
# -----------------------------------------------------------------
# The following options can be used to enable PSK based SSL/TLS support for
# this listener. Note that the recommended port for MQTT over TLS is 8883, but
# this must be set manually.
#
# See also the mosquitto-tls man page and the "Certificate based SSL/TLS
# support" section. Only one of certificate or PSK encryption support can be
# enabled for any listener.
# The psk_hint option enables pre-shared-key support for this listener and also
# acts as an identifier for this listener. The hint is sent to clients and may
# be used locally to aid authentication. The hint is a free form string that
# doesn't have much meaning in itself, so feel free to be creative.
# If this option is provided, see psk_file to define the pre-shared keys to be
# used or create a security plugin to handle them.
#psk_hint
# Set use_identity_as_username to have the psk identity sent by the client used
# as its username. Authentication will be carried out using the PSK rather than
# the MQTT username/password and so password_file will not be used for this
# listener.
#use_identity_as_username false
# When using PSK, the encryption ciphers used will be chosen from the list of
# available PSK ciphers. If you want to control which ciphers are available,
# use the "ciphers" option. The list of available ciphers can be obtained
# using the "openssl ciphers" command and should be provided in the same format
# as the output of that command.
#ciphers
# =================================================================
# Extra listeners
# =================================================================
# Listen on a port/ip address combination. By using this variable
# multiple times, mosquitto can listen on more than one port. If
# this variable is used and neither bind_address nor port given,
# then the default listener will not be started.
# The port number to listen on must be given. Optionally, an ip
# address or host name may be supplied as a second argument. In
# this case, mosquitto will attempt to bind the listener to that
# address and so restrict access to the associated network and
# interface. By default, mosquitto will listen on all interfaces.
# Note that for a websockets listener it is not possible to bind to a host
# name.
# listener port-number [ip address/host name]
#listener
# The maximum number of client connections to allow. This is
# a per listener setting.
# Default is -1, which means unlimited connections.
# Note that other process limits mean that unlimited connections
# are not really possible. Typically the default maximum number of
# connections possible is around 1024.
#max_connections -1
# The listener can be restricted to operating within a topic hierarchy using
# the mount_point option. This is achieved by prefixing the mount_point string
# to all topics for any clients connected to this listener. This prefixing only
# happens internally to the broker; the client will not see the prefix.
#mount_point
# Choose the protocol to use when listening.
# This can be either mqtt or websockets.
# Certificate based TLS may be used with websockets, except that only the
# cafile, certfile, keyfile and ciphers options are supported.
#protocol mqtt
# When a listener is using the websockets protocol, it is possible to serve
# http data as well. Set http_dir to a directory which contains the files you
# wish to serve. If this option is not specified, then no normal http
# connections will be possible.
#http_dir
# Set use_username_as_clientid to true to replace the clientid that a client
# connected with with its username. This allows authentication to be tied to
# the clientid, which means that it is possible to prevent one client
# disconnecting another by using the same clientid.
# If a client connects with no username it will be disconnected as not
# authorised when this option is set to true.
# Do not use in conjunction with clientid_prefixes.
# See also use_identity_as_username.
#use_username_as_clientid
# -----------------------------------------------------------------
# Certificate based SSL/TLS support
# -----------------------------------------------------------------
# The following options can be used to enable certificate based SSL/TLS support
# for this listener. Note that the recommended port for MQTT over TLS is 8883,
# but this must be set manually.
#
# See also the mosquitto-tls man page and the "Pre-shared-key based SSL/TLS
# support" section. Only one of certificate or PSK encryption support can be
# enabled for any listener.
# At least one of cafile or capath must be defined to enable certificate based
# TLS encryption. They both define methods of accessing the PEM encoded
# Certificate Authority certificates that have signed your server certificate
# and that you wish to trust.
# cafile defines the path to a file containing the CA certificates.
# capath defines a directory that will be searched for files
# containing the CA certificates. For capath to work correctly, the
# certificate files must have ".crt" as the file ending and you must run
# "c_rehash <path to capath>" each time you add/remove a certificate.
#cafile
#capath
# Path to the PEM encoded server certificate.
#certfile
# Path to the PEM encoded keyfile.
#keyfile
# By default a TLS enabled listener will operate in a similar fashion to a
# https enabled web server, in that the server has a certificate signed by a CA
# and the client will verify that it is a trusted certificate. The overall aim
# is encryption of the network traffic. By setting require_certificate to true,
# the client must provide a valid certificate in order for the network
# connection to proceed. This allows access to the broker to be controlled
# outside of the mechanisms provided by MQTT.
#require_certificate false
# If require_certificate is true, you may set use_identity_as_username to true
# to use the CN value from the client certificate as a username. If this is
# true, the password_file option will not be used for this listener.
#use_identity_as_username false
# If you have require_certificate set to true, you can create a certificate
# revocation list file to revoke access to particular client certificates. If
# you have done this, use crlfile to point to the PEM encoded revocation file.
#crlfile
# If you wish to control which encryption ciphers are used, use the ciphers
# option. The list of available ciphers can be obtained using the "openssl
# ciphers" command and should be provided in the same format as the output of
# that command.
#ciphers
# -----------------------------------------------------------------
# Pre-shared-key based SSL/TLS support
# -----------------------------------------------------------------
# The following options can be used to enable PSK based SSL/TLS support for
# this listener. Note that the recommended port for MQTT over TLS is 8883, but
# this must be set manually.
#
# See also the mosquitto-tls man page and the "Certificate based SSL/TLS
# support" section. Only one of certificate or PSK encryption support can be
# enabled for any listener.
# The psk_hint option enables pre-shared-key support for this listener and also
# acts as an identifier for this listener. The hint is sent to clients and may
# be used locally to aid authentication. The hint is a free form string that
# doesn't have much meaning in itself, so feel free to be creative.
# If this option is provided, see psk_file to define the pre-shared keys to be
# used or create a security plugin to handle them.
#psk_hint
# Set use_identity_as_username to have the psk identity sent by the client used
# as its username. Authentication will be carried out using the PSK rather than
# the MQTT username/password and so password_file will not be used for this
# listener.
#use_identity_as_username false
# When using PSK, the encryption ciphers used will be chosen from the list of
# available PSK ciphers. If you want to control which ciphers are available,
# use the "ciphers" option. The list of available ciphers can be obtained
# using the "openssl ciphers" command and should be provided in the same format
# as the output of that command.
#ciphers
# =================================================================
# Persistence
# =================================================================
# If persistence is enabled, save the in-memory database to disk
# every autosave_interval seconds. If set to 0, the persistence
# database will only be written when mosquitto exits. See also
# autosave_on_changes.
# Note that writing of the persistence database can be forced by
# sending mosquitto a SIGUSR1 signal.
#autosave_interval 1800
# If true, mosquitto will count the number of subscription changes, retained
# messages received and queued messages and if the total exceeds
# autosave_interval then the in-memory database will be saved to disk.
# If false, mosquitto will save the in-memory database to disk by treating
# autosave_interval as a time in seconds.
#autosave_on_changes false
# Save persistent message data to disk (true/false).
# This saves information about all messages, including
# subscriptions, currently in-flight messages and retained
# messages.
# retained_persistence is a synonym for this option.
persistence true
# The filename to use for the persistent database, not including
# the path.
#persistence_file mosquitto.db
# Location for persistent database. Must include trailing /
# Default is an empty string (current directory).
# Set to e.g. /var/lib/mosquitto/ if running as a proper service on Linux or
# similar.
persistence_location /mosquitto/data/
# =================================================================
# Logging
# =================================================================
# Places to log to. Use multiple log_dest lines for multiple
# logging destinations.
# Possible destinations are: stdout stderr syslog topic file
#
# stdout and stderr log to the console on the named output.
#
# syslog uses the userspace syslog facility which usually ends up
# in /var/log/messages or similar.
#
# topic logs to the broker topic '$SYS/broker/log/<severity>',
# where severity is one of D, E, W, N, I, M which are debug, error,
# warning, notice, information and message. Message type severity is used by
# the subscribe/unsubscribe log_types and publishes log messages to
# $SYS/broker/log/M/subscribe or $SYS/broker/log/M/unsubscribe.
#
# The file destination requires an additional parameter which is the file to be
# logged to, e.g. "log_dest file /var/log/mosquitto.log". The file will be
# closed and reopened when the broker receives a HUP signal. Only a single file
# destination may be configured.
#
# Note that if the broker is running as a Windows service it will default to
# "log_dest none" and neither stdout nor stderr logging is available.
# Use "log_dest none" if you wish to disable logging.
log_dest file /mosquitto/log/mosquitto.log
# If using syslog logging (not on Windows), messages will be logged to the
# "daemon" facility by default. Use the log_facility option to choose which of
# local0 to local7 to log to instead. The option value should be an integer
# value, e.g. "log_facility 5" to use local5.
#log_facility
# Types of messages to log. Use multiple log_type lines for logging
# multiple types of messages.
# Possible types are: debug, error, warning, notice, information,
# none, subscribe, unsubscribe, websockets, all.
# Note that debug type messages are for decoding the incoming/outgoing
# network packets. They are not logged in "topics".
log_type error
log_type warning
log_type notice
log_type information
log_type all
# Change the websockets logging level. This is a global option, it is not
# possible to set per listener. This is an integer that is interpreted by
# libwebsockets as a bit mask for its lws_log_levels enum. See the
# libwebsockets documentation for more details. "log_type websockets" must also
# be enabled.
#websockets_log_level 0
# If set to true, client connection and disconnection messages will be included
# in the log.
#connection_messages true
# If set to true, add a timestamp value to each log message.
#log_timestamp true
# =================================================================
# Security
# =================================================================
# If set, only clients that have a matching prefix on their
# clientid will be allowed to connect to the broker. By default,
# all clients may connect.
# For example, setting "secure-" here would mean a client "secure-client"
# could connect but another with clientid "mqtt" couldn't.
#clientid_prefixes
# Boolean value that determines whether clients that connect
# without providing a username are allowed to connect. If set to
# false then a password file should be created (see the
# password_file option) to control authenticated client access.
# Defaults to true.
#allow_anonymous true
# In addition to the clientid_prefixes, allow_anonymous and TLS
# authentication options, username based authentication is also
# possible. The default support is described in "Default
# authentication and topic access control" below. The auth_plugin
# allows another authentication method to be used.
# Specify the path to the loadable plugin and see the
# "Authentication and topic access plugin options" section below.
#auth_plugin
# If auth_plugin_deny_special_chars is true, the default, then before an ACL
# check is made, the username/client id of the client needing the check is
# searched for the presence of either a '+' or '#' character. If either of
# these characters is found in either the username or client id, then the ACL
# check is denied before it is sent to the plugin.
#
# This check prevents the case where a malicious user could circumvent an ACL
# check by using one of these characters as their username or client id. This
# is the same issue as was reported with mosquitto itself as CVE-2017-7650.
#
# If you are entirely sure that the plugin you are using is not vulnerable to
# this attack (i.e. if you never use usernames or client ids in topics) then
# you can disable this extra check and have all ACL checks delivered to your
# plugin by setting auth_plugin_deny_special_chars to false.
#auth_plugin_deny_special_chars true
# -----------------------------------------------------------------
# Default authentication and topic access control
# -----------------------------------------------------------------
# Control access to the broker using a password file. This file can be
# generated using the mosquitto_passwd utility. If TLS support is not compiled
# into mosquitto (it is recommended that TLS support should be included) then
# plain text passwords are used, in which case the file should be a text file
# with lines in the format:
# username:password
# The password (and colon) may be omitted if desired, although this
# offers very little in the way of security.
#
# See the TLS client require_certificate and use_identity_as_username options
# for alternative authentication options.
#password_file
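#
# As an illustration only (the file path below is an assumption, not part of
# the defaults), a minimal password setup could be created with the
# mosquitto_passwd utility and then enabled here:
#
#   mosquitto_passwd -c /mosquitto/config/pwfile first_user    (creates the file)
#   mosquitto_passwd /mosquitto/config/pwfile another_user     (adds more users)
#
# and in this configuration file:
#
#   allow_anonymous false
#   password_file /mosquitto/config/pwfile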
# Access may also be controlled using a pre-shared-key file. This requires
# TLS-PSK support and a listener configured to use it. The file should be text
# lines in the format:
# identity:key
# The key should be in hexadecimal format without a leading "0x".
#psk_file
# Control access to topics on the broker using an access control list
# file. If this parameter is defined then only the topics listed will
# have access.
# If the first character of a line of the ACL file is a # it is treated as a
# comment.
# Topic access is added with lines of the format:
#
# topic [read|write|readwrite] <topic>
#
# The access type is controlled using "read", "write" or "readwrite". This
# parameter is optional (unless <topic> contains a space character) - if not
# given then the access is read/write. <topic> can contain the + or #
# wildcards as in subscriptions.
#
# The first set of topics is applied to anonymous clients, assuming
# allow_anonymous is true. User specific topic ACLs are added after a
# user line as follows:
#
# user <username>
#
# The username referred to here is the same as in password_file. It is
# not the clientid.
#
#
# It is also possible to define ACLs based on pattern substitution within the
# topic. The patterns available for substitution are:
#
# %c to match the client id of the client
# %u to match the username of the client
#
# The substitution pattern must be the only text for that level of hierarchy.
#
# The form is the same as for the topic keyword, but using pattern as the
# keyword.
# Pattern ACLs apply to all users even if the "user" keyword has previously
# been given.
#
# If using bridges with usernames and ACLs, connection messages can be allowed
# with the following pattern:
# pattern write $SYS/broker/connection/%c/state
#
# pattern [read|write|readwrite] <topic>
#
# Example:
#
# pattern write sensor/%u/data
#
#acl_file
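#
# A hedged example of what such an ACL file could contain (topic names and
# usernames below are placeholders, not defaults shipped with this image);
# it would be enabled with e.g. "acl_file /mosquitto/config/aclfile":
#
#   # anonymous clients may only read the public tree
#   topic read public/#
#
#   user sensor01
#   topic write sensors/sensor01/#
#
#   # every authenticated client gets a private tree named after its username
#   pattern readwrite clients/%u/#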
# -----------------------------------------------------------------
# Authentication and topic access plugin options
# -----------------------------------------------------------------
# If the auth_plugin option above is used, define options to pass to the
# plugin here as described by the plugin instructions. All options named
# using the format auth_opt_* will be passed to the plugin, for example:
#
# auth_opt_db_host
# auth_opt_db_port
# auth_opt_db_username
# auth_opt_db_password
# =================================================================
# Bridges
# =================================================================
# A bridge is a way of connecting multiple MQTT brokers together.
# Create a new bridge using the "connection" option as described below. Set
# options for the bridges using the remaining parameters. You must specify the
# address and at least one topic to subscribe to.
# Each connection must have a unique name.
# The address line may have multiple host address and ports specified. See
# below in the round_robin description for more details on bridge behaviour if
# multiple addresses are used.
# The direction that the topic will be shared can be chosen by
# specifying out, in or both, where the default value is out.
# The QoS level of the bridged communication can be specified with the next
# topic option. The default QoS level is 0, to change the QoS the topic
# direction must also be given.
# The local and remote prefix options allow a topic to be remapped when it is
# bridged to/from the remote broker. This provides the ability to place a topic
# tree in an appropriate location.
# For more details see the mosquitto.conf man page.
# Multiple topics can be specified per connection, but be careful
# not to create any loops.
# If you are using bridges with cleansession set to false (the default), then
# you may get unexpected behaviour from incoming topics if you change what
# topics you are subscribing to. This is because the remote broker keeps the
# subscription for the old topic. If you have this problem, connect your bridge
# with cleansession set to true, then reconnect with cleansession set to false
# as normal.
#connection <name>
#address <host>[:<port>] [<host>[:<port>]]
#topic <topic> [[[out | in | both] qos-level] local-prefix remote-prefix]
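#
# For illustration, a complete bridge definition could look like the following
# (the connection name, host and topics are placeholders, not defaults):
#
#   connection cloud-bridge
#   address remote.example.com:1883
#   topic sensors/# out 1
#   topic commands/# in 1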
# Set the version of the MQTT protocol to use for this bridge. Can be one
# of mqttv31 or mqttv311. Defaults to mqttv31.
#bridge_protocol_version mqttv31
# If a bridge has topics that have "out" direction, the default behaviour is to
# send an unsubscribe request to the remote broker on that topic. This means
# that changing a topic direction from "in" to "out" will not keep receiving
# incoming messages. Sending these unsubscribe requests is not always
# desirable, setting bridge_attempt_unsubscribe to false will disable sending
# the unsubscribe request.
#bridge_attempt_unsubscribe true
# If the bridge has more than one address given in the address/addresses
# configuration, the round_robin option defines the behaviour of the bridge on
# a failure of the bridge connection. If round_robin is false, the default
# value, then the first address is treated as the main bridge connection. If
# the connection fails, the other secondary addresses will be attempted in
# turn. Whilst connected to a secondary bridge, the bridge will periodically
# attempt to reconnect to the main bridge until successful.
# If round_robin is true, then all addresses are treated as equals. If a
# connection fails, the next address will be tried and if successful will
# remain connected until it fails.
#round_robin false
# Set the client id to use on the remote end of this bridge connection. If not
# defined, this defaults to 'name.hostname' where name is the connection name
# and hostname is the hostname of this computer.
# This replaces the old "clientid" option to avoid confusion. "clientid"
# remains valid for the time being.
#remote_clientid
# Set the clientid to use on the local broker. If not defined, this defaults to
# 'local.<clientid>'. If you are bridging a broker to itself, it is important
# that local_clientid and clientid do not match.
#local_clientid
# Set the clean session variable for this bridge.
# When set to true, when the bridge disconnects for any reason, all
# messages and subscriptions will be cleaned up on the remote
# broker. Note that with cleansession set to true, there may be a
# significant amount of retained messages sent when the bridge
# reconnects after losing its connection.
# When set to false, the subscriptions and messages are kept on the
# remote broker, and delivered when the bridge reconnects.
#cleansession false
# If set to true, publish notification messages to the local and remote brokers
# giving information about the state of the bridge connection. Retained
# messages are published to the topic $SYS/broker/connection/<clientid>/state
# unless the notification_topic option is used.
# If the message is 1 then the connection is active, or 0 if the connection has
# failed.
#notifications true
# Choose the topic on which notification messages for this bridge are
# published. If not set, messages are published on the topic
# $SYS/broker/connection/<clientid>/state
#notification_topic
# Set the keepalive interval for this bridge connection, in
# seconds.
#keepalive_interval 60
# Set the start type of the bridge. This controls how the bridge starts and
# can be one of three types: automatic, lazy and once. Note that RSMB provides
# a fourth start type "manual" which isn't currently supported by mosquitto.
#
# "automatic" is the default start type and means that the bridge connection
# will be started automatically when the broker starts and also restarted
# after a short delay (30 seconds) if the connection fails.
#
# Bridges using the "lazy" start type will be started automatically when the
# number of queued messages exceeds the number set with the "threshold"
# parameter. It will be stopped automatically after the time set by the
# "idle_timeout" parameter. Use this start type if you wish the connection to
# only be active when it is needed.
#
# A bridge using the "once" start type will be started automatically when the
# broker starts but will not be restarted if the connection fails.
#start_type automatic
# Set the amount of time a bridge using the automatic start type will wait
# until attempting to reconnect. Defaults to 30 seconds.
#restart_timeout 30
# Set the amount of time a bridge using the lazy start type must be idle before
# it will be stopped. Defaults to 60 seconds.
#idle_timeout 60
# Set the number of messages that need to be queued for a bridge with lazy
# start type to be restarted. Defaults to 10 messages.
# Must be less than max_queued_messages.
#threshold 10
# If try_private is set to true, the bridge will attempt to indicate to the
# remote broker that it is a bridge not an ordinary client. If successful, this
# means that loop detection will be more effective and that retained messages
# will be propagated correctly. Not all brokers support this feature so it may
# be necessary to set try_private to false if your bridge does not connect
# properly.
#try_private true
# Set the username to use when connecting to a broker that requires
# authentication.
# This replaces the old "username" option to avoid confusion. "username"
# remains valid for the time being.
#remote_username
# Set the password to use when connecting to a broker that requires
# authentication. This option is only used if remote_username is also set.
# This replaces the old "password" option to avoid confusion. "password"
# remains valid for the time being.
#remote_password
# -----------------------------------------------------------------
# Certificate based SSL/TLS support
# -----------------------------------------------------------------
# Either bridge_cafile or bridge_capath must be defined to enable TLS support
# for this bridge.
# bridge_cafile defines the path to a file containing the
# Certificate Authority certificates that have signed the remote broker
# certificate.
# bridge_capath defines a directory that will be searched for files containing
# the CA certificates. For bridge_capath to work correctly, the certificate
# files must have ".crt" as the file ending and you must run "c_rehash <path to
# capath>" each time you add/remove a certificate.
#bridge_cafile
#bridge_capath
# Path to the PEM encoded client certificate, if required by the remote broker.
#bridge_certfile
# Path to the PEM encoded client private key, if required by the remote broker.
#bridge_keyfile
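#
# Illustrative TLS bridge settings (paths are assumptions for this container
# layout, not defaults): point bridge_cafile at the CA that signed the remote
# broker certificate and, if the remote broker requires client certificates,
# provide the cert/key pair as well:
#
#   bridge_cafile /mosquitto/config/certs/ca.crt
#   bridge_certfile /mosquitto/config/certs/bridge.crt
#   bridge_keyfile /mosquitto/config/certs/bridge.key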
# When using certificate based encryption, bridge_insecure disables
# verification of the server hostname in the server certificate. This can be
# useful when testing initial server configurations, but makes it possible for
# a malicious third party to impersonate your server through DNS spoofing, for
# example. Use this option in testing only. If you need to resort to using this
# option in a production environment, your setup is at fault and there is no
# point using encryption.
#bridge_insecure false
# -----------------------------------------------------------------
# PSK based SSL/TLS support
# -----------------------------------------------------------------
# Pre-shared-key encryption provides an alternative to certificate based
# encryption. A bridge can be configured to use PSK with the bridge_identity
# and bridge_psk options. These are the client PSK identity, and pre-shared-key
# in hexadecimal format with no "0x". Only one of certificate and PSK based
# encryption can be used on one bridge at once.
#bridge_identity
#bridge_psk
# =================================================================
# External config files
# =================================================================
# External configuration files may be included by using the
# include_dir option. This defines a directory that will be searched
# for config files. All files that end in '.conf' will be loaded as
# a configuration file. It is best to have this as the last option
# in the main file. This option will only be processed from the main
# configuration file. The directory specified must not contain the
# main configuration file.
#include_dir
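#
# For example (the path is an assumption for this container layout), listener
# or bridge settings could be split into separate files with:
#
#   include_dir /mosquitto/config/conf.d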
# =================================================================
# rsmb options - unlikely to ever be supported
# =================================================================
#ffdc_output
#max_log_entries
#trace_level
#trace_output

View File

@ -14,18 +14,29 @@ RUN if [ ${CHANGE_SOURCE} = true ]; then \
RUN apk update \
&& apk upgrade \
&& apk --update add logrotate \
&& apk add --no-cache openssl \
&& apk add --no-cache bash \
&& adduser -D -H -u 1000 -s /bin/bash www-data
&& apk add --no-cache bash
RUN set -x ; \
addgroup -g 82 -S www-data ; \
adduser -u 82 -D -S -G www-data www-data && exit 0 ; exit 1
ARG PHP_UPSTREAM_CONTAINER=php-fpm
ARG PHP_UPSTREAM_PORT=9000
# Create 'messages' file used from 'logrotate'
RUN touch /var/log/messages
# Copy 'logrotate' config file
COPY logrotate/nginx /etc/logrotate.d/
# Set upstream conf and remove the default conf
RUN echo "upstream php-upstream { server ${PHP_UPSTREAM_CONTAINER}:${PHP_UPSTREAM_PORT}; }" > /etc/nginx/conf.d/upstream.conf \
&& rm /etc/nginx/conf.d/default.conf
ADD ./startup.sh /opt/startup.sh
RUN sed -i 's/\r//g' /opt/startup.sh
CMD ["/bin/bash", "/opt/startup.sh"]
EXPOSE 80 443

14
nginx/logrotate/nginx Normal file
View File

@ -0,0 +1,14 @@
/var/log/nginx/*.log {
daily
missingok
rotate 32
compress
delaycompress
nodateext
notifempty
create 644 www-data root
sharedscripts
postrotate
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
endscript
}

4
nginx/ssl/.gitignore vendored Normal file
View File

@ -0,0 +1,4 @@
*.crt
*.csr
*.key
*.pem

View File

@ -6,4 +6,8 @@ if [ ! -f /etc/nginx/ssl/default.crt ]; then
openssl x509 -req -days 365 -in "/etc/nginx/ssl/default.csr" -signkey "/etc/nginx/ssl/default.key" -out "/etc/nginx/ssl/default.crt"
fi
# Start crond in background
crond -l 2 -b
# Start nginx in foreground
nginx

View File

@ -1,10 +0,0 @@
FROM fenglc/pgadmin4
LABEL maintainer="Huadong Zuo <admin@zuohuadong.cn>"
# user: pgadmin4@pgadmin.org
# password: admin
# pg_dump & postgresql all in "/usr/bin"
# backup in "/var/lib/pgadmin/storage/pgadmin4"
EXPOSE 5050

View File

@ -14,7 +14,8 @@
ARG LARADOCK_PHP_VERSION
FROM laradock/php-fpm:2.2-${LARADOCK_PHP_VERSION}
# FROM laradock/php-fpm:2.2-${LARADOCK_PHP_VERSION}
FROM letsdockerize/laradock-php-fpm:2.4-${LARADOCK_PHP_VERSION}
LABEL maintainer="Mahmoud Zalt <mahmoud@zalt.me>"
@ -24,20 +25,27 @@ ARG LARADOCK_PHP_VERSION
ENV DEBIAN_FRONTEND noninteractive
# always run apt update when start and after add new source list, then clean up at end.
RUN apt-get update -yqq && \
apt-get install -y apt-utils && \
pecl channel-update pecl.php.net
#
#--------------------------------------------------------------------------
# Mandatory Software's Installation
#--------------------------------------------------------------------------
#
# Mandatory Software's such as ("mcrypt", "pdo_mysql", "libssl-dev", ....)
# are installed on the base image 'laradock/php-fpm' image. If you want
# to add more Software's or remove existing one, you need to edit the
# base image (https://github.com/Laradock/php-fpm).
#
RUN set -xe; \
apt-get update -yqq && \
pecl channel-update pecl.php.net && \
apt-get install -yqq \
apt-utils \
#
#--------------------------------------------------------------------------
# Mandatory Software's Installation
#--------------------------------------------------------------------------
#
# Mandatory Software's such as ("mcrypt", "pdo_mysql", "libssl-dev", ....)
# are installed on the base image 'laradock/php-fpm' image. If you want
# to add more Software's or remove existing one, you need to edit the
# base image (https://github.com/Laradock/php-fpm).
#
# next lines are here because there is no auto build on dockerhub see https://github.com/laradock/laradock/pull/1903#issuecomment-463142846
libzip-dev zip unzip && \
docker-php-ext-configure zip --with-libzip && \
# Install the zip extension
docker-php-ext-install zip && \
php -m | grep -q 'zip'
#
#--------------------------------------------------------------------------
@ -47,7 +55,7 @@ RUN apt-get update -yqq && \
# Optional Software's will only be installed if you set them to `true`
# in the `docker-compose.yml` before the build.
# Example:
# - INSTALL_ZIP_ARCHIVE=true
# - INSTALL_SOAP=true
#
###########################################################################
@ -92,6 +100,18 @@ RUN if [ ${INSTALL_SOAP} = true ]; then \
docker-php-ext-install soap \
;fi
###########################################################################
# XSL:
###########################################################################
ARG INSTALL_XSL=false
RUN if [ ${INSTALL_XSL} = true ]; then \
# Install the xsl extension
apt-get -y install libxslt-dev && \
docker-php-ext-install xsl \
;fi
###########################################################################
# pgsql
###########################################################################
@ -108,13 +128,17 @@ RUN if [ ${INSTALL_PGSQL} = true ]; then \
###########################################################################
ARG INSTALL_PG_CLIENT=false
ARG INSTALL_POSTGIS=false
RUN if [ ${INSTALL_PG_CLIENT} = true ]; then \
# Create folders if not exists (https://github.com/tianon/docker-brew-debian/issues/65)
mkdir -p /usr/share/man/man1 && \
mkdir -p /usr/share/man/man7 && \
# Install the pgsql client
apt-get install -y postgresql-client \
apt-get install -y postgresql-client && \
if [ ${INSTALL_POSTGIS} = true ]; then \
apt-get install -y postgis; \
fi \
;fi
###########################################################################
@ -187,7 +211,7 @@ ARG INSTALL_SWOOLE=false
RUN if [ ${INSTALL_SWOOLE} = true ]; then \
# Install Php Swoole Extension
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
pecl install swoole-2.0.11; \
pecl install swoole-2.0.10; \
else \
if [ $(php -r "echo PHP_MINOR_VERSION;") = "0" ]; then \
pecl install swoole-2.2.0; \
@ -196,6 +220,22 @@ RUN if [ ${INSTALL_SWOOLE} = true ]; then \
fi \
fi && \
docker-php-ext-enable swoole \
&& php -m | grep -q 'swoole' \
;fi
###########################################################################
# Taint EXTENSION
###########################################################################
ARG INSTALL_TAINT=false
RUN if [ ${INSTALL_TAINT} = true ]; then \
# Install Php TAINT Extension
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "7" ]; then \
pecl install taint && \
docker-php-ext-enable taint && \
php -m | grep -q 'taint'; \
fi \
;fi
###########################################################################
@ -214,6 +254,38 @@ RUN if [ ${INSTALL_MONGO} = true ]; then \
docker-php-ext-enable mongodb \
;fi
###########################################################################
# Xhprof:
###########################################################################
ARG INSTALL_XHPROF=false
RUN if [ ${INSTALL_XHPROF} = true ]; then \
# Install the php xhprof extension
if [ $(php -r "echo PHP_MAJOR_VERSION;") = 7 ]; then \
curl -L -o /tmp/xhprof.tar.gz "https://github.com/tideways/php-xhprof-extension/archive/v4.1.7.tar.gz"; \
else \
curl -L -o /tmp/xhprof.tar.gz "https://codeload.github.com/phacility/xhprof/tar.gz/master"; \
fi \
&& mkdir -p xhprof \
&& tar -C xhprof -zxvf /tmp/xhprof.tar.gz --strip 1 \
&& ( \
cd xhprof \
&& phpize \
&& ./configure \
&& make \
&& make install \
) \
&& rm -r xhprof \
&& rm /tmp/xhprof.tar.gz \
;fi
COPY ./xhprof.ini /usr/local/etc/php/conf.d
RUN if [ ${INSTALL_XHPROF} = false ]; then \
rm /usr/local/etc/php/conf.d/xhprof.ini \
;fi
###########################################################################
# AMQP:
###########################################################################
@ -221,23 +293,21 @@ RUN if [ ${INSTALL_MONGO} = true ]; then \
ARG INSTALL_AMQP=false
RUN if [ ${INSTALL_AMQP} = true ]; then \
apt-get install librabbitmq-dev -y && \
# download and install manually, to make sure it's compatible with ampq installed by pecl later
# install cmake first
apt-get update && apt-get -y install cmake && \
curl -L -o /tmp/rabbitmq-c.tar.gz https://github.com/alanxz/rabbitmq-c/archive/master.tar.gz && \
mkdir -p rabbitmq-c && \
tar -C rabbitmq-c -zxvf /tmp/rabbitmq-c.tar.gz --strip 1 && \
cd rabbitmq-c/ && \
mkdir _build && cd _build/ && \
cmake .. && \
cmake --build . --target install && \
# Install the amqp extension
pecl install amqp && \
docker-php-ext-enable amqp \
;fi
###########################################################################
# ZipArchive:
###########################################################################
ARG INSTALL_ZIP_ARCHIVE=false
RUN if [ ${INSTALL_ZIP_ARCHIVE} = true ]; then \
apt-get install libzip-dev -y && \
docker-php-ext-configure zip --with-libzip && \
# Install the zip extension
docker-php-ext-install zip \
docker-php-ext-enable amqp && \
# Install the sockets extension
docker-php-ext-install sockets \
;fi
###########################################################################
@ -287,7 +357,7 @@ RUN if [ ${INSTALL_MEMCACHED} = true ]; then \
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
curl -L -o /tmp/memcached.tar.gz "https://github.com/php-memcached-dev/php-memcached/archive/2.2.0.tar.gz"; \
else \
curl -L -o /tmp/memcached.tar.gz "https://github.com/php-memcached-dev/php-memcached/archive/php7.tar.gz"; \
curl -L -o /tmp/memcached.tar.gz "https://github.com/php-memcached-dev/php-memcached/archive/master.tar.gz"; \
fi \
&& mkdir -p memcached \
&& tar -C memcached -zxvf /tmp/memcached.tar.gz --strip 1 \
@ -321,30 +391,30 @@ RUN if [ ${INSTALL_EXIF} = true ]; then \
USER root
ARG INSTALL_AEROSPIKE=false
ARG AEROSPIKE_PHP_REPOSITORY
RUN if [ ${INSTALL_AEROSPIKE} = true ]; then \
RUN set -xe; \
if [ ${INSTALL_AEROSPIKE} = true ]; then \
# Fix dependencies for PHPUnit within aerospike extension
apt-get -y install sudo wget && \
# Install the php aerospike extension
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
curl -L -o /tmp/aerospike-client-php.tar.gz https://github.com/aerospike/aerospike-client-php5/archive/master.tar.gz; \
else \
curl -L -o /tmp/aerospike-client-php.tar.gz ${AEROSPIKE_PHP_REPOSITORY}; \
curl -L -o /tmp/aerospike-client-php.tar.gz https://github.com/aerospike/aerospike-client-php/archive/master.tar.gz; \
fi \
&& mkdir -p aerospike-client-php \
&& tar -C aerospike-client-php -zxvf /tmp/aerospike-client-php.tar.gz --strip 1 \
&& mkdir -p /tmp/aerospike-client-php \
&& tar -C /tmp/aerospike-client-php -zxvf /tmp/aerospike-client-php.tar.gz --strip 1 \
&& \
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
( \
cd aerospike-client-php/src/aerospike \
cd /tmp/aerospike-client-php/src/aerospike \
&& phpize \
&& ./build.sh \
&& make install \
) \
else \
( \
cd aerospike-client-php/src \
cd /tmp/aerospike-client-php/src \
&& phpize \
&& ./build.sh \
&& make install \
@ -438,7 +508,8 @@ RUN if [ ${INSTALL_LDAP} = true ]; then \
ARG INSTALL_MSSQL=false
RUN set -eux; if [ ${INSTALL_MSSQL} = true ]; then \
RUN set -eux; \
if [ ${INSTALL_MSSQL} = true ]; then \
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
apt-get -y install freetds-dev libsybdb5 \
&& ln -s /usr/lib/x86_64-linux-gnu/libsybdb.so /usr/lib/libsybdb.so \
@ -461,12 +532,16 @@ RUN set -eux; if [ ${INSTALL_MSSQL} = true ]; then \
&& ln -sfn /etc/locale.alias /usr/share/locale/locale.alias \
&& locale-gen \
# Install pdo_sqlsrv and sqlsrv from PECL. Replace pdo_sqlsrv-4.1.8preview with preferred version.
&& pecl install pdo_sqlsrv sqlsrv \
&& if [ $(php -r "echo PHP_MINOR_VERSION;") = "0" ]; then \
pecl install pdo_sqlsrv-5.3.0 sqlsrv-5.3.0 \
;else \
pecl install pdo_sqlsrv sqlsrv \
;fi \
&& docker-php-ext-enable pdo_sqlsrv sqlsrv \
&& php -m | grep -q 'pdo_sqlsrv' \
&& php -m | grep -q 'sqlsrv' \
;fi \
;fi
;fi
###########################################################################
# Image optimizers:
@ -502,7 +577,6 @@ ARG INSTALL_IMAP=false
RUN if [ ${INSTALL_IMAP} = true ]; then \
apt-get install -y libc-client-dev libkrb5-dev && \
rm -r /var/lib/apt/lists/* && \
docker-php-ext-configure imap --with-kerberos --with-imap-ssl && \
docker-php-ext-install imap \
;fi
@ -528,17 +602,34 @@ ARG INSTALL_PHALCON=false
ARG LARADOCK_PHALCON_VERSION
ENV LARADOCK_PHALCON_VERSION ${LARADOCK_PHALCON_VERSION}
# Copy phalcon configration
COPY ./phalcon.ini /usr/local/etc/php/conf.d/phalcon.ini.disable
RUN if [ $INSTALL_PHALCON = true ]; then \
apt-get update && apt-get install -y unzip libpcre3-dev gcc make re2c \
&& curl -L -o /tmp/cphalcon.zip https://github.com/phalcon/cphalcon/archive/v${LARADOCK_PHALCON_VERSION}.zip \
&& unzip -d /tmp/ /tmp/cphalcon.zip \
&& cd /tmp/cphalcon-${LARADOCK_PHALCON_VERSION}/build \
&& ./install \
&& echo "extension=phalcon.so" >> /etc/php/${LARADOCK_PHP_VERSION}/mods-available/phalcon.ini \
&& ln -s /etc/php/${LARADOCK_PHP_VERSION}/mods-available/phalcon.ini /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/30-phalcon.ini \
&& mv /usr/local/etc/php/conf.d/phalcon.ini.disable /usr/local/etc/php/conf.d/phalcon.ini \
&& rm -rf /tmp/cphalcon* \
;fi
###########################################################################
# APCU:
###########################################################################
ARG INSTALL_APCU=false
RUN if [ ${INSTALL_APCU} = true ]; then \
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
pecl install -a apcu-4.0.11; \
else \
pecl install apcu; \
fi && \
docker-php-ext-enable apcu \
;fi
###########################################################################
# YAML:
###########################################################################
@ -549,15 +640,99 @@ ARG INSTALL_YAML=false
RUN if [ ${INSTALL_YAML} = true ]; then \
apt-get install libyaml-dev -y ; \
pecl install yaml ; \
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
pecl install -a yaml-1.3.2; \
else \
pecl install yaml; \
fi && \
docker-php-ext-enable yaml \
;fi
###########################################################################
# RDKAFKA:
###########################################################################
ARG INSTALL_RDKAFKA=false
RUN if [ ${INSTALL_RDKAFKA} = true ]; then \
apt-get install -y librdkafka-dev && \
pecl install rdkafka && \
docker-php-ext-enable rdkafka \
;fi
###########################################################################
# Install additional locales:
###########################################################################
ARG INSTALL_ADDITIONAL_LOCALES=false
ARG ADDITIONAL_LOCALES
RUN if [ ${INSTALL_ADDITIONAL_LOCALES} = true ]; then \
apt-get install -y locales \
&& echo '' >> /usr/share/locale/locale.alias \
&& temp="${ADDITIONAL_LOCALES%\"}" \
&& temp="${temp#\"}" \
&& for i in ${temp}; do sed -i "/$i/s/^#//g" /etc/locale.gen; done \
&& locale-gen \
;fi
###########################################################################
# MySQL Client:
###########################################################################
USER root
ARG INSTALL_MYSQL_CLIENT=false
RUN if [ ${INSTALL_MYSQL_CLIENT} = true ]; then \
apt-get update -yqq && \
apt-get -y install mysql-client \
;fi
###########################################################################
# ping:
###########################################################################
USER root
ARG INSTALL_PING=false
RUN if [ ${INSTALL_PING} = true ]; then \
apt-get update -yqq && \
apt-get -y install inetutils-ping \
;fi
###########################################################################
# sshpass:
###########################################################################
USER root
ARG INSTALL_SSHPASS=false
RUN if [ ${INSTALL_SSHPASS} = true ]; then \
apt-get update -yqq && \
apt-get -y install sshpass \
;fi
###########################################################################
# FFMPEG:
###########################################################################
USER root
ARG INSTALL_FFMPEG=false
RUN if [ ${INSTALL_FFMPEG} = true ]; then \
apt-get update -yqq && \
apt-get -y install ffmpeg \
;fi
###########################################################################
# Check PHP version:
###########################################################################
RUN php -v | head -n 1 | grep -q "PHP ${LARADOCK_PHP_VERSION}."
RUN set -xe; php -v | head -n 1 | grep -q "PHP ${LARADOCK_PHP_VERSION}."
#
#--------------------------------------------------------------------------

View File

@ -1,9 +1,9 @@
; NOTE: The actual opcache.so extension is NOT SET HERE but rather (/usr/local/etc/php/conf.d/docker-php-ext-opcache.ini)
opcache.enable="1"
opcache.memory_consumption="256"
opcache.use_cwd="0"
opcache.max_file_size="0"
opcache.max_accelerated_files = 30000
opcache.validate_timestamps="1"
opcache.revalidate_freq="0"
opcache.enable=1
opcache.memory_consumption=256
opcache.use_cwd=0
opcache.max_file_size=0
opcache.max_accelerated_files=30000
opcache.validate_timestamps=1
opcache.revalidate_freq=0

1
php-fpm/phalcon.ini Normal file
View File

@ -0,0 +1 @@
extension=phalcon.so

1918
php-fpm/php7.3.ini Normal file

File diff suppressed because it is too large Load Diff

8
php-fpm/xhprof.ini Normal file
View File

@ -0,0 +1,8 @@
[xhprof]
; extension=xhprof.so
extension=tideways.so
xhprof.output_dir=/var/www/xhprof
; no need to autoload, control it from within the program
tideways.auto_prepend_library=0
; set default rate
tideways.sample_rate=100

View File

@ -16,6 +16,7 @@ RUN apk --update add wget \
libmemcached-dev \
libmcrypt-dev \
libxml2-dev \
pcre-dev \
zlib-dev \
autoconf \
cyrus-sasl-dev \
@ -23,7 +24,16 @@ RUN apk --update add wget \
supervisor
RUN docker-php-ext-install mysqli mbstring pdo pdo_mysql tokenizer xml pcntl
RUN pecl channel-update pecl.php.net && pecl install memcached mcrypt-1.0.1 && docker-php-ext-enable memcached
RUN pecl channel-update pecl.php.net && pecl install memcached mcrypt-1.0.1 mongodb && docker-php-ext-enable memcached mongodb
# Add a non-root user:
ARG PUID=1000
ENV PUID ${PUID}
ARG PGID=1000
ENV PGID ${PGID}
RUN addgroup -g ${PGID} laradock && \
adduser -D -G laradock -u ${PUID} laradock
#Install SOAP package:
ARG INSTALL_SOAP=false
@ -53,9 +63,112 @@ RUN if [ ${INSTALL_ZIP_ARCHIVE} = true ]; then \
docker-php-ext-install zip \
;fi
# Install MySQL Client:
ARG INSTALL_MYSQL_CLIENT=false
RUN if [ ${INSTALL_MYSQL_CLIENT} = true ]; then \
apk --update add mysql-client \
;fi
# Install FFMPEG:
ARG INSTALL_FFMPEG=false
RUN if [ ${INSTALL_FFMPEG} = true ]; then \
apk --update add ffmpeg \
;fi
# Install AMQP:
ARG INSTALL_AMQP=false
RUN if [ ${INSTALL_AMQP} = true ]; then \
apk --update add rabbitmq-c rabbitmq-c-dev && \
pecl install amqp && \
docker-php-ext-enable amqp && \
docker-php-ext-install sockets \
;fi
# Install Cassandra drivers:
ARG INSTALL_CASSANDRA=false
RUN if [ ${INSTALL_CASSANDRA} = true ]; then \
apk --update add cassandra-cpp-driver \
;fi
WORKDIR /usr/src
RUN if [ ${INSTALL_CASSANDRA} = true ]; then \
git clone https://github.com/datastax/php-driver.git \
&& cd php-driver/ext \
&& phpize \
&& mkdir -p /usr/src/php-driver/build \
&& cd /usr/src/php-driver/build \
&& ../ext/configure --with-php-config=/usr/bin/php-config7.1 > /dev/null \
&& make clean >/dev/null \
&& make >/dev/null 2>&1 \
&& make install \
&& docker-php-ext-enable cassandra \
;fi
# Install Phalcon ext
ARG INSTALL_PHALCON=false
ARG PHALCON_VERSION
ENV PHALCON_VERSION ${PHALCON_VERSION}
RUN if [ $INSTALL_PHALCON = true ]; then \
apk --update add unzip gcc make re2c bash \
&& curl -L -o /tmp/cphalcon.zip https://github.com/phalcon/cphalcon/archive/v${PHALCON_VERSION}.zip \
&& unzip -d /tmp/ /tmp/cphalcon.zip \
&& cd /tmp/cphalcon-${PHALCON_VERSION}/build \
&& ./install \
&& rm -rf /tmp/cphalcon* \
;fi
RUN if [ $INSTALL_GHOSTSCRIPT = true ]; then \
apk --update add ghostscript \
;fi
#Install GMP package:
ARG INSTALL_GMP=false
RUN if [ ${INSTALL_GMP} = true ]; then \
apk add --update --no-cache gmp gmp-dev \
&& docker-php-ext-install gmp \
;fi
RUN rm /var/cache/apk/* \
&& mkdir -p /var/www
###########################################################################
# Swoole EXTENSION
###########################################################################
ARG INSTALL_SWOOLE=false
RUN if [ ${INSTALL_SWOOLE} = true ]; then \
# Install Php Swoole Extension
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
pecl -q install swoole-2.0.10; \
else \
if [ $(php -r "echo PHP_MINOR_VERSION;") = "0" ]; then \
pecl install swoole-2.2.0; \
else \
pecl install swoole; \
fi \
fi \
&& docker-php-ext-enable swoole \
;fi
###########################################################################
# Taint EXTENSION
###########################################################################
ARG INSTALL_TAINT=false
RUN if [ ${INSTALL_TAINT} = true ]; then \
# Install Php TAINT Extension
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "7" ]; then \
pecl install taint; \
fi && \
docker-php-ext-enable taint \
;fi
#
#--------------------------------------------------------------------------
# Optional Supervisord Configuration

View File

@ -0,0 +1,8 @@
[program:laravel-scheduler]
process_name=%(program_name)s_%(process_num)02d
command=/bin/sh -c "while [ true ]; do (php /var/www/artisan schedule:run --verbose --no-interaction &); sleep 60; done"
autostart=true
autorestart=true
numprocs=1
user=laradock
redirect_stderr=true

View File

@ -4,4 +4,5 @@ command=php /var/www/artisan queue:work --sleep=3 --tries=3 --daemon
autostart=true
autorestart=true
numprocs=8
user=laradock
redirect_stderr=true

View File

@ -1,3 +1,5 @@
*.sh
!init_gitlab_db.sh
!init_jupyterhub_db.sh
!init_sonarqube_db.sh
!init_confluence_db.sh

View File

@ -0,0 +1,44 @@
#!/bin/bash
#
# Copy createdb.sh.example to createdb.sh
# then uncomment and set the database name and username for the databases you need to create
#
# example: .env has POSTGRES_USER=appuser and the db name you need is myshop_db
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER myuser WITH PASSWORD 'mypassword';
# CREATE DATABASE myshop_db;
# GRANT ALL PRIVILEGES ON DATABASE myshop_db TO myuser;
# EOSQL
#
# this sh script will run automatically when the postgres container starts and $DATA_PATH_HOST/postgres is not found.
#
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER db1 WITH PASSWORD 'db1';
# CREATE DATABASE db1;
# GRANT ALL PRIVILEGES ON DATABASE db1 TO db1;
# EOSQL
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER db2 WITH PASSWORD 'db2';
# CREATE DATABASE db2;
# GRANT ALL PRIVILEGES ON DATABASE db2 TO db2;
# EOSQL
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER db3 WITH PASSWORD 'db3';
# CREATE DATABASE db3;
# GRANT ALL PRIVILEGES ON DATABASE db3 TO db3;
# EOSQL
#
### default database and user for confluence ##############################################
if [ "$POSTGRES_CONFLUENCE_INIT" == 'true' ]; then
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER $POSTGRES_CONFLUENCE_USER WITH PASSWORD '$POSTGRES_CONFLUENCE_PASSWORD';
CREATE DATABASE $POSTGRES_CONFLUENCE_DB;
GRANT ALL PRIVILEGES ON DATABASE $POSTGRES_CONFLUENCE_DB TO $POSTGRES_CONFLUENCE_USER;
ALTER ROLE $POSTGRES_CONFLUENCE_USER CREATEROLE SUPERUSER;
EOSQL
echo
fi

View File

@ -33,9 +33,12 @@
# EOSQL
#
### default database and user for gitlab ##############################################
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER laradock_gitlab WITH PASSWORD 'laradock_gitlab';
CREATE DATABASE laradock_gitlab;
GRANT ALL PRIVILEGES ON DATABASE laradock_gitlab TO laradock_gitlab;
ALTER ROLE laradock_gitlab CREATEROLE SUPERUSER;
EOSQL
if [ "$GITLAB_POSTGRES_INIT" == 'true' ]; then
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER $GITLAB_POSTGRES_USER WITH PASSWORD '$GITLAB_POSTGRES_PASSWORD';
CREATE DATABASE $GITLAB_POSTGRES_DB;
GRANT ALL PRIVILEGES ON DATABASE $GITLAB_POSTGRES_DB TO $GITLAB_POSTGRES_USER;
ALTER ROLE $GITLAB_POSTGRES_USER CREATEROLE SUPERUSER;
EOSQL
echo
fi

View File

@ -33,9 +33,12 @@
# EOSQL
#
### default database and user for jupyterhub ##############################################
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER laradock_jupyterhub WITH PASSWORD 'laradock_jupyterhub';
CREATE DATABASE laradock_jupyterhub;
GRANT ALL PRIVILEGES ON DATABASE laradock_jupyterhub TO laradock_jupyterhub;
ALTER ROLE laradock_jupyterhub CREATEROLE SUPERUSER;
EOSQL
if [ "$JUPYTERHUB_POSTGRES_INIT" == 'true' ]; then
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER $JUPYTERHUB_POSTGRES_USER WITH PASSWORD '$JUPYTERHUB_POSTGRES_PASSWORD';
CREATE DATABASE $JUPYTERHUB_POSTGRES_DB;
GRANT ALL PRIVILEGES ON DATABASE $JUPYTERHUB_POSTGRES_DB TO $JUPYTERHUB_POSTGRES_USER;
ALTER ROLE $JUPYTERHUB_POSTGRES_USER CREATEROLE SUPERUSER;
EOSQL
echo
fi

View File

@ -0,0 +1,44 @@
#!/bin/bash
#
# Copy createdb.sh.example to createdb.sh
# then uncomment and set the database name and username for the databases you need to create
#
# example: .env has POSTGRES_USER=appuser and the db name you need is myshop_db
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER myuser WITH PASSWORD 'mypassword';
# CREATE DATABASE myshop_db;
# GRANT ALL PRIVILEGES ON DATABASE myshop_db TO myuser;
# EOSQL
#
# this sh script will run automatically when the postgres container starts and $DATA_PATH_HOST/postgres is not found.
#
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER db1 WITH PASSWORD 'db1';
# CREATE DATABASE db1;
# GRANT ALL PRIVILEGES ON DATABASE db1 TO db1;
# EOSQL
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER db2 WITH PASSWORD 'db2';
# CREATE DATABASE db2;
# GRANT ALL PRIVILEGES ON DATABASE db2 TO db2;
# EOSQL
#
# psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
# CREATE USER db3 WITH PASSWORD 'db3';
# CREATE DATABASE db3;
# GRANT ALL PRIVILEGES ON DATABASE db3 TO db3;
# EOSQL
#
### default database and user for sonarqube ##############################################
if [ "$SONARQUBE_POSTGRES_INIT" == 'true' ]; then
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER $SONARQUBE_POSTGRES_USER WITH PASSWORD '$SONARQUBE_POSTGRES_PASSWORD';
CREATE DATABASE $SONARQUBE_POSTGRES_DB;
GRANT ALL PRIVILEGES ON DATABASE $SONARQUBE_POSTGRES_DB TO $SONARQUBE_POSTGRES_USER;
ALTER ROLE $SONARQUBE_POSTGRES_USER CREATEROLE SUPERUSER;
EOSQL
echo
fi

View File

@ -1,7 +1,7 @@
FROM rabbitmq
FROM rabbitmq:alpine
LABEL maintainer="Mahmoud Zalt <mahmoud@zalt.me>"
RUN rabbitmq-plugins enable --offline rabbitmq_management
EXPOSE 15671 15672
EXPOSE 4369 5671 5672 15671 15672 25672

3
redis-cluster/Dockerfile Normal file
View File

@ -0,0 +1,3 @@
FROM grokzen/redis-cluster:latest
LABEL maintainer="hareku <hareku908@gmail.com>"

3
redis-webui/Dockerfile Normal file
View File

@ -0,0 +1,3 @@
FROM erikdubbelboer/phpredisadmin
LABEL maintainer="ahkui <ahkui@outlook.com>"

1377
redis/redis.conf Normal file

File diff suppressed because it is too large Load Diff

View File

@ -4,6 +4,13 @@ LABEL maintainer="Cristian Mello <cristianc.mello@gmail.com>"
VOLUME /data/rethinkdb_data
#Necessary for the backup rethinkdb
RUN apt-get -y update \
&& apt-get -y upgrade \
&& apt-get -y install python-pip \
&& pip install rethinkdb \
&& rm -rf /var/lib/apt/lists/*
RUN cp /etc/rethinkdb/default.conf.sample /etc/rethinkdb/instances.d/instance1.conf
CMD ["rethinkdb", "--bind", "all"]

View File

@ -18,7 +18,7 @@ ENV SOLR_DATAIMPORTHANDLER_MSSQL ${SOLR_DATAIMPORTHANDLER_MSSQL}
# download mssql connector for dataimporthandler
RUN if [ ${SOLR_DATAIMPORTHANDLER_MSSQL} = true ]; then \
curl -L -o /tmp/mssql-jdbc-7.0.0.jre8.jar "https://github.com/Microsoft/mssql-jdbc/releases/download/v7.0.0/mssql-jdbc-7.0.0.jre8.jar" \
&& mkdir /opt/solr/contrib/dataimporthandler/lib \
&& mkdir -p /opt/solr/contrib/dataimporthandler/lib \
&& mv /tmp/mssql-jdbc-7.0.0.jre8.jar "/opt/solr/contrib/dataimporthandler/lib/mssql-jdbc-7.0.0.jre8.jar" \
;fi

3
sonarqube/Dockerfile Normal file
View File

@ -0,0 +1,3 @@
FROM sonarqube:latest
LABEL maintainer="xiagw <fxiaxiaoyu@gmail.com>"

7
traefik/Dockerfile Normal file
View File

@ -0,0 +1,7 @@
FROM traefik:1.7.5-alpine
LABEL maintainer="Luis Coutinho <luis@luiscoutinho.pt>"
COPY traefik.toml acme.json /
RUN chmod 600 /acme.json

0
traefik/acme.json Normal file
View File

23
traefik/traefik.toml Normal file
View File

@ -0,0 +1,23 @@
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[web]
address = ":8080"
[acme]
email = "email@example.org"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"
[[acme.domains]]
main = "localhost"

View File

@ -16,9 +16,23 @@ if [ -n "${PHP_VERSION}" ]; then
sed -i -- 's/=false/=true/g' .env
sed -i -- 's/PHPDBG=true/PHPDBG=false/g' .env
if [ "${PHP_VERSION}" == "5.6" ]; then
sed -i -- 's/^AEROSPIKE_PHP_REPOSITORY=/##AEROSPIKE_PHP_REPOSITORY=/g' .env
sed -i -- 's/^# AEROSPIKE_PHP_REPOSITORY=/AEROSPIKE_PHP_REPOSITORY=/g' .env
# Aerospike C Client SDK 4.0.7 does not support Debian 9.6
# https://github.com/aerospike/aerospike-client-php5/issues/145
sed -i -- 's/PHP_FPM_INSTALL_AEROSPIKE=true/PHP_FPM_INSTALL_AEROSPIKE=false/g' .env
fi
if [ "${PHP_VERSION}" == "7.3" ]; then
# V8JS extension does not yet support PHP 7.3.
sed -i -- 's/WORKSPACE_INSTALL_V8JS=true/WORKSPACE_INSTALL_V8JS=false/g' .env
# This ssh2 extension does not yet support PHP 7.3.
sed -i -- 's/PHP_FPM_INSTALL_SSH2=true/PHP_FPM_INSTALL_SSH2=false/g' .env
# xdebug extension does not yet support PHP 7.3.
sed -i -- 's/PHP_FPM_INSTALL_XDEBUG=true/PHP_FPM_INSTALL_XDEBUG=false/g' .env
# memcached extension does not yet support PHP 7.3.
sed -i -- 's/PHP_FPM_INSTALL_MEMCACHED=true/PHP_FPM_INSTALL_MEMCACHED=false/g' .env
fi
sed -i -- 's/CHANGE_SOURCE=true/CHANGE_SOURCE=false/g' .env
cat .env
docker-compose build ${BUILD_SERVICE}
docker images

View File

@ -14,7 +14,8 @@
ARG LARADOCK_PHP_VERSION
FROM laradock/workspace:2.2-${LARADOCK_PHP_VERSION}
# FROM laradock/workspace:2.2-${LARADOCK_PHP_VERSION}
FROM letsdockerize/laradock-workspace:2.4-${LARADOCK_PHP_VERSION}
LABEL maintainer="Mahmoud Zalt <mahmoud@zalt.me>"
@ -37,22 +38,31 @@ ARG PGID=1000
ENV PGID ${PGID}
# always run apt update when start and after add new source list, then clean up at end.
RUN apt-get update -yqq && \
RUN set -xe; \
apt-get update -yqq && \
pecl channel-update pecl.php.net && \
groupadd -g ${PGID} laradock && \
useradd -u ${PUID} -g laradock -m laradock -G docker_env && \
usermod -p "*" laradock
#
#--------------------------------------------------------------------------
# Mandatory Software's Installation
#--------------------------------------------------------------------------
#
# Mandatory Software's such as ("php-cli", "git", "vim", ....) are
# installed on the base image 'laradock/workspace' image. If you want
# to add more Software's or remove existing one, you need to edit the
# base image (https://github.com/Laradock/workspace).
#
usermod -p "*" laradock -s /bin/bash && \
apt-get install -yqq \
apt-utils \
#
#--------------------------------------------------------------------------
# Mandatory Software's Installation
#--------------------------------------------------------------------------
#
# Mandatory Software's such as ("php-cli", "git", "vim", ....) are
# installed on the base image 'laradock/workspace' image. If you want
# to add more Software's or remove existing one, you need to edit the
# base image (https://github.com/Laradock/workspace).
#
# next lines are here because there is no auto build on dockerhub see https://github.com/laradock/laradock/pull/1903#issuecomment-463142846
libzip-dev zip unzip \
# Install the zip extension
php${LARADOCK_PHP_VERSION}-zip \
# nasm
nasm && \
php -m | grep -q 'zip'
#
#--------------------------------------------------------------------------
@ -90,14 +100,14 @@ RUN sed -i 's/\r//' /root/aliases.sh && \
echo "" >> ~/.bashrc && \
echo "# Load Custom Aliases" >> ~/.bashrc && \
echo "source ~/aliases.sh" >> ~/.bashrc && \
echo "" >> ~/.bashrc
echo "" >> ~/.bashrc
USER laradock
RUN echo "" >> ~/.bashrc && \
echo "# Load Custom Aliases" >> ~/.bashrc && \
echo "source ~/aliases.sh" >> ~/.bashrc && \
echo "" >> ~/.bashrc
echo "" >> ~/.bashrc
###########################################################################
# Composer:
@ -108,9 +118,16 @@ USER root
# Add the composer.json
COPY ./composer.json /home/laradock/.composer/composer.json
# Add the auth.json for magento 2 credentials
COPY ./auth.json /home/laradock/.composer/auth.json
# Make sure that ~/.composer belongs to laradock
RUN chown -R laradock:laradock /home/laradock/.composer
# Export composer vendor path
RUN echo "" >> ~/.bashrc && \
echo 'export PATH="$HOME/.composer/vendor/bin:$PATH"' >> ~/.bashrc
USER laradock
# Check if global install need to be ran
@ -122,6 +139,15 @@ RUN if [ ${COMPOSER_GLOBAL_INSTALL} = true ]; then \
composer global install \
;fi
# Check if auth file is disabled
ARG COMPOSER_AUTH=false
ENV COMPOSER_AUTH ${COMPOSER_AUTH}
RUN if [ ${COMPOSER_AUTH} = false ]; then \
# remove the file
rm /home/laradock/.composer/auth.json \
;fi
ARG COMPOSER_REPO_PACKAGIST
ENV COMPOSER_REPO_PACKAGIST ${COMPOSER_REPO_PACKAGIST}
@ -174,6 +200,21 @@ RUN if [ ${INSTALL_DRUSH} = true ]; then \
drush core-status \
;fi
###########################################################################
# WP CLI:
###########################################################################
# The command line interface for WordPress
USER root
ARG INSTALL_WP_CLI=false
RUN if [ ${INSTALL_WP_CLI} = true ]; then \
curl -fsSL -o /usr/local/bin/wp https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
chmod +x /usr/local/bin/wp \
;fi
###########################################################################
# SSH2:
###########################################################################
@ -194,11 +235,11 @@ RUN if [ ${INSTALL_SSH2} = true ]; then \
USER root
ARG INSTALL_GMP=false
ARG PHP_VERSION=${PHP_VERSION}
ARG PHP_VERSION=${LARADOCK_PHP_VERSION}
RUN if [ ${INSTALL_GMP} = true ]; then \
# Install the PHP GMP extension
apt-get -y install php${PHP_VERSION}-gmp \
apt-get -y install php${LARADOCK_PHP_VERSION}-gmp \
;fi
###########################################################################
@ -214,6 +255,20 @@ RUN if [ ${INSTALL_SOAP} = true ]; then \
apt-get -y install libxml2-dev php${LARADOCK_PHP_VERSION}-soap \
;fi
###########################################################################
# XSL:
###########################################################################
USER root
ARG INSTALL_XSL=false
RUN if [ ${INSTALL_XSL} = true ]; then \
# Install the PHP XSL extension
apt-get -y install libxslt-dev php${LARADOCK_PHP_VERSION}-xsl \
;fi
###########################################################################
# LDAP:
###########################################################################
@ -259,8 +314,7 @@ ARG INSTALL_XDEBUG=false
RUN if [ ${INSTALL_XDEBUG} = true ]; then \
# Load the xdebug extension only with phpunit commands
apt-get install -y php${LARADOCK_PHP_VERSION}-xdebug && \
sed -i 's/^;//g' /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/20-xdebug.ini && \
echo "alias phpunit='php -dzend_extension=xdebug.so /var/www/vendor/bin/phpunit'" >> ~/.bashrc \
sed -i 's/^;//g' /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/20-xdebug.ini \
;fi
# ADD for REMOTE debugging
@ -294,7 +348,7 @@ ARG BLACKFIRE_CLIENT_TOKEN
ENV BLACKFIRE_CLIENT_TOKEN ${BLACKFIRE_CLIENT_TOKEN}
RUN if [ ${INSTALL_XDEBUG} = false -a ${INSTALL_BLACKFIRE} = true ]; then \
curl -L https://packagecloud.io/gpg.key | apt-key add - && \
curl -L https://packages.blackfire.io/gpg.key | apt-key add - && \
echo "deb http://packages.blackfire.io/debian any main" | tee /etc/apt/sources.list.d/blackfire.list && \
apt-get update -yqq && \
apt-get install blackfire-agent \
@ -352,6 +406,37 @@ RUN if [ ${INSTALL_AMQP} = true ]; then \
ln -s /etc/php/${LARADOCK_PHP_VERSION}/mods-available/amqp.ini /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/30-amqp.ini \
;fi
###########################################################################
# CASSANDRA:
###########################################################################
ARG INSTALL_CASSANDRA=false
RUN if [ ${INSTALL_CASSANDRA} = true ]; then \
apt-get install libgmp-dev -y && \
curl https://downloads.datastax.com/cpp-driver/ubuntu/18.04/dependencies/libuv/v1.28.0/libuv1-dev_1.28.0-1_amd64.deb -o libuv1-dev.deb && \
curl https://downloads.datastax.com/cpp-driver/ubuntu/18.04/dependencies/libuv/v1.28.0/libuv1_1.28.0-1_amd64.deb -o libuv1.deb && \
curl https://downloads.datastax.com/cpp-driver/ubuntu/18.04/cassandra/v2.12.0/cassandra-cpp-driver-dev_2.12.0-1_amd64.deb -o cassandra-cpp-driver-dev.deb && \
curl https://downloads.datastax.com/cpp-driver/ubuntu/18.04/cassandra/v2.12.0/cassandra-cpp-driver_2.12.0-1_amd64.deb -o cassandra-cpp-driver.deb && \
dpkg -i libuv1.deb && \
dpkg -i libuv1-dev.deb && \
dpkg -i cassandra-cpp-driver.deb && \
dpkg -i cassandra-cpp-driver-dev.deb && \
rm libuv1.deb libuv1-dev.deb cassandra-cpp-driver-dev.deb cassandra-cpp-driver.deb && \
cd /usr/src && \
git clone https://github.com/datastax/php-driver.git && \
cd /usr/src/php-driver/ext && \
phpize && \
mkdir /usr/src/php-driver/build && \
cd /usr/src/php-driver/build && \
../ext/configure > /dev/null && \
make clean >/dev/null && \
make >/dev/null 2>&1 && \
make install && \
echo "extension=cassandra.so" >> /etc/php/${LARADOCK_PHP_VERSION}/mods-available/cassandra.ini && \
ln -s /etc/php/${LARADOCK_PHP_VERSION}/mods-available/cassandra.ini /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/30-cassandra.ini \
;fi
###########################################################################
# PHP REDIS EXTENSION
###########################################################################
@ -359,10 +444,7 @@ RUN if [ ${INSTALL_AMQP} = true ]; then \
ARG INSTALL_PHPREDIS=false
RUN if [ ${INSTALL_PHPREDIS} = true ]; then \
# Install Php Redis extension
printf "\n" | pecl -q install -o -f redis && \
echo "extension=redis.so" >> /etc/php/${LARADOCK_PHP_VERSION}/mods-available/redis.ini && \
phpenmod redis \
apt-get install -yqq php-redis \
;fi
###########################################################################
@ -374,7 +456,7 @@ ARG INSTALL_SWOOLE=false
RUN if [ ${INSTALL_SWOOLE} = true ]; then \
# Install Php Swoole Extension
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
pecl -q install swoole-2.0.11; \
pecl -q install swoole-2.0.10; \
else \
if [ $(php -r "echo PHP_MINOR_VERSION;") = "0" ]; then \
pecl install swoole-2.2.0; \
@ -384,6 +466,23 @@ RUN if [ ${INSTALL_SWOOLE} = true ]; then \
fi && \
echo "extension=swoole.so" >> /etc/php/${LARADOCK_PHP_VERSION}/mods-available/swoole.ini && \
ln -s /etc/php/${LARADOCK_PHP_VERSION}/mods-available/swoole.ini /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/20-swoole.ini \
&& php -m | grep -q 'swoole' \
;fi
###########################################################################
# Taint EXTENSION
###########################################################################
ARG INSTALL_TAINT=false
RUN if [ "${INSTALL_TAINT}" = true ]; then \
# Install Php TAINT Extension
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "7" ]; then \
pecl install taint && \
echo "extension=taint.so" >> /etc/php/${LARADOCK_PHP_VERSION}/mods-available/taint.ini && \
ln -s /etc/php/${LARADOCK_PHP_VERSION}/mods-available/taint.ini /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/20-taint.ini && \
php -m | grep -q 'taint'; \
fi \
;fi
###########################################################################
@ -397,6 +496,31 @@ RUN if [ ${INSTALL_LIBPNG} = true ]; then \
apt-get install libpng16-16 \
;fi
###########################################################################
# Inotify EXTENSION:
###########################################################################
ARG INSTALL_INOTIFY=false
RUN if [ ${INSTALL_INOTIFY} = true ]; then \
pecl -q install inotify && \
echo "extension=inotify.so" >> /etc/php/${LARADOCK_PHP_VERSION}/mods-available/inotify.ini && \
ln -s /etc/php/${LARADOCK_PHP_VERSION}/mods-available/inotify.ini /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/20-inotify.ini \
;fi
###########################################################################
# fswatch
###########################################################################
ARG INSTALL_FSWATCH=false
RUN if [ ${INSTALL_FSWATCH} = true ]; then \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 47FE03C1 \
&& add-apt-repository -y ppa:hadret/fswatch \
|| apt-get update -yqq \
&& apt-get -y install fswatch \
;fi
###########################################################################
# IonCube Loader
###########################################################################
@ -440,6 +564,7 @@ ARG INSTALL_NODE=false
ARG INSTALL_NPM_GULP=false
ARG INSTALL_NPM_BOWER=false
ARG INSTALL_NPM_VUE_CLI=false
ARG INSTALL_NPM_ANGULAR_CLI=false
ARG NPM_REGISTRY
ENV NPM_REGISTRY ${NPM_REGISTRY}
ENV NVM_DIR /home/laradock/.nvm
@ -464,6 +589,9 @@ RUN if [ ${INSTALL_NODE} = true ]; then \
&& if [ ${INSTALL_NPM_VUE_CLI} = true ]; then \
npm install -g @vue/cli \
;fi \
&& if [ ${INSTALL_NPM_ANGULAR_CLI} = true ]; then \
npm install -g @angular/cli \
;fi \
&& ln -s `npm bin --global` /home/laradock/.node-bin \
;fi
@ -545,26 +673,30 @@ ENV PATH $PATH:/home/laradock/.yarn/bin
USER root
ARG INSTALL_AEROSPIKE=false
ARG AEROSPIKE_PHP_REPOSITORY
RUN if [ ${INSTALL_AEROSPIKE} = true ]; then \
RUN set -xe; \
if [ ${INSTALL_AEROSPIKE} = true ]; then \
# Fix dependencies for PHPUnit within aerospike extension
apt-get -y install sudo wget && \
# Install the php aerospike extension
curl -L -o /tmp/aerospike-client-php.tar.gz ${AEROSPIKE_PHP_REPOSITORY} \
&& mkdir -p aerospike-client-php \
&& tar -C aerospike-client-php -zxvf /tmp/aerospike-client-php.tar.gz --strip 1 \
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
curl -L -o /tmp/aerospike-client-php.tar.gz https://github.com/aerospike/aerospike-client-php5/archive/master.tar.gz; \
else \
curl -L -o /tmp/aerospike-client-php.tar.gz https://github.com/aerospike/aerospike-client-php/archive/master.tar.gz; \
fi \
&& mkdir -p /tmp/aerospike-client-php \
&& tar -C /tmp/aerospike-client-php -zxvf /tmp/aerospike-client-php.tar.gz --strip 1 \
&& \
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
( \
cd aerospike-client-php/src/aerospike \
cd /tmp/aerospike-client-php/src/aerospike \
&& phpize \
&& ./build.sh \
&& make install \
) \
else \
( \
cd aerospike-client-php/src \
cd /tmp/aerospike-client-php/src \
&& phpize \
&& ./build.sh \
&& make install \
@ -574,7 +706,7 @@ RUN if [ ${INSTALL_AEROSPIKE} = true ]; then \
&& echo 'extension=aerospike.so' >> /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/aerospike.ini \
&& echo 'aerospike.udf.lua_system_path=/usr/local/aerospike/lua' >> /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/aerospike.ini \
&& echo 'aerospike.udf.lua_user_path=/usr/local/aerospike/usr-lua' >> /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/aerospike.ini \
;fi
;fi
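A minimal check that the Aerospike client extension built correctly (assuming the workspace service name); the extension exposes a single Aerospike class:

    docker-compose exec workspace php -r 'var_dump(class_exists("Aerospike"));'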
###########################################################################
# PHP V8JS:
@ -584,14 +716,19 @@ USER root
ARG INSTALL_V8JS=false
RUN if [ ${INSTALL_V8JS} = true ]; then \
# Install the php V8JS extension
RUN set -xe; \
if [ ${INSTALL_V8JS} = true ]; then \
add-apt-repository -y ppa:pinepain/libv8-archived \
&& apt-get update -yqq \
&& apt-get install -y php${LARADOCK_PHP_VERSION}-xml php${LARADOCK_PHP_VERSION}-dev php-pear libv8-5.4 \
&& pecl install v8js \
&& apt-get install -y libv8-5.4 && \
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
pecl install v8js-0.6.4; \
else \
pecl install v8js; \
fi \
&& echo "extension=v8js.so" >> /etc/php/${LARADOCK_PHP_VERSION}/cli/php.ini \
;fi
&& php -m | grep -q 'v8js' \
;fi
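V8Js can be exercised directly from the CLI once the extension loads; executeString() evaluates JavaScript inside the embedded V8 engine and returns the result to PHP:

    docker-compose exec workspace php -r '$v8 = new V8Js(); var_dump($v8->executeString("1 + 2"));'   # int(3)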
###########################################################################
# Laravel Envoy:
@ -612,6 +749,15 @@ RUN if [ ${INSTALL_LARAVEL_ENVOY} = true ]; then \
USER laradock
ARG INSTALL_LARAVEL_INSTALLER=false
RUN if [ ${INSTALL_LARAVEL_INSTALLER} = true ]; then \
# Install the Laravel Installer
composer global require "laravel/installer" \
;fi
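Because the installer is required globally for the laradock user, invoke it as that user through a login shell so Composer's global bin directory is on PATH; blog is a placeholder application name:

    docker-compose exec --user=laradock workspace bash -lc "laravel new blog"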
USER root
ARG COMPOSER_REPO_PACKAGIST
ENV COMPOSER_REPO_PACKAGIST ${COMPOSER_REPO_PACKAGIST}
@ -689,7 +835,8 @@ RUN if [ ${INSTALL_LINUXBREW} = true ]; then \
ARG INSTALL_MSSQL=false
RUN set -eux; if [ ${INSTALL_MSSQL} = true ]; then \
RUN set -eux; \
if [ ${INSTALL_MSSQL} = true ]; then \
if [ $(php -r "echo PHP_MAJOR_VERSION;") = "5" ]; then \
apt-get -y install php5.6-sybase freetds-bin freetds-common libsybdb5 \
&& php -m | grep -q 'mssql' \
@ -707,13 +854,17 @@ RUN set -eux; if [ ${INSTALL_MSSQL} = true ]; then \
ln -sfn /opt/mssql-tools/bin/bcp /usr/bin/bcp && \
echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
locale-gen && \
pecl install sqlsrv pdo_sqlsrv && \
if [ $(php -r "echo PHP_MINOR_VERSION;") = "0" ]; then \
pecl install sqlsrv-5.3.0 pdo_sqlsrv-5.3.0 \
;else \
pecl install sqlsrv pdo_sqlsrv \
;fi && \
echo "extension=sqlsrv.so" > /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/20-sqlsrv.ini && \
echo "extension=pdo_sqlsrv.so" > /etc/php/${LARADOCK_PHP_VERSION}/cli/conf.d/20-pdo_sqlsrv.ini \
&& php -m | grep -q 'sqlsrv' \
&& php -m | grep -q 'pdo_sqlsrv' \
;fi \
;fi
;fi
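To confirm both drivers load and that the tools package works end to end, something like the following can be run against Laradock's mssql service (the service name and SA password come from your .env; the values below are placeholders):

    docker-compose exec workspace php -r 'var_dump(extension_loaded("sqlsrv"), extension_loaded("pdo_sqlsrv"));'
    docker-compose exec workspace /opt/mssql-tools/bin/sqlcmd -S mssql -U sa -P "<MSSQL_PASSWORD>" -Q "SELECT @@VERSION"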
###########################################################################
# Minio:
@ -779,6 +930,23 @@ RUN if [ ${INSTALL_PYTHON} = true ]; then \
&& python -m pip install --upgrade virtualenv \
;fi
###########################################################################
# POWERLINE:
###########################################################################
USER root
ARG INSTALL_POWERLINE=false
RUN if [ ${INSTALL_POWERLINE} = true ]; then \
if [ ${INSTALL_PYTHON} = true ]; then \
python -m pip install --upgrade powerline-status && \
echo "" >> /etc/bash.bashrc && \
echo ". /usr/local/lib/python2.7/dist-packages/powerline/bindings/bash/powerline.sh" >> /etc/bash.bashrc \
;fi \
;fi
USER laradock
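Powerline is wired into /etc/bash.bashrc, so any new interactive bash session picks it up; to double-check the Python package itself (it is only installed when INSTALL_PYTHON is also true):

    docker-compose exec workspace python -m pip show powerline-status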
###########################################################################
# ImageMagick:
###########################################################################
@ -823,15 +991,6 @@ RUN if [ ${INSTALL_PG_CLIENT} = true ]; then \
&& apt-get -y install postgresql-client-10 \
;fi
###########################################################################
# nasm
###########################################################################
USER root
RUN apt-get update -yqq \
&& apt-get -yqq install nasm
###########################################################################
# Dusk Dependencies:
###########################################################################
@ -890,11 +1049,62 @@ RUN if [ ${INSTALL_MYSQL_CLIENT} = true ]; then \
apt-get -y install mysql-client \
;fi
###########################################################################
# ping:
###########################################################################
USER root
ARG INSTALL_PING=false
RUN if [ ${INSTALL_PING} = true ]; then \
apt-get update -yqq && \
apt-get -y install inetutils-ping \
;fi
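inetutils-ping is mainly useful for poking other services on the Compose network from inside the workspace; mysql below is just an example service name:

    docker-compose exec workspace ping -c 3 mysql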
###########################################################################
# sshpass:
###########################################################################
USER root
ARG INSTALL_SSHPASS=false
RUN if [ ${INSTALL_SSHPASS} = true ]; then \
apt-get update -yqq && \
apt-get -y install sshpass \
;fi
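sshpass exists for scripted, non-interactive SSH where key-based auth is not an option; the host, user and password below are placeholders:

    docker-compose exec workspace sshpass -p "<password>" ssh -o StrictHostKeyChecking=no deploy@example.com "uptime"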
###########################################################################
# FFMpeg:
###########################################################################
USER root
ARG INSTALL_FFMPEG=false
RUN if [ ${INSTALL_FFMPEG} = true ]; then \
apt-get -y install ffmpeg \
;fi
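A quick way to confirm the ffmpeg binary and its codecs are available, plus a typical extraction job (file names are placeholders):

    docker-compose exec workspace ffmpeg -version
    docker-compose exec workspace ffmpeg -i input.mp4 -vn -codec:a libmp3lame output.mp3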
###########################################################################
# GNU Parallel:
###########################################################################
USER root
ARG INSTALL_GNU_PARALLEL=false
RUN if [ ${INSTALL_GNU_PARALLEL} = true ]; then \
apt-get -y install parallel \
;fi
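GNU Parallel is handy for fanning a command out over many files with one job per core; for example, compressing old Laravel logs (the path is illustrative, relative to /var/www):

    docker-compose exec workspace bash -lc 'parallel gzip ::: storage/logs/*.log'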
###########################################################################
# Check PHP version:
###########################################################################
RUN php -v | head -n 1 | grep -q "PHP ${LARADOCK_PHP_VERSION}."
RUN set -xe; php -v | head -n 1 | grep -q "PHP ${LARADOCK_PHP_VERSION}."
#
#--------------------------------------------------------------------------

workspace/aliases.sh

@ -46,8 +46,8 @@ alias h="history"
alias j="jobs"
alias e='exit'
alias c="clear"
alias cla="clear && ls -l"
alias cll="clear && ls -la"
alias cla="clear && ls -la"
alias cll="clear && ls -l"
alias cls="clear && ls"
alias code="cd /var/www"
alias ea="vi ~/aliases.sh"
@ -107,6 +107,13 @@ alias gd="git --no-pager diff"
alias git-revert="git reset --hard && git clean -df"
alias gs="git status"
alias whoops="git reset --hard && git clean -df"
alias glog="git log --oneline --decorate --graph"
alias gloga="git log --oneline --decorate --graph --all"
alias gsh="git show"
alias grb="git rebase -i"
alias gbr="git branch"
alias gc="git commit"
alias gck="git checkout"
# Create a new directory and enter it
function mkd() {

8
workspace/auth.json Normal file

@ -0,0 +1,8 @@
{
"http-basic": {
"repo.magento.com": {
"username": "",
"password": ""
}
}
}
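The empty username and password fields take a Magento Marketplace public and private key. Rather than editing the file by hand, they can also be written with Composer itself (the keys below are placeholders):

    composer config --global http-basic.repo.magento.com <public-key> <private-key>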

10
zookeeper/Dockerfile Normal file

@ -0,0 +1,10 @@
FROM zookeeper:latest
LABEL maintainer="Hyduan <hyduan96@qq.com>"
VOLUME /data
VOLUME /datalog
EXPOSE 2181
CMD ["zkServer.sh", "start-foreground"]
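The image can be tried stand-alone before wiring it into docker-compose; the srvr four-letter word is whitelisted by default and confirms the server answers (netcat on the host is assumed):

    docker build -t laradock_zookeeper ./zookeeper
    docker run -d --name zookeeper -p 2181:2181 laradock_zookeeper
    echo srvr | nc localhost 2181     # prints version, latency and mode when healthy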