Compare commits

...

35 Commits

SHA1       | Message                                                                                                    | CI (Build And Publish Production Image)     | Date
2e2e75fe87 | fix: update JwtService to handle default expiration and add tests for token generation                     | passed in 39s                               | 2026-03-28 03:40:03 -03:00
8f508034d5 | fix: update Docker configuration for image source and enhance logging in supervisord                       | passed in 14s                               | 2026-03-28 03:32:08 -03:00
7108aff54d | fix: add access and error log configuration for Nginx                                                      | passed in 39s                               | 2026-03-28 03:26:50 -03:00
b0a4278699 | fix: update stack deployment to use production Docker Compose file                                         | —                                           | 2026-03-28 03:25:35 -03:00
73c51e514c | fix: update Docker Compose configuration for service names and database connection                         | failed after 7s                             | 2026-03-28 03:24:00 -03:00
596a17b252 | fix: update supervisord configuration to log output to stdout                                              | passed in 9s                                | 2026-03-28 03:14:15 -03:00
5ff28fa3d4 | fix: update homepage logo and href in Docker Compose configuration                                         | passed in 12s                               | 2026-03-28 03:09:35 -03:00
a672c9efed | fix: correct stack name in Portainer deployment configuration                                              | passed in 35s                               | 2026-03-28 03:07:44 -03:00
bfe8965c06 | fix: enhance Portainer API interaction with DNS fallback and improved error handling                       | passed in 11s                               | 2026-03-28 03:06:37 -03:00
c72595d396 | fix: improve Portainer deployment script with enhanced logging and error handling                          | failed after 11s                            | 2026-03-28 03:05:05 -03:00
51b596c7a5 | fix: update Portainer API URL and correct image reference in Docker Compose                                | failed after 12s                            | 2026-03-28 03:03:33 -03:00
e4e2ae3479 | fix: sanitize Portainer API stack response output for improved logging                                     | failed after 12s                            | 2026-03-28 02:58:37 -03:00
808c0d0a22 | fix: update Portainer API URL to use the correct lab address                                               | failed after 11s                            | 2026-03-28 02:55:59 -03:00
e3938d2351 | fix: add network info logging before Portainer deployment                                                  | failed after 11s                            | 2026-03-28 02:53:05 -03:00
8a04363b11 | fix: enhance Portainer API deployment with detailed error handling and logging                             | failed after 11s                            | 2026-03-28 02:49:30 -03:00
1038f40721 | fix: update Portainer API URL to include port number for deployment                                        | failed after 7s                             | 2026-03-28 02:30:01 -03:00
4fd90b2497 | fix: streamline deployment process by removing Gitea registry login steps and enhancing Portainer API integration | failed after 16s                     | 2026-03-28 02:21:36 -03:00
cb74fdef7b | fix: remove Gitea container registry login and push steps from build workflow                              | passed in 7s                                | 2026-03-28 01:34:08 -03:00
0ed6f3824a | fix: update build workflow to combine tagging and pushing of registry images                               | failed after 17s                            | 2026-03-28 01:27:09 -03:00
572dc49bc9 | fix: update Docker image tag format in build workflow                                                      | failed after 1m37s                          | 2026-03-28 00:49:03 -03:00
29627a0062 | fix: correct syntax for Docker image tags in build workflow                                                | failed after 16s                            | 2026-03-28 00:42:16 -03:00
776941b323 | fix: update Docker Hub login step to be optional and clean up registry login process                       | failed after 1m28s                          | 2026-03-28 00:40:15 -03:00
5f8834d0d4 | fix: update Docker registry configuration and login endpoint in build workflow                             | failed after 1m18s                          | 2026-03-28 00:31:33 -03:00
854fabd874 | fix: update logging of Docker registry credentials to use base64 encoding                                  | failed after 22s                            | 2026-03-27 22:28:45 -03:00
000bc0cc36 | fix: update Docker registry credentials logging to use environment variables                               | failed after 12s                            | 2026-03-27 22:27:51 -03:00
4d27a256d2 | fix: update logging of Docker registry credentials to use secrets                                          | failed after 21s                            | 2026-03-27 22:26:12 -03:00
08bfced7ce | fix: add logging for Docker registry login credentials in build workflow                                   | failed after 7s                             | 2026-03-27 22:25:18 -03:00
c266be0eba | Remove CI workflow and instructions documentation files                                                    | failed after 23s                            | 2026-03-27 22:18:02 -03:00
837214f41a | fix: update Gitea registry login endpoint in build workflow                                                | failed after 7s                             | 2026-03-27 17:07:30 -03:00
fa4bf360ff | fix: update registry URL in build workflow                                                                 | failed after 12s                            | 2026-03-27 17:05:51 -03:00
2072dd299d | fix: enhance Gitea registry login step to handle empty secrets                                             | failed after 27s                            | 2026-03-27 16:50:50 -03:00
af391efa89 | fix: update Gitea registry login step to use correct secret names                                          | failed after 22s                            | 2026-03-27 16:46:54 -03:00
8893e85d53 | fix: move Docker Hub login step into build job                                                             | failed after 1m48s                          | 2026-03-27 16:44:14 -03:00
14ecd2fa18 | fix: add Docker Hub login step to build workflow                                                           | Log In To Docker Hub passed in 2s; build failed after 12s | 2026-03-27 16:42:57 -03:00
0fa3d28c1b | Merge pull request 'develop' (#8) from develop into main (Reviewed-on: #8)                                 | failed after 24s                            | 2026-03-27 16:35:58 -03:00
11 changed files with 245 additions and 1112 deletions

View File

@@ -10,8 +10,10 @@ jobs:
     name: Build And Publish Production Image
     runs-on: ubuntu-latest
     env:
-      REGISTRY: gitea.lab
+      REGISTRY: gitea.lab:80
       IMAGE_NAME: sancho41/condado-newsletter
+      REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
+      REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
     steps:
       - uses: actions/checkout@v4
         with:
@@ -20,18 +22,164 @@ jobs:
       - name: Verify Docker CLI
         run: docker version
+      - name: Log in to Docker Hub (optional)
+        if: ${{ secrets.DOCKERHUB_USERNAME != '' && secrets.DOCKERHUB_TOKEN != '' }}
+        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login docker.io -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
       - name: Build all-in-one image
         run: docker build -t condado-newsletter:latest -f Dockerfile.allinone .
-      - name: Log in to Gitea container registry
-        run: echo "${{ secrets.GITEA_REGISTRY_PASSWORD }}" | docker login ${REGISTRY} -u "${{ secrets.GITEA_REGISTRY_USERNAME }}" --password-stdin
-      - name: Tag registry images
+      - name: Tag
         run: |
           docker tag condado-newsletter:latest ${REGISTRY}/${IMAGE_NAME}:latest
           docker tag condado-newsletter:latest ${REGISTRY}/${IMAGE_NAME}:${{ github.sha }}
-      - name: Push registry images
-        run: |
-          docker push ${REGISTRY}/${IMAGE_NAME}:latest
-          docker push ${REGISTRY}/${IMAGE_NAME}:${{ github.sha }}
+      - name: Deploy stack via Portainer API
+        env:
+          STACK_NAME: codado-newsletter-stack
+          PORTAINER_URL: http://portainer.lab/
+          PORTAINER_API_KEY: ${{ secrets.PORTAINER_API_KEY }}
+          PORTAINER_ENDPOINT_ID: ${{ secrets.PORTAINER_ENDPOINT_ID }}
+        run: |
+          set -u
+          set +e
+          PORTAINER_BASE_URL=$(printf '%s' "${PORTAINER_URL}" | sed -E 's/[[:space:]]+$//; s#/*$##')
+          echo "Portainer deploy debug"
+          echo "PORTAINER_URL=${PORTAINER_URL}"
+          echo "PORTAINER_BASE_URL=${PORTAINER_BASE_URL}"
+          echo "STACK_NAME=${STACK_NAME}"
+          echo "PORTAINER_ENDPOINT_ID=${PORTAINER_ENDPOINT_ID}"
+          echo "HTTP_PROXY=${HTTP_PROXY:-<empty>}"
+          echo "HTTPS_PROXY=${HTTPS_PROXY:-<empty>}"
+          echo "NO_PROXY=${NO_PROXY:-<empty>}"
+          echo "Current runner network info:"
+          if command -v ip >/dev/null 2>&1; then
+            ip -4 addr show || true
+            ip route || true
+          else
+            hostname -I || true
+          fi
+          PORTAINER_HOST=$(printf '%s' "${PORTAINER_BASE_URL}" | sed -E 's#^[a-zA-Z]+://##; s#/.*$##; s/:.*$//')
+          echo "Resolved host target: ${PORTAINER_HOST}"
+          PORTAINER_IP=""
+          ACTIVE_PORTAINER_BASE_URL="${PORTAINER_BASE_URL}"
+          if command -v getent >/dev/null 2>&1; then
+            echo "Host lookup (getent):"
+            getent hosts "${PORTAINER_HOST}" || true
+            PORTAINER_IP=$(getent hosts "${PORTAINER_HOST}" | awk 'NR==1{print $1}')
+            if [ -n "${PORTAINER_IP}" ]; then
+              PORTAINER_IP_BASE_URL="${PORTAINER_BASE_URL/${PORTAINER_HOST}/${PORTAINER_IP}}"
+              echo "Portainer IP fallback URL: ${PORTAINER_IP_BASE_URL}"
+            fi
+          fi
+          STACKS_BODY=$(mktemp)
+          STACKS_ERR=$(mktemp)
+          STACKS_HTTP_CODE=$(curl -sS \
+            --noproxy "*" \
+            -o "${STACKS_BODY}" \
+            -w "%{http_code}" \
+            "${ACTIVE_PORTAINER_BASE_URL}/api/stacks" \
+            -H "X-API-Key: ${PORTAINER_API_KEY}" \
+            2>"${STACKS_ERR}")
+          STACKS_CURL_EXIT=$?
+          if [ "${STACKS_CURL_EXIT}" -eq 6 ] && [ -n "${PORTAINER_IP:-}" ]; then
+            echo "Retrying GET /api/stacks with IP fallback due to DNS failure"
+            STACKS_HTTP_CODE=$(curl -sS \
+              --noproxy "*" \
+              -o "${STACKS_BODY}" \
+              -w "%{http_code}" \
+              "${PORTAINER_IP_BASE_URL}/api/stacks" \
+              -H "X-API-Key: ${PORTAINER_API_KEY}" \
+              2>"${STACKS_ERR}")
+            STACKS_CURL_EXIT=$?
+            if [ "${STACKS_CURL_EXIT}" -eq 0 ]; then
+              ACTIVE_PORTAINER_BASE_URL="${PORTAINER_IP_BASE_URL}"
+            fi
+          fi
+          echo "GET /api/stacks curl exit: ${STACKS_CURL_EXIT}"
+          echo "GET /api/stacks http code: ${STACKS_HTTP_CODE}"
+          echo "GET /api/stacks stderr:"
+          cat "${STACKS_ERR}" || true
+          echo "GET /api/stacks response (sanitized):"
+          jq -r '.[] | "Id=\(.Id) Name=\(.Name) EndpointId=\(.EndpointId)"' "${STACKS_BODY}" || true
+          if [ "${STACKS_CURL_EXIT}" -ne 0 ]; then
+            echo "Failed to reach Portainer API while listing stacks."
+            exit "${STACKS_CURL_EXIT}"
+          fi
+          if [ "${STACKS_HTTP_CODE}" -lt 200 ] || [ "${STACKS_HTTP_CODE}" -ge 300 ]; then
+            echo "Portainer returned a non-success status for stack listing."
+            exit 1
+          fi
+          STACK_ID=$(jq -r --arg stack_name "${STACK_NAME}" '.[] | select(.Name == $stack_name) | .Id' "${STACKS_BODY}" | head -n 1)
+          APPLY_BODY=$(mktemp)
+          APPLY_ERR=$(mktemp)
+          if [ -n "${STACK_ID}" ]; then
+            echo "Existing stack found with id=${STACK_ID}; sending update request"
+            PAYLOAD=$(jq -n \
+              --rawfile stack_file docker-compose.prod.yml \
+              '{StackFileContent: $stack_file, Env: [], Prune: false, PullImage: false}')
+            APPLY_HTTP_CODE=$(curl -sS -X PUT \
+              --noproxy "*" \
+              -o "${APPLY_BODY}" \
+              -w "%{http_code}" \
+              "${ACTIVE_PORTAINER_BASE_URL}/api/stacks/${STACK_ID}?endpointId=${PORTAINER_ENDPOINT_ID}" \
+              -H "X-API-Key: ${PORTAINER_API_KEY}" \
+              -H "Content-Type: application/json" \
+              -d "${PAYLOAD}" \
+              2>"${APPLY_ERR}")
+            APPLY_CURL_EXIT=$?
+          else
+            echo "Stack not found; sending create request"
+            PAYLOAD=$(jq -n \
+              --arg name "${STACK_NAME}" \
+              --rawfile stack_file docker-compose.prod.yml \
+              '{Name: $name, StackFileContent: $stack_file, Env: [], FromAppTemplate: false}')
+            APPLY_HTTP_CODE=$(curl -sS -X POST \
+              --noproxy "*" \
+              -o "${APPLY_BODY}" \
+              -w "%{http_code}" \
+              "${ACTIVE_PORTAINER_BASE_URL}/api/stacks/create/standalone/string?endpointId=${PORTAINER_ENDPOINT_ID}" \
+              -H "X-API-Key: ${PORTAINER_API_KEY}" \
+              -H "Content-Type: application/json" \
+              -d "${PAYLOAD}" \
+              2>"${APPLY_ERR}")
+            APPLY_CURL_EXIT=$?
+          fi
+          echo "Apply curl exit: ${APPLY_CURL_EXIT}"
+          echo "Apply http code: ${APPLY_HTTP_CODE}"
+          echo "Apply stderr:"
+          cat "${APPLY_ERR}" || true
+          echo "Apply response body:"
+          cat "${APPLY_BODY}" || true
+          if [ "${APPLY_CURL_EXIT}" -ne 0 ]; then
+            echo "Failed to reach Portainer API while applying stack changes."
+            exit "${APPLY_CURL_EXIT}"
+          fi
+          if [ "${APPLY_HTTP_CODE}" -lt 200 ] || [ "${APPLY_HTTP_CODE}" -ge 300 ]; then
+            echo "Portainer returned a non-success status while applying stack changes."
+            exit 1
+          fi
+          echo "Portainer deploy step completed successfully"
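The deploy step above follows an "update if the stack exists, otherwise create it" pattern against the Portainer stacks API. A minimal sketch of that decision, with the HTTP method/URL construction factored into a testable helper (the function name and sample values are illustrative; the endpoint paths are the ones the script uses):

```shell
# Sketch of the update-or-create decision in the deploy step above.
# build_apply_request prints the method and URL the script would call,
# given the stack id found in GET /api/stacks ("" when no stack matched).
build_apply_request() {
  base_url="$1"     # e.g. http://portainer.lab
  endpoint_id="$2"  # Portainer endpoint id
  stack_id="$3"     # empty when the stack does not exist yet
  if [ -n "${stack_id}" ]; then
    # Existing stack: update its compose file in place via PUT
    printf 'PUT %s/api/stacks/%s?endpointId=%s\n' "${base_url}" "${stack_id}" "${endpoint_id}"
  else
    # No stack yet: create a standalone stack from a compose string via POST
    printf 'POST %s/api/stacks/create/standalone/string?endpointId=%s\n' "${base_url}" "${endpoint_id}"
  fi
}

build_apply_request "http://portainer.lab" 2 ""   # -> POST .../api/stacks/create/standalone/string?endpointId=2
build_apply_request "http://portainer.lab" 2 41   # -> PUT .../api/stacks/41?endpointId=2
```

Keeping the URL construction separate from the `curl` invocation makes the branch logic testable without a live Portainer instance.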

View File

@@ -1,57 +0,0 @@
name: CI

on:
  pull_request:
    branches: ["develop"]

jobs:
  backend-test:
    name: Backend Tests
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: backend
    steps:
      - uses: actions/checkout@v4
      - name: Set up JDK 21
        uses: actions/setup-java@v4
        with:
          java-version: "21"
          distribution: temurin
          cache: gradle
      - name: Make Gradle wrapper executable
        run: chmod +x gradlew
      - name: Run tests
        run: ./gradlew test --no-daemon
      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: backend-test-results
          path: backend/build/reports/tests/
  frontend-test:
    name: Frontend Tests
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: frontend
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node 20
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: npm
          cache-dependency-path: frontend/package-lock.json
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm run test

File diff suppressed because it is too large.

View File

@@ -14,8 +14,10 @@ import java.util.Date
 @Service
 class JwtService(
     @Value("\${app.jwt.secret}") val secret: String,
-    @Value("\${app.jwt.expiration-ms}") val expirationMs: Long
+    @Value("\${app.jwt.expiration-ms:86400000}") expirationMsRaw: String
 ) {
+    private val expirationMs: Long = expirationMsRaw.toLongOrNull() ?: 86400000L
     private val signingKey by lazy {
         Keys.hmacShaKeyFor(secret.toByteArray(Charsets.UTF_8))
     }
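The rewritten constructor takes the expiration as a raw string and falls back to 86400000 ms (24 h) whenever the property is empty or non-numeric (`toLongOrNull() ?: 86400000L`). The same guard can be sketched in shell, in the spirit of the project's entrypoint script; `jwt_expiration_ms` is an illustrative helper, not part of the repository:

```shell
# Shell analogue of the Kotlin fallback above: accept the value only when it
# is a non-empty string of digits, otherwise use the 24h default.
jwt_expiration_ms() {
  raw="$1"
  case "${raw}" in
    ''|*[!0-9]*) echo 86400000 ;;  # empty or non-numeric -> default
    *)           echo "${raw}" ;;  # valid numeric value passes through
  esac
}

jwt_expiration_ms ""         # -> 86400000
jwt_expiration_ms "abc"      # -> 86400000
jwt_expiration_ms "3600000"  # -> 3600000
```

Note that a plain `${JWT_EXPIRATION_MS:-86400000}` expansion only covers the unset/empty case; the digit check also rejects garbage values before they reach the JVM.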

View File

@@ -40,7 +40,7 @@ class AuthServiceTest {
     fun should_returnValidClaims_when_jwtTokenParsed() {
         val realJwtService = JwtService(
             secret = "test-secret-key-for-testing-only-must-be-at-least-32-characters",
-            expirationMs = 86400000L
+            expirationMsRaw = "86400000"
         )
         val token = realJwtService.generateToken()
@@ -51,7 +51,7 @@ class AuthServiceTest {
     fun should_returnFalse_when_expiredTokenValidated() {
         val realJwtService = JwtService(
             secret = "test-secret-key-for-testing-only-must-be-at-least-32-characters",
-            expirationMs = 1L
+            expirationMsRaw = "1"
         )
         val token = realJwtService.generateToken()

View File

@@ -0,0 +1,26 @@
package com.condado.newsletter.service

import io.jsonwebtoken.Jwts
import io.jsonwebtoken.security.Keys
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Test

class JwtServiceTest {
    private val secret = "12345678901234567890123456789012"

    @Test
    fun should_generate_token_when_expiration_is_empty() {
        val jwtService = JwtService(secret, "")
        val token = jwtService.generateToken()
        val claims = Jwts.parser()
            .verifyWith(Keys.hmacShaKeyFor(secret.toByteArray(Charsets.UTF_8)))
            .build()
            .parseSignedClaims(token)
            .payload
        assertTrue(claims.expiration.after(claims.issuedAt))
    }
}

View File

@@ -1,6 +1,6 @@
 services:
   condado-newsletter:
-    image: gitea.lab/sancho41/condado-newsletter:latest
+    image: gitea.lab:80/sancho41/condado-newsletter:latest
     container_name: condado-newsletter
     restart: unless-stopped
     environment:
@@ -9,7 +9,7 @@ services:
       SPRING_DATASOURCE_PASSWORD: ${SPRING_DATASOURCE_PASSWORD}
       APP_PASSWORD: ${APP_PASSWORD}
       JWT_SECRET: ${JWT_SECRET}
-      JWT_EXPIRATION_MS: ${JWT_EXPIRATION_MS}
+      JWT_EXPIRATION_MS: ${JWT_EXPIRATION_MS:-86400000}
       MAIL_HOST: ${MAIL_HOST}
       MAIL_PORT: ${MAIL_PORT}
       MAIL_USERNAME: ${MAIL_USERNAME}
@@ -34,8 +34,8 @@ services:
       - "homepage.group=Hyperlink"
       - "homepage.name=Condado Newsletter"
       - "homepage.description=Automated newsletter generator using AI"
-      - "homepage.logo=https://raw.githubusercontent.com/celtinha/condado-newsletter/main/docs/logo.png"
-      - "homepage.url=http://condado-newsletter.lab"
+      - "homepage.logo=claude-dark.png"
+      - "homepage.href=http://condado-newsletter.lab"
 volumes:
   postgres-data:
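The `${JWT_EXPIRATION_MS:-86400000}` change relies on Compose's shell-style default interpolation. The detail that matters here is `:-` versus `-`: `:-` substitutes the default when the variable is unset *or* empty, while `-` only covers the unset case. A quick shell demonstration of the same semantics:

```shell
# ":-" substitutes when unset OR empty; "-" only when unset.
unset JWT_EXPIRATION_MS
echo "${JWT_EXPIRATION_MS:-86400000}"   # unset -> 86400000

JWT_EXPIRATION_MS=""
echo "${JWT_EXPIRATION_MS:-86400000}"   # empty -> 86400000
echo "${JWT_EXPIRATION_MS-86400000}"    # empty but set -> prints an empty line

JWT_EXPIRATION_MS=3600000
echo "${JWT_EXPIRATION_MS:-86400000}"   # set -> 3600000
```

Because Compose follows the same rules, an empty `JWT_EXPIRATION_MS` in the deploy environment still yields 86400000 here, which is why this change pairs with the string-based fallback added to JwtService.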

View File

@@ -4,14 +4,13 @@ services:
   postgres:
     image: postgres:16-alpine
     restart: unless-stopped
     container_name: condado-newsletter-postgres
     environment:
       POSTGRES_DB: condado
       POSTGRES_USER: ${SPRING_DATASOURCE_USERNAME}
       POSTGRES_PASSWORD: ${SPRING_DATASOURCE_PASSWORD}
     volumes:
       - postgres-data:/var/lib/postgresql/data
     networks:
       - condado-net
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U ${SPRING_DATASOURCE_USERNAME} -d condado"]
       interval: 10s
@@ -20,6 +19,7 @@ services:
   # ── Backend (Spring Boot) ────────────────────────────────────────────────────
   backend:
     container_name: condado-newsletter-backend
     build:
       context: ./backend
       dockerfile: Dockerfile
@@ -29,7 +29,7 @@ services:
         condition: service_healthy
     environment:
       SPRING_PROFILES_ACTIVE: dev
-      SPRING_DATASOURCE_URL: ${SPRING_DATASOURCE_URL}
+      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/condado
       SPRING_DATASOURCE_USERNAME: ${SPRING_DATASOURCE_USERNAME}
       SPRING_DATASOURCE_PASSWORD: ${SPRING_DATASOURCE_PASSWORD}
       APP_PASSWORD: ${APP_PASSWORD}
@@ -50,36 +50,42 @@ services:
     extra_hosts:
       - "celtinha.desktop:host-gateway"
       - "host.docker.internal:host-gateway"
     networks:
       - condado-net

   # ── Frontend + Nginx ─────────────────────────────────────────────────────────
   nginx:
     container_name: condado-newsletter-frontend
     build:
       context: ./frontend
       dockerfile: Dockerfile
       args:
         VITE_API_BASE_URL: ${VITE_API_BASE_URL}
     restart: unless-stopped
     ports:
       - "80:80"
     depends_on:
       - backend
     networks:
       - condado-net
       - traefik
     labels:
       - "traefik.enable=true"
       - "traefik.http.routers.condado.rule=Host(`condado-newsletter.lab`)"
       - "traefik.http.services.condado.loadbalancer.server.port=80"
       - "homepage.group=Hyperlink"
       - "homepage.name=Condado Newsletter"
       - "homepage.description=Automated newsletter generator using AI"
       - "homepage.logo=claude-dark.png"
       - "homepage.href=http://condado-newsletter.lab"

   # ── Mailhog (DEV ONLY — SMTP trap) ───────────────────────────────────────────
   mailhog:
     container_name: condado-newsletter-mailhog
     image: mailhog/mailhog:latest
     restart: unless-stopped
     ports:
       - "8025:8025"
     networks:
       - condado-net

 volumes:
   postgres-data:

 networks:
   condado-net:
     driver: bridge
   traefik:
     external: true
     name: traefik

View File

@@ -27,6 +27,22 @@ mkdir -p /var/log/supervisor
 export SPRING_DATASOURCE_URL=${SPRING_DATASOURCE_URL:-jdbc:postgresql://localhost:5432/${APP_DB_NAME}}
 export SPRING_DATASOURCE_USERNAME=${SPRING_DATASOURCE_USERNAME:-${APP_DB_USER}}
 export SPRING_DATASOURCE_PASSWORD=${SPRING_DATASOURCE_PASSWORD:-${APP_DB_PASSWORD}}
+export JWT_EXPIRATION_MS=${JWT_EXPIRATION_MS:-86400000}
+
+# ── Log all Spring Boot environment variables for debugging ──────────────────
+echo "========================================"
+echo "Spring Boot Configuration:"
+echo "========================================"
+echo "SPRING_DATASOURCE_URL=${SPRING_DATASOURCE_URL}"
+echo "SPRING_DATASOURCE_USERNAME=${SPRING_DATASOURCE_USERNAME}"
+echo "SPRING_DATASOURCE_PASSWORD=${SPRING_DATASOURCE_PASSWORD}"
+echo "JWT_EXPIRATION_MS=${JWT_EXPIRATION_MS}"
+echo "JAVA_OPTS=${JAVA_OPTS:-not set}"
+echo "OPENAI_API_KEY=${OPENAI_API_KEY:-not set}"
+echo "========================================"

 # ── Start all services via supervisord ───────────────────────────────────────
 # Export unbuffered output for both Python and Java
 export PYTHONUNBUFFERED=1
 export JAVA_OPTS="${JAVA_OPTS} -Dfile.encoding=UTF-8 -Djava.awt.headless=true"
 exec /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
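Worth noting: the debug block added here echoes `SPRING_DATASOURCE_PASSWORD` verbatim, so the secret ends up in the container logs. A safer variant masks all but a short prefix before printing; this is a sketch only, and `mask_secret` is an illustrative helper, not part of the repository:

```shell
# Masks a secret for log output: keep the first 4 characters, replace the
# rest with "****"; very short values are masked entirely.
mask_secret() {
  value="$1"
  if [ "${#value}" -le 4 ]; then
    echo "****"
  else
    echo "$(printf '%s' "${value}" | cut -c1-4)****"
  fi
}

SPRING_DATASOURCE_PASSWORD="s3cretpass"
echo "SPRING_DATASOURCE_PASSWORD=$(mask_secret "${SPRING_DATASOURCE_PASSWORD}")"
# -> SPRING_DATASOURCE_PASSWORD=s3cr****
```

This keeps enough of the value to confirm the right variable reached the container without persisting the full credential in log storage.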

View File

@@ -1,27 +1,36 @@
 [supervisord]
 nodaemon=true
-logfile=/var/log/supervisor/supervisord.log
+silent=false
+logfile=/dev/stdout
+logfile_maxbytes=0
 pidfile=/var/run/supervisord.pid
 loglevel=info

 [program:postgres]
 command=/usr/lib/postgresql/16/bin/postgres -D /var/lib/postgresql/data
 user=postgres
 autostart=true
 autorestart=true
-stdout_logfile=/var/log/supervisor/postgres.log
-stderr_logfile=/var/log/supervisor/postgres.err.log
+stdout_logfile=/dev/stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr
+stderr_logfile_maxbytes=0

 [program:backend]
-command=java -jar /app/app.jar
+command=java -Dspring.output.ansi.enabled=always -Dlogging.level.root=DEBUG -jar /app/app.jar
 autostart=true
 autorestart=true
 startsecs=15
-stdout_logfile=/var/log/supervisor/backend.log
-stderr_logfile=/var/log/supervisor/backend.err.log
+stdout_logfile=/dev/stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr
+stderr_logfile_maxbytes=0

 [program:nginx]
 command=/usr/sbin/nginx -g "daemon off;"
 autostart=true
 autorestart=true
-stdout_logfile=/var/log/supervisor/nginx.log
-stderr_logfile=/var/log/supervisor/nginx.err.log
+stdout_logfile=/dev/stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr
+stderr_logfile_maxbytes=0

View File

@@ -15,6 +15,9 @@ http {
     gzip_types text/plain text/css application/json application/javascript
                text/xml application/xml application/xml+rss text/javascript;

+    access_log /dev/stdout;
+    error_log /dev/stderr;
+
     server {
         listen 80;
         server_name _;