March 2, 2021 12:03 AM - BoerumHillScott
Trigger warnings: geeky stuff and lots of Amazon
Here's a summary of the site's technology:
The site is coded in PHP, with minor use of JavaScript for client-side functionality. No back-end or front-end frameworks are used.
All code was written by me (BoerumHillScott), except for the code that filters HTML input, which is HTML Purifier 4.13.0.
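If you are curious, using HTML Purifier from PHP looks roughly like this (the allowed-tag list below is just an example, not the site's actual whitelist):

<?php
// Rough sketch of typical HTML Purifier usage; the whitelist here is illustrative.
require_once 'HTMLPurifier.auto.php'; // autoloader bundled with the standalone download

$config = HTMLPurifier_Config::createDefault();
// Only allow a small set of tags and attributes in user-submitted HTML
$config->set('HTML.Allowed', 'p,br,b,i,em,strong,a[href],blockquote,ul,ol,li');

$purifier = new HTMLPurifier($config);
$cleanHtml = $purifier->purify($rawUserInput); // strips scripts, event handlers, and malformed markup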
The production site runs as a set of docker containers managed by docker-compose on a single server (a rough sketch of how the PHP code reaches the other containers follows the list):
- web engine: php:8.0.2-apache-buster, with multiple extensions
- web reverse proxy: nginx:1.19.6
- database: mariadb:10.5
- cache: redis:6.0.10
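The PHP container reaches the database and cache containers by their docker-compose service names. Roughly like this (the service names, credentials, and table names below are placeholders, not the site's real ones):

<?php
// Sketch of the PHP container talking to the database and cache containers.
// Hostnames ("db", "cache"), credentials, and table names are placeholders.

// MariaDB via PDO (the mysql driver works for MariaDB)
$db = new PDO(
    'mysql:host=db;dbname=forum;charset=utf8mb4',
    'forum_user',
    getenv('DB_PASSWORD') ?: '',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

// Redis via the phpredis extension
$cache = new Redis();
$cache->connect('cache', 6379);

// Typical cache-aside read: try Redis first, fall back to the database
$threads = $cache->get('front_page_threads');
if ($threads === false) {
    $stmt = $db->query('SELECT id, title FROM threads ORDER BY updated_at DESC LIMIT 50');
    $threads = json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));
    $cache->setex('front_page_threads', 60, $threads); // cache for 60 seconds
}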
I originally used nginx as the front-end web server because I had dev, test, and production all on the same server and nginx routed traffic to the right environment. There is now only one environment per server, but nginx also handles SSL/TLS (encryption) so well that I kept it around.
All traffic is encrypted using certificates from https://letsencrypt.org/ , with acme.sh handling certificate lifecycle management.
The production server runs on an Amazon Web Services (AWS) Elastic Compute Cloud (EC2) t4g.micro instance, with an Amazon Graviton2 ARM processor and 1 GB of RAM. Your phone has a similar processor (but more of them), and several times the RAM. Primary storage (operating system, code, database, and the last 3 days of images) is on 8 GB of gp3 SSD. All images older than 3 days are stored on Amazon Simple Storage Service (S3).
The operating system is Amazon Linux 2 (based on Red Hat Enterprise Linux), but the main thing the OS does is serve as a host for the docker containers.
Images stored on S3 are delivered via the AWS CloudFront content delivery network (CDN).
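On the PHP side, that means image links get built differently depending on the image's age. Something like this hypothetical helper (the function name, local path, and CloudFront domain are made up for illustration):

<?php
// Hypothetical helper for building image URLs; the function name, local path,
// and CloudFront domain are invented for illustration.
// Recent images live on the server's local disk; older ones have been moved to S3
// and are served through CloudFront.
function image_url(string $filename, int $uploadedAt): string
{
    $threeDays = 3 * 24 * 60 * 60;

    if (time() - $uploadedAt < $threeDays) {
        // Still on local gp3 storage, served directly by the web server
        return '/images/' . rawurlencode($filename);
    }

    // Older than 3 days: moved to S3, delivered via CloudFront
    return 'https://img.example-cdn.net/' . rawurlencode($filename);
}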
Backups of the database and primary storage images are taken nightly, and stored on AWS S3. Backups are kicked off via AWS Systems Manager, and backup failure reports are emailed to me.
All production resources are in AWS's us-east-2 (Ohio) region.
All images and database backups are replicated to AWS Glacier Deep Archive storage in AWS's us-east-1 (Northern Virginia) region, and retained "forever."
Development and deployment process:
The site development environment is on a Raspberry Pi 4 in my house, running Ubuntu Linux. The same docker configuration is used as in production, and the development environment is frequently rebuilt from production backups.
Coding is done primarily in Microsoft Visual Studio Code, running on Windows with a drive mapping to the Pi.
Once committed, code is stored in AWS CodeCommit (AWS's managed Git service).
When code is ready to be put into production, it is merged into the "master" CodeCommit branch.
This triggers AWS CodePipeline to create a brand new test environment in AWS, using CloudFormation.
Assuming the test environment checks out, the change is approved and promoted to production via a combination of CloudFormation and manual changes (my goal is to make it fully automated).
Future ideas:
Use a JavaScript framework (React or Vue.js) on the front end to help make the site more "interactive."
Use AWS Lambda "serverless" technology on the back end to handle the interactive requests.
????
Edited by BoerumHillScott at March 2, 2021 9:48 AM
31 comments
https://www.youtube.com/watch?v=25J3u3P-HHg
And look at those old Macs!!! They must be collector's items
I have a random computer question - probably silly.
Thinking of getting an ultra-wide monitor to replace my dual setup where I use a regular monitor and my laptop as a secondary screen.
Would I have any issues connecting something like this:
LG 34WK650-W 34" UltraWide 21:9 IPS Monitor with HDR10 and FreeSync (2018), Black/White https://www.amazon.com/dp/B078GSH1LV/ref=cm_sw_r_cp_api_glt_fabc_29RNJS7YM0YBRFEYDM8H
To an older-model Lenovo T450s, which only has VGA and Mini DisplayPort?
Have done extensive googling and the answer seems to be “maybe.”
Figured I’d ask the OT crew before emailing help desk tomo.
Trying to make my home setup cleaner when I move back. The laptop monitor is too small as a secondary.
Post deletions will still get it out of alignment, but it should also fix itself when you get caught up in a group.
Respectfully.
Practically, it will only mess things up until he reads the post (thread) again.
What does really mess things up is when:
- A post is created
- You log in but do not read the post
- The post is deleted
Before January, that would leave a zombie "x new" message around forever.
Now, if you have read everything else in the group and click on the "x new" message, it gets cleaned up.
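For the technically curious, the cleanup amounts to something like this (the table and column names here are invented; this is a sketch of the idea, not the site's actual code):

<?php
// Hypothetical sketch of the "zombie x new" cleanup; table and column names are invented.
// When a member opens a group and every post that still exists has been read,
// move their read marker forward so deleted posts no longer count as "new".
function clear_zombie_new_count(PDO $db, int $userId, int $groupId): void
{
    // Count live (non-deleted) posts newer than the user's current read marker
    $stmt = $db->prepare(
        'SELECT COUNT(*) FROM posts p
           JOIN read_markers r ON r.group_id = p.group_id AND r.user_id = :uid
          WHERE p.group_id = :gid AND p.deleted = 0 AND p.id > r.last_read_post_id'
    );
    $stmt->execute([':uid' => $userId, ':gid' => $groupId]);
    $unreadLive = (int) $stmt->fetchColumn();

    if ($unreadLive > 0) {
        return; // there really are new posts, nothing to clean up
    }

    // Nothing live is unread, so jump the marker past any deleted posts
    $stmt = $db->prepare(
        'UPDATE read_markers
            SET last_read_post_id = GREATEST(last_read_post_id,
                (SELECT COALESCE(MAX(id), 0) FROM posts WHERE group_id = :gid))
          WHERE user_id = :uid AND group_id = :gid2'
    );
    $stmt->execute([':gid' => $groupId, ':uid' => $userId, ':gid2' => $groupId]);
}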
The site security certificate is good for lifeinbklyn.com, www.lifeinbklyn.com, lifeinbkln.com, and www.lifeinbkln.com, but the last 3 automatically redirect to lifeinbklyn.com.
Looking at the logs, we both have Chrome/88.0.4324.181, so not sure what the issue would be.
When it happens again, please let me know the exact time and what you were viewing/doing.
Looking through the logs I found one potential issue, but I'm not sure if it's related to what you are seeing.
I'm not positive it was the same thing you have seen, but it would cause a blank page to be displayed.
For security reasons, when a software error happens on the server, a blank page is generated.
Error messages are often used by hackers to find out more about the internal workings of a site to make attacks easier.
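On the PHP side that mostly comes down to never echoing errors to the browser and logging them instead, roughly like this (a sketch of the general pattern, not the site's exact code):

<?php
// Sketch of error handling that hides details from visitors but keeps them in the logs.
ini_set('display_errors', '0'); // never show errors or stack traces in the browser
ini_set('log_errors', '1');     // ...but do write them to the server's error log
error_reporting(E_ALL);

// Catch anything uncaught, log it, and return a bare 500 response (the "blank page")
set_exception_handler(function (Throwable $e): void {
    error_log($e->getMessage() . ' in ' . $e->getFile() . ':' . $e->getLine());
    http_response_code(500);
});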
My MacBook is fine. It's just my Android phone.