Posts

Setup Cloudflare for S3 Bucket

One way to improve website performance is to use a CDN to distribute your site's static content. S3 is a common place to host such content. In this post, I will show you how to publish an S3 bucket through Cloudflare. In fact, the screenshots used in this blog post are published in exactly this manner. Setup S3 Bucket Permissions This is an optional step which adds an S3 bucket policy to your bucket, restricting S3 access to Cloudflare IPs only. Whether this is worth doing is still up for debate, but I believe it is always best practice to reduce the public access surface whenever you can.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::
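To make the Cloudflare-only restriction concrete, here is a minimal sketch of building such a policy in Python. The bucket name `my-bucket`, the hard-coded CIDR sample, and the use of an `aws:SourceIp` condition are my assumptions for illustration, not details from the original post; Cloudflare publishes its current ranges at www.cloudflare.com/ips-v4.

```python
import json

# A few of Cloudflare's published edge ranges, hard-coded for illustration;
# in practice you would fetch the full list from https://www.cloudflare.com/ips-v4.
CLOUDFLARE_CIDRS = ["173.245.48.0/20", "103.21.244.0/22", "104.16.0.0/13"]

def build_bucket_policy(bucket, cidrs):
    """Return an S3 bucket policy allowing GetObject only from the given CIDRs."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                # Only requests arriving from these source IPs are allowed.
                "Condition": {"IpAddress": {"aws:SourceIp": cidrs}},
            }
        ],
    }

print(json.dumps(build_bucket_policy("my-bucket", CLOUDFLARE_CIDRS), indent=4))
```

The printed JSON can be pasted into the bucket's Permissions tab in the S3 console.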

Setup Cloudflare for AWS API Gateway

In this post I will show how to set up Cloudflare for a serverless app built with AWS API Gateway and Lambda. For demonstration, I use a simple web app I built (rona.tomking.xyz), hosted in the AWS Sydney region. It displays daily Victoria COVID cases, and that's it. To use Cloudflare, I signed up for a free Cloudflare account. The first site can be added for free with the following features:
- DNS hosting
- Free SSL certificates
- CDN
- DDoS attack mitigation with up to 67 Tbps capacity
- Up to 100k Workers requests and 30 scripts
- 3 Page Rules
To make Cloudflare the first ingress point for our web app, the first step is to change our DNS nameservers to Cloudflare's. In the Cloudflare portal, click + Add site, and type the site's TLD, in our case tomking.xyz. Cloudflare will display the NS records set for the site. As I got tomking.xyz from GoDaddy, I then need to log into GoDaddy's DNS zone management portal and change the nameservers from GoDaddy's NS
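Once the nameserver change has propagated, pointing the app's hostname at API Gateway is just a DNS record in the Cloudflare zone, which can also be scripted against the Cloudflare v4 API. A minimal sketch; the zone ID and token environment variables and the `execute-api` hostname are placeholders I made up, not values from the original post:

```python
import json
import os
import urllib.request

API = "https://api.cloudflare.com/client/v4"

def dns_record_payload(name, target, proxied=True):
    """Build the JSON body for a Cloudflare CNAME record.

    proxied=True is the "orange cloud": traffic flows through Cloudflare's edge.
    """
    return {"type": "CNAME", "name": name, "content": target, "proxied": proxied}

def create_record(zone_id, token, payload):
    """POST the record to the zone; requires a Cloudflare API token."""
    req = urllib.request.Request(
        f"{API}/zones/{zone_id}/dns_records",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = dns_record_payload(
        "rona.tomking.xyz",
        "abc123.execute-api.ap-southeast-2.amazonaws.com",  # placeholder hostname
    )
    print(json.dumps(payload))
    # create_record(os.environ["CF_ZONE_ID"], os.environ["CF_API_TOKEN"], payload)
```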

Automate EC2 Instance Security Group Rules Update

Ever come across a situation where you need to whitelist a long list of IPs for an EC2 instance? It can be painful to add them manually, one by one. On top of that, what if these IPs change on a regular basis? You are in luck! I will show you how to update Security Group rules automatically using Python. Here's my use case. I have an EC2 instance that takes syslog feeds from VMware's Workspace ONE. For security's sake, it should only allow IPs owned by VMware's SaaS solution, which are all listed in this KB . There are around 990 IPs and subnets that need to be added. Plus, they constantly get updated. The solution I put in place is to scrape the IPs off the KB webpage, and then add them into the security groups attached to the EC2 instance. Sounds like a piece of cake, right? Well, there are a few hurdles we need to address first. First, the VMware KB page is not a static HTML page. The IP list is rendered by JavaScript. If we just use the Requests library, it will return None as the page is not
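With ~990 CIDRs, the rules also have to be spread across several security groups. Here is a sketch of that update step; the syslog port, the quota of 60 rules per group (the AWS default), and all names here are illustrative assumptions, with the actual boto3 call left commented out:

```python
def chunk_rules(cidrs, port=514, per_group=60):
    """Split a long CIDR list into IpPermissions payloads, one per security group."""
    permissions = []
    for i in range(0, len(cidrs), per_group):
        batch = cidrs[i:i + per_group]
        permissions.append({
            "IpProtocol": "udp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": c, "Description": "WS1 syslog"} for c in batch],
        })
    return permissions

# Applying the chunks with boto3 (sketch; group_ids and scraped_cidrs are
# assumed to come from your scraper and environment):
# import boto3
# ec2 = boto3.client("ec2")
# for sg_id, perm in zip(group_ids, chunk_rules(scraped_cidrs)):
#     ec2.authorize_security_group_ingress(GroupId=sg_id, IpPermissions=[perm])
```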

Setup Splunk Universal Forwarder with TLS

One of the best practices when setting up a Splunk Universal Forwarder (UF) is to encrypt incoming log traffic with TLS. This is especially important if your intake comes from an external source on the Internet, e.g. from a SaaS solution. In this blog I will demonstrate the steps to get this set up.  First, we will create a public A DNS record for the UF. This is because our UF will be receiving logs from the Internet.  Next, we need to purchase a new TLS certificate for the A record we just registered. Assume the domain name we set for the certificate is syslog.contoso.com.  On the UF, run the command below to generate a private key and a CSR to submit to a public CA (DigiCert, GoDaddy...). openssl req -new -newkey rsa:2048 -nodes -keyout syslog.contoso.com.key -out syslog.contoso.com.csr   Copy the private key to the syslog-ng cert.d folder. The private key is generated along with the CSR. cp syslog.contoso.com.key /usr/local/etc/syslog-ng/cert.d/syslog_contoso_com_key.pem   Submit the CSR file to the CA and wait for the certificate to be issued
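Once the issued certificate is copied alongside the key, syslog-ng can terminate TLS on the intake port. A minimal source sketch, assuming port 6514 and a certificate filename matching the key path above; the port, filenames, and `peer-verify` setting are my assumptions, not from the original post:

```
source s_tls {
    network(
        transport("tls")
        port(6514)
        tls(
            key-file("/usr/local/etc/syslog-ng/cert.d/syslog_contoso_com_key.pem")
            cert-file("/usr/local/etc/syslog-ng/cert.d/syslog_contoso_com_crt.pem")
            peer-verify(optional-untrusted)
        )
    );
};
```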

Use PowerShell to delete SPAM Blogger comments

I haven't been very diligent about maintaining this blog. Quite a few spam comments have accumulated on my posts. I am going to turn on moderation to block those, but I also need a way to clean up all the existing spam comments. So over the weekend, I wrote this PowerShell script to do just that.  In the end, it would probably have taken less time to just do all the cleanup manually. But I believe this was a good opportunity to learn about the Google API and the OAuth2 flow. Plus, now I have something to write about :) The Google API can be accessed with both an API key and an OAuth2 token. However, the API key is only allowed for publicly accessible resources and actions, like reading a post or comment. Actions like deleting a post or comment require OAuth2 to be set up. Now let's look at how this is achieved using PowerShell. Setup the app in Google Developer Console Go to https://console.developers.google.com/ and set up a new project. Under Library , find the Blogger v3 API and enable it. Under OAuth
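The underlying REST call the script makes is a plain authorised DELETE against the Blogger v3 API. A language-neutral sketch in Python (the post itself uses PowerShell); the IDs and token here are dummies, and actually sending the request needs a valid OAuth2 access token:

```python
import urllib.request

API = "https://www.googleapis.com/blogger/v3"

def delete_comment_request(blog_id, post_id, comment_id, token):
    """Build the authorised DELETE request for one spam comment."""
    url = f"{API}/blogs/{blog_id}/posts/{post_id}/comments/{comment_id}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {token}"},
        method="DELETE",
    )

req = delete_comment_request("123", "456", "789", "ya29.dummy-token")
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req)   # would actually delete the comment
```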

Use Ansible to update Splunk Universal Forwarder Configuration

Today we will look at how to use Ansible to update Splunk UF (Universal Forwarder) configuration. The benefits of using Ansible to achieve this are:
- Save the hassle of manually modifying the conf files of syslog-ng and Splunk UF.
- Codify the Splunk UF configuration, so it can be version controlled via GitHub.
- Automate updates to multiple UFs without the need to ssh into each server.
- The playbook can also be used to configure a newly provisioned Splunk UF.
Updating log inputs on a Splunk Universal Forwarder normally involves the following tasks:
1. Modify two configuration files:
- The syslog-ng conf file in conf.d/syslog-ng.conf.
- The syslog app inputs config file under the Splunk UF installation folder.
2. After the modification, the Splunk service needs to be restarted.
For the configuration files, we use Jinja2 templates to simplify their format. The Jinja2 templates use parameters supplied from a CSV file to populate the final conf file. The Ansible playbook converts the template to a conf file using the template task. Th
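The tasks above can be sketched as a short playbook. This is a minimal illustration only; the host group, template names, and installation paths are my assumptions, not copied from the original post:

```yaml
- hosts: splunk_uf
  become: yes
  tasks:
    - name: Render syslog-ng config from Jinja2 template
      template:
        src: templates/syslog-ng.conf.j2
        dest: /etc/syslog-ng/conf.d/syslog-ng.conf
      notify: restart splunk

    - name: Render Splunk inputs.conf from Jinja2 template
      template:
        src: templates/inputs.conf.j2
        dest: /opt/splunkforwarder/etc/apps/syslog/local/inputs.conf
      notify: restart splunk

  handlers:
    - name: restart splunk
      service:
        name: SplunkForwarder
        state: restarted
```

The `notify`/handler pairing restarts the Splunk service only when a template actually changed a file.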

OWA and ECP failure after installing Exchange 2016 CU17

I recently ran into an issue after updating Exchange 2016 from CU15 to CU17. The upgrade installation took around an hour, but eventually completed successfully, at least according to the Installation Wizard. When I tried to access ECP, I got the error below before the login page even showed up. In the meantime, the Exchange Management Shell was inaccessible due to the error. In the event log, there were lots of 1003 errors related to MSExchange Front End HTTP Proxy.  After some research, it appears the issue is caused by corrupted SharedWebConfig.config files in C:\Program Files\Microsoft\Exchange Server\V15\FrontEnd\HttpProxy and C:\Program Files\Microsoft\Exchange Server\V15\ClientAccess. But regenerating the files based on this MS document didn't fix the issue. I ended up having to set up a test Exchange 2016 CU17 in my own lab environment. Once the new Exchange was up, I copied those two SharedWebConfig.config files to the production Exchange server and then did an IISRESET. To my dismay, ECP t