
GCP Cloud Armor - Architecture flow of Cloud Armor with Load Balancer





Google Cloud Armor and the Google Cloud Load Balancer work together to shield your website from digital villains. Let's look at how they are connected and how traffic flows between them, using an architecture diagram.



Explanation:

  1. Client sends a request: The user (client) initiates a request to your website or application.
  2. Request crosses the internet: The request travels over the internet and reaches Google's network edge.
  3. Router directs traffic: The router forwards the request toward the GCP network.
  4. Load Balancer receives traffic: The GCP Load Balancer accepts the request and is responsible for distributing it among your backend servers.
  5. Cloud Armor inspects traffic: Before the request reaches any backend server, it is evaluated against your Cloud Armor security policies.
  6. Cloud Armor filters and protects: Cloud Armor analyzes the request against the rules you have configured. It filters malicious traffic, mitigates DDoS attacks, and blocks common web application exploits such as SQL injection and cross-site scripting.
  7. Clean request reaches the backend: If the request passes Cloud Armor's checks, it is forwarded to the appropriate backend server for processing.
  8. Response returns to the client: The backend server processes the request and sends the response back to the client along the same path.
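As a concrete sketch of how this wiring is set up, the gcloud commands below create a Cloud Armor security policy, add two deny rules, and attach the policy to a backend service behind the load balancer. The names `my-security-policy` and `my-backend-service` are placeholders for illustration, not resources from this post.

```shell
# Create a Cloud Armor security policy (name is a placeholder)
gcloud compute security-policies create my-security-policy \
    --description "Block known-bad traffic before it reaches backends"

# Rule 1: deny requests from an example IP range with HTTP 403.
# Lower priority numbers are evaluated first.
gcloud compute security-policies rules create 1000 \
    --security-policy my-security-policy \
    --src-ip-ranges "203.0.113.0/24" \
    --action deny-403

# Rule 2: block cross-site scripting attempts using a
# preconfigured WAF rule expression
gcloud compute security-policies rules create 2000 \
    --security-policy my-security-policy \
    --expression "evaluatePreconfiguredExpr('xss-stable')" \
    --action deny-403

# Attach the policy to the backend service behind the load balancer
gcloud compute backend-services update my-backend-service \
    --security-policy my-security-policy \
    --global
```

Once attached, every request the load balancer would send to that backend service is evaluated against the policy first.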

Key points:

  • Cloud Armor policies are attached to your backend services and enforced at the load balancing layer, so logically it sits between the Load Balancer and your backend servers, acting as a security shield.
  • It doesn't change how the Load Balancer distributes traffic among your servers.
  • It adds an extra layer of security by filtering and protecting incoming traffic before it reaches your backends.
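To make the filtering model concrete, here is a minimal Python sketch (an illustration, not Cloud Armor's actual implementation) of how a priority-ordered security policy evaluates a request: rules are checked from the lowest priority number to the highest, the first match wins, and a default action applies when nothing matches. The example rules are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    priority: int                      # lower number = evaluated first
    matches: Callable[[dict], bool]    # predicate over the request
    action: str                        # "allow" or "deny"

def evaluate(policy: list[Rule], request: dict, default_action: str = "allow") -> str:
    """Return the action of the first matching rule, else the default."""
    for rule in sorted(policy, key=lambda r: r.priority):
        if rule.matches(request):
            return rule.action
    return default_action

# Hypothetical policy: block an example IP range and an XSS-looking query string.
policy = [
    Rule(1000, lambda req: req["src_ip"].startswith("203.0.113."), "deny"),
    Rule(2000, lambda req: "<script>" in req.get("query", ""), "deny"),
]

print(evaluate(policy, {"src_ip": "203.0.113.7", "query": ""}))       # deny
print(evaluate(policy, {"src_ip": "198.51.100.9", "query": "q=hi"}))  # allow
```

Only requests that fall through every deny rule are handed on to a backend server, which mirrors step 7 of the flow above.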

Benefits of this integration:

  • Enhanced security: Protects your applications from a wide range of threats.
  • Improved uptime: Mitigates DDoS attacks and ensures your applications remain accessible.
  • Reduced complexity: Easy to configure and manage security policies within GCP.
  • Scalability: Cloud Armor automatically scales to handle increased traffic volumes.


