Note: This document is subject to change.
With Vizcom Enterprise Cloud, you can count on Vizcom to manage the service. Vizcom prioritizes security, data protection, and reliability, and backs the service with robust support.
Security
- Manage data as a lifecycle
- Data classification efforts
- Data protection policies based on data lifecycle and data classification that inform appropriate security controls
- Data security controls
- data encryption requirements
- protection of data in-use
- leveraging secure compute technology
- Other common security components for data protection include
- key management
- data-loss prevention
- activity monitoring
- governing data use at the endpoint
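One way to make the classification-to-controls relationship concrete is a lookup table. The following sketch is illustrative only; the tier names (`public`, `internal`, `confidential`, `restricted`) and the control assignments are assumptions, not Vizcom's actual scheme:

```python
# Illustrative sketch (tier names and control assignments are assumptions,
# not Vizcom's actual scheme): map data classification levels to the
# security controls a data protection policy might require.
CONTROLS_BY_CLASSIFICATION = {
    "public":       {"encryption_at_rest": False, "dlp": False, "activity_monitoring": False},
    "internal":     {"encryption_at_rest": True,  "dlp": False, "activity_monitoring": True},
    "confidential": {"encryption_at_rest": True,  "dlp": True,  "activity_monitoring": True},
    "restricted":   {"encryption_at_rest": True,  "dlp": True,  "activity_monitoring": True},
}

def required_controls(classification: str) -> dict:
    """Return the control set for a classification, defaulting to the strictest tier."""
    return CONTROLS_BY_CLASSIFICATION.get(classification, CONTROLS_BY_CLASSIFICATION["restricted"])
```

Defaulting unknown classifications to the strictest tier keeps unlabeled data protected rather than exposed.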
- Traffic Flow/Network encryption
- A. A client located in the corporate network needs to reach the IP address of an Amazon EC2 instance in the VPC, so the traffic is routed through the customer gateway (CGW).
- B. The CGW determines that the best route to the VPC is via the AWS Site-to-Site VPN tunnel. The traffic is encrypted using the cryptographic parameters of the IPSec tunnel, with the destination of the encrypted packet being the public IP address of the AWS VPN endpoint.
- C. The CGW determines that the best route to the AWS VPN endpoint's public IP address is via the Direct Connect public VIF, and routes the encrypted IPSec traffic over it.
- D. The AWS VPN endpoint receives the encrypted IPSec traffic and decrypts it. Because the original destination IP address is the EC2 instance in the VPC, the traffic is routed via the VPC fabric to the instance. Return traffic from the EC2 instance to the client follows the same path in reverse.
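The route selections in the steps above come down to longest-prefix matching against the gateway's route table. The sketch below illustrates that decision with Python's standard `ipaddress` module; the prefixes and next-hop names are hypothetical placeholders, not real Vizcom or AWS addresses:

```python
import ipaddress

# Hypothetical route table for the customer gateway (prefixes and next-hop
# names are illustrative placeholders, not real addresses).
ROUTES = [
    ("10.0.0.0/16",    "site-to-site-vpn"),    # VPC CIDR -> encrypt into the IPSec tunnel
    ("203.0.113.0/24", "direct-connect-vif"),  # AWS VPN endpoint public IPs -> public VIF
    ("0.0.0.0/0",      "default-gateway"),
]

def next_hop(destination: str) -> str:
    """Pick the most specific route whose prefix contains the destination."""
    dest = ipaddress.ip_address(destination)
    matches = [
        (ipaddress.ip_network(prefix), hop)
        for prefix, hop in ROUTES
        if dest in ipaddress.ip_network(prefix)
    ]
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

For example, traffic bound for the VPC matches the /16 route and enters the VPN tunnel, while the resulting encrypted packet, addressed to the VPN endpoint, matches the /24 route and leaves via the public VIF.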
Reliability
- Availability/uptime
- 99.9% mean system uptime
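A 99.9% uptime target translates directly into an allowed-downtime budget, which this small helper computes:

```python
# Translate an uptime SLA into an allowed-downtime budget.
def downtime_budget_minutes(sla: float, period_hours: float) -> float:
    """Minutes of downtime permitted per period at the given SLA (e.g. 0.999)."""
    return (1.0 - sla) * period_hours * 60

# 99.9% over a 30-day month allows about 43.2 minutes of downtime.
monthly_budget = downtime_budget_minutes(0.999, 30 * 24)
```

Put another way, 99.9% leaves roughly 43 minutes per month (about 8.8 hours per year) for all outages and maintenance combined.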
- Change management
- Changes include
- those imposed on your workload such as spikes in demand,
- those from within such as feature deployments and security patches.
- monitor workload resources
- logs and metrics
- gain insight into the health of systems
- have alerting set up when health is degrading
- phases
- Generation — Monitor all components for the workload
- All components of your workload should be monitored, including the front-end, business logic, and storage tiers
- Aggregation — Define and calculate metrics
- Real-time processing and alarming — Send notifications and automate responses
- Storage and Analytics
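The Aggregation and Real-time processing phases above can be sketched as: compute a percentile metric over a window of samples, then alarm when it breaches a threshold. The 500 ms threshold and the nearest-rank percentile method are illustrative assumptions:

```python
# Sketch of the Aggregation and Real-time processing/alarming phases.
# The p99 threshold and percentile method are illustrative assumptions.

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of the samples (pct in 0..100)."""
    ordered = sorted(samples)
    rank = max(0, round(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

def should_alarm(latencies_ms: list[float], p99_threshold_ms: float = 500.0) -> bool:
    """Alarm when p99 latency over the window exceeds the threshold."""
    return percentile(latencies_ms, 99) > p99_threshold_ms
```

In practice a managed service such as Amazon CloudWatch would handle aggregation and alarming; the sketch only shows the shape of the computation.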
- Design your Workload to Adapt to Changes in Demand
- controlled changes
- have standard practices for
- deploying a workload
- patching it
- making DNS modifications.
- Failure management
- back up data
- identify and back up all data
- Secure and encrypt backup
- With client-side encryption, your workload application is responsible for encrypting the data before it is sent to Amazon S3; with server-side encryption, Amazon S3 performs the encryption for you. Both methods allow you to use AWS Key Management Service (AWS KMS) to create and store the data key, or to provide your own key (which you are then responsible for). With AWS KMS, you can use IAM policies to control who can and cannot access your data keys and decrypted data.
- Perform data backup automatically
- Perform periodic recovery of the data to verify backup integrity and processes
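Periodic recovery testing can be partly automated by recording a digest at backup time and recomputing it after a test restore. This is a minimal sketch assuming file-based backups; real verification should also exercise the restored data in the application:

```python
import hashlib
from pathlib import Path

# Sketch of backup integrity verification: record a SHA-256 digest when the
# backup is taken, then recompute it after a test restore to confirm the
# restored copy matches byte-for-byte. File paths are placeholders.

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_is_intact(original_digest: str, restored: Path) -> bool:
    """Compare the digest recorded at backup time against the restored file."""
    return sha256_of(restored) == original_digest
```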
- Fault Isolation to Protect Workload
- Deploy the workload to multiple locations
- Layered defense
- Your first layer protects against smaller, more common, disruptions by building a highly available architecture using multiple AZs.
- To protect your workload from large-scale events, implement a Disaster Recovery strategy (see Plan for Disaster Recovery (DR) for more information).
- The next layer would therefore be to protect your workload from a disaster event based on disruption or loss of one physical data center. You can achieve this within a single Region using multiple AZs by adding data backups to your high availability architecture.
- In the case of a disaster event that affects a single AZ, components in other AZs continue to function, and backups prevent data loss.
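The single-AZ failure case above can be sketched as a placement exercise: spread instances across Availability Zones, then check which instances keep serving when one AZ is lost. The AZ names and instance IDs are illustrative:

```python
# Sketch of AZ-level fault isolation (AZ names and instance IDs are
# illustrative): place instances round-robin across Availability Zones,
# then check which keep serving after a single-AZ failure.

AZS = ["us-east-1a", "us-east-1b", "us-east-1c"]

def place(instances: list[str], azs: list[str] = AZS) -> dict[str, str]:
    """Round-robin assignment of instances to AZs."""
    return {inst: azs[i % len(azs)] for i, inst in enumerate(instances)}

def survivors(placement: dict[str, str], failed_az: str) -> list[str]:
    """Instances still serving after a single-AZ failure."""
    return [inst for inst, az in placement.items() if az != failed_az]
```

With at least one instance per AZ, losing any single AZ still leaves capacity in the remaining two.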
- Automate recovery for components constrained to a single location:
- rebuild from a machine image on failure
- You must automate the ability to recreate necessary infrastructure, redeploy applications, and recreate necessary data for these cases.
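The recovery sequence above (recreate infrastructure, redeploy applications, recreate data) can be orchestrated as an ordered pipeline that halts on the first failure. Step names and the halt-on-failure policy are illustrative assumptions:

```python
from typing import Callable

# Sketch of an automated single-location recovery sequence (step names are
# placeholders): run each step in order and stop at the first failure so
# operators can intervene before later steps run against a bad state.

def run_recovery(steps: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Execute recovery steps in order; return the names of completed steps."""
    completed = []
    for name, action in steps:
        if not action():
            break
        completed.append(name)
    return completed
```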
- Design the Workload to Withstand Component Failures
- Monitor all components of the workload to detect failures
- Failover to healthy resources
- Monitor resources
- We will have simple monitoring, indicating whether the service home page is returning an HTTP 200 OK status. When problems occur, our playbook indicates that logging from the instance will be used to establish the root cause.
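The simple monitoring described above reduces to classifying the home page's HTTP status. In this sketch the status fetcher is injected so the check can run without a live endpoint; the injection pattern is an assumption for testability, not the monitoring tool's actual design:

```python
from typing import Callable

# Sketch of the simple home-page health check: only 200 OK counts as
# healthy. The fetch function is injected so the logic can be exercised
# without a live endpoint.

def check_health(fetch_status: Callable[[], int]) -> bool:
    """Return True only when the service home page answers 200 OK."""
    try:
        return fetch_status() == 200
    except Exception:
        # A failed connection counts as unhealthy, not as a monitoring error.
        return False
```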
- Adapt to changes in demand
- We will have playbooks for common hardware failures, urgent software updates, and other disruptive changes. Self-healing, autoscaling, and well-architected design help the workload absorb these changes.
- Implement change
- We will define our infrastructure as code (IaC), specifically to speed up reconstruction in the event of a failure.
- Software updates are manually performed using a runbook, with downtime required for the installation and restart of the service. If a problem happens during deployment, the runbook describes how to roll back to the previous version.
- Any corrections of the error are done using analysis of logs by the operations and development teams, and the correction is deployed after the fix is prioritized and completed.
- Back up data
- We will use a vendor or purpose-built backup solution to send encrypted backup data to Amazon S3 using a runbook. We will test that the backups work by restoring the data and ensuring the ability to use it on a regular basis using a runbook. We configure versioning on our Amazon S3 objects and remove permissions for deletion of the backups. We use an Amazon S3 bucket lifecycle policy to archive or permanently delete according to our requirements.
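The versioning and lifecycle arrangement above can be expressed as a lifecycle configuration. This fragment is a sketch: the prefix, transition window, and retention period are assumptions, not our actual requirements, though the dict has the shape boto3's `s3.put_bucket_lifecycle_configuration` accepts:

```python
# Illustrative lifecycle rules (prefix and day counts are assumptions):
# archive backups to S3 Glacier after 90 days and expire noncurrent object
# versions after 365 days. The dict matches the shape expected by boto3's
# put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=...).
LIFECYCLE_CONFIGURATION = {
    "Rules": [
        {
            "ID": "archive-then-expire-backups",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        }
    ]
}
```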
- Mitigate data loss and corruption using AWS Backup
- Based on configured health checks, AWS services such as Elastic Load Balancing and AWS Auto Scaling can distribute the load to healthy Availability Zones
- Architect for resiliency
- The workload is deployed in one Region and one Availability Zone. We deploy the application, including the database, to a single instance.
- Test resiliency
- The deployment pipeline of new software is scheduled, with some unit testing, but mostly white-box/black-box testing of the assembled workload.