Tuesday, May 27, 2014

We have moved off

This blog is now in the Cloud!!!





Friday, March 14, 2014

Discovering AWS: Deployment & Management

CloudFormation
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. For example, you can create a template from an entire environment and deploy it, supplying parameters at execution time.
The CloudFormer tool lets you generate a template from an environment while it is running.
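For example, a minimal sketch with the AWS SDK for Python (boto3) — the stack name, template URL and parameter values are just placeholders — could look like this:

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Create a stack from a template stored in S3, passing runtime parameters.
cfn.create_stack(
    StackName="my-web-environment",
    TemplateURL="https://s3.amazonaws.com/my-bucket/web-env.template",
    Parameters=[
        {"ParameterKey": "InstanceType", "ParameterValue": "t1.micro"},
        {"ParameterKey": "KeyName", "ParameterValue": "my-keypair"},
    ],
)

# Block until the stack and all of its resources are fully created.
cfn.get_waiter("stack_create_complete").wait(StackName="my-web-environment")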


Data Pipeline
Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. It helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. For example, you could move data from S3 to DynamoDB, and from DynamoDB run another process to move the data to another S3 bucket. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos.


CloudWatch
Amazon CloudWatch provides monitoring for AWS cloud resources and the applications customers run on AWS. Basic monitoring is included by default for all EC2 instances, collecting CPU, disk I/O and network metrics at no charge. You can publish additional custom metrics at a small cost. CloudWatch is accessible from the AWS Console, APIs, SDKs and the CLI (Command Line Interface).
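As a quick illustration, a custom metric can be published and read back with a couple of SDK calls; here is a minimal boto3 sketch where the namespace and metric name are invented placeholders:

import datetime
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

# Publish one data point for a hypothetical application queue depth.
cw.put_metric_data(
    Namespace="MyApp",
    MetricData=[{"MetricName": "QueueDepth", "Value": 42.0, "Unit": "Count"}],
)

# Retrieve the average over the last hour in 5-minute periods.
stats = cw.get_metric_statistics(
    Namespace="MyApp",
    MetricName="QueueDepth",
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
print(stats["Datapoints"])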


Elastic Beanstalk
It is a way to deploy and manage applications in AWS. You simply upload your application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. To ensure easy portability of your application, Elastic Beanstalk is built using familiar software stacks such as the Apache HTTP Server for Node.js, PHP and Python, Passenger for Ruby, IIS 7.5 for .NET, and Apache Tomcat for Java. There is no additional charge for Elastic Beanstalk - you pay only for the AWS resources needed to store and run your applications.


Identity and Access Management (IAM)
IAM enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups and use permissions to allow or deny their access to AWS resources.
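A typical flow — create a group, attach a policy, create a user and put it in the group — looks roughly like this as a minimal boto3 sketch (group and user names are placeholders; the policy ARN is the AWS-managed S3 read-only policy):

import boto3

iam = boto3.client("iam")

# Create a group and grant it read-only access to S3.
iam.create_group(GroupName="developers")
iam.attach_group_policy(
    GroupName="developers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Create a user and add it to the group so it inherits the permissions.
iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="developers", UserName="alice")
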
AWS CloudHSM
This is the only non-virtual component in the AWS offering. It is a dedicated hardware appliance, hosted in an AWS datacenter and provisioned inside your VPC, that stores your private keys; if the device is tampered with, it is zeroized automatically and all the key material is lost. The CloudHSM service allows you to protect your encryption keys within HSMs (Hardware Security Modules) designed and validated to government standards for secure key management. You can securely generate, store, and manage the cryptographic keys used for data encryption such that they are accessible only by you.


OpsWorks
AWS OpsWorks is an application management service that makes it easy for DevOps users to model and manage the entire application, from load balancers to databases. Start from templates for common technologies like Ruby, Node.js, PHP, and Java, or build your own using Chef recipes to install software packages and perform any task that you can script. AWS OpsWorks can scale your application using automatic load-based or time-based scaling and maintain the health of your application by detecting failed instances and replacing them. You have full control over deployments and the automation of each component.



Tuesday, March 11, 2014

Discovering AWS: Networking

Before describing the different networking services in AWS, it is important to clarify that AWS does not support multicast.


Virtual Private Cloud (VPC)
It enables you to create a logically isolated private network that you manage in the same way as a network in your own datacenter. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. Additionally, you can create a hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter.
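Creating a VPC with your own address range and a first subnet takes just a couple of calls; a minimal boto3 sketch (CIDR blocks and the Availability Zone are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC with the address range you choose.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve a subnet out of that range in a single Availability Zone.
subnet = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",
)
print(vpc_id, subnet["Subnet"]["SubnetId"])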


Route 53
It is a DNS service. It uses a global network of DNS servers to publish your records; when a client needs to resolve a name, it is answered by the nearest AWS DNS server, which reduces network latency. Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances, Elastic Load Balancers, or Amazon S3 buckets – and can also be used to route users to infrastructure outside of AWS.
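For instance, pointing a name at a web server is a single change batch; a minimal boto3 sketch where the hosted zone ID, record name and IP address are placeholders:

import boto3

r53 = boto3.client("route53")

# Create or update ("upsert") an A record in an existing hosted zone.
r53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={
        "Comment": "Point www at the web server",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)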


Elastic Load Balancing
It is a load balancing service. ELB supports routing and load balancing of HTTP, HTTPS and TCP traffic to EC2 instances. ELB automatically scales its request handling capacity to meet the demands of application traffic. Additionally, Elastic Load Balancing offers integration with Auto Scaling to ensure that you have back-end capacity to meet varying levels of traffic without requiring manual intervention. A single CNAME provides a stable entry point for DNS configuration.
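A minimal boto3 sketch that creates a classic load balancer listening on HTTP port 80 and registers two instances behind it (the name, zones and instance IDs are placeholders):

import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Create the load balancer with a single HTTP listener.
elb.create_load_balancer(
    LoadBalancerName="my-web-elb",
    Listeners=[{
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
    }],
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Put two EC2 instances behind it.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-web-elb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"},
               {"InstanceId": "i-0fedcba9876543210"}],
)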


Direct Connect
It establishes a dedicated network connection from your own datacenter to your AWS VPC. AWS Direct Connect makes it easy to scale your connection to meet your needs: it provides 1 Gbps and 10 Gbps connections, and you can easily provision multiple connections if you need more capacity. You can also use AWS Direct Connect instead of establishing a VPN connection over the Internet to your Amazon VPC, avoiding the need for VPN hardware that frequently can't support data transfer rates above 4 Gbps.

     

Monday, March 10, 2014

Discovering AWS: Application Services

Simple Workflow Service (SWF)
It is a task coordination and state management service for your applications in the Cloud. It has deciders and workers that you configure as you need. SWF ensures tasks are executed reliably, in order and without duplicates. Deciders and workers run on your infrastructure while SWF manages the state.


CloudSearch
It is a fully-managed service that makes it easy to set up, manage, and scale a search solution for your website or application in the Cloud. It adds fast, highly scalable search functionality to your application and scales automatically. With a few clicks in the AWS Management Console, you can create a search domain, upload the data you want to make searchable to Amazon CloudSearch, and the search service automatically provisions the required technology resources and deploys a highly tuned search index. As your volume of data and traffic fluctuates, it seamlessly scales to meet your needs. You can easily change your search parameters, fine-tune search relevance, and apply new settings at any time without having to re-upload your data.


Simple Notification Service (SNS)
It creates, operates and sends notifications. Publish a message from an application and distribute it immediately to all subscribers or to other applications. It now includes a Mobile Push Messaging service that lets you push to mobile devices such as iPhone, iPad, Android, Kindle Fire and internet-connected smart devices. It can also deliver notifications by SMS text message or email, to Amazon Simple Queue Service (SQS) queues, or to any HTTP endpoint.
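The publish/subscribe flow is very small in code; a minimal boto3 sketch (the topic name and email address are placeholders):

import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Create a topic and subscribe an email address to it.
topic = sns.create_topic(Name="order-events")
topic_arn = topic["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# Publish one message; SNS fans it out to every subscriber.
sns.publish(
    TopicArn=topic_arn,
    Subject="New order received",
    Message="Order 1234 has been placed.",
)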


Simple Queue Service (SQS)
It is a fast, reliable, scalable, fully managed message queuing service. You can use SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available. To prevent messages from being lost or becoming unavailable, all messages are stored redundantly across multiple servers and data centers. Developers can get started with SQS by using only five APIs: CreateQueue, SendMessage, ReceiveMessage, ChangeMessageVisibility, and DeleteMessage. Additional APIs are available to provide advanced functionality.
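A minimal boto3 sketch exercising those five operations (the queue name and message body are placeholders):

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# CreateQueue and SendMessage.
queue_url = sqs.create_queue(QueueName="work-items")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="process order 1234")

# ReceiveMessage, ChangeMessageVisibility and DeleteMessage.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    # Keep the message hidden from other consumers a bit longer while we work.
    sqs.change_message_visibility(
        QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"], VisibilityTimeout=120
    )
    # ... do the work, then delete the message so it is not redelivered.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])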


Simple Email Service (SES)
It is a bulk and transactional email-sending service. You can send transactional email, marketing messages, or any other type of high-quality content, and you only pay for what you use. In the Cloud, servers can change and IP addresses can change (especially if you use Auto Scaling), so a server shouldn't send mail by itself; it should use SES instead. One important thing: this is not a spam tool.
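Sending a transactional mail is a single call; a minimal boto3 sketch where both addresses are placeholders that would have to be verified in SES first:

import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Send a plain-text transactional email from a verified sender address.
ses.send_email(
    Source="noreply@example.com",
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order has shipped"},
        "Body": {"Text": {"Data": "Order 1234 is on its way."}},
    },
)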


Elastic Transcoder
It provides media transcoding in the cloud. It is designed to be a highly scalable, easy-to-use and cost-effective way for developers and businesses to convert (or "transcode") media files from their source format into versions that will play back on devices like smartphones, tablets and PCs. It can be used for both audio and video transcoding.



Amazon CloudFront

Amazon CloudFront is a content delivery web service. It integrates with other Amazon Web Services to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no commitments. It is basically a Content Delivery Network (CDN) like Akamai. It supports video streaming and live streaming with Adobe FMS.

      

Tuesday, March 4, 2014

Discovering AWS: Database

Relational Database Service (RDS)
It provides a SQL Server, MySQL, Oracle or PostgreSQL server, installed, configured and ready to use. AWS manages the infrastructure: patching, backups, resizing and availability (failover, for example), while your DBAs manage the databases.
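Launching a managed database is a single API call; a minimal boto3 sketch for a small Multi-AZ MySQL instance (the identifier, class, storage size and credentials are placeholders):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Launch a managed MySQL instance; AWS installs, configures and patches it.
rds.create_db_instance(
    DBInstanceIdentifier="myapp-db",
    Engine="mysql",
    DBInstanceClass="db.t2.micro",
    AllocatedStorage=20,                  # GB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,                         # AWS manages the standby and failover
)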




DynamoDB
It is a NoSQL database service. You specify the throughput you need, for example 20 writes/second, and AWS sizes the solution to provide it. It can store any amount of data; there are no limits. It is very easy to provision and resize the capacity needed for each table.

It is fully integrated with AWS Elastic MapReduce (EMR). EMR runs Hadoop clusters, so it can manage petabytes of data, turning a complex and expensive process into an easy and cost-effective one.
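A minimal boto3 sketch that creates a table with the provisioned throughput you need and stores one item (table and attribute names are placeholders):

import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Create a table and declare the read/write throughput it must sustain.
ddb.create_table(
    TableName="orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 20},
)
ddb.get_waiter("table_exists").wait(TableName="orders")

# Store one item.
ddb.put_item(
    TableName="orders",
    Item={"order_id": {"S": "1234"}, "status": {"S": "shipped"}},
)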





Redshift
It is a data warehouse service in AWS, optimized for OLAP. It can be scaled from a single 2 TB XL node up to 100 16 TB 8XL nodes, and back down again. You pay only for the infrastructure you provision; it is fully managed, SQL-compatible and integrated with the other AWS services. It is optimized to work with AWS S3.





ElastiCache
It is a fully managed caching service, protocol-compliant with Memcached. It manages patching and the detection and recovery of failures in a cache node. A simple API call is all you need to grow or shrink the cluster. It is fully integrated with AWS CloudWatch (monitoring service) and AWS SNS (notification service) for monitoring and alerting, and it supports Security Groups. It can seamlessly cache in front of SimpleDB or RDS instances.




SimpleDB

It is a highly available and flexible non-relational data store that offloads the work of database administration. Developers simply store and query data items via web services requests and Amazon SimpleDB does the rest. It is optimized to provide high availability and flexibility, with little or no administrative burden. SimpleDB creates and manages multiple geographically distributed replicas of your data automatically to enable high availability and data durability. The service charges you only for the resources actually consumed in storing your data and serving your requests.



Wednesday, February 26, 2014

Discovering AWS: Storage


Simple Storage Service (S3)
It is not a file system, it is an object store. It has unlimited storage capacity, it is highly scalable and available, and it exists inside a region, which means that objects are not moved outside of the region unless you specify it. By default, all objects are replicated: every object exists in three different locations. It is a WORM store (Write Once, Read Many), so it is perfect for backups, log files, etc., and even for the static pages of a web application.
Interfaces to S3 can be REST or SOAP, and if we provide a key, data will be encrypted (AES-256) automatically. An object key can be up to 1024 characters long, and bucket names have to be unique across all of AWS, not just within our region or AZ.
It enables multipart upload (recommended for objects over 100 MB, with up to 10,000 parts), access control (using Amazon S3 policies, ACLs or IAM policies) and object versioning (you pay for both the current object and the old versions).
Reduced Redundancy Storage (RRS) is a newer option within the S3 offering. It provides 99.99% durability and availability with a lower level of redundancy, at roughly two thirds of the standard S3 price.
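A minimal boto3 sketch covering the basics described above — create a bucket, upload an object with AES-256 server-side encryption, and read it back (bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Bucket names must be globally unique across all of AWS.
s3.create_bucket(Bucket="my-globally-unique-bucket-name")

# Upload an object and let S3 encrypt it at rest with AES-256.
s3.put_object(
    Bucket="my-globally-unique-bucket-name",
    Key="logs/2014-02-26.log",
    Body=b"application log contents",
    ServerSideEncryption="AES256",
)

# Read the object back.
obj = s3.get_object(Bucket="my-globally-unique-bucket-name", Key="logs/2014-02-26.log")
print(obj["Body"].read())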



Elastic Block Store (EBS)
EBS is to AWS what a SAN volume is to a VMware host. It is a block storage volume that you can attach to an EC2 instance, format with a standard filesystem and even use as the boot device. It is replicated automatically by default inside the same Availability Zone, and snapshots are stored in S3. A new EBS volume can be created from a snapshot and attached to another EC2 instance.
An EBS volume is "portable": you can move it from one instance to another (detach/attach), but it can't be shared between different EC2 instances at the same time.
If you choose the EBS-optimized option when provisioning the instance, it gets dedicated bandwidth to EBS over a dedicated storage network.

You can also use EBS Provisioned IOPS. Differences between the two options:

EBS Standard:
- IOPS: 100 IOPS steady-state, with best-effort bursts to hundreds
- Throughput: variable by workload, best effort up to tens of MB/s
- Latency: varies; reads typically < 20 ms, writes typically < 10 ms
- Capacity: as provisioned, up to 1 TB

EBS Provisioned IOPS (PIOPS):
- IOPS: within 10% of up to 4,000 IOPS as provisioned, 99.9% of a given year
- Throughput: 16 KB per I/O, i.e. up to 64 MB/s as provisioned
- Latency: low and consistent at the recommended queue depth
- Capacity: as provisioned, up to 1 TB

With EBS Standard you pay for what you use; with EBS PIOPS you pay for what you provision.
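A typical snapshot workflow — snapshot a volume, create a new volume from the snapshot in another AZ and attach it to a second instance — looks like this in a minimal boto3 sketch (all IDs are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Snapshot an existing volume; the snapshot is stored in S3 behind the scenes.
snapshot = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                               Description="nightly backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Create a fresh volume from the snapshot in another Availability Zone.
new_volume = ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1b",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_volume["VolumeId"]])

# Attach the new volume to a different instance.
ec2.attach_volume(
    VolumeId=new_volume["VolumeId"],
    InstanceId="i-0fedcba9876543210",
    Device="/dev/sdf",
)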


Glacier
Similar to S3 but used for archiving purposes. It costs about a tenth of S3, around 1 cent/GB/month. It takes 3-5 hours to retrieve the data, so you should use it for data that you don't need in your day-to-day work. It is a WORN store (Write Once, Read Never), and you pay to access the data.


AWS Import/Export

AWS Import/Export accelerates moving large amounts of data into and out of AWS using portable storage devices for transport. AWS transfers your data directly onto and off of storage devices using Amazon’s high-speed internal network and bypassing the Internet. For significant data sets, AWS Import/Export is often faster than Internet transfer and more cost effective than upgrading your connectivity.
  
  

