How to delete AWS configuration recorder

References:

AWS Config now supports Deletion of Configuration Recorder

AWS Documentation » AWS Config » Developer Guide » Managing AWS Config » Managing the Configuration Recorder

delete-configuration-recorder

Note

  • Make sure that you have deleted all AWS Config rules before you delete the configuration recorder, because you will still be charged for them even after the recorder is gone. This can be done in the AWS web console.
  • The AWS console doesn’t yet provide the functionality to delete the configuration recorder; you can only stop it there. To delete it completely, you need to use the AWS CLI or API. Below is the command to run in the CLI.

To delete a rule:

  1. Open the AWS Config console at https://console.aws.amazon.com/config/.

  2. Select the region your rules are stored in using the menu in the top-right corner of your console.

  3. In the navigation pane, choose Rules.

  4. Choose the Edit rule icon (the pencil icon) for the rule that you want to delete.

  5. On the Configure Rule page, choose Delete rule. When prompted, choose Delete again.
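
Alternatively, each rule can also be deleted via the AWS CLI; a sketch assuming a hypothetical rule name:

C:\Users\Cat>aws configservice delete-config-rule --config-rule-name my-config-rule --region ap-southeast-1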

 

To delete the configuration recorder using the AWS CLI:


C:\Users\Cat>aws configservice delete-configuration-recorder --configuration-recorder-name default --region ap-southeast-1

Note that you need administrator privileges to execute this command; otherwise an error will be thrown.

After the configuration recorder is deleted, AWS Config will not record resource configuration changes until you create a new configuration recorder.

This action does not delete the configuration information that was previously recorded. You will be able to access the previously recorded information by using the get-resource-config-history action, but you will not be able to access this information in the AWS Config console until you create a new configuration recorder.
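
To confirm the deletion, you can list the remaining configuration recorders in the same region; the output should be an empty list, similar to:

C:\Users\Cat>aws configservice describe-configuration-recorders --region ap-southeast-1
{
    "ConfigurationRecorders": []
}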

 


Chapter 10 Exercise 10.3 Create Redis Replication Group

There’re concepts you should understand before working on the exercise.

Reference:

AWS Documentation » Amazon ElastiCache » User Guide » ElastiCache Replication (Redis) » Redis Replication

Redis implements replication in two ways:

  • Redis (cluster mode disabled) with a single shard that contains all of the cluster’s data in each node.

ElastiCacheClusters-CSN-Redis-Replicas

  • Redis (cluster mode enabled) with data partitioned across up to 15 shards.

ElastiCacheClusters-CSN-RedisClusters (1)

I created a cluster named myredis3 for the exercise. Note the Node Type: it cannot be from the t2 family if you want to enable Multi-AZ automatic failover.

2018-01-23_215158

Have a look at the details of the cluster.

2018-01-23_215841

As required, it has one replica configured.

2018-01-23_220340

To connect to the primary and the replica respectively from outside AWS, you need to add iptables rules on the NAT server. Refer to my previous blog for the whole procedure of configuring a NAT server to access an ElastiCache cluster from outside AWS. At this stage, you just need to run the commands below:

# primary node
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 63791 -j DNAT --to 172.31.26.20:6379
# replica node
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 63792 -j DNAT --to 172.31.13.159:6379

 

Now you can use FastoNoSQL to connect to both nodes and test the write operations.

2018-01-23_221346

2018-01-23_221215
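
If you prefer the command line over FastoNoSQL, the same checks can be done with redis-cli through the forwarded ports; a sketch assuming <NAT-public-IP> is your NAT instance’s public IP (a write to the replica should be rejected, since replicas are read-only):

# write to the primary through the forwarded port 63791
redis-cli -h <NAT-public-IP> -p 63791 SET testkey hello
# read the key back from the replica through the forwarded port 63792
redis-cli -h <NAT-public-IP> -p 63792 GET testkey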


Chapter 10 Amazon ElastiCache Exercise Supplement – Connect to the cluster from outside AWS

The exercises in this chapter are very simple: just create one Memcached and one Redis cluster using AWS ElastiCache. However, when I tried to connect to the two clusters using the FastoNoSQL client on my Windows machine, it failed. I then googled and found that “The service is designed to be accessed exclusively from within AWS.”

For testing purposes, I’d like to use my client machine to access the clusters instead of setting up another EC2 instance and accessing them from there. If that’s your goal too, read on.

Useful References:

AWS Documentation » Amazon ElastiCache » User Guide » Accessing ElastiCache Resources from Outside AWS

AWS Documentation » Amazon Virtual Private Cloud » User Guide » VPC Networking Components » NAT » NAT Instances » Setting up the NAT Instance

https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/GettingStarted.ConnectToCacheNode.html

Below is a simple configuration diagram I drew to help understand the procedure in the online documentation. Following the diagram, I have also listed the key steps.

IMG_7764.JPG

In my test environment:

  • Memcached cluster IP address: 172.31.26.77, default port: 11211, security group: sg-0e0b1268.
  • NAT instance public IP: 54.254.145.146, security group: sg-ef8c5389.
    1. Create a NAT instance in the same VPC as your ElastiCache cluster.
    2. Create security group rules for both the ElastiCache cluster and the NAT instance (a CLI sketch appears at the end of this section).
      1. For the NAT instance: an inbound rule allowing TCP on the forwarded cache port (11211) from the client machine, and an inbound rule allowing SSH from the client machine.
      2. For the ElastiCache cluster: an inbound rule allowing TCP on the cache port (11211) from the NAT instance’s security group.
    3. Validate the rules.
      1. Confirm that the client machine is able to SSH to the NAT instance.
      2. Confirm that the cluster is reachable from the NAT instance.
    4. Add an iptables rule to the NAT instance to forward the cache port from the NAT instance to the cluster node:

      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 11211 -j DNAT --to 172.31.26.77:11211

    5. Test the connection from the client machine.

        • Telnet

      [HSY.HSY] ➤ telnet 54.254.145.146 11211
      Trying 54.254.145.146...
      Connected to 54.254.145.146.
      Escape character is '^]'.
      quit
      Connection closed by foreign host.

      • FastoNoSQL client

2018-01-22_230521

Note: steps 2 and 3 are optional if security is not a concern (in my case it’s purely a test environment and I wanted to get things set up quickly); you can simply add an allow-all rule for both inbound and outbound traffic. However, I recommend setting the strict rules, as it’s also a good opportunity to go over the topic of security group settings.
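
For reference, the strict rules from step 2 could be created with the AWS CLI as below; a sketch using the security group IDs from my test environment (replace <client-ip> with your client machine’s public IP):

# NAT instance (sg-ef8c5389): allow the forwarded cache port and SSH from the client machine
aws ec2 authorize-security-group-ingress --group-id sg-ef8c5389 --protocol tcp --port 11211 --cidr <client-ip>/32
aws ec2 authorize-security-group-ingress --group-id sg-ef8c5389 --protocol tcp --port 22 --cidr <client-ip>/32

# ElastiCache cluster (sg-0e0b1268): allow the cache port from the NAT instance's security group
aws ec2 authorize-security-group-ingress --group-id sg-0e0b1268 --protocol tcp --port 11211 --source-group sg-ef8c5389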

Exercise 9.3-9.5 Create different types of DNS routing policies using AWS Route 53

The exercises designed in this chapter are really good and very straightforward. By completing the hands-on exercises, you will get a clear understanding of routing policies in both public and private hosted zones.

Note: there are some typos in the description of the exercises, such as “create a record set with type set to developer”, where “type” should be “name”. Don’t get confused.

9.3 Create an Alias A Record with a Simple Routing Policy

9.4 Create a Latency Routing Policy (delete one of the latency-based record sets to change the policy from latency-based to weighted)

9.5 Create a Hosted Zone for Amazon Virtual Private Cloud (Amazon VPC)
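
Although the exercises are done in the console, record sets can also be created with the AWS CLI; a minimal sketch for one weighted A record, assuming a hypothetical hosted zone ID and example values:

aws route53 change-resource-record-sets --hosted-zone-id <hosted-zone-id> --change-batch file://change-batch.json

where change-batch.json contains:

{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "developer.example.com",
      "Type": "A",
      "SetIdentifier": "weight-50",
      "Weight": 50,
      "TTL": 60,
      "ResourceRecords": [{ "Value": "203.0.113.10" }]
    }
  }]
}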

 

Below is the screenshot of the record set configuration of the Hosted Zone in AWS Route 53 when finishing exercise 9.3/9.4.

c9

Below is the screenshot of Hosted Zone configuration in AWS Route 53 when finishing exercise 9.5.

c9_2

Below is the screenshot of Hosted Zones after finishing exercises 9.3/4/5. Note that there are two hosted zones: one public and one private.

2018-01-17_130122

Exercise 7.5 Launch a Redshift Cluster

The key steps are listed below, along with the official document reference at the end.

    1. Download and install Java 8 JRE.
    2. Download the Amazon Redshift JDBC Driver.
    3. Download SQL Workbench.
    4. Launch the SQL Workbench executable and create the database connection. Note: you must configure the Redshift JDBC driver by choosing the downloaded driver location in the “Manage Drivers” dialogue.
    5. Load sample data from Amazon S3 by using the COPY command. Note: the load files and commands can all be found in the official doc “Load Sample data from S3”.
      *a. Create an IAM role. Choose the two permission policies “AmazonS3ReadOnlyAccess” and “AWSGlueConsoleFullAccess”.
      *b. Attach the IAM role to the Redshift cluster (a CLI sketch follows the COPY command below).
      *c. Run the CREATE TABLE and COPY commands, authenticating by referencing the IAM role you created.

      copy users from 's3://awssampledbuswest2/tickit/allusers_pipe.txt'
      credentials 'aws_iam_role=<iam-role-arn>'
      delimiter '|' region 'us-west-2';
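
Step *b can also be done from the CLI instead of the console; a sketch assuming a hypothetical cluster identifier and role ARN:

aws redshift modify-cluster-iam-roles --cluster-identifier examplecluster --add-iam-roles arn:aws:iam::<account-id>:role/<redshift-role>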

 

Reference Doc:

Chapter 6 AWS IAM exercises

Some useful commands for completing the exercises of this chapter are noted below:


--List available regions in default JSON format

C:\Users\Betty>aws ec2 describe-regions
{
    "Regions": [
        {
            "Endpoint": "ec2.ap-south-1.amazonaws.com",
            "RegionName": "ap-south-1"
        },
        {
            "Endpoint": "ec2.eu-west-2.amazonaws.com",
            "RegionName": "eu-west-2"
        },
        ...
    ]
}

--List available regions in text format

C:\Users\Betty>aws ec2 describe-regions --output text
REGIONS ec2.ap-south-1.amazonaws.com ap-south-1
REGIONS ec2.eu-west-2.amazonaws.com eu-west-2
REGIONS ec2.eu-west-1.amazonaws.com eu-west-1
REGIONS ec2.ap-northeast-2.amazonaws.com ap-northeast-2
REGIONS ec2.ap-northeast-1.amazonaws.com ap-northeast-1
REGIONS ec2.sa-east-1.amazonaws.com sa-east-1
REGIONS ec2.ca-central-1.amazonaws.com ca-central-1
REGIONS ec2.ap-southeast-1.amazonaws.com ap-southeast-1
REGIONS ec2.ap-southeast-2.amazonaws.com ap-southeast-2
REGIONS ec2.eu-central-1.amazonaws.com eu-central-1
REGIONS ec2.us-east-1.amazonaws.com us-east-1
REGIONS ec2.us-east-2.amazonaws.com us-east-2
REGIONS ec2.us-west-1.amazonaws.com us-west-1
REGIONS ec2.us-west-2.amazonaws.com us-west-2

--Show the version of AWS CLI

C:\Users\Betty>aws --version
aws-cli/1.14.2 Python/2.7.9 Windows/8 botocore/1.8.6

--Configure the CLI to use multiple access keys, then disable and delete the old access keys at the end.

C:\Users\Betty>aws configure --profile administrator
AWS Access Key ID [None]: AKIAID54XROS2XNB2N4Q
AWS Secret Access Key [None]: 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
Default region name [None]: us-east-1
Default output format [None]: text

C:\Users\Betty>aws s3api list-buckets --profile administrator
BUCKETS 2017-10-03T00:08:45.000Z fairybetty-apsoutheast2
BUCKETS 2017-10-03T00:11:34.000Z fairybetty-euwest4
BUCKETS 2017-10-03T02:53:11.000Z fairybetty-myserverlesswebsite
BUCKETS 2017-10-19T05:02:40.000Z fairybetty-sharedbucket
BUCKETS 2017-10-03T00:05:24.000Z fairybetty-useast1
BUCKETS 2017-10-05T01:46:24.000Z fb-pollyaudiofiles
BUCKETS 2017-10-05T01:45:24.000Z fb-pollywebsite
BUCKETS 2017-11-03T04:24:15.000Z mynewbucket-huang-20171102
OWNER suya.huang 8864937e19554b4efa96371e3ad5c514d186faa25d790221f908fa0c8448930e


C:\Users\Betty>aws configure --profile administrator_new
AWS Access Key ID [None]: AKIAIGM4K5MPYUYWQPRQ
AWS Secret Access Key [None]: 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
Default region name [None]: us-east-1
Default output format [None]: text

C:\Users\Betty>aws s3api list-buckets --profile administrator_new
BUCKETS 2017-10-03T00:08:45.000Z fairybetty-apsoutheast2
BUCKETS 2017-10-03T00:11:34.000Z fairybetty-euwest4
BUCKETS 2017-10-03T02:53:11.000Z fairybetty-myserverlesswebsite
BUCKETS 2017-10-19T05:02:40.000Z fairybetty-sharedbucket
BUCKETS 2017-10-03T00:05:24.000Z fairybetty-useast1
BUCKETS 2017-10-05T01:46:24.000Z fb-pollyaudiofiles
BUCKETS 2017-10-05T01:45:24.000Z fb-pollywebsite
BUCKETS 2017-11-03T04:24:15.000Z mynewbucket-huang-20171102
OWNER suya.huang 8864937e19554b4efa96371e3ad5c514d186faa25d790221f908fa0c8448930e

C:\Users\Betty>aws s3api list-buckets --profile administrator

An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
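
The disable-and-delete step itself is not captured above; it would look like the following, run under the new profile (the key ID is the old one being retired, and <user-name> is the IAM user it belongs to):

C:\Users\Betty>aws iam update-access-key --access-key-id AKIAID54XROS2XNB2N4Q --status Inactive --user-name <user-name> --profile administrator_new
C:\Users\Betty>aws iam delete-access-key --access-key-id AKIAID54XROS2XNB2N4Q --user-name <user-name> --profile administrator_new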

Install and Configure AWS CLI

Before you can use the CLI from your Windows client machine, you need to install and configure it. Refer to the official docs for details:

Install the AWS Command Line Interface on Microsoft Windows

Configuring the AWS CLI

Key steps are listed below:

  • Download the appropriate MSI installer.

Download the AWS CLI MSI installer for Windows (64-bit)

  • Run the downloaded MSI installer.
  • Follow the instructions that appear.

The CLI installs to C:\Program Files\Amazon\AWSCLI (64-bit) by default.

  • To confirm the installation, use the aws --version command at a command prompt.

C:\Users\cat>aws --version
aws-cli/1.14.2 Python/2.7.9 Windows/8 botocore/1.8.6

  • Configure the AWS CLI

List available regions


C:\Users\cat>aws ec2 describe-regions --output text
REGIONS ec2.ap-south-1.amazonaws.com ap-south-1
REGIONS ec2.eu-west-2.amazonaws.com eu-west-2
REGIONS ec2.eu-west-1.amazonaws.com eu-west-1
REGIONS ec2.ap-northeast-2.amazonaws.com ap-northeast-2
REGIONS ec2.ap-northeast-1.amazonaws.com ap-northeast-1
REGIONS ec2.sa-east-1.amazonaws.com sa-east-1
REGIONS ec2.ca-central-1.amazonaws.com ca-central-1
REGIONS ec2.ap-southeast-1.amazonaws.com ap-southeast-1
REGIONS ec2.ap-southeast-2.amazonaws.com ap-southeast-2
REGIONS ec2.eu-central-1.amazonaws.com eu-central-1
REGIONS ec2.us-east-1.amazonaws.com us-east-1
REGIONS ec2.us-east-2.amazonaws.com us-east-2
REGIONS ec2.us-west-1.amazonaws.com us-west-1
REGIONS ec2.us-west-2.amazonaws.com us-west-2

Open the access key file you downloaded earlier while creating the IAM user, and copy and paste the Access Key ID and Secret Access Key into the command prompt as below.


C:\Users\cat>aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-east-1
Default output format [None]: text

To show your current configuration values:


C:\Users\cat>aws configure list
      Name                    Value             Type                Location
      ----                    -----             ----                --------
   profile                <not set>             None                None
access_key     ****************HWVQ  shared-credentials-file
secret_key     ****************V9oj  shared-credentials-file
    region                us-east-1       config-file             ~/.aws/config

The content of the CLI configuration file ~/.aws/config:


[default]
output = text
region = us-east-1
[profile administrator]
output = text
region = us-east-1
[profile administrator_new]
output = text
region = us-east-1

 


The content of the AWS credentials file ~/.aws/credentials:

[default]
aws_access_key_id = AKI***VQ
aws_secret_access_key = hnrF5tOrr***9oj
[administrator]
aws_access_key_id = AKI***4Q
aws_secret_access_key = Cb7F3u***zJX
[administrator_new]
aws_access_key_id = AKI***RQ
aws_secret_access_key = xaSRXV***OH

Now you should be able to use the CLI from your Windows client machine. Note that you need to add appropriate policies to the IAM user to view or edit resource information.
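
A quick way to confirm which IAM identity a given profile resolves to is sts get-caller-identity (a quick check; it prints the caller’s account, user ID, and ARN):

C:\Users\cat>aws sts get-caller-identity --profile administrator_new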


 

Exercise 5.5 Create a Scaling Policy

  1. Create an Amazon CloudWatch metric and alarm for CPU utilization using the AWS Management Console.
  2. Using the Auto Scaling group from Exercise 5.4, edit the Auto Scaling group to include a policy that uses the CPU utilization alarm.
  3. Drive CPU utilization up on the monitored Amazon EC2 instances to observe Auto Scaling.

Firstly, create an Auto Scaling launch configuration.

asg_c1

Secondly, create an Auto Scaling group using the launch configuration created earlier.

asg1

asg2

Do not configure a scaling policy at this step, as we’re going to create one using the CLI.

asg3

asg4

 

asg5

Thirdly, create a scaling policy for the scaling group created above.


C:\Users\Betty>aws autoscaling put-scaling-policy --auto-scaling-group-name MyASG55 --policy-name MYASG55_CPULoadScaleOut --scaling-adjustment 1 --adjustment-type ChangeInCapacity --cooldown 30
{
    "Alarms": [],
    "PolicyARN": "arn:aws:autoscaling:us-east-1:921874900115:scalingPolicy:fb30c7c0-a0b8-4a73-b3ce-37458acb40d0:autoScalingGroupName/MyASG55:policyName/MYASG55_CPULoadScaleOut"
}

Fourthly, create an alarm and associate it with the specified metric, using the policy ARN as the alarm action.


aws cloudwatch put-metric-alarm --alarm-name capacityAdd --metric-name CPUUtilization --namespace AWS/EC2 --statistic Average --period 60 --threshold 50 --comparison-operator GreaterThanOrEqualToThreshold --dimensions "Name=AutoScalingGroupName,Value=MyASG55" --evaluation-periods 1 --alarm-actions "arn:aws:autoscaling:us-east-1:921874900115:scalingPolicy:fb30c7c0-a0b8-4a73-b3ce-37458acb40d0:autoScalingGroupName/MyASG55:policyName/MYASG55_CPULoadScaleOut"
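
You can verify that the alarm is wired to the scaling policy with describe-policies (a quick check; the alarm name should show up under the policy’s Alarms):

C:\Users\Betty>aws autoscaling describe-policies --auto-scaling-group-name MyASG55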

Fifthly, use a stress-testing tool to simulate workload on the Linux hosts and observe the EC2 instances in the EC2 dashboard. You should see new instances being created and started automatically.
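
For example, on an Amazon Linux instance the stress utility can generate the CPU load (a sketch; the package is assumed to be available from the EPEL repository):

# install the stress tool
sudo yum install -y stress
# fully load two CPU workers for 10 minutes, long enough to trip the alarm
stress --cpu 2 --timeout 600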


Exercise 5.3 Create a custom Amazon CloudWatch Metric for Memory Consumption

  1. Create a custom Amazon CloudWatch metric for memory consumption.
  2. Use the CLI to PUT values into the metric.

The steps for creating a custom metric can be found in the official Amazon documentation. Refer to AWS Documentation » Amazon EC2 » User Guide for Linux Instances » Monitoring Amazon EC2 » Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances.

Below is the excerpt required to complete the task.

  • Create an IAM user with the AWS access type “Programmatic access” and the inline policy below. Download the access key file, which will be needed later.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:PutMetricData",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:ListMetrics",
                "ec2:DescribeTags"
            ],
            "Resource": "*"
        }
    ]
}

  • Install the required packages
 yum install perl-Switch perl-DateTime perl-Sys-Syslog perl-LWP-Protocol-https -y 
  • Download, install, and configure the monitoring scripts
 curl http://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.1.zip -O

unzip CloudWatchMonitoringScripts-1.2.1.zip
rm CloudWatchMonitoringScripts-1.2.1.zip
cd aws-scripts-mon

cp awscreds.template awscreds.conf

Then edit awscreds.conf and set your credentials:

AWSAccessKeyId=my-access-key-id
AWSSecretKey=my-secret-access-key

Alternatively, you can associate an IAM role (instance profile) with your instance so that you don’t need to add the access key information in a configuration file.

  • Run the script to generate metrics and send them to CloudWatch
    • Perform a simple test run without posting data to CloudWatch
./mon-put-instance-data.pl --mem-util --verify --verbose
    • Collect all available memory metrics and send them to CloudWatch
./mon-put-instance-data.pl --mem-util --mem-used-incl-cache-buff --mem-used --mem-avail
    • Set a cron schedule for reporting metrics to CloudWatch. Add the following entry to crontab to report memory and disk space utilization to CloudWatch every five minutes:
*/5 * * * * ~/aws-scripts-mon/mon-put-instance-data.pl --mem-used-incl-cache-buff --mem-util --disk-space-util --disk-path=/ --from-cron
  •  Use the CLI to put metric data into CloudWatch

C:\Users\cat>aws cloudwatch put-metric-data --namespace "System/Linux" --dimensions Name=InstanceId,Value=i-0c5efa4c554f43c45 --metric-name MemoryUtilization --value 99

  •  Use the CLI to show metric information

C:\Users\cat>aws cloudwatch list-metrics --namespace "System/Linux" --metric-name MemoryUtilization --dimensions Name=InstanceId,Value=i-0c5efa4c554f43c45
{
    "Metrics": [
        {
            "Namespace": "System/Linux",
            "Dimensions": [
                {
                    "Name": "InstanceId",
                    "Value": "i-0c5efa4c554f43c45"
                }
            ],
            "MetricName": "MemoryUtilization"
        }
    ]
}

C:\Users\cat>aws cloudwatch get-metric-statistics --namespace "System/Linux" --metric-name MemoryUtilization --dimensions Name=InstanceId,Value=i-0c5efa4c554f43c45 --start-time 2017-12-04T12:00:00.000Z --end-time 2017-12-04T12:25:00.000Z --period 60 --statistics "Sum" "Maximum" "Minimum" "Average" "SampleCount"
{
    "Datapoints": [
        {
            "SampleCount": 1.0,
            "Timestamp": "2017-12-04T12:00:00Z",
            "Average": 89.9832103270251,
            "Maximum": 89.9832103270251,
            "Minimum": 89.9832103270251,
            "Sum": 89.9832103270251,
            "Unit": "Percent"
        },
        {
            "SampleCount": 1.0,
            "Timestamp": "2017-12-04T12:05:00Z",
            "Average": 89.9953995509647,
            "Maximum": 89.9953995509647,
            "Minimum": 89.9953995509647,
            "Sum": 89.9953995509647,
            "Unit": "Percent"
        },
        {
            "SampleCount": 1.0,
            "Timestamp": "2017-12-04T12:10:00Z",
            "Average": 90.0075887749043,
            "Maximum": 90.0075887749043,
            "Minimum": 90.0075887749043,
            "Sum": 90.0075887749043,
            "Unit": "Percent"
        },
        {
            "SampleCount": 1.0,
            "Timestamp": "2017-12-04T12:15:00Z",
            "Average": 90.019777998844,
            "Maximum": 90.019777998844,
            "Minimum": 90.019777998844,
            "Sum": 90.019777998844,
            "Unit": "Percent"
        },
        {
            "SampleCount": 1.0,
            "Timestamp": "2017-12-04T12:20:00Z",
            "Average": 90.0504476590792,
            "Maximum": 90.0504476590792,
            "Minimum": 90.0504476590792,
            "Sum": 90.0504476590792,
            "Unit": "Percent"
        }
    ],
    "Label": "MemoryUtilization"
}

gptransfer failed with "[ERROR]:-error 'ERROR: gpfdist error - line too long in file'"

When the table you copy using gptransfer has wide rows, you might get an error like the one below:

[gpadmin@mdw-1 ~]$ gptransfer -t preview.au.segment_weekly --dest-database qa1_cloned --drop
20160804:03:42:21:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Building list of source tables to transfer...
20160804:03:42:21:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Number of tables to transfer: 1
20160804:03:42:21:007931 gptransfer:mdw-1:gpadmin-[INFO]:-gptransfer will use "fast" mode for transfer.
20160804:03:42:21:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Validating source host map...
20160804:03:42:21:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Validating transfer table set...
20160804:03:42:22:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Using batch size of 2
20160804:03:42:22:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Using sub-batch size of 24
20160804:03:42:22:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Creating work directory '/home/gpadmin/gptransfer_7931'
20160804:03:42:23:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Creating schema au in database qa1_cloned...
20160804:03:42:24:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Starting transfer of preview.au.segment_weekly to qa1_cloned.au.segment_weekly...
20160804:03:42:24:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Creating target table qa1_cloned.au.segment_weekly...
20160804:03:42:24:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Retrieving schema for table preview.au.segment_weekly...
20160804:03:42:33:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Transfering data preview.au.segment_weekly -> qa1_cloned.au.segment_weekly...
20160804:03:42:52:007931 gptransfer:mdw-1:gpadmin-[ERROR]:-Failed to transfer table preview.au.segment_weekly
20160804:03:42:52:007931 gptransfer:mdw-1:gpadmin-[ERROR]:-error 'ERROR: could not write to external resource: Broken pipe (fileam.c:1774) (seg6 sdw-1:40006 pid=9178) (cdbdisp.c:1326)
' in 'INSERT INTO gptransfer.w_ext_segment_weekly_131f3c5e2361d8cb90a4bd9328ccb0e7 SELECT * FROM "au"."segment_weekly"'
20160804:03:42:52:007931 gptransfer:mdw-1:gpadmin-[ERROR]:-error 'ERROR: gpfdist error - line too long in file /home/gpadmin/gptransfer_7931/preview.au.segment_weekly/preview.au.segment_weekly.pipe.6 near (98286585 (url.c:2030) (seg7 slice1 sdw-1:40007 pid=10382) (cdbdisp.c:1326)
DETAIL: External table ext_segment_weekly_131f3c5e2361d8cb90a4bd9328ccb0e7, line 120726 of file gpfdist://sdw-1:8023/preview.au.segment_weekly.pipe.6
' in 'INSERT INTO "au"."segment_weekly" SELECT * FROM gptransfer.ext_segment_weekly_131f3c5e2361d8cb90a4bd9328ccb0e7'
20160804:03:42:52:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Remaining 1 of 1 tables
20160804:03:42:52:007931 gptransfer:mdw-1:gpadmin-[WARNING]:-1 tables failed to transfer. A list of these tables
20160804:03:42:52:007931 gptransfer:mdw-1:gpadmin-[WARNING]:-has been written to the file failed_transfer_tables_20160804_034213.txt
20160804:03:42:52:007931 gptransfer:mdw-1:gpadmin-[WARNING]:-This file can be used with the -f option to continue
20160804:03:42:52:007931 gptransfer:mdw-1:gpadmin-[WARNING]:-the data transfer.
20160804:03:42:52:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Removing work directories...
20160804:03:42:57:007931 gptransfer:mdw-1:gpadmin-[INFO]:-Finished.

To avoid this error, you need to specify the option --max-line-length, according to the official document (Utility Guide):

--max-line-length=length
Sets the maximum allowed data row length in bytes for the gpfdist utility. If not specified, the default is 10485760. The valid range is 32768 (32K) to 268435456 (256MB).

Should be used when user data includes very wide rows (or when a "line too long" error message occurs). Should not be used otherwise, as it increases resource allocation.

[gpadmin@mdw-1 ~]$ gptransfer -t preview.au.segment_weekly --dest-database qa1_cloned --drop  --max-line-length 100485760

Problem resolved!