Unexpected response () – Please Help

Question:

I have this in my test php file:
include_once("cloud/cloudfiles.php");
$auth = new CF_Authentication(MY_NAME, MY_APIKEY, NULL, UK_AUTHURL);
$auth->authenticate();
$conn = new CF_Connection($auth);

and all I get from authenticate is:

Fatal error: Uncaught exception 'InvalidResponseException' with message 'Unexpected response (): ' in /home/royaltyf/public_html/cloud/cloudfiles.php:213 Stack trace: #0 /home/royaltyf/public_html/testcloud.php(10): CF_Authentication->authenticate() #1 {main} thrown in /home/royaltyf/public_html/cloud/cloudfiles.php on line 213

MY_NAME and MY_APIKEY are OK; I double-checked that.

Please Help me.

Answers:

I ran into a similar issue with CentOS. Is there any more information in the error output that you didn’t post? Such as:

* SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
* Closing connection #0

If you’re on CentOS, I found that the following guide fixes this issue:
http://cleverna.me/posts/centos-openssl-has-out-of-date-ca-certs

Hopefully this fixes your issue. It did mine.
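
If you want to confirm that a stale CA bundle is really the problem before changing anything, a quick check from the shell will do. This is only a sketch: the hostname below is an assumption based on the UK_AUTHURL constant in the question, so substitute whichever auth URL your account actually uses.

Code:
# Assumed UK auth endpoint (from UK_AUTHURL); replace with your own auth URL if it differs.
AUTH_HOST=lon.auth.api.rackspacecloud.com

# If the CA bundle is out of date, curl fails with the same "certificate verify failed" error.
curl -v https://$AUTH_HOST/v1.0 2>&1 | grep -i certificate

# Show the verification result for the server's certificate chain
# (you want "Verify return code: 0 (ok)").
openssl s_client -connect $AUTH_HOST:443 < /dev/null 2>/dev/null | grep 'Verify return code'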

A neat suggestion for protecting a cloud server

I suggest adding an option for a cloud server that protects it from accidental deletion. I realize that you want cloud servers to be flexible and easy to create, but it would be good to also be able to protect a valuable server from deletion or rebuilding.

I know that it takes more than one click to delete or rebuild a server, but I still think that many of us would prefer to have several more steps involved before our server could be deleted or rebuilt. Steps that include making a phone call or opening a ticket, not just a couple of clicks in the Cloud Control.

I’m suggesting some kind of optional “lock” that would disable the delete and rebuild buttons on certain servers.

– volsoft

Speed up WordPress with a plugin and an inexpensive CDN

Found a fantastic article today that I thought was worth sharing. The article describes a great plugin that will significantly speed up your WordPress installations. The plugin is called W3 Total Cache, and it works with a CDN called MaxCDN.

For $10 a year (1 TB of bandwidth), MaxCDN will serve all your files without you having to lift a finger; it grabs the files itself. Pretty cool, no? I just signed up today and have already noticed a big improvement in load times for my WP site. I will report back on reliability once I’ve used the service for a bit longer.

For the price it’s worth checking out.

Links:

Full article – http://rackerhacker.com/2010/02/13/wordpress-w3-total-cache-maxcdn/

WP Plugin – http://wordpress.org/extend/plugins/w3-total-cache/

MaxCDN – http://www.maxcdn.com/

PS: This plugin has helped raise my Google Page Speed score by 12 points. That is quite a jump!
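
If you want to confirm the CDN is actually doing the work after enabling it in W3 Total Cache, a quick header check is enough. This is only a sketch; the domain and file path are placeholders, and the exact headers you see will depend on the CDN.

Code:
# Request a static file through the CDN hostname the plugin rewrites to...
curl -sI http://cdn.yourdomain.com/wp-content/uploads/example.jpg | egrep -i '^HTTP|server|cache|expires'

# ...and compare with the same file served directly from your origin.
curl -sI http://www.yourdomain.com/wp-content/uploads/example.jpg | egrep -i '^HTTP|server|cache|expires'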

====================

There is a WordPress plugin called CDN Tools with nearly identical functionality that works with Cloud Files and our CDN partner Limelight Networks as well. Here’s a link to the plugin:

CDN Tools http://wordpress.org/extend/plugins/cdn-tools/

I use CDN Tools, and it is completely transparent. Whenever you upload an image, you’ll see a note that CDN sideloading is enabled. It stores the image locally, and automatically sends it to Cloud Files. If you ever uninstall the plugin, the local files are there already, and the rollback is transparent as well.

====================

I still see the MaxCDN deal as better and cheaper for WP site caching: $10 for 1 TB of bandwidth per year, and since you aren’t physically storing the data (pull zone vs. push zone) there are no data storage fees. It’s strictly a caching service meant to serve cached files from multiple locations.

Now if I was looking to keep a secondary storage for my files or a traditional CDN to serve files of my choosing I would look to Cloud Files.

Not to mention that W3 Total Cache has many more features aimed specifically at making WP run faster, whereas CDN Tools really only offers the ability to serve files via a CDN.

Check out the full article link for more details. It was written by Major Hayden who is a Senior Systems Engineer at Rackspace.

====================

I have done 2 TB of transfer with MaxCDN (in the last month and a half) and have had only one small issue.

Their support is awesome and the cost is great: $0.10 per GB.

Origin pull is much better than uploading the files. All you have to do to stop using it is uncheck the box in W3 Total Cache.

If you sign up, email MaxCDN and tell them Frederick Townes referred you. He is the author of W3 Total Cache.

Don’t run a WP site without it.

====================

ASP.NET 2.0 – Membership and Role Providers using MySQL instead of MS SQL

Question:

Hello,

I found an article about using the ASP.NET 2.0 Membership and Role providers with the MySQL Connector:

http://www.marvinpalmer.com/MarvinPalmer/post/Implement-NET-Membership-and-Roles-using-MySql-Connector-523-on-GoDaddy.aspx

I followed these instructions to a tee, making sure every step was correct. My code works fine locally, but not when uploaded to Mosso. Mosso says it has something to do with the medium trust level of ASP.NET on their servers.

Does anyone have any idea how I can get this working, or another way I could get authentication working with MySQL? I am pretty much a novice when it comes to ASP.NET and probably can’t write my own authentication system.

Thanks in advance!

Answers:

The .NET connector from MySQL should be /bin deployable:
http://dev.mysql.com/downloads/connector/net/

You should be able to add the assembly, with whatever version you choose, to the web.config like this:
<add assembly="MySql.Data, Version=x.x.x.xxxxx, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />

Source:
http://www.eggheadcafe.com/community/aspnet/90/10123253/could-not-load-file-or-assembly-mysqldata.aspx

Hope that helps.

Cron Jobs & rotating back ups

I had found several scripts to run as cron jobs to create rolling backups of my sites, but I couldn’t get any of them to work (the rolling part, that is). I turned to RSC tech support and they referred me to this blog post: http://capellic.com/blog/backup-script-rackspace-cloud

It worked the first time. The only changes I made were to set the proper variables for my site.

Then I modified the script so I had 7 days of daily backups, and created a second script and cron job for 8 weeks of weekly backups. What is great about this approach is that I could set the frequency of the cron jobs so the whole rotation could be tested in one day: I set the daily script to run every 5 minutes and the weekly script to run every hour.

Here is my daily script dailybackup.sh:

Code:
#!/bin/bash
 # Modeled after http://snippets.dzone.com/posts/show/4172

 #### VARIABLES
 # ACCOUNT_ROOT can be found on the Features tab in the control panel for the site
 export ACCOUNT_ROOT="/mnt/stor2-wc2-dfw1/427054/www.DOMAIN_NAME.com"
 export WEB_ROOT="${ACCOUNT_ROOT}/web/content"
 export DB_HOST="DB_SERVER_INTERNAL_NAME"
 export DB_USER="DB_USERNAME"
 export DB_PASSWORD="DB_PASSWORD"
 export DB_NAME="DB_NAME"

 #### PROGRAM - NO EDITING AFTER THIS LINE SHOULD BE NECESSARY
 echo "Rotating daily backups..."
 rm -rf $ACCOUNT_ROOT/backup_daily/07
 mv $ACCOUNT_ROOT/backup_daily/06 $ACCOUNT_ROOT/backup_daily/07
 mv $ACCOUNT_ROOT/backup_daily/05 $ACCOUNT_ROOT/backup_daily/06
 mv $ACCOUNT_ROOT/backup_daily/04 $ACCOUNT_ROOT/backup_daily/05
 mv $ACCOUNT_ROOT/backup_daily/03 $ACCOUNT_ROOT/backup_daily/04
 mv $ACCOUNT_ROOT/backup_daily/02 $ACCOUNT_ROOT/backup_daily/03
 mv $ACCOUNT_ROOT/backup_daily/01 $ACCOUNT_ROOT/backup_daily/02
 mkdir $ACCOUNT_ROOT/backup_daily/01
 echo "... done rotating daily backups."

 echo "Starting database backup..."
 mysqldump --host=$DB_HOST --user=$DB_USER --password=$DB_PASSWORD --all-databases | bzip2 > $ACCOUNT_ROOT/backup_daily/01/mysql-`date +%Y-%m-%d`.bz2
 echo "... daily database backup complete."

 echo "Starting file system backup..."
 tar czf $ACCOUNT_ROOT/backup_daily/01/web_backup.tgz $ACCOUNT_ROOT/web/content/
 echo "... daily file system backup complete."

 exit 0
 #### END PROGRAM

Here is my weekly script weeklybackup.sh:

Code:
#!/bin/bash

 #### VARIABLES
 # ACCOUNT_ROOT can be found on the Features tab in the control panel for the site
 export ACCOUNT_ROOT="/mnt/stor2-wc2-dfw1/427054/www.DOMAIN_NAME.com"

 #### PROGRAM 
 echo "Rotating backups..."
 rm -rf $ACCOUNT_ROOT/backup_weekly/08
 mv $ACCOUNT_ROOT/backup_weekly/07 $ACCOUNT_ROOT/backup_weekly/08
 mv $ACCOUNT_ROOT/backup_weekly/06 $ACCOUNT_ROOT/backup_weekly/07
 mv $ACCOUNT_ROOT/backup_weekly/05 $ACCOUNT_ROOT/backup_weekly/06
 mv $ACCOUNT_ROOT/backup_weekly/04 $ACCOUNT_ROOT/backup_weekly/05
 mv $ACCOUNT_ROOT/backup_weekly/03 $ACCOUNT_ROOT/backup_weekly/04
 mv $ACCOUNT_ROOT/backup_weekly/02 $ACCOUNT_ROOT/backup_weekly/03
 mv $ACCOUNT_ROOT/backup_weekly/01 $ACCOUNT_ROOT/backup_weekly/02
 mv $ACCOUNT_ROOT/backup_daily/07 $ACCOUNT_ROOT/backup_weekly/01
 echo "... done rotating backups."

 exit 0
 #### END PROGRAM

Each of these scripts is stored in a cronjobs directory inside web/content.

I created backup_daily and backup_weekly directories at the root of the site. This is where the scripts store the backup files.

Then I created two cron jobs: one that runs the daily backup each day at 3 am, and one that runs the weekly backup every 7 days at 2:30 am.
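
For reference, this is roughly what those two crontab entries could look like (crontab -e). The paths below reuse the example account root from the scripts and are only an illustration, so substitute your own.

Code:
# Daily backup at 3:00 am every day.
0 3 * * * /bin/bash /mnt/stor2-wc2-dfw1/427054/www.DOMAIN_NAME.com/web/content/cronjobs/dailybackup.sh

# Weekly backup at 2:30 am once a week (Sunday here, as an example).
30 2 * * 0 /bin/bash /mnt/stor2-wc2-dfw1/427054/www.DOMAIN_NAME.com/web/content/cronjobs/weeklybackup.sh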

New knowledgebase article needed: DNS

Here’s my story as a new customer:

1. Joined Rackspace after considering costs and sales rep chat
2. Made a server
3. Installed PHP + apache by following KB articles
4. Made a simple website
5. Made a cron job by following KB article
6. Bought a .com domain
7. ***Tried to set up the DNS records***
{ contacted support multiple times }

Points 1 to 6 took ~3 hours.

Point 7 has taken 2 days, because the information I needed was not easily available.

What needs to be in the tutorial/article

It must start from the viewpoint of: “If I need to do task X, what records do I need to change?”

E.g. “I bought a new domain – how do I point it to my Rackspace server?”

1) I bought meowable.com from godaddy.

2) I need to give GoDaddy the nameservers dns1.stabletransit.com and dns2.stabletransit.com.

3) Then I need to add the domain under Hosting > Cloud Servers > yourserver > DNS and ensure the following records are added:

NS  dns1.stabletransit.com
NS  dns2.stabletransit.com
A   domain.com       255.255.255.255  (your server’s public IP)
A   www.domain.com   255.255.255.255  (your server’s public IP)

4) Check that the DNS information has propagated around the globe: http://www.whatsmydns.net/#A/www.meowable.com
(thanks to support staff for this link!)
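
If you would rather check from a shell as well, dig can confirm both the delegation and the A records. This is just a quick sketch using the example domain from the steps above.

Code:
# Confirm the registrar is delegating the domain to the Rackspace nameservers.
dig NS meowable.com +short

# Ask one of those nameservers for the A records directly, so you are not
# waiting on your local resolver's cache.
dig A meowable.com @dns1.stabletransit.com +short
dig A www.meowable.com @dns1.stabletransit.com +short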

I hope this information helps someone else who is in the same position as me!

P.S. The reason it took so long is that every website suggests DNS can take up to 2 days to update. When I contacted Rackspace support they were very helpful in adding the first ‘A’ record; however, the 2nd and 3rd staff members did not realise I also needed a www ‘A’ entry, which was the real cause of my “problem”.

Cloud LB only for spare server, possible?

Question:

I’d like to use the load balancer only to provide a spare server. Explanation:

I have a main server A and a spare server B (rsynced).

I want connections to go to server B ONLY IF the health monitor detects failures on server A. Otherwise, 100% of the traffic must be handled by server A.

I’m using Weighted Least Connections with a weight of 1 for server B and 100 for server A. But there are still too many connections going to server B; there should be none at all while A is up and running.

Answers:

Unfortunately you cannot do this with a cloud load balancer. All of the methods and configurations provided only allow for different methods of connection distribution, not really a failover type solution.

Depending on your budget, you could get RackConnect (which connects dedicated equipment to the cloud) and have an F5 or Brocade load balancer configured, where this type of advanced configuration is possible.

On the F5 you would configure the VIP and pools, but assign a ‘priority group’ of 1 to one set of servers and a ‘priority group’ of 0 to the other. Traffic always goes to the group-1 servers first; if they are marked down (fail the health check), traffic is sent over to the group-0 servers.

Rackspace Network Security would be the group that could set it up if you went that route. Hope this helps!

===

Yes, this is possible and we do it. We prefer this method to the Cloud Load Balancers that Rackspace offers, due to 1) performance (tested), and 2) SSL termination.

Create a 256 MB Cloud Server and install Nginx on it, configured as a reverse proxy. Here is an example Nginx configuration file for this purpose (in this example, the load-balancing server has /etc/hosts entries for the Cloud Servers yourfavoritecloudserver-eth1 and yoursecondfavoritecloudserver-eth0). The “backup” keyword is what you’re looking for, but this example also demonstrates an HTTPS-only site.


user nobody nobody;
worker_processes 4;

events {
    worker_connections 100;
}

http {
    upstream www-secure {
        server yourfavoritecloudserver-eth1:443;
        server yoursecondfavoritecloudserver-eth0:443 backup;
    }

    server {
        listen 80;
        server_name yourdomain.com;
        rewrite ^ https://yourdomain.com permanent;
    }

    server {
        listen 80;
        server_name www.yourdomain.com;
        rewrite ^ https://yourdomain.com permanent;
    }

    server {
        listen 443;
        server_name yourdomain.com;

        ssl on;
        ssl_certificate /usr/local/nginx/ssl/yourdomain.com.pkcs;
        ssl_certificate_key /usr/local/nginx/ssl/yourdomain.com.key;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;
        keepalive_timeout 70;

        proxy_set_header X_FORWARDED_PROTO https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_max_temp_file_size 0;

        location / {
            proxy_pass https://www-secure;
        }
    }
}
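
If it helps, here is a rough way to sanity-check a setup like this once it is in place. Nothing here is specific to Rackspace, and the domain is the same placeholder used in the configuration above.

Code:
# Check the syntax of the new configuration, then reload nginx if it passes.
nginx -t && nginx -s reload

# Confirm the site answers through the proxy (use -k if the certificate is self-signed).
curl -k -s -o /dev/null -w '%{http_code}\n' https://yourdomain.com/

# To test the failover, stop the web server on the primary backend and repeat the
# curl above: nginx should now route requests to the server marked "backup".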

Is this possible using the Rackspace cloud load balancer, or do I need a trickier setup?

Cloud Files Streaming File / Force Download Issue

Question:

I’m using a function to pass a streaming URI to the browser to force-download large files (100 MB each). I found a great example of this here:

https://gist.github.com/938948

Caveat: I do not want to expose a direct public URL to the files, so I must do some sort of forced download. The download function looks like this:

function cloudDownload($connection, $container, $filename) {
    try {
        $folderDetail = getCloudFolder($connection, $container);
        $cdnURL = $folderDetail->cdn_uri;
        $path = $cdnURL.'/'.$filename;
        $filesize = getFilesize($connection, $container, $filename);

        // For cloud based files.
        session_cache_limiter('none');
        @set_time_limit(0); // For large files.

        header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
        header('Content-Description: File Transfer');
        header('Content-Type: application/force-download');
        header('Content-Type: application/octet-stream', false);
        header('Content-Type: application/download', false);
        header("Content-Disposition: attachment; filename=$filename");
        header('Content-Transfer-Encoding: binary');
        header("Content-Length: $filesize");
        readfile($path);
    } catch (Exception $e) {
        // A try block needs a matching catch; log the error and bail out.
        error_log($e->getMessage());
    }
}

The only problem is that you cannot download more than one file at a time. Is there a better way to force-download a file without revealing its URL? Is there something else I should add to the headers to allow multiple downloads at once?

Answers:

You can’t hide where the file is coming from, no. The way you’re doing it there (reading from Cloud Files to your server, then pushing it to the client) is the only way that I’ve heard of. Keep in mind that you’re essentially paying for double bandwidth using that method and you’re not getting any of the benefits of the multiple edge servers a CDN gives you.

I’m going to be posting a feature request in a few minutes for increased flexibility in this area. I would love to be able to generate “throw away” links that expire after a certain period of time. Right now a redirect is the best you can do (but that exposes the URL too, even if it takes the user looking at the headers to see it).

Looking for VB samples to redirect files to a browser

Question:

I have the container connection working and can read the metadata for files stored in my container, yet am not able to push them through to the client for display in a browser.

Does anyone have any code samples for this?

Function Send_Image_To_Browser(conn, client, container_name)

    Dim container = New CF_Container(conn, client, container_name)
    Dim obj = New CF_Object(conn, container, client, "images8.jpg")

    Response.Clear()
    Response.ContentType = "image/jpeg"
    Response.AppendHeader("content-disposition", "inline; filename=images8.jpg")
    Response.Write(obj.Read)
    Response.End()
End Function

There is a calling routine above this that provides the user/api/container info.

The only thing that displays in the browser is:
System.Net.ConnectStream

Thanks

Answers:

I haven’t tested this code in quite some time, but it may get you going in the right direction:

Code:
Dim buffer(4096) As Byte
Dim amt As Integer = 0
Dim br As IO.BinaryReader = New IO.BinaryReader(storageobject.ObjectStream)
amt = br.Read(buffer, 0, buffer.Length)
While (amt > 0)
    ' Write only the bytes actually read; writing the whole buffer would pad the last chunk with garbage.
    Response.OutputStream.Write(buffer, 0, amt)
    amt = br.Read(buffer, 0, buffer.Length)
End While
br.Close()
storageobject.Dispose()
Response.End()

You’ll have to modify it to suit your code, but it should fit in where you have:

Code:
Response.Write(obj.Read)