What Does Rackspace Cloud Suck At? Customers Vote at Rackspace Feedback

Rackspace Cloud runs a cozy little forum where all of their customers and users can submit and vote on ideas and feedback: features they want that are not currently available, or improvements and fixes that would make the cloud a better product.

Check out the feedback forum here: http://feedback.rackspace.com/

You can read through all the ideas and requests, as well as the comments, to get a sense of what Rackspace Cloud is like and how it is doing in the eyes of its current customers and users. How the company responds to this invaluable input is also an important factor in deciding whether or not to go with them.

A great move by Rackspace.

Infinite Redirect Loop When Going From non-SSL to SSL

Hi,

I’m trying to get things up and running in a cloud environment.

When I click on pages (ASP.NET) that use the HTTPS protocol, the page hangs. I think it is stuck in a loop caused by the redirection from HTTP to HTTPS.

Any ideas on how to fix this?

I tried asking tech support and they told me to either use SSL throughout my site or not use it at all.

Please help!!

thanks,

regards,

CK

=========== Answer 1 ===========

.IsSecureConnection doesn't work behind the cluster: SSL terminates at the load balancer, so the request reaches your code over plain HTTP. You have to use an environment variable to check for SSL instead.

https://manage.rackspacecloud.com/forum/posts/list/1464.page

=========== Answer 2 ===========

Correct! I had the following code:

if (!Request.IsSecureConnection)
{
    // Send the user to SSL
    string serverName = HttpUtility.UrlEncode(Request.ServerVariables["SERVER_NAME"]);
    string filePath = Request.FilePath;
    Response.Redirect("https://" + serverName + filePath);
}

and it hangs. What is the equivalent code that I should use for the cluster environment? Thanks,

CK

=========== Answer 3 ===========

// Redirect to HTTPS unless the cluster's Cluster-HTTPS server variable
// says the request already came in over SSL. (newUrl is defined elsewhere.)
if (Request.ServerVariables["HTTP_CLUSTER_HTTPS"] != "on")
{
    // Only redirect if the server variable isn't defined at all
    if (Request.ServerVariables.Get("HTTP_CLUSTER_HTTPS") == null)
    {
        Response.Redirect("https://" + Request.ServerVariables["HTTP_HOST"] + newUrl);
    }
}

Here is another one, using the IIS URL Rewrite module:

<system.webServer>
  <rewrite>
    <rules>
      <rule name="Redirect to HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_CLUSTER_HTTPS}" pattern="^OFF$" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="SeeOther" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>

=========== Answer 4 ===========

Yep, I linked too fast. I linked to a thread about getting visitors' IP addresses behind SSL, which is related, but different.

You may want to take note of it, though, because at some point you may want to get the IP address of your visitors, and ServerVariables("REMOTE_ADDR") won't work; you'll need the HTTP_X_FORWARDED_FOR variable if you are on SSL.
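For example, in PHP (an illustrative sketch, not from the thread; the same server variable is available in ASP.NET via Request.ServerVariables), bearing in mind the header can carry a comma-separated chain of addresses:

Code:
<?php
// Behind the load balancer, REMOTE_ADDR is the balancer itself; the real
// client is the left-most entry of X-Forwarded-For.
$forwarded = isset($_SERVER['HTTP_X_FORWARDED_FOR']) ? $_SERVER['HTTP_X_FORWARDED_FOR'] : '';
$clientIp  = ($forwarded !== '')
    ? trim(explode(',', $forwarded)[0])
    : (isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : 'unknown');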

=========== Answer 5 ===========

Hi,

I am using URLRewriter, an open-source module. This is the configuration I use:

<rewriter>
  <if header="HTTP_HOST" match="^www\.MyDomain\.com$">
    <redirect url="^(.+)$" to="http://MyDomain.com$1"/>
  </if>
  <!--<if header="HTTP_CLUSTER_HTTPS" match="^OFF$">
    <redirect url="^~/test.aspx$" to="https://MyDomain.com/test.aspx"/>
  </if>-->
</rewriter>

I tested this last night and the page still hangs.

regards,

Chandar

Subdomains as Accounts

Be Warned!!!

If you want to set up accounts for subdomains of your main domain, they MUST be created under the same "client" as the main domain, i.e. "this.domain.com" and "that.domain.com" as accounts when you own "domain.com".

The control panel will allow you to create the domain, but the creation will fail, and you will need support to remove the accounts because the delete fails as well.

This once was possible to do, but in the last month changes were made to stop it from happening (though not to stop us from entering it in the control panel). I have some subdomains in a different client account, set up months ago, that still work.

I have been told the recent change is due to "security". More like a coding mistake!

This message reads like one of those "pass this email on or the world will blow up" chain letters!!!

Anyway, if you want to set up a subdomain for a client, you can no longer have separate "client" billing. Thanks, Mosso!

=========== Answer 1 ===========

Just tested this again today. Doh! I was hoping to separate the subdomains so they are not on the same FTP account (and therefore more secure), but it's not possible.

=========== Answer 2 ===========

Yep.

I actually had some subdomains that I had created in different accounts before they imposed this restriction. I ended up having to delete them and recreate them in the same account as the master domain.

I wonder, though: if there were no master domain, only subdomains, would the restriction still apply? And what would you do with the master domain? Set it up as a forward to a selected subdomain?

In the past, I would park a subdomain on top of a client's domain while I was designing it, so they could access and see their site before it went into production. Or I would create a subdomain just to test a new CMS or technology, keeping it separate from the main account. Not any more!

=========== Answer 3 ===========

Yes, I used to do something similar. Maybe it is for a security reason; it's kinda annoying though.

=========== Answer 4 ===========

It is NOT for security reasons. I want to have separate accounts for each subdomain BECAUSE OF security reasons, and RS doesn't let me do that.


I need to have subdomains in separate accounts because, on the same account, they are all exposed together: if ANY of the subdomains gets hacked, then ALL your subdomains and the main domain can be accessed far too easily.

Maybe RS can bring this feature to all of us soon!

Proxy Error due to process time out

So I wanted to see if anyone else out there is running into proxy errors due to processing timeouts. Rackspace has a 30-second limit. I've had it happen a number of times on not-so-complicated things. Support gave me a canned response and blamed my method of doing it, then suggested a workaround that doesn't work.

It's a real shame. We have plenty of sites with complicated reports that are called on a very limited basis, once a day or month. 30 seconds seems like such an arbitrary number, too. It's not often that I say Rackspace has it wrong, but they do. I bet they would satisfy 95% of our complaints about this by raising the timeout to 45 seconds. In truth, the cloud servers can run slow, so these things take much longer in the cloud than they would elsewhere.

I have a table with 30k records and I am cross-referencing it for duplicates. Simple as that, and it can't be done. The pipe method they recommend doesn't work either. I would even venture to say that RS's timeout is not really 30 seconds; in reality it's less. I can run the same query in phpMyAdmin and it takes only a few seconds, yet when I make the call from PHP, it won't work. REAL BUMMER.

I can only say that I easily found plenty of programmers on the web who don't like this. Slowly, Rackspace's products are losing their luster and their reputation. Cheap-ass hosting that does a better job than Rackspace, I have to take you more seriously now.

=========== Answer 1 ===========

Yeah, the 30-second load-balancer / proxy timeout is a real pain in the arse. But if you are coding your own script, it really isn't that hard to work around, at least not with PHP anyway. The key is to make sure some data is sent back to the browser no more than 30 seconds apart.

I just recently wrote a script that accesses a remote web service repeatedly while building a large array of results and ultimately writing to a text file. It ran into the 30-sec timeout, of course. So in the loop where I repeatedly access the web service, I simply do this:

Code:
 echo " Submitted $entrynum...
";

 ob_flush();

 flush();

…and my proggy runs for about 15 minutes and finishes.

Not sure what you are doing that’s taking up time – but if it’s looping then you can do the flush as I did above. But if it’s waiting for the database to come back and the DB query is taking a long time, then you might have to find another solution.

Good luck!

Gary
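If the slow step is a single long-running database query (the case Gary says needs "another solution"), one possibility, sketched here as an illustration rather than taken from the thread, is mysqli's asynchronous mode: start the query, then emit keep-alive padding while polling for the result. The credentials and the SLEEP(60) stand-in query are placeholders.

Code:
<?php
// Hedged sketch, not from the thread: keep the proxy connection alive while
// one long query runs, using mysqli's asynchronous mode (requires mysqlnd).
$db = new mysqli('localhost', 'user', 'pass', 'mydb'); // placeholder credentials
$db->query('SELECT SLEEP(60)', MYSQLI_ASYNC);          // stand-in for the slow report

do {
    $read = $error = $reject = [$db];
    $ready = mysqli_poll($read, $error, $reject, 10); // wait up to 10 s per pass

    echo ' ';   // pad the response so the load balancer sees activity
    ob_flush();
    flush();
} while ($ready === 0);

$result = $db->reap_async_query(); // collect the finished result as usual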

=========== Answer 2 ===========

Gary I’ll give that a try and report back!
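As an aside on the original poster's specific case (cross-referencing a 30k-row table for duplicates): pushing the comparison into a single aggregate query usually finishes well inside the 30-second window. A sketch, with hypothetical table and column names:

Code:
<?php
// Sketch only: "records" and "email" are stand-in names, as are the credentials.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

// Let MySQL do the cross-referencing in one pass instead of comparing rows
// in PHP: GROUP BY + HAVING returns only the duplicated values.
$result = $db->query(
    'SELECT email, COUNT(*) AS copies
       FROM records
      GROUP BY email
     HAVING COUNT(*) > 1'
);

while ($row = $result->fetch_assoc()) {
    printf("%s appears %d times\n", $row['email'], $row['copies']);
}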

Piping files to Cloud Files with chunking

I’ve written a PHP CLI script to pipe files to Cloud Files without intermediate storage on the server, and to also split the files into chunks on-the-fly. The main purpose is to be able to pipe MySQL backups directly from xtrabackup, via tar and gzip and onto Cloud Files without using precious disk space on the way, and to avoid the 5GB limit. I’ve attached the script, along with a similar script to download and reconstruct the files from Cloud Files.

While this script might be of use, ironically I’m really posting to ask if anyone knows of a similar (and more reliable) script available anywhere else, ideally in Python or Ruby: I only used PHP because my usual scripting language of choice (Perl) doesn’t have a Cloud Files API that supports chunking, and my Python is nowhere near good enough (yet). PHP isn’t really the best choice for CLI scripting.

==================

Linked is a trivial example using the Python language bindings to load a local file and send it to Cloud Files. http://c2821672.cdn.cloudfiles.rackspacecloud.com/cf_drop.py

To download without buffering, look at the stream() method on an object in the Python language bindings.

As always, look to the dev guide (http://docs.rackspacecloud.com/files/api/cf-devguide-latest.pdf) for help on the API. If that fails, you can find Cloud Files developers online in the #cloudfiles channel on irc.freenode.net.

==================

Thanks, but as I mentioned in my initial post, the goal is to do the upload without having an intermediate file: if I’m paying for, say, a 10GB Cloud Server, I don’t want to have to pay for an extra 10GB just for temporary space to store the tar file before uploading to Cloud Files.

This capability (“Chunked PUT”) and this specific use (piping a database dump directly to CF without buffering to disk) is explicitly mentioned in section 4.3.2.1 of that CF developer guide.

However, it’s easier said than done, as sending STDIN directly to the Chunked PUT request doesn’t avoid the 5GB limit, which is why I encapsulated STDIN in a stream wrapper in my script… something I doubt is a novice feature even in Python!
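For the curious, here is a stripped-down sketch of that stream-wrapper idea; the wrapper name, the 1 GiB segment size, and the final upload step are all assumptions, not code from the attached script:

Code:
<?php
// Sketch: expose at most $limit bytes of STDIN as a bounded, read-only
// stream, so an upload API that expects a file handle sees one "file"
// per Cloud Files segment and the 5GB object limit is never hit.
// Conceptual usage: xtrabackup ... | tar ... | gzip | php this_script.php
class BoundedStdin
{
    public static $limit = 1073741824; // 1 GiB per segment (assumed size)
    private $remaining;

    public function stream_open($path, $mode, $options, &$opened_path)
    {
        $this->remaining = self::$limit;
        return true;
    }

    public function stream_read($count)
    {
        if ($this->remaining <= 0) {
            return ''; // report EOF at the segment boundary
        }
        $data = fread(STDIN, min($count, $this->remaining));
        $this->remaining -= strlen($data);
        return $data;
    }

    public function stream_eof()
    {
        return $this->remaining <= 0 || feof(STDIN);
    }
}

stream_wrapper_register('boundedstdin', 'BoundedStdin');

// Each fopen() yields a handle that "ends" after 1 GiB of STDIN; hand one
// to whatever client-library call accepts a stream, once per segment.
$segment = fopen('boundedstdin://', 'rb');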

====================

I completely understand. It is a bad idea to spool data to disk, especially if you need to scale much at all.

In my earlier post, I had not added the link to my simple upload script. I’ve fixed it.

In the Python language bindings, the storage object's write method accepts a file. That file is then sent in small chunks to Cloud Files. If, however, you have a potentially large (>5GB) stream, wrapping stdin in your own file object to chunk it (as you do in your PHP example) would certainly be possible.

Here’s a toy example in python: http://c2821672.cdn.cloudfiles.rackspacecloud.com/stream_chunker.py

My example can be greatly improved upon, but it should give the basic idea and offers features similar to those of your PHP script.

Cloud LB only for spare server, possible?

Question:

I'd like to use the load balancer only to have a spare server. Explanation:

I have a main server A and a spare server B (rsynced).

I want connections to go to server B ONLY IF the health monitor detects failures on server A. Otherwise, 100% of the traffic must be handled by server A.

I'm using Weighted Least Connections with a weight of 1 for server B and 100 for server A. But still, there are too many connections going to server B; there should be none at all when A is up and running. Is this possible using the Rackspace cloud load balancer, or do I need a more tricky setup?

Answers:

Unfortunately you cannot do this with a cloud load balancer. All of the methods and configurations provided only allow for different ways of distributing connections, not a true failover solution.

Depending on your budget you could get RackConnect (connects dedicated equipment to cloud) and have an F5 or Brocade load balancer configured where this type of advanced configuration is possible.

On the F5 you would configure the VIP and pools, assigning a 'priority group' of 1 to one set of servers and a 'priority group' of 0 to another. Traffic always goes to the priority-1 group first; only if that group is marked down (fails its health check) is traffic sent over to the priority-0 group.

Rackspace Network Security would be the group that could set it up if you went that route. Hope this helps!

===

Yes, this is possible and we do it. We prefer this method to the Cloud Load Balancers that Rackspace offers, due to 1) performance (tested), and 2) SSL termination.

Create a 256MB Cloud Server and install Nginx on it, with Nginx configured as a reverse proxy. Here is an example Nginx configuration file for this purpose (in this example, the load-balancing server has /etc/hosts entries for Cloud Servers yourfavoritecloudserver-eth1 and yoursecondfavoritecloudserver-eth0). The "backup" keyword is what you're looking for, but this example also demonstrates an HTTPS-only site.


user nobody nobody;
worker_processes 4;

events {
    worker_connections 100;
}

http {
    upstream www-secure {
        server yourfavoritecloudserver-eth1:443;
        server yoursecondfavoritecloudserver-eth0:443 backup;
    }

    server {
        listen 80;
        server_name yourdomain.com;
        rewrite ^ https://yourdomain.com permanent;
    }

    server {
        listen 80;
        server_name www.yourdomain.com;
        rewrite ^ https://yourdomain.com permanent;
    }

    server {
        listen 443;
        server_name yourdomain.com;
        ssl on;
        ssl_certificate /usr/local/nginx/ssl/yourdomain.com.pkcs;
        ssl_certificate_key /usr/local/nginx/ssl/yourdomain.com.key;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;
        keepalive_timeout 70;

        proxy_set_header X_FORWARDED_PROTO https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_max_temp_file_size 0;

        location / {
            proxy_pass https://www-secure;
        }
    }
}


Cloud Files Streaming File / Force Download Issue

Question:

I'm using a function to pass a streaming URI to the browser to force-download large files (100 MB each). I found a great example of this here:

https://gist.github.com/938948

Caveat: I do not want to expose a direct public URL to the files, so I must do some sort of force-download. The download function looks like this:

function cloudDownload($connection, $container, $filename) {
    try {
        $folderDetail = getCloudFolder($connection, $container);
        $cdnURL = $folderDetail->cdn_uri;
        $path = $cdnURL . '/' . $filename;
        $filesize = getFilesize($connection, $container, $filename);

        // For cloud-based files.
        session_cache_limiter('none');
        @set_time_limit(0); // For large files.

        header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
        header('Content-Description: File Transfer');
        header("Content-Type: application/force-download");
        header("Content-Type: application/octet-stream", false);
        header("Content-Type: application/download", false);
        header("Content-Disposition: attachment; filename=$filename");
        header("Content-Transfer-Encoding: binary");
        header("Content-Length: $filesize");
        readfile($path);
    } catch (Exception $e) {
        // A try block needs a catch (or finally) to be valid PHP.
        error_log($e->getMessage());
    }
}

The only problem is that you cannot download more than one file at a time. Is there a better way to force-download a file without revealing its URL? Is there something else I should add to the headers to allow multiple downloads at once?

Answers:

You can't hide where the file is coming from, no. The way you're doing it there (reading from Cloud Files to your server, then pushing it to the client) is the only way that I've heard of. Keep in mind that you're essentially paying for double bandwidth with that method, and you're not getting any of the benefits of the multiple edge servers a CDN gives you.

I'm going to be posting a feature request in a few minutes for increased flexibility in this area. I would love to be able to generate "throw-away" links that expire after a certain period of time. Right now a redirect is the best you can do (but that exposes the URL too, even if it takes the user looking at the headers to see it).
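One hedged aside on the "can't download more than one file at a time" symptom, which the thread never resolves: PHP's default file-based session handler holds an exclusive lock on the session for the entire request, so a second download from the same browser blocks until the first readfile() completes. Releasing the lock before streaming is a common fix; a sketch with placeholder values:

Code:
<?php
session_start();
// ... read anything needed from $_SESSION here ...
session_write_close(); // release the session lock before the long transfer

$path     = 'http://cdn.example.com/somefile.zip'; // placeholder CDN URI
$filename = 'somefile.zip';                        // placeholder download name

header('Content-Type: application/octet-stream');
header("Content-Disposition: attachment; filename=$filename");
readfile($path); // the lock is already free, so parallel downloads proceed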

Looking for VB samples to redirect files to a browser

Question:

I have the container connection working and can read the metadata for files stored in my container, yet I am not able to push them through to the client for display in a browser.

Does anyone have any code samples for this?

Function Send_Image_To_Browser(conn, client, container_name)

    Dim container = New CF_Container(conn, client, container_name)
    Dim obj = New CF_Object(conn, container, client, "images8.jpg")

    Response.Clear()
    Response.ContentType = "image/jpeg"
    Response.AppendHeader("content-disposition", "inline; filename=images8.jpg")
    Response.Write(obj.Read)
    Response.End()
End Function

There is a calling routine above this that provides the user/api/container info.

The only thing that displays in the browser is:
System.Net.ConnectStream

Thanks

Answers:

I haven’t tested this code in quite some time, but it may get you going in the right direction:

Code:
Dim buffer(4095) As Byte ' VB array bounds are inclusive: 0..4095 = 4096 bytes
Dim amt As Integer = 0
Dim br As IO.BinaryReader = New IO.BinaryReader(storageobject.ObjectStream)
amt = br.Read(buffer, 0, buffer.Length)
While (amt > 0)
    ' Write only the bytes actually read; BinaryWrite(buffer) would send
    ' the whole buffer and corrupt the final chunk.
    Response.OutputStream.Write(buffer, 0, amt)
    amt = br.Read(buffer, 0, buffer.Length)
End While
br.Close()
storageobject.Dispose()
Response.End()

You’ll have to modify it to suit your code, but it should fit in where you have:

Code:
Response.Write(obj.Read)