URL for API

Question:

Hi,

I am new to Python. Can you help me with this, please?

I set up the http://github.com/rackspace/python-cloudservers tool and it works for me, but the source is difficult to follow.

I want to use a simple URL to send a request to the server API and get a response.

Example: https://auth.api.rackspacecloud.com/v1.0/flavors

But I don’t know how to set the API key and username for this URL.

Can you help me please?

Thanks

Answers:

You can find your API key in the control panel under “Your Account” -> “API Access”.

Your first request to cloud files needs to be to authenticate with the Rackspace Cloud auth system. You will then be given a token that you will use for each subsequent request until the token expires. The language bindings (Python in your case) abstract this process and offer a simpler way to access Cloud Files.

You can find this entire process, and more, described in our developer docs. http://docs.rackspacecloud.com/files/api/cf-devguide-latest.pdf
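The handshake described above can be sketched with plain HTTP: send a GET to the auth endpoint with your username and API key in two request headers, then read the token and storage URL out of the response headers. This is a minimal illustration using Python’s standard library rather than the language bindings; the credentials are placeholders, and `authenticate` is a hypothetical helper name, not part of any official API.

```python
# Minimal sketch of the Cloud Files auth handshake. The username and
# API key are placeholders -- substitute the values from your control panel.
import urllib.request

AUTH_URL = "https://auth.api.rackspacecloud.com/v1.0"

def auth_headers(username, api_key):
    """Headers for the one-time authentication request."""
    return {"X-Auth-User": username, "X-Auth-Key": api_key}

def storage_headers(token):
    """Headers for every subsequent request, until the token expires."""
    return {"X-Auth-Token": token}

def authenticate(username, api_key):
    """Perform the auth request; returns (token, storage_url)."""
    req = urllib.request.Request(AUTH_URL,
                                 headers=auth_headers(username, api_key))
    with urllib.request.urlopen(req) as resp:
        return resp.headers["X-Auth-Token"], resp.headers["X-Storage-Url"]
```

Every later request to the storage URL then carries the `X-Auth-Token` header until the token expires, at which point you authenticate again.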

Medium Trust and Cloud Files .NET API

Question:

I’ve been using Cloud Files for about a month now, and I’m working on an application that stores files in Cloud Files.

I used the .NET API for Cloud Files to get a Stream for a file in the cloud and return the file’s content via an ASP.NET HTTP handler.

It works fine on my development server, but as soon as I transfer it to Cloud Sites I get this error while connecting to Cloud Files:

Request for the permission of type ‘System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089’ failed.

The line that produces this error is:

Connection CloudConnection = new Connection(new UserCredentials(ServerUsername, ServerPassword));

Is there anything I need to do to be able to run the application on Cloud Sites?

On my development server I set the trust level to Medium and got the same error, so I think it’s a Medium Trust issue. Is it possible to make it work on Rackspace Cloud Sites?

I really appreciate any help.

Answers:

Make sure you have the latest version of the Cloud Files .NET API. I was getting this with the older API. Authentication is different in the new API, and it works in Medium Trust.

======================

I’m having this exact same problem, using a recent version of the API.

Does the Cloud Files .NET API definitely work in medium trust?

======================

It definitely works. I use it every day.

Can’t return the number of objects in a container?

Question:

I am attempting to return the number of objects in a container. When the page loads, it shows only the text ‘Number of Objects: ’ and no number.

No errors show when loading the page; it’s connecting to the cloud, and the container name is correct too.

Code:
require('../../rackspace_store/cloudfiles.php');

 // cloud info
 $username = "username"; 
 $key = "key"; 

 // Connect to Rackspace
 $auth = new CF_Authentication($username, $key);
 $auth->authenticate();
 $conn = new CF_Connection($auth);
 $files = $conn->get_container("containername");
 print "Number of Objects: ". $files->count ."\n";
 $conn->close();

Answers:

$c = $conn->get_container("my_container");
$c->object_count;
$c->bytes_used;

object_count and bytes_used should give you the two attributes you are looking for.
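For reference, those two attributes are populated from a HEAD request on the container: Cloud Files returns the totals in the `X-Container-Object-Count` and `X-Container-Bytes-Used` response headers. A small illustration (in Python rather than PHP, with a hand-built header mapping standing in for a real HEAD response):

```python
def container_stats(headers):
    """Read a container's totals out of its HEAD response headers."""
    return (int(headers["X-Container-Object-Count"]),
            int(headers["X-Container-Bytes-Used"]))

# Stand-in for the headers a real HEAD request on a container returns.
example = {"X-Container-Object-Count": "42",
           "X-Container-Bytes-Used": "1024"}
count, used = container_stats(example)
```

The language bindings simply issue that HEAD request for you and expose the parsed values as properties.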

1 big server 4 nodes running all services vs. 2 small servers 2 nodes each with services separated?

Question:

I have a question I need some opinions on.

I have a Linux cloud server hosting a few sites, and it’s a little bogged down memory-wise, primarily by MySQL. I’m looking to add a node or two, but I realized it doesn’t cost any more to have two smaller VPSes: one dedicated to MySQL and one for the web server.

What would be the better solution?

1 big server 4 nodes running all services
VS
2 small servers 2 nodes each with services separated?

Answers:

This is a general architectural decision that is made by people using all stacks out there. LAMP, MSFT, et al. It really depends on how you define “Better”…

Some people will say you should always separate your web server from your DB server. Sometimes that’s because of political decisions made within an enterprise. You’ll see one team dedicated to web and another DB team, so to keep the bureaucracy running smoothly big orgs separate the software onto different hardware.

It’s not only political though, there are other rational reasons as well that are relevant to a smaller organization — security is one. If your web server gets hacked, which is more likely because it must be exposed to the world to serve its pages, then your DB server can remain unscathed — hidden safely behind a private IP or firewall. Your system is also more scalable when separated, because you can grow both servers independently.

The downsides of separating the software onto two boxes include the need to patch and maintain two separate machines, either virtual or hard. There is also a performance hit due to sending requests from the web server outside over the network to another machine. So while more scalable, it won’t necessarily make your website faster to separate the db and web services onto two separate machines.

Of course, that depends on your programming. Do you use indexes properly? How much of the processing is done in stored procedures vs. procedural PHP, .NET, etc? If 75% of your website’s work is done by the database server and you split it off into 50% of the total available horsepower, then your site will slow down when compared to having both systems on the same machine — and that’s without considering the network latency.

So really, to provide a reasoned opinion, I would need to know more about your priorities. Do you want the fastest page loads? Highest security? Least maintenance time/cost? Flexibility? Scalability? Cost?

If I can assume you want your site to be faster, then I would guess, and that’s about all it is with what I know now — a guess, that your site will be faster with one big box, because you can dedicate more ram to the database. If your web server only needs 10% of what is available and you give it a whole machine which is 50% of your total, then 40% is unused and could be used by MySQL to run queries.

So, if your web server only needs one node and you give it two, one is unused when you could put three to mysql if both web and db were running on the same 4 node server.

Anyway, hope this helps! Good luck!

CDN streaming URL not working

Question:

I use the Java API to produce a container streaming URL and browse to the video at http://c######.r##.stream.cf#.rackcdn.com/###.wmv. I get this error in the browser:
An error occurred while processing your request.
Reference #183.67bef5cc.1338391493.3c9fb6d0

Any clue?
Thanks for the help!

Answers:

I got an error after adding .stream. to the URL and was advised by Matteo to leave out the .stream. part; it works fine without it.

Leaving out .stream. works for me, but shouldn’t we use it for streaming a video? I’m confused about the difference with or without .stream.
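Following the advice in this thread, removing the `.stream.` label from the hostname yields the plain CDN URL for the same object. A one-line sketch (the hostname below is a made-up placeholder shaped like the masked one in the question):

```python
def strip_stream(url):
    """Drop the '.stream.' component from a CDN streaming URL."""
    return url.replace(".stream.", ".", 1)

# Placeholder hostname, same shape as the question's masked URL.
plain = strip_stream("http://c0000000.r00.stream.cf0.rackcdn.com/video.wmv")
# plain == "http://c0000000.r00.cf0.rackcdn.com/video.wmv"
```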

Expire a video in container?

Question:

I’m looking for a feature, and I’m not sure if it’s readily available in Rackspace… Can we expire a video after a certain period of time, meaning the video still exists (is not removed) in the container but has to be requested again once it has expired?

Answers:

Sounds like you’re looking for a video streaming service with specific publication features. Cloud Files (and there’s a forum specifically for Cloud Files here as well) is just for storing files; you would have to build your own web site or web service on top of Cloud Files to get that kind of feature. Try www.bitsontherun.com – they have a free tier that you might be able to make use of. (I am not affiliated with them.)

============

In cyberduck, select the video

go to Get Info

Go to Distribution (CDN)

Drag bottom of requester

This reveals an Invalidate Button

Click it to “remove selected files from distribution cache”.

You can upload a file of the same name if you like but there is a lag while the change propagates.

Piping files to Cloud Files with chunking

I’ve written a PHP CLI script to pipe files to Cloud Files without intermediate storage on the server, and to also split the files into chunks on-the-fly. The main purpose is to be able to pipe MySQL backups directly from xtrabackup, via tar and gzip and onto Cloud Files without using precious disk space on the way, and to avoid the 5GB limit. I’ve attached the script, along with a similar script to download and reconstruct the files from Cloud Files.

While this script might be of use, ironically I’m really posting to ask if anyone knows of a similar (and more reliable) script available anywhere else, ideally in Python or Ruby: I only used PHP because my usual scripting language of choice (Perl) doesn’t have a Cloud Files API that supports chunking, and my Python is nowhere near good enough (yet). PHP isn’t really the best choice for CLI scripting.

==================

Linked is a trivial example using the python language bindings to load a local file and send it to Cloud Files. http://c2821672.cdn.cloudfiles.rackspacecloud.com/cf_drop.py

To download without buffering, look at the stream() method on an object in the python language bindings.

As always, look to the dev guide (http://docs.rackspacecloud.com/files/api/cf-devguide-latest.pdf) for help on the API. If that fails, you can find Cloud Files developers online in the #cloudfiles channel on irc.freenode.net.

==================

Thanks, but as I mentioned in my initial post, the goal is to do the upload without having an intermediate file: if I’m paying for, say, a 10GB Cloud Server, I don’t want to have to pay for an extra 10GB just for temporary space to store the tar file before uploading to Cloud Files.

This capability (“Chunked PUT”) and this specific use (piping a database dump directly to CF without buffering to disk) is explicitly mentioned in section 4.3.2.1 of that CF developer guide.

However, it’s easier said than done, as sending STDIN directly to the Chunked PUT request doesn’t avoid the 5GB limit, which is why I encapsulated STDIN in a stream wrapper in my script… something I doubt is a novice feature even in Python!

====================

I completely understand. It is a bad idea to spool data to disk, especially if you need to scale much at all.

In my earlier post, I had not added the link to my simple upload script. I’ve fixed it.

In the Python language bindings, the storage object’s write method accepts a file. That file is then sent in small chunks to Cloud Files. If, however, you have a potentially large (>5GB) stream, wrapping stdin in your own file object to chunk it (as you do in your PHP example) would certainly be possible.

Here’s a toy example in python: http://c2821672.cdn.cloudfiles.rackspacecloud.com/stream_chunker.py

My example can be greatly improved upon, but it should give the basic idea and offer features similar to those of your PHP script.
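The stream-wrapper idea discussed in this thread can be sketched as a small file-like class: each instance exposes at most `limit` bytes of the underlying stream, so whatever consumes it (for instance one chunked PUT per segment) never sends more than one object’s worth. `ChunkReader` is a hypothetical name, and the 4-byte limit below is only for demonstration; in practice the limit would sit just under 5 GB and the source would be `sys.stdin`.

```python
import io

class ChunkReader(object):
    """File-like wrapper exposing at most `limit` bytes of `stream`.

    Create one ChunkReader per Cloud Files object and hand it to
    anything that expects a readable file, so each uploaded object
    stays under the 5 GB cap.
    """

    def __init__(self, stream, limit):
        self.stream = stream
        self.remaining = limit

    def read(self, size=-1):
        if self.remaining <= 0:
            return b""  # this chunk's budget is used up
        if size < 0 or size > self.remaining:
            size = self.remaining
        data = self.stream.read(size)
        self.remaining -= len(data)
        return data

# Toy demonstration: split a 10-byte stream into chunks of at most 4 bytes.
source = io.BytesIO(b"0123456789")
chunks = []
while True:
    chunk = ChunkReader(source, 4).read()
    if not chunk:  # underlying stream is exhausted
        break
    chunks.append(chunk)
# chunks == [b"0123", b"4567", b"89"]
```

In a real upload loop you would pass each `ChunkReader` to the bindings’ write method instead of collecting the bytes in memory.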