Search Results

Search found 27 results on 2 pages for 'boto'.

Page 1/2 | 1 2  | Next Page >

  • Import boto from local library

    - by ensnare
    I'm trying to use boto as a downloaded library, rather than installing it globally on my machine. I'm able to import boto, but when I run boto.connect_dynamodb() I get an error:

        ImportError: No module named dynamodb.layer2

    Here's my file structure:

        project/
            project/
                __init__.py
                libraries/
                    __init__.py
                    flask/
                    boto/
                views/
                    ....
                modules/
                    __init__.py
                    db.py
                    ....
                templates/
                    ....
                static/
                    ....
            runserver.py

    And the contents of the relevant file, project/project/modules/db.py, are as follows:

        from project.libraries import boto

        conn = boto.connect_dynamodb(
            aws_access_key_id='<YOUR_AWS_KEY_ID>',
            aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')

    What am I doing wrong? Thanks in advance.
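
    A possible fix (editor's sketch, not from the original post): boto resolves its own submodules through the top-level name boto, so a vendored copy generally needs its parent directory on sys.path rather than being imported as a sub-package. Assuming the layout above, db.py could do:

        import os
        import sys

        # Make the vendored packages in project/project/libraries importable as
        # top-level modules, so boto's own "import boto.dynamodb..." statements work.
        LIBRARIES_DIR = os.path.join(os.path.dirname(__file__), '..', 'libraries')
        sys.path.insert(0, os.path.abspath(LIBRARIES_DIR))

        import boto  # now resolves to the vendored copy

        conn = boto.connect_dynamodb(
            aws_access_key_id='<YOUR_AWS_KEY_ID>',
            aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')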

    Read the article

  • Problem importing modules from boto

    - by Worbis
    I have installed boto like so: python setup.py install. Then, when I launch my python script (which imports modules from boto) in the shell, an error like this shows up: ImportError: No module named boto.s3.connection. How can I fix this?
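
    A quick diagnostic (editor's sketch, not from the original post): check which interpreter is running the script and where it picks boto up from; a mismatch between the interpreter used for setup.py install and the one running the script is a common cause of this error.

        import sys
        print(sys.executable)       # interpreter actually running the script

        import boto
        print(boto.__file__)        # where this interpreter found boto

        import boto.s3.connection   # fails if an old or partial boto shadows the install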

    Read the article

  • finding last snapshot using boto

    - by shantanuo
    I have read the explanation of "describe_cluster_snapshots" from ... http://docs.pythonboto.org/en/latest/ref/redshift.html#boto.redshift.layer1.RedshiftConnection.create_cluster It has start_time and end_time options, but there is no way to sort the results. How do I get the id of the latest snapshot using boto? Here is what I have tried, but it does not seem to return the last snapshot:

        mysnap = conn.describe_cluster_snapshots()
        mysnapidentifier = mysnap['DescribeClusterSnapshotsResponse']['DescribeClusterSnapshotsResult']['Snapshots'][-1]['SnapshotIdentifier']
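
    A possible approach (editor's sketch, not from the original post, assuming boto 2.x and that each snapshot dict carries a 'SnapshotCreateTime' field as in the Redshift API): since the API does not sort, pick the newest snapshot client-side instead of relying on list order:

        mysnap = conn.describe_cluster_snapshots()
        snapshots = (mysnap['DescribeClusterSnapshotsResponse']
                           ['DescribeClusterSnapshotsResult']['Snapshots'])

        # Sort client-side by creation time instead of trusting the list order.
        latest = max(snapshots, key=lambda s: s['SnapshotCreateTime'])
        print(latest['SnapshotIdentifier'])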

    Read the article

  • Boto - How to delete a record set from route53 - Tried to delete resource record set but it was not found

    - by Tampa
    I am using the following to delete route53 records. I get no error messages.

        conn = Route53Connection(aws_access_key_id, aws_secret_access_key)
        changes = ResourceRecordSets(conn, zone_id)
        change = changes.add_change("DELETE", sub_domain, "A", 60,
                                    weight=weight, identifier=identifier)
        change.add_value(ip_old)
        changes.commit()

    All required fields are present and they match: weight, identifier, ttl=60, etc. E.g.:

        test.com. A 111.111.111.111 60 1 id1
        test.com. A 111.111.111.222 60 1 id2

    I want to delete 111.111.111.222 and the record set. So, what is the proper way to delete a record set? For a record set, I will have multiple values that are distinguished by a unique identifier. When an IP address becomes inactive I want to remove it from Route 53. I am using a poor man's load balancing. Here is the metadata of the record I want to delete:

        {'alias_dns_name': None,
         'alias_hosted_zone_id': None,
         'identifier': u'15754-1',
         'name': u'hui.com.',
         'resource_records': [u'103.4.xxx.xxx'],
         'ttl': u'60',
         'type': u'A',
         'weight': u'1'}

        Traceback (most recent call last):
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 353, in <module>
            deleteRedisSubDomains(aws_access_key_id, aws_secret_access_key,platform=platform,sub_domain=sub_domain,redis_domain=redis_domain,zone_id=zone_id,ip_address=ip_address,weight=1,identifier=identifier)
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 341, in deleteRedisSubDomains
            changes.commit()
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/record.py", line 131, in commit
            return self.connection.change_rrsets(self.hosted_zone_id, self.to_xml())
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/connection.py", line 291, in change_rrsets
            body)
        boto.route53.exception.DNSServerError: DNSServerError: 400 Bad Request
        <?xml version="1.0"?>
        <ErrorResponse xmlns="https://route53.amazonaws.com/doc/2011-05-05/"><Error><Type>Sender</Type><Code>InvalidChangeBatch</Code><Message>Tried to delete resource record set hui.com., type A, SetIdentifier 15754-1 but it was not found</Message></Error><RequestId>9972af89-cb69-11e1-803b-7bde5b9c457d</RequestId></ErrorResponse>

    Thanks
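
    A possible fix (editor's sketch, not from the original post, assuming boto 2.x): Route 53 rejects a DELETE unless it exactly matches the stored record set, including the fully qualified name (with trailing dot), TTL, weight, identifier and values. One way to guarantee a match is to read the existing record set first and copy its fields into the DELETE change:

        from boto.route53.connection import Route53Connection
        from boto.route53.record import ResourceRecordSets

        conn = Route53Connection(aws_access_key_id, aws_secret_access_key)

        # Find the stored record set that carries the identifier we want to drop.
        for rec in conn.get_all_rrsets(zone_id, type='A', name=sub_domain):
            if (rec.type == 'A' and rec.identifier == identifier
                    and rec.name.rstrip('.') == sub_domain.rstrip('.')):
                changes = ResourceRecordSets(conn, zone_id)
                change = changes.add_change("DELETE", rec.name, rec.type, rec.ttl,
                                            weight=rec.weight,
                                            identifier=rec.identifier)
                for value in rec.resource_records:
                    change.add_value(value)
                changes.commit()
                break

    The key point is that the DELETE mirrors the record exactly as Route 53 returns it, rather than rebuilding it from local variables.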

    Read the article

  • Eclipse and python: library will import in interpreter, but not in IDE

    - by John
    I'm running Windows 7, Python 2.6.4 and the latest version of Eclipse. I downloaded the boto library (http://code.google.com/p/boto/) and ran python setup.py install, which created boto-1.9b-py2.6.egg in C:\Python26\Lib\site-packages. Importing a class - say, by doing 'from boto.sqs.connection import SQSConnection' - works fine from the python command line tool. But Eclipse will not find boto, despite the fact that it is using the same python interpreter as I am using when at the command line. I added the library as an external source folder, but that didn't work either. How can I properly import the boto library into Eclipse? Thanks.
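
    A quick diagnostic (editor's sketch, not from the original post): print sys.path from a script run inside Eclipse and from the command-line interpreter, then compare the two lists for the boto .egg entry. PyDev keeps its own copy of the interpreter's path, so if the egg is missing from the IDE's list, re-adding the interpreter (or adding the egg to its libraries) usually helps.

        import sys
        import pprint

        # Run this both from the command line and from inside Eclipse,
        # then compare the two lists for the boto-1.9b-py2.6.egg entry.
        pprint.pprint(sys.path)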

    Read the article

  • Django as S3 proxy

    - by schneck
    Hi there, I extended a ModelAdmin with a custom field "Download file", which is a link to a URL in my Django project, like http://www.myproject.com/downloads/1. There, I want to serve a file which is stored in an S3 bucket. The files in the bucket are not publicly readable, and the user may not have direct access to them. Now I want to avoid loading the file into server memory (these are multi-GB files) and avoid temp files on the server. The ideal solution would be to let Django act as a proxy that streams S3 chunks directly to the user. I use boto, but did not find a way to stream the chunks. Any ideas? Thanks.
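
    A possible approach (editor's sketch, not from the original post, assuming boto 2.x and a Django version that provides StreamingHttpResponse): a boto Key object is iterable and yields the object in chunks, so it can be handed straight to a streaming response without buffering the whole file. Credentials, bucket and view names here are placeholders.

        import boto
        from django.http import StreamingHttpResponse

        def download(request, key_name):
            conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
            bucket = conn.get_bucket('my-backups', validate=False)
            key = bucket.get_key(key_name)

            # Iterating over a Key reads it from S3 chunk by chunk, so the file
            # is never held fully in memory or written to a temp file.
            response = StreamingHttpResponse(key, content_type='application/octet-stream')
            response['Content-Length'] = key.size
            response['Content-Disposition'] = 'attachment; filename="%s"' % key_name
            return response

    The view itself can then enforce whatever access checks are needed before streaming.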

    Read the article

  • Selecting keys based on metadata, possible with Amazon S3?

    - by nbv4
    I'm sending files to my S3 bucket that are basically gzipped database dumps. The keys are a human-readable date ("2010-05-04.dump"), and along with that, I'm setting a metadata field to the UNIX time of the dump. I want to write a script that retrieves the latest dump from the bucket. That is to say, I want the key with the largest UNIX time metadata value. Is this possible with Amazon S3, or is this not how S3 is meant to work? I'm using both the command line tool aws and the python library boto.
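
    A possible approach (editor's sketch, not from the original post): S3 has no server-side query over metadata, so you either list the keys and compare them client-side, or exploit the fact that ISO-style date names sort lexicographically. Bucket and metadata names below are placeholders. With boto 2.x:

        import boto

        conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
        bucket = conn.get_bucket('my-dumps')

        # Option 1: names like "2010-05-04.dump" sort lexicographically in
        # date order, so the "largest" key name is the newest dump.
        latest = max(bucket.list(), key=lambda k: k.name)

        # Option 2: compare the custom metadata instead; a listing does not
        # return user metadata, so each key must be re-fetched first.
        # latest = max((bucket.get_key(k.name) for k in bucket.list()),
        #              key=lambda k: int(k.get_metadata('unixtime') or 0))

        latest.get_contents_to_filename(latest.name)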

    Read the article

  • Amazon S3 as secure backup without multiple invoices

    - by Tom Viner
    I'm storing copies of database backups on Amazon S3 using the Python Boto library. But I worry that if my web server was hacked, those backups could be deleted using the credentials I need to do the upload. Ok, so I know you can grant permissions to another Amazon email address, so I can imagine doing that after an upload and then removing the original user's write access. But in this scenario I now end up with 2 accounts and 2 sets of invoices to give to accounts every month. Is there a solution to this that doesn't require a new Amazon account for each web server I run?
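
    A possible approach (editor's sketch, not from the original post, assuming boto 2.x and that IAM is available on the account): create an IAM user inside the same AWS account whose policy only allows uploads, so a compromised web server cannot delete existing backups and billing stays on a single invoice. The user name, policy name and bucket below are placeholders.

        import json
        import boto

        iam = boto.connect_iam()  # run once with the account's admin credentials

        iam.create_user('backup-uploader')
        key_resp = iam.create_access_key('backup-uploader')
        # key_resp contains the new access key id and secret to give to the web server.

        upload_only_policy = json.dumps({
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                "Resource": "arn:aws:s3:::my-backup-bucket/*"
            }]
        })
        iam.put_user_policy('backup-uploader', 'upload-only', upload_only_policy)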

    Read the article

  • HTTPSConnection module missing in Python 2.6 on CentOS 5.2

    - by d2kagw
    Hi guys, I'm playing around with a Python application on CentOS 5.2. It uses the Boto module to communicate with Amazon Web Services, which requires communication through an HTTPS connection. When I try running my application I get an error regarding HTTPSConnection being missing: "AttributeError: 'module' object has no attribute 'HTTPSConnection'". Google doesn't really return anything relevant; I've tried most of the solutions but none of them solve the problem. Has anyone come across anything like it? Here's the traceback:

        Traceback (most recent call last):
          File "./chatter.py", line 114, in <module>
            sys.exit(main())
          File "./chatter.py", line 92, in main
            chatter.status( )
          File "/mnt/application/chatter/__init__.py", line 161, in status
            cQueue.connect()
          File "/mnt/application/chatter/tools.py", line 42, in connect
            self.connection = SQSConnection(cConfig.get("AWS", "KeyId"), cConfig.get("AWS", "AccessKey"));
          File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/sqs/connection.py", line 54, in __init__
            self.region.endpoint, debug, https_connection_factory)
          File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/connection.py", line 418, in __init__
            debug, https_connection_factory)
          File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/connection.py", line 189, in __init__
            self.refresh_http_connection(self.server, self.is_secure)
          File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/connection.py", line 247, in refresh_http_connection
            connection = httplib.HTTPSConnection(host)
        AttributeError: 'module' object has no attribute 'HTTPSConnection'
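
    A quick check (editor's sketch, not from the original post): httplib only defines HTTPSConnection when the interpreter was built with SSL support, which a hand-compiled Python 2.6 on CentOS often lacks if openssl-devel was not installed at build time. This snippet verifies whether that is the problem:

        import httplib

        try:
            import ssl
            print("ssl module available")
        except ImportError:
            print("Python was built without SSL support")

        print(hasattr(httplib, 'HTTPSConnection'))
        # If this prints False, rebuild Python after installing openssl-devel,
        # or use a Python build that includes SSL support.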

    Read the article

  • Using IAM for user authentication

    - by mdavis6890
    I've read lots and lots of posts that touch on what I think should be a very common use case - but without finding exactly what I want, or a simple reason why it can't be done. I have some files on S3. I want to be able to grant certain users access to certain files, via a front end that I build. So far, I've made it work this way:

        - I built the front end in Django, using its built-in Users and Groups.
        - I have a model for Buckets, in which I mirror my S3 buckets.
        - I have an m2m relationship from groups to buckets representing the S3 permissions.
        - The user logs in and authenticates against Django's users.
        - I grab from Django the list of buckets that the user is allowed to see.
        - I use boto to grab a list of links to files from those buckets and display them to the user.

    This works, but isn't ideal, and also just doesn't feel right. I've got to keep a mirror of the buckets, and I also have to maintain my own list of users/passwords and permissions, when AWS already has all that built in. What I really want is to simply create the users in IAM and use group permissions in IAM to control access to the S3 buckets. No duplication of data or function. My app would request a UN/PW from the user and use that to connect to IAM/S3 to pull the list of buckets and files, then display links to the user. Simple. How can I, or why can't I? Am I looking at this the wrong way? What's the "right" way to address this (I assume) very common use case?
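
    One way this can work (editor's sketch, not from the original post): IAM users authenticate to the AWS API with access keys rather than a username/password, so the app would collect or store each user's key pair and connect with it directly; S3 then enforces the IAM group policies when the app lists buckets and keys on that user's behalf. A minimal boto 2.x sketch, with illustrative names:

        import boto

        def links_for_user(access_key_id, secret_access_key):
            """List download links for whatever this IAM user's policies allow."""
            conn = boto.connect_s3(access_key_id, secret_access_key)
            links = []
            for bucket in conn.get_all_buckets():
                for key in bucket.list():
                    # Signed URLs let the browser download without public ACLs.
                    links.append(key.generate_url(expires_in=3600))
            return links

    Note that listing all buckets requires the user's policy to allow it; otherwise the app would iterate over a known set of bucket names instead.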

    Read the article

  • store image installation error in UEC

    - by selvakumar
    For my college final-year project we planned to set up a private cloud on two machines. I recently installed Ubuntu Enterprise Cloud (UEC) on two of my machines. I was trying to install the store image through the WebUI. I was able to download the Ubuntu 10.04 (i386) image, but while installing, it's giving me the following error:

        Command 'euca-upload-bundle' returned status code 1: Checking bucket: image-store-1296600766
        Traceback (most recent call last):
          File "/usr/bin/euca-upload-bundle", line 231, in <module>
            main()
          File "/usr/bin/euca-upload-bundle", line 214, in main
            bucket_instance = ensure_bucket(conn, bucket, canned_acl)
          File "/usr/bin/euca-upload-bundle", line 87, in ensure_bucket
            bucket_instance = connection.get_bucket(bucket)
          File "/usr/lib/pymodules/python2.6/boto/s3/connection.py", line 275, in get_bucket
            rs = bucket.get_all_keys(headers, maxkeys=0)
          File "/usr/lib/pymodules/python2.6/boto/s3/bucket.py", line 204, in get_all_keys
            headers=headers, query_args=s)
          File "/usr/lib/pymodules/python2.6/boto/s3/connection.py", line 342, in make_request
            data, host, auth_path, sender)
          File "/usr/lib/pymodules/python2.6/boto/connection.py", line 459, in make_request
            return self._mexe(method, path, data, headers, host, sender)
          File "/usr/lib/pymodules/python2.6/boto/connection.py", line 437, in _mexe
            raise e
        socket.error: [Errno 110] Connection timed out

    Could anyone please help me?

    Read the article

  • How to log a python exception?

    - by Maxim Veksler
    Hi, Coming from Java and being familiar with logback, I used to do:

        try {
            ...
        } catch (Exception e) {
            log("Error at X", e);
        }

    I would like the same functionality of being able to log the exception and the stack trace into a file. How would you recommend implementing this? I am currently using the boto logging infrastructure, boto.log.info(...). I've looked at some options and found out I can access the actual exception details using this code:

        import sys
        import traceback

        try:
            1/0
        except:
            exc_type, exc_value, exc_traceback = sys.exc_info()
            traceback.print_exception(exc_type, exc_value, exc_traceback)

    I would like to somehow get the string print_exception() writes to stdout so that I can log it. Thank you, Maxim.
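
    A possible answer (editor's sketch, not from the original post): the standard library can return the current traceback as a string, so it can go straight into any logger, including boto's:

        import logging
        import traceback

        import boto

        try:
            1/0
        except Exception:
            # Option 1: format the current traceback yourself and hand it to boto's logger.
            boto.log.error("Error at X:\n%s" % traceback.format_exc())

            # Option 2: logging.exception() appends the traceback automatically
            # when called from inside an except block.
            logging.exception("Error at X")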

    Read the article

  • Amazon Web Services: Fault-tolerant solution

    - by Algorist
    Hi, I am using the Boto library to write scripts for automating our jobs on AWS. My script actually starts a Hadoop cluster using Cloudera scripts and then does some customization. I am having a problem with retries. It seems like every command in my script fails once every couple of days. I started adding retries to all the commands, but then the code is very clumsy and difficult to maintain. What do people do in general? Thank you Bala
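
    One common pattern (editor's sketch, not from the original post): wrap the flaky calls in a small retry decorator with exponential backoff, so the retry logic lives in one place instead of around every command:

        import time
        import functools

        def retry(tries=3, delay=2, backoff=2, exceptions=(Exception,)):
            """Retry a function with exponential backoff before giving up."""
            def decorator(func):
                @functools.wraps(func)
                def wrapper(*args, **kwargs):
                    wait = delay
                    for attempt in range(tries):
                        try:
                            return func(*args, **kwargs)
                        except exceptions:
                            if attempt == tries - 1:
                                raise
                            time.sleep(wait)
                            wait *= backoff
                return wrapper
            return decorator

        @retry(tries=5, delay=1)
        def start_cluster():
            pass  # replace with the boto / Cloudera call that occasionally fails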

    Read the article

  • lighttpd not reliable

    - by schneck
    Hi there, I have a lighttpd instance running which serves a Django-based webservice. It was running well for some months, but starting today it sometimes returns a 410 and sometimes fails silently. To test, I make a curl call, which most of the time runs fine; it returns some JSON test data. Sometimes, however, it does not return any data, but the call seems to have run well, since I don't get an error code. When I post to my webservice via third-party packages like boto, I sometimes get a "410 Gone" - but I do not find any entry in the lighttpd error log. Any ideas what the problem could be or how to avoid this? Thanks a lot

    Read the article

  • Installing ubuntu 12.04 on macbook pro9,2

    - by stariz77
    I seem to have tried all the various suggested methods for installing Ubuntu on a MBP, but can't seem to get anything that works, and was wondering if anyone has run into any new problems with the latest non-retina models? I have a Core i7 in my MacBook, and the model identifier is MacBookPro9,2. I have partitioned my HD using Disk Utility and have 700 GB of free space ready for the install (I haven't removed OS X Lion, it is still there in a 50 GB partition).

    Problem: I am just getting a blank screen with a blinking cursor (unresponsive) in the top left whenever I boot from the disk. I left it for 20 minutes and nothing ever happened. This was without any boot manager, just holding "c" on startup.

    Attempted remedies: I have downloaded the 64-bit Ubuntu ISO from their site 3 times now and burned 4 separate discs to rule out some kind of corruption or burn error. I burned one in OSX Lion 10.7.4 and 3 on my Windows 7 PC. I tried holding "alt" instead and then navigating to the Windows disc to boot. Same thing happens: blank, blinking, unresponsive cursor. I also tried going to the EFI disc, which actually brings up a menu (after saying "error prefix is not set") asking if I want to install Ubuntu, test for errors or partition. All three options lead me to an unresponsive blank screen (some without cursors). I downloaded and installed rEFIt, and if I hold "alt" on startup a Linux penguin (Boot Linux from CD) appears in my boot options, along with the Apple boot, and two others that I'm not sure of: "Boot EFI\boto\bootx64.efi from" and "Boot Legacy OS from". The "Boot Linux from CD" option just takes me to the blank blinking cursor screen; again, I left it for 10+ minutes and nothing. I heard that the detection of the graphics card might be a problem and that I need to change to nomodeset, but I have tried pressing F6 in all of the boot menus listed above and no options appear. Does anyone have any other suggested routes, or can you see what I might have done wrong?

    Read the article

  • Trying to fill the GAE datastore, but the code consumes too much CPU time. How to optimize this?

    - by Neverland
    I am trying to get the list of images in Amazon EC2 into the Google datastore. I want to realize this with a cron job inside GAE.

        class AmazonEC2uswest(db.Model):
            ami = db.StringProperty(required=True)
            mani = db.StringProperty()
            typ = db.StringProperty()
            arch = db.StringProperty()
            state = db.StringProperty()
            owner = db.StringProperty()

        class CronAMIsAmazonUS_WEST(webapp.RequestHandler):
            def get(self):
                aws_access_key_id_admin = "<secret>"
                aws_secret_access_key_admin = "<secret>"

                conn_us_west = boto.ec2.connect_to_region('us-west-1',
                    aws_access_key_id=aws_access_key_id_admin,
                    aws_secret_access_key=aws_secret_access_key_admin,
                    is_secure=False)

                liste_images_us_west = conn_us_west.get_all_images()
                laenge_liste_images_us_west = len(liste_images_us_west)

                for i in range(laenge_liste_images_us_west):
                    datastore_uswest_AMIs = AmazonEC2uswest(
                        ami=liste_images_us_west[i].id,
                        mani=str(liste_images_us_west[i].location),
                        typ=liste_images_us_west[i].type,
                        arch=liste_images_us_west[i].architecture,
                        state=liste_images_us_west[i].state,
                        owner=liste_images_us_west[i].ownerId)
                    datastore_uswest_AMIs.put()

    The problem: getting the list with get_all_images() takes only a few seconds, but writing the data to the Google datastore needs way too much CPU time. My IBM T42p (P4M with 2 GHz) needs approx. 1 minute for that piece of code! Is it possible to optimize my code so that it needs less CPU time?
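
    A possible optimization (editor's sketch, not from the original post): the dominant cost is issuing one datastore RPC per image. Building all the entities first and saving them with a single batch db.put() call, and iterating over the image list directly, usually cuts the CPU time dramatically. The handler's get() could look roughly like this:

        def get(self):
            conn_us_west = boto.ec2.connect_to_region('us-west-1',
                aws_access_key_id=aws_access_key_id_admin,
                aws_secret_access_key=aws_secret_access_key_admin,
                is_secure=False)

            entities = []
            for image in conn_us_west.get_all_images():
                entities.append(AmazonEC2uswest(
                    ami=image.id,
                    mani=str(image.location),
                    typ=image.type,
                    arch=image.architecture,
                    state=image.state,
                    owner=image.ownerId))

            # One batched datastore call instead of one RPC per entity.
            # Very large lists may still need to be written in chunks,
            # since the datastore caps the number of entities per call.
            db.put(entities)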

    Read the article

  • Need help understanding some Python code

    - by Yarin
    I'm new to Python, and stumped by this piece of code from the Boto project:

        class SubdomainCallingFormat(_CallingFormat):

            @assert_case_insensitive
            def get_bucket_server(self, server, bucket):
                return '%s.%s' % (bucket, server)

        def assert_case_insensitive(f):
            def wrapper(*args, **kwargs):
                if len(args) == 3 and not (args[2].islower() or args[2].isalnum()):
                    raise BotoClientError("Bucket names cannot contain upper-case " \
                        "characters when using either the sub-domain or virtual " \
                        "hosting calling format.")
                return f(*args, **kwargs)
            return wrapper

    Trying to understand what's going on here. What is the '@' symbol in @assert_case_insensitive? What do the args *args, **kwargs mean? What does 'f' represent? Thanks!
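
    A short illustration (editor's addition, not from the original post): '@deco' above a function is just shorthand for reassigning the function through deco, 'f' is the function being decorated, and *args/**kwargs capture arbitrary positional and keyword arguments so the wrapper can forward them unchanged. A minimal example of the same pattern:

        def shout(f):
            # f is the function being decorated.
            def wrapper(*args, **kwargs):        # accepts any arguments
                print("calling %s" % f.__name__)
                return f(*args, **kwargs)        # forwards them to the original
            return wrapper

        @shout                                   # same as: greet = shout(greet)
        def greet(name, punctuation="!"):
            return "Hello, %s%s" % (name, punctuation)

        print(greet("world"))    # prints "calling greet", then "Hello, world!"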

    Read the article

1 2  | Next Page >