Search Results

Search found 800 results on 32 pages for 's3'.

Page 12/32 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Bitmap issue in Samsung Galaxy S3

    - by user1531240
    I wrote a method to transform the Bitmap from a camera shot:

        public Bitmap bitmapChange(Bitmap bm) {
            /* get the original image size */
            int w = bm.getWidth();
            int h = bm.getHeight();
            /* check the image's orientation (careful: w / h here is integer division) */
            float scale = w / h;
            if (scale < 1) {
                /* if the orientation is portrait, scale it and show it on the screen */
                float scaleWidth = (float) 90 / (float) w;
                float scaleHeight = (float) 130 / (float) h;
                Matrix mtx = new Matrix();
                mtx.postScale(scaleWidth, scaleHeight);
                Bitmap rotatedBMP = Bitmap.createBitmap(bm, 0, 0, w, h, mtx, true);
                return rotatedBMP;
            } else {
                /* if the orientation is landscape, scale it and rotate 90 degrees */
                float scaleWidth = (float) 130 / (float) w;
                float scaleHeight = (float) 90 / (float) h;
                Matrix mtx = new Matrix();
                mtx.postScale(scaleWidth, scaleHeight);
                mtx.postRotate(90);
                Bitmap rotatedBMP = Bitmap.createBitmap(bm, 0, 0, w, h, mtx, true);
                return rotatedBMP;
            }
        }

    It works fine on other Android devices, even the Galaxy Nexus, but on the Samsung Galaxy S3 the scaled image doesn't show on screen. I tried bypassing the bitmapChange method and showing the original-size Bitmap on screen, but the S3 still shows nothing. The variable information from Eclipse is here. The information from the Sony Xperia is here. The Xperia and the other devices work fine.

    Read the article

  • Problem installing iATKOS S3 Version 2 Snow Leopard 10.6.3 on DELL Precision T5500 Desktop

    - by Matias Dominoni
    Has anyone managed to install this? I've used the following boot flags: -v -x -f cpus=1 busratio=22. After installation, boot fails with a kernel panic. The exact error is here: http://www.insanelymac.com/forum/index.php?showtopic=182609&mode=linearplus I'm aware that this is a very finicky installation: http://www.insanelymac.com/forum/index.php?showtopic=222386 Does anyone know of a guide to follow, or any other distribution that works?

    Read the article

  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my images uploaded via Carrierwave locally to Amazon S3 via the fog gem in my Rails 3.1 app. Images are being added fine, but when I click on an image in my application, the URL exposes my access key and a signature. Here is a sample URL (XXX replaces the real strings):

        https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418

    This is happening in development (localhost:3000) and when I am using Heroku for production. Here is my uploader:

        class ImageUploader < CarrierWave::Uploader::Base
          include CarrierWave::RMagick

          storage :fog

          def store_dir
            "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
          end

          process :convert => :jpg
          process :resize_to_limit => [640, 640]

          version :thumb do
            process :convert => :jpg
            process :resize_to_fill => [280, 205]
          end

          version :avatar do
            process :convert => :jpg
            process :resize_to_fill => [120, 120]
          end
        end

    And my config/initializers/fog.rb:

        CarrierWave.configure do |config|
          config.fog_credentials = {
            :provider              => 'AWS',
            :aws_access_key_id     => 'XXX',
            :aws_secret_access_key => 'XXX',
          }
          config.fog_directory = 'bucketname'
          config.fog_public    = false
        end

    Anyone know how to make sure this information isn't exposed?
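
    With config.fog_public = false, fog deliberately builds signed, expiring URLs, and the AWSAccessKeyId/Signature/Expires query parameters are how S3 authorizes each request; the access key ID is not the secret key, so nothing secret is leaked, but bare URLs require the objects to be publicly readable (config.fog_public = true). A minimal sketch of the same distinction, using Python and boto3 with placeholder bucket and key names:

        import boto3

        s3 = boto3.client("s3")

        # Private object: a presigned URL carries signature and expiry query
        # parameters, analogous to the Carrierwave/fog URL in the question.
        signed = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "bucketname",
                    "Key": "uploads/photo/image/2/IMG_4842.jpg"},
            ExpiresIn=3600,  # seconds until the signature stops working
        )

        # Public object: a bare URL only works if the object is readable by
        # anyone, which is what fog_public = true arranges at upload time.
        bare = "https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg"

        print(signed)
        print(bare)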

    Read the article

  • Translate to MIPS

    - by user2334400
    HELP: translate A[i] = B[i-1] + B[i] + B[i+1] to MIPS. The address of array A is in $s1, the address of array B is in $s2, and the value of i is in $s3.

        lb   $s1, 0($s3)      # $s1 = A[i]
        lb   $s2, 0($s3)      # $s2 = B[i]
        sb   $s2, 0($s3)      # store s2
        add  $t0, $zero, $s2  # t0 = B[i]
        addi $s3, $s3, -1     # i = i - 1
        lb   $s2, 0($s3)      # $s2 = B[i-1]
        add  $t1, $zero, $s2  # t1 = B[i-1]
        add  $s1, $t0, $t1    # A[i] = B[i-1] + B[i]
        addi $s3, $s3, 2      # i + 1
        lb   $s2, 0($s3)      # s2 = B[i+1]
        add  $t2, $zero, $s2  # t2 = B[i+1]
        add  $s1, $s1, $t2    # A[i] = t1 + B[i+1]
        sb   $s1

    I know this is kind of a mess, but it's the best I can write.

    Read the article

  • No GPS updates on Galaxy S3

    - by Valelik
    I'm developing a GPS tracker and it works like a charm. But a couple of weeks ago a customer of mine (a vehicle-tracking company) bought Samsung Galaxy S3s for his drivers, and since then we have seen really strange behaviour from my app. The app receives location updates from the GPS receiver, but after some hours of running it stops receiving any location updates. I have registered the app for onGpsStatusChanged() too, and during this time onGpsStatusChanged() was still being called (I can see that the GPS receiver has 10-17 satellites!), but onLocationChanged() was not. After restarting the service (= re-registering the LocationListener) it works again. It is really strange; it seems that after some hours of work the GPS receiver is not in the mood to call onLocationChanged() :) Any idea what may be wrong?

    Read the article

  • Does the iPhone support background processes or services?

    - by Fedrick
    Hi all, I am planning to develop an iPhone client application to upload images from the iPhone gallery to Amazon S3 using REST calls, so is there any library to run this application as a background process on the iPhone? Also, is there any library to access the iPhone photo gallery (it should be able to access all the images, not only a selected one as in UIImagePickerController)? Thanks in advance to the Stack Overflow masters for sharing their knowledge.

    Read the article

  • Is there a sync services library for the iPhone?

    - by Fedrick
    Hi all, I am developing a mobile client to sync images from the iPhone photo gallery to Amazon S3, so are there any sync services libraries that can help me in this regard? Also, is there any library to access the iPhone photo gallery? I just want to pick all photos, randomly, from the images stored on the device with no user interaction. Thanks in advance.

    Read the article

  • How can I script an alert for when my Amazon Web Service usage goes above a certain amount?

    - by frabcus
    We're using S3, SimpleDB and SQS on quite a complicated project. I'd like to be able to automatically track their usage, to be sure we don't suddenly spend large amounts of money when we didn't intend to (perhaps because of a bug). Is there a way of reading the usage figures for all Amazon Web Services, and/or the current real-time dollar cost of an account, from a script? Or any service or script which provides alerts based on that?
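
    One scriptable route is CloudWatch's EstimatedCharges billing metric, which an alarm can watch and publish to an SNS topic for email alerts. A sketch with Python and boto3; the $100 threshold and the SNS topic ARN are placeholders, billing metrics exist only in us-east-1, and they must first be enabled in the account's billing preferences:

        import boto3

        # Billing metrics live only in the us-east-1 region.
        cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

        cloudwatch.put_metric_alarm(
            AlarmName="monthly-charges-over-100-usd",
            Namespace="AWS/Billing",
            MetricName="EstimatedCharges",
            Dimensions=[{"Name": "Currency", "Value": "USD"}],
            Statistic="Maximum",
            Period=21600,              # evaluate every 6 hours
            EvaluationPeriods=1,
            Threshold=100.0,           # alert once the month-to-date bill passes $100
            ComparisonOperator="GreaterThanThreshold",
            AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical topic
        )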

    Read the article

  • Cache front end for the JetS3t API

    - by Joshua
    Storage via the JetS3t REST API seems to be very slow. Is there a caching front end for the JetS3t API for avoiding a network hit on the fetch calls? The call in question is S3Service.getObject: http://jets3t.s3.amazonaws.com/api/org/jets3t/service/S3Service.html#getObject(org.jets3t.service.model.S3Bucket, java.lang.String, java.util.Calendar, java.util.Calendar, java.lang.String[], java.lang.String[], java.lang.Long, java.lang.Long)

    Read the article

  • Conditional action based on whether any file in a directory has a ctime newer than X

    - by jberryman
    I would like to run a backup job on a directory tree from a bash script if any of the files have been modified in the last 30 minutes. I think I can hack something together using find with the -ctime flag, but I'm sure there is a standard way to examine a directory for changes. I know that I can inspect the ctime of the top-level directory to see if files were added, but I need to be able to see changes as well. FWIW, I am using duplicity to back up directories to S3.
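
    For a 30-minute window, find's minute-granularity tests (-cmin/-mmin) fit better than -ctime, which counts in 24-hour units. Below is a sketch of the same check done directly in Python; the directory, the window, and the duplicity target are placeholders to adapt:

        import os
        import subprocess
        import time

        BACKUP_ROOT = "/path/to/tree"    # hypothetical directory to watch
        WINDOW = 30 * 60                 # "modified in the last 30 minutes"

        def recently_changed(root, window):
            cutoff = time.time() - window
            for dirpath, dirnames, filenames in os.walk(root):
                for name in filenames:
                    try:
                        st = os.stat(os.path.join(dirpath, name))
                    except OSError:
                        continue         # file vanished mid-walk; skip it
                    # st_ctime is the inode change time, like find's -cmin test
                    if st.st_ctime >= cutoff:
                        return True
            return False

        if recently_changed(BACKUP_ROOT, WINDOW):
            # placeholder duplicity invocation; adjust options and bucket
            subprocess.run(["duplicity", BACKUP_ROOT, "s3://bucket/prefix"],
                           check=True)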

    Read the article

  • AWS for Android SDK host name error

    - by feelingtheblanks
    I'm trying to upload to AWS S3 using their AWS SDK for Android, but both the sample project within the SDK and my own project give the following error on devices, while the emulator runs without problems (so there's no problem with my AWS account):

        "Host name may not be null."

    Upload code:

        s3Client.createBucket(Constants.getBucket());
        PutObjectRequest por = new PutObjectRequest(Constants.getBucket(),
                record.getFile().getName(), record.getFile());
        s3Client.putObject(por);

    Any help is appreciated.

    Read the article

  • Automatically Login & Startup A Windows Program On Amazon's EC2 Service

    - by darkAsPitch
    How can I automatically start a program on Amazon's EC2 Windows 2008 web servers? For example, if I wanted to test the "Digg effect" on a web page of mine, how could I launch 100 Windows 2008 servers at once, each loading one (or two?) instances of the Firefox web browser? I have placed a sample batch file in the Windows Startup folder that echoes the time it was called, but it only runs when I actually log in remotely via the Remote Desktop Protocol. I don't want to have to log in to 100 servers to get my software to run :P What can I do? I am using this Windows 2008 Datacenter, Amazon-supplied AMI specifically: ami-a2698bcb
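
    Two things are needed here: launching many instances in one call, and running a command at boot with no interactive login. On Windows AMIs, user data wrapped in <script> tags is executed at first boot by the EC2Config service, and a scheduled task created with /sc onstart runs on every boot without anyone logging in. A sketch with Python and boto3; the instance type, key pair, target URL and Firefox path are placeholders, and default account limits may cap concurrent instances well below 100:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # A <script> block in user data runs at first boot (EC2Config), before
        # any login; the scheduled task re-launches Firefox on later boots too.
        user_data = r"""<script>
        schtasks /create /tn StartFirefox /sc onstart /ru SYSTEM /tr "\"C:\Program Files\Mozilla Firefox\firefox.exe\" http://example.com/page"
        </script>"""

        ec2.run_instances(
            ImageId="ami-a2698bcb",   # the Windows 2008 AMI named in the question
            InstanceType="m1.small",  # hypothetical size
            MinCount=100,             # all 100 servers in one call
            MaxCount=100,
            KeyName="my-keypair",     # placeholder key pair
            UserData=user_data,       # boto3 base64-encodes this automatically
        )

    Whether a GUI app like Firefox actually renders pages when started by SYSTEM with no desktop session is its own question; for load testing, a headless HTTP client is usually a safer bet.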

    Read the article

  • How to deploy local project to Amazon

    - by Nai
    I have a small webapp written in Python/Django which works fine on my local machine. I've been tinkering and setting up my server on the free tier of Amazon EC2 by following online tutorials. However, the tutorials I have found so far show you how to set up your instance and stop there. So my question is: how do I get my local webapp onto my Amazon instance? FYI, I'm a sys admin/web dev noob. Thanks.
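
    An EC2 instance is, in the end, a Linux box reachable over SSH, so anything that moves files over SSH (scp, rsync, git) gets the app there; after that it's the usual Django deployment steps on the server. A sketch using Python with the Fabric library; the host name, key file, archive name, and the gunicorn service are all placeholder assumptions:

        from fabric import Connection  # Fabric 2.x: pip install fabric

        # Hypothetical public DNS name and key file from the EC2 console.
        conn = Connection(
            "ubuntu@ec2-203-0-113-10.compute-1.amazonaws.com",
            connect_kwargs={"key_filename": "/path/to/my-ec2-key.pem"},
        )

        # Ship the project as a tarball and unpack it on the instance.
        conn.put("myapp.tar.gz", "/tmp/myapp.tar.gz")
        conn.run("mkdir -p ~/myapp && tar xzf /tmp/myapp.tar.gz -C ~/myapp")

        # Install dependencies and (re)start the app server; service name assumed.
        conn.run("pip install --user -r ~/myapp/requirements.txt")
        conn.run("sudo systemctl restart gunicorn")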

    Read the article

  • Django Upload form to S3 img and form validation

    - by citadelgrad
    I'm fairly new to both Django and Python, and this is my first time using forms and uploading files with Django. I can get the uploads and saves to the database to work fine, but it fails to validate the email or to check whether the user selected a file to upload. I've spent a lot of time reading documentation trying to figure this out. Thanks!

    views.py:

        def submit_photo(request):
            if request.method == 'POST':
                def store_in_s3(filename, content):
                    conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
                    bucket = conn.create_bucket(AWS_STORAGE_BUCKET_NAME)
                    mime = mimetypes.guess_type(filename)[0]
                    k = Key(bucket)
                    k.key = filename
                    k.set_metadata("Content-Type", mime)
                    k.set_contents_from_file(content)
                    k.set_acl('public-read')
                if imghdr.what(request.FILES['image_url']):
                    qw = request.FILES['image_url']
                    filename = qw.name
                    image = filename
                    content = qw.file
                    url = "http://bpd-public.s3.amazonaws.com/" + image
                    data = {'image_url': url,
                            'user_email': request.POST['user_email'],
                            'user_twittername': request.POST['user_twittername'],
                            'user_website': request.POST['user_website'],
                            'user_desc': request.POST['user_desc']}
                    s = BeerPhotos(data)
                    if s.is_valid():
                        # import pdb; pdb.set_trace()
                        s.save()
                        store_in_s3(filename, content)
                        return HttpResponseRedirect(reverse('photos.views.thanks'))
                    return s.errors
                else:
                    return errors
            else:
                form = BeerPhotoForm()
            return render_to_response('photos/submit_photos.html', locals(),
                                      context_instance=RequestContext(request))

    forms.py:

        class BeerPhotoForm(forms.Form):
            image_url = forms.ImageField(widget=forms.FileInput, required=True,
                                         label='Beer',
                                         help_text='Select an image of no more than 2MB.')
            user_email = forms.EmailField(required=True,
                                          help_text='Please type a valid e-mail address.')
            user_twittername = forms.CharField()
            user_website = forms.URLField(max_length=128)
            user_desc = forms.CharField(required=True, widget=forms.Textarea,
                                        label='Description')

    template.html:

        <div id="stylized" class="myform">
          <form action="." method="post" enctype="multipart/form-data" width="450px">
            <h1>Photo Submission</h1>
            {% for field in form %}
              {{ field.errors }}
              {{ field.label_tag }}
              {{ field }}
            {% endfor %}
            <label><span>Click here</span></label>
            <input type="submit" class="greenbutton" value="Submit your Photo" />
          </form>
        </div>

    Read the article

  • Restoring WordPress EC2 instance from snapshot results in 403 Forbidden error

    - by Eric Matthew Turano
    This problem has been perplexing me for weeks now. Here's how the issue goes:

    1. Launch an Amazon Linux 64-bit instance and successfully install WordPress; the site is active with no issues.
    2. Create a snapshot of the instance's root volume.
    3. Shut down the instance.
    4. Create a volume from the snapshot, attach it to the instance, and reboot the instance.
    5. Associate an Elastic IP with the instance.

    Once that's done and I try logging onto the site, I am redirected to myurl.com/wp-admin/install.php and greeted with this message:

        Forbidden: You don't have permission to access /wp-admin/install.php on this server.
        Apache/2.2.25 (Amazon) Server at www.myurl.com Port 80

    Port 80 is open in the inbound security group settings, so that's not the issue. Keep in mind all I am doing is creating a new volume and attaching it to the same instance, and this issue comes up. What am I doing wrong, and how can I create a complete backup of my instance without this error occurring?

    Read the article

  • Does cloud storage replicate data over many datacenters, and if so, do I benefit from content delivery?

    - by Berkay
    Let's assume that I want to use a cloud storage service from one of the cloud storage providers. I have X GB of structured and unstructured data, and I will use this data as the content of my interactive web page. Now I have some doubts about this. I have many users, and they visit my web page from various countries. To be more specific: first, is my data stored in only one of the cloud storage provider's data centers, or is it replicated over many of them? Second, if so, how can I benefit from a content delivery network (matching and placing users' content in the storage data centers nearest to them)?

    Read the article

  • List DB2 version, OS and hardware on Linux? (aws image)

    - by mestika
    Hello everybody, I'm not that familiar with Linux, but I'm currently working on an AWS image for an assignment and I need to display the DB2 version, the OS, and the hardware. Is there a command or program of some sort I can use for this purpose? I tried an rpm called "Bonnie", but that only reports the throughput of the system. Thanks, Mestika
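
    Each of the three pieces has a stock command: db2level reports the DB2 version, uname -a the kernel/OS, and lscpu (or /proc/cpuinfo) the hardware. A small Python sketch that gathers all three, assuming db2level is on the PATH of the DB2 instance user:

        import subprocess

        def capture(cmd):
            """Run a command and return its stdout, or a note if it fails."""
            try:
                result = subprocess.run(cmd, capture_output=True, text=True, check=True)
                return result.stdout
            except (OSError, subprocess.CalledProcessError) as exc:
                return "could not run {}: {}".format(cmd[0], exc)

        print("DB2 version:\n" + capture(["db2level"]))   # DB2's own version banner
        print("OS:\n" + capture(["uname", "-a"]))         # kernel, hostname, arch
        print("Hardware:\n" + capture(["lscpu"]))         # CPU model, cores, sockets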

    Read the article

  • Is there the equivalent of cloud computing for modems?

    - by morpheous
    I asked this question on SF, and someone recommended that I ask it here (I don't think I have enough points to move a question from SF to SO, and in any case I don't know how to do it, so here is the question again): I am interested in the concept of PaaS (platform as a service). However, all talk about SaaS/PaaS seems to focus only on the computer itself, not its peripherals. Is it possible to 'outsource' modems as a resource, so that an app running remotely can pump data to a modem in the cloud? As a bit of background to the question: a group of us are thinking of starting a company that offers services similar to companies like Twilio etc., but I want to 'outsource' both the computing hardware (that's PaaS, the easy bit) and the modems (that's what I can't seem to find any info on). Does anyone know if modems can be bundled as part of a PaaS service? Alternatively, is there a way for an application running on one computer to communicate with (i.e. pump data to) a remote modem residing on another machine? I assume I can come up with some protocol over UDP or TCP, but there is no point reinventing the wheel if such a protocol already exists (or if some open source software allows one to do this). Any suggestions on how to solve this problem?
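
    At its simplest, a 'modem in the cloud' is a machine with a modem attached and a small TCP service relaying bytes between a socket and the serial port, which is what existing serial-over-TCP tools (ser2net, for example) already do. A minimal single-client sketch of that relay in Python with the pyserial package; the device path, baud rate, and port are placeholders, and real use would need authentication and framing:

        import serial          # pyserial: pip install pyserial
        import socket

        MODEM_DEVICE = "/dev/ttyUSB0"   # hypothetical modem serial device
        BAUD = 115200
        LISTEN_PORT = 7000

        modem = serial.Serial(MODEM_DEVICE, BAUD, timeout=0.1)

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen(1)

        conn, addr = server.accept()    # one remote client drives the modem
        conn.settimeout(0.1)
        while True:
            # socket -> modem
            try:
                data = conn.recv(4096)
                if not data:
                    break               # client hung up
                modem.write(data)
            except socket.timeout:
                pass
            # modem -> socket
            pending = modem.read(4096)  # returns quickly thanks to timeout=0.1
            if pending:
                conn.sendall(pending)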

    Read the article

  • On AWS EC2, Unable to run sudo command after modifying permissions to /usr folder

    - by Kayote
    All, we have searched quite a bit, and a few of Eliah Kagan's posts are great about getting access back to sudo. However, our server is on AWS EC2 and I am a complete newbie to this. We are trying to set up cron jobs for backing up our server data. What we did: using PuTTY, we created a script file, /usr/share/site-db-backup/backupToS3.php; however, Ubuntu was not saving the changes we made, as it reported we did not have permission as user 'ubuntu'. The error details are:

        "Upload of file backupToS3.php was successful but error occurred while setting the permission &/ or timestamp. If the problem persists, turn on 'ignore permission errors' option. Permission denied. Error code: 3 Request code 9"

    So, we ran the command sudo chmod -R a+rwx /usr to grant permission to the /usr folder. However, now whatever sudo command is run, we get the error:

        "/usr/lib/sudo/sudoers.so must be only be writable by owner. fatal error, unable to load plugins."

    We are complete newbies to Ubuntu and EC2, so we do need step-by-step guidance on how to get sudo back and successfully write to the crontab script sitting in the /usr folder.

    Read the article

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume, as a backup. rsync seems like the way to go with this, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
        total size is 821603948  speedup is 1.00

    Then, I ran it again, expecting to see minimal sent/received numbers because it would just be checksums. Instead...

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
        total size is 821603948  speedup is 1.00

    I'm new to rsync, so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
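
    A likely explanation: when source and destination are both local paths, rsync assumes disk I/O is cheaper than the delta algorithm's CPU cost and enables --whole-file by default, so the full file is 'sent' every run; passing --no-whole-file turns the rolling-checksum delta transfer back on. A small sketch of the comparison, driven from Python with the question's paths:

        import subprocess

        SRC = "dump.sql.1"
        DST = "../backup/dump.sql"

        # Default for local<->local paths: --whole-file, so the entire file is
        # copied on every run regardless of how little changed.
        subprocess.run(["rsync", "-v", SRC, DST], check=True)

        # Force the rolling-checksum delta algorithm back on. On an unchanged
        # file the 'sent' figure should collapse to roughly checksum traffic.
        subprocess.run(["rsync", "-v", "--no-whole-file", SRC, DST], check=True)

    Whether this helps through s3fs is less clear, since the delta algorithm still has to read the whole destination file to checksum it.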

    Read the article

  • How can I deploy my .NET app to Amazon EC2?

    - by Khash
    I have a .NET Windows service and a .NET web application that I would like to deploy to my Amazon EC2 Windows 2008 instances. At this point, all I need to do is copy the zipped files across to the EC2 box, remote desktop to the EC2 instance, and finish the deployment. To do this, I have tried LogMeIn Hamachi2 to create a P2P VPN and RoboCopy to copy the files; however, it seems Hamachi doesn't work on Windows EC2. What is your solution for deploying your .NET apps to Windows EC2 instances? I want to avoid running an FTP server on the box just to get my files up on the server, and I don't have a VPN server (like OpenVPN) running to support a cloud-based VPN solution. Perhaps I can find a simple way of using Amazon S3 as a strategy? Any ideas? Suggestions?
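
    S3 can indeed be the hand-off point: push the zipped build to a bucket from the dev machine, then pull it down from inside the instance over RDP or from a bootstrap script. A minimal sketch of both halves with Python and boto3 (bucket, key, and paths are placeholders; the AWS SDK for .NET exposes equivalent calls):

        import boto3

        s3 = boto3.client("s3")
        BUCKET = "my-deploy-artifacts"    # hypothetical bucket

        # On the dev machine: upload the zipped build.
        s3.upload_file("WebApp.zip", BUCKET, "releases/WebApp.zip")

        # On the EC2 instance: pull the same object down, then unzip and deploy.
        s3.download_file(BUCKET, "releases/WebApp.zip", r"C:\deploy\WebApp.zip")

        # Or mint a time-limited URL to paste into the instance's browser when
        # the box has no AWS credentials configured.
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": "releases/WebApp.zip"},
            ExpiresIn=3600,
        )
        print(url)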

    Read the article

  • Distributed datastore

    - by Julien Genestoux
    We're trying to add some kind of persistence to our app. The app generates about 250 entries per second. Each of these entries belongs to one of 2M files. For each file, we want to keep the last 10 entries, so we can look them up later. The way our client application works: it gets a stream of all the data, it fetches the right file (GET), it adds the new content, and it saves the file back (PUT). We're looking for an efficient way to store this data that can scale horizontally, as the amount of data we're getting is doubling every few weeks. We initially looked at S3. It works fine, but becomes very expensive very fast ($1000 monthly just in PUT operations!). We then gave Riak a shot, but it seems we can't get more than 60 writes/sec on each node, which is very, very slow. Any other solution out there?
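
    For reference, the GET/modify/PUT cycle described above looks roughly like the sketch below against S3, which also makes the cost profile visible: every entry costs one GET plus one full-object PUT. Python with boto3; the bucket, key scheme, and JSON encoding are assumptions:

        import json

        import boto3
        from botocore.exceptions import ClientError

        s3 = boto3.client("s3")
        BUCKET = "entries-store"    # hypothetical bucket

        def append_entry(file_id, entry, keep=10):
            """GET the file's entries, append, trim to the last `keep`, PUT it back."""
            key = "files/{}.json".format(file_id)
            try:
                body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
                entries = json.loads(body)
            except ClientError:     # no object yet: first entry for this file
                entries = []
            entries.append(entry)
            entries = entries[-keep:]   # keep only the last 10
            # One whole-object PUT per entry is exactly why 250/sec gets pricey,
            # and two concurrent writers can race; a store with atomic capped
            # lists avoids both problems.
            s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(entries))

        append_entry("file-000123", {"ts": 1333000000, "value": 42})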

    Read the article

  • Serving files over HTTPS dynamically based on request.ssl? with Attachment_fu

    - by Marston A.
    I see there is a :use_ssl option in attachment_fu which checks the amazon_s3.yml file in order to serve files via https://. In s3_backend.rb you have this method:

        def self.protocol
          @protocol ||= s3_config[:use_ssl] ? 'https://' : 'http://'
        end

    But this makes it serve ALL S3 attachments over SSL. I'd like to make it dynamic depending on whether the current request was made over https://, i.e.:

        if request.ssl?
          @protocol = "https://"
        else
          @protocol = "http://"
        end

    How can I make it work this way? I've tried modifying the method, and then I get:

        NameError: undefined local variable or method `request' for Technoweenie::AttachmentFu::Backends::S3Backend:Module

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >