Search Results

Search found 800 results on 32 pages for 's3'.

Page 18/32

  • fast forward/streaming in html5 video? RTSP?

    - by karpodiem
    Right now I've got a few .mp4s hosted on Amazon S3. I know that S3 supports RTMP, which is useful for streaming Flash. I'd like to accomplish something similar with HTML5 video; my biggest issue is that I need the ability to seek (fast forward) to a particular part of the video. Currently, when I request the video, it loads in its entirety before playing, which wastes bandwidth and is a dealbreaker. How could this be implemented? Is this even possible? RTSP looks like a good bet, but I haven't found whether anyone has rolled it out successfully.
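
    For reference, a minimal sketch of the progressive-download approach, with a placeholder bucket URL: S3 serves objects over plain HTTP and honors Range requests, which is the mechanism browsers use to seek without fetching the whole file, provided the MP4's index (the moov atom) sits at the front of the file:

      <!-- the browser issues HTTP Range requests against S3 to jump to any offset -->
      <video controls preload="metadata">
        <source src="https://mybucket.s3.amazonaws.com/clip.mp4" type="video/mp4">
      </video>

    If an encoder wrote the moov atom at the end of the file, a tool such as qt-faststart or MP4Box can move it to the front so playback and seeking can start immediately.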

    Read the article

  • How to process images with paperclip on Heroku?

    - by Yuri
    I use Heroku for my app. I want to auto-orient the image and then resize it, so I do:

      class User < ActiveRecord::Base
        Paperclip.options[:swallow_stderr] = false
        has_attached_file :photo,
          :styles => { :square => "100%", :large => "100%" },
          :convert_options => { :square => "-auto-orient -geometry 70X70#",
                                :large  => "-auto-orient -geometry X300" },
          :storage => :s3,
          :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
          :path => ":attachment/:id/:style.:extension",
          :bucket => 'mybucket'
        validates_attachment_size :photo, :less_than => 5.megabyte
      end

    It does not work, failing with the error: There was an error processing the thumbnail for stream.20143. What am I doing wrong?
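
    One likely culprit, offered as an assumption rather than a confirmed diagnosis: ImageMagick geometry strings use a lowercase x, so 70X70# and X300 with a capital X are not valid geometries and can make the convert step fail. A sketch of the conventional layout, with the geometry in :styles and only the flag in :convert_options:

      # geometry belongs in :styles; convert_options carries extra flags
      :styles => { :square => "70x70#", :large => "x300" },
      :convert_options => { :square => "-auto-orient", :large => "-auto-orient" }

    With :swallow_stderr disabled as above, the underlying convert error message should appear in the logs and confirm or rule this out.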

    Read the article

  • String concatenation: Final string value does not equal to the latest value

    - by Pan Pizza
    I have a simple question about string concatenation. Following is the code. I want to ask: why is s6 = "abcde" and not "akcde"? I have changed the s2 value to "k".

      Public Class Form1
        Public s1 As String = "a"
        Public s2 As String = "b"
        Public s3 As String = "c"
        Public s4 As String = "d"
        Public s5 As String = "e"
        Public s6 As String = ""
        Public s7 As String = "k"

        Private Sub Button2_Click(sender As System.Object, e As System.EventArgs) Handles Button2.Click
          s6 = s1 & s2 & s3 & s4 & s5
          s2 = s7
          MessageBox.Show(s6) 's6 = abcde
        End Sub
      End Class
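
    The concatenation runs before s2 is reassigned, and a String variable holds a value, not a live reference to the expression that produced it, so s6 keeps the result computed from the old s2. A minimal sketch of the evaluation order:

      s6 = s1 & s2 & s3 & s4 & s5   ' evaluated now, with s2 = "b", giving "abcde"
      s2 = s7                        ' changes s2 only; s6 is untouched
      s6 = s1 & s2 & s3 & s4 & s5   ' recomputing after the change gives "akcde"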

    Read the article

  • jQuery Cycle pageAnchorBuilder / jQuery Selectors

    - by Wes
    I'm trying to grab the source of an image with jQuery. My HTML looks like this:

      <div class="featuredSlideImage">
        <img src="http://apture.s3.amazonaws.com/0000012865c9e9d984b36217007f000000000001.latte%20heart.jpg"/>
      </div> <!--featuredSlideImage-->

    My jQuery selector is:

      return '<li>' + jQuery(slide).children(".featuredSlideImage").html(); + '</li>';

    which returns this:

      <img src="http://apture.s3.amazonaws.com/0000012865c9e9d984b36217007f000000000001.latte%20heart.jpg"/>

    I want to just return the source of that, sans the HTML. How can I go about this?
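
    A minimal sketch of one way to do it: read the img element's src attribute directly instead of serializing the inner HTML (note also the stray semicolon after .html() in the snippet above, which ends the return statement early):

      // find the img inside the slide and return only its src
      var src = jQuery(slide).find(".featuredSlideImage img").attr("src");
      return '<li>' + src + '</li>';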

    Read the article

  • Java webapp: adding a content-disposition header to force browsers "save as" behavior

    - by WizardOfOdds
    Webapps that wish to force a resource to be downloaded (rather than displayed) in a browser can use the Content-Disposition header like this: Content-Disposition: attachment; filename=FILENAME. Even though it's only defined in RFC 2183 and isn't part of HTTP 1.1/RFC 2616, it works as wanted in most web browsers, so from the client side everything is good enough. However, on the server side, in my case, I've got a Java webapp and I don't know how I'm supposed to set that header, especially in the following case: I'll have a file (say, called "bigfile") hosted on an Amazon S3 instance (my S3 bucket shall be accessible using a partial address like files.mycompany.com/), so users will be able to access this file at files.mycompany.com/bigfile. Now, is there a way to craft a servlet (or a .jsp) so that the Content-Disposition header is always added when the user wants to download that file? What would the code look like, and what are the gotchas, if any?
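
    A minimal sketch of one approach, assuming the servlet is allowed to proxy the S3 object; the URL and filename are placeholders, and a real version would also forward Content-Length and handle errors:

      import java.io.IOException;
      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.URL;
      import javax.servlet.ServletException;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      public class DownloadServlet extends HttpServlet {
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                  throws ServletException, IOException {
              resp.setContentType("application/octet-stream");
              // the header the browser needs to trigger "save as"
              resp.setHeader("Content-Disposition", "attachment; filename=bigfile");
              URL s3 = new URL("http://files.mycompany.com/bigfile");
              InputStream in = s3.openStream();
              OutputStream out = resp.getOutputStream();
              try {
                  byte[] buf = new byte[8192];
                  int n;
                  while ((n = in.read(buf)) != -1) {
                      out.write(buf, 0, n);
                  }
              } finally {
                  in.close();
              }
          }
      }

    The main gotcha is that every download now flows through the webapp instead of straight from S3. S3 can also be asked to send the header itself by setting Content-Disposition as object metadata at upload time, which avoids the proxy entirely.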

    Read the article

  • What is Google Docs' SLA?

    - by Walter White
    Hi all, I am evaluating online storage, and for me that means either Amazon S3 or Google Docs. Amazon very clearly posts their reliability figures and SLA: http://aws.amazon.com/s3/#protecting. Their rates are obviously higher than Google's, but it is really hard to compare without having an SLA. Does anyone know what Google's commitment is for reliability? Is it 99.99% for data, and is there any way to make that more durable? I have to ask too: wouldn't Google Docs at least be inherently more reliable than a hard drive? Thanks, Walter

    Read the article

  • null-coalescing operator or conditional operator

    - by rkrauter
    Which coding style do you prefer?

      object o = new object();
      //string s1 = o ?? "Tom"; // Cannot implicitly convert type 'object' to 'string' CS0266
      string s3 = Convert.ToString(o ?? "Tom");
      string s2 = (o != null) ? o.ToString() : "Tom";

    s2 or s3? Is it possible to make it shorter? s1 obviously does not work.
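
    A minimal sketch of one shorter variant: coalesce first, then call ToString() on the result, which compiles because the whole expression stays typed as object until the final call:

      // coalesce to a non-null object, then convert once
      string s4 = (o ?? "Tom").ToString();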

    Read the article

  • JSF & jqPlot Uncaught TypeError

    - by sdg
    I have a problem using jqPlot with JSF. I added this code to my JSF page:

      $(document).ready(function () {
        var s1 = [200, 600, 700, 1000];
        var s2 = [460, -210, 690, 820];
        var s3 = [-260, -440, 320, 200];
        var ticks = ['May', 'June', 'July', 'August'];
        var plot1 = $.jqplot('chart1', [s1, s2, s3], {
          // The "seriesDefaults" option is an options object that will
          // be applied to all series in the chart.
          seriesDefaults: {
            renderer: $.jqplot.BarRenderer,
            rendererOptions: { fillToZero: true }
          },
          series: [{ label: 'Hotel' }, { label: 'Event Regristration' }, { label: 'Airfare' }],
          legend: { show: true, placement: 'outsideGrid' },
          axes: {
            xaxis: { renderer: $.jqplot.CategoryAxisRenderer, ticks: ticks },
            yaxis: { pad: 1.05, tickOptions: { formatString: '$%d' } }
          }
        });
      });

    but when I try to load the page I get this error:

      Uncaught TypeError: Cannot read property 'BarRenderer' of undefined
      (anonymous function) portfolioModeling.xhtml:184
      f.extend._Deferred.e.resolveWith jquery.min.js:2
      e.extend.ready jquery.min.js:2
      c.addEventListener.C

    I added all the required js files and also the css file, but I am lost and don't know where the problem is. Thanks in advance.
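
    "Cannot read property 'BarRenderer' of undefined" means $.jqplot itself was undefined when the ready handler ran, i.e. jquery.jqplot.js and its plugin files had not loaded first. A minimal sketch of the include order, with placeholder paths; since JSF can rewrite resource URLs, it's worth checking the rendered page source to confirm each script actually loads:

      <script src="js/jquery.min.js"></script>
      <script src="js/jquery.jqplot.min.js"></script>
      <!-- the renderers live in separate plugin files -->
      <script src="js/plugins/jqplot.barRenderer.min.js"></script>
      <script src="js/plugins/jqplot.categoryAxisRenderer.min.js"></script>
      <link rel="stylesheet" href="css/jquery.jqplot.min.css" />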

    Read the article

  • why does this code crash?

    - by ashish yadav
    Why does this code crash? Is using strcat illegal on character pointers?

      #include <stdio.h>
      #include <string.h>

      int main()
      {
          char *s1 = "Hello, ";
          char *s2 = "world!";
          char *s3 = strcat(s1, s2);
          printf("%s", s3);
          return 0;
      }

    Please give a proper way, referring to both arrays and pointers.
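
    A minimal sketch of the fix: s1 points at a string literal, which is read-only and has no spare room, so strcat writes into memory it must not touch. The destination has to be a writable array large enough for both strings:

      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          char s1[32] = "Hello, ";   /* writable array with spare room */
          const char *s2 = "world!"; /* a pointer to a literal is fine as the source */
          char *s3 = strcat(s1, s2); /* strcat returns its destination */
          printf("%s\n", s3);        /* prints: Hello, world! */
          return 0;
      }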

    Read the article

  • Is there a way to rewrite the SQL query efficiently

    - by user320587
    Hi, I have two tables with the following definitions (the values are just examples; only the key columns are shown):

      TableA (ID1, ID2, ID3, Value1, Value)    TableB (ID1, Value1)
      C1  P1  S1                               S1
      C1  P1  S2                               S2
      C1  P1  S3                               S3
      C1  P1  S5                               S4
                                               S5

    TableA has a clustered primary key (ID1, ID2, ID3) and TableB has primary key ID1. I need to create a table that has the records missing from TableA based on TableB. The select query I am trying to create should give the following output:

      C1  P1  S4

    To do this, I have the following SQL query:

      SELECT DISTINCT TableA.ID1, TableA.ID2, TableB.ID1
      FROM TableA a, TableB b
      WHERE TableB.ID1 NOT IN (
          SELECT DISTINCT [ID3] FROM TableA aa
          WHERE a.ID1 = aa.ID1 AND a.ID2 = aa.ID2
      )

    Though this query works, it performs poorly, and my final TableA may have up to 1M records. Is there a way to rewrite this more efficiently? Thanks for any help, Javid
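
    A minimal sketch of a rewrite, under the assumption that the goal is every distinct (ID1, ID2) pair from TableA combined with every TableB.ID1 the pair lacks; NOT EXISTS typically lets the optimizer seek the clustered key on (ID1, ID2, ID3) instead of re-evaluating a correlated NOT IN subquery per row:

      SELECT a.ID1, a.ID2, b.ID1 AS ID3
      FROM (SELECT DISTINCT ID1, ID2 FROM TableA) a
      CROSS JOIN TableB b
      WHERE NOT EXISTS (
          SELECT 1
          FROM TableA aa
          WHERE aa.ID1 = a.ID1
            AND aa.ID2 = a.ID2
            AND aa.ID3 = b.ID1
      )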

    Read the article

  • Using delayed_job to process file uploads across multiple servers

    - by Steve Klabnik
    Does anyone have any good resources on how to do this? Basically, I'm working on a project (in Rails) where people can upload files. They might be big. I'd like to process them using delayed_job before sending them to S3. I'd also like to do this processing on a separate job queue server, rather than on the webserver itself. I'd rather not have to upload the files to the webserver, then transfer them to the job queue server, and then upload them to S3 if I don't have to. Thanks.
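
    A minimal sketch of one layout that avoids the double hop, assuming the browser uploads straight to S3 (S3 supports browser-based POST uploads) so only the object key travels through the queue; ProcessUpload, s3_bucket, and transform are hypothetical names:

      # enqueue just the S3 key; the job server pulls, processes, re-uploads
      class ProcessUpload < Struct.new(:s3_key)
        def perform
          raw = s3_bucket.objects[s3_key].read            # hypothetical S3 client call
          processed = transform(raw)                      # hypothetical processing step
          s3_bucket.objects["processed/#{s3_key}"].write(processed)
        end
      end

      Delayed::Job.enqueue ProcessUpload.new("uploads/bigfile.mov")

    This way the webserver never touches the bytes; the job queue server is the only machine that downloads and re-uploads them.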

    Read the article

  • Drop duplicated axis label in Flex Chart

    - by Sean Chen
    Hi, all. I use LineChart in Flex with a horizontal category axis, and I need to drop duplicated category labels on the chart. The data I use looks like this:

      {Product: "C1", Store: "S1", Profit: 1500},
      {Product: "C2", Store: "S1", Profit: 1000},
      {Product: "C3", Store: "S2", Profit: 800},
      {Product: "C4", Store: "S2", Profit: 1200},
      {Product: "C5", Store: "S3", Profit: 1800}

    Because I set horizontalAxis.categoryField = "Store", the chart shows the labels "S1, S1, S2, S2, S3" on the axis. However, both the C1 and C2 data points group on the second "S1" category (same for C3 and C4 on the second "S2"). If I accept grouping data points on the same x-position, is there any way to drop the duplicated labels?
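
    A sketch using CategoryAxis.labelFunction, which receives the previous category value precisely so repeated labels can be blanked out; treat the exact signature as an assumption to verify against the Flex SDK docs for your version:

      // return an empty label whenever the category repeats its predecessor
      private function dropDupes(categoryValue:Object,
                                 previousCategoryValue:Object,
                                 axis:CategoryAxis,
                                 categoryItem:Object):String {
          return categoryValue == previousCategoryValue ? "" : String(categoryValue);
      }

      <mx:CategoryAxis categoryField="Store" labelFunction="dropDupes"/>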

    Read the article

  • Rails streaming file download

    - by Leonard Teo
    I'm trying to implement a file download with Rails. I want to eventually migrate this code to using S3 to serve the file. I've copied the Rails send_file code almost verbatim, and I cannot seem to get it to stream a file to the user. What happens is that it sends 'a' file to the user, but the downloaded file itself simply contains the Proc's inspect output:

      # What am I doing wrong here?
      options = {}
      options[:length] = File.size(file.path)
      options[:filename] = File.basename(file.path)
      send_file_headers! options

      render :status => 200, :text => Proc.new { |response, output|
        len = 4096
        File.open(file.path, 'rb') do |fh|
          while buf = fh.read(len)
            output.write(buf)
          end
        end
      }

    PS: I've read in a number of other posts that it's not advisable to send files through the Rails stack, and, if possible, to serve them using the web server, or in the case of S3 to use the hashed URL it can provide. Yes, we really do want to serve the file through the Rails stack.
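
    A minimal sketch of an alternative, assuming Rails 3, where the :text => Proc trick was dropped: assign any object that responds to each as the response body, and Rack streams the chunks it yields (whether they reach the client incrementally still depends on the app server's buffering):

      # controllers can stream by handing Rack an each-able body
      class FileStreamer
        def initialize(path)
          @path = path
        end

        def each
          File.open(@path, 'rb') do |fh|
            while buf = fh.read(4096)
              yield buf
            end
          end
        end
      end

      send_file_headers! :length   => File.size(file.path),
                         :filename => File.basename(file.path)
      self.response_body = FileStreamer.new(file.path)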

    Read the article

  • Rails - Paperclip, getting width and height of image in model

    - by Corey Tenold
    Trying to get the width and height of the uploaded image while still in the model on the initial save. Any way to do this? Here's the snippet of code I've been testing with from my model. Of course, it fails on "instance.photo_width":

      has_attached_file :photo,
        :styles => {
          :original => "634x471>",
          :thumb => Proc.new { |instance|
            ratio = instance.photo_width / instance.photo_height
            min_width  = 142
            min_height = 119
            if ratio > 1
              final_height = min_height
              final_width  = final_height * ratio
            else
              final_width  = min_width
              final_height = final_width * ratio
            end
            "#{final_width}x#{final_height}"
          }
        },
        :storage => :s3,
        :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
        :path => ":attachment/:id/:style.:extension",
        :bucket => 'foo_bucket'

    So I'm basically trying to do this to get a custom thumbnail width and height based on the initial image dimensions. Any ideas?
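
    A sketch of one way to get the dimensions inside the Proc, assuming a Paperclip version that exposes queued_for_write: Paperclip::Geometry.from_file can read the not-yet-saved upload, and it returns floats, which also avoids the integer-division pitfall in the ratio above:

      :thumb => Proc.new { |instance|
        # read the dimensions of the pending upload before it is stored
        geo = Paperclip::Geometry.from_file(
                instance.photo.queued_for_write[:original])
        ratio = geo.width / geo.height
        if ratio > 1
          "#{(119 * ratio).round}x119"
        else
          "142x#{(142 * ratio).round}"
        end
      }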

    Read the article

  • bcdiv() bcadd() bcsub() with Php

    - by Pieman
    Will this code be 'stressful' for a server? Or is it easy to bcdiv/bcsub/bcadd to 10000 decimal places? I'm thinking of looping it a few times... not sure...

      $s2 = bcdiv('1', $test, 10000);
      $s  = bcsub($s, $s2, 10000);
      $test += 2;
      $s3 = bcdiv('1', $test, 10000);
      $s  = bcadd($s, $s3, 10000);
      $test += 2;

    Any advice? :)
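
    A minimal sketch of how to answer this empirically: wrap the loop in a timer and measure, since bc addition and subtraction at a fixed scale are linear in the number of digits while division is costlier; the iteration count is a placeholder:

      <?php
      // time 1000 iterations of the divide/add pattern at scale 10000
      $test = 1;
      $s = '0';
      $start = microtime(true);
      for ($i = 0; $i < 1000; $i++) {
          $s = bcadd($s, bcdiv('1', (string)$test, 10000), 10000);
          $test += 2;
      }
      printf("%.3f seconds\n", microtime(true) - $start);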

    Read the article

  • Memory alignment in C

    - by user1758245
    Here is a snippet:

      #pragma pack(4)
      struct s1 {
          char a;
          long b;
      };
      #pragma pack()

      #pragma pack(2)
      struct s2 {
          char c;
          struct s1 st1;
      };
      #pragma pack()

      #pragma pack(2)
      struct s3 {
          char a;
          long b;
      };
      #pragma pack()

      #pragma pack(4)
      struct s4 {
          char c;
          struct s3 st3;
      };
      #pragma pack()

    I thought sizeof(struct s4) should be 10 or 12, but it turns out to be 8. I am using Visual C++ 6.0. Could someone tell me why?
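
    A sketch of the reasoning, as a self-contained program that prints the layout: a struct's alignment is fixed where it is defined, so s3 (defined under pack(2)) is 6 bytes with alignment 2, and the surrounding pack(4) only caps alignment, it never raises it:

      #include <stdio.h>
      #include <stddef.h>

      #pragma pack(2)
      struct s3 { char a; long b; };   /* b at offset 2: size 6, alignment 2 */
      #pragma pack()

      #pragma pack(4)
      struct s4 { char c; struct s3 st3; };
      #pragma pack()

      int main(void)
      {
          /* st3 needs only 2-byte alignment, so it lands at offset 2,
             and sizeof(struct s4) = 2 + 6 = 8, not 10 or 12 */
          printf("offsetof(s4, st3) = %u\n", (unsigned)offsetof(struct s4, st3));
          printf("sizeof(s4)        = %u\n", (unsigned)sizeof(struct s4));
          return 0;
      }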

    Read the article

  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should not lose any log messages if any single server goes offline, or if an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at minimum on two different EBS volumes in two availability zones, though S3 is a good place as well. It should also be realtime, so that messages arrive in two different availability zones within seconds of their generation. I also need to sync logfiles not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around. I have already reviewed a few solutions, which I will list here:

    Flume to Flume to S3: I could set up two logservers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages, and all messages would arrive in two availability zones in realtime. However, there would need to be some way to join the logs of the two servers, deduplicating all the messages delivered to both. This could be done by adding a unique id on the sending side to each message and then running some manual deduplication passes over the logfiles. I haven't found an easy solution to the duplication problem.

    Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use one of the clustering-capable AMQP implementations, or fan out the delivery just as in the Flume case. AMQP seems to be yet another moving part with several implementations and no real guidance on what works best for this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from Logstash to ElasticSearch, assuming crashing servers in between. The fan-out solutions run into the deduplication problem again. The best solution that would seem to handle all the cases is Beetle, which seems to provide high availability and deduplication via a Redis store. However, I haven't seen any guidance on how to set this up with Logstash, and Redis is one more moving part again for something that shouldn't be terribly difficult.

    Logstash to ElasticSearch: I could run Logstash on all the servers, keep all the filtering and processing rules on the servers themselves, and just have them log directly to a remote ElasticSearch server. I think this should bring me reliable logging, and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure whether the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising.

    rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode. And log rotations mess things up unless I'm careful.

    rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and have a local queue to store the messages. There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog.

    None of these solutions seem terribly good, and they still have many unknowns, so I am asking for more information here from people who have set up centralized reliable logging as to what the best tools are to achieve that goal.
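
    For the rsyslog option, a minimal sketch of the client side, assuming rsyslog with the omrelp module and disk-assisted queues so messages survive restarts and outages; host names, ports, and queue sizes are placeholders:

      $ModLoad omrelp

      # disk-assisted queue: spool to disk while the remote host is unreachable
      # (legacy-syntax queue directives bind to the next action only, hence the
      #  repetition before each destination)
      $ActionQueueType LinkedList
      $ActionQueueFileName relp_spool_a
      $ActionQueueMaxDiskSpace 1g
      $ActionResumeRetryCount -1
      *.* :omrelp:logs-a.internal:2514

      $ActionQueueType LinkedList
      $ActionQueueFileName relp_spool_b
      $ActionQueueMaxDiskSpace 1g
      $ActionResumeRetryCount -1
      *.* :omrelp:logs-b.internal:2514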

    Read the article

  • udp through nat

    - by youllknow
    Hi everyone! I have two private networks, each behind a typical DSL router. The routers are connected to the WWW, and the external interface of each router has one dynamic IP address. I want to stream data via UDP directly between one client in private network A and one client in private network B. I've already tried a lot of things (see: http://en.wikipedia.org/wiki/UDP_hole_punching, or STUN), but it wasn't possible for me to transfer data between the two clients. It is possible to use a server (located in the WWW, with a static IP) to exchange the external IPs (and external ports) that the routers assign to the clients. So imagine client A knows client B's external IP and the external port assigned by B's router. I simply tried sending UDP packets to the receiver's external IP/port combination, but without any result. So does anyone know what to do to communicate via UDP through the two NAT routers? It must be possible; or does Skype, for example, not communicate directly between the clients when they call each other (voice over IP)? I am sorry for my bad English! If something is confusing, don't mind asking me! Thanks for your help in advance.

    ::::EDIT::::

    I can't get pwnat or chownat working. I tried it with my own DSL gateway, without success. Then I set up a complete virtual environment using VMWare:

      C1 (Client 1, WinXP Prof SP3): 172.16.16.100/24, GW 172.16.16.1
      C2 (Client 2, WinXP Prof SP3): 10.0.0.100/24, GW 10.0.0.1
      C3 (Client 3, WinXP Prof SP3): 3.0.0.2/24, GW 3.0.0.1
      S1 (Ubuntu 10.04 x64 Server): eth0: 172.16.16.1/24, eth1: 1.0.0.2/24, GW 1.0.0.1
      S2 (Ubuntu 10.04 x64 Server): eth0: 10.0.0.1/24, eth1: 2.0.0.2/24, GW 2.0.0.1
      S3 (Ubuntu 10.04 x64 Server): eth0: 1.0.0.1/24, eth1: 2.0.0.1/24, eth2: 3.0.0.1/24

      +--+     +--+     +--+     +--+     +--+
      |C1|-----|S1|-----|S3|-----|S2|-----|C2|
      +--+     +--+     +--+     +--+     +--+
                          |
                        +--+
                        |C3|
                        +--+

    Servers S1 and S2 provide NAT functionality (they have routing enabled and a firewall which allows traffic from the internal net and provides the NAT). Server S3 has routing enabled. The client firewalls are turned off. C1 and C2 are able to ping C3 and, e.g., visit C3's webserver. They are also able to send UDP packets to C3 (C3 successfully receives them)! C1 and C2 also have webservers running for test reasons. I ran "chownat -s 80 2.0.0.2" on C1 and "chownat -c 8000 1.0.0.2" on C2, then tried to access the webpage from C1 via web browser at localhost port 8000. It didn't work. Can anybody help me? Any suggestions? If you have any questions about my question, please ask!
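
    For reference, a minimal Python sketch of the punching step itself, under the assumption that both sides have already learned the peer's external IP:port from the rendezvous server and keep using the same local port the server saw; symmetric NATs, which assign a fresh external port per destination, defeat this technique:

      import socket, time

      LOCAL_PORT = 40000                 # the port our NAT mapping was created from
      PEER = ('203.0.113.7', 41234)      # peer's external mapping (placeholder)

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.bind(('', LOCAL_PORT))

      # both sides send first: the outbound packets open a hole in each NAT
      for _ in range(10):
          sock.sendto(b'punch', PEER)
          time.sleep(0.3)

      sock.settimeout(10)
      data, addr = sock.recvfrom(1500)
      print('received', data, 'from', addr)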

    Read the article

  • Grant’s video warning – backup verification

    Grant takes a humorous (but completely serious) look at why you should be regularly verifying your backups. Get top tips for backup and recovery, and protect yourself when disaster strikes. Watch the video.

    Read the article

  • Ranking Part III

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    In the previous blogs, "Ranking an Introduction" and "Ranking Part II", you have already praised me in "Rank the Author" and learned how to create a new element on a page and how to place it where you need it. For this installment, I just added code to keep the number of votes (you vote by clicking one of the stars) and the total vote. Using these two, we can compute the average rating. It's a small step, but its purpose is to show that we do not need a detailed history in order to compute the average; a running total is sufficient. Please note that once you close the game, you will lose your previous total. In real life, we persist the totals in the list itself. We also keep a list of actual votes, but its purpose is to prevent double votes. If a person has already voted, his user id is already on the list, and our program will check for it and bar the person from voting again. This is coded in an event receiver, which is a piece of SharePoint server code. I will show you how to do this part in a subsequent blog. Again, go to the page and look at the code. The gist of it is here; avg, votes, and stars are global variables that I defined before.

      function sendRate(sel){
        // I hate long lines, so I created pieces of the message in their own vars
        var s1 = "Your Rating Was: ";
        var s2 = ".. ";
        var s3 = "\nVotes = ";
        var s4 = "\nTotal Stars = ";
        var s5 = "\nAverage = ";
        var s;
        s = parseInt(sel.id.replace("_", ''));  // Get the selected star number
        votes = parseInt(votes) + 1;
        stars = parseInt(stars) + s;
        avg = parseFloat(stars) / parseFloat(votes);
        alert(s1 + sel.id + s2 + sel.title + s3 + votes + s4 + stars + s5 + avg);
      }

    Click on the link to play and examine "Ranking with Stats". That's all folks!

    Read the article

  • Unity desktop "smears" (doesn't refresh) and shows no wallpaper

    - by Cedric Reichenbach
    For a couple of days now, my Unity desktop background has smeared everything instead of refreshing, just like old Windows versions were famous for. Of course, I tried rebooting a couple of times. I also switched the graphics driver and tried changing the wallpaper and theme, but none of that solved the problem. What could be causing this, and where can I search for its source?

    Information update: I'm using Ubuntu 13.04 (not updated to 13.10 yet). The following commands were all run from Cinnamon (on the same Ubuntu installation).

    sudo lsb_release -a:

      No LSB modules are available.
      Distributor ID: Ubuntu
      Description:    Ubuntu 13.04
      Release:        13.04
      Codename:       raring

    sudo uname -a:

      Linux cedric-MacBookPro 3.8.0-32-generic #47-Ubuntu SMP Tue Oct 1 22:35:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

    sudo dpkg -l | grep xserver-xorg-video:

      ii xserver-xorg-video-all             1:7.7+1ubuntu4                 amd64  X.Org X server -- output driver metapackage
      ii xserver-xorg-video-ati             1:7.1.0-0ubuntu2               amd64  X.Org X server -- AMD/ATI display driver wrapper
      ii xserver-xorg-video-cirrus          1:1.5.2-0ubuntu1               amd64  X.Org X server -- Cirrus display driver
      ii xserver-xorg-video-fbdev           1:0.4.3-0ubuntu1               amd64  X.Org X server -- fbdev display driver
      ii xserver-xorg-video-intel           2:2.21.6-0ubuntu4.3            amd64  X.Org X server -- Intel i8xx, i9xx display driver
      ii xserver-xorg-video-mach64          6.9.3-0ubuntu1                 amd64  X.Org X server -- ATI Mach64 display driver
      ii xserver-xorg-video-mga             1:1.6.2-0ubuntu1               amd64  X.Org X server -- MGA display driver
      ii xserver-xorg-video-modesetting     0.7.0-0ubuntu2                 amd64  X.Org X server -- Generic modesetting driver
      ii xserver-xorg-video-neomagic        1:1.2.7-0ubuntu1               amd64  X.Org X server -- Neomagic display driver
      ii xserver-xorg-video-nouveau         1:1.0.7-0ubuntu1               amd64  X.Org X server -- Nouveau display driver
      ii xserver-xorg-video-openchrome      1:0.3.1-0ubuntu1.13.04.1       amd64  X.Org X server -- VIA display driver
      ii xserver-xorg-video-qxl             0.1.0-0ubuntu3                 amd64  X.Org X server -- QXL display driver
      ii xserver-xorg-video-r128            6.9.1-0ubuntu1                 amd64  X.Org X server -- ATI r128 display driver
      ii xserver-xorg-video-radeon          1:7.1.0-0ubuntu2               amd64  X.Org X server -- AMD/ATI Radeon display driver
      ii xserver-xorg-video-s3              1:0.6.5-0ubuntu3               amd64  X.Org X server -- legacy S3 display driver
      ii xserver-xorg-video-savage          1:2.3.6-0ubuntu1               amd64  X.Org X server -- Savage display driver
      ii xserver-xorg-video-siliconmotion   1:1.7.7-0ubuntu1               amd64  X.Org X server -- SiliconMotion display driver
      ii xserver-xorg-video-sis             1:0.10.7-0ubuntu1              amd64  X.Org X server -- SiS display driver
      ii xserver-xorg-video-sisusb          1:0.9.6-0ubuntu1               amd64  X.Org X server -- SiS USB display driver
      ii xserver-xorg-video-tdfx            1:1.4.5-0ubuntu1               amd64  X.Org X server -- tdfx display driver
      ii xserver-xorg-video-trident         1:1.3.6-0ubuntu2               amd64  X.Org X server -- Trident display driver
      ii xserver-xorg-video-vesa            1:2.3.2-0ubuntu1               amd64  X.Org X server -- VESA display driver
      ii xserver-xorg-video-vmware          1:12.0.2+git.e5ac80d8-0ubuntu1 amd64  X.Org X server -- VMware display driver

    sudo lspci | grep VGA:

      01:00.0 VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 330M] (rev a2)

    Read the article

  • Data Quality Services Performance Best Practices Guide

    This guide details the high-level performance numbers expected and a set of best practices for getting optimal performance when using Data Quality Services (DQS) in SQL Server 2012 with Cumulative Update 1.

    Read the article

  • Open Source developers: Need your help to answer an 8-minute academic survey

    - by Yi Wang
    I am a researcher at the University of California, Irvine (UCI), conducting a study on collaboration tool usage in Open Source development. Your answers will help us develop new, powerful tools in the future. The link for this survey is: http://edu.surveygizmo.com/s3/1035227/Attitude-and-Usage-of-Collaboration-Tools-in-Open-Source-Software-Development The survey only takes 5-8 minutes. Thanks a lot for your help!

    Read the article

  • Stairway to XML: Level 1 - Introduction to XML

    In this level, Rob Sheldon explains what XML is and describes the components of an XML document: elements and attributes. He explains the basics of tags, entity references, enclosed text, comments, and declarations.

    Read the article
