Search Results

Search found 1018 results on 41 pages for 'galaxy s3'.

Page 22/41 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Is there a way to rewrite the SQL query efficiently

    - by user320587
    Hi, I have two tables with the following definitions (the values are just examples; both tables also carry a Value1 column, omitted below):

        TableA                TableB
        ID1  ID2  ID3         ID1
        C1   P1   S1          S1
        C1   P1   S2          S2
        C1   P1   S3          S3
        C1   P1   S5          S4
                              S5

    TableA has a clustered primary key on (ID1, ID2, ID3) and TableB has ID1 as its primary key. I need to create a table that has the records missing from TableA, based on TableB. The select query I am trying to create should give the following output:

        C1  P1  S4

    To do this, I have the following SQL query:

        SELECT DISTINCT a.ID1, a.ID2, b.ID1
        FROM TableA a, TableB b
        WHERE b.ID1 NOT IN (
            SELECT DISTINCT aa.ID3
            FROM TableA aa
            WHERE a.ID1 = aa.ID1 AND a.ID2 = aa.ID2
        )

    Though this query works, it performs poorly, and my final TableA may have up to 1M records. Is there a way to rewrite this more efficiently? Thanks for any help, Javid
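    A common rewrite for this anti-join pattern is NOT EXISTS over a cross join of the distinct (ID1, ID2) pairs, which SQL Server typically optimizes better than a correlated NOT IN. A sketch, reusing the table and column names from the question:

        SELECT p.ID1, p.ID2, b.ID1 AS ID3
        FROM (SELECT DISTINCT ID1, ID2 FROM TableA) AS p
        CROSS JOIN TableB AS b
        WHERE NOT EXISTS (
            SELECT 1
            FROM TableA a
            WHERE a.ID1 = p.ID1
              AND a.ID2 = p.ID2
              AND a.ID3 = b.ID1
        )

    With the clustered key on (ID1, ID2, ID3) the inner probe is a cheap seek, and TableB's primary key covers the outer side.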

    Read the article

  • Using delayed_job to process file uploads across multiple servers

    - by Steve Klabnik
    Does anyone have any good resources on how to do this? Basically, I'm working on a project (in Rails) where people can upload files. They might be big. I'd like to process them using delayed_job before sending them to S3. I'd also like to do this processing on a separate job queue server, rather than on the webserver itself. I'd rather not have to upload the files to the webserver, then transfer them to the job queue server, and then upload them to S3 if I don't have to. Thanks.
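    For the queue side, a minimal sketch of a delayed_job task that runs the processing step and then pushes the result to S3 with the aws-sdk gem (v1 API); the bucket name, key, and process method here are placeholders:

        class ProcessUploadJob < Struct.new(:local_path, :bucket, :key)
          def perform
            processed_path = process(local_path)   # the heavy processing step
            s3 = AWS::S3.new                       # credentials via AWS.config or env
            s3.buckets[bucket].objects[key].write(:file => processed_path)
          end

          def process(path)
            # placeholder: transcode / resize / scan, returning the output path
            path
          end
        end

        Delayed::Job.enqueue ProcessUploadJob.new('/shared/uploads/video.mov',
                                                  'my-bucket', 'processed/video.mov')

    Since delayed_job workers just poll the shared database, running them on a separate box only requires that the worker can reach that database and the file path (e.g. a shared volume), which avoids a second upload hop.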

    Read the article

  • Drop duplicated axis label in Flex Chart

    - by Sean Chen
    Hi, all. I use a LineChart in Flex with a horizontal category axis, and I need to drop duplicated category labels on the chart. The data I use looks like this:

        {Product: "C1", Store: "S1", Profit: 1500},
        {Product: "C2", Store: "S1", Profit: 1000},
        {Product: "C3", Store: "S2", Profit: 800},
        {Product: "C4", Store: "S2", Profit: 1200},
        {Product: "C5", Store: "S3", Profit: 1800}

    Because I set horizontalAxis.categoryField = "Store", the chart shows the labels "S1, S1, S2, S2, S3" on the axis. However, both C1 and C2 data points group on the second "S1" category (just as C3 and C4 do on the second "S2"). If I accept grouping data points at the same x-position, is there any way to drop the duplicated labels?
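    One route worth trying (a sketch, not tested against this exact chart): CategoryAxis supports a labelFunction that is also handed the previous category value, which makes blanking out repeats straightforward:

        // Assign with: horizontalAxis.labelFunction = dedupeLabel;
        private function dedupeLabel(categoryValue:Object,
                                     previousCategoryValue:Object,
                                     axis:CategoryAxis,
                                     categoryItem:Object):String {
            // Show the label only when it differs from its left-hand neighbour.
            return categoryValue == previousCategoryValue ? "" : String(categoryValue);
        }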

    Read the article

  • Rails streaming file download

    - by Leonard Teo
    I'm trying to implement a file download with Rails. I want to eventually migrate this code to using S3 to serve the file. I've copied the Rails send_file code almost verbatim, and I cannot seem to get it to stream a file to the user. What happens is that it sends 'a' file to the user, but the downloaded file itself simply contains the inspect output of the Proc:

        # What am I doing wrong here?
        options = {}
        options[:length] = File.size(file.path)
        options[:filename] = File.basename(file.path)
        send_file_headers! options

        render :status => 200, :text => Proc.new { |response, output|
          len = 4096
          File.open(file.path, 'rb') do |fh|
            while buf = fh.read(len)
              output.write(buf)
            end
          end
        }

    P.S.: I've read in a number of other posts that it's not advisable to send files through the Rails stack, and that if possible one should serve the file from the web server or, in the case of S3, use the signed URL it can provide. Yes, we really do want to serve the file through the Rails stack.
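    The :text => Proc API was dropped after Rails 2.x, which would explain the Proc being rendered with inspect. A hedged alternative on Rails 3+ is to assign response_body any object that responds to each (the class name below is illustrative):

        class FileStreamer
          def initialize(path)
            @path = path
          end

          # Rails calls each and writes every yielded chunk to the client.
          def each
            File.open(@path, 'rb') do |fh|
              while (buf = fh.read(4096))
                yield buf
              end
            end
          end
        end

        # In the controller action:
        #   send_file_headers! :length   => File.size(file.path),
        #                      :filename => File.basename(file.path)
        #   self.response_body = FileStreamer.new(file.path)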

    Read the article

  • Rails - Paperclip, getting width and height of image in model

    - by Corey Tenold
    Trying to get the width and height of the uploaded image while still in the model, on the initial save. Any way to do this? Here's the snippet of code I've been testing with from my model. Of course it fails on "instance.photo_width".

        has_attached_file :photo,
          :styles => {
            :original => "634x471>",
            :thumb => Proc.new { |instance|
              ratio = instance.photo_width / instance.photo_height
              min_width  = 142
              min_height = 119
              if ratio > 1
                final_height = min_height
                final_width  = final_height * ratio
              else
                final_width  = min_width
                final_height = final_width * ratio
              end
              "#{final_width}x#{final_height}"
            }
          },
          :storage => :s3,
          :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
          :path => ":attachment/:id/:style.:extension",
          :bucket => 'foo_bucket'

    So I'm basically trying to do this to get a custom thumbnail width and height based on the initial image dimensions. Any ideas?
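    One hedged workaround (a sketch, assuming a Paperclip version whose style Proc is handed the attachment object): the freshly assigned upload is available through queued_for_write before anything is written, so the dimensions can be measured with Paperclip::Geometry right inside the Proc. The aspect math below is illustrative, not the question's original formula:

        :thumb => Proc.new { |attachment|
          # The raw upload has not been written yet; read it from the queue.
          geo   = Paperclip::Geometry.from_file(attachment.queued_for_write[:original])
          ratio = geo.width / geo.height        # Geometry returns floats
          min_width, min_height = 142, 119
          if ratio > 1                          # landscape: pin the height
            "#{(min_height * ratio).round}x#{min_height}"
          else                                  # portrait: pin the width
            "#{min_width}x#{(min_width / ratio).round}"
          end
        }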

    Read the article

  • bcdiv() bcadd() bcsub() with Php

    - by Pieman
    Will this code be 'stressful' for a server? Or is it easy to bcdiv/bcsub/bcadd to 10000 decimal places? I'm thinking of looping it a few times... not sure...

        $s2 = bcdiv('1', $test, 10000);
        $s  = bcsub($s, $s2, 10000);
        $test += 2;

        $s3 = bcdiv('1', $test, 10000);
        $s  = bcadd($s, $s3, 10000);
        $test += 2;

    Any advice? :)
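    The honest answer to "is it stressful" is to measure: bcmath cost grows with the scale, so timing a few hundred iterations at scale 10000 tells you more than guessing. A minimal benchmark sketch (variable names follow the question):

        <?php
        $scale = 10000;
        $test  = 3;
        $s     = '1';

        $start = microtime(true);
        for ($i = 0; $i < 1000; $i++) {
            $s = bcsub($s, bcdiv('1', (string) $test, $scale), $scale);
            $test += 2;
            $s = bcadd($s, bcdiv('1', (string) $test, $scale), $scale);
            $test += 2;
        }
        printf("1000 iterations: %.2f s\n", microtime(true) - $start);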

    Read the article

  • Memory alignment in C

    - by user1758245
    Here is a snippet:

        #pragma pack(4)
        struct s1 {
            char a;
            long b;
        };
        #pragma pack()

        #pragma pack(2)
        struct s2 {
            char c;
            struct s1 st1;
        };
        #pragma pack()

        #pragma pack(2)
        struct s3 {
            char a;
            long b;
        };
        #pragma pack()

        #pragma pack(4)
        struct s4 {
            char c;
            struct s3 st3;
        };
        #pragma pack()

    I thought sizeof(struct s4) should be 10 or 12, but it turns out to be 8. I am using Visual C++ 6.0. Could someone tell me why?
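    A sketch of the reasoning, assuming MSVC semantics where #pragma pack only caps member alignment: struct s3 was compiled under pack(2), so it has alignment 2 and size 6 (char at offset 0, one pad byte, long at offset 2). Inside s4, pack(4) never raises an alignment, so st3 still aligns to 2 and lands at offset 2, giving sizeof 8 rather than 10 or 12. The offsets can be checked directly:

        #include <stdio.h>
        #include <stddef.h>

        #pragma pack(2)
        struct s3 { char a; long b; };          /* align 2 -> size 6 */
        #pragma pack()

        #pragma pack(4)
        struct s4 { char c; struct s3 st3; };   /* st3 keeps align 2 -> offset 2 */
        #pragma pack()

        int main(void)
        {
            /* Expected under MSVC: offset of st3 == 2, sizeof(struct s4) == 8 */
            printf("offsetof(s4, st3) = %u\n", (unsigned)offsetof(struct s4, st3));
            printf("sizeof(s4)        = %u\n", (unsigned)sizeof(struct s4));
            return 0;
        }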

    Read the article

  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should not lose any log messages if any single server goes offline, or if an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at minimum on two different EBS volumes in two availability zones, though S3 is a good place as well. It should also be realtime, so that messages arrive in two different availability zones within seconds of their generation. I also need to sync logfiles not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around. I have already reviewed a few solutions, which I will list here:

    Flume to Flume to S3: I could set up two logservers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages, and all messages would arrive in two availability zones in realtime. However, there would need to be some way to join the logs of the two servers, deduplicating all the messages delivered to both. This could be done by adding a unique id on the sending side to each message and then writing some manual deduplication runs over the logfiles. I haven't found an easy solution to the duplication problem.

    Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use one of the clustering-capable AMQP implementations, or fan out the delivery just as in the Flume case. AMQP seems to be yet another moving part with several implementations and no real guidance on what works best for this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from Logstash to ElasticSearch, assuming crashing servers in between. The fan-out solutions run into the deduplication problem again. The best solution that would seem to handle all the cases would be Beetle, which seems to provide high availability and deduplication via a Redis store. However, I haven't seen any guidance on how to set this up with Logstash, and Redis is one more moving part again for something that shouldn't be terribly difficult.

    Logstash to ElasticSearch: I could run Logstash on all the servers, have all the filtering and processing rules in the servers themselves, and just have them log directly to a remote ElasticSearch server. I think this should bring me reliable logging, and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure if the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising.

    rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode. And log rotations mess things up unless I'm careful.

    rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and have a local queue to store the messages. There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog.

    None of these solutions seem terribly good, and they still have many unknowns, so I am asking for more information here from people who have set up centralized reliable logging as to what the best tools are to achieve that goal.
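    For the rsyslog/RELP option, a configuration sketch of the dual-destination setup with disk-assisted queues (hostnames and ports are placeholders; parameter names follow rsyslog's RainerScript syntax, so double-check them against the installed version):

        module(load="omrelp")

        # Forward everything to two hosts, each with its own disk-assisted
        # queue so messages survive restarts and network outages.
        action(type="omrelp" target="logs-a.internal" port="2514"
               queue.type="LinkedList" queue.filename="fwd_logs_a"
               queue.saveOnShutdown="on" action.resumeRetryCount="-1")

        action(type="omrelp" target="logs-b.internal" port="2514"
               queue.type="LinkedList" queue.filename="fwd_logs_b"
               queue.saveOnShutdown="on" action.resumeRetryCount="-1")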

    Read the article

  • UDP through NAT

    - by youllknow
    Hi everyone! I have two private networks, each behind a typical DSL router. The routers are connected to the WWW, and the external interface of each router has one dynamic IP address. I want to stream data via UDP directly between one client in private network A and one client in private network B. I've already tried a lot of things (see: http://en.wikipedia.org/wiki/UDP_hole_punching, or STUN), but it wasn't possible for me to transfer data between the two clients. It is possible to use a server (located in the WWW, with a static IP) to exchange the external IPs (and external ports) of the routers between the clients. So imagine client A knows client B's external IP and the external port assigned by B's router. I simply tried sending UDP packets to the receiver's external IP/port combination, but without any result. So does anyone know what to do to communicate via UDP through the two NAT routers? It must be possible? Or does Skype, for example, not communicate directly between the clients when they call each other (voice over IP)? I am sorry for my bad English! If something is confusing, don't mind asking me! Thanks for your help in advance.

    ::::EDIT:::: I can't get pwnat or chownat working. I tried it with my own DSL gateway - it didn't work. Then I set up a complete virtual environment using VMWare:

        C1 (Client 1, WinXP Prof SP3): 172.16.16.100/24, GW 172.16.16.1
        C2 (Client 2, WinXP Prof SP3): 10.0.0.100/24, GW 10.0.0.1
        C3 (Client 3, WinXP Prof SP3): 3.0.0.2/24, GW 3.0.0.1
        S1 (Ubuntu 10.04 x64 Server): eth0: 172.16.16.1/24, eth1: 1.0.0.2/24, GW 1.0.0.1
        S2 (Ubuntu 10.04 x64 Server): eth0: 10.0.0.1/24, eth1: 2.0.0.2/24, GW 2.0.0.1
        S3 (Ubuntu 10.04 x64 Server): eth0: 1.0.0.1/24, eth1: 2.0.0.1/24, eth2: 3.0.0.1/24

        +--+     +--+     +--+     +--+     +--+
        |C1|-----|S1|-----|S3|-----|S2|-----|C2|
        +--+     +--+     +--+     +--+     +--+
                            |
                          +--+
                          |C3|
                          +--+

    Servers S1 and S2 provide NAT functionality (they have routing enabled and run a firewall that allows traffic from the internal net and provides the NAT). Server S3 has routing enabled. The client firewalls are turned off. C1 and C2 are able to ping C3, e.g. visit C3's webserver. They are also able to send UDP packets to C3 (C3 successfully receives them)! C1 and C2 also have webservers running for test reasons. I ran "chownat -s 80 2.0.0.2" on C1, and "chownat -c 8000 1.0.0.2" on C2. Then I tried to access C1's webpage via web browser at localhost port 8000. It didn't work. Can anybody help me? Any suggestions? If you have any questions about my question, please ask!
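    For reference, the hole-punching sequence itself is small; the key is that each client reuses the same socket (and therefore the same NAT mapping) for both the rendezvous server and the peer. A sketch in Python (the rendezvous address and message format are invented for illustration; the server just echoes each client the other's public ip:port):

        import json
        import socket

        RENDEZVOUS = ("203.0.113.10", 9999)   # placeholder public server

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(b"register", RENDEZVOUS)  # server records our public ip:port
        data, _ = sock.recvfrom(1024)
        ip, port = json.loads(data)           # server sends the peer's public ip:port
        peer = (ip, port)

        # Both peers now fire packets at each other from the SAME socket: the
        # first outbound packet opens a mapping in our own NAT, and once both
        # mappings exist, packets start getting through (cone-type NATs).
        sock.settimeout(1.0)
        for _ in range(10):
            sock.sendto(b"punch", peer)
            try:
                msg, addr = sock.recvfrom(1024)
                print("received", msg, "from", addr)
                break
            except socket.timeout:
                continue

    If either router is a symmetric NAT (a new external port per destination), this fails, which is one reason Skype falls back to relaying traffic through a third host when a direct path cannot be punched.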

    Read the article

  • Grant’s video warning – backup verification

    Grant takes a humorous (but completely serious) look at why you should be regularly verifying your backups. Get top tips for backup and recovery, and protect yourself when disaster strikes. Watch the video.

    Read the article

  • Ranking Part III

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    In the previous blogs "Ranking an Introduction" and "Ranking Part II", you have already praised me in "Rank the Author" and learned how to create a new element on a page and how to place it where you need it. For this installment, I just added code to keep the number of votes (you vote by clicking one of the stars) and the total vote. Using these two, we can compute the average rating. It's a small step, but its purpose is to show that we do not need a detailed history in order to compute the average; a running total is sufficient. Please note that once you close the game, you will lose your previous total. In real life, we persist the totals in the list itself. We also keep a list of actual votes, but its purpose is to prevent double votes: if a person has already voted, his user id is already on the list, and our program will check for it and bar the person from voting again. This is coded in an event receiver, which is a SharePoint server piece of code. I will show you how to do this part in a subsequent blog. Again, go to the page and look at the code. The gist of it is here; avg, votes, and stars are global variables that I defined before.

        function sendRate(sel) {
            // I hate long lines so I created pieces of the message in their own vars
            var s1 = "Your Rating Was: ";
            var s2 = ".. ";
            var s3 = "\nVotes = ";
            var s4 = "\nTotal Stars = ";
            var s5 = "\nAverage = ";
            var s = parseInt(sel.id.replace("_", '')); // Get the selected star number
            votes = parseInt(votes) + 1;
            stars = parseInt(stars) + s;
            avg = parseFloat(stars) / parseFloat(votes);
            alert(s1 + sel.id + s2 + sel.title + s3 + votes + s4 + stars + s5 + avg);
        }

    Click on the link to play and examine "Ranking with Stats". That's all folks!

    Read the article

  • Unity desktop "smears" (doesn't refresh) and shows no wallpaper

    - by Cedric Reichenbach
    For a couple of days now, my Unity desktop background has smeared everything (no refresh), just like what old Windows versions were famous for. Of course, I tried rebooting a couple of times. Also, I switched the graphics driver, and I tried changing the wallpaper and theme, but none of that solved the problem. What could be causing this, and where can I search for its source?

    Information update: I'm using Ubuntu 13.04 (not updated to 13.10 yet). The following commands were all run from Cinnamon (on the same Ubuntu installation).

        sudo lsb_release -a:
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 13.04
        Release:        13.04
        Codename:       raring

        sudo uname -a:
        Linux cedric-MacBookPro 3.8.0-32-generic #47-Ubuntu SMP Tue Oct 1 22:35:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

        sudo dpkg -l | grep xserver-xorg-video:
        ii xserver-xorg-video-all 1:7.7+1ubuntu4 amd64 X.Org X server -- output driver metapackage
        ii xserver-xorg-video-ati 1:7.1.0-0ubuntu2 amd64 X.Org X server -- AMD/ATI display driver wrapper
        ii xserver-xorg-video-cirrus 1:1.5.2-0ubuntu1 amd64 X.Org X server -- Cirrus display driver
        ii xserver-xorg-video-fbdev 1:0.4.3-0ubuntu1 amd64 X.Org X server -- fbdev display driver
        ii xserver-xorg-video-intel 2:2.21.6-0ubuntu4.3 amd64 X.Org X server -- Intel i8xx, i9xx display driver
        ii xserver-xorg-video-mach64 6.9.3-0ubuntu1 amd64 X.Org X server -- ATI Mach64 display driver
        ii xserver-xorg-video-mga 1:1.6.2-0ubuntu1 amd64 X.Org X server -- MGA display driver
        ii xserver-xorg-video-modesetting 0.7.0-0ubuntu2 amd64 X.Org X server -- Generic modesetting driver
        ii xserver-xorg-video-neomagic 1:1.2.7-0ubuntu1 amd64 X.Org X server -- Neomagic display driver
        ii xserver-xorg-video-nouveau 1:1.0.7-0ubuntu1 amd64 X.Org X server -- Nouveau display driver
        ii xserver-xorg-video-openchrome 1:0.3.1-0ubuntu1.13.04.1 amd64 X.Org X server -- VIA display driver
        ii xserver-xorg-video-qxl 0.1.0-0ubuntu3 amd64 X.Org X server -- QXL display driver
        ii xserver-xorg-video-r128 6.9.1-0ubuntu1 amd64 X.Org X server -- ATI r128 display driver
        ii xserver-xorg-video-radeon 1:7.1.0-0ubuntu2 amd64 X.Org X server -- AMD/ATI Radeon display driver
        ii xserver-xorg-video-s3 1:0.6.5-0ubuntu3 amd64 X.Org X server -- legacy S3 display driver
        ii xserver-xorg-video-savage 1:2.3.6-0ubuntu1 amd64 X.Org X server -- Savage display driver
        ii xserver-xorg-video-siliconmotion 1:1.7.7-0ubuntu1 amd64 X.Org X server -- SiliconMotion display driver
        ii xserver-xorg-video-sis 1:0.10.7-0ubuntu1 amd64 X.Org X server -- SiS display driver
        ii xserver-xorg-video-sisusb 1:0.9.6-0ubuntu1 amd64 X.Org X server -- SiS USB display driver
        ii xserver-xorg-video-tdfx 1:1.4.5-0ubuntu1 amd64 X.Org X server -- tdfx display driver
        ii xserver-xorg-video-trident 1:1.3.6-0ubuntu2 amd64 X.Org X server -- Trident display driver
        ii xserver-xorg-video-vesa 1:2.3.2-0ubuntu1 amd64 X.Org X server -- VESA display driver
        ii xserver-xorg-video-vmware 1:12.0.2+git.e5ac80d8-0ubuntu1 amd64 X.Org X server -- VMware display driver

        sudo lspci | grep VGA:
        01:00.0 VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 330M] (rev a2)

    Read the article

  • Data Quality Services Performance Best Practices Guide

    This guide details high-level performance numbers expected and a set of best practices on getting optimal performance when using Data Quality Services (DQS) in SQL Server 2012 with Cumulative Update 1.

    Read the article

  • Open Source developers: Need your help to answer an 8-minute academic survey

    - by Yi Wang
    I am a researcher at the University of California, Irvine (UCI). I am conducting research on collaboration tool usage in Open Source development. Your answers will help us develop new, powerful tools in the future. The link to the survey is: http://edu.surveygizmo.com/s3/1035227/Attitude-and-Usage-of-Collaboration-Tools-in-Open-Source-Software-Development The survey only takes 5-8 minutes. Thanks a lot for your help!

    Read the article

  • Stairway to XML: Level 1 - Introduction to XML

    In this level, Rob Sheldon explains what XML is, and describes the components of an XML document, Elements and Attributes. He explains the basics of tags, entity references, enclosed text, comments and declarations.

    Read the article

  • Oracle Open World Tokyo

    - by user762552
    A Japanese-language announcement for Oracle Open World Tokyo, highlighting a Database Firewall session by Oracle VP Vipin Samar (session S3-01, 4/6, 11:50-12:35) and session S1-12 (4/4, 13:00-13:45).

    Read the article

  • Getting Started with the New Column Store Index of SQL Server 2012

    Column Store Index, a new feature in SQL Server 2012, improves the performance of data warehouse queries severalfold. Arshad Ali shows you how to create a column store index, and how to use an index query hint to include or exclude it.

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of 3 different types of 'servers'. Application server: should be able to run CentOS (or another light Linux distro), Apache, PHP, and GD (so it does rely on its CPU); should be extremely reliable and fast. Database server: should be able to run MySQL and... well, do nothing else :P. Should be extremely reliable and fast. Storage server: should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.) and nothing else. Should be extremely reliable and fast. So technically, by transferring all static data to 2 different servers/services, the application server can totally focus on the webpages. My questions: What services do you recommend? Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)? How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)? What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay for both "including CDN" and "excluding CDN" when you decide to enable the delivery network, or do you only pay for "including CDN"? Should I use my own nameserver too, or can I use my domain hoster's nameservers? What are the minimum software specifications of a nameserver? Can I write some software myself? Does anyone have a good protocol description? I hope you can answer my questions. Answer so far: I shouldn't write my own nameserver software. Instead, I should use something like BIND (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

    Read the article

  • Suspend fails and I know the module causing it. What can I do?

    - by ch0wn
    My suspend has not worked correctly since I installed a USB 3 extension card in my PC. Instead of going to S3, the computer just wakes up instantly. dmesg gave the hint "usb_dev_suspend+0x0/0x20 returns -2", so I rmmod'ed the "xhci_hcd" module, which did the trick. Is there a good way for me to work around this? The builds from the Kernel Mainline PPA did not help, unfortunately, and blacklisting the module is not my favorite option. Where is the best place for me to report this issue?
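    One hedged workaround that avoids blacklisting: a pm-utils sleep hook (the file name below is arbitrary) that unloads xhci_hcd only for the suspend/resume cycle, so USB 3 keeps working the rest of the time:

        #!/bin/sh
        # /etc/pm/sleep.d/20_xhci  (make executable with chmod +x)
        case "$1" in
            suspend|hibernate)
                modprobe -r xhci_hcd    # unload before going to S3
                ;;
            resume|thaw)
                modprobe xhci_hcd       # reload on wake-up
                ;;
        esac

    As for reporting, suspend regressions like this are usually filed against the kernel package on Launchpad (e.g. with "ubuntu-bug linux") so they can be forwarded upstream.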

    Read the article

  • Stairway to SQL PowerShell Level 4: Objects in SQL PowerShell

    Thus far, we have learned about installation and setup of the PowerShell environment, and you should now have a foundation in SQL Server PowerShell. We are now ready to learn about objects in SQL PowerShell.

    Read the article

  • Hassle-free Backup with Deja Dup

    Linux Pro Magazine: "The Déjà Dup backup utility may not be the most powerful or flexible backup tool out there, but it does have its advantages. Its straightforward interface makes it dead easy to configure backups, while the support for the Amazon S3 storage back-end is a boon for users looking for unlimited backup storage on the cheap."

    Read the article

  • REPLACE Multiple Spaces with One

    Replacing multiple spaces with a single space is an old problem that people use loops, functions, and/or Tally tables for. Here's a set-based method from MVP Jeff Moden.
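    For context, the set-based trick usually attributed to this article is three nested REPLACEs with a marker character that cannot occur in the data (CHAR(7) below is a common choice; this is a sketch, so check the article for Moden's exact formulation):

        DECLARE @str varchar(8000) = 'a     b  c';

        -- 1) turn every space pair into space+marker
        -- 2) delete every marker+space (collapses runs of any length)
        -- 3) delete any leftover markers
        SELECT REPLACE(REPLACE(REPLACE(
                   @str,
                   '  ', ' ' + CHAR(7)),   -- step 1
                   CHAR(7) + ' ', ''),     -- step 2
                   CHAR(7), '');           -- step 3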

    Read the article

  • Objects, Relationships, Systems, And Processes

    What is the difference between an expert DBA and a Master DBA? This piece from William Talada talks about Objects, Relationships, Systems, and Processes and how they may relate to your job as a DBA.

    Read the article
