Search Results

Search found 6053 results on 243 pages for 'usage'.

  • Sony Vaio CPU fan runs at full speed after installing Windows 8.1 preview

    - by greg27
    I installed the Windows 8.1 preview on my Sony Vaio E Series 14P (http://www.sony.com.au/product/sve14a27cg) and now my CPU fan is running at full speed all the time. When I first start my computer it runs normally, but a few seconds after reaching the Windows login screen the fan speed suddenly jumps to max. It's pretty loud, so I'm hoping there's a solution! I've tried updating my BIOS to the latest version, but that didn't help. I've tried programs like SpeedFan, but it wasn't able to detect my CPU fan at all. My CPU usage and temperature are normal and don't warrant the max fan speed. Someone else has reported the issue here: http://answers.microsoft.com/en-us/windows/forum/windows8_1_pr-hardware/sony-vaio-e-series-sve14a35cxh100-fan-throttle/45ec823a-2bc8-43ea-8557-b1a5dd0a6870 Is there any way to fix this without having to refresh/reinstall Windows?

  • Synology DS210j & online backup: what do people recommend?

    - by Dean
    I've just purchased a Synology DS210j for my home network and would like to back up this NAS online. I noticed that DiskStation Manager v2.3 provides various options, including Amazon S3 and rsync:
    - Does anybody have real usage-versus-cost statistics for Amazon's S3 service?
    - How is sensitive data protected on Amazon S3?
    - Are there any rsync online backup options? If so, what do people recommend? (A sketch of the rsync approach follows below.)
    UPDATE: I am still unable to find any decent answers to the above questions; can anybody help me out?
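    For the rsync option, a minimal sketch of what an offsite backup invocation could look like, assuming the remote end runs an rsync daemon; the hostname, module name and share path are placeholders, not Synology specifics:

        # push the NAS share to a remote rsync module, compressed, mirroring deletions
        rsync -az --delete /volume1/share/ backupuser@backup.example.com::nasbackup/

        # or over SSH, if the provider only offers shell access
        rsync -az --delete -e ssh /volume1/share/ backupuser@backup.example.com:/backups/nas/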

  • Lots of TIME_WAIT connections in netstat (Windows Server 2008)

    - by Rhys Causey
    I'm having some issues on a Windows 2008 server with some network connections not going through. For instance, in a web application on the server, we need to open a socket connection to another server, and this sometimes fails with the following message:

        Only one usage of each socket address (protocol/network address/port) is normally permitted

    I looked up the error, which led me to this page: http://msdn.microsoft.com/en-us/library/aa560610(v=bts.20).aspx, which indicates that it might be TCP/IP port exhaustion. When I run netstat -n, I get tons of TIME_WAIT connections on port 80. Does anyone have any idea what could be causing this?
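    If it is port exhaustion, the MSDN page linked above points at two tunables: the size of the ephemeral port range and how long closed sockets linger in TIME_WAIT. A sketch of the usual adjustments on Server 2008 (the values are common choices, not requirements, and the registry change needs a reboot):

        rem widen the ephemeral port range
        netsh int ipv4 set dynamicport tcp start=10000 num=55000

        rem shorten how long closed sockets sit in TIME_WAIT (seconds)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30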

  • API Management Solutions

    - by Mike
    I'm currently building an API and am looking for a tool that lets me monitor (in a GUI) and rate-limit usage. I've come across a few enterprise solutions, including:
    - http://apigee.com/
    - http://mashery.com/
    - http://www.layer7tech.com/
    - http://www.3scale.net/
    The Apigee enterprise plan is exactly what I'm looking for, but plans start at $3000/month, which is out of my price range. The other solutions are either too expensive or do not provide what I'm looking for. This led me to look at some open-source options, including:
    - http://apiaxle.com/
    - https://code.google.com/p/varnish-apikey/wiki/UsageManual
    Varnish seems like a fairly complete solution; however, I would need to build a GUI to visualise the data. My final option would be to build a solution from scratch using EventMachine and Ruby. Any advice?
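    Not one of the products above, but if the API already sits behind a reverse proxy, the rate-limiting half can be had almost for free, leaving only the GUI to build. A minimal nginx sketch (zone size, rate and the upstream address are illustrative):

        http {
            # one bucket per client IP, 10 requests/second sustained, bursts of 20
            limit_req_zone $binary_remote_addr zone=apilimit:10m rate=10r/s;

            server {
                listen 80;
                location /api/ {
                    limit_req zone=apilimit burst=20 nodelay;
                    proxy_pass http://127.0.0.1:8080;   # placeholder upstream
                }
            }
        }

    Monitoring would still have to come from elsewhere, e.g. by graphing the access log.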

  • lsass.exe memory leak on Windows Server 2003

    - by thelsdj
    In the past month or so I noticed that lsass.exe has started to leak memory, reaching 500 MB+ of RAM within a week of a reboot. Before this I had never noticed it using any significant amount of memory compared to other processes on the system. This is happening on 2 identical servers, neither of which has anything to do with Active Directory. Maybe a recent Windows Update has caused this? Any thoughts on things to check? As a side question, is there some way to recycle the memory usage of lsass.exe without rebooting? Edit: Here is what I'm seeing in Process Monitor: there are thousands of registry open/query/close operations a minute from lsass.exe. How can I track down what is triggering these?

  • How do I enable JPEG Support for PHP?

    - by ngache
    My Configure Command doesn't say anything about JPEG, nor GIF/PNG, but I can see GIF/PNG support in the output of phpinfo(). I built PHP with --with-gd, but only GIF Support and PNG Support appear in the output of phpinfo(). How do I enable JPEG Support?
    UPDATE: I got this problem when compiling:

        Sorry, I cannot run apxs. Possible reasons follow:
        1. Perl is not installed
        2. apxs was not found. Try to pass the path using --with-apxs2=/path/to/apxs
        3. Apache was not built using --enable-so (the apxs usage page is displayed)
        The output of /usr/local/apache2/bin/apxs follows:
        cannot open /usr/local/apache2/build/config_vars.mk: No such file or directory at /usr/local/apache2/bin/apxs line 218.

    What should I do now?
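    For the JPEG half of the question: with PHP 5.x's bundled GD, JPEG support is opt-in at configure time and needs the libjpeg development headers (e.g. libjpeg-devel) installed first. A sketch, with illustrative paths:

        ./configure --with-apxs2=/usr/local/apache2/bin/apxs \
                    --with-gd \
                    --with-jpeg-dir=/usr \
                    --with-png-dir=/usr \
                    --with-zlib

    The apxs failure is a separate problem: a missing config_vars.mk usually means the Apache build tree is incomplete, so apxs itself is broken rather than PHP.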

  • Do I have an efficient APC Setup?

    - by Gaia
    Regarding my particular APC setup:
    - APC 3.1.9
    - PHP 5.3.3 via fCGI
    - Apache 2.2.15
    - CentOS 6.3
    1) Is it set up properly to minimize overall memory usage? /etc/php.d/apc.ini has only one line: "apc.cache_by_default=0". Each domain for which I want to turn on APC has all the appropriate APC configuration in its own php.ini.
    2) I would like to keep only one copy of apc.php that can be accessed via any of the vhosts on the server. What's the recommended way to do this? It seems that apc.php doesn't play well with the Apache Alias directive. apc.php exists on only one of the vhosts and is set to 644, and it doesn't seem to matter who owns it: if I try to access it via an alias I get only gibberish.
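    On 2), an Alias only maps the URL path; it does not by itself guarantee that the aliased file is routed to the PHP/fCGI handler, which would explain the gibberish (the file being served raw). A sketch of a shared alias in Apache 2.2 syntax - the handler wiring is deliberately left as a placeholder, since it must match whatever the vhosts already use for .php:

        Alias /apc.php /usr/local/share/apc/apc.php
        <Directory /usr/local/share/apc>
            Order allow,deny
            Allow from all
            # make sure .php here hits the same fCGI handler as the vhosts,
            # e.g. the existing AddHandler/Action pair or SetHandler block
        </Directory>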

  • What git branching models actually work - the final question

    - by UncleCJ
    In our company we have successfully deployed git, and we are currently using a simple trunk/release/hotfixes branching model. However, this has its problems, and I have some key issues of confusion in the community which would be awesome to have answered here. Maybe my hopes for an Alexander stroke are too great; quite possibly I'll decompose this question into more manageable issues, but here's my first shot.

    Workflows / branching models - below are the three main descriptions of this I have seen, but they partially contradict each other or don't go far enough to sort out the subsequent issues we've run into (as described below). Thus our team so far defaults to not-so-great solutions. Are you doing something better?
    - gitworkflows(7) manual page
    - A successful Git branching model (nvie)
    - A Git Workflow for Agile Teams (reinh)

    Merging vs. rebasing (tangled vs. sequential history) - the bids on this are as confusing as it gets. Should one pull --rebase, or wait with merging back to the mainline until the task is finished? Personally I lean towards merging, since this preserves a visual illustration of on which base a task was started and finished, and I even prefer merge --no-ff for this purpose. It has other drawbacks, however. Also, many haven't realized the useful property that merging isn't commutative (merging a topic branch into master does not mean merging master into the topic branch).

    I am looking for a natural workflow - sometimes mistakes happen because our procedures don't capture a specific situation with simple rules. For example, a fix needed for earlier releases should of course be based sufficiently downstream to be possible to merge upstream into all branches necessary (is the usage of these terms clear enough?). However, it happens that a fix makes it into master before the developer realizes it should have been placed further downstream, and if that is already pushed (even worse, merged or something based on it) then the option remaining is cherry-picking, with its associated perils... What simple rules like these do you use? (A sketch of this downstream-first flow follows the link list below.) Also included in this is the awkwardness of one topic branch necessarily excluding other topic branches (assuming they are branched from a common baseline). Developers don't want to finish a feature only to start another one feeling like the code they just wrote is not there anymore.

    How to avoid creating merge conflicts (due to cherry-pick)? What seems like a sure way to create a merge conflict is to cherry-pick between branches; can they never be merged again? Would applying the same commit in revert (how to do this?) in either branch possibly solve this situation? This is one reason I do not dare to push for a largely merge-based workflow.

    How to decompose into topical branches? - We realize that it would be awesome to assemble a finished integration from topic branches, but often work by our developers is not clearly defined (sometimes as simple as "poking around"), and if some code has already gone into a "misc" topic, it cannot be taken out of there again, according to the question above?

    How do you work with defining/approving/graduating/releasing your topic branches? Proper procedures like code review and graduating would of course be lovely, but we simply cannot keep things untangled enough to manage this - any suggestions? Integration branches - an illustration, please?

    Vote and comment as much as you'd like; I'll try to keep the issue page clear and informative enough. Thanks!

    Below is a list of related topics on Stack Overflow I have checked out:
    - What are some good strategies to allow deployed applications to be hotfixable?
    - Workflow description for git usage for in-house development
    - Git workflow for corporate Linux kernel development
    - How do you maintain development code and production code? (thanks for this PDF!)
    - git releases management
    - Git Cherry-pick vs Merge Workflow
    - How to cherry-pick multiple commits
    - How do you merge selective files with git-merge?
    - How to cherry pick a range of commits and merge into another branch
    - ReinH Git Workflow
    - git workflow for making modifications you'll never push back to origin
    - Cherry-pick a merge
    - Proper Git workflow for combined OS and Private code?
    - Maintaining Project with Git
    - Why can't Git merge file changes with a modified parent/master.
    - Git branching / rebasing good practices
    - When will "git pull --rebase" get me in to trouble?
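    A sketch of the downstream-first rule mentioned above, under the convention that fixes are born on the oldest release line that needs them and merged upward (branch names are illustrative):

        # base the fix on the oldest affected release line
        git checkout -b fix/issue-123 release-1.0

        # ... commit the fix ...

        # merge upstream into every line that needs it; never cherry-pick upward
        git checkout release-1.1 && git merge --no-ff fix/issue-123
        git checkout master      && git merge --no-ff fix/issue-123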

  • About HDD enclosure

    - by kmitnick
    Hey guys, how are you doing? I have this 3.5" IDE enclosure, and it works great - I love the idea of enclosures (apart from the external power feed; by the way, couldn't I just add a rechargeable battery to supply power when I can't find a wall outlet?). Anyway, my question: when I finish using the enclosure, I safely remove it under Windows or umount it under Linux, and after that I'm confused about whether to turn it off or not. When I turn it off, the HDD suddenly stops spinning, as if the power failed - not like when it was an internal drive and the PC shut down normally. So is it OK to turn it off the way I've just described? regards, ~Abed

  • New host, high load?

    - by dotancohen
    A few minutes ago I signed up with a new webhost. I have yet to move my sites over. Upon initial SSH connection I checked the load and memory usage; they do seem rather higher than I would like:

        # uptime
         12:06:51 up 71 days, 23:23,  1 user,  load average: 9.02, 9.49, 9.45
        # free
                     total       used       free     shared    buffers     cached
        Mem:      33014800   31927192    1087608          0    2384812   17729816
        -/+ buffers/cache:   11812564   21202236
        Swap:     16787916       8584   16779332

    Is that a bit too packed? I'm only paying about $5 USD per month, so I don't expect <0.1 loads, but ~10 is worrisome. Is it not? Also, there is no /etc/issue file, so I tried other methods to guess the OS:

        # uname -a
        Linux box358.bluehost.com 2.6.32-20120131.55.1.bh6.x86_64 #1 SMP Tue Jan 31 15:43:27 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
        # which yum
        /usr/bin/yum
        # which apt-get
        #

    That looks like CentOS / RHEL 6.2, possibly?
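    One thing the free output above already answers: most of the "used" memory is reclaimable cache. A quick sanity check on the numbers (shell arithmetic over the KB values shown):

        # truly used, excluding buffers and cache - matches the -/+ buffers/cache row
        echo $(( 31927192 - 2384812 - 17729816 ))   # 11812564 KB, about 11.3 GB of 31.5 GB

        # effectively available to new processes
        echo $(( 1087608 + 2384812 + 17729816 ))    # 21202236 KB, about 20 GB

    Whether the ~9 load average matters depends on how many cores the box has, which is worth checking too:

        grep -c ^processor /proc/cpuinfo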

  • Suggest a good php-fpm configuration

    - by Werulz
    I am configuring a server for a friend. The server has the following specs:
    - 8 GB RAM
    - quad-core processor
    - 1 TB HDD
    - 100 Mbps port
    However, all PHP files are loading very slowly. I did a speed test and the server takes 16 seconds to load the FIRST byte. I strongly believe it's my php-fpm configuration. The server uses nginx and PHP only, no MySQL etc. My current php-fpm configuration:

        pm.max_children = 50
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35

    Server load and RAM usage are perfectly fine. Please suggest a good configuration for this server.
    UPDATE: This configuration works fine:

        pm.max_children = 20
        pm.start_servers = 7
        pm.min_spare_servers = 5
        pm.max_spare_servers = 10
        pm.max_requests = 100

    The problem with first-byte load time is solved. However, after 15-20 hours the first-byte load time increases gradually, and I have to reload php-fpm to get a small load time again. Based on the configuration above, what should I modify so that the first-byte load time stays small and I don't have to restart it? :P
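    A gradual slowdown over many hours is the classic signature of workers accumulating state (leaked memory, stuck backend connections); pm.max_requests = 100 already recycles each worker aggressively, so the next step is finding out what actually gets slow. A sketch of the same pool with php-fpm's slow-request logging switched on (path and threshold are illustrative):

        pm = dynamic
        pm.max_children = 20
        pm.start_servers = 7
        pm.min_spare_servers = 5
        pm.max_spare_servers = 10
        pm.max_requests = 100
        ; log a trace for any request taking longer than 5s,
        ; to see what the workers are stuck in before the next reload
        request_slowlog_timeout = 5s
        slowlog = /var/log/php-fpm/slow.log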

  • Sleep/Suspend and WOL on FreeNAS

    - by Timothy R. Butler
    I am trying to figure out how to get FreeNAS 8 to sleep when inactive and, ideally, wake on LAN activity (or, less ideally, wake on a WOL magic packet). However, as I've tried to search for information on how to do this, almost all discussions seem to be centered on FreeNAS 7. Also, the tools included in FreeBSD to do this seem to be missing (e.g. acpiconf). Is there a way to get FreeNAS 8 to sleep and wake so that I don't have to leave the server running all the time? Given its usage level, it seems a waste to have the server running constantly.

  • What advantage do I have if I use 64-bit libraries?

    - by RadiantHex
    Hi folks, I see many people go crazy about 64-bit libraries, preferring them in general to their 32-bit counterparts. I realise a lot of the talk gets lost in translation, and that 64-bit is often over-valued. The setting is libraries called from a web application; I'm aware that a new instance of the web app is spawned for each hit. Therefore I'm thinking that 64-bit is not necessary, as the instances in no way surpass 2 GB of RAM usage. Help would be much appreciated! :)

  • Applying memory limits to screen sessions

    - by CollinJSimpson
    You can set memory usage limits for standard Linux applications in /etc/security/limits.conf. Unfortunately, I previously thought these limits applied only to user applications and not to system services, which would mean users could bypass their limits by launching applications through a system service such as screen. I'd like to know if it's possible to let users use screen but still enforce application limits. Jeff had the great idea of using nohup, which obeys user limits (wonderful!), but I would still like to know if it's possible to mimic the useful windowing features of screen.
    EDIT: It seems my screen sessions are now obeying my hard address-space limits defined in /etc/security/limits.conf; I must have been making some mistake. I recently installed cpulimit, but I doubt that's the solution. Thanks for the nohup tip, Jeff! It's very useful. Link to CPU Limit package
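    For reference, a sketch of address-space limits as they would appear in /etc/security/limits.conf; usernames and values are illustrative, and the "as" item is measured in KB:

        # <domain>   <type>  <item>  <value>
        alice        hard    as      1048576    # cap virtual address space at 1 GB
        @students    hard    as      2097152    # per-group cap of 2 GB
        alice        hard    nproc   100        # cap the number of processes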

  • Postgres 9.0 locking up, 100% CPU

    - by Jake
    We are having a problem where our Postgres 9.0 server occasionally locks up and kills our webapp. Restarting Postgres fixes the problem. Here's what I've been able to observe:
    - First, usage of one CPU jumps to 100% for a few minutes
    - Disk operations drop to ~0 during this time
    - Database operations drop to 0 (blocks and tuples per sec)
    - Logs show during this time:
        WARNING: worker took too long to start; cancelled
        WARNING: worker took too long to start; cancelled
    - No queries in logs (only those over 200ms are logged)
    - No unusually long-running queries logged before or during
    - Then the second CPU jumps to 100%
    - The number of postgres processes jumps from the usual 8-10 to ~20
    - Matched by a spike in Postgres blocks per second (about twice normal)
    - Logs show: LOG: could not accept SSL connection: EOF detected
    - Queries are running but slow
    - Restarting postgres returns everything to normal
    Setup:
    - Server: Amazon EC2 Large
    - Ubuntu 10.04.2 LTS
    - Postgres 9.0.3
    - Dedicated DB server
    Does anyone have any idea what's causing this? Or any suggestions about what else I should be checking out?
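    When it next happens, a snapshot of what every backend is doing would help separate stuck queries from a connection storm (the "worker took too long to start" warnings appear to come from the autovacuum launcher, which is worth a look as well). A sketch against the 9.0 catalog - note these column names (procpid, current_query) changed in later releases:

        SELECT procpid, usename, waiting, query_start, current_query
        FROM pg_stat_activity
        ORDER BY query_start;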

  • Tunnel only one program (UDP & TCP) through another server

    - by user136036
    I have a Windows machine at home and a server with Debian installed. I want to tunnel the UDP traffic from one (and only this) program on my Windows machine through my server. For TCP traffic this was easy, using PuTTY as a SOCKS5 proxy and connecting via SSH to my server - but this does not seem to work for UDP. Then I set up Dante as a SOCKS5 proxy, but it seems to create a new instance/thread per connection, which leads to huge RAM usage on my server, so this was no option either. Most people recommend OpenVPN, so my question: can I use OpenVPN to tunnel just this one program through my server? Is there a way to maybe create a local SOCKS5 proxy on my Windows machine, set it as the proxy in my program, and have only this proxy use OpenVPN? Thank you for your ideas.

  • TrueCrypt: Open volume without mounting

    - by Totomobile
    I have a corrupt TrueCrypt volume. When I try to mount it, the password is fine, but I get an error: "hdiutil attach failed - no mountable file systems". I just need to open it without TrueCrypt trying to mount it, so I can use that partition in a data recovery program. Also, it's just an image-file volume. I have read the documentation here: http://www.truecrypt.org/docs/?s=command-line-usage but I can't figure out which switch I need to use to only open an image and not mount it. I am using the Mac version, and I have set up an alias for the TrueCrypt shell command, so I can just type: truecrypt -t -v - ??
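    If memory serves, the relevant switch is --filesystem=none, which maps the decrypted volume as a raw device without attempting to mount a filesystem - treat this as an assumption to verify against truecrypt --text --help on your build:

        # attach without mounting; point the recovery program at the mapped device
        truecrypt --text --filesystem=none /path/to/image.tc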

  • Can't bind spawn-fcgi to address

    - by Xeoncross
    Following some nice instructions I am almost through setting PHP up to run on nginx. However, every time I try to start spawn-fcgi I get an error message:

        demo@desktop:/usr/bin$ sudo /etc/init.d/php-fastcgi start
        spawn-fcgi: bind failed: Cannot assign requested address

    My /etc/init.d/php-fastcgi startup script is:

        #!/bin/bash
        PHP_SCRIPT=/usr/bin/php-fastcgi
        FASTCGI_USER=demo
        RETVAL=0
        case "$1" in
            start)
                su - $FASTCGI_USER -c $PHP_SCRIPT
                RETVAL=$?
                ;;
            stop)
                killall -9 php5-cgi
                RETVAL=$?
                ;;
            restart)
                killall -9 php5-cgi
                su - $FASTCGI_USER -c $PHP_SCRIPT
                RETVAL=$?
                ;;
            *)
                echo "Usage: php-fastcgi {start|stop|restart}"
                exit 1
                ;;
        esac
        exit $RETVAL

    And /usr/bin/php-fastcgi, which it invokes:

        #!/bin/sh
        /usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -C 6 -u demo -f /usr/bin/php5-cgi

    One thing to note is that I am running the PHP CGI as the user "demo", which is my account.
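    "Cannot assign requested address" from bind() (EADDRNOTAVAIL) usually means the address itself is unusable on this host, rather than merely busy. Two quick checks with standard tools:

        # is something already listening on 9000? (a busy port normally gives
        # "Address already in use" instead, but worth ruling out)
        netstat -lnp | grep :9000

        # is loopback actually up and carrying 127.0.0.1?
        ip addr show lo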

  • How to interpret output from Linux 'top' command?

    - by Ali
    Following a discussion made HERE about how PHP-FPM consumes memory, I have just found a problem in reading the memory in the top command. Here is a screenshot of my top just after restarting PHP-FPM. Everything is normal: about 20 PHP-FPM processes, each consuming 5.5 MB of memory (0.3% of the total). Here is the aged server right before a restart of PHP-FPM (one day after the previous restart). Here we still have about 25 PHP-FPM processes, with double the memory usage (10 MB, indicating 0.5% of the total). Thus, the total memory used should be 600-700 MB. Then why has 1.6 GB of memory been used?
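    Two effects usually account for a gap like this; a sketch of how to check each (nothing here depends on the screenshots, which are not reproduced):

        # 1. top's RES and %MEM count shared pages once per worker, so summing
        #    25 workers overstates what they collectively consume
        ps -C php-fpm -o pid,rss,cmd

        # 2. "used" in the summary line includes page cache; the
        #    -/+ buffers/cache row is what processes actually hold
        free -m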

  • Real-time graphing in Java

    - by thodinc
    I have an application which updates a variable about 5 to 50 times a second, and I am looking for some way of drawing a continuous XY plot of this change in real time. Though JFreeChart is not recommended for such a high update rate, many users still say that it works for them. I've tried using this demo and modified it to display a random variable, but it seems to use 100% CPU all the time. Even if I ignore that, I do not want to be restricted to JFreeChart's ui class for constructing forms (though I'm not sure what its capabilities are exactly). Would it be possible to integrate it with Java's "forms" and drop-down menus (as are available in VB)? Otherwise, are there any alternatives I could look into?
    EDIT: I'm new to Swing, so I've put together some code just to test the functionality of JFreeChart with it (while avoiding JFreeChart's ApplicationFrame class, since I'm not sure how that will work with Swing's combo boxes and buttons). Right now the graph is being updated immediately and CPU usage is high. Would it be possible to buffer the value with new Millisecond() and update it maybe twice a second? Also, can I add other components to the rest of the JFrame without disrupting JFreeChart? How would I do that? frame.getContentPane().add(new Button("Click")) seems to overwrite the graph.

        package graphtest;

        import java.util.Random;
        import javax.swing.JFrame;
        import org.jfree.chart.ChartFactory;
        import org.jfree.chart.ChartPanel;
        import org.jfree.chart.JFreeChart;
        import org.jfree.chart.axis.ValueAxis;
        import org.jfree.chart.plot.XYPlot;
        import org.jfree.data.time.Millisecond;
        import org.jfree.data.time.TimeSeries;
        import org.jfree.data.time.TimeSeriesCollection;

        public class Main {
            static TimeSeries ts = new TimeSeries("data", Millisecond.class);

            public static void main(String[] args) throws InterruptedException {
                gen myGen = new gen();
                new Thread(myGen).start();

                TimeSeriesCollection dataset = new TimeSeriesCollection(ts);
                JFreeChart chart = ChartFactory.createTimeSeriesChart(
                    "GraphTest", "Time", "Value", dataset, true, true, false);
                final XYPlot plot = chart.getXYPlot();
                ValueAxis axis = plot.getDomainAxis();
                axis.setAutoRange(true);
                axis.setFixedAutoRange(60000.0);

                JFrame frame = new JFrame("GraphTest");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                ChartPanel label = new ChartPanel(chart);
                frame.getContentPane().add(label);
                // Suppose I add combo boxes and buttons here later
                frame.pack();
                frame.setVisible(true);
            }

            static class gen implements Runnable {
                private Random randGen = new Random();
                public void run() {
                    while (true) {
                        int num = randGen.nextInt(1000);
                        System.out.println(num);
                        ts.addOrUpdate(new Millisecond(), num);
                        try {
                            Thread.sleep(20);
                        } catch (InterruptedException ex) {
                            System.out.println(ex);
                        }
                    }
                }
            }
        }
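    On the last question: a JFrame's content pane uses BorderLayout by default, and add() without a constraint targets the CENTER slot, so the button and the chart fight over the same spot. A sketch of the fix plus a throttled update, rearranging the code above rather than quoting any official JFreeChart pattern (latestValue is a hypothetical volatile field that gen would write instead of touching ts directly):

        // give the chart the center and the controls their own edge
        frame.getContentPane().add(label, java.awt.BorderLayout.CENTER);
        frame.getContentPane().add(new javax.swing.JButton("Click"), java.awt.BorderLayout.SOUTH);

        // flush the most recent sample to the series twice a second,
        // on the Swing event thread where the chart should be touched
        javax.swing.Timer flush = new javax.swing.Timer(500, new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent e) {
                ts.addOrUpdate(new Millisecond(), latestValue);
            }
        });
        flush.start();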

  • IDN and HTTP_HOST

    - by Sandman
    So, when I want to link my users to a specific page I always use (in PHP) "http://" . $_SERVER["HTTP_HOST"] . "/page.php", to be sure that the link points to the host they're currently surfing (and not one of the server aliases). But with IDN names, HTTP_HOST is set to "xn--hemmabst-5za.net" (for example) - which of course works, but doesn't look very nice. Is there a way to have HTTP_HOST set to the correct IDN name in these cases (in this case, "hemmabäst.net")? I'd rather do it in Apache before it gets to PHP, because otherwise I'd have to replace all my usages of $_SERVER["HTTP_HOST"]. Any ideas?
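    Apache passes the Host header through exactly as the browser sent it, and browsers send the ACE (punycode) form, so the pretty name is usually recovered on the PHP side instead. If the intl extension is available (PHP 5.3+), a sketch:

        // convert the ACE host back to its Unicode form for display;
        // fall back to the raw value if conversion fails
        $host = idn_to_utf8($_SERVER['HTTP_HOST']);
        if ($host === false) {
            $host = $_SERVER['HTTP_HOST'];
        }
        echo 'http://' . $host . '/page.php';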

  • Other solution instead of Cursoring

    - by dewacorp.alliances
    Hi there. I have the following pivoted result set, and I want to take it a bit further:

        NTRITCode  NTRIId  Parameter  Usage  Rate
        CURRENT    4       Peak       100    0.1
        CURRENT    4       NonPeak    200    0.2
        PROPOSED   6       Peak       100    0.2
        PROPOSED   6       NonPeak    200    0.3
        PROPOSED   8       Peak       200    0.3
        PROPOSED   8       NonPeak    200    0.5

    As you can see, there are two sets of PROPOSED rows (NTRIId 6 and 8). I want to somehow display it as below, so that each set pairs the CURRENT rows with one PROPOSED set:

        Sequence  NTRITCode  NTRIId  Parameter  Usage  Rate
        1         CURRENT    4       Peak       100    0.1
        1         CURRENT    4       NonPeak    200    0.2
        1         PROPOSED   6       Peak       100    0.2
        1         PROPOSED   6       NonPeak    200    0.3
        2         CURRENT    4       Peak       100    0.1
        2         CURRENT    4       NonPeak    200    0.2
        2         PROPOSED   8       Peak       200    0.3
        2         PROPOSED   8       NonPeak    200    0.5

    Again, all I can think of is a combination of CURSOR and UNION, but is there any set-based T-SQL that can do this? Thanks
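    A set-based sketch, assuming the rows live in a table called dbo.Rates here (the table name is hypothetical): number each PROPOSED set with DENSE_RANK, then pair every sequence number with the CURRENT rows via a cross join.

        WITH proposed AS (
            SELECT NTRITCode, NTRIId, Parameter, Usage, Rate,
                   DENSE_RANK() OVER (ORDER BY NTRIId) AS Sequence
            FROM dbo.Rates
            WHERE NTRITCode = 'PROPOSED'
        )
        SELECT s.Sequence, c.NTRITCode, c.NTRIId, c.Parameter, c.Usage, c.Rate
        FROM (SELECT DISTINCT Sequence FROM proposed) AS s
        CROSS JOIN dbo.Rates AS c
        WHERE c.NTRITCode = 'CURRENT'
        UNION ALL
        SELECT Sequence, NTRITCode, NTRIId, Parameter, Usage, Rate
        FROM proposed
        ORDER BY Sequence, NTRITCode, NTRIId;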

  • Is it possible to rate-limit an scp/sftp/rsync/etc transfer from the command-line? i.e., manual QoS on

    - by warren
    Specifically, I am looking to rate-limit an scp or sftp session (or other arbitrary network call) in the call itself. For example, let's say I want to copy 100 MB to one server and 1 GB to another. I'd like to be able to run both of these at the same time, but maintain a QoS for "normal" computer usage - somewhat similar to how you can rate-limit BitTorrent. Is there a way to do this without touching the networking hardware? I'm envisioning something akin to:

        magic-qos-tool 'scp file user@host:/path/to/file'

    Or..

        scp -rate 40kbps file user@host:/path/to/file
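    Several of these tools already take a per-invocation bandwidth cap, which is close to the imagined magic-qos-tool; flags as documented in the respective man pages, and note the units differ:

        scp -l 320 file user@host:/path/to/file          # -l is in Kbit/s, so ~40 KB/s
        rsync --bwlimit=40 file user@host:/path/to/file  # --bwlimit is in KB/s
        trickle -u 40 scp file user@host:/path/to/file   # userland shaper, if installed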

  • Deleting multiple objects in an AWS S3 bucket with s3curl.pl?

    - by user183394
    I have been trying to use the AWS "official" command line tool s3curl.pl to test out the recently announced multi-object delete. Here is what I have done. First, I tested out s3curl.pl with a set of credentials without a hitch:

        $ s3curl.pl --id=s3 -- http://testbucket-0.s3.amazonaws.com/ | xmllint --format -
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100   884    0   884    0     0   4399      0 --:--:-- --:--:-- --:--:--  5703
        <?xml version="1.0" encoding="UTF-8"?>
        <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
          <Name>testbucket-0</Name>
          <Prefix/>
          <Marker/>
          <MaxKeys>1000</MaxKeys>
          <IsTruncated>false</IsTruncated>
          <Contents>
            <Key>file_1</Key>
            <LastModified>2012-03-22T17:08:17.000Z</LastModified>
            <ETag>"ee0e521a76524034aaa5b331842a8b4e"</ETag>
            <Size>400000</Size>
            <Owner>
              <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
              <DisplayName>zackp</DisplayName>
            </Owner>
            <StorageClass>STANDARD</StorageClass>
          </Contents>
          <Contents>
            <Key>file_2</Key>
            <LastModified>2012-03-22T17:08:19.000Z</LastModified>
            <ETag>"6b32cbf8219a59690a9f69ba6ff3f590"</ETag>
            <Size>600000</Size>
            <Owner>
              <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
              <DisplayName>zackp</DisplayName>
            </Owner>
            <StorageClass>STANDARD</StorageClass>
          </Contents>
        </ListBucketResult>

    Then I followed s3curl.pl's usage instructions:

        $ s3curl.pl --help
        Usage /usr/local/bin/s3curl.pl --id friendly-name (or AWSAccessKeyId) [options] -- [curl-options] [URL]
         options:
          --key SecretAccessKey       id/key are AWSAcessKeyId and Secret (unsafe)
          --contentType text/plain    set content-type header
          --acl public-read           use a 'canned' ACL (x-amz-acl header)
          --contentMd5 content_md5    add x-amz-content-md5 header
          --put <filename>            PUT request (from the provided local file)
          --post [<filename>]         POST request (optional local file)
          --copySrc bucket/key        Copy from this source key
          --createBucket [<region>]   create-bucket with optional location constraint
          --head                      HEAD request
          --debug                     enable debug logging
         common curl options:
          -H 'x-amz-acl: public-read' another way of using canned ACLs
          -v                          verbose logging

    Then I tried the following, and always got back an error. I would appreciate it very much if someone could point out where I made a mistake:

        $ s3curl.pl --id=s3 --post multi_delete.xml -- http://testbucket-0.s3.amazonaws.com/?delete
        <?xml version="1.0" encoding="UTF-8"?>
        <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><StringToSignBytes>50 4f 53 54 0a 0a 0a 54 68 75 2c 20 30 35 20 41 70 72 20 32 30 31 32 20 30 30 3a 35 30 3a 30 38 20 2b 30 30 30 30 0a 2f 7a 65 74 74 61 72 2d 74 2f 3f 64 65 6c 65 74 65</StringToSignBytes><RequestId>707FBE0EB4A571A8</RequestId><HostId>mP3ZwlPTcRqARQZd6gU4UvBrxGBNIVa0VVe5p0rqGmq5hM65RprwcG/qcXe+pmDT</HostId><SignatureProvided>edkNGuugiSFe0ku4eGzkh8kYgHw=</SignatureProvided><StringToSign>POST

        Thu, 05 Apr 2012 00:50:08 +0000

    The file multi_delete.xml contains the following:

        $ cat multi_delete.xml
        <?xml version="1.0" encoding="UTF-8"?>
        <Delete>
          <Quiet>true</Quiet>
          <Object>
            <Key>file_1</Key>
            <VersionId> </VersionId>>
          </Object>
          <Object>
            <Key>file_2</Key>
            <VersionId> </VersionId>
          </Object>
        </Delete>

    Thanks for any help! --Zack
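    One detail worth checking before suspecting the credentials: the multi-object delete API requires a Content-MD5 header over the request body. s3curl's --contentMd5 flag looks like the intended hook, though its help text says it sets x-amz-content-md5, so verify what actually goes on the wire; and since the StringToSign above ends in ?delete, whether this s3curl version signs that subresource correctly is worth a look too. A sketch:

        md5=$(openssl md5 -binary multi_delete.xml | base64)
        s3curl.pl --id=s3 --contentMd5 "$md5" --post multi_delete.xml -- \
            http://testbucket-0.s3.amazonaws.com/?delete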

  • Why might login failures cause SQL 2005 to dump and ditch?

    - by Byron Sommardahl
    Our SQL 2005 server began timing out and finally stopped responding on Oct 26th. The application logs showed a ton of 17883 events leading up to a reboot. After the reboot everything was fine, but we were still scratching our heads. Fast forward 6 days... it happened again. Then again 2 days later. Then last night. Today it has happened three times so far. The timeline is fairly predictable when it happens:
    1. Trans log backups.
    2. Login failure for "user2".
    3. Minidump
    4. Another minidump for the scheduler
    5. Repeated 17883 events.
    6. Server fails little by little until it won't accept any requests.
    7. Reboot is all that gets us going again (a band-aid)
    Interestingly, though, the server box itself doesn't seem to have any problems. CPU usage is normal. Network connectivity is fine. We can remote in and look at logs. Management Studio does eventually bog down, though. Today, for the first time, we tried stopping services instead of rebooting. All services stopped on their own except for the SQL Server service. We finally did an "end task" on that one and were able to bring everything back up. It worked fine for about 30 minutes until we started seeing timeouts and 17883s again. This time, probably because we didn't reboot all the way, we saw a bunch of 844 events mixed in with the 17883s. Our entire tech team here is scratching heads... some ideas we're kicking around:
    - An MS Cumulative Update hit around the same time as when we first had the problem. Since then we've rolled it back; maybe it didn't roll back all the way.
    - The situation looks and feels like an unhandled "stack overflow" (no relation) in that it starts small and compounds over time. The problem with this theory is that there isn't significant CPU usage.
    - At any rate, we're not ruling out a SQL 2005 bug. Maybe we added one too many import processes and have reached our limit on this box. (Hard to believe.)
    Looking at SQLDUMP0151.log at the time of one of the crashes, there are some login failures and then two stack dumps: first a normal stack dump, then one for the scheduler. Here's a snippet:

        2009-11-10 11:59:14.95 spid63  Using 'xpsqlbot.dll' version '2005.90.3042' to execute extended stored procedure 'xp_qv'. This is an informational message only; no user action is required.
        2009-11-10 11:59:15.09 spid63  Using 'xplog70.dll' version '2005.90.3042' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.
        2009-11-10 12:02:33.24 Logon   Error: 18456, Severity: 14, State: 16.
        2009-11-10 12:02:33.24 Logon   Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
        2009-11-10 12:08:21.12 Logon   Error: 18456, Severity: 14, State: 16.
        2009-11-10 12:08:21.12 Logon   Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
        2009-11-10 12:13:49.38 Logon   Error: 18456, Severity: 14, State: 16.
        2009-11-10 12:13:49.38 Logon   Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
        2009-11-10 12:15:16.88 Logon   Error: 18456, Severity: 14, State: 16.
        2009-11-10 12:15:16.88 Logon   Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
        2009-11-10 12:18:24.41 Logon   Error: 18456, Severity: 14, State: 16.
        2009-11-10 12:18:24.41 Logon   Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
        2009-11-10 12:18:38.88 spid111 Using 'dbghelp.dll' version '4.0.5'
        2009-11-10 12:18:39.02 spid111 *Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0149.txt
        2009-11-10 12:18:39.02 spid111 SqlDumpExceptionHandler: Process 111 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
        2009-11-10 12:18:39.02 spid111 * *****************************************************************************
        2009-11-10 12:18:39.02 spid111 *
        2009-11-10 12:18:39.02 spid111 * BEGIN STACK DUMP:
        2009-11-10 12:18:39.02 spid111 *   11/10/09 12:18:39 spid 111
        2009-11-10 12:18:39.02 spid111 *
        2009-11-10 12:18:39.02 spid111 *   Exception Address = 0159D56F Module(sqlservr+0059D56F)
        2009-11-10 12:18:39.02 spid111 *   Exception Code    = c0000005 EXCEPTION_ACCESS_VIOLATION
        2009-11-10 12:18:39.02 spid111 *   Access Violation occurred writing address 00000000
        2009-11-10 12:18:39.02 spid111 *   Input Buffer 138 bytes -
        2009-11-10 12:18:39.02 spid111 *   "  N R S C _ P T A   22 00 4e 00 52 00 53 00 43 00 5f 00 50 00 54 00 41 00
        2009-11-10 12:18:39.02 spid111 *   C _ Q A . d b o .   43 00 5f 00 51 00 41 00 2e 00 64 00 62 00 6f 00 2e 00
        2009-11-10 12:18:39.02 spid111 *   U s p S e l N e x   55 00 73 00 70 00 53 00 65 00 6c 00 4e 00 65 00 78 00
        2009-11-10 12:18:39.02 spid111 *   t A c c o u n t     74 00 41 00 63 00 63 00 6f 00 75 00 6e 00 74 00 00 00
        2009-11-10 12:18:39.02 spid111 *   @ i n t F o r m I   0a 40 00 69 00 6e 00 74 00 46 00 6f 00 72 00 6d 00 49
        2009-11-10 12:18:39.02 spid111 *   D & 8 @ t x         00 44 00 00 26 04 04 38 00 00 00 09 40 00 74 00 78 00
        2009-11-10 12:18:39.02 spid111 *   t A l i a s §       74 00 41 00 6c 00 69 00 61 00 73 00 00 a7 0f 00 09 04
        2009-11-10 12:18:39.02 spid111 *   Ð GQE9732           d0 00 00 07 00 47 51 45 39 37 33 32
        2009-11-10 12:18:39.02 spid111 *
        2009-11-10 12:18:39.02 spid111 *
        2009-11-10 12:18:39.02 spid111 *   MODULE    BASE      END       SIZE
        2009-11-10 12:18:39.02 spid111 *   sqlservr  01000000  02C09FFF  01c0a000
        2009-11-10 12:18:39.02 spid111 *   ntdll     7C800000  7C8C1FFF  000c2000
        2009-11-10 12:18:39.02 spid111 *   kernel32  77E40000  77F41FFF  00102000
