Search Results

Search found 4335 results on 174 pages for 'wait'.

Page 119 of 174

  • .tex file remains in use by process when batch file is triggered by .Rnw Sweave processing.

    - by drknexus
    This is a pretty specialized question. I'm using the Eclipse IDE in a Windows XP environment with the StatET plug-in so I can write R code as an R/Sweave document. This produces a .tex file that is then post-processed by pdflatex.exe. When I create the file as normal, everything works great (except that my file named russfnc2.Rnw seems to result in russfnc.pdf, even though pdflatex.exe on the console window correctly says that the output is being written to russfnc2.pdf). The big problem comes when I trigger a batch file from within my Rnw code. My goal here is to spawn a side process that waits for the PDF to be made and uploads it to the server. So the Rnw contains: if(file.exists("rsp.finalize.bat")) {system("rsp.finalize.bat",wait=FALSE,invisible=FALSE)} The batch file calls Rterm.exe to run a script: setwd("C:/theprojectdirectory") while(!file.exists("russfnc.pdf")) { Sys.sleep(1) } Sys.sleep(60) At the end of that script, I use a shell call to launch psftp.exe and upload the files. All of this works fine when I use my Eclipse profile to trigger Sweave... that is, unless I have that batch file call at the end of the .Rnw. When it is there, I get the error message pdflatex.exe: Permission denied: c:\thepath\thetexfile.tex. After that, the .tex file (as far as XP is concerned) is in use by another process and I have to reboot in order to delete it (and, of course, the PDF is not made). If I manually trigger the batch file after pdflatex.exe has done its thing, everything works fine. How can I make this work correctly using the tools I'm familiar with, viz. R and DOS-style batch files? I'm not sure if this is a SuperUser question or a StackOverflow question, so I'm starting here.
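
    A minimal sketch of the same "wait for the PDF, then upload" side process in Python (the path and the psftp batch-script name are placeholders, not taken from the question); launching it outside the Sweave/pdflatex run avoids holding any of LaTeX's files open:

      import os
      import subprocess
      import time

      PDF = r"C:\theprojectdirectory\russfnc2.pdf"   # hypothetical path

      # Poll until the PDF appears, then give pdflatex a moment to finish up.
      while not os.path.exists(PDF):
          time.sleep(1)
      time.sleep(60)

      # Hand off to the uploader (the psftp batch script name is a placeholder).
      subprocess.call(["psftp", "user@server", "-b", "upload_commands.txt"])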

  • Turn Excel spreadsheet into a formula

    - by ?????? ??????????
    I have an Excel spreadsheet with a complex computation that is not trivial to turn into a macro or a single-cell formula. The spreadsheet has about 10 different inputs (values a human enters in different cells) and it outputs 5 independent calculations (in 5 different cells) based on that input. The calculation uses some pre-entered data in the spreadsheet (about 100 different constants) and does some look-ups on them. Now I would like to use this whole spreadsheet as a formula in a different sheet, to process a set of input values and produce the corresponding set of output values. Imagine this as creating a separate table with 10 columns for the input variables and 5 columns for the outputs, then copying each input row into the calculation sheet and copying the output back into the results table. For instance: A1, A2, A3, ... A10 are cells where someone enters values; through a series of calculations B1, B2, B3, B4 and B5 are updated with some formulas. Can I run the whole series of calculations from A1..A10 into B1..B5 without creating one massive huge formula or a VBA macro? I want to have a set of input values in 100 rows from A100, B100, C100, ... J100 onward. Then do some Excel magic that will: 1. copy the values from A100...J100 into A1 to A10; 2. wait for the result to appear in B1 to B5; 3. copy the values from B1 to B5 into K100 to O100; 4. repeat steps 1 to 3 for all rows from 100 to 150.
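
    If driving Excel from outside is acceptable, here is a sketch of that copy-in/recalculate/copy-out loop using the third-party xlwings package (the workbook name and sheet index are assumptions; verify the calls against your xlwings version):

      import xlwings as xw

      wb = xw.Book("model.xlsx")      # hypothetical workbook name
      sht = wb.sheets[0]

      for row in range(100, 151):     # rows 100..150
          # copy the ten inputs from A<row>..J<row> into A1..A10
          for i in range(10):
              sht.range((i + 1, 1)).value = sht.range((row, i + 1)).value
          wb.app.calculate()          # force a recalculation
          # copy the five outputs from B1..B5 into K<row>..O<row>
          for j in range(5):
              sht.range((row, 11 + j)).value = sht.range((j + 1, 2)).value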

  • TFTP Timing Out on Ubuntu VM

    - by valsidalv
    I'm running a Windows 7 PC with VMware installed, which hosts my Ubuntu VM (10.04 Lucid Lynx). I recently installed a DHCP server and TFTP (xinetd tftpd) using these instructions. I've mapped a network drive so that Windows has access to all the files in my VM through a 192.x.x.x IP address. I'm trying to put some custom firmware onto a router. The router has its own built-in TFTP utility that will download the image. It successfully manages to do everything, but it is slow because it writes the image to flash memory. There is another method that is much quicker because it writes to RAM directly, but it must use the TFTP server in Ubuntu. The issue I'm facing is that the Ubuntu TFTP transfer seems to be timing out: the transfer starts but never goes past ~60%. Here's my /etc/xinetd.d/tftp file (similar to a known working config):

      service tftp
      {
          protocol    = udp
          port        = 69
          socket_type = dgram
          wait        = yes
          user        = nobody
          server      = /usr/sbin/in.tftpd
          server_args = -s /home/user/tftp/
          disable     = no
          cps         = 300 2
          per_source  = 60
      }

    I've done some searching but can't find any parameters in this file to control the timeout or the number of retries. The last two arguments (cps, per_source) are completely alien to me (can anyone explain them?). I have a few possible solutions, but the easiest would be to get this TFTP server working. Can anyone help? Either with a timeout configuration, or maybe even a recommendation for a different TFTP server? Thanks!
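
    To rule out the router's client, a quick way to exercise the same server from another machine is a small transfer test with the third-party Python tftpy package; this is a diagnostic sketch with placeholder names, not part of the original setup:

      import tftpy

      # Download a test file from the Ubuntu TFTP server to see whether the
      # server itself stalls, independently of the router's client.
      client = tftpy.TftpClient("192.168.1.50", 69)          # VM address is a placeholder
      client.download("firmware.bin", "firmware_copy.bin")   # remote name, local name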

  • Application (was Firefox) crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. There was no point in reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop. Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then (WAS: everything stops: icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. EDIT: on further investigation, the spinning icon and the touchpad-driven mouse freeze. There's apparently a little disk activity occurring about every 5 seconds. I wait 5-10 minutes and the behavior doesn't change.) A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system when Firefox is starting is that while it has an Ethernet port (that worked fine under Windows), it isn't actually plugged into an Ethernet. As this is the first Firefox start since the Ubuntu install, maybe Firefox mishandles Internet access? Why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in.) EDIT: I tried to run the Disk Manager tool, not that I cared what it was, just a menu-available application. It started up like Firefox, I got a little tag in the lower left saying Disk P*** something had started, and then the same behavior as Firefox. At this point, I don't think it's the Ethernet. Is it possible that the Ubuntu disk driver can't handle the disk controller in this older laptop? The install seemed to go fine.

  • Computer sending data while turned off

    - by Nicklas Ansman
    I have a somewhat strange problem (which could have an easy and obvious solution for all I know). My problem is that when I've booted Ubuntu (now 10.04, but the same problem occurred with 9.10) and then turn the machine off, it starts sending a HUGE amount of data via the ethernet cable, so much in fact that my router can't handle it and stops responding. As far as I can tell the computer is completely turned off, with no fans spinning. I can add that if I boot Windows I do not have this problem, just when exiting Ubuntu. There are two "fixes" for my problem: pull the ethernet cable until the next boot, or turn off power to the PSU and wait for the capacitors to discharge. Is there anyone who knows what could be going on? I'd be happy to post some logs or conf-files. Currently I'm using the ethernet port on my motherboard, which is an Asus P6T Deluxe V2 with an updated version of the BIOS (maybe not the latest, but since the problem only happens when I've been in Ubuntu I don't want to mess with the BIOS too much). Regards, Nicklas ---------Update 1---------- The router is a D-Link DIR 655 with the latest firmware. ---------Update 2---------- I've now reinstalled Ubuntu (with 10.04) and I still experience the same problem.

  • smartctl short test doesn't seem to complete

    - by Cédric COPY
    I am working on a project which involves automated HDD testing through smartctl. The station works fine on most products, but I have two specific products that fail the smartctl test. Those two products are both WD products (WD2500BUDT series). The smartctl behaviour is quite strange: the test is launched without any problem, I wait about 2 minutes (the test length), and when I check with smartctl I get no result at all. It's as if I hadn't launched any test (no fail, no success in the smartctl result). No error is returned by the command, nothing in syslog, ... As I said before, the test works for other products; thousands of products have worked well with this test. The main smartctl commands used are: smartctl -t shortest /dev/sdX #Launch test smartctl -l selftest /dev/sdX #Look at test result I have tried to use: smartctl -s on /dev/sdX or smartctl -o on /dev/sdX but that doesn't change anything. The system is running Debian 6.0, smartctl v5.40 (rev 3124) x86_64, and the HDDs are plugged through a SATA-to-PCI controller. I have 4 HDDs connected at a time. Well, if anyone has some hints to give on this problem, please do, because I have no idea how I can fix it. Thanks in advance. PS: Not sure if this was a Server Fault topic, sorry if I was wrong!
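
    A small Python wrapper can make a silent failure visible by capturing smartctl's exit code and output and then polling the self-test log (the device node and timings are assumptions; the smartctl invocations are the ones from the question, with the standard -t short argument):

      import subprocess
      import time

      DEV = "/dev/sdb"   # hypothetical device node

      start = subprocess.run(["smartctl", "-t", "short", DEV],
                             capture_output=True, text=True)
      print("exit code:", start.returncode)
      print(start.stdout, start.stderr)

      # Poll the self-test log for up to 5 minutes.
      for _ in range(30):
          time.sleep(10)
          log = subprocess.run(["smartctl", "-l", "selftest", DEV],
                               capture_output=True, text=True).stdout
          if "# 1" in log:          # the first log entry appeared
              print(log)
              break
      else:
          print("No self-test entry appeared; 'smartctl -c' shows what the drive claims to support.")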

  • Show full URI/URL in Chrome's developer tools Network tab

    - by Lev
    When using Chrome to debug, I find it incredibly difficult to be efficient because I can't see how to force the "Network" tab of the developer tools to show the full request URI. It will show the full URI if you hover the link and wait a second, but this is incredibly counterproductive. All of my AJAX requests are sent to ajax.php and handled by using query string arguments, like: ajax.php?do=profile-set ajax.php?do=game-save ... etc. Since I use AJAX extensively, my Network tab is filled with "ajax.php", and I have to manually hover each and every entry to find the request I am looking for. Surely there has got to be another way!? I am constantly fed up with something new in Firefox and immediately force myself back into Chrome, but it is always the developer tools in Chrome that keep me from using it for an extended period of time. Hopefully I can find out how to do this so I can continue using Chrome as my numero uno. I've provided a screenshot to show you where I mean.

  • FreeBSD's ng_nat periodically stops passing packets

    - by Korjavin Ivan
    I have a FreeBSD router:

      #uname
      9.1-STABLE FreeBSD 9.1-STABLE #0: Fri Jan 18 16:20:47 YEKT 2013

    It's a powerful computer with a lot of memory:

      #top -S
      last pid: 45076;  load averages: 1.54, 1.46, 1.29  up 0+21:13:28  19:23:46
      84 processes: 2 running, 81 sleeping, 1 waiting
      CPU:  3.1% user,  0.0% nice, 32.1% system,  5.3% interrupt, 59.5% idle
      Mem: 390M Active, 1441M Inact, 785M Wired, 799M Buf, 5008M Free
      Swap: 8192M Total, 8192M Free

        PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
         11 root        4 155 ki31     0K    64K RUN     3  71.4H 254.83% idle
         13 root        4 -16    -     0K    64K sleep   0 101:52 103.03% ng_queue
          0 root       14 -92    0     0K   224K -       2 229:44  16.55% kernel
         12 root       17 -84    -     0K   272K WAIT    0 213:32  15.67% intr
      40228 root        1  22    0 51060K 25084K select  0  20:27   1.66% snmpd
      15052 root        1  52    0   104M 22204K select  2   4:36   0.98% mpd5
         19 root        1  16    -     0K    16K syncer  1   0:48   0.20% syncer

    Its tasks are NAT via ng_nat and a PPPoE server via mpd5. Traffic through it is about 300 Mbit/s, about 40 kpps at peak. PPPoE sessions created: 350 max. ng_nat is configured by the script:

      /usr/sbin/ngctl -f- <<-EOF
      mkpeer ipfw: nat %s out
      name ipfw:%s %s
      connect ipfw: %s: %s in
      msg %s: setaliasaddr 1.1.%s

    There are 20 such ng_nat nodes, with about 150 clients. Sometimes the traffic via NAT stops. When this happens, vmstat reports a lot of FAIL counts:

      vmstat -z | grep -i netgraph
      ITEM                 SIZE  LIMIT   USED   FREE        REQ    FAIL SLEEP
      NetGraph items:        72, 10266,     1,   376,  39178965,      0,    0
      NetGraph data items:   72, 10266,     9, 10257,2327948820,2131611, 4033

    I tried increasing net.graph.maxdata=10240 and net.graph.maxalloc=10240, but this doesn't help. It's a new problem (1-2 weeks old). The configuration had been working well for about 5 months, and no configuration changes were made leading up to the problems starting. In the last few weeks we have slightly increased traffic (from 270 to 300 Mbit/s) and added a few more PPPoE sessions (300-350). Please help me: how can I find and solve my problem?
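
    Since the FAIL column of "NetGraph data items" is the visible symptom, a tiny poller that records it over time can tie the stalls to traffic peaks (a diagnostic sketch only; it just parses the vmstat -z line shown above):

      import subprocess
      import time

      # Log the FAIL counter for "NetGraph data items" once a minute.
      while True:
          out = subprocess.run(["vmstat", "-z"], capture_output=True, text=True).stdout
          for line in out.splitlines():
              if "NetGraph data items" in line:
                  fields = [f.strip() for f in line.split(":", 1)[1].split(",")]
                  # fields: SIZE, LIMIT, USED, FREE, REQ, FAIL, SLEEP
                  print(time.strftime("%H:%M:%S"), "FAIL =", fields[5])
          time.sleep(60)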

  • Timeout option not working on EFI Windows 7/Windows 8 dual-boot machine

    - by Guenter
    I have a Gigabyte GA-Z77M-D3H motherboard and installed Windows 8 Pro and Windows 7 Ultimate on two SSDs (in that order) in EFI mode. Now when I start my computer, I get the Windows boot menu (text mode) with the two OSes to choose from, but I have to press RETURN manually to have the computer boot into the selected OS. Even if I wait an hour, no default action takes place. Using bcdedit (from either of the OSes) I can successfully change the timeout value, and it shows up in the bcdedit (no params) output. But it doesn't fire ... Here is my current bcdedit output (headers are in German, but the values should be readable):

      Windows-Start-Manager
      ---------------------
      Bezeichner              {bootmgr}
      device                  partition=O:
      path                    \EFI\Microsoft\Boot\bootmgfw.efi
      description             Windows Boot Manager
      locale                  de-DE
      inherit                 {globalsettings}
      integrityservices       Enable
      default                 {default}
      resumeobject            {5ad2802c-c60a-11e2-acdb-80331c501b11}
      displayorder            {default}
                              {current}
                              {5ad2802a-c60a-11e2-acdb-80331c501b11}
                              {5ad28028-c60a-11e2-acdb-80331c501b11}
                              {5ad28029-c60a-11e2-acdb-80331c501b11}
      toolsdisplayorder       {memdiag}
      timeout                 5
      displaybootmenu         Yes

      Windows-Startladeprogramm
      -------------------------
      Bezeichner              {default}
      device                  partition=W:
      path                    \Windows\system32\winload.efi
      description             Windows 7
      locale                  de-DE
      inherit                 {bootloadersettings}
      recoverysequence        {5ad2802e-c60a-11e2-acdb-80331c501b11}
      recoveryenabled         Yes
      osdevice                partition=W:
      systemroot              \Windows
      resumeobject            {5ad2802c-c60a-11e2-acdb-80331c501b11}
      nx                      OptIn

      Windows-Startladeprogramm
      -------------------------
      Bezeichner              {current}
      device                  partition=C:
      path                    \Windows\system32\winload.efi
      description             Windows 8
      locale                  de-DE
      inherit                 {bootloadersettings}
      recoverysequence        {5ad28033-c60a-11e2-acdb-80331c501b11}
      integrityservices       Enable
      recoveryenabled         Yes
      isolatedcontext         Yes
      allowedinmemorysettings 0x15000075
      osdevice                partition=C:
      systemroot              \Windows
      resumeobject            {5ad28031-c60a-11e2-acdb-80331c501b11}
      nx                      OptIn
      bootmenupolicy          Standard
      hypervisorlaunchtype    Auto

    (This output is from Windows 8; the Windows 7 output looks nearly identical.) If the problem comes from a bad EFI Windows boot manager installation, can it be fixed without losing my Windows installations?

  • DNS propagation delay or bad configuration?

    - by Javier Martinez
    I have been waiting for DNS propagation for almost 24 hours. I'm not impatient, but I want to know whether I configured my zone correctly or whether there is an error in it. I think it is correct, because if I use my own DNS server as my secondary DNS I can resolve and look up the host fine.

      ;
      ; BIND data file for mydomain.net
      ;
      $TTL 86400
      @       IN      SOA     mydomain.net. mydomain.net. (
                              20120629  ; Serial
                              10800     ; Refresh 3 hours
                              3600      ; Retry 1 hour
                              604800    ; Expire 1 week
                              86400 )   ; Negative Cache TTL
      ;
      @       IN      NS      ns1
      @       IN      NS      ns2
              IN      MX      10 mail
      ns1     IN      A       5.39.X.Y
      ns2     IN      A       5.39.X.Z

    There are no errors in /var/syslog about the BIND daemon. Is everything correct? Do I only need to wait up to 48 hours for the DNS to propagate properly? My nslookup from a remote machine using the BIND host's nameserver:

      $ nslookup mydomain.net
      Server:   bind-host-ip
      Address:  bind-host-ip#53

      Name:     mydomain.net
      Address:  domain-ip
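
    One way to watch propagation rather than guess is to query a few public resolvers directly and compare their answers with your own nameservers; a sketch using the third-party dnspython package (resolver addresses are the usual public ones, the domain is the placeholder from the question):

      import dns.resolver   # dnspython (third-party)

      DOMAIN = "mydomain.net"
      for server in ["8.8.8.8", "1.1.1.1"]:   # add your ns1/ns2 addresses here too
          r = dns.resolver.Resolver(configure=False)
          r.nameservers = [server]
          try:
              answer = r.resolve(DOMAIN, "A", lifetime=5)
              print(server, "->", [a.to_text() for a in answer])
          except Exception as exc:
              print(server, "->", exc)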

  • Change Windows 7 Explorer's Details Pane limits

    - by Paul
    For some reason, MS decided to completely kill the status bar's functionality in Win7 (and maybe Vista, but I don't know for sure). I have tried all possible options such as Classic Shell and so on. Basically, the one thing I miss most is seeing at a glance the total size of my selected files. I know I can press Alt+Enter or whatever, but that's not the point. The point is that the so-called 'details' pane stops providing details if more than 15 files are selected! WTH? I cannot understand the reason behind such an arbitrary limit, which doesn't seem to be user-configurable at all. Anyway, what I'm looking for is a way to change that limit, either via the registry or otherwise. If changing the limit isn't possible, would it be possible for some programming genius to create a very small, optimized, lightweight, non-resource-hungry (you get the idea!) program whose sole purpose is to automatically click the "Show more details..." link in the details pane after every file selection? For bonus points, the program should wait a second or two after file selection is complete before doing this, so that selecting multiple files in quick succession doesn't hit the HDD repeatedly. Any takers for the challenge? :)
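
    As a rough, untested sketch of that "auto-click the link" idea in Python with the third-party pywinauto package (the window class name is Explorer's usual CabinetWClass; the link title and the timing are assumptions and may need adjusting per locale):

      import time
      from pywinauto import Application   # pywinauto (third-party)

      app = Application(backend="uia").connect(path="explorer.exe")

      while True:
          time.sleep(2)   # the requested settle delay after a selection
          try:
              win = app.window(class_name="CabinetWClass")
              link = win.child_window(title="Show more details...",
                                      control_type="Hyperlink")
              if link.exists(timeout=0.5):
                  link.click_input()
          except Exception:
              pass   # no Explorer window or link right now; keep polling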

  • Windows 95 virtualization and CPU idle

    - by brtlvrs
    We are in the situation that we need to migrate our Windows 95 systems (I know, that is so last century). There is no hardware available at a reasonable price to run our Windows 95 systems, so we are extending their overdue lifetime by making them virtual. Now we see that VMware reports 100% CPU utilization for a Windows 95 VM. This is because Windows 95 doesn't know how to idle the CPU. For this reason, software like Rain, Waterfall or CPUCool was introduced: they send a HLT instruction to the CPU, which causes it to halt and wait until there is new work to do. I've tested the mentioned programs in the VM, but they don't work and only generate errors. Does anyone have a valid workaround or solution? By the way, I know the best solution would be to replace Windows 95 with Windows XP, but in our situation that will take at least 5 years. Our Windows 95 systems are running factory process control software.

  • PHP Causing Segmentation Fault & Apache Blank Response

    - by Joe
    I recently updated a Debian Lenny server to PHP 5.3.5 using the dotdeb source. Soon after doing so, certain (but not all) sites on the server stopped responding to requests. A blank response would be returned - no headers, no content, nothing. I found this related question on Stack Overflow which seems to describe something similar, and used the same code the user had in their answer to see if I could replicate the issue:

      <?php
      class A {
          public function __construct() {
              new B;
          }
      }
      class B {
          public function __construct() {
              new A;
          }
      }
      new A;
      print 'Loaded Class A';
      ?>

    This triggered the problem - the page returned absolutely nothing, despite the original question stating this was fixed in PHP 5.5.0. No CPU block as you'd expect, no wait, just an almost instant zero response. I then ran the same code from the CLI (php -f test.php) and the only output I got was 'Segmentation fault'. Tailing the kernel log I've spotted:

      Feb 16 07:04:06 creature kernel: [192203.269037] php[17710] general protection ip:76ef37 sp:7fff155e9bb0 error:0 in php5[400000+870000]
      Feb 16 08:57:31 creature kernel: [199639.699854] apache2[31136]: segfault at 7fff13a84fe0 ip 7f730514ea40 sp 7fff13a85008 error 6 in libphp5.so[7f7304ce8000+915000]

    All extremely odd, and I'm not sure what it's pointing to or what I should do to debug this further. As I said, some sites work, but code such as the above definitely triggers it. Not that the sites I want to serve have code like that - it's just an example. Any help is much appreciated!

  • Daisy-chainable DisplayPort Monitors

    - by thepurplepixel
    I am the proud new owner of a MacBook Pro with a mini-DisplayPort. My desk setup used to allow me to position the screen of my old MacBook beside an external monitor, essentially allowing dual-head. It was also advantageous that my old MBP and my external monitor had the same resolution, 1440x900. Now, I'm searching for a set of two monitors that I can use a HengeDock with. Unfortunately, the MacBook Pro suffers from having only one mini-DisplayPort. Looking up the spec for DisplayPort 1.2 (which the MBP supports), DisplayPort daisy chaining is supported. What I'm looking for is a monitor that has two DisplayPorts so I can daisy chain two monitors off the single mini-DisplayPort. What I don't want is a USB-based video solution. I need full acceleration on both monitors; an external video card won't cut it. I hope I don't have to wait a few years for these monitors to come out. TL;DR: I need two monitors with two DisplayPorts each that I can daisy-chain. Thanks!

  • Optimal video resolution and encoding for recording games for YouTube?

    - by Rookie
    I want to record video from games, so I cannot use a very large video resolution, but I still want the large video view to look as sharp as the original encoded video before upload. I tried to use YouTube's recommended 854x640 resolution, but it wasn't possible with h264: the encoding software I used (Handbrake) converted the width to the nearest multiple of 4, which I think is a limitation of the h264 format. The video I encoded was sharp and of fine quality, but when I uploaded it to YouTube it lost a lot of quality, and the preferred large video view looks almost as bad as a 320p video. I waited a few days in case YouTube hadn't completely processed it yet, but it never got sharper. So, which resolution and encoding options should I use if I want the large video player to show the sharpest possible video, retaining the original quality as well as possible? I noticed that when recording at 640x480 the video was sharper than at 1280x720, so I'm not sure what I'm doing wrong here; both were h264. Is it at all possible to prevent YouTube from re-encoding the videos? I just wonder how people manage to make such sharp videos, while mine are all blurry after upload even though they looked fine before. I also tried YouTube's suggested bitrates with h264, but it didn't work any better.

  • Linux - preventing an application from failing due to lack of disk space

    - by Jernej
    Due to an unpredicted scenario, I am currently in need of a solution to the fact that an application (which I do not wish to kill) is slowly hogging the entire disk space. To give more context: I have a Python application that uses multiprocessing.Pool to start 5 workers. Each worker writes some data to its own file. The program is running on Linux and I do not have root access to the machine. The program is CPU intensive and has been running for months. It still has a few days to go before it has written all the data. 40% of the data in the files is redundant and can be removed after a quick test. The system on which the program is running only has 30GB of remaining disk space, and at the current rate of work it will surely be exhausted before the program finishes. Given the above points, I see the following solutions, each with its own problems: Given that process number i is writing to file_i, is it safe to move file_i to an external location? Will the OS simply create a new instance of file_i and write to it? I assume moving the file would remove it and the process would end up writing to a "dead" file? Is there a "command line" way to stop 4 of the 5 spawned workers and wait until one of them finishes, then resume their work? (I am sure a single worker would avoid hogging the disk.) Suppose I use CTRL+Z to freeze the main process. Will this stop all the other processes spawned by multiprocessing.Pool? If yes, can I then safely edit the files so as to remove the redundant lines? Given the three options that I see, would any of them work in this context? If not, is there a better way to handle this problem? I would really like to avoid the scenario in which the program crashes just a few days before it finishes.
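
    On the second option: pool workers are ordinary processes, so from a shell they can be frozen and thawed with kill -STOP <pid> and kill -CONT <pid>. Doing the same from inside the parent looks roughly like this sketch (it assumes it runs in the process that owns the Pool):

      import multiprocessing
      import os
      import signal

      def pause_all_but_one():
          """Freeze every pool worker except the first; return the frozen ones."""
          workers = multiprocessing.active_children()
          for proc in workers[1:]:
              os.kill(proc.pid, signal.SIGSTOP)   # same effect as CTRL+Z on that process
          return workers[1:]

      def resume(frozen):
          for proc in frozen:
              os.kill(proc.pid, signal.SIGCONT)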

  • Will wear induced by turning computers off in the evening be offset by energy savings?

    - by sharptooth
    I'm asking this here because this is primarily a large-office scenario and administrators are more likely to have the answer I'm looking for. Employees' desktop computers can either be left turned on for the whole night or switched off in the evening and turned back on in the morning. The latter will surely save energy. At the same time, turning machines on and off is hard on the equipment - hardware often breaks specifically when powered on. Both energy and hardware replacements cost money. With energy it's quite obvious - you pay every month according to what your power meter shows. With hardware replacements it's worse - you need qualified staff to quickly diagnose the problems, and once something breaks the affected employee has to wait while his computer is fixed or replaced and the data is recovered. So the company has to choose between saving money on energy and saving money on computer maintenance and lost hours. Such decisions must be well thought out. Is there any detailed study of how turning computers off each evening affects their lifetime?
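
    For a sense of scale on the energy side, a back-of-the-envelope estimate (all numbers below are illustrative assumptions, not measurements):

      # Rough annual cost of leaving one desktop on overnight.
      idle_power_w  = 80     # assumed idle draw of a desktop, in watts
      off_hours     = 14     # hours per workday the machine could be off
      workdays      = 250
      price_per_kwh = 0.15   # assumed electricity price

      kwh_per_year = idle_power_w / 1000 * off_hours * workdays
      print(f"{kwh_per_year:.0f} kWh per year, about {kwh_per_year * price_per_kwh:.0f} per machine")
      # -> 280 kWh per year, about 42 per machine; multiply by the number of desktops
      #    and compare with the expected extra repair and labour cost per machine.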

  • How to manage multiple email addresses on multiple domains in Exchange

    - by CAD bloke
    I'm using a hosted Exchange server, mostly because I use an iPhone, webmail and Outlook on 2 laptops. I want to keep everything consistent and unfragmented. Also, I want push notifications. I have 2 domains, a professional one and a personal one. Each domain has about 5 (give or take) email addresses I use for various purposes. Each domain also has a few parked domains (.net, .org, .info) aliased to the .com domain. I would like to keep emails from the 2 domains separated. Do I need an extra mailbox, meaning extra expense, or can I create another Exchange user on the same mailbox and create an extra account in Outlook? In either case I will have to wait for iOS 4 on the iPhone to manage 2 Exchange accounts. Or am I better off just using a set of rules and folders? The aliased domains are another joy to behold entirely. It looks like I will have to add each email address variant individually. Alternatively, I reckon I may just leave the aliased domains at the POP3 host and let Outlook gather those as edge cases. Surely I can't be the only one making my life this difficult. Anyone out there done this? From left field - is this (much) easier in Gmail? I'm not committed to Exchange (yet). Previously I used Outlook as a POP3 client with a set of filters to direct incoming traffic to folders. This worked with the aliased domains because my host directed all the aliased TLDs to the same mailbox.

  • Understanding tcptraceroute versus http response

    - by kojiro
    I'm debugging a web server that has a very high wait time before responding. The server itself is quite fast and has no load, so I strongly suspect a network problem. Basically, I make a web request:

      wget -O/dev/null http://hostname/
      --2013-10-18 11:03:08--  http://hostname/
      Resolving hostname... 10.9.211.129
      Connecting to hostname|10.9.211.129|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: unspecified [text/html]
      Saving to: ‘/dev/null’

      2013-10-18 11:04:11 (88.0 KB/s) - ‘/dev/null’ saved [13641]

    So you see it took about a minute to give me the page, but it does give it to me with a 200 response. So I try a tcptraceroute to see what's up:

      $ sudo tcptraceroute hostname 80
      Password:
      Selected device en2, address 192.168.113.74, port 54699 for outgoing packets
      Tracing the path to hostname (10.9.211.129) on TCP port 80 (http), 30 hops max
       1  192.168.113.1  0.842 ms  2.216 ms  2.130 ms
       2  10.141.12.77  0.707 ms  0.767 ms  0.738 ms
       3  10.141.12.33  1.227 ms  1.012 ms  1.120 ms
       4  10.141.3.107  0.372 ms  0.305 ms  0.368 ms
       5  12.112.4.41  6.688 ms  6.514 ms  6.467 ms
       6  cr84.phlpa.ip.att.net (12.122.107.214)  19.892 ms  18.814 ms  15.804 ms
       7  cr2.phlpa.ip.att.net (12.122.107.117)  17.554 ms  15.693 ms  16.122 ms
       8  cr1.wswdc.ip.att.net (12.122.4.54)  15.838 ms  15.353 ms  15.511 ms
       9  cr83.wswdc.ip.att.net (12.123.10.110)  17.451 ms  15.183 ms  16.198 ms
      10  12.84.5.93  9.982 ms  9.817 ms  9.784 ms
      11  12.84.5.94  14.587 ms  14.301 ms  14.238 ms
      12  10.141.3.209  13.870 ms  13.845 ms  13.696 ms
      13  * * *
      …
      30  * * *

    I tried it again with 100 hops, just to be sure – the packets never get there. So how is it that the server does respond to requests via http, even after a minute? Shouldn't all requests just die? I'm not sure how to proceed debugging why this server is slow (as opposed to why it responds at all).
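
    To separate path problems from server problems, it can help to time the TCP handshake and the first response byte independently; a small stdlib-only sketch (the hostname is the placeholder from the question):

      import socket
      import time

      HOST, PORT = "hostname", 80   # placeholder

      t0 = time.time()
      s = socket.create_connection((HOST, PORT), timeout=120)
      t_connect = time.time() - t0          # pure TCP handshake time

      s.sendall(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
      s.recv(1)                             # block until the first response byte
      t_first_byte = time.time() - t0
      s.close()

      print(f"connect: {t_connect:.2f}s, first byte: {t_first_byte:.2f}s")
      # A fast connect but a slow first byte points at the server or application,
      # rather than at the network path.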

  • Auto-rotate rotated images with mogrify

    - by Frank Presencia Fandos
    Some of my images were taken rotated, and they keep this orientation data. The problem is that when I use mogrify to convert them from JPG to PNG, that data seems to disappear. To show the problem, I think it's best to show the script and a screenshot. Here is the script: put it in a text file, give it execution permission, double-click it, run it (from a terminal if you wish) and wait a while. All the JPGs in that folder will be converted to PNG:

      #!/bin/bash
      echo "Converting JPG to png. Please don't close this window."
      mogrify -alpha on -format png *.JPG
      mogrify -alpha on -format png *.jpg

    It works great and adds an alpha channel. This is personally useful when I edit the images later, so I don't have to add the channel individually. Now the screenshot that illustrates the problem: as you can see, the original (JPG) previews are right, the modified previews are wrong, the Shotwell rendering is right, and the GIMP edit is wrong and didn't even say the image was rotated, as it usually does with other images. How can I edit my script to preserve the orientation?
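
    ImageMagick itself has an -auto-orient option for applying the stored rotation; as an alternative, here is a sketch of the same conversion in Python with the third-party Pillow package, which bakes the EXIF orientation into the pixels before saving so the rotation survives:

      from pathlib import Path
      from PIL import Image, ImageOps   # Pillow (third-party)

      # Convert every JPG in the current folder to PNG, honouring the EXIF
      # orientation flag and keeping an alpha channel.
      for jpg in list(Path(".").glob("*.JPG")) + list(Path(".").glob("*.jpg")):
          img = Image.open(jpg)
          img = ImageOps.exif_transpose(img)   # apply the rotation flag
          img.convert("RGBA").save(jpg.with_suffix(".png"), "PNG")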

  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large instance with EC2. I now want to deploy the latest code to this server, but I want to do it in a way which minimizes downtime and is as smooth as possible for the end user. Here is my plan:

    1. Fire up another large instance
    2. Install all the software layers on that instance
    3. Restore and attach an EBS drive to the instance
    4. Deploy our latest production-ready code on the new instance
    5. Run all tests (including manual testing of the application)
    6. (If tests pass) Put a "Site Under Maintenance" notice on the live site
    7. Back up the EBS volume on the live site
    8. Detach the EBS volume from the new server and replace it with the latest backup
    9. Use ec2-associate-address to move the IP address to the new instance
    10. Sit back and wait for traffic to start flowing through the new instance
    11. Terminate the old instance

    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book, but it does not cover deployment. Additionally, I know there are tools that can help with this, like RightScale or enStratus, which I will use when I start using more than one instance.
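
    For step 9, a sketch of the address swap using boto3, the current Python SDK, in place of the old ec2-associate-address command-line tool (the region, instance ID and IP are placeholders; VPC instances take an AllocationId instead of PublicIp):

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

      # Re-point the site's Elastic IP at the freshly tested instance.
      ec2.associate_address(
          InstanceId="i-0123456789abcdef0",   # new instance (placeholder)
          PublicIp="203.0.113.10",            # the Elastic IP currently serving traffic
      )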

  • MacBook Pro with OSX 10.6.3 (Snow Leopard) Wi-Fi network connection breaks after few minutes

    - by Yanick Landry
    I have a MacBook Pro with OSX 10.6.3 (Snow Leopard). After connecting to a Wi-Fi network, the connection "breaks" after a few minutes. What I mean by "breaking" is that no requests get through (they time out), whether it is loading a web page, connecting to a shared folder, connecting to my local router at 192.168.0.1, or pinging anything. When in a "broken" situation, I can see in the Network Settings panel that I still have an active IP, which I can successfully ping. I have this problem at home with a D-Link DI-624 router and at work with a D-Link WBR-2310, both with updated firmware. I thought DHCP was the issue, so I tried assigning a fixed IP address (192.168.0.166). It successfully connects, but after a few minutes the connection still breaks. The workaround I'm currently using is to disable the AirPort (in the Network icon menu in the top bar), wait a few seconds, then re-enable it. It then quickly works again, but the connection still breaks after a few minutes. I tried Googling my problem but I think I can't find any good keywords! It's my first question here, so sorry if I don't respect some rules.

  • How to increase the speed between two external hard drives on my laptop?

    - by Roman
    Hello, I own a Sony Vaio Z laptop with two external USB ports. It's quite new and has USB 2.0 support. I'm using Vista x64 on it. I also have two external USB hard drives, a 500GB Iomega and a 1TB WD; each has USB 2.0 support. I connect the two devices to my laptop and try to copy data from one hard drive to the other. But it takes a lot of time! The speed is about 15 megabytes per second, so I have to wait far too long to copy all the information from one hard drive to the other. When I copy information from my internal (SSD) hard drive, it works fine for both external drives: the speed is very high, something like 100 megabytes per second. That makes me feel that USB 2.0 is OK on both drives. But when I'm trying to copy from one external drive to another external one, I still get a very low speed. I checked Device Manager and here are the settings I have (sorry, I can't upload an image because of my rating; check this URL: http://picbite.com/image/122073daljo/ ). I think it's because two of my external drives use the same USB 2.0 controller. Is there any way to make it work faster? Is it possible to move one of my USB ports to the other USB 2.0 controller? Or is there any software which can help me automate copying all the files through my internal drive? I have only about 3 gigabytes of free space on the internal drive, and it's quite difficult to manually move every file from one external drive to the internal one and then again to the other external one.
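
    The "stage it through the internal drive" workaround is easy to automate; here is a sketch that copies one file at a time through a staging folder on the SSD, so it never needs more free internal space than the largest single file (drive letters and folder names are placeholders):

      import shutil
      from pathlib import Path

      SRC     = Path("E:/")          # first external drive (placeholder)
      DST     = Path("F:/")          # second external drive (placeholder)
      STAGING = Path("C:/staging")   # folder on the internal SSD

      STAGING.mkdir(exist_ok=True)
      for src in SRC.rglob("*"):
          if not src.is_file():
              continue
          rel = src.relative_to(SRC)
          (DST / rel).parent.mkdir(parents=True, exist_ok=True)
          tmp = STAGING / src.name
          shutil.copy2(src, tmp)                 # external -> internal SSD (fast)
          shutil.move(str(tmp), str(DST / rel))  # internal SSD -> other external (fast)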

  • How to send from my Z88 to my PC

    - by Bevan
    I've got a Cambridge Z88 that I want to get working with my PC. Around 6 years ago - in 2004 - I made heavy use of my Z88 to do a whole bunch of writing on the train while commuting to and from work. The Z88 is solid state, lightweight and has a full-size silent keyboard, so it works very well as a writing instrument. I still have the serial cable I soldered up back then and used successfully in 2004. It has these connections:

      Z88            9 pin
      -----          -------
      2 TxD ------>  RxD 2
      3 RxD <------  TxD 3
      7 GND <----->  GND 5
      4 RTS ------>  CTS 8
      5 CTS <-+      RTS 7
      8 DCD <-+----  DTR 4
      9 DTR ----+->  DCD 1
                +->  DSR 6

    Unfortunately, I haven't been able to find my notes from 2004 that describe how I got it to work back then. I've spent several hours trying to Google a result, but to no avail. I'm pretty sure the cable is fine - after all, it's what I used successfully six years ago, and I've checked it out with a multimeter - so I'm focusing on the PC end of things, which is where I'd like some assistance. Q1: In my recent attempts, I've been using both HyperTerminal (as built into Windows XP) and the command line (copy com2: con:), but with no success. What's a good (better!) serial communications application to use? Is there one that lets me see as deep as the signalling that's occurring on the wire? Q2: If you have a Z88 that works correctly with your PC, what software do you use on the PC end, and what's the pinout of your cable? I'm pretty sure that the Z88 itself is working properly: when using the built-in Import/Export tool to send a file, I see different behaviour when my serial cable is connected compared to disconnected. When disconnected, the transmission appears to work, with a progress meter counting up and then finishing; when connected, nothing happens except a timeout if I wait long enough.
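
    For Q1, a very small receiver written with the third-party pyserial package will at least show whether any raw bytes reach the PC; the port name and settings are assumptions and need to match the Z88's Import/Export settings (including whether hardware flow control is in use):

      import serial   # pyserial (third-party)

      ser = serial.Serial("COM2", baudrate=9600, bytesize=8,
                          parity=serial.PARITY_NONE, stopbits=1,
                          rtscts=True, timeout=5)
      with open("z88_dump.bin", "wb") as out:
          while True:
              data = ser.read(256)
              if not data:
                  print("timeout - no bytes received")
                  break
              out.write(data)
              print(f"received {len(data)} bytes")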
