Search Results

  • Networking issue with Windows 2008 running as a KVM guest

    - by Evolver
    I have a strange networking problem with a Windows 2008 R2 server running as a guest under a KVM/QEMU host. The host is CentOS 6.3 x86_64. Its network settings:

        # cat /etc/sysconfig/network-scripts/ifcfg-br0
        DEVICE=br0
        BOOTPROTO=static
        BROADCAST=xx.xx.xx.63
        IPADDR=xx.xx.xx.4
        NETMASK=255.255.255.192
        NETWORK=xx.xx.xx.0
        ONBOOT=yes
        TYPE=Bridge

        # cat /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        HWADDR=xx:xx:xx:xx:xx:xx
        ONBOOT=yes
        BRIDGE=br0
        IPV6INIT=yes
        IPV6_AUTOCONF=yes

        # cat /etc/sysconfig/network
        NETWORKING=yes
        NETWORKING_IPV6=no
        HOSTNAME=my.hostname
        GATEWAY=xx.xx.xx.1

        # cat /etc/sysctl
        net.ipv4.ip_forward = 1                        # tried to set it to 0 without any changes
        net.ipv4.conf.default.rp_filter = 1            # tried to set it to 0 without any changes
        net.ipv4.conf.default.accept_source_route = 0  # tried to set it to 1 without any changes
        kernel.sysrq = 0
        kernel.core_uses_pid = 1
        net.ipv4.tcp_syncookies = 1
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-arptables = 0
        kernel.msgmnb = 65536
        kernel.msgmax = 65536
        kernel.shmmax = 68719476736
        kernel.shmall = 4294967296

        # route -n
        Kernel IP routing table
        Destination     Gateway      Genmask          Flags Metric Ref Use Iface
        xx.xx.xx.0      0.0.0.0      255.255.255.192  U     0      0   0   br0
        169.254.0.0     0.0.0.0      255.255.0.0      U     1004   0   0   br0
        0.0.0.0         xx.xx.xx.1   0.0.0.0          UG    0      0   0   br0

    The host IP is xx.xx.xx.4 and the guest IP is xx.xx.xx.24; both host and guest are in the same /26 network. Several Linux guests (CentOS, Debian, Ubuntu, Arch) run fine on this host, and even a Windows 2003 x86 guest runs fine — but Windows 2008 does not, and I wonder what the difference is. From the Win2008 guest I can ping nothing: neither the gateway nor any other IP, even those in the same subnet. From outside I also cannot ping the guest. Almost: if I ping it from another server in the same subnet, it barely responds, losing more than 90% of packets. The firewall on the guest is completely off. I have tried setting up the network manually as well as via DHCP, without success (incidentally, DHCP does hand out the correct network settings). I suspect some kind of routing problem, but I have spent a whole day on it and still cannot figure it out. I would appreciate any help.
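
    Since the Linux guests and even Windows 2003 behave while Windows 2008 does not, one difference worth checking — this is an assumption, not something established above — is the emulated NIC model presented to the guest and its in-guest driver. A quick sketch of inspecting and changing it with virsh (the guest name win2008 is hypothetical):

        # Dump the guest's current interface definition
        virsh dumpxml win2008 | grep -A 4 '<interface'

        # Edit the guest and pin an explicit NIC model, e.g.:
        virsh edit win2008
        #   <interface type='bridge'>
        #     <source bridge='br0'/>
        #     <model type='e1000'/>   (or 'virtio', with the virtio-win drivers installed in the guest)
        #   </interface>

        # Power-cycle the guest so the new device is presented
        virsh shutdown win2008 && virsh start win2008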

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through the roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Windows 2003 64-bit, 8 CPUs, 16 GB memory, separate SQL 2005 DB server.

    We had a serious slowdown today with an otherwise fairly well-performing ASP.NET site. For a period of a couple of hours, all page requests took a very long time to be served — e.g. 30-60 s compared to the usual 2 s. The w3wp.exe CPU and memory usage on the web server was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out: no blocks were occurring and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:

        Current Anonymous Users (for the site in question)
        Get Requests/sec (ditto)
        Requests/sec for the ASP.NET application running the site

    Get Requests/sec averaged 100-150 and Requests/sec for ASP.NET averaged 5-10, yet Current Anonymous Users was around 200. And then, as I was watching, Current Anonymous Users began to climb steeply, reaching about 500 within a few minutes — while Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down. I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady and reading (I added a Current Connections counter):

        Current Anonymous Users  : average 30
        Get Requests/sec         : average 100
        Requests/sec for ASP.NET : 5
        Current Connections      : average 300

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there are short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image; then they flip back to their usual levels. So, my questions are:

    1. Thinking of the original performance issue — if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer to be served than usual?
    2. What other counters should I be looking at if this happens again?
    3. What explains the inverse relationship between Get Requests/sec and Current Anonymous Users?
    4. What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.
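
    One thought on question 2: when w3wp looks healthy but latency multiplies, the usual suspect is requests stacking up in the ASP.NET queue or thread pool rather than CPU, so "\ASP.NET\Requests Queued" would be the first counter to add. To have data on hand for the next incident, a counter log can run continuously; a sketch using logman (the site instance name MySite and the output path are examples, not from the question):

        rem Create a counter collection sampling every 15 seconds
        logman create counter SiteSlowdown -si 15 -o C:\PerfLogs\SiteSlowdown ^
            -c "\Web Service(MySite)\Current Anonymous Users" ^
               "\Web Service(MySite)\Get Requests/sec" ^
               "\ASP.NET Applications(__Total__)\Requests/Sec" ^
               "\ASP.NET\Requests Queued" ^
               "\ASP.NET\Request Execution Time"

        logman start SiteSlowdown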

  • Web service becomes unavailable after several concurrent calls

    - by Roman
    We are testing the GoDaddy Virtual Data Center and have run into a very strange issue where our web site becomes unavailable. GoDaddy Support keeps saying the issue is in our web server settings, but looking at the results of our tests, I doubt it.

    Test environment: a Virtual Data Center with Windows, hosted at GoDaddy.com. All servers run Windows Server 2008 R2 Datacenter with IIS 7. Server One has IP address 10.1.0.4 and Server Two has IP address 10.1.0.3; both servers are on a private network not visible from outside. A port forward with IP address 50.62.13.174 is assigned to Server One.

    Test description: JMeter is used as a client app to simulate 30 concurrent users sending 100 SOAP requests each, with an interval of 1 second between requests. The HTTP link used for testing: http://50.62.13.174/v2/webservices.asmx

    Test one: run from a computer in our office. Almost immediately after JMeter starts running, the link above becomes unavailable in a browser. After the test completes, the link remains unavailable in a browser for about 5 more minutes. Remote Desktop keeps working, so we can still connect to Server One remotely. About 5 minutes after test completion, the link becomes available in a browser again.

    Test two: run from Server Two (which is part of our virtual data center). The test works very well, with no visible delays in processing. The link is available in a browser the whole time.

    Test three: run from Server One itself using localhost. The result is the same as in test two — no issues.

    Test four: we repeated test one from other computers located in different countries, all with the same result as test one.

    Conclusion: since the test works well from Server Two but fails from outside our virtual data center, we suspect issues with the network or its capacity. The whole behaviour looks like our requests from outside get stuck somewhere before reaching our virtual data center. Has anybody had similar issues in the past? Is there any chance something is wrong with our server settings?
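
    One way to narrow down where the connections die — a diagnostic sketch, not a known fix — is to drive load from outside with something minimal like curl while counting what actually reaches Server One (URL as in the question):

        # From an outside machine: 30 clients, each sending one request per second
        for c in $(seq 1 30); do
            ( for i in $(seq 1 100); do
                  curl -s -o /dev/null -w '%{http_code}\n' \
                      http://50.62.13.174/v2/webservices.asmx
                  sleep 1
              done ) &
        done
        wait

        # Meanwhile, on Server One (cmd.exe):
        #   netstat -an | find ":80" | find /c "ESTABLISHED"

    If the connection count on Server One stays low while the outside clients stall, the connections are being dropped before they ever reach IIS, which would point at the port-forward device rather than your server settings.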

  • Allow email from a particular sender through spam filter

    - by Greg
    We are running Exchange 2010 and are using the built-in anti-spam feature. We have set up Content Filtering, IP Block List Providers, Sender ID, and Sender Reputation, and it filters out most of the junk — but it also quarantines all emails from one of our customers. The messages are being quarantined by the Content Filter agent (report below). How can I add an exception for this email address to the Content Filter? I can see how to set up an exception for a delivery address ("Don't filter messages sent TO the following recipients"), but I want to add [email protected] to our safe list. I don't want to whitelist the whole domain, as it is a very popular ISP in Australia and we often get junk from it.

    Filter report:

        Diagnostic information for administrators:

        Generating server: something.com

        [email protected]
        #550 5.2.1 Content Filter agent quarantined this message ##

        Original message headers:

        Received: from icp-osb-irony-out4.external.iinet.net.au (203.59.1.220)
         by server.local.something.com.au (192.5.0.105) with Microsoft SMTP Server
         id 14.1.218.12; Mon, 5 Nov 2012 02:40:40 +1100
        X-IronPort-Anti-Spam-Filtered: true
        X-IronPort-Anti-Spam-Result: AscOALeLllB8qwLw/2dsb2JhbABEKYUFhiigRQOWCwQEgQiBCIIZFAEBTiwCCAIBBwEIFDkBBBoqARoCAQIDAYd4uEuRXGEDiCWFT44UijeDAw
        X-IronPort-AV: E=Sophos;i="4.80,710,1344182400"; d="scan'208,217";a="55137861"
        Received: from unknown (HELO asdf83c05c53a3) ([124.171.2.240]) by
         icp-osb-irony-out4.iinet.net.au with ESMTP; 04 Nov 2012 23:40:26 +0800
        Message-ID: <E8C866D0299E4BCB8B156723893EB735@asdf83c05c53a3>
        From: Customer <[email protected]>
        To: 'Person' <[email protected]>
        Subject: A long sentence
        Date: Mon, 5 Nov 2011 06:07:57 +1100
        MIME-Version: 1.0
        Content-Type: multipart/alternative; boundary="----=_NextPart_000_0005_01C5F962.3CD09120"
        X-Priority: 3
        X-MSMail-Priority: Normal
        X-Mailer: Microsoft Outlook Express 6.00.2900.5931
        X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157
        Return-Path: [email protected]
        Received-SPF: None (server.local.something.com.au: [email protected] does not
         designate permitted sender hosts)
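
    For reference, the Content Filter agent does support a per-sender bypass list, configurable from the Exchange Management Shell. A sketch — the address below is a hypothetical stand-in for the customer's (redacted) address:

        # Add a single sender to the Content Filter bypass list, preserving existing entries
        Set-ContentFilterConfig -BypassedSenders @{Add="customer@bigisp.com.au"}

        # Verify
        Get-ContentFilterConfig | Format-List BypassedSenders, BypassedSenderDomains

    Bypassing a single sender this way leaves the rest of the ISP's domain subject to filtering, which matches the requirement above.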

  • Proper Imaging Procedures to Restore and Deploy Image with Separate System Reserved Partition

    - by alharaka
    UPDATE: As per my experience here, no one responded. If I do not hear back from TechNet forum members about it, I will post a bounty here, if it makes a difference.

    I have banged my head against a wall for what seems like all week. I am going to explain my simple procedure, and how none of it — absolutely none — seems to work afterward, despite the few alternatives available and everyone on the internet assuming this is how to do it.

    Diskpart commands to create the filesystem structure:

        REM Select the disk targeted for deployment.
        REM
        REM NOTE: Usually disk 0, but drive failure can make it external USB
        REM media. This will erase the drive regardless!
        select disk 0
        REM Remove previous formatting.
        clean
        REM Create the System Reserved partition for the bootloader and boot files.
        create partition primary size=100
        REM Format the volume.
        format fs=ntfs label="System Reserved" quick override noerr
        REM Assign the System Reserved partition the C: mount for now.
        assign letter=C
        REM The main system partition; size not specified, so it occupies the whole drive.
        create partition primary
        REM Format the volume.
        format fs=ntfs quick override noerr
        REM Assign the OS partition the D: mount for now.
        assign letter=D
        REM Make the System Reserved partition the active/bootable partition.
        sel disk 0
        sel partition 1
        active
        REM Close out the diskpart session.
        exit

    Now, I thought this was madness, but it turns out the System Reserved partition and the standard "system partition" (C:, commonly both the boot and system volume, where you find the Windows directory AND the bootmgr/ntldr files — this is where Windows 7 diverges), as mounted in the Windows PE session where I run these commands, do not matter. See reference here. Since this needs to be BitLocker-ready, enter this crappy System Reserved partition: a separate 100 MB of awesome that goes before the regular boot volume. I do this, then proceed to the next step.

    Deploy the System Reserved and normal system images:

        REM C: is still the System Reserved partition, and the image is just what it sounds like.
        imagex /apply G:\images\systemreserved.wim 1 C:
        REM D: is what will be the C: system partition on reboot, supposedly.
        imagex /apply G:\images\testimage.wim 1 D:

    Reboot the system. Now, the images I just captured should look good. This is not even sysprepped — I am reapplying the same fscking image I prepared on the same reference workstation hours before. The problem is I get "0xc000000e: could not detect the accessible boot device \Windows\system32\winload.exe", or different kinds of nonsense revolving around not being able to find the boot volume with all the right files. I have tried different variations of things, and now none of them work. I tried repairs with bcdboot, with and without a fresh System Reserved partition, with bootrec, and by manually editing the damn BCD store with bcdedit. I tried finalizing the above process with and without "bootsect /nt60 C: /force". I need to wrap up and automate this procedure. What am I doing wrong that does not make the image happy, but really just miserable?
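
    For what it's worth: the BCD store captured inside the image still describes the disk signature and partition offsets of the reference machine, so a 0xc000000e after re-applying is consistent with a stale store rather than bad images. A repair sequence worth trying under that assumption — still in the Windows PE session, with C: and D: mounted as above — rebuilds the store from scratch instead of editing the captured one:

        REM Rewrite the boot sector (and MBR) of the active partition for BOOTMGR
        bootsect /nt60 C: /force /mbr

        REM Recreate the BCD store on the System Reserved partition from the
        REM Windows installation applied to D:
        bcdboot D:\Windows /s C: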

  • Compiling PHP with GD crashes with EXC_BREAKPOINT (SIGTRAP) on PPC Mac

    - by Ömer
    First of all, I should say that I have searched the whole Internet for this problem but couldn't find any solution yet. I have a Mac mini PowerPC (PPC) on which I run the Apache web server (httpd-2.2.22) with PHP (5.4.0), and I do all the configure & compile jobs myself. If I configure with:

        './configure' '--prefix=/usr/local/php5' '--mandir=/usr/share/man' '--infodir=/usr/share/info' \
        '--sysconfdir=/etc' '--with-config-file-path=/etc' '--with-zlib' '--with-zlib-dir=/usr' \
        '--with-openssl=/usr' '--without-iconv' '--enable-exif' '--enable-ftp' '--enable-mbstring' \
        '--enable-mbregex' '--enable-sockets' '--with-mysql=/usr/local/mysql' \
        '--with-pdo-mysql=/usr/local/mysql' '--with-mysqli=/usr/local/mysql/bin/mysql_config' \
        '--with-apxs2=/usr/local/apache2/bin/apxs' '--with-mcrypt'

    then PHP works flawlessly. But if I add the GD module by appending these to the script above:

        '--with-gd' '--with-jpeg-dir=/usr/local/lib' '--with-freetype-dir=/usr/X11R6' \
        '--with-png-dir=/usr/X11R6' '--with-xpm-dir=/usr/X11R6'

    then PHP gets configured and compiled without any errors, but it causes EXC_BREAKPOINT (SIGTRAP) (see the Crash Reporter log below) when I request a page that calls the PHP module. It's obvious that something related to the GD module is causing this — probably FreeType, because it's present in the log, though of course that isn't definite. When PHP crashes (or, more accurately, httpd), the CPU goes to 100% for 10 to 15 seconds until it recovers. I need to use the GD module and keep the Mac mini PowerPC. So, what should I do to solve this problem?

        Process:         httpd [79852]
        Path:            /usr/local/apache2/bin/httpd
        Identifier:      httpd
        Version:         ??? (???)
        Code Type:       PPC (Native)
        Parent Process:  httpd [79846]

        Date/Time:       2013-11-04 15:44:28.444 +0200
        OS Version:      Mac OS X 10.5.8 (9L31a)
        Report Version:  6
        Anonymous UUID:  0178B7F8-2241-43F7-A651-9E7234D41A37

        Exception Type:  EXC_BREAKPOINT (SIGTRAP)
        Exception Codes: 0x0000000000000001, 0x0000000093c11e0c
        Crashed Thread:  0

        Application Specific Information:
        *** single-threaded process forked ***

        Thread 0 Crashed:
        0   com.apple.CoreFoundation        0x93c11e0c __CFRunLoopFindMode + 328
        1   com.apple.CoreFoundation        0x93c13d88 CFRunLoopAddSource + 276
        2   com.apple.DiskArbitration       0x901a6e8c DAApprovalSessionScheduleWithRunLoop + 52
        3   ...ple.CoreServices.CarbonCore  0x9512e67c _FSGetDiskArbSession(__DASession**, __DAApprovalSession**) + 540
        4   ...ple.CoreServices.CarbonCore  0x9512e420 CreateDiskArbDiskForMountPath(char const*) + 84
        5   ...ple.CoreServices.CarbonCore  0x9512d2c8 FSCacheableClient_GetVolumeCachedInfo(char const*, statfs const*, CachedVolumeInfo*, __DADisk*, __DADisk**) + 280
        6   ...ple.CoreServices.CarbonCore  0x9512cca4 MountVolume(char const*, statfs*, unsigned char, unsigned char, __DADisk*, short*) + 352
        7   ...ple.CoreServices.CarbonCore  0x9512ca48 MountInitialVolumes() + 172
        8   ...ple.CoreServices.CarbonCore  0x9512c4d4 INIT_FileManager() + 164
        9   ...ple.CoreServices.CarbonCore  0x9512c390 GetRetainedVolFSVCBByVolumeID(unsigned long) + 48
        10  ...ple.CoreServices.CarbonCore  0x9512adf4 PathGetObjectInfo(char const*, unsigned long, unsigned long, VolumeInfo**, unsigned long*, unsigned long*, char*, unsigned long*, unsigned char*) + 184
        11  ...ple.CoreServices.CarbonCore  0x9512acc4 FSPathMakeRefInternal(unsigned char const*, unsigned long, unsigned long, FSRef*, unsigned char*) + 64
        12  libfreetype.6.dylib             0x0070a0fc FT_New_Face_From_Resource + 56
        13  libfreetype.6.dylib             0x0070a3b0 FT_New_Face + 48
        14  libphp5.so                      0x0118d1a8 fontFetch + 824
        15  libphp5.so                      0x0118edac php_gd_gdCacheGet + 220
        16  libphp5.so                      0x0118d6d8 php_gd_gdImageStringFTEx + 360
        17  libphp5.so                      0x011763c0 php_imagettftext_common + 1504
        18  libphp5.so                      0x01176494 zif_imagefttext + 20
        19  libphp5.so                      0x014b9c68 zend_do_fcall_common_helper_SPEC + 1048
        20  libphp5.so                      0x01452898 _ZEND_DO_FCALL_SPEC_CONST_HANDLER + 440
        21  libphp5.so                      0x014ba878 execute + 776
        22  libphp5.so                      0x013f190c zend_execute_scripts + 316
        23  libphp5.so                      0x013779f4 php_execute_script + 596
        24  libphp5.so                      0x014bbe64 php_handler + 1972
        25  httpd                           0x000020c0 ap_run_handler + 96
        26  httpd                           0x00006ae0 ap_invoke_handler + 224
        27  httpd                           0x000305c4 ap_process_request + 116
        28  httpd                           0x0002c768 ap_process_http_connection + 104
        29  httpd                           0x00012d30 ap_run_process_connection + 96
        30  httpd                           0x00012ecc ap_process_connection + 92
        31  httpd                           0x000373e4 child_main + 1220
        32  httpd                           0x000376a8 make_child + 296
        33  httpd                           0x000377e4 startup_children + 100
        34  httpd                           0x000387d4 ap_mpm_run + 3988
        35  httpd                           0x0000a320 main + 3280
        36  httpd                           0x000019c0 start + 64
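
    Reading the backtrace: the crash enters FreeType through FT_New_Face_From_Resource, which is FreeType's Carbon resource-fork font path, and the "single-threaded process forked" note suggests a Carbon framework being initialized inside a forked Apache child — a known-fragile combination. One diagnostic step (an assumption to verify, not a confirmed fix; the dylib path below is derived from the --with-freetype-dir above) is to check exactly which libfreetype PHP picked up and whether it drags in Carbon:

        # Which FreeType did the PHP module link against?
        otool -L /usr/local/apache2/modules/libphp5.so | grep -i freetype

        # Does that FreeType itself link against Carbon/CoreServices?
        otool -L /usr/X11R6/lib/libfreetype.6.dylib | grep -iE 'carbon|coreservices'

    If it does, building a plain X11-only FreeType (one with no Carbon/CoreServices dependency) and pointing --with-freetype-dir at it before recompiling PHP would be the thing to try.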

  • Partition table corrupted (USB flash drive)

    - by 13ren
    It's an 8 GB Patriot thumb drive, which I've used extensively with lots of data. Today it is detected, but all data is gone. (EDIT: at least some data is still there, but the partition table is gone.)

    EDIT @Sathya (thanks), here's the relevant output from sudo fdisk -l:

        Disk /dev/sdc: 8019 MB, 8019509248 bytes
        247 heads, 62 sectors/track, 1022 cylinders
        Units = cylinders of 15314 * 512 = 7840768 bytes

        Disk /dev/sdc doesn't contain a valid partition table

    So it looks like it is /dev/sdc, with that 8 GB... and no partition table. I tried to mount /dev/sdc (and then ran dmesg | tail):

        /media> sudo mount /dev/sdc mytmp
        mount: wrong fs type, bad option, bad superblock on /dev/sdc,
               missing codepage or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

        /media> dmesg | tail
        [   24.300000] sdc: unknown partition table
        [   24.320000] sd 2:0:0:0: Attached scsi removable disk sdc
        [   24.370000] usb-storage: device scan complete
        [   26.870000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [   26.870000] EXT2-fs: group descriptors corrupted!
        [   50.420000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [ 5565.470000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [ 5565.470000] EXT2-fs: group descriptors corrupted!

    EDIT @Col: results from testdisk:

        Disk /dev/sdc - 8013 MB / 7642 MiB - CHS 1022 247 62
        Current partition structure:
             Partition                  Start        End    Size in sectors
        Partition sector doesn't have the endmark 0xAA55

    After I hit [Proceed], it says:

        Structure: Ok.
        Keys A: add partition, L: load backup, Enter: to continue

    The "Structure: Ok." seems reassuring... will "A: add partition" make my old data accessible (if it's still there), or will it make a new, fresh partition? Another option is "[ MBR Code ] Write TestDisk MBR code to first sector" — would it be better to do this?

    EDIT: I found that at least some of my data is still on the flash drive by using the command below and searching for English text in less (like " the "):

        cat /dev/sde | tr -cd '\11\12\40\1540-\176' | less

    (The drive changed from /dev/sdb to /dev/sde because I connected some extra drives today.) I've learnt that /dev/sde1 would be the first partition, while /dev/sde is the whole drive. Because Unix treats these devices just like files, you can use all the ordinary Unix file commands on them, like cat, and then process them like any other stream of data. The tr above removes non-printable characters ('\40' is space, which I wanted to preserve). In less, you can use "/" to search, similar to Vim.

    How can I get my data back (assuming it's still there)? If only the partition table is corrupted, is there a standard "partition recovery tool"? Is there a way to "repartition" without deleting everything?
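
    Before letting testdisk (or anything else) write to the stick, it's cheap insurance to take a raw image first and run all recovery experiments against the copy — a sketch, assuming the drive is still /dev/sde and ~8 GB of free space is available:

        # Raw sector-level image of the whole stick; keep going past read errors
        sudo dd if=/dev/sde of=~/patriot-8gb.img bs=4M conv=noerror,sync

        # Experiment on the image instead of the hardware, e.g.:
        testdisk ~/patriot-8gb.img

    That way, if a recovery attempt makes things worse, the original drive (or a fresh copy of the image) is still intact.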

  • Mercurial not receiving push

    - by Jeffrey04
    I have a Mercurial web frontend (hgwebdir.cgi) installed on a server, with nginx installed in front of it as a reverse proxy, as my friend suggested. However, whenever a large changeset is pushed (via a script), it fails. I found an issue ticket at google-code that describes a similar problem, and there is a solution (comment #39) that says:

        So the server side answer is: don't send the 401 back early. Be as
        slow/dumb as 'hg serve' and make the hg client send the bundle twice.

    How do I do that? My current nginx config:

        location /repo/testdomain.com {
            rewrite ^(.*) http://bpj.kkr.gov.my$1/hgwebdir.cgi;
        }

        location /repo/testdomain.com/ {
            rewrite ^(.*) http://bpj.kkr.gov.my$1hgwebdir.cgi;
        }

        location /repo/testdomain.com/hgwebdir.cgi {
            proxy_pass           http://localhost:81/repo/testdomain.com/hgwebdir.cgi;
            proxy_set_header     Host $http_host;
            proxy_set_header     X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering      on;
            client_max_body_size 4096M;
            proxy_read_timeout   30000;
            proxy_send_timeout   30000;
        }

    In the access log we keep seeing 408 entries:

        incoming.ip.address - - [18/Nov/2009:08:29:31 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"
        incoming.ip.address - - [18/Nov/2009:08:37:14 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"

    Is there anything else I can do on the server? Solving it on the server side is preferable.

    Further findings: Bitbucket seems to have this solved on the server side (check the liquidhg Bitbucket project and its Diagnosis wiki page), though I can't find the config anywhere:

        What happens next varies depending on your server. Some servers refuse
        the BODY, simply closing the pipe from the client and causing Mercurial
        to fail. Some, like Apache (at least the way I configure it, and that
        could be part of the problem) and nginx (the way BitBucket.org
        configures it), accept the BODY, though it may take a few retries.
        Bottom line: if Mercurial doesn't fail the push, it sends the changeset
        data at least once to a server that has already told it it lacks
        credentials (more on this at Blame). Assuming Mercurial is still
        running, it resends the "unbundle" request and data, this time with
        authentication. Finally, Apache accepts the data successfully. Nginx,
        OTOH, at least under BitBucket's configuration, seems to reassemble the
        previous body (the one that lacked authentication) and somehow keep
        Mercurial from re-sending the whole body.
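
    Since an HTTP 408 is nginx giving up while waiting for the request body, one server-side knob worth trying — a sketch, not a verified fix for the hg retry dance — is to give the client far longer to (re)send the bundle by raising client_body_timeout in the proxied location (it defaults to 60 s):

        location /repo/testdomain.com/hgwebdir.cgi {
            proxy_pass           http://localhost:81/repo/testdomain.com/hgwebdir.cgi;
            proxy_set_header     Host $http_host;
            proxy_set_header     X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 4096M;

            # How long nginx waits between reads of the request body
            # before answering 408
            client_body_timeout  3600;
        }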

  • How to get more information from a system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas for how to confirm any diagnosis.

    Some background: the servers are DL160-class machines with hardware RAID between two disks. They're running a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 "main", most CPU-hungry processes are bound to one core each via CPU affinity; other random background scripts are not pinned anywhere. The filesystem is writing ~1.5k blocks/s the whole time (going above 2k/s at peak times). Normal CPU usage for these servers is ~60% on 7 cores, with some minimal usage on the last one (whatever's running on shells, usually).

    What actually happens is that the "main" services start using 100% CPU at some point, mostly stuck in kernel time. After a couple of seconds, the load average goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel report a hung task (but not always):

        [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
        [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [118951.273037] zsh           D 0000000000000000     0 15911      1
        [118951.273093]  ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
        [118951.273183]  ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
        [118951.273274]  0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
        [118951.273335] Call Trace:
        [118951.273424]  [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        [118951.273510]  [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
        [118951.273563]  [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
        [118951.273613]  [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
        [118951.273692]  [<ffffffff8022c168>] default_wake_function+0x0/0xe
        ....

    This would point at RAID/disk failure; however, sometimes the tasks are hung on the kernel's gettsc, which would indicate some generally weird hardware behaviour. The box is also running MySQL (almost read-only, 99% cache hit rate), which seems to spawn a lot more threads during the system problems. During the day it does ~200k q/s (selects) and ~10 q/s (writes). The host never runs out of memory or swaps, and no OOM reports are spotted.

    We've got many boxes with similar/identical hardware and they all seem to behave this way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. The applications themselves don't really report anything wrong when they're running, and I can run anything safely on the same hardware in an isolated environment. What can I do to narrow down the problem? Where else should I look for an explanation?
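
    Since the box becomes unreachable right when the interesting kernel messages appear, one low-effort option — a sketch; the IPs, interface, and MAC address are placeholders — is to stream the kernel log to another machine over UDP with netconsole, so the traces survive even after SSH dies:

        # On a second machine: listen for the log stream
        # (flag syntax varies between netcat flavours)
        nc -u -l -p 6666

        # On the crashing server: send kernel messages out via eth0
        # format: src-port@src-ip/dev,dst-port@dst-ip/dst-mac
        modprobe netconsole netconsole=6665@192.168.0.1/eth0,6666@192.168.0.2/00:11:22:33:44:55

        # Make sure enough message detail goes to the console
        dmesg -n 8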

  • Ubuntu Lucid: Erratic screen behaviour after boot

    - by fgysin
    In short: about 50% of the time I have a screwed-up monitor setup after reboot; the other 50% it is totally correct.

    Now the longer version. I updated my machine from 9.04 to 10.04 (via 9.10). At first I ran into some monitor problems (I have a 3-monitor setup) because of the known Xinerama bug in the new X server driver: behaviour is messed up if the mouse goes either left of or above screen number 0, so I had to make my left-most monitor screen 0. Everything finally worked out, and I got my 3-monitor setup back with Xinerama enabled, giving one big desktop stretched over 3 screens.

    Now the fun part. Every time I start up my machine, only one of the 3 monitors gets a signal and wakes up: the system only recognizes the left-most monitor (screen 0) and crams all the desktop stuff into that one screen. If I go into nvidia-settings I see only one physical device, although all 3 are connected and have power. When I look at xorg.conf I can still see my old setup with 3 devices, 3 screens, Xinerama active, etc., but I have been totally unable to get the 3 monitors working (I tried unplugging monitors, reconfiguring the whole nvidia setup, ...).

    But it gets even better. When I restart the machine (i.e. choose the restart option from the Ubuntu menu), it shuts down and tries to restart. The restart then gets stuck after the Ubuntu splash screen with the 'loading bar' (the moving-dots thingy), and I am forced to kill the machine by cutting power. But after the power cut, the machine boots up normally and I suddenly get my working 3-monitor setup back. That is, until the next time I shut down and start up, when it all starts over again and I only have one monitor (see above).

    I really have a hard time seeing where the error is. It must be that the restart boot somehow differs from a 'normal' boot. But the fact that it gets stuck and I need to cut power, which then basically triggers a 'normal' boot, does not really support this theory...

    My setup (please tell me if you need further info): 3 monitors as 3 screens forming one desktop (with Xinerama); 2 nvidia cards, where screens 0 and 1 are on card 0 and screen 2 is on card 1; Ubuntu 10.04 Lucid Lynx (updated from 9.10, 9.04, ...).

    I would appreciate any ideas on the subject; at the moment I really don't have a clue what to do...

  • What is Remote Desktop Services in Windows Server 2008 R2 all about?

    - by fejesjoco
    Seriously, I'm lost in all that sales mumbo-jumbo. Let's say I want 1 or 2 users to be able to log on to a server remotely and run Word, Visual Studio, Firefox, and whatever. Do I gain anything at all if I install Remote Desktop Services? Or do I just install the Desktop Experience feature pack, enable Remote Desktop, and voilà — nobody will ever notice the difference?

    Here's what TechNet says about the Remote Desktop Session Host:

        A Remote Desktop Session Host (RD Session Host) server is the server
        that hosts Windows-based programs or the full Windows desktop for
        Remote Desktop Services clients. Users can connect to an RD Session
        Host server to run programs, to save files, and to use network
        resources on that server. Users can access an RD Session Host server
        by using Remote Desktop Connection or by using RemoteApp.

    But good old simple Remote Desktop can also host a full Windows desktop for remote clients so that they can run programs, save files, and do all that stuff. Why do they write about it like it's some great new invention, besides wanting to sell it? RDSH doesn't seem all that different. What do I actually install when I install RDSH, given that all those features are already there in Windows?

    What's even more confusing is that you need to take special care when installing applications on an RDSH so that they will be usable by many concurrent users. Why? All modern applications install their program files in one directory, store common settings in the ProgramData folder and the HKLM hive, and store user-specific settings in the Users folder and the HKCU hive. They are designed to be usable by many users on the same machine: 2 or 2000 users can use them concurrently without any effort. I can sign in with 2 users on a server with only Remote Desktop enabled, and both of us can run Word or anything else without any problems, can't we? So what changes if I set RDSH to install mode — and what happens if I don't? Why does the feature to switch between install and execute mode exist at all? (See the sketch after this paragraph.)

    Yes, I know of some advantages of Remote Desktop Services: there's no 2-user limit, it supports virtualization, video acceleration and such, and it has a whole infrastructure with a gateway, web access, a connection broker, etc. But I don't need those — so if you take these away, how are the two technologies different? From the articles it seems like they are completely different technologies, whereas it looks to me like they are the same at the core, and Remote Desktop Services just adds some additional features without reinventing anything.
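
    For reference, the install/execute mode switch mentioned above is driven by the standard "change user" commands; while in install mode, the session host captures per-user .ini and registry writes made by a setup program so that later users get their own copies. A minimal sketch:

        rem Put the session host into install mode before running a setup
        change user /install

        rem ... run the application installer here ...

        rem Return to normal execution mode afterwards
        change user /execute

        rem Check the current mode at any time
        change user /query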

  • Missing startup screens and slow bootup/login after using WinClone to expand Bootcamp partition

    - by user26453
    I used WinClone to back up the Bootcamp partition — a Windows 7 Ultimate install — on my late-2006 MacBook Pro, because I wanted to expand the partition's size. It worked reasonably well, with some hiccups along the way and some remaining issues.

    The first issue I ran into was with the Boot Camp Assistant utility: it would not recreate the partition, due to a lack of the contiguous space the Bootcamp partition requires. As a result, I wiped the whole drive, reinstalled Snow Leopard, did the minimum amount of system updates, and created and formatted a new Bootcamp partition. WinClone restored the image without complaint, and the image was automatically resized to the new partition's size.

    The second issue appeared on the first boot into Windows. The first thing I noticed was that instead of the newer "slick" startup screen (four colors wisping around, a Windows 7 title), there was a more old-school startup screen (a progress bar with block increments, a yellow/greenish color, nothing else really). The initial boot to the login screen was slow, perhaps as Windows dealt with the partition changes. After logging in, the screen went blank and the computer seemed to hang for a minute before completing the login. After subsequent restarts the slick screen is still missing; booting to the login screen takes a normal time, but the time from login to an active desktop is still very slow.

    As a side note, I had previously seen this behavior — a long delay between login and a fully loaded desktop — only when the computer tried to hibernate and failed (the battery is really bad). I would see it on the next startup, but not on subsequent ones. So, a potential cause: I imaged the partition after hibernating out of Windows. From reading posts and guides on the subject, this was not recommended, and perhaps shouldn't even have worked. Could the partition be stuck in some weird mode as a result, making the boot issues appear? I've attempted to disable hibernation and restart, and tried to delete the .sys file that hibernation uses. Other fixes I'm thinking of attempting are booting a Win7 disc and repairing the install/partition. I can't shake the nagging feeling that something isn't right, given the modified boot screens and the slow login process.
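
    On the hibernation theory: it is at least easy to rule out a stale hiberfil.sys and the resume-from-hibernation boot path. A sketch, run inside Windows from an elevated command prompt (this assumes the slow logins are resume-related, which is unconfirmed):

        rem Disable hibernation entirely; this also deletes C:\hiberfil.sys
        powercfg /h off

        rem Inspect the default boot entry for leftover resume settings
        bcdedit /enum {default}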

  • Possible capacitor plague - need help identifying

    - by cornjuliox
    I've been having some PC power issues lately, and I think I've tracked them down to a bad power supply. Lately, when I'm on my PC, it often restarts without warning, displaying "Hypertransport sync flood error occurred last boot." once POST finishes. I've googled the error but can't come to a definitive conclusion as to what's causing it. I've seen posts suggesting that it might be a power supply issue, but nothing conclusive. Here's what I've done so far:

    - I haven't installed anything suspect within the last 3 months.
    - I do overclock just a tiny bit, so I tried raising the voltages a little. That didn't work, so I brought both the CPU multiplier and the voltages back to their default settings, but that didn't solve the problem either. The problem still occurs.
    - I AV-scanned the whole system; nothing suspect.
    - I suspected a bad power supply, so I cracked it open and found what is shown in the photo. I think it might be cap plague, but I'm not sure — it honestly looks more like glue.

    Could someone help me figure out what might be wrong with this PC?

    EDIT: Sometimes, after these restarts, I notice that the GPU fan doesn't spin up, and the single rear case fan that happens to be connected to the same molex Y-cable as the graphics card doesn't spin up either. Anything to that?

    EDIT 2: I do use the system quite heavily, but I don't know how that factors into this. I often play Diablo 3 and EVE Online at the same time, frequently alt-tabbing between the two. I also have Firefox open in the background, sometimes with several tabs, and if I feel like it, I'll mute the in-game sound and open foobar2000 for better music. Could it be that I'm just pushing this thing too hard?

    EDIT 3: I also noticed something odd. Right before I experience these restarts, my monitor suffers from very faint lines of static moving across the screen. The monitor is still very much usable, but it is very annoying. Following a restart the static disappears, then gradually reappears over the next few days, and then the machine restarts again. I find that very odd.

    System specs for good measure:

        PSU: Orion 600 W
        CPU: AMD Athlon II X3 440 (overclocked to 3.14 GHz; CPU multiplier raised to x13 from x10)
        Motherboard: MSI G40-775
        GPU: inno3D GTX 550 Ti, 1 GB
        RAM: 4 GB DDR3
        Hard drive: 500 GB Samsung SATA HD

  • What's the best way to do user profile/folder redirect/home directory archiving?

    - by tpederson
    My company is in dire need of a redesign of how we handle user account administration, and I've been tasked with automating the process. The end goal is to have the whole works triggered by the business, with IT only looking in when an error is reported. The interim phase will be semi-manual: a level 2 tech inputs the user's info and supervises the process.

    The current hurdle I'm facing is user profile archiving. Our security team requires us to archive the profile directories of any terminated user for 60 days, in case the legal team requires access to their files. Our AD is as much a mess as everything else, so some users have home directories and some have profiles. Anyone who has a profile directory in AD also has a good deal of their profile redirected to our file servers over DFS. To complete the process manually, you find the user in AD, disable them, find their home/profile directory, go there and take ownership, create an archive folder, move all their files over, and then delete the old directory. Some users have many, many gigs of nonsense, and this can take quite some time. Even automated, the process would not be a quick one.

    I'm thinking that I need a client-side C# GUI for the quick stuff and a server-side batch script or console app to offload this long-running process. I have a batch script that works decently using takeown and robocopy, but I wonder if a C# console app would do a better job.

    So, my question at long last is: what do you think is the best way to handle this? I can't imagine this is a unique problem — how do other admins get this done? The last place I worked was easily 10x larger than the place I'm in now; if we had been doing this manual crap there, they'd have needed a team of at least 30 full-time workers to keep up. I have decent skills in C#/.NET and batch scripting, but I'm a quick study and have used most every language once or twice. Thank you for reading this, and I look forward to seeing what imaginative solutions you all can come up with.
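
    For the long-running server-side piece, a batch sketch along the lines described — the share paths, archive root, and log location are hypothetical, and robocopy /MOVE deletes the source as it goes, so test on a scratch directory first:

        @echo off
        rem Usage: archive_profile.bat <username>
        set SRC=\\fileserver\profiles\%1
        set DST=\\fileserver\archive\%1_%DATE:/=-%

        rem Take ownership recursively so locked-down ACLs don't stop the copy
        takeown /f "%SRC%" /r /d y

        rem Move everything, keeping timestamps and ACLs, with minimal retries
        rem (the %DATE% substitution above is locale-dependent)
        robocopy "%SRC%" "%DST%" /E /MOVE /COPYALL /R:1 /W:1 /LOG+:C:\logs\archive_%1.log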

  • Filtering iTunes library items by file location

    - by Cawas
    Three answers and unfortunately no solution yet.

    The problem: I've got way more than 1000 duplicated items in my iTunes library pointing to a non-existent place (the "Where" field in the Get Info window), along with other duplicated items and other MIAs (Missing In Action). Is there any simple way to just delete all of them, and only them, from the library? By that I mean: some MIAs are pointing to /Volumes while some are pointing to .../music/Music/... or just .../music/.... I want to delete all items pointing to /Volumes, so that I can later recover the rest. Check the image below.

    Some background: I tried searching for a specific keyword in the path and creating a smart playlist, but with no result. Being able to just sort the whole library by path would be a perfect solution! I believe old iTunes versions could do that. PowerTunes can do it (sort by path), but I can't do anything with its list. I would also welcome any program able to handle this, then import and properly export back the iTunes library.

    Since this seems to just not be clear enough: AppleScript doesn't work. That's because AppleScript just can't gather the missing info anywhere in the iTunes library. Maybe we could use AppleScript by opening the XML file, but that's a whole nother issue. Here's a quote from my conversation with Doug (the man himself) Adams last December:

        I don't think you do understand. There is no way to get the path to the
        file of a dead track because iTunes has "forgotten" it. That is, by
        definition, what a dead track is. - Doug

        On Dec 21, 2010, at 7:08 AM, Caue Rego wrote: yes I understand that and
        have seen the script. but I'm not looking for the file. just the old
        broken path reference to it. - Sent from my iPhone

        On 21/12/2010, at 10:00, Doug Adams wrote: You cannot locate missing
        files of dead tracks because, by definition, a dead track is one that
        doesn't have any file information. If you look at "Super Remove Dead
        Tracks", you will notice it looks for tracks that have "missing value"
        for the location property.
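
    On the "open the XML file" idea: the library XML does keep the last-known Location of every track as a file:// URL, so a rough survey of entries still pointing at /Volumes is a one-liner — a sketch, assuming the default library path and the file://localhost URL form used by iTunes of that era:

        # Count tracks whose last known location was under /Volumes
        grep -c 'file://localhost/Volumes/' ~/Music/iTunes/"iTunes Music Library.xml"

        # Peek at a few of them (Location values only)
        grep -o 'file://localhost/Volumes/[^<]*' ~/Music/iTunes/"iTunes Music Library.xml" | head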

  • Problem with USB drivers (Windows XP)

    - by Carl
    I obtained the drivers from the manufacturer for my HT-Link NEC USB 2.0 2-port CardBus card. When I plugged in the card before I had the drivers, 3 new entries showed up in the Device Manager: two "NEC PCI to USB Open Host Controller" entries and one "Standard Enhanced PCI to USB Host Controller". With the card plugged in, I uninstalled those two drivers and then removed the card.

    I copied the new drivers to c:\windows\system32\drivers and the .inf file to c:\windows\inf. I also copied the drivers and the .inf to a new directory called c:\windows\drivers\ousb2, then reinserted the card. Windows automatically installed the same drivers as before. I selected 'Update Driver' on the "NEC PCI to USB..." entry and didn't see any other options. I then selected 'Have Disk', pointed it at c:\windows\drivers\ousb2, and got the message "The specified location does not contain information about your hardware."

    I then selected 'Update Driver' on the "Standard Enhanced PCI to USB..." entry and manually selected "USB 2.0 Enhanced Host Controller" (OWC 4/15/2003 2.1.3.1). Windows then automatically found a USB root hub, and I manually selected "USB 2.0 Root Hub Device" (OWC 4/15/2003 2.1.3.1). Now there are two sections in the Device Manager titled "Universal Serial Bus controllers". I plugged in my external USB hard disk adapter, and a "USB Mass Storage Device" was added to the first set. Here's how it looks (with drivers from the properties):

        [Universal Serial Bus controllers]
        Intel(R) 82801DB/DBM USB 2.0 Enhanced Host Controller - 24CD (6/1/2002 5.1.2600.0)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C2 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C4 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C7 (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        USB Mass Storage Device
        USB Root Hub (7/1/2001 5.1.2600.5512)
        (5 more USB Root Hubs - same driver)

        [Universal Serial Bus controllers]
        USB 2.0 Enhanced Host Controller (OWC 4/15/2003 2.1.3.1)
        USB 2.0 Root Hub Device (OWC 4/15/2003 2.1.3.1)

    When I unplug the card, the two "NEC PCI to USB..." entries in the first set disappear, and the whole second set disappears. (I unplugged the hard disk adapter first.) The hard disk adapter still doesn't work in that CardBus card with the new drivers. I don't think the above looks right: a second set of USB controllers listed in the Device Manager, the NEC entries still in the first set, and the USB mass storage device also still in the first set. Any help appreciated. (Windows XP Pro SP3 with all current updates.)
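
    One housekeeping step that sometimes helps after this kind of driver shuffling — offered as a sketch, not a guaranteed fix — is to show and remove the "ghost" (non-present) USB devices that XP keeps from earlier driver installs, then let it redetect the card cleanly:

        rem From a command prompt: make Device Manager show non-present devices
        set devmgr_show_nonpresent_devices=1
        start devmgmt.msc

        rem In Device Manager: View -> Show hidden devices, then uninstall the
        rem greyed-out NEC/USB entries before reinserting the CardBus card.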

  • Networked storage for a research group, 10-100 TB

    - by Marc
    This is related to this post — http://serverfault.com/questions/80854/scalable-24-tb-nas-for-research-department — but perhaps a little more general.

    Background: we're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing them on one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at a rate of about 10 TB/year. Right now we store the data on a 6-bay Netgear ReadyNAS Pro, but even with 2 TB drives this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short-term backup plan is to get a second ReadyNAS, put it in a different building, and mirror the one onto the other. Obviously, this is somewhat non-ideal.

    Our options:

    1. We can pay our university $400/TB/year for "backed up" online storage. We trust them more than we trust ourselves, but not a whole lot.
    2. We can continue to buy small NASes and mirror them between offices. One limit, although a silly one, is that we don't have an unlimited number of Ethernet jacks.
    3. We can try to implement our own data storage solution, which is why I'm asking you guys.

    One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and the graduate students, who stay the longest, are on a 5-6 year time scale. So nothing can require expert oversight. Our data transfer rates are low — most of the data will just sit on the server waiting for someone to look at it once or twice — so we don't need a really high-speed system.

    Given these constraints, can someone recommend a fairly low-cost, scalable, more or less turnkey shared data storage system with backup in a separate physical location? Does such a thing exist, or should we just pay the university to take care of it for us?

    As a second question: our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So, the same question, minus the low-cost constraint: without budget limits, can you recommend a scalable, turnkey, backed-up storage system?

    Thanks

  • How to get physical partition name from iSCSI details on Windows?

    - by Barry Kelly
    I've got a piece of software that needs the name of a partition in \Device\Harddisk2\Partition1 style, as shown e.g. in WinObj. I want to derive this partition name from the details of the iSCSI connection that underlies the partition. The trouble is that disk order is not fixed: depending on what devices are connected and initialized in what order, it can move around. So, given the portal name (DNS name of the iSCSI target), the target IQN, etc., I'd like to discover in an automated fashion which volumes in the system relate to it.

    I can write some PowerShell WMI queries that get somewhat close to the desired info:

        PS> Get-WmiObject -Class Win32_DiskPartition

        NumberOfBlocks   : 204800
        BootPartition    : True
        Name             : Disk #0, Partition #0
        PrimaryPartition : True
        Size             : 104857600
        Index            : 0
        ...

    From the Name here, I think I can fabricate the corresponding device name by adding 1 to the partition number — \Device\Harddisk0\Partition1 — since Partition0 appears to be a pseudo-partition mapping to the whole disk. But the above doesn't carry enough information to map to the underlying physical device, unless I guess based on exact size matching. I can get some info on SCSI devices, but it's not helpful in joining things up (the iSCSI target is Nexenta/Solaris COMSTAR):

        PS> Get-WmiObject -Class Win32_SCSIControllerDevice

        __GENUS    : 2
        __CLASS    : Win32_SCSIControllerDevice
        ...
        Antecedent : \\COBRA\root\cimv2:Win32_SCSIController.DeviceID="ROOT\\ISCSIPRT\\0000"
        Dependent  : \\COBRA\root\cimv2:Win32_PnPEntity.DeviceID="SCSI\\DISK&VEN_NEXENTA&PROD_COMSTAR..."

    Similarly, I can run queries like these:

        PS> Get-WmiObject -Namespace ROOT\WMI -Class MSiSCSIInitiator_TargetClass
        PS> Get-WmiObject -Namespace ROOT\WMI -Class MSiSCSIInitiator_PersistentDevices

    These return information relating to my iSCSI target name and the GUID volume name respectively (a volume name like \\?\Volume{guid-goes-here}), but the GUID volume name is no good to me, and there doesn't appear to be a reliable correspondence between the target name and the volume that I can join on. I simply can't find an easy way of getting from an IQN (e.g. iqn.1992-01.com.example:storage:diskarrays-sn-a8675309) to the physical partitions mapped from that target.

    How do I do it by hand? I start Disk Management and look for a partition of the correct size, verify that its driver says NEXENTA COMSTAR, and note the disk number. But even this is unreliable if I have multiple iSCSI volumes of the exact same size. Any suggestions?
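
    One joining path that may help — a sketch using only standard cimv2 associations; the vendor-string filter is an assumption based on what Disk Management shows — is to start from Win32_DiskDrive (whose Model carries the NEXENTA COMSTAR string and whose Index gives the Harddisk number) and walk the associators down to its partitions:

        # Find iSCSI-backed disks by vendor string, then emit device-style partition names
        Get-WmiObject Win32_DiskDrive | Where-Object { $_.Model -match 'NEXENTA' } | ForEach-Object {
            $disk = $_
            Get-WmiObject -Query ("ASSOCIATORS OF {Win32_DiskDrive.DeviceID='" + $disk.DeviceID +
                "'} WHERE AssocClass = Win32_DiskDriveToDiskPartition") | ForEach-Object {
                # Win32_DiskPartition.Index is 0-based; the device path counts from 1
                "\Device\Harddisk$($disk.Index)\Partition$($_.Index + 1)"
            }
        }

    To tie a specific IQN to a disk number first, the Devices property of MSiSCSIInitiator_SessionClass is worth inspecting: on many systems each device entry carries a LegacyName like \\.\PhysicalDrive2, whose number matches Win32_DiskDrive.Index.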

  • Active Directory Corrupted In Windows Small Business Server 2011 - Server No Longer Domain Controller

    - by ThinkerIV
    I have a rather bad problem with my Windows SBS 2011. First, the background to what caused the problem. I was setting up a new small business server network and had the job about finished: the server was working great, all the workstations had joined the domain, and I had all my applications and data moved to the server. I thought I was done. But then it happened: I tried adding one more computer to the domain, and to my dismay its computer name was set to the same name as the server. Apparently, when a computer joins a domain with the same name as a machine that is already on the domain, it overrides the first one. For normal workstations this is not a big deal — you just delete the computer from AD and rejoin the original machine to the domain. For a server that is the domain controller, however, it is a whole different story. Since the server's account got overridden in AD, it is no longer the domain controller: the DNS service is not working, and all kinds of other services are failing too.

    So the question is: what are my options? I am embarrassed to admit it, but since this is a new server, one thing I did not have set up yet was backup, so I have no backups to work from. I am worried that things are broken enough that I might need to do a reinstall. However, I already have several days' worth of configuration in this server, so I would obviously prefer a fix that avoids a reinstall. All the server components are there and installed correctly, but they are misconfigured (I think it is basically just Active Directory), so I have the feeling that with the right steps I could solve the issue without a reinstall. Is there any way to rerun the component that performs the initial configuration, to "convert" the base Windows Server 2008 R2 install into an SBS? In other words, in the Program Files folder there is an application called SBSsetup.exe — is there any way to rerun this and have it reconfigure AD, etc., to work with SBS? Any insight will be greatly appreciated. Thanks.

  • Should I use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.)

    Here's the setup: a brand-new dedicated server (8 GB RAM, some 140+ GB of disk, RAID 1 via a hardware controller, 15000 RPM drives) running Ubuntu Server 64-bit 10.04 LTS. It's a production web server (with MySQL on it too, not just serving web requests), not a personal desktop computer or similar.

    We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via the AWS console. We are now migrating to the dedicated server, and I want to be able to back up our data to Amazon's S3. The main reason is the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server.

    There are two approaches I am thinking of:

    1. Do a "simple" file-based backup with rsync, dumping the database and other files, and uploading to Amazon via S3 API commands, or to an EC2 instance, or something.
    2. Do a filesystem "freeze" (using XFS) with the usual EBS/EC2 snapshot tooling to take part of the filesystem, take a snapshot, and upload it to Amazon.

    Here's my question (or series of questions):

    1. Can I safely use XFS for the whole system as the main and only filesystem on the dedicated server? If not, is it safe to use EXT4, or should I use something else?
    2. Would it then be possible to make snapshots of the system to upload to Amazon?
    3. Is it possible/feasible/practical to do what I want to do at all? Any recommendations?

    When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume. My intention is to do it on a "real"/bare-metal dedicated server.

    Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose two ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize the two options are really these:

    With XFS only: 1) xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze XFS.

    With LVM and XFS: 1) xfs_freeze; 2) make a binary copy of the frozen filesystem via lvcreate and related commands; 3) unfreeze XFS; 4) somehow back up the LVM snapshot. (See the sketch below.)

    Thanks a lot in advance. Let me know if I need to clarify something.
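
    For reference, the LVM variant of the sequence above looks roughly like this — a sketch assuming the data lives on a logical volume /dev/vg0/data mounted at /srv (all names are placeholders):

        # Quiesce the XFS filesystem (lvcreate on recent kernels freezes the fs
        # for you, but doing it explicitly matches the steps above)
        xfs_freeze -f /srv

        # Point-in-time, copy-on-write snapshot with 5 GB of change headroom
        lvcreate --snapshot --size 5G --name data-snap /dev/vg0/data

        xfs_freeze -u /srv

        # Back up from the snapshot at leisure, then drop it
        # (-o nouuid is needed because the snapshot shares the origin's UUID)
        mount -o ro,nouuid /dev/vg0/data-snap /mnt/snap
        # ... rsync/tar /mnt/snap up to S3 or an EC2 instance ...
        umount /mnt/snap
        lvremove -f /dev/vg0/data-snap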

  • Has anyone achieved true differential sync with rsync in ESXi?

    - by Julius
    Berate me later on the fact that I'm using the service console to do anything in ESXi...

    I've got a working rsync binary (v3.0.4) that I can use in ESXi 4.1U1. I tend to use rsync over cp when copying VMs or backups from one local datastore to another local datastore. I've used rsync to copy data from one ESXi box to another, but that was just for small files. I'm now trying to do true differential syncs of backups taken via ghettoVCB between my primary ESXi machine and a secondary one.

    But even when I do this locally (one datastore to another datastore on the same ESXi machine), rsync appears to copy the files in their entirety. I've got two VMDKs totalling 80 GB in size, and rsync still takes anywhere between 1 and 2 hours, but the VMDKs aren't growing that much daily.

    Below is the rsync command I'm executing. I am copying locally because ultimately these files will be copied onto a datastore created from a LUN on a remote system; it's not an rsync that will be serviced by an rsync daemon on a remote system.

        rsync -avPSI VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/ \
            --stats --itemize-changes --existing --modify-window=2 --no-whole-file

        sending incremental file list
        >f..t...... VM-flat.vmdk
            42949672960 100%   15.06MB/s    0:45:20 (xfer#1, to-check=5/6)
        >f..t...... VM.vmdk
                    556 100%    4.24kB/s    0:00:00 (xfer#2, to-check=4/6)
        >f..t...... VM.vmx
                   3327 100%   25.19kB/s    0:00:00 (xfer#3, to-check=3/6)
        >f..t...... VM_1-flat.vmdk
            42949672960 100%   12.19MB/s    0:56:01 (xfer#4, to-check=2/6)
        >f..t...... VM_1.vmdk
                    558 100%    2.51kB/s    0:00:00 (xfer#5, to-check=1/6)
        >f..t...... STATUS.ok
                     30 100%    0.02kB/s    0:00:01 (xfer#6, to-check=0/6)

        Number of files: 6
        Number of files transferred: 6
        Total file size: 85899350391 bytes
        Total transferred file size: 85899350391 bytes
        Literal data: 2429682778 bytes
        Matched data: 83469667613 bytes
        File list size: 129
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 2432530094
        Total bytes received: 5243054

        sent 2432530094 bytes  received 5243054 bytes  295648.92 bytes/sec
        total size is 85899350391  speedup is 35.24

    Is this because ESXi itself is making so many changes to the VMDKs that, as far as rsync is concerned, the entire file has to be retransmitted? Has anyone actually achieved true differential sync with ESXi?
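
    Reading your own --stats output, the delta algorithm did work: only ~2.4 GB of the 86 GB went as literal data. But by default rsync still rebuilds each destination file in full — matched blocks plus literals are written out to a temporary copy — which is where the hours go on 40 GB flat files. One thing to try, as a sketch (with the usual caveat that --inplace leaves the target inconsistent if the transfer is interrupted mid-file), is to update the destination blocks directly:

        rsync -avPSI --inplace --no-whole-file --existing --modify-window=2 --stats \
            VMBACKUP_2011-06-10_02-27-56/ VMBACKUP_2011-06-01_06-37-11/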

  • Getting a boot error when starting the computer

    - by Rob Avery IV
    I was in the middle of watching a movie on Netflix when suddenly everything started crashing. First explorer.exe closed down, then Google Chrome. I had multiple things running in the background (Steam, Raptr, etc.), and individually, each of those apps closed down too. As they did, a small dialog box popped up for each of them, one at a time, saying that it was missing a file, that it couldn't run anymore, or something similar, along with some jumbled-up "code" of numbers and letters that I couldn't read.

    Ever since then, every time I turn my computer on, it runs for a few seconds and then gives this error: "Reboot and select proper boot device or insert boot media in selected boot device and press a key_". No matter how many times I try to reboot, it always gives me the same error.

    A day after this happened, I was able to start the computer, but before it booted it told me that I hadn't shut down the computer properly and asked how I wanted to run the OS (Run Windows in Safe Mode, Run Windows Normally, etc.). Once I logged in, everything went SUPER slow and everything crashed almost instantly. The only thing I opened was Microsoft Security Essentials, and I only got in about two clicks before it was "Not Responding". Then the whole computer froze and I had to restart it. Now it's back to saying what it originally said: "Reboot and select proper boot device or insert boot media in selected boot device and press a key_".

    I built this PC back in February 2012. Here are the specs:

        OS: Windows 7 Ultimate
        CPU: AMD 8-core
        GPU: Nvidia GeForce GTX 560 Ti
        RAM: 16 GB
        Hard drive: Hitachi Deskstar 750 GB

    I'm usually very careful with my PC. I don't download anything that's not from a trusted site or source, I don't open spam email, and I don't go to harmful websites like porn or movie-streaming sites. I'm very clean in what I do with my PC and don't do many different things with it; I use it pretty often, especially for video games and doing homework in Eclipse. It's also worth noting that I don't have Norton or other anti-virus software installed — I have Microsoft Security Essentials installed but never ran a scan. Thanks!
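
    If the disk itself is still alive, the generic repair path for "Reboot and select proper boot device" on Windows 7 — a sketch of the standard recovery steps, not a diagnosis of whatever caused the original crashes — is to boot from the install DVD, choose Repair your computer, open a command prompt, and rebuild the boot records:

        bootrec /FixMbr
        bootrec /FixBoot
        bootrec /RebuildBcd

        rem Given the sudden crashes, a full surface check is also worth running
        chkdsk C: /r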

    Read the article

  • Graphic errors on the console

    - by Christian Elsner
    I have Linux on an embedded system. There is no graphical system, but I still get display errors on the console. For example, if I type:

        ifconfig eth2 hw ether 00:0e:8c:d0:59:d2

    I see:

        ifconfig eth hw ether 00:0e:8:2:2

    If I press Enter, the command I actually typed is accepted, so it's just a matter of display. Everything is fine when I log in via SSH. Does anyone have any ideas what the cause could be, or where to look? Output of lspci:

        00:00.0 Host bridge: Intel Corporation 3100 Chipset Memory I/O Controller Hub
        00:00.1 Unassigned class [ff00]: Intel Corporation 3100 DRAM Controller Error Reporting Registers
        00:01.0 System peripheral: Intel Corporation 3100 Chipset Enhanced DMA Controller
        00:02.0 PCI bridge: Intel Corporation 3100 Chipset PCI Express Port A
        00:03.0 PCI bridge: Intel Corporation 3100 Chipset PCI Express Port A1
        00:1c.0 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 1 (rev 01)
        00:1c.1 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 2 (rev 01)
        00:1d.0 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #2 (rev 01)
        00:1d.7 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset EHCI USB2 Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c9)
        00:1f.0 ISA bridge: Intel Corporation 631xESB/632xESB/3100 Chipset LPC Interface Controller (rev 01)
        00:1f.2 IDE interface: Intel Corporation 631xESB/632xESB/3100 Chipset SATA IDE Controller (rev 01)
        00:1f.3 SMBus: Intel Corporation 631xESB/632xESB/3100 Chipset SMBus Controller (rev 01)
        02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
        03:01.0 Network controller: Siemens Nixdorf AG Device 4003 (rev 02)
        03:01.1 Unassigned class [ff00]: Siemens Nixdorf AG Device 4003 (rev 02)
        03:02.0 Ethernet controller: Siemens Nixdorf AG Device 4047 (rev 01)
        03:03.0 Ethernet controller: National Semiconductor Corporation DP83815 (MacPhyter) Ethernet Controller
        03:04.0 Unassigned class [ff00]: Siemens Nixdorf AG Device 4057 (rev 01)
        04:00.0 PCI bridge: Texas Instruments XIO2000(A)/XIO2200(A) PCI Express-to-PCI Bridge (rev 03)
        05:00.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 44)
        06:00.0 PCI bridge: Texas Instruments XIO2000(A)/XIO2200(A) PCI Express-to-PCI Bridge (rev 03)
        07:00.0 VGA compatible controller: Silicon Motion, Inc. SM720 Lynx3DM (rev c1)
        07:01.0 USB Controller: NEC Corporation USB (rev 43)
        07:01.1 USB Controller: NEC Corporation USB (rev 43)
        07:01.2 USB Controller: NEC Corporation USB 2.0 (rev 04)

    The whole thing is running on an Intel Core 2 Duo U2500.

    Read the article

  • nginx timeout despite ridiculous configuration

    - by Joa Ebert
    The scenario is an API server that should handle uploads. Posting to my.host.com/api/upload should do something with the body the client sends. However, the API server has been designed to block the whole request until it has fully processed the file, including some analysis that can take up to approx. 5 minutes (...!). That has to change, of course. In the meantime I wanted to set up nginx as a load balancer in front of the API servers. I quickly ran into a timeout issue, consulted Google, and came up with this ridiculous test configuration:

        user www-data;
        worker_processes 4;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            access_log off;
            sendfile on;
            send_timeout 3600;
            keepalive_timeout 3600 120;
            tcp_nopush on;
            tcp_nodelay on;
            gzip off;
            client_header_timeout 3600;
            client_body_timeout 3600;
            proxy_send_timeout 3600;
            proxy_read_timeout 3600;
            proxy_connect_timeout 1800;
            proxy_next_upstream error;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    And:

        upstream test {
            server host1;
            server host2;
        }

        server {
            listen 80;
            server_name my.host.com;
            client_max_body_size 10m;

            location /api/ {
                proxy_pass http://test;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $host;
                proxy_redirect off;
            }
        }

    Still, when an upload happens, I get the following result in the error.log:

        2010/12/22 13:36:42 [error] 5256#0: *187359 upstream timed out (110: Connection timed out) while reading response header from upstream, client: xx.xx.xx.xx, server: my.host.com, request: "POST /api/upload HTTP/1.1", upstream: "http://apiserver:80/upload", host: "my.host.com"

    What else could I do? If I look at the log of the API server, I can still see it processing the request and analyzing the file, and I would think 3600 seconds of timeout should be more than enough. Yet the error shows up after only a couple of seconds. And yes, of course I did a reload and a force-reload of the configuration.
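
    One variant I still want to try (an untested sketch, and the reasoning is my assumption, not something I've confirmed on this box): declaring the proxy timeouts in the location block itself. In nginx, a proxy_read_timeout set at server or location level overrides the http-level value, so if anything pulled in via conf.d/ or sites-enabled/ redeclares these directives, my 3600s values would be silently shadowed:

        location /api/ {
            proxy_pass http://test;
            # Sketch: pin the timeouts at location level so no included
            # vhost or conf.d snippet can shadow the http-level values.
            proxy_connect_timeout 1800;
            proxy_send_timeout    3600;
            proxy_read_timeout    3600;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_redirect off;
        }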

    Read the article

  • Seeking past the end of a file causes Apache to hang, and it never restarts.

    - by talkingnews
    I've actually solved my problem with a better script, but I'm still left wondering why Apache2 hung completely. This is an out-of-the-box ISPConfig 3.03 install, everything bang up to date, running perfectly. Until... The troublesome but innocent-looking script:

        $fp = fopen("/var/log/ispconfig/cron.log", "r");
        fseek($fp, -5000, SEEK_END);
        $line_buffer = array();
        while (!feof($fp)) {
            $line = fgets($fp, 1024);
            $line_buffer[] = $line;
            $line_buffer = array_slice($line_buffer, -10, 10);
        }
        foreach ($line_buffer as $line) {
            echo $line;
        }

    You get the idea: just a script I found on a forum somewhere. I did this for various logs, since it's a nice easy window on what's occurring (in a protected dir, of course!). One day, the logs having grown large and me having sorted all my cron, scripting, and mail queue errors, I thought it was time to start afresh: I updated, rebooted, archived, and deleted the logs. When I ran my script a couple of hours later, it hung. And hung. I waited 8 minutes. Chrome timed the page out, of course, but the server never came back to life; htop showed /usr/sbin/apache2 -k restart using 100% CPU. It never came back until I did a service apache2 restart. Then it ran fine, but as soon as I hit that logfile script again... dead. So I worked out it was the logfile script, worked out that seeking beyond the end of the file wasn't good, and found a better script (http://www.php.net/manual/en/function.fseek.php#90450). But what I'm left wondering is... why didn't something restart or kill the process? How was one hanging page able to bring down the whole server? It's running suPHP. I say "out of the box", but I've tweaked MySQL and Apache to fork and reserve sensible numbers of processes for the 512MB of RAM the VPS has, and it handles multiple refreshes of large pages and hadn't hung before. Any ideas how I'd avoid this? Google isn't my friend in this instance beyond the recommendations above about number of processes vs. RAM available.
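
    For completeness, here is a sketch of the kind of guarded tail-reader I ended up with (my own reconstruction, not the exact php.net snippet, and the trigger being a missing or truncated log is my reading of it, not something I've proven). My best guess at the mechanism: after the logs were deleted, fopen() returned false, and feof(false) never returns true, so while (!feof($fp)) spun forever at 100% CPU.

        <?php
        // Sketch: tail the last ~5000 bytes of a log without the failure
        // modes of the naive version. Two guards: fopen() can return false
        // (feof(false) is never true, so the original loop never exits),
        // and a file shorter than the window makes the negative seek from
        // SEEK_END fail.
        $path   = "/var/log/ispconfig/cron.log";
        $window = 5000; // bytes to read back from the end of the file

        $fp = fopen($path, "r");
        if ($fp === false) {
            die("Cannot open $path\n");
        }

        // Clamp the offset so we never seek before the start of the file.
        $size = filesize($path);
        fseek($fp, -min($window, $size), SEEK_END);

        $line_buffer = array();
        // fgets() returns false at EOF or on error, so this loop always ends.
        while (($line = fgets($fp, 1024)) !== false) {
            $line_buffer[] = $line;
            $line_buffer = array_slice($line_buffer, -10); // keep last 10 lines
        }
        fclose($fp);

        foreach ($line_buffer as $line) {
            echo $line;
        }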

    Read the article

< Previous Page | 317 318 319 320 321 322 323 324 325 326 327 328  | Next Page >