Search Results

Search found 17727 results on 710 pages for 'large apps'.

  • Slow File Copy observed copying 40GB files across network to iSCSI device

    - by Rick
    Here's a curious one for the gurus.

    Setup:
    Source machine: Windows Server 2003 R2 with a local hard drive holding a 40GB VHD file. 1 x 1Gbps network card, Cat6 cable, into the switch.
    Target machine: Windows Server 2008 R2 with an iSCSI connection to an iSCSI target on a separate machine (1TB, RAID5). 1 x 1Gbps network card, Cat6 cable, connected to the same switch as the source machine. A second 1Gbps network card, Cat6 cable, connected via an isolated switch to the iSCSI target.
    Switches: Netgear JGS524 model (web managed).

    Timings:
    Win2003R2 machine to the Win2008R2 machine's local drive: 40GB in 45 minutes, 36 seconds.
    Win2008R2 local drive to the iSCSI target: 40GB in 37 minutes, 56 seconds.
    Win2003R2 machine to the iSCSI target via the Win2008R2 machine: 40GB in 3 hours, 50 minutes, 24 seconds.

    All copies were done via the following command, issued on the Win2008R2 box: XCOPY <source> <target> /J (XCOPY /J copies using unbuffered I/O; recommended for very large files).

    So, what's the bit I'm missing here? Why does a back-to-back copy take 1 hour, 23 minutes, 32 seconds in total, when a "straight through" copy takes almost 3 times as long? The switches show no errors, and the network hovers around the 3% utilisation mark for the duration of the copy (whereas the "back-to-back" copies are around the 25% utilisation mark). What have I missed?
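
    Not something from the post, but a quick way to narrow this down would be to rerun the straight-through copy with buffered I/O, and with robocopy, to see whether unbuffered reads over SMB from the 2003 box are the slow path. Paths below are placeholders:

        rem Straight-through copy again, but buffered (no /J), run on the Win2008R2 box
        xcopy \\win2003r2\share\image.vhd D:\iscsi-volume\ /Y

        rem Same copy with robocopy; /J (unbuffered I/O) exists in the 2008 R2 build
        robocopy \\win2003r2\share D:\iscsi-volume image.vhd /J

    If the buffered run jumps back up toward the ~25% link utilisation seen in the back-to-back copies, that would point at the interaction between unbuffered I/O and the older SMB stack on the 2003 source.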

  • How to backup iTunes on Windows to folder/share, i.e. without "Back Up to Disc"? No DVD writer avail

    - by Chris W. Rea
    (Surprised I didn't already find an answer here to this!) I've got a computer on which I'd like to back up the iTunes library – music, movies, apps, everything. We're talking multiple gigabytes. Unfortunately, it seems that iTunes' own built-in "Back Up to Disc" feature (the only backup feature I can find in iTunes) only functions with a CD or DVD writer/burner. The computer in question does not have a DVD burner. While it has a CD burner, attempting to back up to CDs would require dozens of discs, plus more time than I'm willing to spend swapping them.

    So: What is the recommended way to back up an entire iTunes library on a Windows computer to a non-CD/DVD location such as an external hard drive or a network shared folder?

    Then, once such a backup has been performed, what is the process for restoring the library – e.g. after the computer has been repaved with a new version of Windows – so that iTunes is resurrected whole and recognizes devices it syncs with? Thank you!
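
    One approach worth noting (this assumes the default library location, and is not taken from Apple's documentation) is simply to mirror the whole iTunes folder to the external drive or share with robocopy, and copy it back to the same path after the rebuild:

        rem Back up the whole library folder (default location assumed; adjust as needed)
        robocopy "C:\Users\%USERNAME%\Music\iTunes" "E:\Backup\iTunes" /E /COPY:DAT /R:1 /W:1

        rem Restore after reinstalling Windows and iTunes: copy back to the same path
        robocopy "E:\Backup\iTunes" "C:\Users\%USERNAME%\Music\iTunes" /E /COPY:DAT

    The folder contains the media plus the "iTunes Library.itl" database, so restoring it to the same path before first launch should let iTunes pick up where it left off; device pairing behaviour would still need to be verified.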

  • FreeBSD slow transfers - RFC 1323 scaling issue?

    - by Trey
    I think I may be having an issue with window scaling (RFC 1323) and am hoping that someone can enlighten me on what's going on.

    Server: FreeBSD 9, apache22, serving a static 100MB zip file. 192.168.18.30
    Client: Mac OS X 10.6, Firefox 192.168.17.47
    Network: Only a switch between them - the subnet is 192.168.16/22
    (In this test, I also have dummynet filtering simulating an 80ms ping time on all IP traffic. I've seen nearly identical traces with a "real" setup, with real internet traffic/latency also)

    Questions: Does this look normal? Is packet #2 specifying a window size of 65535 and a scale of 512? Is packet #5 then shrinking the window size so it can use the 512 scale and still keep the overall calculated window size near 64K? Why is the window scale so high?

    Here are the first 6 packets from wireshark. For packets 5 and 6 I've included the details showing the window size and scaling factor being used for the data transfer.

    Code:
    No.  Time      Source         Destination    Protocol  Length  Info
    108  6.699922  192.168.17.47  192.168.18.30  TCP       78      49190 http [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=8 TSval=945617489 TSecr=0 SACK_PERM=1
    115  6.781971  192.168.18.30  192.168.17.47  TCP       74      http 49190 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 WS=512 SACK_PERM=1 TSval=2617517338 TSecr=945617489
    116  6.782218  192.168.17.47  192.168.18.30  TCP       66      49190 http [ACK] Seq=1 Ack=1 Win=524280 Len=0 TSval=945617490 TSecr=2617517338
    117  6.782220  192.168.17.47  192.168.18.30  HTTP      490     GET /utils/speedtest/large.file.zip HTTP/1.1
    118  6.867070  192.168.18.30  192.168.17.47  TCP       375     [TCP segment of a reassembled PDU]

    Details:
    Transmission Control Protocol, Src Port: http (80), Dst Port: 49190 (49190), Seq: 1, Ack: 425, Len: 309
        Source port: http (80)
        Destination port: 49190 (49190)
        [Stream index: 4]
        Sequence number: 1 (relative sequence number)
        [Next sequence number: 310 (relative sequence number)]
        Acknowledgement number: 425 (relative ack number)
        Header length: 32 bytes
        Flags: 0x018 (PSH, ACK)
        Window size value: 130
        [Calculated window size: 66560]
        [Window size scaling factor: 512]
        Checksum: 0xd182 [validation disabled]
        Options: (12 bytes)
            No-Operation (NOP)
            No-Operation (NOP)
            Timestamps: TSval 2617517423, TSecr 945617490
        [SEQ/ACK analysis]
        TCP segment data (309 bytes)

    Note: originally posted http://forums.freebsd.org/showthread.php?t=32552
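
    For reference, the knobs on the FreeBSD side that drive the advertised window and the scale factor offered in the SYN-ACK can be inspected with sysctl; these are the stock sysctl names, not values taken from this particular server:

        # RFC 1323 window scaling / timestamps (1 = enabled)
        sysctl net.inet.tcp.rfc1323

        # Default and maximum socket buffer sizes; the maximum receive buffer is
        # roughly what the offered scale factor has to be able to cover
        sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace
        sysctl net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max kern.ipc.maxsockbuf

    A scale of 512 in the SYN-ACK just means the server left room to grow the window well past 64K; by itself that shouldn't make things slow, which is why comparing these values against the throughput seen in the trace seems worthwhile.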

  • How harmful is a hard disk spin cycle?

    - by Gilles
    It is conventional wisdom¹ that each time you spin a hard disk down and back up, you shave some time off its life expectancy. The topic has been discussed before: Is turning off hard disks harmful? What's the effect of standby (spindown) mode on modern hard drives? Common explanations for why spindowns and spinups are harmful are that they induce more stress on the mechanical parts than ordinary running, and that they cause heat variations that are harmful to the device mechanics.

    Is there any data showing quantitatively how bad a spin cycle is? That is, how much life expectancy does a spin cycle cost? Or, more practically, if I know that I'm not going to need a disk for X seconds, how large should X be to warrant spinning down?

    ¹ But conventional wisdom has been wrong before; for example, it is commonly held that hard disks should be kept as cool as possible, but the one published study on the topic shows that cooler drives actually fail more. This study is no help here since all the disks surveyed were powered on 24/7.
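
    Not an answer to the quantitative question, but for anyone experimenting: the relevant counters and the spindown timer are easy to get at on Linux, so a drive's actual cycle count can at least be tracked against its datasheet rating (device name below is a placeholder):

        # SMART counters relevant to spin cycles
        smartctl -A /dev/sda | egrep -i 'start_stop_count|load_cycle_count|power_on_hours'

        # Spin the disk down after 10 minutes of idle (-S takes multiples of 5 seconds)
        hdparm -S 120 /dev/sda

    Watching how quickly Start_Stop_Count grows against the manufacturer's rated cycle count gives a rough, per-drive feel for the cost, pending better published data.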

  • Regular issue with keys on temp tables

    - by Christian
    We run a large forum with lots of reads and writes, particularly to the posts and topics tables, which are both InnoDB. Last week I started doing 12 hourly backups with innobackupex because mysqldump just takes forever (7+ million rows in the posts table). It seems that something doesn't like these backups, because I have a recurring problem every other day.

    The symptoms:
    The front page of the site starts throwing errors.
    The logs start showing errors like: Error: 126 - Incorrect key file for table '/tmp/mysql/#sql_4e87_14.MYI'; try to repair it
    The /tmp/ dir fills up and we start getting Error: 1030 - Got error 28 from storage engine in the logs.

    The only way to fix it is to run OPTIMIZE TABLE on each of the posts and topics tables. I'm trying all I can to stop MySQL using disk for temp tables, but I'd have more problems than this if it used all my memory also. My my.cnf is here: https://gist.github.com/cbiggins/0aa26f6defb7a14541d7. The box has 32GB memory and I don't come near that usually. Currently at 15GB use. Thanks in advance.

    Update 1: Despite the conf looking like there is replication, there isn't. This is a standalone instance.
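
    Not a confirmed fix, but since the errors point at implicit temporary tables spilling into a too-small /tmp, these are the usual my.cnf knobs for keeping them in memory or moving the overflow somewhere with room (values and path are illustrative only):

        # [mysqld] section - illustrative values, not a recommendation for this box
        tmp_table_size      = 256M
        max_heap_table_size = 256M            # the smaller of the two limits applies
        tmpdir              = /var/tmp/mysql  # a partition with more free space than /tmp

    That would sit alongside the OPTIMIZE TABLE cleanup on posts and topics the post already describes, rather than replace it.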

  • What could be causing SVCHost to leak handles?

    - by Goz
    I have a problem that has been causing me all sorts of grief recently. SVCHost appears to be leaking resources all over the shop. This is the SVCHost instance run with the arguments "-k netsvcs". At the moment it is sitting at around 5,700 handles in use. Before I rebooted the machine it was sitting at around 33,000 handles! This higher number has been causing me large problems, as my software then fails to obtain the handles it needs (the software tries to create around 2,000 handles). I'm totally at a loss as to what is going wrong. If anyone could help me stop this happening it would be much appreciated. I'm running on XP with SP3.

    Edit: I tracked this problem down to the WMI system. I'm not sure why or how the problem was occurring. Basically I used "sc change" to move it into its own process and suddenly everything seems to be fine. I'm not entirely sure what is going on ...
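
    For anyone hitting the same thing: the post says "sc change", but sc's verb for this is config, so I'm assuming that is what was meant. A minimal sketch of splitting WMI out of the shared netsvcs host and confirming which service is leaking:

        rem Run the WMI service in its own svchost process, then restart it
        sc config winmgmt type= own
        net stop winmgmt
        net start winmgmt

        rem Watch handle counts per svchost instance to confirm where the leak lives
        tasklist /svc /fi "imagename eq svchost.exe"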

  • Nginx + PHP5-FPM repeated cut outs 502

    - by James
    I've seen a number of questions here that highlight random 502 (Nginx + PHP-FPM = "Random" 502 Bad Gateway) and similar time outs when using Nginx + PHP-FPM. Even with all the questions, I'm still unable to find a solution. Using Ubuntu 10.10 + Nginx + PHP5-FPM + APC and every 1 out of 4 requests ends in a timeout and failure. This isn't a load issue or large traffic, it happens even in dev environment with one person. I am doing this across 3 1GB machines, each with the same configurations and same problems.

    fastcgi_params:
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
    fastcgi_param REQUEST_URI $request_uri;
    fastcgi_param DOCUMENT_URI $document_uri;
    fastcgi_param DOCUMENT_ROOT $document_root;
    fastcgi_param SERVER_PROTOCOL $server_protocol;
    fastcgi_param GATEWAY_INTERFACE CGI/1.1;
    fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
    fastcgi_param REMOTE_ADDR $remote_addr;
    fastcgi_param REMOTE_PORT $remote_port;
    fastcgi_param SERVER_ADDR $server_addr;
    fastcgi_param SERVER_PORT $server_port;
    fastcgi_param SERVER_NAME $server_name;
    fastcgi_param REDIRECT_STATUS 200;

    /etc/php5/fpm/main.conf:
    ; FPM Configuration ;
    ;include=/etc/php5/fpm/*.conf
    ; Global Options ;
    pid = /var/run/php5-fpm.pid
    error_log = /var/log/php5-fpm.log
    ;log_level = notice
    ;emergency_restart_threshold = 0
    ;emergency_restart_interval = 0
    ;process_control_timeout = 0
    ;daemonize = yes
    ; Pool Definitions ;
    include=/etc/php5/fpm/pool.d/*.conf

    /etc/php5/fpm/pool.d/www.conf:
    [www]
    listen = 127.0.0.1:9000
    ;listen.backlog = -1
    ;listen.allowed_clients = 127.0.0.1
    ;listen.owner = www-data
    ;listen.group = www-data
    ;listen.mode = 0666
    user = www-data
    group = www-data
    ;pm.max_children = 50
    pm.max_children = 15
    ;pm.start_servers = 20
    pm.min_spare_servers = 5
    ;pm.max_spare_servers = 35
    pm.max_spare_servers = 10
    ;pm.max_requests = 500
    ;pm.status_path = /status
    ;ping.path = /ping
    ;ping.response = pong
    request_terminate_timeout = 30
    ;request_slowlog_timeout = 0
    ;slowlog = /var/log/php-fpm.log.slow
    ;rlimit_files = 1024
    ;rlimit_core = 0
    ;chroot =
    chdir = /var/www
    ;catch_workers_output = yes
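
    Not a confirmed fix, but the knobs that usually matter for intermittent 502s with this stack are the nginx FastCGI timeouts/buffers and FPM's worker recycling; the values below are illustrative, not tuned for these machines:

        # nginx, inside the PHP location block (or http {})
        fastcgi_connect_timeout 60s;
        fastcgi_send_timeout    180s;
        fastcgi_read_timeout    180s;
        fastcgi_buffers         16 16k;
        fastcgi_buffer_size     32k;

        ; /etc/php5/fpm/pool.d/www.conf - recycle workers so a leak in one worker
        ; (APC or another extension) cannot slowly wedge the whole pool
        pm.max_requests = 500

    Comparing the nginx error log ("upstream timed out" vs "connect() failed") against the php5-fpm log at the moment of a 502 would show which side is dropping the request.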

  • Networking - intermittent, slow speeds

    - by jack
    Hi all. I'm a novice when it comes to networking. I live in a large two-storey building that used to be a school, and we have an internet connection with BT (British telecoms provider); the connection speed is 12Mb. Basically our connection is slow and very intermittent, and I was wondering if anybody here could provide some help or ideas.

    There are about 11 people in the building who could be online at any time. We have a router on the ground floor which is bog standard, supplied by BT. To provide broadband access to the 1st and 2nd floors, we used an old switch that the school left: we have a cable running from the router on the first floor to the switch, which connects to a wireless router configured as a bridge on the 2nd floor, supplying broadband access to the 1st and 2nd floors. Additionally we have 3 computers that are connected via the switch through the ethernet sockets left by the school on the ground floor. The router we use on the 2nd floor came in a pack of 2 and cost about £15 (bought by another person).

    Sometimes the connection is perfectly fine, i.e. early hours of the morning or when everybody is out. We have rung BT, who say that the connection cannot cope with the number of people online, plus I'm not sure whether each person is streaming etc. Can anybody offer any advice?

  • Some URLs fail to load on Windows web portal

    - by jpolache
    I'm working in a large data center and have been assigned to troubleshoot an issue with a Windows (IIS) web server that acts as a portal for a customer of the data center. This portal server is on a DMZ at the local data center. I don't have access to the portal desktop and am relying on an off-site administrator to work with me to do testing and report the condition of the portal. He tells me there are no software firewalls or other filtering configured.

    While most of the remote web pages work fine, several of the URLs the portal is supposed to serve up fail to load. I had Wireshark installed on the portal system and had a capture taken of one of the failures. I used IE to access one of the remote web servers at issue. I could see the TCP SYN-ACK coming back from the remote server, but after several HTTP GETs fail to get a response, the portal server sends a reset. The webmaster of the remote web server assures me that no sites are being blocked. I had a capture taken outside the local firewall, so there should be no issue there.

    Another tech set up a laptop and used the IP address of the portal (we took the portal off-line for the test). The laptop loads the URL as expected. I tried having Firefox loaded to make sure that the HTTP GET was not malformed. Same failure as with IE. So, it seems it is not the remote web server or the network, because there was no problem with the laptop. At this point, I'm not sure what other questions to ask or tests to do.

  • What's the state of the art in image upscaling?

    - by monov
    I like to collect cool pics and use them as wallpapers or for other things. Often, artists publish only low-res versions, probably for fear of theft. Example: Gabriel Pulecio's BIRDS. Now, if I want to use that as a wallpaper, I'd have to upscale it, and obviously that'd make it look blurry because of the bicubic interpolation. I realize there's no real way to get a high-res version from a low-res pic, because the information is not simply there. That said, I'm wondering if heuristics have been developed for upscaling with less apparent loss of quality. Those would probably be optimized for specific image types: for photorealistic pictures, for cartoons with large flat areas, for pixel art...

    One algorithm I'm aware of is Seam Carving. It works for some kinds of pics, especially ones with a plain, undetailed or uninteresting background, and a subject that strongly stands out. But it's far from being general-purpose. Applying it to the above pic produces this. It looks quite sharp, but the proportions are horribly distorted because the algorithm is not designed for this kind of pic.

    Another is Pixel art scaling algorithms. Those are completely unfit for anything other than actual pixel art that's pixelized to begin with. For example, I tried the scale2x windows binary on my pic, but its output was nearly indistinguishable from nearest-neighbour scaling because the algorithm didn't detect any isolated pixely fragments to work from.

    Something else I tried was: I enlarged the image in Photoshop with bicubic interpolation, then I applied unsharp mask. The result looks pretty bad. The red blotch is actually resized reasonably well, but the dove is far from it.

    What I'm looking for is some app that makes a best-effort attempt at upscaling any input image while minimizing blurriness. If you know of any, I'll be thankful. Note that the subjective prettiness and sharpness of the result is what matters... the result doesn't need to be completely faithful to the original small image.
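
    For what it's worth, the Photoshop experiment (bicubic enlarge plus unsharp mask) can be reproduced and varied from the command line with ImageMagick, which makes it easy to compare resize filters; filenames are placeholders:

        # Cubic enlarge plus unsharp mask, roughly the Photoshop recipe described above
        convert birds_small.jpg -filter Catrom -resize 400% -unsharp 0x1 birds_catrom.jpg

        # Lanczos tends to keep edges a little crisper on photographic sources
        convert birds_small.jpg -filter Lanczos -resize 400% birds_lanczos.jpg

    These are still general-purpose interpolators rather than the content-aware upscalers the question is really after, so they mainly serve as a baseline for comparison.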

  • ntpd on Fedora Core 6 with high negative time reset values

    - by Mark White
    The basic problem is that we have a FC6 server instance running on a virtual machine, and the system time seems to have been slowly drifting until it is now causing a problem. The server runs 24/7 and has been up for 155 days. It has been changed to show GMT, and reports the time as (for example) 00:15:15 GMT whereas the actual time is 00:00:00 GMT. This is an offset of 915 seconds. SELinux has been changed to 'setenforce 0' for testing and I am running as root.

    What I've tried:
    I stop the ntpd service and change the time in System|Administration|Date & Time. The time still shows the same with 'date' in bash. There are no error logs.
    I change the date with 'date --set' in bash. The response confirms the changed date. I run 'date' and the incorrect date is shown. There are no error logs.
    I start the ntpd service and /var/log/messages shows success with 'time reset -915.720139s'. The date remains unchanged. ntpq -p shows the three time servers all have offsets of around -915 seconds.
    I stop the ntpd service and try 'ntpd -gqx' and get the same result as above - success, but a large negative time reset.

    I've tried varying combinations of the above, and a few more settings in System|Administration|Date & Time - no change. I just need to reset the system time to GMT. No offset. But I can't wait for ntpd to slew the time over the next few weeks. Any advice is welcome, cheers! Surely this shouldn't be this difficult... Mark...
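
    One avenue worth ruling out (an assumption on my part, since the post doesn't say which hypervisor this is) is the host resetting the guest clock, e.g. VMware Tools time sync fighting ntpd. A minimal sequence to force a one-off step and pin the hardware clock before letting ntpd take over again, with the pool server name as a placeholder:

        service ntpd stop
        ntpdate -u pool.ntp.org       # step the clock once
        hwclock --systohc --utc       # push the corrected time into the RTC
        service ntpd start
        ntpq -p                       # offsets should now be close to zero

    If the offset creeps back after this, something outside ntpd (the hypervisor, or another sync agent) is rewriting the clock.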

  • ESX Scheduler and NUMA issue

    - by babyg_wc
    On our 24-core BL685 (4 sockets x 6 cores), we find that NUMA nodes 0 and 1 are pretty busy (unfortunately resulting in elevated CPU ready times on the VMs), whilst NUMA nodes 2 and 3 are almost unused. I thought this might just be an ESX4 U1 issue, so I had a colleague with a 32-core (DL785) farm investigate, and it seems that his last 3 or 4 NUMA nodes are also not really being utilised.

    ESX seems to have a weakness when it comes to balancing lightly loaded NUMA boxes. I'm going to enable node interleaving in the BIOS and see if the scheduler balances across all 24 cores, instead of just 12!

    For those of you with large core counts, I would suggest you fire up your VI client and check physical CPU usage (or esxtop); I would be interested to hear what your results are. Please note that it's only the lightly loaded hosts (e.g. less than 30% CPU load on the ESX host) that seem to have the biggest issue with load imbalance. Thoughts/comments?

    PS: I've logged an SR with VMware to assist. Also, the other "problem" could be that we have 128GB of RAM in each host, and therefore the scheduler sees no good reason why it shouldn't try and cram all VMs into the first two NUMA nodes, as we only have around 50GB of RAM worth of VMs on each host...

  • Slow network file transfer (under 20KB/s) on newly built x64 Win7

    - by Mangoshake
    I am getting <20KB/s for local network file transfers. If I transfer a very small file (less than 100KB) it starts quickly and then slows down to <20KB/s; all subsequent network file transfers are slow, and a reboot is needed to reset this. If I transfer a large file, it is stuck on "calculating" for a long time and then begins at <20KB/s immediately.

    This is a newly built desktop running Windows 7 x64 SP1, with Realtek gigabit LAN from the motherboard (ASRock Extreme3 Gen3). The problematic speed is observed on the private LAN, both through ethernet and WiFi. The router is a D-Link DIR-655. Remote Differential Compression is off. Drivers are up to date from ASRock's website.

    I have tested network file transfer to and from another Windows 7 laptop and a MacBook Pro, so I am fairly certain it is the desktop's problem. The slow speed only happens in one direction too: outbound from the desktop, regardless of whether I initiate the file transfer from the origin or the destination. Inbound network file transfer and internet speeds are fine, so I don't think this is a hardware issue. I am getting 74.8MB/s internet upload speed from speedtest.net (http://www.speedtest.net/result/1852752479.png). Inbound network file transfer I can get around 10-15MB/s.

    I am hoping this community has some insight for me to troubleshoot this. I don't see anything obviously related in the Event Viewer, and beyond that I just don't know where else to look. Any suggestions are greatly appreciated, thank you in advance.
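
    Not taken from the post, but the usual suspects for "outbound only, fixed by a reboot" behaviour on Windows 7 with an onboard Realtek NIC are the TCP autotuning/offload features, which can be toggled for testing from an elevated prompt:

        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global chimney=disabled
        netsh interface tcp set global rss=disabled

        rem revert later with autotuninglevel=normal, chimney=enabled, rss=enabled

    Disabling "Large Send Offload" in the Realtek adapter's driver properties is the other commonly suggested test; neither is guaranteed to be the cause here.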

  • Asterisk, IAXModem & Hylafax how-tos?

    - by Brian Postow
    I'm trying to set up Asterisk and IAXModem to send faxes via T38 (yes, I know I'm swatting a fly with a Buick...). However, since I'm trying to do something so small with a product so large, I'm having trouble finding samples or how-tos that show me how to set this up. I've got all three installed, and I THINK I have my IAXModem config correct. I'm pretty sure that I have Hylafax correct (I've used it with T38Modem), so I need to know which of the Asterisk samples I need to use, and how to use them. I think I want to use some combination of iax.conf, iaxprov.conf, sip.conf and sip_notify.conf. But I'm not sure where to put them, or what to change... I'm sure that the answer is RTFM, but I'm not sure WHICH M, or where in it to R... thanks.

    EDIT: On a mailing list, someone told me that this actually WON'T WORK because IAX doesn't do T38. So, is there some other way to get Asterisk to work with Hylafax and send T38? I know that Asterisk does T38; the question is how to get the data from Hylafax and back...

  • Thin client - cloud machine - to run via iPad, iPhone, most Androids etc

    - by Carl Lindberg
    I'm tired of having a MacBook laptop that breaks down, and of files that I need to sync via Dropbox etc. all the time across machines with different OS installations. It sucks. I want a thin client where I can log in from any machine - my iPhone, PC desktop, iPad etc. - to one running machine. I would like to replace a modernly powerful desktop iMac with a thin client running via my iPad. I will connect the iPad to a keyboard/mouse too, so you get the idea. But I want to be able to use some of the Android phones as well (I guess most Android phones today have good enough performance/resolution etc. to run a thin client). Of course it has to handle sound input/output. Printing can be solved by PDF/emailing etc., so no direct communication to printer ports or USB is necessary.

    Is there such a service today? It should cost somewhere under something like $40/month. I will run stuff like the CPU-heavy Ableton for music production, Xcode for making iOS apps, some games etc., and on the thin client also run virtual machines: VMs of Ubuntu and Windows.

  • Tell postfix to merge three Authentication-Results:-Lines into one?

    - by Peter
    I am running a Postfix MTA on Debian Wheezy, using postfix-policyd-spf-python, opendkim and opendmarc. When receiving e-mails from Google (Google Apps with my own domain), for example, the header looks like this:

    [...]
    Authentication-Results: mail.xx.de; dkim=pass reason="1024-bit key; insecure key" header.d=yyy.com [email protected] header.b=OswLe0N+; dkim-adsp=pass; dkim-atps=neutral
    [...]
    Authentication-Results: mail.xx.de; spf=pass (sender SPF authorized) smtp.mailfrom=yyy.com (client-ip=2a00:1450:400c:c00::242; helo=mail-wg0-x242.google.com; [email protected]; [email protected])
    [...]
    Authentication-Results: mail.xx.de; dmarc=pass header.from=yyy.com
    [...]

    This means each of these programs creates its own Authentication-Results: line. Is it possible to tell Postfix to merge these into one single Authentication-Results: line?

    When I send an e-mail to Google, it says:

    [...]
    Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates xxx.xxx.xxx.xxx as permitted sender) [email protected]; dkim=pass [email protected]; dmarc=pass (p=NONE dis=NONE) header.from=xxx.com
    [...]

    And this is exactly what I want: just one Authentication-Results header. How can I do this? Thanks. Regards, Peter

  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.). I want to have them set up in mirrored (RAID-1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disk to be split into three partitions (OS/Apps&Data/Bulk data), all of which should be mirrored.

    Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID1 configuration might be (or if Win7 could even detect and install onto a hardware-mirrored volume).

    Also, the performance characteristics of RAID1 are rather different than RAID0 or RAID5, and I'm wondering if Win7's software mirroring might actually be faster than hardware RAID1 (for example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS; most hardware RAID1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and just use the built-in checksum, meaning it can have up to 2x the number of reads in-flight, as long as there's no data corruption).

    Edit: Okay, my question about whether Windows 7 can do software mirroring has been answered, and it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" function is a better choice, though. Remember, I'm only interested in mirroring -- not the more complicated striping or parity operations that generally show the poor performance of crappy motherboard RAID solutions.
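
    On the first question: as far as I know, Windows setup itself won't create the mirror during installation, so the usual pattern is to install to one disk and add the mirror afterwards, either in Disk Management ("Add Mirror...") or with diskpart. A sketch, with placeholder disk numbers (check them with "list disk" first):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 0
        DISKPART> convert dynamic
        DISKPART> select disk 1
        DISKPART> convert dynamic
        DISKPART> select volume C
        DISKPART> add disk=1

    Each of the three planned volumes (OS, Apps&Data, Bulk data) would get its own "select volume ... / add disk=1" pass once created, since mirroring is applied per volume on dynamic disks.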

  • Skip Corrupt Revisions During SvnAdmin Load

    - by cisellis
    I have a dump file that I am generating from VSS with the use of the VSS2SVN script. I've tested the generated dump file before and some of the revisions are corrupt for one reason or another (binary data or long path strings seem to be the main culprit). This is fine. In the past I have used svndumpfilter to split the dump file, remove the corrupt revisions and continue to load the repository. It worked but took a lot of manual effort to start the load, hit the bad revision, split the dump file, continue loading the repo, etc. This dump file is pretty large (~5GB) and takes several hours to load. I think I know the answer to this but is there any way to simply tell svnadmin load to keep going and skip corrupt revisions? I know how to verify, backup, etc. the dump file and don't need any of that. I don't care about recovering corrupt revisions. I just want to start the load, walk away, and not worry about checking it every few hours to manually remove the corrupt revisions. Is that possible? Thanks.

  • Unable to logon using terminal server connection

    - by satch
    I have several W2K3 SP2 servers with admin TS enabled. I discovered this morning that I was unable to log on to some of them. I have a couple of Citrix servers in different farms, a SAP (IA64) app server and a CVS server. All of them show the same symptoms: remote connections are refused. I've been able to log on locally, the Terminal Server service is up, and there are no users (so connections are not depleted). There are no errors in the logs on most servers.

    One of the Citrix ones reported the following errors:

    Event ID 50, Source TermDD, Type Error: The RDP protocol component X.224 detected an error in the protocol stream and has disconnected the client.

    Event ID 1006, Source TermService, Type Error: The terminal server received large number of incomplete connections. The system may be under attack.

    Anyway, I suppose these errors appear because the server isn't working and Citrix users try to log on massively. (I nmap'ed the server and the port seems up.) I've solved this problem by rebooting before, but with so many servers affected that seems like a crappy workaround. Any idea how to troubleshoot it properly? Thanks in advance

  • MediaWiki migrated from Tiger to Snow Leopard throwing an exceptions

    - by Matt S
    I had an old laptop running Mac OS X 10.4 with MacPorts for web development: Apache 2, PHP 5.3.2, MySQL 5, etc. I got a new laptop running Mac OS X 10.6, installed MacPorts, and installed the same web development apps: Apache 2, PHP 5.3.2, MySQL 5, etc., all the same versions as on my old laptop. A MediaWiki site (version 1.15) was copied over from my old system (via the Migration Assistant). Having a fresh MySQL setup, I dumped my old database and imported it on the new system.

    When I try to browse to MediaWiki's "Special" pages, I get the following exception thrown:

    Invalid language code requested
    Backtrace:
    #0 /languages/Language.php(2539): Language::loadLocalisation(NULL)
    #1 /includes/MessageCache.php(846): Language::getFallbackFor(NULL)
    #2 /includes/MessageCache.php(821): MessageCache->processMessagesArray(Array, NULL)
    #3 /includes/GlobalFunctions.php(2901): MessageCache->loadMessagesFile('/Users/matt/Sit...', false)
    #4 /extensions/OpenID/OpenID.setup.php(181): wfLoadExtensionMessages('OpenID')
    #5 [internal function]: OpenIDLocalizedPageName(Array, 'en')
    #6 /includes/Hooks.php(117): call_user_func_array('OpenIDLocalized...', Array)
    #7 /languages/Language.php(1851): wfRunHooks('LanguageGetSpec...', Array)
    #8 /includes/SpecialPage.php(240): Language->getSpecialPageAliases()
    #9 /includes/SpecialPage.php(262): SpecialPage::initAliasList()
    #10 /includes/SpecialPage.php(406): SpecialPage::resolveAlias('UserLogin')
    #11 /includes/SpecialPage.php(507): SpecialPage::getPageByAlias('UserLogin')
    #12 /includes/Wiki.php(229): SpecialPage::executePath(Object(Title))
    #13 /includes/Wiki.php(59): MediaWiki->initializeSpecialCases(Object(Title), Object(OutputPage), Object(WebRequest))
    #14 /index.php(116): MediaWiki->initialize(Object(Title), NULL, Object(OutputPage), Object(User), Object(WebRequest))
    #15 {main}

    I tried to step through MediaWiki's code, but it's a mess; there are global variables everywhere. If I change the code slightly to get around the exception, the page comes up blank and there are no errors (implying there are multiple problems). Anyone else get MediaWiki 1.15 working on OS X 10.6 with MacPorts? Anything in the migration from Tiger that could cause a problem? Any clues where to look for answers?
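
    The backtrace shows Language::loadLocalisation(NULL), i.e. the language code reaching the OpenID hook is empty, so a first thing worth checking (my reading of the trace, not a confirmed fix) is that LocalSettings.php survived the migration intact and still sets a valid language code before the OpenID extension is pulled in:

        // LocalSettings.php (excerpt) - value and ordering shown are illustrative
        $wgLanguageCode = "en";

        // ... core settings first, extensions afterwards
        require_once( "$IP/extensions/OpenID/OpenID.setup.php" );

    If that already looks right, temporarily commenting out the OpenID require_once would at least show whether the rest of the wiki works under the new PHP 5.3.2 / MacPorts stack.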

  • Had almost 300 GB worth of files with random names on my computer, and now they are gone. Any idea what they were and where they went?

    - by John
    A couple of days ago I noticed I had a folder on my computer with more than 15 files in it. All the files were the exact same size (215 MB). They all had different names - just a bunch of random characters like Abe327(/-38s etc. I wasn't sure what they were, so I decided to try to delete them. But then I noticed they had disappeared from the D drive. Then the next day I noticed a new folder, with similar names and file sizes, had shown up on my C drive.

    The timestamps on the first set of files were almost all from a few months ago - 3:52 AM, 4:03 AM, etc., all from the same date. The set of files on the C drive that just appeared had yesterday's date on them, but similarly, all the files had timestamps within a 24-hour period, like they had all just been created. Now this morning, all of those files are gone, and I didn't delete them. There are now no files like this in either drive.

    Any idea what these files were? Why were they so large, and why are they switching drives? Why did they disappear completely now, after the initial files were there for a few months?

  • SQL 2K5 - Multiple databases vs. Multiple files

    - by Bob Palmer
    Hey all, quick question. Our current legacy system was built using multiple distinct databases (about ten of them). These are all part of the same discrete system, and a large number of SPs and much functionality span multiple databases. There are also key relationships that span databases (for example, a header table may be in database A with history, etc. in database B). When deploying multiple copies of our app to the same server, therefore, we have to use multiple instances (because the database names are coded into so many sprocs).

    We're evaluating the idea of taking these ten databases (about 30GB total, with individual sizes ranging from 100MB to 10GB) and merging them into a single database. Currently, we have our databases spread across multiple spindles for better IO. The question I have is whether there is any performance loss or benefit in having 10 different databases vs. 10 different database files. i.e. rather than having three databases (A, B, and C):

    Disk D: A.mdf (1GB)
    Disk E: B.mdf (4GB)
    Disk F: C.mdf (10GB)
    Disk G: A_Log.ldf, B_Log.ldf, C_Log.ldf

    have one database (X):

    Disk D: X1.mdf (5GB)
    Disk E: X2.mdf (5GB)
    Disk F: X3.mdf (5GB)
    Disk G: X1_log.ldf, X2_log.ldf, X3_log.ldf

    Thanks! -Bob
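
    For reference, the "one database, many files" layout in the second example maps onto a CREATE DATABASE along these lines (names, paths and sizes are placeholders, not a recommendation):

        -- One database spread over three data files on separate spindles
        CREATE DATABASE X
        ON PRIMARY
            ( NAME = X1, FILENAME = 'D:\Data\X1.mdf', SIZE = 5GB ),
            ( NAME = X2, FILENAME = 'E:\Data\X2.ndf', SIZE = 5GB ),
            ( NAME = X3, FILENAME = 'F:\Data\X3.ndf', SIZE = 5GB )
        LOG ON
            ( NAME = X_log, FILENAME = 'G:\Logs\X_log.ldf', SIZE = 2GB );

    Files (or filegroups) keep the multi-spindle layout, while cross-database references would presumably become cross-schema references inside X, which is the part that removes the hard-coded database names from the sprocs.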

  • File corruption after copying files in Windows 7 64 bit using two methods

    - by DustByte
    I have 5000 pictures and other files in a directory taking up 35 GB. I want to duplicate this directory.

    Method 1: I do a simple copy and paste of the directory in Explorer. I have the habit of checking the checksums after copying important files. In this case I noticed that around 2000 files failed the MD5 test. On closer inspection of a randomly chosen JPEG with differing checksums, it turned out that some XMP metadata had changed. In particular, the tag <MicrosoftPhoto:DateAcquired> had changed the date from 2009 to today (possibly around the time I was copying the files). I have no idea what triggered this XMP data to be changed, exactly when it was changed, or why it happened for these particular files, but at least it seems to explain the checksum discrepancy.

    Method 2: As I want the exact files to be duplicated, I tried the program FreeFileSync to mirror the directory, hoping no XMP metadata would mysteriously change. A checksum test in addition to a thorough file comparison test in FreeFileSync led to two similar but yet different results: 31 files fail the checksum test, 23 files fail the file comparison test. The smaller set is not entirely contained in the bigger set, although many files occur in both. What is alarming here is that not only JPEGs are flagged as altered but also some AVIs, MPGs and a large 7-zip file. Closer inspection of a JPEG indicates that it is indeed corrupt: the bottom half of the picture is simply plain gray. Due to the size of the 7-zip file, I have not been able to pin down the discrepancy.

    Note: in both methods, every file has its correct file size after being copied.

    Question: Any thoughts on what is possibly going on here? I have never had this problem before, and I am now terrified that files get corrupted after simple actions like copy/paste and file sync. Even if I manage to successfully copy the files somehow, I would still like an explanation for this.
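
    For anyone reproducing this, the built-in Windows tools are enough to pin down which copy differs and where (file names below are placeholders):

        rem MD5 of the original and the copy (certutil ships with Windows 7)
        certutil -hashfile "D:\Photos\IMG_0001.jpg" MD5
        certutil -hashfile "E:\Photos-copy\IMG_0001.jpg" MD5

        rem Byte-for-byte comparison, reporting the offsets that differ
        fc /b "D:\Photos\IMG_0001.jpg" "E:\Photos-copy\IMG_0001.jpg"

    Whether the differing bytes sit in the XMP block near the start of the JPEG or in the image data itself would separate the "metadata was touched" case from genuine corruption.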

  • HTB.init / tc behind NAT

    - by Ben K.
    I have an Ubuntu 10 box that I'm trying to set up as a bandwidth-shaping router. The machine has one WAN interface, eth0, and two LAN interfaces, eth1 and eth2. NAT is configured using MASQUERADE as described at InternetConnectionSharing. I'm mostly concerned with shaping outbound traffic from the LAN interfaces -- in the end, I'd like to end up with a hard 768Kbps limit per-LAN-interface (rather than a limit on eth0 pooled across all interfaces).

    I installed HTB.init, and riffing on the examples, tried to set this up on eth1 by putting three files into /etc/sysconfig/htb:

    /etc/sysconfig/htb/eth1
    DEFAULT=30
    R2Q=100

    /etc/sysconfig/htb/eth1-2.root
    RATE=768Kbps
    BURST=15k

    /etc/sysconfig/htb/eth1-2:30.dfl
    RATE=768Kbps
    CEIL=788Kbps
    BURST=15k
    LEAF=sfq

    I can /etc/init.d/htb start and /etc/init.d/htb stats and see information that /seems/ to suggest it's working... but when I try pulling a large file via the WAN interface the shaping clearly isn't in effect. Any suggestions? My guess is it has something to do with where the shaping falls in the NAT chain, but I really have no idea where to begin troubleshooting this.

    ----
    Update: Here's my /etc/init.d/htb list output, it seems to make sense -- the default rate for eth1 is 768Kbps?

    ### eth0: queueing disciplines
    qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0
    qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec
    ### eth0: traffic classes
    class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b
    class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b
    ### eth0: filtering rules
    filter parent 1: protocol ip pref 100 u32
    filter parent 1: protocol ip pref 100 u32 fh 800: ht divisor 1
    filter parent 1: protocol ip pref 100 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:30 match 00000000/00000000 at 12 match 00000000/00000000 at 16
    ### eth1: queueing disciplines
    qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0
    qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec
    ### eth1: traffic classes
    class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b
    class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b
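
    A sketch of one possible culprit and workaround (my reading of the setup, not something confirmed in the post): an HTB root qdisc on eth1 only shapes traffic leaving via eth1, i.e. downloads heading toward the LAN, while LAN-to-WAN uploads leave via eth0 after MASQUERADE. One common pattern is to mark packets by the LAN port they arrived on, before NAT, and classify them on eth0 with fw filters; this bypasses HTB.init, and the class numbers and rates are illustrative:

        # Tag traffic by ingress LAN interface (mangle PREROUTING runs before NAT)
        iptables -t mangle -A PREROUTING -i eth1 -j MARK --set-mark 1
        iptables -t mangle -A PREROUTING -i eth2 -j MARK --set-mark 2

        # Per-LAN-interface upload classes on the WAN side
        tc qdisc add dev eth0 root handle 1: htb default 30
        tc class add dev eth0 parent 1: classid 1:1 htb rate 768kbit ceil 768kbit
        tc class add dev eth0 parent 1: classid 1:2 htb rate 768kbit ceil 768kbit
        tc class add dev eth0 parent 1: classid 1:30 htb rate 100mbit   # router's own traffic

        # fw filters match the firewall mark, so the NAT address rewrite doesn't matter
        tc filter add dev eth0 parent 1: protocol ip prio 1 handle 1 fw classid 1:1
        tc filter add dev eth0 parent 1: protocol ip prio 1 handle 2 fw classid 1:2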

  • 3 Servers, is this a cluster?

    - by Andy Barlow
    Hello. At the moment I have one Ubuntu server, 9.10, running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and email. I also have 2 other servers that are exactly the same hardware and spec as the first, which have an rsync set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. Also, I tend to find that if people are downloading a large amount from the file server, no one can access their email - especially in the morning when everyone is signing in at once.

    Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, in some sort of cluster with load balancing? I'm not really sure where to begin looking, or how to go about such a setup where 3 servers are all identical, but perhaps one acts as the main load balancer?? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy
