Search Results

Search found 10417 results on 417 pages for 'large'.


  • The effect of a large number of files on disk space in Unix filesystems

    - by user46976
    If I have a text file in Unix that contains N independent entries (e.g. records about employees, where each employee has a separate record), should this file take up less space than if I split it into N files, each containing one employee's record? In other words, can one save significant space on Unix file systems by concatenating many files together, or is the difference negligible? Thanks.
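
    Not from the original question, but a quick way to measure the difference it asks about is to compare logical size with allocated blocks for both layouts. A minimal sketch; the paths and the record count are made up:

        # Sketch: one concatenated file vs. the same data split into 10,000
        # small files. Paths and the record count are illustrative only.
        mkdir -p /tmp/split && cd /tmp/split
        for i in $(seq 1 10000); do
            printf 'employee record %d\n' "$i" > "rec_$i.txt"
        done
        cat rec_*.txt > /tmp/concatenated.txt

        du -sh --apparent-size /tmp/split /tmp/concatenated.txt   # logical bytes
        du -sh /tmp/split /tmp/concatenated.txt                   # blocks actually allocated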

    Read the article

  • DFS - Stop sync of large folder that has since been removed

    - by g18c
    We have a site-to-site DFSR setup on Windows Server 2008 R2 that had been running perfectly between site A and site B until someone dumped a 20GB folder into it. This has overwhelmed the upload link and made the internet connection almost unusable at site A (upload bandwidth is low at the branch office). We have removed the folder from the DFS share at site A; however, the internet is still really slow. Is there any way to cancel this sync, or any other way to get DFSR back into a happy state?

    Read the article

  • Cisco access switch is dropping a large number of endpoints

    - by user135458
    This afternoon, with no changes to the network, a switch suddenly started dropping lots of connections. These connections would come back up a few minutes later, then another area connected to the switch would drop off. This is an older 4006 chassis switch, which could in and of itself be a problem, but I'm looking to see what else you all would look for in trying to find a root cause. The switch is connected via ports 1/1 and 1/2 in an EtherChannel to a VSS core on 1/1/42 and 2/1/42. Both sides are up and working; however, the CPU on the switch spikes to 99%, and that's when CRC errors start to hit the VSS core on one of those interfaces and endpoints start dropping off. We tried new transceivers and SFPs on each side of the link, with the same result. When we swapped the fiber patch cables on the access switch, the CRC errors did not follow the cables; they stayed with port 1/2 on the access switch, so port 1/2 on the supervisor module looks like the culprit. We also tried to add a new member to the EtherChannel by using a fiber-to-Cat5 media converter and making that a member of the port-channel, but when we plugged it in you couldn't even reach the switch; I'm guessing that's unrelated and a problem with the media converter. Right now we have left it with only one fiber cable running to one side of the VSS core (1/1 Access Switch -- 2/1/42). I've sent some information to TAC and they are looking into it, but does anyone else have any commands I could run or troubleshooting I could do in the meantime?

    Read the article

  • Troubleshooting oddly large transaction log backups

    - by Tim
    I have a SQL Server 2005 SP2 system with a single database that is 42 GB in size. It is a modestly active database that sees on average 25 transactions per second. The database is configured in the Full recovery model and we perform transaction log backups every hour. However, at some seemingly random point during the day, the log backup will go from its average size of 15 MB all the way up to 40 GB. There are only 4 jobs scheduled to run on the SQL Server, and they are all typical backup jobs which run on a daily/weekly basis. I'm not entirely sure what client activity takes place, as the application servers are maintained by a different department. Is there any good way to track down the cause of these log growths and pinpoint them to a particular application or client? Thanks in advance.

    Read the article

  • "Maximum execution time of 300 seconds exceeded" error while importing a large MySQL database

    - by Spacedust
    I'm trying to import a 641 MB MySQL database with the command:

        mysql -u root -p ddamiane_fakty < domenyin_damian_fakty.sql

    but I get an error:

        ERROR 1064 (42000) at line 2351406: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '<br /> <b>Fatal error</b>: Maximum execution time of 300 seconds exceeded in <b' at line 253

    However, the limits are set much higher:

        mysql> show global variables like "interactive_timeout";
        +---------------------+-------+
        | Variable_name       | Value |
        +---------------------+-------+
        | interactive_timeout | 28800 |
        +---------------------+-------+
        1 row in set (0.00 sec)

    and:

        mysql> show global variables like "wait_timeout";
        +---------------+-------+
        | Variable_name | Value |
        +---------------+-------+
        | wait_timeout  | 28800 |
        +---------------+-------+
        1 row in set (0.00 sec)
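
    Not part of the original post: the quoted text near line 2351406 looks like an HTML-formatted PHP "Fatal error" that ended up inside the dump rather than a MySQL timeout, so inspecting that spot in the file may help. A minimal sketch; the line numbers come from the error message above, everything else is illustrative:

        # Sketch: look at the dump around the line MySQL complains about.
        # The line number comes from the error message; the file name is the
        # one used in the import command.
        sed -n '2351400,2351410p' domenyin_damian_fakty.sql

        # Same thing with line numbers printed for context:
        awk 'NR >= 2351400 && NR <= 2351410 { print NR ": " $0 }' domenyin_damian_fakty.sql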

    Read the article

  • Nvidia TwinView in Ubuntu with large resolutions not working

    - by knittl
    I bought an external monitor and can't get it to work properly alongside my laptop screen. The laptop panel has a resolution of 1920x1200 and the new monitor is 1920x1080. When I open nvidia-settings and select the maximum resolution for each of the screens, one screen always stays blank. If I select a smaller resolution for one of the two, it works:

        1920x1200 + 1440x900  = works
        1680x1050 + 1920x1080 = works
        1920x1200 + 1920x1080 = doesn't work (but that's what I want to have!)

    My graphics card is a Quadro FX 360M ("01:00.0 VGA compatible controller: nVidia Corporation Quadro FX 360M (rev a1)" in the lspci output), the driver is the proprietary NVIDIA driver, and the operating system is Ubuntu. Any help greatly appreciated.

    Read the article

  • Debian 6 losing a large number of packets

    - by Sc0rian
    I have a rather strange problem. We have covered all the obvious hardware-related issues (different NIC, Ethernet cable, and switch), but I cannot seem to stop eth from dropping packets. I have 4 servers, all exactly the same:

        driver: e1000e
        version: 1.2.20-k2
        firmware-version: 1.8-0
        bus-info: 0000:06:00.0

    They are all running the latest kernel (2.6.32-5-amd64). However, they do this:

        RX packets:17073870634 errors:0 dropped:14147208 overruns:0 frame:0

    Another server:

        eth0      Link encap:Ethernet  HWaddr e0:69:95:05:2f:cb
                  inet addr:10.10.10.86  Bcast:10.10.10.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5455209277 errors:0 dropped:375445 overruns:0 frame:0
                  TX packets:3666134366 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:6688414486673 (6.0 TiB)  TX bytes:1611812171539 (1.4 TiB)
                  Interrupt:20 Memory:d0600000-d0620000

        eth1      Link encap:Ethernet  HWaddr 00:1b:21:b7:7a:ce
                  inet addr:10.10.0.86  Bcast:10.10.0.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:15473695728 errors:0 dropped:5808325 overruns:0 frame:0
                  TX packets:20112364421 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:9192378766434 (8.3 TiB)  TX bytes:20216368266761 (18.3 TiB)
                  Interrupt:17 Memory:d0280000-d02a0000

    A massive number of dropped packets. I have tried loading the latest driver, 1.9.5, but that did nothing. I'm not sure what else to do.
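
    A few diagnostic commands commonly used for this kind of RX-drop pattern, added as a sketch rather than taken from the original post; the interface name comes from the output above and the ring size is illustrative:

        # Sketch: check where the drops are being counted and how large the
        # RX ring is. Interface name comes from the output above.
        ethtool -S eth0 | grep -iE 'drop|miss|no_buffer'
        ethtool -g eth0        # current vs. maximum ring sizes

        # If the ring is at a small default, raising it toward the hardware
        # maximum is one common mitigation (4096 is illustrative):
        ethtool -G eth0 rx 4096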

    Read the article

  • Deleting a large number of files on Linux eats up the CPU

    - by Sanjay
    I generate more than 50GB of cache files on my RHEL server (a typical file size is 200 KB, so the number of files is huge). When I try to delete these files it takes 8-10 hours. However, the bigger issue is that the system load goes critical for those 8-10 hours. Is there any way I can keep the system load under control during the deletion? I tried using nice -n19 rm -rf *, but that doesn't help with the system load.
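
    nice only changes CPU priority. Below is a sketch of an I/O-priority variant that was not in the original post; the cache path is made up, and ionice's idle class assumes the CFQ I/O scheduler:

        # Sketch: run the deletion in the idle I/O scheduling class so it only
        # gets disk time when nothing else wants it. The path is made up.
        ionice -c3 nice -n19 find /var/cache/myapp -type f -delete

        # An alternative to expanding a huge glob with rm -rf *: stream the
        # file list and delete in batches; -print0/xargs -0 handles odd names.
        find /var/cache/myapp -type f -print0 | xargs -0 -n 1000 rm -f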

    Read the article

  • Large keepalive_requests values are severely slowing down Nginx

    - by Gil
    When running a beacon (43-byte transparent pixel) load test on Nginx, we have tried several keepalive_requests values (from 10 to 100,000), and the optimal value seems to be 10. Here are the server HTTP headers of this tiny reply:

        HTTP/1.1 200 OK
        Server: nginx/1.5.6
        Date: Wed, 23 Oct 2013 12:39:45 GMT
        Content-Type: image/gif
        Content-Length: 43
        Last-Modified: Mon, 28 Sep 1970 06:00:00 GMT
        Connection: keep-alive

    Nginx is twice as slow with keepalive_requests 100000 as with keepalive_requests 10. Can you help us understand that result, or tell us what we are doing wrong? For reference, here is the nginx.conf file.

    Read the article

  • Large file copy from NFS to local disk performance drop

    - by Bernhard
    I'm trying to copy a 200GB file from an NFS mount to a local disk. The local disk is an XFS filesystem on LVM on top of a RAID 5 array (hardware RAID controller). I'm using rsync to monitor the transfer speed. At the beginning the I/O speed is about 200MB/s and stays stable for the first 18GB. But then the performance drops by a factor of 10-20 and never recovers to the initial rate. Sometimes it reaches about 50-100MB/s, but only for a few seconds, and then the process seems to hang for a bit. At the same time, all file-stat operations on the target filesystem block for a long time (minutes). Interrupting the copy process also blocks for several minutes, and a subsequent delete of the partly copied file takes several minutes as well. Any ideas what could be causing this?
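
    A sketch, not from the original post, of how one might watch the target device and the kernel's writeback state while the copy stalls; the device, mount point, and file names are illustrative:

        # Sketch: run the copy with progress reporting...
        rsync --progress /mnt/nfs/bigfile.img /data/bigfile.img

        # ...and, in a second terminal, watch the target device and the
        # kernel writeback state while the transfer slows down:
        iostat -xm 5 sdb
        watch -n 5 'grep -E "^(Dirty|Writeback):" /proc/meminfo'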

    Read the article

  • Uploading large files (MP4) to IIS 7.5 gives a 500 Internal Server Error

    - by dragon112
    I made a website on which I need to be able to upload video files, and it worked for quite a while. However, at some point it just stopped working, and now IIS gives me the 500 Internal Server Error from the title when I upload a video. Images do work (possibly due to their smaller size). I use an HTML form with a PHP server-side script to upload. I have already set the user permissions for the entire inetpub directory to allow all actions for the IIS user. If you have any idea what it could be, PLEASE tell me; I have been trying to fix this for weeks now. Thanks in advance!

    Read the article

  • Accidentally concatenating a large file on a remote system

    - by Dan
    Every once in a while on a computer I'm ssh'd into, I will accidentally type "cat largefile.txt" and my screen will start rushing with text for the next 10 minutes. I'm always working in a screen session, so my current solution is to just log out and then log back in; since the output can go 100x faster when I'm logged out, it finishes in the short time it takes me to type my password again. Is there a better way, either involving the fact that I'm in a screen session, or a way to do this within SSH? What doesn't work:

        detaching from the screen session (doesn't respond until the file is done outputting)
        trying the command to move to a different window in the screen session (also doesn't respond)
        typing Ctrl+C to kill the cat command (also doesn't respond, probably because the command is already done and the buffers just have to catch up)
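
    Not an answer given in the original post, but a small sketch of the habit that avoids the flood in the first place, namely previewing the file instead of dumping it all to the terminal:

        # Sketch: preview instead of dumping the whole file to the terminal.
        less largefile.txt           # pager: quit with q, no flood
        head -n 50 largefile.txt     # just the first 50 lines
        wc -c largefile.txt          # check the size before deciding to cat it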

    Read the article

  • Storage setup for large files

    - by Mecca
    I need to store over 200TB of data (all types, the biggest being video files) and be able to access it over a local network. The files will be accessed for editing or searches. I don't need versioning, but a setup that would keep me safe from hard drive failures would be nice. Right now the content is spread across different hard drives, some external, some regular. I don't exclude the possibility of buying new/extra drives if necessary. If the files are ever exposed to the web, it won't be to the public, just to a couple of people. I have no idea what to buy to make this happen. I see some NAS solutions on the internet, like this one: http://www.bestbuy.com/site/a/2266043.p?id=1218317764591&skuId=2266043 but the storage is not enough, plus it doesn't seem to be scalable. What do you recommend? Thanks

    Read the article

  • Windows 2000 freezing during a large disk write

    - by robert
    We have a Windows 2000 SP4 server which freezes up for about 1 minute while its web app does a ~500 MB write operation. I can see the web app start to do I/O activity (through Process Explorer), then the RDP session becomes unresponsive: you can click on windows and buttons but nothing happens. When the disk write finally finishes, the session 'catches up' on all the mouse clicks you did during the freeze in a mad flurry of window activity, and the server returns to normal. During the freeze the web app stops as well. The same behaviour happens on the console of the server (so I know it's not a network thing). Nothing appears in the event logs; it's like nothing happened. I have upgraded all the HP hardware drivers to the latest ProLiant support pack and also run the HP hardware diagnostics, which found nothing wrong. What would cause a disk write to lock up the rest of the OS?

    Read the article

  • Visual annotator for large images

    - by pts
    I have a few hundred images of 30000 x 10000 pixels in size. Each image has lots of text (rendered as pixels) on it. I'd like to translate all text to another language. I speak both languages, and it's fine for me to translate each phrase manually. I need an image editor which can open these images quickly (faster than Inkscape, which needs about 60 seconds to open such an image), lets me zoom and rotate by 90 degrees, lets me erase (i.e. change the color of a selected rectangle to solid white), lets me add text, and lets me save the file as quickly as possible. I'd like to minimize the time I have to wait for the software to load, render and save images. Which is the best program for that on Windows? On Linux?

    Read the article

  • How to back up a large FreeNAS?

    - by Ze'ev
    We have a 12TB FreeNAS box in the office, and are looking for a way to keep a backup of it offsite. We're considering (1) tape; (2) a bunch of bare drives (popped into a spare hotswap bay); (3) external drives. Any advice on which solution is best? (Online backup is not an option because our internet connection is too slow.) Also, is there some software that will keep track of which files have been backed up and which haven't, so that when one backup unit fills up, we can continue the backup on the next? (We don't want to have to back up to a single 12TB device.) This software would preferably run on the NAS itself, or else from one of our Mac clients. Our goal is a situation where we attach some backup device; it automatically fills up with stuff from the server; the contents of that unit are catalogued somewhere; something prompts us to replace it with a fresh drive/tape when it is full; and the backup continues until that one is full as well, including any files that have changed since they were backed up.

    Read the article

  • Automatically cycle numerous or large files to the trash

    - by minameismud
    I've been tasked with fixing a vendor's program that, under certain conditions, dumps gigs of junk files into a log directory. It ends up filling users' machines. My task is to figure out how to make it stop without any source code or additional running processes, and without making the program kasplode. In other words, I'm looking to use a feature of the file system to control the growth. One idea I had was to make a hard link from that folder to NUL, as you might with /dev/null in the Linux world. However, my attempts to use the mklink program to create a junction result in a message that says "Local volumes are required to complete the operation." Any ideas on how to complete the junction, or other ideas to solve the problem?

    Read the article

  • Vantec NexStar NAS Enclosure - Writing large files

    - by peter
    I have one of these 'Vantec NexStar LX - NST-475LX-BK' drive enclosures. It is a NAS device. When I write a file to the device over eSATA or an SMB share, I cannot write files over 4GB; I think this is because the drive is formatted with FAT32. But when I access the device using FTP, it doesn't matter: I can write files of any size. E.g. I wrote one on there last night which was 30GB. Does this make any sense? Why? I guess the most important thing for me is data integrity.

    Read the article

  • Method for transferring large files for newbies

    - by doug
    Hi there. One of my friends is now in China and he wants to send me his home-made video files. I have a Linux hosting account on GoDaddy and I've configured an FTP account for him. Unfortunately, he has trouble using the FTP account. Can you recommend a better option? Thank you.

    Read the article

  • Xen P2V for large physical hosts with a lot of free space

    - by Sirex
    I need to P2V an RHEL 5 machine to Xen under RHEL 5. I know I can dd if=/dev/sda and then use virt-install --import on the host, but the downside is that the original machine has 80% free space on its drive. Does anyone know of (or can document) a quick and easy method that works reliably to produce a bootable Xen image which can run under an HVM in such cases? I tried Clonezilla to make the image, to avoid the free-space problem, but it failed to do the clone with "something went wrong" (useless info, I know). At the moment I'm looking at doing a dd of each partition and a file-level copy of the partition which is mostly empty, then creating a new virtual disk, copying the partitions over to it by mounting both the new image and the virtual drive on a second VM, then copying the boot sectors over, then copying the file-level backup... there must be an easier way? Oh, and the budget is $0. :)
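
    One variation on the dd approach mentioned above that keeps the image size closer to the used space. This is a sketch, not a tested recipe; the device and file names are illustrative:

        # Sketch: on the source machine, fill the free space with zeros so
        # the empty 80% can be stored as holes, then image the whole disk.
        dd if=/dev/zero of=/zerofill bs=4M || true   # stops when the disk is full
        rm -f /zerofill
        sync

        # Copy the disk to a sparse image file (zero blocks take no space):
        dd if=/dev/sda of=/mnt/backup/rhel5.img bs=4M conv=sparse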

    Read the article

  • Windows Server 2003 R2: process "System" takes large chunks of CPU time

    - by Dabu
    I have a domain controller running Windows Server 2003 R2. The server behaves very well when restarted daily; however, on each day it is not restarted, there's a process called "System" that takes enormous chunks of CPU time (up to 95%). The server supports AD, WINS and DNS, has Kaspersky Endpoint Security running, and manages backups via Arcserve 15. What I have tried so far: Process Explorer (ex-Sysinternals) shows that the "System" process has no sub-processes. In the "Threads" tab of the detailed view I can see that 90% of the CPU time is used up by "ntkrnlpa.exe+0x803c0". The "Interrupts" process is running at 3-5% CPU time; I'm not sure if this accounts for the amount of CPU time that System takes.

    Read the article

  • How to version large binary files?

    - by Walter White
    I run Windows XP inside a virtual machine for some tasks. I attempted to use git to version the VirtualBox image; however, it is about 6GB after all the service packs, I only have 6GB of RAM, and git bombs out saying it is out of memory. I would basically like to have snapshots of Windows so that I can simply blow away an image and start anew whenever I want to. I'd like something I can roll back to in the event that an upgrade doesn't work, so I would prefer to use version control, or snapshots if the filesystem supports them. Any ideas on what tools I can use to do that?
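
    Since the image already lives in VirtualBox, its built-in snapshot mechanism is one candidate for the rollback the question describes. A minimal sketch, not from the original post; the VM and snapshot names are made up:

        # Sketch: snapshot the VM's current state and roll back to it later.
        # "WinXP" and the snapshot name are illustrative.
        VBoxManage snapshot "WinXP" take "clean-after-updates" --description "known-good baseline"
        VBoxManage snapshot "WinXP" list

        # Roll back (with the VM powered off):
        VBoxManage snapshot "WinXP" restore "clean-after-updates"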

    Read the article

  • SQL Server availability issue: a large query stops other clients from connecting

    - by Carlos
    I've got a high-spec (multicore, RAID) server running MS SQL 2008 with several databases on it. I have a low-throughput process that periodically needs a small amount of information from one of the DBs, and the code seems to work fine. However, sometimes when one of my colleagues runs a huge query against one of the other DBs, I see full CPU usage on the machine and connections from my app time out. Why does this happen? I would have thought the many cores and hard disks (together with a cleverly written DB server) would somehow be able to keep at least some of the resources free for other apps. I'm pretty sure he doesn't use multiple connections for his query. What can I do to prevent this?

    Read the article

  • How to remove a large number of files/folders in Linux

    - by user1745713
    We are using Hadoop to split a table into smaller files to feed to Mahout, but in the process we created a huge number of _temporary logs. We have an NFS mount for the Hadoop volume, so we can use all the Linux commands to delete folders and files, but we just can't get them deleted. Here's what I've tried so far:

        hadoop fs -rmr /.../_temporary             : hangs for hours and does nothing
        rm -rf /.../_temporary (on the NFS mount)  : hangs for hours and does nothing
        find . -name '*.*' -type f -delete         : same as above

    The folders look like this (there are 38 of them inside _temporary):

        drwxr-xr-x 319324 user user 319322 Oct 24 12:12 _attempt_201310221525_0404_r_000000_0

    The contents of these are actually folders, not files; each one of those 319322 folders has exactly one file inside. I'm not sure why it does the logging this way. Any help is appreciated.
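
    A deletion trick sometimes used for directories with millions of entries, in case it behaves better over NFS than rm or find did here. This is a sketch, not from the original post; the path below merely stands in for the elided one above:

        # Sketch: delete by syncing an empty directory over the target.
        # /mnt/hadoop/path/to/_temporary stands in for the real (elided) path.
        mkdir /tmp/empty
        rsync -a --delete /tmp/empty/ /mnt/hadoop/path/to/_temporary/
        rmdir /mnt/hadoop/path/to/_temporary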

    Read the article

  • Wrap an app with its dynamic libraries into one large static app

    - by progo
    I have an old program that kind of depends on older dynamic libraries, and those tend to get upgraded by the distro's updates. I figured there would be a script using ldd that would gather the libraries the program needs and create one bigger, statically linked application that wouldn't break so easily. If I could do this, a lot of older KDE libraries could be removed from my system, which would make my life easier. Thanks!
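
    True static relinking isn't possible without rebuilding from source, but a bundling script along the lines the post imagines, collecting the libraries ldd reports and running the app against private copies, might look like this sketch (the application name and paths are made up):

        #!/bin/sh
        # Sketch: copy the shared libraries an application needs into a
        # private directory and launch it against those copies. The program
        # name and paths are illustrative.
        APP=/usr/bin/oldapp
        DEST="$HOME/oldapp-bundle"
        mkdir -p "$DEST/lib"
        cp -v "$APP" "$DEST/"

        # ldd prints lines like "libfoo.so.1 => /usr/lib/libfoo.so.1 (0x...)"
        ldd "$APP" | awk '/=> \// { print $3 }' | while read -r lib; do
            cp -v "$lib" "$DEST/lib/"
        done

        # Run the bundled copy with its private library path:
        LD_LIBRARY_PATH="$DEST/lib" "$DEST/oldapp"

    This only shields the program from library upgrades on the system; it does not produce a genuinely static binary.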

    Read the article
