Search Results

Search found 13928 results on 558 pages for 'large scale nat'.

Page 46/558

  • Large file copy from NFS to local disk performance drop

    - by Bernhard
    I'm trying to copy a 200GB file from an NFS mount to a local disk. The local disk is an XFS filesystem on LVM on top of a RAID 5 array (hardware RAID controller). I'm using rsync to monitor the transfer speed. At the beginning the I/O speed is about 200MB/s, stable for the first 18GB. But then the performance drops by a factor of 10-20 and never recovers to the initial rate. Sometimes it reaches about 50-100MB/s, but only for a few seconds, and then the process seems to hang for a bit. At the same time, all file-stat operations on the target filesystem block for a long time (minutes). Interrupting the copy process also blocks for several minutes, and a subsequent delete of the partly copied file takes several minutes as well. Any ideas what could be causing this?
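
    A minimal sketch of how one might watch where the throughput collapses and whether writeback on the RAID 5 volume is the bottleneck; the file names and mount points below are placeholders, not from the original post:

        # copy with per-file progress to see when the rate drops
        rsync --progress /mnt/nfs/bigfile.img /data/

        # watch dirty/writeback pages pile up on the target while the copy runs
        watch -n1 'grep -E "Dirty|Writeback" /proc/meminfo'

        # compare against a copy that bypasses the page cache on the target
        dd if=/mnt/nfs/bigfile.img of=/data/bigfile-direct.img bs=1M oflag=direct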

    Read the article

  • accidentally concatenate a large file on a remote system

    - by Dan
    Every once in a while on a computer I'm ssh'd into, I will accidentally type "cat largefile.txt" and my screen will start rushing with text for the next 10 minutes. I'm always working in a screen session, so my current solution is to just log out and then log back in; since the output can go 100x faster when I'm logged out, it'll finish in the short time it takes me to type my password in again. Is there a better way, either involving the fact that I'm in a screen session, or a way to do this within SSH? What doesn't work:
    - detaching from the screen session (doesn't respond until the file is done outputting)
    - trying the command to move to a different window in the screen session (also doesn't respond)
    - typing Ctrl+C to kill the cat command (also doesn't respond, probably because the command is already done and the buffers just have to catch up)
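
    Not an answer from the original thread, just a hedged sketch of habits that avoid the flood in the first place (the file name is a placeholder):

        # check the size before dumping a file to the terminal
        ls -lh largefile.txt

        # page through it instead; less reads only what it displays
        less largefile.txt

        # or peek at the start
        head -n 100 largefile.txt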

    Read the article

  • uploading large files (mp4) to IIS 7.5 gives 500 Internal Server Error

    - by dragon112
    I made a website on which I need to be able to upload video files, and it worked for quite a while. However, at some point it just stopped working, and now IIS gives me a 500 Internal Server Error when I upload a video. Images do still work (possibly due to their smaller size). I use an HTML form with a PHP server-side script to upload. I have already set the user permissions for the entire inetpub to allow all actions for the IIS user. If you have any idea what it could be, PLEASE tell me; I have been trying to fix this for weeks now. Thanks in advance!
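
    Not from the original question, but the usual suspects for large uploads failing under IIS + PHP are the request-size and PHP upload limits; a hedged sketch of where one might raise them (the site name and values are assumptions):

        # php.ini: raise PHP's own limits (values are illustrative)
        #   upload_max_filesize = 512M
        #   post_max_size       = 512M
        #   max_execution_time  = 300

        # IIS request filtering: raise the maximum request body size (bytes)
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
          /section:requestFiltering /requestLimits.maxAllowedContentLength:536870912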

    Read the article

  • windows 2000 freezing during large disk write

    - by robert
    We have a Windows 2000 SP4 server which freezes up for about 1 minute while its web app does a ~500MB write operation. I can see the web app start to do I/O activity (through Process Explorer), then the RDP session becomes unresponsive; you can click on windows and buttons but nothing happens. When the disk write finally finishes, the session 'catches up' on all the mouse clicks you did during the freeze in a mad flurry of window activity and the server returns to normal. During the freeze the web app stops as well. The same behaviour happens on the console of the server (so I know it's not a network thing). Nothing appears in the event logs; it's like nothing happened. I have upgraded all the HP hardware drivers to the latest ProLiant support pack, and also run the HP hardware diagnostics, which found nothing wrong. What would cause a disk write to lock up the rest of the OS?

    Read the article

  • Storage setup for large files

    - by Mecca
    I need to store over 200TB of data (all types, the biggest being video files) and be able to access it over a local network. The files will be accessed for editing or searches. I don't need versioning, but a setup that would keep me safe from hard drive failures would be nice. Right now the content is on different hard drives, some external, some regular. I don't exclude the possibility of buying new/extra drives if necessary. If they are ever exposed to the web, it won't be to the public, just a couple of people. I have no idea what to buy to make this happen. I see some NAS solutions on the internet like this http://www.bestbuy.com/site/a/2266043.p?id=1218317764591&skuId=2266043 but the storage is not enough, plus it doesn't seem to be scalable. What do you recommend? Thanks

    Read the article

  • Automatically cycle numerous or large files to the trash

    - by minameismud
    I've been tasked with fixing a vendor's program that, under certain conditions, dumps gigs of junk files into a log directory. It ends up filling users' machines. My task is to figure out how to make it stop without any source code or additional running processes, and without making the program kasplode. In other words, I'm looking to use a feature of the file system to control the growth. One idea I had was to make a hard link from that folder to NUL, as you might with /dev/null in the Linux world. However, my attempts to use the mklink program to create a junction result in a message that says "Local volumes are required to complete the operation." Any ideas on how to complete the junction, or other ideas to solve the problem?
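
    Not from the original post, but two directions one might sketch: the junction syntax that was being attempted (junctions can only target local directories, which is why NUL is rejected), and a scheduled cleanup that caps growth without touching the vendor program. Paths below are placeholders:

        :: junction syntax (mklink is a cmd.exe builtin): link first, target second
        mklink /J "C:\vendor\logs" "D:\scratch\logs"

        :: alternative: a scheduled task that deletes log files older than a day
        forfiles /p "C:\vendor\logs" /m *.log /d -1 /c "cmd /c del @path"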

    Read the article

  • Visual annotator for large images

    - by pts
    I have a few hundred images of 30000 x 10000 pixels in size. Each image has lots of text (rendered as pixels) on it. I'd like to translate all text to another language. I speak both languages, and it's fine for me to translate each phrase manually. I need an image editor which can open these images quickly (faster than Inkscape, which needs about 60 seconds to open such an image), lets me zoom and rotate by 90 degrees, lets me erase (i.e. change the color of a selected rectangle to solid white), lets me add text, and lets me save the file as quickly as possible. I'd like to minimize the time I have to wait for the software to load, render and save images. Which is the best program for that on Windows? On Linux?

    Read the article

  • How to backup a large FreeNAS?

    - by Ze'ev
    We have a 12TB FreeNAS box in the office and are looking for a way to keep a backup of it offsite. We're considering (1) tape, (2) a bunch of bare drives (popped into a spare hot-swap bay), or (3) external drives. Any advice on which solution is best? (Online backup is not an option because our internet connection is too slow.) Also, is there software that will keep track of which files have been backed up and which haven't, so that when one backup unit fills up, we can continue the backup on the next? (We don't want to have to back up to a single 12TB device.) This software could run, preferably, on the NAS itself, or from one of our Mac clients. Our goal is a situation where we attach some backup device; it automatically fills up with stuff from the server; the contents of this unit are catalogued somewhere; something prompts us to replace it with a fresh drive/tape; and backup continues until full, including any files that have changed since being backed up.

    Read the article

  • Vantec NexStar NAS Enclosure - Writing large files

    - by peter
    I have one of these 'Vantec NexStar LX - NST-475LX-BK' drive enclosures. It is a NAS device. When I write a file to the device over eSATA or an SMB share, I cannot write files over 4GB; I think this is because the drive is formatted with FAT32. But when I access the device using FTP it doesn't matter: I can write files of any size. E.g. I wrote one on there last night which was 30GB. Does this make any sense? Why? I guess the most important thing for me is data integrity.

    Read the article

  • Windows 2003-R2-Server: Process "System" takes large chunks of CPU time

    - by Dabu
    I have a domain controller running 2003 R2. The server behaves very well when restarted daily; however, each day it is not restarted, a process called "System" takes enormous chunks of CPU time (up to 95%). The server supports AD, WINS, and DNS, has Kaspersky Endpoint Security running, and manages backups via Arcserve 15. What I have tried so far: Process Explorer (ex-Sysinternals) shows that the "System" process has no sub-processes. In the "Threads" tab of the detailed view I can see that 90% of the CPU time is used up by "ntkrnlpa.exe+0x803c0". The "Interrupts" process is running at 3-5% of CPU time; I'm not sure if this accounts for the amount of CPU time that System takes.

    Read the article

  • method for transferring large files for newbies

    - by doug
    Hi there. One of my friends is now in China and he wants to send me his home-made video files. I have a Linux hosting account on GoDaddy and I've configured an FTP account for him. Unfortunately he has trouble using the FTP account. Can you recommend a better option? TY

    Read the article

  • Xen P2V for large physical hosts with much free space

    - by Sirex
    I need to P2V a RHEL 5 machine to Xen under RHEL 5. I know I can use dd if=/dev/sda and then virt-install --import on the host, but the downside is that the original machine has 80% free space on its drive. Does anyone know of (or can document) a quick and easy method, which works reliably, to produce a bootable Xen image that can run under HVM in such cases? I tried Clonezilla to make the image, to avoid the free-space problem, but it failed to do the clone with "something went wrong" (useless info, I know). At the moment I'm looking at doing a dd of each partition plus a file-level copy of the partition which is mostly empty, then creating a new virtual disk, copying the partitions over to it by mounting both the new image and the virtual drive on a second VM, then copying the boot sectors over, then copying the file-level backup... there must be an easier way? Oh, and the budget is $0. :)
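
    A rough sketch of the straightforward dd route mentioned above, plus one way to keep the image small despite the free space; the names, paths and virt-install flags are assumptions and may differ between versions:

        # on the source machine: zero out the free space so it stores sparsely
        dd if=/dev/zero of=/zerofile bs=4M; rm -f /zerofile

        # image the whole disk over SSH onto the Xen host
        dd if=/dev/sda bs=4M | ssh xenhost 'dd of=/var/lib/xen/images/rhel5-p2v.img bs=4M'

        # on the Xen host: store it sparsely and import it as an HVM guest
        cp --sparse=always rhel5-p2v.img rhel5-p2v-sparse.img
        virt-install --import --hvm --name rhel5-p2v --ram 2048 \
          --disk path=/var/lib/xen/images/rhel5-p2v-sparse.img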

    Read the article

  • SQL server availability issue: large query stops other connections from connecting

    - by Carlos
    I've got a high-spec (multicore, RAID) server running MS SQL 2008, with several databases on it. I have a low-throughput process that periodically needs a small amount of information from one of the DBs, and the code seems to work fine. However, sometimes when one of my colleagues runs a huge query against one of the other DBs, I see full CPU usage on the machine and connections from my app time out. Why does this happen? I would have thought the many cores and hard disks (together with a cleverly written DB server) would be able to keep at least some resources free for other apps. I'm pretty sure he doesn't use multiple connections for his query. What can I do to prevent this?

    Read the article

  • How to version large binary files?

    - by Walter White
    I run Windows XP inside a virtual machine for some tasks. I attempted to use git to version the image for VirtualBox; however, it is about 6GB after all the service packs, and I only have 6GB of RAM, so git bombs out saying it is out of memory. I would basically like to have snapshots of Windows so that I can simply blow away an image and start anew when I want to. I like to have something I can roll back to in the event that an upgrade doesn't work, so I would prefer to use version control or snapshots if the filesystem supports it. Any ideas on what tools I can use to do that?
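
    Since the image already lives in VirtualBox, one hedged option is its built-in snapshot support rather than versioning the 6GB file wholesale; the VM name below is a placeholder:

        # take a snapshot of the current state before an upgrade
        VBoxManage snapshot "WinXP" take "pre-servicepack" --description "clean baseline"

        # roll back if the upgrade goes wrong (with the VM powered off)
        VBoxManage snapshot "WinXP" restore "pre-servicepack"

        # drop snapshots that are no longer needed
        VBoxManage snapshot "WinXP" delete "pre-servicepack"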

    Read the article

  • How to remove a large number of files/folders in Linux

    - by user1745713
    We are using Hadoop to split a table into smaller files to feed to Mahout, but in the process we created a huge number of _temporary logs. We have an NFS mount for the Hadoop volume, so we can use all the Linux commands to delete folders/files, but we just can't get them deleted. Here's what I've tried so far:
    - hadoop fs -rmr /.../_temporary : hangs for hours and does nothing
    - on the NFS mount: rm -rf /.../_temporary : hangs for hours and does nothing
    - find . -name '*.*' -type f -delete : same as above
    The folders look like this (38 of these folders inside _temporary):
    drwxr-xr-x 319324 user user 319322 Oct 24 12:12 _attempt_201310221525_0404_r_000000_0
    The contents of these are actually folders, not files; each one of those 319322 folders has exactly one file inside. Not sure why they do the logging this way. Any help is appreciated.
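
    Not from the original post, but a couple of approaches that tend to cope better with directories holding hundreds of thousands of entries; paths are placeholders:

        # delete through HDFS itself rather than the NFS gateway, skipping the trash
        hadoop fs -rm -r -skipTrash /path/to/_temporary

        # on the NFS mount: rsync an empty directory over the target;
        # often noticeably faster than rm -rf on huge directory trees
        mkdir /tmp/empty
        rsync -a --delete /tmp/empty/ /mnt/hadoop/path/to/_temporary/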

    Read the article

  • Wrap app with dynamic libraries into one large static app

    - by progo
    I have an old program that depends on older dynamic libraries, which tend to get upgraded easily with the distro's updates. I figured there would be a script using ldd that would gather the needed libs and create one bigger, statically linked application that wouldn't break so easily. If I could do this, a lot of older KDE libraries could be removed from my system and make my life easier. Thanks!
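
    An existing binary usually can't be relinked statically after the fact, but a common workaround is to bundle the exact shared libraries it resolves today and point it at them with a wrapper; a hedged sketch, with the binary name and paths as placeholders:

        # collect the shared libraries the binary currently resolves to
        mkdir -p /opt/oldapp/libs
        ldd /opt/oldapp/bin/oldapp | awk '/=> \//{print $3}' | xargs -I{} cp -v {} /opt/oldapp/libs/

        # wrapper that prefers the bundled copies over whatever the distro ships
        printf '%s\n' '#!/bin/sh' \
          'export LD_LIBRARY_PATH=/opt/oldapp/libs:$LD_LIBRARY_PATH' \
          'exec /opt/oldapp/bin/oldapp "$@"' > /opt/oldapp/run-oldapp
        chmod +x /opt/oldapp/run-oldapp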

    Read the article

  • What are people using as Login scripts in large enterprises

    - by beakersoft
    Hi, we have recently been tasked with looking after the user login side of things in our enterprise (Windows clients in Active Directory). The system we have at the moment uses a VBScript logon/logoff script to call a couple of DLLs written in VB6. The DLLs' actions are controlled by some config files based on users/groups, which are administered from a central app. This is quite a good system, but we want to move away from VB6 for the DLLs (maybe port them to C++, but then you have to make them COM+ to call them from VBScript etc.) and possibly away from VBScript for the actual login scripts themselves. I just wondered what other people are using and what people can suggest. Thanks, Luke

    Read the article

  • Cutting up videos (excerpting) on Mac OS X -- iMovie produces super-large files

    - by markvgti
    I need to cut out parts of a video (plus the associated audio, of course) to make a short clip. For example: take 2 minutes from one location, 3 minutes from another part of the video, 30 seconds from another location, and join it all together to form one single clip. The format of the input video is MP4 (H.264 encoding, AFAICR). I don't need very sophisticated merges or transitions from one part to the next, or sophisticated banners (text) on screen, but some ability to do so would be a plus. I've done this with iMovie in the past, but where the original file was under 5MB/min of play time, the chopped-up version was over 11MB/min of play time, which to me seems really bad. Is there a better/different way of doing this on OS X? Looking for free (gratis) solutions. OS: OS X 10.9.3
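
    One free route (an assumption, not something from the original post) is ffmpeg with stream copy: it cuts near keyframes without re-encoding, so the pieces stay at roughly the source bitrate instead of being re-compressed. File names and timestamps are placeholders:

        # cut three segments without re-encoding (-c copy); -ss is the start, -t the duration
        ffmpeg -ss 00:01:00 -i input.mp4 -t 00:02:00 -c copy part1.mp4
        ffmpeg -ss 00:10:30 -i input.mp4 -t 00:03:00 -c copy part2.mp4
        ffmpeg -ss 00:20:00 -i input.mp4 -t 00:00:30 -c copy part3.mp4

        # join the pieces with the concat demuxer
        printf "file '%s'\n" part1.mp4 part2.mp4 part3.mp4 > parts.txt
        ffmpeg -f concat -safe 0 -i parts.txt -c copy clip.mp4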

    Read the article

  • kvm process has too large a memory footprint on host

    - by gucki
    I'm using the latest Ubuntu Quantal and start a KVM guest which should have 2048 MB of memory. After a few hours I can see that the kvm process of this guest is around 2700 MB, i.e. 700 MB more than the guest should be able to consume. A small overhead like 1% would be OK, but not 30%?!
    root 8631 74.0 22.2 4767484 2752336 ? Sl Nov07 512:58 kvm -cpu kvm64 -smp sockets=1,cores=2 -cpu kvm64 -m 2048 -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -drive file=rbd:data/vm-disk-1,if=none,id=drive-virtio0,cache=writeback,aio=native -device virtio-net-pci,netdev=net0,bus=pci.0,addr=0x12,id=net0,mac=02:7a:86:e6:1a:6c,bootindex=200 -netdev type=tap,id=net0,vhost=on -usbdevice tablet -nodefaults -enable-kvm -daemonize -boot menu=on -vga cirrus
    root 8694 0.0 0.0 0 0 ? S Nov07 0:00 [kvm-pit/8631]
    How is this possible, and how can I prevent it?

    Read the article

  • In BASH, are wildcard expansions guaranteed to be in order?

    - by ArtB
    Is the expansion of a wildcard in BASH guaranteed to be in alphabetical order? I was forced to split a large file into 10MB pieces so that they can be accepted by my Mercurial repository. So I was thinking I could use:
    split -b 10485760 Big.file BigFilePiece.
    and then, in place of:
    cat BigFile | bigFileProcessor
    I could do:
    cat BigFilePiece.* | bigFileProcessor
    However, I could not find anywhere a guarantee that the expansion of the asterisk (aka wildcard, aka '*') would always be in alphabetical order, so that .aa comes before .ab (as opposed to timestamp ordering or something like that). Also, are there any flaws in my plan? How great is the performance cost of cat-ing the file together?
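
    A quick sanity check for the plan (not from the original question): the shell sorts glob matches, so the pieces come back in suffix order, and comparing checksums of the original and the re-concatenated stream confirms it byte-for-byte. File names follow the example above:

        # split into 10MB pieces with two-letter suffixes (.aa, .ab, ...)
        split -b 10485760 Big.file BigFilePiece.

        # the glob expands in sorted order, so this should match the original's checksum
        cat BigFilePiece.* | md5sum
        md5sum Big.file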

    Read the article

  • How large should an administrators team be? [closed]

    - by Artyom
    I'm trying to find an answer to how many server administrators/technicians are required to run a server farm with 24/7 availability of, let's say, 10, 100, or 1000 Linux servers. Are there any studies on this? Edit: I was not expecting this question to be closed. There are lots of studies in, for example, software development, where from "lines of code" you can approximate the development cost (COCOMO), so I was searching for something similar in administration. Note, I understand 100% that it is not a straightforward or easy-to-answer question, but it is a real question...

    Read the article

  • How to manage large number of desktop VMs?

    - by symcbean
    I'm looking at the feasibility of providing remote access to multiple virtual machines. The VMs themselves will provide user desktops. To make the best use of the available resources, I'd like the VMs to hibernate when the user disconnects, which implies being able to start them up when a user connects. Ideally each user would 'own' a VM image; if not, then I'd require that the session be terminated. Obviously this would require the remote access protocol to be tied into the VM management. Is there anything out there to provide such functionality? (Extra credit for open protocols! ;)

    Read the article

  • Sending a large number of mails causing problems on CentOS 6 / Plesk 10

    - by papakost
    I have a VPS running CentOS 6. When the system tries to send the daily newsletter, after some time (e.g. after sending about 2000 emails) I get the error "Unable to send mail" and the system memory goes really high. Up to that moment, the mails are delivered normally. The other symptoms are:
    - I cannot see anything in /var/log/maillog (the file doesn't seem to be written to)
    - All files in /var/spool/mail have 0 bytes size
    - From time to time in the httpd log I get errors like: /usr/sbin/sendmail: error while loading shared libraries: libc.so.6: cannot open shared object file: Error 23
    - The "Activate mail service on domain" setting in Plesk is deactivated
    Any idea on what's going wrong here?
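
    Not part of the original post, but errno 23 is ENFILE ("too many open files in system"), so a hedged sketch of checking whether the kernel-wide file-handle table is being exhausted while the newsletter run is in progress:

        # allocated, free, and maximum file handles system-wide
        cat /proc/sys/fs/file-nr

        # current limit, and raising it temporarily if it is being hit
        sysctl fs.file-max
        sysctl -w fs.file-max=500000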

    Read the article

  • Uploading many large files to a remote server

    - by TiernanO
    I am in the process of creating an offsite backup and need to do an initial load of data. Currently that's about 400GB, give or take 10GB or so. The backup system is producing files which are about 4GB each, plus some other, smaller related files. So, I need to transfer all 400-ish gigs to a remote server, but how? What is the best method? I have full remote access to the server, so I can install anything I need to install. There are Windows, Linux and a Solaris VM running on the box itself, so any of those can be used there, and I have Windows and Linux at home. I have 2 internet connections in the house, 10Mb/s upload on each, so something that could split the transfer across connections would be handy (kind of like GetRight, but in reverse... PutRight?).
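
    Not from the original question, but one hedged sketch: rsync over SSH with resumable partial transfers, run as two jobs over split halves of the file set so each uplink can carry one stream (host names and paths are placeholders; how traffic actually splits across the two uplinks is a routing question, not an rsync one):

        # resumable transfer of half the backup set; --partial keeps interrupted files
        rsync -av --partial --progress /backup/set-a/ user@remote:/backup/set-a/

        # second job for the other half, started separately
        rsync -av --partial --progress /backup/set-b/ user@remote:/backup/set-b/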

    Read the article
