Search Results

Search found 45804 results on 1833 pages for 'large files'.

  • Xen P2V for large physical hosts with much free space

    - by Sirex
    I need to P2V a RHEL5 machine to Xen under RHEL5. I know I can use dd if=/dev/sda and then virt-install --import on the host, but the downside is that the original machine has 80% free space on its drive. Does anyone know of (or can document) a quick, easy and reliable method to produce a bootable Xen image that can run as an HVM guest in such cases? I tried Clonezilla to make the image, to avoid the free-space problem, but it failed with "something went wrong" (useless info, I know). At the moment I'm looking at doing a dd of each partition, plus a file-level copy of the partition which is mostly empty, then creating a new virtual disk, copying the partitions over to it by mounting both the new image and the virtual drive on a second VM, then copying the boot sectors over, then copying the file-level backup... there must be an easier way? Oh, and the budget is $0. :)
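
    One hedged sketch that keeps the dd approach already mentioned while stopping the 80% free space from bloating the image: zero the unused blocks first, then turn the zeroed regions into holes. The target path /mnt/backup and the single-filesystem layout are assumptions, not part of the question; if the intermediate full-size image is a problem and the host has GNU coreutils 8.16 or newer, dd's conv=sparse flag can write the sparse file directly instead of steps 2 and 3.

        # 1) On the source machine, zero the unused blocks so they can become holes later
        #    (repeat for each mounted filesystem; /zerofill is a throwaway file):
        dd if=/dev/zero of=/zerofill bs=1M || true
        rm -f /zerofill
        sync
        # 2) Image the whole disk (ideally from rescue media so the disk is quiescent):
        dd if=/dev/sda bs=1M of=/mnt/backup/rhel5-p2v.img
        # 3) Punch the zeroed blocks out so the image only occupies the used space
        #    (cp --sparse=always is available even on older coreutils):
        cp --sparse=always /mnt/backup/rhel5-p2v.img /mnt/backup/rhel5-p2v-sparse.img
        # 4) Import the sparse raw image with virt-install --import, as already planned.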

  • Windows 2003-R2-Server: Process "System" takes large chunks of CPU time

    - by Dabu
    I have a domain controller running 2003 R2. The server behaves very well when restarted daily; however, on any day it is not restarted, there is a process called "System" that takes enormous chunks of CPU time (up to 95%). The server supports AD, WINS and DNS, has Kaspersky Endpoint Security running, and manages backups via ARCserve 15. What I have tried so far: Process Explorer (ex-Sysinternals) shows that the "System" process has no sub-processes. In the "Threads" tab of the detailed view I can see that 90% of the CPU time is used up by "ntkrnlpa.exe+0x803c0". The "Interrupts" process is running at 3-5% of CPU time; I'm not sure if this accounts for the amount of CPU time that System takes.

  • SQL server availability issue: large query stops other connections from connecting

    - by Carlos
    I've got a high-spec (multicore, RAID) server running MS SQL 2008, with several databases on it. I have a low-throughput process that periodically needs a small amount of information from one of the DBs, and the code seems to work fine. However, sometimes when one of my colleagues runs a huge query against one of the other DBs, I see full CPU usage on the machine and connections from my app time out. Why does this happen? I would have thought that the many cores and hard disks (together with a cleverly written DB server) would somehow be able to keep at least some of the resources free for other apps. I'm pretty sure he doesn't use multiple connections for his query. What can I do to prevent this?

  • Deleting files using .NET that were migrated from win2k3

    - by Andrew Duncan
    We recently migrated an ASP.NET website from Windows 2003 to Windows 2008 R2 by zipping up all the files and extracting them to the new site. Since migrating, the web application is still able to upload and delete new files; however, it is unable to delete files that were copied over from the original Win2k3 app. We're guessing it's a permissions problem, because the error is: Access to the path 'E:.......PATH.....' is denied. We've been trying to match the permissions of a newly uploaded file to those of a migrated file. Newly uploaded files seem to get the app pool user in their permissions and as the owner; however, the original files don't have this. Any help would be fantastic. Thanks,

  • Wrap app with dynamic libraries into one large static app

    - by progo
    I have an old program that depends on older dynamic libraries, and they tend to get upgraded with the distro's updates. I figured there would be a script using ldd that gathers the needed libs and creates one bigger, statically linked application that wouldn't break so easily. If I could do this, a lot of older KDE libraries could be removed from my system, which would make my life easier. Thanks!
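
    Building a genuinely static binary normally requires relinking from source, but a hedged alternative that addresses the same breakage is to freeze the shared libraries the program resolves today and always run it against that private copy. The paths /usr/local/bin/oldapp and /opt/oldapp-bundle are placeholders, not from the question.

        #!/bin/bash
        APP=/usr/local/bin/oldapp          # placeholder: the old program
        BUNDLE=/opt/oldapp-bundle          # placeholder: where to freeze its libraries
        mkdir -p "$BUNDLE/lib"
        cp "$APP" "$BUNDLE/"
        # Copy every shared library ldd resolves for the binary
        ldd "$APP" | awk '/=> \//{print $3}' | xargs -I{} cp -v {} "$BUNDLE/lib/"
        # Small wrapper that prefers the bundled copies over the system ones
        {
          echo '#!/bin/bash'
          echo 'DIR="$(cd "$(dirname "$0")" && pwd)"'
          echo 'exec env LD_LIBRARY_PATH="$DIR/lib" "$DIR/oldapp" "$@"'
        } > "$BUNDLE/run.sh"
        chmod +x "$BUNDLE/run.sh"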

  • Cutting up videos (excerpting) on Mac OS X -- iMovie produces super-large files

    - by markvgti
    I need to cut out parts of a video (plus the associated audio, of course) to make a short clip. For example, take 2 minutes from one location, 3 minutes from another part of the video, and 30 seconds from another location, and join it all together to form one single clip. The format of the input video is MP4 (H.264 encoding, AFAICR). I don't need very sophisticated merges or transitions from one part to the next, or sophisticated banners (text) on screen, but some ability to do so would be a plus. I've done this with iMovie in the past, but where the original file was under 5 MB/min of play time, the chopped-up version was over 11 MB/min of play time, which to me seems really bad. Is there a better/different way of doing this on OS X? Looking for free (gratis) solutions. OS: OS X 10.9.3
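
    A hedged command-line alternative, assuming ffmpeg is available (it is free and can be installed via Homebrew, for example): cutting with stream copy avoids re-encoding, so the excerpts keep roughly the source's MB-per-minute rate. File names and timestamps below are made-up examples, and -c copy can only cut on keyframes, so cut points may shift by a second or so.

        ffmpeg -ss 00:02:00 -i input.mp4 -t 00:02:00 -c copy part1.mp4   # 2 minutes starting at 2:00
        ffmpeg -ss 00:10:00 -i input.mp4 -t 00:03:00 -c copy part2.mp4   # 3 minutes starting at 10:00
        ffmpeg -ss 00:20:00 -i input.mp4 -t 00:00:30 -c copy part3.mp4   # 30 seconds starting at 20:00
        printf "file '%s'\n" part1.mp4 part2.mp4 part3.mp4 > parts.txt   # list for the concat demuxer
        ffmpeg -f concat -i parts.txt -c copy clip.mp4                   # join without re-encoding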

  • kvm process has too large a memory footprint on host

    - by gucki
    I'm using the latest Ubuntu (Quantal) and start a KVM guest which should have 2048 MB of memory. After a few hours I can see that the kvm process of this guest is around 2700 MB, so 700 MB more than the guest should be able to consume. I mean, a small overhead like 1% would be OK, but not 30%?!

        root  8631 74.0 22.2 4767484 2752336 ?  Sl  Nov07 512:58  kvm -cpu kvm64 -smp sockets=1,cores=2 -cpu kvm64 -m 2048
            -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100
            -drive file=rbd:data/vm-disk-1,if=none,id=drive-virtio0,cache=writeback,aio=native
            -device virtio-net-pci,netdev=net0,bus=pci.0,addr=0x12,id=net0,mac=02:7a:86:e6:1a:6c,bootindex=200
            -netdev type=tap,id=net0,vhost=on -usbdevice tablet -nodefaults -enable-kvm -daemonize -boot menu=on -vga cirrus
        root  8694  0.0  0.0       0       0 ?  S   Nov07   0:00  [kvm-pit/8631]

    How is this possible, and how can I prevent it?
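
    A hedged containment sketch rather than a root-cause fix: the memory beyond -m 2048 lives in the qemu/kvm process itself (guest RAM plus its threads, virtio buffers and, with cache=writeback on an RBD drive, the RBD client cache), so one option is to cap that process with a v1 memory cgroup. The mount point /sys/fs/cgroup/memory and the 2560 MB ceiling are assumptions; PID 8631 is the kvm process from the ps output above. If the cap is hit, the host reclaims or swaps the guest's pages, which can hurt guest performance.

        mkdir -p /sys/fs/cgroup/memory/kvm-guest1
        echo $((2560 * 1024 * 1024)) > /sys/fs/cgroup/memory/kvm-guest1/memory.limit_in_bytes
        echo 8631 > /sys/fs/cgroup/memory/kvm-guest1/tasks    # move the kvm process into the group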

  • How large should an administration team be? [closed]

    - by Artyom
    I'm trying to find an answer to how many server administrators/technicians are required to run a server farm with 24/7 availability of, let's say, 10, 100 or 1000 Linux servers. Are there any studies on this? Edit: I was not expecting this question to be closed. There are lots of studies about, for example, software development, where from "lines of code" you can approximate the development cost (COCOMO), so I was searching for something similar for administration. Note: I understand 100% that it is not a straightforward or easy-to-answer question, but it is a real question...

  • In BASH, are wildcard expansions guaranteed to be in order?

    - by ArtB
    Is the expansion of a wildcard in BASH guaranteed to be in alphabetical order? I was forced to split a large file into 10 MB pieces so that they can be accepted by my Mercurial repository. So I was thinking I could use:
        split -b 10485760 Big.file BigFilePiece.
    and then, in place of:
        cat BigFile | bigFileProcessor
    I could do:
        cat BigFilePiece.* | bigFileProcessor
    However, I could not find anywhere a guarantee that the expansion of the asterisk (aka wildcard, aka '*') would always be in alphabetical order, so that .aa comes before .ab (as opposed to timestamp ordering or something like that). Also, are there any flaws in my plan? How great is the performance cost of cat-ing the file together?
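
    For what it's worth, a hedged sketch of the plan with one precaution: bash sorts glob matches using the current locale's collation order, so pinning LC_COLLATE to C before the expansion guarantees plain byte order (.aa before .ab, never timestamp order). bigFileProcessor stands for the asker's own command; the extra cost of cat is essentially one more sequential read piped through the kernel.

        export LC_COLLATE=C                     # make glob ordering plain byte order
        split -b 10485760 Big.file BigFilePiece.
        cat BigFilePiece.* | bigFileProcessor   # pieces arrive as .aa, .ab, .ac, ...
        # Cheap sanity check: the pieces should concatenate back to the original
        cat BigFilePiece.* | cmp - Big.file && echo "order and content verified"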

  • How to manage large number of desktop VMs?

    - by symcbean
    I'm looking at the feasibility of providing remote access to multiple virtual machines. The VMs themselves will provide user desktops. To make the best use of the available resources, I'd like the VMs to hibernate when the user disconnects, which implies being able to start them up when a user connects. Ideally each user would 'own' a VM image; if not, I'd require that the session be terminated. Obviously this would require the remote access protocol to be tied into the VM management. Is there anything out there that provides such functionality? (extra credit for open protocols! ;)

  • Linux Log Viewer with Web interface

    - by user180039
    I have been asked at work to find a solution to one of our problems. We have several logs that customers need access to. Because we don't want to give them direct access to the folder/share, we are looking to implement a simple web-based solution that lets customers log in, see a list of the files they have permission to access, and download a file. It would need to support permissions so that User01 can see file01 and file03 while User02 can see file04 and file06; optimally all the files would be under the same folder, so permissions are based on files rather than on folders. Anyone got any ideas? Many thanks.

  • MySQL : table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of my MySQL schema for this application. Before I start, here is an extremely simplified picture of my database: http://i43.tinypic.com/2wp5lxz.png In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:
      - customer: ~5000, shouldn't grow fast
      - data: 5 million per customer, could double or triple for big customers
      - tag: ~1000, fairly fixed size
      - data_tag: easily hundreds of millions per customer; each piece of data can carry many tags
    The harvesting process is permanent, meaning that around every 15 minutes new data comes in and is tagged, which requires very constant index refreshing. A lot of my queries are a SELECT COUNT of DATA between specific DATES, tagged with a specific TAG, on a specific CUSTOMER (very rarely will a query involve several customers). You can imagine that with this volume of data I'm facing a challenge in terms of data organisation and indexing. Again, this is a very minimalistic and simplified version of my structure. My question is, which is better:
      - stick with this model and manage crazy index optimisation (which potentially means billions of rows in the data_tag table), or
      - change the schema and use one data table and one data_tag table per customer (which means having 5000 tables in my database)?
    I'm running all of this on a replicated, dedicated MySQL 5.0 server (quad-core, 8 GB of RAM). I only use InnoDB; I also have another server running Sphinx. Knowing all of this, I can't wait to hear your opinions. Thanks.

  • Extract duplicity difftar files manually

    - by isnogud
    I have a duplicity backup which I am not able to recover with duplicity. Calling duplicity file:///path/to/backups /path/to/dir returns "Local and Remote metadata are synchronized, no sync needed.", but /path/to/dir stays empty. I have decrypted all the backup volumes and I am able to view and extract the files from the various difftar files. My only problem is that some files are partitioned and saved in folders named after the files. Can anyone give me a simple script, or at least a hint, on how to untar these difftar files so that I get the actual files instead of the partitioned ones?
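
    A hedged sketch, assuming the layout duplicity normally uses inside its difftar volumes: whole small files appear under snapshot/<path>, while large files are split into numbered pieces under multivol_snapshot/<path>/1, 2, ... Extracting every decrypted volume into one tree and then concatenating the numbered pieces in order should rebuild the originals. All paths below are placeholders.

        mkdir -p /tmp/restore && cd /tmp/restore
        for v in /path/to/decrypted/*.difftar*; do
            tar xf "$v"                              # merge all volumes into one tree
        done
        # Rebuild each split file from its numbered pieces
        find multivol_snapshot -type f -name 1 | while read -r first; do
            d=$(dirname "$first")                    # directory holding pieces 1, 2, 3, ...
            out="rebuilt/${d#multivol_snapshot/}"
            mkdir -p "$(dirname "$out")"
            ls "$d" | sort -n | while read -r n; do cat "$d/$n"; done > "$out"
        done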

  • Ways to deduplicate files

    - by User1
    I simply want to back up and archive the files on several machines. Unfortunately, they include some large files that are the same file but stored differently on different machines. For instance, there may be a few hundred photos that were copied from one computer to the other as an ad hoc backup. Now that I want to make a common repository of files, I don't want several copies of the same photo. If I copy all of these files to a single directory, is there a tool that can go through and recognize duplicate files and give me a list, or even delete one of the duplicates?
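
    Dedicated tools such as fdupes or rdfind do exactly this; a hedged do-it-yourself sketch without extra packages is to hash everything and list each file whose content hash has already been seen. /path/to/repository is a placeholder, and nothing is deleted until the list has been reviewed.

        cd /path/to/repository
        find . -type f -print0 \
          | xargs -0 md5sum \
          | sort \
          | awk 'seen[$1]++ { sub(/^[^ ]+ +/, ""); print }' > duplicates.txt   # paths of 2nd, 3rd, ... copies
        # After checking duplicates.txt, remove the listed copies:
        # while IFS= read -r f; do rm -v "$f"; done < duplicates.txt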

  • I can't open a Word file because it's too large

    - by Jane
    I was creating a file with MS Word 2007 in which I included a number of images. I didn't compress them as I was putting them into the file. I managed to save the file, but I have not been able to reopen it since, as it says I have exceeded the 32 MB limit. I am working on an old MacBook (OS X 10.4.11). I have tried to open the file in both OpenOffice and LibreOffice, but it just makes those programs crash. Is there any way of reducing the file size without opening the document?
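
    A hedged approach that only applies if the document was saved in the .docx format (which is just a zip archive): unpack it, shrink the embedded images with sips (included with OS X), and repack it. "BigReport.docx" and the 1024 px cap are placeholders.

        mkdir unpacked && cd unpacked
        unzip ../BigReport.docx                 # a .docx is a zip of XML plus media
        for img in word/media/*; do
            sips -Z 1024 "$img"                 # cap the longest side at 1024 px; formats sips can't handle just error and are left alone
        done
        zip -r ../BigReport-small.docx .        # repack; try opening the smaller copy in Word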

  • Streaming a large file

    - by Rich
    Quick question: say I wanted to download a file of considerable size, 10 GB say, and I sent the GET request to a web server to download that file. The question is: if the client stopped reading from the TCP connection, would the entire file still be downloaded, or does it depend on the client sending back an ACK or something to the circuit, asking for the next packet? Hope that makes sense. This question was originally asked in the context of the Tor network, but I just want to know how a standard internet connection would handle this. Thanks

  • Migrate 3 terabytes of files to a new server windows 2003

    - by smackaysmith
    We have a new file server to handle the obscene number of files generated by the company (PDFs, XLSs, DOCs and JPGs). The files being moved to the new server total about 3 TB. The problem is we can't take the company down for days to move the files. The other problem is that the applications creating all these files have to reference previous files, so we can't simply point them at the new server. Also, there isn't an option to have the applications create files on the new server but reference the old server for existing files. The servers are x64 Win2003 R2, and both are on the same subnet. DFS doesn't work. Is there an application that can handle this amount of data, copying the files over, throttling bandwidth, and doing a 'merge'? By merge I mean continually copying over newly created files until the two servers are in sync.

  • Source File not updating Destination Files in Excel

    - by user127105
    I have one source file that holds all my input costs. I then have 30 to 40 destination files (costing sheets) that use links to data in this source file for their various formulae. I was sure when I started this system that any changes I made to the source file, including the insertion of new rows and columns, were picked up automatically by the destination files, so that the formulae always pulled the correct input costs. Now, all of a sudden, if my destination files are closed and I change the structure of the source file by adding rows, the destination files go haywire: they pick up changes to their linked cells, but don't pick up changes to the source sheet that have shifted those cells' relative positions in the sheet. Do I really need to open all 40 destination files at the same time I alter the source file structure? Further info: all the destination files are protected, and I am working in Dropbox.

  • Extracting a .zip file into Program Files (x86)

    - by Evan
    I just got a 64-bit Vista system after being on Windows XP. I'm trying to get all my useful programs up to date, and I've recently had a problem extracting files into the 32-bit program files directory (Program Files (x86)). I'm using 7-Zip to extract the eclipse-SDK-3.5-win32.zip archive into C:\Program Files (x86). Unfortunately, every time I've tried to do this, 7-Zip reports "can not open output file C:\Program Files (x86)\eclipse\...". I've been able to extract it to C:\ and then move it; I'm assuming there's some protection on the Program Files directory that is causing problems. Any suggestions?

  • SVN Checkout error on large repositories

    - by Brian Mitchell
    I wonder if anyone can help me. We recently migrated our Subversion repository from a VisualSVN Server on Windows to a Subversion server on CentOS. The migration was successful; however, we are getting the following error message during checkouts:

        svn: REPORT of '/svn/MangoRepository/!svn/vcc/default': Could not read chunk size: connection was closed by server (http://servername)

    The workaround is simply to run an update on the working copy, and it will continue where it left off. I'm just wondering if anyone has a permanent fix for this, as it can be quite frustrating to repeat myself to 60-70 developers.
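
    The permanent fix most likely lives on the server side (connection and timeout settings in Apache, or in any proxy sitting in front of mod_dav_svn, are the usual suspects), but as a hedged client-side stopgap the "just run update again" step can be automated until the working copy completes. The URL is taken from the error message; the working-copy path is a placeholder.

        svn checkout http://servername/svn/MangoRepository wc || true   # may die partway on a big repo
        until svn update wc; do
            svn cleanup wc          # clear locks left by the dropped connection
            sleep 5                 # brief pause before retrying
        done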

  • Copy any file with a specific file extension in subfolders into a folder

    - by Onyxius
    I found a script on here that uses 7-Zip to extract all the archives in all the sub-folders of a specific folder and put the contents of each in its own folder (see below). What I need is to add to it, or maybe use another script if I have to, so I can specify where I want those files to go instead of putting them in their own folder within the source folder. I don't know how to do this and hope someone can help. Thanks for the help.

        @echo on
        FOR /D /r %%F in ("*") DO (
            pushd %CD%
            cd %%F
            FOR %%X in (*.rar *.zip *.tar) DO (
                "C:\Program Files\7-zip\7z.exe" x -o"%%~nX" "%%X"
            )
            popd
        )

  • NOTEPAD++ Need macro or typeitin for automation of large lists

    - by user2526699
    I'm sure there is a way to do this, but I cannot seem to figure it out. I will try my best to explain it. I have a list with 20,000 lines in Notepad++ and two tabs open. The right-hand tab, 'new 6', is the main list. The left-hand tab, 'new 7', holds what needs to be added to the beginning of each line in the right-hand tab. Here is an image of my Notepad++ to give you a better understanding. I need to be able to do the following in an automated way, as I have over 20,000 lines to process:
      1. copy line 1 of tab 'new 7'
      2. switch to tab 'new 6'
      3. paste the clipboard (line 1 of tab 'new 7') at the beginning of line 1 of tab 'new 6'
      4. switch back to tab 'new 7'
      5. copy line 2 of tab 'new 7'
      6. switch to tab 'new 6'
      7. paste the clipboard (line 2 of tab 'new 7') at the beginning of line 2 of tab 'new 6'
    ...and so on. I have both PasteItIn and TypeItIn downloaded, but if I need some other program/app, or if it's built into Notepad++, that would be great. I need this done by the program itself, or for me to only have to press one button for each of these steps.
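
    A hedged shortcut, assuming a Unix-style shell is available (for example via Cygwin or Git Bash on Windows): after saving the two tabs to files, paste joins them line by line in one pass, so none of the 20,000 copy/paste cycles are needed. File names are placeholders.

        # Prepend each line of new7.txt to the matching line of new6.txt, with no separator
        paste -d '\0' new7.txt new6.txt > merged.txt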

  • Large recovery partitions

    - by Unsigned
    Is there any good reason why factory restore partitions are generally much larger than they need to be? Examples I have found in my own experience:
      - Dell XPS laptop: partition 13.67 GB, used 6.68 GB
      - Dell Inspiron laptop: partition 14.7 GB, used 7.2 GB
      - Toshiba laptop: partition 15.3 GB, used 9 GB
    In all cases, shrinking the partition to only slightly more than the used space had no ill effects on future factory restorations. Why the exorbitant amount of extra space, given that none of the three computers ever writes any data to the recovery partition? Is there a good reason I'm overlooking?

  • Loading Obj Files in Soya3d engine

    - by John Riselvato
    I recently found Soya3D, and from what I have seen in the tutorials I will be able to make exactly what I want with my Python skills. I have built this map generator; the only issue is that I cannot work out from any of the documentation how to load .obj files. At first I figured I had to convert them to a .data file, but I don't understand how to do that. I just want to load a simple model of a house. I tried using soya_editor, but I cannot figure out how to do anything with it. Here is my script so far:

        import sys, os, os.path, soya, soya.sdlconst

        width, height = 760, 375
        soya.init("Generator 0.1", width, height)
        soya.path.append(os.path.join(os.path.dirname(sys.argv[0]), "data"))
        scene = soya.World()
        model = soya.Model.get("house")
        light = soya.Light(scene)
        light.set_xyz(0.5, 0.0, 2.0)
        camera = soya.Camera(scene)
        camera.z = 2.0
        soya.set_root_widget(camera)
        soya.MainLoop(scene).main_loop()

    "house" is in .obj form in the folder data/models. The error I get is:

        Traceback (most recent call last):
          File "introduction.py", line 7, in <module>
            model = soya.Model.get("house")
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 259, in get
            return klass._alls.get(filename) or klass._alls.setdefault(filename, klass.load(filename))
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 268, in load
            dirname = klass._get_directory_for_loading_and_check_export(filename)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 194, in _get_directory_for_loading_and_check_export
            dirname = klass._get_directory_for_loading(filename, ext)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 171, in _get_directory_for_loading
            raise ValueError("Cannot find a %s named %s!" % (klass, filename))
        ValueError: Cannot find a <class 'soya.Model'> named house!
        * Soya3D * Quit...

    So I am figuring that because I don't understand how to turn my files into .data files, I will need to learn that. So my question is, how do I use my own models?
