Search Results

Search found 10748 results on 430 pages for 'disk encryption'.

Page 377/430 | < Previous Page | 373 374 375 376 377 378 379 380 381 382 383 384  | Next Page >

  • VMWare Workstation Linux Host performance tuning

    - by Hoghweed
    I need to improve my Linux-hosted VMware Workstation setup for running multiple virtual machines at the same time. I feel rather silly: I lost a great blog post link that I found last month (and I'm not able to find it again), so I'm asking here in case anyone can help. This is my host (laptop): 16GB DDR3 RAM, hybrid 750GB 7200rpm HDD (8GB SSD cache), Mint 15 x64, kernel 3.9.7, swappiness set to 10. Those are the important details about the host. My need is the ability to run 2 or 3 VMs at the same time. The performance bottleneck is the disk. Following that blog post I lost, I previously set up /tmp to be mounted as a memory partition, and in my previous installation that worked well; now I'm not able to find a good way to tweak things. I think with 16GB of RAM there should be no problem running multiple VMs, but when they start to swap or use /tmp, things go bad (the guest cursor jumping after a freeze, guest freezes and so on). Can anyone help me find a good set of host tweaks and configuration to get better performance? Thanks in advance
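
    For illustration, a minimal sketch of the tmpfs approach mentioned above (mounting /tmp in RAM); the 4G size cap is an assumption, not a tuned value:

        # /etc/fstab entry: keep /tmp in RAM, capped at 4 GiB of the 16 GiB available
        tmpfs   /tmp   tmpfs   defaults,noatime,size=4G   0 0

        # apply immediately without rebooting (existing files under /tmp are hidden, not lost)
        sudo mount -t tmpfs -o defaults,noatime,size=4G tmpfs /tmp

        # keep swappiness low across reboots (the host already uses 10)
        echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf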

    Read the article

  • Will these instructions work when turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling. I have found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html): # Create ext4 fs on /dev/sda10 disk mkfs.ext4 /dev/sda10 # Enable writeback mode. This mode will typically provide the best ext4 performance. tune2fs -o journal_data_writeback /dev/sda10 # Delete has_journal option tune2fs -O ^has_journal /dev/sda10 # Required fsck e2fsck -f /dev/sda10 # Check fs options dumpe2fs /dev/sda10 |more For more performance add fstab options: data=writeback,noatime,nodiratime e.g.: /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0 I will use them on my boot partition. Are there any particularly bad parts here, or are there any missing steps? Will my boot partition be fit for being on an SSD after this? Or should I consider switching to ext2, or even reinstalling it all and choosing ext2 at partitioning time (I'd rather not, though, since I've configured quite a lot of stuff already)?

    Read the article

  • 13" MacBook Pro with Win 7 and External VGA gets 640x480

    - by Jim McKeeth
    I have a brand new 13" MacBook Pro - 2.26 GHz and the NVIDIA 9400M Video card. I installed Windows 7 (final) in boot camp and booted up to Windows 7. Installed all the drivers from the Apple disk and it was working great. Then I attached the external VGA adapter (from apple) to connect to a projector and it dropped down at 640x480 resolution. No matter what I did it wouldn't let me change to a higher resolution if the external VGA was connected. Once it disconnects then it goes back to the normal resolution. If I am booted into Snow Leopard it works fine. I tried updating the NVIDIA drivers and it behaved exactly the same. Ultimately I want to get 1024x768 or better resolution when connected to an external display. If it isn't fixable then I am curious if anyone else has seen this, if it is a known issue, and who to contact for support (Apple, Microsoft or NVIDIA?) Update: Just attaching the Mini-DVI to VGA adapter kicks it into 640x480, no projector is required. I tried forcing the display driver from Generic PnP Monitor to one that supported 1024x768 and that didn't work either.

    Read the article

  • Make isolinux 4.0.3 chainload itself

    - by chainloader
    I have a bootable ISO which boots into ISOLINUX 4.0.3 and I want to make it chainload itself (my actual goal is to chainload isolinux.bin v4.0.1-debian, which should start up the Ubuntu 10.10 Live CD, but for now I just want to make it chainload itself). I can't get ISOLINUX to chainload any isolinux.bin, no matter what version. It either freezes or shows a "checksum error" message. I'm using VMware to test the ISO. Things I have tried: .com32 /boot/isolinux/chain.c32 /boot/isolinux/isolinux-debug.bin (chainload self) this shows Loading the boot file... Booting... ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al isolinux: Starting up, DL = 9F isolinux: Loaded spec packet OK, drive = 9F isolinux: Main image LBA = 53F00100 ...and the machine freezes. Then I've tried this (chainload GRUB4DOS 0.4.5b) chainloader /boot/isolinux/isolinux-debug.bin Result: Error 13: Invalid or unsupported executable format Next try: (chainload GRUB4DOS 0.4.5b) chainloader --force /boot/isolinux/isolinux-debug.bin boot Result: ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al isolinux: Starting up, DL = 9F isolinux: Loaded spec packet OK, drive = 9F isolinux: No boot info table, assuming single session disk... isolinux: Spec packet missing LBA information, trying to wing it... isolinux: Main image LBA = 00000686 isolinux: Image checksum error, sorry... Boot failed: press a key to retry... I have tried other things, but all of them failed miserably. Any suggestions?

    Read the article

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file: # Andy's super awesome VM kickstart file install url --url=http://mirrors.kernel.org/centos/6/os/x86_64 lang en_US.UTF-8 keyboard us text %include /tmp/network.ks rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp. firewall --service=ssh authconfig --enableshadow --passalgo=sha512 selinux --disabled timezone America/Los_Angeles bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet" # The following is the partition information you requested # Note that any partitions you deleted are not expressed # here so unless you clear all partitions first, this is # not guaranteed to work zerombr clearpart --all --drives=vda --initlabel part /boot --fstype=ext4 --size=500 part pv.253002 --grow --size=1 volgroup vg1 --pesize=4096 pv.253002 logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200 logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032 repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100 repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64 repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64 repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api %packages @core @server-policy puppet facter %end %pre --erroronfail #!/bin/bash for x in `cat /proc/cmdline`; do case $x in SERVERNAME*) eval $x; echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks ;; esac; done %end %post puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi %end reboot My kernel arguments are in the following virt-install command that I use to start the install: virt-install -n zabbix -r 2048 --vcpus=2 -l http://mirrors.kernel.org/centos/6/os/x86_64 --disk /dev/vg_inf1/zabbix --network bridge=br85 --initrd-inject=/home/ashinn/vm_kickstart --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" --autostart During the install, I can pull up a console on the second terminal and verify the contents of /tmp/network.ks are: network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?
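
    For what it's worth, a hedged sketch of one thing to try: hand anaconda its network settings on the kernel command line so the loader never has to ask. The ksdevice=eth0 and ip=dhcp arguments are the RHEL6-era loader options and are an assumption here, not a confirmed fix:

        # hypothetical variation of the virt-install call above
        virt-install -n zabbix -r 2048 --vcpus=2 \
          -l http://mirrors.kernel.org/centos/6/os/x86_64 \
          --disk /dev/vg_inf1/zabbix --network bridge=br85 \
          --initrd-inject=/home/ashinn/vm_kickstart \
          --extra-args "ks=file:/vm_kickstart ksdevice=eth0 ip=dhcp SERVERNAME=zabbix" \
          --autostart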

    Read the article

  • Windows Server 2008 backup VHDs - is it possible to mount/open them in Windows 7?

    - by Simon
    Hi All, Is it possible to mount the VHD files created by the Windows Server 2008 backup utility onto a Windows 7 (release) client? Following an array failure I was very worried that there was a problem with both the backup sets on different USB drives, as attaching the VHD to a Win 7 box did not show the expected structure (instead they behaved like unformatted disk space). Subsequently, I've attached the backup drive to a 2008r2 machine that I'd intended to be the replacement, and the backup set can be browsed without issue (seemingly). When the new disks arrive I'll go through the recovery process and see where we are, but it looks promising so far. Is it simply the case that you can't take server-created VHDs and mount them on desktop machines? (Rather than hyperventilating at the thought of years of lost photos and email, I'm now just mildly curious.) Edit: One thing that has confused things is that the backup utility on Win7 is more restrictive about restoring from external devices than the equivalent on 2008r2. With r2, I can restore files 'from another server' and browse to external storage. Win7 only allows the backup to be located on a network share. Once my box of new disks arrives and I've got something to restore onto, I'll move the smaller of the backup VHDs onto network storage reachable by Win7 and see if the VHD is readable. I haven't read up on the VHD process used by the backup app - I'm assuming it's a base VHD and differencing files used for incremental backups, and that the restore app understands this. Finally: In retrospect the question should have been, 'can I restore a 2008r2 backup set via a Win 7 client?' Thanks

    Read the article

  • IIS permission configuration issue

    - by Dan
    Sorry, the title of this question is a little vague, but I don't really have any idea where the issue lies - I'm seeking some clarification of the server error logs. Basically, I had a dedicated server running Windows 2003 and Plesk (v8 I think). Last week the server hardware failed and the entire thing had to be rebuilt from scratch. New hardware was put in, a new operating system (Win2008), a new Plesk installation (v9.5), new software (MSSQL etc), then all data was ported over manually from the old C and D drives to restore all 30 client sites. It was hell! All has been okay for a couple of days now, but about an hour ago - pop! Suddenly all sites went down, giving a 500 error. Restarting all services eventually brought everything back online, but I'm now living in total fear. It can - and probably will - happen again. The guys on support gave me the following errors from the server log: The Template Persistent Cache initialization failed for Application Pool 'ASP.NET v4.0 Classic' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.. The worker process for application pool 'domain1.com(domain)(2.0)(pool)' encountered an error 'Cannot read configuration file ' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\domain1.com(domain)(2.0)(pool).config', line number '0'. The data field contains the error code. The worker process for application pool 'PleskControlPanel' encountered an error 'Cannot read configuration file ' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\PleskControlPanel.config', line number '0'. The data field contains the error code. The support guys are so vague about this and it scares me horribly. Can anyone positively identify the cause of this error which led to all client websites going offline? What can be done to prevent it from happening again? Any pointers would be very much appreciated! Thanks folks...

    Read the article

  • CPU-adaptive compression

    - by liori
    Hello, Let's assume I need to send some data from one computer to another, over a pretty fast network... for example a standard 100Mbit connection (~10MB/s). My disk drives are standard HDDs, so their speed is somewhere between 30MB/s and 100MB/s. So I guess that compressing the data on the fly could help. But... I don't want to be limited by CPU. If I choose an algorithm that is intensive on the CPU, the transfer will actually go slower than without compression. This is difficult with compressors like GZIP and BZIP2 because you usually set the compression strength once for the whole transfer, and my data streams are sometimes easy, sometimes hard to compress - this makes the process suboptimal because sometimes I do not use the full CPU, and sometimes the bandwidth is underutilized. Is there a compression program that would adapt to the current CPU/bandwidth and hit the sweet spot so that the transfer will be optimal? Ideally for Linux, but I am still curious about all solutions. I'd love to see something compatible with GZIP/BZIP2 decompressors, but this is not necessary. So I'd like to optimize total transfer time, not simply the amount of bytes to send. Also, I don't need real-time decompression... real-time compression is enough. The destination host can process the data later in its spare time. I know this doesn't change much (compression is usually much more CPU-intensive than decompression), but if there's a solution that could use this fact, all the better. Each time I am transferring different data, and I really want to make these one-time transfers as quick as possible. So I won't benefit from getting multiple transfers faster due to stronger compression. Thanks,
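
    For reference, a minimal sketch of the non-adaptive baseline described above (one compression level chosen up front, gzip-compatible); hostnames and paths are placeholders:

        # compress on the fly at a fast, CPU-light level while streaming over the network
        tar -cf - /data/to/send | gzip -1 | ssh user@destination 'cat > /tank/received.tar.gz'

        # time the same transfer at a stronger level to see whether the CPU or the
        # 100 Mbit link becomes the bottleneck
        time sh -c 'tar -cf - /data/to/send | gzip -6 | ssh user@destination "cat > /tank/received.tar.gz"'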

    Read the article

  • During Vista Repair - No operating system is listed.

    - by Jack Marchetti
    After a Windows update, my brother's Gateway computer loads to the "Step 3 of 3: 0%" screen and reboots. Safe Mode does not work. I placed a Vista DVD in the drive and rebooted. (Note, this is my Vista DVD, not the Recovery/System disc that would come with a computer. Gateway does not give you CDs anymore. I believe they store recovery on a partition, but that partition has been wiped out.) I chose "Repair Your Computer". I get a dialog box, but no operating system is listed. I'm then prompted to "Load Drivers". What drivers am I supposed to be loading here and where from? I placed a CD in the drive to "load drivers" but I don't see my DVD drive listed. All I saw was X:/Sources along with several removable media slots that were empty. On another screen I tried Startup Repair, which didn't do anything. I attempted to use System Restore - but it doesn't detect the hard drive. I'm guessing that I'm missing some sort of SATA driver and that is why the hard disk is not being found. Any ideas on this?

    Read the article

  • How to calculate RAM value as performance per dollar spent

    - by Stucko
    Hi, I'm trying to make decisions on buying a new PC. I have most specifications (processor/graphics card/hard disk) pinned down except for RAM. I am wondering what the best RAM configuration is for the amount of money I'm spending. As the question of best is subjective, I'd like to know how I would calculate the value of the RAM sticks sold. 1. (sample) The value of the amount of memory: 1) CORSAIR PC1333 D3 2GB = costs $80 2) CORSAIR PC1333 D3 4GB = costs $190 Would it be better to buy 2 of item 1) instead of 1 of item 2)? Although I would normally choose 1 of item 2), as the difference is only (190-(80*2)) = $30 and I would save 1 DIMM slot, what I need is the value per amount: 1) 80 / 2 = $40 per 1GB 2) 190 / 4 = $47.5 per 1GB 2. The value of frequency: 1) CORSAIR PC1333 4GB = costs 190 2) CORSAIR PC1600C7 4GB = costs 325 I'm not even sure of the denominator... $ per 1GHz of speed? 3. The value of latency: 1) CORSAIR CMP1600C8 8-8-8-24 2GBx3 (triple channel) = costs 589 2) CORSAIR CMP1600C7D 7-7-7-20 2GBx3 (triple channel) = costs 880 I'm not even sure of the denominator here either. Just for your information, I'd like to get the best out of the money I'm going to spend to put on a 6-DIMM-slot Core i7 motherboard.
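
    To make the arithmetic in point 1 explicit, a tiny sketch of the per-GB calculation (prices and sizes taken from the question):

        awk 'BEGIN {
            printf "2GB @ $80  -> $%.2f per GB\n", 80/2    # $40.00 per GB
            printf "4GB @ $190 -> $%.2f per GB\n", 190/4   # $47.50 per GB
        }'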

    Read the article

  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method/formula, etc. that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the split and the number of nested folders? Please note that although similar, it isn't quite the same as Storing a million images in the filesystem. I'm looking for a way to help make the theories outlined there more generic. Assumptions: I have 'some' initial number of files. This number would be arbitrary but large. Say 500k to 10m+. I have considered the underlying physical hardware disk IO requirements that would be necessary to support such an endeavor. Put another way: as time progresses this store will grow. I want to have the best balance of current performance and headroom as my needs increase. Say I double or triple my storage. I need to be able to address both current needs and projected future growth. I need to both plan ahead and not sacrifice too much current performance. What I've come up with: I'm already thinking about using a hash split every so many characters to split things out across multiple directories and keeping the trees even, very similar to what is outlined in the comments of the question above (see the sketch below). It also avoids duplicate files, which would be critical over time. I'm sure the initial folder structure would be different based on what I've outlined, and depending on the initial scale. As far as I can figure, there isn't a one-size-fits-all solution here. It would be horrendously time-intensive to work something out experimentally.
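
    As a generic illustration of the hash-split idea (shown in shell here; the store path and the two-character split depth are assumptions to be sized against the projected file count):

        # derive a two-level directory from the first characters of a hash of the
        # file name, giving 256 * 256 evenly filled buckets
        f="some-image-file.jpg"
        h=$(printf '%s' "$f" | md5sum | cut -c1-4)
        dir="/store/${h:0:2}/${h:2:2}"
        mkdir -p "$dir" && cp "$f" "$dir/"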

    Read the article

  • SQL 2008 Backups to UNC Share Failing 0xC002F210

    - by Matty Brown
    This problem is driving me NUTS!! We take backups of all of our production databases to a network share, which are then backed up to tape nightly. 8pm Mon-Fri - full backup, followed by a log backup; 7am-7pm Mon-Fri, at half-hour intervals - log backups. Our backups have been working in this manner since we migrated from SQL Server Standard 2000 to 2008, 3 years ago. Recently, the first log backup on Mondays has been failing. Not every time, but almost every time! The rest of the week, we've had no problems. I guess the issue may have something to do with the size of the log backup that's attempted after a weekend of no backups. Now onto the issue I need a fix for... All this week, every full backup on our two biggest databases has failed (both backups < 1GB compressed). There's plenty of disk space on the source and destination servers. I'm guessing the issue is to do with the amount of time it takes to complete the backups of these databases, and/or the size of the backup files required to complete them. Changing the backup destination to local storage works fine (and very, very fast in comparison). From the Job History, I can find a few hints as to what the problem could be... Code: 0xC002F210 (always this code, but a mix of the following descriptions...) "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'SetEndOfFile' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally." "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'FlushFileBuffers' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally." Please help save my hair and sanity!!

    Read the article

  • Why can't I boot in to Windows Recovery Environment to fix my HDD or salvage my data?

    - by Kevin
    I've been trying to get in to WindowsRE to salvage the files on my Sony Vaio laptop after it failed to load Vista (it finally, consistently displays "Error loading operating system" after months of such intermittent failures, usually rectified via restarts or by utilizing Startup Repair or CHKDSK from WindowsRE). The problem is, after successfully accessing it once after this failure (and many times before over the course of the laptop's life), I can no longer get it to load. During the last successful access (right after the failure), I ran Startup Repair, which itself failed and notified me that the boot sector was corrupt. I attempted to head in to Sony's proprietary recovery tools menu, which is accessible from WindowsRE when it is loaded from the recovery partition or recovery disk, however it hung. I have since been unable to access the recovery environment after restarting, using any of these methods: Access via the recovery partition (pressing F10 on boot) Access via recovery DVD (created using the same computer when it was healthy) Access via a Windows Vista installation DVD All three methods produce the same results: The computer acknowledges the boot attempt The computer successfully gets past the "Windows is loading files" screen The computer successfully gets past the Windows loading screen The computer then stalls at a black screen, while showing HDD activity (via the indicator light). After a few minutes, the HDD activity ceases, and after a few more minutes, the oversized cursor that is utilized in WindowsRE appears on the black screen. The actual recovery environment, however, never appears, even after leaving the computer in such a state overnight. What is frustrating is that other bootable utilities, such as SeaTools for DOS and MemTest, boot up and run fine. While running perfectly normally itself, MemTest was able to produce a plethora of errors in my RAM. I'm inclined to believe the RAM's faultiness may be causing the WindowsRE boot to fail. Would this be a valid assumption? If I'm not mistaken, booting from external media utilizes the RAM, so such a reason is plausible, assuming my knowledge of bootloading is correct. Other than that, I can't figure out any reason why all the bootable utilities except WindowsRE run fine. Does anyone know what the problem is, or could be? Any solutions?

    Read the article

  • apt-get: Size mismatch

    - by Cédric Girard
    I created a private deb repository to distribute a piece of software and its updates to 600 Ubuntu netbooks. Each time the network is connected, my script tries to do an apt-get update. But sometimes (quite often in fact), I get this: Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb Size mismatch The server is Apache 2.2, HTTPS only. There are no errors in its logs. Here is the script: apt-get update apt-get dist-upgrade --force-yes --yes Here is the complete output of apt-get Ign https://myserver maverick Release.gpg Ign https://myserver/ubuntu/ maverick/main Translation-en Ign https://myserver maverick Release Ign https://myserver maverick/main i386 Packages/DiffIndex Ign https://myserver maverick/main i386 Packages Ign https://myserver maverick/main i386 Packages Hit https://myserver maverick/main i386 Packages Reading package lists... Reading package lists... Building dependency tree... Reading state information... The following packages will be upgraded: majdb utilitaires voosicomat 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 6207kB/6273kB of archives. After this operation, 0B of additional disk space will be used. WARNING: The following packages cannot be authenticated! utilitaires voosicomat majdb Get:1 https://myserver/ubuntu/ maverick/main voosicomat all 2.0.1 [4755kB] Get:2 https://myserver/ubuntu/ maverick/main majdb all 1.0.17 [1452kB] Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb Size mismatch Fetched 7091kB in 21s (324kB/s) E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? Regards Cédric
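
    Not a fix for the repository itself, but a hedged first step to rule out a stale partial download on the netbook side (the package name is taken from the output above):

        # throw away cached and partially fetched archives, refresh the index,
        # then compare the size apt expects with what the server actually serves
        apt-get clean
        rm -f /var/cache/apt/archives/partial/*
        apt-get update
        apt-cache show voosicomat | grep -i '^Size'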

    Read the article

  • Malware Defense Shows Up in PlayOn Settings/Logs Although System Has Been Thoroughly Cleaned

    - by nicorellius
    I was hit really hard by some nasty malware: Malware Defense. I was doing something I should not have been doing when I got it (surfing Pirate Bay for TV shows). It locked up my system and I had to reboot in safe mode. I was able to shut down the process and remove it using a malware killer tool. After my machine was cleaned up a bit, I then installed Clamwin, Malwarebytes, and another AV tool. I cleaned the heck out of my system. Simultaneously, while this was going on, I was having trouble with my media server, PlayOn. This tool is great, but has some bugs. One in particular is that it will not function well with AV software running. I found a way to allow the new AV software to run while using PlayOn, but it still says I have Malware Defense on. Firstly, Malware Defense is long gone. I cleaned all remnants from my registry and scoured my system with the above tools multiple times. PlayOn is getting some information that I have this crap installed on my system, but it isn't there. The system runs OK, but not optimally. I have a feeling it is causing my streaming to be interrupted sometimes. How is it that I can't find Malware Defense on my system even when I try, yet somehow PlayOn is picking up a fingerprint of it somewhere? I have gone back and forth with MediaMall to no avail. I kind of just gave up, because the streaming works OK. BTW, I also uninstalled/reinstalled PlayOn several times, reverted back to previous versions, etc. The only thing I haven't done is reformat my disk and reinstall Windows. I really don't want to do this if there is another way to remove this little fingerprint. Any ideas?

    Read the article

  • Broken filesystem on Windows XP / 7 virtual machine

    - by Pekka
    I created a virtual machine with Windows XP as the guest system in Microsoft's Virtual PC, which ships along with Windows 7. I then installed VirtualBox and began running the MS machine in it. It worked fine. Then, I accidentally started the machine in Microsoft's Virtual PC again. The screen stayed blank, so after a while, realizing my mistake, I closed the machine. Since then, the VM won't start any more, claiming massive file system problems. Starting Windows in normal mode results in a SOMETHING_FILESYSTEM blue screen; I can start in protected mode and run a checkdisk. That will fix something on every run, but every time I restart, it will start again. I tried re-booting the VM with the Windows CD and doing a repair install. I didn't watch whether that worked out, but I'm caught in the reset / check disk / reset cycle again. Is there anything VM-specific that can still be done? On a physical machine, I would say reformat. Is there any way to get hold of the data on the virtual machine through either Virtual PC or VirtualBox? It was an experimental machine, but I had started entering some data on it that would be nice to recover.
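
    One hedged way to get at the data through VirtualBox: attach the broken machine's virtual disk as a second drive on a healthy guest and copy the files off. The VM name "Rescue", the controller name "SATA" and the disk path are assumptions about your setup:

        VBoxManage storageattach "Rescue" --storagectl "SATA" \
            --port 1 --device 0 --type hdd --medium "/path/to/WindowsXP.vhd"
        VBoxManage startvm "Rescue"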

    Read the article

  • External USB HD with -optional- mains?

    - by Stephen
    Hi, I'm Christmas-present-buying, and I'd appreciate recommendations for a USB HD with an optional mains power input. I've hunted, but can't find all the information I want (partially due to sketchy product specifications). Background: This is for a digital TV which I do not own, so I'd like to get it right first time. The TV has a USB port to allow recording straight to disk, but the manuals don't say how much power can be drawn through the USB port. The manual's instructions state, possibly generically, to plug the drive in before connecting it to the TV. Ideally I'd like a small (2.5"?) drive which can draw power over USB, with a mains power input in case it turns out the USB port on the TV doesn't offer enough juice. The ideal is to use one cable, two max. A powered USB hub would introduce too much clutter. I've spotted that the LaCie Petit drives have what appears to be an additional power input, but I'm not even sure from the specs what that is. And the device doesn't ship with a mains adapter. Suggestions?

    Read the article

  • Best way to attach 96 TB to a workstation

    - by user994179
    I'm running a workstation with dual Xeon 5690s (12 physical/24 logical cores), 192 GB of RAM (i.e., maxed out), Windows 7 64-bit, 5 slots for adapter cards, and 1 TB of internal storage, with 5 more internal bays available. I have an app that creates data files totaling about 88 TB. These are written once every 14 months, and the rest of the time the app only needs to read them; and 95% of the reads are sequential reads of huge chunks of data. I have some control over how big the individual files are, but ideally they would be between 5 and 8 TB. The app will be reading from only one drive at a time, and the nature of the data is such that if (when) a drive dies I can restore the data to a new disk from tape. While it would be nice to be able to use the fastest drives/controllers available, at this point size matters more than speed. After doing lots of reading, I am leaning toward buying a bunch of cheap 2 TB drives and putting them into a bunch of cheap enclosures. All this stuff is going into my home office, so I need to avoid the raised-floor/refrigerated approach. My questions: Is the cheap drive/enclosure solution the best one for this situation? Given the nature of the app and the way the data is used, does RAID make sense? If so, which one? For huge sequential reads, would USB 3.0 and eSATA be a wash performance-wise? For each slot available on the workstation, can I hook up an enclosure that can hold multiple drives? Or is it one controller per drive? If I can have multiple drives on one controller, am I essentially splitting the bandwidth (throughput)? For example, if I have a 12-bay enclosure, is the throughput of the controller reduced by a factor of 12? Are there any Windows 7 volume/drive/capacity limits I should be aware of? Thanks

    Read the article

  • AVCHD MTS h264 1080p file with choppy playback in Linux

    - by marc
    When I try to play video files from my camera: Seems stream 0 codec frame rate differs from container frame rate: 50.00 (50/1) -> 50.00 (50/1) Input #0, mpegts, from '00027.MTS': Duration: 00:00:38.88, start: 2.884289, bitrate: 16945 kb/s Program 1 Stream #0.0[0x1011]: Video: h264 (High), yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 50 tbc Stream #0.1[0x1100]: Audio: ac3, 48000 Hz, stereo, s16, 256 kb/s … on my Linux computer (Ubuntu 12.04), I get choppy playback. It's completely unusable... I tried: Totem VLC mplayer The result is always the same issue. I sent the same video file to a friend who has Ubuntu 10.04 to test, and he also has the same issue. He has Windows 7, and confirms that on Windows the video plays well. I have an Intel® Core™2 CPU 6300 @ 1.86GHz × 2 with a GeForce 9600 GT, with the closed NVIDIA drivers. This is not an issue of big files playing slowly from an HDD. I have an SSD drive! I spent the last days and nights trying hundreds of commands for ffmpeg, handbrake, mencoder... None of them let me create a file with enough quality. I downloaded a few movies from YouTube in 1080p, and playback worked well without any big pixels or choppiness. I would like the highest possible quality; I will put these files onto a Blu-ray disc, so I don't need to compress them to a smaller size. I just want smooth playback on my Linux box. On Windows, the same file works well.
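
    A hedged sketch of one thing to try: let the GeForce 9600 GT decode the H.264 stream through VDPAU instead of the CPU (option names as in MPlayer builds of that era; the file name is taken from the output above):

        mplayer -vo vdpau -vc ffh264vdpau 00027.MTS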

    Read the article

  • Windows XP blue screen dumping physical memory

    - by dotnet-practitioner
    I get the following blue screen after running my laptop for an hour... A problem has been detected and Windows has been shut down to prevent damage to your computer. If this is the first time you've seen this stop error screen, restart your computer. If this screen appears again, follow these steps: Check to be sure you have adequate disk space. If a driver is identified in the stop message, disable the driver or check with the manufacturer for driver updates. Try changing video adapters. Check with your hardware vendor for any BIOS updates. Disable BIOS memory options such as caching or shadowing. If you need to use safe mode to remove or disable components, restart your computer, press F8 to select advanced startup options, and then select safe mode. Technical Information: * STOP 0x0000008E (0xc0000005, 0x805B03F5, 0xF703DC7C, 0x00000000) Beginning dump of physical memory Physical memory dump complete. Contact your system administrator or technical support group for further assistance. So... if this is faulty memory... where could I buy RAM for the following laptop: TOSHIBA SATELLITE A45-S250? My local Fry's store does not carry memory for this laptop.

    Read the article

  • FreeBSD ZFS RAID-Z2 performance issues

    - by Axel Gneiting
    I'm trying to build my own network attached storage based on FreeBSD+ZFS+standard components, but there are strange performance issues. The hardware specs are: AMD Athlon II X2 240e processor ASUS M4A78LT-M LE mainboard 2GiB Kingston ECC DDR3 (two sticks) Intel Pro/1000 CT PCIe network adapter 5x Western Digital Caviar Green 1.5TB I created a RAID-Z2 zpool from all disks. I installed FreeBSD 8.1 on that zpool following the tutorial. The SATA controllers are running in AHCI mode. Output of zpool status: pool: zroot state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM zroot ONLINE 0 0 0 raidz2 ONLINE 0 0 0 gptid/7ef815fc-eab6-11df-8ea4-001b2163266d ONLINE 0 0 0 gptid/80344432-eab6-11df-8ea4-001b2163266d ONLINE 0 0 0 gptid/81741ad9-eab6-11df-8ea4-001b2163266d ONLINE 0 0 0 gptid/824af5cb-eab6-11df-8ea4-001b2163266d ONLINE 0 0 0 gptid/82f98a65-eab6-11df-8ea4-001b2163266d ONLINE 0 0 0 The problem is that write performance on the pool is very, very bad (<10 MB/s) and every application that is accessing the disk is unresponsive every few seconds when writing. It seems like writing is fine until the ZFS ARC cache is full, and then ZFS stalls the entire system's I/O until it has finished writing that data. Also I'm getting kmem_malloc "kmem_map too small" kernel panics. I've already tried to put vm.kmem_size="1500M" vm.kmem_size_max="1500M" into /boot/loader.conf, but it doesn't help. Does anyone know what's going on here? Do I really not have enough memory for ZFS to handle this RAID-Z2?
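
    A hedged sketch of capping the ARC so it cannot starve the rest of the 2 GiB system; the values are guesses for this machine, not tested recommendations:

        # append ARC tunables to /boot/loader.conf, then reboot
        printf '%s\n' 'vfs.zfs.arc_max="512M"' 'vfs.zfs.prefetch_disable="1"' >> /boot/loader.conf

        # after the reboot, watch the pool while writing to see whether the stalls go away
        zpool iostat zroot 5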

    Read the article

  • NetInstall working on some systems, not working on others

    - by cduruk
    Hi, I'm having an issue where my NetInstall setup works on some computers and fails on others, and I am not able to diagnose the issue. I created an image of a Mac Mini and then created a NetRestore image using the System Image Utility found on Snow Leopard Server. NetBoot and NFS both seem to be working fine on the server, which is an Xserve. Then I select the NetInstall image from Startup Disk on a machine. On some of the machines, the process works as expected. On some of them, I see the globe icon blink a few times and then the system boots to the regular hard drive. I have captured the tracedump and the system.log logs from the server in both the case where NetInstall seems to work and the case where it fails. Here is the link that has all the logs: http://gist.github.com/232232 The gist of the failure seems to be the lack of a BSDP DISCOVER in the failing case, but I'm not able to identify why exactly that is happening. I'd really appreciate any help on this issue.

    Read the article

  • Collect temperature and fan speed with munin from Windows 7 PC?

    - by mfn
    Hi, I'm quite fond of Munin and use it at home as well to monitor my PCs. What was super-duper easy under Linux is pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I'm using lm-sensors and the plugin for Munin was basically there. I already access some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are simple, as is the information exposed via SNMP, but they do their job. But when it comes to temperature and fan speed I'm running into a wall. My research so far suggests that Windows does not provide an out-of-the-box way to retrieve temperature/fan speed data. Third-party applications are necessary that know how to communicate with the motherboard chips. The best I came up with is that SpeedFan exposes a shared memory interface and there exists a library which hooks into the Windows SNMP facility and bridges over to SpeedFan's shared memory interface; it's called SFSNMP (site currently down). Unfortunately the library doesn't work; there's an open bug report about it at SpeedFan, but it's currently not moving (although the SFSNMP author is active there). So, unless that's going to be fixed anytime soon, are there any alternatives? I'm not fond of buying software to get this feature, given that I take it for granted that my system should expose the information needed to monitor it properly, but don't let that stop you from answering.
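
    As a hedged sanity check of the SNMP side (before hunting for a temperature OID), snmpwalk from the Munin host against the Windows box; "windows-pc" and the "public" community string are placeholders:

        # storage table (what the existing disk-space graphs are built from)
        snmpwalk -v 2c -c public windows-pc HOST-RESOURCES-MIB::hrStorageTable

        # per-CPU load from the same HOST-RESOURCES-MIB
        snmpwalk -v 2c -c public windows-pc 1.3.6.1.2.1.25.3.3.1.2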

    Read the article

  • Disc drive busy on MacBook, with disc stuck inside.

    - by ayaz
    I have a white MacBook, running Snow Leopard, 10.6.3 with the latest updates. I popped in a DVD that the system failed to mount. I did not see any conspicuous errors. As a result of this failure, the DVD got stuck in the drive. Neither pressing the eject button on the keyboard nor running the diskutil eject command caused the DVD to come out. The commands drutil eject and drutil tray open could not get the DVD to budge at all. The 'mount' and 'eject' buttons in the Disk Utility window are dimmed out, and it says in the middle that that particular disc drive is busy. This is not the first time this has happened to me. I know that I will ultimately have to resort to rebooting the system and holding down the eject button to get the DVD to come out. But is there any workaround that does not involve rebooting the system and prying the disc out? The drive on this MacBook does not have a needle-pin reset button - at least, I couldn't find it anywhere. Any help will be greatly appreciated. Many thanks.

    Read the article

  • How to set up a GRUB2 chainloader to another GRUB (Fedora, Debian) on GPT

    - by basic6
    I'm trying to set up a dedicated GRUB2 which (chain-)loads another GRUB on a disk with a GPT partition table. Relevant partitions: /dev/sda1 BIOS_BOOT /dev/sda2 BOOT (ext2) /dev/sda3 FEDORA (ext4) /dev/sda6 DEBIAN (ext4) I installed Fedora first, using /dev/sda2 as the boot partition. Then I installed Debian. The Debian installer recognized the Fedora installation and added it as a boot entry, then installed its GRUB into the MBR. While this works for the moment, it's pretty messy, because every Debian update may change the boot config, removing the Fedora entry (tried it), and the other way around. That's why I want both systems to have their own boot loader and one main boot loader (which could reside on /dev/sda2) that loads one of them. This is what I've tried: Moved everything from /dev/sda2 to /dev/sda3/boot Removed the /boot mount point in Fedora (so /dev/sda2 isn't used anymore) From a live Linux, installed GRUB2 to the MBR (grub-install --boot-directory=sda2 /dev/sda) Wrote a menu.lst: title Fedora root (hd0,2) chainloader +1 (Again, for Debian) Converted that to a grub.cfg script (grub-menu2cfg or something like that) When booting, actually got a GRUB2 menu with "Fedora" (and "Debian") When selecting either of those: error: invalid signature Issued "grub-install /dev/sda6" (and ...sda3) from all kinds of live Linux systems, all of which failed with another error message (in the case of the Debian installer, without explanation at all) Added --force to the chainloader line; now it says "loading", then reboots Found dozens of howtos, none of which seem to work for me Since I get the self-made GRUB2 menu on bootup, I've at least successfully installed the first stage of GRUB, right? When trying to chainload, some signature is checked and seems to be wrong - how do I fix it? The boot menus (Fedora with its different kernel versions, and Debian with Debian and Fedora as well) are now on the system partitions (/dev/sda3, /dev/sda6); is there anything else to do on these partitions so they can be chainloaded? Any help is greatly appreciated.
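
    A hedged alternative to chainloader +1 on GPT: have the dedicated GRUB2 on /dev/sda2 read each installation's own GRUB2 config instead of a boot sector. The menu below is only a sketch for the hand-written grub.cfg on /dev/sda2; the grub.cfg paths inside each system (Fedora may use /boot/grub2/, Debian /boot/grub/) are assumptions to verify:

        menuentry "Fedora" {
            insmod part_gpt
            insmod ext2
            set root=(hd0,gpt3)
            configfile /boot/grub2/grub.cfg
        }

        menuentry "Debian" {
            insmod part_gpt
            insmod ext2
            set root=(hd0,gpt6)
            configfile /boot/grub/grub.cfg
        }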

    Read the article

< Previous Page | 373 374 375 376 377 378 379 380 381 382 383 384  | Next Page >