Search Results

Search found 11543 results on 462 pages for 'partition wise join'.


  • Best way to attach 96 TB to a workstation

    - by user994179
    I'm running a workstation with dual Xeon 5690s (12 physical/24 logical cores), 192 GB of RAM (i.e., maxed out), Windows 7 64-bit, 5 slots for adapter cards, and 1 TB of internal storage, with 5 more internal bays available. I have an app that creates data files totaling about 88 TB. These are written once every 14 months, and the rest of the time the app only needs to read them; 95% of the reads are sequential reads of huge chunks of data. I have some control over how big the individual files are, but ideally they would be between 5 and 8 TB. The app will be reading from only one drive at a time, and the nature of the data is such that if (when) a drive dies I can restore the data to a new disk from tape. While it would be nice to be able to use the fastest drives/controllers available, at this point size matters more than speed. After doing lots of reading, I am leaning toward buying a bunch of cheap 2 TB drives and putting them into a bunch of cheap enclosures. All this stuff is going into my home office, so I need to avoid the raised-floor/refrigerated approach. My questions:
    - Is the cheap drive/enclosure solution the best one for this situation?
    - Given the nature of the app and the way the data is used, does RAID make sense? If so, which one?
    - For huge sequential reads, would USB 3.0 and eSATA be a wash performance-wise?
    - For each slot available on the workstation, can I hook up an enclosure that can hold multiple drives? Or is it one controller per drive?
    - If I can have multiple drives on one controller, am I essentially splitting the bandwidth (throughput)? For example, if I have a 12-bay enclosure, is the throughput of the controller reduced by a factor of 12?
    - Are there any Windows 7 volume/drive/capacity limits I should be aware of?
    Thanks

    Read the article

  • Fix for OpenSolaris with no gcc vs. Nexenta with no ext3

    - by Jake Wharton
    I'm attempting to migrate my server from Linux to a Solaris variant during a hardware upgrade. The machine is based around an Abit AN-M2 board, which has an NForce chipset. I have what seems to be a chicken-and-egg problem of sorts: OpenSolaris 2009.06 does not recognize the NIC, and I cannot compile the drivers for it as it also lacks gcc. I haven't tested whether or not I can mount an ext3 partition yet, but it's moot if there is no networking. Nexenta 3.0b3 recognizes the NIC, but I cannot get the ext3 drives mounted because FSWfspart refuses to install. I do not know much about Solaris, but I wager this is because Nexenta is based around Debian as well. While I am reusing the mobo/CPU combo, I did just spend a lot of money on the other hardware around it and would very much like to get it up and running smoothly and quickly. Does anyone have any suggestions that are not:
    - Get a new mobo/CPU
    - Run another OS
    - Use an alternate NIC

    Read the article

  • Need help diagnosing cause of BSOD

    - by n00neimp0rtant
    I've got a Windows 7 PC here that was getting very frequent BSODs. The error codes varied (IRQL_NOT_LESS_OR_EQUAL, PAGE_FAULT_IN_NONPAGED_AREA, ATTEMPTED_WRITE_TO_READONLY_MEMORY), and it was impossible to use the computer for longer than an hour without getting one. I backed it all up and reinstalled Windows completely, thinking that it was a corrupt driver issue. Unfortunately, I'm still getting the BSODs with the same error messages on this fresh installation. This tells me that it must be some faulty piece of hardware, but I'm not sure what it would be. I ran Memtest+ on the machine but did not get any errors after 2 passes. I also ran a few built-in recovery-mode scans from the HP recovery partition. I need some ideas on how to test the rest of the hardware to solve this issue.

    Read the article

  • Windows XP waits for USB drive on almost every operation

    - by Tomasz Kowalczyk
    Hello everyone, I have a problem with my Windows XP operating system, particularly with the USB device that is plugged in - a 1 TB WD My Book external drive. I haven't found any information about such behaviour when searching the Internet, so I have to ask you. The problem is: when I am using the computer, especially during work (programming), and I try to access any information on a hard disk (an "internal" one), Windows seems to "consult" it with the external drive. For example, when I open a file-selection dialog window and try to change directory, the system activates the external drive, reads something (I hear the disk's operational noise), and after some seconds of such a pause it performs the operations I requested. There are many situations in which I can reproduce this behavior - opening My Computer, shutting down the system, opening a partition folder from My Computer - every operation involves the usage of the external drive. Please understand me properly - this is not something that happens EVERY time, but at least "many" times a day. What causes such behavior and how can I "turn off" the external drive when it's not needed? Thanks in advance for your answers.

    Read the article

  • What could explain my partitions disappearing on a new SSD with Windows 7?

    - by charlesrandall
    So this morning, I had a fresh install of Windows 7 Pro on a new SSD (Patriot TorqX 128 GB), which I had just put into a new Dell Studio XPS 9000. Everything was fine. I booted to Windows, no problem. I go to work. 8 hours later, I come home, and I'm greeted by my boot screen complaining about no bootable devices. Windows repair from the Windows 7 Pro disk says it can't fix the problem. It doesn't see any Windows installs. I boot up GParted, and my SSD is completely unallocated. No space used, no partitions. Perhaps this is related to allowing Windows 7 to create a utility partition when I installed? That's the only thing I can think of. Is there some kind of known hardware issue that can result in an SSD completely wiping itself?

    Read the article

  • HomeGroup administrator user no longer exists

    - by Beninja
    I had a PC with Windows 7 that was the HomeGroup administrator for my network. I recently upgraded to Windows 8. I went to HomeGroup in Control Panel and saw that the original HomeGroup was never removed. It says to talk to the administrator and obtain the password to join the HomeGroup. I need to create a new HomeGroup, but I can't unless I somehow remove the old one. And I can't do that, because the user that had rights to the old one no longer exists. Please help!! Ben

    Read the article

  • What files should be excluded from a complete Windows backup?

    - by tro
    I'm starting to use CrashPlan to back up my Win 7 PC. I've got it writing to my external HD (for quick local restores) and to CrashPlan Central (for offsite storage). I'd like to back up my entire C:\ drive (the only partition) in a way that:
    - Preserves all of my installed software and configuration, but
    - Avoids backing up log files and other ephemeral / temporary files that are regenerated during normal operation of the OS.
    Which files and/or directories should I be excluding from backups? I'd like to make this a community wiki, so that we could all contribute towards a definitive list. Here's the list of regular expressions identifying the directories and files that CrashPlan excludes on Windows by default, as listed at http://support.crashplan.com/doku.php/articles/admin_excludes:
        .*/(?:42|\d{8,})/(?:cp|~).*
        (?i).*/CrashPlan.*/(?:cache|log|conf|manifest|upgrade)/.*
        .*\.part
        .*/iPhoto Library/iPod Photo Cache/.*
        .*\.cprestoretmp.*
        *\.rbf
        :/Config\\.Msi.*
        .*/Google/Chrome/.*cache.*
        .*/Mozilla/Firefox/.*cache.*
        .*\$RECYCLE\.BIN/.*
        .*/System Volume Information/.*
        .*/RECYCLER/.*
        .*/I386.*
        .*/pagefile.sys
        .*/MSOCache.*
        .*UsrClass\.dat\.LOG
        .*UsrClass\.dat
        .*/Temporary Internet Files/.*
        (?i).*/ntuser.dat.*
        .*/Local Settings/Temp.*
        .*/AppData/Local/Temp.*
        .*/AppData/Temp.*
        .*/Windows/Temp.*
        (?i).*/Microsoft.*/Windows/.*\.log
        .*/Microsoft.*/Windows/Cookies.*
        .*/Microsoft.*/RecoveryStore.*
        (?i).:/Config\\.Msi.*
        (?i).*\\.rbf
        .*/Windows/Installer.*
    Other excludes:
        .*\.(class|obj)
        .*/hiberfil.sys
        (?i).*\.tmp
        (?i).*/temp/
        (?i).*/tmp/
        .*Thumbs\.db
        .*/Local Settings/History/
        .*/NetHood/
        .*/PrintHood/
        .*/Cookies/
        .*/Recent/
        .*/SendTo/

    Read the article

  • Setting up autotest with RSpec in Ubuntu

    - by Reactor5
    I'm trying to set up autotest on Ubuntu, and no matter what my configuration, I get this:
        loading autotest/rails_rspec2
        style: RailsRspec2
        /home/brian/.rvm/gems/ruby-1.9.2-rc2@rails3tutorial/gems/redgreen-1.2.2/lib/redgreen/autotest.rb:6:in `<top (required)>': uninitialized constant Object::PLATFORM (NameError)
    The .autotest file (~/.autotest) I have is as follows:
        #!/usr/bin/env ruby
        require 'redgreen/autotest'

        # Pop up a desktop notification via notify-send
        def self.notify title, msg, img, pri='low', time=3000
          `notify-send -i #{img} -u #{pri} -t #{time} '#{msg}'`
        end

        Autotest.add_hook :ran_command do |at|
          # Pull the "x examples, y failures, ..." summary out of the spec output
          results = [at.results].flatten.join("\n")
          output = results.slice(/(\d+)\s+examples?,\s*(\d+)\s+failures?(,\s*(\d+)\s+not implemented)?(,\s*(\d+)\s+pending)?/)
          folder = "~/Pictures/autotest/"
          # Pick an icon and urgency based on whether anything failed or is pending
          if output =~ /([123456789]|[\d]{2,})\sfailures?/
            notify "FAIL:", "#{output}", folder+"rails_fail.png", 'critical', 10000
          elsif output =~ /[1-9]\d*\spending?/
            notify "PENDING:", "#{output}", folder+"rails_pending.png", 'normal', 10000
          else
            notify "PASS:", "#{output}", folder+"rails_ok.png"
          end
        end
    What am I doing wrong here?

    Read the article

  • Eager/Lazy loaded member always empty with JPA one-to-many relationship

    - by Kaleb Pederson
    I have two entities, a User and a Role, with a one-to-many relationship from user to role. Here's what the tables look like:
        mysql> select * from User;
        +----+-------+----------+
        | id | name  | password |
        +----+-------+----------+
        |  1 | admin | admin    |
        +----+-------+----------+
        1 row in set (0.00 sec)

        mysql> select * from Role;
        +----+----------------------+---------------+----------------+
        | id | description          | name          | summary        |
        +----+----------------------+---------------+----------------+
        |  1 | administrator's role | administrator | Administration |
        |  2 | editor's role        | editor        | Editing        |
        +----+----------------------+---------------+----------------+
        2 rows in set (0.00 sec)
    And here's the join table that was created:
        mysql> select * from User_Role;
        +---------+----------+
        | User_id | roles_id |
        +---------+----------+
        |       1 |        1 |
        |       1 |        2 |
        +---------+----------+
        2 rows in set (0.00 sec)
    And here's the subset of orm.xml that defines the tables and relationships:
        <entity class="User" name="User">
          <table name="User" />
          <attributes>
            <id name="id">
              <generated-value strategy="AUTO" />
            </id>
            <basic name="name">
              <column name="name" length="100" unique="true" nullable="false"/>
            </basic>
            <basic name="password">
              <column length="255" nullable="false" />
            </basic>
            <one-to-many name="roles" fetch="EAGER" target-entity="Role" />
          </attributes>
        </entity>
        <entity class="Role" name="Role">
          <table name="Role" />
          <attributes>
            <id name="id">
              <generated-value strategy="AUTO"/>
            </id>
            <basic name="name">
              <column name="name" length="40" unique="true" nullable="false"/>
            </basic>
            <basic name="summary">
              <column name="summary" length="100" nullable="false"/>
            </basic>
            <basic name="description">
              <column name="description" length="255"/>
            </basic>
          </attributes>
        </entity>
    Yet, despite that, when I retrieve the admin user, I get back an empty collection. I'm using Hibernate as my JPA provider, and it shows the following debug SQL:
        select user0_.id as id8_, user0_.name as name8_, user0_.password as password8_
        from User user0_
        where user0_.name=? limit ?
    When the one-to-many mapping is lazy loaded, that's the only query that's made. This correctly retrieves the one admin user. I changed the relationship to use eager loading, and then the following query is made in addition to the above:
        select roles0_.User_id as User1_1_, roles0_.roles_id as roles2_1_,
               role1_.id as id9_0_, role1_.description as descript2_9_0_,
               role1_.name as name9_0_, role1_.summary as summary9_0_
        from User_Role roles0_
        left outer join Role role1_ on roles0_.roles_id=role1_.id
        where roles0_.User_id=?
    Which returns the following results:
        +----------+-----------+--------+----------------------+---------------+----------------+
        | User1_1_ | roles2_1_ | id9_0_ | descript2_9_0_       | name9_0_      | summary9_0_    |
        +----------+-----------+--------+----------------------+---------------+----------------+
        |        1 |         1 |      1 | administrator's role | administrator | Administration |
        |        1 |         2 |      2 | editor's role        | editor        | Editing        |
        +----------+-----------+--------+----------------------+---------------+----------------+
        2 rows in set (0.00 sec)
    Hibernate obviously knows about the roles, yet getRoles() still returns an empty collection. Hibernate also recognized the relationship well enough to put the data into the join table in the first place. What problems can cause these symptoms?
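
    For reference, a minimal annotation-based sketch of the same unidirectional mapping is shown below. This is a hedged illustration only, not the poster's actual code: the field names, the HashSet initialization, and the getter are assumptions; it simply relies on the same JPA default as the orm.xml above, where a unidirectional one-to-many with no join column falls back to a join table named User_Role (User_id, roles_id).
        import java.util.HashSet;
        import java.util.Set;
        import javax.persistence.*;

        @Entity
        public class User {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            @Column(length = 100, unique = true, nullable = false)
            private String name;

            @Column(length = 255, nullable = false)
            private String password;

            // Unidirectional one-to-many with no mappedBy and no join column:
            // JPA defaults to a join table named User_Role (User_id, roles_id).
            @OneToMany(fetch = FetchType.EAGER, targetEntity = Role.class)
            private Set<Role> roles = new HashSet<Role>();

            public Set<Role> getRoles() { return roles; }
        }

        // In its own Role.java file:
        @Entity
        class Role {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            @Column(length = 40, unique = true, nullable = false)
            private String name;

            @Column(length = 100, nullable = false)
            private String summary;

            @Column(length = 255)
            private String description;
        }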

    Read the article

  • How do boot sectors and multiple drives work?

    - by GiH
    I don't fully understand the concept of a boot sector, and I was hoping someone could clear this up for me. If you have two hard drives, with an OS installed on each, does each drive have its own boot sector? Does each drive need an MBR partition? I've got Linux and Windows on two separate drives. I've had issues when installing Linux and GRUB, and now I've finally decided to use the Windows bootloader to start up. Would Windows have gotten rid of GRUB when I used /fixmbr, or does it stay there on the boot sector of the other drive?

    Read the article

  • How to locate Chrome bookmarks in Linux

    - by xenon
    I upgraded from Karmic Koala to Lucid Lynx beta, and it was working fine for a while (it was even rebooting). But after some time it stopped booting, and I can't find a solution. I have tried installing GRUB again; that doesn't help. Well, the problem is that all my settings, bookmarks and passwords are locked in that partition. I can't find where Chrome stores bookmarks in Ubuntu. Can you help me either get my system booting again or get the bookmarks back? Thanks. P.S. I am currently on a live USB.

    Read the article

  • Xen disk mapping problem under OpenSolaris

    - by Louis
    I have a system with two hard disks. I wanted to use the simplicity of ZFS for my file server, and I also need to run a Linux. I chose Xen virtualization for that, since it is supported on both systems. My GRUB is well configured and I can boot both systems. What I would like is to run both systems, with Solaris as dom0 and the Debian installed on the 2nd HD as a virtual machine. My problem is that I want to use the partitions of my 1st hard disk (sda1 under Linux) and it does not work. I didn't find my use case on the web. Here is the OpenSolaris device name of this partition:
        /dev/rdsk/c7d0p1
    But when I use the following disk mapping in my Xen configuration file:
        disk = [ 'phy:rdsk/c7d0p1,sda1,w' ]
    I get the error:
        Error: Device 2049 (vbd) could not be connected. error: "rdsk/c7d0p1" is not a valid block device.
    I am "lost".

    Read the article

  • Restoring from .wim image without access to Windows DVD

    - by Steven H
    I'm attempting to fix a friend's computer. It will not boot to anything Windows-related (see my earlier question for more information). I was able to boot into Peppermint OS to back up her files and grab the HP OEM image (.wim) so that I can restore from it (OEM W7 key, so I can't just do a W7 reinstall). However, I cannot figure out what the heck I need to do to be able to actually restore her computer to that image. I tried using these instructions on TechNet to create a WinPE flash drive, but those instructions don't actually make the flash drive bootable, so that option didn't work (the partition is labeled as active, but when trying to boot from it I get the message "Remove disks or other media. Press any key to restart."). All of the other instructions that I found require that I get into WinRE or boot from an install disk, which I cannot do. Any suggestions as to how I can apply this .wim boot image?

    Read the article

  • Windows 7 Image Restore with a smaller hard drive

    - by Vaccano
    I have a 500 GB drive that I have made a system image of. I would like to move that to a 250 GB drive (because it is a solid-state drive). I have made a Windows 7 backup image of my 500 GB drive. I am currently only using 163 GB of that drive. Can I just restore that to the target drive, or will the restore be expecting a 500 GB drive? If it is expecting that, I can shrink my partition to less than 250 GB and back up again. But I would rather not if that is not needed. Will the restore realize that it is not using all the space and just take what it needs?

    Read the article

  • BitNami LAMP stack on Ubuntu

    - by Desmond Liang
    I just installed the BitNami LAMP stack on Ubuntu. When I visit localhost/127.0.0.1, Apache returns "403 Forbidden. You don't have permission to access / on this server." I tried repointing Apache's home directory to another folder (same hard drive, same partition) that's set to 777 recursively. Still getting 403. I then changed the ownership of the directory from root/root to my username and the daemon group. Same error. Am I missing something here?

    Read the article

  • How to uncompress a 9GB file in Windows FAT32

    - by Kashif
    I have a 2 GB RAR file that contains a 9 GB video file. I'm using a FAT32 file system. Now I want to unzip that file, but after 4 GB I get an error due to the FAT32 file size limit. I want to know how I can extract that video. I know that one way is to convert my partition to NTFS, but I don't want to go that way. I've also tried 7-Zip, but that again gives an error after 4 GB. One other way is to split the file, but I don't know how I can split a video file that is zipped. Any ideas, please? How can I get rid of this problem?

    Read the article

  • Real benefits of TCP TIME-WAIT and implications in a production environment

    - by user64204
    SOME THEORY
    I've been doing some reading on the TCP TIME-WAIT state (here and there) and what I read is that it's a value set to 2 x MSL (maximum segment lifetime) which keeps a connection in the "connection table" for a while to guarantee that, "before you're allowed to create a connection with the same tuple, all the packets belonging to previous incarnations of that tuple will be dead". Since segments received (apart from SYN under specific circumstances) while a connection is either in TIME-WAIT or no longer existing would be discarded, why not close the connection right away?
    Q1: Is it because there is less processing involved in dealing with segments from old connections, and less processing to create a new connection on the same tuple when in TIME-WAIT (i.e. are there performance benefits)?
    If the above explanation doesn't stand, the only reason I see the TIME-WAIT being useful would be if a client sends a SYN for a connection before it sends the remaining segments for an old connection on the same tuple, in which case the receiver would re-open the connection but then get bad segments and would have to terminate it.
    Q2: Is this analysis correct?
    Q3: Are there other benefits to using TIME-WAIT?
    SOME PRACTICE
    I've been looking at the munin graphs on a production server that I administer. Here is one: As you can see, there are more connections in TIME-WAIT than ESTABLISHED, around twice as many most of the time, and on some occasions four times as many.
    Q4: Does this have an impact on performance?
    Q5: If so, is it wise/recommended to reduce the TIME-WAIT value (and to what)?
    Q6: Is this ratio of TIME-WAIT / ESTABLISHED connections normal? Could this be related to malicious connection attempts?
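
    As a rough worked example of the 2 x MSL figure (not from the original post; it assumes the classic RFC 793 value of MSL = 2 minutes, whereas many real stacks hard-code a much shorter wait, e.g. around 60 seconds on Linux):
        \text{TIME-WAIT duration} = 2 \times \text{MSL} = 2 \times 120\ \text{s} = 240\ \text{s} = 4\ \text{minutes}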

    Read the article

  • Upgrading Windows 8 Consumer Preview to Release Preview

    - by user1407016
    I currently have my main hard drive split into two partitions: one is Windows 7 with about 600 GB of space, the other is Windows 8 Consumer Preview with about 50 GB. As you can guess, it is set up for dual boot. Today, while looking up how to get the C# Facebook SDK for Metro apps, I learned about the Release Preview being released. I was wondering: how do I go about getting rid of Windows 8 Consumer Preview and replacing it with the Release Preview? I know I can't just wipe it off my second partition, because the dual boot uses Windows 8 to choose the operating system to boot.

    Read the article

  • Is there any way to build a RAID system without all the drives?

    - by xenoterracide
    I'm building a RAID1 system (OK, it will probably be a RAID10,f2, but the difference with 2 drives... isn't much) with two 1 TB drives. However, one of the drives I've ordered is bad, so I'm RMA-ing it. I'm wondering if I could partition and install to the one drive and then rebuild the array when I get the second drive (after I test it, of course). My initial investigation doesn't show me a way of creating the array without specifying all devices... and the device the second drive will replace is one that has data that I will need to migrate (plus it's not big enough). Is it possible to create an array without specifying all devices? Or to specify false ones and reconfigure to the right ones later? Or is there some other method I'm not thinking of?

    Read the article

  • Fastest-booting desktop Linux distro? [closed]

    - by Kim
    I'm currently running Ubuntu 9.04 on my laptop and I'm very happy with it, but boot times aren't great... So I'd like to have a second distribution on my hard disc that I can boot into to quickly check my email and stuff like that. It really only needs to run Firefox and a terminal. Ext4 support would be a plus, since my Ubuntu partition is ext4. In the next couple of hours I will try xPUD and DSL. Any other suggestions? EDIT: Tried xPUD; it hangs on boot.

    Read the article

  • SafeBoot and Wubi

    - by G S
    Hi, having read various posts it seems clear that installing Wubi on a machine which already has SafeBoot installed with disk encryption is a non-starter at the moment - nor is repartitioning the disk, it would seem. However, while the C: partition is encrypted, there is an NTFS-formatted Q: partition that is not encrypted and has about 4 GB free - so I was wondering if there was any way to install Wubi using that partition. Obviously I'd still need a means of booting into it without stuffing my SafeBoot booting mechanism. I'm thinking that this should be possible, as the SafeBoot booting mechanism will get me to the boot choice options, and thereafter all references in the boot.ini entry for the Wubi installation should be to files on the non-encrypted NTFS partition, so it should be OK and not attempt any changes to my encrypted C: partition (other than adding the extra entry to boot.ini)? Any thoughts? Ta

    Read the article

  • Vista won't boot - just get black screen

    - by DisgruntledGoat
    When I boot into Windows Vista (Ultimate), I just get a black screen (with the mouse visible and working). If I run in safe mode, it seems to pause for a while when loading crcdisk.sys. A lot of research says it could be a problem with the hard drive, but I dual boot Ubuntu and that works fine, and I can still see and use the Windows partition absolutely fine from Ubuntu. I tried using the "startup repair" option on the Vista install disk, but it didn't detect any problems. I have run chkdsk several times with this: chkdsk C: /f /r, and also on drive D (the recycle bin). The first two times it detected and fixed errors on the C drive, but now it doesn't detect any errors. Is there anything else that could be causing this problem?

    Read the article

  • Why is the USB disk corrupted by Vista restore?

    - by Martin
    I have a laptop with Vista Business on an 80 GB disk. I have created a full backup and stored that on the original 80 GB drive. On my new 320 GB disk, I have created a partition with exactly the same number of bytes as the original 80 GB disk. I swap the disks so that the 320 GB is internal and the 80 GB is in a USB caddy. I boot from the NEO restore CD and everything looks fine: I select the dump on the USB drive, the target is drive C:, and I start the restore. After a few seconds, the restore fails with a "not enough disks in machine or disk not large enough" error (I did note the exact phrase). I then swap the 80 GB disk back to the internal drive, but the thing is unbootable. Why has the restore process scrubbed the boot status of the USB drive?

    Read the article

  • Preparing a new physical system with VMware

    - by Max
    I need to create a new installation of Windows, but at the same time I need this computer. So I decided to create a new physical disk from within VMware, install Windows/drivers/software, and then just replace the HDD in the computer. I've bought a new HDD, split it into two partitions, and installed Windows 7 using VMware's ability to use physical disks. I can see the Windows files and directories that have been created on this partition, but when I replace the HDD in the host machine it cannot boot from it. Why is that? Is it at all possible to create a bootable physical disk with VMware, or should I create a virtual disk and then use some HDD imaging tool to copy the HDD image to a physical disk? Maybe there's a better way of installing a new system and working on the computer at the same time?

    Read the article

  • Underbraces in Word math zones and dealing with stretchy parentheses

    - by Johannes Rössel
    Parentheses in Word usually stretch with whatever they're containing. This might be unnoticeable for things like [image], but for stuff like [image] it's definitely nice, especially compared to the fact that naïve LaTeX users often produce uglinesses such as [image]. There is a problem, however, when using under-/overbraces in math: when putting parentheses around the complete term, it becomes ugly. For simple things like the one shown here [image], this can be solved by not letting the parentheses stretch, which looks almost right. However, for more complex things it's certainly not an option: [image] Both variants look horrible. So is there a way of letting the parentheses stretch only around the actual term parts, not including the under-/overbraces? Those are frequently used for annotations of individual pieces, so simply not using them is a bad idea too. In LaTeX you can get away with guesswork and using explicit sizes for the parentheses instead of relying on \left and \right, but I haven't found a comparable option in Word yet. Since the underbrace is (tree-wise) a sibling of the term in parentheses, it probably simply has to stretch, and there probably can't be an algorithm that determines when to stretch or when not, considering that \above and \below are used for annotations as well but also for other things where parentheses have to stretch. Also, since the parenthesized expression is opaque from the outside, one has to put the underbrace inside - from a markup point of view, at least. One can probably draw the rest around, but that falls apart when styles change and wouldn't be a good idea either.
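
    For comparison, here is the kind of LaTeX workaround alluded to above (a hedged sketch with made-up terms and annotation text; amsmath is assumed for \text): explicitly sized delimiters such as \Bigl/\Bigr hug the term itself, while \left/\right stretch to cover the underbrace annotation as well.
        % \left/\right size to the whole box, including the brace and its annotation:
        \[ \left( \underbrace{a + b}_{\text{partial sum}} + c \right) \]

        % Explicitly sized delimiters ignore the annotation below the brace:
        \[ \Bigl( \underbrace{a + b}_{\text{partial sum}} + c \Bigr) \]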

    Read the article
