Search Results

Search found 5809 results on 233 pages for 'isolated storage'.

Page 151/233 | < Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >

  • HDD situation - what would be best - data and backup

    - by Sam Johnson
    I just installed W8 on an Intel 330 180 GB SSD. I have three 1 TB HDDs. One HDD will be external for backup; the other two are then available for my PC. I do not need 2 TB of storage, so I thought I'd set these up to be exact clones of one another, so that if one dies I have a backup in the computer to go along with my external. Is this a good setup? How best would this be accomplished? I've heard people suggest RAID, but I've never done RAID, have no idea what it is, and have no idea how to set it up in my BIOS. Thanks in advance.

    Read the article

  • Best way to convert from IMAP to POP3?

    - by Brad
    At work, I connect to a corporate Exchange server via IMAP and Thunderbird 3. Over the course of a year or so, I've created quite a few folders on the server and have a lot of mail stored there. I'm hitting the storage limit of my mail account and want to convert to pulling mail down to my local box (running Linux) via POP3. I know that polling mail will only get mail in INBOX, but I'm wondering if there are solutions out there that could be used to pull mail from the other folders as well, or am I doomed to moving mail into the inbox manually and polling over and over again?
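
    One approach that often comes up for exactly this situation (a sketch, not a recommendation from the thread): keep IMAP but mirror every folder down to a local Maildir tree with offlineimap - plain POP3 will only ever see INBOX. A minimal ~/.offlineimaprc, with placeholder host and account names:

        [general]
        accounts = Work

        [Account Work]
        localrepository = WorkLocal
        remoterepository = WorkRemote

        [Repository WorkLocal]
        type = Maildir
        localfolders = ~/Mail/Work

        [Repository WorkRemote]
        type = IMAP
        remotehost = mail.example.com
        remoteuser = brad
        ssl = yes

    After a full sync the server-side folders can be pruned to get back under the quota; the local Maildirs can be served back to Thunderbird by a small local IMAP server such as Dovecot if needed.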

    Read the article

  • jQuery - I'm getting unexpected outputs from a basic math formula.

    - by OllieMcCarthy
    Hi, I would like to start by saying I'd greatly appreciate anyone's help on this. I have built a small calculator to calculate the amount a consumer can save annually on energy by installing a ground heat pump or solar panels. As far as I can tell the mathematical formulas are correct, and my client verified this yesterday when I showed him the code. Two problems. The first is that the calculator is outputting ridiculously large numbers for the first result. The second problem is that the solar result only works when there are no zeros in the fields. Are there some quirks as to how one should write mathematical formulas in JS or jQuery? Any help greatly appreciated. Here is the link - http://www.olliemccarthy.com/test/johncmurphy/?page_id=249 And here is the code for the entire function -

        $jq(document).ready(function(){
            // Energy Bill Saver
            // Declare Variables
            var A = ""; // Input for Oil
            var B = ""; // Input for Storage Heater
            var C = ""; // Input for Natural Gas
            var D = ""; // Input for LPG
            var E = ""; // Input for Coal
            var F = ""; // Input for Wood Pellets
            var G = ""; // Input for Number of Occupants
            var J = "";
            var K = "";
            var H = "";
            var I = "";
            // Declare Constants
            var a = "0.0816";  // Rate for Oil
            var b = "0.0963";  // Rate for NightRate
            var c = "0.0558";  // Rate for Gas
            var d = "0.1579";  // Rate for LPG
            var e = "0.121";   // Rate for Coal
            var f = "0.0828";  // Rate for Pellets
            var g = "0.02675"; // Rate for Heat Pump
            var x = "1226.4";
            // Splitting up I to avoid error
            var S1 = ""; // Splitting up the calculation for I
            var S2 = ""; // Splitting up the calculation for I
            var S3 = ""; // Splitting up the calculation for I
            var S4 = ""; // Splitting up the calculation for I
            var S5 = ""; // Splitting up the calculation for I
            var S6 = ""; // Splitting up the calculation for I
            // Calculate H (Ground Sourced Heat Pump)
            $jq(".es-calculate").click(function(){
                $jq(".es-result-wrap").slideDown(300);
                A = $jq("input.es-oil").val();
                B = $jq("input.es-storage").val();
                C = $jq("input.es-gas").val();
                D = $jq("input.es-lpg").val();
                E = $jq("input.es-coal").val();
                F = $jq("input.es-pellets").val();
                G = $jq("input.es-occupants").val();
                J = ( A / a ) + ( B / b ) + ( C / c ) + ( D / d ) + ( E / e ) + ( F / f ) ;
                H = A + B + C + D + E + F - ( J * g ) ;
                K = ( G * x ) ;
                if ( A !== "0" ) { S1 = ( ( ( A / a ) / J ) * K * a ) ; } else { S1 = "0" ; }
                if ( B !== "0" ) { S2 = ( ( ( B / b ) / J ) * K * b ) ; } else { S2 = "0" ; }
                if ( C !== "0" ) { S3 = ( ( ( C / c ) / J ) * K * c ) ; } else { S3 = "0" ; }
                if ( D !== "0" ) { S4 = ( ( ( D / d ) / J ) * K * d ) ; } else { S4 = "0" ; }
                if ( E !== "0" ) { S5 = ( ( ( E / e ) / J ) * K * e ) ; } else { S5 = "0" ; }
                if ( F !== "0" ) { S6 = ( ( ( F / f ) / J ) * K * f ) ; } else { S6 = "0" ; }
                I = S1 + S2 + S3 + S4 + S5 + S6 ;
                if(!isNaN(H)) {$jq("span.es-result-span-h").text(H.toFixed(2));}
                else{$jq("span.es-result-span-h").text('Error: Please enter numerals only');}
                if(!isNaN(I)) {$jq("span.es-result-span-i").text(I.toFixed(2));}
                else{$jq("span.es-result-span-i").text('Error: Please enter numerals only');}
            });
        });
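
    One likely explanation (a reader's sketch, not the original poster's fix): jQuery's .val() always returns strings, and the rate "constants" above are also declared as strings, so the + operator concatenates instead of adding, while the / operator silently coerces to numbers. A small stand-alone illustration, runnable in any browser console:

        // Sketch of the likely cause - hypothetical values, not taken from the live form.
        var oil = "1200", gas = "300", rateOil = "0.0816";  // what .val() effectively hands back
        console.log(oil + gas);                             // "1200300" - string concatenation, hence the huge H
        console.log(oil / rateOil);                         // 14705.88... - division coerces, so J looks sane
        console.log(parseFloat(oil) + parseFloat(gas));     // 1500 - numeric addition
        var S1 = "0";                                       // the else branches assign the *string* "0"
        console.log(typeof (100 + S1));                     // "string" - so I can end up without a .toFixed()

    The usual cure is to run every .val() through parseFloat (defaulting empty fields to 0), declare the rates as plain numbers, and compare against the number 0 in the guards.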

    Read the article

  • What are performance limits of a database?

    - by Tommy
    What are some rough performance limits (reads/s, writes/s) for a single database server (no master-slave architecture), assuming storage on disk? How many reads/s and writes/s can I expect, depending on the kind of disk (SSD vs non-SSD), assuming simple operations (select one row by primary key, update one row, correctly indexed)? I assume this limit is mostly dependent on disk seek/write speed. EDIT: My question is more about getting rough metrics for the number of operations a database supports: to be able to know, for example, whether a new feature triggering 300 inserts/s can be supported without scaling out with additional servers.

    Read the article

  • Archive Outlook mail items into SQL Server

    - by marc_s
    I am looking for (and so far not finding) a solution to archive e-mail items from my Outlook into SQL Server. My PST is beginning to get really, really big, and I'd love to extract my older e-mail into SQL Server in a way that still lets me easily find mails if needed. I would prefer SQL Server as the storage medium since I'm familiar with it, and it's rock solid - I don't want to have a collection of PST files or CHM files or anything like that. Does anyone know of such a solution? I'm a power/home user - I can't afford $5'000 enterprise licenses - I need a sub-$100 solution for private use.

    Read the article

  • Recommending simple appliance for DansGuardian, iptables, snort inline

    - by SRobertJames
    I'm currently using a Linksys E2000 with dd-wrt. I'd like to add DansGuardian for Content Filtering and snort-inline for IPS; but those require a more powerful box (mainly, more storage). Can you recommend a good device to use? I'm open to both overwrite-the-firmware (like dd-wrt) and designed-to-be-customized boxes. Requirements:
        1. 5+ Ethernet ports, pref. GigE
        2. small form factor
        3. No noise (office environment)
        4. low power
        5. Not sure about 802.11 wireless
    Budget < $400, pref. less.

    Read the article

  • Calculating IOPS for a single HDD - what am I doing wrong?

    - by red888
    So I know there is no standardized way of calculating IOPS for an HDD, but from everything I have read it appears one of the most accurate formulas is the following:

        IOP/ms = {seek time} + {rotational latency} + ({block size} / {data transfer rate})

    which is IOs per millisecond, or what the book I've been reading calls "Disk Service Time". Also, rotational latency is calculated as half of one rotation, in milliseconds. This was taken from the EMC book "Information Storage and Management" - arguably a pretty reliable source, right/wrong? Putting this formula into practice, consider this Seagate data sheet. I am going to calculate IOPS for the ST3000DM001 model for a block size of 4 kB:

        Seek Average (Write) = 9.5 ms   (I'll be measuring IOPS for writes)
        Spindle speed        = 7200 rpm
        Average Data Rate    = 156 MB/s

    So my variables are:

        Seek Time          = 9.5 ms
        Rotational latency = (0.5 / (7200 rpm / 60)) = 0.004 s = 4 ms
        Data Rate          = 156 MB/s = (0.156 MB/ms / 0.004 MB) = 39

        9.5 ms + 4 ms + 39 = 52.5 IO/ms
        1 / (52.5 * 0.001) = 19 IOPS

    19 IOPS for this drive clearly is not right, so what am I doing wrong?
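
    One thing that stands out (a reader's recalculation, not a quote from the EMC book): the transfer term should be {block size} / {data transfer rate}, i.e. a time in milliseconds, whereas the 39 above comes from dividing the rate by the block size, and the whole sum is a service time per I/O rather than "IOs per millisecond". A quick sketch of the same numbers:

        # A reader's recalculation for the ST3000DM001 figures quoted above - not from the book.
        seek_ms = 9.5                                              # average write seek from the data sheet
        rotational_ms = 0.5 / (7200.0 / 60) * 1000                 # half a rotation at 7200 rpm ~ 4.17 ms
        transfer_ms = (4.0 / 1024) / 156 * 1000                    # 4 KB block / 156 MB/s ~ 0.026 ms
        service_time_ms = seek_ms + rotational_ms + transfer_ms    # ~ 13.7 ms per I/O
        print(round(service_time_ms, 2), round(1000 / service_time_ms))   # ~ 13.69 ms, ~ 73 IOPS

    Roughly 70-80 IOPS is the usual ballpark for a 7200 rpm drive, so the formula itself holds up once the units are kept straight.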

    Read the article

  • Parity Initialization after putting in two new disks

    - by lbanz
    All my firmware is up to date on the server and the controllers. Storage crashed over the weekend. I rebooted it and it detected that I had put in two new disks last week (I did check that both disks completed the rebuilding process last week). After it booted into the OS I saw that it gave me the information message below. After 18 hours it is at 54%, so it is looking healthy. But I need to replace 5 more disks in the MSA. Should I wait for this parity initialization to finish before replacing more disks?

        785 Background parity initialization is currently queued or in progress on Logical Drive 1 (15.0 TB, RAID 5). If background parity initialization is queued, it will start when I/O is performed on the drive. When background parity initialization completes, the performance of the logical drive will improve.

    Read the article

  • Optimal Configuration for five 300 GB 15K SAS Drives

    - by Bob
    I recently acquired an HP Z800 workstation that has five 300 GB 15K SAS Drives. This system will be dedicated to running multiple virtual machines under VMware Workstation (Note: I'm not using ESXi because I do plan to use the system for other purposes.). For the host OS, I plan to install RHEL 5. My number one concern is guest performance. For example, should I create a RAID 10 array for the OS and virtual machine storage with four of the drives and reserve the 5th? Or, is there a solution that will provide better performance?

    Read the article

  • System has reached the maximum size allowed for the system part of the registry

    - by Bob Denny
    To be precise: "System has reached the maximum size allowed for the system part of the registry. Additional storage requests will be ignored." WinXP/64, running fine for 2 years (no /3GB switch), just started happening. I used ntregopt and the problem went away, at least temporarily. However, looking before and after in Windows\System32\Config I see that my System file was reduced by only 10% and is still 170+ MB. According to my rather extensive research with Google, this is "huge" and should be more like 10-20 MB. The system runs fine. There is a System.bak that is only 11 MB and has the date when I ran ntregopt. That's what I know. Now my question: is there anything I can do to reduce or rebuild the System registry hive given the above info?

    Read the article

  • Best way to replicate servers

    - by Matthew
    I currently have two servers, both with Linux software RAID1 configurations. They use heartbeat and DRBD to create a shared DRBD device that hosts an exported NFS directory. The servers run Ubuntu Server with an LXDE GUI and some IP. These servers are going to be placed on fishing vessels to act as redundant storage for IP cameras. My boss wants me to figure out the most efficient way to create these servers. We might be looking at pushing out several systems a week. Each configuration will be almost identical besides IP addressing. What would be the best method to automate the configuration process? We are trying to cut down on labor costs to set these up. Imaging and preseeding are both on my mind right now.

    Read the article

  • Win 7 Explorer backup and long paths

    - by user53299
    I use Explorer to do backups because Win 7's backup program asks me to take backups previously made and put them back in the drive. I am opposed to that idea, since I believe backups should remain in storage. With Explorer backups (burn and burn to disc) I have encountered the "destination path too long" error message, and it shows the name of a folder "Debug" three times. I have hundreds of folders named "Debug", thanks to Visual Studio. At this moment I'm too angry at Microsoft to write a program to determine my 3 longest paths. (Aside: this is all after coincidentally reading two articles about path junctions earlier this evening, which already made me kind of unhappy.) Please, is there an easy way to continue to make backups with Explorer? Edit: I should add that renaming paths wrecks Visual Studio projects, so I really need to isolate the small number of problem paths or find a cleaner solution.
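
    For the "find my three longest paths" part, a short script may be less effort than it sounds; a sketch (the root folder below is a placeholder, not taken from the question):

        # Sketch: print the three longest full paths under a folder (root is a placeholder).
        import os

        root = r"C:\Users\me\Documents"
        paths = []
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                paths.append(os.path.join(dirpath, name))
        paths.sort(key=len, reverse=True)
        for p in paths[:3]:
            print(len(p), p)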

    Read the article

  • bin-deploying DLLs banned in lieu of GAC on shared IIS 6 servers

    - by craigmoliver
    I need to solicit feedback about a recent security policy change at an organization I work with. They have recently banned the bin-deployment of DLLs to shared IIS 6 application servers. These servers host many isolated web application pools. The new rules require all DLLs to be installed in the GAC. This is a problem for me because I bin-deploy several DLLs, including the ASP.NET MVC Framework, HTML Agility Pack, ELMAH, and my own shared class libraries. I do this because it:
        - Eliminates web application server dependencies on the Global Assembly Cache.
        - Allows me (the developer) to have control of what goes on inside my application.
        - Enables the application to be deployed as a "package".
        - Removes the application deployment burden from the server administrators.
    Now, here are my questions. From a security perspective, what are the advantages of using the GAC vs. bin-deployment? Is it possible to host multiple versions of the same DLL in the GAC? Has anyone run into similar restrictions?

    Read the article

  • How can I improve performance over SMB/CIFS for an application that has poor write speeds?

    - by Jeremy
    I have a third party application that reads several large files and generates a third large file. Its performance is quite good when the generated file is stored on "local storage", i.e. either a direct attached or iSCSI-based disk. The source files that are read can be stored remotely on our NAS and accessed via SMB with little effect on performance. However, if we attempt to write the target file to any kind of SMB/CIFS share (Samba or Windows Server) the performance drops almost ten-fold. This is unacceptably slow in our case. Writing files to network shares is not otherwise slow. I can copy large files to SMB shares and get great performance - near what I would expect is possible given the disks and network in question. I have a theory that this application's problem with SMB shares has something to do with a lack of write caching over the share and perhaps lots of network roundtrips. Is this possible and is there anything that can be done about it?

    Read the article

  • Using Tcl DSL in Python

    - by Sridhar Ratnakumar
    I have a bunch of Python functions. Let's call them foo, bar and baz. They accept a variable number of string arguments and do other sophisticated things (like accessing the network). I want the "user" (let's assume he is only familiar with Tcl) to write scripts in Tcl using those functions. Here's an example (taken from MacPorts) that a user could come up with:

        post-configure {
            if {[variant_isset universal]} {
                set conflags ""
                foreach arch ${configure.universal_archs} {
                    if {${arch} == "i386"} {append conflags "x86 "} else {
                        if {${arch} == "ppc64"} {append conflags "ppc_64 "} else {
                            append conflags ${arch} " "
                        }
                    }
                }
                set profiles [exec find ${worksrcpath} -name "*.pro"]
                foreach profile ${profiles} {
                    reinplace -E "s|^(CONFIG\[ \\t].*)|\\1 ${conflags}|" ${profile}
                    # Cures an isolated case
                    system "cd ${worksrcpath}/designer && \
                        ${qt_dir}/bin/qmake -spec ${qt_dir}/mkspecs/macx-g++ -macx \
                        -o Makefile python.pro"
                }
            }
        }

    Here, variant_isset, reinplace and so on (everything other than the Tcl builtins) are implemented as Python functions. if, foreach, set, etc. are normal Tcl constructs. post-configure is a Python function that accepts, well, a Tcl code block that can later be executed (which in turn would obviously end up calling the above-mentioned Python "functions"). Is this possible to do in Python? If so, how? from Tkinter import *; root = Tk(); root.tk.eval('puts [array get tcl_platform]') is the only integration I know of, which is obviously very limited (not to mention the fact that it starts up an X11 server on the Mac).
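
    One direction worth sketching (an assumption about a workable approach, not a tested recipe): Tkinter can create a bare Tcl interpreter without initialising Tk - so no X11 - and Python callables can be registered as Tcl commands with createcommand, after which the user's Tcl script calls them like any other command:

        # Sketch: expose Python functions as Tcl commands via the embedded Tcl interpreter.
        import tkinter   # spelled "Tkinter" on the Python 2 used in the question

        def reinplace(*args):
            print("reinplace called with", args)   # args arrive as plain Tcl words (strings)

        def variant_isset(name):
            return 1 if name == "universal" else 0

        interp = tkinter.Tcl()                     # Tcl interpreter only - no Tk window, no X11
        interp.tk.createcommand("reinplace", reinplace)
        interp.tk.createcommand("variant_isset", variant_isset)

        interp.tk.eval("""
            if {[variant_isset universal]} {
                reinplace -E {s|foo|bar|} /tmp/example.pro
            }
        """)

    A post-configure command could be registered the same way: Tcl passes the braced block to it unevaluated as a single string argument, which the Python side can stash and later hand back to interp.tk.eval() when it is time to run it.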

    Read the article

  • Ways to increase my Ubuntu partition space

    - by Andreas Grech
    I am currently running Ubuntu and Windows 7 as dual-boot on a single HD. The problem is that when I installed Ubuntu, I didn't allocate as much space as I thought I would need, and now I need to 'reinstall' Ubuntu so that I can increase the amount of storage space. Now there are two ways to go about this. Either I use gparted to increase my partition space (but I read that it's not really that safe as regards data loss), or I create a new partition with more space and reinstall Ubuntu there. But if I want to reinstall Ubuntu, is there a way I can somehow "save" my current Ubuntu and install that one? What I mean is that I don't want to lose my currently installed packages and the files that I have on this partition. Is there a way to kind of maybe 'streamline' my current Ubuntu so that I install this one on the new partition? If not, what are your opinions as regards gparted?
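
    For the "save my current Ubuntu" part, one common approach (a sketch, not a full migration guide; the backup path is a placeholder) is to carry over the package list and personal data rather than the installation itself:

        # On the old installation: record installed packages and back up data.
        dpkg --get-selections > ~/package-selections.txt
        rsync -a /home/ /media/backup/home/            # plus anything you changed under /etc

        # On the fresh install in the larger partition: replay the package list.
        sudo dpkg --set-selections < package-selections.txt
        sudo apt-get dselect-upgrade
        rsync -a /media/backup/home/ /home/

    Resizing with gparted does work as well, but either way a backup taken first is the real safety net.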

    Read the article

  • Can someone explain the physical architecture of RAID 10 in complete layman's terms?

    - by Hank
    I am a newbie in the world of storage and I am having a hard time digesting the physical architecture of some of the RAID levels. I am particularly interested in RAID 10, and 50. I asked the question specifically about RAID 10, because I feel if I understand that, I'll understand the other. So, I get the definition of RAID 10 - "minimum 4 disks, a striped array whose segments are mirrored". If I've got 4 disks and Disks 1 and 2 are a mirrored pair, and Disks 3 and 4 are a mirrored pair - where does the data get striped? Thanks.

    Read the article

  • RHEL raw device (over VMware RDM) performance issues

    - by jifa
    I'm running RHEL 5.3 over vSphere 4.0U1. I configured multiple LUNs on my NetApp (Fibre) storage, and added the RDMs on two (Linux) VMs, using the Paravirtual SCSI adapter. One LUN is 100GB in size, successfully mapped to /dev/sdb on both VMs; 5 more are 500MB in size (mapped to /dev/sd{c-g}). I also created one partition per device. I have encountered two issues. First, writing directly to /dev/sdb1 gives me ~50MB/s, while any of the /dev/sd{c-g}1 gives me ~9MB/s. There is no difference in the configuration of the LUNs apart from their size. I am wondering what causes this, but this is not my main problem, as I would settle for 9 MB/s. I created raw devices using udev pretty straightforwardly: ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N" per device. Writing to any of the new raw devices dramatically slows down performance to just over 900KB/s. Can anyone point me in a helpful direction? Thanks in advance, -- jifa

    Read the article

  • Cryptography: best practices for keys in memory?

    - by Johan
    Background: I've got some data encrypted with AES (i.e. symmetric crypto) in a database. A server-side application, running on an (assumed) secure and isolated Linux box, uses this data. It reads the encrypted data from the DB, and writes back encrypted data, only dealing with the unencrypted data in memory. So, in order to do this, the app is required to have the key stored in memory. The question is, are there any good best practices for securing the key in memory? A few ideas:
        - Keeping it in unswappable memory (for Linux: setting SHM_LOCK with shmctl(2)?)
        - Splitting the key over multiple memory locations.
        - Encrypting the key. With what, and how to keep the... key key... secure?
        - Loading the key from file each time it's required (slow, and if the evildoer can read our memory, he can probably read our files too).
    Some scenarios on why the key might leak: the evildoer getting hold of a memory dump/core dump; bad bounds checking in code leading to information leakage. The first idea seems like a good and pretty simple thing to do, but how about the rest? Other ideas? Any standard specifications/best practices? Thanks for any input!
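
    As a concrete illustration of the "unswappable memory" idea (a sketch only - it calls mlock(2) from Python via ctypes, though a C or Java service would call the equivalent directly), the buffer holding the key is pinned so it cannot be written to swap, and is wiped before release:

        # Sketch: keep key material in memory that the kernel will not swap out (Linux).
        import ctypes, ctypes.util, os

        libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
        key = ctypes.create_string_buffer(32)              # the AES key lives only in this buffer

        if libc.mlock(key, ctypes.sizeof(key)) != 0:       # pin the page(s) holding the key
            raise OSError(ctypes.get_errno(), "mlock failed - check RLIMIT_MEMLOCK")
        try:
            key.raw = os.urandom(32)                       # stand-in for however the real key arrives
            # ... encrypt / decrypt using key.raw ...
        finally:
            ctypes.memset(key, 0, ctypes.sizeof(key))      # wipe before unlocking
            libc.munlock(key, ctypes.sizeof(key))

    This addresses the swap and core-dump paths (together with setting RLIMIT_CORE to 0); it does nothing against an attacker who can already read the live process memory.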

    Read the article

  • Network adapters reliability

    - by casey_miller
    Can you help me with understanding the reliability of network adapters? Most of the time servers have at least 2 NICs bonded to provide a sort of HA for networking. I wonder which factors matter when you use network adapters. I know that the most important and weakest part of any computer system is storage (i.e. HDDs), but how reliable are network adapters, actually? There are more expensive ones, and cheaper adapters. In which cases, and in what circumstances, do they actually fail - might it be intensive usage, or the time they spend powered on? In your experience, how often have you found yourself changing NICs due to failure? Or just, what's the typical lifetime of commodity NICs? Thanks.

    Read the article

  • User accounts in FTP

    - by Brad
    I have an FTP server (proftpd on Debian) that I'm going to allow a couple of friends access to, and I want some safety nets in place, just in case. These are some of the things I'd like to do:
        - Jail the accounts to their home directories and impose a cap on the amount of data they can upload.
        - Allow them access to a shared folder (via symlink or something) where they have full access (also with a storage cap, but a larger one).
        - Allow my own account full access to the system (using groups, I guess).
        - Not allow anonymous access, or allow it with its own folder, separate from the shared user folder.
    Currently, I've got the accounts set up and jailed, but it seems like the symlink that I put in is not allowing them to visit the shared folder. I suppose this has to do with them not having read permissions anywhere but their own home directories, or maybe it's something else; I'll continue to look into it and provide any information that is requested. Is what I'm trying to do possible? Any tips or resources that you can share are appreciated. Thanks.
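
    On the shared-folder point, one detail worth checking (an assumption, since the config isn't shown): with a chroot-style jail such as proftpd's DefaultRoot ~, a symlink cannot point outside the jail, so the usual workaround is a bind mount into each jailed home. A sketch with placeholder paths:

        # Symlinks out of a chroot jail don't resolve; bind-mount the shared folder in instead.
        mkdir -p /home/alice/shared
        mount --bind /srv/ftp-shared /home/alice/shared
        # Make it survive reboots:
        echo '/srv/ftp-shared /home/alice/shared none bind 0 0' >> /etc/fstab

    For the upload caps, proftpd's mod_quotatab module is the usual route rather than filesystem quotas.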

    Read the article

  • Using Folder Redirection GPO and Offline Files and Folders

    - by user132844
    I want to use Folder Redirection to redirect users' My Documents to a network share. First question: what are best practices for mapping the drive? Should I use the profile tab in AD with the %username% variable, a net use logon script, or something else? Second question: how do I deal with laptops and syncing the network share with local storage? I want 2-way syncing, so that if they manually map their networked home drive and edit it from a different computer, the newer version is synced to their My Documents folder the next time they use their normal work computer, and so that if they edit a file offline on their laptop while away from the office, the network version picks up the changes the next time that laptop connects. Please advise best practices for this scenario in a 2008 R2/Win7 environment. I am also interested in Mac clients for this environment, and while I am very Mac savvy, I would like to hear what others consider to be best practices for Mac network home directories in a Windows environment.

    Read the article

  • Retrieving a specific value from "df -h" using shell

    - by Diego Dias
    When I use df -h, I get the following output:

        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00   59G  2.2G   54G   4% /
        /dev/sda1                        122M   38M   78M  33% /boot
        tmpfs                            1.1G     0  1.1G   0% /dev/shm
        10.10.0.105:/somepath             11T  8.4T  2.1T  81% /storage4
        10.11.0.101:/somepath             15T  8.9T  5.9T  61% /storage1
        /dev/mapper/patha                5.0T  255G  4.8T   5% /storage5_vol0
        /dev/mapper/pathb                5.0T  195G  4.9T   4% /storage5_vol1
        /dev/mapper/pathc                5.0T  608G  4.5T  12% /storage5_vol2

    I want to write a script that gets the value of the Avail column on a specific storage. I used to use

        df -k /storage_name | tail -1 | awk '{print $3}'

    But the Filesystem column can have a value or not, which would change the variable of my script from $3 to $4. How can I get the Avail on a single command line even if there are no values on the previous columns?
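
    One way to sidestep the wrapping (it is the long device name that pushes the numbers onto a second line) is to count fields from the right-hand end, or to ask df for the POSIX one-line-per-filesystem format; a sketch:

        # Avail is always the third field from the end of the line, wrapped or not.
        df -k /storage4 | tail -1 | awk '{print $(NF-2)}'

        # Or force single-line records with the POSIX output format; Avail is then field 4.
        df -Pk /storage4 | tail -1 | awk '{print $4}'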

    Read the article

  • Command-line access for Apple Time Machine?

    - by Stefan Lasiewski
    We use Apple's Time Machine to back up our workstations at the office. If I want to restore a file, I need to open up the Time Machine GUI and browse files there. The GUI is ugly eye-candy and gets in my way. Is there a way to browse the Time Machine archive using the Mac's command-line? I'm used to Netapps and other storage appliances. I use backintime for my Ubuntu workstation. To restore a file with one of those systems, you can restore a file with a simple command like: cp .snapshot/daily.0/filename.txt . or cp /backup/backintime/20100611-000002/backup/etc/shadow /etc/shadow Is there an equivalent for Apple's Time Machine?
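
    Two things that may help (stated as assumptions, since they depend on the OS X version): Time Machine backups are plain directory trees under Backups.backupdb on the backup volume, so ordinary shell tools work on them (sudo may be needed because of the ACLs Time Machine sets), and 10.7 and later also ship a tmutil command-line tool. A sketch with placeholder volume and machine names:

        # Browse and copy straight off the backup volume (names here are examples):
        ls "/Volumes/TimeMachine/Backups.backupdb/my-mac/"
        cp "/Volumes/TimeMachine/Backups.backupdb/my-mac/2010-06-11-000002/Macintosh HD/Users/me/file.txt" .

        # On 10.7 and later, tmutil wraps the same layout:
        tmutil listbackups
        tmutil restore "/Volumes/TimeMachine/Backups.backupdb/my-mac/2010-06-11-000002/Macintosh HD/Users/me/file.txt" ./file.txt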

    Read the article
