Search Results

Search found 607 results on 25 pages for 'tb'.

Page 12/25 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • How can I turn a single element in a list into multiple elements using Python?

    - by Trivun
    I have a list of elements, and each element consists of four separate values that are separated by tabs: ['A\tB\tC\tD', 'Q\tW\tE\tR', etc.] What I want is to create a larger list without the tabs, so that each value is a separate element: ['A', 'B', 'C', 'D', 'Q', 'W', 'E', 'R', etc.] How can I do that in Python? I need it for my coursework, due tonight (midnight GMT), and I'm completely stumped.
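
    A minimal sketch of one way to do this (the sample list is an assumption shaped like the question):

        rows = ['A\tB\tC\tD', 'Q\tW\tE\tR']  # assumed sample data

        # Split each tab-separated element and flatten the pieces into one list.
        flat = [value for row in rows for value in row.split('\t')]
        print(flat)  # ['A', 'B', 'C', 'D', 'Q', 'W', 'E', 'R']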

    Read the article

  • Changing the block size of a dfs file in Hadoop

    - by Sam
    I found that my map tasks are currently inefficient when parsing one particular set of files (2 TB total). I'd like to change the block size of those files in the Hadoop dfs (from 64 MB to 128 MB). The documentation only explains how to do it for the entire cluster, not for one set of files. Does anyone know the command that would set the block size when I upload the files (i.e. copy them from local to the dfs)? Thanks!
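
    A hedged sketch of one common approach: the HDFS block size can be overridden per upload with a -D property, so only the copied files get 128 MB blocks and the cluster default stays untouched. The paths are assumptions, and the property is dfs.block.size on older Hadoop releases and dfs.blocksize on newer ones; shown here as a small Python wrapper around the hadoop CLI.

        import subprocess

        local_path = "/data/input/part-00000"    # assumed example source file
        dfs_path = "/user/sam/input/part-00000"  # assumed example destination

        # 128 MB = 134217728 bytes; -D sets the property for this command only.
        subprocess.check_call([
            "hadoop", "fs",
            "-D", "dfs.block.size=134217728",    # use dfs.blocksize on newer versions
            "-put", local_path, dfs_path,
        ])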

    Read the article

  • HP SmartArray P400: How to repair failed logical drive?

    - by TegtmeierDE
    I have an HP server with a SmartArray P400 controller (incl. 256 MB cache/battery backup) with a logicaldrive whose failed physicaldrive I replaced, but it does not rebuild. This is how it looked when I detected the error: ~# /usr/sbin/hpacucli ctrl slot=0 show config Smart Array P400 in Slot 0 (Embedded) (sn: XXXX) array A (SATA, Unused Space: 0 MB) logicaldrive 1 (698.6 GB, RAID 1, OK) physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK) array B (SATA, Unused Space: 0 MB) logicaldrive 2 (2.7 TB, RAID 5, Failed) physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK) physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK) physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, Failed) physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK) unassigned physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SATA, 750 GB, OK) ~# I thought that I had drive 2I:1:8 configured as a spare for Array A and Array B, but it seems this was not the case :-(. I noticed the problem due to I/O errors on the host, even though only 1 physicaldrive of the RAID 5 has failed. Does someone know why this could happen? The logicaldrive should go into "Degraded" mode but still be fully accessible from the host OS!? I first tried to add the unassigned drive 2I:1:8 as a spare to logicaldrive 2, but this was not possible: ~# /usr/sbin/hpacucli ctrl slot=0 array B add spares=2I:1:8 Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration. ~# Interestingly, it is possible to add the unassigned drive to the first array without problems. I thought maybe the controller put the array into "failed" state due to the missing spare and protects failed arrays from modification. So I tried to re-enable the logicaldrive (to add the spare afterwards): ~# /usr/sbin/hpacucli ctrl slot=0 ld 2 modify reenable Warning: Any previously existing data on the logical drive may not be valid or recoverable. Continue? (y/n) y Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration. ~# But as you can see, re-enabling the logicaldrive was not possible. Now I replaced the failed drive by hot-swapping it with the unassigned drive. The status now looks like this: ~# /usr/sbin/hpacucli ctrl slot=0 show config Smart Array P400 in Slot 0 (Embedded) (sn: XXXX) array A (SATA, Unused Space: 0 MB) logicaldrive 1 (698.6 GB, RAID 1, OK) physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK) array B (SATA, Unused Space: 0 MB) logicaldrive 2 (2.7 TB, RAID 5, Failed) physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK) physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK) physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, OK) physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK) ~# The logical drive is still not accessible. Why is it not rebuilding? What can I do?
FYI, this is the configuration of my controller: ~# /usr/sbin/hpacucli ctrl slot=0 show Smart Array P400 in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Serial Number: XXXX Cache Serial Number: XXXX RAID 6 (ADG) Status: Enabled Controller Status: OK Chassis Slot: Hardware Revision: Rev E Firmware Version: 5.22 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Analysis Inconsistency Notification: Disabled Raid1 Write Buffering: Disabled Post Prompt Timeout: 0 secs Cache Board Present: True Cache Status: OK Accelerator Ratio: 25% Read / 75% Write Drive Write Cache: Disabled Total Cache Size: 256 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True ~# Thanks for your help in advance.

    Read the article

  • Intel Rapid Storage / Smart Response SSD caching issue

    - by goober
    Background: I recently built my own PC. It works! Almost. It's been a while since I got into the guts of these things, so I'm familiar but may be missing something simple. FYI, I don't care about blowing the OS away -- it's brand new and we can go back from scratch as many times as necessary. Goal / Issue: I'd like to use the SSD to take advantage of Intel's Smart Response technology (allows the SSD to act as a cache for HDDs). I would like the SSD cache to act as a cache for my HDDs, which I would like to be in a RAID1 array (so I get the speed from the SSD and the redundancy from the RAID1). However, Windows only sees the drive in Device Manager (not as a drive), so I'm unsure what to do about that. Related: as far as I know, for this to work, the drives all have to be in a single RAID array (i.e. a RAID0 pairing of the SSD and the RAID1 HDD array). However, when attempting this at the BIOS level, I am told there is not enough space for an array. Steps so far: Moved the SSD onto the Intel controller (I'd had it on the Marvell 6.0 controller instead of the Intel controller, so the BIOS was only seeing it in a strange way). Updated the BIOS of the motherboard to the latest version. Reinstalled Intel's RST (iRST?) software several times, as some forums reported it working after reinstalling 3 times (which does not inspire confidence). Checked Intel storage: it does see the SSD as a physical, non-RAID device. However, it says no space exists if I try to create an array. Checked the BIOS: it does not show up in the boot order, but is an option that can be selected under boot options. Tried the firmware update for that model. Issue: the firmware CD doesn't detect a drive; maybe the Intel storage controller is making it difficult? Moved the SSD to the Marvell controller. The firmware update CD appeared to hang while searching for drives. Swapped out the SATA cable for the manufacturer's and moved back to the Intel storage controller. Noticed at this point that in the Intel RST software, a device DOES show up in addition to the RAID set -- only shown as a "60 GB internal disk". Windows doesn't appear to see it as a drive, but it does still show in Device Manager. Moved the SSD to a port in the 0-3 range on the motherboard and set SATA mode to IDE (after disconnecting the RAID1 config) to allow the firmware update to work. Firmware was already at the latest version. Next steps? Components involved: ASUS P8Z68-V PRO motherboard (Intel Z68 chipset), Intel i7 2600k processor, 2 x 1 TB 7200 RPM HDDs, 64 GB Crucial M4 SSD (M4-CT064M4SSD2). For reference -- storage configuration: [flattened ASCII diagram showing which SATA ports the two 1 TB HDDs and the 64 GB SSD sit on across the Intel 3 Gbps, Intel 6 Gbps, and Marvell 6 Gbps controllers]. For reference -- Intel RST (v10.8.0.1003) screenshot: don't mind the "rebuilding" -- knocked a power cable out at one point; it's doing its job, not an indicator of a bad HDD. Any thoughts? Thanks in advance for any help!

    Read the article

  • Choice of an OS for a home ZFS NAS

    - by OlafM
    I am preparing a home NAS with an old Athlon 64 X2 3800+, 4 GB ECC RAM, an Asus M2V MX motherboard, and a single 3 TB WDC Green (another one as a mirror may be installed in the future). It's the cheapest solution I found that includes ECC memory, and the higher energy consumption is offset by the lower (zero) cost of acquisition. The system will be used for: music storage and streaming to other desktop computers; storage of my scanned slides (3-4k slides, a 180 MB TIFF each plus a reduced-quality JPEG version); streaming of these photos to a local iPad 2 (maybe the Plex App? not yet sure); (one additional) remote backup via rsync/ssh or ZFS send/receive. It will be controlled via remote ssh, maybe VNC, no monitor attached. The absolute requirement is a reliable ZFS solution, plus the ability to easily install packages/software/virtual machines and to update remotely (I will be the admin and I don't live near the NAS). I have mainly three options: NAS4free/FreeNAS, OpenIndiana, or Solaris Express 11 (yeah yeah, I know the license requirements, I will write a Perl script on it to count it as a development machine). Problems: NAS4free/FreeNAS (I tested only NAS4free) requires an embedded installation for remote upgrading, but a full install for easy addition of software packages. Since I need at least AirVideo Server (linux/win) and the Plex App (win/linux) to stream the photos and some videos to the iPad (they both require VirtualBox), but I cannot be there to install updates, NAS4free/FreeNAS are excluded. http://www.nas4free.org/general_information.html explains the issue: embedded can be remotely updated, full cannot. Solaris has also another advantage: the Crashplan client supports Solaris and I'm already using it for other backups. I would like to leave that option open, even if I will probably be doing backups through zfs send/receive. NexentaStor was left out because zfs send/receive are not included in the free version. The question is now Solaris 11 Express versus OpenIndiana. To ease the management, I will be using http://www.napp-it.org Which one would you suggest and why? I found lots of information and it's difficult for me to decide. I think (from the napp-it manual) that Solaris has some additional options for SMB shares, but are they really needed at home? I think I won't even use ACLs, since normal unix-style permissions are enough. OpenIndiana maybe has more frequent updates (Solaris offers only security updates between releases), but again, do I need them? I don't think so. Moreover, this is a NAS that has to work and nothing else; I cannot risk having problems that require me to access the server. Isn't OpenIndiana a bit more... cutting edge (in the Solaris world)? I'm just asking, no need to focus on this for the answer :-) I would limit myself to these two options (SE11.1/OI) also because I will be making a NAS for myself in the future (where high performance with Mac shares is also required) and Solaris has kernel support for AFP. I will use this server to gather experience as well. After this long question, thanks in advance! If you need additional info, let me know and I will update this post. UPDATES: Given the first answers, I will strongly suggest that the person paying for the hardware add a second HD. Better 2x2TB than 1x3TB (3 TB is oversized anyway). I was trying to keep the initial costs down to spread them over a longer period, but it's better to have something good from the beginning.

    Read the article

  • Big Data's Killer App...

    - by jean-pierre.dijcks
    Recently Keith spent some time talking about the cloud on this blog and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from? OK, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extent to write this post. What is Big Data anyway? The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well, it turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical; it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle and you may store clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data. NoSQL and MapReduce Nope, sorry, this is not the killer app... and no, I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with Perl or Tcl, others with MapReduce. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting; some people will use Java. Some data is stored on HDFS, making Cascading the way to go; some data is stored in Oracle, and SQL does do a good job there. As with storage and with history, be pragmatic and use what fits, and neither NoSQL nor MR will be the one and only. Also, a language, while important, does in itself not deliver business value. So while cool, it is not a killer app... Vertical Behavioral Analytics This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but I'm after something more interesting that you can only do after collecting large volumes of specific data.
That all-important data is about behavior. What do my customers do? More importantly, why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. That is where the Big Data lives and where these patterns are now emerging. Other examples however are emerging, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, has its own quirky logic and has its own demands and priorities. Of course, the methods and some of the software will be common and some will have both retail and service industry analytics in place (your corner coffee store for example). But the gist of it all is that analytics that can predict customer behavior for a specific focused group of people in a specific industry is what makes Big Data interesting. Building a Vertical Behavioral Analysis System Well, that is going to be interesting. I have not seen much going on in that space and if I had to have some criticism of the Cloud Connect conference it would be the lack of concrete user cases on big data. The telco example, while a step into the vertical behavioral part, is not really on big data. It used a sample of data from the customers' data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there, including companies like SAS. In-place, by the way, does not mean "no data movement at all"; what it means is that you will do this on the data's permanent home. For SAS that is kind of the current problem. Most of the inputs live in a data warehouse. So why move it into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100 TB to 500 TB of distilled data... Comments? As it is still early days with these systems, I'm very interested in seeing reactions and thoughts on some of these ideas...

    Read the article

  • Dropped hard drive won't mount

    - by Dave DeLong
    I have a 2 TB HFS+-formatted external hard drive that got dropped a couple of days ago while transferring files onto a Macbook Pro. Now the drive's partitions won't mount. Disk Utility can see the drive, but doesn't recognize that it has any partitions. I've tried using Data Rescue 2 to recover files off of it, but it couldn't find anything. In addition, our local computer repair shop said they couldn't find anything on there either. I know that I could ship the drive off to someone like DriveSavers, but I was hoping for a cheaper option (since they start at about $500 for the attempt). Is there something else I could try on my own? Would TestDisk be able to help with something like this?

    Read the article

  • Move smaller hard drive to partition on a larger hard drive

    - by bluejeansummer
    My parents bought a new hard drive for a laptop that I've owned for several years. It's much larger than the current one, so I plan on splitting it up to dual boot it with Ubuntu. I have no problem with partitioning a drive (I always keep a LiveCD handy), but my question is this: how can I go about moving the existing partition to the new drive? This is a laptop, so I can't simply plug the new drive into another slot. Also, even if I manage to move it, will Windows still work on the new drive in a larger partition? I've had this laptop for quite a while, and I've lost the recovery discs that came with it a long time ago. I also have a lot of software without CDs to reinstall them with. This makes not reinstalling Windows a high priority. In case it helps, both drives use 2.5" PATA, and I have a 1 TB external drive available if it's needed.

    Read the article

  • low-cost RAID NAS for home use?

    - by gravyface
    I have a noisy, power-hungry Pentium 4-based Ubuntu server that I want to replace with a nice, low-power mini-ITX/Intel Atom-based machine to run my network services (DHCP, DNS, IPSec, web/mail, FTP, etc.), and am thinking of a (hopefully) equally low-powered NAS using NFS over GbE with at least 1 TB of space and a RAID 5 (preferred) or RAID 0 (likely) configuration for redundancy, with a couple of spare disks I can swap in as needed down the road. Would I be better off getting a full-sized ATX mobo/case and configuring the RAID internally? I really want to keep power consumption down as much as possible as I leave my home server up 24/7.

    Read the article

  • Experience with MooseFS?

    - by brown.2179
    Anyone have any experience using MooseFS? I want an easy distributed storage platform to store a static data archive of about 10 TB and serve it to 20-40 nodes. Also I want to be able to add storage as the archive grows without having to rebuild the filesystem. I don't care if it's a bit slow. I just want it to be simple and stable. Basically, from what I can see, for OS X it's between MooseFS and Gluster. Any other suggestions?

    Read the article

  • System backup with Norton Ghost

    - by Mehper C. Palavuzlar
    I will upgrade my system from Windows Vista Home Premium (x64) to Windows 7 (x64). Before starting the upgrade process, I want to back up my current system with Norton Ghost. I have never used it before, so I need assistance to do that. At the moment, Vista is using 139 GB of space and I have a 1 TB external HD connected via USB. If you can give me step-by-step instructions on how to back up and how to restore if the upgrade somehow fails, I'd appreciate it. Thanks.

    Read the article

  • Hyper-V VMs hanging 10 minutes after startup

    - by Ken George
    Hyper-V running under a fresh install of 2008 R2 DC. 2 VMs, both running 2008 R2 STD. One VM has SQL Server 2008 w/ SP2 and Office 2007 Enterprise w/ SP2; the other VM has only Office 2007 w/ SP2. Approximately 10 minutes after rebooting the Hyper-V host, the VMs will hang. Hang = answers pings, but no RDP connections and the Hyper-V console session is non-responsive. Disabled Hyper-V and had no problem with the 2008 R2 DC host. Started the three Hyper-V services and 10 minutes later it was hung again. Hardware is an HP DL380 G4, 2-socket, 48 GB, internal SAS controller with a 1.5 TB C: drive; the VMs' .VHDs are on an external SAS controller on a 1.5 TB RAID 5 volume. Nothing in the event log on either VM or the Hyper-V host. Ken

    Read the article

  • Windows 7 Reading Proper Disk Usage Statistics on Mounted Volumes

    - by Troy Perkins
    I'm running Windows 7 with 2 x 1.5 TB drives. The second internal drive is set up as a volume mounted at C:\Archives. Clicking the Computer icon in Windows Explorer only shows capacity stats for C: and not C:\Archives. Also, the usage stats that do show for C: indicate 100% capacity in red, yet the system runs fine. No warnings. Can someone explain this? I do have a lot of stuff on the C: drive, but I'm sure it's not 1.5 TB worth, and C:\Archives hardly has anything in it. Thanks! Troy
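
    If it helps to sanity-check the numbers outside Explorer, a small Python sketch (paths taken from the question) can query each volume directly; a folder-mounted volume reports the stats of the volume mounted at that path:

        import shutil

        # C:\ and C:\Archives are separate volumes, so they report separate stats.
        for path in ("C:\\", "C:\\Archives"):
            usage = shutil.disk_usage(path)
            print(path,
                  "total:", usage.total // 2**30, "GiB",
                  "free:", usage.free // 2**30, "GiB")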

    Read the article

  • Suggest me a good php-fpm configuration

    - by Werulz
    I am configuring a server for a friend. The server has the following specs: 8 GB RAM, quad core processor, 1 TB HDD, 100 Mbps port. However, all PHP files are loading very slowly. I did a speed test and the server takes 16 seconds to load the first byte. I strongly believe it's my php-fpm configuration. The server uses nginx and PHP only, no MySQL etc... My current php-fpm configuration: pm.max_children = 50 pm.start_servers = 10 pm.min_spare_servers = 5 pm.max_spare_servers = 35 Server load and RAM usage are perfectly fine. Please suggest a good configuration for this server. UPDATE: This configuration works fine: pm.max_children = 20 pm.start_servers = 7 pm.min_spare_servers = 5 pm.max_spare_servers = 10 pm.max_requests = 100 The problem with first-byte load time is solved. However, after about 15-20 hours the first-byte load time increases gradually, and I have to reload php-fpm to get a small load time again. Based on my conf above, what should I modify so that the first-byte load time remains small and I don't have to restart php-fpm? :P
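
    One rough way to pick pm.max_children is to measure how much memory a busy php-fpm worker actually uses and divide the RAM you can spare by that figure. A hedged Python sketch (the 6 GB budget is an assumption, leaving room for the OS and nginx):

        import subprocess

        # Resident memory (KB) of every running php-fpm process; run while the site is busy.
        ps = subprocess.check_output(["ps", "-C", "php-fpm", "-o", "rss="], text=True)
        rss_kb = [int(line) for line in ps.split()]
        avg_worker_mb = sum(rss_kb) / len(rss_kb) / 1024

        ram_for_php_mb = 6000  # assumed budget out of the 8 GB total
        print("average worker:", round(avg_worker_mb), "MB")
        print("suggested pm.max_children:", int(ram_for_php_mb // avg_worker_mb))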

    Read the article

  • Windows Home Server style redundancy/multi-disk-support on Windows Server 2008 R2?

    - by user19597
    I'm setting up a fileserver for our department. It'll be connected to the domain. I want it to have a very large amount of storage (several TB). Ideally, it should also preserve disk space by identifying identical files and only storing them once. It should be fault tolerant, so that if one of the drives fails, that drive can be replaced without losing any data. All of these features are available in Microsoft's consumer offering - Windows Home Server. However, I can't find these kinds of features within the enterprise Windows Server 2008 R2. Am I missing something? I know that I could buy a Drobo, or similar, and use this instead. However, I would prefer to use a built-in feature of Windows Server should it exist. It seems surprising to me that these features should be available in Home Server but not in an enterprise fileserver.

    Read the article

  • Fan twitches and LEDs blink when computer is plugged in

    - by Zifre
    I just finished assembling a desktop for the first time. The specs are: Gigabyte GA-H55M-S2H motherboard Core i3 530 CPU 4 GB DDR3 RAM 1 TB SATA hard drive 500 Watt PSU As soon as I plug in the computer, the "phase LED" starts blinking orange and the system fan LED blinks while the fan "twitches". This continues until about three seconds after I unplug the computer. This worries me a lot because I haven't even turned the computer on and it continues even after there is no power. I did make sure the PSU is on the proper power setting. What is causing this and how can I fix it? Is the motherboard dead?

    Read the article

  • Screen resolution of Googlebot mobile?

    - by Baumr
    Does Googlebot-Mobile have a viewport resolution it sends across? If so, what is it? It's a general question with broad relevance, but I am asking with reference to responsive design: particularly when serving different image resolution to different viewports via JavaScript. While Googlebot has its issues with JavaScript, it will become better with time. Thus, it would be good to know which version of the same image would be crawled (since most responsive image JS solutions base their logic on resolution). Feature phones Googlebot-Mobile: SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html) DoCoMo/2.0 N905i(c100;TB;W24H16) (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html) Smartphone Googlebot-Mobile: Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_1 like Mac OS X; en-us) AppleWebKit/532.9 (KHTML, like Gecko) Version/4.0.5 Mobile/8B117 Safari/6531.22.7 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)

    Read the article

  • Performance hit with new hard drive?

    - by aaaidan
    I've recently upgraded my laptop's internal hard drive from a 160 GB to a 1 TB drive. I cloned the drive, then installed it. The general system performance seems appreciably slower. In particular, application launches seem to take much longer. Is this possible, or am I just expecting too much from the new drive? The machine is a MacBook Pro which is a couple of years old. Any ideas? 160 GB, 7 MB cache, 5400 rpm, NCQ (Hitachi HTS545016B9SA02) -- original drive; 1 TB, 8 MB cache, 5400 rpm, SATA300, NCQ (Western Digital WD10TPVT-00HT5T0). Sisoftware links: Hitachi HTS545016B9SA02 Western Digital WD10TPVT-00HT5T0

    Read the article

  • Does anyone know the fix/cause of USB drives that lose connection

    - by Burch Kealey
    I have been having significant problems trying to copy files to a USB drive (Go-Flex, 1.5 TB). No matter what I use to try to copy the files (Python code, Windows copy and paste, or the Drobo Copy utility), I ultimately get a failure. The failure is always some indication that the device is not ready or able to be written to. I am pretty sure the problem is that the drive is losing its connection. I have done everything I can find so far, including setting the USB root hub to never turn off the power, and I updated some USB drivers etc. I have found references to this problem primarily with Win7 64-bit. I have also had USB connection problems with other devices - we kept losing the connection to our Bravo Disc Publisher when we went to Win7 and finally bought a newer model and have not had problems since. Any pointers about diagnosing and/or understanding the problem would be appreciated.

    Read the article

  • How do I use an internal SSD as a scratch disk for FCP X?

    - by andrewb
    I'm contemplating setting up my MacBook Air as a video editing machine. If I do this, I'll upgrade to a 256 GB SSD, and I should be able to keep around 100 GB or more free for video editing. The video files would of course be stored externally, but save purchasing some expensive Thunderbolt RAID device (which I suppose is gradually becoming more of an option), the external storage will be slow for reads/writes. How can I have a setup where I take advantage of my SSD's speed as a scratch disk/cache for FCP X, but still have the TB(s) of storage of externals? I don't want to have to be moving files constantly back and forth; this is about saving time, not wasting it.

    Read the article

  • Corrupt file indicative of corrupt hard drive?

    - by Elipsicon
    I have noticed that two files on my (almost full) 2 TB hard drive have been corrupted. One file has 20 kB (!) corrupted, i.e. 20 consecutive kB have changed, even though the modification date of the file hasn't changed and I haven't worked with this file for over a year. This tells me that something "below" the file system level has messed with the data, and the only thing I can think of is hardware failure, most likely hard disk failure. I've tested my RAM already and it works flawlessly. I'm using ext4 on Linux, if that is of any help. Is this normal? Is it time to change my hard disk drive before something worse happens? What can I do to prevent that from happening in the future? Is there some built-in feature of, or extension to, ext4 that includes additional error-correcting code and/or watches files for changes that haven't been caused by the OS?
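
    ext4 itself does not checksum file data, so one workaround is to keep your own checksums and compare them on a later pass (any legitimate edit will also show up, so this only detects changes, it does not repair them). A minimal sketch, with the manifest name and scan root as assumptions:

        import hashlib, json, os, sys

        def sha256(path, chunk=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        def scan(root):
            return {os.path.join(d, n): sha256(os.path.join(d, n))
                    for d, _, names in os.walk(root) for n in names}

        manifest = "checksums.json"   # assumed manifest location
        current = scan(sys.argv[1])   # e.g. python check.py /mnt/data
        if os.path.exists(manifest):
            old = json.load(open(manifest))
            for path, digest in current.items():
                if path in old and old[path] != digest:
                    print("changed:", path)
        json.dump(current, open(manifest, "w"))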

    Read the article

  • Is an 850W SMPS enough for the config below?

    - by bali208
    We are planning to assemble a file server with the following configuration: 1) Intel S2400SC2 motherboard. 2) Xeon 2407 processor x 2 (dual processors). 3) 8 GB x 8 ECC DDR3 RAM. 4) One 2 TB Seagate HDD. 5) 8 cabinet fans fitted in an NZXT Switch 810 cabinet. Our question is: is a Cooler Master 850 Watt power supply enough for our machine? Or do we have to use a 1050 Watt power supply? Please suggest the best brand and wattage of SMPS to buy for our system. Thank you.
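
    As a rough sanity check, you can add up the peak draw of the parts; the per-component figures below are assumptions (check the datasheets), but even rounded up generously the build sits far below 850 W:

        # Back-of-the-envelope peak power budget with assumed figures.
        budget_watts = {
            "2 x Xeon E5-2407 (80 W TDP each)": 2 * 80,
            "8 x DDR3 ECC DIMMs (~4 W each)":   8 * 4,
            "2 TB 7200 rpm HDD":                10,
            "8 case fans (~3 W each)":          8 * 3,
            "motherboard, controllers, misc":   60,
        }
        print("estimated peak draw:", sum(budget_watts.values()), "W")  # ~286 W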

    Read the article

  • 11gR2 DataGuard: Restarting DUPLICATE After a Failure

    - by rene.kundersma
    One of the great new features that comes in very handy when databases get larger and larger these days is RMAN's capability to duplicate from an active database and even restart a duplicate when it fails. Imagine the problem I had lately: I used the duplicate from active database feature and had to wait for an hour or 6 before all datafiles were transferred. At the end of the process some error occurred because of the syntax. While this error was easy to solve, I was afraid I had to redo the complete procedure and transfer the 2.5 TB again. Well, 11gR2 RMAN surprised me when I re-ran my command, with the following output: Using previous duplicated file +DATA/fin2prod/datafile/users.2968.719237649 for datafile 12 with checkpoint SCN of 183289288148 Using previous duplicated file +DATA/fin2prod/datafile/users.2703.719237975 for datafile 13 with checkpoint SCN of 183289295823 Above I only show a small snippet, but what happened is that RMAN smartly skipped all files that were already transferred! The documentation says this: RMAN automatically optimizes a DUPLICATE command that is a repeat of a previously failed DUPLICATE command. The repeat DUPLICATE command notices which datafiles were successfully copied earlier and does not copy them again. This applies to all forms of duplication, whether they are backup-based (with and without a target connection) or active database duplication. The automatic optimization of the DUPLICATE command can be especially useful when a failure occurs during the duplication of very large databases. If a DUPLICATE operation fails, you need only run the DUPLICATE again, using the same parameters contained in the original DUPLICATE command. Please see chapter 23 of the 11g Release 2 Database Backup and Recovery User's Guide for more details. B.t.w., be very careful with the duplicate command. A small mistake in one of the 'convert' parameters can potentially overwrite your target's controlfile without prompting! Rene Kundersma Technical Architect Oracle Technology Services

    Read the article

  • Apache Prefork Configuration

    - by user1618606
    I'm a newbie at VPS configuration. I've installed Apache, PHP and MySQL, and now I need to know how to configure prefork to optimize Apache. The system configuration is: CPU cores 2 x 2 GHz @ 4 GHz, RAM 2304 MB DDR3, burst memory 3 GB DDR3, disk space 30 GB SSD, bandwidth 3 TB, switch port 1 Gbps. Currently, after Linux, MySQL, Apache and PHP, there are 250 MB of memory in use. Well, I have no idea how to calculate the values. I saw on some websites vars like: KeepAlive On KeepAliveTimeout 1 MaxKeepAliveRequests 100 StartServers 15 MinSpareServers 15 MaxSpareServers 15 MaxClients 20 MaxRequestsPerChild 0 or StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 What should I do: prefork or worker? Where and how are the vars placed? In httpd.conf?
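
    A hedged back-of-the-envelope for prefork sizing: take the RAM left after the OS and other services, keep some headroom, and divide by what one Apache child uses (measure yours with ps or top under load; the figures below are assumptions):

        total_ram_mb = 2304
        other_usage_mb = 250   # from the question: Linux + MySQL + Apache + PHP at idle
        headroom_mb = 300      # assumed safety margin for caches and spikes
        per_child_mb = 30      # assumed size of one Apache child; measure your own

        max_clients = (total_ram_mb - other_usage_mb - headroom_mb) // per_child_mb
        print("suggested MaxClients:", max_clients)  # ~58 with these assumptions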

    Read the article
