Search Results

Search found 1864 results on 75 pages for 'raid 0'.


  • Splitting build across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across networked machines? Use case: we are an average software development company. We own around 50 development workstations (Quad Core 2.66GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any single moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any one time. All of them are continuously built on the server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes. The problem: whenever we build 5 projects in a row, the last project is ready after around 25-50 minutes. Building in parallel does not solve the problem (the build is only part of the game; then you need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "that involves buying new expensive hardware, and we already spent a lot!". Yeah, right (damn them)! Anyway: what about splitting the build among developer workstations? Let's say whenever we need to build project "A" we check 5 workstations and start the build on all that are not overloaded. A build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software? (90% of the projects are .NET C#)
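
    One way to picture the dispatch step, as a minimal sketch rather than a build system: it assumes Sysinternals PsExec is available and that each workstation has the build toolchain installed; the machine name, credentials and solution path below are all hypothetical.

        REM dispatch.cmd -- hypothetical sketch: run one build detached on a chosen workstation
        set TARGET=WS07
        psexec \\%TARGET% -u builduser -p %BUILD_PW% -d ^
            msbuild "\\fileserver\builds\ProjectA\ProjectA.sln" /m /t:Build

    The hard parts a real solution handles for you are picking an idle machine, collecting outputs and letting a developer cancel a job, which is why agent-based CI tools (a build server with agents on the workstations) are usually the answer here.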

    Read the article

  • Java Random Slowdowns on Mac OS cont'd

    - by javajustice
    I asked this question a few weeks ago, but I'm still having the problem and I have some new hints. The original question is here: http://stackoverflow.com/questions/1651887/java-random-slowdowns-on-mac-os. Basically, I have a Java application that splits a job into independent pieces and runs them in separate threads. The threads have no synchronization or shared memory items; the only resources they do share are data files on the hard disk, with each thread having an open file channel. Most of the time it runs very fast, but occasionally it will run very slowly for no apparent reason. If I attach a CPU profiler to it, it will start running quickly again. If I take a CPU snapshot, it says it's spending most of its time in "self time" in a function that doesn't do anything except check a few (unshared, unsynchronized) booleans. I don't know how this could be accurate, because (1) it makes no sense, and (2) attaching the profiler seems to knock the threads out of whatever mode they're in and fix the problem. Also, regardless of whether it runs fast or slow, it always finishes and gives the same output, and it never dips in total CPU usage (in this case ~1500%), implying that the threads aren't getting blocked. I have tried different garbage collectors, different sizings of the parts of the memory space, writing data output to non-RAID drives, and putting all data output in threads separate from the main worker threads. Does anyone have any idea what kind of problem this could be? Could it be the operating system (OS X 10.6.2)? I have not been able to duplicate it on a Windows machine, but I don't have one with a similar hardware configuration.
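
    One low-impact check while the job is in its slow phase, as a sketch (the "MyJob" pattern stands in for the real main class), is to take a few thread dumps a few seconds apart and compare them; unlike an attached profiler, jstack barely perturbs the JVM:

        PID=$(jps -l | awk '/MyJob/ {print $1}')   # find the JVM by main-class name
        for i in 1 2 3; do
            jstack "$PID" > "dump-$i.txt"          # capture a stack snapshot
            sleep 5
        done

    If the dumps show threads genuinely sitting in that boolean-checking method, the profiler's "self time" report is probably real rather than an artifact.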

    Read the article

  • GitHub solution for personal repo

    - by Luke Maurer
    So I've got my private SVN repo on my home server, and it has maybe 30 different modules thrown together in it, ranging from abortive throw-away larks to a few endeavors that might actually go somewhere someday. But a recent filesystem failure (BTW, never ever EVER use XFS without a battery-backed hardware RAID) has me spooked and thinking of using a DVCS for all that. I've also just had quite the swig of the Git koolaid, and I've been working with GitHub of late, so that's where I'm looking right now. Of course, it would be silly to shell out major cash for a separate private Git repo for every little project, and I don't want to have to be selective about what I throw up there (I love all my children :-D ), so I'll have to be somewhat creative about this. I can happily use SSH to my home box to use Git the way I've been using SVN, and I'm thinking from there I could amalgamate everything into, say, a big project with 30 submodules, which I then push to GitHub. What'd be a sane way to set this up? Does using submodules sound feasible? How do I sync it all to my private GitHub repo? Cron job? Git hook? I'd love to hear it if anyone's done something similar. I'm not really married to Git or GitHub, so a sufficiently compelling feature of another solution might sway me. But if your answer does involve a different system (especially a different VCS), be advised it'll be a tougher sell :-)
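
    For the amalgamation idea, a minimal sketch (the repo names, paths and GitHub URL are hypothetical):

        git init superproject && cd superproject
        git submodule add ssh://homebox/~/git/project-a.git project-a
        git submodule add ssh://homebox/~/git/project-b.git project-b
        git commit -m "Track personal projects as submodules"
        git remote add github git@github.com:you/everything.git
        git push github master

    A post-receive hook on the home box (or just a nightly cron job) that runs git push github would keep the mirror current without manual steps.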

    Read the article

  • SQL Server log backups "stalling"

    - by MattK
    I have inherited a box running SQL Server 2008 and Windows 2003, and have had a few events where largeish (35GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log-ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35GB (on a DB with about 17GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size completes in about 20 minutes. This server has a single RAID-1 volume, so the source database files and destination backup files are on the same volume. However, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?
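
    The next time it stalls, a query along these lines (a sketch using standard SQL Server 2008 DMVs) shows whether the backup SPID is merely waiting on BACKUPIO or actually blocked by another session:

        -- Progress, wait state and any blocker for the running backup
        SELECT r.session_id, r.command, r.percent_complete,
               r.wait_type, r.wait_time, r.blocking_session_id
        FROM sys.dm_exec_requests AS r
        WHERE r.command LIKE 'BACKUP%';

    A non-zero blocking_session_id would point at the session holding up the backup; a pure BACKUPIO wait with no blocker points at the shared RAID-1 volume instead.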

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical and they're all using RAID. To date, I've therefore been backing them up by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

      - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable
      - each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
      - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
      - again due to the size issue, I'm leaving stuff out which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
      - there must be a better way

    So, my question is: how should I be doing this properly? The requirements are:

      - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
      - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
      - should continue to scale with a couple more boxes, slightly more data, etc.
      - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
      - an option to produce some kind of DVD/Blu-ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for - create a tar file once each month, add incrementally to it, rsync the results to the remote box. But others probably have better suggestions.
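
    As one concrete possibility that fits those requirements, a sketch using rdiff-backup (the hostname and paths are hypothetical): it pushes incrementals over SSH, so only the ~1% daily change travels, and it keeps reverse diffs so older states remain restorable:

        # nightly cron job on each box: mirror /etc (and anything else) offsite
        rdiff-backup /etc backup@offsite.example.com::/backups/box1/etc
        # prune increments older than six months
        rdiff-backup --remove-older-than 6M backup@offsite.example.com::/backups/box1/etc

    rsnapshot, duplicity and plain rsync --link-dest are in the same family and would equally remove the daily-tarball size problem.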

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16GB of RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4GB of RAM but similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that on the new DB server, mysqld.exe consumes 95-100% of the CPU almost all the time, and this is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting up config files for small machines, so can anyone help me get the my.ini file right for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup. Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.

        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.

        [mysqld]

        # The TCP/IP port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # The query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: in case your tables change very often, or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a
        # disk-based table. This limitation is for a single table; there can
        # be many of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creation needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the key buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE and ALTER TABLE statements, as well as in LOAD DATA
        # INFILE into an empty table. It is allocated per thread, so be careful
        # with large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this, the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine's physical memory size. Do not set
        # it too large, though, because competition for physical memory may
        # cause paging in the operating system. Note that on 32-bit systems you
        # might be limited to 2-3.5G of user-level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too-high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
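
    Not a drop-in fix, but a hedged sketch of the few values most likely to matter on a 16GB, MyISAM-only, dedicated box; the query cache in particular is a classic CPU sink at this size, since every write invalidates entries and pruning a 168M cache is expensive:

        # experiment with a much smaller (or disabled) query cache
        query_cache_size=32M
        query_cache_limit=1M
        # MyISAM caches only indexes in the key buffer; with 16GB dedicated to
        # MySQL there may be room for more than 3GB, if the indexes warrant it
        # (and if this is a 64-bit mysqld -- a 32-bit build cannot address it)
        key_buffer_size=4G

    Watching SHOW GLOBAL STATUS (Qcache_lowmem_prunes, and Key_reads vs Key_read_requests) before and after each change would show whether either is actually the bottleneck.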

    Read the article

  • Should I be backing up a webapp's data to another host continuously?

    - by user196289
    I have a webapp in development. I need to plan for what happens if the host goes down. I will lose some very recent session state (which I can live with), and everything else should be persistently stored in the database. If I am starting up again after an outage, can I expect a good host to reconstruct the database to within minutes of where I was? Or seconds? Or should I build in a background process to continually mirror the database elsewhere? What is normal/sensible? Obviously a good host will have RAID and other redundancy, so the likelihood of total loss should be low, and if they have periodic backups I should lose only very recent stuff - but this is presumably designed with almost-static web content in mind, and my site is transactional, with new data being filed continuously (and a customer expectation that I don't ever lose it). Any suggestions/advice? Are there off-the-shelf frameworks for doing this? (I'm primarily working in Java.) And should I just plan to save the data, or should I plan to have an alternative usable host implementation ready to launch in case the host doesn't come back up in a suitable timeframe?

    Read the article

  • BIOS upgrade lowers CPU temperature

    - by N.N.
    Setup: I've got a system with an Asus P8Z68-V PRO motherboard and an Intel Core i7-2600K CPU running at stock speed (no overclocking), which I cool with a Noctua NH-U12P. On the heatsink I've got the two included fans connected via the included Low-Noise Adapters (L.N.A.), 1100 RPM, 16.9 dB(A). In the BIOS settings I've set the CPU and chassis fan profiles to silent.

    Issue: Yesterday I upgraded from BIOS version 0501 to 0606. After the upgrade I checked the temperatures in the BIOS monitor and was surprised to see that the CPU temperature was now ~30°C. Before the upgrade, the CPU temperature was ~50°C with the same BIOS settings (see below for details on temperatures). How can this be? It seems a bit odd that a BIOS upgrade can lower the CPU temperature by 20°C, and it also seems odd that the CPU temperature is lower than the chassis temperature.

    Temperatures: When I've checked temperatures, the room temperature has been ~23°C. I haven't changed the placement of the computer, nor the hardware or cooling setup, between BIOS versions.

    BIOS version 0501 (BIOS monitor): CPU ~50°C, chassis ~33°C. I haven't got any temperature measurements from lm-sensors or the like for version 0501, because I only discovered the issue after upgrading to version 0606, and the BIOS updater utility won't let me downgrade to version 0501 (it says "outdated image" when I try to load version 0501).

    BIOS version 0606 (BIOS monitor): CPU ~30°C, chassis ~33°C.

    lm-sensors in Ubuntu 11.04 Desktop 64-bit (sudo sensors after an uptime of 4 h 52 min and a load average of 0.22, 0.18, 0.15):

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0: +32.0°C (high = +80.0°C, crit = +98.0°C)
        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1: +35.0°C (high = +80.0°C, crit = +98.0°C)
        coretemp-isa-0002
        Adapter: ISA adapter
        Core 2: +29.0°C (high = +80.0°C, crit = +98.0°C)
        coretemp-isa-0003
        Adapter: ISA adapter
        Core 3: +36.0°C (high = +80.0°C, crit = +98.0°C)

    The BIOS monitor temperatures were checked directly after the lm-sensors temperatures were checked.

    BIOS versions 0706, 0801, 1101 and 3203: I get the same kind of temperatures, both in the BIOS monitor and with lm-sensors, in BIOS versions 0706, 0801, 1101 and 3203 as in 0606.

    Information from Asus: The 0606 changelog mentions nothing explicitly about CPU temperature (but item 3, as indicated by sidran32, might affect temperatures):

        P8Z68-V PRO 0606 BIOS with IRST 10.6.0.1002
        1. Enable the support of Intel Rapid Storage Technology version 10.6.0.1002 Release
        2. Improve DRAM compatibility
        3. Improve system stability
        4. Improve compatibility with some RAID card models
        5. Increase IGD share memory size to 512MB

    However, the following FAQ might give a hint: "I find that the CPU temperature reading in BIOS is about 10~20 degrees centigrade hotter than the reading in OS. Is it normal? Solution: That is normal, as the BIOS does not send the idle command to the CPU, making most of the power-saving features useless. You should get a similar reading if you disable EIST/C1E/CPU C3 Report/CPU C6 Report in BIOS."

    Read the article

  • Partitioning recommendations for a Proxmox VM Server (OpenVZ)

    - by luison
    We are new to virtualization and we are planning to turn our online server into a virtualized one, mainly for maintenance, backup and recovery improvements. Initially we would only have one real virtual system with load, plus 1-3 copies for testing and recovery, and maybe a small centralized syslog virtual machine. We would like, if possible, the host machine to include iptables plus rsync to back up to other machines, and some other global security systems. Due to this, and given the offerings of our hosting supplier, we are mainly considering Proxmox for its simplicity (we like the idea of its web admin panel), and because, as I understand it, the container approach of OpenVZ systems may fit well resource-wise with our setup. The base system comes with Debian, so we can personalise it to our requirements. A default Proxmox installation sets up an LVM partition for the VMs. Our doubts are about the best partition structure for this, considering that:

      - we would like to have a mirror of the root partition we could boot from if required (our provider supports booting the system from another partition via the control panel)
      - we would ideally like to have a partition that could be shared among the VM systems. We still don't know if this is possible directly with OpenVZ containers; otherwise we are considering sharing it via NFS on the host machine (see the mount-hook sketch below for an alternative)
      - we want to use the backup system available in the Proxmox host administrator to schedule VM backups and then rsync them to another machine

    With this, based on a Linux RAID of approx. 750GB, we are considering something like:

      - ext3_1 / - (20GB)
      - ext3_2 /bak_root - (20GB), mostly unmounted, root partition sync
      - LVM_1 /var/lib/vz - (390GB), partition for virtual images
      - LVM_2 /shared_data - (30GB)
      - LVM_3 /backups - (300GB), where all backups would be allocated

    Our initial tests with Proxmox seem to have issues with snapshot backups like this, perhaps caused by the fact that they cannot be made to another LVM partition (error: command 'lvcreate --size 1024M --snapshot --name vzsnap-ns204084.XXX.net-0 /dev/pve/LV' failed with exit code 5), in which case we might have to use a standard ext3 partition (but we're unsure if we can do this given the 4-primary-partition limit). Does this make more or less sense? Would it be mad to, for example, write the VMs' /var/log to an NFS-mounted partition (on the host system)? Are there any other, easier ways to mount host system partitions (or folders) into the VMs?
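
    For the shared partition, OpenVZ's per-container mount hook is one documented way to bind a host directory into a container without NFS; a sketch, where the container ID 101 and the /shared_data paths are hypothetical:

        #!/bin/bash
        # /etc/vz/conf/101.mount -- runs on the host each time CT 101 starts
        . /etc/vz/vz.conf
        . "${VE_CONFFILE}"
        mount -n --bind /shared_data "${VE_ROOT}/shared_data"

    The same directory can be bind-mounted into several containers, which would cover the "shared among the VM systems" requirement as long as the applications tolerate shared POSIX semantics.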

    Read the article

  • Best available technology for layered disk cache in Linux

    - by SpliFF
    I've just bought a 6-core Phenom with 16GB of RAM. I use it primarily for compiling and video encoding (and occasional web/db). I'm finding all activities get disk-bound, and I just can't keep all 6 cores fed. I'm buying an SSD RAID to sit between the HDD and tmpfs. I want to set up a "layered" filesystem where reads are cached on tmpfs but writes safely go through to the SSD. I then want files (or blocks) that haven't been read lately on the SSD to be written back to an HDD using a compressed FS or block layer. So basically:

    Reads:
      - check tmpfs
      - check SSD
      - check HDD

    Writes:
      - straight to SSD (for safety), then tmpfs (for speed)

    And periodically, or when space gets low:
      - move the least frequently accessed files down one layer

    I've seen a few projects of interest. CacheFS, cachefsd and bcache seem pretty close, but I'm having trouble determining which are practical: bcache seems a little risky (early adoption), and cachefs seems tied to specific network filesystems. There are "union" projects, unionfs and aufs, that let you mount filesystems over each other (a USB device over a DVD, usually), but both are distributed as a patch, and I get the impression this sort of "transparent" mounting was going to become a kernel feature rather than an FS. I know the kernel has a built-in disk cache, but it doesn't seem to work well with compiling: I see a 20x speed improvement when I move my source files to tmpfs. I think it's because the standard buffers are dedicated to a specific process, and compiling creates and destroys thousands of processes during a build (just guessing there). It looks like I really want those files precached. I've read that tmpfs can use virtual memory. In that case, is it practical to create a giant tmpfs with swap on the SSD? I don't need to boot off the resulting layered filesystem; I can load grub, the kernel and the initrd from elsewhere if needed. So that's the background. The question has several components, I guess:

      - recommended FS and/or block layer for the SSD and compressed HDD
      - recommended mkfs parameters (block size, options etc.)
      - recommended cache/mount technology to bind the layers transparently (for the tmpfs-with-swap part, see the sketch after this list)
      - required mount parameters
      - required kernel options/patches, etc.
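
    On the tmpfs-with-swap question specifically: it is practical, since tmpfs pages can be evicted to swap under memory pressure. A sketch (the device name and size are hypothetical):

        mkswap /dev/sdb1                              # SSD partition as swap
        swapon -p 10 /dev/sdb1                        # higher priority than any HDD swap
        mount -t tmpfs -o size=32G tmpfs /mnt/build   # may exceed RAM; overflow goes to swap

    The catch is that the eviction choice is the kernel's, not a per-file LRU you control, so this only approximates the layered policy described above.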

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge-latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see the link below) and our own application logging. We have tried multiple things to solve this, gleaned from a few websites, e.g.:

      - making the first part of file names unique to aid 8.3 name generation
      - writing files to multiple directories
      - changing Intel Disk Write Caching
      - Windows File/Printer Sharing: Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications (System > Advanced > Performance > Advanced)
      - NtfsDisableLastAccessUpdate - use "fsutil behavior set disablelastaccess 1"
      - disabling 8.3 name generation - use "fsutil behavior set disable8dot3 1" + restart
      - enabling a large file system cache
      - disabling paging of the kernel code
      - IO Page Lock Limit
      - turning off (or on) the Indexing Service

    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason, and a solution (programmatic or not). We can reproduce the problem using IOMeter and a simple setup:

      1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
      2. Select the worker thread, put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (NOTE: you must have at least 1GB free space; sector size is 512 bytes).
      3. Create a new access specification and add it to the worker thread: Transfer Request Size = 22kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
      4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
      5. Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
      6. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine.

    Our machine isn't exactly slow (Xeon 8-core rack server with 4GB and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAM disk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard. Right-click and select 'Open Link in New Tab': ProcMon Result

    Read the article

  • Old operational master still thinks it is the "one"

    - by Doug
    Hi there, I have a domain with 3 AD servers. For now I'll just call them:

      - AD01 (Win 2008, GC, operations master)
      - AD02 (Win 2008, GC)
      - AD03 (Win 2003, GC)

    A couple of months ago there were some hardware issues with AD01, so the operations master, PDC and Infrastructure Master roles were moved to AD02. All machines were on while this was happening:

      - AD01 (Win 2008, GC)
      - AD02 (Win 2008, GC, operations master)
      - AD03 (Win 2003, GC)

    AD01 was then shut down for a month. Upon starting this machine up with replaced hardware (NIC and RAID card), I now have a weird problem:

      - AD01 thinks it is still the operations master, in AD on the local box
      - AD02 & AD03 think AD02 is the operations master, in AD on both boxes

    When running "dcdiag /test:advertising" on AD01 I get a number of issues:

        Doing primary tests
        Testing server: Default-First-Site-Name\AD01
        Starting test: Advertising
        Warning: DsGetDcName returned information for \\ad02.domain.local,
        when we were trying to reach AD01.
        SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
        ......................... AD01 failed test Advertising
        Running partition tests on : ForestDnsZones
        Running partition tests on : DomainDnsZones
        Running partition tests on : Schema
        Running partition tests on : Configuration
        Running partition tests on : domain
        Running enterprise tests on : domain.local

    When running "dcdiag" on AD01 I get the following errors (excerpt of the final output):

        Testing server: Default-First-Site-Name\AD01
        Starting test: Advertising
        Warning: DsGetDcName returned information for \\ad02.domain.local,
        when we were trying to reach AD01.
        SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
        ......................... AD01 failed test Advertising
        Starting test: FrsEvent
        There are warning or error events within the last 24 hours after the
        SYSVOL has been shared. Failing SYSVOL replication problems may cause
        Group Policy problems.
        Starting test: NCSecDesc
        Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have
        Replicating Directory Changes In Filtered Set access rights for the
        naming context: DC=ForestDnsZones,DC=domain,DC=local
        Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have
        Replicating Directory Changes In Filtered Set access rights for the
        naming context: DC=DomainDnsZones,DC=domain,DC=local
        Starting test: Replications
        [Replications Check,Replications Check] Inbound replication is disabled.
        To correct, run "repadmin /options AD01 -DISABLE_INBOUND_REPL"
        [Replications Check,AD01] Outbound replication is disabled.
        To correct, run "repadmin /options AD01 -DISABLE_OUTBOUND_REPL"

    So the problem appears to be that when I moved the operations master, AD01 never got the memo, and now that it's started up, all the other AD servers don't think it's the boss anymore when it tries to replicate etc. So I really need to manually update AD01 so that it knows who the operations master, Infrastructure Master and PDC are - but I'm not having any luck. I've been googling for nearly a day and all solutions lead to "the cake is a lie". Your ninja skills will be greatly appreciated.
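
    Before anything drastic, a sketch of the usual first steps (run on AD01, once connectivity to the others is confirmed): check who actually holds the roles, then clear the replication-disabled flags that dcdiag is reporting, so AD01 can learn about the transfer:

        netdom query fsmo
        repadmin /options AD01 -DISABLE_INBOUND_REPL
        repadmin /options AD01 -DISABLE_OUTBOUND_REPL
        repadmin /replsum

    If AD01 still claims the roles after inbound replication succeeds, the standard remedy is a transfer (or seize) via ntdsutil - but only once replication health is restored.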

    Read the article

  • How can I troubleshoot a "Hardware Malfunction" blue screen?

    - by AaronSieb
    My computer has suddenly started crashing to a blue screen with the following text:

        hardware malfunction
        call your hardware vendor for support
        *the system has halted*

    The crash occurs randomly during normal use. I have thus far always been able to reproduce it by transferring the contents of a large folder... but I'm not sure if this is caused by the file transfer, or simply because the transfer takes long enough for something else to trigger it.

    A bit about my hardware: I have a dual-core Intel CPU and an Asus motherboard. The video card is by nVidia and connects via PCIe. My hard drives are in pairs and connect via SATA to a RAID controller on the motherboard; they are configured as RAID0.

    What I've tried so far:

      - There is nothing in the Windows Event Log.
      - WhoCrashed was unable to find any crash records.
      - ScanDisk runs to completion (it launches prior to Windows load) and reports no errors.
      - MemTest reports no errors (to 200% coverage).
      - System temperatures are in the range of 40 to 50 degrees Celsius, with video card temperatures in the range of 60 to 80 degrees Celsius.
      - I have stripped the system down to a minimal configuration (hard drive, video card, one memory module, motherboard, CPU, power supply). The problem still occurs.

    However, this has allowed me to rule out a few components:

      - It is not the video card, because the problem still occurred after replacing the video card with another one I had on hand.
      - It is not the hard drive or anything software-related, because the problem occurred after a fresh installation of Windows on a replacement hard drive.
      - It is not the hard drive cables, because I replaced those with new ones and still had the problem.
      - It is not the power supply, because the problem still occurred after replacing the power supply with another one I had on hand.
      - It is probably not the memory, because I've tried three different memory modules in three different memory slots and was still able to replicate the issue.

    Is there anything I can do to confirm what's causing the issue? At the moment it seems as though it must be either the motherboard or the CPU, but those are both difficult components to replace... In addition, both components are relatively new (two to three years old). I will gladly edit in any additional information I can get my hands on, and/or focus the question as I find more details...

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. One, in my office, has an MD1220 attached. The other, in the datacenter of my hosting vendor, has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

      - Seq. writes on MD1220 in my office: 1.1 GB/s (bonnie++), 1.3 GB/s (dd)
      - Seq. writes on MD3200 at my vendor's: 240 MB/s (bonnie++), 310 MB/s (dd)

    Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my well-performing environment is cheaper than the badly performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully somebody more familiar with DAS performance can look at it and tell me if I'm missing something here and my expectations are too high. To summarize, the question is: is it reasonable to expect about 100 MB/s of sequential write throughput per couple of drives in RAID 10 on an MD3200? Is there any trick to enable such performance on an MD3200 with dual controllers, as opposed to a simple MD1220 with a single H800 adapter?

    More details about the configurations. The good one, in my office:

      - Dell R710, 2x CPU X5650 @ 2.67GHz, 12 cores, 96GB DDR3
      - OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64
      - 20x300GB 2.5" SAS 10K in a single RAID 10, 1MB chunk size, on MD1220 + Dell H800 I/O controller with 1GB cache in the host

    The not-so-good one, at my vendor's:

      - Dell R710, 2x CPU L5520 @ 2.27GHz, 8 cores, 144GB DDR3
      - OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64
      - 20x146GB 2.5" SAS 15K in a single RAID 10, 512KB chunk size, Dell MD3200, 2 I/O controllers in the array with 1GB cache each

    Additional information: I've also run the same tests on the same vendor's host, but with the storage being two RAIDs of 14x146GB 15K RPM drives in RAID 10, striped together at the OS level, on MD3000+MD1000. The performance was about 25% worse than on the MD3200, despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x146GB 15K RPM drives in RAID 1, PERC 6/i) I got about 128 MB/s seq. writes. Just two internal drives gave me about half of the 20 drives' throughput on the MD3200. The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput. Thank you for looking into it. Regards, Igor
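
    For anyone wanting to compare numbers, this is the shape of dd probe being described (the path and size are hypothetical; oflag=direct bypasses the page cache so the array, not RAM, is what gets measured):

        dd if=/dev/zero of=/mnt/md3200/testfile bs=1M count=16384 oflag=direct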

    Read the article

  • All the Gear and No Idea: Suggestions for re-designing my home/office/entertainment network

    - by 5arx
    Help/advice/suggestions please: I have a load of kit that I love but which currently operates in a disconnected, sometimes counter-productive way. Because I never really had a masterplan, I just added these things one after another and connected them up in ad hoc ways. Since I bought my MacBook I've found I spend much less time on the MacPro that was until then my main machine. Perversely, as my job involves writing .NET software, I spend a lot of Mac time actually inside a Windows 7 VM. I stream media from the HP box to the PS3 and thus to the TV, but it's not without its limitations/annoyances. We listen to each other's iTunes libraries, but the music files are all over the place and it would be good to know they were all safely in one location (and fully backed up). I need to come up with a strategy that will allow me to use all the kit for work, play (recording live music, making tunes, iMovie work), pushing/streaming media to the TV, and sharing files with my other half (she uses a Windows laptop and her iPod touch). Ideally I'd like to be able to work on any of the machines and have a shared home drive that was visible to all machines, so all my current files were synced up wherever I was. It would be great if I could access everything securely and quickly over the web. I'd also like to be able to set up a background backup process.

    The kit list thus far:

      - Apple MacPro, 8GB / 3x250GB RAID0 + 1TB
      - Apple MacBook Pro 13", 8GB/250GB - I spend a lot of my work time in a Windows 7 VM on this
      - Crappy Acer laptop (for the children's use - iPlayer, watching movies/TV files)
      - HP ProLiant server, 4GB / 80GB+160GB+300GB
      - Sun Ultra 10, 2x80GB (old, but in top-notch condition)
      - PS3, 160GB
      - iPod Classic
      - 2 x 8GB iPod Touch

    Observations:

      - Part of the problem is our dual use of Windows and OS X - we can't go for a pure NT-style roaming profile.
      - Because the server is also used for hosting test/beta applications and a SQL Server DB, it can't be dedicated to file serving.
      - The two Macs really could do with sharing a roaming profile or similar.
      - I'd love to be able to do something useful with the Ultra 10. My other half has been trying to throw it away for over five years now and regularly asks what function it serves in my study :-(
      - I've got no shortage of 500GB external USB hard drives.
      - iMovie files are very large and ideally would be processed on a RAID system.
      - Apple's Time Machine isn't so great.

    If anyone could suggest all or part of a setup that would fulfil some of my requirements, I'd be very grateful. I am willing to consider purchasing one or two more bits of kit (an Apple TV and a Squeezebox have been mooted by friends) if they will help make efficiencies rather than add to the chaos and confusion. Thanks for looking.

    Read the article

  • Low framerate on background apps

    - by user1698923
    My problem is that when a game is running in the foreground in full-screen mode, any applications on my second monitor (such as YouTube videos or other video playback; it's not app-specific) drop their frame rate to about 2-3 FPS. It seems like some sort of power management option that I can't track down. As far as I can tell, it's not due to the GPU not being able to keep up. For instance, my PC can play League of Legends at about 280 FPS when the frame rate is uncapped. If I cap it at 60 FPS using the in-game option, that has no effect on the performance of the background app.

    Summary:

      - Operating System: Windows 8 Pro 64-bit (desktop)
      - CPU: Intel Core i7 3820 @ 3.60GHz, 42 °C, Sandy Bridge-E, 32nm technology
      - RAM: 12.0GB Triple-Channel DDR3 @ 533MHz (7-7-7-20)
      - Motherboard: Gigabyte Technology Co., Ltd. X79-UD3 (SOCKET 0), 37 °C
      - Graphics: 2 x DELL U2713HM (2560x1440@59Hz), 1280MB NVIDIA GeForce GTX 570 (Gigabyte), 58 °C
      - Hard drives: 212GB Volume0 (RAID); 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA), 36 °C; 1863GB Western Digital WDC WD20EARS-00MVWB0 (SATA), 34 °C
      - Optical drives: no optical disk drives detected
      - Audio: ASUS Xonar Essence STX Audio Device

    Monitors 1 and 2: both DELL U2713HM on the NVIDIA GeForce GTX 570; current resolution 2560x1440, work resolution 2560x1400, 32 bits per pixel, 59 Hz; multiple displays extended, both enabled (monitor 1 secondary on \\.\DISPLAY4\Monitor0, monitor 2 primary on \\.\DISPLAY5\Monitor0).

    NVIDIA GeForce GTX 570 details:

      - GPU GF110, Device ID 10DE-1086, Revision A2, Subvendor Gigabyte (1458), GeForce 500 series
      - Current performance level: Level 3; current GPU clock 845 MHz, memory clock 1900 MHz, shader clock 1690 MHz; voltage 0.988 V
      - 40nm, die size 520 mm², release date Dec 07, 2010, DirectX support 11.0, OpenGL support 5.0
      - Bus interface PCI Express x16; temperature 57 °C; driver version 9.18.13.2018; BIOS version 70.10.55.00.01
      - 40 ROPs, 512 unified shaders, 1280 MB GDDR5, bus width 64x5 (320 bit), 16x anisotropic filtering, moderate noise level, max power draw 219 W
      - Performance levels (3): Level 1 "Default" - GPU 50 MHz, memory 135 MHz, shader 101 MHz; Level 2 "2D Desktop" - GPU 405 MHz, memory 324 MHz, shader 810 MHz; Level 3 "3D Applications" - GPU 845 MHz, memory 1900 MHz, shader 1690 MHz

    Things I've tried:
    1) Updating the graphics driver
    2) Setting the Windows power mode to High Performance
    3) Resetting the Nvidia global performance settings to default

    Read the article

  • In search of a network file system with extended caching to speed up file access

    - by Brecht Machiels
    I'm running a small home server that stores my documents. The disks in this server are in a RAID 1 configuration (using Linux md), and it's also periodically backed up to an external hard drive to make sure I don't lose them. However, I'm always accessing the files from other computers on the home network using an SMB share, and this results in a considerable speed penalty (especially when connected over WLAN). This is quite annoying when editing large files, such as digital camera RAWs, for example. I've been looking for a solution to this problem. It would have to offer some kind of local caching to speed up file access. The client would preferably not keep a copy of all the data on the server, as it consists of a very large collection of photographs, most of which I will not access frequently. Instead, it should only cache the accessed files and sync the changes back in the background. Ideally, it would also do some smart read-ahead (cache the files that are in the same directory as the currently opened file, for example), but I suppose that's asking a bit much. Synchronization should be automatic (on file change). Conflicting file changes (at the same time on different clients) are unlikely to happen in my use case, but I would prefer that they be handled properly (notification to the user). I've come across the following options so far:

      - Something similar to Dropbox. iFolder seems to be the only thing that comes close, but its reputation (stability) and requirements put me off.
      - A distributed file system such as OpenAFS. I'm not sure this will speed up file access, and it is probably overkill for what I need.
      - Maybe NFS or even Samba offer these possibilities. I read a bit about Windows' Offline Files, but its operation seems limited (at least on Windows XP).

    As this is just for personal use, I'm not willing to spend a lot of money. A free solution would be preferred. Also, the server needs to run on Linux, and I need a client for at least Windows.
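
    Not the transparent caching layer being asked for, but as a fallback sketch: a periodic two-way sync with unison (the host and paths are hypothetical) gives local-speed access to the working set, at the cost of on-demand rather than on-change synchronization:

        unison /home/me/photos ssh://homeserver//srv/photos -batch -prefer newer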

    Read the article

  • Windows 7 x64 installation freezes on new PC build

    - by jhsowter
    Symptoms: While attempting to install Windows 7 (64-bit) on my new PC build, it freezes, usually at the point where it is expanding the Windows image, but it has frozen as early as accepting the licence agreement, and as late as just after the first restart. My specs are at the bottom of the post. So far I have tried the following to identify the problem, in rough chronological order:

      1. Tried different hard drives with different SATA cables. Same symptoms. I later used a different computer to install Windows on the same hard drive with no problems.
      2. Tried the RAM in different slots, and tried one RAM stick instead of two. Same symptoms.
      3. Updated the BIOS to 1.60. Same symptoms.
      4. Ran Memtest86+ with the RAM in dual channel. It passed about 6 times when I left it running overnight.
      5. Used USB to install Windows instead of an optical drive. Same symptoms.
      6. Changed the SATA configuration from AHCI to IDE. Same symptoms.
      7. Tried various different SATA ports. Same symptoms.
      8. Updated the BIOS to 1.70. Same symptoms.
      9. Saw that the RAM did not list my motherboard as supported, even though the motherboard did list the RAM as supported, so I tried some Kingston DDR3 1333MHz RAM instead. Same symptoms.

    Other (possibly) pertinent information: My CPU idles at about 30 °C. I can't tell what it gets to when it's working. When I installed the CPU, the lever which locks the CPU in place took quite a bit of force to pull down. Now, I didn't just yank it down without rechecking that the CPU was seated properly (about 5 times), but it does seem unusual, and I wonder whether a badly seated CPU would produce these symptoms. I am out of ideas and don't know how to diagnose any further. I suppose either the motherboard or the CPU must be the problem. I am on the verge of taking it to a specialist.

    The question: How should I proceed from here? Is there anything I can rule out as being the source of the symptoms I am seeing?

    My specs:

      - CPU: Intel i5 3570K
      - RAM: G.Skill RipjawsX 8GB kit
      - HDD: single 3.5" 500GB SATA or 160GB 2.5" SATA (at different times, and sometimes together, but no RAID or anything)
      - MB: ASRock Extreme4 Z77
      - PSU: Silverstone Strider Plus 600W ST60F-P

    Read the article

  • Optimal setup for ASUS P6X58D Premium BIOS (no OC)?

    - by rumtscho
    Normally I'd trust the mainboard manufacturer to choose the best options as defaults. But I've had trouble with this board: even with Quick Boot enabled, it booted twice as slowly as a Pentium 4 Celeron. Then I changed lots of options at once (most of them weren't explained in the manual, just mentioned with a single sentence), and the boot time is now only marginally worse than the Pentium 4 (54 s against 46 s from button to the password-entry screen). Now I don't know if I have turned something off which should have stayed on. I suspect I won't even be able to boot from a CD now, because even though it is present in the boot sequence, I removed a timeout that I think it needs to check whether there is a disc in the drive. The second reason is that I don't have an internal HDD, only an SSD. I forgot my sources (blush), but I am under the impression that today's BIOS and OS options are geared toward booting from an HDD, which is often less than optimal when one boots from an SSD, especially where there are functions which cause avoidable write cycles, as an SSD wears out after too many write cycles. Most of the things I've read concern the OS, but there are some BIOS-relevant options too. I am especially confused about the disk mode: the board supports AHCI, IDE simulation and RAID, but of the different articles I've read, there is a proponent for each and no clear argument for any. So can someone tell me which options are important in general, and which are important for an SSD-only system? I don't want to overclock the CPU, so you don't have to say anything about that (yes, I know the board is meant for OC :)). I am thinking of overclocking the RAM, since they sold me 1600 MHz heatsinked modules which are running at 1066 now, but I'm not sure about that yet. The rest of the system: i7-930, Intel X25-M G2, 6 GB RAM, GTS 250, some no-name Blu-ray ROM, 2 external HDDs over USB 2.0, and lots of other USB-connected hardware (12 devices, I think); no SATA 3 drives (will disabling the controller have an impact on performance?), no LAN, only WiFi. Lucid Lynx 64-bit, no dual boot, no virtual installations. The main uses of the system are: managing and playing/showing all the media stored on the external disks, lots of image manipulation, some video editing, a bit of (non-demanding) gaming, and rarely development. Lots of Internet surfing too, but this shouldn't have much impact on performance.

    Read the article

  • Log - Server kernel: INFO: task httpd:000000 blocked for more than 120 seconds

    - by valter
    Almost every day my server crashes due to high server load, and even restarting Apache or MySQL can't solve the problem. I need to reboot the server to fix it, or it crashes again due to the high load. The system log records something like this when it crashes:

        Aug 11 18:33:53 server kernel: INFO: task httpd:20008 blocked for more than 120 seconds.
        Aug 11 18:33:53 server kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Aug 11 18:33:53 server kernel: httpd D ffffffff801538ac 0 20008 5816 20066 19809 (NOTLB)
        Aug 11 18:33:53 server kernel: ffff81025a299dc8 0000000000000082 ffff81033b4c0740 ffffffff80009a14
        Aug 11 18:33:53 server kernel: ffff8101063f8d80 0000000000000009 ffff8100b758f7e0 ffff8101c57187e0
        Aug 11 18:33:53 server kernel: 00009436d4100b6c 000000000001d50f ffff8100b758f9c8 000000083b531588
        Aug 11 18:33:53 server kernel: Call Trace:
        Aug 11 18:33:53 server kernel: [<ffffffff80009a14>] __link_path_walk+0x173/0xfb9
        Aug 11 18:33:53 server kernel: [<ffffffff8002cc16>] mntput_no_expire+0x19/0x89
        Aug 11 18:33:53 server kernel: [<ffffffff80063c4f>] __mutex_lock_slowpath+0x60/0x9b
        Aug 11 18:33:53 server kernel: [<ffffffff80023908>] __path_lookup_intent_open+0x56/0x97
        Aug 11 18:33:53 server kernel: [<ffffffff80063c99>] .text.lock.mutex+0xf/0x14
        Aug 11 18:33:53 server kernel: [<ffffffff8001b21f>] open_namei+0xea/0x712
        Aug 11 18:33:54 server kernel: [<ffffffff8002768a>] do_filp_open+0x1c/0x38
        Aug 11 18:33:54 server kernel: Firewall: *UDP_IN Blocked* IN=eth1 OUT= MAC=ff:ff:ff:ff:ff:ff:00:30:48:9e:6e:99:08:00 SRC=208.43.135.158 DST=255.255.255.255 LEN=151 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=38354 DPT=6112 LEN=131
        Aug 11 18:33:54 server kernel: [<ffffffff8001a061>] do_sys_open+0x44/0xbe
        Aug 11 18:33:54 server kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0

    I googled a lot trying to find a solution, but it looks like the solution is to update the kernel or the disk driver, things that I don't know how to do. At http://bugs.centos.org/view.php?id=4515 a lot of people report similar problems, except for the fact that theirs are not related to httpd like mine. According to one member, one solution would be to add "elevator=noop" to /etc/grub.conf, as in this example:

        title CentOS (2.6.18-238.12.1.el5xen)
                root (hd0,0)
                kernel /vmlinuz-2.6.18-238.12.1.el5xen ro root=/dev/VolGroup00/LogVol00 elevator=noop
                initrd /initrd-2.6.18-238.12.1.el5xen.img

    Would this really solve the problem? My disks are working in RAID. Can this cause some problem for my server? Is there any other solution?
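
    For what it's worth, the I/O elevator can also be switched per device at runtime, which makes it easy to test noop before committing it to grub.conf (sda is hypothetical; repeat per disk):

        cat /sys/block/sda/queue/scheduler           # e.g.: noop anticipatory deadline [cfq]
        echo noop > /sys/block/sda/queue/scheduler   # takes effect immediately, lost on reboot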

    Read the article

  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-Port PCI-X to SATA RAID controller with 12 7200 RPM 1.5 TB disks, and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet controller. uname -a output: 2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux. The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state - usually around 350; at this very moment it's at 590, and the server is almost unusable and stuck at 230 Mbit/s. If I run top and hit 1 to see per-core CPU usage, all 4 cores are at around 99% iowait; if I run iotop, the nginx workers are the only processes producing any read load, currently at around 25 MB/s. I have each of the workers bound to its own core. Initially I figured the disks were just bugged, but I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, which you can see the result of here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa. Additionally, when the number of connections is low, I have no problem getting good speed: if I wget over the local network it easily hits 60 MB/s. Right now I tried putting a file in /dev/shm, symlinked a file from the public dir to it, and used wget over the local network - and only got 50 KB/s. Also, if I try to cp /dev/shm/test /root/test, it quickly copies around 740 MB and then slows down heavily, again with iotop reporting 99% iowait. I'm not really sure how to go about figuring out what the problems are. It could be a natural disk limitation, but then the file from /dev/shm ought to transfer quickly, so there seems to be a network limit - but that's fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required, let me know and I'll try to get it. Thanks.
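
    One more data point worth collecting during the bad periods, as a sketch: per-device utilisation from sysstat, which separates "the whole array is saturated" from "one disk or the controller is the bottleneck":

        iostat -xk 5    # watch %util, await and avgqu-sz per disk over a few intervals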

    Read the article

  • Win 8: Adding a boot volume to an MBR dynamic disk [NOT about changing to basic disks]

    - by Stilez
    (This is NOT aiming to convert to basic disk. In this question, the disk stays dynamic but becomes bootable.) There doesn't seem to be a clear, well stated answer I can find for the question "What are the criteria for Windows 8 to successfully boot from an MBR dynamic disk?", or "How do I fix a dynamic MBR partition that's failing to boot?" I've tried to educate myself but can't find the crucial information to clear it all up. My existing HDD/SSD setup:

        DISK 0 ~ 60GB SSD/MBR/basic: (350MB recovery)(60GB Windows 8 bootable)
        DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB unallocated)(410GB mirrored data)
        DISK 2 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB unallocated)(410GB mirrored data)
        DISKS 3, 4, 5: (ignored for simplicity: 2xHDD RAID1 + caching SSD)

    I'm heavy duty on data crunching and virtualisation; I just maxxed out 32GB RAM @ 2133 and moved to a 4960X + 64GB. Disk 0 is a pure system disk of little value, and virtualisation runs off mirrored SSDs (Samsung 840 Pro 512 x 2) for double read speed, so snapshots complete in reasonable time. I'm using 4 SATA3 ports and the board only has two decent Intel ports (the onboard Marvell ones are poorer quality). I'm wary of choosing between LSI, HighPoint and other 3rd party controllers as I'm unfamiliar with the maze of decent RAID cards (that's a whole other issue!). I want to cut down my SSD needs by moving the boot volume and caching volume to the 840 Pros, giving a setup with 2 fewer SSDs:

        DISK 0 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB boot)(410GB mirrored data)
        DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(30GB cache for the ICH10R mirror)(30GB temp)(410GB mirrored data)
        DISKS 2, 3: (2xHDD RAID1)

    Intel's RST allows this, Win 8 allows booting off an MBR/dynamic disk, and the two 60GB SSDs are hardly the fastest SSDs anyway; they'll get repurposed. Moving the caching volume is easy. Moving the boot volume has me stumped. The difficulty is that I'm hitting a wall of knowledge here. I have a UEFI Asus motherboard with a previous traditional MBR/basic boot disk, and I want it to boot from a disk and volume that's MBR/dynamic. The disk copy is physically OK (Partition Wizard Server will copy to dynamic volumes) but then hits a light blue 0xc000000e boot error. No real surprise; I expected some boot fixing, but I had expected Windows to boot-fix it (all drivers exist), or the usual manual fixes to work. Specifically, I don't know enough to know what has to be manually checked and perhaps corrected for the disk to boot (legacy/UEFI/BIOS, odd partitions, boot tables, disk IDs, hidden boot files, oh my!), whether I need to change any of the Secure Boot/UEFI/legacy settings in the BIOS, whether to convert a 512GB SSD to basic and then back to dynamic once it works, or whether the issue is purely OS configuration using "diskpart", "bootsect" and "bootrec" from the Win8 DVD. The old system disk still boots, but I don't know enough to figure out what to fix to make the system boot as I want. The answers probably aren't hard; the real issue is my confusion and missing information. Thanks for helping!
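
    For the 0xc000000e error specifically, the standard BCD repair sequence from the Win8 DVD is worth ruling out before digging into dynamic-disk internals. A sketch of the usual commands (the drive letters as seen from the recovery environment are assumptions, and whether /rebuildbcd will enumerate a Windows installation on a dynamic volume is exactly the open question here):

        REM From the Win8 DVD: Repair your computer > Troubleshoot > Command Prompt
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd
        REM If /rebuildbcd finds no installation, recreate the BCD store directly,
        REM assuming Windows is on C: and the active system partition is S:
        bcdboot C:\Windows /s S:

    Treat this as a diagnostic rather than a guaranteed fix; if bcdboot succeeds but boot still fails, the problem more likely lies in the dynamic-disk metadata itself.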

    Read the article

  • Windows Setup could not configure Windows to run on this computer's hardware

    - by Hello71
    The whole installation goes smoothly up to the point of "Completing installation ...". The monitor changes resolution, after which a standard dialog box pops up saying: Windows Setup could not configure Windows to run on this computer's hardware Then, in a few seconds, the whole machine powers down. Trying to restart produces the message: STOP: c000021a {Fatal System Error} 0x00000000 (0xc0000001 0x00100448) OR it boots into Setup and comes up with the message: Windows Setup encountered an unexpected error... (This is not the actual error, just paraphrasing.) I tried using the OEM restore instead of a regular install, but it fails with the same error (even though it worked before...). General specs:

        HP Pavilion Elite e9262f
        Intel Core i5-750 Processor
        ATI Radeon HD 4650
        Hitachi HDT721010SLA360 ATA Device
        6GB DDR3 RAM
        SuperMulti DVD Burner with LightScribe
        Some built-in Wi-Fi module
        http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01916917

    I've tried disconnecting the wireless card and disabling the built-in Ethernet and FireWire via the BIOS, and replacing the wireless keyboard and mouse with wired USB ones. Didn't work. I've also tried changing the SATA controller settings in the BIOS to RAID, AHCI, and IDE, reinstalling each time I changed. Still not working. I think the reason it shows the Fatal System Error is that it didn't finish installing before it errored out and shut down, so the system is left in an inconsistent state. I've tried 3 different copies (including the OEM restore) of Windows 7 now, and they all fail at the same point, with the same error message. I've tried to install Windows 7 maybe 10 times already, with the exact same error message at the exact same location. Hm... Interestingly, the 32-bit version of Windows 7 works, but the 64-bit version doesn't. Perhaps it was a badly burned disc? Reburning the 64-bit version still comes up with the same error. Here's a picture of the side of the case that clearly says it came with Windows 7 64-bit, along with the model number and CPU. sudo fdisk -l:

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009896f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          13      104391    7  HPFS/NTFS
        /dev/sda2              14       94119   755906445    7  HPFS/NTFS
        /dev/sda3          119922      121602    13492224    7  HPFS/NTFS
        /dev/sda4           94120      119922   207257740+   5  Extended
        /dev/sda5          119527      119922     3170769   82  Linux swap / Solaris
        /dev/sda6          107174      119526    99225441   83  Linux
        /dev/sda7           94120      107173   104856192    7  HPFS/NTFS

    Partition table entries are not in disk order
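
    Since a badly burned disc is one of the suspects, verifying the image before burning again is cheap and can be done from the Linux install already on this machine. A minimal sketch; the filename is a placeholder, and the reference hash has to come from wherever the ISO was downloaded:

        # Hash the downloaded image and compare against the publisher's value
        sha256sum windows7_x64.iso
        # Read back the burned disc (assumed to be /dev/sr0), limited to the
        # image's exact size in 2048-byte sectors, and hash it the same way
        dd if=/dev/sr0 bs=2048 count=$(( $(stat -c %s windows7_x64.iso) / 2048 )) | sha256sum

    Matching hashes would rule the media out. A 64-bit-only failure can also point at a flaky RAM module above the range 32-bit Windows ever touches, which a memtest86+ pass would cover.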

    Read the article

  • Need advice on which PCI SATA Controller Card to Purchase

    - by Matt1776
    I have a major issue with the build of a machine I am trying to get up and running. My goal is to create a file server that will service the needs of my software development, personal media storage and streaming/media server needs, as well as provide a strong platform for backing up all this data in a routine, cron-job oriented German efficiency sort of way. The issue is a simple one: all my drives are SATA drives and my motherboard controller only has 4 ports. Solving the issue has proven to be an unmitigated nightmare. I would like advice on the purchase of a 4-port internal SATA / 2-port external eSATA PCI SATA controller card that has the following features and/or advantages:

        1. It must function. If I plug it in and attach drives, I expect my system to still make it to the operating system login screen.
        2. It must function on CentOS, and I mean it must function WELL and with MINIMAL hassle. If hassle is unavoidable, there shall be CLEAR CUT and EASY TO FOLLOW instructions on how to install drivers and other supporting software.
        3. I do not need nor want fakeRAID; I will be setting up any RAID configurations from within the operating system.

    Now, if I am able to find such a mythical device, I would be eternally grateful to whoever can point me in the right direction, a direction which I assume will be paved with yellow bricks. I am prepared to pay a considerable sum of money (as SATA controller cards go), so paying anywhere between 60 and 120 dollars will not be an issue whatsoever. Does such a magical device exist? The following link shows an "example" of the type of thing I am looking for; however, I have no way of verifying that once I plug this baby in my system will still function once I've attached the drives, or that once I've made it to the OS I will be able to install whatever drivers or software I need to make it work with relative ease. It doesn't have to be dog-shit simple, but it cannot involve kernels or brain surgery. http://www.amazon.com/gp/product/B00552PLN4/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B003GSGMPU&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1HJG60XTZFJ48Z173HKY So does anyone have a suggestion regarding the subject I am asking about? PCI SATA controller cards? It would help if you've had experience with the component before; that is, after all, why I am asking here: for those who have experience that I do not have. Bear in mind that this is for a home setup and that I do not have a company credit card. I have a budget with a 'relative' upper limit of about $150.00.
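
    On the "RAID from within the operating system" point: once a card presents the drives individually (plain AHCI/JBOD, with any fakeRAID BIOS disabled or ignored), software RAID on CentOS is handled by mdadm. A minimal sketch, with the device names as assumptions:

        # Create a RAID1 mirror from two new drives
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        # Watch the initial sync progress
        cat /proc/mdstat
        # Persist the array definition so it assembles at boot
        mdadm --detail --scan >> /etc/mdadm.conf

    This is one reason plain HBA-style cards are usually the safer buy here: the RAID logic stays in the kernel, so the card only needs to be recognised as an ordinary SATA controller.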

    Read the article

  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended. I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things. At the moment, all the company's data is stored on an 8TB external FireWire drive attached to a Mac Mini running OS X Server 10.6, which provides file sharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights. I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind-the-scenes solution, but for people's day-to-day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise-level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large-scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has Fibre Channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has Thunderbolt, which severely limits our choices; the list goes on and on. I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD and netatalk for serving files with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks. I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.
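
    For the offsite ZFS leg already in progress, incremental snapshot sends keep the nightly window small once the first full copy is across. A minimal sketch, where the pool name (tank), dataset name (agency) and offsite host (backuphost) are all assumptions:

        # Take a named snapshot of the dataset
        zfs snapshot tank/agency@2013-01-15
        # First run: send the full snapshot to the offsite box
        zfs send tank/agency@2013-01-15 | ssh backuphost zfs receive tank/agency
        # Subsequent runs: send only the delta between snapshots
        zfs snapshot tank/agency@2013-01-16
        zfs send -i tank/agency@2013-01-15 tank/agency@2013-01-16 | ssh backuphost zfs receive tank/agency

    Wrapped in a cron job with dated snapshot names, this gives versioned offsite copies without relying on the fragile RAID 0 caddy at all.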

    Read the article
