Search Results

Search found 1657 results on 67 pages for 'writes on'.

Page 46/67 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • IIS7 ASP.NET application - 2 identical apps in 2 identical app pools, 1 is responsive and 1 is not

    - by Ben
    I have an ASP.NET (v4.0) web app that is installed in a virtual directory (as an application) and is hosted in its own app pool. This is repeated for each instance of the app (i.e. per customer). The app pools are integrated (not classic) mode and LoadUserProfile is set to true; otherwise, default settings. Each instance currently has its own copy of the code/config and its own data folder (basic file reads/writes). One instance of this app runs well (the operation used for comparison takes ~4 seconds). Every other instance runs slowly (from 10-25 seconds for the same operation). If I move a slower instance to the "fastest" app pool, that instance springs to life. If I move the faster instance into a slower app pool, that instance slows to a crawl. The app pools were created in the same way initially - manually. I later used the PowerShell copy routine to ensure an exact copy of the faster app pool, and still the same behaviour. Comparing the apppool.config files shows they are identical barring the virtual directory assignments. There are no shared resources being blocked, so far as I can tell, and I tested that by shutting down the performant app pool and restarting... slow is still slow, and when I restart that app pool (so it's loaded last) it's still faster...
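
    For reference, one way to rule out a configuration difference is to dump both pool definitions to text and diff them (a sketch; the pool names and the default inetsrv path are assumptions):

        rem dump each app pool's effective settings, then compare the two files
        %windir%\system32\inetsrv\appcmd.exe list apppool "CustomerA" /text:* > poolA.txt
        %windir%\system32\inetsrv\appcmd.exe list apppool "CustomerB" /text:* > poolB.txt
        fc poolA.txt poolB.txt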

    Read the article

  • How to stop Firefox on an SSD from freezing when using the search box or submitting a form?

    - by sblair
    Firefox usually freezes for about a second whenever I search for something from the toolbar search box, when submitting a form, or when clearing the search box history. I suspect it has something to do with the auto-complete feature. Using Windows 7's Resource Monitor, the problem seems to come from the file: C:\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>\formhistory.sqlite-journal I believe this is a temporary file which caches database writes. A Resource Monitor screenshot (not reproduced here) showed very high response times across six different searches, with the queue length on drive C going off the scale. My Firefox profile is on an Intel X25-M G2 SSD. The problem doesn't seem to occur if I create a new profile on a hard disk drive. However, I'd like to know why the problem exists on the SSD in the first place (because it's an annoying problem which contradicts the reason I bought an SSD, and it might happen with other applications too), and how to prevent it. It still occurs if Firefox is started in safe mode, and with the recent beta versions. Updates: VACUUMing the Firefox profile databases does not help with this problem. The SSD Optimizer in the Intel SSD Toolbox does not help either.
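
    If losing form autocomplete is acceptable, one workaround sketch is to stop Firefox from recording form history at all, which avoids the formhistory.sqlite-journal writes entirely (the pref goes in a user.js file in the profile folder; this is a workaround, not an explanation of the stalls):

        // user.js in the Firefox profile directory
        user_pref("browser.formfill.enable", false);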

    Read the article

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 x 2 TB disks. The array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an xfs and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful:
    - writing inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, and then the next 50 are written (according to the console display)
    - when I try to cancel the operation with Control+C it hangs for half a minute before it is really cancelled
    The performance of the disks individually is very good; I've run bonnie++ on each one separately with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel the values are only reduced by about 10 MB/s. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation. The server details:
    Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux
    mdadm - v2.6.7.1
    Does anyone have a suggestion on how to debug this further?
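
    Two things worth checking while debugging (a sketch, assuming the array is /dev/md0, 4k filesystem blocks, and the 64k chunk size mentioned above): whether the initial resync is still running in the background, and whether mke2fs has been told about the RAID geometry so its metadata writes are stripe-aligned.

        # is the md resync still running?
        cat /proc/mdstat

        # stride = chunk / block = 64k / 4k = 16; stripe-width = stride * 3 data disks = 48
        mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0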

    Read the article

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking.
    Background: I am currently wrapping up a development contract and the client would like me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to `SERVERNAME.COMPANYNAME.COM/admin.html' as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume that I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app and the future ones they wish to install are all MVC based, which intercepts and re-writes URLs. I assume that this takes care of itself if I can just successfully get my app into a subdomain to begin with.
    What I have tried:
    - Creating a new site on the server in its own app pool
    - Setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM
    - Setting the binding for that site to MYAPP
    - Setting the binding for that site to MYAPP.SERVERNAME
    - Setting the binding for that site to MYAPP.SERVERNAME.COM
    - Setting the binding for that site to MYAPP.COMPANYNAME.COM
    Nothing is working. Am I missing something simple here? Thanks, Matt
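
    A host-header binding only does anything if the name actually resolves to the server, so the usual missing piece is a DNS A or CNAME record for MYAPP.SERVERNAME.COMPANYNAME.COM (or a hosts-file entry for testing). A sketch of both halves, with placeholder names and an example IP:

        rem on a test client: point the name at the server (10.0.0.5 is a placeholder)
        echo 10.0.0.5 myapp.servername.companyname.com >> %windir%\system32\drivers\etc\hosts

        rem on the server: bind the new site to that host header
        %windir%\system32\inetsrv\appcmd.exe set site "MyAppSite" /+bindings.[protocol='http',bindingInformation='*:80:myapp.servername.companyname.com']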

    Read the article

  • Join multiple filesystems (on multiple computers) into one big volume

    - by jm666
    Scenario: I have 10 computers, each with 12 x 2TB HDDs (currently) in a raidZ2 (10+2) configuration, so in each computer I have one volume of approximately 20TB. Now I need to join those 10 separate computers (separate RAID groups) into one big volume. What is the recommended solution? I'm thinking about FCoE (10GB Ethernet): buying an FCoE card (10GB Ethernet) for each computer - and what more is needed on the hardware side? (probably another computer, and an FCoE switch, like a Cisco Nexus?) The main question is: what do I need to install and configure on each computer? Currently they run FreeBSD/raidz2, but it is possible to change to Linux/Solaris if needed. Any helpful resource about how to build big volumes from smaller RAID groups (on the software side) is very welcome - what OS, what filesystem, what software, etc. In short: I want to get approximately 200TB of storage (in one filesystem) from already existing computers/storage. I don't need fast writes, but I do need good read performance (as a big fileserver), and it should work transparently, so that when storing data I don't have to care about which computer the data goes to (i.e. not 10 mount points, but one big logical filesystem). Thanks.

    Read the article

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting a current pending sector count of 45. I have used badblocks to identify the sectors and I have been trying to write zeros to them with dd. From what I understand, when I attempt to write data directly to the bad sectors, it should trigger a reallocation, reducing current pending sectors by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector.

    # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
    dd: error writing ‘/dev/sdb’: Input/output error

    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine, I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:

    Model Family:     Western Digital Caviar Green (AF)
    Device Model:     WDC WD15EARS-00Z5B1
    Serial Number:    WD-WMAVU3027748
    LU WWN Device Id: 5 0014ee 25998d213
    Firmware Version: 80.00A80
    User Capacity:    1,500,301,910,016 bytes [1.50 TB]
    Sector Size:      512 bytes logical/physical
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   ATA8-ACS (minor revision not indicated)
    SATA Version is:  SATA 2.6, 3.0 Gb/s
    Local Time is:    Fri Oct 18 17:47:29 2013 CDT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions: Why aren't the reallocations being recorded by the disk? I'm assuming a reallocation took place, as I can now write data directly to the sector and couldn't before. Why did shred cause reallocation when dd didn't? Does the fact that shred writes random data instead of just zeros make a difference?
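
    Two commands that may help here (a sketch; the sector number is taken from the dd example above, and --write-sector is destructive to that sector's contents): re-check the SMART counters after each write attempt, and try writing the single sector at the ATA level with hdparm, which sometimes succeeds where dd through the block layer keeps failing.

        # watch the pending/reallocated counters
        smartctl -A /dev/sdb | egrep -i 'Pending|Realloc'

        # overwrite exactly one LBA at the drive level
        hdparm --write-sector 217152 --yes-i-know-what-i-am-doing /dev/sdb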

    Read the article

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning a large website that includes many static assets (js, css, images and thumbnails) in the generated pages. That website will use TYPO3 as CMS (it is a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main application (PHP) runs, and another one where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. js or css files from the file server is of course no big deal - just use an absolute URL like http://static.example.com/js/main.js and be done with it. But: that website will have pages with MANY thumbnails of e.g. product images on them. So I see two problems when the main application tries to create a thumbnail of some image:
    - the original image, like products/some.jpg, is uploaded to the static file server and is therefore not on the same server as the PHP application which tries to create the thumbnail
    - TYPO3 writes created thumbnails to a temp directory which is expected to be on the same server
    Therefore, hundreds of thumbnails will be written to and served from that temp directory, which is on the same server as the main application - the static file server is in that case basically useless; all thumbnails will be requested from the server of the main application. So, my question is: how to overcome these shortcomings? Is it possible to "symlink" some directories to another server? So, for example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), the products folder actually "points" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible at the filesystem level?
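
    A symlink cannot point at another machine, but a network mount can make the remote folder look local to PHP, which is effectively the filesystem-level "pointer" asked about. A sketch assuming NFS, with example paths and hostnames:

        # on the static file server: export the products directory
        echo '/var/www/static/products  app.example.com(rw,sync,no_subtree_check)' >> /etc/exports
        exportfs -ra

        # on the application server: mount it where TYPO3 expects the local folder
        mount -t nfs static.example.com:/var/www/static/products /var/www/site/products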

    Read the article

  • Please help to find a solution for two way, real-time synchronization on Centos 5.5 64Bit

    - by Vipul Limbachiya
    I am in need of real-time, two-way synchronization software for CentOS 5.5 / 64-bit. Here's a little explanation: it needs to perform two-way synchronization, and it must be realtime - by realtime I mean it can be almost realtime, i.e. a delay of 1 second, for example, is fine. The folders are on the same server. I am currently using GlusterFS across two webservers. However, it has extremely poor small-file read performance and it's slowing down my website. There's nothing more that can be done to improve this; I have already tested many configurations. As a solution, I was going to mount a RAM drive (tmpfs) that mirrors the GlusterFS web files and get the webserver to use the RAM drive. The issue is that I need two-way realtime mirroring or replication between GlusterFS and the RAM drive. I need this because Apache writes files as well. As I said, realtime two-way synchronization across two folders, which are in fact two different mount points: the RAM (tmpfs) mount point and the GlusterFS mount point. I already know about: rsync - which is one-way; Unison - which is not realtime. Please suggest any solution, free or paid. Thanks in advance.
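
    One approach sketch (not an endorsement of a particular tool): inotify-tools can watch a directory tree and trigger rsync within roughly a second of each change, and running one such watcher in each direction approximates two-way sync between the two mount points. Paths below are examples, and the sketch ignores conflict handling:

        # mirror tmpfs changes back to GlusterFS as they happen
        # (run a second loop with the paths swapped for the other direction)
        while inotifywait -r -e modify,create,delete,move /mnt/ramdisk/web; do
            rsync -a --delete /mnt/ramdisk/web/ /mnt/glusterfs/web/
        done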

    Read the article

  • Inheriting file ownership on linux

    - by John Hunt
    We have an ongoing problem here at work. We have a lot of websites set up on shared hosts; our CMS writes many files to these sites and allows users of the sites to upload files etc. The problem is that when a user uploads a file on the site, the owner of that file becomes the webserver, which prevents us from being able to change permissions etc. via FTP. There are a few workarounds, but really what we need is a way to set a "sticky" owner, if that's possible, on new files and directories that are created on the server. E.g., rather than PHP writing the file as user apache, it takes on the owner of the parent directory. I'm not sure if this is possible (I've never seen it done). Any ideas? We're obviously not going to get a login for apache to the server, and I doubt we could get into the apache group either. Perhaps we need a way of allowing apache to set at least the group of a file; that way we could set the group to our FTP user in PHP and set 664 and 775 for any files that are written? Cheers, John.
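
    There is no "sticky owner" on Linux, but the setgid bit on a directory does exactly the group-inheritance part described above: files and subdirectories created inside it get the directory's group rather than the creating process's primary group. A sketch, with example paths and group name:

        # give the upload tree the FTP user's group and make it inherited
        chgrp -R ftpgroup /var/www/site/uploads
        find /var/www/site/uploads -type d -exec chmod g+s {} \;
        # in the PHP upload code, make new files group-writable, e.g.
        #   umask(0002);   or   chmod($path, 0664);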

    Read the article

  • SSH to remote host (edgemarc 4200 or 4500 series routers) and pull arp data

    - by MaQleod
    I've been trying to think of a method to do this for days, but have not come up with anything yet. Ideally, this is what I'm looking to do: from a Windows XP machine, I need to open an SSH connection to a remote host, send the arp command, and pull the text results of the command back for use on the client. I will need to parse this data and preferably produce a 2D array of IPs and MAC addresses. There will be no shared keys; this is all done with a username and password that will always be different. They will need to be fed into the command via variables pulled from a database by an AutoIt script, based on the WAN IP of the remote host. Now, the actual parsing of the data and creation of the array will be easy if I can just get the text of the ARP table. Is there any way to SSH to a remote host, run a command and return the data from that command to the client in a batch script or Perl script? (It is OK if it writes the text to a file; I can read it out of the file later, I just need it to get to the client.)
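
    One commonly used piece for this on Windows is plink.exe from the PuTTY suite, which opens an SSH session, runs a single command, and writes its output to stdout. A sketch with placeholder credentials and host (the AutoIt script would substitute the real values):

        rem run "arp" on the remote router and capture the table locally
        plink.exe -ssh -batch -l admin -pw secret 203.0.113.10 "arp" > arp_output.txt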

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud Linux (Ubuntu) server. Recently I released a new version of the application and suddenly my server response times have gone from 500ms average to 15s average. I have run every monitoring tool I know:
    - iostat says my disks are 0.03% utilised
    - mysqltuner.pl says I am OK (output below)
    - top says my processor is 99% idle and load average: 0.20, 0.31, 0.33
    - memory usage is 50% (-/+ buffers/cache: 3997 3974)
    mysqltuner output:

    [OK] Logged in using credentials from debian maintenance account.
    -------- General Statistics --------------------------------------------------
    [--] Skipped version check for MySQLTuner script
    [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log
    [OK] Operating on 64-bit architecture
    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in MyISAM tables: 370M (Tables: 52)
    [--] Data in InnoDB tables: 697M (Tables: 1749)
    [!!] Total fragmented tables: 1754
    -------- Security Recommendations -------------------------------------------
    [OK] All database users have passwords assigned
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B)
    [--] Reads / Writes: 98% / 2%
    [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads)
    [OK] Maximum possible memory usage: 2.4G (30% of installed RAM)
    [OK] Slow queries: 0% (1/1M)
    [OK] Highest usage of available connections: 34% (173/500)
    [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K
    [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads)
    [OK] Query cache efficiency: 61.4% (844K cached / 1M selects)
    [!!] Query cache prunes per day: 553779
    [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts)
    [OK] Temporary tables created on disk: 4% (4K on disk / 102K total)
    [OK] Thread cache hit rate: 84% (185 created / 1K connections)
    [!!] Table cache hit rate: 0% (256 open / 27K opened)
    [OK] Open file limit used: 0% (20/2K)
    [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks)
    [OK] InnoDB data size / buffer pool: 697.2M/1.0G
    -------- Recommendations -----------------------------------------------------
    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        MySQL started within last 24 hours - recommendations may be inaccurate
        Enable the slow query log to troubleshoot bad queries
        Increase table_cache gradually to avoid file descriptor limits
    Variables to adjust:
        query_cache_size (> 16M)
        table_cache (> 256)
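
    If the two [!!] lines are worth acting on, the corresponding my.cnf changes look roughly like this (values are a starting point to test against the 15s regression, not a definitive recommendation; table_cache is the 5.1-era variable name):

        # /etc/mysql/my.cnf
        [mysqld]
        query_cache_size = 64M
        table_cache      = 1024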

    Read the article

  • How does one skip “Windows did not shut down successfully” in Win7-64?

    - by XenonofArcticus
    Migrating an app from an expensive and unreliable dedicated embedded x86 box running WinXP-embedded to COTS hardware (a Dell E6410 laptop) running normal Win7-64. At this time, it's not feasible to deploy using Windows 7 Embedded. The problem is that the system is still sort of "embedded": the power could shut off at virtually any time without prior warning. We've stripped the OS down and removed the battery capability so that it will power down as desired. The app never writes to the disk, so it's not like we're going to corrupt anything terribly. The system is essentially idle after our app is up and running (with the exception of some computation, graphics, and TCP/IP and serial communications), so the OS enters a pretty stable state rather quickly. After a power loss, however, it rightly complains that Windows did not shut down successfully and presents the user with the Windows Error Recovery text screen. If left alone, it does eventually move on and boot just fine, but we'd like to skip that step if possible. WinXP-embedded is designed to do this automatically, so I know it's possible. I've looked at the kernel switches but I didn't see anything documented for "Skip Windows Error Recovery". I've also read extensively on the startup process: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/windows-nt-6-boot-process.html I know I can disable the auto chkdsk in the registry, but that's not the same thing either. So, how do I streamline the boot process to not hassle the user about a situation that will be the regular, normal situation?
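
    For what it's worth, the Windows 7 boot loader has a policy setting that suppresses the "Windows Error Recovery" screen after an unclean shutdown; a sketch, run from an elevated command prompt and assuming the default boot entry is the one in use:

        rem skip the failure/recovery prompt and boot normally after power loss
        bcdedit /set {current} bootstatuspolicy ignoreallfailures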

    Read the article

  • FreeBSD: problem with Postfix after updating LDAP

    - by Olexandr
    On the server I installed openldap-server; the OpenLDAP client had already been installed on this machine. The openldap-client version (2.4.16) was older than the new openldap-server (2.4.21), and the client was updated as well. The OpenLDAP client works with Postfix on this server, and after all the updates Postfix can't start any more. The error from postfix stop|start is:

    /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"

    The library directory contains libldap-2.4.so.7, but libldap-2.4.so.6 has been removed from the server. When I try to deinstall the currently installed openldap-client, the system writes:

    ===> Deinstalling for net/openldap24-client

    O.K., but when I run "make install" the system writes:

    ===> Installing for openldap-sasl-client-2.4.23
    ===> openldap-sasl-client-2.4.23 depends on shared library: sasl2.2 - found
    ===> Generating temporary packing list
    ===> Checking if net/openldap24-client already installed
    ===> An older version of net/openldap24-client is already installed (openldap-client-2.4.21)
    You may wish to ``make deinstall'' and install this port again by ``make reinstall'' to upgrade it properly. If you really wish to overwrite the old port of net/openldap24-client without deleting it first, set the variable "FORCE_PKG_REGISTER" in your environment or the "make install" command line.
    *** Error code 1
    Stop in /usr/ports/net/openldap24-client.
    *** Error code 1
    Stop in /usr/ports/net/openldap24-client.

    Updating the ports doesn't help, and postfix still reports the error:
    /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"
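
    The Postfix binary was built against the old client library, so the usual fix is to rebuild Postfix against the updated OpenLDAP client; a temporary symlink can get mail flowing first if the new library really is ABI-compatible (both lines are a sketch under those assumptions):

        # stopgap: let the existing postfix binary find a libldap again
        ln -s /usr/local/lib/libldap-2.4.so.7 /usr/local/lib/libldap-2.4.so.6

        # proper fix: rebuild postfix against the new openldap client
        portupgrade -f postfix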

    Read the article

  • Attempted hack on VPS, how to protect in future, what were they trying to do?

    - by Moin Zaman
    UPDATE: They're still here. Help me stop or trap them! Hi SF'ers, I've just had someone hack one of my clients' sites. They managed to change a file so that the checkout page on the site writes payment information to a text file. Fortunately or unfortunately they stuffed up - they had a typo in the code, which broke the site, so I came to know about it straight away. I have some inkling as to how they managed to do this: my website CMS has a file upload area where you can upload images and files to be used within the website. The uploads are limited to 2 folders. I found two suspicious files in these folders, and on examining the contents it looks like these files allow the hacker to view the server's filesystem and upload their own files, modify files and even change registry keys?! I've deleted some files, changed passwords, and am in the process of trying to secure the CMS and limit file uploads by extension. Is there anything else you guys can suggest I do to try and find out more details about how they got in, and what else I can do to prevent this in future?
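
    Beyond extension filtering, a common extra layer is to make the two upload folders non-executable for PHP, so even a script that slips through cannot run. A sketch of an .htaccess dropped into each upload directory (assumes Apache with mod_php and AllowOverride enabled):

        # uploaded files can still be downloaded, but are never executed as PHP
        php_flag engine off
        RemoveHandler .php .phtml
        RemoveType .php .phtml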

    Read the article

  • Redis connection issue

    - by mre
    We are currently experiencing a lot of Redis errors with the message: Unable to connect: read error on connection, trying next server We run Redis on FreeBSD using phpredis, and we have a hard time reproducing the error on Ubuntu, so this might be a hint. There's a long-running issue on that topic on GitHub. Basically we get a socket from the operating system with a call to connect(host, port, timeout) in phpredis, but when we do a select(db_index) afterwards, we get an exception. Could there be an issue with persistence? I assume that connect does nothing in the background and select tries to access the connection, which is actually closed. We don't run into a timeout. We tried tuning TIME_WAIT without success. Any other ideas on where the problem might come from? What is the best way to track the issue down? dtrace maybe? Update: we are currently looking into our BGSAVE settings. Interestingly it takes half a second or more to create the fork of the process which regularly writes the data to disk (persistence), and maybe Redis can't respond to connect() requests during that timespan.
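
    Redis itself reports how long the last BGSAVE fork took, which makes it easy to correlate fork stalls with the connection errors (a quick check, assuming redis-cli can reach the instance):

        # duration of the most recent fork, in microseconds
        redis-cli INFO stats | grep latest_fork_usec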

    Read the article

  • Printing a PDF results in garbled text (sometimes)

    - by Scott Whitlock
    We have a system that renders a report as a PDF and displays it in the browser for the user. In the browser, the document always appears to display fine, but when printed on one machine, it sometimes changes some of the data in the report to seemingly random characters. Here are some examples of the strings it inserts:
    Ebuf; Bvhvt ul1: -!3122
    Ti jqqf e!Wjb; Nfttf ohf s!Tf swjdf
    Additionally, the inter-character spacing is weird. It sometimes writes characters overlapping each other. I noticed some repetition in the garbled text, so I typed a few of them into Google, and surprisingly got a lot of hits. Here is the string I searched for:
    pdf cjmp ebuf nftf up!
    The Google search summaries contain the garbled text. However, when I click on those links in Google, I get perfectly readable PDF files. It's as if Google's PDF crawler has the same bug. Has anyone figured this out? Is this an Acrobat Reader bug?
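
    The samples look like every character has been bumped up by exactly one code point (suggesting a font/encoding substitution at print time rather than corrupted data); a quick way to check is to shift each printable character back down by one:

        # "Ti jqqf e!Wjb; ..." comes out as "Shipped Via: Messenger Service" (spacing aside)
        echo 'Ti jqqf e!Wjb; Nfttf ohf s!Tf swjdf' | tr '!-~' ' -}'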

    Read the article

  • i7-980X at 70% speed

    - by Buxley
    Hi, we bought a nice computer to use to solve optimization problems: an Intel i7-980X @ 3.33 GHz with 12 GB of Team Group 1600 MHz DDR3 RAM. When we use Gurobi, the computer uses all 12 cores at maximum at the beginning of the solve. However, after a while (about 8 hrs) all cores jump between 65 and 85%. When I solve the same models on an i7-930, all cores are at a near 100% level even after longer solution times. We first thought that the hard disk was the bottleneck, since Gurobi writes out node files after the memory limit is used. However, since the new computer has 12 GB of RAM, we set the memory limit to 7 GB so the solver only uses the RAM - and still the same performance in the processor. Any ideas about the bottleneck? As I said earlier, it works at 100% for the first hours or so. Thanks very much for any answers! Our plan was to overclock it but we can't even get it to work at normal speed yet!

    Read the article

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends increasing query_cache_size no matter how much I increase the value (I tried up to 512MB). On the other hand it warns that: "Increasing the query_cache size over 128M may reduce performance". Here are the last results:

    >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
    >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
    >> Run with '--help' for additional options and output filtering
    -------- General Statistics --------------------------------------------------
    [--] Skipped version check for MySQLTuner script
    [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
    [OK] Operating on 64-bit architecture
    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in InnoDB tables: 6G (Tables: 195)
    [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
    [!!] Total fragmented tables: 51
    -------- Security Recommendations -------------------------------------------
    [OK] All database users have passwords assigned
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
    [--] Reads / Writes: 89% / 11%
    [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
    [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
    [OK] Slow queries: 0% (2K/254M)
    [OK] Highest usage of available connections: 32% (391/1200)
    [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
    [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
    [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
    [!!] Query cache prunes per day: 1033203
    [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
    [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
    [OK] Thread cache hit rate: 99% (676 created / 5M connections)
    [OK] Table cache hit rate: 22% (1K open / 8K opened)
    [OK] Open file limit used: 0% (49/13K)
    [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
    [OK] InnoDB data size / buffer pool: 6.1G/19.5G
    -------- Recommendations -----------------------------------------------------
    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Reduce your overall MySQL memory footprint for system stability
        Increasing the query_cache size over 128M may reduce performance
    Variables to adjust:
      *** MySQL's maximum memory usage is dangerously high ***
      *** Add RAM before increasing MySQL buffer variables ***
        query_cache_size (> 192M) [see warning above]

    The server has 76GB of RAM and dual E5-2650s. The load is usually below 2. I would appreciate your hints on how to interpret the recommendation and optimize the database config.
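
    Two notes that may help interpret this (hedged, since only the tuner output is visible here): the 139% figure is the worst case of global buffers plus per-thread buffers times max_connections (roughly 24.2G + 92.2M x 1200), so capping connections or per-thread buffers matters more than the cache size; and with the cache already 79.9% efficient, the prune count mostly reflects churn, so holding at 128M is a defensible answer to the dilemma. A my.cnf sketch with illustrative values:

        # /etc/mysql/my.cnf
        [mysqld]
        max_connections  = 500     # 24.2G + 92.2M * 500 stays under the 76GB of installed RAM
        query_cache_size = 128M    # keep at the threshold the tuner itself warns about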

    Read the article

  • Advice on cloning disk

    - by hks
    I'm going to buy a second disk for backup, the same size as my laptop's. I want to mount it in a casing via USB and back up the entire HDD every so often. That's because I want the possibility to just switch drives in case something goes wrong. I'm using Linux, and obviously the right tool seems to be dd. The thing is that my laptop drive has a speed of around 50-70 MB/s and USB 2.0 is 57 MB/s, so copying my 250GB disk should take more than an hour if I'm lucky. I can't wait that long; I want some kind of differential backup. I read one of JWZ's articles, in which he gives details on using rsync on a Mac. He writes that it is possible to make the rsync'ed disk bootable. So my question is: how do I make an rsync'ed HDD bootable under Linux, or are there other "quick backup" tools for Linux that would allow me to just swap drives? Or should I just stick to dd :( ?
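
    A sketch of the rsync approach on Linux (mount point, target device and exclusions are examples; grub-install options differ between GRUB versions, so treat the last line as a placeholder for "reinstall the boot loader on the backup disk"):

        # differential copy of the whole system, keeping hardlinks/ACLs/xattrs,
        # staying on one filesystem and skipping pseudo-filesystems
        rsync -aHAXx --delete --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/mnt / /mnt/backup/

        # make the copy bootable by installing GRUB onto the backup drive
        grub-install --root-directory=/mnt/backup /dev/sdb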

    Read the article

  • SQL Server 2005 Disk Configuration: Single RAID 1+0 or multiple RAID 1+0s?

    - by mfredrickson
    Assuming that the workload for the SQL Server is just a normal OLTP database, and that there are a total of 20 disks available, which configuration would make more sense?
    - A single RAID 1+0 containing all 20 disks. This physical volume would contain both the data files and the transaction log files, but two logical drives would be created from this RAID: one for the data files and one for the log files. Or...
    - Two RAID 1+0s, each containing 10 disks. One physical volume would contain the data files, and the other would contain the log files.
    The reason for this question is a disagreement between me (SQL developer) and a co-worker (DBA). In every configuration that I've done, or seen others do, the data files and transaction log files were separated at the physical level and placed on separate RAIDs. However, my co-worker's argument is that by placing all the disks into a single RAID 1+0, any IO done by the server is potentially shared between all 20 disks, instead of just 10 disks in my suggested configuration. Conceptually, his argument makes sense to me. Also, I've found some information from Microsoft that seems to support his position: http://technet.microsoft.com/en-us/library/cc966414.aspx In the section titled "3. RAID10 Configuration", showing a configuration in which all 20 disks are allocated to a single RAID 1+0, it states: "In this scenario, the I/O parallelism can be used to its fullest by all partitions. Therefore, distribution of I/O workload is among 20 physical spindles instead of four at the partition level." But... every other configuration I've seen suggests physically separating the data and log files onto separate RAIDs. Everything I've found here on Server Fault suggests the same. I understand that log files will be write-heavy, and that data files will be a combination of reads and writes, but does this require that the files be placed onto separate RAIDs instead of a single RAID?

    Read the article

  • mysql disk io keeps increasing ... is that normal?

    - by trustfundbaby
    So I've been trying to figure out this disk IO problem I have been having with my Linode VPS. Over the last day or two I've just left watch -n1 pidstat -d running in a console window and kept an eye on the output. Monitoring it over the last few days, I've noticed that my problem lies with the init, searchd, and mysql processes. searchd is Sphinx and all its indexes are on disk, so disk IO there is inevitable (apparently). What I can't understand is why the disk reads (kB_rd/s) for mysql refuse to stabilize and just keep going up: they started out at 154 yesterday and have kept climbing since (the pidstat screenshot is not reproduced here). Disk writes (kB_wr/s), on the other hand, have remained pretty constant the entire time. My VPS only has 768MB RAM, my MySQL DB has a size of about 220MB, and after running mysqltuner.pl and reading a bit about it, I've been advised to set my innodb_buffer_pool_size to 220MB, but I simply cannot afford to do that ... I have it up to 150MB. My question is twofold: Why does the init process have that much disk reading to do? Why is mysql doing so much disk reading?
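
    One quick way to see whether those reads are InnoDB cache misses (which would fit a ~220MB dataset squeezed into a 150MB buffer pool) is to compare the buffer-pool counters; a sketch assuming root credentials:

        # reads that had to go to disk vs. read requests served from the buffer pool
        mysqladmin -u root -p extended-status | egrep 'Innodb_buffer_pool_read_requests|Innodb_buffer_pool_reads '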

    Read the article

  • Windows XP VM on VMWare ESXi 4.1 "pausing" / blocked occasionally

    - by FelixD
    We have an issue with Windows XP SP3 VMs on VMware ESXi 4.1.0 (the free version): they sometimes seem to "pause" for several minutes. This happens rarely (maybe once a month per VM, at least noticed only that often), but it is still an issue for us. It happens for three different but similar VMs on three pretty different hosts (different hardware). I have the feeling that the "pausing" is not actually the CPU blocking, but probably the hard disks, though I'm not 100% sure. The servers have one IDE disk (C:) and one SCSI (D:) and it might be either of the two. I have seen scheduled tasks simply not starting for up to 9 minutes and then running again at normal speed; they were totally blocked. This is not a load issue: the VMware hosts have average load, and the VMs in question already have reserved CPU resources plus high priorities for CPU and disk. The Windows boxes run mainly MySQL, Tomcat, FileZilla Server, Cygwin stuff, Java + R applications, the VMware client, Elusiva Terminal Server Pro, and the Nagios client. Not sure if this might be related to any of that software (e.g. Elusiva). Trying to debug this, there was nothing visible in the Windows event log, the other logs in C:\Windows, the VMware events, etc. Unfortunately the vmware.log file ends with "Log throttled". We found that we ran into two VMware bugs there: the VMware client writes lots of bogus messages into vmware.log, which we have now disabled (log level set to error), plus the bug that VMware does not unthrottle the log (at least so far, despite VM reboots). I know there is not much to go on, and that may also be why I haven't found anything related on the web or on Server Fault so far, but maybe some of this rings a bell with someone? Otherwise, please direct me to what further info to post. I hope that the vmware.logs get unthrottled eventually (we can't easily restart the hosts at the moment). Thanks for any input!

    Read the article

  • Non-volatile cache RAID controllers: what kind of protection is there against NVCACHE failure?

    - by astrostl
    The battery back-up (BBU) model:
    - admin enables write-back cache with BBU
    - writes are cached to the RAID controller's RAM (major performance benefit)
    - the battery saves uncommitted and cached data in the event of a power loss (reliability)
    If I lose power and come back within a day or so, my data should be both complete and uncorrupted. The downside to this is that, if the battery is dead or low, OR EVEN IF IT IS IN A RELEARN CYCLE (drain/charge loops to ensure the battery's health), the controller reverts to write-through mode and performance will suffer. What's more, the relearn cycles are usually automated on a schedule which may or may not happen in the middle of big traffic. So, that has to be manually disabled and manually scheduled for off-hours if it's a concern. Annoying either way. NV caches have capacitors with a sufficient charge to commit any uncommitted-to-disk data to flash. Not only is that more survivable in longer loss situations, but you don't have to concern yourself with battery death, wear-out, or relearning. All of that sounds great to me. What doesn't sound great to me is the prospect of that flash module having an issue, though. What if it's completely hosed? What if it's only partially hosed? A bit corrupted at the edges? Relearn cycles can tell when something like a simple battery is failing, but is there a similar process to verify that the flash is functional? I'm just far more trusting of a battery, warts and all. I know the card's RAM can fail, the card itself can fail - that's common territory, though. In case you didn't guess, yeah, I've experienced a shocking-to-me amount of flash/SSD/etc. failure :)

    Read the article

  • Moving an external hard drive while running

    - by user1108939
    I mean physically moving the drive around. I've never dealt with external hard drives before. I just plugged in this WD My Passport to test the transfer rate. At one point I "safely ejected" the drive. A minute later I decided to check the underside of the drive, not realizing the disk was still spinning. I lifted the drive, rotating my wrist about 70 degrees to the left... and heard a sequence of three high-pitched sounds. I couldn't determine whether that was an indication beep by an internal security feature or the head scratching the platter (oh god...). The drive stopped and USB power disconnected. I reconnected it - it shows up fine and reads/writes. The drive was not reading/writing when I moved it. Did I damage my drive? Are these things that fragile? I thought them to be at least as durable as a standard 2.5" internal drive. Am I mistaken?
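
    If the drive's SMART data is reachable through the USB bridge, its own counters are the quickest way to check for damage; attribute names vary by vendor and some may not exist on this model, so this is just a sketch:

        # look for shock events and any pending/reallocated sectors
        smartctl -A /dev/sdX | egrep -i 'G-Sense|Reallocat|Pending'
        # some USB enclosures need an explicit bridge hint, e.g.:  smartctl -d sat -A /dev/sdX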

    Read the article
