Search Results

Search found 1657 results on 67 pages for 'writes on'.


  • VMware ESX installation on a SATA disk

    - by ilansch
    I have a PC with a Gigabyte H77 motherboard, an Intel i5-3550 CPU, 8 GB of 1600 MHz RAM, and a 500 GB, 7200 RPM WD SATA III hard disk. I wish to install ESX on it and run some virtual machines, not a lot, something like 2-3 VMs. My hard disk is SATA; is it possible to install ESX Server on it? I am not worried about loading issues. When I try loading the installation, it writes that it cannot detect my disk (since it's not a SCSI disk). How can I bypass this or find a solution? Thanks.

  • Why is music tagging software so inconsistent?

    - by Billy ONeal
    Hello :) A few years ago I spent an insane amount of time using the excellent Tag&Rename program. However, I find that for random, inexplicable reasons, some music tools simply disregard my tags, drop or destroy the album art, or handle certain characters strangely. For example, "AC/DC" is poorly handled by most music players when I use Tag&Rename to write the tags. Is there a piece of software that works like Tag&Rename but is more compatible, or is there a way to ensure Tag&Rename writes more compatible tags?
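
    A quick way to see whether the tagger or the player is at fault is to rewrite the same tags with a plain command-line tagger and compare what players do with the result. A minimal sketch using the id3v2 tool (the filename is hypothetical):

        # Rewrite the tags with a bare-bones ID3v2 tagger, then list what was written.
        id3v2 --artist "AC/DC" --album "Back in Black" "01 - Hells Bells.mp3"
        id3v2 --list "01 - Hells Bells.mp3"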

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non-dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instances of the application so they don't have to run it locally on their own servers. The app is lightweight, and an average server, in theory, could handle 25-50 instances before users would notice a slowdown. However, only 1 instance can run per Windows instance (because the application writes to a common registry branch), so we'd need something like VMware to create 25-50 Windows instances. We know we eventually need to reprogram it to make it truly cloud-worthy, but what would you recommend for a server farm or whatever for this? We don't have the setup to purchase our own servers, so we must use a 3rd party. We have budgeted $500 - $1000 per year per customer for this service. Thanks in advance for your suggestions, experiences and guidance.

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I do an INSERT into the table: MySQL completely locks up. It uses 100% CPU, and every single other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (rpm from mysql.com). Any ideas?
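
    As a first diagnostic sketch, capturing what the server is doing while it is wedged usually narrows this down; the commands below use only standard MySQL client calls:

        # From another shell, while the INSERT is hanging:
        mysql -e "SHOW FULL PROCESSLIST"                           # which threads are stuck in 'Locked'
        mysql -e "SHOW ENGINE INNODB STATUS\G"                     # lock waits, pending I/O, buffer pool state
        mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"   # a too-small pool forces disk churn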

  • Linux: Limiting data throughput (pipe) in bytes per second?

    - by sdaau
    Hi all, I was wondering if there is a Linux program that can limit the data throughput of a pipe, in actual bytes per second. From what I gather, two candidates would be applicable: bfr, which however has been removed from Debian (removal candidate: bfr), and cpipe, whose lowest supported resolution seems to be kB/s, meaning that buffer writes can still reach MB/s ([SOLVED] Is there a program to limit terminal pipe speed? - Page 2 - Ubuntu Forums). What I'd want is to be able to specify something like cat example.txt | ratelimit -Bps 100 > /dev/ttyUSB0 ... and actually have a single byte from example.txt sent every 1/100 = 0.01 s (10 ms) to the output. Thanks in advance for any suggestions, cheers!
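
    For what it's worth, pv can do this: its -L/--rate-limit option takes a rate in plain bytes per second. A sketch matching the hypothetical ratelimit invocation above:

        # Limit the pipe to 100 bytes per second on its way to the serial port.
        pv --rate-limit 100 example.txt > /dev/ttyUSB0
        # or, keeping the cat-style pipeline (-q suppresses the progress display):
        cat example.txt | pv -q -L 100 > /dev/ttyUSB0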

  • Optimal Disk Setup for OLTP SQL Server

    - by Chris
    We have a high-transaction (lots of reads and writes) database server (running SQL 2005) that is currently set up with a RAID 1 OS partition (C:) and a RAID 5 data/log/tempdb partition (D:). The C: has 2 drives and the D: has 4 drives. The server has around 300 databases ranging from 10MB to 2GB in size. I have been reading up on best practices for partitioning the disks, but would like some opinions on our setup, since we are so limited in the number of disks. It seems like RAID 10 is popular, but I don't think we could use it with only 6 total disks to work with. Thanks. Update: I went with three RAID 1 arrays (2 disks each). Partition 1: OS, TempDB, backups; Partition 2: logs; Partition 3: data.

  • Heading numbering not updating

    - by Marwen Hizaoui
    I'm writing a report and I have some problems with headings. I have this structure: Chapter 1 (Heading 1), 1.1 (Heading 2), 1.1.1 (Heading 3), 1.2 (Heading 2); Chapter 2 (Heading 1), 2.1, 2.2. The problem is that when I choose Heading 1, Word writes a number before the chapter: "1 Chapter 1". I want numbers to appear only from Heading 2 onward. I managed to change it using multilevel lists, but it doesn't update appropriately: I removed the number before Chapter 1, for example, but Chapter 2's subheadings did not update and started numbering from 1 (not 2). Please help me out. Thanks.

  • Symlink across local volumes in webroot?

    - by geerlingguy
    I am looking for a good short-term solution to storage space concerns on my website. Currently, I have all uploaded files (flash video, images, etc.) inside the 'files' directory in my web root (/home/account/public_html/files). That directory is located on my high-speed main hard drive (a 15k SCSI drive). I have another drive with much more capacity, but spinning at 10k RPM (so still fast, but not as good for random reads/writes as the main drive). The entire drive is mounted at /backup; right now I'm just using it as a backup volume. I would like to create a symlink from my /home/account/public_html/files folder to /backup/files, and have all files reside on the second drive. However, if someone accesses a file at http://www.example.com/files/filename.jpg, would it still work if I symlinked to the second drive? (Basically, would Apache/PHP automatically know to follow the symlink for that directory?)
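
    As a sketch, the move plus symlink would look like the commands below; Apache follows the link as long as FollowSymLinks is enabled for the directory (the paths are the ones from the question):

        # Move the files to the larger drive, then point the old path at the new home.
        mv /home/account/public_html/files /backup/files
        ln -s /backup/files /home/account/public_html/files

        # Apache needs (and often already has) this option for the web root:
        #   <Directory /home/account/public_html>
        #       Options +FollowSymLinks
        #   </Directory>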

  • Updating WordPress in a multi-node environment

    - by Peter
    I'm finding this very tricky in a multi-node environment with code under revision control, a.k.a. multiple frontends and a single database. I have a deployment process that pushes a git repo to the servers, but obviously if I update WordPress from within the admin panel, it will update the files on one FE only. Then I would need to copy the new files over to the other FE nodes. Plus, whenever these changes are written as WordPress updates on a node, it writes code into the git repo. As such, it breaks the auto-deploys that perform 'git pull', since the repo then has untracked changes and refuses to pull in new deploys without manual intervention. How does one easily keep WordPress updated in a multi-node (load-balanced) environment?
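
    One common pattern, sketched below with hypothetical paths and branch names, is to designate a single node where admin-panel updates happen, commit the result there, and let every other node only ever fast-forward:

        # On the designated "admin" frontend, after updating via the admin panel:
        cd /var/www/wordpress
        git add -A
        git commit -m "WordPress core/plugin update"
        git push origin master

        # On every other frontend, deploys then stay a plain fast-forward:
        git pull --ff-only origin master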

  • Disadvantages of using a swap file/partition on an SSD, even when swappiness is set to 0

    - by pjv
    What are the disadvantages of using a swap file/partition on an SSD, even when swappiness is set to 0? I'm particularly interested in the /proc/sys/vm/swappiness=0 case. How many writes are still done, in practice, to that swap file, and does it have a negative impact on the SSD or any other disadvantage? Or would it be nearly the same as having no swap file at all? I am pretty aware of what swappiness=0 means, just not of what it amounts to in practice. My question stems from a problem I am experiencing without swap: http://stackoverflow.com/questions/4567972/error-executing-aapt-all-of-the-sudden. There are similar questions regarding SSDs and swap, but they don't go in-depth into the swappiness=0 case: "Disadvantages of not having a swap partition", "Should I keep my swap file on an SSD drive?"
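
    One way to turn "how much is still written" into a number is to watch the kernel's own swap counters; a minimal sketch with standard tools:

        cat /proc/sys/vm/swappiness          # confirm the current setting
        grep -E 'pswp(in|out)' /proc/vmstat  # cumulative pages swapped in/out since boot
        vmstat 5                             # watch the si/so columns live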

  • What are the default/recommended access rights for %ALLUSERSPROFILE%?

    - by RED SOFT ADAIR-StefanWoe
    We have a Windows application that reads and writes some data for all users. We place it at %ALLUSERSPROFILE%\OurProgram*.* We are now encountering a few cases in larger companies where users do not have write permission to %ALLUSERSPROFILE%. Most of these cases are running Windows 7, yet the problem does not occur on a normal desktop installation of Windows 7. What is the recommended policy for this location? I have not found any "official" information about this. Is there a different location where all users have write permission?
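
    If the data must stay under %ALLUSERSPROFILE% (%ProgramData% on Vista/7, which is read-only for standard users by default), the usual approach is for the installer to grant the permission explicitly. A hedged sketch using icacls with the folder name from the question (S-1-5-32-545 is the well-known SID for BUILTIN\Users):

        rem Run once, elevated, at install time: let all users modify the data folder.
        icacls "%ALLUSERSPROFILE%\OurProgram" /grant *S-1-5-32-545:(OI)(CI)M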

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that represents a server which receives an image in bytes (at most 500 KB) over TCP and writes it to a file. It then runs a Sobel filter on the image and sends the result over a TCP socket to the client side. I ran it on my laptop and it was very fast, but when I put it on an Amazon EC2 m1.large instance, I found it is very slow, around 10 times slower. It might be inefficiency in the code's algorithm, but in fact my code does nothing but receive the image (like any byte file), run the Sobel algorithm, and send it back. I have the following questions: 1. Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2. 2. Even if the code is not that efficient, the server is ultimately handling a very low load (just one client); does the "inefficient" code justify such performance? 3. My laptop is only dual core. Why would the Amazon EC2 server have worse performance than my laptop, and how is this explained? Excuse me for my ignorance.

  • Multiple users writing to one Samba mount point in OSX

    - by Sam
    I have an OS X box containing a script which writes a unique file to a Samba share; the first part of the script mounts the share. On the machine are 2 users, UserA and UserB. Each needs to be able to run this script at any given time, however only the user who mounted the share is able to write to it. I really need both users to have rwx access. Here is what I have tried: mounting and then chmod'ing the mount point (no effect; overruled by the Samba server?); chmod'ing the mount point and then mounting (same as above); sudo mount_smbfs. Both users have admin privileges. Ideally a solution would be executable by one of the users (contained in the script) and not rely on mounting at machine boot time. Any ideas appreciated, thanks!
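
    mount_smbfs can set the apparent file and directory modes at mount time, which sidesteps the chmod problem entirely; a sketch with hypothetical server, share and mount point names:

        # Mount the share so its contents appear group-writable to both local users:
        mkdir -p /Volumes/share
        mount_smbfs -f 0775 -d 0775 //user@server/share /Volumes/share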

  • Consulting a network admin for Rails and PHP applications

    - by Karo Devos
    Hi, I'm a web developer who writes Rails applications most of the time. Next month I'm going to switch from my current VPS to Linode. I'm wondering how much it would cost to have everything properly set up (or to be taught how to do it myself) to get my app up and running. My requirements are probably: nginx/Apache, REE/Ruby, Passenger, a full-blown PHP environment, system-wide RVM, a search engine such as Sphinx, and the ability to run cron jobs. I have some knowledge of Unix and was able to install everything I needed on my development system; however, I had quite a few issues setting everything up on my production server.

  • How can I tell if my live web-server is overloaded?

    - by Nick G
    We have a live webserver which doesn't seem to be performing all that well. It's a Dell PowerEdge machine, a few years old (dual core, 4GB), which is hosting about 20 low-traffic websites, and it doesn't seem to be as fast as it used to be. How can we determine the cause of this? If it were website traffic, I would expect high CPU, but CPU usage is quite low and hovers around the 15-30% mark except for very brief periods. I'm wondering if, rather than CPU performance, the problem is disk thrashing due to the constant reads/writes of all the small web files and database queries. It has 4x 7200 RPM SATA drives in RAID 5. So is there a way to check that it's not disk thrashing?
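
    On Linux, the sysstat tools make disk thrashing fairly easy to confirm or rule out; a minimal sketch:

        iostat -x 5   # %util near 100 and a high 'await' on a device means it is disk-bound
        vmstat 5      # a high 'wa' (I/O wait) column with an otherwise idle CPU points the same way
        top           # the %wa field in the CPU summary line tells the same story interactively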

  • Exceptions from automongobackup, yet script completes

    - by chakram88
    I am using automongobackup to, well, automate the backups of mongodb. The output from the script (to STDERR) has the following exceptions, yet the backup completes and the dump files are created: ###### WARNING ###### STDERR written to during mongodump execution. The backup probably succeeded, as mongodump sometimes writes to STDERR, but you may wish to scan the error log below: exception: connect failed exception: connect failed exception: connect failed exception: connect failed exception: HostAndPort: bad port # exception: connect failed exception: connect failed exception: connect failed exception: connect failed exception: connect failed exception: connect failed I know that the host and port are correct. If I run mongodump --host=127.0.0.1:27017 --journal (which is the effective command from automongobackup, based on the options set and my reading of the source code), everything runs cleanly without any error reporting and the dump files are created as expected. Why would automongobackup report connection errors even though it creates the dump files, while a straight call to mongodump does not? Debian 6.0 Lenny (from Linode image: Latest 3.2 (3.2.1-x86_64-linode23)), AutoMongoBackup VER 0.9, mongodb v2.0.2.

  • Kill child process when the parent exits

    - by kolypto
    I'm preparing a script for Docker, which allows only one top-level process, and that process should receive the signals so we can stop it. Therefore, I have a script like this: one application writes to syslog (a bash loop in this sample), and the other one just prints it.

        #! /usr/bin/env bash
        set -eu
        tail -f /var/log/syslog &
        exec bash -c 'while true ; do logger aaaaaaaaaaaaaaaaaaa ; sleep 1 ; done'

    Almost solved: when the top-level bash process gets SIGTERM, it exits, but tail -f continues to run. How do I instruct tail -f to exit when the parent process exits? E.g. it should also get the signal. Note: I can't use bash traps, since the exec on the last line replaces the process completely.
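
    GNU tail has a flag for exactly this case, and it works here because exec replaces the process image but keeps the PID: tail --pid=PID makes tail exit soon after that PID dies. A sketch of the adjusted script (assuming GNU coreutils tail):

        #! /usr/bin/env bash
        set -eu
        # $$ survives the exec below (exec keeps the PID), so tail exits
        # shortly after the exec'd top-level process is terminated.
        tail -f /var/log/syslog --pid=$$ &
        exec bash -c 'while true ; do logger aaaaaaaaaaaaaaaaaaa ; sleep 1 ; done'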

  • Where to look for regularly scheduled scripts?

    - by fontan
    It seems that our server freezes every 30 days around noon due to huge utilization of the xvda data-transfer partition: writes are 50 times higher than normal (according to the health monitor in Plesk). This seems to be the reason why Apache & co. become unstable, as (for example) all of Apache's processes end up waiting to write their logs (according to the service's full status). I am, however, unable to find any scheduled task that executes during that period. I have checked both the cron and anacron setups, and there is only one monthly anacron task, which is not executed around noon (according to /var/log/cron, where there is nothing unusual). Are there any other places to look for periodic processes? (I am just about to ask the server's provider the same question about any external maintenance run around this time, but I don't expect them to run anything time- or resource-consuming during the day.)
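
    Beyond the system crontab, there are several other places where a periodic job can hide; a checklist sketch:

        ls /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly
        for u in $(cut -d: -f1 /etc/passwd); do crontab -l -u "$u" 2>/dev/null; done   # per-user crontabs
        cat /etc/anacrontab                                                            # anacron schedule
        grep -i cron /var/log/syslog | tail -50                                        # what actually ran (log name varies by distro)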

  • Mimicking Google's Persistent Disks -- Is this a logical FreeBSD disaster recovery strategy?

    - by Casey Jordan
    I am looking into FreeBSD to provide a more comprehensive backup and disaster recovery strategy for database servers. Ideally I want to mimic what Google is doing with "persistent disks": https://developers.google.com/compute/docs/disks#snapshots. I am hoping someone who knows more about FreeBSD can validate these ideas/questions: I have read that FreeBSD can take instant disk snapshots; therefore, if our databases trigger a consistent state (block all writes, and flush buffers to disk), I would assume I could take snapshots every hour without a service interruption of more than a few seconds. Is this true? Is there a way to take snapshots and back them up offsite easily? Can this be done incrementally, so as to save disk space? If a rollback needed to be done, how long would it typically take? Is a rollback also instantaneous? Thanks!
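
    For reference, ZFS on FreeBSD covers the snapshot, incremental and offsite parts directly; a sketch with hypothetical pool, dataset and host names:

        zfs snapshot tank/db@hourly-01                 # atomic, near-instant snapshot
        zfs send tank/db@hourly-01 | ssh backuphost zfs recv backup/db      # full copy offsite
        zfs send -i tank/db@hourly-00 tank/db@hourly-01 \
            | ssh backuphost zfs recv backup/db        # incremental: ships only the changed blocks
        zfs rollback tank/db@hourly-01                 # rollback is also near-instant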

  • Disable writes when RAID is in degraded mode

    - by jolivier
    I have a RAID 5 array with 5 disks on my machine, and I suspect the motherboard chipset fails at some point and puts the RAID into degraded mode. Last time it happened, I only noticed when a 2nd drive connected to the same chipset failed, and I lost a lot of data. I would like to prevent this; in particular, I would like mdadm to disable writes on the array if one of the disks fails, so that in the meantime I get notified, recover, and can use my system again. Sadly I could not find this in man mdadm, so I was wondering if it is possible via a tool or a hidden option, since to me it looks like a standard feature of a RAID system. If this is not possible, I would also be happy with a solution that stops the array when it becomes degraded.
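
    There is no built-in "read-only when degraded" switch that I know of, but mdadm --monitor can run an arbitrary program on a Fail event, and that program can remount the filesystem read-only; a hedged sketch (the mount point and script path are hypothetical):

        #!/bin/sh
        # /usr/local/bin/md-fail.sh -- mdadm calls this as: md-fail.sh <event> <md-device> [component]
        if [ "$1" = "Fail" ]; then
            mount -o remount,ro /mnt/raid    # stop further writes until a human intervenes
        fi

        # Then run the monitor as a daemon:
        #   mdadm --monitor --scan --program=/usr/local/bin/md-fail.sh --daemonise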

  • How much data does Windows write on boot?

    - by soandos
    This question was inspired by Bob's comment to my answer here. On boot, Windows writes files to the hard drive (I imagine this to be the case, as it has a way of detecting whether the previous boot was interrupted by a hard power-off, and I am sure many other things). But assuming a "smooth" boot, with no errors, no logon scripts that run, and the like, about how much data gets written to the drive (a few KB, a few MB, a few GB)? For simplicity's sake, assume that: hibernation is turned off; the OS is Windows 7; the pagefile is turned off (does this matter right at boot, or only later?). How could one go about measuring this? Are there resources that have this information?

  • Is there a filesystem that is "friendly" to both Windows and Linux?

    - by Somebody still uses you MS-DOS
    I'm planning to install Ubuntu 10.04 alongside Windows 7. (I'm new to Linux; I have to use it at work, so I'm planning to install it at home to learn more.) I plan to use one partition for my Windows system files (C:), one partition for my personal files, which already exists (D:), and a new partition for Linux. What I want is a partition for my personal files that works across both systems, so whether I boot into Windows or Linux, I see the same "Videos", "Pictures" and "Projects" folders. Is it possible? Is there a filesystem capable of accepting writes from both systems without too much risk of corruption or the like? (It can't be FAT32; I need to store 4 GB files.) I've read some horror stories of corruption and would like to know, from a sysadmin's POV, all the risks involved in such a scenario.
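
    NTFS is the usual answer here: Windows 7 uses it natively, and Ubuntu 10.04 ships read/write NTFS support via ntfs-3g, so the existing data partition can simply be mounted from Linux. A sketch (the device name is hypothetical):

        sudo mkdir -p /media/data
        sudo mount -t ntfs-3g /dev/sda3 /media/data
        # or permanently, via /etc/fstab:
        #   /dev/sda3  /media/data  ntfs-3g  defaults,uid=1000,gid=1000  0  0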

  • How can I sync Access databases and keep them up-to-date?

    - by user327472
    I have an Access database on my server. We split it up and use the front-end database to search data and to add new records or reports on a local computer. If we update or add a new record, that writes to the back-end database. I want to use this database in another building, on other servers, and those servers have no direct connection. How can I sync both back-end databases to keep the data up to date? These details may be useful: it is a big amount of data, about 25,750 client records, and I guess there are more than 25 tables at 80 MB.

  • Access denied to EFS encrypted files after PC joins domain

    - by mjmarsh
    I'm experiencing strange behavior with the Windows Encrypting File System: I have a machine that is in workgroup mode (not joined to a domain). I encrypt an entire directory structure on the machine (basically a folder and subfolders with data files for my application). My application writes and reads files in the encrypted hierarchy as a local Windows user (let's call the account 'SecureUser'). This works fine. I then join the PC to a domain (let's call it 'TEST'). Afterwards, processes running as the local 'SecureUser' account can't read the files it wrote originally when it was off the domain. (What is also strange is that the files are now listed as "read only", and I cannot unset this flag via Windows Explorer or the command line, even though the change appears to succeed.) I then un-join the PC from the domain and everything works again. Is there something about changing a PC's domain membership that changes the behavior of EFS, so that previously encrypted files cannot be read even by the originating user? Thanks in advance.
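
    One way to narrow this down is to ask EFS which certificate protects a file versus which one the account currently holds; a hedged sketch with the built-in cipher tool (the path is hypothetical):

        rem List the certificate thumbprints that can decrypt a given file:
        cipher /c "C:\MyApp\Data\file.dat"
        rem Show the current user's EFS certificate and thumbprint:
        cipher /y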
