Search Results

Search found 1657 results on 67 pages for 'writes on'.


  • GNU Screen Draw Lag

    - by Daeden
    I like using screen with multiple splits. I usually like 3 sections: resource monitoring using htop, a text editor using Vim, and a command line using Bash. My issue is that when I am doing something that writes a good deal of text to STDOUT, like running make, and I am focused on that section, Screen lags on me. So much so that the other sections no longer update and Screen stops responding to commands like CTRL-A + TAB. I'm not entirely sure what the problem is, but it appears to have something to do with the cursor location, which blinks wildly while this is happening. I'm aware that using the vertical split functionality of Screen can lead to lag, but is this the cause? If so, is there a way to fix it aside from redirecting STDOUT?
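
    Since redirecting STDOUT is already on the table, a hedged sketch of that workaround (file name illustrative): the build writes to a file instead of the split, and the split only tails it on demand, which keeps Screen's redraw load low.

        make > build.log 2>&1 &
        tail -n 30 build.log        # check progress without streaming everything into the split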

    Read the article

  • Cannot get nscd to run. DNS cache stale as a result

    - by Phunt
    I'm trying to troubleshoot an issue on a MediaTemple server (running CentOS 5) where the DNS cache has grown stale - I think because nscd has crashed. I've tried restarting nscd:

        # service nscd restart
        Stopping nscd: [FAILED]
        Starting nscd: [ OK ]

    This makes sense, since if nscd has crashed it shouldn't already be running. But when I view the status of nscd:

        # service nscd status
        nscd dead but subsys locked

    And ps -A returns no processes related to nscd (I assume because it's dead). I've edited /etc/nscd.conf and uncommented the line that defines the location of the log file. It created the file but never writes anything to it. I tried looking at the init script but found it's no help, since the script thinks everything is running fine - the service returns that it started up correctly. How do I 'unlock' the subsys that nscd is complaining about?
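
    A hedged sketch of the usual fix for "dead but subsys locked" on RHEL/CentOS-style init systems: remove the stale lock file the crashed daemon left behind (and, if the cache itself is suspect, nscd's persistent database), then start the service again. Paths are the stock CentOS 5 locations.

        rm -f /var/lock/subsys/nscd    # stale lock from the crashed daemon
        rm -f /var/db/nscd/*           # optional: discard the persistent cache
        service nscd start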

    Read the article

  • How to pipe differently the body of the curl answer and the printed output?

    - by Antoine Lizée
    I would like to print on the command line some of curl's output, like the HTTP headers, followed by the body of the answer processed by a stdin/stdout program. For instance, print the status code:

        curl -s -w "%{http_code}\n" -o "/dev/null" http://myURL.com

    And then process the output with a JSON parsing tool:

        curl -s http://myURL.com | python -mjson.tool

    I would like to do both with one command, and I have the feeling that it may be possible thanks to the -o option, which separates curl's own output from the actual answer to the query. The problem is that -o writes directly to a file. Does somebody have a hack?
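
    One possible hack, assuming bash: process substitution gives -o a "file" that is really a pipe into the JSON parser, so a single curl invocation prints the status code while the body goes through python -mjson.tool. Note the two outputs are written concurrently, so their ordering on the terminal is not guaranteed.

        curl -s -w "%{http_code}\n" -o >(python -mjson.tool) http://myURL.com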

    Read the article

  • Multiple users writing to one Samba mount point in OSX

    - by Sam
    I have an OSX box containing a script which writes a unique file to a Samba share. The first part of the script mounts the share. On the machine are 2 users, UserA and UserB. Each needs to be able to run this script at any given time, however only the user who mounted the share is able to write to it. I really need both users to have rwx access. Here is what I have tried:

        Mounting, then chmod'ing the mountpoint (no effect - overruled by the Samba server?)
        chmod'ing the mountpoint, then mounting (same as above)
        sudo mount_smbfs

    Both users have admin privileges. Ideally a solution would be executable by one of the users (contained in the script) and not rely on mounting at machine boot time. Any ideas appreciated, thanks!
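
    A hedged suggestion: mount_smbfs accepts -f and -d flags that force the modes of files and directories on the mounted share, so mounting with group-writable modes may give both admin users write access (server, share, and mount point illustrative; the Samba server's own permissions can still override this).

        sudo mount_smbfs -f 0775 -d 0775 //user@server/share /Volumes/share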

    Read the article

  • Symlink across local volumes in webroot?

    - by geerlingguy
    I am looking for a good short-term solution to storage space concerns on my website. Currently, I have all uploaded files (flash video, images, etc.) inside the 'files' directory in my web root (/home/account/public_html/files). That directory is located on my high-speed main hard drive (a 15k RPM SCSI drive). I have another drive with much more capacity, but spinning at 10k RPM (so still fast, but not as good for random reads/writes as the main drive), mounted at /backup. Right now I'm just using it as a backup volume. I would like to create a symlink from my /home/account/public_html/files folder to /backup/files, and have all files reside on the second drive. However, if someone accesses a file at http://www.example.com/files/filename.jpg, would it still work if I symlinked to the second drive? (Basically, would Apache/PHP automatically know to follow the symlink for that directory?)
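
    In short, yes - as long as Apache is allowed to follow the link; PHP resolves symlinks transparently once Apache serves the path. A minimal sketch (Apache 2.2-style syntax, paths from the question):

        ln -s /backup/files /home/account/public_html/files

        # in the vhost or .htaccess:
        <Directory "/home/account/public_html/files">
            Options +FollowSymLinks
        </Directory>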

    Read the article

  • Script errors when run by launchd at startup, but not when run in Terminal

    - by Mechcozmo
    I'm attempting to create a RAM disk that loads its previous contents when the system starts up, and every six hours writes the contents to a disk image. Currently, when you run the script from the terminal ("sudo bash LogToRAM.sh") everything works fine, but when run from launchd during startup it doesn't work. Here are the lines from the log; the first line just gives some idea as to where in the boot process we are:

        SecurityAgent[202] Showing Login Window
        com.mechcozmo.LogToRAM[51] + /Developer/usr/bin/SetFile -a V /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] ERROR: File Not Found. (-43) on file: /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] + /usr/sbin/asr -source '/Library/Application Support/LogToRAM/RAMdisk_store.dmg' -target /Volumes/LogfileRAMdisk/ -noverify

    Here are the script and plist file in question. Note that 'set -vx' is up at the top of the script; it gives a lot of information about what is happening in the script. My current theory is that the /Volumes directory does not exist at this stage of the boot process, but to be honest that seems unlikely.
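
    If the /Volumes theory holds, one hedged workaround is to make the script wait for the filesystem to be ready before building the RAM disk, e.g. at the top of LogToRAM.sh (timeout value illustrative):

        # wait up to 60s for /Volumes to appear before proceeding
        for i in $(seq 1 60); do
            [ -d /Volumes ] && break
            sleep 1
        done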

    Read the article

  • headings numbering not updating

    - by Marwen Hizaoui
    I'm writing a report and I have some problems with headings. I have this structure:

        Chapter 1 (Heading 1)
          1.1 (Heading 2)
            1.1.1 (Heading 3)
          1.2 (Heading 2)
        Chapter 2 (Heading 1)
          2.1
          2.2

    The problem is that when I apply Heading 1, Word writes a number before the chapter title - "1 Chapter 1". I want numbers to appear only from Heading 2 down. I managed to change it using multilevel lists, but it doesn't update appropriately: I removed the number before Chapter 1, for example, but Chapter 2's subheadings did not update and still start numbering from 1 (not 2). Please help me out. Thanks.

    Read the article

  • Configure fallback redis server

    - by snøreven
    I am using redis as a cache server. Can I somehow configure multiple redis servers so that the cache stays fully functional (read/write) even if some of them go offline? I looked into master-slave replication, but the problem I see there is that if the master fails and I allow writes to the slaves, those writes get overwritten once the master is up again - the master then just serves the old data. The only solution I could come up with was disabling write-to-disc, but that sucks, as I lose everything if I have to restart the master. And I guess the slaves wouldn't be synced anymore if the master is gone.
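
    A hedged sketch: Redis Sentinel (stable as of Redis 2.8) addresses exactly this - it promotes a slave when the master fails, and when the old master returns it is re-attached as a slave, so its stale data is overwritten rather than served. The name, address, and quorum below are illustrative.

        # sentinel.conf
        sentinel monitor mycache 192.168.0.10 6379 2
        sentinel down-after-milliseconds mycache 5000
        sentinel failover-timeout mycache 60000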

    Read the article

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that represents a server which receives over TCP an image in bytes (of size at most 500 kB) and writes it to a file. It then applies a Sobel filter to the image and sends it over a TCP socket to the client side. I ran it on my laptop and it was very fast. But when I put it on an Amazon EC2 m1.large instance, I found out it is very slow - around 10 times slower. It might be inefficiency in the code's algorithm, but in fact my code does nothing but receive the image (like any byte file), run the Sobel algorithm, and send it back. I have the following questions:

        1. Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2
        2. Even if the code is not that efficient, the server is ultimately handling a very low load (just one client); does the "inefficient" code justify such performance?
        3. My laptop is only dual core... Why would the Amazon EC2 server have worse performance than my laptop? How is this explained?

    Excuse me for my ignorance.

    Read the article

  • consulting a network admin for rails and php applications

    - by Karo Devos
    Hi, I'm a web developer who writes mostly Rails applications. Next month I'm going to switch from my current VPS to Linode. I'm wondering how much it would cost to have everything properly set up (or be taught how to do it) to get my app up and running. My requirements are probably: nginx/Apache, REE/Ruby, Passenger, a full-blown PHP environment, system-wide RVM, a search engine such as Sphinx, and being able to run cron jobs. I have some knowledge of Unix and I was able to install everything I needed on my development system, but I had quite a few issues setting everything up on my production server.

    Read the article

  • What are the default/recommended access rights for %ALLUSERSPROFILE%?

    - by RED SOFT ADAIR-StefanWoe
    We have a Windows application that reads and writes some data for all users. We place it at %ALLUSERSPROFILE%\OurProgram*.* We now encounter a few cases in larger companies where users do not have write permission to %ALLUSERSPROFILE%. Most of these cases are running Windows 7, though the problem does not occur on a normal desktop installation of Windows 7. What is the recommended policy for this location? I have not found any "official" information about this. Is there a different location where all users have write permission?
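
    On Vista/7, a folder created under %ALLUSERSPROFILE% (C:\ProgramData) is typically writable only by administrators and by the user who created each file, which matches the symptom above. A hedged sketch of the common fix: the installer, running elevated, grants Users modify rights on the application's own subfolder (S-1-5-32-545 is the well-known SID for BUILTIN\Users).

        icacls "%ALLUSERSPROFILE%\OurProgram" /grant *S-1-5-32-545:(OI)(CI)M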

    Read the article

  • Linux: Limiting data throughput (pipe) in bytes per second?

    - by sdaau
    Hi all, I was wondering if there is a Linux program that can limit the data throughput of a pipe in actual bytes per second. From what I gather, the candidates would be:

        bfr - however, it has been removed from Debian (removal candidate: bfr)
        cpipe - however, the lowest resolution it seems to support is kB/s, meaning that buffer writes can still reach MB/s ([SOLVED] Is there a program to limit terminal pipe speed? - Page 2 - Ubuntu Forums)

    What I'd want is to be able to specify something like

        cat example.txt | ratelimit -Bps 100 > /dev/ttyUSB0

    ... and actually have a single byte from example.txt sent every 1/100 = 0.01 s (10 ms) to the output. Thanks in advance for any suggestions, cheers!
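
    One more candidate, assuming the pv utility is acceptable: its -L flag takes a rate in plain bytes per second, which matches the hypothetical ratelimit call above. pv smooths the rate over short intervals, so it approximates one byte per 10 ms rather than strictly guaranteeing it.

        cat example.txt | pv -q -L 100 > /dev/ttyUSB0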

    Read the article

  • Updating wordpress in a multi-node environment

    - by Peter
    I'm finding this very tricky in a multi-node environment with code under revision control, i.e. multiple frontends and a single database. I have a deployment process that pushes a git repo to the servers, but obviously if I update WordPress from within the admin panel, it will update the files on one FE only; I would then need to copy the new files over to the other FE nodes. Plus, whenever WordPress updates itself on a node, it writes code into the git working copy. That breaks the automated deploys that perform 'git pull', since the repo then has untracked changes and refuses to pull in new deploys without manual intervention. How does one easily keep WordPress updated in a multi-node (load-balanced) environment?
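
    One common pattern, sketched with the caveat that it fits git-driven deploys specifically: disable file modifications from the admin panel on all frontends via wp-config.php, apply updates once in a staging checkout, commit, and let the existing git deploy push them to every node.

        // wp-config.php: block plugin/theme/core file changes from the admin panel
        define('DISALLOW_FILE_MODS', true);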

    Read the article

  • Disadvantages of using a swap file/partition on an SSD, even when swappiness is set to 0

    - by pjv
    What are the disadvantages of using a swap file/partition on an SSD, even when swappiness is set to 0? I'm particularly interested in the /proc/sys/vm/swappiness=0 case. How many writes are still done to that swap file in practice, and does it have a negative impact on the SSD or any other disadvantage? Or is it nearly equivalent to not having a swap file at all? I am pretty aware of what swappiness=0 means, just not of what it amounts to in practice. My question stems from a problem I am experiencing without swap: http://stackoverflow.com/questions/4567972/error-executing-aapt-all-of-the-sudden. There are similar questions regarding SSDs and swap, but they don't go in-depth into the swappiness=0 case: "Disadvantages of not having a swap partition", "Should I keep my swap file on an SSD drive?"
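
    To measure rather than guess, a hedged sketch: watch actual swap traffic under normal load. The 'so' column of vmstat is KB/s swapped out, and /proc/vmstat keeps cumulative page counts since boot.

        vmstat 1
        grep -E 'pswpin|pswpout' /proc/vmstat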

    Read the article

  • How can I tell if my live web-server is overloaded?

    - by Nick G
    We have a live webserver which doesn't seem to be performing all that well. It's a Dell PowerEdge machine, a few years old (dual core, 4GB), which is hosting about 20 low-traffic websites, but it doesn't seem to be as fast as it used to be. How can we determine the cause of this? If it were website traffic, I would expect high CPU, but CPU usage is quite low and hovers around the 15-30% mark except for very brief periods. I'm wondering whether, rather than CPU performance, the problem is disk thrashing due to the constant reads/writes of all the small web files and database queries. It has 4x 7200 RPM SATA drives in RAID 5. So is there a way to check whether it's disk thrashing?
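
    A hedged check using iostat from the sysstat package: sustained %util near 100 and high await on the array are the signature of disk thrashing, whereas low values there alongside low CPU point elsewhere.

        iostat -x 5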

    Read the article

  • large RAID 10 vs small RAID 1

    - by user116399
    The machine will store and serve millions of small files (<15 kB each), and all those files require a total storage space of 400 GB. Considering the exact same SATA hard drive maker and model, in the exact same environment (OS, CPU, RAM, RAID controller, etc.), which one of the setups below would be faster?

        A) RAID 1 with 2 drives of 2 TB each, making up a total storage of 2 TB
        B) RAID 10 with 4 drives of 2 TB each, making up a total storage of 4 TB

    [EDIT]: I'm aware RAID 10 is faster than RAID 1. The larger the disk, at least in theory, the longer seeks/writes will take. So, will the performance gain of RAID 10 be outweighed by the "drag" caused by the larger disk area when seek/write operations happen?

    Read the article

  • Where to look for regular scripts?

    - by fontan
    It seems to me that our server freezes every 30 days around noon due to huge utilization of the xvda data transfer partition - writes are 50 times higher than normal (according to the health monitor in Plesk). This seems to me to be the reason why Apache & co. become unstable, as (for example) all of Apache's processes end up waiting to write their logs (according to the service's full status). I am, however, unable to find any scheduled task that would be executed during that period. I have checked both the cron and anacron setup, and there is only one monthly anacron task, which is not executed around noon (according to /var/log/cron, where there is nothing unusual). Are there any other places to look for periodic processes? (I am just about to ask the server's provider the same question about any external maintenance run around this time, but I don't expect them to run anything time- or resource-consuming during the day.)
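
    A hedged checklist of the places periodic jobs tend to hide beyond /etc/crontab:

        ls /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly
        for u in $(cut -d: -f1 /etc/passwd); do crontab -l -u "$u" 2>/dev/null; done
        cat /etc/anacrontab
        ls /var/spool/cron      # per-user crontabs (Red Hat layout; Debian uses /var/spool/cron/crontabs)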

    Read the article

  • Mimicking Google's Persistent Disks -- Is this a logical FreeBSD disaster recovery strategy?

    - by Casey Jordan
    I am looking into FreeBSD to provide a more comprehensive backup and disaster recovery strategy for database servers. Ideally I want to mimic what Google is doing with "persistent disks": https://developers.google.com/compute/docs/disks#snapshots I am hoping someone who knows more about FreeBSD can validate these ideas/questions:

        1. I have read that FreeBSD can take instant disk snapshots, so if our databases trigger a consistent state (block all writes and flush buffers to disk), I would assume I could take snapshots every hour with a service interruption of no more than a few seconds. Is this true?
        2. Is there a way to take snapshots and back them up offsite easily? Can this be done incrementally, to save on how much disk space is actually used?
        3. If a rollback needed to be done, how long does it typically take? Is a rollback also instantaneous?

    Thanks!
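
    For questions 2 and 3, a hedged sketch assuming ZFS rather than UFS (UFS has mksnap_ffs for the snapshot itself, but ZFS also gives incremental offsite transfer; dataset, snapshot, and host names are illustrative): snapshots are near-instant, send -i ships only the blocks changed since the previous snapshot, and rollback is a single fast command.

        zfs snapshot tank/db@hourly-01
        zfs send -i tank/db@hourly-00 tank/db@hourly-01 | ssh backuphost zfs recv backups/db
        zfs rollback tank/db@hourly-01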

    Read the article

  • Override some DNS records and delegate others to another nameserver

    - by Addev
    I'm starting to play with nameservers. Currently I have:

        A domain: mydomain.com
        Access for writing the whois NS records
        Access for writing DNS records at the domain hosting provider, nsX.foo.com
        A shared hosting (hostC) whose cPanel writes to the nameservers at nsX.bar.com

    Basically I want the following structure:

        hostA.mydomain.com -> hostA
        hostB.mydomain.com -> hostB
        hostC.mydomain.com -> hostC
        mydomain.com       -> hostC
        *.mydomain.com     -> hostC

    What's the correct way of configuring this? By the way, I have configured the following records:

        hostA.mydomain.com IN A [IP_OF_hostA]   (at foo.com's nameservers)
        hostB.mydomain.com IN A [IP_OF_hostB]   (at foo.com's nameservers)

    But now I don't know how to specify that @.mydomain.com and *.mydomain.com are resolved by ns1.bar.com and ns2.bar.com, and it is hard to experiment given the propagation delays after editing the records.
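
    A hedged sketch of the zone at foo.com's nameservers (IPs illustrative). One caveat worth naming: NS records can delegate a subdomain to ns1/ns2.bar.com, but the apex (@) and a wildcard cannot be delegated out of your own zone, so the practical fix is to publish hostC's address for them directly:

        hostA  IN A  192.0.2.10
        hostB  IN A  192.0.2.11
        @      IN A  192.0.2.30   ; hostC
        *      IN A  192.0.2.30   ; hostC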

    Read the article

  • Minimum rights to access the whole Users directory on another computer

    - by philipthegreat
    What are the minimum rights required to access the Users directory on another computer via an admin share? I have a batch file that writes some information to a few other computers using a path of \\%COMPUTERNAME%\c$\Users\%USERNAME%\AppData\Roaming. The batch files run under an unprivileged user (part of Domain Users only). How do I set appropriate rights so that the service account can access the AppData\Roaming folder for every user on another computer? I'd like to grant rights lower than Local Admin, which I know will work. Things I've attempted:

        As Domain Admin, giving Modify rights to the C:\Users\ directory on the local computer. Error: Access Denied.
        Setting the service account as Local Admin on the other computer. This works, but is against IT policy where I work.

    I'd like to accomplish this with rights lower than Local Admin. Any suggestions?
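
    One hedged angle: c$ is an administrative share, and Windows requires administrator rights for those by design, so no ACL change on C:\Users will open it to an unprivileged account. A dedicated share plus NTFS modify rights for the service account avoids that (share and account names illustrative; run elevated on the target machine):

        net share UserData=C:\Users /grant:DOMAIN\svc-account,CHANGE
        icacls C:\Users /grant DOMAIN\svc-account:(OI)(CI)M /T

    The batch file would then write to \\%COMPUTERNAME%\UserData\%USERNAME%\AppData\Roaming instead of the c$ path.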

    Read the article

  • Is there a filesystem that is "friendly" to both Windows and Linux?

    - by Somebody still uses you MS-DOS
    I'm planning to install Ubuntu 10.04 alongside Windows 7. (I'm new to Linux, but have to use it at work, so I'm planning to install it at home to learn more.) I plan to use a partition for my Windows system files (C:), a partition for my personal files that already exists (D:), and a new partition for Linux. What I want is to have a partition for my personal files that works across both systems - so, whether I boot into Windows or Linux, there are the same "Videos", "Pictures", and "Projects" folders. Is it possible? Is there a filesystem capable of handling writes from both systems without too much risk of corruption? (It can't be FAT32; I need to store 4 GB files.) I've read some horror stories of corruption, and would like to know from a sysadmin's POV all the risks involved in such a scenario.
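
    One hedged option: keep the shared data partition as NTFS and mount it on the Linux side with the ntfs-3g driver, the stock choice for read/write NTFS on Ubuntu 10.04, which has no 4 GB file-size limit. A sketch of the /etc/fstab line (UUID and mount point illustrative):

        UUID=0123456789ABCDEF  /media/data  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0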

    Read the article

  • Exceptions from automongobackup, yet script completes

    - by chakram88
    I am using automongobackup to, well, automate the backups of MongoDB. The output from the script (to STDERR) contains the following exceptions, yet the backup completes and the dump files are created:

        ###### WARNING ######
        STDERR written to during mongodump execution.
        The backup probably succeeded, as mongodump sometimes writes to STDERR, but you may wish to scan the error log below:
        exception: connect failed   (x4)
        exception: HostAndPort: bad port #
        exception: connect failed   (x6)

    I know that the host and port are correct. If I run

        mongodump --host=127.0.0.1:27017 --journal

    (which is the effective command from automongobackup, based on the options set and my reading of the source code), everything runs clean without any error reporting, and the dump files are created as expected. Why would automongobackup report connection errors, even though it does create the dump files, while a straight call to mongodump does not?

        Debian 6.0 Lenny (from Linode image: Latest 3.2 (3.2.1-x86_64-linode23))
        AutoMongoBackup VER 0.9
        mongodb v2.0.2

    Read the article

  • How much data does windows write on boot

    - by soandos
    This question was inspired by Bob's comment to my answer here. On boot, Windows writes files to the hard drive (I imagine this to be the case, as it has a way of detecting whether the previous boot was interrupted by a hard power-off, and I am sure it does many other things). But assuming a "smooth" boot, with no errors, no logon scripts, and the like, about how much data gets written to the drive (a few KB, a few MB, a few GB)? For simplicity's sake, assume that: hibernation is turned off; it's Windows 7; the pagefile is turned off (does this matter right at boot, or only later?). How could one go about measuring this? Are there resources that have this information?
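
    A hedged way to measure it: the Windows Performance Toolkit can trace disk I/O across a boot, and summing the write events in the resulting trace gives the number directly (WPT must be installed; flags illustrative). Open the generated .etl in xperfview/WPA and total the disk-write sizes.

        xbootmgr -trace boot -traceFlags BASE+DISK_IO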

    Read the article

  • Disk controller speed responsible for slow write speeds?

    - by vizvayu
    I have a question. I'm using ESXi 4.0U1 on an IBM x3200 M2 with an integrated LSI 1064e RAID controller, without any kind of cache. I have 3 250GB hot-swap SATA HDs configured in RAID 1E (IME). ESXi works fine and read speeds are quite OK, but write speeds are incredibly slow - never more than 8 MB/s, and that's the best-case scenario: benchmarking with iozone streaming writes, using a VMware Paravirtual controller, with only this VM active and no swapping of any kind (total VM memory reserved). I already wrote to IBM, but I don't have any kind of paid support so they didn't even answer. So I'm just wondering: does anybody have experience with a similar setup? I just want to be sure this is hardware-related and can't be fixed with some config option, because I'm thinking of buying a new RAID controller (the Adaptec 2405 looks nice). Thanks again!
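
    Before buying hardware, a hedged check with esxtop can separate array latency from host-side overhead: in the disk view, high DAVG/cmd with low KAVG/cmd points at the controller/disks (plausible for a cacheless controller), while high KAVG/cmd points back at ESXi.

        esxtop        # press 'd' for the disk adapter view; watch DAVG/cmd and KAVG/cmd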

    Read the article
