Search Results

Search found 1657 results on 67 pages for 'writes on'.


  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions: dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many FSs use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
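
    A minimal sketch of the hard-link approach described above, assuming GNU coreutils; the paths and patch file are hypothetical. Whether a link gets broken on update depends on whether each tool replaces the file or edits it in place, which is exactly the caveat in the question.

        #!/bin/sh
        # Hard-link "copy": every file in work-1 shares storage with the
        # original until some tool deletes and recreates it.
        cp -al /srv/code/original /srv/code/work-1   # hypothetical paths
        cd /srv/code/work-1
        patch -p1 < /tmp/submission.patch            # hypothetical patch file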

    Read the article

  • Overlapping Samba Shares

    - by Toaomalkster
    Is it OK to have Samba shares that overlap, like the following?

        [whole-drive]
        path = /mnt/myusbdrive
        ...

        [music]
        path = /mnt/myusbdrive/music
        ...

        [movies]
        path = /mnt/myusbdrive/movies
        ...

    I have a mounted external HDD with music and movies, plus a whole bunch of other stuff like backups. I want to expose the music and movies directories as separate Samba shares (probably with guest access), so that they're uncluttered with all the other stuff; and I want to expose the entire drive as a separate Samba share (with higher permissions) for doing more administrative things across the drive. Does Samba behave well with this configuration? I'm wondering if I'd end up with problems like phantom writes if the same file is accessed at the same time across two different shares. Details: OS: Debian GNU/Linux wheezy/sid on Raspberry Pi; HDD: NTFS, mounted as ntfs-3g; Samba: version 3.6.6.

    Read the article

  • Join performance on MyISAM and InnoDB tables

    - by j0nes
    I am thinking about converting some tables from MyISAM to InnoDB on my MySQL server. The tables will certainly benefit from the change, because a lot of write requests come to these tables while there are also quite a lot of read requests at the same time. However, they are often joined with tables that get almost no writes. Is there a performance penalty when joining MyISAM and InnoDB tables, or should everything work fine? Second question: during backups at night, I copy data from the InnoDB tables to MyISAM tables for archiving purposes. In these backups a lot of writes happen, but there are almost no reads from these archive tables. Would these tables also benefit from using InnoDB, or is this just a waste of space and RAM?
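
    If the archive tables do get converted, a sketch of the change via the mysql CLI, in the same style as scripts elsewhere on this page; credentials, database, and table names are assumptions:

        #!/bin/sh
        # Convert one archive table to InnoDB and confirm the engine afterwards.
        mysql -uUSER -pPASSWORD -hlocalhost dbName -e "ALTER TABLE archive ENGINE=InnoDB;"
        mysql -uUSER -pPASSWORD -hlocalhost dbName -e "SHOW TABLE STATUS WHERE Name='archive';"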

    Read the article

  • Cron Permission Denied

    - by worldthreat
    Good day. I have a bash script in my home directory that works properly from the command line (file structure is the default Media Temple DV, noted for certain permission issues), but I receive this error from cron: "/home/myFile.sh: line 2: /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql: Permission denied". Notice it's just line 2; it writes to the local server just fine. Below is the bash file:

        #!/bin/bash
        mysqldump -uUSER -pPASSWORD -hHOST dbName > /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql
        mysql -uadmin -pPASSWORD -hlocalhost dbName < /var/www/vhosts/domain.com/subdomains/techspatch/installation.sql

    I can't chmod from bash (yes, I tried). Writing the file there and setting the permissions before the transfer is useless. I have googled the heck out of this situation and it still seems unique. Any insight is appreciated.
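
    A hedged diagnostic sketch: cron jobs often run as a different user than the interactive shell, so logging the effective user and the target directory's permissions from inside cron usually pinpoints the denial. The log path is hypothetical.

        #!/bin/bash
        # Run this from the same crontab entry to see who cron runs the job as
        # and what the target directory's permissions actually are.
        {
          echo "=== $(date) ==="
          id
          ls -ld /var/www/vhosts/domain.com/subdomains/techspatch
        } >> /tmp/cron-debug.log 2>&1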

    Read the article

  • Drive configuration for 5 large databases

    - by Mr. Flibble
    I've got 5 databases, each 300 GB, currently on a RAID 5 array consisting of 5 drives. All the databases are used heavily at the same time, so drive speed is an issue. Would I see better performance if I got rid of the RAID 5 configuration and just put each database on a separate drive? The redundancy provided by RAID 5 is not necessary due to mirroring elsewhere. Would the server then be able to perform reads/writes to the different databases' drives in parallel, more so at least than when it's in RAID? This is all on Windows 2003 / SQL 2008.

    Read the article

  • Do multiple files in SQL Server help reduce conflicts in growth and file-locking when using RAID?

    - by Dr Giles M
    I've been reading around and get the impression that if you are using RAID, then using multiple SQL Server files within a filegroup won't yield any more improvements, and the benefits are purely administrative (if you start to run out of space, or want to partition data into manageable chunks for backups/balancing the data around your big server room). However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller databases, SQL Server will perform growth and locking operations (for writes) on a LOGICAL file basis. So even if you are using RAID, it seems to make sense to have multiple files in a filegroup to balance I/O. Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefits of reduced locking? I'm also aware that the behaviour and benefits may be different for tables/indices/logs. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?

    Read the article

  • Building optimal custom machine for Sql Server

    - by Chad Grant
    Getting the hardware in the mail any day. Hardware related to my question:

        10x 15.5k RPM SAS Seagate Cheetahs
        2x Adaptec 5405 PCIe RAID cards
        Motherboard with integrated SAS RAID

    I was thinking I would build two RAID 10 arrays, one for data and one for logs, with the remaining two drives in a RAID 0 for TempDB. I will probably throw in a drive for the OS. Does putting the SQL Server application/exes on a RAID make a difference, and is there any impact to leaving the OS on a relatively slow disk compared to the RAID arrays? I have 5-6 DBs, combined < 50 GB, with a relatively good/constant load, estimating 60-70% reads vs. writes. Planning on using log shipping as well, if that matters. Any advice or suggestions?

    Read the article

  • How can I get rid of / hide :2eDS_Store files on my linux netatalk server?

    - by Douglas Mayle
    I'm running a netatalk server process on my linux server that serves files up to Mac client machines. Whenever you use Mac's Finder to access foreign filesystems over netatalk, it creates '.DS_Store' files to store information about the folder. Normally, these files would be hidden by default, and I wouldn't care. Unfortunately, netatalk doesn't allow access to local hidden files, so when the Mac writes and reads these, it renames them :2eDS_Store on the local filesystem. When you have a deep tree, you end up with these littered all over the place, and other Windows and Linux clients have to deal with them. How do I make these available to Mac clients and hidden from everyone else?
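
    Until the Finder behaviour itself is changed, a cron-able cleanup sketch with a hypothetical share path; it removes the litter the other clients see, but does not stop new files from appearing:

        #!/bin/sh
        # Delete the renamed .DS_Store litter under the netatalk share.
        find /srv/netatalk-share -type f -name ':2eDS_Store' -delete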

    Read the article

  • Why can't a PHP script write a file on Server 2008 via the command line or Task Scheduler?

    - by rg89
    I have a PHP script. It runs well when I use a browser, and it writes an XML file in the same directory. The script takes ~60 seconds to run, and the resulting XML file is ~16 MB. I am running PHP 5.2.13 via FastCGI on Server 2008 64-bit. I created a task in Task Scheduler to run: c:\php5\php.exe "D:\inetpub\tools\something.php". No error is returned, but no file is created. If I run this same path and argument at a command line, it does not error and does not create the file. I am doing a simple fopen/fwrite/fclose to save the contents of a PHP variable to an .xml file, and the file only gets created when the script is run through the browser. Thanks

    Read the article

  • Alternatives for heapdumps creation with higher performance than jmap?

    - by Christian
    Hi, I have to create heap dumps, which works nicely with jmap. My problem is that jmap takes very long to create the heap dump file, especially when the heap is getting bigger (> 1 GB). One situation as an example: when the server gets into trouble with the heap space, I want to restart it automatically and create a heap dump before the restart. This works, but it takes too long to write the heap dump, so the server is down for too long; the heap dump creation takes longer than one hour. I know about -XX:+HeapDumpOnOutOfMemoryError, but most of the time I can find the memory problem before the exception is thrown by the JVM. Is there an alternative to jmap that writes heap dumps faster? A special solution for the example above would also be appreciated. This question is a mix between programming and system administration, but I think I'm in the right place here.
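
    One commonly suggested workaround, sketched here under stated assumptions (the PID lookup and paths are hypothetical): snapshot the process quickly with gcore, then run jmap against the core file offline, so the JVM is only paused for the core dump rather than the full heap walk.

        #!/bin/sh
        # Take a fast core snapshot, then extract the heap dump offline.
        PID=$(pgrep -f my-app.jar)                      # hypothetical process match
        gcore -o /tmp/app-core "$PID"                   # writes /tmp/app-core.<pid>
        jmap -dump:format=b,file=/tmp/app.hprof \
             "$(command -v java)" "/tmp/app-core.$PID"  # java binary must match the JVM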

    Read the article

  • Can unexpected power loss harm a Linux install?

    - by Johan Elmander
    I am developing an application on a Linux embedded board (running Debian), e.g. Raspberry Pi, Beagle Board/Bone, or Olimex. The board operates in an environment where the electricity is cut unexpectedly (it is far too complicated to place a PSU, etc.), and this happens a couple of times every day. I wonder if the unexpected power cuts could cause crashes or other problems in the Linux operating system? If it is something I should worry about, what would you suggest to protect the OS against damage from unexpected power cuts? PS. The application needs to write some data to the storage medium (SD card), so I think it would not be suitable to mount it as read-only.
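
    A common pattern for such setups, sketched under assumptions (the /data mount point and sample file are hypothetical): keep the data partition read-only and flip it to read-write only around each write, syncing before flipping back, so the window in which a power cut can corrupt the filesystem stays small.

        #!/bin/sh
        # Remount-write-sync-remount pattern for an SD card data partition.
        mount -o remount,rw /data
        cp /tmp/sample.dat /data/
        sync                          # force buffers out to the card
        mount -o remount,ro /data     # back to read-only between writes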

    Read the article

  • Worth it to move /var to physical disk vs logical?

    - by Tammer Ibrahim
    Brief question about partition layout. I use an SSD for the /, /boot, /usr, and /home partitions. I'd like to move /var to a mechanical disk to minimize writes to the SSD. I'm mainly concerned about maximizing drive life rather than maximizing performance (although I obviously wouldn't want to cripple my server). My mechanical disks consist of two drives sharing LVM, and a third used for nightly rsync backups. I also have a bunch of old 2.5in hard disks lying around. My question is: should I simply create a new LVM volume for /var on my primary data store, or would it be worth the increased energy consumption (in terms of maximizing the lifetime of the LVMed drives) to install a low-volume 2.5in disk to use just for /var? On a more general level, my question is about the trade-offs of placing OS mounts on the same physical volumes as my data. Thanks for any help!
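
    For the LVM route, a rough migration sketch, assuming a volume group named vg0 and a 20 GB size (both assumptions), run from single-user or rescue mode so nothing is writing to /var:

        #!/bin/sh
        # Carve a /var volume out of the existing pool and migrate the data.
        lvcreate -L 20G -n var vg0
        mkfs.ext4 /dev/vg0/var
        mount /dev/vg0/var /mnt
        rsync -aX /var/ /mnt/
        echo '/dev/vg0/var  /var  ext4  defaults  0  2' >> /etc/fstab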

    Read the article

  • Apache2 BufferedLogs On - anybody using it ?

    - by Qiqi
    Greetings. I am wondering whether anybody is using BufferedLogs On with Apache2 and has found any issues. The feature is marked as experimental, but has been for many years now, so I guess it's pretty stable. I am running some servers with constrained disk I/O capacity at the moment, so I turned it on, hoping that even a small benefit could help in the long run ;-) I get several to several hundred requests per second, so by my thinking there is really no need to write to the log after each request, because honestly I don't think my filesystem is the best handler for many unnecessary writes (OCFS2 shared among several DomUs in Xen).

    Read the article

  • Postfix sendmail -f configuration

    - by William
    I have Postfix installed on two servers. One of them writes e-mail (satellite) and the other one delivers the e-mails (smarthost). When I send e-mail from the satellite server I'm using the sendmail command. My problem is that when e-mail arrives, the Return-Path is set to user@hostname, where user is the user running sendmail and hostname is the server's hostname. If I use the -f parameter with sendmail I can change that, but I'm hoping there is a way to do it in a configuration file for Postfix. Is this possible, or should I just deal with having to configure all my software to add the -f argument? Thanks in advance.
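
    One way this is often handled within Postfix itself, sketched here with assumed addresses and map path: sender_canonical_maps rewrites the envelope sender on the satellite, so the -f flag is no longer needed.

        #!/bin/sh
        # Map the local system sender to the desired envelope address.
        echo 'user@host.example.com  noreply@example.com' > /etc/postfix/sender_canonical
        postmap /etc/postfix/sender_canonical
        postconf -e 'sender_canonical_maps = hash:/etc/postfix/sender_canonical'
        postfix reload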

    Read the article

  • Need software to save videos from 4tube.com - to watch the videos smoothly

    - by Carl
    Isn't there a program that will capture screen writes at the hardware level? I have tried several Firefox add-ons and several stand-alone programs, and none of them will save videos from this site. I even paid for Replay Media Catcher, and it didn't work, so I got a refund. (The website for the best Firefox video downloader I have, DownloadHelper, said Replay Media Catcher worked with that site.) I have a slow internet connection and cannot watch videos smoothly unless I can cache them. This site (4tube.com) doesn't cache: when you restart, it reloads; when you pause, it stops. So I need to be able to save the videos to be able to watch them.

    Read the article

  • Bash - Program is writing directly to terminal

    - by Salis
    Valve's dedicated server for the Source Engine (srcds_run) on Linux writes directly to the terminal, not stdout. I want to run it as an /etc/init.d daemon on Debian 6, and I'd like to redirect/capture the output to a file. How can I do that? And better yet, why would they output directly to the terminal, is there any benefit in doing that? I suppose I could start another bash instance just for srcds_run, but that seems like a dirty solution, and I still don't know how to redirect the output.
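
    A possible capture sketch, assuming util-linux's script(1) is available, with hypothetical paths and flags for srcds_run: script records everything the program writes to its controlling terminal, including output that bypasses stdout.

        #!/bin/sh
        # Run the server under script(1) so even direct-to-terminal output
        # is captured to a log file.
        script -c '/opt/srcds/srcds_run -game cstrike' /var/log/srcds.log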

    Read the article

  • Tool to check if XML is valid in my VS2012 comments

    - by davidjr
    I am writing the documentation for our company's software, developed with VS2012. I need to add XML examples to the summary of each class, due to XML instantiation of objects. We are using Sandcastle to create the documentation (company choice), and I want to be able to review my XML comments without building the help file every time. Is there an application that anyone would recommend where I can view how the XML renders before I build the help file? Here is my example:

        /// <summary>
        /// Performs DFT on a data array, writes output in a CSV file.
        /// </summary>
        /// <example>
        /// <para>XML declaration</para>
        /// <code lang="xml" xml:space="preserve">
        /// &lt;DataProvider name="DftDP" description="Computes DFT" etc...

    I want to check the XML to make sure it is valid, maybe by copying and pasting it into a tool of some sort?
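
    For a quick validity check without Sandcastle, a sketch using xmllint (paths are hypothetical): strip the /// prefixes and wrap the fragment in a dummy root element, since a bare fragment is not a well-formed document on its own. The fragment must be complete (no trailing "etc...") for the check to pass.

        #!/bin/sh
        # Extract the doc-comment XML and check that it is well-formed.
        sed -n 's|^[[:space:]]*///||p' MyClass.cs > /tmp/frag.xml
        ( echo '<root>'; cat /tmp/frag.xml; echo '</root>' ) | xmllint --noout -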

    Read the article

  • MongoDB ReplicaSet Elections when some nodes are down

    - by SecondThought
    I'm trying to get into the replica set concept, and found something odd in the MongoDB documentation: "For a node to be elected primary, it must receive a majority of votes. This is a majority of all votes in the set: if you have a 5-member set and 4 members are down, a majority of the set is still 3 members (floor(5/2)+1). Each member of the set receives a single vote and knows the total number of available votes. If no node can reach a majority, then no primary can be elected and no data can be written to that replica set (although reads to secondaries are still possible)." (taken from here) So, if I understand correctly, in the 5-member case mentioned there, the one node still standing WILL NOT be chosen as primary and the whole set will not accept any writes, even if this single node was the last primary before the election? If that's true, there can be many less radical cases that end up with a degenerate set. How can we avoid this?
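
    One standard mitigation, sketched with assumed hostnames: keep an odd number of voters and place a vote-only arbiter on a separate machine, so losing a data node still leaves a reachable majority.

        #!/bin/sh
        # Add an arbiter to the set from any machine with the mongo shell.
        mongo --host primary.example.com \
              --eval 'rs.addArb("arbiter.example.com:27017")'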

    Read the article

  • How to subscribe to a youtube feed from linux command line?

    - by Tim
    I want to subscribe to a YouTube channel and automatically download new videos to my Linux machine. I know I could do this e.g. with Miro, but I will not watch the videos using Miro, I want to choose the quality, and I would like to run it as a cronjob. It should be able to: know which feed entries are new and not download old entries; resume (or at least redownload) failed/incomplete downloads from older sessions. Are there any complete solutions for this? If not, it would be enough for me (maybe even preferable) to just have a command-line RSS reader that remembers which entries have already been seen and writes the new video URLs (e.g. http://www.youtube.com/watch?v=FodYFMaI4vQ&feature=youtube_gdata from http://gdata.youtube.com/feeds/api/users/tedxtalks/uploads) into a file. I could then accomplish the rest using a bash script and youtube-dl. What programs would be usable for this purpose?
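
    In the spirit of the bash-plus-youtube-dl plan above, a rough sketch; the seen-file path and the format code are assumptions, and the feed URL is the one from the question:

        #!/bin/sh
        # Download only videos not yet recorded in the seen-file.
        FEED='http://gdata.youtube.com/feeds/api/users/tedxtalks/uploads'
        SEEN="$HOME/.yt-seen"
        touch "$SEEN"
        wget -qO- "$FEED" \
          | grep -o 'http://www.youtube.com/watch?v=[A-Za-z0-9_-]*' \
          | sort -u \
          | while read -r url; do
              grep -qxF "$url" "$SEEN" && continue
              youtube-dl -f 18 "$url" && echo "$url" >> "$SEEN"
            done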

    Read the article

  • What is the harm in giving developers read access to application server application event logs?

    - by Jim Anderson
    I am a developer working on an ASP.NET application. The application writes logging messages to the Windows event log, a custom application log just for this application. However, I do not have any access to the testing or staging web/application servers. I thought an admin could just give me read access to this event log to help in debugging problems (currently a service that works in dev is not working in the test environment, and I have no idea why), but that is against my client's policy (I'm a consultant). I feel silly having to keep asking an admin to look at the event log for me. What is the harm in giving developers read access to application server application event logs? Is there a different method of application logging that sysadmins prefer programmers use? Surely admins don't want to be fetching logging messages for developers all the time.

    Read the article

  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to set up a Postfix/Dovecot mail server with Postfix Admin as a web front end. As regards the directory structure for virtual mail users, the author of the tutorial writes: "Virtual mail users are those that do not exist as Unix system users. They thus don't use the standard Unix methods of authentication or mail delivery and don't have home directories. That is how we are managing things here: mail users are defined in the database created by Postfix Admin rather than existing as system users. Mail will be kept in subfolders per domain and account under /var/vmail - e.g. me@example.com will have a mail directory of /var/vmail/example.com/me." But when he gives instructions for configuring Postfix Admin, he suggests this be contained in Postfix Admin's config.inc.php:

        // Mailboxes
        // If you want to store the mailboxes per domain set this to 'YES'.
        // Examples:
        //   YES: /usr/local/virtual/domain.tld/user@domain.tld
        //   NO:  /usr/local/virtual/user@domain.tld
        $CONF['domain_path'] = 'NO';

    Is there an inconsistency?

    Read the article

  • Why do I get a My Disney window when installing VMWare Workstation?

    - by Marc Esher
    I'm assuming this is a virus, though my virus checker can't find it. I downloaded the latest VMware Workstation 7 installer. I'm running Windows 7 64-bit. When I go to install it, the installer window is a Disney website. Upon further investigation, what's happening is that the VMware installer extracts/writes a bunch of files to a temp directory. One of those files is an index.htm file. When I open it, sure enough, it's the Disney file. I used Sysinternals Process Monitor to look for anything fishy, but the only things I see touching that index.htm file are the VMware installer and explorer.exe.

    Read the article

  • Music tagging software more consistent than Tag&Rename?

    - by Billy ONeal
    A few years ago I spent an insane amount of time using the excellent Tag&Rename program. However, I find that for random, inexplicable reasons some music tools simply disregard my tags, drop or destroy the album art, or have strange handling of some characters. For example, "AC/DC" is poorly handled by most music players when I use Tag&Rename to write the tags. And if I write the tag in iTunes, Winamp seems not to like it, and vice versa, and neither of those works with Amarok. Is there a piece of software that works like Tag&Rename but is more compatible, or is there a way to ensure Tag&Rename writes more compatible tags?

    Read the article

  • Add registry entries for all users

    - by George02
    I've installed some software on my Windows 8 computer which writes entries to my registry. How can I modify these registry entries for all users? For example, what I need to modify are values under this key, but it only applies to a single user:

        [HKEY_USERS\S-1-5-21-543895283-3741240661-2983116896-500\Software\IvoSoft\ClassicStartMenu\Settings]

    But "S-1-5-21-543895283-3741240661-2983116896-500" is different depending on the user. How can I change that key for all users? I've tried to work with this key, but it is not possible:

        [HKEY_USERS\S-1-5-21-*\Software\IvoSoft\ClassicStartMenu\Settings]

    Read the article

  • plesk 9 spamassassin server wide blacklist via cron?

    - by Kqk
    Hi, we're running Ubuntu 8.04 LTS and Plesk 9.2. Our simple task is to set up a periodic blacklist for SpamAssassin, e.g. using this script:

        #!/bin/sh
        #! Script by AJR to update local spamassassin rules
        cd /tmp
        wget -c http://www.stearns.org/sa-blacklist/sa-blacklist.current
        mv sa-blacklist.current local.cf -f
        mv local.cf /etc/mail/spamassassin -f
        rm local.cf -f
        /etc/init.d/psa-spamassassin restart

    Now, this script runs fine, but Plesk doesn't seem to recognize the blacklist in its GUI, which is annoying, especially because Plesk itself writes to /etc/mail/spamassassin/local.cf. I wasn't able to find the secret place where Plesk distinguishes between entries in local.cf added via the GUI and via the command line. Any help is appreciated! Thanks.

    Read the article
