Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.


  • logrotate deletes all maillogs older than one day

    - by shadyabhi
    I see only two files, maillog and maillog.1, in /var/log. Grepping for maillog in the logrotate.d directory gives three files that mention maillog.

    syslog:

        /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron {
        #/var/log/messages /var/log/secure /var/log/spooler /var/log/boot.log /var/log/cron {
            daily
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    syslog-ng:

        /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron /var/log/kern.log /var/log/kern {
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    and maillog:

        /var/log/maillog {
            daily
            compress
            # rotate 365
            rotate 14
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    I am new to logrotate, so maybe I am missing something obvious. What could be the issue? The setup was already in place when I started managing the server, so I also don't know why maillog is mentioned three times in the logrotate configuration.
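    A quick way to see which of the three stanzas actually acts on /var/log/maillog, and with which rotate settings, is a logrotate debug run (read-only, nothing is rotated). This is only a sketch and assumes the stock /etc/logrotate.conf entry point; adjust the path to your distribution:

        # Dry-run logrotate and show what it plans to do with maillog;
        # "duplicate log entry" warnings here usually mean the file is
        # matched by more than one stanza.
        logrotate -d /etc/logrotate.conf 2>&1 | grep -B2 -A8 maillog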


  • Log backups "stalling" on SQL 2008?

    - by MattK
    I have inherited a box running SQL Server 2008 and Windows 2003, and have had a few events where largeish (35GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log ships to a standby, so regular log backups are taken at 15 minute intervals. However, after an index reorg causes the log to grow to about 35GB (on a DB with about 17GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, so the source database files and destination backup files are on the same volume. However, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?
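    As a starting point on the diagnostic side, the request DMV shows the backup's progress, its current wait, and whether another session is blocking it. A minimal sketch; replace 123 with the SPID of the stalled backup:

        -- Progress, wait type and blocker for the stalled backup session
        SELECT session_id, command, percent_complete,
               wait_type, wait_time, blocking_session_id
        FROM sys.dm_exec_requests
        WHERE session_id = 123;

    A non-zero blocking_session_id points at another process; a BACKUPIO wait with no blocker points back at the shared RAID-1 volume or the backup target itself.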


  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Each of the Macs could edit files, and the other Macs should then get synced automatically. Basically my own local version of Dropbox without using "cloud storage". I have looked into solutions using rsync. As I understand it, rsync is not really capable of doing a bi-directional sync. I also do not want to have to invoke the sync process manually. I would prefer a daemon running in the background, waiting and checking for changes and then syncing them "live". The program should also be flexible enough to recognize that it sometimes (in the case of laptops) cannot reach the NAS. It should then just wait for the connection to come back, without bugging me every few minutes. I have looked into Synk, folderwatch, rsync and a few others, but I haven't really found a solution. Isn't there something like Microsoft's "offline folders" for the Mac? Thanks. PS: just for clarification - I don't want to sync for backup purposes; I want to sync so that all Macs have a local copy of the most recent changes to files.
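    One tool that is often suggested for exactly this bi-directional case is Unison, which can run over SSH against the NAS. The sketch below is only an illustration: the paths are made up, and you would still need something (launchd, cron, or Unison's own repeat option) to keep it running whenever the NAS is reachable:

        # Two-way sync of a local folder with a share on the NAS, no questions asked,
        # re-syncing every 10 minutes while the machine can reach the NAS.
        unison /Users/me/Work ssh://user@nas//volumes/Work -batch -repeat 600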


  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following code, and do a /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet):

        /usr/local/nginx/logs/*.log {
            # rotate the logfile(s) daily
            daily
            # adds extension like YYYYMMDD instead of simply adding a number
            dateext
            # If log file is missing, go on to next one without issuing an error msg
            missingok
            # Save logfiles for the last 49 days
            rotate 49
            # Old versions of log files are compressed with gzip
            compress
            # Postpone compression of the previous log file to the next rotation cycle
            delaycompress
            # Do not rotate the log if it is empty
            notifempty
            # create mode owner group
            create 644 nginx nginx
            # after logfile is rotated and nginx.pid exists, send the USR1 signal
            postrotate
                [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
            endscript
        }

    I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: can I do the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.
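    On most systems logrotate itself is only invoked once a day from cron, so "rotate at 11 pm" really means "run logrotate at 11 pm". One way to do that for just the nginx logs, sketched below with assumed paths, is a separate cron entry that runs logrotate against only your nginx config (a separate state file keeps it from interfering with the system-wide daily run):

        # /etc/cron.d/nginx-logrotate - rotate the nginx logs at 23:00 every day
        0 23 * * *  root  /usr/sbin/logrotate -s /var/lib/logrotate/nginx.status /etc/logrotate.d/nginx

    As for the filename: the default dateext extension looks like access.log-YYYYMMDD; if you want dashes (access.log-2010-12-04), newer logrotate versions accept a dateformat directive to change the pattern.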


  • MDI WinForm application and duplicate child form memory leak

    - by Steve
    This is a WinForms MDI application problem (.NET Framework 3.0), described in C#. Sorry it is a bit long; I am trying to make things as clear as possible.

    I have an MDI application. At some point I found that one MDI child form is never released. There is a menu that creates the MDI child form and shows it. When the MDI child form is closed, it is supposed to be destroyed and the memory taken by it given back to .NET. To my surprise, this is not true: all the MDI child form instances are kept in memory. This is effectively a "memory leak". Well, it is not a real leak in .NET; it is just that the closed form should be dead, but somehow there is at least one unknown reference from the outside world still connected to it.

    I read some articles on the web. Some say that when the MDI child form is closing, I should unwire all the event handlers, otherwise some event handlers may keep my form alive. Some say that DataBindings should be cleared before the form closes, otherwise the DataBindings will add references to some global Hashtable and thus keep my form alive.

    My form contains quite a lot of things: many event handlers, many DataBindings, many BindingSources, and a few suspect controls including a user control and a HelpProvider. I created a big method that unwires all the event handlers from all the relevant controls and clears all the DataBindings and DataSources. The HelpProvider and user controls are disposed carefully.

    In the end, I found that I don't have to clear the DataBindings and DataSources. Event handlers are definitely causing the problem, and the MDI form structure also contributes something. During my experiments I found that if you create an MDI child form, even if you close it, there will still be one instance in memory. The reference comes from the PropertyStore of the main form. This means that unless the main form is closed (the application ends), there will always be one instance of the MDI child form in memory. The good news is that no matter how many times you open and close the child form, there will be only one instance, not a big "leak".

    When it comes to event handlers, things become trickier. I should note that all the event handlers on my form are anonymous event handlers. Here is an example:

        // In the MDI child form's designer code...
        Button btnSave = new Button();
        btnSave.Click += new System.EventHandler(btnSave_Click);

    where btnSave_Click is also a method of the MDI child form. This is the pattern for various controls and various types of event. To me, this is a bi-directional circular reference: btnSave keeps a reference to the MDI child form via the event handler, and the MDI child form keeps a reference to the btnSave instance. Such a circular reference should not cause any problem for .NET's garbage collector, which would mean I do not have to explicitly unwire the event when the form is disposed:

        btnSave.Click -= btnSave_Click;

    But that is not what I see. Some event handlers are safe: ignoring them does not cause any duplicate instance. Some other event handlers cause one instance to remain in memory (a similar effect to the MDI form structure, but this time caused by the dangling event handlers). And some event handlers cause every opened instance to stay in memory. I am totally confused about the differences between these three kinds of event handlers. The controls are created in the same way and the events are attached in the same way, so what is the difference? (Don't tell me it is the event handler methods themselves that make the difference.) Has anyone seen this weird scenario and found an answer? Thanks a lot.

    So, to be safe, I will have to unwire all the event handlers when the form is being disposed. That will be a long list of similar code for each control. Is there a general way of removing events from controls recursively using reflection? And what about the performance impact? That's the end of my story, and I am still in the middle of my problem. For any help, I thank you.
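    On the "long list of similar code" point: even without reflection, the unhooking can mirror the hooking by doing it when the form goes away. A minimal sketch of the pattern, using the hypothetical control and handler names from the example above:

        // In the MDI child form
        protected override void OnFormClosed(FormClosedEventArgs e)
        {
            // Unwire exactly what was wired up in the constructor/designer code,
            // so no long-lived control or binding holds a delegate back into this form.
            btnSave.Click -= btnSave_Click;
            base.OnFormClosed(e);
        }

    A reflection-based sweep over the Controls collection is possible, but it has to reach into non-public event storage on Component, so the explicit per-control unsubscription above is the more predictable route.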


  • How do I prevent duplicates, in XSL?

    - by LOlliffe
    How do I prevent duplicate entries in a list, and then, ideally, sort that list? What I'm doing is: when information at one level is missing, I take the information from the level below it to build the missing list in the level above. Currently, I have XML similar to this:

        <c03 id="ref6488" level="file">
          <did>
            <unittitle>Clinic Building</unittitle>
            <unitdate era="ce" calendar="gregorian">1947</unitdate>
          </did>
          <c04 id="ref34582" level="file">
            <did>
              <container label="Box" type="Box">156</container>
              <container label="Folder" type="Folder">3</container>
            </did>
          </c04>
          <c04 id="ref6540" level="file">
            <did>
              <container label="Box" type="Box">156</container>
              <unittitle>Contact prints</unittitle>
            </did>
          </c04>
          <c04 id="ref6606" level="file">
            <did>
              <container label="Box" type="Box">154</container>
              <unittitle>Negatives</unittitle>
            </did>
          </c04>
        </c03>

    I then apply the following XSL:

        <xsl:template match="c03/did">
          <xsl:choose>
            <xsl:when test="not(container)">
              <did>
                <!-- If no c03 container item is found, look in the c04 level for one -->
                <xsl:if test="../c04/did/container">
                  <!-- If a c04 container item is found, use the info to build a c03 version -->
                  <!-- Skip c03 container item, if still no c04 items found -->
                  <container label="Box" type="Box">
                    <!-- Build container list -->
                    <!-- Test for more than one item, and if so, list them, -->
                    <!-- separated by commas and a space -->
                    <xsl:for-each select="../c04/did">
                      <xsl:if test="position() &gt; 1">, </xsl:if>
                      <xsl:value-of select="container"/>
                    </xsl:for-each>
                  </container>
                </xsl:if>
              </did>
            </xsl:when>
            <!-- If there is a c03 container item(s), list it normally -->
            <xsl:otherwise>
              <xsl:copy-of select="."/>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:template>

    But I'm getting the "container" result of

        <container label="Box" type="Box">156, 156, 154</container>

    when what I want is

        <container label="Box" type="Box">154, 156</container>

    Below is the full result that I'm trying to get:

        <c03 id="ref6488" level="file">
          <did>
            <container label="Box" type="Box">154, 156</container>
            <unittitle>Clinic Building</unittitle>
            <unitdate era="ce" calendar="gregorian">1947</unitdate>
          </did>
          <c04 id="ref34582" level="file">
            <did>
              <container label="Box" type="Box">156</container>
              <container label="Folder" type="Folder">3</container>
            </did>
          </c04>
          <c04 id="ref6540" level="file">
            <did>
              <container label="Box" type="Box">156</container>
              <unittitle>Contact prints</unittitle>
            </did>
          </c04>
          <c04 id="ref6606" level="file">
            <did>
              <container label="Box" type="Box">154</container>
              <unittitle>Negatives</unittitle>
            </did>
          </c04>
        </c03>

    Thanks in advance for any help!
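    One way to get a de-duplicated, sorted list in XSLT 1.0 is to keep only containers whose value did not already appear in an earlier c04, and sort what is left. A sketch of just the container-building part, assuming each c04/did holds at most one Box-type container (for larger documents a Muenchian key would be the more scalable form of the same idea):

        <container label="Box" type="Box">
          <!-- keep a Box container only if no earlier c04 already had the same value -->
          <xsl:for-each select="../c04/did/container[@type='Box']
                                [not(. = ../../preceding-sibling::c04/did/container[@type='Box'])]">
            <xsl:sort select="." data-type="number"/>
            <xsl:if test="position() &gt; 1">, </xsl:if>
            <xsl:value-of select="."/>
          </xsl:for-each>
        </container>

    For the sample input this yields 154, 156: the second 156 is dropped by the predicate, and xsl:sort puts the remaining values in numeric order.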


  • Avoid duplicate custom post type posts in multiple loops in Wordpress

    - by christinaaa
    I am running two loops with a custom post type of Portfolio (ID of 3). The first loop is for Featured posts and the second is for the rest. I plan on having more than 3 Featured posts in random order. I would like the Featured ones that aren't displayed in the first loop to show up in my second loop. How can I set this up so there are no duplicate posts?

        <?php /* Template Name: Portfolio */ get_header(); ?>

        <div class="section-bg">
          <div class="portfolio">
            <div class="featured-title">
              <h1>featured</h1>
            </div> <!-- end #featured-title -->
            <div class="featured-gallery">
              <?php
              $args = array( 'post_type' => 'portfolio', 'posts_per_page' => 3, 'cat' => 3, 'orderby' => 'rand' );
              $loop = new WP_Query( $args );
              while ( $loop->have_posts() ) : $loop->the_post(); ?>
                <div class="featured peek">
                  <a href="<?php the_permalink(); ?>">
                    <h1>
                      <?php
                      $thetitle = $post->post_title;
                      $getlength = strlen($thetitle);
                      $thelength = 40;
                      echo substr($thetitle, 0, $thelength);
                      if ($getlength > $thelength) echo '...';
                      ?>
                    </h1>
                    <div class="contact-divider"></div>
                    <p><?php the_tags('',' / '); ?></p>
                    <?php the_post_thumbnail('thumbnail', array('class' => 'cover')); ?>
                  </a>
                </div> <!-- end .featured -->
              <?php endwhile; ?>
            </div> <!-- end .featured-gallery -->
            <div class="clearfix"></div>
          </div> <!-- end .portfolio -->
        </div> <!-- end #section-bg -->

        <div class="clearfix"></div>

        <div class="section-bg">
          <div class="portfolio-gallery">
            <?php
            $args = array( 'post_type' => 'portfolio', 'orderby' => 'rand');
            $loop = new WP_Query( $args );
            while ( $loop->have_posts() ) : $loop->the_post(); ?>
              <div class="featured peek">
                <a href="<?php the_permalink(); ?>">
                  <h1>
                    <?php
                    $thetitle = $post->post_title;
                    $getlength = strlen($thetitle);
                    $thelength = 40;
                    echo substr($thetitle, 0, $thelength);
                    if ($getlength > $thelength) echo '...';
                    ?>
                  </h1>
                  <div class="contact-divider"></div>
                  <p><?php the_tags('',' / '); ?></p>
                  <a href="<?php the_permalink(); ?>"><?php the_post_thumbnail('thumbnail', array('class' => 'cover')); ?></a>
                </a>
              </div> <!-- end .featured -->
            <?php endwhile; ?>
            <div class="clearfix"></div>
          </div> <!-- end .portfolio-gallery -->
          <div class="clearfix"></div>
        </div> <!-- end #section-bg -->

        <?php get_footer(); ?>

    If possible, could the answer outline how to implement it into my existing code? Thank you. :)
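    One common way to do this, assuming the rest of the template stays as it is, is to remember the IDs printed by the featured loop and exclude them from the second query with post__not_in. A sketch of just the changed pieces:

        <?php
        // First (featured) loop: collect the IDs that were actually displayed.
        $featured_ids = array();
        while ( $loop->have_posts() ) : $loop->the_post();
            $featured_ids[] = get_the_ID();
            // ... existing featured markup ...
        endwhile;

        // Second loop: exclude anything the featured loop already showed.
        $args = array(
            'post_type'    => 'portfolio',
            'orderby'      => 'rand',
            'post__not_in' => $featured_ids,
        );
        $loop = new WP_Query( $args );
        ?>

    get_the_ID() and the post__not_in argument are standard WordPress APIs; with this in place, featured posts that did not make the first loop are still eligible for the second one, while anything already shown is skipped.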


  • java ioexception error=24 too many files open

    - by MattS
    I'm writing a genetic algorithm that needs to read/write lots of files. The fitness test for the GA invokes a program called gradif, which takes a file as input and produces a file as output. Everything works except when I make the population size and/or the total number of generations too large. Then, after so many generations, I start getting this: java.io.FileNotFoundException: testfiles/GradifOut29 (Too many open files). (I get it repeatedly for many different files; the index 29 was just the one that came up first last time I ran it.) It's strange because I'm not getting the error after the first or second generation, but after a significant number of generations, which would suggest that each generation opens more files than it closes. But as far as I can tell I'm closing all of the files. The way the code is set up, the main() function is in the Population class, and the Population class contains an array of Individuals. Here's my code.

    Initial creation of the input files (they're random access so that I could reuse the same file across multiple generations):

        files = new RandomAccessFile[popSize];
        for(int i=0; i<popSize; i++){
            files[i] = new RandomAccessFile("testfiles/GradifIn"+i, "rw");
        }

    At the end of the entire program:

        for(int i=0; i<individuals.length; i++){
            files[i].close();
        }

    Inside the Individual's fitness test:

        FileInputStream fin = new FileInputStream("testfiles/GradifIn"+index);
        FileOutputStream fout = new FileOutputStream("testfiles/GradifOut"+index);
        Process process = Runtime.getRuntime().exec("./gradif");
        OutputStream stdin = process.getOutputStream();
        InputStream stdout = process.getInputStream();

    Then, later....

        try{
            fin.close();
            fout.close();
            stdin.close();
            stdout.close();
            process.getErrorStream().close();
        }catch (IOException ioe){
            ioe.printStackTrace();
        }

    Then, afterwards, I append an 'END' to the files to make parsing them easier:

        FileWriter writer = new FileWriter("testfiles/GradifOut"+index, true);
        writer.write("END");
        try{
            writer.close();
        }catch(IOException ioe){
            ioe.printStackTrace();
        }

    My redirection of stdin and stdout for gradif is from this answer. I tried using the try{close()}catch{} syntax to see if there was a problem with closing any of the files (there wasn't), and I got that from this answer. It should also be noted that the Individuals' fitness tests run concurrently.

    UPDATE: I've actually been able to narrow it down to the exec() call. In my most recent run, I first ran into trouble at generation 733 (with a population size of 100). Why are the earlier generations fine? I don't understand why, if there's no leaking, the algorithm should be able to pass earlier generations but fail on later ones. And if there is leaking, then where is it coming from?

    UPDATE2: In trying to figure out what's going on here, I would like to be able to see (preferably in real-time) how many files the JVM has open at any given point. Is there an easy way to do that?
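    On UPDATE2: on a Sun/Oracle JVM running on Linux or another Unix, the open-descriptor count can be read from inside the program, so it can be printed once per generation. A minimal sketch (the com.sun.management cast is JVM-specific, and the /proc fallback is Linux-only):

        import java.io.File;
        import java.lang.management.ManagementFactory;
        import java.lang.management.OperatingSystemMXBean;

        public class FdWatch {
            public static long openFileDescriptors() {
                OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
                if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
                    // Exposed directly by the JVM on Unix-like systems
                    return ((com.sun.management.UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount();
                }
                // Fallback: each entry under /proc/self/fd is one open descriptor
                String[] fds = new File("/proc/self/fd").list();
                return fds == null ? -1 : fds.length;
            }
        }

    Watching that number climb generation by generation would confirm the leak; with exec(), the usual culprit is an unconsumed or unclosed child-process stream, since each child holds pipe descriptors until all of its streams are closed and it has been reaped.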


  • Unable to update/install any files [closed]

    - by Surya
    Possible Duplicate: "Problem with MergeList" error when trying to do an update

    I just installed Ubuntu 12.04 on my Lenovo G570 laptop. First I got an error during installation (I don't know what it was); I restarted the system and the second attempt went fine. After installing, the problems started. There was an error with language recognition that I tried to fix, but it didn't work. Then I tried to install powertop to check the status of power management. At the terminal:

        sudo apt-get install powertop

    This is the error I got:

        surya@surya-Lenovo-G570:~$ sudo apt-get powertop install
        [sudo] password for surya:
        E: Invalid operation powertop
        surya@surya-Lenovo-G570:~$ sudo apt-get install powertop
        Reading package lists... Error!
        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages
        E: The package lists or status file could not be parsed or opened.
        surya@surya-Lenovo-G570:~$ ^C
        surya@surya-Lenovo-G570:~$ ^C
        surya@surya-Lenovo-G570:~$ ^C
        surya@surya-Lenovo-G570:~$

    I downloaded the Google Chrome .deb and tried to install it, but it's not working: Software Center opens but doesn't load. There was a notification on the status bar which says:

        An error occurred please run the package manager from the right-click menu ...
        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages

    Copy & paste from the terminal is not really working either: when I press Ctrl + C it shows ^C on the terminal but nothing gets copied. The most important problem: I cannot see the "chip" icon on the status bar that would let me install the proprietary drivers for my ATI card. The interesting part is that powertop worked well on the live CD and it even detected my ATI card.

    Update: when I opened "Software Up to Date", it showed this error:

        Could not initialize the package information
        An unresolvable problem occurred while initializing the package information.
        Please report this bug against the 'update-manager' package and include the following error message:
        'E:Encountered a section with no Package: header,
        E:Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages,
        E:The package lists or status file could not be parsed or opened.'

    My laptop details: Lenovo G570; Intel 2nd Gen i5 processor; 4GB DDR3 RAM; Intel integrated graphics + AMD Radeon HD 6370M 1GB graphics. I need help ASAP.
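    The MergeList error usually means one of the downloaded package list files under /var/lib/apt/lists is corrupt. A commonly suggested recovery, sketched here and low-risk in the sense that the lists are simply re-downloaded on the next update, is:

        # Remove the cached package lists and rebuild them
        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get update

    After that, apt-get install, Software Center and Update Manager should all stop tripping over the broken extras.ubuntu.com list.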


  • SQL SERVER – Read Only Files and SQL Server Management Studio (SSMS)

    - by pinaldave
    Just like any other developer or DBA, SQL Server Management Studio is my favorite application. At any given moment I have multiple instances of it open and I am working in them. Recently, I came across a very interesting feature in SSMS related to "Read Only" files. I believe it is a little-known feature as well, so I decided to write a blog post about it. First, create a read-only SQL file. You can make any file read-only via Right Click >> Properties >> select the Read Only attribute. Now open the same file in SQL Server Management Studio. You will find that beside the file name there is a small 'lock' icon. This small icon indicates that the file is read-only. Now let us attempt to edit the read-only file. It will let us edit the file any way we want; however, when we attempt to save it, it gives the following pop-up. The options in the pop-up are self-explanatory, and I like them. The goal of a read-only file is to prevent users from making unintended changes; however, the user should still have complete control over their own file. The user should be aware that the file is read-only, but if they want to edit the file or save it as a new file, those choices should be offered, and the pop-up menu captures precisely that. Now let us check the option related to this feature in SSMS. Go to Menu >> Options >> Environment >> Documents. You will find the third option, which is "Allow editing of read-only files; warn when attempt to save". In the scenario above it was already checked. Let us uncheck it and repeat the earlier exercise. I closed all the earlier windows to avoid confusion. With the new setting in effect, when I attempt to even modify the read-only file, it shows a totally different pop-up screen, with options like "Edit In-Memory", "Make Writeable", etc. When you select "Edit In-Memory" it allows you to edit the file, and later you can save it as a new file, just like the earlier scenario we discussed. If you click Make Writeable, it removes the read-only restriction and the file can be edited as you please. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology


  • Identify Codecs & Technical Information About Video Files

    - by DigitalGeekery
    Have you ever wanted to play an audio or video file but didn't have the proper codec installed? Today we'll show how to determine codecs, along with a host of other technical details about your media files, with MediaInfo.

    Installation

    Download and install MediaInfo. You can find the download link at the bottom of the page. Note: when installing MediaInfo there is a recommended software bundle which you can opt out of by selecting the Do not install option. The recommended software may differ between builds; in this example it offers Spyware Terminator. The nice thing is that the installer uses OpenCandy, which lets you opt out of the extra install. Just double-check to make sure you're not installing extra crapware.

    Using MediaInfo

    The first time you run MediaInfo it will display the Preferences window. There are various options such as language, output format, and whether or not you want MediaInfo to check for new versions. Click OK. Select a file or folder to analyze by clicking on the File or Folder icons on the left of the application window or by selecting File > Open from the menu. You can also drag and drop a file directly onto the application. MediaInfo will display details of your media file. In Basic view, you'll see basic information. Notice in the example below the video and audio codecs, along with file size, running time of the media file, and even the application used to create the video file (Writing application). You can switch to some of the other views by selecting View from the menu and choosing from the dropdown list. Sheet view presents the information a bit more clearly; you can see in the example below that the video and audio codec are listed in clearly identified columns. (AVC is more commonly referred to as H.264.) Tree view is perhaps the most detailed. You can see from the example below that the codec used for this AVI file is XviD. Scrolling down even further you'll see additional information like video and audio bit rates, frame rate, aspect ratio, and more. In Basic view (and also in Sheet view) you can click to find a player for your file. In this instance, with an MP4 file, it took me to the download page for QuickTime. This is by no means the only media player for this file, but if you are stuck on how to play a media file, this will point you to a solution that works. You can do the same thing with the video codec: click Go to the web site of this video codec to find a download. MediaInfo is a simple but powerful tool that can be used to discover the details of a media file, or just to find a compatible codec. It works with most any video file type and is available for Windows, Mac, and Linux. Some Mac and Linux versions, however, are currently command line only.
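    For the command-line-only Mac and Linux builds mentioned above, the same details are available from the mediainfo executable. A small sketch (the filename is just an example, and the --Inform template syntax may vary slightly between versions):

        # Full report, roughly what the GUI's Tree view shows
        mediainfo movie.mkv

        # Pull out just the video format/codec, using an output template
        mediainfo --Inform="Video;%Format%" movie.mkv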
    Download MediaInfo


  • Customisation / overriding of the Envelope ecs files

    - by Dheeraj Kumar M
    There are a few use cases where the requirement is to customise the envelope information (the Interchange/Group ecs file). Such a scenario might be needed for only a few customers. Hence, in addition to the default seeded envelope definitions, the customised definitions also need to be uploaded. Here are the steps for achieving this:

    1. Create only the Interchange ecs and save
    2. Create only the Group ecs and save
    3. Use them in B2B

    1. Create only the Interchange ecs and save:
    Open the document editor and select the required version and doctype. When creating the new ecs, ensure the checkbox for inserting the envelope is selected. Once created, delete the Group and TransactionSet nodes and retain only the Interchange nodes, including both header and trailer. Save this file.

    2. Create only the Group ecs and save:
    After creating the ecs file as described in the Interchange steps, delete the Interchange and TransactionSet nodes and retain only the Group nodes, including both header and trailer. Save this file.

    3. Use them in B2B:
    These newly created ecs files can be used in B2B in two ways.

    a. By overriding at the trading partner level:
    This is useful when the configuration is already complete and the customisation then needs to be incorporated. In this case, select Trading Partner - Document and pick the document that needs to be customised. Upload the newly created Interchange and Group ECS files under the Interchange and Group tabs respectively, and re-deploy the associated agreement. The advantages of this approach are:
    - Flexibility to add customised envelope definitions per partner
    - Saving the re-work of the design-time effort

    b. By adding another document definition in Administration - Document:
    This approach can be used if no configuration has been done at the trading partner level. Create the required document revision and override the Interchange and Group ECS files under the Interchange and Group tabs respectively. Add the document in Trading Partner - Document. Create and deploy the agreements.


  • Duplicate elements when adding XElement to XDocument

    - by Andy
    I'm writing a program in C# that will go through a bunch of config.xml files and update certain elements, or add them if they don't exist. I have the updating-an-existing-element part working with this code:

        XDocument xdoc = XDocument.Parse(ReadFile(_file));
        XElement element = xdoc.Elements("project").Elements("logRotator")
                               .Elements("daysToKeep").Single();
        element.Value = _DoRevert;

    But I'm running into issues when I want to add an element that doesn't exist. Most of the time part of the tree is already in place, and when I use my code it adds another identical tree, which causes the program reading the XML to blow up. Here is how I am attempting to do it:

        xdoc.Element("project").Add(new XElement("logRotator",
            new XElement("daysToKeep", _day)));

    and that results in a structure like this (the numToKeep tag was already there):

        <project>
          <logRotator>
            <daysToKeep>10</daysToKeep>
          </logRotator>
          <logRotator>
            <numToKeep>13</numToKeep>
          </logRotator>
        </project>

    but this is what I want:

        <project>
          <logRotator>
            <daysToKeep>10</daysToKeep>
            <numToKeep>13</numToKeep>
          </logRotator>
        </project>
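    A straightforward way around the duplicate tree, assuming there is at most one logRotator per project, is to look the element up first and only create whichever pieces are actually missing. A sketch:

        // Reuse the existing <logRotator> if there is one, otherwise create it once.
        XElement project = xdoc.Element("project");
        XElement logRotator = project.Element("logRotator");
        if (logRotator == null)
        {
            logRotator = new XElement("logRotator");
            project.Add(logRotator);
        }

        // Same idea one level down: update <daysToKeep> if present, add it if not.
        XElement daysToKeep = logRotator.Element("daysToKeep");
        if (daysToKeep == null)
            logRotator.Add(new XElement("daysToKeep", _day));
        else
            daysToKeep.SetValue(_day);

    Because XElement.Element returns null for a missing child, this "get or create" pattern covers both the update and the add case without ever producing a second logRotator.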


  • Editing files without race conditions?

    - by user2569445
    I have a CSV file that needs to be edited by multiple processes at the same time. My question is, how can I do this without introducing race conditions? It's easy to append to the end of the file without race conditions by open(2)ing it in "a" (O_APPEND) mode and simply writing to it. Things get more difficult when removing lines from the file. The easiest solution is to read the file into memory, make changes to it, and overwrite the file. If another process writes to it after it has been read into memory, however, that new data will be lost on the overwrite. To further complicate matters, my platform does not support POSIX record locks, checking for file existence is a race condition waiting to happen, rename(2) replaces the destination file if it exists instead of failing, and editing files in place leaves stale bytes behind unless the remaining bytes are shifted towards the beginning of the file. My idea for removing a line is this (in pseudocode):

        filename = "/home/user/somefile";
        file = open(filename, "r");
        tmp = open(filename+".tmp", "ax") || die("could not create tmp file"); //"a" is O_APPEND, "x" is O_EXCL|O_CREAT
        while(write(tmp, read(file))); //copy the $file to $file+".new"
        close(file);
        //edit tmp file
        unlink(filename) || die("could not unlink file");
        file = open(filename, "wx") || die("another process must have written to the file after we copied it."); //"w" is overwrite, "x" is force file creation
        while(write(file, read(tmp))); //copy ".tmp" back to the original file
        unlink(filename+".tmp") || die("could not unlink tmp file");

    Or would I be better off with a simple lock file?

    Appender process:

        lock = open(filename+".lock", "wx") || die("could not lock file");
        file = open(filename, "a");
        write(file, "stuff");
        close(file);
        close(lock);
        unlink(filename+".lock");

    Editor process:

        lock = open(filename+".lock", "wx") || die("could not lock file");
        file = open(filename, "rw");
        while(contents += read(file)); //edit "contents"
        write(file, contents);
        close(file);
        close(lock);
        unlink(filename+".lock");

    Both of these rely on an additional file that will be left over if a process terminates before unlinking it, causing other processes to refuse to write to the original file. In my opinion, these problems are brought on by the fact that the OS allows multiple writable file descriptors to be opened on the same file at the same time, instead of failing if a writable file descriptor is already open. It seems that O_CREAT|O_EXCL is the closest thing to a real solution for preventing filesystem race conditions, aside from POSIX record locks. Another possible solution is to split the file into multiple files and directories, so that more granular control can be gained over components (lines, fields) of the file using O_CREAT|O_EXCL. For example, "file/$id/$field" would contain the value of column $field of line $id. It wouldn't be a CSV file anymore, but it might just work. Yes, I know I should be using a database for this, as databases are built to handle these types of problems, but the program is relatively simple and I was hoping to avoid the overhead. So, would any of these patterns work? Is there a better way? Any insight into these kinds of problems would be appreciated.
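    Since the whole scheme hinges on O_CREAT|O_EXCL being atomic, here is what the lock-file step looks like in actual C rather than pseudocode. It is only a sketch: the stale-lock problem described above is not solved here (a common mitigation is writing the owner PID into the lock file so other processes can detect a dead holder):

        #include <fcntl.h>
        #include <unistd.h>
        #include <errno.h>

        /* Returns the lock fd on success, -1 if another process holds the lock. */
        int acquire_lock(const char *lockpath)
        {
            /* O_CREAT|O_EXCL is atomic: exactly one process can create the file,
               every other open() fails with errno == EEXIST. */
            return open(lockpath, O_CREAT | O_EXCL | O_WRONLY, 0644);
        }

        void release_lock(const char *lockpath, int fd)
        {
            close(fd);
            unlink(lockpath);   /* make the lock available again */
        }

    Callers would retry acquire_lock() with a short sleep until it returns a valid descriptor, do their append or rewrite, and then call release_lock().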


  • Avoiding duplicate objects in Java deserialization

    - by YGL
    I have two lists (list1 and list2) containing references to some objects, where some of the list entries may point to the same object. Then, for various reasons, I am serializing these lists to two separate files. Finally, when I deserialize the lists, I would like to ensure that I am not re-creating more objects than needed. In other words, it should still be possible for some entry of list1 to point to the same object as some entry in list2.

        MyObject obj = new MyObject();
        List<MyObject> list1 = new ArrayList<MyObject>();
        List<MyObject> list2 = new ArrayList<MyObject>();
        list1.add(obj);
        list2.add(obj);

        // serialize to file1.ser
        ObjectOutputStream oos = new ObjectOutputStream(...);
        oos.writeObject(list1);
        oos.close();

        // serialize to file2.ser
        oos = new ObjectOutputStream(...);
        oos.writeObject(list2);
        oos.close();

    I think that sections 3.4 and A.2 of the spec say that deserialization strictly results in the creation of new objects, but I'm not sure. If so, some possible solutions might involve:

    - Implementing equals() and hashCode() and checking references manually.
    - Creating a "container class" to hold everything and then serializing the container class.

    Is there an easy way to ensure that objects are not duplicated upon deserialization? Thanks.
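    For what it's worth, Java serialization only preserves shared references within a single stream, which is why the "container" route (or simply writing both lists through the same ObjectOutputStream) keeps the identity intact. A minimal sketch of the single-stream variant:

        // Both lists go through ONE stream, so the shared MyObject is written once
        // and comes back as one instance.
        ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream("lists.ser"));
        oos.writeObject(list1);
        oos.writeObject(list2);
        oos.close();

        ObjectInputStream ois = new ObjectInputStream(new FileInputStream("lists.ser"));
        List<MyObject> a = (List<MyObject>) ois.readObject();
        List<MyObject> b = (List<MyObject>) ois.readObject();
        ois.close();
        System.out.println(a.get(0) == b.get(0)); // true: same instance after deserialization

    If the two-file layout is a hard requirement, the readResolve() hook or the manual equals()/hashCode() de-duplication pass mentioned above are the usual fallbacks, since two independent streams will always re-create independent copies.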


  • Looking for a suitable backup solution: Mac OS X to offsite CentOS 6 server, 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently:

    - On-site file server (Mac OS X Server) that is used by GFX designers; they have a working set of 1TB of data.
    - Off-site server with 2TB of available storage (CentOS 6).
    - The Mac OS X server rsyncs data to the off-site server every 6 hours (rsync -avz --delete --progress -e ssh ...).
    - The Mac OS X server does a full data backup to LTO-4 tape on a 10-day cycle (Mon-Fri for 2 weeks).
    - rsync pushes about 60GB of file changes a day.

    The problem:

    - The on-site tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO-4 tape.
    - The backup is incredibly slow when doing a full backup.
    - Pain in the backside getting people to remember to change the tape; it often gets forgotten, etc.

    The quick solution: buy an LTO-5 drive and tapes. However, this has been turned down because of the cost...

    What I would like:

    - Something that works the same way rsync works: only changed data is sent over the wire, and it can be scheduled to run multiple times during the day.
    - Data that is sent is compressed and sent over SSH.
    - Something that keeps a 14-day retention but doesn't keep duplicate data. So, as an example, if I have 1TB of working data and 60GB of changes are made each day, then I expect around 1.84TB of data to be stored on the off-site server.
    - Works with the Mac OS X server and the CentOS 6 server.
    - Doesn't cost an arm and a leg; it must be cheaper than buying an LTO-5 drive with tapes (around £1500).
    - Can be set up to run autonomously.
    - Has some sort of control panel that will allow an admin to easily restore a file/folder.

    Any recommendations?
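    One approach that covers most of that list without new hardware is rsync with --link-dest: each run produces a dated snapshot directory on the CentOS box, unchanged files are hard links that cost no extra space, and old snapshots can be pruned after 14 days. The paths and host below are placeholders, so treat this as a sketch rather than a drop-in script:

        #!/bin/sh
        # Run from the Mac OS X server, e.g. several times a day via launchd/cron
        TODAY=$(date +%F-%H%M)
        SRC=/Volumes/Work/
        DEST=user@offsite:/backups/gfx

        rsync -az --delete -e ssh \
            --link-dest=/backups/gfx/latest \
            "$SRC" "$DEST/$TODAY/"

        # Point "latest" at the snapshot just made, and drop snapshots older than 14 days
        ssh user@offsite "ln -sfn /backups/gfx/$TODAY /backups/gfx/latest && \
            find /backups/gfx -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +"

    With 60GB of daily churn on a 1TB working set this lands close to the ~1.84TB estimate above; the main things it does not give you are a polished restore control panel (restores are plain directory copies) and block-level deduplication inside changed files.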


  • JBoss 5.1.0.GA and huge vfs-nested.tmp

    - by Petteri Hietavirta
    I noticed this while running a performance test with JMeter. For the first half an hour everything was fine and the /server/all/tmp directory size was around 36M. Then suddenly the tmp directory grew to 6.1G. The space was taken up by jar files inside vfs-nested.tmp. I found https://jira.jboss.org/jira/browse/JBAS-7126 and added that config, but it made no difference.


  • E-mail spam analyzing tools

    - by goran
    I have some mail logs, which I assume come from our hosted mail server:

        antivirus: 1, antispam: 1, sanesecurity: 1, chkuser: 1, chkrbl: 1, chkmx: 1, chkptr: 0,
        greylistlevel: 0, rejectemptyfrom: 1, spamscore: 7.00, redirectspam: 1, maxrcpt: 30,
        maxdatabytes: 50000000, nightguard: 0, whitelistsigned: 1

    (plus info on each message's score) as plain text files. I was wondering if anyone knows which tool produces such logs, and whether there are any tools that would parse and analyze them?
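    If no dedicated analyzer turns up, the "key: value" header shown above is simple enough to parse with a few lines of scripting and feed into whatever reporting you like. A rough Python sketch, with the assumption that each header is one comma-separated line (adjust the splitting to the real layout of your files):

        def parse_header(line):
            """Turn 'antispam: 1, spamscore: 7.00, ...' into a dict of strings."""
            pairs = (p.split(":", 1) for p in line.split(",") if ":" in p)
            return {k.strip(): v.strip() for k, v in pairs}

        # Example usage on one log line
        settings = parse_header("antivirus: 1, antispam: 1, spamscore: 7.00")
        print(settings["spamscore"])   # -> 7.00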


  • "error module dav_svn does not exist"

    - by chris12892
    I accidentally deleted the dav_svn module files on my web server, and now I am stuck. Whenever I try to do anything with the module (remove it or reinstall it with apt-get), I get that "module dav_svn does not exist" message and apt fails. I know I could reinstall Apache, but I am trying to avoid that at all costs (unless I can do it in such a way that keeps my config files). Any ideas on how to deal with this?
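    The usual way out of this on Debian/Ubuntu, sketched below, is to clear any dangling references to the missing module and then reinstall the package that ships it (called libapache2-svn on releases of that era, libapache2-mod-svn on newer ones), which restores the files apt is complaining about without touching the rest of the Apache config:

        # Remove dangling enable-symlinks left behind for the deleted module
        sudo rm -f /etc/apache2/mods-enabled/dav_svn.load /etc/apache2/mods-enabled/dav_svn.conf

        # Reinstall the package that provides mod_dav_svn, then re-enable and restart
        sudo apt-get install --reinstall libapache2-svn
        sudo a2enmod dav_svn
        sudo /etc/init.d/apache2 restart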

