Search Results

Search found 39339 results on 1574 pages for 'batch files'.


  • Windows 7 playback of DVR-MS files stutters

    - by Jim Lynn
    I've just had to install Windows 7 on my Media Center machine because my Vista installation had a faulty drive. I've got the latest drivers that I can find - Intel 945GM integrated graphics, Realtek audio drivers. Things are working OK with one exception. Playback of old recordings, from DVR-MS format files, is choppy. The picture freezes for a fraction of a second, then quickly catches up. The sound is uninterrupted and doesn't pause. These freezes happen once every 5 seconds or so. It's very regular. Playback of Live TV from the digital tuner is perfectly smooth. DVD playback is perfectly smooth. As an experiment, I used the MPEG editing package VideoReDo to create a small test file in three different formats. This program takes the raw MPEG streams and repackages them into the desired container. I took the same clip and created three files in three formats: DVR-MS (Microsoft's old recorded TV format); mpg (standard MPEG); and ts (raw MPEG transport stream of the kind often produced by PVRs). When these three files are played back under Windows 7, the mpg and ts files play smoothly, but the DVR-MS file stutters. The last piece of data I have is that two other Windows 7 machines can play back DVR-MS files smoothly with no stuttering. One is a netbook, with less grunt than the Media Center machine. So there must be something specific about my Media Center machine that's causing the problem. Does anyone have any idea where I can look now? I don't know much about AV software, codecs, filter graphs etc. but I suspect that's where the problem lies. Rendering the video isn't the problem, but extracting the streams is. How would I go about diagnosing the problem? Edited to add: I just used the GraphStudio tool to look at the filter graph on the offending PC. The filter graph it uses by default for DVR-MS looks identical to the other machines', and, interestingly, when I play the files using GraphStudio they run smoothly. Under Windows Media Player and Windows Media Center they stutter. I'd like to see the filter graph for Windows Media Player but GraphStudio won't show it. It looks like Windows Media Player and WMC are using a different decoding path to GraphStudio. Edited again to add: Today I purchased a new HDTV. The same Media Center driving the TV at 1080p is now playing back the old Recorded TV files smoothly, without stuttering. So whatever the cause of the original problem, using a different resolution seems to have removed it. That might also explain why nobody else has had this problem; I doubt many people use Media Center with a 14-inch portable TV.

    Read the article

  • Dreamweaver - template not recognising child files until they are opened

    - by Chris
    Hi, I've got a new section for a website which I have generated from a data source, and it has markup for using a Dreamweaver template. When I add the new files and folders to the site and then update my template, it doesn't find the new files to update. If I open one of the new files and then make a change in the template, it recognises that the new file is using the template. So it's almost like I have to touch all the files with Dreamweaver first. I've tried opening all the new files which need to use the template, but then Dreamweaver CS4 crashes, I presume because of the number of files it's opening. Does anyone know if there is a way to make Dreamweaver recognise that a block of new files belongs to the template? It doesn't seem to just work automatically. Thanks, Chris

    Read the article

  • archiving (ubuntu tar) hidden directories

    - by broiyan
    tar on a directory "mydir" will archive hidden files and hidden subdirectories, but tar from within "mydir" with a wildcard will not. Is this a longstanding and known inconsistency or bug or is it that hardly anybody ever looks inside a lengthy tar log long enough to notice? Edit (additional information): tar from within "mydir" with a wildcard will not "see" nor archive hidden files and hidden subdirectories in the immediate directory, with emphasis on "immediate". However, in subdirectories of "mydir" (obviously non-hidden) hidden files and hidden subdirectories will be archived.
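    A note on the mechanism, with a small illustration sketched in Python rather than shell: when tar is run inside "mydir" with a wildcard, the shell expands the * before tar ever starts, and by default globbing skips dot-files, so hidden entries in the immediate directory are never passed to tar; tar itself then recurses into whatever directories it is given, hidden contents included, which matches the split behaviour described above. Python's glob module treats wildcards the same way, which makes the difference easy to see:

        # Wildcard expansion vs. a plain directory listing: the wildcard skips dot-files.
        import glob
        import os
        import pathlib
        import tempfile

        d = pathlib.Path(tempfile.mkdtemp())      # throwaway directory
        (d / "visible.txt").touch()
        (d / ".hidden.txt").touch()
        os.chdir(d)

        print(sorted(glob.glob("*")))    # ['visible.txt']                 -- what `tar cf out.tar *` is given
        print(sorted(os.listdir(".")))   # ['.hidden.txt', 'visible.txt']  -- what `tar cf out.tar .` walks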

    Read the article

  • Piping to findstr's input, dos prompt

    - by Gauthier
    I have a text file with a list of macro names (one per line). My end goal is to print how many times each macro name appears in the files of the current directory. The macro names are in C:\temp\macros.txt. Running type C:\temp\macros.txt at the command prompt prints the list fine. Now I want to pipe that output to the standard input of findstr: type C:\temp\macros.txt | findstr *.ss (ss is the file type in which I am looking for the macro names). This does not seem to work; I get no result (very quickly, so it does not seem to try at all). findstr <the first row of the macro list> *.ss does work. I also tried findstr *.ss < c:\temp\macros.txt with no success.
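    Two notes that may help: in this pipeline findstr does not take the piped text as its list of search strings; *.ss becomes the search string and the piped macro list becomes the text being searched, hence no output. findstr does have a /G:file switch that reads search strings from a file (findstr /G:C:\temp\macros.txt *.ss), but that prints matching lines rather than per-macro counts. For actual counts, a rough sketch in Python (assuming plain-text .ss files):

        # Count how often each macro name from macros.txt appears in the *.ss files
        # of the current directory.
        from pathlib import Path

        macros = [line.strip() for line in open(r"C:\temp\macros.txt") if line.strip()]
        counts = {name: 0 for name in macros}

        for ss_file in Path(".").glob("*.ss"):
            text = ss_file.read_text(errors="ignore")
            for name in macros:
                counts[name] += text.count(name)

        for name, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
            print(f"{n:6d}  {name}")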

    Read the article

  • Using Delphi or FFMpeg to create a movie from image sequence

    - by Hein du Plessis
    Hi all, my Delphi app has created a sequence called frame_001.png to frame_100.png. I need that to be compiled into a movie clip. I think perhaps the easiest way is to call ffmpeg from the command line; according to their documentation: For creating a video from many images: ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi The syntax foo-%03d.jpeg specifies to use a decimal number composed of three digits padded with zeroes to express the sequence number. It is the same syntax supported by the C printf function, but only formats accepting a normal integer are suitable. From: http://ffmpeg.org/ffmpeg-doc.html#SEC5 However, my files are in (lossless) PNG format, so I have to convert them using ImageMagick first. My command line is now: ffmpeg.exe -f image2 -i c:\temp\wentelreader\frame_%05d.jpg -r 12 foo.avi But then I get the error: [image2 @ 0x133a7d0]Could not find codec parameters (Video: mjpeg) c:\temp\wentelreader\Frame_C:\VID2EVA\Tools\Mencoder\wentel.bat5d.jpg: could not find codec parameters What am I doing wrong? Alternatively, can this be done easily with Delphi?
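    Two things worth checking, stated with the caveat that I can't reproduce the exact setup: ffmpeg's image2 demuxer reads PNG sequences directly, so the ImageMagick conversion step shouldn't be needed; and the mangled path in the error (the %05d replaced by the path of wentel.bat) suggests the command is being run from a batch file, where a literal % must be written as %% or cmd.exe will substitute it. Building the argument list in code and skipping the shell avoids the escaping problem entirely; a minimal sketch in Python (the Delphi equivalent would be passing the same argument list to CreateProcess or similar):

        # Turn frame_001.png .. frame_100.png into an AVI by calling ffmpeg directly.
        # Assumes ffmpeg is on PATH; paths and output name are placeholders.
        import subprocess

        cmd = [
            "ffmpeg",
            "-f", "image2",
            "-i", r"c:\temp\wentelreader\frame_%03d.png",  # %03d matches frame_001.png
            "-r", "12",
            "foo.avi",
        ]
        subprocess.run(cmd, check=True)   # no shell involved, so %03d is passed through untouched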

    Read the article

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me. When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem). It's hard to write tests for file system actions; I have a mock filesystem class that logs actions like move, delete etc., without performing them, which more or less does the job, but I don't have 100% confidence in the tests. I will be adding some other jobs which need to access the files from other services to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely. Doing this over network shares seems dodgy... Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue. What are the downsides of storing files as BLOBs in MySQL? I guess that it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

    Read the article

  • Sort files by name in Java differs from Windows Explorer

    - by Martyn Hopkins
    I have a simple Java program which reads a file directory and outputs a file list. I sort the files by name: String [] files = dirlist.list(); files = sort(files); My problem is that it sorts by name in a different way than Windows Explorer does. For instance, if I have these files: abc1.doc, abc12.doc, abc2.doc, Java will sort them like this: abc1.doc abc12.doc abc2.doc When I open the folder in Explorer, my files are sorted like this: abc1.doc abc2.doc abc12.doc How can I make Java sort my files the way Windows Explorer does? Is this a Windows trick?
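    Explorer sorts with a "natural", numeric-aware comparison (the StrCmpLogicalW API), while String ordering in Java is plain lexicographic, so "abc12" sorts before "abc2". The usual fix is a comparator that splits each name into digit and non-digit runs and compares the digit runs as numbers. The same idea, sketched in Python for brevity (a Java version would be a custom Comparator<String> doing the equivalent split):

        # Natural-order sort key: compare numeric chunks by value, not character by character.
        import re

        def natural_key(name):
            return [int(chunk) if chunk.isdigit() else chunk.lower()
                    for chunk in re.split(r"(\d+)", name)]

        files = ["abc1.doc", "abc12.doc", "abc2.doc"]
        print(sorted(files))                   # ['abc1.doc', 'abc12.doc', 'abc2.doc']
        print(sorted(files, key=natural_key))  # ['abc1.doc', 'abc2.doc', 'abc12.doc']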

    Read the article

  • Working with multiple input and output files in Python

    - by Morlock
    I need to open multiple files (2 input and 2 output files), do complex manipulations on the lines from the input files and then append results at the end of the 2 output files. I am currently using the following approach: in_1 = open(input_1) in_2 = open(input_2) out_1 = open(output_1, "w") out_2 = open(output_2, "w") # Read one line from each 'in_' file # Do many operations on the DNA sequences included in the input files # Append one line to each 'out_' file in_1.close() in_2.close() out_1.close() out_2.close() The files are huge (each potentially approaching 1 GB), which is why I am reading through these input files one line at a time. I am guessing that this is not a very Pythonic way to do things. :) Would using the following form be good? with open("file1") as f1: with open("file2") as f2: # etc. If yes, could I do it while avoiding the highly indented code that would result? Thanks for the insights!
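    A single with statement can manage several files at once (Python 2.7/3.1 and later), and contextlib.ExitStack (Python 3.3+) handles an arbitrary number of files without nesting. A minimal sketch with placeholder file names:

        from contextlib import ExitStack

        # Two inputs and two outputs in one `with` -- no nesting required.
        with open("input_1") as in_1, open("input_2") as in_2, \
             open("output_1", "w") as out_1, open("output_2", "w") as out_2:
            for line_1, line_2 in zip(in_1, in_2):
                # ... do the DNA-sequence work on line_1/line_2 here ...
                out_1.write(line_1)
                out_2.write(line_2)

        # For a variable number of files, ExitStack avoids the indentation pile-up.
        paths = ["input_1", "input_2"]
        with ExitStack() as stack:
            handles = [stack.enter_context(open(p)) for p in paths]
            # every handle is closed automatically when the block exits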

    Read the article

  • Default permission for newly-created files/folders using ACLs not respected by commands like "unzip"

    - by Ngoc Pham
    I am having trouble setting up a system for multiple users accessing the same set of files. I've read tuts and docs around and played with ACLs but haven't succeeded yet. MY SCENARIO: Have multiple users, for example, user1 and user2, which belong to a group called sharedusers. They must all have WRITE permission to the same set of files and directories, say under /userdata/sharing/. I have the folder's group set to sharedusers and the SGID bit set so that all newly created files/dirs inside get the same group. ubuntu@home:/userdata$ ll drwxr-sr-x 2 ubuntu sharedusers 4096 Nov 24 03:51 sharing/ I set ACLs on this directory so that permissions on sub-dirs/files are inherited from the parent. ubuntu@home:/userdata$ setfacl -m group:sharedusers:rwx sharing/ ubuntu@home:/userdata$ setfacl -d -m group:sharedusers:rwx sharing/ Here's what I've got: ubuntu@home:/userdata$ getfacl sharing/ # file: sharing/ # owner: ubuntu # group: sharedusers # flags: -s- user::rwx group::r-x group:sharedusers:rwx mask::rwx other::r-x default:user::rwx default:group::r-x default:group:sharedusers:rwx default:mask::rwx default:other::r-x Seems okay, as when I create a new folder with new files inside, the permissions are correct. ubuntu@home:/userdata/sharing$ mkdir a && cd a ubuntu@home:/userdata/sharing/a$ touch a_test ubuntu@home:/userdata/sharing/a$ getfacl a_test # file: a_test # owner: ubuntu # group: sharedusers user::rw- group::r-x #effective:r-- group:sharedusers:rwx #effective:rw- mask::rw- other::r-- As you can see, the sharedusers group has effective permission rw-. HOWEVER, if I have a zip file and use the unzip -q command to unzip it inside the folder sharing, the extracted folders don't have group write permission. Therefore, the users from group sharedusers cannot modify files under those extracted folders. ubuntu@home:/userdata/sharing$ unzip -q Joomla_3.0.2-Stable-Full_Package.zip ubuntu@home:/userdata/sharing$ ll drwxrwsr-x+ 2 ubuntu sharedusers 4096 Nov 24 04:00 a/ drwxr-xr-x+ 10 ubuntu sharedusers 4096 Nov 7 01:52 administrator/ drwxr-xr-x+ 13 ubuntu sharedusers 4096 Nov 7 01:52 components/ You can spot the difference in permissions between folder a (created before) and folder administrator extracted by unzip. And the ACLs of a file inside administrator: ubuntu@home:/userdata/sharing$ getfacl administrator/index.php # file: administrator/index.php # owner: ubuntu # group: ubuntu user::rw- group::r-x #effective:r-- group:sharedusers:rwx #effective:r-- mask::r-- other::r-- It also has the ubuntu group, not the sharedusers group as expected. Could someone please explain the problem and give me advice? Thank you in advance!
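    A likely explanation, offered tentatively: unzip restores the permission bits stored inside the archive by calling chmod explicitly, and an explicit chmod recalculates the ACL mask, so the inherited group:sharedusers:rwx entry ends up with a narrower effective permission than files created normally in the directory get. If that is what's happening, a post-extraction fix-up is one workaround; a rough sketch (run as a user allowed to chgrp to sharedusers, path is a placeholder):

        # Re-apply group rw (and setgid on directories) to an extracted tree.
        import os
        import shutil
        import stat

        ROOT = "/userdata/sharing/administrator"   # tree to repair
        GROUP = "sharedusers"

        for dirpath, dirnames, filenames in os.walk(ROOT):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                shutil.chown(path, group=GROUP)           # hand entries back to the shared group
                mode = os.stat(path).st_mode
                extra = stat.S_IRGRP | stat.S_IWGRP
                if stat.S_ISDIR(mode):
                    extra |= stat.S_IXGRP | stat.S_ISGID  # keep dirs enterable and setgid
                os.chmod(path, mode | extra)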

    Read the article

  • Visual Studio 2010: Publish minified javascript files instead of the original ones

    - by salgiza
    I have a Scripts folder, that includes all the .js files used in the project. Using the Ajax Minifier task, I generate .min.js files for each one. Depending on whether the application is running in debug or release mode, I include the original .js file, or the minified one. The Scripts folder looks like this: Scripts/script1.js Scripts/script1.min.js // Outside the project, generated automatically on build Scripts/script2.js Scripts/script2.min.js // Outside the project, generated automatically on build The .min.js files are outside the project (although in the same folder as the original files), and they are not copied into the destination folder when we publish the project. I have no experience whatsoever using build tasks (well, apart from including the minifier task), so I would appreciate if anyone could advise me as to which would be the correct way to: Copy the .min.js files to the destination folder when I publish the app from Visual Studio. Delete / Not copy the original js files (this is not vital, but I'd rather not copy files that will not be used in the app). Thanks,

    Read the article

  • how to start stop tomcat server using CMD?

    - by ali
    I set the path for Tomcat and set all the variables like 1. JAVA_HOME=C:\Program Files (x86)\Java\jdk1.6.0_22 2. CATALINA_HOME=G:\springwork\server\apache-tomcat-6.0.29 3. CLASSPATH=G:\springwork\server\apache-tomcat-6.0.29\lib\servlet-api.jar;G:\springwork\server\apache-tomcat-6.0.29\lib\jsp-api.jar;.; When I go to the bin folder and double-click startup.bat, Tomcat starts, and when I double-click shutdown.bat, Tomcat stops. But I want to start and stop Tomcat using CMD, so that from any folder I can run startup.bat to start the server and shutdown.bat to stop it. Thanks in advance.

    Read the article

  • Sending files through a webservice

    - by Jay
    Hi, I have to send some files through a webservice in C#. The files to be sent can be from different locations, e.g. one folder with 4 files and another folder with 5 files. Assume I have a mechanism to select which files to send. What would be the best way to send them? Should I send them one by one and let the client figure out how to put them together, or zip all the files into a single file and send that zip file to the client? If there is any other way to implement this, I would be more than happy to look into that approach too. Thanks
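    If the zip-and-send route wins out, the bundling step is small in most stacks (in .NET, libraries such as SharpZipLib or DotNetZip are commonly used for this). As a language-agnostic illustration, here is the idea sketched in Python with placeholder paths:

        # Bundle the selected files, wherever they live, into one payload to send.
        import zipfile

        selected = [
            "/data/folder_a/report.pdf",
            "/data/folder_b/image.png",
        ]

        with zipfile.ZipFile("payload.zip", "w", zipfile.ZIP_DEFLATED) as bundle:
            for path in selected:
                bundle.write(path, arcname=path.rsplit("/", 1)[-1])  # store just the file name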

    Read the article

  • Is it possible to store only a checksum of a large file in git?

    - by Andrew Grimm
    I'm a bioinformatician currently extracting normal-sized sequences from genomic files. Some genomic files are large enough that I don't want to put them into the main git repository, whereas I'm putting the extracted sequences into git. Is it possible to tell git "Here's a large file - don't store the whole file, just take its checksum, and let me know if that file is missing or modified." If that's not possible, I guess I'll have to either git-ignore the large files, or, as suggested in this question, store them in a submodule.
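    Git has no built-in "track only the checksum" mode (tools such as git-annex grew up to fill roughly this gap). A lightweight do-it-yourself variant is to git-ignore the large file and commit a small checksum sidecar next to it, then compare on demand; a minimal sketch with a hypothetical file name:

        # Record a SHA-256 for a git-ignored large file in a committed .sha256 sidecar.
        import hashlib
        from pathlib import Path

        def sha256(path, chunk=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        genome = Path("genomes/contigs.fa")                  # large, git-ignored file
        sidecar = genome.parent / (genome.name + ".sha256")  # small, committed file

        if not genome.exists():
            print("missing:", genome)
        elif sidecar.exists() and sidecar.read_text().strip() != sha256(genome):
            print("modified:", genome)
        else:
            sidecar.write_text(sha256(genome) + "\n")        # (re)record the checksum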

    Read the article

  • How can I create a boolean in the `if` statement within the for loop to check for existence of term prepended to filename %%f?

    - by user784637
    I have lyrics for about 60% of my song collection. The filename of the lyrics is the same as the filename of the song, with zzz_ prepended and .lrc as the extension. C:\Songs\album\song.mp3 C:\Songs\album\zzz_song.lrc I currently print the file names like so: for /r "C:\Songs" %%f in (*.mp3 *.flac) do ( echo %%f ) How can I create a boolean in the if statement within the for loop as a check on the existence of lyrics files? I was thinking of something like if exist zzz_%f echo zzz_%f.lrc, but zzz_%f prepends zzz_ to the full file path (e.g. zzz_C:\Songs\album\song.mp3) and .lrc is appended to the existing extension.
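    On the batch side, the usual trick is the FOR variable modifiers: %%~dpf gives the drive and path of %%f and %%~nf gives the bare name without extension, so a check along the lines of if exist "%%~dpfzzz_%%~nf.lrc" prepends zzz_ to the name rather than to the whole path. The same check, sketched in Python for clarity (folder taken from the question):

        # Report which songs have / lack a zzz_<name>.lrc companion file.
        from pathlib import Path

        for song in Path(r"C:\Songs").rglob("*"):
            if song.suffix.lower() not in (".mp3", ".flac"):
                continue
            lyric = song.with_name("zzz_" + song.stem + ".lrc")
            status = "has lyrics:   " if lyric.exists() else "needs lyrics: "
            print(status + str(song))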

    Read the article

  • .BAT Switches Parsing

    - by giiYanJ
    Erm... I'm working on switch parsing in BAT files, which goes like this: <commands> -a -b -x -y -z -u abc ...... The user may input many switches, or none. So I used a looped SHIFT to make parsing an arbitrary number of switches possible: :loop IF "%1"=="-a" ... SHIFT GOTO loop But when the script ends, I always get cmd executing the switches and showing an error like '-n' is not recognized as an internal... Does anyone have an idea? Thanks a lot. P.S.: Please keep the solution as a BAT script if possible, as using another language may cause dependency problems; this script is aimed at ANY computer with Windows. Finally, thanks again =) EDIT: I tried shf301's suggestion and found out that I use DEL %0 to make the script delete itself, but it seems the %0 is shifted into the arguments because of the SHIFT command.

    Read the article

  • reading files provided via $_GET

    - by Max
    I have a PHP script which takes a relative pathname via $_GET, reads that file and creates a thumbnail of it. I don't want the user to be able to read any file from the server. Only files from a certain directory should be allowed; otherwise the script should exit(). Here is my folder structure: files/ <-- all files from this folder are public my_stuff/ <-- this is the folder of my script that reads the files My script is accessed via mydomain.com/my_stuff/script.php?pathname=files/some.jpg. What should not be allowed, e.g.: mydomain.com/my_stuff/script.php?pathname=files/../db_login.php So, here is the relevant part of the script in the my_stuff folder: ... $pathname = $_GET['pathname']; $pathname = realpath('../' . $_GET['pathname']); if(strpos($pathname, '/files/') === false) exit('Error'); ... I am not really sure about this approach; it doesn't seem too safe to me. Does anyone have a better idea?
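    One way to tighten this (the same idea works in PHP with realpath() and an anchored strpos() === 0 check): instead of testing whether '/files/' appears somewhere in the resolved path, resolve the allowed base directory once and require the requested path to resolve to something inside it. A sketch of that anchored check in Python, mirroring the '../' layout from the question:

        # Only serve files whose real path lies inside the public 'files' directory.
        import os

        BASE = os.path.realpath("../files")               # the only tree that may be read

        def safe_path(pathname):
            candidate = os.path.realpath(os.path.join("..", pathname))
            if candidate == BASE or candidate.startswith(BASE + os.sep):
                return candidate
            raise ValueError("path escapes the allowed directory")

        # safe_path("files/some.jpg")        -> allowed
        # safe_path("files/../db_login.php") -> raises ValueError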

    Read the article

  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html for example). So I created a Webdev group containing all of the students with the default umask of 0002 set in their .bashrc (this allows newly created files to be 774). The shared folder is owned by the group Webdev with chmod g+s applied so that new files/folders also belong to the group Webdev. The problem is that the students are using an IDE (Coda 2), and when they create a new file or folder using the IDE, the file has permissions of 644 on the server (not group writable). However, when I make a new file by connecting with Cyberduck (an SFTP client), the file permissions are 664 (as they should be). So I don't understand why Coda would be any different. However, after some trial and error I believe that Coda is first creating the file on local disk and then uploading that file to the server. On a Mac, by default a newly created file is 644. When the client uploads a file that's already 644 it stays 644 on the server side (umask is kind of useless in this situation). I've also tried creating ACL permissions for that folder, but an uploaded file from my Mac via SCP doesn't get the default ACL permissions. In Coda there is an option to change file permissions on a transfer. However, this option seems to apply a chmod to all files being uploaded or saved. When one of the students is modifying a file created by someone else and tries to upload or save it, Coda also tries to do a chmod, but fails because that user isn't the owner of the file. My current solution is using bindfs... I mount the shared web folder and bindfs sets permissions and group ownership of newly created files. However, bindfs seems to be a bit slow and I'm sure there is a better solution. Even if the students ditched Coda 2 and used vim on the Mac with scp, the newly created files on the server would behave the same (644), which is the default on the Mac. Other options... 1) Either I teach the students to use (ssh/chmod) with their IDE to change their own file permissions when uploading. 2) I make all the students' Macs have the default umask of 0002, which would upload files with the right permissions. 3) Write a cron script to fix the file permissions every 5 to 15 minutes... (I think this option is the worst if students are working together at the same time). Is there any way that I could make all files that are uploaded via SCP have the default file permissions of 664 even though the uploaded file has a lower permission? (After hours of searching I don't think this is possible.) I guess a cron script is my best option for novice users. How do web developers work together on larger sites? Similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command Also similar: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp

    Read the article

  • Linux: prevent VNC from swapping like mad

    - by Weezy
    I'm accessing a MacMini (with MacOS X 10.4) from my Linux machine using VNC and there's an issue that is driving me crazy... My Linux machine has 4 GB of ram and I run a lot of various apps on it and I've got no issue at all. It's all snappy and don't hear the hard disk swapping/read/writing too often. Now with VNC, the hard disk is swapping like mad... When I'm moving things on the OS X desktop. So I was thinking of creating a ramdisk and forcing the temp VNC files to go into that ramdisk but the problem is I can't find any temp files. I've attempted to do that: #!/bin/bash while [ true ] do lsof | grep vnc done And eyeball parse the output to try to find some temp file: no luck. The VNC version I'm using is this one: $ vncviewer -version VNC Viewer Free Edition 4.1.1 for X - built Jan 30 2009 19:33:16 Copyright (C) 2002-2005 RealVNC Ltd. No matter how much data is coming from the Mac, there should be plenty of memory (4 GB of ram) so there's really no reason to swap like crazy. This is driving me mad. Any help as to how I could solve this problem is most welcome because this is literally driving me nuts.

    Read the article

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size <5MB) on an ext3 filesystem; this is what I have so far. After some searching here on Server Fault I have decided on a directory structure like this: 000/000/000000001.jpg ... 236/519/236519107.jpg This structure will allow me to save up to 1'000'000'000 images, as I'll store a max of 1'000 images in each leaf. I've created it; from a theoretical point of view it seems OK to me (though I have no experience with this), but I want to find out what will happen when the directories in there fill up with files. A question about creating this structure: is it better to create it all in one go (takes approx. 50 minutes on my PC) or should I create directories as they are needed? From a developer point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin point of view, is this OK? To test, I thought I could act as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring things as follows: how much time does it take for an image to be saved when there is little or no space used? How does this change as the space starts to be used up? How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files? Does launching the command sync; echo 3 | sudo tee /proc/sys/vm/drop_caches make any sense at all? Is this the only thing I have to do to have a clean start if I want to start over again with my tests? Do you have any suggestions or corrections?
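    For what it's worth, the id-to-path mapping falls straight out of zero-padding the id to nine digits and slicing it, and creating the leaf directory lazily is a one-liner, which sidesteps the "create a million directories up front" question entirely. A small sketch (storage root is a placeholder):

        # Map an image id to its 000/000/000000001.jpg-style path, creating
        # directories only when they are first needed.
        import os

        ROOT = "/var/images"   # placeholder storage root

        def image_path(image_id):
            padded = "%09d" % image_id                    # 236519107 -> '236519107'
            return os.path.join(ROOT, padded[0:3], padded[3:6], padded + ".jpg")

        def save_image(image_id, data):
            path = image_path(image_id)
            os.makedirs(os.path.dirname(path), exist_ok=True)   # lazy directory creation
            with open(path, "wb") as f:
                f.write(data)
            return path

        print(image_path(1))           # /var/images/000/000/000000001.jpg
        print(image_path(236519107))   # /var/images/236/519/236519107.jpg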

    Read the article

  • logrotate deletes all maillogs older than one day

    - by shadyabhi
    I see only two files, maillog and maillog.1, in /var/log. Grepping for maillog in the logrotate.d directory gives three files that mention maillog. syslog /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron { #/var/log/messages /var/log/secure /var/log/spooler /var/log/boot.log /var/log/cron { daily sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true endscript } syslog-ng /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron /var/log/kern.log /var/log/kern { sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true endscript } and maillog. /var/log/maillog { daily compress # rotate 365 rotate 14 sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true endscript } I am new to logrotate, so maybe I am missing something obvious. What could the issue be? The setup was already in place when I started managing the server, so I also don't know why there are 3 mentions of maillog in logrotate.d.

    Read the article

  • Log backups "stalling" on SQL 2008?

    - by MattK
    I have inherited a box running SQL Server 2008 and Windows 2003, and have had a few events where largeish (35GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35GB (on a DB with about 17GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, so the source database files and destination backup files are on the same volume. However, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?

    Read the article

  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Each of the Macs could edit files, and the other Macs should then get synced automatically. Basically my own local version of Dropbox without using "cloud storage". I have looked into solutions using rsync. As I understand it, rsync is not really capable of doing a bi-directional sync. I also do not want to have to invoke the sync process myself. I would prefer a daemon running in the background - waiting and checking for changes and then syncing them "live". The program should also be flexible enough to recognize that it sometimes (in the case of laptops) cannot reach the NAS. It should then just wait for the connection to come back, without bugging me every few minutes. I have looked into synk, folderwatch, rsync and a few others, but I haven't really found a solution. Isn't there something like "offline folders" from Microsoft for the Mac? Thanks PS: just for clarification - I don't want to sync for backup purposes; instead I want to sync so that all Macs have a local copy of the most recent changes to files.

    Read the article

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following config, and do /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet): /usr/local/nginx/logs/*.log { #rotate the logfile(s) daily daily # adds extension like YYYYMMDD instead of simply adding a number dateext # If log file is missing, go on to next one without issuing an error msg missingok # Save logfiles for the last 49 days rotate 49 # Old versions of log files are compressed with gzip compress # Postpone compression of the previous log file to the next rotation cycle delaycompress # Do not rotate the log if it is empty notifempty # create mode owner group create 644 nginx nginx #after logfile is rotated and nginx.pid exists, send the USR1 signal postrotate [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid` endscript } I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: can I do the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.

    Read the article

  • java ioexception error=24 too many files open

    - by MattS
    I'm writing a genetic algorithm that needs to read/write lots of files. The fitness test for the GA is invoking a program called gradif, which takes a file as input and produces a file as output. Everything is working except when I make the population size and/or the total number of generations of the genetic algorithm too large. Then, after so many generations, I start getting this: java.io.FileNotFoundException: testfiles/GradifOut29 (Too many open files). (I get it repeatedly for many different files, the index 29 was just the one that came up first last time I ran it). It's strange because I'm not getting the error after the first or second generation, but after a significant amount of generations, which would suggest that each generation opens up more files that it doesn't close. But as far as I can tell I'm closing all of the files. The way the code is set up is the main() function is in the Population class, and the Population class contains an array of Individuals. Here's my code: Initial creation of input files (they're random access so that I could reuse the same file across multiple generations) files = new RandomAccessFile[popSize]; for(int i=0; i<popSize; i++){ files[i] = new RandomAccessFile("testfiles/GradifIn"+i, "rw"); } At the end of the entire program: for(int i=0; i<individuals.length; i++){ files[i].close(); } Inside the Individual's fitness test: FileInputStream fin = new FileInputStream("testfiles/GradifIn"+index); FileOutputStream fout = new FileOutputStream("testfiles/GradifOut"+index); Process process = Runtime.getRuntime().exec ("./gradif"); OutputStream stdin = process.getOutputStream(); InputStream stdout = process.getInputStream(); Then, later.... try{ fin.close(); fout.close(); stdin.close(); stdout.close(); process.getErrorStream().close(); }catch (IOException ioe){ ioe.printStackTrace(); } Then, afterwards, I append an 'END' to the files to make parsing them easier. FileWriter writer = new FileWriter("testfiles/GradifOut"+index, true); writer.write("END"); try{ writer.close(); }catch(IOException ioe){ ioe.printStackTrace(); } My redirection of stdin and stdout for gradif are from this answer. I tried using the try{close()}catch{} syntax to see if there was a problem with closing any of the files (there wasn't), and I got that from this answer. It should also be noted that the Individuals' fitness tests run concurrently. UPDATE: I've actually been able to narrow it down to the exec() call. In my most recent run, I first ran in to trouble at generation 733 (with a population size of 100). Why are the earlier generations fine? I don't understand why, if there's no leaking, the algorithm should be able to pass earlier generations but fail on later generations. And if there is leaking, then where is it coming from? UPDATE2: In trying to figure out what's going on here, I would like to be able to see (preferably in real-time) how many files the JVM has open at any given point. Is there an easy way to do that?
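    Regarding UPDATE2: on Linux the live count is visible under /proc/<pid>/fd (or via lsof -p <pid>), and ulimit -n shows the per-process ceiling. For watching it change generation by generation, a small polling script works; a rough sketch using the third-party psutil package (assuming it is installed, and with a placeholder pid for the JVM):

        # Print the Java process's open-descriptor count once a second.
        import time
        import psutil

        JVM_PID = 12345                      # placeholder: the running java process id
        proc = psutil.Process(JVM_PID)

        while True:
            # num_fds() counts every descriptor (pipes included) on POSIX systems;
            # open_files() lists regular files only but also works on Windows.
            print(time.strftime("%H:%M:%S"), "fds:", proc.num_fds(),
                  "files:", len(proc.open_files()))
            time.sleep(1)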

    Read the article

  • SQL SERVER – Read Only Files and SQL Server Management Studio (SSMS)

    - by pinaldave
    Just like any other developer or DBA, SQL Server Management Studio is my favorite application. At any moment I have multiple instances of it open and am working in them. Recently, I came across a very interesting feature in SSMS related to "Read Only" files. I believe it is a little-known feature as well, so I decided to write a blog post about it. First, create a read-only SQL file. You can make any file read-only via Right Click >> Properties >> select the Read Only attribute. Now open the same file in SQL Server Management Studio. You will find that beside the file name there is a small 'lock' icon. This small icon indicates that the file is read-only. Now let us attempt to edit the read-only file. It will let us edit the file any way we want; however, when we attempt to save it, it gives the following pop-up. The options in the pop-up are self-explanatory and I liked them. The goal of a read-only file is to prevent users from making unintended changes. However, the user should still have complete control over the file. The user should be aware that the file is read-only, but if he wants to edit the file or save it as a new file, those choices should be presented to him, and the pop-up precisely captures that. Now let us check the option related to this feature in SSMS. Go to Menu >> Options >> Environment >> Documents. You will find the third option, which is "Allow editing of read-only files; warn when attempt to save". In the above scenario it was already checked. Let us uncheck it and repeat the same exercise we did earlier. I closed all the earlier windows to avoid confusion. With the option unchecked, when I attempt even to modify the read-only file, it gives me a totally different pop-up screen. It gives me options like "Edit In-Memory", "Make Writeable" etc. When you select "Edit In-Memory" it allows you to edit the file, and later you can save it as a new file – just like the earlier scenario we discussed. If you click Make Writeable, it removes the read-only restriction and the file can be edited as you please. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article
