Search Results

Search found 79415 results on 3177 pages for 'log file'.

Page 293 of 3177

  • Batch script is not executed if chcp was called

    - by Andy
    Hello! I'm trying to delete some files with Unicode characters in their names using a batch script (it's a requirement). So I run cmd and execute:

        > chcp 65001

    effectively setting the codepage to UTF-8. And it works:

        D:\temp\1>dir
         Volume in drive D has no label.
         Volume Serial Number is 8C33-61BF

         Directory of D:\temp\1

        02.02.2010  09:31    <DIR>          .
        02.02.2010  09:31    <DIR>          ..
        02.02.2010  09:32               508 1.txt
        02.02.2010  09:28                12 delete.bat
        02.02.2010  09:20                95 delete.cmd
        02.02.2010  09:13    <DIR>          Rún
        02.02.2010  09:13    <DIR>          ????? ???????
                       3 File(s)            615 bytes
                       4 Dir(s)  11 576 438 784 bytes free

        D:\temp\1>rmdir Rún

        D:\temp\1>dir
         Volume in drive D has no label.
         Volume Serial Number is 8C33-61BF

         Directory of D:\temp\1

        02.02.2010  09:56    <DIR>          .
        02.02.2010  09:56    <DIR>          ..
        02.02.2010  09:32               508 1.txt
        02.02.2010  09:28                12 delete.bat
        02.02.2010  09:20                95 delete.cmd
        02.02.2010  09:13    <DIR>          ????? ???????
                       3 File(s)            615 bytes
                       3 Dir(s)  11 576 438 784 bytes free

    Then I put the same rmdir commands in a batch script and save it in UTF-8 encoding. But when I run it, nothing happens, literally nothing: not even echo works from a batch script in this case. Even saving the script in OEM encoding does not help. So it seems that when I change the codepage to UTF-8 in the console, scripts simply stop working. Does somebody know how to fix that?
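    One frequently reported culprit is the byte-order mark: many editors prepend a BOM when saving UTF-8, and cmd.exe's batch parser reportedly chokes on it after chcp 65001, so saving the script as UTF-8 without a BOM is worth trying. And if batch were not a hard requirement, a language with native Unicode file APIs sidesteps the codepage problem entirely; a minimal sketch in Python, where the target path is a hypothetical stand-in for the asker's directories:

        import os
        import shutil

        # Paths with non-ASCII names; Python 3 strings are Unicode, and on
        # Windows the underlying calls use wide-character APIs, so no chcp
        # gymnastics are needed.
        targets = [r"D:\temp\1\Rún"]  # hypothetical example path

        for path in targets:
            if os.path.isdir(path):
                shutil.rmtree(path)  # remove a directory tree
            elif os.path.isfile(path):
                os.remove(path)      # remove a single file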

  • Out-of-memory algorithms for addressing large arrays

    - by reve_etrange
    I am trying to deal with a very large dataset. I have k = ~4200 matrices (of varying sizes) which must be compared combinatorially, skipping non-unique and self comparisons. Each of the k(k-1)/2 comparisons produces a matrix, which must be indexed against its parents (i.e. I can find out where it came from). The convenient way to do this is to (triangularly) fill a k-by-k cell array with the result of each comparison. The results are ~100 x ~100 matrices on average; using single-precision floats, the whole thing works out to about 400 GB. I need to 1) generate the cell array, or pieces of it, without trying to place the whole thing in memory and 2) access its elements (and their elements) in like fashion. My attempts have been inefficient due to reliance on MATLAB's eval(), as well as save and clear occurring inside loops:

        for i=1:k
            [~,m] = size(data{i});
            cur_var = ['H' int2str(i)];
            %# if i == 1; save('FileName'); end; %# If using a single MAT file and need to create it.
            eval([cur_var ' = cell(1,k-i);']);
            for j=i+1:k
                [~,n] = size(data{j});
                eval([cur_var '{i,j} = zeros(m,n,''single'');']);
                eval([cur_var '{i,j} = compare(data{i},data{j});']);
            end
            save(cur_var,cur_var); %# Add '-append' when using a single MAT file.
            clear(cur_var);
        end

    The other thing I have done is to perform the split when mod((i+j-1)/2, max(factor(k*(k-1)/2))) == 0. This divides the result into the largest number of same-size pieces, which seems logical. The indexing is a little more complicated, but not too bad, because a linear index can be used. Does anyone know/see a better way?
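    A common out-of-core pattern for this kind of pairwise result is one file per comparison, keyed by the index pair, with results memory-mapped on read. Here is a minimal sketch in Python with NumPy rather than MATLAB; the file naming scheme and the compare() stub are assumptions for illustration, not the asker's code:

        import numpy as np

        def result_path(i, j):
            """One file per (i, j) comparison; the naming scheme is hypothetical."""
            return f"pair_{i:05d}_{j:05d}.npy"

        def compare(a, b):
            # Stand-in for the real comparison; returns an (m, n) result
            # like the asker's compare(data{i}, data{j}).
            return (a @ b.T).astype(np.float32)

        def run_all(data):
            # Only one result matrix is in memory at any time.
            k = len(data)
            for i in range(k):
                for j in range(i + 1, k):
                    np.save(result_path(i, j), compare(data[i], data[j]))

        def load(i, j):
            # mmap_mode defers reading until elements are actually touched.
            return np.load(result_path(i, j), mmap_mode="r")

    The (i, j) pair baked into the filename preserves the parent indexing, and no eval() is needed anywhere.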

  • Uniquely identify files/folders in NTFS, even after move/rename

    - by Felix Dombek
    I haven't found a backup (synchronization) program which does what I want, so I'm thinking about writing my own. What I have now does the following: it goes through the data in the source and, for every file which has its archive bit set OR does not exist in the destination, copies it to the destination, overwriting any existing file. When done, it checks every file in the destination against the source and, if the file doesn't exist there, deletes it. The problem is that if I move or rename a large folder, it first gets copied to the destination even though it is, in principle, already there, just under a different path; the folder which was already there is only deleted afterwards. Apart from the unnecessary copying, I frequently run into space problems because my backup drive isn't large enough to hold the original data twice. Is there a way to programmatically identify such moved/renamed files or folders, e.g. by NTFS ID, physical location on the media, or something else? Are there existing solutions to this problem? I do not care about the programming language, but hints for doing this with Python, C++, C#, Java or Prolog are appreciated.
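    NTFS does assign every file a volume-stable file ID, and since Python 3.5 os.stat() exposes it on Windows as st_ino (with st_dev holding the volume serial number), so a file keeps the same (st_dev, st_ino) pair across renames and moves within one volume. A sketch of using that to detect moves between backup runs; the paths and the state-file name are hypothetical:

        import json
        import os

        def file_id(path):
            """(volume serial, NTFS file index); stable across rename and move
            within one volume. Exposed by os.stat on Windows since Python 3.5."""
            st = os.stat(path)
            return f"{st.st_dev}:{st.st_ino}"

        def snapshot(root):
            """Map file IDs to relative paths for every file under root."""
            ids = {}
            for dirpath, _, names in os.walk(root):
                for name in names:
                    p = os.path.join(dirpath, name)
                    ids[file_id(p)] = os.path.relpath(p, root)
            return ids

        # Compare this run's snapshot of the source against the one saved by
        # the previous run: same ID but a different relative path means the
        # file was moved or renamed, so the destination copy can be moved
        # instead of recopied and deleted.
        with open("last_run.json") as f:      # hypothetical state file
            old = json.load(f)
        new = snapshot(r"C:\data")            # hypothetical source root
        moved = {i: (old[i], new[i]) for i in new if i in old and old[i] != new[i]}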

  • std::ifstream buffer caching

    - by ledokol
    Hello everybody, in my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files to write the minimal one to the third file. This works pretty slowly on big files, and since I don't see any other choice (the iteration has to be done), I'm trying to optimize the file loading instead. I can use some amount of RAM for buffering. I mean, instead of reading 4 bytes from both files every time, I could read something like 100 MB once and work with that buffer afterwards, until there is no element left in the buffer, and then refill it. But I guess ifstream is already doing that; will an explicit buffer give me more performance, and is there any reason to add one? If fstream does buffer, maybe I can just change the size of that buffer? Added: my current code looks like this (pseudocode):

        // this is done in a loop
        int i1 = input1.read_integer();
        int i2 = input2.read_integer();
        if (!input1.eof() && !input2.eof())
        {
            if (i1 < i2)
            {
                output.write(i1);
                input2.seek_back(sizeof(int));  // i2 must be re-read next round
            }
            else
            {
                input1.seek_back(sizeof(int));  // i1 must be re-read next round
                output.write(i2);
            }
        }
        else
        {
            if (input1.eof())
                output.write(i2);
            else if (input2.eof())
                output.write(i1);
        }

    What I don't like here:

    - seek_back: I have to seek back to the previous position because there is no way to peek 4 bytes ahead.
    - Too much reading from the file: if one of the streams is at EOF, the loop still keeps checking that stream instead of copying the rest of the other stream straight to the output. This is not a big issue, though, because the chunk sizes are almost always equal.

    Can you suggest an improvement for that? Thanks.
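    For reference, iterators with lazy consumption remove the need for seek_back entirely: each stream is read exactly once, and a standard k-way merge takes care of the ordering. A minimal sketch in Python (file names are hypothetical; the files are assumed to hold raw little-endian 32-bit ints, so every read is a multiple of 4 bytes):

        import heapq
        import struct

        def ints(path, chunk=1 << 20):
            """Yield 4-byte little-endian ints, reading the file in 1 MB chunks."""
            with open(path, "rb") as f:
                while True:
                    buf = f.read(chunk)  # one big read instead of a read per int
                    if not buf:
                        return
                    yield from struct.unpack(f"<{len(buf) // 4}i", buf)

        with open("merged.bin", "wb") as out:
            # heapq.merge lazily merges already-sorted streams.
            for v in heapq.merge(ints("a.bin"), ints("b.bin")):
                out.write(struct.pack("<i", v))

    As for the original question: std::ifstream does buffer internally (the analogous C++ knob is rdbuf()->pubsetbuf(), set before the file is opened), but a large explicit chunked read like the one above still tends to beat a call per integer.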

  • download large files using servlet

    - by niks
    I am using Apache Tomcat 6 and Java 1.6 and am trying to write large MP3 files to the ServletOutputStream for a user to download. The files range from 50 to 750 MB at the moment. The smaller files aren't causing too much of a problem, but with the larger files I am getting a socket exception: broken pipe.

        File fileMp3 = new File(objDownloadSong.getStrSongFolder() + "/" + strSongIdName);
        FileInputStream fis = new FileInputStream(fileMp3);
        response.setContentType("audio/mpeg");
        response.setHeader("Content-Disposition", "attachment; filename=\"" + strSongName + ".mp3\";");
        response.setContentLength((int) fileMp3.length());
        OutputStream os = response.getOutputStream();
        try {
            byte[] buffer = new byte[8192];  // copy in chunks rather than one byte at a time
            int bytesRead;
            while ((bytesRead = fis.read(buffer)) != -1) {
                os.write(buffer, 0, bytesRead);
            }
            os.flush();
        } catch (Exception excp) {
            downloadComplete = "-1";
            excp.printStackTrace();
        } finally {
            os.close();
            fis.close();
        }

  • How can I read a continuously updating log file in Perl?

    - by Octopus
    I have an application generating logs every 5 seconds. The logs are in the format below:

        11:13:49.250,interface,0,RX,0
        11:13:49.250,interface,0,TX,0
        11:13:49.250,interface,1,close,0
        11:13:49.250,interface,4,error,593
        11:13:49.250,interface,4,idle,2994215

    and so on for other interfaces. I am working to convert these into the CSV format below:

        Time,interface.RX,interface.TX,interface.close,...
        11:13:49,0,0,0,...

    Simple so far, but the problem is that I have to get the data into CSV format online, i.e. as soon as the log file is updated, the CSV should be updated as well. What I have tried, to read the output and build the header, is:

        #!/usr/bin/perl -w
        use strict;
        use File::Tail;

        my $head = ["Time"];
        my $pos = {};
        my $last_pos = 0;
        my $current_event = [];
        my $events = [];

        my $file = shift;
        $file = File::Tail->new($file);
        while (defined($_ = $file->read)) {
            next if $_ =~ some filters;
            my ($time, $interface, $count, $eve, $value) = split /[,\n]/, $_;
            my $eve_key = $interface . "." . $eve;   # e.g. "interface.RX"
            if (not defined $pos->{$eve_key}) {
                $last_pos += 1;
                $pos->{$eve_key} = $last_pos;
                push @$head, $eve_key;               # header column for this event
            }
            print join(",", @$head) . "\n";
        }

    Is there any way to do this using Perl?

  • Finding the row with max characters (C)

    - by l_core
    I have written a program in C to find the row with the max number of characters. Here is the code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <ctype.h>
        #include <string.h>

        int main(int argc, char *argv[])
        {
            int c;                               /* int, not char, so EOF can be detected */
            int c_tot = 0, c_rig = 0, c_max = 0; /* character counters */
            int r_tot = 0;                       /* row counter */
            FILE *fptr;

            if (argc != 2) {                     /* check argc before touching argv[1] */
                printf("Usage: %s FILE\n", argv[0]);
                exit(EXIT_FAILURE);
            }
            fptr = fopen(argv[1], "r");
            if (fptr == NULL) {
                printf("Error opening the file %s\n", argv[1]);
                exit(EXIT_FAILURE);
            }

            while ((c = getc(fptr)) != EOF) {
                if (c != ' ' && c != '\n') {
                    c_tot++;
                    c_rig++;
                }
                if (c == '\n') {
                    r_tot++;
                    if (c_rig > c_max)
                        c_max = c_rig;
                    c_rig = 0;
                }
            }

            printf("Total rows: %d\n", r_tot);
            printf("Total characters: %d\n", c_tot);
            printf("Max characters in a row: %d\n", c_max);
            printf("Average number of characters on a row: %d\n", c_tot / r_tot);
            printf("The row with max characters is: %s\n", ??????);
            return 0;
        }

    I can easily find the row with the highest number of characters, but how can I print that row out? Thank you, folks.
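    The missing piece is that a count alone is not enough: the program has to keep a copy of the current row in a buffer as it reads, and copy that buffer into a second "longest so far" buffer whenever a new maximum is found, so the text is still around when it is time to print. The same idea in a compact Python sketch (the file name is hypothetical, and unlike the C version this counts every character rather than skipping spaces):

        # Track the longest line while streaming through the file once.
        longest = ""
        with open("input.txt") as f:   # hypothetical file name
            for line in f:
                line = line.rstrip("\n")
                if len(line) > len(longest):
                    longest = line     # keep a copy of the text, not just its length
        print("The row with max characters is:", longest)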

  • CertMgr fails trying to import an SPC file

    - by nsr81
    We have an SPC file which came with the Cisco IP Communicator installer. It needs to be imported into the localMachine ROOT store. However, when certmgr.exe is run against this SPC file, it errors out; it doesn't matter whether it's run from within the installer or manually. The command I've been using is:

        certmgr.exe -add -all CDPcredentials.spc -s -r localMachine root

    The result displayed is:

        Error: Failed to save to the destination store
        CertMgr Failed

    There is no other information: no log file, nothing in the Event Viewer. It's almost as if the ROOT store were in a read-only state. I would also like to point out that I'm able to import single certificates, just not an SPC file, which contains multiple certificates. I have also tried different versions of the CertMgr utility. This is on Windows 7 Enterprise 64-bit. Any assistance would be appreciated.

  • File system loop detected in /var/named/chroot/var/named/ on CentOS 6.3

    - by wilco
    When I use the find command in a shell, I get the following error:

        find: File system loop detected; '/var/named/chroot/var/named' is part of the same file system loop as '/var/named'.

    I verified the inode numbers and they come out the same, as shown below:

        [root@serverone ~]# ls -ldi /var/named/chroot/var/named/ /var/named
        6684673 drwxr-x--- 6 root named 4096 Sep 7 17:17 /var/named
        6684673 drwxr-x--- 6 root named 4096 Sep 7 17:17 /var/named/chroot/var/named/

    I cannot remove the directory with rm -f; it just says this is a directory. This is a minimal CentOS 6.3 install with Plesk 11. Any help would be appreciated.

  • Unable to format disk: 'The system cannot find the file specified'

    - by ACarter
    I have a USB flash drive which I may have mucked up, so I used DISKPART's CLEAN command to wipe it. I then created a simple volume and tried to format it (all of this using Windows' Disk Management). I was told:

        The system cannot find the file specified.

    So I tried using DISKPART (as an admin):

        DISKPART> select volume 9

        Volume 9 is the selected volume.

        DISKPART> format recommended

        DiskPart has encountered an error: The system cannot find the file specified.
        See the System Event Log for more information.

    As you can see, no luck. When I plug the drive in, the computer makes a beep as though it has recognised something, but nothing appears in My Computer. How can I format the disk so I can use it again?

  • Unable to execute file in the temporary directory

    - by Bixal
    I am using Windows 8.1 Pro 64-bit. I see this error almost every time I launch an executable file (to install it), though not for all of them. I don't see the error when I use Run as Administrator. I looked around and found a solution: giving the current user permissions on the temp folder. That solves the problem, but only temporarily; it comes back after restarting the PC. What can I do to prevent this? I don't really want to use the built-in Administrator account all the time. Update: the problem is caused by a cracked version of Adobe Acrobat, and the root cause is the cracked amtlib.dll. Read more here: http://www.sabernova.com/2013/12/cracked-adobe-acrobat-xi-will-revert.html#axzz2r8VSzZi9

  • Mac server default file permissions

    - by Bobby Jack
    How do I change the default file permissions for files created on a Mac server? In case it's relevant, this is a Mac Mini running Mac OS X 10.6.7. It's currently used mainly as a file server, and there are several users who need to share files. These files need to be writable by all of them, rather than the default, which is writable only by the owner. I've been trying to do something with umask and a startup script, but I'm not sure there's a startup script that would apply to connections made via the Finder. I also need this to apply to files created on a client (also Macs) and copied onto the server.
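    For background on the umask mechanics involved: each process creates files with mode 666 (777 for directories) minus its umask, so the usual 022 yields owner-writable 644 files, while 002 yields group-writable 664 files. A quick demonstration in Python (the path is a hypothetical example):

        import os
        import stat

        os.umask(0o002)                        # allow group write on new files
        with open("/tmp/umask_demo", "w"):     # hypothetical path
            pass
        mode = stat.S_IMODE(os.stat("/tmp/umask_demo").st_mode)
        print(oct(mode))                       # 0o664, i.e. rw-rw-r--

    The catch the asker has run into is that umask is per-process: it has to be set in the environment of the process that actually creates the files (the file-sharing daemon, for Finder connections), not just in a login shell.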

  • runit - unable to open supervise/ok: file does not exist

    - by Alexandr Kurilin
    I'm trying to figure out why runit will not boot my app or give me the status of the managed applications. This is on Ubuntu 12.04. I created /service and /etc/sv/myapp (with a run script, a config file, and a log folder with its own run script inside it). I then created a symlink from /service/ to /etc/sv/myapp. When I run sudo sv s /service/* I get the following error message:

        warning: /service/myapp: unable to open supervise/ok: file does not exist

    Some of my Googling revealed that restarting the svscan service might supposedly fix this, but killing it and running svscanboot didn't make a difference. Any suggestions? Am I missing a step here somewhere?

  • gparted installed on OpenSuse shows all file system types as greyed out except for hfs

    - by cmdematos.com
    I have had this problem before and fixed it, but I don't recall how I did it and I did not record it (sadness :( ). I have all the requisite commands installed on openSUSE to support gparted's efforts in creating any of the supported file systems. I recall that the problem was that gparted could not find the commands. In any event, all the file system types are greyed out in the context menu except for the legacy hfs partition type, which only supports < 2 GB. Even ext2 through ext4 are greyed out. How do I fix this?

  • Conky truncates text loaded from file

    - by takeshin
    I'm trying to configure Conky on Ubuntu, because I need to display my todo list on the desktop. The file is displayed, but the text is truncated (not at a rectangular boundary, just after some character limit). How do I display the whole file? Here is my setup:

        # Text alignment, other possible values are commented
        alignment top_right

        # Gap between borders of screen and text
        gap_x 10
        gap_y 10

        # Maximum size of buffer for user text, i.e. below TEXT line.
        max_user_text 16384

        # stuff after 'TEXT' will be formatted on screen
        TEXT
        ${execi 30 cat /home/user/Documents/todo.txt}

  • nginx tmp file folder running out of disk space

    - by user1179459
    I get a MySQL disk-space error:

        Can't create/write to file '/tmp/#sql_777_0.MYI' (Errcode: 28)

    mainly because my nginx server is writing files into the tmp folder that never get cleaned up. I added this command to the crontab, as per the instructions in the nginx manual, but it doesn't seem to be doing the trick (and I don't understand what it does, either):

        0 */1 * * * /usr/sbin/tmpwatch -am 1 /tmp/nginx_client

    So I had to run these commands manually:

        cd /tmp/nginx_client
        find -name * | xargs rm

    I need to know what I should do to automate this cleanup. Is there a way to increase the size of /tmp (or /var/tmp) without reformatting or doing anything dangerous? Can I change the location of the MySQL tmp files?
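    Assuming the intent of that cron line is "every hour, delete files under /tmp/nginx_client that haven't been touched for an hour", here is a rough Python equivalent of the cleanup, useful for checking what would be removed before automating it (path and age threshold taken from the asker's crontab entry):

        import os
        import time

        TMP_DIR = "/tmp/nginx_client"
        MAX_AGE = 3600  # seconds; files older than this get removed

        cutoff = time.time() - MAX_AGE
        for name in os.listdir(TMP_DIR):
            path = os.path.join(TMP_DIR, name)
            # Only remove plain files whose last modification is too old.
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)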

  • wamp php scan additional php.ini file

    - by user137971
    In addition to the main php.ini file, I would like PHP to scan a php.ini file located in the root directory of a website on localhost. Is this possible? I have done a lot of reading about this, but I am still not grasping exactly how it is done, or whether it is even possible. I can do this on my remote server and it works, so I don't really understand why WAMP won't search my root directory for a php.ini.

  • Single MSI file - many RemoteApps

    - by Mikeon
    How can I create a single .msi installer file for many RemoteApps in Remote Desktop Services? Suppose I have 10 applications exposed via RDS. To make life easier, I created .msi installer packages so users can "install" those apps. Currently I have 10 different .msi files, which forces users to run the installation 10 times. Is it possible to pack all (or some) of the apps into a single .msi file? (I don't control the user machines, so installing via GPO or other magic is out of the question.)

  • Why are my 2 new Windows 7 (64-bit) workstations taking over 10 minutes to log on to the SBS 2008 domain?

    - by Howie Hughes
    Hi, we have an SBS 2008 domain with Windows XP clients, and we are now testing Windows 7 (64-bit) machines on the network. They take between 10 and 15 minutes to log on, every time. I have checked the event logs on a client machine, and the only error I can see is:

        Event ID: 6005
        The winlogon notification subscriber took 615 seconds to handle the notification event (CreateSession).

    There are no warnings in the server event log, and everything pings OK by name, so I am guessing DNS is fine. Can someone please lend a hand with this, as we really want to go with Windows 7? Lastly, both the server and the Windows 7 machines are fully patched and updated. Thank you.

  • Can't create PID file on MySQL server, permission denied

    - by James Barnhill
    The MySQL server won't start and reports the following error:

        /usr/local/mysql/bin/mysqld: Can't create/write to file '/usr/local/mysql/data/James-Barnhills-Mac-Pro.local.pid' (Errcode: 13)
        Can't start server: can't create PID file: Permission denied

    The permissions are set recursively as:

        lrwxr-xr-x 1 _mysql wheel 27 Nov 22 09:25 mysql -> mysql-5.5.18-osx10.6-x86_64

    but it won't start. I've tried reinstalling several times to no avail. I'm running as root on Mac OS, and MySQL has read, write, and execute permissions on the "data" folder.

  • Java file manager won't load in Firefox

    - by Arthur
    I am using Webmin and I can't get the file manager to load in Firefox. It is a simple, Java-based file manager. When I try to load it, I get the following error:

        This module requires java to function, but your browser does not support java

    Internet Explorer works fine, and I have yet to try Chrome. Java is installed, and I have the same problem on both Windows and Linux. Java seems to work fine with everything else, with the exception of webcams. Any advice on the issue would be appreciated. Edit: I just checked, and it doesn't work in Chrome either.


  • Cannot create file in directory even though it's writable by a group I belong to

    - by Alan Berndt
    I have a directory structure owned by a certain group, and I am a member of the group that owns these directories. I am able to create files in one directory, but not in another, even though the permissions are the same.

        alan@bricky:/mnt/storage/media$ stat Music Music\ \(Lossy\)/
          File: `Music'
          Size: 34         Blocks: 8          IO Block: 4096   directory
        Device: fb00h/64256d    Inode: 4215424     Links: 3
        Access: (2775/drwxrwsr-x)  Uid: ( 1001/   media)   Gid: ( 1001/   media)
        Access: 2011-08-19 11:45:03.182586898 -0700
        Modify: 2011-08-19 11:44:01.412840027 -0700
        Change: 2011-08-19 11:45:02.734603240 -0700
          File: `Music (Lossy)/'
          Size: 6          Blocks: 8          IO Block: 4096   directory
        Device: fb00h/64256d    Inode: 1512056832  Links: 2
        Access: (2775/drwxrwsr-x)  Uid: ( 1001/   media)   Gid: ( 1001/   media)
        Access: 2011-08-19 11:45:03.190586606 -0700
        Modify: 2011-08-19 10:34:46.526530313 -0700
        Change: 2011-08-19 11:45:02.738603094 -0700

        alan@bricky:/mnt/storage/media$ touch Music/test
        alan@bricky:/mnt/storage/media$ touch Music\ \(Lossy\)/test
        touch: cannot touch `Music (Lossy)/test': Permission denied
