Search Results

Search found 615 results on 25 pages for 'stat r'.

Page 4/25 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Neural Network Basics

    - by Stat Onetwothree
    I'm a computer science student and for this year's project, I need to create and apply a Genetic Algorithm to something. I think Neural Networks would be a good thing to apply it to, but I'm having trouble understanding them. I fully understand the concepts, but none of the websites out there really explain the following, which is blocking my understanding: how the decision is made for how many nodes there are; what the nodes actually represent and do; what part the weights and bias actually play in classification. Could someone please shed some light on this for me? Also, I'd really appreciate any similar ideas for what I could apply a GA to. Thanks very much! :)
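
    As a rough illustration of the last two points (a sketch, not part of the original question): a single artificial neuron just computes a weighted sum of its inputs plus a bias and passes the result through an activation function. The weights decide how strongly each input pushes the output toward one class, and the bias shifts the decision threshold. The weights and bias below are invented for the example; normally they would be learned (or, for this project, evolved by a GA).

        import math

        def neuron(inputs, weights, bias):
            # Weighted sum of the inputs plus the bias, squashed to (0, 1) by a sigmoid.
            z = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-z))

        # Hypothetical two-feature classifier.
        weights = [0.8, -0.5]
        bias = -0.2
        print(neuron([1.0, 0.3], weights, bias) > 0.5)  # classify as "positive" if output > 0.5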

    Read the article

  • How to compare two file structures in PHP?

    - by OM The Eternity
    I have a function which gives me the complete file structure up to n levels deep:

        function getDirectory($path = '.', $ignore = '') {
            $dirTree = array ();
            $dirTreeTemp = array ();
            $ignore[] = '.';
            $ignore[] = '..';
            $dh = @opendir($path);
            while (false !== ($file = readdir($dh))) {
                if (!in_array($file, $ignore)) {
                    if (!is_dir("$path/$file")) {
                        //display of file and directory name with their modification time
                        $stat = stat("$path/$file");
                        $statdir = stat("$path");
                        $dirTree["$path"][] = $file. " === ". date('Y-m-d H:i:s', $stat['mtime']) . " Directory == ".$path."===". date('Y-m-d H:i:s', $statdir['mtime']) ;
                    } else {
                        $dirTreeTemp = getDirectory("$path/$file", $ignore);
                        if (is_array($dirTreeTemp)) $dirTree = array_merge($dirTree, $dirTreeTemp);
                    }
                }
            }
            closedir($dh);
            return $dirTree;
        }

        $ignore = array('.htaccess', 'error_log', 'cgi-bin', 'php.ini', '.ftpquota');
        //function call
        $dirTree = getDirectory('.', $ignore);
        //file structure array print
        print_r($dirTree);

    Now, here is my requirement. I have two sites: the Development/Test site, where I test all changes, and the Production site, where I finally publish the changes once they pass testing. For example, once I have tested an image upload on the Development/Test site and found it fit to publish, I transfer the Development/Test DB details to the Production DB. I now also want to compare the file structures, so I can copy the corresponding image file to the Production folder. There can be a situation where I update an image by editing it and uploading it with the same name; in that case the file is already present on Production, which rules out a plain "file_exist" check. For situations like these, how can I compare the two file structures so the synchronization is done as required?
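
    One possible direction (a sketch, not from the original question): instead of only checking whether a file exists, index each tree by relative path and compare a content signature such as size plus mtime, so a re-uploaded image with the same name is still detected as changed. Illustrated here in Python; the two root paths are hypothetical:

        import os

        def index_tree(root):
            """Map relative path -> (size, mtime) for every file under root."""
            index = {}
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    full = os.path.join(dirpath, name)
                    st = os.stat(full)
                    index[os.path.relpath(full, root)] = (st.st_size, int(st.st_mtime))
            return index

        dev = index_tree("/path/to/dev/site")    # hypothetical paths
        prod = index_tree("/path/to/prod/site")

        new_files     = sorted(set(dev) - set(prod))
        changed_files = sorted(p for p in set(dev) & set(prod) if dev[p] != prod[p])
        # new_files and changed_files are the candidates to copy from dev to prod.

    Comparing a content hash instead of (size, mtime) is more robust if timestamps differ between the two servers.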

    Read the article

  • Add leading zeros to this javascript countdown script?

    - by eddjedi
    I am using the following countdown script, which works great, but I can't figure out how to add leading zeros to the numbers (e.g. so it displays 09 instead of 9). Can anybody help me out please? Here's the current script:

        function countDown(id, end, cur){
            this.container = document.getElementById(id);
            this.endDate = new Date(end);
            this.curDate = new Date(cur);
            var context = this;
            var formatResults = function(day, hour, minute, second){
                var displayString = [
                    '<div class="stat statBig">',day,'</div>',
                    '<div class="stat statBig">',hour,'</div>',
                    '<div class="stat statBig">',minute,'</div>',
                    '<div class="stat statBig">',second,'</div>'
                ];
                return displayString.join("");
            }
            var update = function(){
                context.curDate.setSeconds(context.curDate.getSeconds()+1);
                var timediff = (context.endDate-context.curDate)/1000;
                // Check if timer expired:
                if (timediff<0){
                    return context.container.innerHTML = formatResults(0,0,0,0);
                }
                var oneMinute=60;     //minute unit in seconds
                var oneHour=60*60;    //hour unit in seconds
                var oneDay=60*60*24;  //day unit in seconds
                var dayfield=Math.floor(timediff/oneDay);
                var hourfield=Math.floor((timediff-dayfield*oneDay)/oneHour);
                var minutefield=Math.floor((timediff-dayfield*oneDay-hourfield*oneHour)/oneMinute);
                var secondfield=Math.floor((timediff-dayfield*oneDay-hourfield*oneHour-minutefield*oneMinute));
                context.container.innerHTML = formatResults(dayfield, hourfield, minutefield, secondfield);
                // Call recursively
                setTimeout(update, 1000);
            };
            // Call the recursive loop
            update();
        }
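
    The underlying trick (not part of the question itself) is simply to format each number as a two-digit, zero-padded string before it is displayed. The padding logic is language-agnostic; shown here in Python for brevity:

        def pad2(n):
            # Format a non-negative number as a two-digit, zero-padded string.
            return f"{int(n):02d}"

        print(pad2(9), pad2(12))   # 09 12

    In the script above, the equivalent step would be padding day, hour, minute and second inside formatResults before they are joined into the display string.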

    Read the article

  • Suggestions for duplicate file finder algorithm (using C)

    - by Andrei Ciobanu
    Hello, I wanted to write a program that test if two files are duplicates (have exactly the same content). First I test if the files have the same sizes, and if they have i start to compare their contents. My first idea, was to "split" the files into fixed size blocks, then start a thread for every block, fseek to startup character of every block and continue the comparisons in parallel. When a comparison from a thread fails, the other working threads are canceled, and the program exits out of the thread spawning loop. The code looks like this: dupf.h #ifndef __NM__DUPF__H__ #define __NM__DUPF__H__ #define NUM_THREADS 15 #define BLOCK_SIZE 8192 /* Thread argument structure */ struct thread_arg_s { const char *name_f1; /* First file name */ const char *name_f2; /* Second file name */ int cursor; /* Where to seek in the file */ }; typedef struct thread_arg_s thread_arg; /** * 'arg' is of type thread_arg. * Checks if the specified file blocks are * duplicates. */ void *check_block_dup(void *arg); /** * Checks if two files are duplicates */ int check_dup(const char *name_f1, const char *name_f2); /** * Returns a valid pointer to a file. * If the file (given by the path/name 'fname') cannot be opened * in 'mode', the program is interrupted an error message is shown. **/ FILE *safe_fopen(const char *name, const char *mode); #endif dupf.c #include <errno.h> #include <pthread.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <unistd.h> #include "dupf.h" FILE *safe_fopen(const char *fname, const char *mode) { FILE *f = NULL; f = fopen(fname, mode); if (f == NULL) { char emsg[255]; sprintf(emsg, "FOPEN() %s\t", fname); perror(emsg); exit(-1); } return (f); } void *check_block_dup(void *arg) { const char *name_f1 = NULL, *name_f2 = NULL; /* File names */ FILE *f1 = NULL, *f2 = NULL; /* Streams */ int cursor = 0; /* Reading cursor */ char buff_f1[BLOCK_SIZE], buff_f2[BLOCK_SIZE]; /* Character buffers */ int rchars_1, rchars_2; /* Readed characters */ /* Initializing variables from 'arg' */ name_f1 = ((thread_arg*)arg)->name_f1; name_f2 = ((thread_arg*)arg)->name_f2; cursor = ((thread_arg*)arg)->cursor; /* Opening files */ f1 = safe_fopen(name_f1, "r"); f2 = safe_fopen(name_f2, "r"); /* Setup cursor in files */ fseek(f1, cursor, SEEK_SET); fseek(f2, cursor, SEEK_SET); /* Initialize buffers */ rchars_1 = fread(buff_f1, 1, BLOCK_SIZE, f1); rchars_2 = fread(buff_f2, 1, BLOCK_SIZE, f2); if (rchars_1 != rchars_2) { /* fread failed to read the same portion. 
* program cannot continue */ perror("ERROR WHEN READING BLOCK"); exit(-1); } while (rchars_1-->0) { if (buff_f1[rchars_1] != buff_f2[rchars_1]) { /* Different characters */ fclose(f1); fclose(f2); pthread_exit("notdup"); } } /* Close streams */ fclose(f1); fclose(f2); pthread_exit("dup"); } int check_dup(const char *name_f1, const char *name_f2) { int num_blocks = 0; /* Number of 'blocks' to check */ int num_tsp = 0; /* Number of threads spawns */ int tsp_iter = 0; /* Iterator for threads spawns */ pthread_t *tsp_threads = NULL; thread_arg *tsp_threads_args = NULL; int tsp_threads_iter = 0; int thread_c_res = 0; /* Thread creation result */ int thread_j_res = 0; /* Thread join res */ int loop_res = 0; /* Function result */ int cursor; struct stat buf_f1; struct stat buf_f2; if (name_f1 == NULL || name_f2 == NULL) { /* Invalid input parameters */ perror("INVALID FNAMES\t"); return (-1); } if (stat(name_f1, &buf_f1) != 0 || stat(name_f2, &buf_f2) != 0) { /* Stat fails */ char emsg[255]; sprintf(emsg, "STAT() ERROR: %s %s\t", name_f1, name_f2); perror(emsg); return (-1); } if (buf_f1.st_size != buf_f2.st_size) { /* File have different sizes */ return (1); } /* Files have the same size, function exec. is continued */ num_blocks = (buf_f1.st_size / BLOCK_SIZE) + 1; num_tsp = (num_blocks / NUM_THREADS) + 1; cursor = 0; for (tsp_iter = 0; tsp_iter < num_tsp; tsp_iter++) { loop_res = 0; /* Create threads array for this spawn */ tsp_threads = malloc(NUM_THREADS * sizeof(*tsp_threads)); if (tsp_threads == NULL) { perror("TSP_THREADS ALLOC FAILURE\t"); return (-1); } /* Create arguments for every thread in the current spawn */ tsp_threads_args = malloc(NUM_THREADS * sizeof(*tsp_threads_args)); if (tsp_threads_args == NULL) { perror("TSP THREADS ARGS ALLOCA FAILURE\t"); return (-1); } /* Initialize arguments and create threads */ for (tsp_threads_iter = 0; tsp_threads_iter < NUM_THREADS; tsp_threads_iter++) { if (cursor >= buf_f1.st_size) { break; } tsp_threads_args[tsp_threads_iter].name_f1 = name_f1; tsp_threads_args[tsp_threads_iter].name_f2 = name_f2; tsp_threads_args[tsp_threads_iter].cursor = cursor; thread_c_res = pthread_create( &tsp_threads[tsp_threads_iter], NULL, check_block_dup, (void*)&tsp_threads_args[tsp_threads_iter]); if (thread_c_res != 0) { perror("THREAD CREATION FAILURE"); return (-1); } cursor+=BLOCK_SIZE; } /* Join last threads and get their status */ while (tsp_threads_iter-->0) { void *thread_res = NULL; thread_j_res = pthread_join(tsp_threads[tsp_threads_iter], &thread_res); if (thread_j_res != 0) { perror("THREAD JOIN FAILURE"); return (-1); } if (strcmp((char*)thread_res, "notdup")==0) { loop_res++; /* Closing other threads and exiting by condition * from loop. */ while (tsp_threads_iter-->0) { pthread_cancel(tsp_threads[tsp_threads_iter]); } } } free(tsp_threads); free(tsp_threads_args); if (loop_res > 0) { break; } } return (loop_res > 0) ? 1 : 0; } The function works fine (at least for what I've tested). Still, some guys from #C (freenode) suggested that the solution is overly complicated, and it may perform poorly because of parallel reading on hddisk. What I want to know: Is the threaded approach flawed by default ? Is fseek() so slow ? Is there a way to somehow map the files to memory and then compare them ?
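
    For contrast (a sketch, not from the original post): the same size-check-then-compare idea done sequentially, one block at a time from a single thread, avoids the parallel-disk-read concern quoted above because each file is read exactly once, front to back. Mapping both files into memory (mmap(2), or Python's mmap module) is the usual answer to the last question, but a plain block loop is already simple and fast:

        import os

        def files_identical(path_a, path_b, block_size=8192):
            # Different sizes can never be duplicates.
            if os.path.getsize(path_a) != os.path.getsize(path_b):
                return False
            with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
                while True:
                    block_a = fa.read(block_size)
                    block_b = fb.read(block_size)
                    if block_a != block_b:
                        return False
                    if not block_a:          # both files exhausted: identical
                        return True

        # Hypothetical usage:
        # print(files_identical("/tmp/a.bin", "/tmp/b.bin"))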

    Read the article

  • How to keep a variable preserved while running a script through ssh

    - by Ali Raza
    I am trying to run a while loop with read through ssh:

        #!/bin/bash
        ssh [email protected] "cat /var/log/syncer/rm_filesystem.log | while read path; do
        stat -c \"%Y %n\" "$path" >> /tmp/fs_10.10.10.10.log
        done"

    But the issue is that my variable $path is being resolved on my localhost, whereas I want it to be resolved on the remote host, so that it reads the file on the remote host and takes the stat of all the folders/files listed in "rm_filesystem.log".
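
    One way to sidestep the local expansion (a sketch, not from the original question; in plain shell the usual fixes are escaping it as \$path or single-quoting the whole remote command): hand the remote script to ssh as a single argument from a program, so no local shell ever gets a chance to expand $path. A Python version, with a hypothetical host and output path:

        import subprocess

        host = "root@192.0.2.10"  # hypothetical host
        # The script is passed as one argv entry, so $path is only ever seen and
        # expanded by the remote shell.
        remote_script = (
            'while read -r path; do '
            'stat -c "%Y %n" "$path" >> /tmp/fs_remote.log; '
            'done < /var/log/syncer/rm_filesystem.log'
        )
        subprocess.run(["ssh", host, remote_script], check=True)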

    Read the article

  • Using find and tar with files with special characters in the name

    - by Costi
    I want to archive all .ctl files in a folder, recursively:

        tar -cf ctlfiles.tar `find /home/db -name "*.ctl" -print`

    The error message:

        tar: Removing leading `/' from member names
        tar: /home/db/dunn/j: Cannot stat: No such file or directory
        tar: 74.ctl: Cannot stat: No such file or directory

    I have these files: /home/db/dunn/j 74.ctl and j 75. Notice the extra space. What if the files have other special characters? How do I archive these files recursively?
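
    One robust route (a sketch, not from the original question): do the find and the archiving in the same process, so file names containing spaces or other special characters are never re-split by a shell. In shell this is usually find -print0 piped into tar --null -T -; the same idea in Python with tarfile, using a hypothetical archive name:

        import fnmatch
        import os
        import tarfile

        root = "/home/db"          # directory to search (from the question)
        archive = "ctlfiles.tar"   # hypothetical output name

        with tarfile.open(archive, "w") as tar:
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    if fnmatch.fnmatch(name, "*.ctl"):
                        # Paths are handled as single strings, so names like
                        # "j 74.ctl" keep their embedded space.
                        tar.add(os.path.join(dirpath, name))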

    Read the article

  • How do I permanently delete e-mail messages in the sendmail queue and keep them from coming back?

    - by Steven Oxley
    I have a pretty annoying problem here. I have been testing an application and have created some test e-mails to bogus e-mail addresses (not to mention that my server isn't really set up to send e-mail anyway). Of course, sendmail is not able to send these messages and they have been getting stuck in the sendmail queue. I want to manually delete the messages that have been building up in the queue instead of waiting the 5 days that sendmail usually takes to stop retrying. I am using Ubuntu 10.04 and /var/spool/mqueue/ is the directory in which every how-to I have read says the e-mails that are queued up are kept. When I delete the files in this directory, sendmail stops trying to process the e-mails until what appears to be a cron script runs and re-populates this directory with the messages I don't want sent. Here are some lines from my syslog: Jun 2 17:35:19 sajo-laptop sm-mta[9367]: o530SlbK009365: to=, ctladdr= (33/33), delay=00:06:27, xdelay=00:06:22, mailer=esmtp, pri=120418, relay=e.mx.mail.yahoo.com. [67.195.168.230], dsn=4.0.0, stat=Deferred: Connection timed out with e.mx.mail.yahoo.com. Jun 2 17:35:48 sajo-laptop sm-mta[9149]: o4VHn3cw003597: to=, ctladdr= (33/33), delay=2+06:46:45, xdelay=00:34:12, mailer=esmtp, pri=3540649, relay=mx2.hotmail.com. [65.54.188.94], dsn=4.0.0, stat=Deferred: Connection timed out with mx2.hotmail.com. Jun 2 17:39:02 sajo-laptop CRON[9510]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm) Jun 2 17:39:43 sajo-laptop sm-mta[9372]: o52LHK4s007585: to=, ctladdr= (33/33), delay=03:22:18, xdelay=00:06:28, mailer=esmtp, pri=1470404, relay=c.mx.mail.yahoo.com. [206.190.54.127], dsn=4.0.0, stat=Deferred: Connection timed out with c.mx.mail.yahoo.com. Jun 2 17:39:50 sajo-laptop sm-mta[9149]: o51I8ieV004377: to=, ctladdr= (33/33), delay=1+06:31:06, xdelay=00:03:57, mailer=esmtp, pri=6601668, relay=alt4.gmail-smtp-in.l.google.com. [74.125.79.114], dsn=4.0.0, stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com. Jun 2 17:40:01 sajo-laptop CRON[9523]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Does anyone know how I can get rid of these messages permanently? As a side note, I'd also like to know if there is a way to set up sendmail to "fake" sending e-mail. Is there?

    Read the article

  • Java Prepared Statement Error

    - by Suresh S
    Hi guys, the following code throws an error. I have an insert statement that is created once, and in the while loop I dynamically set its parameters; at the end I call ps2.addBatch() again:

        while ( (eachLine = in.readLine()) != null)) {
            for (int k=stat; k <=45;k++) {
                ps2.setString (k,main[(k-2)]);
            }
            stat=45;
            for (int l=1;l<= 2; l++) {
                ps2.setString((stat+l),pdp[(l-1)]);// Exception
            }
            ps2.addBatch();
        }

    This is the error:

        java.lang.ArrayIndexOutOfBoundsException: 45
            at oracle.jdbc.dbaccess.DBDataSetImpl._getDBItem(DBDataSetImpl.java:378)
            at oracle.jdbc.dbaccess.DBDataSetImpl._createOrGetDBItem(DBDataSetImpl.java:781)
            at oracle.jdbc.dbaccess.DBDataSetImpl.setBytesBindItem(DBDataSetImpl.java:2450)
            at oracle.jdbc.driver.OraclePreparedStatement.setItem(OraclePreparedStatement.java:1155)
            at oracle.jdbc.driver.OraclePreparedStatement.setString(OraclePreparedStatement.java:1572)
            at Processor.main(Processor.java:233)
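
    A guess at the cause (the INSERT statement itself is not shown): the second loop binds indices stat+1 and stat+2, i.e. 46 and 47, which appears to run past the number of ? markers in the prepared statement. The general rule, that every bound index and value must correspond to a placeholder, is easy to see in miniature with Python's sqlite3 and a hypothetical table:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (a TEXT, b TEXT)")

        rows = [("x1", "y1"), ("x2", "y2")]
        conn.executemany("INSERT INTO t VALUES (?, ?)", rows)   # 2 values per row: OK

        try:
            conn.execute("INSERT INTO t VALUES (?, ?)", ("x3", "y3", "z3"))  # 3 values, 2 markers
        except sqlite3.ProgrammingError as exc:
            print("binding more values than placeholders fails:", exc)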

    Read the article

  • sqlite3.OperationalError: database is locked - non-threaded application

    - by James C
    Hi, I have a Python application which throws the standard sqlite3.OperationalError: database is locked error. I have looked around the internet and could not find any solution that worked (please note that there is no multiprocessing/threading going on, and as you can see I have tried raising the timeout parameter). The sqlite file is stored on the local hard drive. The following function is one of many which access the sqlite database; it runs fine the first time it is called, but throws the above error the second time it is called (it is called as part of a for loop in another function):

        def update_index(filepath):
            path = get_setting('Local', 'web')
            stat = os.stat(filepath)
            modified = stat.st_mtime
            index_file = get_setting('Local', 'index')
            connection = sqlite3.connect(index_file, 30)
            cursor = connection.cursor()
            head, tail = os.path.split(filepath)
            cursor.execute('UPDATE hwlive SET date=? WHERE path=? AND name=?;', (modified, head, tail))
            connection.commit()
            connection.close()

    Many thanks.
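
    A common culprit with this symptom (an assumption, since the rest of the code is not shown) is another connection elsewhere in the program that was opened earlier and still holds a write transaction, so this function's connection times out waiting for the lock. A defensive pattern is to funnel all access through one shared connection and let a `with` block commit or roll back, so the write lock is released promptly. A sketch, with a hypothetical path standing in for get_setting('Local', 'index'):

        import os
        import sqlite3

        INDEX_FILE = "/path/to/index.sqlite"          # hypothetical
        _conn = sqlite3.connect(INDEX_FILE, timeout=30)

        def update_index(filepath):
            st = os.stat(filepath)
            head, tail = os.path.split(filepath)
            # The with-block commits on success (or rolls back on error).
            with _conn:
                _conn.execute(
                    'UPDATE hwlive SET date=? WHERE path=? AND name=?;',
                    (st.st_mtime, head, tail),
                )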

    Read the article

  • testing directory S_ISDIR acts inconsistently

    - by coubeatczech
    Hi, I'm doing simple tests on all files in a directory. But for some reason they sometimes behave wrongly. What's bad with my code?

        using namespace std;

        int main() {
            string s = "/home/";
            struct dirent * file;
            DIR * dir = opendir(s.c_str());
            while ((file = readdir(dir)) != NULL){
                struct stat * file_info = new (struct stat);
                stat(file->d_name, file_info);
                if ((file_info->st_mode & S_IFMT) == S_IFDIR)
                    cout << "dir" << endl;
                else
                    cout << "other" << endl;
            }
            closedir(dir);
        }
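
    A likely source of the inconsistency (my reading, not confirmed in the excerpt) is that stat() is given the bare d_name rather than the full path, so it only succeeds for names that also happen to exist relative to the current working directory; when it fails, st_mode holds stale or uninitialized data. The path-joining idea, sketched in Python:

        import os
        import stat

        root = "/home/"
        for name in os.listdir(root):
            full = os.path.join(root, name)    # stat the full path, not the bare name
            mode = os.stat(full).st_mode
            print("dir" if stat.S_ISDIR(mode) else "other", name)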

    Read the article

  • how to solve error processing /usr/lib/python2.7/dist-packages/pygst.pth:?

    - by ChitKo
    Error processing line 1 of /usr/lib/python2.7/dist-packages/pygst.pth: Traceback (most recent call last): File "/usr/lib/python2.7/site.py", line 161, in addpackage if not dircase in known_paths and os.path.exists(dir): File "/usr/lib/python2.7/genericpath.py", line 18, in exists os.stat(path) TypeError: must be encoded string without NULL bytes, not str Remainder of file ignored Error processing line 1 of /usr/lib/python2.7/dist-packages/pygtk.pth: Traceback (most recent call last): File "/usr/lib/python2.7/site.py", line 161, in addpackage if not dircase in known_paths and os.path.exists(dir): File "/usr/lib/python2.7/genericpath.py", line 18, in exists os.stat(path) TypeError: must be encoded string without NULL bytes, not str Remainder of file ignored Traceback (most recent call last): File "/usr/share/apport/apport-gtk", line 16, in <module> from gi.repository import GObject File "/usr/lib/python2.7/dist-packages/gi/importer.py", line 76, in load_module dynamic_module._load() File "/usr/lib/python2.7/dist-packages/gi/module.py", line 222, in _load version) File "/usr/lib/python2.7/dist-packages/gi/module.py", line 90, in __init__ repository.require(namespace, version) gi.RepositoryError: Failed to load typelib file '/usr/lib/girepository-1.0/GLib-2.0.typelib' for namespace 'GLib': Invalid magic header

    Read the article

  • Character Stats and Power

    - by Stephen Furlani
    I'm making an RPG game system and I'm having a hard time deciding on doing detailed or abstract character statistics. These statistics define the character's natural - not learned - abilities. For example: Mass Effect: 0 (None that I can see) X20 (Xtreme Dungeon Mastery): 1 "STAT" Diablo: 4 "Strength, Magic, Dexterity, Vitality" Pendragon: 5 "SIZ, STR, DEX, CON, APP" Dungeons & Dragons (3.x, 4e): 6 "Str, Dex, Con, Wis, Int, Cha" Fallout 3: 7 "S.P.E.C.I.A.L." RIFTS: 8 "IQ, ME, MA, PS, PP, PE, PB, Spd" Warhammer Fantasy Roleplay (1st ed?): 12-ish "WS, BS, S, T, Ag, Int, WP, Fel, A, Mag, IP, FP" HERO (5th ed): 14 "Str, Dex, Con, Body, Int, Ego, Pre, Com, PD, ED, Spd, Rec, END, STUN" The more stats, the more complex and detailed your character becomes. This comes with a trade-off however, because you usually only have limited resources to describe your character. D&D made this infamous with the whole min/max-ing thing where strong characters were typically not also smart. But also, a character with a high Str typically also has high Con, Defenses, Hit Points/Health. Without high numbers in all those other stats, they might as well not be strong since they wouldn't hold up well in hand-to-hand combat. So things like that force trade-offs within the category of strength. So my original (now rejected) idea was to force players into deciding between offensive and defensive stats: Might / Body Dexterity / Speed Wit / Wisdom Heart Soul But this left some stat's without "opposites" (or opposites that were easily defined). I'm leaning more towards the following: Body (Physical Prowess) Mind (Mental Prowess) Heart (Social Prowess) Soul (Spiritual Prowess) This will define a character with just 4 numbers. Everything else gets based off of these numbers, which means they're pretty important. There won't, however, be ways of describing characters who are fast, but not strong or smart, but absent minded. Instead of defining the character with these numbers, they'll be detailing their character by buying skills and powers like these: Quickness Add a +2 Bonus to Body Rolls when Dodging. for a character that wants to be faster, or the following for a big, tough character Body Building Add a +2 Bonus to Body Rolls when Lifting, Pushing, or Throwing objects. [EDIT - removed subjectiveness] So my actual questions is what are some pitfalls with a small stat list and a large amount of descriptive powers? Is this more difficult to port cross-platform (pen&paper, PC) for example? Are there examples of this being done well/poorly? Thanks,
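
    To make the proposal concrete (a sketch of the system described above, with invented numbers): four core scores plus a list of purchased powers that add situational bonuses to rolls, so speed or toughness comes from powers rather than from extra stats:

        import random

        class Character:
            def __init__(self, body, mind, heart, soul):
                self.stats = {"Body": body, "Mind": mind, "Heart": heart, "Soul": soul}
                # power name -> (stat it boosts, situation tag, bonus)
                self.powers = {}

            def buy_power(self, name, stat, situation, bonus):
                self.powers[name] = (stat, situation, bonus)

            def roll(self, stat, situation=None):
                # d20 + core stat + any power bonuses that apply to this situation.
                total = random.randint(1, 20) + self.stats[stat]
                for boosted_stat, tag, bonus in self.powers.values():
                    if boosted_stat == stat and tag == situation:
                        total += bonus
                return total

        pc = Character(body=3, mind=1, heart=2, soul=0)
        pc.buy_power("Quickness", "Body", "dodging", 2)   # the example power above
        print(pc.roll("Body", situation="dodging"))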

    Read the article

  • Ubuntu btrfs: how to remove rootflags=subvol=@ from grub.cfg

    - by mnpria
    When i mount "btrfs" as a root filesytem, the mount info is as below: root@ubuntu1304Btrfs:~# mount /dev/mapper/ubuntu1304Btrfs--vg-root on / type btrfs (rw,subvol=@) Is there a way to have a mount info without the "subvol" information ? I have tried executing what was mentioned here. I also updated the grub.cfg. Still rootflags=subvol=@ is not removed. Is there a way to remove this subvol information ? root@ubuntu1304Btrfs:/home# mount /dev/mapper/ubuntu1304Btrfs--vg-root on / type btrfs (rw,subvol=@) /dev/mapper/ubuntu1304Btrfs--vg-root on /home type btrfs (rw,subvol=@home) root@ubuntu1304Btrfs:/# stat / File: ‘/’ Size: 262 Blocks: 0 IO Block: 4096 directory Device: 12h/18d Inode: 256 Links: 1 Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2013-11-11 19:56:04.548121873 +0530 Modify: 2013-11-11 19:55:18.008120103 +0530 Change: 2013-11-11 19:55:18.008120103 +0530 Birth: - root@ubuntu1304Btrfs:/# stat /home/ File: ‘/home/’ Size: 230 Blocks: 0 IO Block: 4096 directory Device: 19h/25d Inode: 256 Links: 1 Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2013-11-12 12:24:52.346377976 +0530 Modify: 2013-11-12 12:24:50.338377900 +0530 Change: 2013-11-12 12:24:50.338377900 +0530 Birth: -

    Read the article

  • while installing kde 4.11 through terminal showing error while 'processing initramfs-tools'

    - by saptarshi nag
    the full output is------- Setting up initramfs-tools (0.103ubuntu0.2.1) ... update-initramfs: deferring update (trigger activated) Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.5.0-42-generic cp: cannot stat ‘/module-files.d/libpango1.0-0.modules’: No such file or directory cp: cannot stat ‘/modules/pango-basic-fc.so’: No such file or directory E: /usr/share/initramfs-tools/hooks/plymouth failed with return 1. update-initramfs: failed for /boot/initrd.img-3.5.0-42-generic with 1. dpkg: error processing initramfs-tools (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Mounting filesystem with special user id set

    - by qbi
    I want to mount the device /dev/sda3 to the directory /foo/bar/baz. After mounting the directory should have the uid of user johndoe. So I did: sudo -u johndoe mkdir /foo/bar/baz stat -c %U /foo/bar/baz johndoe and added the following line to my /etc/fstab: /dev/sda3 /foo/bar/baz ext4 noexec,noatime,auto,owner,nodev,nosuid,user 0 1 When I do now sudo -u johndoe mount /dev/sda3 the command stat -c %U /foo/bar/baz results in root rather than johndoe. What is the best way to mount this ext4-filesystem with uid johndoe set?

    Read the article

  • can rsyslog transfer files present in a directory

    - by Tarun
    I have configured rsyslog and it's working fine as intended. These are the conf files:

    Server side:

        $template OTHERS,"/rsyslog/test/log/%fromhost-ip%/others-log.log"
        $template APACHEACCESS,"/rsyslog/test/log/%fromhost-ip%/apache-access.log"
        $template APACHEERROR,"/rsyslog/test/log/%fromhost-ip%/apache-error.log"

        if $programname == 'apache-access' then ?APACHEACCESS
        & ~
        if $programname == 'apache-error' then ?APACHEERROR
        & ~
        *.* ?OTHERS

    Client side:

        # Apache default access file:
        $ModLoad imfile
        $InputFileName /var/log/apache2/access.log
        $InputFileTag apache-access:
        $InputFileStateFile stat-apache-access
        $InputFileSeverity info
        $InputRunFileMonitor

        # Apache default error file:
        $ModLoad imfile
        $InputFileName /var/log/apache2/error.log
        $InputFileTag apache-error:
        $InputFileStateFile stat-apache-error
        $InputFileSeverity error
        $InputRunFileMonitor

        if $programname == 'apache-access' then @10.134.125.179:514
        & ~
        if $programname == 'apache-error' then @10.134.125.179:514
        & ~
        *.* @10.134.125.179:514

    Now, instead of defining separate files, can I give rsyslog the complete directory, so that the client automatically sends all the log files present in /var/log/apache2, and on the syslog server side these files automatically get stored under different filenames?

    Read the article

  • cpan won't configure correctly on centos6, can't connect to internet

    - by dan
    I have Centos 6 setup and have installed perl-CPAN. When I run cpan it takes me through the setup and ends by telling me it can't connect to the internet and to enter a mirror. I enter a mirror, but it still can't install the package. What am I doing wrong? If you're accessing the net via proxies, you can specify them in the CPAN configuration or via environment variables. The variable in the $CPAN::Config takes precedence. <ftp_proxy> Your ftp_proxy? [] <http_proxy> Your http_proxy? [] <no_proxy> Your no_proxy? [] CPAN needs access to at least one CPAN mirror. As you did not allow me to connect to the internet you need to supply a valid CPAN URL now. Please enter the URL of your CPAN mirror CPAN needs access to at least one CPAN mirror. As you did not allow me to connect to the internet you need to supply a valid CPAN URL now. Please enter the URL of your CPAN mirror mirror.cc.columbia.edu::cpan Configuration does not allow connecting to the internet. Current set of CPAN URLs: mirror.cc.columbia.edu::cpan Enter another URL or RETURN to quit: [] New urllist mirror.cc.columbia.edu::cpan Please remember to call 'o conf commit' to make the config permanent! cpan shell -- CPAN exploration and modules installation (v1.9402) Enter 'h' for help. cpan[1]> install File::Stat CPAN: Storable loaded ok (v2.20) LWP not available Warning: no success downloading '/root/.cpan/sources/authors/01mailrc.txt.gz.tmp918'. Giving up on it. at /usr/share/perl5/CPAN/Index.pm line 225 ^CCaught SIGINT, trying to continue Warning: Cannot install File::Stat, don't know what it is. Try the command i /File::Stat/ to find objects with matching identifiers. cpan[2]>

    Read the article

  • APC + FPM memory fragmentation without a reason

    - by palmic
    My setup @ debian 6-testing is: nginx 1.1.8 + php 5.3.8 + php-fpm + APC 3.1.9 Because i use automatic deployment with apc_clear_cache() after deploy, my target is to set up APC to cache all my project (up to 612 php files smaller than 1MB) without files changes checking. My confs: FPM: pm.max_children = 25 pm.start_servers = 4 pm.min_spare_servers = 2 pm.max_spare_servers = 10 pm.max_requests = 500 APC: pc.enabled="1" apc.shm_segments = 1 apc.shm_size="64M" apc.num_files_hint = 640 apc.user_entries_hint = 0 apc.ttl="7200" apc.user_ttl="7200" apc.gc_ttl="600" apc.cache_by_default="1" apc.filters = "apc\.php$,apc_clear.php$" apc.canonicalize="0" apc.slam_defense="1" apc.localcache="1" apc.localcache.size="512M" apc.mmap_file_mask="/tmp/apc-php5.XXXXXX" apc.enable_cli="0" apc.max_file_size = 2M apc.write_lock="1" apc.localcache = 1 apc.localcache.size = 512 apc.report_autofilter="0" apc.include_once_override="0" apc.coredump_unmap="0" apc.stat="0" apc.stat_ctime="0" The problem is i have my APC memory fragmented into 5 pieces at 14,5MB loaded into memory which capacity is 64M. My system memory is 640MB, used about 270MB at most. The http responses lasts about 300ms - 5s. When i switch on apc.stat="1", the responses are about 50ms - 80ms which is quite good, but i cannot understand why is apc.stat="0" so much whorse. The APC diagnostic tool shows "1 Segment(s) with 64.0 MBytes (mmap memory, pthread mutex Locks locking)" in general cache information window so i hope i am right that it's mmap setup which allows tweaking APC shm onto upper values than shows system limit in /proc/sys/kernel/shmmax (my shows 33MB).

    Read the article

  • maillog "No route to host" error

    - by Sherwood Hu
    I have a CentOS server. It has sendmail installed but not used for a mail server. I forwarded the root email to another email address. However, I keep getting errors in maillog: Dec 6 08:49:16 server1 sm-msp-queue[16191]: qB6601et005433: to=root, ctladdr=root (0/0), delay=08:49:15, xdelay=00:00:00, mailer=relay, pri=883224, relay=[127.0.0.1], dsn=4.0.0, stat=Deferred: [127.0.0.1]: No route to host Dec 6 08:49:16 server1 sendmail[16190]: qB39nDfQ014062: to=<[email protected]>, delay=3+05:00:02, xdelay=00:00:00, mailer=esmtp, pri=6965048, relay=subdomain.example.com., dsn=4.0.0, stat=Deferred: subdomain.example.com.: No route to host Dec 6 08:49:16 server1 sendmail[16190]: qB39nDfR014062: to=<[email protected]>, delay=3+05:00:02, xdelay=00:00:00, mailer=esmtp, pri=7004959, relay=subdomain.example.com., dsn=4.0.0, stat=Deferred: subdomain.example.com.: No route to host In the forwarded email address, I received notification "it can't deliver email to [email protected]. subdoamin.example.com does have a MX record, and I do not want to add one. Is there any configuration that I can change to prevent this error? I want all emails to the root to be forwarded to the forward address.

    Read the article

  • Rsyslog stops sending data to remote server after log rotation

    - by Vincent B.
    In my configuration, I have rsyslog who is in charge of following changes of /home/user/my_app/shared/log/unicorn.stderr.log using imfile. The content is sent to another remote logging server using TCP. When the log file rotates, rsyslog stops sending data to the remote server. I tried reloading rsyslog, sending a HUP signal and restarting it altogether, but nothing worked. The only ways I could find that actually worked were dirty: stop the service, delete the rsyslog stat files and start rsyslog again. All that in a postrotate hook in my logrotate file. kill -9 rsyslog and start it over. Is there a proper way for me to do this without touching rsyslog internals? Rsyslog file $ModLoad immark $ModLoad imudp $ModLoad imtcp $ModLoad imuxsock $ModLoad imklog $ModLoad imfile $template WithoutTimeFormat,"[environment] [%syslogtag%] -- %msg%" $WorkDirectory /var/spool/rsyslog $InputFileName /home/user/my_app/shared/log/unicorn.stderr.log $InputFileTag unicorn-stderr $InputFileStateFile stat-unicorn-stderr $InputFileSeverity info $InputFileFacility local8 $InputFilePollInterval 1 $InputFilePersistStateInterval 1 $InputRunFileMonitor # Forward to remote server if $syslogtag contains 'apache-' then @@my_server:5000;WithoutTimeFormat :syslogtag, contains, "apache-" ~ *.* @@my_server:5000;SyslFormat Logrotate file /home/user/shared/log/*.log { daily missingok dateext rotate 30 compress notifempty extension gz copytruncate create 640 user user sharedscripts post-rotate (stop rsyslog && rm /var/spool/rsyslog/stat-* && start rsyslog 2&1) || true endscript } FYI, the file is readable for the rsyslog user, my server is reachable and other log files which do not rotate on the same cycle continue to be tracked properly. I'm running Ubuntu 12.04.

    Read the article

  • why sendmail resolves to ISP domain?

    - by digital illusion
    I wish to setup a local mail server for debugging purposes using fedora 15 I set up sendmail, but there is a problem. When I'm not connected to the internet, the local mail server delivers correctly (to localhost). And in /var/log/mail I see that I correctly delivered a mail to [email protected]: Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], size=328, class=0, nrcpts=1, msgid=<[email protected]>, relay=adriano@localhost Jun 21 18:24:56 PowersourceII sendmail[6020]: p5LGOuSV006020: from=<[email protected]>, size=506, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=PowersourceII.localdomain [127.0.0.1] Jun 21 18:24:56 PowersourceII sendmail[6019]: p5LGOttt006019: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:00, mailer=relay, pri=30328, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGOuSV006020 Message accepted for delivery) When I connect, networkmanager fills in /etc/resolv.conf with: domain fastwebnet.it search fastwebnet.it localdomain nameserver 62.101.93.101 nameserver 83.103.25.250 Now sendmail does not work any longer and tries to send messages to my ISP domain, as seen in the log: Jun 21 18:40:02 PowersourceII sendmail[6348]: p5LGe1LV006348: [email protected], [email protected] (500/500), delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=30327, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5LGe10n006352 Message accepted for delivery) Jun 21 18:40:02 PowersourceII sendmail[6354]: p5LGe10n006352: to=<[email protected]>, delay=00:00:01, xdelay=00:00:00, mailer=esmtp, pri=120651, relay=mx3.fastwebnet.it. [85.18.95.21], dsn=5.1.1, stat=User unknown As you can see, it tries to deliver a mail to [email protected], and fails The setup is working under other ISPs. How can I avoid the fastweb ISP DNS relay? Thank you

    Read the article

  • Recover original email from Sendmail log

    - by Xavi Colomer
    I have a website the contact form has been failing silently for two weeks (Wordpress + Contact form 7). Apparently updating to Contact Form 7 made the assigned email to fail [email protected], I also tested with [email protected] and it also failed until I tried with gmail and it finally worked. Apparently @telefonica.net and @me.com domains are not working with this version of the plugin, but I have to investigate the cause. I found the logs of the lost emails, but I would like to know If I can recover the sender or the content of the original messages. May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: from=www-data, size=3250, class=0, nrcpts=1, msgid=<[email protected]_web.com>, relay=www-data@localhost May 24 23:41:11 localhost sm-mta[27655]: s4P3fBdA027655: from=<[email protected]>, size=3359, class=0, nrcpts=1, msgid=<[email protected]_web.com>, proto=ESMTP, daemon=MTA-v4, relay=localhost.localdomain [127.0.0.1] May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=33250, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s4P3fBdA027655 Message accepted for delivery) May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:00:01, xdelay=00:00:01, mailer=esmtp, pri=123359, relay=tnetmx.telefonica.net. [86.109.99.69], dsn=5.0.0, stat=Service unavailable May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: s4P3fCdA027657: DSN: Service unavailable May 24 23:41:12 localhost sm-mta[27657]: s4P3fCdA027657: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30000, dsn=2.0.0, stat=Sent Thanks

    Read the article

  • Under what conditions will sendmail try to immediately resend a message instead of waiting for the standard requeue interval?

    - by Mike B
    CentOS 5.8 | Sendmail 8.14.4 I used to think that if SendMail experienced a temporary (400-class) error during delivery, it would place the message in a deferred queue (e.g. /var/spool/mqueue) and retry an hour later. For the most part, that appears to be the case. But every now and then, I'll notice log entries like this (email/domains renamed to protect the innocent :-) ) : Dec 5 01:43:03 foobox-out sendmail [11078]: qBE3l7js123022: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=124588, relay=exbox.foo.com. [10.10.10.10], dsn=4.0.0, stat=Deferred: 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel Dec 5 01:53:34 foobox-out sendmail [12763]: qBE3l7js123022: to=<[email protected]>, delay=00:10:31, xdelay=00:00:00, mailer=relay, pri=214588, relay=exbox.foo.com., dsn=4.0.0, stat=Deferred: 452 4.3.1 Insufficient system resources Dec 5 02:53:35 foobox-out sendmail [23255]: qBE3l7js123022: to=<[email protected]>, delay=01:10:32, xdelay=00:00:01, mailer=relay, pri=304588, relay=exbox.foo.com. [10.10.10.10], dsn=2.0.0, stat=Sent (<[email protected]> Queued mail for delivery) Why did Sendmail try again just 10 minutes after the first attempt and then wait another hour before trying again? If this is expected behavior, what scenarios will cause this faster requeue interval to occur?

    Read the article

  • ruby recursive regex

    - by Reed Debaets
    So why is this not working? I'm creating a regex that will match a formula (which is then part of a larger standard description). But I'm stuck here, as it doesn't appear to want to match embedded formulas within a formula.

        stat = /(Stat3|Stat2|Stat1)/
        number_sym = /[0-9]*/
        formula_sym = /((target's )?#{stat}|#{number_sym}|N#{number_sym})\%?/
        math_sym = /(\+|\-|\*|\/|\%)/
        formula = /^\((#{formula}|#{formula_sym}) (#{math_sym} (#{formula}|#{formula_sym}))?\)$/

        p "(target's Stat2 * N1%)".match(formula).to_s #matches
        p "((target's Stat2 * N1%) + 3)".match(formula).to_s #no match
        p "(Stat1 + ((target's Stat2 * N1%) + 3))".match(formula).to_s #no match
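
    My reading of the failure (not stated in the excerpt): when the formula regex is built, #{formula} interpolates the variable before it has been assigned, so it is nil and contributes an empty alternative — the pattern never actually recurses. Real nesting needs the regex engine's own recursion support (Ruby's engine offers subexpression calls such as \g<name>). The general idea, sketched with Python's third-party regex package, which supports recursive patterns; the stdlib re module does not:

        # pip install regex
        import regex

        # (?R) re-invokes the entire pattern, so parenthesised formulas may nest.
        formula = regex.compile(r"\((?:[^()]+|(?R))*\)")

        for s in ["(target's Stat2 * N1%)",
                  "((target's Stat2 * N1%) + 3)",
                  "(Stat1 + ((target's Stat2 * N1%) + 3))"]:
            print(bool(formula.fullmatch(s)), s)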

    Read the article
