Search Results

Search found 615 results on 25 pages for 'stat r'.

Page 19/25 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Memory leak / GLib issue.

    - by Andrei Ciobanu
    1: /* 2: * File: xyn-playlist.c 3: * Author: Andrei Ciobanu 4: * 5: * Created on June 4, 2010, 12:47 PM 6: */ 7:   8: #include <dirent.h> 9: #include <glib.h> 10: #include <stdio.h> 11: #include <stdlib.h> 12: #include <sys/stat.h> 13: #include <unistd.h> 14:   15: /** 16: * Returns a list all the file(paths) from a directory. 17: * Returns 'NULL' if a certain error occurs. 18: * @param dir_path. 19: * @param A list of gchars* indicating what file patterns to detect. 20: */ 21: GSList *xyn_pl_get_files(const gchar *dir_path, GSList *file_patterns) { 22: /* Returning list containing file paths */ 23: GSList *fpaths = NULL; 24: /* Used to scan directories for subdirs. Acts like a 25: * stack, to avoid recursion. */ 26: GSList *dirs = NULL; 27: /* Current dir */ 28: DIR *cdir = NULL; 29: /* Current dir entries */ 30: struct dirent *cent = NULL; 31: /* File stats */ 32: struct stat cent_stat; 33: /* dir_path duplicate, on the heap */ 34: gchar *dir_pdup; 35:   36: if (dir_path == NULL) { 37: return NULL; 38: } 39:   40: dir_pdup = g_strdup((const gchar*) dir_path); 41: dirs = g_slist_append(dirs, (gpointer) dir_pdup); 42: while (dirs != NULL) { 43: cdir = opendir((const gchar*) dirs->data); 44: if (cdir == NULL) { 45: g_slist_free(dirs); 46: g_slist_free(fpaths); 47: return NULL; 48: } 49: chdir((const gchar*) dirs->data); 50: while ((cent = readdir(cdir)) != NULL) { 51: lstat(cent->d_name, &cent_stat); 52: if (S_ISDIR(cent_stat.st_mode)) { 53: if (g_strcmp0(cent->d_name, ".") == 0 || 54: g_strcmp0(cent->d_name, "..") == 0) { 55: /* Skip "." and ".." dirs */ 56: continue; 57: } 58: dirs = g_slist_append(dirs, 59: g_strconcat((gchar*) dirs->data, "/", cent->d_name, NULL)); 60: } else { 61: fpaths = g_slist_append(fpaths, 62: g_strconcat((gchar*) dirs->data, "/", cent->d_name, NULL)); 63: } 64: } 65: g_free(dirs->data); 66: dirs = g_slist_delete_link(dirs, dirs); 67: closedir(cdir); 68: } 69: return fpaths; 70: } 71:   72: int main(int argc, char** argv) { 73: GSList *l = NULL; 74: l = xyn_pl_get_files("/home/andrei/Music", NULL); 75: g_slist_foreach(l,(GFunc)printf,NULL); 76: printf("%d\n",g_slist_length(l)); 77: g_slist_free(l); 78: return (0); 79: } 80:   81:   82: -----------------------------------------------------------------------------------------------==15429== 83: ==15429== HEAP SUMMARY: 84: ==15429== in use at exit: 751,451 bytes in 7,263 blocks 85: ==15429== total heap usage: 8,611 allocs, 1,348 frees, 22,898,217 bytes allocated 86: ==15429== 87: ==15429== 120 bytes in 1 blocks are possibly lost in loss record 1 of 11 88: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 89: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 90: ==15429== by 0x40969C1: ??? 
(in /lib/libglib-2.0.so.0.2400.1) 91: ==15429== by 0x40971F6: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 92: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 93: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 94: ==15429== by 0x8048848: main (main.c:18) 95: ==15429== 96: ==15429== 129 bytes in 1 blocks are possibly lost in loss record 2 of 11 97: ==15429== at 0x4024F20: malloc (vg_replace_malloc.c:236) 98: ==15429== by 0x4081243: g_malloc (in /lib/libglib-2.0.so.0.2400.1) 99: ==15429== by 0x409B85B: g_strconcat (in /lib/libglib-2.0.so.0.2400.1) 100: ==15429== by 0x80489FE: xyn_pl_get_files (xyn-playlist.c:62) 101: ==15429== by 0x8048848: main (main.c:18) 102: ==15429== 103: ==15429== 360 bytes in 3 blocks are possibly lost in loss record 3 of 11 104: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 105: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 106: ==15429== by 0x40969C1: ??? (in /lib/libglib-2.0.so.0.2400.1) 107: ==15429== by 0x4097222: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 108: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 109: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 110: ==15429== by 0x8048848: main (main.c:18) 111: ==15429== 112: ==15429== 508 bytes in 1 blocks are still reachable in loss record 4 of 11 113: ==15429== at 0x402425F: calloc (vg_replace_malloc.c:467) 114: ==15429== by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1) 115: ==15429== by 0x409624D: ??? (in /lib/libglib-2.0.so.0.2400.1) 116: ==15429== by 0x409710C: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 117: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 118: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 119: ==15429== by 0x8048848: main (main.c:18) 120: ==15429== 121: ==15429== 508 bytes in 1 blocks are still reachable in loss record 5 of 11 122: ==15429== at 0x402425F: calloc (vg_replace_malloc.c:467) 123: ==15429== by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1) 124: ==15429== by 0x409626F: ??? (in /lib/libglib-2.0.so.0.2400.1) 125: ==15429== by 0x409710C: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 126: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 127: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 128: ==15429== by 0x8048848: main (main.c:18) 129: ==15429== 130: ==15429== 508 bytes in 1 blocks are still reachable in loss record 6 of 11 131: ==15429== at 0x402425F: calloc (vg_replace_malloc.c:467) 132: ==15429== by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1) 133: ==15429== by 0x4096291: ??? (in /lib/libglib-2.0.so.0.2400.1) 134: ==15429== by 0x409710C: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 135: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 136: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 137: ==15429== by 0x8048848: main (main.c:18) 138: ==15429== 139: ==15429== 1,200 bytes in 10 blocks are possibly lost in loss record 7 of 11 140: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 141: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 142: ==15429== by 0x40969C1: ??? 
(in /lib/libglib-2.0.so.0.2400.1) 143: ==15429== by 0x40971F6: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 144: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 145: ==15429== by 0x8048A0D: xyn_pl_get_files (xyn-playlist.c:61) 146: ==15429== by 0x8048848: main (main.c:18) 147: ==15429== 148: ==15429== 2,040 bytes in 1 blocks are still reachable in loss record 8 of 11 149: ==15429== at 0x402425F: calloc (vg_replace_malloc.c:467) 150: ==15429== by 0x408113B: g_malloc0 (in /lib/libglib-2.0.so.0.2400.1) 151: ==15429== by 0x40970AB: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 152: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 153: ==15429== by 0x80488F0: xyn_pl_get_files (xyn-playlist.c:41) 154: ==15429== by 0x8048848: main (main.c:18) 155: ==15429== 156: ==15429== 4,320 bytes in 36 blocks are possibly lost in loss record 9 of 11 157: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 158: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 159: ==15429== by 0x40969C1: ??? (in /lib/libglib-2.0.so.0.2400.1) 160: ==15429== by 0x4097222: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 161: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 162: ==15429== by 0x80489D2: xyn_pl_get_files (xyn-playlist.c:58) 163: ==15429== by 0x8048848: main (main.c:18) 164: ==15429== 165: ==15429== 56,640 bytes in 472 blocks are possibly lost in loss record 10 of 11 166: ==15429== at 0x4024106: memalign (vg_replace_malloc.c:581) 167: ==15429== by 0x4024163: posix_memalign (vg_replace_malloc.c:709) 168: ==15429== by 0x40969C1: ??? (in /lib/libglib-2.0.so.0.2400.1) 169: ==15429== by 0x4097222: g_slice_alloc (in /lib/libglib-2.0.so.0.2400.1) 170: ==15429== by 0x40988A5: g_slist_append (in /lib/libglib-2.0.so.0.2400.1) 171: ==15429== by 0x8048A0D: xyn_pl_get_files (xyn-playlist.c:61) 172: ==15429== by 0x8048848: main (main.c:18) 173: ==15429== 174: ==15429== 685,118 bytes in 6,736 blocks are definitely lost in loss record 11 of 11 175: ==15429== at 0x4024F20: malloc (vg_replace_malloc.c:236) 176: ==15429== by 0x4081243: g_malloc (in /lib/libglib-2.0.so.0.2400.1) 177: ==15429== by 0x409B85B: g_strconcat (in /lib/libglib-2.0.so.0.2400.1) 178: ==15429== by 0x80489FE: xyn_pl_get_files (xyn-playlist.c:62) 179: ==15429== by 0x8048848: main (main.c:18) 180: ==15429== 181: ==15429== LEAK SUMMARY: 182: ==15429== definitely lost: 685,118 bytes in 6,736 blocks 183: ==15429== indirectly lost: 0 bytes in 0 blocks 184: ==15429== possibly lost: 62,769 bytes in 523 blocks 185: ==15429== still reachable: 3,564 bytes in 4 blocks 186: ==15429== suppressed: 0 bytes in 0 blocks 187: ==15429== 188: ==15429== For counts of detected and suppressed errors, rerun with: -v 189: ==15429== ERROR SUMMARY: 7 errors from 7 contexts (suppressed: 17 from 8) 190: ---------------------------------------------------------------------------------------------- I am using the above code in order to create a list with all the filepaths in a certain directory. (In my case fts.h or ftw.h are not an option). I am using GLib as the data structures library. Still I have my doubts in regarding the way GLib is allocating, de-allocating memory ? When invoking g_slist_free(list) i also free the data contained by the elements ? Why all those memory leaks appear ? Is valgrind a suitable tool for profilinf memory issues when using a complex library like GLib ? 
LATER EDIT: If I call g_slist_foreach(l, (GFunc)g_free, NULL);, the valgrind report is different (all the memory leaks move from 'definitely lost' to 'indirectly lost'). Still, I don't see the point. Don't GLib collections implement a way to be freed?
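
    A minimal sketch of the usual fix (not taken from the article): g_slist_free() releases only the list cells, never the data they point to, so the g_strconcat()'d paths have to be freed as well. The libglib-2.0.so.0.2400 in the valgrind trace suggests GLib 2.24, so g_slist_free_full() (added in 2.28) is not available and the two-step form applies:

      /* Free the g_strconcat()'d paths first, then the list cells. */
      static void xyn_pl_free(GSList *list)
      {
          g_slist_foreach(list, (GFunc) g_free, NULL);
          g_slist_free(list);
          /* On GLib >= 2.28 this collapses into: g_slist_free_full(list, g_free); */
      }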

    Read the article

  • PHP creates a huge error log file on my server: feof() / fread()

    - by Nik
    I have a script that allows users to 'save as' a pdf, this is the script - <?php header("Content-Type: application/octet-stream"); $file = $_GET["file"] .".pdf"; header("Content-Disposition: attachment; filename=" . urlencode($file)); header("Content-Type: application/force-download"); header("Content-Type: application/octet-stream"); header("Content-Type: application/download"); header("Content-Description: File Transfer"); header("Content-Length: " . filesize($file)); flush(); // this doesn't really matter. $fp = fopen($file, "r"); while (!feof($fp)) { echo fread($fp, 65536); flush(); // this is essential for large downloads } fclose($fp); ? An error log is being created and gets to be more than a gig in a few days the errors I receive are- [10-May-2010 12:38:50] PHP Warning: filesize() [function.filesize]: stat failed for BYJ-Timetable.pdf in /home/byj/public_html/pdf_server.php on line 10 [10-May-2010 12:38:50] PHP Warning: Cannot modify header information - headers already sent by (output started at /home/byj/public_html/pdf_server.php:10) in /home/byj/public_html/pdf_server.php on line 10 [10-May-2010 12:38:50] PHP Warning: fopen(BYJ-Timetable.pdf) [function.fopen]: failed to open stream: No such file or directory in /home/byj/public_html/pdf_server.php on line 12 [10-May-2010 12:38:50] PHP Warning: feof(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 13 [10-May-2010 12:38:50] PHP Warning: fread(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 15 [10-May-2010 12:38:50] PHP Warning: feof(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 13 [10-May-2010 12:38:50] PHP Warning: fread(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 15 The line 13 and 15 just continue on and on... I'm a bit of a newbie with php so any help is great. Thanks guys Nik
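
    A hedged sketch of the usual guard (the file name pdf_server.php and paths come from the log above): the warnings start with a request for a PDF that does not exist, after which feof()/fread() loop forever on an invalid handle and flood the log. Validating the file before any header is sent stops both the loop and the "headers already sent" warning:

      <?php
      // basename() is an extra hardening step not in the original script; it blocks ../ traversal.
      $file = basename($_GET["file"]) . ".pdf";
      if (!is_file($file) || !is_readable($file)) {
          header("HTTP/1.1 404 Not Found");
          exit("File not found.");
      }
      header("Content-Type: application/octet-stream");
      header("Content-Disposition: attachment; filename=" . urlencode($file));
      header("Content-Length: " . filesize($file));
      $fp = fopen($file, "rb");
      while (!feof($fp)) {
          echo fread($fp, 65536);
          flush(); // still needed for large downloads
      }
      fclose($fp);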

    Read the article

  • ps forest for session id

    - by azatoth
    Often I want to get a nice readout what process are running and their relationship; I usually by habit runs ps auxfww and eventual grep for the process in question. Having been thinking about the problem I tried to create an oneliner to get the process tree in ps ufww format for all processes which has the session id specified by arbitrary process name(s); ending up in following code: ps ufww --sid=$(ps -C apache2 -o sess --no-headers | sort | uniq | grep -v -E '^ +0$' | awk 'NR==1{x=$0;next}NF{x=x","$0};END{gsub(/[[:space:]]*/,"",x);print x}') giving for example following output: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 4157 0.0 0.1 41264 3120 ? Ss Jun11 0:00 /usr/sbin/apache2 -k start www-data 4329 0.0 0.0 41264 1976 ? S Jun11 0:00 \_ /usr/sbin/apache2 -k start www-data 4330 0.0 0.0 41264 2028 ? S Jun11 0:00 \_ /usr/sbin/apache2 -k start www-data 4331 0.0 0.0 41264 2028 ? S Jun11 0:00 \_ /usr/sbin/apache2 -k start www-data 4332 0.0 0.0 41264 2028 ? S Jun11 0:00 \_ /usr/sbin/apache2 -k start www-data 4333 0.0 0.0 41264 2032 ? S Jun11 0:00 \_ /usr/sbin/apache2 -k start www-data 6648 0.0 0.0 41264 1884 ? S Jun11 0:00 \_ /usr/sbin/apache2 -k start www-data 6654 0.0 0.0 41264 1884 ? S Jun11 0:00 \_ /usr/sbin/apache2 -k start www-data 6655 0.0 0.0 41264 1884 ? S Jun11 0:00 \_ /usr/sbin/apache2 -k start I do wonder now if anyone has an better idea to solve this issue? Are there anything out there that is easier to "oneline" and gives above or better information? For example I would actually want to have included all childs relative any parent. (uncertain if this should be on SF instead, but felt it was more like an programming question)
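
    A couple of shorter variants of the same idea (untested sketches assuming GNU ps, coreutils and psmisc): "-o sess=" suppresses the header so the sort/grep/awk chain shrinks, and pstree gives the parent/child view directly.

      # same output as above, but the session list is joined with paste instead of awk
      ps ufww --sid="$(ps -C apache2 -o sess= | sort -un | grep -v '^ *0$' | paste -sd, -)"
      # tree of the oldest apache2 process and everything below it, with PIDs and user changes
      pstree -pu "$(pgrep -o apache2)"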

    Read the article

  • Help debugging C FIFO code - stack smashing detected - open call not functioning - removing pipes

    - by nunos
    I have three bugs/questions regarding the source code pasted below: stack smashing deteced: In order to compile and not have that error I have addedd the gcc compile flag -fno-stack-protector. However, this should be just a temporary solution, since I would like to find where the cause for this is and correct it. However, I haven't been able to do so. Any clues? For some reason, the last open function call doesn't work and the programs just stops there, without an error, even though the fifo already exists. I want to delete the pipes from the filesystem after before terminating the processes. I have added close and unlink statements at the end, but the fifos are not removed. What am I doing wrong? Thanks very much in advance. P.S.: I am pasting here the whole source file for additional clarity. Just ignore the comments, since they are in my own native language. server.c: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <errno.h> #define MAX_INPUT_LENGTH 100 #define FIFO_NAME_MAX_LEN 20 #define FIFO_DIR "/tmp/" #define FIFO_NAME_CMD_CLI_TO_SRV "lrc_cmd_cli_to_srv" typedef enum { false, true } bool; bool background = false; char* logfile = NULL; void read_from_fifo(int fd, char** var) { int n_bytes; read(fd, &n_bytes, sizeof(int)); *var = (char *) malloc (n_bytes); read(fd, *var, n_bytes); printf("read %d bytes '%s'\n", n_bytes, *var); } void write_to_fifo(int fd, char* data) { int n_bytes = (strlen(data)+1) * sizeof(char); write(fd, &n_bytes, sizeof(int)); //primeiro envia o numero de bytes que a proxima instrucao write ira enviar write(fd, data, n_bytes); printf("writing %d bytes '%s'\n", n_bytes, data); } int main(int argc, char* argv[]) { //CRIA FIFO CMD_CLI_TO_SRV, se ainda nao existir char* fifo_name_cmd_cli_to_srv; fifo_name_cmd_cli_to_srv = (char*) malloc ( (strlen(FIFO_NAME_CMD_CLI_TO_SRV) + strlen(FIFO_DIR) + 1) * sizeof(char) ); strcpy(fifo_name_cmd_cli_to_srv, FIFO_DIR); strcat(fifo_name_cmd_cli_to_srv, FIFO_NAME_CMD_CLI_TO_SRV); int n = mkfifo(fifo_name_cmd_cli_to_srv, 0660); //TODO ver permissoes if (n < 0 && errno != EEXIST) //se houver erro, e nao for por causa de ja haver um com o mesmo nome, termina o programa { fprintf(stderr, "erro ao criar o fifo\n"); fprintf(stderr, "errno: %d\n", errno); exit(4); } //se por acaso já existir, nao cria o fifo e continua o programa normalmente //le informacao enviada pelo cliente, nesta ordem: //1. pid (em formato char*) do processo cliente //2. comando /CONNECT //3. nome de fifo INFO_SRV_TO_CLIXXX //4. 
nome de fifo MSG_SRV_TO_CLIXXX char* command; char* fifo_name_info_srv_to_cli; char* fifo_name_msg_srv_to_cli; char* client_pid_string; int client_pid; int fd_cmd_cli_to_srv, fd_info_srv_to_cli; fd_cmd_cli_to_srv = open(fifo_name_cmd_cli_to_srv, O_RDONLY); read_from_fifo(fd_cmd_cli_to_srv, &client_pid_string); client_pid = atoi(client_pid_string); read_from_fifo(fd_cmd_cli_to_srv, &command); //recebe commando /CONNECT read_from_fifo(fd_cmd_cli_to_srv, &fifo_name_info_srv_to_cli); //recebe nome de fifo INFO_SRV_TO_CLIXXX read_from_fifo(fd_cmd_cli_to_srv, &fifo_name_msg_srv_to_cli); //recebe nome de fifo MSG_TO_SRV_TO_CLIXXX //CIRA FIFO MSG_CLIXXX_TO_SRV char fifo_name_msg_cli_to_srv[FIFO_NAME_MAX_LEN]; strcpy(fifo_name_msg_cli_to_srv, FIFO_DIR); strcat(fifo_name_msg_cli_to_srv, "lrc_msg_cli"); strcat(fifo_name_msg_cli_to_srv, client_pid_string); strcat(fifo_name_msg_cli_to_srv, "_to_srv"); n = mkfifo(fifo_name_msg_cli_to_srv, 0660); if (n < 0) { fprintf(stderr, "error creating %s\n", fifo_name_msg_cli_to_srv); fprintf(stderr, "errno: %d\n", errno); exit(5); } //envia ao cliente a resposta ao commando /CONNECT fd_info_srv_to_cli = open(fifo_name_info_srv_to_cli, O_WRONLY); write_to_fifo(fd_info_srv_to_cli, fifo_name_msg_cli_to_srv); free(logfile); free(fifo_name_cmd_cli_to_srv); close(fd_cmd_cli_to_srv); unlink(fifo_name_cmd_cli_to_srv); unlink(fifo_name_msg_cli_to_srv); unlink(fifo_name_msg_srv_to_cli); unlink(fifo_name_info_srv_to_cli); printf("fim\n"); return 0; } client.c: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <errno.h> #define MAX_INPUT_LENGTH 100 #define PID_BUFFER_LEN 10 #define FIFO_NAME_CMD_CLI_TO_SRV "lrc_cmd_cli_to_srv" #define FIFO_NAME_INFO_SRV_TO_CLI "lrc_info_srv_to_cli" #define FIFO_NAME_MSG_SRV_TO_CLI "lrc_msg_srv_to_cli" #define COMMAND_MAX_LEN 100 #define FIFO_DIR "/tmp/" typedef enum { false, true } bool; char* nickname; char* name; char* email; void write_to_fifo(int fd, char* data) { int n_bytes = (strlen(data)+1) * sizeof(char); write(fd, &n_bytes, sizeof(int)); //primeiro envia o numero de bytes que a proxima instrucao write ira enviar write(fd, data, n_bytes); printf("writing %d bytes '%s'\n", n_bytes, data); } void read_from_fifo(int fd, char** var) { int n_bytes; read(fd, &n_bytes, sizeof(int)); *var = (char *) malloc (n_bytes); printf("read '%s'\n", *var); read(fd, *var, n_bytes); } int main(int argc, char* argv[]) { pid_t pid = getpid(); //CRIA FIFO INFO_SRV_TO_CLIXXX char pid_string[PID_BUFFER_LEN]; sprintf(pid_string, "%d", pid); char* fifo_name_info_srv_to_cli; fifo_name_info_srv_to_cli = (char *) malloc ( (strlen(FIFO_DIR) + strlen(FIFO_NAME_INFO_SRV_TO_CLI) + strlen(pid_string) + 1 ) * sizeof(char) ); strcpy(fifo_name_info_srv_to_cli, FIFO_DIR); strcat(fifo_name_info_srv_to_cli, FIFO_NAME_INFO_SRV_TO_CLI); strcat(fifo_name_info_srv_to_cli, pid_string); int n = mkfifo(fifo_name_info_srv_to_cli, 0660); if (n < 0) { fprintf(stderr, "error creating %s\n", fifo_name_info_srv_to_cli); fprintf(stderr, "errno: %d\n", errno); exit(6); } int fd_cmd_cli_to_srv, fd_info_srv_to_cli; fd_cmd_cli_to_srv = open("/tmp/lrc_cmd_cli_to_srv", O_WRONLY); char command[COMMAND_MAX_LEN]; printf("> "); scanf("%s", command); while (strcmp(command, "/CONNECT")) { printf("O primeiro comando deverá ser \"/CONNECT\"\n"); printf("> "); scanf("%s", command); } //CRIA FIFO MSG_SRV_TO_CLIXXX char* fifo_name_msg_srv_to_cli; fifo_name_msg_srv_to_cli = (char *) malloc ( 
(strlen(FIFO_DIR) + strlen(FIFO_NAME_MSG_SRV_TO_CLI) + strlen(pid_string) + 1) * sizeof(char) ); strcpy(fifo_name_msg_srv_to_cli, FIFO_DIR); strcat(fifo_name_msg_srv_to_cli, FIFO_NAME_MSG_SRV_TO_CLI); strcat(fifo_name_msg_srv_to_cli, pid_string); n = mkfifo(fifo_name_msg_srv_to_cli, 0660); if (n < 0) { fprintf(stderr, "error creating %s\n", fifo_name_info_srv_to_cli); fprintf(stderr, "errno: %d\n", errno); exit(7); } // ENVIA COMANDO /CONNECT write_to_fifo(fd_cmd_cli_to_srv, pid_string); //envia pid do processo cliente write_to_fifo(fd_cmd_cli_to_srv, command); //envia commando /CONNECT write_to_fifo(fd_cmd_cli_to_srv, fifo_name_info_srv_to_cli); //envia nome de fifo INFO_SRV_TO_CLIXXX write_to_fifo(fd_cmd_cli_to_srv, fifo_name_msg_srv_to_cli); //envia nome de fifo MSG_TO_SRV_TO_CLIXXX // recebe do servidor a resposta ao comanddo /CONNECT printf("msg1\n"); printf("vamos tentar abrir %s\n", fifo_name_info_srv_to_cli); fd_info_srv_to_cli = open(fifo_name_info_srv_to_cli, O_RDONLY); printf("%s aberto", fifo_name_info_srv_to_cli); if (fd_info_srv_to_cli < 0) { fprintf(stderr, "erro ao criar %s\n", fifo_name_info_srv_to_cli); fprintf(stderr, "errno: %d\n", errno); } printf("msg2\n"); char* fifo_name_msg_cli_to_srv; printf("msg3\n"); read_from_fifo(fd_info_srv_to_cli, &fifo_name_msg_cli_to_srv); printf("msg4\n"); free(nickname); free(name); free(email); free(fifo_name_info_srv_to_cli); free(fifo_name_msg_srv_to_cli); unlink(fifo_name_msg_srv_to_cli); unlink(fifo_name_info_srv_to_cli); printf("fim\n"); return 0; } makefile: CC = gcc CFLAGS = -Wall -lpthread -fno-stack-protector all: client server client: client.c $(CC) $(CFLAGS) client.c -o client server: server.c $(CC) $(CFLAGS) server.c -o server clean: rm -f client server *~
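
    A hedged guess at the stack smashing (not a verified diagnosis): in server.c the name "/tmp/" + "lrc_msg_cli" + pid + "_to_srv" needs roughly 28 bytes or more, but fifo_name_msg_cli_to_srv is only FIFO_NAME_MAX_LEN (20) bytes, so the strcat() calls write past the end of the stack buffer. A bounded build of the name avoids that. (The blocking open() is a separate issue: opening a FIFO for reading blocks until some process opens it for writing, and vice versa.)

      /* drop-in replacement for the strcpy/strcat sequence in server.c */
      char fifo_name_msg_cli_to_srv[PATH_MAX];          /* needs <limits.h> */
      int len = snprintf(fifo_name_msg_cli_to_srv, sizeof fifo_name_msg_cli_to_srv,
                         "%slrc_msg_cli%s_to_srv", FIFO_DIR, client_pid_string);
      if (len < 0 || (size_t) len >= sizeof fifo_name_msg_cli_to_srv) {
          fprintf(stderr, "fifo name too long\n");
          exit(5);
      }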

    Read the article

  • Python for a hobbyist programmer (a few questions)

    - by Matt
    I'm a hobbyist programmer (only in TI-Basic before now), and after much, much, much debating with myself, I've decided to learn Python. I don't have a ton of free time to teach myself a hundred languages, and all the programming I do will be for personal use or for distributing to people who need it, so I decided I needed one good, strong language to be good at. My questions: Is Python powerful enough to handle most things that a typical programmer might do in his off-time? I have in mind things like complex stat generators based on user input for tabletop games, making small games, automating install processes, and building interactive websites, but probably a hundred things along those lines. Does Python handle networking tasks fairly well? Can Python source be obfuscated, or is it going to be open-source by nature? The reason I ask is that if I make something cool and distribute it, I don't want some idiot script kiddie to edit his own name in and say he wrote it. And how popular is Python compared to other languages? Ideally, my language would be good and useful, with help found online without extreme difficulty, but not so common that every idiot with a computer knows Python. I like the idea of knowing a slightly obscure language. Thanks a ton for any help you can provide.

    Read the article

  • structured vs. unstructured data in db

    - by Igor
    the question is one of design. i'm gathering a big chunk of performance data with lots of key-value pairs. pretty much everything in /proc/cpuinfo, /proc/meminfo/, /proc/loadavg, plus a bunch of other stuff, from several hundred hosts. right now, i just need to display the latest chunk of data in my UI. i will probably end up doing some analysis of the data gathered to figure out performance problems down the road, but this is a new application so i'm not sure what exactly i'm looking for performance-wise just yet. i could structure the data in the db -- have a column for each key i'm gathering. the table would end up being O(100) columns wide, it would be a pain to put into the db, i would have to add new columns if i start gathering a new stat. but it would be easy to sort/analyze the data just using SQL. or i could just dump my unstructured data blob into the table. maybe three columns -- host id, timestamp, and a serialized version of my array, probably using JSON in a TEXT field. which should I do? am i going to be sorry if i go with the unstructured approach? when doing analysis, should i just convert the fields i'm interested in and create a new, more structured table? what are the trade-offs i'm missing here?

    Read the article

  • Using the groupby method in Python, example included

    - by randombits
    Trying to work with groupby so that I can group together files that were created on the same day. When I say same day in this case, I mean the dd part in mm/dd/yyyy. So if a file was created on March 1 and April 1, they should be grouped together because the "1" matches. Here's the code I have so far: #!/usr/bin/python import os import datetime from itertools import groupby def created_ymd(fn): ts = os.stat(fn).st_ctime dt = datetime.date.fromtimestamp(ts) return dt.year, dt.month, dt.day def get_files(): files = [] for f in os.listdir(os.getcwd()): if not os.path.isfile(f): continue y,m,d = created_ymd(f) files.append((f, d)) return files files = get_files() for key, group in groupby(files, lambda x: x[1]): for file in group: print "file: %s, date: %s" % (file[0], key) print " " The problem is, I get lots of files that get grouped together based on the day. But then I'll see multiple groups with the same day. Meaning I might have 4 files grouped that were created on the 17th. Later on I'll see another unique set of 2 files that are also created on the 17th. Where am I going wrong?
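
    A sketch of the likely fix (same helper names as above): itertools.groupby only merges adjacent items with equal keys, so the list has to be sorted by that key first; otherwise every separate run of same-day files becomes its own group.

      from itertools import groupby

      files = sorted(get_files(), key=lambda x: x[1])       # sort by day first
      for day, group in groupby(files, key=lambda x: x[1]):
          for fname, _ in group:
              print "file: %s, date: %s" % (fname, day)
          print " "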

    Read the article

  • Managing lots of callback recursion in Nodejs

    - by Maciek
    In Nodejs, there are virtually no blocking I/O operations. This means that almost all nodejs IO code involves many callbacks. This applies to reading and writing to/from databases, files, processes, etc. A typical example of this is the following: var useFile = function(filename,callback){ posix.stat(filename).addCallback(function (stats) { posix.open(filename, process.O_RDONLY, 0666).addCallback(function (fd) { posix.read(fd, stats.size, 0).addCallback(function(contents){ callback(contents); }); }); }); }; ... useFile("test.data",function(data){ // use data.. }); I am anticipating writing code that will make many IO operations, so I expect to be writing many callbacks. I'm quite comfortable with using callbacks, but I'm worried about all the recursion. Am I in danger of running into too much recursion and blowing through a stack somewhere? If I make thousands of individual writes to my key-value store with thousands of callbacks, will my program eventually crash? Am I misunderstanding or underestimating the impact? If not, is there a way to get around this while still using Nodejs' callback coding style?
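
    A short sketch to illustrate why the stack is usually safe here (store.set is a hypothetical async key-value API, not a Node built-in): each callback is invoked later by the event loop on a fresh stack, so chaining thousands of writes does not nest stack frames, provided the operation really completes asynchronously.

      function writeAll(items, done) {
          function next(i) {
              if (i === items.length) return done();
              store.set(items[i].key, items[i].value, function () {
                  next(i + 1);   // runs on a new stack once the async write completes
              });
          }
          next(0);
      }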

    Read the article

  • UnicodeDecodeError on attempt to save a file through Django's default file-based backend

    - by Ivan Kuznetsov
    When i attempt to add a file with russian symbols in name to the model instance through default instance.file_field.save method, i get an UnicodeDecodeError (ascii decoding error, not in range (128) from the storage backend (stacktrace ended on os.exist). If i write this file through default python file open/write all goes right. All filenames in utf-8. I get this error only on testing Gentoo, on my Ubuntu workstation all works fine. class Article(models.Model): file = models.FileField(null=True, blank=True, max_length = 300, upload_to='articles_files/%Y/%m/%d/') Traceback: File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py" in get_response 100. response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/auth/decorators.py" in _wrapped_view 24. return view_func(request, *args, **kwargs) File "/var/www/localhost/help/wiki/views.py" in edit_article 338. new_article.file.save(fp, fi, save=True) File "/usr/lib/python2.6/site-packages/django/db/models/fields/files.py" in save 92. self.name = self.storage.save(name, content) File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in save 47. name = self.get_available_name(name) File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in get_available_name 73. while self.exists(name): File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in exists 196. return os.path.exists(self.path(name)) File "/usr/lib/python2.6/genericpath.py" in exists 18. st = os.stat(path) Exception Type: UnicodeEncodeError at /edit/ Exception Value: ('ascii', u'/var/www/localhost/help/i/articles_files/2010/03/17/\u041f\u0440\u0438\u0432\u0435\u0442', 52, 58, 'ordinal not in range(128)')

    Read the article

  • Why isn't module_filters filtering Mail::IspMailGate in CPAN::Mini?

    - by user304122
    Edited - Ummm - now have a module in schwigon giving same problem !! I am on a corporate PC that forces mcshield on everything that moves. I get blocked when trying to mirror on ... authors/id/J/JV/JV/EekBoek-2.00.01.tar.gz ... updated authors/id/J/JV/JV/CHECKSUMS ... updated Could not stat tmpfile '/cygdrive/t/cpan_mirror/authors/id/J/JW/JWIED/Mail-IspMailGate-1.1013.tar.gz-4712': No such file or directory at /usr/lib/perl5/site_perl/5.10/LWP/UserAgent.pm line 851. authors/id/J/JW/JWIED/Mail-IspMailGate-1.1013.tar.gz At this point, I get the Virus scanner mcshield sticking its Awr in. To maintain my mirror I execute:- #!/usr/bin/perl CPAN::Mini->update_mirror( remote => "http://mirror.eunet.fi/CPAN", local => "/cygdrive/t/cpan_mirror/", trace => 1, errors => 1, module_filters => [ qr/kjkjhkjhkjkj/i, qr/clamav/i, qr/ispmailgate/i, qr/IspMailGate/, qr/Mail-IspMailGate/, qr/mail-ispmailgate/i, ], path_filters => [ qr/ZZYYZZ/, #qr/WIED/, #qr/RJBS/, ] ); It skips OK if I enable the path_filter WIED. Just cannot get it to skip the module failing module to complete other WIED modules. Any ideas?

    Read the article

  • C: lseek() related question.

    - by Andrei Ciobanu
    I want to write some bogus text in a file ("helloworld" text in a file called helloworld), but not starting from the beginning. I was thinking to lseek() function. If I use the following code: #include <unistd.h> #include <fcntl.h> #include <sys/stat.h> #include <sys/types.h> #include <stdlib.h> #include <stdio.h> #define fname "helloworld" #define buf_size 16 int main(){ char buffer[buf_size]; int fildes, nbytes; off_t ret; fildes = open(fname, O_CREAT | O_TRUNC | O_WRONLY, S_IRUSR | S_IWUSR); if(fildes < 0){ printf("\nCannot create file + trunc file.\n"); } //modify offset if((ret = lseek(fildes, (off_t) 10, SEEK_END)) < (off_t) 0){ fprintf(stdout, "\nCannot modify offset.\n"); } printf("ret = %d\n", (int)ret); if(write(fildes, fname, buf_size) < 0){ fprintf(stdout, "\nWrite failed.\n"); } close(fildes); return (0); } , it compiles well and it runs without any apparent errors. Still if i : cat helloworld The output is not what I expected, but: helloworld Can Where is "Can" comming from, and where are my empty spaces ? Should i expect for "zeros" instead of spaces ? If i try to open helloworld with gedit, an error occurs, complaining that the file character encoding is unknown.
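
    A sketch of what is happening (not from the article): the 10 bytes skipped by lseek() form a hole that reads back as '\0' bytes, which is why cat shows nothing visible there and gedit complains about the encoding; and write(fildes, fname, buf_size) writes 16 bytes starting at an 11-byte string literal, so the last few bytes ("Can...") are whatever happens to sit after that literal in memory. Writing only the string itself avoids the stray bytes:

      /* needs <string.h> for strlen */
      size_t len = strlen(fname);                     /* 10 bytes, no trailing junk */
      if (write(fildes, fname, len) != (ssize_t) len)
          fprintf(stdout, "\nWrite failed.\n");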

    Read the article

  • Tags usage in Struts 2

    - by Panther24
    Hi, I have the following code snippet <s:iterator status="stat" value="masterAccountList"> <tr> <td><s:property value="name"/></td> <td><s:property value="status"/></td> <s:set name="DrStat" id="DrStat" value="<s:property value='status'/>"/> <td><s:if test='DrStat.contains("Out")'> Dr. Is Available </s:if> <s:else> Dr. Is not Available </s:else> </td> </tr> </s:iterator> I need to check the status if it contains a keyword and display text accordingly. When I try this, I always get 'Not Available' status. I'm not even sure what the set returns, how can I see that?
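
    A sketch of one likely fix (not verified against the original app): the value attribute of s:set is an OGNL expression, so a nested <s:property> tag is not evaluated the way the snippet intends. Testing the row's property directly inside the iterator avoids the s:set altogether:

      <s:iterator status="stat" value="masterAccountList">
        <tr>
          <td><s:property value="name"/></td>
          <td><s:property value="status"/></td>
          <td>
            <s:if test='status.contains("Out")'> Dr. Is Available </s:if>
            <s:else> Dr. Is not Available </s:else>
          </td>
        </tr>
      </s:iterator>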

    Read the article

  • R: How to remove outliers from a smoother in ggplot2?

    - by John
    I have the following data set that I am trying to plot with ggplot2, it is a time series of three experiments A1, B1 and C1 and each experiment had three replicates. I am trying to add a stat which detects and removes outliers before returning a smoother (mean and variance?). I have written my own outlier function (not shown) but I expect there is already a function to do this, I just have not found it. I've looked at stat_sum_df("median_hilow", geom = "smooth") from some examples in the ggplot2 book, but I didn't understand the help doc from Hmisc to see if it removes outliers or not. Is there a function to remove outliers like this in ggplot, or where would I amend my code below to add my own function? library (ggplot2) data = data.frame (day = c(1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7), od = c( 0.1,1.0,0.5,0.7 ,0.13,0.33,0.54,0.76 ,0.1,0.35,0.54,0.73 ,1.3,1.5,1.75,1.7 ,1.3,1.3,1.0,1.6 ,1.7,1.6,1.75,1.7 ,2.1,2.3,2.5,2.7 ,2.5,2.6,2.6,2.8 ,2.3,2.5,2.8,3.8), series_id = c( "A1", "A1", "A1","A1", "A1", "A1", "A1","A1", "A1", "A1", "A1","A1", "B1", "B1","B1", "B1", "B1", "B1","B1", "B1", "B1", "B1","B1", "B1", "C1","C1", "C1", "C1", "C1","C1", "C1", "C1", "C1","C1", "C1", "C1"), replicate = c( "A1.1","A1.1","A1.1","A1.1", "A1.2","A1.2","A1.2","A1.2", "A1.3","A1.3","A1.3","A1.3", "B1.1","B1.1","B1.1","B1.1", "B1.2","B1.2","B1.2","B1.2", "B1.3","B1.3","B1.3","B1.3", "C1.1","C1.1","C1.1","C1.1", "C1.2","C1.2","C1.2","C1.2", "C1.3","C1.3","C1.3","C1.3")) > data day od series_id replicate 1 1 0.10 A1 A1.1 2 3 1.00 A1 A1.1 3 5 0.50 A1 A1.1 4 7 0.70 A1 A1.1 5 1 0.13 A1 A1.2 6 3 0.33 A1 A1.2 7 5 0.54 A1 A1.2 8 7 0.76 A1 A1.2 9 1 0.10 A1 A1.3 10 3 0.35 A1 A1.3 11 5 0.54 A1 A1.3 12 7 0.73 A1 A1.3 13 1 1.30 B1 B1.1 This is what I have so far and is working nicely, but outliers are not removed: r <- ggplot(data = data, aes(x = day, y = od)) r + geom_point(aes(group = replicate, color = series_id)) + # add points geom_line(aes(group = replicate, color = series_id)) + # add lines geom_smooth(aes(group = series_id)) # add smoother, average of each replicate
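
    A sketch of one approach (the outlier rule and helper are my own, not a ggplot2 built-in): flag points that sit more than k median absolute deviations from their series/day median, drop them, and then plot exactly as before so geom_smooth only sees the cleaned data.

      is_ok <- function(x, k = 3) {
        m <- median(x); d <- mad(x, constant = 1)
        d == 0 | abs(x - m) <= k * d        # keep everything if the spread is zero
      }
      grp   <- interaction(data$series_id, data$day)
      clean <- data[as.logical(ave(data$od, grp, FUN = is_ok)), ]

      ggplot(data = clean, aes(x = day, y = od)) +
        geom_point(aes(group = replicate, color = series_id)) +
        geom_line(aes(group = replicate, color = series_id)) +
        geom_smooth(aes(group = series_id))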

    Read the article

  • x86 .NET application with System.OutOfMemoryException

    - by Allen
    Hi,Guys I got OutOfMemoryException after the app running for 1 day, the app totally use 1.5G memory , all consumed by managed heap, gen 2 used 200mb , and LOB used 1.3mb, however the weired thing is, 900mb of space are Free. from perf counter I saw there had number of gen 2 gc collection happened, why GC collector cannot collect those 900mb free space in gen2 and LOB? I'm really appreicate for your help. following info are from windbg: 0:000> !eeheap -gc Number of GC Heaps: 1 generation 0 starts at 0x183153f0 generation 1 starts at 0x182aa834 generation 2 starts at 0x02131000 ephemeral segment allocation context: none segment begin allocated size 02130000 02131000 0312f284 0xffe284(16769668) 07750000 07751000 0874fc5c 0xffec5c(16772188) 09e30000 09e31000 0ae2fc2c 0xffec2c(16772140) 0b230000 0b231000 0c22ffec 0xffefec(16773100) 0c230000 0c231000 0d22f6f0 0xffe6f0(16770800) 0d230000 0d231000 0e22ea10 0xffda10(16767504) 0e230000 0e231000 0f22c1c4 0xffb1c4(16757188) 10390000 10391000 1138ddf4 0xffcdf4(16764404) 154e0000 154e1000 164da90c 0xff990c(16750860) 34aa0000 34aa1000 35a9dbfc 0xffcbfc(16763900) 7aca0000 7aca1000 7bc9edfc 0xffddfc(16768508) 49760000 49761000 4a75ef64 0xffdf64(16768868) 7bca0000 7bca1000 7cc99bac 0xff8bac(16747436) 17a70000 17a71000 183313fc 0x8c03fc(9176060) Large object heap starts at 0x03131000 segment begin allocated size 03130000 03131000 041250c8 0xff40c8(16728264) 08920000 08921000 099102f8 0xfef2f8(16708344) .... .... 4c760000 4c761000 4d71d578 0xfbc578(16500088) 1bb10000 1bb11000 1ca110d0 0xf000d0(15728848) 57760000 57761000 5862d7f8 0xecc7f8(15517688) Total Size: Size: 0x5ab13450 (1521562704) bytes. ------------------------------ GC Heap Size: Size: 0x5ab13450 (1521562704) bytes. 0:000> !dumpheap -stat total 0 objects Statistics: MT Count TotalSize Class Name 73037c78 1 12 System.Configuration.GenericEnumConverter 73036da0 1 12 System.Configuration.InfiniteIntConverter .... .... 69161c3c 35025 6809420 System.Windows.EffectiveValueEntry[] 69164748 54 12471072 MS.Internal.WeakEventTable+EventKey[] 710e2228 9540 190389260 System.Byte[] 710dd2b8 1317031 339257932 System.String 0035a670 6427 902224056 Free Total 3615631 objects

    Read the article

  • Return a list of imported Python modules used in a script?

    - by Jono Bacon
    Hi All, I am writing a program that categorizes a list of Python files by which modules they import. As such I need to scan the collection of .py files ad return a list of which modules they import. As an example, if one of the files I import has the following lines: import os import sys, gtk I would like it to return: ["os", "sys", "gtk"] I played with modulefinder and wrote: from modulefinder import ModuleFinder finder = ModuleFinder() finder.run_script('testscript.py') print 'Loaded modules:' for name, mod in finder.modules.iteritems(): print '%s ' % name, but this returns more than just the modules used in the script. As an example in a script which merely has: import os print os.getenv('USERNAME') The modules returned from the ModuleFinder script return: tokenize heapq __future__ copy_reg sre_compile _collections cStringIO _sre functools random cPickle __builtin__ subprocess cmd gc __main__ operator array select _heapq _threading_local abc _bisect posixpath _random os2emxpath tempfile errno pprint binascii token sre_constants re _abcoll collections ntpath threading opcode _struct _warnings math shlex fcntl genericpath stat string warnings UserDict inspect repr struct sys pwd imp getopt readline copy bdb types strop _functools keyword thread StringIO bisect pickle signal traceback difflib marshal linecache itertools dummy_thread posix doctest unittest time sre_parse os pdb dis ...whereas I just want it to return 'os', as that was the module used in the script. Can anyone help me achieve this?
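
    A different tack that reports only what the file itself imports (a sketch using the standard ast module, Python 2.6+): parse the source and collect the Import/ImportFrom nodes instead of letting modulefinder chase the whole dependency graph.

      import ast

      def imported_modules(path):
          tree = ast.parse(open(path).read(), path)
          mods = set()
          for node in ast.walk(tree):
              if isinstance(node, ast.Import):
                  mods.update(alias.name.split('.')[0] for alias in node.names)
              elif isinstance(node, ast.ImportFrom) and node.module:
                  mods.add(node.module.split('.')[0])
          return sorted(mods)

      print imported_modules('testscript.py')   # e.g. ['gtk', 'os', 'sys']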

    Read the article

  • check whether mmap'ed address is correct

    - by reddot
    I'm writing a high-load daemon that should run on FreeBSD 8.0 and on Linux as well. The main purpose of the daemon is to serve files that are requested by their identifier. The identifier is converted into a local filename/file size via a request to the db, and then I use sequential mmap() calls to send the file blocks with send(). However, sometimes there is a mismatch between the file size in the db and the file size on the filesystem (real size < size in db). In this situation I've already sent all the real data blocks, and when the next data block is mapped, mmap returns no error, just a usual address (I've also checked errno; it's zero after mmap). But when the daemon tries to send this block it gets a segmentation fault. (This behaviour is reliably reproduced on FreeBSD 8.0 amd64.) I was using a safety check before open to get the size with a stat() call. However, real life shows me that the segfault can still be raised in rare situations. So, my question: is there a way to check whether a pointer is accessible before dereferencing it? When I opened the core in gdb, gdb said the given address is out of bounds. Perhaps there is another solution somebody can propose.
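
    A sketch of the usual guard (fd, offset and block_size are placeholders for the daemon's own variables): fstat() the descriptor that is already open and clamp every offset/length against st_size just before each mmap(), instead of trusting the size stored in the db; this also avoids the stat-then-open race. Checking the returned pointer cannot help, because touching pages past end-of-file typically raises SIGBUS/SIGSEGV even though mmap() itself succeeded.

      struct stat st;                       /* needs <sys/stat.h> and <sys/mman.h> */
      if (fstat(fd, &st) == -1) { /* handle error */ }
      if (offset >= st.st_size) { /* file shrank: stop sending blocks */ }
      size_t len = block_size;
      if ((off_t) (offset + len) > st.st_size)
          len = (size_t) (st.st_size - offset);   /* last, short block */
      void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, offset);
      if (p == MAP_FAILED) { /* handle error */ }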

    Read the article

  • Query Level 2 Caching throwing ClassCastException

    - by Sameer Malhotra
    Hi, I am using JPA and Hibernate for the database. I have configured (EHCacache) second level cache and query level cache, but just to make sure that caching is working I was trying to get the statistics which is throwing class cast exception.Any help will be highly appreciated. My main goal is to see all the objects which have been cached to make sure that the caching is working properly. Here is the code: public List<CodeValue> findByCodetype(String propertyName) { try { final String queryString = "select model from CodeValue model where model.codetype" + "= :propertyValue" + " order by model.code"; Query query = em.createQuery(queryString); query.setHint("org.hibernate.cacheable", true); query.setHint("org.hibernate.cacheRegion", "query.findByCodetype"); query.setParameter("propertyValue", propertyName); List resultList = query.getResultList(); org.hibernate.Session session = (Session) em.getDelegate(); SessionFactory sessionFactory = session.getSessionFactory(); Map cacheEntries = sessionFactory.getStatistics() .getSecondLevelCacheStatistics("query.findByCodetype") .getEntries(); logger.info("The statistics are: " + cacheEntries); return resultList; } catch (RuntimeException re) { logger.error("findByCodetype failed in trauma patient", re); throw re; } } The error is existing right when I am trying to print the statistics. Below is exception: [6/7/10 19:23:17:059 GMT] 00000034 SystemOut O java.lang.ClassCastException: org.hibernate.cache.QueryKey incompatible with org.hibernate.cache.CacheKey at org.hibernate.stat.SecondLevelCacheStatistics.getEntries(SecondLevelCacheStatistics.java:51) at com.idph.trauma.registry.service.TraumaPatientDAO.findByCodetype(TraumaPatientDAO.java:439) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:615) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at $Proxy209.findByCodetype(Unknown Source) Do you know what's going on?
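
    A sketch of the usual way to inspect the query cache (method names from the Hibernate 3.x Statistics API, org.hibernate.stat; the entries of a query-cache region are QueryKey objects, which is what getEntries() trips over): use the query-level counters instead of getSecondLevelCacheStatistics() for that region.

      // statistics must be enabled here or via hibernate.generate_statistics=true
      Statistics stats = sessionFactory.getStatistics();
      stats.setStatisticsEnabled(true);
      logger.info("query cache puts:   " + stats.getQueryCachePutCount());
      logger.info("query cache hits:   " + stats.getQueryCacheHitCount());
      logger.info("query cache misses: " + stats.getQueryCacheMissCount());
      QueryStatistics qs = stats.getQueryStatistics(queryString);
      logger.info("hits for this query: " + qs.getCacheHitCount());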

    Read the article

  • mod_python req.subprocess_env not "seeing" PythonOptions

    - by Brandon
    I'm having trouble getting an environmental variable out of apache config. (don't ask why it's being done this way, I didn't originally code it) This is what I have in the apache config. <Location "/var/www"> SetHandler python-program PythonHandler mod_python.publisher PythonOption MYSQL_PWD ########### PythonDebug On </Location> This is the problem code... #this is the problem code in question. def index(req): req.add_common_vars() os.environ["MYSQL_PWD"] = req.subprocess_env["MYSQL_PWD"] req.content_type = "text/html" statText = getStatText() here is the traceback I'm getting from executing this. Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1537, in HandlerDispatch default=default_handler, arg=req, silent=hlist.silent) File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1229, in _process_target result = _execute_target(config, req, object, arg) File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1128, in _execute_target result = object(arg) File "/usr/lib/python2.5/site-packages/mod_python/publisher.py", line 213, in handler published = publish_object(req, object) File "/usr/lib/python2.5/site-packages/mod_python/publisher.py", line 425, in publish_object return publish_object(req,util.apply_fs_data(object, req.form, req=req)) File "/usr/lib/python2.5/site-packages/mod_python/util.py", line 554, in apply_fs_data return object(**args) File "/var/www/admin/Stat.py", line 299, in index os.environ["MYSQL_PWD"] = req.subprocess_env["MYSQL_PWD"] KeyError: 'MYSQL_PWD'
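
    A sketch of the usual approach: values declared with PythonOption are exposed through req.get_options(), while req.subprocess_env carries SetEnv/PassEnv-style CGI variables, which is why the key is missing there.

      def index(req):
          req.add_common_vars()
          options = req.get_options()                 # PythonOption values live here
          os.environ["MYSQL_PWD"] = options["MYSQL_PWD"]
          req.content_type = "text/html"
          statText = getStatText()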

    Read the article

  • NetLogo: error when putting a variable in a table, only constants allowed?

    - by Chantal
    Hello, Currently I am working on a Netlogo program where I need to use nodes and links for vehicle routing problem. (links are called streets in the program) Here I have some practical problems of how to input variable linkspeed in a table with another node. Constants like 200 etc are fine. Online I found some examples where variables are used, but I do not know why I keep getting the following error: Expected a constant. (or why netlogo expects a constant) Here is the relevant piece of code: extensions [table] streets-own [linkspeed linktoll] nodes-own [netw] ;; In another piece of code linkspeed is assigned successfully to the links to cheapcalc ;; start conditions set costs very high 300000 ;; state 3 unsearched state 2 searching state 1 searched (for later purposes) ask nodes [ set i 0 set j count nodes set netw table:make while [i < j][ table:put netw (i) [3000000 3] set i (i + 1)]] set i 0 let k 0 ask node 35 ;; here i use node 35 as an example. ;; node 35 is connected to node 34, 36, 20 and 50 [table:put netw (35) [0 1] ;; node need to search costs to travel to itself ;; putting constants is ok. while [i < j] [ask my-links [ask both-ends [if (who != 35) [set color blue ;; set temp ([linkspeed] of street 35 who) ;; here my real goal is to put this in stat of i. but i is easier than linkspeed. table:put netw (who) [ i 2 ] ] ] ] set i (i + 1)] ] ;; next node for later, no it is just repetition of the same. end I hope somebody knows what is going on... Kind regards, Chantal
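
    A sketch of the likely fix: a bracketed literal such as [ i 2 ] may contain only constants in NetLogo, which is exactly what "Expected a constant" complains about; building the value with the list reporter accepts variables.

      ;; instead of:  table:put netw (who) [ i 2 ]
      table:put netw who (list i 2)
      ;; the constant entries can stay as literals, or use the same form:
      table:put netw i (list 3000000 3)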

    Read the article

  • Getting FeedBurner Subscribers With cURL

    - by Eray Alakese
    I'm using FeedBurner Awareness API. XML data like this : <rsp stat="ok"> - <!-- This information is part of the FeedBurner Awareness API. If you want to hide this information, you may do so via your FeedBurner Account. --> - <feed id="9n66llmt1frfir51p0oa367ru4" uri="teknoblogo"> <entry date="2011-01-15" circulation="11" hits="18" reach="0"/> </feed> </rsp> I want to get circulation data (11) . I'm using this code: $whaturl="https://feedburner.google.com/api/awareness/1.0/GetFeedData?uri=teknoblogo"; //Initialize the Curl session $ch = curl_init(); //Set curl to return the data instead of printing it to the browser. curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); //Set the URL curl_setopt($ch, CURLOPT_URL, $whaturl); //Execute the fetch $data = curl_exec($ch); //Close the connection curl_close($ch); $xml = new SimpleXMLElement($data); $fb = $xml->feed->entry['circulation']; echo $fb; echo "OK"; But, returned data is blank. There isn't any error. Only return OK . How can i solve this ? EDIT : echo $data; returning blank, too.
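
    A sketch for narrowing down the empty response (assumption: a blank $data means the transfer itself failed, which with an https URL is often an SSL verification problem, so ask cURL what went wrong before parsing):

      $ch = curl_init($whaturl);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
      curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
      // if curl_error() reports a certificate problem, point CURLOPT_CAINFO at a CA bundle
      $data = curl_exec($ch);
      if ($data === false) {
          die("cURL error: " . curl_error($ch));
      }
      curl_close($ch);
      $xml = new SimpleXMLElement($data);
      echo (string) $xml->feed->entry['circulation'];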

    Read the article

  • How to get the encoding from a MAPI message with the PR_BODY_A tag (Windows Mobile)?

    - by SadSido
    Hi, everyone! I am developing a program, that handles incoming e-mail and sms through windows-mobile MAPI. The code basically looks like that: ulBodyProp = PR_BODY_A; hr = piMessage->OpenProperty(ulBodyProp, NULL, STGM_READ, 0, (LPUNKNOWN*)&piStream); if (hr == S_OK) { // ... get body size in bytes ... STATSTG statstg; piStream->Stat(&statstg, 0); ULONG cbBody = statstg.cbSize.LowPart; // ... allocate memory for the buffer ... BYTE* pszBodyInBytes = NULL; boost::scoped_array<BYTE> szBodyInBytesPtr(pszBodyInBytes = new BYTE[cbBody+2]); // ... read body into the pszBodyInBytes ... } That works and I have a message body. The problem is that this body is multibyte encoded and I need to return a Unicode string. I guess, I have to use ::MultiByteToWideChar() function, but how can I guess, what codepage should I apply? Using CP_UTF8 is naive, because it can simply be not in UTF8. Using CP_ACP works, well, sometimes, but sometimes does not. So, my question is: how can I retrieve the information about message codepage. Does MAPI provide any functions for it? Or is there a way to decode multibyte string, other than MultiByteToWideChar()? Thanks!

    Read the article

  • C system calls open / read / write / close problem.

    - by Andrei Ciobanu
    Hello, given the following code (it's supposed to write "hellowolrd" in a "helloworld" file, and then read the text): #include <fcntl.h> #include <sys/types.h> #include <sys/stat.h> #define FNAME "helloworld" int main(){ int filedes, nbytes; char buf[128]; /* Creates a file */ if((filedes=open(FNAME, O_CREAT | O_EXCL | O_WRONLY | O_APPEND, S_IRUSR | S_IWUSR)) == -1){ write(2, "Error1\n", 7); } /* Writes hellow world to file */ if(write(filedes, FNAME, 10) != 10) write(2, "Error2\n", 7); /* Close file */ close(filedes); if((filedes = open(FNAME, O_RDONLY))==-1) write(2, "Error3\n", 7); /* Prints file contents on screen */ if((nbytes=read(filedes, buf, 128)) == -1) write(2, "Error4\n", 7); if(write(1, buf, nbytes) != nbytes) write(2, "Error5\n", 7); /* Close rile afte read */ close(filedes); return (0); } The first time i run the program, the output is: helloworld After that every time I to run the program, the output is: Error1 Error2 helloworld I don't understand why the text isn't appended, as I've specified the O_APPEND file. Is it because I've included O_CREAT ? It the file is already created, shouldn't O_CREAT be ignored ?
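
    A sketch of the likely explanation (not from the article): O_CREAT | O_EXCL makes open() fail with EEXIST once helloworld already exists, so on every run after the first, filedes is -1 (Error1), the write on it fails (Error2), and cat simply shows the old contents. O_CREAT alone is ignored for an existing file; it is O_EXCL that forces the failure. To append on later runs, drop O_EXCL and stop on error:

      filedes = open(FNAME, O_CREAT | O_WRONLY | O_APPEND, S_IRUSR | S_IWUSR);
      if (filedes < 0) {
          write(2, "Error1\n", 7);
          return 1;               /* don't keep writing to an invalid descriptor */
      }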

    Read the article

  • Caching issue with JavaScript and ASP.NET

    - by Ed Woodcock
    Hi guys: I asked a question a while back on here regarding caching data for a calendar/scheduling web app, and got some good responses. However, I have now decided to change my approach and start caching the data in JavaScript. I am directly caching the HTML for each day's column in the calendar grid inside the $('body').data() object, which gives very fast page load times (almost unnoticeable). However, problems start to arise when the user requests data that is not yet in the cache. This data is created by the server using an ajax call, so it's asynchronous, and takes about 0.2s per week's data. My current approach is simply to block for 0.5s when the user requests information from the server, and to cache 4 weeks on either side in the initial page load (plus 1 extra week per page-change request), but I doubt this is the optimal method. Does anyone have a suggestion as to how to improve the situation? To summarise: Each week takes 0.2s to retrieve from the server, asynchronously. Performance must be as close to real-time as possible (however, the data does not need to be fully real-time: most appointments are added by the user, so we can re-cache after that). Currently 4 weeks are cached on either side of the initial week loaded: this is not enough. To cache 1 year takes ~21s, which is too slow for an initial load.
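
    A sketch of a non-blocking alternative (jQuery assumed, since the cache already lives in $('body').data(); the URL and keys are illustrative): fetch a missing week on demand, render when it arrives, and quietly prefetch the neighbouring weeks afterwards so the visible range stays warm without a 21s up-front load.

      function getWeek(week, render) {
          var cache = $('body').data('weeks') || {};
          if (cache[week]) { return render(cache[week]); }
          $.get('/calendar/week', { week: week }, function (html) {
              cache[week] = html;
              $('body').data('weeks', cache);
              render(html);
              prefetch(week - 1);          // warm the cache around the visible week
              prefetch(week + 1);
          });
      }
      function prefetch(week) {
          if (!($('body').data('weeks') || {})[week]) { getWeek(week, function () {}); }
      }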

    Read the article
