Search Results

Search found 13227 results on 530 pages for 'memory efficiency'.

Page 58/530 | < Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >

  • Is Python Interpreted or Compiled?

    - by crodjer
    This is just something I wondered about while reading about interpreted and compiled languages. Ruby is no doubt an interpreted language, since the source code is processed by an interpreter at the point of execution. On the contrary, C is a compiled language, as one has to compile the source code for the machine first and then execute it. This results in much faster execution. Now coming to Python: a Python file (somefile.py), when imported, creates a file (somefile.pyc) in the same directory. Let us say the import is done in a Python shell or a Django module. After the import I change the code a bit and execute the imported functions again, only to find that it is still running the old code. This suggests that *.pyc files are compiled Python files, similar to the executable created after compilation of a C file, though I can't execute a *.pyc file directly. When the Python file (somefile.py) is executed directly (./somefile.py or python somefile.py) no .pyc file is created and the code is executed as is, indicating interpreted behaviour. This suggests that Python code is compiled to a .pyc every time it is imported in a new process, while it is interpreted when executed directly. So which type of language should I consider it: interpreted or compiled? And how does its efficiency compare to interpreted and compiled languages? According to the wiki's Interpreted Languages page it is listed as a language compiled to virtual machine code; what is meant by that? Update: Looking at the answers, it seems that there cannot be a perfect answer to my questions. Languages are not only interpreted or only compiled; there is a spectrum of possibilities between interpreting and compiling. From the answers by aufather, mipadi, Lenny222, ykombinator, the comments and the wiki, I found out that in Python's major implementations the code is compiled to bytecode, which is a highly compressed and optimized representation and is machine code for a virtual machine, which is implemented not in hardware but in the bytecode interpreter. Also, languages themselves are not interpreted or compiled; rather, language implementations either interpret or compile code. I also found out about just-in-time compilation. As far as execution speed is concerned, the various benchmarks cannot be perfect and depend on the context and the task being performed. Please tell me if I am wrong in my interpretations.
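
    To see the compile-to-bytecode behaviour described above for yourself, here is a minimal standard-library sketch (the function name is made up for the illustration; somefile.py stands in for any module of your own):

      import dis
      import py_compile

      def greet(name):
          return "Hello, " + name

      # CPython compiles the function body to bytecode; dis shows that bytecode,
      # which is what the virtual machine actually interprets.
      dis.dis(greet)

      # Explicitly produce the same kind of .pyc that an import creates
      # automatically (on Python 3 it lands in a __pycache__ directory).
      # py_compile.compile("somefile.py")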

    Read the article

  • Help with malloc and free: Glibc detected: free(): invalid pointer

    - by nunos
    I need help with debugging this piece of code. I know the problem is in malloc and free but can't find exactly where, why and how to fix it. Please don't answer: "Use gdb" and that's it. I would use gdb to debug it, but I still don't know much about it and am still learning it, and would like to have, in the meanwhile, another solution. Thanks. #include <stdio.h> #include <stdlib.h> #include <ctype.h> #include <unistd.h> #include <string.h> #include <sys/wait.h> #include <sys/types.h> #define MAX_COMMAND_LENGTH 256 #define MAX_ARGS_NUMBER 128 #define MAX_HISTORY_NUMBER 100 #define PROMPT ">>> " int num_elems; typedef enum {false, true} bool; typedef struct { char **arg; char *infile; char *outfile; int background; } Command_Info; int parse_cmd(char *cmd_line, Command_Info *cmd_info) { char *arg; char *args[MAX_ARGS_NUMBER]; int i = 0; arg = strtok(cmd_line, " "); while (arg != NULL) { args[i] = arg; arg = strtok(NULL, " "); i++; } num_elems = i; /* needed in free_mem */ if (num_elems == 0) return 0; cmd_info->arg = (char **) ( malloc(num_elems * sizeof(char *)) ); cmd_info->infile = NULL; cmd_info->outfile = NULL; cmd_info->background = 0; bool b_infile = false; bool b_outfile = false; int iarg = 0; for (i = 0; i < num_elems; i++) { if ( !strcmp(args[i], "<") ) { if ( b_infile || i == num_elems-1 || !strcmp(args[i+1], "<") || !strcmp(args[i+1], ">") || !strcmp(args[i+1], "&") ) return -1; i++; cmd_info->infile = malloc(strlen(args[i]) * sizeof(char)); strcpy(cmd_info->infile, args[i]); b_infile = true; } else if (!strcmp(args[i], ">")) { if ( b_outfile || i == num_elems-1 || !strcmp(args[i+1], ">") || !strcmp(args[i+1], "<") || !strcmp(args[i+1], "&") ) return -1; i++; cmd_info->outfile = malloc(strlen(args[i]) * sizeof(char)); strcpy(cmd_info->outfile, args[i]); b_outfile = true; } else if (!strcmp(args[i], "&")) { if ( i == 0 || i != num_elems-1 || cmd_info->background ) return -1; cmd_info->background = true; } else { cmd_info->arg[iarg] = malloc(strlen(args[i]) * sizeof(char)); strcpy(cmd_info->arg[iarg], args[i]); iarg++; } } cmd_info->arg[iarg] = NULL; return 0; } void print_cmd(Command_Info *cmd_info) { int i; for (i = 0; cmd_info->arg[i] != NULL; i++) printf("arg[%d]=\"%s\"\n", i, cmd_info->arg[i]); printf("arg[%d]=\"%s\"\n", i, cmd_info->arg[i]); printf("infile=\"%s\"\n", cmd_info->infile); printf("outfile=\"%s\"\n", cmd_info->outfile); printf("background=\"%d\"\n", cmd_info->background); } void get_cmd(char* str) { fgets(str, MAX_COMMAND_LENGTH, stdin); str[strlen(str)-1] = '\0'; } pid_t exec_simple(Command_Info *cmd_info) { pid_t pid = fork(); if (pid < 0) { perror("Fork Error"); return -1; } if (pid == 0) { if ( (execvp(cmd_info->arg[0], cmd_info->arg)) == -1) { perror(cmd_info->arg[0]); exit(1); } } return pid; } void type_prompt(void) { printf("%s", PROMPT); } void syntax_error(void) { printf("msh syntax error\n"); } void free_mem(Command_Info *cmd_info) { int i; for (i = 0; cmd_info->arg[i] != NULL; i++) free(cmd_info->arg[i]); free(cmd_info->arg); free(cmd_info->infile); free(cmd_info->outfile); } int main(int argc, char* argv[]) { char cmd_line[MAX_COMMAND_LENGTH]; Command_Info cmd_info; //char* history[MAX_HISTORY_NUMBER]; while (true) { type_prompt(); get_cmd(cmd_line); if ( parse_cmd(cmd_line, &cmd_info) == -1) { syntax_error(); continue; } if (!strcmp(cmd_line, "")) continue; if (!strcmp(cmd_info.arg[0], "exit")) exit(0); pid_t pid = exec_simple(&cmd_info); waitpid(pid, NULL, 0); free_mem(&cmd_info); } return 0; }
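
    One likely culprit, offered only as a hedged guess rather than a confirmed diagnosis: every malloc in parse_cmd sizes its buffer with strlen(args[i]), but strcpy then writes strlen(args[i]) + 1 bytes (it always appends the terminating '\0'), so each copy overruns its allocation by one byte and corrupts the heap bookkeeping that free later trips over. A minimal sketch of the corrected allocation pattern (the helper name is made up):

      #include <stdlib.h>
      #include <string.h>

      /* Copy a token into freshly allocated storage, leaving room for the
         terminating '\0' that strcpy always writes. */
      char *copy_token(const char *token)
      {
          char *dup = malloc(strlen(token) + 1);   /* the +1 is the important part */
          if (dup != NULL)
              strcpy(dup, token);
          return dup;
      }

    A similar off-by-one may affect cmd_info->arg itself: when no redirection or '&' tokens are present, iarg reaches num_elems, so the array needs num_elems + 1 slots to hold the trailing NULL sentinel.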

    Read the article

  • NSMutableDictionary, alloc, init and reiniting...

    - by Marcos Issler
    In the following code: //anArray is an array of dictionaries with 5 objs. //here we init with the first NSMutableDictionary *anMutableDict = [[NSMutableDictionary alloc] initWithDictionary:[anArray objectAtIndex:0]]; ... use of anMutableDict ... //then want to clear the MutableDict and assign the other dicts that were in the array of dicts for (int i=1;i<5;i++) { [anMutableDict removeAllObjects]; [anMutableDict initWithDictionary:[anArray objectAtIndex:i]]; } Why does this crash? What is the right way to clear an NSMutableDictionary and then assign a new dict? Thanks, guys. Marcos.

    Read the article

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables on separate databases. One of them has the information of all the SMS that passed through the company's servers, while the other one has the information of the actual billing of those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but not charged to the user for whatever reason that may be. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. An issue here is that dates usually are the same on both sides, but in many cases differ by 1 or 2 seconds. I have, so far, two alternatives to do this: (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both my temporary tables and start comparing one by one using cursors. In this case, the procedure would be run on the server where one of the sources is, so the contents of the other one would be looked up using a dblink. (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the query to a text file. I will have two text files, one for each source. Then, open the first file and load all of its content into a hash_map (key-value) using C++, where the key will be the phone number and the value a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look for numberX's entry in the hash_map (which will be a list of times) and then check if timeX is on that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1). My main concern is efficiency. These samples have about 2 million records on each source, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is the better option?
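
    For the second alternative, here is a minimal C++ sketch of the hash-map comparison; the file names, the "number time" layout and the two-second tolerance are assumptions made up for the illustration, not the actual export format:

      #include <cstdlib>
      #include <fstream>
      #include <iostream>
      #include <string>
      #include <unordered_map>
      #include <vector>

      int main() {
          // phone number -> billed send times (e.g. seconds since midnight)
          std::unordered_map<std::string, std::vector<long>> billed;

          // Hypothetical dump of the billing sample: "number time" per line.
          std::ifstream billing_file("billing_sample.txt");
          std::string number;
          long t;
          while (billing_file >> number >> t)
              billed[number].push_back(t);

          // Hypothetical dump of the SMS-server sample, same format.
          std::ifstream sms_file("sms_sample.txt");
          while (sms_file >> number >> t) {
              bool charged = false;
              auto it = billed.find(number);
              if (it != billed.end()) {
                  for (long bt : it->second) {
                      if (std::labs(bt - t) <= 2) { charged = true; break; }  // tolerate 1-2 s drift
                  }
              }
              if (!charged)
                  std::cout << number << " " << t << "\n";  // candidate for the "uncharged" table
          }
          return 0;
      }

    The point of the design is that each of the ~2 million billing rows is loaded once, and each sent SMS costs one hash lookup plus a scan of that phone's (usually short) list of times, instead of a lookup against the whole other table.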

    Read the article

  • Acceptable GC frequency for a SlimDX/Windows/.NET game?

    - by Rei Miyasaka
    I understand that the Windows GC is much better than the Xbox/WP7 GC, being that it's generational and multithreaded -- so I don't need to worry quite as much about avoiding memory allocation. SlimDX even has some unavoidable functions that generate some amount of garbage (specifically, MapSubresource creates DataBoxes), yet people don't seem to be too upset about it. I'd like to use some functional paradigms to write my code too, which also means creating objects like closures and monads. I know premature optimization isn't a good thing, but are there rules of thumb or metrics that I can follow to know whether I need to cut down on allocations? Is, say, one gen 0 GC per frame too much? One thing that has me stumped is object promotions. Gen 0 GCs will supposedly finish within a millisecond or two, but if I'm understanding correctly, it's the gen 1 and 2 promotions that start to hurt. I'm not too sure how I can predict/prevent these.

    Read the article

  • An interesting case of delete and destructor (C++)

    - by Viet
    I have a piece of code where I can call the destructor multiple times and still access member functions even after the destructor was called, with the member variables' values preserved. I was still able to access member functions after I called delete, but the member variables were nullified (all set to 0). And I can't delete twice. Please kindly explain this. Thanks. #include <cstdio> #include <iostream> using namespace std; template <typename T> void destroy(T* ptr) { ptr->~T(); } class Testing { public: Testing() : test(20) { } ~Testing() { printf("Testing is being killed!\n"); } int getTest() const { return test; } private: int test; }; int main() { Testing *t = new Testing(); cout << "t->getTest() = " << t->getTest() << endl; destroy(t); cout << "t->getTest() = " << t->getTest() << endl; t->~Testing(); cout << "t->getTest() = " << t->getTest() << endl; delete t; cout << "t->getTest() = " << t->getTest() << endl; destroy(t); cout << "t->getTest() = " << t->getTest() << endl; t->~Testing(); cout << "t->getTest() = " << t->getTest() << endl; //delete t; // <======== Don't do it! Double free/delete! cout << "t->getTest() = " << t->getTest() << endl; return 0; }

    Read the article

  • Finding leaks under GeneralBlock-16?

    - by erastusnjuki
    If ObjectAlloc cannot deduce type information for the block, it uses 'GeneralBlock'. Are there any strategies for tracking down leaks from this block that would eliminate the 'trial and error' approach I currently use? The Extended Detail view doesn't really do it for me, as I just keep guessing.

    Read the article

  • How to manipulate *huge* amounts of data

    - by Alejandro
    Hi there! I'm having the following problem. I need to store huge amounts of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what's the best way to do it (combinations of programming language + OS + whatever you think is important). The structure of the information I'm using is a 4D array (NxNxNxN) of double-precision floats (8 bytes). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on the HDD of my computer. This is really slow and the manipulation of the data is unbearable, so this is no solution at all! I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application to take advantage of it (I'm not a professional programmer, so any book/reference will help me a lot). An alternative solution I'm thinking of is to buy a dedicated server with lots of RAM, but I don't know for sure if that will solve the problem. So right now my ignorance doesn't let me choose the best way to proceed. What would you do if you were in this situation? I'm open to any idea. Thanks in advance!
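
    One possible middle ground, offered only as a sketch and assuming NumPy is an option: a memory-mapped file lets the operating system page the 32 GB array in and out on demand, so individual slices can be read and written without ever loading the whole thing (the file name and array size below are made up for the example):

      import numpy as np

      N = 256  # hypothetical size; 256**4 doubles is roughly 32 GB
      # Create (or reopen with mode="r+") a disk-backed 4D array.
      data = np.memmap("array4d.dat", dtype=np.float64, mode="w+", shape=(N, N, N, N))

      # Work on slices; only the touched pages are brought into RAM.
      data[0, 0] = 1.0          # fills one N x N slice
      block = data[0, 1] * 2.0  # reads another slice into memory

      data.flush()              # push dirty pages back to disk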

    Read the article

  • Oracle Database In-Memory

    - by Mike.Hallett(at)Oracle-BI&EPM
    Larry Ellison unveiled the next major milestone in database technology, Oracle Database In-Memory, on June 10, 2014. Oracle Database In-Memory will be generally available in July 2014 and can be used with all hardware platforms on which Oracle Database 12c is supported. This option will accelerate database performance by orders of magnitude for analytics, data warehousing, and reporting while also speeding up online transaction processing (OLTP). It allows any existing Oracle Database-compatible application to automatically and transparently take advantage of columnar in-memory processing, without additional programming or application changes. Benefits: fast ad-hoc analytics without the need to pre-create indexes; completely transparent to existing applications; faster mixed-workload OLTP; no database size limit; industrial-strength availability and security; and the robustness and maturity of Oracle Database 12c. To find out more, see Oracle Database In-Memory and the comment from Rittman Mead on the Oracle In-Memory Option launch ... and I will let you know how this unfolds with regard to advantages for OBI11g, Exalytics and Big Data over the coming months.

    Read the article

  • Remove pointer object whose reference is maintained in three different lists

    - by brainydexter
    I am not sure how to approach this problem: the 'Player' class maintains a list of Bullet* objects: class Player { protected: std::list< Bullet* > m_pBullet_list; } When the player fires a Bullet, it is added to this list. Also, inside the constructor of Bullet, a reference to the same object is registered with CollisionMgr, where CollisionMgr also maintains a list of Bullet*. Bullet::Bullet(GameGL*a_pGameGL, Player*a_pPlayer) : GameObject( a_pGameGL ) { m_pPlayer = a_pPlayer; m_pGameGL->GetCollisionMgr()->AddBullet(this); } class CollisionMgr { void AddBullet(Bullet* a_pBullet); protected: std::list< Bullet*> m_BulletPList; } In CollisionMgr.Update(), based on some conditions, I populate class Cell, which again contains a list of Bullet*. Finally, certain conditions qualify a Bullet to be deleted. Now, these conditions are tested while iterating through a Cell's list. So, if I have to delete the Bullet object from all these places, how should I do it so that there are no more dangling references to it? std::list< Bullet*>::iterator bullet_it; for( bullet_it = (a_pCell->m_BulletPList).begin(); bullet_it != (a_pCell->m_BulletPList).end(); bullet_it++) { bool l_Bullet_trash = false; Bullet* bullet1 = *bullet_it; // conditions would set this to true if ( l_Bullet_Trash ) // TrashBullet( bullet1 ); continue; } Also, I was reading about list::remove, and it mentions that it calls the destructor of the object we are trying to delete. Given this info, if I delete from one list, the object no longer exists, but the other lists would still contain references to it. How do I handle all these problems? Can someone please help me here? Thanks. PS: If you want me to post more code or provide an explanation, please do let me know.
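
    One common way out, sketched here under the assumption that C++11 smart pointers are acceptable (the class reuses the question's Bullet name, the helper functions are hypothetical): let one container own the bullets through shared_ptr and let the other lists hold weak_ptr, so a destroyed bullet simply shows up as an expired entry that each list prunes on its next pass:

      #include <list>
      #include <memory>

      class Bullet { /* ... */ };

      // The owner (e.g. Player) keeps the only strong references.
      std::list<std::shared_ptr<Bullet>> owned;

      // CollisionMgr and each Cell keep weak references instead of raw pointers.
      std::list<std::weak_ptr<Bullet>> observers;

      // Trashing a bullet: erase it from the owning list; the object is freed there.
      void trash(std::list<std::shared_ptr<Bullet>> &owner,
                 const std::shared_ptr<Bullet> &bullet)
      {
          owner.remove(bullet);
      }

      // Each frame, every observing list drops entries whose bullet is gone.
      void prune(std::list<std::weak_ptr<Bullet>> &list)
      {
          list.remove_if([](const std::weak_ptr<Bullet> &w) { return w.expired(); });
      }

    If smart pointers are not an option, the usual alternative is a single TrashBullet-style helper that erases the raw pointer from all three containers before calling delete, so no list is ever left holding a dangling pointer.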

    Read the article

  • JVMTI: FollowReferences : how to skip Soft/Weak/Phantom references?

    - by Jayan
    I am writing a small code to detect number of objects left behind after certain actions in our tool. This uses FollowReferences() JVMTI-API. This counts instances reachable by all paths. How can I skip paths that included weak/soft/phantom reference? (IterateThroughHeap counts all objects at the moment, so the number is not fully reliable) Thanks, Jayan

    Read the article

  • iPhone objective-c autoreleasing leaking

    - by okami
    I do this: NSString *fullpath = [[NSBundle mainBundle] pathForResource:@"text_file" ofType:@"txt"]; Why does the following message appear? Is my code leaking? 2010-03-31 13:44:18.649 MJIPhone[2175:207] *** _NSAutoreleaseNoPool(): Object 0x3909ba0 of class NSPathStore2 autoreleased with no pool in place - just leaking Stack: (0x1656bf 0xc80d0 0xcf2ad 0xcee0e 0xd3327 0x2482 0x2426) 2010-03-31 13:44:18.653 MJIPhone[2175:207] *** _NSAutoreleaseNoPool(): Object 0x390b0b0 of class NSPathStore2 autoreleased with no pool in place - just leaking Stack: (0x1656bf 0xc80d0 0xc7159 0xd0c6f 0xd3421 0x2482 0x2426) 2010-03-31 13:44:18.672 MJIPhone[2175:207] *** _NSAutoreleaseNoPool(): Object 0x390d140 of class NSCFString autoreleased with no pool in place - just leaking Stack: (0x1656bf 0xc6e62 0xcec1b 0xd4386 0x24ac 0x2426)

    Read the article

  • Why does accessing a member of a malloced array of structs seg fault?

    - by WSkinner
    I am working through Learn C The Hard Way and am stumped on something. I've written a simplified version of the problem I am running into to make it easier to get down to it. Here is the code: #include <stdlib.h> #define GROUP_SIZE 10 #define DATA_SIZE 64 struct Dummy { char *name; }; struct Group { struct Dummy **dummies; }; int main() { struct Group *group1 = malloc(sizeof(struct Group)); group1->dummies = malloc(sizeof(struct Dummy) * GROUP_SIZE); struct Dummy *dummy1 = group1->dummies[3]; // Why does this seg fault? dummy1->name = (char *) malloc(DATA_SIZE); return 0; } when I try to set the name pointer on one of my dummies I get a seg fault. Using valgrind it tells me this is uninitialized space. Why is this?
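
    A hedged reading of what goes wrong, with a sketch of the allocation the code probably intended (not the book's official answer): dummies is declared as struct Dummy **, i.e. an array of pointers, but the malloc sizes it as an array of structs and never initializes the individual pointers, so group1->dummies[3] is an indeterminate pointer and dereferencing it faults:

      #include <stdlib.h>

      #define GROUP_SIZE 10
      #define DATA_SIZE 64

      struct Dummy { char *name; };
      struct Group { struct Dummy **dummies; };

      int main(void)
      {
          struct Group *group1 = malloc(sizeof(struct Group));

          /* An array of GROUP_SIZE pointers, not of GROUP_SIZE structs. */
          group1->dummies = malloc(sizeof(struct Dummy *) * GROUP_SIZE);

          /* Each slot still has to point at a real Dummy before it is used. */
          for (int i = 0; i < GROUP_SIZE; i++)
              group1->dummies[i] = malloc(sizeof(struct Dummy));

          group1->dummies[3]->name = malloc(DATA_SIZE);   /* now safe */
          return 0;
      }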

    Read the article

  • Improve efficiency when using parallel to read from compressed stream

    - by Yoga
    This is another question, extended from the previous one [1]. I have a compressed file and stream it to feed into a Python program, e.g. bzcat data.bz2 | parallel --no-notice -j16 --pipe python parse.py > result.txt The parse.py script reads from stdin continuously and prints to stdout. My EC2 instance has 16 cores, but the top command shows a load average of only 3 to 4. From ps, I am seeing a lot of stuff like sh -c 'dd bs=1 count=1 of=/tmp/7D_YxccfY7.chr 2>/dev/null'; I know I can improve performance by using -a in.txt, but in my case I am streaming from bz2 (I cannot extract it first since I don't have enough disk space). How can I improve the efficiency in my case? [1] Gnu parallel not utilizing all the CPU

    Read the article

  • Tomcat memory issue

    - by user305210
    Hello, I have noticed that my application running on Tomcat 5 starts with 1 GB of memory, and as soon as it starts receiving requests from clients, the free memory starts dropping until it is down to 100 MB, and troubles start from there. I am looking at the /manager/status page of Tomcat, under the JVM section where "Free Memory", "Total Memory" and "Max Memory" are listed. Is this an indicator of a memory leak? Memory does not seem to be freed up automatically even if there are no requests coming from client machines.
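
    For reference, the three figures on that status page correspond to the standard JVM Runtime values; a tiny, purely illustrative sketch that prints the same numbers from inside the application (the class name is made up):

      public class MemorySnapshot {
          public static void main(String[] args) {
              Runtime rt = Runtime.getRuntime();
              long free = rt.freeMemory();    // unused space inside the currently allocated heap
              long total = rt.totalMemory();  // heap currently reserved from the OS
              long max = rt.maxMemory();      // ceiling the heap may grow to (-Xmx)
              System.out.printf("free=%d MB total=%d MB max=%d MB%n",
                      free >> 20, total >> 20, max >> 20);
              // A shrinking "free" value on its own is not proof of a leak; the JVM
              // normally reclaims that space only when a garbage collection runs.
          }
      }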

    Read the article

  • Page Fault Interrupt Problems

    - by Vikas
    This is a statement referring to a problem caused by a page fault (from Silberschatz, 7th ed., p. 310, last paragraph): 'We can't simply restart instructions when an instruction modifies several different locations. For example, when an instruction moves 256 bytes from source to destination and either the source or the destination straddles a page boundary, then, after a partial move, if a page fault occurs, we can't simply restart the instruction.' My question is: why not? Simply restart the instruction and do the same copy again after the page is brought in. Is there any problem with that?

    Read the article
