Search Results

Search found 13869 results on 555 pages for 'memory dump'.

Page 276 of 555

  • When is a Web Service constructor called? [Java Netbeans 6.7.1 & Tomcat 6.0.18]

    - by Shaitan00
    I am migrating a Java RMI application to a Java Web Service (school assignment) and I've encountered an issue. Currently my Java server creates an instance of the remote object; this object has a constructor that takes a parameter (int ID) which tells it which database to load into memory - works like a charm. Migrating this to Web Services is causing me a problem: first I needed to add a default constructor because it wouldn't deploy without one, and then while doing some reading all these discussions about "stateless web services" kept coming up. For example, if I "start" my web service with parameter 0, it should load from Database 0, and all requests from clients would be served using that data. I want this to happen only when I start the web service and NOT every time a client connects. Loading from the DB is expensive and takes time, so I want to do it once so that clients just deal with the data already in memory - this is how it works with my Java RMI version. Can this also work with Web Services? Any advice would be much appreciated. Thanks.
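    One common pattern for this (a minimal sketch, assuming a JAX-WS-style endpoint; LookupService, InMemoryDatabase and the method names are illustrative, not from the question) is to keep the expensive data in a lazily initialized static holder, so it is loaded once per deployed web app rather than once per request or per service instance:

        // Hypothetical sketch: load the database into memory once and share it across requests.
        import javax.jws.WebMethod;
        import javax.jws.WebService;

        @WebService
        public class LookupService {

            // Holder idiom: DATA is initialized the first time Holder is touched
            // (i.e. on the first request after deployment) and then reused.
            private static class Holder {
                static final InMemoryDatabase DATA = InMemoryDatabase.load(0); // expensive, runs once
            }

            @WebMethod
            public String query(String key) {
                return Holder.DATA.lookup(key);
            }
        }

    InMemoryDatabase is a stand-in for whatever loads and caches the records; the point is only that the container may create and discard service instances freely while the loaded data stays reachable from a static reference.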

    Read the article

  • How do I prevent my code from being stolen?

    - by Calmarius
    What happens exactly when I launch a .NET exe? I know that C# is compiled to IL code and I think the generated exe file is just a launcher that starts the runtime and passes the IL code to it. But how? And how complex a process is it? The IL code is embedded in the exe. I think it can be executed from memory without writing it to the disk, while ordinary exes cannot (ok, they can, but it is very complicated). My final aim is extracting the IL code and writing my own encrypted launcher to prevent script kiddies from opening my code in Reflector and just stealing all my classes easily. Well, I can't prevent reverse engineering completely. If they are able to inspect the memory and catch the moment when I'm passing the pure IL to the runtime, then it won't matter whether it is a .NET exe or not, will it? I know there are several obfuscator tools, but I don't want to mess up the IL code itself. EDIT: so it seems it isn't worth trying what I wanted. They will crack it anyway... So I will look for an obfuscation tool. And yes, my friends said too that it is enough to rename all symbols to meaningless names. And reverse engineering won't be so easy after all.

    Read the article

  • Delete on a very deep tree

    - by Kathoz
    I am building a suffix trie (unfortunately, no time to properly implement a suffix tree) for a 10-character set. The strings I wish to parse are going to be rather long (up to 1M characters). The tree is constructed without any problems; however, I run into some when I try to free the memory after being done with it. In particular, I set up my constructor and destructor like this (where CNode.child is a pointer to an array of 10 pointers to other CNodes, and count is a simple unsigned int):

        CNode::CNode(){
            count = 0;
            child = new CNode* [10];
            memset(child, 0, sizeof(CNode*) * 10);
        }

        CNode::~CNode(){
            for (int i=0; i<10; i++)
                delete child[i];
        }

    I get a stack overflow when trying to delete the root node. I might be wrong, but I am fairly certain that this is due to too many nested destructor calls (each destructor calls up to 10 other destructors). I know this is suboptimal both space- and time-wise; however, this is supposed to be a quick-and-dirty solution to the repeated substring problem. tl;dr: how would one go about freeing the memory occupied by a very deep tree? Thank you for your time.

    Read the article

  • How to "pin" C++/CLI pointers

    - by Kumar
    I am wrapping up a class which reads a custom binary data file and makes the data available to a .NET/C# class. However, a couple of lines down the code I start getting a memory access violation error, which I believe is due to the GC moving memory around (the class is managed). Here's the code:

        if ( ! reader.OpenFile(...) )
            return ;

        foreach(string fieldName in fields) {
            int colIndex = reader.GetColIndex( fieldName );
            int colType = reader.GetColType( colIndex ); // error is raised here on 2nd iteration
        }

        for ( int r = 0 ; r < reader.NumFields(); r++ ) {
            foreach(string fieldName in fields) {
                int colIndex = reader.GetColIndex( fieldName );
                int colType = reader.GetColType( colIndex ); // error is raised here on 2nd iteration
                switch ( colType ) {
                    case 0 : // INT
                        processField( r, fieldName, reader.GetInt(r,colIndex) );
                        break ;
                    ....
                }
            }
        }
        ....

    I've looked at interior_ptr and pin_ptr, but they give error C3160 ("cannot be in a managed class"). Any workaround? BTW, this is my first C++ program in a very long time!

    Read the article

  • Why does reusing arrays increase performance so significantly in C#?

    - by Willem
    In my code, I perform a large number of tasks, each requiring a large array of memory to temporarily store data. I have about 500 tasks. At the beginning of each task, I allocate memory for an array:

        double[] tempDoubleArray = new double[M];

    M is a large number depending on the precise task, typically around 2000000. Now, I do some complex calculations to fill the array, and in the end I use the array to determine the result of this task. After that, tempDoubleArray goes out of scope. Profiling reveals that the calls to construct the arrays are time consuming. So, I decided to try to reuse the array, by making it static and reusing it. It requires some additional juggling to figure out the minimum size of the array, requiring an extra pass through all tasks, but it works. Now the program is much faster (from 80 sec to 22 sec for execution of all tasks).

        double[] tempDoubleArray = staticDoubleArray;

    However, I'm a bit in the dark about why precisely this works so well. I'd say that in the original code, when tempDoubleArray goes out of scope, it can be collected, so allocating a new array should not be that hard, right? I ask this because understanding why it works might help me figure out other ways to achieve the same effect, and because I would like to know in what cases allocation gives performance issues.

    Read the article

  • php | Multidimensional array sorting

    - by user889349
    I have an array that needs to be sorted (based on id):

        Array
        (
            [0] => Array
                (
                    [qty] => 1
                    [id] => 3
                    [name] => Name1
                    [sku] => Model 1
                    [options] =>
                    [price] => 100.00
                )
            [1] => Array
                (
                    [qty] => 2
                    [id] => 1
                    [name] => Name2
                    [sku] => Model 1
                    [options] => Color: <em>Black (+10$)</em>. Memory: <em>32GB (+99$)</em>.
                    [price] => 209.00
                )
        )

    Is it possible to sort my array to get this output (id-based)?

        Array
        (
            [0] => Array
                (
                    [qty] => 2
                    [id] => 1
                    [name] => Name2
                    [sku] => Model 1
                    [options] => Color: <em>Black (+10$)</em>. Memory: <em>32GB (+99$)</em>.
                    [price] => 209.00
                )
            [1] => Array
                (
                    [qty] => 1
                    [id] => 3
                    [name] => Name1
                    [sku] => Model 1
                    [options] =>
                    [price] => 100.00
                )
        )

    Thanks!

    Read the article

  • Is there a way to capture a bitmap from a WPF window using native C++?

    - by Mike Caron
    Imagine a document window in an MDI application which contains a child WPF window, say a sidebar for example. How can one get a bitmap containing both the WPF pixels AND the GDI (non-WPF) pixels? I've discovered that when making my thumbnail preview for the Win7 taskbar app icon hover, I get black in the parts of the preview where the WPF pixels should be. My current method simply grabs a bitmap capture of the document window. Then I get a DC for the preview, make a memory DC from it and select my bitmap into it. Then I do some size adjustments and BitBlt the memory DC to the real DC. I'm guessing that the BitBlt operation doesn't take into account the fact that the WPF pixels are hardware accelerated and therefore need to be grabbed from the graphics hardware. All the stuff in GDI is managed just fine, though, and when there are no WPF child windows, the preview image looks fine. I'm wondering if it's at all possible to grab a bitmap of the WPF window from native C++. Then I can blt that onto the black area of the preview.

    Read the article

  • Error while sending image through ajax to WCF

    - by Samar Rizvi
    Here is my form:

        <form id="register" enctype="multipart/form-data">
            <input type="text" name="first_name" placeholder="First Name" id="first_name" />
            <input type="text" name="last_name" placeholder="Last Name" id="last_name" />
            <input type="text" name="input_email" placeholder="Confirm your email" id="input_email" class="loginEmail" />
            <input type="password" name="input_password" placeholder="Password" id="input_password" class="loginPassword" />
            <input type="password" name="repeat_password" placeholder="Repeat password" id="repeat_password" class="loginPassword" />
            <input type="file" name="image_file" id="image_file" />
            <div class="logControl">
                <div class="memory"></div>
                <input type="submit" name="submit" value="Register" class="buttonM bBlue" id="register_submit"/>
                <div class="clear"></div>
            </div>
            <p><h3>Or click <a href="login.html">here</a> to login</h3></p>
        </form>

    Here is the jQuery call that I make:

        function WCFJSON() {
            $(".memory").html('<img src="images/elements/loaders/7s.gif" />');
            Data = new FormData($('form')[0]);
            $.ajax({
                type: 'POST',                            // GET or POST or PUT or DELETE verb
                url: "WCFService/Service.svc/Register",  // Location of the service
                data: Data,                              // Data sent to server
                async: false,
                cache: false,
                contentType: false,                      // content type sent to server
                dataType: DataType,                      // Expected data format from server
                processdata: false,                      // True or False
                success: function(msg) {                 // On successful service call
                    ...
                },
                error: ...                               // When service call fails
            });
        }

        $(document).ready(function(){
            $("#register").submit(function(){
                $('#input_password').val(CryptoJS.MD5($('#input_password').val()));
                $('#repeat_password').val(CryptoJS.MD5($('#repeat_password').val()));
                WCFJSON();
                return false;
            });
        });

    Now when I submit the form, the page refreshes with the GET parameters in the URL. But if I remove the file input from the form, the jQuery works fine.

    Read the article

  • What could the negative effects be of attaching to a process as a debugger?

    - by I_like_traffic_lights
    Background: A client of mine has a major problem. They have a CRM system which was created by a single person over a period of 9 years. Unfortunately, a few weeks ago, this person died. I believe the company has learned their lesson, and they have started a project of rewriting the CRM system on a modern platform. I have been hired to create a solution in the meantime to make adaptations to the CRM system. I have given up on understanding the code, as this would take too long. My solution, therefore, is to make a window and show it on top of the CRM system whenever the CRM system is showing. This part works fine, but my major problem is extracting the data from the CRM system.

    Proposed solution: After excluding 6 approaches, including runtime code injection, memory searching, and database integration, I have arrived at attaching to the process as a debugger, so I get notified about events, and using this in combination with reading from process memory. This approach seems to work, but I am worried about possible side effects.

    Question: What are the dangers of using this in a production environment where there are 250 employees utilizing the system? Needless to say, I cannot risk reducing the already shaky stability of the system.

    Read the article

  • Lazy load images in UITableViewCell

    - by lostInTransit
    Hi, I have some 50 custom cells in my UITableView. I want to display an image and a label in the cells, where I get the images from URLs. I want to do a lazy load of images so the UI does not freeze up while the images are being loaded. I tried getting the images in separate threads, but I have to load each image every time a cell becomes visible again (otherwise reuse of cells shows old images). Apps like Facebook load images only for cells currently visible, and once the images are loaded, they are not loaded again. Can someone please tell me how to duplicate this behavior? Thanks.

    Edit: Trying to cache images in an NSMutableDictionary object creates problems when the user scrolls fast. I am getting images only when scrolling completely stops and clearing out the cache on memory warnings. But the app invariably gets a memory warning (due to the size of the images being cached) and clears the cache before reloading. If scrolling is very fast, it crashes. Any other suggestions are welcome.

    Read the article

  • Why doesn't Firefox redownload images already on a page?

    - by vvo
    Hello, I just read this article: https://developer.mozilla.org/en/HTTP_Caching_FAQ

    There's a Firefox behavior (and that of some other browsers, I guess) I'd like to understand: if I take any webpage and try to insert the same image multiple times via JavaScript, the image is only downloaded ONCE, even if I specify all the headers needed to say "do not ever use the cache" (see the article). I know there are workarounds (like adding query strings to the end of URLs, etc.), but why does Firefox act like that? If I say that an image does not have to be cached, why is the image still taken from the cache when I try to re-insert it? Plus, what cache is used for this? (I guess it's the memory cache.) Is this behavior the same for dynamic inclusion, for example? ANSWER IS NO :) I just tested it, and the same headers for a JS script will make Firefox redownload it each time you append the script to the DOM. PS: I know you're wondering WHY I need to do that (appending the same image multiple times and forcing a redownload), but this is the way our app works. Thank you. The good answer is: Firefox will store images for the current page load in the memory cache even if you specify that it doesn't have to cache them. You can't change this behavior, but this is odd because it's not the same for JavaScript files, for example. Could someone explain or link to a document describing how the Firefox cache WORKS?

    Read the article

  • Android: designing an app to keep me logged into a wifi access point

    - by MrGibbage
    At the gym where I work out, they have an open wifi access point. The way it is set up, once you "connect", you have to navigate to a web page (it is a 1.1.X.X IP address) and click the "I agree" button, after presumably reading the user agreement. The problem is, they have it set up to log you out once an hour, which always happens in the middle of my workout. I have the SSID remembered, so it connects automatically when I come in range, but I get an Android notification that further action is needed to fully connect. What I was wondering is if there is a workaround so that I don't have to click through every hour. I was thinking of writing an app that could detect when I was in range, or "half-connected", and then have it somehow complete the registration process. Perhaps this will have to be done by loading the web page in memory and then somehow clicking the "I agree" button. What I would like help with is:

    1) What is the terminology involved here? What state is the connection in when I am connected but I haven't clicked through? What other connection states may apply? If I knew that, I might just be able to research this and come up with a solution. Are these different states "detectable"? It seems like it is, since I get a notification that I need to complete the registration process when I am "half-connected".

    2) I know there are plugins for desktop browsers that can click buttons (like the KeePass plugins, which will log you into a site). How could I replicate this in Android? Ideally I would like to do it internally, in memory, rather than firing up a browser. Possible? Comments? Is my understanding and thought process sound here, or am I overlooking something?
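    On the terminology point, this "half-connected" state is usually called a captive portal: the access point associates normally but intercepts web traffic until the agreement page has been acknowledged. A rough, heavily hedged sketch of the second idea (submitting the form in the background instead of showing a browser), assuming the portal accepts a plain form POST; the URL and the "agree" field name are purely illustrative and would have to be read from the page the gym's access point actually serves:

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public final class PortalLogin {
            // Hypothetical portal address and form field.
            private static final String PORTAL_URL = "http://1.1.1.1/login";

            public static boolean acceptAgreement() throws Exception {
                byte[] form = "agree=I+agree".getBytes(StandardCharsets.UTF_8);
                HttpURLConnection conn = (HttpURLConnection) new URL(PORTAL_URL).openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(form);                      // simulate clicking "I agree"
                }
                return conn.getResponseCode() == 200;     // portal accepted the submission
            }
        }

    Whether something like this works depends entirely on how the portal's form is built (hidden fields, session tokens, redirects), so the page source would need to be inspected first.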

    Read the article

  • Is a web-server (e.g servlets) a good solution for an IM server?

    - by John
    I'm looking at a new app - broadly speaking, an IM application with a strong client-server model: all communications go through a server so they can be logged centrally. The server will be Java in some form; clients could at this point be anything from a .NET desktop app to Flex/Silverlight, to a simple web interface using JS/AJAX. I had anticipated doing the server using standard J2EE so I get a thread-safe, multi-user server for 'free' - to make things simple, let's say using servlets (though in practice Spring MVC would be likely). This all seemed very neat, but I'm concerned whether the stateless nature of servlets is the best approach.

    If my memory of servlets (been a year or two) is right, each time a client sends an HTTP request - typically a new message entered by the user - the servlet cannot assume it has the user/chat in memory and might have to get it from the DB; regardless, it has to look it up. Then it either has to use some PUSH system to inform other members of the chat, or cache that there are new messages for other clients, who poll the server using AJAX or similar - and when they poll, it again has to look up the chat, including new messages, and send the new data.

    I'm wondering if a better system would be for the server to run core Java and implement socket-based communication with clients. This allows much more immediate data transfer and is more flexible if, say, the IM client included some game you could play. But then you're writing a custom server, and sockets don't sound very friendly to a browser-based client on current browsers. Am I missing some big piece of the puzzle here? It kind of feels like I am. Perhaps a better way to ask the question would simply be: "if the client was browser-based using HTML/JS and had to run on IE7+, FF2+ (i.e. no HTML5), how would you implement the server?"

    Edit: if you are going to suggest using XMPP, I have been trying to get my head around this in another question, so please consider whether that's a more appropriate place to discuss this specifically.
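    For what it's worth, servlet instances are stateless per request, but the web application as a whole can hold shared state, so the chat does not have to be re-read from the DB on every message. A minimal sketch of that idea with polling clients (class, parameter and field names are illustrative, not a recommendation over sockets or XMPP):

        import java.io.IOException;
        import java.util.Collections;
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.CopyOnWriteArrayList;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class MessageServlet extends HttpServlet {
            // One shared map for the whole web app: chat id -> messages held in memory.
            private static final Map<String, List<String>> CHATS =
                    new ConcurrentHashMap<String, List<String>>();

            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                String chatId = req.getParameter("chat");
                CHATS.computeIfAbsent(chatId, id -> new CopyOnWriteArrayList<String>())
                     .add(req.getParameter("text"));
                // Writing the message to the DB for the central log could happen asynchronously here.
                resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
            }

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                // Polling clients fetch the in-memory history; no DB lookup per request.
                List<String> messages = CHATS.getOrDefault(req.getParameter("chat"),
                        Collections.<String>emptyList());
                resp.setContentType("text/plain");
                resp.getWriter().write(String.join("\n", messages));
            }
        }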

    Read the article

  • Sorting a very large text file in Java

    - by Alice
    Hi, I have a large text file I need to sort in Java. The format is:

        word [tab] frequency [new line]

    The algorithm for sorting is:

    1. Read some of the file, filtering for purely alphabetic words. Once you have X number of alphabetic words, call Collections.sort and write the result to a file. Repeat until you have finished reading the file.
    2. Start reading two sorted files, comparing line by line for the word with the higher frequency, and writing at the same time to a new file so as not to load much into memory.
    3. Repeat until all files are merged into one large file.

    Right now I've divided the large file into smaller ones (sorted by descending frequency) with 10,000 lines each. I know I need to somehow merge these files back together, but I'm not sure how to go about this. I've created a LinkedList to keep track of all the files created. The algorithm says to compare each line in the two files, except I've tried a case where, say, file1 = 8,6,5,3,1 and file2 = 9,8,8,8,8. Then if I compare them line by line I would get file3 = 9,8,8,6,8,5,8,3,8,1, which is incorrectly sorted (they should be in decreasing order). I think I'm misunderstanding some part of the algorithm. If someone could point out what I should do instead, I'd greatly appreciate it. Thanks.

    Edit: Yes, this is an assignment. We aren't allowed to increase memory, unfortunately :(
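    The usual fix for the interleaving problem in step 2 is to advance only the file whose current line has the highest frequency, which is exactly what a priority queue over the open chunk readers gives you. A rough sketch (not the assignment's required structure; error handling is minimal and the "word [tab] frequency" format is assumed as described):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.util.Comparator;
        import java.util.List;
        import java.util.PriorityQueue;

        public class KWayMerge {
            // One open chunk file plus the line currently held from it.
            static class Source {
                final BufferedReader reader;
                String line;
                Source(BufferedReader reader) throws IOException {
                    this.reader = reader;
                    this.line = reader.readLine();
                }
                long frequency() {                            // line format: word \t frequency
                    return Long.parseLong(line.split("\t")[1]);
                }
            }

            public static void merge(List<String> chunkFiles, String outFile) throws IOException {
                // Highest frequency first, matching the descending order of the chunks.
                PriorityQueue<Source> queue =
                        new PriorityQueue<>(Comparator.comparingLong(Source::frequency).reversed());
                for (String f : chunkFiles) {
                    Source s = new Source(new BufferedReader(new FileReader(f)));
                    if (s.line != null) queue.add(s);
                }
                try (PrintWriter out = new PrintWriter(outFile)) {
                    while (!queue.isEmpty()) {
                        Source s = queue.poll();              // chunk holding the largest frequency
                        out.println(s.line);
                        s.line = s.reader.readLine();         // advance only that chunk
                        if (s.line != null) queue.add(s); else s.reader.close();
                    }
                }
            }
        }

    With only two chunks open at a time this reduces to the pairwise merge the algorithm describes; the queue just decides which side to advance.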

    Read the article

  • !gcroot output leads nowhere

    - by Jeff Costa
    I am troubleshooting memory fragmentation in an app pool, as evidenced by a small number of Free objects consuming the most space on the heap:

        0x000007ff00256728     6,543   3,890,208  System.Collections.Hashtable+bucket[]
        0x000007ff002649a8     7,297  22,979,560  System.Byte[]
        0x000007ff001e0d90   251,347  30,374,304  System.String
        0x0000000001d0c830       373  48,036,816  Free

    Running the !dumpgen 3 command reveals the fragmentation; there is a repeating pattern of Free and System.Object[] objects of the same size:

        000000017feb7350        24 **** FREE ****
        000000017feb7368      8192 System.Object[]
        000000017feb9368        24 **** FREE ****
        000000017feb9380      8192 System.Object[]
        000000017febb380        24 **** FREE ****
        000000017febb398      8192 System.Object[]
        000000017febd398        24 **** FREE ****
        000000017febd3b0      8192 System.Object[]
        000000017febf3b0        24 **** FREE ****
        000000017febf3c8      8192 System.Object[]
        000000017fec13c8        24 **** FREE ****
        000000017fec13e0      8192 System.Object[]
        000000017fec33e0        24 **** FREE ****
        000000017fec33f8      8192 System.Object[]
        000000017fec53f8        24 **** FREE ****
        000000017fec5410     14024 System.Object[]
        000000017fec8ad8        24 **** FREE ****
        000000017fec8af0      8192 System.Object[]
        000000017fecaaf0        24 **** FREE ****
        000000017fecab08      8192 System.Object[]
        000000017feccb08        24 **** FREE ****
        000000017feccb20      8192 System.Object[]
        000000017feceb20        24 **** FREE ****
        000000017feceb38      8192 System.Object[]
        000000017fed0b38        24 **** FREE ****
        000000017fed0b50      8192 System.Object[]
        000000017fed2b50        24 **** FREE ****
        000000017fed2b68      8192 System.Object[]

    When I try to obtain the root of one of the System.Object[] arrays with !gcroot, I get a pinned handle, but no additional stack data:

        Scan Thread 41 OSThread 1044
        DOMAIN(0000000001D51330):HANDLE(Pinned):15217e8:Root: 000000017fe60fe8(System.Object[])

    As you can see, there is no additional data to go on. Running a !handle command also yields nothing:

        0:041> !handle 000000017fe7a068 ff
        Handle 000000017fe7a068
          Type <Error retrieving type>
        unable to query object information
        unable to query object information
        No object specific information available

    How can I trace out this memory leak when I cannot find what is rooting the System.Object[]?

    Read the article

  • How do I read hex numbers into an unsigned int in C [Solved]

    - by sil3nt
    I'm wanting to read hex numbers from a text file into an unsigned integer so that I can execute machine instructions. It's just a simulation type thing that looks inside the text file and, according to the values and their corresponding instructions, outputs the new values in the registers. For example, the instructions would be:

        1RXY - Save register R with value in memory address XY
        2RXY - Save register R with value XY
        BRXY - Jump to register R if xy is this and that etc..
        ARXY - AND register R with value at memory address XY

    The text file contains something like this, each on a new line (in hexadecimal):

        120F
        B007
        290B

    My problem is copying each individual instruction into an unsigned integer... how do I do this?

        #include <stdio.h>

        int main(void){
            FILE *f;
            unsigned int num[80];

            f = fopen("values.txt", "r");
            if (f == NULL){
                printf("file doesn't exist?!\n");
                return 1;                 /* bail out instead of reading from a NULL stream */
            }

            int i = 0;
            /* pass the address of the element and read each value only once */
            while (i < 80 && fscanf(f, "%x", &num[i]) == 1){
                i++;
            }
            fclose(f);

            printf("%x\n", num[0]);
            return 0;
        }

    Read the article

  • How to iteratively generate k-element subsets from a set of size n in Java?

    - by Bea Metitiri
    Hi, I'm working on a puzzle that involves analyzing all size-k subsets and figuring out which one is optimal. I wrote a solution that works when the number of subsets is small, but it runs out of memory for larger problems. Now I'm trying to translate an iterative function written in Python to Java so that I can analyze each subset as it's created and get only the value that represents how optimized it is, not the entire set, so that I won't run out of memory. Here is what I have so far, and it doesn't seem to finish even for very small problems:

        public static LinkedList<LinkedList<Integer>> getSets(int k, LinkedList<Integer> set) {
            int N = set.size();
            int maxsets = nCr(N, k);
            LinkedList<LinkedList<Integer>> toRet = new LinkedList<LinkedList<Integer>>();
            int remains, thresh;
            LinkedList<Integer> newset;
            for (int i=0; i<maxsets; i++) {
                remains = k;
                newset = new LinkedList<Integer>();
                for (int val=1; val<=N; val++) {
                    if (remains==0) break;
                    thresh = nCr(N-val, remains-1);
                    if (i < thresh) {
                        newset.add(set.get(val-1));
                        remains --;
                    } else {
                        i -= thresh;
                    }
                }
                toRet.add(newset);
            }
            return toRet;
        }

    Can anybody help me debug this function or suggest another algorithm for iteratively generating size-k subsets?

    EDIT: I finally got this function working. I had to create a new variable that was the same as i to do the i and thresh comparison, because Python handles for-loop indexes differently.
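    Based on the fix described in the edit (the outer loop counter must not be modified while it is still driving the iteration), the inner loop presumably ends up working on a copy of i, along these lines; this is a sketch reconstructed from that description and slots into the function quoted above, so N, k, set and nCr are the same names used there:

        // Inside the outer loop: unrank subset number i without touching the loop counter itself.
        int rank = i;                           // working copy of the subset's index
        int remains = k;
        LinkedList<Integer> newset = new LinkedList<Integer>();
        for (int val = 1; val <= N; val++) {
            if (remains == 0) break;
            int thresh = nCr(N - val, remains - 1);
            if (rank < thresh) {
                newset.add(set.get(val - 1));   // element val belongs to this subset
                remains--;
            } else {
                rank -= thresh;                 // consume part of the rank, not the loop index
            }
        }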

    Read the article

  • Copy object using pointer (templates)

    - by Azodious
    How is the push_back of std::vector implemented so that it can make a copy of any datatype? It may be a pointer, a double pointer and so on... I'm implementing a template class having a push_back function almost identical to vector's. Within this method, a copy of the argument should be inserted into internally allocated memory, but the argument is a pointer (an object pointer). Can you please tell me how to create a copy from a pointer, so that if I delete the pointer in the caller, the copy still exists in my template class? The code base is as follows:

        template<typename T>
        class Vector {
        public:
            void push_back(const T& val_in) {
                T* a = *(new T(val_in));
                m_pData[SIZE++] = a;
            }
        }

    Caller:

        Vector<MyClass*> v(3);
        MyClass* a = new MyClass();
        a->a = 0;
        a->b = .5;
        v.push_back(a);
        delete a;

    Thanks.

    Read the article

  • Objective-C: Result from a static method saved to a class instance variable gives "EXC_BAD_ACCESS" when used

    - by KinGBin
    I am trying to store the MD5 string as a class instance variable instead of the actual password. I have a static function that returns an MD5 string, which I'm trying to store in an instance variable. I have the following setter for my class instance variable:

        -(void)setPassword:(NSString *)newpass{
            if(newpass != password){
                password = [utils md5HexDigest:newpass];
            }
        }

    This passes back the correct MD5 string and saves it to the password variable in my init function: [self setPassword:pword];. If I call another instance method and try to access self.password, I get "EXC_BAD_ACCESS". I understand that the memory is getting released, but I have no clue how to make sure it stays around. I have tried alloc/init with autorelease with no luck. This is the md5HexDigest function getting called during init (graciously found in another Stack Overflow question):

        + (NSString*)md5HexDigest:(NSString*)input {
            const char* str = [input UTF8String];
            unsigned char result[CC_MD5_DIGEST_LENGTH];
            CC_MD5(str, strlen(str), result);

            NSMutableString *ret = [NSMutableString stringWithCapacity:CC_MD5_DIGEST_LENGTH*2];
            for(int i = 0; i<CC_MD5_DIGEST_LENGTH; i++) {
                [ret appendFormat:@"%02x",result[i]];
            }
            return ret;
        }

    Any help/pointers would be greatly appreciated. I would rather have the MD5 string saved in memory than keep the actual password and call MD5 every time I need to use it. Thanks in advance.

    Read the article

  • JavaScript array random index insertion and deletion

    - by Tomi
    I'm inserting some items into an array with randomly created indexes, for example like this:

        var myArray = new Array();
        myArray[123] = "foo";
        myArray[456] = "bar";
        myArray[789] = "baz";
        ...

    In other words, the array indexes do not start with zero and there will be "numeric gaps" between them. My questions are: Will these numeric gaps be somehow allocated (and therefore take some memory) even when they do not have assigned values? When I delete myArray[456] from the example above, would the items below it be relocated?

    EDIT: Regarding my question/concern about relocation of items after insertion/deletion - I want to know what happens with the memory, not the indexes. More information from the Wikipedia article: "Linked lists have several advantages over dynamic arrays. Insertion of an element at a specific point of a list is a constant-time operation, whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can 'delete' an element from an array in constant time by somehow marking its slot as 'vacant', this causes fragmentation that impedes the performance of iteration."

    Read the article

  • How does git fetch the commits associated with a file?

    - by liadan
    I'm writing a simple parser of .git/* files. I covered almost everything, like objects, refs, pack files, etc. But I have a problem. Let's say I have a big 300M repository (in a pack file) and I want to find all the commits which changed /some/deep/inside/file. What I'm doing now is:

    - fetching the last commit
    - finding the file in it by:
        - fetching the parent tree
        - finding the tree one level deeper
        - recursively repeating until I get to the file
        - additionally, checking the hashes of each subfolder on the way to the file; if one of them is the same as in the commit before, I assume the file was not changed (because its parent dir didn't change)
    - then I store the hash of the file and fetch the parent commit
    - finding the file again and checking whether the hash changed
    - if yes, then the original commit (i.e. the one before the parent) changed the file

    And I repeat this over and over until I reach the very first commit. This solution works, but it sucks. In the worst-case scenario, the first search can take even 3 minutes (for a 300M pack). Is there any way to speed it up? I tried to avoid putting such large objects in memory, but right now I don't see any other way. And even then, the initial memory load will take forever :( Greets and thanks for any help!
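    To make the walk concrete: the check described above amounts to resolving the path to an object id in each commit and comparing it with the parent's, caching subtree lookups so unchanged directories are never re-parsed. A rough sketch in Java (GitReader and its methods are stand-ins for whatever the parser already provides; merge commits and renames are ignored):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class FileHistory {
            // "treeHash:entryName" -> entry hash, so unchanged subtrees are resolved only once.
            private final Map<String, String> entryCache = new HashMap<>();

            public List<String> commitsTouching(String headCommit, String path, GitReader repo) {
                List<String> result = new ArrayList<>();
                String commit = headCommit;
                String blob = resolvePath(repo, commit, path);
                while (commit != null) {
                    String parent = repo.parentOf(commit);                   // null at the root commit
                    String parentBlob = parent == null ? null : resolvePath(repo, parent, path);
                    if (blob != null && !blob.equals(parentBlob)) {
                        result.add(commit);                                  // this commit changed the file
                    }
                    commit = parent;
                    blob = parentBlob;
                }
                return result;
            }

            private String resolvePath(GitReader repo, String commit, String path) {
                String tree = repo.treeOf(commit);
                for (String part : path.replaceAll("^/", "").split("/")) {
                    final String currentTree = tree;
                    String entry = entryCache.computeIfAbsent(currentTree + ":" + part,
                            k -> repo.entryHash(currentTree, part));
                    if (entry == null) return null;                          // path absent in this commit
                    tree = entry;
                }
                return tree;                                                 // hash of the file's blob
            }

            // Placeholder for the existing parser's API.
            public interface GitReader {
                String parentOf(String commit);
                String treeOf(String commit);
                String entryHash(String tree, String name);
            }
        }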

    Read the article

  • PHP include taking too long

    - by wxiiir
    I have a PHP file of around 100MB which is full of arrays (only arrays). I've made a script that includes this file (for processing). First it exhausted the default XAMPP 128MB memory limit; I've raised it to 1024MB, but it just takes forever and doesn't do anything. I'm sure the problem is created by the sheer size of the file, because I've tried removing all lines of code and just leaving the include and an echo (for me to know when it finishes executing), and it does the same thing (taking forever). I've also tried to run the 100MB file separately, and the same thing happens. A 10MB file takes forever as well, but a similar 1MB file is almost instantly read and executed, so the problem must be more than just the file size. I was avoiding using C++ for a simple project like this and would rather not, as PHP is easier for me and the task that will be executed doesn't need the added speed it would gain from C++, but if I have no luck in solving this problem I guess I'll have to.

    EDIT: Reasons for not using a database:

    1. Whoever made it didn't use a database, and it will be pretty hard to store this in an organized database if I'm not able to do something with it first, like just reading it, copying parts from it, or putting it in memory or something.
    2. I don't have experience working with databases, as pretty much all the stuff I've ever done in PHP didn't need large amounts of stored data - 50KB at best. If I were thinking about a big project or huge chunks of data like this one, I definitely would, but I didn't make this mess to start with and now I have to undo it.
    3. The logic of having to store a small portion of data like 10MB on the hard drive, when every computer now has pretty much enough RAM to fit the whole OS in it, is pretty much incomprehensible unless someone gives a good explanation for it. If I had to access a lot of such files simultaneously I would understand, but like I said, this is a simple project; this is the only file that will be accessed at a given time. This isn't even to make some kind of website - it's to run a few times and be done with it.

    Read the article

  • Processor, OS: 32-bit, 64-bit

    - by Sandbox
    I am new to programming and come from a non-CS background (no formal degree). I mostly program WinForms using C#. I am confused about 32-bit and 64-bit... I mean, I have heard about 32-bit OSes and 32-bit processors, and that based on these a program has a maximum amount of memory, and that it affects the speed of a program. There are a lot more questions which keep coming to mind. I tried to go through some Computer Organization and Architecture books, but either I am too dumb to understand what is written in there, or the writers assume that the reader has some CS background. Can someone explain these things to me in plain, simple English, or point me to something which does that?

    EDIT: I have read things like "In 32-bit mode, they can access up to 4GB memory; in 64-bit mode, they can access much much more"... I want to know WHY, for all such things.

    BOUNTY: The answers below are really good... especially the one by Martin. But I am looking for a thorough explanation, in plain, simple English.
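    One concrete piece of the 4GB figure mentioned in the edit is just pointer arithmetic: a 32-bit address can take 2^32 distinct values, and 2^32 bytes is 4 GiB, so that is the most memory a single 32-bit address space can name. A tiny Java illustration (added here only to show the arithmetic):

        public class AddressSpace {
            public static void main(String[] args) {
                long addresses = 1L << 32;                                        // 4,294,967,296 distinct byte addresses
                System.out.println(addresses / (1024L * 1024 * 1024) + " GiB");  // prints "4 GiB"
            }
        }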

    Read the article

  • Receiving "expected expression before" Error When Using A Struct

    - by Zach Dziura
    I'm in the process of creating a simple 2D game engine in C with a group of friends at school. I'd like to write this engine in an object-oriented way, using structs as classes, function pointers as methods, etc. To emulate standard OOP syntax, I created a create() function which allocates space in memory for the object. I'm in the process of testing it out, and I'm receiving an error. Here is my code for the two files that I'm using to test:

    test.c:

        #include <stdio.h>

        int main()
        {
            typedef struct {
                int i;
            } Class;

            Class *test = (Class*) create(Class);
            test->i = 1;

            printf("The value of \"test\" is: %i\n", test->i);

            return 0;
        }

    utils.c:

        #include <stdio.h>
        #include <stdlib.h>
        #include "utils.h"

        void* create(const void* class)
        {
            void *obj = (void*) malloc(sizeof(class));
            if (obj == 0) {
                printf("Error allocating memory.\n");
                return (int*) -1;
            } else {
                return obj;
            }
        }

        void destroy(void* object)
        {
            free(object);
        }

    The utils.h file simply holds prototypes for the create() and destroy() functions. When I execute gcc test.c utils.c -o test, I receive this error message:

        test.c: In function 'main':
        test.c:10:32: error: expected expression before 'Class'

    I know it has something to do with my typedef at the beginning, and how I'm probably not using proper syntax. But I have no idea what that proper syntax is. Can anyone help?

    Read the article

  • Java: multi-threaded maps: how do the implementations compare?

    - by user346629
    I'm looking for a good hash map implementation - specifically, one that's good for creating a large number of maps, most of them small. So memory is an issue. It should be thread-safe (though losing the odd put might be an OK compromise in return for better performance), and fast for both get and put. And I'd also like the moon on a stick, please, with a side order of justice. The options I know are:

    - HashMap. Disastrously un-thread-safe.
    - ConcurrentHashMap. My first choice, but this has a hefty memory footprint - about 2k per instance.
    - Collections.synchronizedMap(HashMap). That's working OK for me, but I'm sure there must be faster alternatives.
    - Trove or Colt - I think neither of these is thread-safe, but perhaps the code could be adapted to be thread-safe.

    Any others? Any advice on what beats what when? Any really good new hash map algorithms that Java could use an implementation of? Thanks in advance for your input!
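    On the ConcurrentHashMap footprint point, one mitigation sketch (the numbers are illustrative): in the pre-Java-8 implementation the default footprint was dominated by the 16 internal segments, and both the initial capacity and the concurrency level can be dialed down when the maps are known to be small and lightly contended:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class SmallMaps {
            public static void main(String[] args) {
                // initialCapacity 4, loadFactor 0.75, concurrencyLevel 1: far fewer internal
                // segments/buckets than the defaults, so many small maps cost much less memory
                // while remaining thread-safe.
                Map<String, String> tiny = new ConcurrentHashMap<String, String>(4, 0.75f, 1);
                tiny.put("k", "v");
                System.out.println(tiny.get("k"));
            }
        }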

    Read the article
