Search Results

Search found 1898 results on 76 pages for 'structures'.


  • What's so bad about building XML with string concatenation?

    - by wsanville
    In the thread What’s your favorite “programmer ignorance” pet peeve?, the following answer appears, with a large amount of upvotes: "Programmers who build XML using string concatenation." My question is, why is building XML via string concatenation (such as with a StringBuilder in C#) bad? I've done this several times in the past, as it's sometimes the quickest way for me to get from point A to point B when it comes to the data structures/objects I'm working with. So far, I have come up with a few reasons why this isn't the greatest approach, but is there something I'm overlooking? Why should this be avoided?

    Probably the biggest reason I can think of is that you need to escape your strings manually, and most programmers will forget this. It will work great for them when they test it, but then "randomly" their apps will fail when someone throws an & symbol into their input somewhere. Ok, I'll buy this, but it's really easy to prevent the problem (SecurityElement.Escape, to name one).

    When I do this, I usually omit the XML declaration (i.e. <?xml version="1.0"?>). Is this harmful?

    Performance penalties? If you stick with proper string concatenation (i.e. StringBuilder), is this anything to be concerned about? Presumably, a class like XmlWriter will also need to do a bit of string manipulation...

    There are more elegant ways of generating XML, such as using XmlSerializer to automatically serialize/deserialize your classes. Ok sure, I agree. C# has a ton of useful classes for this, but sometimes I don't want to make a class for something really quick, like writing out a log file or something. Is this just me being lazy? If I am doing something "real" this is my preferred approach for dealing w/ XML.
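    To illustrate the escaping point above, here is a minimal sketch (in Python rather than C#, with made-up strings) contrasting naive concatenation, manual escaping, and a proper XML API:

        # Minimal sketch of the escaping pitfall described above (illustrative only).
        from xml.sax.saxutils import escape
        import xml.etree.ElementTree as ET

        user_input = "Fish & Chips <special>"

        # Naive concatenation: produces malformed XML once '&' or '<' shows up.
        naive = "<item>" + user_input + "</item>"

        # Manual escaping works, but every call site has to remember to do it.
        escaped = "<item>" + escape(user_input) + "</item>"

        # An XML API escapes automatically and never emits malformed markup.
        elem = ET.Element("item")
        elem.text = user_input

        print(naive)    # <item>Fish & Chips <special></item>   (broken)
        print(escaped)  # <item>Fish &amp; Chips &lt;special&gt;</item>
        print(ET.tostring(elem, encoding="unicode"))  # same output, generated safely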

    Read the article

  • Protocol specification in XML

    - by Mathijs
    Is there a way to specify a packet-based protocol in XML, so (de)serialization can happen automatically?

    The context is as follows. I have a device that communicates through a serial port. It sends and receives a byte stream consisting of 'packets'. A packet is a collection of elementary data types and (sometimes) other packets. Some elements of packets are conditional; their inclusion depends on earlier elements.

    I have a C# application that communicates with this device. Naturally, I don't want to work on a byte level throughout my application; I want to separate the protocol from my application code. Therefore I need to translate the byte stream into structures (classes). Currently I have implemented the protocol in C# by defining a class for each packet. These classes define the order and type of elements for each packet. Making class members conditional is difficult, so protocol information ends up in functions.

    I imagine XML that looks like this (note that my experience designing XML is limited):

        <packet>
          <field name="Author" type="int32" />
          <field name="Nickname" type="bytes" size="4">
            <condition type="range">
              <field>Author</field>
              <min>3</min>
              <max>6</max>
            </condition>
          </field>
        </packet>

    .NET has something called a 'binary serializer', but I don't think that's what I'm looking for. Is there a way to separate protocol and code, even if packets 'include' other packets and have conditional elements?
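    As a rough illustration of the idea (not a .NET answer), a data-driven decoder can read the field list from the XML spec and unpack the byte stream accordingly. The sketch below is Python, uses a simplified attribute-based condition element, and every name and format in it is hypothetical:

        # Sketch of driving packet decoding from an XML spec instead of per-packet classes.
        import struct
        import xml.etree.ElementTree as ET

        SPEC = """
        <packet>
          <field name="Author" type="int32" />
          <field name="Nickname" type="bytes" size="4">
            <condition field="Author" min="3" max="6" />
          </field>
        </packet>
        """

        def decode(spec_xml, data):
            """Decode one packet from 'data' according to the XML spec."""
            values, offset = {}, 0
            for field in ET.fromstring(spec_xml).findall("field"):
                cond = field.find("condition")
                if cond is not None:
                    ref = values[cond.get("field")]
                    if not (int(cond.get("min")) <= ref <= int(cond.get("max"))):
                        continue  # conditional field not present in this packet
                if field.get("type") == "int32":
                    (value,) = struct.unpack_from("<i", data, offset)
                    offset += 4
                else:  # "bytes"
                    size = int(field.get("size"))
                    value = data[offset:offset + size]
                    offset += size
                values[field.get("name")] = value
            return values

        print(decode(SPEC, struct.pack("<i", 4) + b"Math"))
        # {'Author': 4, 'Nickname': b'Math'}
        print(decode(SPEC, struct.pack("<i", 9)))
        # {'Author': 9}  -- Nickname skipped because Author is outside 3..6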

    Read the article

  • Catching the return of main function before it deallocates resources

    - by EpsilonVector
    I'm trying to implement user threads in Linux kernel 2.4, and I ran into something problematic and unexpected. Background: a thread basically executes a single function and dies, except that when I call thread_create for the first time it must turn main() into a thread as well (by default it is not a thread until the first call, which is also when all the related data structures are allocated).

    Since a thread executes a function and dies, we don't need to "return" anywhere with it, but we do need to save the return value to be reclaimed later with thread_join. So the hack I came up with was: when I allocate the thread stack, I place a return address that points to a thread_return_handler function, which deallocates the thread, makes it a zombie, and saves its return value for later.

    This works for "just run a function and die" threads, but is very problematic with the main thread. Since it actually is the main function, if it returns before the other threads finish, the normal return mechanism kicks in and deallocates all the shared resources, thus screwing up all the running threads. I need to keep it from doing that. Any ideas on how it can be done?

    Read the article

  • Obfuscator for .NET assembly (Maybe just a C++ obfuscator?)

    - by Pirate for Profit
    The software company I work for is using a ton of open source LGPL/BSD/MIT C++ code that we have written wrappers around to port "helper classes" into a .NET assembly, via C++/CLI. These libraries have wrapped old cryptic APIs into easy-to-use ones based on common sense, will be very helpful for a lot of different tasks, will be included in many future clients' applications, and we might even license them to other software companies in the same field. So naturally we are tasked with looking into solutions for securing the code from prying eyes.

    What we're trying to do is stop the casual observer from seeing what's going on. Now I have hacked some crazy shit in EverQuest and other video games in my day, so I know with enough tireless effort anything can be done. But we don't want to make it easy for whomever.

    To the point: besides the Visual Studio compiler's optimizations, is there a C++ obfuscator or .NET assembly obfuscator (after it's been built o.O) or something that would scramble everything up, re-arrange data structures, string constants, etc. idk? And if such a thing exists, we'd be curious to know how that would impact performance, as some sections of code are time critical (funny saying that using a managed M$ framework).

    Read the article

  • Data structure choices for high-speed and memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways: I have a function that takes in a string. If this function has never seen this string before, it needs to perform some processing. If the function has seen the string before, it needs to skip processing. After a specified amount of time, the function should accept duplicate strings. This function may be called thousands of times per second, and the string data may be very large. This is a highly abstracted explanation of the real application, just trying to get down to the core concept for the purpose of the question.

    The function will need to store state in order to detect duplicates. It also will need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is performant enough.

    The naive implementation would be simply (in C#): Dictionary<String, DateTime>, though in the interest of lowering the memory footprint and potentially increasing performance I'm evaluating custom data structures to handle this instead of a basic hashtable. So, given these constraints, what would you use?

    EDIT, some additional information that might change proposed implementations: 99% of the strings will not be duplicates. Almost all of the duplicates will arrive back to back, or nearly sequentially. In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
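    For reference, the naive approach translates to something like the sketch below (Python rather than C#; the digest choice, TTL handling, cleanup policy and lock are all illustrative assumptions, and a real version would have to weigh collision risk and lock contention):

        # Sketch of the "hash -> last-seen timestamp" idea with lazy expiry.
        import hashlib
        import threading
        import time

        class DuplicateFilter:
            def __init__(self, ttl_seconds):
                self.ttl = ttl_seconds
                self.seen = {}              # digest -> last-seen timestamp
                self.lock = threading.Lock()

            def should_process(self, text):
                digest = hashlib.sha1(text.encode("utf-8")).digest()  # 20 bytes, not the full string
                now = time.monotonic()
                with self.lock:
                    last = self.seen.get(digest)
                    if last is not None and now - last < self.ttl:
                        return False        # seen recently: skip processing
                    self.seen[digest] = now
                    # Occasional cleanup so expired entries don't accumulate forever.
                    if len(self.seen) % 10000 == 0:
                        self.seen = {d: t for d, t in self.seen.items() if now - t < self.ttl}
                    return True

        f = DuplicateFilter(ttl_seconds=60)
        print(f.should_process("payload"))  # True  (first time)
        print(f.should_process("payload"))  # False (duplicate within 60s)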

    Read the article

  • Best way to limit results in MySQL with user subcategories

    - by JM4
    I am trying to essentially solve for the following:

    1) Find all users in the system who ONLY have programID 1.
    2) Find all users in the system who have programID 1 AND any other active program.

    My table structures (in very simple terms) are as follows:

        users
        userID | Name
        ================
        1      | John Smith
        2      | Lewis Black
        3      | Mickey Mantle
        4      | Babe Ruth
        5      | Tommy Bahama

        plans
        ID | userID | plan | status
        ---------------------------
        1  | 1      | 1    | 1
        2  | 1      | 2    | 1
        3  | 1      | 3    | 1
        4  | 2      | 1    | 1
        5  | 2      | 3    | 1
        6  | 3      | 1    | 0
        7  | 3      | 2    | 1
        8  | 3      | 3    | 1
        9  | 3      | 4    | 1
        10 | 4      | 2    | 1
        11 | 4      | 4    | 1
        12 | 5      | 1    | 1

    I know I can easily find all members with a specific plan with something like the following:

        SELECT *
        FROM users a
        JOIN plans b ON (a.userID = b.userID)
        WHERE b.plan = 1 AND b.status = 1

    but this will only tell me which users have an 'active' plan 1. How can I tell who ONLY has plan 1 (in this case, only userID 5), and how can I tell who has plan 1 AND any other active plan?

    Update: This is not to get a count; I will actually need the original member information, including all the plans they have, so a COUNT(*) response may not be what I'm trying to achieve.
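    The grouping logic behind both questions can be sketched over the sample data like this (Python, purely to illustrate the set logic; an SQL version would typically express the same idea with GROUP BY/HAVING):

        # Group each user's active plans, then apply the two conditions from the question.
        from collections import defaultdict

        plans = [  # (userID, plan, status) from the example table
            (1, 1, 1), (1, 2, 1), (1, 3, 1),
            (2, 1, 1), (2, 3, 1),
            (3, 1, 0), (3, 2, 1), (3, 3, 1), (3, 4, 1),
            (4, 2, 1), (4, 4, 1),
            (5, 1, 1),
        ]

        active = defaultdict(set)
        for user_id, plan, status in plans:
            if status == 1:
                active[user_id].add(plan)

        only_plan_1 = [u for u, p in active.items() if p == {1}]
        plan_1_and_more = [u for u, p in active.items() if 1 in p and len(p) > 1]

        print(only_plan_1)      # [5]
        print(plan_1_and_more)  # [1, 2]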

    Read the article

  • OpenCV video frame not showing Sobel output

    - by user1016950
    This is a continuation question from "Opencv video frame giving me an error"; I think I closed that one off - I'm new to Stack Overflow. I have the code below, and I'm trying to see its Sobel edge image. The program runs, but the output is just a grey screen where the cursor disappears if I mouse over it. Does anyone see the error, or is it a misunderstanding about the data structures I'm using?

        IplImage *frame, *frame_copy = 0;

        // capture frames from video
        CvCapture *capture = cvCaptureFromFile("lightinbox1.avi");

        // allows access to video properties
        cvQueryFrame(capture);

        // get the number of frames
        int nframe = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_COUNT);

        // name window
        cvNamedWindow("video:", 1);

        // start loop
        for (int i = 0; i < nframe; i++) {
            // prepare capture frame extraction
            cvGrabFrame(capture);
            cout << "We are on frame " << i << "\n";

            // get this frame
            frame = cvRetrieveFrame(capture);
            con2txt(frame);
            frame_copy = cvCreateImage(cvSize(frame->width, frame->height), IPL_DEPTH_8U, frame->nChannels);

            // show and destroy frame
            cvCvtColor(frame, frame, CV_RGB2GRAY);

            // create Sobel output
            frame_copy1 = cvCreateImage(cvSize(frame->width, frame->height), IPL_DEPTH_16S, 1);
            cvSobel(frame_copy, frame_copy1, 2, 2, 3);
            cvShowImage("video:", frame_copy1);
            cvWaitKey(33);
        }
        cvReleaseCapture(&capture);
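    For comparison, the usual per-frame Sobel pattern with the newer cv2 Python API looks like the sketch below. It is not a fix for the exact code above, but note the conversion back to 8-bit before imshow, since displaying a 16-bit signed image directly can render incorrectly:

        # Per-frame Sobel with the cv2 API (Python sketch; same hypothetical file name).
        import cv2

        cap = cv2.VideoCapture("lightinbox1.avi")
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sobel = cv2.Sobel(gray, cv2.CV_16S, 1, 0, ksize=3)  # x-gradient, signed 16-bit
            shown = cv2.convertScaleAbs(sobel)                  # back to displayable 8-bit
            cv2.imshow("video:", shown)
            if (cv2.waitKey(33) & 0xFF) == 27:                  # Esc to quit
                break
        cap.release()
        cv2.destroyAllWindows()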

    Read the article

  • [javascript] Populating jsTree based on XML data uploaded to server folder

    - by PFM
    tl;dr: How can I populate jsTree based on a folder location instead of an exact XML URL?

    I'm looking for a little direction on this project. Currently I am trying to copy the file structures of hard drives as XML files and recreate them using jsTree on the webserver, for a completely independent version of the file structure. I have a Python script that outputs XML files formatted for jsTree and automatically uploads them to a folder on the server. The problem is that I am now a little lost, because I have to manually enter each XML file into the jsTree code for it to display, so I have multiple entries like this:

        $("#tree1")
            .jstree({
                "plugins" : [ "themes", "xml_data", "ui", "search", "types" ],
                "xml_data" : {
                    "ajax" : { "url" : "./XML_DATA/DRIVE1.xml" },
                    "xsl" : "nest"
                },

    I see in the documentation that instead of being populated by a direct file, the folders are populated by "server.php", but nowhere in the PHP code does it point to any directories or files. After considering the problem I thought of a few solutions and could use some advice on them:

    - Should I be trying to write PHP code to automatically look through my XML_DATA folder to upload each XML file?
    - Should I just upload all the XML to MySQL and populate my tree based on that?
    - Should the JavaScript be the code looking through the server's folder for XML files?

    All the XML is formed the same way, but the number of XML files on the server will increase and will have to be refreshed, as they will be overwritten with changes. Any direction would be appreciated, thanks.

    Read the article

  • Concept: Information Into Memory Location.

    - by Richeve S. Bebedor
    I am having trouble conceptualizing an algorithm that transforms any piece of information or data into a specific, appropriate memory location in whatever data structure I devise.

    To give you an idea: I have a JPanel instance and another Container instance of some subtype (note this is in Java because I love this language), and I collect those instances into a data structure that is not specific to those types but applicable to any type of object. My procedure for fetching that data again is to extract object-specific features, common in category to all objects in the data structure, and transform them into an integer memory location (as specifically as possible), or into whatever type of data suits this transformation. I can then access that memory location without further sorting or O(n) searching (which I know is preferable, but I wanted to do it my own way XD). The data structure can be of any type: binary tree, linked list, array, set, and the like XD. What is important is that I don't need successive comparison and analysis of data just to locate information in big structures.

    To give you a technical idea: I have an array DS that contains a JLabel instance with the specific name "HelloWorld", but DS also contains many other types of objects. This JLabel happens to be at index [124324], and reaching that location with any kind of searching algorithm is conceivably slow, especially since the data structure used is an array (please disregard the efficiency of the chosen data structure; I just want to explain the concept XD). I want to equate "HelloWorld" to 124324 using a conceptual function applicable to all data types, so that I can do a direct lookup with DS[extractLocation("HelloWorld")] to get that JLabel instance.

    I know this may sound crazy, but I want to test my concept of a non-sorting, feature-extracting search for any data structure. My main problem is how to transform the information to be stored into the memory location where it was stored.
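    What is being described here - mapping a key such as "HelloWorld" straight to a storage index without scanning - is essentially what a hash table does. A toy sketch (Python; collisions are simply ignored, and handling them is exactly the part a real implementation has to add):

        # Toy "feature -> location" mapping via hashing; a dict/HashMap does this properly.
        class SimpleHashStore:
            def __init__(self, capacity=1024):
                self.slots = [None] * capacity

            def _location(self, key):
                # hash() maps any hashable key to an int; modulo folds it into an index.
                return hash(key) % len(self.slots)

            def put(self, key, value):
                self.slots[self._location(key)] = (key, value)   # collisions overwrite here

            def get(self, key):
                entry = self.slots[self._location(key)]
                return entry[1] if entry and entry[0] == key else None

        store = SimpleHashStore()
        store.put("HelloWorld", "<JLabel instance>")
        print(store.get("HelloWorld"))   # direct lookup, no scan over the array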

    Read the article

  • Unit Testing Interfaces in Python

    - by Nicholas Mancuso
    I am currently learning Python in preparation for a class over the summer, and have gotten started by implementing different types of heaps and priority-based data structures. I began to write a unit test suite for the project, but ran into difficulties in creating a generic unit test that only tests the interface and is oblivious to the actual implementation. I am wondering if it is possible to do something like this:

        suite = HeapTestSuite(BinaryHeap())
        suite.run()
        suite = HeapTestSuite(BinomialHeap())
        suite.run()

    What I am currently doing just feels... wrong (multiple inheritance? ACK!):

        class TestHeap:
            def reset_heap(self):
                self.heap = None

            def test_insert(self):
                self.reset_heap()
                # test that insert doesn't throw an exception...
                for x in self.inseq:
                    self.heap.insert(x)

            def test_delete(self):
                # assert we get the first value we put in
                self.reset_heap()
                self.heap.insert(5)
                self.assertEquals(5, self.heap.delete_min())
                # harder test: put a sequence in and check that it comes out right
                self.reset_heap()
                for x in self.inseq:
                    self.heap.insert(x)
                for x in xrange(len(self.inseq)):
                    val = self.heap.delete_min()
                    self.assertEquals(val, x)

        class BinaryHeapTest(TestHeap, unittest.TestCase):
            def setUp(self):
                self.inseq = range(99, -1, -1)
                self.heap = BinaryHeap()

            def reset_heap(self):
                self.heap = BinaryHeap()

        class BinomialHeapTest(TestHeap, unittest.TestCase):
            def setUp(self):
                self.inseq = range(99, -1, -1)
                self.heap = BinomialHeap()

            def reset_heap(self):
                self.heap = BinomialHeap()

        if __name__ == '__main__':
            unittest.main()
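    One arrangement that gets closer to the HeapTestSuite(BinaryHeap()) style from the question is to build the TestCase from a factory and load it into a suite. A sketch (assuming the BinaryHeap and BinomialHeap classes above, with insert and delete_min):

        # Build one suite per heap class from a shared set of interface tests.
        import unittest

        def make_heap_suite(heap_factory):
            """Return a TestSuite that runs the interface tests against one heap class."""
            class HeapTest(unittest.TestCase):
                def setUp(self):
                    self.heap = heap_factory()

                def test_roundtrip(self):
                    for x in range(99, -1, -1):
                        self.heap.insert(x)
                    for expected in range(100):
                        self.assertEqual(expected, self.heap.delete_min())
            return unittest.TestLoader().loadTestsFromTestCase(HeapTest)

        if __name__ == '__main__':
            suite = unittest.TestSuite()
            for factory in (BinaryHeap, BinomialHeap):   # the classes from the question
                suite.addTests(make_heap_suite(factory))
            unittest.TextTestRunner().run(suite)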

    Read the article

  • What's wrong with output parameters?

    - by Chris McCall
    Both in SQL and C#, I've never really liked output parameters. I never passed parameters ByRef in VB6, either. Something about counting on side effects to get something done just bothers me. I know they're a way around not being able to return multiple results from a function, but a rowset in SQL or a complex datatype in C# and VB work just as well, and seem more self-documenting to me. Is there something wrong with my thinking, or are there resources from authoritative sources that back me up? What's your personal take on this and why? What can I say to colleagues that want to design with output parameters that might convince them to use different structures? EDIT: interesting turn- the output parameter I was asking this question about was used in place of a return value. When the return value is "ERROR", the caller is supposed to handle it as an exception. I was doing that but not pleased with the idea. A coworker wasn't informed of the need to handle this condition and as a result, a great deal of money was lost as the procedure failed silently!

    Read the article

  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit-count for each node, and then the deepest nodes with counts greater than one are the result set I'm looking for.

    I have a really, really long string (hundreds of megabytes). I have about 1 GB of RAM. This is why building a suffix trie with counting data is too inefficient space-wise to work for me. To quote Wikipedia's Suffix tree:

        storing a string's suffix tree typically requires significantly more space than storing the string itself. The large amount of information in each edge and node makes the suffix tree very expensive, consuming about ten to twenty times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of four, and researchers have continued to find smaller indexing structures.

    And that was Wikipedia's comments on the tree, not the trie.

    How can I find long repeated sequences in such a large amount of data, and in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)?

    (Some Wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially Longest repeated substring problem ;-) )
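    One memory-friendlier direction, sketched below in Python: binary-search the answer length, checking each candidate length with a rolling hash over fixed-size windows. The set of hashes is itself O(n) and hash collisions are not re-verified here, so for an input of hundreds of megabytes treat this as a starting point (a compiled implementation or a suffix-array library would be more realistic):

        # Longest repeated substring via binary search on length + Rabin-Karp windows.
        def has_repeat_of_length(s, k):
            if k == 0:
                return True
            mod, base = (1 << 61) - 1, 256
            top = pow(base, k - 1, mod)      # base^(k-1) mod p, for removing old chars
            h, seen = 0, set()
            for i, ch in enumerate(s):
                h = (h * base + ord(ch)) % mod
                if i >= k:                   # drop the character leaving the window
                    h = (h - ord(s[i - k]) * top * base) % mod
                if i >= k - 1:               # window is full: check for a repeat
                    if h in seen:
                        return True
                    seen.add(h)
            return False

        def longest_repeat_length(s):
            lo, hi = 0, len(s) - 1
            while lo < hi:                   # largest k with a repeated k-length window
                mid = (lo + hi + 1) // 2
                if has_repeat_of_length(s, mid):
                    lo = mid
                else:
                    hi = mid - 1
            return lo

        print(longest_repeat_length("banana"))   # 3  ("ana")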

    Read the article

  • Python lists/arrays: disable negative indexing wrap-around

    - by wim
    While I find the negative number wraparound (i.e. A[-2] indexing the second-to-last element) extremely useful in many cases, there are often use cases I come across where it is more of an annoyance than helpful, and I find myself wishing for an alternate syntax to use when I would rather disable that particular behaviour. Here is a canned 2D example below, but I have had the same peeve a few times with other data structures and in other numbers of dimensions.

        import numpy as np

        A = np.random.randint(0, 2, (5, 10))

        def foo(i, j, r=2):
            '''sum of neighbours within r steps of A[i,j]'''
            return A[i-r:i+r+1, j-r:j+r+1].sum()

    In the slice above I would rather that any negative number given to the slice be treated the same as None is, rather than wrapping to the other end of the array. Because of the wrapping, the otherwise nice implementation above gives incorrect results at boundary conditions and requires some sort of patch like:

        def ugly_foo(i, j, r=2):
            def thing(n):
                return None if n < 0 else n
            return A[thing(i-r):i+r+1, thing(j-r):j+r+1].sum()

    I have also tried zero-padding the array or list, but it is still inelegant (it requires adjusting the lookup indices accordingly) and inefficient (it requires copying the array).

    Am I missing some standard trick or elegant solution for slicing like this? I noticed that Python and numpy already handle the case where you specify too large a number nicely - that is, if the index is greater than the shape of the array, it behaves the same as if it were None.
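    The clamping idea behind the thing() helper can also be written inline with max(), which keeps the slice on one line (a small sketch, equivalent in behaviour for the lower bound):

        # Clamp the lower slice bound to 0 so negative starts don't wrap around.
        import numpy as np

        A = np.random.randint(0, 2, (5, 10))

        def foo(i, j, r=2):
            '''Sum of neighbours within r steps of A[i, j], without wrap-around.'''
            return A[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1].sum()

        print(foo(0, 0))   # only the top-left corner neighbourhood is summed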

    Read the article

  • TFS merging for users that are used to VSS

    - by JacksonD
    I just migrated a team of 7 developers from VSS to TFS. I migrated all of their code into a DEV folder which I then branched into a QA folder (which I branched into a PROD folder). The developers usually don't work on the same files, but there are some shared utility classes. All of the code is for a large ASP.NET web site. When the developers are ready to merge from DEV to QA, they only want to merge their changes. For example, let's say that Developer1 has been working on a project for the last 3 months and he's ready to merge all of his code into QA. However, Developer2 has been working on a different project for the last 2 months which is not ready to be merged. Developer1 and Developer2's changes are not in any way dependent on each other, but they are not separated into different folder structures and they each regularly do a get latest. There doesn't seem to be a way for developer1 to only merge his changes without also merging all of developer2's changes. Currently, developer1 is going through the Pending Changes window and 'Undoing Pending Changes' for all of Developer2's changes, but this is time consuming. They could merge each file individually, but this is also time consuming. Is there an easier way? I am going to have a coronary if I hear one more person explain how much easier it was to work in VSS.

    Read the article

  • Why does Go not seem to recognize size_t in a C header file?

    - by Graeme Perrow
    I am trying to write a Go library that will act as a front-end for a C library. If one of my C structures contains a size_t, I get compilation errors. AFAIK size_t is a built-in C type, so why wouldn't Go recognize it?

    My header file looks like:

        typedef struct mystruct {
            char * buffer;
            size_t buffer_size;
            size_t * length;
        } mystruct;

    and the errors I'm getting are:

        gcc failed:
        In file included from <stdin>:5:
        mydll.h:4: error: expected specifier-qualifier-list before 'size_t'
        on input:
        typedef struct { char *p; int n; } _GoString_;
        _GoString_ GoString(char *p);
        char *CString(_GoString_);
        #include "mydll.h"

    I've even tried adding either of

        // typedef unsigned long size_t

    or

        // #define size_t unsigned long

    in the .go file before the // #include, and then I get "gcc produced no output". I have seen these questions, and looked over the example with no success.

    Read the article

  • C++ gdb GUI

    - by HappyDude
    Briefly: Does anyone know of a GUI for gdb that brings it on par with, or close to, the feature set you get in the more recent versions of Visual C++?

    In detail: As someone who has spent a lot of time programming in Windows, one of the larger stumbling blocks I've found whenever I have to code C++ in Linux is that debugging anything using command-line gdb takes me several times longer than it does in Visual Studio, and it does not seem to be getting better with practice. Some things are just easier or faster to express graphically. Specifically, I'm looking for a GUI that:

    - Handles all the basics like stepping over & into code, watch variables and breakpoints
    - Understands and can display the contents of complex & nested C++ data types
    - Doesn't get confused by, and preferably can intelligently step through, templated code and data structures while displaying relevant information such as the parameter types
    - Can handle threaded applications and switch between different threads to step through or view the state of
    - Can handle attaching to an already-started process or reading a core dump, in addition to starting the program up in gdb

    If such a program does not exist, then I'd like to hear about experiences people have had with programs that meet at least some of the bullet points. Does anyone have any recommendations?

    Edit: Listing out the possibilities is great, and I'll take what I can get, but it would be even more helpful if you could include in your responses: (a) whether or not you've actually used this GUI and, if so, what positive/negative feedback you have about it; (b) if you know, which of the above-mentioned features are/aren't supported. Lists are easy to come by; sites like this are great because you can get an idea of people's personal experiences with applications.

    Read the article

  • Vector Usage in MPI(C++)

    - by lsk1985
    I am new to MPI programming and still learning. I was successful up to creating derived data types by defining structures. Now I want to include a vector in my structure and send the data across the processes. For example:

        struct Structure {
            // Constructor
            Structure() : X(nodes), mass(nodes), ac(nodes)
            {
                // code to calculate the mass and accelerations
            }

            // Destructor
            ~Structure() {}

            // Variables
            double radius;
            double volume;
            vector<double> mass;
            vector<double> area;
            // and some other variables

            // Methods to calculate some physical properties

    Now, using MPI, I want to send the data in the structure across the processes. Is it possible for me to create the MPI_type_struct with the vectors included and send the data? I tried reading through forums, but I am not able to get a clear picture from the responses given there. I hope I will be able to get a clear idea or approach for sending the data.

    PS: I can send the data individually, but it's an overhead to send the data with many MPI_Send/Receive calls if we consider a very large domain (say 10000*10000).

    Read the article

  • Linq-to-sql join/where?

    - by Curtis White
    I have the following table structures:

        Users
            id

        Types
            id
            isBool

        UsersTypes
            userid
            types

    I want to select all the UsersTypes based on id and isBool. I tried this query:

        var q = from usertype in usertypes
                from type in types
                where type.isBool == false
                where userstypes.user == id
                select usertype;

    But this did not work as expected. My questions are:

    - Why? Is there any difference between using the join ... on syntax and where, or between two where clauses vs. where cond1 && cond2? My understanding is the query optimizer will optimize.
    - Is there any difference in using where cond1 == var1 && cond2 == var2 with and without parentheses? It seems peculiar that it is possible to build this without parentheses.
    - What type of query do I need in this case? I can see that I could do a subquery or use a group, but I'm not 100% sure if it is required. An example might be helpful. I'm thinking a subquery may be required in this case.

    Read the article

  • Creating a join based on data from other tables...

    - by Workshop Alex
    I'm dealing with a database structure that can be defined as "illogical". It has about 100 different schemas, with different table structures in each schema. The only common factor is a "Version" table in each schema containing about 4 fields. (Thus, there are about 100 Version tables in the database.) There's also another table (a view, actually) containing a list of all the schemas in the database that have a Version table.

    I need a stored procedure that walks through all the schemas and selects all data from the Version table, adding the schema name as a fifth field to the result. Basically, this stored procedure is to return a list of all version records per schema.

    My idea: first walk through the schema list to create one new SQL statement that will JOIN all the schema.Version tables into one SQL statement. Then I return the result of that query. How to do this? Or does anyone have a better suggestion? (No, redesigning the structure is NOT an option.)

    Read the article

  • Python: How to transfer varying-length arrays over a network connection

    - by Devin
    Hi, I need to transfer an array of varying length in which each element is a tuple of two integers. As an example:

        path = [(1,1),(1,2)]
        path = [(1,1),(1,2),(2,2)]

    I am trying to use pack and unpack; however, since the array is of varying length, I don't know how to create a format such that both sides know the format. I was trying to turn it into a single string with delimiters, such as:

        msg = 1&1~1&2~
        sendMsg = pack("s", msg)

    or

        sendMsg = pack("s", str(msg))

    and on the receiving side:

        path = unpack("s", msg)

    but that just prints 1 in this case. I was also trying to send 4 integers as well, which send and receive fine, so long as I don't include the extra string representing the path:

        sendMsg = pack("hhhh", p.direction[0], p.direction[1], p.id, p.health)

    and on the receive side:

        x, y, id, health = unpack("hhhh", msg)

    The first was for illustration, as I was trying to send the format "hhhhs", but either way the path doesn't come through properly. I will also be looking at sending a 2D array of ints, but I can't seem to figure out how to send these more 'complex' structures across the network. Thank you for your help.
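    One common pattern, sketched below: length-prefix the message, sending a count first and then a fixed-size record per point, so the receiver knows exactly how much to unpack (the field sizes here are illustrative assumptions):

        # Length-prefixed packing of a variable-length list of (x, y) pairs.
        import struct

        def pack_path(path):
            # '!' = network byte order; 'H' = unsigned 16-bit count; then 2 shorts per point
            return struct.pack("!H", len(path)) + b"".join(
                struct.pack("!hh", x, y) for x, y in path)

        def unpack_path(data):
            (count,) = struct.unpack_from("!H", data, 0)
            return [struct.unpack_from("!hh", data, 2 + 4 * i) for i in range(count)]

        msg = pack_path([(1, 1), (1, 2), (2, 2)])
        print(unpack_path(msg))   # [(1, 1), (1, 2), (2, 2)]

    For more nested data such as a 2D array, serializing with json or pickle and length-prefixing the resulting bytes is often the simpler route.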

    Read the article

  • why does vector.size() read in one line too little?

    - by ace
    When running the following code, the number of lines read is one less than there actually are (whether the input file is main itself, or some other file). Why is this, and how can I change that fact (besides just adding 1)?

        #include <fstream>
        #include <iostream>
        #include <string>
        #include <vector>
        using namespace std;

        int main()
        {
            // open text file for input
            string file_name;
            cout << "please enter file name: ";
            cin >> file_name;

            // associate the input file stream with a text file
            ifstream infile(file_name.c_str());

            // error checking for a valid filename
            if ( !infile )
            {
                cerr << "Unable to open file " << file_name << " -- quitting!\n";
                return( -1 );
            }
            else
                cout << "\n";

            // some data structures to perform the function
            vector<string> lines_of_text;
            string textline;

            // read in text file, line by line
            while (getline( infile, textline, '\n' ))
            {
                // add the new element to the vector
                lines_of_text.push_back( textline );
                // print the 'back' vector element - see the STL documentation
                cout << "line read: " << lines_of_text.back() << "\n";
            }

            cout << lines_of_text.size();
            return 0;
        }

    Read the article

  • In C, when do structure names have to be included in structure initializations and definitions?

    - by Tyler
    I'm reading The C Programming Language by K&R, and in the section on structures I came across these code snippets:

        struct maxpt = { 320, 200 };

    and

        /* addpoints: add two points */
        struct addpoint(struct point p1, struct point p2)
        {
            p1.x += p2.x;
            p1.y += p2.y;
            return p1;
        }

    In the first case, it looks like it's assigning the values 320 and 200 to the members of the variable maxpt, but I noticed the name of the struct type is missing (shouldn't it be "struct struct_name maxpt = {320, 200}"?). In the second case, the function return type is just "struct" and not "struct name_of_struct". I don't get why they don't include the struct names - how does it know what particular type of structure it's dealing with?

    My confusion is compounded by the fact that in previous snippets they do include the structure name, such as in the return type for the following function, where it's "struct point" and not just "struct". Why do they include the name in some cases and not in others?

        /* makepoint: make a point from x and y components */
        struct point makepoint(int x, int y)
        {
            struct point temp;
            temp.x = x;
            temp.y = y;
            return temp;
        }

    Read the article

  • Java: How does ArrayList manage memory?

    - by cka3o4nik
    In my Data Structures class we have studied the Java ArrayList class and how it grows the underlying array when a user adds more elements. That is understood. However, I cannot figure out how exactly this class frees up memory when lots of elements are removed from the list. Looking at the source, there are three methods that remove elements:

        public E remove(int index) {
            RangeCheck(index);
            modCount++;
            E oldValue = (E) elementData[index];
            int numMoved = size - index - 1;
            if (numMoved > 0)
                System.arraycopy(elementData, index+1, elementData, index, numMoved);
            elementData[--size] = null; // Let gc do its work
            return oldValue;
        }

        public boolean remove(Object o) {
            if (o == null) {
                for (int index = 0; index < size; index++)
                    if (elementData[index] == null) {
                        fastRemove(index);
                        return true;
                    }
            } else {
                for (int index = 0; index < size; index++)
                    if (o.equals(elementData[index])) {
                        fastRemove(index);
                        return true;
                    }
            }
            return false;
        }

        private void fastRemove(int index) {
            modCount++;
            int numMoved = size - index - 1;
            if (numMoved > 0)
                System.arraycopy(elementData, index+1, elementData, index, numMoved);
            elementData[--size] = null; // Let gc do its work
        }

    None of them reduces the datastore array. I even started questioning whether memory is ever freed up, but empirical tests show that it is. So there must be some other way it is done, but where and how? I checked the parent classes as well, with no success.

    Read the article

  • Is a python dictionary the best data structure to solve this problem?

    - by mikip
    Hi, I have a number of processes running which are controlled by remote clients. A TCP server controls access to these processes, with only one client per process. The processes are given an id number in the range of 0 to n-1, where 'n' is the number of processes.

    I use a dictionary to map this id to the client socket's file descriptor. On startup I populate the dictionary with the ids as keys and a socket fd of 'None' for the values, i.e. no clients, and all processes are available. When a client connects, I map the id to the socket's fd. When a client disconnects, I set the value for this id back to None, i.e. the process is available again.

    So every time a client connects, I have to check each entry in the dictionary for a process which has a socket fd entry of None. If there is one, the client is allowed to connect. This solution does not seem very elegant; are there other data structures which would be more suitable for solving this? Thanks
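    One way to avoid the scan is to keep a separate pool of free ids next to the dictionary, for example a set (or a collections.deque). A small sketch of the idea (the class and names are illustrative):

        # Keep a set of free ids so attaching a client never scans the whole dict.
        class ProcessPool:
            def __init__(self, n):
                self.client_for = {i: None for i in range(n)}   # id -> socket fd (or None)
                self.free = set(range(n))                       # ids with no client attached

            def attach(self, sock):
                if not self.free:
                    return None                 # every process already has a client
                pid = self.free.pop()           # O(1), no scan over the dictionary
                self.client_for[pid] = sock
                return pid

            def detach(self, pid):
                self.client_for[pid] = None
                self.free.add(pid)

        pool = ProcessPool(4)
        pid = pool.attach("fake-socket")        # hypothetical stand-in for a socket object
        print(pid, pool.free)                   # e.g. 0 {1, 2, 3}
        pool.detach(pid)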

    Read the article

  • C++: Conceptual problem in designing an interpreter

    - by sub
    I'm programming an interpreter for an experimental programming language (educational, fun, ...). So far everything has gone well (tokenizer & parser), but I'm having a huge problem with some of the data structures in the part that actually runs the tokenized and parsed code.

    My programming language basically has only two types, int and string, and they are represented as C++ strings (the std class) and ints. Here is a short version of the data structure that I use to pass values around:

        enum DataType { Null, Int, String };

        class Symbol
        {
        public:
            string identifier;
            DataType type;
            string stringValue;
            int intValue;
        };

    I can't use a union because string doesn't allow me to. The structure above is beginning to give me a headache. I have to scatter code like this everywhere in order to make it work, and it is beginning to grow unmaintainable:

        if( mySymbol.type == Int )
        {
            mySymbol.intValue = 1234;
        }
        else
        {
            mySymbol.stringValue = "abcde";
        }

    I use the Symbol data structure for variables, return values of functions, and the general representation of values in the programming language. Is there any better way to solve this? I hope so!

    Read the article
