Search Results

Search found 13869 results on 555 pages for 'memory dump'.


  • Silverlight performance with many loaded controls

    - by gius
    I have a SL application with many DataGrids (from Silverlight Toolkit), each on its own view. If several DataGrids are opened, changing between views (TabItems, for example) takes a long time (a few seconds) and freezes the whole application (UI thread). The more DataGrids are loaded, the longer the change takes. The DataGrids that slow the UI change might be in other places in the app and not even visible at that moment, but once they are opened (and loaded with data), they slow the showing of other DataGrids. Note that the DataGrids are NOT disposed and then recreated again; they remain in memory, and only their parent control is hidden and made visible again.

    I have profiled the application. It shows that agcore.dll's SetValue function is the bottleneck. Unfortunately, debug symbols are not available for this Silverlight native library responsible for drawing. The problem is not in the DataGrid control - I tried to replace it with XCeed's grid and the performance when changing views is even worse. Do you have any idea how to solve this problem? Why do more opened controls slow down other controls? I have created a sample that shows this issue: http://cenud.cz/PerfTest.zip

    UPDATE: Using the VS11 profiler on the sample provided suggests that the problem could be in MeasureOverride being called many times (for each DataGridCell, I guess). But still, why is it slower as more controls are loaded elsewhere? Is there a way to improve the performance?

    Read the article

  • Aligning music notes using String matching algorithms or Dynamic Programming

    - by Dolphin
    Hi, I need to compare 2 sets of musical pieces: a playing, taken in MIDI format with note details extracted and saved in a database table, against sheet music, taken in XML format. When evaluating the playing against the sheet music (i.e. note details - pitch, duration, rhythm), note alignment needs to be done, to identify missed/extra/incorrect/swapped notes that differ from the reference (sheet music) notes. I have about 1800-2500 notes in one piece (it can be even more with polyphonic music; right now I'm doing it for monophonic). So will I have to load all of these into an array? Will that overload memory or cause a stack overflow? There are string matching algorithms like KMP and Boyer-Moore, but note alignment can also be done through dynamic programming. How can I use dynamic programming to approach this? What are the available algorithms? Is it about approximate string matching? Which approach is more productive: string matching algorithms like Boyer-Moore, or dynamic programming? How can I assess which is more effective? Greatly appreciate any insight or suggestions. Thanks in advance
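
    For context, note alignment by dynamic programming is essentially global sequence alignment (Needleman-Wunsch, i.e. edit distance with traceback). A minimal sketch in Java, assuming notes are reduced to integer pitch values and using illustrative, untuned costs; at 2500 notes the full table is about 2500 x 2500 ints, roughly 25 MB, so it fits in memory and, being iterative, carries no stack-overflow risk:

        // Global alignment of a played piece against a reference piece.
        // O(n*m) time and memory; tracing back through d labels each position
        // as a match, mismatch (incorrect), insertion (extra), or deletion (missed).
        public class NoteAligner {
            static final int GAP = 1;      // assumed cost of a missed or extra note
            static final int MISMATCH = 1; // assumed cost of an incorrect note

            public static int alignmentCost(int[] played, int[] reference) {
                int n = played.length, m = reference.length;
                int[][] d = new int[n + 1][m + 1];
                for (int i = 0; i <= n; i++) d[i][0] = i * GAP; // all notes extra
                for (int j = 0; j <= m; j++) d[0][j] = j * GAP; // all notes missed
                for (int i = 1; i <= n; i++) {
                    for (int j = 1; j <= m; j++) {
                        int sub = d[i - 1][j - 1]
                                + (played[i - 1] == reference[j - 1] ? 0 : MISMATCH);
                        int del = d[i - 1][j] + GAP; // extra played note
                        int ins = d[i][j - 1] + GAP; // missed reference note
                        d[i][j] = Math.min(sub, Math.min(del, ins));
                    }
                }
                return d[n][m];
            }
        }

    Exact matchers such as KMP or Boyer-Moore only answer "where does this pattern occur"; they do not produce the missed/extra/incorrect classification, which is exactly what the alignment traceback gives.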

    Read the article

  • error: invalid type argument of '->' (have 'struct node')

    - by Roshan S.A
    Why can't I access the pointer "Cells" like an array? I have allocated the appropriate memory, so why won't it act like an array here? It works like an array for a pointer to a basic data type.

        #include <stdio.h>
        #include <stdlib.h>
        #include <ctype.h>
        #define MAX 10

        struct node {
            int e;
            struct node *next;
        };
        typedef struct node *List;
        typedef struct node *Position;

        struct Hashtable {
            int Tablesize;
            List Cells;
        };
        typedef struct Hashtable *HashT;

        HashT Initialize(int SIZE, HashT H)
        {
            int i;
            H = (HashT)malloc(sizeof(struct Hashtable));
            if (H != NULL) {
                H->Tablesize = SIZE;
                printf("\n\t%d", H->Tablesize);
                H->Cells = (List)malloc(sizeof(struct node) * H->Tablesize);
                /* should it not act like an array from here on? */
                if (H->Cells != NULL) {
                    for (i = 0; i < H->Tablesize; i++) {
                        /* the following lines are the ones that throw the error:
                           Cells is a struct node *, so Cells[i] is a struct node,
                           not a pointer - hence '->' is rejected */
                        H->Cells[i]->next = NULL;
                        H->Cells[i]->e = i;
                        printf("\n %d", H->Cells[i]->e);
                    }
                }
            } else
                printf("\nError! Out of Space");
            return H; /* needed: main assigns the result of Initialize */
        }

        int main(void)
        {
            HashT H;
            H = Initialize(10, H);
            return 0;
        }

    The error I get is as in the title: error: invalid type argument of '->' (have 'struct node').

    Read the article

  • Which MySQL Fork/Version to Pick??

    - by Drew
    As most of you know, Sun acquired MySQL (and later Oracle acquired Sun), and during these acquisitions there was a lot of FUD in the MySQL community, which resulted in the creation of various forks. Today we have MySQL from MySQL, Percona (XtraDB) MySQL, OurDelta MySQL, MariaDB, and Drizzle, to name a few. Which brings us to the source of the problem. We are in the process of upgrading our databases (hardware/software) and I would like to know which one of the forks I should go with. Each has its own set of pros/cons. We are currently using MySQL 5.0.x from MySQL/Linux on an 8-core machine. Our new hardware is a monster with 32 cores and 32GB of memory connecting to fast NetApp storage via FC. I would like to stick with MySQL from MySQL, but I have heard horror stories about how badly MySQL 5.1 performs on many cores. I have also heard that MySQL 5.4 performs better on multi-core machines, but that's still not production ready. In addition, I have heard a lot of good things about Percona builds. This is what I know so far:
    - MySQL 5.1 from MySQL: reliable choice, but doesn't scale well on a big machine
    - Percona: scales well, good backing company; I don't have much experience with it
    - MariaDB: don't know much about it besides that it was founded by original MySQL developers (including Monty)
    - OurDelta: don't know much
    - Drizzle: mostly optimized for cloud computing
    I would like to know the general notion about this problem. Which build/version should I go with? How are you guys picking your builds/versions? Thanks!

    Read the article

  • Objective C code to handle large amount of data processing in iPhone

    - by user167662
    I have the following code that takes 14 MB or more of image data encoded as base64 strings and converts them to JPEG before writing to a file on the iPhone. It crashes my program with the following error:

        Program received signal: “0”.
        warning: check_safe_call: could not restore current frame

    If I tweak my program it can process a few more images before the error appears again. My code is as follows:

        // parameters is an array where the fourth element contains a list of images
        // in base64-encoded strings
        NSMutableArray *imageStrList = (NSMutableArray *)[parameters objectAtIndex:5];
        while (imageStrList.count != 0) {
            NSString *imgString = [imageStrList objectAtIndex:0];
            // Create a file name using my own Utility class
            NSString *fileName = [Utility generateFileNName];
            NSData *restoredImg = [NSData decodeWebSafeBase64ForString:imgString];
            UIImage *img = [UIImage imageWithData:restoredImg];
            NSData *imgJPEG = UIImageJPEGRepresentation(img, 0.4f);
            [imgJPEG writeToFile:fileName atomically:YES];
            [imageStrList removeObjectAtIndex:0];
        }

    I tried playing around with UIImageJPEGRepresentation and found that the lower the quality value, the more images it can process, but this should not be the way. I am wondering if there is any way to free up the memory used for each image immediately after processing it, so that it can be used by the next one in line.

    Read the article

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow. I am now using Dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have any suggestion on a faster, more efficient implementation, keeping in mind I have very tough CPU and memory constraints?
    [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be important. For instance, I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read.
    [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node.
    [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point.
    (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java )
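
    Since [Edit 1] points at parser construction rather than parsing itself, one direction is to build the parser once and reuse it for every document. A minimal sketch with plain JAXP, assuming single-threaded use (DocumentBuilder is not thread-safe); whether a dom4j SAXReader instance can be reused the same way depends on the dom4j version, so treat this as a sketch rather than a drop-in fix:

        import java.io.InputStream;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        // Pays the factory lookup and builder construction cost once,
        // instead of once per (tiny) document.
        public final class SmallDocParser {
            private final DocumentBuilder builder;

            public SmallDocParser() throws Exception {
                DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
                f.setNamespaceAware(false); // the documents have no namespaces
                f.setValidating(false);
                builder = f.newDocumentBuilder();
            }

            public Document parse(InputStream in) throws Exception {
                builder.reset(); // clear any state left from the previous parse
                return builder.parse(in);
            }
        }

    This keeps XMLReaderFactory.createXMLReader out of the per-document path, which is where as much time was going as in the parse itself.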

    Read the article

  • Java: autofiltering list?

    - by Jason S
    I have a series of items arriving which are used in one of my data structures, and I need a way to keep track of those items that are retained.

        interface Item {}
        class Foo implements Item { ... }
        class Baz implements Item { ... }

        class StateManager {
            List<Foo> fooList;
            Map<Integer, Baz> bazMap;
            public List<Item> getItems();
        }

    What I want is that if I do the following:

        for (int i = 0; i < SOME_LARGE_NUMBER; ++i) {
            /* randomly do one of the following:
             * 1) put a new Foo somewhere in the fooList
             * 2) delete one or more members from the fooList
             * 3) put a new Baz somewhere in the bazMap
             * 4) delete one or more members from the bazMap
             */
        }

    then if I make a call to StateManager.getItems(), I want to get back a list of those Foo and Baz items which are still found in the fooList and the bazMap, in the order they were added. Items that were deleted or displaced from fooList and bazMap should not be in the returned list. How could I implement this? SOME_LARGE_NUMBER is large enough that I don't have the memory available to retain all the Foo and Baz items and then filter them.
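
    One way to meet the ordering requirement without retaining deleted items: keep a third, insertion-ordered structure that every add and every delete also updates. A sketch under that assumption, building on the Item interface above (the class and method names are illustrative, not part of the original design):

        import java.util.ArrayList;
        import java.util.LinkedHashSet;
        import java.util.List;
        import java.util.Set;

        // Tracks the live items in insertion order; holds references only to
        // items still present, so deleted items remain collectable.
        class StateTracker {
            private final Set<Item> liveItems = new LinkedHashSet<Item>();

            void onAdd(Item item)    { liveItems.add(item); }    // call on every insert
            void onRemove(Item item) { liveItems.remove(item); } // call on every delete

            List<Item> getItems() {
                return new ArrayList<Item>(liveItems); // iteration order == insertion order
            }
        }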

    Read the article

  • Is C++ (one of) the best languages to learn first?

    - by AlexV
    C++ has been one of the most used programming languages in the world for 25+ years. My first job as a programmer was in C++, and I coded in C++ every day for nearly 4 years. Now I do mostly PHP, but I will forever cherish this C++ background. C++ has helped me understand many "under the hood" features/behaviors/restrictions of many other (and different) programming languages like PHP and Delphi. I have been a full-time programmer for 6+ years now, and since I have a quite varied programming background I often get asked by "newbies" where to start to become a "good" programmer. I think C++ is one of the best languages to start with because it gives you real, useful experience that will last and will teach you how things work under the hood. It's not the easiest one for a newbie to learn, but in my opinion it's one that will reward you in the long term. I would like to know your opinion on this matter to add to my arguments when I guide "newbies". After this introduction, here's my question: is C++ (one of) the best languages to learn first, in your view? Since it's subjective, I've marked this question as community wiki.
    EDIT: This question is not about why Java (or C# or any other language) is better than C++ to start with; it's about what makes C++ a good choice or not a good choice to learn as one of your first languages. For example, for me C++ made me understand how memory works. Today in many languages everything is managed by the garbage collector, and some people don't even know that. I'm glad I know how it works underneath, and I think it can help you write better code.

    Read the article

  • Does The Clear Method On A Collection Release The Event Subscriptions?

    - by DaveB
    I have a collection:

        private ObservableCollection<Contact> _contacts;

    In the constructor of my class I create it:

        _contacts = new ObservableCollection<Contact>();

    I have methods to add and remove items from my collection. I want to track changes to the entities in my collection, which implement the INotifyPropertyChanged interface, so I subscribe to their PropertyChanged event.

        public void AddContact(Contact contact)
        {
            ((INotifyPropertyChanged)contact).PropertyChanged += new PropertyChangedEventHandler(Contact_PropertyChanged);
            _contacts.Add(contact);
        }

        public void AddContact(int index, Contact contact)
        {
            ((INotifyPropertyChanged)contact).PropertyChanged += new PropertyChangedEventHandler(Contact_PropertyChanged);
            _contacts.Insert(index, contact);
        }

    When I remove an entity from the collection, I unsubscribe from the PropertyChanged event. I am told this is to allow the entity to be garbage collected and not create memory issues.

        public void RemoveContact(Contact contact)
        {
            ((INotifyPropertyChanged)contact).PropertyChanged -= Contact_PropertyChanged;
            _contacts.Remove(contact);
        }

    So, I hope this is all good. Now, I need to clear the collection in one of my methods. My first thought was to call _contacts.Clear(). Then I got to wondering if this releases those event subscriptions. Would I need to create my own clear method? Something like this:

        public void ClearContacts()
        {
            // iterate over a snapshot: RemoveContact modifies _contacts, and
            // enumerating the collection directly while removing would throw
            foreach (Contact contact in new List<Contact>(_contacts))
            {
                this.RemoveContact(contact);
            }
        }

    I am hoping one of the .NET C# experts here can clear this up for me or tell me what I am doing wrong.

    Read the article

  • PHP file upload issue

    - by Varun
    I am working on a PHP-based ticket management system. While creating a ticket, one can upload an attachment. I want to put a limit (say 10 MB) per uploaded file. To implement this I plan the following:
    1. In php.ini, set post_max_size = 10M.
    2. In the PHP script which receives the POST: since the file is larger than post_max_size, $_FILES[] will be empty, but I can still check the Content-Length header and discard the upload if the size is more than 10M.
    While testing this I tried uploading a file of 1 GB and analysed the HTTP traffic, and this is what I found:
    - The entire 1 GB of data is first uploaded to the server temporarily and discarded once the HTTP request completes. I couldn't find out exactly where the file was getting saved (it was not in the temporary directory on the server), but my HTTP traffic analyzer showed that the browser did send 1 GB of data to the server.
    - The PHP script execution started only after completion of the HTTP request (i.e. after uploading the entire 1 GB).
    Now I have 2 concerns:
    a) People may exploit my server bandwidth by trying to upload large files, which I will have to discard anyway.
    b) Even worse, if someone starts uploading a huge file (say 100 GB), the entire 100 GB of data is first uploaded to the server temporarily, which means that for that period it will consume that much memory on my server.
    What's the common solution for this? Am I missing something here?

    Read the article

  • Better to use a tuple or a numpy array for storing coordinates?

    - by Ivan
    Hi, I'm porting a C++ scientific application to Python, and as I'm new to Python, some questions come to my mind:
    1) I'm defining a class that will contain the coordinates (x,y). These values will be accessed several times, but they will only be read after the class instantiation. Is it better to use a tuple or a numpy array, both memory- and access-time-wise?
    2) In some cases, these coordinates will be used to build a complex number, evaluated on a complex function, and the real part of this function will be used. Assuming that there is no way to separate the real and complex parts of this function, and the real part will have to be used at the end, maybe it is better to use complex numbers directly to store (x,y)? How bad is the overhead of the conversion from complex to real in Python? The code in C++ does a lot of these conversions, and this is a big slowdown in that code.
    3) Also, some coordinate transformations will have to be performed; the x and y values will be accessed separately, the transformation done, and the result returned. The coordinate transformations are defined in the complex plane, so is it still faster to use the components x and y directly than to rely on the complex variables?
    Thank you

    Read the article

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually?
    Note: My database in its initial state is about 250 KB and I don't expect it to grow to more than 1-2 MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view.
    Update: Here are the table definitions for the example I used in my question:

        create table Order (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )

    Read the article

  • If setUpBeforeClass() fails, test failures are hidden in PHPUnit's JUnit XML output

    - by Adam Monsen
    If setUpBeforeClass() throws an exception, no failures or errors are reported in PHPUnit's JUnit XML output. Why?
    Example test class:

        <?php
        class Test extends PHPUnit_Framework_TestCase
        {
            public static function setUpBeforeClass()
            {
                throw new \Exception('masks all failures in xml output');
            }

            public function testFoo()
            {
                $this->fail('failing');
            }
        }

    Command line:

        phpunit --verbose --log-junit out.xml Test.php

    Console output:

        PHPUnit 3.6.10 by Sebastian Bergmann.

        E

        Time: 0 seconds, Memory: 3.25Mb

        There was 1 error:

        1) Test
        Exception: masks all failures in xml output

        /tmp/pu/Test.php:6

        FAILURES!
        Tests: 0, Assertions: 0, Errors: 1.

    JUnit XML output:

        <?xml version="1.0" encoding="UTF-8"?>
        <testsuites>
          <testsuite name="Test" file="/tmp/phpunit-broken/Test.php"/>
        </testsuites>

    More info:

        $ php --version
        PHP 5.3.10-1ubuntu3.1 with Suhosin-Patch (cli) (built: May 4 2012 02:21:57)
        Copyright (c) 1997-2012 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
            with Xdebug v2.1.0, Copyright (c) 2002-2010, by Derick Rethans

    Read the article

  • What does it mean for an OS to "execute within user processes"? Do any modern OS's use that approach

    - by Chris Cooper
    I have recently become interested in operating systems, and a friend of mine lent me a book called Operating Systems: Internals and Design Principles (I have the third edition), published in 1998. It's been a very interesting book so far, but I have come to the part dealing with process control, and it uses UNIX System V as one of its examples of an operating system that executes within user processes. This concept has struck me as a little strange. First of all, does this mean that OS instructions and data are stored in each of the user processes? Probably not, because that would be an absurdly redundant scheme. But if not, then what does it mean to "execute within" a user process? Do any modern operating systems use this approach? It seems much more logical to have the operating system execute as its own process, or even independently of all processes if you're short on memory. All the inter-accessibility of process data required for this layout seems to greatly complicate things. (But maybe that's just because I don't quite get the concept ;D) Here is what the book says: "Execution within User Processes: An alternative that is common with operating systems on smaller machines is to execute virtually all operating system software in the context of a user process. ..."

    Read the article

  • Spring + iBatis + Hessian caching

    - by ILya
    Hi. I have a Hessian service on Spring + iBatis running on Tomcat. I'm wondering how to cache results. I've made the following config in my sqlmap file:

        <sqlMap namespace="Account">
          <cacheModel id="accountCache" type="MEMORY" readOnly="true" serialize="false">
            <flushInterval hours="24"/>
            <flushOnExecute statement="Account.addAccount"/>
            <flushOnExecute statement="Account.deleteAccount"/>
            <property name="reference-type" value="STRONG"/>
          </cacheModel>

          <typeAlias alias="Account" type="domain.Account"/>

          <select id="getAccounts" resultClass="Account" cacheModel="accountCache">
            fix all;
            select id, name, pin from accounts;
          </select>

          <select id="getAccount" parameterClass="Long" resultClass="Account" cacheModel="accountCache">
            fix all;
            select id, name, pin from accounts where id=#id#;
          </select>

          <insert id="addAccount" parameterClass="Account">
            fix all;
            insert into accounts (id, name, pin) values (#id#, #name#, #pin#);
          </insert>

          <delete id="deleteAccount" parameterClass="Long">
            fix all;
            delete from accounts where id = #id#;
          </delete>
        </sqlMap>

    Then I've done some tests. I have a Hessian client application that calls getAccounts several times, and each call results in a query to the DBMS. How can I make my service query the DBMS only the first time getAccounts is called (after server restart), and use the cache for the following calls?

    Read the article

  • cell and array in Matlab

    - by Tim
    Hi, I am a little confused about the usage of cells and arrays in Matlab. I would like to hear about your understanding. Here are my observations:
    (1) An array can dynamically adjust its own memory to allow a dynamic number of elements, while a cell seems not to act in the same way:

        a = [];  a = [a 1];
        b = {};  b = {b 1};   % nests the old cell inside the new one instead of appending

    (2) Several elements can be retrieved from a cell, while they seem not to be from an array:

        a = {'1' '2'};
        figure, plot(...); hold on; plot(...);
        legend(a{1:2});
        b = ['1' '2'];
        figure, plot(...); hold on; plot(...);
        legend(b(1:2));   % b(1:2) is an array, not its elements, so it is wrong with legend

    Are these correct? What are some other differences in usage between cells and arrays? Thanks and regards!

    Read the article

  • Request size limitation when using MultipartHttpServletRequest of Spring 3.0

    - by Spiderman
    I'd like to know what is the size limitation if I upload list of files in one client's form submition using HTTP multipart content type. On the server side I am using Spring's MultipartHttpServletRequest to handle the request. mM questions: Is there should be different file size limitation and total request size limitation or file size is the only limitation and the request is capable of uploading 100s of files as lonng as they are not too large. Doest the Spring request wrapper read the complete request and store it in the JAVA heap memory or it store temporaray files of it to be able to use big quota. Is the use of reading the httpservlet request in streaming would change the size limitation than using complete http request read at-once by the application server. What is the bottleneck of this process - Java heap size, the quota of the filesystem on which my web-server runs, the maximum allowed BLOB size that the DataBase in which I am gonna save the file alows? or Spring internal limitations? Related threads that still don't have exact answer to this: does-spring-framework-support-streaming-mode-in-mutlipart-requests is-there-a-way-to-get-raw-http-request-stream-from-java-servlet-handler how-to- drop-body-of-a-request-after-checking-headers-in-servlet apache-commons-fileupload-throws-malformedstreamexception

    Read the article

  • Why do multiple sessions started on the same page get the same session id?

    - by Calmarius
    I tried the following:

        <?php
        session_name('user');
        session_start(); // sets a 'user' cookie with the session id.
        // This session stores the user's login data. It's an ordinary session.
        $USERSESSION = $_SESSION;   // saving this session
        $userSessId = session_id();
        session_write_close();      // closing this session

        session_name('chatroom');
        session_start(); // sets a 'chatroom' cookie, but it contains the same
        // session id and points to the same data :( .
        // This session would be used to store the chat text, and would be shared
        // between all users joined in this chat room by setting the same session
        // id for all members. Calling session_regenerate_id would make a
        // different session id for every user, and it wouldn't be a chat room anymore.
        ?>

    So I want to do a chat-like thing with sessions. On the client side it would be done with AJAX that polls this PHP page every 5-10 seconds. Sessions may be cached in the server's memory, so they can be accessed fast. I could store the chat in the database, but my service runs on a free webhost which is limited: only 4 MySQL connections are allowed at a time, which is almost nothing. I try to touch my database as few times as possible.

    Read the article

  • function returns after an XMLHttpRequest

    - by ashays
    Alright, I know questions like this have probably been asked dozens of times, but I can't seem to find a working solution for my project. Recently, while using jQuery for a lot of AJAX calls, I've found myself in a form of callback hell. Whether or not jQuery is too powerful for this project is beyond the scope of this question. So basically here's some code that shows what's going on: function check_form(table) { var file = "/models/"+table+".json"; var errs = {}; var xhr = $.getJSON(file, function(json) { for (key in json) { var k = key; var r = json[k]; $.extend(errs, check_item("#"+k,r)); } }); return errs; } And... as you can probably guess, I get an empty object returned. My original idea was to use some sort of onReadyStateChange idea that would return whenever the readyState had finally hit 4. This causes my application to hang indefinitely, though. I need these errors to decide whether or not the form is allowed to submit or not (as well as to tell the user where the errors are in the application. Any ideas? Edit. It's not the prettiest solution, but I've managed to get it to work. Basically, check_form has the json passed to it from another function, instead of loading it. I was already loading it there, too, so it's probably best that I don't continue to load the same file over and over again anyways. I was just worried about overloading memory. These files aren't absolutely huge, though, so I guess it's probably okay.

    Read the article

  • How to find Tomcat's PID and kill it in Python?

    - by 4herpsand7derpsago
    Normally, one shuts down Apache Tomcat by running its shutdown.sh script (or batch file). In some cases, such as when Tomcat's web container is hosting a web app that does some crazy things with multi-threading, running shutdown.sh gracefully shuts down some parts of Tomcat (as I can see more available memory returning to the system), but the Tomcat process keeps running. I'm trying to write a simple Python script that:
    1. Calls shutdown.sh
    2. Runs ps -aef | grep tomcat to find any process with Tomcat referenced
    3. If applicable, kills the process with kill -9 <PID>
    Here's what I've got so far (as a prototype - I'm brand new to Python BTW):

        #!/usr/bin/python

        # Imports
        import sys
        import subprocess

        # Main entry point.
        def main():
            # Shutdown Tomcat
            shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh"
            subprocess.call([shutdownCmd], shell=True)

            # Check for PID
            grepCmd = "ps -aef | grep tomcat"
            grepResults = subprocess.call([grepCmd], shell=True)
            if grepResults.length > 1:
                # Get PID and kill it.
                pid = ???
                killPidCmd = "kill -9 $pid"
                subprocess.call([killPidCmd], shell=True)

            # Exit.
            sys.exit()

        # Load from imported module.
        if __name__ == "__main__":
            main()

    I'm struggling with the middle part: obtaining the grep results, checking to see if their size is greater than 1 (since grep always matches itself, at least 1 result will always be returned, methinks), and then parsing the returned PID and passing it into killPidCmd. Thanks in advance!

    Read the article

  • Weak hashmap with weak references to the values?

    - by Razor Storm
    I am building an android app where each entity has a bitmap that represents its sprite. However, each entity can be duplicated (there might be 3 copies of entity asdf, for example). One approach is to load all the sprites upfront and then put the correct sprite in the constructors of the entities. However, I want to decode the bitmaps lazily, so that the constructors of the entities decode the bitmaps. The only problem with this is that duplicated entities will load the same bitmap twice, using 2x the memory (or n times, if the entity is created n times). To fix this, I built a SingularBitmapFactory that stores a decoded Bitmap in a hash and, if the same bitmap is asked for again, simply returns the previously hashed one instead of building a new one. The problem with this, though, is that the factory holds a reference to all bitmaps, so they won't ever get garbage collected. What's the best way to switch the hashmap to one with weakly referenced values? In other words, I want a structure where a value won't be GC'd as long as any other object holds a reference to it, but as soon as no other object refers to it, it can be GC'd.
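
    For reference, WeakHashMap weakens its keys, not its values; a value-weak cache is typically hand-rolled around WeakReference. A minimal sketch (the names are illustrative; on Android, SoftReference was often preferred for bitmap caches, since soft references survive until memory actually gets tight):

        import java.lang.ref.WeakReference;
        import java.util.HashMap;
        import java.util.Map;

        // Map with weakly referenced values: a value becomes collectable as soon
        // as nothing outside the cache holds it; the next lookup then reloads it.
        public class WeakValueCache<K, V> {
            public interface Loader<K, V> { V load(K key); }

            private final Map<K, WeakReference<V>> map = new HashMap<K, WeakReference<V>>();
            private final Loader<K, V> loader;

            public WeakValueCache(Loader<K, V> loader) { this.loader = loader; }

            public V get(K key) {
                WeakReference<V> ref = map.get(key);
                V value = (ref != null) ? ref.get() : null; // null once collected
                if (value == null) {
                    value = loader.load(key); // e.g. decode the bitmap again
                    map.put(key, new WeakReference<V>(value));
                }
                return value;
            }
        }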

    Read the article

  • Recursion in assembly?

    - by Davis
    I'm trying to get a better grasp of assembly, and I am a little confused about how to recursively call functions when I have to deal with registers, popping/pushing, etc. I am embedding x86 assembly in C++. Here I am trying to make a function which, given an array of integers, will build a linked list containing those integers in the order they appear in the array. I am doing this by calling a recursive function:

        insertElem(struct elem *head, struct elem *newElem, int data)

    - head: head of the list
    - data: the number that will be inserted at the end of the list
    - newElem: points to the location in memory where I will store the new element (data field)

    My problem is that I keep overwriting the registers instead of building a typical linked list. For example, if I give it an array {2,3,1,8,3,9}, my linked list will contain the first element (head) and only the last element, because the elements keep overwriting each other after head is no longer null. So my linked list looks something like 2--9 instead of 2--3--1--8--3--9. I feel like I don't have a grasp on how to organize and handle the registers. newElem is in EBX and just keeps getting overwritten. Thanks in advance!

    Read the article

  • Implementing list position locator in C++?

    - by jfrazier
    I am writing a basic Graph API in C++ (I know libraries already exist, but I am doing it for the practice/experience). The structure is basically that of an adjacency list representation. So there are Vertex objects and Edge objects, and the Graph class contains:

        list<Vertex *> vertexList;
        list<Edge *> edgeList;

    Each Edge object has two Vertex* members representing its endpoints, and each Vertex object has a list of Edge* members representing the edges incident to the Vertex. All this is quite standard, but here is my problem. I want to be able to implement deletion of Edges and Vertices in constant time, so for example each Vertex object should have a locator member that points to the position of its Vertex* in the vertexList. The way I first implemented this was by saving a list::iterator, as follows:

        vertexList.push_back(v);
        v->locator = --vertexList.end();

    Then if I need to delete this vertex later, rather than searching the whole vertexList for its pointer, I can call:

        vertexList.erase(v->locator);

    This works fine at first, but it seems that if enough changes (deletions) are made to the list, the iterators become out-of-date and I get all sorts of iterator errors at runtime. This seems strange for a linked list, because it doesn't seem like you should ever need to re-allocate the remaining members of the list after deletions, but maybe the STL does this to optimize by keeping memory somewhat contiguous? In any case, I would appreciate it if anyone has any insight as to why this happens. Is there a standard way in C++ to implement a locator that will keep track of an element's position in a list without becoming obsolete? Much thanks, Jeff

    Read the article

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository I kept on a server and checked out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing. Now, I have a bunch of git repositories for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines or memory sticks/hard drives as backup. The problem is that it's a private repository, and git doesn't allow checking out only a specific folder (one that I could push to github as a separate project, while having the changes appear in both the master repo and the sub-repos). I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories and don't really contain the actual code, so they're useless for backup). Currently I have a folder of git repos (for example, ~/code_projects/proj1/.git/, ~/code_projects/proj2/.git/), and after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos), then git push backupdrive1, git push mymemorystick, etc. So, the question: how do you organise your personal code and projects with git repositories, and keep them synced and backed up?

    Read the article

  • Do all C compilers allow functions to return structures?

    - by Jordan S
    I am working on a program in C and using the SDCC compiler for an 8051-architecture device. I am trying to write a function called GetName that will read 8 characters from flash memory and return the character array in some form. I know that it is not possible to return an array in C, so I am trying to do it using a struct like this:

        //********************FLASH.h file*******************************
        #define NAME_SIZE 8

        typedef struct
        {
            char Name[NAME_SIZE];
        } MyStruct;

        extern MyStruct GetName(int i); // function prototype
                                        // (placed after the typedef it depends on)

        // *****************FLASH.c file***********************************
        #include "FLASH.h"

        MyStruct GetName(int i)
        {
            MyStruct newNameStruct;
            //...
            // Fill the array by reading data from Flash
            //...
            return newNameStruct;
        }

    I don't have any references to this function yet, but for some reason I get a compiler error that says "Function cannot return aggregate." Does this mean that my compiler does not support functions that return structs? Or am I just doing something wrong?

    Read the article
