Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • C++ gdb GUI

    - by HappyDude
    Briefly: Does anyone know of a GUI for gdb that brings it on par with, or close to, the feature set you get in more recent versions of Visual C++?
    In detail: As someone who has spent a lot of time programming in Windows, one of the larger stumbling blocks I've found whenever I have to code C++ in Linux is that debugging anything using command-line gdb takes me several times longer than it does in Visual Studio, and it does not seem to be getting better with practice. Some things are just easier or faster to express graphically. Specifically, I'm looking for a GUI that:
    - Handles all the basics like stepping over & into code, watch variables and breakpoints
    - Understands and can display the contents of complex & nested C++ data types
    - Doesn't get confused by, and preferably can intelligently step through, templated code and data structures while displaying relevant information such as the parameter types
    - Can handle threaded applications and switch between different threads to step through or view the state of
    - Can handle attaching to an already-started process or reading a core dump, in addition to starting the program up in gdb
    If such a program does not exist, then I'd like to hear about experiences people have had with programs that meet at least some of these bullet points. Does anyone have any recommendations?
    Edit: Listing out the possibilities is great, and I'll take what I can get, but it would be even more helpful if you could include in your responses (a) whether or not you've actually used this GUI and, if so, what positive/negative feedback you have about it, and (b) if you know, which of the above-mentioned features are/aren't supported. Lists are easy to come by; sites like this are great because you can get an idea of people's personal experiences with applications.

    Read the article

  • threading in Python taking up too much CPU

    - by KevinShaffer
    I wrote a chat program and have a GUI running using Tkinter. To go and check when new messages have arrived, I create a new thread so Tkinter keeps doing its thing without locking up while the new thread goes and grabs what I need and updates the Tkinter window. This, however, becomes a huge CPU hog, and my guess is that it has to do somehow with the fact that the Thread is started and never really released when the function is done. Here's the relevant code (it's ugly and not optimized at the moment, but it gets the job done and itself does not use too much processing power; when I run it non-threaded it doesn't take up much CPU, but it locks up Tkinter). Note: This is inside of a class, hence the extra tab.

        def interim(self):
            threading.Thread(target=self.readLog).start()
            self.after(5000,self.interim)

        def readLog(self):
            print 'reading'
            try:
                length = len(str(self.readNumber))
                f = open('chatlog'+str(myport),'r')
                temp = f.readline().replace('\n','')
                while (temp[:length] != str(self.readNumber)) or temp[0] == '<':
                    temp = f.readline().replace('\n','')
                while temp:
                    if temp[0] != '<':
                        self.updateChat(temp[length:])
                        self.readNumber +=1
                    else:
                        self.updateChat(temp)
                    temp = f.readline().replace('\n','')
                f.close()

    Is there a way to better manage the threading so I don't consume 100% of the CPU very quickly?
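
    One way to cut the CPU cost is to keep a single long-lived worker thread and hand results back to Tkinter through a queue drained from the main loop with after(). The sketch below is a minimal illustration of that pattern, not a drop-in fix: ChatReader, the log-path handling, and updateChat are stand-ins for the code above, the readNumber bookkeeping is omitted, and the log file is assumed to already exist.

        import threading
        import time
        import Queue  # Python 2, matching the snippet above; "queue" in Python 3

        class ChatReader(object):
            def __init__(self, widget, path):
                self.widget = widget            # the Tkinter widget that owns after()/updateChat()
                self.path = path                # hypothetical chat log path
                self.queue = Queue.Queue()
                worker = threading.Thread(target=self._poll)
                worker.daemon = True
                worker.start()                  # one thread for the life of the app
                self._drain()

            def _poll(self):
                # Background thread: read only what was appended since last time, then sleep.
                offset = 0
                while True:
                    with open(self.path) as f:
                        f.seek(offset)
                        for line in f:
                            self.queue.put(line.rstrip('\n'))
                        offset = f.tell()
                    time.sleep(5)

            def _drain(self):
                # Main (Tkinter) thread: the only place that touches widgets.
                while not self.queue.empty():
                    self.widget.updateChat(self.queue.get_nowait())
                self.widget.after(500, self._drain)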

    Read the article

  • Sheet and thread memory problem

    - by Xident
    Hi guys, I recently started a project which exports some precalculated graphics/audio to files for post-processing. All I did was put a new window (with a progress indicator and an Abort button) in my main xib and open it using the following code:

        [NSApp beginSheet: REC_Sheet modalForWindow: MOTHER_WINDOW modalDelegate: self didEndSelector: nil contextInfo: nil];
        NSModalSession session=[NSApp beginModalSessionForWindow:REC_Sheet];
        RECISNOTDONE=YES;
        while (RECISNOTDONE) {
            if ([NSApp runModalSession:session]!=NSRunContinuesResponse)
                break;
            usleep(100);
        }
        [NSApp endModalSession:session];

    A background thread (pthread) was started earlier to actually perform the work and save all the targa/wave files. This worked great, but after a while it turned out that the main thread was not responding anymore and my memory footprint rose unstoppably. I tried to debug it with Instruments and saw a lot of CFHash etc. stuff growing to infinity. By accident I clicked below the sheet, and that helped temporarily: the main thread (AppKit?) released its stuff, but only for a little while. I can't explain it. At first I thought it was the access from my thread to the progress bar to update the progress (at 0.5 s intervals), so I cut that out. But even if I'm not updating anything and do nothing with the progress bar, my application eats up all the memory because its "main event" (or whatever) stuff is never released. Is there any way to "drain" this main-thread memory (a run loop / NSApp call?), and why on earth does the main thread stop responding after this simple task? I don't have a clue anymore, please help! Thanks in advance!
    P.S. How do you implement "threaded long task" stuff and update your GUI?

    Read the article

  • Question about array subscripting in C#

    - by Michael J
    Back in the old days of C, one could use array subscripting to address storage in very useful ways. For example, one could declare an array like this, representing an EEPROM image with 8-bit words:

        BYTE eepromImage[1024] = { ... };

    and later refer to that array as if it were really multi-dimensional storage:

        BYTE mpuImage[2][512] = eepromImage;

    I'm sure I have the syntax wrong, but I hope you get the idea. Anyway, this projected a two-dimensional image of what is really single-dimensional storage. The two-dimensional projection represents the EEPROM image when loaded into the memory of an MPU with 16-bit words. In C one could reference the storage multi-dimensionally, change values, and the changed values would show up in the real (single-dimension) storage almost as if by magic. Is it possible to do this same thing using C#? Our current solution uses multiple arrays and event handlers to keep things synchronized. This kind of works, but it is additional complexity that we would like to avoid if there is a better way.
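
    The underlying idea is a second indexing scheme over the same buffer rather than a second copy. A language-agnostic sketch of that idea in Python is below (in C# the analogous shape would usually be a small wrapper with a two-argument indexer over the one-dimensional array); the class and names are illustrative only.

        class View2D(object):
            """Presents a flat buffer as rows x cols without copying it."""
            def __init__(self, flat, rows, cols):
                assert len(flat) == rows * cols
                self.flat, self.rows, self.cols = flat, rows, cols

            def __getitem__(self, rc):
                r, c = rc
                return self.flat[r * self.cols + c]

            def __setitem__(self, rc, value):
                r, c = rc
                self.flat[r * self.cols + c] = value   # writes land in the shared flat buffer

        eeprom = bytearray(1024)        # the single-dimensional storage
        mpu = View2D(eeprom, 2, 512)    # a 2 x 512 projection of the same bytes
        mpu[1, 3] = 0xAB
        print(eeprom[512 + 3])          # 171 -- same storage, no event handlers needed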

    Read the article

  • HTML prevent line break (between two table tags)

    - by arik-so
    Hello, I have the following code:

        <table>
          <tr>
            <td>Table 1</td>
          </tr>
        </table>
        <table>
          <tr>
            <td>Table 2</td>
          </tr>
        </table>

    Very unfortunately, a line break is inserted between these two tables. I have tried putting them both in a single span and setting the whitespace to nowrap, but to no avail. Please, could you tell me how I can simply put these elements in a single row, without setting the float attribute in CSS and without surrounding each table with a <td> {table} </td> and then putting this in a table row? Thanks a lot in advance. I have asked Google, but it just wouldn't say anything ^^ StackOverflow has remained silent so far, too.

    Read the article

  • Velocity CTP: can we 'search' for objects?

    - by Stato Machino
    It appears that 'tags' allow us to associate a 'search term' with the objects placed into the Velocity cache space. However, these can only be queried within a 'region'. Further, regions somehow limit the locality of objects in the cache to a single server (or maybe something kinda like that). So this appears to make it hard to perform any operation for which the unique ID of the cached item is not persisted or continuously available to the application that stores and retrieves objects to and from the cache. In any case, I can't see an easy way to 'cleanse' the cache of objects, or to find objects across the entire cache that may share some prefix, postfix or infix values in the cache key, so that I can clear out the cache of objects repeatedly created in unit tests, for example. And I am unsure about the consequences of regions being associated with single-server cache locations. So I would appreciate any help with the following questions:
    1. What is the difference between a 'distributed cache' (called a 'partitioned' cache??) when using regions, and a 'local cache'?
    1.a. In particular, are the region-oriented values in a distributed cache visible through a cache factory that is configured to 'see' the entire cache space?
    2. Are the operations of creating and removing 'regions' efficient enough that it would be reasonable to create a region and a group of tags for each bundle of objects that needs to be cached?
    2.a. Or does this just push the problem of scoping the 'search for objects' up the chain, because the ability of the DataCache object to query down through regions and tags is as limited as querying for the cache keys of objects themselves?
    Thanks, Stato

    Read the article

  • How can I pass an array resulting from a Perl method by reference?

    - by arareko
    Some XML::LibXML methods return arrays instead of references to arrays. Instead of doing this:

        $self->process_items($xml->findnodes('items/item'));

    I want to do something like:

        $self->process_items(\$xml->findnodes('items/item'));

    So that in process_items() I can dereference the original array instead of creating a copy:

        sub process_items {
            my ($self, $items) = @_;
            foreach my $item (@$items) {
                # do something...
            }
        }

    I can always store the results of findnodes() into an array and then pass the array reference to my own method, but let's say I want to try a reduced version of my code. Is that the correct syntax for passing the method results or should I use something different? Thanks!
    EDIT: Now suppose I want to change process_items() to process_item() so I can do stuff on a single element of the referenced array inside a loop. Something like:

        $self->process_item($_) for ([ $xml->findnodes('items/item') ]);

    This doesn't work as process_item() is executed only once because a single value is passed to the for loop (the reference to the array from findnodes()). What's the proper way of using $_ in this case?

    Read the article

  • Word frequency tally script is too slow

    - by Dave Jarvis
    Background: Created a script to count the frequency of words in a plain text file. The script performs the following steps:
    - Count the frequency of words from a corpus.
    - Retain each word in the corpus found in a dictionary.
    - Create a comma-separated file of the frequencies.
    The script is at: http://pastebin.com/VAZdeKXs
    Problem: The following lines continually cycle through the dictionary to match words:

        for i in $(awk '{if( $2 ) print $2}' frequency.txt); do
          grep -m 1 ^$i\$ dictionary.txt >> corpus-lexicon.txt;
        done

    It works, but it is slow because it is scanning the words it found to remove any that are not in the dictionary. The code performs this task by scanning the dictionary for every single word. (The -m 1 parameter stops the scan when the match is found.)
    Question: How would you optimize the script so that the dictionary is not scanned from start to finish for every single word? The majority of the words will not be in the dictionary. Thank you!
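
    The usual way to avoid the repeated scans is to load the dictionary once into a hash-based set and test each word against it in constant time. Below is a minimal sketch of that idea in Python, reusing the file names from the script above; it illustrates the lookup structure rather than replicating every detail of the original pipeline.

        # Build the dictionary set once; each membership test is then O(1).
        with open('dictionary.txt') as f:
            dictionary = set(word.strip() for word in f)

        with open('frequency.txt') as f, open('corpus-lexicon.txt', 'w') as out:
            for line in f:
                fields = line.split()
                # frequency.txt is assumed to hold the word in its second column,
                # mirroring the awk '{print $2}' above.
                if len(fields) >= 2 and fields[1] in dictionary:
                    out.write(fields[1] + '\n')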

    Read the article

  • Which CSS identifier is used for the selected tab in tabbed tables in browsers other than IE?

    - by David Navarre
    When you have a table on a form in Notes, you can choose to display only one row at a time (via the Special Table Row Display parameter on the Table Rows tab of the Table properties). In a Notes document displayed using Internet Explorer that contains such a table, a row is displayed with a cell for each "tab". The TD that serves as the tab for the selected "Notes table row" is assigned <td class="dominoSelTopTab">, while the other tabs get <td class="dominoTopTab">. However, when using other browsers, it's not nearly as simple. In Firefox, each "tab" ends up as a single-celled-single-row-table within the table with very little to identify it:

        <td><table border="1" cellpadding="2">
        <tr><td><div align="center"><b>Tab 2</b></div></td></tr>
        </table></td>

    A non-selected tab would show as follows:

        <td><table border="1" cellpadding="2">
        <tr><td><div align="center"><a name="1." href="/Projects/MyCSS.nsf/0c3b9489476440c085257a62006d97d6/d482a1767a4af77f85257a62006db064?OpenDocument&amp;TableRow=1.0#1." target="_self">Tab 1</a></div></td></tr>
        </table></td>

    So, the question is, how do I identify the selected tabs and the non-selected tabs when not using IE? Note: For those who are not Notes developers, the HTML is auto-generated from the visual design as laid out in the Notes designer client. I would replace it all with manual HTML, except there is so much of it that doing so would consume far too much time.

    Read the article

  • Write file need to optimised for heavy traffic part 2

    - by Clayton Leung
    For anyone interested in where I am coming from, you can refer to part 1 ("write file need to optimised for heavy traffic"), but it is not necessary.
    Below is a snippet of code I have written to capture some financial tick data from the broker API. The code runs without error. I need to optimize it, because in peak hours the zf_TickEvent method will be called more than 10000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I output it into a text file. The broker API is only single threaded.

        void zf_TickEvent(object sender, ZenFire.TickEventArgs e)
        {
            outputString = string.Format("{0},{1},{2},{3},{4}\r\n",
                e.TimeStamp.ToString(timeFmt),
                e.Product.ToString(),
                Enum.GetName(typeof(ZenFire.TickType), e.Type),
                e.Price,
                e.Volume);
            fillBuffer(outputString);
        }

        public class memoryStreamClass
        {
            public static MemoryStream ms = new MemoryStream();
        }

        void fillBuffer(string outputString)
        {
            byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
            memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length);
            if (memoryStreamClass.ms.Length > 8192)
            {
                emptyBuffer(memoryStreamClass.ms);
                memoryStreamClass.ms.SetLength(0);
                memoryStreamClass.ms.Position = 0;
            }
        }

        void emptyBuffer(MemoryStream ms)
        {
            FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append);
            ms.WriteTo(outStream);
            outStream.Flush();
            outStream.Close();
        }

    Questions:
    1. Any suggestion to make this even faster? I will try to vary the buffer length, but in terms of code structure, is this (almost) the fastest?
    2. When the MemoryStream is filled up and I am emptying it to the file, what happens to the new data coming in? Do I need to implement a second buffer to hold that data while I am emptying my first buffer? Or is C# smart enough to figure it out?
    Thanks for any advice
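
    On the "second buffer" question, one common shape is to decouple the callback from disk I/O entirely with a producer/consumer queue: the tick callback only enqueues a line, and a dedicated writer thread appends to the file, so nothing is lost while a write is in progress. Below is a minimal sketch of that pattern in Python (names and paths are illustrative; a C# equivalent would typically pair a thread-safe queue with a writer thread).

        import threading
        import Queue  # Python 2; "queue" in Python 3

        class TickWriter(object):
            """Background writer: the tick callback only enqueues; a worker appends to disk."""

            def __init__(self, path):
                self.queue = Queue.Queue()
                self.path = path               # illustrative output path
                worker = threading.Thread(target=self._run)
                worker.daemon = True
                worker.start()

            def write(self, line):
                # Called from the (single-threaded) broker callback; returns immediately.
                self.queue.put(line)

            def _run(self):
                with open(self.path, 'a') as f:
                    while True:
                        f.write(self.queue.get())   # blocks until a line is available
                        f.flush()                    # or batch several lines before flushing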

    Read the article

  • What considerations should be made when creating a reporting framework for a business?

    - by Andrew Dunaway
    It's a pretty classic problem. The company I work for has numerous business reports that are used to track sales, data feeds, and various other metrics. Of course this also means that there is a conglomerate of disparate frameworks, ASP.NET pages, and areas where these reports can be found. There have been some attempts at consolidating these into a single entity, but nothing has stuck yet. Since this is a common problem, and I am sure it has been solved innumerable times, I wanted to see what others have done. For the most part these can be boiled down to the following pieces:
    - A SQL query against our database to gather data
    - A presentation of data, generally in a data grid
    - Filtering that can vary based on data types and the business needs
    - Some way to organize the reports; a single drop-down gets long and unmanageable quickly
    - A method to download data to alter further, perhaps a CSV file
    My first thought was to create a framework in Silverlight with LINQ to SQL, mainly just because I like it and want to play with it, which probably is not the best reason. I also thought the controls grant a lot of functionality like sorting, dragging columns, etc. I was also curious about the printing in Silverlight 4. Which brings me around to my original question: what is the best way to do this? Is there a package out there I can just buy that will do it for me? The Silverlight approach seems pretty easy after it's set up and templated, but maybe it's a bad idea and I can learn from someone else?

    Read the article

  • Database (MySQL) structuring: pros and cons of multiple tables

    - by Gideon
    I am collecting data and storing it in MySQL, for:
    - 75 variables
    - 55 countries
    - each year
    At this stage, since I am still building this tool, I have created a single table of variables / countries (storing 1 year's worth of data). Next year (and for several years after that) a new set of data will be input for each country. There are therefore 3 variables controlling the data returned to a user reviewing all collected data. The general form of any query would be: show me these specific variables, for these specific countries, for these specific years. (Show me average age and weight, for USA and Canada, for 2012 and 2009, for example.) My question is that I seem to have two options for arranging this data:
    - multiple tables, where I create a table of country / variable for each year data is collected, or
    - a single table, where I simply add a column (field) for the year that the data relates to.
    As far as I can tell I could make these database calls with either structure, but is one more powerful / efficient / quicker, and why? Thanks for your consideration. It's a PDO / PHP interface if that is relevant.

    Read the article

  • How to add a root node when the output method is text

    - by Akhil
    My sample input is:

        <?xml version="1.0" encoding="UTF-8"?>
        <ns0:JDBC_RECEIVERDATA_MT_response xmlns:ns0="urn:parmalat.com.au:TESTSQL_REPLICATION">
          <Statement_response>
            <response_1>
              <row>
                <XML_F52E2B61-18A1-11d1-B105-00805F49916B><![CDATA[<TransactionLog TID="1400" SeqNo="3337446" SQLTransaction="Insert into TankerLoads Values(141221,53,299,18,1,426148,6,&apos;Nov 19 2007 12:00AM&apos;,&apos;Dec 30 1899 12:59PM&apos;,3.00,20682,0,&apos;Zevo&apos;,&apos;Nov 19 2007 12:00AM&apos;,0)"/></row>
              <row>
                <XML_F52E2B61-18A1-11d1-B105-00805F49916B>um = 141221"/&gt;&lt;TransactionLog TID="1400" SeqNo="3337452" SQLTransaction="Insert into MilkPickups Values(790195,141221,0,&amp;apos;Nov 19 2007 12:00AM&amp;apos;,2433,&amp;apos;Nov 19 2007 12:00AM&amp;apos;,&amp;apos;Dec 30 1899 11:26AM&amp;apos;,3131,2.90)"/&gt;

    Like this I have multiple records, and my output should be like:

        <root>
          <TransactionLog TID="1400" SeqNo="3337446" SQLTransaction="Insert into TankerLoads Values(141221,53,299,18,1,426148,6,'Nov 19 2007 12:00AM','Dec 30 1899 12:59PM',3.00,20682,0,'Zevo','Nov 19 2007 12:00AM',0)" />
          <TransactionLog TID="1400" SeqNo="3337447" SQLTransaction="Update TankerLoads Set TankerNum = 53,DriverNum = 299,CarterNum = 18,MilkTypeNum = 1,SampleNum = 426148,ReceivalBayNum = 6,UnloadDate = 'Nov 19 2007 12:00AM',UnloadTime = 'Dec 30 1899 12:59PM',Temperature = 3.00,Volume = 20682,NetWeight = 0,WeighbridgeDocket = 'Zevo',LoadPickupDate = 'Nov 19 2007 12:00AM',IsValidated = 0 Where TankerLoadNum = 141221" />
        </root>

    I am using output method "text" because if I use "xml" the tags are replaced with &lt; and &gt;, which I don't want. Moreover, if you look at the two rows above, the last record is split in half: part of it is in the first row and the rest continues in the second row, so what I used leaves a single space, and I don't want that single space either. I hope I am clear; if not, please let me know and I will add more comments. Please help me out. Thank you.

    Read the article

  • Combine MD5 hashes of multiple files

    - by user685869
    I have 7 files that I'm generating MD5 hashes for. The hashes are used to ensure that a remote copy of the data store is identical to the local copy. Unfortunately, the link between these two copies of the data is mind-numbingly slow. Changes to the data are very rare, but I have a requirement that the data be synchronized at all times (or as soon as possible). Rather than passing 7 different MD5 hashes across my (extremely slow) communications link, I'd like to generate the hash for each file and then combine these hashes into a single hash which I can transfer and then re-calculate/use for comparison on the remote side. If the "combined hash" differs, then I'd start sending the 7 individual hashes to determine exactly which file(s) have been changed. For example, here are the MD5 hashes for the 7 files as of last week:

        0709d609d69385255c496436eb50402c
        709465a74411bd596595c7b9b158ae6a
        4ab657320ef33e3d5eb498e4c13d41b7
        3b49c6ab199994fd776bb63761414e72
        0fc28c5a010fc3c06c0c930c88e31a15
        c4ecd214662cac5aae0e53f6f252bf0e
        8b086431e43148a2c2d943ba30d31cc6

    I'd like to combine these hashes together such that I get a single unique value (perhaps another MD5 hash?) that I can then send to the remote system. On the remote system, I'd then perform the same calculation to determine if the data as a whole has changed. If it has, then I'd start sending the individual hashes, etc. The most important factor is that my "combined hash" be short enough so that it uses less bandwidth than just sending all 7 hashes in the first place. I thought of writing the 7 MD5 hashes to a file and then hashing that file, but is there a better way?
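
    One common way to get such a combined value is to hash the concatenation of the per-file digests in a fixed order: the result is a single 32-character MD5 that changes whenever any individual hash changes. A minimal sketch in Python (file names are hypothetical):

        import hashlib

        def file_md5(path):
            h = hashlib.md5()
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(8192), b''):
                    h.update(chunk)
            return h.hexdigest()

        files = ['file1.dat', 'file2.dat', 'file3.dat']   # hypothetical; keep the order fixed on both sides
        per_file = [file_md5(p) for p in files]

        # Combined fingerprint: as small as one MD5, so it costs less to send than all the hashes.
        combined = hashlib.md5(''.join(per_file).encode('ascii')).hexdigest()
        print(combined)   # if this differs from the remote value, fall back to comparing per-file hashes

    Both sides must hash the per-file digests in the same order; writing the seven hashes to a file and hashing that file, as suggested above, is effectively the same construction.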

    Read the article

  • ggplot: showing % instead of counts in charts of categorical variables

    - by wishihadabettername
    I'm plotting a categorical variable and instead of showing the counts for each category value, I'm looking for a way to get ggplot to display the percentage of values in that category. Of course, it is possible to create another variable with the calculated percentage and plot that one, but I have to do it several dozens of times and I hope to achieve that in one command. I was experimenting with something like

        qplot (mydataf) + stat_bin(aes(n=nrow(mydataf), y=..count../n)) + scale_y_continuous(formatter="percent")

    but I must be using it incorrectly, as I got errors. To easily reproduce the setup, here's a simplified example:

        mydata <- c ("aa", "bb", null, "bb", "cc", "aa", "aa", "aa", "ee", null, "cc");
        mydataf <- factor(mydata);
        qplot (mydataf);  # this shows the count, I'm looking to see % displayed.

    In the real case I'll probably use ggplot instead of qplot, but the right way to use stat_bin still eludes me. Thank you.
    UPDATE: I've also tried these four approaches:

        ggplot(mydataf, aes(y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent');
        ggplot(mydataf, aes(y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent') + geom_bar();
        ggplot(mydataf, aes(x = levels(mydataf), y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent');
        ggplot(mydataf, aes(x = levels(mydataf), y = (..count..)/sum(..count..))) + scale_y_continuous(formatter = 'percent') + geom_bar();

    but all 4 give:

        Error: ggplot2 doesn't know how to deal with data of class factor

    The same error appears for the simple case of

        ggplot (data=mydataf, aes(levels(mydataf))) + geom_bar()

    so it's clearly something about how ggplot interacts with a single vector. I'm scratching my head, googling for that error gives a single result.

    Read the article

  • MVC design pattern in complex iPad app: is one fat controller acceptable?

    - by nutsmuggler
    I am building a complex iPad application; think of it as a scrapbook. For the purpose of this question, let's consider a page with two images on it. My main view displays my document data rendered as a single UIImage; this is because I need to do some global manipulation over it. This is my DisplayView. When editing, I need to instantiate an EditorView with my two images as subviews; this way I can interact with a single image (rotate it, scale it, move it). When editing is triggered, I hide my DisplayView and show my EditorView. In an iPhone app, I'd associate each main view (that is, a view filling the screen) with a view controller. The problem is that here there is just one view controller; I've considered presenting the EditorView via a modal view controller, but it's not an option (there is a complex layout with a mask covering everything and palettes over it; rebuilding it in the EditorView would create duplicate code). Presently the EditorView incorporates some logic (it loads data from the model, invokes some subviews for fine editing, and saves data back to the model); the EditorView's subviews also incorporate some logic (I manipulate images and pass them back to the main EditorView). I feel this logic belongs more in a controller. On the other hand, I am not sure making my only view controller so fat is a good idea. What is the best, most Cocoa-ish implementation of such a class structure? Feel free to ask for clarifications. Cheers.

    Read the article

  • Generating all unique crossword puzzle grids

    - by heydenberk
    I want to generate all unique crossword puzzle grids of a certain grid size (4x4 is a good size). All possible puzzles, including non-unique puzzles, are represented by a binary string with the length of the grid area (16 in the case of 4x4), so all possible 4x4 puzzles are represented by the binary forms of all numbers in the range 0 to 2^16. Generating these is easy, but I'm curious if anyone has a good solution for how to programmatically eliminate invalid and duplicate cases. For example, all puzzles with a single column or single row are functionally identical, hence eliminating 7 of those 8 cases. Also, according to crossword puzzle conventions, all squares must be contiguous. I've had success removing all duplicate structures, but my solution took several minutes to execute and probably was not ideal. I'm at something of a loss for how to detect contiguity so if anyone has ideas on this it'd be much appreciated. I'd prefer solutions in python but write in whichever language you prefer. If anyone wants, I can post my python code for generating all grids and removing duplicates, slow as it may be.
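
    Since Python is the preferred language here, a minimal sketch of one way to test contiguity: treat the 16-bit value as a 4x4 bitmask of open squares and flood-fill from any open square, accepting the grid only if every open square is reached. The bit layout (bit r*4+c for row r, column c) is an assumption for illustration.

        def is_contiguous(mask, size=4):
            # Decode the bitmask into the set of open (row, col) squares.
            open_squares = {(r, c) for r in range(size) for c in range(size)
                            if (mask >> (r * size + c)) & 1}
            if not open_squares:
                return False
            # Flood fill (depth-first) from an arbitrary open square.
            stack = [next(iter(open_squares))]
            seen = set()
            while stack:
                r, c = stack.pop()
                if (r, c) in seen:
                    continue
                seen.add((r, c))
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if (nr, nc) in open_squares and (nr, nc) not in seen:
                        stack.append((nr, nc))
            return seen == open_squares

        valid = [g for g in range(2 ** 16) if is_contiguous(g)]   # contiguity filter over all 4x4 grids

    Duplicate elimination can then be layered on top by canonicalizing each surviving mask under the eight symmetries of the square (rotations and reflections) and keeping one representative per equivalence class.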

    Read the article

  • Slowing process creation under Java?

    - by oconnor0
    I have a single, large-heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and the data created by those executables then has to be loaded back into the JVM. Each executable produces about half a megabyte of data (on disk), which is of course larger once read back in after the process finishes. Our first implementation was to have each executable handle only a single object. This involved spawning twice as many executables as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows. While starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds. Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds. Basically, why is it taking so long to create these processes as execution continues?
    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.

    Read the article

  • fluent nhibernate - storing and retrieving three classes in/from one table

    - by Will I Am
    Noob question. I have this situation where I have these objects:

        class Address {
            string Street;
            string City;
            ...
        }

        class User {
            string UserID;
            Address BillingAddress;
            Address MailingAddress;
            ...
        }

    What is the proper way of storing this data using (fluent) nHibernate? I could use a separate Address table and create a reference, but they are 1:1 relationships so I don't really want to incur the overhead of a join. Ideally I would store this as a single flat record. So, my question is, what is the proper way of storing an instance of class 'User' in such a way that it stores its contents and also the two addresses as a single record? My knowledge is failing me on how I can store this information in such a way that the two Address records get different column names (e.g. BillingAddress_Street and MailingAddress_Street, for example), and also how to read a record back into a User instance.

    Read the article

  • Flash automatically names objects on stage "instance#"

    - by meowMIX3R
    Hi, I have 2 TLF text boxes already placed on my main stage. In the property inspector window I give these the instance names: "txt1" and "txt2". I am trying to have a single mouseup event, and figure out which text box it occurred on. My document class has the following code:

        package {
            import flash.display.Sprite;
            import flash.events.KeyboardEvent;

            public class SingleEvent extends Sprite {

                public function SingleEvent() {
                    // constructor code
                    root.addEventListener(KeyboardEvent.KEY_UP, textChanged, false, 0, true);
                }

                private function textChanged(e:KeyboardEvent) {
                    trace(e.target.name);
                    trace(" " + e.target);
                    switch(e.target) {
                        case txt1:
                            trace("txt1 is active");
                            break;
                        case txt2:
                            trace("txt2 is active");
                            break;
                        default:
                            break;
                    }
                }
            }
        }

    Example output is:

        instance15 [object Sprite]
        instance21 [object Sprite]

    Since the objects are already on the stage, I am not sure how to get flash to recognize them as "txt1" and "txt2" instead of "instance#". I tried setting the .name property, but it had no effect. In the publish settings, I have "Automatically declare stage instances" checked. Also, is it possible to have a single change event for multiple slider components? The following never fires:

        root.addEventListener(SliderEvent.CHANGE, sliderChanged, false, 0, true);

    Thanks for any tips

    Read the article

  • Java - multithreaded access to a local value store which is periodically cleared

    - by Telax
    I'm hoping for some advice or suggestions on how best to handle multi threaded access to a value store. My local value storage is designed to hold onto objects which are currently in use. If the object is not in use then it is removed from the store. A value is pumped into my store via thread1, its entry into the store is announced to listeners, and the value is stored. Values coming in on thread1 will either be totally new values or updates for existing values. A timer is used to periodically remove any value from the store which is not currently in use and so all that remains of this value is its ID held locally by an intermediary. Now, an active element on thread2 may wake up and try to access a set of values by passing a set of value IDs which it knows about. Some values will be stored already (great) and some may not (sadface). Those values which are not already stored will be retrieved from an external source. My main issue is that items which have not already been stored and are currently being queried for may arrive in on thread1 before the query is complete. I'd like to try and avoid locking access to the store whilst a query is being made as it may take some time.

    Read the article

  • Does BeginReceive() get everything sent by BeginSend()?

    - by IVlad
    I'm writing a program that will have both a server side and a client side, and the client side will connect to a server hosted by the same program (but by another instance of it, and usually on another machine). So basically, I have control over both aspects of the protocol. I am using BeginReceive() and BeginSend() on both sides to send and receive data. My question is if these two statements are true:
    1. Using a call to BeginReceive() will give me the entire data that was sent by a single call to BeginSend() on the other end when the callback function is called.
    2. Using a call to BeginSend() will send the entire data I pass it to the other end, and it will all be received by a single call to BeginReceive() on the other end.
    The two are basically the same in fact. If the answer is no, which I'm guessing is the case based on what I've read about sockets, what is the best way to handle commands? I'm writing a game that will have commands such as PUT X Y. I was thinking of appending a special character (# for example) to the end of each command, and each time I receive data, I append it to a buffer, then parse it only after I encounter a #.
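
    A sketch of the delimiter-buffering approach described at the end of the question, written in Python to keep it short (a C# version driven by BeginReceive callbacks has the same shape): accumulate whatever each receive hands you, and only act on complete '#'-terminated commands, however the bytes happen to be split or merged in transit.

        class CommandBuffer(object):
            """Accumulates raw stream data and yields complete '#'-terminated commands."""

            def __init__(self):
                self.pending = ''

            def feed(self, data):
                # 'data' is whatever one receive callback delivered: possibly a partial
                # command, possibly several commands glued together.
                self.pending += data
                while '#' in self.pending:
                    command, self.pending = self.pending.split('#', 1)
                    yield command

        buf = CommandBuffer()
        for chunk in ['PUT 3', ' 4#PU', 'T 1 1#QUIT#']:   # simulated receive callbacks
            for cmd in buf.feed(chunk):
                print(cmd)   # prints PUT 3 4, PUT 1 1, QUIT -- regardless of how the chunks split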

    Read the article

  • Is there an Easier way to Get a 3 deep Panel Control from a Form in order to add a new Control to it programmatically?

    - by Mark Sweetman
    I have a VB Windows program created by someone else. It was programmed so that anyone could add to the functionality of the program through the use of class libraries; the program calls them (i.e. the class libraries, DLL files) plugins. The plugin I am creating is a C# class library (.dll). This specific plugin I'm working on adds a simple date/time clock function in the form of a Label and inserts it into a Panel that is 3 deep. The code I have I have tested, and it works. My question is this: is there a better way to do it? For instance, I use Controls.Find 3 different times; each time I know what Panel I am looking for, and there will only be a single Panel added to the Control[] array. So again, I'm doing a foreach on an array that only holds a single element, 3 different times. Now, like I said, the code works and does as I expected it to. It just seems overly redundant, and I'm wondering if there could be a performance issue. Here is the code:

        foreach (Control p0 in mDesigner.Controls)
            if (p0.Name == "Panel1")
            {
                Control panel1 = (Control)p0;
                Control[] controls = panel1.Controls.Find("Panel2", true);
                foreach (Control p1 in controls)
                    if (p1.Name == "Panel2")
                    {
                        Control panel2 = (Control)p1;
                        Control[] controls1 = panel2.Controls.Find("Panel3", true);
                        foreach(Control p2 in controls1)
                            if (p2.Name == "Panel3")
                            {
                                Control panel3 = (Control)p2;
                                panel3.Controls.Add(clock);
                            }
                    }
            }

    Read the article

  • Should Service Depend on Many Repositories, or Break Them Up?

    - by Josh Pollard
    I'm using a repository pattern for my data access. So I basically have a repository per table/class. My UI currently uses service classes to actually get things done, and these service classes wrap, and therefore depend on repositories. In many cases my services are only dependent upon one or two repositories, so things aren't too crazy. Unfortunately, one of my forms in the UI expects the user to enter data that will span five different tables. For this form I made a single service class that depends upon five repositories. Then the methods within the service for saving and loading the data call the appropriate methods on all of the corresponding repositories. As you can imagine, the save and load methods in this service are really big. Also, unit testing this service is getting really difficult because I have to setup so many fake repositories. Would it have been a better choice to break this single service apart into a few smaller services? It would put more code at the UI layer, but would make the services smaller and more testable.

    Read the article

  • Populate an Object Model from a DataTable (C# 3.0)

    - by Newbie
    I have a situation where I am getting data from some external sources and populating it into a DataTable. The data looks like this:

        DATE        WEEK    FACTOR
        3/26/2010   1       RM_GLOBAL_EQUITY
        3/26/2010   1       RM_GLOBAL_GROWTH
        3/26/2010   2       RM_GLOBAL_VALUE
        3/26/2010   2       RM_GLOBAL_SIZE
        3/26/2010   2       RM_GLOBAL_MOMENTUM
        3/26/2010   3       RM_GLOBAL_HIST_BETA

    I have an object model like this:

        public class FactorReturn
        {
            public int WeekNo { get; set; }
            public DateTime WeekDate { get; set; }
            public Dictionary<string, decimal> FactorCollection { get; set; }
        }

    As can be seen, the Date field is always constant, and a single (unique) week can have multiple FACTORS. That is, for the date 3/26/2010, week no. 1 has two FACTORS (RM_GLOBAL_EQUITY and RM_GLOBAL_GROWTH). Similarly, for the date 3/26/2010, week no. 2 has three FACTORS (RM_GLOBAL_VALUE, RM_GLOBAL_SIZE and RM_GLOBAL_MOMENTUM). Now we need to populate this data into our object model. The final output will be:

        WeekDate: 3/26/2010
        WeekNo : 1
            FactorCollection : RM_GLOBAL_EQUITY
            FactorCollection : RM_GLOBAL_GROWTH
        WeekNo : 2
            FactorCollection : RM_GLOBAL_VALUE
            FactorCollection : RM_GLOBAL_SIZE
            FactorCollection : RM_GLOBAL_MOMENTUM
        WeekNo : 3
            FactorCollection : RM_GLOBAL_HIST_BETA

    That is, overall only one single collection, where the Factor type varies depending on week numbers. I have tried but without success; nothing works. Could you please help me? I feel it is very tough. I am using C# 3.0. Thanks
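
    The core step is a group-by on (date, week) with the factors collected per group. A language-agnostic sketch of that step in Python is below (the data and grouping keys mirror the table above; in C# 3.0 the same shape is commonly expressed with LINQ's GroupBy over the DataTable rows):

        from collections import defaultdict

        # Rows as (date, week, factor) tuples, mirroring the DataTable above.
        rows = [
            ('3/26/2010', 1, 'RM_GLOBAL_EQUITY'),
            ('3/26/2010', 1, 'RM_GLOBAL_GROWTH'),
            ('3/26/2010', 2, 'RM_GLOBAL_VALUE'),
            ('3/26/2010', 2, 'RM_GLOBAL_SIZE'),
            ('3/26/2010', 2, 'RM_GLOBAL_MOMENTUM'),
            ('3/26/2010', 3, 'RM_GLOBAL_HIST_BETA'),
        ]

        grouped = defaultdict(list)
        for date, week, factor in rows:
            grouped[(date, week)].append(factor)   # one bucket of factors per (date, week)

        for (date, week), factors in sorted(grouped.items()):
            print(date, week, factors)
        # week 1 -> ['RM_GLOBAL_EQUITY', 'RM_GLOBAL_GROWTH'], and so on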

    Read the article
