Search Results

Search found 19525 results on 781 pages for 'say'.

Page 606 of 781

  • NSXMLParser: how do I wait until loading has finished?

    - by Archagon
    Let's say I'm using NSXMLParser to load a level (stored as an XML document, obviously) into my iPhone game. NSXMLParser works by assigning a delegate and sending it messages at key moments. How do I ensure that my entire level is loaded before doing anything else? I know I can make my main class the delegate and implement parserDidEndDocument, but this feels very hacky. My main class shouldn't have to know anything about the way the parsing is done! On the other hand, if I make a separate class/delegate for parsing my level, my main class has no way of knowing when the parsing is finished, unless it queries the parsing class continuously or the parsing class sends it a message. Either way, the main class would be tied to the implementation of the parsing class. Can I hide all this event-driven business from the main class and simply make the parser return the level object when it's done? (i.e., newLevel = [[GameLevel alloc] initFromXML:xmlfile], which would in turn use NSXMLParser to load the level and then somehow return when finished.) At the moment, I'm using an external DOM parser, but I'm curious how this would be done with NSXMLParser. Sorry if this is a stupid question -- I'm a bit new to this!

    Read the article

  • Large number of simultaneous long-running operations in Qt

    - by Hostile Fork
    I have some long-running operations that number in the hundreds. At the moment they are each on their own thread. My main goal in using threads is not to speed these operations up. The more important thing in this case is that they appear to run simultaneously.

    I'm aware of cooperative multitasking and fibers. However, I'm trying to avoid anything that would require touching the code in the operations, e.g. peppering them with things like yieldToScheduler(). I also don't want to prescribe that these routines be stylized to be coded to emit queues of bite-sized task items... I want to treat them as black boxes.

    For the moment I can live with these downsides:

    - Maximum # of threads tend to be O(1000)
    - Cost per thread is O(1MB)

    To address the bad cache performance due to context-switches, I did have the idea of a timer which would juggle the priorities such that only idealThreadCount() threads were ever at Normal priority, with all the rest set to Idle. This would let me widen the timeslices, which would mean fewer context switches and still be okay for my purposes.

    Question #1: Is that a good idea at all? One certain downside is it won't work on Linux (docs say no QThread::setPriority() there).

    Question #2: Any other ideas or approaches? Is QtConcurrent thinking about this scenario?

    (Some related reading: how-many-threads-does-it-take-to-make-them-a-bad-choice, many-threads-or-as-few-threads-as-possible, maximum-number-of-threads-per-process-in-linux)

    Read the article

  • SQL Server 2005 and 'General network error'

    - by Mariusz
    I know there is a lot of information on the Internet about solving this problem, but it didn't help me. My Delphi application uses dbExpress controls to access the database and execute SQL queries. Once every couple of days, however, it stops working because the database connection fails. This happens on several different computers with different versions of Windows. MSSQL Server 2005 (version 9.0.4035) is installed on each of them. The application executes queries every couple of seconds, mainly insert commands. Every couple of days I get a series of exceptions like the following one:

        [DBNETLIB][ConnectionOpen (PreLoginHandshake()).]General network error. Check your network documentation.

    And then the SQL Server becomes inaccessible until I restart it manually. The information I found on the Internet says that I should install some service packs, change some registry entries, etc., but believe me, none of these helps and I don't know what else to do now. Could you please help me solve this problem? Any clues or ideas? I can give you some more information about the server or the application if necessary. Thank you very much in advance.

    Read the article

  • Regression Testing and Deployment Strategy

    - by user279516
    I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy?

    The reason I ask is that there seems to be a lot of waste (and risk) in using an agile approach of deploying changes monthly if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH even though they had no feature changes or hot fixes. They have to be tested simply because the business is rolling updates every month. And consider the risk involved: if a mission-critical application takes a few weeks, and multiple departments, to test, is it realistic to expect to regression-test it constantly?

    One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time. These applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but they don't exist at the moment. Any advice?

    Read the article

  • Reachability sometimes fails, even when we do have an internet connection

    - by stoutyhk
    Hi I've searched but can't see a similar question. I've added a method to check for an internet connection per the Reachability example. It works most of the time, but when installed on the iPhone, it quite often fails even when I do have internet connectivity (only when on 3G/EDGE - WiFi is OK). Basically the code below returns NO. If I switch to another app, say Mail or Safari, and connect, then switch back to the app, then the code says the internet is reachable. Kinda seems like it needs a 'nudge'. Anyone seen this before? Any ideas? Many thanks, James

        + (BOOL) doWeHaveInternetConnection {
            BOOL success;
            // google should always be up right?!
            const char *host_name = [@"google.com" cStringUsingEncoding:NSASCIIStringEncoding];
            SCNetworkReachabilityRef reachability = SCNetworkReachabilityCreateWithName(NULL, host_name);
            SCNetworkReachabilityFlags flags;
            success = SCNetworkReachabilityGetFlags(reachability, &flags);
            BOOL isAvailable = success && (flags & kSCNetworkFlagsReachable) && !(flags & kSCNetworkFlagsConnectionRequired);
            if (isAvailable) {
                NSLog(@"Google is reachable: %d", flags);
            } else {
                NSLog(@"Google is unreachable");
            }
            return isAvailable;
        }

    Read the article

  • Loading through Ajax request and bookmarked URL

    - by Varun
    I am working on a ticket system with the following requirement: the home page is divided into two sections.

    - Section 1: Some filter options are shown here (like closed-tickets, open-tickets, all-tickets, tickets-assigned-to-me, etc.). You can select one or more of these filters.
    - Section 2: The list of tickets satisfying the above filters is displayed here.

    Now this is what I want. As I change the filters:

    - the change should be reflected in the URL, so that one is able to bookmark it;
    - an ajax request will go out and the list of tickets satisfying the selected filters will be updated in Section 2.

    I want the same code to be used to load the tickets in both ways: (a) by selecting that set of filters and (b) by using the bookmark to reload the page.

    Here is my rough idea of how to do it:

    1. The URL will contain the selected filters (appended after #).
    2. Changing filters on the page will modify the hash part of the URL and call a function (say ajaxHandler()) to parse the URL to get the filters and then make an ajax request to get the list of tickets to be displayed in Section 2.
    3. I will call the same function ajaxHandler() in window.onload.

    Is this the way? Any suggestions?

    Read the article

  • no statement parsed and wrong number or types of arguments - cfstoredproc

    - by Travis
    I have an Oracle procedure, editBacklog, which I'm calling from a CFM page via cfstoredproc. After several changes to the procedure I started getting ORA-06550: line 1, column 7: PLS-00306: wrong number or types of arguments in call to 'EDITBACKLOG'. I've gotten this before and found that if I changed the name of the procedure it starts working again. I changed the name to editBacklog2 and it worked as I expected it to. I changed the name back to editBacklog and got the same error. I changed the name back to editBacklog2 again and started getting ORA-01003: no statement parsed. NOTHING has changed at this point except for the names. I changed the name yet again to editBacklog3 and it works as expected. As of right now:

    - editBacklog = ORA-06550
    - editBacklog2 = ORA-01003
    - editBacklog3 = works (kinda)

    This whole thing started when I was trying to fix an ORA-01821: date format not recognized error. I fear when I start changing things I'll start getting the same lame behavior described above. Either Oracle or CF is messing with me and I'll end up liking one of them less because of it. I assume it's probably cfstoredproc caching metadata or something, but neither Google, the livedocs, nor OTN have much to say about my situation. I'm not the SA or DBA. Anyone have any ideas?

    Read the article

  • Scope of Connection Object for a Website using Connection Pooling (Local or Instance)

    - by Danny
    For a web application with connection pooling enabled, is it better to work with a locally scoped connection object or an instance-scoped connection object? I know there is probably not a big performance difference between the two (because of the pooling), but would you say that one follows a better pattern than the other? Thanks ;)

        public class MyServlet extends HttpServlet {
            DataSource ds;

            public void init() throws ServletException {
                ds = (DataSource) getServletContext().getAttribute("DBCPool");
            }

            protected void doGet(HttpServletRequest arg0, HttpServletResponse arg1) throws ServletException, IOException {
                SomeWork("SELECT * FROM A");
                SomeWork("SELECT * FROM B");
            }

            void SomeWork(String sql) {
                Connection conn = null;
                try {
                    conn = ds.getConnection();
                    // execute some sql .....
                } finally {
                    if (conn != null) {
                        conn.close(); // return to pool
                    }
                }
            }
        }

    Or:

        public class MyServlet extends HttpServlet {
            DataSource ds;
            Connection conn;

            public void init() throws ServletException {
                ds = (DataSource) getServletContext().getAttribute("DBCPool");
            }

            protected void doGet(HttpServletRequest arg0, HttpServletResponse arg1) throws ServletException, IOException {
                try {
                    conn = ds.getConnection();
                    SomeWork("SELECT * FROM A");
                    SomeWork("SELECT * FROM B");
                } finally {
                    if (conn != null) {
                        conn.close(); // return to pool
                    }
                }
            }

            void SomeWork(String sql) {
                // execute some sql .....
            }
        }

    Read the article

  • SVN Workflow - Chicken Before the Egg - Before merging V1 with V2, I need code from V1 to work on V2

    - by Jake
    Hi, Our distributed team (3 internal devs and 3+ external devs) use SVN to manage our codebase for a web site. We have a branch for each minor version (4.1.0, 4.1.1, 4.1.2, etc...). We have a trunk which we merge each version into when we do a release and publish to our site. An example of the problem we are having is thus: A new feature is added, lets call it "Ability to Create A Project" to 4.1.1. Another feature which depends on the one in 4.1.1 is scheduled to go in 4.1.2, called "Ability to Add Tasks to Projects". So, on Monday, we say 4.1.1 is 'closed' and needs to be tested. Our remote developers usually will start working on features/tickets for 4.1.2 at this point. Throughout the week we will test 4.1.1 and fix any bugs and commit them back to 4.1.1. Then, on Friday or so, we will tag 4.1.1, merge it with trunk, and finally, merge it with 4.1.2. But, for the 4-5 days we are testing, 4.1.2 doesn't have the code from 4.1.1 that some of the new features for 4.1.2 depend on. So a dev who is adding the "Ability to Add Tasks To Projects" feature, doesn't have the "Ability to Create a Project" feature to build upon, and has to do some file copy shenanigans to be able to keep working on it. What could/should we do to smooth out this process? P.S. Apologies if this question has been asked before - I did search but couldn't find what I'm looking for.

    Read the article

  • python win32com EXCEL data input error

    - by Rafal
    Hi, I'm exporting the results of my script into an Excel spreadsheet. Everything works fine and I can put big sets of data into the spreadsheet, but sometimes an error occurs:

        File "C:\Python26\lib\site-packages\win32com\client\dynamic.py", line 550, in __setattr__
          self._oleobj_.Invoke(entry.dispid, 0, invoke_type, 0, value)
        pywintypes.com_error: (-2147352567, 'Exception.', (0, None, None, None, 0, -2146777998), None)

    I suppose it's not a problem of input data format. I put in several different types of data (strings, ints, floats, lists) and it works fine. When I run the script for the second time it works fine - no error. What's going on?

    PS. This is the code that generates the error. What's strange is that the error doesn't occur always; say 30% of runs result in an error:

        import win32com.client

        def Generate_Excel_Report():
            Excel = win32com.client.Dispatch("Excel.Application")
            Excel.Workbooks.Add(1)
            Cells = Excel.ActiveWorkBook.ActiveSheet.Cells
            for i in range(100):
                Row = int(35 + i)
                for j in range(10):
                    Cells(int(Row), int(5 + j)).Value = "string"
            for i in range(100):
                Row = int(135 + i)
                for j in range(10):
                    Cells(int(Row), int(5 + j)).Value = 32.32  # float

        Generate_Excel_Report()

    The strangest thing for me is that when I run the script with the same code and the same input many times, sometimes an error occurs and sometimes not. Thanks in advance for any help.
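    One workaround worth sketching (not a confirmed fix, and write_block below is a made-up helper): one common guess is that the intermittent com_error comes from issuing thousands of individual COM calls while Excel is busy, so writing each block of cells with a single Range assignment - and retrying a couple of times on failure - cuts the number of calls dramatically:

        import time
        import win32com.client
        import pywintypes

        def write_block(sheet, top_row, left_col, rows):
            """Write a 2-D list of values with one COM call instead of one call per cell."""
            n_rows, n_cols = len(rows), len(rows[0])
            target = sheet.Range(sheet.Cells(top_row, left_col),
                                 sheet.Cells(top_row + n_rows - 1, left_col + n_cols - 1))
            for _ in range(3):
                try:
                    target.Value = rows   # one Invoke for the whole block
                    return
                except pywintypes.com_error:
                    time.sleep(0.5)       # give Excel a moment, then retry
            raise RuntimeError("Excel kept rejecting the write")

        excel = win32com.client.Dispatch("Excel.Application")
        excel.Workbooks.Add(1)
        sheet = excel.ActiveWorkbook.ActiveSheet
        write_block(sheet, 35, 5, [["string"] * 10 for _ in range(100)])
        write_block(sheet, 135, 5, [[32.32] * 10 for _ in range(100)])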

    Read the article

  • Dynamically set TextBlock's text binding

    - by Mitchell Skurnik
    I am attempting to write a multilingual application in Silverlight 4.0 and I'm at the point where I can start replacing my static text with dynamic text from a SampleData xaml file. Here is what I have.

    My database:

        <SampleData:something xmlns:SampleData="clr-namespace:Expression.Blend.SampleData.MyDatabase"
                              xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
          <SampleData:something.mysystemCollection>
            <SampleData:mysystem ID="1" English="Menu" German="Menü" French="Menu" Spanish="Menú" Swedish="Meny" Italian="Menu" Dutch="Menu" />
          </SampleData:something.mysystemCollection>
        </SampleData:something>

    My UserControl:

        <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                     mc:Ignorable="d" x:Class="Something.MyUC" d:DesignWidth="1000" d:DesignHeight="600">
          <Grid x:Name="LayoutRoot" DataContext="{Binding Source={StaticResource MyDatabase}}">
            <Grid Height="50" Margin="8,20,8,0" VerticalAlignment="Top" d:DataContext="{Binding mysystemCollection[1]}" x:Name="gTitle">
              <TextBlock x:Name="Title" Text="{Binding English}" TextWrapping="Wrap" Foreground="#FF00A33D" TextAlignment="Center" FontSize="22"/>
            </Grid>
          </Grid>
        </UserControl>

    As you can see, I have 7 languages that I want to deal with. Right now this loads the English version of my text just fine. I have spent the better part of today trying to figure out how to change the binding in my code so I can swap it out when needed (let's say when I change the language via a drop-down). Any help would be great!

    Read the article

  • How do I add a button to my navigationController's right side after pushing another view controller

    - by bobobobo
    So, immediately after pushing a view controller from my table view:

        // Override to support row selection in the table view.
        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            // Navigation logic may go here -- for example, create and push another view controller.
            AnotherViewController *anotherViewController = [[AnotherViewController alloc] initWithNibName:@"AnotherView" bundle:nil];
            [self.navigationController pushViewController:anotherViewController animated:YES];
        }

    Ok, so that makes another view slide on, and you can go back to the previous view ("pop" the current view) by clicking the button that automatically appears in the top left corner of the navigation bar now. Ok, so SAY I want to populate the RIGHT SIDE of the navigation bar with a DONE button, like in the "Notes" app that comes with the iPhone. How would I do that? I tried code like this:

        UIBarButtonItem *doneButton = [[UIBarButtonItem alloc]
            initWithBarButtonSystemItem:UIBarButtonSystemItemDone
                                 target:self
                                 action:@selector(doneFunc)];
        self.navigationController.navigationItem.rightBarButtonItem = doneButton; // not it..
        [doneButton release];

    doneFunc is defined and everything; the button just never appears on the right side.

    Read the article

  • Could CouchDB benefit significantly from the use of BERT instead of JSON?

    - by Victor Rodrigues
    I very much appreciate CouchDB's attempt to use universal web formats in everything it does: RESTful HTTP methods in every interaction, JSON objects, JavaScript code to customize the database and documents. CouchDB seems to scale pretty well, but the individual cost of making a request usually scares 'relational' people away. Many small business applications have to deal with only one machine and that's all. In that case the scalability talk doesn't say much; we need more performance per request, or people will not use it.

    BERT (Binary ERlang Term, http://bert-rpc.org/) has proven to be a faster and lighter format than JSON, and it is native to Erlang, the language in which CouchDB is written. Could we benefit from that, using BERT documents instead of JSON ones? I'm not saying just for retrieving in views, but for everything CouchDB does, including syncing. And, as a consequence of it, use Erlang functions instead of JavaScript ones.

    This would modify some original CouchDB principles, because today it is very web-oriented. But considering that few people would make their database API public and that its data is usually accessed by users through an application, it would be a good deal to have the ability to configure CouchDB to work faster. HTTP+JSON calls could still be handled by CouchDB, with an extra cost in those cases because of parsing.

    Read the article

  • Mercurial Remote Subrepos

    - by Travis G
    I'm trying to set up my Mercurial repository system to work with multiple subrepos. I've basically followed these instructions to set up the client repo with Mercurial client v1.5, and I'm using HgWebDir to host my multiple projects. I have an HgWebDir with the following structure:

        http://myserver/hg
            fooproj
            mylib

    where mylib is some collection of common template library to be consumed by fooproj. The structure of fooproj looks like this:

        fooproj
            doc/
            src/
            .hgignore
            .hgsub
            .hgsubstate

    And .hgsub looks like:

        src/mylib = http://myserver/hg/mylib

    This should work, per my interpretation of the documentation: "The first 'nested' is the path in our working dir, and the second is a URL or path to pull from."

    So, let's say I pull down fooproj to my home folder with:

        ~$ hg clone http://myserver/hg/fooproj foo

    Which pulls down the directory structure properly and adds the folder ~/foo/src/mylib, which is a local Mercurial repository. This is where the problems begin: the mylib folder is empty aside from the items in .hg. With 2 seconds of investigation, one can see that src/mylib/.hg/hgrc is:

        [paths]
        default = http://myserver/hg/fooproj/src/mylib

    which is completely wrong (attempting a pull of that repo will give a 404 because, well, that URL doesn't make any sense). Logically, the default value should be what I specified in .hgsub, or it would get the files from the repository in some way. None of the Mercurial commands return error codes (aside from a pull from within src/mylib), so it clearly believes that it is behaving properly (and just might be), although this does not seem logical at all. What am I doing wrong?

    Read the article

  • simple Stata program

    - by Cyrus S
    I am trying to write a simple program to combine coefficient and standard error estimates from a set of regression fits. I run, say, 5 regressions, and store the coefficient(s) and standard error(s) of interest into vectors (Stata matrix objects, actually). Then, I need to do the following:

    1. Find the mean value of the coefficient estimates.
    2. Combine the standard error estimates according to the formula suggested for combining results from "multiple imputation". The formula is the square root of the formula for "T" on page 6 of the following document: http://bit.ly/b05WX3

    I have written Stata code that does this once, but I want to write this as a function (or "program", in Stata speak) that takes as arguments the vector (or matrix, if possible, to combine multiple estimates at once) of regression coefficient estimates and the vector (or matrix) of corresponding standard error estimates, and then generates 1 and 2 above.

    Here is the code that I wrote (breg is a 1x5 vector of the regression coefficient estimates, and sereg is a 1x5 vector of the associated standard error estimates):

        mat ones = (1,1,1,1,1)
        mat bregmean = (1/5)*(ones*breg')
        scalar bregmean_s = bregmean[1,1]
        mat seregmean = (1/5)*(ones*sereg')
        mat seregbtv = (1/4)*(breg - bregmean#ones)*(breg - bregmean#ones)'
        mat varregmi = (1/5)*(sereg*sereg') + (1+(1/5))*seregbtv
        scalar varregmi_s = varregmi[1,1]
        scalar seregmi = sqrt(varregmi_s)
        disp bregmean_s
        disp seregmi

    This gives the right answer for a single instance. Any pointers would be great!

    UPDATE: I completed the code for combining estimates in a kXm matrix of coefficients/parameters (k is the number of parameters, m the number of imputations). The code can be found here: http://bit.ly/cXJRw1 Thanks to Tristan and Gabi for the pointers.
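    For readers who just want the combining rule itself: the "T" being referenced appears to be Rubin's multiple-imputation formula (which the Stata code above matches): total variance = mean within-imputation variance + (1 + 1/m) x between-imputation variance of the point estimates. A rough sketch of that calculation in Python/NumPy (not Stata; the function name and example numbers are made up) for a k-by-m matrix of estimates:

        import numpy as np

        def combine_mi_estimates(coefs, ses):
            """Pool estimates from m imputations using Rubin's rules.

            coefs, ses: arrays of shape (k, m) -- k parameters, m imputations.
            Returns (pooled_coef, pooled_se), each of shape (k,).
            """
            coefs = np.asarray(coefs, dtype=float)
            ses = np.asarray(ses, dtype=float)
            m = coefs.shape[1]

            pooled_coef = coefs.mean(axis=1)                # 1. mean of the m point estimates
            within = (ses ** 2).mean(axis=1)                # mean within-imputation variance
            between = coefs.var(axis=1, ddof=1)             # between-imputation variance (divides by m - 1)
            total_var = within + (1.0 + 1.0 / m) * between  # Rubin's total variance "T"
            return pooled_coef, np.sqrt(total_var)          # 2. combined standard errors

        # Example: 2 parameters estimated across 5 imputations (made-up numbers).
        b  = [[1.00, 1.10, 0.90, 1.05, 0.95],
              [0.40, 0.50, 0.45, 0.55, 0.42]]
        se = [[0.20, 0.22, 0.19, 0.21, 0.20],
              [0.10, 0.11, 0.10, 0.12, 0.09]]
        print(combine_mi_estimates(b, se))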

    Read the article

  • Enum Naming Convention - Plural

    - by o.k.w
    I'm asking this question despite having read similar questions, but not exactly what I want, at http://stackoverflow.com/questions/495051/c-naming-convention-for-enum-and-matching-property

    I found I have a tendency to name enums in the plural and then 'use' them as singular, for example:

        public enum EntityTypes { Type1, Type2 }

        public class SomeClass {
            /* some code */
            public EntityTypes EntityType { get; set; }
        }

    Of course it works and this is my style, but can anyone find a potential problem with such a convention? I do have an "ugly" naming with the word "Status" though:

        public enum OrderStatuses { Pending, Fulfilled, Error, Blah, Blah }

        public class SomeClass {
            /* some code */
            public OrderStatuses OrderStatus { get; set; }
        }

    Additional info: Maybe my question wasn't clear enough. I often have to think hard when naming the variables of my defined enum types. I know the best practice, but it doesn't help to ease my job of naming those variables. I can't possibly expose all my enum properties (say "Status") as "MyStatus".

    My question: Can anyone find a potential problem with my convention described above? It is NOT about best practice.

    Question rephrased: Well, I guess I should ask the question this way: Can someone come up with a good generic way of naming the enum type such that when used, the naming of the enum 'instance' will be pretty straightforward?

    Read the article

  • When do married programmers find time to work?

    - by dave-keiture
    Hi people, Maybe my problem is unique (though I don't believe so), and probably I should go with it to a psychologist, but since I got married (~a year ago) I feel that I don't have enough time to work anymore. For years I've been working at my day job doing regular stuff, and at nights I've been contributing to open-source projects, freelancing and self-educating. Now I can't work at home at all, because once I start working, my wife either says that there are more important things to do (walk the dog, buy something at the grocery store, etc.) or that I don't pay enough attention to her. As a workaround, I've started getting up at 3-4 AM in the morning to do the things I was usually doing at night before. After a month of such a schedule I feel totally crushed. So... 2 questions:

    1. Do you have the same problems, or am I just a <..., who should say goodbye to programming and start doing "really important things"?
    2. If you have the same problems, when do you find time to work? Maybe you know a cool lifehack or trick to set aside at least some time for doing the things you like :)

    Best regards.

    Read the article

  • Python for a hobbyist programmer (a few questions)

    - by Matt
    I'm a hobbyist programmer (only in TI-Basic before now), and after much, much, much debating with myself, I've decided to learn Python. I don't have a ton of free time to teach myself a hundred languages, and all programming I do will be for personal use or for distributing to people who need it, so I decided that I needed one good, strong language to be good at. My questions:

    1. Is Python powerful enough to handle most things that a typical programmer might do in his off-time? I have in mind things like complex stat generators based on user input for tabletop games, making small games, automating install processes, and building interactive websites, but probably a hundred things along those lines.
    2. Does Python handle networking tasks fairly well? (See the short sketch below.)
    3. Can Python source be obfuscated, or is it going to be open-source by nature? The reason I ask this is because if I make something cool and distribute it, I don't want some idiot script kiddie to edit his own name in and say he wrote it.
    4. And how popular is Python compared to other languages? Ideally, my language would be good and useful, with help found online without extreme difficulty, but not so common that every idiot with a computer knows it. I like the idea of knowing a slightly obscure language.

    Thanks a ton for any help you can provide.
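    On question 2, a tiny illustration of how far the standard library alone goes for basic networking (written in modern Python 3 syntax; example.com is just a public placeholder page):

        from urllib.request import urlopen

        # Fetch a web page using only the standard library -- no third-party packages.
        with urlopen("http://example.com") as response:
            html = response.read().decode("utf-8")

        print(html.splitlines()[0])   # first line of the page's HTML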

    Read the article

  • XAML Deserialization problem

    - by mrwayne
    Hi, I have a block of XAML that I'm trying to deserialize. For argument's sake let's say it looks like below:

        <NS:SomeObject>
          <NS:SomeObject.SomeProperty>
            <NS:SomeDifferentObject SomeOtherProp="a value"/>
          </NS:SomeObject.SomeProperty>
        </NS:SomeObject>

    which I deserialize using the following code:

        XamlReader.Load(File.OpenRead(@"c:\SomeFile.xaml"))

    I have 2 solutions: one where I use unit testing, and another for my web application. When I'm using the unit testing solution, it deserializes fine and works as expected. However, when I try to deserialize using my other project I keep getting an exception like the following:

        'NameSpace.SomeObject' value cannot be assigned to property 'SomeProperty' of object 'NameSpace.SomeObject'. Object of type 'NameSpace.SomeObject' cannot be converted to type 'NameSpace.SomeObject'.

    It's as if it is getting confused or instantiating 2 different types of objects? Note, I do not have similarly named classes or any sort of namespace conflict. The same code executes fine in one solution and not the other. The same project files are referenced in both. Please help!

    Read the article

  • Sentiment analysis for twitter in python

    - by Ran
    I'm looking for an open-source implementation, preferably in Python, of textual sentiment analysis (http://en.wikipedia.org/wiki/Sentiment_analysis). Is anyone familiar with such an open-source implementation I can use?

    I'm writing an application that searches Twitter for some search term, say "youtube", and counts "happy" tweets vs. "sad" tweets. I'm using Google's App Engine, so it's in Python. I'd like to be able to classify the returned search results from Twitter, and I'd like to do that in Python. I haven't been able to find such a sentiment analyzer so far, specifically not in Python. Are you familiar with such an open-source implementation I can use? Preferably one already in Python, but if not, hopefully I can translate it to Python.

    Note, the texts I'm analyzing are VERY short - they are tweets - so ideally this classifier is optimized for such short texts.

    BTW, Twitter does support the ":)" and ":(" operators in search, which aim to do just this, but unfortunately the classification provided by them isn't that great, so I figured I might give this a try myself. Thanks!

    BTW, an early demo is here and the code I have so far is here, and I'd love to open-source it with any interested developer.
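    No library recommendation here, but as a sense of what the classification step itself looks like, a deliberately naive Python sketch (the word lists are invented and this is nothing like a trained classifier; swapping in something like a Naive Bayes model trained on labeled tweets would be the real next step):

        # A naive polarity scorer -- keyword/emoticon counting only,
        # nowhere near a trained sentiment model, but it runs on plain Python.
        POSITIVE = {"love", "great", "awesome", "good", "happy", ":)", ":-)", ":d"}
        NEGATIVE = {"hate", "awful", "terrible", "bad", "sad", ":(", ":-("}

        def classify_tweet(text):
            """Return 'happy', 'sad', or 'neutral' for a short piece of text."""
            tokens = text.lower().split()
            score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
            if score > 0:
                return "happy"
            if score < 0:
                return "sad"
            return "neutral"

        tweets = ["I love youtube :)", "youtube is down again :(", "watching youtube"]
        print([classify_tweet(t) for t in tweets])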

    Read the article

  • What are some Java memory management best practices?

    - by Ascalonian
    I am taking over some applications from a previous developer. When I run the applications through Eclipse, I see the memory usage and the heap size increase a lot. Upon further investigation, I see that they were creating an object over-and-over in a loop as well as other things. I started to go through and do some clean up. But the more I went through, the more questions I had like "will this actually do anything?" For example, instead of declaring a variable outside the loop mentioned above and just setting its value in the loop... they created the object in the loop. What I mean is: for(int i=0; i < arrayOfStuff.size(); i++) { String something = (String) arrayOfStuff.get(i); ... } versus String something = null; for(int i=0; i < arrayOfStuff.size(); i++) { something = (String) arrayOfStuff.get(i); } Am I incorrect to say that the bottom loop is better? Perhaps I am wrong. Also, what about after the second loop above, I set "something" back to null? Would that clear out some memory? In either case, what are some good memory management best practices I could follow that will help keep my memory usage low in my applications? Update: I appreciate everyones feedback so far. However, I was not really asking about the above loops (although by your advice I did go back to the first loop). I am trying to get some best practices that I can keep an eye out for. Something on the lines of "when you are done using a Collection, clear it out". I just really need to make sure not as much memory is being taken up by these applications.

    Read the article

  • Using ThreadPool.QueueUserWorkItem in ASP.NET in a high traffic scenario

    - by Michael Hart
    I've always been under the impression that using the ThreadPool for (let's say non-critical) short-lived background tasks was considered best practice, even in ASP.NET, but then I came across this article that seems to suggest otherwise - the argument being that you should leave the ThreadPool to deal with ASP.NET-related requests.

    So here's how I've been doing small asynchronous tasks so far:

        ThreadPool.QueueUserWorkItem(s => PostLog(logEvent));

    And the article is suggesting instead to create a thread explicitly, similar to:

        new Thread(() => PostLog(logEvent)) { IsBackground = true }.Start();

    The first method has the advantage of being managed and bounded, but there's the potential (if the article is correct) that the background tasks are then vying for threads with ASP.NET request handlers. The second method frees up the ThreadPool, but at the cost of being unbounded and thus potentially using up too many resources.

    So my question is, is the advice in the article correct? If your site was getting so much traffic that your ThreadPool was getting full, then is it better to go out-of-band, or would a full ThreadPool imply that you're getting to the limit of your resources anyway, in which case you shouldn't be trying to start your own threads?

    Clarification: I'm just asking in the scope of small non-critical asynchronous tasks (e.g., remote logging), not expensive work items that would require a separate process (in those cases I agree you'd need a more robust solution).

    Read the article

  • JavaScript: help fixing a working time-ago date function to simply show seconds ago

    - by Scarface
    Hey guys quick question, I am using a javascript function and it works except I can only make it say 0 seconds ago, when the time is under a minute. Can anyone quickly explain what I am doing wrong?

        function handleDate( timestamp ) {
            var n = new Date(), t, ago = " ";
            if( timestamp ) {
                t = Math.round( (n.getTime()/1000 - timestamp)/60 );
                ago += handleSinceDateEndings( t, timestamp );
            } else {
                ago += "";
            }
            return ago;
        }

        function handleSinceDateEndings( t, original_timestamp ) {
            var ago = " ", date;
            if( t <= 1 ) {
                ago += t + " seconds ago";
            } else if( t < 60 ) {
                ago += t + " mins ago";
            } else if( t >= 60 && t <= 120 ) {
                ago += Math.floor( t / 60 ) + " hour ago";
            } else if( t < 1440 ) {
                //console.log(t)
                ago += Math.floor( t / 60 ) + " hours ago";
            } else if( t < 2880 ) {
                ago += "1 day ago";
            } else if( t > 2880 && t < 4320 ) {
                ago += "2 days ago";
            } else {
                date = new Date( parseInt( original_timestamp )*1000 );
                ago += months[ date.getMonth() ] + " " + date.getDate();
            }
            return ago;
        }

        var months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"];

    Read the article

  • How to write a C program using the fork() system call that generates the Fibonacci sequence in the child process

    - by Ellen
    The problem I am having is that when, say, the user enters 7, the display shows: 0 11 2 3 5 8 13 21 Child ends. I cannot seem to figure out how to fix the 11, and why it is displaying that many numbers in the sequence! Can anyone help?

    The number of the sequence will be provided on the command line. For example, if 5 is provided, the first five numbers in the Fibonacci sequence will be output by the child process. Because the parent and child processes have their own copies of the data, it will be necessary for the child to output the sequence. Have the parent invoke the wait() call to wait for the child process to complete before exiting the program. Perform necessary error checking to ensure that a non-negative number is passed on the command line.

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main()
        {
            int a = 0, b = 1, n = a + b, i, ii;
            pid_t pid;

            printf("Enter the number of a Fibonacci Sequence:\n");
            scanf("%d", &ii);

            if (ii < 0)
                printf("Please enter a non-negative integer!\n");
            else {
                pid = fork();
                if (pid == 0) {
                    printf("Child is producing the Fibonacci Sequence...\n");
                    printf("%d %d", a, b);
                    for (i = 0; i < ii; i++) {
                        n = a + b;
                        printf("%d ", n);
                        a = b;
                        b = n;
                    }
                    printf("Child ends\n");
                } else {
                    printf("Parent is waiting for child to complete...\n");
                    wait(NULL);
                    printf("Parent ends\n");
                }
            }
            return 0;
        }

    Read the article

  • Lambdas within Extension methods: Possible memory leak?

    - by Oliver
    I just gave an answer to a quite simple question by using an extension method. But after writing it down I remembered that you can't unsubscribe a lambda from an event handler. So far no big problem. But how does all this behave within an extension method? Below is my code snippet again.

    So can anyone enlighten me: will this lead to myriads of timers hanging around in memory if you call this extension method multiple times? I would say no, because the scope of the timer is limited to this function, so after leaving it no one else has a reference to the timer object. I'm just a little unsure because we're within a static method in a static class here.

        public static class LabelExtensions
        {
            public static Label BlinkText(this Label label, int duration)
            {
                Timer timer = new Timer();
                timer.Interval = duration;
                timer.Tick += (sender, e) =>
                {
                    timer.Stop();
                    label.Font = new Font(label.Font, label.Font.Style ^ FontStyle.Bold);
                };
                label.Font = new Font(label.Font, label.Font.Style | FontStyle.Bold);
                timer.Start();
                return label;
            }
        }

    Read the article
