Search Results

Search found 2329 results on 94 pages for 'minute'.

  • Getting XText to work

    - by Calmarius
    I know you don't like helping others with their homework, but I have to write an Xtext grammar, write sample code that matches that grammar, and compile it to an HTML file. The lecturer showed us the steps and everything worked for him... He said, "It's so simple it will be a 10 minute job for you," and I believed that. However, at home almost nothing works as expected. And of course there are no more lectures to go to; only the exam awaits me, where I have to show what I've done in order to pass. Moreover, the e-mail I sent him bounced back from the mailer-daemon... I got Xtext along with the Eclipse IDE from the Xtext website, unpacked it, and followed the steps in the official tutorial to get the default project template to work. The tutorial is here: http://wiki.eclipse.org/Xtext/GettingStarted Now I'm at the step "Model". It says to open "MyModel.mydsl". I do that, but the editor does not open. It says: "Could not open the editor: The editor class could not be instantiated. This usually indicates a missing no-arg constructor or that the editor's class name was mistyped in plugin.xml." Since everything is generated, the error message doesn't help me... There was an option to look at the stack trace (it was a mile long), and at the top of it was this exception: java.lang.IllegalStateException: The bundle has not yet been activated. Make sure the Manifest.MF contains 'Bundle-ActivationPolicy: lazy'. I opened MANIFEST.MF and Bundle-ActivationPolicy: lazy was already set... I googled for a solution, but to no avail. It drove me nuts and I gave up. I have no experience with Eclipse, Java, or Xtext; I just want to do my homework and forget about it until I need it again... Does anyone have experience with Xtext? Any help is appreciated. PS: I will keep working on this myself and might resolve the problem in a few hours, but right now I am at a loss.

    Read the article

  • NetBeans not liking libraries in lib-src

    - by DJTripleThreat
    I'm working on a project with a group that is using Eclipse, but I'm using NetBeans. Up until today this wasn't an issue. When updating from the repo, they added some source code as a library under a directory called /lib-src. When I try to compile the code I get an error that it can't find certain packages... these are the packages under /lib-src. Using NetBeans I can add the library as a folder, so now the references to those packages are happy. However, I'm getting this new error when compiling:

        UNEXPECTED TOP-LEVEL ERROR:
        java.lang.OutOfMemoryError: Java heap space
            at java.util.HashMap.addEntry(HashMap.java:753)
            at java.util.HashMap.put(HashMap.java:385)
            at com.android.dx.dex.file.ClassDataItem.addStaticField(ClassDataItem.java:134)
            at com.android.dx.dex.file.ClassDefItem.addStaticField(ClassDefItem.java:280)
            at com.android.dx.dex.cf.CfTranslator.processFields(CfTranslator.java:159)
            at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:130)
            at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:85)
            at com.android.dx.command.dexer.Main.processClass(Main.java:297)
            at com.android.dx.command.dexer.Main.processFileBytes(Main.java:276)
            at com.android.dx.command.dexer.Main.access$100(Main.java:56)
            at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:228)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:134)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.processDirectory(ClassPathOpener.java:190)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:122)
            at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:108)
            at com.android.dx.command.dexer.Main.processOne(Main.java:245)
            at com.android.dx.command.dexer.Main.processAllFiles(Main.java:183)
            at com.android.dx.command.dexer.Main.run(Main.java:139)
            at com.android.dx.command.dexer.Main.main(Main.java:120)
            at com.android.dx.command.Main.main(Main.java:87)
        /home/aaron/NetBeansProjects/xbmc-remote/nbproject/build-impl.xml:411: exec returned: 3
        BUILD FAILED (total time: 1 minute 25 seconds)

    I can include the build-impl.xml file if you need it, but I don't think that is the main issue. Any ideas?

    Read the article

  • Eclipselink factory.createEntityManager() stalls with more than one instance running

    - by Raven
    Hi, with my RCP program I have the problem that I want to have more than one copy running on my PC. The first instance runs very well. If I start a second instance, everything is fine until I want to access the database, using this code:

        ..
        Map properties = new HashMap();
        properties.put("javax.persistence.jdbc.driver", dbDriver);
        properties.put("javax.persistence.jdbc.url", dbUrl);
        properties.put("javax.persistence.jdbc.user", dbUser);
        properties.put("javax.persistence.jdbc.password", dbPass);
        try {
            factory = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT_NAME, properties);
        } catch (Exception e) {
            System.out.println(e);
        }
        em = factory.createEntityManager(); // the second instance stops here
        ...

    with a persistence.xml of:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
            <persistence-unit name="default">
                <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
                <class>myclasshere</class>
                <properties>
                    <property name="eclipselink.ddl-generation" value="create-tables" />
                    <property name="eclipselink.ddl-generation.output-mode" value="database" />
                    <property name="eclipselink.jdbc.read-connections.min" value="1" />
                    <property name="eclipselink.jdbc.write-connections.min" value="1" />
                    <property name="eclipselink.jdbc.batch-writing" value="JDBC" />
                    <property name="eclipselink.logging.level" value="SEVERE" />
                    <property name="eclipselink.logging.timestamp" value="false" />
                    <property name="eclipselink.logging.session" value="false" />
                    <property name="eclipselink.logging.thread" value="false" />
                </properties>
            </persistence-unit>
        </persistence>

    The program stalls in the second instance and does not continue beyond this step: em = factory.createEntityManager(); Stepping through the program in the debugger showed me that nothing happens at this point. Perhaps the program runs into a timeout, but I haven't been waiting for more than a minute... Do you have any clues as to what might cause this stall?

    Read the article

  • XML: Process large data

    - by Atmocreations
    Hello. What XML parser do you recommend for the following purpose? The XML file (formatted, containing whitespace) is around 800 MB. It mostly contains three types of tag (let's call them n, w and r). They have an attribute called id which I'd have to search for, as fast as possible. Removing attributes I don't need could save around 30%, maybe a bit more. First part, for optimizing the second part: is there any good tool (command line, Linux and Windows if possible) to easily remove unused attributes in certain tags? I know that XSLT could be used. Or are there any easy alternatives? Also, I could split it into three files, one for each tag, to gain speed for later parsing... Speed is not too important for this preparation of the data; of course it would be nice if it took minutes rather than hours. Second part: once I have the data prepared, be it shortened or not, I should be able to search for the id attribute I mentioned, this being time-critical. Estimates using wc -l tell me that there are around 3M n-tags and around 418K w-tags. The latter can contain up to approximately 20 subtags each. W-tags also contain some, but those would be stripped away. "All I have to do" is navigate between tags containing certain id attributes. Some tags have references to other ids, therefore giving me a tree, maybe even a graph. The original data is big (as mentioned), but the result set shouldn't be too big, as I only have to pick out certain elements. Now the question: what XML parsing library should I use for this kind of processing? I would use Java 6 in the first instance, keeping in mind that it may later be ported to BlackBerry. Might it be useful to just create a flat file indexing the ids and pointing to an offset in the file? Is it even necessary to do the optimizations mentioned in the first part? Or are there parsers known to be just as fast on the original data? A little note: to test, I took the id on the very last line of the file and searched for it using grep. This took around a minute on a Core 2 Duo. What happens if the file grows even bigger, let's say 5 GB? I appreciate any notes or recommendations. Thank you all very much in advance, and regards.
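
    A minimal sketch of the kind of streaming search this question is circling around, assuming Java 6 and the built-in StAX API (javax.xml.stream); the id attribute name comes from the question, while the class name and command-line arguments are purely illustrative:

        import java.io.FileInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class IdSearch {
            public static void main(String[] args) throws Exception {
                String file = args[0];      // path to the large XML file
                String wantedId = args[1];  // id attribute value to find
                XMLInputFactory factory = XMLInputFactory.newInstance();
                XMLStreamReader reader = factory.createXMLStreamReader(new FileInputStream(file));
                while (reader.hasNext()) {
                    if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                        // the n, w and r elements carry the id attribute
                        String id = reader.getAttributeValue(null, "id");
                        if (wantedId.equals(id)) {
                            System.out.println("found <" + reader.getLocalName() + "> with id " + id);
                            break;
                        }
                    }
                }
                reader.close();
            }
        }

    A pull parser like this keeps memory usage flat regardless of file size; for repeated lookups, the flat-file index of id to byte offset mentioned in the question would avoid rescanning the whole document each time.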

    Read the article

  • MySql multiple selects batching in .net

    - by Amith George
    I have a situation in my application. For each x-axis point in my chart, I am plotting 5 y-axis values. To calculate each of these 5 values, I need to make 4 different queries, i.e., for each x-axis point I need to fire 20 SQL queries. Now, I need to plot 40 such points in my chart. It's resulting in pathetic performance, where it takes close to a minute to get all the data back from the database. Each of the 4 different queries consists of a join between 2 tables. One has only 6 rows; the other has close to 10,000. Each of the 4 queries has a different WHERE clause, so they are different queries. For each point on the x-axis, only the values in the WHERE clauses change. I have tried combining the 4 queries into one big string, basically batching the four selects. These are again batched for each y-axis value. So, for each x-axis point, I am now firing one big command that consists of 20 different select statements. Technically, I should be experiencing a big performance boost, right? Instead of hitting the db 40x5x4 = 800 times, I am now hitting it just 40 times. But instead of taking 60 seconds, it takes 50-55 seconds... not much of a help. I am using MySql 5.1, and the 6.1 version of its .Net connector. What can I do to improve the performance? Edit: One of the 4 queries is as follows:

        SELECT SUM(TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1)) * T2.col1 / (3600 * 1000)) AS TotalTime
        FROM Table T1
        JOIN Table T2 ON T1.col3 = T2.col3
        WHERE T1.col4 = 'i'
          AND T1.col1 >= '2009-12-25 00:00:00'
          AND T1.col2 <= '2009-12-26 00:00:00';

    The other 3 queries are similar; only the WHERE clause changes slightly. This set of 4 queries is fired 5 times: the first 3 times against the join of tables T1 and T2, passing in different values for col4, and the next two times against the join of tables T3 and T2, passing in different values for col4. These 5 values are the y-axis values for a particular x-axis point. The data returned by all these queries has the same format, so we tried doing a UNION ALL on all these queries. No substantial difference. One strange thing, however: after indexing the foreign key on table T1 (while it contained over a lakh, i.e. 100,000+, records), the queries were using the index, but they had become slower. At times, the queries would take double the time to return the data.

    Read the article

  • Slow performance when utilizing Interop.DSOFile to search files by Custom Document Property

    - by Gradatc
    I am new to the world of VB.NET and have been tasked to put together a little program to search a directory of about 2000 Excel spreadsheets and put together a list to display based on the value of a Custom Document Property within that spreadsheet file. Given that I am far from a computer programmer by education or trade, this has been an adventure. I've gotten it to work, the results are fine. The problem is, it takes well over a minute to run. It is being run over a LAN connection. When I run it locally (using a 'test' directory of about 300 files) it executes in about 4 seconds. I'm not sure even what to expect as a reasonable execution speed, so I thought I would ask here. The code is below, if anyone thinks changes there might be of use in speeding things up. Thank you in advance!

        Private Sub listByPt()
            Dim di As New IO.DirectoryInfo(dir_loc)
            Dim aryFiles As IO.FileInfo() = di.GetFiles("*" & ext_to_check)
            Dim fi As IO.FileInfo
            Dim dso As DSOFile.OleDocumentProperties
            Dim sfilename As String
            Dim sheetInfo As Object
            Dim sfileCount As String
            Dim ifilesDone As Integer
            Dim errorList As New ArrayList()
            Dim ErrorFile As Object
            Dim ErrorMessage As String

            'Initialize progress bar values
            ifilesDone = 0
            sfileCount = di.GetFiles("*" & ext_to_check).Length
            Me.lblHighProgress.Text = sfileCount
            Me.lblLowProgress.Text = 0
            With Me.progressMain
                .Maximum = di.GetFiles("*" & ext_to_check).Length
                .Minimum = 0
                .Value = 0
            End With

            'Loop through all files in the search directory
            For Each fi In aryFiles
                dso = New DSOFile.OleDocumentProperties
                sfilename = fi.FullName
                Try
                    dso.Open(sfilename, True) 'grab the PT Initials off of the logsheet
                Catch excep As Runtime.InteropServices.COMException
                    errorList.Add(sfilename)
                End Try
                Try
                    sheetInfo = dso.CustomProperties("PTNameChecker").Value
                Catch ex As Runtime.InteropServices.COMException
                    sheetInfo = "NONE"
                End Try

                'Check to see if the initials on the log sheet
                'match those we are searching for
                If sheetInfo = lstInitials.SelectedValue Then
                    Dim logsheet As New LogSheet
                    logsheet.PTInitials = sheetInfo
                    logsheet.FileName = sfilename
                    PTFiles.Add(logsheet)
                End If

                'update progress bar
                Me.progressMain.Increment(1)
                ifilesDone = ifilesDone + 1
                lblLowProgress.Text = ifilesDone

                dso.Close()
            Next

            lstResults.Items.Clear()

            'loop through results in the PTFiles list
            'add results to the listbox, removing the path info
            For Each showsheet As LogSheet In PTFiles
                lstResults.Items.Add(Path.GetFileNameWithoutExtension(showsheet.FileName))
            Next

            'build error message to display to user
            ErrorMessage = ""
            For Each ErrorFile In errorList
                ErrorMessage += ErrorFile & vbCrLf
            Next
            MsgBox("The following Log Sheets were unable to be checked" _
                & vbCrLf & ErrorMessage)

            PTFiles.Clear() 'empty PTFiles for next use
        End Sub

    Read the article

  • Occasional Date or timezone discrepancy in hudson or maven with jodatime

    - by TheStijn
    Hi, I hope the following explanation makes sense, because it's a weird problem we're facing and hard to describe. We have a Maven project which gets built in Hudson and contains some unit tests where dates are used and asserted. The Hudson server runs on Solaris. Now, occasionally (like 30% of the time) the unit tests using dates fail because 3.5 hours are deducted from the specified time in the unit test, and hence the asserts start failing. The other 70% everything works fine, although nothing at all changed in the code, and we run the Hudson job several times an hour. I added the following code to a unit test to check the time:

        @Test
        public void testDate() {
            System.out.println("new DateMidnight(2011, 1, 5).toDate();");
            System.out.println(new DateMidnight(2011, 1, 5).toDate());
            System.out.println(new DateMidnight(2011, 1, 5).toDate().getTime());

            Calendar cal = Calendar.getInstance();
            cal.set(Calendar.YEAR, 2011);
            cal.set(Calendar.MONTH, 0);
            cal.set(Calendar.DAY_OF_MONTH, 5);
            cal.set(Calendar.HOUR, 0);
            cal.set(Calendar.MINUTE, 0);
            cal.set(Calendar.SECOND, 0);
            cal.set(Calendar.MILLISECOND, 0);
            System.out.println("cal.getTime();");
            System.out.println(cal.getTime());
            System.out.println(cal.getTime().getTime());
        }

    So basically it should print the same thing whether using Joda-Time or plain old Calendar. This is the case in 70% of the runs; for the other 30% I get the following printouts:

        Running TestSuite
        new DateMidnight(2011, 1, 5).toDate();
        Tue Jan 04 21:30:00 MET 2011
        1294173000000
        cal.getTime();
        Wed Jan 05 12:00:00 MET 2011
        1294225200000

    Local Maven tests never appear to pose this problem, and we can't figure out what could be the cause of it. Especially, we can't think of a single reason why the tests sometimes pass and sometimes fail without changing any code or any Hudson or server setting. Also, we run the Maven install with Cobertura, which means that the unit tests are run twice. It also happens that they pass the first time and fail the second time, or the other way around, or that they fail both times. Thanks for any help, Stijn
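
    One way to make a date-sensitive test like the one above deterministic, regardless of which zone the Hudson/Solaris JVM happens to pick up, is to pin both the JDK and the Joda-Time default time zones for the duration of the test. A minimal sketch, assuming JUnit 4 and Joda-Time on the classpath; the chosen zone "Europe/Brussels" is only an example, not something taken from the question:

        import java.util.TimeZone;
        import org.joda.time.DateTimeZone;
        import org.junit.After;
        import org.junit.Before;

        public class DateMidnightTestBase {
            private TimeZone originalJdkZone;
            private DateTimeZone originalJodaZone;

            @Before
            public void pinTimeZone() {
                originalJdkZone = TimeZone.getDefault();
                originalJodaZone = DateTimeZone.getDefault();
                TimeZone.setDefault(TimeZone.getTimeZone("Europe/Brussels"));
                DateTimeZone.setDefault(DateTimeZone.forID("Europe/Brussels"));
            }

            @After
            public void restoreTimeZone() {
                TimeZone.setDefault(originalJdkZone);
                DateTimeZone.setDefault(originalJodaZone);
            }
        }

    If the failures stop once the zones are pinned, the discrepancy is coming from the environment's default zone rather than from Joda-Time or Calendar themselves.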

    Read the article

  • ForEach loop: Output something different on every second result

    - by Wade D Ouellet
    I have two foreach loops and I am trying to output something different for every second result:

        foreach ($wppost as $wp) {
            $wp_title = $wp->post_title;
            $wp_date = strtotime($wp->post_date);
            $wp_slug = $wp->post_name;
            $wp_id = $wp->ID;
            // Start Permalink Template
            $wp_showurl = $wp_url;
            $wp_showurl = str_replace("%year%", date('Y', $wp_date), $wp_showurl);
            $wp_showurl = str_replace("%monthnum%", date('m', $wp_date), $wp_showurl);
            $wp_showurl = str_replace("%day%", date('d', $wp_date), $wp_showurl);
            $wp_showurl = str_replace("%hour%", date('H', $wp_date), $wp_showurl);
            $wp_showurl = str_replace("%minute%", date('i', $wp_date), $wp_showurl);
            $wp_showurl = str_replace("%second%", date('s', $wp_date), $wp_showurl);
            $wp_showurl = str_replace("%postname%", $wp_slug, $wp_showurl);
            $wp_showurl = str_replace("%post_id%", $wp_id, $wp_showurl);
            // Stop Permalink Template
            $wp_posturl = $blog_address . $wp_showurl;
            echo '<li><a href="'.$wp_posturl.'" title="'.$wp_title.'">'.$wp_title.'</a></li>';
        }

    For this one I want it to echo li class="even" instead of just li for every second result. I think this will have to be changed to a for loop to accomplish this, but I am not sure how that is done without breaking it. Exact same thing for this one:

        if(is_array($commenters)) {
            foreach ($commenters as $k) {
                ?><li><?php
                if($ns_options["make_links"] == 1) {
                    $url = ns_get_user_url($k->poster_id);
                    if(trim($url) != '') {
                        echo "<a href='" . $url . "'>";
                    }
                }
                if($ns_options["make_links"] == 2) {
                    $url = ns_get_user_profile($k->poster_id);
                    echo "<a href='" . $url . "'>";
                }
                $name = $bbdb->get_var(" SELECT user_login FROM $bbdb->users WHERE ID = $k->poster_id");
                echo ns_substr_ellipse($name, $ns_options["name_limit"]);
                if(trim($url) != '' && $ns_options["make_links"] == 1) {
                    echo "</a>";
                }
                if($ns_options["make_links"] == 2) {
                    echo "</a>";
                }
                if ($ns_options["show_posts"] == 1) {
                    echo " (" . $k->num_posts . ")\n";
                }
                echo $ns_options["end_html"] . "\n";
                unset($url);
            }
        } else {
            ?></li><?php
        }

    Thanks, Wade

    Read the article

  • Git sh.exe process forking issue on windows XP, slow?

    - by AndyL
    Git is essential to my workflow. I run MSYS Git on Windows XP on my quad core machine with 3GB of RAM, and normally it is responsive and zippy. Suddenly an issue has cropped up whereby it takes 30 seconds to run any command from the Git Bash command prompt, including ls or cd. Interestingly, from the bash prompt it looks like ls runs fairly quickly; I can then see the output from ls, but it then takes ~30 seconds for the prompt to return. If I switch to the Windows command prompt (by running cmd from the start menu), git-related commands also take forever, even just to run. For example, git status can take close to a minute before anything happens. Sometimes the processes simply don't finish. Note that I have "MSYS Git" installed as well as regular "MSYS" for things like MinGW and make. I believe the problem is related to sh.exe located in C:\Program Files\Git\bin. When I run ls from the bash prompt, or when I invoke git from the Windows prompt, task manager shows up to four instances of sh.exe processes that come and go. Here I am waiting for ls to return, and the task manager has git.exe running and four instances of sh.exe. If I ctrl-c in the middle of an ls I sometimes get errors that include:

        sh.exe": fork: Resource temporarily unavailable
        0 [main] sh.exe" 1624 proc_subproc: Couldn't duplicate my handle<0x6FC> for pid 6052, Win32 error 5
        sh.exe": fork: Resource temporarily unavailable

    Or for git status:

        $ git status
        sh.exe": fork: Resource temporarily unavailable
        sh.exe": fork: Resource temporarily unavailable
        sh.exe": fork: Resource temporarily unavailable
        sh.exe": fork: Resource temporarily unavailable

    Can I fix this so that git runs quickly again, and if so, how? Things I have tried:

        Reboot
        Upgrade MSYS Git to the most recent version & reboot
        Upgrade MSYS to the most recent version & reboot
        Uninstall MSYS & uninstall and reinstall MSYS Git alone & reboot

    I'd very much like to not wipe my box and reinstall Windows, but I will if I can't get this fixed. I can no longer code if it takes me 30 s to run git status or cd.

    Read the article

  • Adding unique objects to Core Data

    - by absolut
    I'm working on an iPhone app that gets a number of objects from a database. I'd like to store these using Core Data, but I'm having problems with my relationships. A Detail contains any number of POIs (points of interest). When I fetch a set of POIs from the server, they contain a detail ID. In order to associate the POI with the Detail (by ID), my process is as follows: query the managed object context for the detail ID; if that detail exists, add the POI to it; if it doesn't, create the detail (it has other properties that will be populated lazily). The problem with this is performance. Performing constant queries against Core Data is slow, to the point where adding a list of 150 POIs takes a minute, thanks to the multiple relationships involved. In my old model, before Core Data (various NSDictionary cache objects), this process was super fast (look up a key in a dictionary, then create it if it doesn't exist). I have more relationships than just this one, but pretty much every one has to do this check (some are many-to-many, and they have a real problem). Does anyone have any suggestions for how I can improve this? I could perform fewer queries (by searching for a number of different IDs at once), but I'm not sure how much this will help. Some code:

        POI *poi = [NSEntityDescription insertNewObjectForEntityForName:@"POI"
                    inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]];
        poi.POIid = [attributeDict objectForKey:kAttributeID];
        poi.detailId = [attributeDict objectForKey:kAttributeDetailID];
        Detail *detail = [self findDetailForID:poi.POIid];
        if(detail == nil) {
            detail = [NSEntityDescription insertNewObjectForEntityForName:@"Detail"
                      inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]];
            detail.title = poi.POIid;
            detail.subtitle = @"";
            detail.detailType = [attributeDict objectForKey:kAttributeType];
        }

        -(Detail*)findDetailForID:(NSString*)detailID {
            NSManagedObjectContext *moc = [[UIApplication sharedApplication].delegate managedObjectContext];
            NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"Detail" inManagedObjectContext:moc];
            NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
            [request setEntity:entityDescription];
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"detailid == %@", detailID];
            [request setPredicate:predicate];
            NSLog(@"%@", [predicate description]);
            NSError *error;
            NSArray *array = [moc executeFetchRequest:request error:&error];
            if (array == nil || [array count] != 1) {
                // Deal with error...
                return nil;
            }
            return [array objectAtIndex:0];
        }
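
    Core Data specifics aside, the usual way out of this kind of per-object lookup cost is one bulk fetch of the existing IDs plus an in-memory map, instead of one fetch per incoming object. A language-neutral sketch of that shape in Java, with stand-in types and a placeholder fetch; everything here is illustrative of the pattern, not of the Core Data API:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class FindOrCreateSketch {
            // Stand-ins for the real entities; only the id matters for the pattern.
            static class Detail { final String id; Detail(String id) { this.id = id; } }
            static class POI { final String detailId; POI(String detailId) { this.detailId = detailId; } }

            // Pretend this is the single query returning every existing Detail whose id is in 'ids'.
            static List<Detail> fetchExistingDetails(List<String> ids) {
                return new ArrayList<Detail>(); // placeholder
            }

            static void importPois(List<POI> pois) {
                List<String> ids = new ArrayList<String>();
                for (POI poi : pois) ids.add(poi.detailId);

                // One bulk lookup instead of one query per POI.
                Map<String, Detail> byId = new HashMap<String, Detail>();
                for (Detail d : fetchExistingDetails(ids)) byId.put(d.id, d);

                for (POI poi : pois) {
                    Detail detail = byId.get(poi.detailId);
                    if (detail == null) {            // not in the store yet
                        detail = new Detail(poi.detailId);
                        byId.put(detail.id, detail);
                    }
                    // ... associate poi with detail and save everything in one batch at the end
                }
            }
        }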

    Read the article

  • Is Zend Framework a total waste of my time?

    - by Citizen
    Ok, I'm about 50% done with the "30 minute" quickstart guide from Zend. I must be missing something, because this seems like a total waste of time. The point of this quickstart guide is to create a guestbook, something I could do in 5 minutes with regular naked non-framework PHP. Here's my path to the Zend Framework: c:/program files/wamp/www/_zend/ Here's my path to my quickstart project: c:/program files/wamp/www/_zend/bin/quickstart/ I have a number of questions at this point:

    http://framework.zend.com/docs/quickstart/create-a-model-and-database-table

    1: I'm running the command line to run my database loading script. I get an error stating that it can't find Zend/AutoLoader.php because my path to the Zend library is wrong. I followed all of the steps. I defined the path to my Zend library in the main config file, but for some reason it's defined again in my db loader. In all of these scripts they have me load, the relative path to the Zend library is /../library. Problem is, there's nothing in that folder. To get to my actual Zend folder, you'd need to be (relatively) at /../../../../library. Which brings me to my 2nd question:

    2: Where the #$#$ are the main Zend files supposed to be? The install directions were basically "put it wherever you want", when the real answer (after a bunch of errors and wasted time) was "put it somewhere so that it's really easy to type the full path a thousand times on the command line" and "it also better be in a runnable place on your webserver, since it's going to create your quickstart application in a subdirectory within Zend". Which brings us to the third question:

    3: Am I supposed to have this library in both the parent core Zend (wamp/_zend/library) AND my application (quickstart/library)?

    4: If that is the case, it seems like a ton of wasted files to be uploading. I'd like to use Zend to create products that my customers will download. 5 megs of overhead seems like a bit much. Zend claims you can use these library components separately, but it looks to me like I'm going to have to upload them every time. Which leads to the next question:

    5: It appears that perhaps Zend is more for a single application that is not meant to be distributed. Is this not the case?

    6: According to their default file structure, everything but my /public folder would be above public_html on my server if I wanted this to rest on my TLD. I would need to rename every reference of /public/ to /public_html/, or am I missing something else?

    Read the article

  • Problem running a simple EJB application

    - by Spi1988
    I am currently running a simple EJB application using a stateless Session Bean. I am working on NetBeans 6.8 with Personal Glassfish 3.0 and I have installed on my system both the Java EE and the Java SE. I don't know whether it is relevent but I am running Windows7 64-bit version. The Session Bean I implemented has just one method sayHello(); which just prints hello on the screen. When I try to run the application I'm getting the following error: pre-init: init-private: init-userdir: init-user: init-project: do-init: post-init: init-check: init: deps-jar: deps-j2ee-archive: MyEnterprise-app-client.init: MyEnterprise-ejb.init: MyEnterprise-ejb.deps-jar: MyEnterprise-ejb.compile: MyEnterprise-ejb.library-inclusion-in-manifest: MyEnterprise-ejb.dist-ear: MyEnterprise-app-client.deps-jar: MyEnterprise-app-client.compile: MyEnterprise-app-client.library-inclusion-in-manifest: MyEnterprise-app-client.dist-ear: MyEnterprise-ejb.init: MyEnterprise-ejb.deps-jar: MyEnterprise-ejb.compile: MyEnterprise-ejb.library-inclusion-in-manifest: MyEnterprise-ejb.dist-ear: pre-pre-compile: pre-compile: do-compile: post-compile: compile: pre-dist: post-dist: dist-directory-deploy: pre-run-deploy: Starting Personal GlassFish v3 Domain Personal GlassFish v3 Domain is running. Undeploying ... Initializing... Initial deploying MyEnterprise to C:\Users\Naqsam\Documents\NetBeansProjects\MyEnterprise\dist\gfdeploy\MyEnterprise Completed initial distribution of MyEnterprise post-run-deploy: run-deploy: run-display-browser: run-ac: pre-init: init-private: init-userdir: init-user: init-project: do-init: post-init: init-check: init: deps-jar: deps-j2ee-archive: MyEnterprise-app-client.init: MyEnterprise-ejb.init: MyEnterprise-ejb.deps-jar: MyEnterprise-ejb.compile: MyEnterprise-ejb.library-inclusion-in-manifest: MyEnterprise-ejb.dist-ear: MyEnterprise-app-client.deps-jar: MyEnterprise-app-client.compile: MyEnterprise-app-client.library-inclusion-in-manifest: MyEnterprise-app-client.dist-ear: MyEnterprise-ejb.init: MyEnterprise-ejb.deps-jar: MyEnterprise-ejb.compile: MyEnterprise-ejb.library-inclusion-in-manifest: MyEnterprise-ejb.dist-ear: pre-pre-compile: pre-compile: do-compile: post-compile: compile: pre-dist: post-dist: dist-directory-deploy: pre-run-deploy: Undeploying ... Initial deploying MyEnterprise to C:\Users\Naqsam\Documents\NetBeansProjects\MyEnterprise\dist\gfdeploy\MyEnterprise Completed initial distribution of MyEnterprise post-run-deploy: run-deploy: Warning: Could not find file C:\Users\Naqsam\.netbeans\6.8\GlassFish_v3\generated\xml\MyEnterprise\MyEnterpriseClient.jar to copy. 
Copying 1 file to C:\Users\Naqsam\Documents\NetBeansProjects\MyEnterprise\dist Copying 4 files to C:\Users\Naqsam\Documents\NetBeansProjects\MyEnterprise\dist\MyEnterpriseClient Copying 1 file to C:\Users\Naqsam\Documents\NetBeansProjects\MyEnterprise\dist\MyEnterpriseClient java.lang.NullPointerException at org.glassfish.appclient.client.acc.ACCLogger$1.run(ACCLogger.java:149) at java.security.AccessController.doPrivileged(Native Method) at org.glassfish.appclient.client.acc.ACCLogger.reviseLogger(ACCLogger.java:146) at org.glassfish.appclient.client.acc.ACCLogger.init(ACCLogger.java:93) at org.glassfish.appclient.client.acc.ACCLogger.<init>(ACCLogger.java:80) at org.glassfish.appclient.client.AppClientFacade.createBuilder(AppClientFacade.java:360) at org.glassfish.appclient.client.AppClientFacade.prepareACC(AppClientFacade.java:247) at org.glassfish.appclient.client.acc.agent.AppClientContainerAgent.premain(AppClientContainerAgent.java:75) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:323) at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:338) Java Result: 1 run-MyEnterprise-app-client: run: BUILD SUCCESSFUL (total time: 1 minute 59 seconds) see next post.

    Read the article

  • Subclassing a window from a thread in c#

    - by user258651
    I'm creating a thread that looks for a window. When it finds the window, it overrides its windowproc, and handles WM_COMMAND and WM_CLOSE. Here's the code that looks for the window and subclasses it: public void DetectFileDialogProc() { Window fileDialog = null; // try to find the dialog twice, with a delay of 500 ms each time for (int attempts = 0; fileDialog == null && attempts < 2; attempts++) { // FindDialogs enumerates all windows of class #32770 via an EnumWindowProc foreach (Window wnd in FindDialogs(500)) { IntPtr parent = NativeMethods.User32.GetParent(wnd.Handle); if (parent != IntPtr.Zero) { // we're looking for a dialog whose parent is a dialog as well Window parentWindow = new Window(parent); if (parentWindow.ClassName == NativeMethods.SystemWindowClasses.Dialog) { fileDialog = wnd; break; } } } } // if we found the dialog if (fileDialog != null) { OldWinProc = NativeMethods.User32.GetWindowLong(fileDialog.Handle, NativeMethods.GWL_WNDPROC); NativeMethods.User32.SetWindowLong(fileDialog.Handle, NativeMethods.GWL_WNDPROC, Marshal.GetFunctionPointerForDelegate(new WindowProc(WndProc)).ToInt32()); } } And the windowproc: public IntPtr WndProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam) { lock (this) { if (!handled) { if (msg == NativeMethods.WM_COMMAND || msg == NativeMethods.WM_CLOSE) { // adding to a list. i never access the window via the hwnd from this list, i just treat it as a number _addDescriptor(hWnd); handled = true; } } } return NativeMethods.User32.CallWindowProc(OldWinProc, hWnd, msg, wParam, lParam); } This all works well under normal conditions. But I am seeing two instances of bad behavior in order of badness: If I do not close the dialog within a minute or so, the app crashes. Is this because the thread is getting garbage collected? This would kind of make sense, as far as GC can tell the thread is done? If this is the case, (and I don't know that it is), how can I make the thread stay around as long as the dialog is around? If I immediately close the dialog with the 'X' button (WM_CLOSE) the app crashes. I believe its crashing in the windowproc, but I can't get a breakpoint in there. I'm getting an AccessViolationException, The exception says "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Its a race condition, but of what I don't know. FYI, I had been reseting the old windowproc once I processed the commands, but that was crashing even more often! Any ideas on how I can solve these issues?

    Read the article

  • Am I Leaking ADO.NET Connections?

    - by HardCode
    Here is an example of my code in a DAL. All calls to the database's stored procedures are structured this way, and there is no inline SQL.

        Friend Shared Function Save(ByVal s As MyClass) As Boolean
            Dim cn As SqlClient.SqlConnection = Dal.Connections.MyAppConnection
            Dim cmd As New SqlClient.SqlCommand
            Try
                cmd.Connection = cn
                cmd.CommandType = CommandType.StoredProcedure
                cmd.CommandText = "proc_save_my_class"
                cmd.Parameters.AddWithValue("@param1", s.Foo)
                cmd.Parameters.AddWithValue("@param2", s.Bar)
                Return True
            Finally
                Dal.Utility.CleanupAdoObjects(cmd, cn)
            End Try
        End Function

    Here is the connection factory (if I am using the correct term):

        Friend Shared Function MyAppConnection() As SqlClient.SqlConnection
            Dim cn As New SqlClient.SqlConnection(ConfigurationManager.ConnectionStrings("MyConnectionString").ToString)
            cn.Open()
            If cn.State <> ConnectionState.Open Then
                ' CriticalException is a custom object inheriting from Exception.
                Throw New CriticalException("Could not connect to the database.")
            Else
                Return cn
            End If
        End Function

    Here is the Dal.Utility.CleanupAdoObjects() function:

        Friend Shared Sub CleanupAdoObjects(ByVal cmd As SqlCommand, ByVal cn As SqlConnection)
            If cmd IsNot Nothing Then cmd.Dispose()
            If cn IsNot Nothing AndAlso cn.State <> ConnectionState.Closed Then cn.Close()
        End Sub

    I am getting a lot of "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." error messages reported by the users. The application's DAL opens a connection, reads or saves data, and closes it. No connections are ever left open - intentionally! There is nothing obvious on the Windows 2000 Server hosting SQL Server 2000 that would indicate a problem: nothing in the Event Logs and nothing in the SQL Server logs. The timeouts happen randomly - I cannot reproduce them. They happen early in the day with only 1 to 5 users in the system, and also with around 50 users in the system. The most connections to SQL Server via Performance Monitor, for all databases, has been about 74. The timeouts happen in code that both saves to, and reads from, the database in different parts of the application. The stack trace does not point to one or two offending DAL functions; it has happened in many different places. Does my ADO.NET code appear to be able to leak connections? I've googled around a bit, and I've read that if the connection pool fills up, this can happen. However, I'm not explicitly setting any connection pooling. I've even tried to increase the Connection Timeout in the connection string, but timeouts happen long before the 300 second (5 minute) value:

        <add name="MyConnectionString" connectionString="Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;Connection Timeout=300;"/>

    I'm at a total loss as to what is causing these Timeout issues. Any ideas are appreciated.

    Read the article

  • More efficient way of updating UI from Service than intents?

    - by Donal Rafferty
    I currently have a Service in Android that is a sample VOIP client, so it listens out for SIP messages and, if it receives one, starts up an Activity screen with UI components. The following SIP messages then determine what the Activity is to display on the screen. For example, if it's an incoming call it will display Answer or Reject, or for an outgoing call it will show a dialling screen. At the minute I use Intents to let the Activity know what state it should display. An example is as follows:

        Intent i = new Intent();
        i.setAction(SIPEngine.SIP_TRYING_INTENT);
        i.putExtra("com.net.INCOMING", true);
        sendBroadcast(i);

        Intent x = new Intent();
        x.setAction(CallManager.SIP_INCOMING_CALL_INTENT);
        sendBroadcast(x);
        Log.d("INTENT SENT", "INTENT SENT INCOMING CALL AFTER PROCESSINVITE");

    So the Activity will have a BroadcastReceiver registered for these intents and will switch its state according to the last intent it received. Sample code as follows:

        SipCallListener = new BroadcastReceiver(){
            @Override
            public void onReceive(Context context, Intent intent) {
                String action = intent.getAction();
                if(SIPEngine.SIP_RINGING_INTENT.equals(action)){
                    Log.d("cda ", "Got RINGING action SIPENGINE");
                    ringingSetup();
                }
                if(CallManager.SIP_INCOMING_CALL_INTENT.equals(action)){
                    Log.d("cda ", "Got PHONE RINGING action");
                    incomingCallSetup();
                }
            }
        };

        IntentFilter filter = new IntentFilter(CallManager.SIP_INCOMING_CALL_INTENT);
        filter.addAction(CallManager.SIP_RINGING_CALL_INTENT);
        registerReceiver(SipCallListener, filter);

    This works; however, it seems like it is not very efficient. The Intents get broadcast system-wide, and having Intents fire for different states seems like it could become inefficient the more of them I have to include, as well as adding complexity. So I was wondering if there is a different, more efficient and cleaner way to do this? Is there a way to keep Intents broadcasting only inside an application? Would callbacks be a better idea? If so, why, and in what way should they be implemented?
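
    One commonly suggested alternative to system-wide broadcasts is to bind the Activity to the Service and register a plain listener interface through a local Binder. A minimal sketch, reusing nothing from the question's code except the general SIP setting; the interface, class and method names here are made up for illustration:

        import android.app.Service;
        import android.content.Intent;
        import android.os.Binder;
        import android.os.IBinder;

        public class SipService extends Service {
            public interface CallStateListener {
                void onCallStateChanged(String newState);   // e.g. "RINGING", "INCOMING"
            }

            public class LocalBinder extends Binder {
                public SipService getService() { return SipService.this; }
            }

            private final IBinder binder = new LocalBinder();
            private CallStateListener listener;

            @Override
            public IBinder onBind(Intent intent) { return binder; }

            public void setCallStateListener(CallStateListener l) { listener = l; }

            // Called from the SIP handling code instead of sendBroadcast(...).
            private void notifyState(String state) {
                if (listener != null) listener.onCallStateChanged(state);
            }
        }

    The Activity would bindService(), pull the service out of the LocalBinder, register its listener, and hop back onto the UI thread (e.g. via runOnUiThread) inside the callback. This keeps the traffic inside the process; LocalBroadcastManager from the support library is another option with much the same effect.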

    Read the article

  • Read email from incoming mail server(POP)

    - by nccsbim071
    Hi, I have used some open source code from CodeProject to read email from an incoming mail server (POP server). The code can be found at the following location: http://www.codeproject.com/KB/IP/Pop3MimeClient.aspx So far it works fine; I can read emails. My objective in using this code was to retrieve emails from the POP server and process them. My problem is: if I use Gmail's POP server "pop.gmail.com" and run the application, I get only those emails that I have not retrieved since the last time I ran the application. But if I use my client's POP server, every time I run the application I get all the emails in the mailbox. For example, if I use the Gmail POP server pop.gmail.com: I have three emails on the POP server and I have not run the application yet. I run the application for the first time; it reads the emails, and this time I get all three. I run the application a second time; it does not read any emails this time, because I have already read the 3 existing ones. This is fine, this is what I want. Now I send an email to pop.gmail.com and run the application a third time; this time I get only the email that has just arrived, that is, the fourth one. This is good behaviour, this is what I want. But with my client's POP server: no matter how many times I run the application, it reads all the emails in the mailbox. This will create a problem for me, because I am thinking of building a Windows service that will read emails from the POP server and process them. This service will run continuously: it will process the emails on the POP server, then sleep for, let's say, 1 minute, and then process the emails again. If the application downloaded from CodeProject reads all the emails every time, and my client's mailbox can have thousands of emails, this would not be feasible for me. Is there some setting to be made on my client's POP server that will allow my application to retrieve only those emails that I have not read since the last time I ran the service? Please help. Thanks.
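
    POP3 itself has no server-side notion of "unread since last run"; Gmail's behaviour above is a Gmail-specific quirk of its POP handling. The usual client-side fix is to track the UIDLs of messages you have already processed, so each poll only handles unseen ones. A rough sketch of that bookkeeping in Java, written against a hypothetical Pop3Client wrapper (the interface is invented for illustration and is not the CodeProject API):

        import java.io.*;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        class NewMailPoller {
            interface Pop3Client {                                    // hypothetical wrapper
                List<String> listUidls() throws IOException;          // POP3 UIDL command
                String retrieveMessage(String uidl) throws IOException;
            }

            private final Set<String> seen = new HashSet<String>();
            private final File stateFile = new File("seen-uidls.txt");

            void poll(Pop3Client client) throws IOException {
                loadSeen();
                for (String uidl : client.listUidls()) {
                    if (seen.add(uidl)) {                             // true only for unseen messages
                        process(client.retrieveMessage(uidl));
                    }
                }
                saveSeen();
            }

            private void process(String rawMessage) { /* application-specific handling */ }

            private void loadSeen() throws IOException {
                if (!stateFile.exists()) return;
                BufferedReader in = new BufferedReader(new FileReader(stateFile));
                for (String line; (line = in.readLine()) != null; ) seen.add(line);
                in.close();
            }

            private void saveSeen() throws IOException {
                PrintWriter out = new PrintWriter(new FileWriter(stateFile));
                for (String uidl : seen) out.println(uidl);
                out.close();
            }
        }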

    Read the article

  • Suggestions for designing large-scale Java webapp from the ground up

    - by Chris Thompson
    Hi all, I'm about to start developing a large-scale system and I'm struggling with which direction to proceed in. I've done plenty of Java web apps before and I have plenty of experience with servlet containers and GWT, and some experience with Spring. The problem is most of my webapps have been thrown together just as a proof of concept, and what I'm struggling with is which set of frameworks to use. I need to have both a browser-based application as well as a web service designed to support access from mobile devices (Android and iPhone for now). Ideally, I'd like to design this system in such a way that I don't end up rewriting all of my servlets for each client (browser and phone), although I don't mind having some small checks in there to properly format the data. In addition, although I'm the only developer now, that won't necessarily be the case down the road, and I'd like to design something that scales well both with regards to traffic and number of developers (isn't just a nightmare to maintain). So where I am now is planning on using GWT to design the browser-based interface, but I'm struggling with how to reuse that code to present the interface (most likely XML) for the mobile devices. Using GWT RPC would, I think, make it relatively easy to do all of the AJAX in the browser, but might make generating XML for the mobile phones difficult. In addition, I like the idea of using something like Hibernate for persistence and Spring Security to secure the whole thing. Again, I'm not sure how well those will cooperate with GWT (I think Hibernate should be fine...). There's obviously a lot more to this than I've presented here, but I've tried to give you the 5-minute overview. I'm a bit stumped and was wondering if anybody in the community had any experience starting from this place. Does what I'm trying to do make sense? Is it realistic? I have no doubt I can make all of these frameworks speak the same language, I'm just wondering if it's worth my time to fight with them. Also, am I missing a framework that would be really beneficial? Thanks in advance and sorry for the relatively broad question... Chris
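
    One way to avoid writing the server side twice, hinted at in the question, is to keep the real logic in plain service classes and treat GWT RPC and the mobile XML endpoint as two thin front doors onto the same layer. A minimal sketch with made-up names (the GWT RPC servlet for the browser would implement its GWT service interface and delegate to the same AccountService shown here):

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Plain service layer: no servlet, GWT or XML types in its signature.
        interface AccountService {
            String lookupDisplayName(String accountId);
        }

        // Thin XML endpoint for the mobile clients.
        public class AccountXmlServlet extends HttpServlet {
            private AccountService service;   // wired in, e.g. by Spring

            public void setService(AccountService service) { this.service = service; }

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                String id = req.getParameter("id");
                String name = service.lookupDisplayName(id);
                resp.setContentType("text/xml");
                PrintWriter out = resp.getWriter();
                // real code would XML-escape these values
                out.print("<account id=\"" + id + "\"><name>" + name + "</name></account>");
            }
        }

    Hibernate and Spring Security then only ever see the service layer, which sidesteps most of the "will this cooperate with GWT" question for the backend.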

    Read the article

  • How to display part of an image for the specific width and height?

    - by Brady Chu
    Recently I participated in a web project which has a huge number of images to handle and display on its pages. We know that the width and height of the images end users upload cannot be controlled easily, so they are hard to display. At first, I attempted to zoom the images in/out to reach an appropriate presentation, and I managed it, but my boss is still not satisfied with my solution. The following is my approach:

        var autoResizeImage = function(maxWidth, maxHeight, objImg) {
            var img = new Image();
            img.src = objImg.src;
            img.onload = function() {
                var hRatio;
                var wRatio;
                var Ratio = 1;
                var w = img.width;
                var h = img.height;
                wRatio = maxWidth / w;
                hRatio = maxHeight / h;
                if (maxWidth == 0 && maxHeight == 0) {
                    Ratio = 1;
                } else if (maxWidth == 0) {
                    if (hRatio < 1) {
                        Ratio = hRatio;
                    }
                } else if (maxHeight == 0) {
                    if (wRatio < 1) {
                        Ratio = wRatio;
                    }
                } else if (wRatio < 1 || hRatio < 1) {
                    Ratio = (wRatio <= hRatio ? wRatio : hRatio);
                }
                if (Ratio < 1) {
                    w = w * Ratio;
                    h = h * Ratio;
                }
                w = w <= 0 ? 250 : w;
                h = h <= 0 ? 370 : h;
                objImg.height = h;
                objImg.width = w;
            };
        };

    This only limits the maximum width and height of each image, so every image in the album still has a different width and height, which still looks very ugly. And right at this minute, I know we could create a DIV and use the image as its background image, but that way is too complicated and indirect, and I don't want to take it. So I was wondering whether there is a better way to display images at a fixed width and height without distorting them? Thanks.

    Read the article

  • How to force VS 2010 to skip "builds" of projects which haven't changed?

    - by Ladislav Mrnka
    Our product's solution has more than 100 projects (500+ ksloc of production code). Most of them are C# projects, but we also have a few using C++/CLI to bridge communication with native code. Rebuilding the whole solution takes several minutes. That's fine; if I want to rebuild the solution I expect that it will really take some time. What is not fine is the time needed to build the solution after a full rebuild. Imagine I did a full rebuild and now, without making any changes to the solution, I press Build (F6 or Ctrl+Shift+B). Why does it take 35 s if there was no change? In the output I see that it started "building" each project - it doesn't perform a real build, but it does something which consumes a significant amount of time. That 35 s delay is a pain in the ass. Yes, I can improve the time by not using Build Solution but only Build Project (Shift+F6). If I run Build Project on the particular test project I'm currently working on, it takes "only" 8+ s. It requires me to run the project build on the correct project (the test project, to ensure the dependent tested code is built as well). At least the ReSharper test runner correctly recognizes that only this single project must be built, and rerunning a test usually involves only 8+ s of compilation. My current coding kata is: don't touch Ctrl+Shift+B. The test project build takes 8 s even if I don't make any changes. The reason it takes 8 s is that it also "builds" dependencies = in my case it "builds" more than 20 projects, even though I made changes only to the unit test or a single dependency! I don't want it to touch other projects. Is there a way to simply tell VS to build only projects where some changes were made, plus projects which depend on the changed ones (preferably this second part as a separate build option)? I worry you will tell me that this is exactly what VS is doing, but in an MS way... I want to improve my TDD experience and reduce the compilation time (in TDD, compilation can happen twice per minute). To make this even more frustrating, I'm working in a team where most developers used to work on Java projects prior to joining this one, so you can imagine how pissed off they are when they must use VS, in contrast to the fully incremental compilation in Java. I don't require incremental compilation of classes. I expect working incremental compilation of solutions. Especially in a product like VS 2010 Ultimate, which costs several thousand dollars. I really don't want to get answers like: make a separate solution; unload projects you don't need; etc. I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.

    Read the article

  • MVC Rendered Partial, how to get partial/view model in main model post to controller

    - by user1475788
    I have a text file, and when users upload the file, the controller action method parses it using a state machine and stores some values in a generic list. I pass this back to the view as an IEnumerable. Within my main view, based on this IEnumerable, I render a partial view to iterate over the items and display labels and a textarea. Users can add their input in the textarea. When the users hit the save button, this IEnumerable from the rendered partial view is null, so please advise on any solutions. Here is my main view:

        @model RunLog.Domain.Entities.RunLogEntry
        @{
            ViewBag.Title = "Create";
            Layout = "~/Views/Shared/_Layout.cshtml";
        }
        @using (Html.BeginForm("Create", "RunLogEntry", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <div id="inputTestExceptions" style="display: none;">
                <table class="grid" style="width: 450px; margin: 3px 3px 3px 3px;">
                    <thead>
                        <tr>
                            <th>
                                Exception String
                            </th>
                            <th>
                                Comment
                            </th>
                        </tr>
                    </thead>
                    <tbody>
                        @if (Model.TestExceptions != null)
                        {
                            foreach (var p in Model.TestExceptions)
                            {
                                Html.RenderPartial("RunLogTestExceptionSummary", p);
                            }
                        }
                    </tbody>
                </table>
            </div>
        }

    The partial view is as follows:

        @model RunLog.Domain.Entities.RunLogEntryTestExceptionDisplay
        <tr>
            <td>
                @Model.TestException@
            </td>
            <td>@Html.TextAreaFor(Model.Comment, new { style = "width: 200px; height: 80px;" })
            </td>
        </tr>

    Controller action:

        [HttpPost]
        public ActionResult Create(RunLogEntry runLogEntry, String ServiceRequest, string Hour, string Minute,
            string AMPM, string submit, IEnumerable<HttpPostedFileBase> file, String AssayPerformanceIssues1,
            IEnumerable<RunLogEntryTestExceptionDisplay> models)
        {
        }

    The problem is that the test exceptions, which contain the exception string and comment, are coming back null.

    Read the article

  • AlarmManager triggers PendingIntent too soon

    - by Wezelkrozum
    I've searched for 3 days now but didn't find a solution or a similar problem/question anywhere else. Here is the deal:

        Trigger in 1 hour  - works correctly
        Trigger in 2 hours - goes off after 1:23
        Trigger in 1 day   - goes off after ~11:00

    So why is the AlarmManager so unpredictable and always too soon? Or what am I doing wrong? And is there another way to make it work correctly? This is the way I register my PendingIntent with the AlarmManager (stripped down):

        AlarmManager alarmManager = (AlarmManager)parent.getSystemService(ALARM_SERVICE);
        Intent myIntent = new Intent(parent, UpdateKlasRoostersService.class);
        PendingIntent pendingIntent = PendingIntent.getService(parent, 0, myIntent, PendingIntent.FLAG_UPDATE_CURRENT);

        //Set startdate of PendingIntent so it triggers in 10 minutes
        Calendar start = Calendar.getInstance();
        start.setTimeInMillis(SystemClock.elapsedRealtime());
        start.add(Calendar.MINUTE, 10);

        //Set interval of PendingIntent so it triggers every day
        Integer interval = 1*24*60*60*1000;

        //Cancel any similar instances of this PendingIntent if already scheduled
        alarmManager.cancel(pendingIntent);

        //Schedule PendingIntent
        alarmManager.setRepeating(AlarmManager.ELAPSED_REALTIME_WAKEUP, start.getTimeInMillis(), interval, pendingIntent);

        //Old way I used to schedule a PendingIntent, didn't seem to work either
        //alarmManager.set(AlarmManager.RTC_WAKEUP, start.getTimeInMillis(), pendingIntent);

    It would be awesome if anyone has a solution. Thanks for any help!

    Update: 2 hours ago it worked to trigger it with an interval of 2 hours, but after that it triggered after 1:20 hours. It's getting really weird. I'll track the triggers down with a log file and post it here tomorrow.

    Update: The PendingIntent is scheduled to run every 3 hours. From the log's second line it seems like an old scheduled PendingIntent is still running:

        [2012-5-3 2:15:42 519] Updating Klasroosters
        [2012-5-3 4:15:15 562] Updating Klasroosters
        [2012-5-3 5:15:42 749] Updating Klasroosters
        [2012-5-3 8:15:42 754] Updating Klasroosters
        [2012-5-3 11:15:42 522] Updating Klasroosters

    But I'm sure I cancelled the scheduled PendingIntents before I schedule a new one. And every PendingIntent is recreated in the same way, so it should be exactly the same. If not, this thread's question isn't relevant anymore.
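
    For what it's worth, a sketch of the same schedule expressed purely in elapsed-realtime terms, which avoids routing the start time through Calendar arithmetic before handing it to an ELAPSED_REALTIME_WAKEUP alarm. This is not necessarily the cause of the drift described above, just a way to make the trigger time unambiguous; the service class name is reused from the question, the wrapper class is illustrative:

        import android.app.AlarmManager;
        import android.app.PendingIntent;
        import android.content.Context;
        import android.content.Intent;
        import android.os.SystemClock;

        public class AlarmScheduler {
            // Fire 10 minutes from now, then repeat daily, all on the elapsed-realtime clock.
            public static void scheduleDailyUpdate(Context parent) {
                AlarmManager alarmManager = (AlarmManager) parent.getSystemService(Context.ALARM_SERVICE);
                Intent myIntent = new Intent(parent, UpdateKlasRoostersService.class);
                PendingIntent pendingIntent =
                        PendingIntent.getService(parent, 0, myIntent, PendingIntent.FLAG_UPDATE_CURRENT);

                long firstTrigger = SystemClock.elapsedRealtime() + 10L * 60L * 1000L;

                // Cancel matches the old alarm only if the Intent (same component, no extras)
                // and request code are identical to the ones used when it was scheduled.
                alarmManager.cancel(pendingIntent);
                alarmManager.setRepeating(AlarmManager.ELAPSED_REALTIME_WAKEUP,
                        firstTrigger, AlarmManager.INTERVAL_DAY, pendingIntent);
            }
        }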

    Read the article

  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSuSE. Each one is 2 TB. When I do this, the system runs slowly. My guesses: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. kde4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fills all available memory (24 GB total) with stuff from the disks being copied, which will be read only once, then written and never used again. So then any time I use these apps (or kde4), the disk needs to be read again, and reading the bloat off the disk again makes things freeze/hiccup. Due to the cache being gone and the fact that these bloated applications need lots of cache, this makes the system horribly slow. Since it is USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than just the motherboard going too slow, because if I stop all the copying, it still runs choppy for a while until it has recached everything. And if I restart the copying, it takes a minute before it is choppy again. But also, I can limit it to around 40 MB/s, and it runs faster again (not because it has the right things cached, but because the motherboard buses have lots of extra bandwidth for the system disks). I can fully accept a performance loss from my motherboard's IO capability being completely consumed (which is 100% used, meaning 0% wasted power, which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case.

        # free
                     total       used       free     shared    buffers     cached
        Mem:      24731556   24531876     199680          0    8834056   12998916
        -/+ buffers/cache:    2698904   22032652
        Swap:      4194300      24764    4169536

    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;) And to clarify, I am not asking how to leave memory free for the "system", but for "cache". I know that cache memory is automatically given back to the system when needed, but my problem is that it is not reserved for caching of specific things. Question: Is there some way to tell these copy operations to limit memory usage so some important things remain cached, and therefore any slowdowns are a result of normal disk usage and not of rereading the same commonly used files? For example, is there a setting for the maximum memory per process/user/file system allowed to be used as cache/buffers?

    Read the article

  • Countdown timer using NSTimer in "0:00" format

    - by Joey Pennacchio
    I have been researching for days on how to do this and nobody has an answer. I am creating an app with 5 timers on the same view. I need to create a timer that counts down from "15:00" (minutes and seconds), and, another that counts down from "2:58" (minutes and seconds). The 15 minute timer should not repeat, but it should stop all other timers when it reaches "00:00." The "2:58" timer should repeat until the "15:00" or "Game Clock" reaches 0. Right now, I have scrapped almost all of my code and I'm working on the "2:58" repeating timer, or "rocketTimer." Does anyone know how to do this? Here is my code: #import <UIKit/UIKit.h> @interface FirstViewController : UIViewController { //Rocket Timer int totalSeconds; bool timerActive; NSTimer *rocketTimer; IBOutlet UILabel *rocketCount; int newTotalSeconds; int totalRocketSeconds; int minutes; int seconds; } - (IBAction)Start; @end and my .m #import "FirstViewController.h" @implementation FirstViewController - (NSString *)timeFormatted:(int)newTotalSeconds { int seconds = totalSeconds % 60; int minutes = (totalSeconds / 60) % 60; return [NSString stringWithFormat:@"%i:%02d"], minutes, seconds; } -(IBAction)Start { newTotalSeconds = 178; //for 2:58 newTotalSeconds = newTotalSeconds-1; rocketCount.text = [self timeFormatted:newTotalSeconds]; if(timerActive == NO){ timerActive = YES; newTotalSeconds = 178; [rocketTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(timerLoop) userInfo:nil repeats:YES]; } else{ timerActive = NO; [rocketTimer invalidate]; rocketTimer = nil; } } -(void)timerLoop:(id)sender { totalSeconds = totalSeconds-1; rocketCount.text = [self timeFormatted:totalSeconds]; } - (void)dealloc { [super dealloc]; [rocketTimer release]; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view from its nib. timerActive = NO; } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations return (interfaceOrientation == UIInterfaceOrientationPortrait); } @end

    Read the article

  • Android - Transforming widgets within transformed widgets and the resulting usability issues.

    - by Ben Rose
    Hello. I'm new to Android application development and I'm currently experimenting with various UI ideas. In the image below, you can see a vertically scrolling list of horizontally scrolling galleries (and also textviews as you can see). I'm also doing some matrix and camera transformations which I will come to in a minute. For the background of the list elements, I use green. Blue is the background of the galleries, and red is the background for the images. These are just for my benefit of learning. The galleries being used are extended classes where I overrode the drawChild method to perform a canvas scale operation in order for the image closest to the center (width) to be larger than the others. The list view going vertically, I overrode the drawChild method and used the camera rotations from lack of depth dimension in the canvas functionality. The items in the list are scaled down and rotated relative to their position's proximity to the center (height). I understood that scrolling and clicking would not necessarily follow along with the image transforms, but it appears as though the parent Gallery class's drawing is constrained to the original coordinates as well (see photo below). I would love to hear any insight any of you have regarding how I can change the coordinates of the galleries in what is rendered via gallery scroll and the touch responsiveness of said gallery. Images in the gallery are not same dimensions, so don't let that throw you in looking at the image below Thanks in advance! Ben link to image (could not embed) -- Update: I was using my test application UI and noticed that when I got the UI to the point of the linked image and then I touched the top portion of the next row in the list, the gallery updated to display the proper representation. So, I added a call to clearFocus() in the drawChild method and that resulted in more accurate drawing. It does seem a tad slower, and since I'm on the Incredible, I'm worried it is a bloated solution. In any event, I would still appreciate any thoughts you have regarding the best way to have the views display properly and how to translate the touch events in the gallery's new displayed area to its touchable coordinates so that scrolling on the actual images works when the gallery has moved. Thanks!

    Read the article
