Search Results

Search found 33297 results on 1332 pages for 'java java ee'.


  • What is the best way to iterate over a large result set in JDBC/iBatis 3?

    - by paul_sns
    We're trying to iterate over a large number of rows from the database and convert them into objects. The behavior will be as follows: the result will be sorted by sequence id, and a new object will be created whenever the sequence id changes. Each object created will be sent to an external service, which will sometimes have to wait before the next one is sent (meaning the next set of data will not be used immediately). We already have code invested in iBatis 3, so an iBatis solution would be the best approach for us (we've tried RowBounds but haven't seen how it does the iteration under the hood). We'd like to balance minimizing memory usage against reducing the number of DB trips. We're also open to a pure JDBC approach, but we'd like the solution to work across different databases. UPDATE: We need to make as few calls to the DB as possible (one call would be the ideal scenario) while also preventing the application from using too much memory. Are there any other solutions for this type of problem, be it pure JDBC or any other technology? Thanks, and hope to hear your insights on this.
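
    A minimal plain-JDBC sketch of the single-query, streaming approach (the connection URL and table/column names here are made up): a forward-only, read-only statement with a modest fetch size lets most drivers page rows in as you iterate, so memory stays bounded. Exact streaming behavior varies by driver; MySQL's Connector/J, for instance, only streams when the fetch size is Integer.MIN_VALUE.

        import java.sql.*;

        public class StreamingIterator {
            public static void main(String[] args) throws SQLException {
                // Hypothetical connection and table/column names, for illustration only.
                try (Connection con = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass");
                     PreparedStatement ps = con.prepareStatement(
                             "SELECT seq_id, payload FROM rows ORDER BY seq_id",
                             ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                    ps.setFetchSize(500);          // hint: fetch rows in chunks, not all at once
                    try (ResultSet rs = ps.executeQuery()) {
                        long currentSeq = -1;
                        while (rs.next()) {
                            long seq = rs.getLong("seq_id");
                            if (seq != currentSeq) {
                                // sequence id changed: finish the previous object, start a new one
                                currentSeq = seq;
                            }
                            // accumulate rs.getString("payload") into the current object and
                            // send it to the external service once it is complete
                        }
                    }
                }
            }
        }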

    Read the article

  • Java_swt in eclipse

    - by Ken
    Can someone tell me where I can find the executable "java_swt"? Multiple sites say it is embedded in Eclipse, and others say it is shipped with the Mac SWT drops. I have the zip file for Mac called "swt-3.5M6-carbon-macosx.zip", and I have the Eclipse IDE installed on my test Mac machine and on a Windows machine, but I cannot find this executable, which I need to run an SWT app smoothly on Mac OS X. Any help would be appreciated. Thanks.

    Read the article

  • presentation layer to go with apache torque

    - by ed1t
    I'm planning on using Apache Torque as my object-relational mapper (ORM), and I was wondering if anybody has suggestions on what framework to use for the presentation layer with Torque. Maybe Spring? I don't know if this helps, but my application is basically going to be a bunch of forms for entering data, and based on that data I'll generate reports in the form of a graph or chart.

    Read the article

  • ant scp task through a proxy

    - by xask
    I am trying to make an Ant build file that remote-copies a war file. The Ant scp task uses the JSch library for remote copying. How do I make it work through a proxy? The JSch library clearly supports proxies, but JSch does not read environment variables like http_proxy, so setting those does not work. Is there another solution?
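
    For reference, JSch itself can be pointed at an HTTP proxy programmatically; a minimal sketch (host names and credentials are placeholders), assuming you can drop down to the JSch API rather than the stock Ant task:

        import com.jcraft.jsch.JSch;
        import com.jcraft.jsch.ProxyHTTP;
        import com.jcraft.jsch.Session;

        public class ScpThroughProxy {
            public static void main(String[] args) throws Exception {
                JSch jsch = new JSch();
                Session session = jsch.getSession("deploy", "target.example.com", 22);
                session.setPassword("secret");

                // Route the SSH connection through an HTTP proxy
                ProxyHTTP proxy = new ProxyHTTP("proxy.example.com", 8080);
                // proxy.setUserPasswd("proxyUser", "proxyPass"); // if the proxy requires auth
                session.setProxy(proxy);

                session.setConfig("StrictHostKeyChecking", "no"); // for illustration only
                session.connect();
                // ... open an exec/sftp channel here to copy the war file ...
                session.disconnect();
            }
        }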

    Read the article

  • Inspect in memory hsqldb while debugging

    - by Albert
    We're using an in-memory HSQLDB to run JUnit tests which operate against a database. The DB is set up before each test via a Spring configuration. All works fine. Now, when a test fails it would be convenient to inspect the values in the in-memory database. Is this possible? If so, how? Our URL is: jdbc.url=jdbc:hsqldb:mem:testdb;sql.enforce_strict_size=true The database is destroyed after each test, but while the debugger is paused the database should still be alive. I've tried connecting with the HSQLDB DatabaseManager. That works, but I don't see any tables or data. Any help is highly appreciated!
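
    One common explanation for seeing no tables is that an external tool connecting to jdbc:hsqldb:mem:testdb opens its own, separate in-process database. A sketch of one workaround, assuming HSQLDB's bundled Server class (org.hsqldb.Server in 1.8.x, org.hsqldb.server.Server in 2.x): start a server inside the test JVM so the same in-memory catalog is reachable over TCP while you sit at a breakpoint:

        import org.hsqldb.Server; // org.hsqldb.server.Server in HSQLDB 2.x

        public class DebugHsqldbServer {
            // Call this once from test setup while debugging.
            public static void expose() {
                Server server = new Server();
                server.setDatabaseName(0, "testdb");
                server.setDatabasePath(0, "mem:testdb");
                server.setPort(9001);
                server.setSilent(true);
                server.start();
                // DatabaseManager can now connect to jdbc:hsqldb:hsql://localhost:9001/testdb
                // and should see the tables the running test created.
            }
        }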

    Read the article

  • Loading a class file immediately AFTER startup

    - by Striker
    We have a few war files deployed inside an ear file. Some of the war files have a class that caches static data from our PLM system in singletons. Since some of the classes take several minutes to load, we use load-on-startup in the web.xml to load them ahead of time. This all works fine until we attempt to redeploy the application on our production servers (WebLogic 10.3): we get an exception from our PLM API about a dll already being loaded. Our PLM vendor has confirmed that this is a problem and stated that they don't support using load-on-startup. This is also a huge problem on our development boxes, where we have to redeploy the app all the time. Most of us, when we're not working on one of the apps that uses a cache, keep them commented out. Obviously we can't do that on the production servers. Right now we transfer the ear to the production server, deploy it in the console, wait for it to crash, shut the app server instance down and then start it up again. We need to find a way around this... One suggestion was to create a servlet that we can call after the server boots that will load the various caches. While this will work, I'm looking for something a bit cleaner. Is there any way to detect that the server has started and then fire off the methods? Thanks.
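
    A sketch of the "a bit cleaner" direction (class names are placeholders, and whether this avoids the duplicate-dll problem depends entirely on how the PLM API loads its native library): a ServletContextListener that kicks off the cache loading on a background thread once the web app has deployed, so deployment itself is not held up by the slow PLM calls:

        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        // Registered in web.xml with a <listener> element instead of load-on-startup.
        public class CacheWarmupListener implements ServletContextListener {

            public void contextInitialized(ServletContextEvent sce) {
                Thread warmup = new Thread(new Runnable() {
                    public void run() {
                        PlmDataCache.getInstance().load();  // placeholder for the real singleton
                    }
                }, "plm-cache-warmup");
                warmup.setDaemon(true);
                warmup.start();
            }

            public void contextDestroyed(ServletContextEvent sce) {
                // Release or unload PLM resources here if the vendor API allows it.
            }
        }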

    Read the article

  • rewrite not a member of LiftRules

    - by José Leal
    Hi guys, I was following the http://www.assembla.com/wiki/show/liftweb/URL_Rewriting tutorial for URL rewriting in Liftweb, but I get this error: "error: value rewrite is not a member of object net.liftweb.http.LiftRules". It is really odd, and the documentation says that it exists. I'm using the IDEA IDE, and I've done everything from scratch, using the Lift Maven blank archetype. Some more info:

        [INFO] ------------------------------------------------------------------------
        [INFO] Building Joseph3
        [INFO] task-segment: [tomcat:run]
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing tomcat:run
        [INFO] [resources:resources {execution: default-resources}]
        [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 0 resource
        [INFO] [yuicompressor:compress {execution: default}]
        [INFO] nb warnings: 0, nb errors: 0
        [INFO] artifact org.mortbay.jetty:jetty: checking for updates from scala-tools.org
        [INFO] artifact org.mortbay.jetty:jetty: checking for updates from central
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [scala:compile {execution: default}]
        [INFO] Checking for multiple versions of scala
        [INFO] /home/dpz/Scala/Doit/Joseph3/src/main/scala:-1: info: compiling
        [INFO] Compiling 2 source files to /home/dpz/Scala/Doit/Joseph3/target/classes at 1274922123910
        [ERROR] /home/dpz/Scala/Doit/Joseph3/src/main/scala/bootstrap/liftweb/Boot.scala:16: error: value rewrite is not a member of object net.liftweb.http.LiftRules
        [INFO] LiftRules.rewrite.prepend(NamedPF("ProductExampleRewrite") {
        [INFO]           ^
        [ERROR] one error found
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1(Exit value: 1)
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 19 seconds
        [INFO] Finished at: Thu May 27 03:02:07 CEST 2010
        [INFO] Final Memory: 20M/175M
        [INFO] ------------------------------------------------------------------------
        Process finished with exit code 1

    Read the article

  • understanding this regex

    - by DarthVader
    I'm trying to understand what the following does: ^([^=]+)(?:(?:\\=)(.+))?$ Any ideas? This is being used here. Obviously it's a command line parser, but I'm trying to understand the syntax so I can actually run the program. This is from commandline-jmxclient; they have no documentation on setting JMX properties, but in their source code there is such an option, so I just want to understand how I can invoke that method.

        Matcher m = Client.CMD_LINE_ARGS_PATTERN.matcher(command);
        if ((m == null) || (!m.matches())) {
            throw new ParseException("Failed parse of " + command, 0);
        }
        this.cmd = m.group(1);
        if ((m.group(2) != null) && (m.group(2).length() > 0))
            this.args = m.group(2).split(",");
        else
            this.args = null;
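
    A small self-contained sketch (the input strings are made up) of what the pattern captures: group 1 is everything before the first '=', and the optional group 2 is everything after it, which the code above then splits on commas:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class PatternDemo {
            public static void main(String[] args) {
                Pattern p = Pattern.compile("^([^=]+)(?:(?:\\=)(.+))?$");

                Matcher m = p.matcher("setAttribute=SomeAttr,newValue"); // hypothetical command
                if (m.matches()) {
                    System.out.println(m.group(1)); // "setAttribute"       -> the command name
                    System.out.println(m.group(2)); // "SomeAttr,newValue"  -> comma-separated args
                }

                Matcher plain = p.matcher("listBeans");  // no '=' at all
                if (plain.matches()) {
                    System.out.println(plain.group(1)); // "listBeans"
                    System.out.println(plain.group(2)); // null -> no arguments
                }
            }
        }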

    Read the article

  • Locating multiple nested If statements using regular expressions

    - by TERACytE
    Is there a way to search for multiple nested if statements in code using a regular expression? For example, an expression that would locate an instance of if statements three or more layers deep with different styles (if, if/else, if/elseif/else):

        if (...) {
            <code>
            if (...) {
                <code>
                if (...)
                    <code>
            } else if (...) {
                <code>
            } else {
                <code>
            }
        } else {
            <code>
        }

    Read the article

  • How to stream XML data using XOM?

    - by Jonik
    Say I want to output a huge set of search results, as XML, into a PrintWriter or an OutputStream, using XOM. The resulting XML would look like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <resultset>
          <result>
            [child elements and data]
          </result>
          ...
          ... [1000s of result elements more]
        </resultset>

    Because the resulting XML document could be big (tens or hundreds of megabytes, perhaps), I want to output it in a streaming fashion (instead of creating the whole Document in memory and then writing that). The granularity of outputting one <result> at a time is fine, so I want to generate one <result> after another, and write it into the stream. Assume there's already a method that helps with iterating the results and generating Element objects:

        public nu.xom.Element getNextResult();

    So I'd simply like to do something like this pseudocode (automatic flushing enabled, so don't worry about that):

        open stream/writer
        write declaration
        write start tag for <resultset>
        while more results:
            write next <result> element
        write end tag for <resultset>
        close stream/writer

    I've been looking at Serializer, but the necessary methods, writeStartTag(Element), writeEndTag(Element), write(DocType), are protected, not public! Is there no other way than to subclass Serializer to be able to use those methods, or to manually write the start and end tags directly into the stream as Strings, bypassing XOM altogether? (The latter wouldn't be too bad in this simple example, but in the general case it would get quite ugly.) Am I missing something, or is XOM just not made for this? With dom4j I could do this easily using XMLWriter - it has constructors that take a Writer or OutputStream, and methods writeOpen(Element), writeClose(Element), writeDocType(DocumentType) etc. Compare to XOM's Serializer, where the only public write method is the one that takes a whole Document. Please refrain from answering if you're not familiar with XOM! I specifically want to know if and how you can do this kind of streaming with that library. (This is related to my question about the best dom4j replacement, where XOM is a strong contender.)
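
    For what it's worth, a minimal sketch of the subclassing route mentioned above: the exposed methods are the protected ones listed in the question; writeXMLDeclaration() and write(Element) are assumed to exist as protected helpers as well, and whether this covers every corner of XOM's serialization rules is another matter:

        import java.io.IOException;
        import java.io.OutputStream;

        import nu.xom.Element;
        import nu.xom.Serializer;

        // Exposes the protected streaming hooks of nu.xom.Serializer.
        public class StreamingSerializer extends Serializer {

            public StreamingSerializer(OutputStream out) {
                super(out);
            }

            public void writeHeader() throws IOException {
                writeXMLDeclaration();   // protected in Serializer
            }

            public void writeOpen(Element element) throws IOException {
                writeStartTag(element);  // protected in Serializer
            }

            public void writeClose(Element element) throws IOException {
                writeEndTag(element);    // protected in Serializer
            }

            public void writeResult(Element result) throws IOException {
                write(result);           // serializes one complete <result> subtree
                flush();
            }
        }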

    Read the article

  • Multiplication of 2 positive numbers giving a negative result

    - by krandiash
    My program is an implementation of a bloom filter. However, when I'm storing my hash function results in the bit array, the function (of the form f(i) = (a*i + b) % m where a,b,i,m are all positive integers) is giving me a negative result. The problem seems to be in the calculation of a*i which is coming out to be negative. Ignore the print statements in the code; those were for debugging. Basically, the value of temp in this block of code is coming out to be negative and so I'm getting an ArrayOutOfBoundsException. m is the bit array length, z is the number of hash functions being used, S is the set of values which are members of this bloom filter and H stores the values of a and b for the hash functions f1, f2, ..., fz.

        public static int[] makeBitArray(int m, int z, ArrayList<Integer> S, int[] H) {
            int[] C = new int[m];
            for (int i = 0; i < z; i++) {
                for (int q = 0; q < S.size(); q++) {
                    System.out.println(H[2*i]);
                    int temp = S.get(q)*(H[2*i]);
                    System.out.println(temp);
                    System.out.println(S.get(q));
                    System.out.println(H[2*i + 1]);
                    System.out.println(m);
                    int t = ((H[2*i]*S.get(q)) + H[2*i + 1])%m;
                    System.out.println(t);
                    C[t] = 1;
                }
            }
            return C;
        }

    Any help is appreciated.
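
    Not from the original post, just a sketch of the usual explanation: a*i overflows the int range and wraps to a negative value, and Java's % then keeps that sign. Doing the multiplication in long and folding the result back into [0, m) avoids the negative index; the numbers below are arbitrary, chosen only to trigger the wrap:

        public class OverflowDemo {
            public static void main(String[] args) {
                int a = 1500000000, i = 2, b = 7, m = 1000;

                int wrapped = (a * i + b) % m;       // a*i overflows int -> negative result
                System.out.println(wrapped);         // prints -289

                long wide = ((long) a * i + b) % m;  // multiply in 64 bits first
                int index = (int) ((wide + m) % m);  // force the remainder into [0, m)
                System.out.println(index);           // prints 7, safe as an array index
            }
        }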

    Read the article

  • How do I stop the m2eclipse plugin interfering with command line mvn builds?

    - by locka
    I use the m2eclipse plugin in Eclipse so that I can import a Maven project. The plugin reads the pom.xml and sorts out the dependencies in the projects in an Eclipse-friendly way, so I'm not looking at a sea of broken references and errors. I use Eclipse for code development, but I usually build the projects from the command line, e.g. "mvn clean install". Unfortunately, when I do this, m2eclipse detects disk activity and attempts to rebuild the workspace. This interferes with the command-line build and sometimes results in a race condition. For example, the command-line build might be in its clean phase but fail because it tries to delete a file or directory which is locked during the workspace rebuild. Aside from that, workspace rebuilding is incredibly slow, and between failed builds and wasted CPU my build process is 2-3x longer than it should be. It isn't an option to not use Eclipse (e.g. to use Netbeans), or to disable m2eclipse; it is a useful plugin except for this behaviour. So my question is, how do I stop m2eclipse from rebuilding the workspace all the time? Can I invoke a manual refresh and otherwise disable this behaviour?

    Read the article

  • Hibernate N+1 from select across multiple tables

    - by Marty Pitt
    Given the following hibernate query: String sql = "select distinct changeset " + "from Changeset changeset " + "join fetch changeset.changeEntries as changeEntry " + "join fetch changeEntry.repositoryEntity as repositoryEntity " + "join fetch repositoryEntity.project as project " + "join fetch changeset.author as changesetAuthor " + "where project.id = :projectID "; Why is this resulting in an N+1 problem? I expect this to generate the following single SQL statement (or something similar) select * from Changeset inner join changeEntry on changeset.id = changeEntry.changeset_id inner join repositoryEntity on changeEntry.repositoryentity_id = repositoryentity.id inner join project on repositoryentity.project_id = project.id where project.id = ? Instead, I see many many select statements firing. The data model here looks like this: I would like the full object graph returned from the Select statement in a single trip to the database, which is why I'm explicitly using "fetch" in the hibernate query. The Hibernate log statements are as follows: Hibernate: select distinct changeset0_.id as id2_0_, changeentr1_.id as id1_1_, repository2_.id as id9_2_, project3_.id as id6_3_, user4_.id as id7_4_, changeset0_.author_id as author5_2_0_, changeset0_.createDate as createDate2_0_, changeset0_.message as message2_0_, changeset0_.revision as revision2_0_, changeentr1_.changeType as changeType1_1_, changeentr1_.changeset_id as changeset4_1_1_, changeentr1_.diff as diff1_1_, changeentr1_.repositoryEntity_id as reposito5_1_1_, changeentr1_.repositoryEntityVersion_id as reposito6_1_1_, changeentr1_.sourceChangeEntry_id as sourceCh7_1_1_, changeentr1_.changeset_id as changeset4_0__, changeentr1_.id as id0__, repository2_.project_id as connecti6_9_2_, repository2_.name as name9_2_, repository2_.parent_id as parent7_9_2_, repository2_.path as path9_2_, repository2_.state as state9_2_, repository2_.type as type9_2_, project3_.projectName as connecti2_6_3_, project3_.driverName as driverName6_3_, project3_.isAnonymous as isAnonym4_6_3_, project3_.lastUpdatedRevision as lastUpda5_6_3_, project3_.password as password6_3_, project3_.url as url6_3_, project3_.username as username6_3_, user4_.username as username7_4_, user4_.email as email7_4_, user4_.name as name7_4_, user4_.password as password7_4_, user4_.principles as principles7_4_, user4_.userType as userType7_4_ from Changeset changeset0_ inner join ChangeEntry changeentr1_ on changeset0_.id=changeentr1_.changeset_id inner join RepositoryEntity repository2_ on changeentr1_.repositoryEntity_id=repository2_.id inner join project project3_ on repository2_.project_id=project3_.id inner join users user4_ on changeset0_.author_id=user4_.id where project3_.id=? 
order by changeset0_.revision desc Hibernate: select repository0_.id as id10_9_, repository0_.changeEntry_id as changeEn2_10_9_, repository0_.repositoryEntity_id as reposito3_10_9_, changeentr1_.id as id1_0_, changeentr1_.changeType as changeType1_0_, changeentr1_.changeset_id as changeset4_1_0_, changeentr1_.diff as diff1_0_, changeentr1_.repositoryEntity_id as reposito5_1_0_, changeentr1_.repositoryEntityVersion_id as reposito6_1_0_, changeentr1_.sourceChangeEntry_id as sourceCh7_1_0_, changeset2_.id as id2_1_, changeset2_.author_id as author5_2_1_, changeset2_.createDate as createDate2_1_, changeset2_.message as message2_1_, changeset2_.revision as revision2_1_, user3_.id as id7_2_, user3_.username as username7_2_, user3_.email as email7_2_, user3_.name as name7_2_, user3_.password as password7_2_, user3_.principles as principles7_2_, user3_.userType as userType7_2_, repository4_.id as id9_3_, repository4_.project_id as connecti6_9_3_, repository4_.name as name9_3_, repository4_.parent_id as parent7_9_3_, repository4_.path as path9_3_, repository4_.state as state9_3_, repository4_.type as type9_3_, project5_.id as id6_4_, project5_.projectName as connecti2_6_4_, project5_.driverName as driverName6_4_, project5_.isAnonymous as isAnonym4_6_4_, project5_.lastUpdatedRevision as lastUpda5_6_4_, project5_.password as password6_4_, project5_.url as url6_4_, project5_.username as username6_4_, repository6_.id as id9_5_, repository6_.project_id as connecti6_9_5_, repository6_.name as name9_5_, repository6_.parent_id as parent7_9_5_, repository6_.path as path9_5_, repository6_.state as state9_5_, repository6_.type as type9_5_, repository7_.id as id10_6_, repository7_.changeEntry_id as changeEn2_10_6_, repository7_.repositoryEntity_id as reposito3_10_6_, repository8_.id as id9_7_, repository8_.project_id as connecti6_9_7_, repository8_.name as name9_7_, repository8_.parent_id as parent7_9_7_, repository8_.path as path9_7_, repository8_.state as state9_7_, repository8_.type as type9_7_, changeentr9_.id as id1_8_, changeentr9_.changeType as changeType1_8_, changeentr9_.changeset_id as changeset4_1_8_, changeentr9_.diff as diff1_8_, changeentr9_.repositoryEntity_id as reposito5_1_8_, changeentr9_.repositoryEntityVersion_id as reposito6_1_8_, changeentr9_.sourceChangeEntry_id as sourceCh7_1_8_ from RepositoryEntityVersion repository0_ left outer join ChangeEntry changeentr1_ on repository0_.changeEntry_id=changeentr1_.id left outer join Changeset changeset2_ on changeentr1_.changeset_id=changeset2_.id left outer join users user3_ on changeset2_.author_id=user3_.id left outer join RepositoryEntity repository4_ on changeentr1_.repositoryEntity_id=repository4_.id left outer join project project5_ on repository4_.project_id=project5_.id left outer join RepositoryEntity repository6_ on repository4_.parent_id=repository6_.id left outer join RepositoryEntityVersion repository7_ on changeentr1_.repositoryEntityVersion_id=repository7_.id left outer join RepositoryEntity repository8_ on repository7_.repositoryEntity_id=repository8_.id left outer join ChangeEntry changeentr9_ on changeentr1_.sourceChangeEntry_id=changeentr9_.id where repository0_.id=? The 2nd one is repeated many times - for a result set of 17 objects, the 2nd statement executed 521 times. I suspect this is as a result of the parent/child relationship in the RepositoryEntity object. For the purposes of this select, I actually only require the parent object fetched. Any suggestions?
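
    A hedged guess rather than a confirmed diagnosis: the repeated query loads RepositoryEntityVersion rows by id, which looks like an eager many-to-one on ChangeEntry that the HQL never fetch-joins, so Hibernate issues one extra select per hydrated row. A sketch of the usual mitigation, with the property and column names assumed from the generated SQL in the log:

        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.ManyToOne;

        @Entity
        public class ChangeEntry {
            @Id
            private Long id;

            // Assumed mapping: marking this association lazy stops Hibernate from
            // loading a RepositoryEntityVersion row for every ChangeEntry it returns.
            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "repositoryEntityVersion_id")
            private RepositoryEntityVersion repositoryEntityVersion;

            // ... remaining fields of the real entity omitted ...
        }

    The alternative, if the version data is actually needed, would be to pull it in with the original query by adding something like "left join fetch changeEntry.repositoryEntityVersion" to the HQL, again assuming that property name.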

    Read the article

  • Disabling JComboBox and retaining original item list

    - by n002213f
    My action listener on a JComboBox invokes a thread. I would like the component to be disabled until the thread completes. I have tried calling setEnabled(false) when the thread starts and setEnabled(true) when it completes. Unfortunately setEnabled(false) clears the combo box list as well. Is there a way of disabling the component but retaining the original list?
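
    A minimal sketch of the disable/re-enable pattern using a SwingWorker (the background work is a placeholder); in this form setEnabled should not touch the model's items, so if the list still empties, the cause is likely elsewhere in the listener:

        import javax.swing.JComboBox;
        import javax.swing.SwingWorker;

        public class ComboDisableExample {
            static void runTask(final JComboBox combo) {
                combo.setEnabled(false);                  // grey out; items stay in the model
                new SwingWorker<Void, Void>() {
                    @Override
                    protected Void doInBackground() throws Exception {
                        Thread.sleep(2000);               // placeholder for the real work
                        return null;
                    }
                    @Override
                    protected void done() {
                        combo.setEnabled(true);           // back on the EDT, re-enable
                    }
                }.execute();
            }
        }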

    Read the article

  • android get duration from maps.google.com directions

    - by urobo
    At the moment I am using this code to query Google Maps for directions from one address to another, then I simply draw the route on a MapView from its GeometryCollection. But this isn't enough: I also need to extract the total expected duration from the KML. Can someone give me a little sample code to help? Thanks.

        StringBuilder urlString = new StringBuilder();
        urlString.append("http://maps.google.com/maps?f=d&hl=en");
        urlString.append("&saddr=");//from
        urlString.append( Double.toString((double)src.getLatitudeE6()/1.0E6 ));
        urlString.append(",");
        urlString.append( Double.toString((double)src.getLongitudeE6()/1.0E6 ));
        urlString.append("&daddr=");//to
        urlString.append( Double.toString((double)dest.getLatitudeE6()/1.0E6 ));
        urlString.append(",");
        urlString.append( Double.toString((double)dest.getLongitudeE6()/1.0E6 ));
        urlString.append("&ie=UTF8&0&om=0&output=kml");
        //Log.d("xxx","URL="+urlString.toString());
        // get the kml (XML) doc. And parse it to get the coordinates(direction route).
        Document doc = null;
        HttpURLConnection urlConnection = null;
        URL url = null;
        try {
            url = new URL(urlString.toString());
            urlConnection = (HttpURLConnection)url.openConnection();
            urlConnection.setRequestMethod("GET");
            urlConnection.setDoOutput(true);
            urlConnection.setDoInput(true);
            urlConnection.connect();
            dbf = DocumentBuilderFactory.newInstance();
            db = dbf.newDocumentBuilder();
            doc = db.parse(urlConnection.getInputStream());
            if (doc.getElementsByTagName("GeometryCollection").getLength() > 0) {
                //String path = doc.getElementsByTagName("GeometryCollection").item(0).getFirstChild().getFirstChild().getNodeName();
                String path = doc.getElementsByTagName("GeometryCollection").item(0).getFirstChild().getFirstChild().getFirstChild().getNodeValue();
                //Log.d("xxx","path="+ path);
                String[] pairs = path.split(" ");
                String[] lngLat = pairs[0].split(","); // lngLat[0]=longitude lngLat[1]=latitude lngLat[2]=height
                // src
                GeoPoint startGP = new GeoPoint((int)(Double.parseDouble(lngLat[1])*1E6),(int)(Double.parseDouble(lngLat[0])*1E6));
                mMapView01.getOverlays().add(new MyOverLay(startGP,startGP,1));
                GeoPoint gp1;
                GeoPoint gp2 = startGP;
                for (int i = 1; i < pairs.length; i++) // the last one would be crash
                {
                    lngLat = pairs[i].split(",");
                    gp1 = gp2;
                    // watch out! For GeoPoint, first:latitude, second:longitude
                    gp2 = new GeoPoint((int)(Double.parseDouble(lngLat[1])*1E6),(int)(Double.parseDouble(lngLat[0])*1E6));
                    mMapView01.getOverlays().add(new MyOverLay(gp1,gp2,2,color));
                    //Log.d("xxx","pair:" + pairs[i]);
                }
                mMapView01.getOverlays().add(new MyOverLay(dest,dest, 3)); // use the default color
            }
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (SAXException e) {
            e.printStackTrace();
        } catch (ParserConfigurationException e) {
            e.printStackTrace();
        }
        }

    Read the article

  • Merging two templates in iText

    - by Shaggy Frog
    Let's say I have two PDF templates created with Adobe Acrobat, which are both single-page, 8.5x11 documents. The first template (A.pdf) has content for the top half of the page. The second template (B.pdf) has content for the bottom half of the page. (It just so happens the content in both templates does not "overlap" each other.) I would like to use iText to take these two templates and create a single, "merged" template from them (C.pdf) that is only a single page (with A.pdf's content on the top half and B.pdf's content on the bottom half). (I do not want to "merge" these two files into a 2-page document. I need the final product to be a single page.) I will be running iText in a servlet environment (Tomcat 6), but I don't think that makes a difference to the answer. Is this possible?
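
    A minimal sketch of one way to approach this with iText's imported pages (package names as in iText 2.x; file names match the question, offsets assume the halves really don't overlap):

        import java.io.FileOutputStream;

        import com.lowagie.text.Document;
        import com.lowagie.text.PageSize;
        import com.lowagie.text.pdf.PdfContentByte;
        import com.lowagie.text.pdf.PdfImportedPage;
        import com.lowagie.text.pdf.PdfReader;
        import com.lowagie.text.pdf.PdfWriter;

        public class MergeTemplates {
            public static void main(String[] args) throws Exception {
                Document document = new Document(PageSize.LETTER);
                PdfWriter writer = PdfWriter.getInstance(document, new FileOutputStream("C.pdf"));
                document.open();

                PdfContentByte canvas = writer.getDirectContent();

                // Import page 1 of each template and stamp both onto the single output page.
                PdfImportedPage top = writer.getImportedPage(new PdfReader("A.pdf"), 1);
                PdfImportedPage bottom = writer.getImportedPage(new PdfReader("B.pdf"), 1);
                canvas.addTemplate(top, 0, 0);      // A.pdf already draws in the top half
                canvas.addTemplate(bottom, 0, 0);   // B.pdf already draws in the bottom half

                document.close();
            }
        }

    Note that imported pages carry page content only; if the Acrobat templates contain interactive form fields, those would need separate handling.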

    Read the article

  • Coding Conventions - Naming Enums

    - by Walter White
    Hi all, Is there a document describing how to name enumerations? My preference is that an enum is a type. So, for instance, you would have:

        enum Fruit { Apple, Orange, Banana, Pear, ... }
        enum NetworkConnectionType { LAN, Data_3g, Data_4g, ... }

    I am opposed to naming them:

        FruitEnum
        NetworkConnectionTypeEnum

    I understand it makes it easy to pick out which files are enums, but by the same logic you would also have:

        NetworkConnectionClass
        FruitClass

    Also, is there a good document describing the same for constants, where to declare them, etc.? Walter

    Read the article

  • about Select algorithm

    - by matin1234
    Hi, I have read about the select algorithm and I have a question; maybe it looks silly, but why do we divide the array into groups of 5 elements? Could we use groups of 7 or 3 elements instead? Thanks
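
    A sketch of the usual reasoning, assuming this refers to the median-of-medians selection algorithm: the group size must make the two recursive pieces add up to strictly less than the whole array, as in the following recurrences (lower-order constants dropped for readability):

        \begin{aligned}
        \text{groups of 5:}\quad & T(n) \le T(n/5) + T(7n/10) + O(n), && 1/5 + 7/10 < 1 \;\Rightarrow\; T(n) = O(n)\\
        \text{groups of 7:}\quad & T(n) \le T(n/7) + T(5n/7) + O(n),  && 1/7 + 5/7 < 1 \;\Rightarrow\; T(n) = O(n)\\
        \text{groups of 3:}\quad & T(n) \le T(n/3) + T(2n/3) + O(n),  && 1/3 + 2/3 = 1 \;\Rightarrow\; T(n) = \Theta(n \log n)
        \end{aligned}

    So groups of 7 (or any larger odd size) also give linear time, but groups of 3 lose the linear bound under this analysis.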

    Read the article
