Search Results

Search found 32961 results on 1319 pages for 'java'.


  • Blob object not working properly even though the class is serialized

    - by GustlyWind
    I have a class which is serialized and converts a very large data object to a blob so it can be saved to the database. The same class has a decode method that converts the blob back into the actual object. Here is the code for encoding and decoding the object:

        private byte[] encode(ScheduledReport schedSTDReport) {
            byte[] bytes = null;
            try {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(bos);
                oos.writeObject(schedSTDReport);
                oos.flush();
                oos.close();
                bos.close();
                //byte [] data = bos.toByteArray();
                //ByteArrayOutputStream baos = new ByteArrayOutputStream();
                //GZIPOutputStream out = new GZIPOutputStream(baos);
                //XMLEncoder encoder = new XMLEncoder(out);
                //encoder.writeObject(schedSTDReport);
                //encoder.close();
                bytes = bos.toByteArray();
                //GZIPOutputStream out = new GZIPOutputStream(bos);
                //out.write(bytes);
                //bytes = bos.toByteArray();
            } catch (Exception e) {
                _log.error("Exception caught while encoding/zipping Scheduled STDReport", e);
            }
            decode(bytes);
            return bytes;
        }

        /*
         * Decode the report definition blob back to the
         * ScheduledReport object.
         */
        private ScheduledReport decode(byte[] bytes) {
            ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
            ScheduledReport sSTDR = null;
            try {
                ObjectInputStream ois = new ObjectInputStream(bais);
                //GZIPInputStream in = new GZIPInputStream(bais);
                //XMLDecoder decoder = new XMLDecoder(in);
                sSTDR = (ScheduledReport)ois.readObject(); //decoder.readObject();
                //decoder.close();
            } catch (Exception e) {
                _log.error("IOException caught while decoding/unzipping Scheduled STDReport", e);
            }
            return sSTDR;
        }

    The problem is that whenever I change anything else in this class (any other method), a new class version is created, and the new version of the class is unable to decode the originally encoded blob. The object I pass to encode is itself serialized, but the problem still occurs. Any ideas? Thanks.
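
    A minimal sketch of the usual fix for this kind of "class changed, old blobs no longer decode" symptom: pin the serialization version with an explicit serialVersionUID, so recompiles stop invalidating previously written blobs. ScheduledReport's real fields are unknown here, so the field shown is a placeholder; to read blobs already written by the old class, the value would have to be the old computed UID (obtainable with the JDK serialver tool), not simply 1L.

        import java.io.Serializable;

        // Sketch only: an explicit serialVersionUID keeps the stream class version
        // stable across recompiles, so previously encoded blobs still deserialize.
        public class ScheduledReport implements Serializable {

            // Without this field the JVM derives a UID from the class shape, and
            // any change to the class makes old serialized blobs unreadable.
            private static final long serialVersionUID = 1L;

            private String reportName;   // placeholder field for illustration
        }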

    Read the article

  • Hibernate cascading: should setting to null on a parent delete children?

    - by EugeneP
    I wonder whether Hibernate works as expected in my case. My cascading options are set to "all,delete-orphan". The mapping is: Table_A @OneToOne Table_B, and Table_B @OneToMany Table_C, so the chain looks like Table_A.getTable_B().getTable_C_Collection(). Suppose there are elements in the Table_C collection. What I expect from Hibernate: if I set the Table_B link to null, then all Table_C collection elements MUST BE DELETED. That does not happen: they become ORPHANED!
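
    For reference, a sketch of the JPA 2 annotation mapping where de-referencing is supposed to delete the child and cascade down. This assumes Hibernate 3.5+ and uses made-up entity names; orphan removal on a one-to-one has historically been patchy in Hibernate, so treat this as the mapping to test against rather than a guarantee.

        import javax.persistence.*;
        import java.util.ArrayList;
        import java.util.List;

        // Sketch only: orphanRemoval=true is what actually deletes a de-referenced
        // child; cascade="all,delete-orphan" in hbm.xml is the older equivalent.
        @Entity
        public class TableA {

            @Id @GeneratedValue
            private Long id;

            // Nulling this reference marks the TableB row as an orphan, so Hibernate
            // deletes it at flush time; the delete then cascades to the TableC rows.
            @OneToOne(cascade = CascadeType.ALL, orphanRemoval = true)
            private TableB tableB;

            public void removeTableB() {
                this.tableB = null;   // orphanRemoval turns this into a DELETE
            }
        }

        @Entity
        class TableB {

            @Id @GeneratedValue
            private Long id;

            @OneToMany(mappedBy = "tableB", cascade = CascadeType.ALL, orphanRemoval = true)
            private List<TableC> tableCs = new ArrayList<TableC>();
        }

        @Entity
        class TableC {

            @Id @GeneratedValue
            private Long id;

            @ManyToOne
            private TableB tableB;
        }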

    Read the article

  • Date Sorting - Latest to Oldest

    - by Erika Szabo
    Given the following sort:

        Collections.sort(someList, new Comparator<SomeObject>() {
            public int compare(final SomeObject object1, final SomeObject object2) {
                return (object1.getSomeDate()).compareTo(object2.getSomeDate());
            }
        });

    Would this give me the objects with the latest dates first, i.e. will the list be ordered from the latest date to the oldest?
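
    A small self-contained sketch for comparison: compareTo on Date sorts ascending (oldest first), so for latest-to-oldest the two arguments are swapped (Collections.reverseOrder would do the same). SomeObject here is a minimal stand-in for the class in the question.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.Date;
        import java.util.List;

        public class DateSortSketch {

            static class SomeObject {
                private final Date someDate;
                SomeObject(Date d) { this.someDate = d; }
                Date getSomeDate() { return someDate; }
            }

            public static void main(String[] args) {
                List<SomeObject> someList = new ArrayList<SomeObject>();
                someList.add(new SomeObject(new Date(1000L)));
                someList.add(new SomeObject(new Date(5000L)));

                // Latest first: compare object2 to object1 instead of object1 to object2.
                Collections.sort(someList, new Comparator<SomeObject>() {
                    public int compare(SomeObject o1, SomeObject o2) {
                        return o2.getSomeDate().compareTo(o1.getSomeDate());
                    }
                });

                System.out.println(someList.get(0).getSomeDate()); // newest date prints first
            }
        }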

    Read the article

  • What is the best way to iterate over a large result set in JDBC/iBatis 3?

    - by paul_sns
    We're trying to iterate over a large number of rows from the database and convert them into objects. The behavior is as follows: the result is sorted by sequence id, and a new object is created whenever the sequence id changes. The created object is sent to an external service, which sometimes has to wait before the next one can be sent (which means the next set of data will not be used immediately). We already have an investment in iBatis 3, so an iBatis solution would be the best approach for us (we've tried RowBounds but haven't seen how it does the iteration under the hood). We'd like to balance minimizing memory usage against reducing the number of DB trips. We're also open to a pure JDBC approach, but we'd like the solution to work on different databases. UPDATE: We need to make as few calls to the DB as possible (one call would be the ideal scenario) while also preventing the application from using too much memory. Are there any other solutions for this type of problem, be it pure JDBC or any other technology? Thanks, and we hope to hear your insights on this.
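
    For the plain-JDBC option, a hedged sketch of "one query, streamed rows, new object per sequence id". Table and column names and the send() call are placeholders, and fetch-size behaviour is driver-specific (MySQL wants Integer.MIN_VALUE for true streaming, PostgreSQL needs autocommit off), so this is an outline rather than a portable guarantee.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class SequenceReader {

            public void readAll(Connection con) throws SQLException {
                con.setAutoCommit(false);                      // some drivers require this for cursors
                PreparedStatement ps = con.prepareStatement(
                        "SELECT seq_id, payload FROM big_table ORDER BY seq_id");
                ps.setFetchSize(500);                          // hint: pull rows in batches, not all at once
                ResultSet rs = ps.executeQuery();
                try {
                    long currentSeq = -1;
                    StringBuilder buffer = new StringBuilder();
                    while (rs.next()) {
                        long seq = rs.getLong("seq_id");
                        if (seq != currentSeq && currentSeq != -1) {
                            send(buffer.toString());           // previous object is complete
                            buffer.setLength(0);
                        }
                        currentSeq = seq;
                        buffer.append(rs.getString("payload"));
                    }
                    if (currentSeq != -1) {
                        send(buffer.toString());               // flush the last object
                    }
                } finally {
                    rs.close();
                    ps.close();
                }
            }

            private void send(String obj) {
                // stand-in for the external-service call described in the question
                System.out.println("sending " + obj);
            }
        }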

    Read the article

  • versioning fails for onetomany collection holder

    - by Alexander Vasiljev
    Given the parent entity:

        @Entity
        public class Expenditure implements Serializable {
            ...
            @OneToMany(mappedBy = "expenditure", cascade = CascadeType.ALL, orphanRemoval = true)
            @OrderBy()
            private List<ExpenditurePeriod> periods = new ArrayList<ExpenditurePeriod>();

            @Version
            private Integer version = 0;
            ...
        }

    and the child entity:

        @Entity
        public class ExpenditurePeriod implements Serializable {
            ...
            @ManyToOne
            @JoinColumn(name="expenditure_id", nullable = false)
            private Expenditure expenditure;
            ...
        }

    while updating both parent and child in one transaction, org.hibernate.StaleObjectStateException is thrown: "Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)". Indeed, Hibernate issues two SQL updates: one changing the parent's properties and another changing the child's properties. Do you know a way to get rid of the parent update when only the child changes? The extra update results in both inefficiency and a false positive for the optimistic lock. Note that both child and parent save their state to the DB correctly. The Hibernate version is 3.5.1-Final.

    Read the article

  • How to create a JAX-RS service where the sub-resource @Path doesn't have a leading slash

    - by Matt
    I've created a JAX-RS service (MyService) that has a number of sub-resources, each of which is a subclass of MySubResource. The sub-resource class is chosen based on the parameters given in the MyService path, for example:

        @Path("/")
        @Produces({"text/html", "text/xml"})
        public class MyResource {
            @Path("people/{id}")
            public MySubResource getPeople(@PathParam("id") String id) {
                return new MyPeopleSubResource(id);
            }

            @Path("places/{id}")
            public MySubResource getPlaces(@PathParam("id") String id) {
                return new MyPlacesSubResource(id);
            }
        }

    where MyPlacesSubResource and MyPeopleSubResource are both subclasses of MySubResource. MySubResource is defined as:

        public abstract class MySubResource {
            protected abstract Results getResults();

            @GET
            public Results get() {
                return getResults();
            }

            @GET
            @Path("xml")
            public Response getXml() {
                return Response.ok(getResults(), MediaType.TEXT_XML_TYPE).build();
            }

            @GET
            @Path("html")
            public Response getHtml() {
                return Response.ok(getResults(), MediaType.TEXT_HTML_TYPE).build();
            }
        }

    Results is processed by the corresponding MessageBodyWriters depending on the MIME type of the response. While this works, it results in paths like /people/Bob/html or /people/Bob/xml, where what I really want is /people/Bob.html or /people/Bob.xml. Does anybody know how to accomplish this?
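
    One possible direction, sketched with heavy caveats: capture the extension in the path template itself, constraining the id with a regex so the dot is not swallowed. This collapses the locator/sub-resource split from the question into a single method purely for illustration, and buildResults() is a placeholder; servlet filters or implementation-specific media-type mappings (e.g. in Jersey) are alternative routes.

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.core.MediaType;
        import javax.ws.rs.core.Response;

        @Path("/")
        public class MyResource {

            // "/people/Bob.html" -> id = "Bob", format = "html"
            @GET
            @Path("people/{id: [^/.]+}.{format: html|xml}")
            public Response getPerson(@PathParam("id") String id,
                                      @PathParam("format") String format) {
                Object results = buildResults(id);   // stand-in for the real Results lookup
                MediaType type = "xml".equals(format)
                        ? MediaType.TEXT_XML_TYPE
                        : MediaType.TEXT_HTML_TYPE;
                return Response.ok(results, type).build();
            }

            private Object buildResults(String id) {
                return "results for " + id;
            }
        }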

    Read the article

  • Disabling JComboBox and retaining original item list

    - by n002213f
    My action listener on a JComboBox invokes a thread, and I would like the component to be disabled until the thread completes. I have tried calling setEnabled(false) when the thread starts and setEnabled(true) when it completes. Unfortunately, setEnabled(false) clears the combo box list as well. Is there a way of disabling the component while retaining the original list?
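
    A minimal sketch of the usual pattern: disable the combo on the EDT, run the long job in a SwingWorker, and re-enable it in done(). setEnabled() only greys the component and does not touch the ComboBoxModel, so if the items vanish, something else is clearing the model. doLongRunningWork() is a placeholder for the thread's work.

        import javax.swing.JComboBox;
        import javax.swing.SwingWorker;

        public class ComboLauncher {

            public void launch(final JComboBox combo) {
                combo.setEnabled(false);                 // grey out; items stay in the model

                new SwingWorker<Void, Void>() {
                    @Override
                    protected Void doInBackground() throws Exception {
                        doLongRunningWork();             // off the EDT
                        return null;
                    }

                    @Override
                    protected void done() {
                        combo.setEnabled(true);          // back on the EDT, list unchanged
                    }
                }.execute();
            }

            private void doLongRunningWork() throws InterruptedException {
                Thread.sleep(2000);                      // stand-in for the real work
            }
        }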

    Read the article

  • Java_swt in eclipse

    - by Ken
    Can someone tell me where I can find the executable "java_swt"? I see multiple sites that say it is embedded in Eclipse, and other sites say it is shipped with the Mac SWT drops. I have the zip file for Mac called "swt-3.5M6-carbon-macosx.zip", and I have the Eclipse IDE installed on my test Mac machine and Windows machine, but I cannot find this executable, which I need to run an SWT app smoothly on Mac OS X. Any help would be appreciated. Thanks.

    Read the article

  • Spring - Transaction Readonly

    - by AAK
    Hello gurus! I just wanted your expert opinions on declarative transaction management for Spring. Here is my setup:

    A. The DAO layer is plain old JDBC using JdbcTemplate (no Hibernate etc.)
    B. The service layer is POJOs with declarative transactions as follows: save*, readonly=false, rollback for Throwable

    Things work fine with the above setup. However, when I add get*, readonly=true, I see errors in my log file saying "Database connection cannot be marked as readonly". This happens for all get* methods in the service layer. Now my questions:

    A. Do I have to mark get* as readonly? All my get* methods are pure DB read operations, and I do not wish to run them in any transaction context. How serious is the above error?
    B. When I remove the get* configuration, I do not see the errors; moreover, all my simple get* operations are performed without transactions. Is this the way to go?
    C. Why would anyone want transactional methods with readonly = true? Is there any practical significance to this configuration?

    Thank you! As always, your responses are much appreciated!
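
    For context, a sketch of the annotation equivalent of the XML advice described above. readOnly=true is only a hint passed to the driver (some drivers reject or ignore it, which is where "connection cannot be marked as readonly" tends to come from); it is not required for get* methods to behave correctly. Class and method names are illustrative.

        import org.springframework.stereotype.Service;
        import org.springframework.transaction.annotation.Transactional;

        @Service
        public class ReportService {

            @Transactional(rollbackFor = Throwable.class)
            public void saveReport(String report) {
                // write path: runs in a read-write transaction
            }

            @Transactional(readOnly = true)
            public String getReport(long id) {
                // read path: readOnly is advisory; a plain JDBC read works the
                // same with or without it
                return "report-" + id;
            }
        }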

    Read the article

  • No exception, no error, but I still don't receive the JSON object from my HTTP POST

    - by user2978538
    My source code:

        final Thread t = new Thread() {
            public void run() {
                Looper.prepare();
                HttpClient client = new DefaultHttpClient();
                HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000);
                HttpResponse response;
                JSONObject obj = new JSONObject();
                try {
                    HttpPost post = new HttpPost("http://pc.dyndns-office.com/mobile.asp");
                    obj.put("Model", ReadIn1);
                    obj.put("Product", ReadIn2);
                    obj.put("Manufacturer", ReadIn3);
                    obj.put("RELEASE", ReadIn4);
                    obj.put("SERIAL", ReadIn5);
                    obj.put("ID", ReadIn6);
                    obj.put("ANDROID_ID", ReadIn7);
                    obj.put("Language", ReadIn8);
                    obj.put("BOARD", ReadIn9);
                    obj.put("BOOTLOADER", ReadIn10);
                    obj.put("BRAND", ReadIn11);
                    obj.put("CPU_API", ReadIn12);
                    obj.put("DISPLAY", ReadIn13);
                    obj.put("FINGERPRINT", ReadIn14);
                    obj.put("HARDWARE", ReadIn15);
                    obj.put("UUID", ReadIn16);
                    StringEntity se = new StringEntity(obj.toString());
                    se.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
                    post.setEntity(se);
                    post.setHeader("host", "http://pc.dyndns-office.com/mobile.asp");
                    response = client.execute(post);
                    if (response != null) {
                        InputStream in = response.getEntity().getContent();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                Looper.loop();
            }
        };
        t.start();

    I want to send a JSON object to a website. As far as I can see, I set the header, but I still get this exception. Can someone help me? (I'm using Android Studio.)

    Edit: I don't get any exceptions anymore, but I still do not receive the JSON packet. When I call the website manually, I get a log file entry. Does anyone know what's wrong?

    Edit 2: When I debug, I get the response "HTTP/1.1 400 bad request". I'm sure it's not a permission problem. Any ideas?
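
    A trimmed-down sketch of the same post for comparison, with two hedged differences: no manual "host" header (HttpClient derives the Host header from the URL, and "http://pc.dyndns-office.com/mobile.asp" is not a valid Host value, which is a plausible cause of the 400), and the response body is actually read so the server's error text becomes visible. Whether that is the real cause depends on the server; the endpoint is the one from the question.

        import org.apache.http.HttpResponse;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.entity.StringEntity;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.util.EntityUtils;
        import org.json.JSONObject;

        public class JsonPoster {

            public String post(JSONObject obj) throws Exception {
                HttpClient client = new DefaultHttpClient();
                HttpPost post = new HttpPost("http://pc.dyndns-office.com/mobile.asp");

                StringEntity se = new StringEntity(obj.toString(), "UTF-8");
                se.setContentType("application/json");
                post.setEntity(se);                      // no manual Host header

                HttpResponse response = client.execute(post);
                String body = EntityUtils.toString(response.getEntity()); // read it, not just open the stream
                System.out.println(response.getStatusLine() + "\n" + body);
                return body;
            }
        }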

    Read the article

  • Servlet mapping url patterns

    - by Scobal
    I have the following URLs that need mapping to two different servlets. Can anyone suggest a working url-pattern, please?

    vehlocsearch-ws:
        /ws/vehlocsearch/vehlocsearch
        /ws/vehavailrate/vehavailratevehlocsearch
        /ws/vehavailrate/vehavailratevehlocsearch.wsdl

    vehavailrate-ws:
        /ws/vehavailrate/vehavailrate
        /ws/vehavailrate/vehavailratevehavailrate
        /ws/vehavailrate/vehavailratevehavailrate.wsdl

    So far I have this, which feels right, but isn't:

        <servlet-mapping>
            <servlet-name>vehlocsearch-ws</servlet-name>
            <url-pattern>*.vehlocsearch*</url-pattern>
        </servlet-mapping>
        <servlet-mapping>
            <servlet-name>vehavailrate-ws</servlet-name>
            <url-pattern>*.vehavailrate*</url-pattern>
        </servlet-mapping>

    Note: I have no control over the incoming URLs.

    Read the article

  • Hibernate N+1 from select across multiple tables

    - by Marty Pitt
    Given the following hibernate query: String sql = "select distinct changeset " + "from Changeset changeset " + "join fetch changeset.changeEntries as changeEntry " + "join fetch changeEntry.repositoryEntity as repositoryEntity " + "join fetch repositoryEntity.project as project " + "join fetch changeset.author as changesetAuthor " + "where project.id = :projectID "; Why is this resulting in an N+1 problem? I expect this to generate the following single SQL statement (or something similar) select * from Changeset inner join changeEntry on changeset.id = changeEntry.changeset_id inner join repositoryEntity on changeEntry.repositoryentity_id = repositoryentity.id inner join project on repositoryentity.project_id = project.id where project.id = ? Instead, I see many many select statements firing. The data model here looks like this: I would like the full object graph returned from the Select statement in a single trip to the database, which is why I'm explicitly using "fetch" in the hibernate query. The Hibernate log statements are as follows: Hibernate: select distinct changeset0_.id as id2_0_, changeentr1_.id as id1_1_, repository2_.id as id9_2_, project3_.id as id6_3_, user4_.id as id7_4_, changeset0_.author_id as author5_2_0_, changeset0_.createDate as createDate2_0_, changeset0_.message as message2_0_, changeset0_.revision as revision2_0_, changeentr1_.changeType as changeType1_1_, changeentr1_.changeset_id as changeset4_1_1_, changeentr1_.diff as diff1_1_, changeentr1_.repositoryEntity_id as reposito5_1_1_, changeentr1_.repositoryEntityVersion_id as reposito6_1_1_, changeentr1_.sourceChangeEntry_id as sourceCh7_1_1_, changeentr1_.changeset_id as changeset4_0__, changeentr1_.id as id0__, repository2_.project_id as connecti6_9_2_, repository2_.name as name9_2_, repository2_.parent_id as parent7_9_2_, repository2_.path as path9_2_, repository2_.state as state9_2_, repository2_.type as type9_2_, project3_.projectName as connecti2_6_3_, project3_.driverName as driverName6_3_, project3_.isAnonymous as isAnonym4_6_3_, project3_.lastUpdatedRevision as lastUpda5_6_3_, project3_.password as password6_3_, project3_.url as url6_3_, project3_.username as username6_3_, user4_.username as username7_4_, user4_.email as email7_4_, user4_.name as name7_4_, user4_.password as password7_4_, user4_.principles as principles7_4_, user4_.userType as userType7_4_ from Changeset changeset0_ inner join ChangeEntry changeentr1_ on changeset0_.id=changeentr1_.changeset_id inner join RepositoryEntity repository2_ on changeentr1_.repositoryEntity_id=repository2_.id inner join project project3_ on repository2_.project_id=project3_.id inner join users user4_ on changeset0_.author_id=user4_.id where project3_.id=? 
order by changeset0_.revision desc Hibernate: select repository0_.id as id10_9_, repository0_.changeEntry_id as changeEn2_10_9_, repository0_.repositoryEntity_id as reposito3_10_9_, changeentr1_.id as id1_0_, changeentr1_.changeType as changeType1_0_, changeentr1_.changeset_id as changeset4_1_0_, changeentr1_.diff as diff1_0_, changeentr1_.repositoryEntity_id as reposito5_1_0_, changeentr1_.repositoryEntityVersion_id as reposito6_1_0_, changeentr1_.sourceChangeEntry_id as sourceCh7_1_0_, changeset2_.id as id2_1_, changeset2_.author_id as author5_2_1_, changeset2_.createDate as createDate2_1_, changeset2_.message as message2_1_, changeset2_.revision as revision2_1_, user3_.id as id7_2_, user3_.username as username7_2_, user3_.email as email7_2_, user3_.name as name7_2_, user3_.password as password7_2_, user3_.principles as principles7_2_, user3_.userType as userType7_2_, repository4_.id as id9_3_, repository4_.project_id as connecti6_9_3_, repository4_.name as name9_3_, repository4_.parent_id as parent7_9_3_, repository4_.path as path9_3_, repository4_.state as state9_3_, repository4_.type as type9_3_, project5_.id as id6_4_, project5_.projectName as connecti2_6_4_, project5_.driverName as driverName6_4_, project5_.isAnonymous as isAnonym4_6_4_, project5_.lastUpdatedRevision as lastUpda5_6_4_, project5_.password as password6_4_, project5_.url as url6_4_, project5_.username as username6_4_, repository6_.id as id9_5_, repository6_.project_id as connecti6_9_5_, repository6_.name as name9_5_, repository6_.parent_id as parent7_9_5_, repository6_.path as path9_5_, repository6_.state as state9_5_, repository6_.type as type9_5_, repository7_.id as id10_6_, repository7_.changeEntry_id as changeEn2_10_6_, repository7_.repositoryEntity_id as reposito3_10_6_, repository8_.id as id9_7_, repository8_.project_id as connecti6_9_7_, repository8_.name as name9_7_, repository8_.parent_id as parent7_9_7_, repository8_.path as path9_7_, repository8_.state as state9_7_, repository8_.type as type9_7_, changeentr9_.id as id1_8_, changeentr9_.changeType as changeType1_8_, changeentr9_.changeset_id as changeset4_1_8_, changeentr9_.diff as diff1_8_, changeentr9_.repositoryEntity_id as reposito5_1_8_, changeentr9_.repositoryEntityVersion_id as reposito6_1_8_, changeentr9_.sourceChangeEntry_id as sourceCh7_1_8_ from RepositoryEntityVersion repository0_ left outer join ChangeEntry changeentr1_ on repository0_.changeEntry_id=changeentr1_.id left outer join Changeset changeset2_ on changeentr1_.changeset_id=changeset2_.id left outer join users user3_ on changeset2_.author_id=user3_.id left outer join RepositoryEntity repository4_ on changeentr1_.repositoryEntity_id=repository4_.id left outer join project project5_ on repository4_.project_id=project5_.id left outer join RepositoryEntity repository6_ on repository4_.parent_id=repository6_.id left outer join RepositoryEntityVersion repository7_ on changeentr1_.repositoryEntityVersion_id=repository7_.id left outer join RepositoryEntity repository8_ on repository7_.repositoryEntity_id=repository8_.id left outer join ChangeEntry changeentr9_ on changeentr1_.sourceChangeEntry_id=changeentr9_.id where repository0_.id=? The 2nd one is repeated many times - for a result set of 17 objects, the 2nd statement executed 521 times. I suspect this is as a result of the parent/child relationship in the RepositoryEntity object. For the purposes of this select, I actually only require the parent object fetched. Any suggestions?

    Read the article

  • Consistent Equals() results, but inconsistent TreeMap.containsKey() result

    - by smessing
    I have the following Node class:

        private class Node implements Comparable<Node> {
            private String guid;
            ...
            public boolean equals(Node o) {
                return (this == o);
            }

            public int hashCode() {
                return guid.hashCode();
            }
            ...
        }

    And I use it in the following TreeMap:

        TreeMap<Node, TreeSet<Edge>> nodes = new TreeMap<Node, TreeSet<Edge>>();

    The tree map is used in a class called Graph to store the nodes currently in the graph, along with a set of their edges (from the class Edge). My problem is when I try to execute:

        public boolean containsNode(Node n) {
            for (Node x : nodes.keySet()) {
                System.out.println("HASH CODE: ");
                System.out.print(x.hashCode() == n.hashCode());
                System.out.println("EQUALS: ");
                System.out.print(x.equals(n));
                System.out.println("CONTAINS: ");
                System.out.print(nodes.containsKey(n));
                System.out.println("N: " + n);
                System.out.println("X: " + x);
            }
        }

    I sometimes get the following:

        HASHCODE: true
        EQUALS: true
        CONTAINS: false
        N: foo
        X: foo

    Anyone have an idea as to what I'm doing wrong? I'm still new to all this, so I apologize in advance if I'm overlooking something simple (I know hashCode() doesn't really matter for TreeMap, but I figured I'd include it).
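
    A sketch of a Node whose ordering agrees with its equality, for contrast: TreeMap ignores hashCode() and equals() entirely and relies on compareTo(), so an identity-based equals combined with a differently keyed compareTo is exactly how containsKey() ends up disagreeing with equals(). The compareTo shown (guid-based) is an assumption, since the question doesn't include the original one; fields are trimmed to the guid.

        public class Node implements Comparable<Node> {

            private final String guid;

            public Node(String guid) {
                this.guid = guid;
            }

            @Override
            public int compareTo(Node o) {
                return guid.compareTo(o.guid);      // the ordering TreeMap actually uses
            }

            @Override
            public boolean equals(Object o) {       // parameter type Object, or it won't override
                if (this == o) return true;
                if (!(o instanceof Node)) return false;
                return guid.equals(((Node) o).guid);
            }

            @Override
            public int hashCode() {
                return guid.hashCode();
            }
        }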

    Read the article

  • How to stream XML data using XOM?

    - by Jonik
    Say I want to output a huge set of search results, as XML, into a PrintWriter or an OutputStream, using XOM. The resulting XML would look like this: <?xml version="1.0" encoding="UTF-8"?> <resultset> <result> [child elements and data] </result> ... ... [1000s of result elements more] </resultset> Because the resulting XML document could be big (tens or hundreds of megabytes, perhaps), I want to output it in a streaming fashion (instead of creating the whole Document in memory and then writing that). The granularity of outputting one <result> at a time is fine, so I want to generate one <result> after another, and write it into the stream. Assume there's already a method that helps with iterating the results and generating Element objects: public nu.xom.Element getNextResult(); So I'd simply like to do something like this pseudocode (automatic flushing enabled, so don't worry about that) : open stream/writer write declaration write start tag for <resultset> while more results: write next <result> element write end tag for <resultset> close stream/writer I've been looking at Serializer, but the necessary methods, writeStartTag(Element), writeEndTag(Element), write(DocType) are protected, not public! Is there no other way than to subclass Serializer to be able to use those methods, or to manually write the start and end tags directly into the stream as Strings, bypassing XOM altogether? (The latter wouldn't be too bad in this simple example, but in the general case it would get quite ugly.) Am I missing something or is XOM just not made for this? With dom4j I could do this easily using XMLWriter - it has constructors that take a Writer or OutputStream, and methods writeOpen(Element), writeClose(Element), writeDocType(DocumentType) etc. Compare to XOM's Serializer where the only public write method is the one that takes a whole Document. Please refrain from answering if you're not familiar with XOM! I specifically want to know if and how you can do this kind of streaming with that library. (This is related to my question about the best dom4j replacement where XOM is a strong contender.)
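
    A sketch of the subclassing route the question hints at: since writeStartTag(), writeEndTag() and the element-level write methods are protected in nu.xom.Serializer, a thin subclass can re-expose just what streaming one <result> at a time needs. The method names mirror the protected Serializer methods as described in the question; treat this as an outline to check against the Serializer javadoc of your XOM version rather than a confirmed API.

        import java.io.IOException;
        import java.io.OutputStream;

        import nu.xom.Element;
        import nu.xom.Serializer;

        public class StreamingSerializer extends Serializer {

            public StreamingSerializer(OutputStream out, String encoding)
                    throws IOException {
                super(out, encoding);
            }

            public void writeDeclaration() throws IOException {
                writeXMLDeclaration();              // protected in Serializer
            }

            public void writeOpen(Element element) throws IOException {
                writeStartTag(element);             // start tag only, children not serialized
                flush();
            }

            public void writeClose(Element element) throws IOException {
                writeEndTag(element);
                flush();
            }

            public void writeResult(Element element) throws IOException {
                write(element);                     // one complete <result> subtree
                flush();
            }
        }

    Usage would then follow the pseudocode above: writeDeclaration(), writeOpen(resultsetElement), writeResult(getNextResult()) in a loop, writeClose(resultsetElement).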

    Read the article

  • rewrite not a member of LiftRules

    - by José Leal
    Hi guys, I was following the http://www.assembla.com/wiki/show/liftweb/URL_Rewriting tutorial for URL rewriting in Liftweb, but I get this error: "error: value rewrite is not a member of object net.liftweb.http.LiftRules". It is really odd, and the documentation says that it exists. I'm using the IDEA IDE, and I've done everything from scratch using the Lift Maven blank archetype. Some more info:

        [INFO] ------------------------------------------------------------------------
        [INFO] Building Joseph3
        [INFO] task-segment: [tomcat:run]
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing tomcat:run
        [INFO] [resources:resources {execution: default-resources}]
        [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 0 resource
        [INFO] [yuicompressor:compress {execution: default}]
        [INFO] nb warnings: 0, nb errors: 0
        [INFO] artifact org.mortbay.jetty:jetty: checking for updates from scala-tools.org
        [INFO] artifact org.mortbay.jetty:jetty: checking for updates from central
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [scala:compile {execution: default}]
        [INFO] Checking for multiple versions of scala
        [INFO] /home/dpz/Scala/Doit/Joseph3/src/main/scala:-1: info: compiling
        [INFO] Compiling 2 source files to /home/dpz/Scala/Doit/Joseph3/target/classes at 1274922123910
        [ERROR] /home/dpz/Scala/Doit/Joseph3/src/main/scala/bootstrap/liftweb/Boot.scala:16: error: value rewrite is not a member of object net.liftweb.http.LiftRules
        [INFO]     LiftRules.rewrite.prepend(NamedPF("ProductExampleRewrite") {
        [INFO]               ^
        [ERROR] one error found
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1(Exit value: 1)
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 19 seconds
        [INFO] Finished at: Thu May 27 03:02:07 CEST 2010
        [INFO] Final Memory: 20M/175M
        [INFO] ------------------------------------------------------------------------
        Process finished with exit code 1

    Read the article

  • String Manipulation: Splitting Delimited Data

    - by Milli Szabo
    I need to split some info out of asterisk-delimited data.

    Data format: NAME*ADDRESS LINE1*ADDRESS LINE2

    Rules:
    1. The name should always be present.
    2. Address line 1 and 2 might not be.
    3. There should always be three asterisks.

    Samples:
    MR JONES A ORTEGA*ADDRESS 1*ADDRESS2*
        Name: MR JONES A ORTEGA
        Address Line1: ADDRESS 1
        Address Line2: ADDRESS 2
    A PAUL*ADDR1**
        Name: A PAUL
        Address Line1: ADDR1
        Address Line2: Not Given

    My algorithm is:
    1. Iterate through the characters in the line.
    2. Store all characters in a temp variable until the first * is found. Reject the data if no character is found before the first asterisk; if some characters are found, use them as the name.
    3. Same as step 2 for finding address lines 1 and 2, except that this won't reject the data if no character is found.

    My algorithm looks ugly, and the code looks uglier. Splitting on the asterisk doesn't work either, since the name can be confused with address line 1: if the data is *ADDRESS 1*ADDRESS2*, split creates two entries, with index 0 holding ADDRESS 1 and the next index holding ADDRESS2. Where's the name? Was there a name? Any suggestions?
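
    A small sketch of the split-based route for comparison: String.split("\\*", -1) keeps empty fields (including trailing ones), so "A PAUL*ADDR1**" yields ["A PAUL", "ADDR1", "", ""] and a missing name shows up as an empty element 0 rather than shifting the addresses left. The rejection rule (exactly three asterisks, non-empty name) follows the rules stated above.

        public class RecordParser {

            public static void main(String[] args) {
                parse("MR JONES A ORTEGA*ADDRESS 1*ADDRESS2*");
                parse("A PAUL*ADDR1**");
                parse("*ADDRESS 1*ADDRESS2*");            // rejected: no name
            }

            static void parse(String line) {
                String[] parts = line.split("\\*", -1);   // -1: do not drop trailing empties
                if (parts.length != 4 || parts[0].trim().isEmpty()) {
                    System.out.println("Rejected: " + line);   // must have 3 asterisks and a name
                    return;
                }
                System.out.println("Name: " + parts[0]
                        + " | Address1: " + parts[1]
                        + " | Address2: " + parts[2]);
            }
        }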

    Read the article

  • understanding this regex

    - by DarthVader
    I'm trying to understand what the following does:

        ^([^=]+)(?:(?:\\=)(.+))?$

    Any ideas? This is being used here. Obviously it's a command-line parser, but I'm trying to understand the syntax so I can actually run the program. This is from commandline-jmxclient; they have no documentation on setting JMX properties, but in their source code there is such an option, so I just want to understand how I can invoke that method.

        Matcher m = Client.CMD_LINE_ARGS_PATTERN.matcher(command);
        if ((m == null) || (!m.matches())) {
            throw new ParseException("Failed parse of " + command, 0);
        }
        this.cmd = m.group(1);
        if ((m.group(2) != null) && (m.group(2).length() > 0))
            this.args = m.group(2).split(",");
        else
            this.args = null;
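
    A small harness that just exercises that pattern, using only java.util.regex: group(1) is everything before the first '=' (the command), group(2) is the optional text after it (the arguments, later split on commas). The sample command strings are made up; they are not taken from the jmxclient documentation.

        import java.util.Arrays;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class CmdLinePatternDemo {

            private static final Pattern CMD_LINE_ARGS_PATTERN =
                    Pattern.compile("^([^=]+)(?:(?:\\=)(.+))?$");

            public static void main(String[] args) {
                show("get");                       // no '=' -> group(2) is null
                show("set=SomeAttribute,true");    // '=' -> group(2) is "SomeAttribute,true"
            }

            static void show(String command) {
                Matcher m = CMD_LINE_ARGS_PATTERN.matcher(command);
                if (!m.matches()) {
                    System.out.println("no match: " + command);
                    return;
                }
                String cmd = m.group(1);
                String[] parsedArgs = (m.group(2) == null) ? null : m.group(2).split(",");
                System.out.println("cmd=" + cmd + " args="
                        + (parsedArgs == null ? "none" : Arrays.toString(parsedArgs)));
            }
        }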

    Read the article

  • Do Blob properties on entities affect query performance?

    - by Jaroslav Záruba
    Hello. I'm trying to make up my mind on whether to store a binary representation of an entity as its Blob property, or whether I'd be better off keeping the blobs in some separate 'wrapping' class. Possible impact on the memory heap and/or query execution time are my concerns in the first case; complexity votes against the other. I know Blobs are not indexed, i.e. index size is not what I'm worried about. Also, I assume the Datastore sets defaultFetchGroup to false for blobs, but does that mean blobs make no difference in queries? Regards, J. Záruba

    Read the article

  • struts validation problem in IE

    - by user265201
    I am using Struts 2.1.8 and facing a validation problem in IE. I am getting the following error: "An exception occurred: Error. Error message: Invalid argument." I tried to figure out the cause and found the following. My generated JavaScript code is:

        field = form.elements['district.name'];
        var error = "Enter only alphabets for district";
        if (continueValidation && field.value != null && !field.value.match("^[a-zA-Z ]*$")) {
            addError(field, error);
            errors = true;
        }

    I tried to mock it up by putting the same code in a function and calling it from an onclick event. The addError() method throws the exception, and the reason is the field variable: if I change it to field[0], it works fine. How do I fix this error?

    Read the article

  • Multiple Active Calls in Blackberry

    - by Umar Siddique
    I need help finding software for a BlackBerry device that can handle multiple active calls: for example, if I put my first call on hold and make a second call, then also put that call on hold without disconnecting the first one, and receive or make a third call. Is there any solution for this?

    Read the article

  • copy or clone a HSSFWorkbook

    - by Fortega
    Hi. Currently I am doing the following in a loop (at least 300 times):
    - create an HSSFWorkbook from a template file
    - add some values to specific cells in the workbook
    - save the workbook as a new Excel file

    The first step takes about 70% of the time (reading the Excel file). What I would like to do is take it out of the loop and read the file only once; inside the loop I would then copy or clone the template HSSFWorkbook. However, I can't find anything about copying/cloning an HSSFWorkbook. Has anyone done this before? Any tips?
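
    A sketch of the "read once, copy many times" workaround: POI has no public clone for HSSFWorkbook, but re-parsing an in-memory byte[] of the template is much cheaper than re-reading the .xls file from disk on every iteration. File paths and cell coordinates are placeholders, and whether the parse step is fast enough for your templates is something to measure.

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;

        import org.apache.poi.hssf.usermodel.HSSFWorkbook;

        public class TemplateCopier {

            public static void main(String[] args) throws IOException {
                // read the template from disk exactly once
                HSSFWorkbook template = new HSSFWorkbook(new FileInputStream("template.xls"));
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                template.write(bos);
                byte[] templateBytes = bos.toByteArray();

                for (int i = 0; i < 300; i++) {
                    // "clone": rebuild a fresh workbook from the cached bytes
                    HSSFWorkbook copy = new HSSFWorkbook(new ByteArrayInputStream(templateBytes));
                    copy.getSheetAt(0).getRow(0).getCell(0).setCellValue("value " + i);

                    FileOutputStream out = new FileOutputStream("output-" + i + ".xls");
                    copy.write(out);
                    out.close();
                }
            }
        }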

    Read the article

  • How does lookup work?

    - by badgirl
    Hello. I have a BeanTreeView with some nodes in it. Every node has a constructor like this:

        public class ProjectNode extends AbstractNode {
            public ProjectNode(MainProject obj, DiagramsChildren childrens) {
                super(new ProjectsChildren(), Lookups.singleton(obj));
                setDisplayName(obj.getName());
            }
        }

    I set RootNode as the root of the tree in ExplorerTopComponent like this:

        private final ExplorerManager mgr = new ExplorerManager();

        public ExplorerTopComponent() {
            associateLookup(ExplorerUtils.createLookup(mgr, getActionMap()));
            mgr.setRootContext(new RootNode());
        }

    Now, how can I get the MainProject obj back from a node? I need to get it in another class.
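
    A minimal sketch of reading the object back out of the node's lookup: since the ProjectNode constructor registers it via Lookups.singleton(obj), any code holding a Node reference (for example one of ExplorerManager.getSelectedNodes()) can ask that node's lookup for it. MainProject is the class from the question and is assumed to be on the classpath.

        import org.openide.nodes.Node;

        public class ProjectAccessor {

            public MainProject projectFor(Node node) {
                // returns the instance registered with Lookups.singleton(obj),
                // or null if this node does not carry a MainProject
                return node.getLookup().lookup(MainProject.class);
            }
        }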

    Read the article
