Search Results

Search found 33297 results on 1332 pages for 'java java ee'.


  • How can I get jsonp to play nice with my class?

    - by George Edison
    This whole JSONP thing is quite confusing... Here is what I want to do: I have a class DataRetriever. The class has a method GetData. GetData makes a JSONP request with the following code:

        var new_tag = document.createElement('script');
        new_tag.type = 'text/javascript';
        new_tag.src = 'http://somesite.com/somemethod?somedata';

        // Add the element
        var bodyRef = document.getElementsByTagName("body").item(0);
        bodyRef.appendChild(new_tag);

    Now, the JSONP data from the server somesite.com can call a function in my code with the data. The problem is, how does the data get delivered to the instance of DataRetriever that requested it? I'm really stuck here.

    Read the article

  • Programmatically loading Entity classes with JPA 2.0?

    - by Dennetik
    With Hibernate you can load your Entity classes as:

        sessionFactory = new AnnotationConfiguration()
            .addPackage("test.animals")
            .addAnnotatedClass(Flight.class)
            .addAnnotatedClass(Sky.class)
            .addAnnotatedClass(Person.class)
            .addAnnotatedClass(Dog.class);

    Is there a way to do the same thing - programmatically loading your Entity classes - in a JPA 2.0 compliant way?
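
    One workaround often suggested at the time is Hibernate's Ejb3Configuration, which mirrors the AnnotationConfiguration API but builds an EntityManagerFactory. It is provider-specific rather than truly JPA 2.0 compliant, so treat the following as a hedged sketch using the entity classes from the question, not a standards-based answer:

        import javax.persistence.EntityManagerFactory;
        import org.hibernate.ejb.Ejb3Configuration;

        public class ProgrammaticJpaBootstrap {
            public static EntityManagerFactory build() {
                // Hibernate-specific bootstrap (not portable across JPA providers);
                // datasource/dialect properties would still need to be supplied.
                Ejb3Configuration cfg = new Ejb3Configuration();
                cfg.addAnnotatedClass(Flight.class);
                cfg.addAnnotatedClass(Sky.class);
                cfg.addAnnotatedClass(Person.class);
                cfg.addAnnotatedClass(Dog.class);
                return cfg.buildEntityManagerFactory();
            }
        }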

    Read the article

  • Is there a useDirtyFlag option for Tomcat 6 cluster configuration?

    - by kevinjansz
    In Tomcat 5.0.x you had the ability to set useDirtyFlag="false" to force replication of the session after every request rather than checking for set/removeAttribute calls:

        <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
                 managerClassName="org.apache.catalina.cluster.session.SimpleTcpReplicationManager"
                 expireSessionsOnShutdown="false"
                 useDirtyFlag="false"
                 doClusterLog="true"
                 clusterLogName="clusterLog"> ...

    The comments in the server.xml stated this may be used to make the following work:

        <%
        HashMap map = (HashMap)session.getAttribute("map");
        map.put("key","value");
        %>

    i.e. change the state of an object that has already been put in the session, and you can be sure that this object will still be replicated to the other nodes in the cluster. According to the Tomcat 6 documentation you only have two "Manager" options - DeltaManager & BackupManager - and neither of these seems to allow this option or anything like it. In my testing the default setup:

        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

    where you get the DeltaManager by default, definitely behaves as useDirtyFlag="true" (as I'd expect). So my question is - is there an equivalent in Tomcat 6? Looking at the source I can see a manager implementation "org.apache.catalina.ha.session.SimpleTcpReplicationManager" which does have the useDirtyFlag, but the javadoc comments state it is "Tomcat Session Replication for Tomcat 4.0"... I don't know if this is OK to use - I'm guessing not, as it's not mentioned in the main cluster configuration documentation.
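
    For anyone hitting the same limitation with the DeltaManager, a commonly used workaround is to call setAttribute() again after mutating the stored object; that marks the session dirty and triggers replication of the updated attribute. A minimal sketch of the JSP example above with that extra call (plain servlet/JSP code, nothing Tomcat-specific):

        HashMap map = (HashMap) session.getAttribute("map");
        map.put("key", "value");
        // Re-setting the attribute marks the session as dirty, so the DeltaManager
        // replicates the change; mutating the map alone is not detected.
        session.setAttribute("map", map);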

    Read the article

  • How do I execute a sequence of servlets?

    - by Legend
    I have some servlets that act as individual URLs for populating a database for some dummy testing. Something of the form:

        public class Populate_ServletName extends HttpServlet {
            public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.setContentType("text/plain");
                //Insert records
                //Print confirmation
            }
        }

    I have about 6 such servlets which I want to execute in a sequence. I was thinking of using setLocation to set the next page to be redirected, but was not sure if this is the right approach because the redirects should happen after the records have been inserted. Specifically, I am looking for something like this:

        public class Populate_ALL extends HttpServlet {
            public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.setContentType("text/plain");
                //Call Populate_1
                //Call Populate_2
                //Call Populate_3
                //...
            }
        }

    Any suggestions?
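
    One way to chain them within a single request is RequestDispatcher.include(), which runs each target servlet synchronously before moving on to the next. A hedged sketch (the /populate1 ... /populate3 paths are hypothetical URL mappings for the existing servlets):

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class Populate_ALL extends HttpServlet {
            public void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                resp.setContentType("text/plain");
                // include() runs each mapped servlet in order within this same request,
                // so each servlet's inserts finish before the next one starts
                String[] paths = {"/populate1", "/populate2", "/populate3"};
                for (String path : paths) {
                    req.getRequestDispatcher(path).include(req, resp);
                }
            }
        }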

    Read the article

  • What to do when ServerSocket throws an IOException while keeping the server running

    - by s5804
    Basically I want to create a rock solid server.

        while (keepRunning.get()) {
            try {
                Socket clientSocket = serverSocket.accept();
                ... spawn a new thread to handle the client ...
            } catch (IOException e) {
                e.printStackTrace();
                // NOW WHAT?
            }
        }

    In the IOException block, what to do? Is the server socket at fault, so that it needs to be recreated? For example, wait a few seconds and then:

        serverSocket = ServerSocketFactory.getDefault().createServerSocket(MY_PORT);

    However, if the server socket is still OK, then it is a pity to close it and kill all previously accepted connections that are still communicating.

    EDIT: After some answers, here is my attempt to deal with the IOException. Would this implementation guarantee keeping the server up, and only re-create the server socket when necessary?

        while (keepRunning.get()) {
            try {
                Socket clientSocket = serverSocket.accept();
                ... spawn a new thread to handle the client ...
                bindExceptionCounter = 0;
            } catch (IOException e) {
                e.printStackTrace();
                recreateServerSocket();
            }
        }

        private void recreateServerSocket() {
            while (keepRunning) {
                try {
                    logger.info("Try to re-create Server Socket");
                    ServerSocket socket = ServerSocketFactory.getDefault()
                            .createServerSocket(RateTableServer.RATE_EVENT_SERVER_PORT);
                    // No exception thrown, then use the new socket.
                    serverSocket = socket;
                    break;
                } catch (BindException e) {
                    logger.info("BindException indicates that the server socket is still good.", e);
                    bindExceptionCounter++;
                    if (bindExceptionCounter < 5) {
                        break;
                    }
                } catch (IOException e) {
                    logger.warn("Problem to re-create Server Socket", e);
                    e.printStackTrace();
                    try {
                        Thread.sleep(30000);
                    } catch (InterruptedException ie) {
                        logger.warn(ie);
                    }
                }
            }
        }

    Read the article

  • Clean up upon the kill signal

    - by Begui
    How do you handle clean up when the program receives a kill signal? For instance, there is an application I connect to that wants any third party app (my app) to send a finish command. What is the best way to send that finish command when my app has been destroyed with a kill -9?
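
    A JVM shutdown hook is the usual place for this kind of cleanup, with one important caveat: hooks run on normal exit and on SIGTERM/SIGINT (a plain kill or Ctrl-C), but nothing in the JVM can run on kill -9, because SIGKILL never reaches the process. A minimal sketch, where sendFinishCommand() is a hypothetical placeholder for whatever the third-party app expects:

        public class FinishOnShutdown {
            public static void main(String[] args) throws Exception {
                Runtime.getRuntime().addShutdownHook(new Thread() {
                    @Override
                    public void run() {
                        // Runs on normal exit, SIGTERM and SIGINT - but NOT on kill -9 (SIGKILL)
                        sendFinishCommand();
                    }
                });
                // ... application work ...
            }

            private static void sendFinishCommand() {
                // hypothetical placeholder for notifying the other application
                System.out.println("finish");
            }
        }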

    Read the article

  • Why do I not see stricter scoping more often?

    - by Ben
    I've found myself limiting scope fairly often. I find it makes code much clearer, and allows me to reuse variables much more easily. This is especially handy in C where variables must be declared at the start of a new scope. Here is an example of what I mean:

        {
            int h = 0;
            foreach (var item in photos)
            {
                buffer = t.NewRow();
                h = item.IndexOf("\\x\\");
                buffer["name"] = item.Substring(h, item.Length - h);
                t.Rows.Add(buffer);
            }
        }

    With this example, I've limited the scope of h, without initializing it in every iteration. But I don't see many other developers doing this very often. Why is that? Is there a downside to doing this?

    Read the article

  • Deleting orphans with JPA

    - by homaxto
    I have a one-to-one relation where I use CascadeType.PERSIST. This has over time built up a huge amount of child records that have not been deleted, to such an extent that it is reflected in the performance. Now I wish to add some code that cleans up the database, removing all the child records that are not referenced by a parent. At the moment we are talking 400K+ records, and I need to run the code on all customer installations just to be sure they do not run into the same problem. I think the best solution would be to run a named query (because we support two databases) that deletes the necessary records, and this is where I get into problems, because how should I write it in JPQL? The result I want can be defined by the following SQL statement, which unfortunately does not run on MySQL:

        DELETE FROM child c1
        WHERE c1.pk NOT IN (SELECT DISTINCT p.pk
                            FROM child c2
                            JOIN parent p ON p.child = c2.pk);
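
    For what it is worth, a JPQL bulk delete along these lines is one portable way to express it; this is a hedged sketch assuming entities named Child and Parent with a Parent.child association, so the names will need adjusting. Because the subquery only touches Parent, the generated SQL should also avoid MySQL's restriction on referencing the deleted table inside its own subquery:

        import javax.persistence.EntityManager;
        import javax.persistence.EntityTransaction;

        public class OrphanCleanup {
            // Deletes every Child that no Parent references. JPQL bulk deletes bypass
            // cascades and the persistence context, so run this in its own transaction.
            public static int deleteOrphans(EntityManager em) {
                EntityTransaction tx = em.getTransaction();
                tx.begin();
                int deleted = em.createQuery(
                        "DELETE FROM Child c WHERE c NOT IN (SELECT p.child FROM Parent p)")
                        .executeUpdate();
                tx.commit();
                return deleted;
            }
        }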

    Read the article

  • Can't run a web application with GWT-Ext

    - by Anto
    I am using the GWT and GWT-Ext libraries with Eclipse for the first time. I have followed all the procedures, but when I run the web application the following errors appear:

    1) In the Problems tab, I have this message:

        The following classpath entry 'C:\Documents and Settings\CiuffreA\Desktop\GWTExt\gwtext-2.0.5\gwtext.jar' will not be available on the server's classpath (GWTProject - Google Web App Problem)

    2) In the Development Mode tab, the following two messages appear:

        23:41:25.906 [ERROR] [mockupproject] Unable to load module entry point class com.example.myproject.client.MockUpProject

        Failed to load module 'mockupproject' from user agent 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1042 Safari/532.5' at localhost:3853

    If anyone has a clue about where the problem may be, please give me a hint...

    Read the article

  • Example of standalone Apache Qpid (amqp) Junit Test

    - by Wiretap
    Does anyone have an example of using Apache Qpid within a standalone JUnit test? Ideally I want to be able to create a queue on the fly which I can put/get messages to and from within my test. I'm not testing Qpid itself within my test - I'll use integration tests for that - but it would be very useful to test methods handling messages without having to mock out a load of services.

    Read the article

  • Stub web calls in Scala

    - by Dennis Laumen
    I'm currently writing a wrapper of the Spotify Metadata API to learn Scala. Everything's fine and dandy but I'd like to unit test the code. To properly do this I'll need to stub the Spotify API and get consistent return values (stuff like popularity of tracks changes very frequently). Does anybody know how to stub web calls in Scala, the JVM in general or by using some external tool I could hook up into my Maven setup? PS I'm basically looking for something like Ruby's FakeWeb... Thanks in advance!

    Read the article

  • Apache Commons Net FTPClient and listFiles()

    - by Vladimir
    Can anyone explain to me what's wrong with the following code? I tried different hosts and FTPClientConfigs, and the server is properly accessible via Firefox/FileZilla...

        FTPClientConfig config = new FTPClientConfig(FTPClientConfig.SYST_L8);
        FTPClient client = new FTPClient();
        client.configure(config);
        client.connect("c64.rulez.org");
        client.login("anonymous", "anonymous");
        client.enterRemotePassiveMode();
        FTPFile[] files = client.listFiles();
        Assert.assertTrue(files.length > 0);
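
    A common cause with commons-net is using remote passive mode instead of local passive mode: enterLocalPassiveMode() makes the client open the data connection itself, which is what browsers and FileZilla typically do behind NAT and firewalls. A hedged sketch of the adjusted call sequence (same host and credentials as above):

        import org.apache.commons.net.ftp.FTPClient;
        import org.apache.commons.net.ftp.FTPFile;

        public class FtpListExample {
            public static void main(String[] args) throws Exception {
                FTPClient client = new FTPClient();
                client.connect("c64.rulez.org");
                client.login("anonymous", "anonymous");
                // Local passive mode: the client opens the data connection,
                // so the listing is not blocked by NAT or a firewall.
                client.enterLocalPassiveMode();
                FTPFile[] files = client.listFiles();
                for (FTPFile file : files) {
                    System.out.println(file.getName());
                }
                client.logout();
                client.disconnect();
            }
        }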

    Read the article

  • Refactored App Engine project - now Eclipse is building the project endlessly

    - by Yog
    I renamed some files in my App Engine project and refactored code, changing references to variables. Everything seemed fine until I changed the references in the web.xml for the project. Then I got a complaint about some error with the DataNucleus enhancer and now the project build process is stuck at 22%. I tried stopping Eclipse and restarting but the build process keeps hanging. Any tips on how to clean out whatever it's getting stuck on?

    Read the article

  • Loading a Velocity template inside a jar file

    - by Rafael
    I have a project where I want to load a Velocity template to complete it with parameters. The whole application is packaged as a jar file. What I initially thought of doing was this:

        VelocityEngine ve = new VelocityEngine();
        URL url = this.getClass().getResource("/templates/");
        File file = new File(url.getFile());
        ve = new VelocityEngine();
        ve.setProperty(RuntimeConstants.RESOURCE_LOADER, "file");
        ve.setProperty(RuntimeConstants.FILE_RESOURCE_LOADER_PATH, file.getAbsolutePath());
        ve.setProperty(RuntimeConstants.FILE_RESOURCE_LOADER_CACHE, "true");
        ve.init();

        VelocityContext context = new VelocityContext();
        if (properties != null) {
            stringfyNulls(properties);
            for (Map.Entry<String, Object> property : properties.entrySet()) {
                context.put(property.getKey(), property.getValue());
            }
        }

        final String templatePath = templateName + ".vm";
        Template template = ve.getTemplate(templatePath, "UTF-8");

        String outFileName = File.createTempFile("p2d_report", ".html").getAbsolutePath();
        BufferedWriter writer = new BufferedWriter(new FileWriter(new File(outFileName)));
        template.merge(context, writer);
        writer.flush();
        writer.close();

    And this works fine when I run it in Eclipse. However, once I package the program and try to run it from the command line, I get an error because the file could not be found. I imagine the problem is in this line:

        ve.setProperty(RuntimeConstants.FILE_RESOURCE_LOADER_PATH, file.getAbsolutePath());

    because in a jar the absolute file does not exist, since it's inside a zip, but I couldn't yet find a better way to do it. Does anyone have any ideas?
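
    When the templates live inside the jar, the usual fix is to load them through Velocity's ClasspathResourceLoader instead of the file loader, so no filesystem path is needed. A hedged sketch, assuming the templates sit under templates/ on the classpath as in the snippet above:

        import java.io.StringWriter;
        import org.apache.velocity.Template;
        import org.apache.velocity.VelocityContext;
        import org.apache.velocity.app.VelocityEngine;
        import org.apache.velocity.runtime.RuntimeConstants;
        import org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader;

        public class JarTemplateExample {
            public static String render(String templateName, VelocityContext context) {
                VelocityEngine ve = new VelocityEngine();
                // Resolve templates from the classpath; works in Eclipse and inside the packaged jar
                ve.setProperty(RuntimeConstants.RESOURCE_LOADER, "classpath");
                ve.setProperty("classpath.resource.loader.class",
                        ClasspathResourceLoader.class.getName());
                ve.init();

                Template template = ve.getTemplate("templates/" + templateName + ".vm", "UTF-8");
                StringWriter writer = new StringWriter();
                template.merge(context, writer);
                return writer.toString();
            }
        }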

    Read the article

  • Apache CXF: multiple endpoints / multiple CXFServlet servlets

    - by robinmag
    Hi, I have a CXF web service with multiple endpoints. I've successfully deployed it. The problem is that all the endpoints' WSDLs appear under the same servlet URL. Can I have two org.apache.cxf.transport.servlet.CXFServlet servlets in the same web.xml, with each servlet serving one endpoint, so that I have endpoint1 at http://localhost/app/endpoint1 and endpoint2 at http://localhost/app/endpoint2? Thank you.

    Read the article

  • Large ResultSet on a PostgreSQL query

    - by tuler
    I'm running a query against a table in a PostgreSQL database. The database is on a remote machine. The table has around 30 sub-tables using PostgreSQL's partitioning capability. The query will return a large result set, something around 1.8 million rows. In my code I use Spring's JDBC support, method JdbcTemplate.query, but my RowCallbackHandler is not being called. My best guess is that the PostgreSQL JDBC driver (I use version 8.3-603.jdbc4) is accumulating the result in memory before calling my code. I thought the fetchSize configuration could control this, but I tried it and nothing changed. I did this as the PostgreSQL manual recommended. This query worked fine when I used Oracle XE, but I'm trying to migrate to PostgreSQL because of the partitioning feature, which is not available in Oracle XE.

    My environment:
    - PostgreSQL 8.3
    - Windows Server 2008 Enterprise 64-bit
    - JRE 1.6 64-bit
    - Spring 2.5.6
    - PostgreSQL JDBC Driver 8.3-603
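
    For reference, the PostgreSQL driver only streams rows when the statement runs inside a transaction (autocommit off) with a non-zero fetch size; otherwise it buffers the whole result before the callback runs. A hedged sketch of how that might look with Spring 2.5 (the data source, SQL, and handler are placeholders standing in for the question's setup):

        import javax.sql.DataSource;
        import org.springframework.jdbc.core.JdbcTemplate;
        import org.springframework.jdbc.core.RowCallbackHandler;
        import org.springframework.jdbc.datasource.DataSourceTransactionManager;
        import org.springframework.transaction.TransactionStatus;
        import org.springframework.transaction.support.TransactionCallbackWithoutResult;
        import org.springframework.transaction.support.TransactionTemplate;

        public class LargeResultSetExample {
            public static void streamRows(DataSource dataSource, final RowCallbackHandler handler) {
                final JdbcTemplate jdbc = new JdbcTemplate(dataSource);
                // A non-zero fetch size tells the PostgreSQL driver to use a cursor
                jdbc.setFetchSize(1000);

                TransactionTemplate tx = new TransactionTemplate(new DataSourceTransactionManager(dataSource));
                // Cursors only stream while autocommit is off, i.e. inside a transaction
                tx.execute(new TransactionCallbackWithoutResult() {
                    protected void doInTransactionWithoutResult(TransactionStatus status) {
                        jdbc.query("SELECT * FROM big_table", handler);
                    }
                });
            }
        }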

    Read the article

  • JMF: create a new custom stream DataSource

    - by Afro Genius
    Hi there. I am looking to create a means of building a DataSource object (and hence a Processor) that gets data from a stream instead of a file, RTP, and so on. I am writing a module for a much larger application that is meant to transparently transcode audio data. The JMF docs only specify how to create a source from a file, but I need to be able to create a source from a stream within my application. Any idea where I can start looking?

    Read the article

  • DB2 Driver Connection Hanging in Glassfish Connection Pool

    - by Ant
    We have an intermittent issue with DB2 used from a Glassfish connection pool. What happens is this: under situations where the database (DB2 on z/OS) is under stress, our application (which is a multi-threaded application using connections to DB2 via a Glassfish connection pool) stops doing anything. The following are observed:

    1) Looking at the server using JConsole, we can see a thread waiting indefinitely in the DB2 driver's getConnection() method. We can also see that it has gained a lock on a Vector within the driver. Several other threads are also calling the getConnection() method in the driver, and are hanging waiting for the lock on the Vector to be released.

    2) Looking at the database itself, we can see that there are connections from the Glassfish server open and waiting to be used.

    It seems that there is some sort of mismatch between the connection pool on Glassfish and the connections actually open to DB2. Has anyone come across this issue before? Or something similar? If you need any more information that I haven't provided, then please let me know!

    Read the article

  • 500 Worker Threads, what kind of thread pool?

    - by Submerged
    I am wondering if this is the best way to do this. I have about 500 threads that run indefinitely, but Thread.sleep for a minute when done one cycle of processing.

        ExecutorService es = Executors.newFixedThreadPool(list.size() + 1);
        for (int i = 0; i < list.size(); i++) {
            es.execute(coreAppVector.elementAt(i)); //coreAppVector is a vector of objects extending Thread
        }

    The code that is executing is really simple and basically just this:

        class aThread extends Thread {
            public void run() {
                while (true) {
                    Thread.sleep(ONE_MINUTE);
                    //Lots of computation every minute
                }
            }
        }

    I do need a separate thread for each running task, so changing the architecture isn't an option. I tried making my thread pool size equal to Runtime.getRuntime().availableProcessors(), which attempted to run all 500 threads, but only let 8 (4 x hyperthreading) of them execute. The other threads wouldn't surrender and let other threads have their turn. I tried putting in a wait() and notify(), but still no luck. If anyone has a simple example or some tips, I would be grateful!

    Well, the design is arguably flawed. The threads implement Genetic Programming (GP), a type of learning algorithm. Each thread analyzes advanced trends and makes predictions. If the thread ever completes, the learning is lost. That said, I was hoping that sleep() would allow me to share some of the resources while one thread isn't "learning".
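
    One alternative, sketched here under the assumption that each task's per-minute work can be expressed as a Runnable that keeps its learning state between runs, is to let a ScheduledExecutorService run each task once a minute instead of parking 500 sleeping threads:

        import java.util.List;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class MinuteScheduler {
            // Each task keeps its own state between runs; the pool only needs enough
            // threads to keep the CPUs busy, not one thread per task.
            public static void scheduleAll(List<Runnable> tasks) {
                ScheduledExecutorService pool = Executors.newScheduledThreadPool(
                        Runtime.getRuntime().availableProcessors());
                for (Runnable task : tasks) {
                    pool.scheduleWithFixedDelay(task, 0, 1, TimeUnit.MINUTES);
                }
            }
        }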

    Read the article

  • ArrayList<String> NullPointerException

    - by Carlucho
    I am trying to solve a labyrinth by DFS, using an adjacency list to represent the vertices and edges of the graph. In total there are 12 nodes (3 rows [A,B,C] * 4 cols [0,..,3]). My program starts by saving all the vertex labels (A0,..,C3), so far so good, then checks the adjacent nodes, also no problems. If movement is possible, it proceeds to create the edge, and here is where it all goes wrong:

        adjList[i].add(vList[j].label);

    I used the debugger and found that vList[j].label is not null; it contains a correct string (e.g. "B1"). The only variables which show null are in adjList[i], which leads me to believe I have implemented it wrongly. This is how I did it:

        public class GraphList {
            private ArrayList<String>[] adjList;
            ...
            public GraphList(int vertexcount) {
                adjList = (ArrayList<String>[]) new ArrayList[vertexCount];
                ...
            }
            ...
            public void addEdge(int i, int j) {
                adjList[i].add(vList[j].label); //NULLPOINTEREXCEPTION HERE
            }
            ...
        }

    I will really appreciate it if anyone can point me to the right track regarding what is going wrong... Thanks!
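
    For anyone hitting the same NullPointerException: new ArrayList[vertexCount] only allocates the array of references, so every slot is still null until an ArrayList is assigned to it. A minimal sketch of the constructor with the missing initialization (names taken from the snippet above):

        @SuppressWarnings("unchecked")
        public GraphList(int vertexCount) {
            adjList = (ArrayList<String>[]) new ArrayList[vertexCount];
            // Each slot starts out null; create the per-vertex list before adding edges
            for (int i = 0; i < vertexCount; i++) {
                adjList[i] = new ArrayList<String>();
            }
        }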

    Read the article

  • ExecutorService memory leak on exception

    - by TofuBeer
    I am having a hard time tracking this down since the profiler keeps crashing (hotspot error). Before I go too deep into figuring it out I'd like to know if I really have a problem or not :-) I have a few thread pools created via: Executors.newFixedThreadPool(10); The threads connect to different web sites and, on occasion, I get connection refused and wind up throwing an exception. When I later on call Future.get() to get the result it will then catch the ExecutionException that wraps the exception that was thrown when the connection could not be made. The program uses a fairly constant amount of memory up until the point in time that the exceptions get thrown (they tend to happen in batches when a particular site is overloaded). After that point the memory again remains constant but at a higher level. So my question is along the lines of is the memory behaviour (reported by "top" on Unix) expected because the exceptions just triggered something or do I probably have an actual leak that I'll need to track down? Additionally when Future.get() throws an exception is there anything else I need to do besides catch the exception (such as call Future.cancel() on it)?

    Read the article

  • Vector ArrayIndexOutOfBounds

    - by Esmond
    I'm having an ArrayIndexOutOfBounds exception with the following code. The exception is thrown at the line where Node nodeJ = vect.get(j), but it does not make sense to me, since j is definitely smaller than i and Node nodeI = vect.get(i) does not throw any exception. Any help is appreciated.

        public static Vector join(Vector vect) throws ItemNotFoundException {
            Vector<Node> remain = vect;
            for (int i = 1; i < vect.size(); i++) {
                Node nodeI = vect.get(i);
                for (int j = 0; j < i; j++) { //traverse the nodes before nodeI
                    Node nodeJ = vect.get(j);
                    if (nodeI.getChild1().getSeq().equals(nodeJ.getSeq())) {
                        nodeI.removeChild(nodeJ);
                        nodeI.setChild(nodeJ);
                        remain.remove(j);
                    }
                    if (nodeI.getChild2().getSeq().equals(nodeJ.getSeq())) {
                        nodeI.removeChild(nodeJ);
                        nodeI.setChild(nodeJ);
                        remain.remove(j);
                    }
                }
            }
            return remain;
        }
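
    A likely culprit, offered here as a hedged observation rather than a confirmed diagnosis: remain = vect does not copy the Vector, so every remain.remove(j) shrinks the very vector that i and j index into, and an index that was valid when the loop started can end up past the new size. Copying the input first keeps the loop bounds stable:

        // Copy the input so removals from 'remain' do not shrink or shift
        // the vector that i and j are indexing into
        Vector<Node> remain = new Vector<Node>(vect);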

    Read the article

  • Draw a position from a 2D array at the corresponding canvas location

    - by Anon
    Background: I have two 2D arrays. Each index within each 2D array represents a tile which is drawn on a square canvas suitable for 8 x 8 tiles. The first 2D array represents the ground tiles and is looped over and drawn on the canvas using the following code:

        //Draw the map from the land 2d array
        map = new Canvas(mainFrame, 20, 260, 281, 281);
        for (int i = 0; i < world.length; i++) {
            for (int j = 0; j < world[i].length; j++) {
                for (int x = 0; x < 280; x = x + 35) {
                    for (int y = 0; y < 280; y = y + 35) {
                        Point p = new Point(x, y);
                        map.add(new RectangleObject(p, 35, 35, Colour.green));
                    }
                }
            }
        }

    This creates a grid of green tiles 8 x 8 across, as intended. The second 2D array represents the position on the ground. This 2D array has every one of its indexes as null apart from one, which holds a Person class.

    Problem: I am unsure of how I can draw the position on the grid. I was thinking of a similar loop, so it draws another set of 64 tiles over the previous 2D array, only this time they are all transparent except the one tile which isn't null - in other words, the tile where Person is located. I wanted to use a search throughout the loop using a comparative if statement along the lines of:

        if (!(world[] == null)) {
            map.add(new RectangleObject(p, 35, 35, Colour.red));
        }

    However my knowledge is limited and I am confused about how to implement it.
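
    A sketch of one way to draw the overlay, assuming the second array is called people, holds Person references (null everywhere else), and that Canvas, RectangleObject, Point and Colour are the same toolkit classes used in the snippet above:

        // Walk the overlay array and only draw a tile where a Person actually is;
        // every other cell is left untouched so the green ground stays visible.
        for (int i = 0; i < people.length; i++) {
            for (int j = 0; j < people[i].length; j++) {
                if (people[i][j] != null) {
                    Point p = new Point(i * 35, j * 35); // column i, row j mapped to pixel coordinates
                    map.add(new RectangleObject(p, 35, 35, Colour.red));
                }
            }
        }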

    Read the article
