Search Results

Search found 7077 results on 284 pages for 'concurrent processing'.


  • ASP.NET / WCF - Execute Server.Execute Asynchronously

    - by user208662
    Hello, I need to run the HttpContext.Current.Server.Execute method in my ASP.NET application. This application has a WCF operation that does some processing. Currently, my processing runs correctly from within the WCF operation; however, I would like to run it asynchronously. In an attempt to do this, I tried running Server.Execute in the DoWork event handler of a BackgroundWorker. Unfortunately, this throws an error that says "Object reference not set to an instance of an object." The HttpContext element is not null; I checked that. It is some property nested in the HttpContext object that appears to be null, but I have not been able to identify why this won't work. It happens as soon as I move the processing to the BackgroundWorker thread. My question is: how can I execute the Server.Execute method asynchronously? Thank you,
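    A minimal sketch of the usual workaround, assuming the worker is started from the request thread (the class and method names here are illustrative): HttpContext.Current is thread-static, so a BackgroundWorker thread starts with a null context, and handing it the captured context restores HttpContext.Current - though not everything Server.Execute relies on internally is guaranteed to survive the thread switch.

        using System.ComponentModel;
        using System.IO;
        using System.Web;

        public static class AsyncExecuteSketch
        {
            public static void Run(string path)
            {
                // Capture the context while still on the request thread;
                // HttpContext.Current is per-thread and will be null on the
                // BackgroundWorker thread.
                var context = HttpContext.Current;

                var worker = new BackgroundWorker();
                worker.DoWork += (sender, e) =>
                {
                    // Re-associate the captured context with the worker thread
                    // before touching Server.Execute.
                    HttpContext.Current = context;
                    using (var writer = new StringWriter())
                    {
                        context.Server.Execute(path, writer, false);
                    }
                };
                worker.RunWorkerAsync();
            }
        }

    Note that the request may complete and tear down the context before the worker runs; this sketch only demonstrates why the null reference appears off the request thread, not that Server.Execute is supported there.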

    Read the article

  • C# threading pattern that will let me flush

    - by Jeff Alexander
    I have a class that implements the Begin/End invocation pattern, where I initially used ThreadPool.QueueUserWorkItem() to thread my work. I now have a side effect where someone using my class calls Begin (with a callback) a ton of times to do a lot of processing, so ThreadPool.QueueUserWorkItem is creating a ton of threads to do that processing. That in itself isn't bad, but there are instances where they want to abandon the processing and start a new batch, yet they are forced to wait for their first request to finish. Since ThreadPool.QueueUserWorkItem() doesn't allow me to cancel queued work, I am trying to come up with a better way to queue it up, and maybe provide an explicit FlushQueue() method in my class to allow the caller to abandon the work in my queue. Does anyone have a suggestion for a threading pattern that fits my needs?
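    One pattern that fits this description is a single-consumer queue with an explicit flush. A minimal sketch, assuming the work items are independent Action delegates (the names WorkQueue, Enqueue and FlushQueue are illustrative, not from the original class):

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        public sealed class WorkQueue : IDisposable
        {
            private readonly BlockingCollection<Action> _pending = new BlockingCollection<Action>();
            private readonly Thread _consumer;

            public WorkQueue()
            {
                // One consumer thread drains the queue instead of one
                // ThreadPool thread per Begin call.
                _consumer = new Thread(() =>
                {
                    foreach (var work in _pending.GetConsumingEnumerable())
                        work();
                }) { IsBackground = true };
                _consumer.Start();
            }

            public void Enqueue(Action work)
            {
                _pending.Add(work);
            }

            // Discard everything that hasn't started yet; the item currently
            // executing (if any) still runs to completion.
            public void FlushQueue()
            {
                Action discarded;
                while (_pending.TryTake(out discarded)) { }
            }

            public void Dispose()
            {
                _pending.CompleteAdding();
                _consumer.Join();
            }
        }

    Because only one item runs at a time, FlushQueue can never abandon work that is mid-flight, which keeps the cancellation semantics simple.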

    Read the article

  • Removing the contents of a Chan or MVar in a single discrete step

    - by Bill
    I'm writing a discrete simulation where request values from multiple threads accumulate in a centralized queue. Every n milliseconds, a manager wakes up to process requests. When the manager wakes up, it should retrieve all of the contents of the central queue in a single discrete step. While processing these, any client threads attempting to submit to the queue should block. When processing completes, the queue reopens and the manager goes back to sleep. What's the best way to do this? The retry behavior of STM isn't really what I want. If I use a Chan or MVar, there's no way to prevent clients from enqueuing additional requests during processing. One approach is to use an MVar as a mutex on a Chan holding the queue. Are there other ways to do this?

    Read the article

  • How to outperform this regex replacement?

    - by spender
    After considerable measurement, I have identified a hotspot in one of our Windows services that I'd like to optimize. We are processing strings that may have multiple consecutive spaces in them, and we'd like to reduce them to single spaces. We use a static compiled regex for this task: private static readonly Regex regex_select_all_multiple_whitespace_chars = new Regex(@"\s+", RegexOptions.Compiled); and then use it as follows: var cleanString = regex_select_all_multiple_whitespace_chars.Replace(dirtyString.Trim(), " "); This line is invoked several million times and is proving to be fairly intensive. I've tried to write something better, but I'm stumped. Given the fairly modest processing requirements of the regex, surely there's something faster. Could unsafe processing with pointers speed things up further?
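    For comparison, a minimal non-regex sketch: a single pass that folds each whitespace run into one space, assuming "whitespace" means char.IsWhiteSpace (which covers the tabs and newlines that \s also matches):

        using System.Text;

        public static string CollapseWhitespace(string input)
        {
            var sb = new StringBuilder(input.Length);
            bool lastWasWhitespace = false;
            foreach (char c in input)
            {
                if (char.IsWhiteSpace(c))
                {
                    // Emit one space for the whole run, and only once.
                    if (!lastWasWhitespace)
                        sb.Append(' ');
                    lastWasWhitespace = true;
                }
                else
                {
                    sb.Append(c);
                    lastWasWhitespace = false;
                }
            }
            return sb.ToString();
        }

    Called as CollapseWhitespace(dirtyString.Trim()), this avoids the regex engine entirely and allocates only the StringBuilder and the result string.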

    Read the article

  • How to implement paging in NHibernate with a left join query

    - by Gabe Moothart
    I have an NHibernate query that looks like this: var query = Session.CreateQuery(@" select o from Order o left join o.Products p where (o.CompanyId = :companyId) AND (p.Status = :processing) order by o.UpdatedOn desc") .SetParameter("companyId", companyId) .SetParameter("processing", Status.Processing) .SetResultTransformer(Transformers.DistinctRootEntity); var data = query.List<Order>(); I want to implement paging for this query, so I only return x rows instead of the entire result set. I know about SetMaxResults() and SetFirstResult(), but because of the left join and DistinctRootEntity, that could return fewer than x Orders. I tried "select distinct o" as well, but the SQL that is generated for that (using the SQL Server 2008 dialect) seems to ignore the distinct for pages after the first one (I think this is the problem). What is the best way to accomplish this?
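    A commonly suggested workaround is to page over the distinct Order ids first and then fetch the entities for just that page. A sketch under the assumption that Order exposes an Id property (page and pageSize are hypothetical variables; UpdatedOn must appear in the projection because of the DISTINCT/ORDER BY combination):

        // requires using System.Linq;
        // Step 1: page over the ids of matching orders only.
        var rows = Session.CreateQuery(@"
            select distinct o.Id, o.UpdatedOn
            from Order o left join o.Products p
            where o.CompanyId = :companyId and p.Status = :processing
            order by o.UpdatedOn desc")
            .SetParameter("companyId", companyId)
            .SetParameter("processing", Status.Processing)
            .SetFirstResult(page * pageSize)
            .SetMaxResults(pageSize)
            .List<object[]>();

        // Step 2: load the full Order entities for just that page of ids.
        var ids = rows.Select(r => r[0]).ToList();
        var data = Session.CreateQuery(
            "from Order o where o.Id in (:ids) order by o.UpdatedOn desc")
            .SetParameterList("ids", ids)
            .List<Order>();

    The paging then applies to Orders rather than to joined rows, so each page contains exactly pageSize Orders (except the last).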

    Read the article

  • How to give highest priority to events generated from the main thread than those generated from a secondary thread

    - by martjno
    I have a C++ application written in wxWidgets, which has a main thread (GUI) and a working thread (calculations). The working thread executes commands requested by the main thread and communicates the result to the main thread by posting an event after every step of the processing. The problem is that when the working thread sends many events consecutively, the GUI requests made by the user (e.g. interrupting the processing by clicking a button) won't be processed by the event handler until the working thread has finished. This actually happens on OSX; on Windows it works perfectly. I've tried wxThread::SetPriority and wxThread::Yield, but nothing changes. It works if I put wxThread::Sleep in the working thread, but this slows down the processing considerably.

    Read the article

  • .NET: What is the purpose of the ProhibitDtd property in XmlReaderSettings? Why is DTD a security issue?

    - by Cheeso
    The documentation says: When set to true, the XmlReader throws an XmlException when any DTD content is encountered. Do not enable DTD processing if you are concerned about Denial of Service issues or if you are dealing with untrusted sources. If you have DTD processing enabled, you can use the XmlSecureResolver to restrict the resources that the XmlReader can access. You can also design your application so that the XML processing is memory and time constrained. For example, configure time-out limits in your ASP.NET application. Can someone please explain the issue? Why would a reader application want to prohibit the retrieval of a DTD? Where is the denial-of-service issue, if it is a reading application? What is the "trust" issue that is mentioned? Thanks
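    For reference, a minimal sketch of the property in use (the class, method and file names here are illustrative; note that ProhibitDtd already defaults to true, and on .NET 4 and later it is superseded by the DtdProcessing enum):

        using System.Xml;

        public static class DtdExample
        {
            public static void ReadUntrusted(string path)
            {
                var settings = new XmlReaderSettings
                {
                    // Reject any <!DOCTYPE ...> outright. A malicious DTD can
                    // define recursively expanding entities (a "billion laughs"
                    // payload) that exhaust memory during parsing, which is the
                    // denial-of-service concern the documentation alludes to.
                    ProhibitDtd = true
                };

                using (var reader = XmlReader.Create(path, settings))
                {
                    while (reader.Read())
                    {
                        // process nodes
                    }
                }
            }
        }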

    Read the article

  • Replication - synchronizing most of the data some of the time

    - by uncle brad
    I have some data that isn't properly "partitioned" (for lack of a better word). All inserts, processing and reporting happen on the same table. The bulk of the processing happens not long after the insert and not long after that it becomes immutable (we're talking days). I could do all inserts and processing on a new table that I replicate to the old table. When I detect that the data has become immutable I would delete the data from the new table, but I would edit the delete replication stored procedure so that the delete did not replicate. How bad an idea is this? It seems attractive at the moment (I haven't slept on it yet) because it might mitigate a performance problem with only very small changes to the application. It also seems like it might be a good way to shoot myself in the foot.

    Read the article

  • .Remove(object) on a List<T> returned from LINQ to SQL compiled query won't delete the Object right

    - by soldieraman
    I am returning two lists from the database using a LINQ to SQL compiled query. While looping over the first list I remove duplicates from the second list, as I don't want to process already existing objects again. e.g.

        // oldCustomers is a List returned by my compiled LINQ to SQL statement,
        // with a .ToList() added at the end. Same goes for newCustomers.
        foreach (Customer oC in oldCustomers)
        {
            // Do some processing
            newCustomers.Remove(newCustomers.Find(nC => nC.CustomerID == oC.CustomerID));
        }
        foreach (Customer nC in newCustomers)
        {
            // Do some processing
        }
        DataContext.SubmitChanges();

    I expect this to only save the changes that have been made to the customers in my processing and not remove or delete any of my customers from the database. Correct? I have tried it and it works fine - but I want to know if there is any rare case where one might actually get removed.
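    As a side note on the de-duplication itself: List<T>.Remove(null) is a silent no-op when Find matches nothing, and the nested search is O(n*m). A sketch of a more direct form, assuming CustomerID is an int that uniquely identifies a customer (oldIds is an illustrative name):

        using System.Collections.Generic;
        using System.Linq;

        // Build the set of ids that already exist...
        var oldIds = new HashSet<int>(oldCustomers.Select(oc => oc.CustomerID));

        // ...then drop every new customer whose id is already known.
        // RemoveAll only mutates the in-memory list; it issues no SQL,
        // so nothing is deleted from the database by this call.
        newCustomers.RemoveAll(nc => oldIds.Contains(nc.CustomerID));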

    Read the article

  • Lightweight messaging (async invocations) in Java

    - by Sergey Mikhanov
    I am looking for a lightweight messaging framework in Java. My task is to process events in a SEDA manner: I know that some stages of the processing can be completed quickly and others not, and I would like to decouple these stages of processing. Let's say I have components A and B, and a processing engine (be this a container or whatever else) invokes component A, which in turn invokes component B. I do not care if the execution time of component B is 2s, but I do care that the execution time of component A stays below 50ms, for example. Therefore, it seems most reasonable for component A to submit a message to B, which B will process at the desired time. I am aware of the different JMS implementations and Apache ActiveMQ: they are too heavyweight for this. I have searched for some lightweight messaging framework (with really basic features like message serialization and the simplest routing) to no avail. Do you have anything to recommend here?

    Read the article

  • RadGrid Ajax PopAlert

    - by user272671
    I have a situation where I need, based on user selection and some server-side processing, to display a message that lets the user choose to continue processing or cancel. I have a RadGrid populated with data from the database. When the user adds a new item to the grid, I want to do some processing in the back end and then inform the user of what could result, giving him/her the choice to continue. I believe that a message box or modal popup/radalert is the best way to do it, but how do I create the message in the back end and then, using a popup, display the message and block until the user responds? How do I do it, please?

    Read the article

  • Serving large generated files using Google App Engine?

    - by John Carter
    Hiya, Presently I have a GAE app that does some offline processing (backs up a user's data) and generates a file that's somewhere in the neighbourhood of 10 - 100 MB. I'm not sure of the best way to serve this file to the user. The two options I'm considering are: adding some code to the offline processing code that 'spoofs' it as a form upload to the blobstore, and going through the normal blobstore process to serve the file; or having the offline processing code store the file somewhere off of GAE, and serving it from there. Is there a much better approach I'm overlooking? I'm guessing this is functionality that isn't well suited to GAE. I had thought of storing it in the datastore as db.Text or db.Blob, but there I encounter the 1 MB limit. Any input would be appreciated,

    Read the article

  • Struggling to "clear" a CGLayer -- can it even be done?

    - by Joe Blow
    So I'm doing this repetitively - making a CGLayer, doing some processing, and then releasing it. This happens a lot in real time -- so surely there is a lot of overhead in making a whole new CGLayer each time? Surely it would be better to just keep the layer around and start fresh each time? However, I do not know any way, at all, to "erase" or "start from blank" a CGLayer. Can anyone help with this? There is a function CGContextBeginPath(cc), but it's confusing: it seems to only clear "that" path; it does not erase the whole CGLayer back to a blank canvas. How do you return a CGLayer to a blank canvas? Does anyone know? CGLayerRef lair = CGLayerCreateWithContext( UIGraphicsGetCurrentContext(), CGSizeMake(1024,768), NULL); CGContextRef cc = CGLayerGetContext(lair); // various processing here CGContextAddPath(cc, somePath); // various processing here CGLayerRelease(lair); Any ideas?!

    Read the article

  • Are there any memory restrictions on an ASP.NET application? HttpHandler?

    - by tpower
    I have an ASP.Net MVC application that allows users to upload images. When I try to upload a really large file (400MB) I get an error. I assumed that my image processing code (home brew) was very inefficient, so I decided I would try using a third party library to handle the image processing parts. Because I'm using TDD, I wanted to first write a test that fails. But when I test the controller action with the same large file it is able to do all the image processing without any trouble. The error I get is "Out of memory". I'm sure my code is probably using a lot more memory than it needs to but I just want to know why my test passes. The other difference is that I'm using SWFUpload which is not used with the test. Could this be the cause?

    Read the article

  • JMeter Stress testing

    - by mcondiff
    I have a MAMP server hosting a Joomla instance. I'd like to hear the community's thoughts on the best way to stress test the server and find its breaking point in terms of concurrent users etc. Currently I have set up a test plan which goes to the home page, grabbing the index.php, CSS, JS and all images, and I have run tests with 1 to 100 users and a varying number of loops. What I'd like to know is: what number of concurrent or looping requests is a good gauge of whether my server can handle the proposed increase in traffic? What is a good KB/sec, throughput, average, max and min in the Aggregate Report, and at what number of threads/loops etc.? I have googled and have not found immediate answers to these questions, so I thought to come here. More or less I have just used http://jakarta.apache.org/jmeter/usermanual/jmeter_proxy_step_by_step.pdf to guide me, and then I have been winging it in terms of thread and loop numbers. Any light shed on this subject would be much appreciated.

    Read the article

  • JBoss Seam project can not be run/deployed

    - by user1494328
    I created a sample application in the Seam framework (Seam Web Project) on JBoss AS 7.1. When I try to run the application, the console displays:

        23:29:35,419 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-3) MSC00001: Failed to start service jboss.deployment.unit."secoundProject-ds.xml".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."secoundProject-ds.xml".PARSE: Failed to process phase PARSE of deployment "secoundProject-ds.xml"
            at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
            at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
            at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_24]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_24]
            at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_24]
        Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: IJ010061: Unexpected element: local-tx-datasource
            at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:85)
            at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
            ... 5 more
        Caused by: org.jboss.jca.common.metadata.ParserException: IJ010061: Unexpected element: local-tx-datasource
            at org.jboss.jca.common.metadata.ds.DsParser.parseDataSources(DsParser.java:183)
            at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:119)
            at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:82)
            at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:80)
            ... 6 more
        23:29:35,452 INFO [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015877: Stopped deployment secoundProject-ds.xml in 1ms
        23:29:35,455 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015863: Replacement of deployment "secoundProject-ds.xml" by deployment "secoundProject-ds.xml" was rolled back with failure message {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"secoundProject-ds.xml\".PARSE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"secoundProject-ds.xml\".PARSE: Failed to process phase PARSE of deployment \"secoundProject-ds.xml\""}}
        23:29:35,457 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015876: Starting deployment of "secoundProject-ds.xml"
        23:29:35,920 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-1) MSC00001: Failed to start service jboss.deployment.unit."secoundProject-ds.xml".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."secoundProject-ds.xml".PARSE: Failed to process phase PARSE of deployment "secoundProject-ds.xml"
            [the same stack trace as above repeats]
        23:29:35,952 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014774: Service status report JBAS014777: Services which failed to start: service jboss.deployment.unit."secoundProject-ds.xml".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."secoundProject-ds.xml".PARSE: Failed to process phase PARSE of deployment "secoundProject-ds.xml"

    My secoundProject-ds.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE datasources PUBLIC "-//JBoss//DTD JBOSS JCA Config 1.5//EN" "http://www.jboss.org/j2ee/dtd/jboss-ds_1_5.dtd">
        <datasources>
            <local-tx-datasource>
                <jndi-name>secoundProjectDatasource</jndi-name>
                <use-java-context>true</use-java-context>
                <connection-url>jdbc:mysql://localhost:3306/database</connection-url>
                <driver-class>com.mysql.jdbc.Driver</driver-class>
                <user-name>root</user-name>
                <password></password>
            </local-tx-datasource>
        </datasources>

    When I comment out the tags, the errors disappear, but the application is unavailable in the browser (The requested resource (/secoundProject/) is not available.). What should I do to fix this problem?
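    The parser error is the key: IJ010061 says AS 7's IronJacamar parser does not recognize the old jboss-ds_1_5.dtd elements such as local-tx-datasource. A hedged sketch of the newer *-ds.xml shape that AS 7.1 expects (the driver name and JNDI path below are assumptions to adapt; the <driver> value must match a JDBC driver registered in standalone.xml or a deployed driver jar):

        <?xml version="1.0" encoding="UTF-8"?>
        <datasources xmlns="http://www.jboss.org/ironjacamar/schema">
            <!-- AS 7 expects <datasource>, not <local-tx-datasource> -->
            <datasource jndi-name="java:jboss/datasources/secoundProjectDatasource"
                        pool-name="secoundProjectDS">
                <connection-url>jdbc:mysql://localhost:3306/database</connection-url>
                <driver>mysql</driver>
                <security>
                    <user-name>root</user-name>
                    <password></password>
                </security>
            </datasource>
        </datasources>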

    Read the article

  • java.lang.OutOfMemoryError on ec2 machine

    - by vinchan
    I have a Java app on a large instance that will spawn up to 800 threads. I can run the application fine as user "root" but not as another user which I created. I get the deadly java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:657) at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:943) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1325) nightmare. I have already tried increasing the stack size in limits.conf, to no avail. Please help me out. What is different here between root and the other user?

    Read the article

  • What would be better in my case - apache, nginx or lighttpd ?

    - by The Devil
    Hey everybody, I'm writing a PHP site that's expected to get about 200-300 concurrent users browsing it. On initialization, the application will load about 30 PHP classes, some 10 or maybe 15 images, and a couple of CSS files. So my question is: what else can I do (except optimizing my code and using APC/eAccelerator for PHP) to get as close as possible to those numbers of concurrent users? Currently we haven't chosen a server for the site to be hosted on, but most probably it'll be a dual-core VPS with 2 or maybe 4 GB of RAM. Is it possible for such a server to handle that load? Also, how could I test it myself and be sure that it'll be able to handle it? Thanks in advance, Me

    Read the article

  • How to limit the number of open TCP streams from the same IP to a local port?

    - by JMW
    Hi, I would like to limit the number of concurrent open TCP streams from the same IP to the server's (local) port - let's say 4 concurrent connections. How can this be done with iptables? The closest thing that I've found was this, from "In Apache, is there a way to limit the number of new connections per second/hour/day?": iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 86400 --hitcount 100 -j REJECT But this limitation just measures the number of new connections over time. This might be good for controlling HTTP traffic, but it is not a good solution for me, since my TCP streams usually have a lifetime between 5 minutes and 2 hours. Thanks a lot in advance for any reply :)
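    A hedged sketch using the connlimit match, which counts currently open connections per source address rather than the connection rate (port 80 and the limit of 4 mirror the question; adjust both to the actual service):

        # Reject a new TCP connection attempt to port 80 when the source IP
        # already has 4 connections established, i.e. allow at most 4
        # concurrent connections per source address.
        iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 4 -j REJECT --reject-with tcp-reset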

    Read the article

  • AB failed requests - What can I do about them?

    - by matthewsteiner
    So, in the past I've never had any problems with this app: all benchmarks had a 100% success rate. Yesterday I set up nginx to serve static content and pass other requests on to Apache. Now, if I have 1 concurrent user (-c 1), everything is fine. But it seems the more concurrent users I have, the more failed requests I get. Not a lot, but maybe about 10 or 15 out of 350. They're "length" failures, whatever that means. Visiting the website with a browser, I don't have any problems at all. How can I find out the cause of these failed requests?

    Read the article

  • TCP and fair bandwidth sharing

    - by lxgr
    The congestion control algorithm(s) of TCP seem to distribute the available bandwidth fairly between individual TCP flows. Is there some way to enable (or more precisely, enforce) fair bandwidth sharing on a per-host instead of a per-flow basis on a router? There should not be an (easy) way for a user to gain a disproportionate bandwidth share by using multiple concurrent TCP flows (the way some download managers and most P2P clients do). I'm currently running a DD-WRT router to share a residential DSL line, and currently it's possible to (inadvertently or maliciously) hog most of the bandwidth by using multiple concurrent connections, which affects VoIP conversations badly. I've played with the QoS settings a bit, but I'm not sure how to enable fair bandwidth sharing on a per-IP basis (per-service is not an option, as most of the flows are HTTP).

    Read the article

  • Can't run my servlet from tomcat server even though the classes are in package

    - by Mido
    Hi there, I am trying to get my servlet to run; I have been searching for 2 days and trying every possible solution, with no luck. The servlet class is in the appropriate folder (i.e. under the package name). I also added the jar files needed by my servlet into the lib folder. The web.xml file maps the URL and defines the servlet. So I did everything in the documentation and what people said here, and I am still getting this error:

        type Exception report
        message
        description The server encountered an internal error () that prevented it from fulfilling this request.

        exception
        javax.servlet.ServletException: Error instantiating servlet class assign1a.RPCServlet
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:108)
            org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:558)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:379)
            org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:282)
            org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:357)
            org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1687)
            java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            java.lang.Thread.run(Thread.java:619)

        root cause
        java.lang.NoClassDefFoundError: assign1a/RPCServlet (wrong name: server/RPCServlet)
            java.lang.ClassLoader.defineClass1(Native Method)
            java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
            java.lang.ClassLoader.defineClass(ClassLoader.java:616)
            java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
            org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2820)
            org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1143)
            org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1638)
            org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1516)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:108)
            org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:558)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:379)
            org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:282)
            org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:357)
            org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1687)
            java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            java.lang.Thread.run(Thread.java:619)

        note The full stack trace of the root cause is available in the Apache Tomcat/7.0.5 logs.
    Also here is my servlet code:

        package assign1a;

        import java.io.IOException;
        import java.util.logging.Level;
        import java.util.logging.Logger;

        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        import lib.jsonrpc.RPCService;

        public class RPCServlet extends HttpServlet {

            private static final long serialVersionUID = -5274024331393844879L;
            private static final Logger log = Logger.getLogger(RPCServlet.class.getName());

            protected RPCService service = new ServiceImpl();

            public void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws IOException, ServletException {
                response.setContentType("text/html");
                response.getWriter().write("rpc service " + service.getServiceName() + " is running...");
            }

            public void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws IOException, ServletException {
                try {
                    service.dispatch(request, response);
                } catch (Throwable t) {
                    log.log(Level.WARNING, t.getMessage(), t);
                }
            }
        }

    Please help me :) Thanks.

    EDIT: here are the contents of my web.xml file:

        <web-app xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
                 version="3.0" metadata-complete="true">
            <servlet>
                <servlet-name>jsonrpc</servlet-name>
                <servlet-class>assign1a.RPCServlet</servlet-class>
            </servlet>
            <servlet-mapping>
                <servlet-name>jsonrpc</servlet-name>
                <url-pattern>/rpc</url-pattern>
            </servlet-mapping>
        </web-app>

    Read the article

  • Why does this Java code not utilize all CPU cores?

    - by ReneS
    The attached simple Java code should load all available CPU cores when started with the right parameters. So for instance, you start it with java VMTest 8 int 0, and it will start 8 threads that do nothing but loop, adding 2 to an integer - something that runs in registers and doesn't even allocate new memory. The problem we are facing now is that we do not get a 24-core machine (AMD, 2 sockets with 12 cores each) loaded when running this simple program (with 24 threads, of course). Similar things happen with 2 programs of 12 threads each, or on smaller machines. So our suspicion is that the JVM (Sun JDK 6u20 on Linux x64) does not scale well. Did anyone see similar things, or can anyone run it and report whether or not it runs well on his/her machine (>= 8 cores only, please)? Ideas? I tried it on Amazon EC2 with 8 cores too, but the virtual machine seems to run differently from a real box, so the load behaves strangely there.

        package com.test;

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;
        import java.util.concurrent.TimeUnit;

        public class VMTest {

            public class IntTask implements Runnable {
                @Override
                public void run() {
                    int i = 0;
                    while (true) {
                        i = i + 2;
                    }
                }
            }

            public class StringTask implements Runnable {
                @Override
                public void run() {
                    int i = 0;
                    String s;
                    while (true) {
                        i++;
                        s = "s" + Integer.valueOf(i);
                    }
                }
            }

            public class ArrayTask implements Runnable {
                private final int size;

                public ArrayTask(int size) {
                    this.size = size;
                }

                @Override
                public void run() {
                    int i = 0;
                    String[] s;
                    while (true) {
                        i++;
                        s = new String[size];
                    }
                }
            }

            public void doIt(String[] args) throws InterruptedException {
                final String command = args[1].trim();
                ExecutorService executor = Executors.newFixedThreadPool(Integer.valueOf(args[0]));
                for (int i = 0; i < Integer.valueOf(args[0]); i++) {
                    Runnable runnable = null;
                    if (command.equalsIgnoreCase("int")) {
                        runnable = new IntTask();
                    } else if (command.equalsIgnoreCase("string")) {
                        runnable = new StringTask();
                    } else if (command.equalsIgnoreCase("array")) {
                        // Added branch: the original code declared ArrayTask but
                        // never selected it; args[2] is the documented size argument.
                        runnable = new ArrayTask(Integer.valueOf(args[2]));
                    }
                    Future<?> submit = executor.submit(runnable);
                }
                executor.awaitTermination(1, TimeUnit.HOURS);
            }

            public static void main(String[] args) throws InterruptedException {
                if (args.length < 3) {
                    System.err.println("Usage: VMTest threadCount taskDef size");
                    System.err.println("threadCount: Number 1..n");
                    System.err.println("taskDef: int string array");
                    System.err.println("size: size of memory allocation for array");
                    System.exit(-1);
                }
                new VMTest().doIt(args);
            }
        }

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I did write about transaction logs and recovery on my blog, and the concept of simplifying the basics is a challenge. In the real world we always see checks and queues for a process - say railway reservations, banks, customer support etc.; there is a process of lines and queues to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using checks like locking and blocking mechanisms, and they implement the standards in a way that facilitates higher concurrency. In this post, let us talk about the topic of concurrency and the various aspects that one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    - Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time.
    - The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.
    - Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data. Concurrency is also affected when multiple processes attempt to change the same data simultaneously.
    - There are two approaches to managing concurrent data access: the optimistic concurrency model and the pessimistic concurrency model.

    Concurrency Models

    Pessimistic concurrency:
    - Default behavior: acquire locks to block access to data that another process is using.
    - Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (assumes conflicts will occur).
    - Avoids conflicts by acquiring a lock on data being read, so no other processes can modify that data. It also acquires locks on data being modified, so no other processes can access the data for either reading or modifying.
    - Readers block writers; writers block readers and writers.

    Optimistic concurrency:
    - Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying.
    - The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs.
    - Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to that data.
    - Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows.
    - Readers do not block writers and writers do not block readers; but writers can and will block writers.

    Transaction Processing
    - A transaction is the basic unit of work in SQL Server.
    - A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: marked with a BEGIN TRAN, with the end marked by a COMMIT TRAN or ROLLBACK TRAN).
    - Transactions must exhibit all the ACID properties of a transaction.

    ACID Properties
    Transaction processing must guarantee the consistency and recoverability of SQL Server databases. It ensures all transactions are performed as a single unit of work regardless of hardware or system failure. A - Atomicity, C - Consistency, I - Isolation, D - Durability.
    - Atomicity: each transaction is treated as all or nothing - it either commits or aborts.
    - Consistency: ensures that a transaction won't allow the system to arrive at an incorrect logical state - the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: after a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on the data.

    Transaction Dependencies
    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors (known as dependency problems or consistency problems).
    - Lost updates: occur when two processes read the same data, both manipulate the data changing its value, and then both try to update the original data to the new value. The second process might overwrite the first update completely.
    - Dirty reads: occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable reads: a read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing.
    - Phantoms: occur when membership in a set changes. A phantom occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels
    SQL Server supports 5 isolation levels that control the behavior of read operations.

    Read Uncommitted
    - All of the behaviors above except for lost updates are possible.
    - Implemented by allowing the read operations to not take any locks; because of this, a read won't be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed.
    - When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you have read a row before it is updated and an update then moves the row to a higher page number than your scan encounters later.
    - Performing an allocation order scan under read uncommitted can also cause you to miss a row completely - this can happen when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.

    Read Committed
    - There are two varieties of read committed isolation: optimistic and pessimistic (the default).
    - Ensures that a read never reads data that another application hasn't committed.
    - If another transaction is updating data and has exclusive locks on the data, your transaction will have to wait for the locks to be released. Your transaction must put share locks on data that is visited, which means that data might be unavailable for others to use. A share lock doesn't prevent others from reading, but it prevents them from updating.
    - Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. The data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.

    Repeatable Read
    - This is a pessimistic isolation level.
    - Ensures that if a transaction revisits data or a query is reissued, the data doesn't change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no changes can be made by other transactions. However, this does allow phantom rows to appear.
    - Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot
    - Snapshot isolation (SI) is an optimistic isolation level.
    - Allows processes to read older versions of committed data if the current version is locked.
    - The difference between snapshot and read committed (snapshot) has to do with how old the older versions have to be.
    - It's possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable
    - This is the strongest of the pessimistic isolation levels.
    - Adds to the repeatable read isolation level by ensuring that if a query is reissued, rows were not added in the interim; i.e., phantoms do not appear.
    - Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read - all shared locks in a transaction must be held until the transaction completes.
    - In addition, the serializable isolation level requires that you lock not only data that has been read but also data that doesn't exist. For example, if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock.
    - Serializable gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking and deadlocks, because they are the fundamental blocks that make concurrency possible. Now if you are with me - let us continue learning about SQL Server locking basics.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
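    A small C# illustration of choosing one of these isolation levels from client code - a sketch, with the connection string, table name and class name as placeholders:

        using System.Data;
        using System.Data.SqlClient;

        public static class IsolationDemo
        {
            public static int CountOrdersUnderSnapshot(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    // Snapshot isolation must first be enabled on the database:
                    //   ALTER DATABASE Demo SET ALLOW_SNAPSHOT_ISOLATION ON;
                    using (var tx = conn.BeginTransaction(IsolationLevel.Snapshot))
                    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn, tx))
                    {
                        // This read sees the last committed row versions: it
                        // neither blocks writers nor is blocked by them, as
                        // described in the post above.
                        int count = (int)cmd.ExecuteScalar();
                        tx.Commit();
                        return count;
                    }
                }
            }
        }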

    Read the article

  • The C++ Standard Template Library as a BDB Database (part 1)

    - by Gregory Burd
    If you've used C++, you undoubtedly have used the Standard Template Library. Designed for in-memory management of data and collections of data, this is a core aspect of all C++ programs. Berkeley DB is a database library with a variety of APIs designed to ease development; one of those APIs extends and makes use of the STL for persistent, transactional data storage. dbstl is an STL-standard-compatible API for Berkeley DB. You can make use of Berkeley DB via this API as if you were using C++ STL classes, and still make full use of Berkeley DB features. Being an STL library backed by a database, there are some important and useful features that dbstl can provide that the C++ STL library can't. The following are a few typical use cases for the dbstl extensions to the C++ STL for data storage. When data exceeds available physical memory: Berkeley DB dbstl can vastly improve performance when managing a dataset which is larger than available memory. Performance suffers when the data can't reside in memory because the OS is forced to use virtual memory and swap pages of memory to disk. Switching to BDB's dbstl improves performance while allowing you to keep using STL containers. When you need concurrent access to C++ STL containers: few existing C++ STL implementations support concurrent access (create/read/update/delete) within a container; at best you'll find support for accessing different containers of the same type concurrently. With the Berkeley DB dbstl implementation you can concurrently access your data from multiple threads or processes with confidence in the outcome. When your objects are your database: you want to have object persistence in your application, store objects in a database, and use the objects across different runs of your application without having to translate them to/from SQL. The dbstl is capable of storing complicated objects, even those not located on a contiguous chunk of memory space, directly to disk without any unnecessary overhead. These are a few reasons why you should consider using Berkeley DB's C++ STL support for your embedded database application. In the next few blog posts I'll show you a few examples of this approach; it's easy to use and easy to learn.

    Read the article
