Search Results

Search found 7077 results on 284 pages for 'concurrent processing'.


  • c# Network Programming - HTTPWebRequest Scraping

    - by masterguru
    Hi, I am building a web scraping application. It should scrape a complex web site with concurrent HttpWebRequests from a single host to a single target web server. The application should run on Windows Server 2008. One single HttpWebRequest for data could take from 1 minute to 4 minutes to complete (because of long-running DB operations). I should have at least 100 parallel requests to the target web server, but I have noticed that when I use more than 2-3 long-running requests I have big performance issues (request timeouts/hanging). How many concurrent requests can I have in this scenario from a single host to a single target web server? Can I use thread pools in the application to run parallel HttpWebRequests to the server? Will I have any issues with the default outbound HTTP connection/request limits? What about request timeouts when I reach the outbound connection limits? What would be the best setup for my scenario? Any help would be appreciated. Thanks

    Read the article

  • TFS Load Testing Web Tests

    - by durilai
    I am configuring a load test and am curious/confused about the settings. I am testing an intranet website that is expected to have 6000 concurrent users. My employer had a previous consultant tell them that the number of load test users does not matter and that we only need to worry about requests/second. They previously determined that those 6000 users would generate 30 rps; I feel that is not correct and that we need to show we can exceed that number. The previous load test was set for only 200 users and the results showed that it did exceed the 200 rps. They were happy with the results, but that is not how I understand this. My question is: if we need to support 6000 concurrent users, should I just set my users to 6000 and run, or is the rps an adequate piece of data to rely on? Any help is appreciated. Thanks
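
    A quick way to sanity-check those figures (my own back-of-the-envelope reasoning, not part of the original question) is Little's Law, which relates concurrent users N, throughput X in requests per second, and the average time each user spends per request cycle (response time R plus think time Z):

        \[ N = X \cdot (R + Z) \quad\Longrightarrow\quad R + Z = \frac{N}{X} = \frac{6000}{30} = 200 \text{ seconds} \]

    In other words, 6000 users producing only 30 rps implies each user issues a request roughly every 200 seconds. If real users interact more often than that, the same 6000 users will generate far more than 30 rps, which is why validating the assumed think time (or simply testing with the target user count) matters.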

    Read the article

  • Java ThreadPoolExecutor getting stuck while using ArrayBlockingQueue

    - by Ravi Rao
    Hi, I'm working on an application that uses ThreadPoolExecutor for handling various tasks. The ThreadPoolExecutor is getting stuck after some duration. To reproduce this in a simpler environment, I've written the following simple program:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.RejectedExecutionHandler;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        public class MyThreadPoolExecutor {
            private int poolSize = 10;
            private int maxPoolSize = 50;
            private long keepAliveTime = 10;
            private ThreadPoolExecutor threadPool = null;
            private final ArrayBlockingQueue<Runnable> queue =
                    new ArrayBlockingQueue<Runnable>(100000);

            public MyThreadPoolExecutor() {
                threadPool = new ThreadPoolExecutor(poolSize, maxPoolSize, keepAliveTime,
                        TimeUnit.SECONDS, queue);
                threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
                    @Override
                    public void rejectedExecution(Runnable runnable, ThreadPoolExecutor threadPoolExecutor) {
                        System.out.println("Execution rejected. Please try restarting the application.");
                    }
                });
            }

            public void runTask(Runnable task) {
                threadPool.execute(task);
            }

            public void shutDown() {
                threadPool.shutdownNow();
            }

            public ThreadPoolExecutor getThreadPool() {
                return threadPool;
            }

            public void setThreadPool(ThreadPoolExecutor threadPool) {
                this.threadPool = threadPool;
            }

            public static void main(String[] args) {
                MyThreadPoolExecutor mtpe = new MyThreadPoolExecutor();
                for (int i = 0; i < 1000; i++) {
                    final int j = i;
                    mtpe.runTask(new Runnable() {
                        @Override
                        public void run() {
                            System.out.println(j);
                        }
                    });
                }
            }
        }

    Try executing this code a few times. It normally prints the numbers on the console and exits when all threads end, but at times it finishes all tasks and then does not terminate.
    The thread dump is as follows:

        MyThreadPoolExecutor [Java Application]
            MyThreadPoolExecutor at localhost:2619 (Suspended)
            Daemon System Thread [Attach Listener] (Suspended)
            Daemon System Thread [Signal Dispatcher] (Suspended)
            Daemon System Thread [Finalizer] (Suspended)
                Object.wait(long) line: not available [native method]
                ReferenceQueue<T>.remove(long) line: not available
                ReferenceQueue<T>.remove() line: not available
                Finalizer$FinalizerThread.run() line: not available
            Daemon System Thread [Reference Handler] (Suspended)
                Object.wait(long) line: not available [native method]
                Reference$Lock(Object).wait() line: 485
                Reference$ReferenceHandler.run() line: not available
            Thread [pool-1-thread-1] (Suspended)
                Unsafe.park(boolean, long) line: not available [native method]
                LockSupport.park(Object) line: not available
                AbstractQueuedSynchronizer$ConditionObject.await() line: not available
                ArrayBlockingQueue<E>.take() line: not available
                ThreadPoolExecutor.getTask() line: not available
                ThreadPoolExecutor$Worker.run() line: not available
                Thread.run() line: not available
            Thread [pool-1-thread-2] through Thread [pool-1-thread-10] (Suspended): identical stack traces to pool-1-thread-1
            Thread [DestroyJavaVM] (Suspended)
        C:\Program Files\Java\jre1.6.0_07\bin\javaw.exe (Jun 17, 2010 10:42:33 AM)

    In my actual application, the ThreadPoolExecutor threads go into this state and then it stops responding. Regards, Ravi Rao
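
    One possible explanation (my own reading, not something stated in the post): the ten core worker threads created by the pool are non-daemon, and keepAliveTime only applies to threads above corePoolSize by default, so once the tasks are done the workers park forever in ArrayBlockingQueue.take() and the JVM cannot exit unless the pool is shut down. A minimal sketch of that fix, appended to the end of main() in the code above (TimeUnit is already imported there):

        // Assumed fix: stop the pool once all 1000 tasks have been submitted.
        mtpe.getThreadPool().shutdown();                // accept no new tasks, finish the queued ones
        try {
            if (!mtpe.getThreadPool().awaitTermination(60, TimeUnit.SECONDS)) {
                mtpe.getThreadPool().shutdownNow();     // give up and interrupt the workers
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // Alternative: threadPool.allowCoreThreadTimeOut(true) lets the core threads
        // expire after keepAliveTime instead of parking in take() forever.

    Either way, the worker threads stop blocking in getTask() and the process can terminate on its own.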

    Read the article

  • atomic writes to ehcache

    - by Jacques René Mesrine
    Context: I am storing a java.util.List inside Ehcache: Key (String) --> List<UserDetail>. The ordered List contains a Top 10 ranking of my most active users. Problem: Concurrent 3rd-party clients might be requesting this list. I have a requirement to be as current as possible with regard to the ranking. Thus, if the ranking changes due to the activities of users, the ordered List in the cache must not be left stale for very long. Once I've recalculated a new List, I want to replace the one in the cache immediately. Consider a busy scenario whereby multiple concurrent clients are requesting the ranking; how can I replace the cache item in a fashion such that: clients can continue to pull a possibly stale snapshot, they never get a null value, and there will only be 1 server thread that writes to the cache.
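
    Ehcache replaces the value for a key in a single put() call, so a single writer can publish a fresh, immutable snapshot while readers keep pulling whichever snapshot is current. A minimal sketch, assuming the Ehcache 2.x Cache/Element API and reusing the post's UserDetail type:

        import java.util.Collections;
        import java.util.List;

        import net.sf.ehcache.Cache;
        import net.sf.ehcache.Element;

        public class RankingPublisher {
            private final Cache cache;

            public RankingPublisher(Cache cache) {
                this.cache = cache;
            }

            // Called only by the single writer thread after the ranking is recalculated.
            public void publish(List<UserDetail> freshTop10) {
                // Wrap the list so readers can never observe a half-built ranking.
                List<UserDetail> snapshot = Collections.unmodifiableList(freshTop10);
                // put() swaps the mapping in one step: concurrent readers get either the
                // previous Element or this new one, never null and never a partial list.
                cache.put(new Element("top10Ranking", snapshot));
            }
        }

    Readers simply call cache.get("top10Ranking") and may see a slightly stale snapshot until the next put() lands, which matches the requirements listed above.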

    Read the article

  • What happens if two COM classes, each without a threading model, are implemented in one in-proc COM server?

    - by sharptooth
    Consider a situation: I have an in-proc COM server that contains two COM classes. Both classes are marked as "no threading model" in the registry - the "ThreadingModel" value is simply absent. Both classes read/write the same set of global variables without any synchronization. As far as I know, "no threading model" will cause COM to disallow concurrent access to the same or different instances of the same class by different threads. Will COM also prevent concurrent access to instances of the two abovementioned different classes? Do I need synchronization when accessing the global variables from two different COM classes in this situation?

    Read the article

  • Producer/consumer in Grails?

    - by Mulone
    Hi all, I'm trying to implement a producer/consumer app in Grails, after several unsuccessful attempts at implementing concurrent threads. Basically I want to store all the events coming from clients (through separate AJAX calls) in a single queue and then process that queue linearly as soon as new events are added. This looks like the producer/consumer problem: http://en.wikipedia.org/wiki/Producer-consumer_problem How can I implement this in Grails (maybe with a timer, or even better by generating a 'process queue' event)? Basically I'd like to have a singleton service waiting for new events in the queue and processing them linearly (even if the queue is loaded by several concurrent processes). Any hints? Cheers!
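
    In plain Java terms (a hedged sketch; in Grails this would typically live in a singleton service, and ClientEvent is an invented stand-in for whatever the AJAX calls deliver), the pattern is a shared BlockingQueue with many producers and exactly one consumer thread:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.LinkedBlockingQueue;

        class ClientEvent { /* payload of one AJAX call */ }

        class EventQueueService {
            private final BlockingQueue<ClientEvent> queue = new LinkedBlockingQueue<ClientEvent>();
            private final ExecutorService consumer = Executors.newSingleThreadExecutor();

            EventQueueService() {
                // Single consumer: events are processed strictly one at a time, in arrival order.
                consumer.submit(new Runnable() {
                    public void run() {
                        try {
                            while (true) {
                                process(queue.take()); // blocks until an event arrives
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt(); // shut down cleanly
                        }
                    }
                });
            }

            // Called concurrently from the controllers handling the AJAX requests.
            void enqueue(ClientEvent event) {
                queue.add(event);
            }

            private void process(ClientEvent event) { /* linear processing goes here */ }
        }

    Because take() blocks, no timer or explicit 'process queue' event is needed; the consumer wakes up as soon as any producer enqueues something.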

    Read the article

  • How to prevent column b containing the same value as any column a in Oracle?

    - by Janek Bogucki
    What is a good way to prevent a table with 2 columns, a and b, from having any record where column b is equal to any value in column a? This would be used for a table of corrections like this, MR -> Mr Prf. -> Prof. MRs -> Mrs I can see how it could be done with a trigger and a subquery assuming no concurrent activity but a more declarative approach would be preferable. This is an example of what should be prevented, Wing Commdr. -> Wing Cdr. Wing Cdr. -> Wing Commander Ideally the solution would work with concurrent inserts and updates.

    Read the article

  • spring web application context is not loaded from jar file in WEB-INF/lib when running tomcat in eclipse

    - by Remy J
    I am experimenting with Spring, Maven, and Eclipse but am stumbling on a weird issue. I am running Eclipse Helios SR1 with the STS (Spring Tool Suite) plugin, which includes the Maven plugin as well. What I want to achieve is a Spring MVC webapp which uses an application context loaded from a local application context XML file, but also from other application contexts in jar file dependencies included in WEB-INF/lib. What I'd ultimately like to do is have my persistence layer separated into its own jar file, but containing its own Spring context files with persistence-specific configuration (e.g. a JPA entityManagerFactory). So, to experiment with loading resources from jar dependencies, I created a simple Maven project from Eclipse which defines an applicationContext.xml file in src/main/resources. Inside, I define a bean <bean id="mybean" class="org.test.MyClass" /> and create the class in the org.test package. I run mvn install from Eclipse, which generates a jar file containing my class and the applicationContext.xml file:

        testproj.jar
        |_ META-INF
        |_ org
        |  |_ test
        |     |_ MyClass.class
        |_ applicationContext.xml

    I then create a Spring MVC project from the Spring template projects provided by STS. I have configured an instance of Tomcat 7.0.8, and also an instance of SpringSource tc Server, within Eclipse. Deploying the newly created project on both servers works without problems. I then add my previous project as a Maven dependency of the MVC project; the jar file is correctly added to the Maven Dependencies of the project. In the web.xml that is generated, I now want to load the applicationContext.xml from the jar file as well as the existing one generated for the project. My web.xml now looks like this: org.springframework.web.context.ContextLoaderListener <!-- Processes application requests --> <servlet> <servlet-name>appServlet</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value> classpath*:applicationContext.xml, /WEB-INF/spring/appServlet/servlet-context.xml </param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>appServlet</servlet-name> <url-pattern>/</url-pattern> </servlet-mapping> Also, in my servlet-context.xml, I have the following: <context:component-scan base-package="org.test" /> <context:component-scan base-package="org.remy.mvc" /> to load classes from the jar's Spring context (org.test) and to load controllers from the MVC app context. I also changed one of my controllers in org.remy.mvc to autowire MyClass, to verify that loading the context has worked as intended: public class MyController { @Autowired private MyClass myClass; public void setMyClass(MyClass myClass) { this.myClass = myClass; } public MyClass getMyClass() { return myClass; } [...] } Now this is the weird bit: if I deploy the Spring MVC webapp on my Tomcat instance inside Eclipse (Run on Server...)
I get the following error : org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0': Initialization of bean failed; nested exception is java.lang.NoClassDefFoundError: org/test/MyClass at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:442) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:458) at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:339) at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:306) at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:127) at javax.servlet.GenericServlet.init(GenericServlet.java:160) at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1133) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1087) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:996) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4834) at org.apache.catalina.core.StandardContext$3.call(StandardContext.java:5155) at org.apache.catalina.core.StandardContext$3.call(StandardContext.java:5150) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907) at java.lang.Thread.run(Thread.java:619) Caused by: java.lang.NoClassDefFoundError: org/test/MyClass at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2427) at java.lang.Class.getDeclaredMethods(Class.java:1791) at org.springframework.util.ReflectionUtils.doWithMethods(ReflectionUtils.java:446) at org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping.determineUrlsForHandlerMethods(DefaultAnnotationHandlerMapping.java:172) at org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping.determineUrlsForHandler(DefaultAnnotationHandlerMapping.java:118) at org.springframework.web.servlet.handler.AbstractDetectingUrlHandlerMapping.detectHandlers(AbstractDetectingUrlHandlerMapping.java:79) at 
org.springframework.web.servlet.handler.AbstractDetectingUrlHandlerMapping.initApplicationContext(AbstractDetectingUrlHandlerMapping.java:58) at org.springframework.context.support.ApplicationObjectSupport.initApplicationContext(ApplicationObjectSupport.java:119) at org.springframework.web.context.support.WebApplicationObjectSupport.initApplicationContext(WebApplicationObjectSupport.java:72) at org.springframework.context.support.ApplicationObjectSupport.setApplicationContext(ApplicationObjectSupport.java:73) at org.springframework.context.support.ApplicationContextAwareProcessor.invokeAwareInterfaces(ApplicationContextAwareProcessor.java:106) at org.springframework.context.support.ApplicationContextAwareProcessor.postProcessBeforeInitialization(ApplicationContextAwareProcessor.java:85) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:394) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1413) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) ... 25 more 20-Feb-2011 10:54:53 org.apache.catalina.core.ApplicationContext log SEVERE: StandardWrapper.Throwable org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0': Initialization of bean failed; nested exception is java.lang.NoClassDefFoundError: org/test/MyClass If i build the war file (using maven "install" goal), and then deploy that war file in the webapps directory of a standalone tomcat server (7.0.8 as well) it WORKS :-( What am i missing ? Thanks for the help.

    Read the article

  • Push data to flex client

    - by KensoDev
    Howdy, I want to push data to Flex clients. I am talking about anywhere between 5000-15000 concurrent users who need to get data every time a currency changes, so that means lots of changes for lots of users. I have been looking into WebOrb.net, but the performance seems very poor (100 concurrent users) for a product so pricey (we purchased a license). So I have to look into alternatives. I know there's fluorineFx, but it seems no one is really using it for products and it lacks examples and documentation. My question is: what products can answer my needs (.NET backend), and what performance can I expect out of these products? Thanks

    Read the article

  • Do MyISAM holes get filled in automatically?

    - by NNN
    When you run a delete on a MyISAM table, it leaves a hole in the table until the table is optimized. This affects concurrent inserts. On the default setting, concurrent_inserts only work for tables without holes. However, in the documentation for MyISAM, under the concurrent_insert section it says: Enables concurrent inserts for all MyISAM tables, even those that have holes. For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread. Otherwise, MySQL acquires a normal write lock and inserts the row into the hole. http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_concurrent_insert Does that mean MyISAM automatically fills in holes whenever a new row is insert into the table? Previously I thought the holes would not be fixed until you OPTIMIZE a table.

    Read the article

  • How/is data shared between fastCGI processes?

    - by Josh the Goods
    I've written a simple Perl script that I'm running via FastCGI on Apache. The application loads a set of XML data files which are used to look up values based upon the query parameters of an incoming request. As I understand it, if I want to increase the number of concurrent requests my application can handle, I need to allow FastCGI to spawn multiple processes. Will each of these processes have to hold duplicate copies of the XML data in memory? Is there a way to set things up so that I can have one copy of the XML data loaded in memory while increasing the capacity to handle concurrent requests?

    Read the article

  • java.io.FileNotFoundException for valid URL

    - by Alexei
    Hello. I use the ROME library (rome.dev.java.net) to fetch RSS. The code is:

        URL feedUrl = new URL("http://planet.rubyonrails.ru/xml/rss");
        SyndFeedInput input = new SyndFeedInput();
        SyndFeed feed = input.build(new XmlReader(feedUrl));

    You can check that http://planet.rubyonrails.ru/xml/rss is a valid URL and the page is shown in a browser. But I get this exception from my application:

        java.io.FileNotFoundException: http://planet.rubyonrails.ru/xml/rss
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1311)
            at com.sun.syndication.io.XmlReader.<init>(XmlReader.java:237)
            at com.sun.syndication.io.XmlReader.<init>(XmlReader.java:213)
            at rssdaemonapp.ValidatorThread.run(ValidatorThread.java:32)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:619)

    I don't use any proxy. I get this exception on my PC and on the production server, and only for this URL; other URLs are working.
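
    A frequent cause of this symptom (an assumption on my part, not confirmed in the post) is the remote server answering Java's default User-Agent with a 404, which HttpURLConnection surfaces as a FileNotFoundException even though a browser sees the page. A sketch of a workaround that opens the connection manually and sets a browser-like User-Agent (the header value is just a placeholder) before handing the stream to ROME:

        import java.net.HttpURLConnection;
        import java.net.URL;

        import com.sun.syndication.feed.synd.SyndFeed;
        import com.sun.syndication.io.SyndFeedInput;
        import com.sun.syndication.io.XmlReader;

        public class FeedFetcher {
            public static SyndFeed fetch(String address) throws Exception {
                URL feedUrl = new URL(address);
                HttpURLConnection conn = (HttpURLConnection) feedUrl.openConnection();
                // Identify as something other than "Java/1.x"; some servers reject that agent.
                conn.setRequestProperty("User-Agent", "Mozilla/5.0 (compatible; RssDaemon)");
                return new SyndFeedInput().build(new XmlReader(conn.getInputStream()));
            }
        }

    If the feed still fails with this in place, logging conn.getResponseCode() should narrow down whether the server is returning a 404, a redirect, or something else.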

    Read the article

  • Recommendations for an in memory database vs thread safe data structures

    - by yx
    TLDR: What are the pros/cons of using an in-memory database vs locks and concurrent data structures? I am currently working on an application that has many (possibly remote) displays that collect live data from multiple data sources and render them on screen in real time. One of the other developers has suggested the use of an in-memory database instead of doing it the standard way our other systems behave, which is to use concurrent hashmaps, queues, arrays, and other objects to store the graphical objects, handling them safely with locks where necessary. His argument is that the DB will lessen the need to worry about concurrency, since it will handle read/write locks automatically, and also that the DB offers an easier way to structure the data into as many tables as we need, instead of having to create hashmaps of hashmaps of lists, etc. and keeping track of it all. I do not have much DB experience myself, so I am asking fellow SO users what experiences they have had and what the pros and cons are of introducing the DB into the system.
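
    For comparison, the "thread-safe data structures" option the question describes often reduces to publishing immutable snapshots through a ConcurrentHashMap, which avoids most explicit locking. A hedged sketch (DataPoint and the method names are invented for illustration):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        class DataPoint { /* timestamp, value, ... */ }

        class LiveDataStore {
            private final Map<String, List<DataPoint>> latest =
                    new ConcurrentHashMap<String, List<DataPoint>>();

            // Called by collector threads whenever a data source produces a new batch.
            void publish(String sourceId, List<DataPoint> points) {
                // Copy and wrap so the rendering thread can never see a list still being filled.
                latest.put(sourceId, Collections.unmodifiableList(new ArrayList<DataPoint>(points)));
            }

            // Called by the rendering loop; always returns a complete snapshot (possibly empty).
            List<DataPoint> snapshot(String sourceId) {
                List<DataPoint> current = latest.get(sourceId);
                return current != null ? current : Collections.<DataPoint>emptyList();
            }
        }

    An in-memory database replaces this kind of class with tables and queries, which helps once the relationships get complicated, but for simple key-to-snapshot lookups the map approach is usually simpler.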

    Read the article

  • How can I find file system concurrency issues ?

    - by krosenvold
    I have an application running on Linux, and I find myself wanting Windows (!). The problem is that every 1000 runs or so I hit concurrency problems that are consistent with concurrent reading/writing of files. I am fairly sure this behavior would be prohibited by file locking under Windows, but I don't have a sufficiently fast Windows box to check. There is simply too much file access (too much data) to expect strace to work reliably - the sheer volume of output is likely to change the problem too. It also happens on different files every time. Ideally I would like to change/reconfigure the Linux file system to be more restrictive (as in fail-fast) with regard to concurrent access. Are there any tools/settings I can use to achieve this?

    Read the article

  • Actors in Scala.net

    - by weijiajun
    I have recently completed some study of Erlang, and was intrigued by Scala for its feature set and the ease of interoperating with Java (and possibly .NET) applications. I am finally studying actors and was wondering if there is an actor mechanism that currently works on .NET. I have looked at the libraries that come down with sbaz and have found that there is a scala.Concurrent but no scala.actors.Actor. I tried to use scala.Concurrent.Channel but was unable to use ! to send messages. I was just wondering if this is something that is currently available, and if so, how you go about setting it up.

    Read the article

  • Android UnknownHost in asyncTask - loading web page

    - by Sneha
    I followed this tutorial for AsyncTask and getting the following error log: 03-23 11:44:42.936: WARN/System.err(315): java.net.UnknownHostException: www.google.co.in 03-23 11:44:42.936: WARN/System.err(315): at java.net.InetAddress.lookupHostByName(InetAddress.java:513) 03-23 11:44:42.936: WARN/System.err(315): at java.net.InetAddress.getAllByNameImpl(InetAddress.java:278) 03-23 11:44:42.936: WARN/System.err(315): at java.net.InetAddress.getAllByName(InetAddress.java:242) 03-23 11:44:42.936: WARN/System.err(315): at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:136) 03-23 11:44:42.936: WARN/System.err(315): at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:164) 03-23 11:44:42.936: WARN/System.err(315): at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:119) 03-23 11:44:42.936: WARN/System.err(315): at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:348) 03-23 11:44:42.936: WARN/System.err(315): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555) 03-23 11:44:42.936: WARN/System.err(315): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487) 03-23 11:44:42.944: WARN/System.err(315): at org.apache.http.impl.client.AbstractHttpC lient.execute(AbstractHttpClient.java:465) 03-23 11:44:42.944: WARN/System.err(315): at com.test.async.AsyncTaskExampleActivity$DownloadWebPageTask.doInBackground (AsyncTaskExampleActivity.java:36) 03-23 11:44:42.944: WARN/System.err(315): at com.test.async.AsyncTaskExampleActivity$DownloadWebPageTask.doInBackground (AsyncTaskExampleActivity.java:1) 03-23 11:44:42.944: WARN/System.err(315): at android.os.AsyncTask$2.call(AsyncTask.java:185) 03-23 11:44:42.944: WARN/System.err(315): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) 03-23 11:44:42.944: WARN/System.err(315): at java.util.concurrent.FutureTask.run (FutureTask.java:137) 03-23 11:44:42.944: WARN/System.err(315): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068) 03-23 11:44:42.944: WARN/System.err(315): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561) 03-23 11:44:42.944: WARN/System.err(315): at java.lang.Thread.run(Thread.java:1096) How do i fix it?? My Code: public class AsyncTaskExampleActivity extends Activity { private TextView textView; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); textView = (TextView) findViewById(R.id.TextView01); } private class DownloadWebPageTask extends AsyncTask<String, Void, String> { @Override protected String doInBackground(String... 
urls) { String response = ""; Log.i("", "in doInBackgroundddddddddd.........."); Log.i("", "in readWebpageeeeeeeeeeeee"); /* * try { InetAddress i = * InetAddress.getByName("http://google.co.in"); } catch * (UnknownHostException e1) { e1.printStackTrace(); } */ for (String url : urls) { Log.i("", "in for looooooop doInBackgroundddddddddd.........."); DefaultHttpClient client = new DefaultHttpClient(); HttpGet httpGet = new HttpGet(url); try { Log .i("", "afetr for looooooop try doInBackgroundddddddddd.........."); HttpResponse execute = client.execute(httpGet); Log .i("", "afetr for looooooop try client ..execute doInBackgroundddddddddd.........."); InputStream content = execute.getEntity().getContent(); BufferedReader buffer = new BufferedReader( new InputStreamReader(content)); String s = ""; while ((s = buffer.readLine()) != null) { response += s; Log .i("", "afetr while looooooop try client ..execute doInBackgroundddddddddd.........."); } } catch (Exception e) { e.printStackTrace(); } } Log .i("", "afetr lasttttttttttttt b4 response doInBackgroundddddddddd.........."); return response; } @Override protected void onPostExecute(String result) { Log.i("", "in onPostExecuteeee.........."); textView.setText(result); } } public void readWebpage(View view) { /* * System.setProperty("http.proxyHost", "10.132.116.10"); * System.setProperty("http.proxyPort", "3128"); */ DownloadWebPageTask task = new DownloadWebPageTask(); task.execute(new String[] { "http://google.co.in" }); Log.i("", "in readWebpageeeeeeeeeeeee after execute.........."); } } main.xml: <Button android:id="@+id/readWebpage" android:layout_width="match_parent" android:layout_height="wrap_content" android:onClick="readWebpage" android:text="Load Webpage"> </Button> <TextView android:id="@+id/TextView01" android:layout_width="match_parent" android:layout_height="match_parent" android:text="Example Text"> </TextView> Manifest: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.test.async" android:versionCode="1" android:versionName="1.0"> <uses-sdk android:minSdkVersion="8" /> <uses-permission android:name="android.permission.INTERNET"></uses-permission> <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".AsyncTaskExampleActivity" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> Thanks Sneha

    Read the article

  • Best wrapper for simultaneous API requests?

    - by bluebit
    I am looking for the easiest, simplest way to access web APIs that return either JSON or XML, with concurrent requests. For example, I would like to call the Twitter search API and return 5 pages of results at the same time (5 requests). The results should ideally be integrated and returned in one array of hashes. I have about 15 APIs that I will be using, and already have code to access them individually (using a simple NET HTTP request) and parse them, but I need to make these requests concurrent in the easiest way possible. Additionally, any error handling for JSON/XML parsing is a bonus.
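
    The post does not name a language, so purely as a language-neutral illustration, here is a Java sketch of the fan-out/merge shape (fetchAndParse and the URLs are placeholders for the existing per-API request and parsing code):

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class ParallelFetcher {
            // Stand-in for the existing single-request + JSON/XML parsing code.
            static List<Map<String, Object>> fetchAndParse(String url) {
                return new ArrayList<Map<String, Object>>();
            }

            public static void main(String[] args) throws InterruptedException {
                List<String> urls = Arrays.asList(
                        "http://example.com/api?page=1",
                        "http://example.com/api?page=2");
                ExecutorService pool = Executors.newFixedThreadPool(urls.size());
                List<Future<List<Map<String, Object>>>> futures = new ArrayList<>();
                for (final String url : urls) {
                    futures.add(pool.submit(() -> fetchAndParse(url))); // all requests run concurrently
                }
                List<Map<String, Object>> merged = new ArrayList<>();
                for (Future<List<Map<String, Object>>> f : futures) {
                    try {
                        merged.addAll(f.get()); // block until that particular request finishes
                    } catch (ExecutionException e) {
                        // one page failed (HTTP or parse error); log it and keep the rest
                    }
                }
                pool.shutdown();
            }
        }

    The same shape maps onto thread pools or futures in most languages; the natural place for per-request error handling is the get/await call on each future.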

    Read the article

  • MS Access vs SQL Server and others? Is it worth taking a DB server when the data is less than 2 GB and there are only 20 users?

    - by asksuperuser
    After my experiment with MS Access vs MySQL, which showed MS Access hugely outperforming the MySQL ODBC insert by a factor of 1000%, and before doing the same experiment with SQL Server, I searched for other people's results and found this one: http://blog.nkadesign.com/2009/access-vs-sql-server-some-stats-part-1/ which says "As a side note, in this particular test, Access offers much better raw performance than SQL Server. In more complex scenarios it's very likely that Access' performance would degrade more than SQL Server, but it's nice to see that Access isn't a sloth." So is it worth bothering with a DB server when the data is less than 2 GB and there are about 20 users (knowing that MS Access theoretically supports up to 255 concurrent users, though practically it is only around a dozen concurrent users)? Are there any real-world studies that really compare MS Access with other DBs in this specific use case? Professionally speaking, I keep hearing people systematically recommend a DB server, usually from people who have never used Access, just because they think a DB server can only perform better in every case, which I used to think myself, I confess.

    Read the article

  • Handling file upload in a non-blocking manner

    - by Kaliyug Antagonist
    The background thread is here Just to make objective clear - the user will upload a large file and must be redirected immediately to another page for proceeding different operations. But the file being large, will take time to be read from the controller's InputStream. So I unwillingly decided to fork a new Thread to handle this I/O. The code is as follows : The controller servlet /** * @see HttpServlet#doPost(HttpServletRequest request, HttpServletResponse * response) */ protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { // TODO Auto-generated method stub System.out.println("In Controller.doPost(...)"); TempModel tempModel = new TempModel(); tempModel.uploadSegYFile(request, response); System.out.println("Forwarding to Accepted.jsp"); /*try { Thread.sleep(1000 * 60); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); }*/ request.getRequestDispatcher("/jsp/Accepted.jsp").forward(request, response); } The model class package com.model; import java.io.IOException; import java.util.concurrent.ExecutionException; import java.util.concurrent.Future; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import com.utils.ProcessUtils; public class TempModel { public void uploadSegYFile(HttpServletRequest request, HttpServletResponse response) { // TODO Auto-generated method stub System.out.println("In TempModel.uploadSegYFile(...)"); /* * Trigger the upload/processing code in a thread, return immediately * and notify when the thread completes */ try { FileUploaderRunnable fileUploadRunnable = new FileUploaderRunnable( request.getInputStream()); /* * Future<FileUploaderRunnable> future = ProcessUtils.submitTask( * fileUploadRunnable, fileUploadRunnable); * * FileUploaderRunnable processed = future.get(); * * System.out.println("Is file uploaded : " + * processed.isFileUploaded()); */ Thread uploadThread = new Thread(fileUploadRunnable); uploadThread.start(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } /* * catch (InterruptedException e) { // TODO Auto-generated catch block * e.printStackTrace(); } catch (ExecutionException e) { // TODO * Auto-generated catch block e.printStackTrace(); } */ System.out.println("Returning from TempModel.uploadSegYFile(...)"); } } The Runnable package com.model; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; import java.nio.ByteBuffer; import java.nio.channels.Channels; import java.nio.channels.ReadableByteChannel; public class FileUploaderRunnable implements Runnable { private boolean isFileUploaded = false; private InputStream inputStream = null; public FileUploaderRunnable(InputStream inputStream) { // TODO Auto-generated constructor stub this.inputStream = inputStream; } public void run() { // TODO Auto-generated method stub /* Read from InputStream. 
If success, set isFileUploaded = true */ System.out.println("Starting upload in a thread"); File outputFile = new File("D:/06c01_output.seg");/* * This will be changed * later */ FileOutputStream fos; ReadableByteChannel readable = Channels.newChannel(inputStream); ByteBuffer buffer = ByteBuffer.allocate(1000000); try { fos = new FileOutputStream(outputFile); while (readable.read(buffer) != -1) { fos.write(buffer.array()); buffer.clear(); } fos.flush(); fos.close(); readable.close(); } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } System.out.println("File upload thread completed"); } public boolean isFileUploaded() { return isFileUploaded; } } My queries/doubts : Spawning threads manually from the Servlet makes sense to me logically but scares me coding wise - the container isn't aware of these threads after all(I think so!) The current code is giving an Exception which is quite obvious - the stream is inaccessible as the doPost(...) method returns before the run() method completes : In Controller.doPost(...) In TempModel.uploadSegYFile(...) Returning from TempModel.uploadSegYFile(...) Forwarding to Accepted.jsp Starting upload in a thread Exception in thread "Thread-4" java.lang.NullPointerException at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:512) at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:497) at org.apache.coyote.http11.InternalInputBuffer$InputStreamInputBuffer.doRead(InternalInputBuffer.java:559) at org.apache.coyote.http11.AbstractInputBuffer.doRead(AbstractInputBuffer.java:324) at org.apache.coyote.Request.doRead(Request.java:422) at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:287) at org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:407) at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:310) at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:202) at java.nio.channels.Channels$ReadableByteChannelImpl.read(Unknown Source) at com.model.FileUploaderRunnable.run(FileUploaderRunnable.java:39) at java.lang.Thread.run(Unknown Source) Keeping in mind the point 1., does the use of Executor framework help me in anyway ? package com.utils; import java.util.concurrent.Future; import java.util.concurrent.ScheduledThreadPoolExecutor; public final class ProcessUtils { /* Ensure that no more than 2 uploads,processing req. are allowed */ private static final ScheduledThreadPoolExecutor threadPoolExec = new ScheduledThreadPoolExecutor( 2); public static <T> Future<T> submitTask(Runnable task, T result) { return threadPoolExec.submit(task, result); } } So how should I ensure that the user doesn't block and the stream remains accessible so that the (uploaded)file can be read from it?
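
    One point worth noting (my reading of the servlet lifecycle, not from the original post): the request's InputStream is only valid while doPost() is on the stack, so it cannot safely be handed to a thread that outlives the request. A common compromise is to copy the upload to a temporary file inside doPost() and hand the file, not the stream, to a background executor before redirecting. A minimal sketch with placeholder names:

        import java.io.File;
        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.StandardCopyOption;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class UploadHelper {
            private static final ExecutorService WORKERS = Executors.newFixedThreadPool(2);

            // Called from doPost(); blocks only for the disk copy, not for the heavy processing.
            public static void acceptUpload(InputStream uploadStream) throws IOException {
                File temp = File.createTempFile("upload-", ".segy");
                Files.copy(uploadStream, temp.toPath(), StandardCopyOption.REPLACE_EXISTING);
                WORKERS.submit(() -> processSegYFile(temp)); // long-running work leaves the request thread
            }

            private static void processSegYFile(File file) {
                // parse/convert the SEG-Y file here; delete it when finished
            }
        }

    The redirect still has to wait for the upload bytes to arrive (that is inherent to HTTP), but the expensive processing no longer ties up the request, and the container-managed stream is never touched after doPost() returns. Servlet 3.0 async processing is the other option if holding the request open is acceptable.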

    Read the article

  • Java concurrency - Should I block or yield?

    - by teto
    Hi, I have multiple threads, each with its own private concurrent queue, and all they do is run an infinite loop retrieving messages from it. It could happen that one of the queues doesn't receive messages for a period of time (maybe a couple of seconds), and they could also come in big bursts, so fast processing is necessary. I would like to know what would be the most appropriate thing to do in the first case: use a blocking queue and block the thread until I have more input, or do a Thread.yield()? I want to have as much CPU as possible available at any given time, as the number of concurrent threads may increase over time, but I also don't want the message processing to fall behind, as there is no guarantee of when the thread will be rescheduled for execution after a yield(). I know that hardware, operating system and other factors play an important role here, but setting that aside and looking at it from a Java (JVM?) point of view, what would be optimal?
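
    A hedged sketch of the two options being weighed (Message and handle() stand in for the post's own types): a blocked take() costs essentially no CPU while the queue is empty and wakes the thread as soon as a message arrives, so it usually beats a yield() spin loop both on idle cost and on burst latency.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        class Message { }

        class Consumer implements Runnable {
            private final BlockingQueue<Message> queue = new LinkedBlockingQueue<Message>();

            public void run() {
                try {
                    while (true) {
                        Message m = queue.take(); // parks the thread: no busy-waiting, immediate wake-up
                        handle(m);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // exit the loop on shutdown
                }
            }

            // The yield() alternative would poll(), and on null call Thread.yield() and loop again;
            // that keeps a core busy and gives no guarantee about when the thread runs next.

            private void handle(Message m) { /* per-message work */ }
        }

    If some periodic housekeeping is needed even when no messages arrive, poll(timeout, unit) gives the same low idle cost with a bounded wait instead of an indefinite one.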

    Read the article

  • sudo apt-get install mysql-server fails

    - by danwoods
    Hi all, I'm coming from a fresh install of Ubuntu server 9.10 and trying to install mysql-server by using 'sudo apt-get mysql-server' I get the following errors: dan@dev:~$ sudo apt-get install mysql-server [sudo] password for dan: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-server-5.1 Suggested packages: dbishell libipc-sharedcache-perl tinyca The following NEW packages will be installed: libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-server mysql-server-5.1 0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded. Need to get 16.5MB of archives. After this operation, 39.0MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://us.archive.ubuntu.com karmic/main libnet-daemon-perl 0.43-1 [46.9kB] Get:2 http://us.archive.ubuntu.com karmic/main libplrpc-perl 0.2020-2 [36.0kB] Get:3 http://us.archive.ubuntu.com karmic/main libdbi-perl 1.609-1 [800kB] Get:4 http://us.archive.ubuntu.com karmic/main libdbd-mysql-perl 4.011-1ubuntu1 [136kB] Get:5 http://us.archive.ubuntu.com karmic-updates/main mysql-client-5.1 5.1.37- 1ubuntu5.1 [8,202kB] Get:6 http://us.archive.ubuntu.com karmic-updates/main mysql-server-5.1 5.1.37-1ubuntu5.1 [7,186kB] Get:7 http://us.archive.ubuntu.com karmic/main libhtml-template-perl 2.9-1 [65.8kB] Get:8 http://us.archive.ubuntu.com karmic-updates/main mysql-server 5.1.37-1ubuntu5.1 [64.3kB] Fetched 16.5MB in 1min 34s (175kB/s) Preconfiguring packages ... Selecting previously deselected package libnet-daemon-perl. (Reading database ... 123083 files and directories currently installed.) Unpacking libnet-daemon-perl (from .../libnet-daemon-perl_0.43-1_all.deb) ... Selecting previously deselected package libplrpc-perl. Unpacking libplrpc-perl (from .../libplrpc-perl_0.2020-2_all.deb) ... Selecting previously deselected package libdbi-perl. Unpacking libdbi-perl (from .../libdbi-perl_1.609-1_i386.deb) ... Selecting previously deselected package libdbd-mysql-perl. Unpacking libdbd-mysql-perl (from .../libdbd-mysql-perl_4.011-1ubuntu1_i386.deb) ... Selecting previously deselected package mysql-client-5.1. Unpacking mysql-client-5.1 (from .../mysql-client-5.1_5.1.37-1ubuntu5.1_i386.deb) ... Selecting previously deselected package mysql-server-5.1. Unpacking mysql-server-5.1 (from .../mysql-server-5.1_5.1.37-1ubuntu5.1_i386.deb) ... Selecting previously deselected package libhtml-template-perl. Unpacking libhtml-template-perl (from .../libhtml-template-perl_2.9-1_all.deb) ... Selecting previously deselected package mysql-server. Unpacking mysql-server (from .../mysql-server_5.1.37-1ubuntu5.1_all.deb) ... Processing triggers for man-db ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Setting up libnet-daemon-perl (0.43-1) ... Setting up libplrpc-perl (0.2020-2) ... Setting up libdbi-perl (1.609-1) ... Setting up libdbd-mysql-perl (4.011-1ubuntu1) ... Setting up mysql-client-5.1 (5.1.37-1ubuntu5.1) ... Setting up mysql-server-5.1 (5.1.37-1ubuntu5.1) ... * Stopping MySQL database server mysqld [ OK ] * Starting MySQL database server mysqld [fail] invoke-rc.d: initscript mysql, action "start" failed. 
dpkg: error processing mysql-server-5.1 (--configure): subprocess installed post-installation script returned error exit status 1 Setting up libhtml-template-perl (2.9-1) ... dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.1; however: Package mysql-server-5.1 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.1 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) What am I missing? [update] mysqld returns: dan@dev:~$ sudo mysqld [sudo] password for dan: 100220 12:18:17 [Note] Plugin 'FEDERATED' is disabled. InnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. 100220 12:18:17 InnoDB: Retrying to lock the first data file InnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process This goes on for a while... InnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. ^[[BInnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. 100220 12:19:57 InnoDB: Unable to open the first data file InnoDB: Error in opening ./ibdata1 100220 12:19:57 InnoDB: Operating system error number 11 in a file operation. InnoDB: Error number 11 means 'Resource temporarily unavailable'. InnoDB: Some operating system error numbers are described at InnoDB: http://dev.mysql.com/doc/refman/5.1/en/operating-system-error-codes.html InnoDB: Could not open or create data files. InnoDB: If you tried to add new data files, and it failed here, InnoDB: you should now edit innodb_data_file_path in my.cnf back InnoDB: to what it was, and remove the new ibdata files InnoDB created InnoDB: in this failed attempt. InnoDB only wrote those files full of InnoDB: zeros, but did not yet use them in any way. But be careful: do not InnoDB: remove old data files which contain your precious data! 100220 12:19:57 [ERROR] Plugin 'InnoDB' init function returned error. 100220 12:19:57 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 100220 12:19:57 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use 100220 12:19:57 [ERROR] Do you already have another mysqld server running on port: 3306 ? 100220 12:19:57 [ERROR] Aborting 100220 12:19:57 [Warning] Forcing shutdown of 1 plugins 100220 12:19:57 [Note] mysqld: Shutdown complete How can I check what process is using port: 3306? [Update]: sudo netstat -anp | grep LISTEN returns dan@dev:~$ sudo netstat -anp | grep LISTEN [sudo] password for dan: tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 1372/master tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 4391/mysqld tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1409/cupsd tcp6 0 0 ::1:631 :::* LISTEN 1409/cupsd [More Updates]: I can log into mysql if that makes a difference

    Read the article

  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
    Many routines are parallelized because they are long running processes. When writing an algorithm that will run for a long period of time, it's typically a good practice to allow that routine to be cancelled. I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller's perspective. Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0. Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken. A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource. There are many goals which led to this design. For our purposes, we will focus on a couple of specific design decisions: Cancellation is cooperative. A calling method can request a cancellation, but it's up to the processing routine to terminate - it is not forced. Cancellation is consistent. A single method call requests a cancellation on every copied CancellationToken in the routine. Let's begin by looking at how we can cancel a PLINQ query. Suppose we wanted to provide the option to cancel our query from Part 6: double min = collection .AsParallel() .Min(item => item.PerformComputation()); We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows: var cts = new CancellationTokenSource(); // Pass cts here to a routine that could, // in parallel, request a cancellation try { double min = collection .AsParallel() .WithCancellation(cts.Token) .Min(item => item.PerformComputation()); } catch (OperationCanceledException e) { // Query was cancelled before it finished } Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised. Be aware, however, that cancellation will not be instantaneous. When cts.Cancel() is called, the query will only stop after the current item.PerformComputation() elements all finish processing. cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed.
This goes back to the first goal I mentioned – Cancellation is cooperative.  Here, we’re requesting the cancellation, but it’s up to PLINQ to terminate. If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case: public void PerformComputation(CancellationToken token) { for (int i=0; i<this.iterations; ++i) { // Add a check to see if we've been canceled // If a cancel was requested, we'll throw here token.ThrowIfCancellationRequested(); // Do our processing now this.RunIteration(i); } } With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns.  This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to: try { double min = collection .AsParallel() .WithCancellation(cts.Token) .Min(item => item.PerformComputation(cts.Token)); } catch (OperationCanceledException e) { // Query was cancelled before it finished } PLINQ is very good about handling this exception, as well.  There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently.  PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack.  This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException. The Task Parallel Library uses the same cancellation model, as well.  Here, we supply our CancellationToken as part of the configuration.  The ParallelOptions class contains a property for the CancellationToken.  This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query.  As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to: try { var cts = new CancellationTokenSource(); var options = new ParallelOptions() { CancellationToken = cts.Token }; Parallel.ForEach(customers, options, customer => { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // Check for cancellation here options.CancellationToken.ThrowIfCancellationRequested(); // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { theStore.EmailCustomer(customer); customer.LastEmailContact = DateTime.Now; } }); } catch (OperationCanceledException e) { // The loop was cancelled } Notice that here we use the same approach taken in PLINQ.  The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine.  The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario.  This works because we’re using the same CancellationToken provided in the ParallelOptions.  
If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.

    Read the article

  • Launching a WPF Window in a Separate Thread, Part 1

    - by Reed
    Typically, I strongly recommend keeping the user interface within an application's main thread, and using multiple threads to move the actual "work" into background threads. However, there are rare times when creating a separate, dedicated thread for a Window can be beneficial. This is even acknowledged in the MSDN samples, such as the Multiple Windows, Multiple Threads sample. However, doing this correctly is difficult. Even the referenced MSDN sample has major flaws, and will fail horribly in certain scenarios. To ease this, I wrote a small class that alleviates some of the difficulties involved. The MSDN Multiple Windows, Multiple Threads Sample shows how to launch a new thread with a WPF Window, and will work in most cases. The sample code (commented and slightly modified) works out to the following: // Create a thread Thread newWindowThread = new Thread(new ThreadStart( () => { // Create and show the Window Window1 tempWindow = new Window1(); tempWindow.Show(); // Start the Dispatcher Processing System.Windows.Threading.Dispatcher.Run(); })); // Set the apartment state newWindowThread.SetApartmentState(ApartmentState.STA); // Make the thread a background thread newWindowThread.IsBackground = true; // Start the thread newWindowThread.Start(); This sample creates a thread, marks it as single threaded apartment state, and starts the Dispatcher on that thread. That is the minimum required to get a Window displaying and handling messages correctly, but, unfortunately, it has some serious flaws. The first issue: the created thread will run continuously until the application shuts down, given the code in the sample. The problem is that the ThreadStart delegate used ends with running the Dispatcher. However, nothing ever stops the Dispatcher processing. The thread was created as a Background thread, which prevents it from keeping the application alive, but the Dispatcher will continue to pump dispatcher frames until the application shuts down. In order to fix this, we need to call Dispatcher.InvokeShutdown after the Window is closed. This would require modifying the above sample to subscribe to the Window's Closed event, and, at that point, shut down the Dispatcher: // Create a thread Thread newWindowThread = new Thread(new ThreadStart( () => { Window1 tempWindow = new Window1(); // When the window closes, shut down the dispatcher tempWindow.Closed += (s,e) => Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background); tempWindow.Show(); // Start the Dispatcher Processing System.Windows.Threading.Dispatcher.Run(); })); // Setup and start thread as before This eliminates the first issue. Now, when the Window is closed, the new thread's Dispatcher will shut itself down, which in turn will cause the thread to complete. The above code will work correctly for most situations.
    However, there is still a potential problem which could arise depending on the content of the Window1 class. This is particularly nasty, as the code could easily work for most windows, but fail on others.

    The problem is that, at the point where the Window is constructed, there is no active SynchronizationContext. This is unlikely to be a problem in most cases, but it is an absolute requirement if there is code within the constructor of Window1 which relies on a context being in place.

    While this sounds like an edge case, it’s fairly common. For example, if a BackgroundWorker is started within the constructor, or a TaskScheduler is built using TaskScheduler.FromCurrentSynchronizationContext() with the expectation of synchronizing work to the UI thread, an exception will be raised at some point. Both of these classes rely on the existence of a proper context being installed to SynchronizationContext.Current, which happens automatically, but not until Dispatcher.Run is called. In the above case, SynchronizationContext.Current will return null during the Window’s construction, which can cause exceptions or unexpected behavior.

    Luckily, this is fairly easy to correct. We need to do three things, in order, prior to creating our Window:

      1. Create and initialize the Dispatcher for the new thread manually
      2. Create a synchronization context for the thread which uses the Dispatcher
      3. Install the synchronization context

    Creating the Dispatcher is quite simple: the Dispatcher.CurrentDispatcher property gets the current thread’s Dispatcher and “creates a new Dispatcher if one is not already associated with the thread.” Once we have the correct Dispatcher, we can create a SynchronizationContext which uses the dispatcher by creating a DispatcherSynchronizationContext. Finally, this synchronization context can be installed as the current thread’s context via SynchronizationContext.SetSynchronizationContext. These three steps can easily be added to the above via a single line of code:

        // Create a thread
        Thread newWindowThread = new Thread(new ThreadStart(() =>
        {
            // Create our context, and install it:
            SynchronizationContext.SetSynchronizationContext(
                new DispatcherSynchronizationContext(
                    Dispatcher.CurrentDispatcher));

            Window1 tempWindow = new Window1();
            // When the window closes, shut down the dispatcher
            tempWindow.Closed += (s, e) =>
                Dispatcher.CurrentDispatcher.BeginInvokeShutdown(DispatcherPriority.Background);
            tempWindow.Show();
            // Start the Dispatcher Processing
            System.Windows.Threading.Dispatcher.Run();
        }));
        // Setup and start thread as before

    This now forces the synchronization context to be in place before the Window is created, and correctly shuts down the Dispatcher when the window closes. However, there are quite a few steps. In my next post, I’ll show how to make this operation more reusable by creating a class with a far simpler API…
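    That follow-up post is not included in this excerpt, but a minimal sketch of what such a reusable helper might look like, based only on the three steps above, is shown below. The WindowThreadHelper class name and its StartWindowOnNewThread method are illustrative assumptions for this sketch, not the API from the actual follow-up post.

        using System;
        using System.Threading;
        using System.Windows;
        using System.Windows.Threading;

        // Hypothetical helper - name and signature are assumptions for illustration.
        public static class WindowThreadHelper
        {
            // Runs the Window produced by windowFactory on its own STA background thread,
            // installing a Dispatcher-based SynchronizationContext before construction.
            public static Thread StartWindowOnNewThread(Func<Window> windowFactory)
            {
                Thread thread = new Thread(() =>
                {
                    // Install a DispatcherSynchronizationContext so code in the Window's
                    // constructor sees a valid SynchronizationContext.Current.
                    SynchronizationContext.SetSynchronizationContext(
                        new DispatcherSynchronizationContext(
                            Dispatcher.CurrentDispatcher));

                    Window window = windowFactory();

                    // Shut down this thread's Dispatcher once the window closes,
                    // so the thread can exit instead of pumping frames forever.
                    window.Closed += (s, e) =>
                        Dispatcher.CurrentDispatcher.BeginInvokeShutdown(
                            DispatcherPriority.Background);

                    window.Show();
                    Dispatcher.Run();
                });

                thread.SetApartmentState(ApartmentState.STA);
                thread.IsBackground = true;
                thread.Start();
                return thread;
            }
        }

    With something along these lines, a caller only needs to pass a factory delegate, e.g. WindowThreadHelper.StartWindowOnNewThread(() => new Window1()), and the apartment state, synchronization context, dispatcher startup, and dispatcher shutdown are handled in one place.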

    Read the article

  • LibPCL issues on Ubuntu 13.10

    - by user254885
    I wanted to install the Point Cloud Library, but it does not work. I use an ODROID board (ARM processor). Installing the package gives:

        Some packages could not be installed. This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         libpcl-all : Depends: libpcl-1.7-all but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    By compiling v1.7, I get these errors:

        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-fcntl.o): In function `__fcntl_nocancel':
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:37: undefined reference to `__libc_do_syscall'
        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-fcntl.o): In function `__libc_fcntl':
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:53: undefined reference to `__libc_do_syscall'
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:57: undefined reference to `__libc_do_syscall'
        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-open64.o): In function `__libc_open64':
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/open64.c:41: undefined reference to `__libc_do_syscall'
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/open64.c:45: undefined reference to `__libc_do_syscall'
        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(cancellation.o):/build/buildd/eglibc-2.17/nptl/cancellation.c:96: more undefined references to `__libc_do_syscall' follow
        collect2: error: ld returned 1 exit status
        make[2]: *** [bin/pcl_convert_pcd_ascii_binary] Error 1
        make[1]: *** [io/tools/CMakeFiles/pcl_convert_pcd_ascii_binary.dir/all] Error 2
        make: *** [all] Error 2

    I could not find anything on Google to solve these errors. I believe some packages were not ported for ARM processors; any help would be appreciated.

        $ dpkg --list | grep headers
        ii  linux-headers-3.0.63-odroidx2   20130215
        ii  linux-headers-3.0.71-odroidx2   20130415
        ii  linux-headers-3.0.74-odroidx2   20130417
        ii  linux-headers-3.0.75-odroidx2   20130426
        ii  linux-headers-3.1.10-6          3.1.10-6.10
        ii  linux-headers-3.1.10-6-ac100    3.1.10-6.10
        ii  linux-headers-ac100             3.1.10.6.2

    Installing packages didn't go well either:

        sudo apt-get install linux-generic
        [sudo] password for odroid:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          debugedit libasound2-dev libestools2.1-dev librpmbuild3 librpmsign1
          thunderbird-locale-en thunderbird-locale-en-gb thunderbird-locale-en-us thunderbird-locale-ko
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          linux-headers-3.11.0-17 linux-headers-3.11.0-17-generic linux-headers-generic
          linux-image-3.11.0-17-generic linux-image-generic
        Suggested packages:
          fdutils linux-doc-3.11.0 linux-source-3.11.0 linux-tools
        The following NEW packages will be installed:
          linux-generic linux-headers-3.11.0-17 linux-headers-3.11.0-17-generic
          linux-headers-generic linux-image-3.11.0-17-generic linux-image-generic
        0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
        Need to get 58.2 MB of archives.
        After this operation, 203 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Get:1 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-image-3.11.0-17-generic armhf 3.11.0-17.31 [44.5 MB]
        Get:2 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-image-generic armhf 3.11.0.17.18 [2,356 B]
        Get:3 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-3.11.0-17 all 3.11.0-17.31 [12.6 MB]
        Get:4 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-3.11.0-17-generic armhf 3.11.0-17.31 [1,128 kB]
        Get:5 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-generic armhf 3.11.0.17.18 [2,350 B]
        Get:6 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-generic armhf 3.11.0.17.18 [1,726 B]
        Fetched 58.2 MB in 13s (4,379 kB/s)
        Selecting previously unselected package linux-image-3.11.0-17-generic.
        (Reading database ... 258618 files and directories currently installed.)
        Unpacking linux-image-3.11.0-17-generic (from .../linux-image-3.11.0-17-generic_3.11.0-17.31_armhf.deb) ...
        Examining /etc/kernel/preinst.d/
        Done.
        Selecting previously unselected package linux-image-generic.
        Unpacking linux-image-generic (from .../linux-image-generic_3.11.0.17.18_armhf.deb) ...
        Selecting previously unselected package linux-headers-3.11.0-17.
        Unpacking linux-headers-3.11.0-17 (from .../linux-headers-3.11.0-17_3.11.0-17.31_all.deb) ...
        Selecting previously unselected package linux-headers-3.11.0-17-generic.
        Unpacking linux-headers-3.11.0-17-generic (from .../linux-headers-3.11.0-17-generic_3.11.0-17.31_armhf.deb) ...
        Selecting previously unselected package linux-headers-generic.
        Unpacking linux-headers-generic (from .../linux-headers-generic_3.11.0.17.18_armhf.deb) ...
        Selecting previously unselected package linux-generic.
        Unpacking linux-generic (from .../linux-generic_3.11.0.17.18_armhf.deb) ...
        Setting up linux-image-3.11.0-17-generic (3.11.0-17.31) ...
        Running depmod.
        update-initramfs: deferring update (hook will be called later)
        cp: cannot stat ‘/boot/initrd.img-3.11.0-17-generic’: No such file or directory
        Failed to copy /boot/initrd.img-3.11.0-17-generic to /boot/initrd.img at /var/lib/dpkg/info/linux-image-3.11.0-17-generic.postinst line 730.
        dpkg: error processing linux-image-3.11.0-17-generic (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of linux-image-generic:
         linux-image-generic depends on linux-image-3.11.0-17-generic; however:
          Package linux-image-3.11.0-17-generic is not configured yet.
        dpkg: error processing linux-image-generic (--configure):
         dependency problems - leaving unconfigured
        Setting up linux-headers-3.11.0-17 (3.11.0-17.31) ...
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        Setting up linux-headers-3.11.0-17-generic (3.11.0-17.31) ...
        Examining /etc/kernel/header_postinst.d.
        Setting up linux-headers-generic (3.11.0.17.18) ...
        dpkg: dependency problems prevent configuration of linux-generic:
         linux-generic depends on linux-image-generic (= 3.11.0.17.18); however:
          Package linux-image-generic is not configured yet.
        dpkg: error processing linux-generic (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         linux-image-3.11.0-17-generic
         linux-image-generic
         linux-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I had to uninstall these because they were messing up the installation of other packages (build-essentials were already installed).

    Read the article

< Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >