Search Results

Search found 3458 results on 139 pages for 'concurrent queue'.

Page 100 of 139

  • How to implement buffering with timeout in RX

    - by Gaspar Nagy
    I need to implement event processing that is delayed when no new events arrive for a certain period. (I have to queue up a parsing task when the text buffer changes, but I don't want to start parsing while the user is still typing.) I'm new to Rx, but as far as I can see, I would need a combination of the BufferWithTime and Timeout methods. I imagine it working like this: it buffers the events as long as they keep arriving within a specified time of one another; if there is a gap in the event flow longer than that timespan, it should propagate the events buffered so far. Having looked at how Buffer and Timeout are implemented, I could probably write my own BufferWithTimeout method (if anyone has one, please share), but I wonder whether this can be achieved just by combining the existing methods. Any ideas?
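
    (An editorial aside, not from the original question: the same gap-closing idea, sketched with RxJava 3 rather than Rx.NET just to illustrate the shape of it. The 500 ms quiet window, the PublishSubject and the class name are all illustrative; each buffer is closed when debounce() observes a pause on the same stream.)

        import io.reactivex.rxjava3.subjects.PublishSubject;
        import java.util.concurrent.TimeUnit;

        public class QuietPeriodBuffer {
            public static void main(String[] args) throws InterruptedException {
                PublishSubject<String> edits = PublishSubject.create();

                // Close each buffer when no edit has arrived for 500 ms.
                edits.publish(shared -> shared.buffer(shared.debounce(500, TimeUnit.MILLISECONDS)))
                     .subscribe(batch -> System.out.println("parse batch: " + batch));

                edits.onNext("a");
                edits.onNext("ab");
                Thread.sleep(700);   // quiet gap -> emits the buffer [a, ab]
                edits.onNext("abc");
                Thread.sleep(700);   // quiet gap -> emits the buffer [abc]
            }
        }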

    Read the article

  • Buffer management for socket application best practice

    - by Poni
    I have a Windows IOCP app. I understand that for an async I/O operation (on the network) the buffer must remain valid for the duration of the send/read operation. So for each connection I have one buffer for reading. For sending I use buffers into which I copy the data to be sent; when the send operation completes I release the buffer so it can be reused. So far this is fine and not a big issue. What remains unclear is how you guys do this. Another thing is that even with this multi-buffer approach, the receiving side can still be flooded with data (talking from experience). Even setting SO_RCVBUF to 25MB didn't help in my tests. So what should I do? Keep a to-be-sent queue?

    Read the article

  • What should the Java main method be for a standalone application (for Spring JMS)?

    - by Brandon
    I am interested in creating a Spring standalone application that will run and wait to receive messages from an ActiveMQ queue using Spring JMS. I have searched in a lot of places and cannot find a consistent way of implementing the main method for such a standalone application. There appear to be few examples of Spring standalone applications. I have looked at Tomcat, JBoss, ActiveMQ and other examples from around the web, but I have not come to a conclusion, so: what is the best practice for implementing a main method for a Java application (specifically Spring with JMS)?
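
    (An editorial aside, not from the original question: one common pattern for a standalone Spring JMS consumer is to bootstrap the ApplicationContext in main() and let the listener container's non-daemon threads keep the JVM alive. A sketch, assuming an XML context file named jms-context.xml that defines a DefaultMessageListenerContainer; the file and class names are illustrative.)

        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public class JmsListenerLauncher {
            public static void main(String[] args) {
                // Start the Spring context; the DefaultMessageListenerContainer defined in
                // jms-context.xml spins up its own consumer threads on startup.
                ClassPathXmlApplicationContext context =
                        new ClassPathXmlApplicationContext("jms-context.xml");

                // Close the context (and stop the listener container) cleanly on Ctrl+C / SIGTERM.
                context.registerShutdownHook();

                // No explicit wait loop is needed: the container's threads are non-daemon,
                // so the JVM keeps running until the context is closed.
            }
        }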

    Read the article

  • Class inheritance in Scheme

    - by DreamWalker
    I am currently studying the OOP side of Scheme. I can define a class in Scheme like this:

        (define (create-queue)
          (let ((mpty #t)
                (the-list '()))
            (define (enque value)
              (set! the-list (append the-list (list value)))
              (set! mpty #f)
              the-list)
            (define (deque)
              (set! the-list (cdr the-list))
              (if (= (length the-list) 0)
                  (set! mpty #t))
              the-list)
            (define (isEmpty) mpty)
            (define (ptl) the-list)
            (define (dispatch method)
              (cond ((eq? method 'enque) enque)
                    ((eq? method 'deque) deque)
                    ((eq? method 'isEmpty) isEmpty)
                    ((eq? method 'print) ptl)))
            dispatch))

    (Example from css.freetonik.com.) Can I implement class inheritance in Scheme?

    Read the article

  • CollectionChanged notification across threads?

    - by Mark
    I'm writing a download manager using C#/WPF, and I just encountered this error: This type of CollectionView does not support changes to its SourceCollection from a thread different from the Dispatcher thread. The basic flow of my program is that a few web pages/downloads are enqueued at the start, and then they're downloaded asynchronously. When an HTML page has completed downloading, I parse it and look for more stuff to download, then enqueue it directly from within the worker thread. I get that error when trying to send out the CollectionChanged event on my customized queue class. However, I need to fire that event so that the GUI can get updated. What are my options?

    Read the article

  • IIS 7.5 truncating POST body containing JSON data with ASP.NET MVC 3

    - by Guneet Sahai
    I'm facing a problem which I hope is just a configuration issue with IIS, but it is currently giving us a lot of trouble. Basically I have a controller that accepts a JSON payload and does some processing. It generally works fine, but every now and then, when the system is under some load, I get an error. After some painful debugging, we figured out that the incoming JSON gets truncated, which causes the deserializer to fail. To narrow down the problem we wrote a simple controller that accepts a JSON payload and tries to deserialize it; if that fails, it just logs the failure. This works fine, but when I hit it with a load-testing tool (JMeter) it throws the same truncation error for a few requests. The number of failures increases as I increase the number of parallel connections; it starts showing up at around 150 concurrent requests. We are running IIS 7 on Windows Server 2008 with ASP.NET MVC 3 and a more or less default IIS configuration. More information is available in my related question: http://stackoverflow.com/questions/12662282/content-length-of-http-request-body-size

    Read the article

  • Delay a function from running for n seconds, then run it once (2-minute question)

    - by Ozaki
    TL;DR: I have a function that runs at the end of a pan in an OpenLayers map, and I don't want it to fire continuously. I have a function that runs at the end of panning the map. I want it to not fire until, say, 3 seconds after the pan has finished. However, I don't want to queue up the function to fire 10 or so times, which is what setTimeout is currently doing. How can I delay a function from running for n seconds and then run it only once, no matter how many times it has been called?
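
    (An editorial aside, not from the original question: the question is about jQuery/OpenLayers, but the underlying pattern - cancel the pending call and reschedule on every invocation, so the action runs once, n seconds after the last call - is sketched here in Java with a ScheduledExecutorService; the Debouncer name is made up.)

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.ScheduledFuture;
        import java.util.concurrent.TimeUnit;

        public class Debouncer {
            private final ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();
            private ScheduledFuture<?> pending;

            // Each call cancels the previously scheduled run and schedules a new one,
            // so the action fires only once the calls have stopped for 'delay' units.
            public synchronized void call(Runnable action, long delay, TimeUnit unit) {
                if (pending != null) {
                    pending.cancel(false);
                }
                pending = scheduler.schedule(action, delay, unit);
            }

            public static void main(String[] args) throws InterruptedException {
                Debouncer debouncer = new Debouncer();
                for (int i = 0; i < 10; i++) {                 // simulate 10 rapid "pan end" events
                    debouncer.call(() -> System.out.println("refresh once"), 3, TimeUnit.SECONDS);
                }
                Thread.sleep(4000);                            // only one "refresh once" is printed
                debouncer.scheduler.shutdown();
            }
        }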

    Read the article

  • Requests per second slower when using nginx for load balancing

    - by Ed Eliot
    I've set up nginx as a load balancer that reverse proxies requests to 2 Apache servers. I've benchmarked the setup with ab and am getting approximately 35 requests per second, with requests distributed between the 2 backend servers (not using ip_hash). What is confusing me is that if I query either of the backend servers directly via ab I get around 50 requests per second. I've experimented with a number of different values in ab, the most common being 1000 requests with 100 concurrent connections. Any idea why traffic distributed across 2 servers would result in fewer requests per second than hitting either directly? Additional info: I've experimented with worker_processes values between 1 and 8, worker_connections between 1024 and 8092, and have also tried keepalive 0 and 65. My main conf currently looks like this:

        user www-data;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;
        worker_rlimit_nofile 8192;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    I've got one virtual host (in sites-available) that redirects everything under / to the 2 backends across a local network.

    Read the article

  • How to Promote iPhone App

    - by Ricibald
    I published my app. I requested reviews from more than 150 sites, but only 8 of them gave me a (good) review. I can't reach the most famous blogs because I would have to pay to be bumped to the top of their queue. Now, my app seems to be nice, but I have only 65 users playing it. These 65 users seem to really like it, because I can see a lot of games played in my stats. A lot of users say that my app is "fun like Doodle Jump and very addictive". So, what's the problem? Why only 65 people? In short, how do simple games such as "Doodle Jump" or "Flight Control" get into the top 10? What's their secret? What's my mistake? UPDATE: I have a site describing my app.

    Read the article

  • Road Warrior VPN Setup

    - by wobblycogs
    I apologise up front for the rather open-ended nature of this question, but I've got well out of my depth and could really do with some pointers. I need to set up a road-warrior VPN solution which will allow our customers to securely access a number of services we provide for them. Customer machines will be running a variety of Windows versions from XP onwards with a variety of patch levels. Typically they will connect from the clients' main offices, but not always. It is safe to assume that all clients will be behind NAT, but we may occasionally see a connection that isn't NAT'ed. The typical connection situation is therefore: Customer Laptop -- Router (NAT) -- Internet -- VPN Server + Firewall -- Server (Win 2008 R2, non-routable IP). There will initially be a dozen or so people that could connect, but that will grow quickly to around 100. It's unlikely that we'll see that many concurrent connections though; I imagine our total VPN throughput would be <50Mbps peak. What are my options for setting this up? I've been trying to set up a system like this using a MikroTik router for a few days but have struggled to get it working correctly, particularly with NAT'ed clients. I've had a quick look at OpenVPN and liked what I saw, but I think it's unlikely our customers' IT departments would allow the client to be installed. Finally, I've looked at the Cisco ASA range, but I'm on a fairly tight budget, so this is less preferable although it looks like it would work pretty much out of the box. My fallback position is to connect the server directly and use the provided VPN + firewall facilities, but that is far from ideal as the number of servers is likely to grow over time.

    Read the article

  • How to make a thread try to reconnect to the Database x times using JDBCTemplate

    - by gillJ
    Hi, I have a single thread trying to connect to a database using Spring's JdbcTemplate as follows:

        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        try {
            jdbcTemplate.execute(new CallableStatementCreator() {
                @Override
                public CallableStatement createCallableStatement(Connection con) throws SQLException {
                    return con.prepareCall(query);
                }
            }, new CallableStatementCallback() {
                @Override
                public Object doInCallableStatement(CallableStatement cs) throws SQLException {
                    cs.setString(1, subscriberID);
                    cs.execute();
                    return null;
                }
            });
        } catch (DataAccessException dae) {
            throw new CougarFrameworkException(
                    "Problem removing subscriber from events queue: " + subscriberID, dae);
        }

    I want to make sure that if the above code throws a DataAccessException or SQLException, the thread waits a few seconds and tries to reconnect, say 5 more times, and then gives up. How can I achieve this? Also, if the database goes down during execution and comes up again, how can I ensure that my program recovers and continues running instead of throwing an exception and exiting?
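
    (An editorial aside, not from the original question: one straightforward approach is to wrap the jdbcTemplate.execute(...) call in a small retry loop - a sketch; the RetryingJdbcCall class name, attempt count and fixed back-off are illustrative, and something like Spring Retry would be a tidier alternative.)

        import org.springframework.dao.DataAccessException;

        public final class RetryingJdbcCall {

            // Runs the given JDBC action; on DataAccessException waits and retries,
            // giving up (and rethrowing the last error) after maxAttempts attempts.
            public static void runWithRetry(Runnable jdbcAction, int maxAttempts, long waitMillis) {
                DataAccessException lastError = null;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        jdbcAction.run();        // e.g. the jdbcTemplate.execute(...) block from the question
                        return;                  // success
                    } catch (DataAccessException dae) {
                        lastError = dae;
                        try {
                            Thread.sleep(waitMillis);              // back off before the next attempt
                        } catch (InterruptedException ie) {
                            Thread.currentThread().interrupt();    // preserve the interrupt and stop retrying
                            break;
                        }
                    }
                }
                if (lastError != null) {
                    throw lastError;
                }
            }
        }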

    Read the article

  • ArrayBlockingQueue - How to "interrupt" a thread that is waiting on the .take() method

    - by bernhard
    I use an ArrayBlockingQueue in my code. Clients wait until an element becomes available: myBlockingQueue.take(); How can I "shut down" my service when no elements are present in the queue and take() is waiting indefinitely for an element to become available? This method throws an InterruptedException. My question is: how can I trigger an InterruptedException so that take() will quit? (I also thought about notify(), but it seems that doesn't help here.) I know I could insert a special "EOF/QUIT" marker element, but is this really the only solution? UPDATE (regarding the comment that points to another question with two solutions: one is the "poison pill" object mentioned above, and the second is Thread.interrupt()): The myBlockingQueue.take() call is NOT in a class extending Thread, but in one that implements Runnable. It seems a Runnable does not provide an .interrupt() method? How could I interrupt the Runnable? Million thanks, Bernhard
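
    (An editorial aside, not from the original question: interrupt() belongs to the Thread that runs the Runnable, not to the Runnable itself, so keeping a reference to that Thread is enough to wake the blocked take(). A minimal sketch; the class and variable names are illustrative.)

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class QueueConsumer implements Runnable {
            private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

            @Override
            public void run() {
                try {
                    while (true) {
                        String item = queue.take();   // throws InterruptedException when the thread is interrupted
                        System.out.println("got " + item);
                    }
                } catch (InterruptedException e) {
                    System.out.println("consumer shutting down");   // clean exit path
                }
            }

            public static void main(String[] args) throws InterruptedException {
                Thread worker = new Thread(new QueueConsumer());
                worker.start();
                Thread.sleep(1000);
                worker.interrupt();   // wakes the take() that is blocked on the empty queue
            }
        }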

    Read the article

  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are on the order of 10,000 keys, where each key corresponds to a stream of events. I'd like to support the following operations:

        push(key, timestamp, event) - pushes event onto the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost sorted order.
        tail(key, timestamp) - gets all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with pushes for the same key.

    This has to be persistent (although it is not absolutely necessary to persist pushes immediately or to keep tails strictly in sync with pushes), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value store, or something else?
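
    (An editorial aside, not from the original question: it sidesteps persistence, but the access pattern itself maps naturally onto one timestamp-sorted map per key, which is roughly what a wide-row or sorted key-value store gives you on disk. A small in-memory Java sketch with made-up names, where tail() is just a tailMap() view.)

        import java.util.Collection;
        import java.util.List;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentSkipListMap;

        public class EventStreams<E> {
            // key -> (timestamp -> event), with the inner map kept sorted by timestamp
            private final ConcurrentHashMap<String, ConcurrentSkipListMap<Long, E>> streams =
                    new ConcurrentHashMap<>();

            public void push(String key, long timestamp, E event) {
                streams.computeIfAbsent(key, k -> new ConcurrentSkipListMap<>())
                       .put(timestamp, event);
            }

            // All events for the key at or after the given timestamp, in timestamp order.
            public Collection<E> tail(String key, long sinceTimestamp) {
                ConcurrentSkipListMap<Long, E> stream = streams.get(key);
                if (stream == null) {
                    return List.of();
                }
                return stream.tailMap(sinceTimestamp, true).values();
            }
        }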

    Read the article

  • Close resources before exiting JFrame and TCP communication in Java

    - by Oz Molaim
    1. I'm writing a chat application based on TCP communication. I'm using NetBeans, and I want to add functionality to the default EXIT_ON_CLOSE behavior when the JFrame is closed. The reason, of course, is that I want to clean up resources and end threads safely. How can I call a method that clears resources, and only then close the JFrame safely and end the process? 2. I need to implement the server side. The server keeps a List/HashMap/Queue of Sockets together with their chat nicknames. Is there any simple design pattern to do this correctly? I don't want to re-invent the wheel. Thanks.
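
    (An editorial aside, not from the original question: for part 1, a common pattern is DO_NOTHING_ON_CLOSE plus a WindowListener, so cleanup runs before the frame is disposed and the process exits. A sketch; releaseResources() is a placeholder for the actual socket/thread cleanup.)

        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;
        import java.awt.event.WindowAdapter;
        import java.awt.event.WindowEvent;

        public class ChatWindow {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JFrame frame = new JFrame("Chat");
                    frame.setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE);   // we decide when to close
                    frame.addWindowListener(new WindowAdapter() {
                        @Override
                        public void windowClosing(WindowEvent e) {
                            releaseResources();   // close sockets, stop worker threads, etc.
                            frame.dispose();
                            System.exit(0);
                        }
                    });
                    frame.setSize(400, 300);
                    frame.setVisible(true);
                });
            }

            private static void releaseResources() {
                // placeholder: close the TCP socket, interrupt/join worker threads, flush logs
            }
        }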

    Read the article

  • Make a jQuery animation faster

    - by darkandcold
    Hello, while a page is loading I am using an animation:

        wid = jQuery(window).width() + 400;
        jQuery('#div').animate({'marginLeft': '+=' + wid + 'px'}, {queue: false, duration: 20000});

    The div is moved to the left over 20 seconds. I use this animation while the page loads. When the page has loaded, <body onload=myfunction()> is called. When myfunction is called (the page has loaded completely) I want my animation to go faster. How can I change an animation's duration while it is animating?

    Read the article

  • How to limit the number of instances of the same Activity on the stack in an Android application

    - by johnrock
    Is this possible in an Android app? I want to make it so that no matter how many times a user starts activityA, when they hit the back button they will never get more than one occurrence of activityA. What I am finding with my current code is that I have only two options: 1. I can call finish() in activityA, which will prevent it from being accessible via the back button completely, or 2. I do not call finish(), and then if the user starts activityA n times during their session, there will be n instances when hitting the back button. Again, I want activityA to be accessible by hitting the back button, but there is no reason to keep multiple instances of the same activity on the stack. Is there a way to limit the number of instances of an activity on the stack to just one?
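
    (An editorial aside, not from the original question: one way to get this is to ask Android to reuse the existing instance instead of stacking a new one - either declaratively with android:launchMode="singleTask" or "singleTop" in the manifest, or with an Intent flag as sketched below. LauncherActivity is made up; ActivityA stands in for the activity from the question and is assumed to be declared elsewhere.)

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;

        public class LauncherActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                openActivityA();
            }

            private void openActivityA() {
                // ActivityA is the activity from the question, declared in the manifest.
                Intent intent = new Intent(this, ActivityA.class);
                // Bring the existing instance (if any) to the front instead of pushing
                // another copy onto the back stack.
                intent.addFlags(Intent.FLAG_ACTIVITY_REORDER_TO_FRONT);
                startActivity(intent);
            }
        }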

    Read the article

  • Apache, Tomcat and mod_jk for load balancing

    - by pHk
    Hi guys. I've set up basic Apache (2.2.x) and Tomcat (6.0.x) load balancing with mod_jk, using the worker.properties file. Preliminary testing seems to show that this works relatively well, and it was quite easy to set up. However, the fact that it was so easy to set up has me a little worried. We're dealing with 100 - 300 concurrent users using the same web application (deployed on 2 or 3 Tomcat instances). I have done a little Googling and looking around on here, and there seems to be more than one way to accomplish this (one example on here used a balancer:// style URL, which I've never seen before in an Apache config). For example, one question I ask myself is how reliable mod_jk's load detection really is (Busyness, Session, Request, etc.). In your experience, does this setup prove to be reliable in real-world scenarios? Any pointers on improvements, pitfalls, or interesting literature/articles? I've worked with Apache before, but am in no way an expert. Thanks in advance.

    Read the article

  • Can SELECT INTO affect data in its original table during an UPDATE?

    - by driveby
    While asking this question (asp.net scheduling timed events), user murph posted some insightful information: "The point about this is that it's very, very simple - you have a process for exchange that is performing a clearly defined task, and you have a high-frequency task that is not doing anything particularly complex; it's a straightforward query (select from table where sent = false and send at < value) - probably into a temporary table so that you can run a single update query after you've done the sends - that you can optimise the index for. You're not trying to queue up a huge pile of event triggers, just one that fires once a minute and processes things that are due." Is it possible to SELECT data from table X INTO table Y and have the UPDATEs that are performed on table Y pushed back into table X? I guess the alternative would be that the data gets updated in table Y and then an UPDATE command is run on table X based on the data in table Y. What would be the advantage of selecting into another table? Thank you.

    Read the article

  • Server configuration for our website [duplicate]

    - by Varun Varunesh
    This question already has an answer here: Can you help me with my capacity planning? We are a start-up, and 6 months ago we launched the beta version of our website. Now we are in the phase of building our website and web services for the final product. The website will be based on PHP and Python, with a MySQL database and a WAMP server. Right now, in the beta version, we are using an Azure VM for hosting, with 786MB RAM and a shared CPU. We have on average 200 users coming to our website daily. Now we are trying to increase the number of daily users from 200 to 1500, and I am thinking our server should be able to handle at least 100 concurrent users. We have also developed web services for our mobile apps, which can also increase the load on the server. So here are the questions that bring me here: I am pretty much confused about whether to go with shared hosting or VM-based hosting. If VM, then what configuration will be best for our requirements (as discussed above)? Currently our VM is a Windows-based server and it's very simple to manage, so other than the cost factor, why should I go for a Linux-based server? What other factors should I keep in mind while choosing a server for our requirements?

    Read the article

  • Nginx + PHP-FPM Too Many Resources

    - by user3393046
    My server has the following specs: CPU: 6-core Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz; RAM: 32 GB. I have a problem with nginx + php-fpm: they are consuming too many resources for an unknown reason. Even if I restart nginx + php-fpm, the freshly started processes use a lot of resources. My nginx config is the following:

        user nginx;
        worker_processes auto;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;
        worker_rlimit_nofile 300000;

        events {
            worker_connections 6000;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            #gzip on;
            include /etc/nginx/conf.d/*.conf;
        }

    My php-fpm pool config is the following:

        [www]
        user = nginx
        group = nginx
        listen = /var/run/php5-fpm.sock
        listen.owner = nginx
        listen.group = nginx
        listen.allowed_clients = 127.0.0.1
        pm = ondemand
        pm.max_children = 1500;
        pm.process_idle_timeout = 5;
        chdir = /
        security.limit_extensions = .php

    I'm using pm = ondemand since my website has to support many concurrent connections at the same time and I was unable to do that with dynamic/static. I guess this isn't the problem, because as I said earlier, when I restart nginx + php-fpm at the same time they consume too many resources without any requests. Here is a screenshot of the CPU usage: http://s28.postimg.org/v54q25zod/Untitled.png

    Read the article

  • IconDownloader, problem with lazy downloading

    - by Junior B.
    My problem is simple to describe but seems hard to solve. The problem is loading icons with a custom class like IconDownloader.m, provided by an official example from Apple, while avoiding crashes if I release the view. I've added the IconDownloader class to my app, but it's clear that this approach is only good if the table view is the root. The big problem is when the view is not the root one. For example: if I start to scroll my second view (the app now loads the icons) and, without giving it time to finish the downloads, I go back to the root, the app crashes because the view that has to be updated with the new icons doesn't exist anymore. One possible solution could be to implement an operation queue in the view, but with this approach I have to stop the queue when I change the view and restart it when I come back, and the idea of having N queues doesn't make me enthusiastic. Has anyone found a good solution to this problem?

    Read the article

  • Using thread inter-communication to increase my server app's IO throughput; not sure how

    - by Howard Guo
    My server application creates a new thread for each incoming connection. Incoming requests are serialized in a BlockingQueue. There is one worker thread taking items from the queue, producing a response, and sending the response through the socket. I have noticed a throughput issue: currently the worker thread is responsible for sending the response message through the socket, which wastes processing power and limits throughput. I am considering this: rather than sending the response itself, why not tell the network I/O threads to send the response? However, when I think about thread inter-communication, I cannot yet figure out how to approach it: the worker thread will produce a response, but how should it hand the response message over to the I/O thread? Is there a standard/best practice? Thank you.
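
    (An editorial aside, not from the original question: one standard shape for this is a second BlockingQueue between the worker and a dedicated writer thread, so the worker only enqueues finished responses and the writer owns all socket I/O. A sketch; the Response record and method names are made up, and sendOverSocket() is a placeholder for the real write.)

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class ResponsePipeline {
            // Minimal stand-in for a finished response bound for a particular client.
            record Response(String clientId, byte[] payload) {}

            private final BlockingQueue<Response> outbound = new LinkedBlockingQueue<>();

            // Called by the worker thread instead of writing to the socket itself.
            public void submit(Response response) {
                outbound.offer(response);
            }

            // Runs on the dedicated writer/IO thread.
            public void writerLoop() throws InterruptedException {
                while (!Thread.currentThread().isInterrupted()) {
                    Response response = outbound.take();   // blocks until a response is ready
                    sendOverSocket(response);
                }
            }

            private void sendOverSocket(Response response) {
                // placeholder for the actual (blocking or async) socket write
            }
        }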

    Read the article

  • Under what conditions will sendmail try to immediately resend a message instead of waiting for the standard requeue interval?

    - by Mike B
    CentOS 5.8 | Sendmail 8.14.4. I used to think that if Sendmail experienced a temporary (400-class) error during delivery, it would place the message in a deferred queue (e.g. /var/spool/mqueue) and retry an hour later. For the most part, that appears to be the case. But every now and then, I'll notice log entries like this (email/domains renamed to protect the innocent :-) ):

        Dec 5 01:43:03 foobox-out sendmail[11078]: qBE3l7js123022: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=124588, relay=exbox.foo.com. [10.10.10.10], dsn=4.0.0, stat=Deferred: 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel
        Dec 5 01:53:34 foobox-out sendmail[12763]: qBE3l7js123022: to=<[email protected]>, delay=00:10:31, xdelay=00:00:00, mailer=relay, pri=214588, relay=exbox.foo.com., dsn=4.0.0, stat=Deferred: 452 4.3.1 Insufficient system resources
        Dec 5 02:53:35 foobox-out sendmail[23255]: qBE3l7js123022: to=<[email protected]>, delay=01:10:32, xdelay=00:00:01, mailer=relay, pri=304588, relay=exbox.foo.com. [10.10.10.10], dsn=2.0.0, stat=Sent (<[email protected]> Queued mail for delivery)

    Why did Sendmail try again just 10 minutes after the first attempt and then wait another hour before trying again? If this is expected behavior, what scenarios will cause this faster requeue interval to occur?

    Read the article

  • Can I prevent Flash's input events from stacking up when my framerate is low?

    - by Matt W
    My Flash game targets 24 fps, but slows to 10 fps on slower machines. This is fine, except that Flash decides to throttle the queue of incoming MouseEvents and KeyboardEvents, so they stack up and the events fall behind. Way behind. It's so bad that, at 10 fps, if I spam the mouse and keyboard for a few seconds, not much happens; then, after I stop, the game seems to play itself for the next 5 seconds as the events trickle in. Spooky, I know. Does anyone know a way around this? I basically need to say to Flash, "I know you think we're falling behind, but throttling the input events won't help. Give them to me as soon as you get them, please."

    Read the article

  • C# multi-threaded file processing

    - by user177883
    There is a folder that contains 1000s of small text files. I aim to parse and process all of them while more files are being added to the folder. My intention is to multithread this operation, as the single-threaded prototype took 6 minutes to process 1000 files. I'd like to have reader and writer thread(s) working as follows: while the reader thread(s) are reading the files, the writer thread(s) process them. Once a reader has started reading a file, I'd like to mark it as being processed, for example by renaming it; once it has been read, rename it to completed. How should I approach such a multithreaded application? Is it better to use a distributed hash table or a queue? Which data structure should I use to avoid locks? Do you have a better approach to this scheme that you'd like to share?
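
    (An editorial aside, not from the original question, and sketched in Java rather than C#: the rename acts as an atomic claim, so whichever thread renames a file first owns it and a later directory scan will not pick it up again. Folder name and suffixes are made up; the .NET version would use a thread pool or the TPL plus File.Move in the same way.)

        import java.io.File;
        import java.nio.file.Files;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class FolderProcessor {
            public static void main(String[] args) {
                File folder = new File("incoming");                       // folder being filled with files
                ExecutorService pool = Executors.newFixedThreadPool(8);   // processing threads

                File[] candidates = folder.listFiles((dir, name) -> name.endsWith(".txt"));
                if (candidates != null) {
                    for (File f : candidates) {
                        File claimed = new File(f.getPath() + ".processing");
                        if (f.renameTo(claimed)) {                        // atomic claim: only one winner
                            pool.submit(() -> process(claimed));
                        }
                    }
                }
                pool.shutdown();   // re-run (or loop) to pick up files that arrive later
            }

            private static void process(File f) {
                try {
                    String content = Files.readString(f.toPath());
                    // ... parse and handle 'content' here ...
                    f.renameTo(new File(f.getPath().replace(".processing", ".done")));
                } catch (Exception e) {
                    e.printStackTrace();   // a real version would move the file to a 'failed' state
                }
            }
        }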

    Read the article
