Search Results

Search found 3458 results on 139 pages for 'concurrent queue'.

Page 103/139 | < Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >

  • How can I receive messages over HTTP without MSMQ?

    - by pduncan
    I need a reliable messaging framework that runs over http/https (due to client security requirements) and that doesn't use MSMQ (because some clients will use Windows XP Home). The clients only need to be able to receive messages, not send them. We already have a message queue on the server for each user, and the receivers have been getting messages by connecting to an HttpHandler on the server and getting a Stream from WebResponse.GetResponseStream(). We keep this stream open and pull messages off it using Stream.Read(). This mostly works, but Stream.Read() is a blocking call, and we can't reliably interrupt it. We need to be able to stop and start the receiver without losing messages, but the old stream often hangs around, even after we call Thread.Abort on its thread. Any suggestions?
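
    One possible way to make that receive loop interruptible without Thread.Abort, assuming the receiver uses HttpWebRequest, is to bound each Read with ReadWriteTimeout and re-check a stop flag whenever a read times out. The sketch below is only an illustration; the endpoint URL, buffer size, and message framing are placeholders.

    ```csharp
    using System.IO;
    using System.Net;

    // Sketch: a receiver that can be stopped cleanly without Thread.Abort.
    // Each Read blocks for at most ReadWriteTimeout, after which the stop
    // flag is re-checked. The URL and message framing are hypothetical.
    class MessageReceiver
    {
        private volatile bool _stopRequested;

        public void Stop() { _stopRequested = true; }

        public void ReceiveLoop(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.ReadWriteTimeout = 2000;          // applies to Reads on the response stream

            using (var response = request.GetResponse())
            using (var stream = response.GetResponseStream())
            {
                var buffer = new byte[8192];
                while (!_stopRequested)
                {
                    try
                    {
                        int read = stream.Read(buffer, 0, buffer.Length);
                        if (read == 0) break;         // server closed the stream
                        // TODO: accumulate bytes and dispatch complete messages here.
                    }
                    catch (WebException) { /* read timed out; loop and re-check the flag */ }
                    catch (IOException)  { /* some stream wrappers report timeouts this way */ }
                }
            }
        }
    }
    ```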

    Read the article

  • apache chokes after 300 connections

    - by john titus
    We have an Apache web server in front of Tomcat, hosted on EC2; the instance type is extra large with 34 GB of memory. Our application deals with a lot of external web services, and we have a very lousy external web service that takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes (ps -ef | grep httpd | wc -l = 300). I have googled and found numerous suggestions, but nothing seems to work. The following is some configuration I have done, taken directly from online resources; I have increased the limits for max connections and max clients in both Apache and Tomcat. Here are the configuration details: //apache <IfModule prefork.c> StartServers 100 MinSpareServers 10 MaxSpareServers 10 ServerLimit 50000 MaxClients 50000 MaxRequestsPerChild 2000 </IfModule> //tomcat <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol" connectionTimeout="600000" redirectPort="8443" enableLookups="false" maxThreads="1500" compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml" compression="on"/> //Sysctl.conf net.ipv4.tcp_tw_reuse=1 net.ipv4.tcp_tw_recycle=1 fs.file-max = 5049800 vm.min_free_kbytes = 204800 vm.page-cluster = 20 vm.swappiness = 90 net.ipv4.tcp_rfc1337=1 net.ipv4.tcp_max_orphans = 65536 net.ipv4.ip_local_port_range = 5000 65000 net.core.somaxconn = 1024 I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure an m2.xlarge server should serve more than 300 requests; I'm probably going wrong somewhere in my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300-second-delayed] web service to respond. Please help.

    Read the article

  • DispatcherOperation.Wait()

    - by Mark
    What happens if you call dispatcherOperation.Wait() on an operation that has already completed? Also, the docs say that it returns a DispatcherOperationStatus, but wouldn't that always be Completed since it (supposedly) doesn't return until it's done? I was trying to use it like this: private void Update() { while (ops.Count > 0) ops.Dequeue().Wait(); } public void Add(T item) { lock (sync) { if (dispatcher.CheckAccess()) { list.Add(item); OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, item)); } else { ops.Enqueue(dispatcher.BeginInvoke(new Action<T>(Add), item)); } } } I'm using this in WPF, so all the Add operations have to occur on the UI thread, but I figured I could basically just queue them up without having to wait for it to switch threads, and then just call Update() before any read operations to ensure that the list is up to date, but my program started hanging.
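
    One common cause of a hang like this is blocking in Wait() while holding the same lock that the dispatched Add needs. Below is a minimal sketch, not the poster's code, of a drain loop that holds the lock only long enough to dequeue and waits outside it; field names (ops, sync) mirror the question, and the list and event-raising code are omitted. As the comment notes, Wait() on an already-completed operation simply returns immediately.

    ```csharp
    using System.Collections.Generic;
    using System.Windows.Threading;

    // Sketch: drain pending dispatcher operations without blocking while the
    // lock is held. Field names mirror the question; everything else is omitted.
    class DispatcherQueueDrain
    {
        private readonly object sync = new object();
        private readonly Queue<DispatcherOperation> ops = new Queue<DispatcherOperation>();

        public void Update()
        {
            while (true)
            {
                DispatcherOperation op;
                lock (sync)
                {
                    if (ops.Count == 0) return;
                    op = ops.Dequeue();
                }

                // Wait() on an operation that has already completed returns
                // immediately, reporting DispatcherOperationStatus.Completed.
                if (op.Status != DispatcherOperationStatus.Completed)
                    op.Wait();   // blocked outside the lock so the BeginInvoke'd Add can run
            }
        }
    }
    ```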

    Read the article

  • Hibernate inserting into join table

    - by Karl
    I have several entities. Two of them have a many-to-many relation. When I do a bigger operation on these entities it fails with this exception: org.hibernate.exception.ConstraintViolationException: could not insert collection rows. I execute the operation in a @Transactional context. I don't do any explicit flushing in my DAOs; the flush is triggered by a query. In the queue are 15 elements (all of the same structure). One of them always fails, but it's always a different one (I checked) and always at a different position. Does anybody have a hint about what I might be doing wrong? My mapping: @ManyToMany(targetEntity = CategoryImpl.class) protected Set<Category> categories = new HashSet<Category>();

    Read the article

  • understanding memory mapping in directx

    - by numerical25
    So my question is: "When you're using the mapping feature to write into a memory buffer, are you really just saving the whole procedure into a queue so DirectX executes it when it has finished with other tasks?" I ask because this is my perception of mapping when writing to a buffer, and I just want to make sure it is correct. I understand that the monitor refreshes extremely slowly compared to the processor, and I am sure the processor can execute 10 times the amount the screen can refresh. So is this one of the reasons you should map when writing to a buffer, so each procedure can be done in an orderly fashion? If someone could elaborate, that would be great. Thanks

    Read the article

  • Creating TIFFs directly from VB.Net

    - by ajl
    The current application uses the .NET PrintDocument class to create print jobs, which it sends to a standard printer. We use the Black Ice TIFF print driver to capture the output and manage it from there. The problem is that some print jobs take 30 seconds to come out of the queue, and Black Ice will not allow you to change settings on the driver (like the output filename) until the job is complete. This means the application has to wait 30 seconds before it can print the next job. Is there a better way? Can I create/print TIFF images directly from .NET without a 3rd-party print driver? Do I risk losing quality by doing this?
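
    For what it's worth, GDI+ can write TIFF files directly, so one possible approach is to render each page to an offscreen Bitmap and save it with ImageFormat.Tiff, with no print driver in the path. The sketch below is in C# (the VB.Net translation is mechanical); the page size, resolution, and drawing code are placeholder assumptions.

    ```csharp
    using System.Drawing;
    using System.Drawing.Imaging;

    // Sketch: render page content to an offscreen bitmap and save it as a TIFF.
    // 8.5" x 11" at 300 dpi and the drawing calls are assumptions; in practice
    // the same drawing code used in the PrintDocument handler could run here.
    class TiffPageWriter
    {
        public static void WritePage(string path)
        {
            using (var page = new Bitmap(2550, 3300))        // 8.5" x 11" at 300 dpi
            {
                page.SetResolution(300f, 300f);
                using (var g = Graphics.FromImage(page))
                using (var font = new Font("Arial", 12f))
                {
                    g.Clear(Color.White);
                    g.DrawString("Sample page", font, Brushes.Black, 100f, 100f);
                }
                // EncoderParameters can be passed to Save to pick a specific TIFF
                // compression (e.g. CCITT Group 4 for black-and-white output).
                page.Save(path, ImageFormat.Tiff);
            }
        }
    }
    ```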

    Read the article

  • CRT not initialized

    - by jfhs
    I'm trying to compile a project with MSVC 2010. Compilation is OK, but when I try to run the app, it gives me a "CRT not initialized" error. It is a console application, so I tried to specify mainCRTStartup as the Entry Point, but it didn't help. There are other projects in the same solution, and they don't have this problem. The difference I see is that the one that doesn't work uses Boost (v1.38.0, if that matters). The Runtime Library is Multi-threaded DLL. The linker command line is: /OUT:"D:\temp\ghost\Release\ghost.exe" /INCREMENTAL:NO /NOLOGO /LIBPATH:"..\zlib\lib" /LIBPATH:"..\mysql\lib\opt" /LIBPATH:"..\boost\lib" "ws2_32.lib" "winmm.lib" "zdll.lib" "StormLibRAS.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" "D:\temp\ghost\bncsutil\vc8_build\Release\BNCSutil.lib" /MANIFEST /ManifestFile:"Release\ghost.exe.intermediate.manifest" /ALLOWISOLATION /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"D:\temp\ghost\Release\ghost.pdb" /SUBSYSTEM:CONSOLE /OPT:REF /OPT:ICF /PGD:"D:\temp\ghost\Release\ghost.pgd" /LTCG /TLBID:1 /ENTRY:"mainCRTStartup" /DYNAMICBASE /NXCOMPAT /MACHINE:X86 /ERRORREPORT:QUEUE

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here. linvx$ ( ulimit -u 123; /bin/echo nst ) nst linvx$ ( ulimit -u 122; /bin/echo nst ) -bash: fork: Resource temporarily unavailable Terminated linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three ) one two three linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three ) -bash: fork: Resource temporarily unavailable Terminated one I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand, and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)

    Read the article

  • Creating TCP network errors for unit testing

    - by Robert S. Barnes
    I'd like to create various network errors during testing. I'm using the Berkeley sockets API directly in C++ on Linux. I'm running a mock server in another thread from within Boost.Test which listens on localhost. For instance, I'd like to create a timeout during connect. So far I've tried not calling accept in my mock server and setting the backlog to 1, then making multiple connections, but all seem to successfully connect. I would think that if there wasn't room in the backlog queue I would at least get a connection refused error, if not a timeout. I'd like to do this all programmatically if possible. I'd consider using something external like ipchains to intentionally drop certain packets to certain ports during testing, but I'd need to automate creating and removing rules so I could do it from within my Boost.Test unit tests. I suppose I could mock the various system calls involved, but I'd rather go through a real TCP stack if possible. Ideas?

    Read the article

  • Having a fork match the original repo when the original master branch can't be merged in?

    - by a2h
    The related questions that SO offers me only answer simple cases that can be solved with a pull; however, that won't work for my case. There's a repository I've forked that has just a master branch, and I've worked in both my master and a new branch of my own, rw-style. The owner of the original repository has committed some of my changes but not others; the black dots in the screenshot (not shown here) represent commits from both my master and rw-style branches. I'm aware that using the fork queue is not a good idea, so I'm staying away from it. Using git pull does work, but it creates a conflict that I would then need to resolve, and it also results in duplicate history for my master branch, which doesn't look particularly pretty. I don't know any other solutions right now, so I'm currently considering just creating a patch from the two commits that I haven't yet pushed, deleting my fork, creating it again from the original, and then applying my patches on top of it. Is that the only solution?

    Read the article

  • Oracle: 1 Large Server vs. 2 Smaller Servers?

    - by nvahalik
    We are in the planning stages of setting up our production Oracle 10gR2 environment. Our budget gives us the ability to buy 2 processor licenses of Oracle DB Standard Edition. We have minimal experience with Oracle, so I'll defer to anyone who has used it. We are trying to decide if we should set up a single dual quad-core box or 2 individual quad-core boxes in a RAC configuration. Our DB right now is about 60 GB, and at our peak we'll have up to 150 concurrent users. Most of the big stuff is done via batch processing at night. My gut tells me that having 2 boxes in a RAC configuration can't be a bad thing because it provides a true hardware failover solution; the DB would be stored on a shared LUN on a SAN via iSCSI. Plus, if we ever need to add capacity, we would already have boxes in place that can be upgraded with extra procs (I assume with zero downtime, since it's set up in a RAC config) or RAM if we add extra licenses. Does RAC have any performance penalties? Will it add extra latency? Is there any true advantage to having dual-processor boxes running these systems? If we build out the Oracle boxes with special hardware (hardware iSCSI cards, TOE NICs), will these boxes be solid? We are deploying on 64-bit Windows. So what would you do? One box or two?

    Read the article

  • How do I know if a boost thread is done?

    - by jules
    I am using boost::thread to process messages in a queue. When the first message comes I start a message processing thread. When a second message comes I check if the message processing thread is done: if it is done I start a new one; if it is not done I do nothing. How do I know if the thread is done? I tried with joinable(), but it is not working, as when the thread is done it is still joinable. I also tried to interrupt the thread at once and add an interruption point at the end of it, but that did not work. Thanks

    Read the article

  • Online Game programming in Google App Engine: AI

    - by Hortinstein
    I am currently in the planning stages of a game for Google App Engine, but I cannot wrap my head around how I am going to handle AI. I intend to have persistent NPCs that will move about the map, but short of writing a program that generates the same XML requests I use to control player actions and then running it on another server, I am stuck on how to do it. I have looked at the Task Queue feature, but since long-running processes are not an option on App Engine, I am a little stuck. I intend to run multiple server instances with 200+ persistent NPC entities that I will need to update. Most of the action is slowly roaming around based on player movements/concentrations, and attacking close-range players (you can probably guess the type of game I'm developing).

    Read the article

  • getting started with qpid server

    - by London
    I'd like to get started with Qpid; can anyone recommend some useful resources/links/code examples for me to study? I'm interested in Java implementations. I'd like to create a message sender/listener. I've already done messaging with JBoss, where I send the message to the queue and the listener picks it up from there; what is different with Qpid? Is it the same, only replacing JBoss as the server with Qpid, or something else? I'm new to all of this, so it might sound weird when I ask it. Oh yes, a similar question: http://stackoverflow.com/questions/1693099/get-started-with-qpid doesn't really help me much. Any hints would help me a lot. Thank you

    Read the article

  • TeamCity build triggers don't automatically run

    - by Phil.Wheeler
    I've been playing around with and learning a bit about TeamCity and have the server correctly set up with my .Net MVC project committed in Subversion successfully and build configurations and triggers sorted to kick off when any changes are committed to the repository. TeamCity polls on its default time period and is picking up that changes have been committed, but it is adding these to the queue without actually ever running them. I have to manually click the "Run" button to kick them off. What setting do I need to change in order to ensure that any new changes are automatically run?

    Read the article

  • Many clients on a wireless AP for UDP broadcast packets

    - by distorteddisco
    I asked this question on StackOverflow and was directed over here, so I'd appreciate any advice. I'm deploying a smartphone application as part of a live music performance that depends on receiving UDP broadcast packets from a wireless access point. I'm guessing that between 20 and 50 clients will be connected at any one time. I'm aware that a maximum of 20 clients per access point is advised, but since the UDP broadcast packets need to reach every client on the LAN, how would I be able to link multiple APs together? I'm looking for recommendations on a suitable AP for this. The actual data transmission rates are very low - only a few kB/s - as I'm just sending small messages to the smartphone apps, and there will be no WAN internet connection. I tried it with a few connected peers on an ad hoc wireless connection without any problems, but ran into dropped-packet issues on an old WRT54G running DD-WRT, though it's in pretty rough shape. What's the best way to do this? I suppose I could limit concurrent wireless connections to 20 clients... but more would be nice. EDIT: I should also say that it's purely one-way communication; the smartphone application is only receiving broadcast packets, not sending anything.

    Read the article

  • Developing a high-performance, scalable Comet application

    - by Rob
    Well, the title says most of it. I'm looking to develop a chat application that will hopefully become something more, and currently I'm considering my options for what I should build it on top of. I've taken a look at Tornado with Redis as my primary option - Tornado, being a Comet server, is perfect for long polling to retrieve the messages on Redis, which I have the intention of using as both a persistent data store and a message queue with its nifty pub/sub features. However, I've also heard good things about Django, RabbitMQ, MongoDB and Orbited. JavaScript isn't a big problem for me, so Orbited's JavaScript support isn't too much of a boon. Really, I'd probably be happy to develop on the route I've chosen for myself, but if there are any gaping deficiencies in my plan, I'd like some kind person to point them out before I find I've wasted months on this.

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A', that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but that occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

    Read the article

  • Grails external Jms broker (active mq)

    - by TheBigS
    I have what will become an 'external' ActiveMQ server that I'd like Grails to be able to talk to. Right now I am just running it on my dev box. Here is what I have set up right now: 1) Run the ActiveMQ server. 2) Run activemq/examples using ant to produce messages. 3) View the ActiveMQ admin site (http://localhost:8161/admin/queues.jsp) and verify that messages are in the queue. 4) Follow the mini tutorial to create the Service and Controller: http://www.grails.org/ActiveMQ+Plugin 5) Configure my Grails resources.groovy file as follows: beans = { jmsConnectionFactory(SingleConnectionFactory){ targetConnectionFactory = { ActiveMQConnectionFactory cf -> brokerURL = 'tcp://localhost:61616' } } } When I run the Grails app I get a BindException saying port 61616 is already in use. How do I configure this to use my server that is already running? I've tried changing 'localhost' to '127.0.0.1' and to my LAN IP, but no luck; it keeps trying to set up its own embedded ActiveMQ server. Any ideas?

    Read the article

  • Converting C# code to F#

    - by Brendon
    Hello all, I am just a beginner in programming. I wish to convert some code from C# to F#, and I have encountered this code: float[] v1=new float[10]; ... //Enqueue the Execute command. Queue.Execute(kernelVecSum, null, new long[] { v1.Length }, null, null); I have previously asked how to convert the v1 object, and I think I know how, but how do I use the function call, especially the "new long[] { v1.Length }" part of the argument list? What does "new long[] { v1.Length }" mean? I have created v1 like this: "let v1 = [| for i in 1.0 .. 10.0 -> 2.0 * i |]" Is that correct, or should I create v1 like this: "let v1 = ref [| for i in 1.0 .. 10.0 -> 2.0 * i |]"?
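
    For reference, the quoted C# expression is just an array-creation expression: it builds a one-element long[] whose single entry is v1.Length, apparently used as the global work size for the kernel launch. A small illustration (names other than v1 are placeholders):

    ```csharp
    // What the quoted fragment constructs: a one-element long[] whose single
    // entry is v1.Length. Names other than v1 are placeholders.
    class Example
    {
        static void Main()
        {
            float[] v1 = new float[10];
            long[] globalWorkSize = new long[] { v1.Length };   // the int Length is widened to long
            System.Console.WriteLine(globalWorkSize[0]);        // prints 10
        }
    }
    ```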

    Read the article

  • Active MQ showing more than one (ghost) consumer

    - by Shoey
    We have a system that uses ActiveMQ (queues) and has exactly one producer and one consumer (implemented as a Windows Service in .NET). Over the weekend, the infrastructure team rebooted the servers on the network, and from then on we noticed more than one ghost consumer appearing that listens to the queue and, we suspect, reads and deletes the messages. My questions are: is there any way, from the ActiveMQ management console, to find out what the consumers are (hostnames, etc.)? And are there any scenarios in which inadvertent consumers get 'created'? For instance, there were suggestions that the ActiveMQ journal folders get corrupted after a reboot, or that another machine with an ActiveMQ broker automatically makes itself a consumer of all the queues on the main/live ActiveMQ server.

    Read the article

  • Securing XML messages read from the database into the app

    - by scope-creep
    I have an app that reads XML from a database using an NHibernate DAL. The DAL calls stored procedures to read the data from the schema and encapsulate it into an XML message, wrap it up as a message, and enqueue it on an internal queue for processing. I would like to secure the channel from the database reads to the dequeue action. What would be the best way to do it? I was thinking of signing the XML using the System.Security.Cryptography.Xml namespace, but are there any other techniques or approaches I need to know about? Any help would be appreciated. Bob.
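
    For reference, a minimal sketch of the System.Security.Cryptography.Xml route mentioned above: an enveloped signature added before the message is enqueued and verified after dequeue. The RSA key parameter is a placeholder assumption; in practice the key would come from a certificate store or key container shared by the signer and the verifier.

    ```csharp
    using System.Security.Cryptography;
    using System.Security.Cryptography.Xml;
    using System.Xml;

    // Sketch: sign the XML message before it is enqueued and verify it after
    // dequeue. Key management is a placeholder assumption (see the note above).
    static class XmlMessageSigner
    {
        public static void Sign(XmlDocument doc, RSA key)
        {
            var signedXml = new SignedXml(doc) { SigningKey = key };

            var reference = new Reference("");                    // "" = sign the whole document
            reference.AddTransform(new XmlDsigEnvelopedSignatureTransform());
            signedXml.AddReference(reference);

            signedXml.ComputeSignature();
            doc.DocumentElement.AppendChild(doc.ImportNode(signedXml.GetXml(), true));
        }

        public static bool Verify(XmlDocument doc, RSA key)
        {
            var node = (XmlElement)doc.GetElementsByTagName("Signature")[0];
            var signedXml = new SignedXml(doc);
            signedXml.LoadXml(node);
            return signedXml.CheckSignature(key);                 // true if the message is untampered
        }
    }
    ```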

    Read the article

  • Storing task state between multiple django processes

    - by user366148
    I am building a logging bridge between RabbitMQ messages and a Django application to store background task state in the database for further investigation/review, and also to make it possible to re-publish tasks via the Django admin interface. I guess it's nothing fancy, just a standard producer-consumer pattern: the web application publishes to the message queue and inserts the initial task state into the database; the consumer, which is a separate Python process, handles the message and updates the task state depending on the task output. The problem is, some tasks are missing from the db and therefore never executed. I suspect it's because the consumer receives the message before the db commit is performed. So basically, returning from Model.save() doesn't mean the transaction has ended, and the whole communication breaks. Is there any way I could fix this? Maybe some kind of post_transaction signal I could use? Thank you in advance.

    Read the article

  • php convert images and upload to amazon s3

    - by faraklit
    I am looking for a best practice for uploading images to Amazon S3 and serving them from there. We need four different sizes of each image, so just after the image upload we convert the image and scale it to 4 different widths and heights, and then we send them to Amazon S3 using the official PHP API. // ... // image conversions, bucket setting, s3 initialization etc. $sizes= array("", "48", "64", "128"); foreach($sizes as $size) { $filename = $upload_path.$dest_file.$size.$ext; $s3->batch()->create_object($bucket, , array( 'fileUpload' => $filename, 'acl' => AmazonS3::ACL_PUBLIC, )); } But for a 1 MB image the client sometimes waits up to 30 seconds, which is a very long time. Instead of sending the images to S3 immediately, it may be better to add them to a job queue, but the user should see the uploaded image immediately.

    Read the article

  • background music stops after 3 or 4 runs in iPhone app

    - by amy
    I am playing sounds in a loop in my app, so they should continue playing throughout the app, but sometimes the sound stops after playing 3 or 4 times. I don't understand what's happening. I am using the AudioToolbox framework for playing sound: creating an audio queue and then playing sounds in a loop. I am also playing sound from the iPod library using MediaPlayer, and the same thing happens with the iPod song. I have set [musicPlayer setRepeatMode: MPMusicRepeatModeOne]; but it still stops after 3 or 4 plays.

    Read the article
