Search Results

Search found 369 results on 15 pages for 'simultaneous'.

Page 4/15

  • git : The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on StackOverflow and was advised to ask it here: We have a self-hosted git server (Gitolite) on a VPS account (CPU: 2.68GHz, RAM: 1824MB). The same VPS is also used to publish our in-development web apps for client demos (very little traffic), so the main use of the server is as a git server only. This git server is accessed by a team of 30-40 people for various projects. Our problem is that during the day, when 6-7 people are trying to access the server (sometimes the same repo), we frequently get the error message: ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number fatal: The remote end hung up unexpectedly After trying for 10-15 minutes it generally succeeds. During early mornings and late nights, when there are only 1-2 people, git commands work with a 100% success rate. I would also note that if I access other files hosted on the server through HTTP, it works fine. I found a couple of questions on StackOverflow and other sites regarding this, but most people point towards SSH key setup or conflicts between msysgit and Cygwin SSH. However, I don't think that is the problem in our case, as we see this behavior on Windows (using msysgit only) as well as on Mac machines. Also, if it were an SSH configuration issue, it shouldn't work at all; in our case it works after 10-15 minutes. I think in our case it might be too many simultaneous connections to the same server (or same repo) or something like that. Does there exist a setting or a conf file that needs to be modified to solve this problem? Please help me solve this problem or point me in the right direction. Thanks in advance. Pritam.
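
    One plausible culprit (an assumption, not confirmed by the post): OpenSSH's MaxStartups setting drops connection attempts beyond a small number of concurrent unauthenticated connections (the default is roughly 10), which would match intermittent failures only under daytime load. A minimal sshd_config sketch:

      # /etc/ssh/sshd_config -- raise the cap on concurrent
      # unauthenticated (not-yet-logged-in) connections
      MaxStartups 50
      # or, with random early drop (start:rate:full):
      # MaxStartups 30:30:100

    Restart sshd afterwards (e.g. service ssh restart) and watch whether the "Bad file number" errors stop during peak hours.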

    Read the article

  • Apache2: Limit simultaneous requests & throttle bandwidth per IP/client?

    - by xentek
    I want to limit simultaneous requests and throttle bandwidth per IP/client on a single Apache vhost. In other words, I want to ensure that this site, which hosts large media files, doesn't get hammered by someone trying to download everything all at once (which just happened the other night). I'd like to limit the outgoing transfer speed for this site overall, as well as limit the number of connections a single IP can make to the server to a sane default (i.e. within normal browser limits for multiple requests, so page loads aren't affected too much). Bonus points if I can actually scope it to file types (i.e. leave web files alone, but apply these rules just to the media files). We're running Ubuntu 9.04 on all the servers, and have two Apache/PHP servers being load balanced round-robin by a Squid proxy server. MySQL is running on its own box as well. We've got plenty of bandwidth to give them, so I don't really want overall caps; I just want to throttle the amount of memory/CPU it takes to serve this site. There are other sites on these servers that we don't want to apply these rules to; we just want to keep this one from hogging all the resources. Let me know if you need more info! Thanks in advance for your suggestions!
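
    One way this is commonly approached (a sketch, not the thread's answer): mod_ratelimit ships with Apache 2.4 (not the 2.2 on Ubuntu 9.04, where third-party modules like mod_bw served the same role), and the third-party mod_limitipconn caps connections per client IP. Assuming both are available, inside the vhost:

      # Throttle only media files; leave web files alone
      <FilesMatch "\.(mp4|avi|mov|zip|mp3)$">
          SetOutputFilter RATE_LIMIT
          SetEnv rate-limit 400        # KiB/s per response
      </FilesMatch>

      # mod_limitipconn: cap simultaneous connections per client IP
      <Location "/media">
          MaxConnPerIP 8
      </Location>

    The /media path and file-extension list are illustrative; scoping via FilesMatch is what keeps normal page loads unaffected.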

    Read the article

  • Method for defining simultaneous has-many and has-one associations between two models in CakePHP?

    - by Hobonium
    One thing with which I have long had problems, within the CakePHP framework, is defining simultaneous hasOne and hasMany relationships between two models. For example: BlogEntry hasMany Comment; BlogEntry hasOne MostRecentComment (where MostRecentComment is the Comment with the most recent created field). Defining these relationships in the BlogEntry model properties is problematic. CakePHP's ORM implements a has-one relationship as an INNER JOIN, so as soon as there is more than one Comment, BlogEntry::find('all') calls return multiple results per BlogEntry. I've worked around these situations in the past in a few ways: Using a model callback (or, sometimes, even the controller or view!), I've simulated a MostRecentComment with: $this->data['MostRecentComment'] = $this->data['Comment'][0]; This gets ugly fast if, say, I need to order the Comments any way other than by Comment.created. It also doesn't allow Cake's built-in pagination features to sort by MostRecentComment fields (e.g. sort BlogEntry results reverse-chronologically by MostRecentComment.created). Maintaining an additional foreign key, BlogEntry.most_recent_comment_id. This is annoying to maintain, and breaks Cake's ORM: the implication is BlogEntry belongsTo MostRecentComment. It works, but just looks... wrong. These solutions left much to be desired, so I sat down with this problem the other day and worked on a better solution. I've posted my eventual solution below, but I'd be thrilled (and maybe just a little mortified) to discover there is some mind-blowingly simple solution that has escaped me this whole time. Or any other solution that meets my criteria: it must be able to sort by MostRecentComment fields at the Model::find level (i.e. not just a massage of the results); it shouldn't require additional fields in the comments or blog_entries tables; and it should respect the 'spirit' of the CakePHP ORM. (I'm also not sure the title of this question is as concise/informative as it could be.)

    Read the article

  • Is it possible to configure simultaneous authentication against 2 different AD domains by IIS 7?

    - by just3ws
    Basically, I need to be able to attempt to authenticate against two different AD domains from IIS. I'd like to be able to automatically query both ADs, and whichever comes back with a successful authentication wins. The users are completely separate and will only exist in their respective domain.

              IIS
             /   \
          AD1     AD2
          JoeU    AmyU
          JillU   JohnU

    So, if IIS requests to authenticate JoeU, it will query both domains. JoeU will be found in AD1, so we can ignore whatever response comes back from AD2. Is this even possible using stock IIS 7? Is there middleware or something to allow this type of configuration on IIS 7? Would this be a job for some kind of middleware sitting between IIS and the AD domains?
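
    One stock-ish ASP.NET approach (an assumption, not something confirmed for this setup): register one ActiveDirectoryMembershipProvider per domain and try them in turn from login code. A web.config sketch with hypothetical server and provider names:

      <connectionStrings>
        <add name="AD1Conn" connectionString="LDAP://ad1.example.com/DC=ad1,DC=example,DC=com" />
        <add name="AD2Conn" connectionString="LDAP://ad2.example.com/DC=ad2,DC=example,DC=com" />
      </connectionStrings>
      <system.web>
        <membership defaultProvider="AD1Provider">
          <providers>
            <add name="AD1Provider" type="System.Web.Security.ActiveDirectoryMembershipProvider"
                 connectionStringName="AD1Conn" attributeMapUsername="sAMAccountName" />
            <add name="AD2Provider" type="System.Web.Security.ActiveDirectoryMembershipProvider"
                 connectionStringName="AD2Conn" attributeMapUsername="sAMAccountName" />
          </providers>
        </membership>
      </system.web>

    The login handler would then call Membership.Providers["AD1Provider"].ValidateUser(user, pass) and fall back to "AD2Provider" on failure, which gives the "whichever answers wins" behavior without a domain trust.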

    Read the article

  • Does Windows 7 support multiple simultaneous nVidia graphics cards with different drivers?

    - by mckenzieg1
    On one of my dev machines at work (currently running XP), I have two nVidia graphics cards: a Quadro NVS 440 (my original card, for my three primary monitors) and a GeForce GTX 275 (just added, for CUDA development). I can get both cards to work OK by installing the latest GeForce drivers, but I get some annoying-but-not-crippling display artifacts on the Quadro's screens (mostly scattered black rectangles where repainting fails for a few bits of UI in certain applications). Under XP, this seems to be the best I can do. I can use Device Manager to supposedly install different nVidia drivers for the two cards (the latest Quadro drivers for the NVS, the latest GeForce drivers for the GTX), but I actually end up with the same driver for both, because the driver DLLs all have the same names and get installed on top of one another in the system directory. I have read that Win7 has a new video driver architecture that better supports multiple heterogeneous cards. Does anyone know if that will handle my scenario? If so, it will give me a compelling reason to get that machine on Win7 ASAP.

    Read the article

  • Multi-file, simultaneous, drag-and-drop file uploads in the browser without ActiveX?

    - by qiq
    I like how Windows Skydrive lets you drag files into Internet Explorer, where an ActiveX component uploads those files to your Skydrive account in a queue. This avoids the cumbersome traditional HTML approach where you present multiple "Browse" buttons and the user has to select individual files one by one, click Upload, and then select more files after the first batch completes. What I'm not sure of is how the same effect could be achieved in a web app without ActiveX. Any suggestions?

    Read the article

  • Does it make sense to have more than one UDP Datagram socket on standby? Are "simultaneous" packets dropped?

    - by Gubatron
    I'm coding a networking application on Android. I'm thinking of having a single UDP port and DatagramSocket that receives all the datagrams sent to it, and then having different processing queues for these messages. I'm wondering whether I should have a second or third UDP socket on standby. Some messages will be very short (100 bytes or so), but others will have to transfer files. My concern is: will the Android kernel drop the small messages if it's too busy handling the bigger ones? Update: "The latter function calls sock_queue_rcv_skb() (in sock.h), which queues the UDP packet on the socket's receive buffer. If no more space is left on the buffer, the packet is discarded. Filtering also is performed by this function, which calls sk_filter() just like TCP did. Finally, data_ready() is called, and UDP packet reception is completed."
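
    The quoted kernel behavior suggests the practical knob is the socket's receive buffer rather than a second socket. A minimal Java sketch (port and buffer size are illustrative values; the OS may cap the requested size):

      import java.net.DatagramPacket;
      import java.net.DatagramSocket;

      public class UdpReceiver {
          public static void main(String[] args) throws Exception {
              DatagramSocket socket = new DatagramSocket(5000);
              // Ask the kernel for a larger receive queue so short control
              // messages are less likely to be dropped while the app is busy
              // reading large transfers.
              socket.setReceiveBufferSize(512 * 1024);
              // The kernel may grant less; check what was actually applied.
              System.out.println("granted: " + socket.getReceiveBufferSize());
              byte[] buf = new byte[64 * 1024];
              while (true) {
                  DatagramPacket packet = new DatagramPacket(buf, buf.length);
                  socket.receive(packet); // drain fast, hand off to worker queues
                  // ... enqueue packet data for processing on another thread ...
              }
          }
      }

    The key design point: a single socket whose buffer is drained quickly by a reader thread usually beats multiple standby sockets, since all of them share the same kernel-level drop behavior.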

    Read the article

  • ADO.NET Multiple simultaneous reads from an open database.

    - by Deverill
    Answer not needed - my logic was wrong and this question is invalid. Charles helped me see where I went off track. Thanks. I have a utility that moves data from one source to another. In the process of writing the record, I check to see if it exists and do an update/insert as necessary. The difficulty I have is that as I'm writing the main record info, there is a second table for "custom data" that I have to check for and update/insert as well. Example: I may be loading a pencil sharpener that may or may not exist. While I'm writing it into the destination, it has characteristics such as style, color, etc., and each of them may or may not exist. As written, I seem to need 2 DataReaders open, one for the sharpener and one to check for and update color. I am new to ADO.NET, but not to programming, and it's more complicated than I listed, but for sanity's sake I didn't put in all the details. My question is: What am I missing? You can't have 2 readers open at the same time on a connection, yet I can't close the first if I'm stepping through all products. It seems inefficient to have 2 connections, readers, etc. for this. Is there a feature of ADO.NET DBs that I'm missing? How would you do it? Thanks!

    Read the article

  • Data access strategy for a site like SO - sorted SQL queries and simultaneous updates that affect th

    - by Kaleb Brasee
    I'm working on a Grails web app that would be similar in access patterns to StackOverflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted select queries are being performed. Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this? My thought was to make the sorted selects dirty reads, since it is acceptable if they're not completely up to date all of the time. This is my only idea for possibly improving performance of these selects and updates, but I thought someone might know a better way.
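
    If dirty reads are acceptable for the vote-sorted listing, the isolation level can be dropped for just that query so it no longer waits on row locks held by in-flight vote updates. A plain JDBC sketch (connection URL, table, and columns are hypothetical; in Grails the same thing is usually expressed through transaction/isolation settings rather than raw JDBC):

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class DirtyReadListing {
          public static void main(String[] args) throws Exception {
              Connection conn = DriverManager.getConnection(
                      "jdbc:mysql://localhost/app", "user", "pass");
              // READ UNCOMMITTED: the sorted select sees in-progress vote
              // counts instead of blocking until each update commits.
              conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
              try (Statement st = conn.createStatement();
                   ResultSet rs = st.executeQuery(
                           "SELECT id, title, votes FROM entry ORDER BY votes DESC LIMIT 30")) {
                  while (rs.next()) {
                      System.out.println(rs.getInt("votes") + "  " + rs.getString("title"));
                  }
              }
              conn.close();
          }
      }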

    Read the article

  • How to do simultaneous builds in two Git branches?

    - by james creasy
    I've looked at git-new-workdir, but I don't want the history to be shared because the branches have a release-main relationship. That is, changes in the release branch I want to propagate to the main line, but changes in the main line I don't want in the release line. A common pattern for me is to fix a bug in the release line, integrate it to the main line, then start builds in both branches at the same time. Is there a way to do this with git-new-workdir, do I need to clone, or is there a better solution? Thanks
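
    One way that avoids git-new-workdir's shared history entirely (a sketch, not the thread's accepted answer): keep a full clone per branch and run the builds side by side. Paths and the build command are illustrative:

      # one-time setup: a clone per branch
      git clone git@server:project.git project-main
      git clone git@server:project.git project-release
      (cd project-release && git checkout release)

      # fix in release, push, then pull it into main only
      (cd project-release && git commit -am "fix bug" && git push)
      (cd project-main && git fetch && git merge origin/release)

      # kick off both builds in parallel
      (cd project-main && ./build.sh) & (cd project-release && ./build.sh) & wait

    Merging only from release into main keeps the propagation one-way, which is the asymmetry described above. (Much newer git versions also offer git worktree for multiple checkouts of one repository, with the same shared-history caveat as git-new-workdir.)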

    Read the article

  • Has form post behavior changed in modern browsers? (or How are double clicks handled by the browser)

    - by Alex Czarto
    Background: We are in the process of writing a registration/payment page, and our philosophy was to code all validation and error checking on the server side first, and then add client-side validation as a second step (unobtrusive jQuery). We wanted to prevent double clicks server side, so we wrote some locking, thread-safe code to handle simultaneous posts/race conditions. When we tried to test this, we realized that we could not cause a simultaneous post or race condition to occur. I thought that (in older browsers anyway) double clicking a submit button worked as follows: User double clicks submit button. Browser sends a post on the first click. On the second click, browser cancels/ignores the initial post and initiates a second post (before the first post has returned with a response). Browser waits for the second post to return, ignoring the initial post response. I thought that from the server side it looked like this: Server gets two simultaneous post requests, executes and responds to them both (unaware that no one is listening to the first response). From our testing (Firefox 3.0, IE 8.0) this is what actually happens: User double clicks submit button. Browser sends a post for the first click. Browser queues up the second click, but waits for the response from the first click. Response returns from the first click (response is ignored?). Browser sends a post for the second click. So from the server side: Server receives a single post which it executes and responds to. Then, the server receives a second request which it executes and responds to. My question is: has this always worked this way (and I'm losing my mind)? Or is this a new feature in modern browsers that prevents simultaneous posts being sent to the server? It seems that for server-side double-click prevention, we don't have to worry about simultaneous posts or race conditions, only about queued-up posts. Thanks in advance for any feedback/comments. Alex
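
    Even if current browsers serialize the posts, a server-side guard is still worth having for resubmits and replays; the usual pattern is a one-time synchronizer token stored in the session. A minimal servlet-style Java sketch (attribute and parameter names are illustrative):

      import java.util.UUID;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpSession;

      public final class SubmitGuard {
          // Call when rendering the form; embed the token in a hidden field.
          public static String issueToken(HttpSession session) {
              String token = UUID.randomUUID().toString();
              session.setAttribute("form.token", token);
              return token;
          }

          // Call when handling the post. Only the first request carrying
          // the token wins; a duplicate (double click, resubmit) fails.
          // Crude global lock: fine at form-submit rates.
          public static synchronized boolean consumeToken(HttpServletRequest req) {
              HttpSession session = req.getSession(false);
              if (session == null) return false;
              String expected = (String) session.getAttribute("form.token");
              if (expected == null || !expected.equals(req.getParameter("token"))) {
                  return false;
              }
              session.removeAttribute("form.token"); // one-time use
              return true;
          }
      }

    Whether the two posts arrive simultaneously or queued, only one consumeToken() call succeeds, so the payment handler runs once.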

    Read the article

  • Analog and digital audio output at the same time

    - by wim
    My speakers use a digital input, but my headphones use an analog input. I have them both plugged in, and when I want to use the headphones I just turn off the speakers and switch on the headphones. I know that simultaneous output on digital and analog is supported by the hardware, because it worked fine in Windows XP. But on Ubuntu, I seem to only get one at a time, depending on which setting is selected in the combo box located at System -> Preferences -> Sound -> Hardware. How can I get simultaneous analog and digital output without having to switch the profile every time? I'm on Ubuntu 11.04 and it's an HDA Intel chip.
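
    Not from the post itself, but the usual PulseAudio answer is a combined sink that mirrors output to both devices. A sketch (the sink names below are illustrative and must be looked up first; on releases as old as 11.04 the module was called module-combine rather than module-combine-sink):

      # find the exact names of the analog and digital sinks
      pactl list short sinks

      # create a virtual sink that feeds both devices
      pactl load-module module-combine-sink sink_name=both \
          slaves=alsa_output.pci-0000_00_1b.0.analog-stereo,alsa_output.pci-0000_00_1b.0.iec958-stereo

      # route audio to the combined sink
      pactl set-default-sink both

    This sidesteps the profile switch in the Sound preferences entirely; unloading the module (pactl unload-module) restores the original behavior.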

    Read the article

  • Physics Engine [Collision Response, 2-dimensional] experts, help!! My stack is unstable!

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver as described by Erin Catto (mentioned in the reference of the paper as [Catt05]). So here's how it currently is: The simulation handles 2-dimensional rotating convex polygons. Detection uses the separating-axis test, with a SKIN, meaning the closest points between two polygons are detected and it is determined whether their distance is less than SKIN. To resolve collision, the simultaneous impulse-based method is used, solved with an iterative solver (PGS solver) as in Erin Catto's paper. Error correction is implemented using Baumgarte stabilization (you can refer to either paper for this) using J V = beta/dt*overlap, where J is the Jacobian for the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that should be < 1, dt the time-step taken by the engine, and overlap the overlap between the bodies (true overlap, so SKIN is ignored). However, it is still less stable than I expected :s I tried to stack hexagons (or squares, it doesn't really matter), and even with only 4 to 5 of them, they hardly stand still! Also note that I am not looking for a sleeping scheme. But I would settle if you have any explicit scheme to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state). Ideas I have: I would try adding a damping term (proportional to velocity) to the Baumgarte. Is this a good idea in general? If not, I would not want to waste my time trying to tune the parameter hoping it magically works. Ideas I have tried: Using simultaneous position-based error correction as described in the paper in section 5.3.2; it turned out to be worse than the current scheme. If you want to know the parameters I used: hexagons, side 50 (pixels); gravity 2400 (pixels/sec^2); time-step 1/60 (sec); beta 0.1; restitution 0 to 0.2; coeff. of friction 0.2; PGS iterations 10; initial separation 10 (pixels); mass 1 (the unit is irrelevant for now, since I modified velocity directly via the impulse method); inertia 1/1000. Thanks in advance! I really appreciate any help from you guys!! :)
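
    For reference, the Baumgarte-stabilized velocity constraint above, written out (a restatement of the post's formula, nothing new):

      J V = \frac{\beta}{\Delta t}\,\mathrm{overlap}, \qquad 0 < \beta < 1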

    Read the article

  • Webserver optimization

    - by f-aminov
    Hi guys! I have a website hosted on a VPS (512MB minimum guaranteed memory, 510MHz processor, Debian 5.0 Lenny, Apache 2.2.9 with nginx 0.7.65 as a frontend to serve static content, MySQL 5.1.44, PHP 5.3.2 with APC caching). I'm a web developer, so I'm not very good at optimizing servers, but I've managed to install and set up all the necessary components (LAMP, nginx, etc.). After that I decided to stress test my website (which uses Drupal 6.16 with caching and all possible optimization enabled) using a utility called "Webserver Stress Tool 7", and it seems to me that the results aren't any good - here is a graph (sorry, as a new user I'm not allowed to post images). As you can see, the response time increases very quickly with the number of simultaneous users. With 10 simultaneous users the time is about 1000ms; with 100 simultaneous users it's about 15000ms (15s!). The question is: do you think this is normal behavior for such a server, or is something wrong with the settings and optimization? If you think something is wrong, what particularly could be wrong? Any other suggestions for speeding this up a little bit?
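
    For cross-checking the Windows tool's numbers from the server itself, ApacheBench (shipped with Apache) produces comparable concurrency figures. A sketch (URL is illustrative):

      # 500 requests total at 10 concurrent, then again at 100 concurrent
      ab -n 500 -c 10  http://example.com/
      ab -n 500 -c 100 http://example.com/
      # compare the "Time per request" lines between the two runs

    If the per-request time scales roughly linearly with concurrency on a 512MB box, the usual suspects are too-high Apache MaxClients (swapping) or PHP doing uncached work per request.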

    Read the article

  • MySQL server high traffic makes websites really slow or unable to load

    - by Holapress
    Lately we have been having a lot of problems with our MySQL server, from websites being really slow to being unable to load them at all. The server is a dedicated server that only runs our MySQL database. I have been running some tests using a profiler (JetProfiler) and a stress-testing tool (loadUI). If I use loadUI to connect with 50 simultaneous connections to one of our websites that runs a fairly big query, the website already becomes unable to load. One of the things that worries me is that JetProfiler always shows a Threads_connected of 1.00, and it seems that when it hits around 2.00 I'm unable to connect. The 3 big peaks are when I run a test with loadUI: the first was 15 simultaneous connections, which still let me load the website, just really slowly; the second was 40 simultaneous connections, which already made it impossible to load; and the third was 100 connections, which also didn't let it load anymore. Another thing that worries me is that JetProfiler says all the queries being used are full table scans - could this be the problem? The website I use as a test runs 3 queries: one for a menu that outputs around 1000 rows, one for the ads that has around 560 rows, and a big one to get posts that has around 7000 rows (see screenshot below). I have also monitored the CPU of the server and there seems to be no problem there; even when I make a lot of connections with loadUI the CPU stays low. I can't seem to figure out the main cause of the websites being unable to load when there is a high amount of traffic. If anyone has other suggestions for testing, or ideas about what might cause the problem, please let me know.
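
    Two quick checks worth running (generic MySQL practice, not from the post): confirm the full table scans with EXPLAIN, and verify max_connections isn't set absurdly low, since being refused around Threads_connected = 2 looks like a tiny connection limit. Table and column names below are hypothetical:

      -- inside the mysql client
      SHOW VARIABLES LIKE 'max_connections';

      -- 'ALL' in the type column confirms a full table scan
      EXPLAIN SELECT * FROM posts ORDER BY created_at DESC LIMIT 20;

      -- index the column the big posts query filters/sorts by
      ALTER TABLE posts ADD INDEX idx_created (created_at);

    With 7000-row scans on every page load, indexes on the sort/filter columns are usually the first fix to try.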

    Read the article

  • KVM guest io is much slower than host io: is that normal?

    - by Evolver
    I have a Qemu-KVM host system set up on CentOS 6.3, with four 1TB SATA HDDs working in software RAID10. The guest, CentOS 6.3, is installed on a separate LVM volume. People say they see guest performance almost equal to host performance, but I don't see that. My I/O tests show 30-70% slower performance on the guest than on the host system. I tried changing the scheduler (set elevator=deadline on the host and elevator=noop on the guest), setting blkio.weight to 1000 in cgroup, and changing io to virtio... but none of these changes gave me any significant results. This is the guest .xml config part:

      <disk type='file' device='disk'>
        <driver name='qemu' type='raw'/>
        <source file='/dev/vgkvmnode/lv2'/>
        <target dev='vda' bus='virtio'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
      </disk>

    These are my tests:

    Host system, iozone test:

      # iozone -a -i0 -i1 -i2 -s8G -r64k
                                                        random  random
             KB  reclen   write  rewrite    read  reread   read   write
        8388608      64  189930   197436  266786  267254  28644   66642

    Host system, dd read test (one process, then four simultaneous processes):

      # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct
      1073741824 bytes (1.1 GB) copied, 4.23044 s, 254 MB/s

      # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=1024 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=2048 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=3072 & dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=4096
      1073741824 bytes (1.1 GB) copied, 14.4528 s, 74.3 MB/s
      1073741824 bytes (1.1 GB) copied, 14.562 s, 73.7 MB/s
      1073741824 bytes (1.1 GB) copied, 14.6341 s, 73.4 MB/s
      1073741824 bytes (1.1 GB) copied, 14.7006 s, 73.0 MB/s

    Host system, dd write test (one process, then four simultaneous processes):

      # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct
      1073741824 bytes (1.1 GB) copied, 6.2039 s, 173 MB/s

      # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test2 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct
      1073741824 bytes (1.1 GB) copied, 32.7173 s, 32.8 MB/s
      1073741824 bytes (1.1 GB) copied, 32.8868 s, 32.6 MB/s
      1073741824 bytes (1.1 GB) copied, 32.9097 s, 32.6 MB/s
      1073741824 bytes (1.1 GB) copied, 32.9688 s, 32.6 MB/s

    Guest system, iozone test:

      # iozone -a -i0 -i1 -i2 -s512M -r64k
                                                      random  random
            KB  reclen  write  rewrite    read  reread   read   write
        524288      64  93374   154596  141193  149865  21394   46264

    Guest system, dd read test (one process, then four simultaneous processes):

      # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024
      1073741824 bytes (1.1 GB) copied, 5.04356 s, 213 MB/s

      # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=2048 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=3072 & dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=4096
      1073741824 bytes (1.1 GB) copied, 24.7348 s, 43.4 MB/s
      1073741824 bytes (1.1 GB) copied, 24.7378 s, 43.4 MB/s
      1073741824 bytes (1.1 GB) copied, 24.7408 s, 43.4 MB/s
      1073741824 bytes (1.1 GB) copied, 24.744 s, 43.4 MB/s

    Guest system, dd write test (one process, then four simultaneous processes):

      # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct
      1073741824 bytes (1.1 GB) copied, 10.415 s, 103 MB/s

      # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test2 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct & dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct
      1073741824 bytes (1.1 GB) copied, 49.8874 s, 21.5 MB/s
      1073741824 bytes (1.1 GB) copied, 49.8608 s, 21.5 MB/s
      1073741824 bytes (1.1 GB) copied, 49.8693 s, 21.5 MB/s
      1073741824 bytes (1.1 GB) copied, 49.9427 s, 21.5 MB/s

    I wonder: is that a normal situation, or did I miss something?
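
    One commonly tried adjustment for exactly this symptom (a general virtio tuning step, not a guaranteed fix): bypass the host page cache and use native AIO on the disk's driver element, so direct I/O in the guest isn't double-buffered on the host:

      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='none' io='native'/>
        <source file='/dev/vgkvmnode/lv2'/>
        <target dev='vda' bus='virtio'/>
      </disk>

    The change requires restarting the guest (virsh destroy/start or a full shutdown) to take effect.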

    Read the article

  • How to make a stack stable? Need help for an explicit resting contact scheme (2-dimensional)

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver as described by Erin Catto (mentioned in the reference of the paper as [Catt05]). So here's how it currently is: The simulation handles 2-dimensional rotating convex polygons. Detection uses the separating-axis test, with a SKIN, meaning the closest points between two polygons are detected and it is determined whether their distance is less than SKIN. To resolve collision, the simultaneous impulse-based method is used, solved with an iterative solver (PGS solver) as in Erin Catto's paper. Error correction is implemented using Baumgarte stabilization (you can refer to either paper for this) using J V = beta/dt*overlap, where J is the Jacobian for the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that should be < 1, dt the time-step taken by the engine, and overlap the overlap between the bodies (true overlap, so SKIN is ignored). However, it is still less stable than I expected :s I tried to stack hexagons (or squares, it doesn't really matter), and even with only 4 to 5 of them, they would swing! Also note that I am not looking for a sleeping scheme. But I would settle if you have any explicit scheme to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state). Ideas I have tried: Using simultaneous position-based error correction as described in the paper in section 5.3.2; it turned out to be worse than the current scheme. If you want to know the parameters I used: hexagons, side 50 (pixels); gravity 2400 (pixels/sec^2); time-step 1/60 (sec); beta 0.1; restitution 0 to 0.2; coeff. of friction 0.2; PGS iterations 10; initial separation 10 (pixels); mass 1 (the unit is irrelevant for now, since I modified velocity directly via the impulse method); inertia 1/1000. Thanks in advance! I really appreciate any help from you guys!! :) EDIT In response to Cholesky's comment about warm starting the solver and Baumgarte: Oh right, I forgot to mention! I do save the contact history and the impulse determined in this time step to be used as the initial guess in the next time step. As for the Baumgarte, here's what actually happens in the code. Collision is detected when the bodies' closest distance is less than SKIN, meaning they are actually still separated. If at this moment I used the PGS solver without Baumgarte, a restitution of 0 alone would be able to stop the bodies, separated by a distance of ~SKIN, in mid-air! So this isn't right; I want the bodies touching each other. So I turn on the Baumgarte, whose role here is actually to pull the bodies together! Weird, I know - a scheme intended to push bodies apart becomes useful for the reverse. Also, I found that if I increase the number of iterations to 100, stacks become much more stable, though the program becomes very slow. UPDATE Since the stack swings left and right, could it be that something is wrong with my friction model? Current friction constraint: relative_tangential_velocity = 0

    Read the article

  • Locking database edit by key name

    - by Will Glass
    I need to prevent simultaneous edits to a database field. Users are executing a push operation on a structured data field, so I want to sequence the operations, not simply ignore one edit and take the second. Essentially I want to do synchronized(key name) { push value onto the database field } and set up the synchronized item so that only one operation on "key name" will occur at a time. (Note: I'm simplifying; it's not always a simple push.) A crude way to do this would be global synchronization, but that bottlenecks the entire app. All I need to do is sequence two simultaneous writes with the same key, which is a rare but annoying occurrence. This is a web-based Java app, written with Spring (and using JPA/MySQL). The operation is triggered by a user web service call. (The root cause is when a user sends two simultaneous HTTP requests with the same key.) I've glanced through the Doug Lea/Josh Bloch et al. Java Concurrency in Practice, but don't see an obvious solution. Still, this seems simple enough that I feel there must be an elegant way to do it.
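
    A minimal JDK sketch of per-key locking (assumes Java 8+; note the map grows with distinct keys unless entries are evicted, and it only serializes within a single JVM, so a clustered deployment would need a database or distributed lock instead):

      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.ConcurrentMap;

      public final class KeyedLocks {
          private final ConcurrentMap<String, Object> locks = new ConcurrentHashMap<>();

          public void withLock(String keyName, Runnable dbOperation) {
              // One canonical monitor object per key name; contention is
              // limited to callers pushing onto the same key.
              Object lock = locks.computeIfAbsent(keyName, k -> new Object());
              synchronized (lock) {
                  dbOperation.run(); // push the value onto the database field
              }
          }
      }

      // usage, e.g. from the web service handler:
      //   keyedLocks.withLock(keyName, () -> pushValue(keyName, value));

    Guava's Striped class offers the same idea with bounded memory (a fixed pool of locks shared by hashing keys), at the cost of occasional false sharing between keys.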

    Read the article

  • How to play multiple audio sources simultaneously in Silverlight

    - by Shurup
    I want to play multiple audio sources simultaneously in Silverlight. I've created a prototype in Silverlight 4 that should play two mp3 files containing the same tick sound at an interval of 1 second. These files must sound as one if they are played together with any whole-second offset (0 and 1, 0 and 2, 1 and 1 seconds, etc.). In my prototype I use two MediaElement objects (me and me2). DateTime startTime; private void Play_Clicked(object sender, RoutedEventArgs e) { me.SetSource(new FileStream(file1, FileMode.Open)); me2.SetSource(new FileStream(file2, FileMode.Open)); var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(1) }; timer.Tick += RefreshData; timer.Start(); } The first file should be played at 00:00 sec. and the second at 00:02 sec. void RefreshData(object sender, EventArgs e) { if(me.CurrentState != MediaElementState.Playing) { startTime = DateTime.Now; me.Play(); return; } var elapsed = DateTime.Now - startTime; if(me2.CurrentState != MediaElementState.Playing && elapsed >= TimeSpan.FromSeconds(2)) { me2.Play(); ((DispatcherTimer)sender).Stop(); } } The tracks play differently every time, not simultaneously as they should (as one sound). Addition: I've tested the code from Bobby's answer. private void Play_Clicked(object sender, RoutedEventArgs e) { me.SetSource(new FileStream(file1, FileMode.Open)); me2.SetSource(new FileStream(file2, FileMode.Open)); // This code plays well enough. // me.Play(); // me2.Play(); // But adding the 2 second offset using the timer, // they do not play simultaneously. var timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(2) }; timer.Tick += (source, arg) => { me2.Play(); ((DispatcherTimer)source).Stop(); }; timer.Start(); } Is it possible to play them together using only one MediaElement or any implementation of MediaStreamSource that can play multiple sources?

    Read the article

  • Flash upload problems (FileReferenceList, timeouts, #2038)

    - by binaryLV
    Hello! I'm having problems with timeouts while trying to upload multiple files using FileReferenceList. upload() is called in a loop for all selected files on the Event.SELECT event of FileReferenceList, but only 2 files are uploaded simultaneously (I also see 2 open sockets used for uploading by running netstat -aon | find "127.0.0.1:80"). If the uploading of a file has not started within 60 seconds, I get a #2038 error. E.g., if I try to upload three large files (around 500MB each), the first two uploads start immediately, the third one is not started (because the limit is 2 simultaneous uploads), and it fails after 60 seconds with a #2038 error (I'm fairly sure this is because of the timeout - I tested it). This could be solved by calling upload() only when the uploading of the previous file has completed, but I don't want to hard-code the number of possible simultaneous uploads (2 on my PC). Is there any way to get/set this number at runtime?

    Read the article

  • PHP: Simulate multiple MySQL connections to same database

    - by Varun
    An INSERT query is constantly getting logged in my MySQL slow query log. I want to see how much time the INSERT query takes with 100 simultaneous insert operations (to the same table). So I need to simulate the following: 500 different simultaneous connections from PHP to the same database on a MySQL server, all of which are inserting a row (simultaneously) into the same table. Do I need to use a load testing tool, or can I simply write a PHP script to do this? Any thoughts?
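
    mysqlslap, which ships with the MySQL client tools, does exactly this kind of simulation, so a custom PHP harness may be unnecessary. A sketch (host, schema, table, and query are illustrative):

      mysqlslap --host=dbserver --user=app --password \
          --concurrency=100 --iterations=10 \
          --create-schema=appdb \
          --query="INSERT INTO t (col) VALUES ('x')" \
          --delimiter=";"

    It reports the average time per client batch, which can be compared directly against the slow-query-log timings.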

    Read the article

  • Examples of MMOs with small dev teams

    - by Aliud Alius
    I'd like to see a list of MMOs with small development teams in order to better understand where small teams have found a place for themselves. While examples of MMORPGs are of interest, so are games focusing on socializing, trade and commerce, city or empire building, crafting, exploration, strategy, and so on. Any shipping game supporting between, say, 800 and 10,000 simultaneous players belongs on this list. Thanks.

    Read the article
