Search Results

Search found 1122 results on 45 pages for 'concurrent'.

Page 37/45 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories and had people say "real programmers don't use VSS". BUT, in the workplace I've worked at two companies: one a very well known, public-facing, high-traffic website, and the other a high-end financial services "web-based" hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions.. OLD LIKE YOU WOULDN'T BELIEVE.. which leads me to the conclusion that if it works, "LEAVE IT", and that maybe there's some value in old technology? At least enough value to overrule a rewrite!? Right??

    Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks" they would beg to differ, most likely give me that you-don't-know look, and I'd never gain back their respect and my credibility (well, that'll be hard to blow.. lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, does not agree with any of the benefits of having source control integrated, and yet uses the infamous Visual SourceSafe.

    I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er, and from a purist standpoint I see the beauty in it (I need a bit more out of my source control, but it surely suffices). So why are such smarties not running away from Visual SourceSafe? Surely if it was so bad, it would have been realized by now, and I would not be sitting here with this simple old check-in, check-out, version-resistant, label-intensive system. But here I am... I would love to drop an end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS.

    UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from those until we (please no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS..

    UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sundowning Visual SourceSafe, it might be time to migrate over to TFS. I was able to convince them, and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless: I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years' worth of everything.. and now we're on TFS.. much more integrated.. much cooler.. So all in all, VSS served its purpose for years without a hiccup. There were no horror stories, and Visual SourceSafe as source control worked just fine. So to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS (it's really not super difficult, and a new "wizard"-type converter is due out any day now, so migrating should be painless). But from my experience, it worked just fine and got the job done.

    Read the article

  • BlockingCollection having issues with byte arrays

    - by MJLaukala
    I am having an issue where an object with a byte[20] is being passed into a BlockingCollection on one thread and another thread returning the object with a byte[0] using BlockingCollection.Take(). I think this is a threading issue but I do not know where or why this is happening considering that BlockingCollection is a concurrent collection. Sometimes on thread2, myclass2.mybytes equals byte[0]. Any information on how to fix this is greatly appreciated.

    MessageBuffer.cs:

        public class MessageBuffer : BlockingCollection<Message>
        {
        }

    In the class that has Listener() and ReceivedMessageHandler(object messageProcessor):

        private MessageBuffer RecievedMessageBuffer;

    On Thread1:

        private void Listener()
        {
            while (this.IsListening)
            {
                try
                {
                    Message message = Message.ReadMessage(this.Stream, this);
                    if (message != null)
                    {
                        this.RecievedMessageBuffer.Add(message);
                    }
                }
                catch (IOException ex)
                {
                    if (!this.Client.Connected)
                    {
                        this.OnDisconnected();
                    }
                    else
                    {
                        Logger.LogException(ex.ToString());
                        this.OnDisconnected();
                    }
                }
                catch (Exception ex)
                {
                    Logger.LogException(ex.ToString());
                    this.OnDisconnected();
                }
            }
        }

    Message.ReadMessage(NetworkStream stream, iTcpConnectClient client):

        public static Message ReadMessage(NetworkStream stream, iTcpConnectClient client)
        {
            int ClassType = -1;
            Message message = null;
            try
            {
                ClassType = stream.ReadByte();
                if (ClassType == -1)
                {
                    return null;
                }
                if (!Message.IDTOCLASS.ContainsKey((byte)ClassType))
                {
                    throw new IOException("Class type not found");
                }
                message = Message.GetNewMessage((byte)ClassType);
                message.Client = client;
                message.ReadData(stream);
                if (message.Buffer.Length < message.MessageSize + Message.HeaderSize)
                {
                    return null;
                }
            }
            catch (IOException ex)
            {
                Logger.LogException(ex.ToString());
                throw ex;
            }
            catch (Exception ex)
            {
                Logger.LogException(ex.ToString());
                //throw ex;
            }
            return message;
        }

    On Thread2:

        private void ReceivedMessageHandler(object messageProcessor)
        {
            if (messageProcessor != null)
            {
                while (this.IsListening)
                {
                    Message message = this.RecievedMessageBuffer.Take();
                    message.Reconstruct();
                    message.HandleMessage(messageProcessor);
                }
            }
            else
            {
                while (this.IsListening)
                {
                    Message message = this.RecievedMessageBuffer.Take();
                    message.Reconstruct();
                    message.HandleMessage();
                }
            }
        }

    PlayerStateMessage.cs:

        public class PlayerStateMessage : Message
        {
            public GameObject PlayerState;

            public override int MessageSize
            {
                get { return 12; }
            }

            public PlayerStateMessage() : base()
            {
                this.PlayerState = new GameObject();
            }

            public PlayerStateMessage(GameObject playerState)
            {
                this.PlayerState = playerState;
            }

            public override void Reconstruct()
            {
                this.PlayerState.Poisiton = this.GetVector2FromBuffer(0);
                this.PlayerState.Rotation = this.GetFloatFromBuffer(8);
                base.Reconstruct();
            }

            public override void Deconstruct()
            {
                this.CreateBuffer();
                this.AddToBuffer(this.PlayerState.Poisiton, 0);
                this.AddToBuffer(this.PlayerState.Rotation, 8);
                base.Deconstruct();
            }

            public override void HandleMessage(object messageProcessor)
            {
                ((MessageProcessor)messageProcessor).ProcessPlayerStateMessage(this);
            }
        }

    Message.GetVector2FromBuffer(int bufferlocation) - this is where the exception is thrown, because this.Buffer is byte[0] when it should be byte[20]:

        public Vector2 GetVector2FromBuffer(int bufferlocation)
        {
            return new Vector2(
                BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation),
                BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation + 4));
        }
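    For what it's worth, BlockingCollection itself never copies or touches the items it stores; Add and Take hand over the same object reference. A standalone probe like the sketch below (Msg is a hypothetical stand-in for Message) can confirm that: if it never reports corruption, the byte[0] is being introduced on the producer side, e.g. a Message whose Buffer is reassigned or reused after it has been queued.

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class Probe
        {
            class Msg { public byte[] Buffer; }

            static void Main()
            {
                var queue = new BlockingCollection<Msg>();

                var producer = Task.Run(() =>
                {
                    for (int i = 0; i < 100000; i++)
                        queue.Add(new Msg { Buffer = new byte[20] });
                    queue.CompleteAdding();
                });

                var consumer = Task.Run(() =>
                {
                    // GetConsumingEnumerable() keeps taking until CompleteAdding()
                    foreach (var msg in queue.GetConsumingEnumerable())
                        if (msg.Buffer.Length != 20)
                            Console.WriteLine("corrupted: byte[" + msg.Buffer.Length + "]");
                });

                Task.WaitAll(producer, consumer);
                Console.WriteLine("done");
            }
        }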

    Read the article

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all, let me explain what I would like to do: I already have a long-running webapp developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text + pics). The app runs in production on Windows Server 2003 and has a dedicated database server (SQL Server 2008), also running Windows Server 2003. I myself am a quite experienced web developer, but have never actually programmed anything non-web (or at least nothing serious).

    I plan on adding a feature to the webapp for which I would need a jpg snapshot of each page of the PDF. Creating these "thumbnails" isn't the big deal as such; I already do it inside my webapp using ghostscript. I've only done it on 1-page documents so far, though, and the new feature will need to process bigger documents. In order for this process to be transparent for the admins as well as the end users, I would like to implement some kind of queue to delay the processing of the thumbnails. There again, no problem creating the queue: it will consist of records in a table, with enough info to find the PDF document again. Then I will need to process this queue, and that's where my questions start. (A sketch of the approach I have in mind follows the list.)

    1. What should I develop? I presumably need something that sits on standby on the server, runs when needed, then returns to an idle state until further notice. Should I be looking into a Windows service? Is there another, more appropriate type of project?

    2. Depending on the first answer, what should the approach be? Should SQL Server somehow "tell" the program/service to process the queue, or should that program/service periodically check the state of the queue and treat new items? In both cases, which functionality can I use? We're not talking about hundreds of PDFs a day (max 50, maybe), so I can totally afford to treat the queue one item at a time. Can you confirm I don't have to look much further into threads and the like? (I found a lot of answers talking about threads in queue processing, but it looks quite overkill for my needs.)

    3. Maybe linked to the previous question: what about concurrent calls to the program, whatever it is? Suppose it is currently running and a new record comes into the queue: what should the behaviour be?

    I don't need very detailed answers and would already be happy with answers like "You can do the processing with a service, and yes it's possible to have SQL Server on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles and so on, but I can hardly afford to spend too much time learning stuff only to realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of matters I need to investigate. Thanks for reading; I hope I'll find some helping hands on here :-)
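    For what it's worth, here is a minimal sketch of the "periodically check the queue" shape as it might look inside a Windows service or console worker. Everything here is hypothetical (a PdfQueue table with Id and Status columns, one item claimed per pass); it is only meant to show how small the moving parts are:

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        class PdfQueueWorker
        {
            const string ConnStr = "...";  // connection string placeholder
            static readonly TimeSpan PollInterval = TimeSpan.FromMinutes(1);

            static void Main()
            {
                while (true)  // a real service would exit on a stop signal
                {
                    int? jobId = TryClaimNextJob();
                    if (jobId == null)
                        Thread.Sleep(PollInterval);  // queue empty: idle until next poll
                    else
                        ProcessPdf(jobId.Value);     // ghostscript page-to-jpg work goes here
                }
            }

            // Claim the next pending item, or return null if the queue is empty.
            static int? TryClaimNextJob()
            {
                using (var con = new SqlConnection(ConnStr))
                using (var cmd = new SqlCommand(
                    @"UPDATE TOP (1) PdfQueue SET Status = 'Processing'
                      OUTPUT inserted.Id
                      WHERE Status = 'Pending'", con))
                {
                    con.Open();
                    object id = cmd.ExecuteScalar();  // null when no pending rows
                    return id == null ? (int?)null : (int)id;
                }
            }

            static void ProcessPdf(int jobId)
            {
                // look up the PDF by jobId, render each page, mark the row 'Done'
            }
        }

    The single atomic UPDATE ... OUTPUT claim is also what makes the concurrency question harmless: two instances polling at once cannot pick up the same row, so overlapping runs simply share the work.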

    Read the article

  • asp.net mvc, IIS 6 vs IIS7.5, and integrated windows authentication causing javascript errors?

    - by chris
    This is a very strange one. I have an ASP.NET MVC 1 app. Under IIS 6, with no anonymous access (only integrated Windows auth), everything works fine. I have the following on most of my Foo pages:

        <% using (Html.BeginForm()) { %>
            Show All: <%= Html.CheckBox("showAll",
                new { onClick = "$(this).parent('form:first').submit();" }) %>
        <% } %>

    Clicking on the checkbox causes a post, the page is reloaded, everything is good. When I look at the access logs, that's what I see, with one oddity: the js libraries are requested during the first page request, but not for any subsequent page requests. The log looks like:

        GET  /                                       401
        GET  /                                       200
        GET  /Content/Site.css                       304
        GET  /Scripts/jquery-1.3.2.min.js            401
        GET  /Scripts/jquery-ui-1.7.2.custom.min.js  401
        GET  /Scripts/jquery.tablesorter.min.js      401
        GET  /Scripts/jquery-1.3.2.min.js            304
        GET  /Scripts/jquery-ui-1.7.2.custom.min.js  304
        GET  /Scripts/jquery.tablesorter.min.js      304
        GET  /Content/Images/logo.jpg                401
        GET  /Content/Images/logo.jpg                304
        GET  /Foo                                    401
        GET  /Foo                                    200
        POST /Foo/Delete                             302
        GET  /Foo/List                               200
        POST /Foo/List                               200

    This corresponds to: home page, click on "Foo", delete a record, click a checkbox (which causes the 2nd POST). Under IIS 7.5 it sometimes fails: the click on the checkbox doesn't cause a postback, with no obvious reason why. I've noticed under IIS 7.5 that every single page request re-issues the requests for the js libraries, the first one a 401, followed by either a 200 (OK) or a 304 (not modified), as opposed to the above log extract, where that only happened during the first request. Is there any way to eliminate the 401 requests? Could a timing issue have something to do with the click being ignored? Would increasing the number of concurrent connections help? Any other ideas? I'm at a bit of a loss to explain this.
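    One incidental detail worth checking along the way (an observation, not necessarily the cause of the IIS 7.5 behaviour): jQuery's parent(selector) only ever tests the immediate parent element, so the inline submit silently does nothing if the rendered markup ever puts another element between the checkbox and the form. Walking up the tree with closest() is more defensive; a sketch of the same helper call:

        <% using (Html.BeginForm()) { %>
            Show All: <%= Html.CheckBox("showAll",
                new { onClick = "$(this).closest('form').submit();" }) %>
        <% } %>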

    Read the article

  • How do I install LFE on Ubuntu Karmic?

    - by karlthorwald
    Erlang was already installed:

        $ dpkg -l | grep erlang
        ii  erlang              1:13.b.3-dfsg-2ubuntu2  Concurrent, real-time, distributed function
        ii  erlang-appmon       1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application monitor
        ii  erlang-asn1         1:13.b.3-dfsg-2ubuntu2  Erlang/OTP modules for ASN.1 support
        ii  erlang-base         1:13.b.3-dfsg-2ubuntu2  Erlang/OTP virtual machine and base applica
        ii  erlang-common-test  1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application for automated testin
        ii  erlang-debugger     1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application for debugging and te
        ii  erlang-dev          1:13.b.3-dfsg-2ubuntu2  Erlang/OTP development libraries and header
        [... many more]

    Erlang seems to work:

        $ erl
        Erlang R13B03 (erts-5.7.4) [source] [64-bit] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false]

        Eshell V5.7.4  (abort with ^G)
        1>

    I downloaded lfe from github and checked out 0.5.2:

        git clone http://github.com/rvirding/lfe.git
        cd lfe
        git checkout -b local0.5.2 e207eb2cad

        $ configure
        configure: command not found

        $ make
        mkdir -p ebin
        erlc -I include -o ebin -W0 -Ddebug +debug_info src/*.erl
        #erl -I -pa ebin -noshell -eval -noshell -run edoc file src/leex.erl -run init stop
        #erl -I -pa ebin -noshell -eval -noshell -run edoc_run application "'Leex'" '"."' '[no_packages]'
        #mv src/*.html doc/

    Must be something stupid i missed :o

        $ sudo make install
        make: *** No rule to make target `install'.  Stop.

        $ erl -noshell -noinput -s lfe_boot start
        {"init terminating in do_boot",{undef,[{lfe_boot,start,[]},{init,start_it,1},{init,start_em,1}]}}
        Crash dump was written to: erl_crash.dump
        init terminating in do_boot ()

    Is there an example how I would create a hello world source file and compile and run it?

    Read the article

  • In Java Concurrency In Practice by Brian Goetz, why is the Memoizer class not annotated with @ThreadSafe?

    - by dig_dug
    Java Concurrency In Practice by Brian Goetz provides an example of an efficient, scalable cache for concurrent use. The final version of the example, showing the implementation of class Memoizer (pg 108), shows such a cache. I am wondering why the class is not annotated with @ThreadSafe. The client of the cache, class Factorizer, is properly annotated with @ThreadSafe. The appendix states that if a class is not annotated with either @ThreadSafe or @Immutable, it should be assumed that it isn't thread-safe. Memoizer seems thread-safe, though. Here is the code for Memoizer:

        public class Memoizer<A, V> implements Computable<A, V> {
            private final ConcurrentMap<A, Future<V>> cache
                = new ConcurrentHashMap<A, Future<V>>();
            private final Computable<A, V> c;

            public Memoizer(Computable<A, V> c) {
                this.c = c;
            }

            public V compute(final A arg) throws InterruptedException {
                while (true) {
                    Future<V> f = cache.get(arg);
                    if (f == null) {
                        Callable<V> eval = new Callable<V>() {
                            public V call() throws InterruptedException {
                                return c.compute(arg);
                            }
                        };
                        FutureTask<V> ft = new FutureTask<V>(eval);
                        f = cache.putIfAbsent(arg, ft);
                        if (f == null) {
                            f = ft;
                            ft.run();
                        }
                    }
                    try {
                        return f.get();
                    } catch (CancellationException e) {
                        cache.remove(arg, f);
                    } catch (ExecutionException e) {
                        throw launderThrowable(e.getCause());
                    }
                }
            }
        }

    Read the article

  • Binding from View-Model to View-Model of a child User Control in Silverlight? 2 sources - 1 target..

    - by andrej351
    Hi there. So I have a UserControl for one of my views, and another "child" UserControl inside it. The outer "parent" UserControl has a collection on its view-model and a grid control on it to display a list of items. I want to place another UserControl inside this UserControl to display a form representing the details of one item.

    The outer/parent UserControl's view-model already has a property on it to hold the currently selected item, and I would like to bind this to a DependencyProperty on the inner/child UserControl. I would then like to bind that DependencyProperty to a property on the child UserControl's view-model. I could then set the DependencyProperty once in XAML with a binding expression and have the child UserControl do all its work in its view-model, like it should. The code I have looks like this.

    Parent UserControl:

        <UserControl x:Class="ItemsListView"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            DataContext="{Binding Source={StaticResource ServiceLocator}, Path=ItemsListViewModel}">

            <!-- Grid Control here... -->

            <ItemDetailsView Item="{Binding Source={StaticResource ServiceLocator}, Path=ItemsListViewModel.SelectedItem}" />
        </UserControl>

    Child UserControl:

        <UserControl x:Class="ItemDetailsView"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            DataContext="{Binding Source={StaticResource ServiceLocator}, Path=ItemDetailsViewModel}"
            ItemDetailsView.Item="{Binding Source={StaticResource ServiceLocator}, Path=ItemDetailsViewModel.Item, Mode=TwoWay}">

            <!-- Form controls here... -->
        </UserControl>

    The selected item is bound to the DependencyProperty fine; however, the binding from the DependencyProperty to the child view-model does not work. It appears to be a situation where there are two concurrent bindings which both need to work, but with the same target for two sources. Why won't the second binding (in the child UserControl) work? Is there a way to achieve the behaviour I'm after? Cheers.
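    One way to reason about it: the Item property ends up with two bindings assigned to it (one from the parent's XAML, one from the child's own root element), and a dependency property only keeps one binding at a time, so whichever is applied last replaces the other. A common workaround is to drop the second binding and instead forward the value into the view-model from the property's change callback. A sketch, with the view-model type and its Item property assumed:

        public partial class ItemDetailsView : UserControl
        {
            public static readonly DependencyProperty ItemProperty =
                DependencyProperty.Register(
                    "Item", typeof(object), typeof(ItemDetailsView),
                    new PropertyMetadata(null, OnItemChanged));

            public object Item
            {
                get { return GetValue(ItemProperty); }
                set { SetValue(ItemProperty, value); }
            }

            private static void OnItemChanged(DependencyObject d,
                                              DependencyPropertyChangedEventArgs e)
            {
                var view = (ItemDetailsView)d;
                var vm = view.DataContext as ItemDetailsViewModel;  // hypothetical VM type
                if (vm != null)
                    vm.Item = e.NewValue;  // push the new value into the view-model
            }
        }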

    Read the article

  • Comet and Simultaneous Ajax request

    - by Amitd
    Hi, I am trying to use a COMET solution in ASP.NET. The trouble is I want to implement the sending and the notification part in the same page. On IE7, whenever I try to send a request, it just gets queued up. After reading on the internet and on Stack Overflow, I found that I can only make 2 simultaneous async AJAX requests per page. So until I close my Comet AJAX request, my 2nd request doesn't get completed; it doesn't even go out from the browser. And when I checked with Firefox, I see just one Comet AJAX request running all the time... so doesn't that leave me one more AJAX request? Also, the solution uses IRequiresSessionState for the asynchronous HTTP handler, which I had removed, but it still creates problems with multiple instances of IE7.

    I found one workaround, which is described here: http://support.microsoft.com/kb/282402. It means we can increase the request limit in the registry (the default is 2): by changing the "MaxConnectionsPer1_0Server" key in the hive "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" we can increase the number of requests.

    Basically, I want to broadcast information to multiple clients connected to a server using Comet, and the clients should also be able to send messages to the server. Broadcasting works, but the send request back to the server doesn't. I'm using IIS 6 and ASP.NET. Are there any more workarounds, or other ways to send more requests?

    References:
    http://stackoverflow.com/questions/561046/how-many-concurrent-ajax-xmlhttprequest-requests-are-allowed-in-popular-browser
    http://stackoverflow.com/questions/349381/ajax-php-sessions-and-simultaneous-requests
    http://stackoverflow.com/questions/2412807/jquery-ajax-request-blocked-by-long-running-ajax-request
    http://stackoverflow.com/questions/898190/jquery-making-simultaneous-ajax-requests-is-it-possible
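    Since the KB 282402 workaround comes up above: if that registry change ever needs to be rolled out programmatically (say, from an installer on controlled client machines), here is a sketch of the same edit in C#. The value names are the documented ones from the KB article; the limit of 8 is an arbitrary choice:

        using Microsoft.Win32;

        class RaiseIeConnectionLimit
        {
            static void Main()
            {
                using (var key = Registry.CurrentUser.CreateSubKey(
                    @"Software\Microsoft\Windows\CurrentVersion\Internet Settings"))
                {
                    // per-server connection limits for HTTP/1.1 and HTTP/1.0
                    key.SetValue("MaxConnectionsPerServer", 8, RegistryValueKind.DWord);
                    key.SetValue("MaxConnectionsPer1_0Server", 8, RegistryValueKind.DWord);
                }
            }
        }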

    Read the article

  • consistency of Trigger Procedure (before row trigger) Postgresql

    - by elgcom
    Using PostgreSQL, I'm trying to use a TRIGGER procedure to make some consistency checks on INSERT. The question is whether "BEFORE INSERT FOR EACH ROW" can make sure each row to insert is "checked" and "inserted" one after another (check for new row1 - insert row1 - check for new row2 - insert row2), or whether I need an extra lock on the table to survive concurrent inserts.

        -- unexpired product name is unique.
        CREATE TABLE product (
            "name"    VARCHAR(100) NOT NULL,
            "expired" BOOLEAN NOT NULL
        );

        CREATE OR REPLACE FUNCTION check_consistency() RETURNS TRIGGER AS $$
        BEGIN
            IF EXISTS (SELECT * FROM product WHERE name = NEW.name AND expired = 'false') THEN
                RAISE EXCEPTION 'duplicated!!!';
            END IF;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER trigger_check_consistency
            BEFORE INSERT ON product
            FOR EACH ROW EXECUTE PROCEDURE check_consistency();

        INSERT INTO product VALUES ('prod1', true);
        INSERT INTO product VALUES ('prod1', false);
        INSERT INTO product VALUES ('prod1', false); -- exception!

    This is OK:

        name | expired
        -----+--------
        p1   | true
        p1   | true
        p1   | false

    This is not OK:

        name | expired
        -----+--------
        p1   | true
        p1   | false
        p1   | false

    Or maybe I should ask: how can I use a trigger to implement a "PRIMARY KEY"- or "UNIQUE"-like constraint in SQL?

    Read the article

  • Custom HTTPHandler causing caching or session issues?

    - by Jan de Jager
    So I have a custom CMS running under .NET 3.5, written entirely in C#. The engine is optimized to render for mobile devices, but also serves normal web browsers. It also supports cookieless sessions. Great...

    I've chosen not to cache anything (including browser data) in order to control the rendering completely from data. This has all been good until lately. The engine implements a basic login function that simply logs the user state within a session object. The behavior is rather strange: a user will click through the site no problem, then log in. The login will either go through successfully or just redisplay the login screen, suggesting a cached page being returned or redisplayed... If the login is successful, subsequent page hits will switch arbitrarily between the logged-in and logged-out state, again suggesting either that the session state is not accessible or that a cached page is being returned.

    I have debugged the hell out of the thing, including using Fiddler and the like. When debugging, the behavior disappears. Huh? One of the sites running on the engine is http://www.wiseguy.mobi (sorry, customized for South Africa, so you'll probably not be able to get the password text message)!
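    Two usual suspects with hand-rolled handlers fit these symptoms, so they may be worth ruling out first. A sketch of both in one place, with CmsHandler as a hypothetical stand-in for the engine's entry point: a custom IHttpHandler only sees session state if it implements the IRequiresSessionState marker interface (otherwise context.Session is null), and per-user output should explicitly opt out of downstream caching even when the app itself "caches nothing":

        using System.Web;
        using System.Web.SessionState;

        public class CmsHandler : IHttpHandler, IRequiresSessionState  // marker: give me session state
        {
            public bool IsReusable
            {
                get { return false; }
            }

            public void ProcessRequest(HttpContext context)
            {
                bool loggedIn = context.Session["user"] != null;  // safe: session is available here

                // stop browsers/proxies from replaying a per-user page to someone else
                context.Response.Cache.SetCacheability(HttpCacheability.NoCache);

                context.Response.Write(loggedIn ? "render logged-in view" : "render login form");
            }
        }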

    Read the article

  • What are good NoSQL and non-relational database solutions for audit/logging database

    - by Juha Syrjälä
    What would be a suitable database for the following? I am especially interested in your experiences with non-relational NoSQL systems. Are they any good for this kind of usage? Which system have you used and would you recommend, or should I go with a normal relational database (DB2)?

    I need to gather audit-trail/logging type information from a bunch of sources to a centralized server, where I could generate reports efficiently and examine what is happening in the system. Typically an audit/logging event would always consist of some mandatory fields, for example:

    - globally unique id (somehow generated by the program that generated this event)
    - timestamp
    - event type (i.e. user logged in, error happened, etc.)
    - some information about the source (server1, server2)

    Additionally, the event could contain 0-N key-value pairs, where a value might be up to a few kilobytes of text. Requirements:

    - It must run on a Linux server.
    - It should work with a high amount of data (100 GB, for example).
    - It should support some kind of efficient full-text search.
    - It should allow concurrent reading and writing.
    - It should be flexible about adding new event types and adding/removing key-value pairs in new events. Flexible = no changes should be required to the database schema; the application generating the events can just add new event types/new fields as needed.
    - It should be efficient to make queries against the database, for reporting and exploring what happened. For example: how many events with type=X occurred in some time period? Get all events where field A has value Y. Get all events with type X where field A has value 1, field B is not 2, and the event occurred in the last 24h.

    Read the article

  • Paypal subscriptions IPN - problem with users subscribing multiple times

    - by Brian Armstrong
    I'm using PayPal subscriptions and instant payment notification (IPN) to handle subscribers on my site. For the most part it works well, but there is one occasional problem I've encountered. Usually, if a user cancels their subscription, I wait for the "end of term" (subscr_eot) notification before disabling access to my site. So if they prepay for the whole month and then cancel right away, they still have access for the rest of the month (as it should be). But some users are having this problem, where they:

    1. Cancel their subscription.
    2. Before the "end of term" is reached, decide to re-subscribe.
    3. When the "end of term" is reached for their first subscription, my app receives the notification and fires off an email to the user with something like "your account has been disabled; if you ever want to sign up again, you can re-subscribe by clicking here".

    This confuses them, because they are thinking "that's weird, I thought I subscribed like a week ago" (and they did). So they go subscribe AGAIN. Now they have two concurrent running subscriptions to my site, and I get a support email in a month or two ("wtf you billed me twice this month jerk!!").

    So I haven't found a good way to fix this. I guess the best solution would be to do an additional API call when the "end of term" notification is received which asks PayPal "hey, did this person already re-subscribe?". If so, there's no need to fire off that email. But I haven't seen any way to do this API call yet. Another solution is to disable their account immediately when they cancel (the "subscr_cancel" notification), but then I get different angry support emails ("hey, I prepaid for the whole month, why was my account disabled already!!"). Anyone else solved this?
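    For what it's worth, the "did this person already re-subscribe?" check doesn't have to be a PayPal API call: every re-subscription also arrives as its own subscr_signup IPN, so the answer can come from the app's own subscription records. A sketch of an end-of-term handler along those lines; Subscriptions, Accounts, and Mailer are hypothetical stand-ins for whatever persistence the site already has:

        // Called when a subscr_eot IPN arrives for a given subscription.
        void OnEndOfTerm(string payerId, string endedSubscriptionId)
        {
            // Did a later subscr_signup IPN create another live subscription
            // for this payer? (hypothetical lookup)
            bool reSubscribed = Subscriptions.HasActiveOtherThan(payerId, endedSubscriptionId);

            if (reSubscribed)
                return;  // the old term simply expired; no email, access continues

            Accounts.Disable(payerId);
            Mailer.SendReSubscribeInvitation(payerId);  // safe: they really are gone
        }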

    Read the article

  • How should I handle persistence in a Java MUD? OptimisticLockException handling

    - by Chase
    I'm re-implementing an old BBS MUD game in Java, with permission from the original developers. Currently I'm using Java EE 6, with EJB session facades for the game logic and JPA for the persistence. A big reason I picked session beans is JTA. I'm more experienced with web apps, in which, if you get an OptimisticLockException, you just catch it and tell the user their data is stale and they need to re-apply/re-submit. Responding with "try again" all the time in a multi-user game would make for a horrible experience. Given that I'd expect several people to be targeting a single monster during a fight, I think the chance of an OptimisticLockException would be high.

    My view code, the part presenting a telnet CLI, is the EJB client. Should I be catching the PersistenceExceptions and TransactionRolledbackLocalExceptions and just retrying? How do you decide when to stop? Should I switch to pessimistic locking? Is persisting after every user command overkill? Should I be loading the entire world into RAM and dumping the state every couple of minutes? Should I make my session facade an EJB 3.1 singleton, which would function as a choke point and therefore eliminate the need to do any type of JPA locking? EJB 3.1 singletons function with a multiple-reader/single-writer design (you annotate the methods as readers and writers).

    Basically, what is the best design and Java persistence API for highly concurrent data changes in an application where it is not acceptable to present resubmit/retry prompts to the user?

    Read the article

  • What off-the-shelf licensing system will meet my needs?

    - by Anders Pedersen
    I'm looking for an off-the-shelf licensing system for desktop software. After some research on the net, and of course here on Stack Overflow, I haven't found one that suits our needs. I have a couple of must-have features and some nice-to-have features.

    Must have:

    - Encrypted unlock key
    - Possibility to automate the unlock key generation on my website
    - User info in the key, so that I can show name and company in an about box and perhaps in reports

    Nice to have:

    - License managing tools
    - Online activation
    - Nice upgrade possibilities to a version with a concurrent-license model and a subscription model

    I have looked at Manco, but I find them difficult to work with and the documentation is minimal. Further, I couldn't get the name into the key. Also, the automatic generation of a key on my website has to be done with an application web service, but I would rather program against a DLL. Next I looked at Xheo. It is easier to use and the documentation is better, but the price is substantially higher, and here you can only get the user name in the license file, which you then have to provide together with the unlock key. Could anyone share their experiences on what you are using and how it is working for you?

    Read the article

  • Simulating Google Appengine's Task Queue with Gearman

    - by sotangochips
    One of the characteristics I love most about Google's Task Queue is its simplicity. More specifically, I love that it takes a URL and some parameters and then posts to that URL when the task queue is ready to execute the task. This structure means that the tasks are always executing the most current version of the code. Conversely, my Gearman workers all run code within my Django project, so when I push a new version live, I have to kill off the old worker and run a new one so that it uses the current version of the code.

    My goal is to have the task queue be independent from the code base, so that I can push a new live version without restarting any workers. So I got to thinking: why not make tasks executable by URL, just like the Google App Engine task queue? The process would work like this:

    1. A user request comes in and triggers a few tasks that shouldn't be blocking.
    2. Each task has a unique URL, so I enqueue a Gearman task to POST to the specified URL.
    3. The Gearman server finds a worker and passes the URL and POST data to the worker.
    4. The worker simply posts to the URL with the data, thus executing the task.

    Assume the following (a sketch of the first assumption follows below):

    - Each request from a Gearman worker is signed somehow, so that we know it's coming from a Gearman server and not a malicious request.
    - Tasks are limited to run in less than 10 seconds (there would be no long tasks that could time out).

    What are the potential pitfalls of such an approach? Here's one that worries me: the server can potentially get hammered with many requests all at once that are triggered by a previous request. So one user request might entail 10 concurrent HTTP requests. I suppose I could have a single worker with a sleep before every request in order to rate-limit. Any thoughts?
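    On the "signed somehow" assumption: a common shape for that signature is an HMAC over the POST body using a secret shared between the enqueuing code and the task URLs, sent as a header that the task endpoint recomputes and compares. A sketch of the signing side (shown in C# purely for illustration; the header name is made up, and the same few lines translate directly to Python's hmac module for a Django endpoint):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class TaskSigner
        {
            // Returns the value to send in, e.g., an "X-Task-Signature" header.
            public static string Sign(string postBody, string sharedSecret)
            {
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sharedSecret)))
                {
                    byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(postBody));
                    return Convert.ToBase64String(mac);
                }
            }
        }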

    Read the article

  • Nested multithread operations tracing

    - by Sinix
    I have code like this:

        void ExecuteTraced(Action a, string message)
        {
            TraceOpStart(message);
            a();
            TraceOpEnd(message);
        }

    The callback (a) could call ExecuteTraced again, and in some cases asynchronously (via ThreadPool, BeginInvoke, PLINQ, etc.), so I have no ability to explicitly mark the operation scope. I want all nested operations traced (even if they are performed asynchronously). So I need the ability to get the last traced operation inside the logical call context (there may be a lot of concurrent threads, so it's impossible to use a lastTraced static field).

    There are CallContext.LogicalGetData and CallContext.LogicalSetData, but unfortunately LogicalCallContext propagates changes back to the parent context when EndInvoke() is called. Even worse, this may occur at any moment if EndInvoke() was called asynchronously: http://stackoverflow.com/questions/883486/endinvoke-changes-current-callcontext-why

    Also, there is Trace.CorrelationManager, but it is based on CallContext and so has all the same troubles.

    There's a workaround: use the CallContext.HostContext property, which does not propagate back as the async operation ends. Also, it doesn't clone, so the value should be immutable - not a problem. However, it's used by HttpContext, and so the workaround is not usable in ASP.NET apps.

    The only way I see is to wrap HostContext (if not mine) or the entire LogicalCallContext into a dynamic and dispatch all calls besides the last traced operation. Help, please!
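    To make the HostContext workaround described above concrete, here is a sketch (usable outside ASP.NET only, for the reason given): the stored value is an immutable linked list of operation names, so it never needs cloning, and a finally block restores the enclosing scope:

        using System;
        using System.Runtime.Remoting.Messaging;

        static class TracedExecutor
        {
            // Immutable scope node: safe to share across threads, never mutated in place.
            private sealed class Scope
            {
                public readonly string Message;
                public readonly Scope Parent;

                public Scope(string message, Scope parent)
                {
                    Message = message;
                    Parent = parent;
                }

                public override string ToString()
                {
                    return Parent == null ? Message : Parent + " > " + Message;
                }
            }

            public static void ExecuteTraced(Action a, string message)
            {
                var parent = CallContext.HostContext as Scope;
                var current = new Scope(message, parent);
                CallContext.HostContext = current;     // flows into nested/async calls
                try
                {
                    Console.WriteLine("start: " + current);  // TraceOpStart stand-in
                    a();
                    Console.WriteLine("end:   " + current);  // TraceOpEnd stand-in
                }
                finally
                {
                    CallContext.HostContext = parent;  // restore the enclosing scope
                }
            }
        }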

    Read the article

  • Locking a table for getting MAX in LINQ

    - by Hossein Margani
    Hi everyone! I have a query in LINQ. I want to get the MAX of Code in my table, increase it, and insert a new record with the new Code, just like the IDENTITY feature of SQL Server, except that here my Code column is char(5), which can hold alphabetic and numeric characters. My problem is that when inserting a new row, two concurrent processes get the same max and insert an equal Code in their records. My command is:

        var maxCode = db.Customers.Select(c => c.Code).Max();
        var anotherCustomer = db.Customers.Where(...).SingleOrDefault();
        anotherCustomer.Code = GenerateNextCode(maxCode);
        db.SubmitChanges();

    I ran this command across 1000 threads, each updating 200 customers, and used a transaction with IsolationLevel.Serializable:

        using (var db = new DBModelDataContext())
        {
            DbTransaction tran = null;
            try
            {
                db.Connection.Open();
                tran = db.Connection.BeginTransaction(IsolationLevel.Serializable);
                db.Transaction = tran;
                .
                .
                .
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
            }
            finally
            {
                db.Connection.Close();
            }
        }

    After two or three executions, an error occurred:

        Transaction (Process ID 60) was deadlocked on lock resources with another process and
        has been chosen as the deadlock victim. Rerun the transaction.

    Other isolation levels generate this error:

        Row not found or changed.

    Please help me, thank you.
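    For what it's worth, one standard way to attack both errors is to make the MAX read itself take an update lock, so two concurrent writers serialize at the read instead of deadlocking (or colliding) later. A sketch using LINQ to SQL's raw-SQL escape hatch, run inside the same transaction as the update (table and column names as in the question):

        // With UPDLOCK + HOLDLOCK, only one transaction at a time can read the MAX,
        // so the subsequent GenerateNextCode/SubmitChanges cannot produce duplicates.
        var maxCode = db.ExecuteQuery<string>(
            "SELECT MAX(Code) FROM Customers WITH (UPDLOCK, HOLDLOCK)").Single();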

    Read the article

  • Synchronization of Nested Data Structures between Threads in Java

    - by Dominik
    I have a cache implementation like this:

        class X {
            private final Map<String, ConcurrentMap<String, String>> structure = new HashMap...();

            public String getValue(String context, String id) {
                // just assume for this example that there will be always an inner map
                final ConcurrentMap<String, String> innerStructure = structure.get(context);
                String value = innerStructure.get(id);
                if (value == null) {
                    synchronized (structure) {
                        // can I be sure, that this inner map will represent the last updated
                        // state from any thread?
                        value = innerStructure.get(id);
                        if (value == null) {
                            value = getValueFromSomeSlowSource(id);
                            innerStructure.put(id, value);
                        }
                    }
                }
                return value;
            }
        }

    Is this implementation thread-safe? Can I be sure to get the last updated state from any thread inside the synchronized block? Would this behaviour change if I use a java.util.concurrent.ReentrantLock instead of a synchronized block, like this:

        ...
        if (lock.tryLock(3, SECONDS)) {
            try {
                value = innerStructure.get(id);
                if (value == null) {
                    value = getValueFromSomeSlowSource(id);
                    innerStructure.put(id, value);
                }
            } finally {
                lock.unlock();
            }
        }
        ...

    I know that final instance members are synchronized between threads, but is this also true for the objects held by these members? Maybe this is a dumb question, but I don't know how to test it to be sure that it works on every OS and every architecture.

    Read the article

  • Switching from Sourcesafe - What to look for in a product

    - by asp316
    We're looking to move off of SourceSafe and on to a more robust source control system for our .NET apps. We're also looking for scripted/automated deployments. I'm a .NET developer (web and WinForms). However, most of our development staff are RPG developers for the IBM iSeries, and those devs use Aldon's LMI for source control and deployment. Our manager would prefer to stick with Aldon so all of our products are in the same system. However, I don't have experience with Aldon's products on the .NET side. I've used TFS and Subversion with Tortoise a bit, but not enough to recommend one or the other, especially in comparison to Aldon's product.

    Does anybody have experience with Aldon's products? If so, thoughts please? Also, other than the obvious things source control systems do, are there things I should avoid, or must-haves? I'm open to any system. A bit of background: I'm the only .NET dev in our company, but I let operations do the deployments. I do want the ability to support concurrent checkouts if we hire a new dev.

    Read the article

  • Lightweight spinlocks built from GCC atomic operations?

    - by Thomas
    I'd like to minimize synchronization and write lock-free code when possible in a project of mine. When absolutely necessary I'd love to substitute light-weight spinlocks built from atomic operations for pthread and win32 mutex locks. My understanding is that these are system calls underneath and could cause a context switch (which may be unnecessary for very quick critical sections where simply spinning a few times would be preferable). The atomic operations I'm referring to are well documented here: http://gcc.gnu.org/onlinedocs/gcc-4.4.1/gcc/Atomic-Builtins.html

    Here is an example to illustrate what I'm talking about. Imagine an RB-tree with multiple readers and writers possible. RBTree::exists() is read-only and thread safe, RBTree::insert() would require exclusive access by a single writer (and no readers) to be safe. Some code:

        class IntSetTest
        {
        private:
            unsigned short lock;
            RBTree<int>* myset;

        public:
            // ...

            void add_number(int n)
            {
                // Aquire once locked==false (atomic)
                while (__sync_bool_compare_and_swap(&lock, 0, 0xffff) == false);

                // Perform a thread-unsafe operation on the set
                myset->insert(n);

                // Unlock (atomic)
                __sync_bool_compare_and_swap(&lock, 0xffff, 0);
            }

            bool check_number(int n)
            {
                // Increment once the lock is below 0xffff
                u16 savedlock = lock;
                while (savedlock == 0xffff ||
                       __sync_bool_compare_and_swap(&lock, savedlock, savedlock + 1) == false)
                    savedlock = lock;

                // Perform read-only operation
                bool exists = tree->exists(n);

                // Decrement
                savedlock = lock;
                while (__sync_bool_compare_and_swap(&lock, savedlock, savedlock - 1) == false)
                    savedlock = lock;

                return exists;
            }
        };

    (lets assume it need not be exception-safe) Is this code indeed thread-safe? Are there any pros/cons to this idea? Any advice? Is the use of spinlocks like this a bad idea if the threads are not truly concurrent? Thanks in advance. ;)

    Read the article

  • Hooking the http/https protocol in IE causes GET requests to be sequential

    - by watsonmw
    I'm using the PassthruAPP method to hook into HTTP/HTTPS requests made by IE. It's working well for the most part; however, I noticed a problem: only one download thread is active at a time. I can see two IInternetProtocol objects getting created, but IE uses only one at a time. This is happening with IE7.

    The odd thing is that the problem occurs when overriding the existing default HTTP/HTTPS handler, even if the handler is not the one being used to make the request. E.g. registering a handler for the HTTPS protocol will cause HTTP requests to be made sequentially, even though HTTP requests are not hooked. I installed Google Gears, and it has the same problem. This always happens for the first few items on the page, but it seems that after document-complete is issued, concurrent downloads can occur again. For example, JavaScript code that is executed after the page has finished loading can load images concurrently just fine.

    One option is to try to IAT-patch the IInternetProtocol registered for HTTP requests, but Google Gears does this already and it has the same problem. I know installing an HTTP proxy is another option, but I don't want to monkey with the users' HTTP proxy settings if there is another option.

    Read the article

  • A copy of ApplicationController has been removed from the module tree but is still active

    - by Matchu
    Whenever two concurrent HTTP requests go to my Rails app, the second always returns the following error:

        A copy of ApplicationController has been removed from the module tree but is still active!

    From there it gives an unhelpful stack trace to the effect of "we went through the standard server stuff, ran your first before_filter on ApplicationController" (and I checked; it's just whichever filter runs first), then offers the following:

        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:414:in `load_missing_constant'
        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:96:in `const_missing'

    which I'm assuming is a generic response and doesn't really say much. Google seems to tell me that people developing Rails engines will encounter this, but I don't do that. All I've done is upgrade my Rails app from 2.2 (2.1?) to 2.3. What are some possible causes for this error, and how can I go about tracking down what's really going on? I know this question is vague, so would any other information be helpful?

    More importantly: I tried doing a test run in a "production" environment just now, and the error doesn't seem to persist. Does this only affect development, then, and need I not worry too much?

    Read the article

  • dynamically created radiobuttonlist

    - by Janet
    I have a master page. The content page has a list with hyperlinks containing request variables. You click on one of the links to go to the page containing the radiobuttonlist (maybe).

    First problem: when I get to the new page, I use one of the variables to determine whether to add a RadioButtonList to a placeholder on the page. I tried to do it in Page_Load, but then couldn't get the selected values. When I played around doing it in PreInit, the first time the page loads I can't get to the page's controls ("Object reference not set to an instance of an object."). I think it has something to do with the master page and page content: the controls aren't instantiated until later? (Using VB, by the way.)

    Second problem: say I get that to work. Once I hit a button, can I still access the passed request variable to determine the selected item in the radiobuttonlist?

        Protected Sub Page_PreInit(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.PreInit
            'get sessions for concurrent
            Dim Master As New MasterPage
            Master = Me.Master
            Dim myContent As ContentPlaceHolder = CType(Page.Master.FindControl("ContentPlaceHolder1"), ContentPlaceHolder)
            If Request("str") = "1" Then
                Dim myList As dsSql = New dsSql() ''instantiate the function to get the dataset
                Dim ds As New Data.DataSet
                ds = myList.dsConSessionTimes(Request("eid"))
                If ds.Tables("conSessionTimes").Rows.Count > 0 Then
                    Dim conY As Integer = 1
                    CType(myContent.FindControl("lblSidCount"), Label).Text = ds.Tables("conSessionTimes").Rows.Count.ToString

    Sorry to be so needy, but maybe someone could direct me to a page with examples? Maybe seeing it would help it make sense? Thanks... JB

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            )
        )

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    1. Drop the primary key while I am doing the inserting and recreate it later?
    2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    3. Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process. I will try dropping the primary key and see if that helps.

    ~ Andrew
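    As a point of comparison for the SqlBulkCopy path described above, a small sketch of the knobs that usually matter for load speed: a table lock and a much larger batch than ~300 rows (the table name is the question's; the numbers are arbitrary starting points, not recommendations):

        using System.Data;
        using System.Data.SqlClient;

        static void BulkInsert(string connectionString, DataTable rows)
        {
            // TableLock takes a bulk-update table lock instead of per-row locks
            using (var bulk = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
            {
                bulk.DestinationTableName = "BulkData";
                bulk.BatchSize = 5000;      // commit in fewer, larger batches
                bulk.BulkCopyTimeout = 0;   // no timeout for long-running loads
                bulk.WriteToServer(rows);   // rows pre-sorted by the clustered key
            }
        }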

    Read the article

  • Blocking on DBCP connection pool (open and close connnection). Is database connection pooling in OpenEJB pluggable?

    - by topchef
    We use OpenEJB on Tomcat (we used to run on JBoss, WebLogic, etc.). While running load tests, we experience significant performance problems handling JMS messages (queues). The problem was localized to blocking on the database connection pool while getting or releasing a connection. The blocking prevented concurrent MDB instances (threads) from running, hence performance suffered 10-fold and worse. The same code used to run on application servers (with their respective connection pool implementations) with no blocking at all.

    Example of a blocked thread:

        Name: JMS Resource Adapter-worker-23
        State: BLOCKED on org.apache.commons.pool.impl.GenericObjectPool@1ea6b4a
               owned by: JMS Resource Adapter-worker-19
        Total blocked: 18,426  Total waited: 0

        Stack trace:
        org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java:916)
        org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:91)
           - locked org.apache.commons.dbcp.PoolableConnection@1bcba8
        org.apache.commons.dbcp.managed.ManagedConnection.close(ManagedConnection.java:147)
        com.xxxxx.persistence.DbHelper.closeConnection(DbHelper.java:290)
        ....

    A couple of questions:

    1. I am almost certain that some transactional attributes and properties contribute to this blocking, but the MDBs are defined as non-transactional (we use both annotations and ejb-jar.xml). Some EJBs do use container-managed transactions, though (and we can observe blocking there as well). Are there any DBCP configurations that may fix the blocking?
    2. Is the DBCP connection pool implementation replaceable in OpenEJB? How easy (or difficult) is it to replace it with another library?

    Just in case, this is how we define the data source in OpenEJB (openejb.xml):

        <Resource id="MyDataSource" type="DataSource">
            JdbcDriver oracle.jdbc.driver.OracleDriver
            JdbcUrl ${oracle.jdbc}
            UserName ${oracle.user}
            Password ${oracle.password}
            JtaManaged true
            InitialSize 5
            MaxActive 30
            ValidationQuery SELECT 1 FROM DUAL
            TestOnBorrow true
        </Resource>

    Read the article
