Search Results

Search found 1122 results on 45 pages for 'concurrent'.

  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example). When you set a value, I'd like it to insert if the key is not there, and otherwise update. Sadly Postgres does not have an insert-or-update statement, so I have to emulate it myself. I've been working with the idea of basically SELECTing whether the key exists, and then running the appropriate INSERT or UPDATE. Clearly this needs to be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to - I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation:

        a, b: => set transaction isolation level serializable;
        a:    => select count(1) from table where id=1;  --> 0
        b:    => select count(1) from table where id=1;  --> 0
        a:    => insert into table values(1);            --> 1
        b:    => insert into table values(1);
              --> ERROR: duplicate key value violates unique constraint "serial_test_pkey"

    Now I would expect it to throw the usual "couldn't commit due to concurrent update", but I'm guessing that since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
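    A sketch of the classic workaround, usually run at read committed rather than serializable isolation: try the UPDATE first, fall back to INSERT, and loop if a concurrent insert wins the race. This is illustrative JDBC against a hypothetical kv(id, val) table, not the asker's actual schema:

        // Hedged sketch: insert-or-update loop for PostgreSQL versions
        // without a native upsert. Table/column names are assumptions.
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class Upsert {
            // SQLState 23505 = unique_violation in PostgreSQL.
            static void setValue(Connection con, int id, String val) throws SQLException {
                while (true) {
                    try (PreparedStatement up = con.prepareStatement(
                            "UPDATE kv SET val = ? WHERE id = ?")) {
                        up.setString(1, val);
                        up.setInt(2, id);
                        if (up.executeUpdate() > 0) return; // row existed: done
                    }
                    try (PreparedStatement ins = con.prepareStatement(
                            "INSERT INTO kv (id, val) VALUES (?, ?)")) {
                        ins.setInt(1, id);
                        ins.setString(2, val);
                        ins.executeUpdate();
                        return;
                    } catch (SQLException e) {
                        if (!"23505".equals(e.getSQLState())) throw e;
                        // a concurrent insert won the race: loop and UPDATE instead
                    }
                }
            }
        }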

  • Sql Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi All, I have a maintenance plan that looks like this:

        Client 1: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> next client
        Client 2: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> next client
        Client N: ...

    Import Data and Process Data call jobs, and Post Process is an Execute SQL task. If Import Data or Process Data fails, the plan moves on to the next client's Import Data. Both Import Data and Process Data are jobs that contain SSIS packages using the built-in SQL logging provider. My expectation with the configuration as it stands is:

        Client 1 Import Data runs:  Failure -> Client 2 Import Data | Success -> Process Data
        Process Data runs:          Failure -> Client 2 Import Data | Success -> Post Process
        Post Process runs:          Completion (Success or Failure) -> next client's Import Data

    This isn't what I'm seeing in my logs, though... I see several clients' Import Data SSIS log entries, then several Post Process log entries, then back to a client's Import Data! Arg!! What am I doing wrong? I didn't think the "success" branch out of Client 1 Import Data would kick off until it... well... succeeded, a.k.a. finished! The logs seem to indicate otherwise. I really need these tasks to run consecutively, not concurrently. Is this possible? Thanks!

  • Log4Net GetLogger creates rolling files even for the unreferenced files

    - by ybastiand
    Hi, I have a C# solution that contains three executables, all sharing the same log4net configuration file. At startup, each executable retrieves a logger (one logger per executable, as per the configuration file linked below). When one of the executables performs Log.GetLogger(), it creates all the rolling files instead of only the one rolling file that is referenced as an appender-ref in that executable's logger configuration. For instance, when I start my sending daemon executable, it performs Log.GetLogger("SendingDaemonLogger"), which creates three files - Log/RuleScheduler.txt, Log/NotificationGenerator.txt and Log/NotificationSender.txt - instead of only the desired Log/NotificationSender.txt. Then, when I start another of the executables, for instance the rule scheduler daemon, this other process cannot write to Log/RuleScheduler.txt, because it has been created and locked by the sending daemon process. I am guessing that there may be three different solutions to my problem:

    1. GetLogger should only create the rolling file appenders that are referenced in the config.
    2. I could have one config file per executable; that way each config file would list only one rolling file appender, and starting one executable would not create the rolling files of the other daemons. I am reluctant to do this, however, because some of the configuration (SMTP appender, console appender) is shared between the daemons and I don't want duplicate copies to maintain. Unless there is a way to have one config file include another?
    3. Maybe there is a way to configure the rolling file so that concurrent access across processes is allowed? This solution still isn't perfect in my opinion, because no daemon should be creating the rolling files of the other daemons.

    Thanks in advance for your help! I have difficulty posting the config file properly here (this website interprets it as HTML), so please follow this link to see my log4net configuration file: log4Net configuration file

  • Sending jQuery.ajax data simultaneous to a form submit

    - by dscher
    I have a bit of a conundrum. I have a form which has numerous fields. There is one field for links where you enter a link, click an add button, and the link (using jQuery) gets added to a link_array. I want this array to be sent via the jQuery.ajax method when the form is submitted. If I send the link_array using $.ajax like this:

        $.ajax({
            type: "POST",
            url: "add_stock",
            dataType: "json",
            data: { "links": link_array }
        });

    when the add link button is selected, the data goes no problem to the correct place and gets put in the db correctly. If I instead bind the above function to the submit button of the form, using $('#stock_form').submit(...), then the rest of the form data is sent but not the link_array. I could obviously pass the link array back into a hidden field in the HTML, but then I'd have to flatten the array into comma-separated values and break the string apart again in PHP. It just seems 100X easier to unpack the JavaScript array in PHP without any other fuss. So, how can you send an array from JavaScript using $.ajax concurrently with the rest of the $_POST data from the HTML form? Please note that I'm using the Kohana 3.0 framework, but that really shouldn't make a difference; what I want to do is add this js array to the $_POST array that is already going. Thanks!

  • Can't start the portable version of NetBeans 6.9.1 IDE

    - by Coder
    I downloaded the portable version of NetBeans (netbeans-6.9.1-201007282301-ml.zip) from the NetBeans site and changed the config file etc/netbeans.conf as indicated on the site. The file contents are below.

        # ${HOME} will be replaced by JVM user.home system property
        #netbeans_default_userdir="${HOME}/.netbeans/6.9"
        netbeans_default_userdir=".netbeans/6.9"

        # Options used by NetBeans launcher by default, can be overridden by explicit
        # command line switches:
        netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-XX:MaxPermSize=200m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true"
        # Note that a default -Xmx is selected for you automatically.
        # You can find this value in var/log/messages.log file in your userdir.
        # The automatically selected value can be overridden by specifying -J-Xmx here
        # or on the command line.
        # If you specify the heap size (-Xmx) explicitely, you may also want to enable
        # Concurrent Mark & Sweep garbage collector. In such case add the following
        # options to the netbeans_default_options:
        # -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled
        # (see http://wiki.netbeans.org/wiki/view/FaqGCPauses)

        # Default location of JDK, can be overridden by using --jdkhome <dir>:
        #netbeans_jdkhome="/path/to/jdk"
        netbeans_jdkhome="C:\Program Files\Java\jdk1.6.0_24\"

        # Additional module clusters, using ${path.separator} (';' on Windows or ':' on Unix):
        #netbeans_extraclusters="/absolute/path/to/cluster1:/absolute/path/to/cluster2"

        # If you have some problems with detect of proxy settings, you may want to enable
        # detect the proxy settings provided by JDK5 or higher.
        # In such case add -J-Djava.net.useSystemProxies=true to the netbeans_default_options.

    But NetBeans refuses to start when I try to run it. If I change the JDK path to something incorrect, it complains that it can't find the JDK, so I think the JDK path is correct. It also creates a .netbeans directory when I try to start it. I don't see any errors, and it just doesn't do anything else observable. Does anybody know how to set up this version of NetBeans? Thanks.

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with, and within a couple of months is expected to grow to 50M users (not concurrent users, though). I would like to have an architecture that can be scaled out easily without much effort.

    To explain, I'd like to use a trivial scenario. Let's say User entities and services such as CreateUser and AuthenticateUser are simple method calls for the page controllers. Once the traffic increases, authenticating users (or other such services related to User entities) has to be moved out to a different internal server to spread the load. But at the same time, using RPC calls over the network when the user count is only 40K would be overkill. My proposal was to use IPC initially, and when we need to scale out, to switch internally to TCP-based RPC calls so that the services can easily scale out. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, moving to a TcpListener later on. If we have a proper design that encapsulates the above approach, it would be easy for us to scale the services out onto multiple network servers while still avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great.

    Note: Database scaling is definitely the second-phase optimization, so we have already put an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
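    The usual way to make that swap painless is to hide the transport behind an interface, so that "in-process vs. remote" becomes a wiring decision rather than an API change. A minimal sketch of the idea in Java (the names and the toy wire protocol are invented for illustration; the same shape applies to the C#/named-pipe design described above):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.Socket;
        import java.util.HashMap;
        import java.util.Map;

        // Callers depend only on this contract.
        interface UserService {
            boolean authenticate(String user, String password);
        }

        // In-process implementation: a plain method call while traffic is small.
        class LocalUserService implements UserService {
            private final Map<String, String> passwords = new HashMap<String, String>();
            public boolean authenticate(String user, String password) {
                return password.equals(passwords.get(user));
            }
        }

        // Remote implementation: same contract, proxied over TCP once the
        // service moves onto its own box.
        class RemoteUserService implements UserService {
            private final String host;
            private final int port;
            RemoteUserService(String host, int port) { this.host = host; this.port = port; }
            public boolean authenticate(String user, String password) {
                try (Socket s = new Socket(host, port);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()))) {
                    out.println("AUTH " + user + " " + password); // toy protocol
                    return "OK".equals(in.readLine());
                } catch (IOException e) {
                    return false;
                }
            }
        }

    Because page controllers only ever see UserService, moving authentication to another server is a change in whatever factory or container constructs the implementation, not in the callers.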

  • Cache consistency & spawning a thread

    - by Dave Keck
    Background: I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far, though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense.

    Assumptions: The following pseudo-code is executed on a two-processor machine:

        int sharedVar = 0;

        myThread()
        {
            print(sharedVar);
        }

        main()
        {
            sharedVar = 1;
            spawnThread(myThread);
            sleep(-1);
        }

    main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above).

    Question: Strictly speaking - preferably without assuming any particular CPU - is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0.

    References: These are the related articles and books I've been reading. (It wouldn't allow me to format these as links because I'm a new user - sorry.)

    - Shared Memory Consistency Models: A Tutorial - hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf
    - Memory Barriers: a Hardware View for Software Hackers - rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf
    - Linux Kernel Memory Barriers - kernel.org/doc/Documentation/memory-barriers.txt
    - Computer Architecture: A Quantitative Approach - amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk
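    One concrete data point: language-level memory models exist precisely so this can be answered without assuming a particular CPU. In Java, for example, a call to Thread.start() happens-before every action of the started thread (JLS 17.4.5), so the Java analogue of the pseudo-code above is guaranteed to print 1, caches and invalidation queues notwithstanding:

        // Java analogue of the pseudo-code: the write to sharedVar is made
        // visible to the spawned thread by the Thread.start() happens-before
        // edge, even though sharedVar is not volatile.
        public class StartVisibility {
            static int sharedVar = 0;

            public static void main(String[] args) {
                sharedVar = 1; // happens-before the started thread's actions
                new Thread(new Runnable() {
                    public void run() {
                        System.out.println(sharedVar); // guaranteed to print 1
                    }
                }).start();
            }
        }

    C++11's std::thread constructor gives the same guarantee; the pseudo-code is only in doubt if spawnThread is something weaker than a real thread-creation primitive, which must include the necessary barriers.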

  • Starting with versioning mysql schemata without overkill. Good solutions?

    - by tharkun
    I've arrived at the point where I realise that I must start versioning my database schemata and changes. I consequently read the existing posts on SO about the topic, but I'm not sure how to proceed. I'm basically a one-man company, and not long ago I didn't even use version control for my code. I'm on a Windows environment, using Aptana (IDE) and SVN (with Tortoise). I work on PHP/MySQL projects. What's an efficient and sufficient (no overkill) way to version my database schemata? I do have a freelancer or two on some projects, but I don't expect a lot of branching and merging going on. Basically, I would like to keep the schema history in step with my code revisions.

    [edit] Momentary solution: for the moment I have decided I will just make a schema dump, plus one dump with the necessary initial data, whenever I'm going to commit a tag (stable version). That seems to be just enough for me at the current stage. [/edit]

    [edit2] Plus, I'm now also using a third file called increments.sql, where I put all the changes with dates etc., to make it easy to trace the change history in one file. From time to time I integrate the changes into the other two files and empty increments.sql. [/edit2]

  • Advice on a DB that can be uploaded to a website by a smart client for collecting survey feedback

    - by absfabs
    Hello, I'm hoping you can help. I'm looking for a zero-config multi-user database that my WinForms application can easily upload to a web server folder (together with one or two classic ASP pages), and I am looking for suggestions/recommendations. The idea is that the database will be used to collect feedback entered by people filling in the ASP pages. The pages will write to the database using JavaScript. The database will subsequently be downloaded again for processing once the responses are in. In summary:

    - It will mostly run in MS Windows environments.
    - I have a modest budget for this and do not mind paying for such a database.
    - No runtime licensing costs.
    - It should be xcopy-deployable: once uploaded to a website folder, it should be operational.
    - It should not have a .NET CLR dependency.
    - It should support a reasonable level of concurrent access. The average respondent count would be around 20-30, but one never knows.
    - It should be a reasonable size, so that uploads/downloads to and from the site will be reasonably fast.

    I would appreciate your suggestions/comments. Many thanks, Abz.

    To clarify: this is a desktop commercial application for feedback management in a vertical market. It uses SQL Server as the backing store. The application currently provides feedback management for email and paper feedback; I now want to add web feedback capability. Getting users to make their SQL Servers accessible to a website is not an option at this time, as I want to make getting up and running as painless as possible. I intend to release a web-based implementation of the software in the near future, but for now I am looking at the above as a pragmatic way to provide web-based feedback collection.

  • what libraries or platforms should I use to build web apps that provide real-time, asynchronous data synchronization?

    - by Daniel Sterling
    This is less a question with a simple, practical answer and more a question to foster discussion on the topic of real-time data exchange. I'll begin with an example: Google Wave is, at its core, a real-time asynchronous data synchronization engine. Wave supports (or plans to support) concurrent (real-time) document collaboration, disconnected (offline) document editing, conflict resolution, document history and playback with attribution, and server federation. A core part of Wave is the Operational Transformation engine: http://www.waveprotocol.org/whitepapers/operational-transform

    The OT engine manages document state. Changes between clients are merged, and each client has a sane and consistent view of the document at all times; the final document is eventually consistent between all connected clients. My questions: Is this system abstract or general enough to be used as a library or generic framework upon which to build web apps that synchronize real-time, asynchronous state in each client? Is the Wave protocol directly used by any current web applications (besides Google's client)? Would it make sense to use it directly for generic state synchronization in a web app? What other existing libraries or frameworks would you consider using when building such a web app? How much code in such an app might be domain-specific logic vs. generic state synchronization logic? Or, put another way, how leaky might the state synchronization abstractions be? Comments and discussion welcomed!
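    For readers new to the topic, the heart of an OT engine is a transform function with a convergence property: applying your own operation and then the transformed remote operation yields the same document on both sites. A toy Java illustration for two concurrent single-character inserts (real engines such as Wave's also handle deletes, attributes, and long operation histories):

        // Toy operational transformation: two concurrent inserts against the
        // same base document converge once each side transforms the other's op.
        final class Insert {
            final int pos; final char ch; final int site; // site id breaks ties
            Insert(int pos, char ch, int site) { this.pos = pos; this.ch = ch; this.site = site; }
        }

        final class Ot {
            // Transform `a` so it can be applied after `b` has been applied.
            static Insert transform(Insert a, Insert b) {
                if (a.pos < b.pos || (a.pos == b.pos && a.site < b.site)) return a;
                return new Insert(a.pos + 1, a.ch, a.site); // shift past b's char
            }

            static String apply(String doc, Insert op) {
                return doc.substring(0, op.pos) + op.ch + doc.substring(op.pos);
            }

            public static void main(String[] args) {
                String base = "ac";
                Insert a = new Insert(1, 'b', 0); // site A types 'b' -> "abc"
                Insert b = new Insert(2, 'd', 1); // site B types 'd' -> "acd"
                String siteA = apply(apply(base, a), transform(b, a));
                String siteB = apply(apply(base, b), transform(a, b));
                System.out.println(siteA + " == " + siteB); // abcd == abcd
            }
        }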

  • SQL Server 2000 intermittent connection exceptions on production server - specific environment problem?

    - by StickyMcGinty
    We've been having intermittent problems causing users to be forcibly logged out of our application. Our set-up is an ASP.NET/C# web application on Windows Server 2003 Standard Edition, with SQL Server 2000 on the back end. We've recently performed a major product upgrade on our client's VMware server (we have a guest instance dedicated to us), and whereas we had none of these issues with the previous release, the added complexity that the new upgrade brings to the product has caused a lot of issues. We are running SQL Server 2000 (build 8.00.2039, i.e. SP4) and the IIS/ASP.NET (.NET v2.0.50727) application on the same box, connecting to each other via TCP/IP. Primarily, the exceptions being thrown are:

    - System.IndexOutOfRangeException: Cannot find table 0.
    - System.ArgumentException: Column 'password' does not belong to table Table. (This occurs in the log-in script, even though there is clearly a password column available.)
    - System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first. (This one occurs very regularly.)
    - System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable.
    - System.ApplicationException: ExecuteReader requires an open and available Connection. The connection's current state is connecting.
    - System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    And just today, for the first time: System.Web.UI.ViewStateException: Invalid viewstate. We have load tested the app using the same number of concurrent users as the production server and cannot reproduce these errors. They are very intermittent and occur even when there are only 8/9/10 user connections. My gut is telling me it's ASP.NET - SQL Server 2000 connection issues. We've pretty much ruled out code-level Data Access Layer errors at this stage (we have a development team of 15 experienced developers working on this), so we think it's a specific production server environment issue.

  • Biztalk vs API for databroker layer

    - by jdt199
    My company is about to undergo a large project in which our client wants a large customer portal, with a CMS and CRM to implement. This will require interaction with data from multiple sources across our customer's business; these sources include XML office back-end systems, SQL databases, web services, etc. Our proposed solution would be to write an API in C# to provide a common interface to all these systems. This would be scalable for future and concurrent projects within the company. Our client has expressed an interest in using BizTalk rather than a custom API for this integration, as they feel it is an enterprise solution that any of their suppliers could pick up and use, and that it will be better supported. We feel that the configuration work using BizTalk would be rather heavy for all their custom business rules, and that an interface for the new application to get data to and from BizTalk would still need to be written. Are we right to prefer a custom API solution over BizTalk? Would BizTalk be suitable as a data-broker layer providing an interface for the new customer portal we are writing? We have no experience with BizTalk, so any input would be appreciated.

  • Load Spikes on a Apache MySQL Server with Wordpress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9, on a dedicated server with enough processing power and memory. My primary web application is a WordPress MU (2.9.2) installation. I have investigated and ruled out a DoS attack and MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks the load every 3 minutes and restarts Apache; restarting brings the server back, until it happens again. There seems to be no set time frame, or number of visitors on the site, that triggers this. Even a low number of concurrent visitors (20) can trigger it. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad: Apache is trying to serve something that causes it to spawn more and more processes until it keels over. My question is: given that I am convinced this is a rewrite issue or something similar, how can I try to figure out what the issue is? What should I monitor? Apache logs are voluminous and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate it and look for something else. Thanks! Vikram

  • A linked list with multiple heads in Java

    - by Emile
    Hi, I have a list in which I'd like to keep several head pointers. I've tried to create multiple ListIterators on the same list, but this forbids me from adding new elements to the list (see ConcurrentModificationException). I could create my own class, but I'd rather use a built-in implementation ;) To be more specific, here is an inefficient implementation of the two basic operations, followed by the one that doesn't work:

        class MyList<E> {
            private int[] _heads;
            private List<E> _l;

            public MyList(int nbHeads) {
                _heads = new int[nbHeads];
                _l = new LinkedList<E>();
            }

            public void add(E e) {
                _l.add(e);
            }

            public E next(int head) {
                return _l.get(_heads[head++]); // ugly
            }
        }

        class MyList<E> {
            private Vector<ListIterator<E>> _iters;
            private List<E> _l;

            public MyList(int nbHeads) {
                _iters = new Vector<ListIterator<E>>(nbHeads);
                _l = new LinkedList<E>();
                for (int i = 0; i < nbHeads; i++)
                    _iters.add(_l.listIterator());
            }

            public void add(E e) {
                _l.add(e);
            }

            public E next(int head) {
                // ConcurrentModificationException because of the add()
                return _iters.get(head).next();
            }
        }

    Emile
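    A minimal sketch of the "own class" route, for comparison: integer cursors over an ArrayList survive add() precisely because they are not fail-fast iterators. (Note that _heads[head++] in the original increments the method parameter, not the stored cursor; _heads[head]++ is what actually advances it.)

        import java.util.ArrayList;
        import java.util.List;

        // One backing list, N independent integer cursors. add() never
        // invalidates a cursor the way it invalidates a ListIterator.
        class MultiHeadList<E> {
            private final int[] heads;                           // one cursor per head
            private final List<E> elements = new ArrayList<E>(); // O(1) get()

            MultiHeadList(int nbHeads) { heads = new int[nbHeads]; }

            void add(E e) { elements.add(e); }

            boolean hasNext(int head) { return heads[head] < elements.size(); }

            E next(int head) {
                return elements.get(heads[head]++); // advance the stored cursor
            }
        }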

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to an array in which to store the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating and managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

    1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    2. Pre-allocate temporary arrays and store them as final fields in the algorithm object - the big downside being that only one thread could run the algorithm at any one time.
    3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required, since multiple threads will be running the same algorithm simultaneously. (A sketch of this option follows below.)
    4. Pass lots of arrays around as parameters (including the temporary arrays for the algorithm to use). Not good, since it makes the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space...
    5. Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array, so that different threads use different areas of the array independently. This will obviously require some code to manage the offsets and the allocation of array ranges.

    Any thoughts on which approach would be best (and why)?
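    A sketch of option 3, since it is the one that keeps both concurrency and zero steady-state allocation; each thread lazily gets its own scratch array, allocated once. The sizes and the loop body are placeholders for the real algorithm:

        // Per-thread temporary storage via ThreadLocal: safe for concurrent
        // callers, no garbage on the steady-state path.
        public class MyAlgorithm {
            private static final int TEMP_SIZE = 100000;

            private static final ThreadLocal<double[]> TEMP =
                new ThreadLocal<double[]>() {
                    @Override protected double[] initialValue() {
                        return new double[TEMP_SIZE]; // once per thread
                    }
                };

            public double[] runMyAlgorithm(double[] inputData) {
                double[] temp = TEMP.get(); // no allocation after first call
                double[] output = new double[inputData.length];
                for (int i = 0; i < inputData.length; i++) {
                    temp[i % TEMP_SIZE] = inputData[i] * inputData[i]; // stand-in work
                    output[i] = temp[i % TEMP_SIZE];
                }
                return output;
            }
        }

    The output array still has to be allocated per call (it escapes to the caller), but the temporaries - usually the bulk of the garbage - do not.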

  • I am confused -- Will this code always work?

    - by Shekhar
    Hello, I have written this piece of code:

        public class Test {
            public static void main(String[] args) {
                List<Integer> list = new ArrayList<Integer>();
                for (int i = 1; i <= 4; i++) {
                    new Thread(new TestTask(i, list)).start();
                }
                while (list.size() != 4) {
                    // this while loop is required so that all threads complete their work
                }
                System.out.println("List " + list);
            }
        }

        class TestTask implements Runnable {
            private int sequence;
            private List<Integer> list;

            public TestTask(int sequence, List<Integer> list) {
                this.sequence = sequence;
                this.list = list;
            }

            @Override
            public void run() {
                list.add(sequence);
            }
        }

    This code works and prints all four elements of the list on my machine. My question is: will this code always work? I think there might be an issue in this code when two or more threads add an element to the list at the same point. In that case, the while loop will never end and the code will fail. Can anybody suggest a better way to do this? I am not very good at multithreading and don't know which concurrent collection I could use. Thanks, Shekhar
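    For illustration, one commonly suggested rewrite: a synchronized wrapper makes the concurrent add() calls safe, and a CountDownLatch replaces the busy-wait (which burns CPU and, without synchronization, is not even guaranteed to observe the final size):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.concurrent.CountDownLatch;

        public class Test {
            public static void main(String[] args) throws InterruptedException {
                final List<Integer> list =
                    Collections.synchronizedList(new ArrayList<Integer>());
                final CountDownLatch done = new CountDownLatch(4);
                for (int i = 1; i <= 4; i++) {
                    final int sequence = i;
                    new Thread(new Runnable() {
                        public void run() {
                            list.add(sequence);
                            done.countDown(); // signal this thread is finished
                        }
                    }).start();
                }
                done.await(); // blocks until all four threads have counted down
                System.out.println("List " + list);
            }
        }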

  • Atomic int writes on file

    - by Waneck
    Hello! I'm writing an application that will have to handle many concurrent accesses, by threads as well as by processes, so the use of mutexes and locks should be kept to a minimum. To that end, I'm designing the file to be "append-only": all data is first appended to disk, and then the address pointing to the information it updates is changed to refer to the new version. So I will need to implement a small locking scheme only for changing this one int, so that it refers to the new address. What is the best way to do it? I was thinking about maybe putting a flag before the address which, when set, makes readers spin until it's released. But I'm afraid that this isn't at all atomic, is it? E.g.: a reader reads the flag while it is unset; at the same time, a writer sets the flag and changes the value of the int; the reader may read an inconsistent value! I'm looking for locking techniques, but all I can find covers either thread locking or locking an entire file, not individual fields. Is it not possible to do this? How do append-only databases handle it? Thanks! Cauê
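    One concrete cross-process option, sketched in Java: take an OS advisory lock over just the bytes of the pointer field while updating it, so readers and writers of other regions of the file are never blocked. The offset and the 4-byte field width are assumptions for illustration:

        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;

        // Region-locked update of a 4-byte "current address" field.
        public class PointerField {
            private static final long POINTER_OFFSET = 0L; // assumed location

            static void publishNewAddress(FileChannel ch, int newAddress)
                    throws Exception {
                FileLock lock = ch.lock(POINTER_OFFSET, 4, false); // exclusive
                try {
                    ByteBuffer buf = ByteBuffer.allocate(4);
                    buf.putInt(newAddress);
                    buf.flip();
                    ch.write(buf, POINTER_OFFSET);
                    ch.force(true); // flush before letting readers see it
                } finally {
                    lock.release();
                }
            }
        }

    Readers would take the same region in shared mode (ch.lock(POINTER_OFFSET, 4, true)). In practice a single aligned 4-byte write is atomic on most platforms, but that is exactly the kind of guarantee POSIX and Win32 decline to make portably, which is why the region lock is the safe version.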

  • Opaque tenant identification with SQL Server & NHibernate

    - by Anton Gogolev
    Howdy! We're developing a nowadays-fashionable multi-tenanted SaaS app (shared database, shared schema), and there's one thing I don't like about it:

        public class Domain : BusinessObject
        {
            public virtual long TenantID { get; set; }
            public virtual string Name { get; set; }
        }

    The TenantID is driving me nuts, as it has to be accounted for almost everywhere, and it's a hassle from a security standpoint: what happens if a malicious API user changes TenantID to some other value and mixes things up? What I want to do is get rid of this TenantID in our domain objects altogether and have either NHibernate or SQL Server deal with it. From what I've read on the Internet, this can be done with CONTEXT_INFO (here's an NHibernate-based implementation), with NHibernate filters, with SQL views, or with some combination thereof. Now, my requirements are as follows:

    - Remove any mention of TenantID from the domain objects...
    - ...but have SQL Server insert it where appropriate (I guess this is achieved with default constraints)...
    - ...and obviously provide support for filtering on this criterion, so that customers will never see each other's data.
    - If possible, avoid SQL Server views.
    - Have a solution which plays nicely with NHibernate, SQL Server's MARS, and the generally highly concurrent nature of SaaS apps.

    What are your thoughts on that?
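    For readers unfamiliar with the filter approach mentioned above, this is roughly what it looks like; the sketch below is (Java) Hibernate, of which NHibernate's filters are the direct port, and all names are illustrative. The entity stops exposing the tenant column, and a session-scoped filter injects the tenant predicate into every query against filtered entities:

        // Hibernate-filter sketch of tenant scoping; names are illustrative.
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import org.hibernate.annotations.Filter;
        import org.hibernate.annotations.FilterDef;
        import org.hibernate.annotations.ParamDef;

        @Entity
        @FilterDef(name = "tenant",
                   parameters = @ParamDef(name = "tenantId", type = "long"))
        @Filter(name = "tenant", condition = "tenant_id = :tenantId")
        class Domain {
            @Id
            private Long id;
            private String name; // note: no tenant field on the object
        }

    At session-open time, scoped from the authenticated caller rather than from anything API users can set: session.enableFilter("tenant").setParameter("tenantId", currentTenantId). Inserting the column value on writes is the remaining half, which is where the default-constraint (or interceptor) idea comes in.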

  • Special technology needed for browser based chat?

    - by orokusaki
    In this post, I read about the usage of XMPP. Is that sort of thing necessary? More importantly, my main question expanded: can a chat server and client be built efficiently using only standard HTTP and browser technologies (such as PHP and JS, or RoR and JS, etc.)? Or is it best to stick with established protocols like XMPP and find a way to integrate them with my application? I looked into Campfire via LiveHTTPHeaders and Firebug for about 5 minutes, and it appears to use Ajax to send a request which is never answered until another chat message happens. Is this just Campfire opening a new thread on the server to listen for an update, and then returning a response to the request when the thread hears one? I noticed that they make the request on a specific port (8043, if memory serves), which makes me think they're doing something more complex than what I just described. Also, the requested URL started with /tcp/, which I found interesting. Note: I don't expect to ever have more than 150 users live-chatting across all rooms combined at the same time. I understand that if I were building a hosted pay-for chat service like Campfire with thousands of concurrent users, it would behoove me to research special technologies rather than reinventing the wheel in a simple way in my app. Also, if you're going to do it with server polling, how often would you personally poll to maximize responsiveness without slamming the server?
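    On the "held Ajax request" observation: that pattern is long polling, and the server side needs nothing more exotic than a request handler that blocks until there is something to say or a timeout tells the client to reconnect. A minimal sketch of the core in Java (per-room; a real design would keep one queue per connected client so that clients don't steal each other's messages):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        // Core of a Campfire-style long poll: poll() parks until a message
        // arrives or the timeout expires.
        public class ChatRoom {
            private final BlockingQueue<String> messages =
                new LinkedBlockingQueue<String>();

            public void post(String msg) {
                messages.add(msg); // wakes up a parked poller, if any
            }

            // Called from the HTTP handler servicing the held GET request.
            public List<String> poll(long timeoutSeconds) throws InterruptedException {
                List<String> batch = new ArrayList<String>();
                String first = messages.poll(timeoutSeconds, TimeUnit.SECONDS);
                if (first != null) {
                    batch.add(first);
                    messages.drainTo(batch); // grab anything else already queued
                }
                return batch; // empty => client should immediately re-poll
            }
        }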

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application's UI) in an external source control system. Of course, this implies that my application can check the current version of an object for updates, offer a merging interface for each object, and so on. My question is which source control paradigm(s) and specific solution(s) to support, and why. The way I (perhaps naively) see the source control world, there are three general paradigms:

    1. Single repository, locked access (MS SourceSafe)
    2. Single repository, concurrent access (CVS/SVN)
    3. Distributed (Mercurial, Git)

    I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard it altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture the basic operations in a single UI. The last thing I should convey is that this application is intended for deployment in commercial settings, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in corporate settings is a plus.

  • Learning Haskell maps, folds, loops and recursion

    - by Darknight
    I've only just dipped my toe into the world of Haskell as part of my journey of programming enlightenment (moving on from procedural to OOP to concurrent and now to functional). I've been trying an online Haskell evaluator. However, I'm now stuck on a problem: create a simple function that gives the total sum of an array of numbers. In a procedural language this is easy enough for me, using recursion (C#):

        private int sum(ArrayList x, int i)
        {
            if (!(x.Count < i + 1))
            {
                int t = (int)x[i];
                return sum(x, i + 1) + t;
            }
            return 0;
        }

    All very fine; however, my failed attempt at Haskell was this:

        let sum x = x + sum in map sum [1..10]

    which resulted in the following error (from the above-mentioned website):

        Occurs check: cannot construct the infinite type: a = a -> t

    Please bear in mind I've only used Haskell for the last 30 minutes! I'm not looking simply for an answer, but for an explanation of it. Thanks in advance.

  • java GC periodically enters into several full GC cycles

    - by Peter
    Environment: Sun JDK 1.6.0_16.

    VM settings:

        -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -Xms1024M -Xmx1024M
        -XX:MaxNewSize=448m -XX:NewSize=448m -XX:SurvivorRatio=4 (6 also checked)
        -XX:MaxPermSize=128M

    OS: Windows Server 2003. Processor: 4 cores of Intel Xeon 5130, 2.0 GHz.

    My application: a high intensity of concurrent operations (java 5 concurrency is used), each completed by a commit to Oracle. About 20-30 threads run non-stop, doing tasks. The application runs in the JBoss web container.

    My GC starts out working normally: I see a lot of small GCs, and all that time the CPU shows a good load - all 4 cores loaded to 40-50%, with a stable CPU graph. Then, after a minute of good work, the CPU drops to 0% on 2 of the 4 cores and its graph becomes unstable, going up and down ("teeth"). I can see that my threads are working more slowly (I have monitoring), and that the GC produces a lot of FULL GCs during that time. This lasts for the next 4-5 minutes, then for a short period, like a minute, it gets back to normal, but shortly after that the whole bad pattern repeats.

    Question: Why do I have such frequent full GCs? How do I prevent them? I played with SurvivorRatio - it does not help. I noticed that the application behaves normally until the first FULL GC occurs, while I have enough memory; after that it runs badly.

    My GC log starts out good, then enters a long period of FULL GCs (many of them):

        1027.861: [GC 942200K->623526K(991232K), 0.0887588 secs]
        1029.333: [GC 803279K(991232K), 0.0927470 secs]
        1030.551: [GC 967485K->625549K(991232K), 0.0823024 secs]
        1030.634: [GC 625957K(991232K), 0.0763656 secs]
        1033.126: [GC 969613K->632963K(991232K), 0.0850611 secs]
        1033.281: [GC 649899K(991232K), 0.0378358 secs]
        1035.910: [GC 813948K(991232K), 0.3540375 secs]
        1037.994: [GC 967729K->637198K(991232K), 0.0826042 secs]
        1038.435: [GC 710309K(991232K), 0.1370703 secs]
        1039.665: [GC 980494K->972462K(991232K), 0.6398589 secs]
        1040.306: [Full GC 972462K->619643K(991232K), 3.7780597 secs]
        1044.093: [GC 620103K(991232K), 0.0695221 secs]
        1047.870: [Full GC 991231K->626514K(991232K), 3.8732457 secs]
        1053.739: [GC 942140K(991232K), 0.5410483 secs]
        1056.343: [Full GC 991232K->634157K(991232K), 3.9071443 secs]
        1061.257: [GC 786274K(991232K), 0.3106603 secs]
        1065.229: [Full GC 991232K->641617K(991232K), 3.9565638 secs]
        1071.192: [GC 945999K(991232K), 0.5401515 secs]
        1073.793: [Full GC 991231K->648045K(991232K), 3.9627814 secs]
        1079.754: [GC 936641K(991232K), 0.5321197 secs]
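    A hedged note on the log shape: the young collection at 1039.665 that frees almost nothing, immediately followed by a Full GC, is the classic signature of CMS losing the race - a concurrent mode failure, where the old generation fills before the concurrent cycle can finish. Flags commonly experimented with in that situation (the threshold value here is illustrative, not a recommendation):

        -XX:+UseCMSInitiatingOccupancyOnly
        -XX:CMSInitiatingOccupancyFraction=60
        -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

    -XX:+PrintGCDetails will print "concurrent mode failure" explicitly if that is what is happening, and the occupancy flags start the CMS cycle earlier so it has room to complete before the old generation is full.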

  • Binding a Value from a View-Model to the View-Model of a child User Control in Silverlight?

    - by andrej351
    Hi there, I have a UserControl for one of my views, with another 'child' UserControl inside it. The outer 'parent' UserControl has a collection on its view-model and a Grid control for displaying a list of items. I want to place another UserControl inside this UserControl to display a form representing the details of one item. The outer/parent UserControl's view-model already has a property holding the currently selected item, and I would like to bind this to a DependencyProperty on the inner/child UserControl. I would then like to bind that DependencyProperty to a property on the child UserControl's view-model. That way I can set the DependencyProperty once in XAML with a binding expression and have the child UserControl do all its work in its view-model, like it should. The code I have looks like this.

    Parent UserControl:

        <UserControl x:Class="ItemsListView"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     DataContext="{Binding Source={StaticResource ServiceLocator}, Path=ItemsListViewModel}">
            <!-- Grid control here... -->
            <ItemDetailsView Item="{Binding Source={StaticResource ServiceLocator}, Path=ItemsListViewModel.SelectedItem}" />
        </UserControl>

    Child UserControl:

        <UserControl x:Class="ItemDetailsView"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     DataContext="{Binding Source={StaticResource ServiceLocator}, Path=ItemDetailsViewModel}"
                     ItemDetailsView.Item="{Binding Source={StaticResource ServiceLocator}, Path=ItemDetailsViewModel.Item, Mode=TwoWay}">
            <!-- Form controls here... -->
        </UserControl>

    The selected item is bound to the DependencyProperty fine; however, the binding from the DependencyProperty to the child view-model does not work. I've used this sort of approach in a WPF app without problems. It appears to be a situation where two concurrent bindings need to work with the same target property from two different sources. Why won't the second binding (in the child UserControl) work? Is there a way to achieve the behaviour I'm after? Cheers.

  • Moving from SVN to HG : branching and backup

    - by rorycl
    My company runs SVN right now and we are very familiar with it. However, because we do a lot of concurrent development, merging can become very complicated. We've been playing with hg, and we really like the ability to make fast and effective clones on a per-feature basis. There are two main issues we'd like to resolve before we move to hg:

    1. Branches for erstwhile SVN users. I'm familiar with the "four ways to branch in Mercurial" as set out in Steve Losh's article. We think we should "materialise" the branches, because the dev team will find this the most straightforward way of migrating from SVN. Consequently, I guess we should follow the "branching with clones" model, which means that separate clones for branches are made on the server. While this means that every clone/branch needs to be made on the server and published separately, it isn't too much of an issue for us, as we are used to checking out SVN branches which come down as separate copies. I'm worried, however, that merging changes and following history between branches may become difficult in this model.
    2. Backup. If programmers in our team make local clones of a branch, how do they back up the local clone? We're used to seeing SVN commit messages like "Interim commit: db function not yet working" on a feature branch. I can't see a way of doing this easily in hg.

    Advice gratefully received. Rory

  • Windows service threading call to WCF service

    - by Sam Brinsted
    Hi, I have a Windows service that reads data from a database, submits it to a WCF service, and once that has finished stamps a processed date on the original record. The trouble I am currently having is to do with threading. The call to the WCF service is relatively long, and I want a number of concurrent calls to the service to help improve the throughput of the Windows service. Currently I have a submit method on a new worker class; upon reading a new row from the database, I create a new thread which calls this method. This obviously isn't too good, as the number of threads quickly shoots up and overburdens the WCF service. I have put a Thread.Sleep in the submit method and make sure to call System.Threading.Thread.CurrentThread.Abort() after the submission has finished. However, I don't seem to see the number of threads go down. How can I have just a fixed number of threads available to the Windows service? I did think about using a thread pool, but read somewhere that it wasn't a good choice for a Windows service. Thanks very much.
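    The general pattern being asked for - a fixed number of workers consuming submissions, rather than one new thread per row (and no aborting of threads) - looks like this when sketched in Java, purely for illustration; .NET offers analogous primitives in its ThreadPool and Semaphore classes:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        // A fixed-size pool caps the number of concurrent service calls no
        // matter how fast rows are read. Record and the two methods are
        // placeholders for the database row and the real work.
        public class Submitter {
            private final ExecutorService pool = Executors.newFixedThreadPool(4);

            public void submitRecord(final Record record) {
                pool.execute(new Runnable() {
                    public void run() {
                        callRemoteService(record); // long-running remote call
                        stampProcessed(record);    // mark the source row done
                    }
                });
            }

            private void callRemoteService(Record r) { /* elided */ }
            private void stampProcessed(Record r)    { /* elided */ }

            static class Record { }
        }

    Submissions beyond the pool's capacity simply queue up, so the reader can run ahead of the service without spawning anything extra.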
