Search Results

Search found 3641 results on 146 pages for 'threads'.

  • What are the relative merits of implementing an Erlang-style "Continuation" pattern in C#

    - by JoeGeeky
    What are the relative merits (or demerits) of implementing an Erlang-style "Continuation" pattern in C#? I'm working on a project that has a large number of Lowest-priority threads, and I'm wondering if my approach may be all wrong. It would seem there is a reasonable upper limit to the number of long-running threads that any one Process 'should' spawn. With that said, I'm not sure what would signal the tipping point for too many threads, or when alternate patterns such as "Continuation" would be more suitable. In this case, many of the threads do a small amount of work and then sleep until woken to go again (e.g. heartbeat, purge caches, etc.). This continues for the life of the Process.
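
    For periodic wake-work-sleep cycles like the heartbeat described above, one continuation-style alternative is to let a timer own the schedule instead of parking a dedicated thread per task. A minimal sketch, assuming the work is small and self-contained (all names here are illustrative):

        using System;
        using System.Threading;

        class HeartbeatService : IDisposable
        {
            private readonly Timer _timer;

            public HeartbeatService(TimeSpan interval)
            {
                // A pool thread runs the callback when due; no dedicated
                // thread sleeps between beats.
                _timer = new Timer(OnBeat, null, interval, interval);
            }

            private void OnBeat(object state)
            {
                // Do the small unit of work, then implicitly yield back
                // to the pool until the next due time.
                Console.WriteLine("heartbeat at {0:T}", DateTime.Now);
            }

            public void Dispose()
            {
                _timer.Dispose();
            }
        }

    With this shape, a hundred periodic tasks cost a hundred Timer registrations rather than a hundred sleeping threads, which sidesteps the tipping-point question entirely.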

    Read the article

  • A Basic Thread

    - by Joe Mayo
    Most programs are single-threaded, meaning that they run on the main execution thread. For various reasons such as performance, scalability, and/or responsiveness, additional threads can be useful. .NET has extensive threading support, from the basic threads introduced in v1.0 to the Task Parallel Library (TPL) introduced in v4.0. To get started with threads, it's helpful to begin with the basics: starting a Thread.

    Why Do I Care? The scenario I'll use for needing a thread is writing to a file. Sometimes, writing to a file takes a while and you don't want your user interface to lock up until the file write is done. In other words, you want the application to be responsive to the user.

    How Would I Go About It? The solution is to launch a new thread that performs the file write, allowing the main thread to return to the user right away. Whenever the file-writing thread completes, it will let the user know. In the meantime, the user is free to interact with the program for other tasks. The following examples demonstrate how to do this.

    Show Me the Code? The code we'll use to work with threads is in the System.Threading namespace, so you'll need the following using directive at the top of the file:

        using System.Threading;

    When you run code on a thread, the code is specified via a method. Here's the code that will execute on the thread:

        private static void WriteFile()
        {
            Thread.Sleep(1000);
            Console.WriteLine("File Written.");
        }

    The call to Thread.Sleep(1000) delays thread execution. The parameter is specified in milliseconds, and 1000 means that this will cause the program to sleep for approximately 1 second. This method happens to be static, but that's just part of this example, which you'll see is launched from the static Main method. A thread method could be instance or static. Notice that the method does not have parameters and does not have a return type.

    As you know, the way to refer to a method is via a delegate. There is a delegate named ThreadStart in System.Threading that refers to a method without parameters or a return type, shown below:

        ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile);

    I'll show you the whole program below, but the ThreadStart instance above goes in the Main method. The thread uses the ThreadStart instance, fileWriterHandlerDelegate, to specify the method to execute on the thread:

        Thread fileWriter = new Thread(fileWriterHandlerDelegate);

    As shown above, the argument type for the Thread constructor is the ThreadStart delegate type, and the fileWriterHandlerDelegate argument is an instance of the ThreadStart delegate type. This creates a thread instance and specifies what code will execute, but the new thread instance, fileWriter, isn't running yet. You have to explicitly start it, like this:

        fileWriter.Start();

    Now the code in the WriteFile method is executing on a separate thread, while the main thread that started the fileWriter thread continues on its own. You have two threads running at the same time.

    Okay, I'm Starting to Get Glassy Eyed. How Does it All Fit Together? The example below is the whole program, pulling all the previous bits together. It's followed by its output and an explanation.

        using System;
        using System.Threading;

        namespace BasicThread
        {
            class Program
            {
                static void Main()
                {
                    ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile);
                    Thread fileWriter = new Thread(fileWriterHandlerDelegate);

                    Console.WriteLine("Starting FileWriter");
                    fileWriter.Start();
                    Console.WriteLine("Called FileWriter");
                    Console.ReadKey();
                }

                private static void WriteFile()
                {
                    Thread.Sleep(1000);
                    Console.WriteLine("File Written");
                }
            }
        }

    And here's the output:

        Starting FileWriter
        Called FileWriter
        File Written

    So, Why are the Printouts Backwards? The output above corresponds to the Console.WriteLine statements in the program, with the second and third seemingly reversed. In a single-threaded program, "File Written" would print before "Called FileWriter". However, this is a multi-threaded (2 or more threads) program. In multi-threading, you can't make any assumptions about when a given thread will run. In this case, I added the Sleep statement to the WriteFile method to greatly increase the chances that the message from the main thread will print first. Without the Thread.Sleep, you could run this on a system with multiple cores and/or multiple processors and potentially get different results each time.

    Interesting Tangent, but What Should I Get Out of All This? Going back to the main point, launching the WriteFile method on a separate thread made the program more responsive. The file-writing logic ran for a while, but the main thread returned to the user, as demonstrated by the printout of "Called FileWriter". When the file write finished, it let the user know via another print statement. This was a very efficient use of CPU resources that made for a more pleasant user experience. Joe
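
    As a footnote to the article above, the same pattern is shorter on the TPL that the author mentions. A sketch, with the caveat that Task.Run shown here arrived in .NET 4.5 (on 4.0 you would use Task.Factory.StartNew instead):

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class Program
        {
            static void Main()
            {
                Console.WriteLine("Starting FileWriter");
                // Queue the work to the thread pool instead of creating
                // a dedicated thread by hand.
                Task fileWrite = Task.Run(() =>
                {
                    Thread.Sleep(1000); // simulate a slow file write
                    Console.WriteLine("File Written");
                });
                Console.WriteLine("Called FileWriter");
                Console.ReadKey();
            }
        }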

    Read the article

  • Graphic card for parallel programming vs traditional methods

    - by Sambatyon
    A simple search on Amazon shows that the modern approach to parallel programming is to use your graphics card. However, I am still a little bit skeptical about it. My latest computer has an 8-core CPU, which is enough for all my basic parallel needs; if I need more, I will probably use MPI over a network with my old machines. All in all, why and/or when should I use CUDA or another method that uses my graphics card instead of traditional methods like pthreads, Java threads, Boost threads, or the new C++11 threads? What about using processes?

    Read the article

  • Concurrency Utilities for Java EE Early Draft (JSR 236)

    - by arungupta
    Concurrency Utilities for Java EE is being worked on as JSR 236 and has released an Early Draft. It provides concurrency capabilities to Java EE application components without compromising container integrity. Simple (common) and advanced concurrency patterns are easily supported without sacrificing usability. Using Java SE concurrency utilities such as the java.util.concurrent API, java.lang.Thread, and java.util.Timer in a Java EE application component such as an EJB or Servlet is problematic, since the container and server have no knowledge of these resources. JSR 236 enables concurrency largely by extending the Concurrency Utilities API developed under JSR-166. This also allows consistency between the Java SE and Java EE concurrency programming models. There are four main programming interfaces available:

    ManagedExecutorService
    ManagedScheduledExecutorService
    ContextService
    ManagedThreadFactory

    ManagedExecutorService is a managed version of java.util.concurrent.ExecutorService. The implementations of this interface are provided by the container and accessible using a JNDI reference:

        <resource-env-ref>
          <resource-env-ref-name>
            concurrent/BatchExecutor
          </resource-env-ref-name>
          <resource-env-ref-type>
            javax.enterprise.concurrent.ManagedExecutorService
          </resource-env-ref-type>
        </resource-env-ref>

    and available as:

        @Resource(name="concurrent/BatchExecutor")
        ManagedExecutorService executor;

    It's recommended to bind the JNDI references in the java:comp/env/concurrent subcontext. The asynchronous tasks that need to be executed need to implement the java.lang.Runnable or java.util.concurrent.Callable interface, as:

        public class MyTask implements Runnable {
            public void run() {
                // business logic goes here
            }
        }

    OR

        public class MyTask2 implements Callable<Date> {
            public Date call() {
                // business logic goes here
            }
        }

    The task is then submitted to the executor using one of the submit methods, which return a Future instance. The Future represents the result of the task and can also be used to check if the task is complete or wait for its completion.

        Future<String> future = executor.submit(new MyTask(), String.class);
        . . .
        String result = future.get();

    Another example of submitting tasks is:

        class MyTask implements Callable<Long> { . . . }
        class MyTask2 implements Callable<Date> { . . . }

        ArrayList<Callable> tasks = new ArrayList<Callable>();
        tasks.add(new MyTask());
        tasks.add(new MyTask2());
        List<Future<Object>> result = executor.invokeAll(tasks);

    The ManagedExecutorService may be configured for different properties such as:

    Hung Task Threshold: Time in milliseconds that a task can execute before it is considered hung
    Pool Info:
      Core Size: Number of threads to keep alive
      Maximum Size: Maximum number of threads allowed in the pool
      Keep Alive: Time to allow threads to remain idle when the number of threads is greater than Core Size
      Work Queue Capacity: Number of tasks that can be stored in the inbound buffer
    Thread Use: Whether the application intends to run short- or long-running tasks; accordingly, pooled or daemon threads are picked

    ManagedScheduledExecutorService adds delay and periodic task-running capabilities to ManagedExecutorService. The implementations of this interface are likewise provided by the container and accessible using a JNDI reference:

        <resource-env-ref>
          <resource-env-ref-name>
            concurrent/timedExecutor
          </resource-env-ref-name>
          <resource-env-ref-type>
            javax.enterprise.concurrent.ManagedScheduledExecutorService
          </resource-env-ref-type>
        </resource-env-ref>

    and available as:

        @Resource(name="concurrent/timedExecutor")
        ManagedScheduledExecutorService executor;

    And then the tasks are submitted using the submit, invokeXXX or scheduleXXX methods.

        ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS);

    This will create and execute a one-shot action that becomes enabled after 5 seconds of delay. More control is possible using one of the newly added methods:

        class MyTaskListener implements ManagedTaskListener {
            public void taskStarting(...) { . . . }
            public void taskSubmitted(...) { . . . }
            public void taskDone(...) { . . . }
            public void taskAborted(...) { . . . }
        }

        ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS, new MyTaskListener());

    Here, ManagedTaskListener is used to monitor the state of a task's Future. ManagedThreadFactory provides a method for creating threads for execution in a managed environment. A simple usage is:

        @Resource(name="concurrent/myThreadFactory")
        ManagedThreadFactory factory;
        . . .
        Thread thread = factory.newThread(new Runnable() { . . . });

    concurrent/myThreadFactory is a JNDI resource. There is a lot of interesting content in the Early Draft; download it and read it yourself. The implementation will be made available soon and will also be integrated into GlassFish 4. Some references for further exploring ...

    Javadoc
    Early Draft Specification
    concurrency-ee-spec.java.net
    [email protected]

    Read the article

  • Threading iPhone

    - by bobobobo
    Say I have a group of large meshes that I have to intersect rays against. Assume also, for whatever reason, that I cannot further reduce the polygon-check count by spatial subdivision. I can do this in parallel:

        bool intersects( list of meshes ) // a mesh is a group of triangles
        {
            create n threads
            foreach mesh in meshes
                assign to a thread in threads
            wait until ( threads.run() ) ; // run asynchronously
            // when they're all done,
            // pull out intersected triangles
            // from per-thread context data
        }

    Can you do this in iOS for games? Or is the overhead of thread creation and mutex waiting going to outweigh the benefit of multithreading?
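
    For what it's worth, the usual way to avoid paying per-call thread creation on iOS is Grand Central Dispatch, which fans iterations out over a system-managed pool. A minimal sketch, where the Mesh/Ray/Hit types and the per-mesh test are illustrative placeholders:

        #include <dispatch/dispatch.h>
        #include <stdbool.h>
        #include <stddef.h>

        typedef struct Mesh Mesh;   /* hypothetical geometry types */
        typedef struct Ray  Ray;
        typedef struct Hit  Hit;

        /* Hypothetical per-mesh test; writes its result into *hit. */
        bool ray_hits_mesh(const Mesh *mesh, const Ray *ray, Hit *hit);

        void intersect_all(const Mesh *const meshes[], size_t count,
                           const Ray *ray, Hit *const hits[])
        {
            /* dispatch_apply blocks until all iterations finish, but runs
               them concurrently on GCD's global queue: no thread creation
               per call, and no mutexes, since each index writes only to
               its own result slot. */
            dispatch_apply(count,
                           dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                           ^(size_t i) {
                ray_hits_mesh(meshes[i], ray, hits[i]);
            });
        }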

    Read the article

  • Disqus thread migration. Gotchas?

    - by sramsay
    I've been migrating a site to a new domain. The site itself is pretty straightforward (it uses Jekyll), and everything has gone fine -- except the migration of Disqus threads. I've had partial success -- some of the threads have migrated successfully, but not all. I've tried the domain migration wizard (which caught a few), the URL mapper (which caught a few), and the 301 redirect crawler (which caught a few). But the remaining threads just won't move, no matter which method I use. So I suppose I'm asking if there are any "gotchas" I should know about with this. When you execute any of these migration tools, it says it will "take awhile." Does that mean hours? Days? I can't tell if it's working, and there's no logging or error reporting that I can see.

    Read the article

  • Thread count in Java game

    - by Taylor Hill
    I'm just curious as to what a reasonable number of threads is for a simple 2D MMO in Java. Is it reasonable to have two threads per connection, one for the input stream and one for the output stream? The reason I ask is because I use a blocking method on the input stream, and a workaround seems unnecessarily complex if I were to try to get around it without adding threads. This is mostly for my own edification; I don't expect to have 5 million people playing it ever, or even 5, but I'm wondering what a good scalable solution is, and if this is reasonable for a small server (<30 connections).
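
    The standard way to escape the two-threads-per-connection model is non-blocking I/O, where one thread multiplexes every socket with java.nio's Selector. A rough sketch (port and buffer size are arbitrary):

        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.*;

        public class GameServer {
            public static void main(String[] args) throws Exception {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.socket().bind(new InetSocketAddress(5000));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);

                ByteBuffer buf = ByteBuffer.allocate(4096);
                while (true) {
                    selector.select();                        // blocks until any channel is ready
                    for (SelectionKey key : selector.selectedKeys()) {
                        if (key.isAcceptable()) {             // new player connecting
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {        // data from an existing player
                            buf.clear();
                            SocketChannel ch = (SocketChannel) key.channel();
                            if (ch.read(buf) < 0) ch.close(); // player disconnected
                            // else: hand buf's contents to the game logic
                        }
                    }
                    selector.selectedKeys().clear();
                }
            }
        }

    For fewer than 30 connections, though, a blocking thread (or two) per connection is perfectly workable; the selector approach starts to matter when connection counts grow into the hundreds.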

    Read the article

  • Throttling in OSB

    - by Knut Vatsendvik
    A common problem with integration is the risk of overloading a particular web service. When the capacity of a web service is reached and it continues to accept connections, it will most likely start to deteriorate. Fortunately, Oracle Service Bus gives you two techniques for protecting against this: you can either limit the concurrent number of requests for a Business Service (outbound requests), or you can limit the number of threads processing the requests for a Proxy Service (inbound requests).

    Limiting the Concurrent Number of Requests

    Limiting the concurrent requests for a Business Service cannot be set at design time, so you have to use the built-in Oracle Service Bus Administration Console (/sbconsole) to do it. Follow these steps to enable it:

    1. In Change Center, click Create to start a new session.
    2. Select Project Explorer, and navigate to the Business Service you want to limit.
    3. Select the Operational Settings tab of the View a Business Service page.
    4. In this tab, under Throttling, select the Enable check box. With throttling enabled:
       - Specify a value for Maximum Concurrency.
       - Specify a positive integer value for Throttling Queue, to backlog messages that have exceeded the concurrency limit.
       - Specify the maximum time in milliseconds (Message Expiration) a message can spend in the Throttling Queue.
    5. Click Update.
    6. Click Activate in Change Center to activate the new settings.

    If you re-publish the service, it will not overwrite the settings; only if the resource is renamed or moved will they be lost. Please note that a throttling queue is an in-memory queue. Messages that are placed in this queue are not recoverable when a server fails or when you restart a server.

    Limiting the Number of Threads

    A better approach, in my opinion, is to limit the number of threads that can work with requests. Follow these steps to do it:

    1. Open the WebLogic Server Console (/console).
    2. In Change Center, click Create to start a new session.
    3. In the left pane, expand Environment and select Work Managers.
    4. In the Global Work Managers page, click New.
    5. Click the Work Manager radio button, then click Next.
    6. Enter a Name for the new Work Manager, and click Next.
    7. In the Available Targets list, select the server instances or clusters on which you will deploy applications that reference the Work Manager.
    8. Click Finish. The new Work Manager now appears in the Global Work Managers page.
    9. Select the new Work Manager.
    10. Next to the Maximum Threads Constraint drop-down box, click New.
    11. Click the Maximum Threads Constraint radio button, then click Next.
    12. Enter a Name and a thread Count to be the maximum size to allocate for requests, then click Next.
    13. In the Available Targets list, select the server instances or clusters, then click Finish.
    14. Click Save, then click Activate in Change Center to activate your changes. A restart may be necessary.

    Puh! Almost there. Start a new session, go to the Service Bus Console (/sbconsole), and find your consuming Proxy Service:

    1. Click the Edit button of the Transport Configuration tab, then click Next.
    2. Set the Dispatch Policy to the new Work Manager.
    3. Click Last, then click Save.
    4. Click Activate in Change Center to activate your changes.

    Read the article

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
    Significance of Results

    Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance for the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions, and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well-suited to SPARC T4 servers.

    The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10 by 80%, 32%, and 2x on Data Loading, Default Aggregation, and Usage Based Aggregation, respectively. The SPARC T4-2 server with the Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15-dimension database. The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15-dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads. The Sun Storage F5100 Flash Array provides more than a 20% improvement out-of-the-box compared to a mid-size Fibre Channel disk array for default aggregation and user-based aggregation, and with Oracle Essbase provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation.

    Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

    Performance Landscape

        System                               Data Size            Database Load  Default Aggregation  Usage Based Aggregation
                                             (millions of items)  (minutes)      (minutes)            (minutes)
        SPARC T4-2, 2 x SPARC T4 2.85 GHz    1000                 149            398*                 55
        Sun M5000, 4 x SPARC64 VII 2.53 GHz  1000                 269            526                  115
        Sun M5000, 4 x SPARC64 VII 2.4 GHz   400                  120            448                  18

        * 398 minutes with CALCPARALLEL set to 16; 484 minutes with CALCPARALLEL set to 8

    Configuration Summary

    Hardware Configuration:
      1 x SPARC T4-2
        2 x 2.85 GHz SPARC T4 processors
        128 GB memory
        2 x 300 GB 10000 RPM SAS internal disks

    Storage Configuration:
      1 x Sun Storage F5100 Flash Array
        40 x 24 GB flash modules
        SAS HBA with 2 SAS channels
      Data Storage Scheme: Striped - RAID 0
      Oracle Solaris ZFS

    Software Configuration:
      Oracle Solaris 10 8/11
      Installer V 11.1.2.2.100
      Oracle Essbase Client v 11.1.2.2.100
      Oracle Essbase v 11.1.2.2.100
      Oracle Essbase Administration Services 64-bit
      Oracle Database 11g Release 2 (11.2.0.3)
      HP's Mercury Interactive QuickTest Professional 9.5.0

    Benchmark Description

    The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce benchmark results. The benchmark test results include:

      Database Load: Time elapsed to build a database, including outline and data load.
      Default Aggregation: Time elapsed to build aggregation.
      User Based Aggregation: Time elapsed for the aggregate views proposed as a result of tracked retrieval queries.

    Summary of the data used for this benchmark:

      40 flat files, each of size 1.2 GB, 49.4 GB in total
      10 million rows per file, 1 billion rows total
      28 columns of data per row
      Database outline has 15 dimensions (five of them are attribute dimensions)
      Customer dimension has 13.3 million members
      3 rule files

    Key Points and Best Practices

    The Sun Storage F5100 Flash Array has been used to accelerate the application performance. Setting data load threads (DLTHREADSPREPARE) to 64 and Load Buffer to 6 improved data loading by about 9%. Factors influencing aggregation materialization performance are the Aggregate Storage Cache and the number of threads (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were:

      Aggregate Storage Cache: 32 GB
      CALCPARALLEL: 16

    See Also

      Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server (oracle.com)
      Oracle Essbase (oracle.com OTN)
      SPARC T4-2 Server (oracle.com OTN)
      Oracle Solaris (oracle.com OTN)
      Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN)

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup

    I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior), and there exist systems that run the entity logic and act on that data. Essentially, in somewhat pseudo-code:

        Entity
        {
            id;
            map<id_type, Attribute> attributes;
        }

        System
        {
            update();
            vector<Entity> entities;
        }

    A system that just moves along all entities at a constant rate might be:

        MovementSystem extends System
        {
            update()
            {
                for each entity in entities
                    position = entity.attributes["position"];
                    position += vec3(1,1,1);
            }
        }

    Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components, so that different threads can execute the update of the same system, but for a different subset of the entities registered with that system.

    Problem

    In reality, these systems sometimes require that entities interact with (read/write data from/to) each other -- sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (i.e. a movement system that requires data from a system that processes user input).

    Now, when trying to parallelize the update phases of entity/component systems, the phases in which data (components/attributes) from entities are read and used to compute something, and the phase where the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize the parts of the update process that depend on other parts. This seems ugly.

    To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity and don't require much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work a greater percentage of the time instead of waiting).

    A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system); otherwise the remaining threads are waiting and wasting time. However, that has disadvantages: practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once which all act on different sets of data; and finding and assembling other systems to run in parallel can be extremely time-consuming to design well enough to optimize performance. Sometimes it might not even be possible at all, because a system simply depends on data that is touched by all other systems.

    Solution?

    In my thinking, a possible solution would be a system where reading/updating and writing of data are separated, so that in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap, write phase, the attributes of entities that needed to be modified are finally written back to the entities.

    The Question

    How might such a system be implemented to achieve optimal performance, as well as making the programmer's life easier? What are the implementation details of such a system, and what might have to be changed in the existing EC architecture to accommodate this solution?
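
    One common shape for the read/compute/write split described above is a double-buffered attribute store: systems read a stable front copy in parallel, queue their writes, and a cheap serial commit applies them at the end of the frame. A minimal C++ sketch under those assumptions (all names illustrative):

        #include <cstdint>
        #include <mutex>
        #include <unordered_map>
        #include <vector>

        struct Vec3 { float x, y, z; };

        class PositionStore {
        public:
            // Compute phase: any thread may read the front copy freely,
            // because nothing mutates it until commit().
            Vec3 read(uint32_t entity) const {
                return front_.at(entity);
            }

            // Compute phase: writes are only queued, never applied.
            void queueWrite(uint32_t entity, Vec3 value) {
                std::lock_guard<std::mutex> lock(mutex_);
                pending_.push_back({entity, value});
            }

            // Write phase: single-threaded, cheap, run once per frame
            // after all compute tasks have joined.
            void commit() {
                for (const auto& w : pending_) front_[w.entity] = w.value;
                pending_.clear();
            }

        private:
            struct Write { uint32_t entity; Vec3 value; };
            std::unordered_map<uint32_t, Vec3> front_;
            std::mutex mutex_;
            std::vector<Write> pending_;
        };

    Every system sees the same consistent snapshot during update(), so systems can be scheduled in parallel without regard to what they read; the serialization cost is confined to the short commit(). Per-thread pending lists merged at commit time would remove even the mutex on the queueing path.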

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:

        % pgrep vmtasks
        8
        % prstat -p 8
           PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
             8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32

    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflg & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl().

    Segment operations such as creation and locking are typically single-threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time. To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 threads of vmtasks are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing, the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future.

    Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86). The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:

        ISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1386    245      6X
        X7560     64    1016    153      7X
        M9000    512    1196    206      6X
        T5240    128    2506    234     11X
        T4-2     128    1197    107     11X

        DISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1582    265      6X
        X7560     64    1116    158      7X
        M9000    512    1165    152      8X
        T5240    128    2796    198     14X

    (I am missing the data for T4 DISM, for no good reason; it works fine.) The following table separates the creation and destruction times:

        ISM, T4-2  before  after
        ---------  ------  -----
        create        702     64
        destroy       495     43

    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.
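
    The chunk-grabbing scheme described above is easy to picture in code. A simplified sketch (the real kernel implementation is of course more involved), with helper threads pulling chunk indexes off a shared atomic counter until none remain:

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stddef.h>

        #define NCHUNKS 64              /* more chunks than threads, for balance */

        struct seg_work {
            atomic_int next_chunk;      /* index of the next unclaimed chunk */
            size_t pages_per_chunk;
            /* ... segment state ... */
        };

        /* Placeholder: create/lock each page in the chunk's page range. */
        static void touch_pages(struct seg_work *w, int chunk)
        {
            (void)w; (void)chunk;
        }

        static void *helper(void *arg)
        {
            struct seg_work *w = arg;
            for (;;) {
                int chunk = atomic_fetch_add(&w->next_chunk, 1); /* grab one */
                if (chunk >= NCHUNKS)
                    break;              /* all chunks claimed; this helper is done */
                /* Per-chunk cost varies, so faster threads naturally end up
                   processing more chunks: dynamic load balancing. */
                touch_pages(w, chunk);
            }
            return NULL;
        }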

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.

    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At this point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program.

    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n choose n) possible "interleavings" of those instructions -- a number that itself grows exponentially with n. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.

    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs, but still has performance issues if the threads need to operate on the same large piece of data.

    There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.

    Read the article

  • Sync Vs. Async Sockets Performance in C#

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on allocation). I think this makes sense if we're talking about a server with many client connections, where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them.

    But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues:

    1. The threadpool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority.
    2. I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because there's no guarantee that they represent N unique market data messages, because TCP is stream-based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array.

    So I'm not sure what having N threadpool threads is buying me. Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?

    Read the article

  • C++/CLI managed thread cleanup

    - by Guillermo Prandi
    Hi. I'm writing a managed C++/CLI library wrapper for the MySQL embedded server. The MySQL C library requires me to call mysql_thread_init() for every thread that will be using it, and mysql_thread_end() for each thread that exits after using it. Debugging any given VB.Net project I can see at least seven threads; I suppose my library will see only one thread if VB doesn't explicitly create worker threads itself (any confirmation on that?). However, I need clients of my library to be able to create worker threads if they need to, so my library must be thread-aware to some degree.

    The first option I could think of is to expose some "EnterThread()" and "LeaveThread()" methods in my class, so the client code will explicitly call them at the beginning of, and before exiting, their DoWork() method. This should work if (1) .NET doesn't "magically" create threads the user isn't aware of, and (2) the user is careful enough to have the methods called in a try/finally structure of some sort. However, I don't much like having the user handle things manually like that, and I wonder if I could give her a hand with that matter.

    In a pure Win32 C/C++ DLL I do have the DllMain DLL_THREAD_ATTACH and DLL_THREAD_DETACH pseudo-events, and I could use them for calling mysql_thread_init() and mysql_thread_end() as needed, but there seems to be no such thing in C++/CLI managed code. At the expense of some performance (not much, I think) I can use TLS for detecting the "usage from a new thread" case, but I can imagine no mechanism for the thread-exiting case. So, my questions are: (1) could .NET create application threads without the user being aware of them? and (2) is there any mechanism I could use similar to DLL_THREAD_ATTACH / DLL_THREAD_DETACH from managed C++/CLI? Thanks in advance.

    Read the article

  • Threading best practice when using SFTP in C#

    - by Christian
    Ok, this is more one of those "conceptual questions", but I hope I got some pointers in the right direction. First the desired scenario: I want to query an SFTP server for directory and file lists, and I want to upload or download files simultaneously. Both things are pretty easy using an SFTP class provided by Tamir.SharpSsh, but if I only use one thread, it is kind of slow. Especially the recursion into subdirs gets very "UI blocking", because we are talking about tens of thousands of directories. My basic approach is simple: create some kind of "pool" where I keep 10 open SFTP connections. Then query the first worker for a list of dirs. If this list was obtained, send the next free workers (e.g. 1-10, the first one is also free again) to get the subdirectory details. As soon as there is a worker free, send him for the subsubdirs. And so on...

    I know the ThreadPool and simple Threads, and did some tests. What confuses me a little bit is the following. I basically need:

    - A list of threads I create, say 10
    - To connect all threads to the server
    - If a connection drops, to create a new thread / SFTP client
    - If there is work to do, to take the first free thread and hand it the work

    I am currently not sure about the implementation details, especially the "work to do" and the "maintain list of threads" parts. Is it a good idea to:

    - Enclose the work in an object, containing a job description (path) and a callback
    - Send the threads into an infinite loop with a 100ms wait, to wait for work
    - If SFTP is dead, either revive it, or kill the whole thread and create a new one

    How do I encapsulate this? Do I write my own "10ThreadsManager" or are there some out there? Ok, so far... Btw, I could also use PRISM events and commands, but I think the problem is unrelated. Perhaps the EventModel to signal a done processing of a "work package"... Thanks for any ideas, critique.. Chris
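
    A rough sketch of the worker-pool shape described above, assuming .NET 4's BlockingCollection so the workers block on the queue instead of polling every 100ms (the SFTP calls are stubbed out, since the Tamir.SharpSsh specifics aren't shown here):

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading;

        // Hypothetical work item: a remote path plus a callback for results.
        class SftpJob
        {
            public string Path;
            public Action<IList<string>> OnListed;
        }

        class SftpWorkerPool : IDisposable
        {
            private readonly BlockingCollection<SftpJob> _queue =
                new BlockingCollection<SftpJob>();

            public SftpWorkerPool(int workerCount)
            {
                for (int i = 0; i < workerCount; i++)
                    new Thread(Work) { IsBackground = true }.Start();
            }

            public void Enqueue(SftpJob job) { _queue.Add(job); }

            private void Work()
            {
                // Each worker owns one SFTP connection; on failure it would
                // reconnect here rather than killing the thread.
                foreach (var job in _queue.GetConsumingEnumerable())
                {
                    IList<string> entries = ListDirectory(job.Path);
                    job.OnListed(entries);   // callback may Enqueue subdirs
                }
            }

            private IList<string> ListDirectory(string path)
            {
                return new List<string>(); // stub for the real SFTP list call
            }

            public void Dispose() { _queue.CompleteAdding(); }
        }

    The callback re-enqueueing subdirectories gives exactly the fan-out described (one worker finds dirs, free workers pick them up), and GetConsumingEnumerable removes the need for the infinite-loop-plus-sleep pattern.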

    Read the article

  • perl multiple tasks problem

    - by Alice Wozownik
    I have finished my earlier multithreaded program that uses Perl threads, and it works on my system. The problem is that on some systems that it needs to run on, thread support is not compiled into Perl, and I cannot install additional packages. I therefore need to use something other than threads, and I am moving my code to using fork(). This works on my Windows system in starting the subtasks. A few problems:

    1. How do I determine when a child process exits? I created new threads when the thread count was below a certain value, and I need to keep track of how many are running. For processes, how do I know when one exits, so I can keep track of how many exist at a time, incrementing a counter when one is created and decrementing when one exits?

    2. Is file I/O using handles obtained with open() by the parent process safe in the child process? I need to append to a file from each of the child processes. Is this safe on Unix as well?

    3. Is there any alternative to fork and threads? I tried Parallel::ForkManager, but that isn't installed on my system (use Parallel::ForkManager; gave an error), and I absolutely require that my Perl script work on all Unix/Windows systems without installing any additional modules.
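
    On the first question, a sketch of the usual module-free way to track children, using waitpid from the core POSIX module; WNOHANG reaps finished children without blocking, so a counter can be kept accurate (the task count and the work itself are placeholders):

        use strict;
        use warnings;
        use POSIX ":sys_wait_h";

        my $max_children = 5;
        my $running      = 0;

        sub reap_children {
            my ($block) = @_;
            # waitpid returns a child's pid once it has exited, 0 (with
            # WNOHANG) if none have, and -1 when no children remain.
            while ((my $pid = waitpid(-1, $block ? 0 : WNOHANG)) > 0) {
                $running--;
                $block = 0;   # after one blocking reap, drain non-blocking
            }
        }

        foreach my $task (1 .. 20) {
            reap_children($running >= $max_children); # block only at the limit
            my $pid = fork();
            die "fork failed: $!" unless defined $pid;
            if ($pid == 0) {
                # Child: do the work, write to its own log file, and exit.
                open my $fh, '>>', "child.$$.log" or die $!;
                print {$fh} "task $task done\n";
                close $fh;
                exit 0;
            }
            $running++;
        }
        reap_children(1) while $running > 0;          # wait for stragglers

    On the second question: a handle opened before fork() is shared by parent and child (they share the file offset), so several processes appending through inherited handles can interleave output; giving each child its own file, or re-opening the log in append mode inside the child, is the safer pattern on both Unix and Windows.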

    Read the article

  • Tomcat stops responding to JK requests

    - by Bruno Reis
    Hello. I have a nasty issue with load-balanced Tomcat servers that are hanging up. Any help would be greatly appreciated.

    The system: I'm running Tomcat 6.0.26 on HotSpot Server 14.3-b01 (Java 1.6.0_17-b04) on three servers sitting behind another server that acts as load balancer. The load balancer runs Apache (2.2.8-1) + mod_jk (1.2.25). All of the servers are running Ubuntu 8.04. The Tomcats have two connectors configured: an AJP one and an HTTP one. The AJP one is used with the load balancer, while the HTTP one is used by the dev team to directly connect to a chosen server (if we have a reason to do so). I have Lambda Probe 1.7b installed on the Tomcat servers to help me diagnose and fix the problem soon to be described.

    The problem: after the application servers have been up for about a day, JK Status Manager starts reporting status ERR for, say, Tomcat2. It will simply get stuck in this state, and the only fix I've found so far is to ssh into the box and restart Tomcat. I must also mention that JK Status Manager takes a lot longer to refresh when there's a Tomcat server in this state. Finally, the "Busy" count of the stuck Tomcat in JK Status Manager is always high and won't go down on its own -- I must restart the Tomcat server, wait, then reset the worker on JK.

    Analysis: since I have two connectors on each Tomcat (AJP and HTTP), I can still connect to the application through the HTTP one. The application works just fine like this -- very, very fast. That is perfectly normal, since I'm the only one using this server (as JK stopped delegating requests to this Tomcat). To try to better understand the problem, I've taken a thread dump from a Tomcat which is no longer responding, and from another one that had been restarted recently (say, 1 hour before).

    The instance that is responding normally to JK shows most of the TP-ProcessorXXX threads in the "Runnable" state, with the following stack trace:

        java.net.SocketInputStream.socketRead0 ( native code )
        java.net.SocketInputStream.read ( SocketInputStream.java:129 )
        java.io.BufferedInputStream.fill ( BufferedInputStream.java:218 )
        java.io.BufferedInputStream.read1 ( BufferedInputStream.java:258 )
        java.io.BufferedInputStream.read ( BufferedInputStream.java:317 )
        org.apache.jk.common.ChannelSocket.read ( ChannelSocket.java:621 )
        org.apache.jk.common.ChannelSocket.receive ( ChannelSocket.java:559 )
        org.apache.jk.common.ChannelSocket.processConnection ( ChannelSocket.java:686 )
        org.apache.jk.common.ChannelSocket$SocketConnection.runIt ( ChannelSocket.java:891 )
        org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run ( ThreadPool.java:690 )
        java.lang.Thread.run ( Thread.java:619 )

    The instance that is stuck shows most (all?) of the TP-ProcessorXXX threads in the "Waiting" state. These have the following stack trace:

        java.lang.Object.wait ( native code )
        java.lang.Object.wait ( Object.java:485 )
        org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run ( ThreadPool.java:662 )
        java.lang.Thread.run ( Thread.java:619 )

    I don't know the internals of Tomcat, but I would infer that the "Waiting" threads are simply threads sitting in a thread pool. So, if they are threads waiting in a thread pool, why wouldn't Tomcat put them to work on processing requests from JK?

    Solution? As I've stated before, the only fix I've found is to stop the Tomcat instance, stop the JK worker, wait for the latter's busy count to slowly go down, start Tomcat again, and enable the JK worker once again. What is causing this problem? How should I further investigate it? What can I do to solve it? Thanks in advance.

    Read the article

  • Using NHibernate's HQL to make a query with multiple inner joins

    - by Abu Dhabi
    The problem here consists of translating a statement written in LINQ to SQL syntax into the equivalent for NHibernate. The LINQ to SQL code looks like so:

        var whatevervar = from threads in context.THREADs
                          join threadposts in context.THREADPOSTs
                              on threads.thread_id equals threadposts.thread_id
                          join posts1 in context.POSTs
                              on threadposts.post_id equals posts1.post_id
                          join users in context.USERs
                              on posts1.user_id equals users.user_id
                          orderby posts1.post_time
                          where threads.thread_id == int.Parse(id)
                          select new
                          {
                              threads.thread_topic,
                              posts1.post_time,
                              users.user_display_name,
                              users.user_signature,
                              users.user_avatar,
                              posts1.post_body,
                              posts1.post_topic
                          };

    It's essentially trying to grab a list of posts within a given forum thread. The best I've been able to come up with (with the help of the helpful users of this site) for NHibernate is:

        var whatevervar = session.CreateQuery("select t.Thread_topic, p.Post_time, " +
                                              "u.User_display_name, u.User_signature, " +
                                              "u.User_avatar, p.Post_body, p.Post_topic " +
                                              "from THREADPOST tp " +
                                              "inner join tp.Thread_ as t " +
                                              "inner join tp.Post_ as p " +
                                              "inner join p.User_ as u " +
                                              "where tp.Thread_ = :what")
                                 .SetParameter<THREAD>("what", threadid)
                                 .SetResultTransformer(Transformers.AliasToBean(typeof(MyDTO)))
                                 .List<MyDTO>();

    But that doesn't parse well, complaining that the aliases for the joined tables are null references. MyDTO is a custom type for the output:

        public class MyDTO
        {
            public string thread_topic { get; set; }
            public DateTime post_time { get; set; }
            public string user_display_name { get; set; }
            public string user_signature { get; set; }
            public string user_avatar { get; set; }
            public string post_topic { get; set; }
            public string post_body { get; set; }
        }

    I'm out of ideas, and while doing this by direct SQL query is possible, I'd like to do it properly, without defeating the purpose of using an ORM. Thanks in advance!
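
    One adjustment worth trying, offered only as a hedged sketch against the mappings shown above: the AliasToBean transformer matches on select-clause aliases, so each projected column needs an "as" alias matching a MyDTO property, and the parameter is simpler as the plain integer id:

        var posts = session.CreateQuery(
                @"select t.Thread_topic as thread_topic,
                         p.Post_time as post_time,
                         u.User_display_name as user_display_name,
                         u.User_signature as user_signature,
                         u.User_avatar as user_avatar,
                         p.Post_body as post_body,
                         p.Post_topic as post_topic
                  from THREADPOST tp
                      inner join tp.Thread_ as t
                      inner join tp.Post_ as p
                      inner join p.User_ as u
                  where t.id = :threadId
                  order by p.Post_time")
            .SetInt32("threadId", int.Parse(id))
            .SetResultTransformer(Transformers.AliasToBean(typeof(MyDTO)))
            .List<MyDTO>();

    The "t.id" form uses HQL's implicit identifier property, which avoids guessing the mapped name of the key column.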

    Read the article

  • Using a custom IList obtained through NHibernate

    - by Abu Dhabi
    Hi. I'm trying to write a web page in .NET, using C# and NHibernate 2.1. The pertinent code looks like this:

        var whatevervar = session.CreateSQLQuery(
            "select thread_topic, post_time, user_display_name, user_signature, " +
            "user_avatar, post_topic, post_body " +
            "from THREAD, [USER], POST, THREADPOST " +
            "where THREADPOST.thread_id=" + id + " " +
            "and THREADPOST.thread_id=THREAD.thread_id " +
            "and [USER].user_id=POST.user_id " +
            "and POST.post_id=THREADPOST.post_id " +
            "ORDER BY post_time;").List();

    (I have tried to use joins in HQL, but then fell back on this query, due to HQL's unreadability.) The problem is that I'm getting a result that is incompatible with a repeater. When I try this:

        posts.DataSource = whatevervar;
        posts.DataBind();

    ...I get this:

        DataBinding: 'System.Object[]' does not contain a property with the name 'user_avatar'.

    In an earlier project, I used LINQ to SQL for this same purpose, and it looked like so:

        var whatevervar = from threads in context.THREADs
                          join threadposts in context.THREADPOSTs
                              on threads.thread_id equals threadposts.thread_id
                          join posts1 in context.POSTs
                              on threadposts.post_id equals posts1.post_id
                          join users in context.USERs
                              on posts1.user_id equals users.user_id
                          orderby posts1.post_time
                          where threads.thread_id == int.Parse(id)
                          select new
                          {
                              threads.thread_topic,
                              posts1.post_time,
                              users.user_display_name,
                              users.user_signature,
                              users.user_avatar,
                              posts1.post_body,
                              posts1.post_topic
                          };

    That worked, and now I want to do the same with NHibernate. Unfortunately, I don't know how to make the repeater recognize the fields of the result of the query. Thanks in advance!
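
    A hedged sketch of one fix: List() on a plain SQL query returns object[] rows, which a repeater can't bind by name, so ask NHibernate to hydrate a DTO per row instead. This assumes a MyDTO class whose property names match the selected columns, like the one in the previous question; the :id parameter also replaces the string-concatenated id:

        var rows = session.CreateSQLQuery(
                "select thread_topic, post_time, user_display_name, user_signature, " +
                "user_avatar, post_topic, post_body " +
                "from THREAD, [USER], POST, THREADPOST " +
                "where THREADPOST.thread_id = :id " +
                "and THREADPOST.thread_id = THREAD.thread_id " +
                "and [USER].user_id = POST.user_id " +
                "and POST.post_id = THREADPOST.post_id " +
                "order by post_time")
            .SetInt32("id", int.Parse(id))
            .SetResultTransformer(Transformers.AliasToBean(typeof(MyDTO)))
            .List<MyDTO>();

        posts.DataSource = rows;   // repeater now binds by MyDTO property name
        posts.DataBind();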

    Read the article

  • How to use IObservable/IObserver with ConcurrentQueue or ConcurrentStack

    - by James Black
    I realized that when I am trying to process items in a concurrent queue using multiple threads, while multiple threads can be putting items into it, the ideal solution would be to use the Reactive Extensions with the concurrent data structures. My original question is at:

    http://stackoverflow.com/questions/2997797/while-using-concurrentqueue-trying-to-dequeue-while-looping-through-in-parallel/

    So I am curious if there is any way to have a LINQ (or PLINQ) query that will continuously be dequeueing as items are put into it. I am trying to get this to work in a way where I can have n producers pushing into the queue and a limited number of threads processing it, so I don't overload the database. If I could use the Rx framework then I expect that I could just start it, and if 100 items are placed in within 100ms, then the 20 threads that are part of the PLINQ query would just process through the queue. There are three technologies I am trying to get to work together:

    - Rx Framework (Reactive LINQ)
    - PLINQ
    - System.Collections.Concurrent structures
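
    A sketch of one non-Rx way to get the same effect with the concurrent collections alone, assuming a fixed cap of consumers so the database stays protected (the item type and the database call are placeholders):

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class QueueProcessor
        {
            private readonly BlockingCollection<string> _items =
                new BlockingCollection<string>(new ConcurrentQueue<string>());

            public void Start(int consumerCount)    // e.g. 20
            {
                for (int i = 0; i < consumerCount; i++)
                {
                    Task.Factory.StartNew(() =>
                    {
                        // Blocks when the queue is empty, wakes as producers
                        // add items, exits after CompleteAdding() is called.
                        foreach (var item in _items.GetConsumingEnumerable())
                            SaveToDatabase(item);
                    }, TaskCreationOptions.LongRunning);
                }
            }

            public void Add(string item) { _items.Add(item); }   // any number of producers
            public void Finish() { _items.CompleteAdding(); }

            private void SaveToDatabase(string item)
            {
                // placeholder for the throttled database write
            }
        }

    BlockingCollection wraps the ConcurrentQueue and supplies the blocking/waking behavior that a raw PLINQ query over the queue lacks; the consumer count is the throttle.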

    Read the article

  • Java Swing Threading with Updatable JProgressBar

    - by Anthony Sparks
    First off, I've been working with Java's concurrency package quite a bit lately, but I have found an issue that I am stuck on. I want to have an Application, and the Application can have a SplashScreen with a status bar and the loading of other data. So I decided to use SwingUtilities.invokeAndWait( call the splash component here ). The SplashScreen then appears with a JProgressBar and runs a group of threads. But I can't seem to get a good handle on things. I've looked over SwingWorker and tried using it for this purpose, but the thread just returns. Here is a bit of pseudo-code, and the points I'm trying to achieve:

    - Have an Application that has a SplashScreen that pauses while loading info
    - Be able to run multiple threads under the SplashScreen
    - Have the progress bar of the SplashScreen updatable, yet not exit until all threads are done

    Launching the splash screen:

        try {
            SwingUtilities.invokeAndWait( SplashScreen );
        } catch (InterruptedException e) {
        } catch (InvocationTargetException e) {
        }

    Splash screen construction:

        SplashScreen extends JFrame implements Runnable {
            public void run() {
                // run threads
                // while updating status bar
            }
        }

    I have tried many things, including SwingWorkers and Threads using CountDownLatches, among others. The CountDownLatches actually worked in the manner I wanted for the processing, but I was unable to update the GUI. When using SwingWorkers, either the invokeAndWait was basically nullified (which is their purpose) or it wouldn't update the GUI, even when using a PropertyChangeListener. If someone else has a couple of ideas it would be great to hear them. Thanks in advance.
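
    A sketch of how the pieces are usually wired together, with the caveat that it compresses the loading into one worker (the load steps are placeholders): the worker runs off the Event Dispatch Thread, publishes progress via setProgress, a listener updates the JProgressBar on the EDT, and the launcher blocks on get() until loading finishes.

        import java.beans.PropertyChangeEvent;
        import java.beans.PropertyChangeListener;
        import javax.swing.*;

        class SplashLoader extends SwingWorker<Void, Void> {
            private static final int STEPS = 5;

            @Override
            protected Void doInBackground() throws Exception {
                for (int i = 1; i <= STEPS; i++) {
                    Thread.sleep(500);               // placeholder: load one resource
                    setProgress(i * 100 / STEPS);    // fires a "progress" change event
                }
                return null;
            }
        }

        public class Launcher {
            public static void main(String[] args) throws Exception {
                final JFrame splash = new JFrame("Loading...");
                final JProgressBar bar = new JProgressBar(0, 100);
                SwingUtilities.invokeAndWait(new Runnable() {
                    public void run() {              // build the UI on the EDT
                        splash.add(bar);
                        splash.pack();
                        splash.setVisible(true);
                    }
                });

                SplashLoader loader = new SplashLoader();
                loader.addPropertyChangeListener(new PropertyChangeListener() {
                    public void propertyChange(PropertyChangeEvent e) {
                        if ("progress".equals(e.getPropertyName()))
                            bar.setValue((Integer) e.getNewValue()); // delivered on the EDT
                    }
                });
                loader.execute();
                loader.get();                        // main thread waits until loading ends
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() { splash.dispose(); }
                });
                // continue launching the application here
            }
        }

    To run multiple loading threads under the splash, doInBackground can submit them to an ExecutorService and count completions (or await a CountDownLatch), calling setProgress as each one finishes.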

    Read the article

  • Thread scheduling C

    - by MRP
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define NUM_THREADS 4
        #define TCOUNT 5
        #define COUNT_LIMIT 13

        int done = 0;
        int count = 0;
        int thread_ids[4] = {0, 1, 2, 3};
        int thread_runtime[4] = {0, 5, 4, 1};
        pthread_mutex_t count_mutex;
        pthread_cond_t count_threshold_cv;

        void *inc_count(void *t)
        {
            int i;
            long my_id = (long)t;
            long run_time = thread_runtime[my_id];

            if (my_id == 2 && done == 0) {
                for (i = 0; i < 5; i++) {
                    if (i == 4) { done = 1; }
                    pthread_mutex_lock(&count_mutex);
                    count++;
                    if (count == COUNT_LIMIT) {
                        pthread_cond_signal(&count_threshold_cv);
                        printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count);
                    }
                    printf("inc_count(): thread %ld, count = %d, unlocking mutex\n", my_id, count);
                    pthread_mutex_unlock(&count_mutex);
                }
            }

            if (my_id == 3 && done == 1) {
                for (i = 0; i < 4; i++) {
                    if (i == 3) { done = 2; }
                    pthread_mutex_lock(&count_mutex);
                    count++;
                    if (count == COUNT_LIMIT) {
                        pthread_cond_signal(&count_threshold_cv);
                        printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count);
                    }
                    printf("inc_count(): thread %ld, count = %d, unlocking mutex\n", my_id, count);
                    pthread_mutex_unlock(&count_mutex);
                }
            }

            if (my_id == 4 && done == 2) {
                for (i = 0; i < 8; i++) {
                    pthread_mutex_lock(&count_mutex);
                    count++;
                    if (count == COUNT_LIMIT) {
                        pthread_cond_signal(&count_threshold_cv);
                        printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count);
                    }
                    printf("inc_count(): thread %ld, count = %d, unlocking mutex\n", my_id, count);
                    pthread_mutex_unlock(&count_mutex);
                }
            }
            pthread_exit(NULL);
        }

        void *watch_count(void *t)
        {
            long my_id = (long)t;

            printf("Starting watch_count(): thread %ld\n", my_id);
            pthread_mutex_lock(&count_mutex);
            if (count < COUNT_LIMIT) {
                pthread_cond_wait(&count_threshold_cv, &count_mutex);
                printf("watch_count(): thread %ld Condition signal received.\n", my_id);
                count += 125;
                printf("watch_count(): thread %ld count now = %d.\n", my_id, count);
            }
            pthread_mutex_unlock(&count_mutex);
            pthread_exit(NULL);
        }

        int main(int argc, char *argv[])
        {
            int i, rc;
            long t1 = 1, t2 = 2, t3 = 3, t4 = 4;
            pthread_t threads[4];
            pthread_attr_t attr;

            pthread_mutex_init(&count_mutex, NULL);
            pthread_cond_init(&count_threshold_cv, NULL);

            pthread_attr_init(&attr);
            pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
            pthread_create(&threads[0], &attr, watch_count, (void *)t1);
            pthread_create(&threads[1], &attr, inc_count, (void *)t2);
            pthread_create(&threads[2], &attr, inc_count, (void *)t3);
            pthread_create(&threads[3], &attr, inc_count, (void *)t4);

            for (i = 0; i < NUM_THREADS; i++) {
                pthread_join(threads[i], NULL);
            }
            printf("Main(): Waited on %d threads. Done.\n", NUM_THREADS);

            pthread_attr_destroy(&attr);
            pthread_mutex_destroy(&count_mutex);
            pthread_cond_destroy(&count_threshold_cv);
            pthread_exit(NULL);
        }

    So this code creates 4 threads. Thread 1 keeps track of the count value while the other 3 increment it. The run time is the number of times a thread will increment the count value. I have a done value that allows the first incrementing thread to bump the count first, until its run time is up -- like First Come, First Served. My question is: is there a better way of implementing this? I have read about SCHED_FIFO and SCHED_RR, but I guess I don't know how to implement them in this code, or whether it can be done.
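
    On the SCHED_FIFO / SCHED_RR part, a hedged sketch of how a real-time policy is requested through the same attribute object already used in main() above. Note that POSIX requires PTHREAD_EXPLICIT_SCHED for the settings to take effect, and on Linux the real-time policies normally need root or CAP_SYS_NICE:

        #include <pthread.h>
        #include <sched.h>
        #include <stdio.h>

        /* Create a thread under SCHED_FIFO at the given priority.
           Returns 0 on success, an errno value on failure. */
        int create_fifo_thread(pthread_t *tid, int priority,
                               void *(*fn)(void *), void *arg)
        {
            pthread_attr_t attr;
            struct sched_param param = { .sched_priority = priority };
            int rc;

            pthread_attr_init(&attr);
            /* Without EXPLICIT_SCHED, the new thread inherits the creator's
               policy and the settings below are silently ignored. */
            pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
            pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
            pthread_attr_setschedparam(&attr, &param);

            rc = pthread_create(tid, &attr, fn, arg);
            if (rc != 0)
                fprintf(stderr, "pthread_create: error %d (try running as root)\n", rc);
            pthread_attr_destroy(&attr);
            return rc;
        }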

    Read the article

  • Why does delete-orphan need "cascade all" to run in JPA/Hibernate?

    - by Jerome C.
    Hello, I am trying to map a one-to-many relation with cascade "remove" (JPA) and "delete-orphan", because I don't want children to be saved or persisted when the parent is saved or persisted (security reasons, due to client-to-server transfers (GWT, Gilead)). But this configuration doesn't work. When I try with cascade "all", it works. Why does the delete-orphan option need cascade "all" to run?

    Here is the code (without ids or other fields, for simplicity; the class ForumThread defines a simple many-to-one property without cascade). When using the removeThread function inside a transactional function, it does not work, but if I change CascadeType.REMOVE to CascadeType.ALL, it does.

        @Entity
        public class Forum {

            private List<ForumThread> threads;

            /**
             * @return the threads
             */
            @OneToMany(mappedBy = "parent", cascade = CascadeType.REMOVE, fetch = FetchType.LAZY)
            @Cascade(org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
            public List<ForumThread> getThreads() {
                return threads;
            }

            /**
             * @param threads the threads to set
             */
            public void setThreads(List<ForumThread> threads) {
                this.threads = threads;
            }

            public void addThread(ForumThread thread) {
                getThreads().add(thread);
                thread.setParent(this);
            }

            public void removeThread(ForumThread thread) {
                getThreads().remove(thread);
            }
        }

    Thanks.

    Read the article
