Search Results

Search found 3641 results on 146 pages for 'threads'.

Page 49/146 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • Quad Core host with hyper-threading, how many processors to configure in VirtualBox?

    - by Anthony
    I have a quad-core i7 processor with hyper-threading (8 logical cores). When I configured a virtual machine to use 8 processors, VirtualBox gave me a warning saying that I only have four physical cores (which is true) and that this may cause a performance issue. But hyper-threading is a hardware feature, so the OS sees 8 cores and sends instructions to all 8. What if setting it to 4 causes the VM to use 2 cores (4 threads) instead of 4 simultaneous threads (one on each of the 4 cores)? Does the warning I got take into account that my machine has hyper-threading?

    Read the article

  • ffmpeg encoding on QuadCore

    - by Gotys
    I am trying to push my server's CPU cores to the max, but with no success. I'm encoding 2-pass style and set "-threads" to 128. When running the 2nd pass, the CPU is at about 98% usage, but the first pass totally ignores the "-threads" option. Using libx264. Here is my preset: flags=+loop+mv4 cmp=256 partitions=+parti4x4+parti8x8+partp4x4+partp8x8+partb8x8 me_method=hex subq=7 trellis=1 refs=5 bf=3 flags2=+bpyramid+wpred+mixed_refs+dct8x8 coder=1 me_range=16 g=250 keyint_min=25 sc_threshold=40 i_qfactor=0.71 qmin=10 qmax=51 qdiff=4 Is there any reason why the 1st pass is not utilizing my CPUs? Thank you in advance! This community has always been very kind to me.
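
    For reference, here is a minimal two-pass invocation sketch (mine, not from the post; file names are placeholders) that passes "-threads" explicitly on both passes. If the preset is loaded via -vpre/-fpre, note that ffmpeg applies options in order, so a threads value inside the preset file can silently override an earlier -threads on the command line; putting -threads after the preset option is worth trying.

        ffmpeg -y -i input.avi -vcodec libx264 -pass 1 -threads 8 -an -f rawvideo /dev/null
        ffmpeg -i input.avi -vcodec libx264 -pass 2 -threads 8 output.mp4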

    Read the article

  • Windows 7 connect to Lion file sharing

    - by Automaton
    Trying to access my Mac from a Windows 7 computer, I fail with the infamous "error 86: incorrect password". This appears to be a well-known problem, with countless threads on the internet giving as many "solutions" as there are discussions about it (mostly ranging from installing third-party commercial Samba servers, to switching to some other protocol, to compiling a plain-vanilla Samba installation - the last of which I will probably do when I give up on this :) ). I am stubborn, and I believe there must be some problem here that can be solved or worked around, but there is surprisingly little detail about it. It appears to have something to do with a mismatch of authentication methods. Trying to run samba in debug mode: sudo /usr/sbin/smbd -debug -stdout gets me this output when trying to access it from Win 7 ... smb1_dispatch_one [smb_dispatch.cpp:377] dispatching SMB_COM_SESSION_SETUP_ANDX smb1_dispatch_session_setup [session_setup.cpp:261] FIXME erase existing sessions log_gss_error [gssapi_mechanism.cpp:97] gssapi: gss-code: Miscellaneous failure (see text) log_gss_error [gssapi_mechanism.cpp:113] gssapi: mech-code: unknown mech-code 22 for mech unknown What is the problem here, and how do I fix it?

    Read the article

  • Multithreaded Windows FOR batch command

    - by Axarydax
    Hi, do you know if there is a simple way to run the FOR command in a batch file on multiple threads? What's the point of having 4 cores if I can't run my tasks in 4 parallel threads? For example, if I am optimizing PNGs with PNGOUT, the command I would use is for %i in (*.png) do pngout "%i" But this is a highly parallelizable task in which the sub-tasks do not depend on each other at all. To run this in 4 'queues' I'd write something like for -thread 4 %i in (*.png) do pngout "%i" Do I need to write my own FOR-like app that would be able to do this, or is there a free solution available?
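
    There is no built-in -thread switch for FOR, but for illustration, plain batch can approximate one (a sketch, assuming pngout.exe is on the PATH; the cap of 4 and the names are arbitrary). Saved as a .cmd file, it launches pngout in the background and polls tasklist so that at most 4 copies run at once:

        @echo off
        rem Launch pngout on every PNG, at most 4 instances at a time.
        for %%F in (*.png) do (
            call :waitslot
            start "" /B pngout "%%F"
        )
        goto :eof

        :waitslot
        rem Poll until fewer than 4 pngout.exe processes are running.
        for /f %%C in ('tasklist ^| find /c /i "pngout.exe"') do set COUNT=%%C
        if %COUNT% GEQ 4 (
            timeout /t 1 >nul
            goto :waitslot
        )
        goto :eof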

    Read the article

  • Limit a process's relative (not absolute) processor consumption in Linux

    - by BobBanana
    What is the standard way in Linux to enforce a system policy that limits the relative CPU use of a single process? That is, on a quad-core machine, I never want a process to use more than 2 CPUs at once, even if the process creates more threads. I do not want an absolute time limit, just a relative limit so that one task cannot dominate the machine. This is also different from renice, which allows a process to use all the resources but politely step aside if others need them too. ulimit is the usual resource-limiting tool, but it does not allow such CPU restrictions: it can limit the number of processes per user, or absolute CPU time, but not restrict the maximum number of active threads of a single process. I've found a couple of user-level tools, like cpulimit, but not a system-level tool or setting. Does such a standard resource controller exist in Linux (Red Hat Enterprise, if it matters)? If there is such a limit imposed, how would a user identify it?
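
    As an aside (an illustration, not from the post): per-invocation CPU affinity, as opposed to the system-wide policy being asked about, can be expressed with taskset, which confines a process and every thread it creates to a fixed set of logical CPUs:

        taskset -c 0,1 ./mytask    # all threads share CPUs 0 and 1, so at most 2 cores are ever busy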

    Read the article

  • What sort of things can cause a whole system to appear to hang for 100s-1000s of milliseconds?

    - by Ogapo
    I am working on a Windows game, and while rendering, some computers experience intermittent pauses ("hitches" for lack of a better term). When profiled, they appear in seemingly random places in the code. Eventually I noticed that it wasn't just my process that was affected, but (seemingly) every process on the system. All of the threads in my application hitch at once. CPU utilization drops during these hitches, and it appears as if most processes make no progress. This leads me to believe it may be an operating system or driver issue, but it only occurs while playing the game (and only on some systems). What sort of operations might the operating system be doing that would require the kernel to pause all user threads and block? Some kind of I/O? At first I thought of paging, but my impression is that would only affect a single process, no? Some systems in use: Windows, DirectX (3D), nVidia cards (unknown if it replicates on ATI), using overlapped I/O for streaming.

    Read the article

  • Equalizer for Toshiba Satellite Pro Laptop (Windows 7) (has Realtek sound)

    - by Need Help
    Hi, I've been trying to find a way to get a real equalizer for my Toshiba Satellite Pro laptop. I don't know much about computers, but the guys in the store seem to know less than I do! From what I've been able to glean through friends, Windows 7 no longer supports a real equalizer function with Realtek, and the result is a fake "equalizer" that does nothing. I have hearing issues, so not being able to properly adjust the higher frequencies is causing me physical pain and ringing in my ears. I've tried to decipher the threads I find online but am quite confused by them. Basically I need an equalizer that will work with Windows 7 (with everything: internet, Skype, music, etc.). The current drivers offered by Realtek do nothing except make the sound inaudible. The one that may possibly work and provide an actual equalizer is a few versions back and can't be found anywhere. Thanks! And sorry if this is a duplicate question, but I am confused a bit by the threads online.

    Read the article

  • Parallelism in .NET – Part 3, Imperative Data Parallelism: Early Termination

    - by Reed
    Although simple data parallelism allows us to easily parallelize many of our iteration statements, there are cases that it does not handle well.  In my previous discussion, I focused on data parallelism with no shared state, and where every element is being processed exactly the same. Unfortunately, there are many common cases where this does not happen.  If we are dealing with a loop that requires early termination, extra care is required when parallelizing. Often, while processing in a loop, once a certain condition is met, it is no longer necessary to continue processing.  This may be a matter of finding a specific element within the collection, or reaching some error case.  The important distinction here is that it is often impossible to know, until runtime, what set of elements needs to be processed. In my initial discussion of data parallelism, I mentioned that this technique is a candidate when you can decompose the problem based on the data involved, and you wish to apply a single operation concurrently on all of the elements of a collection.  This covers many of the potential cases, but sometimes, after processing some of the elements, we need to stop processing. As an example, let’s go back to our previous Parallel.ForEach example with contacting a customer.  However, this time, we’ll change the requirements slightly.  In this case, we’ll add an extra condition – if the store is unable to email the customer, we will exit gracefully.  The thinking here, of course, is that if the store is currently unable to email, the next time this operation runs, it will handle the same situation, so we can just skip our processing entirely.  The original, serial case, with this extra condition, might look something like the following: foreach(var customer in customers) { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { // Exit gracefully if we fail to email, since this // entire process can be repeated later without issue. if (theStore.EmailCustomer(customer) == false) break; customer.LastEmailContact = DateTime.Now; } } Here, we’re processing our loop, but at any point, if we fail to send our email successfully, we just abandon this process, and assume that it will get handled correctly the next time our routine is run.  If we try to parallelize this using Parallel.ForEach, as we did previously, we’ll run into an error almost immediately: the break statement we’re using is only valid when enclosed within an iteration statement, such as foreach.  When we switch to Parallel.ForEach, we’re no longer within an iteration statement – we’re a delegate running in a method. This needs to be handled slightly differently when parallelized.  
Instead of using the break statement, we need to utilize a new class in the Task Parallel Library: ParallelLoopState.  The ParallelLoopState class is intended to allow concurrently running loop bodies a way to interact with each other, and provides us with a way to break out of a loop.  In order to use this, we will use a different overload of Parallel.ForEach which takes an IEnumerable<T> and an Action<T, ParallelLoopState> instead of an Action<T>.  Using this, we can parallelize the above operation by doing: Parallel.ForEach(customers, (customer, parallelLoopState) => { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { // Exit gracefully if we fail to email, since this // entire process can be repeated later without issue. if (theStore.EmailCustomer(customer) == false) parallelLoopState.Break(); else customer.LastEmailContact = DateTime.Now; } }); There are a couple of important points here.  First, we didn’t actually instantiate the ParallelLoopState instance.  It was provided directly to us via the Parallel class.  All we needed to do was change our lambda expression to reflect that we want to use the loop state, and the Parallel class creates an instance for our use.  We also needed to change our logic slightly when we call Break().  Since Break() doesn’t stop the program flow within our block, we needed to add an else case to only set the property in customer when we succeeded.  This same technique can be used to break out of a Parallel.For loop. That being said, there is a huge difference between using ParallelLoopState to cause early termination and using break in a standard iteration statement.  When dealing with a loop serially, break will immediately terminate the processing within the closest enclosing loop statement.  Calling ParallelLoopState.Break(), however, has a very different behavior. The issue is that, now, we’re no longer processing one element at a time.  If we break in one of our threads, there are other threads that will likely still be executing.  This leads to an important observation about termination of parallel code: Early termination in parallel routines is not immediate.  Code will continue to run after you request a termination. This may seem problematic at first, but it is something you just need to keep in mind while designing your routine.  ParallelLoopState.Break() should be thought of as a request.  We are telling the runtime that no elements that were in the collection past the element we’re currently processing need to be processed, and leaving it up to the runtime to decide how to handle this as gracefully as possible.  Although this may seem problematic at first, it is a good thing.  If the runtime tried to immediately stop processing, many of our elements would be partially processed.  It would be like putting a return statement in a random location throughout our loop body – which could have horrific consequences to our code’s maintainability. In order to understand and effectively write parallel routines, we, as developers, need a subtle, but profound shift in our thinking.  We can no longer think in terms of sequential processes, but rather need to think in terms of requests to the system that may be handled differently than we’d first expect.  
This is more natural to developers who have dealt with asynchronous models previously, but is an important distinction when moving to concurrent programming models. As an example, I’ll discuss the Break() method.  ParallelLoopState.Break() functions in a way that may be unexpected at first.  When you call Break() from a loop body, the runtime will continue to process all elements of the collection that were found prior to the element that was being processed when the Break() method was called.  This is done to keep the behavior of the Break() method as close to the behavior of the break statement as possible. We can see the behavior in this simple code: var collection = Enumerable.Range(0, 20); var pResult = Parallel.ForEach(collection, (element, state) => { if (element > 10) { Console.WriteLine("Breaking on {0}", element); state.Break(); } Console.WriteLine(element); }); If we run this, we get a result that may seem unexpected at first: 0 2 1 5 6 3 4 10 Breaking on 11 11 Breaking on 12 12 9 Breaking on 13 13 7 8 Breaking on 15 15 What is occurring here is that we loop until we find the first element where the element is greater than 10.  In this case, this was found, the first time, when one of our threads reached element 11.  It requested that the loop stop by calling Break() at this point.  However, the loop continued processing until all of the elements less than 11 were completed, then terminated.  This means that it will guarantee that elements 9, 7, and 8 are completed before it stops processing.  You can see our other threads that were running each tried to break as well, but since Break() was called on the element with a value of 11, it decides which elements (0-10) must be processed. If this behavior is not desirable, there is another option.  Instead of calling ParallelLoopState.Break(), you can call ParallelLoopState.Stop().  The Stop() method requests that the runtime terminate as soon as possible, without guaranteeing that any other elements are processed.  Stop() will not stop the processing within an element, so elements already being processed will continue to be processed.  It will prevent new elements, even ones found earlier in the collection, from being processed.  Also, when Stop() is called, the ParallelLoopState’s IsStopped property will return true.  This lets longer-running processes poll for this value, and return after performing any necessary cleanup. The basic rule of thumb for choosing between Break() and Stop() is the following. Use ParallelLoopState.Stop() when possible, since it terminates more quickly.  This is particularly useful in situations where you are searching for an element or a condition in the collection.  Once you’ve found it, you do not need to do any other processing, so Stop() is more appropriate. Use ParallelLoopState.Break() if you need to more closely match the behavior of the C# break statement. Both methods behave differently than our C# break statement.  Unfortunately, when parallelizing a routine, more thought and care needs to be put into every aspect of your routine than you may otherwise expect.  This is due to my second observation: Parallelizing a routine will almost always change its behavior. This sounds crazy at first, but it’s a concept that’s so simple it’s easy to forget.  We’re purposely telling the system to process more than one thing at the same time, which means that the sequence in which things get processed is no longer deterministic.  
It is easy to change the behavior of your routine in very subtle ways by introducing parallelism.  Often, the changes are not avoidable, even if they don’t have any adverse side effects.  This leads to my final observation for this post: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
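
    As a companion to the Break() example above, here is a minimal sketch (mine, not from the original article) showing Stop() together with the IsStopped polling mentioned earlier:

        // using System; using System.Linq; using System.Threading.Tasks;
        var collection = Enumerable.Range(0, 20);
        var pResult = Parallel.ForEach(collection, (element, state) =>
        {
            // Stop() requests the fastest possible termination: unlike Break(),
            // there is no guarantee that earlier elements get processed.
            if (element > 10)
            {
                Console.WriteLine("Stopping on {0}", element);
                state.Stop();
                return;
            }
            // Longer-running bodies can poll IsStopped and bail out early.
            if (state.IsStopped) return;
            Console.WriteLine(element);
        });
        Console.WriteLine("Completed: {0}", pResult.IsCompleted); // false once stopped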

    Read the article

  • Async & Await in C# with Xamarin

    - by Wallym
    One of the great things about the .NET Framework is that Microsoft has worked long and hard to improve many features. Since the initial release of .NET 1.0, there has been support for threading via .NET threads as well as an application-level threadpool. This provided a great starting point when compared to Visual Basic 6 and classic ASP programming. The release of .NET 4 brought significant improvements in the area of threading, asynchronous operations and parallel operations. While the improvements made working with asynchronous operations easier, new problems were introduced, since many of these operations work based on callbacks. For example: How should a developer handle error checking? The program flow tends to be non-linear. Fixing bugs can be problematic. It is hard for a developer to get an understanding of what is happening within an application. The release of .NET 4.5 (and C# 5.0), in the fall of 2012, was a blockbuster update with regards to asynchronous operations and threads. Microsoft has added C# language keywords to take this non-linear callback-based program flow and turn it into a much more linear flow. Recently, Xamarin has updated Xamarin.Android and Xamarin.iOS to support async. This article will look at how Xamarin has implemented the .NET 4.5/C# 5 support in their Xamarin.iOS and Xamarin.Android products. There are three general areas that I'll focus on: A general look at the asynchronous support in Xamarin's mobile products. This includes async, await, and the implications that this has for cross-platform code. The new HttpClient class that is provided in .NET 4.5/Mono 3.2. Xamarin's extensions for asynchronous operations for Android and iOS. FYI: Be aware that sometimes the OpenWeatherMap API breaks, for no reason.  I found this out after I submitted the article.
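
    To make the "linear flow" point concrete, here is a minimal sketch (mine, not from the article; the URL is a placeholder) of awaiting a call on the new HttpClient, with ordinary try/catch error handling instead of callbacks:

        // using System.Net.Http; using System.Threading.Tasks;
        // Reads top to bottom; execution resumes after the await when the
        // response arrives, without blocking the calling (e.g. UI) thread.
        async Task<string> FetchAsync()
        {
            var client = new HttpClient();
            try
            {
                return await client.GetStringAsync("http://example.com/weather");
            }
            catch (HttpRequestException e)
            {
                return "request failed: " + e.Message;
            }
        }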

    Read the article

  • Reading and conditionally updating N rows, where N > 100,000 for DNA Sequence processing

    - by makerofthings7
    I have a proof-of-concept application that uses Azure tables to associate DNA sequences with "something". Table 1 is the master table. It uniquely lists every DNA sequence. The PK is a load-balanced hash of the RK. The RK is the unique encoded value of the DNA sequence. Additional tables are created per subject. Each subject has a list of N DNA sequences that each have one reference in the Master table, where N is 100,000. It is possible for many tables to reference the same DNA sequence, but in this case only one entry will be present in the Master table. My Azure dilemma: I need to lock the reference in the Master table as I work with the data. I need to handle timeouts and prevent other threads from overwriting my data while one C# thread is working with the information. Other threads need to realise that this is locked, and move on to other unlocked records and do the work. Ideally I'd like to get some progress report of how my computation is going, and have the option to cancel the process (and unwind the locks). Question What is the best approach for this? I'm looking at these code snippets for inspiration: http://blogs.msdn.com/b/jimoneil/archive/2010/10/05/azure-home-part-7-asynchronous-table-storage-pagination.aspx http://stackoverflow.com/q/4535740/328397

    Read the article

  • Problem with WCF-SQL Adapter

    - by Paul Petrov
    When using the WCF receive adapter with SQL binding in Polling mode, please be aware of the following problem. Problem: At some regular but seemingly random intervals, the application stops processing new requests, places a lock on the database, and prevents other applications from accessing it. Initially it looked like a DTC issue, as it was a distributed transaction that stalled most of the time. Symptoms: Orchestration instances in Dehydrated state, the receive location not picking up new messages, exclusive locks on database tables, errors in the DTC trace. Cause: Microsoft has confirmed that there is a bug in the WCF-SQL adapter. In the receive adapter binding configuration there is a receiveTimeout property, set to 10 minutes by default. If during this period no data is found in the table, the adapter starts a new thread and allocates more memory without releasing the old resources. Thus, if there is no new data in the table for a long time, a new thread will be created in the host instance every 10 minutes until it reaches the threshold (1000), and then there are no threads left for this host instance and it can't start or complete any tasks. At that point this host instance won't be able to do anything. If other artifacts are hosted in the instance, they will suffer the consequences as well. Solution: - Set receiveTimeout to the maximum time, 24.20:31:23.6470000. - Place WCF-SQL receive locations in a separate host to provide their own thread pool and eliminate the impact on other processes. - Ensure WCF-SQL dedicated host instances are restarted at an interval less than or equal to receiveTimeout, to flush threads and memory. - Monitor performance counters Process/Thread Count/BTSNTSvc{n} for the thread count trend, and respond to alerts if it grows by restarting the host instance. If you use the WCF-SQL Adapter in Notification mode, make sure to remove sqlAdapterInboundTransactionBehavior; otherwise the location will exhibit the same issue. In that case, though, setting receiveTimeout doesn't help, and new threads will be created at the default interval (10 min), ignoring the maximum setting.
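
    For illustration, a sketch of the receiveTimeout workaround in the WCF-SQL binding configuration (the timeout value is the maximum quoted above; the binding name and polling statements are placeholders):

        <bindings>
          <sqlBinding>
            <binding name="sqlPollingBinding"
                     receiveTimeout="24.20:31:23.6470000"
                     inboundOperationType="Polling"
                     polledDataAvailableStatement="SELECT COUNT(*) FROM dbo.Requests WHERE Processed = 0"
                     pollingStatement="SELECT * FROM dbo.Requests WHERE Processed = 0" />
          </sqlBinding>
        </bindings>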

    Read the article

  • Bluetooth headset A2DP works, HSP/HFP not (no sound/no mic)

    - by Stefan Armbruster
    My Philips SBH9001 headset pairs fine using Ubuntu 12.04. In the audio settings it's properly detected as an A2DP device and as an HSP/HFP device. Hardware: Thinkpad X230, Ubuntu 12.04 64-bit, kernel 3.6.0-030600rc3-generic (built from the Ubuntu mainline repo); the Bluetooth device is USB ID 0a5c:21e6 from Broadcom; the headset is a Philips SBH9001. Note: kernel 3.6 rc3 is used because of a fix for audio on the docking station that is not in any previous branch. Playing audio in A2DP mode works just fine out of the box, but when switching the headset to HSP/HFP mode there is no sound, nor does the microphone work. When connecting the headset, /var/log/syslog shows: Aug 25 21:32:47 x230 bluetoothd[735]: Badly formated or unrecognized command: AT+CSRSF=1,1,1,1,1,7 Aug 25 21:32:49 x230 rtkit-daemon[1879]: Successfully made thread 17091 of process 14713 (n/a) owned by '1000' RT at priority 5. Aug 25 21:32:49 x230 rtkit-daemon[1879]: Supervising 4 threads of 1 processes of 1 users. Aug 25 21:32:50 x230 kernel: [ 4860.627585] input: 00:1E:7C:01:73:E1 as /devices/virtual/input/input17 When switching from A2DP (standard profile) to HSP/HFP: Aug 25 21:34:36 x230 bluetoothd[735]: /org/bluez/735/hci0/dev_00_1E_7C_01_73_E1/fd3: fd(34) ready Aug 25 21:34:36 x230 rtkit-daemon[1879]: Successfully made thread 17309 of process 14713 (n/a) owned by '1000' RT at priority 5. Aug 25 21:34:36 x230 rtkit-daemon[1879]: Supervising 4 threads of 1 processes of 1 users. Aug 25 21:34:41 x230 bluetoothd[735]: Audio connection got disconnected Any hints how to get HSP/HFP working here?

    Read the article

  • Pattern for Accessing MySQL connection

    - by Dipan Mehta
    We have a C++ application trying to access a MySQL database. There are several (about 5 or so) threads in the application (using the Boost library for threading), and each thread has a few objects, each of which is trying to access the database for its own purpose. It has a simple ORM-like model, but that really is not an important factor here. There are three potential access patterns I can think of: There could be a single connection object per application or thread, shared between all (or a group). The object needs to be thread safe, and there will be contention, but MySQL will not be hit with too many connections. Every object could initiate a connection on its own. The database needs to take care of concurrency (which I think MySQL can), and the design could be much simpler. There are two possibilities here: (a) each object keeps a persistent connection for its lifetime, or (b) each object initiates a connection as and when needed. To reduce the contention of case 1 without creating as many sockets as case 2, we can have group/set-based connections: there could be more than one connection (say N), each shared across M objects. Naturally, each of these patterns has a different resource cost and would work under different constraints and objectives. What criteria should I use to choose among these patterns for my own application? What are some of the advantages and disadvantages of each of these patterns over the others? Are there any other patterns that are better? PS: I have been through these questions: mysql, one connection vs multiple and MySQL with mutiple threads and processes But they don't quite answer exactly what I am trying to ask.
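
    For illustration, a minimal sketch of pattern 3 (mine, not from the post): a fixed pool of N connections handed out to whichever object needs one. Names are hypothetical, Connection would wrap the real MySQL handle, and the std primitives have direct Boost equivalents:

        #include <condition_variable>
        #include <cstddef>
        #include <memory>
        #include <mutex>
        #include <queue>

        struct Connection { /* wraps a MYSQL* handle in the real code */ };

        class ConnectionPool {
        public:
            explicit ConnectionPool(std::size_t n) {
                for (std::size_t i = 0; i < n; ++i)
                    pool_.push(std::make_shared<Connection>());
            }
            // Blocks until one of the N connections is free.
            std::shared_ptr<Connection> acquire() {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return !pool_.empty(); });
                std::shared_ptr<Connection> c = pool_.front();
                pool_.pop();
                return c;
            }
            void release(std::shared_ptr<Connection> c) {
                { std::lock_guard<std::mutex> lk(m_); pool_.push(std::move(c)); }
                cv_.notify_one();  // wake one waiter, if any
            }
        private:
            std::mutex m_;
            std::condition_variable cv_;
            std::queue<std::shared_ptr<Connection>> pool_;
        };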

    Read the article

  • Is Play Framework good for doing some logic parallely?

    - by pmichna
    I'm going to build a web application that will host urban games. A user visits my website, clicks "Start game", and starts receiving SMS messages when they get to certain locations; they have to answer them to earn points. My question: is Play suitable for this kind of application? From what I've read, I know for sure it's OK for traditional web applications: a user interface over data storage and manipulation. But what if, after clicking the "start button", some logic has to run on its own course? How would I handle checking the geolocation of the players in parallel (I have an API for that)? I guess in some threads that would ping them every ~5 sec. and do some processing, but is it possible to just "disconnect" those from the main user interface? So to sum up: I want an application written in Play that starts a separate thread for a game after clicking "start game", while other users are able to view their data (statistics etc.) as the threads work their way through the game logic. I found something like jobs, but they are documented for version 1.2 (the current one is 2.2). Sorry for my somewhat fuzzy explanation, I tried to do my best.

    Read the article

  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
    Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. This was then used in the following way: protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException { synchronized (getClassLoadingLock(name)) { // First, check if the class has already been loaded Class<?> c = findLoadedClass(name); if (c == null) { long t0 = System.nanoTime(); try { if (parent != null) { c = parent.loadClass(name, false); } else { c = findBootstrapClassOrNull(name); } } catch (ClassNotFoundException e) { // ClassNotFoundException thrown if class not found // from the non-null parent class loader } if (c == null) { // If still not found, then invoke findClass in order // to find the class. long t1 = System.nanoTime(); c = findClass(name); // this is the defining class loader; record the stats sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0); sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1); sun.misc.PerfCounter.getFindClasses().increment(); } } if (resolve) { resolveClass(c); } return c; } } Where getClassLoadingLock simply does: protected Object getClassLoadingLock(String className) { Object lock = this; if (parallelLockMap != null) { Object newLock = new Object(); lock = parallelLockMap.putIfAbsent(className, newLock); if (lock == null) { lock = newLock; } } return lock; } This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per classloader. As per the code above, under normal delegation the current classloader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded, and then delegates to its parent, and so on until the boot loader is invoked, for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew beforehand which loader would actually load the class, the locking would only need to be performed in that loader. As it stands, the locking is completely unnecessary for all classes loaded by the boot loader. Secondly, once loading has completed and findClass will return the class, the lock and the map entry are completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map. It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case, it is likely that the classloader would need a different means to protect itself rather than a lock per class. 
Ultimately when a class file is located and the class has to be loaded, defineClass is called, which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes for guarding its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile the second attempt will block trying to acquire the lock. Once the class is loaded the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading and they both continue successfully. Consider if no lock was acquired in the classloader. Both threads will eventually locate the file for the class, read in the bytecodes and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object!) There are a number of obvious things we can try to solve this problem and they basically take three forms: Remove the need for locking. This might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent - simply returning an existing Class rather than triggering an exception. Increase the coarseness of locking to reduce the number of lock objects and/or maps. For example, using a single shared lockMap instead of a per-loader lockMap. Reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (e.g. remove after loading, use weak references to the lock objects and clean up the map periodically). There are pros and cons to each of these approaches. Unfortunately a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and that they are accessible to the application code (indirectly through the classloader if it exposes them - which a custom loader might do - and regardless they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we cannot implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification, because it can be reasoned that the following assertion should hold true: Object lock1 = loader.getClassLoadingLock(name); loader.loadClass(name); Object lock2 = loader.getClassLoadingLock(name); assert lock1 == lock2; Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded. 
Even then there are caveats: for example, if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads, and that could be expensive (this is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive). Even option 2 might need some wordsmithing on the specification, because the specification for getClassLoadingLock states "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard loading of multiple classes, possibly across different class loaders? So it seems that changing the specification will be inevitable if we wish to do something here. In which case let's go for something that more cleanly defines what we want to be doing: fully concurrent classloading. Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics. Proposal: Fully Concurrent ClassLoaders The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader", or fully concurrent loader for short. A fully concurrent loader uses no synchronization in loadClass, and the VM uses the "parallel define class" mechanism. For a fully concurrent loader, getClassLoadingLock() can return null (or perhaps not - it doesn't matter, as we won't use the result anyway). At present we have not made any changes to this method. All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code re-design, as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders. Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors), and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications. Preliminary webrevs: http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/ http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/ Please direct all comments to the mailing list [email protected].
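
    For completeness, a minimal sketch (mine, not from the post) of how a custom loader opts in to the Java 7 parallel-capable machinery discussed above; the byte-loading helper is a placeholder:

        public class MyParallelLoader extends ClassLoader {
            static {
                // Public Java 7 API: marks this loader as parallel capable,
                // enabling the per-class-name locking described above.
                registerAsParallelCapable();
            }

            @Override
            protected Class<?> findClass(String name) throws ClassNotFoundException {
                byte[] b = loadClassBytes(name); // hypothetical helper
                return defineClass(name, b, 0, b.length);
            }

            private byte[] loadClassBytes(String name) throws ClassNotFoundException {
                throw new ClassNotFoundException(name); // placeholder
            }
        }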

    Read the article

  • chrome memory and cpu footprint

    - by nmizar
    I've searched the forums for an answer but couldn't find quite what I was looking for [1], so I thought it could be of interest to more people around here. I carry out a big part of my job in the browser (or for the browser, if you want to put it that way). I tend to use Chrome, because it natively has many of the newest features that I need (DevTools stuff, mainly but not only). BTW, I'm usually running the latest available Chrome version/build on a desktop Vaio with 4GB RAM and a dual-core CPU, with Ubuntu 12.04 as the distro and Gnome as the window manager. So, I was curious about a) why Chrome spawns so many threads even with only three or four tabs open, and b) whether there is any way to allocate more memory to Chrome to prevent its performance from degrading. Thanks in advance, Nacho PS [1] I found threads about Chrome freezing or running out of memory, but not about the reasons for this being so or about avoiding it. PPS Of course, I could always buy a newer and more capable machine, and that is exactly what I'm trying to evaluate: is this a question of outdated hardware, or will the problem keep appearing on any (decently but not hugely sized) machine?

    Read the article

  • Lightning fast forum based around metadata / tags? [closed]

    - by Dan W
    Possible Duplicate: What Forum Software should I use? I wonder if anything like this exists. I'd like to add a forum to my site, but instead of the usual forum/subforum/sub-subforum structure, I'd like to use a metadata/tag approach where everything exists in a single directory, with a search field at the top which instantly (<0.5 sec) filters the threads to a particular keyword or keywords. Also, as the admin, I would be able to add highly visible buttons at the top, which can be clicked on for the main categories I choose for the forum (nevertheless, users can also add tags to their own threads beyond these default main tags if they wish). This approach, if done properly, is more powerful, efficient, scalable, and friendly than a standard forum, and essentially maintenance-free, so I was hoping someone had the same idea and made something out of it. It couldn't be that hard. I'd want the speed to be up to (or near) the standard of this: http://forum.dlang.org/ Other forums (e.g. phpBB) are orders of magnitude worse than that in terms of latency (posting or browsing), and I think that is wrong, even in principle ;)

    Read the article

  • How to configure chrome to open magnet url's with deluge?

    - by michael_n
    After upgrading from Ubuntu 10.10 to 11.04 (natty), I can no longer open magnet (torrent) links in Chromium and have Deluge automatically open and accept the URL. (Edit: currently ".torrent" files are not a problem; magnet URLs, e.g. of the form "magnet:?xt=urn:...", are now the only problem. Not sure if something updated...?) Rather, now only Transmission will automatically open torrents, magnet links, etc. There doesn't seem to be a way to set Deluge as the default torrent client. (And there also doesn't seem to be a "default application" setting for the bittorrent client to replace Transmission with Deluge.) Notes: I found some old threads on this issue, and only one or two newer ones. The newer threads seem to suggest xdg-open is to blame. But not many people seem to be running into this problem, so... maybe it's just me? I'm not using Firefox, so manually setting apps for MIME types or extensions doesn't work (that's not an option in Chrome/Chromium, afaik -- you have to rely on the OS). I uninstalled Transmission, and then basically nothing happened when clicking on torrent/magnet links. Running from the shell also opens Transmission (not Deluge): xdg-open "magnet:?xt=urn:bt..&tr=http://tracker.....com/announce" My current URL handlers are: $ gconftool -a /desktop/gnome/url-handlers/magnet command = deluge "%s" needs_terminal = false enabled = true The only workaround I have (which does work) is to rename /usr/bin/transmission-gtk{,.bak} and create my own /usr/bin/transmission-gtk : $ cat /usr/bin/transmission-gtk #!/bin/bash deluge "$@" Anyone else run into this, know of a bug, workaround, or...?

    Read the article

  • Set up Work Manager Shutdown Trigger in WebLogic Server 10.3.4 Using WLST

    - by adejuanc
    WebLogic Server's Work Managers provide a way to control work and allocated threads. You can set different scheduling guidelines for different applications, depending on your requirements. There is a default self-tuning Work Manager, but you might want to set up a custom Work Manager in some circumstances: for example, when you want the server to prioritize one application over another, when a response time goal is required, or when a minimum thread constraint is needed to avoid deadlock. The Work Manager Shutdown Trigger is a tool to help with stuck threads, which will do the following: Shut down the Work Manager. Move the application to Admin state (not active). Change the server instance health state to failed. Example of a Shutdown Trigger set in the config.xml for your domain: <work-manager>   <name>stuckthread_workmanager</name>   <work-manager-shutdown-trigger>     <max-stuck-thread-time>30</max-stuck-thread-time>     <stuck-thread-count>2</stuck-thread-count>   </work-manager-shutdown-trigger> </work-manager> Understand that any misconfiguration of the Work Manager can lead to poor performance on the server. Any changes must be made and tested before going to production. How can one create a WorkManagerShutdownTrigger for WLS 10.3.4 using WLST? You should be able to create a WorkManagerShutdownTrigger using WLST by following these steps: edit() startEdit() cd('/SelfTuning/mydomain/WorkManagers') create('myWM','WorkManager') cd('myWM/WorkManagerShutdownTrigger') create('myWMst','WorkManagerShutdownTrigger') cd('myWMst') ls()
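
    To finish the edit session, a sketch of the remaining steps (the attribute names are assumed to mirror the <max-stuck-thread-time> and <stuck-thread-count> elements shown above):

        set('MaxStuckThreadTime', 30)
        set('StuckThreadCount', 2)
        save()
        activate()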

    Read the article

  • Efficiency concerning thread granularity

    - by MaelmDev
    Lately, I've been thinking of ways to use multithreading to improve the speed of different parts of a game engine. What confuses me is the appropriate granularity of threads, especially when dealing with single-instruction-multiple-data (SIMD) style tasks. Let's use line-of-sight detection as an example. Each AI actor must be able to detect objects of interest around them and mark them. There are three basic ways to go about this with multithreading: Don't use threading at all. Create a thread for each actor. Create a thread for each actor-object combination. Option 1 is obviously going to be the least efficient method. However, choosing between the next two options is more difficult. Using only one thread per actor still runs through every object in series instead of in parallel. However, are CPUs able to create and join threads at the granularity posed in Option 3 efficiently? It seems like that many calls to the OS could be really slow, and could vary enormously across different hardware.

    Read the article

  • Can Clojure's thread-based agents handle c10k performance?

    - by elliot42
    I'm writing a c10k-style service and am trying to evaluate Clojure's performance. Can Clojure's thread-based agents handle this scale of concurrency? Other high-performance systems seem to be moving towards async-IO/events/greenlets, albeit at a seemingly higher complexity cost. Suppose there are 10,000 clients connected, sending messages that should be appended to 1,000 local files--the Clojure service is trying to write to as many files in parallel as it can, while not letting any two separate requests mangle the same single file by writing at the same time. Clojure agents are extremely elegant conceptually--they would allow separate files to be written independently and asynchronously, while serializing (in the database sense) multiple requests to write to the same file. My understanding is that agents work by starting a thread for each operation (assume we are IO-bound and using send-off)--so in this case, is it correct that it would start 1,000+ threads? Can current-day systems handle this number of threads efficiently? Most of them should be IO-bound and sleeping most of the time, but I presume there would still be a context-switching penalty that is theoretically higher than in async-IO/event-based systems (e.g. Erlang, Go, node.js). If the Clojure solution can handle the performance, it seems like the most elegant thing to code. However, if it can't handle the performance, then something like Erlang or Go's lightweight processes might be preferable, since they are designed to have tens of thousands of them spawned at once, and are only moderately more complex to implement. Has anyone approached this problem in Clojure or compared it to these other platforms? (Thanks for your thoughts!)
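
    For illustration, a minimal sketch (mine, not from the post) of the agent-per-file design; filenames is assumed to be the seq of the 1,000 file paths:

        ;; One agent per file: writes to the same file are serialized through
        ;; its agent, while different files proceed in parallel. send-off
        ;; schedules actions on the unbounded pool intended for IO-bound work.
        (def file-agents
          (into {} (for [f filenames] [f (agent nil)])))

        (defn handle-message [file msg]
          (send-off (file-agents file)
                    (fn [_] (spit file msg :append true))))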

    Read the article

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement: Time based movement Vs Frame rate based movement?, and Fixed time step vs Variable time step, but I think I'm lacking a basic understanding of frame independent movement, because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame independent lesson. http://lazyfoo.net/SDL_tutorials/lesson32/index.php I'm not sure what the movement part of the code is trying to say, but I think it's this (please correct me if I'm wrong): In order to have frame independent movement, we need to find out how far an object (e.g. a sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel" But... I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a little more detailed explanation of how frame independent movement works. Thank you.
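
    For what it's worth, the multiplier is the measured duration of the last frame, not a fixed 1/1000; 1/1000 only appears when converting milliseconds to seconds. A sketch of the per-frame update (variable names are mine):

        // deltaTicks = milliseconds elapsed since the previous frame,
        // e.g. measured with SDL_GetTicks().
        float dt = deltaTicks / 1000.f; // ms -> seconds (the 1/1000 step)
        x += 200.f * dt;                // at 200 fps, dt = 0.005 s -> 1 px/frame
                                        // at  50 fps, dt = 0.02  s -> 4 px/frame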

    Read the article

  • ArchBeat Link-o-Rama for October 24, 2013

    - by OTN ArchBeat
    Video: How To Embed Custom Content Into Fusion Applications Watch this video tutorial from the Fusion Applications Developer Relations YouTube Channel to learn how to embed reports, charts, Twitter streams, web pages, news feeds, and other custom content into Fusion Applications. Oracle GoldenGate 12c - New Release, New Features | Michael Rainey Rittman Mead's Michael Rainey takes you on a guided tour through the GoldenGate 12c features that "are relevant to data warehouse and data migration work we typically see in the business intelligence world." Reproducing WebLogic Stuck Threads with ADF CreateInsert Operation and ORDER BY Clause | Andrejus Baranovskis Another post from Oracle ACE Director Andrejus Baranovskis on dealing with WebLogic stuck threads. This one includes a test case application you can download. Oracle WebLogic 12.1.2 Installation in VirtualBox with 0 MHz | Dr. Frank Munz Oracle ACE Director Frank Munz shares the results of some detective work to discover the cause of a strange problem in an Oracle WebLogic installation. The Impact of SaaS - The Times They Are A-Changin' | Floyd Teter Oracle ACE Director Floyd Teter shares some truly interesting insight gained in conversations with three Fortune 500 CIOs. Thought for the Day "All the mistakes I ever made were when I wanted to say 'No' and said 'Yes'." — Moss Hart, playwright, screenwriter (October 24, 1904 – December 20, 1961) Source: brainyquote.com

    Read the article
