Search Results

Search found 5945 results on 238 pages for 'green threads'.

Page 7/238 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • ODEE Green Field (Windows) Part 3 - SOA Suite

    - by AndyL-Oracle
     So you're still here, are you? I'm sure you're probably overjoyed at the prospect of continuing with our green field installation of ODEE. In my previous post, I covered the installation of WebLogic - you probably noticed, like I did, that it's a pretty quick install. I'm pretty certain this had everything to do with how quickly the next post made it to the internet! So let's dig in. Make sure you've followed the steps from the initial post to obtain the necessary software and prerequisites!

     1. Unpack the RCU (Repository Creation Utility). This ZIP file contains a directory (rcuHome) that should be extracted into your ORACLE_HOME.
     2. Run the RCU: execute rcuHome/bin/rcu.bat. Click Next.
     3. Select Create and click Next.
     4. Enter the database connection details and click Next - any connection failure will show in the Messages box. Click OK.
     5. Expand and select the SOA Infrastructure item. This will automatically select additional required components. You can change the prefix used, but DEV is recommended. If you are creating a sandbox that includes additional components like WebCenter Content and UMS, you may select those schemas as well, but they are not required for a basic ODEE installation. Click Next, then OK.
     6. Specify the password for the schema(s), then click Next. Click Next, OK, and OK again.
     7. Click Create, then Close.
     8. Unpack the SOA Suite installation files into a single directory, e.g. SOA.
     9. Run the installer: navigate to and execute SOA/Disk1/setup.exe. If you receive a JDK error, start the installer from a command line instead: open Start > Run > cmd, cd into the SOA\Disk1 directory, and run setup.exe -jreLoc <pathtoJRE> (a concrete example follows at the end of this post). Ensure you do not use a path with spaces - use the ~1 short-name notation as necessary (each name is truncated to eight characters, so "Program Files" becomes "Progra~1" and "Program Files (x86)" becomes "Progra~2").
     10. Click Next. Select Skip and click Next. Resolve any issues shown and click Next.
     11. Verify your Oracle home locations (defaults are recommended) and click Next.
     12. Select your application server. If you've already installed WebLogic, this should be automatically selected for you. Click Next.
     13. Click Install and allow the installation to progress. Click Next, then Finish. You can save the installation details if you want.

     That should keep you satisfied for the moment. Get ready, because the next posts are going to be meaty!
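     One footnote on the -jreLoc flag in step 9: assuming your JRE lives under C:\Program Files\Java\jdk1.6.0\jre (a made-up path - substitute your own), the short-name invocation from the command line would look something like this:

         cd /d C:\SOA\Disk1
         setup.exe -jreLoc C:\Progra~1\Java\jdk1.6.0\jre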

    Read the article

  • Setting minimum threads in thread pool

    - by expert
     I have an application with 4 worker threads from the thread pool, and it was waking up every 0.5 seconds. As written on MSDN, the thread pool monitors every 0.5 seconds to decide whether to create idle threads. I set the minimum number of threads to 4 and that solved the problem - no more background activity all the time. My question is this: I have another application with the same number of thread pool threads (4), but there setting the minimum to 4 doesn't help; only when I set the minimum to 5 does the background monitoring stop. What might be the difference between two applications with the same number of thread pool threads (4), where on one setting the minimum to 4 helps and on the other only setting it to 5 does?
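     For reference, a minimal sketch of raising the pool minimum in C# (the values are illustrative, and the worker/completion-port split shown is an assumption to tune for your own workload):

         using System;
         using System.Threading;

         class MinThreadsDemo
         {
             static void Main()
             {
                 // Read the current minimums so only the worker count is raised.
                 ThreadPool.GetMinThreads(out int workers, out int io);
                 Console.WriteLine($"current minimums: workers={workers}, IO={io}");

                 // Once the minimum covers the threads actually in use, the pool's
                 // periodic thread-injection checks stop showing up as background
                 // activity every 0.5 seconds.
                 if (!ThreadPool.SetMinThreads(5, io))
                     Console.WriteLine("SetMinThreads rejected the values");
             }
         }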

    Read the article

  • Coloring of collapsed threads in mutt

    - by Rich
     I'm trying to figure out the syntax of colouring collapsed threads in the mutt index. The documentation for mutt patterns doesn't seem to include a description of how this works, and so far I've been completely unable to figure it out by trial and error. What I'd like is for collapsed threads that contain any unread (new) messages to be always coloured green. If collapsed threads with no unread messages contain any flagged messages, then I'd like them to be red. So far, every set of patterns I've tried results in threads that contain both flagged and unread messages being coloured red (I want them green). These work:

         color index green default "~N"        # unread messages
         color index green default "~N~F"      # unread flagged messages
         color index red default "~F"          # flagged messages
         color index green default "~v~(~N)"   # collapsed thread with unread

     But these don't:

         color index green default "~v~(~N~F)"       # attempt to keep threads with unread green
         color index red default "~v~(~F)"           # colours collapsed threads with flagged and unread red
         color index red default "~v~(!~N~F)"        # ditto
         color index red default "~v~(^!~N~F)"       # ditto
         color index red default "~v~(~F)~(!~N)"     # ditto
         color index red default "~v~(~F)~v~(!~N)"   # ditto

     I've also tried switching the order of the "~v~(~F)" and "~v~(~N)" commands in the file, but the "flagged" rule always seems to take precedence over the "new" rule. Ideally I'd like to understand how the syntax for colouring collapsed threads works, but at this point I'd happily settle for a set of rules that achieves the colour scheme described above.

    Read the article

  • Why does the performance of the following code degrade when I use threads?

    - by DotNetBeginner
     Why does the performance of the following code degrade when I use threads?

     1. Without threads:

         int[] arr = new int[100000000]; // array elements [0][1][2][3]---[100000000-1]
         addWithOutThreading(arr);       // time required for this operation: 1.16 sec

     Definition of addWithOutThreading:

         public void addWithOutThreading(int[] arr)
         {
             UInt64 result = 0;
             for (int i = 0; i < 100000000; i++)
             {
                 result = result + Convert.ToUInt64(arr[i]);
             }
             Console.WriteLine("Addition = " + result.ToString());
         }

     2. With threads:

         int[] arr = new int[100000000];
         int part = (100000000 / 4);
         UInt64 res1 = 0, res2 = 0, res3 = 0, res4 = 0;
         ThreadStart starter1 = delegate { addWithThreading(arr, 0, part, ref res1); };
         ThreadStart starter2 = delegate { addWithThreading(arr, part, part * 2, ref res2); };
         ThreadStart starter3 = delegate { addWithThreading(arr, part * 2, part * 3, ref res3); };
         ThreadStart starter4 = delegate { addWithThreading(arr, part * 3, part * 4, ref res4); };
         Thread t1 = new Thread(starter1);
         Thread t2 = new Thread(starter2);
         Thread t3 = new Thread(starter3);
         Thread t4 = new Thread(starter4);
         t1.Start(); t2.Start(); t3.Start(); t4.Start();
         t1.Join(); t2.Join(); t3.Join(); t4.Join();
         Console.WriteLine("Addition = " + (res1 + res2 + res3 + res4).ToString());
         // time required for this operation: 1.30 sec

     Definition of addWithThreading:

         public void addWithThreading(int[] arr, int startIndex, int endIndex, ref UInt64 result)
         {
             for (int i = startIndex; i < endIndex; i++)
             {
                 result = result + Convert.ToUInt64(arr[i]);
             }
         }
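     One thing worth checking in a setup like this: res1 through res4 become adjacent fields of the compiler-generated closure, so every ref write in the hot loop can bounce the same cache line between cores (false sharing). A minimal sketch of the usual remedy, accumulating into a local and writing the shared slot once at the end (the class and method names here are illustrative):

         using System;

         static class PartialSum
         {
             // Same shape as addWithThreading above, but the hot loop touches
             // only a local variable, so threads don't contend on one cache line.
             public static void AddRange(int[] arr, int start, int end, ref UInt64 result)
             {
                 UInt64 local = 0;
                 for (int i = start; i < end; i++)
                     local += (UInt64)arr[i]; // a plain cast is also cheaper than Convert.ToUInt64
                 result = local; // single write to the shared location
             }
         }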

    Read the article

  • Threads, Events and QObject, Part One on Event Loops, translated by Vivien Duboué

     The goal of this document is not to teach you how to use threads, how to do proper locking, how to exploit parallelism, or how to write scalable programs: there are many good books on the subject; for example, take a look at the reading list recommended by the documentation. This short article is instead meant as a guide introducing users to the world of threads in Qt 4, helping them avoid the most common pitfalls and develop code that is both more robust and better structured.

    Read the article

  • shared transaction ID function among multiple threads

    - by poly
     I'm writing an application in C that requires multiple threads to request a unique transaction ID from a function, as shown below:

         struct list {
             int id;
             struct list *next;
         };

         void generate_id(void)
         {
             /* a linked list is built here to hold 10 million IDs */
         }

     My concern is how to synchronize two or more threads so that each transaction ID is unique among them without using a mutex - is this possible? Please share anything, even if I need to change the linked list to something else.
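     A minimal sketch of the lock-free alternative most often suggested for this: replace the pregenerated list with an atomic counter, so each thread obtains a unique ID with a single fetch-and-add. The sketch below is C# using Interlocked.Increment; the direct C counterpart would be atomic_fetch_add from <stdatomic.h>. Names are illustrative:

         using System.Threading;

         static class TransactionIds
         {
             private static long _next = 0;

             // Interlocked.Increment is one atomic fetch-and-add, so no two
             // threads can ever observe the same value - no mutex required.
             public static long Generate() => Interlocked.Increment(ref _next);
         }

     The trade-off is that IDs come out as dense sequential integers rather than values drawn from a prebuilt pool of 10 million.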

    Read the article

  • CUDA: more threads for the same work = longer run time despite better occupancy. Why?

    - by zenna
     I encountered a strange problem where increasing my occupancy by increasing the number of threads reduced performance. I created the following program to illustrate the problem:

         #include <stdio.h>
         #include <stdlib.h>
         #include <cuda_runtime.h>

         __global__ void less_threads(float *d_out)
         {
             int num_inliers;
             for (int j = 0; j < 800; ++j)
             {
                 // Do 12 computations
                 num_inliers += threadIdx.x * 1;
                 num_inliers += threadIdx.x * 2;
                 num_inliers += threadIdx.x * 3;
                 num_inliers += threadIdx.x * 4;
                 num_inliers += threadIdx.x * 5;
                 num_inliers += threadIdx.x * 6;
                 num_inliers += threadIdx.x * 7;
                 num_inliers += threadIdx.x * 8;
                 num_inliers += threadIdx.x * 9;
                 num_inliers += threadIdx.x * 10;
                 num_inliers += threadIdx.x * 11;
                 num_inliers += threadIdx.x * 12;
             }
             if (threadIdx.x == -1)
                 d_out[blockIdx.x * blockDim.x + threadIdx.x] = num_inliers;
         }

         __global__ void more_threads(float *d_out)
         {
             int num_inliers;
             for (int j = 0; j < 800; ++j)
             {
                 // Do 4 computations
                 num_inliers += threadIdx.x * 1;
                 num_inliers += threadIdx.x * 2;
                 num_inliers += threadIdx.x * 3;
                 num_inliers += threadIdx.x * 4;
             }
             if (threadIdx.x == -1)
                 d_out[blockIdx.x * blockDim.x + threadIdx.x] = num_inliers;
         }

         int main(int argc, char* argv[])
         {
             float *d_out = NULL;
             cudaMalloc((void**)&d_out, sizeof(float) * 25000);
             more_threads<<<780, 128>>>(d_out);
             less_threads<<<780, 32>>>(d_out);
             return 0;
         }

     Note both kernels should do the same amount of work in total (the if (threadIdx.x == -1) is a trick to stop the compiler optimising everything out and leaving an empty kernel). The work should be the same, as more_threads uses 4 times as many threads but with each thread doing 4 times less work. Significant results from the profiler are as follows:

         more_threads: GPU runtime = 1474 us, reg per thread = 6, occupancy = 1, branch = 83746, divergent_branch = 26, instructions = 584065, gst request = 1084552
         less_threads: GPU runtime = 921 us, reg per thread = 14, occupancy = 0.25, branch = 20956, divergent_branch = 26, instructions = 312663, gst request = 677381

     As I said previously, the run time of the kernel using more threads is longer; this could be due to the increased number of instructions. Why are there more instructions? Why is there any branching, let alone divergent branching, considering there is no conditional code? Why are there any gst requests when there is no global memory access? What is going on here! Thanks

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
     In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with the lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though. This description is incorrect. On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the 1st paragraph. I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process, and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect wasn't related to thread creation time, but rather that placement was a function of process membership. At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10. 
So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior. There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. Lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more. To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code. Remarks: It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could be the topic of a future blog entry. The scheduler is work-conserving. The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly. Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. 
That is, it's an intra-socket policy, not an inter-socket policy. Solaris 11 introduces the Power Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching. If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
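     For anyone wanting to experiment with the lgrp_expand_proc_thresh dial described above, a hedged sketch of the /etc/system entry; the value is purely illustrative, and its actual scale and units should be verified against mpo_update_tunables() in the OpenSolaris sources before touching a real system:

         * Hypothetical example only -- check the value's scale against
         * mpo_update_tunables() before use. A lower threshold makes the kernel
         * spread a process's threads to other lgroups sooner.
         set lgrp_expand_proc_thresh=0x100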

    Read the article

  • Go Green for the Holiday with our Saint Patrick’s Day Wallpaper Five Pack

    - by Asian Angel
     Happy St Pats Day 3 [DesktopNexus] Happy St Pats Day [DesktopNexus] Shamrocks and Gold [DesktopNexus] Shamrock oh a lot of Shamrocks [DesktopNexus] Luck of the Irish [DesktopNexus] Need some great icons to go with your new wallpapers? Then be sure to grab a set from our Saint Patrick’s Day Icons Three Pack post.

    Read the article

  • Watching Green Day and discovering Sitecore, priceless.

    - by jonel
     I'm feeling inspired and I'd like to share a technique we've implemented in Sitecore to address a URL mapping from our legacy site that we wanted to carry over to the new beautiful Littelfuse.com. The challenge is to carry over all of our series URLs that have been published in our datasheets; we currently have a lot of series, and having to create a manual mapping for each could be really tedious. A series URL has the format http://www.littelfuse.com/series/series-name.html, for instance http://www.littelfuse.com/series/flnr.html. It would have been easier if we had our information architecture defined like this, but that would have been too easy. I took a solution that is two-fold. First, I need to create a URL rewrite rule using the IIS URL Rewrite Module 2.0. Secondly, we need to implement a handler that will take care of the actual lookup of the series. It will be amazing after we've gone over the details. Let's start with the URL rewrite. Create a new blank rule; you can name it anything you wish. The key parts here are the Pattern and the Action groups. The Pattern is nothing but regex: basically, I'm telling it to match the regex I have defined. In the Action group, I am telling it what to do, in this case rewrite to the redirect.aspx webform. In this implementation, I use Rewrite instead of Redirect so the URL sticks in the browser. If you opt to use Redirect, then the URL bar will display the new URL our webform redirects to. Let me explain one small thing: the (\w+) in my Pattern group's regex translates to {R:1} in my Action group. This is where the magic begins. Now let's see what our Redirect.aspx contains. Remember our {R:1} above, which becomes the query string variable s? This is basic .NET code. The only thing you will probably ask about is the LFSearch class. It's our own implementation for finding items by using a field search: we supply the field name, the value of the field, the template name of the item we are after, and true or false depending on whether we want an exact search or not. If eureka, then redirect to that item's path (URL). If not, tell the user tough luck, here's the 404 page as a consolation. Amazing, ain't it?
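     The excerpt doesn't reproduce the code itself, so here is a hedged sketch of what a Redirect.aspx code-behind along these lines could look like. LFSearch is the author's internal class, so the helper call below is an assumption reconstructed from the description (field name, field value, template name, exact-match flag), and the field and template names are hypothetical:

         using System;
         using System.Web.UI;

         public partial class Redirect : Page
         {
             protected void Page_Load(object sender, EventArgs e)
             {
                 // {R:1} from the rewrite rule arrives as the "s" query string variable.
                 string series = Request.QueryString["s"];

                 // Hypothetical helper: field name, field value, template name,
                 // and whether to do an exact search; returns the item's URL or null.
                 string url = LFSearch.FindItemUrl("SeriesName", series, "Series", true);

                 if (!string.IsNullOrEmpty(url))
                     Response.Redirect(url);          // eureka
                 else
                     Response.Redirect("~/404.aspx"); // the consolation prize
             }
         }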

    Read the article

  • Tor check failed though Vidalia shows green onion

    - by Wolter Hellmund
     I have installed Tor successfully and Vidalia shows it is running without problems; however, when I check whether I am using Tor on this website, I get an error message saying I am not using Tor. I have tried two things to fix this: I installed ProxySwitchy in Google Chrome and created a profile for Tor (with address 127.0.0.1, port 8118), but enabling the proxy doesn't change the result on the Tor check website linked before. I changed my network proxy settings through System Settings > Network from None to Manual, and entered 127.0.0.1 as the address with port 8118 for everything except the Socks host, for which I entered port 9050 instead. This makes the internet stop working completely. How can I fix this problem?
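     One way to narrow this down (a diagnostic suggestion, assuming curl is installed) is to bypass the browser entirely and talk straight to Tor's SOCKS port; if this returns the "Congratulations" page while the browser check fails, the problem is in the browser's proxy configuration rather than in Tor itself:

         curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/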

    Read the article

  • Green Technology Management

    Computer Recycling: Computer recycling is the recycling or reuse of computers. It includes both finding another use for materials and having systems dismantled in a manner that allows for the safe e... [Author: Chris Loo - Computers and Internet - April 09, 2010]

    Read the article

  • Green Technology Management

    Computer Recycling: Computer recycling is the recycling or reuse of computers. It includes both finding another use for materials and having systems dismantled in a manner that allows for the safe e... [Author: Chris Loo - Computers and Internet - April 26, 2010]

    Read the article

  • Green bar on top of every (web) video, distorted colors too

    - by Walter Maier-Murdnelch
     Hey, since a few days ago I have a problem with some videos I watch on the web (YouTube, Vimeo, etc.). There is a green bar at the top of the video and the colors are distorted; it looks like they are somehow shifted. I am not sure when this problem appeared; I guess it might have been an update to the Flash player. Anyway, I found a workaround: disabling hardware acceleration helps (right-click on the video > Settings > disable hardware acceleration). After reloading the page, the green bar is gone. The problem is that this setting doesn't seem to be persistent, so I have to disable it again on every other video. How can I make this setting permanent, or how do I get rid of the green bar altogether? (I will add a screenshot later)

    Read the article

  • Go Green - Recycle Your PPC Campaigns Part 3

    For your PPC benefit, it is important for you to identify the paid and unpaid search engine marketing strategies. If you want to get a number of effective link building requests under your PPC campaign management, then you can make use of content placement reports as it will save a lot of time and energy.

    Read the article

  • "Growing Green"

    Organizations are managing more information, reducing fuel consumption, and developing clean energy with Oracle technology.

    Read the article

  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
     Hi all, I am writing an application using Boost threads and Boost barriers to synchronize the threads. I have two machines to test the application. Machine 1 is a Core 2 Duo (T8300) machine (Windows XP Professional, 4 GB RAM), where I am getting the following performance figures:

         Number of threads: 1, TPS: 21
         Number of threads: 2, TPS: 35 (66% improvement)

     Further increases in the number of threads decrease the TPS, but that is understandable as the machine has only two cores. Machine 2 has two quad-core (Xeon X5355) CPUs (Windows 2003 Server with 4 GB RAM) and thus 8 effective cores:

         Number of threads: 1, TPS: 21
         Number of threads: 2, TPS: 27 (28% improvement)
         Number of threads: 4, TPS: 25
         Number of threads: 8, TPS: 24

     As you can see, performance degrades after 2 threads (though the machine has 8 cores). If the program had some bottleneck, then performance should have degraded at 2 threads as well. Any ideas or explanations? Does the OS play some role in performance? It seems like the Core 2 Duo (2.4 GHz) scales better than the Xeon X5355 (2.66 GHz), even though the latter has the higher clock speed. Thank you -Zoolii

    Read the article

  • Cutting Paper through Visualization and Collaboration

    - by [email protected]
     It's hard not to hear about "Going Green" these days. Many are working to be more environmentally conscious in their personal lives, and this has extended to the corporate world as well. I know I'm always looking for new ways. Environmental responsibility is important at Oracle too, and we have an entire section of our website dedicated to our solutions around the Eco-Enterprise. You can check it out here: http://www.oracle.com/green/index.html Perhaps the biggest and most obvious challenge in the world of business is the fact that we use so much paper. There are many good reasons why we print today too. For example:

     - Printing off a document, spreadsheet, or CAD design that will be reviewed and marked up while on a plane
     - Having a printout of a facility when a field engineer performs on-site maintenance
     - During a multi-party design review where a number of people will review a drawing in a meeting room, scribbling onto a large scale drawing print to provide their collaborative comments

     These are just a few potential use cases, and they're valid ones. However, there's a way in which you can turn these paper processes into digital ones. AutoVue allows you to view, mark up, and collaborate on all the data you would print. Indeed, this is the core of what AutoVue does. So if we take the examples above, we could address each as follows:

     - Allow you to view the document, spreadsheet, or CAD drawing in AutoVue on your laptop. Even if you originally had this data vaulted in some type of system of record (like an ECM solution) and view your data from there, AutoVue allows you to "Work Offline" and take the documents you need to review on your laptop. From there, the many annotation tools in AutoVue will give you what you need to comment upon the documents that you are reviewing.
     - The challenge with the mobile workforce is always access to information. People who perform maintenance and repair operations are often in locations with little to no Internet connectivity. However, technology is coming to these people in the form of laptops, tablet PCs, and other portable devices too. AutoVue can address situations with limited bandwidth through our streaming technology for viewing, meaning that the most up-to-date information can be pulled up from the central server without the need for large data transfers. When there is no connectivity at all, the "Work Offline" option will handle this.
     - For a design review session, the Real-Time Collaboration capabilities of AutoVue will let all the participants view the same document in a synchronized view, allowing each person to mark up the document at the same time. Since this is done in a web-based manner, not only is it unnecessary to print the document, but you benefit by reducing the travel needed for these sessions. Not only are trees spared, but jet fuel as well.

     There are many steps involved with "Going Green", but each step is a necessary one. What we do today will directly influence our future generations, and we're looking to help.

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >