Search Results

Search found 9133 results on 366 pages for 'global interpreter lock'.


  • After reading this article I'm thinking of moving from the Eric Meyer reset to a global reset. Would it be a wise move?

    - by metal-gear-solid
    After reading this article I'm thinking of changing my CSS reset from the Eric Meyer reset to the global reset * {margin:0;padding:0}, or of using only this:

        body, div, dl, dt, dd, ul, ol, li, h1, h2, h3, h4, h5, h6, pre, code,
        form, fieldset, legend, textarea, p, blockquote, th, td, a {
            border: 0 none;
            font-family: inherit;
            font-size: 100%;
            font-style: inherit;
            font-weight: inherit;
            text-decoration: none;
            vertical-align: baseline;
        }

        table {
            border-collapse: collapse;
            border-spacing: 0;
        }

        li {
            list-style: none;
        }

    Read the article

  • Is it possible to have a global exception hook?

    - by Heinrich Ulbricht
    Hi, my code is fairly well covered with exception handling (try..except). Some exceptions are not expected to happen, and some happen fairly often, which is expected and OK. Now I want to add some automated tests for this code. It would be good to know how many exceptions happened during execution, so I can later check whether the expected number was raised or whether anything unexpected happened. I don't want to clutter every exception-handling block with debug code, so my question is: is there a way to install some kind of global exception handler which sits before all other exception-handling blocks? I am searching for a central place to log these exceptions. Thanks for any suggestions! (And if this matters: it is Delphi 2009.)
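
    The question is about Delphi, but for comparison, this kind of hook does exist in some runtimes; a C# sketch (illustrative names, .NET 4+) using AppDomain.CurrentDomain.FirstChanceException, which fires for every raised exception before any handler runs, handled or not:

        using System;
        using System.Threading;

        static class ExceptionCounter
        {
            private static int _count;

            public static int Count { get { return _count; } }

            public static void Install()
            {
                // Fires at the moment an exception is raised, before any
                // catch block runs - so handled exceptions are counted too.
                AppDomain.CurrentDomain.FirstChanceException +=
                    (sender, e) => Interlocked.Increment(ref _count);
            }
        }

    A test can call Install() once, run the code under test, and then assert on Count.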

    Read the article

  • How do I temporarily monkey with a global module constant?

    - by Daniel
    Greetings, I want to tinker with the global memcache object, and I found the following problems: Cache is a constant, and Cache is a module. I only want to modify the behavior of Cache globally for a small section of code, for a possible major performance gain. Since Cache is a module, I can't reassign it or encapsulate it. I would like to do this, deep in a controller method:

        code code code...
        old_cache = Cache
        Cache = MyCache.new
        code code code...
        Cache = old_cache
        code code code...

    However, since Cache is a constant, I'm forbidden to change it. Threading is not an issue at the moment. :) Would it be "good manners" for me to just alias_method the special code I need for a small section of code and then later unalias it again? That doesn't pass the smell test IMHO. Does anyone have any ideas? TIA, -daniel

    Read the article

  • ASP.NET MVC: Is it possible to set a global variable?

    - by Sergio
    Hello, I have a process within my MVC 2 application that takes a large amount of time and alters many rows in the database along the way. There is a chance that two or more users could attempt to perform this action at the same time, which would lead to undesirable effects. Is there a way to set a global flag somewhere within ASP.NET that I can check on all requests to see if the action in question is currently being executed (a bit that I flip prior to running the query, and then flip back on completion)? Or is there a better way of handling this situation? Thanks
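
    One way to implement such a flag is a static field flipped atomically with Interlocked; the sketch below is illustrative (the type and method names are made up) and assumes a single worker process - in a web garden or farm, a database flag or distributed lock is needed instead:

        using System;
        using System.Threading;

        public static class LongJobGate
        {
            private static int _busy;   // 0 = free, 1 = the long job is running

            // Returns false if another request already started the job.
            public static bool TryRun(Action job)
            {
                // Atomically claim the gate; only one caller can flip 0 -> 1.
                if (Interlocked.CompareExchange(ref _busy, 1, 0) != 0)
                    return false;
                try { job(); return true; }
                finally { Interlocked.Exchange(ref _busy, 0); }
            }
        }

    A controller action can then call LongJobGate.TryRun(RunBigUpdate) and return a "please wait" view whenever it gets false back.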

    Read the article

  • Can this example be done with pointers instead of a global variable?

    - by Louise
    This is a simplified example of the problem I have:

        #include <stdio.h>
        #include <stdlib.h>

        void f2(int** a) {
            printf("a: %i\n", **a);
        }

        void f1(int* a) {
            f2(&a);
        }

        int main() {
            int a = 3;
            f1(&a);   // prints "a: 3"
            f2(???);
            return 0;
        }

    The problem is that I would like to be able to use f2() both in main() and in f1(). Can that be done without using global variables?

    Read the article

  • ServiceStack CorsFeature Global Options Handler Not Firing on Certain Routes

    - by gizmoboy
    I've got a service set up using the CorsFeature, and am using the approach that mythz suggested in other answers, collected in a function used in the AppHost file:

        private void ConfigureCors(Funq.Container container)
        {
            Plugins.Add(new CorsFeature(allowedOrigins: "*",
                allowedMethods: "GET, POST, PUT, DELETE, OPTIONS",
                allowedHeaders: "Content-Type, Authorization, Accept",
                allowCredentials: true));

            PreRequestFilters.Add((httpReq, httpRes) =>
            {
                // Handles the request and closes the response after emitting global HTTP headers
                if (httpReq.HttpMethod == "OPTIONS")
                {
                    httpRes.EndRequest();
                }
            });
        }

    However, the pre-request filter is only firing on some of the service requests. One of the base entities we have in the service is a question entity, and there are custom routes defined as follows:

        [Route("/question")]
        [Route("/question/{ReviewQuestionId}", "GET,DELETE")]
        [Route("/question/{ReviewQuestionId}/{ReviewSectionId}", "GET")]

    Using POSTMAN to fire test queries (all using the OPTIONS verb), we can see that this will fire the pre-request filter:

        http://localhost/myservice/api/question/

    but this will not:

        http://localhost/myservice/api/question/66

    Presumably this is because the second and third routes explicitly define the verbs they accept, and OPTIONS isn't one of them. Is it really necessary to spell out OPTIONS in every defined route that restricts the verbs supported?
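
    If the verb restriction is indeed the cause, one workaround (a sketch, not confirmed by the thread) is simply to add OPTIONS to each restricted route, shown here on a hypothetical request DTO:

        // Hypothetical DTO; the only change is OPTIONS in the verb lists.
        [Route("/question/{ReviewQuestionId}", "GET,DELETE,OPTIONS")]
        [Route("/question/{ReviewQuestionId}/{ReviewSectionId}", "GET,OPTIONS")]
        public class GetReviewQuestion
        {
            public int ReviewQuestionId { get; set; }
            public int ReviewSectionId { get; set; }
        }

    Whether this must be repeated on every route, or can be avoided via filter ordering, is exactly what the question asks.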

    Read the article

  • How can I access a global pointer outside of a C function?

    - by patrick
    I am trying to access the data of *tkn within a different function in my program, for example: putchar(*tkn); It is a global variable, but it's not working correctly. Any ideas?

        #define MAX 20

        // globals
        char *tkn;
        char array[MAX];
        ...

        void tokenize() {
            int i = 0, j = 0;
            char *delim = " ";
            tkn = strtok (str," ");   // get token 1
            if (tkn != NULL) {
                printf("token1: ");
                while ((*tkn != 0) && (tkn != NULL)) {
                    putchar(*tkn);
                    array[i] = *tkn;
                    *tkn++;
                    i++;
                }
            }
        }

    Read the article

  • Should I use a global var or call the function every time? C++

    - by extintor
    I'm using:

        bool GetOS(LPTSTR pszOS)
        {
            OSVERSIONINFOEX osve;
            BOOL bOsVersionInfoEx;

            ZeroMemory(&osve, sizeof(OSVERSIONINFOEX));
            osve.dwOSVersionInfoSize = sizeof(OSVERSIONINFOEX);

            if( !(bOsVersionInfoEx = GetVersionEx ((OSVERSIONINFO *) &osve)) )
                return false;

            TCHAR buf[80];
            StringCchPrintf( buf, 80, TEXT("%u.%u.%u.%u"),
                osve.dwPlatformId, osve.dwMajorVersion,
                osve.dwMinorVersion, osve.dwBuildNumber);
            StringCchCat(pszOS, BUFSIZE, buf);

            return true;
        }

    to get the Windows version, and I am planning to use pszOS every few minutes. Should I use pszOS as a global var or call GetOS() every time? What's the best option from a performance point of view?
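
    The question is C++, but the compute-once caching pattern it is weighing has the same shape in any language; a C# sketch of that pattern (the OS version cannot change while the process runs, so caching it once is safe):

        using System;

        static class OsVersionCache
        {
            // Lazy<T> computes the value on first access, thread-safely,
            // and returns the cached string on every later access.
            private static readonly Lazy<string> _os =
                new Lazy<string>(() => Environment.OSVersion.Version.ToString());

            public static string Value { get { return _os.Value; } }
        }

    Either way, one API call every few minutes is far too cheap to matter for performance; the cache here is mostly a matter of taste.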

    Read the article

  • How to configure a default global EJB transaction attribute in JBoss Server?

    - by seven
    My project needs to migrate from OC4J to JBoss, but it seems that the default EJB transaction attribute is different between them. For OC4J: if you do not specify any transaction attributes for an EJB method, then OC4J uses default transaction attributes. OC4J by default uses Required for CMP 2.0, NotSupported for MDBs, and Supports for all other types of EJBs (refer to the official doc). For JBoss: the default for all EJB types is Required (maybe; refer to an unofficial site). To migrate my project with less effort, how do I configure a default global EJB transaction attribute, e.g. Supports, in JBoss Server?

    Read the article

  • Global search box extension - how to make the browser start when a suggestion is picked?

    - by Ramps
    Hi, I'm implementing a global search box extension (something like the SearchableDictionary sample in the Android SDK). Everything works fine - suggestions are displayed properly. The problem is that I want the browser to start when the user picks a suggestion (each suggestion is a link). The columns of my cursor contain SearchManager.SUGGEST_COLUMN_INTENT_DATA, and I use that to pass the HTTP link. My searchable XML contains the default intent action set to android:searchSuggestIntentAction="android.intent.action.VIEW". But when the user hits the suggestion, my application is started instead of the browser. What am I missing? Regards!

    Read the article

  • In CQRS (event-sourced), do you need a global sequence counter in the event store?

    - by Jon M
    In trying to get my head around CQRS (and DDD in general), I have come across situations where two events occur on different aggregates but the order of them has domain meaning. In that case they could happen so close together that a timestamp (as used by the sample implementations I have seen) cannot differentiate them, meaning the event store doesn't contain a 'complete' representation of the domain, as there is ambiguity over the order in which the events occurred. As an example, the domain could fire a CustomerCreatedEvent, which applies to the Customer aggregate, and then a CustomerAssignedToAgent event on the Agent aggregate. The CustomerAssignedToAgent event doesn't make sense if it occurs before the CustomerCreatedEvent, but typically both of these might be fired as a result of one operation, which makes it likely that the timestamps would effectively be the same. So am I just modelling things badly? Should there ever be a situation where the sequence of events across different aggregates is important? Or should you keep a global sequence number on your event store, so that you can identify the exact sequence in which events occurred?
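
    One common answer (sketched below with assumed types; it is not from the question) is to let the event store itself stamp every appended event with a store-wide, monotonically increasing sequence number, so cross-aggregate order is always recoverable regardless of timestamp resolution:

        using System;
        using System.Collections.Generic;

        public sealed class StoredEvent
        {
            public long GlobalSequence;   // total order across all aggregates
            public Guid AggregateId;
            public object Event;
        }

        public sealed class InMemoryEventStore
        {
            private readonly List<StoredEvent> _log = new List<StoredEvent>();
            private long _globalSequence;
            private readonly object _sync = new object();

            public void Append(Guid aggregateId, object @event)
            {
                lock (_sync)
                {
                    // One counter for the whole store, not one per aggregate.
                    _log.Add(new StoredEvent {
                        GlobalSequence = ++_globalSequence,
                        AggregateId = aggregateId,
                        Event = @event
                    });
                }
            }
        }

    The trade-off, as the question suggests, is that the counter becomes a global point of contention, which matters if the store is distributed.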

    Read the article

  • What should be contained in a global Subversion ignore pattern for Visual Studio 2010?

    - by Chris Simmons
    After installing and using Visual Studio 2010, I'm seeing some newer file types (at least with C++ projects ... don't know about the other types) as compared to 2008, e.g. .sdf and .opensdf, which I guess are the replacement for .ncb files, with IntelliSense info stored in SQL Server Compact files? I also notice .log files are generated, which appear to be build logs. Given this, what's safe to add to my global ignore pattern? Off the bat, I'd assume .sdf and .opensdf, but what else?

    Read the article

  • What are the possible tags within the "global" tag in Magento's "config.xml" file?

    - by Knowledge Craving
    Can some professional, experienced Magento developer tell me how to accomplish the following in Magento? I want to know which tags can fit inside the "global" tag in the "config.xml" file of every module's etc folder. I have tried searching for this answer in many places on the Internet, but in vain. Please provide full details along with it, because I want the users accessing this website to find it useful, instead of scratching their heads. I really want a detailed explanation, because every newbie like me gets totally confused at this point. What I know till now is that in this file you can set routers, rewrites, cron jobs, admin html, front-end html and many more, but without any strong concepts. So please, I want that strong fundamental concept, with a detailed explanation of it.

    Read the article

  • Global variable in a recursive function: how do I keep it at zero?

    - by Grammin
    So if I have a recursive function with a global variable var_:

        int var_;

        void foo() {
            if (var_ == 3)
                return;
            else
                var_++;
            foo();
        }

    and then I have a function that calls foo():

        void bar() {
            foo();
            return;
        }

    what is the best way to set var_ = 0 every time foo() is called that's not from within itself? I know I could just do:

        void bar() {
            var_ = 0;
            foo();
            return;
        }

    but I'm using the recursive function a lot, and I don't want to call foo() and forget to set var_ = 0 at a later date. Does anyone have any suggestions on how to solve this? Thanks, Josh
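
    The usual fix is to stop sharing the state at all and pass it as a parameter, so every top-level call starts from zero automatically; sketched here in C# (the shape is identical in C++, where a default argument works the same way):

        static void Foo(int depth = 0)
        {
            if (depth == 3)
                return;
            // The counter travels with the call, so there is no global
            // variable to forget to reset.
            Foo(depth + 1);
        }

        static void Bar()
        {
            Foo();   // always starts at depth 0
        }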

    Read the article

  • I set a global bug catcher in my application for unhandled exceptions and it is picking up all exceptions

    - by user1632018
    I am trying to figure out why this is happening. In my VB.NET application I set a global handler in ApplicationEvents.vb, which I thought would only pick up unhandled exceptions, but it is picking up every exception that happens in my application, whether or not it is handled with a Try...Catch block. Here is my code in ApplicationEvents:

        Private Sub MyApplication_UnhandledException(ByVal sender As Object, _
            ByVal e As Microsoft.VisualBasic.ApplicationServices.UnhandledExceptionEventArgs) _
            Handles Me.UnhandledException

            e.ExitApplication = _
                MessageBox.Show(e.Exception.Message & _
                    vbCrLf & "The application has encountered a bug, would you like to Continue?", _
                    "An Error has occured.", _
                    MessageBoxButtons.YesNo, _
                    MessageBoxIcon.Question) _
                = DialogResult.No
        End Sub

    In the rest of my application I set normal Try...Catch blocks like this:

        Try
        Catch ex As Exception
        End Try

    Can anyone tell me why this is happening?

    Read the article

  • Using CTAS & Exchange Partition to Replace IAS for Copying a Partition on Exadata

    - by Bandari Huang
    Usage scenario: copy data and indexes from one partition to another partition in a partitioned table.

    Solution:
      - Create a partition definition.
      - Copy data from one partition to another partition by 'Insert as select (IAS)'.
      - Create a nonpartitioned table by 'Create table as select (CTAS)'.
      - Convert the nonpartitioned table into a partition of the partitioned table by exchanging their data segments.
      - Rebuild unusable indexes.

    Exchange partition conversions:
      - Mutual conversion between a partition (or subpartition) and a nonpartitioned table.
      - Mutual conversion between a hash-partitioned table and a partition of a composite *-hash partitioned table.
      - Mutual conversion between a [range | list]-partitioned table and a partition of a composite *-[range | list] partitioned table.

    Exchange partition usage scenarios:
      - High-speed loading of new, incremental data into an existing partitioned table in a DW environment.
      - Exchanging old data partitions out of a partitioned table; the data is purged from the partitioned table without actually being deleted, and can be archived separately.

    Exchange partition syntax:

        ALTER TABLE schema.table
          EXCHANGE [PARTITION|SUBPARTITION] [partition|subpartition]
          WITH TABLE schema.table
          [INCLUDING|EXCLUDING] INDEXES
          [WITH|WITHOUT] VALIDATION
          UPDATE [INDEXES|GLOBAL INDEXES]

    INCLUDING | EXCLUDING INDEXES: specify INCLUDING INDEXES if you want local index partitions or subpartitions to be exchanged with the corresponding table index (for a nonpartitioned table) or local indexes (for a hash-partitioned table). Specify EXCLUDING INDEXES if you want all index partitions or subpartitions corresponding to the partition, and all the regular indexes and index partitions on the exchanged table, to be marked UNUSABLE. If you omit this clause, the default is EXCLUDING INDEXES.

    WITH | WITHOUT VALIDATION: specify WITH VALIDATION if you want Oracle Database to return an error if any rows in the exchanged table do not map into the partitions or subpartitions being exchanged. Specify WITHOUT VALIDATION if you do not want Oracle Database to check the proper mapping of rows in the exchanged table. If you omit this clause, the default is WITH VALIDATION.

    UPDATE INDEXES | GLOBAL INDEXES: unless you specify UPDATE INDEXES, the database marks UNUSABLE the global indexes or all global index partitions on the table whose partition is being exchanged. Global indexes or global index partitions on the table being exchanged remain invalidated. (You cannot use UPDATE INDEXES for index-organized tables; use UPDATE GLOBAL INDEXES instead.)

    Notes on exchanging partitions and subpartitions:
      - Both tables involved in the exchange must have the same primary key, and no validated foreign keys can be referencing either of the tables unless the referenced table is empty.
      - When exchanging partitioned index-organized tables:
          - The source and target table or partition must have their primary key set on the same columns, in the same order.
          - If key compression is enabled, it must be enabled for both the source and the target, with the same prefix length.
          - Both the source and target must be index organized.
          - Both the source and target must have overflow segments, or neither can have overflow segments. Also, both must have mapping tables, or neither can have a mapping table.
          - Both the source and target must have identical storage attributes for any LOB columns.

    Read the article

  • Working around MySQL error "Deadlock found when trying to get lock; try restarting transaction"

    - by Anon Guy
    Hi all: I have a MySQL table with about 5,000,000 rows that are being constantly updated in small ways by parallel Perl processes connecting via DBI. The table has about 10 columns and several indexes. One fairly common operation sometimes gives rise to the following error:

        DBD::mysql::st execute failed: Deadlock found when trying to get lock;
        try restarting transaction at Db.pm line 276.

    The SQL statement that triggers the error is something like this:

        UPDATE file_table SET a_lock = 'process-1234'
        WHERE param1 = 'X' AND param2 = 'Y' AND param3 = 'Z'
        LIMIT 47

    The error is triggered only sometimes; I'd estimate in 1% of calls or less. However, it never happened with a small table, and has become more common as the database has grown. Note that I am using the a_lock field in file_table to ensure that the four near-identical processes I am running do not try to work on the same row. The LIMIT is designed to break their work into small chunks. I haven't done much tuning on MySQL or DBD::mysql. MySQL is a standard Solaris deployment, and the database connection is set up as follows:

        my $dsn = "DBI:mysql:database=" . $DbConfig::database .
                  ";host=${DbConfig::hostname};port=${DbConfig::port}";
        my $dbh = DBI->connect($dsn, $DbConfig::username, $DbConfig::password,
                               { RaiseError => 1, AutoCommit => 1 })
            or die $DBI::errstr;

    I have seen online that several other people have reported similar errors, and that this may be a genuine deadlock situation. I have two questions:

      1. What exactly about my situation is causing the error above?
      2. Is there a simple way to work around it or lessen its frequency? For example, how exactly do I go about "restarting transaction at Db.pm line 276"?

    Thanks in advance.
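
    "Restarting the transaction" just means catching the deadlock error and re-running the statement. The retry pattern is language-agnostic; here is a C# sketch of it (the Perl/DBI version has the same shape, with the deadlock test checking for MySQL error 1213 or the message above):

        using System;
        using System.Threading;

        static class DeadlockRetry
        {
            // Run an action, retrying when the caller-supplied predicate says
            // the failure was a deadlock, up to maxAttempts times.
            public static void Run(Action action, Func<Exception, bool> isDeadlock,
                                   int maxAttempts = 3)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        action();   // e.g. the UPDATE ... LIMIT 47 statement
                        return;
                    }
                    catch (Exception ex) when (isDeadlock(ex) && attempt < maxAttempts)
                    {
                        // Brief backoff so the competing process can finish first.
                        Thread.Sleep(50 * attempt);
                    }
                }
            }
        }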

    Read the article

  • How to find out where a thread lock happened?

    - by SchlaWiener
    One of our company's Windows Forms applications has had a strange problem for several months. The app worked very reliably for most of our customers, but on some PCs (mostly with a wireless LAN connection) the app sometimes just didn't respond anymore (you click on the UI and Windows asks you to wait or kill the app). I wasn't able to track down the problem for a long time, but now I've figured out what happened. The app had this line of code:

        ' don't blame me for this. Wasn't my code :D
        Control.CheckForIllegalCrossThreadCalls = False

    and used some background threads to modify the controls. Now I found a way to reproduce the application-stops-responding bug on my dev machine and tracked it down to a line where I actually used Invoke() to run a task in the main thread:

        Me.Invoke(MyDelegate, arg1, arg2)

    Obviously there was a thread lock somewhere. After removing the Control.CheckForIllegalCrossThreadCalls = False statement and refactoring the whole program to use Invoke() when modifying a control from a background thread, the problem is (hopefully) gone. However, I am wondering if there is a way to find such bugs without debugging every line of code. (Even if I break into the debugger after the app stops responding, I can't tell what happened last, because the IDE doesn't jump to the Invoke() statement.) In other words: if my app hangs, how can I figure out which line of code was executed last? Maybe even on the customer's PC. I know VS2010 offers some backwards-debugging features; maybe that would be a solution, but currently I am using VS2008.

    Read the article

  • C#: Can I check on a lock without trying to acquire it?

    - by Biff MaGriff
    Hello, I have a lock in my C# web app that prevents users from running the update script once it has started. I was thinking I would put a notification in my master page to let the user know that the data isn't all there yet. Currently I do my locking like so:

        protected void butRefreshData_Click(object sender, EventArgs e)
        {
            Thread t = new Thread(new ParameterizedThreadStart(UpdateDatabase));
            t.Start(this);
            // sleep for a bit to ensure that javascript has a chance to get rendered
            Thread.Sleep(100);
        }

        public static void UpdateDatabase(object con)
        {
            if (Monitor.TryEnter(myLock))
            {
                Updater.RepopulateDatabase();
                Monitor.Exit(myLock);
            }
            else
            {
                Common.RegisterStartupScript(con, AlreadyLockedJavaScript);
            }
        }

    And I do not want to do:

        if (Monitor.TryEnter(myLock))
            Monitor.Exit(myLock);
        else
            // show processing label

    as I imagine there is a slight possibility that it might display the notification when the update isn't actually running. Is there an alternative I can use? Edit: Hi everyone, thanks a lot for your suggestions! Unfortunately I couldn't quite get them to work... However, I combined the ideas from two answers and came up with my own solution.
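
    One alternative (a sketch built on the question's own method; the volatile flag is the only addition) is to have the worker publish its state in a field the master page can read without ever touching the lock:

        // false = idle, true = update in progress; volatile so readers
        // always see the latest value without locking.
        private static volatile bool _updateRunning;

        public static bool IsUpdateRunning
        {
            get { return _updateRunning; }
        }

        public static void UpdateDatabase(object con)
        {
            if (Monitor.TryEnter(myLock))
            {
                _updateRunning = true;
                try
                {
                    Updater.RepopulateDatabase();
                }
                finally
                {
                    _updateRunning = false;
                    Monitor.Exit(myLock);
                }
            }
            else
            {
                Common.RegisterStartupScript(con, AlreadyLockedJavaScript);
            }
        }

    The master page then checks IsUpdateRunning, which never blocks, instead of probing the Monitor itself.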

    Read the article

  • How should I lock the table in this VB6 / Access application?

    - by Brian Hooper
    I'm working on a VB6 application using an Access database. The application writes messages to a log table from time to time. Several instances of the application may be running simultaneously, and to distinguish them they each have their own run number. The run number is deduced from the log table thus:

        Set record_set = New ADODB.Recordset
        query_string = "SELECT MAX(RUN_NUMBER) + 1 AS NEW_RUN_NUMBER FROM ERROR_LOG"
        record_set.CursorLocation = adUseClient
        record_set.Open query_string, database_connection, adOpenStatic, , adCmdText
        record_set.MoveLast

        If IsNull(record_set.Fields("NEW_RUN_NUMBER")) Then
            run_number = 0
        Else
            run_number = record_set.Fields("NEW_RUN_NUMBER")
        End If

        command_string = "INSERT INTO ERROR_LOG (RUN_NUMBER, SEVERITY, MESSAGE) " & _
                         " VALUES (" & Str$(run_number) & ", " & _
                         " " & Str$(SEVERITY_INFORMATION) & ", " & _
                         " 'Run Started'); "
        database_connection.Execute command_string

    Obviously there is a small gap between the calculation of the run number and the appearance of the new row in the database, and to prevent another instance getting access between the two operations I'd like to lock the table; something along the lines of:

        SET TRANSACTION READ WRITE RESERVING ERROR_LOG FOR PROTECTED WRITE;

    How should I go about doing this? Would locking the recordset do any good (the row in the record set doesn't match any particular row in the database)?

    Read the article

  • Microbenchmark showing process-switching faster than thread-switching; what's wrong?

    - by Yang
    I have two simple microbenchmarks trying to measure thread- and process-switching overheads, but the process-switching overhead comes out lower (per the comments in the code: roughly 2.1-2.4 us/switch for processes versus 4-5 us/switch for threads). The code lives here, and r1667 is pasted below:

    https://assorted.svn.sourceforge.net/svnroot/assorted/sandbox/trunk/src/c/process_switch_bench.c

        // on zs, ~2.1-2.4us/switch
        #include <stdlib.h>
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <semaphore.h>
        #include <unistd.h>
        #include <sys/wait.h>
        #include <sys/types.h>
        #include <sys/time.h>
        #include <pthread.h>

        uint32_t COUNTER;
        pthread_mutex_t LOCK;
        pthread_mutex_t START;
        sem_t *s0, *s1, *s2;

        void * threads ( void * unused ) {
          // Wait till we may fire away
          sem_wait(s2);
          for (;;) {
            pthread_mutex_lock(&LOCK);
            pthread_mutex_unlock(&LOCK);
            COUNTER++;
            sem_post(s0);
            sem_wait(s1);
          }
          return 0;
        }

        int64_t timeInMS () {
          struct timeval t;
          gettimeofday(&t, NULL);
          return ( (int64_t)t.tv_sec * 1000 + (int64_t)t.tv_usec / 1000 );
        }

        int main ( int argc, char ** argv ) {
          int64_t start;
          pthread_t t1;
          pthread_mutex_init(&LOCK, NULL);
          COUNTER = 0;

          s0 = sem_open("/s0", O_CREAT, 0022, 0);
          if (s0 == 0) { perror("sem_open"); exit(1); }
          s1 = sem_open("/s1", O_CREAT, 0022, 0);
          if (s1 == 0) { perror("sem_open"); exit(1); }
          s2 = sem_open("/s2", O_CREAT, 0022, 0);
          if (s2 == 0) { perror("sem_open"); exit(1); }

          int x, y, z;
          sem_getvalue(s0, &x);
          sem_getvalue(s1, &y);
          sem_getvalue(s2, &z);
          printf("%d %d %d\n", x, y, z);

          pid_t pid = fork();
          if (pid) {
            pthread_create(&t1, NULL, threads, NULL);
            pthread_detach(t1);
            // Get start time and fire away
            start = timeInMS();
            sem_post(s2);
            sem_post(s2);
            // Wait for about a second
            sleep(1);
            // Stop thread
            pthread_mutex_lock(&LOCK);
            // Find out how much time has really passed. sleep won't guarantee me that
            // I sleep exactly one second, I might sleep longer since even after being
            // woken up, it can take some time before I gain back CPU time. Further
            // some more time might have passed before I obtained the lock!
            int64_t time = timeInMS() - start;
            // Correct the number of thread switches accordingly
            COUNTER = (uint32_t)(((uint64_t)COUNTER * 2 * 1000) / time);
            printf("Number of process switches in about one second was %u\n", COUNTER);
            printf("roughly %f microseconds per switch\n", 1000000.0 / COUNTER);
            // clean up
            kill(pid, 9);
            wait(0);
            sem_close(s0);
            sem_close(s1);
            sem_unlink("/s0");
            sem_unlink("/s1");
            sem_unlink("/s2");
          } else {
            if (1) { sem_t *t = s0; s0 = s1; s1 = t; }
            threads(0); // never return
          }
          return 0;
        }

    https://assorted.svn.sourceforge.net/svnroot/assorted/sandbox/trunk/src/c/thread_switch_bench.c

        // From <http://stackoverflow.com/questions/304752/how-to-estimate-the-thread-context-switching-overhead>
        // on zs, ~4-5us/switch; tried making COUNTER updated only by one thread, but no difference
        #include <stdlib.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <pthread.h>
        #include <unistd.h>
        #include <sys/time.h>

        uint32_t COUNTER;
        pthread_mutex_t LOCK;
        pthread_mutex_t START;
        pthread_cond_t CONDITION;

        void * threads ( void * unused ) {
          // Wait till we may fire away
          pthread_mutex_lock(&START);
          pthread_mutex_unlock(&START);

          int first = 1;
          pthread_mutex_lock(&LOCK);
          // If I'm not the first thread, the other thread is already waiting on
          // the condition, thus I have to wake it up first, otherwise we'll deadlock
          if (COUNTER > 0) {
            pthread_cond_signal(&CONDITION);
            first = 0;
          }
          for (;;) {
            if (first) COUNTER++;
            pthread_cond_wait(&CONDITION, &LOCK);
            // Always wake up the other thread before processing. The other
            // thread will not be able to do anything as long as I don't go
            // back to sleep first.
            pthread_cond_signal(&CONDITION);
          }
          pthread_mutex_unlock(&LOCK);
          return 0;
        }

        int64_t timeInMS () {
          struct timeval t;
          gettimeofday(&t, NULL);
          return ( (int64_t)t.tv_sec * 1000 + (int64_t)t.tv_usec / 1000 );
        }

        int main ( int argc, char ** argv ) {
          int64_t start;
          pthread_t t1;
          pthread_t t2;
          pthread_mutex_init(&LOCK, NULL);
          pthread_mutex_init(&START, NULL);
          pthread_cond_init(&CONDITION, NULL);
          pthread_mutex_lock(&START);
          COUNTER = 0;
          pthread_create(&t1, NULL, threads, NULL);
          pthread_create(&t2, NULL, threads, NULL);
          pthread_detach(t1);
          pthread_detach(t2);
          // Get start time and fire away
          start = timeInMS();
          pthread_mutex_unlock(&START);
          // Wait for about a second
          sleep(1);
          // Stop both threads
          pthread_mutex_lock(&LOCK);
          // Find out how much time has really passed. sleep won't guarantee me that
          // I sleep exactly one second, I might sleep longer since even after being
          // woken up, it can take some time before I gain back CPU time. Further
          // some more time might have passed before I obtained the lock!
          int64_t time = timeInMS() - start;
          // Correct the number of thread switches accordingly
          COUNTER = (uint32_t)(((uint64_t)COUNTER * 2 * 1000) / time);
          printf("Number of thread switches in about one second was %u\n", COUNTER);
          printf("roughly %f microseconds per switch\n", 1000000.0 / COUNTER);
          return 0;
        }

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization.

    The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

        double min = double.MaxValue;
        foreach(var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition, and get the wrong answer. Let's look at what happens here:

        // Buggy code! Do not use!
        double min = double.MaxValue;
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        });

    This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time, and set the wrong value for min. Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first, then element 2 sets the min variable, we'll detect a min value of 2 instead of 1. This can lead to wrong answers.

    Unfortunately, fixing this, with the Parallel.ForEach call we're using, would require adding locking. We would need to rewrite this like:

        // Safe, but slow
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            lock(syncObject)
                min = System.Math.Min(min, value);
        });

    This will potentially add a huge amount of overhead to our calculation. Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation. The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation. This leads to two observations I'll make:

      1. When parallelizing a routine, try to avoid locks.
      2. That being said: always add any and all required synchronization to avoid race conditions.

    These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time.

    However, this isn't the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE). Processing Element is the standard term for a hardware element which can process and execute instructions. This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger "chunks" which get processed on different threads via the ThreadPool. This helps reduce the threading overhead, and helps the overall speed. In general, the Parallel class will only use one thread per PE in the system.

    Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design. We can parallelize our algorithm more effectively by approaching it differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform this in a given order. We knew this to be true already – otherwise, we wouldn't have been able to parallelize this routine in the first place. With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, "merge" together the results.

    This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results.

    To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as:

        public static ParallelLoopResult ForEach<TSource, TLocal>(
            IEnumerable<TSource> source,
            Func<TLocal> localInit,
            Func<TSource, ParallelLoopState, TLocal, TLocal> body,
            Action<TLocal> localFinally
        )

    The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread's operations.

    The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate. This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed.

    The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it's been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation.

    Now that I've explained how this works, let's look at the code:

        // Safe, and fast!
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock(syncObject)
                    min = System.Math.Min(min, localState);
            }
        );

    Although this is a bit more complicated than the previous version, it is now both thread-safe and has minimal locking. This same approach can be used by Parallel.For, although now, it's Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results.

    Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add.

    By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
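
    As a concrete illustration of the Interlocked approach mentioned above, here is a sum aggregation (a sketch in the same spirit as the MSDN example) where the per-thread merge is a single atomic Interlocked.Add, with no lock at all:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class InterlockedSum
        {
            static void Main()
            {
                long[] values = new long[1000000];
                for (int i = 0; i < values.Length; i++) values[i] = i;

                long total = 0;
                Parallel.For(0, values.Length,
                    // localInit: each thread starts with a zero subtotal
                    () => 0L,
                    // body: accumulate into the thread-local subtotal, lock-free
                    (i, loopState, subtotal) => subtotal + values[i],
                    // localFinally: one atomic merge per participating thread
                    subtotal => Interlocked.Add(ref total, subtotal));

                Console.WriteLine(total);   // 499999500000
            }
        }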

    Read the article

  • Oracle BI and XS Energy Drinks – Don’t Miss the Amway Presentation!

    - by Michelle Kimihira
    By Maria Forney

    Amway is a global leader in the direct sales industry with $10.9B in annual sales in more than 100 countries and territories. The company has implemented a global BI framework that provides accurate, consistent, and timely insights to support global, regional and local analytical research, business planning, performance measurement and assessment. Oracle BI EE is used by 1,500 employees across Amway sales, marketing, finance, and supply chain business units, as well as Amway affiliates in Europe, Russia, South Africa, Japan, Australia, Latin America, Malaysia, Vietnam, and Indonesia.

    Last week, I spoke with Lead Data Analyst with Amway Global Sales, Dan Arganbright, and IT Manager with Amway BI Competency Center, Mike Olson, about their upcoming presentation at Oracle OpenWorld in San Francisco. Scheduled during a prime speaking slot on Monday, October 1 at 12:15pm in Moscone West, 2007, Dan and Mike will discuss their experience building Amway’s Distributor Consulting solution, powered by Oracle BI EE. You can find more information here.

    As background, Amway offers people an opportunity to own their own businesses, and consumers exclusive products in health and wellness, beauty and home care. The Amway internal Sales organization is charged with consulting leadership-level Distributors to help them with data insights and ultimately grow their business. Until recently, this was a resource-intensive process of gathering and formatting data. In some markets, it took over 40 hours to collect the data and produce the analysis needed for one consultation session.

    Amway began its global BI journey in 2006, and since then the company has migrated from having multiple technology providers and integration points to an integrated strategic vendor approach. Today, the company has standardized on Oracle technology for BI. Amway has achieved cost savings through the retirement of redundant technology platforms. In addition, Mike’s organization has led the charge to align disparate BI organizations into a BI Competency Center. The following diagram highlights the simplicity of the standardized architecture of Amway today.

    Dubbed Distributor Consulting, Amway has developed a BI solution using the Oracle technology stack to help Distributor leaders grow their businesses. The Distributor Consulting solution provides over 40 metrics for Sales staff to provide data-driven insights on the Distributors and organizations they support. Using Oracle BI EE, Exadata, and Oracle Data Integrator, Amway provides customized and personalized business intelligence, and the Oracle BI EE dashboards were developed by the Amway Sales organization, which demonstrates business empowerment of the technology.

    Amway is also leveraging the power of BI to drive business growth in all of its markets. A new set of Distributor Segmentation metrics is enabling a better understanding of distributor behaviors. A Global Scorecard that Amway developed provides key metrics at a market and global level for executive-level discussions. Product Analysis teams can now highlight repeat purchase rates, product penetration and the success of CRM campaigns.

    In the words of Dan and Mike, the addition of Exadata 11 months ago has been “a game changer.” Amway has been able to dramatically reduce complexity, improve performance and increase business productivity and cost savings. For example, the number of indexes on the global data warehouse was reduced from more than 1,000 to fewer than 20. Pulling data for the highest-level distributors or the largest markets in the company can now be done in minutes instead of hours. As a result, IT has shifted from performance tuning and keeping the system operational to higher-value, business-focused activities.

        “The distributors that have been introduced to the BI reports have found them extremely helpful. Because they have never had this kind of information before, when they were presented with the reports, they wanted to take action immediately!” - Sales Development Manager in Latin America

    Without giving away more, the Amway case study presentation will be one of the unique customer sessions at OpenWorld this year. Speakers Dan Arganbright and Mike Olson have planned an interactive and entertaining session on Monday, October 1 at 12:15pm in Moscone West, 2007. I’ll see you there!

    Read the article


  • How can I back up my Ubuntu system?

    - by Eloff
    I'm sure there are a lot of questions on here similar to this, and I've been reading them, but I still feel this warrants a new question. I want nightly, incremental backups (full disk images would waste a lot of space, unless compressed somehow), preferably rotating or deleting old backups when running out of space or after a fixed number of backups. I want to be able to quickly and painlessly restore my system from these backups. This is my first time running Ubuntu as my main development machine, and I know from my experience with it as a server and in virtual machines that I regularly manage to make it unbootable or damage it to the point of being unable to rescue it.

    So how would you recommend I do this? There are so many options out there I really don't know where to start. There seems to be a vocal school of thought that it's sufficient to back up your home directory and the list of installed packages from the package manager. I've already installed lots of things from source, or outside of the package manager (development tools, IDEs, compilers, graphics drivers, etc.), so at the very least, if I do not back up the operating system itself, I need to grab all config files, all program binaries, all created but required files, etc. I'd rather back up too much than too little; an Ubuntu install is tiny anyway. Also this drastically reduces the restore time, which would cost me more in my time than the extra storage space.

    I tried using Deja Dup to back up the root partition, excluding some things like /mnt, /media, /dev, /proc, etc. Although many websites assured me you can back up a running Linux system this way, that seems to be false, as it complained that it could not back up the following files:

        /boot/System.map-3.0.0-17-generic /boot/System.map-3.2.0-22-generic
        /boot/vmcoreinfo-3.0.0-17-generic /boot/vmlinuz-3.0.0-17-generic
        /boot/vmlinuz-3.2.0-22-generic /etc/.pwd.lock
        /etc/NetworkManager/system-connections/LAN Connection
        /etc/apparmor.d/cache/lightdm-guest-session /etc/apparmor.d/cache/sbin.dhclient
        /etc/apparmor.d/cache/usr.bin.evince /etc/apparmor.d/cache/usr.lib.telepathy
        /etc/apparmor.d/cache/usr.sbin.cupsd /etc/apparmor.d/cache/usr.sbin.tcpdump
        /etc/apt/trustdb.gpg /etc/at.deny /etc/ati/inst_path_default
        /etc/ati/inst_path_override /etc/chatscripts /etc/cups/ssl
        /etc/cups/subscriptions.conf /etc/cups/subscriptions.conf.O
        /etc/default/cacerts /etc/fuse.conf /etc/group- /etc/gshadow /etc/gshadow-
        /etc/mtab.fuselock /etc/passwd- /etc/ppp/chap-secrets /etc/ppp/pap-secrets
        /etc/ppp/peers /etc/security/opasswd /etc/shadow /etc/shadow-
        /etc/ssl/private /etc/sudoers /etc/sudoers.d/README /etc/ufw/after.rules
        /etc/ufw/after6.rules /etc/ufw/before.rules /etc/ufw/before6.rules
        /lib/ufw/user.rules /lib/ufw/user6.rules /lost+found /root /run/crond.reboot
        /run/cups/certs /run/lightdm /run/lock/whoopsie/lock /run/udisks
        /var/backups/group.bak /var/backups/gshadow.bak /var/backups/passwd.bak
        /var/backups/shadow.bak /var/cache/apt/archives/lock /var/cache/cups/job.cache
        /var/cache/cups/job.cache.O /var/cache/cups/ppds.dat
        /var/cache/debconf/passwords.dat /var/cache/ldconfig /var/cache/lightdm/dmrc
        /var/crash/_usr_lib_x86_64-linux-gnu_colord_colord.102.crash
        /var/lib/apt/lists/lock /var/lib/dpkg/lock /var/lib/dpkg/triggers/Lock
        /var/lib/lightdm /var/lib/mlocate/mlocate.db /var/lib/polkit-1 /var/lib/sudo
        /var/lib/urandom/random-seed /var/lib/ureadahead/pack
        /var/lib/ureadahead/run.pack /var/log/btmp /var/log/installer/casper.log
        /var/log/installer/debug /var/log/installer/partman /var/log/installer/syslog
        /var/log/installer/version /var/log/lightdm/lightdm.log
        /var/log/lightdm/x-0-greeter.log /var/log/lightdm/x-0.log
        /var/log/speech-dispatcher /var/log/upstart/alsa-restore.log
        /var/log/upstart/alsa-restore.log.1.gz /var/log/upstart/console-setup.log
        /var/log/upstart/console-setup.log.1.gz /var/log/upstart/container-detect.log
        /var/log/upstart/container-detect.log.1.gz /var/log/upstart/hybrid-gfx.log
        /var/log/upstart/hybrid-gfx.log.1.gz /var/log/upstart/modemmanager.log
        /var/log/upstart/modemmanager.log.1.gz /var/log/upstart/module-init-tools.log
        /var/log/upstart/module-init-tools.log.1.gz
        /var/log/upstart/procps-static-network-up.log
        /var/log/upstart/procps-static-network-up.log.1.gz
        /var/log/upstart/procps-virtual-filesystems.log
        /var/log/upstart/procps-virtual-filesystems.log.1.gz
        /var/log/upstart/rsyslog.log /var/log/upstart/rsyslog.log.1.gz
        /var/log/upstart/ureadahead.log /var/log/upstart/ureadahead.log.1.gz
        /var/spool/anacron/cron.daily /var/spool/anacron/cron.monthly
        /var/spool/anacron/cron.weekly /var/spool/cron/atjobs /var/spool/cron/atspool
        /var/spool/cron/crontabs /var/spool/cups

    Read the article
