Search Results

Search found 4860 results on 195 pages for 'parallel extensions'.

Page 13/195 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • White Paper on Parallel Processing

    - by Andrew Kelly
      I just ran across what I think is a newly published white paper on parallel processing in SQL Server 2008 R2. The date is October 2010, but this is the first time I have seen it, so I am not 100% sure how new it really is. Maybe you have seen it already, but if not I recommend taking a look. So far I haven’t had time to read it extensively, but from a cursory look it appears to be quite informative, and this is one of the areas least understood by a lot of DBAs. It was authored by Don Pinto...(read more)

    Read the article

  • Visual Studio 2010: Fun with extensions

    - by BizTalk Visionary
    One of the powerful things that has come into Visual Studio over the last few years has been the joy of extensions, and with 2010 there seem to be even more!! Of course teaching old dogs like myself new tricks always takes time, but interestingly enough some of the rules I learnt early in my working life over 30 years ago still hold true!! A derivation of one that was knocked into me during my engineering apprenticeship and associated exams was RTFQ! Read the ‘flippin’ question. (I replaced the original ‘F’ with a more palatable version here!). Today I forgot that rule, didn’t RTFI (I being instructions) and spent a fruitless hour wondering why my Entity Framework POCO generator never appeared in the new project template list when I wanted to add it!! It was simple of course – I had only installed the Entity Framework POCO generator for web sites, and since I wasn’t building a web project it was never going to appear!!! A quick look again and I found the ‘other’ extension that supported my project type! So RTFI!!

    Read the article

  • Mouse Clicks, Reactive Extensions and StreamInsight Mashup

    I had an hour spare this afternoon so I wanted to have another play with Reactive Extensions in .Net and StreamInsight.  I also didn’t want to simply use a console window as a way of gathering events so I decided to use a windows form instead. The task I set myself was this. Whenever I click on my form I want to subscribe to the event and output its location to the console window and also the timestamp of the event.  In addition to this I want to know for every mouse click I do, how many mouse clicks have happened in the last 5 seconds. The second point here is really interesting.  I have often found this when working with people on problems.  It is how you ask the question that determines how you tackle the problem.  I will show 2 ways of possibly answering the second question depending on how the question was interpreted. As a side effect of this example I will show how time in StreamInsight can stand still.  This is an important concept and we can see it in the output later. Now to the code.  I will break it all down in this blogpost but you can download the solution and see it all together. I created a Console application and then instantiate a windows form.   frm = new Form(); Thread g = new Thread(CallUI); g.SetApartmentState(ApartmentState.STA); g.Start();   Call UI looks like this   static void CallUI() { System.Windows.Forms.Application.Run(frm); frm.Activate(); frm.BringToFront(); }   Now what we need to do is create an observable from the MouseClick event on the form.  For this we use the Reactive Extensions.   var lblevt = Observable.FromEvent<MouseEventArgs>(frm, "MouseClick").Timestamp();   As mentioned earlier I have two objectives in this example and to solve the first I am going to again use the Reactive extensions.  Let’s subscribe to the MouseClick event and output the location and timestamp to the console. lblevt.Subscribe(evt => { Console.WriteLine("Clicked: {0}, {1} ", evt.Value.EventArgs.Location,evt.Timestamp); }); That should take care of obective #1 but what about the second objective.  For that we need some temporal windowing and this means StreamInsight.  First we need to turn our Observable collection of MouseClick events into a PointStream Server s = Server.Create("Default"); Microsoft.ComplexEventProcessing.Application a = s.CreateApplication("MouseClicks"); var input = lblevt.ToPointStream( a, evt => PointEvent.CreateInsert( evt.Timestamp, new { loc = evt.Value.EventArgs.Location.ToString(), ts = evt.Timestamp.ToLocalTime().ToString() }), AdvanceTimeSettings.IncreasingStartTime);   Now that we have created out PointStream we need to do something with it and this is where we get to our second objective.  It is pretty clear that we want some kind of windowing but what? Here is one way of doing it.  It might not be what you wanted but again it is how the second objective is interpreted   var q = from i in input.TumblingWindow(TimeSpan.FromSeconds(5), HoppingWindowOutputPolicy.ClipToWindowEnd) select new { CountOfClicks = i.Count() };   The above code creates tumbling windows of 5 seconds and counts the number of events in the windows.  If there are no events in the window then no result is output.  Likewise until an event (MouseClick) is issued then we do not see anything in the output (that is not strictly true because it is the CTI strapped to our MouseClick events that flush the events through the StreamInsight engine not the events themselves).  This approach is centred around the windows and not the events.  
Until the windows complete and a CTI is issued then no events are pushed through. An alternate way of answering our second question is below:

var q = from i in input.AlterEventDuration(evt => TimeSpan.FromSeconds(5)).SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
        select new { CountOfClicks = i.Count() };

In this code we extend the duration of each MouseClick to five seconds. We then create Snapshot Windows over those events. Snapshot windows are discussed in detail here. With this solution we are centred around the events. It is the events that are driving the output. Let’s have a look at the output from this solution as it may be a little confusing. First though let me show how we get the output from StreamInsight into the Console window.

foreach (var x in q.ToPointEnumerable().Where(e => e.EventKind != EventKind.Cti))
{
    Console.WriteLine(x.Payload.CountOfClicks);
}

Ok so now to the output. The table at the top shows the output from our routine and the table at the bottom helps to explain the output. One of the things that will help as well is, you will note that for our PointStream we set the issuing of CTIs to be IncreasingStartTime. What this means is that the CTI is placed right at the start of the event, so it will not flush the event with which it was issued but will flush those prior to it. In the bottom table the blue fill is where we issued a click, the yellow fill is the duration and boundaries of our events, and the numbers at the bottom indicate the count of events.

Clicked 22:40:16  ->  (no output)
Clicked 22:40:18  ->  1
Clicked 22:40:20  ->  2
Clicked 22:40:22  ->  3, 2
Clicked 22:40:24  ->  3, 2
Clicked 22:40:32  ->  3, 2, 1

(The original post renders this as two tables: the console output above, and a timeline covering seconds 16 through 32 that shows each click, the five-second span of each event, and the snapshot counts 1, 2, 3, 2, 3, 2, 3, 2, 1 along the bottom.)

What we can see here in the output is that the counts include all the end edges that have occurred between the mouse clicks. If we look specifically at the mouse click at 22:40:32, then we see that 3 events are returned to us. These include the following: End Edge count at 22:40:25, End Edge count at 22:40:27, End Edge count at 22:40:29. Another thing we notice is that until we actually issue a CTI at 22:40:32, those last 3 snapshot window counts will never be reported. Hopefully this has helped to explain a few concepts around StreamInsight and the IObservable() pattern. You can download this solution from here and play. You will need the Reactive Framework from here and StreamInsight 1.1

    Read the article

  • unable to install FrontPage 2002 Server Extensions

    - by user1263981
    I am trying to publish a website (ASP.NET 4.0, IIS 5, Windows Server 2003) from my machine to the server and I am getting the error 'The Web server does not appear to have FrontPage Server Extensions installed'. I have followed these steps but somehow can't see the FrontPage 2002 Server Extensions checkbox under the IIS details: click Add/Remove Windows Components, click on "Application Server" and click "Details", click on "IIS" and then "Details". Why can't I see the option for 'FrontPage 2002 Server Extensions'?

    Read the article

  • Do reactive extensions and ETL go together?

    - by Aaron Anodide
    I don't fully understand Reactive Extensions, but my initial reading made me think about the ETL code I have. Right now it's basically a workflow that performs various operations in a certain sequence based on conditions it finds as it progresses. I can also imagine an event-driven way in which only a small amount of imperative logic causes a chain reaction to occur. Of course, I don't need a new type of programming model to build an event-driven collaboration like that. Just the same, I am wondering if ETL is a good fit for exploring Rx further. Is my connection even pointing in a valid direction? If not, could you briefly correct the error in my logic?
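
    For what it's worth, here is a minimal, hypothetical sketch (the record values and stage logic are made up) of how an extract-transform-load step can be phrased as an Rx pipeline in C#, where the subscriber reacts to each record as it arrives instead of a workflow pulling the next step:

        using System;
        using System.Reactive.Linq;

        class EtlSketch
        {
            static void Main()
            {
                // Extract: pretend each number is a record read from some source.
                IObservable<int> extracted = Observable.Range(1, 100);

                // Transform and filter expressed as a query over the stream.
                var transformed = from record in extracted
                                  where record % 2 == 0   // keep records meeting a condition
                                  select record * 10;     // apply a transformation

                // Load: the subscription is the only imperative part.
                transformed.Subscribe(
                    r => Console.WriteLine("Loading {0}", r),
                    () => Console.WriteLine("ETL run complete"));
            }
        }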

    Read the article

  • How To Create Custom Keyboard Shortcuts For Browser Actions and Extensions in Google Chrome

    - by Chris Hoffman
    Geeks love keyboard shortcuts – they can make you faster and more productive than clicking everything with your mouse. We’ve previously covered keyboard shortcuts for Chrome and other browsers, but you can assign your own custom keyboard shortcuts, too. Google Chrome includes a built-in way to assign custom keyboard shortcuts to your browser extensions. You can also use an extension created by a Google employee to create custom keyboard shortcuts for common browser actions – and less common ones. Image Credit: mikeropology on Flickr (modified)

    Read the article

  • BPM Parallel Multi Instance sub processes by Niall Commiskey

    - by JuergenKress
    Here is a very simple scenario: an order with lines is processed. The OrderProcess takes in an order with its attendant lines, and the Fulfillment process is called for each order line. We do not have many order lines, and the processing is simple, so we run this in parallel. Let's look at the definition of the Multi Instance sub-process - Read the full article here. SOA & BPM Partner Community: for regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: BPM, Niall Commiskey, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • New extension gallery for Atlassian's "Facebook for developers" and for its collaboration solution

    Atlassian opens a new extension gallery for its collaboration solution and its "Facebook for developers". Marketplaces are in fashion: Windows Store, Mac OS, SAP Store, Mozilla Marketplace, Google Play, Intel's AppUp Center, Embarcadero's AppWave. There is no shortage of application galleries. So there was no reason for Atlassian - the Sydney-based Australian start-up that received the "Technology Pioneer" title from the World Economic Forum in 2011, previously awarded to Google, Mozilla, Twitter and Dropbox - not to join in. In case the name means nothing to you, Atlassian is the...

    Read the article

  • Looking for parallel programming problem

    - by Chris Lieb
    I am trying to come up with a problem that is easily solvable in a parallel manner and that requires communication between threads, for use in a test. I am also trying to avoid problems that require random waits, which rules out dining philosophers and producer-consumer (bounded buffer), two of the classics. My goal is for the student to be able to write the program in less than 20-30 minutes in front of a computer, not knowing the problem beforehand. (This is to prevent preparation more than to come up with something novel.) I am trying to stress the communication aspect of the program, though the multi-threaded nature is also important. Does anyone have some ideas? Edit: I'm using Google Go for the language and testing comprehension of the goroutines/channels combo vs an actors library that I authored.

    Read the article

  • Optimum Number of Parallel Processes

    - by System Down
    I just finished coding a (basic) ray tracer in C# for fun and for the learning experience. Now I want to further that learning experience. It seems to me that ray tracing is a prime candidate for parallel processing, which is something I have very little experience in. My question is this: how do I know the optimum number of concurrent processes to run? My first instinct tells me: it depends on how many cores my processor has, but like I said I'm new to this and I may be neglecting something.
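
    As a rough illustration (TraceRow is a hypothetical stand-in for the per-row work a ray tracer does), .NET exposes the logical core count and lets you cap the degree of parallelism explicitly, so a common starting point for CPU-bound work like this is one worker per core, then measure from there:

        using System;
        using System.Threading.Tasks;

        class RenderSketch
        {
            // Hypothetical: trace all pixels in one image row.
            static void TraceRow(int y) { /* heavy CPU-bound work */ }

            static void Main()
            {
                int height = 600;

                var options = new ParallelOptions
                {
                    // For CPU-bound work, more workers than cores mostly adds context switching;
                    // I/O-bound work can benefit from more tasks than cores.
                    MaxDegreeOfParallelism = Environment.ProcessorCount
                };

                Parallel.For(0, height, options, TraceRow);
            }
        }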

    Read the article

  • Parallel Threading in Multi-Language Software?

    - by Smarty Twiti
    I'm developing software that contains many modules/daemons running in parallel, and what I'm looking for is how to implement that. I cannot use threads because some of those modules/daemons may be implemented in other languages (C, Java, C#...). For example, I'm using C for hooking messages exchanged between the Windows kernel and top-level applications, and Java/C# to use some free library to parse XML (for example) or to accept and execute commands over the network.. this could be done in C, but the other languages improve productivity... Finally, for the GUI I'm using Ultimate++ (C++), which acts like the main process that calls and monitors (activate/deactivate/get state) all the other modules/daemons through an interface. I admit that developing each module/daemon in a separate language greatly facilitates maintenance, but above all I am obliged to do it this way.. What is the best practice way to do that? All help will be appreciated.
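
    One common way to wire up processes written in different languages is plain local IPC (named pipes or sockets), since every language listed can speak it. Below is a minimal C# sketch of one daemon's side of such a control channel; the pipe name and the command strings are invented for the example:

        using System;
        using System.IO;
        using System.IO.Pipes;

        class DaemonControlSketch
        {
            static void Main()
            {
                // The GUI/main process connects as a NamedPipeClientStream using the same name.
                using (var server = new NamedPipeServerStream("daemon_control"))
                {
                    server.WaitForConnection();
                    using (var reader = new StreamReader(server))
                    using (var writer = new StreamWriter(server) { AutoFlush = true })
                    {
                        string command = reader.ReadLine();   // e.g. "activate", "deactivate", "state"
                        writer.WriteLine(command == "state" ? "running" : "ok");
                    }
                }
            }
        }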

    Read the article

  • Adding DTrace Probes to PHP Extensions

    - by cj
    The powerful DTrace tracing facility has some PHP-specific probes that can be enabled with --enable-dtrace. DTrace for Linux is being created by Oracle and is currently in tech preview. Currently it doesn't support userspace tracing so, in the meantime, Systemtap can be used to monitor the probes implemented in PHP. This was recently outlined in David Soria Parra's post Probing PHP with Systemtap on Linux. My post shows how DTrace probes can be added to PHP extensions and traced on Linux. I was using Oracle Linux 6.3. Not all Linux kernels are built with Systemtap, since this can impact stability. Check whether your running kernel (or others installed) have Systemtap enabled, and reboot with such a kernel: # grep CONFIG_UTRACE /boot/config-`uname -r` # grep CONFIG_UTRACE /boot/config-* When you install Systemtap itself, the package systemtap-sdt-devel is needed since it provides the sdt.h header file: # yum install systemtap-sdt-devel You can now install and build PHP as shown in David's article. Basically the build is with: $ cd ~/php-src $ ./configure --disable-all --enable-dtrace $ make (For me, running 'make' a second time failed with an error. The workaround is to do 'git checkout Zend/zend_dtrace.d' and then rerun 'make'. See PHP Bug 63704) David's article shows how to trace the probes already implemented in PHP. You can also use Systemtap to trace things like userspace PHP function calls. For example, create test.php: <?php $c = oci_connect('hr', 'welcome', 'localhost/orcl'); $s = oci_parse($c, "select dbms_xmlgen.getxml('select * from dual') xml from dual"); $r = oci_execute($s); $row = oci_fetch_array($s, OCI_NUM); $x = $row[0]->load(); $row[0]->free(); echo $x; ?> The normal output of this file is the XML form of Oracle's DUAL table: $ ./sapi/cli/php ~/test.php <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> To trace the PHP function calls, create the tracing file functrace.stp: probe process("sapi/cli/php").function("zif_*") { printf("Started function %s\n", probefunc()); } probe process("sapi/cli/php").function("zif_*").return { printf("Ended function %s\n", probefunc()); } This makes use of the way PHP userspace functions (not builtins) like oci_connect() map to C functions with a "zif_" prefix. Login as root, and run System tap on the PHP script: # cd ~cjones/php-src # stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/functrace.stp Started function zif_oci_connect Ended function zif_oci_connect Started function zif_oci_parse Ended function zif_oci_parse Started function zif_oci_execute Ended function zif_oci_execute Started function zif_oci_fetch_array Ended function zif_oci_fetch_array Started function zif_oci_lob_load <?xml version="1.0"?> <ROWSET> <ROW> <DUMMY>X</DUMMY> </ROW> </ROWSET> Ended function zif_oci_lob_load Started function zif_oci_free_descriptor Ended function zif_oci_free_descriptor Each call and return is logged. The Systemtap scripting language allows complex scripts to be built. There are many examples on the web. To augment this generic capability and the PHP probes in PHP, other extensions can have probes too. Below are the steps I used to add probes to OCI8: I created a provider file ext/oci8/oci8_dtrace.d, enabling three probes. The first one will accept a parameter that runtime tracing can later display: provider php { probe oci8__connect(char *username); probe oci8__nls_start(); probe oci8__nls_done(); }; I updated ext/oci8/config.m4 with the PHP_INIT_DTRACE macro. The patch is at the end of config.m4. 
The macro takes the provider prototype file, a name of the header file that 'dtrace' will generate, and a list of sources files with probes. When --enable-dtrace is used during PHP configuration, then the outer $PHP_DTRACE check is true and my new probes will be enabled. I've chosen to define an OCI8 specific macro, HAVE_OCI8_DTRACE, which can be used in the OCI8 source code: diff --git a/ext/oci8/config.m4 b/ext/oci8/config.m4 index 34ae76c..f3e583d 100644 --- a/ext/oci8/config.m4 +++ b/ext/oci8/config.m4 @@ -341,4 +341,17 @@ if test "$PHP_OCI8" != "no"; then PHP_SUBST_OLD(OCI8_ORACLE_VERSION) fi + + if test "$PHP_DTRACE" = "yes"; then + AC_CHECK_HEADERS([sys/sdt.h], [ + PHP_INIT_DTRACE([ext/oci8/oci8_dtrace.d], + [ext/oci8/oci8_dtrace_gen.h],[ext/oci8/oci8.c]) + AC_DEFINE(HAVE_OCI8_DTRACE,1, + [Whether to enable DTrace support for OCI8 ]) + ], [ + AC_MSG_ERROR( + [Cannot find sys/sdt.h which is required for DTrace support]) + ]) + fi + fi In ext/oci8/oci8.c, I added the probes at, for this example, semi-arbitrary places: diff --git a/ext/oci8/oci8.c b/ext/oci8/oci8.c index e2241cf..ffa0168 100644 --- a/ext/oci8/oci8.c +++ b/ext/oci8/oci8.c @@ -1811,6 +1811,12 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char } } +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_CONNECT_ENABLED()) { + DTRACE_OCI8_CONNECT(username); + } +#endif + /* Initialize global handles if they weren't initialized before */ if (OCI_G(env) == NULL) { php_oci_init_global_handles(TSRMLS_C); @@ -1870,11 +1876,22 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char size_t rsize = 0; sword result; +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_START_ENABLED()) { + DTRACE_OCI8_NLS_START(); + } +#endif PHP_OCI_CALL_RETURN(result, OCINlsEnvironmentVariableGet, (&charsetid_nls_lang, 0, OCI_NLS_CHARSET_ID, 0, &rsize)); if (result != OCI_SUCCESS) { charsetid_nls_lang = 0; } smart_str_append_unsigned_ex(&hashed_details, charsetid_nls_lang, 0); + +#ifdef HAVE_OCI8_DTRACE + if (DTRACE_OCI8_NLS_DONE_ENABLED()) { + DTRACE_OCI8_NLS_DONE(); + } +#endif } timestamp = time(NULL); The oci_connect(), oci_pconnect() and oci_new_connect() calls all use php_oci_do_connect_ex() internally. The first probe simply records that the PHP application made a connection call. I already showed a way to do this without needing a probe, but adding a specific probe lets me record the username. The other two probes can be used to time how long the globalization initialization takes. The relationships between the oci8_dtrace.d names like oci8__connect, the probe guards like DTRACE_OCI8_CONNECT_ENABLED() and probe names like DTRACE_OCI8_CONNECT() are obvious after seeing the pattern of all three probes. I included the new header that will be automatically created by the dtrace tool when PHP is built. I did this in ext/oci8/php_oci8_int.h: diff --git a/ext/oci8/php_oci8_int.h b/ext/oci8/php_oci8_int.h index b0d6516..c81fc5a 100644 --- a/ext/oci8/php_oci8_int.h +++ b/ext/oci8/php_oci8_int.h @@ -44,6 +44,10 @@ # endif # endif /* osf alpha */ +#ifdef HAVE_OCI8_DTRACE +#include "oci8_dtrace_gen.h" +#endif + #if defined(min) #undef min #endif Now PHP can be rebuilt: $ cd ~/php-src $ rm configure && ./buildconf --force $ ./configure --disable-all --enable-dtrace \ --with-oci8=instantclient,/home/cjones/instantclient $ make If 'make' fails, do the 'git checkout Zend/zend_dtrace.d' trick I mentioned. 
The new probes can be seen by logging in as root and running:

# stap -l 'process.provider("php").mark("oci8*")' -c 'sapi/cli/php -i'
process("sapi/cli/php").provider("php").mark("oci8__connect")
process("sapi/cli/php").provider("php").mark("oci8__nls_done")
process("sapi/cli/php").provider("php").mark("oci8__nls_start")

To test them out, create a new trace file, oci.stp:

global numconnects;
global start;
global numcharlookups = 0;
global tottime = 0;

probe process.provider("php").mark("oci8-connect")
{
    printf("Connected as %s\n", user_string($arg1));
    numconnects += 1;
}

probe process.provider("php").mark("oci8-nls_start")
{
    start = gettimeofday_us();
    numcharlookups++;
}

probe process.provider("php").mark("oci8-nls_done")
{
    tottime += gettimeofday_us() - start;
}

probe end
{
    printf("Connects: %d, Charset lookups: %ld\n", numconnects, numcharlookups);
    printf("Total NLS charset initialization time: %ld usecs/connect\n",
           (numcharlookups > 0 ? tottime/numcharlookups : 0));
}

This calculates the average time that the NLS character set lookup takes. It also prints out the username of each connection, as an example of using parameters. Login as root and run Systemtap over the PHP script:

# cd ~cjones/php-src
# stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/oci.stp
Connected as cj
<?xml version="1.0"?>
<ROWSET>
 <ROW>
  <DUMMY>X</DUMMY>
 </ROW>
</ROWSET>
Connects: 1, Charset lookups: 1
Total NLS charset initialization time: 164 usecs/connect

This shows the time penalty of making OCI8 look up the default character set. This time would be zero if a character set had been passed as the fourth argument to oci_connect() in test.php.

    Read the article

  • Parallel LINQ - PLINQ

    - by nmarun
    Turns out now with .net 4.0 we can run a query like a multi-threaded application. Say you want to query a collection of objects and return only those that meet certain conditions. Until now, we basically had one ‘control’ that iterated over all the objects in the collection, checked the condition on each object and returned if it passed. We obviously agree that if we can ‘break’ this task into smaller ones, assign each task to a different ‘control’ and ask all the controls to do their job - in-parallel, the time taken the finish the entire task will be much lower. Welcome to PLINQ. Let’s take some examples. I have the following method that uses our good ol’ LINQ. 1: private static void Linq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers =   from num in source 12: where IsDivisibleBy(num, 2) 13: select num; 14: 15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } I’ve added comments for the major steps, but the only thing I want to talk about here is the IsDivisibleBy() method. I know I could have just included the logic directly in the where clause. I called a method to add ‘delay’ to the execution of the query - to simulate a loooooooooong operation (will be easier to compare the results). 1: private static bool IsDivisibleBy(int number, int divisor) 2: { 3: // iterate over some database query 4: // to add time to the execution of this method; 5: // the TableB has around 10 records 6: for (int i = 0; i < 10; i++) 7: { 8: DataClasses1DataContext dataContext = new DataClasses1DataContext(); 9: var query = from b in dataContext.TableBs select b; 10: 11: foreach (var row in query) 12: { 13: // Do NOTHING (wish my job was like this) 14: } 15: } 16:  17: return number % divisor == 0; 18: } Now, let’s look at how to modify this to PLINQ. 1: private static void Plinq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers = from num in source.AsParallel() 12: where IsDivisibleBy(num, 2) 13: select num; 14:  15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } That’s it, this is now in PLINQ format. Oh and if you haven’t found the difference, look line 11 a little more closely. You’ll see an extension method ‘AsParallel()’ added to the ‘source’ variable. Couldn’t be more simpler right? So this is going to improve the performance for us. Let’s test it. So in my Main method of the Console application that I’m working on, I make a call to both. 
1: static void Main(string[] args) 2: { 3: // set lower and upper limits 4: int lowerLimit = 1; 5: int upperLimit = 20; 6: // call the methods 7: Console.WriteLine("Calling Linq() method"); 8: Linq(lowerLimit, upperLimit); 9: 10: Console.WriteLine(); 11: Console.WriteLine("Calling Plinq() method"); 12: Plinq(lowerLimit, upperLimit); 13:  14: Console.ReadLine(); // just so I get enough time to read the output 15: } YMMV, but here are the results that I got:    It’s quite obvious from the above results that the Plinq() method is taking considerably less time than the Linq() version. I’m sure you’ve already noticed that the output of the Plinq() method is not in order. That’s because, each of the ‘control’s we sent to fetch the results, reported with values as and when they obtained them. This is something about parallel LINQ that one needs to remember – the collection cannot be guaranteed to be undisturbed. This could be counted as a negative about PLINQ (emphasize ‘could’). Nevertheless, if we want the collection to be sorted, we can use a SortedSet (.net 4.0) or build our own custom ‘sorter’. Either way we go, there’s a good chance we’ll end up with a better performance using PLINQ. And there’s another negative of PLINQ (depending on how you see it). This is regarding the CPU cycles. See the usage for Linq() method (used ResourceMonitor): I have dual CPU’s and see the height of the peak in the bottom two blocks and now compare to what happens when I run the Plinq() method. The difference is obvious. Higher usage, but for a shorter duration (width of the peak). Both these points make sense in both cases. Linq() runs for a longer time, but uses less resources whereas Plinq() runs for a shorter time and consumes more resources. Even after knowing all these, I’m still inclined towards PLINQ. PLINQ rocks! (no hard feelings LINQ)
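
    A side note on the ordering point above: rather than sorting afterwards with a SortedSet or a custom sorter, PLINQ itself can be asked to preserve the source order with AsOrdered(), at the cost of some buffering. A minimal variation of the query from the Plinq() method:

        // Same query as above, but results come back in source order.
        var evenNumbers = from num in source.AsParallel().AsOrdered()
                          where IsDivisibleBy(num, 2)
                          select num;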

    Read the article

  • Using the Reactive Extensions with the Silverlight Toolkit and MEF

    - by Bobby Diaz
    I have come across several instances of people having trouble using the new Reactive Extensions (v1.0.2317) in projects that reference the Silverlight Toolkit (Nov09) due to the fact that the original release of the Rx Framework (v1.0.0.0) was bundled with the Toolkit.  The trouble really becomes evident if you are using the Managed Extensibility Framework (MEF) to discover and compose portions of your application.   If you are using the CompositionInitializer, or any other mechanism that probes all of the loaded assemblies for valid exports, you will likely receive the following error: Inspecting the LoaderExceptions property yields the following: System.IO.FileNotFoundException: Could not load file or assembly 'System.Reactive, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1b331ac6720247d9' or one of its dependencies. The system cannot find the file specified.  File name: 'System.Reactive, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1b331ac6720247d9' This is due to some of the Toolkit assemblies referencing the older System.Reactive.dll.  I was able to work around the issue by bypassing the automatic probing of loaded assemblies and instead specified which assemblies my exports could be found.     public MainPage()     {         InitializeComponent();           // the following line causes a ReflectionTypeLoadException         //CompositionInitializer.SatisfyImports(this);           // skip the toolkit assemblies by specifying assemblies         var catalog = new AssemblyCatalog(GetType().Assembly);         var container = new CompositionContainer(catalog);         container.ComposeParts(this);           ShowReferences();     } With some simple xaml, I was able to print out exactly which libraries are currently loaded in the application. You can download the sample project to run it for yourself! Hope that helps!

    Read the article

  • All chromium extensions throw errors since update to 13.10

    - by hugo der hungrige
    Since updating to 13.10 all chromium extensions generate errors: chrome.extension is not available: 'extension' is not allowed for specified context type content script, extension page, web page, etc.). [VM] binding (56):427 Uncaught TypeError: Cannot call method 'sendRequest' of undefined include.preload.js:105 Uncaught TypeError: Cannot read property 'onRequest' of undefined include.postload.js:473 GET http://edge.quantserve.com/quant.js superuser.com/:2047 GET http://www.google-analytics.com/__utm.gif?utmwv=5.4.5&utms=2&utmn=590704726…n%3D(organic)%7Cutmcmd%3Dorganic%7Cutmctr%3D(not%2520provided)%3B&utmu=qQ~ ga.js:61 chrome.extension is not available: 'extension' is not allowed for specified context type content script, extension page, web page, etc.). [VM] binding (56):427 Uncaught TypeError: Cannot read property 'onRequest' of undefined content.js:233 chrome.extension is not available: 'extension' is not allowed for specified context type content script, extension page, web page, etc.). [VM] binding (56):427 Uncaught TypeError: Cannot read property 'onRequest' of undefined injected.js:169 chrome.extension is not available: 'extension' is not allowed for specified context type content script, extension page, web page, etc.). [VM] binding (56):427 Uncaught TypeError: Cannot call method 'getURL' of undefined content_js_min.js:5 GET http://engine.adzerk.net/z/8476/adzerk2_2_17_47 superuser.com/:1719 Uncaught TypeError: Cannot call method 'sendRequest' of undefined How to fix this?

    Read the article

  • Java Management Extensions with Oracle WebLogic Server 12c – Webcast November 13th 2012

    - by JuergenKress
    Date: Tuesday, November 13, 2012. Time: 10:00 AM PST. You’re responsible for evaluating technologies to monitor and configure Oracle WebLogic Server. This Webcast will help you get a complete picture of what Oracle WebLogic Server 12c with Java Management Extensions (JMX) can do for you. Dr. Frank Munz will explain the development of JMX with Spring and compare it to Java EE. A new feature of Oracle WebLogic Server 12c, the RESTful Management API, will also be examined. Learn how JMX in Oracle WebLogic Server 12c is: Highly efficient. It uses WebLogic Scripting Tool (WLST) instead of a client JMX program written in Java, resulting in little overhead. Effective. It bundles optimized tools such as WLST and the WebLogic Diagnostic Framework to eliminate the requirement for Java programming on the client side. Compliant. It is fully standard-compliant but also works with open source clients and frameworks. Register for the Webcast today. Speakers: Dr. Frank Munz, Oracle Technologist of the Year; Dave Cabelus, Senior Principal Product Manager, Oracle. WebLogic Partner Community: for regular information become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Java, Frank Munz, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Consecutive versus Parallel NUnit Testing

    - by Jacobm001
    My office has roughly ~300 webpages that should be tested on a fairly regular basis. I'm working with Nunit, Selenium, and C# in Visual Studio 2010. I used this framework as a basis, and I do have a few working tests. The problem I'm running into is is that when I run the entire suite. In each run, a random test(s) will fail. If they're run individually, they will all pass. My guess is that Nunit is trying to run all 7 tests at the same time and the browser can't support this for obvious reasons. Watching the browser visually, this does seem to be the case. Looking at the screenshot below, I need to figure out a way in which the tests under Index_Tests are run sequentially, not in parallel. errors: Selenium2.OfficeClass.Tests.Index_Tests.index_4: OpenQA.Selenium.NoSuchElementException : Unable to locate element: "method":"id","selector":"textSelectorName"} Selenium2.OfficeClass.Tests.Index_Tests.index_7: OpenQA.Selenium.NoSuchElementException : Unable to locate element: "method":"id","selector":"textSelectorName"} example with one test: using OpenQA.Selenium; using NUnit.Framework; namespace Selenium2.OfficeClass.Tests { [TestFixture] public class Index_Tests : TestBase { public IWebDriver driver; [TestFixtureSetUp] public void TestFixtureSetUp() { driver = StartBrowser(); } [TestFixtureTearDown] public void TestFixtureTearDown() { driver.Quit(); } [Test] public void index_1() { OfficeClass index = new OfficeClass(driver); index.Navigate("http://url_goeshere"); index.SendKeyID("txtFiscalYear", "input"); index.SendKeyID("txtIndex", ""); index.SendKeyID("txtActivity", "input"); index.ClickID("btnDisplay"); } } }
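
    Worth noting: unless something else is parallelising the run, NUnit 2.x executes the tests within a fixture one after another, so the random NoSuchElementException failures may be timing related rather than true parallelism. If that is the case, an explicit wait before locating elements usually stabilises Selenium tests. A small sketch (WaitForId is a hypothetical helper, not part of the framework above):

        using System;
        using OpenQA.Selenium;
        using OpenQA.Selenium.Support.UI;

        public static class WaitExtensions
        {
            // Wait up to 10 seconds for an element to appear before using it.
            public static IWebElement WaitForId(this IWebDriver driver, string id)
            {
                var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
                return wait.Until(d => d.FindElement(By.Id(id)));
            }
        }

    OfficeClass.SendKeyID and ClickID could then locate elements through WaitForId instead of calling FindElement directly.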

    Read the article

  • Where to place php libraries/extensions?

    - by gdaniel
    I am new to a lot of server configurations and options. I want to add extra PHP libraries/extensions to my server. Where do I add them? (I am on a CentOS 6.5 VPS.) For example, I want to add the phpseclib PHP extension. Their website instructions are minimal: Usage: This library is written using the same conventions that libraries in the PHP Extension and Application Repository (PEAR) used to be written in (current requirements break PHP4 compatibility). In particular, this library needs to be in your include_path: <?php set_include_path(get_include_path() . PATH_SEPARATOR . 'phpseclib'); include('Net/SSH2.php'); ?> It tells me how to use it, but it doesn't tell me where to add the actual extension files. Should I add it under usr/local/lib, usr/local/lib/php, or usr/local/lib/php/pear? Or can I add it under public_html? Also, my VPS has several users under /home/.. is there a way to make the library available for only one user?

    Read the article

  • C++ Parallel Asynchronous task

    - by Doodlemeat
    I am currently building a randomly generated terrain game where terrain is created automatically around the player. I am experiencing lag when the generation process is active, as I am running quite heavy tasks with post-processing and creating physics bodies. Then I thought of using a parallel asynchronous task to do the post-processing for me, but I have no idea how I am going to do that. I have searched for C++ std::async but I believe that is not what I want. In the examples I found, a task returned something; I want the task to change objects in the main program. This is what I want: // Main program // Chunks that need to be processed. // NOTE! These chunks are already generated, but need post-processing only! std::vector<Chunk*> unprocessedChunks; And then my task could look something like this, running like a loop constantly checking if there are chunks to process: // Asynced task if(unprocessedChunks.size() > 0) { processChunk(unprocessedChunks.pop()); } I know it's not as easy as I wrote it, but it would be a huge help if you could push me in the right direction. In Java, I could type something like this: asynced_task = startAsyncTask(new PostProcessTask()); And that task would run until I do this: asynced_task.cancel();

    Read the article

  • Is it possible to distribute STDIN over parallel processes?

    - by Erik
    Given the following example input on STDIN: foo bar bar baz === qux bla === def zzz yyy Is it possible to split it on the delimiter (in this case '===') and feed it over stdin to a command running in parallel? So the example input above would result in 3 parallel processes (for example a command called do.sh) where each instance received a part of the data on STDIN, like this: do.sh (instance 1) receives this over STDIN: foo bar bar baz do.sh (instance 2) receives this over STDIN: qux bla do.sh (instance 3) receives this over STDIN: def zzz yyy I suppose something like this is possible using xargs or GNU parallel, but I do not know how.

    Read the article

  • Where has my parallel port gone? ioperm(888,1,1) returns -1.

    - by marcusw
    I have an old Dell Dimension 8200 running Gentoo which I use solely to control various things using the parallel port. After shutting it down a few weeks ago, I started it up again today and tried to access the parallel port like I usually do. Unfortunately, my code bombed out when it tried to call ioperm(888,1,1) to grab the parallel port which returned an error code of -1. There have been no changes to the system be it hardware or software, no updates, no tweaking, no dropping the case, no over-amping the data pins, nothing. The port and the software have been working fine for months with no changes, and were working fine when I shut it down last. Running my code with root privileges changes nothing. What is breaking this and how can I fix it?

    Read the article

  • Parallel port no longer accessible even though no changes to system.

    - by marcusw
    I have an old Dell Dimension 8200 running Gentoo which I use solely to control various things using the parallel port. After shutting it down a few weeks ago, I started it up again today and tried to access the parallel port like I usually do. Unfortunately, my code bombed out when it tried to call ioperm(888,1,1) to grab the parallel port which returned an error code of -1. There have been no changes to the system be it hardware or software, no updates, no tweaking, no dropping the case, no over-amping the data pins, nothing. The port and the software have been working fine for months with no changes, and were working fine when I shut it down last. Running my code with root privileges changes nothing. What is breaking this and how can I fix it?

    Read the article

  • Browser Specific Extensions of HttpClient

    - by imran_ku07
            Introduction:                     REpresentational State Transfer (REST) causing/leaving a great impact on service/API development because it offers a way to access a service without requiring any specific library by embracing HTTP and its features. ASP.NET Web API makes it very easy to quickly build RESTful HTTP services. These HTTP services can be consumed by a variety of clients including browsers, devices, machines, etc. With .NET Framework 4.5, we can use HttpClient class to consume/send/receive RESTful HTTP services(for .NET Framework 4.0, HttpClient class is shipped as part of ASP.NET Web API). The HttpClient class provides a bunch of helper methods(for example, DeleteAsync, PostAsync, GetStringAsync, etc.) to consume a HTTP service very easily. ASP.NET Web API added some more extension methods(for example, PutAsJsonAsync, PutAsXmlAsync, etc) into HttpClient class to further simplify the usage. In addition, HttpClient is also an ideal choice for writing integration test for a RESTful HTTP service. Since a browser is a main client of any RESTful API, it is also important to test the HTTP service on a variety of browsers. RESTful service embraces HTTP headers and different browsers send different HTTP headers. So, I have created a package that will add overloads(with an additional Browser parameter) for almost all the helper methods of HttpClient class. In this article, I will show you how to use this package.           Description:                     Create/open your test project and install ImranB.SystemNetHttp.HttpClientExtensions NuGet package. Then, add this using statement on your class, using ImranB.SystemNetHttp;                     Then, you can start using any HttpClient helper method which include the additional Browser parameter. For example,  var client = new HttpClient(myserver); var task = client.GetAsync("http://domain/myapi", Browser.Chrome); task.Wait(); var response = task.Result; .                     Here is the definition  of Browser, public enum Browser { Firefox = 0, Chrome = 1, IE10 = 2, IE9 = 3, IE8 = 4, IE7 = 5, IE6 = 6, Safari = 7, Opera = 8, Maxthon = 9, }                     These extension methods will make it very easy to write browser specific integration test. It will also help HTTP service consumer to mimic the request sending behavior of a browser. This package source is available on github. So, you can grab the source and add some additional behavior on the top of these extensions.         Summary:                     Testing a REST API is an important aspect of service development and today, testing with a browser is crucial. In this article, I showed how to write integration test that will mimic the browser request sending behavior. I also showed an example. Hopefully you will enjoy this article too.
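
    As a simple illustration of where this is useful, an integration test can loop over several Browser values and check that the same resource behaves consistently; the URL and the failure handling below are only placeholders:

        var client = new HttpClient();
        foreach (var browser in new[] { Browser.Chrome, Browser.Firefox, Browser.IE8 })
        {
            // GetAsync with a Browser argument is the overload added by the package above.
            var response = client.GetAsync("http://domain/myapi", browser).Result;
            if (!response.IsSuccessStatusCode)
            {
                throw new Exception("Request failed when mimicking " + browser);
            }
        }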

    Read the article

  • C# [Mono]: MPAPI vs MPI.NET vs ?

    - by Olexandr
    Hi. I'm working on a college project. I have to develop a distributed computing system, and I decided to do some research to make this task fun :) I've found the MPAPI and MPI.NET libraries. Yes, they are .NET libraries (Mono, in my case). Why .NET? I was choosing between Ada, C++ and C#, and I chose C# because of the lower development time. I have a few goals: simplicity, performance, and cluster computing. So, what should I choose - MPAPI, MPI.NET, or something else?
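
    To get a feel for the MPI.NET programming model, its documented starting point is roughly the sketch below (treat it as illustrative rather than tested; the message text is made up). Each process launched by the MPI runtime runs the same program and learns its rank:

        using System;
        using MPI;

        class HelloCluster
        {
            static void Main(string[] args)
            {
                // MPI.NET wraps MPI initialization/finalization in a disposable environment.
                using (new MPI.Environment(ref args))
                {
                    Intracommunicator world = Communicator.world;
                    Console.WriteLine("Process {0} of {1} reporting in", world.Rank, world.Size);
                }
            }
        }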

    Read the article

  • MPAPI vs MPI.NET vs ?

    - by Olexandr
    I'm working on a college project. I have to develop a distributed computing system, and I decided to do some research to make this task fun :) I've found the MPAPI and MPI.NET libraries. Yes, they are .NET libraries (Mono, in my case). Why .NET? I was choosing between Ada, C++ and C#, and I chose C# because of the lower development time. I have a few goals: simplicity, performance, and cluster computing. So, what should I choose - MPAPI, MPI.NET, or something else?

    Read the article
