Search Results

Search found 24879 results on 996 pages for 'prime number'.


  • Can't figure out where race condition is occurring

    - by Nik
    I'm using Valgrind --tool=drd to check my application that uses Boost::thread. Basically, the application populates a set of "Book" values with "Kehai" values based on inputs through a socket connection. On a separate thread, a user can connect and get the books sent to them. It's fairly simple, so I figured using a boost::mutex::scoped_lock on the location that serializes the book and the location that clears out the book data should be sufficient to prevent any race conditions. Here is the code:

```cpp
void Book::clear()
{
    boost::mutex::scoped_lock lock(dataMutex);
    for(int i = NUM_KEHAI-1; i >= 0; --i) {
        bid[i].clear();
        ask[i].clear();
    }
}

int Book::copyChangedKehaiToString(char* dst) const
{
    boost::mutex::scoped_lock lock(dataMutex);
    sprintf(dst, "%-4s%-13s", market.c_str(), meigara.c_str());
    int loc = 17;
    for(int i = 0; i < Book::NUM_KEHAI; ++i) {
        if(ask[i].changed > 0) {
            sprintf(dst+loc, "A%i%-21s%-21s%-21s%-8s%-4s", i, ask[i].price.c_str(), ask[i].volume.c_str(),
                    ask[i].number.c_str(), ask[i].postTime.c_str(), ask[i].status.c_str());
            loc += 77;
        }
    }
    for(int i = 0; i < Book::NUM_KEHAI; ++i) {
        if(bid[i].changed > 0) {
            sprintf(dst+loc, "B%i%-21s%-21s%-21s%-8s%-4s", i, bid[i].price.c_str(), bid[i].volume.c_str(),
                    bid[i].number.c_str(), bid[i].postTime.c_str(), bid[i].status.c_str());
            loc += 77;
        }
    }
    return loc;
}
```

    The clear() function and the copyChangedKehaiToString() function are called in the data-getting thread and the data-sending thread, respectively. Also, as a note, the class Book:

```cpp
struct Book
{
private:
    Book(const Book&);
    Book& operator=(const Book&);
public:
    static const int NUM_KEHAI = 10;
    struct Kehai;
    friend struct Book::Kehai;
    struct Kehai
    {
    private:
        Kehai& operator=(const Kehai&);
    public:
        std::string price;
        std::string volume;
        std::string number;
        std::string postTime;
        std::string status;
        int changed;
        Kehai();
        void copyFrom(const Kehai& other);
        Kehai(const Kehai& other);
        inline void clear()
        {
            price.assign("");
            volume.assign("");
            number.assign("");
            postTime.assign("");
            status.assign("");
            changed = -1;
        }
    };
    std::vector<Kehai> bid;
    std::vector<Kehai> ask;
    tm recTime;
    mutable boost::mutex dataMutex;
    Book();
    void clear();
    int copyChangedKehaiToString(char* dst) const;
};
```

    When using valgrind --tool=drd, I get race condition errors such as the one below:

```
==26330== Conflicting store by thread 1 at 0x0658fbb0 size 4
==26330==    at 0x653AE68: std::string::_M_mutate(unsigned int, unsigned int, unsigned int) (in /usr/lib/libstdc++.so.6.0.8)
==26330==    by 0x653AFC9: std::string::_M_replace_safe(unsigned int, unsigned int, char const*, unsigned int) (in /usr/lib/libstdc++.so.6.0.8)
==26330==    by 0x653B064: std::string::assign(char const*, unsigned int) (in /usr/lib/libstdc++.so.6.0.8)
==26330==    by 0x653B134: std::string::assign(char const*) (in /usr/lib/libstdc++.so.6.0.8)
==26330==    by 0x8055D64: Book::Kehai::clear() (Book.h:50)
==26330==    by 0x8094A29: Book::clear() (Book.cpp:78)
==26330==    by 0x808537E: RealKernel::start() (RealKernel.cpp:86)
==26330==    by 0x804D15A: main (main.cpp:164)
==26330== Allocation context: BSS section of /usr/lib/libstdc++.so.6.0.8
==26330== Other segment start (thread 2)
==26330==    at 0x400BB59: pthread_mutex_unlock (drd_pthread_intercepts.c:633)
==26330==    by 0xC59565: pthread_mutex_unlock (in /lib/libc-2.5.so)
==26330==    by 0x805477C: boost::mutex::unlock() (mutex.hpp:56)
==26330==    by 0x80547C9: boost::unique_lock<boost::mutex>::~unique_lock() (locks.hpp:340)
==26330==    by 0x80949BA: Book::copyChangedKehaiToString(char*) const (Book.cpp:134)
==26330==    by 0x80937EE: BookSerializer::serializeBook(Book const&, std::string const&) (BookSerializer.cpp:41)
==26330==    by 0x8092D05: BookSnapshotManager::getSnaphotDataList() (BookSnapshotManager.cpp:72)
==26330==    by 0x8088179: SnapshotServer::getDataList() (SnapshotServer.cpp:246)
==26330==    by 0x808870F: SnapshotServer::run() (SnapshotServer.cpp:183)
==26330==    by 0x808BAF5: boost::_mfi::mf0<void, RealThread>::operator()(RealThread*) const (mem_fn_template.hpp:49)
==26330==    by 0x808BB4D: void boost::_bi::list1<boost::_bi::value<RealThread*> >::operator()<boost::_mfi::mf0<void, RealThread>, boost::_bi::list0>(boost::_bi::type<void>, boost::_mfi::mf0<void, RealThread>&, boost::_bi::list0&, int) (bind.hpp:253)
==26330==    by 0x808BB90: boost::_bi::bind_t<void, boost::_mfi::mf0<void, RealThread>, boost::_bi::list1<boost::_bi::value<RealThread*> > >::operator()() (bind_template.hpp:20)
==26330== Other segment end (thread 2)
==26330==    at 0x400B62A: pthread_mutex_lock (drd_pthread_intercepts.c:580)
==26330==    by 0xC59535: pthread_mutex_lock (in /lib/libc-2.5.so)
==26330==    by 0x80546B8: boost::mutex::lock() (mutex.hpp:51)
==26330==    by 0x805473B: boost::unique_lock<boost::mutex>::lock() (locks.hpp:349)
==26330==    by 0x8054769: boost::unique_lock<boost::mutex>::unique_lock(boost::mutex&) (locks.hpp:227)
==26330==    by 0x8094711: Book::copyChangedKehaiToString(char*) const (Book.cpp:113)
==26330==    by 0x80937EE: BookSerializer::serializeBook(Book const&, std::string const&) (BookSerializer.cpp:41)
==26330==    by 0x808870F: SnapshotServer::run() (SnapshotServer.cpp:183)
==26330==    by 0x808BAF5: boost::_mfi::mf0<void, RealThread>::operator()(RealThread*) const (mem_fn_template.hpp:49)
==26330==    by 0x808BB4D: void boost::_bi::list1<boost::_bi::value<RealThread*> >::operator()<boost::_mfi::mf0<void, RealThread>, boost::_bi::list0>(boost::_bi::type<void>, boost::_mfi::mf0<void, RealThread>&, boost::_bi::list0&, int) (bind.hpp:253)
```

    For the life of me, I can't figure out where the race condition is. As far as I can tell, clearing the kehai is done only after having taken the mutex, and the same holds true with copying it to a string. Does anyone have any ideas what could be causing this, or where I should look? Thank you kindly.
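    One detail worth noticing in the report: the conflicting store lands in the BSS section of libstdc++ itself, which is where GCC's reference-counted (copy-on-write) std::string keeps its shared empty-string representation; DRD can flag the refcount traffic on that shared object even when your own locking is correct. The sketch below is an isolation experiment under that assumption, not a fix: two threads assign to strings that are *not* shared at all. If DRD still reports a conflicting store in libstdc++'s BSS here, the report in the question is an artifact of the string implementation rather than of Book's locking. (Plain pthreads are used so the experiment has no Boost dependency; the build line is an assumed GCC toolchain.)

```cpp
// drd_cow_check.cpp -- minimal DRD experiment, assuming GCC's COW std::string.
// Build: g++ -pthread drd_cow_check.cpp -o drd_cow_check
// Run:   valgrind --tool=drd ./drd_cow_check
#include <pthread.h>
#include <string>

static void* worker(void*)
{
    std::string s("data");            // private to this thread, never shared
    for (int i = 0; i < 1000; ++i) {
        s.assign("");                 // touches the library-wide empty-rep refcount
        s.assign("data");
    }
    return 0;
}

int main()
{
    pthread_t t1, t2;
    pthread_create(&t1, 0, worker, 0);
    pthread_create(&t2, 0, worker, 0);
    pthread_join(t1, 0);
    pthread_join(t2, 0);
    return 0;
}
```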


  • Project Euler, Problem 10 java solution not working

    - by Dennis S
    Hi, I'm trying to find the sum of the prime numbers < 2,000,000. This is my solution in Java, but I can't seem to get the correct answer. Please give some input on what could be wrong; general advice on the code is appreciated. Printing 'sum' gives: 1308111344, which is incorrect.

```java
/* The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
   Find the sum of all the primes below two million. */
class Helper {
    public void run() {
        Integer sum = 0;
        for (int i = 2; i < 2000000; i++) {
            if (isPrime(i))
                sum += i;
        }
        System.out.println(sum);
    }

    private boolean isPrime(int nr) {
        if (nr == 2)
            return true;
        else if (nr == 1)
            return false;
        if (nr % 2 == 0)
            return false;
        for (int i = 3; i < Math.sqrt(nr); i += 2) {
            if (nr % i == 0)
                return false;
        }
        return true;
    }
}

class Problem {
    public static void main(String[] args) {
        Helper p = new Helper();
        p.run();
    }
}
```
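    There are two bugs here. First, the trial-division loop uses `i < Math.sqrt(nr)`, which never tests a divisor equal to the square root, so perfect squares of odd primes (9, 25, 49, ...) are counted as prime. Second, the true sum, 142913828922, does not fit in an int; the Integer accumulator wraps around, which is consistent with the printed 1308111344. A minimal corrected sketch:

```java
// Corrected sketch: long accumulator, and divisor test up to AND including sqrt(nr).
class Problem10 {
    static boolean isPrime(int nr) {
        if (nr < 2) return false;
        if (nr == 2) return true;
        if (nr % 2 == 0) return false;
        for (int i = 3; i * i <= nr; i += 2) { // note <=, expressed as i*i <= nr
            if (nr % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        long sum = 0; // long: the result overflows int
        for (int i = 2; i < 2000000; i++) {
            if (isPrime(i)) sum += i;
        }
        System.out.println(sum); // expected: 142913828922
    }
}
```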


  • Why isn't my submit button centered?

    - by William
    For some reason my submit button isn't centered. http://prime.programming-designs.com/test_forum/viewboard.php?board=0

```css
#submitbutton {
    margin: auto;
    border: 1px solid #DBFEF8;
    background-color: #DBFEF8;
    color: #000000;
    margin-top: 5px;
    width: 100px;
    height: 20px;
}
```

    Here's the HTML.

```html
<form method=post action=add_thread.php?board=<?php echo''.$board.''; ?>>
  <div id="formdiv">
    <div class="fieldtext1">Name</div>
    <div class="fieldtext1">Trip</div>
    <input type="text" name=name size=25 />
    <input type="text" name=trip size=25 />
    <div class="fieldtext2">Comment</div>
    <textarea name=post rows="4" cols="70"></textarea>
    <div class="fieldtext2">Fortune</div>
    <input type="checkbox" name="fortune" value="fortune" />
  </div>
  <input type=submit value="Submit" id="submitbutton">
</form>
```
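    The likely cause: `margin: auto` only centers a block-level box with an explicit width, and an `<input>` renders as an inline-level box by default, so the auto side margins do nothing. A minimal sketch of the fix (keeping the existing width):

```css
/* Make the button a block box so margin:auto can center it within the form. */
#submitbutton {
    display: block;
    margin: 5px auto 0;  /* 5px top, horizontally centered */
    width: 100px;
}
```

    An alternative with the same effect is `text-align: center` on the button's block-level parent, which centers inline-level children.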


  • PHP syntax error "unexpected $end"

    - by Jacksta
    I have 3 files: 1) show_createtable.html, 2) do_showfielddef.php, 3) do_showtble.php.

    1) The first file is for creating a new table for a database; it is a form with 2 inputs, Table Name and Number of Fields. THIS WORKS FINE!

```html
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
</head>
<body>
<h1>Step 1: Name and Number</h1>
<form method="post" action="do_showfielddef.php" />
<p><strong>Table Name:</strong><br />
<input type="text" name="table_name" size="30" /></p>
<p><strong>Number of fields:</strong><br />
<input type="text" name="num_fields" size="30" /></p>
<p><input type="submit" name="submit" value="go to step2" /></p>
</form>
</body>
</html>
```

    2) This script validates fields and creates another form to enter all the table rows. This form also WORKS FINE!

```php
<?php
//validate important input
if ((!$_POST[table_name]) || (!$_POST[num_fields])) {
    header( "location: show_createtable.html");
    exit;
}

//begin creating form for display
$form_block = "
<form action=\"do_createtable.php\" method=\"post\">
<input name=\"table_name\" type=\"hidden\" value=\"$_POST[table_name]\">
<table cellspacing=\"5\" cellpadding=\"5\">
<tr>
<th>Field Name</th><th>Field Type</th><th>Table Length</th>
</tr>";

//count from 0 until you reach the number fo fields
for ($i = 0; $i <$_POST[num_fields]; $i++) {
    $form_block .="
    <tr>
    <td align=center><input type=\"texr\" name=\"field name[]\" size=\"30\"></td>
    <td align=center>
    <select name=\"field_type[]\">
    <option value=\"char\">char</option>
    <option value=\"date\">date</option>
    <option value=\"float\">float</option>
    <option value=\"int\">int</option>
    <option value=\"text\">text</option>
    <option value=\"varchar\">varchar</option>
    </select>
    </td>
    <td align=center><input type=\"text\" name=\"field_length[]\" size=\"5\">
    </td>
    </tr>";
}

//finish up the form
$form_block .= "
<tr>
<td align=center colspan=3><input type =\"submit\" value=\"create table\">
</td>
</tr>
</table>
</form>";
?>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Create a database table: Step 2</title>
</head>
<body>
<h1>defnie fields for <? echo "$_POST[table_name]"; ?> </h1>
<? echo "$form_block"; ?>
</body>
</html>
```

    The problem is here: 3) this form creates the tables and enters them into the database. I am getting an error on line 37: "Parse error: syntax error, unexpected $end in /home/admin/domains/domaina.com.au/public_html/do_createtable.php on line 37"

```php
<?
$db_name = "testDB";
$connection = @mysql_connect("localhost", "admin_user", "pass") or die(mysql_error());
$db = @mysql_select_db($db_name, $connection) or die(mysql_error());

$sql = "CREATE TABLE $_POST[table_name](";
for ($i = 0; $i < count($_POST[field_name]); $i++) {
    $sql .= $_POST[field_name][$i]." ".$_POST[field_type][$i];
    if ($_POST[field_length][$i] !="") {
        $sql .=" (".$_POST[field_length][$i]."),";
    } else {
        $sql .=",";
    }

$sql = substr($sql, 0, -1);
$sql .= ")";

$result = mysql_query($sql, $connection) or die(mysql_error());
if ($result) {
    $msg = "<p>" .$_POST[table_name]." has been created!</p>";
?>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Create A Database Table: Step 3</title>
</head>
<body>
<h1>Adding table to <? echo "$db_name"; ?>...</h1>
<? echo "$msg"; ?>
</body>
</html>
```
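    "Unexpected $end" almost always means an unclosed block: the parser reached the end of the file while still inside a brace. In the third script, neither the `for` loop nor the `if ($result)` block is ever closed before the script leaves PHP mode. A minimal sketch of the tail of do_createtable.php with the braces balanced (the trailing-comma trim and the query also appear intended to run once, after the loop, which matches the book's logic):

```php
<?php
// Sketch of the corrected tail: close the for loop and the if block.
for ($i = 0; $i < count($_POST['field_name']); $i++) {
    $sql .= $_POST['field_name'][$i] . " " . $_POST['field_type'][$i];
    if ($_POST['field_length'][$i] != "") {
        $sql .= " (" . $_POST['field_length'][$i] . "),";
    } else {
        $sql .= ",";
    }
}                                       // <-- close the for loop

$sql = substr($sql, 0, -1);             // trim the trailing comma once, after the loop
$sql .= ")";

$result = mysql_query($sql, $connection) or die(mysql_error());
if ($result) {
    $msg = "<p>" . $_POST['table_name'] . " has been created!</p>";
}                                       // <-- close the if block
?>
```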


  • substrings and multiple textfields, AS3

    - by VideoDnd
    How do I get my text fields to populate correctly and show single digits? Description Each textfield receives a substring. This doesn't limit it's input, because the text fields shows extra numbers. The counters are set to 2,200,000.00, just to see if the numbers are populating. Ex A is the one I'm trying to fix. Ex A the one I want to fix //Tweening method 'could substitute code with Tweener' import fl.transitions.Tween; import fl.transitions.easing.*; //Timer that will run a sec and repeat var timer:Timer = new Timer(1000); //Integer values var count:int = +220000000; var fcount:int = 0; //Events and starting timer timer.addEventListener(TimerEvent.TIMER, incrementCounter); addEventListener(Event.ENTER_FRAME, checkOdometerPosition); timer.start(); //Tween Variables var smoothLoop:int = 0; var originalYPosition:Number = 0; var upwardYPosition:Number = -99; //Formatting String function formatCount(i:int):String { var fraction:int = i % 100; var whole:int = i / 100; return ("0000000" + whole).substr(-7, 7) + "." + (fraction < 10 ? "0" + fraction : fraction); } //First Digit function checkOdometerPosition(event:Event):void{ if (seconds9.y <= upwardYPosition){ var toText:String = formatCount(fcount); //seconds9.firstDigit.text = formatCount(fcount); seconds9.firstDigit.text = toText.substr(9, 9); seconds9.y = originalYPosition; seconds8.firstDigit.text = toText.substr(8, 8); seconds8.y = originalYPosition; seconds7dec.firstDigit.text = toText.substr(7, 7); seconds7dec.y = originalYPosition; seconds6.firstDigit.text = toText.substr(6, 6); seconds6.y = originalYPosition; seconds5.firstDigit.text = toText.substr(5, 5); seconds5.y = originalYPosition; seconds5.firstDigit.text = toText.substr(4, 4); seconds5.y = originalYPosition; seconds3.firstDigit.text = toText.substr(3, 3); seconds3.y = originalYPosition; seconds2.firstDigit.text = toText.substr(2, 2); seconds2.y = originalYPosition; seconds1.firstDigit.text = toText.substr(1, 1); seconds1.y = originalYPosition; seconds1.firstDigit.text = toText.substr(1, 1); seconds1.y = originalYPosition; seconds0.firstDigit.text = toText.substr(0, 1); seconds0.y = originalYPosition; } } //Second Digit function incrementCounter(event:TimerEvent):void{ count++; fcount=int(count) if (smoothLoop < 9){ smoothLoop++; } else { smoothLoop = 0; } var lolly:String = formatCount(fcount-1); //seconds9.secondDigit.text = formatCount(fcount); seconds9.secondDigit.text = lolly.substr(9, 9); var addTween9:Tween = new Tween(seconds9, "y", Strong.easeOut,0,-222, .7, true); seconds8.secondDigit.text = lolly.substr(8, 8); var addTween8:Tween = new Tween(seconds8, "y", Strong.easeOut,0,-222, .7, true); seconds7dec.secondDigit.text = lolly.substr(7, 7); var addTween7dec:Tween = new Tween(seconds7dec, "y", Strong.easeOut,0,-222, .7, true); seconds6.secondDigit.text = lolly.substr(6, 6); var addTween6:Tween = new Tween(seconds6, "y", Strong.easeOut,0,-222, .7, true); seconds5.secondDigit.text = lolly.substr(5, 5); var addTween5:Tween = new Tween(seconds5, "y", Strong.easeOut,0,-222, .7, true); seconds4.secondDigit.text = lolly.substr(4, 4); var addTween4:Tween = new Tween(seconds4, "y", Strong.easeOut,0,-222, .7, true); seconds3.secondDigit.text = lolly.substr(3, 3); var addTween3:Tween = new Tween(seconds3, "y", Strong.easeOut,0,-222, .7, true); seconds2.secondDigit.text = lolly.substr(2, 2); var addTween2:Tween = new Tween(seconds2, "y", Strong.easeOut,0,-222, .7, true); seconds1.secondDigit.text = lolly.substr(1, 1); var addTween1:Tween = new Tween(seconds1, "y", 
Strong.easeOut,0,-222, .7, true); seconds0.secondDigit.text = lolly.substr(0, 1); var addTween0:Tween = new Tween(seconds0, "y", Strong.easeOut,0,-222, .7, true); } Ex A has 10 text objects, each with a pair of text fields. It’s move complex than Ex B, because it has a Y animation and pairs of numbers. The text objects are animated to create a scrolling effect. It moves vertically, and has a lead number and a catch up number contained in each symbol. See illustration for more description. Ex B work fine! for example only //STRING SPLITTER COUNTER with nine individual text fields //Timer settings var delay:uint = 1000/100; var repeat:uint = 0; var timer:Timer; timer = new Timer(delay,repeat); timer.addEventListener(TimerEvent.TIMER, incrementCounter); timer.start(); //Integer values var count:int = 0; var fcount:int = 0; //Format Count function formatCount(i:int):String { var fraction:int = i % 100; var whole:int = i / 100; return ("0000000" + whole).substr(-7, 7) + "." + (fraction < 10 ? "0" + fraction : fraction); } //Split strings off to individual text fields function incrementCounter(event:TimerEvent) { count++; fcount=int(count+220000000) var toText:String = formatCount(fcount); mytext9.text = toText.substr(9, 9); mytext8.text = toText.substr(8, 8); mytext7dec.text = toText.substr(7, 7); mytext6.text = toText.substr(6, 6); mytext5.text = toText.substr(5, 5); mytext4.text = toText.substr(4, 4); mytext3.text = toText.substr(3, 3); mytext2.text = toText.substr(2, 2); mytext1.text = toText.substr(1, 1); mytext0.text = toText.substr(0, 1); }
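    The extra digits come from how `substr` is being called: its second argument is a *length*, not an end index, so `toText.substr(9, 9)` asks for up to nine characters starting at index 9. Use a length of 1, or `charAt`, to pull a single digit. Note also that in Ex A's first-digit handler `seconds5` is assigned twice (with `substr(5, 5)` and then `substr(4, 4)`) while `seconds4` is never set. A minimal sketch of the corrected first-digit assignments (field names follow the question; the same change applies to the `secondDigit` assignments):

```actionscript
// substr's second argument is a LENGTH, so take exactly one character per field.
var toText:String = formatCount(fcount);

seconds9.firstDigit.text = toText.charAt(9); // same as toText.substr(9, 1)
seconds8.firstDigit.text = toText.charAt(8);
seconds7dec.firstDigit.text = toText.charAt(7);
seconds6.firstDigit.text = toText.charAt(6);
seconds5.firstDigit.text = toText.charAt(5);
seconds4.firstDigit.text = toText.charAt(4); // the original set seconds5 twice here
seconds3.firstDigit.text = toText.charAt(3);
seconds2.firstDigit.text = toText.charAt(2);
seconds1.firstDigit.text = toText.charAt(1);
seconds0.firstDigit.text = toText.charAt(0);
```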


  • Integrating PayMill: The token filled input field is not created, error "field_invalid_amount"

    - by automatix
    I'm implementing the Credit Card Payment form of PayMill according to the Payment Form docu. So I copied the JS from the Bridge docu page and the form from the Payment Form docu page. But no token is created. When I try to debug the JS and add console.info(error.apierror); into the paymillResponseHandler(...) function, I get the error code: field_invalid_amount. According to the support page There are three possible reasons for this error message: no amount value was provided numbers were rounded wrong delimiter symbol But the amuont is provided and I've already tried different delimiter symbols. What "numbers were rounded" means, is not clear. What can be the problem and how to fix this issue? Code: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <meta name="generator" content="PSPad editor, www.pspad.com"> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script> <title> </title> </head> <body> <!-- PayMill HEAD start --> <link rel="stylesheet" href="https://netdna.bootstrapcdn.com/twitter-bootstrap/2.2.1/css/bootstrap.no-responsive.no-icons.min.css" /> <script type="text/javascript"> var PAYMILL_PUBLIC_KEY = '51668632777bf03b57f861c5a7278a38'; </script> <script type="text/javascript" src="https://bridge.paymill.com/"></script> <!-- PayMill HEAD stop --> <!-- PayMill FORM start --> <form id="payment-form" class="span4" action="payment.php" method="POST"> <p class="payment-errors alert-error span3" style="display:none;"> </p> <div id="payment-form-cc"> <div class="controls controls-row"> <div class="span2"> <label class="card-number-label">Kreditkarte </label> <input class="card-number span2" type="text" size="20" value="4111111111111111"/> </div> <div class="span1"> <label class="card-cvc-label">CVC </label> <input class="card-cvc span1" type="text" size="4" value="111"/> </div> </div> <div class="controls controls-row"> <div class="span3 card-icon"> </div> </div> <div class="controls controls-row"> <div class="span3"> <label class="card-holdername-label">Karteninhaber </label> <input class="card-holdername span3" type="text" size="20" value="lala"/> </div> </div> <div class="controls controls-row"> <div class="span3"> <label class="card-expiry-label">Gültigkeitsdatum (MM/YYYY) </label> <input class="card-expiry-month span1" type="text" size="2" value="12"/> <span style="float:left;"> / </span> <input class="card-expiry-year span1" type="text" size="4" value="2015"/> </div> </div> </div> <div class="controls controls-row"> <div class="span2"> <label class="amount-label">Betrag </label> <input class="amount span2" type="text" size="5" value="9,99" name="amount"/> </div> <div class="span1"> <label class="currency-label">Währung </label> <input class="currency span1" type="text" size="3" value="EUR" name="currency"/> </div> </div> <div class="controls controls-row"> <div class="span4"> <button class="submit-button btn btn-primary" type="submit" >Pay!</button> </div> </div> </form> <!-- PayMill FORM stop --> <!-- PayMill FOOT start --> <script type="text/javascript"> function paymillResponseHandler(error, result) { if (error) { console.info(error.apierror); // Displays the error above the form $(".payment-errors").text(error.apierror); } else { console.info('OK'); var form = $("#payment-form"); // Output token var token = result.token; // Insert token into form in order to submit to server form.append( "<input type='hidden' name='paymillToken' value='"+token+"'/>" ); // Submit form form.get(0).submit(); } } </script> <script 
type="text/javascript"> paymill.createToken({ number: $('.card-number').val(), // required exp_month: $('.card-expiry-month').val(), // required exp_year: $('.card-expiry-year').val(), // required cvc: $('.card-cvc').val(), // required amount_int: $('.card-amount-int').val(), // required, e.g. "4900" for 49.00 EUR currency: $('.currency').val(), // required cardholder: $('.card-holdername').val() // optional }, paymillResponseHandler); </script> <!-- PayMill FOOT stop --> </body> </html>


  • Microbenchmark showing process-switching faster than thread-switching; what's wrong?

    - by Yang
    I have two simple microbenchmarks trying to measure thread- and process-switching overheads, but the process-switching overhead. The code is living here, and r1667 is pasted below: https://assorted.svn.sourceforge.net/svnroot/assorted/sandbox/trunk/src/c/process_switch_bench.c // on zs, ~2.1-2.4us/switch #include <stdlib.h> #include <fcntl.h> #include <stdint.h> #include <stdio.h> #include <semaphore.h> #include <unistd.h> #include <sys/wait.h> #include <sys/types.h> #include <sys/time.h> #include <pthread.h> uint32_t COUNTER; pthread_mutex_t LOCK; pthread_mutex_t START; sem_t *s0, *s1, *s2; void * threads ( void * unused ) { // Wait till we may fire away sem_wait(s2); for (;;) { pthread_mutex_lock(&LOCK); pthread_mutex_unlock(&LOCK); COUNTER++; sem_post(s0); sem_wait(s1); } return 0; } int64_t timeInMS () { struct timeval t; gettimeofday(&t, NULL); return ( (int64_t)t.tv_sec * 1000 + (int64_t)t.tv_usec / 1000 ); } int main ( int argc, char ** argv ) { int64_t start; pthread_t t1; pthread_mutex_init(&LOCK, NULL); COUNTER = 0; s0 = sem_open("/s0", O_CREAT, 0022, 0); if (s0 == 0) { perror("sem_open"); exit(1); } s1 = sem_open("/s1", O_CREAT, 0022, 0); if (s1 == 0) { perror("sem_open"); exit(1); } s2 = sem_open("/s2", O_CREAT, 0022, 0); if (s2 == 0) { perror("sem_open"); exit(1); } int x, y, z; sem_getvalue(s0, &x); sem_getvalue(s1, &y); sem_getvalue(s2, &z); printf("%d %d %d\n", x, y, z); pid_t pid = fork(); if (pid) { pthread_create(&t1, NULL, threads, NULL); pthread_detach(t1); // Get start time and fire away start = timeInMS(); sem_post(s2); sem_post(s2); // Wait for about a second sleep(1); // Stop thread pthread_mutex_lock(&LOCK); // Find out how much time has really passed. sleep won't guarantee me that // I sleep exactly one second, I might sleep longer since even after being // woken up, it can take some time before I gain back CPU time. Further // some more time might have passed before I obtained the lock! int64_t time = timeInMS() - start; // Correct the number of thread switches accordingly COUNTER = (uint32_t)(((uint64_t)COUNTER * 2 * 1000) / time); printf("Number of process switches in about one second was %u\n", COUNTER); printf("roughly %f microseconds per switch\n", 1000000.0 / COUNTER); // clean up kill(pid, 9); wait(0); sem_close(s0); sem_close(s1); sem_unlink("/s0"); sem_unlink("/s1"); sem_unlink("/s2"); } else { if (1) { sem_t *t = s0; s0 = s1; s1 = t; } threads(0); // never return } return 0; } https://assorted.svn.sourceforge.net/svnroot/assorted/sandbox/trunk/src/c/thread_switch_bench.c // From <http://stackoverflow.com/questions/304752/how-to-estimate-the-thread-context-switching-overhead> // on zs, ~4-5us/switch; tried making COUNTER updated only by one thread, but no difference #include <stdlib.h> #include <stdint.h> #include <stdio.h> #include <pthread.h> #include <unistd.h> #include <sys/time.h> uint32_t COUNTER; pthread_mutex_t LOCK; pthread_mutex_t START; pthread_cond_t CONDITION; void * threads ( void * unused ) { // Wait till we may fire away pthread_mutex_lock(&START); pthread_mutex_unlock(&START); int first=1; pthread_mutex_lock(&LOCK); // If I'm not the first thread, the other thread is already waiting on // the condition, thus Ihave to wake it up first, otherwise we'll deadlock if (COUNTER > 0) { pthread_cond_signal(&CONDITION); first=0; } for (;;) { if (first) COUNTER++; pthread_cond_wait(&CONDITION, &LOCK); // Always wake up the other thread before processing. 
The other // thread will not be able to do anything as long as I don't go // back to sleep first. pthread_cond_signal(&CONDITION); } pthread_mutex_unlock(&LOCK); return 0; } int64_t timeInMS () { struct timeval t; gettimeofday(&t, NULL); return ( (int64_t)t.tv_sec * 1000 + (int64_t)t.tv_usec / 1000 ); } int main ( int argc, char ** argv ) { int64_t start; pthread_t t1; pthread_t t2; pthread_mutex_init(&LOCK, NULL); pthread_mutex_init(&START, NULL); pthread_cond_init(&CONDITION, NULL); pthread_mutex_lock(&START); COUNTER = 0; pthread_create(&t1, NULL, threads, NULL); pthread_create(&t2, NULL, threads, NULL); pthread_detach(t1); pthread_detach(t2); // Get start time and fire away start = timeInMS(); pthread_mutex_unlock(&START); // Wait for about a second sleep(1); // Stop both threads pthread_mutex_lock(&LOCK); // Find out how much time has really passed. sleep won't guarantee me that // I sleep exactly one second, I might sleep longer since even after being // woken up, it can take some time before I gain back CPU time. Further // some more time might have passed before I obtained the lock! int64_t time = timeInMS() - start; // Correct the number of thread switches accordingly COUNTER = (uint32_t)(((uint64_t)COUNTER * 2 * 1000) / time); printf("Number of thread switches in about one second was %u\n", COUNTER); printf("roughly %f microseconds per switch\n", 1000000.0 / COUNTER); return 0; }
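    One common confounder in switch microbenchmarks like these: if the two processes (or threads) land on different cores, each "switch" overlaps with the other side's work instead of forcing a real context switch, and the two benchmarks can end up measuring different things. Pinning everything to one CPU makes the ping-pong serialize. A minimal, Linux-specific sketch (an assumption about the test machine) — call `pin_to_cpu0()` at the top of `main()` before `fork()`/`pthread_create()`, since child processes and new threads inherit the affinity mask:

```c
/* Pin the benchmark to a single CPU so each ping-pong really context-switches. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void pin_to_cpu0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) { /* pid 0 = this process */
        perror("sched_setaffinity");
        exit(1);
    }
}
```

    Running the unmodified binaries under `taskset -c 0` is an equivalent quick check.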



  • SVG rotation animation having problems on chrome for jelly bean, is there a workaround?

    - by Metalskin
    I've got a strange problem with chrome on jellybean running svg animations triggered from javascript. This JSFiddle example works fine on chrome and firefox on linux, but when I run it on android with chrome I get the final step of the animation painted at the beginning of the animation. I've tried this on both an Nexus 7 and Transformer Prime, they both have the problem. I've tested using firefox on the android devices and the problem doesn't exist. So I'm presuming that it's a defect with chrome. However I've seen other animations using svg that do not have this problem in chrome on jellybean. Is anyone aware of a way to get around this problem, or is there something that I'm doing in my animation/js that is a possible cause of the problem? I've now created a bug report at code.google.com, however I still need a workaround, if anyone can help me (or in case I'm doing something stupid). I've now discovered that this is reproducible on chrome for linux (and I suspect windows). If you click on the button to start the animation before the previous animation has completed then the problem occurs. In this case the hand is drawn at the end of the 45 degree sweep before it starts the sweep. I now suspect that I should be calling something to stop the animation before I change the animation. Anyway, hopefully someone can resolve this problem.


  • Problem with Boost::Asio for C++

    - by Martin Lauridsen
    Hi there, for my bachelor's thesis I am implementing a distributed version of an algorithm for factoring large integers (finding the prime factorisation). This has applications in e.g. the security of the RSA cryptosystem. My vision is that clients (Linux or Windows) will download an application and compute some numbers (these are independent, thus suited for parallelization). The numbers (not found very often) will be sent to a master server to collect them. Once enough numbers have been collected by the master server, it will do the rest of the computation, which cannot be easily parallelized.

    Anyhow, to the technicalities. I was thinking of using Boost::Asio for a socket client/server implementation, for the clients' communication with the master server. Since I want to compile for both Linux and Windows, I thought Windows would be as good a place to start as any. So I downloaded the Boost library and compiled it, as it said on the Boost Getting Started page:

```
bootstrap
.\bjam
```

    It all compiled just fine. Then I try to compile one of the tutorial examples, client.cpp, from Asio (edit: can't post link because of restrictions). I am using the Visual C++ compiler from Microsoft Visual Studio 2008, like this:

```
cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp
```

    But I get this error:

```
/out:client.exe
client.obj
LINK : fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-s-1_42.lib'
```

    Anyone have any idea what could be wrong, or how I could move forward? I have been trying pretty much all week to get a simple client/server socket program for C++ working, but with no luck. Serious frustration kicking in. Thank you in advance.
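    LNK1104 here means Boost's auto-linking asked for a library the linker can't find: bjam stages the built .lib files under `stage\lib` in the Boost root, and that directory was never passed to the linker. There's also a runtime mismatch hiding in the name — the `-s-` suffix means a static-CRT build, which is requested because `cl` without `/MD` defaults to the static runtime, while a default bjam run builds the shared-runtime variants (`libboost_system-vc90-mt-1_42.lib`). A sketch of a build line under the question's paths (the paths are an assumption carried over from the question):

```bat
rem Point the linker at the staged libs and use /MD so auto-linking requests
rem the variant a default bjam run actually built.
cl /EHsc /MD /I D:\Downloads\boost_1_42_0 client.cpp ^
   /link /LIBPATH:D:\Downloads\boost_1_42_0\stage\lib
```

    Alternatively, keep the static runtime and build the matching variant with `bjam --with-system runtime-link=static`.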


  • Logic error for Gauss elimination

    - by iwanttoprogram
    Logic error problem with the Gaussian Elimination code...This code was from my Numerical Methods text in 1990's. The code is typed in from the book- not producing correct output... Sample Run: SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS USING GAUSSIAN ELIMINATION This program uses Gaussian Elimination to solve the system Ax = B, where A is the matrix of known coefficients, B is the vector of known constants and x is the column matrix of the unknowns. Number of equations: 3 Enter elements of matrix [A] A(1,1) = 0 A(1,2) = -6 A(1,3) = 9 A(2,1) = 7 A(2,2) = 0 A(2,3) = -5 A(3,1) = 5 A(3,2) = -8 A(3,3) = 6 Enter elements of [b] vector B(1) = -3 B(2) = 3 B(3) = -4 SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS The solution is x(1) = 0.000000 x(2) = -1.#IND00 x(3) = -1.#IND00 Determinant = -1.#IND00 Press any key to continue . . . The code as copied from the text... //Modified Code from C Numerical Methods Text- June 2009 #include <stdio.h> #include <math.h> #define MAXSIZE 20 //function prototype int gauss (double a[][MAXSIZE], double b[], int n, double *det); int main(void) { double a[MAXSIZE][MAXSIZE], b[MAXSIZE], det; int i, j, n, retval; printf("\n \t SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS"); printf("\n \t USING GAUSSIAN ELIMINATION \n"); printf("\n This program uses Gaussian Elimination to solve the"); printf("\n system Ax = B, where A is the matrix of known"); printf("\n coefficients, B is the vector of known constants"); printf("\n and x is the column matrix of the unknowns."); //get number of equations n = 0; while(n <= 0 || n > MAXSIZE) { printf("\n Number of equations: "); scanf ("%d", &n); } //read matrix A printf("\n Enter elements of matrix [A]\n"); for (i = 0; i < n; i++) for (j = 0; j < n; j++) { printf(" A(%d,%d) = ", i + 1, j + 1); scanf("%lf", &a[i][j]); } //read {B} vector printf("\n Enter elements of [b] vector\n"); for (i = 0; i < n; i++) { printf(" B(%d) = ", i + 1); scanf("%lf", &b[i]); } //call Gauss elimination function retval = gauss(a, b, n, &det); //print results if (retval == 0) { printf("\n\t SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS\n"); printf("\n\t The solution is"); for (i = 0; i < n; i++) printf("\n \t x(%d) = %lf", i + 1, b[i]); printf("\n \t Determinant = %lf \n", det); } else printf("\n \t SINGULAR MATRIX \n"); return 0; } /* Solves the system of equations [A]{x} = {B} using */ /* the Gaussian elimination method with partial pivoting. 
*/ /* Parameters: */ /* n - number of equations */ /* a[n][n] - coefficient matrix */ /* b[n] - right-hand side vector */ /* *det - determinant of [A] */ int gauss (double a[][MAXSIZE], double b[], int n, double *det) { double tol, temp, mult; int npivot, i, j, l, k, flag; //initialization *det = 1.0; tol = 1e-30; //initial tolerance value npivot = 0; //mult = 0; //forward elimination for (k = 0; k < n; k++) { //search for max coefficient in pivot row- a[k][k] pivot element for (i = k + 1; i < n; i++) { if (fabs(a[i][k]) > fabs(a[k][k])) { //interchange row with maxium element with pivot row npivot++; for (l = 0; l < n; l++) { temp = a[i][l]; a[i][l] = a[k][l]; a[k][l] = temp; } temp = b[i]; b[i] = b[k]; b[k] = temp; } } //test for singularity if (fabs(a[k][k]) < tol) { //matrix is singular- terminate flag = 1; return flag; } //compute determinant- the product of the pivot elements *det = *det * a[k][k]; //eliminate the coefficients of X(I) for (i = k; i < n; i++) { mult = a[i][k] / a[k][k]; b[i] = b[i] - b[k] * mult; //compute constants for (j = k; j < n; j++) //compute coefficients a[i][j] = a[i][j] - a[k][j] * mult; } } //adjust the sign of the determinant if(npivot % 2 == 1) *det = *det * (-1.0); //backsubstitution b[n] = b[n] / a[n][n]; for(i = n - 1; i > 1; i--) { for(j = n; j > i + 1; j--) b[i] = b[i] - a[i][j] * b[j]; b[i] = b[i] / a[i - 1][i]; } flag = 0; return flag; } The solution should be: 1.058824, 1.823529, 0.882353 with det as -102.000000 Any insight is appreciated...
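    Two indexing bugs account for the `-1.#IND` output. The elimination loop starts at `i = k`, so `mult` is 1 on the first pass and the pivot row is subtracted from itself, wiping it (and `b[k]`) to zero; every later pivot is then zero. And the back substitution indexes `b[n]` and `a[n][n]`, one past the end of 0-indexed arrays, then divides by `a[i-1][i]` instead of `a[i][i]`. A corrected sketch of the two loops, keeping the question's variable names:

```c
/* Forward elimination: start BELOW the pivot row, not at it. */
for (i = k + 1; i < n; i++) {
    mult = a[i][k] / a[k][k];
    b[i] = b[i] - b[k] * mult;
    for (j = k; j < n; j++)
        a[i][j] = a[i][j] - a[k][j] * mult;
}

/* Back substitution: the last valid index is n - 1, and each row
 * divides by its own diagonal element. */
b[n - 1] = b[n - 1] / a[n - 1][n - 1];
for (i = n - 2; i >= 0; i--) {
    for (j = i + 1; j < n; j++)
        b[i] = b[i] - a[i][j] * b[j];
    b[i] = b[i] / a[i][i];
}
```

    With these two changes the sample run should produce the expected solution (1.058824, 1.823529, 0.882353) and determinant -102.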



  • Java homework help, Error <identifier> expected

    - by user2900126
    Help with java homework this is my assignment that I have, this assignment code I've tried. But when I try to compile it I keep getting errors which I cant seem to find soloutions too: Error says <identifier> expected for Line 67 public static void () Assignment brief To write a simple java classMobile that models a mobile phone. Details the information stored about each mobile phone will include • Its type e.g. “Sony ericsson x90” or “Samsung Galaxy S”; • Its screen size in inches; You may assume that this a whole number from the scale 3 to 5 inclusive. • Its memory card capacity in gigabytes You may assume that this a whole number • The name of its present service provider You may assume this is a single line of text. • The type of contract with service provider You may assume this is a single line of text. • Its camera resolution in megapixels; You should not assume that this a whole number; • The percentage of charge left on the phone e.g. a fully charged phone will have a charge of 100. You may assume that this a whole number • Whether the phone has GPS or not. Your class will have fields corresponding to these attributes . Start by opening BlueJ, creating a new project called myMobile which has a classMobile and set up the fields that you need, Next you will need to write a Constructor for the class. Assume that each phone is manufactured by creating an object and specifying its type, its screen size, its memory card capacity, its camera resolution and whether it has GPS or not. Therefore you will need a constructor that allows you to pass arguments to initialise these five attributes. Other fields should be set to appropriate default values. You may assume that a new phone comes fully charged. When the phone is sold to its owner, you will need to set the service provider and type of contract with that provider so you will need mutator methods • setProvider () - - to set service provider. • setContractType - - to set the type of contract These methods will be used when the phones provider is changed. You should also write a mutator method ChargeUp () which simulates fully charging the phone. To obtain information about your mobile object you should write • accessor methods corresponding to four of its fields: • getType () – which returns the type of mobile; • getProvider () – which returns the present service provider; • getContractType () – which returns its type of contract; • getCharge () – which returns its remaining charge. An accessor method to printDetails () to print, to the terminal window, a report about the phone e.g. This mobile phone is a sony Erricsson X90 with Service provider BigAl and type of contract PAYG. At present it has 30% of its battery charge remaining. Check that the new method works correctly by for example, • creating a Mobile object and setting its fields; • calling printDetails () and t=checking the report corresponds to the details you have just given the mobile; • changing the service provider and contract type by calling setprovider () and setContractType (); • calling printDetails () and checking the report now prints out the new details. Challenging excercises • write a mutator methodswitchedOnFor () =which simulates using the phone for a specified period. You may assume the phone loses 1% of its charge for each hour that it is switched on . • write an accessor method checkcharge () whichg checks the phone remaing charge. 
If this charge has a value less than 25%, then this method returns a string containg the message Be aware that you will soon need to re-charge your phone, otherwise it returns a string your phone charge is sufficient. • Write a method changeProvider () which simulates changing the provider (and presumably also the type of service contract). Finally you may add up to four additional fields, with appropriate methods, that might be required in a more detailed model. above is my assignment that I have, this assignment code I've tried. But when I try to oompile it I keep getting errors which I cant seem to find soloutions too: Error says <identifier> expected for Line 67 public static void () /** * to write a simple java class Mobile that models a mobile phone. * * @author (Lewis Burte-Clarke) * @version (14/10/13) */ public class Mobile { // type of phone private String phonetype; // size of screen in inches private int screensize; // menory card capacity private int memorycardcapacity; // name of present service provider private String serviceprovider; // type of contract with service provider private int typeofcontract; // camera resolution in megapixels private int cameraresolution; // the percentage of charge left on the phone private int checkcharge; // wether the phone has GPS or not private String GPS; // instance variables - replace the example below with your own private int x; // The constructor method public Mobile(String mobilephonetype, int mobilescreensize, int mobilememorycardcapacity,int mobilecameraresolution,String mobileGPS, String newserviceprovider) { this.phonetype = mobilephonetype; this.screensize = mobilescreensize; this.memorycardcapacity = mobilememorycardcapacity; this.cameraresolution = mobilecameraresolution; this.GPS = mobileGPS; // you do not use this ones during instantiation,you can remove them if you do not need or assign them some default values //this.serviceprovider = newserviceprovider; //this.typeofcontract = 12; //this.checkcharge = checkcharge; Mobile samsungPhone = new Mobile("Samsung", "1024", "2", "verizon", "8", "GPS"); 1024 = screensize; 2 = memorycardcapacity; 8 = resolution; GPS = gps; "verizon"=serviceprovider; //typeofcontract = 12; //checkcharge = checkcharge; } // A method to display the state of the object to the screen public void displayMobileDetails() { System.out.println("phonetype: " + phonetype); System.out.println("screensize: " + screensize); System.out.println("memorycardcapacity: " + memorycardcapacity); System.out.println("cameraresolution: " + cameraresolution); System.out.println("GPS: " + GPS); System.out.println("serviceprovider: " + serviceprovider); System.out.println("typeofcontract: " + typeofcontract); } /** * The mymobile class implements an application that * simply displays "new Mobile!" to the standard output. */ public class mymobile { public static void main(String[] args) { System.out.println("new Mobile!"); //Display the string. } } public static void buildPhones(){ Mobile Samsung = new Mobile("Samsung", "3.0", "4gb", "8mega pixels", "GPS"); Mobile Blackberry = new Mobile("Blackberry", "3.0", "4gb", "8mega pixels", "GPS"); Samsung.displayMobileDetails(); Blackberry.displayMobileDetails(); } public static void main(String[] args) { buildPhones(); } } any answers.replies and help would be greatly appreciated as I really lost!
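    The reported error is literal: `public static void ()` has no method name, so the compiler expects an identifier right where the `(` appears. Beyond that, the constructor is called with strings where its declared parameters are `int`s, assignments like `1024 = screensize;` put the value on the left, and there are two `main` methods. A minimal self-contained sketch of the corrected shape (field set trimmed for space; names follow the assignment, argument values are illustrative):

```java
public class Mobile {
    private String phonetype;
    private int screensize;           // inches, whole number 3-5
    private int memorycardcapacity;   // gigabytes
    private int cameraresolution;     // megapixels (int here for brevity)
    private String gps;
    private String serviceprovider;

    // Parameter types must match the arguments passed at the call site.
    public Mobile(String type, int screenSize, int memoryCardCapacity,
                  int cameraResolution, String gps, String serviceProvider) {
        this.phonetype = type;
        this.screensize = screenSize;            // variable on the LEFT of '='
        this.memorycardcapacity = memoryCardCapacity;
        this.cameraresolution = cameraResolution;
        this.gps = gps;
        this.serviceprovider = serviceProvider;
    }

    public void displayMobileDetails() {
        System.out.println(phonetype + ", " + screensize + "\", "
                + memorycardcapacity + " GB, " + cameraresolution + " MP, "
                + gps + ", " + serviceprovider);
    }

    // The original "public static void ()" lacked a name -- hence
    // "<identifier> expected". Keep exactly one main as the entry point.
    public static void main(String[] args) {
        Mobile samsung = new Mobile("Samsung Galaxy S", 4, 16, 8, "GPS", "Verizon");
        samsung.displayMobileDetails();
    }
}
```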


  • Why is my php script freezing?

    - by William
    What is causing my PHP code to freeze? I know it's because of the while loop, but I have $max_threads--; at the end, so it shouldn't do that.

```php
<html>
<head>
<?php
$db = mysql_connect("host","name","pass") or die("Can't connect to host");
mysql_select_db("dbname",$db) or die("Can't connect to DB");
$sql_result = mysql_query("SELECT MAX(Thread) FROM test_posts", $db);
$rs = mysql_fetch_row($sql_result);
$max_threads = $rs[0];
$board = $_GET['board'];
?>
</head>
<body>
<?php
While($max_threads >= 0) {
    $sql_result = mysql_query("SELECT MIN(ID) FROM test_posts WHERE Thread=".$max_threads."", $db);
    $rs = mysql_fetch_row($sql_result);
    $sql_result = mysql_query("SELECT post FROM test_posts WHERE ID=".$rs[0]."", $db);
    $post = mysql_fetch_row($sql_result);
    $sql_result = mysql_query("SELECT name FROM test_posts WHERE ID=".$rs[0]."", $db);
    $name = mysql_fetch_row($sql_result);
    $sql_result = mysql_query("SELECT trip FROM test_posts WHERE ID=".$rs[0]."", $db);
    $trip = mysql_fetch_row($sql_result);
    if(!empty($post))
        echo'<div class="postbox"><h4>'.$name[0].'['.$trip[0].']</h4><hr />' . $post[0] . '<br /><hr />[<a href="http://prime.programming-designs.com/test_forum/viewthread.php?thread='.$max_threads.'">Reply</a>]</div>';
    $max_threads--;
}
?>
</body>
</html>
```
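    One concrete way this loop can never end: if `test_posts` is empty, `MAX(Thread)` comes back as NULL. In PHP, `NULL >= 0` evaluates to true, and the `--` operator has no effect on NULL (a documented quirk of the decrement operator), so `$max_threads--` never changes the value. Forcing an integer, and skipping the loop when there is nothing to show, removes that failure mode — a sketch under that assumption:

```php
<?php
// MAX(Thread) is NULL on an empty table; NULL >= 0 is true and NULL-- stays
// NULL, so the loop would spin forever. Cast to int and guard the empty case.
$sql_result = mysql_query("SELECT MAX(Thread) FROM test_posts", $db);
$rs = mysql_fetch_row($sql_result);

if ($rs === false || $rs[0] === null) {
    $max_threads = -1;            // nothing to show: the loop body never runs
} else {
    $max_threads = (int) $rs[0];  // (int) makes $max_threads-- behave normally
}
?>
```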


  • Strange type-related error

    - by vsb
    I wrote the following program:

```haskell
isPrime x = and [x `mod` i /= 0 | i <- [2 .. truncate (sqrt x)]]
primes = filter isPrime [1 .. ]
```

    It should construct the list of prime numbers. But I got this error:

```
[1 of 1] Compiling Main             ( 7/main.hs, interpreted )

7/main.hs:3:16:
    Ambiguous type variable `a' in the constraints:
      `Floating a' arising from a use of `isPrime' at 7/main.hs:3:16-22
      `RealFrac a' arising from a use of `isPrime' at 7/main.hs:3:16-22
      `Integral a' arising from a use of `isPrime' at 7/main.hs:3:16-22
    Possible cause: the monomorphism restriction applied to the following:
      primes :: [a] (bound at 7/main.hs:3:0)
    Probable fix: give these definition(s) an explicit type signature
                  or use -XNoMonomorphismRestriction
Failed, modules loaded: none.
```

    If I specify the signature for the isPrime function explicitly:

```haskell
isPrime :: Integer -> Bool
isPrime x = and [x `mod` i /= 0 | i <- [2 .. truncate (sqrt x)]]
```

    I can't even compile the isPrime function:

```
[1 of 1] Compiling Main             ( 7/main.hs, interpreted )

7/main.hs:2:45:
    No instance for (RealFrac Integer)
      arising from a use of `truncate' at 7/main.hs:2:45-61
    Possible fix: add an instance declaration for (RealFrac Integer)
    In the expression: truncate (sqrt x)
    In the expression: [2 .. truncate (sqrt x)]
    In a stmt of a list comprehension: i <- [2 .. truncate (sqrt x)]

7/main.hs:2:55:
    No instance for (Floating Integer)
      arising from a use of `sqrt' at 7/main.hs:2:55-60
    Possible fix: add an instance declaration for (Floating Integer)
    In the first argument of `truncate', namely `(sqrt x)'
    In the expression: truncate (sqrt x)
    In the expression: [2 .. truncate (sqrt x)]
Failed, modules loaded: none.
```

    Can you help me understand why I am getting these errors?
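    The root cause: `x` is used both by `mod` (which needs an `Integral` type) and by `sqrt` (which needs a `Floating` type), and no standard type is both — hence the ambiguity, and the missing instances once you pin `x` to `Integer`. Converting explicitly with `fromIntegral` just for the square root resolves it. A minimal sketch (note that `isPrime 1` is still wrongly True here, as in the original, so the candidate list starts at 2):

```haskell
isPrime :: Integer -> Bool
isPrime x = and [x `mod` i /= 0 | i <- [2 .. isqrt x]]
  where
    -- convert to Double only for sqrt, then back to Integer
    isqrt :: Integer -> Integer
    isqrt = truncate . sqrt . (fromIntegral :: Integer -> Double)

primes :: [Integer]
primes = filter isPrime [2 ..]
```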


  • Interpretation of range(n) and boolean list, one-to-one map, simpler?

    - by HH
```python
#!/usr/bin/python
#
# Description: bitwise factorization and then trying to find
#              an elegant way to print numbers
# Source: http://forums.xkcd.com/viewtopic.php?f=11&t=61300#p2195422
# bug with large numbers such as 99, but main point in simplifying it
#
def primes(n):
    # all even numbers greater than 2 are not prime.
    s = [False]*2 + [True]*2 + [False,True]*((n-4)//2) + [False]*(n%2)
    i = 3
    while i*i < n:
        # get rid of ** and skip even numbers.
        s[i*i : n : i*2] = [False]*(1+(n-i*i)//(i*2))
        i += 2
        # skip non-primes
        while not s[i]:
            i += 2
    return s

# TRIAL: can you find a simpler way to print them?
# feeling the overuse of assignments but cannot see a way to get it simpler
#
p = 49
boolPrimes = primes(p)
numbs = range(len(boolPrimes))
mydict = dict(zip(numbs, boolPrimes))
print([numb for numb in numbs if mydict[numb]])
```

    Here is what I am looking for: can you get TRIAL down to the extreme simplicity below? Any such method?

```python
a = [True, False, True]
b = [1, 2, 3]
b_a  # any such simple way to get it evaluated to [1,3]
# above is a crude way to do it in TRIAL
```
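    The dict is unnecessary: pairing the values with the boolean mask and filtering does the job directly, and `itertools.compress` expresses exactly this one-to-one selection. A minimal sketch:

```python
from itertools import compress

a = [True, False, True]
b = [1, 2, 3]

print([x for x, keep in zip(b, a) if keep])  # -> [1, 3]
print(list(compress(b, a)))                  # -> [1, 3]

# Applied to TRIAL (reusing primes() from the question above):
boolPrimes = primes(49)
print(list(compress(range(len(boolPrimes)), boolPrimes)))
```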


  • Which Android hardware devices should I test on? [closed]

    - by Tchami
    Possible Duplicate: What hardware devices do you test your Android apps on?

    I'm trying to compile a list of Android hardware devices that it would make sense to buy and test against if you want to target as broad an audience as possible, while still not buying every single Android device out there. I know there's a lot of information regarding screen sizes and Android versions available elsewhere, but: when developing for Android it's not terribly useful to know if the screen size of a device is 480x800 or 320x240, unless you feel like doing the math to convert that into Android "units" (i.e. small, normal, large or xlarge screens, and ldpi, mdpi, hdpi or xhdpi densities). Even knowing the dimensions of a device, you cannot be sure of the actual Android units as there's some overlap; see Range of screens supported in the Android documentation.

    Taking into account the distribution of Platform versions and Screen Sizes and Densities, below is my current list, based on information from the Wikipedia article on Comparison of Android devices. I'm fairly sure the information in this list is correct, but I'd welcome any suggestions/changes.

    Phones

    | Model                   | Android Version | Screen Size | Density |
    | HTC Wildfire            | 2.1/2.2         | Normal      | mdpi    |
    | HTC Tattoo              | 1.6             | Normal      | mdpi    |
    | HTC Hero                | 2.1             | Normal      | mdpi    |
    | HTC Legend              | 2.1             | Normal      | mdpi    |
    | Sony Ericsson Xperia X8 | 1.6/2.1         | Normal      | mdpi    |
    | Motorola Droid          | 2.0-2.2         | Normal      | hdpi    |
    | Samsung Galaxy S II     | 2.3             | Normal      | hdpi    |
    | Samsung Galaxy Nexus    | 4.0             | Normal      | xhdpi   |
    | Samsung Galaxy S III    | 4.0             | Normal      | xhdpi   |

    Tablets

    | Model                   | Android Version | Screen Size | Density |
    | Samsung Galaxy Tab 7"   | 2.2             | Large       | hdpi    |
    | Samsung Galaxy Tab 10"  | 3.0             | X-Large     | mdpi    |
    | Asus Transformer Prime  | 4.0             | X-Large     | mdpi    |
    | Motorola Xoom           | 3.1/4.0         | X-Large     | mdpi    |

    N.B.: I have seen (and read) other posts on SO on this subject, e.g. Which Android devices should I test against? and What hardware devices do you test your Android apps on?, but they don't seem very canonical. Maybe this should be marked community wiki?


  • A free standing ASP.NET Pager Web Control

    - by Rick Strahl
    Paging in ASP.NET has been relatively easy with stock controls supporting basic paging functionality. However, recently I built an MVC application and one of the things I ran into was that I HAD TO build manual paging support into a few of my pages. Dealing with list controls and rendering markup is easy enough, but doing paging is a little more involved. I ended up with a small but flexible component that can be dropped anywhere. As it turns out the task of creating a semi-generic Pager control for MVC was fairly easily. Now I’m back to working in Web Forms and thought to myself that the way I created the pager in MVC actually would also work in ASP.NET – in fact quite a bit easier since the whole thing can be conveniently wrapped up into an easily reusable control. A standalone pager would provider easier reuse in various pages and a more consistent pager display regardless of what kind of 'control’ the pager is associated with. Why a Pager Control? At first blush it might sound silly to create a new pager control – after all Web Forms has pretty decent paging support, doesn’t it? Well, sort of. Yes the GridView control has automatic paging built in and the ListView control has the related DataPager control. The built in ASP.NET paging has several issues though: Postback and JavaScript requirements If you look at paging links in ASP.NET they are always postback links with javascript:__doPostback() calls that go back to the server. While that works fine and actually has some benefit like the fact that paging saves changes to the page and post them back, it’s not very SEO friendly. Basically if you use javascript based navigation nosearch engine will follow the paging links which effectively cuts off list content on the first page. The DataPager control does support GET based links via the QueryStringParameter property, but the control is effectively tied to the ListView control (which is the only control that implements IPageableItemContainer). DataSource Controls required for Efficient Data Paging Retrieval The only way you can get paging to work efficiently where only the few records you display on the page are queried for and retrieved from the database you have to use a DataSource control - only the Linq and Entity DataSource controls  support this natively. While you can retrieve this data yourself manually, there’s no way to just assign the page number and render the pager based on this custom subset. Other than that default paging requires a full resultset for ASP.NET to filter the data and display only a subset which can be very resource intensive and wasteful if you’re dealing with largish resultsets (although I’m a firm believer in returning actually usable sets :-}). If you use your own business layer that doesn’t fit an ObjectDataSource you’re SOL. That’s a real shame too because with LINQ based querying it’s real easy to retrieve a subset of data that is just the data you want to display but the native Pager functionality doesn’t support just setting properties to display just the subset AFAIK. DataPager is not Free Standing The DataPager control is the closest thing to a decent Pager implementation that ASP.NET has, but alas it’s not a free standing component – it works off a related control and the only one that it effectively supports from the stock ASP.NET controls is the ListView control. This means you can’t use the same data pager formatting for a grid and a list view or vice versa and you’re always tied to the control. 
Paging Events In order to handle paging you have to deal with paging events. The events fire at specific time instances in the page pipeline and because of this you often have to handle data binding in a way to work around the paging events or else end up double binding your data sources based on paging. Yuk. Styling The GridView pager is a royal pain to beat into submission for styled rendering. The DataPager control has many more options and template layout and it renders somewhat cleaner, but it too is not exactly easy to get a decent display for. Not a Generic Solution The problem with the ASP.NET controls too is that it’s not generic. GridView, DataGrid use their own internal paging, ListView can use a DataPager and if you want to manually create data layout – well you’re on your own. IOW, depending on what you use you likely have very different looking Paging experiences. So, I figured I’ve struggled with this once too many and finally sat down and built a Pager control. The Pager Control My goal was to create a totally free standing control that has no dependencies on other controls and certainly no requirements for using DataSource controls. The idea is that you should be able to use this pager control without any sort of data requirements at all – you should just be able to set properties and be able to display a pager. The Pager control I ended up with has the following features: Completely free standing Pager control – no control or data dependencies Complete manual control – Pager can render without any data dependency Easy to use: Only need to set PageSize, ActivePage and TotalItems Supports optional filtering of IQueryable for efficient queries and Pager rendering Supports optional full set filtering of IEnumerable<T> and DataTable Page links are plain HTTP GET href Links Control automatically picks up Page links on the URL and assigns them (automatic page detection no page index changing events to hookup) Full CSS Styling support On the downside there’s no templating support for the control so the layout of the pager is relatively fixed. All elements however are stylable and there are options to control the text, and layout options such as whether to display first and last pages and the previous/next buttons and so on. To give you an idea what the pager looks like, here are two differently styled examples (all via CSS):   The markup for these two pagers looks like this: <ww:Pager runat="server" id="ItemPager" PageSize="5" PageLinkCssClass="gridpagerbutton" SelectedPageCssClass="gridpagerbutton-selected" PagesTextCssClass="gridpagertext" CssClass="gridpager" RenderContainerDiv="true" ContainerDivCssClass="gridpagercontainer" MaxPagesToDisplay="6" PagesText="Item Pages:" NextText="next" PreviousText="previous" /> <ww:Pager runat="server" id="ItemPager2" PageSize="5" RenderContainerDiv="true" MaxPagesToDisplay="6" /> The latter example uses default style settings so it there’s not much to set. The first example on the other hand explicitly assigns custom styles and overrides a few of the formatting options. Styling The styling is based on a number of CSS classes of which the the main pager, pagerbutton and pagerbutton-selected classes are the important ones. Other styles like pagerbutton-next/prev/first/last are based on the pagerbutton style. 
The default styling shown for the red outlined pager looks like this:

.pagercontainer { margin: 20px 0; background: whitesmoke; padding: 5px; }
.pager { float: right; font-size: 10pt; text-align: left; }
.pagerbutton, .pagerbutton-selected, .pagertext { display: block; float: left; text-align: center; border: solid 2px maroon; min-width: 18px; margin-left: 3px; text-decoration: none; padding: 4px; }
.pagerbutton-selected { font-size: 130%; font-weight: bold; color: maroon; border-width: 0px; background: khaki; }
.pagerbutton-first { margin-right: 12px; }
.pagerbutton-last, .pagerbutton-prev { margin-left: 12px; }
.pagertext { border: none; margin-left: 30px; font-weight: bold; }
.pagerbutton a { text-decoration: none; }
.pagerbutton:hover { background-color: maroon; color: cornsilk; }
.pagerbutton-prev { background-image: url(images/prev.png); background-position: 2px center; background-repeat: no-repeat; width: 35px; padding-left: 20px; }
.pagerbutton-next { background-image: url(images/next.png); background-position: 40px center; background-repeat: no-repeat; width: 35px; padding-right: 20px; margin-right: 0px; }

Yup, that’s a lot of styling settings, although not all of them are required. The key ones are pager, pagerbutton and pagerbutton-selected. The others (which are implicitly created by the control based on the pagerbutton style) are for custom markup of the ‘special’ buttons. In my apps I tend to have two kinds of pages: those associated with typical ‘grid’ displays that show purely tabular data, and those that have a looser, list-like layout. The two pagers shown above represent these two views, and the pager and gridpager styles in my standard style sheet reflect these two styles.

Configuring the Pager with Code. Finally let’s look at what it takes to hook up the pager. As mentioned in the highlights, the Pager control is completely independent of other controls, so if you just want to display a pager on its own it’s as simple as dropping the control and assigning the PageSize, ActivePage and either TotalPages or TotalItems. So for this markup:

<ww:Pager runat="server" id="ItemPagerManual" PageSize="5" MaxPagesToDisplay="6" />

I can use code as simple as:

ItemPagerManual.PageSize = 3;
ItemPagerManual.ActivePage = 4;
ItemPagerManual.TotalItems = 20;

Note that ActivePage is not required – the control will automatically use any Page=x query string value and assign it, although you can override it as I did above. TotalItems can be any value that you retrieve from a result set or manually assign as I did above. A more realistic scenario based on a LINQ to SQL IQueryable result is even easier. In this example, I have a UserControl that contains a ListView control that renders IQueryable data. I use a UserControl here because there are different views the user can choose from, with each view being a different user control. This incidentally also highlights one of the nice features of the pager: because the pager is independent of the control, I can put the pager on the host page instead of into each of the user controls. IOW, there’s only one Pager control, but there are potentially many user controls/ListViews that hold the actual display data. The following code demonstrates how to use the Pager with an IQueryable that loads only the records it displays:

protected void Page_Load(object sender, EventArgs e)
{
    Category = Request.Params["Category"] ??
        string.Empty;

    IQueryable<wws_Item> ItemList = ItemRepository.GetItemsByCategory(Category);

    // Update the pager and filter the list down to the active page
    ItemList = ItemPager.FilterIQueryable<wws_Item>(ItemList);

    // Render user control with a list view
    Control ulItemList = LoadControl("~/usercontrols/" + App.Configuration.ItemListType + ".ascx");
    ((IInventoryItemListControl)ulItemList).InventoryItemList = ItemList;
    phItemList.Controls.Add(ulItemList); // placeholder
}

The code uses a business object to retrieve items by category as an IQueryable, which means the result is only an expression tree that hasn’t executed any SQL yet and can be filtered further. I then pass this IQueryable to the FilterIQueryable() helper method of the control, which does two main things: it filters the IQueryable to retrieve only the data displayed on the active page, and it sets the TotalItems property and calculates TotalPages on the pager. And that’s it! When the pager renders, it uses those values, plus the PageSize and ActivePage properties, to render itself. In addition to IQueryable there are also filter methods for IEnumerable<T> and DataTable, but those versions filter the data by removing rows/items from the already retrieved full set.

Output Generated and Paging Links. The output generated renders pager links as plain href links. Here’s what the output looks like:

<div id="ItemPager" class="pagercontainer">
<div class="pager">
<span class="pagertext">Pages: </span><a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=1" class="pagerbutton" />1</a>
<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=2" class="pagerbutton" />2</a>
<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton" />3</a>
<span class="pagerbutton-selected">4</span>
<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton" />5</a>
<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=6" class="pagerbutton" />6</a>
<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=20" class="pagerbutton pagerbutton-last" />20</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton pagerbutton-prev" />Prev</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton pagerbutton-next" />Next</a></div>
<br clear="all" />
</div>

The links point back to the current page and simply append a Page= query value. When the page gets reloaded with the new page number, the pager automatically detects the page number and assigns the ActivePage property, which results in the appropriate page being displayed. The code shown in the previous section is all that’s needed to handle paging. Note that HTTP GET based paging is different from the postback paging ASP.NET uses by default. Postback paging preserves modified page content when clicking on pager buttons, but this control will simply load a new page – no page preservation at this time. The advantage of not using postback paging is that the URLs generated are plain HTML links that a search engine can follow, where __doPostBack() links are not.

Pager with a Grid. The pager also works in combination with grid controls, so it’s easy to bypass the grid control’s own paging features if desired. In the following example I use a DataGrid control and bind it to a DataTable result, which is also filterable by the Pager control.
The very basic, plain vanilla ASP.NET grid markup looks like this:

<div style="width: 600px; margin: 0 auto; padding: 20px;">
    <asp:DataGrid runat="server" AutoGenerateColumns="True" ID="gdItems" CssClass="blackborder" style="width: 600px;">
        <AlternatingItemStyle CssClass="gridalternate" />
        <HeaderStyle CssClass="gridheader" />
    </asp:DataGrid>
    <ww:Pager runat="server" ID="Pager"
        CssClass="gridpager"
        ContainerDivCssClass="gridpagercontainer"
        PageLinkCssClass="gridpagerbutton"
        SelectedPageCssClass="gridpagerbutton-selected"
        PageSize="8"
        RenderContainerDiv="true"
        MaxPagesToDisplay="6" />
</div>

and looks like this when rendered, using a custom set of CSS styles. The code-behind is also very simple:

protected void Page_Load(object sender, EventArgs e)
{
    string category = Request.Params["category"] ?? "";

    busItem itemRep = WebStoreFactory.GetItem();
    var items = itemRep.GetItemsByCategory(category)
                       .Select(itm => new { Sku = itm.Sku, Description = itm.Description });

    // run query into a DataTable for demonstration
    DataTable dt = itemRep.Converter.ToDataTable(items, "TItems");

    // Remove all items not on the current page
    dt = Pager.FilterDataTable(dt, 0);

    // bind and display
    gdItems.DataSource = dt;
    gdItems.DataBind();
}

A little contrived, I suppose, since the list could have been bound directly from the item list, but this demonstrates that you can also bind against a DataTable if your business layer returns those. Unfortunately there’s no way to filter a DataReader, as it’s a one-way, forward-only reader, and the reader is required by the data source to perform the binding. However, you can still use a DataReader as long as your business logic filters the data prior to rendering and provides a total item count (most likely via a second query); a quick sketch of that two-query approach follows below.

Control Creation. The control itself is a pretty brute force ASP.NET control. Nothing clever about it other than some basic rendering logic and some simple calculations and update routines to determine which buttons need to be shown. You can take a look at the full code in the West Wind Web Toolkit’s repository (note there are a few dependencies).
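As an aside, here is a minimal sketch of the two-query DataReader approach mentioned above. This is my own illustration, not part of the control: the Items table, its columns, and the connectionString and category variables are all made up, and it assumes SQL Server 2005 or later for ROW_NUMBER(). It requires the System.Data and System.Data.SqlClient namespaces:

int pageSize = Pager.PageSize;
int activePage = Pager.ActivePage < 1 ? 1 : Pager.ActivePage;

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    // Query 1: total count so the pager can calculate TotalPages
    using (var countCmd = new SqlCommand(
        "select count(*) from Items where Category = @cat", conn))
    {
        countCmd.Parameters.AddWithValue("@cat", category);
        Pager.TotalItems = (int)countCmd.ExecuteScalar();
    }

    // Query 2: only the rows for the active page, via ROW_NUMBER()
    using (var cmd = new SqlCommand(
        @"select Sku, Description from
             (select Sku, Description,
                     row_number() over (order by Sku) as RowNum
              from Items where Category = @cat) as T
          where T.RowNum between @first and @last", conn))
    {
        cmd.Parameters.AddWithValue("@cat", category);
        cmd.Parameters.AddWithValue("@first", (activePage - 1) * pageSize + 1);
        cmd.Parameters.AddWithValue("@last", activePage * pageSize);

        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            // The reader now holds only one page of data,
            // so binding it directly is no longer wasteful
            gdItems.DataSource = reader;
            gdItems.DataBind();
        }
    }
}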
To give you an idea how the control works, here is the Render() method:

/// <summary>
/// overridden to handle custom pager rendering for runtime and design time
/// </summary>
/// <param name="writer"></param>
protected override void Render(HtmlTextWriter writer)
{
    base.Render(writer);

    if (TotalPages == 0 && TotalItems > 0)
        TotalPages = CalculateTotalPagesFromTotalItems();

    if (DesignMode)
        TotalPages = 10;

    // don't render pager if there's only one page
    if (TotalPages < 2)
        return;

    if (RenderContainerDiv)
    {
        if (!string.IsNullOrEmpty(ContainerDivCssClass))
            writer.AddAttribute("class", ContainerDivCssClass);
        writer.RenderBeginTag("div");
    }

    // main pager wrapper
    writer.WriteBeginTag("div");
    writer.AddAttribute("id", this.ClientID);
    if (!string.IsNullOrEmpty(CssClass))
        writer.WriteAttribute("class", this.CssClass);
    writer.Write(HtmlTextWriter.TagRightChar + "\r\n");

    // Pages Text
    writer.WriteBeginTag("span");
    if (!string.IsNullOrEmpty(PagesTextCssClass))
        writer.WriteAttribute("class", PagesTextCssClass);
    writer.Write(HtmlTextWriter.TagRightChar);
    writer.Write(this.PagesText);
    writer.WriteEndTag("span");

    // if the base url is empty use the current URL
    FixupBaseUrl();

    // set _startPage and _endPage
    ConfigurePagesToRender();

    // write out first page link
    if (ShowFirstAndLastPageLinks && _startPage != 1)
    {
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (1).ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-first");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write("1");
        writer.WriteEndTag("a");
        writer.Write("&nbsp;");
    }

    // write out all the page links
    for (int i = _startPage; i < _endPage + 1; i++)
    {
        if (i == ActivePage)
        {
            writer.WriteBeginTag("span");
            if (!string.IsNullOrEmpty(SelectedPageCssClass))
                writer.WriteAttribute("class", SelectedPageCssClass);
            writer.Write(HtmlTextWriter.TagRightChar);
            writer.Write(i.ToString());
            writer.WriteEndTag("span");
        }
        else
        {
            writer.WriteBeginTag("a");
            string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, i.ToString()).TrimEnd('&');
            writer.WriteAttribute("href", pageUrl);
            if (!string.IsNullOrEmpty(PageLinkCssClass))
                writer.WriteAttribute("class", PageLinkCssClass);
            writer.Write(HtmlTextWriter.SelfClosingTagEnd);
            writer.Write(i.ToString());
            writer.WriteEndTag("a");
        }
        writer.Write("\r\n");
    }

    // write out last page link
    if (ShowFirstAndLastPageLinks && _endPage < TotalPages)
    {
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, TotalPages.ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-last");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write(TotalPages.ToString());
        writer.WriteEndTag("a");
    }

    // Previous link
    if (ShowPreviousNextLinks && !string.IsNullOrEmpty(PreviousText) && ActivePage > 1)
    {
        writer.Write("&nbsp;");
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage - 1).ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-prev");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write(PreviousText);
        writer.WriteEndTag("a");
    }

    // Next link
    if (ShowPreviousNextLinks &&
        !string.IsNullOrEmpty(NextText) && ActivePage < TotalPages)
    {
        writer.Write("&nbsp;");
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage + 1).ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-next");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write(NextText);
        writer.WriteEndTag("a");
    }

    writer.WriteEndTag("div");

    if (RenderContainerDiv)
    {
        if (RenderContainerDivBreak)
            writer.Write("<br clear=\"all\" />\r\n");
        writer.WriteEndTag("div");
    }
}

As I said, pretty much brute force rendering based on the control’s property settings, of which there are quite a few. You can also see the pager in the designer above. Unfortunately the VS designer (both 2010 and 2008) fails to render the float: left CSS styles properly and starts wrapping after margins are applied to the special buttons. Not a big deal since VS does at least respect the spacing (the floated elements overlay). Then again, I’m not using the designer anyway :-}.

Filtering Data. What makes the pager easy to use are the filter methods built into the control. While this functionality is clearly not the most politically correct design choice – it violates separation of concerns – it’s very useful for typical pager operation. While I actually have filter methods that do something similar in my business layer, having them exposed on the control makes it a lot more useful for typical data binding scenarios. Of course these methods are optional – if you have a business layer that can provide filtered page queries for you, you can use that instead and assign the TotalItems property manually. There are three filter method types available – for IQueryable, IEnumerable and DataTable – which tend to be the most common use cases in my apps, old and new. The IQueryable version is pretty simple, as it can rely on .Skip() and .Take() with LINQ:

/// <summary>
/// Queries the database for the ActivePage applied manually
/// or from the Request["page"] variable. This routine
/// figures out and sets TotalPages, ActivePage and
/// returns a filtered subset IQueryable that contains
/// only the items from the ActivePage.
/// </summary>
/// <param name="query"></param>
/// <param name="activePage">
/// The page you want to display. Sets the ActivePage property when passed.
/// Pass 0 or smaller to use the ActivePage setting.
/// </param>
/// <returns></returns>
public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query, int activePage)
    where T : class, new()
{
    ActivePage = activePage < 1 ? ActivePage : activePage;
    if (ActivePage < 1)
        ActivePage = 1;

    TotalItems = query.Count();

    if (TotalItems <= PageSize)
    {
        ActivePage = 1;
        TotalPages = 1;
        return query;
    }

    int skip = ActivePage - 1;
    if (skip > 0)
        query = query.Skip(skip * PageSize);

    _TotalPages = CalculateTotalPagesFromTotalItems();

    return query.Take(PageSize);
}

The IEnumerable<T> version simply converts the IEnumerable to an IQueryable and calls back into this method for filtering. The DataTable version requires a little more work, manually parsing and removing records (I didn’t want to add the LINQ DataSetExtensions assembly just for this):

/// <summary>
/// Filters a data table for an ActivePage.
///
/// Note: modifies the DataTable permanently by removing DataRows
/// </summary>
/// <param name="dt">Full result DataTable</param>
/// <param name="activePage">Page to display.
/// 0 to use the ActivePage property</param>
/// <returns></returns>
public DataTable FilterDataTable(DataTable dt, int activePage)
{
    ActivePage = activePage < 1 ? ActivePage : activePage;
    if (ActivePage < 1)
        ActivePage = 1;

    TotalItems = dt.Rows.Count;

    if (TotalItems <= PageSize)
    {
        ActivePage = 1;
        TotalPages = 1;
        return dt;
    }

    int skip = ActivePage - 1;
    if (skip > 0)
    {
        for (int i = 0; i < skip * PageSize; i++)
            dt.Rows.RemoveAt(0);
    }

    while (dt.Rows.Count > PageSize)
        dt.Rows.RemoveAt(PageSize);

    return dt;
}

Using the Pager Control. The pager as it stands is a first cut I built a couple of weeks ago and have been tweaking since as part of an internal project I’m working on. I’ve replaced a bunch of pagers on various older pages with this pager without any issues, and now have what feels like a more consistent user interface, where paging looks and feels the same across different controls. As a bonus, I’m only loading the data from the database that I need to display a single page. With the preset class tags applied, adding a pager is now as easy as dropping the control and adding the style sheet for styling to be consistent – no fuss, no muss. Schweet. Hopefully some of you will find this as useful as I have, or at least as a baseline to build on top of…

Resources:
The Pager is part of the West Wind Web & Ajax Toolkit
Pager.cs Source Code (some toolkit dependencies)
Westwind.css base stylesheet with .pager and .gridpager styles
Pager Example Page

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET

    Read the article

  • Remotely Schedule and Stream Recorded TV in Windows 7 Media Center

    - by DigitalGeekery
    Have you ever been away from home and suddenly realized you forgot to record your favorite program? Now Windows 7 Media Center users can schedule recordings remotely from their phones or mobile devices with Remote Potato.

How it Works. Remote Potato installs server software on the host computer running Windows 7 Media Center. Once the software is installed, we’ll need to do some port forwarding on the router and set up an optional dynamic DNS address. When setup is complete, we will access the application through a web-based interface. Silverlight is required for streaming recorded TV, but scheduling recordings can be done through an HTML interface.

Installing Remote Potato. Download and install Remote Potato on the Media Center PC. (See download link below.) If you plan to stream any recorded TV, you’ll also want to install the streaming pack located on the same page. It isn’t required to stream all shows, only shows that require the AC3 audio codec. Click Yes to allow Remote Potato to add rules to the Windows Firewall for remote access. You’ll likely need to accept a few UAC prompts. When notified that the rules were added, click OK. Remote Potato will then prompt you to allow administrator privileges to reserve a URL for its web server. Click Yes. The Remote Potato server will start. Click on the configuration button at the right to reveal the settings tabs. On the General tab, you’ll have the option to run Remote Potato at startup and minimized in the system tray. If you’re running Media Center on a dedicated HTPC, you’ll probably want to enable both startup options.

Forwarding Ports on Your Router. You’ll need to forward a couple of ports on your router. By default, these will be ports 9080 and 9081. In this example we’re using a Linksys WRT54GL router; however, the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Applications & Gaming tab, and then the Port Range Forward tab. Under Application, type in a name of your choosing. In both the Start and End boxes, type the port number 9080. Enter the local IP address of your Media Center computer in the IP address column. Click the check box under Enable. Repeat the process on the next line, but this time use port 9081. When finished, click the Save Settings button. Note: It’s highly recommended that you configure the home computer running Media Center and Remote Potato with a static IP address.

Find your IP Address. You’ll need to find the IP address assigned to your router by your ISP. There are many ways to do this, but a quick and easy way is to visit a site like checkip.dyndns.org (link available below). The current external IP address of your router will be displayed in the browser.

Dynamic DNS. This is an optional step, but it’s highly recommended. Many routers, such as the Linksys WRT54GL we are using, support Dynamic DNS (DDNS). What Dynamic DNS allows you to do is affiliate your home router’s external IP address with a domain name. Every time your home router is assigned a new IP address by your ISP, the domain name is updated to point to the new address. Remote Potato’s user interface is accessed over the Internet by connecting to your router’s IP address followed by a colon and the port number.
(Ex: XXX.XXX.XXX.XXX:9080) Instead of constantly having to look up and remember an IP address, you can use DDNS along with a third-party provider like DynDNS.com to sign up for a free domain name and configure it to be updated each time your router is assigned a new IP address. Go to the DynDNS.com website (see link at the end of the article) and sign up for a free domain name. You’ll need to register and confirm by email. Once you’ve signed in and selected your domain name, click Activate Services. You’ll get a confirmation message that your domain name has been activated. On the Linksys WRT54GL, click on the Setup tab and then DDNS. Select DynDNS.org, or TZO.com if you prefer to use their service, from the drop-down list. With DynDNS, you’ll need to fill in the username and password you signed up with at the DynDNS website and the hostname you chose. Note: You can connect over your local network with the IP address of the computer running Remote Potato followed by a colon and the port number. Ex: 192.168.1.2:9080

Logging in to Remote Potato and Recording a Show. Once you connect, you’ll see the start page. To view the TV listings, click on TV Guide. You’ll then see your guide listings. There are a few ways to navigate the listings. At the top left, you can click on any of the preset time buttons to jump to the listings at that time of day. Click on the arrows to the right and left of the day and date at the top center to proceed to the previous or next day. Or, jump to a specific day with the day and date buttons at the top right. To set up a recording, click on a program. You can choose to record the individual show or the entire series by clicking on Record Show or Record Series.

Remote Potato on Mobile Devices. Perhaps the coolest feature of Remote Potato is the ability to schedule recordings from your phone or mobile device. Note: On any device or computer without Silverlight, you will be prompted to view the HTML page. Select Browse Listings, then select the program to record. In the Program Details, select Record Show to record the single episode or Record Series to record all instances of the series. You will then see a red dot on the program listing to indicate that the show is scheduled for recording.

Streaming Recorded TV. Click on Recorded TV from the home screen to access your previously recorded TV programs. Click on the selection you wish to stream, then click Play. If you receive an error message, you’ll need to install the streaming pack for Remote Potato. This is found on the same download page as the installation files. (See link below.) The Begin from slider allows you to start playback from the start (the default) or from a different point in the program by moving the slider. The Quality (bitrate) setting allows you to choose the quality of the playback. We found the video quality on the Normal setting to be pretty lousy, and Low was just pointless. High was the best overall viewing experience, as it provided smooth, quality video playback. We experienced significant stuttering during playback using the Ultra High setting. Click Start when you are ready to begin. When playback begins you’ll see a slider at the top right. Move the slider left or right to increase or decrease the size of the video. There’s also a button to switch to full screen.

Media Center users who travel frequently or are always on the go will likely find Remote Potato to be a blessing. Since being released earlier this year, updates for Remote Potato have come fast and furious.
The latest beta release includes support for streaming music and photos. If you like those nice network TV logos, check out our article on adding TV channel logos to Windows Media Center.

Downloads and Links:
Download Remote Potato and Streaming Pack
Find your IP address
Sign Up for a Domain Name at DynDNS.com

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration.  Although the default configuration options are often ideal, there are times when customizing the behavior is desirable.  Both frameworks provide full configuration support.

When working with data parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine.  By default, PLINQ and the TPL both use the ThreadPool to schedule tasks.  Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal.  However, there are times when the default behavior is not appropriate.  For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to a subset of the processing cores of the system.  Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches.

In the Task Parallel Library, configuration is handled via the ParallelOptions class.  All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument.  We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property.  For example, let’s revisit one of the simple data parallel examples from Part 2:

Parallel.For(0, pixelData.GetUpperBound(0), row =>
{
    for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
    {
        pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
    }
});

Here, we’re looping through an image, and calling a method on each pixel in the image.  If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we would likely want to restrict this to using half of the cores on the system.  This could be accomplished easily by doing:

var options = new ParallelOptions();
options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1);

Parallel.For(0, pixelData.GetUpperBound(0), options, row =>
{
    for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
    {
        pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
    }
});

Now we’re restricting this routine to using no more than half the cores in our system.  Note that I included a check to prevent a single-core system from supplying zero; without this check, we’d potentially cause an exception.  I also did not hard-code a specific value for the MaxDegreeOfParallelism property.  One of our goals when parallelizing a routine is allowing it to scale on better hardware.  Specifying a hard-coded value would contradict that goal.

Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system.
The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads.  In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism.  Let’s revisit our declarative data parallelism sample from Part 6:

double min = collection.AsParallel().Min(item => item.PerformComputation());

Here, we’re performing a computation on each element in the collection, and saving the minimum value of this operation.  If we wanted to restrict this to a limited number of threads, we would add our new extension method:

int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);
double min = collection
    .AsParallel()
    .WithDegreeOfParallelism(maxThreads)
    .Min(item => item.PerformComputation());

This automatically restricts the PLINQ query to half of the threads on the system.

PLINQ provides some additional configuration options.  By default, PLINQ will occasionally revert to processing a query sequentially.  This occurs because many queries, if parallelized, actually cause an overall slowdown compared to a serial processing equivalent.  By analyzing the “shape” of the query, PLINQ often decides to run a query serially instead of in parallel.  This can occur for (taken from MSDN):

Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices.
Queries that contain a Take, TakeWhile, Skip, or SkipWhile operator and where indices in the source sequence are not in the original order.
Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList<T>).
Queries that contain Concat, unless it is applied to indexable data sources.
Queries that contain Reverse, unless applied to an indexable data source.

If the specific query follows these rules, PLINQ will run the query on a single thread.  However, none of these rules look at the specific work being done in the delegates, only at the “shape” of the query.  There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly.  In these cases, you can override the default behavior by using the WithExecutionMode extension method.  This would be done like so:

var reversed = collection
    .AsParallel()
    .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
    .Select(i => i.PerformComputation())
    .Reverse();

Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>.  We can force this to run in parallel by adding the WithExecutionMode extension method to the method chain.

Finally, PLINQ has the ability to configure how results are returned.  When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result.  For example, the method above returns a new, reversed collection.  In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread.

This streaming introduces overhead.  IEnumerable<T> isn’t designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.
There are two extremes of how this could be accomplished, but both extremes have disadvantages.  The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller.  This would mean that the calling thread has access to the data as soon as data is available, which is the benefit of this approach.  However, it also means that every item introduces synchronization overhead, since each item needs to be merged individually.  On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot.  The advantage here is that the least amount of synchronization is added to the system, which means the query will, on the whole, run the fastest.  However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned.

The default behavior in PLINQ is actually between these two extremes.  By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain.  Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks.  This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios.

However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes.  This can be done by using the WithMergeOptions extension method.  For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they become available, with no buffering.  This can be done by changing our above routine to:

var reversed = collection
    .AsParallel()
    .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
    .WithMergeOptions(ParallelMergeOptions.NotBuffered)
    .Select(i => i.PerformComputation())
    .Reverse();

On the other hand, if we are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results:

var reversed = collection
    .AsParallel()
    .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
    .WithMergeOptions(ParallelMergeOptions.FullyBuffered)
    .Select(i => i.PerformComputation())
    .Reverse();

Notice, also, that you can specify multiple configuration options in a parallel query.  By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.
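To tie these options together, here is a small self-contained sketch of my own (not from the original series) that exercises WithDegreeOfParallelism, WithExecutionMode and WithMergeOptions in a single query; the SlowSquare method simply simulates variable, expensive work:

using System;
using System.Linq;
using System.Threading;

class MergeOptionsDemo
{
    static int SlowSquare(int i)
    {
        Thread.Sleep(100);  // simulate expensive, variable-length work
        return i * i;
    }

    static void Main()
    {
        var results = Enumerable.Range(1, 20)
            .AsParallel()
            .WithDegreeOfParallelism(Math.Max(Environment.ProcessorCount / 2, 1))
            .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
            .WithMergeOptions(ParallelMergeOptions.NotBuffered)
            .Select(i => SlowSquare(i));

        // With NotBuffered, items print as soon as each one is ready;
        // switch to FullyBuffered and the loop waits, then prints all at once.
        foreach (var r in results)
            Console.WriteLine(r);
    }
}

Running it with NotBuffered versus FullyBuffered makes the streaming trade-off discussed above directly observable.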

    Read the article

  • New features of C# 4.0

    This article covers the new features of C# 4.0. The article is divided into the following sections: Introduction, Dynamic Lookup, Named and Optional Arguments, Features for COM interop, Variance, Relationship with Visual Basic, and Resources. Other interesting reading: 22 New Features of Visual Studio 2008 for .NET Professionals, 50 New Features of SQL Server 2008, IIS 7.0 New Features.

Introduction. It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out. Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release. The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios.

C# 4.0. The major theme for C# 4.0 is dynamic programming. Increasingly, objects are “dynamic” in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include:

a. objects from dynamic programming languages, such as Python or Ruby
b. COM objects accessed through IDispatch
c. ordinary .NET types accessed through reflection
d. objects with changing structure, such as HTML DOM objects

While C# remains a statically typed language, we aim to vastly improve the interaction with such objects. A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set. The new features in C# 4.0 fall into four groups:

Dynamic lookup. Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass the C# static type checking and instead get resolved at runtime.

Named and optional parameters. Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position.
COM specific interop features. Dynamic lookup as well as named and optional parameters both help make programming against COM less painful than it is today. On top of that, however, we are adding a number of other small features that further improve the interop experience.

Variance. It used to be that an IEnumerable<string> wasn’t an IEnumerable<object>. Now it is – C# embraces type safe “co- and contravariance”, and common BCL types are updated to take advantage of that.

Dynamic Lookup. Dynamic lookup allows you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object. This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn’t so. Oftentimes this will be no loss, because the object wouldn’t have a static type anyway; in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call.

The dynamic type. C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can “do things to it” that are resolved only at runtime:

dynamic d = GetDynamicObject(…);
d.M(7);

The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to “call M with an int” on it. The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, “suspending belief” until runtime. Conversely, there is an “assignment conversion” from dynamic to any other type, which allows implicit conversion in assignment-like constructs:

dynamic d = 7; // implicit conversion
int i = d; // assignment conversion

Dynamic operations. Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically:

dynamic d = GetDynamicObject(…);
d.M(7); // calling methods
d.f = d.P; // getting and setting fields and properties
d[“one”] = d[“two”]; // getting and setting through indexers
int i = d + 3; // calling operators
string s = d(5,7); // invoking as a delegate

The role of the C# compiler here is simply to package up the necessary information about “what is being done to d”, so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler’s job to runtime. The result of any dynamic operation is itself of type dynamic.

Runtime lookup. At runtime a dynamic operation is dispatched according to the nature of its target object d:

COM objects. If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling to COM types that don’t have a Primary Interop Assembly (PIA), and relying on COM features that don’t have a counterpart in C#, such as indexed properties and default properties.
Dynamic objects. If d implements the interface IDynamicObject, d itself is asked to perform the operation. Thus by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM, to allow direct access to the object’s properties using property syntax.

Plain objects. Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# “runtime binder” which implements C#’s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to “finish the work” on dynamic operations that was deferred by the static compiler.

Example. Assume the following code:

dynamic d1 = new Foo();
dynamic d2 = new Bar();
string s;
d1.M(s, d2, 3, null);

Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the “payload”) is essentially equivalent to: “Perform an instance method call of M with the following arguments: 1. a string, 2. a dynamic, 3. a literal int 3, 4. a literal object null.”

At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder picks up and finishes the overload resolution job based on runtime type information, proceeding as follows:

1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2.
2. Method lookup and overload resolution is performed on the type Foo with the call M(string, Bar, 3, null) using ordinary C# semantics.
3. If the method is found it is invoked; otherwise a runtime exception is thrown.

Overload resolution with dynamic arguments. Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic:

Foo foo = new Foo();
dynamic d = new Bar();
var result = foo.M(d);

The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic.

The Dynamic Language Runtime. An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR.
For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of, e.g., a library representing an inherently dynamic domain.

Open issues. There are a few limitations and things that might work differently than you would expect.

· The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn’t have syntax to support this.
· Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload.
· Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. “understand”) an anonymous function without knowing what type it is converted to.

One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects:

dynamic collection = …;
var result = collection.Select(e => e + 5);

If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0.

Named and Optional Arguments. Named and optional parameters are really two distinct features, but are often useful together. Optional parameters allow you to omit arguments in member invocations, whereas named arguments are a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list. Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in .NET APIs, however, you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters, in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations.

Optional parameters. A parameter is declared optional simply by providing a default value for it:

public void M(int x, int y = 5, int z = 7);

Here y and z are optional parameters and can be omitted in calls:

M(1, 2, 3); // ordinary call of M
M(1, 2); // omitting z – equivalent to M(1, 2, 7)
M(1); // omitting both y and z – equivalent to M(1, 5, 7)

Named and optional arguments. C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write:

M(1, z: 3); // passing z by name

or

M(x: 1, z: 3); // passing both x and z by name

or even

M(z: 3, x: 1); // reversing the order of arguments

All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1. Optional and named arguments can be used not only with methods but also with indexers and constructors; a small made-up example follows below.
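To make that last point concrete, here is a short sketch of my own – the Connection type and its parameters are invented purely for illustration – showing optional parameters on a constructor and named arguments at the call sites:

public class Connection
{
    public Connection(string server, int port = 8080, bool secure = false, int timeout = 30)
    {
        // store the settings...
    }
}

// Only the arguments that differ from the defaults need to be supplied,
// and naming them makes the call sites self-documenting:
var c1 = new Connection("example.com");                          // all defaults
var c2 = new Connection("example.com", secure: true);            // skip port, set secure
var c3 = new Connection("example.com", timeout: 120, port: 443); // any order by name

Without optional parameters, the same flexibility would require a family of constructor overloads, one per useful combination.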
Overload resolution. Named and optional arguments affect overload resolution, but the changes are relatively simple: A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type. Betterness rules on conversions are only applied for arguments that are explicitly given – omitted optional arguments are ignored for betterness purposes. If two signatures are equally good, one that does not omit optional parameters is preferred.

M(string s, int i = 1);
M(object o);
M(int i, string s = “Hello”);
M(int i);

M(5);

Given these overloads, we can see the working of the rules above. M(string, int) is not applicable because 5 doesn’t convert to string. M(int, string) is applicable because its second parameter is optional; so, obviously, are M(object) and M(int). M(int, string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally M(int) is better than M(int, string) because no optional arguments are omitted. Thus the method that gets called is M(int).

Features for COM interop. Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0.

Dynamic import. Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say

excel.Cells[1, 1].Value = "Hello";

instead of

((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello";

and

Excel.Range range = excel.Cells[1, 1];

instead of

Excel.Range range = (Excel.Range)excel.Cells[1, 1];

Compiling without PIAs. Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types were really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application. The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded.

Omitting ref. Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters.
It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference.

Open issues. A few COM interface features still are not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above, these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0.

Variance. An aspect of generics that often comes across as surprising is that the following is illegal:

IList<string> strings = new List<string>();
IList<object> objects = strings;

The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write:

objects[0] = 5;
string s = strings[0];

allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety. However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say:

IEnumerable<object> objects = strings;

there is no way we can put the wrong kind of thing into strings through objects, because objects doesn’t have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work.

Covariance. In .NET 4.0 the IEnumerable<T> interface will be declared in the following way:

public interface IEnumerable<out T> : IEnumerable
{
    IEnumerator<T> GetEnumerator();
}

public interface IEnumerator<out T> : IEnumerator
{
    bool MoveNext();
    T Current { get; }
}

The “out” in these declarations signifies that the T can only occur in output position in the interface – the compiler will complain otherwise. In return for this restriction, the interface becomes “covariant” in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B. As a result, any sequence of strings is also, e.g., a sequence of objects. This is useful in many LINQ methods. Using the declarations above:

var result = strings.Union(objects); // succeeds with an IEnumerable<object>

This would previously have been disallowed, and you would have had to do some cumbersome wrapping to get the two sequences to have the same element type.

Contravariance. Type parameters can also have an “in” modifier, restricting them to occur only in input positions. An example is IComparer<T>:

public interface IComparer<in T>
{
    int Compare(T left, T right);
}

The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: if a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance; a short sketch follows below.
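Here is a small self-contained sketch of contravariance in action, assuming the final .NET 4.0 interface declarations shown above (the ToStringComparer type is invented for illustration):

using System;
using System.Collections.Generic;

// Compares any two objects by their string representation.
class ToStringComparer : IComparer<object>
{
    public int Compare(object left, object right)
    {
        return string.Compare(left.ToString(), right.ToString(), StringComparison.Ordinal);
    }
}

class Program
{
    static void Main()
    {
        var names = new List<string> { "charlie", "alpha", "bravo" };

        // Contravariance: an IComparer<object> is usable where an IComparer<string>
        // is expected, because anything that can compare objects can compare strings.
        IComparer<string> comparer = new ToStringComparer();
        names.Sort(comparer);

        Console.WriteLine(string.Join(", ", names)); // alpha, bravo, charlie
    }
}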
A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types:

public delegate TResult Func<in TArg, out TResult>(TArg arg);

Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object, string> can in fact be used as a Func<string, object>.

Limitations. Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion. Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types.

COM Example. Here is a larger Office automation example that shows many of the new C# features in action:

using System;
using System.Diagnostics;
using System.Linq;
using Excel = Microsoft.Office.Interop.Excel;
using Word = Microsoft.Office.Interop.Word;

class Program
{
    static void Main(string[] args)
    {
        var excel = new Excel.Application();
        excel.Visible = true;

        excel.Workbooks.Add(); // optional arguments omitted

        excel.Cells[1, 1].Value = "Process Name"; // no casts; Value dynamically
        excel.Cells[1, 2].Value = "Memory Usage"; // accessed

        var processes = Process.GetProcesses()
            .OrderByDescending(p => p.WorkingSet)
            .Take(10);

        int i = 2;
        foreach (var p in processes)
        {
            excel.Cells[i, 1].Value = p.ProcessName; // no casts
            excel.Cells[i, 2].Value = p.WorkingSet; // no casts
            i++;
        }

        Excel.Range range = excel.Cells[1, 1]; // no casts

        Excel.Chart chart = excel.ActiveWorkbook.Charts.
            Add(After: excel.ActiveSheet); // named and optional arguments

        chart.ChartWizard(
            Source: range.CurrentRegion,
            Title: "Memory Usage in " + Environment.MachineName); // named + optional

        chart.ChartStyle = 45;
        chart.CopyPicture(Excel.XlPictureAppearance.xlScreen,
            Excel.XlCopyPictureFormat.xlBitmap,
            Excel.XlPictureAppearance.xlScreen);

        var word = new Word.Application();
        word.Visible = true;

        word.Documents.Add(); // optional arguments
        word.Selection.Paste();
    }
}

The code is much more terse and readable than the C# 3.0 counterpart. Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument – something which C# does not understand. However, the argument is optional. Since the access is dynamic, it goes through the runtime COM binder, which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges.

Relationship with Visual Basic. A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic:

· Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#.
· Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind.
· No-PIA and variance are both being introduced to VB and C# at the same time.

VB in turn is adding a number of features that have hitherto been a mainstay of C#.
As a result future versions of C# and VB will have much better feature parity, for the benefit of everyone.

Resources

All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily.

Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization.  This is apparent from a common sense standpoint.  Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed:

Parallel.Invoke(
    () =>
    {
        Console.WriteLine("Action 1 executing in thread {0}",
            Thread.CurrentThread.ManagedThreadId);
    },
    () =>
    {
        Console.WriteLine("Action 2 executing in thread {0}",
            Thread.CurrentThread.ManagedThreadId);
    },
    () =>
    {
        Console.WriteLine("Action 3 executing in thread {0}",
            Thread.CurrentThread.ManagedThreadId);
    }
);

Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively.

For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
For example, let’s look at this simple, sequential implementation of quicksort:

public static void QuickSort<T>(T[] array) where T : IComparable<T>
{
    QuickSortInternal(array, 0, array.Length - 1);
}

private static void QuickSortInternal<T>(T[] array, int left, int right)
    where T : IComparable<T>
{
    if (left >= right)
    {
        return;
    }

    SwapElements(array, left, (left + right) / 2);

    int last = left;
    for (int current = left + 1; current <= right; ++current)
    {
        if (array[current].CompareTo(array[left]) < 0)
        {
            ++last;
            SwapElements(array, last, current);
        }
    }

    SwapElements(array, left, last);

    QuickSortInternal(array, left, last - 1);
    QuickSortInternal(array, last + 1, right);
}

static void SwapElements<T>(T[] array, int i, int j)
{
    T temp = array[i];
    array[i] = array[j];
    array[j] = temp;
}

Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use the framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to:

    // Code above is unchanged...
    SwapElements(array, left, last);

    Parallel.Invoke(
        () => QuickSortInternal(array, left, last - 1),
        () => QuickSortInternal(array, last + 1, right)
    );
}

This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation:

When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right-left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke, as in the sketch below.
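A minimal sketch of that size-threshold cutoff follows; the SequentialThreshold constant is an illustrative value of mine, not one from the original post, and in practice you would tune it for your data and hardware:

// Illustrative only: SequentialThreshold is an assumed tuning constant.
private const int SequentialThreshold = 4096;

    // Code above is unchanged...
    SwapElements(array, left, last);

    if (right - left < SequentialThreshold)
    {
        // The partition is small: recursing serially avoids task overhead.
        QuickSortInternal(array, left, last - 1);
        QuickSortInternal(array, last + 1, right);
    }
    else
    {
        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right));
    }
}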
The second approach is to track how “deep” in the “tree” we currently are, and once we have recursed past a certain number of levels, stop parallelizing.  This is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to:

    // Code above is unchanged...
    SwapElements(array, left, last);

    if (maxDepth < 1)
    {
        QuickSortInternal(array, left, last - 1, maxDepth);
        QuickSortInternal(array, last + 1, right, maxDepth);
    }
    else
    {
        --maxDepth;
        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1, maxDepth),
            () => QuickSortInternal(array, last + 1, right, maxDepth));
    }

We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core.

With this final change, my timings are much better.  On average, I get the following timings:

    Framework via Array.Sort: 7.3 seconds
    Serial Quicksort Implementation: 9.3 seconds
    Naive Parallel Implementation: 14 seconds
    Parallel Implementation Restricting Depth: 4.7 seconds

Finally, we are now faster than the framework’s Array.Sort implementation.
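As a footnote to the depth-limited version above: the post doesn’t show the public entry point, but seeding the depth would look something like this sketch (ParallelQuickSort is my name for the hypothetical wrapper):

// Hypothetical wrapper; the original routine names are kept from above.
public static void ParallelQuickSort<T>(T[] array) where T : IComparable<T>
{
    // Seed the recursion with a depth budget of Environment.ProcessorCount,
    // so roughly 2^ProcessorCount leaf-level tasks can exist before the
    // routine falls back to the serial path.
    QuickSortInternal(array, 0, array.Length - 1, Environment.ProcessorCount);
}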

    Read the article

  • CodePlex Daily Summary for Friday, May 21, 2010

CodePlex Daily Summary for Friday, May 21, 2010

New Projects

• .Net wrapper around the Neo4j Rest Server: Neo4jRestSharp is a .Net API wrapper for the Neo4j Rest Server. Neo4j is an open sourced java based transactional graph database that stores data ...
• 3D Editor Application Framework: A starting point for building 3D editing applications, such as video game editors, particle system editors, 3D modelling tools, visualization tools...
• Bulk Actions for SharePoint: This project aims to provide some essential and generic bulk actions for SharePoint lists. Idea is to include any custom actions that can be applie...
• CineRemote - The hometheater control board: CineRemote's purpose is to offer an alternative to expensive control systems for dedicated hometheater rooms.
• CrmContrib: CrmContrib is a collection of useful items for developers and customizers working with the Dynamics CRM platform.
• db2xls: OleDb, Sql Server, Sqlite, ... to excel, from sql
• HappyNet - Silverlight reference application: HappyNet is a project using best practices to build an e-commerce web site. It is a full Silverlight application based on a solid architecture (PR...
• IP Multicast Library: IP Multicast Library makes it easier for developers to add Multicast messaging to projects.
• Linkbutton Web Part: This Link Button Web Part can be installed in any SharePoint 2007 web site. You can configure a URL with query string that will be used by the Link...
• Majordomus pro Windows: A tool for administrators and developers providing controlled startup of needed services, processes and applications in Windows, and shutdown of unneeded ones. [Translated from Czech] ...
• MRDS Samples: The MRDS Samples site hosts a variety of code samples for Microsoft Robotics Developer Studio (RDS).
• Mute4: Mute4 is a simple application that allows you to set a mute/vibration profile and it will switch back to your normal profile automatically after a ...
• Niko Neko Pureya: Niko Neko Pureya is a media player designed for people who watch a series of videos (like anime). It is very simple and easy to use & learn. And ...
• NVPX - VP8 Video Codec for .Net: NVPx allows you to use the now open-source VP8 codec on the .Net platform.
• openrs: openrs is an open-source RuneScape 2 emulator designed to be used with newer engine clients.
• Prism Evaluation: prism evaluation
• Proj4Net: Proj4Net is a C#/.Net library to transform point coordinates from one geographic coordinate system to another, including datum transformation. The ...
• Read it to me!: Read it to me will allow you to load txt and rtf files and then speak them using SAPI 5 voices that are installed on your computer with an option t...
• sGSHOPedit: -
• SilverDice: SilverDice...
• SilverDude Toolkit for Silverlight: SilverDude Toolkit for Silverlight contains a collection of silverlight controls making life easier for developers. You'll no longer have to worry ...
• Silverlight Report: Open-Source Silverlight Reporting Engine. This project allows you to create and print reports using Silverlight 4.
• SimTrain5000: Train simulation project at University College of Northern Denmark.
• Springshield Sample Site for EPiServer CMS: City of Springshield - The accessible sample site for EPiServer CMS 6.
• Teach.Net: Teach.Net is a library/framework that can be used to create applications for testing and learning.
• The Amoeba Project: The Amoeba Project is a platform to be developed to embrace most of the latest Microsoft Technologies. Still in a conceptual stage however, it loo...
• The Fastcopy Helper: The Fastcopy Helper is an auxiliary tool for fastcopy.
• vow: vow
• WCF Client Generator: This code generator avoids the shortcomings of svcutil when generating proxies for services with a large number of methods.
• WebCycle: WebCycle is a screensaver application that cycles through web pages. This was originally created to cycle through Reporting Services reports so th...
• XGate2D - XNA 2D Game Engine: XGate2D is a 2D game engine built using XNA Framework. XGate2D currently has 8 features: input handler, animation, Graphical User Interface (GUI), ...
• XNA Catapult Minigame for XNA 4: XNA 4 implementation of the Catapult Minigame Sample from XNA Creators Club.

New Releases

• ADefHelpDesk: ADefHelpDesk (Standard ASP.NET Version) 01.00.00: ADefHelpDesk, a Help Desk / Ticket Tracker module. * NOTE: This version is NOT a DotNetNuke module - it is a standard ASP.NET application * SQL 2005...
• Bulk Actions for SharePoint: First Release: First Release - includes the following bulk list actions: *Delete *Checkin/Checkout *Publish/Unpublish *Move *Update Metadata
• Check-in Wizard for ArenaChMS: v1.2.1: v1.2.0 updated to work with Arena 2009.2 (see notes below). Added support for "At Kiosk" and "At Location" printing. Added support for print l...
• ConfigTray: 1.5: Version 1.5 will have a new UI for managing ConfigTray config. Instead of manually editing configtray.exe.config to add/delete/edit settings and fi...
• CrmContrib: CrmContribWorkflow 1.0 ALPHA1: This is an initial release of the CrmContribWorkflow 1.0 components. At the moment there are only two activities included in this release. Add Cont...
• DemotToolkit: DemotToolkit-0.1.0.50830: Initial release.
• DemotToolkit: DemotToolkit-0.1.1.51107: Fixed crashing in some circumstances.
• Dot Game: Dot Game Stable Release: This is the latest stable release without network play mode. (Network play mode is under development.)
• Dynamic Survey Forms - SharePoint Web Part: Fix for missing dlls and documentation: Added missing assemblies to setup.zip. Installation instructions.
• EnhSim: V1.9.8.7: Added Sharpened Twilight Scale.
• Event Scavenger: Viewer 3.2.2: Fixed a bug in the viewer where the previous view 'Top x' filter was not restored after the application was reopened.
• F# Project Extender: V0.9.2.0 (VS2008, VS2010): F# project extender for Visual Studio 2008 and Visual Studio 2010. Fixed bugs: - VS2010 crash on MoveUp (MoveDown) of renamed file - Adding files brea...
• FlickrNet API Library: 3.0 Beta 2: The final Beta for the 3.0 release. Fixes a major issue with Photosets.GetList as well as a number of smaller bugs, and adds the new Usage extras ...
• Folder Bookmarks: Folder Bookmarks 1.5.7: The latest version of Folder Bookmarks (1.5.7), with the new Help feature - all the instructions needed to use the software (if you have any sugges...
• Linkbutton Web Part: V1.1: Use WinZip to unzip. See docs folder for installation instructions.
• Live-Exchange Calendar Sync: Live-Exchange Calendar Sync Final: May 14, 2010 release of Live-Exchange Calendar Sync 1.0. (Version 46127) Getting Started: info about installation ...
• MEFedMVVM: MEFedMVVM: This version contains the MEFedMVVM ViewModelLocator and also some basic services such as Mediator and StateManager. You can download the code fr...
• Mentor Text Database: May 2010 Release with instrumentation: This should function the same as the previous version. Some enhancements have been made, and additional instrumentation has been added to help anal...
• Merthin: SSF 2010: Code and documentation presented at the Student Science Fair of the Faculty of Mathematics and Computer Science at the University of Habana. The ma...
• NB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store_02.01.00: NB_Store v2.1.0. THIS IS AN ALPHA RELEASE FOR TESTING ONLY... DO NOT USE IT ON A LIVE SYSTEM.
• NerdDinner.com - Where Geeks Eat: NerdDinner - Four Database Access Samples: Chris Sells worked with Nick Muhonen from Useable Concepts and Nick created four samples exploring how an ASP.NET MVC application can access databa...
• openrs: Devstart: Trunk release, empty project.
• Over Store: OverStore 1.19.0.0: - Version number is increased. - Added methods for specifying custom callback methods to TableMappingRepositoryConfiguration. - Object attaching fu...
• Rnwood.SmtpServer: Rnwood.SmtpServer 2.0: SmtpServer 2.0 is a .NET SMTP server component written in pure c#. It was written to power http://smtp4dev.codeplex.com/ but can easily be used by ...
• Scrum Sprint Monitor: v1.0.0.48524 (.NET 4-TFS 2010): What is new in this release? #6132 - Bug with open work hours; Added untested support for MSF for Agile process template; Improved data reporti...
• SharePoint Rsync List: 1.0.0.0: This initial 1.0 release includes a new feature which manages timer jobs on your sync list.
• Should: Beta 1.1: Updated the namespaces. The extension methods are now in the root Should namespace. The other classes are not in child namespaces.
• SilverDude Toolkit for Silverlight: SilverDude Toolkit for Silverlight: Kindly give your comments about this project and tell how you feel about it. I'm still new in creating controls, hopefully you guys can support me....
• Silverlight Report: SilverlightReport_v0.1_alpha_bin: SilverlightReport v0.1 alpha.
• SLARToolkit - Silverlight Augmented Reality Toolkit: SLARToolkit 1.0.2.0: Fixed a problem with long referenced DetectionResults that might have caused an IndexOutOfRangeException. Added Marker.LoadFromResource to get rid...
• The Fastcopy Helper: My Fastcopy Helper 1.0: This source code uses a method of my own devising to run it, so the performance may be lower.
• Thinktecture.DataObjectModel: Thinktecture.DataObjectModel v0.12: Some bugs fixed. See ChangeLog.txt for more info.
• Umbraco CMS: Umbraco 4.0.4.1: A stability release fixing 13 issues based on feedback from 4.0.3 users. Most important is a fix to a serious date bug where day and month could ...
• Usa*Usa Libraly: Smart.Web.Mobile ver 0.2: Smart.Web.Mobile pictogram conversion library for Japanese galapagos k-tai ( ゚д゚ ) ver 0.2. - Custom encoding for HttpRequest.ContentEncoding / HttpResp...
• VCC: Latest build, v2.1.30520.0: Automatic drop of latest build.
• vow: dream: I have a dream
• vow: test: test
• WCF Client Generator: Version 0.9.1.42927: Initial feature set complete. Detailed UI pending.
• WebCycle: WebCycle 1.0.20: Initial CodePlex release.
• WebCycle: WebCycle 1.0.21: Added Uri validation before saving settings.
• Whois Application: 1.5 release: - uses whois.iana.org to dynamically look up the whois server for each top level domain - enables enter key press for search
• Wing Beats: Wing Beats 0.9: This first release is focused on the core functionality and XHTML 1.0 strict generation in Asp.NET MVC.

Most Popular Projects

• Web Service Software Factory
• Plasma
• Aquisição de Sinais Vitais em Tempo Real (Vital signs realtime data acquisition)
• Octtree XNA-GS DrawableGameComponent
• Rawr
• WBFS Manager
• AJAX Control Toolkit
• Microsoft SQL Server Product Samples: Database
• Silverlight Toolkit
• Windows Presentation Foundation (WPF)

Most Active Projects

• Rawr
• patterns & practices – Enterprise Library
• GMap.NET - Great Maps for Windows Forms & Presentation
• PHPExcel
• BlogEngine.NET
• SQL Server PowerShell Extensions
• Caliburn: An Application Framework for WPF and Silverlight
• NB_Store - Free DotNetNuke Ecommerce Catalog Module
• patterns & practices: Windows Azure Security Guidance
• Fluent Ribbon Control Suite

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
Overview

·        With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.

·        The subscription for Windows Azure is charged like any other Azure service - for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.

·        Win-Win? - Azure charges the compute hour cost (according to VM size) amortized over a month - so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.

·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.

·        This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

Process

·        Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes.

·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.

·        A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).

·        Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

Requirements

·        Assumes you have an Azure subscription account and the HPC head node installed and configured.

·        Install HPC Pack 2008 R2 SP1 - see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).

·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.

·        Configure the firewall - allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999.

·        Note: HPC Server uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.

·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).

·        Import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account.

Obtain Windows Azure Connection Information for HPC Server

·        Required for each worker node template.

·        Copy from the Azure portal - get it from: navigation pane > Hosted Services > Storage Accounts & CDN.

·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.

·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane.

·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.

·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

Import the Azure Subscription Certificate on the HPC Head Node

·        This enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.

·        Use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).

·        To open the Certificates snap-in: Run > mmc. File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.

·        To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.

·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status.

Configure a Proxy Client on the HPC Head Node

·        The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

Create a Windows Azure Worker Node Template

·        Edit HPC node templates in HPC Node Template Editor.

·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2) worker node availability policy - rules for deploying / removing worker role instances in Windows Azure.

o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New -> Create Node Template Wizard, or Edit -> Node Template Editor

o   Choose Node Template Type page - Windows Azure worker node template

o   Specify Template Name page - template name & description

o   Provide Connection Information page - Azure Subscription ID (text) & Subscription certificate (browse)

o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get the list of available ones from Azure - possible LRT)

o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance - add / remove) - manual / automatic

o   For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select days and hours for worker nodes to start / stop.

·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information.

·        You can upload a file package to the storage account that is specified in the template - e.g. upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

Add Azure Worker Nodes to the HPC Cluster

·        Use the Add Node Wizard - specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.

·        To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.

·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.

·        To add Windows Azure worker nodes:

o   In HPC Cluster Manager: Node Management > Actions pane > Add Node -> Add Node Wizard

o   Select Deployment Method page - Add Azure Worker nodes

o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes

·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.

·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN - this is non-configurable.

Deploying Windows Azure Worker Nodes

·        To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template - Azure Availability Policy - to be automatic or manual.

·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).

·        Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.

·        To start worker nodes manually and bring them online:

o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes.

o   Actions pane > Start - in the Start Azure Worker Nodes dialog, select a node template.

o   The state of the worker nodes changes from Not Deployed; to track the provisioning progress, see worker node Details Pane > Provisioning Log tab.

o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.

o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.

·        Troubleshooting:

o   Check the node template.

o   Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999

o   Check node status - deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see node status information for any failed nodes in HPC Node Management.

·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

·        To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.

·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
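One sanity check worth scripting after the certificate import step above: a small C# sketch of my own (not from the original post) that uses the .NET X509Store API to confirm the subscription certificate landed in the local computer's Trusted Root store. The thumbprint value is a placeholder:

using System;
using System.Security.Cryptography.X509Certificates;

class CertCheck
{
    static void Main()
    {
        // Placeholder: paste your subscription certificate thumbprint here,
        // with spaces removed, exactly as shown in the Windows Azure Portal.
        const string thumbprint = "0123456789ABCDEF0123456789ABCDEF01234567";

        // Open the Trusted Root Certification Authorities store of the
        // local computer account - where HPC expects the certificate.
        var store = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);
        try
        {
            var matches = store.Certificates.Find(
                X509FindType.FindByThumbprint, thumbprint, validOnly: false);
            Console.WriteLine(matches.Count > 0
                ? "Certificate found: " + matches[0].Subject
                : "Certificate NOT found - re-run the import.");
        }
        finally
        {
            store.Close();
        }
    }
}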

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
Jonathan Kehayias (Blog | Twitter) is a MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting.

On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. Through all of this, he has not lost his focus on helping the larger community. I am honored that he has accepted to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan:

SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server.” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further.

The SQL Server is slow, and we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters shows that Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Pages counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs shows it is a dual-socket, dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max Server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system.
If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check: the plans are pretty big, and I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly!

If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it; you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2.

In this scenario the application uses an entirely adhoc workload against SQL Server and this leads to plan cache bloat, and up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the query’s execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well.

The real clue here is the Memory counters for the instance: Page Life Expectancy, Free Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter the consistent bottoming out of Free Pages, along with Free List Stalls/sec consistently spiking over 10, you have the perfect memory pressure scenario. All of a sudden it may not be that our disk subsystem is the problem, but that it is instead an innocent bystander and victim.

Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64 bit computing and easy availability of larger amounts of memory in SQL Servers.
As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade.

Back to our problem and its investigation: there are two things really wrong with this server. First, the plan cache is excessively consuming memory and is bloated in size, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed.

In the case of the server I was working on, there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency. SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database.

You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario.

The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post.

In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause. I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen.
As the DBA, you have to sit back and listen to all that it’s telling you, and then evaluate the big picture and how all the data you can gather from SQL about performance relates together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn’s scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you.

When I read Pinal’s blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I’d chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog about this.

I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what’s really the problem.

This blog post is written by Jonathan Kehayias (Blog | Twitter).

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
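For those who want to watch the memory counters named above from code, here is a small C# sketch of my own (not from Jonathan's post). It assumes the default SQL Server instance - named instances expose these under a "MSSQL$<InstanceName>:Buffer Manager" category - and requires permission to read performance counters:

using System;
using System.Diagnostics;
using System.Threading;

class MemoryCounterWatch
{
    static void Main()
    {
        // Assumption: default instance; adjust the category name for
        // named instances of SQL Server.
        var ple = new PerformanceCounter(
            "SQLServer:Buffer Manager", "Page life expectancy", readOnly: true);
        var freeListStalls = new PerformanceCounter(
            "SQLServer:Buffer Manager", "Free list stalls/sec", readOnly: true);

        // Sample once per second; sustained low PLE combined with frequent
        // free list stalls is the memory pressure pattern described above.
        while (true)
        {
            Console.WriteLine("PLE: {0,6}  Free list stalls/sec: {1,6:F1}",
                ple.NextValue(), freeListStalls.NextValue());
            Thread.Sleep(1000);
        }
    }
}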

    Read the article
