Search Results

Search found 12044 results on 482 pages for 'foreach loop'.


  • How do you coordinate with co-workers to give a balanced interview?

    - by goldierox
    My company has been conducting a lot of interviews lately for candidates with various experience levels, ranging from interns to senior candidates. We put our candidates through five 45-minute interview sessions where we try to ask a range of questions. One person always asks the same questions that test logic and communication. The rest typically split time between a whiteboard coding question and a discussion of previous projects, technologies the interviewee has worked with, and what he/she is looking for in a job. Generally, we know the range of questions that other people on the loop will ask. Sometimes we switch things up and end up with redundancies. Today, three interviewers asked tree-related questions. Other times, we've all homed in on the same project on a resume and have had the interviewee talk about it with everyone. I think a smooth interview process would help us learn more about the candidate while giving the candidate the impression that we have our act together as a team. How do you coordinate with others in the interview loop to give a balanced interview?

    Read the article

  • Efficiency of iterators and alternatives? [migrated]

    - by user48037
    I have the following code for my game tiles:

        std::vector<GameObject_Tile*>::iterator it;
        for (int y = 0; y < GAME_TILES_Y; y++) {
            for (int x = 0; x < GAME_TILES_X; x++) {
                for (it = gameTiles[x][y].tiles.begin(); it != gameTiles[x][y].tiles.end(); ++it) {
                }
            }
        }

    tiles is:

        struct Game_Tile {
            // More specific object types will be added here eventually
            vector<GameObject_Tile*> tiles;
        };

    My problem is that if I change the vector to just be a single GameObject_Tile* instead and remove the iterator line in the loop, I go from about 200fps to 450fps. Some context: the vector/pointer only contains one object in both scenarios. I will eventually need to store multiple, but for testing I just set it to a single pointer. The loop goes through 2,300 objects each frame and draws them. I would like to point out that if I remove the Draw method (not seen in the example), I gain about 30 frames in both scenarios; the issue is the iteration. So I am wondering why having this as a vector looped through by an iterator (to get at a single object) is costing me over 200 frames compared to it being a single pointer. The 200+ frames faster code is:

        std::vector<GameObject_Tile*>::iterator it;
        for (int y = 0; y < GAME_TILES_Y; y++) {
            for (int x = 0; x < GAME_TILES_X; x++) {
                // gameTiles[x][y].tiles is used as a pointer here instead of using *it
            }
        }

    tiles is:

        struct Game_Tile {
            // More specific object types will be added here eventually
            GameObject_Tile* tiles;
        };
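    A gap this large usually points at an unoptimized (debug) build, where the iterator's operator++/operator*/operator!= calls and any checked-iterator machinery are real function calls; in an optimized build both loops typically compile to the same code. The sketch below is a minimal, self-contained way to test that hypothesis (my own stand-in types, not the asker's project); compare its timings with and without optimization flags:

        #include <chrono>
        #include <iostream>
        #include <vector>

        struct Tile { int value = 1; };

        int main() {
            // 2,300 cells, each holding a single Tile*, mirroring the setup described.
            std::vector<std::vector<Tile*>> cells(2300);
            for (auto& c : cells) c.push_back(new Tile);

            long long sum = 0;

            // Iterator loop, as in the slow version.
            auto t0 = std::chrono::steady_clock::now();
            for (int frame = 0; frame < 10000; ++frame)
                for (auto& c : cells)
                    for (std::vector<Tile*>::iterator it = c.begin(); it != c.end(); ++it)
                        sum += (*it)->value;
            auto t1 = std::chrono::steady_clock::now();

            // Direct access, as in the fast version.
            for (int frame = 0; frame < 10000; ++frame)
                for (auto& c : cells)
                    sum += c[0]->value;
            auto t2 = std::chrono::steady_clock::now();

            std::cout << "iterator: " << std::chrono::duration<double>(t1 - t0).count() << "s\n"
                      << "direct:   " << std::chrono::duration<double>(t2 - t1).count() << "s\n"
                      << "(sum " << sum << ")\n";
        }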

    Read the article

  • Maze not being random.

    - by Matt Habel
    Hey there, I am building a program that generates a maze so I can later translate the path to my graphical part. I have most of it working; however, you can always just take the east and south routes and get to the end. Even if I set the width as high as 64, so the maze is 64*64, I'm able to choose those two options and get to the end every time. I really don't understand why it is doing that. The code is below; it's fairly easy to understand.

        import random

        width = 8

        def check(x, y):
            """Figures out the directions that Gen can move"""
            if x - 1 == -1:
                maze[x][y][3] = 0
            if x + 1 == width + 1:
                maze[x][y][1] = 0
            if y + 1 == width + 1:
                maze[x][y][2] = 0
            if y - 1 == -1:
                maze[x][y][0] = 0
            if x + 1 in range(0, width) and visited[x+1][y] == False:
                maze[x][y][1] = 2
            if x - 1 in range(0, width) and visited[x-1][y] == False:
                maze[x][y][3] = 2
            if y + 1 in range(0, width) and visited[x][y+1] == False:
                maze[x][y][2] = 2
            if y - 1 in range(0, width) and visited[x][y-1] == False:
                maze[x][y][0] = 2

        def possibleDirs(x, y):
            """Figures out the ways that the person can move in each square"""
            dirs = []
            walls = maze[x][y]
            if walls[0] == 1: dirs.append('n')
            if walls[1] == 1: dirs.append('e')
            if walls[2] == 1: dirs.append('s')
            if walls[3] == 1: dirs.append('w')
            return dirs

        def Gen(x, y):
            """Generates the maze using a depth-first search and recursive backtracking."""
            visited[x][y] = True
            dirs = []
            check(x, y)
            if maze[x][y][0] == 2: dirs.append(0)
            if maze[x][y][1] == 2: dirs.append(1)
            if maze[x][y][2] == 2: dirs.append(2)
            if maze[x][y][3] == 2: dirs.append(3)
            print dirs
            if len(dirs):
                # Randomly selects a direction for the current square to move
                past.append(current[:])
                pos = random.choice(dirs)
                maze[x][y][pos] = 1
                if pos == 0:
                    current[1] -= 1
                    maze[x][y-1][2] = 1
                if pos == 1:
                    current[0] += 1
                    maze[x+1][y][3] = 1
                if pos == 2:
                    current[1] += 1
                    maze[x][y+1][0] = 1
                if pos == 3:
                    current[0] -= 1
                    maze[x-1][y][1] = 1
            else:
                # If there's nowhere to go, go back one square
                lastPlace = past.pop()
                current[0] = lastPlace[0]
                current[1] = lastPlace[1]

        # Build the initial values for the maze to be replaced later
        maze = []
        visited = []
        past = []

        # Generate an empty 2d list with a value for each of the xy coordinates
        for i in range(0, width):
            maze.append([])
            for q in range(0, width):
                maze[i].append([])
                for n in range(0, 4):
                    maze[i][q].append(4)

        # Makes a list of Falses for all the non-visited places
        for x in range(0, width):
            visited.append([])
            for y in range(0, width):
                visited[x].append(False)

        dirs = []
        print dirs
        current = [0, 0]

        # Generates the maze so every single square has been filled. I'm not sure how
        # this works, as it is possible to only go south and east to get to the final position.
        while current != [width-1, width-1]:
            Gen(current[0], current[1])

        # Getting the ways the person can move in each square
        for i in range(0, width):
            dirs.append([])
            for n in range(0, width):
                dirs[i].append([])
                dirs[i][n] = possibleDirs(i, n)
        print dirs
        print visited

        pos = [0, 0]
        # The user input part of the maze
        while pos != [width - 1, width - 1]:
            dirs = []
            print pos
            if maze[pos[0]][pos[1]][0] == 1: dirs.append('n')
            if maze[pos[0]][pos[1]][1] == 1: dirs.append('e')
            if maze[pos[0]][pos[1]][2] == 1: dirs.append('s')
            if maze[pos[0]][pos[1]][3] == 1: dirs.append('w')
            print dirs
            path = raw_input("What direction do you want to go: ")
            if path not in dirs:
                print "You can't go that way!"
                continue
            elif path.lower() == 'n':
                pos[1] -= 1
            elif path.lower() == 'e':
                pos[0] += 1
            elif path.lower() == 's':
                pos[1] += 1
            elif path.lower() == 'w':
                pos[0] -= 1
        print "Good job!"
    As you can see, I think the problem is at the point where I generate the maze; when I just have it run until the current point is at the end, it doesn't fill the whole maze and is usually just one straight path. Thanks for helping. Update: I have changed the for loop that generates the maze to a simple while loop and it seems to work much better. It seems that when the for loop ran, it didn't recurse properly; with the while loop it's fine. However, now not all of the squares get filled out.
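    A hedged sketch of one likely fix (an assumption based on the code above, not tested against the full program): the driver loop stops the first time `current` reaches the goal corner, so generation halts before the backtracker can unwind and carve the side passages. Driving Gen() until every cell has been visited lets the backtracking branch finish the job:

        # Replaces `while current != [width-1, width-1]:` in the listing above.
        # Runs until every cell is marked visited; dead ends are handled by
        # Gen()'s backtracking branch, so the loop always terminates.
        while not all(all(row) for row in visited):
            Gen(current[0], current[1])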

    Read the article

  • Any tool to make git build every commit to a branch in a separate repository?

    - by Wayne
    A git tool that meets the specs below is needed. Does one already exist? If not, I will create a script and make it available on GitHub for others to use or contribute to. Or is there a completely different and better way to solve the need to build/test every commit to a branch in a git repository, not just the latest but each one back to a certain starting point?

    Background: Our development environment uses a separate continuous integration server, which is wonderful. However, it is still necessary to do full builds locally on each developer's PC to make sure a commit won't "break the build" when pushed to the CI server. Unfortunately, with the automated unit tests, those builds force the developer to wait 10 or 15 minutes every time. To solve this we have set up a "mirror" git repository on each developer PC. We develop in the main repository, and any time a local full build is needed we run a couple of commands in the mirror repository to fetch, check out the commit we want to build, and build. It works extremely well, since we can continue working in the main repository with the build going in parallel. There's only one main concern now: we want to make sure every single commit builds and tests fine. But we often get busy and neglect to build several fresh commits. Then, if the build fails, you have to bisect or manually build each interim commit to figure out which one broke. Requirements for this tool (a sketch of the core loop follows this list):

    • The tool will look at another repo (origin by default), fetch, and compare all commits that are in branches against two lists of commits: one list holding successfully built commits and the other holding commits that failed. It identifies any commits not yet in either list and builds them in a loop, in the order they were committed. It stops on the first one that fails. The tool adds each commit to either the success or the failure list after it has attempted to build it.
    • The tool will ignore any "legacy" commits which are prior to the oldest commit in the success list. This logic makes the starting point in the next point possible.
    • Starting point: the tool builds a specific commit so that, if successful, it gets added to the success list. If it is the earliest commit in the success list, it becomes the "starting point", so that none of the commits prior to it are examined for builds.
    • Only linear tree support? Much like bisect, this tool works best on a commit tree which is, at least from its starting point, linear without any merges; that is, a tree built and updated entirely via rebase and fast-forward commits. If it fails on one commit in a branch, it will stop without building the rest that follow. Instead it will just move on to another branch, if any.
    • The tool must do these steps once by default but allow a parameter to loop, with an option to set how many seconds between iterations. Other tools like Hudson or CruiseControl could provide fancier scheduling options.
    • The tool must have good defaults but allow optional control. Which repo? origin by default. Which branches? All of them by default. What build tool? By default an executable file provided by the user named "buildtest", "buildtest.sh", "buildtest.cmd", or "buildtest.exe" in the root folder of the repository. Loop delay? Run once by default, with an option to loop after a given number of seconds.
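    I'm not aware of an off-the-shelf tool with exactly this behaviour. Below is a hedged sketch of the core loop as a Python script (a hypothetical implementation, not an existing project): the `.built-ok`/`.built-bad` list files and the single `origin/master` branch are my assumptions, it assumes a linear history, and it omits the "legacy commits" pruning for brevity.

        #!/usr/bin/env python3
        """Sketch: build every unbuilt commit, oldest first, stopping on failure."""
        import subprocess

        def git(*args):
            return subprocess.check_output(["git"] + list(args), text=True).strip()

        def load(path):
            try:
                with open(path) as f:
                    return set(f.read().split())
            except FileNotFoundError:
                return set()

        def save(path, shas):
            with open(path, "w") as f:
                f.write("\n".join(sorted(shas)))

        ok, bad = load(".built-ok"), load(".built-bad")   # hypothetical state files
        git("fetch", "origin")
        # Oldest-first list of commits on origin/master (linear history assumed).
        for sha in git("rev-list", "--reverse", "origin/master").splitlines():
            if sha in ok or sha in bad:
                continue
            git("checkout", "--force", sha)
            result = subprocess.call(["./buildtest"])     # user-supplied build script
            (ok if result == 0 else bad).add(sha)
            if result != 0:
                break                                     # stop on first failure
        save(".built-ok", ok)
        save(".built-bad", bad)

    Wrapping the whole thing in a `while True: ...; time.sleep(delay)` loop would cover the optional polling mode.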

    Read the article

  • How do I make these fields autopopulate from the database?

    - by dmanexe
    I have an array which holds values like this:

        $items_pool = Array (
            [0]  => Array ( [id] => 1  [quantity] => 1 )
            [1]  => Array ( [id] => 2  [quantity] => 1 )
            [2]  => Array ( [id] => 72 [quantity] => 6 )
            [3]  => Array ( [id] => 4  [quantity] => 1 )
            [4]  => Array ( [id] => 5  [quantity] => 1 )
            [5]  => Array ( [id] => 7  [quantity] => 1 )
            [6]  => Array ( [id] => 8  [quantity] => 1 )
            [7]  => Array ( [id] => 9  [quantity] => 1 )
            [8]  => Array ( [id] => 19 [quantity] => 1 )
            [9]  => Array ( [id] => 20 [quantity] => 1 )
            [10] => Array ( [id] => 22 [quantity] => 1 )
            [11] => Array ( [id] => 29 [quantity] => 0 )
        )

    Next, I have a form that I am trying to populate. It loops through the item database, prints out all the possible items, and checks the ones that are already present in $items_pool.

        <?php foreach ($items['items_poolpackage']->result() as $item): ?>
            <input type="checkbox" name="measure[<?=$item->id?>][checkmark]" value="<?=$item->id?>">
        <?php endforeach; ?>

    I know what I'm trying to accomplish logically, but I can't figure out the programming. What I'm looking for, written loosely, is something like this (not real code):

        <input type="checkbox" name="measure[<?=$item->id?>][checkmark]" value="<?=$item->id?>"
            <?php if ($items_pool['$item->id']) { echo "SELECTED"; } ?>>

    Specifically, this conditional should loop through the array's key values (the IDs), and if there's a match, the checkbox is selected.

        <?php if ($items_pool['$item->id']) { echo "SELECTED"; } ?>

    I understand that a loop structured like this may mean a lot of 'extra' processing.

    TL;DR: I need to echo within a loop if the item going through the loop exists within another array.
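    A hedged sketch of one way to do this (assuming the $items_pool structure shown above): flatten the pool into an id-keyed map once before the loop, then test membership with isset() per item. Note that for checkboxes the HTML attribute is `checked`, not `SELECTED`, and that '$item->id' in single quotes is the literal string, not the id value.

        <?php
        // Build an id => quantity lookup once, outside the loop.
        $pool_ids = array();
        foreach ($items_pool as $entry) {
            $pool_ids[$entry['id']] = $entry['quantity'];
        }
        ?>
        <?php foreach ($items['items_poolpackage']->result() as $item): ?>
            <input type="checkbox"
                   name="measure[<?=$item->id?>][checkmark]"
                   value="<?=$item->id?>"
                   <?= isset($pool_ids[$item->id]) ? 'checked="checked"' : '' ?>>
        <?php endforeach; ?>

    The lookup turns the inner membership test into O(1), so the 'extra' processing concern goes away.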

    Read the article

  • Dispatcher Timer Problem

    - by will
    I am trying to make a game in Silverlight that also has widgets in it. To do this I am using a DispatcherTimer running a game loop that updates graphics etc. In this I have a variable that has to be accessed both by the constantly running game loop and by UI event code. At first look it seemed that the game loop had its own local copy of currentUnit (the variable), despite the variable being declared globally. I am trying to update currentUnit with an event from the widget part of the app, but the timer's version of the variable is not being updated. What can I do to get the currentUnit in the game loop to be updated whenever I update currentUnit via a click event? Here is the code for setting currentUnit as part of a click event:

        DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(Unit));
        currentUnit = serializer.ReadObject(e.Result) as Unit;
        txtName.Text = currentUnit.name;
        Canvas.SetLeft(txtName, 100 - (int)Math.Ceiling(txtName.ActualWidth) / 2);
        txtX.Text = "" + currentUnit.x;
        txtY.Text = "" + currentUnit.y;
        txtX.Text = "" + currentUnit.owner;
        txtY.Text = "" + currentUnit.moved;
        txtName.Text = "" + currentUnit.GetHashCode();

    And here is a snippet from the gameLoop:

        // deal with phase changes and showing stuff
        if (txtPhase.Text == "Move" && movementPanel.Visibility == Visibility.Collapsed)
        {
            if (currentUnit != null)
            {
                if (currentUnit.owner)
                {
                    if (currentUnit.moved)
                    {
                        txtMoved.Text = "This Unit has Already Moved!";
                        movementPanel.Visibility = Visibility.Collapsed;
                    }
                    else
                    {
                        txtMoved.Text = "" + currentUnit.GetHashCode();
                        movementPanel.Visibility = Visibility.Visible;
                    }
                }
                else
                {
                    txtMoved.Text = "bam";
                    movementPanel.Visibility = Visibility.Collapsed;
                }
            }
            else
            {
                txtMoved.Text = "slam";
                movementPanel.Visibility = Visibility.Collapsed;
            }
            //loadUnitList();
        }

    Here is the code for my Unit class:

        using System;

        public class Unit
        {
            public int id { get; set; }
            public string name { get; set; }
            public string image { get; set; }
            public int x { get; set; }
            public int y { get; set; }
            public bool owner { get; set; }
            public int rotation { get; set; }
            public double movement { get; set; }
            public string type { get; set; }
            public bool moved { get; set; }
            public bool fired { get; set; }
        }

    Overall, any simple type, like a double, is being 'updated' correctly, yet a complex type of my own (Unit) seems to be holding a local copy. Please help; I've asked other places and no one has had an answer for me!
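    One pattern that matches these symptoms (a guess, not a confirmed diagnosis of this code) is that the click handler and the timer loop live on two different instances of the page class, so each writes its own currentUnit field. The minimal console sketch below (plain .NET rather than Silverlight; Game, OnUnitLoaded, and Tick are illustrative names) shows that a single shared field updated by an "event" is immediately visible to a timer-style loop when both callbacks reference the same instance:

        using System;
        using System.Threading;

        class Unit { public string name; }

        class Game
        {
            Unit currentUnit;                       // the single shared field

            public void OnUnitLoaded(string name)   // stands in for the click event
            {
                currentUnit = new Unit { name = name };
            }

            public void Tick()                      // stands in for the game loop
            {
                Console.WriteLine(currentUnit == null ? "no unit" : currentUnit.name);
            }

            static void Main()
            {
                var game = new Game();
                using (var timer = new Timer(_ => game.Tick(), null, 0, 100))
                {
                    Thread.Sleep(250);
                    game.OnUnitLoaded("infantry");  // update arrives mid-loop
                    Thread.Sleep(250);              // ticks now print "infantry"
                }
            }
        }

    If the real app prints different GetHashCode() values from the event code and the loop, that would confirm two Unit objects (or two page instances) are involved.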

    Read the article

  • Looping through siblings of a specific row/setting up function

    - by Matt
    Trying to work through the javascript book I am referencing to learn the language, I got stuck on looping through siblings of a specific row. W3Schools and W3 didn't have what I was looking for. Below is the function walk-through. It reads:

    Create the countRecords() function. The purpose of this function is to count the number of visible rows in the data table after the table headings. The total is then displayed in the table cell with the id "records". Add the following commands to the function:

    a. Create an object named headRow that points to the table row with the id "titleRow". Create a variable named rowCount, setting its initial value to 0.

    b. Create a for loop that uses familial references, starting with the first sibling of headRow and moving to the next sibling until there are no siblings left. Within the for loop, test whether the node name of the current node is equal to "TR". If it is, test whether the value of its display style is equal to an empty text string. If it is (indicating that it is visible in the document), increase the value of the rowCount variable by 1.

    c. Change the text of the "records" table cell to the value of the rowCount variable. Don't use innerHTML. Create a text node that contains the value of the rowCount variable and assign it to a variable called txt. Create a variable called record to store the reference to the "records" table cell.

    d. Insert an if condition that tests whether the "records" cell has any child nodes. If it does, replace the text node of the "records" table cell with the created text node (txt). If it doesn't, append the text node to the cell.

        var headRow;      // part a
        var rowCount = 0;

        // part b: this is where I get lost. I know I need to access the id titleRow
        // but am unsure how to set my loop up specifically for this
        headRow = document.getElementById("titleRow");
        for (var i = 0; i < headrow.length; i++) {
            if (something is not equal == "TH") {
                // make code happen here
            }
            if (is "TR" == "") {
                rowCount = +1;
            }
        }

        // part c
        var txt = document.createTextNode(rowCount);
        var record = document.getElementsById("records");

        // part d: holding off on this part until I get a, b, c figured out.

    The HTML supporting snippet:

        <table id="filters">
            <tr><th colspan="2">Filter Product List</th></tr>
            <tr>
                <td>Records: </td>
                <td id="records"></td>
            </tr>
        <table id="prodTable">
            <tr><th colspan="8">Digital Cameras</th></tr>
            <tr id="titleRow">
                <th>Model</th>
                <th>Manufacturer</th>
                <th>Resolution</th>
                <th>Zoom</th>
                <th>Media</th>
                <th>Video</th>
                <th>Microphone</th>
            </tr>

    Thanks for the help!
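    A hedged sketch of the function the walk-through describes (one possible reading of parts a-d, not the book's official solution). The key is that the loop walks nextSibling references rather than an index, and the nodeName check also filters out the whitespace text nodes that sit between table rows; note it's getElementById, singular, not getElementsById:

        function countRecords() {
            var headRow = document.getElementById("titleRow");  // part a
            var rowCount = 0;

            // part b: walk the siblings that follow the heading row
            for (var node = headRow.nextSibling; node != null; node = node.nextSibling) {
                if (node.nodeName == "TR" && node.style.display == "") {
                    rowCount++;  // a visible table row
                }
            }

            // part c
            var txt = document.createTextNode(rowCount);
            var record = document.getElementById("records");

            // part d
            if (record.hasChildNodes()) {
                record.replaceChild(txt, record.firstChild);
            } else {
                record.appendChild(txt);
            }
        }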

    Read the article

  • Why is processing a sorted array faster than an unsorted array?

    - by GManNickG
    Here is a piece of code that shows some very peculiar performance. For some strange reason, sorting the data miraculously speeds up the code by almost 6x:

        #include <algorithm>
        #include <ctime>
        #include <iostream>

        int main()
        {
            // generate data
            const unsigned arraySize = 32768;
            int data[arraySize];

            for (unsigned c = 0; c < arraySize; ++c)
                data[c] = std::rand() % 256;

            // !!! with this, the next loop runs faster
            std::sort(data, data + arraySize);

            // test
            clock_t start = clock();
            long long sum = 0;

            for (unsigned i = 0; i < 100000; ++i)
            {
                // primary loop
                for (unsigned c = 0; c < arraySize; ++c)
                {
                    if (data[c] >= 128)
                        sum += data[c];
                }
            }

            double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

            std::cout << elapsedTime << std::endl;
            std::cout << "sum = " << sum << std::endl;
        }

    Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds. With the sorted data, the code runs in 1.93 seconds. Initially I thought this might be just a language or compiler anomaly, so I tried it in Java:

        import java.util.Arrays;
        import java.util.Random;

        public class Main
        {
            public static void main(String[] args)
            {
                // generate data
                int arraySize = 32768;
                int data[] = new int[arraySize];

                Random rnd = new Random(0);
                for (int c = 0; c < arraySize; ++c)
                    data[c] = rnd.nextInt() % 256;

                // !!! with this, the next loop runs faster
                Arrays.sort(data);

                // test
                long start = System.nanoTime();
                long sum = 0;

                for (int i = 0; i < 100000; ++i)
                {
                    // primary loop
                    for (int c = 0; c < arraySize; ++c)
                    {
                        if (data[c] >= 128)
                            sum += data[c];
                    }
                }

                System.out.println((System.nanoTime() - start) / 1000000000.0);
                System.out.println("sum = " + sum);
            }
        }

    with a similar but less extreme result. My first thought was that sorting brings the data into cache, but my next thought was how silly that is, because the array was just generated. What is going on? Why is a sorted array faster than an unsorted array? The code is summing up some independent terms, so the order should not matter.
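    The widely accepted explanation for this effect is CPU branch prediction: the `if (data[c] >= 128)` branch is essentially random on unsorted data, so the predictor mispredicts about half the time, while on sorted data it is almost perfectly predictable. One hedged way to check this yourself (an experiment, not part of the original question) is to replace the branch with arithmetic; on typical two's-complement targets the rewrite below makes sorted and unsorted inputs run at the same speed:

        // Drop-in replacement for the primary loop body in the C++ version above.
        for (unsigned c = 0; c < arraySize; ++c)
        {
            int t = (data[c] - 128) >> 31; // all ones when data[c] < 128, else zero
            sum += ~t & data[c];           // adds data[c] only when data[c] >= 128
        }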

    Read the article

  • Identifier is undefined

    - by hawk
    I wrote the following code in C++ using VS2012 Express.

        void ac_search(
            uint num_patterns, uint pattern_length, const char *patterns,
            uint num_records, uint record_length, const char *records,
            int *matches, Node* trie)
        {
            // Irrelevant code omitted.
        }

        vector<int> ac_benchmark_search(
            uint num_patterns, uint pattern_length, const char *patterns,
            uint num_records, uint record_length, const char *records,
            double &time)
        {
            // Prepare the container for the results
            vector<int> matches(num_records * num_patterns);

            Trie T;
            Node* trie = T.addWord(records, num_records, record_length);

            // error line
            ac_search(num_patterns, pattern_length, patterns, num_records,
                      record_length, records, matches.data(), trie);

            // Irrelevant code omitted.
            return matches;
        }

    I get the error identifier "ac_search" is undefined at the function invoking line. I am a bit confused here, because the function ac_search is declared as a global (not inside any container). Why can't I call it at this place? Am I missing something?

    Update: I tried to ignore the irrelevant code, then included it gradually, and found that everything is fine until I include the outer loop of ac_search; then I get the aforementioned error. Here is the updated code of the function:

        void ac_cpu_string_search(uint num_patterns, uint pattern_length, const char *patterns,
                                  uint num_records, uint record_length, const char *records,
                                  int *matches, Node* trie)
        {
            // Loop over all records
            //for (uint record_number = 0; record_number < num_records; ++record_number)
            //{
                // Loop over all patterns
                for (uint pattern_number = 0; pattern_number < num_patterns; ++pattern_number)
                {
                    // Execute string search
                    const char *ptr_record = &records[record_number * record_length];
                    const char *ptr_match = std::strstr(ptr_record, &patterns[pattern_number * pattern_length]);

                    // If pattern was found, then calculate offset, otherwise result is -1
                    if (ptr_match)
                    {
                        matches[record_number * num_patterns + pattern_number] =
                            static_cast<int>(std::distance(ptr_record, ptr_match));
                    }
                    else
                    {
                        matches[record_number * num_patterns + pattern_number] = -1;
                    }
                }
            //}
        }

    Update 2: I think the error has something to do with the function addWord, which belongs to the class Trie. When I commented this function out, I did not get the error anymore.

        Node* Trie::addWord(const char *records, uint num_records, uint record_length)
        {
            // Loop over all records
            for (uint record_number = 0; record_number < num_records; ++record_number)
            {
                const char *ptr_record = &records[record_number * record_length];
                string s = ptr_record;

                Node* current = root;

                if (s.length() == 0)
                {
                    current->setWordMarker(); // an empty word
                    return;
                }

                for (int i = 0; i < s.length(); i++)
                {
                    Node* child = current->findChild(s[i]);
                    if (child != NULL)
                    {
                        current = child;
                    }
                    else
                    {
                        Node* tmp = new Node();
                        tmp->setContent(s[i]);
                        current->appendChild(tmp);
                        current = tmp;
                    }
                    if (i == s.length() - 1)
                        current->setWordMarker();
                }
                return current;
            }

        void ac_search(
            uint num_patterns, uint pattern_length, const char *patterns,
            uint num_records, uint record_length, const char *records,
            int *matches, Node* trie)
        {
            // Irrelevant code omitted.
        }

        vector<int> ac_benchmark_search(
            uint num_patterns, uint pattern_length, const char *patterns,
            uint num_records, uint record_length, const char *records,
            double &time)
        {
            // Prepare the container for the results
            vector<int> matches(num_records * num_patterns);

            Trie T;
            Node* trie = T.addWord(records, num_records, record_length);

            // error line
            ac_search(num_patterns, pattern_length, patterns, num_records,
                      record_length, records, matches.data(), trie);

            // Irrelevant code omitted.
            return matches;
        }
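    A hedged reading of Update 2, from brace-counting the listing above rather than a confirmed diagnosis: addWord() opens a function body and a for loop but only one closing brace follows the `return current;`, so the function's own closing brace is missing. Everything after it, including ac_search(), is then parsed inside addWord() and never reaches namespace scope, which is exactly when "identifier is undefined" appears at the call site. The skeleton below is deliberately ill-formed to show the shape of the problem:

        Node* Trie::addWord(/* ... */)
        {
            for (uint record_number = 0; record_number < num_records; ++record_number)
            {
                // ...
                return current;
            }   // closes the for loop
        // }   <-- this closing brace for addWord() itself appears to be missing

        void ac_search(/* ... */) { }   // without it, this nests inside addWord

    Adding the marked brace, and returning a value instead of the bare `return;` on the empty-word path of a function declared to return Node*, should let the call to ac_search compile.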

    Read the article

  • Calling Object Methods in Code

    - by Mister R2
    I'm a bit new to PHP, and I'm more experienced with strongly-typed languages such as Java, C#, or C++. I'm currently writing a web tool in PHP, and I am having an issue trying to do what I want. The simple idea of what I want to do in code is run through some emails I used PHP-IMAP to get. I then create email objects (a class I defined) and put them in an array. Later in the code, I cycle through those emails to display them. However, as you might have guessed, I have an issue there: I try to use an Email class object method in that later loop, and I'm pretty sure PHP doesn't know that the variables in the array happen to be Email class objects! I wrote a toString method, and I want to call it in the loop. While I don't need to do this for the final version of this tool, I would like to find out what I'm missing. This is the class and the loop where I'm calling the method:

        include 'imap_email_interface.php';

        class ImapEmail implements imap_email_interface
        {
            // Email data
            var $msgno;
            var $to;
            var $from;
            var $subject;
            var $body;
            var $attachment;

            // Email behavior

            /* PHP 4 ~ legacy constructor */
            public function ImapEmail($message_number)
            {
                $this->__construct();
                $this->msgno = $message_number;
            }

            /* PHP 5 constructor */
            public function __construct($message_number)
            {
                $this->msgno = $message_number;
            }

            public function send($send_to)
            {
                // Not Yet Needed! Seriously!
            }

            public function setHeaderDirectly($TO, $FROM, $SUBJECT)
            {
                $this->to = $TO;
                $this->from = $FROM;
                $this->subject = $SUBJECT;
            }

            public function setHeaderIndirectly($HEADER)
            {
                if (isset($HEADER->to[0]->personal))
                    $this->to = '"'.$HEADER->to[0]->personal.'", '.$HEADER->to[0]->mailbox.'@'.$HEADER->to[0]->host;
                else
                    $this->to = $HEADER->to[0]->mailbox.'@'.$HEADER->to[0]->host;

                $this->from = '"'.$HEADER->from[0]->personal.'", '.$HEADER->from[0]->mailbox.'@'.$HEADER->from[0]->host;
                $this->subject = $HEADER->subject;
            }

            public function setBody($BODY)
            {
                $this->body = $BODY;
            }

            public function setAttachment($ATTCH)
            {
                $this->attachment = $ATTCH;
            }

            public function toString()
            {
                $str = '[TO]: ' . $this->to . '<br />' .
                       '[FROM]: ' . $this->from . '<br />' .
                       '[SUBJECT]: ' . $this->subject . '<br />';
                $str .= '[Attachment]: ' . $this->attachment . '<br />';
                return $str;
            }
        }
        ?>

    The loop:

        foreach ($orderFileEmails as $x) {
            $x->toString();
            echo '<br /><br />';
        }

    Any ideas?
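    A hedged observation, not from the original post: PHP resolves $x->toString() at runtime, so the loop already "knows" the array holds ImapEmail objects. The visible problem is simpler: toString() returns a string, but the loop never echoes the return value, so nothing is printed. Printing it fixes the output:

        foreach ($orderFileEmails as $x) {
            echo $x->toString();   // echo the returned string instead of discarding it
            echo '<br /><br />';
        }

    Optionally, renaming toString() to PHP's magic __toString() method would let the object be used directly in string context (`echo $x;`). Separately, note that the legacy constructor calls $this->__construct() without the $message_number argument that __construct() expects.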

    Read the article

  • texture colours opengl

    - by user1324894
    Hi, I am making a simple 2D game in C++, and for the map I am doing texture mapping by using tiles and assigning textures to those tiles. However, when I run the programme the textures become black and white, when I want them to be the colour they are in the .png image. This is my code:

        int worldMap[10][10] = {
            {0,0,0,0,0,0,0,0,0,0},
            {0,0,0,0,0,0,0,0,0,0},
            {0,0,0,0,0,0,0,0,0,0},
            {0,0,0,0,0,0,0,0,0,0},
            {0,0,0,0,0,0,0,0,0,0},
            {0,0,0,0,0,0,0,0,0,0},
            {0,0,0,0,0,0,0,0,0,0},
            {0,0,0,0,0,0,0,0,0,0},
            {0,0,0,0,0,0,0,0,0,0},
            {0,0,0,0,0,0,0,0,0,0},
        };

        void background()
        {
            glClearColor(0.0, 0.0, 0.0, 0.0);

            // Texture loading object
            nv::Image img;

            // Returns true on success
            if (img.loadImageFromFile("Image_Loading/field.png"))
            {
                glGenTextures(1, &myTexture);
                glBindTexture(GL_TEXTURE_2D, myTexture);
                glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
                glTexImage2D(GL_TEXTURE_2D, 0, img.getInternalFormat(),
                             img.getWidth(), img.getHeight(), 0,
                             img.getFormat(), img.getType(), img.getLevel(0));
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
                glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
                glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0f);
            }
            else
                MessageBox(NULL, "Failed to load texture", "End of the world", MB_OK | MB_ICONINFORMATION);
        }

        void drawTiles(void)  // our function to draw the tiles
        {
            for (int i = 0; i < 10; i++)       // loop through the height of the map
            {
                for (int j = 0; j < 10; j++)   // loop through the width of the map
                {
                    if (worldMap[i][j] == 0)   // if the map at this position contains a 0
                    {
                        glBindTexture(GL_TEXTURE_2D, myTexture);  // bind our grass texture to our shape
                    }

                    glPushMatrix();            // push the matrix so that our translations only affect this tile
                    glTranslatef(j, -i, 0);    // translate the tile to where it should belong

                    glBegin(GL_QUADS);         // begin drawing our quads
                        // with our vertices we have to assign a texcoord
                        // so that our texture has some points to draw to
                        glTexCoord2d(10, 0);  glVertex2f((-10 + mapX), (-10 + mapY));
                        glTexCoord2d(10, 0);  glVertex2f(( 10 + mapX), (-10 + mapY));
                        glTexCoord2d(10, 10); glVertex2f(( 10 + mapX), ( 10 + mapY));
                        glTexCoord2d(0, 10);  glVertex2f((-10 + mapX), ( 10 + mapY));
                    glEnd();

                    glPopMatrix();             // pop the matrix
                }  // end first loop
            }  // end second loop
        }

        void display()
        {
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glClear(GL_COLOR_BUFFER_BIT);

            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluOrtho2D(-5, 5, -5, 5);
            glMatrixMode(GL_MODELVIEW);

            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, myTexture);
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

            drawTiles();

            glDisable(GL_TEXTURE_2D);
            glDisable(GL_BLEND);
        }

        void character()
        {
            glBegin(GL_POLYGON);
                glVertex2f((-0.5 + characterX), (-0.5 + characterY));
                glVertex2f((-0.5 + characterX), ( 0.5 + characterY));
                glVertex2f(( 0.5 + characterX), ( 0.5 + characterY));
                glVertex2f(( 0.5 + characterX), (-0.5 + characterY));
                glTranslatef(characterX, characterY, 0.0f);
            glEnd();
        }

    Can anybody help, please?
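    Two common culprits fit these symptoms; both are guesses about this code rather than a confirmed diagnosis. First, with OpenGL's default GL_MODULATE texture environment, the current glColor multiplies every texel, so a stale colour left over from an untextured draw (such as the character quad) tints all the tiles; resetting the colour before the textured draw rules that out. Second, the first vertex reuses the texcoord (10, 0), so the quad's texture coordinates never span the image. A sketch of the inner draw in drawTiles with both changes:

        glColor3f(1.0f, 1.0f, 1.0f);  // neutral colour so GL_MODULATE leaves texels unchanged
        glBegin(GL_QUADS);
            glTexCoord2d(0,  0);  glVertex2f((-10 + mapX), (-10 + mapY));  // was (10, 0)
            glTexCoord2d(10, 0);  glVertex2f(( 10 + mapX), (-10 + mapY));
            glTexCoord2d(10, 10); glVertex2f(( 10 + mapX), ( 10 + mapY));
            glTexCoord2d(0,  10); glVertex2f((-10 + mapX), ( 10 + mapY));
        glEnd();

    Unrelated but worth noting: the glTranslatef inside character()'s glBegin/glEnd pair has no effect, since matrix calls are not allowed between glBegin and glEnd.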

    Read the article

  • Find out when all processes in (void) are done?

    - by Emil
    Hey. I need to know how you can find out when all the work started from a - (void) method is done, if that's possible. Why? I'm loading in data for a UITableView, and I need to know when a "Loading..." view can be replaced with the UITableView, and when I can start creating the cells. This is my code:

        - (void)reloadData {
            NSAutoreleasePool *releasePool = [[NSAutoreleasePool alloc] init];
            NSLog(@"Reloading data.");

            NSURL *urlPosts = [NSURL URLWithString:[NSString stringWithFormat:@"%@", URL]];
            NSError *lookupError = nil;
            NSString *data = [[NSString alloc] initWithContentsOfURL:urlPosts
                                                            encoding:NSUTF8StringEncoding
                                                               error:&lookupError];
            postsData = [data componentsSeparatedByString:@"~"];
            [data release], data = nil;
            urlPosts = nil;

            self.numberOfPosts = [[postsData objectAtIndex:0] intValue];
            self.postsArrayID = [[postsData objectAtIndex:1] componentsSeparatedByString:@"#"];
            self.postsArrayDate = [[postsData objectAtIndex:2] componentsSeparatedByString:@"#"];
            self.postsArrayTitle = [[postsData objectAtIndex:3] componentsSeparatedByString:@"#"];
            self.postsArrayComments = [[postsData objectAtIndex:4] componentsSeparatedByString:@"#"];
            self.postsArrayImgSrc = [[postsData objectAtIndex:5] componentsSeparatedByString:@"#"];

            NSMutableArray *writeToPlist = [NSMutableArray array];
            NSMutableArray *writeToNoImagePlist = [NSMutableArray array];
            NSMutableArray *imagesStored = [NSMutableArray arrayWithContentsOfFile:
                [rootPath stringByAppendingPathComponent:@"imagesStored.plist"]];

            int loop = 0;
            for (NSString *postID in postsArrayID) {
                if ([imagesStored containsObject:[NSString stringWithFormat:@"%@.png", postID]]) {
                    NSLog(@"Already stored, jump to next. ID: %@", postID);
                    continue;
                }
                NSLog(@"%@.png", postID);

                NSData *imageData = [NSData dataWithContentsOfURL:
                    [NSURL URLWithString:[postsArrayImgSrc objectAtIndex:loop]]];

                // If image contains anything, set cellImage to image. If image is empty,
                // try one more time or use noImage.png, set in IB
                if (imageData == nil) {
                    NSLog(@"imageData is empty before trying .jpeg");
                    // If image == nil, try to replace .jpg with .jpeg, and if that worked,
                    // set cellImage to that image. If that is also nil, use noImage.png, set in IB.
                    imageData = [NSData dataWithContentsOfURL:[NSURL URLWithString:
                        [[postsArrayImgSrc objectAtIndex:loop]
                            stringByReplacingOccurrencesOfString:@".jpg" withString:@".jpeg"]]];
                }

                if (imageData != nil) {
                    NSLog(@"imageData is NOT empty when creating file");
                    [fileManager createFileAtPath:[rootPath stringByAppendingPathComponent:
                        [NSString stringWithFormat:@"images/%@.png", postID]]
                                         contents:imageData
                                       attributes:nil];
                    [writeToPlist addObject:[NSString stringWithFormat:@"%@.png", postID]];
                } else {
                    [writeToNoImagePlist addObject:[NSString stringWithFormat:@"%@", postID]];
                }

                imageData = nil;
                loop++;
                NSLog(@"imagePlist: %@\nnoImagePlist: %@", writeToPlist, writeToNoImagePlist);
            }

            NSMutableArray *writeToAllPlist = [NSMutableArray arrayWithArray:writeToPlist];
            [writeToPlist addObjectsFromArray:[NSArray arrayWithContentsOfFile:nowPlist]];
            [writeToAllPlist addObjectsFromArray:[NSArray arrayWithContentsOfFile:
                [rootPath stringByAppendingPathComponent:@"imagesStored.plist"]]];
            [writeToNoImagePlist addObjectsFromArray:[NSArray arrayWithContentsOfFile:
                [rootPath stringByAppendingPathComponent:@"noImage.plist"]]];

            [writeToPlist writeToFile:nowPlist atomically:YES];
            [writeToAllPlist writeToFile:[rootPath stringByAppendingPathComponent:@"imagesStored.plist"] atomically:YES];
            [writeToNoImagePlist writeToFile:[rootPath stringByAppendingPathComponent:@"noImage.plist"] atomically:YES];

            [releasePool release];
        }
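    Since reloadData is synchronous, it is "done" the moment the call returns; the usual problem is that it blocks the main thread while it runs. A hedged GCD-based sketch (not from the original post; loadingView and tableView are hypothetical names for the asker's views) of one way to run the work in the background and get a clear "finished" point on the main thread:

        // Run the loading work off the main thread, then hop back when it's done.
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            [self reloadData];  // synchronous work, including the image loop

            dispatch_async(dispatch_get_main_queue(), ^{
                // Everything in reloadData has completed by this point.
                [loadingView removeFromSuperview];  // swap out the "Loading..." view
                [self.tableView reloadData];        // now safe to build the cells
            });
        });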

    Read the article

  • Help with Java Program for Prime numbers

    - by Ben
    Hello everyone, I was wondering if you can help me with this program. I have been struggling with it for hours and have just trashed my code because the TA doesn't like how I executed it. I am completely hopeless, and if anyone can help me out step by step, I would greatly appreciate it.

    In this project you will write a Java program that reads a positive integer n from standard input, then prints out the first n prime numbers.

    We say that an integer m is divisible by a non-zero integer d if there exists an integer k such that m = k*d, i.e. if d divides evenly into m. Equivalently, m is divisible by d if the remainder of m upon (integer) division by d is zero. We would also express this by saying that d is a divisor of m. A positive integer p is called prime if its only positive divisors are 1 and p. The one exception to this rule is the number 1 itself, which is considered to be non-prime. A positive integer that is not prime is called composite. Euclid showed that there are infinitely many prime numbers. The prime and composite sequences begin as follows:

    Primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, ...
    Composites: 1, 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28, ...

    There are many ways to test a number for primality, but perhaps the simplest is to simply do trial divisions. Begin by dividing m by 2, and if it divides evenly, then m is not prime. Otherwise, divide by 3, then 4, then 5, etc. If at any point m is found to be divisible by a number d in the range 2 ≤ d ≤ m-1, then halt, and conclude that m is composite. Otherwise, conclude that m is prime. A moment's thought shows that one need not do any trial divisions by numbers d which are themselves composite. For instance, if a trial division by 2 fails (i.e. has non-zero remainder, so m is odd), then a trial division by 4, 6, or 8, or any even number, must also fail. Thus to test a number m for primality, one need only do trial divisions by prime numbers less than m. Furthermore, it is not necessary to go all the way up to m-1. One need only do trial divisions of m by primes p in the range 2 ≤ p ≤ √m. To see this, suppose m > 1 is composite. Then there exist positive integers a and b such that 1 < a < m, 1 < b < m, and m = ab. But if both a > √m and b > √m, then ab > m, contradicting that m = ab. Hence one of a or b must be less than or equal to √m.

    To implement this process in Java you will write a function called isPrime() with the following signature:

        static boolean isPrime(int m, int[] P)

    This function will return true or false according to whether m is prime or composite. The array argument P will contain a sufficient number of primes to do the testing. Specifically, at the time isPrime() is called, array P must contain (at least) all primes p in the range 2 ≤ p ≤ √m. For instance, to test m = 53 for primality, one must do successive trial divisions by 2, 3, 5, and 7. We go no further since 11 > √53. Thus a precondition for the function call isPrime(53, P) is that P[0] = 2, P[1] = 3, P[2] = 5, and P[3] = 7. The return value in this case would be true, since all these divisions fail. Similarly, to test m = 143, one must do trial divisions by 2, 3, 5, 7, and 11 (since 13 > √143). The precondition for the function call isPrime(143, P) is therefore P[0] = 2, P[1] = 3, P[2] = 5, P[3] = 7, and P[4] = 11. The return value in this case would be false, since 11 divides 143. Function isPrime() should contain a loop that steps through array P, doing trial divisions. This loop should terminate either when a trial division succeeds, in which case false is returned, or when the next prime in P is greater than √m, in which case true is returned.

    Function main() in this project will read the command line argument n, allocate an int array of length n, fill the array with primes, then print the contents of the array to stdout according to the format described below. In the context of function main(), we will refer to this array as Primes[]. Thus array Primes[] plays a dual role in this project. On the one hand, it is used to collect, store, and print the output data. On the other hand, it is passed to function isPrime() to test new integers for primality. Whenever isPrime() returns true, the newly discovered prime will be placed at the appropriate position in array Primes[]. This process works since, as explained above, the primes needed to test an integer m range only up to √m, and all of these primes (and more) will already be stored in array Primes[] when m is tested. Of course it will be necessary to initialize Primes[0] = 2 manually, then proceed to test 3, 4, ... for primality using function isPrime().

    The following is an outline of the steps to be performed in function main().

    • Check that the user supplied exactly one command line argument which can be interpreted as a positive integer n. If the command line argument is not a single positive integer, your program will print a usage message as specified in the examples below, then exit.
    • Allocate array Primes[] of length n and initialize Primes[0] = 2.
    • Enter a loop which will discover subsequent primes and store them as Primes[1], Primes[2], Primes[3], ..., Primes[n-1]. This loop should contain an inner loop which walks through successive integers and tests them for primality by calling function isPrime() with appropriate arguments.
    • Print the contents of array Primes[] to stdout, 10 to a line separated by single spaces. In other words Primes[0] through Primes[9] will go on line 1, Primes[10] through Primes[19] will go on line 2, and so on. Note that if n is not a multiple of 10, then the last line of output will contain fewer than 10 primes.

    Your program, which will be called Prime.java, will produce output identical to that of the sample runs below. (As usual, % signifies the unix prompt.)

        % java Prime
        Usage: java Prime [PositiveInteger]
        % java Prime xyz
        Usage: java Prime [PositiveInteger]
        % java Prime 10 20
        Usage: java Prime [PositiveInteger]
        % java Prime 75
        2 3 5 7 11 13 17 19 23 29
        31 37 41 43 47 53 59 61 67 71
        73 79 83 89 97 101 103 107 109 113
        127 131 137 139 149 151 157 163 167 173
        179 181 191 193 197 199 211 223 227 229
        233 239 241 251 257 263 269 271 277 281
        283 293 307 311 313 317 331 337 347 349
        353 359 367 373 379
        %

    As you can see, inappropriate command line argument(s) generate a usage message which is similar to that of many unix commands. (Try doing the more command with no arguments to see such a message.) Your program will include a function called Usage() having signature

        static void Usage()

    that prints this message to stderr, then exits. Thus your program will contain three functions in all: main(), isPrime(), and Usage(). Each should be preceded by a comment block giving its name, a short description of its operation, and any necessary preconditions (such as those for isPrime()). See examples on the webpage.
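    A hedged sketch of the structure the assignment describes (one possible implementation, not the course's official solution). It relies on the precondition worked out above: when m is tested, all primes less than m are already in the array, so the trial-division loop always meets a prime p with p*p > m before it reaches the array's unfilled (zero) entries.

        public class Prime {
            // Prints the usage message to stderr, then exits.
            static void Usage() {
                System.err.println("Usage: java Prime [PositiveInteger]");
                System.exit(1);
            }

            // Trial-divides m by the primes stored in P, stopping once the next
            // prime exceeds sqrt(m). Precondition: P holds all primes p <= sqrt(m).
            static boolean isPrime(int m, int[] P) {
                for (int i = 0; P[i] * P[i] <= m; i++) {
                    if (m % P[i] == 0) return false;
                }
                return true;
            }

            public static void main(String[] args) {
                if (args.length != 1) Usage();
                int n = 0;
                try { n = Integer.parseInt(args[0]); }
                catch (NumberFormatException e) { Usage(); }
                if (n <= 0) Usage();

                int[] primes = new int[n];
                primes[0] = 2;
                int count = 1;
                for (int m = 3; count < n; m++) {      // walk successive integers
                    if (isPrime(m, primes)) primes[count++] = m;
                }

                for (int i = 0; i < n; i++) {          // 10 primes per line
                    System.out.print(primes[i] + (i % 10 == 9 || i == n - 1 ? "\n" : " "));
                }
            }
        }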

    Read the article

  • C++ Multithreading with pthread is blocking (including sockets)

    - by Sebastian Büttner
    I am trying to implement a multithreaded application with pthreads. I implemented a thread class, which looks like the following, and I call it later twice (or even more), but it seems to block instead of executing the threads in parallel. Here is what I have so far. The Thread class is an abstract class with the abstract method "exec", which should contain the thread code in a derived class (I made a sample of this, named DerivedThread).

    Thread.hpp:

        #ifndef THREAD_H_
        #define THREAD_H_

        #include <pthread.h>

        class Thread {
        public:
            Thread();
            void start();
            void join();
            virtual int exec() = 0;
            int exit_code();

        private:
            static void* thread_router(void* arg);
            void exec_thread();

            pthread_t pth_;
            int code_;
        };

        #endif /* THREAD_H_ */

    And Thread.cpp:

        #include <iostream>
        #include "Thread.hpp"

        using namespace std;

        Thread::Thread(): code_(0) {
            cout << "[Thread] Init" << endl;
        }

        void Thread::start() {
            cout << "[Thread] Created Thread" << endl;
            pthread_create(&pth_, NULL, Thread::thread_router, reinterpret_cast<void*>(this));
        }

        void Thread::join() {
            cout << "[Thread] Join Thread" << endl;
            pthread_join(pth_, NULL);
        }

        int Thread::exit_code() {
            return code_;
        }

        void Thread::exec_thread() {
            cout << "[Thread] Execute" << endl;
            code_ = exec();
        }

        void* Thread::thread_router(void* arg) {
            cout << "[Thread] exec_thread function in thread" << endl;
            reinterpret_cast<Thread*>(arg)->exec_thread();
            return NULL;
        }

    DerivedThread.hpp:

        #include "Thread.hpp"

        class DerivedThread : public Thread {
        public:
            DerivedThread();
            virtual ~DerivedThread();
            int exec();
            void Close() = 0;

    DerivedThread.cpp:

        [...]
        #include "DerivedThread.hpp"
        [...]

        int DerivedThread::exec() {
            // code to be executed
            do {
                cout << "Thread executed" << endl;
                usleep(1000000);
            } while (true);  // dummy, just to let it run for a while
        }
        [...]

    Basically, I am calling this like here:

        DerivedThread *thread;

        cout << "Creating Thread" << endl;
        thread = new DerivedThread();
        cout << "Created thread, starting..." << endl;
        thread->start();
        cout << "Started thread" << endl;

        cout << "Creating 2nd Thread" << endl;
        thread = new DerivedThread();
        cout << "Created 2nd thread, starting..." << endl;
        thread->start();
        cout << "Started 2nd thread" << endl;

    This works great if I am only starting one of these threads, but not if I start multiple ones which should run together (not synced, only parallel). I discovered that the thread is created, but as it tries to execute it (via start), the program seems to block until the thread has finished; only then is the next thread processed. I thought that pthreads would run them unblocked for me, so what did I do wrong? A sample output might be:

        Creating Thread
        [Thread] Thread Init
        Created thread, starting...
        [Thread] Created thread
        [Thread] exec_thread function in thread
        [Thread] Execute
        Thread executed
        Thread executed
        Thread executed
        ...

    Until thread 1 has terminated, thread 2 won't be created or executed. The code above is executed in another class. Just for information: I am trying to create a multithreaded server. The concept is like this: the MultiThreadedServer class has a main loop, like this one:

        ::inet::ServerSock *sock;  // just a simple self-made wrapper class for sockets
        DerivedThread *thread;

        for (;;) {
            sock = new ::inet::ServerSock();
            this->Socket->accept(*sock);

            cout << "Creating Thread" << endl;
            // Threads (according to the code sample above). I did not mention the
            // parameter before as it was not necessary; in fact, I pass the handle
            // of the connected socket to the thread.
            thread = new DerivedThread(sock);
            cout << "Created thread, starting..." << endl;
            thread->start();
            cout << "Started thread" << endl;
        }

    So I thought that this would loop over and over, waiting for new connections to accept, and when a new client arrives, I create a new thread and give it the connected socket as a parameter. In DerivedThread::exec I do the handling for the connected client, like:

        [...]
        do {
            [...]
            if (this->sock_->read(Buffer, sizeof(PacketStruc)) > 0) {
                cout << "[Handler_Base] Recv Packet" << endl;
                // handle the packet
            } else {
                Connected = false;
            }
            delete Buffer;
        } while (Connected);
        [...]

    So I loop in the created thread as long as the client keeps the connection. I thought that the socket might cause the blocking behaviour.

    Edit: I figured out that it is not the read() loop in the DerivedThread class, as I replaced it with a simple cout/usleep loop. Still, only the first thread executed, and the second one was created and run only after the first had finished.

    Many thanks and best regards,
    Sebastian
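    For reference, here is a minimal, self-contained version of the same wrapper pattern (my own reduction, compiled with g++ demo.cpp -pthread). Two threads started this way do interleave their output, which suggests the serialization in the question comes from code outside the wrapper, for example a blocking call such as accept() or a join() issued between the two start() calls:

        #include <iostream>
        #include <pthread.h>
        #include <unistd.h>

        class Thread {
        public:
            virtual ~Thread() {}
            void start() { pthread_create(&pth_, NULL, router, this); }
            void join()  { pthread_join(pth_, NULL); }
            virtual void exec() = 0;
        private:
            static void* router(void* arg) {
                static_cast<Thread*>(arg)->exec();
                return NULL;
            }
            pthread_t pth_;
        };

        class Worker : public Thread {
        public:
            explicit Worker(int id) : id_(id) {}
            void exec() {
                for (int i = 0; i < 3; ++i) {
                    std::cout << "thread " << id_ << " tick " << i << std::endl;
                    usleep(100000);
                }
            }
        private:
            int id_;
        };

        int main() {
            Worker a(1), b(2);
            a.start();
            b.start();   // starts before a finishes: the ticks interleave
            a.join();
            b.join();
            return 0;
        }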

    Read the article

  • Creating a dynamic, extensible C# Expando Object

    - by Rick Strahl
    I love dynamic functionality in a strongly typed language because it offers us the best of both worlds. In C# (or any of the main .NET languages) we now have the dynamic type that provides a host of dynamic features for the static C# language. One place where I've found dynamic to be incredibly useful is in building extensible types or types that expose traditionally non-object data (like dictionaries) in easier to use and more readable syntax. I wrote about a couple of these for accessing old school ADO.NET DataRows and DataReaders more easily, for example. These classes are dynamic wrappers that provide easier syntax and auto-type conversions, which greatly simplifies code clutter and increases clarity in existing code.

    ExpandoObject in .NET 4.0

    Another great use case for dynamic objects is the ability to create extensible objects - objects that start out with a set of static members and then can add additional properties and even methods dynamically. The .NET 4.0 framework actually includes an ExpandoObject class which provides a very dynamic object that allows you to add properties and methods on the fly and then access them again. For example, with ExpandoObject you can do stuff like this:

        dynamic expand = new ExpandoObject();
        expand.Name = "Rick";
        expand.HelloWorld = (Func<string, string>)((string name) =>
        {
            return "Hello " + name;
        });

        Console.WriteLine(expand.Name);
        Console.WriteLine(expand.HelloWorld("Dufus"));

    Internally ExpandoObject uses a Dictionary-like structure and interface to store properties and methods and then allows you to add and access properties and methods easily. As cool as ExpandoObject is, it has a few shortcomings too:

    • It's a sealed type, so you can't use it as a base class
    • It only works off 'properties' in the internal Dictionary - you can't expose existing type data
    • It doesn't serialize to XML or with DataContractSerializer/DataContractJsonSerializer

    Expando - A truly extensible Object

    ExpandoObject is nice if you just need a dynamic container for a dictionary-like structure. However, if you want to build an extensible object that starts out with a set of strongly typed properties and then allows you to extend it, ExpandoObject does not work because it's a sealed class that can't be inherited. I started thinking about this very scenario for one of the applications I'm building for a customer. In this system we are connecting to various different user stores. Each user store has the same basic requirements for username, password, name etc. But then each store also has a number of extended properties that are available to each application. In the real world scenario the data is loaded from the database in a data reader, and the known properties are assigned from the known fields in the database. All unknown fields are then 'added' to the expando object dynamically. In the past I've done this very thing with a separate property - Properties - just like I do for this class. But the property-and-dictionary syntax is not ideal and is tedious to work with. I started thinking about how to represent these extra property structures. One way certainly would be to add a Dictionary, or an ExpandoObject, to hold all those extra properties.
    But wouldn't it be nice if the application could actually extend an existing object that looks something like this, as you can with the Expando object:

        public class User : Westwind.Utilities.Dynamic.Expando
        {
            public string Email { get; set; }
            public string Password { get; set; }
            public string Name { get; set; }
            public bool Active { get; set; }
            public DateTime? ExpiresOn { get; set; }
        }

    and then simply start extending the properties of this object dynamically? Using the Expando object I describe later, you can now do the following:

        [TestMethod]
        public void UserExampleTest()
        {
            var user = new User();

            // Set strongly typed properties
            user.Email = "[email protected]";
            user.Password = "nonya123";
            user.Name = "Rickochet";
            user.Active = true;

            // Now add dynamic properties
            dynamic duser = user;
            duser.Entered = DateTime.Now;
            duser.Accesses = 1;

            // you can also add dynamic props via indexer
            user["NickName"] = "AntiSocialX";
            duser["WebSite"] = "http://www.west-wind.com/weblog";

            // Access strong type through dynamic ref
            Assert.AreEqual(user.Name, duser.Name);

            // Access strong type through indexer
            Assert.AreEqual(user.Password, user["Password"]);

            // access dynamically added value through indexer
            Assert.AreEqual(duser.Entered, user["Entered"]);

            // access index-added value through dynamic
            Assert.AreEqual(user["NickName"], duser.NickName);

            // loop through all properties, dynamic AND strong type properties (true)
            foreach (var prop in user.GetProperties(true))
            {
                object val = prop.Value;
                if (val == null)
                    val = "null";
                Console.WriteLine(prop.Key + ": " + val.ToString());
            }
        }

    As you can see, this code somewhat blurs the line between a static and a dynamic type. You start with a strongly typed object that has a fixed set of properties. You can then cast the object to dynamic (as I discussed in my last post) and add additional properties to the object. You can also use an indexer to add dynamic properties to the object. To access the strongly typed properties you can use either the strongly typed instance, the indexer or the dynamic cast of the object. Personally I think it's kinda cool to have an easy way to access strongly typed properties by string, which can make some data scenarios much easier. To access the 'dynamically added' properties you can use either the indexer on the strongly typed object, or property syntax on the dynamic cast. Using the dynamic type allows all three modes to work on both strongly typed and dynamic properties. Finally you can iterate over all properties, both dynamic and strongly typed, if you choose. Lots of flexibility. Note also that by default the Expando object works against the (this) instance, meaning it extends the current object. You can also pass in a separate instance to the constructor, in which case that object will be used to iterate over to find properties rather than this. Using this approach provides some really interesting functionality when using the dynamic type. To use it we have to add an explicit constructor to the Expando subclass:

        public class User : Westwind.Utilities.Dynamic.Expando
        {
            public string Email { get; set; }
            public string Password { get; set; }
            public string Name { get; set; }
            public bool Active { get; set; }
            public DateTime? ExpiresOn { get; set; }

            public User() : base()
            { }

            // only required if you want to mix in a separate instance
            public User(object instance) : base(instance)
            { }
        }

    to allow the instance to be passed.
    When you do, you can now do:

        [TestMethod]
        public void ExpandoMixinTest()
        {
            // have Expando work on Addresses
            var user = new User(new Address());

            // cast to dynamic
            dynamic duser = user;

            // Set strongly typed properties
            duser.Email = "[email protected]";
            user.Password = "nonya123";

            // Set properties on address object
            duser.Address = "32 Kaiea";
            //duser.Phone = "808-123-2131";

            // set dynamic properties
            duser.NonExistantProperty = "This works too";

            // shows default Address.Phone value
            Console.WriteLine(duser.Phone);
        }

    Using the dynamic cast in this case allows you to access *three* different 'objects': the strong type properties, the dynamically added properties in the dictionary, and the properties of the instance passed in! Effectively this gives you a way to simulate multiple inheritance (which is scary - so be very careful with this, but you can do it).

    How Expando works

    Behind the scenes Expando is a DynamicObject subclass, as I discussed in my last post. By implementing a few of DynamicObject's methods you can basically create a type that can trap 'property missing' and 'method missing' operations. When you access a non-existent property, a known method is fired that our code can intercept and provide a value for. Internally Expando uses a custom dictionary implementation to hold the dynamic properties you might add to your expandable object.

    Let's look at code first. The code for the Expando type is straightforward and, given what it provides, relatively short. Here it is:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Dynamic;
        using System.Reflection;

        namespace Westwind.Utilities.Dynamic
        {
            /// <summary>
            /// Class that provides extensible properties and methods. This
            /// dynamic object stores 'extra' properties in a dictionary or
            /// checks the actual properties of the instance.
            ///
            /// This means you can subclass this expando and retrieve either
            /// native properties or properties from values in the dictionary.
            ///
            /// This type allows you three ways to access its properties:
            ///
            /// Directly: any explicitly declared properties are accessible
            /// Dynamic: dynamic cast allows access to dictionary and native properties/methods
            /// Dictionary: Any of the extended properties are accessible via IDictionary interface
            /// </summary>
            [Serializable]
            public class Expando : DynamicObject, IDynamicMetaObjectProvider
            {
                /// <summary>
                /// Instance of object passed in
                /// </summary>
                object Instance;

                /// <summary>
                /// Cached type of the instance
                /// </summary>
                Type InstanceType;

                PropertyInfo[] InstancePropertyInfo
                {
                    get
                    {
                        if (_InstancePropertyInfo == null && Instance != null)
                            _InstancePropertyInfo = Instance.GetType().GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly);
                        return _InstancePropertyInfo;
                    }
                }
                PropertyInfo[] _InstancePropertyInfo;

                /// <summary>
                /// String Dictionary that contains the extra dynamic values
                /// stored on this object/instance
                /// </summary>
                /// <remarks>Using PropertyBag to support XML Serialization of the dictionary</remarks>
                public PropertyBag Properties = new PropertyBag();

                //public Dictionary<string,object> Properties = new Dictionary<string, object>();

                /// <summary>
                /// This constructor just works off the internal dictionary and any
                /// public properties of this object.
                ///
                /// Note you can subclass Expando.
                /// </summary>
                public Expando()
                {
                    Initialize(this);
                }

                /// <summary>
                /// Allows passing in an existing instance variable to 'extend'.
                /// </summary>
                /// <remarks>
                /// You can pass in null here if you don't want to
                /// check native properties and only check the Dictionary!
                /// </remarks>
                public Expando(object instance)
                {
                    Initialize(instance);
                }

                protected virtual void Initialize(object instance)
                {
                    Instance = instance;
                    if (instance != null)
                        InstanceType = instance.GetType();
                }

                /// <summary>
                /// Try to retrieve a member by name first from instance properties
                /// followed by the collection entries.
                /// </summary>
                public override bool TryGetMember(GetMemberBinder binder, out object result)
                {
                    result = null;

                    // first check the Properties collection for member
                    if (Properties.Keys.Contains(binder.Name))
                    {
                        result = Properties[binder.Name];
                        return true;
                    }

                    // Next check for Public properties via Reflection
                    if (Instance != null)
                    {
                        try
                        {
                            return GetProperty(Instance, binder.Name, out result);
                        }
                        catch { }
                    }

                    // failed to retrieve a property
                    result = null;
                    return false;
                }

                /// <summary>
                /// Property setter implementation tries to retrieve value from instance
                /// first then into this object
                /// </summary>
                public override bool TrySetMember(SetMemberBinder binder, object value)
                {
                    // first check to see if there's a native property to set
                    if (Instance != null)
                    {
                        try
                        {
                            bool result = SetProperty(Instance, binder.Name, value);
                            if (result)
                                return true;
                        }
                        catch { }
                    }

                    // no match - set or add to dictionary
                    Properties[binder.Name] = value;
                    return true;
                }

                /// <summary>
                /// Dynamic invocation method. Currently allows only for Reflection based
                /// operation (no ability to add methods dynamically).
                /// </summary>
                public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
                {
                    if (Instance != null)
                    {
                        try
                        {
                            // check instance passed in for methods to invoke
                            if (InvokeMethod(Instance, binder.Name, args, out result))
                                return true;
                        }
                        catch { }
                    }

                    result = null;
                    return false;
                }

                /// <summary>
                /// Reflection Helper method to retrieve a property
                /// </summary>
                protected bool GetProperty(object instance, string name, out object result)
                {
                    if (instance == null)
                        instance = this;

                    var miArray = InstanceType.GetMember(name, BindingFlags.Public | BindingFlags.GetProperty | BindingFlags.Instance);
                    if (miArray != null && miArray.Length > 0)
                    {
                        var mi = miArray[0];
                        if (mi.MemberType == MemberTypes.Property)
                        {
                            result = ((PropertyInfo)mi).GetValue(instance, null);
                            return true;
                        }
                    }

                    result = null;
                    return false;
                }

                /// <summary>
                /// Reflection helper method to set a property value
                /// </summary>
                protected bool SetProperty(object instance, string name, object value)
                {
                    if (instance == null)
                        instance = this;

                    var miArray = InstanceType.GetMember(name, BindingFlags.Public | BindingFlags.SetProperty | BindingFlags.Instance);
                    if (miArray != null && miArray.Length > 0)
                    {
                        var mi = miArray[0];
                        if (mi.MemberType == MemberTypes.Property)
                        {
                            ((PropertyInfo)mi).SetValue(Instance, value, null);
                            return true;
                        }
                    }
                    return false;
                }

                /// <summary>
                /// Reflection helper method to invoke a method
                /// </summary>
                protected bool InvokeMethod(object instance, string name, object[] args, out object result)
                {
                    if (instance == null)
                        instance = this;

                    // Look at the instanceType
                    var miArray = InstanceType.GetMember(name, BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance);
                    if (miArray != null && miArray.Length > 0)
                    {
                        var mi = miArray[0] as MethodInfo;
                        result = mi.Invoke(Instance, args);
                        return true;
                    }

                    result = null;
                    return false;
                }

                /// <summary>
                /// Convenience method that provides a string Indexer
                /// to the Properties collection AND the strongly typed
                /// properties of the object by name.
                ///
                /// // dynamic
                /// exp["Address"] = "112 nowhere lane";
                /// // strong
                /// var name = exp["StronglyTypedProperty"] as string;
                /// </summary>
                /// <remarks>
                /// The getter checks the Properties dictionary first,
                /// then looks in PropertyInfo for properties.
                /// The setter checks the instance properties before
                /// checking the Properties dictionary.
                /// </remarks>
                public object this[string key]
                {
                    get
                    {
                        try
                        {
                            // try to get from properties collection first
                            return Properties[key];
                        }
                        catch (KeyNotFoundException ex)
                        {
                            // try reflection on instanceType
                            object result = null;
                            if (GetProperty(Instance, key, out result))
                                return result;

                            // nope doesn't exist
                            throw;
                        }
                    }
                    set
                    {
                        if (Properties.ContainsKey(key))
                        {
                            Properties[key] = value;
                            return;
                        }

                        // check instance for existence of type first
                        var miArray = InstanceType.GetMember(key, BindingFlags.Public | BindingFlags.GetProperty);
                        if (miArray != null && miArray.Length > 0)
                            SetProperty(Instance, key, value);
                        else
                            Properties[key] = value;
                    }
                }

                /// <summary>
                /// Returns the properties of this object.
                /// </summary>
                public IEnumerable<KeyValuePair<string, object>> GetProperties(bool includeInstanceProperties = false)
                {
                    if (includeInstanceProperties && Instance != null)
                    {
                        foreach (var prop in this.InstancePropertyInfo)
                            yield return new KeyValuePair<string, object>(prop.Name, prop.GetValue(Instance, null));
                    }

                    foreach (var key in this.Properties.Keys)
                        yield return new KeyValuePair<string, object>(key, this.Properties[key]);
                }

                /// <summary>
                /// Checks whether a property exists in the Property collection
                /// or as a property on the instance
                /// </summary>
                public bool Contains(KeyValuePair<string, object> item, bool includeInstanceProperties = false)
                {
                    bool res = Properties.ContainsKey(item.Key);
                    if (res)
                        return true;

                    if (includeInstanceProperties && Instance != null)
                    {
                        foreach (var prop in this.InstancePropertyInfo)
                        {
                            if (prop.Name == item.Key)
                                return true;
                        }
                    }

                    return false;
                }
            }
        }

    Although the Expando class supports an indexer, it doesn't actually implement IDictionary or even IEnumerable. It only provides the indexer and the Contains() and GetProperties() methods, which work against the Properties dictionary AND the internal instance. The reason for not implementing IDictionary is that a) it doesn't add much value, since you can access the Properties dictionary directly, and b) I wanted to keep the interface to the class very lean so that it can serve as an entity type if desired.
Implementing these IDictionary (or even IEnumerable) causes LINQ extension methods to pop up on the type which obscures the property interface and would only confuse the purpose of the type. IDictionary and IEnumerable are also problematic for XML and JSON Serialization - the XML Serializer doesn't serialize IDictionary<string,object>, nor does the DataContractSerializer. The JavaScriptSerializer does serialize, but it treats the entire object like a dictionary and doesn't serialize the strongly typed properties of the type, only the dictionary values which is also not desirable. Hence the decision to stick with only implementing the indexer to support the user["CustomProperty"] functionality and leaving iteration functions to the publicly exposed Properties dictionary. Note that the Dictionary used here is a custom PropertyBag class I created to allow for serialization to work. One important aspect for my apps is that whatever custom properties get added they have to be accessible to AJAX clients since the particular app I'm working on is a SIngle Page Web app where most of the Web access is through JSON AJAX calls. PropertyBag can serialize to XML and one way serialize to JSON using the JavaScript serializer (not the DCS serializers though). The key components that make Expando work in this code are the Properties Dictionary and the TryGetMember() and TrySetMember() methods. The Properties collection is public so if you choose you can explicitly access the collection to get better performance or to manipulate the members in internal code (like loading up dynamic values form a database). Notice that TryGetMember() and TrySetMember() both work against the dictionary AND the internal instance to retrieve and set properties. This means that user["Name"] works against native properties of the object as does user["Name"] = "RogaDugDog". What's your Use Case? This is still an early prototype but I've plugged it into one of my customer's applications and so far it's working very well. The key features for me were the ability to easily extend the type with values coming from a database and exposing those values in a nice and easy to use manner. I'm also finding that using this type of object for ViewModels works very well to add custom properties to view models. I suspect there will be lots of uses for this - I've been using the extra dictionary approach to extensibility for years - using a dynamic type to make the syntax cleaner is just a bonus here. What can you think of to use this for? Resources Source Code and Tests (GitHub) Also integrated in Westwind.Utilities of the West Wind Web Toolkit West Wind Utilities NuGet© Rick Strahl, West Wind Technologies, 2005-2012Posted in CSharp  .NET  Dynamic Types   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();
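To see the three access paths side by side, here is a minimal usage sketch (mine, not from the original post). It assumes a User type that subclasses Expando and declares an Email property, as in the example above; the values are illustrative:

// assumes: public class User : Expando { public string Email { get; set; } ... }
var user = new User();
dynamic duser = user;

// 1. direct: strongly typed property
user.Email = "user@example.com";

// 2. dynamic: trapped by TrySetMember() and stored in the PropertyBag
duser.Nickname = "Rick";

// 3. indexer: checks the PropertyBag first, then the instance's properties
Console.WriteLine(user["Email"]);     // "user@example.com"
Console.WriteLine(user["Nickname"]);  // "Rick"

// enumerate everything, including the native properties
foreach (var prop in user.GetProperties(includeInstanceProperties: true))
    Console.WriteLine("{0}: {1}", prop.Key, prop.Value);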

    Read the article

  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
Many routines are parallelized because they are long running processes. When writing an algorithm that will run for a long period of time, it's typically a good practice to allow that routine to be cancelled. I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller's perspective. Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0.

Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken. A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource. There are many goals which led to this design. For our purposes, we will focus on a couple of specific design decisions:

Cancellation is cooperative. A calling method can request a cancellation, but it's up to the processing routine to terminate - it is not forced.
Cancellation is consistent. A single method call requests a cancellation on every copied CancellationToken in the routine.

Let's begin by looking at how we can cancel a PLINQ query. Suppose we wanted to provide the option to cancel our query from Part 6:

double min = collection
    .AsParallel()
    .Min(item => item.PerformComputation());

We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows:

var cts = new CancellationTokenSource();

// Pass cts here to a routine that could,
// in parallel, request a cancellation
try
{
    double min = collection
        .AsParallel()
        .WithCancellation(cts.Token)
        .Min(item => item.PerformComputation());
}
catch (OperationCanceledException e)
{
    // Query was cancelled before it finished
}

Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised. Be aware, however, that cancellation will not be instantaneous. When cts.Cancel() is called, the query will only stop after the current item.PerformComputation() elements all finish processing. cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed. This goes back to the first goal I mentioned - cancellation is cooperative. Here, we're requesting the cancellation, but it's up to PLINQ to terminate.

If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case:

public void PerformComputation(CancellationToken token)
{
    for (int i = 0; i < this.iterations; ++i)
    {
        // Add a check to see if we've been canceled
        // If a cancel was requested, we'll throw here
        token.ThrowIfCancellationRequested();

        // Do our processing now
        this.RunIteration(i);
    }
}

With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns. This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to:

try
{
    double min = collection
        .AsParallel()
        .WithCancellation(cts.Token)
        .Min(item => item.PerformComputation(cts.Token));
}
catch (OperationCanceledException e)
{
    // Query was cancelled before it finished
}

PLINQ is very good about handling this exception, as well. There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently. PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack. This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException.

The Task Parallel Library uses the same cancellation model, as well. Here, we supply our CancellationToken as part of the configuration. The ParallelOptions class contains a property for the CancellationToken. This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query. As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to:

try
{
    var cts = new CancellationTokenSource();
    var options = new ParallelOptions() { CancellationToken = cts.Token };

    Parallel.ForEach(customers, options, customer =>
    {
        // Run some process that takes some time...
        DateTime lastContact = theStore.GetLastContact(customer);
        TimeSpan timeSinceContact = DateTime.Now - lastContact;

        // Check for cancellation here
        options.CancellationToken.ThrowIfCancellationRequested();

        // If it's been more than two weeks, send an email, and update...
        if (timeSinceContact.Days > 14)
        {
            theStore.EmailCustomer(customer);
            customer.LastEmailContact = DateTime.Now;
        }
    });
}
catch (OperationCanceledException e)
{
    // The loop was cancelled
}

Notice that here we use the same approach taken in PLINQ. The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine. The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario. This works because we're using the same CancellationToken provided in the ParallelOptions.
If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.
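The caller side is left implicit above ("Pass cts here to a routine that could, in parallel, request a cancellation"). Here is a minimal sketch of what that could look like - the 500ms delay and the background task are illustrative, not from the original post; it assumes the collection and PerformComputation(token) shown earlier:

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var cts = new CancellationTokenSource();

// Simulate a user pressing "Cancel" half a second in.
// (On .NET 4.5+, cts.CancelAfter(500) does the same thing.)
Task.Factory.StartNew(() =>
{
    Thread.Sleep(500);
    cts.Cancel();
});

try
{
    double min = collection
        .AsParallel()
        .WithCancellation(cts.Token)
        .Min(item => item.PerformComputation(cts.Token));
    Console.WriteLine("Completed: {0}", min);
}
catch (OperationCanceledException)
{
    Console.WriteLine("Query was cancelled by the caller.");
}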

    Read the article

  • Camera for 2.5D Game

    - by me--
    I'm hoping someone can explain this to me like I'm 5, because I've been struggling with this for hours and simply cannot understand what I'm doing wrong. I've written a Camera class for my 2.5D game. The intention is to support world and screen spaces like this: The camera is the black thing on the right. The +Z axis is upwards in that image, with -Z heading downwards. As you can see, both world space and screen space have (0, 0) at their top-left. I started writing some unit tests to prove that my camera was working as expected, and that's where things started getting...strange. My tests plot coordinates in world, view, and screen spaces. Eventually I will use image comparison to assert that they are correct, but for now my test just displays the result. The render logic uses Camera.ViewMatrix to transform world space to view space, and Camera.WorldPointToScreen to transform world space to screen space. Here is an example test: [Fact] public void foo() { var camera = new Camera(new Viewport(0, 0, 250, 100)); DrawingVisual worldRender; DrawingVisual viewRender; DrawingVisual screenRender; this.Render(camera, out worldRender, out viewRender, out screenRender, new Vector3(30, 0, 0), new Vector3(30, 40, 0)); this.ShowRenders(camera, worldRender, viewRender, screenRender); } And here's what pops up when I run this test: World space looks OK, although I suspect the z axis is going into the screen instead of towards the viewer. View space has me completely baffled. I was expecting the camera to be sitting above (0, 0) and looking towards the center of the scene. Instead, the z axis seems to be the wrong way around, and the camera is positioned in the opposite corner to what I expect! I suspect screen space will be another thing altogether, but can anyone explain what I'm doing wrong in my Camera class? UPDATE I made some progress in terms of getting things to look visually as I expect, but only through intuition: not an actual understanding of what I'm doing. Any enlightenment would be greatly appreciated. I realized that my view space was flipped both vertically and horizontally compared to what I expected, so I changed my view matrix to scale accordingly: this.viewMatrix = Matrix.CreateLookAt(this.location, this.target, this.up) * Matrix.CreateScale(this.zoom, this.zoom, 1) * Matrix.CreateScale(-1, -1, 1); I could combine the two CreateScale calls, but have left them separate for clarity. Again, I have no idea why this is necessary, but it fixed my view space: But now my screen space needs to be flipped vertically, so I modified my projection matrix accordingly: this.projectionMatrix = Matrix.CreatePerspectiveFieldOfView(0.7853982f, viewport.AspectRatio, 1, 2) * Matrix.CreateScale(1, -1, 1); And this results in what I was expecting from my first attempt: I have also just tried using Camera to render sprites via a SpriteBatch to make sure everything works there too, and it does. But the question remains: why do I need to do all this flipping of axes to get the space coordinates the way I expect? UPDATE 2 I've since improved my rendering logic in my test suite so that it supports geometries and so that lines get lighter the further away they are from the camera. I wanted to do this to avoid optical illusions and to further prove to myself that I'm looking at what I think I am. Here is an example: In this case, I have 3 geometries: a cube, a sphere, and a polyline on the top face of the cube. 
Notice how the darkening and lightening of the lines correctly identifies those portions of the geometries closer to the camera. If I remove the negative scaling I had to put in, I see: So you can see I'm still in the same boat - I still need those vertical and horizontal flips in my matrices to get things to appear correctly. In the interests of giving people a repro to play with, here is the complete code needed to generate the above. If you want to run via the test harness, just install the xunit package: Camera.cs: using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using System.Diagnostics; public sealed class Camera { private readonly Viewport viewport; private readonly Matrix projectionMatrix; private Matrix? viewMatrix; private Vector3 location; private Vector3 target; private Vector3 up; private float zoom; public Camera(Viewport viewport) { this.viewport = viewport; // for an explanation of the negative scaling, see: http://gamedev.stackexchange.com/questions/63409/ this.projectionMatrix = Matrix.CreatePerspectiveFieldOfView(0.7853982f, viewport.AspectRatio, 1, 2) * Matrix.CreateScale(1, -1, 1); // defaults this.location = new Vector3(this.viewport.Width / 2, this.viewport.Height, 100); this.target = new Vector3(this.viewport.Width / 2, this.viewport.Height / 2, 0); this.up = new Vector3(0, 0, 1); this.zoom = 1; } public Viewport Viewport { get { return this.viewport; } } public Vector3 Location { get { return this.location; } set { this.location = value; this.viewMatrix = null; } } public Vector3 Target { get { return this.target; } set { this.target = value; this.viewMatrix = null; } } public Vector3 Up { get { return this.up; } set { this.up = value; this.viewMatrix = null; } } public float Zoom { get { return this.zoom; } set { this.zoom = value; this.viewMatrix = null; } } public Matrix ProjectionMatrix { get { return this.projectionMatrix; } } public Matrix ViewMatrix { get { if (this.viewMatrix == null) { // for an explanation of the negative scaling, see: http://gamedev.stackexchange.com/questions/63409/ this.viewMatrix = Matrix.CreateLookAt(this.location, this.target, this.up) * Matrix.CreateScale(this.zoom) * Matrix.CreateScale(-1, -1, 1); } return this.viewMatrix.Value; } } public Vector2 WorldPointToScreen(Vector3 point) { var result = viewport.Project(point, this.ProjectionMatrix, this.ViewMatrix, Matrix.Identity); return new Vector2(result.X, result.Y); } public void WorldPointsToScreen(Vector3[] points, Vector2[] destination) { Debug.Assert(points != null); Debug.Assert(destination != null); Debug.Assert(points.Length == destination.Length); for (var i = 0; i < points.Length; ++i) { destination[i] = this.WorldPointToScreen(points[i]); } } } CameraFixture.cs: using Microsoft.Xna.Framework.Graphics; using System; using System.Collections.Generic; using System.Linq; using System.Windows; using System.Windows.Controls; using System.Windows.Media; using Xunit; using XNA = Microsoft.Xna.Framework; public sealed class CameraFixture { [Fact] public void foo() { var camera = new Camera(new Viewport(0, 0, 250, 100)); DrawingVisual worldRender; DrawingVisual viewRender; DrawingVisual screenRender; this.Render( camera, out worldRender, out viewRender, out screenRender, new Sphere(30, 15) { WorldMatrix = XNA.Matrix.CreateTranslation(155, 50, 0) }, new Cube(30) { WorldMatrix = XNA.Matrix.CreateTranslation(75, 60, 15) }, new PolyLine(new XNA.Vector3(0, 0, 0), new XNA.Vector3(10, 10, 0), new XNA.Vector3(20, 0, 0), new XNA.Vector3(0, 0, 0)) { WorldMatrix = 
XNA.Matrix.CreateTranslation(65, 55, 30) }); this.ShowRenders(worldRender, viewRender, screenRender); } #region Supporting Fields private static readonly Pen xAxisPen = new Pen(Brushes.Red, 2); private static readonly Pen yAxisPen = new Pen(Brushes.Green, 2); private static readonly Pen zAxisPen = new Pen(Brushes.Blue, 2); private static readonly Pen viewportPen = new Pen(Brushes.Gray, 1); private static readonly Pen nonScreenSpacePen = new Pen(Brushes.Black, 0.5); private static readonly Color geometryBaseColor = Colors.Black; #endregion #region Supporting Methods private void Render(Camera camera, out DrawingVisual worldRender, out DrawingVisual viewRender, out DrawingVisual screenRender, params Geometry[] geometries) { var worldDrawingVisual = new DrawingVisual(); var viewDrawingVisual = new DrawingVisual(); var screenDrawingVisual = new DrawingVisual(); const int axisLength = 15; using (var worldDrawingContext = worldDrawingVisual.RenderOpen()) using (var viewDrawingContext = viewDrawingVisual.RenderOpen()) using (var screenDrawingContext = screenDrawingVisual.RenderOpen()) { // draw lines around the camera's viewport var viewportBounds = camera.Viewport.Bounds; var viewportLines = new Tuple<int, int, int, int>[] { Tuple.Create(viewportBounds.Left, viewportBounds.Bottom, viewportBounds.Left, viewportBounds.Top), Tuple.Create(viewportBounds.Left, viewportBounds.Top, viewportBounds.Right, viewportBounds.Top), Tuple.Create(viewportBounds.Right, viewportBounds.Top, viewportBounds.Right, viewportBounds.Bottom), Tuple.Create(viewportBounds.Right, viewportBounds.Bottom, viewportBounds.Left, viewportBounds.Bottom) }; foreach (var viewportLine in viewportLines) { var viewStart = XNA.Vector3.Transform(new XNA.Vector3(viewportLine.Item1, viewportLine.Item2, 0), camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(new XNA.Vector3(viewportLine.Item3, viewportLine.Item4, 0), camera.ViewMatrix); var screenStart = camera.WorldPointToScreen(new XNA.Vector3(viewportLine.Item1, viewportLine.Item2, 0)); var screenEnd = camera.WorldPointToScreen(new XNA.Vector3(viewportLine.Item3, viewportLine.Item4, 0)); worldDrawingContext.DrawLine(viewportPen, new Point(viewportLine.Item1, viewportLine.Item2), new Point(viewportLine.Item3, viewportLine.Item4)); viewDrawingContext.DrawLine(viewportPen, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); screenDrawingContext.DrawLine(viewportPen, new Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } // draw axes var axisLines = new Tuple<int, int, int, int, int, int, Pen>[] { Tuple.Create(0, 0, 0, axisLength, 0, 0, xAxisPen), Tuple.Create(0, 0, 0, 0, axisLength, 0, yAxisPen), Tuple.Create(0, 0, 0, 0, 0, axisLength, zAxisPen) }; foreach (var axisLine in axisLines) { var viewStart = XNA.Vector3.Transform(new XNA.Vector3(axisLine.Item1, axisLine.Item2, axisLine.Item3), camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(new XNA.Vector3(axisLine.Item4, axisLine.Item5, axisLine.Item6), camera.ViewMatrix); var screenStart = camera.WorldPointToScreen(new XNA.Vector3(axisLine.Item1, axisLine.Item2, axisLine.Item3)); var screenEnd = camera.WorldPointToScreen(new XNA.Vector3(axisLine.Item4, axisLine.Item5, axisLine.Item6)); worldDrawingContext.DrawLine(axisLine.Item7, new Point(axisLine.Item1, axisLine.Item2), new Point(axisLine.Item4, axisLine.Item5)); viewDrawingContext.DrawLine(axisLine.Item7, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); screenDrawingContext.DrawLine(axisLine.Item7, new 
Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } // for all points in all geometries to be rendered, find the closest and furthest away from the camera so we can lighten lines that are further away var distancesToAllGeometrySections = from geometry in geometries let geometryViewMatrix = geometry.WorldMatrix * camera.ViewMatrix from section in geometry.Sections from point in new XNA.Vector3[] { section.Item1, section.Item2 } let viewPoint = XNA.Vector3.Transform(point, geometryViewMatrix) select viewPoint.Length(); var furthestDistance = distancesToAllGeometrySections.Max(); var closestDistance = distancesToAllGeometrySections.Min(); var deltaDistance = Math.Max(0.000001f, furthestDistance - closestDistance); // draw each geometry for (var i = 0; i < geometries.Length; ++i) { var geometry = geometries[i]; // there's probably a more correct name for this, but basically this gets the geometry relative to the camera so we can check how far away each point is from the camera var geometryViewMatrix = geometry.WorldMatrix * camera.ViewMatrix; // we order roughly by those sections furthest from the camera to those closest, so that the closer ones "overwrite" the ones further away var orderedSections = from section in geometry.Sections let startPointRelativeToCamera = XNA.Vector3.Transform(section.Item1, geometryViewMatrix) let endPointRelativeToCamera = XNA.Vector3.Transform(section.Item2, geometryViewMatrix) let startPointDistance = startPointRelativeToCamera.Length() let endPointDistance = endPointRelativeToCamera.Length() orderby (startPointDistance + endPointDistance) descending select new { Section = section, DistanceToStart = startPointDistance, DistanceToEnd = endPointDistance }; foreach (var orderedSection in orderedSections) { var start = XNA.Vector3.Transform(orderedSection.Section.Item1, geometry.WorldMatrix); var end = XNA.Vector3.Transform(orderedSection.Section.Item2, geometry.WorldMatrix); var viewStart = XNA.Vector3.Transform(start, camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(end, camera.ViewMatrix); worldDrawingContext.DrawLine(nonScreenSpacePen, new Point(start.X, start.Y), new Point(end.X, end.Y)); viewDrawingContext.DrawLine(nonScreenSpacePen, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); // screen rendering is more complicated purely because I wanted geometry to fade the further away it is from the camera // otherwise, it's very hard to tell whether the rendering is actually correct or not var startDistanceRatio = (orderedSection.DistanceToStart - closestDistance) / deltaDistance; var endDistanceRatio = (orderedSection.DistanceToEnd - closestDistance) / deltaDistance; // lerp towards white based on distance from camera, but only to a maximum of 90% var startColor = Lerp(geometryBaseColor, Colors.White, startDistanceRatio * 0.9f); var endColor = Lerp(geometryBaseColor, Colors.White, endDistanceRatio * 0.9f); var screenStart = camera.WorldPointToScreen(start); var screenEnd = camera.WorldPointToScreen(end); var brush = new LinearGradientBrush { StartPoint = new Point(screenStart.X, screenStart.Y), EndPoint = new Point(screenEnd.X, screenEnd.Y), MappingMode = BrushMappingMode.Absolute }; brush.GradientStops.Add(new GradientStop(startColor, 0)); brush.GradientStops.Add(new GradientStop(endColor, 1)); var pen = new Pen(brush, 1); brush.Freeze(); pen.Freeze(); screenDrawingContext.DrawLine(pen, new Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } } } worldRender = worldDrawingVisual; 
viewRender = viewDrawingVisual; screenRender = screenDrawingVisual; } private static float Lerp(float start, float end, float amount) { var difference = end - start; var adjusted = difference * amount; return start + adjusted; } private static Color Lerp(Color color, Color to, float amount) { var sr = color.R; var sg = color.G; var sb = color.B; var er = to.R; var eg = to.G; var eb = to.B; var r = (byte)Lerp(sr, er, amount); var g = (byte)Lerp(sg, eg, amount); var b = (byte)Lerp(sb, eb, amount); return Color.FromArgb(255, r, g, b); } private void ShowRenders(DrawingVisual worldRender, DrawingVisual viewRender, DrawingVisual screenRender) { var itemsControl = new ItemsControl(); itemsControl.Items.Add(new HeaderedContentControl { Header = "World", Content = new DrawingVisualHost(worldRender)}); itemsControl.Items.Add(new HeaderedContentControl { Header = "View", Content = new DrawingVisualHost(viewRender) }); itemsControl.Items.Add(new HeaderedContentControl { Header = "Screen", Content = new DrawingVisualHost(screenRender) }); var window = new Window { Title = "Renders", Content = itemsControl, ShowInTaskbar = true, SizeToContent = SizeToContent.WidthAndHeight }; window.ShowDialog(); } #endregion #region Supporting Types // stupidly simple 3D geometry class, consisting of a series of sections that will be connected by lines private abstract class Geometry { public abstract IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get; } public XNA.Matrix WorldMatrix { get; set; } } private sealed class Line : Geometry { private readonly XNA.Vector3 magnitude; public Line(XNA.Vector3 magnitude) { this.magnitude = magnitude; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { yield return Tuple.Create(XNA.Vector3.Zero, this.magnitude); } } } private sealed class PolyLine : Geometry { private readonly XNA.Vector3[] points; public PolyLine(params XNA.Vector3[] points) { this.points = points; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { if (this.points.Length < 2) { yield break; } var end = this.points[0]; for (var i = 1; i < this.points.Length; ++i) { var start = end; end = this.points[i]; yield return Tuple.Create(start, end); } } } } private sealed class Cube : Geometry { private readonly float size; public Cube(float size) { this.size = size; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { var halfSize = this.size / 2; var frontBottomLeft = new XNA.Vector3(-halfSize, halfSize, -halfSize); var frontBottomRight = new XNA.Vector3(halfSize, halfSize, -halfSize); var frontTopLeft = new XNA.Vector3(-halfSize, halfSize, halfSize); var frontTopRight = new XNA.Vector3(halfSize, halfSize, halfSize); var backBottomLeft = new XNA.Vector3(-halfSize, -halfSize, -halfSize); var backBottomRight = new XNA.Vector3(halfSize, -halfSize, -halfSize); var backTopLeft = new XNA.Vector3(-halfSize, -halfSize, halfSize); var backTopRight = new XNA.Vector3(halfSize, -halfSize, halfSize); // front face yield return Tuple.Create(frontBottomLeft, frontBottomRight); yield return Tuple.Create(frontBottomLeft, frontTopLeft); yield return Tuple.Create(frontTopLeft, frontTopRight); yield return Tuple.Create(frontTopRight, frontBottomRight); // left face yield return Tuple.Create(frontTopLeft, backTopLeft); yield return Tuple.Create(backTopLeft, backBottomLeft); yield return Tuple.Create(backBottomLeft, frontBottomLeft); // right face yield return Tuple.Create(frontTopRight, backTopRight); yield return Tuple.Create(backTopRight, 
backBottomRight); yield return Tuple.Create(backBottomRight, frontBottomRight); // back face yield return Tuple.Create(backBottomLeft, backBottomRight); yield return Tuple.Create(backTopLeft, backTopRight); } } } private sealed class Sphere : Geometry { private readonly float radius; private readonly int subsections; public Sphere(float radius, int subsections) { this.radius = radius; this.subsections = subsections; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { var latitudeLines = this.subsections; var longitudeLines = this.subsections; // see http://stackoverflow.com/a/4082020/5380 var results = from latitudeLine in Enumerable.Range(0, latitudeLines) from longitudeLine in Enumerable.Range(0, longitudeLines) let latitudeRatio = latitudeLine / (float)latitudeLines let longitudeRatio = longitudeLine / (float)longitudeLines let nextLatitudeRatio = (latitudeLine + 1) / (float)latitudeLines let nextLongitudeRatio = (longitudeLine + 1) / (float)longitudeLines let z1 = Math.Cos(Math.PI * latitudeRatio) let z2 = Math.Cos(Math.PI * nextLatitudeRatio) let x1 = Math.Sin(Math.PI * latitudeRatio) * Math.Cos(Math.PI * 2 * longitudeRatio) let y1 = Math.Sin(Math.PI * latitudeRatio) * Math.Sin(Math.PI * 2 * longitudeRatio) let x2 = Math.Sin(Math.PI * nextLatitudeRatio) * Math.Cos(Math.PI * 2 * longitudeRatio) let y2 = Math.Sin(Math.PI * nextLatitudeRatio) * Math.Sin(Math.PI * 2 * longitudeRatio) let x3 = Math.Sin(Math.PI * latitudeRatio) * Math.Cos(Math.PI * 2 * nextLongitudeRatio) let y3 = Math.Sin(Math.PI * latitudeRatio) * Math.Sin(Math.PI * 2 * nextLongitudeRatio) let start = new XNA.Vector3((float)x1 * radius, (float)y1 * radius, (float)z1 * radius) let firstEnd = new XNA.Vector3((float)x2 * radius, (float)y2 * radius, (float)z2 * radius) let secondEnd = new XNA.Vector3((float)x3 * radius, (float)y3 * radius, (float)z1 * radius) select new { First = Tuple.Create(start, firstEnd), Second = Tuple.Create(start, secondEnd) }; foreach (var result in results) { yield return result.First; yield return result.Second; } } } } #endregion }
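One small simplification of the poster's workaround, offered as a sketch rather than an answer from the original thread: because diagonal scale matrices multiply componentwise, the zoom and the flip can be folded into single CreateScale calls. This mirrors the matrices from the UPDATE above:

// equivalent to CreateScale(zoom, zoom, 1) * CreateScale(-1, -1, 1)
this.viewMatrix =
    Matrix.CreateLookAt(this.location, this.target, this.up) *
    Matrix.CreateScale(-this.zoom, -this.zoom, 1f);

// equivalent to the projection flip in Camera.cs
this.projectionMatrix =
    Matrix.CreatePerspectiveFieldOfView(0.7853982f, viewport.AspectRatio, 1, 2) *
    Matrix.CreateScale(1f, -1f, 1f);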

    Read the article

  • using Generics in C# [closed]

    - by Uphaar Goyal
I have started looking into using generics in C#. As an example, I have an abstract class which implements generic methods. These generic methods take a SQL query, a connection string, and the type T as parameters, and then construct the data set, populate the object, and return it back. This way each business object does not need to have a method to populate it with data or construct its data set. All we need to do is pass the type, the SQL query, and the connection string, and these methods do the rest. I am providing the code sample here. I am just looking to discuss with people who might have a better solution to what I have done.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using System.Data.SqlClient;
using MWTWorkUnitMgmtLib.Business;
using System.Collections.ObjectModel;
using System.Reflection;

namespace MWTWorkUnitMgmtLib.TableGateway
{
    public abstract class TableGateway
    {
        public TableGateway()
        {
        }

        protected abstract string GetConnection();

        protected abstract string GetTableName();

        public DataSet GetDataSetFromSql(string connectionString, string sql)
        {
            DataSet ds = null;
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = connection.CreateCommand())
            {
                command.CommandText = sql;
                connection.Open();

                // the data set is returned to the caller, so it must not
                // be wrapped in a using block here
                ds = new DataSet();
                using (SqlDataAdapter adapter = new SqlDataAdapter(command))
                {
                    adapter.Fill(ds);
                }
            }
            return ds;
        }

        public static bool ContainsColumnName(DataRow dr, string columnName)
        {
            return dr.Table.Columns.Contains(columnName);
        }

        public DataTable GetDataTable(string connString, string sql)
        {
            DataSet ds = GetDataSetFromSql(connString, sql);
            DataTable dt = null;
            if (ds != null && ds.Tables.Count > 0)
            {
                dt = ds.Tables[0];
            }
            return dt;
        }

        public T Construct<T>(DataRow dr, T t) where T : class, new()
        {
            Type t1 = t.GetType();
            PropertyInfo[] properties = t1.GetProperties();
            foreach (PropertyInfo property in properties)
            {
                if (ContainsColumnName(dr, property.Name) && (dr[property.Name] != null))
                    property.SetValue(t, dr[property.Name], null);
            }
            return t;
        }

        public T GetByID<T>(string connString, string sql, T t) where T : class, new()
        {
            DataTable dt = GetDataTable(connString, sql);
            DataRow dr = dt.Rows[0];
            return Construct(dr, t);
        }

        public List<T> GetAll<T>(string connString, string sql, T t) where T : class, new()
        {
            List<T> collection = new List<T>();
            DataTable dt = GetDataTable(connString, sql);
            foreach (DataRow dr in dt.Rows)
                collection.Add(Construct(dr, t));
            return collection;
        }
    }
}
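For context, here is a small sketch of how a concrete gateway might consume these methods. This is not part of the original question - the Product type, table name, and connection string are made up for illustration:

public class Product
{
    public int ID { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductGateway : TableGateway
{
    protected override string GetConnection()
    {
        // illustrative connection string
        return "Data Source=.;Initial Catalog=Shop;Integrated Security=True";
    }

    protected override string GetTableName()
    {
        return "Products";
    }

    public List<Product> GetAllProducts()
    {
        // T is inferred as Product from the third argument
        return GetAll(GetConnection(), "SELECT * FROM Products", new Product());
    }

    public Product GetProduct(int id)
    {
        // concatenating the id is for illustration only;
        // real code should use a parameterized query
        return GetByID(GetConnection(), "SELECT * FROM Products WHERE ID = " + id, new Product());
    }
}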

    Read the article

  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while using those features inside of .NET applications hasn't been as straight forward as could be, because .NET natively doesn't support spatial types. There are workarounds for this with a few custom project like SharpMap or a hack using the Sql Server specific Geo types found in the Microsoft.SqlTypes assembly that ships with SQL server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5 the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with single location and distance calculations which is probably the most common usage scenario. SQL Server Transact-SQL Syntax for Spatial Data Before we look at how things work with Entity framework, lets take a look at how SQL Server allows you to use spatial data to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward. Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command:CREATE TABLE [dbo].[Geo]( [id] [int] IDENTITY(1,1) NOT NULL, [Location] [geography] NULL, [Long] [float] NOT NULL, [Lat] [float] NOT NULL ) Now using plain SQL you can insert data into the table using geography::STGeoFromText SQL CLR function:insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825) The STGeomFromText function accepts a string that points to a geometric item (a point here but can also be a line or path or polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier) which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326 as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in binary format which looks something like this: Once the location data is in the database you can query the data and do simple distance computations very easily. For example to calculate the distance of each of the values in the database to another spatial point is very easy to calculate. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database. 
Using the Location field the SQL looks like this:-- create a source point DECLARE @s geography SET @s = geography:: STGeomFromText('POINT(-121.527200 45.712113)' , 4326); --- return the ids select ID, Location as Geo , Location .ToString() as Point , @s.STDistance( Location) as distance from Geo order by distance The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference for which you can use the geography::STDistance function. This query produces the following output: The STDistance function returns the straight line distance between the passed in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point so the difference is 0. The other two points are two points here in town in Hood River a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closer to further away. This is great for searches of points of interest near a central location (YOU typically!). These geolocation functions are also available to you if you don't use the Geography/Geometry types, but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field but produces the same result as the previous query.--- using float fields select ID, geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326), geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326). ToString(), @s.STDistance( geography::STGeomFromText ('POINT(' + STR(long ,15, 7) + ' ' + Str(lat ,15, 7) + ')' , 4326)) as distance from geo order by distance Spatial Data in the Entity Framework Prior to Entity Framework 5.0 on .NET 4.5 consuming of the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5 however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions which in turn can be used to do common spatial queries like I described in the SQL syntax above. The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude. Creating a Model with Spatial Data Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography: public class GeoLocationContext : DbContext { public DbSet<GeoLocation> Locations { get; set; } } public class GeoLocation { public int Id { get; set; } public DbGeography Location { get; set; } public string Address { get; set; } } That's all there's to it. When you run this now against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier. Adding Spatial Data to the Database Next let's add some data to the table that includes some latitude and longitude data. 
An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right click and click on What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:[TestMethod] public void AddLocationsToDataBase() { var context = new GeoLocationContext(); // remove all context.Locations.ToList().ForEach( loc => context.Locations.Remove(loc)); context.SaveChanges(); var location = new GeoLocation() { // Create a point using native DbGeography Factory method Location = DbGeography.PointFromText( string.Format("POINT({0} {1})", -121.527200,45.712113) ,4326), Address = "301 15th Street, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.714240, -121.517265), Address = "The Hatchery, Bingen" }; context.Locations.Add(location); location = new GeoLocation() { // Create a point using a helper function (lat/long) Location = CreatePoint(45.708457, -121.514432), Address = "Kaze Sushi, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.722780, -120.209227), Address = "Arlington, OR" }; context.Locations.Add(location); context.SaveChanges(); } As promised, a DbGeography object has to be created with one of the static factory methods provided on the type as the Location.Longitude and Location.Latitude properties are read only. Here I'm using PointFromText() which uses a "Well Known Text" format to specify spatial data. In the first example I'm specifying to create a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples). You'll probably want to create a helper method to make the creation of Points easier to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:public static DbGeography CreatePoint(double latitude, double longitude) { var text = string.Format(CultureInfo.InvariantCulture.NumberFormat, "POINT({0} {1})", longitude, latitude); // 4326 is most common coordinate system used by GPS/Maps return DbGeography.PointFromText(text, 4326); } Using the helper the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around because Latitude and Longitude is the common format I've seen with mapping libraries (especially Google Mapping/Geolocation APIs with their LatLng type). When the context is changed the data is written into the database using the SQL Geography type which looks the same as in the earlier SQL examples shown. Querying Once you have some location data in the database it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location and order it by distance. 
Using LINQ to Entities, a query like this is easy to construct:

[TestMethod]
public void QueryLocationsTest()
{
    var sourcePoint = CreatePoint(45.712113, -121.527200);
    var context = new GeoLocationContext();

    // find any locations within 5 kilometers ordered by distance
    var matches = context.Locations
        .Where(loc => loc.Location.Distance(sourcePoint) < 5000)
        .OrderBy(loc => loc.Location.Distance(sourcePoint))
        .Select(loc => new
        {
            Address = loc.Address,
            Distance = loc.Location.Distance(sourcePoint)
        });

    Assert.IsTrue(matches.Count() > 0);

    foreach (var location in matches)
    {
        Console.WriteLine("{0} ({1:n0} meters)", location.Address, location.Distance);
    }
}

This example produces:

301 15th Street, Hood River (0 meters)
The Hatchery, Bingen (809 meters)
Kaze Sushi, Hood River (1,074 meters)

The first point in the database is the same as my source point I'm comparing against, so the distance is 0. The other two are within the 5 kilometer radius, while the Arlington location, which is 65 miles or so out, is not returned. The result is ordered by distance from closest to furthest away.

In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5km of the source point using the Location.Distance() function, which takes a source point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another database DbGeography property (say when you have two points in the same database for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by the distance to provide a result that shows locations from closest to furthest away, which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this.

Meters vs. Miles

As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (can be found in GeoUtils.cs of the sample project):

/// <summary>
/// Convert meters to miles
/// </summary>
public static double MetersToMiles(double? meters)
{
    if (meters == null)
        return 0F;
    return meters.Value * 0.000621371192;
}

/// <summary>
/// Convert miles to meters
/// </summary>
public static double MilesToMeters(double? miles)
{
    if (miles == null)
        return 0;
    return miles.Value * 1609.344;
}

Using these two helpers you can query on miles like this:

[TestMethod]
public void QueryLocationsMilesTest()
{
    var sourcePoint = CreatePoint(45.712113, -121.527200);
    var context = new GeoLocationContext();

    // find any locations within 5 miles ordered by distance
    var fiveMiles = GeoUtils.MilesToMeters(5);

    var matches = context.Locations
        .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles)
        .OrderBy(loc => loc.Location.Distance(sourcePoint))
        .Select(loc => new
        {
            Address = loc.Address,
            Distance = loc.Location.Distance(sourcePoint)
        });

    Assert.IsTrue(matches.Count() > 0);

    foreach (var location in matches)
    {
        Console.WriteLine("{0} ({1:n1} miles)", location.Address, GeoUtils.MetersToMiles(location.Distance));
    }
}

which produces:

301 15th Street, Hood River (0.0 miles)
The Hatchery, Bingen (0.5 miles)
Kaze Sushi, Hood River (0.7 miles)

Nice 'n simple.
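A common variation - not shown in the post, but a natural extension of the same pattern - is to ask for the N nearest locations regardless of radius. Under the same model and helpers, that is just an OrderBy plus Take:

// the 3 closest locations to the source point (types as in the post's tests)
var sourcePoint = CreatePoint(45.712113, -121.527200);
var context = new GeoLocationContext();

var nearest = context.Locations
    .OrderBy(loc => loc.Location.Distance(sourcePoint))
    .Take(3)
    .Select(loc => new
    {
        Address = loc.Address,
        Distance = loc.Location.Distance(sourcePoint)
    });

foreach (var loc in nearest)
    Console.WriteLine("{0} ({1:n0} meters)", loc.Address, loc.Distance);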
.NET 4.5 Only

Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4, which ships in the same NuGet package or installer) and require .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity, which is a CLR assembly and is only updated by major versions of .NET. Why this decision was made to add these types to System.Data.Entity, rather than to the frequently updated EntityFramework assembly that would have possibly made this work in .NET 4.0, is beyond me, especially given that there are no native .NET framework spatial types to begin with.

I find it also odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live in those assemblies. They will also work for general purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services, which in turn doesn't work with Entity Framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as I may be, for EF specific code the Entity Framework specific types are easy to use and work well.

Working with pre-.NET 4.5 Entity Framework and Spatial Data

If you can't go to .NET 4.5 just yet you can also still use spatial features in Entity Framework, but it's a lot more work, as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same TSQL syntax I showed earlier using Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:

[TestMethod]
public void RawSqlEfAddTest()
{
    string sqlFormat =
@"insert into GeoLocations( Location, Address) values
( geography::STGeomFromText('POINT({0} {1})', 4326), @p0 )";

    var sql = string.Format(sqlFormat, -121.527200, 45.712113);
    Console.WriteLine(sql);

    var context = new GeoLocationContext();
    Assert.IsTrue(context.Database.ExecuteSqlCommand(sql, "301 N. 15th Street") > 0);
}

Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which usually would be a bad practice, but is required here. I was unable to use ExecuteSqlCommand() and its named parameter syntax, as the longitude and latitude parameters are embedded into a string. Rest assured it's required, as the following does not work:

string sqlFormat =
@"insert into GeoLocations( Location, Address) values
( geography::STGeomFromText('POINT(@p0 @p1)', 4326), @p2 )";

context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street");

Explicitly assigning the point value with string.Format works, however. There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string (which can then be parsed to get Latitude and Longitude), and you can return calculated values like distance.
Here's an example of how to retrieve some geo data into a result set using EF's SqlQuery method:

[TestMethod]
public void RawSqlEfQueryTest()
{
    var sqlFormat =
@"DECLARE @s geography
SET @s = geography::STGeomFromText('POINT({0} {1})', 4326);
SELECT Address, Location.ToString() as GeoString, @s.STDistance(Location) as Distance
FROM GeoLocations
ORDER BY Distance";

    var sql = string.Format(sqlFormat, -121.527200, 45.712113);

    var context = new GeoLocationContext();
    var locations = context.Database.SqlQuery<ResultData>(sql);

    Assert.IsTrue(locations.Count() > 0);

    foreach (var location in locations)
    {
        Console.WriteLine(location.Address + " " + location.GeoString + " " + location.Distance);
    }
}

public class ResultData
{
    public string GeoString { get; set; }
    public double Distance { get; set; }
    public string Address { get; set; }
}

Hopefully you don't have to resort to this approach, as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before, I typically ended up retrieving only the record PKs and then running another query with just the PKs to retrieve the actual underlying DbContext entities. This was very inefficient and tedious, but it did work.

Summary

For the current project I'm working on we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app heavily relies on spatial queries and it was worth taking a chance with pre-release code to get this ease of integration, as opposed to manually falling back to stored procedures or raw SQL string queries to return spatial specific results. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-)

Resources

Download Sample Project

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ADO.NET, Sql Server, .NET.

    Read the article

  • Chunking a List - .NET vs Python

    - by Abhijeet Patel
Chunking a List

As I mentioned last time, I'm knee deep in Python these days. I come from a statically typed background, so it's definitely a mental adjustment. List comprehensions are BIG in Python, and having worked with a few of them I can see why. Let's say we need to chunk a list into sublists of a specified size. Here is how we'd do it in C#:

static class Extensions
{
    public static IEnumerable<List<T>> Chunk<T>(this List<T> l, int chunkSize)
    {
        // a chunk size of 0 would never advance i, so reject it along with negatives
        if (chunkSize <= 0)
        {
            throw new ArgumentException("chunkSize must be positive", "chunkSize");
        }
        for (int i = 0; i < l.Count; i += chunkSize)
        {
            yield return new List<T>(l.Skip(i).Take(chunkSize));
        }
    }
}

static void Main(string[] args)
{
    var l = new List<string> { "a", "b", "c", "d", "e", "f", "g" };

    foreach (var list in l.Chunk(7))
    {
        string str = list.Aggregate((s1, s2) => s1 + "," + s2);
        Console.WriteLine(str);
    }
}

A little wordy, but still pretty concise thanks to LINQ. We Skip() past the first i elements and yield out a new List of the next chunkSize elements on each iteration. The Python implementation is a bit more terse:

def chunkIterable(iter, chunkSize):
    '''Chunks an iterable
       object into a list of the specified chunkSize
    '''
    assert hasattr(iter, "__iter__"), "iter is not an iterable"
    for i in xrange(0, len(iter), chunkSize):
        yield iter[i:i + chunkSize]

if __name__ == '__main__':
    l = ['a', 'b', 'c', 'd', 'e', 'f']
    generator = chunkIterable(l, 2)
    try:
        while(1):
            print generator.next()
    except StopIteration:
        pass

xrange produces the indices in the specified range lazily, given a start, a stop, and a step, and can be used in a for loop (much like using a C# iterator in a foreach loop). Since chunkIterable has a yield statement, it turns this method into a generator as well. iter[i:i + chunkSize] essentially slices the list based on the current iteration index and chunkSize and creates a new list that we yield out to the caller one at a time. A generator, much like an iterator, is a state machine, and each subsequent call to it remembers the state at which the last call left off and resumes execution from that point. The caveat to keep in mind is that since variables are not explicitly typed, we need to ensure that the object passed in is iterable using hasattr(iter, "__iter__"). This way we can perform chunking on any object which is an "iterable", very similar to accepting an IEnumerable in the .NET land.
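The Main example above happens to use a chunk size equal to the list length, so it prints a single chunk. For a clearer picture of the chunking behaviour, here is a quick sketch of my own with a chunk size that doesn't divide the list evenly:

var letters = new List<string> { "a", "b", "c", "d", "e", "f", "g" };

foreach (var chunk in letters.Chunk(3))
{
    // prints: a,b,c  then  d,e,f  then  g (the last chunk is shorter)
    Console.WriteLine(string.Join(",", chunk));
}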

    Read the article

  • Delegate performance of Roslyn Sept 2012 CTP is impressive

    - by dotneteer
I wanted to dynamically compile some delegates using Roslyn. I came across this article by Piotr Sowa. The article shows that the delegate compiled with the Roslyn CTP was not very fast. Since the article was written using the Roslyn June 2012 CTP, I decided to give the Sept 2012 CTP a try. There are significant changes in the Roslyn Sept 2012 CTP in both the C# syntax supported and the API. I found Anoop Madhisidanan’s article that has an example of the new API. With that, I was able to put together a comparison. In my test, the Roslyn-compiled delegate is as fast as a C# (VS 2012) compiled delegate. See the source code below and give it a try. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Diagnostics; using Roslyn.Compilers; using Roslyn.Scripting.CSharp; using Roslyn.Scripting; namespace RoslynTest { class Program { public Func<int, int, int> del; static void Main(string[] args) { Stopwatch stopWatch = new Stopwatch(); Program p = new Program(); p.SetupDel(); //Comment out this line and uncomment the next line to compare //p.SetupScript(); stopWatch.Start(); int result = DoWork(p.del); stopWatch.Stop(); Console.WriteLine(result); Console.WriteLine("Time elapsed {0}", stopWatch.ElapsedMilliseconds); Console.Read(); } private void SetupDel() { del = (s, i) => ++s; } private void SetupScript() { //Create the script engine //The script engine constructor parameters have changed var engine = new ScriptEngine(); //Use the engine's AddReference for adding the required //assemblies new[] { typeof (Console).Assembly, typeof (Program).Assembly, typeof (IEnumerable<>).Assembly, typeof (IQueryable).Assembly }.ToList().ForEach(asm => engine.AddReference(asm)); new[] { "System", "System.Linq", "System.Collections", "System.Collections.Generic" }.ToList().ForEach(ns => engine.ImportNamespace(ns)); //Now create a session using the engine's CreateSession method, //which can be seeded with a host object var session = engine.CreateSession(); var submission = session.CompileSubmission<Func<int, int, int>>("new Func<int, int, int>((s, i) => ++s)"); del = submission.Execute(); //- See more at: http://www.amazedsaint.com/2012/09/roslyn-september-ctp-2012-overview-api.html#sthash.1VutrWiW.dpuf } private static int DoWork(Func<int, int, int> del) { int result = Enumerable.Range(1, 1000000).Aggregate(del); return result; } } }  Since the Roslyn Sept 2012 CTP is already over a year old, I cannot wait to see a new version coming out.
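One caveat when reading micro-benchmarks like this: the first call through a delegate pays the JIT cost, so a warm-up call before starting the Stopwatch keeps compilation out of the measured interval. A minimal sketch against the Program class above (the warm-up step is my addition, not part of the original test):

// Hypothetical warm-up before timing; 'p', 'del' and 'DoWork' are from the post above.
p.del(0, 0);               // force JIT compilation of the delegate body
GC.Collect();              // start from a clean heap to reduce GC noise
stopWatch.Restart();       // Restart() zeroes the elapsed time and starts timing
int result = DoWork(p.del);
stopWatch.Stop();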

    Read the article

  • Parallel LINQ - PLINQ

    - by nmarun
Turns out that with .NET 4.0 we can run a query like a multi-threaded application. Say you want to query a collection of objects and return only those that meet certain conditions. Until now, we basically had one ‘control’ that iterated over all the objects in the collection, checked the condition on each object and returned it if it passed. We can all agree that if we ‘break’ this task into smaller ones, assign each task to a different ‘control’ and ask all the controls to do their job in parallel, the time taken to finish the entire task will be much lower. Welcome to PLINQ. Let’s take some examples. I have the following method that uses our good ol’ LINQ. 1: private static void Linq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers = from num in source 12: where IsDivisibleBy(num, 2) 13: select num; 14: 15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } I’ve added comments for the major steps, but the only thing I want to talk about here is the IsDivisibleBy() method. I know I could have just included the logic directly in the where clause. I called a method to add ‘delay’ to the execution of the query - to simulate a loooooooooong operation (it will be easier to compare the results). 1: private static bool IsDivisibleBy(int number, int divisor) 2: { 3: // iterate over some database query 4: // to add time to the execution of this method; 5: // the TableB has around 10 records 6: for (int i = 0; i < 10; i++) 7: { 8: DataClasses1DataContext dataContext = new DataClasses1DataContext(); 9: var query = from b in dataContext.TableBs select b; 10: 11: foreach (var row in query) 12: { 13: // Do NOTHING (wish my job was like this) 14: } 15: } 16:  17: return number % divisor == 0; 18: } Now, let’s look at how to modify this to PLINQ. 1: private static void Plinq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers = from num in source.AsParallel() 12: where IsDivisibleBy(num, 2) 13: select num; 14:  15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } That’s it, this is now in PLINQ format. Oh, and if you haven’t found the difference, look at line 11 a little more closely. You’ll see an extension method ‘AsParallel()’ added to the ‘source’ variable. Couldn’t be simpler, right? So this is going to improve the performance for us. Let’s test it. So in my Main method of the Console application that I’m working on, I make a call to both. 
1: static void Main(string[] args) 2: { 3: // set lower and upper limits 4: int lowerLimit = 1; 5: int upperLimit = 20; 6: // call the methods 7: Console.WriteLine("Calling Linq() method"); 8: Linq(lowerLimit, upperLimit); 9: 10: Console.WriteLine(); 11: Console.WriteLine("Calling Plinq() method"); 12: Plinq(lowerLimit, upperLimit); 13:  14: Console.ReadLine(); // just so I get enough time to read the output 15: } YMMV, but here are the results that I got:    It’s quite obvious from the above results that the Plinq() method is taking considerably less time than the Linq() version. I’m sure you’ve already noticed that the output of the Plinq() method is not in order. That’s because each of the ‘controls’ we sent to fetch the results reported values as and when they obtained them. This is something about parallel LINQ that one needs to remember – the output order is not guaranteed to match the source order. This could be counted as a negative about PLINQ (emphasize ‘could’). Nevertheless, if we want the collection to be sorted, we can use a SortedSet (.NET 4.0) or build our own custom ‘sorter’. Either way we go, there’s a good chance we’ll end up with better performance using PLINQ. And there’s another negative of PLINQ (depending on how you see it), regarding CPU usage. See the usage for the Linq() method (captured with Resource Monitor): I have dual CPUs — see the height of the peak in the bottom two blocks — and now compare to what happens when I run the Plinq() method. The difference is obvious: higher usage, but for a shorter duration (width of the peak). Both observations make sense: Linq() runs for a longer time but uses fewer resources, whereas Plinq() runs for a shorter time and consumes more resources. Even after knowing all this, I’m still inclined towards PLINQ. PLINQ rocks! (no hard feelings LINQ)
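Worth noting alongside the SortedSet suggestion: PLINQ can also preserve source order itself via AsOrdered(), at the cost of some of the speedup, since results must be buffered and re-sequenced. A sketch against the Plinq() method above:

// Same query as line 11 of Plinq(), but AsOrdered() tells PLINQ to emit
// results in source order while still evaluating the predicate in parallel.
var evenNumbers = from num in source.AsParallel().AsOrdered()
                  where IsDivisibleBy(num, 2)
                  select num;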

    Read the article

  • How to prevent ‘Select *’ : The elegant way

    - by Dave Ballantyne
I’ve been doing a lot of work with the “Microsoft SQL Server 2012 Transact-SQL Language Service” recently; see my post here and article here for more details on its use. An obvious application is to interrogate SQL scripts to enforce our coding standards. In the SQL world a no-brainer is SELECT *. All apologies must now be given to Jorge Segarra and his post “How To Prevent SELECT * The Evil Way”, as this is a blatant rip-off IMO, but the only true way to check for this particular evilness is to parse the SQL as if we were SQL Server itself, and the parser mentioned above is, pretty much, the best tool for doing this. So without further ado, let’s have a look at a PowerShell script that does exactly that: cls #Load the assembly [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.SqlParser") | Out-Null $ParseOptions = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions $ParseOptions.BatchSeparator = 'GO' #Create the object $Parser = new-object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($ParseOptions) $SqlArr = Get-Content "C:\scripts\myscript.sql" $Sql = "" foreach($Line in $SqlArr){ $Sql+=$Line $Sql+="`r`n" } $Parser.SetSource($Sql,0) $Token=[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SET $IsEndOfBatch = $false $IsMatched = $false $IsExecAutoParamHelp = $false $Batch = "" $BatchStart =0 $Start=0 $End=0 $State=0 $SelectColumns=@(); $InSelect = $false $InWith = $false; while(($Token = $Parser.GetNext([ref]$State ,[ref]$Start, [ref]$End, [ref]$IsMatched, [ref]$IsExecAutoParamHelp ))-ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF) { $Str = $Sql.Substring($Start,($End-$Start)+1) try{ ($TokenPrs =[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]$Token) | Out-Null #Write-Host $TokenPrs if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){ $InSelect =$true $SelectColumns+="" } if($TokenPrs -eq [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_FROM){ $InSelect =$false #Write-Host $SelectColumns -BackgroundColor Red foreach($Col in $SelectColumns){ if($Col.EndsWith("*")){ Write-Host "select * is not allowed" exit } } $SelectColumns =@() } }catch{ #$Error $TokenPrs = $null } if($InSelect -and $TokenPrs -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SELECT){ if($Str -eq ","){ $SelectColumns+="" }else{ $SelectColumns[$SelectColumns.Length-1]+=$Str } } } OK, I’m not going to pretend that it’s the prettiest of PowerShell scripts, but if our parsed script file “C:\Scripts\MyScript.SQL” contains SELECT * then “select * is not allowed” will be written to the host. So, where can this go wrong? It can’t, or at least shouldn’t, go wrong, but it is lacking in functionality. IMO, SELECT * should be allowed in CTEs, views and inline table-valued functions at least, and as it stands they will be reported upon. Anyway, it is a start and is more reliable than other methods.
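For anyone who would rather host the same check in C#, here is a rough transliteration of the core scanning loop. It assumes the same Scanner.GetNext signature and Tokens values that the PowerShell above exercises via [ref], so treat it as a sketch rather than a verified sample:

// Sketch: mirrors the PowerShell loop using the same parser assembly.
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.SqlServer.Management.SqlParser.Parser;

class SelectStarCheck
{
    static void Main()
    {
        // Build the scanner exactly as the PowerShell script does.
        var parseOptions = new ParseOptions();
        parseOptions.BatchSeparator = "GO";
        var scanner = new Scanner(parseOptions);

        string sql = File.ReadAllText(@"C:\scripts\myscript.sql");
        scanner.SetSource(sql, 0);

        int state = 0, start = 0, end = 0, token;
        bool isMatched = false, isExecAutoParamHelp = false;
        bool inSelect = false;
        var selectColumns = new List<string>();

        // Assumed signature, taken from the [ref] calls in the script above.
        while ((token = scanner.GetNext(ref state, ref start, ref end,
                   ref isMatched, ref isExecAutoParamHelp)) != (int)Tokens.EOF)
        {
            string str = sql.Substring(start, end - start + 1);

            if (token == (int)Tokens.TOKEN_SELECT)
            {
                inSelect = true;
                selectColumns.Add("");
            }
            else if (token == (int)Tokens.TOKEN_FROM)
            {
                inSelect = false;
                // TrimEnd() is a small hedge: the raw token text can carry
                // trailing whitespace that the PowerShell version would miss.
                if (selectColumns.Exists(c => c.TrimEnd().EndsWith("*")))
                {
                    Console.WriteLine("select * is not allowed");
                    return;
                }
                selectColumns.Clear();
            }
            else if (inSelect)
            {
                if (str == ",") selectColumns.Add("");
                else if (selectColumns.Count > 0)
                    selectColumns[selectColumns.Count - 1] += str;
            }
        }
    }
}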

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost the performance of a simple RSS-feed aggregator. I will use here only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects the performance of the feed aggregator. Feed aggregator Our feed aggregator works as follows: load the list of blogs, download each RSS feed, parse the feed XML, and add new posts to the database. Our feed aggregator is run by a task scheduler, say every 15 minutes. We will start our journey with a serial implementation of the feed aggregator. The second step is to use task parallelism and parallelize feed downloading and parsing. And our last step is to use data parallelism to parallelize the database operations. We will use the Stopwatch class to measure how much time it takes for the aggregator to download and insert all posts from all registered blogs. After every run we empty the posts table in the database. Serial aggregation Before doing parallel stuff let’s take a look at the serial implementation of the feed aggregator. All tasks happen one after another. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();           for (var index = 0; index < blogs.Count; index++)         {              ImportFeed(blogs[index]);         }     }       private void ImportFeed(BlogDto blog)     {         if(blog == null)             return;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                 }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)         {             SaveRssFeedItem(item, blog.Id, blog.CreatedById);         }     }       private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } The serial implementation of the feed aggregator downloads and inserts all posts in 25.46 seconds. Task parallelism Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from the MSDN page Task Parallelism (Task Parallel Library) and the Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not an easy task, we are lucky this time. Feed import and parsing is a perfect candidate for parallel tasks. We can safely parallelize the feed import because the importing tasks don’t share any resources and therefore need no synchronization. 
After getting the list of blogs we iterate through the collection and start a new TPL task for each blog feed aggregation. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index < blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {          var uri = new Uri(blog.RssUrl);          var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)          {              SaveRssFeedItem(item, blog.Id, blog.CreatedById);          }     }     private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } You should notice the first signs of the power of TPL. We made only minor changes to our code to parallelize the blog feed aggregation. On my machine this modification gives a noticeable performance boost – the time is now 17.57 seconds. Data parallelism There is one more way to parallelize activities. The previous section introduced task- or operation-based parallelism; this section introduces data-based parallelism. Per the MSDN page Data Parallelism (Task Parallel Library), data parallelism refers to a scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – the imported feed entries. As checking for a feed entry's existence and inserting it if it is missing from the database doesn’t affect other entries, the imported feed entries collection is an ideal candidate for parallelization. 
internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();                var tasks = new Task[blogs.Count];           for (var index = 0; index < blogs.Count; index++)         {             tasks[index] = new Task(ImportFeed, blogs[index]);             tasks[index].Start();         }           Task.WaitAll(tasks);     }       private void ImportFeed(object blogObject)     {         if(blogObject == null)             return;         var blog = (BlogDto)blogObject;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           feed.Channel.Items.AsParallel().ForAll(a =>         {             SaveRssFeedItem(a, blog.Id, blog.CreatedById);         });      }        private void ImportAtomFeed(BlogDto blog)      {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           feed.Entries.AsParallel().ForAll(a =>         {              SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);         });      } } We made a small change again and as a result parallelized the checking and saving of feed items. This change was data-centric, as we applied the same operation to all elements in a collection. On my machine I got better performance again. The time is now 11.22 seconds. Results Let’s visualize our measurement results (numbers are given in seconds). As we can see, with task parallelism the feed aggregation takes about 25% less time than the original. When adding data parallelism on top of task parallelism, aggregation takes about 2.3 times less time than the original. More about TPL and PLINQ Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code where you can safely go parallel, and even then you have to measure the effects to find out if the parallel code actually performs better. If you are not careful, the troubles you face later are worse than the ones you have seen before (imagine an error that occurs on average only once per 10000 runs). Parallel programming is hard to ignore, though: effective programs are able to use multiple processor cores. Using TPL you can also set the degree of parallelism so your application doesn’t use all computing cores and leaves one or more of them free for the host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In the next major version all .NET languages will have built-in support for parallel programming, and there will also be new language constructs that support it. 
Currently you can download Visual Studio Async to get some idea about what is coming. Conclusion Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items serially. With two steps we parallelized feed importing and entry inserting, gaining a 2.3x improvement in performance. Although this number is specific to my test environment, it clearly shows that parallel programming can raise the performance of your application significantly.
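As a footnote to the degree-of-parallelism remark above, a minimal sketch of capping the worker count so a core stays free for the host system; it reuses names from the post's code, and the cap value is my illustration:

// Cap PLINQ at one fewer worker than the machine has cores
// (Math.Max guards the single-core case).
int workers = Math.Max(1, Environment.ProcessorCount - 1);

feed.Channel.Items
    .AsParallel()
    .WithDegreeOfParallelism(workers)
    .ForAll(a => SaveRssFeedItem(a, blog.Id, blog.CreatedById));

// The task-based equivalent for the blog-level loop uses ParallelOptions.
var options = new ParallelOptions { MaxDegreeOfParallelism = workers };
Parallel.ForEach(blogs, options, b => ImportFeed(b));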

    Read the article

  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
Basic set up of Azure storage for local development and production. This completes the following guide from http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ with a practical example of a common scenario: uploading and displaying an image from a user. First we set up local storage, and then we configure it to work in a web role. Steps: 1. Configure the connection string locally. 2. Configure the model, controllers and razor views. 1. Set up the connection string 1.1 Right click your web role and choose “Properties”. 1.2 Click Settings. 1.3 Add setting. 1.4 Name your setting. This will be the name of the connection string. 1.5 Click the ellipsis to the right (the ellipsis appears when you select the field). 1.6 The following window appears. Select “Windows Azure storage emulator” and click OK. Now we have a connection string to use. To be able to use it we need to make sure we have the Windows Azure tools for storage. 2.1 Click Tools –> Library Package Manager –> Manage NuGet packages for solution. 2.2 This is what it looks like after it has been added. Now on to what the code should look like. 3.1 First we need a view which collects images to upload. Here Index.cshtml. 1: @model List<string> 2:  3: @{ 4: ViewBag.Title = "Index"; 5: } 6:  7: <h2>Index</h2> 8: <form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data"> 9:  10: <label for="file">Filename:</label> 11: <input type="file" name="file" id="file1" /> 12: <br /> 13: <label for="file">Filename:</label> 14: <input type="file" name="file" id="file2" /> 15: <br /> 16: <label for="file">Filename:</label> 17: <input type="file" name="file" id="file3" /> 18: <br /> 19: <label for="file">Filename:</label> 20: <input type="file" name="file" id="file4" /> 21: <br /> 22: <input type="submit" value="Submit" /> 23: 24: </form> 25:  26: @foreach (var item in Model) { 27:  28: <img src="@item" alt="Alternate text"/> 29: } 3.2 We need a controller to receive the post. Notice the “containername” string I send to the BlobHandler; I use this as a folder for the pictures of each user. If this is not a requirement you could just name the container anything in lowercase directly when creating it. 1: public ActionResult Upload(IEnumerable<HttpPostedFileBase> file) 2: { 3: BlobHandler bh = new BlobHandler("containername"); 4: bh.Upload(file); 5: var blobUris=bh.GetBlobs(); 6: 7: return RedirectToAction("Index",blobUris); 8: } 3.3 The handler model. I’ll let the comments speak for themselves. 1: public class BlobHandler 2: { 3: // Retrieve storage account from connection string. 4: CloudStorageAccount storageAccount = CloudStorageAccount.Parse( 5: CloudConfigurationManager.GetSetting("StorageConnectionString")); 6: 7: private string imageDirecoryUrl; 8: 9: /// <summary> 10: /// Receives the users Id for where the pictures are and creates 11: /// a blob storage with that name if it does not exist. 12: /// </summary> 13: /// <param name="imageDirecoryUrl"></param> 14: public BlobHandler(string imageDirecoryUrl) 15: { 16: this.imageDirecoryUrl = imageDirecoryUrl; 17: // Create the blob client. 18: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 19: 20: // Retrieve a reference to a container. 21: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 22: 23: // Create the container if it doesn't already exist. 
24: container.CreateIfNotExists(); 25: 26: //Make available to everyone 27: container.SetPermissions( 28: new BlobContainerPermissions 29: { 30: PublicAccess = BlobContainerPublicAccessType.Blob 31: }); 32: } 33: 34: public void Upload(IEnumerable<HttpPostedFileBase> file) 35: { 36: // Create the blob client. 37: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 38: 39: // Retrieve a reference to a container. 40: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 41: 42: if (file != null) 43: { 44: foreach (var f in file) 45: { 46: if (f != null) 47: { 48: CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName); 49: blockBlob.UploadFromStream(f.InputStream); 50: } 51: } 52: } 53: } 54: 55: public List<string> GetBlobs() 56: { 57: // Create the blob client. 58: CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient(); 59: 60: // Retrieve reference to a previously created container. 61: CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl); 62: 63: List<string> blobs = new List<string>(); 64: 65: // Loop over blobs within the container and output the URI to each of them 66: foreach (var blobItem in container.ListBlobs()) 67: blobs.Add(blobItem.Uri.ToString()); 68: 69: return blobs; 70: } 71: } 3.4 So, when the files have been uploaded we present them to our user on the index page. Pretty straightforward. In this example we only present the images by sending the URIs to the view. A better way would be to wrap them in a view model containing the URI, metadata, alternate text and other relevant information, but for this example this is all we need. 4. Now press F5 in your solution to try it out. You can see the storage emulator UI here: 4.1 If you get any exceptions or errors, I suggest first checking that the service is running correctly. I had problems with this that seemed related to the installation; a reboot fixed them. 5. Set up for cloud storage. To do this we need to add configuration for the cloud just as we did for local storage in step one. 5.1 We need our keys to do this. Go to the Windows Azure management portal, select the storage icon to the right and click “Manage keys”. (Image from a different blog post though.) 5.2 Do as in step 1, but replace step 1.6 with: 1.6 Choose “Manually entered credentials”. Enter your account name. 1.7 Paste your Account Key from step 5.1 and click OK. 5.3 Save, publish and run! Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve any questions. Our consultancy agency also provides services in the Nordic regions if you would like any further support.
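A sketch of the richer view model step 3.4 suggests; the property names and the direct View() render are my illustration, not part of the original sample:

// Hypothetical view model wrapping each blob URI with display metadata.
public class BlobImageViewModel
{
    public string Uri { get; set; }
    public string FileName { get; set; }
    public string AlternateText { get; set; }
}

// The Upload action from step 3.2 could then project the blob URIs into
// the model and render the view directly (the view's @model declaration
// would change to List<BlobImageViewModel> accordingly).
public ActionResult Upload(IEnumerable<HttpPostedFileBase> file)
{
    var bh = new BlobHandler("containername");
    bh.Upload(file);

    var model = bh.GetBlobs()
                  .Select(uri => new BlobImageViewModel
                  {
                      Uri = uri,
                      FileName = System.IO.Path.GetFileName(uri),
                      AlternateText = System.IO.Path.GetFileName(uri)
                  })
                  .ToList();

    return View("Index", model);
}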

    Read the article

< Previous Page | 122 123 124 125 126 127 128 129 130 131 132 133  | Next Page >